On July 19, 2024, CrowdStrike pushed out an update to its Falcon security software. That would normally have been a routine event, but this particular update had a bug that brought down an estimated 8.5 million Windows systems worldwide. Many of these systems displayed the “blue screen of death,” meaning they could not be recovered remotely. IT teams in affected organizations scrambled to find workarounds, update systems and recover devices, not knowing the root cause of the problem.
Because many critical industries rely on Windows systems, the outage had a widespread impact. Thousands of flights were canceled or delayed. Some government services, including 911 call centers, were affected. Healthcare providers, mass transit systems and many other sectors suffered downtime. In all, experts estimate the outage caused $5.4 billion in losses globally.
The CrowdStrike outage may well be a “black swan” event. However, it points to a growing problem among DevOps teams. Many organizations are pushing developers to release code faster, often using AI-enabled tools. However, DevSecOps teams may not be prepared to perform testing at equal velocity.
Testing Processes Aren’t Keeping Up with AI-Enabled Development
Software development has been accelerating for years, and AI has put it into overdrive. In Black Duck’s Global State of DevSecOps 2024 report, more than 90 percent of IT professionals surveyed said their teams are using AI-assisted development in some capacity. However, just 24 percent felt very confident about their policies and processes for testing AI-generated code.
The report notes that AI-generated code has disrupted software development in much the same way as open source software adoption. Both are difficult to manage and can introduce bugs and security flaws. The problem is exacerbated by the use of “shadow” AI tools that bypass company policies and procedures.
Testing already lags behind software development processes, and AI is exacerbating the problem. The DORA 2024 Accelerate State of DevOps report found that while AI is improving development, it’s not improving software delivery. The researchers believe that a lack of robust testing processes plays a major role.
The Role of Automation and Continuous Testing
Few security and operations teams can keep up with the pace of AI-generated code using manual testing processes. They need to integrate automated testing tools into the CI/CD pipeline in a continuous testing model.
With continuous testing, predefined test scripts run at every stage of development. When source code passes an automated test, it moves immediately to the next stage of the software development lifecycle. If it fails a test, developers are notified so they can address issues in their code before those issues affect other teams or stages. Automated testing dramatically increases efficiency and allows errors to be discovered early in the development process. It should include regression testing to ensure that updates don’t break what’s already working.
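The gate-and-notify flow described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular CI tool’s API: the `run_stage_gate` function, the sample `parse_version` code under test, and the test names are all hypothetical, standing in for the predefined test scripts a real pipeline would run at each stage.

```python
# Minimal sketch of a continuous-testing stage gate (illustrative only).
# Each stage runs its predefined tests; code is promoted to the next
# stage only if all tests pass, otherwise the failing test names are
# collected so the owning developers can be notified.
from typing import Callable, Dict, List, Tuple


def run_stage_gate(tests: Dict[str, Callable[[], None]]) -> Tuple[bool, List[str]]:
    """Run every predefined test; return (promoted, failed_test_names)."""
    failed = []
    for name, test in tests.items():
        try:
            test()  # a test signals failure by raising AssertionError
        except AssertionError:
            failed.append(name)
    return (len(failed) == 0, failed)


# Example code under test (hypothetical).
def parse_version(s: str) -> Tuple[int, int]:
    major, minor = s.split(".")
    return int(major), int(minor)


def test_parse_version() -> None:
    # Unit test for new functionality.
    assert parse_version("2.7") == (2, 7)


def test_regression_leading_zero() -> None:
    # Regression guard: "1.03" parsed as (1, 3) in an earlier release,
    # and an update must not break that existing behavior.
    assert parse_version("1.03") == (1, 3)


promoted, failures = run_stage_gate({
    "unit": test_parse_version,
    "regression": test_regression_leading_zero,
})
print(promoted, failures)  # prints: True []
```

In practice this role is filled by a test runner such as pytest wired into the CI/CD pipeline, with the pipeline promoting or blocking the build based on the runner’s exit code.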
However, automated testing doesn’t let humans off the hook. Organizations should strike a balance between automated and manual testing, maximizing each of their strengths. Automated tools are ideal for streamlining repetitive tasks, freeing up humans for more complex or nuanced issues.
The Importance of a Measured Approach
Organizations should also set a pace for software deployment that minimizes risk. Some applications are not suited to continuous software delivery, and not all DevOps and testing processes are capable of handling it. Additionally, the DORA researchers found that optimizing software delivery requires not only robust testing but manageable volumes of changes.
Technologent has teams dedicated to both DevSecOps and automation. We have helped customers select the right tools and optimize their DevSecOps processes. Let us help you implement automated testing tools and define testing policies and procedures that align with your software delivery needs and objectives.

March 16, 2025