In a previous post, we discussed the challenges and benefits of the DevOps model. By tearing down the silos between traditional software development and IT operations roles, DevOps aims to accelerate the deployment of applications and services while enhancing performance and security. Key elements are continuous integration (CI) and continuous delivery (CD), in which new code and changes are frequently released for testing and staging to improve quality and minimize problems.
With CI, developers regularly merge their code into a shared branch to reduce the risk of conflicts with other development efforts. With CD, code is automatically tested for bugs and prepared for deployment by the operations team. A related “CD,” continuous deployment, takes the concept a step further by automatically deploying code into production.
The CI/CD workflow is referred to as a “pipeline” that incorporates a high degree of automation into software development. It reduces reliance on manual processes that create bottlenecks in testing and deployment and introduce errors.
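To make the pipeline idea concrete, here is a minimal sketch of those stages as a plain Python script. The stage names and commands are hypothetical stand-ins for what a real automation tool such as Jenkins would orchestrate, but the logic is the same: each stage runs automatically, and a failure at any stage stops bad code from advancing.

```python
# A minimal sketch of a CI/CD pipeline's stages as a plain Python script.
# The stage names and commands below are hypothetical placeholders.
import subprocess
import sys

# Each stage is a name plus the command that implements it.
STAGES = [
    ("test",   ["pytest", "--quiet"]),                         # automated bug testing (CI)
    ("build",  ["docker", "build", "-t", "app:latest", "."]),  # package for release (CD)
    ("deploy", ["kubectl", "rollout", "restart", "deployment/app"]),  # continuous deployment
]

def run_pipeline() -> None:
    for name, command in STAGES:
        print(f"--- stage: {name} ---")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Halting on failure is the point of the pipeline:
            # broken code never advances to the next stage.
            sys.exit(f"stage '{name}' failed; halting pipeline")
    print("pipeline complete: changes are in production")

if __name__ == "__main__":
    run_pipeline()
```

In practice, these stages live in a pipeline definition managed by the automation tool rather than a hand-rolled script, but the sequencing and fail-fast behavior shown here are what replace the manual hand-offs between teams.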
The CI/CD pipeline is not a new concept. For example, Jenkins — a leading open source automation tool — traces its roots to 2004, when development began on its predecessor, Hudson. However, the CI/CD approach is ideally suited to today’s cloud-built applications. After all, the cloud makes it easy to provision the IT resources needed to support an application. It only makes sense to extend that simplicity to the deployment process.
Furthermore, the cloud is a complex, distributed system that can take advantage of applications built from multiple, loosely coupled services that are deployed independently. Each of these microservices has its own software development stack, enabling developers to use their preferred tools. Components in a microservices architecture can be updated easily and scaled independently based on business requirements.
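As an illustration, here is a sketch of one such microservice using only Python’s standard library. The service name, endpoint, and port are hypothetical; the point is that each service is a small, self-contained process that can be versioned, deployed, and scaled on its own.

```python
# A minimal sketch of a single microservice using only the Python
# standard library. The service name, endpoint, and port are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class InventoryHandler(BaseHTTPRequestHandler):
    """One narrowly scoped service: it only answers inventory queries."""

    def do_GET(self):
        if self.path == "/health":
            # A health endpoint lets the platform monitor, restart, or
            # scale this service independently of the rest of the system.
            body = json.dumps({"status": "ok", "service": "inventory"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Other services (orders, billing, etc.) would run as separate
    # processes behind their own endpoints, each with its own pipeline.
    HTTPServer(("0.0.0.0", 8080), InventoryHandler).serve_forever()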
A unified CI/CD pipeline helps ensure that all of the components work together as various development teams release code at their own pace. It also minimizes the risk that bad code will go into production in this decentralized development environment.
Google was an early adopter of the DevOps model, but found that the concept left gaps in real-world development processes. The company pioneered site reliability engineering (SRE) as a means of implementing DevOps and achieving its objectives.
Site reliability engineers are involved in both development and operational tasks, although in some SRE models they are less involved in code changes and instead serve as consultants to development teams. In either case, site reliability engineers are responsible for solving problems that would traditionally have been passed back and forth between development and operations. This minimizes finger-pointing and enhances collaboration between the two teams.
Site reliability engineers also facilitate collaboration and information sharing between development and operations and provide oversight of monitoring, change management and related functions. Additionally, the SRE model encourages the adoption of consistent automation tools to minimize compatibility and integration issues.
The DevOps Institute’s 2020 Upskilling: Enterprise DevOps Skills report finds that adoption of DevOps and SRE is on the rise. Almost three-quarters (74 percent) of organizations are using DevOps frameworks, up from 66 percent in 2019. More than a quarter (28 percent) are using SRE, up from just 10 percent in 2019. However, 65 percent of organizations find it somewhat to extremely difficult to recruit personnel with the necessary skill sets to fully implement these models.
Technologent is here to help fill those gaps. Our team can evaluate your current processes and toolsets to identify opportunities for consolidation, rationalization and enhancement. We can then help you evaluate solutions that can streamline your CI/CD pipeline. Contact us in your region to discuss your particular needs and objectives.