Are you having trouble figuring out what constitutes a reasonable first step on the DevOps journey? Are you wondering whether continuous integration (CI) is enough for your needs, or whether you need to implement continuous delivery (CD)? Or perhaps you’re trying to figure out why CD seems to mean different things when different vendors talk about it, and why the acronym is sometimes used to refer to continuous deployment instead of continuous delivery.
If so, this blog post is for you. Keep reading for a comprehensive explanation of what CI and CD really mean, and what the benefits of each are.
Welcome to the age of DevOps, where continuously building, testing, and deploying services and applications with little to no human interaction is the ultimate goal, and it is achievable. The journey starts with continuous integration (CI), then adds continuous delivery (CD), and finally ends with continuous deployment. Let’s detail what each level entails, and all the pieces of the puzzle required to get there.
Everything begins when a developer commits code to a version-control repository. The current leaders in this space are Git-based services like GitHub, GitLab, Bitbucket, and Visual Studio Team Services, though many more established companies are still using Subversion or Perforce. The code commit triggers a webhook (an automatic notification via an HTTP call that carries a specific application key), which triggers the next step.
Next, the CI/CD automation platform receives the notification and kicks off the build pipeline associated with the key delivered by the webhook. CI/CD automation platforms are also referred to as build servers, which is how they started out, although they are much more capable now. Commonly used products in this space are Wercker, Travis-CI, CircleCI, and Jenkins.
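To make the hand-off concrete, here is a minimal sketch in Python of how a platform might map the application key carried by a webhook payload to a build pipeline. The key names, payload shape, and `pipeline_for` function are illustrative assumptions, not any vendor’s actual API.

```python
import json

# Hypothetical mapping from webhook application keys to build pipelines;
# real platforms configure this association in their own UI or config files.
PIPELINES = {
    "app-key-web": "web-frontend-build",
    "app-key-api": "api-service-build",
}

def pipeline_for(webhook_body: bytes) -> str:
    """Parse a webhook payload and return the name of the pipeline to trigger."""
    payload = json.loads(webhook_body)
    key = payload["application_key"]  # the key sent along with the HTTP call
    if key not in PIPELINES:
        raise KeyError(f"no pipeline registered for key {key!r}")
    return PIPELINES[key]

# Example: a push notification arriving from the code repository.
body = json.dumps({"application_key": "app-key-api", "commit": "abc123"}).encode()
print(pipeline_for(body))  # api-service-build
```

In practice the platform would also verify the webhook’s signature before trusting the payload, but the core idea is simply key-to-pipeline dispatch.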
The build pipeline has three essential stages, and the steps are often performed on a dedicated, short-lived virtual machine (often called a runner) that is created solely for this execution and destroyed afterward to ensure consistency.
- The first stage is getting the proper fork of the source code from the code repository, then walking through the build process defined in the CI file in the code base. As part of the build, all dependencies are retrieved and incorporated into the runtime.
- The second stage is running any unit, functional, security, or even integration tests that are defined within the code base and referenced in the CI file. Tests can declare their own unique dependencies, such as a MySQL database, which are made available so the tests can execute successfully but are not included in the final build.
- The third stage is packaging what was built and tested, minus the test-specific dependencies, and copying the resulting artifact to an artifact repository. An artifact repository can be anything from a plain file system to JFrog Artifactory or Docker Hub.
Once the code has been built, tested, packaged, and stored, the runner is stopped and cleaned up, and the platform waits for the next code revision to do it all again.
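The three stages above can be sketched end to end. This is a toy model under stated assumptions (an in-memory dictionary standing in for the artifact repository, source files standing in for a real checkout and compile), not a real build server:

```python
import shutil
import tempfile
from pathlib import Path

# A minimal sketch of the three pipeline stages; the function name and the
# in-memory "artifact repository" are illustrative, not a platform API.
def run_pipeline(source_files: dict, artifact_repo: dict) -> str:
    workspace = Path(tempfile.mkdtemp(prefix="runner-"))  # a fresh "runner"
    try:
        # Stage 1: fetch the source and build it (here: just materialize files).
        for name, content in source_files.items():
            (workspace / name).write_text(content)
        # Stage 2: run the tests defined in the code base.
        assert "app.py" in source_files, "build is missing its entry point"
        # Stage 3: package the result, minus test-only files, and store it.
        artifact = "".join(content for name, content in source_files.items()
                           if not name.startswith("test_"))
        version = f"build-{len(artifact_repo) + 1}"
        artifact_repo[version] = artifact
        return version
    finally:
        shutil.rmtree(workspace)  # destroy the runner to ensure consistency

repo = {}
print(run_pipeline({"app.py": "print('hi')", "test_app.py": "..."}, repo))
```

Note how the workspace is created fresh and destroyed in the `finally` block regardless of outcome; that throwaway lifecycle is what makes runner-based builds reproducible.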
Continuous delivery involves the steps required to deploy to non-production environments. The diagram above lists two common environments, but it isn’t rare to have anywhere from one to four pre-production environments, and some companies have many more distinct testing environments depending on contractual requirements and business needs around testing integrations with legacy solutions. With continuous delivery, the number really doesn’t matter: since the deployment is scripted, additional environments simply consume more compute resources.
One of the two most important pieces of continuous delivery is defining which criteria must be met before a build proceeds to the next environment, whether that means promoting to all pre-production environments at once, promoting only upon successful manual validation, or even promoting on a schedule, such as UAT getting a new build every Monday. The other important piece is that running the automated deployment all the time takes the risk out of deploying to production, because production uses the same tested scripts, run the same proven way.
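A promotion gate combining the criteria just described, automatic flow, manual validation, and time-based promotion, might look like the following sketch. The environment names and rules are hypothetical assumptions for illustration:

```python
import datetime

# Illustrative promotion gates; names and policies are assumptions,
# not any vendor's configuration format.
def may_promote(target_env: str, manually_approved: bool,
                now: datetime.datetime) -> bool:
    """Decide whether a successfully tested build may move to target_env."""
    if target_env == "qa":
        return True                # every green build flows straight to QA
    if target_env == "uat":
        return now.weekday() == 0  # time-based gate: Mondays only
    if target_env == "staging":
        return manually_approved   # requires successful manual validation
    return False                   # unknown environments are never promoted

monday = datetime.datetime(2018, 4, 2)  # a Monday
print(may_promote("uat", False, monday))      # True
print(may_promote("staging", False, monday))  # False
```

Because the gate is just data and code, adding a fifth or sixth environment is a one-line policy change rather than a new manual process.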
Last but not least is continuous deployment, which should not be confused with continuous delivery.
Put simply, continuous deployment is continuous delivery extended to automatically push the deployment to production once some predetermined criteria are met. You could think of continuous deployment as part of the continuous delivery process, but be careful to recognize the differences between a staging environment and a production environment. If you treat continuous delivery and continuous deployment as synonymous, you run the risk of treating your pre-production and production environments the same, which is not ideal.
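The difference boils down to one automated decision: production gets the build as soon as every predetermined criterion passes, with no human in the loop. The check names below are assumptions for illustration:

```python
# Sketch of a continuous-deployment gate: production is reached automatically
# once all predetermined checks pass. The check names are illustrative.
def should_deploy_to_production(checks: dict) -> bool:
    required = ("unit_tests", "integration_tests", "staging_smoke_test")
    return all(checks.get(name) is True for name in required)

checks = {"unit_tests": True, "integration_tests": True,
          "staging_smoke_test": True}
print(should_deploy_to_production(checks))  # True
```

Under continuous delivery, this same function’s result would merely notify a human that the build is releasable; under continuous deployment, a `True` triggers the production rollout itself.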
Conclusion: How to Get Started with CI/CD
While continuous deployment may seem ideal, many organizations never reach this level for their core systems because they want stricter controls over when code goes to production. They want to be able to do things like coordinate feature releases between products, or ensure that they only touch production on Sunday mornings, during off-peak hours.
If you haven’t started down the CI/CD road yet, take your time. Get continuous integration right first by making sure you have adequate tests available in your application. Once CI is fully automated and in place, you’ll be in a position to implement CD, and to show the value of CD to your bosses.
About the Author
Vince Power is a Solution Architect who has a focus on cloud adoption and technology implementations using open source-based technologies. He has extensive experience with core computing and networking (IaaS), identity and access management (IAM), application platforms (PaaS), and continuous delivery.
We’re hiring! Check out the careers page for open positions in Amsterdam, London and San Francisco.
As usual, if you want to stay in the loop, follow us on Twitter @wercker or hop on our public Slack channel. If it’s your first time using Wercker, be sure to tweet out your #greenbuilds, and we’ll send you some swag!