In early 2001, a group of software professionals gathered in Utah to discuss how the development process could be improved. Their goal was to break free of document-driven development and embrace a methodology that would produce more stable software with a shorter turnaround time. From this meeting the Agile Manifesto emerged.
The first principle established in the Agile Manifesto states: “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.” Whether your organization produces commercial software or manufactures cars, a short cycle time (the time between identifying a need and releasing the software that addresses it) is vital to profitability. Software only makes money when it is in production.
Over the past two years, the idea of continuous delivery has become a practice in its own right. Through visibility and automation, continuous delivery strives to create a reliable release process that can be repeated over and over again. That reliable, repeatable process improves software quality and reduces cycle time.
Deployment Pipeline
The first step in a continuous delivery environment is to store all artifacts in a version control system. This seems obvious for application source code, but we need to take it one step further: we also want to store server configurations, OS patches, third-party packages, database scripts; in short, anything that goes into the final production environment our application runs in.
Once we have a reliable version control system in place, we can implement continuous integration. Continuous integration is a concept involving both human and computer processes. From a human perspective, developers and system administrators must learn to check in changes to the system as soon as they occur. Once checked in, the computer processes kick in. Through continuous integration software, a build of the new system should be kicked off immediately and deployed to an integration test environment. The best practice is for each test environment to mirror your production environment.
In addition to compiling the updated source code, the build process should also run a series of unit and acceptance tests. If the build completes with no failures, the developer or system administrator can move on to the next task. If any test fails, the individual whose change caused the breakage is responsible for finding the cause and resolving it.
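To make the automated part of this concrete, here is a minimal sketch of the kind of build-and-test gate a continuous integration server might run after each check-in (call it ci_gate.py). It is a hypothetical Python script; the build commands, project layout, and test targets are placeholders, not a prescription for any particular CI tool.

```python
#!/usr/bin/env python3
"""Minimal CI gate: compile, run tests, and fail loudly on any breakage.

Illustrative sketch only; the commands below stand in for whatever your
project actually uses to build and test.
"""
import subprocess
import sys

# Each step is a shell command the CI server runs after a check-in.
STEPS = [
    ("compile", "make build"),                    # hypothetical build command
    ("unit tests", "make test-unit"),             # fast, developer-facing tests
    ("acceptance tests", "make test-acceptance"), # slower, behavior-level tests
]


def run_pipeline() -> int:
    for name, command in STEPS:
        print(f"--- {name}: {command}")
        result = subprocess.run(command, shell=True)
        if result.returncode != 0:
            # Stop at the first failure so the person who broke the build
            # gets fast, specific feedback about what to fix.
            print(f"FAILED at step '{name}'; fix this before moving on.")
            return result.returncode
    print("Build green: artifact is ready for the integration environment.")
    return 0


if __name__ == "__main__":
    sys.exit(run_pipeline())
```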
When development environments are set up correctly, build failures should not occur in the integration environment. Prior to checking in changes, individuals should ensure their local environment is up to date and builds successfully. If it does, then in theory the integration environment should also build successfully once they check in.
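One lightweight way to build that habit is to run the same gate locally before pushing. The sketch below is an assumption-laden example: it presumes a Git workflow and reuses the hypothetical ci_gate.py script from above.

```python
#!/usr/bin/env python3
"""Pre-push check: update the working copy, then run the same gate the
CI server will run. The git workflow and ci_gate.py name are assumptions."""
import subprocess
import sys


def preflight() -> int:
    # Pull the latest changes so the local build reflects everyone's work.
    if subprocess.run(["git", "pull", "--rebase"]).returncode != 0:
        return 1
    # Run the same build-and-test gate used in the integration environment.
    return subprocess.run([sys.executable, "ci_gate.py"]).returncode


if __name__ == "__main__":
    sys.exit(preflight())
```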
After our new feature or system patch has passed all automated tests, the build can be pushed to QA and staging environments where the tests that cannot be automated are carried out. These environments must also match production. Like the earlier build and push to the integration environment, the push to these environments must happen only through the push of a button or a system trigger of some sort. To create a repeatable process, we do not want to move the release into a new environment by hand; with a manual process it is too easy to forget a step or fail to check in a configuration change.
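One way to make that button-push concrete is a small promotion script (call it promote.py) that takes an already-built artifact and a target environment. This is a hedged sketch: the host names, paths, and copy/restart commands are invented, and a real pipeline would swap in its own deployment mechanism.

```python
#!/usr/bin/env python3
"""Push-button promotion: deploy an existing artifact to a named environment.

Illustrative only; hosts, paths, and the scp/ssh commands are placeholders
for whatever your infrastructure actually uses.
"""
import argparse
import subprocess
import sys

# Hypothetical hosts per environment; in a real pipeline these would live
# in version control alongside the rest of the system's configuration.
ENVIRONMENTS = {
    "qa": ["qa-app-01.example.com"],
    "staging": ["stg-app-01.example.com"],
    "production": ["prod-app-01.example.com", "prod-app-02.example.com"],
}


def promote(artifact: str, environment: str) -> int:
    for host in ENVIRONMENTS[environment]:
        # Same artifact in every environment: only the target changes.
        copy = subprocess.run(["scp", artifact, f"{host}:/opt/app/release.tar.gz"])
        if copy.returncode != 0:
            print(f"Copy to {host} failed; stopping the promotion.")
            return copy.returncode
        restart = subprocess.run(["ssh", host, "sudo systemctl restart app"])
        if restart.returncode != 0:
            print(f"Restart on {host} failed; stopping the promotion.")
            return restart.returncode
    print(f"{artifact} promoted to {environment}.")
    return 0


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Promote a build to an environment")
    parser.add_argument("artifact")
    parser.add_argument("environment", choices=sorted(ENVIRONMENTS))
    args = parser.parse_args()
    sys.exit(promote(args.artifact, args.environment))
```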
Finally, after all tests, automated and manual, have been completed, we are ready to release to production. Once again, this should be an automated process.
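With a script like the hypothetical promote.py above, the production release becomes one more invocation of the same repeatable process, for example `python promote.py build-1042.tar.gz production` (the artifact name is invented for illustration), ideally wired to a release button or an approval step rather than typed by hand.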
Build it Once
Most organizations are familiar with tagging their code for a release and then building a new set of binaries for each environment. In continuous delivery, the goal is to use the same binaries through the entire deployment pipeline. This creates an interesting situation for organizations accustomed to baking a different configuration file into each environment's build.
By using the same binaries through the entire pipeline, you can be sure that the build itself is not responsible for any environment-specific breakage. To accomplish this, developers have a few options at their disposal. The most prevalent is the use of environment variables; other options include storing configuration in an RDBMS, LDAP, or a web service.
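As a hedged illustration of the environment-variable approach, the same artifact can read its environment-specific settings at startup. The variable names below (APP_ENV, APP_DATABASE_URL, APP_FEATURES) are invented for this sketch, not a standard.

```python
#!/usr/bin/env python3
"""Read environment-specific settings at startup so one binary serves
every environment. Variable names here are illustrative, not standard."""
import os


class Config:
    def __init__(self) -> None:
        # The deployment pipeline (or the target server) sets these values;
        # the artifact itself never changes between environments.
        self.environment = os.environ.get("APP_ENV", "development")
        self.database_url = os.environ["APP_DATABASE_URL"]  # fail fast if missing
        self.feature_flags = [
            flag for flag in os.environ.get("APP_FEATURES", "").split(",") if flag
        ]


if __name__ == "__main__":
    config = Config()
    print(f"Starting in {config.environment} against {config.database_url}")
```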
What is our Goal?
Frequently when you hear about continuous delivery, the topic involves organizations like etsy.com or flickr.com deploying to production multiple times per day. Flickr even makes it easy for you to see how many deployments have occurred in the last week and who triggered them.
Our goal, on the other hand, is to create a process that illuminates problems in our code, systems, and configurations before they reach production. We do this by automating the entire process. By bringing errors to light early, we are able to resolve them before they impact users and our bottom line. An initial investment is required to put the system in motion, but like test-driven development, it pays for itself in the long run.
Your organization may not deploy multiple times per day, and that is fine. Start small: move from six-month release cycles to two-month cycles, then drop to monthly. Before long you will have a process that gets your software generating revenue faster. As with the rest of the agile philosophy, recognize incremental improvements.
If you are interested in learning more about continuous delivery, check back to this blog for in-depth posts and deployment white-papers on the concepts discussed above.