Oct 05, 2015

Microservice Delivery using Jenkins and Docker

Bradley Mickunas

Intro

Do you enjoy fast updates for features or fixes in the software applications you use daily? How long does it take for a newly developed feature to reach your customers? How many people does it take to move a feature from the developer’s computer to the production server? Containers and microservices are hot topics these days, and a wealth of information is being produced about them. You may already be investigating these topics and tools, wondering whether you could improve the process of delivering updates for your application. I recently wrapped up a client project where our team assumed the DevOps role, implementing the deployment pipeline for a microservices application using Docker containers. I enjoyed the challenge and lived to blog about it.

Deployment Pipelines

Deployment pipelines break the application build process into several stages. Early stages do the heavy lifting of compilation and produce a versioned artifact for application deployment. The artifact then passes through automated testing to verify functionality, known quirks, and past fixes. After automated testing, the artifact can be deployed to begin the manual processes of quality assurance and user testing. Once the build artifact is proven, approval is granted and the artifact is deployed to production.

Our process used Jenkins jobs triggered by a code commit in GitHub and continued automatically through several stages to a confirmation page for production deployment. The stages in between ran Node.js unit tests against MongoDB, built the artifact as a Docker image, executed Mocha integration tests, and deployed the Docker containers to the development environment via Ansible playbooks. Once the application was deployed to the development environment, the Jenkins Workflow plugin presented a confirmation button allowing the Jenkins user to approve deployment to the production environment. If any stage failed, JIRA and Slack notified the development team, and the team treated the issue as a high-priority defect.

Figure 1 – Deployment Pipeline Stages

Figure 2 – Deployment Pipeline Tools (Replacing each stage with the appropriate tool)
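
To make the flow concrete, here is a rough sketch of the stages as a single Bash sequence. The script names are invented for illustration; in the real pipeline, each stage ran as its own Jenkins job step rather than one script.

```bash
#!/usr/bin/env bash
# Hypothetical outline of the pipeline stages; the script names are invented.
# BUILD_NUMBER is a standard Jenkins environment variable.
set -euo pipefail

./unit-test-and-build.sh                 # npm install, local MongoDB, unit tests, image build
./integration-test.sh "${BUILD_NUMBER}"  # start containers, run Mocha suite, tag and push
./deploy.sh development                  # run the service's Ansible playbook

# Deploying to production waited on a human: the Jenkins Workflow plugin
# presented a confirmation button before the production deploy could run.
```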

Deployment Strategy

A deployment pipeline for microservices needs a significant amount of planning for each stage of the process. Consider the following questions:

  • By what date are your key stakeholders expecting applications to be in development, stage, or production environments?

  • How many services will you have at the end of your project?

  • Are there any tests written for the applications and services?

  • What frameworks will the various applications use to execute tests?

Each new stage will affect how developers work together, both within and across teams, and how they demonstrate their work to stakeholders. This is no small thing, especially since microservices can increase the number of teams involved in development. We served five different teams, helping them adapt to the changes in the deployment pipeline. Updating the deployment pipeline alongside application development presents its own fires and hurdles, so plan time for helping developers along the way. For example, we learned the need to keep the Docker image operating system consistent from build to run-time: the base image of the Jenkins container used Debian, while the application container's base image used Ubuntu. The incompatible library versions caused trouble when the application container started, and it took the application developer and us some time to troubleshoot and find a solution.
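
A cheap guard against that class of problem is to compare the OS release of the build image and the application image before shipping anything. Here is a minimal sketch, with hypothetical image names:

```bash
#!/usr/bin/env bash
# Sketch: fail fast when the build-time and run-time images disagree on the
# base operating system. The image names here are hypothetical.
set -euo pipefail

build_os=$(docker run --rm example/jenkins-slave grep ^PRETTY_NAME /etc/os-release)
app_os=$(docker run --rm example/my-service grep ^PRETTY_NAME /etc/os-release)

if [ "${build_os}" != "${app_os}" ]; then
  echo "Base OS mismatch: build uses ${build_os}, app uses ${app_os}" >&2
  exit 1
fi
```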

Unit Test and Build

The process began with developer commits following a feature-branch model. Once developers completed a feature, they merged it into the master branch, triggering a build in Jenkins. All of our client's projects lived in GitHub, so Jenkins cloned the latest code and began executing commands to install application dependencies. After the install, a local MongoDB server was started for the unit tests. If the install and unit tests succeeded, a Bash script built the Docker image using the Docker Remote API. Eventually, concurrent builds sharing the Node.js and MongoDB processes caused conflicts, which we resolved by executing the builds inside slave Docker containers.
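
Condensed into a sketch, that stage looked roughly like the following. The image name and paths are illustrative, and the Remote API call assumes a curl build with Unix-socket support:

```bash
#!/usr/bin/env bash
# Sketch of the unit-test-and-build stage; image name and paths are illustrative.
set -euo pipefail

npm install

# Start a throwaway MongoDB for the unit tests; shut it down when we exit.
mkdir -p ./tmp-db
mongod --dbpath ./tmp-db --logpath ./mongod.log --fork
trap 'mongod --dbpath ./tmp-db --shutdown' EXIT
npm test

# Build the image through the Docker Remote API on the local Unix socket
# (roughly equivalent to `docker build -t my-service .`).
tar -cf - . | curl --silent --fail \
  --unix-socket /var/run/docker.sock \
  -X POST "http://localhost/build?t=my-service" \
  -H "Content-Type: application/tar" \
  --data-binary @-
```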

Integration Tests and Quality Analysis

Integration tests ran prior to deployment using the locally built Docker image. The script pulled the latest deployed images in the development environment from Quay.io (a private Docker image repository), started the containers, and executed Mocha (a JavaScript testing framework) tests against the running containers. If all tests passed, the Docker image was tagged with the build number and pushed to the image repository by a Bash script using the Docker Remote API. The initial group of tests we created finished in a few seconds, but parallel testing or testing dependency subsets would become necessary if completion time grew cumbersome. For example, if the tests were taking an hour, you could consider parallel testing: starting multiple groups of the locally built image with its dependencies and running a subset of tests against each group of containers.
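
A sketch of that stage is below. The service and repository names are invented, and the tag-and-push uses the Docker CLI for brevity where our script spoke to the Remote API directly:

```bash
#!/usr/bin/env bash
# Sketch of the integration-test stage; service, dependency, and repository
# names are illustrative.
set -euo pipefail

BUILD_NUMBER="${1:?usage: integration-test.sh <build-number>}"

# Always clean up the test containers, even on failure.
trap 'docker rm -f my-service my-dependency >/dev/null 2>&1 || true' EXIT

# Pull the dependency image currently deployed in development, then start
# everything locally.
docker pull quay.io/example/my-dependency:latest
docker run -d --name my-dependency quay.io/example/my-dependency:latest
docker run -d --name my-service --link my-dependency:my-dependency my-service

# Run the Mocha suite against the running containers.
./node_modules/.bin/mocha test/integration

# On success, tag the image with the build number and push it to Quay.io.
docker tag my-service "quay.io/example/my-service:${BUILD_NUMBER}"
docker push "quay.io/example/my-service:${BUILD_NUMBER}"
```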

We did not finalize the quality-analysis piece of this stage due to project constraints. Since the applications were all Node.js apps, we would have used JSLint and placed the output into a report. If the number of reported issues exceeded a specified threshold, the build would fail. The key to quality analysis is finding the correct threshold, so the reports provide value rather than noise.
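
Had we finished it, the gate might have looked something like this sketch, where the threshold value and the way issues are counted (one line of lint output per complaint) are assumptions:

```bash
#!/usr/bin/env bash
# Sketch of a lint-quality gate; the threshold is an invented example value,
# and the counting assumes the jslint CLI prints one line per complaint.
set -u

MAX_ISSUES=25

issues=$(find src -name '*.js' -print0 \
  | xargs -0 ./node_modules/.bin/jslint \
  | wc -l)

echo "JSLint reported ${issues} lines of complaints (threshold: ${MAX_ISSUES})"
if [ "${issues}" -gt "${MAX_ISSUES}" ]; then
  exit 1
fi
```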

Version

All development teams used semantic versioning for their microservices and were responsible for bumping the version appropriately with each commit. We also tested auto-versioning of the application's static assets using Gulp, which updated the package.json file once the assets were ready for deployment: each commit bumped the minor version, while the developer remained responsible for manually updating the major version. Unfortunately, this failed because each commit from Gulp triggered another build, which produced another commit, continually triggering more and more builds. With more time, we could have triggered builds conditionally based on the commit author, for example by creating our own GitHub WebHook handler in a separate service that triggers the build only when the commit author is someone other than the GitHub robot user; at the least, that would be a starting point.
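
A simpler first step than a separate WebHook service would be a guard inside the build itself, sketched below with a hypothetical robot username:

```bash
#!/usr/bin/env bash
# Sketch: bail out early when the triggering commit came from the
# version-bump robot. "ci-robot" is a hypothetical GitHub username.
set -euo pipefail

author=$(git log -1 --pretty=format:'%an')

if [ "${author}" = "ci-robot" ]; then
  echo "Last commit was by ${author}; skipping build to avoid a trigger loop."
  exit 0
fi

# ...continue with the normal unit-test-and-build steps...
```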

Deployment

Once the built image was pushed to Quay.io, a deployment Bash script pulled the latest Ansible playbooks from a separate repository and executed the appropriate one (Ansible is an agentless IT automation tool that works over OpenSSH); a sketch of that wrapper script appears after Figure 3. Developers wrote their own application playbooks based on a template playbook I created and on steps documented in Confluence. The example playbook below has a few key steps related to deploying the Docker container:

  1. Set the owner and permissions for the configuration path on the host.

  2. Copy over the necessary keys.

  3. Create the systemd unit file for starting and restarting the service on the host.

  4. Pull the Docker image from Quay.io.

  5. Start the container.

Figure 3 – my-docker-playbook.yml
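
Here is a minimal sketch of the deployment wrapper script, assuming an invented playbook repository URL and inventory layout:

```bash
#!/usr/bin/env bash
# Sketch of the deployment wrapper; the playbook repository URL, inventory
# layout, and variable names are illustrative.
set -euo pipefail

SERVICE="${1:?usage: deploy.sh <service> <image-tag> [environment]}"
IMAGE_TAG="${2:?usage: deploy.sh <service> <image-tag> [environment]}"
TARGET_ENV="${3:-development}"

# Pull the latest playbooks from their own repository.
rm -rf playbooks
git clone git@github.com:example/deploy-playbooks.git playbooks

# Run the service's playbook against the chosen inventory, passing the image
# tag so the host pulls the right build from Quay.io.
ansible-playbook \
  -i "playbooks/inventories/${TARGET_ENV}" \
  "playbooks/${SERVICE}.yml" \
  --extra-vars "image_tag=${IMAGE_TAG}"
```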

Once the application deployed successfully to the development environment, the Jenkins job waited for confirmation to deploy to production, using the Jenkins Workflow plugin. The user could either confirm the build was ready for production or abort the job, leaving the latest image deployed in the development environment. In the future, a step could be added here for toggling features of the service, in case one dependent service finishes its portion of a feature before another. A staging environment was also in this pipeline's future; however, it made more sense to spend resources on integration tests before creating an environment identical to production.

One difference between deployment to development and to production was how we managed the production environment's passwords and secrets. The secrets needed to live in one place, ignored by code repositories and backed up consistently. Therefore, we configured the deployer container with a separate drive mounted as a volume, which gave the container, and the playbooks it ran, read access to the secrets.
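
Starting the deployer container might have looked like this sketch, with an illustrative host path and image name:

```bash
# Sketch: start the deployer container with the secrets drive mounted as a
# read-only volume. The host path and image name are illustrative.
docker run -d \
  --name deployer \
  -v /mnt/secrets:/secrets:ro \
  example/deployer
```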

Failures

If a build failed at any stage in the pipeline, two things occurred. First was the creation of a JIRA ticket via the JIRA REST API. The ticket was marked as a defect and assigned to the commit author, with the URL of the failed build included in the ticket. If the author's email address did not exist in JIRA, the defect was assigned to a default user for the application. Once this script was complete and implemented across all Jenkins jobs, these tickets were treated as high priority and resolved as quickly as possible. In addition to the ticket, the Jenkins jobs were configured with the Slack plugin, sending failure and success messages to channels the developers created.
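
The Slack side was handled by the Jenkins plugin; for the JIRA side, a ticket-creation call against the REST API looks roughly like this sketch, where the JIRA URL, project key, and credentials are illustrative:

```bash
#!/usr/bin/env bash
# Sketch of the failure handler's ticket creation. BUILD_NUMBER and BUILD_URL
# are standard Jenkins environment variables; in the real script, the commit
# author's email was mapped to a JIRA user, falling back to a default user.
set -euo pipefail

AUTHOR=$(git log -1 --pretty=format:'%ae')

curl --silent --fail -u "jenkins:${JIRA_PASSWORD}" \
  -X POST "https://jira.example.com/rest/api/2/issue" \
  -H "Content-Type: application/json" \
  --data @- <<EOF
{
  "fields": {
    "project":     { "key": "APP" },
    "issuetype":   { "name": "Bug" },
    "summary":     "Pipeline failure in build ${BUILD_NUMBER}",
    "description": "The build failed: ${BUILD_URL}",
    "assignee":    { "name": "${AUTHOR}" }
  }
}
EOF
```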

Conclusion

As I said earlier, it takes planning to design and implement a deployment pipeline for microservices. A microservices architecture can increase the number of teams and the complexity from a DevOps perspective. Our DevOps work was simpler because all the applications were deployed inside Docker containers. Consider using Docker for microservices in your own projects; perhaps you can reduce your time to market and lower the cost and effort of deployment.
