This is the first post in a multi-part, multi-track blog series on Docker for the Enterprise.
The Case for Docker
“Shipping code to a server is really hard. It should not be hard.” – Solomon Hykes, Founder of Docker
Does this sentiment resonate with you? What about the following situations?
– Deploying application changes across your development, user acceptance testing (UAT), and production environments is a painstakingly manual process, and therefore inconsistent and infrequent.
– Your application has difficulty scaling up to handle increased load and consumes a lot of CPU cycles on virtual machines (VMs) when it does.
– It would be impossible to recover from a catastrophic hardware failure on your production servers because their specific configuration has grown over time and not been documented.
Docker claims to help with all of these. In this blog series, we’ll examine these claims with a deep dive into a Docker example application, and we’ll also weigh the practical business considerations involved in deciding whether Docker is right for your organization.
What is Docker?
Docker, Inc. is the company that created Docker. Docker is a technology (or rather a collection of technologies) based on wrapping an entire environment into a Container. It is relatively new in the app dev space, having just released version 1.0.0 on June 9, 2014. The intent of Docker is to enable entire application ecosystems to be deployed seamlessly across multiple environments and hardware configurations.
The Benefits Claim
Docker boasts an impressive list of benefits, which makes it sound like a panacea for all software development, deployment, and configuration management woes:
– Faster delivery of your applications
– Deploy and scale more easily
– Get higher density and run more workloads
– Control the configuration of your application environments
But What Is It?
These benefits are achieved through a handful of coordinated technologies. Here are the technologies created and maintained by Docker, Inc.:
– Containers – The actual bits composing an entire working application environment, including the applications, libraries, file systems, networking, etc., even down to the OS binaries.
– Docker Engine – A tool for building, starting, and managing Docker Containers.
– Docker Hub – A place to upload and share Docker Containers.
How does all of this come together? According to Docker, it is as easy as “Build, Ship, Run”:
1. Build – Developers build a Docker Container image for each independent application component (such as a web application, a NoSQL cluster node, etc.) that includes all of the software required to make that component work.
2. Ship – The Docker Containers are distributed via Docker Hub or a private repository to any environment that has the Docker Engine installed.
3. Run – The Docker Containers are started and managed in the target environment by the Docker Engine.
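As a rough sketch of what these three steps look like on the command line (the image name, registry account, and ports here are hypothetical, not from any real project):

```shell
# Build: create a Container image from the Dockerfile in the current directory
docker build -t myorg/webapp:1.0 .

# Ship: push the image to Docker Hub (or a private registry)
docker push myorg/webapp:1.0

# Run: start a Container from that image on any host running Docker Engine
docker run -d -p 8080:80 myorg/webapp:1.0
```

The same image moves unchanged through all three steps, which is the core of the “Build, Ship, Run” promise.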
The Docker Lifecycle
While Docker’s core tools are great for creating dynamically scaling applications, they alone are not sufficient. In our experience, there is a host of tools available in the broader Docker community that are required to create an application that can be deployed, scaled, and managed according to the vision of Docker. We’ll cover these tools and others in depth during our technical deep dive.
Docker leverages Linux kernel namespaces and control groups to create isolated execution environments for each Container running on the same kernel. It also uses a layered file system to augment a base image with additional files and directories specific to the needs of each application Container. In this way, a base image can be used as a starting point for many other images.
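To sketch the layered file system idea, consider a hypothetical Dockerfile (the base image, package, and paths below are illustrative assumptions, not a recommended setup):

```dockerfile
# Start from a shared base image; its layers are reused by every image built on it
FROM ubuntu:14.04

# Each instruction adds a new read-only layer on top of the base
RUN apt-get update && apt-get install -y nginx

# Application files go into their own layer
ADD ./site /usr/share/nginx/html

# Command executed when a Container is started from this image
CMD ["nginx", "-g", "daemon off;"]
```

Because the base layers are shared, ten images built FROM the same base do not store ten copies of the OS binaries; each image stores only the layers it adds on top.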
Wait, That Sounds Just Like a VM!
No, not quite. A Docker Container is similar to a virtual machine in that it encapsulates an entire environment. The difference is that a Docker Container shares the kernel of the host OS to manage resources, instead of packaging a full guest kernel into an image the way a VM does.
A host running Virtual Machines must load the Guest OS for each VM
(image copyright Docker, Inc.)
In addition to having to run multiple OS images, virtual machines also require a hypervisor such as VMware or Xen. The hypervisor mediates each VM’s access to the host OS and the underlying system resources. The net result is that the hypervisor adds some amount of overhead and limits the number of virtual machines a physical server can run.
A host running Docker Containers only loads the binaries and libraries required by the applications in the Container. Docker Containers can even be run on a guest VM within a virtualized data center, with no adverse effects.
The result is that Docker Containers are much lighter weight, start more quickly, and contain only the components they need in order to run.
How It All Adds Up
Let’s revisit the benefit claims of Docker and consider how each point is achieved.
– Faster delivery of your applications – Because Docker Containers can run unmodified in any environment that has Docker Engine installed, it is easier to move Containers across environments.
– Deploy and scale more easily – Due to the isolated nature of Docker Containers, they are separated from any environmental dependencies, allowing them to be deployed easily. This isolation also allows them to spin up or down dynamically (although not without the proper application architecture, as we’ll see in our future blog posts).
– Get higher density and run more workloads – Docker Containers are lightweight, containing only the bits required to make the contained application run. Therefore, more Containers can run on the same hardware than virtual machines can, handling more workloads.
– Control the configuration of your application environments – Because Containers encapsulate an entire environment, the configuration of a given application component is easily controlled simply by checking the component’s Dockerfile (its Container configuration file; more on these later) into source control.
Is Docker Right for Me?
In this blog series, we’re going to get a Docker example application up and running, and we’ll explore the issues we encounter along the way. We’ll also be discussing the business considerations related to using Docker. Finally, we’ll wrap it all up into a technical position and business-focused summary. Here are some of the topics we plan to cover:
Technical Track – for Developers, Architects, and Sys Admins Deploying Docker
– Containerizing Application Components
– Security and Compliance
– Deployment, Scaling, and Orchestration
– Developing Docker Applications
– Version 2.0 – Upgrading
Business Track – for Managers and Executives Considering Docker
– Thinking in Containers
– Realizing the Benefits of Using Docker
– Adopting Docker
– Executive Summary – So What?