Using Containers for Continuous Deployment

[article]
Summary:

Pini Reznik explains how containers can help shorten the software development feedback loop by drastically reducing the overhead involved in deploying new software environments. This leads to faster build and test execution and simplifies standardizing the development and production environments, allowing an easier transition to continuous deployment.

In his seminal article “Continuous Integration,” Martin Fowler defines a set of practices to improve the quality and increase the speed of the software development process. These practices include having a fast and fully automated build all the way from development to production and conformity between testing and production environments.

Since Fowler’s article was published, continuous integration has become one of the key practices of modern agile development, and many of us are in a constant battle to speed up the build process and test automation stages. The growing complexity of software and our aspiration to deliver it to the customer in a matter of days or even hours don’t make this battle any easier.

The recent rise of containers as a tool to ease the journey from development to production may help us address these challenges.

Containers (OS-Level Virtualization)
Containers allow us to create multiple isolated and secure environments within a single instance of an operating system. As opposed to virtual machines (VMs), containers do not launch a separate OS but instead share the host kernel while maintaining the isolation of resources and processes where required.

This architectural difference drastically reduces the overhead of starting and running instances. As a result, startup time can commonly be reduced from thirty-plus seconds to 0.1 seconds. The number of containers running on a typical server can easily reach dozens or even hundreds, while the same server would struggle to support ten to fifteen VMs.
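As a rough illustration, here is a minimal sketch that times the full lifecycle of a throwaway container. It assumes a local Docker installation and the tiny public busybox image; on most machines it finishes in well under a second:

```python
import subprocess
import time

# Assumes Docker is installed and busybox has already been pulled
# (docker pull busybox). The "true" command exits immediately, so we
# measure little more than container startup and teardown.
start = time.time()
subprocess.run(["docker", "run", "--rm", "busybox", "true"], check=True)
print(f"Full container lifecycle: {time.time() - start:.2f}s")
```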

David Strauss’s article “Containers—Not Virtual Machines—Are the Future Cloud” provides an excellent explanation of containers.

Deployment Pipelines
I started building deployment pipelines about a decade ago when I moved from a software development role into configuration management. 

At my first job I reduced the build time from a week to around two to three hours, including the creation of a VM image and the deployment of a full system on a hypervisor. Over 50 percent of the build time was wasted on the creation and deployment of the VM, while the build and the automated system testing required less than an hour.

Over the years virtualization technology has taken a few leaps forward, and we can now deploy a fully functional multi-VM system on a private or public cloud in just a few minutes. Although this represents huge progress, it still creates enormous difficulties when trying to build a real continuous deployment pipeline. Having a fast build means keeping the continuous integration build under ten minutes, as suggested by Martin Fowler. Achieving this speed normally means confining the testing to a suite of unit tests; there just isn’t time to build a full image, copy it over the network, deploy the VMs, and run a set of system tests.

But what if we could build a new image in just a few seconds, copy only the changed pieces of the software to the cloud, and boot up a fully functional system in under one second?

As unimaginable as it sounds, this is already possible using containers. Besides letting us deploy and test a full system at the continuous integration stage, containers can also reduce the complexity of supporting variations in operating systems. The same container image created during a normal build can be reused in production, removing any potential differences between the operating systems on developers’ laptops and those on the production servers.
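As a minimal sketch of that workflow (the image name, version, and the registry.example.com registry below are hypothetical), the image built during continuous integration is pushed once, and production later runs exactly the same artifact:

```python
import subprocess

# Hypothetical names, for illustration only.
IMAGE = "registry.example.com/myapp"
VERSION = "1.0.42"

def sh(*cmd):
    """Run a command, raising if it fails."""
    subprocess.run(cmd, check=True)

# Build the image once during the continuous integration build...
sh("docker", "build", "-t", f"{IMAGE}:{VERSION}", ".")
# ...push it to a shared registry...
sh("docker", "push", f"{IMAGE}:{VERSION}")
# ...and production later pulls and runs the very same bytes:
#   docker run -d registry.example.com/myapp:1.0.42
```

Because only the image layers that changed need to be transferred, a small code change typically means a small upload rather than a full image copy.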

The same functionality is theoretically possible using standard VMs, but in practice it never really works. The overhead of maintaining huge VM images, distributing them, and redeploying them multiple times per day has prevented this way of working from becoming popular.

Test Automation
The more efficient use of resources and the almost instant startup times provided by containers are hugely important for super-fast continuous integration builds, but there are even bigger benefits for large test execution scenarios.

Today, redeploying a full operating system for each test automation suite is out of the question, so the same system is used to execute hundreds or even thousands of test cases in quick succession. This requires very sophisticated teardown procedures between test suites, which are often a source of errors. Such errors may cause a butterfly effect, leading to an unexpected failure in an unrelated part of the test execution. In addition, teardown procedures are expensive to write and maintain and may take significant time to execute.

With containers, we can eliminate teardown procedures entirely. Every single test automation suite can run in a separate disposable container, which can simply be thrown away after execution. A notable side effect of this is that we can now run all the tests in fully isolated containers in parallel and on any hosting environment that supports containers. This can potentially reduce test automation time from hours to minutes or even seconds depending on the resources at your disposal.
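As a sketch of the idea (the test image and suite names below are hypothetical), each suite gets its own container started with --rm, so cleanup is simply the container being discarded, and the suites run in parallel:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

IMAGE = "myapp-tests:latest"            # hypothetical test image
SUITES = ["unit", "integration", "ui"]  # hypothetical suite names

def run_suite(suite):
    # --rm discards the container after the run: no teardown code at all.
    result = subprocess.run(
        ["docker", "run", "--rm", IMAGE, "pytest", f"tests/{suite}"],
        capture_output=True, text=True)
    return suite, result.returncode

# Every suite executes in its own isolated, disposable container.
with ThreadPoolExecutor() as pool:
    for suite, code in pool.map(run_suite, SUITES):
        print(f"{suite}: {'passed' if code == 0 else 'failed'}")
```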

DevOps and Continuous Delivery
The DevOps movement is increasingly popular and aims to bridge the communication gap between development and operations. I see the creation of a common technical language as one of the biggest challenges for DevOps, since these two departments have typically been trying to solve different problems. Although containers will not resolve this challenge by themselves, they may significantly simplify the situation for both parties and create an environment that is easily accessible to both programmers and IT operators.

The standardization of tooling in both departments will allow software to move faster through the pipeline and reduce the complexity required at each stage. In doing so, we take another step toward faster feedback loops with continuous delivery.

In Conclusion
Modern software development relies on the extensive reuse and integration of existing software components. This makes setting up development and production environments especially challenging.

Containers have been around for a while, but only recently have they become reliable and usable enough for daily operations. They will very soon allow us to remove the burden of running thousands of unneeded operating system instances and focus on the real services we provide to our customers.

Containers are making continuous deployment look like an achievable goal.

User Comments

Clifford Berg (November 2, 2015):

The author makes a great point that is often overlooked: because containers are so disposable, one need not do test teardown, since one does not need to reuse a container. In DevOps processes people often rebuild their environments only daily, or even less frequently, because of the time it takes. But if one creates a container image from a good baseline, one can reload that image almost instantly, use it for testing, and then throw it away; this way one can always use clean test environments.
