Using Docker with Atlassian’s Bamboo for Better Continuous Integration


Some projects built with Bamboo require the build system to have certain specialized software. Installing the software can become a problem when different projects require conflicting sets of software. To solve this, developers can create remote agents with different sets of software and run them all on the same machine with Docker.

Building software to be deployed can be very challenging. Some of the projects we build with the Bamboo continuous integration server require specialized software, such as the Atlassian SDK, to be installed. To accomplish this, we install the software on the machine where Bamboo is running, or set up remote build agents on other machines that already have this software. However, this can become a problem when different projects require conflicting sets of software (such as different versions of the Atlassian SDK) or earlier versions of software than what’s already in use on a machine, and there aren’t enough machines to dedicate to builds.

To solve this issue, we create remote agents with different sets of software and run them all on the same machine with Docker. Docker is a containerization platform that allows us to create images of Linux systems, customized in any way one would normally customize Linux, which can then run as self-contained instances called containers. However, this is no easy task, so in this article I’ll give you a detailed view of solving some of the challenges inherent in using containers like Docker for deployment. The only practical limit on the number of containers that can run from these images is the machine’s resources, as the approach itself scales as far as the hardware allows.


A Docker image is constructed from a Dockerfile, which consists of the name of a base image (most often a version of Debian or Ubuntu), a series of commands to run in the base image to customize it, and a CMD line specifying the default command to run when the resulting image is started as a container. For example:

A Docker image is constructed from a Dockerfile
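The code figure from the original article is not reproduced here; the following is a minimal sketch of such a Dockerfile, consistent with the description in the next paragraphs. The package names, the agent installer’s version number, and the BAMBOO_SERVER variable name are assumptions.

```dockerfile
# Base image: Ubuntu 14.04, pulled from the Docker registry if not already local
FROM ubuntu:14.04

# Install Git, Maven, and Java (package names are assumptions)
RUN apt-get update && apt-get install -y git maven openjdk-7-jdk

# Copy the Bamboo remote agent installer JAR from the build directory into
# the image (the version number is an assumption)
COPY atlassian-bamboo-agent-installer-5.8.1.jar /opt/

# Default command: start the remote agent against a Bamboo server whose address
# is supplied at run time through an environment variable (name is an assumption)
CMD java -jar /opt/atlassian-bamboo-agent-installer-5.8.1.jar ${BAMBOO_SERVER}/agentServer/
```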

By saving these commands in a file named “Dockerfile” and running the command “docker build -t custom-agent .” in the directory containing the file (and, in this case, also containing the atlassian-bamboo-agent-installer JAR), we create a Docker image named “custom-agent” based on Ubuntu that contains Git, Maven, Java, and a Bamboo remote agent. The ubuntu:14.04 base image is available on the Docker registry and will be downloaded automatically during the image build process if it is not already on the local machine. (Atlassian provides a remote agent Docker image of its own, but it is based on a custom version of Ubuntu whose contents Atlassian is not forthcoming about, so we made our own.) With each subsequent line of the Dockerfile, Docker runs a command inside this image or otherwise modifies it, until at the end we have an image to our liking.

Docker’s RUN instruction runs a command inside the image just as if we were running it from the command line (specifically, in /bin/sh). Note that each command runs in a separate shell, so RUN cd /path and RUN export VAR=val won’t affect subsequent RUN commands; use WORKDIR /path and ENV VAR val instead. COPY copies a file (in this case, the Bamboo remote agent JAR) from the local directory containing the Dockerfile into the image. CMD specifies the default command to run when a new container is created from the image. The command used here makes use of an environment variable that will not be set until we run the image.

Better Building

When we run docker build, Docker stores not only the final image, but also the image as it existed after each command in its build cache. These intermediate images are called layers. When we run docker build again, Docker compares each command against the ones that created the layers in the build cache. If the combination of the current image state plus the next Dockerfile command (including the contents of COPYed files) has already been done before, Docker will use the cached result rather than running the command again. However, these cached layers are remembered only as long as they’re part of an image stored on the local machine. If we use docker rmi to delete all images that have a given layer in their build history, the layer will be deleted from the build cache as well. As a result, by structuring Dockerfiles appropriately, users can cut down on the time it takes to rebuild the image whenever the file is modified. Best practices include placing time-intensive commands that are unlikely to change (like compiling a specific version of a program from source) at the top of the Dockerfile, while placing quicker commands that are likely to change often (like copying in a list of hosts to interact with) at the bottom.
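As a sketch of this ordering (the URL and file names are hypothetical):

```dockerfile
# Slow and stable: compile a tool from source near the top of the Dockerfile,
# so its layer stays cached across rebuilds
RUN curl -O http://example.com/tool-1.0.tar.gz && \
    tar xzf tool-1.0.tar.gz && \
    cd tool-1.0 && ./configure && make && make install

# Quick and volatile: copy frequently edited files near the bottom, so only
# these layers are rebuilt when the files change
COPY hosts.txt /etc/build-hosts.txt
```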


If we create several images that differ only slightly, such as in the version of the primary software, we can give them all the same name but different tags. Each image has a tag, usually a version number, that is appended to the base name of the image with a colon; e.g., we can write “docker build -t image:1.0.2 .”, “FROM image:1.0.3”, and “docker run image:1.0.4”. If we don’t specify a tag when creating or using an image, the tag “latest” is implied; thus, “docker build -t image .” actually builds image:latest, “FROM image” uses image:latest as the base image, and “docker run image” runs image:latest. We used a tag in the Dockerfile above when writing “FROM ubuntu:14.04” in order to specify the exact version of Ubuntu to build from; the available tags for an image in the Docker registry are listed on the page for that image, sometimes under the “Information” tab and always under the “Tags” tab.
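Collected as commands, the tagging behavior described above looks like this (the image name is hypothetical):

```shell
docker build -t image:1.0.2 .   # build an image with an explicit tag
docker run image:1.0.2          # run that exact version
docker build -t image .         # no tag given, so this builds image:latest
docker run image                # runs image:latest
```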


The custom-agent image can be run with the command:

The custom-agent image can be run with this command
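The original command figure is not reproduced here; what follows is a plausible reconstruction matching the options described below. The BAMBOO_SERVER variable name and the server address are assumptions.

```shell
docker run -d --name agent-container \
    -e BAMBOO_SERVER=http://bamboo.example.com:8085 \
    custom-agent
```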

(If we don’t want to run the default command set in the Dockerfile, we can explicitly specify a command to run with “docker run custom-agent command args”, and we can run a shell inside our custom environment with “docker run -i -t custom-agent /bin/bash”.) The -d option causes the container to run in detached mode as a background process, with all of its output being logged. --name agent-container assigns the container a name (which must be unique among all Docker containers that currently exist on the system) that we can use to refer to the container in further Docker commands. The container can also be referred to by a hash value that is output when it starts up. (If the container is not run with -d, this value will not be output, and you will have to find the ID by locating your container among all running containers in the output of docker ps.)

When the container created from the image starts, the remote agent will run inside the operating system defined by the image with access to all the software in the image (and only that software), and the agent will connect to the Bamboo server and perform builds for it like a normal remote agent. Depending on how Bamboo is configured, users may have to authenticate the remote agent by visiting a URL that the process outputs. The output from the container can be viewed by running “docker logs agent-container”, substituting whatever name was given to the container on the machine.


The container will run until the primary process inside stops, whether by exiting successfully or by encountering an error, or until a user stops it by running “docker stop agent-container”. However, stopped containers continue to exist on the local machine (at least until it reboots), though they won’t show up under docker ps unless the -a option is added. The idea behind keeping the container around is to allow the user to examine the stopped container, possibly copying out files with docker cp, or to restart it with docker start. If a user doesn’t need or want to examine or restart a stopped container, it can be deleted with docker rm agent-container. If a user is sure he or she won’t ever want to keep a container around once it stops, the container can be told to delete itself automatically when done by including the --rm option in the docker run command.
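The lifecycle described above can be sketched as the following commands (the log path in the docker cp line is hypothetical):

```shell
docker ps                     # list running containers
docker ps -a                  # also list stopped containers
docker stop agent-container   # stop the running agent
docker cp agent-container:/var/log/agent.log .   # copy a file out of a stopped container
docker start agent-container  # restart the stopped container
docker rm agent-container     # or delete it for good
docker run --rm custom-agent  # or run so the container deletes itself on exit
```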

Setting Remote Agent Capabilities

One important feature of Bamboo agents is their capabilities, which Bamboo uses to determine which agents can perform which jobs. The remote agent can determine the values of some capabilities of a system automatically (such as the Java home directory and the location of Git), but others must be explicitly specified in a file in ~/bamboo-agent-home/bin/. For example, if a Docker image has the program foo installed in /usr/bin/foo and a Maven 3 installation in the /usr/local/share/maven directory, users can make the agent aware of these by adding a file to the Docker build directory with the following contents:

Example capability definitions file
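The file’s contents are not reproduced in this version of the article; a plausible reconstruction, assuming Bamboo’s key=value capability format (the mvn3 capability key follows Bamboo’s naming convention but is an assumption here):

```properties
foo=/usr/bin/foo
system.builder.mvn3.Maven\ 3=/usr/local/share/maven
```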

and then adding these lines to the Dockerfile:

Lines added to the Dockerfile
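Those lines would plausibly look like the following (the file name bamboo-capabilities.properties and the /root home directory are assumptions):

```dockerfile
# Copy the capability definitions into the agent's home directory inside the image
COPY bamboo-capabilities.properties /root/bamboo-agent-home/bin/bamboo-capabilities.properties
```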

Other Uses

Docker can be used when deploying Bamboo builds, as well. A Tomcat server can be created as a Docker container with:

A Tomcat server can be created as a Docker container
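The command figure is not reproduced here; a sketch consistent with the description below, assuming the tutum/tomcat image listens internally on port 8080:

```shell
docker run -d --name tutum_server -p 7000:8080 tutum/tomcat:6.0
```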

This uses version 6.0 of the tutum/tomcat image on the Docker registry and sets it to listen on the local machine’s port 7000. The admin password for the server can be found by inspecting its output with “docker logs tutum_server”, after which Bamboo can deploy web apps to the server as it would deploy to any other Tomcat instance.


Docker is useful for automating the shipping of software, avoiding issues between parts of cross-functional teams that may have their tools out of sync, providing speed and flexibility for builds, and allowing for infrastructure agnosticism. Of course, users should take the time to learn the tool and all its command line arguments to take advantage of all its features. However, as Docker gains traction, it will be increasingly supported by higher-level tools. For example, Atlassian’s Bamboo introduced support for Docker agents in version 5.7 in November 2014 and Docker tasks in version 5.8 in March 2015. Atlassian has made numerous commitments to increasing its support of Docker as well. However, as a group of “hardcore coders,” we still recommend learning what’s going on under the hood as the integration process continues. Regardless, Docker remains a great choice as a complement to a team’s build pipeline.


CMCrossroads is a TechWell community.
