An Evaluation Framework for Continuous Integration Tools

Summary:
Tools that enable continuous integration are vital to any agile project. Learn how putting together a well-planned evaluation process for the selection of those tools enables your entire team to work more cohesively, while eliminating the waste and damage that ineffective tools can cause.

The Importance of Continuous Integration
As agile and other lean methodologies move to the forefront of software development, one thing is abundantly clear: continuous integration is the key process for making agile work. Without it, teams have no feedback on the stability, quality, or suitability to task of their software. Given the short cycle times in agile methods (two-week sprints, for example), this lack of feedback is the equivalent of driving on a dark, unfamiliar road with no headlights. By introducing continuous integration, teams can run these agile micro-cycles more effectively and ensure that the project is progressing appropriately.

Since continuous integration is so important, teams need to have a solid framework in place for evaluating tools. Choosing the wrong tool results in wasted time and effort, forcing teams to go back to square one in order to meet the goals of fast, effective build and test cycles. Conversely, when the right tool is introduced, developers become more productive, managers gain insight into the process, and business stakeholders (often resistant to agile) buy into the process.

The Evaluation Framework

With all of this in mind, I recently embarked on a continuous integration tool evaluation. You can read the first installment of the detailed results on my blog at Technistas.com. Rather than rehashing the specific results of that evaluation, my goal in this article is to present the evaluation framework I used, discuss why it was useful, and save readers the time and effort of developing their own framework.

At a high level, the framework facilitates evaluation of continuous integration tools along several vectors:

    • Installation
    • Configuration and maintenance
    • Running a simple job
    • Availability of results and metrics
    • Interacting with development tools
    • Automating complex build processes

The remainder of this article expands on each of these points and provides tips on how to evaluate tools in these areas. For each, I provide a table listing several specific features to look for. The table also has a ranking column; I suggest assigning a rank from 1 to 5, with 1 being a poor score and 5 an excellent one. That way, some simple math at the end of your evaluation gives you an instant snapshot of how the candidate tools stack up.
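To make that roll-up concrete, here is a minimal sketch in Python of the scoring step, assuming you have recorded a 1-to-5 ranking for each evaluation area; the tool names and scores below are purely hypothetical.

```python
# A minimal scoring sketch: sum each tool's 1-to-5 rankings across the
# evaluation areas and compare totals. Tool names and scores are hypothetical.
rankings = {
    "Tool A": {"Installation": 4, "Configuration": 3, "Simple job": 5,
               "Results and metrics": 4, "Tool integration": 3, "Automation": 4},
    "Tool B": {"Installation": 5, "Configuration": 4, "Simple job": 3,
               "Results and metrics": 3, "Tool integration": 4, "Automation": 2},
}

for tool, scores in rankings.items():
    total = sum(scores.values())
    print(f"{tool}: {total} out of a possible {5 * len(scores)}")
```

You could just as easily keep the rankings in a spreadsheet; the point is that the framework reduces a fuzzy comparison to a number you can defend.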

Installation

Often overlooked, the ability to install a continuous integration tool easily and correctly is important. Complex and failed installations waste time and rightly sour the evaluation. If, on the other hand, the tool installs quickly and intelligently, you as an evaluator have cleared the first hurdle and already have a sense of the quality consciousness of the people who built it.

Continuous integration tools are typically architected as client/server products, with a server process on the back end and, usually, a web interface on the front end. Consequently, you should look at several things when installing the tool:


| Installation Criteria | Ranking (1-5) |
| --- | --- |
| Ability to change default ports | |
| Clear description of installation options | |
| Ability to perform basic configuration as part of the installation | |

Configuration and Maintenance

Once the tool is installed, you may need to perform some configuration. My bias with respect to configuration is simple: less is better. By this I mean that the tool should be immediately useful once installed. Any tool that requires you to spend too much time understanding its concepts and tweaking settings before you can use it should be treated with suspicion: if it’s hard to configure, there’s a good chance that it will be hard to use.

Any tool in your shop ultimately needs to be maintained, whether that means patching clients and servers, adding plug-ins, or tuning and tweaking. In better-designed tools, this functionality is built into the user interface, so that configuration and maintenance are performed from a dashboard screen in the tool's web client. Here are some of the important things to look at for configuration and maintenance:


| Configuration and Maintenance Criteria | Ranking (1-5) |
| --- | --- |
| Ability to change default ports | |
| Security (at a minimum, the ability to segregate administrators from users) | |
| Ability to change install options | |
| Administrative dashboard | |
| Useful out-of-the-box | |

Running a Simple Job

If you’ve reached this point in the evaluation, you’re probably itching to do something with the tool. Rather than diving right into building code and running tests, I suggest that you first create a ‘hello world’ job that does something simple, like printing the current machine environment to the log (the equivalent of running ‘set’ in DOS or ‘env’ in UNIX). By doing this, you take a single pass through the tool’s job setup mechanism, which will give you a good sense of what day-to-day interaction with the tool will be like.
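As an illustration, here is a minimal 'hello world' script in Python that such a job could invoke. It assumes only that the tool can run an arbitrary script as a job step; the file name is hypothetical.

```python
# hello_world_job.py -- a hypothetical first job step: dump the build
# machine's environment to the log so you can see exactly what the CI
# tool passes to your jobs.
import os
import sys

print("Python", sys.version.split()[0])
for name in sorted(os.environ):
    print(f"{name}={os.environ[name]}")
```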

While setting up your simple job, you should look for one thing above all: flexibility. If the tool forces a rigid conceptual framework on you, that should raise a red flag. Similarly, a tool that requires job parameters (such as directory paths) to be hard-coded is less desirable than one that supports macros (variable substitution). Here are some of the important things to look at:


| Running a Simple Job Criteria | Ranking (1-5) |
| --- | --- |
| Intuitive concepts | |
| Ability to parameterize jobs (macro support) | |
| Simple user interface | |
| Useful defaults | |

Availability of Results and Metrics

After a job runs in a continuous integration tool, you need to view the results and gather metrics. Though development tools are often a geek's paradise, managers are an important set of tool users, and their needs are quite different from those of a software developer or QA engineer. Managers are tasked with controlling and monitoring the development process, which makes the continuous integration process their 'factory floor' and the tool's management console their command center. Developers might have a more focused interest, perhaps just wanting to see the results of recent jobs. Both groups of users must be satisfied with the tool, or there will be tension in the development organization, and likely lost productivity.
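For a sense of the kind of roll-up a management dashboard provides, here is a small Python sketch that aggregates pass/fail counts from a JUnit-style XML results file. The file name is hypothetical, and in practice the continuous integration tool should compute and display these numbers for you.

```python
# summarize_results.py -- a sketch of the roll-up a dashboard performs:
# read a JUnit-style XML results file and report pass/fail counts.
# The file name is hypothetical.
import xml.etree.ElementTree as ET

suite = ET.parse("test-results.xml").getroot()

total = int(suite.get("tests", 0))
failed = int(suite.get("failures", 0)) + int(suite.get("errors", 0))
passed = total - failed

print(f"Tests run: {total}, passed: {passed}, failed: {failed}")
if total:
    print(f"Pass rate: {100.0 * passed / total:.1f}%")
```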

Here are the key points to evaluate when looking at how a tool provides feedback on jobs and on the development process metrics:


| Availability of Results and Metrics Criteria | Ranking (1-5) |
| --- | --- |
| Ability to query for a subset of results | |
| Management dashboard | |
| Pass/fail and performance metrics | |
| Integrated test results | |
| Build farm view (ability to see all jobs on all remote machines) | |
| Web access to individual logs and reports | |
| Dependency reports | |

Interacting with Development Tools

From an application lifecycle perspective, continuous integration is a sub-process of the larger lifecycle, but it likely touches more tools than any other phase. Source code management, issue and bug tracking systems, test tools, and profiling or analysis tools all tend to be integrated into the continuous integration process. The desired output of a continuous integration run is a stable piece of code, from a known location in your source repository, that meets certain quality criteria, so choosing a tool that lets your development tools work together intelligently is critical. Though the raw number of available tool integrations is important, you should also look at how easy or hard it is to add a 'test step' or a 'static analysis' step to a build job. Also pay attention to the process for integrating new tools, since you almost certainly have commercial, open source, and home-grown development tools that the continuous integration tool doesn't support out of the box.
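When a built-in integration doesn't exist, the usual fallback is a thin wrapper script that runs the tool, captures its output for the build log, and maps its exit code to pass or fail. Here is a minimal Python sketch along those lines; the command-line analysis tool ('lintall') and its options are hypothetical.

```python
# custom_step.py -- a hypothetical wrapper that adapts an unsupported
# command-line tool (called "lintall" here) into a CI job step. The CI
# server only needs to see the captured output and the exit code.
import subprocess
import sys

result = subprocess.run(
    ["lintall", "--report", "lint-report.txt", "src/"],
    capture_output=True,
    text=True,
)

# Echo the tool's output so it ends up in the build log.
print(result.stdout)
print(result.stderr, file=sys.stderr)

# A non-zero exit code tells the CI server that this step failed.
sys.exit(result.returncode)
```

The less hand-written glue like this you need, the better; that is exactly what the "process for creating custom integrations" criterion below is meant to capture.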

Here is the short list of features that you should look for when determining how the continuous integration tool will fit in with your existing development tools:


| Interacting with Development Tools Criteria | Ranking (1-5) |
| --- | --- |
| Number of built-in integrations | |
| Process for creating custom integrations (low coding effort is best) | |
| Ability to display tool output in reports | |
| Ability to store encrypted tool credentials | |
| GUI support for common tool options | |
| Support for SCM-triggered or clock-triggered jobs | |

Automating Complex Build Processes

Once you’ve figured out how to call your tools, it’s time to put together a real build job, one that orchestrates your code and your tools inside the continuous integration process. This is really the end goal of the evaluation: to automate your process and tool chain. Continuous integration tools vary widely in this area. Some don’t allow any complex orchestration of tools: you can check out source code, but you are then constrained to running your existing build script. Others provide complete workflow engines that let you chain tools together into complex build workflows and then trigger those workflows based on source code changes or other criteria.
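To make the contrast concrete, here is a rough Python sketch of the kind of step chaining a workflow engine performs: check out, build, test, and analyze in sequence, failing fast when any step fails. The repository URL, build targets, and the "lintall" analyzer are placeholders; in a real tool, these steps would be defined in the tool's own job configuration rather than in a script.

```python
# pipeline_sketch.py -- a rough illustration of a chained build workflow:
# check out, build, test, analyze. The repository URL, build targets, and
# the "lintall" analyzer are placeholders; a real CI tool would let you
# define these steps in its own job configuration.
import subprocess
import sys

steps = [
    ("checkout", ["svn", "checkout", "http://example.com/repo/trunk", "work"]),
    ("build",    ["ant", "-f", "work/build.xml", "compile"]),
    ("test",     ["ant", "-f", "work/build.xml", "test"]),
    ("analyze",  ["lintall", "work/src"]),
]

for name, command in steps:
    print(f"=== {name}: {' '.join(command)}")
    if subprocess.call(command) != 0:
        print(f"Step '{name}' failed; stopping the job.")
        sys.exit(1)

print("All steps succeeded.")
```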

Since you will likely have a small mountain of build scripts already in use, you should always be able to run an existing script as part of a build process. Beyond that, here are the points to be aware of when determining how well a continuous integration tool will let you automate your build process:


| Automating Complex Build Processes Criteria | Ranking (1-5) |
| --- | --- |
| Support for tool call chains | |
| Ability to run existing scripts | |
| Ability to insert ad hoc scripts and command-line calls into the tool chain | |
| Nesting (ability to call one job from within another job) | |
| Distribution (ability to partition individual steps in a job to run on different machines) | |
| Virtual machine support (ability to spin up virtual machines to run job steps) | |

Summary

To recap, the goal of this article was to present an evaluation framework that helps continuous integration tool evaluators make sense of the wide array of features and functionality available today. Continuous integration has come a long way from the early days of ‘CruiseControl or nothing’ and has emerged as a key business process, affecting engineering, sales, and marketing organizations. Engineering owns the process and so has the responsibility to ensure that the results unambiguously add value to the organization. This framework will help you make a decision that is right for your organization and pave the way for your team to benefit from the important practice of continuous integration.


About the author

Matt Laudato is an independent consultant and editor-in-chief at technistas.com, and has over 20 years of experience in software engineering and IT. He has held engineering, marketing, and sales leadership positions at Northeastern University, Dun and Bradstreet, OpenWave, AccuRev, and OpenMake Software over his career. He is also a published expert on machine learning algorithms and statistical analysis. He holds a BS in Physics from Stony Brook University and an MS in Physics from the University of Wisconsin, Madison.
