Leverage Containers to Create Simulated Test Environments on Demand

Adopting service virtualization can allow organizations to achieve more effective software development and testing by removing traditional test environment bottlenecks. Integrating service virtualization within the continuous delivery pipeline using containerization helps teams reach the level of flexibility required by today's competitive markets.

In the ever more intense battle for customer attention and satisfaction, organizations are continuously looking to increase their flexibility when it comes to software creation in order to react to or even get ahead of market demands. One popular approach to achieve this is by adopting a continuous delivery (CD) model of software development and delivery.

For those of you not familiar with CD, the one-sentence summary of the philosophy behind it might sound something like this:

CD is a software engineering approach where development teams focus on creating software in short cycles, thereby ensuring that this software can be safely released into a production environment at any given point in time.

A cornerstone of the CD philosophy lies in the word safely. In order to ensure that any given version of the software under development can be released safely into production, the development team has to be able to trust the testing procedures and quality gates that are part of the CD pipeline. Often, a big part of these quality measures consists of automated checks, ranging from unit tests all the way to the end-to-end level.

In order to enable true CD, it is critical that these automated checks can be run on demand (known as continuous testing), unattended (i.e., no manual intervention should be required to run them and interpret their results), and as often as necessary. This requires careful engineering of the automated testing suite, ensuring that tests run fast, that they do not generate false negatives (which would unnecessarily stall the CD pipeline) or false positives (which would create a false sense of security), and that they propagate test results to the pipeline engine in a clear and unambiguous manner.
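To make the "unattended, with unambiguous results" requirement concrete, here is a minimal sketch (the `run_checks` helper and the commands passed to it are hypothetical, for illustration only) of a pipeline step that runs a test command and propagates pass/fail through the process exit code, so no human has to read logs to decide whether the build may proceed:

```python
import subprocess
import sys

def run_checks(test_command):
    """Run an automated check unattended; return True only on a clean pass.

    Hypothetical helper: a CD pipeline stage would typically call the test
    runner (pytest, JUnit, etc.) this way and gate on the exit code.
    """
    result = subprocess.run(test_command, capture_output=True, text=True)
    # A nonzero exit code is an unambiguous failure signal that stalls
    # the pipeline; zero means the stage may proceed.
    return result.returncode == 0

# Example: a trivially passing and a trivially failing "test suite"
print(run_checks([sys.executable, "-c", "pass"]))
print(run_checks([sys.executable, "-c", "import sys; sys.exit(1)"]))
```

The key design point is that the result is machine-readable: the pipeline engine needs only the exit code, never a human interpretation of the output.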

An often overlooked aspect of creating the automated checks that enable teams to adopt the CD approach, however, is the strain that the ability to test continuously places on test environments. Especially with modern-day distributed and (micro-)services-based applications, you simply cannot test an application or a component in isolation and release it safely into a production environment. You'll need a test environment, complete with all required dependencies, that is just as on-demand as the test suite that exercises it. This means all the dependencies required to run integration and end-to-end tests (unit tests often use mock objects to abstract away dependencies) should be available and in the right state all the time.
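As a brief illustration of how unit tests sidestep this problem with mock objects, consider the following sketch (the `PaymentClient` class and `checkout` function are hypothetical examples, not from any particular application):

```python
from unittest import mock

class PaymentClient:
    """Hypothetical client for a remote payment service."""
    def charge(self, amount):
        raise RuntimeError("real service unavailable in tests")

def checkout(client, amount):
    """Business logic under test; depends on the payment service."""
    return "ok" if client.charge(amount) else "declined"

# The mock stands in for the dependency, so this unit test needs no
# running payment service at all.
fake_client = mock.Mock(spec=PaymentClient)
fake_client.charge.return_value = True
assert checkout(fake_client, 42) == "ok"
fake_client.charge.assert_called_once_with(42)
```

Integration and end-to-end tests, by contrast, exist precisely to exercise the real interactions, which is why they need an actual (or convincingly simulated) environment rather than a mock.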

Anybody who has ever been involved in testing distributed applications knows that this is not an easy feat. Dependencies that are critical to the completion of integration and end-to-end tests are often hard to access or simply unavailable, for any of the following reasons:

  • The dependency itself is under development, making it unavailable or unfit to use for testing
  • It is hard—or even impossible—to repeatedly set up the test data required for the tests to be executed
  • The dependency is shared between teams, meaning that it can be used only at certain points in time (mainframes in test environments often suffer from this phenomenon)
  • The dependency is a third-party component that requires access fees to be paid before one is allowed to use it
