There are many articles describing configuration management best practices, such as continuous integration and continuous delivery. These articles focus on continuous improvement, covering everything from the challenges of a large monolithic application to distributed build, package, and deployment.
But the fundamental principle is to always be improving, and a big part of that is thinking several moves ahead and planning for worst-case scenarios. This may involve transitioning from one technology to another, such as moving from Ant to Maven so that you can focus on convention over configuration, saving significant time and benefiting from well-defined practices. You may then consider switching from Maven to Gradle so that you’ll enjoy greater flexibility without compromising dependency management, albeit at the cost of extra maintenance work.
As most IT professionals know, even the transition from physical servers to virtual machines and, now, to containers has brought many challenges. Whether you are adopting the latest software technology or simply transitioning from conventional hard drives to solid-state drives, there is always a need for continuous improvement.
Improve Software Quality in Your Delivery Pipeline
Continuously improving your configuration management practice results in delivering high-quality, complex software beautifully and efficiently. Quality can’t be achieved overnight, but with proven processes in place, it can still be achieved relatively quickly.
There are many parameters that define software quality, including usability, feature richness, creativity, security, maintainability when it comes to patches and upgrades, and, finally, time. Security, maintainability, and usability are extremely important, especially when it comes to migration to the cloud.
Configuration management practices can help quantify these software quality parameters and even reduce the time it takes to achieve them. If stable feature changes reach customers’ hands quickly, the software becomes feature-rich. Stability can be achieved by favoring unit tests that run close to the code, which catch issues more quickly than traditional integration tests.
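As an illustration, a unit test that runs close to the code exercises one small function directly, with no servers or deployed environment to provision, so it can run on every commit and fail within milliseconds of a regression. The function and tests below are a hypothetical sketch, not taken from any particular codebase:

```python
# Hypothetical example of a unit test that runs "close to the code":
# it exercises one pure function with no servers, databases, or
# deployed environment, so it can run on every commit.

def parse_version(tag):
    """Parse a release tag like 'v1.2.3' into a comparable tuple of ints."""
    return tuple(int(part) for part in tag.lstrip("v").split("."))

def test_basic_tag():
    assert parse_version("v1.2.3") == (1, 2, 3)

def test_ordering():
    # Tuples compare element-wise, so version ordering falls out naturally.
    assert parse_version("v1.9.0") < parse_version("v1.10.0")

test_basic_tag()
test_ordering()
```

Because tests like these touch no infrastructure, they give feedback in seconds, long before a traditional integration suite would run.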
My advice is to measure code coverage to make sure automated tests thoroughly exercise the software, and run security scans to make your best effort at controlling security issues. Keep distribution in check and make sure your system is structured in a way that avoids large input/output operations. Comprehensive testing is a fundamental requirement of configuration management best practices and must be an inherent part of everything you do.
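In a real pipeline you would wire in a dedicated coverage tool (coverage.py, JaCoCo, gcov, and so on). Purely as a sketch of what such tools measure, Python’s standard-library trace module can count which lines a test actually executes; the function under test here is hypothetical:

```python
import trace

def discount(price, is_member):
    """Hypothetical function under test: members get 10% off."""
    if is_member:
        return round(price * 0.9, 2)
    return price

# Count executed lines while running a single "test" input.
tracer = trace.Trace(count=True, trace=False)
result = tracer.runfunc(discount, 100.0, True)

# Each key in counts is a (filename, line_number) pair that executed.
executed_lines = sorted(line for _, line in tracer.results().counts)
print("result:", result)
print("lines executed:", executed_lines)
# The non-member branch never ran -- a coverage report would flag it,
# telling us this test input alone is incomplete.
```

The point is not the tooling but the signal: untested branches show up as unexecuted lines, telling you exactly where your automated tests fall short.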
Planning for Worst-Case Scenarios
It is extremely important to envision how the delivery pipeline will behave, not only under expected conditions, but also in response to unexpected events. How would the delivery pipeline respond if you need to scale up or down by ten times overnight, or if the underlying infrastructure version is no longer supported or out of warranty, or if an unexpected event like hardware failure takes place?
You should always be prepared for worst-case scenarios by planning process improvements ahead of time. If the delivery pipeline is infrastructure-independent, then even if fifty virtual machines go down, as long as you have configured your jobs so that all relevant dependencies are installed automatically, you’ll be able to get the pipeline up and running in no time on another platform. An adaptable source code repository and a powerful binary repository make for an ultra-flexible pipeline.
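Part of that preparation can live in the pipeline itself. As a minimal sketch (all names and defaults here are hypothetical, not from any particular CI tool), a job runner can retry a step with exponential backoff so that a transient infrastructure failure does not kill the whole pipeline:

```python
import time

def run_with_retries(step, attempts=3, base_delay=1.0):
    """Run a pipeline step, retrying with exponential backoff on failure.

    `step` is any zero-argument callable; `attempts` and `base_delay`
    are illustrative defaults, not values from any particular CI tool.
    """
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except Exception as exc:
            if attempt == attempts:
                raise  # out of retries: surface the failure to the pipeline
            delay = base_delay * 2 ** (attempt - 1)
            print(f"attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)

# Example: a step that fails twice (say, a flaky VM) and then succeeds.
calls = {"n": 0}

def flaky_step():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("node unavailable")
    return "built"

print(run_with_retries(flaky_step, attempts=3, base_delay=0.01))
```

Retries only paper over transient faults, of course; the larger safeguard remains keeping the pipeline portable enough to rebuild on fresh infrastructure.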
Make sure you also consider how Moore’s law applies to your pipeline. Moore’s law states that the number of transistors in a dense integrated circuit doubles about every two years; the lesson for us is that we must learn to scale and adapt over time. The number of services or modules will keep increasing, yet build times should fall: if it takes thirty minutes to build something today, in six months it should take only twenty minutes, depending on how frequently features are released. This reduction in time is possible by eliminating inefficiencies, cutting down input/output time, and moving to more effective technologies and approaches.
Apart from planning for worst-case scenarios, Moore’s law also helps us understand how CM-related quality improvement activities need to be aligned with the release cycle to manage change better. Flexible sprints define closure points nicely: whether the release cycle is two weeks or six months long, sprints should become shorter as the release date approaches. I often start with a three-week sprint, move to a two-week sprint, and, finally, a one-week sprint as we reach the deadline for releasing the product. It is important to control change without slowing the pace of those changes.
Software professionals should focus on continuously improving their configuration management processes. But it’s equally important to observe the entire CM process so you can envision and plan for worst-case scenarios, think about how you can scale with time—and keep on improving.