Improving Application Quality by Controlling Application Infrastructure

Summary:

Today, applications are undeniably the proxies for key business processes.  So improving application quality translates directly into improved business performance.  It's no wonder, then, that improving application quality and uptime is a top-of-mind issue for IT managers and executives. 

But with today's web-based composite applications, improving application quality is easier said than done.  Gartner's Research VP Ronni Colville calls them "wiggly apps" because of their large number of moving pieces and the high rate of change associated with them.  As a result, while the measuring stick for quality is "five nines" availability, "five eights" is closer to reality.

Why does this occur and what can you do about it?

Let's start with why this occurs.  According to leading research analysts at Gartner, Forrester, and Enterprise Management Associates, application configuration errors are the leading cause of quality and downtime problems, accounting for between 40% and 60% of occurrences.  mValent's own research with customers and prospects aligns with these findings, showing that 73% of companies experience configuration-related downtime.

When you dig into the detail, the drivers behind these problems become clear.  Here is a quick summary.

1. Application volume.  The sheer volume of applications and environments that IT must manage is staggering.

  • The average Global 2000 firm has more than 1,000 applications that its IT organization deploys and supports. (You can hear Gartner's comments on this subject in "Conquering Complexity with Configuration Management.")
  • IT supports these 1,000+ applications in multiple environments; naturally there is production and disaster recovery, as well as all phases of pre-production, such as QA, staging, and performance testing.
  • There are usually multiple instances of the applications running in some of these environments.

2. Inherent complexity of composite applications.

These "wiggly apps," as Ms. Colville calls them, provide both tremendous flexibility and complexity for IT to manage. The infrastructure is composed of multiple layers, including application servers, Web servers, databases, middleware and the OS. 

But while the industry buzz is all about "best of breed" and standards, there is a dirty little secret in the software industry: 

  • Thousands of individual configuration properties, or configuration items, in the application infrastructure need to be set, tuned, or controlled. This number grows into the hundreds of thousands when you consider all environments (see the rough arithmetic sketched after this list).
  • Typically, multiple consoles are used to manage the configuration properties for different layers of the application infrastructure.
  • Often the configuration properties are stored in text files where uncontrolled changes can be easily introduced.
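
To get a feel for how these numbers compound, here is a rough back-of-the-envelope calculation in Python. The per-layer counts are illustrative assumptions, not figures from the research cited above:

    # Hypothetical counts of configuration items per infrastructure layer
    # for a single application instance (illustrative, not measured).
    items_per_layer = {
        "application server": 400,   # JVM settings, thread pools, connection pools, ...
        "web server": 150,           # virtual hosts, timeouts, modules, ...
        "database": 300,             # memory, storage, and access parameters, ...
        "middleware": 250,           # queues, topics, listeners, ...
        "operating system": 200,     # kernel, network, file-system settings, ...
    }

    items_per_instance = sum(items_per_layer.values())   # roughly 1,300 items
    environments = 6        # development, QA, staging, performance, production, DR
    instances_per_env = 3   # clustered or load-balanced copies

    total = items_per_instance * environments * instances_per_env
    print(f"Configuration items for one application across all environments: {total:,}")
    # With even a modest portfolio of applications, the count quickly reaches
    # the hundreds of thousands.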

3. Business demands and workflows.

IT must be responsive to the business while operating in a fluid, high-change environment. Frequently, there is a conflict between business demands and a coherent, sensible IT process. Often, the QA schedule is where the squeeze occurs.  Some of the dynamics include:

  • High change volumes. For example, research shows that larger enterprises introduce more than 400 changes per month into their application infrastructure.
  • Rapid development cycles, followed by applications being "thrown over the wall" for IT to test, deploy, and manage.
  • Management by silo in IT, where the multiple layers of the application infrastructure are managed independently by different groups. Often there is scant understanding of how changes in one layer impact performance or stability in another.
  • Large numbers of people involved in these processes; many enterprises devote a dozen or two people to managing the application infrastructure.

When one examines these three sets of factors, it is easy to understand how application quality suffers. Undeniably, high volume, complexity and high rates of change put significant stress on IT and application availability. Or as we say at mValent, when you combine lots of moving parts with lots of changes involving lots of people, you end up with lots of problems.

How can you make this better?

Everything we know about applications tells us that their volume will continue to grow, and so will the rate of change within these environments. So rather than hope for a slowdown, IT teams need tools and approaches that help them improve quality in the face of explosive growth. 

In the area of the application infrastructure, there are a few prescriptions for enterprise IT teams to consider:

  • Employ a "Design for Production" approach based on configuration standards
  • Ensure application stability by enabling rollback of any infrastructure change
  • Automate the provisioning of infrastructure configuration properties

Each of these will contribute to improving the quality of those 1,000+ enterprise applications.

First, the Design for Production approach, as it applies to application infrastructure, embeds quality before your developers write their first line of code.  By agreeing on configuration standards for the application infrastructure, and ensuring that these standards are used in all environments, enterprises experience substantial improvements in application quality. 

Often QA processes come to a halt because of stability or performance problems, yet the developers respond, "It works on our systems ..." When IT promulgates and enforces configuration standards, it eliminates one critical variable in the various stages of the application life-cycle.  The applications achieve higher quality because the same foundation is used from Development to QA to Staging through to Production and Disaster Recovery. 

A practical mechanism for enforcing this consistency across the application life-cycle is to collect and store all of your configuration properties in a centralized repository. Then develop templates that reflect the desired configuration for each element of the application infrastructure. When these two steps are combined with a systematic method for distributing these templates or profiles, you can ensure a consistent foundation for your applications, which inherently produces higher-quality applications.
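
As a concrete illustration, the sketch below shows what "repository plus templates plus distribution" can look like for simple key=value properties files. It is a minimal sketch in Python; the names (TEMPLATE, ENV_OVERRIDES, render_properties, distribute) and the property values are illustrative assumptions, not the API of any particular product:

    from pathlib import Path

    # The agreed configuration standard for one infrastructure element,
    # stored once in the central repository (values are illustrative).
    TEMPLATE = {
        "jvm.heap.max": "2048m",
        "threadpool.size": "50",
        "db.pool.max_connections": "100",
        "session.timeout.minutes": "30",
    }

    # The few values that legitimately vary by environment.
    ENV_OVERRIDES = {
        "qa":         {"db.pool.max_connections": "20"},
        "staging":    {},
        "production": {"jvm.heap.max": "4096m"},
    }

    def render_properties(environment: str) -> str:
        """Merge the standard template with the environment's overrides."""
        merged = {**TEMPLATE, **ENV_OVERRIDES.get(environment, {})}
        return "\n".join(f"{key}={value}" for key, value in sorted(merged.items()))

    def distribute(environment: str, target_dir: Path) -> None:
        """Write the rendered configuration where the target system expects it."""
        target_dir.mkdir(parents=True, exist_ok=True)
        (target_dir / "appserver.properties").write_text(render_properties(environment) + "\n")

    for env in ENV_OVERRIDES:
        distribute(env, Path("rendered") / env)

Because every environment is rendered from the same template, development, QA, staging, and production differ only where the overrides say they should.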

Second, IT teams can improve quality through better control of the change process for application infrastructure.  Change causes instability in IT environments, yet it is unrealistic to eliminate it.  Instead, you need to plan sensibly for change and allow for rapid response when changes produce outages.

The single most important tool for controlling the undesired impact of change in the application infrastructure is the ability to "snapshot" the configuration properties. By recording snapshots of the configuration items in the infrastructure, IT has the basis for backing out changes that harm application quality or availability. There is no quicker remedy for an application outage than to roll back to a previous version that worked. With the application back online, IT can then orchestrate an orderly diagnosis to ascertain the source of the outage.
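
A minimal sketch of what snapshot-and-rollback can look like for text-based configuration files follows. The directory layout and function names are assumptions made for illustration, not a description of any specific tool:

    import shutil
    import time
    from pathlib import Path

    CONFIG_DIR = Path("/opt/app/conf")             # illustrative location of the live configuration
    SNAPSHOT_ROOT = Path("/var/config-snapshots")  # illustrative snapshot store

    def take_snapshot(label: str = "") -> Path:
        """Record the current configuration so it can be restored later."""
        stamp = time.strftime("%Y%m%d-%H%M%S") + (f"-{label}" if label else "")
        destination = SNAPSHOT_ROOT / stamp
        shutil.copytree(CONFIG_DIR, destination)
        return destination

    def roll_back(snapshot: Path) -> None:
        """Restore a previously recorded configuration after a bad change."""
        shutil.rmtree(CONFIG_DIR)
        shutil.copytree(snapshot, CONFIG_DIR)

    # Typical flow: snapshot before a change, roll back if the change causes an outage.
    # before = take_snapshot("pre-change")
    # ... apply the change and observe the application ...
    # roll_back(before)   # the quickest path back to a known-good state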

The result is higher quality: downtime is minimized, and IT can allocate more time to inspecting changes before they are deployed.

Third, IT teams need a way to test changes to the configuration properties of the application infrastructure and then release these "change packages" so that they are distributed correctly to all systems and environments. Even when the IT change process provides ample review and testing of incoming changes, without the tools to ensure that those changes are distributed everywhere, the change process is incomplete.

Many IT teams use manual methods to introduce changes to application infrastructure configurations. Invariably, when there are 20 to 30 systems that need to be updated, some of the updates are performed incorrectly, incompletely, or not at all. Quality once again takes a step back.

With an automated approach to releasing configuration changes, IT teams can ensure that changes are validated and deployed completely.
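
To make this concrete, here is a minimal sketch of releasing a validated change package to a set of target systems instead of editing each one by hand. The package contents, the approved-key list, and the file paths are illustrative assumptions:

    from pathlib import Path

    # The change package: the properties this release intends to modify.
    CHANGE_PACKAGE = {
        "db.pool.max_connections": "150",
        "session.timeout.minutes": "45",
    }

    # Properties that the configuration standard permits to change;
    # anything else in a package is rejected before deployment.
    APPROVED_KEYS = {"db.pool.max_connections", "session.timeout.minutes", "jvm.heap.max"}

    # Illustrative list of 20 target systems holding key=value properties files.
    TARGETS = [Path(f"/mnt/appserver-{n:02d}/conf/appserver.properties") for n in range(1, 21)]

    def validate(package: dict) -> None:
        """Reject packages that touch properties outside the agreed standard."""
        unknown = set(package) - APPROVED_KEYS
        if unknown:
            raise ValueError(f"Change package touches unapproved properties: {unknown}")

    def apply_package(target: Path, package: dict) -> bool:
        """Apply the package to one system; return False if the system was missed."""
        if not target.exists():
            return False
        current = dict(line.split("=", 1) for line in target.read_text().splitlines() if "=" in line)
        current.update(package)
        target.write_text("\n".join(f"{k}={v}" for k, v in sorted(current.items())) + "\n")
        return True

    validate(CHANGE_PACKAGE)
    results = {target: apply_package(target, CHANGE_PACKAGE) for target in TARGETS}
    missed = [str(target) for target, applied in results.items() if not applied]
    print(f"Updated {len(results) - len(missed)} of {len(results)} systems; missed: {missed}")

The point of the report at the end is that nothing fails silently: any system that did not receive the change is called out immediately rather than discovered later as an outage.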

Conclusion.  Configuration errors are at the heart of application quality issues and are the leading source of application downtime. With rising complexity and increasing volumes, IT teams can look to configuration management techniques to drive quality improvements. A forward-looking strategy of Design for Production, combined with tools for versioning and rollback and a methodology for managing configuration releases coherently, will yield improvements in application quality, and fewer calls to the help desk!


About the author

Jim Hickey, mValent Inc.'s Vice President of Marketing, has more than 20 years of software marketing and sales experience with emerging and growing companies. He is a former strategy consultant with global consulting giant Booz-Allen & Hamilton and has an MBA degree from the Stanford University Graduate School of Business and a BA degree in Economics from Harvard College.
