Why Change and Configuration Management Needs Analytics

Summary:

Analytics-driven management stands to resolve the key challenges that constrain change and configuration management. By applying powerful analytics to overwhelming volumes of change and configuration data, IT Operations Analytics (ITOA) technology can turn massive amounts of information into clear, actionable insights.

Information technology (IT) organizations are struggling. With constrained budgets, tighter efficiency requirements, and pressure to streamline costs, IT managers are challenged to launch new applications, maintain high levels of application availability, and deliver on strict Service Level Agreements (SLAs). IT teams that support a wide variety of applications running on distinct platforms face complex operations in managing today's enterprise data centers, fighting to overcome change and configuration management problems that affect performance and availability. A variety of IT management tools have been implemented to automate and control IT operations, but they were not designed to deal with the complexity and dynamics of the modern data center. They leave IT operations overwhelmed with raw data, making change and configuration problems a chronic pain point.

Complexity, Dynamics, and Silos
Linking, matching, cleansing, and transforming data across systems into useful information is a major undertaking for IT operations, and it must happen before IT management spirals out of control. Today's volume of data is the result of several recent developments.

Complexity
Complex IT environments are regarded as part and parcel of a company's business operations. This complexity stems from the variety of technologies at play, a high degree of technology customization, a mix of legacy and modern software, and dynamic infrastructure. In cloud scenarios, self-service provisioning has multiplied the number of activities occurring outside of static processes, exceeding the capabilities of traditional IT management. IT complexity translates into higher costs and reduced agility and flexibility.

Dynamics
The pace of change has accelerated dramatically in today's IT environments, spurred by dynamic business demands for flexibility and scalability. Driven by greater demands, the growth of server virtualization, and a shift toward more dynamic services such as the cloud, that pace is now generations ahead of the typical ITIL change control process. IT teams relying on traditional change management processes find it difficult to keep up with the frequency of change required to support their environments. As a result, they can end up sifting through what can amount to petabytes of data before they are able to take strategic action.

Silos
Traditional IT management tools view the world from a bottom-up perspective, managing individual components within specific silos. While working in silos promotes efficiency within each one, it undermines collaborative, seamless work across an IT organization. The challenge is managing changes while applications, servers, network devices, and databases are siloed in different divisions of the IT infrastructure, which creates a barrier to visibility. No matter how much you fine-tune these silos, it is not enough to combat the day-to-day availability and performance problems that crop up.

A Big Data Problem: Surge in Volumes of Performance and Event Data
Data is the lifeblood of the modern IT department. It not only provides valuable metrics on overall system health, but it can be a powerful enablement tool for initiatives like agile development and DevOps.

Yet with systems becoming increasingly heterogeneous, customizable, and distributed, monitoring technologies now capture much larger volumes of data per time period. Furthermore, the adoption of agile development methodologies and DevOps has dramatically increased the rate at which systems change (in many enterprises by an order of magnitude). These initiatives require an almost instantaneous understanding of how changes affect the overall system.

To continuously extract meaning from event and performance data, IT operations wade through Big Data: mountains of data generated by IT operations at every level of the business system stack. To make matters worse, the context of this data can change as systems are updated to meet business demands, putting these massive volumes of data into motion. Data and its context are growing and changing far faster than IT can cope with, a potentially destructive situation if not managed properly. As complex IT environments face growth in the volume, velocity, and variety of data sources, the real issue is what is ultimately done with the data, not the fact that IT operations collects so much of it. IT operations needs to bring together disparate data sources generated across the organization (applications, web servers, databases, middleware, operating systems, etc.) and extract valuable, actionable information from the mountains of data collected in enterprise environments.
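The cross-silo correlation described above can be sketched in a few lines. The following is a minimal, illustrative example, not a description of any particular ITOA product: all record fields, item names, and timestamps are invented. It pairs each performance symptom with any change made to the same item shortly beforehand, which is one simple way raw events from separate tools can be turned into an actionable lead.

```python
from datetime import datetime, timedelta

# Hypothetical records from two siloed tools: a change/config tracker and a monitor.
changes = [
    {"time": datetime(2024, 5, 1, 10, 0), "item": "db01",
     "detail": "max_connections lowered 200 -> 50"},
    {"time": datetime(2024, 5, 1, 14, 0), "item": "web03",
     "detail": "release v2.4 deployed"},
]
alerts = [
    {"time": datetime(2024, 5, 1, 10, 12), "item": "db01",
     "detail": "connection pool exhausted"},
    {"time": datetime(2024, 5, 2, 9, 0), "item": "web03",
     "detail": "latency back to normal"},
]

def correlate(changes, alerts, window=timedelta(minutes=30)):
    """Pair each alert with any change to the same item within `window` before it."""
    findings = []
    for alert in alerts:
        for change in changes:
            gap = alert["time"] - change["time"]
            if change["item"] == alert["item"] and timedelta(0) <= gap <= window:
                findings.append({"item": alert["item"],
                                 "change": change["detail"],
                                 "symptom": alert["detail"]})
    return findings

for finding in correlate(changes, alerts):
    # Only the db01 alert falls within 30 minutes of a change to the same item.
    print(finding)
```

Real ITOA platforms go far beyond this naive time-window join, applying statistical and machine-learning techniques across many more sources, but the principle is the same: insight comes from connecting data that the silos keep apart.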

CMCrossroads is a TechWell community.
