Stop the Bad MBOs

Fight the Good Fight Against Abusive "Management by Objectives"
Summary:

Some managers use "management by objectives" effectively; too often, however, it is used destructively and undermines the team. In this article, Rex sounds the clarion call to stop the bad MBOs and offers three case studies of what not to do.

Introduction
Do you want to introduce a new objective to your organization that will help everyone who works for you? How about this: Stop destroying your team with bad MBOs.

MBOs, or management by objectives, are commonly used as part of yearly performance reviews. In theory, the perfect set of objectives defines, in a quantified way, exactly what the employee is to achieve over the coming year. At the end of the year, in the annual performance review, the manager simply measures whether, or to what extent, the objectives were achieved.

However, some management-by-objectives approaches backfire, often in dramatic ways. Let's look at three case studies of bad MBOs, how they negatively affected the team, and how the managers could have done better.

Case Study One: The Whipping Boys and Girls
One company set two yearly performance objectives for the test team. The first, which was common across all teams, was the on-time delivery of the system under development. The second, which was specific to the test manager, was the delivery of a quality system with few bugs.

The test team did a great job of finding lots of bugs during development; they found thousands, in fact. They found so many bugs that the system was not deployed until months after the original ship date. They found so many bugs that the developers could not fix them all; most were deferred under the pretense that they were not "customer facing" bugs. Most of those deferred bugs were later reported by customers to the customer service team.

When the yearly performance review came around for the test team, they were told they had met neither of their objectives. After all, the system shipped late, and it shipped with lots of bugs that resulted in lots of customer service calls. The dispirited test manager immediately began looking for another job and resigned within a few months.

Clearly, the "ship on time" objective was inappropriate. When we testers do our jobs well, we almost always find problems, and those often result in delays. The "quality system" objective was likewise inappropriate. Testers don't have bags of magic pixie dust that can inject quality into systems.

The choice of objectives was really unfortunate because the test team had contributed a lot:

  • The transfer of knowledge and testing best practices to vendors providing key subsystems
  • The creation of an automated regression test tool that included pioneering integration between custom and commercial-off-the-shelf test tools
  • The large percentage of bugs detected by the test team prior to deployment (even though those bugs were deferred by management fiat)

The situation would have turned out very differently if the management team had created measurable objectives based on those contributions.

Case Study Two: While You're at It, Please Walk Across the Atlantic Ocean
I recently heard of an organization whose testers were evaluated on the same two objectives every year:

  1. How many bugs were detected by customers after release in a subsystem tested by the tester? The tester's performance evaluation rating goes down as this number goes up.
  2. How many bug reports did the tester file that developers returned as "works as designed," "irreproducible," and so on? The tester's performance evaluation rating goes (yep, you guessed it) down as this number goes up.

These two metrics are at least partly under the tester's control. But notice that only a perfect tester (or a tester testing a completely trivial subsystem) could ever hope to get a good performance rating under these objectives. Furthermore, note that any attempt to drive one of the metrics toward zero will tend to drive the other metric up. Also notice the power of the developer, who, in gray areas of design, may or may not return a report as "works as designed." If a tester finds a problem in the design or a conflict in requirements, they had better not say anything, because a "works as designed" report will hurt their evaluation! Finally, while the testers did more than simply run tests, they weren't measured on any of those other activities.

The testers subjected to these objectives were totally up in arms. The few who complained to the manager about them were hit on the forehead with the big red "Not a Team Player" stamp that all managers have in their desks. Despite the absurdity, most of them decided to do the right thing, as best they could tell what the right thing was: ignore the performance evaluation scheme imposed on them and just do their work.

The manager could have at least made the two objectives realistic. There is a natural level of error and variation that is inherent in any process. A realistic and attainable pair of target metrics for these objectives would have alleviated some of the most caustic effects.
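
To make that concrete, here is a minimal sketch (in Python, my choice) of deriving attainable targets from a team's own history rather than from an ideal of zero. The data, the function name, and the one-standard-deviation tolerance are all assumptions for illustration; the case study prescribes no particular formula.

    from statistics import mean, stdev

    def realistic_target(history: list[float]) -> float:
        """Derive a target from historical performance, leaving one
        standard deviation above the mean as headroom for the natural
        variation inherent in any process (an assumed tolerance)."""
        return mean(history) + stdev(history)

    # Hypothetical per-release history for one tester's subsystem.
    escaped_bugs = [4, 7, 5, 6, 9]                  # bugs found by customers
    rejected_rate = [0.08, 0.12, 0.10, 0.09, 0.11]  # "works as designed" share

    print(f"Escaped-bug target: at most {realistic_target(escaped_bugs):.1f} per release")
    print(f"Rejected-report target: at most {realistic_target(rejected_rate):.1%} of reports")

Targets set this way acknowledge that some escapes and some rejected reports are normal parts of the process, so a competent tester can actually meet them.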

Case Study Three: Stepping on Your Own Landmine
In both of the case studies above, the managers imposing these objectives probably had their hearts in the right places. They had been told, and led to believe, that quantitative performance evaluation schemes were eminently fair and would align employee behaviors with company needs. However, in some cases, we encounter managers who are actually using MBOs for nefarious purposes. And sometimes they get what they deserve for doing so.

Not too long ago, I heard of a perfect example of cynical manipulation of the MBO process that backfired in spectacular fashion. In that case, a secret email memo from the CEO directed that, due to a cash crunch brought on by excessive mergers and acquisitions, no manager could give more than one member of their staff a raise. Furthermore, the memo stated, each manager was required to manipulate the (already established) way of scoring people's MBOs so that everyone but one person on their staff rated "satisfactory" or below, thereby giving the lone person rated "above expectations" a whopping two percent raise.

Shortly after that email went out, someone (whether a rebellious manager or an individual contributor who had intercepted the message) forwarded it anonymously to everyone in the company. A huge hue and cry resulted, of course. Many talented people with other options left. Someone anonymously posted an email that read, "Pay freeze = code freeze?" Company morale was completely extinguished. The company ultimately failed.

The executives had certainly put themselves between a rock and a hard place by blowing all their cash, but surely there were better options than such a devious scheme. Perhaps the CEO could have called a company meeting, admitted the situation, accepted responsibility for it, and informed people that raises would be deferred until the cash situation improved. People wouldn't have been happy, and they might have grumbled about the stupidity of upper management, but at least they couldn't have accused management of stabbing them in the back.

Conclusion
As the saying goes, when you find yourself in a hole, the first thing to do is stop digging. So, if you're digging a punji trap for your employees with MBOs, put down the shovel and step slowly away from the sharpened bamboo.

"But I can't stop," you might well protest. "The big corporate kahunas have all decreed that managers shall use MBOs as part of the yearly performance review process." Don't worry: our resolution is not to stop using MBOs; it's to stop using bad MBOs.

Having written objectives that worked for testers and test teams before, I've discovered a few helpful practices:

  • Identify how the test team provides valuable services to the
    organization. What role does each employee play in providing those
    services?
  • Use the SMART mnemonic to remind you to create objectives that are (a worked sketch follows this list):

    Specific in terms of the quantification to be measured;
    Measurable, including the way in which the measurement will happen;
    Attainable by the employee;
    Realistic with respect to the team, organization, and current project context;
    Time-boxed in the sense that those objectives should happen in a set period of time.

  • Consider what incentives you're giving employees in each objective. Will the objective point the employee in the direction of serving the customer better?
  • Ask yourself what signals you're giving with each objective. Are you directing employees toward what's most important?
  • Specify the objective, but leave the means of achieving it to the individual. You don't want to be a micromanager, do you?
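
To show what a SMART objective for a tester might look like in practice, here is a minimal sketch built around defect detection percentage, a commonly used test-team metric. The metric choice, the target, and the dates are hypothetical illustrations, not prescriptions from this article.

    from datetime import date

    def defect_detection_percentage(found_by_test: int, found_by_customers: int) -> float:
        """DDP: the share of all known bugs the test team caught before
        release. Measurable, and largely under the team's influence."""
        total = found_by_test + found_by_customers
        return found_by_test / total if total else 0.0

    # A SMART-style objective spelled out as data (all values hypothetical):
    # Specific/Measurable: DDP, computed as above; Attainable/Realistic: a
    # target informed by last year's result; Time-boxed: one calendar year.
    objective = {
        "metric": "defect detection percentage",
        "target": 0.85,
        "window": (date(2024, 1, 1), date(2024, 12, 31)),
    }

    ddp = defect_detection_percentage(found_by_test=412, found_by_customers=53)
    print(f"DDP this window: {ddp:.1%} (target {objective['target']:.0%})")

Note that the objective records what to achieve and how it will be measured while leaving the means to the tester, in keeping with the last bullet above.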

In spite of the problems that can arise with management by objectives, it's true that many managers use this technique successfully. Those managers have learned to apply the tips, and to avoid falling into the traps, discussed above. So, let's start a new trend together: No more bad MBOs!
