Modeling Uncertainty

Summary:

Some may think that predicting a project's future is impossible and a waste of time. Payson Hall did, but when he was forced to think otherwise, he learned there are ways to manage uncertainty. While these methods may not generate perfect predictions, they help conquer the unknown. In this week's column, Payson explains a process for gaining insights into a project's future and getting buy-in on your predictions.

My adventure with uncertainty started with the question "How much hardware will be needed to support the new application once it is in production?"

It was 1988. I was employed by a large systems integration firm, and my job was to design an advanced image processing system to scan, compress, move, decompress, OCR, index, and store forty million document images. As the lead architect, I was supposed to be the expert, but this was bleeding-edge stuff. We were boldly going where no one had gone before.

When the technical review manager asked me to predict the final hardware configuration early in the design phase, I thought it was a stupid question that was keeping me from more important work. Until we could build the system, we wouldn't have enough information to answer such a question. I tried to explain this as diplomatically as I could, concluding with what I thought was a helpful suggestion: "Let me build it, and then I'll tell you."

The technical review manager, flown in from company headquarters to make sure the project was likely to be both feasible and profitable, listened patiently to my response and explained, "We need an educated guess about the configuration now so we can make some projections, and you are the best person to make that guess."

I was flattered, impatient, and unimpressed. "I can't know until we have built the system--then we will add hardware until it performs well enough for production," I said.

He went on to explain that our company needed assurance that the system would run on some reasonable configuration before it invested much more in the project.

I still didn't get it. "We will need as much hardware as we need," I said, trying not to sound belligerent.

"Think of it this way," he countered, becoming frustrated, "If you need $20 million worth of hardware, we can cancel your project now. We must develop a crude model of system behavior and assumptions about its use today, based on what we know now and on what our experience tells us are reasonable assumptions." He explained that if the initial model of the system suggested it would run on a reasonable hardware configuration, the project would continue and we could refine the model as more information became available.

Over the next few days, he helped me build a spreadsheet model of the system's components based on assumptions about the memory and processing requirements for each part. We documented assumptions about the response times needed for various tasks and established a "budget" for processor cycles, memory, and network bandwidth for different activities. We identified the tasks that required real-time processing and those that could be done asynchronously during off hours. We imagined together what the application usage patterns might be and documented assumptions about the transaction mix and throughput requirements. When we finished, we had a guess about the necessary configuration.
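
To make that concrete, here is a minimal sketch of that kind of capacity model, written in Python rather than a spreadsheet. Every stage cost, arrival rate, and utilization target below is a hypothetical placeholder chosen for illustration, not a figure from the actual project:

    # A crude capacity model: per-document resource assumptions for each
    # pipeline stage. All numbers are hypothetical placeholders.
    STAGE_COSTS = {
        # stage:      (cpu_seconds, resident_mb, network_mb) per document
        "scan":       (0.5,  4, 1.0),
        "compress":   (0.8,  8, 0.0),
        "move":       (0.1,  2, 0.4),
        "decompress": (0.6,  8, 0.0),
        "ocr":        (2.0, 16, 0.0),
        "index":      (0.3,  4, 0.1),
    }

    REALTIME = {"scan", "compress", "move"}  # must keep pace with the scanners
    DOCS_PER_PEAK_HOUR = 1_000               # assumed peak arrival rate
    CONCURRENT_DOCS = 20                     # assumed documents in flight at once
    CPU_UTILIZATION_TARGET = 0.7             # leave headroom on each processor

    def required_configuration():
        """Turn the documented assumptions into a crude hardware budget."""
        # CPU: real-time stages must keep up with arrivals, so convert demanded
        # CPU-seconds per wall-clock second into processors, padded for headroom.
        realtime_cpu = sum(cpu for stage, (cpu, _, _) in STAGE_COSTS.items()
                           if stage in REALTIME)
        cpus = realtime_cpu * DOCS_PER_PEAK_HOUR / 3600 / CPU_UTILIZATION_TARGET

        # Memory: worst case, every in-flight document holds every stage's buffers.
        memory_mb = CONCURRENT_DOCS * sum(mem for _, mem, _ in STAGE_COSTS.values())

        # Network: megabits per second moved at the assumed peak rate.
        net_mbps = (sum(net for _, _, net in STAGE_COSTS.values())
                    * DOCS_PER_PEAK_HOUR * 8 / 3600)
        return cpus, memory_mb, net_mbps

    cpus, memory_mb, net_mbps = required_configuration()
    print(f"~{cpus:.1f} processors, ~{memory_mb} MB memory, ~{net_mbps:.1f} Mbps at peak")

Playing "what if" with the assumed arrival rate or per-stage costs is exactly the kind of exercise the spreadsheet supported.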

We gave headquarters the results of our preliminary analysis to provide crude assurance that the system would run on a reasonable configuration. We agreed to revisit the configuration prediction after a few months, when real performance and resource-consumption data for some of the components would let us validate and refine our assumptions and the model.

Those first estimates were wrong, but not ridiculously off. In the end, we had underestimated processor capacity by a factor of four and overestimated memory by a factor of two. That ballpark estimate allowed our management to make an informed decision to continue the project. Our work on the model also gave us a starting point for making increasingly informed projections of actual requirements and established targets for subsystem performance. When actual performance information became available for the subsystems, we compared it to our assumptions and budget to identify problems and opportunities. As we substituted better and better data for the assumptions in our model, we were able to refine the target hardware configuration and reduce the uncertainty.
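
In model terms, the refinement loop looked something like this: when a subsystem was measured, the assumption was replaced with data and the projection rerun. Continuing the hypothetical sketch above (the measured value here is invented, chosen to echo our factor-of-four processor surprise):

    # Hypothetical measurement from a compression prototype: four times the
    # assumed CPU cost. Swap the assumption for data and rerun the model.
    measured_compress_cpu = 3.2                      # observed cpu-seconds/doc
    assumed_cpu, mem_mb, net_mb = STAGE_COSTS["compress"]
    print(f"compress: budgeted {assumed_cpu}s/doc, measured {measured_compress_cpu}s/doc")
    STAGE_COSTS["compress"] = (measured_compress_cpu, mem_mb, net_mb)

    cpus, memory_mb, net_mbps = required_configuration()
    print(f"refined: ~{cpus:.1f} processors, ~{memory_mb} MB, ~{net_mbps:.1f} Mbps")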

My evolution from prima donna (refusing to speculate about the future without all the facts) to senior system architect (able to support technical business decisions based upon incomplete and imperfect information) required that I learn to deal with uncertainty. Perfect prediction was impossible in this instance, but it was reasonable to ask for an educated guess. For this review, it was necessary to understand what level of confidence was "good enough" for the time being. If our preliminary prediction had suggested that we would require vastly more hardware than the project justified, it would have been critical to identify that problem early so we could cancel the project, change the approach, or examine the most challenging components more closely.

Working with uncertainty means accepting that there are no facts about the future. Everything we think we know about the future is an assumption, a placeholder for information to come. For assumptions to be useful, they must be reasonable and credible--informed by review and refinement from other subject matter experts. For those criteria to be met, the rationale and basis for assumptions must be documented. When more information becomes available--when "facts" start coming in--we can test and revise our assumptions and improve and update our models.

If you must make educated guesses about the future, don't let the uncertainty paralyze you. Instead, do the following:

  • Develop a model to represent significant parts of the problem.
  • Make and document a set of assumptions about relevant factors in the model.
  • Review your model and assumptions with others to get their perspective on the problem, your model, and your assumptions. Be open to suggestions about revisions and refinements.
  • Make it clear to the "powers that be" that your model and projections are based upon assumptions. Set expectations about when you anticipate additional information will be available for revising and refining your projections.
  • Use your model--do the math and show your work (one way to record this is sketched after the list).
  • Revisit the model periodically to look for improvements and refinements in the model and your assumptions.
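
As one hypothetical way to "show your work," documented assumptions can be kept as structured records, so the value, rationale, and revisit date travel with every projection. The entries below are invented examples, not figures from the project:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Assumption:
        """One documented assumption: a placeholder for information to come."""
        name: str
        value: float
        units: str
        rationale: str   # why we believe it today
        revisit: date    # when better information is expected

    ASSUMPTIONS = [
        Assumption("docs_per_peak_hour", 1_000, "docs/hour",
                   "vendor-quoted scanner throughput", date(1989, 3, 1)),
        Assumption("ocr_cpu_per_doc", 2.0, "cpu-seconds",
                   "extrapolated from a published benchmark", date(1989, 6, 1)),
    ]

    def show_your_work():
        """Print each projection input alongside its basis and review date."""
        for a in ASSUMPTIONS:
            print(f"{a.name} = {a.value} {a.units} "
                  f"(basis: {a.rationale}; revisit by {a.revisit:%Y-%m-%d})")

    show_your_work()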

When you present your findings, you might want to raise some of the following questions:

  • "This is how we imagine the future will be, based upon these assumptions. If the assumptions are wrong, our answer will be wrong. Do the assumptions seem reasonable to you?"
  • "We will have better information and be able to refine our assumptions and model on (a predicted date). Is that soon enough for you?"
  • "If you are uncomfortable with our model or assumptions, is there someone you trust who you would like to review it?"

The process of documenting your assumptions about the future and how they factor into your predictions, then reviewing them with the people who want the predictions, will not enable you to perfectly predict the future but does help make your projections credible. Reviewing the thought process behind your prediction communicates the current level of confidence and sets expectations about when better predictions will be available. Getting buy-in on the process helps everyone better manage the uncertainty of the future so that they can make more informed decisions. Isn't that what project management is all about?
