Test Documenting Over the Cliff

Summary:
Unless you're in a test role where complete documentation is necessary for federal compliance or to keep people from dying or losing a limb, attempting to document every little thing is a fool's errand. Software changes. A lot. With constant change, what we document one day may be obsolete the next.

Remember the Cliff Hanger game on The Price Is Right? I remember it because one summer during middle school I watched The Price Is Right every day before volleyball camp. At least once every few shows, the little yodeling mountain climber (called "Hans," "Fritz," or "Yodely Guy" by various hosts) would dutifully slide up the twenty-five-step mountain toward a steep cliff. If the contestant guessed the prices of three small items and was off by more than $25 total, Hans would climb to the edge and fall to his doom, accompanied by a Looney Tunes-ish crashing sound much bigger than any six-inch cardboard yodeler would make should he actually fall off a cliff.

When we testers yodel up Documentation Mountain, we might tumble over a similar cliff, taking out all the other testers harnessed in with us. The cliff sits right at the point where we've documented so many manual regression tests that the suite is no longer manageable, so testers don't run the tests at all. Whatever value we're seeking to generate with all those artifacts doesn't just taper off at that point; it nosedives into oblivion.

It's not uncommon for us testers to want to document everything we test. In fact, every day I struggle with the temptation to over-document. Documentation quiets that nagging voice in my mind that whispers "What if?" What if a bug gets missed in regression because I didn't document my test case fully? What if management wants proof that I'm an asset? What if I'm struck with the plague and they have to hire someone new?

Documentation is comforting. Even when our documents become bloated, dysfunctional, and impossible to digest, we still cling to the false security they provide.

Unless you're in a test role where complete documentation is necessary for federal compliance or to keep people from dying or losing a limb, attempting to document every little thing is a fool's errand. Software changes. A lot. With constant change, what we document one day may be obsolete the next. We'd have to change our job titles to novelist or scribe if we truly wanted to keep up, but then who would do the testing?

Here's an idea that might help determine what's worth documenting and what isn't: every documented manual regression test should be run in each major round of regression testing. Many teams have hundreds of regression tests that took hundreds of hours to write, yet the tests sit around collecting dust like cheap trinkets at a yard sale. What value does a so-called regression test have if we don't need to run it in every major regression run?

Practicing the discipline of running every test helps teams avoid the messy complications of over-documentation because it forces them to answer questions and make decisions, two tasks that excessive documentation tends to inhibit. To keep the regression test suite light enough that it can feasibly be run when required, testers must learn their product well enough to answer, "What are the most important things to test in order to supply information about the product to those who make decisions about it?" Answering that question means thinking about the product's innards, how the components integrate with each other, the customer's needs, how the product will be used in the wild, how it is most likely to fail, and so on. It requires testers to be smart rather than merely able to follow dumbed-down directions for checking that the product "works" exactly as someone documented it to work in the past.

To that end, we can also keep testers from falling off the cliff by not over-specifying the test cases we do choose to include in every manual regression run. Since testers are more than animated poking sticks, we can assume they know something about the product area they're testing, can figure out minor details and intent, and can form and ask questions as needed. Indicating the intent of a test rather than specifying each and every step makes the tests more maintainable and, again, engages the tester's mind: creativity, experience, and intelligence.
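To make the contrast concrete, here is the same hypothetical regression test written both ways. The product, the steps, and the discount code are invented for illustration.

Over-specified version:

    1. Open the browser and go to the store's login page.
    2. Enter "testuser" in the username field.
    3. Enter the password and click Log In.
    4. Click Products, then click Add to Cart on any item.
    5. Click the cart icon, then click Checkout.
    6. Type "SAVE10" in the discount field and click Apply.
    7. Verify the total decreases by 10 percent.

Intent-based version:

    Verify that valid discount codes reduce the order total at checkout and that invalid or expired codes produce a clear error. Try at least one percentage code and one fixed-amount code.

The intent-based version survives cosmetic UI changes that would break the scripted one, and it leaves room for the tester to notice things no checklist would ever mention.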

If you have bloated regression test suites and you'd like to get more of your test team's thinking into the work, start by paring down the details in your existing regression test cases. Ask what value a test delivers if it only confirms well-known details. Judge whether steps that were copied and pasted across multiple test cases really need to live in that many artifacts.

The next time you do a manual regression run, count how many tests were not run. Are those tests really worth keeping? Will they ever deliver any real value? And what will you do with your answers: delete tests, or spend more time regression testing next time?
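If your test management tool can export run results, even a few lines of scripting can put hard numbers behind those questions. Here is a minimal sketch in Python, assuming a CSV export with a status column per test; the file name and column names are assumptions, so adjust them to whatever your tool actually produces.

    import csv
    from collections import Counter

    # Tally execution statuses from a hypothetical test-run export.
    # The file name and column names are assumptions; adapt them to
    # your test management tool's real export format.
    with open("regression_run.csv", newline="") as f:
        statuses = Counter(row["status"].strip().lower()
                           for row in csv.DictReader(f))

    total = sum(statuses.values())
    not_run = statuses.get("not run", 0) + statuses.get("skipped", 0)
    print(f"{not_run} of {total} documented tests were never executed this run.")

A concrete figure like "412 of 600 tests were never executed" makes the conversation about deleting dead weight much easier to start.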
