So Many Tests, So Little Time

Summary:

In this corner: a harried project manager whose testing time has just been cut in half. And in this corner: a time-honored management tool to scale back project scope and make testing tasks doable. Johanna Rothman shows us the ropes of timeboxing and explains why time constraints don't have to be a TKO.

I'm sure you've heard conversations like this:


Senior Manager: "Candace, I know you said you needed twelve weeks to test this release, but we're really in a jam. I need you to release sooner. What can you do for me in six weeks?"

As much as you might like to say, "Um, not much," that's not a career-enhancing answer. Instead, if you say something like, "Give me a week, and I'll tell you what we can do," you'll not only provide the organization with value, but you'll also be as proactive as you can be, given the situation.

Even if you're not in the position where your testing time is cut in half, you may feel as if you have too much testing to do for the project—and not enough time to do it all. When that happens, there are several techniques you can use to manage the testing tasks, but timeboxing is an approach that works well for projects with severe time constraints.

Timeboxing is a well-established project management tool that limits project scope within a fixed duration by forcing tradeoffs, and it's a valuable technique for testing. Say you have six weeks between the time you discover you're supposed to test a product and the time the organization wants to release the product, as Candace does above. Here's how to timebox the testing:

1. Review your original plan. You had some idea of what you planned to test and how. Make it clear to your management and other stakeholders that you will not accomplish everything in the original plan, and determine what you can complete.

2. Define your plan of attack in the first week. Decide how you'll discover which pieces of the product you will attempt to test and how you'll test them. You may choose exploratory testing to discover what to test, and you may need combinatorial test techniques, such as pairwise coverage, for how you'll test (see the first sketch after this list). At the end of this week, you will have a ranked list (1, 2, 3, 4, …) of what you'll test in the product and how you'll test it. Part of defining your attack plan is explaining the two major risks of timeboxing testing: a) you may find something critical during testing that the developers won't have time to fix, and b) you may miss something critical that customers will later find, creating a significant problem. Everyone must understand that the test team won't know everything about the product and that the organization could be releasing a product different from the one everyone anticipated.

3. During the next three weeks, develop tests and continue to refine the test plan. The key here is to develop tests and test specific workflows (or areas of the product) one at a time. You don't sit around waiting for something to test; instead, you test a workflow or piece of the product from beginning to end before starting a new workflow or product area. For example, if you are testing a banking system, you might test from opening a specific account type to verifying that the account is in the database and is active (see the second sketch after this list). You don't test just opening different kinds of accounts; you test one specific kind of account from beginning to end. If you are testing a biomedical device, you test that the device can accept a specific input, perform the computation, and generate the expected output, for just one specific input. Again, you don't test all inputs and all outputs; you test each end-to-end result serially. As you refine the test plan, you confirm scope as you proceed. Every time you realize there's something else you can't test, you list that piece in the not-to-test category and assign a risk to not testing it. As you complete testing, you update the test plan (or preferably, the test reporting) with your completed plans and tests.

4. Evaluate your progress at the end of every week of testing, and report your test data. If you can verify fixes as you test, plan to continue testing and verifying through the fifth week. If not, plan on completing the testing, as far as you can go, in week four.

5. You're now at week five. If you haven't been able to verify fixes yet, this is the time to do so. As you verify fixes, perform whatever regression tests you have created to make sure the fixes didn't break anything. If this takes you the full two weeks, you're done. If you have another week, you can attack more features, using the same end-to-end approach as before.

6. In week six, you verify the last of the fixes and report on your progress and what you know and don't know about the product.
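
Step 2 mentions combinatorial test techniques. As one illustration only, here is a minimal Python sketch of a greedy all-pairs (pairwise) reduction: rather than running every combination of parameter values, it selects a smaller set of cases that still covers every pair of values. The parameter names and values are hypothetical, and a real project would likely use an existing pairwise tool; the sketch just shows the idea.

from itertools import combinations, product


def pairwise_cases(parameters):
    """Greedy all-pairs reduction: repeatedly pick the candidate case that
    covers the most not-yet-covered value pairs until every pair is covered."""
    names = list(parameters)
    candidates = [dict(zip(names, values))
                  for values in product(*(parameters[n] for n in names))]
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in parameters[a]
                 for vb in parameters[b]}
    chosen = []
    while uncovered:
        best = max(candidates,
                   key=lambda case: sum(((a, case[a]), (b, case[b])) in uncovered
                                        for a, b in combinations(names, 2)))
        chosen.append(best)
        for a, b in combinations(names, 2):
            uncovered.discard(((a, best[a]), (b, best[b])))
    return chosen


# Hypothetical parameters for the banking example: 3 * 3 * 2 = 18 exhaustive
# combinations, but pairwise coverage gets by with roughly half of them.
if __name__ == "__main__":
    parameters = {
        "account_type": ["checking", "savings", "money_market"],
        "channel": ["web", "branch", "mobile"],
        "currency": ["USD", "EUR"],
    }
    for case in pairwise_cases(parameters):
        print(case)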
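
To make step 3's "one workflow from beginning to end" concrete, here is a minimal pytest-style sketch for the banking example. The FakeBank class and its open_account method are hypothetical stand-ins for the real system under test; the point is the shape of the test, which drives one specific account type from opening through back-end verification before the next workflow starts.

class FakeBank:
    """Hypothetical stand-in for the real application, so the sketch runs
    on its own; in practice the test would drive the actual system."""

    def __init__(self):
        self.accounts = {}

    def open_account(self, customer, account_type):
        account_id = f"{customer}-{account_type}-{len(self.accounts) + 1}"
        self.accounts[account_id] = {"customer": customer,
                                     "type": account_type,
                                     "status": "active"}
        return account_id


def test_open_checking_account_end_to_end():
    bank = FakeBank()

    # Drive the workflow from the user's entry point...
    account_id = bank.open_account(customer="C-1001", account_type="checking")

    # ...through to back-end verification: the account exists and is active.
    record = bank.accounts.get(account_id)
    assert record is not None
    assert record["status"] == "active"

One such test per account type, run serially, matches the step 3 approach of finishing one workflow before starting the next.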

I'm certainly not recommending that you spend only six weeks testing every project. The time you need for testing depends on what's in the project and how well the product is built. But if you're ever caught in a pickle where you don't have enough time to test everything, use timeboxing to help you evaluate how little you can do and still deliver a valuable result to the organization.

Acknowledgements:
I thank Dave Liebreich, Bob Johnson, and James Tierney for their reviews. 
