Test Faster: How We Cut Our Test Cycle Time in Half

Summary:
In just a year, one test team reduced its test cycle by more than 50 percent. It took analysis, planning, and effort—first they looked into how they spent their time, and then they questioned whether they could reduce time in any of those areas. Once they knew where they could be more efficient, they could start tackling their blockers. Here's how you can, too.

“No ducking way!”

This was the reaction from the team when I asked if we could accelerate the system test schedule by one week. Thank goodness for autocorrect.

In this project, we normally had a nine-month delivery cycle, with three months dedicated to system testing after the code-complete milestone, and we needed every day of those three months. We were working nights and weekends in the final few weeks. When I explored with the team the possibility of taking a week out, the meeting was filled with reasons why we needed more time, not less.

So, we took a different tack. We spent some time asking where the time went. We listed all the things we did during those twelve weeks, and we ended up with a pseudo-equation that modeled the time:

System test duration = RC * (TC / TV + B * BHT) / PC 

RC = retest cycles, or how many times we retest due to changes in the app
TC = the number of test cases executed in each cycle
TV = the test velocity, or on average, how many test cases we executed in a unit of time
B = the number of bugs found and fixed during the testing phase
BHT = bug-handling time, or the amount of effort dealing with each bug (diagnosis, documenting, verifying fixes, etc.)
PC = people capacity, or the number of people testing, along with their productivity
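
To make the model concrete, here is a minimal sketch of the calculation in Python. The numbers are illustrative assumptions only, not our actual project figures; the point is simply to see how each variable pulls the total up or down.

```python
# A minimal sketch of the system test duration model.
# All numbers are illustrative assumptions, not real project data.

def system_test_duration(rc, tc, tv, b, bht, pc):
    """Duration = RC * (TC / TV + B * BHT) / PC."""
    return rc * (tc / tv + b * bht) / pc

# Hypothetical baseline: 3 retest cycles, 800 test cases, 10 cases executed
# per day, 100 bugs at 0.4 days of handling each, and 6 testers.
baseline = system_test_duration(rc=3, tc=800, tv=10, b=100, bht=0.4, pc=6)
print(f"Baseline: {baseline:.0f} days (roughly twelve weeks)")

# Cutting retest cycles from 3 to 2 shows how one variable moves the total.
fewer_cycles = system_test_duration(rc=2, tc=800, tv=10, b=100, bht=0.4, pc=6)
print(f"With one fewer retest cycle: {fewer_cycles:.0f} days")
```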

Now that we had this model as a way to express the time in a simpler manner, we could ask more tangible questions: 

  • What can we do to reduce the number of retest cycles?
  • Do we need to execute all the test cases?
  • How can we increase the test velocity?
  • How can we reduce the number of bugs we have to deal with during the test phase?
  • Can we be more efficient with handling the bugs?
  • Where can we find more people to help with testing, and how can we make them more productive?

Spending time decomposing the big, ugly problem into more actionable questions really helped us see that we could actually make a difference. We weren’t so helpless after all.

We restarted the meetings by asking just one of these questions at a time, then explored ways to improve just that one variable. In the end, after a year of making improvements, we were able to reduce the system test time to five weeks. When we started talking about it, we didn’t think we could take even one day out of a twelve-week test cycle; we ended up cutting the time by more than half.

Reducing Bugs in System Test

We started by tackling the number of bugs we found during system test. We found and fixed many bugs in that phase, so there was a good opportunity for improvement. But more importantly, reducing bugs was crucial to our main goal of delivering high-quality software.

Until this point, we had not been recording the root causes of the bugs found in system test, so we had to use some judgment and collaborate with the development team. We read through a sampling of the bugs found in the previous test cycle and looked for patterns. We found a number of simple coding errors and a number of memory leaks (it was a C++ application).

We invested some time in making sure we were getting the most out of our code reviews to help reduce the number of coding errors. We did code review training, tracked the code reviews, and reported the code review results to the team. We also started using some tools that were designed to detect memory leaks. These two improvements started to chip away at the effort required to deal with bugs during system test.

We eventually started recording the root cause of the bugs, and we did a regular analysis to find other opportunities to make improvements.

Improving Bug-Handling Time

When we looked at our bug list, I was embarrassed to find that half of the bugs we reported were closed without a fix. The two largest contributing factors were that developers could not reproduce the bug, or the bug was a duplicate of one already in the system.

Eliminating the duplicates required one simple change: We trained the test team to search the bug-tracking system before creating a new bug. If they found a similar bug, they would look into updating the original bug report with the new data, or consult with the developer assigned to that bug.

For the bugs that could not be reproduced, we tried an experiment. Instead of a tester writing a bug report and moving on, the tester would hold a “bug huddle” to demonstrate the bug to the development team, usually at the end of the day. The tester would then write the bug report after that conversation. This led to much faster fixes, as the developers would often say, “Ah, I see what’s happening.” The demonstration went a long way toward removing ambiguity from the steps to reproduce a bug.

After these improvements, we saw that more than 80 percent of the reported bugs were actually fixed, and we had fewer “bug ping-pong” games.

Increasing Test Velocity

Increasing test automation coverage seemed like the obvious way to increase test velocity, and automation really did help. But we found other things that improved velocity, too.

We built tools to populate test data automatically after installation, so that when testers arrived for work in the morning, they had a build already installed with the needed test data in place.
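
Our internal tooling isn’t reproduced here, but as a rough sketch, a nightly job shaped like the one below is one way to get that ready-in-the-morning state. The installer script and fixture files named here are hypothetical placeholders, not the tools we actually built.

```python
# Rough sketch of a nightly "ready to test in the morning" job.
# install_build.sh and the fixture files are hypothetical placeholders;
# substitute your own installer and seed-data scripts.
import subprocess
from pathlib import Path

BUILD_INSTALLER = "./install_build.sh"       # installs the latest nightly build
FIXTURE_DIR = Path("test_data/fixtures")     # canned accounts, orders, etc.

def prepare_test_environment() -> None:
    # 1. Install the latest build on the test machine.
    subprocess.run([BUILD_INSTALLER, "--latest"], check=True)

    # 2. Load each fixture so testers start with known data, not an empty app.
    for fixture in sorted(FIXTURE_DIR.glob("*.sql")):
        subprocess.run(["./load_fixture.sh", str(fixture)], check=True)

if __name__ == "__main__":
    # Run overnight (for example, from cron) so the environment is ready by morning.
    prepare_test_environment()
```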

We also created a “most wanted bug fix” list and raised the priority of these bugs. The most wanted bugs were those that were blocking tests from completing, so we tied the priority that the developers used to the testers’ productivity. That helped reduce the amount of time testers had to wait for a fix.

Reducing Test Cases

The idea of reducing the number of tests we execute was pretty controversial when we started thinking about it. But once we framed it as risk-based testing, putting the most testing effort into the areas with the greatest risk, we started to make progress.

We rated each test suite in two dimensions: the probability that those tests would find a bug or fault, and the severity of consequence to the customer if a bug were in that area of the product. Each dimension was rated high, medium, or low, and we used a chart to determine our approach:

                              Probability of a Problem
                              High       Medium     Low

  Severity if       High     P1         P1         P1
  Problem Exists    Medium   P1         P2         P3
                    Low      P2         P3         P3

We reviewed this analysis with the development team, asking them for the areas they thought were most risky. They were able to name a few areas right away, and they also did some research in the change logs for code that had frequent bug fixes.

We reviewed this data with the product management team to assess the customer impact behind the severity ratings. Likewise, they had some immediate concerns and also did a follow-up analysis based on usage analytics.

The test suites rated P1 were our highest priority. We tested them early in the cycle and then again later, making sure there were no regressions during the cycle.

The P2 test suites were next, and for these, we took a little more latitude on the regression testing toward the end of the cycle. We examined the P3 test suites in detail, trimmed them down with sampling, and executed them just once during system test.
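
If it helps to keep the ratings consistent across many suites, the chart is simple to encode. The sketch below just restates the matrix above as a lookup table; the run-frequency notes are assumptions that mirror how we treated P1, P2, and P3 suites, not a prescription.

```python
# The risk matrix above as a lookup table.
# Keys are (severity, probability of a problem); values are test priorities.
PRIORITY = {
    ("high", "high"): "P1",   ("high", "medium"): "P1",   ("high", "low"): "P1",
    ("medium", "high"): "P1", ("medium", "medium"): "P2", ("medium", "low"): "P3",
    ("low", "high"): "P2",    ("low", "medium"): "P3",    ("low", "low"): "P3",
}

# How each priority is exercised during the cycle -- an assumption that mirrors
# the treatment described above.
SCHEDULE = {
    "P1": "early in the cycle, then again later to catch regressions",
    "P2": "once, with regression coverage as time allows",
    "P3": "once, using a sample of the suite",
}

def rate_suite(severity: str, probability: str) -> str:
    priority = PRIORITY[(severity.lower(), probability.lower())]
    return f"{priority}: run {SCHEDULE[priority]}"

print(rate_suite("High", "Low"))     # P1: run early in the cycle, then again later...
print(rate_suite("Low", "Medium"))   # P3: run once, using a sample of the suite
```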

Finding More People

I saved this improvement for last because it was the first thing the team asked for: a bigger team. I was apprehensive about asking for a larger budget at first; I didn’t want to come across as doing more of the same. But, after making some of the improvements above and showing their progress, our leaders asked what else we could do.

Making dramatic improvements does take an investment in time by your team. We were able to ask for more people and show the specific improvements we were expecting with the new investment.

We ended up finding an offshore partner to help with a lot of the regression testing. This gave the existing team more time to make further improvements, which ended up being a virtuous cycle.

Before engaging with the offshore partner, we got a little help from other folks in the organization on some of the testing. The product managers, customer support, and technical documentation teams were really invested in making a better product, and they volunteered time to help out with testing.

We also did a number of “test jams,” where we brought the whole team together for a day to test various areas of the product. The developers and managers pitched in and everyone tested for a day. We gave prizes to the person who found the most interesting bug and the most severe bug. The test jams were fun and a good team-building event.

Choose Your Battles

In the end, we reduced our test duration from twelve weeks down to five weeks. It wasn’t easy or straightforward, but once we created that equation to represent the time and started chipping away at each variable, we ended up successfully addressing our challenges.

Since this project, I’ve used this technique many times to accelerate test cycles. The teams really like the process of breaking the time down into specific variables and finding opportunities to make improvements right away.

How could your team test faster?
