In this interview, Penny Wyatt discusses her Agile Development Conference West session, "Transform Your Agile Test Process to Ship Fast with High Quality," to share some of the ways she and her team achieved success, and how they—and you—can continue to develop and grow together.
Noel: The transition to agile is not easy. What advice do you give not to teams attempting agile for the first time, but to those who've attempted it and were unsuccessful? Where should a team start, or restart, to give the next attempt a better chance at success?
Penny: I think teams have a better chance of success of implementing agile if they start with a goal, and then look at which agile processes can help them achieve it. This is as opposed to starting with the idea of being agile and rote-applying the processes associated with that, in the hope of some kind of beneficial outcome.
In our case, we thought that we were agile all along—we had iterations, we had no specifications, and work was broken up into individual user stories. The clue that we weren't really agile was that we weren't reaping the benefits—we were shipping on a nine- to twelve-month major release cycle, where the last few months were nothing but bug-fixing and extra testing. This was self-perpetuating—when you're not going to ship for a year, does it matter if this iteration's stories aren't really complete at the end of the iteration? When processes appear to exist only for the sake of process, it is almost inevitable that people will start looking for ways to subvert them when they have other pressures applied.
What eventually triggered change for us was being given the ambitious goal of shipping every two weeks. This was impossible with the way we'd been working, forcing us to completely redesign our processes from scratch. Every piece of the process had to be absolutely essential, or it was cut. Now, we all know exactly why we do everything we do, and there are immediate impacts when we cut corners. This keeps us on the right track.
Noel: Your upcoming session at the Agile Development Conference West mentions that you'll describe "how to move responsibility for quality back to developers." What led to that responsibility initially being taken away from developers, and have you ever witnessed any resistance to this being back in developers' hands?
Penny: Originally, the JIRA team had no QA engineers at all. They were a small, fast-moving group of developers who could ship to production once the automated tests were green. They didn't want a QA team, and they didn't think they needed one. When the first QA engineer was hired, she quickly proved to them that she was needed: the many bugs they were shipping could, and should, have been caught before release.
The developers had only ever written automated tests and had never performed exploratory testing. They dismissed it as "manual testing" and viewed it as a waste of time: something only done by unskilled people, adding no value, made obsolete by automation. However, by the time I joined the team, the devs had become completely reliant on QA finding all the bugs that couldn't be caught by automation.
They were aware that their code was still very buggy after it had passed all their automated tests, and they relied on QA's testing to find all the bugs they had missed before we shipped. Yet, they still saw exploratory testing as an unskilled task that was beneath them.
Given this attitude, we had a lot of resistance from developers when we started asking them to take on more exploratory testing! Our toughest challenge was teaching them to do it well; they could be forced to test by their dev leads, but if they didn't want to learn, we had little chance of teaching them to test effectively.
Noel: I read in your bio that "hiring a small test team to educate developers and prevent bugs" is "much more efficient than hiring a large test team to find bugs." How is efficiency increased by doing this?
Penny: By providing the right information to developers at the right time to prevent bugs, we save all the time that would have been spent on those bugs otherwise:
• Reproducing, writing, analyzing, triaging and prioritizing the bugs,
• Identifying the original stories that introduced the bugs,
• Fixing the bugs,
• Bouncing stories back and forth between developer and tester to verify the bug-fixes,
• Duplicating testing between the original developer and the tester.
If we hired a separate test team to perform the testing, we would still incur all these costs to dev speed. By teaching developers how to avoid avoidable bugs, and to test their own work to find the bugs they couldn't avoid, we avoid much of this rework and context-switching. Instead, we focus on doing things once only, and doing them properly.
Noel: I know that different companies and teams use different iteration cycle lengths. What kinds of factors go into deciding which iteration lengths work best?
Penny: In general, we see shorter iterations as better, as it forces us to break up epics into small stories and keep the code in a good, known state. The few times when we've had quality issues with a release have been when there has been too much change going out in a single release. However, there are costs associated with the end of an iteration. Since we cut a release to ship to customers on the last day of the iteration, we have to ensure there's a single point in time when all stories are Done and there is no work in progress on the master branch, which has an impact on the devs' workflow.
So for us, it's a matter of practicality—if a team is working on small stories, then we try to keep iterations to a week. If we find a team's stories are larger or complex, and getting all stories to Done by the end of the first week is unrealistic, we will switch to two-week iterations instead.
Noel: With agile being a practice built on the belief in constant improvement, where do you personally see areas where you and your team are still working to improve your own processes, and how are those goals being reached?
Penny: Currently, our development process involves one developer implementing and testing their own work, and then a second developer performing a second round of testing on the story. Overall in JIRA 6.0, 15 percent of stories failed this second round of testing—a major improvement from the old days when almost every story failed testing.
The time that developers spend doing this second round of testing is an inefficiency in the process that we'd like to remove—but we won't accept any decrease in quality in order to achieve this.
We're doing this by:
• Helping the original developer to test better—testing using the best data, in the best production-like environment,
• Being selective about what scenarios we ask the second developer to test,
• Identifying when stories are low risk and cautiously deciding when stories can safely skip the second round of testing completely.
We tried this on one sub-team in JIRA 6.0 with positive effects:
• They completed fifty-five stories, which have now all been running in production for several months.
• Of those stories, QA decided thirty-five of them had been adequately tested by the original developer and was confident in skipping the second round of testing.
• These decisions have now been validated by the lack of bugs coming back from Support and customers for this sub-team's work.
• Of the remaining twenty stories that QA decided still needed testing by another developer, nine were rejected. This indicated that the time spent by the second developer was worthwhile.
Overall, this saved the team testing time on thirty-five stories with no decrease in quality of the product.
With ten years of software industry experience, Penny Wyatt works at Atlassian as the QA team lead for JIRA, an issue tracker used by more than 11,000 organizations worldwide. Penny started her career as a software developer, but after joining Microsoft in Redmond as a developer in test, she discovered that breaking software is much more enjoyable than building it. After a few years of developing testing tools, Penny realized that hiring a small test team to educate developers and prevent bugs is much more efficient than hiring a large test team to find bugs.
Here at hospital "A" I have encountered the same testing pushback from application builders. They believe small or easy changes require little to no testing. Worse, they often omit training notification of changes as well.
Our Health Information Technology department has introduced a strict change control policy: changes must be tested and approved by different teams from end to end prior to being submitted to the live environment.
I disagree with low-risk areas skipping second-review testing. Small changes may work in an order entry pathway, but if charge testing is omitted, it can cause significant revenue losses. Spending additional time testing ensures enhancements work right the first time, and saves time, money, and lives.