For years, MONOPOLY® has entertained countless people with the fictional thrill of what it might be like to make a killing in real estate, or to lose your shirt. As Rob Sabourin explains, the board game is similar to the real-world experience of running a software test project. Rob guides you through some of MONOPOLY's powerful lessons and strategies relating to test planning, risk management, technical debt, context-driven test strategies, contingencies, and decision making. In MONOPOLY, winning players consistently select, adapt, and apply strategies. Skilled testers adapt on the fly to their discoveries, applying heuristics and risk models to consistently deliver value. Winning at MONOPOLY, just like successful testing, is all about people: relationships, negotiation, and communication. To succeed in testing or MONOPOLY, you've got to be ready for whatever drawbacks or opportunities Chance happens to throw your way.
Erik Boelen starts his risk-based testing where most others stop. Too often, risk-based test strategies are defined in the initial test plan and are never looked at or used again. Erik explores how a dynamic, living risk-based testing strategy gives testers a vital tool to manage and control testing activities and identify the infrastructure they need to perform these activities. Find out how to use your risk-based testing strategy as a tool for negotiations among the different stakeholders. Take on the important role of risk mediator for all of the parties in the project. The risk-based test strategy is a tool you can use to defend testing’s need for time and resources, especially when late delivery is possible. Use your risk-based strategy to drive and manage exploratory testing sessions.
Over the years, experts have defined testing as a process of checking, a process of exploring, a process of evaluating, a process of measuring, and a process of improving. For a quarter of a century, we have been focused on the internal process of testing, while generally disregarding its real purpose: creating information that others on the project can use to improve product quality. Join Lee Copeland as he discusses why quantifying the value of testing is difficult work. Perhaps that’s why we concentrate so much on test process; it is much easier to explain. Lee identifies stakeholders for the information we create and presents a three-step approach to creating the information they need to make critical decisions. He shares key attributes of this information: accuracy, timeliness, completeness, relevancy, and more.
Whether you are a tester or a test manager, Jon Bach believes you have little time to do the things you want to do. Even the things on your "absolutely must do" list are competing for your limited time. Jon has a list of what he calls "half-baked" ideas on how to cope. That is, these ideas are still in the oven, still being tested. In his role as a tester and manager, Jon has learned that it's not about time management; it's really about energy management: where you focus your personal energy and direct your team’s energy. Jon shares ideas that have worked for him and some that have failed: Open-Book Testing, Dawn Patrols, Tester Show-and-Tell, Test Team Feud, and Color-Aided Design. Learn how these ideas may solve your problems with test execution, reporting, measurement, and management, all at low or no cost and relatively easy to implement.
It seemed simple enough: hire the best available technical staff who would work from home to build some great software. Along the way, the team encountered the usual problems: time zone differences, communication headaches, and a surprising regression test monster. Matt Heusser describes how Socialtext built their high-performance development and test team, got the right people on the bus, built a culture of "assume good intent and then just do it," created the infrastructure to enable remote work, and employed a lightweight yet accountable process. Of course, the story has the impossible deadlines, conflicting expectations, unclear roles, and everything you'd get in many development projects. Matt shares how the team cut through the noise, including building a test framework integrated into the product, to achieve their product and quality aims.
Over the years, the test manager's role has evolved from "struggling to get involved early" to today's more common "indispensable partner in project success." In the past, when "us vs. them" thinking was common, it was easy to complain that the testing effort could not be carried out as planned due to insufficient specs, not enough people, late and incomplete delivery, no appropriate environments, no tools, tremendous time pressure, etc. Martin Pol explains how today's test managers must focus on providing a high level of performance. By using a service-driven test management approach, test managers support and enhance product development, enabling the project team to improve overall quality and find solutions for any testing problem that could negatively impact the project's success.
This session is a deeper examination of how to apply dashboards in software testing. I spent several months on a project primarily building a software testing dashboard and learned some interesting things along the way.
We hear that more rigor means good testing and, conversely, that less rigor means bad testing. Some managers, who've never studied testing, done testing, or even "seen" testing up close, insist that testing be rigorously planned in advance and fully documented, perhaps with tidy metrics thrown in to make it look more scientific. However, sometimes measurement, documentation, and planning don't help. In those cases, rigor may require us not to do them. As part of winning court cases, James Bach has done some of the most rigorous testing any tester will do in a career. James shows that rigor is at least as dangerous as it is useful and that we must apply care and judgment. He describes the struggle in our craft, not just over how rigorous our processes should be, but what kind of rigor matters and when rigor should be applied.
What features of your software do customers use the most? What parts of the software do they find frustrating or completely useless? Wouldn't you like to target these critical areas in your testing? Most organizations get feedback, much later than anyone would like, from customer complaints, product reviews, and online discussion forums. Microsoft employs proactive approaches to gather detailed customer usage data from both beta tests and released products, achieving greater understanding of the experience of its millions of users. Product teams analyze this data to guide improvement efforts, including test planning, throughout the product cycle. Alan Page shares the inner workings of Microsoft's methods for gathering customer data, including how to know what features are used, when they are used, where crashes are occurring, and when customers are feeling pain.
Many organizations refer to their test teams and testers as QA departments and QA engineers. However, because errant systems can damage, even destroy, products and businesses, software quality must be the responsibility of the entire development team and every stakeholder. As the ones who find and report defects, and sometimes carry the “quality assurance” moniker, the test community has a unique opportunity to take up the cause of error prevention as a priority. Jeff Payne paints a picture of team and organization-wide quality assurance that is not the process-wonky, touchy-feely QA of the past that no one respects. Rather, it's tirelessly evaluating the software development artifacts beyond code; it’s measuring robustness, reliability, security, and other attributes that focus on product quality rather than process quality; it’s using risk management to drive business decisions around quality; and more.