What is the biggest management problem you are facing in 2009? Doing more with less? Demonstrating the value of testing to your company? Improving your team's skills while keeping up with projects? Automating more tests? Testing Dialogues is a unique platform for you to learn from experienced test managers around the world and share your ideas and experiences at the same time. Facilitated by Rob Sabourin and Lee Copeland, this double session focuses on test management issues that you face every day. You'll share your expertise and successes and learn from others' challenges and lessons learned. Lee and Rob will help participants generate topics in real time and structure discussions in a framework so that everyone will receive a summary of the work product after the conference. Many past participants in Management Dialogues have rated this session their best experience at the STAR conference.
As an experienced test manager, Lloyd Roden believes that test estimation is one of the most difficult aspects of test management. You must deal with many unknowns, including dependencies on development activities and the variable quality of the software you test. Lloyd presents seven proven methods he has used to estimate test effort. Some are easy and quick but prone to abuse; others are more detailed and complex but may be more accurate. Lloyd discusses FIA (finger in the air), formula/percentage, historical reference, Parkinson's Law vs. pricing, work breakdown structures, estimation models, and assessment estimation. He shares spreadsheet templates and utilities that you can use and take back to help you improve your estimations. By the end of this session, you might just be thinking that the once painful experience of test estimation can, in fact, be painless.
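The formula/percentage method Lloyd mentions can be sketched in a few lines. This is an illustrative sketch, not Lloyd's spreadsheet: the 35 percent default ratio is a made-up placeholder that you would calibrate from your own historical projects.

```python
def percentage_estimate(dev_effort_days: float, test_ratio: float = 0.35) -> float:
    """Formula/percentage estimation: test effort as a fixed fraction of
    development effort. The 0.35 default is illustrative only; replace it
    with a ratio derived from your organization's historical data."""
    return dev_effort_days * test_ratio

# A project estimated at 200 development days, using the illustrative ratio:
print(percentage_estimate(200))  # 70.0 test days
```

The method's appeal is its speed; its weakness, as Lloyd notes of the quicker techniques, is that an uncalibrated ratio is easy to abuse.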
Jason Bryant shows how you can transform readily available raw data into visual information that improves decision making, using simple measures that are powerful for both testing and development managers. A quality dashboard helps focus regression tests to cover turmoil risk, ensures issues are resolved before beta, identifies risks in the defect pool, and provides information to monitor the team's adherence to standard processes. Creating, measuring, and monitoring release criteria are fundamental practices for ensuring consistent delivery of software products. Schlumberger has implemented a quality dashboard that helps them continuously gauge how projects are progressing against their quality release criteria (QRC). By using dashboard data, Schlumberger makes better decisions and subsequently is able to see how those decisions affect projects.
Even if you have the best tools and processes in the world, if your staff is not motivated and productive, your testing efforts will be weak and ineffective. Retired Marine Colonel Rick Craig describes how using the Marine Corps Principles of Leadership can help you become a better leader and, as a result, a better test manager. Learn the difference between leadership and management and how they complement each other. Join in the discussion and share ideas that have helped energize your testers (and those that haven't). Rick discusses motivation, morale, training, span of control, immersion time, and how to promote the testing discipline within your organization. He also addresses the importance of influence leaders and how they can be used as agents of change.
As testers, we focus our efforts on measuring the quality of our organization's products. We count defects and list them by severity; we compute defect density; we examine those metrics over time for trends; and we chart customer satisfaction. While these are important, Lee Copeland suggests that to reach a higher level of testing maturity, we must apply similar measurements to ourselves. He suggests you count the number of defects in your own test cases and the time needed to find and fix them; compute test coverage, the measure of how much of the software you have actually exercised under test conditions; and determine Defect Removal Effectiveness, the ratio of the number of defects you actually found to the total number you should have found. These and other metrics will help you evaluate and then improve the effectiveness and efficiency of your testing process.
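The Defect Removal Effectiveness ratio Lee describes is simple arithmetic. A minimal sketch, with illustrative numbers (the function name and sample counts are this example's own, not from the session):

```python
def defect_removal_effectiveness(found_in_test: int, found_total: int) -> float:
    """DRE: defects found during testing divided by the total number of
    defects eventually found (testing plus those that escaped to production)."""
    if found_total == 0:
        return 1.0  # nothing was found anywhere; treat as fully effective
    return found_in_test / found_total

# Illustrative figures: 90 defects caught in test, 10 escaped to production.
dre = defect_removal_effectiveness(90, 90 + 10)
print(f"DRE: {dre:.0%}")  # → DRE: 90%
```

In practice the "total number you should have found" is only known retrospectively, once field defects have been counted, which is why DRE is a trailing indicator of test effectiveness.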
Traditional testing teams often agonize over exploratory testing. How can they plan and design tests without detailed up-front documentation? Stubborn testers may want to quit because they are being asked to move out of their comfort zone. Can a team's testing culture be changed? Rob Sabourin describes how several teams have undergone dramatic shifts to embrace exploratory testing. Learn how to blend cognitive thinking skills, subject matter expertise, and "hard-earned" experience to help refocus your team and improve your outcomes. Learn to separate bureaucracy from thinking and paperwork from value. Explore motivations for change and resistance to it in different project contexts. Leverage Parkinson's Law (work expands to fill the time available) and Dijkstra's Principle (testing can show the presence of bugs, but not their absence) to inspire and motivate you and your team to get comfortable in the world of exploratory testing.
Many of the misunderstandings within software development organizations can trace their roots to different interpretations of the role of testers. The terms quality control (QC), quality assurance (QA), and quality analysis are often used interchangeably. However, they are quite different and require different approaches and very different skill sets. Quality control is a measurement of the product at delivery compared to a benchmark standard, at which point the decision is made to ship or reject the product. Quality assurance is the systematic lifecycle effort to assure that a product meets expectations in all aspects of its development; it includes the processes, procedures, guidelines, and tools that lead to quality in each phase. Quality analysis evaluates historical trends and assesses future customer needs and trends in technology to provide guidance for future system development.
In large organizations with multiple, simultaneous, and related projects, how do you coordinate testing efforts for better utilization and higher quality? Some organizations have opened Program Test Management offices to oversee the multiple streams of testing projects and activities, each with its own test manager. Should the Program Test Manager be an über-manager in control of everything, or is this office more of an aggregation and reporting function? Graham Thomas examines the spectrum of possible duties and powers of this position. He also shares the critical factors for successful program test management, including oversight of the testing products and deliverables; matrix management of test managers; stakeholder, milestone, resource, and dependency management; and the softer but vital skills of influence and negotiation with very senior managers.
In a recent survey of 130 U.S. software testers and test managers, Randall Rice learned that 83 percent of the respondents have experienced burnout, 53 percent have experienced depression of some type, and 97 percent have experienced high levels of stress at some time during their software testing careers. Randall details the sources of these problems and the most common ways to deal with them, some healthy, some not. There are positive things testers and managers can do to reduce and relieve their stress without compromising team effectiveness. By understanding the proper role of testing inside your organization and building a personal support system, you can manage stress and avoid its destructive consequences. Randall identifies the stress factors you can personally alleviate and helps you deal with those stressors you can't change.
It seems that senior management is always complaining that testing costs too much. And their opinion is accurate if they consider only the costs, and not the benefits, of testing. What if you could show management how much you have saved the organization by finding defects during testing? The most expensive defects are the ones not found during testing: defects that ultimately get delivered to the user. Their consequential damages and repair costs can far exceed the cost of finding them before deploying a system. Instead of focusing only on the cost of testing, Leo van der Aalst shows you how to determine the real value that testing adds to the project. He shares a model that he has used to calculate the losses testing prevents, losses that did not occur because testing found the error before the application was put into production. Leo explains the new testing math: Loss Prevented - Cost of Testing = Added Value of Testing.
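Leo's formula is arithmetic you can put in front of management directly. A minimal sketch, with hypothetical figures (the function name and the amounts below are illustrative, not from Leo's model):

```python
def added_value_of_testing(loss_prevented: float, cost_of_testing: float) -> float:
    """The 'new testing math': Loss Prevented - Cost of Testing = Added Value."""
    return loss_prevented - cost_of_testing

# Hypothetical project: testing cost 50,000 and prevented an estimated
# 200,000 in production-failure losses (repair, downtime, consequential damages).
value = added_value_of_testing(200_000, 50_000)
print(f"Added value of testing: {value:,.0f}")  # → Added value of testing: 150,000
```

The hard part, of course, is the first term: estimating what each defect found in test would have cost had it reached production, which is what Leo's model provides.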