As a tester on an agile team, are you still creating lots of scripted test cases the old way? Are you still caught in the classic waterfall trap, always behind, while the rest of the team is doing Scrum and looking forward? Then change course and work with your team to become a test specialist, coordinating testing rather than only doing testing. Henrik Andersson describes his experiences on a Scrum team and its transition to his test specialist role. To orchestrate such a change, they needed new tools and approaches, so Henrik gives a short introduction to behavior-driven development. For developing automated unit tests, he describes how his team learned to write tests in English-like Gherkin notation. Then he demonstrates Developers’ Exploratory Testing, in which the entire team tests together and shares joint responsibility for the quality of the software.
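For readers unfamiliar with the notation mentioned above, a Gherkin specification reads as plain English structured into Given/When/Then steps. The feature and scenario below are hypothetical, purely to illustrate the shape of the notation (the session itself uses its own examples):

```gherkin
# Hypothetical example: a checkout scenario written in Gherkin.
Feature: Shopping cart checkout
  Scenario: Customer checks out a single item
    Given the cart contains 1 copy of "Agile Testing"
    When the customer checks out
    Then an order confirmation is shown
    And the cart is empty
```

Each step is bound to automation code by a BDD tool (Cucumber, SpecFlow, behave, and similar), so the same sentences serve as both readable specification and executable test.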
You procrastinate. You worry that you may be making the wrong choice. You spend time on the irrelevant. You don't select the most important tasks from your many to-dos. You can't get things done on time. Join James Martin as he shares his experience with analysis paralysis, procrastination, and failure to deliver what others expect. After a look at why we procrastinate, James turns his attention to his personal story of a "bubble" of super productivity in which he delivered more relevant work in a two-week period than he believed possible. Along with the techniques and tips you would expect from a productivity-boosting experience report, James explains the state of mind that will help you distinguish important from trivial tasks, reduce waste in your work, and discover the most important thing to do next. You can get it all done in record time, and with less angst than you ever dreamed possible.
Innovation is a word tossed around frequently in organizations today. The standard clichés are "Do more with less" and "Be creative." Companies want to be innovative but often struggle with how to define, implement, prioritize, and track their innovation efforts. Using the Innovation Types model, Jennifer Bonine will help you transform your thinking about innovation and understand whether your team and company goals match their innovation efforts. Learn how to classify your activities as "core" (to the business) or "context" (essential, but non-revenue generating). Once you understand how your innovation activities are related to revenue-generating activities, you can better decide how much of your effort should be spent on core or context activities.
Is your company experiencing difficulty and frustration with its offshore project teams? Are your teams not consistently performing well? Are the results not what was expected? Gerie Owen shares her experiences in managing offshore test teams through each phase of the project cycle, from selecting the team and executing the project through presenting and documenting its results. Gerie explains how to assess the team’s knowledge and skill level. Because your offshore team members often are new to you, it is critical to recognize and handle training issues as early as possible. Given the challenges of time zones, language, and cultural differences, Gerie addresses the critical issues of providing explicit direction and expressing clear expectations.
When do you ship an application and expose it to your customers and users? The answer seems simple: you ship it when it's ready. However, there are many possible definitions of "ready." According to Peter Varhol, customers, users, and development teams must all agree on what this term means before work begins on the project. Otherwise, you may be tempted to deploy an application before its product goals are met. Peter presents different approaches to determining when an application has the required quality to be ready to ship. He describes how to determine and track quality measures, so that the team actively works toward getting the application ready to deploy and knows what needs to be done to ensure fitness for deployment. Learn the factors on which to base your ready-to-ship decision so that the project team and the business will know whether to continue working or declare, "Ready."
Lightning Talks are a very popular part of many STAR conferences. Lightning Talk sessions consist of a series of five-minute talks by different speakers within one presentation period and give each speaker the opportunity to deliver their single biggest bang-for-the-buck idea in a rapid-fire presentation.
Many test leaders believe that development, business, and management don't understand, support, or properly value our contributions. You know what? These test leaders are probably right! So, why do they feel that way? Bob Galen believes it’s our ineffectiveness in communicating, in selling ourselves, our abilities, our contributions, and our value to the organization. As testers, we believe that the work speaks for itself. Wrong! We must work harder to create the crucial conversations that communicate our value and impact. Bob shares specific techniques for holding context-based conversations, producing informative status reports, conducting attention-getting quality assessments, and delivering solid defect reports. Learn how to improve your communication skills so that key partners understand your role, value, and contributions.
Many testing organizations mistakenly declare success when they first introduce test automation into an application or system. However, the true measure of success is sustaining and growing the automation suite over time. You need to develop and implement a flexible process, and engage knowledgeable testers and automation engineers. Kiran Pyneni describes Aetna’s two-team automation structure, the functions that each group performs, and how their collaborative efforts provide for the most efficient test automation. Kiran explains how to seamlessly integrate your test automation lifecycle with your software development lifecycle. He shares specific details on how Aetna’s automation lifecycle benefits their entire IT department and organization, and the measurements they use to track and report progress.
The basic problem in software testing is choosing a subset from the near-infinite number of possible test cases. Testers must select test cases to design, create, and then execute. Often, test resources are limited, but you still want to select the best possible set of tests. Peter M. Kruse and Magdalena Luniak share their experiences designing test cases with the Classification-Tree Editor (CTE XL), the most popular tool for systematic black-box test case design of classification tree-based tests. Peter and Magdalena show how to integrate weighting factors into classification trees and automatically obtain prioritized test suites. In addition to “classical” approaches such as minimal combination and pairwise, they share new generation rules and demonstrate the upcoming version of CTE XL that supports prioritization by occurrence probability, error probability, or risk.
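To make the "subset selection" problem concrete: the pairwise approach mentioned above picks far fewer test cases than exhaustive combination while still exercising every pair of parameter values together at least once. The sketch below is not CTE XL's algorithm; it is a minimal greedy pairwise generator in Python, with invented example parameters, purely to illustrate the idea:

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedy pairwise test selection: repeatedly pick the full
    combination covering the most not-yet-covered value pairs,
    until every value pair of every two parameters is covered."""
    names = list(parameters)
    # All (param-index, value, param-index, value) pairs to cover.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(parameters[a], parameters[b]):
            uncovered.add((i, va, j, vb))
    suite = []
    while uncovered:
        best, best_gain = None, -1
        # Exhaustive scan is fine for small parameter spaces.
        for combo in product(*(parameters[n] for n in names)):
            gain = sum(
                1
                for i, j in combinations(range(len(names)), 2)
                if (i, combo[i], j, combo[j]) in uncovered
            )
            if gain > best_gain:
                best, best_gain = combo, gain
        suite.append(dict(zip(names, best)))
        for i, j in combinations(range(len(names)), 2):
            uncovered.discard((i, best[i], j, best[j]))
    return suite

# Hypothetical test parameters for illustration.
params = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "macOS", "Linux"],
    "locale": ["en", "de"],
}
tests = pairwise_suite(params)
print(len(tests), "cases cover all pairs; exhaustive would need 12")
```

Real pairwise tools use more sophisticated covering-array construction, and CTE XL adds the weighting and prioritization the session describes, but the payoff is the same: coverage of all value pairs at a fraction of the exhaustive cost.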
Peter Kruse, Berner & Mattner Systemtechnik GmbH
This session is a deeper examination of how to apply dashboards in software testing. I spent several months on a project primarily building a software testing dashboard. I have learned some interesting things, including: