STAREAST 2004 - Software Testing Conference
PRESENTATIONS
Building A Dynamic Test Automation Environment
Even in a perfect world, building an organization's test automation environment is a daunting task. The numerous applications you need to test and their many operating environments require careful planning to build and operate a cost-effective test automation environment.
Dave Kapelanski, Compuware Corporation
Building an Independent Test Group
Are you attempting to start an independent test group or increase the scope and value of your present group? After building a highly effective thirty-person test group, Scott Eder reflects on the three major areas where he focused and the challenges he faced along the way. Take away sample work scope and purpose statements for your test group, and learn how to set realistic expectations at all levels within your organization. Find out the key processes that Scott implemented immediately to get his team off to a good start.
Scott Eder, Catalina Marketing
Contrasting White-Box and Black-Box Performance Testing
What exactly do people mean when they say they are going to run a "black box performance test"? And why would they choose to adopt such a test strategy over a potentially more revealing approach such as "white box performance testing"? Steve Splaine answers these and other performance testing questions by comparing and contrasting these two techniques, focusing on test design, test execution, and test results.
Steve Splaine, Nielsen Media Research
Enabling Technologies for Outsourced Testing
The outsourcing of test case development, automation, and execution presents opportunities for some organizations seeking new sources of competitive advantage. Compared to software development outsourcing, test outsourcing has unique technical requirements that must be understood and carefully managed. Based on his experiences, Rob Spade explains the ideal technical capabilities you need for test outsourcing.
Rob Spade, Lumenare Networks
Evaluating Test Plans Using Rubrics
The phrase "test plan" means different things to different people. There is even more disagreement about what makes one test plan better than another. Bernie Berger makes the case for using multi-dimensional measurements to evaluate the quality of test plans. Walk away with a practical technique to systematically evaluate any complex structure such as a test plan. Learn how to qualitatively measure multiple dimensions of test planning and gain a context-neutral framework for ranking each dimension.
Bernie Berger, Test Assured Inc.
Fault Injection to Stress Test Windows Applications
Testing an application's robustness and tolerance for failures in its natural environment can be difficult or impossible. Developers and testers buy tool suites to simulate load, write programs that fill memory, and create large files on disk, all to determine the behavior of their application under test in a hostile and unpredictable environment. Herbert Thompson describes and demonstrates new, cutting-edge methods for simulating stress that are more efficient and reliable than current industry practices.
Herbert Thompson, Security Innovation
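As a point of reference for the conventional practices the abstract mentions, the sketch below is a minimal Python illustration of filling disk space and memory before exercising an application under test; it is not the fault-injection approach Thompson demonstrates, and all names and limits are hypothetical.

    import os
    import tempfile

    def fill_disk(path, chunk_mb=64):
        """Consume free disk space by writing junk data until a write fails.
        Returns the temp file name so the caller can delete it afterwards."""
        fd, name = tempfile.mkstemp(dir=path, prefix="stress_")
        chunk = b"\0" * (chunk_mb * 1024 * 1024)
        try:
            with os.fdopen(fd, "wb") as f:
                while True:
                    f.write(chunk)
                    f.flush()
        except OSError:
            pass  # disk is (nearly) full; keep the file in place during the test
        return name

    def hog_memory(limit_mb=512, chunk_mb=16):
        """Hold allocations in a list so the application under test sees memory pressure."""
        hog, allocated = [], 0
        while allocated < limit_mb:
            hog.append(bytearray(chunk_mb * 1024 * 1024))
            allocated += chunk_mb
        return hog

    if __name__ == "__main__":
        junk = fill_disk(tempfile.gettempdir())
        ballast = hog_memory()
        # ... launch or exercise the application under test here ...
        os.remove(junk)  # always clean up after the stress run

Approaches like this are labor-intensive and coarse, which is the gap the talk's fault-injection methods are aimed at.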
Getting a Grip on Exploratory Testing
Many testers have heard about exploratory testing, and everyone does some testing without a script or a detailed plan. But how is exploratory testing different from ad hoc testing? In this interactive session, James Lyndsay demonstrates the approaches to exploratory testing he often uses at work. With specially built exercises, he explains his thought process as he explores the application. He analyzes applications by looking at their inputs and outputs and by observing their behaviors and states.
James Lyndsay, Workroom Productions
High Volume Test Automation
Most test design starts from the premise that extensive testing is not possible: too many tests, not enough time. What if we could generate millions of tests, execute them, and evaluate them automatically? That would dramatically change your approach to test planning. Learn how to perform this style of automation using free scripting tools (such as Ruby or Python) that are easy to learn.
Cem Kaner, Florida Institute of Technology
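To illustrate the basic idea (a hedged sketch, not Kaner's course materials; the midpoint functions and their names are hypothetical), a free scripting language such as Python can generate and evaluate a large number of tests by comparing the implementation under test against an independent oracle:

    import random

    # Implementation under test: a hypothetical function we want to exercise at high volume.
    def midpoint_under_test(a, b):
        return (a + b) // 2           # this form can overflow in a lower-level language

    # Oracle: an independent (often simpler or slower) way to compute the expected result.
    def oracle_midpoint(a, b):
        return a + (b - a) // 2

    def run_high_volume(trials=1_000_000, seed=2004):
        rng = random.Random(seed)     # fixed seed so failures are reproducible
        failures = []
        for _ in range(trials):
            a = rng.randint(-10**9, 10**9)
            b = rng.randint(a, 10**9)
            if midpoint_under_test(a, b) != oracle_midpoint(a, b):
                failures.append((a, b))
        print(f"{trials} generated tests, {len(failures)} failures")
        return failures

    if __name__ == "__main__":
        run_high_volume()

The fixed seed keeps a run reproducible, so any failing input pair can be replayed and investigated rather than lost in the volume.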
Influencing Others: Business Speak for Testers
One of the major goals of testing is to provide information to decision-makers about the quality of the product under test and the risks of releasing or not releasing the software. But whether or not management hears what we have to say depends on how we deliver the message. The truth is that management often doesn't care about the number of defects or their severity level; instead, they care about revenue, costs, and customer impact.
Esther Derby, Esther Derby Associates Inc.
Introducing Test Driven Development
You may ask, why would anyone write an automated unit test for code that has not yet been written? With Test-Driven Development (TDD), that's exactly what you do: write an automated test that fails; then write the code that makes the test pass; then write another automated test that fails; and so on, until the system is complete. This provides an automated regression test suite up front, before the tests can be "skipped" because the project is "running late".
Matthew Heusser, Priority-Health
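To make the red-green cycle concrete, here is a minimal Python unittest sketch of one iteration; the ShoppingCart example is illustrative only and not taken from the session. The tests are written first and fail, then just enough code is added to make them pass.

    import unittest

    # Step 1 (red): write tests for behavior that does not exist yet; they fail.
    class ShoppingCartTest(unittest.TestCase):
        def test_empty_cart_total_is_zero(self):
            self.assertEqual(ShoppingCart().total(), 0)

        def test_total_sums_item_prices(self):
            cart = ShoppingCart()
            cart.add("book", 25)
            cart.add("pen", 5)
            self.assertEqual(cart.total(), 30)

    # Step 2 (green): write just enough code to make the failing tests pass.
    class ShoppingCart:
        def __init__(self):
            self._items = []

        def add(self, name, price):
            self._items.append((name, price))

        def total(self):
            return sum(price for _, price in self._items)

    # Step 3: repeat - add the next failing test, then the code that passes it.
    if __name__ == "__main__":
        unittest.main()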