STAREAST 2007 - Software Testing Conference

PRESENTATIONS

The Risks of Risk-Based Testing

Risk-based testing has become an important part of the tester’s strategy in balancing the scope of testing against the time available. Although risk-based methods have always been helpful in prioritizing testing, it is vital to remember that we can be fooled in our risk analysis. Risk, by its very nature, contains a degree of uncertainty. We estimate the probability of a risk, but what is the probability that we are accurate in our estimate? Randall Rice describes twelve ways that risk assessment and risk-based methods may fail.

Randy Rice, Rice Consulting Services Inc

Top Ten Reasons Test Automation Projects Fail

Test automation is the perennial "hot topic" for many test managers. The promises of automation are many; however, test automation initiatives often fail to deliver on them. Shrini Kulkarni explores ten classic reasons why test automation fails. Starting with Number Ten ... having no clear objectives. Often people set off down different, uncoordinated paths. With no objectives, there is no defined direction. At Number Nine ... expecting immediate payback.

Shrinivas Kulkarni, iGATE Global Solutions

Top Ten Tendencies That Trap Testers

A trap is an unidentified problem that limits or obstructs us in some way. We don't intentionally fall into traps, but our behavioral tendencies aim us toward them. For example, have you ever found a great bug and celebrated only to have one of your fellow testers find a bigger bug just one more keystroke away? A tendency to celebrate too soon can make you nearsighted. Have you ever been confused about a behavior you saw during a test and shrugged it off?

Jon Bach, Quardev Laboratories

Unit Testing Code Coverage: Myths, Mistakes, and Realities

You've committed to an agile process that encourages test-driven development. That decision has fostered a concerted effort to actively unit test your code. But you may be wondering about the effectiveness of those tests. Experience shows that while the collective confidence of the development team increases, defects still manage to rear their ugly heads. Are your tests really covering the code adequately, or are big chunks remaining untested? And are those areas that report coverage really covered by robust tests?
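As a rough, hypothetical illustration of the gap this abstract points at (the class and tests below are invented for this summary, not taken from the session): two JUnit tests can make a coverage tool mark the same lines as covered, yet only one of them would actually catch a defect.

```java
// Hypothetical illustration only.
// Discount.java -- the production code under test.
class Discount {
    double applyDiscount(double price, int percent) {
        if (percent < 0 || percent > 100) {
            throw new IllegalArgumentException("percent out of range");
        }
        return price - (price * percent / 100.0);
    }
}

// DiscountTest.java -- two JUnit 4 tests that execute the same lines.
public class DiscountTest {

    @org.junit.Test
    public void coverageWithoutAssertions() {
        // Runs the calculation, so the coverage report marks those lines as
        // covered, but with no assertion a wrong formula would still pass.
        new Discount().applyDiscount(100.0, 25);
    }

    @org.junit.Test
    public void coverageWithRobustAssertion() {
        // Same lines covered, but this test fails if the math is wrong.
        org.junit.Assert.assertEquals(75.0,
                new Discount().applyDiscount(100.0, 25), 0.0001);
    }
}
```

A line-coverage number alone cannot tell these two tests apart, which is exactly the question the abstract raises.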

Andrew Glover, Stelligent

Verification Points for Better Testing Efficiency

More than one-third of all testing time is spent verifying test results: determining whether the actual result matches the expected result within some predetermined tolerance. Sometimes actual test results are simple, such as a value displayed on a screen. Other results are more complex, such as a database that has been properly updated, a state change within the application, or an electrical signal sent to an external device. Dani Almog suggests a different approach to results verification: separating the design of verification from the design of the tests.
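A minimal sketch of what such a separation might look like (the interface and class names below are assumptions for illustration, not Almog's design): the test step decides when to check, while reusable verification points decide how a result is compared and within what tolerance.

```java
import java.util.List;

// A reusable verification point: it knows how to compare an actual outcome
// against an expectation, including any tolerance that applies.
interface VerificationPoint {
    boolean verify();
    String describe();
}

// Example: verify a numeric result (say, a value shown on screen) within a tolerance.
class NumericCheck implements VerificationPoint {
    private final double expected, actual, tolerance;

    NumericCheck(double expected, double actual, double tolerance) {
        this.expected = expected;
        this.actual = actual;
        this.tolerance = tolerance;
    }

    public boolean verify() { return Math.abs(expected - actual) <= tolerance; }

    public String describe() { return "numeric result within +/- " + tolerance; }
}

// The test step only sequences actions and then delegates result checking to
// whatever verification points are attached to it, keeping the two designs separate.
class TestStep {
    private final List<VerificationPoint> checks;

    TestStep(List<VerificationPoint> checks) { this.checks = checks; }

    boolean passed() {
        // ... perform the test actions here ...
        for (VerificationPoint check : checks) {
            if (!check.verify()) {
                System.out.println("Failed: " + check.describe());
                return false;
            }
        }
        return true;
    }
}
```

The same pattern would extend to the more complex results the abstract mentions, such as a database-state check or an outbound-signal check, implemented as additional VerificationPoint classes.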

Dani Almog, Amdocs Inc

When There is Too Much to Test: Ask Pareto for Help

Preventing defects has been our goal for years, but the changing technology landscape of architectures, languages, operating systems, databases, Web standards, software releases, service packs, and patches makes perfection impossible to reach. The Pareto Principle, which states that for many phenomena 80% of the consequences stem from 20% of the causes, often applies to defects in software.
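A small, made-up illustration of how a tester might apply that principle to defect data (the module names and counts below are invented): rank modules by defects reported and see how few of them account for roughly 80% of the total.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class ParetoDefects {
    public static void main(String[] args) {
        // Invented defect counts per module, 200 defects in total.
        Map<String, Integer> defectsPerModule = new LinkedHashMap<>();
        defectsPerModule.put("billing", 120);
        defectsPerModule.put("reporting", 45);
        defectsPerModule.put("auth", 20);
        defectsPerModule.put("search", 10);
        defectsPerModule.put("admin", 5);

        int total = 0;
        for (int count : defectsPerModule.values()) {
            total += count;
        }

        // Rank modules by defect count, highest first.
        List<Map.Entry<String, Integer>> ranked =
                new ArrayList<>(defectsPerModule.entrySet());
        ranked.sort((a, b) -> b.getValue() - a.getValue());

        // Walk down the ranking until ~80% of all defects are accounted for.
        int cumulative = 0;
        for (Map.Entry<String, Integer> entry : ranked) {
            cumulative += entry.getValue();
            System.out.printf("%-10s %4d defects, cumulative %.1f%%%n",
                    entry.getKey(), entry.getValue(), 100.0 * cumulative / total);
            if (cumulative >= 0.8 * total) {
                break;
            }
        }
        // With these numbers, "billing" and "reporting" (2 of the 5 modules)
        // already account for 165 of the 200 defects, i.e. 82.5%.
    }
}
```

When there is too much to test, that short list is where the limited testing time goes first.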

Claire Caudry, Perceptive Software

Will Your SOA Systems Work in the Real World?

The fundamental promise of Service-Oriented Architecture (SOA) and Web services demands consistent and reliable interoperability. Despite this promise, existing Web services standards and emerging specifications present an array of challenges for developers and testers alike. Because these standards and specifications often permit multiple acceptable implementation alternatives or usage options, interoperability issues frequently result.

Jacques Durand, Fujitsu Software Corporation

You're the New Test Manager - Now What?

You've wanted this promotion to QA/Test manager for so long and now, finally, it's yours. But you have a terrible sinking feeling ... "What have I gotten myself into?" "How will I do this?" You have read about Six Sigma and developer-to-tester ratios, but what do they mean to you? Should you use black-box or white-box testing? Is there such a thing as gray-box testing? Your manager is mumbling about offshore outsourcing. Join Brett Masek as he explains what you need to know to become the best possible test manager.

Brett Masek, American HealthTech
