Conference Presentations

Cross-Organizational Change Management

The phrase "test plan" means different things to different people, and there is even more disagreement about what makes one test plan better than another. Bernie Berger makes the case for using multi-dimensional measurements to evaluate the quality of test plans. Walk away with a practical technique for systematically evaluating any complex structure, such as a test plan. Learn how to qualitatively measure multiple dimensions of test planning and gain a context-neutral framework for ranking each dimension. You'll also find out why measurement of staff technical performance is often worse than no measurement at all and how to use this technique as an alternative to traditional practices. [This presentation is based on work at Software Test Managers Roundtable (STMR) #8, held in conjunction with the STAR conference.]

• Qualitatively evaluate complex structures, like test plans
• Ten dimensions of test planning

Federico Pacquing, Jr., Getty Images, Inc.
Test Metrics: A Practical Approach To Tracking and Interpretation

You can improve the overall quality of a software project through the use of test metrics. Test metrics can track and measure the efficiency, effectiveness, and the successes or shortcomings of the various activities in a software development project. While it is important to recognize the value of gathering test metrics data, it is the interpretation of that data that makes the metrics meaningful or not. Shaun Bradshaw describes the metrics he tracks during a test effort and explains how to interpret them so they are meaningful to the project and its team members.

  • What types of test metrics should be tracked
  • How to track and interpret test metrics
  • The two categories of test metrics: base and calculated
Shaun Bradshaw, Questcon Technologies Inc
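As a minimal sketch of the base-versus-calculated distinction the talk mentions: base metrics are raw counts gathered during the test effort, and calculated metrics are derived from them. The metric names and numbers below are illustrative, not taken from the presentation.

```python
# Base metrics: raw counts collected during the test effort
# (illustrative names and values, not from the presentation).
base = {
    "tests_planned": 200,
    "tests_executed": 150,
    "tests_passed": 120,
    "defects_found": 45,
}

# Calculated metrics: values derived from the base metrics,
# which is where the interpretation happens.
calculated = {
    "execution_rate": 100.0 * base["tests_executed"] / base["tests_planned"],
    "pass_rate": 100.0 * base["tests_passed"] / base["tests_executed"],
    "defects_per_test": base["defects_found"] / base["tests_executed"],
}
```

A 75% execution rate, for example, might mean the team is behind schedule, or that tests were blocked; the number alone doesn't say, which is the point about interpretation.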
Measuring Testing Effectiveness using Defect Detection Percentage

How good is your testing? Can you demonstrate the detrimental effect on testing when not enough time is allowed? Dorothy Graham discusses a simple measure that has proved very useful in a number of organizations: Defect Detection Percentage, or DDP. Learn what DDP is, how to calculate it, and how to use it in your organization to communicate the effectiveness of your testing. From case studies of organizations that are using DDP, you'll find out the problems you may encounter and ways to overcome them.

  • Learn what DDP is and how to calculate it using defect data you may already have
  • How best to start measuring and using DDP
  • Calculate DDP for different stages of testing (integration, system, user acceptance)
Dorothy Graham, Grove Consultants UK
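The usual formulation of DDP is the defects found by a test stage as a percentage of all defects known so far, including those that escaped to later stages or to users. A minimal sketch, assuming that definition:

```python
def ddp(found_by_testing, found_later):
    """Defect Detection Percentage: the share of all currently known
    defects that this test stage caught, expressed as a percentage.

    found_by_testing: defects found by the stage being measured
    found_later: defects that escaped it and were found afterwards
    """
    total = found_by_testing + found_later
    if total == 0:
        raise ValueError("no defects recorded yet")
    return 100.0 * found_by_testing / total

# e.g. if system testing found 80 defects and users later reported
# 20 more, system-test DDP is 80.0 (it caught 80% of known defects)
```

Note that DDP for a stage can only be calculated retrospectively, once later stages (or live use) have had time to reveal the escapes, and it drifts downward as more escapes surface.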
Testing Dialogues: Technical Issues

Test professionals face a myriad of issues with immature development technologies, changing systems environments, increasingly complex applications, and 24/7 reliability demands. We must choose the right methodology and best testing techniques to meet these challenges, all with a limited set of tools and not enough time. In this double-track session, you'll be able to ask for help from your peers, share your expertise with the group, and develop some new approaches to your biggest challenges. Johanna Rothman and Esther Derby facilitate this session, focusing on topics such as model-based testing, security testing, testing without requirements, testing in the XP/Agile world, and configuration management. Discussions are structured in a framework so participants will receive a summary of their work product after the conference.

Johanna Rothman, Rothman Consulting Group, Inc.
Use of Inspections for Product and Process Improvement

It is widely known that software inspections are a cost-effective approach for finding defects in source code as well as in other project documents such as requirements specifications. You can take your inspection process to the next level by using inspections and the resulting data for process improvement throughout your software organization. Lawrence Day presents a basic process flow for inspecting source code and documentation and the keys to implementing a cost-effective inspection approach. Then, he offers a proven approach for using the inspection data to identify process and product improvement opportunities. By making inspections a part of your development process, you'll come to see them as a valuable improvement tool.

  • The basic software inspection process, paths, and benefits
  • Inspections as a process improvement process
Lawrence Day, Boeing
Quality Assurance and .NET: How to Effectively Test Your New .NET Applications

If your organization is migrating to .NET, you need to be concerned about how .NET will impact your department's testing and quality assurance efforts. First you need to understand the technology underlying .NET applications; then you need to learn what is different about testing applications using this technology. Dan Koloski provides an overview of .NET technologies and the special considerations you need to know for testing them. Learn about testing practices that have worked for Dan and others to help your organization deliver high quality .NET applications.

  • The .NET architecture stack
  • Common and uncommon risk factors with .NET applications
  • The pitfalls of testing .NET technologies and tooling available to help
Dan Koloski, Empirix Software
Pair-Wise Testing: Moving from Theory to Practice

We've all heard the phrase, "You can't test everything." This axiom is particularly appropriate for testing multiple combinations of options, selections, and configurations. To test all combinations in some of these instances would require millions of tests. A systematic way to reduce the number of tests is called pair-wise testing. Gretchen Henrich describes the process of integrating this technique into your test practices and offers her experiences testing multiple releases of a product using pair-wise testing. She discusses her company's migration from a textbook orthogonal array approach to free "all-pairs" software, and finally to a commercial tool that also creates the test data for pair-wise testing.

  • Automatically create test designs and even test data based on all pairs of inputs
Gretchen Henrich, LexisNexis
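The core idea behind the "all-pairs" tools mentioned above is that most configuration bugs involve the interaction of at most two parameters, so it is enough for every pair of values to appear together in some test, rather than every full combination. A minimal greedy sketch of that idea (not any particular tool's algorithm):

```python
from itertools import combinations, product

def all_pairs(parameters):
    """Greedy pair-wise test selection: repeatedly pick the full
    combination that covers the most not-yet-covered value pairs.
    Fine for small parameter sets; real tools scale much better."""
    names = list(parameters)
    # Every pair of values drawn from two different parameters must
    # appear together in at least one selected test.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(parameters[a], parameters[b]):
            uncovered.add(((i, va), (j, vb)))
    candidates = [dict(zip(names, combo))
                  for combo in product(*(parameters[n] for n in names))]

    def pairs_of(test):
        vals = [(i, test[n]) for i, n in enumerate(names)]
        return set(combinations(vals, 2))

    suite = []
    while uncovered:
        best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
        suite.append(best)
        uncovered -= pairs_of(best)
    return suite
```

For example, three browsers, two operating systems, and two databases give 12 full combinations, but a pair-wise suite covers every browser/OS, browser/database, and OS/database pairing in roughly half that many tests.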
Automated API Testing: A Model-Based Approach

API testing is difficult, even with automated support. However, with traditional automated testing solutions, the cost to create and maintain a test suite can be more than the savings realized from automated test execution. By creating a model of the API to test and generating the test scripts automatically from the model, test automation becomes more cost-effective. Kirk Sayre describes how to create models of APIs; how to take the expected use of the API under test into account with Markov chains; how to augment the models with the information needed to generate automated test scripts; and how to use and interpret test results. You'll see concrete examples of automated model-based testing of APIs written in Java, PHP, and C.

  • Create a model of an API for use in model-based testing
  • The basics of testing using Markov chain usage models
Kirk Sayre, The University of Tennessee
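The Markov chain usage models mentioned above represent the API's operations as states, with weighted transitions reflecting how often one call tends to follow another in expected use; random walks through the model then yield test sequences biased toward realistic usage. A minimal sketch, with a hypothetical key-value store API as the system under test:

```python
import random

# Hypothetical usage model of a simple key-value store API.
# Keys are states (API operations); values are (next_state, weight)
# pairs reflecting assumed relative usage frequency.
MODEL = {
    "start": [("open", 1.0)],
    "open":  [("put", 0.6), ("get", 0.3), ("close", 0.1)],
    "put":   [("put", 0.3), ("get", 0.5), ("close", 0.2)],
    "get":   [("put", 0.4), ("get", 0.3), ("close", 0.3)],
    "close": [],  # terminal state: the walk ends here
}

def generate_sequence(model, rng, start="start"):
    """Random walk through the usage model, producing one test
    sequence of API calls to be turned into an executable script."""
    state = start
    seq = []
    while model[state]:
        nxt, = rng.choices([s for s, _ in model[state]],
                           weights=[w for _, w in model[state]])
        seq.append(nxt)
        state = nxt
    return seq
```

Each generated sequence would then be mapped to concrete API calls plus oracle checks; weighting the edges by expected usage concentrates test effort where the API will actually be exercised.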
Getting a Grip on Exploratory Testing

Many testers have heard about exploratory testing, and everyone does some testing without a script or a detailed plan. But how is exploratory testing different from ad-hoc testing? In this interactive session, James Lyndsay demonstrates the approaches to exploratory testing he often uses at work. With specially built exercises, he explains his thought process as he explores the application. He analyzes applications by looking at their inputs and outputs and by observing their behaviors and states. He employs both cultural and empirical models to establish a basis for observing whether a test succeeds or fails. Through this process, you will gain insights about how to improve your own exploratory style.

  • Using active play to parse and understand a sample application
  • Analysis of inputs, outputs, and their linkage to enhance explorations
James Lyndsay, Workroom Productions
Lessons Learned from End-to-End Systems Testing

End-to-end testing of large, distributed systems is a complex and often expensive task. Interface testing at this high level involves multiple sub-systems and often requires cooperation among many groups. From mimicking real-world production configurations to difficult project management and risk issues, Marc Bloom describes the challenges and successes he's experienced at Capital One in performing end-to-end testing. Learn how to define and scope end-to-end system testing and develop a customized framework for repeatable test execution. Find out ways to support knowledge sharing across different test teams to improve the coverage and efficiency of your interface testing.

  • The benefits and value added of comprehensive end-to-end testing
  • Guidelines for developing an end-to-end test plan and implementing it
Marc Bloom, Capital One Financial Corp

