Conference Presentations

Assessing Automated Testing Tools: A "How To" Evaluation Approach

You've been assigned the task of evaluating automated testing tools for your organization. Whether it's your first experience or you're looking to make a change, selecting the "best" automated testing tool can be a daunting task. With so many toolsets available, it is easy to choose one that doesn't provide the functionality your organization actually needs. This presentation takes you through a number of steps that should be understood, and addressed, before acquiring any regression or performance-based toolset.

  • Learn to correlate your organization's requirements and existing framework with the toolsets available
  • Examine how integrated components help to identify potential problems
  • Determine what to ask/require from each vendor before committing to a purchase
Dave Kapelanski, Compuware Corporation
Optimize Application Infrastructure Performance Before Going Live

Our surveys have shown that over 75 percent of applications fail to meet their performance goals. With today's complex multi-tier systems and integration projects, finding these performance bottlenecks can be like finding a needle in a haystack. Customers need a systematic approach to identifying, isolating, and resolving performance bottlenecks prior to going live. Mr. Radhakrishnan will address methodology and technology available today to solve performance problems for multi-tier systems in a deployment or production setting. The presentation will cover:

  • Practices for dealing with performance optimization in the lab and in deployment
  • Solo tools for databases, J2EE apps, and ERP/CRM solutions, as well as holistic, system-wide tools
  • A complete methodology for tuning and optimization practices
  • Expanding performance assessment and optimization to capacity planning
Rajesh Radhakrishnan, Mercury Interactive
A Formula for Test Automation Success: Finding the Right Mix of Skill Sets and Tools

Not sure what elements to consider now that you're ready to embark on the mission of automating your testing? This session explores the key mix of skill sets, processes, and tools that can make or break any automation effort. The instructor shows you how to develop an informed set of priorities that can make the difference between your effort's success and project failure.

  • Create better, more reusable tests to improve efficiency and effectiveness
  • Increase the value and reputation of QA within your organization
  • Establish a closer relationship with developers based on mutual respect
Gerd Weishaar, IBM Rational software
Differential Testing: A Cost-Effective Automated Approach for Large, Complex Systems

Differential testing is an automated method you can use in testing large, complex systems. It's especially useful in situations where part or all of an existing production system is being upgraded, and the end-to-end functionality of the new system is expected to be the same as the old one. Rick Hower uses two case studies to provide descriptive examples of this novel and surprisingly effective approach. One case involves the rewrite of a complex business rule processing system for a large financial institution; the second involves the replacement of a critical sub-system in a telecom billing process.

  • Learn how to determine if differential testing will be useful for a project
  • Obtain some useful methods for selecting appropriate automated test data
  • Discover critical factors in the success of differential testing
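The core idea can be sketched in a few lines: run identical inputs through the old and new implementations and report any disagreements. This is a minimal, hypothetical illustration; the two "systems" below are invented stand-in functions, not the systems from the case studies.

```python
# Differential testing sketch: the legacy and rewritten systems are expected
# to produce identical end-to-end outputs, so any mismatch is a defect lead.

def legacy_fee(amount: float) -> float:
    """Stand-in for the old production business rule (hypothetical)."""
    return round(amount * 0.025, 2)

def rewritten_fee(amount: float) -> float:
    """Stand-in for the rewritten system, expected to match the old one."""
    return round(amount * 25 / 1000, 2)

def diff_test(inputs, old, new):
    """Return the inputs on which the two systems disagree."""
    mismatches = []
    for x in inputs:
        old_out, new_out = old(x), new(x)
        if old_out != new_out:
            mismatches.append((x, old_out, new_out))
    return mismatches

if __name__ == "__main__":
    sample = [10.00, 99.99, 12345.67, 0.01]
    print(diff_test(sample, legacy_fee, rewritten_fee))  # [] when behavior matches
```

In practice the input sets are far larger (often captured production data), which is why selecting appropriate automated test data is one of the session's topics.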
Rick Hower, Digital Media Group, Inc.
The Journey to Test Automation Maturity

There's a pattern to the way test automation typically emerges within an organization. Since you want your automation projects to excel, studying those patterns can help you prepare yourself and your team for what lies ahead. By doing this you'll avoid pitfalls, counteract resistance to automation, and set realistic expectations for what automation can do. From other people's common experiences, you can extract lessons that will help you at every stage of automation maturity.

  • Explore the patterns from pre-launch to advanced levels of test automation maturity
  • Learn traps to avoid and tips for success
  • Discover ways to sustain the benefits of automation even after the first flush of enthusiasm has passed
Dorothy Graham, Grove Consultants
Implementing an Enterprise Monitoring Solution: Testing and Operations Deliver Together

Achieving high levels of availability and scalability for large server environments is a challenge that impacts all aspects of application design, development, testing, deployment, and operations. In this presentation, Nancy Landau provides a real-world case study of a successful implementation of a multi-layered enterprise system that supports 600-plus servers in multiple sites. You'll see how a wide assortment of monitoring tools was integrated to assess the health of server farms, the individual components within the environments, and the applications themselves. Learn how test engineers and operations staff work together to improve performance and reliability.

  • Discover how the team overcame process and tool challenges
  • Dissect the case study to determine what led to the project's success
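The roll-up pattern the abstract describes, probing individual servers and aggregating results per site, can be sketched briefly. This is a hypothetical illustration; the site names, server names, probe logic, and threshold are all invented for the example, not taken from the case study.

```python
# Multi-layer health-check sketch: probe each server, roll results up per
# site, and flag sites whose availability falls below a threshold.

SITES = {
    "east": ["web-e1", "web-e2", "db-e1"],
    "west": ["web-w1", "db-w1"],
}

def probe(server: str) -> bool:
    """Stand-in for a real check (ping, HTTP health endpoint, agent query)."""
    return not server.endswith("w1")  # pretend the 'w1' servers are down

def site_health(sites, check, threshold=0.75):
    """Return {site: (availability, healthy_flag)} for each site."""
    report = {}
    for site, servers in sites.items():
        up = sum(1 for s in servers if check(s))
        availability = up / len(servers)
        report[site] = (availability, availability >= threshold)
    return report

if __name__ == "__main__":
    print(site_health(SITES, probe))
```

Real deployments layer several such aggregations (component, server, farm, application), which is where integrating multiple monitoring tools comes in.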
Nancy Landau, Fidelity Information Services
Fault Injection to Stress Test Windows Applications

Testing an application's robustness and tolerance for failures in its natural environment can be difficult or impossible. Developers and testers buy tool suites to simulate load, write programs that fill memory, and create large files on disk, all to determine the behavior of their application under test in a hostile and unpredictable environment. Herbert Thompson describes and demonstrates new, cutting-edge methods for simulating stress that are more efficient and reliable than current industry practices. Using Windows Media Player and Winamp as examples, he demonstrates how new methods of fault injection can be used to simulate stress on Windows applications.

  • Runtime fault injection as a testing and assessment tool
  • Cutting-edge stress-testing techniques
  • An in-depth case study on runtime fault injection
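To make the idea concrete: runtime fault injection intercepts an API the application depends on and forces it to fail, so error-handling paths can be exercised without actually exhausting the resource. The sketch below is a hypothetical, simplified analogue in Python (the session itself targets Windows applications at a lower level); the `app_save` function and the injected "disk full" error are invented for the example.

```python
# Fault-injection sketch: temporarily replace the built-in open() so that
# write attempts raise "No space left on device", without filling any disk.
import builtins
from contextlib import contextmanager

@contextmanager
def inject_write_failure():
    """Make file writes fail while the context is active, then restore."""
    real_open = builtins.open
    def failing_open(*args, **kwargs):
        mode = kwargs.get("mode", args[1] if len(args) > 1 else "r")
        if "w" in mode or "a" in mode:
            raise OSError(28, "No space left on device")  # errno 28: ENOSPC
        return real_open(*args, **kwargs)
    builtins.open = failing_open
    try:
        yield
    finally:
        builtins.open = real_open

def app_save(path, data):
    """Stand-in for application code whose robustness is under test."""
    try:
        with open(path, "w") as f:
            f.write(data)
        return "saved"
    except OSError:
        return "handled disk-full error"

if __name__ == "__main__":
    with inject_write_failure():
        print(app_save("out.txt", "hello"))  # handled disk-full error
```

The payoff is control: the fault fires exactly when and where the tester wants, which is what makes this approach more efficient and repeatable than filling real disks or memory.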
Herbert Thompson, Security Innovation
Planned Chaos: Malicious Test Day

In a test and verification organization, it can be easy to fall into predictable ruts and miss finding important defects. Use the creativity of your test team, developers, users, and managers to find those hidden bugs before the software goes into production. Ted Rivera details how his organization conceived of, administers, evaluates, and benefits from periodic malicious test days. Learn ways to make your days of planned chaos productive, valuable, and, yes, even fun. Give both testers and non-testers an opportunity to find inventive ways to break your products and you'll get some surprising results.

  • The danger of too much predictability and the results you can expect from a malicious test day
  • Create and administer your own malicious test day
  • Maximize the benefits of malicious test days
Ted Rivera, Tivoli/IBM Quality Assurance
Influencing Others: Business Speak for Testers

One of the major goals of testing is to provide information to decision-makers about the quality of the product under test and the risks of releasing or not releasing the software. But whether or not management hears what we have to say depends on how we deliver the message. The truth is management often doesn't care about the number of defects or their severity level; instead, they care about revenue, costs, and customer impact. Find out more about what motivates managers and how to frame test results and status about product quality and product risks in language managers will understand. Learn how to present the business case clearly and convincingly. Then let the chips fall where they may.

  • Key skills you need to influence decisions for the good of the organization
  • How to assess risks and their effects and present a strong case
Esther Derby, Esther Derby Associates Inc.
Ongoing Retrospectives: Project Reviews That Work

As evaluators of quality, testers can often identify critical software development problems during the process. So, how do you get other members of the development team to take notice? Lauri MacKinnon offers real-world case studies to illustrate how ongoing project retrospectives make for better testing and higher quality software. She describes ways to get objective data from project reviews done during the project, giving your team a better chance of making timely adjustments. Learn the basics of conducting a project review and interpreting the resulting data. Then, turn this data into useful process improvement changes within the test group and the rest of the software development department.

  • Project reviews throughout the development cycle for continuous feedback
  • A way to improve testing and other development activities during the project, not after
Lauri MacKinnon, Phase Forward Inc
