Conference Presentations

Early Testing without the Test and Test Again Syndrome

Developers and testers sometimes get into a frustrating dance: the developers provide code for test, the testers run tests and document findings, the developers fix the problems and re-release for testing, the testers rerun the tests and document new, different problems, and so on. For good reasons, teams often begin "formal" testing on new software while it is still being coded. In this situation the testers are working full tilt: running tests, investigating and isolating faults, writing up defects, rerunning the tests, and verifying fixes. Yet a lot of everyone's time is wasted on problems the developers already know about. As a manager, developer, or tester, you can break out of this vicious cycle and get to a better place. Douglas Hoffman shares his experiences seeing, participating in, and getting out of the test and test again syndrome.

  • How to know when the test and test again syndrome is happening to you
Douglas Hoffman, Software Quality Methods LLC
Design Testability and Service Level Measurements into Software

Design and architecture decisions made early in the project have a profound influence on the testability of an application. Although testing is a necessary and integral part of application development, architecture and design considerations rarely include the impacts of development design decisions on testability. In addition, build vs. buy, third-party controls, open source vs. proprietary, and other similar questions can greatly affect an organization's ability to carry out automated functional and performance testing, both positively and negatively. If the software or service is delivered to a separate set of end users who then need to perform testing activities, the problems compound. Join Jay Weiser to find out about the important design and architecture decisions that will ensure more efficient and effective testability of your applications.
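As a loose illustration of the kind of design decision the abstract alludes to (not Jay Weiser's specific recommendations), the sketch below shows one common testability pattern: depending on an interface rather than a concrete external service so that automated tests can substitute a controllable fake. The PaymentGateway, LivePaymentGateway, FakePaymentGateway, and checkout names are hypothetical.

```python
# Design-for-testability sketch: the application code depends on an abstract
# gateway, so tests can swap in a fake without touching the real service.
from typing import Protocol


class PaymentGateway(Protocol):
    def charge(self, account: str, amount: float) -> bool: ...


class LivePaymentGateway:
    def charge(self, account: str, amount: float) -> bool:
        # In production this would call the real third-party service.
        raise NotImplementedError("network call omitted in this sketch")


class FakePaymentGateway:
    def __init__(self) -> None:
        self.calls = []  # records (account, amount) pairs for assertions

    def charge(self, account: str, amount: float) -> bool:
        self.calls.append((account, amount))
        return True


def checkout(gateway: PaymentGateway, account: str, amount: float) -> str:
    # Application logic under test; it never knows which gateway it got.
    return "paid" if gateway.charge(account, amount) else "declined"


if __name__ == "__main__":
    fake = FakePaymentGateway()
    assert checkout(fake, "acct-42", 19.99) == "paid"
    print("checkout exercised with fake gateway:", fake.calls)
```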

Jay Weiser, WorkSoft
Assessing Automated Testing Tools: A "How To" Evaluation Approach

You've been assigned the task of evaluating automated testing tools for your organization. Whether it's your first experience or you're looking to make a change, selecting the "best" automated testing tool can be a daunting task. With so many toolsets available, it is easy to choose one that doesn't provide the functionality you actually need. This presentation takes you through a number of steps that should be understood and addressed before acquiring any regression or performance-based toolset.

  • Learn to correlate your organization's requirements and existing framework with the toolsets available
  • Examine how integrated components help to identify potential problems
  • Determine what to ask/require from each vendor before committing to a purchase
Dave Kapelanski, Compuware Corporation
Optimize Application Infrastructure Performance Before Going Live

Our surveys have shown that over 75 percent of applications fail to meet their performance goals. With today's complex multi-tier systems and integration projects, finding these performance bottlenecks can be like finding a needle in a haystack. Customers need a systematic approach to identifying, isolating, and resolving performance bottlenecks prior to going live. Mr. Radhakrishnan will address methodology and technology available today to solve performance problems for multi-tier systems in a deployment or production setting; a brief timing sketch after the list below illustrates the bottleneck-isolation idea. The presentation will cover:

  • Practices for dealing with performance optimization in the lab and in deployment
  • Standalone tools for databases, J2EE apps, and ERP/CRM solutions, as well as holistic, system-wide tools
  • A complete methodology for tuning and optimization practices
  • Expanding performance assessment and optimization to capacity planning
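As a loose illustration of the bottleneck-isolation idea (not Mercury Interactive's methodology or tooling), the sketch below times each tier of a simulated request separately so the slow tier stands out under repeated load. The web/app/db tier functions and their delays are hypothetical.

```python
# Per-tier timing sketch: rather than measuring only end-to-end response time,
# record each tier's contribution so the bottleneck is visible at a glance.
import time
from collections import defaultdict

tier_times = defaultdict(list)


def timed(tier):
    # Decorator that records elapsed time for the named tier.
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                tier_times[tier].append(time.perf_counter() - start)
        return inner
    return wrap


@timed("web")
def web_tier():
    time.sleep(0.002)


@timed("app")
def app_tier():
    time.sleep(0.005)


@timed("db")
def db_tier():
    time.sleep(0.020)  # the simulated bottleneck


def handle_request():
    web_tier()
    app_tier()
    db_tier()


if __name__ == "__main__":
    for _ in range(25):
        handle_request()
    for tier, samples in sorted(tier_times.items()):
        avg_ms = sum(samples) / len(samples) * 1000
        print(f"{tier}: average {avg_ms:.1f} ms per request")
```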
Rajesh Radhakrishnan, Mercury Interactive
A Formula for Test Automation Success: Finding the Right Mix of Skill Sets and Tools

Not sure what elements to consider now that you're ready to embark on the mission of automating your testing? This session explores the possibilities: the key mix of skill sets, processes, and tools that can make or break any automation effort. The instructor shows you how to develop an informed set of priorities that can make all the difference in your effort's success and help you avoid project failure.

  • Create better, more reusable tests to improve efficiency and effectiveness
  • Increase the value and reputation of QA within your organization
  • Establish a closer relationship with developers based on mutual respect
Gerd Weishaar, IBM Rational software
Differential Testing: A Cost-Effective Automated Approach for Large, Complex Systems

Differential testing is an automated method you can use in testing large, complex systems. It's especially useful in situations where part or all of an existing production system is being upgraded, and the end-to-end functionality of the new system is expected to be the same as the old one. Rick Hower uses two case studies to provide descriptive examples of this novel and surprisingly effective approach. One case involves the rewrite of a complex business rule processing system for a large financial institution; the second involves the replacement of a critical sub-system in a telecom billing process.
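A minimal sketch of the differential-testing idea follows; it is not Rick Hower's implementation. The same inputs are run through the legacy and replacement systems, and any divergence in their outputs is reported. The legacy_system and new_system functions are hypothetical stand-ins for whatever interfaces the two systems expose.

```python
# Differential-testing sketch: feed identical inputs to the old and new
# implementations and flag every record where the outputs disagree.

def legacy_system(record):
    # Placeholder for a call into the existing production system.
    return {"status": "approved", "fee": round(record["amount"] * 0.02, 2)}


def new_system(record):
    # Placeholder for a call into the rewritten system.
    return {"status": "approved", "fee": round(record["amount"] * 0.02, 2)}


def differential_test(records):
    mismatches = []
    for record in records:
        old_out = legacy_system(record)
        new_out = new_system(record)
        if old_out != new_out:
            mismatches.append((record, old_out, new_out))
    return mismatches


if __name__ == "__main__":
    sample = [{"id": i, "amount": i * 10.0} for i in range(1, 101)]
    diffs = differential_test(sample)
    print(f"{len(diffs)} mismatches out of {len(sample)} records")
    for record, old_out, new_out in diffs[:10]:
        print(record["id"], old_out, new_out)
```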

  • Learn how to determine if differential testing will be useful for a project
  • Obtain some useful methods for selecting appropriate automated test data
  • Discover critical factors in the success of differential testing
Rick Hower, Digital Media Group, Inc.
The Journey to Test Automation Maturity

There's a pattern to the way test automation typically emerges within an organization. Since you want your automation projects to excel, studying what typically happens at each stage of that pattern can help you prepare yourself and your team for what lies ahead. By doing this you'll avoid pitfalls, counteract resistance to automation, and set realistic expectations for what automation can do. From other people's common experiences, you can extract information that will help you at all stages of automation maturity.

  • Explore the patterns from pre-launch to advanced levels of test automation maturity
  • Learn traps to avoid and tips for success
  • Discover ways to sustain the benefits of automation even after the first flush of enthusiasm has passed
Dorothy Graham, Grove Consultants
Implementing an Enterprise Monitoring Solution: Testing and Operations Deliver Together

Achieving high levels of availability and scalability for large server environments is a challenge that impacts all aspects of application design, development, testing, deployment, and operations. In this presentation, Nancy Landau provides a real-world case study of a successful implementation of a multi-layered enterprise system that supports 600-plus servers in multiple sites. You'll see how a wide assortment of monitoring tools was integrated to assess the health of server farms, the individual components within the environments, and the applications themselves. Learn how test engineers and operations staff work together to improve performance and reliability.
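In the spirit of the multi-layered monitoring the case study describes, and purely as an assumption-laden sketch rather than the actual implementation, a basic health-check poller might look something like this; the endpoint URLs and pass/fail logic are hypothetical.

```python
# Toy health-check poller: probe each server's health endpoint and report
# which hosts respond successfully. Endpoints below are placeholders.
import urllib.request

ENDPOINTS = [
    "http://app01.example.com/health",
    "http://app02.example.com/health",
    "http://db01.example.com/health",
]


def check(url, timeout=2.0):
    # A host counts as healthy only if it answers with HTTP 200 in time.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


if __name__ == "__main__":
    for url in ENDPOINTS:
        status = "UP" if check(url) else "DOWN"
        print(f"{status:4} {url}")
```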

  • Discover how the team overcame process and tool challenges
  • Dissect the case study to determine what led to the project's success
Nancy Landau, Fidelity Information Services
Fault Injection to Stress Test Windows Applications

Testing an application's robustness and tolerance for failures in its natural environment can be difficult or impossible. Developers and testers buy tool suites to simulate load, write programs that fill memory, and create large files on disk, all to determine the behavior of their application under test in a hostile and unpredictable environment. Herbert Thompson describes and demonstrates new, cutting-edge methods for simulating stress that are more efficient and reliable than current industry practices. Using Windows Media Player and Winamp as examples, he demonstrates how new methods of fault injection can be used to simulate stress on Windows applications.
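As a much simpler analogue of runtime fault injection (not the Windows API interception Herbert Thompson demonstrates), the sketch below patches a single I/O call so it fails on demand and then checks that the code under test degrades gracefully. The save_report function and the simulated disk-full error are hypothetical.

```python
# Small fault-injection analogue: force an I/O failure during a test and
# verify the code under test reports the failure instead of crashing.
import builtins
from unittest import mock


def save_report(path, text):
    # Code under test: should fail cleanly when the environment misbehaves
    # (e.g., disk full, permission denied).
    try:
        with open(path, "w") as handle:
            handle.write(text)
        return True
    except OSError:
        return False


def test_save_report_survives_disk_errors():
    # Inject an OSError into open() to simulate a hostile environment.
    with mock.patch.object(builtins, "open", side_effect=OSError("disk full")):
        assert save_report("report.txt", "hello") is False


if __name__ == "__main__":
    test_save_report_survives_disk_errors()
    print("fault-injection check passed")
```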

  • Runtime fault injection as a testing and assessment tool
  • Cutting-edge stress-testing techniques
  • An in-depth case study on runtime fault injection
Herbert Thompson, Security Innovation
Planned Chaos: Malicious Test Day

In a test and verification organization, it can be easy to fall into predictable ruts and miss finding important defects. Use the creativity of your test team, developers, users, and managers to find those hidden bugs before the software goes into production. Ted Rivera details how his organization conceived of, administers, evaluates, and benefits from periodic malicious test days. Learn ways to make your days of planned chaos productive, valuable, and, yes, even fun. Give both testers and non-testers an opportunity to find inventive ways to break your products and you'll get some surprising results.

  • Recognize the danger of too much predictability and the results you can expect from a malicious test day
  • Create and administer your own malicious test day
  • Maximize the benefits of malicious test days
Ted Rivera, Tivoli/IBM Quality Assurance
