OpenSTA is a solid open-source testing tool that, when used effectively, fulfills the basic needs of performance testing of Web applications. Dan Downing will introduce you to the basics of OpenSTA: downloading and installing the tool, using the Script Modeler to record and customize performance test scripts, defining load scenarios, running tests with Commander, capturing the results with Collector, interpreting the results, and exporting captured performance data into Excel for analysis and reporting. As with many open source tools, self-training is the rule. Support comes not from a big vendor's staff but from fellow practitioners via email. Learn how to find critical documentation that is often hidden in FAQs and discussion forum threads. If you are up to the support challenge, OpenSTA is an excellent alternative to high-priced commercial tools.
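Since the session ends with exporting Collector data for offline analysis, here is a minimal sketch of that last step, assuming the captured timer data has been exported to CSV. The column names ("TimerName", "ElapsedMs") and the file name are hypothetical; match them to whatever headers your actual export contains.

```python
# Minimal sketch: summarizing exported OpenSTA timer data from a CSV file.
# Column and file names below are assumptions, not OpenSTA's actual export format.
import csv
from collections import defaultdict

def summarize(path):
    samples = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            samples[row["TimerName"]].append(float(row["ElapsedMs"]))
    for timer, values in sorted(samples.items()):
        values.sort()
        p95 = values[int(0.95 * (len(values) - 1))]  # 95th percentile
        print(f"{timer}: n={len(values)} "
              f"avg={sum(values)/len(values):.0f}ms p95={p95:.0f}ms")

if __name__ == "__main__":
    summarize("results.csv")  # hypothetical export file
```

The same per-timer averages and percentiles can, of course, be computed in Excel; a script like this is handy when you rerun tests often.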
Software testing is tough; it can be exhausting, and there is never enough time to find all the important bugs. Wouldn't it be nice to have a staff of tireless servants working day and night to make you look good? Well, those days are here. Two decades ago, software test engineers were cheap and machine time was expensive, so test suites had to run as quickly and efficiently as possible. Today, test engineers are expensive and CPUs are cheap, so it becomes reasonable to move test creation onto the shoulders of a test machine army. But we're not talking about run-of-the-mill automated scripts that only do what you explicitly told them … we're talking about programs that create and execute tests you never thought of and find bugs you never dreamed of. In this presentation, Harry Robinson will show you how to create your robot army using tools lying around on the Web.
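To make the idea concrete, here is a minimal model-based testing sketch in the spirit of that robot army: a random walk over a small state model generates action sequences no human scripted. The application model and action names are invented placeholders, not Robinson's actual tools.

```python
# Minimal model-based testing sketch: a random walk over a state model
# generates test sequences automatically. The model below is a toy
# placeholder for a real application's states and actions.
import random

MODEL = {
    "logged_out": {"login": "logged_in"},
    "logged_in":  {"logout": "logged_out",
                   "open_doc": "editing"},
    "editing":    {"save": "logged_in",
                   "close": "logged_in"},
}

def random_walk(steps=100, seed=None):
    rng = random.Random(seed)
    state, trace = "logged_out", []
    for _ in range(steps):
        # Pick any action legal in the current state; sort for determinism.
        action, state = rng.choice(sorted(MODEL[state].items()))
        trace.append(action)
        # In a real harness you would drive the application here and
        # assert that its observed state matches the model's expectation.
    return trace

print(random_walk(steps=10, seed=42))
```

Run overnight with thousands of steps and many seeds, a walker like this exercises paths a hand-written suite never visits.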
Software defects come in many forms, from those that cause a brief inconvenience to those that cause fatalities. Patricia McQuaid believes it is important to study software disasters, to alert developers and testers to be ever vigilant, and to understand that huge catastrophes can arise from what seem like small problems. Examining such failures as the Therac-25, the Denver airport baggage-handling system, the Mars Polar Lander, and the Patriot missile, Pat focuses on the factors that led to these problems, analyzes the problems, and then explains the lessons to be learned for software engineering, safety engineering, government and corporate regulations, and oversight by users of the systems. (A worked example of the Patriot timing error appears after the list below.)
Learn from our mistakes, not in generalities but in specifics
Understand the synergistic effects of errors
Distinguish between technical failures and management failures
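To see how small an error can seed a catastrophe, consider the Patriot case as documented in the GAO report on the 1991 Dhahran incident: the system counted time in tenths of a second in a 24-bit fixed-point register, and because 0.1 has no exact binary representation, each tick was short by roughly 9.5e-8 seconds. The arithmetic below reproduces the drift; the Scud velocity is an approximate published figure.

```python
# Worked example of the Patriot timing bug (1991 Dhahran incident):
# 0.1 s stored in a 24-bit fixed-point register carries a tiny
# truncation error that grows linearly with system uptime.
exact_tenth = 0.1
stored_tenth = 209715 / 2**21                 # 0.1 truncated to 24 bits
per_tick_error = exact_tenth - stored_tenth   # ~9.5e-8 seconds per tick

uptime_seconds = 100 * 3600                   # the battery had run ~100 hours
ticks = uptime_seconds * 10                   # one tick per tenth of a second
clock_drift = ticks * per_tick_error          # ~0.34 seconds

scud_speed = 1676                             # approximate Scud velocity, m/s
print(f"clock drift: {clock_drift:.2f} s")
print(f"tracking error: {clock_drift * scud_speed:.0f} m")  # ~575 m
```

A third of a second sounds trivial; at Scud speeds it moved the range gate over half a kilometer, and the incoming missile was never intercepted.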
Software and technology managers often quote industry benchmarks such as The Standish Group's CHAOS report on software project failures; other organizations use this data to judge their internal operations. Although these external benchmarks can provide insights into your company's software development performance, you need to balance the picture with internal information to make an objective evaluation. Jim Brosseau takes a deeper look at common benchmarks, including the CHAOS report, published SEI benchmark data, and more. He describes the pros and cons of these commonly used industry benchmarks and offers key insights into often-quoted statistics. Take away an approach that Jim has used successfully to help companies understand the relationships among the demographics, practices, and performance of their groups and how these relate to external benchmarks.
James McCaffrey describes in detail how to use measurement theory to create a simple software system that predicts the scores of NFL professional football games with 87 percent accuracy. So, what does this have to do with a conference about developing better software? You can apply the same measurement theory principles embedded in this program to more accurately predict or compare the results of software development, testing, and management. Using the information James presents, you can extend the system to predict scores in other sports and apply the principles to a wide range of software engineering problems, such as predicting Web site usage for a new system, evaluating the overall quality of similar systems, and much more.
Why a purely statistical approach fails to produce accurate predictions in some situations (see the rating sketch after this list)
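For flavor, here is one familiar way a numeric measurement scale can drive game predictions: an Elo-style rating update. This is a generic illustration, not McCaffrey's actual system; the team names and K factor are invented.

```python
# Minimal Elo-style rating sketch -- NOT McCaffrey's system, just one
# standard way a numeric scale supports win/loss prediction.
K = 20.0  # illustrative update step

def expected(r_a, r_b):
    """Probability that A beats B under a logistic rating model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(ratings, winner, loser):
    """Shift rating points from loser to winner after a game."""
    e = expected(ratings[winner], ratings[loser])
    ratings[winner] += K * (1.0 - e)
    ratings[loser]  -= K * (1.0 - e)

ratings = {"Seahawks": 1500.0, "Raiders": 1500.0}
update(ratings, winner="Seahawks", loser="Raiders")
print(ratings)
print(f"P(Seahawks win rematch) = "
      f"{expected(ratings['Seahawks'], ratings['Raiders']):.2f}")
```

The same pattern, defining a scale, updating it from observations, and predicting from it, carries over to comparing modules, builds, or teams.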
You've made a commitment to automate unit testing as part of your development process, or you are spending precious resources on automated functional testing at higher levels. You may be asking yourself: How good are those tests anyway? Are many tests checking the same thing while large parts of the code go completely untested? Are your tests triggering the exceptions that normally show up only in production? Are your automated tests adequately covering the code, the requirements, both, or neither? Andrew discusses the truths and untruths about code coverage and looks at the tools available to gather and report coverage metrics in both the open source and commercial worlds. He describes the different types of code coverage, their advantages and disadvantages, and how to interpret the results of coverage reports.
The concept of mutation testing and how it fits into a code coverage strategy (see the sketch below)
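As a small illustration of that last point, the sketch below mutates a tiny function (turning * into +) and reruns a deliberately weak test. The line is fully covered, yet the mutant survives, which is exactly the gap mutation testing exposes. All names are invented.

```python
# Minimal mutation-testing sketch: apply a small mutation to a function
# and check whether the test suite notices. A surviving mutant means
# the tests exercised the line but never verified its behavior.
import ast

SOURCE = "def price(qty, unit):\n    return qty * unit\n"

class SwapMul(ast.NodeTransformer):
    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Mult):
            node.op = ast.Add()  # mutate '*' into '+'
        return node

def run_tests(fn):
    # Deliberately weak test: 2 * 2 == 2 + 2, so it cannot
    # distinguish multiplication from addition.
    return fn(2, 2) == 4

def compile_fn(tree):
    ns = {}
    exec(compile(tree, "<mutant>", "exec"), ns)
    return ns["price"]

original = compile_fn(ast.parse(SOURCE))
mutant_tree = ast.fix_missing_locations(SwapMul().visit(ast.parse(SOURCE)))
mutant = compile_fn(mutant_tree)

print("original passes:", run_tests(original))  # True
print("mutant survives:", run_tests(mutant))    # True -- 100% coverage, weak assertion
```

Coverage reports would score this test suite perfectly; the surviving mutant tells the truer story.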
Expressing requirements as user stories is one of the most broadly applicable techniques introduced by Extreme Programming. In fact, user stories are an effective approach on all time-constrained projects, not just those using Agile methods. Mike Cohn explains how to identify the functionality for a user story and how to write it well. He describes the attributes all good user stories must exhibit and presents guidelines for writing them. Learn to employ user role modeling when gathering a project’s initial stories. Whether you are a developer, tester, manager, or analyst, you can learn to write user stories that will speed up development and help you deliver the systems that users really need.
Defining a user story and learning how to write one (an example follows this list)
Six attributes of all user stories
Thirteen guidelines for writing better user stories
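For grounding, a user story in the widely used "As a <role>, I want <goal> so that <benefit>" format might read: "As a frequent flyer, I want to rebook a canceled flight from my phone so that I do not have to wait for an agent." This example is illustrative rather than drawn from the session; note how naming the role keeps the story anchored to a real user, and the "so that" clause states the value that makes the story testable.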
While effective for modeling the requirements, analysis, or design of a software system, UML diagrams are typically used in isolation or only for portions of a system. The resulting inconsistencies can create more confusion than clarity, negating the investment in the modeling process. Explore tips, tricks, and techniques to build a complete, traceable UML model for all aspects of a software application. Thomas Bullinger shares ways to gather behavioral requirements and map them into UML use cases. Learn to map use cases onto sequence or activity diagrams and to derive class diagrams from them. In a recursive process, each UML diagram and its associated description is logically related to the others to ensure a complete problem model and a consistent design solution.
Create self-consistent UML models of requirements, behavior, and design
Manage change in UML models to reflect updates to requirements
Designing Web services is all about the interface. Although tools for Web services development have advanced to the point where exposing application functionality is simple, the ease of building Web services does not diminish the need for careful planning and a highly functional design. Dave Mount opens his presentation by spinning a cautionary tale of slapping a Web services interface onto a poorly structured application. This scenario serves as a reference point for a subsequent discussion of the pitfalls of a poorly designed interface. Dave illustrates techniques for correcting problems and improving the Web services interface. Looking at high-profile Web services provided by Google, eBay, and Salesforce.com, he shows how an external perspective that emphasizes consistency and conceptual clarity is key to Web services interface design.
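As a rough illustration of that design contrast, the sketch below juxtaposes a chatty, fine-grained interface that leaks internal structure against a coarse-grained, document-style operation that presents one clear concept to callers. All class and method names are invented, not taken from the session.

```python
# Hypothetical sketch of the interface-design point. Every name here is
# invented for illustration.

# Poorly designed: callers must orchestrate the service's internal steps,
# paying a network round trip for each one.
class ChattyOrderService:
    def create_order(self) -> int: ...
    def set_customer(self, order_id: int, customer_id: int) -> None: ...
    def add_line(self, order_id: int, sku: str, qty: int) -> None: ...
    def price_order(self, order_id: int) -> float: ...
    def submit(self, order_id: int) -> None: ...

# Better: one consistent, self-contained operation per business action.
class OrderService:
    def place_order(self, order_document: dict) -> dict:
        """Accept a complete order document; return the accepted order
        with a server-assigned id and total price."""
        ...
```

The coarse-grained version also keeps the external contract stable when the internal steps change, which is the consistency-and-clarity point the session emphasizes.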