Conference Presentations

How to Build Your Own Robot Army

Software testing is tough - it can be exhausting, and there is never enough time to find all the important bugs. Wouldn't it be nice to have a staff of tireless servants working day and night to make you look good? Well, those days are here. Two decades ago, software test engineers were cheap and machine time was expensive, so test suites had to run as quickly and efficiently as possible. Today, test engineers are expensive and CPUs are cheap, so it becomes reasonable to move test creation onto the shoulders of a test machine army. But we're not talking about run-of-the-mill automated scripts that only do what you explicitly told them … we're talking about programs that create and execute tests you never thought of and find bugs you never dreamed of. In this presentation, Harry Robinson will show you how to create your robot army using tools lying around on the Web.

Harry Robinson, Google
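
A minimal sketch of the idea in Python: a tester that invents its own test sequences by taking a random walk over a system's operations, checking each step against a simple oracle. The BuggyStack class and its planted defect are hypothetical stand-ins for a real system under test, not material from the talk:

    import random

    # Random-walk tester: at each step, either push a value or pop one,
    # comparing the system under test against a trivially correct oracle.
    class BuggyStack:
        """Hypothetical system under test, with a planted defect."""
        def __init__(self):
            self._items = []
        def push(self, x):
            self._items.append(x)
        def pop(self):
            if len(self._items) == 7:    # planted bug: silently drops an item
                self._items.pop()
            return self._items.pop()
        def size(self):
            return len(self._items)

    def random_walk(steps, seed=None):
        rng = random.Random(seed)
        sut, oracle = BuggyStack(), []
        for step in range(steps):
            if oracle and rng.random() < 0.5:
                got, expected = sut.pop(), oracle.pop()
                assert got == expected, f"step {step}: got {got}, expected {expected}"
            else:
                x = rng.randint(0, 99)
                sut.push(x)
                oracle.append(x)
            assert sut.size() == len(oracle), f"step {step}: size mismatch"

    # Fails as soon as the walk happens to drive the stack into the buggy state.
    random_walk(steps=10_000, seed=42)

No human wrote the failing test sequence; the machine stumbled onto it, which is the point of the robot army.
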
Software Disasters and Lessons Learned

Software defects come in many forms - from those that cause a brief inconvenience to those that cause fatalities. Patricia McQuaid believes it is important to study software disasters, to alert developers and testers to be ever vigilant, and to understand that huge catastrophes can arise from what seem like small problems. Examining such failures as the Therac-25, Denver airport baggage handling, the Mars Polar Lander, and the Patriot missile, Pat focuses on the factors that led to these problems, analyzes the problems themselves, and then draws out lessons for software engineering, safety engineering, government and corporate regulation, and oversight by users of the systems.

  • Learn from our mistakes - not in generalities but in specifics
  • Understand the synergistic effects of errors
  • Distinguish between technical failures and management failures
Patricia McQuaid, Cal Poly State University
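
The Patriot failure mentioned above is a vivid case of a small problem compounding: the system counted time in tenths of a second in a 24-bit fixed-point register, and 0.1 has no exact binary representation. The sketch below reproduces the published figures from the GAO analysis of the February 1991 Dhahran incident:

    # Each 0.1-second tick carried a tiny truncation error that grew
    # with uptime; after ~100 hours the clock was far enough off that
    # the tracking gate missed the incoming missile.
    TRUE_TICK = 0.1
    STORED_TICK = int(0.1 * 2**24) / 2**24        # 0.1 truncated to 24 bits

    error_per_tick = TRUE_TICK - STORED_TICK      # ~9.5e-8 seconds
    ticks_per_100_hours = 100 * 3600 * 10
    drift = error_per_tick * ticks_per_100_hours  # ~0.34 seconds

    SCUD_SPEED = 1676                             # metres per second, approx.
    print(f"clock drift after 100 hours: {drift:.2f} s")
    print(f"tracking error: {drift * SCUD_SPEED:.0f} m")   # roughly 570 m
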
Industry Benchmarks: Insights and Pitfalls

Software and technology managers often quote industry benchmarks such as The Standish Group's CHAOS report on software project failures; other organizations use this data to judge their internal operations. Although these external benchmarks can provide insights into your company's software development performance, you need to balance the picture with internal information to make an objective evaluation. Jim Brosseau takes a deeper look at common benchmarks, including the CHAOS report, published SEI benchmark data, and more. He describes the pros and cons of these commonly used industry benchmarks with key insights into often-quoted statistics. Take away an approach that Jim has used successfully with companies to help them gain an understanding of the relationship between the demographics, practices, and performance in their groups and how these relate to external benchmarks.

Jim Brosseau, Clarrus Consulting Group, Inc.
Beat the Odds in Vega$: Measurement Theory Applied to Development and Testing

James McCaffrey describes in detail how to use measurement theory to create a simple software system that predicts the scores of NFL professional football games with 87 percent accuracy. So, what does this have to do with a conference about developing better software? You can apply the same measurement theory principles embedded in this program to more accurately predict or compare the results of software development, testing, and management. Using the information James presents, you can extend the system to predict scores in other sports and apply the principles to a wide range of software engineering problems, such as predicting Web site usage for a new system, evaluating the overall quality of similar systems, and much more.

  • Why the statistical approach fails to produce accurate predictions in some cases
  • Measurement theory to predict and compare
James McCaffrey, Volt Information Sciences, Inc.
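
McCaffrey's actual model is not reproduced here; the sketch below is a generic rating-based predictor in the same spirit, applied to a hypothetical pair of competing modules. The constant K, the starting rating, and the logistic curve are conventional Elo-style choices, not details from the talk:

    from collections import defaultdict

    K = 20.0                                   # update step size (assumed)
    ratings = defaultdict(lambda: 1500.0)      # everyone starts equal

    def expected_win(a, b):
        """Probability that a beats b, from the rating gap (logistic curve)."""
        return 1.0 / (1.0 + 10 ** ((ratings[b] - ratings[a]) / 400.0))

    def record_result(winner, loser):
        """Shift both ratings toward the observed outcome."""
        e = expected_win(winner, loser)
        ratings[winner] += K * (1.0 - e)
        ratings[loser] -= K * (1.0 - e)

    # Hypothetical history: A's build beats B's twice, then loses once.
    for w, l in [("A", "B"), ("A", "B"), ("B", "A")]:
        record_result(w, l)

    print(f"P(A beats B) = {expected_win('A', 'B'):.2f}")   # ~0.52

The same machinery works whenever you can score repeated head-to-head outcomes, whether they are football games or competing designs.
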
Code Coverage Myths and Realities

You've made a commitment to automate unit testing as part of your development process, or you are spending precious resources on automated functional testing at higher levels. You may be asking yourself: How good are those tests anyway? Are many tests checking the same thing while large parts of the code go completely untested? Are your tests triggering the exceptions that normally show up only in production? Are your automated tests adequately covering the code, the requirements - both? neither? Andrew Glover discusses the truths and untruths about code coverage and looks at the tools available to gather and report coverage metrics in both the open source and commercial worlds. He describes the different types of code coverage, their advantages and disadvantages, and how to interpret the results of coverage reports.

  • The concept of mutation testing and how it fits into a code coverage strategy
Andrew Glover, Vanward Technologies
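
One way to see the gap between coverage types is a test that executes every statement yet still misses defects. The functions and test below are hypothetical illustrations, not material from the talk:

    # Full statement coverage, yet two defects go untested.
    def discount(price, percent):
        rate = 0
        if percent > 0:                  # false branch is never taken below
            rate = percent / 100
        return price * (1 - rate)

    def average_unit_price(total, quantity):
        return total / quantity          # quantity == 0 slips past the test

    def test_happy_path():
        assert discount(100, 50) == 50.0             # executes every line
        assert average_unit_price(90, 3) == 30.0     # executes the only line

    test_happy_path()    # passes; statement coverage reports 100%
    # Branch coverage would flag the untaken 'percent <= 0' path, and a
    # mutation tool would note that changing '>' to '>=' leaves this test
    # passing - both signs the suite is weaker than the number suggests.
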
User Stories for Better Software Requirements

The technique of expressing requirements as user stories is one of the most broadly applicable techniques introduced by Extreme Programming. In fact, user stories are an effective approach on all time-constrained projects, not just those using Agile methods. Mike Cohn explains how to identify the functionality for a user story and how to write it well. He describes the attributes all good user stories must exhibit and presents guidelines for writing them. Learn to employ user role modeling when gathering a project’s initial stories. Whether you are a developer, tester, manager, or analyst, you can learn to write user stories that will speed up development and help you deliver the systems that users really need.

  • Defining a user story and learning how to write one
  • Six attributes of all user stories
  • Thirteen guidelines for writing better user stories
Mike Cohn, Mountain Goat Software
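
For readers new to the technique, the three-part template Cohn popularized - a role, a goal, and a benefit - reads like this (the sample story is illustrative, not from the session):

    As a <role>, I want <goal> so that <benefit>.

    As a frequent flyer, I want to rebook a canceled flight from my phone
    so that I don't have to wait in line at a service desk.
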
Building Traceable UML Models

While effective for modeling the requirements, analysis, or design of a software system, UML diagrams are typically used in isolation or only for portions of a system. The resulting inconsistencies can create more confusion than clarity, negating the investment in the modeling process. Explore tips, tricks, and techniques for building a complete, traceable UML model covering all aspects of a software application. Thomas Bullinger shares ways to gather behavioral requirements and map them into UML use cases. Learn to map use cases onto sequence or activity diagrams and to extract class diagrams from them. In a recursive process, each of the UML diagrams and its associated descriptions is logically related to the others to ensure a complete problem model and a consistent design solution.

  • Create self-consistent UML models of requirements behavior and designs
  • Manage change in UML models to reflect updates to requirements
Thomas Bullinger, ArchSynergy, Ltd.
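
As a toy illustration of what traceability can mean once the model reaches code (not an example from the talk), each class and method below carries a tag naming the hypothetical use case and diagram element it realizes, so a requirement change can be followed down into the implementation:

    # Use case UC-01 "Withdraw Cash" and its steps are invented for this sketch.
    class Account:
        """Realizes: UC-01 'Withdraw Cash'; class diagram entity Account."""
        def __init__(self, balance: float):
            self.balance = balance

        def debit(self, amount: float) -> None:
            """Trace: UC-01 step 4 - 'the system debits the account'."""
            if amount > self.balance:
                raise ValueError("insufficient funds")   # UC-01 extension 4a
            self.balance -= amount

    class ATM:
        """Realizes: UC-01 'Withdraw Cash'; sequence diagram lifeline ATM."""
        def __init__(self, account: Account):
            self.account = account

        def withdraw(self, amount: float) -> float:
            """Trace: UC-01 steps 3-5 - validate, debit, dispense."""
            self.account.debit(amount)
            return amount                                # step 5: dispense cash

    atm = ATM(Account(balance=100.0))
    print(atm.withdraw(40.0), atm.account.balance)       # 40.0 60.0
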
Building Secure Software with New Web Technologies

The latest generation of Web technologies - AJAX, improved client-side scripting, support for extensive DOM manipulation in browsers, content syndication, Web service APIs, and simple interchange formats such as JSON - is driving new, powerful Web applications. Based on his work on real-world "Web 2.0" applications, Ivan Krstic discusses the security implications of these new technologies. Ivan describes specific attacks such as Web-based worms, XSS, CSRF, and HTTP response splitting and offers advice on mitigating security risks during the engineering process. Learn how standard security guidelines such as the Confidentiality-Integrity-Availability (CIA) model apply to the modern Web, and about the role of cryptography and crypto-engineering in Web security.

Ivan Krstic, Harvard University
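
Three of the attacks named above reduce to a few lines each, along with their standard mitigations. This Python sketch uses only the standard library; the surrounding request and session handling is assumed:

    import html
    import secrets

    def render_comment(user_text):
        # XSS: never interpolate raw user input into HTML. html.escape turns
        # <script> into &lt;script&gt; so the browser renders it as text.
        return f"<p>{html.escape(user_text)}</p>"

    def set_redirect_header(location):
        # HTTP response splitting: a CR/LF smuggled into a header value lets
        # an attacker inject extra headers or a fake response body. Reject it.
        if "\r" in location or "\n" in location:
            raise ValueError("CR/LF in header value")
        return ("Location", location)

    def issue_csrf_token():
        # CSRF: issue an unguessable token, stored in the session AND
        # embedded in the form, then require the two copies to match.
        return secrets.token_urlsafe(32)

    def check_csrf(session_token, submitted_token):
        return secrets.compare_digest(session_token, submitted_token)

    print(render_comment('<script>alert(1)</script>'))
    # -> <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
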
Introduction to the Capability Maturity Model® Integration (CMMI®)

Many organizations have achieved success using the SEI Capability Maturity Model Integration (CMMI®) as a framework for their process improvement programs. Steven Lett describes the structure and contents of the CMMI®, including the continuous and staged representations of the model. He discusses each of the five maturity levels and their process areas, the specific and generic practices that exist within each process area, and the typical process documentation and work products required for each. Learn an effective approach that companies take in driving change across their software engineering organizations. Find out how the model is meant to be interpreted, and take back examples of the successes that companies have experienced using both the CMMI® and the earlier Capability Maturity Model (CMM®).

Capability Maturity Model® and CMMI® are registered trademarks of Carnegie Mellon University.

Steven Lett, The David Consulting Group
Don't Settle for Better Software - Make Truly Great Software

Too many teams create very decent products that, for whatever reason, fail to rise above the crowd and truly capture the popular imagination. They are surprised when their products are mostly ignored by a marketplace that seems captivated by some other shiny geegaw that is functionally inferior and more expensive. In many product categories, from software to consumer electronics, the product with the most market share is often more expensive and less functional than the number two product. Joel Spolsky will explore why this happens and suggest some ways to design a "blue chip" product that people will love. After you have built good software using the usual repertoire of debugging, usability testing, etc., you have to go still further and think about beauty, user happiness, and emotional impact. Let Joel help you figure out what makes truly great software great.

Joel Spolsky, Fog Creek Software
