Conference Presentations

Static Analysis and Secure Code Reviews

Security threats pose an increasing danger to consumers and to your organization. Paco Hope provides the latest on static analysis techniques for finding vulnerabilities and the tools you need for performing white-box secure code reviews. He offers guidance on selecting and using source code static analysis and navigation tools. Learn why secure code reviews are imperative and how to implement a secure code review process in terms of tasks, tools, and artifacts. In addition to describing the steps in the static analysis process, Paco explains methods for examining trust boundaries, error handling, and other "hot spots" in software. Find out how the analysis techniques of Attack Resistance Analysis, Ambiguity Analysis, and Underlying Framework Analysis expose risk and help prioritize remediation of insecure code.

  • Why secure code reviews are the right approach for finding security defects
Paco Hope, Cigital
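The "hot spot" idea above can be illustrated with a toy checker. This is a minimal sketch, not any real static analysis tool: it flags source lines that splice variables into SQL strings, one classic pattern a white-box review hunts for. All class and method names here are invented for illustration.

```java
// Minimal sketch of a pattern-based static check: flag lines that build
// SQL by string concatenation. Illustrative only -- real tools analyze
// an abstract syntax tree, not raw text.
import java.util.ArrayList;
import java.util.List;

public class HotSpotScanner {
    // Returns 1-based line numbers where SQL appears to be concatenated.
    public static List<Integer> findSqlConcatenation(List<String> sourceLines) {
        List<Integer> hits = new ArrayList<>();
        for (int i = 0; i < sourceLines.size(); i++) {
            String line = sourceLines.get(i).toUpperCase();
            if ((line.contains("SELECT") || line.contains("INSERT"))
                    && line.contains("\" +")) {
                hits.add(i + 1);
            }
        }
        return hits;
    }

    public static void main(String[] args) {
        List<String> code = List.of(
            "String q = \"SELECT * FROM users WHERE name='\" + name + \"'\";",
            "stmt = conn.prepareStatement(\"SELECT * FROM users WHERE name=?\");");
        System.out.println(findSqlConcatenation(code)); // only line 1 is flagged
    }
}
```

Real tools parse the code rather than matching text, which is why they catch variants this crude line matcher would miss.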
Improving Code Quality with Eclipse and its Java Plug-ins

One of the features that makes Eclipse so popular within the Java community is its abundance of easy-to-use plug-ins, many of them freely available open source tools. Plug-ins are available for virtually anything, from database connectivity to instant messaging. Because code quality is a critical aspect of production software applications, Eclipse also has built-in tools that help developers write and deliver high-quality code. Levent Gurses has employed a number of external plug-ins, including PMD, CheckStyle, JDepend, FindBugs, Cobertura, CPD, and Metrics, to transform Eclipse into a powerhouse for writing, testing, and releasing high-quality Java code. Levent shows you how to use Eclipse to improve your team's coding habits, enforce organizational standards, and zap bugs before they reach the client.

  • The standard quality check tools available in Eclipse
Levent Gurses, Stelligent
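To give a concrete taste of the defects such plug-ins report, here is the classic string-identity bug, a pattern both FindBugs and PMD warn about. The class below is an invented illustration, not material from the talk.

```java
// Illustration of a defect that static checkers flag: comparing strings
// with == tests reference identity, not content.
public class StringCompare {
    // Buggy form a checker would flag: reference comparison.
    public static boolean sameByReference(String a, String b) {
        return a == b;
    }

    // Corrected form: value comparison with equals().
    public static boolean sameByValue(String a, String b) {
        return a != null && a.equals(b);
    }

    public static void main(String[] args) {
        String x = new String("quality");
        String y = new String("quality");
        System.out.println(sameByReference(x, y)); // false: different objects
        System.out.println(sameByValue(x, y));     // true: same content
    }
}
```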
Testing Web Applications for Security Defects

Approximately three-fourths of today's successful system security breaches are perpetrated not through network or operating system security flaws but through customer-facing Web applications. How can you ensure that your organization is protected from holes that let hackers invade your systems? Only by thoroughly testing your Web applications for security defects and vulnerabilities. Michael Sutton describes the three basic security testing approaches available to testers: source code analysis, manual penetration testing, and automated penetration testing. Michael explains the key differences among these methods, the types of defects and vulnerabilities each detects, and the advantages and disadvantages of each. Learn how to get started in security testing and how to choose the best strategy for your organization.

  • Basic security vulnerabilities in Web applications
  • Skills needed in security testing
Michael Sutton, SPI Dynamics
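As a minimal illustration of the kind of basic vulnerability the session covers, the sketch below shows SQL injection against a naive query builder. buildQuery and looksInjectable are hypothetical stand-ins; real security testing targets a running application.

```java
// Toy demonstration of SQL injection: attacker input spliced directly
// into a query changes the query's logic. Names are illustrative.
public class InjectionDemo {
    // Vulnerable: user input is concatenated into the SQL text.
    public static String buildQuery(String userName) {
        return "SELECT * FROM accounts WHERE name = '" + userName + "'";
    }

    // A crude tester's check: did a classic payload survive into the query?
    public static boolean looksInjectable(String query) {
        return query.contains("' OR '1'='1");
    }

    public static void main(String[] args) {
        String q = buildQuery("alice' OR '1'='1");
        System.out.println(q);
        System.out.println(looksInjectable(q)); // true: payload survived intact
    }
}
```

A parameterized query (PreparedStatement with `?` placeholders) would keep the payload inert as data rather than executable SQL.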
Simple Ain't Easy: Software Design Myths and Realities

The definition of "simple design" varies from person to person. But achieving simplicity isn't just about maintaining simple point solutions.

Brad Appleton
Open Source Tools for Web Application Performance Testing

OpenSTA is a solid open source testing tool that, when used effectively, fulfills the basic needs of performance testing of Web applications. Dan Downing introduces you to the basics of OpenSTA: downloading and installing the tool, using the Script Modeler to record and customize performance test scripts, defining load scenarios, running tests with Commander, capturing the results with Collector, interpreting the results, and exporting captured performance data into Excel for analysis and reporting. As with many open source tools, self-training is the rule. Support comes not from a big vendor's staff but from fellow practitioners via email. Learn how to find critical documentation that is often hidden in FAQs and discussion forum threads. If you are up to the support challenge, OpenSTA is an excellent alternative to high-priced commercial tools.

  • Learn the capabilities of OpenSTA
Dan Downing, Mentora Inc.
How to Build Your Own Robot Army

Software testing is tough: it can be exhausting, and there is never enough time to find all the important bugs. Wouldn't it be nice to have a staff of tireless servants working day and night to make you look good? Well, those days are here. Two decades ago, software test engineers were cheap and machine time was expensive, so test suites had to run as quickly and efficiently as possible. Today, test engineers are expensive and CPUs are cheap, so it becomes reasonable to move test creation onto the shoulders of a test machine army. But we're not talking about run-of-the-mill automated scripts that only do what you explicitly told them … we're talking about programs that create and execute tests you never thought of and find bugs you never dreamed of. In this presentation, Harry Robinson shows you how to create your own robot army using tools lying around on the Web.

Harry Robinson, Google
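In the spirit of the talk, here is a toy "robot": a model-based tester that generates its own test sequences from random operations and checks an invariant after every step. The Counter class is a hypothetical stand-in for a real system under test, not anything from the presentation.

```java
// A miniature test robot: it invents random operation sequences and
// verifies an invariant the human tester states once.
import java.util.Random;

public class RobotTester {
    // Trivial system under test; invariant: value is never negative.
    static class Counter {
        private int value = 0;
        void increment() { value++; }
        void decrement() { if (value > 0) value--; }
        int value() { return value; }
    }

    // Runs a random walk of operations, checking the invariant each step.
    public static boolean runRandomTests(long seed, int steps) {
        Random rng = new Random(seed);
        Counter c = new Counter();
        for (int i = 0; i < steps; i++) {
            if (rng.nextBoolean()) c.increment(); else c.decrement();
            if (c.value() < 0) return false; // invariant violated: bug found
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(runRandomTests(42L, 10_000)); // true: invariant held
    }
}
```

Left running overnight with fresh seeds, a generator like this explores sequences no hand-written script would try.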
Software Disasters and Lessons Learned

Software defects come in many forms, from those that cause a brief inconvenience to those that cause fatalities. Patricia McQuaid believes it is important to study software disasters so that developers and testers stay ever vigilant and understand that huge catastrophes can arise from what seem like small problems. Examining such failures as the Therac-25, the Denver airport baggage-handling system, the Mars Polar Lander, and the Patriot missile, Pat focuses on the factors that led to these problems, analyzes the problems themselves, and explains the lessons to be learned about software engineering, safety engineering, government and corporate regulations, and oversight by users of the systems.

  • Learn from our mistakes, not in generalities but in specifics
  • Understand the synergistic effects of errors
  • Distinguish between technical failures and management failures
Patricia McQuaid, Cal Poly State University
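One of the disasters above turns on simple arithmetic: the Patriot's 24-bit fixed-point clock could not represent 0.1 seconds exactly, and the error accumulated to roughly 0.34 seconds over 100 hours of uptime, enough to miss an incoming Scud. The sketch below reproduces the same mechanism with 32-bit floats (a different format, so the drift differs in magnitude, but the cause is identical).

```java
// Accumulating a binary approximation of 0.1 s per tick: the clock
// drifts away from true time, as in the Patriot missile failure.
public class ClockDrift {
    // Returns |accumulated clock - true elapsed seconds| after N ticks.
    public static double driftAfter(int tenthsOfSeconds) {
        float clock = 0.0f;
        for (int i = 0; i < tenthsOfSeconds; i++) {
            clock += 0.1f; // 0.1 has no exact binary representation
        }
        double trueSeconds = tenthsOfSeconds / 10.0;
        return Math.abs(clock - trueSeconds);
    }

    public static void main(String[] args) {
        int hundredHours = 100 * 3600 * 10; // ticks in 100 hours
        System.out.println(driftAfter(hundredHours)); // drift well over a second
    }
}
```

The fix in such systems is to count ticks in an integer and convert once, rather than summing an inexact fraction millions of times.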
Industry Benchmarks: Insights and Pitfalls

Software and technology managers often quote industry benchmarks such as The Standish Group's CHAOS report on software project failures; other organizations use this data to judge their internal operations. Although these external benchmarks can provide insights into your company's software development performance, you need to balance the picture with internal information to make an objective evaluation. Jim Brosseau takes a deeper look at common benchmarks, including the CHAOS report, published SEI benchmark data, and more. He describes the pros and cons of these commonly used industry benchmarks with key insights into often-quoted statistics. Take away an approach that Jim has used successfully with companies to help them gain an understanding of the relationship between the demographics, practices, and performance in their groups and how these relate to external benchmarks.

Jim Brosseau, Clarrus Consulting Group, Inc.
Beat the Odds in Vega$: Measurement Theory Applied to Development and Testing

James McCaffrey describes in detail how to use measurement theory to create a simple software system that predicts the scores of NFL professional football games with 87 percent accuracy. So, what does this have to do with a conference about developing better software? You can apply the same measurement theory principles embedded in this program to more accurately predict or compare the results of software development, testing, and management. Using the information James presents, you can extend the system to predict scores in other sports and apply the principles to a wide range of software engineering problems, such as predicting Web site usage for a new system, evaluating the overall quality of similar systems, and much more.

  • Why a purely statistical approach fails for some kinds of predictions
  • Measurement theory to predict and compare
James McCaffrey, Volt Information Sciences, Inc.
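To make the idea concrete, here is one way such a predictor can work: a margin-based rating updater in the Elo family. This is an illustrative sketch, not James McCaffrey's actual system; the K constant and the update rule are assumptions.

```java
// A tiny rating system: each observed game nudges two ratings toward
// the actual margin; predictions are rating differences.
import java.util.HashMap;
import java.util.Map;

public class RatingPredictor {
    private final Map<String, Double> ratings = new HashMap<>();
    private static final double K = 4.0; // damping factor (assumed)

    private double rating(String team) {
        return ratings.getOrDefault(team, 0.0);
    }

    // Record a game result and adjust both teams' ratings.
    public void observe(String home, String away, int homePts, int awayPts) {
        double predicted = predictMargin(home, away);
        double actual = homePts - awayPts;
        double adjust = (actual - predicted) / K;
        ratings.put(home, rating(home) + adjust);
        ratings.put(away, rating(away) - adjust);
    }

    // Predicted home-team margin of victory (negative means away wins).
    public double predictMargin(String home, String away) {
        return rating(home) - rating(away);
    }

    public static void main(String[] args) {
        RatingPredictor p = new RatingPredictor();
        p.observe("Seahawks", "Cardinals", 28, 14);
        System.out.println(p.predictMargin("Seahawks", "Cardinals")); // 7.0
    }
}
```

The same structure carries over to software measurement: replace teams with projects and margins with, say, defect counts, and the updater becomes a comparative estimator.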
Code Coverage Myths and Realities

You've made a commitment to automate unit testing as part of your development process, or you are spending precious resources on automated functional testing at higher levels. You may be asking yourself: How good are those tests anyway? Are many tests checking the same thing while large parts of the code go completely untested? Are your tests triggering the exceptions that normally show up only in production? Are your automated tests adequately covering the code, the requirements, both, or neither? Andrew Glover discusses the truths and untruths about code coverage and looks at the tools available to gather and report coverage metrics in both the open source and commercial worlds. He describes the different types of code coverage, their advantages and disadvantages, and how to interpret the results of coverage reports.

  • The concept of mutation testing and how it fits into a code coverage strategy
Andrew Glover, Vanward Technologies
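The mutation testing bullet above can be shown in miniature: flip one operator in the code under test and see whether the test suite notices. If the mutant passes, the suite is weak even when line coverage is perfect. All names below are illustrative.

```java
// Mutation testing in miniature: a mutant that a weak test suite
// fails to kill, even though the suite fully covers the code.
public class MutationDemo {
    // Original code under test.
    public static int max(int a, int b) {
        return a > b ? a : b;
    }

    // A "mutant": the > operator flipped to <.
    public static int maxMutant(int a, int b) {
        return a < b ? a : b;
    }

    // A weak suite: equal inputs cannot distinguish > from <, so this
    // passes for both versions and the mutant "survives".
    public static boolean weakSuitePasses(java.util.function.IntBinaryOperator f) {
        return f.applyAsInt(5, 5) == 5;
    }

    public static void main(String[] args) {
        System.out.println(weakSuitePasses(MutationDemo::max));       // true
        System.out.println(weakSuitePasses(MutationDemo::maxMutant)); // true: mutant survived
    }
}
```

Adding one assertion with unequal inputs, e.g. `f.applyAsInt(2, 3) == 3`, kills the mutant, which is exactly the feedback mutation tools provide.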


CMCrossroads is a TechWell community.
