Where Does Exploratory Testing Fit?
Exploratory tests, unlike scripted tests, are not defined in advance or carried out precisely according to a plan. So where and how do they fit with the other tasks testers must perform? James Bach, a chief proponent of exploratory testing, provides some insight on how best to exercise exploration in your testing effort.

Targeted Software Fault Insertion
This paper presents data on the effectiveness of software fault insertion, discusses its advantages and risks, offers tips on gaining cultural acceptance for the technique, and suggests high-payback areas for fault insertion that have proven themselves across multiple products. In a typical software development cycle, defect detection starts to trail off once the mainline code stabilizes. With software fault insertion, the defect detection rate does not level off, and the hardest task becomes not finding defects but prioritizing the stream of defects.

What You Don't Know May Help You
Some testers take it upon themselves to learn as much as possible about the inner workings of the system under test. This type of "gray box" testing is valuable, and most testers have the technical wherewithal to grasp much of what's going on behind the scenes. But it's important to recognize that sometimes "ignorance is strength" when it comes to finding problems that users will encounter.

Testing Strategy for a Web Site
The Quality Management System (QMS) aims to automate the quality processes in a software organization. The system is built with the Adaptive Quality System (AQS) as its reference.

White Paper: Test Case Point Analysis
This white paper on Test Case Point (TCP) Analysis deals with estimating the effort needed for testing projects. Its purpose is to provide an introduction to TCP Analysis and its application in non-traditional computing situations. The approach is technology independent and supports the needs of estimation, project management, and quality measurement.
Nirav Patel
May 29, 2001

Managing Concurrent Software Releases in Management and Test
Customers are requiring frequent, feature-rich releases of software products to support Lucent hardware. The fundamental problem is that the time required to develop and test features often exceeds the release interval. One option to meet the needs of our customers is concurrent development and testing; however, concurrent development has several potential pitfalls. The primary problems are: 1) how to isolate long-lead features from features that fit within a development cycle, 2) how to manage the propagation of bug fixes between releases that are in the field and releases that are still being developed, and 3) how to train developers to work in the concurrent paradigm. This paper describes a unique approach, using existing configuration management tools, to managing development load lines in support of concurrent Fixed Interval Feature Delivery (FIFD).
David Shinberg
May 2, 2001

Measurement in the CMM
A recent question in the Quality Engineering-Metrics message board (on StickyMinds.com) asked about measurement at the different levels of the Software Capability Maturity Model. This article begins a series that will highlight the measurement requirements at each of the CMM levels, starting with Level 2. Measurement in the CMM is often misunderstood when people focus only on the "Measurement and Analysis" sections of the model. This article offers an in-depth explanation of measurement in the CMM.

Using Statistics to Evaluate Processes
It is often necessary or advantageous to examine differences between processes (or technologies) for the purpose of making business decisions. Statistical thinking is needed to evaluate the impact of process or other changes on organizational performance. Statistical thinking summarizes and generalizes past experience, allowing us to make predictions and reach conclusions.
Paul Below
April 26, 2001

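One way to compare two processes, in the spirit of the abstract above, is a simple two-sample permutation test. The cycle-time data and the significance threshold here are invented for this sketch; the article itself does not prescribe a specific technique.

```python
# Illustrative two-sample comparison of two processes via a permutation test.
# The data (hypothetical days to fix a defect) is invented for this example.
import random
from statistics import mean

random.seed(0)  # fixed seed so the sketch is reproducible
process_a = [4.1, 3.8, 5.0, 4.4, 4.7, 3.9, 4.3]
process_b = [5.2, 5.8, 4.9, 6.1, 5.5, 5.0, 5.7]

observed = mean(process_b) - mean(process_a)

def permutation_p_value(a, b, observed_diff, trials=10_000):
    """Fraction of random relabelings whose difference reaches observed_diff."""
    pooled = a + b
    hits = 0
    for _ in range(trials):
        random.shuffle(pooled)
        diff = mean(pooled[len(a):]) - mean(pooled[:len(a)])
        if diff >= observed_diff:
            hits += 1
    return hits / trials

p = permutation_p_value(process_a, process_b, observed)
print(f"observed difference {observed:.2f}, p = {p:.4f}")
```

A small p-value suggests the observed difference between the processes is unlikely to be chance, which is the kind of evidence statistical thinking contributes to a business decision.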
A Tester’s Tips for Dealing with Developers
Is the tester doing a good job or a bad job when she proves that the program is full of bugs? It’s a bad job from some developers’ points of view. Ridiculous as it seems, there are project managers blaming testers for the late shipment of a product and developers complaining (often jokingly) that “the testers are too tough on the program.” Obviously, there is more to successful testing than bug counts. Here are some tips about how testers can build successful relationships with developers.

e-Talk Radio: Paulk, Mark, 28 November 2000
Ms. Dekkers and Mr. Paulk discuss the history of standardized, high maturity processes in the field of software development.
Carol Dekkers
March 13, 2001