Test-driven Development

Do Testers Still Own the Test Tools?
Member Submitted
Summary:

Test-driven development (TDD) is a practice associated with a number of Agile flavors. It has become an increasingly popular method for integrating quality into software development at an early stage of the project lifecycle. In this article John Reber asks the question: which members of the project team should own the tools that drive the automated testing process within TDD? Are they now solely developer tools, or should the testers be the owners?

Agile, TDD and the tester
Testers have always advocated early involvement in the lifecycle of a project through the likes of the V model. However, the V model cannot be successfully applied without buy-in from the entire project team - invariably this just isn't the case. Ask the average developer or project manager what the V model recommends and you are likely to be met with a number of quizzical looks.

The Agile approach, on the other hand, can be viewed as an answer to a QA analyst's long-held dream of being involved in the project at an early stage - the methodology practically demands it. Testers can benefit from the fact that Agile is a widely supported method which actively promotes involvement from all project members at most stages of the iterative process, whether that is project planning, estimating, participating in story huddles, or attending retrospectives.

Testers not only attend these sessions but are expected to actively contribute. In an Agile environment the tester is fully expected to apply all the skills they have traditionally brought to testing, plus a whole lot more. Not only does the tester have to rise to the challenge of these additional demands, but the collaborative nature of Agile means that roles on the project can blur.

Nowhere is this truer than in the case of Test Driven Development (TDD), a method of development practiced in a number of Agile flavours such as eXtreme Programming (XP). TDD gives the programming team and the tester assurance that potentially all written code is covered by a test and that many defects are caught early, and by extension a greater level of confidence in the code being delivered.

Increasingly TDD is being extended to the point that not only are programmers writing unit tests prior to the coding effort but functional acceptance tests are also being integrated into the process. This has been termed Acceptance Test Driven Development (ATDD). For some projects this additional effort is crucial to truly satisfying the requirements specified by the customer.

TDD and specifically ATDD raise a number of questions for the tester. At what point should the tester be involved in creating the automated tests for the story in play? Whose responsibility is it to code the automated functional tests? And who owns the test tools that drive the process?

TDD and the test tools
Whilst Agile often maintains there are no specialists, in the traditional waterfall approach the project roles tend to be clearly defined. In the case of test automation, members of the QA team use tools, usually of the record-and-playback variety, to replace copious, time-consuming manual regression test suites. Because of the test effort required to create and maintain these suites, the automated testing process and its respective tools are almost exclusively owned by the test team. Developers generally have minimal input into the process.

On Agile projects the approach to testing and test tools differs. There is a plethora of open source tools that support the TDD process. Programmers have the xUnit family to create unit tests, as well as a multitude of tools to create mocks and stubs for integration tests. Business analysts and even Product Owners may be involved in creating functional tests using tools such as FitNesse. Then there are those tools used by both testers and developers, such as Selenium and WebDriver, which add further sophistication to the creation of automated functional tests.
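
Tools such as WebDriver illustrate how close these functional tests sit to the code. As a rough illustration only, here is a minimal sketch of a WebDriver-driven functional test written in Java with JUnit; the login page, URL and element IDs are invented for the example, and ChromeDriver assumes the matching browser driver is available on the test machine.

```java
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

import static org.junit.jupiter.api.Assertions.assertTrue;

class LoginAcceptanceTest {

    // Requires the chromedriver binary to be available on the test machine.
    private final WebDriver driver = new ChromeDriver();

    @Test
    void validUserCanLogIn() {
        driver.get("http://localhost:8080/login");                // hypothetical URL
        driver.findElement(By.id("username")).sendKeys("alice");  // hypothetical element IDs
        driver.findElement(By.id("password")).sendKeys("secret");
        driver.findElement(By.id("submit")).click();

        // Acceptance criterion: a successful login lands on the dashboard.
        assertTrue(driver.getCurrentUrl().endsWith("/dashboard"));
    }

    @AfterEach
    void tearDown() {
        driver.quit();
    }
}
```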

The increasingly common use of these open source automated testing frameworks has added new complexities for testers. Testers have to consider some of the following questions when utilising these tools:

  • What technical knowledge is required to use these tools?
  • Where should the tools' various property files, logs, user extensions, etc., be located in the project architecture?
  • Who needs to be involved? Developers, system architects, system administrators, build/configuration administrators?
  • How do we integrate the tests into some sort of Continuous Integration process? (A small configuration sketch follows this list.)
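
To give a feel for the second and last of these questions, here is a minimal sketch of how environment-specific values might be externalised so the same functional tests run locally and under a CI server. The file name test.properties, the base.url key and the default URL are assumptions for the example.

```java
import java.io.InputStream;
import java.util.Properties;

// Central access point for test configuration kept within the project structure.
public final class TestConfig {

    private static final Properties PROPS = load();

    private static Properties load() {
        Properties props = new Properties();
        // Looks for src/test/resources/test.properties on the test classpath.
        try (InputStream in = TestConfig.class.getResourceAsStream("/test.properties")) {
            if (in != null) {
                props.load(in);
            }
        } catch (Exception e) {
            throw new IllegalStateException("Could not read test.properties", e);
        }
        return props;
    }

    // A -Dbase.url=... system property set by the CI job overrides the
    // checked-in default, so no test code changes between environments.
    public static String baseUrl() {
        return System.getProperty("base.url",
                PROPS.getProperty("base.url", "http://localhost:8080"));
    }

    private TestConfig() {
    }
}
```

The CI job can then point the same suite at its own deployment simply by supplying a different base.url, which is one way the build/configuration administrators become involved in the testing process.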

With these questions in mind, it's clear that the use of test tools within the Agile practices of TDD and Continuous Integration (CI) is not a solitary activity conducted by the test team. This being the case, let us look at the high-level process of TDD and see where the tester can add value.

TDD: the process
The developer(s) has a story that has been planned in for the iteration/sprint and is ready for play. The functional acceptance criteria have been added by the Business Analyst or Product Owner, and from these the tester has extrapolated the test scenarios, perhaps in conjunction with the Product Owner/BA.

The developers will use a unit testing framework to unit test their code. They add their assertions, verifications, etc. They may also write some integration tests. As an aside, there may be some value in the testers viewing these unit tests prior to code check-in, or at least stepping through the code with the developer. This exercise will often provide a quality audit of the unit tests and allows the coder to explain to the tester what the test coverage is up to this point.
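
To make the red-green rhythm concrete, here is a minimal sketch of such a developer unit test in Java with JUnit; DiscountCalculator and its pricing rule are invented purely for the example.

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountCalculatorTest {

    @Test
    void ordersOverOneHundredGetTenPercentOff() {
        DiscountCalculator calculator = new DiscountCalculator();

        // Written before the production code: the test fails (red) until the
        // simplest implementation below makes it pass (green).
        assertEquals(180.0, calculator.priceAfterDiscount(200.0), 0.001);
    }
}

// The minimal production code written afterwards to make the test pass.
class DiscountCalculator {
    double priceAfterDiscount(double orderTotal) {
        return orderTotal > 100.0 ? orderTotal * 0.9 : orderTotal;
    }
}
```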

Whilst viewing unit tests may not be mandatory for a tester, it is essential that the tester is the guiding force when it comes to functional test scenarios. So let us assume that for this story all our test scenarios are candidates for functional test automation. We want these tests to run as part of CI, giving us a higher degree of quality and confidence prior to any further exploratory or manual testing we might do. But who should script and code these tests?

I would argue that the tester should, if they have the technical know-how. The UI presentation layer may not have been finished, all the values and targets to enter into our script may not yet be known, and the developers probably haven't confirmed most of the implementation yet, but the onus is on the QA team to at least determine the structure of the automated testing. This being the case, if it's not possible to write the complete functional test up front then the tester should write a draft structure and then collaborate with the developer to flesh out the test as the coding of the story progresses.
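
What such a draft structure might look like is sketched below, again only as an illustration: the scenario names, the missing locators and the use of a disabled placeholder are all assumptions, the point being that the intent of each test is captured before the implementation details exist.

```java
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.fail;

class OrderCheckoutAcceptanceTest {

    @Test
    @Disabled("Checkout page not yet implemented - story in play")
    void customerSeesOrderConfirmationAfterPayment() {
        // TODO: navigate to the checkout page once the URL is confirmed
        // TODO: enter payment details (field locators not yet known)
        // TODO: assert that the confirmation number is displayed
        fail("Draft scenario - to be fleshed out with the developer");
    }

    @Test
    @Disabled("Validation rules still to be confirmed with the BA")
    void invalidCardNumberIsRejectedWithAMessage() {
        fail("Draft scenario - to be fleshed out with the developer");
    }
}
```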

To summarise the involvement of QA in TDD:

  1. A story is ready to be played. The developer writes unit/integration tests. The tester codes or drafts the functional automated test.
  2. As the story is coded the developers and tester collaborate to flesh out the functional test.
  3. On completion of the coding effort the developer demos the working tests on their local environment to the tester prior to checking in their code and tests.
  4. All being well, the tests pass the CI build. Testers may now do further manual and/or exploratory testing as they see fit.
  5. The QA team may also take some or all of the functional automated tests and integrate them into a larger regression suite, which can be run on a standalone CI server or perhaps on a separate QA environment (a sketch of such a suite follows this list).
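
One minimal way to assemble such a regression suite, sketched here with the JUnit platform's suite support, is to tag the relevant functional tests and pull them together by package and tag; the package name and tag are assumptions for the example.

```java
import org.junit.platform.suite.api.IncludeTags;
import org.junit.platform.suite.api.SelectPackages;
import org.junit.platform.suite.api.Suite;

// Aggregates the tagged functional tests into one suite that a standalone
// CI server or a separate QA environment can run on its own schedule.
@Suite
@SelectPackages("com.example.acceptance")   // hypothetical package of functional tests
@IncludeTags("regression")                  // only tests explicitly tagged for this run
public class NightlyRegressionSuite {
}
```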

Blockers and impediments to the process
For various reasons the process does not always work this smoothly. Some QA teams may simply not have the manpower to manage all the demands of an Agile project. Or there may be a skill deficit in the QA team, with no one confident enough in using the test tool of choice. If preparing automated test scripts up front is a problem, then perhaps provide the developers with written test scenarios from which they can write their own test scripts. However, it is then even more essential that the tester sits with the developer prior to completion of the story to ensure they are satisfied with the automated implementation of their written test scenario.

Here is a selection of some of the other issues testers may need to consider and some potential solutions:

The CI build may eventually have a large and unwieldy number of functional acceptance tests, which really slows the build down.
The testers should help the team provide a solution. Perhaps the functional tests can be run on a separate CI box. Perhaps some of the lower-priority tests can run less frequently. Some tests may no longer be valid and can be removed from the build.
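
One possible way of splitting the tests, sketched here with JUnit tags (the tag names are assumptions), is to mark the fast, high-value checks for the commit build and push the slower, lower-priority ones onto a separate schedule or CI box; the build can then be configured to filter on the appropriate tag.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class SearchAcceptanceTest {

    @Test
    @Tag("smoke")        // fast, high-value check run on every commit build
    void searchReturnsResultsForAKnownProduct() {
        // ... quick end-to-end check against a known product
    }

    @Test
    @Tag("nightly")      // slow, lower-priority check excluded from the commit build
    void searchPagesThroughAVeryLargeResultSet() {
        // ... long-running scenario, run once a day instead
    }
}
```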

How much technical & coding expertise does today’s agile QA need? What if a decision is made to use a different tool?
One solution is to have a technical tester on each team. The preferred solution would be for all agile testers to get skilled up.

A 'green' build does not mean your test has been run.
Make sure you know that the tests are indeed running and not being ignored. Perhaps some sets of tests have been deleted or disabled during code refactoring or bug fixing. There are many ways for a test to be ignored, set to pending, or pulled out, and it's up to the tester to know when this happens. The test team should have an awareness of the entire test coverage.
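
As a small illustration of how easily this happens (the ticket reference and scenario are invented), a test left disabled after a bug fix is reported as skipped rather than failed, so the build stays green while the behaviour goes unchecked.

```java
import org.junit.jupiter.api.Disabled;
import org.junit.jupiter.api.Test;

class RefundAcceptanceTest {

    @Test
    @Disabled("Temporarily disabled while fixing DEFECT-123")  // left behind after the fix
    void refundIsCreditedToTheOriginalCard() {
        // Reported as "skipped", not "failed": the build is green even though
        // this scenario is no longer being exercised.
    }
}
```

Keeping an eye on the skipped-test count in the CI report, or periodically searching the codebase for disabled and ignored tests, is one simple way for the test team to retain that awareness.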

Cry wolf syndrome.
There is a risk that badly written tests that regularly generate false failures will be ignored, so that when a real failure occurs it may not be detected. Likewise, a high number of passing unit tests may bring a false sense of security and a danger of complacency, resulting in fewer additional QA activities.

'Testers drive the testing process'
Perhaps asking who owns the test tools is misleading. It is clear that the sphere of testing no longer belongs exclusively to the QA team and that, by extension, the test tools now belong to the entire project.

Yet whilst the testing of an Agile project is a team exercise, there are inherent risks around diminishing levels of quality when developers are expected to write their own functional acceptance tests. Testers, acting as pre-sign-off stewards of quality, should be instrumental in writing tests based on the customer requirements, with or without the tools.

Ultimately it pays to have the test experts provide the guardianship of the testing process. This is far more important than disputing the ownership of the test tools themselves.

This article is based on a presentation given by John Reber at the London Tester Gathering in Jan 2010.
