Heresy! Automation Does Not Require Test Cases


A few years ago, when explaining that not all test activities should be automated, I described myself as a “heretic to my discipline”—that discipline being test automation. My use of the word heretic was meant to convey that I was a proponent of ideas that were not in line with those of others who practiced my discipline.

Someone pointed out that an early definition of the word heretic was something akin to “one who chooses.” While not my original intent, I think that definition also fits my statement: I was choosing what to automate and what not to automate, as well as how to automate.

Traditionally, automated scripts are derived from existing test cases. Even in an agile development organization, where most stories have automated scripts created to check adherence to acceptance criteria, those scripts are derived from test cases. This is what is traditionally considered automation.

If we divorce the notion of “automation” from the notions of “test cases” and “test scripts,” we can begin to think of automation as a judicious use of technology to help humans do their jobs; in our case, those humans are testers. This broadens our idea of automation to include non-test-case-based tools that will assist humans in performing their testing. These nontraditional tools can amplify testers’ abilities by allowing them to increase coverage, test faster, and detect trends. As such, I classify them as automation-assist tools, calling attention to the fact that the technology is assisting a tester.

(Though I would like to claim credit for inventing the term, I first heard it from a respected colleague and mentor named Mas Kono, who had coined the term with a team prior to my working for him. I’ve taken the liberty of expanding upon the original definition to include any nontraditional automation.)

Automation-assist tools live compatibly with more traditional test-script-based tooling; these automation approaches are not mutually exclusive because each provides some value that the other does not. Automation-assist tools can extend beyond the notion of test automation into areas that still help us do our jobs.

I’ve used automation-assist tools in several ways throughout my career. One example was roughly inspired by high-volume automation testing, which Cem Kaner says “refers to a family of testing techniques that enable the tester to create, run, and evaluate the results of arbitrarily many tests.”

My team had something we called the Random Link Clicker. This was its basic algorithm:

  1. Randomly select an HTML link (i.e., an <a> tag) on the current page.
  2. Click the selected link.
  3. Observe the page loaded as a result of the click.
  4. If nothing appears to be interesting, go back to Step 1.
  5. If anything appears to be interesting, record a screenshot, the HTML, and the reason the page appeared interesting.
  6. Return to a predefined safe starting page and go back to Step 1.

By “observe,” I mean programmatically check for certain conditions on the resulting page; by “interesting,” I mean conditions such as the ones below (a rough sketch of the whole loop follows this list):

  • The tool landed on our 404 page or our 500 page
  • A server crash occurred
  • The tool landed on a page whose URL is not in a list of expected domains (e.g., clicked link is on a page from a test environment, but the resulting page is in the production environment)
  • The word “error” appears on the page
  • The page didn’t finish loading after a predefined amount of time
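
To make this concrete, here is a rough sketch of how such a clicker might be put together, assuming Python and Selenium WebDriver; the start URL, the expected-domain list, the output file names, and the particular checks are illustrative assumptions rather than the original implementation.

    # A rough sketch of the Random Link Clicker idea, assuming Python and
    # Selenium WebDriver. The start URL, expected domains, output file names,
    # and the specific checks below are illustrative assumptions.
    import random

    from selenium import webdriver
    from selenium.common.exceptions import TimeoutException, WebDriverException
    from selenium.webdriver.common.by import By

    SAFE_START_URL = "https://test.example.com/"   # assumed safe starting page
    EXPECTED_DOMAINS = ("test.example.com",)       # assumed list of expected domains

    def interesting_reasons(driver):
        """Return the reasons the current page appears interesting (may be empty)."""
        reasons = []
        if not any(domain in driver.current_url for domain in EXPECTED_DOMAINS):
            reasons.append("landed outside expected domains: " + driver.current_url)
        if "404" in driver.title or "500" in driver.title:  # assumes error pages set their titles
            reasons.append("landed on an error page: " + driver.title)
        if "error" in driver.page_source.lower():
            reasons.append('the word "error" appears on the page')
        return reasons

    def record(driver, reasons, step):
        """Record a screenshot, the HTML, and why the page appeared interesting."""
        driver.save_screenshot("interesting_%d.png" % step)
        with open("interesting_%d.html" % step, "w", encoding="utf-8") as f:
            f.write(driver.page_source)
        with open("interesting_log.txt", "a", encoding="utf-8") as log:
            log.write("step %d: %s: %s\n" % (step, driver.current_url, "; ".join(reasons)))

    def run(max_clicks=500):
        driver = webdriver.Chrome()
        driver.set_page_load_timeout(30)   # the "page didn't finish loading" condition
        driver.get(SAFE_START_URL)
        for step in range(max_clicks):
            links = driver.find_elements(By.TAG_NAME, "a")   # all <a> tags on the page
            if not links:
                driver.get(SAFE_START_URL)
                continue
            link = random.choice(links)                      # Step 1: random selection
            traversal = "step %d: clicking %r on %s" % (
                step, link.get_attribute("href"), driver.current_url)
            try:
                link.click()                                 # Step 2: click the link
                reasons = interesting_reasons(driver)        # Step 3: observe the result
            except TimeoutException:
                reasons = ["page did not finish loading in time"]
            except WebDriverException as exc:
                reasons = ["WebDriver error (possible crash or unclickable link): %s" % exc]
            with open("traversal_log.txt", "a", encoding="utf-8") as log:
                log.write(traversal + "\n")                  # record traversal info per click
            if reasons:                                      # Steps 5 and 6
                record(driver, reasons, step)
                driver.get(SAFE_START_URL)
        driver.quit()

    if __name__ == "__main__":
        run()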

On each click, valuable traversal information is recorded, including the randomly selected HTML element and the resulting page’s URL. This data helps the tester determine whether each interesting condition is an actual problem.

How did the program decide whether an instance of an interesting condition was a cause for concern? Easy: It didn’t. The program only recorded the fact that something interesting occurred. The list of interesting conditions was provided to the tester at the end of a run. The tester then evaluated the interesting conditions, looking for those that were actually a cause for concern. There were no pass/fail criteria for execution. The program either found interesting conditions that a tester needed to evaluate, or it didn’t. It was up to the tester to decide whether the condition warranted some sort of report. We did not automate any testing; we assisted the testers in doing their jobs.

Why did we take this approach? Couldn’t an existing site-crawling tool have done the job? Perhaps, but we noticed that some pages and some features behaved differently (or misbehaved!) depending on the path taken to the page. On a sufficiently large site, as most of today’s sites are, the number of path permutations becomes too large to complete in a tolerable duration, even for distributed automation executions.
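
To give a sense of the scale (the numbers here are purely hypothetical), even a modest site quickly produces more multi-click paths than any enumerated suite could cover:

    # Purely hypothetical numbers to illustrate the combinatorial growth of click paths.
    links_per_page = 30                    # assumed average number of links per page
    path_length = 5                        # clicks per path
    print(links_per_page ** path_length)   # 24,300,000 distinct five-click paths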

By creating a tool based on randomness, we traversed a subset of paths through the system that included unintuitive but valid paths. In my experience, it’s these paths that cause problems in the system because they are just that—unintuitive—and are therefore less likely to be tested. It should be noted that the Random Link Clicker was cheap to build. The initial release took about four hours of effort and found a significant issue during its first week of execution. Had it not been cheap to build, we might have considered another approach with a better value proposition.

Automation-assist tools are also useful when we think about disposable automation, i.e., automation that will be used for only a short time. Why should the only assistance that automation provides to testers be large, test-case-based regression testing suites? Sometimes a one-time activity is worth automating, provided the automation delivers sufficient value.

I worked on a project where one of the testing tools had an oracle that was composed of thousands of “golden files.” Being the oracle, these files were used to determine whether each test scenario had passed or failed. At one point, a change was made in the product that affected the way some results were returned: In some cases, a NULL was now returned instead of a numeric value. Due to the number of files and the conditions under which a NULL was returned, the test architect estimated that the effort required to change the golden files was four to six weeks! Clearly, this had the potential to impact product delivery.

In order to reduce the risk to product delivery, we investigated the possibility of automating this activity. Fortunately, the algorithm for determining which values should now be NULL was straightforward to implement and the format of the golden files was not complicated to parse. In less than one day of effort, we developed a program that would perform the NULL substitution, thereby avoiding the four to six weeks of effort and associated opportunity costs. What made this tool disposable was the fact that, outside of the testing of the tool, it would be executed exactly once. Knowing this ahead of time allowed us to take some coding shortcuts that we would not have taken had this tool needed to be supported for the long term. This was not automation based on test cases, but it was a program that provided assistance to the testers by avoiding a large opportunity cost.
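
As a rough illustration of how small such a disposable tool can be, here is a sketch in Python; the comma-separated file layout, the column names, and the should_be_null rule are assumptions for illustration, since the real rule came from the product change and the real files had their own format.

    # A disposable, run-once sketch of the golden-file rewrite, assuming the files
    # are CSV and that a simple rule decides which values should now be NULL.
    # The directory, column names, and rule below are illustrative assumptions.
    import csv
    from pathlib import Path

    GOLDEN_DIR = Path("golden_files")   # assumed location of the golden files

    def should_be_null(row):
        """Placeholder for the product's rule for which results now return NULL."""
        return row.get("source") == "external"   # illustrative condition only

    def rewrite_file(path):
        with path.open(newline="", encoding="utf-8") as f:
            reader = csv.DictReader(f)
            fieldnames = reader.fieldnames
            rows = list(reader)
        for row in rows:
            if should_be_null(row):
                row["value"] = "NULL"            # assumed name of the affected column
        with path.open("w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)

    if __name__ == "__main__":
        for golden in GOLDEN_DIR.glob("*.csv"):
            rewrite_file(golden)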

Though nontraditional approaches such as automation-assist tools are not yet widespread in most testing organizations, they are no longer as heretical as they once were.

I’m not suggesting that we demonize test-case-based automation approaches. We should consider scripting some functional checking, provided that checking is what is actually needed and that the scripts provide more value than they cost. And there certainly can be value in automating a small smoke suite that checks basic product features on each continuous integration build or deploy; these scripts can alert us to fundamental problems in our latest build, indicating “This build is probably so broken that humans should not waste their time testing it further.” There can also be value in automated checking that looks for regressions in order to help testers with regression testing.

These kinds of automation can save us time and reduce opportunity costs, assuming, again, that they provide more value than they cost. I’m only suggesting that these are specific implementations of automation; frequently, there are other opportunities to apply technology that may have more value than traditional approaches.

User Comments

Kathy Duberke

I am a QA Manager and I never tout the "automate every test case" approach! I am always asking my team to look for automation opportunities that provide value outside of just automating test cases.

October 10, 2016 - 2:10pm
Paul Grizzaffi

That's great! It's always nice to hear from reasonable leaders.

October 10, 2016 - 2:21pm
Tim Thompson

That is a good approach. I found that testing plain, simple reports is easily done in a two-step manual process. First, compare the newly generated output with a known good output by looking at both side by side; it is easy to spot any layout differences or missing entries. The second step is to craft a text-based output and use WinMerge or any other file comparison tool to compare the file content. When using the same data, only the creation date and time are allowed to be different. Sure, file compare can be automated, but spotting changes in layout would be incredibly tedious to automate, and maintaining those tests will take more time and effort than doing it by hand.

October 12, 2016 - 6:30am
Matt Griscom

Nice article, Paul!

I like the thesis generally, but disagree with

"I’m not suggesting that we demonize test-case-based automation approaches. "

Consider manual test cases, that is, test cases that were developed from a manual perspective, where it's assumed that smart, observant testers are executing them, or test cases that are run manually at some point.

"Automating" these always creates business risk, because the test oracle or explicit verifications will never have the breadth and intelligence of observation that even a decent human tester would. The risk comes from the common assumption that running the test case *with automation* brings the same value as a person running it, except faster, at all hours etc. The risk comes from failure to recognize that, although automation is very powerful at some things, people are still needed to observe the many things that are not verified by the automation.

My approach would be to not automate test cases at all; instead, do automated verifications based on your functional requirements (as linked to your business requirements). Checks need to be simple anyway, much simpler than many manual tests, so they don't hide quality information or become too brittle. With self-documenting hierarchical steps (see the Hierarchical Steps pattern of MetaAutomation), the automation documents itself at runtime.

If the automation story is done well enough, and soon enough in the SDLC or feature development, there is no need for test cases at all. Manual testers know exactly what is and is not verified with automation, because it's all self-documented in a way that does not lose information, and they can simply look at a presentation of the result of a check run and see what was not verified with automation.

Also, there's layout, localization, etc. that aren't reasonably automatable, although tools can certainly help the manual testers be more productive in those domains.

Automating manual test cases will significantly limit the business value of the automation as well, because those checks would tend to be complex and brittle and require a lot of expensive maintenance work, and the business value of the data generated would be limited by excessive complexity.

So, while I wouldn't "demonize test-case-based automation approaches" either, to move forward with quality automation requires a smarter, more value-oriented approach to automated verifications. Test-case-based automation is not a very effective approach to automated verifications, and will limit the larger value of quality automation for the team.


December 21, 2016 - 1:04pm
