Heresy! Automation Does Not Require Test Cases

Summary:
Traditionally, automated scripts are derived from existing test cases. But if we divorce the notion of “automation” from the notions of “test cases” and “test scripts,” we can think of automation as a judicious use of technology to help humans do their jobs. This broadens our world to include different tools that can help testers increase coverage, test faster, and detect trends.

A few years ago, when explaining that not all test activities should be automated, I described myself as a “heretic to my discipline”—that discipline being test automation. My use of the word heretic was meant to convey that I was a proponent of ideas that were not in line with others who practiced my discipline.

Someone pointed out that an early definition of the word heretic was something akin to “one who chooses.” While not my original intent, I think that definition also fits my statement: I was choosing what to automate and what not to automate, as well as how to automate.

Traditionally, automated scripts are derived from existing test cases. Even in an agile-based development organization, where most stories have automated scripts created to check for adherence to acceptance criteria, the scripts are derived from test cases. This is what is traditionally considered automation.

If we divorce the notion of “automation” from the notions of “test cases” and “test scripts,” we can begin to think of automation as a judicious use of technology to help humans do their jobs; in our case, those humans are testers. This broadens our idea of automation to include non-test-case-based tools that will assist humans in performing their testing. These nontraditional tools can amplify testers’ abilities by allowing them to increase coverage, test faster, and detect trends. As such, I classify them as automation-assist tools, calling attention to the fact that the technology is assisting a tester.

(Though I would like to claim credit for inventing the term, I first heard it from a respected colleague and mentor named Mas Kono, who had coined the term with a team prior to my working for him. I’ve taken the liberty of expanding upon the original definition to include any nontraditional automation.)

Automation-assist tools live compatibly with more traditional test-script-based tooling; these automation approaches are not mutually exclusive because each provides some value that the other does not. Automation-assist tools can extend beyond the notion of test automation into areas that still help us do our jobs.

I’ve used automation-assist tools in several ways throughout my career. One example was roughly inspired by high-volume automation testing, which Cem Kaner says “refers to a family of testing techniques that enable the tester to create, run, and evaluate the results of arbitrarily many tests.”

My team had something we called the Random Link Clicker. This was its basic algorithm:

  1. Randomly select an HTML link (i.e., an <a> tag) on the current page.
  2. Click the selected link.
  3. Observe the page loaded as a result of the click.
  4. If nothing appears to be interesting, go back to Step 1.
  5. If anything appears to be interesting, record a screenshot, the HTML, and the reason the page appeared interesting.
  6. Return to a predefined safe starting page and go back to Step 1.

By “observe,” I mean programmatically check for certain conditions on the resulting page; by “interesting,” I mean conditions such as:

  • The tool landed on our 404 page or our 500 page
  • A server crash occurred
  • The tool landed on a page whose URL is not in a list of expected domains (e.g., clicked link is on a page from a test environment, but the resulting page is in the production environment)
  • The word “error” appears on the page
  • The page didn’t finish loading after a predefined amount of time

On each click, valuable traversal information is recorded, including the randomly selected HTML element and the resulting page’s URL. This data aids in determining whether any of the interesting conditions indicates an actual problem.
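
To make the approach concrete, here is a minimal sketch of what such a clicker might look like. This is not my team’s actual implementation; it assumes Selenium WebDriver in Python, and the starting URL, expected domains, and “interesting” checks are simplified placeholders.

# Hypothetical sketch only; the URLs, domain list, and checks are assumptions,
# not the team's actual tool. Assumes Selenium WebDriver for Python.
import random

from selenium import webdriver
from selenium.webdriver.common.by import By

SAFE_START_URL = "https://test.example.com/"      # assumed safe starting page
EXPECTED_DOMAINS = ("test.example.com",)          # assumed list of expected domains


def looks_interesting(driver):
    """Return a reason string if the current page looks interesting, else None."""
    url = driver.current_url
    if "404" in driver.title or "500" in driver.title:
        return "landed on an error page"
    if not any(domain in url for domain in EXPECTED_DOMAINS):
        return "resulting page is outside the expected domains: " + url
    if "error" in driver.page_source.lower():
        return "the word 'error' appears on the page"
    return None


def random_click_run(max_clicks=500):
    driver = webdriver.Chrome()
    driver.set_page_load_timeout(30)              # flags pages that never finish loading
    driver.get(SAFE_START_URL)
    traversal, findings = [], []
    for _ in range(max_clicks):
        links = driver.find_elements(By.TAG_NAME, "a")   # Step 1: all <a> tags
        if not links:
            driver.get(SAFE_START_URL)
            continue
        link = random.choice(links)
        href = link.get_attribute("href")
        try:
            link.click()                          # Step 2: click the selected link
        except Exception as exc:                  # stale or unclickable links happen
            findings.append(("click failed: " + str(exc), href, None))
            driver.get(SAFE_START_URL)
            continue
        traversal.append((href, driver.current_url))     # record every traversal
        reason = looks_interesting(driver)        # Step 3: observe the new page
        if reason:                                # Step 5: record the evidence
            findings.append((reason, href, driver.current_url))
            driver.save_screenshot("interesting_%d.png" % len(findings))
            with open("interesting_%d.html" % len(findings), "w") as html_file:
                html_file.write(driver.page_source)
            driver.get(SAFE_START_URL)            # Step 6: back to the safe page
        # Step 4: nothing interesting, so keep clicking from the current page
    driver.quit()
    return traversal, findings                    # handed to the tester, not pass/fail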

How did the program decide whether an instance of an interesting condition was a cause for concern? Easy: It didn’t. The program only recorded the fact that something interesting occurred. The list of interesting conditions was provided to the tester at the end of a run. The tester then evaluated the interesting conditions, looking for those that were actually a cause for concern. There were no pass/fail criteria for execution. The program either found interesting conditions that a tester needed to evaluate, or it didn’t. It was up to the tester to decide if the condition warranted some sort of report. We did not automate any testing; we assisted the testers in doing their jobs.

Why did we take this approach? Couldn’t an existing site-crawling tool have done the job? Perhaps, but we noticed that some pages and some features behaved differently (or misbehaved!) depending on the path taken to the page. On a sufficiently large site, as most of today’s sites are, the number of path permutations becomes too large to complete in a tolerable duration, even for distributed automation executions.

By creating a tool based on randomness, we traverse a subset of paths through the system that includes unintuitive but valid paths. In my experience, it’s these paths that cause problems in the system because they are just that—unintuitive—and are therefore less likely to be tested. It should be noted that the Random Link Clicker was cheap to build. The initial release took about four hours of effort and found a significant issue during its first week of execution. Had it not been cheap to build, we might have considered another approach with a better value proposition.

Automation-assist tools are also useful when we think about disposable automation, i.e., automation that will be used for only a short time. Why should the only assistance that automation provides to testers be large, test-case-based regression testing suites? Sometimes a one-time activity is worth automating, as long as the automation provides sufficient value.

I worked on a project where one of the testing tools had an oracle that was composed of thousands of “golden files.” Being the oracle, these files were used to determine whether each test scenario had passed or failed. At one point, a change was made in the product that affected the way some results were returned: In some cases a NULL was now returned instead of a numeric value. Due to the number of files and the conditions under which a NULL was returned, the test architect estimated the effort required to change the golden files was four to six weeks! Clearly, this had the potential to impact product delivery.

In order to reduce the risk to product delivery, we investigated the possibility of automating this activity. Fortunately, the algorithm for determining which values should now be NULL was straightforward to implement and the format of the golden files was not complicated to parse. In less than one day of effort, we developed a program that would perform the NULL substitution, thereby avoiding the four to six weeks of effort and associated opportunity costs. What made this tool disposable was the fact that, outside of the testing of the tool, it would be executed exactly once. Knowing this ahead of time allowed us to take some coding shortcuts that we would not have taken had this tool needed to be supported for the long term. This was not automation based on test cases, but it was a program that provided assistance to the testers by avoiding a large opportunity cost.
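
For a sense of how small such a disposable tool can be, here is a sketch of a one-shot golden-file rewriter in Python. The CSV format and the should_be_null rule below are made-up stand-ins; they are not the project’s actual golden-file format or the real NULL-determination algorithm.

# Illustrative only: the real golden-file format and NULL rule are not described
# here, so both are assumed (CSV files, plus a hypothetical rule that the
# "amount" field becomes NULL whenever the "count" field is zero).
import csv
import glob


def should_be_null(row):
    # Hypothetical stand-in for the real "which values become NULL" algorithm.
    return row["count"] == "0"


def rewrite_golden_file(path):
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        rows = list(reader)
        fieldnames = reader.fieldnames
    for row in rows:
        if should_be_null(row):
            row["amount"] = "NULL"
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(rows)


if __name__ == "__main__":
    # One-shot, disposable: run once over every golden file, then discard the tool.
    for golden_file in glob.glob("golden_files/*.csv"):
        rewrite_golden_file(golden_file)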

Though nontraditional approaches such as automation-assist tools are not yet widespread in most testing organizations, they are no longer as heretical as they once were.

I’m not suggesting that we demonize test-case-based automation approaches. We should consider scripting some functional checking, provided that checking is what is actually needed and that the scripts provide more value than they cost. And there certainly can be value in automating a small smoke suite that checks basic product features on each continuous integration build or deploy; these scripts can alert us to fundamental problems in our latest build, indicating “This build is probably so broken that humans should not waste their time testing it further.” There can also be value in automated checking that looks for regressions in order to help testers with regression testing.

These kinds of automation can save us time and reduce opportunity costs, assuming, again, that they provide more value than they cost. I’m only suggesting that these are specific implementations of automation; frequently, there are other opportunities to apply technology that may have more value than traditional approaches.

User Comments

Jena Buttimer

If only I could encounter a QA manager who is open to these ideas. Unfortunately, most of the ones I've dealt with live by the "automate every test case" rule. I've been doing test automation for nearly 20 years, and sadly not much has changed.

Great article, Paul. Gives me hope! Keep preaching. 

October 4, 2016 - 2:48pm
Paul Grizzaffi

Thank you so much for the kind words, Jena. Stick with it; I believe these kinds of value-based approaches will make inroads over time.

October 4, 2016 - 10:44pm
john paxton

I'm wondering if that's related to selection, i.e., how QA managers are hired, and if that leads to the stagnation you've noted?

Most roles I applied to stressed total automation = total quality, ha! That's how the business was sold automation, and that's what they want to hear from anyone taking the role over. Once they hear that I don't believe, in most cases, that going full automation is the answer, interviews become quite interesting.

With regard to the article, nearly every major time-saving piece of automation I've created was disposable, other than build smoke tests; saving 20 minutes a run, two to three times a day, made those worthwhile, but when the time comes to expand them into more detailed tests they really don't catch much. Automating usually leads me to a better understanding of the app, but the value is in thinking my way through it rather than in the results from it.

October 5, 2016 - 1:51pm
Craig Woolgar

This article is very much where we are at the moment.

We have an existing internal system with no automation that is being superseded by a modern system with automated testing (about a four-year project). In order to ease the pressure on the testers, we are writing automation-assist tools that take on some of the more repetitive tasks. They don't do any checking, but they create contacts, staff, etc. for the testers to test with.

October 4, 2016 - 10:23pm
Paul Grizzaffi

Great, Craig! I always like to hear other people's success stories with nontraditional automation.

October 4, 2016 - 10:45pm
Tim Thompson

How is this different from using test data management? We have backups of various states of the application database that we can drop in and start testing without doing much or any data entry. That works very well for us except for date-driven features, but there we take an extra step: we craft queries that update applicable records to contain dates relative to today.

October 12, 2016 - 6:25am
Paul Grizzaffi

That's pretty cool stuff, especially that date thing. Great idea!

How is this different from test data management? In some respects, I think automated test data management/creation/modification is a subset of how I define automation assist: the automated test data management does some work on behalf of a tester so the tester can spend time doing other things. I say subset because automation assist also covers things like the Random Link Clicker I describe in the article. That tool has nothing to do with test data management; its job is to exercise the product on behalf of a tester and "make notes" of things that may be of interest to the tester. The tester then examines the "notes" to see if there is anything there to cause concern.

October 12, 2016 - 9:53am
Gregory Annen

Both methods can be employed to increase test coverage: a tool/framework that automates every test case (often demanded by the client for traceability), and a tool based more on randomness.

October 5, 2016 - 4:30pm
Greg Paskal

Paul, what an exceptional article! You do a great job of helping us innovate in the world of automation and consider different and unique ways to get more out of the talents we have as Automation Engineers. Keep up the great work!

October 6, 2016 - 8:55am
