Wreaking Havoc With Simple Tests

Summary:
Do the simplest thing that could possibly wreak havoc-that's how Danny Faught approaches testing. He finds this simple idea especially helpful when doing reliability and robustness testing. And as he'll tell you, Danny's one for making big plans for all the evil things he can do to an application, but he's most efficient when he starts out small and incrementally builds his own tools. In this week's column, he gives several examples of how he's used small programs to cause big problems, and then gives tips on how you can use this approach.

According to Martin Fowler, one of the mantras of the agile world is "Do the simplest thing that could possibly work." For my testing, I like to do the simplest thing that could possibly wreak havoc. Here are some stories that illustrate how I've applied this idea.

Flip a Bit
I was testing an embedded operating system on a handheld device. I surmised that the flash memory on the device could fail intermittently and cause file corruption. So I decided to simulate corruption in an executable file in order to test the system's robustness. I wrote a simple Perl script named "bitflip" that picks a single random bit in any file and flips it to the opposite value. The operating system I was testing couldn't run Perl, but it did support networking. So I ran the script on another machine to corrupt a copy of one of the standard utilities that comes with the embedded operating system. I made the corrupted file accessible over the network and ran it on the device under test. Usually the corrupted program would either crash, as we would expect it to, or run as if nothing had changed. But about 10 percent of the time, the entire operating system would crash.
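The article doesn't include the script itself, but the whole idea fits in a few lines of Perl. Here's a minimal sketch of a bitflip-style utility, assuming the target file fits comfortably in memory:

  #!/usr/bin/perl
  # bitflip sketch: flip one randomly chosen bit in a file.
  # A reconstruction for illustration, not the original script.
  use strict;
  use warnings;

  my $file = shift or die "usage: $0 file\n";

  # Slurp the whole file as raw bytes (assumes it fits in memory).
  open my $in, '<:raw', $file or die "can't read $file: $!\n";
  my $data = do { local $/; <$in> };
  close $in;

  # Pick a random byte and a random bit within it, and flip that bit.
  my $byte = int rand length $data;
  my $bit  = int rand 8;
  vec($data, $byte, 8) ^= (1 << $bit);

  # Write the corrupted contents back out.
  open my $out, '>:raw', $file or die "can't write $file: $!\n";
  print {$out} $data;
  close $out;

  print "flipped bit $bit of byte $byte in $file\n";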

The operating system vendor couldn't fix the crashes, and eventually the project chose a different operating system for the device. So the simple bitflip program, along with several other factors, managed to wreak a lot of havoc with this system. I could have spent a lot of time designing a more systematic test, but in this case the randomized approach was sufficient to prove the point.

Monkey Around
On another project, I was testing an early alpha version of a Windows application. It would occasionally crash in many different ways that weren't easily reproducible. I wanted to automate a test that could reproduce at least a few of the crashes. The simplest thing to do in this case was to write a "monkey test" that would randomly click on the main application window to test the application's reliability.

Again I turned to Perl, using the Win32::GuiTest module. I learned how to locate the application on the screen and click a random point in the window. After running the script for a few minutes, I found a handful of issues in the test that I would have to deal with. Because I allowed the monkey script to click and drag the application's border, it dragged the window under the task bar, and later clicks would activate items on the task bar, which was outside the scope of what I wanted the test to do. So I changed my geometry calculations to keep the clicks off the application's border, the title bar, and the resize widget in the lower right corner. I could test the effects of moving and resizing the window later.
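The 355-line script isn't reproduced here, but the heart of the random-click loop looks roughly like the sketch below. The window title "My App", the margin sizes, and the click count are placeholder assumptions, not values from the real test:

  #!/usr/bin/perl
  # Monkey-test sketch with Win32::GuiTest: click random points inside
  # the main window, staying off the border, title bar, and resize widget.
  use strict;
  use warnings;
  use Win32::GuiTest qw(FindWindowLike GetWindowRect SetForegroundWindow
                        MouseMoveAbsPix SendMouse);

  my ($hwnd) = FindWindowLike(0, "My App")
      or die "application window not found\n";

  for (1 .. 1000) {
      SetForegroundWindow($hwnd);
      my ($left, $top, $right, $bottom) = GetWindowRect($hwnd);

      # Keep the clicks away from the edges so the monkey can't drag the
      # border or grab the title bar; the margins here are rough guesses.
      my $x = $left + 10 + int rand($right  - $left - 40);
      my $y = $top  + 40 + int rand($bottom - $top  - 60);

      MouseMoveAbsPix($x, $y);
      SendMouse("{LEFTCLICK}");
      sleep 1;    # pace the clicks so a human can watch what happens
  }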

I also observed that the monkey tended to open modal dialogs in the application that had to be dealt with before the main window would accept input again. The monkey would click futilely in the main window for a long time, only dismissing the dialog by accident if the dialog happened to overlap with the main window on the screen. I changed the script to detect these dialogs and dismiss them as soon as they appeared. A future enhancement could do something more interesting with the dialogs.
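In Win32::GuiTest terms, that dialog handling can be sketched as a check inside the same loop; "#32770" is the window class Windows uses for standard dialogs, and the Escape key dismisses most of them. The real script presumably did something smarter, but the idea is this:

  use Win32::GuiTest qw(FindWindowLike SetForegroundWindow SendKeys);

  # Look for any standard dialog windows; a tighter check would restrict
  # this to dialogs owned by the application under test.
  for my $dlg (FindWindowLike(0, "", "#32770")) {
      SetForegroundWindow($dlg);
      SendKeys("{ESC}");    # dismiss the dialog so the main window gets focus back
  }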

As I struggled with learning how to use the Win32::GuiTest module and with making my test script robust, I realized that this approach wasn't exactly simple; the script now runs to 355 lines of code. However, it was still much simpler than automating a complete suite of GUI tests. The monkey test caused several crashes in the application, some of which I had never seen before. As a bonus, when I watched the monkey at work, I discovered a few obscure corners of the GUI that I didn't even know were there.

Thrash It With AppleScript
The application that endured the torture of my monkey script also had a Mac OS X version, and I wanted to develop a monkey test for that version too. I researched my options and found that AppleScript would do GUI automation, and that I could do similar things in Perl using the Mac's Open Scripting Architecture. However, I had to install a large number of additional modules to do this in Perl, which was compounded by a configuration problem with Perl's mechanism for automatically installing modules on Mac OS. I fixed the configuration problem, but I knew that other people trying to run my Perl code would probably have to apply the same fix.

I had to choose between sticking with my favorite language and learning to do AppleScript-like things without knowing AppleScript, or biting the bullet and learning enough of the language to write the test directly in AppleScript. Past experience told me I would need to learn the native language first before I could be productive either way, so I started learning the basics of AppleScript.

Within a few days I had a simple script. The application had a tendency to crash when I switched it from one username to another, so I decided to write a reliability test that would simply switch back and forth between two usernames, over and over. The test would cause a crash within a few minutes every time I ran it, though I had not yet figured out how to synchronize properly with the application to know when it was ready to receive input. I just used fixed timing delays, which made the script slower and not guaranteed to work every time. But I deemed it good enough and sent it to the developer.

It turned out that our Mac developer was already familiar with AppleScript, so he was happy with the route I had chosen. I haven't yet developed a complete monkey test on the Mac, because my simple script is keeping the developer busy with bug fixing for now.

Adjust When You Oversimplify
One of my earlier attempts at writing a simple robustness test turned out to be a bit too simple. I was testing a Unix-variant operating system with a C program that called randomly chosen system calls with random parameters. Shortly after I started the test for the first time, the system rebooted. "Yes!" I shouted, thinking I'd found a serious bug. But when I examined the test log, I found the test had called the "reboot" system call and was running as an administrative user that had permission to reboot. So the system did exactly what it was supposed to do.

I added a bit more infrastructure to the test to ensure that it didn't run as an administrative user. Even though this limited the test coverage, it was the simplest way to get my test running again. Later, my tests found problems in the operating system.
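Neither the original C program nor the added guard appears in the article; the sketch below illustrates both ideas in Perl rather than C, using Perl's built-in syscall function and the effective user ID in $>. The syscall number range, argument count, and iteration count are arbitrary placeholders:

  #!/usr/bin/perl
  # Sketch (in Perl, standing in for the original C program): call random
  # system calls with random arguments, but refuse to run with
  # administrative privileges so calls like reboot can't actually succeed.
  use strict;
  use warnings;

  # $> holds the effective user ID; root is UID 0 on Unix-like systems.
  die "refusing to fuzz system calls as root\n" if $> == 0;

  for (1 .. 100) {
      my $number = int rand 400;                        # arbitrary syscall number range
      my @args   = map { int rand 0xFFFF_FFFF } 1 .. 3; # random arguments
      print "calling syscall $number(@args)\n";
      eval { syscall($number, @args) };                 # most calls will simply fail
  }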

Keep It Simple
I like to follow this formula when I am testing reliability and robustness in a software product:

  1. Use the simplest approach that's likely to find the kind of bug you're looking for.
  2. Learn just enough about the technology involved to code the parts that need to be automated.
  3. Make the automation robust enough so that it will find bugs.
  4. Get feedback from the people who need to reproduce the bugs to plan further enhancements for the test.

If you keep these steps in mind, you can find big bugs with minimal effort.
