Accountability in Testing Embedded and IoT Software Systems

[article]
Summary:

Take a look at the critical systems in the world today and you’ll find software. From water, power, and utilities to nuclear plants, factories, and cars, pretty much everything has become integrated with digital devices and the internet. We need to do testing from a risk-based perspective and be accountable to the public by acknowledging what is tested and what is not.

I usually write about testing, particularly in the internet of things and embedded environments. However, recent terror attacks and cyber crimes have made me stop and think about what we in the software testing world do—or maybe should do—to protect our systems. I hope this piece makes you stop and think also.

Years ago I did some consulting for the US government’s Nuclear Regulatory Commission (NRC). I was the junior member of a group of very senior researchers and practitioners who considered what the NRC should require when companies place digital devices into nuclear systems.

Historically, nuclear systems were mainly analog electrical and mechanical, but the NRC knew this was changing with the introduction of digital devices running embedded software. The experts were concerned that inserting software into nuclear systems would cause problems when bugs surfaced. We made recommendations, and embedded software devices came into successful use in nuclear plants. At that point we were not worried about connecting devices to the internet, which barely existed.

Then, Code Took Over the World

Take a look at the critical systems in the world today and you’ll find software. From water, power, and utilities to nuclear plants, factories, and cars, pretty much everything has become integrated with digital devices and the internet. For a long time, embedded software developers didn’t really have to worry about connections to the outside world; those connections were nonexistent or minimal, often made through a secondary system. Then the internet of things, or IoT, changed all that.

For the last few years, much of the professional software engineering and testing world has become well aware of the security threats to embedded and IoT devices. By this point, the public has also heard of cars getting hacked, threats to the infrastructure, and digital weapons such as Stuxnet. Again, the IoT is changing our world.

Testing Needs to Grow Up

While I sense awareness and hear people saying, “Yes, yes, testing and quality of the IoT is important,” I do not see the real action and accountability that I feel is needed. The public and even the general IoT industry seem willing to wait until something really bad happens, like cyberterrorism targeting a nuclear plant or major infrastructure. If this does happen (maybe I should say when this happens), there will be a great cry for better quality, engineering, and testing, concepts that have been preached all too often throughout the evolution of software.

Until then, testing is, as James Bach once said, a bit of a Peter Pan. We do a lot of things that might or might not add value. If there is a failure, we can say, oops, the programmer messed up too, and the requirements are unclear, and the combinations are impossible to fully test. Because of that, testing never really needs to “grow up.”

It has been thirty-five years since I started working at a major defense contractor. My role in the community is now such that I get to do what I think needs to be done rather than chase a paycheck. I am coming to believe my job is to be a bit of a bellwether, raising the warning before that really bad thing happens: pointing out that car control systems can be hacked while the car is being driven, and extending that line of thought to warn about the potential for major power outages or nuclear meltdowns.

We Must Do Better

Testers, engineers, and companies must do a better job now and be accountable to our users and the general public, which includes our families. Accountability in the software devices we produce can’t be addressed solely by standards or through our legal systems. Most of those systems seem to protect the producers of software, not the consumers. When we create an embedded or IoT device, particularly for industrial use, we need to test from a risk-based perspective and acknowledge what is tested, what is not, what qualities the device has, and the implications of not testing particular parts or the whole. That might be happening already, but the customers don’t know; they are simply trusting our judgment.

Imagine a world where the test strategy is documented and clearly communicated to the customer and stakeholders in a way that lets them understand the tradeoffs among testing, cost, and time to market. Instead of choosing which product to buy and use based only on brand or price, they would gain an added dimension: the test strategy.
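To make that concrete, here is a minimal sketch, in Python, of what such a disclosure might look like if it were captured as data rather than buried in a test plan. Every device name, field, and risk area below is hypothetical; the point is simply that “what we tested, what we didn’t, and why” can be stated explicitly enough for a customer to read.

    # A minimal, hypothetical sketch of a machine-readable test-strategy
    # disclosure for an IoT device. Names and fields are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class RiskArea:
        name: str    # e.g., "remote firmware update"
        risk: str    # "high", "medium", or "low"
        tested: bool # was this area exercised at all?
        notes: str   # what was covered and what was deliberately left out

    @dataclass
    class TestDisclosure:
        device: str
        firmware_version: str
        areas: list = field(default_factory=list)

        def untested_high_risk(self):
            """Areas the customer is being asked to accept on trust."""
            return [a for a in self.areas if a.risk == "high" and not a.tested]

    disclosure = TestDisclosure(
        device="ExampleCo smart valve controller",  # hypothetical product
        firmware_version="2.4.1",
        areas=[
            RiskArea("remote firmware update", "high", True,
                     "Signed-image rejection tested; rollback path not tested."),
            RiskArea("local web interface authentication", "high", False,
                     "Deferred to the next release because of schedule."),
        ],
    )

    for area in disclosure.untested_high_risk():
        print(f"Untested high-risk area: {area.name} ({area.notes})")

Even a simple list like this, published alongside the product, would let a buyer weigh an untested high-risk area against brand and price.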

I define tester accountability as the obligation of an individual or organization to account for its activities, accept responsibility for them, and disclose the results in a transparent manner. There are no standards telling us to be accountable for our software testing. Testers who have been involved in legal actions over software failures learn that software producers can be held liable and answerable to the courts. Why are we waiting until accountability is mandated?

We need to be clear about the qualities of embedded and IoT systems, and software in general. What we engineer affects others, so accountability is essential. The hackers and other bad guys out there will eventually take advantage of our lack of accountability if we don’t do our jobs. It’s time for testers—and testing itself—to grow up.
