Expect the Unexpected

By Jeff Gainer

(Author's note: This essay originally appeared in the 21 February 2001 issue of the Cutter IT Journal E-Mail Advisor.)

"It only happens with pistachio ice cream," the exasperated man explained. "The car starts fine when I buy vanilla or chocolate ice cream, but it never starts when I buy pistachio."

The mechanics at the Houston car dealership couldn't convince the customer that his new car didn't somehow know what flavor of ice cream he bought. But neither could they explain why the problem tracked the flavor so reliably. Surely pistachio ice cream couldn't account for the customer's new-car problem.

A bright young cashier solved the problem. "It's simple," she explained. "When he buys chocolate or vanilla, the ice cream is already in the freezer, ready to go. But when he buys pistachio, it has to be packed by hand, and that takes a little time. Meanwhile, the car is sitting out there in the Texas sun, heating up until the fuel system develops a vapor lock. Then the car won't start."

I told this apocryphal anecdote to one of my software testing classes recently and received blank looks. After a few seconds of mystified silence, I voiced the question each of them doubtless had: "So, what's this got to do with software testing?"

"Everything," I said, answering my own question. When we write a typical test case, we are looking for an expected value. But very often, we don't know what the result will be, or even where the problem might lie. And just because two things always occur together, it's all too easy to fall into the fallacy of presuming that one must be causing the other. Always expect the unexpected.

Several years ago, a client told me a testing story that I have since heard all too many times: the product passed all the tests, but within a week of being placed at beta sites, computers began to crash. It seemed that users in the real world queued up all the printed reports and forms, then printed them in a batch at night. The culprit: a memory leak that was only detected in the field. When using an automated test tool, I recommend that clients run end-to-end business scenarios in a loop, executing the scenario hundreds, even thousands, of times while monitoring the resources of databases, servers, and client front ends throughout. The result: memory leaks can be trapped before they are deployed into the field. The problem: you don't necessarily know where to look.
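The loop-and-monitor approach can be sketched in a few lines. This is a minimal, hypothetical illustration, not any particular test tool's API: `run_scenario` stands in for one end-to-end business scenario (here it deliberately retains memory on every call, simulating a leak), and `find_leak` reruns it while sampling the process's traced allocations with Python's `tracemalloc`.

```python
import tracemalloc

_retained = []  # simulated defect: memory retained on every call

def run_scenario():
    # Hypothetical stand-in for an end-to-end business scenario,
    # e.g., queuing and printing one batch of reports.
    _retained.append(bytearray(10_000))  # ~10 KB never released

def find_leak(scenario, iterations=500, sample_every=100):
    """Run a scenario in a loop, sampling memory to spot steady growth."""
    tracemalloc.start()
    samples = []
    for i in range(1, iterations + 1):
        scenario()
        if i % sample_every == 0:
            current, _peak = tracemalloc.get_traced_memory()
            samples.append(current)
    tracemalloc.stop()
    # Monotonically rising samples suggest a leak rather than noise.
    rising = all(b > a for a, b in zip(samples, samples[1:]))
    return rising, samples

leaking, samples = find_leak(run_scenario)
print("leak suspected:", leaking)
```

A single run of the scenario would show nothing wrong; only the repetition, with resource monitoring throughout, exposes the trend.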

Another anecdote comes from the 1999 film Office Space. Three disgruntled employees plant a bit of computer code in bank money-transfer software. It seems the original software had a slight rounding error, and each transaction carried a few hundredths of a cent of interest that no one seemed to have noticed. Their software swept these golden crumbs into their own account. After a few months, they predicted, they would accumulate a few hundred thousand dollars that no one would have missed anyway. After a single weekend, they found that their bank account had ballooned by several million dollars and, well, no more: you'll have to see the film for yourself. The point here is that only by executing test scenarios repeatedly, and with huge sums, would the error have been detected.
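The arithmetic behind that plot point is easy to demonstrate. In this hypothetical sketch (the figures and the `split_interest` helper are invented for illustration), each interest calculation leaves a sub-cent remainder after the credited amount is rounded down to whole cents; a single transaction looks harmless, but a million of them do not:

```python
from decimal import Decimal, ROUND_DOWN

def split_interest(amount, rate):
    """Credit whole cents of interest; return (credited, sub-cent remainder)."""
    interest = amount * rate
    credited = interest.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    return credited, interest - credited

swept = Decimal("0")
for _ in range(1_000_000):  # a million identical transactions
    credited, remainder = split_interest(Decimal("123.45"), Decimal("0.0007"))
    swept += remainder

print(credited)   # 8 cents credited per transaction
print(remainder)  # under a penny left over each time
print(swept)      # yet the sweep totals thousands of dollars
```

No test that checks one transaction against its expected value would ever flag this; only volume reveals it.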

We can look for expected behavior, expected error conditions, and expected return values. But often, the most insidious bugs are the unexpected ones. There will always be a place for manual exploratory testing and automated stress testing for error conditions, both anticipated and unanticipated. Sometimes the best test is not verifying an actual value against an expected value, but expecting, and finding, the unexpected.

--Jeff Gainer


(c)2002 Cutter Information Corp. All rights reserved. This article has been reprinted with the permission of the publisher, Cutter Information Corp., a provider of information resources for IT professionals worldwide.

This article originally appeared in the Cutter IT E-Mail Advisor, a supplement to Cutter IT Journal. www.cutter.com/itjournal
