Post Hoc Ergo Propter Hoc
By Jeff Gainer
(Author's note: This essay originally appeared in the 04 April 2002 issue of the Cutter IT Journal E-Mail Advisor.)
For the most part, functional testing seems straightforward. Do this, and that happens. Is "that" what we expected? If so, the test passes; if anything else happens, it fails. Simple enough.
The phrase "Post hoc, ergo propter hoc" ("After this, therefore because of this") warns against the fallacy of presuming that if two events occur together, or in sequence, then one necessarily causes the other. This sort of logical error is fairly simple to root out in functional, white-box, and unit testing, which makes a strong case for testing the tests themselves before executing them. In the realm of performance testing, however, determining which event causes another, or whether both are occurring because of a third, undefined cause, is a far more complicated matter.
Ideally, functional testing relies solely on the logical paths of the application under test. Performance testing an e-business application, however, can present a far thornier set of problems. When designing tests, particularly performance tests, it is important to define the objective of the test: does it exercise a business scenario, or does it probe an infrastructure weakness? I have observed clients lumping all their tests under the single label of "performance tests" without giving much regard to their individual purposes. A rare few, by contrast, classify their performance tests as stress tests, limit tests, aging tests, and capacity tests, each with detailed definitions and objectives. The names don't much matter, but their definitions do. Make certain that everyone, from line testers to sponsors and stakeholders, understands what each category of test is designed to do.
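One lightweight way to make such definitions explicit is to record them as structured data that testers, sponsors, and stakeholders can all review. The sketch below is illustrative only; the category names, objectives, and pass criteria are hypothetical examples, not definitions from this essay:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestCategory:
    """A named category of performance test with a written objective."""
    name: str
    objective: str
    pass_criterion: str

# Hypothetical definitions. The point is not these particular names
# but that every category has an objective everyone can read.
CATEGORIES = [
    TestCategory("stress",
                 "Find the load at which the system degrades or fails",
                 "System recovers gracefully once the overload is removed"),
    TestCategory("capacity",
                 "Verify the system handles the predefined target volume",
                 "95th-percentile response time stays under the agreed limit"),
    TestCategory("aging",
                 "Detect resource leaks over a long, sustained run",
                 "Memory and handle counts remain stable for the test window"),
]

def lookup(name: str) -> TestCategory:
    """Return one category's definition, so no two teams mean
    different things by the same test name."""
    by_name = {c.name: c for c in CATEGORIES}
    return by_name[name]
```

However the definitions are stored, the design choice that matters is that they are written down once and referenced everywhere, rather than living in each tester's head.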
The first of the two kinds of tests comprises those designed to answer the questions posed by the application's use: can it withstand predefined or ever-increasing volumes of users and/or transactions? Like business-rule functional tests, these tests are best designed by domain experts, or at a minimum should have their requirements outlined by application domain experts. I have found that getting these requirements is problematic, however, because often they simply have never been defined. Even when they have been, the requirements are frequently based on guesses, both educated and uneducated. Sometimes, though, only a test in the field will suffice: witness the high-profile news stories about Web sites that crash instantly on their debut, or Internet backbones that become hopelessly clogged when a major news story breaks.
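The shape of such a volume-oriented test can be sketched in a few lines. This is a minimal, assumption-laden illustration: the `transaction` function here merely simulates work with a short sleep, where a real test would invoke the application under test, and the user and request counts are arbitrary:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def transaction() -> float:
    """Stand-in for one user transaction. A real performance test
    would call the application under test here. Returns elapsed seconds."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server-side work
    return time.perf_counter() - start

def run_load_test(users: int, requests_per_user: int) -> dict:
    """Drive concurrent simulated users and summarize response times."""
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = sorted(pool.map(lambda _: transaction(),
                                  range(users * requests_per_user)))
    return {
        "count": len(timings),
        "mean_s": statistics.mean(timings),
        "p95_s": timings[int(0.95 * (len(timings) - 1))],
    }

result = run_load_test(users=10, requests_per_user=5)
```

Whether the resulting percentile figures constitute a pass is exactly the requirement that, as noted above, is so often never defined.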
The other type of test, however, is best designed by technological experts who know how to "break" a system. These are the pass/fail tests of the infrastructure. The technical experts know the various levels involved (the routers, the databases, the load balancing, and so on) and are best qualified to design tests that assault these vulnerabilities. The test definitions and classifications must be clearly defined and should never be used interchangeably: just as a null string and an empty string are rarely the same thing, depending on the environment, a stress test and a performance test may well test very different objectives.
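The null-versus-empty-string analogy can be made concrete. In Python, for instance, `None` and `""` look identical to a naive truth test but diverge the moment you treat them as strings (a small illustration, not part of the original essay):

```python
null_string = None   # no value at all
empty_string = ""    # a real string of length zero

# Both are falsy, so a naive truth test cannot tell them apart...
assert not null_string and not empty_string

# ...but they differ as soon as you treat them as strings.
assert empty_string == ""       # a zero-length string equals ""
assert null_string != ""        # None is not a string at all
assert len(empty_string) == 0
try:
    len(null_string)            # raises TypeError: None has no length
except TypeError:
    pass
```

Two test categories that "look the same" under a vague definition can hide an equally sharp difference in what they actually measure.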
(c)2002 Cutter Information Corp. All rights reserved. This article has been reprinted with the permission of the publisher, Cutter Information Corp., a provider of information resources for IT professionals worldwide.
This article originally appeared in the Cutter IT E-Mail Advisor, a supplement to Cutter IT Journal. www.cutter.com/itjournal