Opportunity Cost

I’ve always had an abiding love of economics – which, contrary to what seems to be popular belief, is not about how to balance your checkbook, but about what it means to make choices in the face of scarcity. Once you’re even a little familiar with the economic way of thinking, you never see the world in quite the same way again.

A fundamental component of economic thought is the notion of “opportunity cost.” The idea was probably most famously and elegantly described by the French economist Frédéric Bastiat in his 1850 essay “What Is Seen and What Is Not Seen”. The gist is this: the real cost of a thing is not the time or the money you spend on it, but the best alternative you give up to get it. For example, the opportunity cost of the omelet you had for breakfast is the pancakes you didn’t have.

Testers – ever familiar with tight schedules, under-staffing, and a potentially infinite list of tests to perform – probably understand opportunity cost at a more visceral level than your typical academic economist. When testers keep opportunity cost in mind, they’ll constantly be asking themselves, “Is this the most valuable test I could be doing right now?”

This brings us to the perennial question: Should testing be automated?

Setting aside the deeper question of “Can it?”, the answer, obviously, should involve weighing the benefits against the costs – including, most importantly, the opportunity cost. The opportunity cost of automated testing is the manual testing you have to give up, because – let’s not kid ourselves – it’s the rare software company that will hire a whole new team whose sole job is planning, writing, and maintaining the test scripts.
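To make that weighing concrete, here is a back-of-the-envelope sketch in Python. Every number in it is hypothetical – the scripting effort, the per-run maintenance, and especially the exploratory-testing hours the scripting displaces (the opportunity cost) – so treat it as an illustration of the accounting, not a formula for any real project.

```python
# Toy cost comparison for automating a single test.
# All numbers are hypothetical, for illustration only.
hours_to_automate = 8.0      # one-time effort to script the test
maintenance_per_run = 0.25   # hours per run spent fixing the script
manual_run_hours = 0.5       # hours to execute the test by hand
displaced_exploration = 2.0  # hours of exploratory testing given up
                             # while scripting: the opportunity cost

def automation_pays_off(runs: int) -> bool:
    """True if automating beats manual execution over `runs` runs."""
    automation_cost = (hours_to_automate
                       + maintenance_per_run * runs
                       + displaced_exploration)
    manual_cost = manual_run_hours * runs
    return automation_cost < manual_cost

for runs in (10, 50, 100):
    print(runs, automation_pays_off(runs))
```

Note the opportunity-cost line item: leave it out and automation looks cheaper than it really is, which is precisely the point of this post.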

And let’s say the company does hire specialist automation testers. Well, it seems there’s always an irresistible inclination to, in a pinch (and Hofstadter’s Law ensures there’s always a pinch), have the automation team help with the manual testing efforts – “only to get us through this tough time”, of course. Funny how that “tough time” has a tendency to get longer and longer. Meanwhile, the automation scripts are not being written, or else they’re breaking or becoming obsolete. (Note that, of course, when a company puts automation on hold in favor of doing manual testing, it’s an implicit recognition of the opportunity cost of the automation.)

There’s another factor to consider: How valuable are these automated tests going to be?

I’ve already made the argument that good testers are not robots. Do I need to point out what automated testing is? Scripts are the epitome of inflexible, narrowly focused testing. A human performing the same test will notice issues that the machine misses. This is why James Bach prefers to call manual testing “sapient” instead. He has a point!

Furthermore, except in particular domains (such as data-driven testing), any single test loses value each time it is performed, thanks to something else from economic theory: the law of diminishing returns. Sure, sometimes developers break stuff that was working before, but unless there’s no chance the manual – sorry, sapient – testers would have found the problem in a timely fashion during their – ahem! – sapient tests, the value of any given automated test asymptotically approaches zero. Meanwhile, its cost of maintenance and execution only increases.
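For the quantitatively inclined, here is a toy model of that claim in Python. It is only a sketch: the geometric decay in the chance of catching a new regression, and every number in it, are assumptions chosen for illustration, not measurements from any real project.

```python
# Toy model: diminishing returns of re-running one automated check.
# All numbers are hypothetical, for illustration only.
p_first_run = 0.20  # chance the first run catches a real regression
decay = 0.5         # each later run is half as likely to find anything new
bug_value = 10.0    # hours saved when a regression is caught early
run_cost = 0.1      # hours of execution and upkeep per run (flat here;
                    # in practice it tends to grow)

cumulative_value = cumulative_cost = 0.0
for n in range(1, 11):
    expected_value = bug_value * p_first_run * decay ** (n - 1)
    cumulative_value += expected_value
    cumulative_cost += run_cost
    print(f"run {n:2d}: value {expected_value:.3f}h, "
          f"total value {cumulative_value:.2f}h, total cost {cumulative_cost:.2f}h")
```

With these made-up numbers, the expected value of each successive run shrinks toward zero while the total cost climbs steadily – exactly the asymptote described above.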

I won’t go so far as to say that test automation is never worth it. I’m sure there are situations where it is. However, given the precious opportunity cost involved, I think it’s called for less frequently than is generally believed.


(Good) Testers Are Not Robots!

Reading James Bach’s recent blog post this morning, “The Essence of Heuristics” – in particular the list of questions at the end – I was reminded, by way of stark contrast, of the testing culture I found when I started my current consulting gig.

One of the first things I was told was one of their testing “rules” – every test case should be repeated, with different data, 15 times. At first I simply marveled at this, privately. I figured someone must have a good reason for choosing 15 as the magic number. Why not 5? Or, for that matter, 256? Why every test case? Surely my time would be better spent doing a new test case instead of the 15th iteration of the current one, right?

Sooner or later, I thought, the rule’s reasonableness would become apparent. After a couple of weeks I knew the team a little better, but the rule still seemed as absurd to me as when I first heard it, so I broached the topic.

“Why do you run 15 iterations of every test case?”

“Well, sometimes when we run tests, the first 10 or 12 will pass, but then the 11th or 13th, for example, will fail.”

“Okay, well, do you ever then try to discover what exactly the differences were between the passing and failing tests? So that you can be sure in the future you’ll have tests for both scenarios?”

<blank stare>

I quickly came to realize that this testing “rule” was symptomatic of a larger issue: an attitude in management that the team couldn’t be trusted to approach the testing problem intelligently. I saw evidence of this attitude in other ways. For example, we were told that all bug descriptions needed to include the date and time the bug occurred, so that the programmers would know where to look in the log files. When I pointed out that not all bugs will involve issues with logged events, I was told that they just didn’t want to confuse the junior team members.

Another example – and a particular pet peeve of mine – is the requirement that every test case include detailed step-by-step instructions, leaving no room for creative thinking, interpretation, or exploration. The reasoning behind the excruciating detail, of course, is that the newest team members can start testing right away. My first objection to this notion is that the fresh eyes of a new user can see problems that veterans have become blind to; putting blinders on the newbies is not a good idea. Also, why pass up the chance to test the product’s usability, the help documentation, and the user manual? New users are a great resource for that.

In short, testers are not robots, and treating them like they are will result in lower quality testing efforts.
