(Good) Testers Are Not Robots!

Reading James Bach’s recent blog post this morning, “The Essence of Heuristics” – in particular the list of questions at the end – I was reminded, by way of stark contrast, of the testing culture I found when I started my current consulting gig.

One of the first things I was told was one of their testing “rules” – every test case should be repeated, with different data, 15 times. At first I simply marveled at this, privately. I figured someone must have a good reason for choosing 15 as the magic number. Why not 5? Or, for that matter, 256? Why every test case? Surely my time would be better spent doing a new test case instead of the 15th iteration of the current one, right?

Sooner or later, I thought, the rule’s reasonableness should become apparent. After a couple weeks I knew the team a little better, but the rule still seemed as absurd to me as when I first heard it, so I broached the topic.

“Why do you run 15 iterations of every test case?”

“Well, sometimes when we run tests, the first 10 or 12 will pass, but then the 11th or 13th, for example, will fail.”

“Okay, well, do you ever then try to discover what exactly the differences were between the passing and failing tests? So that you can be sure in the future you’ll have tests for both scenarios?”

<blank stare>
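The question I was asking points at a cheaper alternative to the blanket rule. As a sketch only – the function and data here are invented for illustration, not taken from that project – the idea is to name the property each input is meant to exercise, so that an input which once failed gets pinned as its own case instead of being buried in arbitrary repetition:

```python
# Hypothetical sketch: rather than running 15 arbitrary iterations of one
# test case, name the property each input exercises, and pin any input
# that once failed as its own labeled case.

def normalize_username(name):
    # Toy function under test, invented for illustration:
    # trims surrounding whitespace and lowercases.
    return name.strip().lower()

# The "rule" in effect: many near-identical iterations, mostly redundant.
blind_iterations = ["Alice", "Alice ", " alice", "ALICE"] * 4  # 16 repeats

# The alternative: each case documents WHY it exists, so a failure
# immediately tells you which property broke.
targeted_cases = {
    "plain":          ("alice",  "alice"),
    "mixed_case":     ("AlIcE",  "alice"),
    "trailing_space": ("bob ",   "bob"),
    "leading_space":  (" carol", "carol"),
    "empty_string":   ("",       ""),  # the kind of input that fails "the 11th time"
}

for label, (raw, expected) in targeted_cases.items():
    assert normalize_username(raw) == expected, f"case {label!r} regressed"
```

Five deliberate cases here cover more distinct behavior than sixteen repeats of the same shape of input ever would.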

I quickly came to realize that this testing “rule” was symptomatic of a larger issue: an attitude in management that the team couldn’t be trusted to approach the testing problem intelligently. I saw evidence of this attitude in other ways. For example, we were told that all bug descriptions needed to include the date and time the bug occurred, so that the programmers would know where to look in the log files. When I pointed out that not all bugs will involve issues with logged events, I was told that they just didn’t want to confuse the junior team members.

Another example – and a particular pet peeve of mine – is the requirement that every test case include detailed step-by-step instructions to follow, leaving no room for creative thinking, interpretation, or exploration. The reasoning behind the excruciating detail, of course, is so the newest team members can start testing right away. My first objection to this notion is that the fresh eyes of a new user can see problems that veterans have become blind to. As such, putting blinders on the newbies is not a good idea. Also, why bypass testing the product’s usability, the help documentation, and the user manual? New users are a great resource for that.

In short, testers are not robots, and treating them like they are will result in lower quality testing efforts.


4 Comments.

  1. “…detailed step-by-step instruction leaves no room for creative thinking, interpretation, or exploration. The reasoning behind the excruciating detail, of course, is so the newest team members can start testing right away.”

    Oy. It’s even worse than you think.

    There is absolutely room for creative thinking, interpretation, and exploration. Suppressing those things is doubtless the desire of those who mandate the scripting. That’s bad.

Yet the scripts themselves don’t guarantee that those good things won’t happen. The control that the prescriptionists seek is only an illusion of control. In its way, that’s bad too, if that’s what the prescriptionists really want.

    Yet the control likely will work to some degree. To the degree that it works, there are very few better ways to make sure that important bugs don’t get found. Inattentional blindness in the form of selective attention and semantic filtering is a certain consequence of following a script. When you’re programmed to look for a certain kind of result, problems in the product will go undetected.

    Finally, one of the claims of the prescriptionists is that scripted tests “help new testers to learn”. Everything we know about learning, in theory and in practice, tells us that this assertion is unwarranted and false. People don’t learn when they follow scripts; people learn when they have authentic problems to solve, when those problems are cognitively rich, and when people have the latitude to try solving them in their own way; when people receive feedback, both from collaboration and mentoring and from their interactions with the subject of the learning. Scripts don’t promote learning; they inhibit it. (For reference, see
    Exploring Science: The Cognition and Development of Discovery Processes (Klahr & Simon); papers by Okada and Simon; etc.)

    —Michael B.

  2. “Another example – and a particular pet peeve of mine – is the requirement that every test case include detailed step-by-step instructions to follow, leaving no room for creative thinking, interpretation, or exploration.”

    What’s interesting is when this practice is removed – at first teams can feel a bit lost – but when they realise they’re being trusted to think and produce – then it becomes a turning point.

    Then it’s difficult to do it a different way – as they’ve been “empowered” – and that’s a good thing!

  3. Thanks, Simon. I think you’re exactly right. Treat people like intelligent adults and soon they start acting like them!

    Michael, excellent point, re: inattentional blindness. It’s made especially bad when combined with test case obsession. When “productivity” is measured based on the number of test cases you complete… yikes!
