Testing’s Quiet Evidence

Since my post about Nassim Taleb’s The Black Swan I’ve continued to muse about the book and its ideas. In particular, while thinking about the notion of “silent evidence,” I realized there was a connection to testing that I hadn’t noticed before. I won’t flatter myself by thinking I missed the link due to some inherent subtlety in the concept. More likely it’s because I already associated the idea with something else: get-rich-quick schemes. Bear with me for a minute while I explain…

A few years ago I was consumed with debunking network marketing companies and real estate investing scams (If you’re genuinely curious why, I explain it all here). My efforts were focused primarily on a real estate “guru” who lives nearby, in Glendale, AZ. As with most con artists, his promotional materials include dozens of narrative fallac–uh… testimonials from former “students” who claim they achieved “great success” using the investment methods he teaches.

Of course, what the “guru” doesn’t promote is the undoubtedly much larger number of people who attended his “boot camps” or bought his materials and either A) did nothing, or B) tried his techniques and lost money. (On the rare occasions such people are even mentioned, there’s always a ready explanation: the blame lies not with the technique, but with the practitioner. Voilà! The scammer has just removed your ability to falsify his claims!) Taleb calls these people the “silent evidence.” To ignore them when evaluating a population (in this case, the customers of a particular guru) is to engage in survivorship bias and miscalculate whatever you’re trying to measure. Con artists of all stripes make millions encouraging their customers to do exactly that.

Testers are not con artists (as a rule, I mean), but we do have something that, while perhaps not silent, should be considered at least very quiet. In contrast to the scammers, it’s not to our advantage that it stay quiet. In fact, I’m starting to wonder if keeping it quiet isn’t at least partly to blame for some of the irrational practices you find in dysfunctional test teams, such as the obsession with test cases.

What am I talking about?

I am referring, dear reader, to all the bugs that were found and fixed prior to release. All those potentially serious issues that were neutralized before they could do any damage. No one thinks about them, because they don’t exist, except as forgotten items in a database no one cares about anymore. But they’re there–hundreds, maybe thousands of them–quietly testifying to averted disasters, maintained reputations, even saved money (hence why I call them “quiet” rather than silent: they’re still there if you look for them).

Meanwhile, the released product is out in the world, exposing its inevitable and embarrassing flaws for all to see, prompting CEOs and sales teams to wonder, “What are those testers doing all day? Why aren’t they assuring quality?” Note that this reaction is precisely the survivorship bias I mentioned above. The error causes them to undervalue the test team, in a way exactly analogous to how dupes of the real estate gurus overvalue the guru.

Okay, so what to do about this? I confess that, as yet, I do not know. Right now all I can say is that it behooves us as testers to come up with ways of better publicizing the bugs we find–to turn our quiet evidence into actual evidence. As to how to go about that, well, I’m open to suggestions.
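One half-formed idea, just to make it concrete: periodically pull the fixed-before-release bugs out of that forgotten database and turn them into a short summary for the status report. The sketch below is only that–a sketch, in Python, assuming your tracker can export issues as a CSV with status, severity, and fix-date columns; the file name and field names are made up for illustration, not any particular tool’s real schema.

```python
# Hypothetical sketch: summarize the bugs that were found and fixed before release.
# The CSV export and its column names ("status", "severity", "fixed_date") are
# assumptions for illustration, not any real bug tracker's schema.
import csv
from collections import Counter
from datetime import date

RELEASE_DATE = date(2011, 6, 1)  # stand-in release date

def summarize_quiet_evidence(csv_path):
    """Count fixed-before-release bugs, grouped by severity."""
    counts = Counter()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["status"].strip().lower() != "fixed":
                continue
            if date.fromisoformat(row["fixed_date"]) < RELEASE_DATE:
                counts[row["severity"]] += 1
    return counts

if __name__ == "__main__":
    counts = summarize_quiet_evidence("bugs_export.csv")
    print(f"{sum(counts.values())} bugs were found and fixed before release:")
    for severity, n in counts.most_common():
        print(f"  {severity}: {n}")
```

Even a crude count like that, repeated every release, at least puts the quiet evidence in front of the people wondering what the testers do all day.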

  1. Hi Abe,

    Interesting angle – yes, faults that don’t make it to the released product are indeed silent evidence.

    I think testers sometimes have trouble presenting what they do. In pre-release reports they can talk about the problems identified – showing the types of issues that the end customer won’t be subjected to (maybe).

    Why “maybe”? The conditions under which the product was tested also have a bunch of related silent evidence – I alluded to that here.

    But how to tackle the post-release issues? That partly comes down to reputation and trust in what the organisation is doing.

    Any fault coming in from the field will have a bunch of up-front information and a whole lot of silent information/evidence. This is where forensic root-cause analysis can help – not just looking at where a fault was “introduced” and/or “missed” (both words laden with blame-bias!) but also at what the project circumstances were (issues/problems/priorities/line intervention) at the supposed time of fault introduction.

    I think the serious faults that are found in the field do not always have simple answers – it might be a chain-reaction of events & circumstances – and once the organisation begins to realise/understand that, it becomes less likely to blame the testers for not finding the problem.

    A first step is enhancing the testers’ reputation pre-release (to help prevent an automatic knee-jerk reaction). To establish a basis for providing good/useful information that is eventually used by the CEOs and sales teams, testers (individually and as teams) need to build their brand.

  2. …once the organisation begins to realise/understand that, it becomes less likely to blame the testers for not finding the problem.

    Reminds me of the counter-productive attitude of the organization at my last gig. It was very “cover-your-ass” there, which meant, in practice, taking screen shots of *everything* to “prove” it was working!

    Ugh.

    I was at a loss as to where to even begin correcting the erroneous thinking behind that practice.

  3. The same problem arises in information security–it’s rare to get credit for the attacks which fail, only blame for those which succeed.

  4. It was as though you’d seen me whilst writing the comment – here’s a line I wrote in the original comment and cut before pasting (relating to the paragraph you quoted): “Next question: How to establish those routines in an org?”

    I’m actually doing some research for an article related to this – but the first question is whether it is worth the effort. I left a place 6 years ago partly because of the mentality toward the testers – in that case it wasn’t worth the effort to try to change the culture (my opinion).

    So, if you determine it’s not worth the effort – correct, why bother. Life’s too short!

  5. Jim, thanks for the comment! I can imagine that Information Security must be a mostly thankless business, too, then.

    Simon, that’s pretty funny.

  6. Hi Abe,

    I wrote something on this a while back. I called it celebrating your wins.

    http://mavericktester.com/celebrate-your-wins-in-software-testing

  7. Anne-Marie, excellent. Thanks!

    I’ve been surprised and dismayed by the number of test teams that see finding bugs as a bad thing. Of course it’s never a good idea to gloat in front of the programmer, but if you don’t cultivate a sense of pride in finding strange and nasty bugs… well, lots of bad things happen.
