The Post Hoc Fallacy

Correlation is not causation.

It seems a simple statement when you look at it. Just because night follows day does not mean that day causes night. However, it’s clear that people fall prey to this fallacy all the time. It’s what’s behind, for example, the superstitious rituals of baseball pitchers.

A far less trite example is modern medicine. You have a headache. You take a pill. Your headache goes away. Did it go away because of the pill you took? Maybe it would have gone away on its own.  How do you know?

Teasing out causation from mere correlation in cases like that, with potentially dozens of unknown and uncontrolled variables, is notoriously difficult. The entire industry of complementary and alternative medicine banks on the confusion.

I was thinking about all this the other day when I was testing a tool that takes mailed orders for prescription drugs, digitizes the data, and then adds it all to a central database. I was focusing specifically on the patient address information at the time, so the rest of the order data, like payment information, was fairly simple: all my test orders were expected to be assigned a payment type of “invoice”, and they were. So in the course of my address testing I “passed” the test case for the invoice payment type.

It wasn’t until later that I realized I had committed the fallacy Post hoc ergo propter hoc (“After this, therefore because of this”), just like the person who attributes the disappearance of their headache to the sugar pill they’ve just taken. I discovered that all orders were getting a payment type of “Invoice”, regardless of whether they had checks or credit card information attached.

Inadvertently, I had succumbed to confirmation bias. I forgot, momentarily, that proper testing always involves attempting the falsification of claims, not their verification.
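The mistake can be sketched in a few lines of Python. All the names here are hypothetical, not the actual tool's code: a deliberately buggy `assign_payment_type` that labels every order “invoice”, a verification-style check that passes anyway, and a falsification-style check that exposes the bug by feeding in an order that should *not* be an invoice.

```python
def assign_payment_type(order):
    """Hypothetical buggy implementation: it ignores the attached
    payment details entirely and labels every order "invoice"."""
    return "invoice"

# Two sample orders: one genuinely invoice-type, one with a check attached.
invoice_order = {"payment": None}
check_order = {"payment": {"kind": "check"}}

# Verification-style test: confirm only the expected case. It passes even
# against the buggy implementation -- after this, therefore because of this.
verified = assign_payment_type(invoice_order) == "invoice"
print("verification passed:", verified)

# Falsification-style test: try to disprove the claim with an order that
# should NOT be an invoice. The check comes back False, revealing the bug.
falsified = assign_payment_type(check_order) != "invoice"
print("falsification check passed:", falsified)
```

The verification test alone would have let the bug through, exactly as it did in practice; only the attempt at falsification surfaces it.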


8 Comments.

  1. Sounds like this place you're consulting for has some serious issues. They will end up paying for it in the long run. Nice blog, by the way. I look forward to reading more posts.

  2. It’s dismaying how many places are penny wise but pound foolish.

    Thanks for the comment.

  3. I’d bet that very few software testers recognize this fallacy to the degree you do, Abe. And the problem is rampant everywhere we look. I’ve spent the last two months running statistical regressions on the US and global economy (MBA course in macroeconomics), and it’s just crazy how few “talking heads” (as my prof calls them) understand this basic fact. You can run regressions all day and discover correlations every time, but they are not causation; to find causation, you have to be insightful and creative. You have to understand the underlying story about what’s happening. That takes a curious mind and big-picture thinking. Like yours!

  4. Thanks, Nancy. I’m blushing.

    I’m re-reading Nassim Taleb’s Fooled By Randomness right now (I’m thinking about writing a review of it) and he brings up another example of the Post Hoc fallacy, made by the author of The Millionaire Mind.

    Apparently millionaires as a group are risk-takers. Taleb makes the point that you would, no doubt, also see a preference for risk-taking in the population of people who are filing for bankruptcy.

    Not really testing-related, I know. I just found it interesting.

  5. Thanks for the nice read.

    I think this is a very common mistake made by almost everyone, even by software testers. Even if some software testers do know how NOT to fall for this, it can be pretty hard to convince “the rest of the world” of how they are wrong.

    We are all professionals who try to learn from what we experience. Those who are especially eager to learn want to apply all the new things they pick up right away, and so they draw conclusions based on what they think is causation (when in fact it is just correlation).

    Randall Munroe expressed the mixing up of correlation and causation very nicely in one of his comics: http://xkcd.com/552/

  6. Interesting post. When I think about this subject, the first thing that comes to mind is the myth of automation's role in testing.
    Because it was observed that people who do occasional scripting get much better results than people who can't script at all, it was concluded that automating almost 100% of the testing is the solution.
    For example, in a typical organization things tend to go like this: new automation people are brought in because the PM saw it was impossible to do all the work by hand, and they start to create complex frameworks. The results seem acceptable. But when a new wave of people arrives and is tasked with working on the framework, the results are not so satisfying. What happened is that the initial people had time to do exploratory testing before any framework existed, while the new people were pushed directly into it. And of course they will be judged on the premise that their automated scripts are simply not good.

    Sebi
    http://www.testalways.com

  7. Thanks for commenting, Martijn.

    That’s a great comic. I wonder how many people read it and don’t get it.

  8. Sebi, the situation you describe sounds frustrating!
