Requirements: Placebo or Panacea?

[Cartoon: perhaps the definitive commentary on "requirements."]

I vividly remember the days when I matured as a tester. I was the fledgling manager of a small department of folks who were equal parts tester and customer support. The company itself was staffed primarily by young, enthusiastic but inexperienced people, perhaps all given a touch of arrogance by the company’s enormous profitability.

We had just released a major update to our software–my first release ever as a test manager–and we all felt pretty good about it. For a couple days. Then the complaints started rolling in.

“Why didn’t QA find this bug?” was a common refrain. I hated not having an answer.

“Well… uh… No one told me the software was supposed to be able to do that. So how could we know to test for it? We need more detailed requirements!” (I was mindful of the tree cartoon, which had recently been shared around the office, to everyone’s knowing amusement.)

The programmers didn’t escape the inquisition unscathed, either. Their solution–and I concurred–was, “We need a dedicated Project Manager!”

Soon we had one. In no time, the walls were papered with PERT charts. “Critical path” was the new buzzword–and, boy, did you want to stay off that thing!

You couldn’t help but notice that the PERT charts got frequent revisions. They were pretty, and they gave the impression that things were well in hand; our path forward was clear. But they were pretty much obsolete the day after they got taped to the wall. “Scope creep” and “feature creep” were new buzzwords heard muttered around the office–usually after a meeting with the PM. I also found it odd that the contents of the chart would change, but somehow the target release date didn’t move.

As for requirements, soon we had technical specs, design specs, functional specs, specs-this, specs-that… Convinced that everything was going to be roses, I was off and running, creating test plans, test cases, test scripts, blah blah…

The original target release date came and went, and was six months gone before the next update was finally shipped. Two days later? You guessed it! Customers calling and complaining about bugs in things we’d never thought to test.

Aside from concluding that target release dates and PERT charts are fantasies, the result of all this painful experience was that I came to really appreciate a couple of things. First, it’s impossible for requirements documents to be anything close to “complete” (yes, a heavily loaded word if I’ve ever seen one. Let’s provisionally define it as: “Nothing of any significance to anyone important has been left out”). Second, having document completeness as a goal means spending time away from other things that are ultimately more important.

Requirements–as well as the team’s understanding of those requirements–grow and evolve throughout the project. This is unavoidable, and most importantly it’s okay.

Apparently, though, this is not the only possible conclusion one can reach after having such experiences.

Robin F. Goldsmith, JD, is a guy who has been a software consultant since 1982, so I hope he’s seen his fair share of software releases. Interestingly, he asserts here that “[i]nadequate requirements overwhelmingly cause most project difficulties, including ineffective ROI and many of the other factors commonly blamed for project problems.” Here, he claims “The main reason traditional QA testing overlooks risks is because those risks aren’t addressed in the system design… The most common reason something is missing in the design is that it’s missing in the requirements too.”

My reaction to these claims is: “Oh, really?”

How do you define “inadequate” in a way that doesn’t involve question begging?
How do you know it’s the “main” reason?
What do you mean by “traditional QA testing”?

Goldsmith addresses that last question with a bit of a swipe at the context-driven school:

Many testers just start running spontaneous tests of whatever occurs to them. Exploratory testing is a somewhat more structured form of such ad hoc test execution, which still avoids writing things down but does encourage using more conscious ways of thinking about test design to enhance identification of tests during the course of test execution. Ad hoc testing frequently combines experimentation to find out how the system works along with trying things that experience has shown are likely to prompt common types of errors.

Spontaneous tests often reveal defects, partly because testers tend to gravitate toward tests that surface commonly occurring errors and partly because developers generally make so many errors that one can’t help but find some of them. Even ad hoc testing advocates sometimes acknowledge the inherent risks of relying on memory rather than on writing, but they tend not to realize the approach’s other critical limitations.

By definition, ad hoc testing doesn’t begin until after the code has been written, so it can only catch — but not help prevent — defects. Also, ad hoc testing mainly identifies low-level design and coding errors. Despite often being referred to as “contextual” testing, ad hoc methods seldom have suitable context to identify code that is “working” but in the service of erroneous designs, and they have even less context to detect what’s been omitted due to incorrect or missing requirements.

I’m not sure where Goldsmith got his information about the Context-driven approach to testing, but what he’s describing ain’t it! See here and here for much better descriptions.

Goldsmith contrasts “traditional QA testing” with something he calls “proactive testing.” Aside from “starting early by identifying and analyzing the biggest risks,” the proactive tester

…enlists special risk identification techniques to reveal many large risks that are ordinarily overlooked, as well as the ones that aren’t. These test design techniques are so powerful because they don’t merely react to what’s been stated in the design. Instead, these methods come at the situation from a variety of testing orientations. A testing orientation generally spots issues that a typical development orientation misses; the more orientations we use, the more we tend to spot. [my emphasis]

What are these “special risk identification techniques”? Goldsmith doesn’t say. To me, this is an enormous red flag that we’re probably dealing with a charlatan here. Is he hoping that desperate people will pay him to learn what these apparently amazing techniques are?

His advice for ensuring a project’s requirements are “adequate” is similarly unhelpful. As near as I can figure from reading his article, his solution amounts to making sure you know what the “REAL” [emphasis Goldsmith] requirements are at the start of the project, so you don’t waste time on the requirements that aren’t “REAL”.

Is “REAL” an acronym for something illuminating? No. Goldsmith says he’s capitalizing to avoid the possibility of page formatting errors. He defines it as “real and right” and “needed” and “the requirements we end up with” and “business requirements.” Apparently, then, “most” projects that have difficulties are focusing on “product requirements” instead of “business requirements.”

Let’s say that again: To ensure your requirements are adequate you must ensure you’ve defined the right requirements.

I see stuff like this and start to wonder if perhaps my reading comprehension has taken a sudden nose dive.


11 Comments.

  1. Robin Goldsmith has had plenty of opportunity to dispute the Context-Driven view of testing to the faces of the people who have painstakingly crafted and expanded that vision. He has chosen not to do so. I’ve seen him lurking in the back of tutorials of mine several times, and each time I have invited him to challenge or question anything he liked. He has declined every time.

    And since he has actually attended some tutorials, it’s mystifying to me that he could so completely fail to understand what he saw.

    I think some people are addicted to looking like and sounding like an expert, but not so excited to do the hard work of studying their art and responding to other thinkers in the field.

  2. That is definitely a mind blower.

  3. I agree with your post (and James’ assessment) that many want to sound like an expert without taking the time to learn. While this can apply to a host of fields, it is prevalent in the software development industry because it’s one of the few industries where you can get away with it. Why? Because often the leaders of these companies do not know the difference.

  4. Mr. Goldsmith aside, it is indeed possible to define clear, complete, concise and correct requirements before starting on the solution, and to be able to trace those requirements through design and dev to drive test plans & test cases; come to http://www.iag.biz and we can show you how.

    The shortest way to describe this is to change the mindset of requirements gathering. If you ask people what they want, that is an open-ended question that may or may not provide the right answers, and you can never tell if you have got all the answers. That is the nature of expressing requirements as desires.

    Change the question to “what do you do, and what information do you need to do it?” People can absolutely answer this question. Base your requirements on what people do and you will be able to get all the requirements. Not hard, but it does take a change in mindset; try it and see.

  5. Thanks for the comment, Noe!

    David, color me extremely skeptical.

  6. I don’t completely disagree with David; actually, I think his question is a good one – asking about needs rather than desires. But isn’t that how most projects start? Maybe I’ve just always worked for organized companies.

    Regardless of how much analysis we perform early on about must-haves, should-haves, would-like-to-haves, etc., the requirements always evolve as the project progresses. Software development does not happen in a vacuum. The earth does not stop spinning once the requirements doc is complete until development is complete. We’re continually learning more about our clients, their customers, advances in technology, etc., and so it’s natural that the requirements change somewhat.

    Besides having requirements that are somewhat gelatinous as the project progresses, I would contend that it is impossible to formulate requirements that describe every last detail about the product. And that means that we testers are probably not going to think of every last scenario to test or conceivable way to exercise the system.

  7. Thanks for the comment, Trevor! Well said.

  8. Abe,

    Come on by http://www.iag.biz to see people doing it every day… like me!

  9. Trevor,

    Yes, Requirements can change once development starts; having the original base set of Requirements allows you to do impact analysis, so that the change impacts the correct part of the Requirements, and assists in estimating the additional cost/time to make the changes.

    dww

  10. I like the picture with the requirements perspectives. Whose is it, and where is it from, so I can credit it?

  11. Shannon, I don’t have an answer for you. That’s been floating around, now, for at least 12 years. Sorry I can’t be any help.
