Tag Archives: Communication

Testers Cannot Be Perfect Proxies for End Users

The following is a snippet of a Skype conversation I was a part of recently. I wanted to share it, since I think it’s a perfect example of a perennial tester conversation. See if you recognize yourself in it. The participants’ names have been removed in the interest of privacy. Other minor changes were made for the sake of clarity.
Irreverence Versus Arrogance
Everything sacred is a tie, a fetter.
— Max Stirner
I am an irreverent guy. I’m a fan of South Park and QA Hates You, for example. Furthermore, I think it’s important–nay, essential–for software testers to cultivate a healthy irreverence. Nothing should be beyond question or scrutiny. “Respecting” something as “off limits” (also known as dogmatism) is bound to lead to unexamined assumptions, which in turn can lead to missed bugs and lower quality software. If anything, I think testers should consider themselves akin to the licensed fools of the royal court: Able–and encouraged–to call things as they see them and, especially, to question authority.
Contrast that with arrogance–an attitude often confused with irreverence. The distinction between them may be subtle, but it is key. Irreverence and humility are not mutually exclusive, whereas arrogance involves a feeling of smug superiority; a sense that one is “right.” Arrogance thus contains a healthy dose of dogmatism. The irreverent, on the other hand, are comfortable with the possibility that they’re wrong. They question all beliefs, including their own. The arrogant only question the beliefs of others.
I pride myself (yes, I am being intentionally ironic, here) on knowing this difference. So, it pains me to share the following email with you. It’s an embarrassing example of a moment when I completely failed to keep the distinction in mind. Worse, I had to re-read it several times before I could finally see that my tone was indeed arrogant, not irreverent, as I intended it. I’ll spare you my explanations and rationalizations about how and why this happened (though I have a bunch, believe me!).
The email–reproduced here unmodified except for some re-arranging, to improve clarity–was meant only for the QA team, not the 3rd-party developer of the system. In a comedy of errors and laziness it ended up being sent to them anyway. Sadly, I think its tone ensured that none of the ideas for improvements were implemented.
After you’ve read the email, I invite you to share any thoughts you have about why it crosses the line from irreverence into arrogance. Naked taunts are probably appropriate, too. On the other hand, maybe you’ll want to tell me I’m wrong. It really isn’t arrogant! I won’t hold my breath.
Do you have any stories of your own where you crossed the line and regretted it later?
The user interface for OEP has lots of room for improvement (I’m trying to be kind).
Below are some of my immediate thoughts while looking at the OEP UI for the front page. (I’ll save thoughts on the other pages for later)
1. Why does the Order Reference Number field not allow wildcards? I think it should, especially since ORNs are such long numbers.
2. Why can you not simply click a date and see the orders created on that date? The search requires additional parameters. Why? (Especially if the ORN field doesn’t allow wildcards!)
3. Why, when I click a date in the calendar, does the entire screen refresh, but a search doesn’t actually happen? I have to click the Search button. This is inconsistent with the way the Process Queue drop down works. There, when I select a new queue, it shows me that instantly. I don’t have to click the “Get Orders” button.
4. In fact, I *never* have to click the Get Orders button. Why is it even there on the screen?
5. What does “Contact Name” refer to? When is anyone going to search by “Contact Name”? I don’t even know what a Contact Name is! Is it the patient? Is it the OEP user???
6. Why waste screen space with a “Select” column (with the word “Select” repeated over and over again–this is UGLY) when you could eliminate that column and make the Order Reference number clickable? That would conserve screen space.
7. Why does OEP restrict the display list to only 10 items? It would be better if it allowed longer lists, so that there wouldn’t need to be so much searching around.
8. Why are there “View Notes” links for every item, when most items don’t have any notes associated with them? It seems like the View Notes link should only appear for those records that actually have notes.
9. Same question as above, for “Show History Records”.
10. Also, why is it “Show History Records” instead of just “History”, which would be more elegant, given the width of the column?
11. Speaking of that, why not just have “History” and “Notes” as the column headers, and pleasant icons in those rows where History or Notes exist? That would be much more pleasing to the eye.
12. In the History section, you have a “Record Comment” column and an “Action Performed” column. You’ll notice that there is NEVER a situation where the “Action Performed” column shows any useful information beyond what you can read in the “Record Comment” field. Why include something on the screen if it’s not going to provide useful information to the user?
For example:
Record Comment: Order checked out by user -TSIAdmin-
Action Performed: CheckOut
That is redundant information.
In addition to that, in this example the Record Create User ID field says “TSIAdmin”. That’s more redundant information.
There must be some other useful information that can be put on this screen.
13. Why does the History list restrict the display to only 5 items? Why not 20 items? Why not give the user the option to “display all on one page”?
14. In Notes section of the screen, the column widths seem wrong. The Date and User ID columns are very wide, leaving lots of white space on the screen.
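For what it’s worth, several of those nits (items 6, 8, 9, and 11 in particular) boil down to one small idea: make the order reference number itself the link, and show a Notes or History indicator only when there is actually something behind it. Here is a minimal sketch of the sort of thing I had in mind, in Python purely for illustration; the Order type, render_row, and the URL scheme are all invented, since we never saw OEP’s internals.

```python
from dataclasses import dataclass

@dataclass
class Order:
    reference_number: str
    has_notes: bool
    has_history: bool

def render_row(order: Order) -> str:
    """Render one results row: the ORN itself is the link (no "Select"
    column), and Notes/History cells get an icon only when content exists."""
    cells = [
        f'<td><a href="/orders/{order.reference_number}">'
        f'{order.reference_number}</a></td>',
        # Empty cell instead of a dead "View Notes" link.
        f'<td>{"&#128221;" if order.has_notes else ""}</td>',
        # Empty cell instead of a dead "Show History Records" link.
        f'<td>{"&#128336;" if order.has_history else ""}</td>',
    ]
    return "<tr>" + "".join(cells) + "</tr>"

print(render_row(Order("20041234567890", has_notes=True, has_history=False)))
```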
The Blackest of Boxes
At a recent gig I was one of several veteran testers brought in to shore up a team that… I don’t want to say “lacked experience.” Probably the best description is that they were “insular.” They were basically re-inventing the wheel.
I had a number of discussions with the team’s manager and he seemed receptive to my suggestions, up to a point. Past that, though, all I heard was “We don’t do that.” When I first heard this, my mind was suddenly struck with visions of Nomad, the probe from Star Trek, whose favorite thing to say was “Non sequitur. Your facts are uncoordinated.”
Don’t worry. While I was tempted to, I didn’t actually say it out loud.
Let me back up a little. The job involved testing a system that is sandwiched between two others–a digital scanner and a database. The system is designed to take the scanned information and process it so that the right data can be updated and stored, with a minimum of human intervention. That last part is key. Essentially most of what happens is supposed to be automatic and invisible to the end user. You trust that checks are in place such that any question about data integrity is flagged and presented to a human.
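As a sketch of that routing rule (under my own assumptions; I never saw the actual checks, and the confidence score, threshold, and field names here are all invented):

```python
# Hypothetical auto-commit-or-flag routing, assuming the OCR stage
# produces a confidence score. Threshold and checks are invented.
def route(record: dict, ocr_confidence: float) -> str:
    """Commit clean records automatically; flag doubtful ones for a human."""
    if ocr_confidence >= 0.98 and required_fields_present(record):
        return "auto-commit"        # invisible to the end user
    return "human-review-queue"     # the only part a user ever sees

def required_fields_present(record: dict) -> bool:
    # Stand-in for the real integrity checks, whatever they were.
    return all(record.get(k) not in (None, "") for k in ("customer", "quantity"))

print(route({"customer": "SMITH, J", "quantity": "10"}, ocr_confidence=0.99))
```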
Testers, however, are not end users. This is a complicated system, with a lot of moving parts (metaphorically speaking). When erroneous data ends up in the database it’s not a simple matter to figure out where the problem lies. When I asked for tools to help the team “look under the hood” (for example, at the original OCR data or the API calls sent to the database) and eliminate much of the overhead involved with some of the tests, I was told “But we are User Acceptance Testing”, as if that were somehow an argument.
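To make the ask concrete, here is the kind of throwaway tool I was picturing. Everything in it is hypothetical, because the real pipeline archived none of its intermediate data; the point is how little it would take to let a tester trace a bad record back to the stage that produced it.

```python
import json

# Hypothetical per-order artifacts a testable pipeline might archive at each
# stage. Here the OCR misread a quantity ("l0") and the parser turned it
# into 0 -- exactly the kind of thing we could never see from the UI.
FAKE_ARCHIVE = {
    ("A1234", "ocr"):      {"customer": "SMITH, J", "quantity": "l0"},
    ("A1234", "api_call"): {"customer": "SMITH, J", "quantity": 0},
    ("A1234", "db_row"):   {"customer": "SMITH, J", "quantity": 0},
}

def trace_order(order_id: str) -> None:
    """Print what an order looked like at each stage, side by side, so a
    tester can spot where the data went wrong."""
    for stage in ("ocr", "api_call", "db_row"):
        print(f"--- {stage} ---")
        print(json.dumps(FAKE_ARCHIVE[(order_id, stage)], indent=2))

trace_order("A1234")
```

With even that much visibility, “the scanner misread it” versus “the parser mangled a good scan” stops being a day of guesswork.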
Worse, the system was not being developed in-house. This meant that our interaction with the programmers was essentially non-existent. We’d toss our bugs over a high wall and they’d toss their fixes back over it. The team was actively discouraged from asking the programmers questions directly (the programmers were in another state, so face-to-face communication was obviously not possible, but even emails and phone calls were verboten).
One thing I’ve learned over the years is that it’s never a good idea to separate programmers and testers. It fosters an us-versus-them mentality instead of a healthy one where the testers are viewed as helping the programmers. It lengthens the lifespan of bugs, since a programmer doesn’t have the option to walk over to a tester and ask to be shown the issue. And it deprives testers of the opportunity to learn important information about how the system is built–stuff that’s not at all apparent from the user interface, like code shared among several otherwise unrelated features. A change to one area could break things elsewhere, and without that knowledge the testers might never think to look there.
So, when I hear “We don’t do that” it strikes me as a cop out. It’s an abdication of responsibility. A non sequitur.