The Myths of Testing

I find it interesting how themes can pop up and attract attention on social media. When I see a theme that is misguided I’ll sometimes chip in; other times I’ll just shrug. A theme I see pop up reasonably often is the one focused on “testing myths”. So, what is a myth?

If you describe a belief or explanation as a myth, you mean that many people believe it but it is actually untrue.

If I extend that definition to software testing then a “testing myth” must be a commonly held belief about software testing, or testers, that is factually untrue.

Are there myths in software testing? I contend that there are people who mistakenly hold onto false beliefs about testing. “Testing prevents defects”, “automation finds bugs” and “testing is all about bugs”, to mention just a few, certainly fall into the myth zone. But here’s the rub: the examples I just proposed have little meaning or weight on their own. Without me digging in and providing evidence for my reasoning, my example myths can be easily dismissed. Hitchens’ Razor comes to mind:

According to Christopher Hitchens (hence Hitchens’ Razor), “what can be asserted without evidence, can also be dismissed without evidence.”

Perhaps we should have a look at the most recent series of testing myths I found circulating on social media. None were posted with any supporting evidence or reasoning. Note that the following list of testing myths has been copied without edits.

  • You are a tester because you are not technical enough
  • Testers are breaking the product
  • Testing is for finding bugs
  • Everything can be automated
  • Testing can be estimated
  • Quality means testing
  • Anyone can test
  • Testers won’t become project managers, 
  • Testers should follow developer’s work schedule
  • Testers should not get equal time to test than developers needs to develop

Are these claims myths? Are they beliefs commonly or widely held by people but untrue? It’s hard to say, because when a list, or even a single claim, is posted with no supporting evidence or commentary, it’s hard to judge. I can form my own opinion on each of the items listed, but isn’t that the role of the tester posting the myths? If a tester is going to post claims, I’d expect that tester to also provide some level of substantiation: some evidence, some thoughtful analysis, stories of experience and observation.

Testing, at least in my mind, is, amongst other things, an evidence-based pursuit. I can’t imagine being seen as credible if my work is founded on unsubstantiated opinion. “A desirable outcome is … because it is, and clearly you see the logic of that” is not a rational or compelling argument and can be easily dismissed. “A desirable outcome, based on the following oracle and using the consistency heuristic, would be to consider …” is far more credible and engaging. It is thoughtful and evidence-based.

So, what am I really getting at here? 

  • If you want to contribute to the discussion that is generated in the name of software testing, do it with thought and reason. Present a position that is underpinned by reasoning. 
  • Tell a story, present research data, use relevant statistics, relate your previous experiences and observations. 

Make a difference: explain your idea or thought in a way that demonstrates the critical thinking, analysis and reasoning that many testers seem to think just comes bundled with the job title (hint – it doesn’t; these skills require work, constant practice and refinement). Am I a little snarky? Yeah, a little, but I’m also tired of seeing a tsunami of what is effectively clickbait passing around ideas that range from bad to flat-out wrong. That there are plenty of people who respond to these posts with “helpful”, “useful”, “very insightful” and similar is more than a tad worrying. There’s a lot of uninformed and misinformed representation going on. That’s neither good nor helpful for those who are working hard to elevate the credibility of excellent testing.

A tester whose work I have admired and followed for a couple of decades once noted in conversation “the testing world is awash with bullshit and we do not have enough brooms to clean it up”.

Brandolini’s law (also called the bullshit asymmetry principle), is the adage that “the amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it”.

How about we turn our focus from getting “likes” to making reasoned arguments and helping testing advance through useful knowledge?

Those of you who have made it to the last paragraph might be wondering why I made assertions about testing myths and then moved on. Not in the spirit of the post, right? That’s a fair point, so let’s fix that. Click on the links below for a few samples where I tackle myths.

“Testing prevents defects” – Testing and Prevention – The Illusion

“Automation finds bugs” – Automation – what bugs me

“Testing is all about bugs” – Testing – Not Just Bugs

Thanks to Lee Hawkins for his review and feedback (and never ending encouragement).

Testing and Prevention – The Illusion

In what is my first blog post for quite a while I’m going to look at the notion that “testers prevent defects”. I see this claim made by non-testers talking about testing (yes, “agile”, I’m looking at you, as well as your coaches), by professional testers and by test consultancies. It must be incredibly enticing to claim that, as a tester, you prevent an unwanted outcome. That’s powerful, right? As a marketing tool, either for a company or a person, it’s a bold selling point.

The “prevent” claim raises a significant question for me. Is the statement credible and representative of what testing and testers can provide?

Let’s start by looking at the meaning of “prevent”. 

From the Collins English Dictionary:

To prevent something means to ensure that it does not happen.

And from the Merriam-Webster Dictionary:

To keep from happening or existing

The use of “ensure” (to make sure or certain) within Collins’ definition is interesting. If you care to look at other dictionaries you’ll find that “prevent” has definitions consistent with the above selections.

Broadly speaking software defects have two states when we consider observation:

  1. they exist and have been observed
  2. they exist but have not been observed

Within state 1 we know there is an issue because it has been observed. We have a record of it happening. Perhaps we have identified the specific conditions required to reproduce the problem and are able to analyse the issue. We might even agree that the outcomes are undesirable (a threat to value) and fix the defect. Of course we might also make a decision to not make any changes (a different topic).

State 2 is the “great unknown”. Issues are sitting in the product just waiting for somebody to stumble across them. To the extent that they remain “hidden” and do not threaten value, these are often ignored. Until they change into state 1.

For the purposes of this discussion let’s move on from state 1. Clearly there was no prevention because the problem has been observed (either pre- or post-release). 

Before I venture further let’s consider a few places we might observe issues within software development while wearing a testing hat:

  1. Documentation – specifications, help guides, product claims
  2. Discussion – ideas, thoughts, queries about the software, specific to a set of changes or the product more generally
  3. Software – investigation of the product either in part or whole

As a tester, I:

  • engage with issues by helping to solve them with other people. The issue might be that we need new, additional functionality to keep customers happy or that some part of current functionality is not working in desirable ways (ways that threaten value)
  • provide evidence-based observations of what I have done during testing. What I have observed, my view of risks in the software. I’m likely to comment on things such as (but not limited to) ease of use, consistency in the application, issues I have found and how much of a threat they might be. My communications around testing can cover a lot of different considerations. Key to these observations, the “consistent thread”, is that I can back up my observations with evidence. If I’m asked to provide details related to my testing my response will not be “just feels like it” or similar. It will be backed by specific evidence.

If I claim that I “prevent issues”, how do I provide evidence that I prevented a thing that never existed? If my Manager (or anybody else) asks me to evidence the “issues I prevented” how would I do this? At best I could point to a trend of declining issues in production (which is an excellent outcome) but correlation does not imply causation. I get that it’s nice to think in this way but I actually want to see the link because that’s important feedback in improvement loops. How do you know you are preventing anything? Even small software companies have a myriad of changes happening in parallel. So which ones are working well? That’s a matter of evidence linking changes to outcomes. Good luck with that when you have no evidence (remember that the issue never existed).

It seems to me that a re-frame is in order. Let’s do that by revisiting those places I listed earlier where we might find issues: documentation, discussion, software.

You’re reading through a specification and you find an error in a statement regarding functionality. To fix this you consult with the specification author and a change that corrects the problem is made. Cool, you prevented an issue… except you didn’t. What you did is find something in the document that does not make sense to you. You detected a signal that there might be an issue here. When you discuss this with the document author, and they concur, they will update the document to add clarity. But, and this is important, they may not agree and the document might not be changed. Regardless of whether the change is made, this is early detection, not prevention.

You’re in a project group discussion. The basic information flows of the project are being mapped out along with how data will be entered and interacted with by your customers. You notice a large inconsistency with a similar feature elsewhere in the software. This inconsistency would reduce usability and increase confusion, so you point it out. Awesome, you prevented an issue… except you didn’t. Again, you detected a signal that there might be an issue, you raised this with your colleagues, and further discussion and investigation is likely to follow. Perhaps this inconsistency, while not initially known, is now considered to be an important aspect of the project and will be retained. Again, this is early detection of an issue, not prevention.

You’re running a test session pairing with a Developer. During your exploring you observe that, for a given set of values, you receive different results each time you enter the values. Incredible, you prevented an issue… except in this scenario that’s not a claim you’re likely to make. Why? It’s really no different to my first two examples. The incorrect output is a signal that there is an issue. We have helped identify that further investigation is required so we can reconcile actual behaviour with desired behaviour.

When I see claims of testers, or testing, preventing bugs, it seems to me that testing is being set up for failure by claiming goals and outcomes it can never own. It is a confusion of what powers testers and testing possess. If I were a surgeon and you were a doctor heading to the South Pole as part of a team, it might be a requirement that your appendix be removed before departure. As the surgeon I could, in this context, assert quite positively that, by removing your appendix, I have prevented you suffering an episode of appendicitis. Testing isn’t like that.

Testing is like this. You’re a passenger in a car, driving down a road that has a variety of speed limit signs. The car has a speedometer which you can see and you glance at it occasionally to check the car’s speed. Does the speedometer reading prevent the driver from driving over the speed limit (which is an issue)? It doesn’t. The speedometer provides you with a signal which you can either ignore or act upon. You might say to the driver “gee the speed limits change a lot around here, we just moved from an 80 km/h zone and into a 60 km/h zone”. The driver can choose to listen to you or ignore you. They might increase speed, decrease speed or stay at the same speed. Changing speed requires a direct input on the accelerator and it is the sole responsibility of the driver to make that adjustment.

As a tester you have a focus on the speedometer (and other conditions that are part of the context such as the weather, the road conditions, etc.). You are providing feedback, perhaps even encouraging slowing the car to a more appropriate speed. You are an observer of what is happening, not the driver who has control and can make changes based on your feedback. You are providing feedback that can be acted upon, but you’re not the person making the adjustments.

As I noted at the opening of this post, I’m very unclear why people really want to make the claim that they, or testing, “prevent issues”. Not only is that claim beyond the remit of testers and testing, it is damaging to testing. It denies the value and usefulness of detection, something that good testers bring to the table with each test assignment and discussion. My advice is to use your detection skills, scrutinise, explore, question, propose ideas, challenge and advocate. When you’ve done these things you can actually demonstrate how you have influenced product quality by talking about all those issues you have brought to light. That feels a lot like being an advocate for better quality in an authentic way.

A big thank you to Lee Hawkins (@therockertester) for his endless patience and quality feedback.

The thrill of testing

I gave myself 60 minutes to write a blog about my day and the excitement that testing brings me. For better or worse, here it is.

I’ve written in previous posts and articles that I find testing enormously satisfying and challenging. Every challenge is unique in some ways but often similar in others. Testing is one big jigsaw puzzle, a sudoku, a cryptic crossword, or whatever puzzle floats your boat. You never quite know what skills you will need to pull from your toolkit, what roadblocks you might hit, and, unlike that jigsaw puzzle, there isn’t always a picture of what the completed product (or feature) looks like (and there isn’t always a single “right” answer).

Today I had one of those days that reminds me how exciting and engaging testing really is. One of those days where I got to hypothesise, explore, test, take on board feedback from my actions, adjust, try again. Today I was a detective. I was given a crime scene but there was no weapon, no clues as to how the crime was committed, just the aftermath.

Today I was given a SQL query and asked to execute it. The query revealed a number of database records that had a value in a table column that should not have been there. The number of records impacted by the “crime” was small compared to the total number of records written to the table. This in itself made the problem more intriguing – why so few? My mission: determine the cause of the problem so it can be replicated and then resolved.

My first reaction was basically “wow, where do I go with this?”. Time to assess. What do I know at the moment? I know there are a bunch of incorrect records. What don’t I know? I was unsure of the relationship of one of the table’s columns to our problem. OK, let’s interview the reporting witness and query that relationship. With that done, the problem was actually quite clear. That helps, but it’s not completing the mission. Now the fun really starts.

As I now know the type of transaction that creates the data on this table, I created a few of those transactions and completed things I imagined the customer would or might do. Hopes were high but my hypotheses didn’t hold. That’s bad, right? Actually, it’s not. I observed some relationships between the data and the database table that I didn’t previously have knowledge of – that’s useful. I also eliminated some potential suspects.

What to do next? We keep a bunch of full copies of production data (fully masked). What might these tell me? I started querying a database copy that was around two months old. It had fewer instances of the problem, but there were still a number of them to consider, and I was really hoping to narrow the field if possible. The data volume did allow me to observe that some potential key values differed considerably – no clear, discernible pattern. Dismissed totally? No – noted for later if required – but put to one side as it felt more like a distraction, a red herring. The search through the data via SQL queries continued until I queried a copy of production data that was four days older than our current production copy. Then – you beauty!!! – I had only one fewer problem record in the four-day-old database than in our latest production copy. I had stumbled across an occurrence of the problem within a four-day window. Not only that, I could see the problem when I compared the two records within each database’s GUI.
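The snapshot-comparison step can be sketched in Python using an in-memory SQLite stand-in. Everything here – the table, the column, the “bad” value – is invented for illustration; the real system’s schema was, of course, different:

```python
import sqlite3

def bad_record_count(conn):
    # Count rows whose status column holds a value it never should.
    # "transactions" and "INVALID" are hypothetical names.
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM transactions WHERE status = 'INVALID'"
    ).fetchone()
    return count

def make_snapshot(rows):
    # Build an in-memory stand-in for a masked production snapshot.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE transactions (id INTEGER, status TEXT)")
    conn.executemany("INSERT INTO transactions VALUES (?, ?)", rows)
    return conn

# Two hypothetical snapshots: the older one has one fewer bad record,
# so the defect must have struck (at least once) inside that window.
older = make_snapshot([(1, "OK"), (2, "INVALID"), (3, "OK")])
newer = make_snapshot([(1, "OK"), (2, "INVALID"), (3, "OK"), (4, "INVALID")])

delta = bad_record_count(newer) - bad_record_count(older)
print(delta)  # 1 -> one new occurrence landed in this window
```

The point of the sketch is the narrowing tactic, not the SQL itself: each snapshot pair either contains a new occurrence or it doesn’t, which bounds when the “crime” was committed.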

Soooo, we have logs, let’s use them. I eagerly plugged in a reference Id to help me find the suspect transactions. Drats, there was nothing there that was even vaguely suspicious and nothing that had any clear alignment with the problem data state. Thinking, mulling, reflecting for a moment – boom. There are effectively two parties to this transaction. Doh!! I ran the same log search, this time on the other reference, and… boom, I could see the type of transaction I had initially suspected. I could see that it happened within the right timeframe, and I could see that the transaction could not have been updated in the way I had hypothesised.

I’d love to close this story out with a classic “then I solved it”, but alas, that part of the story awaits my arrival at work tomorrow morning. So, yeah, the mission is not yet complete, but I’m confident it will be tomorrow based on what I learnt today. As important as it is to find the cause and then a solution (a proper conclusion), I wanted to write this for no other reason than to celebrate how exciting testing is, and to remind those who might need a reminder. There is a “rush” in going from “what the hell” to “OK, this is what’s happening” to helping derive a solution that improves your product. There were no test cases, there was minimal background, there was just evidence of “something” being wrong. Constructing a picture of that “something” is amazing fun and satisfying beyond belief. This is one reason why, after almost 20 years of testing, I keep coming back for more and celebrating the opportunities.

umpires are testers too

During my time on this planet I’ve engaged in both team and individual sports. On the team side there was, predominantly, cricket and basketball and on the solo side, squash. In competition squash (not at elite levels) you play in a group of four, so there is some aspect of team, but during the actual games, you’re on court with just your opponent. 

Six years ago Father Time reminded me that things might not work as well when you’re older. This happened in the form of a torn Achilles tendon during a cricket match. I had planned to get back to cricket the next season but decided not to (after months of hitting the gym!!). To stay connected with the game I took up cricket umpiring and it is this that I’m going to write about. When I umpire, I test, I help manage people and I keep my stakeholders (the players and my umpiring partner) up to date with information that is relevant to the game’s progress and conduct.

My requirements, as a cricket umpire, come from two places. The first is the Laws of Cricket (2017 Code), which contains 42 Laws and a preamble on “The Spirit of Cricket”. There is a lot going on in those 42 Laws and plenty of technical complexity. The second is the competition’s by-laws – rules that are specific to the competition. These rules might override specific rules in the Laws of Cricket or be in addition to those Laws. Where the competition by-laws are silent, the Laws of Cricket must be applied.

Requirements (or in this instance Laws) cover the things we can think of that we believe are important. In any given set of requirements much is left unsaid, so we have holes that are filled by interpretation (and don’t assume all interpretations are equal) until such time as it is recognised there is a problem and more requirements should be added or existing ones modified. Here’s one of the glorious things about humans, especially sportspeople: those changes, intended to add certainty, will open up further avenues of grey and uncertainty for some.

Let’s also consider some aspects of the context of a game of cricket. The state of the wicket changes from game to game. Some can be bouncy, some can have the ball stay quite low, others might feature quite a bit of variable bounce. Some bowlers, for a variety of reasons, get the ball to bounce noticeably; others produce not so much bounce but more skid. There’s a whole spectrum of how the ball will behave based on the bowler, their bowling action, the pitch conditions and the like. The grounds on which the games are played are all different. The players are all different: different abilities, different personalities, different levels of maturity and attitude. Many are on the field for the fun of the competition, while a few appear to be playing for some enormous cash jackpot not apparent to anybody else, such is their intensity. The team Captains’ demeanour and willingness to discuss and collaborate, to work with the umpires to run a smooth and orderly game, is not a given. This is all important to me as an umpire because it helps me with decision making and communication. It sometimes allows me to signal to Captains, before the game has started, that certain behaviours are either desired or not acceptable, and to remind them of their role in achieving these outcomes.

So I have Laws and now I just need to apply them, because the umpire has the final say on matters. Great theory but this just doesn’t work “straight out of the box”. Like testing, knowing the theory simply is not enough, and while the theory might hold from game to game, the context within any given game can change often and swiftly. I’ve had games where a team’s behaviour has switched from nice to nasty within moments. I umpired a game where one player grabbed an opposition player by the throat and threatened others with his bat (he found himself with 2 years of “spare time” courtesy of the Tribunal hearing). The good thing is that these occurrences, as horrible as they are, become learning experiences. I thought I was a reasonably good communicator but these instances made me realise that communication has to be specific and timely and I can’t assume that the Captains are seeing things the same way I am. Captains are expected to control their teams during a game. I’ve learned that early communication of behaviour I find unacceptable helps enormously in setting a “tone”. 

As an umpire I “fail” often – well, at least you’d think so if you pay attention to the player feedback on some of my decisions. In the competition I am umpiring I’ll encounter some players bowling a stitched leather 156 g cricket ball at around 130 km/h at a batsman who is just a tad over 20 metres away. A lot of bowlers will be slower, but the time between the ball being released and reaching the batsman is going to be in the range of 0.50 to 0.75 seconds. From the time the bowler commences his run-up to the time the cricket ball is considered dead (i.e., in the umpire’s opinion play has stopped for that delivery) I am running tests and making decisions based on observations. Did the bowler bowl a fair delivery? Did the batsman hit the ball? Was it caught by a fieldsman? Did the batsman do anything illegal? Was the batsman hit on the pads (protective gear worn by a batsman on his legs) without the ball being touched by the bat? (And this is just a small selection of tests.) This last instance is what is known as the Leg Before Wicket (LBW) rule and it is laid out in all its glory as Law 36. The TL;DR here is: if a batsman, in the opinion of the umpire, would have been given out bowled, but the batsman’s pads stopped the ball hitting the wicket, then the batsman is given out LBW by the umpire. There’s a bunch of caveats that apply to this dismissal, and they all need to “line up” for an LBW decision, but even without those, this is challenging. If a batsman is bowled everybody can see the wicket has been broken; if a batsman is out caught we can witness the catch. With LBW the umpire must run a series of tests and checks that, in the end, allow the umpire to form an opinion that something that didn’t happen (the ball hitting the wicket) would have happened. See the potential for discussion and disagreement with this mode of dismissal?
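The chain of checks behind an LBW decision can be sketched as a function. This is a deliberately simplified stand-in, not a faithful encoding of Law 36 – the real Law carries more conditions and far more nuance than these few flags:

```python
def lbw_decision(pitched_outside_leg: bool,
                 hit_bat_first: bool,
                 impact_in_line: bool,
                 offered_shot: bool,
                 would_hit_stumps: bool) -> str:
    # A grossly simplified sketch of the checks an umpire runs
    # before giving a batsman out LBW. Illustration only.
    if hit_bat_first:
        return "not out"   # bat touched the ball before the pad
    if pitched_outside_leg:
        return "not out"   # ball pitched outside leg stump
    if not impact_in_line and offered_shot:
        return "not out"   # impact outside off stump while playing a shot
    if would_hit_stumps:
        return "out"       # the ball would have gone on to hit the wicket
    return "not out"

# A delivery that pitches in line, strikes the pad in line with the
# stumps and would have hit them:
print(lbw_decision(False, False, True, True, True))   # out
# Same delivery, but the batsman got bat on ball first:
print(lbw_decision(False, True, True, True, True))    # not out
```

Even reduced to this toy form, notice that the final branch is a judgement about a counterfactual – “would the ball have hit the stumps?” – which is exactly where the discussion and disagreement live.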

So how is this relevant to my day job as a tester? We can start with context. Understanding that context is not consistent and that people are a very important part of that context is key. I can adjust to all sorts of changes in physical playing conditions (the wicket, the field, the weather, etc) but if I ignore the people (the players, my umpire partner) I’m going to umpire badly and more than likely cause significant issues that adversely impact outcomes. Similarly if I ignore the physical changes and umpire each game as if it was played in the same conditions each weekend, I’ll umpire badly and adversely impact the game.

I make mistakes when umpiring (so do the players when competing – different story). I also make good decisions. In a day of cricket I will make a lot of decisions, many of which will go unnoticed by the players. The decisions that get the most attention, for reasonably obvious reasons, are those that require me to decide a batsman is “out” or “not out”. I’ve given batsmen out and then realised I’ve made a bad call. I’ve also given batsmen “not out” and then realised I had probably got it wrong. The key here, we are told by our umpiring advisors and coaches, is to not dwell on the error. You’ve made a decision, it’s in the past, focus on the next ball to be bowled. Of course I want to learn from the errors and spend time reflecting, but I’ve learnt that is for later, not during the game. The same applies at work. Accept the error, reflect when appropriate, learn, move on. Letting one mistake be the cause of a series of errors doesn’t make a lot of sense.

Communication is so important. I often get asked why I made or didn’t make a decision. When I get asked I explain clearly and calmly based on what I observed. If I see something in the game that I think needs to be communicated I’ll talk to my umpiring partner. Together we’ll agree on a strategy, then we’ll communicate that as a team, as early as possible, citing specific examples and the outcome we would like. We will also take on board any feedback from the Captain so we can reach a common goal (Captains might not always agree, but that’s not the point of the chat). I use the same approach at work, although it changes a little because at work I’m not an umpire. At work I’m a tester, and in this capacity I am not a gatekeeper. This is a really important distinction: I don’t provide final, binding decisions, I influence with evidence-based observations. The same principles around communication apply though. Communicate at the earliest useful moment, be specific, cite evidence and, when appropriate, seek a better way forward.

As a cricket umpire, “team work” has layers of meaning. Taking the field with another umpire requires a consistent stream of communication, some of it verbal and some of it hand signals, but all of it designed to keep a close bond between us and enhance decision consistency within a game. Before a game I will discuss various rule interpretations, local conditions and anything we know of importance about the competing teams. This is an effort to reduce variation in our approaches and decisions. During a game we will reinforce good decisions (a real spirit lifter) and note things we might need to keep watch for (perhaps someone getting close to infringing a rule). Umpires also need to work with the team Captains while staying impartial. Impartiality is really important, and umpires need to constantly keep in mind that they are there to help a contest progress, not to influence its outcome. I tend not to talk to players much on the field unless they commence the discussion, but I also need to remember that I’m out there to have fun and enjoy the experience. At work I talk more, a lot more, but teamwork remains an important aspect for me. If I hear a discussion that I can join and add some value to through my perspective, I’ll join in. If I can help somebody who is struggling or stuck, I’ll happily do that at work, but on the field that would remove my impartiality (or could be perceived to) and is something I will not do. As on the cricket field, so too at work: I’m there to enjoy the experience and have fun while improving how I go about my job.

In closing, this blog has been on my mind for about four years and has gone through a number of attempts. Finally it is written, hopefully in a way that demonstrates that testing crosses over into other aspects of life even when we don’t consciously think of testing. My sports background has influenced many things I do; testing has influenced how I look at the things I do and interact with them.

A big thank you to Lee Hawkins for his review and feedback.

some Economics of software

This post considers some of the problems we face when creating software and then explores if reframing those issues, by using language from another domain, might lead us to new perspectives or new approaches to solving problems.

It might seem strange for a tester to be writing about economics. Moreover, this post isn’t even specifically about testing but focused on aspects of software development. I have a degree in Economics and Finance and over 20 years of experience working in the finance industry in various roles. Misuse of economics terms in software development annoys me. Return on Investment (ROI) is probably a term you have seen thrown around in testing discussions, predominantly in discussions about automation in testing, and often by proponents of setting up a debate in terms of humans versus computers. This is not the topic on which I plan to write, but I offer it as a means to demonstrate the ways in which an economics/finance term has been dragged into testing and savagely abused. There are terms we can draw from economics and finance and use in good ways, rather than forcing them to mean things they were never intended to represent.

You might be asking “what is Economics?”. That’s a good question. Rather than re-inventing the wheel I’m going to borrow from Investopedia; it’s pretty much what every introductory text to Economics states:

Economics is a social science concerned with the production, distribution, and consumption of goods and services. It studies how individuals, businesses, governments, and nations make choices about how to allocate resources.

Personally, I would expand the above definition to include the word scarce, specifically “scarce resources.” Why “scarce?” Because everything is finite, meaning that there is an end point in supply and that supply needs to be managed.

Finance, per se, is not Economics and the reverse is also true. Economics does however help us to understand, model and explain much of the behaviour we see within Finance. There are many terms that exist within both disciplines and so if I refer to a finance or an economics term I’m using it interchangeably. I could be pedantic about this but it’s not the point of this post.

Economics can be broadly divided into two categories – macroeconomics and microeconomics. Roughly speaking, macroeconomics is “in the large” – countries, states and the like. Microeconomics is “in the small” – think business impacts and decisions. These are (very) rough guides, ignoring a lot of important context. If you would like to know more I’d encourage you to dive into some reading. This blog is going to talk at the micro level.

If I use the terms “supply” and “demand” my guess is that you could understand them in ways that are relevant to economics discussions. Heck, you probably talk about these concepts often without even realising it. “Wow, fruit is getting expensive because of the drought impact” (supply related), or if you lived in Melbourne during lockdown you might have stated “petrol is really cheap with people not driving anywhere” (demand related). There are many ways I could twist these examples to show different outcomes but the above is enough for now.

Think about your workplace and see if you can identify its most scarce resource. In my world it’s time. There is limited time available in any given day, week or sprint, however you wish to divide it. Now think about the demand side of time. It’s near limitless, right? How many times have you been in discussions at work and actually considered how you are balancing the commodity of time? I have no doubt some people reading this are also wondering about money. Of course money is important, and it too is subject to supply and demand forces, but if you manage your money well and your time poorly the cash will eventually run dry. The sweet spot is the “equilibrium point” – the point where the demand and supply of commodities are balanced, with no excess demand or excess supply. How many companies discuss time in terms of equilibrium, rather than playing some strange game where they believe more time can be produced?

Opportunity cost is quite simple. I have to make choices; I can’t have everything. If I decide to buy a new luxury car, I’m going to have to forsake that around-the-world trip I would like to do. In short, I can have A or B, but not both. The opportunity cost of acquiring A is B, and vice versa. What if we tried rephrasing some of our project decisions to be explicitly framed within “opportunity cost”? When we do this we need to understand value – both to ourselves and to our customers. Given our major constraints – time and money – we need to make choices about what we deliver. What is of highest value to our customers? How much time do we have available? We could do projects A, B and C, but the opportunity cost is projects D, E and F. Damn it, we really need F. OK, we can do F, but the opportunity cost increases because we can no longer complete projects B and C. But A is really important – we need that. Sound familiar? I’ll bet you’ve been involved in conversations just like it. Within these constraints we have a spectrum of possibilities; we have options. Those options range from doing none of the projects to doing all of them (the time constraint should tell us that “do them all” is probably a really bad idea). Each of those choices also has an opportunity cost attached (how valuable is your reputation, what is it worth?). Value figures in our opportunity cost ponderings. The smart play is to figure out what we can deliver that is valuable and within our constraints. We still have an opportunity cost, but that’s (probably) unavoidable.
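The project trade-off above can be sketched as a toy calculation. The project names, values and time costs below are entirely made up for illustration; the point is only that a time budget forces a choice, and whatever we choose, the value of what we forgo is the opportunity cost:

```python
from itertools import combinations

# Hypothetical projects: name -> (value to customers, time cost in weeks)
projects = {"A": (8, 3), "B": (5, 2), "C": (4, 2),
            "D": (6, 4), "E": (3, 1), "F": (9, 5)}
time_budget = 8  # weeks available: the scarce resource

# Enumerate every subset of projects that fits in the budget,
# keeping the one that delivers the most value
best_value, best_choice = 0, ()
for r in range(len(projects) + 1):
    for combo in combinations(projects, r):
        time = sum(projects[p][1] for p in combo)
        value = sum(projects[p][0] for p in combo)
        if time <= time_budget and value > best_value:
            best_value, best_choice = value, combo

# Whatever we pick, the opportunity cost is the value of what we forgo
forgone = sum(v for p, (v, _) in projects.items() if p not in best_choice)
print(sorted(best_choice), best_value, forgone)  # → ['A', 'B', 'C', 'E'] 20 15
```

Notice that the "best" choice here skips F entirely, even though F is the single most valuable project; that is exactly the kind of counter-intuitive outcome these conversations tend to miss.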

Let’s consider risk. If you work in software development, risk is a term you’ll be very familiar with. It’s also a finance term. There is a rule in finance that says “the greater the return, the higher the risk”. For each increment up in risk, investors expect higher returns to compensate for that risk. Likewise, at lower levels of risk the returns are generally lower, and returns will ratchet down as risk reduces. Lack of risk equals safety; you don’t really get rewarded (with high returns) by playing it safe. There’s another important aspect to this – sensitivity to risk. If you have a big bunch of money (Bill Gates style big bunch) then you might think nothing of investing a million dollars in an investment and hoping you get twenty million back in twelve months. If the investment fails, it’s probably no big deal. This is a low sensitivity to risk. If a million dollars represents all your assets plus some borrowings, your sensitivity to the risk of losing all or some of your wealth is likely to be quite (extremely) high (you’d be risk averse in this context).

How often at work do we talk about risk in terms of returns, as well as our sensitivity to risk? When we create a project we have multiple risks; they change during the project, they change after delivery, and they change during post-delivery maintenance. Would I be too far wrong if I suggested many discussions around risk are confined to a matrix in which we, almost randomly, assign values against likelihood and impact, while spending little, if any, time contemplating post-delivery risk? If we consider returns in terms of business satisfaction through the delivery of value, I suggest that replacing our “risk matrix monologue” with discussions about how risks relate to returns would produce a better understanding of the important risks. What if we inspected each of these using the lens of “sensitivity”? Does that help us understand which risks might really be important to both the company and its customers?

Not all risks are equal (even the matrix approach tells us this). If we can recognise this, then we can start to stratify those risks using sensitivity. We can accept that some risks, even if realised, may result in very little inconvenience for our company or our customers. As an example, we decide to change the font used for field labels in the GUI and we release quickly to production. Our customers, post release, report that not all fields have been updated to the new font. Not great, sure, but we could live with this. In fact, we could be so insensitive to the issue that we don’t even prioritise fixing the inconsistency. Now consider a major release to our product, a significant upgrade. We have assessed that the upgrade provides considerable value to our customers, and we have advertised it aggressively. We operate in a competitor-rich environment, and if the upgrade fails in any of the key functionalities our customers use, or any of the new functions, the company could suffer considerable customer and reputation loss. In this scenario we would likely be much more sensitive to the impacts of risk, and we would be very diligent in our approach to finding and mitigating risk. We might even decide to release these changes in a series of slices, so we can avoid a risky “big bang” release and gather useful feedback on each small release.

What I have outlined in this post is very high level, but hopefully enough for you to understand that reframing problems through the language of other disciplines can help us gather new insights. I have selected a couple of terms that seemed particularly useful; I could, and might, pick a few more in a future post. Using the language of economics and finance might not appeal to you; it might not feel comfortable to use. You could instead choose to find inspiration in another discipline whose language is comfortable for you. And if you choose not to reframe in the ways I’ve discussed, then in making that choice you will at least have thought about how you currently frame problems and why you are happy with what you are doing. Either way, your self reflection is a growth tool and, if this post contributes to your growth, I can’t ask for more.

Thanks to Lee Hawkins (@therockertester), Lisa Crispin (@lisacrispin) and Janet Gregory (@janetgregoryca) for their reviews and very helpful feedback.

Bug Reporting can be simple

I was recently asked if I could talk to some of my work colleagues about bugs. More specifically, I was asked if I could explain what a good bug report looks like and what information it might contain. The people in focus here are the Halaxy Service team. I admire people who work in customer support; it can be a tough gig, and there is a fair amount of pressure when you get difficult customers or situations. I’ve been in this role (with another company), and it can be demanding. For all that, the Halaxy Service people are something special. They have a rapport with Halaxy customers that is beyond anything else I’ve seen or heard.

I had a 10 to 15 minute slot for my talk. Not a lot of time to really dig in, and the people I was talking to were not specialist testers. My goal was to deliver some key messages in easy-to-understand terms that could be taken away and used. I decided that going over the difference between fault, error and failure wouldn’t form part of the discussion. Similarly, heuristics as a concept were not explained, but I did briefly talk about consistency as a consideration. What we wanted to get to was a place where Service team people could raise bugs with sufficient detail to establish context and allow further exploration and solutioning by others in development (this includes everyone in the development team, not just developers).

My framework became three key considerations when logging a bug based on discussion with customers – three simple questions to consider when logging bug details. What happened? How did it happen? What should have happened? Let’s consider each of these in turn.

What happened – in short, what went wrong? Something happened that our customer didn’t desire. This is a good time to capture whether the customer had ever attempted the function that “misfired” before, or whether this was a first attempt. At this point we know that something perceived as undesirable has impacted a customer, and we have a picture of the “what”, but that’s not enough. Like a car crash, we could simply say “the car hit the tree”. The problem is that this tells us too little about how it happened, which is exactly what we need in order to prevent future occurrences.

How did it happen – in this part of the process we really want to get an appreciation of what was going on when outcomes went in an undesired direction. Browser information is useful (especially if it’s an older browser). We could ask how they were interacting with the software, about the specific data, or anything they can recall about what they had done to that point. There’s a lot of information that could be relevant here, depending on the context of the problem. The “how” is the information that is going to help us see the “what”.

What should have happened – it’s helpful to know not only what the problem is but why our customer believes it is a problem. This has several purposes. Firstly, it gives us an insight into what our customer desires. This could be as simple as “I’d like it to run like it did when I did the same thing two days ago”. It could also be a discussion along the lines of “I want X and I’m getting Y”. In both examples, whether the customer’s feedback is based on an unintended change to an outcome or a perceived one (our customer is mistaken, or supporting customer documentation is ambiguous), we have an insight into how our customer views one part of our system. This is important for investigation and solution purposes, as well as for managing customer expectations should we later need to explain why the difference between “desired” and “actual” represents how the functionality should execute at a business level.

On reflection, after my short presentation, it occurred to me that I could have included a fourth point – What’s the impact? This is useful information to help us determine how quickly we want to deal with this issue and how we deal with it. I know that when something with serious impact comes through our Service team it gets communicated quickly and the development team (again this includes the testers) swarm around the problem and potential solutions. However, it’s useful to capture the business impact as part of the bug detail regardless of whether the impact is large, small or somewhere in between.
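The four questions above can be captured as a lightweight record so nothing gets lost between the Service team and development. This is only a sketch; the field names and the sample report are illustrative, not an actual Halaxy template:

```python
from dataclasses import dataclass

@dataclass
class BugReport:
    """One record per customer-reported problem, built around the four questions."""
    what_happened: str      # What went wrong, and was this a first attempt?
    how_it_happened: str    # Browser, data, steps: anything the customer recalls
    expected_outcome: str   # What the customer believes should have happened
    impact: str             # How badly this affects the customer's work

# A hypothetical example of a filled-in report
report = BugReport(
    what_happened="Invoice total showed $0.00 after adding a line item (first attempt)",
    how_it_happened="Older browser, existing patient record, item added via keyboard shortcut",
    expected_outcome="Total should equal the sum of line items, as it did last week",
    impact="Customer cannot bill patients until this is resolved",
)
print(report.impact)
```

Even a simple structure like this prompts whoever logs the bug to ask all four questions, rather than stopping at "the car hit the tree".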

So, that’s my story. No big technical terms, no diving into a glossary and insisting on specific terms but, hopefully, keeping it relevant and useful through simplicity. It hasn’t been long enough since the event to see if my efforts have helped people or how I could have been more effective. However, I thought this small story, about keeping communication simple, was worth sharing. This is my simple story.

Thank you to Janet Gregory and Lee Hawkins for their review of this blog and feedback.

Automation – what bugs me

When did we start to believe that automation in testing is intelligent? When did we start to believe that we can automate away human thinking within software testing? When did testers, many of whom rally to a cry of “we are misunderstood and not appreciated”, decide it would be a good idea to promote the notion that automation in testing has super powers? Recent interactions on LinkedIn have had me pondering those questions.

You might notice that I use the term “automation in testing”. I use this term because it has resonated with me since I first heard it from Richard Bradshaw (Twitter handle @FriendlyTester). Automation in testing refers to automation that supports testing. It is a mindset and an approach that has a meaningful and useful focus (you can read more about it in Richard’s work).

Let’s start with the claim that “automation in testing finds bugs”. I have no idea why it is such a steep hill to climb to suggest this statement is untrue. Here is automation in testing in a nutshell: a human codes algorithmic checks because (I hope) a decision has been made that there is value in knowing if “desired state A” changes to a different state (“state B”). There appears to be a reasonably widely held belief that if the automated check fails, because we no longer have “desired state A” but instead “state B”, then the automation has found a bug. This thinking shifts much of the focus away from tester abilities and gives automation a power it has no right to claim.

That the desired state does not equal the current actual state is a difference, a variance. It’s not a bug; it is a deviation from an expected outcome, and a human is being invited to investigate the disagreement. As a tester, if you choose to tell people that the automated checks found a bug, then you might also be removing from the story the fact that a tester, a human, was required to turn that difference into anything meaningful and valuable.
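A minimal sketch makes the point concrete. The check below can only report that expected and actual disagree; classifying that difference as a bug, a stale expectation, or an environment problem remains a human judgement. The function and states are illustrative, not from any real framework:

```python
def check_state(expected, actual):
    """An automated check: compares two states and reports a difference. Nothing more."""
    if actual == expected:
        return "PASS"
    # The check cannot say *why* the states differ, only that they do
    return f"DIFFERENCE: expected {expected!r}, got {actual!r}"

# The automation flags a variance; a human must now investigate:
# is the product wrong, is the expectation stale, or is the data bad?
result = check_state("state A", "state B")
print(result)  # → DIFFERENCE: expected 'state A', got 'state B'
```

Nothing in that code knows what a bug is; "bug" only enters the story when a person investigates the reported difference.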

The automated check tells us there is a difference. Do we simply accept this and say “it’s a bug, needs to be fixed”? I don’t believe a tester worth their place in a software development company would even consider this an option. The first step is to dig in and discover where the difference occurred and the circumstances around it. Even something as simple as discovering the last time the check was executed can help us narrow down the possible changes that led us here. We will likely need to dig into the data and the new outcome, and probably ask a bunch of questions to discover what the changed outcome means. Your automation code, the computers you are running the automated checks on, the code you are executing to run the automation – none of these can do the investigation you are currently running. You are looking for clues and evidence, using heuristics to help you make decisions.

Sometimes the investigation and collaboration leads us to conclude that indeed we do have an unwanted outcome. Who makes the decision that the difference is a bug? Likely it will be a collaborative effort. Tester, developer, business analyst and subject matter expert are just a few who might collaborate to find a solution. At no point has the automation made a decision that there is a bug. It is incapable of doing any more than pointing out that it expected “state A” but got “state B”. Equally, after investigating the evidence you might discover that “state A” is wrong. It might be wrong because there have been code changes that legitimately change “state A”, so we need to update our expected results. It might even be that a change in code leads us to discover that “state A” has never been correct, or hasn’t been correct for some time (I’ve seen this more than once). Please note carefully that the automation cannot decide between “bug” and “not a bug”; a human (or humans) does this.

What else might happen as an outcome of the above? It’s not unusual for our investigations to discover scenarios that are not covered by automated checks but that would be valuable to cover. We might find other checks running that are out of date or poorly constructed (that is, they will never give us any valuable information). We might find other scenarios that will need changing because of this one difference. We might even spot signs of unwanted duplication. It’s pretty amazing what comes to light when you get into these investigations. There are a myriad of possibilities.

The one thing I really want to emphasise in this blog is that the computer, the automation, however you wish to refer to it, did not find a bug. It found a difference and that difference was found because a human wrote code that said that an outcome should be checked. Without the intervention of a human to analyse and investigate, this difference would have no meaning, no valuable outcomes. So if you want to elevate the standing of testers in the software community, it might be a good idea to take credit for your skills and contributions and not unthinkingly hand that over to a non-thinking entity.

Automated checks have value because of the information they can provide to humans. Consider that for your next conversation around automation.

Creating your story

Acknowledgement: My thanks to Lee Hawkins and Lisa Crispin for reviewing my blog before publishing. I really appreciate you providing the time to provide feedback on my writing.

In my last blog, Not everybody can test, I noted the importance of being able to tell stories about your testing. If you want people to grasp what you bring as a tester, and what good testers and testing contribute to software development, you must be able to tell stories that are clear, factual and compelling. If you want to elevate testing in the minds of non-testers, if you want to see the “manual” dropped from the term “manual testing”, if you want to build a clear delineation between testing and automation in testing, tell stories that do this. Tell stories that have clear context and clear messages, and that are targeted at your audience (yes, this means that one story does not work for all audiences). Perhaps some of you are wondering why I am using the term “stories” when I could say “talk about”, and that’s a fair question. The use of “story” is a deliberate choice because stories, good stories, are a powerful way of communicating with other people. Consider the following from Harvard Business Publishing.

Telling stories is one of the most powerful means that leaders have to influence, teach, and inspire. What makes storytelling so effective for learning? For starters, storytelling forges connections among people, and between people and ideas. Stories convey the culture, history, and values that unite people. When it comes to our countries, our communities, and our families, we understand intuitively that the stories we hold in common are an important part of the ties that bind.

This understanding also holds true in the business world, where an organization’s stories, and the stories its leaders tell, help solidify relationships in a way that factual statements encapsulated in bullet points or numbers don’t.

There are some highly desirable outcomes listed in those two paragraphs: influence, teach, inspire, forging connections among people and between people and ideas, solidifying relationships. So here’s the first checkpoint: if you want to influence how testing is viewed by non-testers, then you need to have stories and practice telling them. What is a good story? Well, that’s up to you really, and probably depends on how much time you want to invest in building the story and learning to tell it well. I once asked a guitar teacher how much I had to practice to be a great guitar player. He said when I thought I was great, that was enough, but cautioned that other people may not hear my greatness in the same way. However, before you can tell a story you have to have a story to tell.

Confession time. I write a lot of things that never get published because I can’t convince myself they are good stories. Often I write things where I need to get feedback to clarify that I am not talking nonsense. So I relate to the idea that telling stories can be difficult. I’ve been writing about testing for a while and still have episodes of feeling like an imposter. Having said that, while practice hasn’t made me perfect, writing blogs is easier now that I have practiced. Likewise, I have spent time building and refining testing stories that I can now comfortably use when talking about testing to everyone from new testers all the way up to CEOs (and weirdly I can do this without imposter syndrome – people are wonderfully strange).

So let’s set a scenario that I understand is not all that uncommon. You’re a tester, you sit at the end of the development queue. You are expected to gather test requirements, write detailed test cases, execute them (probably at some “average rate per day”) and report bugs. If this is how you explain your testing role, if this is your testing story, you are underselling yourself, and, painting a picture of somebody that could be “automated out of a job”. How might we improve this?

Let’s start with really breaking down what you do (or might do) if you relate to the above. As you gather test requirements for documentation you are looking for problems, ambiguities, statements that simply do not align with your understanding of the project or things you know about the system. So you raise these for discussion. In doing this you have reported issues for examination before they get into the code. You are influencing product quality, you are actively advocating for quality and finding potential problems others have missed. See the difference here? “I write test requirements” versus “I help people improve quality by identifying potential problems before they are coded. This helps my company reduce cost and improve client happiness”.

Have you ever noticed that as you are writing those detailed test cases you sometimes think about scenarios not covered in the specification (to be honest, that’s anything not sitting directly on the “happy path”)? Do you take notes about these missing bits of information, or add them to the detailed test cases? (I used to do the former, as I don’t like detailed test cases very much.) So you could tell a story that says “I write test cases and execute them”, or you could say “I write test cases and make notes about scenarios that aren’t explicitly covered, things that might not have been considered during design and coding. I talk to the BAs and developers about these gaps, and then when I test I explore these scenarios for consistency with other similar established and accepted outcomes”. Which story would you prefer to tell? Do you think one is a more compelling story of what you really do as a tester and the value you bring to a quality culture?

Let’s summarise. Telling stories is a powerful way of delivering messages that resonate and have the ability to build relationships and understanding “in a way that factual statements encapsulated in bullet points or numbers don’t”. Sadly though your testing stories have to be created by you. Somebody else could create them for you but then they wouldn’t be your stories and they would lack the authenticity that great stories require. Telling stories is not a “five minutes before you need it” activity (unless of course you are really practiced and have lots of stories to share). Take some time, understand what it is you do that makes your testing work important, create your stories, practice them, refine them and be ready to tell them. I’ve used some very simple examples, and deliberately so, for the purposes of illustrating ideas. So take your time, unravel the complexities of your work, understand your skills and celebrate them in compelling stories.

Not everybody can test

It has been, of late, an interesting time to observe LinkedIn posts. Amongst the popular themes are  “manual testing” and “not everybody can test”. While the term “manual testing” annoys me, for the moment, I’m a little over that discussion. Let’s look at the “not everybody can test” proposition. I’m uncertain if I’m about to take a step into testing heresy or not, but here goes.

Let’s start with some dictionary definitions of “test”:

1. A procedure for critical evaluation; a means of determining the presence, quality, or truth of something; a trial:

2. A series of questions, problems, or physical responses designed to determine knowledge, intelligence, or ability.

3. A basis for evaluation or judgement:

Now one courtesy of Michael Bolton and James Bach:

“Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes: questioning, study, modelling, observation and inference, output checking, etc.”

We’ll start with the “not everybody can test” claim. Let’s put this one to rest really quickly. The members of my family all undertake critical analysis, ask questions, solve problems, and evaluate information they are presented with. I have two members of my family (besides myself) who actively study and learn about the “product in front of them”. I’m going to suggest at this point that just about every human on this planet tests as part of their navigation through each day.

I’m being deliberately obtuse or silly, you say, because we all know that “not everybody can test” has software testing as its context. That’s an interesting response, as none of the definitions I’ve noted include “software testing” within them, and the original statement failed to include that. Cool, let’s change the statement – “not everybody can test software”. This too is problematic. Sit anybody in front of a piece of software, ask them to interact with it, and they’ll start testing. I seem to recall companies running usability labs basically using this approach. Those people are using and learning about the product, expressing feelings about how easy or hard it is to use, whether they want to keep interacting with the software or not, and whether they feel they are achieving outcomes that are useful for whatever purpose the software is intended to serve. Ask people about the software they use and just wait for the opinions to flow about what they like and dislike about particular programs. These opinions are formed by testing the software as it is used.

What, you’re now telling me they’re not IT professionals, and, well, you know, that whole thing about “not everybody can test” is about IT professionals? OK, there’s a bit of goalpost shifting going on here, but sure, let’s continue. I’m lucky enough to have worked at a company recently where a number of developers took their testing very seriously. I currently work at a company where the same can be said. They test in considered ways and think beyond unit testing (which is a type of testing). They work hard to make notes about what they have tested and potential problem spots, and they keep the test team appraised of what they are finding. Their goal is to, at a minimum, find those defects that are obvious or easily replicated, and to provide useful information for when deeper testing is undertaken. What I can say from working with these developers is that they are indeed critically evaluating their work, learning about it, forming hypotheses, and finding and resolving problems. So, it seems to me, they are testing based on the definitions above.

Say what? You’re still not happy; I’m still misinterpreting the sentence? Oh, right, so now you are telling me “not everybody can test software in ways that are thoughtful and structured to help discover valuable information for stakeholders”. At this point we could still debate nuances, perhaps tweak the statement, but now I’m starting to get the picture. When you say “not everybody can test” you really mean something far more specific. You mean that testers require a set of skills to do their job in an excellent manner. So my question then is: why did you start with the premise that “not everybody can test”? Would it be better to instead propose that software testing is a set of skills, abilities and attributes not possessed by everybody? Might it be more useful if, instead of telling non-testers that “not everybody can test”, you told compelling stories about what it is you do and bring as a tester that helps your company deliver excellent software to your customers? Would it be more effective to tell your testing story?

My final questions. Can you tell your testing story in a way that is meaningful and informative? If your answer to that question is “No” then perhaps consider if this is the next skill you should develop. If your answer is “Yes” then perhaps test out your testing story on someone that is outside of IT. See if they understand why testers are so important. Maybe your story needs some honing. If you want testing to be elevated to greater heights then some of that upward momentum is driven by your stories. Are you ready to tell your testing story?

A big thank you to Lee Hawkins (@therockertester) for reviewing the blog pre-publication. If you don’t know Lee’s work you can check out his blog at


The case against detailed tests cases (part one)

This blog was co-written with Lee Hawkins. You can find Lee’s blog posts at . Lee can be found on Twitter @therockertester

We recently read an article on the QA Revolution website, titled 7 Great Reasons to Write Detailed Test Cases, which claims to give “valid justification to write detailed test cases” and goes as far as to “encourage you to write more detailed test cases in the future.” We strongly disagree with both the premise and the “great reasons” and we’ll argue our counter position in a series of blog posts.

What is meant by detailed test cases?

This was not defined in the article (there is a link to “test cases” – see the article extract below – but it leads to a page with no relevant content; was there a detailed test case for this?). As we have no working definition from the author, this post assumes that detailed test cases are those comprising predefined sections, typically describing input actions, data inputs and expected results. The input actions are typically broken into low-level detail and can be thought of as forming a string of instructions such as “do this, now do this, now do this, now input this, now click this and check that the output equals the documented expected output”.

Let’s start at the very beginning

For the purposes of this article, the beginning is planning. The article makes the following supporting argument for detailed test cases:

It is important to write detailed test cases because it helps you to think through what needs to be tested. Writing detailed test cases takes planning. That planning will result in accelerating the testing timeline and identifying more defects. You need to be able to organize your testing in a way that is most optimal. Documenting all the different flows and combinations will help you identify potential areas that might otherwise be missed.

Let’s explore the assertions made by these statements.

We should start by pointing out that we agree that planning is important. But test planning can be accomplished in many different ways and the results of it documented in many different ways – as always, context matters! 

Helps you to think through what needs to be tested

When thinking through what needs to be tested, you need to focus on a multitude of factors. Developing an understanding of what has changed and what this means for testing will lead to many different test ideas. We want to capture these for later reference but not in a detailed way. We see much greater value in keeping this as “light as possible”. We don’t want our creativity and critical thinking to be overwhelmed by details. We also don’t want to fall into a sunk cost fallacy trap by spending so much time documenting an idea that we then feel we can’t discard it later. 

Planning becomes an even more valuable activity when it is also used to think through “what ifs” and to look for problems in understanding as the idea and code are developed. “Detailed test cases” (in the context of this article), by contrast, already suggest waterfall and the idea that testers do not contribute to building the right thing, right.

Another major problem with planning via the creation of detailed test cases is the implication that we already know what to test (a very common fallacy in our industry). In reality, we only know what to confirm based on specifications. We are accepting, as correct, documentation that is often incorrect and will not reflect the end product. Approaching testing as a proving rather than a disproving activity, confirming over questioning, plays to confirmation bias. Attempting to demonstrate that the specification is right, without considering ways it could be wrong, does not lead us into deeper understanding and learning. This is a waste of tester time and skills.

That planning will result in accelerating the testing timeline and identifying more defects

We are a bit surprised to find a statement like this when there is no evidence provided to support the assertion. As testing has its foundations in evidence, it strikes us as a little strange to make this statement and expect it to be taken as fact. We wonder how the author has come up with both conclusions. 

Does the author simply mean that by following scripted instructions testing is executed at greater speed? Is this an argument for efficiency over efficacy? We’d argue, based on our experiences, that detailed test cases are neither efficient nor effective. True story – many years ago Paul, working in a waterfall environment, decided to write detailed test cases that could be executed by anybody. At that point in test history this was “gold standard” thinking. Three weeks later, Paul was assigned to the testing. Having been assigned to other projects in the meantime, he came back to this work and found the extra detail completely useless. It had been written “for the moment”. With the “in the moment knowledge” missing, the cases were not clear and it required a lot of work to get back into testing the changes. If you’ve ever tried to work with somebody else’s detailed test cases, you know the problem we’re describing.

Also, writing detailed test cases, as a precursor to testing, naturally extends the testing timeline. The ability to test early and create rapid feedback loops is removed by spending time writing documentation rather than testing code.

Similarly, “identifying more defects” is a rather pointless observation without supporting evidence. It smacks of bug counting as a measure of success, over more valuable themes such as digging deeply into the system, exploring, and reporting evidence-based observations around risk. It would also have been helpful to indicate which alternative approaches “identifying more defects” is being measured against.

Defects are an outcome of engaging in testing that is thoughtful and based on observation of system responses to inputs. Hanging on to scripted details, trying to decipher them and the required inputs, effectively blunts your ability to observe beyond the instruction set you are executing. Another Paul story – Paul had been testing for only a short while (maybe two years) but was getting a reputation for finding important bugs. In a conversation with a developer one day, Paul was asked why this was so. Paul couldn’t answer the question at the time. Later, however, it dawned on him that those bugs were “off script”. They were the result of observing unusual outcomes or thinking about things the specification didn’t cover.

You need to be able to organize your testing in a way that is most optimal.

This statement is not completely clear to us in its meaning, but it is problematic because it seems to assume there is an optimal order for testing. So we need to ask: optimal for whom? The tester, the development team, the Project Manager, the Release Manager, the C-level business strategy, or the customer?

If we adopt a risk-based focus (and we should) then we can form a view about an order of execution, but until we start testing and actually see what the system is doing, we can’t know. Even in the space of a single test our whole view of “optimal” could change, so we need to remain flexible enough to change direction (and re-plan) as we go.
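As a rough sketch of what a risk-based ordering might look like in practice, consider scoring each test idea by likelihood and impact and sorting on the product. The ideas and scores below are invented for illustration, and in real work the scores would be revised continually as testing reveals what the system actually does:

```python
# A minimal sketch of risk-based ordering of test ideas (all names and
# scores are invented). Risk here is a simple likelihood * impact product,
# intended to be re-scored as testing teaches us more about the system.
test_ideas = [
    {"idea": "payment rounding near currency limits", "likelihood": 3, "impact": 5},
    {"idea": "login with expired session token",      "likelihood": 4, "impact": 4},
    {"idea": "help page typography",                  "likelihood": 2, "impact": 1},
]

def by_risk(ideas):
    """Order test ideas from highest to lowest estimated risk."""
    return sorted(ideas, key=lambda i: i["likelihood"] * i["impact"], reverse=True)

for idea in by_risk(test_ideas):
    print(idea["idea"])
```

The point of the sketch is the re-sorting, not the numbers: the ordering is provisional and should change the moment observation contradicts the estimates.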

Documenting all the different flows and combinations will help you identify potential areas that might otherwise be missed.

While it might seem like writing detailed test cases would help testers identify gaps, the reality is different. Diving into that level of detail, and potentially delaying your opportunity for hands-on testing, can actually obscure problem areas. Documenting the different flows and combinations is a good idea, and can form part of a good testing approach, but this should not be conflated with a reason for writing detailed test cases.

The statement implies that approaches other than detailed test cases will fail to detect issues. This is another claim made without supporting evidence, and one that contradicts our experience. In simple terms, we posit that problems are found through discussion, collaboration and actual hands-on testing of the code. The more time we spend writing about tests we might execute, the less time we have to learn the system under test and discover new risks.

We also need to be careful to avoid the fallacy of completeness in saying “documenting all the different flows and combinations”. We all know that complete testing is impossible for any real-world piece of software and it’s important not to mislead our stakeholders by suggesting we can fully document in the way described here.
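The scale of the completeness problem is easy to demonstrate with a toy calculation. Even ignoring data values, timing and sequencing, independent on/off settings alone multiply out quickly:

```python
from itertools import product

# Enumerating every combination of just 4 independent on/off settings
# already yields 2**4 = 16 cases.
small = list(product([False, True], repeat=4))
print(len(small))  # 16

# At 20 settings, exhaustive documentation means over a million cases,
# before considering data values, timing, or order of operations.
print(2 ** 20)  # 1048576
```

This is why claims of documenting “all the different flows and combinations” should be treated as shorthand at best, and never presented to stakeholders as literal completeness.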

Summing up our views

Our experience suggests that visual options, such as mind maps, are less heavy and provide easier visual communication to stakeholders than a library of detailed test cases. Visual presentations can be generated quickly and enable stakeholders to quickly appreciate relationships and dependencies. Possible gaps in thinking or overlooked areas also tend to stand out when the approach is highly visual. Try doing that with a whole bunch of words spread across a table. 

Our suggestions for further reading:

Thanks to Brian Osman for his review of our blog post.
