The case against detailed test cases (part one)

This blog post was co-written with Lee Hawkins. You can find Lee’s blog posts at https://therockertester.wordpress.com/ and he can be found on Twitter @therockertester.

We recently read an article on the QA Revolution website, titled 7 Great Reasons to Write Detailed Test Cases, which claims to give “valid justification to write detailed test cases” and goes as far as to “encourage you to write more detailed test cases in the future.” We strongly disagree with both the premise and the “great reasons” and we’ll argue our counter position in a series of blog posts.

What is meant by detailed test cases?

The article doesn’t define this (well, there’s a link to “test cases” – see the article extract below – but it leads to a page with no relevant content – was there a detailed test case for this?). As we have no working definition from the author, in this post we assume that detailed test cases are those built from predefined sections, typically describing input actions, data inputs and expected results. The input actions are typically broken down into low-level detail and can be thought of as forming a string of instructions such as “do this, now do this, now do this, now input this, now click this and check that the output is equal to the documented expected output”.
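To make that concrete, a hypothetical detailed test case for a login screen (our own illustration, not an example taken from the article) might look something like this:

Step 1 – Launch the application and open the login page. Expected result: the login form is displayed.
Step 2 – Enter a valid username in the Username field. Expected result: the field accepts the input.
Step 3 – Enter the matching password and click “Log in”. Expected result: the user’s dashboard is displayed.

Multiply that level of step-by-step detail across hundreds of scenarios and you have the kind of test case library we’re arguing against.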

Let’s start at the very beginning

For the purposes of this post, the beginning is planning. The article makes the following supporting argument for detailed test cases:

It is important to write detailed test cases because it helps you to think through what needs to be tested. Writing detailed test cases takes planning. That planning will result in accelerating the testing timeline and identifying more defects. You need to be able to organize your testing in a way that is most optimal. Documenting all the different flows and combinations will help you identify potential areas that might otherwise be missed.

Let’s explore the assertions made by these statements.

We should start by pointing out that we agree that planning is important. But test planning can be accomplished in many different ways and the results of it documented in many different ways – as always, context matters! 

Helps you to think through what needs to be tested

When thinking through what needs to be tested, you need to focus on a multitude of factors. Developing an understanding of what has changed and what this means for testing will lead to many different test ideas. We want to capture these for later reference but not in a detailed way. We see much greater value in keeping this as “light as possible”. We don’t want our creativity and critical thinking to be overwhelmed by details. We also don’t want to fall into a sunk cost fallacy trap by spending so much time documenting an idea that we then feel we can’t discard it later. 

Planning becomes an even more valuable activity when it is also used to think of “what ifs” and to look for problems in understanding as the idea and code are developed, while “detailed test cases” (in the context of this article) already suggest waterfall and the idea that testers do not contribute to building the right thing, right.

Another major problem with planning via the creation of detailed test cases is the implication that we already know what to test (a very common fallacy in our industry). In reality, we know what to confirm based on specifications. We are accepting, as correct, documentation that is often incorrect and will not reflect the end product. Approaching testing as a proving rather than disproving activity – confirming over questioning – plays to confirmation bias. Attempting to demonstrate that the specification is right, without considering ways it could be wrong, does not lead us into deeper understanding and learning. This is a waste of tester time and skills.

That planning will result in accelerating the testing timeline and identifying more defects

We are a bit surprised to find a statement like this when there is no evidence provided to support the assertion. As testing has its foundations in evidence, it strikes us as a little strange to make this statement and expect it to be taken as fact. We wonder how the author has come up with both conclusions. 

Does the author simply mean that by following scripted instructions testing is executed at greater speed? Is this an argument for efficiency over efficacy? We’d argue, based on our experiences, that detailed test cases are neither efficient nor effective. True story – many years ago Paul, working in a waterfall environment, decided to write detailed test cases that could be executed by anybody. At that point in testing history this was “gold standard” thinking. Three weeks later, Paul was assigned to execute the testing. Having been assigned to other projects in the meantime, he came back to this assignment and found the extra detail completely useless. It had been written “for the moment” and, with the “in the moment” knowledge missing, the cases were not clear and it took a lot of work to get back into testing the changes. If you’ve ever tried to work with somebody else’s detailed test cases, you know the problem we’re describing.

Also, writing detailed test cases, as a precursor to testing, naturally extends the testing timeline. The ability to test early and create rapid feedback loops is removed by spending time writing documentation rather than testing code.

Similarly, “identifying more defects” is a rather pointless observation sans supporting evidence. It smacks of bug counting as a measure of success, over more valuable themes such as digging deeply into the system, exploring, and reporting that provides evidence-based observations around risk. In saying “identifying more defects”, it would also have been helpful to indicate which alternative approaches are being compared against here.

Defects are an outcome of engaging in testing that is thoughtful and based on observation of system responses to inputs. Hanging on to scripted details, trying to decipher them and the required inputs, effectively blunts your ability to observe beyond the instruction set you are executing. Another Paul story – Paul had been testing for only a short while (maybe two years) but was getting a reputation for finding important bugs. In a conversation with a developer one day, Paul was asked why this was so. Paul couldn’t answer the question at the time. Later, however, it dawned on him that those bugs were “off script”. They were the result of observing unusual outcomes or thinking about things the specification didn’t cover.

You need to be able to organize your testing in a way that is most optimal.

This statement, while its meaning is not completely clear to us, is problematic because, for one thing, it seems to assume there is an optimal order for testing. So then we need to ask: optimal for whom? Optimal for the tester, the development team, the Project Manager, the Release Manager, the C-level business strategy or the customer?

If we adopt a risk-based focus (and we should), then we can have a view about an order of execution, but until we start testing and actually see what the system is doing, we can’t know whether that order is the right one. Even in the space of a single test our whole view of “optimal” could change, so we need to remain flexible enough to change direction (and re-plan) as we go.

Documenting all the different flows and combinations will help you identify potential areas that might otherwise be missed.

While it might seem like writing detailed test cases would help testers identify gaps, the reality is different. Diving into that level of detail, and potentially delaying your opportunity for hands-on testing, can actually obscure problem areas. Documenting the different flows and combinations is a good idea, and can form part of a good testing approach, but this should not be conflated with a reason for writing detailed test cases.

The statement implies that approaches other than detailed test cases will fail to detect issues. This is another claim made without any supporting evidence, and one that contradicts our experience. In simple terms, we posit that problems are found through discussion, collaboration and actual hands-on testing of the code. The more time we spend writing about tests we might execute, the less time we have to actually learn the system under test and discover new risks.

We also need to be careful to avoid the fallacy of completeness in saying “documenting all the different flows and combinations”. We all know that complete testing is impossible for any real-world piece of software, and it’s important not to mislead our stakeholders by suggesting we can document everything in the way described here.
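To give a rough sense of scale: a screen with just ten independent fields, each accepting five valid values, already allows 5^10 = 9,765,625 input combinations, before we even consider sequencing, timing, data state or environment.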

Summing up our views

Our experience suggests that visual options, such as mind maps, are lighter weight and communicate more easily with stakeholders than a library of detailed test cases. Visual presentations can be generated quickly and enable stakeholders to appreciate relationships and dependencies at a glance. Possible gaps in thinking or overlooked areas also tend to stand out when the approach is highly visual. Try doing that with a whole bunch of words spread across a table.

Thanks to Brian Osman for his review of our blog post.


8 thoughts on “The case against detailed test cases (part one)”

  1. To be fair, shouldn’t you contrast this with test automation or even BDD? How is that any different from detailed test cases? In the case of test cases, at least there is some consideration of test design techniques.

    1. Hi Nilanjan, we are addressing specific claims in a blog we strongly disagree with. I’m unclear why you think it is unfair we did not compare or contrast with test automation or BDD.

      1. Don’t all of your claims, as well as those in the referenced posts, equally apply to test automation? Don’t they also apply to the idea of testing (mainly test automation) among development thought leaders? The problem isn’t just with detailed test cases. It would be good to add that as a footnote to this blog post.

      2. Hi Nilanjan, we chose to address the source article in a particular way. I’m happy if readers want to consider parallels for the application of our observations. We plan to write at least one more blog on this. Automation may, or may not, form part of that.

  2. Hi Paul, great post!

    After reading the linked article, my first question was addressed in your first heading. What is meant by detailed test cases?

    What about the context of the application domains that you are referring to?

    Let’s say the context is a telecommunications protocol, and one of the testing missions is to cover all the use cases of that protocol. You have documentation of the communication parameters and their values, but no use case document. Is this a valid context for writing detailed test cases (as you defined them)?

    Thanks!

    Regards, Karlo.

    https://blog.tentamen.eu

    1. Hi Karlo, thanks for the feedback.

      To address your question, which I think is “would I write detailed test cases for the scenario you provided”, my answer in brief is no.
      You have told me that there is documentation about parameters and values. I’m going to assume that I also have some access to stakeholders to ask questions (simply because I can make that assumption here). Perhaps I’m also working with a team of testers. At no point in my response am I considering “covering all the use cases”. My focus will be on checking functions we say will work in particular ways and then exploring to learn about ways in which the system might misbehave. I think this is a far more valuable focus than chasing the illusion of an “exhaustive use case list”.

      As I understand your scenario, I see a number of ways I can explore the information on parameters and values. I could create a decision tree, I could use some pairwise testing (there’s a rough sketch below), and I’m probably going to consider boundaries and other things such as parameter dependencies. It’s a lot of information and I really want to keep it light. It seems to me that writing about what these relationships should do according to documentation is far less valuable than having some ideas about how and what to test and then adjusting these as my testing progresses and I learn more.

      During all this I am compiling test notes, converting these into both oral and written reports to stakeholders, and sharing my learning with fellow testers (and anyone else who might be interested). There is a complete absence of detailed test cases, but there is planning, there is direction, there is risk assessment and there is a stream of information to stakeholders to inform them of risk via evidence-based reporting. I feel like I’m hitting valuable test mission targets here.

      There is one circumstance where I would suggest using detailed test cases: when key stakeholders cannot be swayed from this as an artefact of testing. Of course, it would also be my preference not to engage in a testing role that demands detailed test cases.
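      To give a rough sense of the pairwise idea, here’s a minimal sketch in Python – the parameter names and values are invented purely for the example, and a dedicated pairwise tool or library would do this better:

      from itertools import combinations, product

      # Hypothetical protocol parameters - invented purely for illustration.
      params = {
          "codec": ["G.711", "G.729"],
          "transport": ["UDP", "TCP", "TLS"],
          "dtmf": ["RFC 2833", "inband"],
      }

      def all_pairs(parameters):
          """Greedy sketch: keep picking whole combinations until every pair of
          values across every two parameters has appeared at least once."""
          names = list(parameters)
          uncovered = {
              (a, va, b, vb)
              for a, b in combinations(names, 2)
              for va, vb in product(parameters[a], parameters[b])
          }
          candidates = [dict(zip(names, values)) for values in product(*parameters.values())]
          chosen = []
          while uncovered:
              # Pick the candidate that covers the most still-uncovered pairs.
              best = max(
                  candidates,
                  key=lambda c: sum(
                      (a, c[a], b, c[b]) in uncovered for a, b in combinations(names, 2)
                  ),
              )
              chosen.append(best)
              uncovered -= {(a, best[a], b, best[b]) for a, b in combinations(names, 2)}
          return chosen

      tests = all_pairs(params)
      print(f"{len(tests)} combinations instead of {2 * 3 * 2}:")
      for t in tests:
          print(t)

      Even on this tiny example the pairwise set is roughly half the size of the full combination table, and the value is in the thinking it prompts about parameter interactions, not in the script itself.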
      Hope this answers your question.
      Cheers
      Paul
