Creating your story

Acknowledgement: My thanks to Lee Hawkins and Lisa Crispin for reviewing my blog before publishing. I really appreciate you taking the time to provide feedback on my writing.

In my last blog, Not everybody can test, I noted the importance of being able to tell stories about your testing. If you want people to grasp what you bring as a tester, and what good testers and testing contribute to software development, you must be able to tell stories that are clear, factual and compelling. If you want to elevate testing in the minds of non-testers, if you want to see the “manual” dropped from the term “manual testing”, if you want to build a clear delineation between testing and automation in testing, tell stories that do this. Tell stories that have clear context, clear messages and are targeted at your audience (yes, this means that one story does not work for all audiences). Perhaps some of you are wondering why I am using the term “stories” when I could say “talk about”, and that’s a fair question. The use of “story” is a deliberate choice because stories, good stories, are a powerful way of communicating with other people. Consider the following from Harvard Business Publishing.


Telling stories is one of the most powerful means that leaders have to influence, teach, and inspire. What makes storytelling so effective for learning? For starters, storytelling forges connections among people, and between people and ideas. Stories convey the culture, history, and values that unite people. When it comes to our countries, our communities, and our families, we understand intuitively that the stories we hold in common are an important part of the ties that bind.

This understanding also holds true in the business world, where an organization’s stories, and the stories its leaders tell, help solidify relationships in a way that factual statements encapsulated in bullet points or numbers don’t.

https://www.harvardbusiness.org/what-makes-storytelling-so-effective-for-learning/

There are some highly desirable outcomes listed in those two paragraphs: influence, teach, inspire, forge connections among people and between people and ideas, solidify relationships. So here’s the first checkpoint: if you want to influence how testing is viewed by non-testers, then you need to have stories and practice telling them. What is a good story? Well, that’s up to you really, and probably depends on how much time you want to invest in building the story and learning to tell it well. I once asked a guitar teacher how much I had to practice to be a great guitar player. He said that when I thought I was great, that was enough, but cautioned that other people may not hear my greatness in the same way. However, before you can tell a story you have to have a story to tell.

Confession time. I write a lot of things that never get published because I can’t convince myself they are good stories. Often I write things where I need feedback to confirm I am not talking nonsense. So I relate to the idea that telling stories can be difficult. I’ve been writing about testing for a while and still have episodes of feeling like an imposter (https://en.wikipedia.org/wiki/Impostor_syndrome). Having said that, while practice hasn’t made me perfect, writing blogs is easier now that I have practiced. Likewise, I have spent time building and refining testing stories that I can now comfortably use when talking about testing to everyone from new testers all the way up to CEOs (and weirdly I can do this without imposter syndrome – people are wonderfully strange).

So let’s set a scenario that I understand is not all that uncommon. You’re a tester and you sit at the end of the development queue. You are expected to gather test requirements, write detailed test cases, execute them (probably at some “average rate per day”) and report bugs. If this is how you explain your testing role, if this is your testing story, you are underselling yourself and painting a picture of somebody who could be “automated out of a job”. How might we improve this?

Let’s start by really breaking down what you do (or might do) if you relate to the above. As you gather test requirements from the documentation you are looking for problems, ambiguities, statements that simply do not align with your understanding of the project or things you know about the system. So you raise these for discussion. In doing this you have reported issues for examination before they get into the code. You are influencing product quality, you are actively advocating for quality and finding potential problems others have missed. See the difference here? “I write test requirements” versus “I help people improve quality by identifying potential problems before they are coded. This helps my company reduce cost and improve client happiness”.

Have you ever noticed that, as you write those detailed test cases, you sometimes think about scenarios not covered in the specification (to be honest, that’s anything not sitting directly on the “happy path”)? Do you take notes about these missing bits of information or add them to the detailed test cases? (I used to do the former as I don’t like detailed test cases very much.) So you could tell a story that says “I write test cases and execute them” or you could say “I write test cases and make notes about scenarios that aren’t explicitly covered, things that might not have been considered during design and coding. I talk to the BAs and developers about these gaps and then when I test I explore these scenarios for consistency with other similar established and accepted outcomes”. Which story would you prefer to tell? Do you think one is a more compelling story of what you really do as a tester and the value you bring to a quality culture?

Let’s summarise. Telling stories is a powerful way of delivering messages that resonate and have the ability to build relationships and understanding “in a way that factual statements encapsulated in bullet points or numbers don’t”. Sadly, though, your testing stories have to be created by you. Somebody else could create them for you, but then they wouldn’t be your stories and they would lack the authenticity that great stories require. Telling stories is not a “five minutes before you need it” activity (unless, of course, you are really practiced and have lots of stories to share). Take some time, understand what it is you do that makes your testing work important, create your stories, practice them, refine them and be ready to tell them. I’ve used some very simple examples, and deliberately so, for the purpose of illustrating ideas. So take your time, unravel the complexities of your work, understand your skills and celebrate them in compelling stories.

Not everybody can test

It has been, of late, an interesting time to observe LinkedIn posts. Amongst the popular themes are “manual testing” and “not everybody can test”. While the term “manual testing” annoys me, for the moment I’m a little over that discussion. Let’s look at the “not everybody can test” proposition. I’m uncertain if I’m about to take a step into testing heresy or not, but here goes.

Let’s start with some definitions of “test” (taken from https://www.thefreedictionary.com/test)

1. A procedure for critical evaluation; a means of determining the presence, quality, or truth of something; a trial:

2. A series of questions, problems, or physical responses designed to determine knowledge, intelligence, or ability.

3. A basis for evaluation or judgement:

And now one courtesy of Michael Bolton and James Bach:

“Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes: questioning, study, modelling, observation and inference, output checking, etc.”

We’ll start with the “not everybody can test” claim. Let’s put this one to rest really quickly. The members of my family all undertake critical analysis, ask questions, solve problems and evaluate information they are presented with. I have 2 members of my family (besides myself) who actively study and learn about the “product in front of them”. I’m going to suggest at this point that just about every human on this planet tests as part of their navigation through each day.

I’m being deliberately obtuse or silly, you say, because we all know that “not everybody can test” has software testing as its context. That’s an interesting response, as none of those definitions I’ve noted include “software testing” within them and the original statement failed to include that. Cool, let’s change the statement – “not everybody can test software”. This too is problematic. Sit anybody in front of a piece of software, ask them to interact with it, and they’ll start testing. I seem to recall companies running usability labs basically using this approach. Those people are using and learning about the product, expressing feelings about how easy or hard it is to use, whether they want to keep interacting with the software or not, whether they feel they are achieving outcomes that are useful for whatever purpose the software is intended to serve. Ask people about the software they use and just wait for the opinions to flow about what they like and dislike about particular programs. These opinions are formed by testing the software as it is used.

What, you’re now telling me they’re not IT professionals and, well, you know, that whole thing about “not everybody can test” is about IT professionals? OK, there’s a bit of goalpost shifting going on here, but sure, let’s continue. I’m lucky enough to have worked at a company recently where a number of developers took their testing very seriously. I currently work at a company where the same can be said. They test in considered ways and think beyond unit testing (which is a type of testing). They work hard to make notes about what they have tested and potential problem spots, and keep the test team apprised of what they are finding. Their goal is to, at a minimum, find those defects that are obvious or easily replicated and to provide useful information when deeper testing is undertaken. What I can say from working with these developers is that they are indeed critically evaluating their work, learning about it, forming hypotheses, finding and resolving problems. So, it seems to me, they are testing based on the definitions above.

Say what, you’re still not happy, I’m still misinterpreting the sentence? Oh, right, so now you are telling me “not everybody can test software in ways that are thoughtful and structured to help discover valuable information for stakeholders”. At this point we could still debate nuances, perhaps tweak the statement, but now I’m starting to get the picture. When you say “not everybody can test” you really mean something far more specific? You mean that testers require a set of skills to do their job in an excellent manner. So my question then is: why did you start with the premise that “not everybody can test”? Would it be better to instead propose that software testing is a set of skills, abilities and attributes not possessed by everybody? Might it be more useful if, instead of telling non-testers that “not everybody can test”, you told compelling stories about what it is you do and bring as a tester that helps your company deliver excellent software to your customers? Would it be more effective to tell your testing story?

My final questions. Can you tell your testing story in a way that is meaningful and informative? If your answer to that question is “No” then perhaps consider whether this is the next skill you should develop. If your answer is “Yes” then perhaps test out your testing story on someone outside of IT. See if they understand why testers are so important. Maybe your story needs some honing. If you want testing to be elevated to greater heights then some of that upward momentum is driven by your stories. Are you ready to tell your testing story?

A big thank you to Lee Hawkins (@therockertester) for reviewing the blog pre-publication. If you don’t know Lee’s work you can check out his blog at https://therockertester.wordpress.com/

 

The case against detailed test cases (part one)

This blog was co-written with Lee Hawkins. You can find Lee’s blog posts at https://therockertester.wordpress.com/ . Lee can be found on Twitter @therockertester

We recently read an article on the QA Revolution website, titled 7 Great Reasons to Write Detailed Test Cases, which claims to give “valid justification to write detailed test cases” and goes as far as to “encourage you to write more detailed test cases in the future.” We strongly disagree with both the premise and the “great reasons” and we’ll argue our counter position in a series of blog posts.

What is meant by detailed test cases?

This was not defined in the article (well, there’s a link to “test cases” – see article extract below – but it leads to a page with no relevant content – was there a detailed test case for this?). As we have no working definition from the author, this article assumes that detailed test cases are those that comprise predefined sections, typically describing input actions, data inputs and expected results. The input actions are typically broken into low-level detail and could be thought of as forming a string of instructions such as “do this, now do this, now do this, now input this, now click this and check that the output is equal to the expected output that is documented”.
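To make the contrast we argue for a little more concrete, here is a rough sketch – the login scenario and field names are invented purely for illustration and are not taken from the article – of what we mean by a detailed test case, alongside the kind of lightweight test idea we prefer:

# A hypothetical "detailed test case" in the sense described above (invented example)
detailed_test_case = {
    "id": "TC-042",
    "title": "Login with valid credentials",
    "steps": [
        {"action": "Open the login page", "data": None, "expected": "Login form is displayed"},
        {"action": "Enter username", "data": "standard_user", "expected": "Username field shows the value"},
        {"action": "Enter password", "data": "correct-password", "expected": "Password is masked"},
        {"action": "Click Login", "data": None, "expected": "Dashboard is displayed"},
    ],
}

# The lighter-weight alternative: a test idea (or charter) that leaves room to explore
test_idea = ("Explore login with valid, invalid and expired credentials; "
             "watch for unhelpful error messages and lockout behaviour.")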

Let’s start at the very beginning

For the purposes of this article, the beginning is planning. The article makes the following supporting argument for detailed test cases

It is important to write detailed test cases because it helps you to think through what needs to be tested. Writing detailed test cases takes planning. That planning will result in accelerating the testing timeline and identifying more defects. You need to be able to organize your testing in a way that is most optimal. Documenting all the different flows and combinations will help you identify potential areas that might otherwise be missed.

Let’s explore the assertions made by these statements.

We should start by pointing out that we agree that planning is important. But test planning can be accomplished in many different ways and the results of it documented in many different ways – as always, context matters! 

Helps you to think through what needs to be tested

When thinking through what needs to be tested, you need to focus on a multitude of factors. Developing an understanding of what has changed and what this means for testing will lead to many different test ideas. We want to capture these for later reference but not in a detailed way. We see much greater value in keeping this as “light as possible”. We don’t want our creativity and critical thinking to be overwhelmed by details. We also don’t want to fall into a sunk cost fallacy trap by spending so much time documenting an idea that we then feel we can’t discard it later. 

Planning can be made an even more valuable activity when it is also used to think about “what ifs” and to look for problems in understanding as the idea and the code are developed, while “detailed test cases” (in the context of this article) already suggests waterfall and the idea that testers do not contribute to building the right thing, right.

Another major problem with planning via the creation of detailed test cases is the implication that we already know what to test (a very common fallacy in our industry). In reality, we know what to confirm based on specifications. We are accepting, as correct, documentation that is often incorrect and will not reflect the end product. Approaching testing as a proving rather than disproving activity, confirming over questioning, plays to confirmation bias. Attempting to demonstrate that the specification is right, and not considering ways it could be wrong, does not lead us into deeper understanding and learning. This is a waste of tester time and skills.

That planning will result in accelerating the testing timeline and identifying more defects

We are a bit surprised to find a statement like this when there is no evidence provided to support the assertion. As testing has its foundations in evidence, it strikes us as a little strange to make this statement and expect it to be taken as fact. We wonder how the author has come up with both conclusions. 

Does the author simply mean that by following scripted instructions testing is executed at greater speed? Is this an argument for efficiency over efficacy? We’d argue, based on our experiences, that detailed test cases are neither efficient nor effective. True story – many years ago Paul, working in a waterfall environment, decided to write detailed test cases that could be executed by anybody. At that point in test history this was “gold standard” thinking. Three weeks later, Paul was assigned to the testing. Having been assigned to other projects in the meantime, he came back to this assignment and found the extra detail completely useless. It had been written “for the moment”. With the “in the moment” knowledge missing, the cases were not clear and it required a lot of work to get back into testing the changes. If you’ve ever tried to work with somebody else’s detailed test cases, you know the problem we’re describing.

Also, writing detailed test cases, as a precursor to testing, naturally extends the testing timeline. The ability to test early and create rapid feedback loops is removed by spending time writing documentation rather than testing code.

Similarly, “identifying more defects” is a rather pointless observation sans supporting evidence. This smacks of bug counting as a measure of success over more valuable themes such as digging deeply into the system, exploring, and reporting that provides evidence-based observations around risk. In saying “identifying more defects”, it would have been helpful to indicate which alternative approaches are being compared here.

Defects are an outcome of engaging in testing that is thoughtful and based on observation of system responses to inputs. Hanging on to scripted details, trying to decipher them and the required inputs, effectively blunts your ability to observe beyond the instruction set you are executing. Another Paul story – Paul had been testing for only a short while (maybe two years) but was getting a reputation for finding important bugs. In a conversation with a developer one day, Paul was asked why this was so. Paul couldn’t answer the question at the time. Later, however, it dawned on him that those bugs were “off script”. They were the result of observing unusual outcomes or thinking about things the specification didn’t cover.

You need to be able to organize your testing in a way that is most optimal.

This statement, while not completely clear to us in terms of its meaning, is problematic because it seems to assume there is an optimal order for testing. So then we need to consider: optimal for whom? Optimal for the tester, the development team, the Project Manager, the Release Manager, the C-level business strategy or the customer?

If we adopt a risk-based focus (and we should) then we can have a view about an order of execution but until we start testing and actually see what the system is doing, we can’t know. Even in the space of a single test our whole view of “optimal” could change, so we need to remain flexible enough to change our direction (and re-plan) as we go.

Documenting all the different flows and combinations will help you identify potential areas that might otherwise be missed.

While it might seem like writing detailed test cases would help testers identify gaps, the reality is different. Diving into that level of detail, and potentially delaying your opportunity for hands-on testing, can actually help to obfuscate problem areas. Documenting the different flows and combinations is a good idea, and can form part of a good testing approach, but this should not be conflated with a reason for writing detailed test cases. 

The statement suggests to us an implication that approaches other than detailed test cases will fail to detect issues. This is another statement that is made without any supporting evidence. It is also a statement that contradicts our experience. In simple terms, we posit that problems are found through discussion, collaboration and actual hands-on testing of the code. The more time we spend writing about tests we might execute, the less time we have to actually learn the system under test and discover new risks.

We also need to be careful to avoid the fallacy of completeness in saying “documenting all the different flows and combinations”. We all know that complete testing is impossible for any real-world piece of software and it’s important not to mislead our stakeholders by suggesting we can fully document in the way described here.

Summing up our views

Our experience suggests that visual options, such as mind maps, are less heavy and provide easier visual communication to stakeholders than a library of detailed test cases. Visual presentations can be generated quickly and enable stakeholders to quickly appreciate relationships and dependencies. Possible gaps in thinking or overlooked areas also tend to stand out when the approach is highly visual. Try doing that with a whole bunch of words spread across a table. 

Our suggestions for further reading:

Thanks to Brian Osman for his review of our blog post.

Testing in stealth mode

In my most recent blog Value through simplicity I mentioned how it is possible to take small steps to influence quality and/or reduce complexity. In an “ideal world” you would be able to propose trying an idea and then run the experiment and feel safe the entire journey. Not all employers are an “ideal world” so sometimes you need to do things “on the quiet”. I have thought occasionally about blogging on some “cheats” I used across my testing journey. So this is a blog about some of those. It’s possibly also a story about impatience, single mindedness or tunnel vision (or all of those and more).

I mentioned in Value through simplicity that in my very early days as a tester I realised the value of communicating early with both the business analyst and developers involved on a project (communicating with our customers would have been nice but, for most people in this company, they were off limits). This was, as far as I can remember, the first time I moved away from established test team processes and started questioning a bunch of “(best) practices”.

I’d been in testing for maybe 2 years and was getting to the point where the mandate to create detailed test requirements (with traceability to the specification) was driving me nuts. This is the only time in my testing journey that I almost walked away from testing. I didn’t become a tester to spend so much time on documentation. Testing is a big jigsaw puzzle and you don’t solve jigsaw puzzles by writing about possibilities. You solve them by interacting, being hands on.

On top of writing detailed test cases, they then had to be maintained. When the specification changed (and it always did, multiple times) the test cases had to be updated along with the linkages to the requirements. I can still remember the damned formula used for how testing effort would be split for a project. The total test time estimate was based on the number of test requirements gathered and their perceived complexity. The total testing time was then split 40% test case writing, 20% data setup and 40% execution of test cases. Years later and I’m still astonished. The same amount of time writing as executing. That’s badly broken.
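To put some numbers on it (the 200-hour figure below is purely an invented example, not from a real project), the split worked roughly like this:

total_estimate_hours = 200                      # invented example figure
writing_hours = total_estimate_hours * 0.40     # 80 hours writing detailed test cases
data_setup_hours = total_estimate_hours * 0.20  # 40 hours setting up test data
execution_hours = total_estimate_hours * 0.40   # 80 hours actually executing tests
# Writing time equals execution time: as much effort documenting tests as running them.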

My response was to cheat. I did the initial mapping to requirements but I wrote my requirements at a higher level. That meant less writing (weirdly, looking back, they were actually tracking more toward a test idea than the test requirements we were supposed to be writing). I wrote detailed test cases but with less detail. I knew more detail didn’t help. I’d actually experimented with writing test cases that met the “anybody can execute them” standard. After I wrote the first one I was asked, 3 weeks later, to return to the project and execute the testing. I was back to square one. Stuff that had made a lot of sense, in context, in the midst of the project, was now lost. Most of that additional detail related to knowledge that was, for want of a better phrase, “in the moment”, and 3 weeks is a long break from the “moment”.

When it came to changes (the inevitable specification changes) I no longer updated the test cases, I simply dumped the maintenance. I made notes about the changes and where they would impact. When it came time to execute the tests and record outcomes I made brief notes as to what was actually executed, why it was different to the written instruction set and the actual result based on the changes. I was a lot happier with a reduced overhead on what I thought was reasonably pointless documentation and I spent more time playing with the software, learning and discovering. That nobody ever questioned my different approach says a lot about the value those documents provided to stakeholders (hint – there was no value).

It wasn’t too much further down the road that I decided that once my test cases were written I would make them available for the developers to review. I was a bit nervous about this because, back then, bug numbers had a relationship to being viewed as a good tester (sadly I hear that in some companies this is still a thing). I was hoping they would have a look at what I had written and we might find variations in what we thought the software was supposed to do. I liked working with many of the developers as we would pick through code (I learnt a hell of a lot about COBOL and reading it through what we would now call “pairing”).

The above wasn’t exactly a runaway success. It had a few wins. None of the other testers were interested in giving it a try (it was still a bit “developer v tester” for many). One day a developer came over and informed me that he had run all my test cases on the project he had coded, thus the code would come to me clean. I’d never seriously considered this would happen. There was a moment (maybe a little longer) of “what the hell do I do now?”. Rerunning tests that had already been executed seemed a bit pointless but……. I might have to pretend, go through the motions.

And that is how I started to learn about exploring over scripting. It dawned on me that, not all that long ago, through a chat with a colleague, I had realised that a lot of my more important defect discoveries were often found “off script”. They came from ideas I generated while testing, things that I saw or learnt while interacting with the software. Ideas that tend not to spring out from documentation alone. So my plan was to execute a few of the key tests and then develop themes around those and explore those ideas (and of course, back-fill the test cases with information so it looked “legit”). As it turned out, the first few key tests that I decided to run demonstrated a myriad of horrible problems and placed a hold on testing for a week or so (which prompted a polite query about which project test cases the developer had executed). When I got back to retesting the project I selectively chose a few additional tests from my test cases to execute (beyond those I’d initially chosen). Once I completed executing those checks I then explored related themes and test ideas. Things became progressively more exploratory for me.

One of the benefits of being at the same company for (quite) a while is that you can get away with things on “trust” (although in the environment I was in that might not be the right word. Perhaps “tolerated” is closer). I had a number of practices that were considered “not for the rest of the testing team” (go figure, because my ability to test well was never called into question – so maybe I had things that could be shared). For all the things I could get away with, the company still had a policy that required test cases (even during its moments of “agility”). So I created a process where I would capture test ideas. No formal mapping back to specification sections, no “test requirements” in the way our other testers were writing them. The test cases would evolve as I executed testing. I would add, delete and modify my test requirements as I learned more while testing. So, to keep everything “cool in the hen house”, my test notes were captured in the form of completed test cases. I wasn’t ecstatic about this but I could live with it and it was at least in the neighbourhood of how I wanted to test. Much of this went unnoticed until……. new test manager.

Which could have been a disaster, but I knew Mr Test Manager from a previous workplace where we had socialised and discussed the meaning of testing (in case you are wondering, neither of us thought it was “42”). Interestingly we had vastly different views of testing. Mr Test Manager was all about numbers and predictability but valued that I thought differently to him and would critically analyse thoughts and ideas rather than just being a yes/no/no comment person. One day he approached me and said “I’m testing my test predictor spreadsheet, I need to know how many test cases you have for this project”. I looked at him and said “no idea”. I got a quizzical look (or at least that’s my interpretation of it) and “you’re testing, you must know”. I followed up with “if you want to count test cases then come ask me when the project is put to bed”. A meeting and explanation, complete with rationale, followed. Fortunately all was cool, but I was told to keep my process to myself (I guess to avoid an uprising or anarchy) and I was never again asked for a test case count.

This has been an interesting blog to write. I can see pieces of my testing philosophy being stitched together over a number of years. I’m lucky in that everything I do now is “above board” and I can openly discuss and experiment with ideas. That’s a privilege that I hope to never take for granted. I suspect though that having to do some stuff using “stealth” made me think a lot about risk and reward, probably useful lessons.

Value through Simplicity

Testing is a complex activity that sits at the intersection of technology, psychology, analysis and people management (amongst other skills – this in itself could be a blog). Before you get confused, people management, from my testing perspective, is working with people and influencing them, not being a gatekeeper to release or some similar activity. If we accept that creating software is a complex task we might benefit, as testers, from trying to find ways to reduce complexity. Sometimes I think complexity is promoted as a “badge of honour”. Personally, I prefer to make things as simple (least complex) as possible.

I feel that if I am talking about concepts with simplicity, rather than complexity, as my guide, I’m probably going to get my message across with greater effectiveness. I also think it enables us to take action far more quickly, which shortens the space between having an idea and learning about that idea by actually doing.

There is no “one size fits all” approach, just as there are no “best practices”. I might suggest, though, that if you approach testing as primarily being an exercise in documentation, you just might be missing opportunities to add real value. That’s not to say testing documentation isn’t important, but there is certainly a balance. In the near future a couple of co-written blogs on detailed test cases (and why they are a massive fallacy) will be zooming out into cyberspace. That’s for later.

Two weeks ago I sat down with my 2 test specialist colleagues and a developer. We chatted about some upgrade changes (3rd party software) that we had to introduce. We wanted to avoid falling back onto automation and just waiting for “green ticks”. Why? Because we knew that we didn’t have as much coverage as we would like in a particular area the upgrade would impact. We have some good coverage through the API layer but we wanted to spend time testing that our customers would not be affected through the GUI (there are reasons why our automation is somewhat lower here, but that’s not a discussion for this blog). Beyond that though, we knew there was value in having some humans diving in and hitting the software in ways we knew the automation didn’t and couldn’t. I think it is pretty cool that we know our automation well enough to make calls like this.

So after this chat we developed a test strategy so we could co-ordinate our testing effort. We spoke to the strategy for about 10, maybe 15, minutes. We spoke about which bits we each wanted to start with, areas where we might lack some knowledge, how to compensate for any knowledge shortfall, and what we would do if we found a bug. We agreed this needed to be more than “hey, I found a bug” and moving on. We wanted to form a quick gathering so we all understood the bug and the nature of it. This would enable us to quickly adjust our individual testing approaches, if required, or to quickly come up with some additional test ideas for exploration. It’s amazing how quickly some focused discussion enables ideas to flow and in turn deepen the testing.

The strategy is shown below. It is small and simple, and the details can be consumed quickly. These are all advantages. Gaps are exposed pretty quickly because there is no clutter, and the information can be absorbed quickly for the same reason.

The documentation shows what we want to hit, who is taking care of the initial “slices” and what those “slices” are composed of. Note also that the whiteboard is out in the open, available for anybody to see and provide feedback on (that’s intentional, not accidental). The rest of the strategy? Well, that was discussion and collaboration. I’m a big fan of this approach because it means maximum time hitting the software rather than crafting documents that often serve little purpose beyond “ticking off” a process box. Test cases? Nope, we each had a direction and ideas. I mean, that is a type of test case, it just doesn’t come with explicit steps and expected results defined.

I get that the above might seem strange to some (or many). I also acknowledge that this is the first group of testers I have worked with where, as a group, we embrace the exploratory nature of testing (and I’ve worked with a lot of testers). It’s actually really nice to work with a group of testers where I don’t have to try and influence an exploratory approach. It’s pretty normal for us, on a daily basis, to be involving each other in some way in each other’s testing. The starting point can be anything from “hey, check this out” to looking over a shoulder and suggesting that something looks a bit weird or simply asking “hey, what is that?” while pointing at something on the screen. This is how we end up pairing a lot. Those “interruptions” become paired exploratory sessions. It’s a fun place to work and productive. I genuinely learn new things every day. I really wish more testers and developers would open themselves up to this type of interaction. The discoveries can be amazing and that’s a real value add to the software.

So perhaps you can’t reduce a strategy to a whiteboard list. Perhaps you are expected to write detailed test cases or you are sitting in a silo waiting for bits of code or documentation to “waterfall” to you. I’ve been there and you cannot move away from that in a hurry. It’s embedded and beyond your direct control (I really should blog on things I did to shortcut my way through pointless process, or cheats to look like I was complying). What you can do though is pick one thing, just one thing, that you can do something about. Pick something low risk but something that will help reduce complexity for you. Many years ago the first move in that direction for me was to start early conversations with the business analyst and developers that were working on projects coming my way (I was the only tester in the team doing this). This was my first step toward really learning about testers influencing quality. Over a period of time it seeped into the test team practices (because behaviours like this do get noticed. The worst thing that could have happened was being told to stop and stay in my own silo – like I said, low risk, small steps). See if you can find something that helps you start the journey.

On reflection…….

Thursday was a busy day for me at work. Busy in good ways. I had picked up a small task for testing and in the process, along with a developer, spent the majority of the day finding issues. The best bit, at least in my view, was that my developer colleague was actively finding bugs as well. Both of us essentially spent the day asking “what if”, exploring different perspectives and posing questions.

My colleague is situated in Canberra, I’m in Melbourne (that’s a separation of about 700 kilometres by land and 460 kilometres by air). We communicated the simple stuff via Slack, either in short sentences or a short sentence accompanied by a screenshot for clarification. For a more complex issue we screen-shared so I could walk through the problem. I felt that unless I demonstrated what I had found, I probably couldn’t describe the scenario well in a ticket. We asked each other a lot of questions, which helped focus further exploration (and the screen share session surfaced another problem because of our discussion about what was happening and what we were observing).

Even though we were not in the same office it felt like we were. We were able to find and communicate issues with a level of clarity and understanding that worked for us. Because we were talking about what we found, the bug documentation was to the point. Because we were working “in the moment” and had a steady stream of communication the bug fixes were pretty rapid. I cannot really explain how much I love having bugs fixed so quickly after finding them. The context is fresh in my head and often in the interim between find and fix I have thought of other relevant tests I could run. In this type of scenario I don’t need to re-read and revisit the scenario/s, I think that can be a massive advantage.

By the time we had finished working our way through our explorations we had discovered, and fixed, around a dozen issues of various impact and importance (none of them introduced by the change that started my testing). By then I was mentally tired. Normally I will work on something for a little while then pull back for a minute or two, reflect on what I have seen, perhaps take a walk to the kitchen and grab a tea or coffee or have a quick chat about something with a colleague, and then dive back in. This is something of a “rinse and repeat” habit that works well for me. I was so enjoying what was going on, the discussion, the exploration and discovery, that I just didn’t really pull myself out of the adventure the way I normally would. I’m (kinda) OK with doing this occasionally but not as a “lifestyle”.

Before calling it a day I had a quick chat with my colleague to thank him for the effort he had put in, for his willingness to maintain a two-way line of communication, and for not just fixing but also finding issues. Both of us agreed it had been a good day. We both felt we had left our software in a better state than it was when we started that morning.

I have an easy 10-minute stroll from work to the train station, then around a 30-minute ride on the train. This is reflection time for me, especially the walk to the station. I made four notes in my phone notepad from this reflection time; they are reproduced below, as written into my phone:

  • Discussing a problem face to face is powerful and effective
  • Keep communication as simple and concise as you can without destroying the message
  • Pairing on a problem leads to discoveries you might not have made working alone
  • Sometimes people forget that things they know are not common knowledge. Pairing can help surface some of this knowledge and create excellent learning opportunities (for you and others when shared).

None of those points are “must do” or “best practice” but for me they are “good practices in context”. I guess there are many more I could list but these are the 4 that really stood out when I was reflecting on the day. Another day, another adventure, and my reflections would be different (to some degree). I don’t see any of the above points as “breakthrough” learning, more a reinforcement of things I have learned previously. I think that reflection for the purpose of continued acceptance or rejection of practices is healthy and an important input into continuous improvement. It’s certainly a habit that I feel has been beneficial for me.

The impossible and the ironic

Recently I spotted a job advertisement on LinkedIn and decided to tweet the advertisement. My commentary on the tweet was: “When are we going to stop seeing this in tester role ads. It’s way past time this was history. Good luck with ensuring error free.” The job ad I tweeted is shown below with any references to the company removed.

So, what’s the problem with the advertisement? Here are several things that trouble me.

Testers do not “ensure” and neither do they “assure”. Why? If you’re unfamiliar with the terms it’s worth having a look in the dictionary and the thesaurus, or just look below.

When you ensure or assure you are providing a guarantee that something will be true – in this case a future state over which you have both limited control and limited knowledge. You are making a commitment without the tools to deliver the commitment. That seems like a very risky thing to do. Whilst looking at various dictionaries I found what I think is an interesting explanation of assure (a synonym of ensure).

The notion of telling a stakeholder something “so they do not worry” is right up there with the notion of “giving stakeholders confidence”. Both are rooted in emotional reactions to data that are in the control of the person receiving the data. One requirement of testers is to present evidence-based reports that provide insight into product risk. Testers should not be thinking about “confidence” or “worry”; they should be focused on empirically backed, evidence-based information that just might shatter stakeholder illusions. I do wonder if that ad might have really meant “the tester is required to say ‘everything is OK, Boss’”.

Error free – I have no idea how you do this. We can go philosophical: “absence of evidence is not evidence of absence”. Black swans “did not exist” until explorers travelled to Australia and found them in Western Australia. It was, until then, considered impossible for a swan to be any colour other than white. So how does someone provide a guarantee that a system, with multiple layers of complexity, is error free? You can only know what you know (or as Daniel Kahneman states, “What you see is all there is”); there will always be blind spots (and we tend to be unaware of those simply by their nature). I mean, you could be honest and state “we can no longer find any bugs that we believe will adversely impact our clients’ businesses”. But….. that’s not a statement of error free; it’s not a guarantee that a client won’t do something with the software you never thought of doing while testing, or that the system won’t behave badly in production. The ad is asking somebody to commit to the impossible.

Over 150 people had applied as at the time I took the ad snapshot. If people want to apply for roles, that is their choice and their right. However it raises questions in my mind. Perhaps what the job advertisement states is now considered irrelevant by job seekers. If that’s the case we should ask if that is a problem. Moreover, the job ad is asking me to do things that I cannot, with honesty, state that I can deliver. How do I work with that? If I landed the role, would it play out with the expectations of the advertisement? If it does, then I (and anybody else that might get the role) won’t make it past probation. If the job advertisement is inaccurate then why? There is plenty of time to get the wording right and focus on the people you want to apply. If you can’t do that there might be issues working with the people that you hire. At a minimum it shows that there is a remarkable misunderstanding of what testers do and can bring to an employer. I suppose you could idealise that you might be able to change the thinking from the job ad to one that is based on reality while you are there. Perhaps those that signed off on the ad were happily paying for copy that didn’t reflect their beliefs. Remember to keep those rose-coloured glasses on all day, you’ll probably need them.

The thing that lingers in my mind is that, on Twitter and LinkedIn, I see many testers selling the idea that testers are gatekeepers and are responsible for quality. That testing is “assurance” of quality and that testing is about 0 defects. Perhaps it’s the idea (illusion) of having control or power that is attractive. Perhaps it is an angle that is intended to make a persuasive argument for retention of testers. Perhaps it is the unfortunate and popular conflation of testing and quality assurance. In my view this is a destructive and dishonest way to talk about testing. It also helps reinforce out-of-date views and poor practices in the minds of non-testers who read the commentary. I don’t see it as a big stretch to suggest that it helps contribute to tester job ads that are remarkably inaccurate. As a community we need to do better in this respect. We need to talk about how testing helps build better software through collaboration, even if your current role has you sitting “at the end of the queue” waiting for code to be thrown over the wall. Rather than selling that you are a gatekeeper, think about and promote the idea that you can collaborate and influence and that the team owns quality. If you want “power” (whatever that might mean in your mind) you’re far more likely to find it when collaborating and influencing. If you want job ads that speak to what testers can and should bring then join in and help educate others that do not test. Choose your words carefully and wisely, talk in ways that are both compelling and honest.

The irony – so, in an advertisement that demands “ensured….error free” we get this:

Perhaps it tells us everything we need to know about the offer.

Testing – Not Just Bugs

This post is brought to you through conversations recent and past, the more recent ones tending towards LinkedIn discussions and reading the thoughts of others. Something that stands out to me is that many testers struggle to explain what they do when they test, why they do those things and even why they test. I see many testers justify their existence with “I find bugs”. Well, that’s cool (to a degree) because testers need to find (important) bugs, but when you focus only on that, you’re underselling the craft and omitting many things that good testing brings to a company.

I remember many years ago being involved in an interview for a tester. Through the interview I counted 15 instances (I know I missed a few early ones) of the candidate saying “I catch bugs”. The candidate’s entire expression of testing was finding bugs; more nuanced discussions were largely beyond them. It’s a little sad when you realise this person had 5 years of testing experience. This interview might just have been a catalyst for me to make sure I could always talk about testing with some depth and clarity. It certainly encouraged me to reflect on, and learn from, my experiences and those of others.

To me, testing is a very broad church. To do it well you require multiple skills that are not limited to the domain in which you work or a couple of tools. Hell, I use people and communication skills when testing that I learnt when I was a primary school teacher. I bring into my testing things I experience as a cricket umpire (and I also wear my tester’s hat while umpiring).

As a tester you are working with not only hardware and software but also people. There are people in the same company as you and people external to it. Many of these external people are clients; others are providers of software that interfaces with what we build. To successfully understand the context in which I am testing I need to have sufficient knowledge of who the key people are, when I need to communicate with them and the most effective ways of providing them with the information they need to make the decisions they need to make. In this space I’m managing technology, information, people and relationships and helping guide the production of quality software as part of a team effort. I might also point out that I’ve done all this and haven’t necessarily touched the software to be released (possibly not a line of code has been written yet). That doesn’t mean I’m not testing and it certainly doesn’t mean I’m not influencing what will be released or how we might go about building the code, documentation or release plans.

While I’m engaged in the above I’m constructing models of what will be built, trying to reduce complexity and increase my understanding of how to best examine the software. I’m talking to people about my models (which generally have been physically represented either on paper, a whiteboard or a mind map) and seeing if they have the same models. It’s not unusual to find they don’t, and this is a great test result. It shows that there are different understandings and/or assumptions at play. Time to discuss, examine and course correct as required.

I like to talk to developers about the earliest point at which they can give me access to the code changes and what they can deliver to me. Give me something small and I can test it and give you feedback. How small? No single answer here except for “it depends”. I once had some code dropped to me with “it’ll take you 10 minutes to test – at most, it doesn’t do a lot”. It did enough to show that there was a fundamental flaw in our approach. Most importantly, we discovered this before fixing it required a major strip down and overhaul.

Before I start testing I think about the ways the software I’m about to test might be used, misused (intentionally and unintentionally), and how I might make it “trip over itself”. How might I make the software do something that is “embarrassing”? How might I make the software hang, crash, respond poorly or simply behave like a 3-year-old throwing a tantrum? Then it’s all “downhill”, right? Not really, because it’s just getting started. The fun really starts now.

Now I have my hands on the software (or some part of it). I have my preliminary test ideas and I’ll also include checking that any representations we have made about functionality can be demonstrated. This is an activity that helps me build out more test ideas as well as adjust or eliminate some of those I started with. To move to a slight tangent, this is just one reason why I personally dislike test scripts and the mindset of “that test passed”. What about everything else going on around that single test? What is getting missed in the race for the “green tick”? At the risk of overemphasising the point, observation and concentration are key. I want to spot the unusual and unexpected.

I have yet to write that “I find bugs”. When I test I expect to find bugs, but that is because I think deeply about how I might effectively test the software. In what ways can I test and find vulnerabilities? I’m not actually thinking “let’s find bugs”. This might seem like a subtle difference (or perhaps not). This approach is explained by John Kay in his book Obliquity. If I say I’m going to find bugs, and make that my sole focus, I’m taking a very direct approach. I prefer an oblique approach where I focus on doing things that give me the opportunity to discover and learn and put myself in the best possible position to find important bugs. It also helps me focus on developing a solid understanding of various risks (because I need to convey this to stakeholders). Bugs are not my focus but an outcome of the way I test. This is just one reason why I don’t count bugs. The count, to a large degree, is irrelevant. There are many other dimensions of bugs that interest me far more than my (or another tester’s) personal tally. There are many other dimensions by which I judge my performance than the number of bugs found.

If people wish to express their value in terms of “bug count” that’s a personal decision. In my sphere of influence (whatever that might be – and assuming I have one) I feel it is important to broaden the conversation. When I talk about testing I want to amplify the skills required, the depth of thinking and analysis needed to do it well, the broad experiences and sheer hard work that feed into decision making, the people skills, the reporting skills, collaborating, influencing and helping people and products improve. That list is just a starting point, far from exhaustive. In that conversation I want people to appreciate that bugs are found because good testers make it inevitable they will be found, not merely because testers find bugs. There’s a huge difference in value statement between the two.

I would much rather have the conversation about why I find bugs that others might not find than about bug numbers. I would like to be recognised for more than just finding bugs because, frankly, I (and many other testers) bring way more than that to the table (if you’re reading this and don’t know any of those “other testers”, get in touch with me and I’ll send you some names). I would like to move the focus of testing discussions to the nuances, challenges and difficulties of good testing and how testers can help with quality (and that testers don’t own quality or assure it). I really want to see more testers thinking about, and communicating with clarity, what it is that makes testing valuable. Elevating the status of testing by sending clear and accurate messages – well, it’s in our hands.

Cheers

Paul

Agile Elitism – Undoing the Good

I’ve been in IT for a bit over 20 years. In that time I’ve been a business analyst, a support desk lead and, for the most part, a tester. I’ve designed processes, I’ve redesigned processes, I’ve redesigned myself and my testing beliefs. When I first started testing, as a dedicated tester, I was very much “by the seat of my pants”. I had ideas about how to go about testing and I sort of followed them. I was then sent to an ISTQB Foundation course. I came back from that with a framework and an approach. It just made sense. It gave me a structure that I lacked, or more correctly, it gave me a structure and approach approved by others. I was somewhat evangelical about the whole deal when I returned to work with my shiny new certification.

Funny thing about shiny stuff: unless you keep polishing it (which in my view is often little more than reinforcing your biases) it tarnishes and stops looking the way it used to look. This is exactly what happened to me. What I thought I needed to do and what ISTQB told me were far apart. Writing detailed test cases sucked (and also led me to consider exiting testing). Having to maintain them sucked more. I wanted to engage with the business analysts and developers earlier, I wanted to write less and test more. I wanted to pair with other testers (before that practice had a name). I learnt very quickly that what I thought the software would do based on the specification, and what it actually did when it got to me, were often worlds apart. I also found that when I focused less on the test cases and more on observation of the software I found really interesting bugs and made interesting discoveries that I could share. That was basically how I found context-driven testing and a whole new view of testing. It’s also how I became substantially different to the majority of other testers I worked with for much of my professional testing life.

The reason I gave a brief outline of my background is that I’m finding the same thoughts going through my head around Agile/agile as I did about testing. My first introduction to agility was through being in a waterfall project team that thought holding daily stand-ups made us agile. At that point I lacked the experience I have now and just went along with it. It was slightly useful but mostly slightly annoying. The stand-ups rarely revealed anything we didn’t already know. Mind you, nobody had the foggiest idea about Agile, it was just a directive from management (who also lacked the required awareness).

My next interaction was at the same company. A decision was made to “be Agile” and scrum was the adopted framework. It was doomed to failure because management had no idea of what was required and had no interest in changing their behaviours. That Agile is a mindset, rather than a concrete “thing”, never occurred to them. From a waterfall development shop 5 scrum teams were formed. There was minimal coaching and, for many, there was minimal interest in learning new behaviours. At some point, because of my enthusiasm to find better ways of working, coupled with the amount of reading and learning I did on Agile and scrum, I slid into the “Agile evangelist” role (something I would now avoid with great enthusiasm) and ended up coaching several teams on adopting good behaviours such as collaboration, testing from start to completion (what is often called “shift left” – horrible term – please don’t use it), writing testable stories and breaking stories into small slices, or at least making the stories small and delivering small bits of testable code frequently. The move to agility failed, badly. Lack of trust and a management need to both micromanage and blame overwhelmed everything else. Interestingly, and not all that surprisingly, the teams I worked in loved the (fleeting) ownership and responsibility shift.

Since that gig I’ve worked at other places. My time at Locomote was interesting. It desperately wanted to embrace Agile but in no way did it tick off all the manifesto principles and values. Again, scrum was the chosen framework. A bunch of waterfall behaviours persisted, BUT, and this is the key bit for me, there was a desire for change, a desire to get better at producing high quality software. There was a willingness to reflect on what had been done and what could get better. Transparency, at the team level, was pretty good. At times it was patchy from higher up. At least here there was a feeling that we worked as teams wanting to be more agile. The cool thing for me was that with good Scrum Masters I was able to contribute ideas to improving the team and also focus strongly on improving testing specifically.

It was actually during this time that I really started to question Agile and how people spoke about it. Too much talk about “you don’t do this, so you’re not Agile”, too much focus on exclusion, as if Agility is an exclusive club with strict entry criteria. I call these people “Agilistas” (a term somebody else coined, I can’t remember who). I got tired of those discussions, and of discussions that simply focused on reinforcing some level of “purity” above a culture of consistently getting better as a group or company in sustainable ways. In short, I’m over being told that unless the company I work for does a bunch of practices, practices that might not even be relevant in context, the company does not qualify as agile.

[Image: the principles behind the Agile Manifesto]

My current company, HealthKit, may not pass the “Agile purity test” that “Agilistas” demand. From my perspective HealthKit is the most agile place I have worked. It is a group of people who prize and practice:

  • transparency
  • reflection
  • adaptation

We collaborate a lot, we swap ideas and challenge each other with respect and in an environment of safety. We adjust our planning as our views change, based on customer feedback, discoveries we have made and things we have learned. When someone needs help it is willingly and rapidly made available. We actually pair a lot, something that only solidified in my mind this week. The best thing is that this is what you might call “natural pairing”, it’s not really pre-determined, it’s more on a “needs” basis. You know, “I need some assistance”, “here, let’s explore what this does together”, and “hey, you know this area better than me, let’s work together, knowledge share and see what we can find”. I’ve had plenty of sessions where I ask for some assistance and then the person stays a little while longer to contribute testing ideas and observations. I work with 2 other testers and we are always involving each other, knowledge seeking, sharing, helping, working through some testing side by side. At Locomote we tried to schedule pairing between testers and it didn’t work. Pairing requires desire and willingness, not a schedule.

As far as I know the developers at HealthKit don’t do much, if any, development using TDD (if they do it must be discussed in secret developer meetings). We don’t have a huge amount of automation, it is not currently a “way of life” (although I believe automation is going to receive some specific attention – even better, the focus is on automating the right things, not everything). We don’t use Jira, we don’t write user stories using INVEST, or anything remotely similar, and we don’t have enormous planning or estimation meetings. We meet once a day for a whole team stand up session to discuss what we have done, what we are doing, raise any concerns/blockers and share things learned. To be honest, the sharp reduction in formal meetings, compared to past workplaces, is really appreciated. If it helps any “Agilistas” reading this, we release frequently, 2 or 3 times a week.

I’m at a company where the people are truly passionate about building excellent software and keeping our customers happy. We respect each other, we talk honestly and respectfully with an eye on sustainable, continuous improvement. Transparency is promoted by all and we are a genuinely happy and fun workplace. Best of all, the culture is neither top down nor bottom up, it is both, because the culture is shared by all. Our CEOs sit in the office with everybody else (and no, I don’t mean nominally in the same space but shut away in offices with the doors closed), so if the culture wasn’t jointly owned it would be glaringly obvious.

So I reckon there are a bunch of people who will tell me HealthKit doesn’t meet their definition of Agile. That’s fine, because I’m done having those discussions; they are pretty much irrelevant. My thoughts and understanding have been shaped by my experiences, observations and shared discussions. This doesn’t make me the oracle of “right” but it gives me more than “I read it in a book” or “this is what course ABC told me”. I’m in a company with an extraordinary culture, committed professionals who work together to create the best software we can for our customers, always on the lookout for ways we can improve. If what I’m currently part of isn’t Agile, according to the “law of the Agilista”, I really don’t care.

Regards

Paul

PBR/Grooming

Note: This is another blog that has been sitting in my drafts folder for well over 12 months. I honestly don’t know why, maybe I just forgot it was there. I’m publishing this in the “as I found it” state with the exception of a couple of grammatical changes. I can still remember the people and interactions that prompted me to write this blog. I hope you find something useful in my writing.

It is a source of wonder to me that humans can attach a whole variety of meanings to words or concepts. In some ways it is a beautiful attribute of being human. At times it is quite a journey when you swap questions and answers and then realise that what you thought was a common reference, isn’t. I like these moments, occasions when you know that potential misunderstandings have been avoided through establishing deep, rather than shallow, understanding. I don’t have statistics to back me on this but I’d wager that the majority of our software disappointments fall into the shallow understanding “bucket”. Conversations that we thought were precise and clear but where we were actually talking past one another. I’ve heard plenty of this, and I’m sure I’m not alone. Occasionally I get into trouble for focusing on words (people can get a tad impatient with me). People who work with me for a while get to understand why (I’m always happy to explain). Often I’m not the only one querying the word, phrase or statement, I just happen to be the one who will initiate chasing clarity sooner rather than later. Experience is a great teacher.

The common reference I have in mind for this blog is Product Backlog refinement (PBR) or Grooming.


Product Backlog refinement is the act of adding detail, estimates, and order to items in the Product Backlog. This is an ongoing process in which the Product Owner and the Development Team collaborate on the details of Product Backlog items

http://www.scrumguides.org/docs/scrumguide/v2017/2017-Scrum-Guide-US.pdf


I’m somewhat surprised by the number of chats I have around PBR that, when it comes to the role of the tester and the value of PBR sessions, include the notion that testers should walk out of these sessions knowing exactly what they are going to test, and that any gaps or problems should be identified in this session. I struggle with this idea for a number of reasons.

  • It doesn’t align with the PBR description in the Scrum guide
  • It doesn’t align with any PBR, or grooming, description I have read
  • It doesn’t align with the idea that we learn as we go
  • It doesn’t align with the idea that user stories are a starting point, a placeholder for conversations
  • It places responsibility on dedicated testers to find “gaps” and assigns a quasi “Quality Police” tag to them in what is a team responsibility
  • It is about knowing everything upfront. That’s a waterfall mindset and antithetical to an agile-based approach.
  • It’s an unrealistic and unfair expectation

Personally, I sometimes go into PBR sessions and encounter an area about which I have little knowledge. I contribute where I can; often that’s looking for ambiguities, clarifying terms or challenging assumptions (you don’t need deep understanding of an area to pick up on assumptions). I’ll also use the session as a heads up for things I need to learn more about and investigations to be had (although I prefer to think of it as playtime, and it is often a good way of finding existing bugs).


Some good questions to ask in this discussion:

  • How will you test it?
  • Why is it important?
  • Who will use it?
  • How will we know when it’s done?
  • What assumptions are we making?
  • What can we leave out?
  • How could we break this into smaller pieces?

https://www.growingagile.co.za/2016/06/help-my-first-grooming-session/


I borrowed the above from a Growing Agile article on grooming. I think they are excellent questions to ask in a grooming session. One thing I have found across the teams I have worked with is that testing can be a “forgotten cousin” when it comes to getting stories ready for actual development. It’s not that the other people in the team can’t contribute in the testing space, or don’t want to, it’s simply not a habit to do so. It’s a habit I like to cultivate in teams, and it’s quite interesting how quickly team members jump on board. In my previous blog I mentioned Mike Cohn’s conditions of satisfaction. I think they fit very nicely as a tactic within good PBR discussions.
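To make that a little more concrete, here is a sketch of what conditions of satisfaction might look like when they are captured during refinement. The story and the criteria below are entirely made up for illustration, they are not from any real backlog:

Story: As a practice administrator, I want to export the day’s appointments to a CSV file so I can share them with reception.

Conditions of satisfaction:

  • The export contains only the selected day’s appointments
  • Cancelled appointments are excluded (an assumption to confirm with the Product Owner)
  • An empty day produces an empty file, not an error
  • The file opens cleanly in common spreadsheet tools
  • Only administrators can trigger the export

Nothing in that list claims complete test coverage, and it shouldn’t. What it does is make testing visible in the refinement conversation and give the whole team something concrete to challenge and add to.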

My hope is that if you are reading this, and you are a dedicated tester within a scrum team, you are not identifying with the demand to be “completely across” a story during PBR. If you do identify with it, then it would be a good retrospective item to raise. It would be good for the team to understand the load this expectation places on you. It would be even better for the team to acknowledge that the attitude is counterproductive and to set external communications (i.e. to stakeholders outside the immediate team) accordingly. If you really want to kill off the “know it all in grooming” expectation, work with your team so that every grooming session has everyone thinking about and contributing testing thoughts. Actively discuss testing and capture thoughts and ideas. Show that testing is being considered, and considered deeply. It doesn’t show that you have “covered it all” (and nor should it) but it does show thought and commitment to each story. The reality is, you can defend that approach (if required) and, as a team, reduce unrealistic expectations. As a team you’ll also be far more aware of stories when testing is an embedded consideration in PBR meetings.

As my final thought for this blog: in my opinion, and experience, there is a sure-fire sign that the team is taking joint ownership of testing. When you are sitting in a grooming session and others start asking questions about testing or testability before you, the dedicated tester, do, you are on the right track. Punch the air (even if it is only in your mind) and congratulate the team for their help. A better journey has started.

Cheers

Paul
