Value through Simplicity

Testing is a complex activity that sits at the intersection of technology, psychology, analysis and people management (amongst other skills – this in itself could be a blog). Before you get confused, people management, from my testing perspective, is working with people and influencing, not being a gatekeeper to release or some similar activity. If we accept that creating software is a complex task, we might benefit, as testers, from trying to find ways to reduce complexity. Sometimes I think complexity is promoted as a “badge of honour”. Personally, I prefer to make things as simple as possible.

I feel that if I am talking about concepts with simplicity, rather than complexity, as my guide, I’m probably going to get my message across with greater effectiveness. I also think it enables us to take action far more quickly, which shortens the space between an idea and learning about that idea by actually doing.

There is no “one size fits all” approach, just as there are no “best practices”. I might suggest, though, that if you approach testing as primarily being an exercise in documentation, you just might be missing opportunities to add real value. That’s not to say testing documentation isn’t important, but there is certainly a balance. In the near future a couple of co-written blogs on detailed test cases (and why they are a massive fallacy) will be zooming out into cyberspace. That’s for later.

Two weeks ago I sat with my two test specialist colleagues and a developer. We chatted about some upgrade changes (third-party software) that we had to introduce. We wanted to avoid falling back on automation and just waiting for “green ticks”. Why? Because we knew that we didn’t have as much coverage as we would like in a particular area the upgrade would impact. We have some good coverage through the API layer but we wanted to spend time testing that our customers would not be affected through the GUI (there are reasons why our automation is somewhat lower here, but that’s not a discussion for this blog). Beyond that, though, we knew there was value in having some humans diving in and hitting the software in ways we knew the automation didn’t and couldn’t. I think it is pretty cool that we know our automation well enough to make calls like this.

So after this chat we developed a test strategy so we could co-ordinate our testing effort. We talked through the strategy for about 10, maybe 15, minutes. We spoke about which bits we each wanted to start with, areas where we might lack some knowledge, how to compensate for any knowledge shortfall, and what we would do if we found a bug. We agreed this was more than “hey, I found a bug” and moving on. We wanted to form a quick gathering so we all understood the bug and its nature. This would enable us to quickly adjust our individual testing approaches, if required, or to quickly come up with some additional test ideas for exploration. It’s amazing how quickly some focused discussion enables ideas to flow and, in turn, deepen the testing.

The strategy is shown below. Small and simple. Because there is no clutter, gaps are exposed pretty quickly and the details can be consumed quickly. These are all advantages.

The documentation shows what we want to hit, who is taking care of the initial “slices” and what those “slices” are composed of. Note also that the whiteboard is out in the open, available for anybody to see and provide feedback on (that’s intentional, not accidental). The rest of the strategy? Well, that was discussion and collaboration. I’m a big fan of this approach because it means maximum time hitting the software rather than crafting documents that often serve little purpose beyond “ticking off” a process box. Test cases? Nope, we each had a direction and ideas. I mean, that is a type of test case; it just doesn’t come with explicit steps and expected results defined.

I get that the above might seem strange to some (or many). I also acknowledge that this is the first group of testers I have worked with where, as a group, we embrace the exploratory nature of testing (and I’ve worked with a lot of testers). It’s actually really nice to work with a group of testers where I don’t have to try and influence an exploratory approach. It’s pretty normal for us, on a daily basis, to be involving each other in some way with each other’s testing. The starting point can be anything from “hey, check this out” to looking over a shoulder and suggesting that something looks a bit weird, or simply asking “hey, what is that?” while pointing at something on the screen. This is how we end up pairing a lot. Those “interruptions” become paired exploratory sessions. It’s a fun and productive place to work. I genuinely learn new things every day. I really wish more testers and developers would open themselves up to this type of interaction. The discoveries can be amazing and that’s a real value add to the software.

So perhaps you can’t reduce a strategy to a whiteboard list. Perhaps you are expected to write detailed test cases, or you are sitting in a silo waiting for bits of code or documentation to “waterfall to you”. I’ve been there and you cannot move away from that in a hurry. It’s embedded and beyond your direct control (I really should blog on things I did to shortcut my way through pointless process, or cheats to look like I was complying). What you can do, though, is pick one thing, just one thing, that you can do something about. Pick something low risk but something that will help reduce complexity for you. Many years ago the first move in that direction for me was to start early conversations with the business analysts and developers that were working on projects coming my way (I was the only tester in the team doing this). This was my first step toward really learning about testers influencing quality. Over a period of time it seeped into the test team practices (because behaviours like this do get noticed; the worst thing that could have happened was being told to stop and stay in my own silo – like I said, low risk, small steps). See if you can find something that helps you start the journey.

On reflection…

Thursday was a busy day for me at work. Busy in good ways. I had picked up a small task for testing and in the process, along with a developer, spent the majority of the day finding issues. The best bit, at least in my view, was that my developer colleague was actively finding bugs as well. Both of us essentially spent the day asking “what if”, exploring different perspectives and posing questions.

My colleague is situated in Canberra, I’m in Melbourne (that’s a separation of about 700 kilometres by land and 460 kilometres by air). We communicated the simple stuff via Slack, either in short sentences or a short sentence accompanied by a screenshot for clarification. For a more complex issue we screen shared so I could walk through the problem. I felt that unless I demonstrated what I had found I probably couldn’t describe the scenario well in a ticket. We asked each other a lot of questions, which helped focus further exploration (and the screen share session surfaced another problem because of our discussion about what was happening and what we were observing).

Even though we were not in the same office it felt like we were. We were able to find and communicate issues with a level of clarity and understanding that worked for us. Because we were talking about what we found, the bug documentation was to the point. Because we were working “in the moment” and had a steady stream of communication, the bug fixes were pretty rapid. I cannot really explain how much I love having bugs fixed so quickly after finding them. The context is fresh in my head, and often in the interim between find and fix I have thought of other relevant tests I could run. In this type of scenario I don’t need to re-read and revisit the scenario/s; I think that can be a massive advantage.

By the time we had finished working our way through our explorations we had discovered, and fixed, around a dozen issues of various impact and importance (none of them introduced by the change that started my testing). By the time we had finished I was mentally tired. Normally I will work on something for a little while then pull back for a minute or two, reflect on what I have seen, perhaps take a walk to the kitchen and grab a tea or coffee or have a quick chat about something with a colleague, and then dive back in. This is something of a “rinse and repeat” habit that works well for me. I was so enjoying what was going on, the discussion, the exploration and discovery that I just didn’t really pull myself out of the adventure the way I normally would. I’m (kinda) OK with doing this occasionally but not as a “lifestyle”.

Before calling it a day I had a quick chat with my colleague to thank him for the effort he had put in, his willingness to maintain a two-way line of communication, and the fact that he wasn’t just fixing but also finding issues. Both of us agreed it had been a good day. We both felt we had left our software in a better state than it was when we started that morning.

I have an easy 10 minute stroll from work to the train station, then around a 30 minute ride on the train. This is reflection time for me, especially the walk to the station. I made four notes in my phone notepad from this reflection time. They are reproduced below, as written into my phone:

  • Discussing a problem face to face is powerful and effective
  • Keep communication as simple and concise as you can without destroying the message
  • Pairing on a problem leads to discoveries you might not have made working alone
  • Sometimes people forget that things they know are not common knowledge. Pairing can help surface some of this knowledge and create excellent learning opportunities (for you, and for others when shared).

None of those points are “must do” or “best practice” but for me they are “good practices in context”. I guess there are many more I could list but these are the four that really stood out when I was reflecting on the day. Another day, another adventure, and my reflections would be different (to some degree). I don’t see any of those points as being “breakthrough” learning, more a reinforcement of things I have learned previously. I think that reflection for the purpose of continued acceptance or rejection of practices is healthy and an important input into continuous improvement. It’s certainly a habit that I feel has been beneficial for me.

The impossible and the ironic

Recently I spotted a job advertisement on LinkedIn and decided to tweet it. My commentary on the tweet: “When are we going to stop seeing this in tester role ads? It’s way past time this was history. Good luck with ensuring error free.” The job ad I tweeted is shown below with any references to the company removed.

So, what’s the problem with the advertisement? Here are several things that trouble me.

Testers do not ensure, and neither do they assure. Why? If you’re unfamiliar with the terms it’s worth having a look in the dictionary and the thesaurus, or just look below.

When you ensure or assure you are providing a guarantee that something will be true; in this case, a future state over which you have both limited control and limited knowledge. You are making a commitment without the tools to deliver on that commitment. That seems like a very risky thing to do. Whilst looking at various dictionaries I found what I think is an interesting explanation of assure (a synonym of ensure).

The notion of telling a stakeholder something “so they do not worry” is right up there with the notion of “giving stakeholders confidence”. Both are rooted in emotional reactions to data, reactions that are in the control of the person receiving the data. One requirement of testers is to present evidence-based reports that provide insight into product risk. Testers should not be thinking about “confidence” or “worry”; they should be focused on empirically backed information that just might shatter stakeholder illusions. I do wonder if that ad really meant “the tester is required to say ‘everything is OK, Boss’”.

Error free – I have no idea how you do this. We can go philosophical: “absence of evidence is not evidence of absence”. Black swans did not exist until explorers travelled to Australia and found them in Western Australia. It was, until then, considered impossible for a swan to be any colour other than white. So how does someone provide a guarantee that a system, with multiple layers of complexity, is error free? You can only know what you know (or as Daniel Kahneman states, “What you see is all there is”); there will always be blind spots (and we tend to be unaware of those simply by their nature). I mean, you could be honest and state “we can no longer find any bugs that we believe will adversely impact our clients’ businesses”. But… that’s not a statement of error free. That’s not a guarantee that a client won’t do something with the software you never thought of doing while testing, or that the system is incapable of behaving badly in production. The ad is asking somebody to commit to the impossible.

Over 150 people had applied as at the time of taking the ad snapshot. If people want to apply for roles, that is their choice and their right. However, it raises questions in my mind. Perhaps what the job advertisement says is now considered irrelevant by job seekers. If that’s the case we should ask whether that is a problem. Moreover, the job ad is asking me to do things that I cannot, with honesty, state that I can deliver. How do I work with that? If I landed the role, does it play out with the expectations of the advertisement? If it does, then I (and anybody else that might get the role) won’t make it past probation. If the job advertisement is inaccurate, then why? There is plenty of time to get the wording right and focus on the people you want to apply. If you can’t do that, there might be issues working with the people that you hire. At a minimum it shows that there is a remarkable misunderstanding of what testers do and can bring to an employer. I suppose you could idealise that you might be able to change the thinking from the job ad to one that is based on reality while you are there. Perhaps those that signed off on the ad were happily paying for copy that didn’t reflect their beliefs. Remember to keep those rose-coloured glasses on all day; you’ll probably need them.

The thing that lingers in my mind is that I see, on Twitter and LinkedIn, many testers selling the idea that testers are gatekeepers and are responsible for quality. That testing is “assurance” of quality and that testing is about zero defects. Perhaps it’s the idea (illusion) of having control or power that is attractive. Perhaps it is an angle that is intended to make a persuasive argument for the retention of testers. Perhaps it is the unfortunate and popular conflation of testing and quality assurance. In my view this is a destructive and dishonest way to talk about testing. It also helps reinforce out-of-date views and poor practices in non-testers that read the commentary. I don’t see it as a big stretch to suggest that it helps contribute to tester job ads that are remarkably inaccurate. As a community we need to do better in this respect. We need to talk about how testing helps bring better software through collaboration, even if your current role has you sitting “at the end of the queue” waiting for code to be thrown over the wall. Rather than selling that you are a gatekeeper, think about and promote the idea that you can collaborate and influence and that the team owns quality. If you want “power” (whatever that might mean in your mind) you’re far more likely to find it when collaborating and influencing. If you want job ads that speak to what testers can and should bring, then join in and help educate others that do not test. Choose your words carefully and wisely, and talk in ways that are both compelling and honest.

The irony – so, in an advertisement that demands “ensured… error free” we get this:

Perhaps it tells us everything we need to know about the offer.

Testing – Not Just Bugs

This post is brought to you through conversations recent and past, the more recent ones tending towards LinkedIn discussions and reading the thoughts of others. Something that stands out to me is that many testers struggle to explain what they do when they test, why they do those things and even why they test. I see many testers justify their existence with “I find bugs”. Well, that’s cool (to a degree) because testers need to find (important) bugs, but when you focus only on that you’re underselling the craft and omitting many things that good testing brings to a company.

I remember, many years ago, being involved in an interview for a tester. Through the interview I counted 15 instances (I know I missed a few early ones) of the candidate saying “I catch bugs”. The candidate’s entire expression of testing was finding bugs; more nuanced discussion was largely beyond them. It’s a little sad when you realise this person had five years of testing experience. This interview might just have been a catalyst for me to make sure I could always talk about testing with some depth and clarity. It certainly encouraged me to reflect on and learn from my experiences and those of others.

To me, testing is a very broad church. To do it well you require multiple skills that are not limited to the domain in which you work or a couple of tools. Hell, I use people and communication skills when testing that I learnt when I was a primary school teacher. I bring into my testing things I experience as a cricket umpire (and I also wear my tester’s hat while umpiring).

As a tester you are working not only with hardware and software but also with people. There are people in the same company as you and people external to the company. Many of these external people are clients; others are providers of software that interfaces with what we build. To successfully understand the context in which I am testing I need to have sufficient knowledge of who the key people are, when I need to communicate with them, and the most effective ways of providing them the information they need to make the decisions they need to make. In this space I’m managing technology, information, people and relationships and helping guide the production of quality software as part of a team effort. I might also point out that I’ve done all this without necessarily having touched the software to be released (possibly not a line of code has been written yet). That doesn’t mean I’m not testing, and it certainly doesn’t mean I’m not influencing what will be released or how we might go about building the code, documentation or release plans.

While I’m engaged in the above I’m constructing models of what will be built, trying to reduce complexity and increase my understanding of how best to examine the software. I’m talking to people about my models (which have generally been physically represented on paper, a whiteboard or a mind map) and seeing if they have the same models. It’s not unusual to find they don’t, and this is a great test result. It shows that there are different understandings and/or assumptions at play. Time to discuss, examine and course correct as required.

I like to talk to developers about the earliest point at which they can give me access to code changes and what they can deliver to me. Give me something small and I can test it and give you feedback. How small? No single answer here except for “it depends”. I once had some code dropped to me with “it’ll take you 10 minutes to test – at most, it doesn’t do a lot”. It did enough to show that there was a fundamental flaw in our approach. Most importantly, we discovered this before fixing it became a major strip down and overhaul.

Before I start testing I think about ways the software I’m about to test might be used, misused (intentionally and unintentionally), and how I might make it “trip over itself”. How might I make the software do something that is “embarrassing”? How might I make the software hang, crash, respond poorly or simply behave like a three-year-old throwing a tantrum? Then it’s all “downhill”, right? Not really, because it’s just getting started. The fun really starts now.

Now I have my hands on the software (or some part of it). I have my preliminary test ideas, and I’ll also include checking that any representations we have made about functionality can be demonstrated, an activity that helps me build out more test ideas as well as adjust or eliminate some of those I started with. To move to a slight tangent, this is just one reason why I personally dislike test scripts and the mindset of “that test passed”. What about everything else going on around that single test? What is getting missed in the race for the “green tick”? At the risk of overemphasising the point, observation and concentration are key. I want to spot the unusual and unexpected.

I have yet to write that “I find bugs”. When I test I expect to find bugs, but that is because I think deeply about how I might effectively test the software. In what ways can I test and find vulnerabilities? I’m not actually thinking “let’s find bugs”. This might seem like a subtle difference (or perhaps not). This approach is explained by John Kay in his book Obliquity. If I say I’m going to find bugs, and make that my sole focus, I’m taking a very direct approach. I prefer an oblique approach where I focus on doing things that give me the opportunity to discover and learn, and put myself in the best possible position to find important bugs. It also helps me focus on developing a solid understanding of various risks (because I need to convey these to stakeholders). Bugs are not my focus but an outcome of the way I test. This is just one reason why I don’t count bugs. The count, to a large degree, is irrelevant. There are many other dimensions of bugs that interest me far more than my (or another tester’s) personal tally, and many other dimensions by which I judge my performance than the number of bugs found.

If people wish to express their value in terms of “bug count”, that’s a personal decision. In my sphere of influence (whatever that might be – and assuming I have one) I feel it is important to broaden the conversation. When I talk about testing I want to amplify the skills required and the depth of thinking and analysis needed to do it well: the broad experience and sheer hard work that feed into decision making, the people skills, the reporting skills, collaborating, influencing and helping people and products improve. That list is just a starting point, far from exhaustive. In that conversation I want people to appreciate that bugs are found because good testers make it inevitable they will be found, rather than that testers merely find bugs. There’s a huge difference in value statement between the two.

I would much rather have a conversation about why I find bugs that others might not than about bug numbers. I would like to be recognised for more than just finding bugs because, frankly, I (and many other testers) bring way more than that to the table (if you’re reading this and don’t know any of those “other testers”, get in touch with me and I’ll send you some names). I would like to move the focus of testing discussion to the nuances, challenges and difficulties of good testing and how testers can help with quality (and that testers don’t own quality or assure it). I really want to see more testers thinking about, and communicating with clarity, what it is that makes testing valuable. Elevating the status of testing by sending clear and accurate messages – well, it’s in our hands.



Agile Elitism – Undoing the Good

I’ve been in IT for a bit over 20 years. In that time I’ve been a business analyst, a support desk lead and, for the most part, a tester. I’ve designed processes, I’ve redesigned processes, I’ve redesigned myself and my testing beliefs. When I first started testing, as a dedicated tester, I was very much “by the seat of my pants”. I had ideas about how to go about testing and I sort of followed them. I was then sent to an ISTQB Foundation course. I came back from that with a framework and an approach. It just made sense. It gave me a structure that I lacked, or more correctly, it gave me a structure and approach approved by others. I was somewhat evangelical about the whole deal when I returned to work with my shiny new certification.

Funny thing about shiny stuff: unless you keep polishing it (which in my view is often little more than reinforcing your biases) it tarnishes and stops looking the way it used to look. This is exactly what happened to me. What I thought I needed to do and what ISTQB told me were far apart. Writing detailed test cases sucked (and also led me to consider exiting testing). Having to maintain them sucked more. I wanted to engage with the business analysts and developers earlier, I wanted to write less and test more. I wanted to pair with other testers (before that practice had a name). I learnt very quickly that what I thought the software would do based on the specification, and what it actually did when it got to me, were often worlds apart. I also found that when I focused less on the test cases and more on observation of the software, I found really interesting bugs and made interesting discoveries that I could share. That was basically how I found context driven testing and a whole new view of testing. It’s also how I became substantially different to the majority of other testers I worked with for much of my professional testing life.

The reason I gave a brief outline of my background is that I’m finding the same thoughts going through my head around Agile/agile as I did about testing. My first introduction to agility was through being in a waterfall project team that thought holding daily stand-ups made us agile. At that point I lacked the experience I have now and just went along with it. It was slightly useful but mostly slightly annoying. The stand-ups rarely revealed anything we didn’t already know. Mind you, nobody had the foggiest idea about Agile; it was just a directive from management (who also lacked the required awareness).

My next interaction was at the same company. A decision was made to “be Agile” and scrum was the adopted framework. It was doomed to failure because management had no idea of what was required and had no interest in changing their behaviours. That Agile was a mindset, rather than a concrete “thing”, never occurred to them. From a waterfall development shop, five scrum teams were formed. There was minimal coaching and, for many, minimal interest in learning new behaviours. At some point, because of my enthusiasm to find better ways of working, coupled with the amount of reading and learning I did on Agile and scrum, I slid into the “Agile evangelist” role (something I would now avoid with great enthusiasm) and ended up coaching several teams on adopting good behaviours such as collaboration, testing from start to completion (what is often called “shift left” – horrible term – please don’t use it), writing testable stories and breaking stories into small slices, or at least making the stories small and delivering small bits of testable code frequently. The move to agility failed, badly. Lack of trust and a management need to both micromanage and blame overwhelmed everything else. Interestingly, and not all that surprisingly, the teams I worked in loved the (fleeting) ownership and responsibility shift.

Since that gig I’ve worked at other places. My time at Locomote was interesting. It desperately wanted to embrace Agile but in no way did it tick off all the manifesto principles and values. Again, scrum was the chosen framework. A bunch of waterfall behaviours persisted, BUT, and this is the key bit for me, there was a desire for change, a desire to get better at producing high quality software, and a willingness to reflect on what had been done and what could get better. Transparency, at the team level, was pretty good. At times it was patchy from higher up. At least here there was a feeling that we worked as teams wanting to be more agile. The cool thing for me was that with good Scrum Masters I was able to contribute ideas to improving the team and also focus strongly on improving testing specifically.

It was actually during this time that I really started to question Agile and how people spoke about it. Too much talk of “you don’t do this, so you’re not Agile”, too much focus on exclusion, as if Agility is an exclusive club with strict entry criteria. I call these people “Agilistas” (a term somebody else coined, I can’t remember who). I got tired of these discussions, and of discussions that simply focused on reinforcing some level of “purity” above a culture of consistently getting better as a group or company in sustainable ways. In short, I’m over being told that unless the company I work for does a bunch of practices, which might not be relevant in context, the company does not qualify as agile.


My current company, HealthKit, may not pass the “Agile purity test” that “Agilistas” demand. From my perspective HealthKit is the most agile place I have worked. It is a group of people who prize and practice:

  • transparency
  • reflection
  • adaptation

We collaborate a lot, we swap ideas and challenge each other with respect and in an environment of safety. We adjust our planning as our views change, based on customer feedback, discoveries we have made and things we have learned. When someone needs help it is willingly and rapidly made available. We actually pair a lot, something that only solidified in my mind this week. The best thing is that this is what you might call “natural pairing”; it’s not really pre-determined, it’s more on a “needs” basis. You know, “I need some assistance”, “here, let’s explore what this does together”, and “hey, you know this area better than me, let’s work together, knowledge share and see what we can find”. I’ve had plenty of sessions where I ask for some assistance and then the person stays a little while longer to contribute testing ideas and observations. I work with two other testers and we are always involving each other, knowledge seeking, sharing, helping, working through some testing side by side. At Locomote we tried to schedule pairing between testers; it didn’t work. Pairing requires desire and willingness, not a schedule.

As far as I know the developers at HealthKit don’t do much, if any, development using TDD (if they do, it must be discussed in secret developer meetings). We don’t have a huge amount of automation; it is not currently a “way of life” (although automation is going to receive some specific attention, I believe – even better, the focus is on automating the right things, not everything). We don’t use Jira, we don’t write user stories using INVEST or anything remotely similar, and we don’t have enormous planning or estimation meetings. We meet once a day for a whole team stand-up session to discuss what we have done, what we are doing, raise any concerns or blockers and share things learned. To be honest, the rapid reduction in formal meetings, compared to past workplaces, is really appreciated. If it helps any “Agilistas” reading this, we release frequently, two or three times a week.

I’m at a company where the people are truly passionate about building excellent software and keeping our customers happy. We respect each other, we talk honestly and respectfully with an eye on sustainable, continuous improvement. Transparency is promoted by all and we are a genuinely happy and fun workplace. Best of all, the culture is neither top down nor bottom up; it is both, because the culture is shared by all. Our CEOs sit in the office with everybody else (and no, I don’t mean in separate offices with the doors closed, I mean in the same space), so if the culture wasn’t jointly owned it would be glaringly obvious.

So I reckon there are a bunch of people who will tell me HealthKit doesn't meet their definition of Agile. That's fine, because I'm done having those discussions; they are pretty much irrelevant. My thoughts and understanding have been shaped by my experiences, observations and shared discussions. This doesn't make me the oracle of "right" but it gives me more than "I read it in a book" or "this is what course ABC told me". I'm in a company with an extraordinary culture and committed professionals who work together to create the best software we can for our customers, always on the lookout for ways we can improve. If what I'm currently part of isn't Agile, according to the "law of the Agilista", I really don't care.




Note: This is another blog that has been sitting in my drafts folder for well over 12 months. I honestly don’t know why, maybe I just forgot it was there. I’m publishing this in the “as I found it” state with the exception of a couple of grammatical changes. I can still remember the people and interactions that prompted me to write this blog. I hope you find something useful in my writing.

It is a source of wonder to me that humans can attach a whole variety of meanings to words or concepts. In ways it is a beautiful attribute of being human. At times it is quite a journey when you swap questions and answers, then realise what you thought was a common reference isn't. I like these moments, occasions when you know that potential misunderstandings have been avoided through establishing deep, rather than shallow, understanding. I don't have statistics to back me on this but I'd wager that the majority of our software disappointments fall into the shallow understanding "bucket": conversations that we thought were precise and clear but where we were actually talking past one another. I've heard plenty of this; I'm sure I'm not alone. Occasionally I get into trouble for focusing on words (people can get a tad impatient with me). People who work with me for a while get to understand why (I'm always happy to explain). Often I'm not the only one querying the word, phrase or statement; I just happen to be the one who will initiate chasing clarity sooner rather than later. Experience is a great teacher.

The common reference I have in mind for this blog is Product Backlog refinement (PBR) or Grooming.

Product Backlog refinement is the act of adding detail, estimates, and order to items in the Product Backlog. This is an ongoing process in which the Product Owner and the Development Team collaborate on the details of Product Backlog items.

Source: The Scrum Guide, 2017 (2017-Scrum-Guide-US.pdf)

I'm somewhat surprised by how many of the chats I have around PBR, when it comes to the role of the tester and the value of PBR sessions, include the notion that testers should walk out of these sessions knowing exactly what they are going to test; that any gaps or problems should be identified in this session. I struggle with this idea for a number of reasons.

  • It doesn’t align with the PBR description in the Scrum guide
  • It doesn’t align with any PBR, or grooming, description I have read
  • It doesn’t align with the idea that we learn as we go
  • It doesn’t align with the idea that user stories are a starting point, a placeholder for conversations
  • It places responsibility on dedicated testers to find “gaps” and assigns a quasi “Quality Police” tag to them in what is a team responsibility
  • It is about knowing everything upfront. That's a waterfall mindset and antithetical to an agile-based approach.
  • It’s an unrealistic and unfair expectation

Personally, I sometimes go into PBR sessions and encounter an area of which I have little knowledge. I contribute where I can; often it's looking for ambiguities, clarifying terms or challenging assumptions (you don't need deep understanding of an area to pick up on assumptions). I'll also use this as a heads up for things I need to learn more about, investigations to be had (although I prefer to think of it as playtime, and it is often a good way of finding existing bugs).

Some good questions to ask in this discussion:

  • How will you test it?
  • Why is it important?
  • Who will use it?
  • How will we know when it’s done?
  • What assumptions are we making?
  • What can we leave out?
  • How could we break this into smaller pieces?

I borrowed the above from a Growing Agile article on grooming. I think they represent excellent questions in a grooming session. One thing I have found across teams I have worked with is that testing can be a "forgotten cousin" when it comes to getting stories ready for actual development. It's not that the other people in the team can't contribute in the testing space, or don't want to; it's simply not habit to do so. It's a habit I like to cultivate in teams. It's quite interesting how quickly team members jump on board. In my previous blog I mentioned Mike Cohn's conditions of satisfaction. I think they fit very nicely as a tactic within good PBR discussions.

My hope is that if you are reading this, and if you are a dedicated tester within a scrum team, you are not identifying with the demand to be "completely across" a story during PBR. If you do identify, then it would be a good retrospective item to raise. It would be good for the team to understand the load this expectation places on you. It would be even better for the team to acknowledge that the attitude is counterproductive and adjust external communications (i.e. to stakeholders outside the immediate team) accordingly. If you really want to kill off the "know it all in grooming" expectation, work with your team so that every grooming session has everyone thinking about, and contributing to, testing thoughts. Actively discuss testing and capture thoughts and ideas. Show that testing is being considered, and considered deeply. It doesn't show that you have "covered it all" (and nor should it) but it does show thought and commitment to each story. The reality is, you can defend that approach (if required) and, as a team, reduce unrealistic expectations. As a team you'll also be far more aware of stories when testing is an embedded consideration in the PBR meetings.

As my final thought for this blog: in my opinion, and experience, there is a sure-fire sign that the team is achieving joint ownership of testing. When you are sitting in a grooming session and others start asking questions about testing or testability before you, the dedicated tester, do, you are on the right track. Punch the air (even if it is only in your mind) and congratulate the team for their help. A better journey has started.



The question not asked


This is a blog I wrote in late 2015 and just discovered sitting in my drafts. As I read this blog I can still remember the project, the questions and the problematic discovery. I can also remember that this was classified as a simple change and "waterfalled" to me. I thought the blog worth sharing, along with a heuristic of mine: if people start talking about how "easy" a project is, be alert, there will be dragons. That false sense of security and acceptance is often a lack of vigilance and critical thinking. Be aware, be alert, be an active questioner.

"An implicit part of your preparation and your mission is to recognize, analyze, exploit, and manage your emotional states and reactions."
Michael Bolton, Developsense

I’m sitting in a meeting room, key people from the Development team are with me and a Technical Engineer. The reason for the gathering – I found a bug while testing and the bug has pointed out the need for a fundamental change to deliver the desired functionality to our client. This problem has been exposed because we “messed with the Laws of Nature”. Maybe that’s overly dramatic but we have made a change and failed to fully appreciate the nuances. The way this issue has surfaced, and some subsequent discussions, has had me reflecting. This blog is about some of those reflections and resultant observations.

"When you find yourself mildly concerned about something, someone else could be very concerned about it."
Michael Bolton, Developsense

It’s more than a feeling (more than a feeling)
When I hear that old song they used to play (more than a feeling)
And I begin dreaming (more than a feeling)

Boston – More than a feeling

I'm talking about emotion because, right from my initial involvement in this project, my gut feeling was that something was not quite right. It felt like the solution had oversimplified, and under-considered, the changes. I don't doubt that tight timelines impacted what was done compared to what could have been done on the design analysis side. We were changing an area of complexity where no one fully understands the "inner workings" or complexities in a holistic way (it's a big legacy system). The client's desire focused on improving performance but, in doing so, we had created a hybrid being. It had features of 2 types of existing entities but was neither. Those feelings had me asking lots of questions and challenging ideas, but I failed to ask the right question.

“If they can get you asking the wrong questions, they don’t have to worry about answers.”
Thomas Pynchon, Gravity’s Rainbow

The change involved cash accounts, think bank account if you wish. To meet client desires we needed to create an account that effectively did not accrue interest but had some interest accrual features. We underestimated the complexity of this task and as a result our models and oracles were poor. An assumption had been made about behaviour, the assumption was wrong and I was unable to frame questions to expose the assumption. I’m realistic enough to also realise a bunch of other people failed to expose the assumption as well. Up until yesterday the testing results actually aligned with our expectations, our models seemed OK.

“The scientist is not a person who gives the right answers, he’s one who asks the right questions.”
Claude Lévi-Strauss

A number of times my testing raised what I felt were consistency issues. These were explained away. This is not to say I was dismissed, or the questions not listened to. The explanations were reasonable but generally left me with the idea that, at minimum, we might have been missing some opportunities to deliver a better outcome in terms of consistency. Part of the confusion resided in a non-accrual account type now supporting interest entries. This meant that when I tried to talk about process within the interest function it was never the "whole function" as we had known it. It becomes easier to justify behaviour when you think purely in terms of the previous functions and don't really think about how that has now been twisted. Our abstractions were leaky but we didn't see it. Perhaps, because the results and our oracle seemed to align, I became a little complacent. Maybe, just maybe, I questioned the answers to my questions a little less than I should have.

“Most misunderstandings in the world could be avoided if people would simply take the time to ask, “What else could this mean?”
Shannon L. Alder

I was testing the penultimate functional area. We hadn't made any direct changes in this function but the changes we had made would flow through here, so I needed to test and see that it looked OK. This function is an interesting one, not one I've spent a lot of time in recently, and it had the potential to be challenging. The first thing I decided to do was check the module parameters. There were 4 parameters related to cash transactions. Two of these related directly to elements of the enhancement changes. This was cool because it gave me a chance to see my test data, and outputs, in a new way. I would be able to componentise aspects of the data. I could quite possibly find issues in here that would not be easily exposed via other functions I had tested.

I ran a test against data I had used in other functions. I had modelled the data and had specific outcomes I expected to see. I got, literally, zeros. I stopped, did a double take, checked some parameters and valuation dates; did I use the right accounts? Everything checked out so I executed the function again. Same result. This was really out of the blue; everything so far had checked out (it wasn't bug free but it wasn't "on fire" either). I set up configurations of the two primary parameters that interested me and ran them on the valuation date I was interested in, plus and minus one day. There were anomalies I just could not explain; the results on my primary valuation point were just bizarre. I sent my spreadsheet and some other details to the Developers and Business Analyst: "Hey guys, I can't explain this. Can you have a look so we can work out what's going on."

They did, and I’m told, within 20 minutes of starting the review of my data, outcomes and the code they realised our solution was fundamentally, and fatally, flawed. It could not deliver what the client desired. I was somewhat happy this had come to light before release. While I couldn’t specifically target the right question when I wanted, my gut feeling had been right. The emotional discordance had a basis. My testing approach had also enabled me to eventually find where the software was broken. I had learnt along the way and applied that learning as I went.

Since realising that we had a considerable problem we've had a few discussions, mostly around recovering the situation. There are lessons for our Development floor, things that we could have, and should have, done. The potential to find this before we wrote a single line of code was missed. The opportunity to discuss with our clients how they would use this new functionality, and the report values they would desire, was not taken up. If it had been, we would have had examples that would have allowed us to determine, before writing a line of code, that this project was not simple and, possibly, not even desirable.

“The Wilderness holds answers to more questions than we have yet learned to ask.”
Nancy Wynne Newhall

For me the lessons are simpler, they revolve around questions:

  • On what basis are you making your assumptions?
  • How do you know your assertions about outcomes are correct?

I asked these questions but not in an effective way; I should have questioned my questions, rephrased them into better questions and asked those.

Then there are answers. When the answer doesn’t completely resolve your disquiet, when your gut feel is that there might be something missing, something important, keep pursuing that hunch.

A Tester Tested

It's late November 2018 and the news is delivered that Locomote will no longer be developed in Australia and that I, along with a bunch of colleagues, am about to be without employment. I wrote about that moment in time here and here, and also about some of the things I decided to dive into.

It would be fair to say that not working in December was bearable. The break was nice, the redundancy payout was sufficient to keep the wolf from the door for a while and there was Christmas and New Year as a distraction. We also acquired a Cavalier King Charles Spaniel puppy who provided a lot of fun and distraction. I'd also been invited for an interview and made it to the second round. I was disappointed not to go further but I was encouraged by feedback from that process. I knew January was going to be a slow, really bare month opportunity-wise, but by late that month I'd had my fill of not working with people. I was missing the satisfaction of solving important problems, finding new information and working with others to produce excellent software that made people happy.

I had maintained a focus on extending my skills; I figured it was a worthwhile way to spend time. I had developed a reasonable basic-level understanding of Java. I can write basic code, debug it and even managed to think of ways to refactor what I had written (whether this made it superior to the original is questionable but it enabled me to practice). I stopped at a point where I felt I knew enough basics to engage in some other work. I explored a bunch of Postman features through a Pluralsight course. I know far more about Postman than I did. Using snippets and writing tests in Postman introduced me to JavaScript. While I can't write JavaScript from scratch I can now read it, to a reasonable degree, and understand what it is trying to do. Within the context of Postman that gives me some options to "borrow" and amend. I even went and spent some time refreshing my SQL knowledge using MySQL and a Pluralsight course. I dove back into Alan Richardson's book "Java For Testers". The book made more sense to me now as I had some context that I was missing before the Pluralsight course. I even started a course on Java and writing tests using Selenium. I also spent some time playing with the Selenium IDE and had a bit of a laugh; it is basically the modern-day version of an in-house automation tool I used over a decade back. The problem is that, despite all the "busyness", it was me, working alone, no deadlines, no one to deliver my work to, none of the usual external motivations. I also had a firm gaze on the job advertisements, and the constant appearance of must be able to "write a framework and automation code" was starting to really grate. It had become pretty clear that doing either, well and with thought, was not an easy job. It seemed to me this skill set was more than a couple of courses and a book. I had a 30-year habit of working and wondered if I was actually relevant to the job market. Things, by that I mean my mood and outlook, got a little dark.

So let's skip forwards to late February, because things changed fast and the last sentence above pretty much describes the period to this point. I spent time contacting a number of job recruiting agents. At some point I reached out to Sun Kang at Opus Recruitment. Sun is located in Sydney but has a Melbourne portfolio. Sun did more than just talk about possible jobs; he took an interest in me as a person and kept in regular contact. Talking to Sun added some positive vibes to my day. Around the same time I contacted Jeremy King at Interface Recruitment. Jeremy is the Melbourne-based version of Sun (or Sun is the Sydney-based version of Jeremy). Jeremy is open, honest and has genuine empathy coupled with wanting to know his clients. Both Jeremy and Sun went, in my view, above and beyond and I genuinely like chatting to both. There are future coffees or beers planned with both. They both helped stoke my positive energy and kept me focused on moving forwards. If you're reading this blog, and job hunting, contact these gentlemen; they are absolute gems.

I also need to mention Katie Peterson, who works with Prima Careers. Prima Careers were engaged, by Travelport, to help people transition to a new role and/or career outside of Locomote. Katie provided me with a lot of great advice, helped me remodel my CV and spoke to me about how to contact people who would help me find a job I wanted. One of her first questions in our first session was "do you want the first job you can get or do you want to find the job you want?" That was a great scene-setting discussion. Beyond that, every time we spoke she lifted my spirits and I left feeling really positive.

There is a saying that "it never rains but it pours", and thus it unfolded. In the space of maybe 10 days, I found myself at 3 final-stage interviews, with a firm offer on the table from one of them, while turning down 2 companies requesting interviews. I'm still having trouble reconciling this, totally new territory for me. I also did something I have never done before: I turned down a job offer from a company that, under other circumstances, I would have accepted on the spot. It's not an entirely bad spot to find yourself.

So here's the thing that interests me, I mean really interests me. In all my job applications, even those that went into the "apply now" black hole, I was entirely honest about my coding abilities. I was completely upfront about my history of involvement in automated regression testing and what that involvement entailed. The companies that interviewed me could deal with that. It interests me that all but one company I spoke to had automated script writing as a role requirement, in fact listed right near the top of skill attributes. My lack of coding ability never became an issue or an impediment in interviews. Feedback from one potential employer, in terms of declaring my limited coding ability as part of completing a preset challenge, was "great to see people recognising and acknowledging their limitations".

I had decided prior to these interviews that rather than dwell on what I didn’t have I would amplify the skills and abilities I did have. Show the companies that I could bring a level of thought and testing approach that would benefit them. I also focused on demonstrating skills that might not be traditionally associated with a tester (such as coaching, mentoring and process improvement to name a few). I wanted to convince my interviewers that I am genuinely happy to share this with others and look for ways to find “better” as a team. What came through in feedback from interviews is that I had a clear passion for testing and quality, I thought deeply about testing and had great critical thinking skills. I was told by more than one person that I “think and talk differently about testing compared to other testers”. As far as I can tell my passion, thinking and ability to clearly articulate how I would approach problems made me shine and stand out.

The good news, the news that really excites me, is that I have accepted an offer to work with HealthKit. This is a new sector for me, as was Locomote. While I didn't like the way things finished up with Locomote, it gave me two years of exposure to the travel sector and demonstrated to me that I could quickly learn what I needed to make a speedy start, then use that knowledge to perform excellent testing while also contributing product ideas and helping to build approaches that supported the development and release of high quality software.

To close this out I really want to shout out to Lee Hawkins, who spent a lot of time helping me through my first spell of unemployment in 30 years. I've told Lee I owe him plenty. I also owe my family a lot for their support. My wife and kids were just brilliant, along with everyone in my extended family. I probably wasn't the easiest person to live with for a while. Also a big wave to Janet Gregory. Janet is a mentor of mine and messages from Janet, and some chats, helped my mindset. James Barker (Test Practice Lead at Culture Amp), who I met in December 2018 (prior to bombing out in the second round of their interview process), was also great support. James is a special person. I see us having a lot of chats about testing and quality as we learn from each other. I haven't mentioned everyone that enquired as to how I was going but rest assured I really do appreciate your friendship and efforts. I will also be forever grateful to those in the testing community who have helped me understand what excellent testing requires: those people who helped me progress and improve my testing through many facets, not the least being the importance of developing an enquiring mind that leverages critical thinking. This helped me become "different" in good and valuable ways.

On the 25th of March I start my new role. I’m eager, I’m excited and, I guarantee you, I will never take a full time job for granted again. I’ll be back working with great people to solve important problems. I’ll be working with people to make software that makes our clients really happy. Most importantly I will be back doing what I want to do, what I love to do.



Lemons and Lemonade

Those who read my previous blog will be aware that the company I was working for decided to cease software development in Melbourne. That decision put me out of a job, unemployed for the first time in a little more than 30 years. I'd prefer to be employed; I miss working with smart and interesting people to solve important problems for clients. I genuinely love solving problems with people, learning from others and also helping colleagues find new ways to think about, or approach, things. Still, the decision to close down was not mine and was out of my control, so I'm trying to just let all that stay in the past and control what I can. In case anybody is wondering, no, it doesn't feel like a big holiday.

To fill in time I've been working on refactoring my CV (with some help from a consultant), applying for jobs and digging into something I've been wanting to do for a while: I've been learning Java. In the past I have learnt some Visual Basic and developed an understanding of some basic Ruby (to the point that I could write code). So why Java and not continue with Ruby? If I had a dollar for every time someone asked "why Java" I wouldn't be learning Java, I'd be sunning it on a tropical beach, fishing rod in one hand and eyeing off a bucket of cold beers.

I decided on Java because it seemed like a good idea. A lot of articles pointed to learning Java as a good idea. Beyond that I wanted to get into writing some automation and the majority of resources I had an interest in used Java. The primary source of interest here was Alan Richardson’s Java For Testers. I loved the idea of a book that would teach just enough to get me started and allow me to write scripts. If I could skip past some stuff I didn’t really need to know, that would be cool. I should also mention that I am a fan of Alan’s work so it felt like a great starting place. So start I did.

I was maybe a third of the way, maybe less, into Java For Testers when it occurred to me that I cared a lot about some of the things that were being skipped over. I wanted to know more about certain relationships (constructors and getters, for example). I felt that if I put the book down and tried to write some basic Java, I'd be stuck. In no way is this a comment on the book, more about my learning style. I believe some of the answers I wanted appear later in the book, but I was a little uncomfortable and decided a change of direction might be useful.
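
To make the kind of relationship I was curious about concrete, here is a minimal sketch in Java. The class name, fields and values are my own invention, not anything from the book; it just shows how a constructor captures values into private fields and getters hand them back out.

```java
// A tiny, invented example of the constructor/getter relationship:
// the constructor is the only place the fields get set, and the
// getters are the only way callers read them back.
public class Account {
    private final String owner;   // set once, in the constructor
    private final int balance;

    public Account(String owner, int balance) {
        this.owner = owner;       // "this." distinguishes field from parameter
        this.balance = balance;
    }

    public String getOwner() {
        return owner;
    }

    public int getBalance() {
        return balance;
    }

    public static void main(String[] args) {
        Account a = new Account("Paul", 100);
        System.out.println(a.getOwner() + ":" + a.getBalance());
    }
}
```

Seeing the constructor parameters and the getter return values as two ends of the same private field was the piece I felt the book had skipped past.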

Late last year I commenced a 12 month subscription to Pluralsight. I decided to do this for several things I wanted to study; Java was not one of those things. I decided to start on the Introduction to Java course by John Sonmez. This course has been really helpful but it takes some effort (at least for me) to learn basic Java. Part 1 of the course contains 6 modules:

  • Introduction to Java
  • Variables and Operators
  • Classes
  • Control Statements
  • Inheritance and Composition
  • Generics

and the content runs just a tick over 4 hours. I can tell you it has taken me longer than that. Why? It's easy to get lulled into a sense that you understand things when you are taking notes and writing code that mimics examples on the screen. So several times I have stopped and said to myself, "right, go write code based on what has been covered to date". This is how I very quickly find out what I have really taken on board and what I haven't. It was this process that finally gave me an understanding of constructors, not just the "why" but also the "how". It also clarified in my mind what an object is within code.
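
The lesson that clicked for me can be sketched in a few lines. The class below is my own made-up example, not from the course: the class is the blueprint, the constructor runs once per `new`, and each resulting object carries its own independent state.

```java
// An invented example of "what an object is": one class (blueprint),
// two objects created via the constructor, each with its own state.
public class Counter {
    private int count;            // per-object state

    public Counter(int start) {   // the constructor: the "how"
        this.count = start;
    }

    public void increment() {
        count++;
    }

    public int value() {
        return count;
    }

    public static void main(String[] args) {
        Counter a = new Counter(0);   // two objects from one blueprint
        Counter b = new Counter(10);
        a.increment();                // changes a's state only; b is untouched
        System.out.println(a.value() + " " + b.value());
    }
}
```

Writing something like this from a blank file, rather than mimicking the screen, is what made the constructor and object ideas stick.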

In the almost 2 months elapsed that I've been engaging with Java (elapsed, because not every day is an "I want to code Java" day; some days I need to focus on different things such as CV refactoring, job applications and meetings) I've had a ball. I've learnt a lot and I can now create working code in Java. I have a much better understanding of how complex coding is (thinking about the stuff I don't yet understand: yikes). I've been exposed to, and know how to use in useful ways, two IDEs: Eclipse and IntelliJ. I've been able to write buggy code (unintentionally) and use my testing skills to hunt down the problems (this bit doesn't come with either book or online courses). Sometimes my code works first time, which always results in me punching the air in triumph. I've learnt to hate curly brackets just a little bit (turns out, that's not just me).

I'll go back to Java For Testers when I finish the Pluralsight Introduction course. Once I complete the book I think I'll dive back into Pluralsight for the Intermediate Java course by John Sonmez. I'd like to expand my Java knowledge a little more. I mean, while I'm having fun, why not? Of course there might be other discoveries along the journey that change those choices in some way.

So how are my travels into Java going to help me? In no particular order, and without exhaustive thought: I want to be able to talk to Developers a little more in their language. I do feel that it might open up some more ideas around testing or prompt me to ask questions I might not have asked. Will it help a lot? Not sure; it hasn't felt like an inhibitor to date. I guess all experiments start with an unanswered question. I want to learn how to write better code, become a competent coder, at least in ways that coding competence is important to me delivering value, and assist with automation. It would also be nice to be able to write a few small tools to help with testing. I guess I'll firm up goals as I go. I do know that at the moment I'm not really (at any level) thinking about a change of career into coding. I do enjoy the learning though, and knowing that I have acquired technical knowledge that I didn't have before, that my skill set continues to expand.

While I could do without the unemployment, the break has allowed me to dive into some new skills and at least feel productive in ways other than being in the office. If I'd gotten a week into this and decided it wasn't for me, I would have been cool with that; I would have learnt something and then gone looking for something else (and that might have been another coding language). I've always enjoyed learning, and that mindset has been something of a blessing during this break. I'm looking forward to more coding practice and learning from my plentiful mistakes. It's also given me some new insights on coding and testing; I'll need another blog for those.

Thanks for dropping by folks and, as always, I'm happy to hear your thoughts via comments.

Regards ……….. Paul


It turns out that being made redundant changes your day, although I don’t plan to make this blog about that aspect. Maybe another blog but not this one. What I have found myself doing is reflecting somewhat more than usual. Reflection and introspection, looking for what went well, what could be done better, at a personal level, is something of a habit. I think it always has been but as I move through life I think I learn to be more realistic about reflection and how I deal with “not quite hitting the mark”. A little bit of disappointment is good, it’s motivating. Too much is depressing and a motivation damper. I’ve learnt to appreciate the learning that comes from misfires and the opportunity to improve.

This blog though is really about a retro I ran around 3 months ago. I was actually going to write about it, about 3 months ago. Stuff happens, the blog didn’t. It was the second retro I ran at Locomote and the first for the team I was working with. Part of writing about this is because I want to highlight that Testers do far more than test, at least the way I view testing. As a tester I want to be influential, I want to help people, I want them to know I am accessible, that I think widely and care a lot about who we are, what we do and how we do it. If you really want a quality product then it starts with quality people who really care about each other and are supported by a company that really cares about them. It aligns nicely, I think, with Obliquity (John Kay). Give people meaningful problems and let them solve those problems with enough support to enable collaboration and creativity.

About that retro

I said that I want to show that testers can apply their skills to multiple disciplines, and that is true. I also want to share this so that anyone who is interested might give it a run, refine it and share the results. Others told me they hadn’t seen this retro format before. Given I’d come up with the concept two days earlier on a 45-minute train trip home, that observation wasn’t all that surprising. In short, I wanted to run a retro where we revisited the past and forecast into the future. I felt it gave us a slightly different approach from the retros we had been running. I like a bit of “different” in retros: different angles of thinking produce different ideas, not only about possible improvements but about ways in which we might experiment with, or implement, those improvements.

My rough sketch of the retro

It’s probably a bit misleading to call the above a “rough sketch”. It looked pretty similar on the big whiteboard, just more colourful and sans the explanatory notes I kept for myself. I had a feeling the concept might be a bit big for a single retro, so, having introduced the idea and walked through each of the “mini themes”, I proposed picking maybe two themes and coming back to the others in a later retro. After a short silence, and a few questions, the team chose to cover the entire map.

So many sticky notes…….

I was genuinely surprised by the number of ideas and observations that came out of this session. There was a little initial questioning around the “categories” on the map, which was quickly dealt with (just write your thoughts and we will look after the rest). Some ideas didn’t neatly fit any of the categories, so somebody decided to place those in the middle of the road (I thought that was super cool). One of the things that really interested me was that some of our experiments that had failed in the past were added to the map as “proceed with caution”, but also popped up in the “slow down” area as things we need to do to get better. I thought it was really cool to see that level of reflection. I also thought it was interesting, and brave, for people to add some practices to the “Stop” part of the map, especially when some of those were still current practice (via outside influences). For the greater part of my employment there was a sense of safety at Locomote, a sense that you could (and should) respectfully challenge the status quo. That atmosphere, in my opinion, turns “challenging” thinking from sarcasm into a genuine focus on real desires and perceived needs within retro sessions.

I was going to go deeper into some of the specific outcomes and map points. I decided not to, for a few reasons: it would make this a long blog; I’ve lost a few of the really specific details in the space between running the session and blogging; and the value for me, if you try my retro idea, is in how you interpret and change it for your context. What I can say (as I look over the sticky notes – I kept them) is that there was a mighty focus on working with people: across teams, across time zones, pairing, sharing, collaborating. People solving problems that were meaningful to our clients. Those items figured prominently on our road to a better future “us”. I wasn’t surprised by that focus; it just reinforced the people focus I knew the team already possessed, and their desire to make it even better.

Sadly, I write this as an ex Locomote Travelport employee (that’s detailed in my previous blog post), so ongoing actions and benefits are no longer anything I will be involved in. Hopefully my “between jobs” status is brief and I get the opportunity to run similar activities at my new employer.

If you have any questions or thoughts on the retro I’m happy to discuss them with you.

Thanks for dropping by.

