The analogy that refuses to die

There are many pleasures in being involved in software development. I’ve been involved as a Business Analyst, Support Desk Lead and a Tester. Working with smart people, working with people who are passionate about doing a good job, meeting with like-minded people who enjoy discussing how things could be better (and actually do things to try and make that “better” happen). Of all my favourite things, though, nothing beats hearing that analogy that equates software development work to manufacturing or building. It just never gets old (you’re picking up on the sarcasm at this point – I hope). As far as I can tell the discussion always seems to pop up in relation to estimation (and/or cost) and is often accompanied by the “oh shit” panic of a project in a bit of trouble. You know the sort of scenario:

Manager: I don’t understand the estimates we give. When I ask for a house to be built I get given a price, and the house gets built and delivered within the agreed timeframe.

You: They’ve built that house before, right? I mean you’re asking for a new house but that house has been built before for others. You’ve seen the house, been in a display model of it. You’re asking for a copy of something that exists.

Manager: I don’t get your point. We’ve built software before. Our software exists.

You: We build software because the software doesn’t exist. Our clients ask us to build something new into the existing system. They have neither seen nor experienced that functionality before in our system, and neither have we. We are not comparing like for like when we discuss software creation and house building. Perhaps you could ask your builder to add a swimming pool to your lounge room when they are halfway through the build.

…and so the discussion goes until the inevitable conclusion where the analogy lives to fight another day.

When we compare thought/knowledge work (software development) to manufacturing or building, the analogy is accepted by many as a flawed comparison – “apples and oranges”. We can’t compare thinking to a machine stamping out widgets. When we do this we are comparing what I’ll call “determined repeatability” to the creation of something new. “Determined repeatability” is possible because we have spent time developing approaches, formulas and processes, and formed them into a chain that produces what we (more specifically, our clients) desire. We can continue to produce these widgets ad infinitum (as long as market forces maintain a demand). But market forces can be fickle, and so can resourcing inputs. What happens if one of these changes and our processing chain loses its “determined repeatability”? What if the widget needs an update with the addition of new attributes?

I wonder, is there a useful analogy if we change the target area of the analogy’s focus? I think when we compare the creation of software to manufacturing a widget we miss that the widget required research and development. Before the widget there wasn’t a widget. Someone had to spend time and money creating it. Now that the widget requires new features, the “cut and stamp” process is disrupted. We have lost “determined repeatability”. In this phase we potentially have a useful analogy with software development. In this phase of manufacturing we see thought, iteration, experimentation, exploration, failure, learning and, finally, a product that can be produced “cookie cutter” fashion.

I like aviation – actually, more than like. It mystifies me even though I have a basic understanding of the forces at work. I can (and sometimes do) spend hours watching the big birds take off and land. To me it is a graceful blend of man and nature working together (leaving aside the engine emissions debate). The Wright brothers’ famous flight was on December 17, 1903. In April 2005, a jet that stretched further than the distance of the Wrights’ first flight took its own first flight. Just over a century on, that is a mighty impressive demonstration of human endeavour. If you’re interested, you can buy an A380-800. The average list price for one of these is a cool USD 432.6 million. (Personally I’m thinking of going for the Boeing 787 Dreamliner – I can get two of those for the price of a single A380.) I guess what I’m getting at here is that I can get a very sophisticated, highly technically complex aircraft for a known price and a known delivery date. As much as this sounds like it underplays the technology, at this point the A380 is produced “cookie cutter” style; it’s a known quantity. When we compare software development to this, it “fails to fly” (pardon the pun).

Now let’s go back in time a bit, to a point where the A380 does not exist. There are potential customers but the aircraft is not a reality. Let’s also remember this is not the first aircraft either; that is true for the Airbus company and for a large and relatively thriving industry. You could be slightly flippant and say “they are just making a variation of what has been done before” (ever heard that before about software development? I have). There is no shortage of experience and knowledge but, and this is important, they are going to a place they have not gone before. This will be the largest commercial aircraft ever. Even if it weren’t distinguishable on that aspect alone, it will be made with new-age materials and technology. Some have been used before, some are new.

So how did it all go? You might know the answer, or have some knowledge of the situation, but in summary: not so well. Let’s start with some headliners.


Originally scheduled for delivery in 2006, the aircraft’s entry into service was delayed by almost 2 years and the project was several billion dollars over budget.


There must have been some fascinating boardroom discussions as the project travelled along. I think it is worth reiterating that this is a company that builds incredible aircraft; they employ knowledgeable, capable people. They have a history of building aircraft. They have a strong reputation. How could this go wrong?


At the heart of the problems were difficulties integrating the complex wiring system needed to operate the aircraft with the metal airframe through which the wiring needed to thread. Some 530 km of wires, cables and wiring harnesses weave their way throughout the airframe. With more than 100,000 wires and 40,300 connectors performing 1,150 separate functions, the Airbus A380 has the most complex electrical system Airbus had ever designed. As the first prototype (registration F-WWOW) was being built in Toulouse, France, engineers began to realize that they had a problem. Wires and their harnesses had been manufactured to specification, but during installation the wires turned out to be too short. Even though the cables were at times just a few centimetres too short, in an aircraft you can’t simply tug a cable to make it fit. As construction of the prototype continued, Airbus management slowly came to the realization that the issue was not an isolated problem and that short wires were a pervasive issue throughout the design.


A single miscalculation. There were reasons behind the miscalculation (a chain of errors) but, nonetheless, the impacts were really something. It also meant “back to the drawing board”: let’s examine and understand the failure, let’s find other ways to meet our objective. Who the hell saw that coming? The answer is nobody. If somebody had seen it coming it would have been prevented before it became a major problem.

I can’t prevent people from wanting to make comparisons between software development and manufacturing or building. That’s beyond my control. What I can have some control over is my response. That response will now be to acknowledge there might be some validity, but to shift the focus to the development phase, where there are similarities, and away from the completed “cookie cutter” production cycle, where there are few, if any, similarities. If you want to make a comparison between production-line output and software development, then maybe we should be discussing the physical delivery of the code once development is complete. That just might be an equivalent analogy to stamping out widgets.

Thanks for dropping by

Paul


A big thank you to Lee Hawkins for his review and feedback

(@therockertester, https://therockertester.wordpress.com/about/)


 

It’s only words

I had a conversation today that many of you have probably had at some time. Not necessarily this exact conversation, but one that is a parallel experience.

I overheard a discussion between a Tester and a new Developer in our company. The conversation included the statement “we work with fixed scope projects”. That statement made my ears open up and my response centre kick into gear, so I challenged my colleague on his statement. His response – “we know what that means here, we use it in a general sense”. I couldn’t leave it there; I was still curious and also wondering if maybe I could make him think a little differently.

I asked “which of those words do we use in a general sense?”. The response: “Well you know, we all know what it means”. “I’m not sure I do. My understanding of fixed is that it is locked down. My understanding of scope is what we intend to develop. To me that means our projects do not change scope, at all, from start to completion. Can you remember one of those?” It turns out my colleague really couldn’t recall a project fitting that profile. So I asked again why we would use language that doesn’t reflect reality. Again I got the response “but we all know what it really means”.

My colleague has a mortgage on his house (like most of us). I asked him: if he were to fix his home loan rate, would he see that as having a specific meaning or a generally understood meaning? Would he be OK if the bank decided to increase his interest rate above the fixed rate because they believed “fixed” was generally understood to mean the rate could move? Suddenly there is a change in the meaning of fixed: “No I wouldn’t be OK with that, it’s fixed”. “So does fixed have a different meaning between the two contexts?” I ask. “Not that I can think of” comes the response.


Then my colleague offers up the following: “So what we really have is current scope”. Mentally I punch the air. “Given that what we start with is not what we end up pushing out, current scope seems a fair description.” The conversation finishes but I can see my colleague tossing it through his mind. I’m happy because I’ve challenged someone to think differently. My colleague might decide, after further thought, that I’m full of crap. He might decide to reconsider other “we all know what it means” statements. I hope he does, but that’s his choice.

When I do stuff like this I find the reactions interesting. There are generally two kinds:

1 – this is cool, let’s discuss

2 – stop being so damned picky, you know what I mean

That second reaction annoys me a touch. If people want to take that path then it’s their choice. I’m all for choice so I’m not railing against that. The annoyance is that “we all know what it means” causes endless problems in specifications. It causes needless error when people just assume a word means something because “it can only have one meaning”. There are ways and means of minimising this type of error, but many seem averse to listening or reading attentively enough to enable questioning of possible ambiguity or misconception. Perhaps others just don’t see this thinking and analysis as part of their job. Some years back I was leading a team of testers who really did not value the importance of clarifying statements and getting a deep rather than shallow understanding. I introduced a challenge – find ambiguities in the daily newspaper. We found some absolute classics, had a lot of fun doing it and reinforced how language can be quite deceptive. Suddenly (well, not suddenly – over time) the group became better at finding areas of “weak understanding” because they were aware of what they might look like (we spoke about things other than ambiguity) and why they might be important.

It might “only be words” but those words carry meaning and they carry a cost. Not just dollars, but reputation and client satisfaction. Shallow understanding is easy (“we all know that”); deep understanding requires work and effort, questioning, critical thinking. Anyone can paddle around in the shallow end. Be different.

Thanks for dropping by.

Paul

OK – well maybe it’s not

I’m lucky enough to work with a company that has always had a high cultural diversity amongst its people. I find it interesting how that diversity of backgrounds can influence and broaden thinking. Sometimes it directly influences through solution approaches, other times it is through storytelling. I was born in Australia and English is my native language. I have tremendous respect for people who are fluent in multiple languages. I’ve tried learning other languages and really struggled. I was fortunate enough to spend a few days in Paris some years back; at that point I realised that I could learn another language if I had the right motivation. The reason I mention this is because many of the stories that get told are about learning English and trying to communicate as new arrivals. I love these stories because they are told with great enthusiasm and humour. There is always something to think about. English is a language I take for granted, and it’s good sometimes to have a reminder that not everybody has that same grasp.

At work today a young lady I work with used the word “literally” several times. Just messing around, I challenged each use of “literally”. Chatting about this reminded me of a personal story. Many, many years ago I was at college studying to be a primary school teacher (primary school covers a child’s first seven years of formal education, starting around age 5). Part of the student experience, indeed a required part of professional development, is to go on teaching rounds. These are exciting and, initially, just a tad nerve-wracking. I guess it is just like any new experience that has real meaning for you.

From memory this was my second teaching round in my first year. This was the first time we were given the opportunity to plan and take classes. It was limited to a handful of 30-minute sessions with feedback from the supervising teacher. At this point the teacher was always in the room with you to lend support if needed. I don’t remember my supervising teacher’s name (let’s call her Mrs Pleasant) but I do remember her; I can still see her face. She was from the era of teachers who were teaching when I was in primary school. She was very supportive, generous with feedback and able to deliver constructive criticism in a very non-confronting way (I discovered on future teaching rounds that this lady possessed a rare skill). The school day closed and I was heading home to finalise my preparation for the next day’s class when Mrs Pleasant said “don’t be alarmed, but the school Principal sits in on student classes and he is going to sit in on yours tomorrow. He is very supportive but he doesn’t like the word OK”. I thanked her for the heads-up, knowing that the troublesome “OK” rarely features in my vocabulary.

The Principal was pretty much from the same era as Mrs Pleasant. An old-school gentleman. I didn’t see him a lot but I did enjoy our chats when we had them. It was only while thinking about this today that I realised how magnificent it is to find someone with that passion. He had been in the education system for more years than I’d been alive but he still wanted to see the “new blood” and provide input to their growth. That’s a rare and valuable passion which, given a second chance, I would have made better use of. Sometimes opportunity slips past you and you just don’t realise it.

Back to the story… Next morning, usual routines. We get to first break and my lesson is straight up after the break. I’m prepared and relaxed. So much so that I meander down to the staff room, grab a cup of tea and have a chat. While I’m there Mrs Pleasant comes up to me and provides a reminder: “just remember to watch your use of OK”. I nod, smile and thank her. I go back to the classroom, we get the children back inside, the Principal arrives, I start my lesson. I’m amazed how relaxed I feel; I know the lesson plan well and what I want to achieve, so I’m sure the preparation gave me confidence.

Now it gets weird. I notice that I have said “OK”. I damn near never use that word, and not in formal settings. I press on; let’s not use that again. Another “OK” pops out – what?? I think I might have caught one further utterance. Finally the lesson ends and I feel pretty good. The word that shall not be spoken popped out a couple of times but that’s alright (I hope). So the debrief starts. In short: pretty good effort, here are some things to be aware of, etc, etc. And then… “do you realise that you said OK 30 times?”. At this point I might have sought a place to hide, meekly mumbled some weird disclaimer or perhaps thought about lodging an application with the Guinness Book of Records. I do remember a massive feeling of disbelief.

So what the hell happened? It’s a lesson in people getting you to focus on what not to do rather than on what you should do. Good coaches know this and use it when working with people. I remember attending several workshops held by Allan Parker (who is an excellent presenter, very entertaining) where he spoke about this phenomenon. If you focus on what not to do there is a strong chance you will do exactly what you don’t want to do. His example was your average weekend golfer. There is a lake on the left-hand side of the fairway. The golfer tees up his golf ball and thinks “don’t go left, don’t go left”. He swings and, during the follow-through, watches his golf ball sail left and make an impressive splash into the water. The pro golfer, in the same situation, tees up his ball, knows there is a lake to the left and then picks a target either centre or right side of the fairway. This golfer is focusing on what target to hit, not what to avoid. There are no guarantees this one won’t mess up his swing and find the water, but he has set himself up for success rather than failure avoidance.

How often, at work, do we “tee up the ball” and then focus on not going left? More than we should, I suspect, and possibly more than we know. If there is a history of management pointing out errors and focusing on them, what is our strategy? It’s hard to focus on a positive target when the message constantly running past you is about what you shouldn’t do. We can easily, and unknowingly, make avoiding error our primary driving goal. How do we call this out and change it? Well, that’s a case-by-case consideration and probably another blog on strategy. For all that, I’m pretty sure that focusing on “what not to do” is not OK.

 

 

Is Hybrid Really OK?

Published – LinkedIn – March 3, 2016

I find it interesting that hybrid development structures are becoming such a hot topic. A steady stream of articles is appearing to justify the hybrid landing point. My problem with the acceptance of a hybrid model is that it ignores the reasons why hybrid became the adopted model. I’ve been part of an agile transformation that is now at “hybrid”. Based on currently available evidence this isn’t a place where we will stop, revisit what happened, make changes and move back to the agile transformation. This is it folks, this is our new way of doing things. This is not an unusual story. I’ve spoken to numerous colleagues who have been becalmed attempting to navigate the same waters.

The failure to transform is not a reason to accept adopting a hybrid model. It’s actually a pretty bad reason to accept being hybrid. Agile fails for numerous reasons. Stopping at hybrid accepts and validates the failure points rather than exploring and resolving them. Moving to Agile is not about transforming the Development department. It is fundamentally about changing the business. It is a change of mindset, it is a change of culture. Waterfall seems to be an excellent model for covering up dysfunction in a company. The “guess the requirements” game followed by rounds of rework and argument, the leveraging of change requests, the reliance on legal documents (not denying the need for the documents, just the way they can be used to defend deliverables or practices) – these all serve to generate a layer of noise that masks poor practices. Transforming to agile practices simply lays these bare.

Facing down dysfunctional behavior is not easy. My experience (and this includes talking to colleagues) is that the biggest layer of dysfunction anchors to a company’s Management layer. Many Managers acquire their role not because of their ability to deal with people but because they were good at the technical aspects of their job. In other words, “you understand what it takes to deliver a project, you can manage your role, by extension you can manage others”. That’s a bad reading of a person’s capability and it often robs the business of a good, productive person. More companies than not assume that Managers have the “required skills”. More likely they have the mindset of their previous Managers – predominantly command and control. In transforming to agile you ask Managers to give up command and control, to move away from micro-management. You ask them to hand power across to teams of people. Do we really wonder why transformation might fail? Leaders, on the other hand, will welcome the transformation power shift. Why? Because they have never been into the command and control style; they have always been about empowering their people. Transformation will have traditional Managers staring “into the abyss” wondering what they will do next.

There are numerous factors that may stall an agile transformation. I have heard little to convince me that many companies really consider these, or how many of them present a real risk. Before you start transforming, understand what is in front of you. Even with careful planning you will encounter the unexpected. If you hit the point of “being hybrid”, don’t stop there. Inspect, adapt, move forward. Do not accept hybrid as “the destination”. More than anything else, if you are hybrid, do not tell others you are Agile. You’re not Agile. Saying you are is a lie. You are lying to yourself and your clients. Hybrid is to Agile what a VW Beetle is to a Porsche. So for all those who say it’s OK to be hybrid: that’s cool, but only if you set out to be hybrid. Did you?

Transforming a Test Manager into a Test Leader

Test Magazine, May 2016

Co-authors – Lee Hawkins, Rajesh Mathur

Testing Management – It’s Not What You Might Think (Thinking Isn’t Optional)

One of the ironies of the software testing industry is that a lot of people outside the industry (and also a lot of people inside the industry) believe that testing is easy. Testing can be easy for certain software products – for example, applications which:

  • have simple architectural designs
  • are used sparingly
  • are not mission, life or business critical
  • do not interact with other applications or environments, or have minimal interaction and integration
  • have minimal usability and accessibility requirements and may have bugs that “may not bug someone who matters”.

Such applications are usually free, open-source or come as freebies with other software products. An example is Notepad, which has minimal functionality and comes free with other Microsoft products.

On the other hand, testing can be very complex. Think about all the other software that you use, interact with or depend on while at home or work, while driving, while traveling by air, etc. The list of complex software we interact with is almost endless. However, when many people talk about software testing, they generalize the subject and call testing easy. This generalization naturally leads to the belief that anyone can test. If you share this belief, please read on. The authors suggest you read “Perfect Software and Other Illusions about Software Testing” by Jerry Weinberg. It might change your perceptions and thinking.

Since many people believe testing is easy, some testers or technical people we meet also feel that test management is easy and that anyone can do it. Most of the people who say such things do not really understand what they mean by testing or test management. It is very important to understand what we mean when we use these terms. In the words of Michael Bolton, “Words are powerful tools for understanding and clarifying ideas, but like all tools, they must be used skillfully to achieve their purposes and to avoid trouble.” The authors of this article mostly use the vocabulary of the Rapid Software Testing Namespace.

Here are some of the myths of test management that we have often heard from test professionals. In this article we will examine some of these.

  • I don’t need to know testing to become a test manager.
  • Test management is all about organizing resources. (The authors of this article prefer to use “people” or “team members” and not “resources”).
  • As a test manager, I do not have to actually test.
  • Knowing about testing is not that important as a manager because anyone can test.
  • I am a test manager and it’s easy because all you have to do is to assign resources to projects.
  • As long as I follow best practices, it will be all good.
  • Test Strategies and plans are based on templates. So as long as you have a template, planning is easy.
  • The availability of detailed requirements documentation makes a test manager’s job easier; her testers can simply write test cases based on the requirements.
  • Following standard testing processes helps you deliver good testing, and vice versa.

Test management, like any other management discipline, requires a balanced and relevant skillset. Here are some of the skills that help one make a good test manager:

  • Leadership and management: Dealing with people (people management), setting priorities, delegating, motivating and developing people, coaching, listening. Demonstrating that you trust your people to understand problems and provide great solutions.
  • Critical thinking: To understand the mission of the project and to devise approaches appropriate for solving the problems. Recognising and negating the pitfalls and biases that the problems pose, and drawing meaningful conclusions when needed.
  • Project management: You don’t have to create project plans, but learning how to decipher them or to add to them is seen as a good skill. Other project management skills that are useful to know as a test manager are scoping, planning, coordinating, budgeting, organizing and understanding risks.
  • Communication & collaboration skills: As a test manager, an important part of your job is to communicate. You communicate with your team members, with peers like development managers, architects, database administrators, infrastructure people, support teams, and with management teams. Good collaboration skills help you value and build relationships with these people., Forming positive alliances and understanding and is important when compromise and negotiation is required.
  • Testing: An important skill for test managers is to understand testing. Creating a test plan based on a requirements document and a test plan template is not test management. You must understand testing and you must be ready to roll up your sleeves!

It is clear that test management is much more than just resource management, despite what some of the test managers we have met or worked with seem to think.

So what makes a good test manager? It is a combination of people skills and testing skills. The balance is important. The context of the engagement matters and the balance will change as a test team matures. The one thing that stays constant is the need to have a “people first” attitude. Management is great for handling management responsibilities (reporting and the like); beyond that, you must embrace leadership. When most people complain about their manager they are not complaining about management. They are really complaining about too much management and not enough leadership. A leader is a person who distributes empowerment through trust. It is someone who trusts you to solve problems using the skills you have (or ones they will actively encourage you to develop). They are most definitely not a micromanager and they know how to create an environment in which failure is safe. A manager, on the other hand, talks about how people (they would call them resources) should feel empowered but does not give them the permissions to actually be empowered. They micromanage and assign blame. There is no safe way to fail and the acceptable solutions are your manager’s solutions. Leaders motivate; managers suck motivation out of people. Daniel Pink, in his book Drive – The Surprising Truth About What Motivates Us, talks about motivation models. A summary is presented below:

Motivation 1.0 – These are your basic instincts. Humans have had these since the dawn of time. This is the drive to survive.

Motivation 2.0 – The recognition that people respond to reward and punishment (controlled motivation). In the early 1900s Frederick Winslow Taylor was a notable contributor in this area. This approach hinges on rewarding desired behavior and punishing other, unwanted, behavior. This is a command and control approach and appears to still be the predominant form of motivation used by managers. Recall that quote from Lenin? Control over trust – that is motivation 2.0 thinking.

Motivation 3.0 – Tapping into people’s intrinsic (autonomous) motivation, the desire to do a great job. Allowing people to utilize their sense of autonomy, allowing them to self-direct. This requires resisting the urge to control people.

If you want people to succeed, excel and engage then you must give them room to do so. Managers must learn to manage less and lead more.

Good test managers follow good practices of management. While people management skills are really important as a leader, another important requirement for becoming a good test manager is becoming a skillful tester. We strongly recommend that you maintain a healthy interest in continuously improving your testing skills.

Imagine you decide to learn how to drive a motor car. You have a friend, and your friend’s Grandfather has decided he will help you. He’s been driving for years so you’re confident that he’ll know what you need to learn. Experience is really important, right? The morning of the first lesson arrives. You sit in the driver’s seat of your car imagining yourself out on the road. Your friend’s Grandfather arrives, gets into the passenger seat and says “You know I’ve been driving for over 60 years”. “Awesome,” you respond, “you must have driven a lot of cars”. The answer comes back: “No, still driving my first Model T. I take it out for a short drive every decade or so on a private property”. Is Grandpa really the right guy to be guiding you, teaching you car driving skills? Just about every industry I can think of has examples of people who think knowledge at a point in time (especially certification) equips them for life, and that skill, practice and acquiring new knowledge and skills are not important. This is a bad attitude and a great way to make yourself redundant. You really want to make sure that you don’t roll up to work a “Model T driver” when your team are all suited up as “Formula 1 racers”. Experience is important, but the right experience is far more useful.

Documentation and metrics (by this we mean metrics that are supported by a clear context that enables them to tell the underlying story) are useful. If you are moving into a test manager role, they are likely to be among the first items added to your “to do” list. Improving your team’s testing capabilities, creating a capability to find important bugs fast, is probably the most important task. Documentation and metrics do not make your clients happy; high quality software does. How you do that depends on your testing and people management skills. As a manager you might simply embark on a “certification collection” exercise and tell your clients “My test resources are really good. They are all certified and we use only best practices”. As a leader you might talk to your people and discover areas where they feel development is required. You might also consider skills that do not have “test” as part of their description – courses that focus on things such as teaching, mentoring, coaching, thinking, analysing, team building, leading. The absence of a certificate at completion will be overridden by the value of the knowledge being brought back to the test team. As a leader you’ll tell your clients “We have a really broad experience base. The people in the test team are broad thinkers, they love analysis and problem solving. We are one of the happiest and strongest teams I have ever worked in”. We have previously answered some questions about certification in the January 2016 edition of Testing Circus magazine. A very good way to improve your testing skills is to attend courses offered by the Association for Software Testing.

Through our experience, we believe that, amongst other things, good test managers are rounded individuals. They manage when required but otherwise lead, and are good at leading by example. Their people-first approach engages those who work with them and encourages those same people to work with real passion because their input is highly valued. The testers experiment and innovate because they are led by someone who makes it safe for them to fail and supports moving forward from failure. We are not being critical of people who do not demonstrate these skills. We are, however, suggesting that if this article makes you feel like you manage and never lead, it is time to reconsider your approach.

Resources

Rob Lambert has written a lot on this topic too, so review his blogs for ideas:
http://thesocialtester.co.uk/writing/
http://cultivatedmanagement.com/blog/
http://thesocialtester.co.uk/wp-content/uploads/2013/08/thediaryofatestmanager.pdf
http://cultivatedmanagement.com/how-to-manage-time/

http://www.developsense.com/articles/2005-01-TestingWithoutAMap.pdf

Pink, Daniel H. (2010-01-13). Drive: The Surprising Truth About What Motivates Us. Canongate Books

Johanna Rothman has written many good articles on this subject. Visit her website for those articles:

http://www.jrothman.com/articles/

 

Management and trust – So simple, so complex, so important

Testing Trapeze, April 2016

TRUST is a word that often gets used without its importance or weight being realized. It is easy for someone to tell you “of course I trust you” or “I trust in your ability to see this project through”. A person’s ability to talk about trust is not always backed up by an ability to provide the promised trust. Telling someone you trust them to deliver when you give them an easy task which they have delivered on many times previously is pretty easy. History tells us that fulfillment of the delivery is a low risk proposition, but can you repeat that same sentiment when the stakes are high?

I had a number of possible topics for this article; I guess you don’t need to be Sherlock Holmes to figure out what I decided to write about. The reason I settled on this topic is worth relating. I work in a team that has a weekly gathering over coffee – part work discussion, part social. It is a bonding get-together that has been incredibly successful. A recent work meeting that we had all attended came up in discussion. Of all the things discussed in the meeting, a single comment resonated as “the highlight” and it was discussed with considerable passion and emotion. Why this one specific comment? Because the speaker (a manager) opened by focusing on the personal consequences of not complying with a new process being rolled out. The comment completely overrode the positivity of the actual roll-out announcement. It communicated a clear lack of trust. Below is a summary of the key conclusions from the team’s discussion about the lack of a trust culture:

  • It is not just a management issue; this attitude pushes down to others
  • Lack of management trust generates fear of failure
  • Lack of trust erodes a team’s ability to trust each other and work efficiently
  • Lack of trust undermines confidence, generates self-doubt and promotes inefficiency
  • Being assigned a solution rather than a problem removes buy-in and removes feedback opportunities
  • Lack of trust in the form of micromanagement kills innovation and self confidence
  • Lack of trust produces a fear of failure
  • Lack of trust is part of a “carrot and stick” approach with the emphasis clearly on the “stick”
  • Lack of trust creates an implied belief that people will seek to break process or give less than their best
  • Lack of trust creates a culture where people succeed by finding fault in others’ work

In an article published in the Huffington Post titled Managing Better: 7 Ways Leaders Say “I Don’t Trust You”, David Peck cites the following issues:

  • Nitpicking: Micro-editing, being hyper-vigilant about the details of their work, too frequent check-ins and telling, rather than asking, “better” ways to do what they are doing.
  • Delegating the “what” and the “how”: Saying, in effect, “This is what I need, and here’s how I need you to do it”, or “You should have done it this way”.
  • Delegating without sufficient context: Making a request or command to do something without explaining why or where it fits in to the bigger picture; “do this”.
  • Delegating responsibility, while actual authority to act resides too high up the chain: Many organizations say they empower their people, yet particularly in difficult times, the reverse is the tidal pull. Pulling too many decisions into committees or up the leadership chain is often the rule. You delegate, but don’t give responsibility for the final decisions related to what you’ve asked them to do.
  • Leading with the mindset that your people are never allowed to fail: However well-intentioned, if people are working at their best, sometimes they will fall down or fail. Intervening, overrehearsing or otherwise being heavy-handedly protective of them.
  • Overriding your people’s input or feedback: Requesting or taking input from your team then (apparently to them) ignoring it without explanation. Asking for feedback, then overriding it.
  • Keeping your people under wraps: Behaviors like bringing your people along with you to an important presentation or moment, and not having them actively participate. Not giving your people opportunities to showcase their work in more important settings.

The sources are separated by significant distance but there are many common themes between Peck’s list and the sentiments expressed in the team meeting. This is clearly not a problem that is limited to a handful of people in a single location. Why do we, as people, hold trust as such an important attribute, and why is a real or perceived lack of trust, as the above makes clear, so immensely damaging? I went hunting for a definition or explanation of trust, ending up (somewhat unexpectedly) settling on the following from Wikipedia:

… trust has several connotations. Definitions of trust typically refer to a situation characterized by the following aspects: One party (trustor) is willing to rely on the actions of another party (trustee); the situation is directed to the future. In addition, the trustor (voluntarily or forcedly) abandons control over the actions performed by the trustee. As a consequence, the trustor is uncertain about the outcome of the other’s actions; they can only develop and evaluate expectations. The uncertainty involves the risk of failure or harm to the trustor if the trustee will not behave as desired.

Vladimir Ilych Lenine expresses this idea with the sentence “Trust is good, control is better”. 

I like the above description. It invokes several key themes:

  • Trustor and trustee
  • Willingness
  • Reliance
  • Voluntary or forced
  • Abandoning control
  • Uncertainty
  • Behavior expectations

I also like the final quote (which I will come back to). If you are wondering who Vladimir Ilych Lenine is, you might better know him as Lenin.

From the session responses and the trust rationale described, there is the expectation that trust transfers power and responsibility, moving it from the trustor to the trustee. Why is this transition such an issue? Surely giving people responsibility and allowing them to create solutions is why we hire them? Part of the answer might be found in Dan Pink’s words. In his book, Drive – The Surprising Truth About What Motivates Us, he raises the idea of three motivational systems, which I’ll paraphrase as:

  • Motivation 1.0: These are your basic instincts. Humans have had these since the dawn of time. This is the drive to survive.
  • Motivation 2.0: The recognition that people respond to reward and punishment (controlled motivation). In the early 1900s Frederick Winslow Taylor was a notable contributor in this area. This approach hinges on rewarding desired behavior and punishing other, unwanted, behavior. This is a command and control approach and appears to still be the predominant form of motivation used by managers. Recall that quote from Lenin? Control over trust – that is an example of motivation 2.0 thinking.
  • Motivation 3.0: Tapping into people’s intrinsic (autonomous) motivation, the desire to do a great job. Allowing people to utilize their sense of autonomy, allowing them to self-direct. This requires resisting the urge to control people.

Pink elaborates:

“Autonomous motivation involves behaving with a full sense of volition and choice, … whereas controlled motivation involves behaving with the experience of pressure and demand toward specific outcomes that comes from forces perceived to be external to the self.” 

If you look at the list of observations gathered from teammates about the impacts of a lack of trust, you can see they align with a rejection of motivation 2.0 principles and a desire to embrace motivation 3.0. There is a desire to be self-directing, autonomous achievers; engaged and fulfilled by the work.

So how did “we” get to a point where management is directing operations in a way that their people just do not relate to, and why have we stayed there so long? Tom DeMarco provides an interesting idea in his book, Peopleware: Productive Projects and Teams:

“Consider the preparation we had for the task of management: We were judged to be good management material because we performed well as doers, as technicians and developers. That often involved organizing our resources into modular pieces, such as software routines, circuits, or other units of work … After years of reliance on these modular methods, small wonder that as newly promoted managers, we try to manage our human resources the same way. Unfortunately, it doesn’t work very well.” 

The DeMarco observation highlights two problems:

  • Technical skill gets people to management rather than people skills;
  • Technical competency is built by owning low-level detail, but this is not ideal when managing people.

This is not an ideal foundation. The following discussion from Joan Lloyd about micro-managers helps to reinforce DeMarco’s views:

“These folks just can’t let go. Typically, they have worked their way up the ladder and they are familiar with the work that needs to get done. They find satisfaction in doing the work, so they like to do it themselves, or tell their employees exactly how to do it. Often, micromanagers are perfectionists, so they breathe down the necks of their employees, checking their work to see if they have completed it exactly like the manager would have done it.

Sometimes micromanagers are created because their boss is pressuring them for fast, specific results. This causes the manager to hover over their employees and frequently inquire about the progress of the project. If the manager’s boss is the punitive type, you can bet the manager will be micromanaging his or her employees, so no heads will roll”.

If managers can’t let go, and that results in a “lack of trust” environment, do we just accept this and move on (“hey, that’s the way it’s always been, it’s just the way it works”) or do we seek better and find ways to address the problems? The answer might lie within initiatives that provide managers with leadership skills. Why is this important? While the terms manager and leader are often used interchangeably, they are not the same thing. Consider the following points of distinction from the Harvard Business Review:

Counting value vs Creating value. You’re probably counting value, not adding it, if you’re managing people. Only managers count value; some even reduce value by disabling those who add value. By contrast, leaders focus on creating value, saying: “I’d like you to handle A while I deal with B.” He or she generates value over and above that which the team creates, and is as much a value-creator as his or her followers are. Leading by example and leading by enabling people are the hallmarks of action-based leadership.

Circles of influence vs Circles of power. Just as managers have subordinates and leaders have followers, managers create circles of power while leaders create circles of influence. The quickest way to figure out which of the two you’re doing is to count the number of people outside your reporting hierarchy who come to you for advice. The more that do, the more likely it is that you are perceived to be a leader.

Leading people vs Managing work. Management consists of controlling a group or a set of entities to accomplish a goal. Leadership refers to an individual’s ability to influence, motivate, and enable others to contribute toward organizational success. Influence and inspiration separate leaders from managers, not power and control. 

There is a clear alignment between leadership and motivation 3.0. Is the transformation possible? I can’t see why it isn’t, but it will take effort and persistence. This is a cultural transformation. It is the management-specific equivalent of the “waterfall to agile” mindset change. Many of these fail because change is hard and old habits die hard, but without this change a business risks either failing to reach its full potential, or simply failing. Businesses that understand the imperative will surge forward as the trust shown by leaders engages employees and encourages them to do their best in safety.

In closing, from Tom DeMarco again:

“Most managers give themselves excellent grades on knowing when to trust their people and when not to. But in our experience, too many managers err on the side of mistrust. They follow the basic premise that their people may operate completely autonomously, as long as they operate correctly. This amounts to no autonomy at all. The only freedom that has any meaning is the freedom to proceed differently from the way your manager would have proceeded.”

Testers’ FAQ on certifications

Co-writers – Lee Hawkins, Rajesh Mathur

Testing Circus – Volume 7, Edition 1, January 2016

This article (or Testers’ FAQs) is a result of our constant discussions, conversations, debates, teaching, coaching and mentoring over social media and not-social (we are not saying anti-social) media. Many of these conversations have occurred during our popular TEAM meetups.

We have also been responding to many queries from experienced as well as inexperienced testers lately. Many of these queries concern topics such as career advice, growth paths, techniques and technology. The most common question concerns testing certifications. The testers out there appear confused about the value of becoming certified. Considering the confusion and common questions, we decided to create this FAQ guide. We have also listed some good references at the end of this article.

Q. In an overcrowded market, there has to be a benchmark which employers can work with. Don’t certificates help employers by becoming that benchmark?
Answer: The central problem is not the missing benchmark. The problem seems to be about assessing job applicants’ CVs before the interview process. One common complaint is that most CVs from testers appear similar, as if a template has been used. This problem is exacerbated by certification, not remedied by it. If every tester is certified by the same body, then this is not a differentiator. Those involved in recruiting testers need to look beyond the cookie-cutter CVs for signs of genuine testing ability and interest, such as community engagement and critical thinking (an example of which would be the realization that certification homogenizes rather than differentiates).

Q. Isn’t it hard to assess a person’s capability until they reach the interview stage? It is an overcrowded market because employers often get a large number of applications for a single job posting (certified or otherwise).
Answer: A good CV can tell you a lot about a candidate. Focus on understanding the applicant’s past employment history, what they bring at the pure job experience level. Then understand their involvement in learning and contributing. What does the applicant do to extend their testing knowledge? Do they rely on employer-provided training or do they have an established self-improvement program? Is the applicant involved in the test community? What does this involvement look like? Is this a “9 to 5” tester for whom testing is just a way of making money, or someone who wants to grow and add value as a tester? This tells us more about an applicant than holding a generic testing certificate.

Employers also have their vested interests in making the hiring process easier for themselves. One approach is to be very clear and specific while drafting the job description. Most testing job descriptions are very vague and poorly drafted, and this is why we believe employers receive a large number of applications. Employers would benefit by spending time understanding the role they really need filled and describing an appropriate job profile. They could also benefit by understanding what certification means to them as an organization, and the value it delivers, rather than simply adopting it as a filtering benchmark without understanding the impact of this decision.

Q. Any ideas how the testing industry would set the expectation that certificates are/are not required? There doesn’t appear to be a clear expectation amongst all organizations.
Answer: It would be unrealistic to assume all organizations have the same expectations on this or any other subject. Organizations have different cultures, and projects within them also have different ways of working and their own expectations of the value testing brings to the project. It is these differences in what constitutes value from testing that are acknowledged by the principles of context-driven testing, and these differences are poorly served by a “one size fits all” approach to testing – an almost inevitable outcome of following certification programs.

Q. For a beginner tester, the certification programs teach techniques that might help in their job. Theory can be a base for practical experience?
Answer: We think that a certification course and a subsequent exam do not teach beginners testing techniques. Most certified testers we have interacted with confirm that they took the exam only for the purpose of gaining the certificate and not for learning new techniques. Testing techniques are mostly learned on the job, by doing.

Further, when the means to an end is remembering things to regurgitate on an exam, it is not an exercise in practical demonstration but in rote learning. We have worked with numerous testers who have received high certification marks but cannot negotiate their way through real world problems. Certification does not teach thinking skills; it teaches students to follow an established pattern of practice. Students do not leave the courses with any strategies for dealing with these problems and thinking through issues. We advocate getting experienced testers to sit with the new testers. Establish a mentoring program, and let real experience guide the development of testers. Let them experience real world problems and develop problem solving skills as they find answers to those problems while being fully supported. James Bach wrote an excellent post for new testers which we highly recommend (even if you are not new to testing).

Q. For an experienced tester, certification works as a refresher and for up-skilling knowledge. People take certifications to showcase their ability or curiosity to learn, don’t they?
Answer: We’re all for testers taking responsibility for their own career, and a demonstrated history of continuous learning is something you should be looking for on any tester’s CV. Too often we encounter testers who think taking ISTQB Foundation teaches them all they need to know to be a good tester. In our opinion this is a blinkered and misguided approach. Testing growth requires a community. The idea that one source of information (certification) makes you complete is a fallacy. You don’t learn to speak English by reading the dictionary. In a “thinking and doing” profession you don’t up-skill by occasionally getting a certificate. You improve by immersing yourself in other people’s thinking, be that books, articles or ad-hoc observation (to name a few means), and turning those into experiments. You improve by discussing ideas within your test community, attending meetups, conferences and workshops – even coffee chats with other testers who are willing to challenge your ideas, experiment with them and provide feedback on outcomes!

Q. Certification is essential to get into the workforce or be promoted.
Answer: This is why it’s so important for there to be good arguments in place for not having such requirements. With the ubiquity of ISTQB certification, it is not surprising that organizations latch onto it as a prerequisite during hiring and promotion – but that doesn’t mean it has to be this way. It is up to all good testers to present compelling arguments for alternatives that focus on critical thinking and context.

Q. Most importantly, certifications are for implementing a universal process. Each organization has its own testing strategies, plans and approach. Learning about a universal process gives the tester confidence to fit into any organization easily.
Answer: Testers serve their stakeholders in a particular context. They cannot work in isolation because there are multiple stakeholders that testers influence and are influenced by. Since testers serve a specific purpose in a specific context, universality of language is not required as much as it may be for a developer. Testers who try to impose their language on their stakeholders do a disservice to our craft. This imposition serves to create confusion and contributes to poor relationships with stakeholders. Each organization is unique and has its own way of doing things. What one organization may call a build verification test, others may call a smoke, sanity, shakedown or shakeout test. Semantics matter, and getting them wrong may adversely impact a given context. It is a better approach, in our opinion, to allow project stakeholders to agree on a “glossary of terms” for the project so that all stakeholders use common, clear language. The glossary may not survive beyond the project, but it doesn’t need to. It survives only as long as it is relevant to the project context.

The idea of creating a universal test language is a unicorn. It suggests a level of conformity among the testing community that, in reality, will never be achieved. Many years ago, being relatively new to testing and having achieved ISTQB Foundation, Paul attempted to formalize the language of the eight testers he led. He spent time trying to align terminology to the ISTQB glossary. He eventually dropped the idea as it became clear there was no appetite for the change. He realized not long after that this was a high effort, low value exercise. We don’t need a common language; we need to discuss and establish context through communication. Michael Bolton’s post on Common Language provides further “food for thought”.

The notion of universality also implies best practices. This is, in itself, an issue as it encourages the idea that the same approach can be applied in any context and it will be efficient, meaningful and provide value to the stakeholders. This attitude severely retards meaningful growth in the industry and also damages credibility. Our personal approach, and the one we would recommend, is to consider all possibilities available to you and use the ones that best fit your needs within the current context. Don’t rest on these decisions though. Contexts change; be alert and ready to assess what those changes mean. Do not blindly follow a pre-determined path.
Q. Understanding the process makes a person the best fit. Certification exams make testers understand the testing process. Hence, certifications are not at all a wasteful activity.
Answer: One does not need certification to understand or learn a new process, language or activity. Children learn languages by observation and become fluent through practice. Similarly, for testers, practice, observation, exploration and learning are more important than certification.
Whether testing certification is wasteful again depends on context. The effort involved in learning the syllabus content to the point where the examination can be passed presents an opportunity cost: the tester involved could instead be learning and exploring the product under test and providing information about it to their stakeholders. Entrenching the so-called standard terminology from the certification may also end up being wasteful, as the tester needs to work within the differing project environments in any organization and adopt the de facto terminology in use in each.
Q. We don’t retain knowledge as time passes, so we need refresher courses or certifications to get back into the process.
Answer: Learning can be done in many ways. Certification or paid courses are not the only options for gathering knowledge. Today there is so much free learning material and support available online that a seeker may not even need an institution.
Q. Certifications or standards were created out of necessity.
Answer: The question is, what necessity, or whose? These necessities need to be exposed, as they are the assertions that need to be examined for accuracy and relevance. What necessities led to the creation of a very directive method of testing? What other options were discarded, and why? If you are going to argue necessity then you need to look at the historical context and examine it.
If certificates were created out of necessity, what was the objective? What benefits were to be accrued through a prescribed process in a business of “a million or more” contexts? Is there really “one true process to rule them all”? Does that original necessity still exist? We have yet to hear a convincing argument that it does. We argue, strongly, that you cannot implement directive approaches to testing without tearing large holes in the credibility of testing as a profession.

Can you hire good testers who do not have a certificate but still have good knowledge of testing? We include Rob Lambert’s post in the references below; it answers this question in significant detail.
Q. If experience-based testing is what we recommend, then what are the entry criteria for freshers? They cannot gain experience without being hired as a tester.
Answer: There are ways an entry-level tester can gain experience. There are open source projects available online, and websites such as CodeAcademy, Khan Academy and Free Code Camp provide learning and experience opportunities. Crowdsourcing sites like uTest provide opportunities to gain experience and earn as well. Beginners can also join test meetups or engage more anonymously through forums such as LinkedIn. These avenues offer not just learning, but real experience and, in some cases, income too.

References
1. “Recruiting Software Testers” (Dr. Cem Kaner)
http://kaner.com/pdfs/QWjobs.pdf
2. “LinkedIn PDCA: Are you ready?” (Rajesh Mathur)
http://media.wix.com/ugd/c47e45_4d2ec12335a744ee8117632d5f9423cd.pdf
3. “How to Recruit a Tester” (Phil Kirkham)
http://www.ministryoftesting.com/2011/11/how-to-recruit-a-tester/
4. “Certifications are creating lazy hiring managers” (Rob Lambert)
http://thesocialtester.co.uk/certifications-are-creating-lazy-hiring-managers/
5. “How to Find, Interview and Hire Great Software Testers”
(Simon Knight)
https://blog.gurock.com/interview-recruit-testers/
6. “Defending Against Standards and Certification” (Eric Proegler)
http://testingthoughts.com/ericproegler/?p=481
7. “Certifications in hiring – Part 1” (Johanna Rothman)
http://www.jrothman.com/htp/hiringprocess/2016/01/certifications-in-hiring-part-1-a-certificates-value/

It’s On My List – One Piece of the Quality Jigsaw

Checklists reduce the likelihood of you forgetting to do something important and so increase the chances of delivering quality that will delight your customer. That’s a bold opening statement, it sounds more like a conclusion, but read on, please. It raises the question: if my brief opening summary is reasonably correct, why does there seem to be a general lack of checklist use in software production? When I have raised the idea of checklists with colleagues, none profess to using them (or at least not in any consistent way for any length of time). I recently spoke about checklists at a test meetup, and it was apparent that the audience had spent little, if any, time using checklists as a tool. Why is the use of checklists not a more general practice within software development? When I first tried to introduce them into a test team some years back I could not get the idea to gain traction. I won’t deny that part of that could have been me being a poor salesman rather than the idea being a poor one. Nevertheless, none of the testers seemed to see them as intrinsically useful.

Checklists make sense to me. There are very few days when I don’t plan what I need to cover in the day. You might call that a daily plan or a “To-Do” list, but it is just as easily viewed as a checklist. Checklists provide a level of comfort that the things I should be doing are getting done. I don’t list everything, just the important things, the things that I do need to complete on the day. When I sit in an aircraft I’m really hoping the Captain and First Officer are calling through those checklists. Wheels-down landings at an appropriate speed really appeal to me (my full list of requirements in this space extends beyond a single aircraft configuration). It was really my interest in aviation that brought checklists to the front of my thinking.

1 Pilots have checklists for the following: “before start,” “after start,” “before takeoff,” “cruise,” “pre-descent,” “in-range” (about 10 minutes before landing), “after landing,” “parking” and, if the airplane is finished for the day, a “termination” checklist must be completed. Checklists are fundamental to the aviation industry, the most regulated industry I know, because they virtually eliminate mistakes and oversights. In addition to mechanical checklists mounted in the cockpit, we consult plasticized checklist sheets and electronic ones displayed on airplane computer screens, as well as reference checklists for such procedures as de-icing (courtesy of Air Canada, the bolding is mine).

The use of checklists extends beyond aviation. Medicine, in particular surgery and critical care, has also adopted checklists.

2 The concept of using a checklist in surgical and anaesthetic practice was energized by publication of the WHO Surgical Safety Checklist in 2008. It was believed that by routinely checking common safety issues, and by better team communication and dynamics, perioperative morbidity and mortality could be improved. The magnitude of improvement demonstrated by the WHO pilot studies was surprising. These initial results have been confirmed by further detailed work demonstrating that surgical checklists, when properly implemented, can make a substantial difference to patient safety (British Journal of Anaesthesia, the bolding is mine).

One of the things I really like about Scrum is the Definition of Done (DoD). Why so? Because it is a checklist that the team must hold each other accountable for. That checklist represents the things that must be done in order to say we have value that can be delivered to our client. It removes the “hey, did we complete the code reviews on that release?” scenario as the release flies out the door to client land. It covers off the “did we complete the testing we committed to?” panic question after release to a client (this does assume proper use of the DoD). The DoD is a powerful governance item. The team I work with has its own DoD defined at key stages of story card movement, and each of those stages has key attributes we believe are vital to ensuring consistent quality. It represents things we shouldn’t even have to think about doing. The “dumb things” that you could never forget to do, but somehow, under pressure or other distractions, you might. The DoD sits on each team member’s desk and a big copy of it sits blu-tacked to a window in our area. It’s as visible as we can make it to the team and external stakeholders.

So what are the qualities of a good checklist? Or maybe a checklist is a checklist is a checklist? Just make something up and go for it. A checklist isn’t just any list of things. If you want it to be effective:

  • make sure the people that are going to use the checklist help create the checklist
  • limit the length of the checklist, four to six items, covering only the critical items
  • use the checklist as a tool to support, not replace, judgement and creativity

You should also give some thought to how you want the checklist to be used. Are you expecting the actions to be acknowledged and physically ticked off? Or are you going to introduce it as a means of prompting thinking and action, without expecting execution of each action to be checked? There is no right or wrong; the context in which you are using your checklists will guide how they need to be used. Understanding this is quite important. If you cannot engage people with using checklists, the best checklist in the world will not help you improve. A checklist that sits unused is a waste of effort. Spend time making the checklist as simple and useful as possible. As soon as you get a few checklist “wins”, point them out, discuss them and let the checklist “do its own talking”.
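If it helps to see the shape of the idea, here is a minimal sketch in Python of a DoD acting as a gate: a story only counts as “done” once every item has been acknowledged. The items and the helper name are entirely hypothetical examples, not a recommended set, and a card blu-tacked to a window does the same job just as well.

```python
# A minimal, illustrative Definition of Done checklist.
# The item names below are hypothetical examples, not a recommended set.
DEFINITION_OF_DONE = [
    "Code peer reviewed",
    "Unit checks passing in CI",
    "Exploratory testing session completed and debriefed",
    "Release notes updated",
]

def is_done(acknowledged_items):
    """Return (done, missing): done is True only when every DoD item is acknowledged."""
    missing = [item for item in DEFINITION_OF_DONE if item not in acknowledged_items]
    return (not missing, missing)

# Example: a story claiming to be done, with one item missed under pressure.
done, missing = is_done({
    "Code peer reviewed",
    "Unit checks passing in CI",
    "Release notes updated",
})
if not done:
    print("Not done yet. Still outstanding:", ", ".join(missing))
```

The value, of course, is not in the code; it is in the conversation the outstanding items trigger before anything flies out the door.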

If you are interested in learning more about checklists you might like to grab a copy of The Checklist Manifesto: How to Get Things Right by Dr. Atul Gawande. The good Doctor saw value in checklists across a number of endeavors, including software development. There are a few extracts and overviews of the book on the web if you would like a bit more detail before handing over a few dollars to a book retailer. To close, I’d like to use the following from an Aviation Week article that quotes Dr. Gawande.3

“….overcoming historic ignorance is less of a challenge in the field than ineptitude, the “instances the knowledge exists, yet we fail to apply it correctly.” While there is bountiful information available to medical professionals and most study for years to master their skills, the new challenge is to assure they “apply the knowledge . . . consistently and correctly. We have accumulated stupendous know-how. We have put it in the hands of some of the most highly trained, highly skilled and hardworking people in our society . . . . Nonetheless, that know-how is often unmanageable. Avoidable failures are common and persistent.” Disciplined use of checklists “provides protection against such failures.”

While the above specifically references medicine, change the reference to software development and the message still rings true.

Constructing a checklist that is useful takes time and thought. It needs to be open to feedback to get just the right “shape”. It needs to contain the right language (if you have a geographically distributed team, the checklist that works well in one location may not be so good in another). Persist with the effort and you’ll find yourself rewarded with a tool that can help you improve and maintain your quality levels.

1 When do pilots use checklists? http://enroute.aircanada.com/en/articles/when-do-pilots-use-checklists

2 Surgical safety checklists: do they improve outcomes? http://bja.oxfordjournals.org/content/early/2012/05/30/bja.aes175.full

3 Checklists and Callouts: Keep It Simple, Avoid Distraction, Prevent Ineptitude http://aviationweek.com/business-aviation/checklists-and-callouts-keep-it-simple-avoid-distraction-prevent-ineptitude

Stress and work – Strategies that keep you moving forwards

Women Testers, Edition 6, October 2015

Co-author: Annie Rydholm

What is stress? We often use the term, sometimes as a positive, sometimes as a negative and sometimes as a badge of honour.

1 “Stress is an everyday fact of life. You can’t avoid it. Stress results from any change you must adapt to, ranging from the negative extreme of actual physical danger to the exhilaration of falling in love or achieving some long-desired success”.

1 “Not all stress is bad. In fact, stress is not only desirable it is also essential to life”.

“Good stress” is a motivating force. It gets you out of bed in the morning, it makes you relish the challenges in front of you. It makes you sharp and alert, it gets you thinking. Testers work under many pressures: short and tight deadlines, software that is complex and challenging to understand, and sometimes just plain old “stuff happens”. Sometimes the challenges can seem overwhelming, your anxiety will start to rise and the “good stress” positives will start to diminish.

When bad stress builds to excessively high levels, it is serious, and medical intervention is required. This article is not about that level of stress. This article is about strategies that will keep you moving forward and help you stay energised.

Stressor 1: Lack of knowledge

Strategy 1: Write down what you do and don’t know

It’s not unusual to find yourself assigned to test a software change where you have little familiarity with the functions you need to focus on. I have over 15 years’ experience with a particular piece of financial software and I still find myself in situations thinking “how the hell do I test this?”

The reality is you do know something about the software, the functional change and your relationship to it. Give yourself some clear air; that might mean moving to a quiet space for a few minutes, or stepping out to a cafe and grabbing a drink. Have a notepad and pen handy and make two columns. The first column is “What I know”. In this column you make notes about all the things you know about the project you are going to test. It doesn’t matter how small, write it down. You’ll often be surprised at how much you write in this space. Now add a column called “What I need to know”. In here you can make notes on things that puzzle you, knowledge gaps, any question that comes to mind. When you are doing this you might find that considering these questions prompts your memory to give up more things you do know. Note these down.

You have now created a means of asserting some control. Talk to people involved in the project to get clarity around the questions you have. I can almost guarantee that if you have a question about something, others will have the same question. When this happens you are identifying potentially harmful knowledge gaps and helping to strengthen the product. Feels good, right? Not only are you learning, you are helping improve quality. While you are having these discussions you can also validate some of the things you do know, which will help increase your comfort level even further.

While writing this article Annie and I spent some time discussing this strategy. Annie, not having used this idea before, tried it out and found that it worked well. Annie noted that just writing things down produced an instant feeling of relief. It took things out of her head and on to paper where she could focus on the ideas and sort the more important from the less important. It helped her make decisions about her next move.

Stressor 2: System complexity

Strategy 2: Create a model you can understand

When you start on a project, the project itself could be reasonably advanced, or, even if you are there at project start-up, it may not play to your specific domain strengths. People on the project team are busy, the project is complex, and you need to sort out exactly what you are dealing with. Between you and understanding are pages of documentation and very busy project team members. You need to find ways to reduce complexity. It is so much easier to think freely when you can establish the basics in a way you can understand them.

Find the paperwork you need to get a handle on. Most likely this will be a specification document. Read through it once, just trying to get a high-level overview. Maybe make a small note or two. Once done, put it down, take a walk to the kitchen, make a coffee and kick back for a few minutes (and maybe a few extra ones). Now go back to your desk. Open the specification and start reading and sketching. Start drawing out all the relationships you get from the document. You are now creating a model. Models are great because 2 “Today’s systems are complex with many moving parts (thanks to modern multi-tier and distributed architectures) — models enable us to cope with this complexity by providing a visual abstraction layer that focuses on the higher level concepts in the problem domain and de-couple the “what” from “how””.

In cases where you do not have a written specification, you need to go into “information discovery mode”: you need to talk to people who may have the answers (note: having a document does not mean you skip “information discovery mode”). The people could be designers, product owners, developers, any stakeholder relevant to the project. Talk to them and sketch relationships from what you know, then update your model by asking them again.

When you complete this exercise don’t expect it to be perfect; it doesn’t need to be. It needs to be good enough to give you a visual model, one you understand, of the changes and how they impact the system under test. It needs to be good enough for you to use when talking to busy team members. It should help you understand components of the system under test and identify areas of focus and conversation. Your models should be subject to continuous improvement.

Often when I do this I find it is the first time the team has sat down with a high-level diagram that shows changes and impacts. It can be quite a conversation starter. As an example of using this technique, and perhaps one of the first times I used it, I remember finding a bug in the design of a small project. As I mapped out relationships I created a path which showed contradictory actions (the document described a state that could not be allowed). I showed this to the project team, who immediately recognised that I had indeed spotted an error. This not only helped the project, the conversation helped validate my understanding of the changes and demonstrated that I was serious about helping the project be as good as it could be.
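If pen and paper isn’t your thing, the same sketch can start life as plain text. Below is a small sketch in Python (the component names are invented for illustration) that records relationships as simple pairs and prints Graphviz DOT, which a tool such as the dot command can turn into a diagram you can put in front of the team.

```python
# A sketch of capturing relationships from a specification as plain data
# and emitting Graphviz DOT text for rendering. Component names are invented.
relationships = [
    ("Payment screen", "Fee calculator"),
    ("Fee calculator", "Ledger service"),
    ("Ledger service", "Statement report"),
]

def to_dot(edges):
    """Build a DOT digraph description from (source, target) pairs."""
    lines = ["digraph model {"]
    for source, target in edges:
        lines.append(f'  "{source}" -> "{target}";')
    lines.append("}")
    return "\n".join(lines)

# Print the DOT text; pipe it through `dot -Tpng -o model.png` to get a picture.
print(to_dot(relationships))
```

However you draw it, the point is the same: a rough, shared view of the moving parts that you keep improving as the conversations happen.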

Stressor 3: What do I do next?

Strategy 3: Heuristics

3 “Heuristics do not replace skill. They don’t make skill unnecessary. But they can make skilled people more productive and reliable.”

Heuristics can be a great tool for both experienced and inexperienced testers. Heuristics are not infallible; they do not guarantee you are following the right path. They do, however, help you generate ideas about things you might or should test. The heuristic won’t tell you how to go about your testing or how to execute with the right focus. It will open pathways to information that will assist you to generate thoughts and approaches and to create meaningful questions. The heuristic will not do the work for you but it will help you make decisions about what you could do next. The most important thing heuristics will do is get you thinking, and moving. A gentle push to gain some momentum is often enough to start generating that positive thinking energy.
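As a trivial illustration, and nothing more than that, here is a sketch in Python that turns a handful of heuristic prompts (loosely inspired by the familiar consistency heuristics) into concrete questions about a feature; the feature name is invented. The prompt list matters far more than the code around it.

```python
# A sketch of using a short heuristic prompt list to generate test ideas.
# The prompts are loosely inspired by well-known consistency heuristics;
# the feature name passed in below is an invented example.
PROMPTS = [
    "How should {feature} behave compared with the previous release?",
    "What do the claims (specs, help text, marketing) say about {feature}?",
    "What would a reasonable user expect {feature} to do?",
    "How do comparable products handle what {feature} does?",
]

def test_ideas(feature):
    """Turn each heuristic prompt into a concrete question about a feature."""
    return [prompt.format(feature=feature) for prompt in PROMPTS]

for question in test_ideas("the new fee calculation"):
    print("-", question)
```

Even a list this short is usually enough of a nudge to get ideas, and testing, moving again.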

Stressor 4: High work levels/tight deadline

Strategy 4: Prioritisation, information flows, working slow

Let’s get this straight, right out of the box. This is a problem, but it is not your problem. You don’t own this, but you do need to work with it and keep moving forward. The reasons for workloads growing well beyond what a deadline allows are many but, generally, we get into this position because of systemic failure. An aircraft rarely crashes because of a single source of failure. A project is pretty much the same.

Make sure that your stakeholders are well informed about how testing is progressing. Be clear about things that are blocking testing and the impacts of those blockers. However, be realistic, also think about strategies that could help mitigate these issues. It is important to demonstrate that you are not just presenting problems but also helping with potential solutions. It is also important for you, your overall mindset, to think about solutions (positives) and not just problems (negatives).

The deadline remains set in concrete and your report on testing impediments has been largely disregarded. That sucks, but you cannot control others’ reactions and actions, so focus on things that you can control. Move your focus to agreeing priorities. Of the outstanding testing, what is the most important? Get agreement on scope priority. Work to this and report in terms of the priorities. By going into “priority mode” you immediately reduce the amount of testing you need to focus on and help your state of mind.

So now you want to go fast and really slash through the amount of outstanding testing. “If I go really fast we can still complete all the testing – that’ll impress everyone”. Don’t. The minute you think you need to “go fast” you actually need to “go slow”. Why? Rushed testing, testing to a “tests completed per day” target, is bad; you are focusing on the wrong goals. Rushing increases stress and makes you more likely to skip over things you would have otherwise investigated. It reduces your ability to clearly analyse and think. Give yourself permission to stay focused and calm and look for problems. You will find a sustainable and appropriate speed. At this speed you will do the right thing by you and your clients.

Clearly you cannot avoid stress, in reality you don’t want to. You do however want to maximise “good” stress and reduce any other type. In most cases simply doing something that will keep you moving in the right direction is enough to keep you positive and energised. We hope the strategies outlined in this article help you to maintain forward momentum and a positive focus.

References

1 McKay, Matthew; Davis, Martha; Eshelman, Elizabeth Robbins; Patrick Fanning (2008-05-03). The Relaxation and Stress Reduction Workbook (New Harbinger Self-Help Workbook) (Kindle Locations 189-191). New Harbinger Publications. Kindle Edition

2 http://www.ashoknare.com/2009/03/30/why-do-we-need-visual-models/#sthash.Vhn2Kb1z.dpuf

3 http://www.satisfice.com/blog/archives/462

We are all thinking testers

When did we start using thinking as an attribute by which testers are, or might be, classified? It’s a question I’ve pondered a few times. A recent visit to a website that classified group members as “thinking testers” brought the question back into focus. I’ve heard more than once “everyone here is a thinking tester” or “only a thinking tester would do that” or “I am a thinking tester”. If we believe that “thinking tester” is a valid descriptor then surely the opposite must also be possible: a “non-thinking tester”. A computer executing regression tests is a possible example of a non-thinking tester (although I prefer the term checking rather than testing in this space, and I don’t agree with the idea that a computer can test). I’ve yet to hear anyone call themselves a thinking tester in comparison to a computer. The assertion has always, to me, seemed to be “at the human level”.

Is it possible for a human tester (tester from here on in means human tester) to not think?  According to Scientific American the answer is No. Humans, from the dawn of time, have been hardwired to think.


“Optimal moment-to-moment readiness requires a brain that is working constantly…….Constant thinking is what propelled us from being a favourite food on the savanna—and a species that nearly went extinct—to becoming the most accomplished life-form on this planet. Even in the modern world, our mind always churns to find hazards and opportunities in the data we derive from our surroundings, somewhat like a search engine server. Our brain goes one step further, however, by also thinking proactively”


So, from a scientific standpoint, there are no non-thinking humans, ergo, no non-thinking testers.

Let’s move away from the scientific domain and consider the word itself. The Online Etymology Dictionary provides the following:


Old English þencan “imagine, conceive in the mind; consider, meditate, remember; intend, wish, desire” (past tense þohte, past participle geþoht), probably originally “cause to appear to oneself,” from Proto-Germanic *thankjan (source also of Old Frisian thinka, Old Saxon thenkian, Old High German denchen, German denken, Old Norse þekkja, Gothic þagkjan).


Let’s dive into a dictionary and have a look at the meaning of this word. From the Oxford Dictionary:

  • the process of considering or reasoning about something
  • a person’s ideas or opinions

and from the Cambridge Dictionary:

  • the activity of using your mind to consider something

There’s a lot of consistency in those sources. So the meaning of thinking can be considered reasonably stable across a long period of time. It’s not a word that has recently attracted new meaning. Whether we think of testing in ISTQB terminology and approach:


Software testing is a process of executing a program or application with the intent of finding the software bugs. It can also be stated as the process of validating and verifying that a software program or application or product: Meets the business and technical requirements that guided its design and development.


or we favour the definition from the Rapid Software Testing namespace:


Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes to some degree: questioning, study, modelling, observation, inference, etc.


I struggle to conceive of a way of doing either that does not involve consideration, reasoning, opinions and ideas. Even deeply detailed, low level, stepped out test cases require thought when they are being executed. In fact those really detailed test cases, if out of date, can require a considerable dose of thinking to get through them. Working out if a variance, an unexpected outcome, is a software issue or user input mistake and what triggered, or may have triggered, the variance takes thinking. Some issues are really elusive, reproducing them requires thinking. Testing, regardless of whether we view it as good testing or bad testing, requires thinking.

When we refer to a “thinking tester” it is far too general to be useful. We are all “thinking testers”, much in the same way that we are all “breathing humans” (have you ever felt the need to point this out in conversation?). Thinking is useful and it has many dimensions. Perhaps we need to consider the various types of thinking. Thinking can be critical, deep, reflective, lateral, quick, slow, analytical, concrete, abstract, divergent, convergent; the list goes on. In understanding types of thinking, how we apply them and how they apply to us, we add depth to our toolkit. We start to get a good view of our thinking strengths and weaknesses, and we gain knowledge of where we could improve. Perhaps we can focus on improving our thinking skills and let others within the community, our peers and those we serve, recognise our thinking skills through actions rather than words. When the proof is on display it might just be that our need to use the label “thinking tester” disappears and we settle for being a Tester.

Regards

Paul
