Talk Context to Me

Summary:

Context-driven testers see the world in a fundamentally different way than testers from other schools of thought. Matt Heusser offers some tips for avoiding the risks of miscommunication across those schools.

In software testing we have different schools, each with its own terms and definitions. James Bach belongs to my own school, the context-driven school of software testing. James created this mashup video, which may give you a taste of what we view as excellence in action.

The hero of the video is clearly a hero. Notice that he is not talking about prevention or getting things right up front; instead, he wants to solve the problem of the day right now. You may have noticed the swagger, the tough questions, and the quick decisions. The context-driven tester may use jargon you are not familiar with, or have what appears to be an allergic reaction to specific terms like “best practice” or “test automation.” He might laugh out loud at metrics programs. To some people, that behavior can be off-putting, even offensive.

In this article, I’ll try to reconcile the two positions—to help someone who is unfamiliar with context-driven testing get past what might look like a gruff exterior to the beating heart that shares a desire to pursue test excellence.

A Few Things to Consider When Talking to a Context-Driven Tester
**Arguments can be a good thing.** The only way to improve my position is to change it, and I will only change it when faced with ideas that are different. That means if you find yourself in an argument with a context-driven tester, he is treating you like an adult, with the hope that each person can learn something from the other. To put it differently: Your critics are your best friends.

**Context-driven testers will reject imprecise terms.** Take the term “test automation,” for example. The term implies that the entire work of the tester can be scripted up front, repeated, and done best by a computer. Yet test automation tools, specifically GUI tools, automate only a small percentage of the actual work of testing. They don’t come up with their own designs; they don’t diagnose defects, file them, explain them, or resolve them. The work the tools do, the test execution, has no feedback or intuition; the sketch below makes this concrete. That means that “test automation,” like “best practice,” makes a promise it cannot keep. You could argue about these terms (arguments can be a good thing), but arguing about terms doesn’t advance the practice. My advice: Stick to things that influence practice.
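Here is a minimal sketch of the kind of check a GUI automation tool actually executes, written with Selenium in Python. The login page, element IDs, and expected text are hypothetical stand-ins, not anyone's real system; the point is how narrow the tool's share of the work is.

```python
# A scripted GUI check: fixed steps, one fixed verification.
# The page URL and element IDs below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("alice")
    driver.find_element(By.ID, "password").send_keys("s3cret")
    driver.find_element(By.ID, "submit").click()

    # The entire "test" is this one scripted comparison. The tool cannot
    # notice a garbled layout, a misleading error message, or a suspicious
    # delay unless someone thought to script a check for it.
    assert "Welcome, alice" in driver.page_source
finally:
    driver.quit()
```

Designing the check, deciding whether a failure is a product bug or a brittle locator, and then filing and explaining the defect all remain human work; the script automates only the execution in the middle.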

**Be prepared to ask, and fight, about terms.** Context-driven testers have developed a language like any scientific community, and outsiders can feel stuck. That’s OK. Define what you mean by terms like “regression testing” or “test suite,” and ask your new friend what he means by “heuristic,” “oracle,” or “sapience.” One common tactic is to give up trying to decide who has the correct definition of the word and instead say, “When I use the term test plan, I mean ___.” In many cases, context-driven testers prefer the discussion over terms to a premature “standard.” Be prepared for it.

**Focus on skill.** The idea that testing work can be done better or worse, that it is a skill that can be practiced and taught, is central to context-driven testing. This means that when detailed instructions fail, context-driven testers don’t try to write them at the next level down; instead, we engage people in the work, asking them to help define it.

**Ask for an example.** One way to build common ground with a context-driven tester is to talk about experiences and examples or, better yet, do some actual testing. This won't save you from the necessity of debating the meanings of words and the value of practices, but it will at least provide more data to inform those debates.

**Context-driven testers believe test process is about tradeoffs.** They see test process in terms of problems and possible solutions. If that is true, then there are no “best practices.” Instead, the best we have is guidance that can fail, what we call a “heuristic.” Therefore, if you know only one way to do something, or imply that there can be only one good way to test, expect an argument.

Toward a Real Dialogue
In his book The Five Dysfunctions of a Team, Patrick Lencioni details five specific problems. The behavior of context-driven testers is directed at addressing three of these dysfunctions in particular: fear of conflict, which creates artificial harmony and stifles real change; lack of commitment, which leads to people disengaging instead of openly disagreeing; and avoidance of accountability, which leads to people ducking the responsibility to call peers on counterproductive behavior and produces low standards.

Context-driven testers hope to advance the practice of software testing, avoiding dysfunction by fighting about language and ideas, exposing shallow agreement, and making their ideas explicit with actual examples. This is what they bring to testing and what they believe to be valuable. If you can keep that in mind when tempers flare, things might just go a little better.

As a context-driven tester myself, I invite you to tell me I'm wrong. Have you had an experience working with testers from different schools? Did your experience conflict with my advice? Tell your story. Do you have a different idea about dealing with differences and conflicts in testing culture? Make your case. Or, if you agree or have something to add to my model or my advice, say so.

This article is my part of the conversation. The comments? That’s up to you.


User Comments

7 comments
Lisa Crispin

I think the people who label themselves as "context-driven testers" have done the software dev world a whole lot of good. It is good to be pushed to be specific with our terminology, to acknowledge the skills involved in doing a good-enough job of testing, and to focus on business value. Not that they're the only ones doing this, but still, folks like Matt have had a big influence for the better.

However, I feel that the whole idea of "schools" is divisive and unproductive. Meaningful terminology around testing is one thing; labels are another. And I've felt "judged" by people who place themselves in this school and been told I don't know anything about testing.

Despite that, I've learned a lot even from the people who pass judgment on me, and am grateful for how their work has helped me improve how I can help my own team and customers.

No "school" knows everything. Most of us need to learn skills from lots of places and people.

July 29, 2013 - 4:02pm
Matthew Heusser

Thanks Lisa. I can agree with you that the schools concept causes division; I just think the benefits outweigh the pain caused. That's /my/ opinion, of course, and you are entitled to yours. I'm happy to talk about it sometime! :-)

July 29, 2013 - 4:21pm
Jason Koelewyn

Good article; the video is very clever.

I would caution you that, as I read it, I got the impression you have a thing against Test Automation. I understand that what you dislike are the assumptions the term engenders, and I agree that in some situations clarification is needed.

We tend to refer to Automated Regression tests, Automated Service tests, etc., to reduce the confusion.

July 29, 2013 - 4:28pm
Teri Charles

Matt,

I love this article! It really breaks down CDT in a way that, as I'm explaining CDT to someone, I can hand them this and really get the dialogue going. I sometimes struggle to explain some of the finer points of CDT myself, and this will help a lot.

And I must say that I especially like the section on test automation. I'm not the biggest test automation expert, but one of my biggest pet peeves is how some people just throw out the term "test automation" without understanding the what, how, when, where, and why. I'm going to print out this section, put it in my wallet, and pull it out whenever anyone says, "Just automate it"! :-)

Again, nice job and thanks for breaking down CDT so well.

Teri

@booksrg8

July 29, 2013 - 4:53pm
Aaron Hodder

@Lisa: "The idea of test schools is divisive." It's not the idea of test schools that's divisive; it's the presence of test schools that's divisive. And the 'idea' of test schools came from the observation of the 'presence' of test schools. The division is

July 29, 2013 - 4:59pm
Jesse Alford

This article seems to have a confused position on arguments. For instance, this:

> [...] arguing about terms doesn’t advance the practice. My advice: Stick to things that influence practice.

doesn't (necessarily) square with this:

> **Be prepared to ask, and fight, about terms.** Context-driven testers have developed a language like any scientific community, and outsiders can feel stuck. That’s OK. Define what you mean by terms like “regression testing” or “test suite,” and ask your new friend what he means by “heuristic,” “oracle,” or “sapience.” One common tactic is to give up trying to decide who has the correct definition of the word and instead say, “When I use the term test plan, I mean ___.” In many cases, context-driven testers prefer the discussion over terms to a premature “standard.” Be prepared for it.

Arguments about terms advance practice to the extent that they are taken seriously and argued in good faith. (Or even bad-faith good faith, in which one party assumes the other is a reasonable person, and chooses the most reasonable interpretation of the other's argument, even though this is not always the case in reality.) If plans are nothing, but planning is everything, I'd say something similar applies to arguments about terminology: the terms are nothing, but our understanding of them is everything.

Testers could be successful using _flauxbarg_, _Varzy_, and _Kenning_ as terms if they first made sure they had developed deep agreement on their meaning. Actually, that might be an interesting exercise... a planning session in which a broad selection of common testing words (of both the commonly abused variety, such as "regression," and the commonly understood variety, such as "boundary") were taboo and had to be replaced with nonsense words with negotiated meanings.

Significant progress in the practice happens when someone goes from understanding "regression" as "fancy word for bug" to understanding it as "something that breaks something that worked before we started." (In case you think I am being unrealistic with this example, I assure you that it is drawn from reality and involved a tester with years of experience.) Of course, there is a converse transformation. A context-driven tester may come to understand that when a certain person says "regression testing" they mean "scripted testing of a new feature" or "replicating bugs reported by users"; this too is an important discovery.

It is possible for a person who attaches contextually correct meaning to "regression" to communicate better with programmers and product owners, and to evaluate more accurately the priorities of other people who use the word. All of this impacts the practice of someone asked to "test for regressions" or "regression test."

July 29, 2013 - 6:49pm
Peter Walen

I've read this and thought on it and read it again.

I find the challenge to be problematic. My concern with terms is that so many people throw them around as if everyone agrees with their definition, without realizing that people do not agree. "Regression" is an awesome example. "Regression Testing" is another.

My hunch (and Matt and I have taken different tacks on "automation" based on the context of our experiences, in public, while teaching a workshop together) is that the problem isn't with the idea behind "automated testing" (another vague and imprecise term); the problem is this: many people, managers and above in particular, have been sold a bill of goods that can never be delivered in their tenure at the company.

They are looking for Magic. They are looking for Harry Potter to flourish a wand and shout some Latin-ish sounding phrase and POOF! The software is tested! Of course, they'll deny that and say, "No we want the tests to run and everything to be green at the end." Sounds pretty much the same though, doesn't it?

If we can't define the terms in the context of the situation we are in, then why use a "common set" of terms at all? In my experience, it takes a very open-minded person to be willing to consider the possibility that they might be wrong. I'm wrong a lot. The context of our situation will determine what will work, for good or ill. I have not found canned responses to be of much value.

Regards -

July 29, 2013 - 11:02pm
