
July 30, 2020

Trust and Test-Driven Development for People

By Douglas Squirrel, Jeffrey Fredrick

This post was adapted from Episode 122 of the Troubleshooting Agile Podcast.


This is number three in our series as we go through our new book, Agile Conversations. We’re up to Chapter Three and this chapter’s all about trust.

One of the things we were having a challenge with was finding something new to say about trust, because we’ve said so much about it. You can listen to and read some of our past thoughts on trust here:

But one of the things that’s tricky and interesting is that, now that people are reading the book, we get to hear their reactions, how they take what we said in different ways, and so on.

This past week, the Software Testing Weekly newsletter told us about a blog post from [a reader]. She had read our book and wrote up her review in a blog post, and she also had several sort of miniature responses going through different topics as she read.

It was quite interesting to read through those, and there’s one in particular I thought was worth talking about, because it brought up an interesting element of the Ladder of Inference and, in particular, one of our favorite techniques: test-driven development for people.

And she had an interesting point! She was trying to take the analogy as literally as possible, and she made the correct observation that in TDD, in software, you’ll often start by writing a test when you have no code at all. You’re writing out, you’re saying, this is what it should be. You’re essentially creating the world. You’re making an assertion about the way the world should be, and then, of course, you’ll have a failing test because you haven’t written the code. (By the way, sometimes this doesn’t happen. This is why you always run the test first, because sometimes you’ll write the test and run it and it’ll be green.) But in the normal case, you’ve made this assertion about the world, and it’s not true. And when it’s red, your job is now to change the world. You make the world different so that your assertion becomes correct.
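To make that red-green loop concrete, here is a minimal sketch in Python using pytest conventions; the function and test names are hypothetical, invented for illustration rather than taken from the book or the blog post:

```python
# test_discount.py -- run with `pytest`; all names here are illustrative only.

# Step 1 (red): assert how the world *should* be, before the code exists.
# Running the test at that point fails with a NameError, which is the point:
# you confirm the test can fail before you trust a green result.
def test_ten_percent_discount():
    assert apply_discount(100.0) == 90.0


# Step 2 (green): change the world so the assertion becomes true.
def apply_discount(price: float) -> float:
    """Apply a flat 10% discount to a price."""
    return price * 0.9
```

The order is what gives you the confidence: you watch the assertion fail first, so the later green run actually tells you something about the world and not just about the test.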

And that’s not exactly the way things work in a conversation, when we go and talk to someone and we get a different answer. We’re not saying you’re gonna go change the world and make that person agree with you. 

Expectations vs. Reality

The thing that happens to me all the time is that I keep tripping over reality and hurting my foot, so what’s much more common is that my experience of reality is very different from what I expected. And hers was a very interesting point of view, and an interesting way to understand what we wrote.

I remember starting from this existing notion, the Ladder of Inference: the idea that you start with something that you can see or hear, an observable piece of data; and then what’s important about that; what meaning it has for you; what assumptions you make from that; the conclusions you draw; the beliefs you adopt; and then the actions you take. I think I even remembered all the rungs; that’s amazing.

So you go through these slow steps, and we take readers through that and give lots of examples of how you go through all those steps. As you go through those steps, the crucial thing for me, as I practice this over and over again, is how familiar the feeling is, how similar the sense of security is to what I have when I’m talking to you…We had a conversation just before starting this podcast, and I started with something I was observing, and that helped me feel really confident about mentioning it: “Hey, Jeffrey, I noticed this thing.” Then you said, “Oh, yeah, this is what it’s about.”

And that helped me enormously. And I felt secure in the same way as when you’re doing test-driven development. You think, well, I’m not really sure what the world is like. I’m not sure whether this test will pass or not. I know my expectation is it will fail. So then I can write some code, but I’m going to know in a few seconds whether or not the world is as I think it is. I’m taking small steps, so I have this iterative experience. And that’s exactly what you can get in a conversation.

People often tell me, as they’re describing conversations they’re dreading, that they’re thinking, “Oh, my gosh! I’m gonna have to have this conversation. This is going to be terrible.” One of the things they often cite is “The person will blow up at me,” or “Maybe I’ll discover that I had the wrong information all along.” “It’ll be embarrassing”; “It’ll be threatening.” They think all these bad things will happen, most of which involve tripping over reality!

And the great thing about using a technique like TDD for people or the Ladder of Inference is it lets you learn in very small increments, with a lot of confidence at every stage, and with small consequences for getting it a little bit wrong. And so that’s what we really meant when we said, “Gosh. When I explain this to people, especially software developers and testers,” like the blogger, “I should be talking about test-driven development, because that’s the feeling you get.”

It’s knowing that you can take small steps, building confidence incrementally. You’re gaining knowledge that you can be certain of. I like that analogy, and, thinking about it, I had that same kind of feeling when I was practicing TDD, when I wrote code as part of my daily work. I remember one of the elements, though, comparing to her comments in the blog: there were times when I valued that feeling from TDD so much that I took it into other areas. In particular, I remember using a TDD approach to learn an API. And I would say it was a “TDD” approach even though it didn’t necessarily have the elements she described; it wasn’t how I would normally do TDD, because I often didn’t know exactly how the API worked.
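As a sketch of what that test-driven way of learning an API can look like, here are a couple of hypothetical probes in Python, again in pytest style; the standard library’s json module stands in for whatever unfamiliar API you might actually be exploring, and the guesses encoded in the asserts are exactly that, guesses:

```python
# test_learn_api.py -- tiny tests used as probes to learn an API.
# Python's json module is just a stand-in for the unfamiliar library.
import json


def test_guess_tuples_become_json_arrays():
    # Guess: a tuple gets serialized the same way as a list.
    assert json.dumps((1, 2, 3)) == "[1, 2, 3]"


def test_guess_about_non_string_keys():
    # Original guess was that a non-string key would raise an error;
    # running the probe showed it gets coerced to a string instead.
    assert json.dumps({1: "one"}) == '{"1": "one"}'
```

Each probe is small enough that when one fails, you know exactly which belief about the API was wrong, which is the learning you came for.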

This is sort of an exploratory approach. I’m going to go in here and learn. I have some guesses I want to make, but really, my goal is to learn, and I’m not confident about how things are going to proceed or how the world should be. I do have an idea of where I want to end up, but it’s much more vague in this kind of world. The other thing that came to mind is exploratory testing, where the emphasis is much more on learning than on pure execution.

And you don’t work to a script or execute a very fixed set of steps. You’re exploring and testing, and in small increments! You say, “Oh, I wonder what happens if I put in a negative number? How about a negative big number? How about if I put the largest negative number…Aaah wait! Now I’ve got a crash. Okay.” And you didn’t set out in the beginning to test negative numbers. You just thought ‘that would be interesting. Let me try that.’
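In code, that kind of incremental “I wonder what happens if…” probing might look like the following sketch; the checks poke at Python built-ins purely as stand-ins for whatever system you would actually be exploring:

```python
# probe_negatives.py -- exploratory probes, one small experiment per test.
import pytest


def test_negative_string_repetition():
    # Wondered what "ab" * -1 would do; learned it quietly gives "".
    assert "ab" * -1 == ""


def test_very_negative_slice_is_clamped():
    # A slice starting far below zero doesn't crash; it's clamped.
    assert "abc"[-100:] == "abc"


def test_very_negative_index_blows_up():
    # But a plain index that far below zero does blow up.
    with pytest.raises(IndexError):
        "abc"[-100]
```

None of these were in a script written up front; each one records what the previous surprise suggested trying next.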

Vulnerability is Key

I remember talking to Michael Bolton, and one of the things he and I would frequently discuss and agree on is that testing is a human activity. What we often call an automated test really should be called an automated check, and what we are doing in these conversations is, I think, much more human. We’re opening ourselves up. It’s not simply a mechanical check of “I know how the world is.” At least, that’s the mindset we should have.

And I think that actually brings us to one of the points that, looking back, we haven’t talked about as much on the podcast, but we do mention in the book, which is this idea of vulnerability, and the link between vulnerability and trust. One of the things that we say is that to build trust, you need to be vulnerable. And there’s an element here: even in these very small steps that we’re taking, we are being a bit vulnerable, because we are making assertions about how we see the world that the other person might not agree with. In fact, very often we expect that they’re not going to agree; we’re just not sure exactly where.

And some of the best testers I’ve ever known have been particularly ego-free, vulnerable, and friendly. And they have to be, because they have to simultaneously inhabit a world where the code is supposed to do something and a world where it might not do that thing. So they have to be able to go to the world where it doesn’t do what it’s supposed to do, and they have to be creative and open to that.

If you’re really confident—this is why developers often are poor testers of their own code, because they know what it’s supposed to do—well, of course you’re not going to put a negative number in. Why would you do that? I remember one of my very best testers came along and literally put in a negative number for how many shares to buy. And we all said, “You can’t buy a negative number of shares. What are you talking about? That’s nonsensical.” She said, “Well, you can short sell, and look, it crashes if you do it. So you’d better not do it.”

We had all been very confident that all you could do was put in positive numbers, and that no one would ever try putting in a negative one. We found out we were wrong. It required greater vulnerability from her. She outdid us in vulnerability, because we were confident and not willing to be vulnerable and say we could have been wrong, that our conception of reality could have been incorrect.

This is one of the things that I know for myself, and others in development, we often struggle with testing our own code because we are attached to our conception of what it does, even if that’s not what it actually does. We’re so attached to our perception of reality that we’re not really testing it. We’re confirming, rather than exploring what it really does.

But it’s ten times worse when it’s a person, and it’s your own beliefs, rather than a computer executing. You can always say “Well, I just didn’t know that negative numbers would have that effect.” But when it’s “Well, I believed that you guys were all behind me and backing my desire to deliver this feature next week,” and I suddenly discover I was wrong? That’s very threatening, even more so. So it’s much easier to be much more attached.

And I think in general, building trust through this idea of being vulnerable is something that’s a bit threatening and hard for people to do, because our natural mode is to try to confirm what we believe. This kind of harkens back to the cognitive biases that we talked about last time. Our cognitive biases push us toward seeking confirmation, but our view is that if you want to learn, learning really means detecting and correcting error. And that means being open to the idea “I’m mistaken.” And, you know, we’re talking about this, and about whether knowing this makes it easier. You said to me something like “It doesn’t get any more fun.”

It’s just like people who perform a lot: they say they get stage fright every time, but they’ve developed a set of things they can do that actually let them get on the stage, whereas somebody who has severe stage fright will just never get on the stage. In the same way, things like the techniques we describe in the book, and the exercises, and the fundamental elements of being vulnerable and predictable…those help you kind of brace yourself for that learning experience.

But the learning experience doesn’t get any better. You don’t think, “Wow! I was wrong and I’m so happy about being wrong. I’m enjoying this experience of being wrong.” You think, “Oh, here I go again. I’m wrong. OK. Well, I kind of knew that was coming and that’s why I went into this. But it still does kind of hurt.”

You’re gonna have that #LearningIsHorrible experience, but you’ve made a decision. You’ve said, “Well, I value having a more accurate view of the world more than I do the misplaced confidence and the lovely feeling of ‘knowing’ that I’m right, and therefore I’m open to this.” We talked about this sense of being confident that we’re right. I think it’s useful to go back to how that relates to trust, which is: it’s gonna be very hard for people to trust you if your worldview is self-sealed and differences of opinion never make it in.

Not necessarily that you have to agree with them. But to have heard them! And to have incorporated their world view and to say, “Well, mine differs in these ways and it doesn’t differ in these ways. And I learnt something new! This is a way that I’ve changed my view.” All of that creates aligned stories, which we go into in more depth in the book, where the belief that you have and the belief that the other person has about how the world works are more closely aligned. Not matching; they do not have to be the same. But they have to start from some of the same assumptions and principles and ideas, and each of you has to have heard the other’s point of view.

One of the important ones we talk about is this idea of being predictable to earn trust, and hearing the other person’s stories and sharing our story. If we’ve laid out, “This is our story, this is how we see the world, and therefore I’m going to behave in this way,” then when you behave that way, people may not agree with it. They may think, “Ah, well, that’s not how I would have done it.” But they can state, “Well, I trust that Squirrel will behave this way because he’s explained to me what he’s seen in the past. He’s explained his assumptions and his motivations and that this is how he responds in this scenario. OK, and he is now acting that way. Great.” That level of predictability starts to build trust. Whether or not you agree with the action!

You might say, “Boy! That Squirrel, he’s nuts, he’s doing this crazy thing. I don’t think that’s the right thing to do. But, man, at least I know what’s coming. I can adjust my behavior.”

I can trust him to do that every time. When I come in, he’s going to ask me “So what did you see? You know, I saw this. What? What did you see?” And that’s very helpful.

About the Authors

Douglas Squirrel

Coauthor of Agile Conversations


Jeffrey Fredrick

Coauthor of Agile Conversations




