September 7, 2021
This post is an abridged excerpt from Agile Conversations: Transform Your Conversations, Transform Your Culture.
In his book, Kent Beck says that Test-Driven Development (TDD), the practice of writing a test concurrently with the code it exercises, gives him “a sense of comfort and intimacy.” That is exactly the feeling we want you to have during the Trust Conversation, and the tool to help you achieve it is the Ladder of Inference, another concept from Chris Argyris and colleagues in the book Action Science.
Observe that the Ladder tells a coherent story: from data, you derive meanings, which gives you assumptions, conclusions, and beliefs; and from these, you determine your actions.
The goal of the Trust Conversation is to align your story with that of your conversation partner, and the Ladder provides an obvious way to structure that alignment: first align on the bottom rung, then rung 2, and so on, until your stories match.
This would be easy if both parties’ ladders were visible, but only the bottom rung (observations) and the top rung (actions) exist outside your head, where others can see them. Everything else is invisible—which is where TDD for People comes in.
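As a rough illustration (not code from the book), the seven rungs used in the example below could be sketched in Python like this, marking which rungs your conversation partner can actually observe:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Rung:
    number: int
    name: str
    visible: bool  # can your conversation partner observe this rung directly?


# Rung names follow the example below; only the bottom and top rungs
# exist outside your head.
LADDER = [
    Rung(1, "Observable data", True),
    Rung(2, "Data selection", False),
    Rung(3, "Meanings", False),
    Rung(4, "Assumptions", False),
    Rung(5, "Conclusions", False),
    Rung(6, "Beliefs", False),
    Rung(7, "Actions", True),
]

for rung in LADDER:
    status = "visible to others" if rung.visible else "invisible (in your head)"
    print(f"Rung {rung.number}: {rung.name} ({status})")
```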
When writing code with TDD, you proceed slowly—in confident, small steps. Similarly, when using the Ladder of Inference, you are going to ascend in small, tested steps, each of which increases your confidence. At each step, you’ll ask a genuine question of your partner about her reasoning at that rung and, if needed, explain your own reasoning as well. (We described genuine questions in more detail in Chapter 2 of Agile Conversations.) This will reveal both sides’ ladders rung by rung, so you can understand where they differ.
When your test fails—that is, when you are surprised by or don’t understand the answer to your question, exposing a misalignment—you’ll stop, refactor your understanding, and retry the test.
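If you haven’t practiced TDD in code, here is a minimal sketch of that red-green-refactor loop in Python using the standard unittest module; apply_discount and its values are hypothetical, invented only to illustrate the cycle, not taken from the book or from the pricing system in the example below:

```python
import unittest


def apply_discount(price_cents: int, percent_off: int) -> int:
    # Green: the smallest implementation that makes the test below pass.
    return price_cents * (100 - percent_off) // 100


class DiscountTest(unittest.TestCase):
    # Red: this test is written first, before apply_discount exists, and fails.
    # Green: the implementation above is added to make it pass.
    # Refactor: names and structure are improved while the test stays green,
    # then the loop repeats with the next small test.
    def test_ten_percent_off(self):
        self.assertEqual(apply_discount(1000, 10), 900)


if __name__ == "__main__":
    unittest.main()
```

Each genuine question in the conversation plays the role of one of these small tests: a surprising answer is a failing test, and your cue to stop and refactor your understanding before moving on.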
At the end, you and your partner will have more closely aligned your ladders and, therefore, your stories; and where you still don’t fully agree, you will at least understand each other’s motives. As a result, you will have built substantial trust for the future.
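To make the analogy concrete, here is a rough Python sketch of the rung-by-rung alignment loop described above; check_alignment and refactor_understanding are hypothetical callbacks standing in for the conversational moves (asking a genuine question, revising your own story), since the real “tests” here are conversations, not code:

```python
from typing import Callable, Sequence


def align_ladders(
    rungs: Sequence[str],
    check_alignment: Callable[[str], bool],         # ask a genuine question; True = "green"
    refactor_understanding: Callable[[str], None],  # on "red", revise your own story
    max_retries: int = 3,
) -> bool:
    """Walk the Ladder bottom-up in small, tested steps (TDD for People)."""
    for rung in rungs:
        for _ in range(max_retries):
            if check_alignment(rung):     # green: your stories match at this rung
                break
            refactor_understanding(rung)  # red: stop, refactor, retry the test
        else:
            return False  # still misaligned at this rung after several tries
    return True  # ladders, and therefore stories, aligned all the way up


# Example: the rungs from the pricing conversation below, with placeholder callbacks.
rungs = ["observable data", "data selection", "meanings", "assumptions",
         "conclusions", "beliefs", "actions"]
aligned = align_ladders(
    rungs,
    check_alignment=lambda rung: True,         # placeholder: every "test" passes
    refactor_understanding=lambda rung: None,  # placeholder: no refactor needed
)
print("Stories aligned!" if aligned else "Still misaligned somewhere.")
```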
Let’s go through an example: Suppose your team is working on a system that sets and adjusts prices for customers, and you have noticed one recently added team member, Helen, griping that the pricing algorithm is too complex to maintain. Since others, including you, are working on this code happily, you believe there is a misalignment that is affecting trust, as Helen’s complaints sap morale and she resists all suggestions for improving the problem code. You are starting to suspect that she and perhaps others are going to refuse to update prices until the whole subsystem is rewritten, which you don’t think the company can afford right now.
Rung 1: Observable Data
“Helen, I heard you say in the standup that the pricing code is over-engineered. Did I understand you correctly?”
“Yes, anyone can see it’s impenetrable.”
You have established the basis for the conversation—that Helen sees a complexity problem. Your test is green; move to the next step.
Rung 2: Data Selection
“Got it. For me, the important part of any complex code is its architecture—how it’s divided into chunks—because that’s hardest to change. Is that the area of most concern for you too?”
“Absolutely. I mean, the comments and variable names suck, but we can refactor to improve those over time. I can’t see how any new joiner can hope to understand a forest of tiny classes like we have, though.”
After hearing your reasoning, Helen has confirmed agreement. Green again. (Notice we don’t have to agree with Helen that the architecture is actually objectively complex—just that she perceives it that way.)
Rung 3: Meanings
“Okay. So that means to me that you are going to find it hard to add new prices to the system. Is that right?”
“Of course! That’s why I asked to be reassigned to the edit page design.”
Your stories continue to match; Helen agrees that the perceived complexity is a barrier to her work. Green test: onward!
Rung 4: Assumptions
“So are you assuming that the pricing algorithm is just too hard for you to work on?”
“Sure, but it’s not just me—Ramona says she can’t make heads or tails of it either.”
A new fact: Helen isn’t alone in her assessment of the code. But this is another green test—your stories continue to match. You may be starting to wonder where the misalignment is, or whether it exists.
Rung 5: Conclusions
“I guess you’re thinking that we’re ripe for a rewrite of that code then.”
“What? No, that would be a waste of time. You and the other experts can keep hacking at it while us newbies stick to the user interface.”
RED! Here’s the misalignment. You thought Helen was angling for an expensive overhaul, but she’s suggesting that only experienced team members work on the complex algorithm. Time to refactor!
Rung 5: Conclusions, again
“Ah, I didn’t understand your thinking. You’re concluding that the pricing complexity means new joiners like you can’t work on that code, is that right?”
“Of course; that’s what I keep saying. We just don’t have enough experience to make changes safely.”
Now we’re green again; we understand Helen’s thinking, even if we aren’t aligned with the actions it leads to. On to the next rung but with new understanding.
Rung 6: Beliefs
“So it sounds like you believe that making some tasks off-limits to newer team members is a good idea. I have a different belief, though—that we should raise everyone’s skills until they can work on any feature; so everyone learns and we get the most from every developer. What are your thoughts?”
“I did think that I had to stick to the easier bits. But I can get behind the training idea, if we can afford it.”
Here’s alignment happening in real time. Now that your conclusions match, it’s easier for Helen to bring her beliefs in line with yours. Green!
Rung 7: Actions
“Great! I’d be willing to book Maria, our pricing expert, to spend the next week training you and Ramona on the pricing algorithm. Would that work?”
“Sure! I didn’t know that was an option. I’d be willing to give it a try!”
We’ve reached an action that Helen agrees with, thanks to aligning our stories. Even better, Helen can apply the common story to other potentially challenging areas, asking for training or help to raise her skills rather than complaining about being unable to contribute. In other words, we have built trust with Helen.
In any conversation, the internal stories of those involved are paramount. TDD for People helps you align your internal story with others’, thus building the trust necessary for optimal teamwork. Aligned stories allow us to safely adopt the transparency and curiosity that we need for a successful conversational transformation.
An executive leader can create a trusting relationship with employees, giving confidence to all parties that the cultural transformation is headed in the right direction without micromanagement and continual supervision.
A team lead can align stories with her team to eliminate unproductive infighting and debates, and instead cooperate to meet sprint goals and product targets.
An individual contributor can boost trust with his peers for more effective collaboration, so he can get and give more help with cooperative activities like code reviews, estimations, and pairing sessions.
Continue reading Agile Conversations to learn more about building agile conversations in your organization.