EP. 3: ACHIEVING BETTER OUTCOMES THROUGH STRUCTURE: A CONVERSATION WITH ELISABETH HENDRICKSON
On This Episode
In Episode 3, Gene Kim is joined by Elisabeth Hendrickson, who inspired many ideas in The DevOps Handbook and, more recently, The Unicorn Project. She has shaped the way Gene sees the world of DevOps. From Developer to Tester ratios to the importance of architecture and the need for leaders to decompose systems well, Elisabeth has been a huge inspiration to the entire DevOps community.
Together they explore her years as VP R&D for Pivotal Software, Inc., software development, and the link between organizations and architecture. In a wide-ranging discussion, they cover Elisabeth’s mental model of balance, structure, and flow, and her view of how organizations really work. Listen as Gene and Elisabeth explore her WordCount Simulation, her personal experience with MIT’s Beer Game, and much more.
About the Guest
Elisabeth Hendrickson is a leader in software engineering. She most recently served as VP R&D for Pivotal Software, Inc. A lifelong learner, she has spent time in every facet of software development, from project management to design, for companies ranging from small start-ups to multinational software vendors. She has helped organizations build software in a more efficient way and pioneered a new way to think about achieving quality outcomes and how that hinges on fast and effective feedback loops. Her book, Explore It!: Reduce Risk and Increase Confidence with Exploratory Testing, was released in 2013 and explores technical excellence and mastery, and creating effective feedback loops for everyone. She spoke at the DevOps Enterprise Summit in 2014, 2015, and 2018, and received the Gordon Pask Award from the Agile Alliance in 2010.
You’ll Learn About
- How to build software in a more efficient way.
- Elisabeth’s mental model of balance, structure, and flow.
- How Conway’s Law applies to Elisabeth’s model.
- Elisabeth’s WordCount Simulation.
- Radical Candor: Be a Kick-Ass Boss Without Losing Your Humanity by Kim Scott
- The Four Quadrants of Radical Candor:
  - Ruinous Empathy
  - Manipulative Insincerity
  - Obnoxious Aggression
  - Radical Candor
- Dangerous Company: The Consulting Powerhouses and the Businesses They Save and Ruin by James O’Shea
- Explore It!: Reduce Risk and Increase Confidence with Exploratory Testing by Elisabeth Hendrickson
- Better Testing, Worse Quality? by Elisabeth Hendrickson
- Managing Proportions of Testing to (Other) Developers by Dr. Cem Kaner, Elisabeth Hendrickson, and Jennifer Smith-Brock
- When NASA Lost a Spacecraft Due to a Metric Math Mistake by Ajay Harish
- Lockheed: New Carrier Hook for F-35 by Dave Majumdar
- Conway’s Law
- Decoding the DNA of the Toyota Production System by Steven Spear and H. Kent Bowen
- The WordCount Simulation by Elisabeth Hendrickson
- “The Beer Game” by Prof. John D. Sterman
Gene (00:00:07): Mid-roll. This episode is brought to you by the 2020 DevOps Enterprise Summit in London, which will be a virtual conference due to the global pandemic. For seven years we've created the best learning experience for technology leaders, whether through experience reports from large, complex organizations, talks from the experts we need, or the peer interactions that you'll only find at DevOps Enterprise. At the time of this recording, I'm busy prerecording all of the exciting speakers for the conference. I'm so excited at the amazing speakers we've got lined up for you. Some of the exciting experience reports include executives from Adidas, Swiss Re, Nationwide Building Society, Maersk, CSG, Siemens, and so many more.
Also speaking is Coats, a manufacturer of fibers and threads, which was founded in the year 1755. I'm also so excited that we have speaking Peter Moore, who you heard in episode one of this podcast, teaching us about Zone to Win; David Silverman, a coauthor of the book Team of Teams; Dr. Carlota Perez, who you've heard me quote so often in the last few years; and John Allspaw, who helped form the DevOps movement. I'm super excited about the high-learning and networking event that we've created for you, which I'm hoping will be an incredibly valuable and fun way to learn, so different from the endless video conference calls we've all been stuck in for weeks. To register, go to events.itrevolution.com. Thank you so much for listening to this episode. Up next will be another dispatch from the [inaudible 00:01:39], which will be selected excerpts from Elisabeth Hendrickson's 2014 and 2015 DevOps Enterprise Summit talks. I'll also share my observations and aha moments from watching her presentations, both from back then and now, six years later. I know you will enjoy them just as much as I did. In this second episode of the Idealcast, I am so delighted that I have on Elisabeth Hendrickson. She has influenced my thinking so much ever since we met in 2012, when we were both on the program committee for Jez Humble's FlowCon conference. Holy cow, I have learned so much from her. She is the author of the book Explore It!: Reduce Risk and Increase Confidence with Exploratory Testing. This book came out in 2013, and it's a book about technical excellence, mastery, and creating effective feedback loops for everyone. She's a software engineering leader.
Most recently, she served as the VP of R&D in charge of Pivotal's big data products. And I was so happy to see the picture of her alongside the other Pivotal executives on the balcony overlooking the New York Stock Exchange when they went public in 2018. Elisabeth is someone who has contributed to our profession for decades. She's helped pioneer a new way to think about achieving quality outcomes and how it hinges upon fast and effective feedback loops. She's a software engineering leader who has ascended the ranks, continually being given larger and tougher problems to solve, and is without doubt a lifelong learner. For all those reasons, it's no surprise that Elisabeth has spoken at the DevOps Enterprise Summit in 2014, 2015, and again in 2018. I had so much fun recording this podcast, and I learned so much too.
In fact, this interview crystallized my quest to better understand leadership and how organizations actually work to either create amazing outcomes for all their stakeholders and customers or fail them entirely. In this podcast, we dive deep into the stories she shared with me almost seven years ago that shaped how I think about DevOps: things like dev-to-test ratios, the importance of architecture, and the need for leaders to decompose systems well. We talk about the MIT Beer Game and her own amazing WordCount Simulation, and we recount a conversation we had last year over lunch about how we can work better with people like Sarah, the notorious SVP of retail operations in both The Phoenix Project and The Unicorn Project, who is most certainly the most polarizing character in those books. So without further ado, Elisabeth, I am so delighted to be having this conversation with you.
Elisabeth H. (00:04:14): Oh, it's so good to be here. Thank you so much.
Gene (00:04:17): First off, maybe in your own words, can you introduce yourself and summarize your decades of contributions to this profession?
Elisabeth H. (00:04:23): You did an amazing job. I actually hate talking about myself. I will say that I have spent time in just about every facet of software development that there is. And I think in this conversation, we're going to be talking about a bunch of those different facets. So in the early years, I spent a lot of time in QA and testing, and I think we're going to end up talking about that. And that's really where I got a lot of insight about what wasn't working about the way that most organizations view those roles. And then in subsequent years, I spent about 10 years as a consultant. Basically my job was to get on airplanes and help organizations figure out how to build software better. Oftentimes I was brought in because of that background in QA. And so they would often bring me in to, quote unquote, fix the QA group. And often I found the QA group was the only group that knew what was going on. And that was not the group that needed to be fixed; that was simply the phase in their software development process where they were discovering the problems. And so my practice morphed more into helping organizations adopt agile practices. And then, what is it now, eight years ago, I joined Pivotal Labs at the time to help build Cloud Foundry, and ended up having a fascinating journey for seven years at Pivotal before leaving when Pivotal was acquired by VMware in December of last year.
Gene (00:05:38): And so if we could just talk about QA for a moment, the reason is that you've contributed so much to that space. And I remember when I first met you, when Jez Humble introduced the two of us, my reaction was, oh my gosh, I think I read something of hers before. And I remember finding this paper that I think I filed away in 2003 or 2002.
Elisabeth H. (00:06:02): I think there are two papers you and I talked about.
Gene (00:06:04): Yes.
Elisabeth H. (00:06:04): One is, Better Testing, Worse Quality. And that's one that I wrote, I was the solo author on that one.
Gene (00:06:04): What year was that?
Elisabeth H. (00:06:10): I think I wrote it in 2000.
Gene (00:06:14): 2000, right.
Elisabeth H. (00:06:15): It may have been published in 2001, but I mean, it's at this point a long time ago. The basic idea behind the paper is taking a systems diagram-of-effects look at testing in organizations and uncovering the reason why you can end up investing massively in your QA, basically your independent test group (most organizations call it a quality assurance group), and end up with worse software. How does that even happen? So that was one paper. And then another paper that you surprised me with at one point by pulling it up and saying, this paper. It was a paper about managing the proportion of testers to other developers. And that was coauthored with Cem Kaner and Jennifer Smith-Brock. And that came out of a workshop, a Software Test Managers Roundtable workshop. And both of them were going against the common wisdom at the time.
Gene (00:07:05): Gene here. Elisabeth just mentioned two papers, which I think are truly notable, both of which influenced my thinking over the last decade. I'll put links to both papers in the show notes. Better Testing, Worse Quality, and Managing the Proportion of Testers to (Other) Developers. Elisabeth and I talk about that first paper later in this interview, but I want to describe in more details the second paper. It was published in 2001 by Dr. Cem Kaner, Elisabeth Hendrickson and Jennifer Smith-Brock. At the time, it was a very surprising and maybe even controversial paper, because it brought to light some surprising observations.
Among other things, it described an exercise done at the software test manager round table, which for years assembled some of the best software QA professionals in the game, no consultants allowed. In this particular exercise, they asked everyone two questions. What was the dev to test ratio for your best projects? And what was the dev to test ratio for your worst projects? The results were not what anyone expected. I asked Elisabeth whether my recollection of this exercise was correct.
Elisabeth H. (00:08:12): You got the exercise absolutely 100% right. At the meeting that led to the paper, Managing the Proportion of Testers to (Other) Developers, Brian Lawrence was facilitating, and he came up with these two questions and passed out sticky notes. And all of us in the room filled out our numbers, the ratio for best and for worst. It was really surprising to me, because at that point, one of the things I loved about that community, the Software Test Managers Roundtable (and the Los Altos Workshop on Software Testing was in that same sort of vein), is you get together practitioners, people who are actually doing the work and got to see the consequences of their decisions, and ask them questions like that.
We would tell stories to one another, and that's where I got to feel not alone, because I really oftentimes would see the results of what we were doing in organizations, and then compare that with what I was reading about as supposed best practices, and feel like, Holy snacks, we must be doing it wrong. But there, as practitioners, we would get together and tell each other our stories. And so the outcome of that exercise that you just described was that there was no real correlation between the ratios. This was in the fall of 2000. At the time, the supposed best practice was a one-to-one ratio, because this is what Microsoft had published: oh, well, you have to have a one-to-one ratio of dev to test.
And what we saw in the anecdotal data — because I mean, we were surveying 12 people or whatever it was — but what we saw was no correlation for the ratio for best, but across the board, the worst projects had more testers. And so there is something really interesting there. Like, okay, why? And as we then told each other stories about it, part of it is that the causality wasn't that the testers were causing the projects to fail; the projects were already failing, and the organization's answer was to increase the number of testers. It turns out that doesn't work. Go figure.
Gene (00:10:13): I remember there was something you told me that went along these lines. It was, when everyone knows that there's no one out there who's going to test your software for you, suddenly you end up with great testing behaviors by everybody, especially developers. Is my recollection correct on that?
Elisabeth H. (00:10:30): Oh yeah, absolutely. So in Better Testing, Worse Quality, it's not an accident that these things all came out close to the same time. It was a hot topic at the time. And especially in the organization that I was in, we were really struggling with some stuff. And so in Better Testing, Worse Quality, our fictional hero is Chuck, who, let's face it, is a composite character, but there's a lot of me in our fictional character, Chuck. Chuck is a QA manager who gets hauled into the VP's office to explain: how is it that we have invested massively in you and your lab and your people and training, and you were brought on board to fix our quality problem, and yet here we are two years in and things are worse? So how is it that our software got worse when we invested all of this in quality?
And Chuck, our fictional hero, walks through a diagram of effects and shows that, well, a funny thing happens when you ramp up the amount of investment in an independent test group. Given the amount of pressure that's already on the developers to deliver, it is so easy for the developers to say, this isn't my problem anymore. Thank goodness we've got the professionals over here. It's their job to test. And there is a little confession here. I think that the statute of limitations is probably up on this one, so it's okay for me to admit [inaudible 00:11:51] of this one. In the paper, there's this little snippet of a supposed email that got sent out. It's not a supposed email. It is actually literally copied-and-pasted email. I was in Chuck's shoes.
I was being blamed for there being quality issues, and they were genuinely quality issues. And I did what Chuck in the paper does and instituted a policy of, okay, well, we're going to stop pretending that we've got 18 people to test everything across the board, because everybody then thinks that their thing is being tested by 18 people. And instead, we're going to actually segment out and say, okay, well, these people are assigned. And basically all I was doing was saying, people can't multitask. That's also not a shocker. But at the time the thinking was sort of like, oh, well you just do your Microsoft Project Gantt Chart thing and you put your resources on and et cetera.
And so we had this sort of mythical idea in our heads that, well, if we've got all these people assigned, the work must be getting done. Obviously this is not true. So there was this one particular engineering manager. When I rolled out the segmenting of people and time and hours, it was: all right, these are the most critical things, and so this is where we're going to assign these testers. And I'm realizing that as I'm saying this, it just sounds so foreign to my own ears to talk about assigning testers as though they are separate from the development team, but that was where we were at the time. And so, okay.
Gene (00:13:13): Still is [crosstalk 00:13:14]
Elisabeth H. (00:13:13): Oh, that hurts. So in any case, I'm sorry, I'm now stuck in my head on how many people are still living this reality. In any case. So, all right, five of the people are going to go test the server, blah, blah, blah, blah, blah. And three of the people are going to go test this. And there was this one little thing where that left zero people left to test. So by being super honest and dedicating people 100%, we were able to look and suddenly see, sort of like the big kanban boards that you talk about, that there were zero people available to work on this lower-priority thing. And so I just threw up my hands and said, well, we got who we got. I don't have any more reqs to hire any more people. Even if I did, I don't think we could absorb them. So you get zero testers.
And that engineering manager threw such a fit at me, which does not show up in the paper. We went back and forth, and the upshot was basically, he was yelling at me, but you can't change objective reality. Because he didn't disagree with my prioritization, he just didn't like the outcome. But what that really means is that he didn't like the reality that he had already been living in and had not yet acknowledged. And the next thing he did was to write this email to all of his developers, that he then, I believe, [crosstalk 00:14:38] me on, and the whole email was really pretty snarky. It basically amounted to: as a result of the sheer incompetence of our QA group and their inability to assign us any testers, well, I've got to ask you, please test your stuff.
Funny thing: the quality went way up, big time. [crosstalk 00:14:57] realized that there was nobody else. The cavalry was not going to be coming to save them. Nobody else was going to be coming along to tell them where there were mistakes. They actually had to take responsibility again for the quality. And I say this as "they," and it sounds really blaming. This wasn't about the people not caring; they cared deeply. Before we started hitting the record button, Gene, you and I were talking about structure. This is an example of structure. The previous structure was giving people the illusion of something that didn't even exist. And once we managed to wipe away that illusion and they were dealing with reality, then they changed their behavior, and their behavior resulted in an outcome that was so much better than we had before.
Gene (00:15:43): That is amazing. I think it was at your Pipeline conference talk where you talked about the story, the specific developer behavior of, I'm confident that the people downstream will catch it for me, right? So therefore it's safe for me to keep working on features, right? It seems like in that case, the structure enabled a certain set of behaviors, or reinforced developers to just work on features. Does that resonate with you?
Elisabeth H. (00:16:07): Oh, totally. And for that matter, it wasn't even just the developers. It was also their managers. It would have been unsafe from a professional development and compensation standpoint were those developers to decide to test more. Literally at that same company, I overheard in the hallway engineering managers saying things to individual developers like, well, why are you still testing that thing? That's what we've got a QA group for. You should be sending it over to them. And they were, like, actually getting reprimanded for [crosstalk 00:16:35]. And again, the engineering managers, I'm not blaming them either. They live within a structure, and they have expectations about what should be true. And so, yeah, absolutely it resonates.
Gene (00:16:47): Okay. I'd like to pause for a moment and let that sink in, because when Elisabeth first told me that story in 2013, it was such an aha moment. What Elisabeth observed is that when there's an independent test group, developers knew that they could work on new features and not worry about quality, because other people would catch those errors for them. But on the other hand, when every developer knows that there is no one out there who's going to catch your problems for you, you end up with far higher quality. This is so counterintuitive, and yet this is a common experience of the best QA professionals in the game. I've seen this phenomenon before too, because it happens in many areas other than QA. Take operations: if developers are woken up at 2:00 AM when their applications crash in the middle of the night, you end up with far better application operability. This is exactly what Facebook observed in 2009, when they put all developers, developer managers, and architects in the pager rotation. Similarly, if all developers know that there is no external security group who's going to find and fix your security issues and who will apologize to your customers for you, you quickly end up with more secure code. I asked Elisabeth to what extent this, and I don't even know what to call it, this principle? To what extent it applies to not just QA, but all other nonfunctional requirements as well.
Elisabeth H. (00:18:11): Yup.
Gene (00:18:12): Yeah.
Elisabeth H. (00:18:13): So I've had to think a lot about this. It turns out drawing lines is hard. And anytime we slice work, we're drawing a line, but drawing lines just in general is hard, right? If you're writing code, is this one class or two? One method or two? Do I separate it out into a helper method? You're drawing lines. Drawing lines is hard. At an organizational level, is this one group or two? Should there be a sharp line between development, the coders, and the people doing QA work, or between developers and operations? And in some organizations, they view it as a best practice to have a very, very strong wall, because otherwise, like I've heard way too many times, you can't have the fox watch the hen house. And so there's this belief that if you don't have a strong wall, you will actually end up with collusion and worse results, which never actually happened in my experience, so it could just be ironic. But anyway, you're drawing lines, right? So anytime you're drawing lines, that's a hard thing to do.
Gene (00:19:10): Exactly there. I think what I'm trying to come to understand is that so much of the role of the leader is to draw lines, right? I mean, you want to decompose work. You want to enable them to work independently. You need to be able to assign responsibilities for certain areas of system, so that it's not just one big amorphous blob, right? So say more about that.
Elisabeth H. (00:19:31): The underlying principle is to slice the work such that the result of the work is actually something that's aligned with a business interest, which comes right back to something that you talked about wonderfully in both The Unicorn Project and The Phoenix Project. You can't have a team of 50 people and expect them to be productive. You have to figure out how to find the right dividing lines between teams. But that principle of finding the natural seams in the organization has to include this idea that the outcome is something that relates directly to a business outcome. And in the case of the work that I was doing at Pivotal, we shipped software. So it's a shippable thing.
I don't want to slice the work such that one half of the shippable thing is on one side of a dividing line and the other half of the shippable thing is on the other. But then, when you're dealing with, we make databases, well, databases are very large endeavors. You don't have five people. When you get to a very mature database, you typically need enough engineers that it's no longer a two-pizza team. And you have to find the right seams within that. And that's where we get to Conway's Law. ThoughtWorks coined the term Inverse Conway Maneuver, because if Conway's Law says that your architecture is going to resemble your organization, then the Inverse Conway Maneuver is when you organize the structure of the organization to reflect the architecture that you wish you had.
And one of the places where we did that was GemFire, which is one of the databases; its open source core is Apache Geode. And a bunch of us got in a room and had a discussion about the architecture that existed. And there were a lot of debates back and forth about what the architecture actually was, because at that point, no one person held the entire architecture in their head. So it wasn't a very clean, simple diagram, but we got a lot of experts in the room. My role there was to simply facilitate the discussion. We ended up with a whole lot of cards representing pieces and parts and capabilities of the system, and walked out of that room with a structure that showed the architecture we wish we had. And that was how we reorganized the teams.
Gene (00:21:38): So in The Unicorn Project, we had the First Ideal of Locality and Simplicity. And I think it was really just a way to show that in the ideal, you should be able to deliver a useful capability by one team making a change in one place, right?
Elisabeth H. (00:21:54): Yup.
Gene (00:21:54): In the worst case, we have 50 teams that are so entangled together that none of them can actually work independently. It sounds like at that point in time, when no one can hold the architecture in their head, and yet all work depends upon this invisible, terrible architecture, everyone is now shackled together in ways that defy even easy explanation or easy documentation, right? So that is a structure, right? One that, as a leader, you saw was very important to crack.
Elisabeth H. (00:22:22): Right.
Gene (00:22:22): And figuratively, metaphorically, et cetera.
Elisabeth H. (00:22:25): It's debatable which is worse: the deadlock that occurs because people aren't willing to take the risk and just do it, which is the deadlock of needing 50 people to sign off, or what I saw happen in other cases, which was a mentality of, screw it, we have to get this done, so we're just going to do this and move fast and break things.
Gene (00:22:25): Right.
Elisabeth H. (00:22:48): And guess what, if you have a super complicated architecture, things break. And my favorite bug that was an example of that involved a setting that in one place in the code specified a number of milliseconds, and in another place...
Elisabeth H. (00:23:00): ...in the code specified a number in seconds, but it was representing the same delay. So depending on where you were in that code, that number was being interpreted very, very differently. And that's what happens when you say, "All right, well, one way to make this change would be to get everybody to participate, but we want to get it done and it seems like it should be a simple change, so [crosstalk 00:23:28] we're just going to move fast and break things." Neither is ideal.
Gene (00:23:32): Right. I love it. The phrase "tightly coupled, loosely controlled" comes to mind, right? That's the worst case...
Elisabeth H. (00:23:39): Yes.
Gene (00:23:43): Gene here. The example of the mismatch between milliseconds and seconds that Elisabeth just mentioned happens more often than we'd like. In 1999, the NASA Mars Climate Orbiter crashed after 10 months of travel because one software component produced values in imperial units while the other expected the metric system. Most of us in software have experienced this, whether it's messing up a unit of time, distance, duration, angle of separation, a monetary value, and so forth. On a separate note, this happens in physical systems as well. In 2013, they discovered that the F-35 couldn't land on aircraft carriers because the airplane's tailhook couldn't catch onto the arresting gear on the aircraft carrier. This just shows that architectural coupling is a fact of life. You want these two components to work together, even though they are very distant from each other in the system; one resides on an airplane and the other resides on a ship.
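Gene's note, continued: a bug class like the milliseconds-versus-seconds mismatch can often be designed out by carrying the unit in a type instead of in a bare number. Here is a minimal, illustrative Python sketch; the `RETRY_DELAY` and `to_seconds` names are hypothetical, not from any codebase discussed in this episode.

```python
from datetime import timedelta

# A bare number carries no unit: one call site may read this as
# milliseconds while another reads it as seconds, which is exactly
# the bug Elisabeth describes.
retry_delay_ambiguous = 500  # ...500 of what?

# Encoding the unit in a type makes every call site unambiguous:
# the constructor names the unit, and conversions are explicit.
RETRY_DELAY = timedelta(milliseconds=500)

def to_seconds(delay: timedelta) -> float:
    """Convert an explicit duration to the float seconds a sleep API expects."""
    return delay.total_seconds()

assert to_seconds(RETRY_DELAY) == 0.5
assert to_seconds(timedelta(seconds=500)) == 500.0  # a very different delay
```

Because `timedelta` normalizes whatever unit it was constructed with, the two call sites in the story could no longer silently interpret the same setting two different ways.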
Okay. Back to my next question to Elisabeth. So one of the things that we were talking about is that you were VP of R&D for the big data suite. And over the years, you've ascended through the ranks and have been given more and more responsibility. And we were talking about how, as you get higher up in the organization and are given more and more authority, the knobs that you actually get to turn are surprisingly few. This is something that we were talking about as being a surprising learning that we're both getting our heads around (or, in your case, have been for 20 years). Can you talk a little bit about that astonishing claim? I mean, how can you defend the claim that as you get higher in the organization, the knobs you actually get to fiddle with in a meaningful way are surprisingly few?
Elisabeth H. (00:25:29): Yes. You go from being an individual contributor to a first-line manager, to a manager of managers, to a director. As a VP, I had directors reporting to me who had managers reporting to them. I think one of the things that I certainly believed, along with plenty of other people who I've talked to over the years: when I was an individual contributor, I believed, "Oh, well, those people with the big titles, they have the power to fix things, and why aren't they? They should be." Doing a little bit of armchair executiving. The more I rose, the more I discovered that, yes, I have fewer knobs, and no, controlling directly just flat out doesn't work. Not that I was ever inclined to try to micromanage. It's just not my style, but it doesn't work for so many reasons.
I mean, first of all, it robs people of their agency, which is bad. But second of all, you don't have the situational awareness to actually be able to make good decisions about what you change. You can really only talk about outcomes. So of the knobs that you have, one is structure, which you talked about, and it's so important. Another is coaching, helping your people get better and better as leaders. And so I spent a lot of my time as a VP coaching people to help them learn how to take their hands off knobs that they had no business having their hands on, and instead look at how we make the right requests to increase visibility, increase flow, and empower people, but also be really clear about the outcomes. Because if you just empower a bunch of people and say, "Go do whatever you think is right," you end up with complete chaos, because everybody has a different idea of what the right thing is.
Gene (00:27:23): Gene here. In this interview, Elisabeth shared with me her mental model of how she views the way organizations work, which has shaped how she's made decisions for nearly 20 years. I was so excited when she told me this, but in order for you to understand why, I'd like to take a moment to step back and describe why I'm doing this podcast. This podcast is part of a quest I am on to better understand leadership and how organizations work: to better understand how and why some organizations create amazing outcomes for all of their stakeholders and their customers, while some seem almost preordained to fail entirely. I am so honored that I get to learn from the people who have created or studied greatness, and to hear how they view the world. Later in this podcast series, I interview someone else who has tremendously influenced my thinking, Dr. Steven Spear.
He is famous for many things, including writing the most downloaded Harvard Business Review article of all time, Decoding the DNA of the Toyota Production System, which was published in 1999. This was based on his Harvard doctoral dissertation, and in support of that work, he decided to work on the plant floor of a tier one Toyota supplier for six months. Since this January, I've been talking with Steve weekly, trying to understand his mental model of the world, because for years, I've been dazzled by his clarity of thinking and his amazing ability to predict how organizations work, both in the ideal and in reality. To my amazement, he seems to describe organizations using only two constructs. These two constructs are structure and dynamics. Structure is how we organize our teams. Structure also includes the architecture those teams work within, as well as the officially sanctioned ways those teams work together.
In other words, the interfaces between those teams. And dynamics is everything else. Dynamics include how signals are transmitted and received by individuals within the defined structure. The notion of signals includes how, where, and how frequently feedback is created and received by teams. It also includes how culture can either amplify signals or suppress them. For instance, in a culture where it's unsafe to tell bad news, signals can be extinguished entirely. Now that I've explained that, maybe you'll understand why I was so surprised and so excited to learn that Elisabeth's mental model of how the world works is astonishingly close to how Steve views the world. So I asked her to describe further her model of balance, structure, and flow.
Elisabeth (00:29:56): Sure. Even though I've been thinking about it for 20 years, it's fairly raw in the sense that I haven't tried to explain this, but I have a very similar mental model: balance, structure, flow. And so structure, I think we've talked a lot about structure and the organizational structures and how that affects things [crosstalk 00:30:15].
Gene (00:30:15): ...architecture, right? I mean...
Elisabeth (00:30:15): Right, yeah.
Gene (00:30:18): [crosstalk 00:30:18] It was the architecture that prevented people from actually working independently, however you organized the teams. That's why you cared about it, just to confirm?
Elisabeth (00:30:26): Yeah, totally. The organization of the teams and the architecture, as Conway's Law states, are inextricably linked.
Gene (00:30:32): Right.
Elisabeth (00:30:33): So absolutely, structure, for all dimensions of what we might mean by structure. But structure by itself, that's a snapshot in time, and you're not actually seeing the outcomes, which gets to flow, which of course is in the second of the five ideals. Flow is absolutely essential. But then there's one more piece that I would add to that, and that's the balance piece. And that really refers to looking at the system and what state the system is in. Because if you have a system in homeostasis, where all of the forces are balanced and the system as a whole continues to maintain its operating state, then if you change a thing, you can start to at least see correlation, if not causation, with respect to changes in outcomes.
But if you already have a system that is so perturbed by the last five reorgs that you did, and the system hasn't settled down yet into its steady state, then you can't tell if the most recent thing that you changed is the reason why something changed or not. So the balance piece is keeping everything in balance so that the system as a whole is healthy.
Gene (00:31:43): Elisabeth mentioned Conway's Law several times. My favorite formulation of Conway's Law is from Eric S. Raymond. He said, "If you have four groups working on a compiler, you will get a four pass compiler." This was based on famous observations that Dr. Melvin Conway published in 1968. What he found on a government contract was that when you have three teams working on a compiler, you get a three pass compiler, and when there were four teams, they wrote a four pass compiler. And thus he wrote this law: any organization that designs a system will produce a design whose structure is a copy of the organization's communication structure. And as Elisabeth put it brilliantly, the architecture of the system and the way we organize the teams are inextricably linked.
Another amazing thing that Elisabeth has done is create a simulation called word count. In this simulation, teams work together in a fictitious company to create word counting software. Over the years, she has run this simulation over 150 times, and the symptomology and the patterns she's observed are so fascinating. Later in this interview, Elisabeth will talk more about the simulation. But before that, we talked about the famous MIT beer game, which, incidentally, I recently spent weeks studying with Dr. Steven Spear. The MIT beer game is an experiential learning business simulation created by Dr. Jay Forrester and his colleagues at the MIT Sloan School of Management in the early 1960s. Generations of business leaders have played it, and it's integrated into most modern MBA curriculums. It was intended to demonstrate a number of key principles in supply chains. The game is played by a team of at least four players, often in heated competition.
According to the authors, the purpose of the game is to understand the dynamics of a typical supply chain. Players take the roles of the manufacturer, the distributor, the wholesaler, and the retailer. Everyone is penalized for any excess inventory they hold, as well as any unfilled back orders. And there's a two turn delay between ordering from your supplier and when the order is fulfilled. The game typically starts with no communication allowed except for a sheet of paper. Customer demand to the retailer is not known to any team members in advance. In the most famous variation of the MIT beer game, customer demand levels are very, very simple. The initial condition is a consistent demand for one unit per week. Then, several turns later, customer demand goes up to two, where it remains constant. Across thousands of trials across decades, the results are astonishing and also very consistent. Often, in the simplest scenario, where customer demand only increases once, a surprisingly high number of teams perform very, very poorly. It is all too common that inventory levels across the entire system, a very simple system with only four nodes, grow exponentially.
It seems that all it takes to create this disaster is for one person to over- or under-order, and then the inventory problem becomes almost unfixable. They call this the bullwhip effect. Famously, researchers found that Fortune 50 CEOs perform no better than high school students. Verbal communication between players is against the rules, so feelings of confusion and disappointment are very, very common. Players look to one another, frantically trying to figure out where things are going wrong. Most of the players feel frustrated because they are not getting the results they want. Often people report that their teammates do not understand the rules of the game. In the show notes, I'll link to my favorite paper by Dr. John Sterman from MIT, summarizing his observations and learnings from the MIT beer game. Now let's go back to Elisabeth, as I ask her about her own personal experiences with the MIT beer game.
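As an aside, the ordering dynamics Gene describes can be sketched in a few lines of code. This is an illustrative model, not the official game rules: the target inventory of 12, the "reorder what was asked for, plus half the inventory gap" policy, and the assumption that every order is fully filled after the two turn delay are all made up for the sketch.

```python
from collections import deque

STAGES = ["retailer", "wholesaler", "distributor", "factory"]
DELAY = 2       # turns between placing an order and receiving it (assumed)
TARGET = 12     # inventory each player tries to hold (assumed)

def simulate(weeks=36):
    """Return each stage's largest single order over the run."""
    inventory = {s: TARGET for s in STAGES}
    backlog = {s: 0 for s in STAGES}
    # shipments already in transit; starts in steady state of 1 unit/turn
    pipeline = {s: deque([1] * DELAY) for s in STAGES}
    peak_order = {s: 0 for s in STAGES}

    for week in range(weeks):
        # the transcript's scenario: steady demand of 1, stepping up once to 2
        incoming = 1 if week < 4 else 2
        for s in STAGES:
            inventory[s] += pipeline[s].popleft()   # delayed shipment arrives
            owed = backlog[s] + incoming
            shipped = min(owed, inventory[s])
            inventory[s] -= shipped
            backlog[s] = owed - shipped
            # naive policy: reorder what was asked for, plus half the gap
            # between target and current net inventory
            order = max(0, incoming + (TARGET - inventory[s] + backlog[s]) // 2)
            pipeline[s].append(order)  # simplification: orders always get filled
            peak_order[s] = max(peak_order[s], order)
            incoming = order           # this order is the next stage's "demand"
    return peak_order

if __name__ == "__main__":
    for stage, peak in simulate().items():
        print(f"{stage:12s} peak order: {peak}")
```

Even with end-customer demand moving only once, from one to two, each stage's largest order tends to be bigger than the stage below it: the bullwhip effect in miniature.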
Elisabeth (00:35:30): So here we are in mid-March recording this. It's an incredibly weird time. The world is locked down in a pandemic and toilet paper is out everywhere. Now, I did run the beer game once internally at one of the software companies that I worked at. This was years and years and years ago. I think our outcome was very typical; it was very much what you described. Even though the persistent demand had only gone up by two six packs, the consumers were ordering two more six packs every week, and that was a steady state, but the system fluctuated wildly, and at the end, the manufacturer ends up with just so much beer. And the reason for that, as you said, is the way that the structure of the system is designed: there's a level of transparency that each stage in that supply chain is unable to give the other stages.
And so the only signal that they can give is orders. And so in a system where there's a little bit of scarcity, because demand has gone up, the customer keeps coming back to say, "Well, are you still out of beer? I want beer." And so the retailer starts ordering more, hoping that they will get even just a tiny fraction of their order, which results in the distributor then feeling the scarcity and ordering more from the factory. And then the factory starts saying, "Okay, okay, I guess demand's nuts. Let's just put a whole bunch into ferment." And then there's this latency as you wait for the fermentation to finish, right? And if you could pass back the information, instead of attempting to get what you need by just quadrupling your order and hoping to get one case out of the 20 that you're ordering, or whatever, if you could pass back and say, "Look, we're constantly short. This is what we need."
Maybe that would help. I don't know. It would be really interesting to run the beer game with some changed rules. So I have a hypothesis, and we're going to find out somewhere between six to 12 months from now whether or not it's true: will the grocery stores be making massive forts out of all of the toilet paper that ends up getting shipped to them, because we're out of toilet paper and paper products in general in all the grocery stores? I was just at the grocery store this morning for some supplies that were very much needed. Fortunately toilet paper was not one of those, because I have never seen so empty a shelf in an open grocery store. So are we seeing an example of the beer game, where the toilet paper manufacturers just didn't have enough in stock? And so are we going to see this whiplash? I don't know. It's going to be fascinating.
Gene (00:38:09): I was talking with Dr. Steven Spear, and I asked him, if there's one thing about the structure that you could change to ameliorate the effects, what would it be? And he said, "Just allow one message to come back the other way. So you order, and you'd get back, 'I got your order,' and better yet, 'I got your order and you'll get it in six turns.'" He thinks that would go a long way toward actually getting the system down to a stable state sooner. So tell us, does that resonate with you, and how does it relate to your own experiences with the word count simulation?
Elisabeth (00:38:45): So let's talk a little bit about the word count simulation, because it is fundamentally different than the beer game in the sense that there isn't this structure. It's really an abstraction of a software development organization. So when you start, there are also four workstations. So there must be something magic in [crosstalk 00:39:01] four workstations. The four workstations are: there are testers, there are developers, there are product managers, and there's a special role, the computer. In designing the simulation, I didn't want to create a simulation where the standard for what is good was something wholly subjective. I wanted to have something where you could have a notion of "meets a customer's need," and I wanted it to be able to "execute," but I didn't want the complexity of having to enforce "this is done in Java," and I wanted to make it accessible to people who didn't write code for a living.
So the abstraction there is: the programmers are writing instructions in English, and the computers are interpreting them faithfully, not what the programmer intended, but what they actually wrote. So that's the fourth workstation, the computer. And at the start of the game, the structure is that there is an interoffice mail courier who passes messages between the humans within this system. So as a tester, if I have a question for a programmer, I have to write it out on a piece of paper, put it in an envelope, and pass it along with the interoffice mail courier. And so there is an asymmetry of information available within the system that mirrors how many, many organizations work. And so yes, absolutely, that asymmetry of information, which I think is what you're talking about when you say, wouldn't it be great if they could pass this information forward...
Gene (00:40:22): Information flows one way.
Elisabeth (00:40:24): Right. Yes. Yes.
Gene (00:40:27): Right.
Elisabeth (00:40:28): The information about specific things is only flowing in one way.
Gene (00:40:32): Right.
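An aside on the "computer" workstation Elisabeth describes: the point that a computer faithfully executes what the programmer wrote, not what they meant, shows up even in real word counting code. This toy example is my illustration, not part of her simulation:

```python
def word_count(text):
    # The instruction as literally written: split on whitespace and count.
    return len(text.split())

# The customer's intent was "count the words," but the literal instruction
# gives defensible yet surprising answers at the edges:
print(word_count("the quick brown fox"))             # 4, as intended
print(word_count("state-of-the-art word counting"))  # 3: is a hyphenated compound one word?
print(word_count("   "))                             # 0: whitespace-only input
```

Each answer follows from the instruction as written, yet any of them might fail the customer's acceptance tests, which is exactly the gap the simulation's "computer" role makes visible.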
Elisabeth (00:40:34): So the word count simulation is so rich. At this point, I've run it over 150 times myself. Other people have also run their variations on this thing. And each time is different, and yet there are patterns that are very similar. And one of the common things that happens is, even though there is a mechanism for getting information between the humans in the organization, they don't use it. Frequently in round one, nobody actually communicates in any way with any of the other groups. I once had an interoffice mail courier who was so concerned at the start of the simulation, based on the structure, so concerned that they were going to be overwhelmed running between tables, that they recruited themselves a helper. So there were two of them. And the two of them were asked to pass exactly zero messages for the entire first round of working. Nobody pulled their head up out of their own work long enough to think about communicating with any of the other groups. So the product managers sent zero messages to the developers, and the testers asked zero questions, and similarly with the developers and the testers. Amazing.
Gene (00:41:47): Just to process that. So as a leader of this organization, you might be thinking you actually want these teams working together to achieve a common objective. And so if you don't see any communication, one might be a little bit concerned, right? Is that the aha moment?
Elisabeth (00:42:00): So for me, the aha moment was realizing the extent to which people get so wrapped up in what's in front of them that they forget to take a step back and look at the whole system and the outcomes that the system is trying to achieve. There's actually no leader in the word count simulation.
Gene (00:42:00): Right.
Elisabeth (00:42:15): But I have had... So sometimes, when I've run this, leaders in the organization will take the role of observer, which is another role that is available. And on two occasions out of the 150, a leader walking around with me, observing and surveying the efforts of everyone, said to me, "Oh, isn't this great? It's so quiet in here. Everyone is so focused. This is exactly how it should be." Now, fortunately, one of those two people was actually an Agile coach themselves, as well as being a leader in the organization, and they were kidding.
The other person wasn't kidding. They legitimately believed that this was what a productive organization looked like. And at the end of round one, of course, they were nowhere close to being able to recognize revenue. They had come nowhere close to being able to... So I played the role of the customer, and I'm the one who decides whether or not to pay out. I am actually the best customer you will ever have. I'm 100% consistent. I am extremely clear about my requests. I am not trying to trick you in any way, shape, or form. There are no tricks. There's no hidden anything. I have acceptance test cases; if you even hint that you want them, I will give them to you. So I am a really good customer, and they still couldn't come anywhere close to delivering something that I would pay money for. And so for that one leader who thought that that's what productivity looked like, and then realized how far they were from being able to make revenue, I hope that was a huge aha.
Gene (00:43:54): Oh, this is so great. So you paint this one symptomology where there's no communication going on between groups. So, keep going.
Elisabeth (00:44:01): Okay. So in round one, I, as the facilitator, have created the structure. But at the end of round one, I say, "All right. So you worked within this structure; here were the rules. Now you own your process. The one thing that you don't get to do is fundamentally change the role of the computer to the point where they directly do whatever the customer asks. You still have to maintain this idea that there's software, and the fiction that these humans are computers. Other than that, you can change your process in any way, shape, or form, whatever you want to do." And this is where the results start to really diverge, because some organizations, some groups, will at that point just change one or two little things, like they'll fire the interoffice mail courier.
Sometimes they don't even do that. I had one group not fire the interoffice mail courier for the entire simulation. They still wanted to communicate by memo, go figure. I don't know. Anyway, so sometimes they'll change one or two things. Sometimes they will throw out the rule book altogether. And that gets really interesting. You throw out the rule book and it's now utter chaos, utter chaos. You've got people duplicating effort left and right. You've got nine different people talking to the customer and then bringing the information back, and it becomes a giant game of telephone. And the organization ends up getting whiplash. That's an example of everybody being empowered, but with no alignment. And so the outcome doesn't typically get any better.
Gene (00:45:28): But there are some groups that converge upon a successful set of patterns and actually do finish the simulation in a good state.
Elisabeth (00:45:35): Most groups that go through, the vast majority, I would say. I've had maybe two or three fail to ship utterly. That very rarely happens. My favorite fail-to-ship story involves a group of Agile consultants. They were all super, super experienced Agile consultants. To be fair, and in their defense, I threw more roadblocks in their way than I usually do. And we were in an environment that was itself fairly chaotic, because it was at a conference. But still, it cracked me up that a group of Agile consultants couldn't ship. But-
Gene (00:46:08): It reminds me of this book called Dangerous Company: The Consulting Powerhouses and the Businesses They Save and Ruin.
Elisabeth (00:46:14): I love that book. But the vast majority do actually get to the point where they can ship. Round one, I've set the rules; round two, they start to own their working agreements. And the whole point of word count is to reflect and adapt, and the power of reflecting on and adapting your working agreements to improve the outcome, ultimately, that is the whole point. Now in the process, oftentimes organizations will reinvent continuous integration, they'll reinvent cross-functional teams. So basically, teams end up reinventing at least some portion of Agile as they execute. Since there are so many stories, maybe one story I could tell is actually my favorite story, because in a way it also relates to partnership.
I was reflecting that a lot of the Unicorn Project and the Phoenix Project is really about becoming a good partner. And so in this one organization, there was a group that was responsible for reporting. They had a terrible reputation within their organization. In fact, at the time I was a consultant, and I had worked with a lot of teams within this company, so I knew a lot of the people there. And I came in to work with the reporting group, this one group, and somebody from a different team who I had worked with before saw me in the hall and said, "Oh, what are you here for?" And I explained, "I'm here to work with this one group." And they kind of shook their head and they said, "Oh, good luck with that."
Because again, this team's reputation was terrible. I had been warned in advance: they can't deliver anything; when you do the word count simulation with them, you're going to have to go really easy on them, because they don't [crosstalk 00:47:52]. So we get into it, we're doing the word count simulation, and in the first round, they kind of chafed at my rules, but they followed them while rolling their eyes at me like, "Oh, stupid consultants, this is dumb. Why would we ever work this way?" And in round two, I said, "All right, you own your working agreements now. Do whatever you think is best." And they were one of the very, very few that shipped software in round two. Because the minute that the handcuffs were off and they were allowed to work in a not-stupid way, they immediately formed a cross-functional team, everybody gathered around the same workspace, so talk about locality and simplicity.
They got rid of all of the overhead and they just executed. They were in close communication and collaboration with me as the customer. And they were really good at asking for examples of what I considered to be an acceptable deliverable. And they were delivering as of round two, and in round three, they just cranked up even more. They were pounding out capabilities so fast, I had to make up new things that had not been part of the simulation before. In debriefing, I very gently asked them, "Hey, so..." And I didn't quite say, "You have a terrible reputation," but I kind of gently alluded to that and asked, "So help me understand." And here's what I learned: as the reporting group, they were always at the end. In that particular organization, people did not at the time think in advance about the data that would be needed for reporting.
And so the data wouldn't be collected at the beginning. They would have to figure out, how do we synthesize the results, how do we get this out of... The data is not already there, but we can kind of artificially synthesize it and get to... So they were having to go back and report on data that wasn't there, and find ways around reporting never having been considered from the very, very beginning.
And because they were always in an impossible situation, where they were going to be asked to do the absolutely impossible thing, and always under pressure, they had learned how to work together incredibly well. So it was another example of how you slice the work. It was like the times leadership would bring me in to "fix the QA organization": another example of work that was sliced off in such a way that it made it super difficult for that group to succeed and super difficult to deliver. But boy howdy, had that group learned how to work well together, because they had to. It was a matter of survival.
Gene (00:50:37): You almost seem to minimize it, and yet it seems like you're actually suggesting the opposite: that as a leader, your job is to draw lines well, and that to draw lines poorly leads to these horrendous outcomes. As a leader, structure, drawing those lines, is one of a very few things you can actually directly control. Can you maybe defend that claim? Because it sounds so ridiculous; as a leader, you have all the power, you can tell anyone what to do. Can you just take a moment to defend the claim that drawing those lines well is one of the few things that leaders can control and need to do well?
Elisabeth (00:51:17): So how do I explain this? I'm remembering a moment in the Phoenix Project where Bill knows that if they do this deployment of the Phoenix Project, it is going to go very badly. And he sends this note, and the word comes back from Steve and Sarah: "No, you can't delay. You absolutely must." And in fact, I think Steve has an epiphany as a result. They do the deployment, things go terribly, it's exactly what Bill said would happen. And then Steve chews him out, up one side and down the other, for having screwed up. And Bill's like, "Yo, but I told you exactly what would happen." And then Steve gets really upset with the "I told you so," like, "You have to be more productive than just telling me 'I told you so.' You've got an attitude problem."
Which of us has not been in that situation at some point in our careers? And then Steve later comes back and apologizes. I'm bringing the story up just because I think it's a great example in the Phoenix Project of this exact dynamic, over and over again. Where I have seen, and for that matter been, the leader who comes in and inappropriately gives, say, a programmer a direction like, "Yo, this is stupid. You must go fix this," et cetera.
As a leader, you don't have the situational awareness to know what is feasible and what's not. And so you're more likely to screw it up, just like Steve did, than if you go in and say, "Here's the problem I'm trying to solve." And that actually became one of the most important phrases in my vocabulary: to say, as a leader, "Here's my concern. Here's the problem I'm trying to solve. Here are the constraints that we have. Here's the outcome that we need to achieve. Help me understand what it is going to take. Instead of telling me that you can't, help me understand what would have to become true in order to achieve this outcome within this set of constraints."
And sometimes the solution is the null set. There is literally nothing that you can do, short of changing the laws of physics, to achieve what you're asking for. You're asking for the wrong thing. Now we get to have a real discussion about, "Well, okay, what is achievable? Given everything that we've got, all the resources at our disposal, all of the time that we have available to us, and given this set of constraints, what is the most we could theoretically achieve?" That is a productive conversation. It's not a productive conversation for a leader to go in and say, "No, you must do blah." Because you're going to end up with a whole bunch of unintended consequences. It's going to go so badly.
Gene (00:53:54): I love it. I think this is from the Lean literature: the further you are from the problem, the less ability you have to actually make a good judgment about what to do for [inaudible 00:08:05]. But one of my favorite quotes is: in the 1700s, the British government engaged in a spectacular example of top-down bureaucratic command and control, which proved remarkably ineffective. Georgia was still a colony then, and despite the fact that the British government was 3,000 miles away and lacked firsthand knowledge of local land chemistry, rockiness, topography, accessibility to water, and other conditions, it planned Georgia's entire agricultural economy. And the result was that Georgia was at the lowest levels of prosperity and economic wealth of the 13 colonies. So you just-
Elisabeth (00:54:38): What a beautiful example.
Gene (00:54:40): It was really astonishing. And in the State of DevOps Report, one of the top contra-indicators of performance was the extent to which groups rely on approvals from distant authorities to promote changes to production.
So I think both of those demonstrate exactly what you're talking about. And the dynamic you described between Bill and Steve, the CEO, in the Phoenix Project relates to the notion of psychological safety in the Unicorn Project. If we boil it down to structure, people emitting signals, and people receiving signals, I know it's very simplistic and hyper-mechanistic, but through that lens, there are certain things we do that increase the ability of people to hear signals, and things that suppress signals, like not being able to tell bad news, or being overly prescriptive, or telling people what to do. These are things that inhibit people from getting the work done. Can you maybe react to that?
Elisabeth (00:55:33): Yeah. I think not being able to give bad news is probably one of the most damaging, not just to individuals, which is already bad enough, but to the organization as a whole. Having to maintain a fiction that things are fine is just so incredibly toxic. Yes, that's been my experience, most definitely. And so I was nodding, because, oh yes, absolutely. It's so painful to see an organization where you can't speak... where you are not allowed to say what is true to the leadership. So incredibly toxic.
Gene (00:56:13): Gene here again. I want to take a moment to concretize what Elisabeth and I have talked about so far through the lens of the five ideals. The first ideal is locality and simplicity. Specifically, we talked about structure, and the software architecture being just as important as the organizational structure, as Conway's Law suggests. Elisabeth presented numerous examples of architecture that either shackled teams together, which led to all sorts of bad outcomes, or enabled teams to work independently and get far better outcomes.
And the fourth ideal is psychological safety. We just talked about how important it is that leaders create an environment where it is safe to tell bad news. While writing the Unicorn Project, it was so rewarding to revisit Google's research into what made great teams great. In Project Oxygen and later Project Aristotle, they set out on a multi-year research project to understand exactly that. Year after year, the top factor they identified as predicting great teams was psychological safety, as measured by the degree to which members of a team feel safe to say what they really think and to take risks without fear of ridicule, embarrassment, or punishment. This factor ranked higher than dependability; structure and clarity; meaning of work; and impact of work.
And the fifth ideal is about customer focus. This was a topic discussed in episode one with Dr. Mik Kersten and Peter Moore. Specifically, one piece of advice that Peter Moore gave was that you must never talk like a cost center, because otherwise you'll be treated like one. Profit centers get more funding; we joke that in a profit center, you can always get more budget. You need $5 million? Just ask, and they'll be able to find the money somehow. But in cost centers, the mission is to do more with less, and therefore you get your budget cut 3% every year. QA, and for that matter information security, are often seen as cost centers, as opposed to development, which is typically viewed as part of a profit center. I asked Elisabeth what advice she would give to people who are stuck in a cost center, and about her experiences when she felt like she was stuck in a cost center herself.
Elisabeth (00:58:21): So in terms of being in a profit center versus a cost center, you're right. I spent the vast majority of my career in R&D at mostly enterprise software companies, where the product that we sold was the software that the group that I was with built. And that is a very different feeling than being part of a cost center. But I did spend one year in an IT organization. So it's a very small amount of time, and I only got a sort of keyhole view into what that world feels like, but it left an indelible impression on me. And the other thing is that because so much of my early career was spent as part of a QA organization, or running a QA organization, and QA at the time was treated very much like a cost center. Some of the justification that I would have to make to the executive leadership of the companies that I was with was, "Well, what's the ROI on our testing?"
Which is how I ended up in my fictional character Chuck's shoes being hauled in front of the VP to justify the spend in my organization. And I would say that in those cases, it really is an issue of drawing the lines in the wrong place. And I think that the Phoenix Project and the Unicorn Project both do a great job of illustrating the extent to which IT is so strategically critical, especially these days.
It comes back to, was it Marc Andreessen who said, "Software is eating the world, and even if you don't think you're a software company, congratulations, you're now a software company"? That wasn't necessarily true when I was part of an IT organization, because that was many, many, many years ago. But it certainly is absolutely 100% true today. And so it comes back to drawing lines. At the executive staff level, and for that matter at the board level, I really think it's the wrong framing to think about anything that is strategically critical to the company as a cost center. I would question thinking of even the stuff that we don't consider strategically critical to the company as a cost center.
I still think it's the wrong framing, because you end up with these inefficiencies, and the irony is that it costs far more than it would if you didn't draw the lines that way. Tremendous irony.
Gene (01:00:25): I think it was Chris O'Malley, the CEO of the famously resurgent mainframe vendor Compuware, who said, "There's no investment you can make these days that doesn't involve software." In fact, in the previous episode, Mik and Peter said that if you are in a cost center, you're almost guaranteed to get the wrong outcomes, because the customer is not even in the picture: not in the Gantt chart, not in the rank-ordered list of things that you'll never get around to. Does that resonate with you?
Elisabeth (01:00:56): Totally resonates, totally resonates with me. And yet a lot of The Unicorn Project is about fixing the build system. And the build system is not something that customers see directly. And at the same time, if your build system is as messed up as the one that Maxine found when she gets into The Unicorn Project, then you can't ship software.
And so if you don't view your build system as part of the necessary mechanisms by which you ship product ... One of the things that I did was to take that notion of toolsmiths, or release engineering, and that's where we put our best engineers. And the cost center mentality says, "No, you find the cheapest workforce you can." But that is the most expensive thing that you can do, because you won't be able to build the software.
Gene (01:01:45): Oh my gosh, yeah. One of the things I find so delightful in The Unicorn Project is contrasting exactly that. Like you just said, you put your best engineers on the infrastructure that developers use in their daily work, because it creates that necessary feedback loop that's fast, so people get feedback on their daily work. And it's all too common, not just in cost centers but in many development shops as well: they'll put their most junior engineers on the build systems. You have the best engineers on features, the second-best engineers on the backend APIs, and the worst engineers and the summer interns on the build systems. Which, I'll admit, is the way we did it at Tripwire back in 2007, and it did not treat us so well.
Elisabeth (01:02:27): I was going to ask, how did it work?
Gene (01:02:29): No, it was terrible. We used to be able to integrate code within a week during the merge process. Then, that was in 2006, our CruiseControl server went down. I was part of the people saying, "Eh, it's fine." Right? And a year and a half later, our code merges don't take one week, they take six weeks. And so absolutely, that was one of the critical consequences of deemphasizing builds. We wouldn't even staff the req. We wouldn't open up the hiring req, because we wanted developers working on features. In hindsight, a totally wrong call. Developers were not productive. And if it looked like they were productive, it was an illusion.
I'm interrupting myself here because wow, it is still so funny and painful to hear that story. I want to take a moment to clarify exactly what happened back then. Honestly, I can't remember what year it was exactly, but 2006 sounds about right. This is when our CruiseControl build server went down and we didn't replace it for years and as a result, developers no longer had daily feedback on their work. We no longer had a build server to tell us when a developer made a change that broke the build. Instead, we only got feedback between teams when we merged the code at the end of the project, which went from taking only one week to do, to taking six weeks to get everything integrated back together again.
So we went from shipping new releases once every nine months, to once every one and a half years, because the integration and release processes were just so painful that we had to do them less frequently. And as a result, customers had to wait sometimes one and a half or even three years to get the features that they were asking us for. The funny thing is, for years, people in development lobbied to get a build manager position created, but we never did. I'm embarrassed that I was part of the group who said, "Eh, it just doesn't seem that important," because we wanted to hire more developers to work on features instead. So 15 years later, it's very easy to laugh at how wrongheaded that decision was, but it was far less understood back then just how important dev productivity systems are. And I'm sure some of you out there still struggle to justify the business case for these types of systems. Okay. Back to Elisabeth reacting to the story.
Elisabeth (01:04:40): Yes. But it's so counterintuitive. See, that's the thing about so much of this stuff: your intuition typically guides you to, "Well, if we can't ship features fast enough, that must be because..." And the first thing that you notice, the first-order thing, is, "Well, that must mean that we don't have enough developers working on the features, and so we're going to fix that." Or, "We have a quality problem, so we should hire somebody with quality in their title to fix that problem."
And it really takes that big, giant step back, looking at the system as a whole, in order to see ... This is what you've laid out so beautifully in both The Phoenix Project and The Unicorn Project: take a big step back, create visibility, allow us to identify our bottlenecks, our constraints, our Brents within the system, so that we can actually put our finger on the right lever. And when you talk about leaders and levers, ultimately that is what a leader is supposed to do. And that is so hard, though. To have the intestinal fortitude to withstand the barrage of negativity coming from a Steve, who's in a really bad mental place, or a Sarah, who's a terrible partner, and to be resolute in looking for the real underlying root cause: so hard, but so necessary. That's what the leader is there for.
Gene (01:06:05): And to use your words, a leader is there to give the context and then to communicate, but also to guide and to remove obstacles. You're there to enable those teams. So there are two things I want to cover. So much of your work is based on this claim, and I learned so much from it: you say that it is all about feedback, and you've given many talks on the care and feeding of feedback loops. So talk us through that. One of the things I just loved about your work is that you really said that all of what we think of as QA is really just one form of feedback, of which there are many more forms. Can you explain that to us?
Elisabeth (01:06:48): Usually this goes better if I can actually show pictures. So I'm going to try to draw a picture in your mind.
Gene (01:06:53): Imagine.
Elisabeth (01:06:54): Imagine, as we start working on a given thing, whatever the thing is. We've identified that there is a need out there, some reason why we're doing the thing. We have some business motivation, presumably, to do the thing, some objectives that we want to achieve. And we start to plan to do the thing, in a traditional waterfall-y kind of project where we've got a planning phase, and then an implementation phase, and then a testing phase, and then we're going to release to customers. At each of those phases, we are speculating that we're doing the right thing. And so as we're starting to do designs or come up with plans or do wireframes or whatever it is that we're doing, we're speculating that we've actually understood the real need, and that our ideas about how to meet that real need are actually going to meet the real need.
And then when we get to implementing the thing (and again, picture a very phased approach), we're now basing all of our implementation off of that speculation. And we're speculating that this thing, this design, this architecture, this whatever, is actually going to work in the real world. And so we are further increasing the amount of speculation, and we don't actually start to see the amount of speculation come down until we actually start testing it in realistic kinds of scenarios. And so if you're doing a waterfall kind of thing, you end up with this massive amount of speculation, and all of that area under the graph that hopefully I've painted in your mind, all of that increasing amount of speculation, is risk. There is a tremendous risk that we actually identified the wrong problem to solve, that we're solving it the wrong way, that people are going to reject our solution.
And so the care and feeding of feedback cycles is all about recognizing that it is absolutely critical that we, I love the phrase, fail fast, as long as we don't just use it in a bumper-sticker kind of way. As long as we look at what we mean by fail fast. What we mean is that at each stage where we're making educated guesses, where we are speculating, we have a way to tell whether or not we're aiming on the right trajectory, so we can steer towards value. Coming back to the role of the leader, this is also what leaders do, they steer towards value-
Elisabeth (01:09:00): That's the role of a leader. This is also what leaders do. They steer towards value, obviously in collaboration with the whole team. It's not that they climb the mountain and come back with all the answers, but they have the responsibility. Leaders have the responsibility of making the decisions necessary in order to help steer towards value. So, feedback cycles: take the independent test group. I just don't believe in independent test groups anymore. But that is one example of feedback from an independent group, from a group that wasn't part of building the software. I don't believe in it because by the time you get there, you have speculated too much, and it's usually too late in the cycle for bad news. And then we start running into the psychological safety issue, where if that QA group finds a whole bunch of issues, you end up with that moment in time, like in The Phoenix Project, where it's just not safe to say this is going to be an absolute flipping disaster.
So you want to bring that feedback cycle in closer. And that's part of Lean Startup: build, measure, learn; build, measure, learn. It is about making feedback loops as tight as you possibly can, because you want to be checking your assumptions about the market or your customers or whatever. As a developer, I have an intention to do a thing. Did I actually do the thing that I intended to do? As a product manager, I had an intention for us to build something. Does the resulting solution actually do what I asked for? So all of these are examples of feedback cycles, and there are obviously more.
Gene (01:10:24): And what I find remarkable, and I think it was Jez Humble, co-author of The DevOps Handbook and co-author of the amazing book Continuous Delivery, who said that one of the biggest surprises and learnings for him in co-writing Lean Enterprise was that the same sort of discovery process that we find in Lean Startup is exactly how we find process improvements: one tinkers with the process, but we must continually monitor and learn and adapt, just like in the WordCount Simulation. Which I thought was a wonderful unifying construct that says regardless of where we are in the technology value stream, or in trying to serve our customers, or in trying to improve how our organization works, it is all about experimentation and learning. All of that requires fast feedback, fast and effective feedback.
Elisabeth (01:11:13): Yes, absolutely. The faster you can get the feedback, the more turns of the crank you get for any given unit of calendar time.
Gene (01:11:21): Oh, awesome. All right. So let's go to the topic that we talked about at lunch, the one that got me so excited that, one could argue, it actually led to the creation of this entire podcast, which is Sarah. So you asked me over lunch a question that I had never actually heard before or even pondered. And it was so startling that I was actually taken aback. I think I just burst out laughing, and I couldn't even formulate a response for many moments. And the question you asked was, what is Sarah's background? So Sarah, of course, is one of the more polarizing characters in The Phoenix Project. She's the SVP of retail operations. And of course, she returns in The Unicorn Project with powerful new allies. And the reason why I laughed was because, if you look, I have a Scrivener document where I actually built out kind of a UX persona document for every one of the major characters.
I built out resumes for the primary characters. I wrote out Q&As just to kind of experiment with voice and to help frame and create each character. But I tell you in all honesty, I did not do that at all with Sarah. So when you asked me what Sarah's background is, I had no clue.
Elisabeth (01:12:37): Oh, that's great.
Gene (01:12:41): For someone who is so essential a character, it was astonishing and startling. And so the comment I made to you was that, wow, yeah, that kind of demonstrates to me that she was actually kind of a caricature. She was kind of a caricature villain, but of course, she's based on someone that I had to work with in my career, who scarred me in many ways. In fact, in some ways, that actually enabled the frustration and anger that leads one to actually write a book for the [inaudible 00:04:13].
So I asked for some time to think about what Sarah's background is, and here's what I came up with. In fact, I think this is what we came up with together. We think that she's obviously very good at what she does, without a doubt. You don't ascend to that rank without being competent. I always thought that she probably came from a mergers and acquisitions background, which does tend to attract a certain group of ambitious people who treat people as kind of fungible, right? You buy them for X, and then you reduce their cost to Y, and you suddenly end up with a very profitable operation. And when it doesn't work, eh, well, win some, lose some. If you want to make omelets, you've got to break a couple of eggs. To get more concrete, I think she's probably a great merchandiser. She definitely understands that space, which is why she's been so successful in retail organizations.
You asked this great question when we were trying to dream up what's on her bookshelf, which I thought was such a great exercise, because I couldn't think of very many. The one that did come to my mind was Who Moved My Cheese.
Elisabeth (01:14:17): I think I had a visceral reaction [crosstalk 00:01:14:20]. I actually absolutely loathe that book.
Gene (01:14:23): She probably loves giving out that book. She probably has a whole shelf of Who Moved My Cheese.
Elisabeth (01:14:30): Here, let me patronizingly show you [crosstalk 01:14:31] could be a better manager.
Gene (01:14:34): Exactly. And maybe just to conclude the thinking of [inaudible 00:05:37]: I think she probably has a knack for strategy. But in her performance reviews, she is probably consistently in the bottom 25% of people leaders. She's viewed as a terrible leader of people. So I think most of us have had to work with a Sarah, sometimes in cooperation, sometimes in competition. Maybe let's talk about it in the third person. I hear you have a friend who's worked with Sarahs. What would this person say about having worked with Sarahs in the past?
Elisabeth (01:15:12): I think that you kind of nailed it with some of the descriptions of Sarah, the fact that she's hyper-competent, but the empathy is not there. We had core values at Pivotal, one of which was "be kind." She would just roll her eyes at that fluffy, hippie, group-hug crap.
Gene (01:15:33): I'm not here to be nice. I'm here to win.
Elisabeth (01:15:34): I'm here to win. Right. I'm not here to be nice. And that suggests that she would confuse nice and kind and [inaudible 01:15:40], that those three things would, as far as she's concerned, be in the same bucket. And so working with Sarah would be a very dehumanizing experience. And I think she is described as being condescending or patronizing, I can't remember which, in one of the books. But she is also described as being hyper-competent. I will confess it all: although I come from a very different background than what you described for Sarah, in my very, very early years, I suspect that I probably came across a little Sarah-ish to some people, because I was very focused on being competent and growing my competence and being known for being competent. And I had no patience. In fact, I still have, somewhere in my files, a performance review from approximately 1995 that says, "Does not tolerate fools gladly." And I didn't understand why that was a negative. So I think one of the reasons I was asking you about [inaudible 01:16:34] is that, although Sarah's behavior is totally unacceptable, and I'm not in any way trying to defend her actions, I was curious about her as a character, as a person, because I think that she is redeemable. I think she just doesn't know how to partner, and she needs some coaching.
Gene (01:16:54): Yeah. In fact, it was your advice that led to the change in the last scene, where Maxine actually goes out to lunch with Sarah. And I think the phrase was that it wasn't what she expected, and that she is maybe looking forward to their next meeting, which was that little sliver, that little crack in the door for redemption. There are valid reasons why we might have helped create Sarahs. She's been burned by technology before. She has learned that you must hold these, I'm trying to self-censor myself, you must hold these bastards accountable. Otherwise, they will walk all over you.
Elisabeth (01:17:28): Well, she's not wrong. Right?
Gene (01:17:30): Right, exactly.
Elisabeth (01:17:32): I have seen instances where technologists get so excited about technology that they completely lose sight of the business outcomes. And then millions and millions and millions of dollars get wasted with no outcome. And I think the Phoenix Project, $20 million in with no outcomes, was kind of one of those situations. And so it's understandable why Sarah is upset.
Gene (01:17:54): Right. [crosstalk 01:17:54].
Elisabeth (01:17:55): You got to hold these people accountable.
Gene (01:17:57): In fact, maybe just to go somewhere more provocative and controversial: in The Unicorn Project, the CI system, the continuous integration service, turns into this endless sinkhole for the best people. And it's like, that's actually not a core competency of the organization. Maybe we should get a real vendor in here so we can actually hold them accountable, and release our best people for other critical things to enable the organization.
Elisabeth (01:18:22): Yeah. Because that's about slicing the work. It is so tempting. And if you're Sarah and you are hyper-focused on being competent and winning, and not on the people, and you don't see things as a system, you see things in a very transactional kind of way, that M&A background, then your twitch is going to be, oh, well then we should go get a real vendor so that we have a single throat to choke on this. And oh boy, have I heard that single throat [inaudible 01:18:51] too much. But the tremendous irony, and this is where partnership is so important, is that it's actually going to backfire. It's going to backfire spectacularly. And so the only way out of the mess is to have that strong partnership, where she can be super focused on what she's good at, which is that business context and understanding where we have to get to, and to have the "here's my concern, here's the outcome we need, how can we get there" conversation, and have that strong partnership relationship. Part of that's her. But also part of that is that the IT organization has to be willing to partner with her. And it's so interesting to me how Sarah is vilified from the very moment that she enters the first scene. And so that partnership becomes impossible, because if the IT [inaudible 01:19:39] only sees her in a negative light, then they're not going to be good partners.
Gene (01:19:44): Wow. I thought that was so amazing. Elisabeth talked about how if one side refuses to work with the other, they can never really be good partners. This gave me pause because it reminded me of some very unhealthy outsourcing patterns I've seen in my career. Top of the list is the practice of: let's give all our dev work to one outsourcing partner, and then give all our QA work to another outsourcing partner to keep the first one accountable, and then give all our ops work to yet another outsourcing partner, because why not. And then, if that weren't bad enough, to make sure that the fox isn't guarding the hen house, let's give information security to yet another outsourcing partner. For all the reasons that Elisabeth described earlier, this is probably one of the worst ways to divide up work, because no one group can get anything done independently, and even the smallest unit of work has to transit across three or four different outsourcers.
The final indignity of this practice: every couple of years, let's shuffle all the outsourcers around and have them recompete for the work, just to keep everyone on their toes and keep the pressure on for everyone to deliver their best. I asked Elisabeth what she thought about this practice, specifically because Sarah probably loves doing this when it comes to IT, because it appeals so strongly to her desire for accountability.
Elisabeth (01:20:58): Well, I'm going to tell you a story that's actually not my story to tell, so I'm going to strip away any possible identifying details and just say it was in a situation where there were multiple vendors. And I suspect that this is one of those things like Dilbert cartoons, where people go to Scott Adams and say, you must have worked at my company because this is so spot on; it must have happened in multiple, multiple places with multiple vendors. My friend worked for one of the vendors and actually got formally reprimanded. It went in his file, because he had a collaborative conversation with a member of this supposedly badgeless project, where they were all supposedly working towards the same outcomes. He had a conversation with a representative from another vendor, and that got him in formal trouble at his employer, because he was basically giving information to the enemy.
Gene (01:21:50): Right.
Elisabeth (01:21:50): And so that's just one example of how these arrangements go wrong: we're going to have these vendors and shuffle the cards occasionally, but we're going to outsource this to this vendor and that to that vendor. The vendors don't have a good reason to collaborate. They're not actually on the same team. So one of the things that I've been thinking about a lot for the last seven, eight years is teamwork. What makes a team? A team is two or more individuals who are united by a shared mission or objective and a set of working agreements to achieve it. So if you have that vendor situation you described, you can never truly, truly be on the same team, because you don't have the same objective. The vendors are in a situation where they have to be looking out for their first team. In The Five Dysfunctions of a Team, Patrick Lencioni's book, one of my favorite concepts is the idea of your first team: who do you most strongly identify with? And for a vendor, their first team has to be their company. That has to be true. So you set up this structure where you've got all these different vendors that are supposedly trying to collaborate towards an outcome for the customer, but they aren't ever going to act like their first team is that project.
Gene (01:23:02): If we could wave a magic wand, if we were the organizational coach brought in to help Sarah learn to be a better partner, could you walk us through what your advice would be to Sarah, in terms of how she can better work with the technology organization? Maybe it's not just Sarah. It's Sarah and her technology counterparts.
Elisabeth (01:23:19): Yeah. That's the thing about partnership. It does have to involve everybody. It's not just fix Sarah and you've fixed the problem.
Gene (01:23:26): That's right. It is a symmetric relationship; it is a situation we co-create. So yeah, let's coach Sarah and the technology leader.
Elisabeth (01:23:33): Yeah. So I would say that there are three key things that I would be looking for. The first is: be trustworthy. Act in a trustworthy manner, which means no hidden agendas. And Gene, I can see you cracking up right now. Oh my goodness, does Sarah have hidden agendas, that whole "I'm going to break up the company and sell it for parts." And yet her heart is probably in the right place, which I know sounds super bizarre. How could that possibly be, when she's talking about destroying the company? But she's looking at shareholder value as the primary thing that she's trying to drive towards. And so the first one is: act in a trustworthy manner. Proactive communication, don't make commitments you can't keep, no hidden agendas. You yourself, on all sides, have to be trustworthy.
The second thing is: assume good intentions. And so if everyone is willing to commit to acting in a trustworthy way, then that means that, let's say, Gene, you and I are working on a thing and something isn't going well, I'm going to assume that your heart's in the right place, that you don't have hidden agendas, and that you are doing your best to act in a trustworthy way. And then I'm going to be creating that psychological safety, speaking of psychological safety, so that I can call you on it and say, hey, I am concerned because blah. And then we can have an open, honest conversation and ultimately act as a team, where we are united towards the same objectives and continuously adjusting our working agreements for how we're executing towards those objectives. Those are really the three things that I would just keep coming back to if I were trying to coach this group of leaders as they navigate these super difficult waters, because, of course, the situation they're in is really hard.
Gene (01:25:19): Can you expound on how one has that discussion in a way that is honest and effective?
Elisabeth (01:25:27): Yeah. The hole is in your side of the boat. Yes. I'm sorry. I'm [inaudible 01:25:32] on the extent to which Sarah just keeps trying to point fingers at the rest of the organization. In terms of how you have that conversation, you do have to be able to ultimately establish trust. But if you can, then one of my favorite books that's come out in the last several years is Kim Scott's Radical Candor. For me, one of the really eye-opening ideas, I guess that's the right phrase, is the idea of ruinous empathy, which obviously [crosstalk 01:26:01]. But there's this little two-by-two, four quadrants, where if you don't really care about the individual... or sorry, now I'm not looking at it, so I'm not remembering exactly how that quadrant is framed, but ruinous empathy is the opposite of radical candor.
Gene (01:26:19): Right. The other ones are manipulative insincerity and obnoxious aggression.
Elisabeth (01:26:24): Okay. And that's where Sarah lives: either manipulative insincerity or obnoxious aggression. She lives in those quadrants. But getting to that place where you can be honest and simultaneously kind, this is the difference between nice and kind. Kind means you don't have to be an asshole in how you say things. And so in terms of how you get past "the hole is in your side of the boat": okay, let's first make sure that we are all aligned on the same outcome. Do we all want the same things? If we don't all want the same things, we might as well stop trying to have this conversation. So let's get to a place where we can say we all want the same things. Now, let's talk about the situation and just put all of our cards on the table. And I think we see examples in the books of Bill doing this with Kirsten over the number of projects, et cetera, and with John, the security guy. Those were such wonderful examples of partnership. And so I think we see how they can be just super open with one another.
In fact, there's this moment in The Phoenix Project that I loved, where John asks Bill, "Haven't I ever done anything good for you?" And Bill goes through it, and we see his inner thoughts, and he's basically going, right, well, lying would be bad. Okay, so basically no, never. What a horrible thing to say to somebody. But that right there, that is the difference between nice and kind. He's not [inaudible 01:27:55], but he's saying it in the kindest way he knows how. How do you shift the conversation from "the hole is in your side of the boat" to a productive [inaudible 01:28:05] figure out how we go forward? We acknowledge there are no time machines. We can't go back in time. Holding people accountable for things that happened in the past isn't going to get us going forward. We all need to be on the same team, with the same objective. We have to have each other's backs and be able to trust each other, and we have to chart our course forward. So that's how I would have that conversation. It's really hard.
Gene (01:28:26): Yes, hard and yet so necessary to get the outcomes that we all want. Thank you so much for all of your time today. And I'm so excited that we could finally share some of these amazing gems and wisdom that you've shared with me for almost a decade. So can you tell us, Elisabeth, how people can reach you?
Elisabeth (01:28:46): Probably the best way to reach me is on Twitter. I am @testobsessed on Twitter. And there are days that I'm way more responsive on Twitter than I would be on email anyway. So I would say Twitter @testobsessed.
Gene (01:28:46): Thank you so much.
Elisabeth (01:29:00): Thank you. Whoo-hoo.