Gene Kim (00:00:00):
I am so excited to share the final episode of season one of the Idealcast with you. We'll be back again after the holidays with season two, continuing our learning journey with more great interviews. You're listening to the Idealcast with Gene Kim, brought to you by IT Revolution.
Gene Kim (00:00:24):
I've known of Jeffrey Fredrick's work for over a decade. First through the groundbreaking work that he did with CruiseControl back in 2005, which was my first experience with continuous integration or CI. Although I will admit that back then, I had only the most superficial understanding of what CI was and just how important it really is. He helped pioneer the use of CI, both through CruiseControl and as co-organizer of the Continuous Integration and Testing Conference, or CITCON. So much of this was informed by his experiences as head of engineering, head of product management and evangelizing these important concepts.
Gene Kim (00:01:02):
I'm delighted that I've had a chance to get to know him better because he's a coauthor of the fabulous book, Agile Conversations, along with Douglas Squirrel, and within the DevOps Enterprise community. We were introduced by Elisabeth Hendrickson, who was in the second episode of this podcast.
Gene Kim (00:01:18):
This interview is literally the byproduct of one of those serendipitous conversations that Jeffrey and I had. Jeffrey had made a couple of remarks that jolted my attention. Well, that's actually an understatement. When we were talking, literally everything he said was startling and seemed laden with meaning, which led me to ask him if we could repeat the conversation, but also record it for this podcast. For me, this is a remarkable episode because Jeffrey helped me synthesize and reflect upon some of the major themes from this entire podcast series.
Gene Kim (00:01:50):
I've mentioned that I'm on a quest to understand why organizations behave the way they do, and it was so helpful to talk through some of our respective learnings and reflections. I found so many of his observations to be startling and incredibly insightful. He helped me connect many dots and pointed to new areas that deserve more study.
Gene Kim (00:02:10):
In this episode, we talk more about the nature of knowledge work, which includes software and how it requires so much more conversation or joint cognitive work, and the challenges that this presents. We talk about what bodies of knowledge are required as we push more decision making and value creation to the edges of the organization. We revisit the concept of integration in some very surprising contexts and why it's so much more important now than say 50 years ago.
Gene Kim (00:02:37):
We talk about the MIT beer game, Kanban cards and the applicability of concepts that came up in the second Michael Nygard episode around generality and information hiding. Why "Are you happy?" and "Are you proud of your work?" are two very powerful questions and what they actually reveal about people and the work that they're performing, and why all of this is so important as we try to create organizations that maximize learning for everyone. And we talk about the work of Dr. Thomas Kuhn and The Structure of Scientific Revolutions and how it pertains to management models past, present, and future.
Gene Kim (00:03:11):
Jeffrey, I'm so glad you're on this episode today. So I've introduced you in my words. Can you introduce yourself in your own words?
Jeffrey Fredrick (00:03:19):
Oh, thank you, Gene, that was very nice to hear. I've been mostly working these days, coaching in organizations. I've been doing executive leadership coaching for the last several years, and I've been more recently doing executive team coaching off-site to try to work on the language that people use with each other. And the phrase I really liked, that I came across recently from actually your podcast: Elisabeth Hendrickson described the power of reflecting and adapting to change your organizational working commitments. And that's really been the focus of my work.
Gene Kim (00:03:55):
And by the way, why do you think conversations are so important?
Jeffrey Fredrick (00:04:00):
The conversations, I think, become a limiting factor on a group of people's ability to learn. And I think learning is the limiting factor for an organization to succeed, to adapt and thrive. That's where I've gotten to over time. This is not at all where I started. My whole career, I've been very, very interested in reducing suffering in software development through becoming better. And initially I had a very strong tool focus. You mentioned the work on CruiseControl. I was super excited about that. I had actually done a startup company starting in 1999, a company called Open Avenue, that I can now describe to people, when they ask, "What's it about?": it was like GitHub, but it was before Git. So we had the right idea. It turns out being early is like being wrong. I would say actually starting a software company in 1999, right before the dot-com crash, was really our downfall.
Jeffrey Fredrick (00:04:59):
But I was excited about that because I could see the struggles that people had in developing software, and the human pain of it was always very visible to me. My first job out of university was at Borland, which was a tools company, as the old-time developers will know. Often when I mention it, those people get very excited: Turbo Pascal or Turbo C or Borland C++ was often their first language, or first serious language. And it was a very interesting company, but it also had a culture a lot of times of late nights and the classic problems of software development. And I just always felt that there had to be a better way. And I have been looking for it really since then, from the early nineties, believing there had to be a way that didn't require as much suffering.
Jeffrey Fredrick (00:05:50):
And that's what led me first into tooling, both the startup that we did, and then after that, CruiseControl, because I knew viscerally that fast feedback would allow people to avoid a lot of otherwise painful problems. I didn't want success measured by the pizza box count. I wanted people to go home and see their kids on the weekends and things like that. And so that's what really drove me. It was only later, through the 2000s, as Agile was really being fleshed out by the early adopters, and I was so excited, it's so much better, and I really expected it to just change everything, which it has, but I expected much more change, much faster than happened. And I was left with this puzzle. What's going on here?
Jeffrey Fredrick (00:06:48):
I had always heard the phrase that the future's already here, it's just not evenly distributed. And I could see that playing out before my eyes, but why? And so I've followed this process, this evolution from tools to processes and from processes to people, which I think is a classic one. It really comes down to this: the interactions between people are what actually block improvement on the ground. In a given organization, if you're not improving, why is it? And it's going to come down to the dynamics between the people.
Jeffrey Fredrick (00:07:26):
And it's so interesting to me, because so much of it is generally under our control. People have much more ability to change their interactions with each other than they ever consider using. It's just easy for people to walk in and accept the dynamics that are happening, to just slot in: I guess these are the meetings we have. Why? Because they're on the calendar. Are they working? Well, who knows, but they're the ones we have. I'm just excited by the possibility to do more and different and better. Anyway, this is a very long answer to what [crosstalk 00:08:04].
Gene Kim (00:08:03):
So in your book, Agile Conversations, which I think is dazzling, one of the neatest parts of it is the citations and endnotes, the corpus of literature that you're citing. For example, whether it's Dr. Chris Argyris' work, the book Crucial Conversations, Patrick Lencioni, or Dr. Alistair Cockburn. I think you referenced the Harvard Negotiation Project.
Jeffrey Fredrick (00:08:28):
Yeah, that's right.
Gene Kim (00:08:29):
I had two reactions. One is that it's a very curious literature to be citing in a book about technology leadership. And yet on the other hand, it does seem, I wouldn't say obvious, but knowledge work does involve much more in the way of people interactions. Can you crystallize why you think conversations are more important now than they were, say, 100 years ago?
Jeffrey Fredrick (00:08:59):
Oh, wow. That's interesting. I think the immediate answer that comes to mind is actually twofold. One is I think that we tend to interact with a lot more different people, coordinating the work differently. And there's a lot more variability. That need to frequently adjust requires more conversation. One of the blogs I've been reading is by a historian; it's called acoup.blog, A-C-O-U-P dot blog. And he recently talked about farming in the ancient world. And he was describing how most people lived. They were subsistence farmers growing wheat or rice; he mostly focused on wheat because that's his area of specialty. And he talked about the knowledge that people would develop in their local context over centuries. And the thing about that is, while there was a lot of variability from year to year, from season to season, and a lot of knowledge you needed to have, it didn't have the same range of possible outputs for your existence.
Jeffrey Fredrick (00:10:10):
People didn't have as many choices about what they were going to do and what they were going to do together. Fundamentally, you had to go grow that food and then store it and then eat it to not die. Your options were kind of constrained, and that's just not true for us, either as individuals or when we come together on our teams and our projects: what are we going to make today? There's a huge range of what we might work on within the same company. The same team has a lot more choices about what to work on, what the outcome is going to be, what will make this year successful, than you had 100 years ago. So I think that need for collaboration and coordination makes conversations much more of an essential skill.
Gene Kim (00:11:01):
That's interesting. And by the way, I'm thinking about the Patrick Lencioni book, The Five Dysfunctions of a Team, and that was about the executive team. So the point at which integration happened was at the very top, around strategy interaction, and then got pushed down. It seems like there's something about the work we do where the point of integration happens at a very low level. It happens everywhere. So with the increase in specialization of knowledge, the integration happens multiple times a day, through every interaction. Does that resonate with you?
Jeffrey Fredrick (00:11:32):
Absolutely [crosstalk 00:11:34] that's true. It happens at a larger level when it's done well. I mean, one of the lessons of the book, the apex of the pyramid, the top dysfunction is inattention to group results. And so when the executives start focusing on their functional areas, like, I don't know about those people over in marketing, but we're going to have a great technology organization, or I don't know what's up with technology, but over in finance, we're going to kill it. When they don't look at the group results, they just start focusing on their areas of control, then you have this ultimate dysfunction.
Jeffrey Fredrick (00:12:11):
When it's done well, you have alignment across the business. And that is what allows you to push the coordination down to a lower level. And you can have that integration cross-functionally at a low level. That's success. But when you have to go up three levels and then go horizontally over to the other executive and come back down, those conversations aren't happening daily.
Jeffrey Fredrick (00:12:36):
When is our executive going to talk to your executive? Okay, we might get an answer next week. So when it's done well, we have the opportunity. And I think that's what it comes down to. We have the opportunity for more frequent consequential integration conversations at a lower level than we did before. The cost is system complexity. We're in a much more complex system that gives us a lot more flexibility for what we generate. And this is going to rely on those low-level integration conversations to make it happen.
Gene Kim (00:13:09):
Gene here. I had a bit of an aha moment right here, and it was around the word integration, which is strange since I've used this word a lot in this podcast, especially with Dr. Steven Spear. The dictionary defines to integrate as combining one thing with another, so that they become a whole. I've heard Steve take this even further. We want to combine things with one another so that they become much greater than the sum of their parts. It's funny that upon listening to this for the third time, the word integration suddenly takes on a new meaning for me. So Jeffrey helped bring continuous integration to the masses through CruiseControl. Continuous integration is about creating continuous code builds, continuous integration of code commits from different teams and continuous testing to make sure everything still works together when you combine them. This resulted in the opposite condition of what was all too common back then, and still too common even now: that integrating code from multiple teams would often take months or even years.
Gene Kim (00:14:10):
Jeffrey mentions later in this interview that many projects didn't even make it out of integration, because "Integration is where projects went to die." In the famous book, The Five Dysfunctions of a Team by Patrick Lencioni, it suddenly seemed like a very similar concept, not about integration of code, but integration of objectives and teams. The main character in that book is a new CEO who is attempting to understand why all her top executives don't trust each other, and exploring what is required to get them all working towards a common objective.
Gene Kim (00:14:44):
In this book, there is very infrequent integration of something, of strategy, of execution, of a shared consciousness or creation of shared goals. This resulted in very poor performance. They were losing in the marketplace. There was rampant finger pointing, a sense of lack of accountability, and conflicts within the organization that had to be escalated to the highest levels of the organization. That failure to integrate shared goals at the highest levels made integration at the lower levels impossible, which was why it was so problematic when people had to work together from different silos. Because of the absence of shared goals, in order to make a decision, they had to escalate it all the way up to the highest levels.
Gene Kim (00:15:24):
The notion of shared goals at the top being a prerequisite, or a precondition, of being able to push integration lower into the organization, I guess now seems obvious. And a corollary is that any integration that was happening in the before scenario of a dysfunctional team was only happening within each executive silo. I'm immediately reminded of how, in a previous episode, Dr. Steven Spear described how the level at which integration occurs within military branches has continued to get pushed down over the last 100 years. And that integration in healthcare organizations now has to happen between scores of different specialties.
Gene Kim (00:16:03):
In his DevOps Enterprise Summit talk at Vegas Virtual two weeks ago, he talked about how, in healthcare organizations that he's worked with, the lack of any structure that allows integration between, say, nursing and pharmacy at the local level means escalating most issues all the way up to the healthcare system CEO. And as he said in his talk, the CEO has bigger problems to deal with than, say, the number of Advil tablets that should arrive at a patient's bedside. Okay. Back to the podcast. When I asked Jeffrey: is there an easy answer for why there is a need for better integration at lower levels now, and why is it so urgent, say, compared to even two decades ago?
Jeffrey Fredrick (00:16:44):
An easy answer? If I recall, wasn't it Carlota Perez who said we're in the age of information? But I think that's the answer. What we're working with, the material that we are crafting the future with, is information. Information is a much more amorphous substance, and it can be molded and shaped in different ways and put to different purposes. And therefore it leads to more of this coordinated, we need to work together to distill what the meaning of this is that we're talking about.
Jeffrey Fredrick (00:17:19):
So fundamentally, when you're dealing with something that doesn't exist physically, you really have to work a lot harder to be clear on what this thing is. If I go and hold up a pen in front of the camera, we're talking about tangible objects. Our language can be imprecise because we have a unit of truth that we can look at. It's a thing, we can touch it, we can have it [crosstalk 00:17:49]. You have it, I don't. You need to send it to me. Right.
Jeffrey Fredrick (00:17:54):
And so, we don't need to articulate as clearly and as precisely when we're dealing with physical, tangible objects, because we have the actual thing itself. When we're dealing with pure concepts of information, it puts a lot more work on us to create the language that has the meaning to capture the relevant concepts. And we have to always be negotiating that meaning back and forth between us and finding, because we have no way of directly transmitting our definition to another person, not the real one. We can come up with an approximation of words, but it's only an approximation. And we work out the differences in use, through interaction back and forth. We find those edges and differences of meaning over time together. And so, this is an ongoing conversation to define what does this really mean? What does this really mean?
Gene Kim (00:18:51):
I had goosebumps when you said that. So let me reflect it back at you and see if it still holds. So in the previous era, we were dealing with the manipulation of atoms, of matter, and now it's about bits, information flows. And so even the construction of code is really, I love that phrase, you create things out of thought stuff: literally infinite possibilities, permutations, and being able to concretize that into things that you build. I mean, that's what is so challenging about software. And so it is that ambiguity that forces a much higher degree of something: communicating, specification, conversation. The very things that free us from shipping physical things around are actually what create this need for higher-fidelity communication, something.
Jeffrey Fredrick (00:19:47):
Yes. When we were doing the work in our own heads at our own desks, we'd describe it as knowledge work. When we start doing it in teams, it becomes a bit different. Now it's this conversation work, where we're using all of our knowledge together to come up with answers that we didn't have independently. We're generating something that none of us had separately. There's a kind of magic in that.
Gene Kim (00:20:12):
That is amazing. All right. So I'm going to probably put that in a box for now. We'll probably take it out in a moment. So here's the first question I had prepared for you. So the first startling reaction I had is when I described that Steven Spear story about the 60 line-side store changes per day. So just for context, this is when he went to Japan with his mentor, Dr. Kent Bowen, and with the VP of manufacturing from one of the Big Three auto manufacturers. And the story goes, the plant manager then discussed how at this particular Toyota plant, they were doing 60 line-side store changes per day. To which the VP of manufacturing for the Big Three automotive company said, "Ah, that's crap. We tried six, and we ended up shutting down the whole plant for three days." In other words, it took three days for them to be able to recover, because parts were in the wrong place and suddenly they couldn't ship cars for three days.
Gene Kim (00:21:04):
The aha moment for me in that particular conversation with Steven Spear was that the Kanban card was the unit of synchronization. And it also allowed for a decoupling of systems, so that you could change things at the edges without having to have this all-knowing thing in the center, which is actually how the Big Three plant operated, because when they tried to do six line-side store changes, they missed something and suddenly parts were in the wrong place. And your reaction to my utter astonishment was, "Oh, I recognize that. That's information hiding." So can you please explain, what did you mean by this and why do you think that's important?
Jeffrey Fredrick (00:21:43):
All right. So when you were describing that pattern, I immediately thought of an interface. And what's the point of the interface? It's to hide the information of the implementation behind it. It's exactly this case of, I want this to go somewhere and I don't need to know exactly where. I'm sending a message and I don't know exactly what they're going to do with it. I don't need to know. And that ability to not need to know is what, in OO, we describe as information hiding. I don't need to know how the rest of the system is going to deal with this.
Jeffrey Fredrick (00:22:16):
And so, as you described with the Kanban system, yeah, you have that interface; you just redirect it to a different place. The sender doesn't need to know where it's going, and the receiver doesn't need to know where it came from, and it's all going to work out. They don't need to know. It hit exactly that pattern; I've refactored to that before. So it very much resonated with me.
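Jeffrey's interface analogy maps directly onto code. The sketch below is a hypothetical illustration (all class and part names are invented, not from the interview): a workstation sends kanban cards through an interface without knowing which line-side store fulfills them, so rerouting a card is a local, one-line change.

```python
from typing import Protocol


class CardSink(Protocol):
    """The interface: somewhere a kanban card can be sent.
    Senders know nothing about what stands behind it."""
    def receive(self, part_id: str) -> None: ...


class LineSideStore:
    """One possible implementation, hidden behind the interface."""
    def __init__(self) -> None:
        self.requests: list[str] = []

    def receive(self, part_id: str) -> None:
        self.requests.append(part_id)


class Workstation:
    """Sends replenishment cards; where they go is hidden information."""
    def __init__(self, sink: CardSink) -> None:
        self.sink = sink

    def consume(self, part_id: str) -> None:
        # We don't know, and don't need to know, who fulfills this card.
        self.sink.receive(part_id)


store_a, store_b = LineSideStore(), LineSideStore()
station = Workstation(store_a)
station.consume("bolt-m6")
station.sink = store_b  # the "line-side store change": local, nothing else moves
station.consume("bolt-m6")
```

Note that neither `Workstation` nor `LineSideStore` had to change when the card was rerouted; that locality is exactly what the 60-changes-a-day plant exploits.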
Gene Kim (00:22:40):
And so, why do you think that's important in, I don't even know how to ask the question. It just seems like a big concept. And I think it's important, but I don't know why.
Jeffrey Fredrick (00:22:53):
As I was saying before we started, I was listening to the interview you had with Michael Nygard, and he was describing genericness, I would say, and he talked about the same property of being able to push context to the edge. And it was the exact same argument. And in that whole explanation, he was talking about system architecture and about the advantages you get, the flexibility that you want. And for exactly the same reasons, he made the argument saying, you want the local country managers to be making their local deals. You want what they accept for payment processing to be able to change frequently, all the time. And you wouldn't want that to be slowed down by needing to change a service, for example, that was matching up the types of currency that people would take.
Jeffrey Fredrick (00:23:50):
So by moving that to the edge, you allowed more flexibility for more rapid change. And that idea of rapid change is what's behind the information hiding. You're making a trade-off in a certain direction to get something you value, which is, we're valuing the ability to move things around, to be flexible. And so I think this is why this pattern comes up again and again, in completely different contexts, in a sense. But it's the same motivation. It's what allows us to adapt. And how much effort do we want to put into empowering adaptation versus control? This is the trade-off. What works against this is, no, no, no, you can't do that. There must be a central source of truth.
Jeffrey Fredrick (00:24:40):
And I think there's a mindset that says that to have anything other than central control is anarchy. A friend of mine, Kevin Lawrence, had a great test, which is, you could ask people: is it more important that your system be correct, or that it be useful? And there are two kinds of people in the world, divided by whether or not they can conceive of a system that's useful but not correct. There is a class of people who say it cannot possibly be useful unless it's correct. And this mindset, I think, is akin to the people who believe there must be a central source of truth. They're going to build a certain type of system, whether that's a software program or an enterprise architecture, or whether it's your enterprise organization and how that information flows. This is a pure sort of bureaucratic mindset.
Jeffrey Fredrick (00:25:33):
There is one right way, and therefore all the information must flow to and from that source of truth. It's going to be inherently centralized around the right blessed people, knowledge, system, whatever it is, because anything else could not possibly be useful because it couldn't be correct. People with real-world experience realize that there's value in that, there's value in having a centralized system bringing everything together, but it's a trade-off that you're making. So is it worth it in your context to have that centralization of information before you make changes? Or in your context, do you get benefit from moving things to the edges? And I think that's part of the job of design: to know when and where to make those trade-offs.
Gene Kim (00:26:32):
And then maybe to go to the question I asked you before, is there an easy answer to say why that's probably more important now than it was 20 years ago, pushing it to the edges?
Jeffrey Fredrick (00:26:43):
The very obvious answer is the pace of change. That's the immediate slam-dunk answer. The pace of change moves what the trade-offs are. When your systems are evolving more rapidly, when you are in a position where you need to be more competitive, where you need to respond to the market faster, it's environmental. If the environment has become more demanding, then you need to be more responsive. And that then moves this trade-off more towards one direction than the other. It moves more towards adaptability, over that centralized, let's make sure we get everything exactly right, the one right way, and the efficiencies that come from one right way. Because there's no free lunch. It can just seem that way when we have so many advantages, but there definitely are trade-offs here.
Gene Kim (00:27:36):
So interesting. Okay. By the way, thank you so much. Certainly as good as I hoped it would be, and as enlightening as I hoped it would be. So here's the other thing that you reacted to in a quick way that, again, made me halt in my tracks. We were talking about the MIT Beer Game, and I was describing it to you in terms of the way it's structured. So you have the retailer, the wholesaler, the distributor, and then the factory, and the very specific-
PART 1 OF 4 ENDS [00:28:04]
Gene Kim (00:28:03):
... the wholesaler, the distributor, and then the factory. And the very specific one-way flows of the information, and the catastrophic outcomes that usually come out of it, and have for over 40 years. And you said something that startled me. You said, "Oh, that's obviously a totally screwed up structure." Which in many ways summarized the reaction I formed only after months of studying it. So it seemed evident to you that there was something really, really wrong with the way the thing was structured. So what made you say that? And why do you think that realization is important to the work that we do every day?
Jeffrey Fredrick (00:28:35):
It's a good question. Again, it's sort of pattern matching going on there. And it's very similar to what we were just describing, which is sort of one-way flow of information from a central source of truth to the edges, as opposed to optimizing and saying, "How can we get information from the edges and inform the system as rapidly as possible?" When we want to adapt a system, that's what we're saying is, "How can we know as soon as possible?"
Jeffrey Fredrick (00:29:01):
I was just describing this to someone in a coaching call ... He was a relatively new people manager, but an experienced technology person. And he was saying, "How do I make my system resilient, my people system?" And I said, "Well, tell me what kind of problem you're trying to solve." And he described a problem they had where some people had been unhappy. And I said, "Great. Well, that's an incident. How do you go and diagnose that? If it was a technology problem, what would you do?" Because he knows how to do a root cause analysis. He likes that incident analysis. And John [inaudible 00:29:33] would hit me for using root cause. But hopefully, we know what it means. We'll use it as shorthand for blameless post-mortem. And the person I was coaching, he actually said, "Post-mortem."
Jeffrey Fredrick (00:29:43):
So he likes to go in. And I say, "Look, you have a real advantage here because you're dealing with similar components and information flow, but you have the advantage to be able to ask the components what's going on in a much richer way." And if you were doing a post-mortem of a technical incident, you'd look back to say, "Well, what could we have known at the time? What was happening, what was known? And even, what could have been known? What could we know in the future? How could we get information earlier to inform our decision? Then we make better decisions." Because you want to be sensing from the edges and using it to inform what you do sooner.
Jeffrey Fredrick (00:30:20):
And he was like, "Oh yeah, of course." And so it really came down, in part, asking the people on this team, "Are you happy?" Because humans are great at sort of synthesizing all the different things that are going on in the projects into like one KPI. "Are you happy right now?" "No, actually I'm kind of annoyed." "Great. Tell me about it. I don't know what's going to be behind that, but we're going to learn something." So in your system design of your team, are you gathering that information from the edges, from the people, and bringing it back?
Jeffrey Fredrick (00:30:51):
And the beer game was clearly not capturing information from the edges. It was not designed that way. It was sort of, "We're going to send these messages and hope everything works out." But there's very limited capacity for learning about what's actually going on, learning about your state and using that to adjust. So it looked to me like you were going to end up in kind of a driven oscillator. You can get these sort of chaotic rhythms from taking a pendulum and adding a kick to it. And you're going to have this sort of driven oscillation system that's going to be chaotic, just from the design of it.
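Jeffrey's oscillator intuition is easy to reproduce in a few lines. The sketch below is a deliberately naive toy model, not the official MIT Beer Game rules: four stages pass orders upstream through one-way flows with a shipping delay, each using a greedy "cover demand and restore inventory" rule, and a single small step in customer demand sends order swings amplifying up the chain, the classic bullwhip effect.

```python
def simulate(weeks: int = 30, target: int = 12) -> list[int]:
    """Toy four-stage supply chain: retailer, wholesaler, distributor, factory.
    Each stage only sees the orders arriving from downstream (one-way flow)."""
    demand = [4] * 5 + [8] * (weeks - 5)   # customer demand steps up at week 5
    n = 4
    inventory = [target] * n
    pipeline = [[4, 4] for _ in range(n)]  # two-week shipping delay per stage
    peak_orders = [0] * n
    for week in range(weeks):
        order = demand[week]
        for i in range(n):
            inventory[i] += pipeline[i].pop(0)  # delayed shipment arrives
            inventory[i] -= order               # negative inventory = backlog
            # Greedy rule: cover this week's demand and restore target stock.
            placed = max(0, order + (target - inventory[i]))
            peak_orders[i] = max(peak_orders[i], placed)
            pipeline[i].append(placed)          # assume upstream ships in full
            order = placed                      # becomes the next stage's demand
    return peak_orders


peaks = simulate()
print(peaks)  # peak orders grow stage by stage as swings travel upstream
```

With no information flowing back from the edges, each stage over-corrects against its own delayed view, and the factory sees wildly larger swings than the customer ever produced, exactly the driven oscillation Jeffrey describes.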
Gene Kim (00:31:24):
By the way, that's a heck of a surmise from very limited ... That was an astounding conclusion based on even just a rough description of the simulation. And I just want to credit you: that's mind-blowing to me. And by the way, you bring up something that I've always wondered about, right? So in the State of DevOps research, one of the organizational performance metrics is employee Net Promoter Score. And whether you call it employee engagement or enthusiasm for work, I've always wondered, why is that really a marker of organizational performance?
Gene Kim (00:32:03):
I think the definition we used for the DORA research was, what percentage of organizations are exceeding profitability, market share, and productivity goals? And somehow that leads to a sense of employees being willing to recommend their organization as a great place to work to their friends and colleagues. So I've never heard such a great definition of that measure: are you happy? The reason why that's important as an indicator is that it is a phenomenal synthesis of how I feel about the work I do. Right? I mean, can you say a little more about that?
Jeffrey Fredrick (00:32:39):
Yeah, absolutely. When I do one-on-ones with people, there's two questions I ask, roughly getting at the same thing. And the first question is, "Are you happy?" And the second is, "Are you able to do work that you're proud of?" And these things are closely related, but sometimes they can generate slightly different answers. And the reason why these work ... So these are important for two reasons. The first one is just, this is a leading indicator. Like this is number one. Let's just start with this. Employee happiness, an employee's ability to be proud of what they're doing, is a leading indicator of problems on your project, and problems in your organization. It will show up there before it shows up anywhere else.
Jeffrey Fredrick (00:33:22):
Now, why is that? And this is the interesting part. And there's two answers. One we've been saying, "Okay, well, because we're getting information from the edges." The people doing the work know. But that's only part of the answer, and it's the least important part. Much more important is the fact that they care. This is pure Theory Y. And for people who don't know Theory X, Theory Y, Theory X is like old school management. The idea is that employees are lazy. And if they can, they will steal from the company by getting paid and not doing the work. And therefore, the job of a manager is to be overseeing them to make sure that they're doing work, because otherwise they won't. And that's often our sort of caricature of what a boss is and what a boss does. And it's terribly, terribly wrong. It misses this fundamental insight, which is that, no, actually, Theory Y, people are self motivated and they want the project they're on to succeed. They want the work they do to matter.
Jeffrey Fredrick (00:34:21):
And what makes them very unhappy is the fear that it won't. The idea that the work I do doesn't matter is a terrible one. I remember actually reading Kent Beck's Extreme Programming Explained, the white book, way back when. And in the epilogue, which is ... The whole book was mind blowing to me. Whole book I just absolutely loved. And I got to the epilogue, and he describes that every methodology is based on fear. And the person who designed the methodology, designed it to prevent the things that they're afraid of. And what he said is, "I'm afraid of doing work that doesn't matter." And I was like, "Oh, I totally get this. I totally get this." And that was what was so powerful to me about agile and XP at the time, was we were saying, "Look, we want the business to succeed. There is no tension here. It's not like the business people versus the technical people. The technologists, we want the projects to succeed. We want the business to do well. We want our systems to matter to what comes out of this." That, I think, is a very human element. And once you understand that, once you really grasp that, yeah, people do care, then you suddenly start interacting with them very differently. You see them as a very different source of information. A traditional boss will see themselves as having more information and more knowledge about what's going on. And, "Oh, there's these people. And it's my job, I'll hear what they have to say, and I'll correct them." And he's missing the possibility that these people are informing you. These people are giving you this fantastic gift about what's actually happening out there. And just from their emotional state, you'll know from the dynamics, is this good or bad?
Jeffrey Fredrick (00:36:15):
And this is a very fractal test, by the way. I was doing an offsite training for some people, virtual offsite, on Friday. And I had the people do a discussion. Actually, they were going to choose the discussion topic for the afternoon, based on the ... I'd been training them on conversational techniques, like, "Right, you're going to apply it now, in how to be transparent and curious. Choose your topic of conversation." It took them about 15 minutes for them to choose a topic from among four. And I stopped them. I said, "All right, let's reflect on the conversation you just had to choose your topic. I want you all to write down the things you were thinking and feeling, but didn't say, in choosing the topic to discuss." And they did that for a couple minutes, and I said, "Great, I'm going to go around, and each of you are going to share one thing from your list that you were thinking or feeling, but didn't say." We got that all down, and I said, "Well, that's really interesting. You're here to practice transparency, but even in the discussion of what to decide you're not being transparent. And even better, the choice of the four topics you were considering, that you came up with, that you decided on as a group, was the worst possible choice of the four. One person passionately cared about one topic, and we are not discussing that. One person definitely did not want to discuss this topic, and everyone was okay. So you chose the worst possible score for the group because you weren't sharing what you knew, what you were thinking and feeling." And I said, "Now, having done that, have that conversation again."
Jeffrey Fredrick (00:37:52):
And they did spend another 10 minutes discussing the topics. They very quickly converged on one. But if you were just watching a video camera, just a recording, it was a different meeting. The first one was terrible. The first 10 minutes, just like, "Well, what do you want to discuss? Is anyone really excited?" These sort of questions to the room. The thing is, these are all nice people. That's the problem. They're all very nice, and they didn't want to upset one another. And they so much didn't want to upset each other that they didn't share their own actual thoughts of what they cared about, because they thought it would inhibit other people. And as a result, they had the classic groupthink of getting a bad outcome. In the second one, they started being willing to share more, disagree more. And every time that someone brought up something that kind of was from a different angle, rather than going along with the flow, it took the conversation in a different direction. It added tremendous value. Tremendous value.
Jeffrey Fredrick (00:38:48):
And you had that just in a few minutes. And the feeling in the discussion was so much better. They were laughing. They were joking. That's what you would have seen on the camera. You would have seen this completely different dynamic from people. They were having a lot more fun in that second conversation. They were much happier in that second conversation than they were in the first. So this idea of employee happiness, it's not just like abstract Net Promoter Score over time. It's like, right now in this meeting, this conversation, are you bored? I mean, if you're bored, if we're bored in this meeting, maybe it's the wrong meeting. And one of the things [inaudible 00:39:25] if you ever are in a meeting and you're saying, "I'm feeling kind of bored right now." Guess what? You will not be bored any longer. You're about to have a very interesting conversation with people.
Jeffrey Fredrick (00:39:36):
So anyway, this idea of employee happiness, people care about it, and they care about it at every level. And it's always available for you to be tapping into. How is this going for you? Are you enjoying this or not? Are you bored? Are we engaged right now in this moment with the work we're doing right now?
Gene Kim (00:39:54):
By the way, I think that's amazing. And I'm just trying to prove to myself that I really understand why that's important. Why is that important? So can you connect the dots for, when that happens, why is the second dynamic so much better than the first? I mean, can you just connect the dots so that you can make the claim, "This is better for the organization"?
Jeffrey Fredrick (00:40:19):
Yeah, it's better for the information because ... It's simple, you're generating more information. Bottom line, that's the answer. You're bringing more information into the conversation. All those things that people were thinking, but not saying, that was data, that was information that could have gone into a decision process, but it was being withheld. It wasn't meeting the threshold for contribution. So, I mean, I'm going to just say generically, "Do you want to make decisions based on more information or less information?" That's kind of the choice. I think [crosstalk 00:40:51].
Gene Kim (00:40:50):
And does it matter who's making the decision? Is it the person at the top, the decider, right? The HiPPO, right? Or is it more than that?
Jeffrey Fredrick (00:40:59):
Well, I like to disentangle the decision-making process, like who decides and how, from the information you're going to use to inform the process. Those are really, really different. I don't really care, in a sense, how you're going to make the decision. I mean, you can analyze that separately. It's just not where I usually find the bottleneck. The bottleneck of good decision-making is not usually the decision-making process, it's what generates the information for the decision. It's that we're not bringing in all the information that we could have.
Jeffrey Fredrick (00:41:36):
And so that's the problem I'm usually helping people overcome. Because if people aren't speaking up, if they're not sharing what they know, if they're not sharing their thoughts and feelings, their gut feeling, like their, "Oh, I'm worried about this. I'm afraid of what might happen. Oh, actually this really excites me. I think this'll be great." It's not just the facts and figures; all of people's knowledge and experience, integrated into an emotional state, is tremendously valuable information. And the more of that you can bring into the process and make visible, the better decision you're going to make. No matter what protocol you use to decide.
Gene Kim (00:42:15):
That's interesting. And so what's coming to mind is sort of the Kahneman, Tversky kind of model of thinking fast and slow. Kind of where my head is going is kind of that feeling is really ... Is it mode one or mode two?
Gene Kim (00:42:28):
Whatever the fast part is, right? The system one is all [inaudible 00:42:32] integrated into a feeling, right?
Gene Kim (00:42:35):
Maybe informed by system two. To what degree do you agree with the correctness of that statement?
Jeffrey Fredrick (00:42:42):
It's interesting. I think that's right. That the system one is integrating a lot of information unconsciously. That division I really like, because system one is the part of your thinking that makes most of your decisions. And yet system two is what we think of when we think about thinking. So there's this dichotomy, that that sort of rational, deliberate effortful system two is relatively rare among all the decisions you make, but it's what we think of. So we have this incorrect model of how we make decisions. And we carry that broken model into our meetings and we think, "Oh, well ..." We're going to think it's all about this effortful analysis and facts and figures. And that's really important, but it's only part of the story.
Gene Kim (00:43:37):
It's interesting. And when you ask, "How are you feeling", right? When you ask, "Are you happy? Are you doing work that you're proud of?" It seems like this is when you allow kind of system two to work. And the question really says, "Does system one believe it?" To what degree is all that data really informing and being reinforced by that feeling? It's like, "Oh no, I am proud, right?" We're not there yet, but we're going down the right path, versus, "Ugh, this is crap."
Jeffrey Fredrick (00:44:04):
Well, it's interesting. Almost always people answer the question sort of yes or no instantly, effortlessly. And then there's a pause, because I don't say anything. And then they start thinking, "Well, why is that?" It's a pattern I see again and again. They'll say, "Oh, [inaudible 00:16:20]." "Yeah. Yeah, I would say I am." So there's that recall. So it definitely is that sort of element of they know, at the tip of their tongue, the answer to these questions. But they don't have access to, why exactly? How did I get there? And then it takes an effort to go find it.
Jeffrey Fredrick (00:44:41):
Of course, remember, when talking about thinking fast and slow, the difference between system one and system two is the source of cognitive bias. So I think it's important to say that when people come out and say, "What do you think of this plan?" "Oh, I think it's crap." That should not be the end of the conversation. You've gained some valuable information, but not all of it. You're at the beginning of a journey. The cognitive bias happens if you stop at that point and say, "Oh, I don't need it. I don't need to discuss it any further, because it's crap."
Gene Kim (00:45:12):
And presumably, this is why it's so important to be able to get as much information to inform these decisions? That creates a space for the data to help overcome all these cognitive biases, right? What system one is so bad at.
Jeffrey Fredrick (00:45:26):
Oh yeah, I think that's a great way to put it. We all have partial information, and information that's much more partial than we are aware of. And getting out all the information, all the different points of view, is where you have the chance to discover surprises, information that you didn't know that you didn't have. That's the most valuable information you can get: I had a belief about the world ... I mean, fundamentally, learning is the detection and correction of error. So it's when I have a model of the world, and then some facts come up that disprove my model, I'm like, "Oh, there's something else going on here that I wasn't aware of. That was very valuable. I have the opportunity to learn. And so I want to be sure to grab that."
Jeffrey Fredrick (00:46:15):
If I don't have people bringing their information up, I don't have that opportunity. And then I'm limiting ... This is what it comes back to. We're limiting our ability to learn. We're limiting our ability to innovate. We're limiting our ability to solve problems. We're going to be more likely to suffer, and I want us to avoid that suffering part.
Gene Kim (00:46:37):
And forgive me if I asked this already, but it seems like that, too, is more important now than it was 20 years ago or a hundred years ago. Is there an obvious reason, an easy answer, to why that is?
Jeffrey Fredrick (00:46:50):
I think, to me, it largely comes back to the same issue, which is that there's more change, and therefore there's more at stake in our day-to-day interactions. And that may not obviously follow, but when there's more change then the types of errors are more novel and we may have less opportunity for recovery. And so that makes for a more competitive environment. When you have a relatively static society, relatively stable roles, where you can expect what tomorrow's going to be like, you can expect what next month, next year is going to be like, when there's a lot of predictability, then there's less need.
Jeffrey Fredrick (00:47:36):
I'm tempted to say there's less novel information, but I think that's not actually true. I actually just think there's less need for it. You're in a much less competitive environment. And I think that environmental factor is really important. One thing I've noticed in my career is the pace of adoption of different breakthroughs, and the uneven adoption of breakthrough practices and technologies and approaches. And you could say it's places that are more competitive, that are under more competitive pressure, that evolve faster. They have to. And that's a function of evolution, right? It's not a deliberate thing. It's just the ones that don't evolve rapidly, they die. And so you're left with survivorship bias that gives you, "Oh yeah, that industry has a lot of really advanced people." Why? Because the people not doing that are dead.
Gene Kim (00:48:27):
Yeah. Matter of fact, one of the findings in the State of DevOps research in 2019 was actually the first time ever that there was a bias in the vertical, which is retail. If you were in retail, you were more likely to be a high performer because of [crosstalk 00:48:41] apocalypse, right?
Jeffrey Fredrick (00:48:42):
Yeah, exactly. That's fantastic. This, to me, [inaudible 00:48:46] in the other way, in my sort of earliest consulting in the 2000s. I found that the places that seemed to have the lowest pace of change were insurance companies, because their business was just so profitable. They were going to make money no matter what. And I was talking to someone whose brother worked there, and he was describing that the challenge of introducing new practices was, how do you get buy-in to try something new? You say, "Well, it would be better." Well, better how? Will it be done sooner? Anytime you said [inaudible 00:49:20] better you were taking on unnecessary risk, because no one really cared if your project was going to take a year or 18 months or two years. It really didn't matter to the profitability of the company. So if you were going to try to promote something that's better, you were saying, "Well, instead of taking a year, I can get it done in nine months." Now, you're taking a risk. Why would you do that? Who's going to sign off on that, right? No, it's fine. Let's just do it the way we know, the way that's safe. So that selection bias that happens through evolution in industries, I think it's much more prevalent now than it was a hundred years ago.
Gene Kim (00:49:58):
Yeah. It's also maybe another reason why it's more important now than 20 years ago or a hundred years ago: if we are pushing decision-making to the edges, the surface area over which decisions, potentially bad decisions, are being made is much vaster, right?
Gene Kim (00:50:14):
So the quality of decisions, arguably, it's better if that goes up, right?
Jeffrey Fredrick (00:50:20):
Yeah, absolutely. And that's why I think that you have this ... The interesting thing, then, about a learning culture, like Toyota. Reading The High-Velocity Edge, you look at the effort that goes into teaching people how to see problems, right? And pushing that ability out to the edge. The effort that goes into making sure that the decisions being made everywhere are being made with the right mindset is something that was so essential. It's not just push decisions out. It's also making sure that people are empowered to actually do something with the ball you're giving them. That means training them, investing in them, helping them to have the right skills to make those decisions effectively.
Gene Kim (00:51:06):
Hi, everyone, Gene here. I want to take a moment and thank everyone who took the time to listen to the Idealcast and contribute to the 35,000 listens we've had since it debuted six months ago. It's been amazing for me to share with you some of the most compelling lessons I've learned from the most amazing people in the industry. I believe that these lessons are important and can be applied by technology leaders in their own organizations. I, and everyone here at IT Revolution, look forward to producing season two, bringing you even more thought provoking discussions and lessons in the New Year to continue our learning journey.
Gene Kim (00:51:42):
Speaking of continuous learning, the latest IT Revolution DevOps Enterprise Journal, made possible with the support from LaunchDarkly, is now available. The DevOps Enterprise Journal features papers from some of the best thinkers and doers in our space on how they solve their most pressing problems. You can read those papers along with all the IT Revolution titles on their brand new reading app, which is now in the Apple and Google Play app stores. Create your account at myresources.itrevolution.com.
Gene Kim (00:52:14):
So let's get to the third thing that you said that also kind of made me screech to a halt. I think this is the first thing I asked you. I had mentioned that we're sort of going through this transition in methods of management, and that was like a Kuhnian moment, referencing the book, The Structure of Scientific Revolutions. Where anytime you go through a revolution, whether it's Newtonian to Einsteinian or even before that, Copernican to Newtonian, it looks like one person sort of had the aha moment. But if you zoom in there's always a whole school of people, sometimes in competition, sometimes in cooperation. But one person gets the credit, Copernicus, Newton, Einstein, right? Then Dr. Thomas Kuhn describes it as a sublimation moment. Suddenly everything sort of crystallizes. We go from a gas, and then in a moment we're now looking at a solid. And you had mentioned that it triggered some other thoughts for you, right? Can you describe what some of those aha moments were? And why did they strike you as significant?
Jeffrey Fredrick (00:53:21):
This is about the change of management over time. I think recently probably the book that has kind of helped shape my thinking about that is Reinventing Organizations, which introduced, I remember, the Teal organization to my vocabulary. And one of the elements of that model was that we're getting organizational breakthroughs that enable new types of organizations. It doesn't mean that all the old ones go away. The old model organizations are still there. So you have a red organization, which is the street gang, wolf pack analogy, leadership by the strong. That still exists even when you get the amber organization, which you can think of as a centralized bureaucracy, the Catholic church kind of model. Then you have orange, the machine paradigm, that comes along after that.
Jeffrey Fredrick (00:54:16):
And then you have the family model, sort of the green organization, much more we're [inaudible 00:00:54:24]. And then you get to the newest one, which was teal. That describes sort of these different paradigms for an organization, different analogies you could use about how people relate to one another. And this idea about the different possibilities, and that when you have a different model of humans and how humans can interrelate, it leads ... It's such a fundamental assumption, it changes everything else. In less fine-grained terms, I mentioned the Theory X, Theory Y. And Theory X would be sort of the poster child for Taylorism-style management. When you start thinking that, "Oh, actually these people don't need to be beaten into doing work, they actually can be active contributors in the work," it just changes everything about how you structure your communication, how you structure your organization. Everything else must change as a consequence.
Jeffrey Fredrick (00:55:21):
And to me, that's the paradigm shift moment in a Kuhnian sense. You don't look at the world the same way afterwards. You're looking at them through different eyes. And for people on the other side of that paradigm shift, they can't understand the worldview. It's not accessible to them. It's kind of inconceivable until you have that moment. And then once you've made it, then it's like, "I can't believe I ever ... How did I ever believe that other thing was right? How couldn't I see it before?" And I think that was this idea of this transition, of how we manage ourselves the way we-
PART 2 OF 4 ENDS [00:56:04]
Jeffrey Fredrick (00:56:03):
This transition of how we manage ourselves, the way we manage our organizations, the way we treat people, as increasingly active components of the system, as opposed to pieces, cogs to be moved around.
Gene Kim (00:56:15):
Right. And then compelled into doing their part. Right.
Jeffrey Fredrick (00:56:20):
It was interesting because Taylor was an industrial engineer by training. And so he's the kind of person, so naturally he saw that engineering model, the organization as a machine, and the person who's designing it needs to be diagnosing it. And the job of a manager is to be managing all the parts to make sure that none of them are faulty. And if they are, to replace them. These are essentially interchangeable parts that are being diagnosed by the engineer. That's not how people would, hopefully, most people don't... Some still do see people that way. They still see the resources in their organization, can we get three more resources on this project, as fungible parts. Now, the model we're describing sees people as a source, not as interchangeable, but a source of unique value, that they bring different strengths and weaknesses, and the interactions between them are going to matter in a different way. They're going to generate value that the designer could not have foreseen.
Gene Kim (00:57:21):
I think it was in Kuhn's book. He talks about, I think, what we [inaudible 00:57:26] will call the dominant architecture. Whatever, before the [inaudible 00:57:30] moment, there was a dominant belief system. And I think he made the comment that often you have to wait for them to die before the new way can take hold, which I think is a depressing claim. As we talk about a new way of working, is that a foregone conclusion? Is that the only way we can bring this new way of working into being, by waiting for certain people to retire or die?
Jeffrey Fredrick (00:57:52):
I think there's an element to which that's true. I relate it to the book, Crossing the Chasm by Geoffrey Moore. And he brings up the technology adoption lifecycle. He introduced the chasm into an existing model, the technology adoption lifecycle, which predates him. And I think about that technology adoption lifecycle in evolutionary psychology terms. If you're a tribe, you want people in all of these segments; you want some people to be laggards. You want some people who will stick to the old ways and never let them go, because it gives you more resilience in the population. So in a sense, yes, there will be organizations, there will be individuals who will never adopt a new paradigm. And we should be thankful, because they're holding on to a past solution, but what they're holding onto is a cultural legacy. What they're holding onto was a solution to a problem that we previously had. We had to invent that solution. And they're essentially the last guardian of that.
Jeffrey Fredrick (00:58:56):
And it's not bad that they're performing a service; it may not be applicable in this case. We may not need that solution in the future, but we don't know. It's really hard to predict things, especially about the future. And so, I'd say in a population sense, they're performing a service. Now in the moment, in my company, that's a real problem if they stop us from moving ahead. I feel conflicted on this point, about those individuals. I understand that it's valuable for the population to have them, but they can be an impediment for me as someone who's on the other side of the technology adoption curve. I'm very much in the... I just want the new because it's better, and that's a motivation for me.
Gene Kim (00:59:45):
So here's something that Steve [inaudible 00:59:47] told me the other day that I thought was equally startling. He said... It's so novel, I might not be able to repeat it and get it right the first time. But he said essentially, for decades, when you get hired into a role, your job is to fulfill the obligations and responsibilities of that role. And it doesn't change. He said with the Toyota Production System, it was never really ever said that you are going into a static system. In fact, the higher you rise in leadership, the more expected it was that your job was to change the system. And I thought that was just a stunning... I've never heard that before. And it was such a stunningly large thought, that your job is to inject change into the system as part of your daily work. Can you react to that?
Jeffrey Fredrick (01:00:40):
Yeah. The thought that immediately comes to mind, because you mentioned it: in Agile Conversations, we talk about Chris Argyris, and we're talking in the book largely about what he did with conversations, but it was in pursuit of a learning organization. And he introduced something else that people might've heard of before, which is single-loop versus double-loop learning. Single-loop learning is the learning that you do to get better at the job you're currently doing. And double-loop learning is when you sit back and you rethink the strategies you're currently considering. So it's a meta-learning. So it's a higher order of learning. And so it's not, should we keep doing this the same way, or can we tweak it slightly? It's, well, why are we doing this at all?
Jeffrey Fredrick (01:01:29):
It's the kind of thing that says, maybe we should change industries. Have you ever considered, maybe there's a whole different possibility here? And I think that that idea that, when you have an organization that wants to be resilient, that wants to be adapting, that wants to be learning over time, what are we saying? You have to have that ability to change at higher and higher levels, bigger and bigger changes. If the job of the people at the top is to keep the big picture the same, well, you're limiting the life of your company. You're limiting the scope of changes you can actually make. So, in a sense, if you want to be long-term successful, long-term profitable, long-term innovative, then it has to be the role of the leadership of the company to be thinking about the biggest possible changes and to be an ally of change, not an inhibitor of change.
Gene Kim (01:02:30):
There's some examples in politics where certain leaders can't have the same thought twice, or at least have a very difficult ability to execute on anything. So certainly, part of the job of leadership is some level of constancy and consistency. How do you reconcile that with that statement? It's a very big statement, you have to, at the highest level of leadership, you have to be open to the largest possible changes.
Jeffrey Fredrick (01:02:55):
Right. Well, I think it's one of those cases where you can use the same word to mean very different things. Sometimes people will use leadership as a noun, the people who are in charge of the company, and then you have leadership, the verb, which is the thing that you're doing. And it's very easy to conflate the two. And they're not the same. I think even when you're describing the leadership that is changing day to day, this comes to one of the questions and complaints people have had about American companies: under the shareholder primacy doctrine, the idea that the job of a corporation was to return profits to shareholders led to incredible short-termism. And some people would say that actually it's impossible to have a long-term innovative public company in the US, because the demands of the shareholders to seek short-term returns will prevent the long-term resilience, long-term innovation.
Jeffrey Fredrick (01:03:55):
And so I think there's a question of, what's the scope of the timeframe that you're thinking about? What are the timescales that we care about? At the highest level, leadership should be about organizational design, and you should be thinking across decades, in my mind. It's not saying you ignore the problems of day to day, but in a sense, everyone is worried about the day-to-day problems. They don't have the space to think about the longer-term arc and to say, do we have the right culture that's going to sustain us for a long time? What do we need to inject into this culture? What are the values that we have or are missing? What's the spirit we want our company to have? Those are big picture concerns that will matter tremendously for the long-term survivability and relevance of a company.
Jeffrey Fredrick (01:04:51):
If you're thinking about whether you're going to hit the quarterly numbers, you might not have space for that.
Gene Kim (01:04:59):
Nor the authority.
Jeffrey Fredrick (01:05:02):
Nor the authority. Exactly. Which is why I think that when you look at places where there has been real innovation, you should ask, who has control, and why, and over what time periods? What allows them to take longer-term views, or what compels them to take shorter-term views? I've not done a [inaudible 01:05:25] on this, but I think that would be something very interesting to look through. And I would expect that in places where people are more responsive to the stock market, you will have long-term destruction of value, or stagnation, and in places where people are able to take a longer-term view, you will have better long-term outcomes.
Jeffrey Fredrick (01:05:45):
But that comes to the idea of, what's the scope of leadership? What are they allowed, what do they give themselves as their brief to think about? Do they think long term, or do they either move from one short-term thing to the next, or decide their job is only to preserve the good things that have been handed down from the past? That's the other problem. We say, well, my great-grandfather founded this company, and we've run it by the same principles for 80 years, and we're going to continue those principles in the future, no matter what. That's not a recipe for success either. You may be taking the long-term view, but you're using the wrong recipe.
Gene Kim (01:06:24):
I want to bring up a topic that we were talking about before we started recording, and that was around structure and dynamics. I think the goal is to create these very parsimonious concepts, structure and dynamics, with maybe additional dominant architectures. I love the parsimony principle: to explain the largest amount of observable phenomena with the fewest number of principles, reveal surprising insights, and confirm deeply held intuitions. And you said something, I think the term you used was teasing apart, that being able to put certain terms into buckets was actually useful to you. Can you just talk about that in terms of [crosstalk 01:07:02]
Jeffrey Fredrick (01:07:01):
Sure. Yeah, yeah. I was describing what I've really been enjoying about the Idealcast, going through and listening to these episodes, because it's dealing with an area that I care a huge amount about, which is system design at different levels. And it was interesting for me when I got my head around the idea of structure and dynamics: that structure is the things that you do, and dynamics are what emerges, what you get. That was really helpful for me. And then within that, we can tease the structure apart into different elements. One of the valuable tests for me was to look at principles that could map across different systems. So we have an organizational system, and we might map it into an architectural system.
Jeffrey Fredrick (01:07:52):
And that tells me that we're on the road to some fundamental insight, some more generic principle that has some kernel of truth, some very important atom of truth. And so, there's the example we mentioned earlier of the kanban card as information hiding, and relating that physical card to the Michael Nygard discussion: if you just put all the data you need in the transaction, you can now vary tremendously. Oh, so I can get similar principles from my distributed payment rules and my car production line. There are similar information principles at play, and this idea that they're both governed by the flow of information. And you would think in the information age, that might be important: understanding what we can do to change the flow of information, and how, when we change the flow of information, we get different dynamics. That seems like a really important area of inquiry in the information age. We're talking about the physics of the information age. We should understand this, I think.
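As an aside, the "put all the data you need in the transaction" idea can be sketched in a few lines of code. This is an illustrative sketch only; the WorkOrder type and its field names are hypothetical, invented for the example, not anything from the episode:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkOrder:
    """A self-contained 'card': everything the receiver needs travels with it."""
    part_number: str
    quantity: int
    deliver_to: str

def fulfill(order: WorkOrder) -> str:
    # The handler reads only the message itself: no lookup into the sender's
    # databases or schedules. Because no shared state is assumed, sender and
    # receiver can each change their internals independently, which is the
    # information-hiding property attributed to the kanban card above.
    return f"send {order.quantity} x {order.part_number} to {order.deliver_to}"
```

The same shape works whether the "order" is a physical kanban card on a production line or a payment message in a distributed system: the consumer needs nothing beyond the transaction itself.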
Gene Kim (01:09:13):
Can you just say a little bit more about that? I, again, had goosebumps. Yes, it was very pithy and funny, but also profound. If you were going to articulate it, what is it that is not well understood yet, that maybe we're on the periphery of understanding better?
Jeffrey Fredrick (01:09:34):
Gosh, well, I would put it this way: we're learning how to see. We're in the position of the dawn of the age of microscopy, where we have the first illustrations of microscopic creatures. We're still developing the tools to see the invisible, to see information, to see the flows of information, and to understand their effects. Especially as it's mediated by the human brain, which is a tremendous organ both for synthesizing information and for hiding it, in all facets, both the good and the bad. And so we are very good at learning when we have feedback on what we do, when we can take an action and see the results. We are not good at learning in situations where we don't get feedback. It's very difficult to develop skill if you don't get feedback. I could go further and say it's impossible to develop skill if you can't see the feedback.
Jeffrey Fredrick (01:10:54):
I think the Wardley Mapping talk that Simon Wardley does, where he describes trying to play chess with and without a board, is a good analogy. If all you have is the stream of moves, and you've never seen a chess board before, you might eventually learn some way to play chess after a fashion, but you will get crushed by anyone who has a board to see.
Jeffrey Fredrick (01:11:21):
And what he captures in that analogy is the value of learning to see information in a useful way. And I just think with these system-level properties, we're still not good at being able to see and understand and characterize information flows. Now, one of the things that I really liked in Accelerate was that it introduced me to the idea of the different cultures, pathological or bureaucratic or generative. And they were categorized as a function of information flow. It was the flow of information that would allow you to tell what kind of organization you were in. That characterization by how information flows was a key insight, because of course, information flow is going to affect all this other stuff we're talking about: your ability to learn, your ability to execute, your ability to innovate. These are all things about knowledge work in the information age, bringing experience and information together to get a result.
Gene Kim (01:12:23):
Holy cow. This is such a stunning conversation. By the way, I've loved the Westrum organizational typology model. I've never really thought about it as an information flow categorization, but now that you say it, it's obvious. But let me tell you something that has also stunned me, and I say this because I suspect you'll have a profound observation that will also blow me away. I'm reading a book called Command in War by Martin van Creveld. This is the third time I've read this book in 25 years, and it's such a dazzling book for me. Essentially, he says that in the history of warfare, there have been four basic phases.
Gene Kim (01:13:13):
You have the stone age of command: everything before Napoleon, he would categorize the same way. In general, permanent formations were fewer than 4,000 people, because that was the most you could control visually. The term detachment really meant detach and never come back. You could fork but never join, because once you forked, you would never see them again until the end of the war. If you're more than four miles apart, you can't even find each other, let alone send messages. So that means you fork, and you basically have lost control of them. As for the speed of information flow, rumor could travel 200 miles in a day, but reliable communication moved basically at the speed of horseback. It could go faster, but that required a lot of fixed infrastructure, like stable roads and permanent posts, which meant that information could only travel quickly behind the lines, never on the lines or ahead of the lines.
Gene Kim (01:14:11):
That led to the second age, Napoleon's. He would make the claim that the majority of a fielded army existed basically to survive; it was logistical. In fact, the only time you would split a force was primarily to keep it fed. You needed high population densities. It was actually Napoleon who recognized that population densities had gotten high enough that you could have larger forces in the field. He was a phenomenal, prodigious micromanager. And for the first time, he fielded, I think, 300,000 troops, orders of magnitude larger than anything seen before. And he created the command staff: essentially the first time you had senior positions reporting directly to him to help ease the communication problem. The next phase was mission command in World War II, by the German army, representing what he would view as the pinnacle of decentralized command. And then he has this other example of the US forces in the Vietnam War, where it was a combination of a couple of things. It was the first time communication equipment was cheap enough that it could be fielded to much lower levels in the organization, which led to communication security risks. So you had to field encryption, which meant the channels were always saturated. So you had flash traffic, which basically marked messages as important enough to prioritize, and that was always saturated too, so it created super flash traffic, for the really, really important messages.
Gene Kim (01:15:39):
But then you ended up with this phenomenon where any mistake on the battlefield could be on the evening news. You had this position where a captain on a hill would be micromanaged by a major orbiting in a helicopter, being micromanaged by a colonel in a helicopter above them, being micromanaged by a general in an orbiting 707, with the Joint Chiefs on the satellite phone in the Pentagon, essentially depriving autonomy at a rate that was unprecedented.
Gene Kim (01:16:08):
This too is an information problem. And in fact, it was called an information problem. And so, just like the Toyota example shows that the cost of change is an information problem, this too seems profound and important to me. Could you react to that?
Jeffrey Fredrick (01:16:28):
That's very interesting to hear that recapitulation of history. I had heard some version of that up until, but not including, the Vietnam War part, but I am aware of that phenomenon of people on the ground being micromanaged from the White House, with terrible results. It's just awful. What's funny is that you can understand it. It's an understandable mistake, because before, you had a trade-off between centralized communication and local decision making. You mentioned the German model, which we actually talk about a little bit in Agile Conversations. We have a bit on briefing and back-briefing, and the name of the book we got it from escapes me, but it was from that German Prussian military system. And we go a little bit further back to von Moltke in there.
Jeffrey Fredrick (01:17:21):
But what you had there was the idea of mission intent being sent out, in the knowledge that people were going to have to act independently, that there was no way to have information flow back and forth centrally. So when you suddenly get the technical breakthrough that allows the information flow to be richer, the natural mistake is to say, well, great, let's bring all of that back. We can take the communication pattern we had before, the dominant architecture, and ramp it up with the new technology. We're going to supercharge it. And that doesn't work. It turns out that it breaks in different ways.
Jeffrey Fredrick (01:18:07):
So I can see why that would happen. And it made me remember a book or an article that I read many years ago, called How I Learned to Let My Workers Lead, which was one of the first things I read about self-organization within companies. And this is from the Johnsonville Sausage Company, I think something [crosstalk 01:18:28].
Yeah. Right, right, right.
Gene Kim (01:18:31):
Which turned into a book.
Jeffrey Fredrick (01:18:33):
Yeah. Yes. So when I read that, one of the things that always struck me, when he talked about the transformation of the company from traditionally managed to self-organized, was that they needed to invert the flow of information. You have to take this innovation that gives you all this connection, but rather than using it to pull all the information from the edges to the center, you need to push it from the center to the edges. It was the opposite. He said, we had to invest a lot in how we pushed all the information about the company, finances and management, out so it would be visible to the workers, so that they could make informed decisions.
Jeffrey Fredrick (01:19:13):
It turned out that there had been information hiding again. Management had information that wasn't shared widely. Now, in part you could say, well, previously they didn't really have the technology, in say the '40s, for everyone to know what the current [spink 01:19:31] balance was, or what today's sales were. Management didn't know it, so there's no way the workers on the line could know today's sales, because no one knew it. And by the time you did get information, it wasn't relevant to the work the workers were doing. When you start having information coming back from the edges to the center, the best thing you can do with it is put it back out to the edges again. Because the fact that you can know that there was a sale of those sausages over in that store over there... Like the beer game. Think of the beer game. If you could have the people on the line know how many cases of beer were ordered today, that might be really relevant.
Jeffrey Fredrick (01:20:16):
So if you had the Vietnam staff devising this, you would have had the helicopters getting information from the stores and sending it back out to the stores again. They'd be telling people where to move the displays around, but not taking it to the people who were actually in the beer factory, making the beer. They had the information flowing the wrong way. They were using the innovation, the availability of cheaper information flow, to solve the wrong problem.
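The beer game dynamic they keep referencing can be sketched with a toy simulation. Everything here is illustrative: the three-stage chain, the step in consumer demand, and the simple ordering rule are assumptions made for the sketch, not anything from the episode:

```python
def simulate(weeks=30, share_demand=False, alpha=0.8, target=12):
    """Toy beer game: retailer -> wholesaler -> factory.

    Each stage ships whatever is ordered (negative inventory means backlog)
    and receives, one week later, whatever it ordered from upstream.
    Ordering rule: demand signal + alpha * (target inventory - inventory).
    """
    demand = [4.0] * 5 + [8.0] * (weeks - 5)   # consumer demand steps up at week 5
    inv = [float(target)] * 3                  # per-stage inventory
    pipeline = [4.0] * 3                       # last week's order, arriving now
    factory_orders = []
    for t in range(weeks):
        downstream_order = demand[t]
        for i in range(3):
            inv[i] += pipeline[i] - downstream_order   # receive, then ship
            signal = demand[t] if share_demand else downstream_order
            order = max(0.0, signal + alpha * (target - inv[i]))
            pipeline[i] = order                # arrives next week
            downstream_order = order           # becomes the next stage's demand
        factory_orders.append(downstream_order)
    return factory_orders

# When each stage sees only its neighbor's orders, the demand step is
# amplified up the chain (the bullwhip effect); broadcasting the consumer
# demand to every stage damps the swings.
bullwhip = max(simulate(share_demand=False))
shared = max(simulate(share_demand=True))
```

The point of the sketch is Jeffrey's: the same cheap information flow can be routed to the center, where it drives micromanagement, or back out to the edges, where it lets each stage act on the real end-customer signal.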
Gene Kim (01:20:46):
And so if I hear you correctly, this is a relevant lesson to what we're talking about.
Jeffrey Fredrick (01:20:53):
Yes, absolutely. So the whole model was about how we build situational awareness for the center. How does Napoleon get information about what's going on with all of his troops everywhere, so that he can micromanage them and decide? The breakthrough post-Napoleon, in the Prussian army, is mission command: you pull information from the edges to inform strategy, and you push the decision down to people so they can implement it. But there's still information scarcity. The expectation is that the people on the edges are working with local information. They're optimizing, they're doing the work locally with information that they have, that no one else has.
Jeffrey Fredrick (01:21:43):
What we now have the ability to do is to enable those people not only with the information they have locally, but with information globally; they can now have situational awareness at a much broader scale. Of course, what we first did, in the Vietnam War problem, was to try to get all the information to the center. But now we're asking, how can the information... And of course, in this series of podcasts, you have brought up the solution to this: Team of Teams, the Stanley McChrystal book. What they were doing there was bringing information in and pushing it back out. They were making it so that the information was [inaudible 01:22:20] who need it, and people were able to dynamically get the coordination they needed at the edges, without having to go through the White House.
Gene Kim (01:22:30):
It was amazing. By the way, he's presenting again in Vegas, as he did in London, [inaudible 01:22:35] with Jessica [Rife 00:01:22:37], who comes from the software space. And it is so amazing. In fact, there's a specific idea I asked him to talk about. He said, imagine the feeling when you have the autonomy you need, so you can run and shoot wherever you go, because the mission has been architected in a way where you don't have a lot of dependencies. All the dependencies are known. If you go in this direction, you're told, don't go there, because now you have interdependencies.
But he said there's this phenomenon that happens, and I love that he has a phrase for this, when the decision space gets pulled up, meaning someone above you is taking that decision space away from you, and how frustrating that can be, especially when it's grounded in the conviction that you actually have better data locally. He actually gave the great example of the White House situation room during the Osama bin Laden raid, where it's a bunch of anxious faces, and it is one-way communication. They are observing; it's not reaching down 12 layers of management to tell someone what to do. I just think that was so interesting. Last question, by the way [crosstalk 01:23:39] this is so fun.
Jeffrey Fredrick (01:23:41):
When it's pulled up to the wrong level in software, what do we call that? A feature factory. At least that is one complaint. That's one type; there are many ways this problem repeats. But when I hear people complain about a feature factory, what they're complaining about is that the decision space has been pulled up to the wrong level.
Gene Kim (01:24:00):
Actually, give me other examples of when that happens.
PART 3 OF 4 ENDS [01:24:04]
Gene Kim (01:24:03):
One is in The Unicorn Project, right? The levels of authority needed to approve a decision get pulled up. Right?
That's another example.
Jeffrey Fredrick (01:24:12):
Exactly. We were worried, so we're going to solve this by increasing control. That's the driving characteristic. Because we sense risk, we're going to try to centralize control of what happens. That will have a tendency to pull decisions up to too high a level, because you're saying, "Well, we're going to have a higher-level review." And it's not long until that review becomes direction.
Gene Kim (01:24:45):
Just from a logical rigor perspective, I also want to observe that during a high-consequence situation, a high-cost outage, that's actually an appropriate place for certain types of rigidity to come in. It's like, "Okay, no one touch anything. We need to isolate actions so that we can better make sense of the world around us." So there are situations where acting fast does lead to chaos.
Jeffrey Fredrick (01:25:13):
Yeah, absolutely. And we can get into the things that cause that chaos, like unexpected interdependencies and missing or partial information. Absolutely. And so you're going to band-aid over your problems of information flow by slowing down action, because you don't want people acting faster than information can flow.
Gene Kim (01:25:37):
Oh, wow. That's excellent. The last thing I wanted to get your thoughts on was this two-by-two matrix that I've been putting together with Steve Spear. I just thought this was interesting, because the original goal was to tease apart a paper by Dr. Amy Edmondson and Dr. Michael Roberto. They compare the cultures of the Apollo space program and the space shuttle program. Their words were: an experimental culture, like on the frontier, where we know that we don't know enough. That characterized the space program in the 1960s leading to Apollo. And they characterized the space shuttle program as a standardized culture, with an operating tempo built on the promise that we could provide low-cost access to space. What resulted was this matrix. I think the goal here was to create a theory of why high performance exists, whether it's the things we've discussed, the Apollo space program, the Toyota Production System, Naval Reactors, Alcoa, and to have a causal theory that explains high performance.
Gene Kim (01:26:47):
And then also to be able to prove the contrapositive: in the absence of these constructs, high performance cannot exist. So on the X-axis of the graph is the degree to which work is standardized, meaning we write down what we do, and we do what we wrote. And the second axis is the degree to which we are integrating feedback into our standardized work. I had touched on this in the Nygard podcast. Can I get your reaction to that?
Jeffrey Fredrick (01:27:16):
Yeah, I really liked that. I was also struck by it, in part because I remember the question about the principles from Rickover. They seem really anti-agile in some way. So these are two good things, but from, seemingly, very different points of view. It's funny, because one thing you said about standardization was that it's written down and we do what's written down. I think that's actually a trap. I think that standardization is really important, and so is the integration of feedback. So I like these axes: we need to have integrated feedback, and we need standardization. But that doesn't mean written down.
Jeffrey Fredrick (01:27:51):
Because the thought that came to mind was that my early XP teams were some of the most disciplined teams I've ever worked on. We had very clearly made oral contracts about how we were going to work, and we were holding each other to them daily. And I remember when this was still new to us, and we were still... In this instance, maybe we were a bit experimental. We were learning our way through XP. And I remember sitting with a pair; we were relatively new to pairing. We were talking about what we should do next. And he said, "If we were brave... And I'm not saying we should do this. But if we were brave, what we would do is write a test next, because that's what we said we were going to do." And I was like, "You're right. Let's do that."
Jeffrey Fredrick (01:28:38):
And so we were doing things experimentally, following the standards that we had set. We had made a contract with the group that we were trying this out. We were trying Extreme Programming. And to try it meant we were going to do the practices, not because they were right but because that's what we said we were doing. And I'll just say, this has been an aha for me: the discipline of following the form is something that has come back to me again and again over the years, most recently in this work around conversations, where I will run a conversational dojo. We'll say, "Look, this is what we're practicing today. I'm not saying this is right, but this is the form. So we're all going to try to put our conversation in this form, and we'll see what we learn." And it takes real discipline to do that. I think that's the value of the standardized work: it takes real commitment. It's not natural to follow a form in our practice.
Jeffrey Fredrick (01:29:36):
Alistair Cockburn, who you mentioned in passing earlier, wrote one of the most influential papers on me: his paper characterizing people as non-linear, first-order components in software development. It's a very humanistic paper. It doesn't sound like it, but it says, yeah, people matter. And what he said was, because people matter so much, we should look at their attributes. We should understand the attributes of people. And one of the key attributes of people is that they are not good at behaving consistently. They do not act consistently over space and time. We should not expect that of them. What you're describing here in the upper right is high-discipline systems, where people have made a very strong effort to ensure consistency. They're trying to remove one source of error: accidental variation.
Jeffrey Fredrick (01:30:32):
And they're saying, "So let's try to remove that through discipline." Whether it's an oral contract, with a pair checking in and giving each other the courage to follow the process, or whether it's being very rigorous about following the standard process, you're saying, "Let's limit that sort of variation so that we can have pure learning from what actually happens." In Deming's terms, we want the system under statistical control so that we can improve it. You can't improve a system that's not under statistical control. So that's what I think of when I look at the upper right, with highly standardized work and a high degree of feedback.
Gene Kim (01:31:15):
And by the way, you probably could hear the hedging I was doing, and maybe even the discomfort of trying to reconcile it, because it doesn't sound very DevOpsy. Intuitively, I know it has to be true. And in fact, if I heard you right, the trap is writing things down. The trap is calling them rules. I mean, there are a lot of words that immediately take us down the path of, "No, this doesn't apply to our work."
Jeffrey Fredrick (01:31:40):
You get the form, but the problem is people get caught up on the form and lose the essence. So they say, for example, "This is the problem with CMMI level five. Because we wrote it down, we have a good process." Or even, "It's standardized because we wrote it down." That doesn't follow. That's not how logic works. It's easy to fall into this goal displacement: you have an ultimate goal, and you have an intermediate goal that's a step towards that goal. So it's like, "We want to be standardized, always doing the same thing everywhere, so that anywhere there's innovation, we can spread that learning everywhere. We want standardization for that reason. Along the path to that..." Or actually, I'm going to go further. "What we want is the learning, to capture and spread the learning everywhere as quickly as possible. How are we going to do that? Standardized work. How are we going to do that? We're going to write it down."
Jeffrey Fredrick (01:32:41):
So we have this multi-step process, but it's so easy to lose sight of the fact that the ultimate goal was learning. Instead, people think it's about the documents. And this is why you get people doing checkbox OKRs, for example, or checkbox anything. This is the bureaucratic mindset: we've been told this is the form. Can we check it off? Yep, we've checked the box. We've filled in the OKRs. Yes, we've scored them. We're done now. Because they're not focused on learning, which was the real goal.
Gene Kim (01:33:09):
Yeah. I'm looking at this matrix, and I'm trying to concretize the information flow problem. I mean, surely it has to do with information. Where are the information flows? In standardized work and the degree of integration of feedback into standardized work, what are the information flows?
Jeffrey Fredrick (01:33:27):
Okay, so I think there are two elements here. When you talk about the degree of standardized work, I would say the first thing you're doing is improving signal quality, because you're taking out that noise, taking out random variation. You're removing randomness. With low signal quality, it's just hard to learn. You can't make sense of the information. So you're improving the quality of information as you improve the standardization of work. And then you have the degree of integration of feedback, which is asking, how are we getting feedback? One axis is the signal, and the other is whether we are using the signal. You have to do both. It's not enough to generate the information.
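Jeffrey's point that standardization improves signal quality can be illustrated numerically. This is a sketch with made-up numbers: the same real improvement is applied to a noisy process and to a standardized, low-variation one, and we check how often a simple before/after comparison can even see it:

```python
import random

def improvement_visible(noise_sd, shift=1.0, n=30, seed=0):
    """Apply a real improvement of `shift` to a process with random
    variation `noise_sd`, then ask whether a naive before/after
    comparison of n samples each can detect it."""
    rng = random.Random(seed)
    before = [rng.gauss(10.0, noise_sd) for _ in range(n)]
    after = [rng.gauss(10.0 + shift, noise_sd) for _ in range(n)]
    observed = sum(after) / n - sum(before) / n
    return observed > shift / 2        # crude detection rule

# High random variation (unstandardized work) buries the signal;
# low variation (standardized work) makes the same improvement learnable.
trials = range(200)
noisy_hits = sum(improvement_visible(noise_sd=8.0, seed=s) for s in trials)
stable_hits = sum(improvement_visible(noise_sd=0.3, seed=s) for s in trials)
```

This is the Deming point in miniature: until variation is brought under control, you cannot tell whether a change helped, so removing accidental variation is what makes feedback usable at all.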
Jeffrey Fredrick (01:34:11):
It's like how many times people have gone through an outage, and you look back and say, "Well, what was going on in the systems?" Oh yeah, look, we have a metric that would have told us exactly what was happening, but no one was looking at it. Or we had an alert set, and the alert was going off, but no one was monitoring it. The information was there, but the system wasn't set up to use it. That's the left-hand side, the integration of feedback.
Jeffrey Fredrick (01:34:39):
There are two elements. Is our system generating information? One. And two, are we using the information? That second one is a lot harder, because it works against our habits. Our human nature is to not learn. Our human nature is to find habits and routines and things that are good enough. It's very hard to be that sort of disciplined, rigorous person who says, to use the phrase from the other podcast, "You need to work out all the funnies," like on the Apollo missions. That's not human nature. Contrast that with the space shuttle: the foam is coming off. Well, that's a surprise. It seemed to work okay; carry on. How do we know it's okay to fly when there's foam coming off? Because we've done it before. That is more human. Really driving to learn about things that are not obvious is just not normal human behavior.
Gene Kim (01:35:43):
That's definitely system two. We have to absolutely bring system two to the foreground.
Jeffrey Fredrick (01:35:49):
And the important part, I always say this: system one and system two, in theory, make a great pair. System one gives you an instinctual reaction, and system two is there to back it up. But the problem is that system one is biased and system two is lazy. And this is what people forget when reading about it: you evolved to not use system two. Everything about the design of your brain is to use system two as little as possible. It's not comfortable to multiply three-digit numbers in your head. You can do it, but you don't do it for fun, because it's work, and you evolved to not do it. This is what we're asking of people in the upper right-hand quadrant: to live in that system two land all the time. And it takes a lot of structure. It takes a lot of structure to support people, to get them in a space where they can live in system two, where you can have the dynamic of living in system two all the time.
Gene Kim (01:36:47):
Wow. This is so great. I would be remiss if I didn't ask you one question that will seem very tactical. One thing that occurred to me as you were talking: we were discussing Kent Beck's amazing book, Extreme Programming Explained. One of the most provocative, startling, preposterous notions he has is pair programming. It just seemed preposterous, absurd. Many people tossed out his notion of agile just because of it, because it seems so contrary to common wisdom. And so when you talk about the intangibility of thought stuff, pair programming is exactly what came to my mind, in terms of just how counterintuitive it is that two people working on one problem arguably leads to better outcomes than two people working separately on their own. What other sorts of insights or revelations came through the Kent Beck notion of pair programming?
Jeffrey Fredrick (01:37:42):
That whole Extreme Programming explained, it really distilled a lot of ferment that had been happening in that space. I was fortunate enough to be on Ward's wiki, the c2.com wiki in the late 90's. I think I was first there in something like ... I want to say '98 or '99, something like that, where you would have people like Kent and Alistair and others and Ward Cunningham and Ron Jefferson talking about these ideas. Have you tried this? Have you tried that? What do you think about this? And it was really people wrestling with something very similar to what we're discussing today. Because in part, talking about the lessons from Stephen Spear and the Toyota production system and high velocity edge there, it was actually the same material that we were looking at late 90's, we were looking at Toyota and Honda and how manufacturing had changed with lean manufacturing. But it hadn't made it a software yet.
Jeffrey Fredrick (01:38:38):
And so we were saying, "What are the analogies? What are the things that we can use?" And people were looking far afield, to Christopher Alexander and The Timeless Way of Building, looking for the quality without a name, building quality in, all of these elements, and finding counter-intuitive insights that we could apply in very practical terms. You didn't really need to understand the neuropsychology of humans to understand why pair programming worked. You could just do pair programming and go, "Oh yes, we're actually getting more done. We're writing and committing more code. Actually, more importantly, we are shipping more code with fewer defects with two people working on one bit than we were before with people working separately." We were changing our metrics and changing what we measured.
Jeffrey Fredrick (01:39:27):
And this is like a paradigm shift, where it moved from, are we delivering our software as code complete? Have we hit software development complete and we're ready for the integration phase? Which is where projects went to die, because you would enter the planned six-week integration phase. And it would be six months or nine months or never. Projects would ... because you had this mountain. It was like, "No, actually that's not the way to be successful. We need to stop looking at software committed and look at software that ships and gets used. The usage matters." That changing of the goalposts, that changing of the way we saw things, yeah, it led to all kinds of things. Pair programming was one.
Jeffrey Fredrick (01:40:11):
I mean, the other thing that really stands out for me was, in that book, if something's hard, you should do it more frequently. This was the argument for continuous integration. Integration is difficult. Therefore, we should do it all the time. That is probably the most counterintuitive element. It's certainly right next to pair programming, I think, in that book. And man, the resistance to that. We can all talk about, I remember, doing the impossible 50 times a day from Timothy Fitz at IMVU, about deploying to production 50 times a day. Every commit went out to production, assuming that no test failed in the testing system that it went through. But in there, there was the question of, did you check in multiple times a day? It was not normal for a developer to check in every day. I think people forget that. People would have code on their desktop for weeks. And the idea that you're going to check in, not just once a day but multiple times a day, again, that was another sort of ah-ha moment. In fact, you're going to be integrating all the time.
Jeffrey Fredrick (01:41:22):
And I remember making a game of it. "Well, let's take this dial to 11," and being in a four-person team with two pairs. And we were, like, racing. We wanted to make integration the other team's job. So we would be, "What's the smallest test-code cycle we can do and commit?" So we could look up and sort of understatedly say, "You're going to want to pull." And then quick, get the next one done. Such a different dynamic.
Jeffrey Fredrick (01:41:57):
The first company where I was doing this, we looked at check-in metrics. And there was a tenfold difference over a six-month period between the person who checked in the most and the person who checked in the least. Literally 70 times versus 700 times. Massive difference. Now, that's not productivity. I'm not saying that the person who checked in 700 times was writing 10 times more code. That's definitely not the case. But they were generating information a lot more frequently. The information flow was utterly different. And that's what it comes back to: these insights about pairing and integration are about information flow. They're about opportunities to learn early. The earliest time that you could learn is, I'm there talking to the person before I even type. The next is, I type it in, and he goes, "Wait a minute, you missed a semi-colon." It was faster feedback than the compiler. So, these insights were real. They take these different forms. But they come back to what we're talking about. Why were they more effective? Because they generated more information faster. They had tighter feedback loops.
Gene Kim (01:43:08):
I've got to tell you, my cheeks hurt from smiling so much. I really appreciate this. And by the way, just to reciprocate with a story about that IMVU one: by the time I had read that article, yeah, I was sort of immune to the shock of deployment. But I was still shocked when I read how they did database schema changes, the fact that instead of making a schema change, you would just store the data in a different way. In other words, take the complexity in your code, not in the database. And that was shocking. That struck me as immoral. What sort of a psychopath would do that? And yet, given just the catastrophic impacts of bad schema changes, I mean, that was clearly an adaptation to work around that. It was just amazing to me.
Gene Kim (01:43:53):
[inaudible 01:43:53] here. Okay. I just want to punctuate that phrase that Jeffrey said. "They were generating much more information. And they were generating that information much earlier, which meant that they were learning more and learning earlier." Holy cow, that seems like a super important concept, as well as concretizing all the ways that information helps us learn. I also did want to take a moment to describe that famous blog post by Timothy Fitz at IMVU called Doing the Impossible 50 Times Per Day. I had mentioned that the shock of headlines like "10 deploys a day" had started to wear off on me. It seemed like that was destined to become the new normal. And I was having plenty of fun describing it in my own presentations to shock and horrify the people I was presenting to.
Gene Kim (01:44:36):
But I remember in 2013, Jez Humble showing me that article from Timothy Fitz and the utter shock when I read it and learned what these savages were doing. It wasn't in the blog post, but something in the comments that Jez had pointed me to. Basically it said that they avoided doing risky database schema changes the way that most people did them, which often resulted in application servers crashing when they looked for database columns that no longer existed. So instead of renaming table columns, they would make a new column, copy all the data into that new column, change all the code to query that new column, and then safely drop the old column. So it really did offend me at first, the notion of having the same data stored in two places. And introducing even more complexity into your code was definitely the opposite of the way we were trained to do things in school. But of course it does make so much more sense. They actually decreased the risk of database schema changes and took advantage of the fact that code is much more malleable, shapeable, and testable than the database. Even now, almost seven years later, I still marvel at how ingenious and counter-intuitive and even shocking their practices were.
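The four-step sequence Gene describes here is often called an expand/contract schema change. As a minimal sketch of the pattern, not IMVU's actual code, using an in-memory SQLite database with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, fullname TEXT)")
conn.execute("INSERT INTO users (fullname) VALUES ('Ada'), ('Grace')")

# Step 1 (expand): add the new column rather than renaming the old one,
# so running application servers never look up a column that has vanished.
conn.execute("ALTER TABLE users ADD COLUMN display_name TEXT")

# Step 2: copy the existing data into the new column; during this window
# the data lives in two places, which is exactly the part that felt immoral.
conn.execute("UPDATE users SET display_name = fullname")

# Step 3: deploy code that reads and writes only the new column.
rows = conn.execute("SELECT display_name FROM users ORDER BY id").fetchall()

# Step 4 (contract): once nothing references the old column, drop it safely.
# (DROP COLUMN requires SQLite >= 3.35, so we hedge for older versions.)
try:
    conn.execute("ALTER TABLE users DROP COLUMN fullname")
except sqlite3.OperationalError:
    pass

print(rows)  # [('Ada',), ('Grace',)]
```

The point of the dance is that every intermediate state is one the old and new code can both tolerate, so each step can be deployed independently.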
Gene Kim (01:45:53):
Okay, back to the interview. I have this feeling, whether you call it learning organizations or ... there's a body of work that goes back 50, 60 years. I'm thinking about Dr. Garris, all the lean researchers, or this MIT beer game, organizational dynamics, system dynamics. I mean, I feel like this is all part of a cohesive body of work. But my feeling is, and this is maybe the height of arrogance to even claim, is that there's something missing. We entered the age of software and data, the age of information. There's a cohesive whole that is missing. And that's a very grand claim to make. But I just wonder if I can get your reaction on that. On a scale of one to 10, one is utter disdain for that comment: clearly I didn't read a certain book I must go read, and everything I just said disqualifies me from anything that you care about. 10 is like, "Oh, no. There is something missing." As someone who has read so copiously, is there something missing, and do you feel like this sort of fills a hole?
Jeffrey Fredrick (01:46:59):
Wow. I definitely think something's missing. There's no question. And I'm not sure exactly where. But I think if you look over the course of time, yes, there's this body of knowledge. But we're expanding our understanding of the constituent bits very rapidly. At the time that you started that, so if you go back into the 60's or even early 70's, that was before the Tversky and Kahneman stuff on cognitive biases. We didn't have the concept of cognitive biases as part of our vocabulary at all until the 70's. And it's not been common in our thought process of organizational design since then. Is it even common now? Are people really designing around cognitive biases right now? I think mostly not. So in our daily thought process of how we manage and design these systems, we're not using the latest knowledge. It's not all been integrated yet into the theory of learning and software.
Jeffrey Fredrick (01:48:01):
Because part of it, you mentioned two very different things. Part of it was humans and human systems, but then you brought in information. And when you bring those together, you're talking about what used to be called cybernetics, which is a term that's way out of fashion. It does not ... It used to be-
Gene Kim (01:48:18):
Jeffrey Fredrick (01:48:19):
Exactly. But that idea of how information technology would interact with human decision-making, I think there's a whole area there that's still very poorly understood. So I think we're getting better on the human element. We're getting better at systems thinking. But we're still integrating the information-plus-systems-plus-humans component. So, I think there's a lot there for us to work out still.
Gene Kim (01:48:50):
Thank you so much for the time. How can people reach you? And is there any specific help you're looking for these days where you want people to reach out to you?
Jeffrey Fredrick (01:48:56):
I am on Twitter, JTF. One of those three-letter, early-days Twitter handles. And that's the easiest way for people to reach me. I'm also on LinkedIn and conversationaltransformation.com. And I am really interested in hearing from people. And I'm really interested when people find that they have trouble making change where they are. For many years, I've done a session with people where I'd say, "Are you frustrated? It's probably your fault." And the idea is that there's probably some contribution you're making, or you have more freedom and degrees of control than you might realize. So I'm interested in hearing from people, in part because I want to sort of disprove this and find those places where ... Help me find the place where change is really impossible and where there's no chance for improvement. I'm interested in those people, frustrated learners, who are willing to share their stories. I find those incredibly helpful.
Gene Kim (01:49:50):
Awesome. Thank you so much, Jeffrey.
Jeffrey Fredrick (01:49:52):
Gene Kim (01:49:56):
Thank you so much for listening to this episode. Up next will be my interview of Scott Havens, formerly director of software engineering at jet.com and Walmart Labs, and then head of supply chain technology at Moda Operandi. He gave one of the most amazing DevOps Enterprise Summit presentations, describing how he helped rebuild the entire inventory management system for Walmart, the world's largest company. He earned this right through the amazing work he did building the incredible systems that powered jet.com, which Walmart had acquired. They powered inventory management, order management, transportation, available to promise, available to ship, and tons of other critical processes that all must go right for an online retailer. He talked about functional programming principles, not in the small but in the large, and how they enabled building and running one of the world's largest supply chain systems more safely, more quickly, and even more cheaply. This inspired one of my favorite story elements in The Unicorn Project. I know you'll enjoy it, and I'll see you then.
Gene Kim (01:51:01):
I hope you enjoyed that final episode of season one. While we take a break over the holidays, I want to share with you some of the great opportunities for you to continue your learning journey. First off, you should, of course, read Agile Conversations, the book that Jeffrey coauthored. And while you're at it, pick up a copy of Jonathan Smart's new book, Sooner Safer Happier. I love both books. And keep an eye on IT Revolution's Twitter account, @ITRevBooks, for lots of holiday giveaways. And don't forget to sign up for the IT Revolution newsletter at itrevolution.com for updates about the release of season two. See you then.