October 27, 2025
The following is an excerpt from the book Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond by Gene Kim and Steve Yegge.
Welcome back, head chefs. You’ve mastered working with your AI sous chef. You’ve discovered the joys of FAAFO—being fast, ambitious, able to work more autonomously, having fun, and exploring multiple options. But what happens when you need to step beyond your single station and orchestrate a kitchen—or perhaps a chain of restaurants?
In this chapter, we’ll explore your evolution from managing a single AI partner to conducting a symphony of digital assistants. We’ll touch on how to coordinate teams of AI agents working across complex projects. You’ll see why organizational architecture becomes more critical when AI accelerates everything. And we’ll talk about how to avoid a madhouse (either creating one or winding up in one) when multiple developers each command their own AI armies.
We’ll walk through frameworks for understanding how work gets done at scale, drawn from Gene’s research on high-performing organizations. We show real examples of what works and what doesn’t. And, yes, we’ll address the elephant in the room—the surprising DORA finding that AI adoption initially correlates with worse performance metrics.
By the end of this chapter, you’ll understand how to manage multiple AI assistants and how to architect systems where both human and AI teams can thrive together. You’ll have the skills to avoid becoming the source of 2 a.m. pages for your on-call colleagues, while creating the conditions for your organization to achieve FAAFO at scale.
You’ve grown comfortable working with your AI sous chef, maybe a few at once, and you’ve found FAAFO. But there may come a time when you need to scale this up. What happens when you’ve gone beyond running one kitchen and have to expand to a chain of restaurants (congratulations)—managing multiple locations across different continents, each with their own teams of humans and specialized AI assistants?
This is the transition we’re exploring now, moving beyond individual productivity into the realm of orchestration. And to navigate this shift effectively, we need a framework for understanding how work gets done in any system that needs to coordinate and integrate the efforts of many, so they can operate as a coherent and well-functioning whole. Fortunately, such a framework exists, born from a decade of research by Gene and his colleague Dr. Steven J. Spear, and culminating in their book Wiring the Winning Organization.
Gene, coming from the world of studying high-performing technology organizations and DevOps, got to collaborate with Dr. Spear, currently at the MIT Sloan School of Management and a renowned expert on high-velocity learning systems like the Toyota Production System (see his book The High-Velocity Edge). Together, they were searching for a unified theory of extraordinary management systems.
They asked: What separates organizations that consistently win from those that struggle? They found the answer in how the work was structured and coordinated, in what they called the "organizational wiring." They concluded that in any organization, work happens at three distinct layers, each with different concerns:

Layer 1: The technical work itself, performed by the people doing the value-creating tasks.

Layer 2: The tools, equipment, and technology those people use to do the work.

Layer 3: The social circuitry: the processes, routines, and communication patterns that coordinate and integrate everyone's efforts.

The organizational wiring resides in this third layer.
Organizational wiring is so important because Layer 3 by itself often dictates success or failure, regardless of how good Layers 1 and 2 are. Consider the legendary transformation of the GM–Toyota joint venture plant (NUMMI) in Fremont, California. Toyota took one of GM’s worst-performing plants, kept the same workforce (Layer 1) and the same factory capital equipment and floor space (Layer 2), yet turned it into a world-class facility within two years. The only thing that changed was Layer 3—the management system, the workflows, the communication patterns, the problem-solving mechanisms, and training for leaders.
In Part 2, we talked about how, during the Apollo space program, NASA established that the only people on the ground in Mission Control who could talk to the astronauts in space were fellow astronauts. This too was a Layer 3 decision.
Historically, as developers or individual contributors, most of us operated primarily at Layers 1 and 2. We focused on writing code or executing tasks using the tools provided. Layer 3 decisions—architecture, team structure, cross-team communication protocols, project planning—were typically the domain of managers, architects, or senior leadership. If you needed something from another team, you often had to escalate up the chain because the direct Layer 3 connections weren’t there or weren’t effective.
Consider Chefs Isabella and Vincent from Part 1. Both had equally talented staff (Layer 1) and identical kitchens (Layer 2). But Isabella, who meticulously planned the workflow, defined clear responsibilities for each station and established how they would integrate their parts (fabulous Layer 3 decisions), thus achieving FAAFO. Vincent, who threw everyone together hoping for emergent collaboration, created a shambles and the “bad” FAAFO. The only difference between Chefs Isabella and Vincent was the decisions they made in Layer 3.
Vibe coding, especially with agents, pushes every developer into making decisions in Layer 3. When you can spin up an AI assistant (or ten) to work on different parts of a problem, you become the architect.
Mastering these Layer 3 skills—thinking like an architect, enabling independence of action, creating fast feedback loops, managing dependencies, establishing clear communication protocols for your digital assistants—is not optional in the world of vibe coding.
How we organize and architect our teams and systems may change with vibe coding. For instance, consider how front-end and back-end teams emerged and had to agree on API contracts, whether their code should live in a shared repository, and protocols for synchronizing and merging work. Most of the industry decided that front-end and back-end teams should be separate, because each side grew complex enough to keep a human busy for an entire career. This was a Layer 3 problem that we solved through meetings, documentation, and processes.
These decisions may become a hindrance when AIs can do all the coding for both the front-end and back-end parts of the system. How do you coordinate and synchronize different agents run by different humans working on different sides of a service call? It may well be easier to have one AI handle it all.
We may decide that the traditional front-end/back-end team split doesn’t make sense anymore, since giving the agent a view of both sides may improve its performance on the client/server communication. We want to be able to make changes to both sides of the interface, which could be more difficult if they’re in different repositories. These types of coordination questions—how to organize agents and groups of agents—become critical as parallelism increases.
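One possible way to give an agent that view of both sides is to keep a single typed contract that both the producer and consumer import. Here's a hypothetical sketch (the endpoint and field names are invented, not from the book) of what that might look like:

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical example: one shared contract definition that an agent
# working across both the front end and back end treats as the single
# source of truth for the service call.

@dataclass(frozen=True)
class GetOrderResponse:
    order_id: str
    status: str          # e.g., "pending", "shipped"
    total_cents: int

def serialize(resp: GetOrderResponse) -> str:
    """Back-end side: emit the wire format."""
    return json.dumps(asdict(resp))

def deserialize(payload: str) -> GetOrderResponse:
    """Front-end side: parse the same contract."""
    data = json.loads(payload)
    return GetOrderResponse(**data)  # raises TypeError if the shape drifts

# Because both sides import one definition, an agent changing the
# interface must change producer and consumer in the same edit.
wire = serialize(GetOrderResponse("o-123", "pending", 4200))
assert deserialize(wire) == GetOrderResponse("o-123", "pending", 4200)
```

Because the wire format is derived from one definition, drift between the two sides shows up immediately as a failed round trip rather than a production bug.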
This new level of coordination requires thinking about agent-to-agent communication, shared standards for AI-generated outputs, and new Layer 2 tools designed for coordinating across multiple individual AI ecosystems. It adds a new dimension of complexity to teamwork. And we see many organizations already charging down this path.
We expect Layer 3 organizational wiring will change significantly in the years to come. When coding is no longer the bottleneck, the rest of your organization becomes the bottleneck. We’ve seen this before in the DevOps movement: cloud, CI/CD, and other Layer 2 technologies boosted developer productivity so much that they forced organizations to rewire (e.g., QA and InfoSec “shifting left,” “you build it, you run it,” etc.).
AI promises a bigger shift. When code generation stops being the constraint, pressure transfers to functional roles like product management, design, and QA, which become the new critical path. We’ll explore these broader organizational issues later in the book.
Throughout the book, we’ve pointed out that Layer 2 tooling is still quite poor, putting increased coordination burdens on Layer 3. For instance, we don’t yet have sophisticated dashboards for seamlessly orchestrating fleets of agents, managing their interactions, and resolving conflicts automatically. Much like the chefs of an earlier era figuring out how to run a multi-station kitchen, we’re often improvising: passing context via shared files, littering AGENTS.md files in our source code, creating custom Bash scripts, manually juggling Git branches, listening for notifications so agents aren’t left blocked waiting on us, manually reviewing shared artifacts at each step, and so forth.
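As a concrete illustration of the "passing context via shared files" improvisation, here is a minimal, hypothetical sketch (the file name and fields are invented): a JSON status board that each agent process updates, so the orchestrating developer, or another agent, can spot who is blocked.

```python
import json
from pathlib import Path

# Hypothetical shared "status board" file that each agent updates and
# the orchestrating developer polls, e.g., to notice blocked agents.
BOARD = Path("agent_status.json")
BOARD.unlink(missing_ok=True)  # start with a clean board for the demo

def update_status(agent: str, state: str, note: str = "") -> None:
    board = json.loads(BOARD.read_text()) if BOARD.exists() else {}
    board[agent] = {"state": state, "note": note}
    BOARD.write_text(json.dumps(board, indent=2))

def blocked_agents() -> list[str]:
    if not BOARD.exists():
        return []
    board = json.loads(BOARD.read_text())
    return [a for a, s in board.items() if s["state"] == "blocked"]

update_status("tests-agent", "running")
update_status("feature-agent", "blocked", "waiting on API contract review")
print(blocked_agents())  # → ['feature-agent']
```

In real multi-process use you'd also want file locking or atomic writes; this sketch skips that to keep the pattern visible.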
In Part 3, when we advocated for developers to create their own tooling to improve their own workflow, it was to address this gap. Those tools will reduce the need for so much manual coordination in Layer 3, especially as we want developers to be able to sustain ten thousand lines of code a day or more.
We’re seeing early patterns emerge in three areas: agent organization patterns, communication and context sharing, and parallel work management.
The near future holds promise for richer dashboards to manage agent swarms and better tools for cross-agent coordination. But today, you’ll need to be deliberate about establishing these patterns yourself.
As if running your own teams of agents weren’t hard enough, consider your human colleagues, who are managing agent teams of their own. Managing your own agents is the new individual Layer 3; coordinating the agent clusters of a team of five developers, each running multiple agents, is still an open problem. This is where we expect to see the emergence of “Layer 3 of Layer 3” coordination patterns that span multiple developers’ agent clusters.
And consider how fast it will be when we cease being the mechanism by which agents communicate. Instead of manually starting one agent to write the tests and another one to write the feature, we’ll be able to start up a group of agents that already know how to coordinate with each other and can take individual and group instructions from you.
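As a toy sketch of what that could look like, assuming no particular agent framework (the "agents" below are plain functions standing in for real agents), here is a small dependency-aware runner where the tests agent starts only after the feature agent reports done:

```python
from concurrent.futures import ThreadPoolExecutor
from typing import Callable
import threading

# Toy coordinator: each "agent" is a plain function, and tasks declare
# which other tasks must finish before they start.
def run_agents(tasks: dict[str, tuple[list[str], Callable[[], None]]]) -> list[str]:
    done = {name: threading.Event() for name in tasks}
    order: list[str] = []
    lock = threading.Lock()

    def run(name: str) -> None:
        deps, work = tasks[name]
        for dep in deps:
            done[dep].wait()        # block until each dependency finishes
        work()
        with lock:
            order.append(name)
        done[name].set()

    # One worker per task, so a blocked task can never starve the
    # dependency it is waiting for.
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        for name in tasks:
            pool.submit(run, name)
    return order

order = run_agents({
    "feature": ([], lambda: None),           # write the feature first
    "tests":   (["feature"], lambda: None),  # then write the tests
})
assert order.index("feature") < order.index("tests")
```

Real agent-to-agent coordination would replace the in-process events with messages or shared artifacts, but the shape is the same: declared dependencies plus signals for completion, with no human relaying output between agents.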
For more insights on effective AI-assisted development, check out Kim and Yegge’s upcoming book Vibe Coding and their podcast Vibe Coding with Steve and Gene on YouTube.
Gene Kim has been studying high-performing technology organizations since 1999. He was the founder and CTO of Tripwire, Inc., an enterprise security software company, where he served for 13 years. His books have sold over 1 million copies—he is the WSJ bestselling author of Wiring the Winning Organization, The Unicorn Project, and co-author of The Phoenix Project, The DevOps Handbook, and the Shingo Publication Award-winning Accelerate. Since 2014, he has been the organizer of DevOps Enterprise Summit (now Enterprise Technology Leadership Summit), studying the technology transformations of large, complex organizations.
Steve Yegge is an American computer programmer and blogger known for writing about programming languages, productivity, and software culture for two decades. He has spent over thirty years in the industry, split evenly between dev and leadership roles, including nineteen years combined at Google and Amazon. Steve has written over a million lines of production code in a dozen languages, has helped build and launch many large production systems at big tech companies, has led multiple teams of up to 150 people, and has spent much of his career relentlessly focused on making himself and other developers faster and better. He is currently an Engineer at Sourcegraph working on AI coding assistants.