
October 27, 2025

From Line Cook to Head Chef: Orchestrating AI Teams

By Gene Kim and Steve Yegge

The following is an excerpt from the book Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond by Gene Kim and Steve Yegge.


Welcome back, head chefs. You’ve mastered working with your AI sous chef. You’ve discovered the joys of FAAFO—being fast, ambitious, able to work more autonomously, having fun, and exploring multiple options. But what happens when you need to step beyond your single station and orchestrate a kitchen—or perhaps a chain of restaurants?

In this chapter, we’ll explore your evolution from managing a single AI partner to conducting a symphony of digital assistants. We’ll touch on how to coordinate teams of AI agents working across complex projects. You’ll see why organizational architecture becomes more critical when AI accelerates everything. And we’ll talk about how to avoid a madhouse (either creating one or winding up in one) when multiple developers each command their own AI armies.

We’ll walk through frameworks for understanding how work gets done at scale, drawn from Gene’s research on high-performing organizations. We’ll show real examples of what works and what doesn’t. And, yes, we’ll address the elephant in the room—the surprising DORA finding that AI adoption initially correlates with worse performance metrics.

By the end of this chapter, you’ll understand how to manage multiple AI assistants and how to architect systems where both human and AI teams can thrive together. You’ll have the skills to avoid becoming the source of 2 a.m. pages for your on-call colleagues, while creating the conditions for your organization to achieve FAAFO at scale.

Advanced Lessons for Head Chefs

You’ve grown comfortable working with your AI sous chef, maybe a few at once, and you’ve found FAAFO. But there may come a time when you need to scale this up. What happens when you’ve gone beyond running one kitchen and have to expand to a chain of restaurants (congratulations)—managing multiple locations across different continents, each with their own teams of humans and specialized AI assistants?

This is the transition we’re exploring now, moving beyond individual productivity into the realm of orchestration. And to navigate this shift effectively, we need a framework for understanding how work gets done in any system that needs to coordinate and integrate the efforts of many, so they can operate as a coherent and well-functioning whole. Fortunately, such a framework exists, born from a decade of research by Gene and his colleague Dr. Steven J. Spear, and culminating in their book Wiring the Winning Organization.

Gene, coming from the world of studying high-performing technology organizations and DevOps, got to collaborate with Dr. Spear, currently at the MIT Sloan School of Management and a renowned expert on high-velocity learning systems like the Toyota Production System (see his book The High-Velocity Edge). Together, they were searching for a unified theory of extraordinary management systems.

They asked: What separates organizations that consistently win from those that struggle? They found the answer in how the work was structured and coordinated, what they called the “organizational wiring.” They concluded that in any organization, work happens at three distinct layers, each with different concerns, and that the organizational wiring resides in the third:

  • Layer 1: The Work Itself: This is where value is created. It’s the patient in the hospital, the artfully plated entree leaving the kitchen, the code being developed, the binary executable running in production, the feature being delivered to users. It’s the “what,” where value is being added.
  • Layer 2: The Tools and Infrastructure: This is the gear we use to do the work. In the hospital, it’s the MRI or CT scanners; in the kitchen, it’s the ovens, mixers, knives, and fancy sous-vide machines; in our world, it’s your IDE, the compiler, Kubernetes, your CI/CD pipeline, and version control systems. It’s often how we work. Mastery of Layer 2 tools is thought of as a hallmark of being a great practitioner of our craft.
  • Layer 3: The Organizational Wiring: This is the least visible but most critical layer. It defines how the work is structured, partitioned, and integrated. It encompasses system architecture, organizational design, communication protocols, workflows and processes, standards, and interfaces—how everything and everyone connects. It defines who talks to whom, about what, how often, in what format, and under what rules. It’s the layout of the kitchen, the roles of the kitchen staff, how orders turn into successful dishes, and the communication flow between stations. It’s the leadership and cultural norms that dictate how people act and react. This wiring enables (or hinders) effective collaboration. In our world, it also includes software architecture—a connection Conway’s Law made famous: “If you have four groups working on a compiler, you’ll get a 4-pass compiler.” 

Organizational wiring is so important because Layer 3 by itself often dictates success or failure, regardless of how good Layers 1 and 2 are. Consider the legendary transformation of the GM–Toyota joint venture plant (NUMMI) in Fremont, California. Toyota took one of GM’s worst-performing plants, kept the same workforce (Layer 1) and the same factory capital equipment and floor space (Layer 2), yet turned it into a world-class facility within two years. The only thing that changed was Layer 3—the management system, the workflows, the communication patterns, the problem-solving mechanisms, and training for leaders. 

In Part 2, we talked about how, during the Apollo space program, NASA established that the only people on the ground in Mission Control who could talk to the astronauts in space were fellow astronauts. This too was a Layer 3 decision.

Historically, as developers or individual contributors, most of us operated primarily at Layers 1 and 2. We focused on writing code or executing tasks using the tools provided. Layer 3 decisions—architecture, team structure, cross-team communication protocols, project planning—were typically the domain of managers, architects, or senior leadership. If you needed something from another team, you often had to escalate up the chain because the direct Layer 3 connections weren’t there or weren’t effective.

Consider Chefs Isabella and Vincent from Part 1. Both had equally talented staff (Layer 1) and identical kitchens (Layer 2). But Isabella, who meticulously planned the workflow, defined clear responsibilities for each station and established how they would integrate their parts (fabulous Layer 3 decisions), thus achieving FAAFO. Vincent, who threw everyone together hoping for emergent collaboration, created a shambles and the “bad” FAAFO. The only difference between Chefs Isabella and Vincent was the decisions they made in Layer 3.

Vibe coding, especially with agents, pushes every developer into making decisions in Layer 3. When you can spin up an AI assistant (or ten) to work on different parts of a problem, you become the architect.

Mastering these Layer 3 skills—thinking like an architect, enabling independence of action, creating fast feedback loops, managing dependencies, establishing clear communication protocols for your digital assistants—is not optional in the world of vibe coding. 

AI May Change Our Layer 3 Decisions

How we organize and architect our teams and systems may change with vibe coding. For instance, consider how front-end and back-end teams emerged and had to agree on API contracts, whether their code should live in a shared or common repository, and protocols for synchronizing and merging work. Most of the industry decided that front-end/back-end teams should be separate, because each side grew complex enough to keep a human busy for their whole career. This was a Layer 3 problem that we solved through meetings, documentation, and processes.

These decisions may become a hindrance when AIs can do all the coding for both the front-end and back-end parts of the system. How do you coordinate and synchronize different agents run by different humans working on different sides of a service call? It may well be easier to have one AI handle it all.

We may decide that the traditional front-end/back-end team split doesn’t make sense anymore, since giving the agent a view of both sides may improve its performance on the client/server communication. We want to be able to make changes to both sides of the interface, which could be more difficult if they’re in different repositories. These types of coordination questions—how to organize agents and groups of agents—become critical as parallelism increases.

This new level of coordination requires thinking about agent-to-agent communication, shared standards for AI-generated outputs, and new Layer 2 tools designed for coordinating across multiple individual AI ecosystems. It adds a new dimension of complexity to teamwork. And we see many organizations already charging down this path.

We expect Layer 3 organizational wiring will change significantly in the years to come. When coding is no longer the bottleneck, the rest of your organization becomes the bottleneck. We’ve seen this before in the DevOps movement: cloud, CI/CD, and other Layer 2 technologies boosted developer productivity so much that they forced organizations to rewire (e.g., QA and InfoSec “shifting left,” “you build it, you run it,” etc.). 

AI promises a bigger shift. When code generation stops being the constraint, pressure transfers to functional roles like product management, design, and QA, which become the new critical path. We’ll explore these broader organizational issues later in the book.

Areas Where We Need Layer 2 to Improve

Throughout the book, we’ve pointed out that Layer 2 tooling is still quite poor, putting increased coordination burdens in Layer 3. For instance, we don’t yet have sophisticated dashboards for seamlessly orchestrating fleets of agents, managing their interactions, and resolving conflicts automatically. Much like chefs in the early days of figuring out how to run a multi-station kitchen, we’re often improvising—passing context via shared files, littering AGENTS.md files in our source code, creating custom Bash scripts, manually juggling Git branches, listening for notifications to make sure agents aren’t blocked waiting on us, manually reviewing shared artifacts at each step, and so forth.
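
As a concrete illustration of that improvisation, here is a minimal sketch that assumes a home-grown convention of giving each agent its own Git worktree under an agents/ directory, then polling those worktrees to spot agents that look idle and may be blocked waiting on you. The directory layout, thresholds, and naming are our own assumptions, not something any agent tool prescribes; only the git commands are real.

```python
import subprocess
import time
from pathlib import Path

# Assumed layout: each agent works in its own Git worktree under agents/,
# e.g. agents/frontend-agent, agents/test-agent. This convention is ours,
# not something mandated by any particular tool.
AGENT_ROOT = Path("agents")
IDLE_THRESHOLD_SECS = 15 * 60  # flag agents with no commits in 15 minutes


def run_git(worktree: Path, *args: str) -> str:
    """Run a git command inside one agent's worktree and return its output."""
    result = subprocess.run(
        ["git", "-C", str(worktree), *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


def report() -> None:
    """Print one status line per agent worktree: working, ok, or possibly blocked."""
    for worktree in sorted(AGENT_ROOT.iterdir()):
        if not (worktree / ".git").exists():
            continue  # not a worktree; skip
        branch = run_git(worktree, "rev-parse", "--abbrev-ref", "HEAD")
        dirty = bool(run_git(worktree, "status", "--porcelain"))
        last_commit = int(run_git(worktree, "log", "-1", "--format=%ct"))
        idle = (time.time() - last_commit) > IDLE_THRESHOLD_SECS
        status = "working" if dirty else ("possibly blocked" if idle else "ok")
        print(f"{worktree.name:20} {branch:24} {status}")


if __name__ == "__main__":
    report()
```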

In Part 3, when we advocated for developers to create their own tooling to improve their own workflow, it was to address this gap. These tools will reduce the need to do so much coordination manually in Layer 3, especially as we want to support developers creating ten thousand or more lines of code a day for sustained periods.

We’re seeing early patterns emerge:

Agent Organization Patterns:

  • Subagents: These enhance context window lifetime and parallelize research tasks.
  • Generators and verifiers: Separate concerns by creating dedicated agents for implementation versus testing (a minimal sketch follows this list).
  • Task graph discipline: Break work into leaf nodes small enough for agents to handle independently.
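
To make the generator-and-verifier split more concrete, here is a minimal sketch. The run_agent() function is a hypothetical placeholder for however you invoke your coding agent (CLI, API, or otherwise); only the pytest invocation is a real command, and the loop structure is our own illustration rather than a prescribed pattern.

```python
import subprocess


def run_agent(role: str, prompt: str) -> None:
    # Hypothetical placeholder: wire this to whichever coding agent you use.
    # The signature is an assumption made for illustration, not a real API.
    print(f"[{role}] would be prompted with:\n{prompt}\n")


def generate_and_verify(feature_spec: str, max_rounds: int = 3) -> bool:
    """Generator/verifier loop: one agent writes tests from the spec, another
    implements the feature, and pytest acts as the impartial referee."""
    run_agent("verifier", f"Write pytest tests for this spec:\n{feature_spec}")
    for _ in range(max_rounds):
        run_agent("generator", f"Implement the feature so the tests pass:\n{feature_spec}")
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        if result.returncode == 0:
            return True  # tests pass; accept the implementation
        # Feed the failure output back to the generator on the next round.
        feature_spec += f"\n\nPrevious test failures:\n{result.stdout[-2000:]}"
    return False
```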

Communication and Context Sharing:

  • Shared documentation and files: Agents (and people) exchange context through plans, specifications, and design docs (recommended in Anthropic’s Claude Code Best Practices).
  • Direct agent communication: Frameworks enable agents to message each other, with MCP as a communication layer between systems.

Parallel Work Management:

  • Well-designed parallelism: Minimize dependencies while maximizing concurrent agent work. 
  • Large-scale parallel experimentation: Multiple agent clusters with separate repository clones compete to find optimal solutions (sketched after this list).
  • Verification integration: Build testing and validation into every stage rather than leaving it until the end. 
  • Merge strategies: Plan ahead for how components will recombine without conflicts.
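
Here is a rough sketch of what large-scale parallel experimentation can look like with today’s tools: each attempt gets its own Git worktree and branch, attempts run concurrently, and the first branch whose tests pass wins. The run_attempt() function is a hypothetical stand-in for launching your agent of choice; the git worktree and pytest commands are real, but the overall harness is an assumption, not a standard workflow.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path


def run_attempt(worktree: Path, task: str) -> bool:
    # Hypothetical: kick off one agent attempt in an isolated worktree and
    # report whether its test run comes back green. How the agent is launched
    # depends on your tooling; only the pytest call below is a real command.
    result = subprocess.run(["pytest", "-q"], cwd=worktree, capture_output=True)
    return result.returncode == 0


def parallel_experiment(repo: Path, task: str, attempts: int = 4) -> Path | None:
    """Give each attempt its own worktree and branch so agents can't trample
    each other; keep the first branch whose tests pass and prune the rest."""
    worktrees = []
    for i in range(attempts):
        wt = repo.parent / f"attempt-{i}"
        subprocess.run(
            ["git", "-C", str(repo), "worktree", "add", "-b", f"attempt-{i}", str(wt)],
            check=True,
        )
        worktrees.append(wt)

    with ThreadPoolExecutor(max_workers=attempts) as pool:
        results = list(pool.map(lambda wt: run_attempt(wt, task), worktrees))

    winner = next((wt for wt, ok in zip(worktrees, results) if ok), None)
    for wt in worktrees:
        if wt != winner:  # clean up the losing attempts
            subprocess.run(
                ["git", "-C", str(repo), "worktree", "remove", "--force", str(wt)],
                check=True,
            )
    return winner
```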

The near future holds promise for richer dashboards to manage agent swarms and better tools for cross-agent coordination. But today, you’ll need to be deliberate about establishing these patterns yourself.

As if running your own teams of agents isn’t hard enough, think about your human colleagues. Managing your own team of AI agents is the new individual Layer 3. We need to be able to collaborate with colleagues who are also managing their own agent teams. Given a team of five developers, each running multiple agents, coordinating their clusters is an open problem. This is where we should start to see the emergence of “Layer 3 of Layer 3” coordination patterns that span multiple developers’ agent clusters.

And consider how much faster things will move when we cease being the mechanism by which agents communicate. Instead of manually starting one agent to write the tests and another one to write the feature, we’ll be able to start up a group of agents that already know how to coordinate with each other and can take individual and group instructions from you.


For more insights on effective AI-assisted development, check out Kim and Yegge’s upcoming book Vibe Coding and their podcast Vibe Coding with Steve and Gene on YouTube.

About The Authors

Gene Kim

Gene Kim has been studying high-performing technology organizations since 1999. He was the founder and CTO of Tripwire, Inc., an enterprise security software company, where he served for 13 years. His books have sold over 1 million copies—he is the WSJ bestselling author of Wiring the Winning Organization, The Unicorn Project, and co-author of The Phoenix Project, The DevOps Handbook, and the Shingo Publication Award-winning Accelerate. Since 2014, he has been the organizer of DevOps Enterprise Summit (now Enterprise Technology Leadership Summit), studying the technology transformations of large, complex organizations.

Steve Yegge

Steve Yegge is an American computer programmer and blogger known for writing about programming languages, productivity, and software culture for two decades. He has spent over thirty years in the industry, split evenly between dev and leadership roles, including nineteen years combined at Google and Amazon. Steve has written over a million lines of production code in a dozen languages, has helped build and launch many large production systems at big tech companies, has led multiple teams of up to 150 people, and has spent much of his career relentlessly focused on making himself and other developers faster and better. He is currently an Engineer at Sourcegraph working on AI coding assistants.

