November 13, 2025
The following is an excerpt from the book Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond by Gene Kim and Steve Yegge.
In the previous chapter, we equipped you to orchestrate your own AI sous chefs with finesse. Now we turn to a higher-level challenge: How do you scale these practices across an organization?
Technical leaders face a shift when bringing vibe coding to their teams. You’ll need to inspire skeptical engineers who view AI as either a threat or overhyped. You’ll have to navigate serious tension between unleashing creativity and maintaining stability. You’ll rethink hiring practices, performance metrics, and team structures—all while the technology itself evolves at breakneck speed.
We’ll show you how Quinn Slack’s innovative token-burning leaderboard at Sourcegraph turned AI adoption into friendly competition. You’ll learn why your personal hands-on experimentation matters more than commissioning analyst reports, and you’ll discover which interview questions predict success in this new paradigm.
By the end of this chapter, you’ll understand why organizational transformation is necessary, and how to cultivate it—from lighting the pilot flame of visible leadership to establishing safety rails that prevent disaster. You’ll have the tools to transform your organization from a collection of individual coders into a symphony of human creativity amplified by AI capability, creating environments where both humans and machines can do their best work.
As a leader, you’ll almost certainly need to roll AI and vibe coding into your current practices. But you’ll also have to mitigate the potential risks. Your job as a technical leader, whether you’re a line manager or a CTO, is to bring vision and velocity to your organization. To do that in the world of AI, you must encourage controlled experimentation, take controlled risks, and create a culture where everyone is excited to pull the starter cord, knowing there’ll be some wild first swings and, yes, some occasional mistakes.
Picture handing a chainsaw, with no guidance, to a friend who’s spent years chopping wood with a hatchet. Their first instinct might be to treat the chainsaw like a fancy axe, ruining it. Or maybe they manage to turn it on and then accidentally chop their backpack in half. Engineers with bad first experiences with vibe coding often go around telling everyone, “I knew it. This tool sucks! These things are a menace to society.”
As a leader bringing AI to your organization, you’ll need to project confidence and optimism. To succeed, you need engineers who are knowledgeable about vibe coding and happier because of it—not using it just because you told them to. To get to that happy place, you need to help vibe coding go viral in your org, like at Adidas—with suitable guardrails, naturally (e.g., authorized models, cost limits, training on good practices).
Once AI goes viral in your company, you can look forward to unleashing a storm of creativity and productivity. But people resist any kind of change—especially a giant change like this—and they need to be inspired. You can be the one to inspire them, but you have to start with yourself.
Before you schedule brown‑bag lunches or commission an analyst report, crack open a chatbot window and spend a week cooking with the model yourself. Our section on “Your First Vibe Coding Sessions” in Part 2 serves as a great start. Slice through refactors, whip up test suites, maybe try rewriting an ancient, unreadable Perl script to see what happens. Your personal “hands on keyboard” time helps build the only intuition that matters—where you gain confidence that AI can bring real value to your org. Ten hours of hands-on play will inform your strategy better than a hundred pages of analyst reports.
Teams calibrate their risk tolerance by watching their leader. If you’re experimenting out in the open—posting snippets, bragging about thirty‑second migrations—your cooks will follow. If you hide behind policy documents, they’ll sense the fear and talk about vibe coding only in hushed whispers. Instead, talk about your own FAAFO outcomes. Initially, your efforts to encourage vibe coding may clash with your company’s perception of existing rules and bureaucracy—all the more reason for you to be a tireless and vocal champion about how vibe coding can be done safely.
Dr. Matt Beane gave us the simplest adoption metric we’ve found: tokens consumed and generated per developer (i.e., “tokens burned”). Software developers can only experience the upside of AI if they learn it, and they can only learn by using it. Set a target, publicize a leaderboard, maybe give a silly trophy to the monthly “Most‑Improved Code Base” or “Longest Running AI Job That Did Something Cool.” Friendly competition beats compulsory training every time.
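As a rough sketch of Dr. Beane’s metric: assuming you can export per-developer token counts from your AI gateway or your provider’s usage API (the records and names below are purely hypothetical), a “tokens burned” leaderboard is just an aggregation and a sort:

```python
from collections import defaultdict

# Hypothetical usage records: (developer, tokens_consumed, tokens_generated).
# In practice these would come from your AI gateway or provider usage exports.
usage_log = [
    ("asha", 12_000, 4_500),
    ("ben", 3_200, 900),
    ("asha", 8_000, 2_100),
    ("carol", 25_000, 7_700),
]

def tokens_burned_leaderboard(log):
    """Rank developers by total tokens consumed + generated ("burned")."""
    totals = defaultdict(int)
    for dev, consumed, generated in log:
        totals[dev] += consumed + generated
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for rank, (dev, total) in enumerate(tokens_burned_leaderboard(usage_log), 1):
    print(f"{rank}. {dev}: {total:,} tokens burned")
```

The point isn’t the arithmetic—it’s that publishing even this simple ranking makes adoption visible and gives people something friendly to compete on.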
A guitarist can get “locked in” to their guitar if it’s the only one they play—overfitting on the idiosyncrasies of that instrument. It’s hard to get a good feel for the quirks and strengths of AI models unless you’re using more than one and comparing them. (Similarly, when you learn your first foreign language, you understand your native language better.)
This is more expensive for organizations—enterprise licenses for two models might be out of the question. But if budget is an issue, consider bringing in an open-source (OSS) model as your backup. OSS models tend to lag the frontier models by only a few months, and with coding agents, the model often doesn’t have to be the smartest to eventually find its way to the answer. OSS models should evolve to become fine for all but the most demanding tasks.
To offer an author’s perspective—during the writing of this book, we initially used only one model for draft generation, Claude 3.5 Sonnet. But this grew to five and later to over twenty models. We were surprised at how distinctive their analysis and writing skills were. At this point we can often guess the model by reading what they wrote.
Malcolm Gladwell’s famous tipping‑point triad—mavens, connectors, and salespeople—maps neatly onto engineering culture.
In decades past, we saw these same patterns accelerate the adoption of cloud, CI/CD, automated testing, microservices, and DevOps. Interestingly, it’s often the same mavens, connectors, and salespeople now bringing in the benefits of AI, though talented new people join this cast every day. Meanwhile, some experienced engineers still struggle with the nuances of working with AIs. You’ll need to identify who in your org is having early successes and encourage them to share those successes with others.
Encourage use of AI by creating AI-specific channels for people to share their experiences and questions. Hold office hours for experts to answer questions. Host talks from internal and external experts. Consider creating a low‑friction expense budget for AI experiments. Unleash your teams and encourage them to build things they’re proud of.
We lock chainsaws in a cabinet when not in use; do the same for dangerous AI implements.
Don’t roll out all of AI to everyone at once. Instead, identify those tipping-point contributors in your organization, and help them generate a few successes first.
Insist on extra validation and verification for all AI-generated code. Lean on your tech leaders to figure out what this means for your organization. You’ll need more testing than before and as many different types of validation as you can invent. When the code is (even partly) a black box, you need a lot of additional auditing.
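What “extra validation” means will differ by organization, but one illustrative shape is a pre-merge gate that holds AI-assisted changes to a stricter bar than human-written ones. This is a minimal sketch under stated assumptions—the field names and thresholds are hypothetical, not a real CI API:

```python
# Hypothetical pre-merge gate. Assumes your CI can flag AI-assisted changes
# (e.g., via a PR label) and supply test/review metadata; all field names
# here are illustrative placeholders.

def ready_to_merge(change):
    checks = [
        change["tests_pass"],           # full suite, not a subset
        change["coverage_delta"] >= 0,  # the change must not erode coverage
        change["human_reviews"] >= 1,   # at least one human read it
    ]
    if change["ai_assisted"]:
        # Extra scrutiny for partly black-box code: a second reviewer
        # and new tests covering the generated behavior.
        checks += [
            change["human_reviews"] >= 2,
            change["new_tests_added"],
        ]
    return all(checks)

pr = {
    "ai_assisted": True,
    "tests_pass": True,
    "coverage_delta": 1.5,
    "human_reviews": 2,
    "new_tests_added": True,
}
print(ready_to_merge(pr))
```

The specific checks matter less than the asymmetry: when provenance is murkier, the bar for evidence goes up.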
Make sure your engineers are aware of the AI fiascos that are possible, such as those described in this book—“Don’t be like Steve and his disappearing test suite. Count your babies!” We want people to share these valuable lessons and normalize the learning process so that these new practices can be adopted by everyone in the organization.
No vibe coding while Jessie is on call! Make sure your engineers are not turning their brains off. Vibe coding in production has to be a rigorous engineering effort. Establish clear ownership standards so that all code has a clear escalation path when things go wrong in production.
Without guardrails, this headline might be your organization: “Junior Dev and Chatbot Erase $40M in Revenue.”
Nothing accelerates adoption like a local legend. Find a pilot team, scope a high‑value but bounded problem (the backlog item everyone’s been avoiding for two quarters), and let them attack it with vibe coding. When they deliver in a tenth of the expected time, put their demo on the big screen.
When writing this book, we were able to talk to the leadership team of an online betting company who shared an impressive story. As an experiment to see how much they could build using vibe coding, they tried analyzing user identity images—think driver’s license checks to confirm whether a user can create an account. For a variety of reasons, the developer team chose to build the working prototype in Python, a language the team didn’t have much experience in. The demo dazzled the business leaders, and to their surprise and delight, their cautious production leadership gave the thumbs-up to deploy it. Then the experiment went from theoretical to real life: the vendor they had been using hiked their prices, and suddenly the prototype became a production service.
This showed the organization what could be done using vibe coding practices. What a victory! (Incidentally, many technology leaders tell us that teams are increasingly exploring displacing existing vendor solutions, especially those that are difficult to deal with or are now too expensive.)
Yes, every future outage will be blamed on “AI.” Lean into it. Host public retrospectives, document what happened, and capture the new guardrail that prevents a repeat. Over time, the organization comes to see accidents as opportunities to learn.
By encouraging everyone to share learnings, you give people an incentive to use AI more and teach others as well. You want to celebrate what people are doing with AI rather than having people hide it. Consider the scenario where individuals silently use AI to complete an eight-hour task in five minutes, saying that it took eight hours, and never disclosing that they got nearly eight hours back for themselves. Economists would describe this as individuals capturing the productivity surplus for themselves, rather than allowing the organization to benefit from and distribute these efficiency gains.
If you lean into this as a leader, with grounded confidence and optimism, you’ll have created an organization where greatness catches, spreads, and transforms everyone it touches.
How do you encourage your staff to embrace this unfamiliar technology? Quinn Slack, CEO of Sourcegraph, faced this challenge as he sought to generate enthusiasm for agentic coding across his organization—the whole company, not just engineers. His approach offers valuable lessons for any leader looking to foster a culture of innovation.
Mirroring Dr. Beane’s conjecture on token burn, Quinn independently postulated that token usage serves as a proxy for AI engagement, much as electricity consumption can predict factory output. He coded up a big, glowing, real-time leaderboard for Amp, the Sourcegraph coding agent for enterprise. The dashboard shows which developers are having the richest and lengthiest conversations with AI, who’s burning the most tokens, the number of lines of code everyone has generated, and other stats. Lots of fun, no judgment, and no shaming. All carrot, no stick.
Why does this approach work? Because visibility sparks curiosity, then curiosity sparks competition, and before long competition blossoms into experimentation. The first week the board went up, it generated conversations. The VP of Finance, Dan Adler, had the most lines of code one week, which he justifiably gloated over a bit, earning him lots of extra admiration from the developers.
Sourcegraph’s sales, customer success, and marketing teams have also been using the Amp coding agent for things like building technical demos and outreach tools. This usage by non-technical staff has been putting useful peer pressure on the engineers to jump on board, and once they see the light, they evangelize AI further.
Conversation lengths, raw token burn, and lines of code are blunt measures and somewhat gameable. Quinn knows that, so the leaderboard is framed as a conversation starter, not a performance review.
Outliers on either end are interesting. Heavy users are invited to share their techniques; light users might get asked whether they’re stuck or prefer to code by hand. The leaderboard isn’t there to judge; it’s there to surface stories, which are the most infectious way to spread new kitchen techniques.
For more insights on effective AI-assisted development, check out Kim and Yegge’s new book Vibe Coding and their podcast Vibe Coding with Steve and Gene on YouTube.
Gene Kim has been studying high-performing technology organizations since 1999. He was the founder and CTO of Tripwire, Inc., an enterprise security software company, where he served for 13 years. His books have sold over 1 million copies—he is the WSJ bestselling author of Wiring the Winning Organization, The Unicorn Project, and co-author of The Phoenix Project, The DevOps Handbook, and the Shingo Publication Award-winning Accelerate. Since 2014, he has been the organizer of DevOps Enterprise Summit (now Enterprise Technology Leadership Summit), studying the technology transformations of large, complex organizations.
Steve Yegge is an American computer programmer and blogger known for writing about programming languages, productivity, and software culture for two decades. He has spent over thirty years in the industry, split evenly between dev and leadership roles, including nineteen years combined at Google and Amazon. Steve has written over a million lines of production code in a dozen languages, has helped build and launch many large production systems at big tech companies, has led multiple teams of up to 150 people, and has spent much of his career relentlessly focused on making himself and other developers faster and better. He is currently an Engineer at Sourcegraph working on AI coding assistants.