October 9, 2025
The following is an excerpt from the forthcoming book Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond by Gene Kim and Steve Yegge.
By now, you’ve had your first vibe coding sessions, and we’ve deconstructed some of our own real-life sessions, in which we solved problems that were meaningful to us. We’re now ready to think about how to get the best from our AI collaborators.
In the next sections, we’ll distill the practices that will help you understand the messy and improvisational spirit of conversational problem-solving with AI. We’ll show you how to lean into conversations rather than hyper-formal contracts, the lazy but effective way to let AI see and fix its own mistakes, the art of relying on your sous chef’s encyclopedic knowledge, and how to turn your vague-on-purpose requests into precision-engineered results—all while maintaining FAAFO.
Vibe coding is about dynamic, in-the-moment problem-solving rather than creating a bulletproof prompt. You ask AI to help you solve your problem, and when you’re done, in most cases, you throw the conversation away and start working on the next issue. It’s like texting with friends. Casual and impromptu.
In contrast, prompt engineering is more like emailing a lawyer who is suing you—everything in that email is fraught with consequence, requiring precision and care. This is because prompt engineering shares many traits with a traditional engineering discipline. It requires careful testing, clear validation of expected outputs, and consideration of long-term maintainability and accuracy. You meticulously craft instructions, iterating on them over and over again to get the outcomes you want.
In vibe coding conversations, you don’t need to worry so much about such rigorous constraints.
The overall philosophy is simple: Treat the chat like a text message conversation, not a legal brief.
When vibe coding, you can get all sorts of errors, from compile errors to runtime errors to test failures to unexpected behavior and even environment setup issues. In these cases, you need to copy those errors or behaviors into your chat session. These act as the feedback your AI partner needs to course-correct.
AIs are remarkably good at understanding error messages and logs, usually spotting the issue. Instead of explaining, “The date formatting isn’t working properly in the user profile page,” show AI the error: “Invalid Date: TypeError: date.format is not a function.”
It can be easiest to upload a screenshot of the error with no further explanation (and if you need to provide some text, use “didn’t work” or similar). The visual information contains all the context AI needs. And as we described in Steve’s Puppeteer story, if you can wire up a coding agent to take its own screenshots, all the better.
Properly configured, coding agents will automatically see all these error messages: They can access your browser console, your terminal shell, your logs, and your test suites, and usually require little to no action from you.
AI has read almost everything on the internet and knows how to use almost every tool. This can save you from spending time learning cryptic tooling and rescue you from some pretty hairy situations. For example, when working with ffmpeg, we don’t waste time learning dozens of arcane parameters.
Rather, tell your AI collaborator: “I need to extract a 30-second clip starting at 2:15, remove the audio, and compress it to 720p.” Or when working with a database, ask: “Write the query: I want all transactions from the last quarter where the amount exceeded $1,000, grouped by customer region.”
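To make that database request concrete, here is the kind of SQL such a prompt might yield, run against an in-memory SQLite database. This is our own illustrative sketch, not output from the book’s sessions: the `transactions` table, its column names, and the hard-coded Q3 2025 date range standing in for “last quarter” are all assumptions.

```python
import sqlite3

# Hypothetical schema -- table and column names are assumptions for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE transactions (
           id INTEGER PRIMARY KEY,
           customer_region TEXT,
           amount REAL,
           created_at TEXT
       )"""
)
conn.executemany(
    "INSERT INTO transactions (customer_region, amount, created_at) VALUES (?, ?, ?)",
    [
        ("West", 2500.00, "2025-08-15"),
        ("West", 500.00, "2025-08-20"),   # below the $1,000 threshold
        ("East", 1200.00, "2025-09-01"),
        ("East", 3100.00, "2024-09-01"),  # outside the quarter
    ],
)

# "All transactions from the last quarter where the amount exceeded $1,000,
# grouped by customer region" -- with "last quarter" hard-coded as Q3 2025.
query = """
    SELECT customer_region, COUNT(*) AS txn_count, SUM(amount) AS total
    FROM transactions
    WHERE amount > 1000
      AND created_at BETWEEN '2025-07-01' AND '2025-09-30'
    GROUP BY customer_region
    ORDER BY customer_region
"""
results = conn.execute(query).fetchall()
print(results)  # [('East', 1, 1200.0), ('West', 1, 2500.0)]
```

The point is that you describe the result you want in plain English, and the AI supplies the GROUP BY, the filtering, and the date arithmetic you would otherwise have to look up.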
And it doesn’t have to be about programming. Gene learned how to do something he had been wanting to do for over a decade by asking: “How do I generate Git diffs of all changes made to a given file?”
Or better yet, if you’re using a coding agent, don’t bother to learn the command—just tell the coding agent what you’re trying to do. “Something broke in my code in create-drafts.clj. It used to work on Git commit 9b28ff3. What happened?”
(Steve, unfortunately, found himself in a position of asking this: “Please resurrect all of the deleted tests somewhere between 20 and 100 commits ago.” To his relief, Claude Code did all the Git investigation and surgery for him, rescuing all that code. We’ll tell you the full story in Part 3.)
Previously, we talked about how you can be sloppy with spelling and grammar in your conversations with AI. However, you’ll want to be clear and precise about the problem you want solved. This is because AIs can’t read your mind (yet). When we’re not sufficiently clear about our problem specification, surprise, woe, and frustration await.
Consider this vague request (similar to the style many tech leaders tell us their developers are using): “We need to handle dates with time zones.” There’s not much the world’s best time zone consultant could do with this, let alone your AI. So, dictate to AI what the problem is, what you know so far, and what help you’re looking for.
Here’s what that dictation might sound like (cleaned up a bit) when solving this problem:
I know that storing dates without time zones is not tenable. We’re there now and in a pickle. Give me some options on how real programs handle time zones. I like the way databases do it or how Git does it. Help me understand what it means to turn this Unix epoch in Python to something involving time zones. Yeah, give me a plan for how I handle time zones correctly for this value. How’s that?
Notice how sloppy and unstructured it is—our confusion about how to proceed should be evident. But as long as you tell AI what you know and what you want, you don’t need to worry about long pauses, extra information, garbled sentences, random noise, or changing your mind while you’re talking. AI will usually figure it out like an attentive person would.
Copy the dictation into an AI of your choice, and provide any extra stuff that you think might be helpful. In the time zones case, AI came back with a plan, which you can then copy and paste into chat programming or your coding agent.
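As a rough sketch of the kind of code such a plan leads to (our illustration, not the AI’s actual output), here is the core conversion in Python: keep the Unix epoch as an explicit UTC datetime, and convert to a specific zone only at the edges. The epoch value and the `America/Los_Angeles` zone are arbitrary choices for the example.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # standard library since Python 3.9

epoch_seconds = 1_700_000_000  # an arbitrary example timestamp

# 1. Turn the bare Unix epoch into an explicit, timezone-aware UTC datetime.
utc_time = datetime.fromtimestamp(epoch_seconds, tz=timezone.utc)

# 2. Convert to a user's local zone only for display, never for storage.
local_time = utc_time.astimezone(ZoneInfo("America/Los_Angeles"))

print(utc_time.isoformat())    # 2023-11-14T22:13:20+00:00
print(local_time.isoformat())  # 2023-11-14T14:13:20-08:00
```

This mirrors the “do it like databases and Git do” instinct from the dictation: store and compare in UTC, and attach a zone only when a human needs to read the value.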
It also provided a helpful migration plan, usage examples, and some best practices to follow.
Remember: The more concrete you are about requirements, and the better the context you provide, the more useful the code you get from AI will be. In the absence of clear specifications, AI will fill the gaps with its own imagination and hallucinations. But AIs excel at following your lead when you give them concrete examples.
This rule explains why your first prompt in the conversation tends to be the longest. You’re outlining what you want, being as specific as you can to constrain the solutions it generates. The first prompt may involve requesting it to create a plan, which you then review.
After that initial raft of instructions, we’ve found that our messages to AI tend to be short, such as, “Yes, go!” “Explain #2 further.” “Use the conventions in the function create-drafts-and-rank.” Or in less awesome cases, “No, revert that change,” or even, “Bad AI, bring those files back!”
When AI is doing things in a way that earns your trust, your prompts will tend to be shorter. When AI goes off the rails, you’ll have to write longer clarifications or start a new conversation.
Stay tuned for more exclusive excerpts from the upcoming book Vibe Coding: Building Production-Grade Software With GenAI, Chat, Agents, and Beyond by Gene Kim and Steve Yegge on this blog or by signing up for the IT Revolution newsletter.
Steve Yegge is an American computer programmer and blogger known for writing about programming languages, productivity, and software culture for two decades. He has spent over thirty years in the industry, split evenly between dev and leadership roles, including nineteen years combined at Google and Amazon. Steve has written over a million lines of production code in a dozen languages, has helped build and launch many large production systems at big tech companies, has led multiple teams of up to 150 people, and has spent much of his career relentlessly focused on making himself and other developers faster and better. He is currently an Engineer at Sourcegraph working on AI coding assistants.
Gene Kim has been studying high-performing technology organizations since 1999. He was the founder and CTO of Tripwire, Inc., an enterprise security software company, where he served for 13 years. His books have sold over 1 million copies—he is the WSJ bestselling author of Wiring the Winning Organization, The Unicorn Project, and co-author of The Phoenix Project, The DevOps Handbook, and the Shingo Publication Award-winning Accelerate. Since 2014, he has been the organizer of DevOps Enterprise Summit (now Enterprise Technology Leadership Summit), studying the technology transformations of large, complex organizations.