January 5, 2026
You’ve read the headlines. AI is enabling 100% to 200% increases in productivity! Software developers will be obsolete in five years! We’ve taken the human entirely out of the loop! If you’re not doing this right now, you’ll be out of business in three years!
You’re a promoter of new technology—you wouldn’t be in your leadership role if you weren’t. This all sounds amazing. You rolled out AI assistants to augment your engineers. You know the benefits should be huge.
But your current lived experience looks more like this:
And here’s what worries you most: With the ability to generate code faster than ever, it’s now taking much longer to confirm that work is ready for production than it took to complete the work in the first place.
According to Robbie Daitzman and Christina Yakomin in their paper “Unclogging the Drain” published in the Fall 2025 Enterprise Technology Leadership Journal, if you don’t address this bottleneck, you’re heading for disaster.
The advent of generative code solutions—from AI coding assistants like GitHub Copilot to “vibe coding” tools like Cursor—presents an existential risk to large enterprises if not applied intentionally.
The question isn’t “Are we building the right thing?” (that’s product management, requirements definition, feature prioritization). The question is: “How do we make sure we’re building the thing right?”
The greatest opportunity for improvement today lies in what happens from when a pull request (PR) is raised to when a client can first utilize a new capability in production. This is not only what you, as a senior technology executive, have the most control over—it’s also the part of the SDLC that will feel the most strain with the rise of AI tools.
Think of it like a drain. Code generation is like turning on a fire hose. If your downstream validation processes—code review, testing, policy enforcement, performance validation—are designed for a trickle, you’re going to flood the system. The drain clogs. Work backs up. Value delivery slows to a crawl despite generating code at unprecedented speeds.
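The drain metaphor maps directly onto basic queueing behavior: when work arrives faster than downstream validation can service it, the backlog grows without bound. A minimal simulation illustrates the dynamic (the arrival and service rates here are hypothetical, purely for illustration):

```python
# Discrete-time queue simulation: PRs arrive faster than the
# validation pipeline can service them, so the backlog grows.
# Rates are hypothetical, purely for illustration.

def simulate_backlog(arrival_rate: float, service_rate: float, days: int) -> list:
    """Track the validation backlog (PRs waiting) day by day."""
    backlog = 0.0
    history = []
    for _ in range(days):
        backlog += arrival_rate                 # new PRs raised today
        backlog -= min(backlog, service_rate)   # PRs the pipeline can validate
        history.append(backlog)
    return history

# Before AI assistants: 8 PRs/day in, 10/day validated -> backlog stays at zero.
stable = simulate_backlog(arrival_rate=8, service_rate=10, days=30)
# After AI assistants: 20 PRs/day in, same 10/day validated -> backlog grows linearly.
flooded = simulate_backlog(arrival_rate=20, service_rate=10, days=30)

print(stable[-1], flooded[-1])  # 0.0 300.0
```

Doubling code generation speed without widening the drain doesn't double throughput; it just moves the queue downstream.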
The authors map out the critical steps in this journey: code review, automated functional testing, static code analysis, automated policy enforcement, performance testing, the Go/No-Go decision, and resilient rollout.
Each step is a potential bottleneck. AI code generation doesn’t just strain one step—it simultaneously increases pressure on every single validation point in your pipeline.
Let’s examine how to unclog each critical step.
The Problem: As generative AI tools accelerate code creation, the code review process becomes a bottleneck—especially when senior engineers must manually review complex, machine-generated code that may be harder to interpret than human-written code.
The Solution: Shift review earlier and make it layered, starting informally in the IDE and finishing with a formal, asynchronous review before merge.
When to do it: Code review should be completed via pull request submitted with each code/config change prior to entering an integration/testing environment, before code is merged into main or feature branches.
Where to do it: Code review can start in the local IDE before code is committed, through practices like pair programming. Then, before merging, formal code review should be completed asynchronously through code repository/SCM tooling.
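One practical way to keep machine-generated changes reviewable is to flag oversized pull requests automatically. The sketch below parses `git diff --numstat` output; the 400-line threshold is a hypothetical policy value, not a universal standard:

```python
# Flag pull requests that are too large to review effectively.
# Parses `git diff --numstat` output; the 400-line threshold is a
# hypothetical policy value, not a universal standard.

REVIEWABLE_LINES = 400

def pr_needs_split(numstat_output: str, limit: int = REVIEWABLE_LINES) -> bool:
    """Return True if total added+deleted lines exceed the review limit."""
    total = 0
    for line in numstat_output.strip().splitlines():
        added, deleted, _path = line.split("\t")
        # Binary files show "-" for counts; skip them.
        if added != "-":
            total += int(added) + int(deleted)
    return total > limit

sample = "120\t30\tsrc/service.py\n350\t10\tsrc/generated_client.py"
print(pr_needs_split(sample))  # True: 510 changed lines exceed the limit
```

A check like this can run as a CI status and route oversized changes to a senior reviewer or back to the author for splitting.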
The Problem: Manual testing can’t keep pace with AI-generated code volume. Organizations need automated validation that runs continuously.
When to do it: Run automated functional tests with each CI (continuous integration) build. This provides fast feedback, allowing developers to catch and fix issues early in the development cycle.
Where to do it: Execute in the CI pipeline as part of the automated build process.
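A functional test that runs with every CI build might look like the sketch below. The discount rule under test is a hypothetical example; in a real pipeline a runner such as pytest would collect and execute tests like these:

```python
# A small automated functional test of the kind that runs on every CI
# build. The discount logic under test is a hypothetical example;
# in CI a runner such as pytest would collect tests like these.

def apply_discount(total_cents: int, loyalty_years: int) -> int:
    """Business rule: 5% off per loyalty year, capped at 20%."""
    pct = min(loyalty_years * 5, 20)
    return total_cents - (total_cents * pct) // 100

def test_discount_is_capped():
    assert apply_discount(10_000, loyalty_years=10) == 8_000  # capped at 20%

def test_no_loyalty_no_discount():
    assert apply_discount(10_000, loyalty_years=0) == 10_000

test_discount_is_capped()
test_no_loyalty_no_discount()
print("functional tests passed")
```

Because the suite runs on every integration build, a regression in AI-generated code is caught minutes after merge rather than days later in manual QA.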
The Problem: Without automated code quality checks, issues accumulate. AI-generated code may violate coding standards, introduce security vulnerabilities, or create maintainability problems that only become apparent later.
The Solution: Static code analysis evaluates source code without executing it, identifying syntax errors, code smells, security vulnerabilities, and violations of coding standards.
When to do it: Incorporate static code analysis into every CI build and as a standard part of the broader CI/CD pipeline.
Where to do it: Run locally during development, then as part of CI/CD pipeline throughout the development lifecycle.
The Problem: Organizational conventions, standards, and best practices need to be consistently followed. Manual checks don’t scale to AI code generation volumes.
The Solution: Automated policy enforcement validates configurations like cloud resource tagging, ensures resiliency testing completion, and confirms alerting and monitoring standards are met.
When to do it: Run with each CI build to catch violations early. Enforcement timing may vary based on policy severity and context.
Where to do it: Integrate into the CI/CD pipeline to ensure compliance checks are core parts of the delivery process, not afterthoughts.
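A policy check like the cloud-tagging example can be a simple build-breaking script. The required tag names and resources below are hypothetical; real enforcement would run against infrastructure-as-code plans or cloud provider APIs:

```python
# Automated policy check: verify that cloud resources carry required
# tags before a build passes. Tag names and resources here are
# hypothetical; real checks run against IaC plans or cloud APIs.

REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def missing_tags(resource: dict) -> set:
    """Return the set of required tags the resource is missing."""
    return REQUIRED_TAGS - set(resource.get("tags", {}))

resources = [
    {"name": "payments-db",
     "tags": {"owner": "team-a", "cost-center": "42", "environment": "prod"}},
    {"name": "scratch-bucket",
     "tags": {"owner": "team-b"}},
]

violations = {r["name"]: missing_tags(r) for r in resources if missing_tags(r)}
print(violations)  # only scratch-bucket is flagged, with its missing tags
```

Failing the build on violations makes compliance a property of the pipeline rather than a manual audit.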
The Problem: New functionality must maintain or improve performance profiles. Without systematic performance validation, regressions slip through, degrading user experience.
The Solution: Performance testing evaluates how a system behaves under expected load conditions, ensuring responsiveness, stability, and scalability.
Run moderate load tests representing typical daily traffic patterns before each production deployment. This validates that key user journeys continue to meet performance expectations and that no regressions have been introduced.
When to do it: Before each production deployment for critical user journeys. Component-level performance tests can run with each CI build of the main branch but shouldn’t be build-breaking due to longer runtime.
Where to do it: Pre-production or integration environment that closely mirrors production.
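A component-level performance check that runs per build can be as simple as timing the code path and asserting a latency budget. The handler and the 50ms budget below are hypothetical; full load tests use tools such as k6 or Gatling against a production-like environment:

```python
# Component-level performance check of the kind that can run with
# each CI build. The handler and latency budget are hypothetical;
# real load tests use dedicated tooling in a prod-like environment.
import time
import statistics

def handler() -> int:
    # Stand-in for the code path under test.
    return sum(range(1000))

def p95_latency_ms(fn, samples: int = 200) -> float:
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        fn()
        timings.append((time.perf_counter() - start) * 1000)
    return statistics.quantiles(timings, n=100)[94]  # 95th percentile

latency = p95_latency_ms(handler)
assert latency < 50, f"p95 regression: {latency:.2f}ms over budget"
print("p95 within budget")
```

Asserting on a percentile rather than the mean keeps the check sensitive to tail latency, which is what users actually feel.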
The Problem: A fully automated pipeline can confirm that a release can go to production, but not whether it should. The Go/No-Go decision answers that question, factoring in business readiness, client impact, and timing.
The Solution: This decision is critical to ensuring production deployments are not only technically sound but also aligned with broader organizational goals and client expectations.
When to do it: Every production deployment, regardless of size or complexity, should be preceded by a deliberate Go/No-Go decision.
Where to do it: Integrated into the CI/CD pipeline as a formal checkpoint, often with a human in the loop. This can be a manual approval step, scheduled release window, or gated deployment requiring sign-off from designated stakeholders.
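The can/should distinction can be made explicit in the gate itself. In this sketch, automated signals establish technical readiness while a human sign-off and a release window govern business readiness; the signal names and the window rule are hypothetical:

```python
# Sketch of a Go/No-Go checkpoint: automated signals establish that
# the release *can* ship; a release window and human sign-off decide
# that it *should*. Signal names and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class ReleaseSignals:
    tests_passed: bool
    policies_passed: bool
    perf_within_budget: bool
    inside_release_window: bool   # business timing constraint
    stakeholder_approved: bool    # the human in the loop

def go_no_go(s: ReleaseSignals) -> str:
    can_ship = s.tests_passed and s.policies_passed and s.perf_within_budget
    should_ship = s.inside_release_window and s.stakeholder_approved
    return "GO" if can_ship and should_ship else "NO-GO"

# Technically sound, but no stakeholder sign-off yet -> held back.
print(go_no_go(ReleaseSignals(True, True, True, True,
                              stakeholder_approved=False)))  # NO-GO
```

Encoding the decision this way also leaves an audit trail of why each release did or didn't ship.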
The Problem: Even with extensive validation, things can go wrong in production. The blast radius of failures needs to be minimized, with rapid recovery mechanisms in place.
The Solution: Resilient rollout deploys production changes in a way that minimizes risk, ensures rapid validation, and enables efficient rollback if needed.
When to do it: Every production deployment should include a resilient rollout strategy, regardless of the size or perceived risk of the change.
Where to do it: Deploy to production with automated gating and health checks in the CI/CD pipeline, followed by post-certification validation in the live production environment.
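A common resilient-rollout pattern is the staged canary: shift traffic in increments, check health after each stage, and roll back automatically on failure. This simulation is a sketch; the stages and health check are hypothetical, and real systems drive this through a deployment controller or service mesh:

```python
# Simulated canary rollout: shift traffic in stages, check health
# after each stage, and roll back on failure to limit the blast
# radius. Stages and the health-check function are hypothetical.

STAGES = [1, 10, 50, 100]  # percentage of traffic on the new version

def rollout(healthy_at_pct) -> tuple:
    """Advance through canary stages; roll back if any stage is unhealthy."""
    for pct in STAGES:
        if not healthy_at_pct(pct):
            return ("rolled-back", pct)  # blast radius limited to this stage
    return ("deployed", 100)

# A regression that only surfaces under real load, at 50% of traffic:
print(rollout(lambda pct: pct < 50))   # ('rolled-back', 50)
print(rollout(lambda pct: True))       # ('deployed', 100)
```

The failure above never reached more than half of users, and the unhealthy stage itself triggered the rollback with no human intervention.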
The authors’ central recommendation is clear: Automate wherever possible, validate continuously, and use signals generated throughout the software delivery lifecycle to build confidence in every production release.
This transformation doesn’t happen overnight. It requires:
But here’s the critical insight: These downstream improvements have some of the highest return on investment and are often within your direct control as a technology executive.
If you’re a senior technology executive at a large enterprise (likely 10k+ employees), you need to recognize:
Start by mapping your current PR-to-production flow. For each step, ask:
Then systematically address each bottleneck:
Most importantly: Recognize this is not a one-time fix. True transformation requires continuous focus on identifying and addressing the next source of friction. For many large organizations, once downstream validation is optimized, the next frontier may lie further upstream in ideation, prioritization, or portfolio planning.
AI code generation promises revolutionary productivity gains. But without addressing the PR-to-production bottleneck, those gains evaporate in your validation pipeline.
The drain is clogged. AI is the fire hose. Unless you unclog the drain, turning on the fire hose just floods the system.
The good news: You have direct control over these downstream processes. The practices outlined by Daitzman and Yakomin provide actionable steps to automate validation, build confidence, and actually deliver the business value that AI code generation promises.
The question isn’t whether to adopt AI code generation—that ship has sailed. The question is whether your organization will unclog the value stream fast enough to capture the benefits before your competitors do.
Start with your biggest bottleneck. Automate it. Then move to the next one. Build a sustainable, adaptive delivery model that evolves with your business, your clients, and your technology landscape.
Because generating code faster only creates value if you can deliver it to production faster too.
This blog post is based on “Unclogging the Drain: Clearing Obstacles in the Value Stream from PR to Production” by Robbie Daitzman and Christina Yakomin, published in the Enterprise Technology Leadership Journal Fall 2025.