
September 25, 2025

AI’s Mirror Effect: How the 2025 DORA Report Reveals Your Organization’s True Capabilities

By Leah Brown

The 2025 DORA State of AI-assisted Software Development report delivers a sobering reality check for technology leaders rushing to implement AI solutions. Drawing on survey responses from nearly 5,000 professionals worldwide, the research reveals a fundamental truth: AI doesn’t create organizational excellence; it amplifies what already exists. For high-performing organizations with solid foundations, AI becomes a powerful accelerator. For those with dysfunctional systems, it magnifies chaos.

This finding presents a crucial opportunity for technology executives to understand AI’s true potential. While 90% of organizations have now adopted AI in their software development processes (a 14-percentage-point increase from last year), the benefits aren’t automatically flowing to organizational performance. The report’s central insight is that AI functions as both mirror and multiplier, reflecting an organization’s true capabilities while amplifying its existing strengths and weaknesses.

The Systems Problem Behind AI Adoption

The research demolishes the notion that AI adoption is simply a tools problem. Instead, it reveals AI success as fundamentally a systems problem requiring organizational transformation. This aligns with patterns we’ve seen before: organizations that simply moved to cloud infrastructure without rethinking architecture saw limited returns, while those that restructured their applications, teams, and operations unlocked real value.

The DORA research powerfully validates what Gene Kim and Dr. Steven J. Spear discovered in their Shingo Award-winning book Wiring the Winning Organization: the decisive factor in high-performing enterprises isn’t technology, resources, or even talent—it’s organizational wiring that enables innovation, excellence, and greatness to flourish.

The same principle applies to AI. The DORA team identified seven foundational capabilities that amplify AI’s positive impact on performance. These capabilities—ranging from clear AI policies to healthy data ecosystems—are all team and organization-level factors. This represents a critical shift from focusing on individual AI tool usage to designing the systems that enable AI success.

The seven AI capabilities that emerged from the research are:

  1. Clear and communicated AI stance: Organizations need explicit policies about AI tool usage, not ambiguous guidelines that leave developers uncertain about acceptable practices.
  2. Healthy data ecosystems: High-quality, accessible, and unified internal data dramatically amplifies AI’s organizational impact.
  3. AI-accessible internal data: Connecting AI tools to internal documentation, codebases, and decision logs improves output quality and developer effectiveness (see the sketch following this list).
  4. Strong version control practices: With AI accelerating code generation, mature version control habits become even more critical for managing the increased volume and velocity of changes.
  5. Working in small batches: This long-standing DORA capability proves especially valuable for AI-assisted teams, improving product performance while reducing friction.
  6. User-centric focus: Perhaps most critically, teams without a user-centric focus actually experience negative impacts from AI adoption, while those with strong user focus see amplified benefits.
  7. Quality internal platforms: High-quality platforms serve as the distribution layer that scales individual AI productivity gains into organizational improvements.
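
To make the third capability concrete, here is a minimal sketch of what connecting AI tooling to internal data can look like. Everything in it is an assumption for illustration: the naive keyword retrieval, the internal-docs directory, and the prompt format all stand in for the embeddings-based retrieval and model endpoints a real team would choose.

```python
from pathlib import Path

def retrieve_internal_context(query: str, docs_dir: str, max_docs: int = 3) -> str:
    """Naive keyword retrieval over internal docs (ADRs, runbooks, decision logs).
    A production system would use embeddings and a vector store; this is
    illustrative only."""
    query_terms = set(query.lower().split())
    scored = []
    for path in Path(docs_dir).glob("**/*.md"):
        text = path.read_text(encoding="utf-8")
        score = sum(text.lower().count(term) for term in query_terms)
        if score > 0:
            scored.append((score, path.name, text))
    scored.sort(reverse=True)
    return "\n\n".join(f"## {name}\n{text}" for _, name, text in scored[:max_docs])

def build_prompt(task: str, docs_dir: str) -> str:
    """Prepend company-specific context so the model's output reflects internal
    conventions and past decisions rather than generic training data."""
    context = retrieve_internal_context(task, docs_dir)
    return f"Internal context:\n{context}\n\nTask:\n{task}"

# Hypothetical usage: the path and task are placeholders.
prompt = build_prompt(
    task="Add retry logic to the payments client per our resilience standards",
    docs_dir="./internal-docs",
)
```

The design point is the second function: the value comes less from the retrieval mechanics than from routinely putting internal context in front of the model at all.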

The AI Adoption Landscape: Universal but Uneven

The 2025 research reveals remarkable adoption rates. Ninety percent of survey respondents now use AI as part of their work, with the median user having 16 months of experience. Respondents report spending a median of two hours per workday interacting with AI—representing about one-quarter of an eight-hour workday.

However, adoption breadth doesn’t equal adoption depth. Only 7% of AI users report “always” using AI when faced with a problem, while 39% only “sometimes” seek AI assistance. This suggests that while AI has become widespread, it hasn’t yet become reflexive for most developers.

The most common AI use case remains writing new code, with 71% of code writers using AI assistance. But AI usage spans a much broader range of activities: 68% use it for literature reviews, 66% for modifying existing code and proofreading, and 62% for debugging and explaining concepts.

More than 80% of respondents perceive that AI has increased their productivity, and 59% observe positive impacts on code quality. Yet a notable 30% report little to no trust in AI-generated code, indicating the need for critical validation skills—what the researchers call a healthy “trust but verify” approach.

The Instability Challenge: Speed Without Safety

Perhaps the most concerning finding is AI’s continued association with increased software delivery instability. While AI adoption now positively correlates with throughput—a reversal from 2024’s findings—instability remains elevated. This pattern suggests that while teams are adapting for speed, their underlying systems haven’t evolved to safely manage AI-accelerated development.

The research tested whether AI’s speed gains might offset instability’s negative impacts, a “fail fast, fix fast” hypothesis. The data doesn’t support it: instability continues to harm crucial outcomes like product performance and burnout, potentially negating perceived throughput gains.

This challenge reflects what Gene Kim and Steve Yegge explore in their forthcoming Vibe Coding book—when AI dramatically accelerates software development, control systems must also speed up. Organizations need faster feedback loops, better version control practices, and more robust safety nets to handle AI-generated code volumes safely.

Platform Engineering: The Foundation for AI Success

The report’s findings on platform engineering provide crucial context for AI initiatives. With a 90% adoption rate and 76% of organizations now having dedicated platform teams, platform engineering has moved from experimental to essential. More importantly, the research demonstrates that a high-quality internal platform is a key enabler for magnifying AI’s effects on organizational performance.

This connection makes intuitive sense. AI adoption without corresponding platform investment often results in localized productivity gains that get absorbed by downstream bottlenecks. A well-designed platform provides the necessary guardrails and shared capabilities that allow AI benefits to scale effectively across the organization.

The data reveals an interesting trade-off that technology leaders should understand. High-quality platforms correlate with slight increases in software delivery instability—a pattern the researchers interpret as “risk compensation.” Organizations with strong platforms can afford to experiment more and accept higher rates of minor failures because they can recover quickly. This represents a mature approach to risk management that enables innovation while maintaining overall system reliability.

The research also identifies an experience gap in platform capabilities. Core technical capabilities like security and reliability are perceived as well-provided, while user experience features like feedback responsiveness and task automation lag behind. This suggests many platforms are built technology-first rather than user-first.

Value Stream Management as AI’s Force Multiplier

The report validates value stream management (VSM) as a critical practice for maximizing AI investments. Organizations with mature VSM practices see dramatically amplified benefits from AI adoption on organizational performance. VSM provides the systems-level view necessary to ensure AI gets applied to actual constraints rather than just accelerating already-fast processes.

Without VSM, AI risks creating what the researchers call “localized pockets of productivity” that are absorbed by downstream chaos. Teams might generate code faster, but if testing, review, or deployment processes can’t handle the increased volume, the overall system gains nothing.

This finding aligns with the Flow Engineering methodology developed by Steve Pereira and Andrew Davis, which provides practical frameworks for mapping and improving value streams. VSM acts as a force multiplier for AI investments by ensuring that individual improvements translate into broader organizational advantages rather than creating more downstream chaos.

The Persistence of Organizational Dysfunction

One of the report’s most sobering findings concerns what hasn’t changed with AI adoption. Despite significant productivity gains at the individual level, AI shows no measurable impact on workplace friction or developer burnout. This persistence suggests these challenges run deeper than individual productivity and are embedded in organizational systems and culture.

The research indicates that friction remains unaffected because it’s often a product of processes beyond the individual developer. Microsoft’s 2019 research identified process issues like unstable systems, outdated documentation, administrative workload, and time pressure as primary sources of friction. Even if AI reduces friction for individual coding tasks, inefficient organizational processes can negate those benefits.

Similarly, burnout’s resistance to AI solutions reflects its roots in organizational culture rather than individual productivity. Burnout correlates strongly with leadership quality, priority stability, and generative cultures, factors that remain unchanged by developer tools. Some organizations are even experiencing work intensification, where perceived productivity gains from AI invite higher output expectations, leaving the balance between demands and resources effectively unchanged.

This validates the core insight from Wiring the Winning Organization by Kim and Spear: organizational performance is determined by social circuitry—the processes, procedures, routines, and norms—not individual capabilities or tools.

Team Performance Profiles: Beyond Simple Metrics

The research introduces seven distinct team performance profiles, moving beyond simple metrics to capture the complex interplay between performance, stability, and well-being:

  • Harmonious high-achievers (20%): Excel across all dimensions in a virtuous cycle of stable, low-friction environments enabling high-quality, sustainable work.
  • Pragmatic performers (20%): Deliver impressive speed and stability but haven’t reached peak engagement.
  • Stable and methodical (15%): Deliberate artisans delivering high-quality work at a sustainable pace.
  • Constrained by process (17%): Trapped on a treadmill where inefficient processes consume effort despite stable systems.
  • Legacy bottleneck (11%): Constantly reactive, where unstable systems dictate work and undermine morale.
  • High impact, low cadence (7%): Produce high-impact work but with low throughput and high instability.
  • Foundational challenges (10%): Stuck in survival mode with significant gaps across processes, environment, and outcomes.

This framework provides a more nuanced understanding than traditional software delivery metrics alone. A team might achieve high throughput while burning out or maintain stability while stuck on legacy systems. The profiles help organizations apply targeted interventions rather than one-size-fits-all solutions.

The Critical Importance of User Focus

Perhaps the most striking finding is how user-centric focus determines AI’s impact on team performance. With high certainty, the research shows that teams with a strong user focus see amplified benefits from AI adoption. Conversely, teams without a user-centric focus actually experience negative impacts from AI adoption.

This finding provides a crucial warning: in the absence of a user-centric focus that prioritizes meeting end-user needs, AI adoption can harm team performance. (Check out the upcoming book Progressive Delivery for ways to incorporate the user into the traditional SDLC.) Organizations encouraging AI adoption must incorporate rich understanding of their end users, their goals, and their feedback into product roadmaps and strategies.

The AI Capabilities Model: Seven Pillars of Success

The inaugural DORA AI Capabilities Model identifies seven foundational capabilities that consistently amplify AI’s benefits:

  1. Clear and communicated AI stance moderates AI’s impact on individual effectiveness, organizational performance, friction, and throughput. Organizations need explicit policies that encourage experimentation while providing clear boundaries.
  2. Healthy data ecosystems amplify AI’s impact on organizational performance. When internal data is high-quality, accessible, and unified, AI tools can provide more relevant, contextual assistance.
  3. AI-accessible internal data amplifies benefits for individual effectiveness and code quality. Connecting AI to internal repositories, documentation, and decision logs dramatically improves output relevance.
  4. Strong version control practices become even more critical in AI-assisted development. Frequent commits amplify AI’s impact on individual effectiveness, while robust rollback capabilities improve team performance when dealing with AI-generated code volumes (a minimal workflow sketch follows this list).
  5. Working in small batches amplifies AI’s benefits for product performance and reduces friction, though it slightly reduces individual effectiveness gains—suggesting AI’s productivity benefits come partly from generating large code volumes.
  6. User-centric focus is perhaps most critical, amplifying team performance benefits while preventing negative impacts. Without this focus, AI adoption can actually harm teams.
  7. Quality internal platforms amplify organizational performance benefits while potentially increasing individual friction—reflecting platforms’ role as both enabler and constraint in maintaining standards.
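
As a concrete illustration of the fourth capability, here is a minimal workflow sketch: small, frequent checkpoints for AI-assisted changes, each with a clean path to revert. It assumes a git repository and uses only standard git commands; the helper names, commit message, and placeholder test result are hypothetical.

```python
import subprocess

def run_git(*args: str) -> str:
    """Run a git command and return its stdout; raises if the command fails."""
    result = subprocess.run(
        ["git", *args], check=True, capture_output=True, text=True
    )
    return result.stdout.strip()

def checkpoint(message: str) -> str:
    """Commit the working tree as one small, revertable checkpoint. Frequent
    checkpoints keep each AI-assisted change small enough to review and cheap
    to throw away. Raises if there is nothing to commit."""
    run_git("add", "--all")
    run_git("commit", "--message", message)
    return run_git("rev-parse", "HEAD")

def roll_back(commit_sha: str) -> None:
    """Undo a single checkpoint with a new revert commit, without rewriting
    history."""
    run_git("revert", "--no-edit", commit_sha)

# Hypothetical flow: checkpoint after each AI-assisted edit, then revert
# cleanly if validation fails.
sha = checkpoint("AI-assisted: extract retry helper in payments client")
tests_passed = False  # stand-in for a real test run
if not tests_passed:
    roll_back(sha)
```

The habit the sketch encodes, committing after every AI-assisted change rather than after every feature, is what keeps rollback cheap when code volume rises.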

Implications for Technology Leaders

The report’s findings translate into several actionable insights:

  • Treat AI adoption as an organizational transformation, not tool procurement. The greatest returns come from investing in foundational systems—platforms, data ecosystems, and engineering disciplines—that amplify AI’s benefits rather than simply purchasing licenses.
  • Establish clear AI policies before widespread adoption. Ambiguity around AI usage creates both conservative underutilization and risky overuse. Clear, communicated stances provide psychological safety for effective experimentation.
  • Invest in data infrastructure as a strategic asset. The quality, accessibility, and integration of internal data significantly amplify AI’s organizational impact. This requires going beyond generic foundation models to provide AI tools with company-specific context.
  • Strengthen version control and safety nets. AI-accelerated development requires more robust practices, not fewer. The ability to quickly roll back or revert changes becomes critical when dealing with higher volumes of AI-generated code.
  • Map your value streams before scaling AI. Without systems-level understanding, AI adoption risks creating localized optimizations that don’t improve overall performance.
  • Maintain user focus as your North Star. Without a user-centric focus, AI adoption can harm team performance. User needs must guide AI-assisted development toward appropriate goals.

The Path Forward

The 2025 DORA report makes clear that AI’s transformative potential in software development remains largely unrealized. While individual productivity gains are real and widespread, translating these into organizational advantages requires intentional system-level changes. Organizations that treat AI adoption as a transformation opportunity—investing in the capabilities that amplify its benefits while addressing the systemic issues that limit them—will separate themselves from those that simply deploy tools and hope for results.

For technology leaders, the question isn’t whether to adopt AI—it’s whether to invest in becoming the kind of organization that can truly benefit from it. The research provides both a roadmap and a reality check: AI can revolutionize software development, but only for organizations willing to build the systems, cultures, and practices that allow it to flourish.

The mirror that AI holds up to our organizations shows us exactly what we are—strengths, weaknesses, and all. The choice is what we do with that reflection.

About the Author

Leah Brown

Managing Editor at IT Revolution working on publishing books and guidance papers for the modern business leader. I also oversee the production of the IT Revolution blog, combining the best of responsible, human-centered content with the assistance of AI tools.
