January 12, 2026
The headlines scream about GenAI transforming everything. Tech leaders promise revolutionary productivity gains. Vendors pitch their latest AI-powered solutions. But when Fortune 500 companies actually try to deploy GenAI at scale, most discover something the hype cycle never mentions: buying licenses is the easy part.
According to the recent paper “Lessons from Enterprise GenAI Adoption Journeys,” published in the Fall 2025 Enterprise Technology Leadership Journal, successful GenAI integration requires mastering six critical domains that extend far beyond the technology itself.
Through extensive research and interviews with Fortune 500 companies across diverse industries—from financial services and software development to retail and manufacturing—these six domains have emerged as fundamental to GenAI implementation success. Among organizations that master them, 75% of respondents report positive productivity gains. Those that don’t? They’re the ones quietly killing pilots after six months of disappointing results.
Organizations looking to lead see AI as a technology with tremendous potential. The fear is that waiting for a well-defined adoption path will leave them too far behind to catch up with those more aggressively experimenting and investing at the cutting edge.
The potential gains are too great and technology is moving too fast to sit back and wait. Organizations that successfully integrate GenAI gain dual advantages: enhanced customer value and empowered employees.
By automating mundane, repetitive tasks, businesses free their workforce to focus on higher-value activities requiring creativity, critical thinking, and problem-solving. This strategic reallocation of human capital drives efficiency, streamlines workflows, and reduces operational costs while fostering a culture of innovation and continuous learning.
But here’s what the vendors won’t tell you: None of that happens automatically. You can deploy the most sophisticated GenAI tools available, and they’ll sit unused if you haven’t addressed these six domains.
Domain 1: Governance

What It Is: The structured set of policies, frameworks, roles, and processes that guide responsible, secure, and ethical use of GenAI. It’s your company playbook for ensuring AI systems are safe, compliant, explainable, and aligned to strategic business objectives.
Why It Matters: Without governance, you face two equally destructive outcomes. Too little, and you’re exposed to data privacy breaches, security vulnerabilities, regulatory violations, and intellectual property loss. Too much, and innovation grinds to a halt as legal reviews take four to six weeks while “at the rate that AI is moving, that team probably just gave up.”
How Success Looks: One Fortune 500 financial services company implements contextual risk management. They apply eleven distinct guardrails for customer-facing AI models but only four controls for internal engineering use cases. The key insight? Different AI applications present fundamentally different risk profiles and require correspondingly different governance approaches.
Another organization uses “red content” classification for confidential creative materials with strict restrictions, while allowing flexible approaches for general business data. This nuanced approach prevents blanket restrictions that would eliminate beneficial AI applications while maintaining appropriate protection for sensitive intellectual property.
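Contextual risk management like this can be thought of as a mapping from use-case risk tier to required controls. The sketch below is a hypothetical illustration, assuming invented guardrail names and tiers; only the counts (eleven controls for customer-facing models, four for internal engineering) come from the example above.

```python
# Hypothetical sketch of contextual risk management: the set of guardrails
# scales with the risk profile of each AI use case, instead of one blanket
# policy. Guardrail names and tier names are illustrative, not from the paper.

GUARDRAILS_BY_RISK_TIER = {
    # Customer-facing models carry the highest risk and the most controls.
    "customer_facing": [
        "pii_redaction", "toxicity_filter", "hallucination_check",
        "brand_tone_review", "regulatory_disclosure", "human_escalation",
        "audit_logging", "rate_limiting", "prompt_injection_scan",
        "output_watermarking", "legal_review",
    ],
    # Internal engineering use cases need far fewer controls.
    "internal_engineering": [
        "secret_scanning", "license_check", "audit_logging", "rate_limiting",
    ],
}

def required_guardrails(use_case_tier: str) -> list[str]:
    """Return the guardrails a use case must satisfy before deployment."""
    return GUARDRAILS_BY_RISK_TIER[use_case_tier]
```

The design point is that the lookup key is the use case's risk profile, not the organization as a whole, so low-risk internal work is not burdened with customer-facing controls.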
The Critical Mistake: Delegating governance frameworks to junior staff without senior engagement. As one technology leader explained: “When work flows down an organization, you’re dealing with people who aren’t empowered enough to challenge the framework and say no, hang on, common sense has to override the letter of the law.”
Senior stakeholders must remain engaged—not approving every decision, but ensuring governance structures include clear escalation paths and regular engagement so frameworks remain aligned with business objectives and risk tolerance.
Domain 2: Tooling

What It Is: Identifying, evaluating, and implementing the optimal mix of tools, platforms, applications, and services that align with strategic business objectives and user needs while meeting technical requirements, security standards, and governance needs.
Why It Matters: The approach to selecting, rolling out, and governing GenAI tools significantly impacts adoption, usage, and ultimately the value your organization derives. One enterprise technology leader noted: “We’re adding fifteen tools a week to that list.” Without strategic curation, you face tool sprawl, which leads to increased complexity, fragmented workflows, security vulnerabilities, and escalating costs.
How Success Looks: Successful organizations take a persona-based approach. As one VP of Digital Employee Experience explained, what a senior business executive needs differs from what a mid-level software engineer needs. Even similar roles might have different needs—one technology leader noted: “We do see differences in acceptance rates based on language and framework.”
Vanguard’s approach exemplifies this. Their Senior Manager of Emerging Technology asked a deceptively simple question: “What’s the most annoying thing about your job?” This collaborative approach helped identify which tools would resonate with specific user groups.
Another Fortune 500 company took the opposite approach—decisive focus on specific tools: “We’re into this tool that is selected and right now that’s how it is going to be.” This strategic concentration allows more effective governance, measurement, and adoption by building critical mass around specific tools.
The Balancing Act: It’s still very early days in GenAI tooling development. Premature optimization or overly rigid lock-in can be detrimental in the long term. If you implement selective tooling, use light coupling to enable pivots when the inevitable “better tool” comes along.
Domain 3: Measurement

What It Is: Frameworks that track what truly matters—technical performance metrics (model acceptance rates and usage frequency), business impact indicators (productivity gains and quality improvements), and user experience measures (adoption rates and satisfaction).
Why It Matters: You can’t improve what you don’t measure. But measuring the wrong things is worse than not measuring at all. Traditional productivity metrics often miss AI’s true impact, and poorly designed measurement systems can actually discourage beneficial AI adoption.
How Success Looks: Leading organizations track technical performance, business impact, and user experience simultaneously:
One global financial institution’s AI-powered help desk reduced ticket resolution time by 65%. They track not just resolution time but satisfaction scores: “Our satisfaction scores have never been higher. The AI handles over 10,000 routine inquiries weekly that previously kept our team from tackling strategic projects.”
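A measurement framework spanning those three dimensions can be captured in a simple scorecard. The field names below are illustrative assumptions, not from the paper; only the 65% reduction figure echoes the example above.

```python
from dataclasses import dataclass

@dataclass
class GenAIScorecard:
    """One record tracking all three measurement dimensions together."""
    # Technical performance
    acceptance_rate: float        # share of AI suggestions accepted (0-1)
    weekly_active_users: int
    # Business impact
    avg_resolution_minutes: float
    inquiries_automated_weekly: int
    # User experience
    satisfaction_score: float     # e.g. average of a 1-5 survey

def resolution_time_reduction(before_minutes: float, after_minutes: float) -> float:
    """Percent reduction in resolution time, as in the 65% example above."""
    return round((before_minutes - after_minutes) / before_minutes * 100, 1)
```

Keeping all three dimensions in one record makes it harder to report a flattering technical metric while adoption or satisfaction quietly lags.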
The Critical Insight: Measurement isn’t just about proving ROI—it’s about continuous learning and iteration. Successful organizations use metrics to identify where AI delivers value and where it falls short, then adjust their approach accordingly.
Domain 4: Financial Planning

What It Is: Understanding the full cost picture across people, process, and technology—including model development, infrastructure, specialized talent acquisition, and the ongoing maintenance of AI systems.
Why It Matters: Initial license costs are deceptively small compared to the total cost of ownership. Organizations that focus solely on subscription fees get blindsided by infrastructure costs, token usage at scale, specialized talent requirements, and ongoing maintenance overhead.
How Success Looks: Mature organizations model costs across multiple dimensions, including licensing, infrastructure, talent, and ongoing maintenance.
One mid-sized SaaS company discovered that its GitHub Copilot investment delivered a 3x ROI within six months—but only after accounting for reduced debugging time, faster onboarding, and code quality improvements that resulted in lower downstream maintenance costs.
The Planning Imperative: Build financial models that account for experimentation costs, failed pilots, and the reality that not every AI initiative will deliver measurable value. Organizations that allocate 20-30% of their AI budget to “learning investments” that may not pan out are able to maintain momentum even when specific initiatives fail.
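The arithmetic behind the planning advice above can be sketched in a few lines. All dollar figures and function names here are hypothetical; only the shape of the calculation (total cost of ownership beyond licenses, a simple ROI multiple, and a 20-30% learning reserve) comes from the text.

```python
def total_cost_of_ownership(licenses: float, infrastructure: float,
                            talent: float, maintenance: float) -> float:
    """License fees are only one slice of the real cost."""
    return licenses + infrastructure + talent + maintenance

def simple_roi(value_delivered: float, total_cost: float) -> float:
    """ROI as a multiple: value delivered per dollar of total cost."""
    return value_delivered / total_cost

def learning_budget(total_ai_budget: float, fraction: float = 0.25) -> float:
    """Reserve for experiments that may not pan out (20-30% is cited)."""
    return total_ai_budget * fraction

# Hypothetical numbers: a $20k license line hides a $100k true cost,
# so ROI must be judged against the full figure, not the subscription fee.
tco = total_cost_of_ownership(licenses=20_000, infrastructure=30_000,
                              talent=40_000, maintenance=10_000)
```

Judging the 3x ROI anecdote against license fees alone would have overstated the return fivefold in this sketch; the accounting only works once the full cost base is in the denominator.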
Domain 5: Workforce Enablement

What It Is: Building comprehensive approaches that combine technical training with psychological support, leadership modeling, and opportunities for experimentation.
Why It Matters: Technology alone isn’t enough. Without proper enablement, even the best AI tools sit unused or misused. As one senior engineer admitted: “I was skeptical at first, but now I can’t imagine coding without it”—but getting from skepticism to adoption requires intentional enablement.
How Success Looks: Successful workforce enablement operates on multiple levels:
Technical Training: Not just “here’s how to use the tool” but understanding when to use AI, how to evaluate its outputs, and how to integrate it into workflows effectively.
Psychological Safety: Creating environments where people can experiment with AI, make mistakes, and learn without fear of judgment or job loss concerns.
Leadership Modeling: When executives and senior engineers visibly use AI tools and share their experiences—both successes and failures—it signals organizational commitment and normalizes experimentation.
Experimentation Opportunities: One Fortune 500 software company describes “blooming 1000 flowers”—allowing broad, distributed experimentation with minimal constraints while maintaining appropriate risk management.
The Recognition Factor: Organizations seeing 75% positive productivity gains implement comprehensive appropriate-use policies. These policies establish guardrails that enable experimentation while addressing risk, creating the conditions for successful enterprise-wide adoption.
Domain 6: Organizational Change

What It Is: Rewiring how the organization works, learns, and is led to fully leverage GenAI’s transformative potential. This encompasses restructuring workflows, creating new collaboration models, establishing new performance metrics, and fundamentally transforming organizational culture.
Why It Matters: AI adoption inevitably collides with the established social circuitry of the enterprise. Organizations that fail to address the cultural and structural aspects of AI adoption see poor adoption rates despite significant technology investments. One technology leader noted how engineers naturally began using AI during incident response rather than following prescribed implementations—the most valuable applications emerge organically from practitioners solving real problems.
How Success Looks: Successful organizational change creates a comprehensive social architecture that normalizes AI usage:
Environmental Reinforcement: “When you walk into a [company] office, there’s signage promoting the use of AI” combined with “folks doing interviews where they’re talking about how AI has brought so much productivity.”
Communities of Practice: “Different product teams can get together and share what they’re building.” Peer-to-peer knowledge sharing accelerates adoption more effectively than top-down mandates.
Distributed Ownership: Rather than centralized change management, mature organizations distribute responsibility to corporate functions—such as commercial, supply chain, and research teams—that better understand whether use cases drive value in their specific areas.
Strategic Framing: Organizations achieve better results by framing AI adoption as both an opportunity and a safety improvement rather than a defensive necessity. This positive framing reduces defensive reactions while naturally producing solutions delivering both safety and speed improvements.
The HR Blind Spot: One Fortune 500 financial services leader revealed a concerning disconnect: no one was hearing HR’s voice in AI transformation conversations. Organizations that fail to engage HR in addressing talent transformation gaps—role changes, skill requirements, career path evolution—undermine successful AI adoption even when technology implementations succeed.
Here’s what the vendors, consultants, and conference keynotes won’t emphasize: These six domains are interdependent. You can’t just nail governance and ignore workforce enablement. You can’t have great tooling but terrible measurement systems. You can’t drive organizational change without financial planning supporting the transformation.
One technology leader captured this reality: “We had such a small legal team doing all the reviews. It would take four to six weeks to get approval and at the rate AI is moving, that team probably just gave up…we don’t even care about the use case anymore because we waited too long.”
That’s what happens when you optimize one domain—in this case, governance—without considering the others. The governance process was technically sound, but it existed in isolation from the reality of AI’s pace of change, the organization’s capacity to review at scale, and the cultural dynamics of maintaining momentum.
Success requires treating these domains as an integrated system, not a checklist. Ask yourself:
Governance: Do we have contextual risk management that adapts controls to specific use cases, or blanket policies killing beneficial experimentation?
Tooling: Are we taking persona-based approaches matching tools to actual user needs, or pushing one-size-fits-all solutions?
Measurement: Are we tracking technical performance, business impact, AND user experience simultaneously?
Financial Planning: Do our models account for the total cost of ownership, including human capital and opportunity costs?
Workforce Enablement: Are we providing technical training, psychological safety, leadership modeling, and experimentation opportunities?
Organizational Change: Are we rewiring workflows, creating communities of practice, and distributing ownership appropriately?
Organizations that master these domains don’t just adopt GenAI—they transform how they work, compete, and innovate. Those that treat AI adoption as purely a technology purchase? They’re the ones whose pilots quietly die after six months, leaving executives wondering why their AI investments didn’t deliver promised returns.
The hype cycle will continue promising revolutionary transformation. The reality is more nuanced: Revolutionary outcomes require revolutionary approaches to adoption—approaches spanning governance, tooling, measurement, finance, workforce enablement, and organizational change.
The question isn’t whether GenAI will transform your industry. It’s whether you’ll master these six domains before your competitors do.
This blog post is based on “Lessons from Enterprise GenAI Adoption Journeys” by Adam Zimman, Brian Scott, Jason Clinton, Jason Cox, and Jeff Gallimore, published in the Enterprise Technology Leadership Journal Fall 2025.
Managing Editor at IT Revolution working on publishing books and guidance papers for the modern business leader. I also oversee the production of the IT Revolution blog, combining the best of responsible, human-centered content with the assistance of AI tools.