April 14, 2025
As enterprises move beyond initial experiments with generative AI, establishing robust governance becomes critical for balancing innovation with responsibility. This article explores practical approaches to governing GenAI initiatives, focusing on frameworks and organizational structures that leading companies have implemented successfully.
The power of generative AI brings unique risks that traditional technology governance may not adequately address. Several high-profile incidents illustrate why specialized governance is essential. A few examples that Joe Beutler from OpenAI shared at the recent Enterprise Technology Leadership Summit include:
Brian Scott, Principal Architect at Adobe, frames the fundamental challenge this way: “There’s always this delicate balance of allowing your team to move fast, but also moving fast enough to ensure you’re putting in the right forms of governance.” Companies need frameworks that enable innovation while protecting against risks that could harm customers, damage reputation, or create liability.
Adobe’s approach provides a valuable template for enterprises. Scott describes their “A through F framework” as a systematic way to document and evaluate AI use cases:
Each AI initiative starts by clearly documenting:
This ensures alignment with organizational priorities and helps establish accountability.
The framework then captures:
Internal-facing applications typically present lower risk than customer-facing ones, influencing how governance is applied.
Perhaps most critically, the framework documents:
This allows legal, security, and privacy teams to evaluate data protection requirements and ensure appropriate safeguards.
Finally, the framework captures:
This helps technical teams ensure compatibility and security while identifying potential risks.
Scott explains that this framework “aligns all our stakeholders and allows them to look at this use case through the same pair of glasses.” Rather than each department evaluating AI initiatives through their individual lenses, this creates a common language for cross-functional assessment.
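To make the shape of such an intake record concrete, here is a minimal sketch in Python of how the categories Scott describes (business purpose and ownership, audience, data involved, technology stack) might be captured as a structured document. The field names, enum values, and review rule are illustrative assumptions, not Adobe's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class Audience(Enum):
    INTERNAL = "internal"          # employee-facing, typically lower risk
    CUSTOMER_FACING = "customer"   # external exposure, typically higher risk


@dataclass
class AIUseCaseIntake:
    """Hypothetical intake record mirroring the categories Scott describes."""
    use_case_name: str
    business_purpose: str                 # why the initiative exists
    accountable_owner: str                # who is answerable for it
    audience: Audience                    # internal vs. customer-facing
    data_categories: list[str] = field(default_factory=list)   # e.g. "public docs", "customer PII"
    technology_stack: list[str] = field(default_factory=list)  # models, vendors, hosting

    def requires_privacy_review(self) -> bool:
        """Flag for legal/security/privacy review when sensitive data is involved."""
        sensitive = {"customer PII", "financial", "health"}
        return any(category in sensitive for category in self.data_categories)
```

Capturing the intake as structured data rather than free-form text is what lets later steps, such as risk scoring and reporting, be automated consistently across departments.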
After documenting use cases, organizations need a systematic way to evaluate risk. Adobe implements a scoring system that Scott describes as an “early warning system” that benefits both requesters and reviewers:
This risk assessment typically considers factors such as:
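As an illustration only, an "early warning" score of this kind might weight a handful of flags and route each request to a review tier. The factors, weights, and thresholds below are assumptions made for the sketch, not Adobe's scoring model.

```python
# Hypothetical early-warning risk score: the factors and weights are
# illustrative assumptions, not Adobe's actual model.
RISK_WEIGHTS = {
    "customer_facing": 3,     # external exposure raises the stakes
    "sensitive_data": 4,      # PII, financial, or regulated data involved
    "unapproved_vendor": 2,   # technology not yet on the approved list
    "autonomous_action": 3,   # the system acts without a human in the loop
}

def risk_score(flags: dict[str, bool]) -> tuple[int, str]:
    """Return a numeric score and a coarse tier used to route the review."""
    score = sum(weight for name, weight in RISK_WEIGHTS.items() if flags.get(name))
    tier = "high" if score >= 7 else "medium" if score >= 4 else "low"
    return score, tier

# Example: an internal tool using approved technology and no sensitive data
print(risk_score({"customer_facing": False, "sensitive_data": False}))  # (0, 'low')
```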
As AI adoption grows, governance processes must scale without becoming bottlenecks. Adobe’s approach follows three phases:
In the initial phase, Adobe focused on creating a single intake process. Scott explains that previously “we had literally about three or four or maybe even five different forms that were floating around out there.” Consolidating to one process ensured consistent evaluation.
Key elements include:
As volume grows, efficiency becomes critical. Adobe implemented several strategies:
Scott emphasizes that efficiency doesn’t mean cutting corners: “We try and steer requesters to known approved technology to allow them to move fast through the entire process.”
In the third phase, Adobe focused on identifying patterns and reusable components:
This approach accelerates adoption while maintaining governance. Scott notes they particularly focus on “identifying high-value use cases…as well as help identifying those low-risk use cases to get those through the pipeline a lot faster.”
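One way to picture the "known approved technology" fast path is a registry lookup that routes pre-approved, lower-risk requests to an expedited review. The registry entries and routing rule below are illustrative assumptions, not Adobe's actual process.

```python
# Illustrative registry of pre-approved components; the entries are
# assumptions, not Adobe's approved list.
APPROVED_COMPONENTS = {
    "internal-llm-gateway": {"max_risk_tier": "medium"},
    "vendor-model-x": {"max_risk_tier": "low"},
}

TIER_ORDER = {"low": 0, "medium": 1, "high": 2}

def route_review(component: str, risk_tier: str) -> str:
    """Send pre-approved, lower-risk requests down a fast track; everything else gets full review."""
    entry = APPROVED_COMPONENTS.get(component)
    if entry and TIER_ORDER[risk_tier] <= TIER_ORDER[entry["max_risk_tier"]]:
        return "fast-track"
    return "full-review"

print(route_review("internal-llm-gateway", "low"))  # fast-track
print(route_review("new-unvetted-tool", "low"))     # full-review
```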
Beyond formal governance, organizations must consider how to build trust with those using their AI systems. John Rauser of Cisco emphasizes transparency: when users get AI-generated answers, the system clearly indicates this and encourages feedback.
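A minimal sketch of what that transparency can look like in practice: a response envelope that carries an explicit AI-generated flag and a user-facing disclosure. The structure and field names are assumptions for illustration, not Cisco's implementation.

```python
from dataclasses import dataclass


@dataclass
class AssistantResponse:
    """Hypothetical response envelope that makes AI involvement explicit to the user."""
    answer: str
    ai_generated: bool = True       # surfaced in the UI so users know the source
    model_version: str = "unknown"  # recorded for auditability
    disclosure: str = "This answer was generated by AI. Please rate its accuracy or flag errors."


def render(response: AssistantResponse) -> str:
    """Show the answer together with the disclosure and feedback prompt."""
    label = f"\n\n[{response.disclosure}]" if response.ai_generated else ""
    return response.answer + label
```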
Patrick Debois notes that organizations must be clear about how they handle data: “You need to be crystal clear about how you are treating their data and what you are doing with it.”
Key trust-building practices include:
As GenAI initiatives grow, organizations are adopting different structural approaches:
Some organizations establish dedicated teams responsible for AI governance and enablement. These teams typically:
In her recent ETLS presentation, Paige Bailey described how Google leverages “25 years worth of software engineering telemetry” through centralized expertise to accelerate AI adoption.
Other organizations distribute AI development while maintaining centralized governance. Cisco demonstrates this approach, with John Rauser explaining how they combine:
John Willis highlights potential tension in organizational structures when “CEOs are hiring Chief AI officers and basically telling the CIOs not to slow them down.” He advocates for closer collaboration between AI and IT teams to ensure secure, scalable implementations.
The most effective approach often depends on organizational size and maturity. Smaller organizations may benefit from centralization, while larger enterprises typically need federated models with strong coordination mechanisms.
Formal governance must be complemented by responsible AI practices embedded throughout development and deployment:
Organizations are developing standards for creating effective, safe prompts. Patrick Debois describes how prompt engineering has become “a field on its own” with significant impact on system behavior.
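To make the idea of prompt standards concrete, a team might treat prompts as versioned, reviewed artifacts with policy constraints baked into a shared template rather than ad-hoc strings per project. The template below is a hedged sketch, not a standard Debois describes.

```python
# Hypothetical versioned prompt template with policy constraints baked in.
PROMPT_TEMPLATE_V2 = """\
You are an internal support assistant.
Rules:
- Do not reveal customer personal data.
- If you are unsure, say so and point the user to a human channel.
- Cite the internal document you used for each answer.

Question: {question}
Context: {context}
"""

def build_prompt(question: str, context: str) -> str:
    """Render the shared, reviewed template instead of ad-hoc prompts per team."""
    return PROMPT_TEMPLATE_V2.format(question=question, context=context)
```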
James Wickett, CEO of DryRun Security, explains how AI is transforming security testing by enabling more contextual analysis. Instead of relying on rigid pattern matching and rule-based systems, AI allows security teams to understand code in context.
This approach helps identify security issues that traditional tools might miss while presenting findings in ways developers can easily understand and act upon.
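As a hedged illustration of context-aware review (not DryRun Security's product or API), one could imagine sending a change plus its surrounding code to a model and asking for developer-readable findings; `call_llm` below is a hypothetical stand-in for whatever model interface the team uses.

```python
# `call_llm` is a hypothetical stand-in for the team's model API; the prompt
# shape is an illustrative assumption, not DryRun Security's approach.
def review_change_for_security(diff: str, surrounding_code: str, call_llm) -> str:
    """Ask a model to flag security issues using the change *and* its context."""
    prompt = (
        "Review this code change for security issues. Consider how it is used "
        "in the surrounding code, not just pattern matches.\n\n"
        f"Change:\n{diff}\n\nSurrounding code:\n{surrounding_code}\n\n"
        "List concrete findings with a short explanation a developer can act on."
    )
    return call_llm(prompt)
```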
Continuous improvement requires systematic feedback collection. Anand Raghavan of Cisco emphasizes creating “a tight loop of improving the models and fine-tuning them based on user feedback.”
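A minimal sketch of such a loop, assuming a simple append-only log: each piece of feedback is stored with the model and prompt versions so it can feed later evaluation or fine-tuning runs. The fields and storage format are assumptions, not Cisco's pipeline.

```python
import json
from datetime import datetime, timezone

# Hypothetical feedback log; in practice this would land in a data store
# feeding evaluation and fine-tuning pipelines.
def log_feedback(path: str, model_version: str, prompt_version: str,
                 user_rating: int, corrected_answer: str | None = None) -> None:
    """Append one feedback record so model updates can be driven by real usage."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "prompt_version": prompt_version,
        "user_rating": user_rating,            # e.g. 1-5
        "corrected_answer": corrected_answer,  # optional human correction for fine-tuning
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```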
Adobe’s Brian Scott shares valuable lessons from implementing their governance framework:
Effective governance enables organizations to capture the benefits of generative AI while managing risks. The most successful approaches balance innovation with responsibility through clear frameworks, efficient processes, and appropriate organizational structures.
As Brian Scott advises, the key is to “focus on your MVP and take in that feedback and iterate.” By learning from pioneers like Adobe, enterprises can develop governance that enables responsible innovation while managing risk effectively.
In our next article, we’ll explore how enterprises are implementing GenAI across different business functions and the specific strategies they’re using to maximize impact.