
April 14, 2025

Governance and Organization: Creating the Framework for Responsible GenAI

By Leah Brown

As enterprises move beyond initial experiments with generative AI, establishing robust governance becomes critical for balancing innovation with responsibility. This article explores practical approaches to governing GenAI initiatives, focusing on frameworks and organizational structures that leading companies have implemented successfully.

Why GenAI Governance Matters

The power of generative AI brings unique risks that traditional technology governance may not adequately address. Several high-profile incidents illustrate why specialized governance is essential. A few examples that Joe Beutler from OpenAI shared at the recent Enterprise Technology Leadership Summit include:

  • A Chevrolet dealership’s chatbot agreed to sell a car for $1, creating a public relations challenge when the customer tried to hold the company to this “legally binding offer.”
  • Air Canada’s chatbot invented a non-existent travel policy, which courts later ruled the company must honor.
  • Google’s Gemini model faced backlash for generating historically inaccurate images.

Brian Scott, Principal Architect at Adobe, frames the fundamental challenge this way: “There’s always this delicate balance of allowing your team to move fast, but also moving fast enough to ensure you’re putting in the right forms of governance.” Companies need frameworks that enable innovation while protecting against risks that could harm customers, damage reputation, or create liability.

Building a Practical Governance Framework

Adobe’s approach provides a valuable template for enterprises. Scott describes their “A through F framework” as a systematic way to document and evaluate AI use cases:

1. Team and Purpose

Each AI initiative starts by clearly documenting:

  • Which team is requesting the capability?
  • What business objective are they trying to achieve?

This ensures alignment with organizational priorities and helps establish accountability.

2. Users and Access

The framework then captures:

  • Who will use the system (internal employees vs. external customers)?
  • What permissions and access controls are needed?

Internal-facing applications typically present lower risk than customer-facing ones, influencing how governance is applied.

3. Data Handling

Perhaps most critically, the framework documents:

  • What data will be fed into the AI system, and what is its sensitivity level?
  • What outputs will be produced, and what is their classification?

This allows legal, security, and privacy teams to evaluate data protection requirements and ensure appropriate safeguards.

4. Technology Stack

Finally, the framework captures:

  • Which models and tools will be used?
  • How will they be integrated with existing systems?

This helps technical teams ensure compatibility and security while identifying potential risks.

Scott explains that this framework “aligns all our stakeholders and allows them to look at this use case through the same pair of glasses.” Rather than each department evaluating AI initiatives through their individual lenses, this creates a common language for cross-functional assessment.
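
One way to picture such a shared intake document is as a single structured record that every stakeholder reviews. The sketch below is a hypothetical illustration of the four areas described above; the field names, enums, and defaults are assumptions, not Adobe's actual form.

```python
from dataclasses import dataclass, field
from enum import Enum


class Audience(Enum):
    INTERNAL = "internal"          # employees only
    EXTERNAL = "external"          # customer-facing


class Sensitivity(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"


@dataclass
class UseCaseIntake:
    """One record per proposed GenAI use case, covering the four areas above."""
    # 1. Team and purpose
    requesting_team: str
    business_objective: str
    # 2. Users and access
    audience: Audience
    access_controls: list[str] = field(default_factory=list)
    # 3. Data handling
    input_sensitivity: Sensitivity = Sensitivity.INTERNAL
    output_classification: Sensitivity = Sensitivity.INTERNAL
    # 4. Technology stack
    models_and_tools: list[str] = field(default_factory=list)
    integrations: list[str] = field(default_factory=list)
```

Because a legal or privacy reviewer reads the same fields a platform engineer reads, a record like this is one way to get the "same pair of glasses" effect Scott describes.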

Risk Assessment: Creating an “Early Warning System”

After documenting use cases, organizations need a systematic way to evaluate risk. Adobe implements a scoring system that Scott describes as an “early warning system” that benefits both requesters and reviewers:

  1. For Teams Proposing AI Projects: The scoring helps them understand how complex the approval process might be and set realistic timelines. “This really helps them understand how long it is going to take for my use case to get fully reviewed,” notes Scott.
  2. For Governance Teams: The scoring helps prioritize which cases need deeper review. “From the folks dealing with responsibilities such as legal, security, and privacy, it really allows them to kind of say, ‘Okay, we understand this is going to be either a low or a high risk,’ and now they can change up their questions.”

This risk assessment typically considers factors such as:

  • Audience risk (internal vs. external users)
  • Data sensitivity (public information vs. confidential data)
  • Decision impact (informational vs. actionable outputs)
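
To illustrate how factors like these might feed an early-warning score, here is a minimal, hypothetical rubric; the weights, factor values, and threshold are assumptions, not Adobe's actual scoring model.

```python
def risk_score(audience: str, data_sensitivity: str, decision_impact: str) -> int:
    """Illustrative early-warning score: a higher total signals a longer, deeper review."""
    score = 0
    if audience == "external":              # customer-facing carries more risk than internal
        score += 3
    if data_sensitivity == "confidential":  # vs. "public" or "internal" data
        score += 3
    if decision_impact == "actionable":     # vs. purely informational outputs
        score += 2
    return score


def review_track(score: int) -> str:
    """Map a score to a review lane (the threshold is an assumption)."""
    return "fast lane" if score <= 2 else "full cross-functional review"


# Example: an internal, informational assistant over public data scores 0 -> fast lane.
print(review_track(risk_score("internal", "public", "informational")))
```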

Scaling Governance Efficiently

As AI adoption grows, governance processes must scale without becoming bottlenecks. Adobe’s approach follows three phases:

1. Establish the Foundation

In the initial phase, Adobe focused on creating a single intake process. Scott explains that previously “we had literally about three or four or maybe even five different forms that were floating around out there.” Consolidating to one process ensured consistent evaluation.

Key elements include:

  • A standardized intake form that captures all necessary information.
  • Clear documentation of approval criteria and processes.
  • Regular feedback loops to improve the process.

2. Optimize for Efficiency

As volume grows, efficiency becomes critical. Adobe implemented several strategies:

  • Prioritizing use cases that leverage approved technologies to speed review.
  • Implementing “fast lanes” for lower-risk scenarios.
  • Creating templates and patterns for common use cases.

Scott emphasizes that efficiency doesn’t mean cutting corners: “We try and steer requesters to known approved technology to allow them to move fast through the entire process.”
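
A hypothetical sketch of the "known approved technology" idea: keep a vetted allow-list and expedite requests that stay within it. The component names below are invented for illustration.

```python
# Hypothetical allow-list of components that have already passed review.
APPROVED_TECHNOLOGIES = {"internal-llm-gateway", "approved-vector-store", "sso-auth"}


def qualifies_for_fast_lane(proposed_stack: set[str]) -> bool:
    """A request qualifies for expedited review when every proposed component is already vetted."""
    return proposed_stack <= APPROVED_TECHNOLOGIES


print(qualifies_for_fast_lane({"internal-llm-gateway", "sso-auth"}))       # True
print(qualifies_for_fast_lane({"internal-llm-gateway", "new-saas-tool"}))  # False
```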

3. Scale and Amplify

In the third phase, Adobe focused on identifying patterns and reusable components:

  • Developing reference architectures for common use cases.
  • Creating pre-approved patterns teams can adopt.
  • Building a repository of successful implementations.

This approach accelerates adoption while maintaining governance. Scott notes they particularly focus on “identifying high-value use cases…as well as help identifying those low-risk use cases to get those through the pipeline a lot faster.”

Building Trust with Users and Customers

Beyond formal governance, organizations must consider how to build trust with those using their AI systems. John Rauser of Cisco emphasizes transparency: when users get AI-generated answers, the system clearly indicates this and encourages feedback.

Patrick Debois notes that organizations must be clear about how they handle data: “You need to be crystal clear about how you are treating their data and what you are doing with it.”

Key trust-building practices include:

  • Clear labeling of AI-generated content
  • Transparency about data usage and limitations
  • User controls, including feedback mechanisms and overrides
  • Documentation of AI principles and practices
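
As a purely illustrative example of the first three practices, an application could attach provenance and feedback fields to every AI-generated answer; the field names below are assumptions, not any vendor's API.

```python
from dataclasses import dataclass, field


@dataclass
class AssistantAnswer:
    """Response envelope that labels AI-generated content and invites user feedback."""
    text: str
    ai_generated: bool = True                         # clear labeling of AI-generated content
    disclosure: str = "This answer was generated by an AI assistant; verify before acting."
    sources: list[str] = field(default_factory=list)  # transparency about the data behind the answer
    feedback_url: str = "/feedback"                   # user control: a channel to rate or flag answers
```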

Organizational Structures for AI Governance

As GenAI initiatives grow, organizations are adopting different structural approaches:

1. Centralized AI Centers of Excellence

Some organizations establish dedicated teams responsible for AI governance and enablement. These teams typically:

  • Define standards and best practices
  • Evaluate and approve technologies
  • Provide training and support
  • Build common platform capabilities

In her recent ETLS presentation, Paige Bailey described how Google leverages “25 years worth of software engineering telemetry” through centralized expertise to accelerate AI adoption.

2. Federated Models with Shared Governance

Other organizations distribute AI development while maintaining centralized governance. Cisco demonstrates this approach, with John Rauser explaining how they combine:

  • Product-aligned AI teams focused on specific use cases
  • Enterprise-wide platform teams providing common capabilities
  • Shared standards and governance

3. Hybrid Approaches

John Willis highlights potential tension in organizational structures when “CEOs are hiring Chief AI officers and basically telling the CIOs not to slow them down.” He advocates for closer collaboration between AI and IT teams to ensure secure, scalable implementations.

The most effective approach often depends on organizational size and maturity. Smaller organizations may benefit from centralization, while larger enterprises typically need federated models with strong coordination mechanisms.

Responsible AI Practices Beyond Governance

Formal governance must be complemented by responsible AI practices embedded throughout development and deployment:

1. Prompt Engineering Guidelines

Organizations are developing standards for creating effective, safe prompts. Patrick Debois describes how prompt engineering has become “a field on its own” with significant impact on system behavior.
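
As one hypothetical example of what such a guideline might encode, a team could keep system prompts in reviewed, versioned templates that state scope and refusal behavior explicitly; the wording and company name below are invented for illustration.

```python
# Hypothetical reviewed prompt template. Governance teams can version and approve
# templates like this instead of letting each application hand-roll its own prompt.
SUPPORT_SYSTEM_PROMPT = """\
You are a customer-support assistant for ACME Corp.
- Answer only questions about ACME products and published policies.
- If you are not certain of an answer, say so and offer to escalate to a human agent.
- Never quote prices, discounts, or contractual terms; direct those questions to sales.
"""


def build_messages(user_question: str) -> list[dict]:
    """Assemble chat messages from the approved template plus the user's question."""
    return [
        {"role": "system", "content": SUPPORT_SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]
```

Constraints like the pricing rule above are one way to guard against the kind of $1-car commitment described earlier.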

2. Testing and Evaluation

James Wickett, CEO of DryRun Security, explains how AI is transforming security testing by enabling more contextual analysis. Instead of relying on rigid pattern matching and rule-based systems, AI allows security teams to understand code in context.

This approach helps identify security issues that traditional tools might miss while presenting findings in ways developers can easily understand and act upon.

3. Monitoring and Feedback Integration

Continuous improvement requires systematic feedback collection. Anand Raghavan of Cisco emphasizes creating “a tight loop of improving the models and fine-tuning them based on user feedback.”
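
A minimal sketch of the feedback capture such a loop depends on, assuming a simple append-only JSONL log; the structure and field names are illustrative, not Cisco's implementation.

```python
import json
from datetime import datetime, timezone


def record_feedback(log_path: str, prompt: str, response: str,
                    rating: int, comment: str = "") -> None:
    """Append one user-feedback event to a JSONL log that model owners can later
    review for evaluation or fine-tuning (illustrative sketch only)."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "rating": rating,        # e.g., +1 for thumbs-up, -1 for thumbs-down
        "comment": comment,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(event) + "\n")
```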

Key Lessons from Early Adopters

Adobe’s Brian Scott shares valuable lessons from implementing their governance framework:

  1. Start with MVP processes: “Don’t try to perfect your process. That’s just going to slow you down. Really focus on your MVP, and take in that feedback and iterate just as if you were building an actual product.”
  2. Manage volume carefully: “Don’t put more water in the pool than you can drain out. What this basically means is that we were routing a lot of use cases over to our stakeholders, and we’re really providing them more use cases all at once than they can review.”
  3. Focus on high-value use cases: Prioritize initiatives with clear business impact, especially those that might go to market or deliver significant internal value.

Conclusion

Effective governance enables organizations to capture the benefits of generative AI while managing risks. The most successful approaches balance innovation with responsibility through clear frameworks, efficient processes, and appropriate organizational structures.

As Brian Scott advises, the key is to “focus on your MVP and take in that feedback and iterate.” By learning from pioneers like Adobe, enterprises can develop governance that enables responsible innovation while managing risk effectively.

In our next article, we’ll explore how enterprises are implementing GenAI across different business functions and the specific strategies they’re using to maximize impact.

About the Author

Leah Brown

Managing Editor at IT Revolution working on publishing books and guidance papers for the modern business leader. I also oversee the production of the IT Revolution blog, combining the best of responsible, human-centered content with the assistance of AI tools.
