
June 26, 2024

Generative AI Governance at Scale: Lessons from Adobe

By Summary by IT Revolution

In a recent presentation at the 2024 Enterprise Technology Leadership Summit Virtual Europe, Brian Scott and Daniel Neff, principal architects at Adobe, shared their experience developing a generative AI governance strategy for one of the world’s largest software companies. As more organizations grapple with the challenges and opportunities presented by AI, the insights from Adobe’s journey provide valuable lessons for executives looking to responsibly integrate this transformative technology.

The Need for AI Governance

With the rapid advancement of generative AI, companies face a delicate balancing act. On one hand, there is a strong desire among developers to leverage AI to enhance their work and drive innovation. On the other hand, organizations must navigate the potential risks and ensure responsible deployment. This is where a robust AI governance strategy becomes critical.

Adobe’s Approach: The A-F Framework

To address this challenge, Scott and Neff developed the A-F Framework, a standardized approach to evaluating and managing AI use cases across the enterprise. This single artifact ensures that all stakeholders, from the requesting team to legal, security, and privacy, are aligned throughout the review process.

The framework captures six key data points for each AI use case:

  • Audience: Identifies who will be using the AI solution (e.g., internal customer service representatives).
  • Objective: Defines the goal of the AI implementation (e.g., improving customer quality of service).
  • Input Data Type: Specifies the kind of data that will be fed into the AI system (e.g., published documentation, internal runbooks).
  • Input Data Classification: Indicates the sensitivity level of the input data (e.g., internal data).
  • Technology: Lists the AI technologies and platforms that will be used (e.g., Azure OpenAI, Weaviate).
  • Output: Describes the type of output the AI system will generate and how it will be used (e.g., refined how-to documentation).

By capturing this information upfront, the A-F Framework enables a streamlined review process and ensures that all relevant stakeholders have the necessary context to assess the risks and benefits of each use case. For example, a possible use case could be:

  • Team [A] = The customer support engineering team
  • with objective [B] = to improve customer quality of service
  • for audience [C] = for internal customer service reps
  • uses input data [D] = using published documentation and internal runbooks
  • with tech [E] = with Azure OpenAI, LangChain, Pinecone
  • to create output [F] = to find concrete, referenceable how-to documentation
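A use-case record like the one above could be captured as structured data at intake. The following is a minimal sketch; the class and field names are assumptions for illustration, not Adobe's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical representation of an A-F Framework intake record;
# the fields mirror the six data points described above.
@dataclass
class UseCase:
    team: str              # [A] requesting team
    objective: str         # [B] goal of the AI implementation
    audience: str          # [C] who will use the solution
    input_data: str        # [D] data fed into the system
    technology: list[str]  # [E] AI technologies and platforms
    output: str            # [F] what the system will generate

example = UseCase(
    team="Customer support engineering",
    objective="Improve customer quality of service",
    audience="Internal customer service reps",
    input_data="Published documentation and internal runbooks",
    technology=["Azure OpenAI", "LangChain", "Pinecone"],
    output="Concrete, referenceable how-to documentation",
)
```

Keeping the record in one structured artifact is what lets legal, security, and privacy reviewers all work from the same context.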

Risk Assessment and Prioritization

Once a use case is submitted, stakeholders assign a risk score based on factors such as the audience (internal vs. external), input data sensitivity (public vs. private), and the nature of the objective (summary vs. actionable). This risk score serves as an early warning system, helping the requesting team understand the level of scrutiny their use case will face and allowing reviewers to prioritize their efforts.
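The scoring described above could be sketched as a simple additive rule. The factor names and weights below are illustrative assumptions, not Adobe's actual scoring model:

```python
# Illustrative risk scoring: higher scores mean more scrutiny.
# Weights are assumptions chosen for the sketch.
def risk_score(audience: str, data_classification: str, objective: str) -> int:
    score = 0
    score += 3 if audience == "external" else 1            # external users raise risk
    score += 3 if data_classification == "private" else 1  # sensitive inputs raise risk
    score += 2 if objective == "actionable" else 1         # actionable outputs raise risk
    return score

# An internal, public-data, summary-only use case scores lowest:
print(risk_score("internal", "public", "summary"))       # 3
print(risk_score("external", "private", "actionable"))   # 8
```

Even a crude score like this gives the requesting team an early signal of how much scrutiny to expect and lets reviewers triage their queue.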

Optimizing the AI Governance Process

Drawing inspiration from DevOps principles, Scott and Neff emphasized the importance of continuous improvement in AI governance. They recommended a three-pronged approach:

  1. Solidify: Establish feedback loops, implement quality checks, and optimize for approved technologies to streamline the process.
  2. Simplify: Treat technology onboarding as a separate process, prioritize low-risk use cases, and create a single entry point for all requests.
  3. Amplify: Identify high-value use cases, fast-track low-risk initiatives, and build in opportunities to recognize and promote impactful projects.

By iterating on their governance process, Adobe has been able to strike a balance between innovation and responsibility, enabling teams to move quickly while ensuring appropriate oversight.

Lessons Learned and Recommendations

Throughout their journey, Scott and Neff encountered several challenges and gleaned valuable lessons:

  1. Engage stakeholders early: Collaborate with legal, security, and privacy teams to define “green” scenarios that can bypass triage, streamlining the approval process.
  2. Establish a tech radar: Provide a curated list of pre-approved technologies to guide teams toward solutions that are easier to deploy responsibly.
  3. Avoid overloading reviewers: Pace the flow of use cases to ensure stakeholders can provide thorough and timely feedback.
  4. Consolidate intake channels: Implement a single entry point for all AI use case requests to maintain consistency and accountability.
  5. Embrace an MVP mindset: Start with a minimum viable process and iterate based on feedback rather than striving for perfection from the outset.
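Two of these lessons, pre-defined "green" scenarios and a tech radar of pre-approved technologies, lend themselves to a simple automated intake check. The approved set and routing labels below are hypothetical examples, not Adobe's actual lists:

```python
# Hypothetical tech radar: technologies pre-vetted by legal/security/privacy.
APPROVED_TECH = {"Azure OpenAI", "LangChain", "Pinecone"}

def triage(technology: set[str], audience: str, data_classification: str) -> str:
    """Route a use case; 'green' scenarios bypass full triage."""
    all_approved = technology <= APPROVED_TECH
    low_risk = audience == "internal" and data_classification == "public"
    if all_approved and low_risk:
        return "green"            # fast-track: pre-approved scenario
    if all_approved:
        return "standard"         # known tech, but still needs a risk review
    return "tech-onboarding"      # new tech goes through a separate process

print(triage({"Azure OpenAI"}, "internal", "public"))  # green
```

Routing every request through a single function like this also enforces the "single entry point" lesson: there is one place where consistency and accountability are maintained.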

Looking Ahead

As generative AI continues to evolve, the need for effective governance will only grow. Adobe’s experience highlights the importance of proactive, collaborative, and adaptable approaches to managing this powerful technology. By sharing their insights and seeking input from the broader community, Scott and Neff are contributing to the development of best practices that will benefit organizations across industries.

For executives embarking on their own AI governance journeys, the key takeaways are clear: engage stakeholders early, establish clear frameworks and processes, prioritize based on risk, and continuously refine your approach. By doing so, you can harness the transformative potential of generative AI while mitigating the risks and ensuring responsible deployment at scale.

Watch the full presentation in our video library here.

Sign up for the next Enterprise Technology Leadership Summit here.

- About The Authors

Summary by IT Revolution

Articles created by summarizing a piece of original content from the author (with the help of AI).

