
November 8, 2023

Google Leaked Memo “We Have No Moat (and Neither Does OpenAI)” through the Lens of Slowify, Simplify, Amplify

By Gene Kim

Yesterday was the first OpenAI DevDay, which so many of us watched with excitement and amazement.

This reminded me of the May 2023 leaked “We Have No Moat, And Neither Does OpenAI” memo from a Google researcher. It was an incredibly fascinating memo that argued against the notion that frontier LLMs (Large Language Models), trained on massive amounts of text data at a cost of tens or even hundreds of millions of dollars, will be the winning solution.

The anonymous researcher goes even further, making the claim that these huge, capital-intensive projects don’t even create a moat. (Indeed, these frontier LLMs are very capital-intensive, requiring massive amounts of money and time. And they tie up already incredibly scarce GPU capacity: check out today’s article on how Microsoft’s need for GPUs is so high that they’re renting them from Oracle Cloud!)

What I find so tantalizing and startling about the memo: one of the main themes seemed very familiar. One of the things we’ve learned is that modularity in software is incredibly desirable — it enables independence of action, which enables vast numbers of people to work independently of each other.

In my soon-to-be-released book, Wiring the Winning Organization, co-authored with Dr. Steve Spear, we have case studies of the great Amazon re-architecture of 2001 and the massive $5 billion investment that IBM made in the 1960s to create the System/360 project (that’s $20 billion in today’s dollars!). What these two case studies have in common is that they created an architecture that allowed for teams to work in parallel, enabling teams to work and innovate independently.

What’s frankly amazing and mind-blowing in the memo is the set of observations about the pace of development happening in the OSS AI model community. To pick a few:

  • After the release of Meta’s LLaMA: “A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc., many of which build on each other.”
  • “By contrast, training giant models from scratch not only throws away the pretraining but also any iterative improvements that have been made on top. In the open source world, it doesn’t take long before these improvements dominate, making a full retrain extremely costly.”
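Quantization, one of the techniques the memo lists, is simple enough to sketch. What follows is a rough, hypothetical illustration (not code from the memo) of symmetric 8-bit weight quantization in NumPy. Real inference stacks use more elaborate block-wise variants, but the core idea of trading a little precision for a 4x smaller memory footprint is the same:

```python
# Hypothetical sketch of symmetric 8-bit weight quantization.
import numpy as np

def quantize_int8(w):
    """Map float32 weights in [-max, max] onto int8 values in [-127, 127]."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights from int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=1024).astype(np.float32)
q, scale = quantize_int8(w)

# 4x smaller (1 byte per weight instead of 4), with error bounded
# by half a quantization step.
assert q.nbytes == w.nbytes // 4
assert np.abs(dequantize(q, scale) - w).max() <= scale / 2 + 1e-6
```

Sketches like this help explain why the community could move so fast: each such improvement is small, self-contained, and composable with the others.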

This is a fantastic document that anyone interested in AI should read. It’s interesting to see all the amazing advancements that OpenAI announced yesterday, and I’m not smart enough to even opine on the extent to which this refutes or validates the claims in the memo.

But I find the arguments very, very compelling.

As I’ve done with the Cloudflare outage post-mortem and the Nordstrom case studies, I had ChatGPT interpret this memo through the lens of Slowify, Simplify, and Amplify.

Here’s what it came up with — edited by me to improve clarity and further land the point.

Slowification:

  • This memo calls for a reassessment of Google’s approach to AI development, proposing a thoughtful and strategic response to the changing landscape, specifically around OSS AI models.
  • The memo suggests that there is much to learn from the open-source community, pointing out the incredible pace of innovation. It provides a stunning timeline of advancements in the 50 days after the release of the LLaMA source code (and the leak of its weights one week later).
  • The memo describes the shortcomings of direct competition with open-source initiatives and proposes a strategic pivot to create an ecosystem that helps open-source efforts.

Simplification:

  • Incrementalization: The ability for smaller AI models to soon run on consumer hardware (e.g., laptops, phones, glasses) will enable much smaller, incremental steps than currently possible with frontier LLMs.
  • Modularization:
      • The pace of innovation happening in open-source AI efforts is astonishingly fast. As one example, Low-Rank Adaptation (LoRA) was cited as just one new mechanism that quickly and efficiently updates models, enabling rapid and distributed innovation.
      • Smaller models, models working together, and complementary efforts allow for the separate enhancement and sharing of model components, encouraging parallel development efforts. In other words, more brains beat fewer brains almost all of the time.
      • Open source provides a decentralized model of innovation, where the collective intelligence of the open-source community can contribute to the evolution of AI models.
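The LoRA mechanism mentioned above lends itself to a short sketch. The following is a simplified, hypothetical NumPy illustration on a single linear layer (the real method applies low-rank updates to attention weights inside a transformer): the frozen pretrained weight matrix W is augmented with a trainable low-rank product B @ A, so only the small adapter matrices ever need to be trained and shared.

```python
# Minimal, hypothetical sketch of Low-Rank Adaptation (LoRA)
# on a single linear layer, using plain NumPy.
import numpy as np

d_in, d_out, rank = 64, 32, 4              # rank << min(d_in, d_out)
rng = np.random.default_rng(0)

W = rng.normal(size=(d_out, d_in))         # frozen pretrained weight
A = rng.normal(size=(rank, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, rank))                # trainable up-projection, zero-init

def forward(x, scale=1.0):
    """y = (W + scale * B @ A) @ x; only A and B are ever updated."""
    return W @ x + scale * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B zero-initialized, the adapter starts as a no-op:
assert np.allclose(forward(x), W @ x)
```

Because only A and B are trained, the adapter here has rank * (d_in + d_out) = 384 parameters versus 2,048 in W, which is why LoRA fine-tunes are cheap to produce and to distribute, and why, as the memo notes, improvements can stack on top of one another.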

Amplification:

  • The memo itself is an example of an attempt to amplify a signal that Google might be at a competitive disadvantage compared to open-source models, signaling a need for urgent attention and action.
  • By discussing the successes of the open-source community, the memo proposes the need for Google to adapt its strategy to incorporate more open and collaborative approaches.
  • The memo pushes the organization to consider and act upon the challenges it faces more promptly and effectively before it is too late.

I find the logic of this argument to be rock-solid. How this squares with OpenAI’s amazing pace of innovation and shipping capabilities that developers can use, I’m not quite sure.

What do you think?

(Also, many of you were interested in the prompts I used. I will be publishing the prompts on GitHub. Please let me know if you find them of value; feel free to submit suggestions/PRs, etc.: https://github.com/realgenekim/wwo-llm-prompts/blob/main/prompts/wwo-simple-prompt.md)

About The Author

Gene Kim

Gene Kim is a Wall Street Journal bestselling author, researcher, and multiple award-winning CTO. He has been studying high-performing technology organizations since 1999 and was the founder and CTO of Tripwire for 13 years. He is the author of six books, including The Unicorn Project (2019), and co-author of the Shingo Publication Award-winning Accelerate (2018), The DevOps Handbook (2016), and The Phoenix Project (2013). Since 2014, he has been the founder and organizer of DevOps Enterprise Summit, studying the technology transformations of large, complex organizations.


1 Comment

  • Anonymous Nov 12, 2023 4:42 pm

    Thank you for the in-depth explanation. I also learned that modularity in software is highly desirable, but few communities understand it. Here are my principles of modularity in my daily work on developing a modular source code network:
    1. Environment: 1.1 management over creation; 1.2 requirements over expectations
    2. Source code: 2.1 reusability over code development; 2.2 writing over refactoring code
    3. Data: 3.1 data standardization first; 3.2 fat-data over fat-function
    What do you think about hyper-modular architecture?



