November 8, 2023
Yesterday was the first OpenAI DevDay, which so many of us watched with excitement and amazement.
This reminded me of the May 2023 leaked “We Have No Moat, And Neither Does OpenAI” memo from a Google researcher. It was an incredibly fascinating memo that argued against the notion that frontier LLMs (Large Language Models), trained on massive amounts of text data at a cost of tens or even hundreds of millions of dollars, will be the winning solution.
The anonymous researcher goes even further, making the claim that these huge, capital-intensive projects don’t even create a moat. (Indeed, these frontier LLMs are very capital-intensive, requiring massive amounts of money and time. And they tie up the already incredibly scarce GPU capacity — check out today’s article on how Microsoft’s need for GPUs is so high that it’s renting them from Oracle Cloud!)
What I find so tantalizing and startling about the memo: one of the main themes seemed very familiar. One of the things we’ve learned is that modularity in software is incredibly desirable — it enables independence of action, which enables vast numbers of people to work independently of each other.
In my soon-to-be-released book, Wiring the Winning Organization, co-authored with Dr. Steve Spear, we have case studies of the great Amazon re-architecture of 2001 and the massive $5 billion investment that IBM made in the 1960s to create the System/360 project (that’s $20 billion in today’s dollars!). What these two case studies have in common is that both created architectures that allowed teams to work in parallel, innovating independently of each other.
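The modularity principle behind both case studies can be sketched in a few lines of code: a stable, agreed-upon interface acts as the seam between teams, so each side can change freely without coordinating with the other. (A minimal illustrative sketch; the `PaymentGateway` example and its names are hypothetical, not from the book or the memo.)

```python
from abc import ABC, abstractmethod

# The stable interface: the "seam" the two teams agree on up front.
class PaymentGateway(ABC):
    @abstractmethod
    def charge(self, amount_cents: int) -> bool:
        ...

# Team A owns this implementation and can rewrite or replace it
# at any time, as long as the interface contract holds.
class StripeGateway(PaymentGateway):
    def charge(self, amount_cents: int) -> bool:
        # (stubbed) in reality, this would call the payment provider
        return amount_cents > 0

# Team B codes against the interface, never the implementation,
# so both teams can ship changes independently and in parallel.
def checkout(gateway: PaymentGateway, amount_cents: int) -> str:
    return "paid" if gateway.charge(amount_cents) else "declined"

print(checkout(StripeGateway(), 500))  # → paid
```

The point isn’t the payments domain: it’s that the interface, not the implementation, is what enables independence of action.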
What’s frankly amazing and mind-blowing in the memo are the observations about the pace of development happening in the OSS AI model community. To pick a few:
This is a fantastic document that anyone interested in AI should read. It’s interesting to see all the amazing advancements that OpenAI announced yesterday, and I’m not smart enough to even opine on to what extent this refutes or validates the claims in the memo.
But I find the arguments very, very compelling.
As I’ve done with the Cloudflare outage post-mortem and the Nordstrom case studies, I had ChatGPT interpret this memo through the lens of Slowify, Simplify, and Amplify.
Here’s what it came up with — edited by me to improve clarity and further land the point.
Slowification:
Simplification:
Amplification:
I find the logic of this argument to be rock-solid. How this squares with OpenAI’s amazing pace of innovation and shipping capabilities that developers can use, I’m not quite sure.
What do you think?
(Also, many of you were interested in the prompts I used. I will be publishing the prompts on GitHub. Please let me know if you find them of value; feel free to submit suggestions/PRs, etc.: https://github.com/realgenekim/wwo-llm-prompts/blob/main/prompts/wwo-simple-prompt.md)
Gene Kim is a Wall Street Journal bestselling author, researcher, and multiple award-winning CTO. He has been studying high-performing technology organizations since 1999 and was the founder and CTO of Tripwire for 13 years. He is the author of six books, including The Unicorn Project (2019), and co-author of the Shingo Publication Award-winning Accelerate (2018), The DevOps Handbook (2016), and The Phoenix Project (2013). Since 2014, he has been the founder and organizer of DevOps Enterprise Summit, studying the technology transformations of large, complex organizations.