November 8, 2023
Yesterday was the first OpenAI DevDay, which so many of us watched with excitement and amazement.
This reminded me of the May 2023 leaked "We Have No Moat, And Neither Does OpenAI" memo from a Google researcher. It was an incredibly fascinating memo arguing against the notion that frontier LLMs (Large Language Models), trained on massive amounts of text data at a cost of tens or even hundreds of millions of dollars, will be the winning solution.
The anonymous researcher goes even further, making the claim that these huge, capital-intensive projects don't even create a moat. (Indeed, these frontier LLMs are very capital-intensive, requiring massive amounts of money and time. And they tie up the already incredibly scarce GPU capacity: check out today's article on how Microsoft's need for GPUs is so high that it's renting them from Oracle Cloud!)
What I find so tantalizing and startling about the memo: one of the main themes seemed very familiar. One of the things we’ve learned is that modularity in software is incredibly desirable — it enables independence of action, which enables vast numbers of people to work independently of each other.
In my soon-to-be-released book, Wiring the Winning Organization, co-authored with Dr. Steve Spear, we have case studies of the great Amazon re-architecture of 2001 and the massive $5 billion investment that IBM made in the 1960s to create the System/360 project (that's $20 billion in today's dollars!). What these two case studies have in common is that each created an architecture that allowed teams to work in parallel, innovating independently of each other.
What's frankly amazing and mind-blowing in the memo are the observations about the pace of development happening in the OSS AI model community.
This is a fantastic document that anyone interested in AI should read. It’s interesting to see all the amazing advancements that OpenAI announced yesterday, and I’m not smart enough to even opine on to what extent this refutes or validates the claims in the paper.
But I find the arguments very, very compelling.
As I’ve done with the Cloudflare outage post-mortem and the Nordstrom case studies, I had ChatGPT interpret this memo through the lens of Slowify, Simplify, and Amplify.
Here’s what it came up with — edited by me to improve clarity and further land the point.
Slowification:
Simplification:
Amplification:
I find the logic of this argument to be rock-solid. How this squares with OpenAI’s amazing pace of innovation and shipping capabilities that developers can use, I’m not quite sure.
What do you think?
(Also, many of you were interested in the prompts I used. I will be publishing the prompts on GitHub. Please let me know if you find them of value; feel free to submit suggestions/PRs, etc.: https://github.com/realgenekim/wwo-llm-prompts/blob/main/prompts/wwo-simple-prompt.md)
Gene Kim has been studying high-performing technology organizations since 1999. He was the founder and CTO of Tripwire, Inc., an enterprise security software company, where he served for 13 years. His books have sold over 1 million copies—he is the WSJ bestselling author of Wiring the Winning Organization, The Unicorn Project, and co-author of The Phoenix Project, The DevOps Handbook, and the Shingo Publication Award-winning Accelerate. Since 2014, he has been the organizer of DevOps Enterprise Summit (now Enterprise Technology Leadership Summit), studying the technology transformations of large, complex organizations.