October 31, 2024
The following post is an excerpt from the book Unbundling the Enterprise: APIs, Optionality, and the Science of Happy Accidents by Stephen Fishman and Matt McLarty.
Throughout this book, we’ve focused our attention on unpacking the events of the past to make sense of the present. We did this with an aim to help you prepare for the future. Despite our central thesis that the future is inherently unpredictable, we do want to take a moment and look at some real-time events, see how APIs are driving them, and think about what options might be interesting to create and conserve.
The hottest trend that has captivated the world as we write this book is the rise of ChatGPT and, more generically, the power of generative AI. While we’re still seeing the potential of AI unfold across nearly every industry and labor pool, in the context of this book, it is interesting to note that much of the wildfire spread and impact of ChatGPT can be, unsurprisingly, traced to API-centered factors.
OpenAI's standards-based API interface has made the costs of coordinating and integrating with ChatGPT's capabilities so low that almost every software-producing company (except those competing for AI supremacy) has already rolled out new features in its own offerings powered by OpenAI's and other providers' large language models (LLMs).
The companies that have moved fastest to integrate generative AI capabilities have been able to do so in part because their applications were already unbundled into segmented modules that communicate through APIs.
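The low coordination cost described above can be made concrete with a minimal sketch: a single HTTP call against OpenAI's published chat completions endpoint. The endpoint and payload shape follow OpenAI's documented REST interface, but the model name and prompt are illustrative placeholders, and error handling and retries are omitted.

```python
import json
import urllib.request

# Sketch of integrating an LLM capability through a standards-based HTTP API.
# Endpoint and payload shape follow OpenAI's documented chat completions REST
# interface; the model name and prompt are illustrative placeholders.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(api_key: str, prompt: str, model: str = "gpt-4o-mini"):
    """Assemble the HTTP request for a single chat completion."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {api_key}",
    }
    return urllib.request.Request(API_URL, data=body, headers=headers)

def complete(api_key: str, prompt: str) -> str:
    """Send the request and return the assistant's reply text."""
    with urllib.request.urlopen(build_request(api_key, prompt)) as resp:
        payload = json.load(resp)
    return payload["choices"][0]["message"]["content"]
```

That the entire integration fits in a few dozen lines of standard-library code is precisely the point: the provisioning cost is a one-time fixed cost, and each subsequent call is nearly free to coordinate.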
Given the wide acceptance of standards-based, API-enabled value exchange, it's not surprising that the world has witnessed a flood of other AI-based tools over the last eighteen months. As these tools became ready for scaled usage, we've seen which organizations were ready to quickly fold AI capabilities into their offerings because their digital offerings were already built to consume and integrate external services. We have also seen how many organizations were left flat-footed, unable to evolve and exploit the fastest adoption curve the world has ever seen, because their software offerings lacked the modularity necessary to integrate a host of external services at low cost.
A somewhat hidden factor that both drives and feeds off the rise of abstract services like generative AI is the long-running trend where the increasing efficiency of value exchanges causes channel migration and organizational transformation.
A defining factor of commerce and business shifting to online channels is how digital transactions lower the coordination cost of interactions. A major reason businesses of all types have invested first in the web, then in mobile, and now in APIs is that these technologies lower coordination costs and increase the efficiency of value exchange. Digital channels have a price advantage compared to physical channels, which carry all sorts of physical world costs associated with pesky and unpredictable humans. As API proliferation and machine-to-machine/bot-to-bot interactions become the prevalent means of value exchange, we’ll begin to see a host of other behavior changes take effect.
Just as we explained in Chapter 5 that lowering the cost of running experiments causes a rise in demand for experiments, we're likely to see a plethora of behavior changes in enterprises when coordination costs fall precipitously. Perhaps the most far-reaching change will come in how enterprises approach fundamental execution questions like buy versus build. When the risk and cost of outsourcing capabilities to external specialty firms fall significantly, leasing market-proven capabilities from a specialist will become increasingly compelling compared to keeping capabilities in-house.
The proliferation of hyper-specialized offerings (small firms with offerings that are both highly targeted and abstract) is poised to continue accelerating while simultaneously making larger firms more efficient and productive. Only a few years ago, firms like Twilio or BazaarVoice were rare and unproven; today, the launch of new specialized SaaS companies is an everyday occurrence.
APIs are at the heart of this change as their nature (standards-based interaction models that have a low, one-time, fixed cost of provisioning for execution combined with extremely low variable cost of execution) is significantly responsible for driving the continued decrease in the cost of multiparty interactions.
The broad adoption of API-based integration models by development communities has democratized an ever-expanding range of digital capabilities. What started out as basic computing infrastructure and content services has advanced through telephony automation, payments, and audio processing and brought other highly complex concepts like software-driven networking or generative AI within the reach of small and medium businesses.
Not only is it simpler than ever to provision and use distributed capabilities, but information hiding via modularity has also made it possible for firms to “not care” about the messy details of fulfillment (e.g., Do you really need to know how Twilio manages to deliver messages to phones on any network, anywhere in the world?).
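The "not caring" that information hiding buys can be sketched as a narrow interface the consumer codes against, with every messy fulfillment detail hidden behind it. The provider class and method names below are hypothetical stand-ins, not any real vendor's SDK.

```python
from abc import ABC, abstractmethod

# Sketch of information hiding via modularity: the consumer codes against a
# narrow interface and never sees how delivery is actually fulfilled.
# Class and method names are hypothetical stand-ins, not a real SDK.

class SmsGateway(ABC):
    @abstractmethod
    def send(self, to: str, body: str) -> str:
        """Send a message; return a provider-assigned message id."""

class HostedSmsGateway(SmsGateway):
    """Stand-in for any API-based provider (Twilio-like)."""
    def send(self, to: str, body: str) -> str:
        # A real implementation would POST to the provider's REST API;
        # here we just fabricate a deterministic-looking message id.
        return f"msg-{abs(hash((to, body))) % 10_000}"

def notify_customer(gateway: SmsGateway, phone: str) -> str:
    # The caller doesn't care which carrier, region, or protocol
    # fulfills this request: that is the whole point of the module boundary.
    return gateway.send(phone, "Your order has shipped.")
```

Swapping providers means swapping the concrete class behind `SmsGateway`; the consuming code, and the firm's mental model, stay untouched.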
One potential future implication of the fall in interaction costs is that large sectors of the global economy will depend upon increasingly complex networks of decoupled providers. A natural outcome of this increased dependency is that the need for robust, fault-tolerant services will increase, because the amount of value dependent upon these efficient exchanges will be "too big to fail." This increased need for reliability may cause the emergence of a new type of provider—a meta-aggregator that manages and routes traffic to the most efficient source of fulfillment for a range of capabilities.
These types of meta providers already exist in a limited fashion. Multi-CDN providers, for example, aggregate individual CDN providers (content delivery networks like Akamai, Fastly, and others) and deliver a higher level of reliability to enterprises that need efficient forms of content delivery.
As generative AI has been adopted by organizations around the world, we've begun to see novel applications where AI is being used to accelerate and improve software development processes. This specific capability will act as a flywheel and accelerate the maturation of generative AI capabilities. While it's not possible to predict every outcome on the horizon, one likely outcome lies in the as-yet-unfulfilled dreams of the inventor of the World Wide Web, Sir Tim Berners-Lee.
In 1999, Sir Tim Berners-Lee, one of Time Magazine’s one hundred most important people of the twentieth century, and one of only six members of the World Wide Web Hall of Fame, coined the term “the semantic web” and expressed a vision for how machines would be able to talk to other machines, stating:
I have a dream for the Web [where computers] become capable of analyzing all the data on the Web—the content, links, and transactions between people and computers. A “Semantic Web,” which makes this possible, has yet to emerge, but when it does, the day-to-day mechanisms of trade, bureaucracy and our daily lives will be handled by machines talking to machines. The “intelligent agents” people have touted for ages will finally materialize.1
While there has been some progress toward bringing this vision to reality, generative AI looks like the key that will unlock it, because LLMs like those provided by OpenAI, Google, and Anthropic are the "intelligent agents" Berners-Lee referred to. In the same way IaaS capabilities unlocked a new level of optionality (explained in Chapter 5), generative AI tools will be the catalyst that allows society to tackle or bypass most, if not all, of the major obstacles to realizing it.
Automated reasoning systems like ChatGPT can finally deliver on the promise of the semantic web because LLMs can automate reasoning processes that, until now, only humans with the capacity to reason in a vague space could perform. Because machine learning and LLM tools are capable of automating tedious work, generative AI will make it possible for every API-based service to become "self-describing." This one spark will accelerate digital connectivity by overcoming challenges that have seemed insurmountable for the last twenty years (e.g., vastness, vagueness, uncertainty, inconsistency, and deceit). For APIs, this means coordination costs will drop again as APIs become self-describing once LLM agents are pointed toward them, making integration and consumption of digital capabilities even easier than they are today.
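One speculative way to picture "pointing an LLM agent at an API" is flattening a machine-readable API description into a prompt the agent can reason over. The spec fragment and prompt format below are invented for illustration; no real agent framework is assumed.

```python
# Speculative sketch of a "self-describing" API: flatten an OpenAPI-style
# description into text an LLM agent could consume when planning a call.
# The spec fragment and prompt format are invented for illustration.

def describe_for_agent(openapi_spec: dict) -> str:
    """Render an OpenAPI-style spec as a prompt fragment for an LLM agent."""
    lines = [f"API: {openapi_spec['info']['title']}"]
    for path, methods in openapi_spec.get("paths", {}).items():
        for verb, operation in methods.items():
            lines.append(f"- {verb.upper()} {path}: {operation.get('summary', '')}")
    return "\n".join(lines)

# Hypothetical spec for a shipping service.
spec = {
    "info": {"title": "Shipping"},
    "paths": {"/quotes": {"post": {"summary": "Get a shipping quote"}}},
}
prompt = describe_for_agent(spec)
```

The heavy lifting (deciding which operation to call, with what parameters) is exactly the vague-space reasoning the passage argues LLMs can now automate.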
As the cost of exchange drops, the volume of demand and consumption of hyper-specialized offerings will continue to rise, making the need for meta-aggregators more likely while also making them more financially viable. One potential outcome of this trend could be that the aggregators evolve into abstract hubs of redundant capabilities designed to ensure the highest possible levels of availability. Imagine a service that allows you to use Microsoft Azure for a set of workloads and seamlessly switches that workload to AWS, Google Cloud, Salesforce, or any other emergent provider if Azure is experiencing latency or some form of downtime. Now take that idea one step further and imagine a service that aggregates and coordinates multiple types of packaged capabilities from multiple providers, allowing the consumer to pay only for the capabilities they use while always receiving service from the most efficient and reliable provider.
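The core routing behavior of such a meta-aggregator can be sketched in a few lines: try providers in priority order and fall back on failure. The provider callables and the failure model are illustrative, not any real multi-cloud product.

```python
# Sketch of a meta-aggregator's routing core: try providers in priority
# order and fall back on failure. Providers and the failure model are
# illustrative stand-ins, not any real multi-cloud product.

class ProviderDown(Exception):
    """Raised when a provider cannot fulfill the request."""

def route(providers, payload):
    """Try (name, callable) pairs in order; return the first success."""
    errors = []
    for name, call in providers:
        try:
            return name, call(payload)
        except ProviderDown as exc:
            errors.append((name, exc))  # record the failure, fall through
    raise RuntimeError(f"all providers failed: {errors}")

def azure(payload):
    raise ProviderDown("simulated outage")

def aws(payload):
    return f"processed {payload}"

# Azure is down, so the workload transparently lands on the next provider.
result = route([("azure", azure), ("aws", aws)], "workload-42")
```

A production aggregator would add health checks, latency-based ordering, and metering for pay-per-use billing, but the consumer-facing contract stays this simple: one call, fulfilled by whichever provider is currently most reliable.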
When taking the above possible futures into a single context, a new world of possibility emerges, where multiple types of generative AI can be strung together (by an AI agent fluent with APIs) to enable chat-based capabilities for highly complex macro tasks that can be invoked by ubiquitous and ambient interfaces. “Hey, Siri. Make me a website to help me market and sell my book titled Unbundling the Enterprise.”
While the prognostications above may seem far-fetched in scope or time horizon, a major takeaway from this book is to see the future in terms of possibility: a possibility that invites exploration into a set of potential options. That portfolio of options stands to generate large returns if you've deliberately aligned your financial and technical processes to support generating optionality. The questions to ask yourself and your enterprise are:
Given the radical change we are all about to experience with the emergence of generative AI, have we invested in generating our own OOOps moments?
Have we already taken the step of unbundling and modularizing our capabilities through API-led decomposition (an architectural pattern for separating functional capabilities across distinct layers of APIs that often call each other in sequence)? When our capabilities are granular, unbundled, and addressable via standardized interfaces, our options can be generated and conserved prior to the future fully materializing. The choice we make today leaves open the possibility for us to maximize our chances for big returns.
Have we already taken the step of mapping our ecosystem, including how value is exchanged across it? When our ecosystem is mapped and familiar to our delivery teams, our use of resources can be optimized to focus on options that might be valuable to entities in our ecosystem (e.g., our customers and partners). The choice we make today sets us up to be a proactive solver of problems with the communities we engage with.
Have we already taken the step of paving and optimizing our feedback loops? When our ability to iteratively learn and adapt is optimized for low-cost trials, we can rapidly focus our efforts behind options that show the promise of having a path to serial growth through ecosystem adoption.
The AI revolution we are now living through has many parallels to the Industrial Revolution that we can learn from. During the first Industrial Revolution, steam power shifted the manufacturing paradigm, but many constraints remained. When the new electrically driven paradigm accelerated, it wasn’t through the new power source—electricity—alone. It was through a combination of electricity and future optionality. Optionality came in the form of interchangeable parts (which modularized products) and the assembly line (which modularized production).
We began Chapter 1 of this book with Thomas Edison’s moment of triumph as he switched on a light bulb in 1882, illuminating not only his present day but the likely future paths of innovation as well. Making that light spread and be accessible to populations all over the world took imagination, many more innovations, and a healthy dose of modularity to walk down that future path to an electrified and illuminated society.
In the digital world we all inhabit today, our path to the future is somehow both hidden in shadow and illuminated. The shadow is the uncertain future; the illumination is the methodology to systematically pull the undiscovered and confusing future into the well-understood present: prepare early by unbundling your enterprise with the three OOOps methods so you can decide late as the future unfolds and reveals itself. Unbundling your business capabilities to create more convex options can be your enterprise's path to digital prosperity, even if you can no longer tell if it was by accident, by design, or, perhaps, both.
Matt McLarty is the Chief Technology Officer for Boomi. He works with organizations around the world to help them digitally transform using a composable approach. He is an active member of the global API community, has led global technical teams at Salesforce, IBM, and CA Technologies, and started his career in financial technology. Matt is an internationally known expert on APIs, microservices, and integration. He is co-author of the O'Reilly books Microservice Architecture and Securing Microservice APIs, and co-host of the API Experience podcast. He lives with his wife and two sons in Vancouver, BC.
Stephen Fishman (Fish) is the NA Field CTO for Boomi. He is a practicing technologist who brings creativity, rigor, and a human-centric lens to problem-solving. Known as an expert in aligning technology and business strategy, Stephen places a premium on pushing business and technology leaders to embrace iteration and the critical need to collaborate across disciplines. Throughout his career, Stephen has consulted with organizations desiring to transform their technology-based offerings to better meet the needs of organizations and the people they serve. In addition to consulting with large organizations, Stephen is an in-demand speaker and advisor. Stephen has led multidisciplinary teams to deliver amazing results at Salesforce, MuleSoft, Cox Automotive, Sapient, Macy's, and multiple public sector institutions including the US Federal Reserve and the CDC. He lives in Atlanta with his family and when he's not working can be found biking on the many trails in Georgia.