February 9, 2024
Russell Ackoff was an American organizational theorist, consultant, and professor known for his contributions to systems thinking and management science. He also worked closely with Dr. W. Edwards Deming. In his work, Ackoff explores the hierarchy of human cognition, emphasizing the critical distinction between efficiency and effectiveness in sociotechnical systems. His hierarchy of cognition comprises data, information, knowledge, understanding, and wisdom. He explains that while the first four elements focus on increasing efficiency, wisdom is uniquely associated with effectiveness. This distinction sets the stage for his critique of sociotechnical systems.
Drawing on Peter Drucker, Ackoff highlights the difference between doing things right (efficiency) and doing the right thing (effectiveness). He argues that an overemphasis on efficiency, especially when pursuing the wrong objectives, can lead to detrimental outcomes. This concept is a recurring theme in DevOps, DevSecOps, and SRE.
Ackoff also distinguishes between errors of omission (not doing something necessary) and errors of commission (doing something unnecessary or incorrect). In traditional management systems, errors of omission are often overlooked, even though they tend to be the more significant of the two. One of my favorite expressions of this idea comes from early cloud pioneer Randy Bias, who said it’s not about bottom-line ROI but top-line ROI.
Russell Ackoff passed away in 2009, during a lull in the field of Artificial Intelligence often referred to as the second AI winter. The advances that would lead to the AI explosion of 2023/2024 were likely unknown to him. Still, it is intriguing to consider what Dr. Russell L. Ackoff would make of the present state of Generative AI.
In my book, Deming’s Journey to Profound Knowledge, I pondered what Deming might have thought about this subject. If we take the same liberty here, what would Ackoff say about the nature of DevOps and Generative AI?
If we examine a software repository for a retail business, we can apply what I describe as Ackoff’s “Hierarchy of Cognition.” Using a demo application created by OpenContext, a company I’ve been working with, we can illustrate how Ackoff’s cognition model and Generative AI can be traced through this pyramidal structure.
If we explore the “scatter-ly/retail-app” repository on GitHub, we’ll find various artifacts, including an init program written in TypeScript called “index.ts.” According to Ackoff’s Cognition Pyramid, this program could be considered “data.” In isolation, it tells us only a little.
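To make that concrete, here is a purely hypothetical fragment of what an init program like “index.ts” might contain. This is an assumption for illustration, not the actual contents of the scatter-ly repository; the point is that, on its own, this is just “data”: it starts a service, but says nothing about who owns it, how it is built, or where it runs.

```typescript
// Hypothetical init program for a retail service (illustrative only).
import express from "express";

const app = express();
const port = Number(process.env.PORT ?? 3000);

// A minimal health-check endpoint so the platform can probe the service.
app.get("/healthz", (_req, res) => {
  res.send("ok");
});

app.listen(port, () => {
  console.log(`retail-app listening on port ${port}`);
});
```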
Moving up the pyramid, we can look at a YAML file that gives more insight into the “information” related to the retail application. In this example, “oc-catalog.yaml” provides much more information about the application, such as the service name, ownership, SBOM information, build actions, and container information.
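For illustration, here is a rough sketch of the kind of metadata such a catalog entry carries, expressed as a TypeScript type to keep the examples in this post in one language. The field names are assumptions for this post, not OpenContext’s actual schema.

```typescript
// Illustrative shape of the "information" layer a catalog entry might capture.
interface CatalogEntry {
  serviceName: string;      // e.g., "retail-app"
  owner: string;            // owning team or individual
  sbom: string;             // pointer to the software bill of materials
  buildActions: string[];   // CI workflows that build and test the service
  container: {
    image: string;          // container image the service ships as
    dockerfile: string;     // path to the Dockerfile used to build it
  };
}
```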
Now that we know this application will run as a container, we can examine how the container is built using the Dockerfile artifact. At this point, we have gained some knowledge of the retail application. For instance, if someone needed to troubleshoot the running application, they could refer to the Dockerfile and reasonably grasp how the retail service operates. Since I’m more on the Ops side of the DevOps world, the Dockerfile is typically the first place I look in a repository. It usually tells me where the build and configuration artifacts come from and gives me a general idea of how the service will be run.
Working with a large-scale web application can be complex, as many interconnected components make up a modern service. Understanding the relationship between the running application and the code repository can sometimes be difficult. OpenContext is a product that acts like a modern-day Configuration Management Database (CMDB). It collects information from systems like GitHub and GitLab into a graph database. This level of context leads to what Ackoff would call “understanding.”
Even in this demo application, you can see the level of complexity related to understanding this simple retail application.
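To make the idea concrete, here is a minimal sketch of what that graph context might look like: entities pulled from systems like GitHub, linked by relationships. The node and edge types are illustrative assumptions, not OpenContext’s actual schema.

```typescript
// Illustrative "understanding" layer: artifacts and teams connected as a graph.
type Node = { id: string; kind: "service" | "repo" | "team" | "container" };
type Edge = { from: string; to: string; relation: "owns" | "builds" | "runsAs" };

const nodes: Node[] = [
  { id: "retail-app", kind: "service" },
  { id: "scatter-ly/retail-app", kind: "repo" },
  { id: "storefront-team", kind: "team" },
  { id: "retail-app:latest", kind: "container" },
];

const edges: Edge[] = [
  { from: "storefront-team", to: "retail-app", relation: "owns" },
  { from: "scatter-ly/retail-app", to: "retail-app", relation: "builds" },
  { from: "retail-app", to: "retail-app:latest", relation: "runsAs" },
];
```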
Ackoff says that the first four elements are related to efficiency: they enable someone to navigate up and down the hierarchy, from artifacts (e.g., data) to understanding (e.g., a graph). However, with the commoditization of AI through recent breakthroughs in natural language processing and neural networks, we can look at how “wisdom” might be reached using Generative AI. As a simple prototype, we exported OpenContext’s graph data into a Retrieval-Augmented Generation (RAG) system. In this example, we used MongoDB’s Atlas Vector Search (a vector database) to build a simple Large Language Model (LLM) implementation on top of OpenAI’s GPT-4. Without any additional training, you could say we started to achieve what Ackoff would call “wisdom” or, more specifically, effectiveness (see the figure below).
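To show the shape of such a prototype, here is a minimal sketch of the RAG flow described above. It is not our actual implementation: the collection name ("context_chunks"), the search index name ("graph_index"), and the embedding model are assumptions, but the MongoDB $vectorSearch aggregation stage and the OpenAI embeddings and chat APIs are the standard ones.

```typescript
// Minimal RAG sketch: embed a question, retrieve graph context from Atlas
// Vector Search, and ask GPT-4 with that context as grounding.
import { MongoClient } from "mongodb";
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment
const mongo = new MongoClient(process.env.MONGODB_URI as string);

async function askGraph(question: string): Promise<string> {
  await mongo.connect();

  // 1. Embed the user's question.
  const embedding = (
    await openai.embeddings.create({
      model: "text-embedding-3-small",
      input: question,
    })
  ).data[0].embedding;

  // 2. Retrieve the most relevant graph chunks with Atlas Vector Search.
  const chunks = await mongo
    .db("opencontext")
    .collection("context_chunks")
    .aggregate([
      {
        $vectorSearch: {
          index: "graph_index",
          path: "embedding",
          queryVector: embedding,
          numCandidates: 100,
          limit: 5,
        },
      },
      { $project: { _id: 0, text: 1 } },
    ])
    .toArray();

  // 3. Ask GPT-4, grounded only in the retrieved context.
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content:
          "Answer questions about this codebase using only the provided context:\n" +
          chunks.map((c) => c.text).join("\n"),
      },
      { role: "user", content: question },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

// Example: the kind of conversational question discussed below.
askGraph("Who owns the retail-app service?").then(console.log);
```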
Artificial Intelligence (AI) can be broadly classified into three types based on its capabilities and functionalities. These classifications help us understand the extent to which AI systems can perform tasks and make decisions. The three types of AI are:

Artificial Narrow Intelligence (ANI): Sometimes referred to as Narrow AI, this type is designed to perform a specific or narrow range of tasks without human intervention. It operates within a limited, pre-defined range or context and does not possess consciousness, self-awareness, or genuine understanding. Examples include voice assistants like Siri and Alexa, recommendation systems on platforms like Netflix and Amazon, and image recognition software. Narrow AI excels at the tasks it is programmed for but cannot apply its intelligence to tasks or problems it wasn’t specifically designed for.

Artificial General Intelligence (AGI): Sometimes referred to as Strong AI, AGI is a theoretical form of AI that can understand, learn, and apply its intelligence to solve any problem with the same level of competence as a human. This type of AI could perform any intellectual task a human being can, with the ability to reason, plan, learn from experience, and understand abstract concepts. It would possess self-awareness, emotional understanding, and the ability to exercise judgment in unfamiliar situations, and it would adapt to new tasks without task-specific programming. According to ChatGPT (as of its April 2023 knowledge cutoff), AGI remains a theoretical concept and has yet to be achieved. Its applications would span every domain, from creative arts and scientific research to complex decision-making and problem-solving across all fields.

Artificial Super Intelligence (ASI): Also referred to as Singularity Intelligence, ASI takes the concept of Artificial General Intelligence further by not just equaling but significantly surpassing human intelligence. It could outperform the best human brains in every field, including scientific creativity, general wisdom, and social skills. This form of AI could improve itself autonomously, leading to rapid advancements beyond human control or understanding. The concept raises excitement about its potential benefits as well as significant ethical and safety concerns if it is not aligned with human values and controlled effectively.
Our simple LLM prototype, while powerful, is still only narrow AI. However, when combined with human intelligence and a natural language interface, it may lead us to what Ackoff calls “wisdom.” With this prototype, we can ask the LLM conversational questions such as who owns a particular code component or which components are labeled TypeScript. It’s as if we were talking to Brent, the famous character in Gene Kim et al.’s The Phoenix Project who holds all the tribal knowledge.
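With the hypothetical askGraph() helper sketched earlier, those conversational questions would look something like this (again, purely illustrative):

```typescript
// Ownership and language questions against the graph-backed LLM prototype.
askGraph("Which team owns the retail-app checkout component?").then(console.log);
askGraph("Which components in retail-app are written in TypeScript?").then(console.log);
```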
There’s a broader point to take away from all of this. Even though our LLM prototype is nowhere near Artificial General Intelligence (AGI) or Artificial Super Intelligence (ASI), it can still reach Ackoff’s level of “wisdom” in certain situations. However, humans still play a vital role in the process. An employee with reasonable DevOps experience can quickly understand a retail application when given the OpenContext tools, including the LLM. Hire someone without DevOps experience and give them this simple LLM as a chatbot, and their questions and answers would fall outside the Hierarchy of Cognition, amounting to little more than gibberish.
John Willis has worked in the IT management industry for more than 35 years and is a prolific author whose books include "Deming's Journey to Profound Knowledge" and "The DevOps Handbook." He researches DevOps, DevSecOps, IT risk, modern governance, and audit compliance. Previously, he was an Evangelist at Docker Inc., VP of Solutions for Socketplane (sold to Docker) and Enstratius (sold to Dell), and VP of Training & Services at Opscode, where he formalized the training, evangelism, and professional services functions at the firm. Willis also founded Gulf Breeze Software, an award-winning IBM business partner specializing in deploying Tivoli technology for the enterprise. He has authored six IBM Redbooks on enterprise systems management and was the founder and chief architect of Chain Bridge Systems.