
February 9, 2024

Thoughts on Generative AI Related to Efficiency and Effectiveness

By John Willis

Russell Ackoff was an American organizational theorist, consultant, and professor known for his contributions to systems thinking and management science. He also worked closely with Dr. W. Edwards Deming. In his work, Ackoff delves into the intricate hierarchy of human cognition, emphasizing the critical distinction between efficiency and effectiveness in sociotechnical systems. Ackoff outlines a hierarchy of cognition that includes data, information, knowledge, understanding, and wisdom. He explains that while the first four elements focus on increasing efficiency, wisdom is uniquely associated with effectiveness. This distinction sets the stage for his critique of sociotechnical systems.

Drawing on Peter Drucker, Ackoff highlights the difference between doing things right (efficiency) and doing the right thing (effectiveness). He argues that an overemphasis on efficiency, especially when pursuing the wrong objectives, can lead to detrimental outcomes. This concept is a recurring theme in DevOps, DevSecOps, and SRE. 

Ackoff also describes errors of omission (not doing something necessary) and errors of commission (doing something unnecessary or incorrect). In traditional management systems, errors of omission are often overlooked, even though they are usually the more significant. One of my favorite expressions of this idea comes from early cloud pioneer Randy Bias, who said it’s not about bottom-line ROI but top-line ROI.

Russell Ackoff passed away in 2009, at a time when the field of Artificial Intelligence was going through a lull, often referred to as the second AI winter. The significant advancements that led to the AI explosion of 2023/2024 were probably unknown to him then. Still, it is intriguing to consider what Dr. Russell L. Ackoff would make of the present state of Generative AI.

In my book, Deming’s Journey to Profound Knowledge, I pondered Deming’s thoughts on this subject. If we take the liberty of extending that thinking, what would Ackoff say about the nature of DevOps and Generative AI?

If we examine a software repository for a retail business, we can apply what I describe as Ackoff’s “Hierarchy of Cognition.” Using a demo application created by OpenContext, a company I’ve been working with, we can illustrate how Ackoff’s cognition model and Generative AI can be traced through this pyramidal structure.

Data

If we explore the “scatter-ly/retail-app” repository on GitHub, we’ll find various artifacts, including an init program written in TypeScript called “index.ts.” According to Ackoff’s Cognition Pyramid, this program could be considered “data.” In isolation, it tells us very little.

Information

Moving up the pyramid, we can look at a YAML file that gives more insight into the “information” related to the retail application. In this example, oc-catalog.yaml tells us much more about the application, including the service name, ownership, SBOM information, build actions, and container information.

Knowledge

Now that we know this application will run as a container, we can examine how the container is built using the Dockerfile artifact. At this point, we have gained some knowledge of the retail application. For instance, if someone needed to troubleshoot the application while it is running, they could refer to the Dockerfile and reasonably grasp how the retail service operates. Since I’m more on the Ops side of the DevOps world, the Dockerfile is typically the first place I look in a repo. It usually tells me where the build and configuration artifacts come from and gives me a general idea of how the service will be run.

Understanding

Working with a large-scale web application can be complex, as many interconnected components make up a modern service. Understanding the relationship between the running application and the code repository can sometimes be difficult. OpenContext is a product that acts like a modern-day Configuration Management Database (CMDB). It collects information from systems like GitHub and GitLab into a graph database. This level of context leads to what Ackoff would call “understanding.”
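To make the idea concrete, here is a minimal, hypothetical sketch of how component relationships might be represented as a graph. It is not OpenContext’s actual data model; the node names come from the artifacts mentioned above, and the team name is invented for illustration.

```python
# A minimal sketch (not OpenContext's actual data model) of the kind of graph
# a context catalog might build: components, owners, and artifacts as nodes,
# and relationships as labeled edges.
import networkx as nx

graph = nx.DiGraph()

# Hypothetical nodes and relationships for the demo retail application.
graph.add_edge("retail-app", "index.ts", relation="contains")
graph.add_edge("retail-app", "Dockerfile", relation="contains")
graph.add_edge("team-storefront", "retail-app", relation="owns")  # team name is made up
graph.add_edge("retail-app", "github.com/scatter-ly/retail-app", relation="tracked_in")

# "Understanding" comes from traversing relationships, not from any single artifact.
for source, target, data in graph.edges(data=True):
    if data["relation"] == "owns":
        print(f"{source} owns {target}")
```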

Even in this demo application, you can see the level of complexity related to understanding this simple retail application.

Wisdom

Ackoff says that the first four elements are related to efficiency: they enable someone to navigate up and down the hierarchy, moving from artifacts (e.g., data) to understanding (e.g., a graph). However, with the commoditization of AI through recent breakthroughs in natural language processing and neural networks, we can look at how “wisdom” could be reached using Generative AI. As a simple prototype, we exported OpenContext’s graph data into a Retrieval-Augmented Generation (RAG) system. In this example, we used MongoDB’s Atlas Vector Search (a vector database) to create a simple Large Language Model (LLM) implementation with OpenAI’s GPT-4. Without any training, you could say we started to achieve what Ackoff would call “wisdom” or, more specifically, effectiveness (see the figure below).
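As a rough illustration of how such a pipeline could be wired together, the Python sketch below uses LangChain, OpenAI, and MongoDB Atlas Vector Search. This is a hypothetical reconstruction, not the actual prototype code: the connection string, database/collection namespace, and index name are placeholders, and package names vary by LangChain version.

```python
# A rough sketch of a RAG pipeline over exported graph data, using LangChain,
# OpenAI, and MongoDB Atlas Vector Search. Connection string, namespace, and
# index name below are placeholders, not the real prototype's values.
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_mongodb import MongoDBAtlasVectorSearch
from langchain.chains import RetrievalQA

# Assume the OpenContext graph export has already been embedded and loaded
# into a "context.components" collection with a vector index named
# "component_index" (hypothetical names).
vector_store = MongoDBAtlasVectorSearch.from_connection_string(
    "mongodb+srv://<user>:<password>@cluster.example.mongodb.net",
    "context.components",
    OpenAIEmbeddings(),
    index_name="component_index",
)

# GPT-4 answers questions using documents retrieved from the vector store.
qa_chain = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model="gpt-4", temperature=0),
    retriever=vector_store.as_retriever(),
)
```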

Types of AI

Artificial Intelligence (AI) can be broadly classified into three types based on capabilities and functionality. These classifications help us understand the extent to which AI systems can perform tasks and make decisions. The three types are:

1. Artificial Intelligence (AI)

This is sometimes referred to as Narrow AI. Narrow AI is designed to perform a specific task or narrow range of tasks without human intervention. It operates within a limited, pre-defined range or context and does not possess consciousness, self-awareness, or genuine understanding. Examples include voice assistants like Siri and Alexa, recommendation systems on platforms like Netflix and Amazon, and image recognition software. Narrow AI excels at the tasks it is programmed for but cannot apply its intelligence to tasks or problems it wasn’t specifically designed to handle.

2. Artificial General Intelligence (AGI)

This is sometimes referred to as Strong AI. Artificial General Intelligence (AGI) refers to a theoretical form of AI that can understand, learn, and apply its intelligence to solve any problem with the same level of competence as a human. This type of AI would perform any intellectual task that a human being can, with the ability to reason, plan, learn from experience, and understand abstract concepts. It would possess self-awareness, emotional understanding, and the ability to exercise judgment in unfamiliar situations. AGI would be adaptable to various tasks without needing task-specific programming. According to ChatGPT (as of its April 2023 knowledge cutoff), AGI remains a theoretical concept and has yet to be achieved. Its applications would span every domain, from creative arts and scientific research to complex decision-making and problem-solving across all fields.

3. Artificial Super Intelligence (ASI)

Often associated with the Singularity, Artificial Super Intelligence takes the concept of Artificial General Intelligence further by not just equaling but significantly surpassing human intelligence. It could outperform the best human brains in every field, including scientific creativity, general wisdom, and social skills. This form of AI could improve itself autonomously, leading to rapid advancements beyond human control or understanding. The concept raises excitement about its potential benefits, along with significant ethical and safety concerns if it is not aligned with human values and controlled effectively.

Conclusion

Our simple LLM prototype, while powerful, is limited to the first category: narrow AI. However, when combined with human intelligence and a natural language interface, it may lead us to what Ackoff calls “wisdom.” With this prototype, we can ask the LLM conversational questions such as who owns a particular code component or which components are labeled TypeScript. It’s like talking to the famous Brent character in Gene Kim et al.’s The Phoenix Project.
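Continuing the hypothetical sketch from the Wisdom section, asking those kinds of questions might look something like this (the qa_chain and component names carry over from that sketch and are illustrative, not the prototype’s actual code).

```python
# Example conversational questions from the prose above, asked against the
# hypothetical qa_chain built in the earlier sketch.
for question in [
    "Who owns the retail-app component?",
    "Which components are labeled TypeScript?",
]:
    answer = qa_chain.invoke({"query": question})
    print(question, "->", answer["result"])
```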

Figure: OpenAI, LangChain, and MongoDB Atlas Vector Search

Figure: Answer rendered in Markdown

There’s a broader point to take away from all of this. Even though our LLM prototype is not advanced enough to achieve Artificial General Intelligence (AGI) or Artificial Super Intelligence (ASI), we are still able to reach Ackoff’s level of “wisdom” in certain situations. However, humans still play a vital role in this process. An employee with reasonable DevOps experience can quickly understand a retail application when given the OpenContext tools, including the LLM. If you hired someone without DevOps experience and gave them this simple LLM as a chatbot, their questions and answers would fall outside the Hierarchy of Cognition and amount to simple gibberish.

About the Author

John Willis

John Willis has worked in the IT management industry for more than 35 years and is a prolific author whose books include "Deming's Journey to Profound Knowledge" and "The DevOps Handbook." He researches DevOps, DevSecOps, IT risk, modern governance, and audit compliance. Previously, he was an Evangelist at Docker Inc., VP of Solutions for Socketplane (sold to Docker) and Enstratius (sold to Dell), and VP of Training & Services at Opscode, where he formalized the training, evangelism, and professional services functions at the firm. Willis also founded Gulf Breeze Software, an award-winning IBM business partner specializing in deploying Tivoli technology for the enterprise. He has authored six IBM Redbooks on enterprise systems management and was the founder and chief architect at Chain Bridge Systems.
