January 28, 2026
It is an exciting time to work in software engineering! AI is here to revolutionize everything, and it seems like anything is possible. The problem with paradigm-shifting technologies is that while anything may be possible, not everything is practical. When automobiles were first mass-produced in the early 20th century, one might have assumed they would take over all ground transportation. Yet here we are in 2025, and trains still have their place.
For AI, the industry is still in an experimental phase, and the realities that will differentiate practical from possible AI solutions at scale have not yet been revealed. However, we can use good old human reasoning to develop heuristics to guide our solutions. Specifically, this article examines how the human brain works and how its function can help us build optimized agentic automation solutions.
Thinking, Fast and Slow, Daniel Kahneman’s landmark 2011 book, explains how the human brain functions in two modes, which Kahneman calls “System 1” and “System 2.” System 1 is the instinctive mode, where tasks are carried out automatically and consistently. System 2 is the reasoning mode, where tasks are considered logically and analytically, then executed based on its conclusions. System 1 is fast, and System 2 is slow, but both are necessary. Because System 1 is faster, the brain is biased toward letting it do as much of the work as possible. The brain is lazy and wants to do as little thinking as possible; this maximizes its efficiency. Not only does the brain route as much as possible to System 1, but when System 2 is called on to solve the same problem repeatedly, that problem becomes a learned task that moves to System 1. Conversely, when a repetitive System 1 task starts failing due to changing conditions, System 2 recognizes the problem and adjusts course. In summary, the brain has two modes: it biases toward the repetitive mode, tries to move everything from the reasoning mode to the repetitive mode, and fixes the repetitive mode when things change.
Humans have been automating business processes through software for decades. For most of that time, the only “mode” of automation has been one that matches System 1 in the brain: scripted and repeated. Conditional logic allows some variation, but such systems are invariably designed to rely on human intervention for complex analytical tasks and for exception handling when unrecognized conditions occur. Don’t get me wrong: these systems are great! Like System 1 in the brain, they do a lot of work efficiently, repeatably, and predictably. They just didn’t have a System 2 to work with. Until now.
AI gives us System 2 for software-enabled automation. Rather than having to match specific conditions in order to execute logic, LLM-powered AI agents can use language-based reasoning to determine next steps and execute tasks. Using real-time context and the tools at their disposal, these agents can dynamically determine near-infinite processing paths, as opposed to the finite permutations of old-school automation solutions. Considering now these two modes of software-powered automation—repeatable System 1 and AI-reasoned System 2—let’s examine the lessons that Kahneman’s book can teach us about automation by using the human brain as an analogy.
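To make the idea concrete, here is a minimal sketch of what this two-mode routing can look like: deterministic handlers cover the known, repeatable task types, and an LLM-backed agent is invoked only when no handler applies. All of the names here (route, call_llm_agent, the task types) are illustrative placeholders, not any specific product’s API.

```python
# Minimal sketch of two-mode automation: deterministic "System 1" handlers
# cover known cases; an LLM-backed "System 2" agent handles everything else.
# All names are illustrative, not a real framework's API.

from typing import Callable, Optional


def currency_conversion(task: dict) -> str:
    # System 1: fixed, repeatable logic (rate lookup, simple math).
    return f"converted {task['amount']} {task['currency']} to USD"


def receipt_upload(task: dict) -> str:
    # System 1: scripted file handling, validation, storage.
    return f"stored receipt {task['file']}"


# Registry of deterministic handlers keyed by task type.
SYSTEM_1_HANDLERS: dict[str, Callable[[dict], str]] = {
    "currency_conversion": currency_conversion,
    "receipt_upload": receipt_upload,
}


def call_llm_agent(task: dict) -> str:
    # System 2: placeholder for an LLM-powered agent. In a real system this
    # would call whichever agent framework or model API your stack uses.
    return f"agent reasoned over task {task['type']!r} using available tools"


def route(task: dict) -> str:
    """Bias toward System 1; fall back to System 2 only when needed."""
    handler: Optional[Callable[[dict], str]] = SYSTEM_1_HANDLERS.get(task["type"])
    if handler is not None:
        return handler(task)          # fast, predictable, cheap
    return call_llm_agent(task)       # slow, flexible, non-deterministic


if __name__ == "__main__":
    print(route({"type": "currency_conversion", "amount": 100, "currency": "EUR"}))
    print(route({"type": "policy_exception_review"}))
```

The bias mirrors the brain’s: the cheap, predictable path is tried first, and the expensive, flexible path is reserved for whatever the cheap path cannot handle.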
Given the sophistication and flexibility of LLM technology, it is tempting to have it do all the work. The tech industry thrives on innovation, and it might seem innovative to throw the old stuff away and start anew. However, millions of years of cerebral evolution teach us that would be a mistake. Repeatable automation may seem limited or boring in light of AI advancements, but it works. It is efficient. It is predictable and deterministic.
Part of the charm of AI agents is their unpredictability: the probabilistic capabilities that let them handle unexpected conditions are, by their nature, non-deterministic. Just as Kahneman shows that the brain works in two modes, so must agentic automation solutions. This is good news! All the work you have done automating business processes for the past several years doesn’t need to be refactored into AI agents.
Consider a company’s expense approval process. It is likely that many steps in the process—uploading receipts, currency calculations, single-field policy checks—have already been automated. Other steps—extracting data from receipt images, full compliance checks, final approval—have not been. By keeping the existing automations, organizations can focus their agentic solutions on the remaining, more cognitively complex tasks.
But wait, there is more. If the human brain routes as many tasks as possible to System 1, shouldn’t agentic automation do the same? The second lesson we can apply from Kahneman is exactly that: keep as much of your agentic solution as possible in the old mode, where logic is predictable and repeatable, and use agentic reasoning only when needed. A practical opportunity follows from this lesson.
Consider where the “last mile of automation” exists in your current automations, the ones that work entirely through System 1. What are the most common failure or exception scenarios? How could agentic reasoning solve those problems? There is a strong chance you can identify a low-effort, high-yield opportunity to apply agentic automation with a high ratio of repeatable to reasoned functionality. That is the recipe for efficiency and ROI.
In our expense approval example, extracting data from images is tricky, but it doesn’t require Gen AI-based reasoning; a more deterministic, “System 1”-style approach can likely handle it. Synthesizing all the data to make a judgment on policy compliance, however, requires a higher level of cognition, necessitating System 2 and agentic reasoning. Introducing agentic processing only when needed leads to better performance and efficiency.
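As a rough illustration, with hypothetical function names, fields, and thresholds, the expense flow might look like this: deterministic parsing and single-field rules handle the bulk of receipts, and the agent is called only when the rules cannot decide, or when a receipt falls into the “last mile” exception path described above.

```python
# Illustrative sketch of the expense example: deterministic steps do most of
# the work; the LLM agent is invoked only for judgment calls and exceptions.
# Function names, fields, and thresholds are hypothetical.

import re
from typing import Optional


def parse_receipt_text(text: str) -> dict:
    # System 1: deterministic extraction (regexes standing in for parsing
    # OCR output). No generative reasoning required.
    amount = re.search(r"TOTAL\s+(\d+\.\d{2})", text)
    vendor = re.search(r"VENDOR:\s*(\w[\w ]*)", text)
    if not amount or not vendor:
        raise ValueError("unrecognized receipt layout")
    return {"amount": float(amount.group(1)), "vendor": vendor.group(1).strip()}


def simple_policy_check(expense: dict) -> Optional[bool]:
    # System 1: single-field rules. Returns True/False when a rule is
    # decisive, None when contextual judgment is needed.
    if expense["amount"] <= 50:
        return True
    if expense["amount"] > 5000:
        return False
    return None  # ambiguous: escalate


def agent_compliance_review(expense: dict) -> bool:
    # System 2: placeholder for an LLM agent that weighs the full context
    # (trip purpose, vendor history, policy text) before deciding.
    print(f"agent reviewing {expense} against policy documents...")
    return True


def approve_expense(receipt_text: str) -> bool:
    try:
        expense = parse_receipt_text(receipt_text)
    except ValueError:
        # "Last mile": an exception that used to land in a human queue
        # can now be escalated to the agent instead.
        return agent_compliance_review({"raw_text": receipt_text})
    decision = simple_policy_check(expense)
    if decision is not None:
        return decision                      # handled entirely by System 1
    return agent_compliance_review(expense)  # escalate only when needed


if __name__ == "__main__":
    print(approve_expense("VENDOR: Coffee Shop\nTOTAL 12.40"))
    print(approve_expense("VENDOR: Conference Co\nTOTAL 820.00"))
```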
Learning may be the greatest superpower of the human brain. As Kahneman points out, the brain is so biased toward System 1 that when it repeats a task often enough with System 2, the task is learned and imprinted onto System 1. In the software world, this type of rote learning has been implemented in automation applications before, particularly in robotic process automation (RPA). However, the resulting software solution is more like half a brain: System 1 without System 2.
Agentic AI completes the digital brain and allows for dynamic learning. Not only can you use LLM-enabled agents to provide the reasoned logic that uplevels the automation, but you can also use the LLM’s ability to build solutions to create and deploy new workflows into the repetitive automation domain, agentic AI’s System 1. If you utilize LLM-based reasoning within your agentic automation to identify repetitive activities or observe common breakpoints, you can then leverage the LLM’s development capabilities to add new System 1 processes or fix erroneous workflows. This continuum of improvement is the pinnacle of agentic automation’s business value.
Breaking down the compliance checks in the expense approval example, it is probable that patterns would emerge over time that would allow a more deterministic solution to be implemented. At that point, the organization could shift compliance checking to System 1 while considering how to make the final approval itself fully automated through System 2. This progressive approach to automation—start with System 1 tasks, augment with System 2, move System 2 tasks to System 1 over time—is a winning approach for organizations.
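One simple way to picture that promotion loop, again as an assumed sketch rather than a prescribed design: log the agent’s System 2 decisions, and once a pattern recurs consistently, register it as a deterministic System 1 rule so future cases never reach the agent at all.

```python
# Rough sketch of the learning loop: record the agent's System 2 decisions,
# and when the same input keeps producing the same outcome, promote that
# pattern into a deterministic System 1 rule. Threshold and names are
# illustrative assumptions.

from collections import Counter

# Promoted rules: vendors that System 1 can now approve or reject on its own.
SYSTEM_1_RULES: dict[str, bool] = {}

# History of System 2 decisions, keyed by vendor.
DECISION_LOG: dict[str, Counter] = {}

PROMOTION_THRESHOLD = 5  # promote after 5 consistent agent decisions


def record_agent_decision(vendor: str, approved: bool) -> None:
    log = DECISION_LOG.setdefault(vendor, Counter())
    log[approved] += 1
    # Promote only when the outcome is both frequent and unanimous so far.
    if log[approved] >= PROMOTION_THRESHOLD and len(log) == 1:
        SYSTEM_1_RULES[vendor] = approved
        print(f"promoted {vendor!r} -> auto-{'approve' if approved else 'reject'}")


def check_expense(vendor: str) -> bool:
    if vendor in SYSTEM_1_RULES:
        return SYSTEM_1_RULES[vendor]  # learned: now handled by System 1
    approved = True                    # stand-in for the agent's reasoning
    record_agent_decision(vendor, approved)
    return approved


if __name__ == "__main__":
    for _ in range(6):
        check_expense("Conference Co")  # the sixth call never reaches the agent
```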
AI is still new. We are learning every day what works and what doesn’t when it comes to applying AI in the realm of digital business. However, there are guideposts we can follow besides trial and error. If our mission in pursuing artificial intelligence was to humbly try to recreate the magnificence of human intelligence through machines, we should follow closely what we know about the human brain. Applying these three lessons derived from Thinking, Fast and Slow will help you build an optimized foundation for agentic automation and, more importantly, help you get business value from AI as quickly and sustainably as possible.
Matt McLarty is the Chief Technology Officer for Boomi. He works with organizations around the world to help them digitally transform using a composable approach. He is an active member of the global API community, has led global technical teams at Salesforce, IBM, and CA Technologies, and started his career in financial technology. Matt is an internationally known expert on APIs, microservices, and integration. He is co-author of the O'Reilly books Microservice Architecture and Securing Microservice APIs, and co-host of the API Experience podcast. He lives with his wife and two sons in Vancouver, BC.