February 9, 2026
Self-driving cars operate at the sharp end of cyber-physical safety, where even minor mistakes can instantly transform two tons of steel into a deadly hazard. The autonomous vehicle industry has experienced catastrophic incidents resulting in fatal and life-changing injuries—Uber in 2018, Cruise in 2023. Yet analysts project a $500-billion mobility market by 2030 for companies that achieve safe and scalable automation.
The essential question facing leaders: Where exactly do we place human oversight in automated systems? And how do we ensure those human-machine interfaces remain reliable, clear, and effective as complexity grows?
According to the recent paper “Leading the Human-AI Revolution,” published in the Fall 2025 Enterprise Technology Leadership Journal, the answer is non-negotiable: For safety-critical domains, human-in-the-loop and human-on-the-loop oversight are immutable.
Building and operating safety-critical cyber-physical systems (CPS) with AI integration presents a formidable leadership challenge that extends beyond technology. Incorporating AI in the design, development, deployment, and operations of these systems involves complex interactions between technology and social systems—people, processes, culture, and organizations.
While AI can greatly enhance system development through automation, adaptability, and data-driven insights, it can also disrupt established engineering practices, existing roles, and cultural norms. The challenge isn’t just technical—it’s deeply human.
As AI becomes increasingly embedded in CPS—from autonomous vehicles to healthcare monitoring systems to factory robots—leaders must deepen their focus on human interactions throughout development and consider the human-AI experience in operations.
The strategy for managing risk in AI integration is to match the level of human oversight to the level of operational risk. Two primary models achieve this:
Human-in-the-Loop (HITL): For high-risk or critical tasks, AI cannot operate without direct human input, ensuring a human is accountable for the final action. The human makes the decision; AI provides information and recommendations.
Human-on-the-Loop (HOTL): For lower-risk, supervised autonomous tasks, AI operates independently while a human actively monitors and can intervene or override the system at any time. The AI makes decisions, but humans maintain veto power.
Selecting the correct model is key to ensuring both operational control and adaptability. These strategies preserve accountability, enable real-time decision-making, and ensure system reliability.
Successfully integrating HITL and HOTL requires designing workflows that blend human expertise with AI-driven automation, alongside user interfaces that foster collaboration through transparency and build trust between humans and intelligent systems.
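To make the distinction concrete, here is a minimal Python sketch of how an oversight policy might route AI-proposed actions. The risk threshold, the example actions, and the request_human_decision and notify_supervisor hooks are illustrative assumptions, not a prescribed design:

```python
from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str
    risk_score: float  # 0.0 (benign) to 1.0 (safety-critical)


HITL_THRESHOLD = 0.7  # illustrative cutoff; set per domain risk analysis


def request_human_decision(action: ProposedAction) -> bool:
    """Stub: a real system would surface the recommendation, sensor
    context, and confidence to a trained operator and block for an answer."""
    print(f"[HITL] Human approval required: {action.description}")
    return True  # simulated approval


def notify_supervisor(action: ProposedAction) -> None:
    """Stub: a real system would push to a monitoring console that
    exposes an override control."""
    print(f"[HOTL] Supervisor notified: {action.description}")


def execute_with_oversight(action: ProposedAction) -> str:
    """Route an AI-proposed action through the matching oversight model."""
    if action.risk_score >= HITL_THRESHOLD:
        # HITL: the AI only recommends; a human is accountable for the call.
        if request_human_decision(action):
            return "executed with human approval"
        return "rejected by human"
    # HOTL: the AI proceeds, but a supervisor can intervene at any time.
    notify_supervisor(action)
    return "executed under human supervision"


print(execute_with_oversight(ProposedAction("adjust insulin dose", 0.95)))
print(execute_with_oversight(ProposedAction("reorder cleaning supplies", 0.2)))
```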
Waymo’s approach to fleet response exemplifies disciplined human-AI boundaries. Their Fleet Response Operations (FRO) team monitors vehicles remotely, with authority to guide vehicles through uncertain scenarios. When a Waymo vehicle encounters an ambiguous situation—construction zones with unclear signage, emergency vehicle presence—it can request human assistance without stopping in traffic.
The FRO specialist sees real-time sensor feeds, provides high-level guidance (“proceed cautiously through the construction zone using the rightmost lane”), and the vehicle executes using its own perception. This is HOTL done right: The human doesn’t micro-manage steering or throttle but provides strategic context the AI lacks.
Result: Waymo has logged millions of autonomous miles with significantly better safety records than competitors.
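The pattern is straightforward to express in code. The sketch below is not Waymo's actual interface; the class and method names are hypothetical, but they capture the HOTL shape the fleet-response example describes: keep moving safely, request strategic context, and let onboard perception validate any advice before acting on it.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Guidance:
    """High-level advice from a remote specialist: context, not control."""
    hint: str
    preferred_lane: int


class Vehicle:
    def drive_cautiously(self) -> None:
        print("degraded to cautious local policy; not stopping in traffic")

    def request_remote_guidance(self, scene: str) -> Optional[Guidance]:
        print(f"requesting fleet-response guidance for: {scene}")
        return Guidance(hint="use rightmost lane through work zone",
                        preferred_lane=3)

    def perception_confirms_safe(self, lane: int) -> bool:
        return True  # stub: onboard perception validates the suggestion

    def plan_through(self, lane: int) -> None:
        print(f"executing own trajectory via lane {lane}")

    def handle_ambiguous_scene(self, scene: str) -> None:
        # Keep moving safely while guidance is pending.
        self.drive_cautiously()
        guidance = self.request_remote_guidance(scene)
        # The human supplies strategic context; the vehicle's own perception
        # and planning still execute, and may refuse, the suggestion.
        if guidance and self.perception_confirms_safe(guidance.preferred_lane):
            self.plan_through(guidance.preferred_lane)
        else:
            self.drive_cautiously()  # never obey remote guidance blindly


Vehicle().handle_ambiguous_scene("construction zone with unclear signage")
```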
Contrast this with Uber’s 2018 fatal incident in Tempe, Arizona. The system design had critical flaws:
- The self-driving system detected the pedestrian seconds before impact but repeatedly reclassified her and never alerted the safety operator.
- The vehicle's automatic emergency braking had been disabled during autonomous operation, leaving intervention entirely to the operator.
- A single operator served as passive backup, with no attention monitoring to ensure vigilance.
Consequence: Fatality, program shutdown, and a landmark NTSB investigation that highlighted organizational and technical deficiencies.
The difference? Waymo designed for effective human oversight. Uber designed with humans as passive backup—and when that backup failed, the system failed catastrophically.
Healthcare provides another critical example. Providers monitor patients remotely via wearables while leveraging AI to analyze data trends, reducing the need for in-person visits and minimizing exposure to secondary infections.
AI facilitates autonomous decision-making and anomaly detection, improving operational resilience and reducing reliance on manual intervention. But the key word is “reducing,” not “eliminating.”
When AI detects anomalies in heart rhythms or blood glucose levels, it alerts human clinicians who make treatment decisions. The AI doesn’t prescribe medication or adjust treatment plans autonomously—that remains firmly in human control, with appropriate medical oversight and accountability.
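A minimal sketch of that HITL boundary follows, with invented thresholds and function names standing in for validated clinical models and real paging infrastructure:

```python
from dataclasses import dataclass


@dataclass
class Vitals:
    patient_id: str
    heart_rate_bpm: int
    glucose_mg_dl: float


def detect_anomaly(v: Vitals) -> bool:
    # Illustrative thresholds only; real models are trained and
    # validated under clinical oversight.
    return v.heart_rate_bpm > 120 or v.glucose_mg_dl < 70


def page_clinician(v: Vitals, reason: str) -> None:
    print(f"ALERT {v.patient_id}: {reason}; clinician review required")


def monitor(v: Vitals) -> None:
    if detect_anomaly(v):
        # HITL boundary: the system escalates to a human clinician.
        # It never prescribes medication or changes a treatment plan itself.
        page_clinician(v, reason="anomalous vitals detected")


monitor(Vitals("pt-001", heart_rate_bpm=132, glucose_mg_dl=64.0))
```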
Collaborative robots (cobots) on factory floors demonstrate how strategic human-AI boundaries enhance both productivity and safety. At companies like Boeing and BMW, cobots execute precision drilling and fastening while human assemblers guide tasks and validate quality.
Effective implementation includes:
- Well-defined operational boundaries and safety zones that separate robot and human work areas
- Force and speed limits that keep the robot safe to work beside
- Clear authority structures governing task handoffs between humans and cobots
- Workforce training so technicians understand cobot behavior and know when to intervene
The result: Enhanced precision, streamlined workflows, reduced strain on human technicians, and significantly lower workplace injury rates.
Contrast with flawed implementations: Some factories deployed robots with insufficient safety boundaries, inadequate training, or unclear authority structures. Workers suffered injuries when robots moved unexpectedly, or when unclear protocols led to humans entering robot work zones without proper safeguards.
Successful human-machine collaboration hinges on well-defined operational boundaries, proactive safety measures, and workforce training—not as best practices, but as essential safeguards for human life, corporate integrity, and long-term operational sustainability.
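One widely used safeguard behind such boundaries is speed and separation monitoring, described in ISO/TS 15066, in which the robot slows as a person approaches and stops inside a protective zone. The sketch below illustrates the idea; the distances and speeds are invented for illustration, not values from a real risk assessment:

```python
def allowed_speed_mm_s(human_distance_mm: float) -> float:
    """Scale robot speed down as a human approaches; stop entirely
    inside the protective zone. All constants are illustrative."""
    PROTECTIVE_STOP_MM = 500   # inside this distance, the robot must stop
    FULL_SPEED_ZONE_MM = 2000  # beyond this distance, full speed is allowed
    FULL_SPEED_MM_S = 1500.0

    if human_distance_mm <= PROTECTIVE_STOP_MM:
        return 0.0  # protective stop: a human is in the work zone
    if human_distance_mm >= FULL_SPEED_ZONE_MM:
        return FULL_SPEED_MM_S
    # Linear slowdown between the protective and full-speed zones.
    fraction = (human_distance_mm - PROTECTIVE_STOP_MM) / (
        FULL_SPEED_ZONE_MM - PROTECTIVE_STOP_MM
    )
    return FULL_SPEED_MM_S * fraction


assert allowed_speed_mm_s(300) == 0.0      # stop when a human is close
assert allowed_speed_mm_s(2500) == 1500.0  # full speed when the zone is clear
```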
Security in cyber-physical systems is inseparable from safety. A system with autonomous capabilities can be misled or exploited in ways that have direct kinetic consequences—a vehicle misrouting, a drone misfiring, a robotic system overextending.
According to OWASP’s work on agentic AI security, the attack surface in AI-enabled systems includes:
- Memory poisoning and agent communication poisoning
- Tool misuse and privilege compromise
- Intent breaking and goal manipulation
- Overwhelming or manipulating the humans providing oversight
Human oversight serves as a critical security layer. Humans can detect when AI behavior seems “off,” when confidence levels don’t match observable reality, when patterns suggest system compromise rather than legitimate operation.
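Part of that oversight layer can be systematized: decisions where the model's stated confidence diverges from independent observable signals are exactly the ones to route to a person. A sketch, with hypothetical fields and thresholds:

```python
from dataclasses import dataclass


@dataclass
class Decision:
    action: str
    model_confidence: float  # the AI's own stated confidence, 0.0 to 1.0
    sensor_agreement: float  # fraction of independent signals that agree


def needs_human_review(d: Decision) -> bool:
    """Flag decisions where stated confidence and observable reality
    diverge, a pattern that can indicate drift, spoofed inputs, or
    compromise rather than legitimate operation. Thresholds are
    illustrative assumptions."""
    overconfident = d.model_confidence > 0.9 and d.sensor_agreement < 0.5
    uncertain = d.model_confidence < 0.6
    return overconfident or uncertain


# A high-confidence action contradicted by independent sensors is suspicious.
print(needs_human_review(Decision("reroute around obstacle", 0.97, 0.3)))  # True
```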
Leaders responsible for safety-critical systems must answer these questions:
On Human Oversight:
- Where does a human make or veto each consequential decision?
- Is the oversight model, HITL or HOTL, matched to the operational risk of each task?
On Safety and Security:
- How would we detect that AI behavior is anomalous or that the system has been compromised?
- Can a human intervene quickly enough for the intervention to matter?
On Training and Capability:
- Are operators trained to intervene effectively, not just to watch?
- How do we guard against vigilance decay in monitoring roles?
On Life Cycle Management:
- How is human oversight revalidated as models are retrained and the system evolves?
- How are human-AI interactions documented for accountability?
To stay agile and competitive in the rapidly evolving AI landscape, technology leaders must collaborate closely with cross-functional leadership teams to ensure alignment with the organization's strategic vision.
The diverse leadership team should include representatives from:
- Engineering and operations
- Security and risk management
- Legal, compliance, and regulatory affairs
- Human resources and workforce development
- The business functions the AI-enabled system serves
Working together with shared vision, leadership teams can harness AI’s full potential while ensuring alignment with strategic goals and meeting workforce needs. This approach not only accelerates learning and innovation but prevents “shadow AI”—unregulated AI use operating outside coordinated business-technology leadership.
Understanding where different AI implementations fall on the agency-risk spectrum helps leaders make appropriate oversight decisions:
High Agency, High Risk: AI as autonomous agent making end-to-end decisions within predefined domains. Example: Inventory-replenishment bot automatically ordering stock. Requires robust monitoring, clear boundaries, and strong escalation paths.
High Agency, Low Risk: AI as collaborator/partner in iterative co-creation. Example: Writer leveraging AI to draft story segments, then editing and providing feedback. Human maintains final authority but benefits from AI assistance.
Low Agency, High Risk: AI as decision support providing recommendations. Example: Medical diagnosis system suggesting possible conditions. Human makes final diagnosis and treatment decisions.
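One way to operationalize the spectrum is an explicit lookup from agency and risk to an oversight posture, so no deployment skips the question. This sketch follows the three examples above and adds an assumed low-agency, low-risk quadrant for completeness:

```python
from enum import Enum


class Level(Enum):
    LOW = "low"
    HIGH = "high"


# (agency, risk) -> required oversight posture. The first three entries
# follow the article's examples; the last quadrant is an assumption.
OVERSIGHT = {
    (Level.HIGH, Level.HIGH): "autonomous agent: monitoring, hard boundaries, escalation paths",
    (Level.HIGH, Level.LOW): "collaborator: human keeps final editorial authority",
    (Level.LOW, Level.HIGH): "decision support: human makes the final call (HITL)",
    (Level.LOW, Level.LOW): "assistive tool: routine review is sufficient",
}


def required_oversight(agency: Level, risk: Level) -> str:
    return OVERSIGHT[(agency, risk)]


print(required_oversight(Level.LOW, Level.HIGH))
```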
If you’re leading development or operations of safety-critical systems, recognize that AI integration is not purely a technical decision—it’s a socio-technical transformation requiring careful attention to:
Human factors: How will people interact with AI systems? What cognitive demands will they face? How do we prevent vigilance decay in monitoring roles?
Organizational change: What new roles and skills are needed? How do we train the existing workforce? What career paths exist for people in hybrid human-AI roles?
Cultural evolution: How do we build trust in AI systems while maintaining healthy skepticism? How do we celebrate both AI successes and human interventions that prevent AI failures?
Regulatory alignment: How do we demonstrate to regulators that we’ve thought through human oversight? How do we document human-AI interactions for accountability?
For safety-critical domains—autonomous vehicles, healthcare monitoring, industrial robotics, aviation systems, defense applications—human-in-the-loop and human-on-the-loop oversight are immutable requirements.
This serves as a guiding lens for identifying where trust, control, regulation, and governance must be strong, especially in high-risk, high-agency environments where the cost of failure is significant and institutional responsibility is paramount.
The organizations that thrive won’t be those that eliminate human oversight in pursuit of efficiency. They’ll be those that thoughtfully design human-AI partnerships that leverage the strengths of both—AI’s speed, consistency, and pattern recognition combined with human judgment, ethical reasoning, and contextual understanding.
When two tons of steel is moving at highway speeds, when a surgical robot is operating on a patient, when an industrial system is handling hazardous materials—human oversight isn’t a nice-to-have. It’s the difference between responsible innovation and catastrophic failure.
Design for it. Budget for it. Train for it. Make it non-negotiable.
This blog post is based on “Leading the Human-AI Revolution: Strategic Leadership Guidance for Effective Human-AI Interactions for Development and Operations of Safety-Critical Cyber-Physical Systems” by Dr. Suzette Johnson, Robin Yeman, Steve Wilson, Kim Harrison, and Christine Hudson, published in the Enterprise Technology Leadership Journal Fall 2025.
Managing Editor at IT Revolution working on publishing books and guidance papers for the modern business leader. I also oversee the production of the IT Revolution blog, combining the best of responsible, human-centered content with the assistance of AI tools.