October 17, 2023
This post is an excerpt from Industrial DevOps: Build Better Systems Faster by Dr. Suzette Johnson and Robin Yeman.
As Industrial DevOps principles continue to scale to cyber-physical systems, several misconceptions have taken hold. The following list, while not exhaustive, covers the misconceptions we encounter regularly when working in cyber-physical system environments; some are also prevalent in software-centric environments, but the typical scale of cyber-physical systems magnifies them. For each misconception, we provide a response and point to the chapter you can read for additional information on the principle that overcomes it.
The first misconception is that Agile teams don't plan. Planning is necessary for any product development effort. When Agile first emerged, it was applied to smaller efforts, which created the perception that longer-term planning was unnecessary. This couldn't be further from the truth. A general lack of experience and training in Agile often leads to poor implementations in which teams mistakenly believe Agile means no planning. In some instances, teams have looked at frameworks like Scrum and concluded that they needed to plan only "sprint by sprint," with no longer-term vision.
This lack of understanding has fed the larger misconception that Agile teams don't plan and has hampered the potential that Agile and DevOps offer for large efforts, especially those involving hardware with its long lead times. Interestingly, when using an Agile approach, teams often feel they do more planning than they did with a traditional product development approach. In some cases, this is likely true: Agile teams plan more. With traditional approaches, the manager did most of the planning and handed it to the workers to implement. With Agile ways of working, the manager provides a vision and road map, but the teams do the detailed planning for the iterations and daily work; they focus on how the work gets done.
For cyber-physical systems, looking at multiple horizons of planning (Industrial DevOps Principle 2) is especially important for the alignment and integration of work across teams and suppliers. The difference is that longer horizons are kept at a coarser level of granularity and are refined further as teams get closer to implementation. The longer horizons provide a vision and forecast of the features that are coming, whereas the daily planning horizon gives the team the opportunity to shift plans as often as necessary. For example, at Tesla, the planning never stops: teams have the autonomy to change their plans in real time, with budgets reallocated daily under AI guidance, an approach called Digital Self-Management.
To read more about planning and dispel this misconception, see Chapter 5: Apply Multiple Horizons of Planning.
Another misconception is that Agile/DevOps efforts are constantly changing requirements, which won't work for cyber-physical systems because of the hardware components. Principle 2 of the Agile Manifesto does state that we "welcome changing requirements, even late in development"; however, there are underlying premises to consider.
First, there are multiple levels of requirements and functionality, which are decomposed to create a backlog of what needs to be done to build the system. It is true that this backlog of work is constantly refined and can be reprioritized. At the lowest levels of planning, the backlog changes more frequently as teams learn more about the complex system they are building. Higher-level requirements tend to change less frequently, especially those related to size, weight, power, and cost. Innovations will still emerge and affect how requirements are defined and prioritized.
In the paper “Overcoming Barriers to Industrial DevOps: Working with the Hardware-Engineering Community,” we highlighted the fact that there are physical constraints when developing cyber-physical systems. One approach to address these constraints is to “look beyond single, specialized models and use systems thinking to build an integrated set of models that allow change of parameters in size, weight, power, and cost as we test solutions and learn by applying short development iterations.” The adoption of digital capabilities, tools, and modular architecture makes change less costly and makes it easier to respond to change.
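To make that integrated-model idea concrete, here is a minimal sketch, assuming a hypothetical design; none of the names, numbers, or budgets come from the paper. The point is that size, weight, power, and cost parameters can be held in code so each short iteration can re-check a changed design against its budgets.

```python
# An illustrative sketch (all names and values are hypothetical) of the
# "integrated set of models" idea: size, weight, power, and cost parameters
# live in code, so every short iteration can re-verify the design.

from dataclasses import dataclass


@dataclass
class Subsystem:
    name: str
    mass_kg: float
    power_w: float
    cost_usd: float


def within_budget(subsystems: list[Subsystem],
                  mass_budget_kg: float,
                  power_budget_w: float) -> bool:
    """Return True if the integrated design fits its mass and power budgets."""
    total_mass = sum(s.mass_kg for s in subsystems)
    total_power = sum(s.power_w for s in subsystems)
    return total_mass <= mass_budget_kg and total_power <= power_budget_w


design = [
    Subsystem("avionics", mass_kg=12.0, power_w=150.0, cost_usd=80_000),
    Subsystem("propulsion", mass_kg=45.0, power_w=60.0, cost_usd=220_000),
]
print(within_budget(design, mass_budget_kg=70.0, power_budget_w=250.0))  # True
```

When a parameter changes, rerunning this check in the same iteration surfaces a budget violation immediately instead of at a late integration review.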
A related misconception is that systems engineering is no longer needed. Yes, we need systems engineering practices; however, what they look like and how they are implemented have evolved over the past couple of decades. In small development efforts, systems engineering work was most likely absorbed by the development team, making that work less visible. As you scale into developing larger systems and systems of systems, there is a greater need for systems engineering practices. In Industrial DevOps, the practices found within the traditional Vee model are not hand-offs from one functional area to another. They are much more integrated, and as a result, these practices have been interwoven into all of the Industrial DevOps principles.
Another misconception is that Agile and DevOps trade quality for speed. Implementing Agile and DevOps practices improves quality; if you're not improving quality, you are doing it wrong. Principle 9 of the Agile Manifesto states, "Continuous attention to technical excellence and good design enhances agility." Technical excellence means continuously attending to the quality of your product: code complexity, automated testing, architecture, flow, and regular user feedback. Quality is built in, and speed and time to market require quality. We learned from the Lean community that small batch sizes can improve quality and reduce overall costs: defects are found earlier because small batches flow through the system faster, and users have more opportunity to engage.
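To illustrate what "quality is built in" can look like in practice, here is a minimal, self-contained sketch of an automated test that could run on every small batch of change in a CI pipeline; the function, values, and test names are hypothetical, not from the book.

```python
# A minimal sketch of an automated quality gate: a unit test that runs on
# every small batch of change, so a defect surfaces within minutes of the
# commit that introduced it. The toy function and thresholds are illustrative.

import unittest


def compute_thrust(throttle: float) -> float:
    """Map a throttle setting in [0, 1] to thrust in newtons (toy model)."""
    if not 0.0 <= throttle <= 1.0:
        raise ValueError("throttle must be within [0, 1]")
    return 4200.0 * throttle  # linear toy model, 4200 N at full throttle


class ComputeThrustTest(unittest.TestCase):
    def test_full_throttle_gives_max_thrust(self):
        self.assertAlmostEqual(compute_thrust(1.0), 4200.0)

    def test_out_of_range_throttle_is_rejected(self):
        with self.assertRaises(ValueError):
            compute_thrust(1.5)


if __name__ == "__main__":
    unittest.main()
```

Because each batch is small, a failing gate points to a small, recent change, which is exactly how small batches reduce the cost of finding defects.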
The notion that Agile means no documentation is a frequent misconception that has existed for many years, regardless of the product or the size of the effort. Part of this misconception stems from not reading the Agile Manifesto in full. It may also stem from frustrations early on, when documentation was the primary product delivered through traditional waterfall ways of working. The Agile Manifesto states, "We value working software over comprehensive documentation." Yes! That is so true. However, if you read on, it says, "While there is value in the items on the right [documentation], we value the items on the left more [working software]."
Some level of documentation is necessary. How much documentation is needed, and in what form, varies based on the system you are building: the documentation for a phone app and the documentation for a missile or the next spacecraft would differ in both amount and form. The important thing is that we don't measure progress by how much documentation has been written but by what we can demonstrate in terms of product, so we can get feedback and learn from it. Understand what level of documentation is needed for your system and how to use digital tools to ensure maintainability as the system evolves.
Another misconception is that with Agile we no longer need leaders or managers. Early on, some of the Agile frameworks did not address how leaders and managers fit into the process. This is unfortunate, as it fed the notion that we either don't need them or that they provide no value in this new environment. As we define new ways of working, we do need to rethink how we work together, how we create a culture of psychological safety, and how we create a learning culture. Leadership is fundamental to the successful delivery of products and services. Build leaders at all levels of the organization. In Turn the Ship Around! A True Story of Turning Followers into Leaders, L. David Marquet explains that competence and organizational clarity are the pillars needed to push decision-making authority to the right levels within the organization. This change in mindset is driven and modeled by leadership.
Yes, we want to release early and often. The goal is to deliver value in the shortest sustainable lead time. Companies like Amazon and Google release thousands of times per day and deploy into operations continuously. That works well for their environment, their product, and their customers. But that is not the case everywhere. Each iteration of work (each backlog item, when complete) should meet all the desired quality standards and acceptance criteria such that it works as intended. This means it could be released, where releasing that often is feasible and desired for your products and users.
However, in cyber-physical systems, while you work with this mindset, you might be releasing into a large system-of-systems environment for full integration. The goal is to identify what can be learned earlier. Maybe the hardware isn't ready, but there is a simulated environment you can learn in. The system cannot be released until the full system, hardware and software, has been fully integrated, meets safety requirements, and successfully passes the full launch checklist.
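As a sketch of learning in a simulated environment before the hardware exists (all names and values below are hypothetical, not from the book), control logic can be exercised against a software stand-in for a real sensor:

```python
# A hypothetical sketch: validate decision logic against a simulated sensor
# long before flight hardware is available. Names and values are illustrative.

class SimulatedAltimeter:
    """Software stand-in for a hardware altimeter."""

    def __init__(self, readings):
        self._readings = iter(readings)

    def read_m(self) -> float:
        return next(self._readings)


def should_deploy_parachute(altimeter, deploy_below_m: float = 300.0) -> bool:
    """Toy decision logic we want to validate before any hardware exists."""
    return altimeter.read_m() < deploy_below_m


# Exercise the logic against a simulated descent profile.
sim = SimulatedAltimeter([1200.0, 800.0, 450.0, 250.0])
decisions = [should_deploy_parachute(sim) for _ in range(4)]
print(decisions)  # [False, False, False, True]
```

Because the logic depends only on the sensor's interface, the same code can later run against the real altimeter, so the early learning carries forward into full integration.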
According to Principle 6 of the Agile Manifesto, “The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.” Research emphasizes the value of face-to-face conversations. A great deal of communication is nonverbal, and building relationships and trust happens more easily when we have shared experiences.
This does not mean that everyone has to be colocated, or colocated all the time. Distributed teams are common now, and as you scale across teams of teams, the workforce will span geographical locations. With today's emerging digital technologies, we are well positioned to be effective when distributed; there are many more tools at hand to make distributed and virtual teams work.
Be prepared and intentional about communication and relationship building when teams are not colocated. According to McKinsey research,
“Without the seamless access to colleagues afforded by frequent, in-person team events, meals, and coffee chats, it can be harder to sustain the kind of camaraderie, community, and trust that comes more easily to co-located teams. It also takes more purposeful effort to create a unified one-team experience, encourage bonding among existing team members, or onboard new ones, or even to track and develop the very spontaneous ideas and innovation that makes agile so powerful to begin with.”
Agile and DevOps may have started in software, and Lean may have started in manufacturing, but the fact that these practices started there does not mean they don't apply in other functions. Agile and DevOps began in software as developers looked for better ways to manage their work and respond to changing priorities and customer needs. Lean began in manufacturing as a mechanism to reduce waste and improve quality. Over the years, these practices have permeated other functional areas, and with the evolution of digital tools and capabilities, they have extended into hardware and manufacturing. It is important to apply the Industrial DevOps principles and mindset across the entire value stream, ensuring teams continuously improve flow and the delivery of value, which requires more than software when building cyber-physical systems.
Another misconception is that Agile/DevOps does not work for safety-critical systems. This misconception is likely related to others, such as those about quality and planning: Agile/DevOps development efforts have improved quality, and planning still happens. In fact, thanks to improved built-in quality and system traceability, Agile/DevOps has the opportunity to improve the safety of systems, as noted in an IEEE Software article:
“High dependability software systems must be developed and maintained using rigorous safety-assurance practices. By leveraging traceability, we can visualize and analyze changes as they occur, mitigate potential hazards, and support greater agility.”
Based on experiences and research from the industry, we proposed in our 2022 Industrial DevOps paper to the hardware engineering community that “applying Industrial DevOps principles should be required when building safety-critical cyber-physical solutions to lower risk and improve quality.” Furthermore, Agile/DevOps has already been implemented in safety-critical systems and is increasingly scaling and maturing. You will need to ensure your organization has overcome the other misconceptions for this to be effective.
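To make the traceability idea from the IEEE quote concrete, here is a minimal, hypothetical sketch in which each automated test declares the safety requirement it verifies, so gaps in assurance evidence are easy to detect; the requirement IDs and test names are illustrative only.

```python
# A hypothetical sketch of requirement-to-test traceability: changes can be
# traced from a safety requirement to the automated evidence that verifies it.
# All IDs and names are illustrative.

REQUIREMENTS = {
    "SAFETY-101": "Thrust must cut off within 50 ms of an abort command.",
    "SAFETY-102": "Parachute deploys below 300 m during descent.",
}

# Each automated test names the requirement it verifies.
TEST_TRACE = {
    "test_abort_cutoff_latency": "SAFETY-101",
    "test_parachute_deploy_altitude": "SAFETY-102",
}


def untraced_requirements() -> set[str]:
    """Requirements with no automated test tracing to them: a gap in evidence."""
    return set(REQUIREMENTS) - set(TEST_TRACE.values())


print(untraced_requirements())  # set() -> every safety requirement has coverage
```

Run on every change, a check like this turns "mitigate potential hazards" from a periodic audit into a continuous, automated gate.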
Related is the misconception that iterative development requires fully integrating and testing the complete system every two weeks. The capabilities of cyber-physical systems evolve continuously as the product is built, and given that these solutions include both software and hardware, a complete system will not be fully integrated and tested within a two-week period. Instead, the focus is on what we can learn in the time box: capabilities are decomposed such that every two weeks there is better visibility into what is working, what has been successfully integrated, and, based on observation and data, what next steps to take. This approach provides ongoing learning in short time frames, so when course correction is needed, the impact on schedule and cost is smaller than if the problem were discovered six or twelve months later, as has often been the case with traditional development approaches.
Dr. Suzette Johnson is an award-winning author who has spent most of her career in the aerospace defense industry working for Northrop Grumman Corporation. As the enterprise Lean/Agile transformation lead, she launched the Northrop Grumman Agile Community of Practice and the Lean/Agile Center of Excellence. She has supported over one hundred enterprise, government, and DoD organizations in transitioning to and maturing Lean-Agile principles and engineering development practices. She has also trained and coached over four thousand individuals on Lean/Agile principles and practices and delivered more than one hundred presentations on Lean/Agile at conferences nationally and abroad. In her current role as Northrop Grumman Fellow and Technical Fellow Emeritus, she continues to actively drive the adoption of Lean/Agile principles with leadership at the portfolio level and within cyber-physical solutions, specifically in the space sector. As a mentor, coach, and leader, she launched the Women in Computing Johns Hopkins University Chapter; the Women in Leadership Development program; the Northrop Grumman Lean-Agile Center of Excellence; and the NDIA ADAPT (Agile Delivery for Agencies, Programs, and Teams) working group. She received a Doctorate of Management from the University of Maryland with a dissertation investigating the impact of leadership styles on software project outcomes in traditional and Agile engineering environments. She is also a Certified Agile Enterprise Coach and Scaled Agile Program Consultant/SPCT.
Robin Yeman is an award-winning author who spent twenty-six years at Lockheed Martin in roles leading up to senior technical fellow, building large systems ranging from submarines to satellites. She led the Agile community of practice supporting a workforce of 120,000 people. Her initial experience with Lean practices began in the late '90s. In 2002, she had the opportunity to lead her first Agile program with multiple Scrum teams; after just a couple of months, she was hooked and never looked back. Over the next two decades, she led and supported Agile transformations for intelligence, federal, and Department of Defense organizations, each more exciting and challenging than the last. In 2012, she had the opportunity to extend those Agile practices into DevOps, which added extensive automation and tightened feedback loops, delivering even larger results. Currently, she is the Space Domain Lead at the Software Engineering Institute at Carnegie Mellon University. She is also pursuing a PhD in Systems Engineering at Colorado State University, where her contribution focuses on demonstrating empirical data on the benefits of implementing Agile and DevOps for safety-critical cyber-physical systems.