January 26, 2021
This post was adapted from the 2018 DevOps Enterprise Forum white paper Industrial DevOps by Dr. Suzette Johnson, Diane LaFortune, Dean Leffingwell, Harry Koehnemann, Dr. Stephen Magill, Dr. Steve Mayner, Avigail Ofer, Anders Wallgren, Robert Stroud, and Robin Yeman.
As DevOps continues to challenge the status quo and improve business outcomes for software systems, many of the world’s larger enterprises also need to identify how to scale these practices across large, complex systems composed of hardware, firmware, and software. This is where Industrial DevOps comes into play.
The ability to iterate and deploy faster allows companies to adapt to changing needs, reduce cycle time for delivery, increase value for money, improve transparency, and leverage innovations.
However, there is an industry-wide misconception that this form of rapid iteration and improved flow applies only to software or small applications and systems.
This post will provide an extended definition for DevOps as it applies to large, complex cyber-physical systems, and offer some recommendations on how to effectively leverage continuous delivery and DevOps in these systems.
Industrial DevOps is the application of continuous delivery and DevOps principles to the development, manufacturing, deployment, and serviceability of significant cyber-physical systems to enable these programs to be more responsive to changing needs while reducing lead times.
This practice focuses on building a continuous delivery pipeline that provides a multi-domain flow of value to the users and stakeholders of those deployed systems.
The bodies of knowledge that inform Industrial DevOps principles and practices include DevOps, Lean manufacturing, Lean product development, Lean startup, systems thinking, and scaled Agile development.
Applying DevOps to complex, cyber-physical systems can be summarized in nine recommendations (or principles):

1. Organize around the flow of value.
2. Apply multiple planning horizons.
3. Base system decisions on objective evidence.
4. Reduce batch size.
5. Architect for modularity and scale.
6. Iterate rapidly for fast feedback.
7. Apply cadence and synchronization.
8. “Continu-ishly” integrate the entire system.
9. Apply test-driven development.

Below, we’ll briefly touch on each of these recommendations. You can also download the full Industrial DevOps white paper for free here.
In pursuit of efficiency, many large programs have elected to organize around activities such as program management, systems, software, firmware, hardware, and testing.
Organizing this way, however, has proven quite problematic when it comes to actual delivery of the solution. Delays and cost overruns have placed a significant burden on taxpayers.
To deliver systems that provide desirable business outcomes, we need to organize around the value stream and the end-to-end steps that are required to deliver value.
When employing DevOps in large, complex systems, the first step is to define the value stream(s).
In the past, teams would be organized around functional areas such as systems engineering, hardware, software, firmware, and testing with handoffs from one area to the next. Teams organized around such activities are locally optimized and cannot deliver any end-to-end business outcome to stakeholders on a regular cadence, leading to a long cycle time for development as well as a long, costly recovery time.
With Industrial DevOps, teams are organized around the end-to-end value stream. Consider an autonomous drone as an example: the best value stream organization would be around independent capabilities, such as a flight camera team. Building a flight camera requires electrical engineering, hardware engineering, systems engineering, software development, and testing. Organized this way, the team removes activity-based handoffs and focuses on a specific business outcome: providing visibility of the battleground.
DevOps focuses on short-term planning horizons with iterations typically running no longer than a few weeks. However, large, complex systems demand longer planning cycles that are approached with the notion of multiple planning horizons.
These large-scale solutions need multiple perspectives, from a long-term roadmap view down to near-term iteration and program increment objectives.
With multiple planning horizons, each planning level provides time boundaries that enable teams to compare “plan” against “actual,” allowing the solution to course correct based on empirical evidence.
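As a sketch of that plan-versus-actual comparison (the names, numbers, and threshold below are illustrative assumptions, not from the white paper), a team might roll up completed work per time box and flag the plan for correction when delivery falls below an agreed fraction of what was planned:

```python
from dataclasses import dataclass

@dataclass
class Increment:
    """One time-boxed planning increment (e.g., a two-week iteration)."""
    name: str
    planned: int    # work items planned for this time box
    completed: int  # work items actually accepted

def needs_course_correction(history: list[Increment], threshold: float = 0.8) -> bool:
    """Signal that the plan should be reworked when actual delivery falls
    below the threshold fraction of the plan across recent increments."""
    planned = sum(i.planned for i in history)
    completed = sum(i.completed for i in history)
    return planned > 0 and (completed / planned) < threshold

history = [
    Increment("PI-1 / Iteration 1", planned=10, completed=9),
    Increment("PI-1 / Iteration 2", planned=10, completed=6),
]
print(needs_course_correction(history))  # True: 15/20 = 0.75 < 0.8
```

The point is not the arithmetic but the cadence: because each time box produces an objective plan-versus-actual measurement, course corrections happen on empirical evidence rather than end-of-program surprises.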
Throughout development, the system is built in time-boxed increments. Each increment provides an integration point for demonstrating objective evidence of the feasibility of the solution in process.
That evidence is provided through a demonstration of working features of the system. For hardware and embedded systems, the early demonstrations may be limited to mathematical formulas, 3D models, walking skeletons (tiny implementations of the system that perform an end-to-end function), or prototypes that prove a specific element of the design is viable.
Improvements are made until the final production version is fully tested and ready to deploy.
Because these reviews are performed routinely on a set cadence (for example, every two weeks), system development progress can be measured, assessed, and evaluated by the relevant stakeholders frequently, predictably, and throughout the solution development life cycle. Faults can be found and corrected in small batches when the cost of change is low.
The transparency of this process provides the financial, technical, and fitness-for-purpose governance needed to ensure that the continuing investment produces a commensurate return.
Our design and engineering processes need to scale to the level of complexity inherent in large cyber-physical systems. Architectural decisions can support this by emphasizing modularity and serviceability. Modularity refers to component decoupling with a focus on the smallest unit of functionality. Serviceability refers to a focus on lowering the cost and time required to alter functionality both pre- and post-deployment.
Architecture modularity significantly impacts DevOps goals to continuously develop, integrate, deploy, and release value. Modular component-based architectures communicate in a consistent way through well-defined interfaces and thereby reduce dependencies between components, as displayed in the figure below. Such components can be independently tested, released, or upgraded.
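A minimal sketch of such a well-defined interface, in Python (the `ImageSource` contract and `SimulatedCamera` are hypothetical examples, not from the white paper): consumers depend only on the interface, so any implementation can be tested, released, or upgraded independently.

```python
from abc import ABC, abstractmethod

class ImageSource(ABC):
    """Well-defined interface: consumers depend only on this contract,
    not on any concrete camera hardware or driver."""

    @abstractmethod
    def capture_frame(self) -> bytes:
        """Return one raw image frame."""

class SimulatedCamera(ImageSource):
    """Stand-in implementation usable before flight hardware exists."""
    def capture_frame(self) -> bytes:
        return b"\x00" * 64  # fixed synthetic frame

def frame_size(source: ImageSource) -> int:
    """Written against the interface, so implementations are swappable."""
    return len(source.capture_frame())

print(frame_size(SimulatedCamera()))  # 64
```

When the real camera arrives, only a new `ImageSource` implementation is added; nothing that consumes the interface has to change.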
To enable flow, fast feedback, and continuous learning, it is important to work in small batch sizes—both in terms of the size of the component or feature and the unit of change.
Small batches increase both the rate of technical exchange and the flow of work, enabling rapid feedback. Working with small components often reduces complexity and enhances transparency of the achieved results.
Short iteration cycles provide stakeholders with regular reviews of results against a defined set of acceptance criteria, enable the opportunity to provide feedback, and help identify integration challenges earlier in the development life cycle.
Working in short iterations throughout the development of the component or architecture enables agility and regular validation that the component is satisfying downstream expectations. Using this faster feedback model improves understanding of both the system being built and the requirements of system users.
Iterative development is enabled by model-based systems engineering and well-defined interfaces. Model-based systems engineering, A/B testing, and dark launches provide the ability to make changes and rapidly understand the impact of those changes.
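A dark launch can be as simple as a feature flag that runs the new code path alongside the old one and records both results, without changing what users see. A minimal sketch, with hypothetical names and a made-up "fusion" computation:

```python
comparisons = []  # (legacy, candidate) pairs captured for offline analysis

def record_for_comparison(legacy_result: dict, candidate_result: dict) -> None:
    comparisons.append((legacy_result, candidate_result))

def process_telemetry(packet: dict, flags: dict) -> dict:
    """Always return the legacy result; when the dark-launch flag is on,
    also run the new code path and record its output for comparison."""
    legacy = {"altitude": packet["raw_altitude"]}
    if flags.get("new_fusion", False):
        # New algorithm runs "in the dark": its output is recorded
        # for analysis, but the legacy result is still what's returned.
        candidate = {"altitude": packet["raw_altitude"] * 0.98}
        record_for_comparison(legacy, candidate)
    return legacy

result = process_telemetry({"raw_altitude": 120.0}, flags={"new_fusion": True})
print(result)  # users still see the legacy output: {'altitude': 120.0}
```

Once the recorded comparisons show the new path behaves acceptably, the flag is flipped and the candidate result becomes the one returned.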
There are unique distinctions when developing hardware in small iterations as compared to software.
The lead time associated with fabricating hardware imposes a specific order on how the work for each iteration is defined and planned. This results in a more defined flow of development activities, limiting some flexibility in reprioritization while still taking advantage of the learning and feedback opportunities each iteration brings.
With hardware development, the results of an iteration may not produce something an end user can use; that is, it is not “potentially shippable” but still results in functionality and objective evidence of completed work that can be tested against a subset of the acceptance criteria. For example, early iterations may involve 1:1 paper cut-outs of hardware that enable quick feedback from manufacturing. Later iterations can evolve to a 3D rapid prototype after an approach has been agreed upon.
Applying cadence and synchronization can help manage the inherent variability in solution development.
However, cadence alone is not sufficient, especially where cadences may traditionally vary greatly between different disciplines. For example, hardware design, manufacturing, and testing have different, longer lead times and cycle times than software.
Synchronization between these varying cadences for significant cyber-physical systems, such as aircraft or satellites, is necessary to enable Lean flow with frequent integration.
Continuous integration is the heart of DevOps. However, even in software-only systems this is no small feat.
Cyber-physical systems are far more difficult to integrate continuously, as there are limiting laws of physics as well as supplier, organizational, and test environment practicalities that must be considered. In addition, some components have long lead times to take into consideration, and you certainly can’t integrate what you don’t have.
Instead, “continu-ish integration” describes a planned strategy to integrate frequently, based on the economic tradeoff between the transaction cost of integration and the risk reduction gained from objective evidence of system performance.
This can be accomplished by limiting the scope of integration tests, creating staging environments, and applying virtual or emulated environments, stubs, and mocks.
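For example, a control-loop test can run against mocked sensors and actuators long before the real hardware exists or is practical to rig up. A sketch using Python’s `unittest.mock` (the flight-control function and its numbers are hypothetical):

```python
from unittest.mock import Mock

def altitude_hold(sensor, controller, target: float) -> None:
    """One flight-control step: read altitude, command a correction."""
    error = target - sensor.read_altitude()
    controller.set_climb_rate(0.5 * error)  # simple proportional control

# Mocks stand in for hardware that is unavailable or costly to test against.
sensor = Mock()
sensor.read_altitude.return_value = 90.0
controller = Mock()

altitude_hold(sensor, controller, target=100.0)

# Verify the control logic without any physical hardware in the loop.
controller.set_climb_rate.assert_called_once_with(5.0)  # 0.5 * (100 - 90)
```

The integration risk that mocks cannot retire (real sensor noise, timing, physics) is then covered at the planned hardware-in-the-loop synchronization points.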
Systems built on modular, component-based architectures that communicate through well-defined interfaces are much simpler to test and verify functional behavior.
Each component can be independently built and tested (and in the spirit of DevOps, released) with more confidence that the change does not break another part of the system.
To build component-level tests, engineers apply test-driven development, meaning they write the tests for a change before they implement the change in software or hardware. Writing tests first helps engineers think deeply about the scope of a requirement change before beginning the implementation.
Once all tests pass, the work is complete.
Further, these tests should run automatically. In test-driven cultures, the environment automatically runs a rich set of component tests on any change. ECAD and MCAD tools have had testing infrastructure built into them for years. Adopting a test-driven mindset means creating the tests first and running them frequently.
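In software, the test-first loop looks like this sketch (the battery-monitor example is invented for illustration): the test is written before the function exists and fails, then the minimal implementation is added to make it pass.

```python
# Step 1: write the test first. Running it now fails, because
# should_return_home doesn't exist yet; the test pins down the
# required behavior before any implementation is written.
def test_should_return_home():
    assert should_return_home(charge_pct=15) is True   # low battery aborts
    assert should_return_home(charge_pct=80) is False  # healthy battery continues

# Step 2: write the minimal implementation that makes the test pass.
def should_return_home(charge_pct: float, threshold: float = 20.0) -> bool:
    """Command a return-to-home when remaining charge drops below threshold."""
    return charge_pct < threshold

test_should_return_home()  # now passes
```

The same mindset applies to hardware: define the acceptance test (a tolerance check, a simulation assertion, an ECAD rule) before the design change, and run it automatically on every change.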
Applying the theory and practices of DevOps has the potential to dramatically improve the development of complex cyber-physical systems.
Implementing practices such as organizing around value, utilizing multiple planning horizons, basing system decisions on objective evidence, reducing batch size, architecting for modularity and scale, iterating rapidly for fast feedback, applying cadence and synchronization, “continu-ishly” integrating the entire system, and applying test-driven development methods are keys to succeeding in this endeavor.
The companies that solve this problem first will increase transparency, reduce cycle time, increase value for money, and innovate faster. Simply, they will build better systems faster, and they will become the ultimate economic and value delivery winners in the marketplace.
Download the unabridged white paper Industrial DevOps here.
You can also read the two companion white papers: Applied Industrial DevOps: Practical Guidance for the Enterprise and Applied Industrial DevOps 2.0: A Hero’s Journey.