
February 6, 2024

Accelerating Value Delivery In Highly Complex Domains

By Jennifer Fawcett, Kelli Houston, Suzette Johnson, Brian Moore, and Robin Yeman

Introduction

It’s amazing how the Lean and Agile community brings us all together with a shared sense of purpose. At a SAFe Fellows Retreat in Boulder, CO, I (Jennifer Fawcett) shared the beginnings of this complex value stream paper, a collaboration that Kelli Houston, an Associate Fellow at Lockheed Martin, wanted to publish and give back to the community. The people I shared the paper with were Dr. Suzette Johnson and Robin Yeman, both SAFe Fellows.

It immediately clicked that their book Industrial DevOps: Build Better Systems Faster (also referred to as IDO) was focused on some of the same knowledge, principles, purpose, and passion. At the same time, Brian Moore, another colleague and a Principal at Raytheon with a passion for evolving similar ways of working, had asked for a peer review of a talk he was authoring on high-consequence systems. We joined forces to share our knowledge and expertise and give back to the community.

Our intent is to provide a perspective, leveraging Industrial DevOps success patterns, on how some of our world’s most complex solutions come together to deliver value. 

Who Is This Paper For?

This paper is for those working to adopt more Lean/Agile thinking and DevOps ways of working in highly complex organizations responsible for developing and manufacturing high-consequence systems for aerospace, defense, and similar cyber-physical domains. Others working to apply value stream concepts in complex domains will also benefit from the topics shared. 

What Is the Purpose of This Paper?

The purpose is to provide practical guidance on how to apply Lean, Agile, and Industrial DevOps (IDO) concepts to enable organizations developing large solutions in complex domains to organize around value, so they can maximize the delivery of value with the least amount of time and effort.

This paper provides an example, based on a missile system, of how to go from an operational value stream to the nested development value streams that build and deliver the solutions within the operational value stream, to an organizational design and Lean/Agile operating system that enables optimal execution of those value streams. If you are experiencing challenges in value stream identification, decomposition, and organization design, our intent is that you can use the information in this paper to help guide your reasoning process. 

The paper concludes with a discussion on additional topics to consider when designing value streams to ensure that the time investment made is set up to deliver high-quality solutions at the speed of relevance. By sharing this information, we hope to deepen the understanding in our community on how to leverage the power of value streams in the delivery of large solutions in complex domains.

Key Concepts Overview

Value streams provide a systems view that is focused on the success of the overall solution. They play a key role in understanding the sequence of activities needed to deliver a product, service, or solution to a customer. Value streams provide many benefits, including enabling the establishment of long-lived, stable teams that are focused on delivering value, and creating transparency that enables visibility into delays, bottlenecks, and unnecessary handoffs. 

Large systems are typically slower to deliver because of the complexity of the solution and the communication and knowledge sharing required to build the integrated system. A lack of knowledge, information coherence, infrequent integration points across the system, and complexity escalate this challenge. Value streams help address these challenges as they include all the skills and authority to define, build, test, verify, and deploy their system, which can dramatically decrease the time to deliver value. Value streams are a fundamental construct of Lean thinking and a key principle of Industrial DevOps. Identifying and capturing value streams is a critical first step for organizing around value. 

Industrial DevOps bridges the principles and practices of Lean, Agile, and DevOps with the unique needs and challenges of cyber-physical systems through nine principles. By doing so, Industrial DevOps enables organizations building cyber-physical systems to be more responsive to changing priorities and market needs while also reducing lead times and costs. The combined use of these nine principles has been proven effective in successfully delivering cyber-physical systems across industries. Throughout this paper, we will refer to these principles to validate the process used for our example. 

Industrial DevOps Principles

  • Organize for the Flow of Value: Organizing for flow provides guidance on how to align your multiple product teams for regular demonstration and delivery of value. 
  • Apply Multiple Horizons of Planning: Apply multiple horizons of planning to address scaling and complexity while leveraging ongoing experimentation and learning. 
  • Implement Data-Driven Decisions: Data-driven decisions use current observations and metrics to determine the state, manage the flow of work across systems of systems, and continuously improve with real-time data. 
  • Architect for Change and Speed: Architecting for change and speed provides information on multiple architecture considerations that can reduce dependencies and improve the speed of change. 
  • Iterate, Manage Queues, Create Flow: Iterating, managing queues, and creating flow emphasizes the importance of fast feedback, experimentation, and continuous learning, with queue management used to improve flow. 
  • Establish Cadence and Synchronization for Flow: Establishing cadence and synchronization discusses how these two concepts complement each other to reduce variability and improve predictability. 
  • Integrate Early and Often: Integrating early and often covers different levels and types of integration points across large, complex systems. 
  • Shift Left: Shifting left emphasizes a “test-first” mindset encompassing the multiple levels of testing across cyber-physical systems. 
  • Apply a Growth Mindset: Applying a growth mindset expresses the need to continuously learn, innovate, and adapt to the changes around us to stay competitive.

This paper provides an example of how these concepts are realized in the context of a highly complex cyber-physical solution and provides a guided example of how to:

  • Identify the operational value streams (OVSs).  
  • Decompose an operational value stream into nested development value streams (DVSs). 
  • Design the organizational structure that will execute a development value stream.
  • Implement a Lean/Agile operating system that enables effective and efficient execution of the value stream.

You can also download the full PDF of this paper here.

The Process: From Value Stream to Lean/Agile Execution

There are six basic steps to go from an overall operational value stream to a Lean/Agile operating system that enables optimal execution of the value stream:

  1. Identify and map the operational value stream.
  2. Identify the operational value stream solutions (system architecture: subsystems).
  3. Identify and map the nested development value streams.
  4. Identify the development value stream solutions (system architecture: subsystem components).
  5. Organize the teams around development value streams and the system architecture.
  6. Implement a Lean/Agile operating system to enable the teams to execute effectively and efficiently.

These steps are derived from the process described in the book Industrial DevOps and demonstrate the power of integrating principles and practices from value stream management, architectural design, and organizational design. Using a missile system example, we will demonstrate each step; however, the core concepts are applicable to any complex system. For another example (a satellite), see the book Industrial DevOps.
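To make the outputs of these six steps concrete, the following minimal Python sketch shows one way the artifacts produced along the way could be captured as a simple data model. The class and field names (for example, OperationalValueStream and DevelopmentValueStream) are illustrative assumptions for this paper's example, not part of the process itself.

```python
from dataclasses import dataclass, field

# Notional data model for capturing the outputs of the six steps.
# All names and fields are illustrative only; adapt them to your own program.

@dataclass
class Team:
    name: str
    team_type: str          # e.g., "stream-aligned", "platform", "complicated-subsystem"
    components: list[str]   # subsystem components this team delivers (Step 5)

@dataclass
class DevelopmentValueStream:
    name: str               # e.g., "Guidance, Navigation, and Control (GNC)"
    trigger: str
    value: str
    components: list[str] = field(default_factory=list)  # Step 4: subsystem components
    teams: list[Team] = field(default_factory=list)      # Step 5: team-of-teams

@dataclass
class OperationalValueStream:
    name: str               # e.g., "LRIP Missile"
    trigger: str            # Step 1
    value: str
    solutions: list[str] = field(default_factory=list)   # Step 2: subsystems
    development_value_streams: list[DevelopmentValueStream] = field(default_factory=list)  # Step 3

# Example instantiation drawn from the missile example in this paper.
lrip = OperationalValueStream(
    name="LRIP Missile",
    trigger="Award of the Request for Proposal (RFP)",
    value="Ability to support full-rate production of the missile",
    solutions=["Missile (All Up Round)", "Integration and Testing Support", "Supply Chain"],
    development_value_streams=[
        DevelopmentValueStream(
            name="Guidance, Navigation, and Control (GNC)",
            trigger="Targeting, sensor, and environmental inputs",
            value="Accurate, continuously adjusted flight path",
        )
    ],
)
```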

Step 1: Identify and Map the Operational Value Stream

The first step is to understand the overarching operational value stream (OVS), that is, the sequence of activities that deliver a particular product or service to a customer. As shown in Figure 1, the OVS for delivering a major defense capability system is a series of sequential OVSs (phases), each of which is triggered by an identified need, delivers value, and moves the program forward toward delivering that value to the end user. These phases represent the phases of the United States Department of Defense (DoD) acquisition process.

Figure 1: DoD Acquisition Process (Missile Example)

Our example focuses on the OVS that includes the Engineering and Manufacturing Development (EMD) and Low Rate Initial Production (LRIP) phases, since the result of these phases is a system that has an initial capability that has been fielded to the customer. By treating these phases as one value stream, we include manufacturing as part of the end-to-end value stream, enabling the optimization of the flow of value between design, development, and manufacturing. 

The OVS example focuses on the orchestration and operational support for the design and development of an LRIP environment, including the people, processes, and tools. This OVS defines the sequence of activities originating from the award of a Request for Proposal (RFP) (the trigger) and concludes with the ability to support full-rate production of the missile (the value). The key phases/activities of that OVS are shown in Figure 2.

Figure 2: LRIP Missile Operational Value Stream

In all value streams, value flows two ways. The first and most obvious is how value flows to the customer. In our OVS example, that value is the ability to produce a missile system for defense agencies. The second flow of value is that which flows to the contractor/manufacturer. That value is the revenue received in advance for design and development, the revenue received for each produced missile, and the advancement of knowledge and expertise gained through the development and delivery of the cyber-physical solution. When decomposing value streams, we embrace Industrial DevOps Principle #1: Organize for the Flow of Value.

Step 2: Identify the Operational Value Stream Solutions

In this step, we identify the solutions that support/enable the operational value stream (OVS). When identifying the OVS solutions, we want to keep Industrial DevOps Principle #4 top of mind: Architect for Change and Speed. Figure 3 shows three key solutions that support/enable the LRIP Missile operational value stream: the physical missile being produced (commonly referred to as the “All Up Round (AUR)”), integration and testing support, and the supply chain. 

Figure 3: LRIP Missile OVS Solutions

For our example, we take a closer look at the missile system. Figure 4 provides an example of the types of subsystems that make up a missile system. These subsystems reflect a functional decomposition of the Missile System’s operational capabilities. Each subsystem is responsible for providing specific capabilities and for interacting with the other subsystems via specific interfaces, as shown in the diagram. The goal is for each subsystem to be as self-contained as possible with minimal dependencies on other subsystems so it can be iteratively designed, developed, tested, demonstrated, and maintained independently of the rest of the system. 

Figure 4: Missile System Subsystems

The identification of missile system subsystems is highly influenced by reference system architectures like the Government Reference Architecture (GRA) and open system architecture models like the Weapon Open System Architecture (WOSA). 

Step 3: Identify and Map the Nested Development Value Streams

In this step, the operational value stream (OVS) is decomposed into nested development value streams (DVS) that reflect the missile system subsystems identified in the previous step. These nested value streams are shown in Figure 5. 

When decomposing value streams, we embrace Industrial DevOps Principle #1: Organize for the Flow of Value. In addition, by defining DVSs in alignment with the subsystems, we are exercising Industrial DevOps Principle #4: Architect for Change and Speed. A well-structured architecture (loosely coupled, highly cohesive components) will result in well-structured development value streams, each focused on delivering specific [subsystem] capabilities with limited dependencies on other value streams. This will enable value to flow without interruption and improve the speed of delivery for each development value stream. 

Figure 5: Missile System Development Value Stream Examples

Once the nested DVSs are identified, the flow of each is mapped, just as was done for the OVS. For our example, we mapped out the guidance, navigation, and control (GNC) DVS, as shown in Figure 6.  

Figure 6: GNC Development Value Stream

The guidance, navigation, and control (GNC) calculations are triggered by a combination of factors and inputs to ensure accurate and precise flight. The missile’s current position, the target’s current position, environmental conditions, and missile characteristics are used to calculate any route changes needed to steer the missile on the correct path to the target and to make any needed real-time adjustments to the missile’s flight path.
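For illustration only, the sketch below shows the kind of calculation this value stream ultimately produces: given the missile's and target's current positions, compute a heading correction that steers the missile toward the target. Real guidance laws are far more sophisticated and use the full set of inputs described above; this is a notional, simplified pursuit-style correction, not drawn from any actual system.

```python
import math
from dataclasses import dataclass

@dataclass
class State:
    x: float        # position, meters (notional 2D example)
    y: float
    heading: float  # radians

def steering_correction(missile: State, target: State, gain: float = 1.0) -> float:
    """Return a notional heading correction (radians) steering the missile toward the target.

    Simplified pursuit-guidance illustration only; not a real guidance law.
    """
    desired_heading = math.atan2(target.y - missile.y, target.x - missile.x)
    error = desired_heading - missile.heading
    # Wrap the error to [-pi, pi] so the commanded turn is always the short way around.
    error = math.atan2(math.sin(error), math.cos(error))
    return gain * error

# Example: missile at the origin heading east, target to the northeast.
correction = steering_correction(State(0, 0, 0.0), State(1000, 1000, 0.0))
print(f"commanded heading change: {math.degrees(correction):.1f} degrees")
```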

Step 4: Identify the Development Value Stream Solutions

In this step, we repeat the process performed earlier for the operational value stream (OVS). This time we are focusing on one of the nested development value streams (DVS). Here we identify the solutions that support/enable the guidance, navigation, and control (GNC) DVS, which aligns with the GNC subsystem.

As the OVS solutions and the DVS solutions are identified, it is important to keep in mind Industrial DevOps Principle #4: Architect for Change and Speed, to enable responsiveness and resilience. To get started, consider the GNC subsystem capabilities. The purpose of the GNC subsystem is to enable the missile to reach its target within specific performance requirements. The GNC subsystem relies on a combination of algorithms, real-time sensor data, and guidance commands to calculate and adjust the missile’s flight path to achieve its mission objectives.

As with the missile system subsystems, the subsystem components reflect a functional decomposition of the subsystem’s operational capabilities. In the case of the GNC subsystem, those capabilities include:

  • Navigation: Determine the missile’s position, orientation, and velocity during flight.
  • Guidance: Use information from the targeting system and the navigation system to calculate the guidance commands required to steer the missile on the correct path to the target.
  • Control: Execute the guidance commands and make real-time adjustments to the missile’s flight path by sending commands to the missile’s actuators/effectors.

In addition, to deliver high-quality GNC solutions, a capability for measuring the performance of the algorithms is needed. The components that realize these capabilities and their dependencies are shown in Figure 7. These components are the solutions that support/enable the GNC nested value stream.

Figure 7: GNC Subsystem Components
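As a sketch of how this decomposition might look in code, the interfaces below model the Navigation, Guidance, Control, and performance-measurement components as loosely coupled units (Principle #4), each depending only on a narrow contract. The class and method names are assumptions made for illustration, not the actual subsystem design.

```python
from typing import Protocol

class Navigation(Protocol):
    def current_state(self) -> dict:
        """Return the missile's estimated position, orientation, and velocity."""
        ...

class Guidance(Protocol):
    def guidance_commands(self, nav_state: dict, target: dict) -> dict:
        """Calculate steering commands from navigation and targeting information."""
        ...

class Control(Protocol):
    def apply(self, commands: dict) -> None:
        """Execute guidance commands by driving the missile's actuators/effectors."""
        ...

class PerformanceMonitor(Protocol):
    def record(self, nav_state: dict, commands: dict) -> None:
        """Capture data used to measure the performance of the algorithms."""
        ...

def gnc_cycle(nav: Navigation, gdn: Guidance, ctl: Control,
              perf: PerformanceMonitor, target: dict) -> None:
    """One notional GNC cycle: each component is developed and tested independently,
    interacting only through the interfaces above."""
    state = nav.current_state()
    commands = gdn.guidance_commands(state, target)
    ctl.apply(commands)
    perf.record(state, commands)
```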

Just like the identification of subsystems, the identification of subsystem components is highly influenced by reference system architectures like the GRA and open system architecture models like Weapon Open System Architecture (WOSA). 

Step 5: Organize Teams around Development Value Streams and the System Architecture

Now that the development value stream has been mapped and its solutions identified, we turn our attention to the design of the organization that will execute the development value stream and deliver the solutions.

A development value stream (DVS) is realized by an agile team-of-teams that delivers the DVS solutions. In this final step, the DVS solutions identified in the previous step (the subsystem components) are used to drive the organization of that agile team-of-teams.

As shown in Figure 8, in our example the GNC DVS is realized by an agile team-of-teams (the GNC Team) that delivers the GNC solutions (GNC components). The GNC team includes everything and everyone they need to deliver value (e.g., hardware, software, manufacturing, IT, etc.). The GNC team’s structure mirrors the GNC architecture (reverse Conway Maneuver). 

Each of the GNC teams has an associated “team type” as defined in Team Topologies: Organizing Business and Technology Teams for Fast Flow. The complicated-subsystem teams have detailed knowledge about the algorithms in specific areas, whereas the platform team provides the platform for evaluating GNC performance. The teams may include suppliers or subcontractors, and their contracts should reflect how they plan to engage.

Figure 8: GNC Team Structure
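One simple way to make the reverse Conway alignment visible is to record the team-to-component mapping alongside each team's Team Topologies type, as in the notional configuration below. The team names, team types, and component split are illustrative assumptions, not the structure of any real program.

```python
# Notional mapping of GNC components to teams and Team Topologies team types.
gnc_team_structure = {
    "Navigation Team": {
        "team_type": "complicated-subsystem",   # deep algorithm expertise
        "components": ["Navigation"],
    },
    "Guidance Team": {
        "team_type": "complicated-subsystem",
        "components": ["Guidance"],
    },
    "Control Team": {
        "team_type": "stream-aligned",
        "components": ["Control"],
    },
    "GNC Performance Platform Team": {
        "team_type": "platform",                 # provides the performance-evaluation platform
        "components": ["Performance Measurement"],
    },
}

# Quick consistency check: every component should have exactly one owning team,
# mirroring the goal of minimal dependencies between teams.
owners = [c for team in gnc_team_structure.values() for c in team["components"]]
assert len(owners) == len(set(owners)), "each component should map to a single team"
```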

As in earlier structuring steps, we leverage Industrial DevOps Principles #1 and #4 to ensure that the resulting teams are organized around value and that there are minimal dependencies between teams so they can deliver value as independently as possible.

Note: When building cyber-physical systems, we do not necessarily need every skill type on every team. Some teams may be more hardware-centric, some may be more software-centric, and other teams may provide specialized capabilities that are needed by the other teams. The overall development value stream team needs to be cross-functional; the individual delivery teams may not be.

Step 6: Implement a Lean/Agile Operating System

The final step of the process involves implementing a Lean/Agile operating system to coordinate value delivery across teams, enabling experimentation, fast feedback, and flow optimization. In this section, we describe an effective and efficient Lean/Agile operating system whose components support the nine principles of Industrial DevOps as shown in Figure 9.

Figure 9: Lean/Agile Operating System that Supports the Industrial DevOps Principles

This operating system is based on value-based teams that are designed to flow work and deliver value to the customer. Employing Industrial DevOps Principles #1 and #4, these teams are aligned with value streams for regular demonstration and delivery of value (Principle #1) and are structured with minimal dependencies between them so they can deliver value as independently as possible (Principle #4). 

In this operating system, aligned backlogs provide clarity of scope, priorities, and effort, and are decomposed to enable incremental delivery. Like the organization, the backlogs are architected for change and speed as defined in Industrial DevOps Principle #4. IDO Principle #5 is demonstrated as teams create flow through iterative development, regular demonstrations of integrated capabilities, and validated learning.

The operating system includes a common operating rhythm and supporting meetings, which provide cadence and synchronization (IDO Principle #6). The teams coordinate their operating rhythm, iterating together and synchronizing regularly to stay aligned, collaborate, address impediments, and continuously improve (IDO Principle #5).

Cadence and synchronization are especially important when building complex solutions, which require the coordination of many teams. Regular synchronization prevents alignment errors from accumulating, reducing the overall variation from what is desired to what is delivered.

The integrated road map and rolling wave plan demonstrate Industrial DevOps Principle #2 by applying multiple planning horizons, which provides flexible predictability. The road map reflects the overall plan for delivering the solution intent. Rolling wave, capacity-based planning details near-term activities performed at the team level. The road map is realized through multiple iterations of development with a CI/CD (continuous integration and continuous delivery) pipeline for software and early and frequent integration and release points for integrated capabilities, in alignment with IDO Principles #5 and #7. The iterative cycle employs a shift-left mindset, IDO Principle #8, as testing is defined before development begins and iterative design addresses manufacturing needs and constraints. These practices reflect the delivery of an evolving solution, delivering value early and often, while learning and adapting to change.
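As a minimal illustration of rolling wave, capacity-based planning, the sketch below fills the next few iterations from a prioritized backlog up to team capacity, while everything beyond the planning horizon stays at road-map level. The function, fields, and backlog items are assumptions for the example, not a prescribed planning tool.

```python
from dataclasses import dataclass

@dataclass
class BacklogItem:
    name: str
    estimate: int   # relative size, e.g., story points

def plan_rolling_wave(backlog: list[BacklogItem],
                      iteration_capacity: int,
                      iterations: int) -> list[list[BacklogItem]]:
    """Fill the next few iterations from a prioritized backlog, up to capacity.

    Items beyond the planning horizon remain on the road map, not planned in detail.
    """
    plan: list[list[BacklogItem]] = [[] for _ in range(iterations)]
    remaining = list(backlog)  # assumed to already be in priority order
    for iteration in plan:
        used = 0
        # Pull items strictly in priority order until the next item no longer fits.
        while remaining and used + remaining[0].estimate <= iteration_capacity:
            item = remaining.pop(0)
            iteration.append(item)
            used += item.estimate
    return plan

backlog = [BacklogItem("Nav filter update", 5), BacklogItem("Guidance law tuning", 8),
           BacklogItem("HWIL test harness", 3), BacklogItem("Actuator interface", 5)]
for i, items in enumerate(plan_rolling_wave(backlog, iteration_capacity=10, iterations=2), start=1):
    print(f"Iteration {i}: {[item.name for item in items]}")
```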

The operating system’s metrics provide accountability and transparency into the progress of integrated functionality and enable leaders and teams to make data-driven decisions based on the status of the system (IDO Principle #3). Using real-time data helps in prioritizing needed improvements.

Common processes and tools reflect how teams agree to work together. They guide and support the teams, enabling consistent, accelerated, and high-quality execution across teams. Tools automate the process and provide the collection and visualization of real-time/near real-time metrics. To be most effective in the adoption of these processes and tools, the organization needs to adopt a growth mindset and commit to continuous learning, which is defined in Industrial DevOps Principle #9.

This is one of many examples that could be demonstrated and may not reflect all the capabilities and architecture of your system. It is a notional example intended to demonstrate the success patterns applicable to defining OVSs and system architectures.

Drive Value Delivery through Lean/Agile Execution

The value decomposition process just described results in a system that is designed to deliver value. In addition to OVS and DVS identification and the value-based structure, there are additional considerations that provide the foundation for successful execution and drive value delivery through Lean/Agile execution.

 In this section, we describe key patterns when implementing a Lean/Agile operating system to optimize the end-to-end value stream and improve the speed and delivery of high-quality solutions. 

Reduce Time-to-Value by Optimizing Flow

Faster time-to-value is the ultimate goal. This is embraced by IDO Principles #4: Architect for Change and Speed, #5: Iterate, Manage Queues, Create Flow, and #7: Integrate Early and Often. Shortening the span from the receipt of the trigger (a customer need) to the delivery of value is the most direct way to accelerate the delivery of critical solutions. Improving flow means reducing waste and removing bottlenecks to accelerate delivery. 

Value stream management (VSM), the emerging modernization of the PMO, facilitates the optimization of flow and orchestrates and monitors that flow of value. VSM collaborations, practices, roles, and responsibilities are aligned to provide the structure of the operating system, as shown in Figure 10.

Figure 10: Value Stream Management (VSM) Collaborations and Practices

The Value Stream Architecture function owns the technical vision for how the end-to-end system will be built. This includes designing the flow of value, along with the decomposition of the operational value streams (OVS) into nested development value streams (DVS). This is not a “one and done” activity. This function collaborates with System Architect functions to ensure collective intellect is leveraged and coordinated. As the system continuously evolves, this function and its associated roles continue to learn, adapt, and design for future architectural solutions and optimize the production factory itself. This is a repeatable pattern designed for continuous improvement.

Value Stream Operations coordinates and provides operational orchestration for all value streams, including people, processes, and tools. This includes the implementation of flow metrics to measure the effectiveness of that value stream and the implementation of automation to support value stream execution. Value Stream Operations looks across all aspects of the value stream, coordinates activities and events, and aids in ensuring that all aspects are working together as effectively and efficiently as possible. 

Value Stream Ownership is typically adopted by the System Engineering role, as they have deep solution execution experience and an understanding of the problem space. They are often supported by other technical experts who understand their detailed domains. Some of the responsibilities include being accountable for optimal execution of a value stream (i.e., the delivery of high-quality solutions in the shortest amount of time) and facilitating the management of the funding. 

The key to optimizing a value stream is to measure the effectiveness of that value stream using flow metrics, identify the blockers/impediments, and implement solutions and process improvements to remove those blockers. Thus, Value Stream Ownership is dependent on Value Stream Operations to provide the infrastructure that enables effective ownership.
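As an illustration of this kind of measurement, the sketch below computes a few common flow metrics (throughput, average cycle time, and work in process) over a reporting window from simple work-item records. The field names and data are assumptions for the example, not a specific tool's schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class WorkItem:
    started: date
    finished: Optional[date] = None   # None means still in progress

def flow_metrics(items: list[WorkItem], window_start: date, window_end: date) -> dict:
    """Compute notional flow metrics for a value stream over a reporting window."""
    done = [i for i in items if i.finished and window_start <= i.finished <= window_end]
    throughput = len(done)  # items completed in the window
    avg_cycle_time = (sum((i.finished - i.started).days for i in done) / throughput
                      if throughput else 0.0)
    # Work in process: started by the end of the window but not yet finished within it.
    wip = sum(1 for i in items
              if i.started <= window_end and (i.finished is None or i.finished > window_end))
    return {"throughput": throughput, "avg_cycle_time_days": avg_cycle_time, "wip": wip}

items = [WorkItem(date(2024, 1, 2), date(2024, 1, 9)),
         WorkItem(date(2024, 1, 5), date(2024, 1, 20)),
         WorkItem(date(2024, 1, 15))]
print(flow_metrics(items, date(2024, 1, 1), date(2024, 1, 31)))
```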

Execute Short, Iterative Feedback Cycles to Drive Early Learning

To learn fast, we must build, demonstrate, and release fast (IDO Principle #7: Integrate Early and Often) so we can get feedback and validate the system functions. Incremental delivery with intent to release early and often accelerates the learning cycles, reduces launch risk, and improves value delivery.

While the value stream may appear waterfall-ish, it is not. It identifies steps needed to deliver value to the customer. When executed, the process for developing that value is iterative and incremental, with each cycle improving based on the learnings from the previous cycle. 

In the case of cyber-physical development programs, it can take many years to launch a final product. A key challenge is to break the system to be delivered into smaller increments that can be delivered faster, enabling the incremental maturation of the solution. 

As described earlier in the paper, the key to this decomposition is the identification of highly cohesive and loosely coupled solution components that can be developed independently but integrated often to deliver a solution faster (IDO Principle #4: Architect for Change and Speed). The plan for incrementally and iteratively delivering a solution is explicitly laid out in the road map, as described earlier. 

Leveraging proven patterns and reusable components shortens development cycle times by reducing what must be developed from scratch.

Another means to drive early learning is to move key activities earlier in the development or operational processes to detect and address issues proactively and cost-effectively. This is a concept commonly referred to as “shifting left” (IDO Principle #8). Critical benefits of shifting left include reducing costs, improving quality, accelerating delivery, mitigating risks, and enhancing customer satisfaction.

The following are examples of a shift-left mindset.

Integration and Testing

Development should begin with a test-first mindset: define the test before development. For software, integration and testing happen daily. Tests are automated to support ongoing regression testing. Conducting regular integration and testing identifies issues and defects earlier in the development process, reducing the cost and effort required to fix those problems. 
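The sketch below illustrates the test-first mindset for a hypothetical guidance function: the tests are authored first and then run automatically (for example, with pytest) on every integration, so they also serve as regression tests. All names are notional, and the minimal implementation is included only so the example is self-contained.

```python
import math
import pytest

# In practice these tests would live in their own file (e.g., test_guidance.py) and be
# written before the guidance code exists; they then run on every integration as
# automated regression tests.

def test_no_correction_when_already_on_course():
    # A missile heading straight at the target should need no correction.
    assert steering_correction(missile_heading=0.0, bearing_to_target=0.0) == pytest.approx(0.0)

def test_correction_turns_toward_target():
    # A target off to the left (positive bearing) should produce a positive (left) correction.
    assert steering_correction(missile_heading=0.0, bearing_to_target=math.pi / 4) > 0.0

# Minimal implementation written afterward, just enough to make the tests pass.
def steering_correction(missile_heading: float, bearing_to_target: float) -> float:
    error = bearing_to_target - missile_heading
    # Wrap to [-pi, pi] so the correction is always the short way around.
    return math.atan2(math.sin(error), math.cos(error))
```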

Adopt continuous testing and quality assurance throughout the development process instead of a single testing phase at the end. Continuous integration and evolution of hardware, firmware, and software is critical when developing complex cyber-physical systems. To enable this, investment in digital capabilities and modern practices may be needed in your organization. 

Continuous integration and testing are supported through a variety of techniques and capabilities, such as additive manufacturing (AM) materials and structures to produce hardware mock-ups; hardware-in-the-loop (HWIL) labs to enable the early integration of hardware, firmware, and software; and modeling and simulation technologies to replicate an entire mission for testing and performance measurements.

Security

This includes identifying and mitigating vulnerabilities and threats earlier, making systems more secure and reducing the risk of security breaches. This is becoming increasingly crucial as cybersecurity threats evolve. Security and compliance requirements must be designed for and built in throughout development.

DevOps and Continuous Delivery Pipeline 

Automating development and operations processes as early as possible makes delivery pipelines more seamless and efficient.

Compliance

Enabling Lean QMS processes as part of the regular flow provides early verification of quality, safety, and regulatory requirements. 

Operating System Setup

Structure the teams and define the operating system up front. In the case of our LRIP Missile OVS, if the value decomposition and team organization are done during the proposal phase, the basis of estimates (BoEs) that are included in the proposal will be more realistic, and the start-up phase will go much faster.

Production/Manufacturing

The notion of integrating and working in small batches isn’t just for software development. We extend these practices from product development to manufacturing for early test and feedback, driving down product risk along the way. For example: 

  • Consider manufacturing/production impacts and concerns during design, ensuring designs can be produced and tested in a controllable, repeatable, and affordable way. Use iterative development and reviews with manufacturing team members.
  • Design solutions in a way that makes them easier and more cost-effective to manufacture, while still maintaining quality and functionality.
  • Simplify designs, choosing the right materials and manufacturing processes to improve ease of assembly and reduce waste and production costs. 

Designing for manufacturability (DFM) and the concept of shift-left manufacturing reduce rework and late discoveries; without them, programs often suffer schedule delays and poor quality. When we shift left and include manufacturing concepts and collaboration early, we improve quality and speed of delivery. 

Conclusion

This paper provides some practical examples of how to apply the principles in Industrial DevOps: Build Better Systems Faster to the development of a complex cyber-physical defense system, a missile. It describes a step-by-step process for accelerating value delivery by organizing around value and optimizing flow, complete with an example that demonstrates the application of these key principles. The paper also describes some additional execution patterns that enhance the delivery process and ensure that the investment is meeting value expectations.

The patterns that we describe in this paper are our attempt to provide additional clarification on how to apply key Lean/Agile principles in practice. As in anything Lean, the pursuit of perfection is always embedded in our mindset. We fully embrace Industrial DevOps Principle #9: Apply a Growth Mindset. We look forward to sharing our learnings and to hearing from you!


Special gratitude to our reviewers. We appreciate your time and contributions: 

JB Brockman, Rune Christensen, Nicolas Friberg, Harry Koehnemann, Marc Rix, and Glenn Smith.

Thank you to the authors and reviewers from the NDIA Systems Engineering Division ADAPT working group who contributed to this publication. Their diverse perspectives, expertise, and insight defined success patterns in applying Lean/Agile principles to the delivery of value.

You can also download the full PDF of this paper here.

About The Authors

Jennifer Fawcett

Jennifer is a semi-retired empathetic Lean and Agile leader, practitioner, coach, speaker, and consultant. A SAFe Fellow, she contributed to and helped develop SAFe content and courseware. Her passion has been delivering value in the workplace and understanding the science of social communities, communication, and culture. Other areas of focus include product management, product ownership, and compassionate leadership. She has provided dedicated service in these areas to enterprises for over forty years.


Kelli Houston

Kelli has over thirty-five years of experience in software engineering, solution architecting, project management, and business transformation consulting. She is passionate about empowering teams and driving business outcomes, enabling teams of all sizes to maximize their value delivery. As a Lockheed Martin Associate Fellow, Kelli is responsible for driving the adoption of Lean and Agile best practices, working across the organization to define the transformation strategy, coach teams, and capture successful patterns that accelerate the delivery of business outcomes and ensure knowledge continuity. Kelli holds a bachelor’s and a master’s degree in engineering, multiple agile and coaching certifications, and has authored several books. When Kelli isn’t leading transformations, you can find her practicing yoga, cycling, or diving into a great book. She also enjoys spending quality time with her two grown children and husband of thirty-five years.


Suzette Johnson

Dr. Suzette Johnson is an award-winning author who has spent most of her career in the aerospace defense industry working for Northrop Grumman Corporation. Suzette was the enterprise Lean/Agile transformation lead. In this role, she launched the Northrop Grumman Agile Community of Practice and the Lean/Agile Center of Excellence. She has supported over a hundred enterprise, government, and DoD organizations in adopting and maturing Lean-Agile principles and engineering development practices. She has also trained and coached over four thousand individuals on Lean/Agile principles and practices and delivered more than one hundred presentations on Lean/Agile at conferences both nationally and abroad. Her current role is as Northrop Grumman Fellow and Technical Fellow Emeritus, where she continues to actively drive the adoption of Lean/Agile principles with leadership at the portfolio levels and within cyber-physical solutions, specifically within the space sector. As a mentor, coach, and leader, she launched the Women in Computing, Johns Hopkins University Chapter; the Women in Leadership Development program; the Northrop Grumman Lean-Agile Center of Excellence; and the NDIA ADAPT (Agile Delivery for Agencies, Programs, and Teams) working group. She received a Doctorate of Management from the University of Maryland with a dissertation focused on investigating the impact of leadership styles on software project outcomes in traditional and Agile engineering environments. She is also a Certified Agile Enterprise Coach and a Scaled Agile Program Consultant (SPCT).


Brian Moore

Brian J. Moore is an RTX Principal and was one of the architects of RTX’s dual operating system, CORE (Customer-Oriented Results and Excellence). His current assignment is in Enterprise Service, where he is leading efforts to ensure that ES operations are lean, agile, and digital. Brian has over three decades of experience in applying Lean and Agile across a wide range of development and operational value streams. He has worked in IT strategy, enterprise architecture, and systems engineering assignments related to cloud computing, collaboration environments, knowledge management, digital thread, product data management, and service-oriented architecture. He also led the creation of Raytheon’s IT sustainability program, which won six industry awards and was featured on CNBC. Brian is a SAFe iSPCT, has a bachelor’s degree in chemical engineering from the University of Colorado, and a master’s degree in enterprise architecture from the University of Texas at Dallas.


Robin Yeman

Robin Yeman is an award-winning author who has spent twenty-six years working at Lockheed Martin in various roles, culminating in senior technical fellow, building large systems ranging from submarines to satellites. She led the Agile community of practice supporting a workforce of 120,000 people. Her initial experience with Lean practices began in the late ’90s. In 2002, she had the opportunity to lead her first Agile program with multiple Scrum teams. After just a couple of months of experience, she was hooked and never turned back. She both led and supported Agile transformations for intelligence, federal, and Department of Defense organizations over the next two decades, and each one was more exciting and challenging than the last. In 2012, she had the opportunity to extend those Agile practices into DevOps, which added extensive automation and tightened feedback loops, providing even larger results. Currently, she is the Space Domain Lead at the Software Engineering Institute at Carnegie Mellon University. She is also pursuing a PhD in Systems Engineering at Colorado State University, where she is working to demonstrate empirical data on the benefits of implementing Agile and DevOps for safety-critical cyber-physical systems.

