August 31, 2021

Five Papers to Improve Software Delivery

By IT Revolution

Measure Efficiency, Effectiveness, and Culture to Optimize DevOps Transformation

Metrics for DevOps initiatives

Successful software outcomes depend on negotiation of requirements, accurate scoping of work, value judgments, innovations, team collaboration, software architecture, economic tradeoffs, and user demand. Success is less dependent on contract quality, Gantt charts, critical path schedules, earned-value measurement, laws of physics, material properties, mature building codes, and certified engineers. Stated another way, steering software delivery projects is more a discipline of economics than of engineering. Unlike most mature engineering disciplines, software delivery is a nondeterministic endeavor with more uncertainty to be managed.

There is a hunger for better measurements. DevOps teams and professionals want to push new ideas and new ways of doing things. Metrics objectively help teams distinguish between improvements and unproductive changes. This paper focuses on measurements that:

  1. remove subjectivity
  2. promote excellence
  3. focus on strategy
  4. create predictability

Measurements work best when they drive crucial conversations and help teams improve. Measurements are undermined when they are used primarily to evaluate individual performance and impact. People will game the system or distort a metric if it focuses on personal performance rather than team performance. Therefore, a successful transformation to a metrics-driven organization is powered by three things outlined in this paper:

  1. A better work environment more accepting of new ideas
  2. Desire for accountable teams
  3. Continuous improvement via the scientific method

Download the Full Paper Here

Measuring Software Quality

A Guide to Employing Metrics in Software Development

In today’s digital economy, where software is central to the business, the overall quality of that software is more important than ever before. Together with a solid market need and business plan, top quality software leads to customer satisfaction, revenue, and profitability, and the best designs can even allow an organization to more easily enter new markets.

On the other hand, even with the most solid business plan, poor quality software can be one of the fastest roads to failure. Given the great importance of software quality, leaders cannot simply hope for the best. Just as businesses measure market trends, sales pipelines, inventories, fulfillment, and more, they must also measure the quality of their software.

As an industry, we have been attempting to assess software quality for quite some time. Today’s continuous delivery pipelines almost always include steps that run static-code analyses. Project managers habitually track and make decisions based on lines of code. Whether or not a team is practicing test-driven development, the value of test coverage is well understood. But we suggest that these relatively common practices at best provide only the weakest indication of the overall picture of the quality of software and at worst are misleading, giving only the illusion of quality.

There are a host of “ilities” that software development teams strive for, such as reliability, maintainability, scalability, agility, and serviceability, and it’s not difficult to draw a connection between these “ilities” and business outcomes. We know from the State of DevOps Report published by DORA that high-performing organizations have lower lead times and increased frequency of software deployments.

Clearly, such results are directly related to agility and even scalability, particularly as it relates to team structures—autonomous teams can bring new ideas to market far faster than those that must navigate complex bureaucracies. Lowering mean time to recovery (MTTR) reflects maintainability. And there is ample evidence that confirms the importance of secure software, with deficiencies in this area having catastrophic effects on consumer confidence and the business’s bottom line.
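As an illustrative sketch only (the record layouts and data below are hypothetical, not from the DORA report), the signals mentioned above—lead time, deployment frequency, and MTTR—can be computed from simple deployment and incident records:

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical records; in practice these would come from a CI/CD
# pipeline and an incident tracker.
deploys = [
    {"committed": datetime(2021, 8, 1, 9), "deployed": datetime(2021, 8, 1, 15)},
    {"committed": datetime(2021, 8, 2, 10), "deployed": datetime(2021, 8, 3, 11)},
    {"committed": datetime(2021, 8, 4, 8), "deployed": datetime(2021, 8, 4, 12)},
]
incidents = [
    {"start": datetime(2021, 8, 2, 14), "resolved": datetime(2021, 8, 2, 16)},
]

# Lead time: elapsed time from code committed to code running in production.
median_lead_time = median(d["deployed"] - d["committed"] for d in deploys)

# MTTR: mean time from incident start to resolution.
mttr = sum((i["resolved"] - i["start"] for i in incidents), timedelta()) / len(incidents)

# Deployment frequency: deploys per calendar day in the observation window.
window_days = (deploys[-1]["deployed"].date() - deploys[0]["deployed"].date()).days + 1
deploys_per_day = len(deploys) / window_days
```

The point is less the arithmetic than the instrumentation: once deploy and incident events are recorded consistently, trends in these numbers can drive the conversations the paper describes.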

We know that we are striving for these things: reliability, agility, security, etc., but how do we know we have achieved them? There are several challenges. Some of these things are difficult to measure. How will we know when we have maintainable code? Any software developer charged with taking over an existing codebase will tell you that the mere existence of documentation does not necessarily make their job any easier, and its value may, in fact, be inversely proportional to how voluminous it is. Some of these things might be measurable, but the data may not be available in a timeframe that allows it to easily drive improvement. For example, measuring the number of software outages gives an indication of reliability; however, assessing whether particular changes to the code move the needle in a positive direction will not be possible until said software has been running in production for quite some time.

Still other outcomes may be influenced by several factors requiring an aggregation of different measures. For instance, agility is influenced by software architecture (Do you have a monolith or microservices?) as well as organizational structures (Do you have autonomous, two-pizza teams responsible for their components, or do you depend heavily on ticket-based processes?).

In this paper, we suggest that there are a set of measurable leading indicators for these desirable outcomes. That is, improvements in the leading indicators are reflective of improvements in the outcomes.

We have categorized these leading indicators into two different buckets:

  • Measures against the code: These include some familiar attributes, such as results coming from static-code analysis tools, but we add to this list with some less widely understood elements, such as the use of feature flags.
  • Measures of software development and operations processes: For example, how long do integration tests take, and how often do you run them? Do you do any type of progressive delivery—A/B testing, canary deployments, etc.?
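As a minimal sketch of the feature-flag element mentioned above (the flag store, flag name, and percentage-based rollout scheme here are hypothetical illustrations, not from the paper):

```python
# A feature flag guards a new code path so it can ship "dark" and be
# rolled out gradually without a redeploy. The in-memory store and
# flag names are illustrative only.
FLAGS = {"new-checkout": {"enabled": True, "rollout_percent": 25}}

def is_enabled(flag_name: str, user_id: int) -> bool:
    """Deterministic bucketing: a given user always sees the same variant."""
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    return user_id % 100 < flag["rollout_percent"]

# Canary-style check: roughly a quarter of users land in the rollout bucket.
enabled_share = sum(is_enabled("new-checkout", uid) for uid in range(1000)) / 1000
```

Counting how many code paths are guarded this way, and how quickly flags are retired, is one way the presence of feature flags becomes a measurable attribute of the code rather than just a practice.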

We will also point out where we feel common measures are misleading.

Download the Full Paper Here

Overcoming Inefficiencies in Multiple Work Management Systems

Helping Your Enterprise with Their DevOps Transformation

Large enterprises are traditionally organized by function and managed to optimize vertically for specific outcomes. In IT, this often means organizations specialize in functions such as design, development, QA, and operations. Many decisions are made in the context of those functional silos as opposed to the end-to-end flow of delivery across those teams.

This mode of decision-making affects work management practices as well as tool selection for each group. In traditional operating models of siloed teams, this creates an environment of task-driven queues. We view this as a variant of Taylorism, in which work specialization and per-team tool choices for work management have negative impacts on overall service delivery goals in knowledge-based work. Common countermeasures usually involve the introduction of other groups (release management, change management, project management) to help manage the “flow” of work, but these groups yield marginal returns, if any.

As enterprises adopt Agile and DevOps, this mismatch between SDLC and ITIL practices, as well as tool silos, impedes both flow efficiency and work understanding within individual teams and across them. Additionally, business stakeholders are continuously frustrated because work disappears into the IT “black box,” and IT leaders are unable to provide timely delivery, let alone estimates and visibility into the work that it takes to both build and run the service. This dysfunction erodes trust between the stakeholders and the teams that are delivering and running critical services for the business.

This paper is focused on enterprise practitioners and management leaders who have multiple work management systems that their teams deal with every day and who struggle to provide visibility into the work, as well as an improvement model to make the work and systems better.

Download the Full Paper Here

Breaking the Change Management Barrier

Patterns to Improve the Business Outcomes of Traditional Change Management Practices

Best practice frameworks, such as the Information Technology Infrastructure Library (ITIL), advocated for the creation of Change Advisory Boards (CABs) that would be responsible for assessing requests for change (RFCs) against risk and impact, as well as collision avoidance. The intent was to create a holistic advisory group with the skills and experience to evaluate “normal” changes and strike a balance between stability and innovation. By definition, normal changes were unique and had no history of risk or reward.

As change management became entrenched in the enterprise, the CAB shifted from an advisory group to a decision authority for most, if not all, requested changes. The burden on the change initiator grew as RFC details, timelines, and level of pre-submission approvals increased.

Traditional change management places more emphasis on managing RFCs than on change traceability, the change record itself. The net result is longer lead times, increased impediments and overhead costs, and frustration among Development and Operations teams, business leaders, and customers. Once an enabler of innovation, command-and-control change management is now considered a constraint.

The business climate has changed, and IT must adapt its processes accordingly. IT has to implement changes more rapidly to gain or sustain a competitive advantage in this disruptive landscape. Smaller releases with risks mitigated by frequent and automated testing can deliver value faster and more frequently if allowed to deploy into production with a minimum viable process. Determining what’s “just enough” change management depends on the risk appetite and compliance requirements of the organization. However, even small improvements to change management and the CAB can result in big advancements.

This paper offers several patterns that can be applied in tandem or as appropriate by leaders seeking ideas and opportunities for optimizing the ongoing value of change management while reducing its complexity.

Download the Full Paper Here

It’s Time for … ERP Disruption

Applying Wardley Mapping and DevOps Techniques to the ERP Ecosystem

In this paper, a group of industry experts tackled the challenge of legacy ERP systems and the impact digital transformation (DevOps, Lean, etc.) is having on incumbent business systems (such as ERPs and middleware) within their integrated landscapes. The team applied Wardley Mapping to increase their situational awareness and to demonstrate the approach as a tool to explore ideas, identify key themes, and, hopefully, communicate with senior enterprise leaders.

As an example, the team looked at custom business processes (some should be standardized, others may need to be removed from the ERP), integrations, and automating business-process testing.

Tenets

  • ERP suites are increasingly evolving to Utility ERP, with standardized processes for most financial processes.
  • ERPs are becoming one of a collection of critical systems instead of the center of an enterprise’s solar system.
  • Enterprises that place their most valuable (and customized) business processes in their ERPs may be making their ERPs more difficult to upgrade.
  • ERP+ (ERP, middleware, and connected systems) often spans multiple enterprise silos and is exceptionally difficult to test due to the shared data model that crosses many systems.
  • Introducing new ways of working to ERP implementations leads to significant inertia for parties (systems integrators, ERP providers, and clients) due to distorted incentives and highly insular career ladders.

Download the Full Paper Here

- About The Authors

IT Revolution

Trusted by technology leaders worldwide. Since publishing The Phoenix Project in 2013, and launching DevOps Enterprise Summit in 2014, we’ve been assembling guidance from industry experts and top practitioners.
