
IT Revolution

Helping technology leaders achieve their goals through publishing, events & research.


Measuring Software Quality

February 23, 2021 by IT Revolution 3 Comments

This post was adapted from the Measuring Software Quality white paper, written by Cornelia Davis, Stephen Magill, Rosalind Radcliffe and James Wickett.

In today’s digital economy, where software is central to the business, the quality of that software matters more than ever. Together with a solid market need and business plan, top-quality software drives customer satisfaction, revenue, and profitability, and the best designs can even allow an organization to enter new markets more easily.

On the other hand, even with the most solid business plan, poor quality software can be one of the fastest roads to failure.

Given the great importance of software quality, leaders cannot simply hope for the best. Just as businesses measure market trends, sales pipelines, inventories, fulfillment, and more, they must also measure the quality of their software.

Current State of Measuring Code Quality

As an industry, we have been attempting to assess software quality for quite some time. Today’s continuous delivery pipelines almost always include steps that run static code analysis. Project managers habitually track, and make decisions based on, lines of code.

Whether or not a team is practicing test-driven development, the value of test coverage is well understood. But we suggest that these relatively common practices at best provide only the weakest indication of overall software quality, and at worst are misleading, giving only the illusion of quality.


There are a host of “ilities” that software development teams strive for, such as reliability, maintainability, scalability, agility, and serviceability, and it’s not difficult to draw a connection between these “ilities” and business outcomes.

We know from the State of DevOps Report published by DORA that high-performing organizations have lower lead times and increased frequency of software deployments.

Clearly, such results are directly related to agility and even scalability, particularly as it relates to team structures—autonomous teams can bring new ideas to market far faster than those that must navigate complex bureaucracies. Lowering mean time to recovery (MTTR) reflects maintainability.

And there is ample evidence that confirms the importance of secure software, with deficiencies in this area having catastrophic effects on consumer confidence and the business’s bottom line.

We know that we are striving for these things: reliability, agility, security, etc., but how do we know we have achieved them? There are several challenges.

Challenges to Understanding and Measuring Code Quality

Some of these things are difficult to measure.

How will we know when we have maintainable code? Any software developer charged with taking over an existing codebase will tell you that the mere existence of documentation does not necessarily make their job any easier, and its value may, in fact, be inversely proportional to how voluminous it is.

Some of these things might be measurable, but the data may not be available in a timeframe that allows for it to easily drive improvement.

For example, measuring the number of software outages gives an indication of reliability; however, assessing whether particular changes to the code move the needle in a positive direction will not be possible until said software has been running in production for quite some time.

Still other outcomes may be influenced by several factors requiring an aggregation of different measures.

For instance, agility is influenced by software architecture (Do you have a monolith or microservices?) as well as organizational structures (Do you have autonomous, two-pizza teams responsible for their components, or do you depend heavily on ticket-based processes?).

Measurable Leading Indicators

We suggest there are a set of measurable leading indicators for these desirable outcomes. That is, improvements in the leading indicators are reflective of improvements in the outcomes. We have categorized these leading indicators into two different buckets:

  • Measures against the code: These include some familiar attributes, such as results coming from static-code analysis tools, but we add to this list with some less widely understood elements, such as the use of feature flags.
  • Measures of software development and operations processes: For example, how long do integration tests take, and how often do you run them? Do you do any type of progressive delivery—A/B testing, canary deployments, etc.?
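As a concrete illustration of the first bucket, here is a minimal feature-flag sketch; the flag name, rollout scheme, and `flag_enabled` helper are illustrative assumptions, not a reference to any particular flagging library.

```python
# A minimal feature-flag sketch (names and rollout logic are illustrative).
# Flags let code ship dark and be enabled gradually per user or environment,
# which is one of the "measures against the code" named above.

FLAGS = {
    "new_checkout_flow": {"enabled": True, "rollout_percent": 25},
}

def flag_enabled(name: str, user_id: int) -> bool:
    flag = FLAGS.get(name)
    if not flag or not flag["enabled"]:
        return False
    # Deterministic percentage rollout: the same user always gets the same answer.
    return (user_id % 100) < flag["rollout_percent"]

# Usage: route 25% of users to the new path, the rest to the old one.
path = "new" if flag_enabled("new_checkout_flow", user_id=42) else "old"
```

The presence (and disciplined cleanup) of such flags in a codebase is itself something that can be counted and tracked.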

In addition, we will also point out when we feel common measures are misleading.


A Framework for Measuring Code Quality

Software quality is not a destination; it is a journey. And it is essential that you address this concept through a practice of continual improvement. We suggest a framework that includes at least the following six elements.

1) Run Experiments

Feedback loops have been established as an essential tenet in many areas of software development, and they should be applied to your strategy for managing your software quality.

Choose outcomes you would like to improve, form hypotheses about leading indicators that could enable gains in these outcomes, gather data, and assess whether your actions are leading to improvements.

This is where you can assess agility as a combination of software architecture, release practices, team structures, and so on. You should also correlate those measures that take longer to gather (number of outages over a month) with those that are easier to attain (use of feature flags).

2) Establish Targets

Because the leading indicators are by definition measurable, defining unambiguous targets is not only possible but essential. Getting integration tests to run in under an hour, for example, is something everyone on a team can understand and apply efforts directly toward. Improvements can be clearly seen and celebrated.

3) Establish Guardrails

Sometimes improvement in one area comes at the expense of another. For example, to get integration tests to run in under an hour, tests might be simplified in a way that reduces test coverage.

To guard against this, it is useful to explicitly set targets for what should remain unchanged during an improvement period. A better integration-testing effort could be phrased, “get integration tests to run in under an hour without decreasing integration-test coverage.”
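A target-plus-guardrail check of this kind could be expressed as a simple CI-style gate; the threshold values and function name below are illustrative assumptions.

```python
# A sketch of a target-plus-guardrail gate, e.g. as a CI step.
# Thresholds and metric names are illustrative, not prescribed values.

def check_improvement(duration_min: float, coverage_pct: float) -> bool:
    TARGET_DURATION_MIN = 60.0   # target: integration tests under an hour
    GUARDRAIL_COVERAGE = 80.0    # guardrail: coverage must not drop below this
    meets_target = duration_min < TARGET_DURATION_MIN
    holds_guardrail = coverage_pct >= GUARDRAIL_COVERAGE
    return meets_target and holds_guardrail

assert check_improvement(55.0, 82.0)      # faster, and still well covered
assert not check_improvement(55.0, 70.0)  # faster, but coverage was sacrificed
```

Encoding the guardrail alongside the target makes the "without decreasing coverage" clause enforceable rather than aspirational.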

4) Update Goals and Metrics

Your software-quality improvement initiative should be constantly evolving.

For already-established measures, targets should be reevaluated regularly, or a series of gradual improvements should be defined. Using the previous example of getting integration tests to run in under an hour, the first target may simply be a 25% improvement in speed.

These practices contribute to several of the leading indicators discussed earlier. Of course, the actual metrics you are measuring should also shift over time—you should start simple, add to them gradually, and create new aggregates, all in response to what you learn through your experiments.
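One way to express gradually tightening targets is a ratchet that closes a fixed fraction of the remaining gap each period; the function below is an illustrative sketch, not a prescribed formula.

```python
# Sketch of a ratcheting target: each period closes `step` of the remaining
# gap between the current baseline and the final goal (e.g. one hour).

def next_target(baseline_min: float, final_min: float = 60.0,
                step: float = 0.25, periods_done: int = 0) -> float:
    """Return the target after `periods_done + 1` improvement periods."""
    gap = baseline_min - final_min
    remaining = gap * (1 - step) ** (periods_done + 1)
    return final_min + remaining

# Baseline 180 min: the first target is a 25% reduction of the 120-min gap.
print(next_target(180.0))                   # 150.0
print(next_target(180.0, periods_done=3))   # 97.96875
```

The point is not the particular formula but that each period's target is explicit, achievable, and visibly converging on the end goal.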

5) Get Clean/Stay Clean

At the outset, you may be faced with a situation that requires a great deal of remediation, requiring a focus on certain metrics (e.g., the leading indicators for reliability) and a great deal of resources (time) invested in them. And while improvements in quality will reduce the emergency nature of your quality initiative, we caution against adopting a mindset that teams can achieve a “done” state.

Once clean, you must continue measuring your quality with the goal of remaining clean. Team responsibilities may change (e.g., a tiger team may be dissolved as developers assume full responsibility for maintaining quality), and processes may be adapted (e.g., code reviews may be done by a broader set of individuals, not only the most senior engineers); however, measures should remain in place and be regularly audited.

6) Don’t Forget the User

Customer satisfaction is the ultimate goal and should regularly be assessed against the measures you believe will lead to it. It is easy to get caught up in “metrics for metrics’ sake,” or metrics that are interesting from a technical perspective (lines of code covered by tests, production events captured, etc.), but the only metrics that truly matter are those that, when improved, also result in a measurable improvement in customer outcomes.

Continue with Test Metrics

Code can be measured in a variety of ways. It can be analyzed statically to look for bugs, compute structural complexity metrics, or examine dependencies. It can be executed to collect performance metrics, look for crashes, or compute test coverage. It can be monitored in production to collect information on usage, failures, and responsiveness.
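As a small taste of static measurement, the sketch below approximates cyclomatic complexity by counting branch points in a Python AST. It is a deliberate simplification; real static analysis tools are far more thorough.

```python
# A minimal static-analysis sketch: approximate cyclomatic complexity by
# counting branch points in a function's AST (a simplification of what
# real analyzers compute).
import ast

def cyclomatic_complexity(source: str) -> int:
    branch_nodes = (ast.If, ast.For, ast.While, ast.BoolOp,
                    ast.ExceptHandler, ast.IfExp)
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, branch_nodes)
                   for node in ast.walk(tree))

code = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(x):
        pass
    return "ok"
"""
print(cyclomatic_complexity(code))  # 3: one `if`, one `for`, plus the base path
```

Metrics like this are cheap to compute on every commit, which is exactly what makes them useful as leading indicators, provided they are combined with the guardrails and targets described above.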

Each of these metrics is important and, in line with our guidance to establish guardrails and targets, there is good reason to include multiple metrics. In the full white paper on Measuring Software Quality, we describe metrics that are currently in use and motivate the need for additional, higher-level metrics.

Read or download the full white paper here.


Comments

  1. Y says

    February 26, 2021 at 6:19 pm

    Hello,
    I think it’s also worth mentioning how formal methods can help software developers shift their mindsets and think differently about software.
    Software design shouldn’t be just a bunch of diagrams or a few workflows; it’s more serious than that. Whether an application has a monolithic or a microservices architecture, the system should hold a set of properties. Most developers tend to focus on the “shape/form” of the system and “forget” about its “properties.”
    This fallacy is also observed in the testing phase; we focus more on the form/shape of testing and care less about its essence. A test suite should test the properties of the system, not just make sure some inputs match some outputs (generally speaking; I know there are areas where you don’t have much choice, like fuzzy logic, AI, image processing, etc., where metamorphic testing is widely used).
    Writing formal specifications can help us, as developers, think about problems we may have never considered, model check our specs, and make sure the properties we expect to hold are actually true.
    I hope IT Revolution will consider tackling this area in the future.
    Regards,

  2. Shamim Ahmed says

    February 27, 2021 at 6:30 pm

    How do I download the full report? When I click on that link, I get a page with no obvious way to download anything

    • IT Revolution says

      March 1, 2021 at 5:58 pm

      Hello, when you click through please click the Get button in yellow. It will ask you to sign in or sign up. Then you will be able to read and download the paper.

