November 30, 2018
The following is an excerpt from a presentation by Naomi Lurie, Director of Product, and Laksh Ranganathan, Senior Solutions Consultant at Tasktop, titled “What Enterprise IT Can Learn From a Startup’s Journey.”
You can watch the video of the presentation, which was originally delivered at the 2018 DevOps Enterprise Summit in London.
We’re from Tasktop. We’re not strictly a startup anymore; rather, we’re a small company. We were founded in 2007 in Vancouver, Canada. Today we have 160 people on staff and are always growing. One of the tenets and core values of our company is that we are customer- and partner-centric, which means that a lot of our roadmap is driven directly by customer and partner requests.
While we know we’re small, we have a pretty large and sophisticated product suite, so we’ve been able to take the journey into value stream management, which means our journey is like a microcosm of what some larger enterprises would experience. Since there are a lot of organizations today that feel like agile and DevOps aren’t enough and are expressing the need for something larger, like value stream management, we thought sharing our experience would be helpful.
This is one of the official definitions put out recently by Forrester in a report.
“Value stream management is the combination of people, process, and technology that maps, optimizes, visualizes, and governs business value flow through a heterogeneous enterprise software delivery pipeline.”
In my own words, I would say value stream management is a practice of managing software delivery in a way that provides greater and greater value to your customers while eliminating delays, improving quality, and getting rid of some of the bad things like too much cost or employee frustration. Value stream management is end to end. It starts with a customer and it ends with a customer, so anything that happens in that process is part of your value stream.
To be honest with you, a few years ago, we felt like we weren’t doing this very well, and there were three main factors as to why.
We said we needed to go back to doing what’s important to us, which is customer- and partner-centric innovation. We couldn’t scale the way we were. We needed to put in place value stream management. We needed to put in place an automated, bi-directional flow from start to finish, from customer request until it is delivered, and then through the feedback loop. We needed a set of metrics that could create visibility for our leadership to be able to prioritize and optimize how we were delivering value to our customers.
The solution we put in place was to connect our end-to-end toolchain so that everything being used by the specialists was creating value. The benefit was that we would reduce the waste and overhead the practitioners were dealing with on a day-to-day basis. That way they could devote more time to generating actual business value, instead of coordinating with colleagues, syncing up tools, etc. We knew this would also improve employee engagement and satisfaction.
The second thing we wanted to do was gather a traceable record of work as it flowed from tool to tool. We gathered all those little digital footprints of how work was progressing and collected them into metrics, with the hope that they would provide insight and management capability to our leadership. We didn’t just want those metrics to look at deployment lead time, cycle time from code commit to release, etc. We wanted them to cover everything from the original customer request until it was running in production. We wanted to have metrics from the moment a ticket was opened to when it was resolved.
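To make the distinction concrete, here is a minimal sketch of how end-to-end lead time (request opened to running in production) differs from the narrower commit-to-release cycle time. The records and timestamps are illustrative assumptions, not Tasktop’s actual data model.

```python
from datetime import datetime
from statistics import median

# Hypothetical records: each work item carries timestamps collected from the
# tools it passed through (ticket opened, code committed, released to production).
tickets = [
    {"opened": datetime(2018, 5, 1), "committed": datetime(2018, 5, 20), "released": datetime(2018, 6, 1)},
    {"opened": datetime(2018, 5, 3), "committed": datetime(2018, 5, 10), "released": datetime(2018, 5, 15)},
]

def lead_time_days(ticket):
    """End-to-end view: from the original request to running in production."""
    return (ticket["released"] - ticket["opened"]).days

def cycle_time_days(ticket):
    """Engineering-only view: from code commit to release."""
    return (ticket["released"] - ticket["committed"]).days

print(median(lead_time_days(t) for t in tickets))   # end-to-end lead time
print(median(cycle_time_days(t) for t in tickets))  # commit-to-release cycle time
```

The two numbers can diverge sharply: a team with a fast release pipeline can still have a long end-to-end lead time if requests sit upstream for weeks before engineering ever sees them.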
So, we embarked on a six-step process.
We started by looking at the value stream for every one of our products. What you’re seeing above is what it looks like for one of our products, Integration Hub. Now, you may be familiar with value stream mapping, or have done it as part of your DevOps transformation, but the map above is slightly different.
We wanted to be able to identify and understand how each of our work items or artifacts manifest as they flow through our value stream. We examined our customer-centric, value-adding work, and noticed they have three key sources. We had:
As we looked at it in detail, all of these artifacts manifested into other artifacts downstream. For example, a feature request would eventually get related to a Targetprocess feature, which would then create a Jira epic. Stories and tasks would get created, get committed, and get built and released in Jenkins. If we wanted to be able to trace how work and value flow across, we had to be able to map all of these artifacts together and automate that flow.
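The chain described above can be sketched as a simple traceability graph, where each downstream artifact records the upstream artifact it was derived from. The artifact IDs and link structure here are hypothetical, purely to illustrate the idea of walking work back to the original customer request.

```python
# Hypothetical traceability links: downstream artifact -> upstream artifact.
links = {
    "JENKINS-BUILD-512": "JIRA-STORY-77",
    "JIRA-STORY-77": "JIRA-EPIC-12",
    "JIRA-EPIC-12": "TP-FEATURE-4",
    "TP-FEATURE-4": "SF-REQUEST-901",
}

def trace_to_origin(artifact_id):
    """Walk the chain upstream until we reach the original customer request."""
    chain = [artifact_id]
    while chain[-1] in links:
        chain.append(links[chain[-1]])
    return chain

# Walks build -> story -> epic -> feature -> original Salesforce request.
print(trace_to_origin("JENKINS-BUILD-512"))
```

Once every tool integration maintains these links automatically, any build or release can be traced back to the request that motivated it, which is what makes end-to-end metrics possible.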
As mentioned above, we had two goals in mind. We needed to automate our flow, but we also wanted to get meaningful metrics so we could measure ourselves as we went along. The pitfall of working with different systems and different types of work is that it takes a lot of effort to match them all together and bring them to a point where you can get similar metrics from the separate information points. That wasn’t something we could afford to do manually, because we needed real-time metrics we could work with. So, we defined models that aligned back to business values (like features, defects, test cases, etc.) and mapped all of the artifacts we had to these models.
By doing so, we normalized the data points so our metrics for similar items could be collected. If you had an incident, a case, a defect from a code scan, or even a defect coming in from internal test, all of these would have the same data point so we could pull out similar metrics from them.
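A minimal sketch of what that normalization might look like, assuming a common “defect” model with hypothetical field names: incidents from ITSM and findings from a code scan arrive with different shapes, but both are mapped to the same fields, so the same metric can be computed regardless of source tool.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical common "defect" model: whatever the source tool, every defect-like
# artifact is normalized to the same fields.
@dataclass
class Defect:
    source: str
    opened: datetime
    resolved: datetime

def from_itsm_incident(inc):
    """Map an ITSM incident (tool-specific field names) to the common model."""
    return Defect("itsm", inc["created_at"], inc["closed_at"])

def from_code_scan_finding(finding):
    """Map a code-scan finding (different field names) to the same model."""
    return Defect("scan", finding["detected"], finding["fixed"])

def time_to_resolve_days(d: Defect) -> int:
    """One metric definition works for every normalized defect."""
    return (d.resolved - d.opened).days

incident = {"created_at": datetime(2018, 4, 1), "closed_at": datetime(2018, 4, 6)}
finding = {"detected": datetime(2018, 4, 2), "fixed": datetime(2018, 4, 4)}
defects = [from_itsm_incident(incident), from_code_scan_finding(finding)]
print([time_to_resolve_days(d) for d in defects])
```

The point of the model layer is exactly this: the metric is written once against the model, not once per tool.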
What that also helped us do was eliminate point-to-point mappings and reuse the models, which reduced the overhead of integrating and maintaining our synchronizations by about 80%.
Now that we had an idea of what we needed to flow and we had the models ready, we needed to start thinking about the integrations.
We started by looking at what we had at the moment. We were in good shape on the CI/CD side. We had integrated our Jira release pipeline, so we had metrics like cycle time from commit to release. We had already integrated our ITSM with our Jira, which meant we had mean time to resolution for our defects. We were also measuring the performance of the engineering team through metrics like throughput and epic burndown. That was all great.
The problem was that all of these were focused on the engineering side of things and were siloed in that world, which meant they were not linked to business priorities and product priorities.
The other key thing that was missing was a sense of true lead time from the point where a customer raises a request to the point where we fulfill that request.
To address this, we started one step upstream, with Targetprocess. We knew that a lot of the frustration within the organization was due to all of the overhead it took to keep the product teams and the engineering teams in sync. This led us to build an integration from our features to epics, and as soon as we did, it enabled these two teams to understand one another’s priorities.
There was a better flow of information. We were able to track how long it took for a piece of work to go from the point it was accepted by our product owner all the way to the point it was released. We were also able to see how it performed across the entire process between product management and development. We called this flow time and it became one of the key metrics that we measured.
Secondly, as we examined some of these metrics, we realized we could break down all the value-adding work into four key categories:
We also realized we could map out our finite resources and understand how we were allocating them between these four types of work. We did that for every one of our products so we could set the direction depending on the priorities for each of those products at that point.
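Allocation across categories can be computed directly from completed work items. The sketch below uses illustrative category labels as an assumption (the talk’s exact four categories are not reproduced here); the mechanics are the same regardless of the labels.

```python
from collections import Counter

# Hypothetical completed work items, each tagged with one of four categories.
# The category names are illustrative assumptions, not the talk's exact list.
completed = ["feature", "feature", "defect", "risk", "debt",
             "feature", "defect", "feature"]

def flow_distribution(items):
    """Share of finite capacity spent on each category of value-adding work."""
    counts = Counter(items)
    total = len(items)
    return {cat: round(100 * n / total, 1) for cat, n in counts.items()}

print(flow_distribution(completed))
```

A per-product view of this distribution is what lets leadership see, for example, that one product is spending most of its capacity on defects while another is almost entirely new-feature work, and steer accordingly.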
Having done all of that, though, we were still missing one key item here which was the feedback loop back to the customer.
This next step was pretty simple. It needed to be Salesforce. As soon as we linked our Salesforce feature requests to our Targetprocess features, we closed the loop. We were able to provide our field-facing teams with information about what was happening with a feature request. And as soon as it was released, they were able to speak with our customers to make sure they had those features available and were able to utilize them.
They were also able to give feedback on revenue generation. If there was a deal-breaking request, our leadership had better visibility into what impact delivering that feature would have on revenue. That, again, closed the feedback loop.
As we examined these metrics, one of the key things we are now looking at is whether we are visualizing this right. Are we missing any outliers? Do we need to stick with histograms, or do we need to look at it from a distribution perspective? Those are things we are trying to learn at the moment. Having closed that loop enables us to learn from our metrics and understand how we can improve on them.
Another key thing missing was our partner ecosystem, so we went ahead and integrated that as well. As we integrated with our partners’ agile tools, we were not only able to measure our own flow time, we were also able to measure flow times within those sections. This allowed us to optimize those processes and remove the bottlenecks in those areas.
One of our goals was also to create shared visibility. This wasn’t just something we wanted to exist in PowerPoint; we wanted it to be something anyone could come and use to see our value stream.
Above, you can see a view from the system with the same little icons as in the slides earlier. You can actually see those same tools and their connections laid out here.
The green lines indicate the tools that are integrated with one another. What you see coming up from the bottom are the various DevOps tools that are updating or creating artifacts in Jira. There’s also a database on the left side, where all the different tools are feeding their information into the centralized reporting database so that we can run the metrics off of them.
If you want to make it a little bit more complex, you can also display which type of artifacts are being exchanged between the tools.
For example, you can see that requests are going from Salesforce to Targetprocess, and then at Targetprocess, they can either become requests that go to Jira, or they can become features that go to Jira. You can see that desk.com is sending cases to Jira when they need to be escalated for an engineer to resolve.
Then on top of that, we can layer even more information from the models.
Above, we talked about normalizing the data, which is very important if you want to do real-time reports and not have to touch anything to get meaningful metrics. If you start again with Salesforce, you can see what’s coming from there is basically a customer feature request. It maps to a Targetprocess request via a request model. Then from Targetprocess to Jira, we have features becoming epics, and they’re normalized through the feature model. That all allows us to do reports on each of those model types, which is really the essence of what we’re looking for.
This is the leadership dashboard that we created based on all this data we’ve been accumulating. It’s all about flow. We want to capture how we are creating value for the business. How is value flowing through our value stream?
That’s the top-level dashboard we use to make directional decisions and help identify problems that need to be optimized. If there’s a problem, we’ll see Flow Time going up, Flow Efficiency going down, etc., and then we can drill down into these reports and find the bottlenecks.
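Flow Efficiency is commonly defined as active (value-adding) time divided by total flow time, so a drop signals work sitting in wait states somewhere in the stream. A minimal sketch of that definition, with illustrative numbers:

```python
def flow_efficiency(active_days: float, total_flow_days: float) -> float:
    """Percentage of total flow time spent actively working on the item.

    A low value means most of the elapsed time was spent waiting in queues,
    which is the usual signal to go hunting for the bottleneck.
    """
    if total_flow_days <= 0:
        raise ValueError("flow time must be positive")
    return round(100 * active_days / total_flow_days, 1)

# e.g. 6 active days inside a 30-day flow time
print(flow_efficiency(6, 30))
```

Seen this way, the drill-down the talk describes is just decomposing the denominator: finding which stage of the value stream contributes the most wait time.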
As we started down the path of value stream integration, we realized that CI/CD was a good first step, but it was just the beginning of the journey. As we optimized our release pipeline and took care of all the bottlenecks in that area, the bottlenecks, as they usually do, shifted upstream. If we hadn’t mapped our entire value stream, we wouldn’t have been able to identify and rectify those.
The other key insight we found was that we can get a lot of data from our tools, which are able to provide real-time metrics and an accurate view of our software delivery, which was great. We found that being able to collect that information as soon as possible, and normalize it in a way that will work now and in the future, was quite important. We managed to achieve this with the models, which enabled us to build and adjust our metrics quickly.
Having said that, arriving at the metrics that we have right now was a process that we did not get right the first time. But being able to pull that information and chart them out enabled us to learn from them and understand where some of the gaps were.
For example, we were able to understand how work related back to certain classifications and then tag those classifications so we could get the right metrics going forward.
Now, is it perfect today? Of course not. We are continuously learning. What we have right now is providing the information that our leadership needs, but we will have to keep learning and adapting as business needs and product needs change in the future.
Having done this, though, we were able to reduce our practitioners’ overhead so they could spend their time on the value-adding work they like to do. More importantly, we were able to improve the happiness rating within our teams.
From a leadership perspective, they now have better visibility into value creation across each of our products, and having closed that feedback loop means we can continuously learn, react to customer needs quickly and effectively, and optimize ourselves.
Value streams are all about the customer. You have the customer at the center, but if you zoom out through the delivery process, you can broadly categorize it into four main phases. You have…
If you drill down a little deeper, you’ll see many specific activities being conducted by very specialized roles, supported by multiple tools. In order to do value stream management, what you really seek to do is connect all the tools participating in the value stream, close the loop between them, and collect the data so that you have a complete picture.
If you’re doing DevOps and CI/CD is working and in place, now’s the time to start connecting upstream so you get the real end-to-end picture of value creation.
For us, being a small company, perhaps it was easier than for others, but we think you can start by doing something like this on a smaller product set.