The following is an excerpt from a presentation by Naomi Lurie, Director of Product, and Laksh Ranganathan, Senior Solutions Consultant at Tasktop, titled “What Enterprise IT Can Learn From a Startup’s Journey.”
You can watch the video of the presentation, which was originally delivered at the 2018 DevOps Enterprise Summit in London.
A little bit about us
We’re from Tasktop. We’re not strictly a startup anymore; rather, we’re a small company. We were founded in 2007 in Vancouver, Canada. Today we have 160 people on our staff and are always growing. One of the tenets and core values of our company is that we are customer- and partner-centric, which means that a lot of our roadmap is driven directly by our customer and partner requests.
While we know we’re small, we have a pretty large and sophisticated product suite, so we’ve been able to take the journey into value stream management. That means our journey is like a microcosm of what some larger enterprises would experience. Since there are a lot of organizations today that feel like agile and DevOps aren’t enough and are expressing the need for something larger, like value stream management, we thought sharing our experience would be helpful.
Let’s define value stream management
This is one of the official definitions put out recently by Forrester in a report.
“Value stream management is the combination of people, process, and technology that maps, optimizes, visualizes, and governs business value flow through a heterogeneous enterprise software delivery pipeline.”
In my own words, I would say value stream management is a practice of managing software delivery in a way that provides greater and greater value to your customers while eliminating delays, improving quality, and getting rid of some of the bad things like too much cost or employee frustration. Value stream management is end to end. It starts with a customer and it ends with a customer, so anything that happens in that process is part of your value stream.
To be honest with you, a few years ago, we felt like we weren’t doing this very well, and there were three main factors as to why.
- First of all, we had just experienced a large growth spurt. We almost doubled in size in two years. As a result, all of the intimate communications that were previously possible between the small set of people stopped working. We could no longer have everybody on a call or send an email and get everybody on the same page. It wasn’t possible. Because of that, our leadership started to feel like they were losing visibility over what was happening in the company, along with the priorities we were setting and the decisions we were making.
- Secondly, there was increased specialization. As we were hiring more people, we had people coming in with different roles. More business analysts, deployment consultants, technical account managers, etc. And these different people had different needs. We went from a company where we had 40 engineers and everybody on Jira to needing more and more tools to support the specialized roles. To give you an example, product owners were one of our growing teams, and they didn’t want to work in Jira because it didn’t work well for them when they’d have a barrage of competing requests. As a result, we had this silo between product and engineering. They each had their own tools and they were working separately. Which really meant they had to do a lot of manual copy and paste from system to system. In the end, our engineering manager said that he felt like the product secretary because he was the one having to keep notes and put the outcomes of discussions inside the epic in Jira, which was a total waste of his time.
- Finally, we were becoming more successful and had more partners. We had more suppliers. We had more subcontractors. We were starting to have a bigger ecosystem to manage, and the stakes were growing higher, but we couldn’t see those dependencies very clearly. We ended up finding ourselves in a bit of a place of crisis. If you think about the Three Ways of DevOps, you have Flow, Feedback, and Continual Learning. Our flow was okay. We had a fully automated release pipeline, CI/CD, etc. The problem was that our feedback loop to the business was broken. We had no way of keeping the business informed about what was being released by engineering. Which meant that if something slipped, we couldn’t communicate well back to the customers. Secondly, as a result of the feedback being broken, we couldn’t do continual learning very well.
Then, two years ago we staged an intervention.
We said we needed to go back to doing what’s important to us, which is customer- and partner-centric innovation. We couldn’t scale the way we were. We needed to put in place value stream management. We needed an automated, bi-directional flow from start to finish, from customer request until it is delivered, and then through the feedback loop. We needed a set of metrics that could create visibility for our leadership to be able to prioritize and optimize how we were delivering value to our customers.
The solution that we put in place was to connect our end to end toolchain so that everything being used by the specialists was creating value. The benefit being that we would reduce the waste and the overhead that the practitioners were having on a day to day basis. That way they could devote more time to generating actual business value, instead of coordinating with colleagues and syncing up tools etc. We knew that this would also improve employee engagement and satisfaction.
The second thing that we wanted to do was gather a traceable record of work as it flowed from tool to tool. We gathered all those little digital footprints of how work was progressing and collected them into metrics, with the hope that they would provide insight and management capability to our leadership. We didn’t just want those metrics to look at deployment lead time, cycle times from code commit to release, etc. We wanted them to cover everything from the original customer request until it was running in production. We wanted to have metrics from the moment a ticket is opened to the moment it is resolved.
So, we embarked on a six-step process.
Step 1: Diagram our toolchain & workflow
We started by looking at our value stream for every one of our products. What you’re seeing above is what it looks like for one of our products, Integration Hub. Now, you may be familiar with value stream mapping, or have done it as part of your DevOps transformation, but the map above is slightly different.
We wanted to be able to identify and understand how each of our work items or artifacts manifests as it flows through our value stream. We examined our customer-centric, value-adding work and noticed it had three key sources. We had:
- Customer requests that come through from our Salesforce system
- Defects that come through the ITSM system, desk.com, which is now part of Salesforce
- Features that are raised by our product owners to set the direction of the product (these sat in Targetprocess)
As we looked at it in detail, all of these artifacts manifested into other artifacts downstream. For example, a feature request would eventually get related to a Targetprocess feature, which would then create a Jira epic. Stories and tasks would get created, get committed, and get built and released in Jenkins. To be able to trace how work and value flow across, we had to map all of these artifacts together and automate that flow.
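The artifact chain described above can be sketched as linked records, where each downstream artifact keeps a reference to its upstream source. This is a minimal illustration, not Tasktop's actual schema; the tool names, artifact kinds, and IDs are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Artifact:
    tool: str          # e.g. "Salesforce", "Targetprocess", "Jira", "Jenkins"
    kind: str          # e.g. "request", "feature", "epic", "build"
    artifact_id: str
    upstream: "Artifact | None" = None  # link back to the originating artifact

def trace_to_source(artifact: "Artifact") -> list:
    """Walk upstream links from a downstream artifact back to the original request."""
    chain = [artifact]
    while chain[-1].upstream is not None:
        chain.append(chain[-1].upstream)
    return chain  # downstream-first; reverse for request-to-release order

# Illustrative chain: customer request -> feature -> epic -> released build
request = Artifact("Salesforce", "request", "SF-101")
feature = Artifact("Targetprocess", "feature", "TP-55", upstream=request)
epic = Artifact("Jira", "epic", "PROJ-7", upstream=feature)
build = Artifact("Jenkins", "build", "204", upstream=epic)

# A released build can now be traced all the way back to the customer request
assert [a.tool for a in trace_to_source(build)] == [
    "Jenkins", "Jira", "Targetprocess", "Salesforce"]
```

In a real integration the links would be maintained automatically by the synchronization layer rather than set by hand, but the traceability idea is the same.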
Step 2: Define artifact models
As mentioned above, we had two goals in mind. We needed to automate our flow, but we also wanted to be able to get meaningful metrics so we could measure ourselves as we went along. The pitfall of working with different systems and different types of work is that it takes a lot of effort to match them all together and bring them to a point where you can get similar metrics from the separate information points. That wasn’t effort we wanted to spend, because we needed real-time metrics we could work with. So, we defined models, which aligned back to business values (like features, defects, test cases, etc.), and mapped all of the artifacts that we had to these models.
By doing so, we normalized the data points so our metrics for similar items could be collected. If you had an incident, a case, a defect from a code scan, or even a defect coming in from internal test, all of these would have the same data point so we could pull out similar metrics from them.
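The normalization step above can be sketched as a mapping from each tool's payload onto one shared model. This is a hedged illustration: the field names for desk.com and Jira here are assumptions, not the real API schemas.

```python
def normalize_defect(source_tool: str, record: dict) -> dict:
    """Map a tool-specific payload onto a common 'defect' model
    so one metrics pipeline can serve every source."""
    if source_tool == "desk.com":   # support case (field names illustrative)
        return {"id": record["case_number"],
                "opened": record["created_at"],
                "resolved": record.get("closed_at"),
                "severity": record["priority"]}
    if source_tool == "jira":       # internal defect (field names illustrative)
        return {"id": record["key"],
                "opened": record["created"],
                "resolved": record.get("resolution_date"),
                "severity": record["priority"]}
    raise ValueError(f"no model mapping for {source_tool}")

case = normalize_defect("desk.com", {"case_number": "C-9", "created_at": "2018-05-01",
                                     "closed_at": "2018-05-04", "priority": "High"})
bug = normalize_defect("jira", {"key": "PROJ-12", "created": "2018-05-02",
                                "resolution_date": None, "priority": "High"})

# Both records now expose the same data points, whatever their origin
assert case.keys() == bug.keys()
```

Because every source maps to the same model, adding a new tool means writing one mapping to the model rather than one mapping per existing tool.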
What that also helped us do was eliminate point-to-point mappings and reuse the models, which reduced the overhead of integrating and maintaining our synchronizations by about 80%, which also helped us down the line.
Step 3: Take stock of what we had
Now that we had an idea of what we needed to flow and we had the models ready, we needed to start thinking about the integrations.
We started by looking at what we had at the moment. We were okay on the CI/CD side. We had integrated Jira with our release pipeline, so we had metrics like cycle time from commit to release. We had already integrated our ITSM tool with Jira, which meant we had mean time to resolution for our defects. We were also measuring the performance of the engineering team through metrics like throughput and epic burndown. That was all great.
The problem was that all of these were focused on the engineering side of things and they were siloed in that world. Which meant that they were not linked to business priorities and product priorities.
The other key thing that was missing was a sense of true lead time from the point where a customer raises a request to the point where we fulfill that request.
Step 4: Connect feature to release
To help this, we went one step upstream and started with Targetprocess. We knew that a lot of the frustrations we had within the organization were due to all of the overhead that it took to keep the product teams and the engineering teams in sync. This led us to build an integration from our features to epics, and as soon as we did, it enabled these two teams to understand one another’s priorities.
There was a better flow of information. We were able to track how long it took for a piece of work to go from the point it was accepted by our product owner all the way to the point it was released. We were also able to see how it performed across the entire process between product management and development. We called this flow time and it became one of the key metrics that we measured.
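The flow time metric described above is simple to compute once the acceptance and release timestamps travel with the work item. A minimal sketch, with illustrative timestamps:

```python
from datetime import datetime

def flow_time_days(accepted: str, released: str) -> float:
    """Flow time: elapsed days from the moment the product owner accepts
    a piece of work to the moment it is released."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(released, fmt) - datetime.strptime(accepted, fmt)
    return delta.total_seconds() / 86400  # seconds per day

# A feature accepted on March 1 and released on March 15 has a flow time of 14 days
assert flow_time_days("2018-03-01 09:00", "2018-03-15 09:00") == 14.0
```

The point of the integration is that these two timestamps live in different tools (Targetprocess and the release pipeline), so the metric is only possible once the artifacts are linked end to end.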
Secondly, as we examined some of these metrics, we realized we could break down all the value-adding work into four key categories:
- Feature work
- Defects
- Risk (such as security issues)
- Technical debt
We also realized we could map out our finite resources and understand how we were allocating between these four categories. We did that for every one of our products so we could define the direction depending on the priorities for each of those products at that point.
Having done all of that, though, we were still missing one key item: the feedback loop back to the customer.
Step 5: Connect request to release
This next step was pretty simple. It needed to be Salesforce. As soon as we linked our Salesforce feature requests to our Targetprocess features, we closed the loop. We were able to provide our field-facing teams with information about what was happening with a feature request. And as soon as it was released, they were able to speak with our customers to make sure that they had those features available and were able to utilize them.
They were also able to feed back information about revenue generation. If there was a deal-breaking request, our leadership had better visibility into what impact delivering that feature would have on revenue. Which again, closed that feedback loop.
As we examine these metrics, one of the key things we are now looking at is: are we visualizing this right? Are we missing any outliers? Do we need to stick with histograms, or do we need to look at it from a distribution perspective? Those are things we are trying to learn at the moment. Having closed that loop enables us to learn from our metrics and understand how we can improve on them.
Step 6: Reveal dependencies
Another key thing missing was our partner ecosystem, so we went ahead and integrated that as well. As we integrated with our partners’ agile tools, we were not only able to measure our own flow time, we were also able to measure flow times within those sections. This allowed us to optimize those processes and remove the bottlenecks in those areas.
One of our goals was also to create shared visibility. This wasn’t just something that we wanted to exist in PowerPoint, but something that anyone could use to see our value stream.
Above, you can see a view from the system with the same little icons as in the slides earlier. You can actually see those same tools and their connections laid out here.
The green lines indicate the tools that are integrated with one another. What you see coming up from the bottom are the various DevOps tools that are updating or creating artifacts in Jira. There’s also a database over there on the left side, where all the different tools are feeding their information into a centralized reporting database so that we can run the metrics off of it.
If you want to make it a little bit more complex, you can also display which type of artifacts are being exchanged between the tools.
For example, you can see that requests are going from Salesforce to Targetprocess, and then at Targetprocess, they can either become requests that go to Jira, or they can become features that go to Jira. You can see that desk.com is sending cases to Jira when they need to be escalated for an engineer to resolve.
Then, on top of that, we can layer even more information from the models.
Above, we talked about normalizing the data, which is very important if you want to do real-time reports and not have to touch anything to get meaningful metrics. If you start again with Salesforce, you can see what’s coming from there is basically a customer feature request. It is mapped to a Targetprocess request via a request model. Then from Targetprocess to Jira, we have features becoming epics, and they’re normalized through the feature model. That all allows us to do reports on each type of those models. Which is really the essence of what we’re looking for.
This is the leadership dashboard that we created based on all this data that we’ve been accumulating. It’s all about flow. We want to capture how we are creating value for the business. How is value flowing through our value stream?
- First, you will see Flow Velocity. This speaks to how many features our customers are getting in a three-month period of time. You can see that it’s categorized into those four categories from before: defects, security issues (which are risks), stories or features, and technical debt.
- Then, there is Flow Distribution. That’s where we ask, ‘are we putting the right capacity towards features versus tech debt versus risk?’ If you look down below at the line chart, the question there is: how efficient are we? How much wait time do we have? This is basically active time out of flow time.
- There’s Flow Time. This is the calculation of how long it takes us to go from start to finish in order to get something out, or how long it takes for a request to go through the system.
- And finally, Flow Load. This talks about how much load our teams have. If there’s too much load, teams are thrashing and their ability to deliver goes down.
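The four dashboard metrics above can be sketched over a list of work items. This is an illustrative sketch under the assumption that each item carries its category, its flow time, its active (non-waiting) time, and a done flag; the numbers are made up.

```python
from collections import Counter

# Hypothetical work items for one quarter (fields and values are illustrative)
items = [
    {"category": "feature", "flow_time_days": 12, "active_days": 4, "done": True},
    {"category": "defect",  "flow_time_days": 5,  "active_days": 3, "done": True},
    {"category": "risk",    "flow_time_days": 8,  "active_days": 2, "done": True},
    {"category": "debt",    "flow_time_days": 20, "active_days": 5, "done": False},
]

done = [i for i in items if i["done"]]

flow_velocity = len(done)                                 # items completed in the period
flow_distribution = Counter(i["category"] for i in done)  # mix of feature/defect/risk/debt
flow_time = sum(i["flow_time_days"] for i in done) / len(done)  # average start-to-finish
flow_efficiency = (sum(i["active_days"] for i in done)
                   / sum(i["flow_time_days"] for i in done))    # active time out of flow time
flow_load = len(items) - len(done)                        # work in progress right now

assert flow_velocity == 3 and flow_load == 1
assert round(flow_efficiency, 2) == 0.36  # 9 active days out of 25 flow days
```

In practice the item records would come out of the centralized reporting database fed by the integrated tools, not a hand-written list, but the calculations are this simple once the data is normalized.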
That’s the top-level dashboard that we use to make directional decisions and to help identify problems that need to be optimized. If we see that there’s a problem, that Flow Time is going up, Flow Efficiency is going down, etc., then we can drill down into these reports and find the bottlenecks.
As we started down the path of value stream integration, we realized that CI/CD was a good first step, but it was just the beginning of the journey. As we optimized our release pipeline and took care of all the bottlenecks in that area, the bottlenecks, as they usually do, shifted upstream. If we hadn’t mapped our entire value stream, we wouldn’t have been able to identify and rectify those.
The other key insight we found was that we can get a lot of data from our tools, which have the ability to provide real-time metrics and an accurate view of our software delivery, which was great. We found that being able to collect that information as soon as possible and normalize it in a way that will work now and in the future was quite important. We managed to achieve this with the models, which enabled us to build and adjust our metrics quickly.
Having said that, arriving at the metrics that we have right now was a process that we did not get right the first time. But being able to pull that information and chart them out enabled us to learn from them and understand where some of the gaps were.
For example, we were able to understand how work related back to certain classifications, and then tag those classifications so we could get the right metrics going forward.
Now, is it perfect today? Of course not. We are continuously learning. What we have right now is providing the information that our leadership needs, but we will have to keep learning and adapting as business needs and product needs change in the future.
Having done this, though, we were able to reduce our practitioners’ overhead and let them spend their time on the value-adding work they like to do. More importantly, we were able to improve the happiness rating within our teams.
From a leadership perspective, they now have better visibility into value creation across each of our products, and having closed that feedback loop means that we can continuously learn, react to customer needs quickly and effectively, and optimize ourselves.
Value streams are all about the customer. You have the customer at the center, but if you zoom out through delivery processes, you can surely categorize them into four main phases. You have…
- The ideation phase
- The implementation or the creation phase
- Then you release
- And then you’re into operations
If you drill down a little deeper, you’ll see that you have many specific activities being conducted by very specialized roles, which are supported by multiple tools. In order to do value stream management, what you really seek to do is to connect all the tools that are participating in the value stream, close the loop between them, and collect the data so that you can have a complete picture.
If you’re doing DevOps and CI/CD is working and in place, now’s the time to start connecting upstream so you get the real end-to-end picture of value creation.
For us, being a small company, perhaps it was easier than for others, but we think you can start doing something like this on a smaller product set.