December 14, 2021
This case study has been excerpted from the second edition of The DevOps Handbook by Gene Kim, Jez Humble, Patrick Debois, John Willis and Nicole Forsgren, PhD.
Over the last seven years, Capital One has been undergoing an Agile/DevOps transformation. In that time, they’ve moved from waterfall to Agile, from outsourcing to insourcing and open source, from monolithic to microservices, and from data centers to the cloud.
But they were still facing a big problem: an aging customer servicing platform. This platform serviced tens of millions of Capital One credit card customers and generated hundreds of millions of dollars in value to the business. It was a critical platform, but it was showing its age and was no longer meeting customer needs or the internal strategic needs of the company. They needed to not only solve the technology/cyber-risk problem of the aging platform but also increase the NPV (net present value) of the system.
“What we had was a mainframe-based vendor product that had been bandaged to the point where the systems and operational teams were as large as the product itself. . . . We needed a modern system to deliver on the business problem,” says Rakesh Goyal, Director, Technology Engineering at Capital One, at the 2020 DevOps Enterprise Summit.
They started with a set of principles to work from. First, they worked backwards from the customer’s needs. Second, they were determined to deliver value iteratively to maximize learnings and minimize risk. And third, they wanted to avoid anchoring bias. That is, they wanted to make sure they were not just building a faster and stronger horse but actually solving a problem.
With these guiding principles in place, they set about making changes. First, they looked at their platform and its customers, then divided those customers into segments based on their needs and the functionality they required. Importantly, they thought strategically about who their customers were, because their customers weren’t just credit card holders; they also included regulators, business analysts, internal employees who used the system, and others.
“We use very heavy human-centered design to ensure that we are actually meeting the needs [of our customers] and not just replicating what was there in the old system,” says Biswanath Bosu, Senior Business Director, Anti-Money Laundering-Machine Learning and Fraud at Capital One.
Next, they ranked these segments to determine the sequence in which they would be deployed. Each segment represented a thin slice that they could experiment with, see what worked and what didn’t, and then iterate from there.
“As much as we were looking for an MVP [minimum viable product], we were not looking for the least common denominator. We were looking for the minimum viable experience that we could give to our customers, not just any small product we could come up with. Once we test that piece out and it works, the next thing we will do is just essentially scale it up,” explained Bosu.
As part of the platform transformation, it was clear they would need to move to the cloud. They would also need to invest in and evolve the tools in their toolbox, and to invest in reskilling their engineers, so that teams had the appropriate tooling to stay agile during the transformation.
They settled on an API-driven, microservice-based architecture. The goal was to build and sustain it incrementally, slowly expanding into various business strategies.
“You can think of this as having a fleet of smart cars built for specific workloads rather than one futuristic car,” explains Goyal.
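To make the architecture concrete, here is a minimal, hypothetical sketch of one such narrowly scoped, API-driven service using only Python’s standard library. The endpoint path, field names, and port are invented for illustration; the handbook excerpt does not show Capital One’s actual design.

```python
# Hypothetical sketch: one small, API-driven service from a "fleet" of services,
# each built for a specific workload. Endpoint and fields are illustrative only.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class StatementHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve a single, narrow capability: look up statement availability.
        if self.path.startswith("/statements/"):
            account_id = self.path.rsplit("/", 1)[-1]
            body = json.dumps({"accountId": account_id, "status": "available"})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_response(404)
            self.end_headers()


if __name__ == "__main__":
    # Each service like this can be built, deployed, and scaled independently.
    HTTPServer(("localhost", 8080), StatementHandler).serve_forever()
```

The point of the sketch is the scope, not the framework: each service owns one workload behind an API, so it can evolve incrementally without dragging the rest of the platform along.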
They began by leveraging proven enterprise tools. By standardizing, they could react faster to situations where engineers needed to contribute to other teams or move from one team to another.
Building out their CI/CD pipeline enabled incremental releases and empowered teams by reducing cycle time and risk. As a financial institution, they also had to address regulatory and compliance controls. Using the pipeline, they were able to block releases when certain controls were not met.
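As a rough sketch of how such a gate might work (the control names below are invented for illustration, not Capital One’s actual controls), a pipeline stage can simply fail when any required control has not passed:

```python
# Hypothetical sketch of a pipeline compliance gate: the release stage exits
# non-zero, and therefore blocks the release, unless every required control
# reports success. Control names are illustrative only.

REQUIRED_CONTROLS = [
    "unit_tests_passed",
    "static_analysis_clean",
    "dependency_scan_clean",
    "change_ticket_approved",
]


def release_allowed(control_results):
    """Return True only if every required control passed."""
    missing = [c for c in REQUIRED_CONTROLS if not control_results.get(c, False)]
    if missing:
        print("Release blocked; failing controls: " + ", ".join(missing))
        return False
    return True


if __name__ == "__main__":
    # Example run: a failed dependency scan stops the release.
    results = {
        "unit_tests_passed": True,
        "static_analysis_clean": True,
        "dependency_scan_clean": False,
        "change_ticket_approved": True,
    }
    if not release_allowed(results):
        raise SystemExit(1)  # a non-zero exit fails this pipeline stage
```

Wiring a check like this into the shared pipeline means the control is enforced automatically on every release rather than verified manually after the fact.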
The pipeline also allowed teams to focus on product features, since the pipeline was a tool to leverage rather than a required investment from each team. At the height of their effort, they had twenty-five teams working and contributing simultaneously.