
December 14, 2021

Biz and Tech Partnership toward Ten “No Fear Releases” Per Day at Capital One

By IT Revolution

This case study has been excerpted from the second edition of The DevOps Handbook by Gene Kim, Jez Humble, Patrick Debois, John Willis and Nicole Forsgren, PhD.


Over the last seven years, Capital One has been undergoing an Agile/DevOps transformation. In that time, they've moved from waterfall to Agile, from outsourcing to insourcing and open source, from monolithic to microservices, and from data centers to the cloud.

But they were still facing a big problem: an aging customer servicing platform. This platform serviced tens of millions of Capital One credit card customers and generated hundreds of millions of dollars in value to the business. It was a critical platform, but it was showing its age and was no longer meeting customer needs or the internal strategic needs of the company. They needed to not only solve the technology/cyber-risk problem of the aging platform but also increase the NPV (net present value) of the system.

“What we had was a mainframe-based vendor product that had been bandaged to the point where the systems and operational teams were as large as the product itself. . . . We needed a modern system to deliver on the business problem,” says Rakesh Goyal, Director, Technology Engineering at Capital One, at the 2020 DevOps Enterprise Summit.

They started with a set of principles to work from. First, they worked backwards from the customer’s needs. Second, they were determined to deliver value iteratively to maximize learnings and minimize risk. And third, they wanted to avoid anchoring bias. That is, they wanted to make sure they were not just building a faster and stronger horse but actually solving a problem.

With these guiding principles in place, they set about making changes. First, they took a look at their platform and its set of customers. Then they divided those customers into segments based on their needs and the functionality they required. Importantly, they thought strategically about who their customers were, because it wasn't just credit card holders: their customers also included regulators, business analysts, internal employees who used the system, and others.

“We use very heavy human-centered design to ensure that we are actually meeting the needs [of our customers] and not just replicating what was there in the old system,” says Biswanath Bosu, Senior Business Director, Anti-Money Laundering-Machine Learning and Fraud at Capital One.

Next, they prioritized these segments, determining the sequence in which they would be deployed. Each segment represented a thin slice that they could experiment with, see what worked and what didn’t, and then iterate from there.

“As much as we were looking for an MVP [minimum viable product], we were not looking for the least common denominator. We were looking for the minimum viable experience that we could give to our customers, not just any small product we could come up with. Once we test that piece out and it works, the next thing we will do is just essentially scale it up,” explained Bosu.

As part of the platform transformation, it was clear they would need to move to the cloud. They would also need to invest in and evolve the tools in their toolbox, and to invest in reskilling their engineers, providing them with the appropriate tooling to stay agile during the transformation.

They settled on building an API-driven, microservices-based architecture. The goal was to build it incrementally and sustain it over time, gradually expanding to support various business strategies.

“You can think of this as having a fleet of smart cars built for specific workloads rather than one futuristic car,” describes Goyal.

They began by leveraging proven enterprise tools. By standardizing, they could react faster to situations where engineers needed to contribute to other teams or move from one team to another.

Building out their CI/CD pipeline enabled incremental releases and empowered teams by reducing cycle time and risk. As a financial institution, they also had to address regulatory and compliance controls. Using the pipeline, they were able to block releases when certain controls were not met.
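The talk doesn’t detail how these gates were implemented, but a compliance gate of this kind can be sketched as a pipeline stage that fails, and therefore blocks the release, whenever a required control is unmet. In the minimal Python sketch below, the control names and the results file are hypothetical illustrations, not Capital One’s actual controls or tooling:

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a CI/CD compliance gate (illustration only).

A pipeline would run this script as a release stage; a non-zero exit code
blocks the release when a required control is not met. The control names
and the results file below are invented for illustration and are not
Capital One's actual controls or tooling.
"""
import json
import sys
from pathlib import Path

# Assumption: earlier pipeline stages record their outcomes in a JSON file,
# e.g. {"unit_tests": "passed", "security_scan": "passed", "change_ticket": "approved"}
RESULTS_FILE = Path("pipeline-results.json")

# Each required control and the status it must report for a release to proceed.
REQUIRED_CONTROLS = {
    "unit_tests": "passed",
    "security_scan": "passed",
    "change_ticket": "approved",
}


def main() -> int:
    results = json.loads(RESULTS_FILE.read_text())
    unmet = [
        name
        for name, required in REQUIRED_CONTROLS.items()
        if results.get(name) != required
    ]
    if unmet:
        print(f"Release blocked; unmet controls: {', '.join(unmet)}")
        return 1
    print("All required controls met; release may proceed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Because the gate lives in the shared pipeline rather than in each team’s codebase, every release gets the same control checks without individual teams having to build or maintain them.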

The pipeline also allowed teams to focus on product features, since the pipeline was a tool to leverage rather than a required investment from each team. At the height of their effort, they had twenty-five teams working and contributing simultaneously.
