
February 1, 2023

An Automated Governance Superhighway: A Story of Changing the Game to Achieve Your Goals

By Michael Edenzon, John Rzeszotarski

It’s okay not to be a perfect steward of DevOps, especially in highly regulated organizations. Sometimes you need to change the game to achieve your goals. That’s just what John Rzeszotarski and Michael Edenzon learned as they tried to automate the technological governance at PNC, a financial services firm.

PNC had launched an automated governance project on the heels of its DevOps transformation. Times were good and people were motivated. John Rzeszotarski, then Managing Director, SVP of Enterprise Technology, and Michael Edenzon, then Director of DevOps, were feeling idealistic about what it meant to engineer great software at a highly regulated institution. They believed in things like developer freedom and flexibility, and they believed they could bend the bureaucracy to their will.

This was a fallacy. After a while, they had to abandon their beliefs in order to achieve their desired outcome: secure, compliant software, and a faster time to market. And all this had to be fully auditable.

In this post, adapted from their 2022 DevOps Enterprise Summit presentation, John and Michael share their initial beliefs, derived from the ethos of the DevOps movement, and talk about the things they learned along the way and how they transformed their beliefs. In the end, they achieved a working automated governance, even if it didn’t look exactly like they thought it would initially. Their journey also became the basis for much of the story in Investments Unlimited: A Novel about DevOps, Security, Audit Compliance, and Thriving in the Digital Age.


The Movement Towards DevOps

John Rzeszotarski: We’ve been talking about how, since 2014/2015, enterprises have been moving toward DevOps and carrying a lot of DevOps capabilities across the industry. But when we go back to think about it, these original ideas came out of big tech companies that were really focused on building the big web, and they got a ton of awesome benefits from it: 10 Deploys a Day at Flickr, etc.

One big problem was regulated industries. They got to take advantage of a lot of the DevOps movement, but they didn’t get the value that these big tech companies were getting, because they have many lines of defense and many problems to solve that there’s not really a great playbook for.

So we also come across a lot of people who would say, “Well, we do this already. We do secure coding standards, we do pipeline integration, we have automatic change record creation, and we have a million tools that go out and do this.” And our argument is a little bit different.

This is a big transformation. This is going to be reaching across the aisle, talking to a lot of different teams in a lot of different groups. You’re going to have to build provenance for ten different runtimes. You’re going to have to work on distributed and non-distributed systems. You’re going to have to deal with a ton of legacy code bases. This is not a cookie-cutter solution. So whatever you do, don’t go buy DevOps with a side of automated governance by any means.

Mike and I had this main goal to do continuous delivery at a large financial services institution. We wanted to do 10 deploys a day. And when we stepped back and looked at it, we saw that we had a lot of procedures, a lot of policies, a lot of best practices, a lot of things that we had to follow. A lot of those turned into requirements for something that we needed to engineer.

So if you think about it, basically we’re sitting in a conference room and a voice calls out into the speaker phone, “Hey, if you build it, they will come because engineers and developers absolutely want this.”

Automated Governance in Regulated Industries

Michael Edenzon: Yeah. So we asked ourselves, “Well, build what? What are we building?” We had this vision of a superhighway, and this superhighway would allow a developer to go from new code to production without any human intervention, without any manual work, fully automated. And I’ve talked to some of you who are at these cutting-edge tech companies and you say, “Well, we’ve been doing this for years. This is something that we do.” And that’s true.

But for those of you who are in regulated institutions like banks, defense contractors, and healthcare companies, I think you’ll know that there are a lot of manual processes that have been built up over the years, all related to regulatory requirements. And that makes it really difficult to do this in an automated fashion.

So when we had this idea, we had this vision, we asked ourselves, well, how are we going to do it?

How We Got Started with Automated Governance

John Rzeszotarski: So where do all great stories originate? A paper from the DevOps Enterprise Forum. In 2019, John Willis pulled some of us together to write a ton of things around controls. We got to talk about exactly what evidence cannot change, what ephemeral evidence really meant, and how you needed to quantify it. And we got to talk about a lot of best practices associated with what we needed to build. And this really gave us our blueprint. (You can read the paper, DevOps Automated Governance Reference Architecture, here.)

Michael Edenzon: So when we were talking about the automated governance reference architecture and what it would mean in its final state after we’d built it, I think John and I looked at each other and we said, “Wow, this is going to have a huge impact on the developer experience at our company.”

I think we had to be really careful and say, “Well, we don’t want this to fall into the wrong hands because this is going to give a lot of power and it could improve developer experience, but if you use it in the wrong way, it could also make developer experience a lot worse.”

So in the beginning, we set forth on our beliefs, and these are beliefs that we derived from all of you here in this community. And I think they’re beliefs that we all share. We’ll list just a few of them and how they pertain to what we were doing.

Developer Autonomy

Our first belief was developer autonomy. The way that we saw it, developers should have the autonomy and freedom to choose things like their IDE, their build pipeline, their builder image, even the freedom to prioritize their own work sets with the business, free of any administrative oversight from someone who may not be on the team.

Positive Incentives

Secondly, we believed in carrots instead of sticks. And I don’t think I need to explain this one too much, but we believed in positive incentives instead of punitive justice.

Simple Shared Language

And lastly, we believed in no downward pressure. We didn’t want to do anything that put an undue burden on the developer experience. So our strategy was to create a simple shared language: we took the principles of the automated governance reference architecture and boiled them down to a very simple shared language.

An Example

Let’s take a given control in the development process. It can have four states: pass, warn, fail, or not found.

Passing means you’re good to go. Fail obviously means you’re not. And not found means we don’t have evidence for that control, so we’re going to deem it a failure.
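In code, this simple shared language might look something like the following minimal sketch. Only the four states and the "not found is deemed a failure" rule come from the talk; the `ControlState` enum, the `effective_state` helper, and the control names are hypothetical illustrations, not PNC's actual tooling:

```python
from enum import Enum

class ControlState(Enum):
    """The four states a control can have in the shared language."""
    PASS = "pass"
    WARN = "warn"
    FAIL = "fail"
    NOT_FOUND = "not_found"

def effective_state(evidence: dict, control: str) -> ControlState:
    """Look up a control's state in the collected evidence.

    A control with no evidence at all is NOT_FOUND, and the shared
    language deems that a failure.
    """
    state = evidence.get(control, ControlState.NOT_FOUND)
    return ControlState.FAIL if state is ControlState.NOT_FOUND else state

# Hypothetical evidence for one application build:
evidence = {"unit_tests": ControlState.PASS, "static_scan": ControlState.WARN}
good_to_go = effective_state(evidence, "unit_tests")     # PASS
no_evidence = effective_state(evidence, "secrets_scan")  # FAIL (not found)
```

The point of the sketch is the collapse at the end: any control you cannot see evidence for is treated exactly like a failing one.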

Now, when we built this product and we rolled it out, we were really excited about it. We built it. We had built this superhighway. But the problem was we built it and they didn’t come. The reason is that onboarding failed. And when we dug deeper, we realized that onboarding failed because you can’t onboard what you can’t see.

This challenged our first belief, and we had to make a choice. Do we promote autonomy or standardization?

I don’t want to make it seem like this is a binary decision. I’m not saying that you can’t have standardization while still preserving autonomy, but in our case, we really had to look at putting in some serious guardrails. At this point in time, developers could have any permutation of build patterns, and it made it impossible for us to see what was going on. We couldn’t onboard what we couldn’t see. So we standardized.

We actually used our automated governance tool to standardize, and onboarding took off really quickly.

So we looked at each other and said, “Wow. All right. We’re on the right path now.” But then really quickly, it fell flat, and that’s because compliance didn’t improve.

We had all this great observability. We were producing immutable attestations, and we were giving great continuous feedback to developers, but they weren’t using it to get compliant. And if they weren’t getting compliant, they couldn’t do automated releases.

So the automated releases flatlined.

So we dug deeper and realized that what we sold was really different than what the developers saw. We sold a superhighway that was driven by compliance. And what the developers saw was the reality of software development. And that is that their apps were not compliant.

We had a long way to go as an institution to get these applications to the point where they could be released.

So there’s a bit of a disconnect that we realized in this process. We had to decide: were we going to feed them a fish or teach them to fish?

Because we were the proprietors of the data that was given to the developers, they thought that we were responsible for remediating the compliance. We had to teach them that, no, you own your own compliance. Only the development teams can be responsible for their applications.

In summary, our first carrot didn’t work. We had to try something different.

Developers weren’t using the carrot of automated releases. So we said, “Well, what about scorecards?”

The scorecard was pretty simple. We took controls and we bundled them together. Then we said, all right, A, B, C, D, F. We’re going to aggregate and then give you a score.

But the scorecard gave us another question: what should the data be used for? Do we package this data into these scores and throw it over the fence and say, don’t talk to us until you’re compliant? Or do we hand over all the troves of data, at which point we risk someone misusing the data for a game of gotcha?

Because all a scorecard really is, is an abstraction. It’s packaged information. It’s in the aggregate, and it abstracts the details so that you can make a directionally accurate decision as to how compliant something is.
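A scorecard like the one described above might be sketched as follows. Only the idea of bundling controls and aggregating to an A-through-F grade comes from the talk; the grading thresholds, function name, and control names are hypothetical:

```python
def letter_grade(control_states: dict) -> str:
    """Aggregate a bundle of control states into one letter grade.

    Grades by the fraction of passing controls. The 90/80/70/60 cut
    lines are illustrative assumptions, not the actual scoring rules.
    """
    passing = sum(1 for state in control_states.values() if state == "pass")
    ratio = passing / len(control_states)
    if ratio >= 0.9:
        return "A"
    if ratio >= 0.8:
        return "B"
    if ratio >= 0.7:
        return "C"
    if ratio >= 0.6:
        return "D"
    return "F"

# One hypothetical bundle: three of four controls pass (75%), so grade C.
bundle = {"unit_tests": "pass", "static_scan": "pass",
          "secrets_scan": "fail", "dependency_audit": "pass"}
grade = letter_grade(bundle)  # "C"
```

Notice what the aggregation throws away: the grade tells you roughly how compliant the bundle is, but not which control failed or why, which is exactly the "directionally accurate but not granular" problem described next.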

But the problem with directionally accurate is that it’s not sufficient for automated governance. We need granularity in our data. So when we provided the details, it presented a different problem: with great power comes great responsibility. There was a huge risk of the data being taken out of context and misused. John has a good example of this.

John Rzeszotarski: So you’ve got to remember, you’re going to be reaching across a lot of aisles when working on something like automated governance. And a lot of the groups you’re going to be working with are not technologists, not developers, not engineers. When you say things in procedures and policies, it can have a very, very negative effect.

Some of the research we found when we talked to a couple of other organizations: something as simple as turning off unit tests during a build process is an infraction of a policy. In a couple of instances, we found that it was grounds for termination. So you really have to think very carefully about applying things like this.

Jason Cox said something that really hit home with me, which is this idea of proximity. And this is exactly it, you need to stay very close to the risk units that are going to be enforcing these procedures and policies that you’re going to write, because this could be incredibly detrimental to your culture and your development community.

Michael Edenzon: In short, we tried the scorecard and it just didn’t matter. It didn’t make a difference. There was a nominal increase in compliance, but it got us nowhere near where we needed to be. So we had to try something different once again.

And this challenged our third belief of no downward pressure.

When we looked at it, we saw we had the mother lode of all downward pressure: data. We had all the data we needed to simply start blocking deployments. We could just interject ourselves into the release process. Downward pressure is the ability someone has to interject themselves directly into the workflow of a developer, because that’s the most convenient time to get the developer to do what you want them to do. And when that adds up, it becomes a real bottleneck.

So we asked, “Are we sure we want to do this? Just because we can, does that mean that we should?”

Our original belief had come from Sidney Dekker in Just Culture, when he said that anything that puts downward pressure in your organization on honesty, openness, and learning is bad for business.

And we believed that, but we decided to do it anyway.

So we put in our automated enforcement. We alerted developers throughout the build and pipeline process, then warned when it came time to deploy to a lower environment. If you still went forward with your production deployment, we’d block you.
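The escalating policy described above (alert during the build, warn on lower-environment deploys, block only at production) could be sketched as a single decision function. The environment names, function signature, and return values are illustrative assumptions, not the actual enforcement tooling:

```python
def enforce(environment: str, failed_controls: list) -> str:
    """Escalating enforcement for a deployment with failing controls.

    - build/pipeline stages: informational alerts only
    - lower environments:    warn, but let the deploy continue
    - production:            hard gate, deployment is blocked
    """
    if not failed_controls:
        return "proceed"  # fully compliant, nothing to enforce
    if environment == "production":
        return "block"    # hard stop: the release does not go out
    if environment in ("dev", "qa", "staging"):
        return "warn"     # soft gate: deploy continues, developer is notified
    return "alert"        # earlier pipeline stages: feedback only

# A team with a failing control gets warned in QA but blocked at production:
in_qa = enforce("qa", ["secrets_scan"])           # "warn"
in_prod = enforce("production", ["secrets_scan"]) # "block"
```

The design choice worth noting is that the same control data drives every stage; only the consequence escalates as the release gets closer to production.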

John’s going to talk a little bit about why we decided to do this.

John Rzeszotarski: This decision really hit our beliefs. Because this is not something we wanted to implement. This is not something we wanted to do.

We wanted to build a high-trust organization with many, many types of things. So we really had to convince ourselves, well, why is this the right way to go?

And Andrew Clay Shafer talks about finding the thing that changes the game. And the Gatling gun is the example he always gives, which was a game changer in the American Civil War.

We thought that our game-changer was implementing automated governance within our organization. When we looked at players in this game, we had product owners, developers, engineers on one side, and then governance, risk and compliance on the other side.

And if we think about the optimal outcomes for these two different groups, if I’m a product engineer—if you read the book (Investments Unlimited), Bill Lucas is obviously the product owner—I say, “We want to do features. They’re the most important. They’re wild, wild west. That’s what we want to get done.”

For governance, risk and compliance, the optimal outcome is a very lengthy review process. “We are going to check this thing to death before we actually let it out there.” Not surprisingly, the not optimal outcome is the exact opposite for these two players, which is why we’re stuck in this game that we’re playing.

So what does the industry do? The industry falls back to exactly what it knows: a very subjective change process. I’m essentially going to fill out a mortgage application as my change record with all that documentation, and then I’m going to try to get through my approvals depending on who likes me that week. Hopefully, I can get my change approved and it will get put into some type of change window.

And the ironic part that Mike and I were really struggling with was, “Well, why isn’t automated governance the optimal outcome for these two parties?” It ended up being the not-optimal outcome for both of them.

The reason for that: for developers and engineers, automated governance was yet another thing they had to implement, just one more item for the backlog. And on the GRC side, it looked like small-batch releases. Producing change that often is not an easy thing for a governance, risk, and security organization to swallow.

So the prisoner’s dilemma teaches us that the not-optimal outcomes for the two parties playing the game are actually what’s best for the entire organization. So it really fit. Now we had to figure out a game changer for our game changer. How do we change this game in order to get buy-in for the not-optimal outcomes from these two parties?

And that ended up being automated enforcement because it absolutely removed the other outcomes that were even possible for these two players and forced everyone down the path of the not optimal solution.

Michael Edenzon: So real quick, we’re just going to hit some tactics. We started with simple controls and soft gates, meaning that failing teams would get a warning: hey, coming up soon, we’re going to start blocking you for this. And then really quickly we moved to put in the hard gates. We recommend that because you want to send that shockwave through your organization so that people know this is now how things are going to be. Then quickly move to more difficult gates, all the way up to your most difficult controls. And I think you’ll see that your developers are going to make the adjustment very, very quickly. But I will warn you: we did not make any friends in this process.
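The rollout tactic (start every control as a soft gate, then flip it to a hard gate, simplest controls first) might be expressed as a schedule like the one below. The control names, dates, and `gate_mode` helper are hypothetical illustrations of the phasing idea, not the actual rollout plan:

```python
from datetime import date

# Hypothetical rollout schedule: each control starts as a soft gate (warn)
# and flips to a hard gate (block) on its enforcement date, with the
# simplest controls scheduled first and the most difficult ones last.
ROLLOUT = [
    {"control": "unit_tests_enabled", "hard_gate_on": date(2021, 1, 1)},
    {"control": "static_scan_clean",  "hard_gate_on": date(2021, 4, 1)},
    {"control": "dependency_audit",   "hard_gate_on": date(2021, 8, 1)},
]

def gate_mode(control: str, today: date) -> str:
    """Return 'block' once a control's hard-gate date has passed, else 'warn'."""
    for entry in ROLLOUT:
        if entry["control"] == control:
            return "block" if today >= entry["hard_gate_on"] else "warn"
    return "warn"  # controls not yet on the schedule stay soft gates

# Before its enforcement date a failing control only warns; after, it blocks:
early = gate_mode("static_scan_clean", date(2021, 2, 15))  # "warn"
later = gate_mode("static_scan_clean", date(2021, 5, 1))   # "block"
```

Publishing the flip dates ahead of time is what delivers the warning-then-shockwave sequence described above: teams can see exactly when each soft gate becomes hard.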

John Rzeszotarski: Mike has to walk me to my car.

Michael Edenzon: Yeah, exactly. We’re not trying to scare you, but we just want you to know that this is something that’s not going to be a favorite amongst your developers at first. But we think it’s worth it because what we saw after we implemented this and after the developers started to change their behaviors was pretty extraordinary. We saw a dramatic uptick in engagement, and that was really encouraging to us because engagement to us meant that developers were starting to care and they were starting to take ownership of their compliance. They were trying to figure out “how do I remediate this failed control?” And then they started helping each other, which was another thing that we didn’t expect. So it created a sense of community, and that engagement was encouraging, and that’s when we knew we had it.

But also, there were a lot of stakeholders in the beginning of this process that were really worried that we would cause production incidents by blocking deployments because teams would miss their change window and it would have a cascading effect. And in our research and in our experience, we found zero instances of that happening. These teams took it very seriously and they know what they need to do to make sure that they don’t cause these incidents. So we tell you all this to say, “If you’re going to try this, go into it with a lot of confidence. Know that you’re not going to be a fan favorite, but that it will work.”

And most importantly, we reached our goal: compliance shot up, and automated releases followed soon thereafter. So we were able to realize our vision by pretty much abandoning all of the beliefs that we held in the very beginning.

John Rzeszotarski: So regulated industries are inherently consequence-based models. Those consequence-based models go all the way down into the application teams and permeate the organization.

We understand that a lot of people aren’t going to agree with some of this. In fact, we’ve had people tell us very much to our face that this is the wrong model, you’re doing the wrong thing.

So I think it’s important to understand the contextual nature of what you’re trying to do and what industry you’re working in. But we would love to hear other stories associated with adopting and onboarding something like this in your industry and how that was accomplished. So hopefully you learned a few things and we can all go do the things.

About The Authors

Michael Edenzon

Michael Edenzon is a senior IT leader and engineer who modernizes and disrupts the technical landscape for highly regulated organizations. Michael provides technical design, decisioning, and solutioning across complex verticals and leverages continuous learning practices to drive organizational change. He is a fervent advocate for the developer experience and believes that enablement-focused automation is the key to building compliant software at scale.


John Rzeszotarski

John Rzeszotarski assists organizations with strategic planning and leadership in the solution and infrastructure focus areas; moreover, John provides thought leadership to large enterprises that need to focus on reliability, scalability, regulatory, and other business considerations. His expertise spans many verticals with a focus on digital, payments, security, and development, and his primary passion is solving business and IT problems through technology, process, and culture transformations.

