
April 10, 2024

Boardroom Showdown – Investments Unlimited Series: Chapter 11

By IT Revolution, Helen Beal, Jason Cox, Michael Edenzon, Dr. Tapabrata "Topo" Pal, Caleb Queern, John Rzeszotarski, Andres Vega, and John Willis

Welcome to the eleventh installment of IT Revolution’s series based on the book Investments Unlimited: A Novel about DevOps, Security, Audit Compliance, and Thriving in the Digital Age, written by Helen Beal, Bill Bensing, Jason Cox, Michael Edenzon, Tapabrata Pal, Caleb Queern, John Rzeszotarski, Andres Vega, and John Willis.

Down but not out! Our inimitable reform ringleader, Michelle, remains locked in a labyrinthine governance puzzle, perplexed by laggard legacy apps flouting newly installed enterprise pipelines! Our last installment brought complications and SBOMs. Now, with a boardroom showdown looming against skeptical execs, a breakthrough beckons spurred by an unlikely Five Lines Model? Could a revolutionary new Risk Code write automated governance rules minimizing red tape? Or is this precarious proposal to upend the compliance status quo doomed to be this project’s last hurrah? Buckle up for a pivotal pole position in the race toward modernization as the pressure dial spins up to eleven!


Thursday, October 6th

Michelle and Omar were nervous. It was now time to meet with IUI executives to show their progress. The target deadline for completing Turbo Eureka was only five months away. All the executives were eager to show the regulators that they had made sufficient progress to put the MRAs to bed and get IUI out of the doghouse, so to speak.

The meeting was a who’s who of IUI tech leadership: Jada King (CRCO), Tim Jones (CISO), Jennifer Limus (CIO/VP of Engineering), and of course, Jason Colbert (SVP of Digital Transformation).

With all of the criticism and pressure they had experienced from the supply chain issue, Michelle didn’t want this demo to come off as a failure. After all, it was a conceptual prototype. It wasn’t meant for production in its current state. She was all too aware that some of the people watching this demo were the ones casting criticism and calling for an outside engineering firm. If this fell flat on its face, there would be more pain coming the team’s way.

Michelle opened up the meeting by referencing the original demo of project Turbo Eureka and explaining that in the weeks since their initial prototype, Team Kraken had developed it even further to include support for new policies and address the recent supply chain break.

Michelle shared her screen with the attendees and started to run through the same steps from the first demo, opening a pull request from a feature branch and requesting an approval from Omar. She then pulled up a visual of the CI pipeline, which showed a series of steps:

  1. Code Checkout (30 sec)
  2. Sign and Build (2 min)
  3. Unit Test (1 min)
  4. Static Code Quality Scan (2 min)
  5. SCA (Software Composition Analysis) (2 min)
  6. SBOM (Software Bill of Materials) (1 min)
  7. Static Security Test (9 min)
  8. Publish (1 min)
  9. Update Manifest (45 sec)

Michelle knew from experience that a nearly twenty-minute pipeline demo was just about as exciting as watching paint dry. To make sure her audience didn’t lose interest, she kicked off the pipeline and then gave a play-by-play to highlight how each of the policies would be enforced during the demo.
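
For readers who want to picture what a policy-gated pipeline like this might look like in code, here is a minimal Python sketch. The nine stage names mirror the demo, but the gate functions, thresholds, and context fields are illustrative assumptions, not Turbo Eureka's actual implementation.

    from dataclasses import dataclass
    from typing import Callable

    @dataclass
    class Stage:
        """One pipeline stage with a policy gate that inspects the build context."""
        name: str
        gate: Callable[[dict], bool]  # returns True when the policy is satisfied

    def run_pipeline(stages: list, context: dict) -> dict:
        """Run every stage's gate and record a pass/fail result per policy."""
        return {stage.name: stage.gate(context) for stage in stages}

    # The nine stages from the demo, wired to placeholder gate checks.
    STAGES = [
        Stage("Code Checkout", lambda ctx: True),
        Stage("Sign and Build", lambda ctx: ctx.get("built_on_trusted_agent", False)),
        Stage("Unit Test", lambda ctx: ctx.get("coverage", 0.0) >= 0.80),
        Stage("Static Code Quality Scan", lambda ctx: ctx.get("quality_score", 0) >= 85),
        Stage("SCA", lambda ctx: not ctx.get("vulnerable_dependencies", [])),
        Stage("SBOM", lambda ctx: "sbom" in ctx),
        Stage("Static Security Test", lambda ctx: ctx.get("critical_findings", 1) == 0),
        Stage("Publish", lambda ctx: True),
        Stage("Update Manifest", lambda ctx: True),
    ]

    if __name__ == "__main__":
        demo_context = {"built_on_trusted_agent": True, "coverage": 0.85,
                        "quality_score": 90, "vulnerable_dependencies": [],
                        "sbom": {}, "critical_findings": 0}
        print(run_pipeline(STAGES, demo_context))  # every gate should report True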

“Okay, everyone, the excitement begins! I’ve just kicked off the pipeline. In real life, the build will be triggered by our Git tool when a pull request is merged. I am doing it manually just for the demo. The CI build is validating that the code build is successful and that it follows our enterprise policies and standards for building and publishing a deployable artifact. By artifact, of course, I mean the actual application code that is compiled and ready to run.

“Next,” she said while gesturing with her hand at the code review notes on the screen, “the code review here captures all code changes and verifies that all changes to the release branch were made via the pull request. It also verifies that each pull request received at least one approval from someone other than the code author. It’s important to note that we can adjust how many approvals any change needs without having to rebuild the whole thing from scratch.”

There was a series of small nods around the room.

“Now, I want to highlight that this branching pattern check confirms that the application team followed our enterprise-approved branching pattern when committing changes to the code repository. For all microservice applications, IUI requires that developers practice either trunk-based development or use short-lived feature branches. Prior to this, our teams used complex, non-standard branching patterns, and we always ran into erroneous deployments—every now and then, wrong versions of code showed up in production!

“Next is the code signature policy gate. It verifies the checksum of every artifact. This check ensures that each artifact was digitally signed at build time, confirming that it was created in a trusted IUI build environment.
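
One way a code-signature gate like the one Michelle describes could verify an artifact against the checksum recorded at signing time. This is a minimal sketch; the function names and arguments are hypothetical, not the book's implementation.

    import hashlib

    def sha256_of(path: str) -> str:
        """Compute a file's SHA-256 digest, streaming so large artifacts fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def signature_gate(artifact_path: str, expected_checksum: str) -> bool:
        """Pass only if the artifact hashes to the checksum published when it was
        signed in a trusted IUI build environment."""
        return sha256_of(artifact_path) == expected_checksum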

“Once that is complete, we move to the unit test policy gate. This gate confirms the successful execution and code coverage of the application’s unit tests. IUI has minimum code coverage standards, but we all believe that each product development team should set its own minimum standard, above and beyond the IUI baseline.

“Next, we have the code quality policy gate. The third-party tool we use for code quality ensures that we have reliable, maintainable, and simple code. This gate ensures that the quality scores we receive from this tool meet or exceed IUI standards for new code.

“The next gate is SCA, or software composition analysis, which is the automated process to identify all open-source software usage. IUI uses this information to ensure all open-source software used meets IUI technology standards, is free from vulnerabilities, and is using licenses that are approved by IUI’s legal department.

“Right after the SCA gate, we take the raw data from SCA, produce a software bill of materials, and save it in our SBOM Database. Please note that this may be a temporary solution. We will probably move to an open-source solution in the future.
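
A minimal sketch of the kind of step Michelle is describing: taking the dependency list that SCA reports and writing out a small, CycloneDX-style SBOM document. The input shape, file name, and example dependency are assumptions for illustration only.

    import json

    def build_sbom(app_name: str, dependencies: list) -> dict:
        """Turn SCA dependency data into a minimal, CycloneDX-style SBOM document."""
        return {
            "bomFormat": "CycloneDX",
            "specVersion": "1.4",
            "version": 1,
            "metadata": {"component": {"type": "application", "name": app_name}},
            "components": [
                {"type": "library", "name": dep["name"], "version": dep["version"]}
                for dep in dependencies
            ],
        }

    # In the story this record goes to the team's SBOM database; here we simply
    # write it to disk as JSON (the dependency shown is made-up example data).
    sbom = build_sbom("iui-demo-app", [{"name": "spring-core", "version": "5.3.27"}])
    with open("iui-demo-app-sbom.json", "w") as f:
        json.dump(sbom, f, indent=2)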

“Now we’re at the Static Security Test policy gate. This stage in the pipeline ensures that there are no critical or high vulnerabilities in the source code. We also have special rules to identify any user credentials or keys exposed in the source code.”

By the time Michelle finished walking through each of the new policies, the pipeline build had concluded. A new artifact was published in the deployment repository, and the manifest file was updated with a new version number. She turned her attention back to her laptop and navigated to the demo application’s source code repository, where she eagerly displayed the repository’s README file with a shields.io badge for each of the policy checks she had just described.
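
The badges Michelle shows are ordinary shields.io static badges; the sketch below shows how a pipeline could generate their URLs from gate results. The gate names come from the demo, but the helper function and the sample results are illustrative assumptions.

    def badge_url(label: str, passed: bool) -> str:
        """Build a shields.io static-badge URL; '-' and ' ' must be escaped in labels."""
        safe_label = label.replace("-", "--").replace(" ", "_")
        message, color = ("passing", "brightgreen") if passed else ("failing", "red")
        return f"https://img.shields.io/badge/{safe_label}-{message}-{color}"

    gate_results = {"Code Review": True, "Code Signature": True, "Unit Test": True}
    readme_lines = [f"![{name}]({badge_url(name, ok)})" for name, ok in gate_results.items()]
    print("\n".join(readme_lines))  # paste these lines into the repository's README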

“Done! Look at that! All the policy gates are green.” The meeting attendees were silent, although they stared intently at the monitor mirroring Michelle’s screen. Some people squinted as if they were deep in thought, while others glanced around at their peers. Michelle looked over at Carol with surprise. She had expected the audience to be considerably more thrilled. Perhaps some of the leaders didn’t understand what they had just seen.

Michelle had an idea. “Let me show you what happens if a developer breaks one of the policies,” she said, betting that the stoppage of an artifact in violation of policy might resonate better with the less-than-technical leadership.

Her fingers crunched her laptop keys while her eyes darted back and forth between the audience and her screen. She was trying to keep their attention while she rushed to orchestrate a shortened rerun of the build pipeline. “Alright, now look at this!” she exclaimed. “I pushed a code change without approvals and ran the pipeline on my local machine instead of the IUI build servers.” She pointed to the large monitor in the meeting room that mirrored her laptop screen. Several of the previously green badges had turned red. “See the badges?”

Michelle explained that her second code change, which she pushed directly to the main branch of the source code repository without a pull request, failed the code review policy and showed a red badge with a large X. Murmurs of comprehension came from the audience. Michelle grinned—it was starting to click.

She then pointed out the Code Signature badge and explained that because she was running the build on her laptop and not an IUI-approved pipeline instance, the digital code signature did not match.

The audience looked puzzled, so Carol jumped in to summarize. “If the signatures don’t match, that means the code may have been built by an untrusted and potentially malicious build environment. If we can’t trust the environment that built the code, we also cannot trust the code itself.”

“And what exactly makes you trust your build environment?” Tim quickly interjected.

“I knew you were going to ask that question, Tim.” Michelle had a smile on her face. “Your team has actually scanned and tested our build server and build agent software. We store this approved software in a binary repository. For every build, we pull down the approved build agent software, instantiate the agent to create the build environment, run the build, and destroy the instance. Of course, we confirm that the hash of the build server and its build agent match the expected hash, so we can be confident no one tampered with it.”

“Oh wow! I didn’t know that. I’m so happy that my team was thoroughly engaged with your team throughout this.” Tim looked very pleased.

“So, going back to the failures . . . what happens now? Do those failures break the pipeline?” Tim asked.

“Well, yes and no,” Michelle responded. “They will cause the deployment pipeline job to fail.”

Tim, annoyed by the seemingly anal-retentive correction, responded, “Okay, what’s the difference?”

“So at IUI we have CI and CD, or as I refer to them, build and deploy. The build pipeline takes code and compiles it to a binary artifact, which is published to our internal artifact repository. And then the deploy pipeline takes that artifact off the artifact repository and deploys it onto a server or wherever the thing can run.”

“And a red badge results in a failed deploy pipeline?” Tim asked.

“Yes, exactly!”

“I’m confused,” he responded. “Shouldn’t a component be considered ‘deployed’ when it reaches the internal artifact repository?”

“No, you’re thinking of ‘publish,’” Omar interjected. “The publish stage pushes to the artifact repository, which happens at the end of the build pipeline. The deploy pipeline is a whole different thing.”

“But what about shared libraries?” Tim asked.

Omar didn’t expect such a targeted rebuttal. “What do you mean?” he asked.

“A Java or NPM library, something like that.” Tim folded his arms, leaned back, and crossed his legs. “Those aren’t directly deployed to a server, but they’re used by other applications, so they’re still reaching production. In that scenario, isn’t the ‘publish’ stage considered the deployment?”

“Well, it depends on your definition of deployment,” Michelle conceded.

“I consider an artifact deployed when it becomes consumable, either by the customer or other applications,” Tim said.

“Okay, now I’m confused,” Jada exclaimed. “I thought a deployment meant a new set of features were being delivered to customers.”

“That’s called a release,” said Omar.

“I think we’re talking past one another,” said Carol.

“We are,” said Jennifer, as she stood up and reached for a dry-erase marker. “For the sake of this conversation, let’s use the following definitions . . .”

She spelled out three definitions on the board:

  • Build—compile the code, publish binary to artifact repository.
  • Deploy—push a new version of the app onto the server.
  • Release—a mechanism that allows end users to access new features or functionality in the new version of the application.

“For this conversation, can we all agree on these definitions?” asked Jennifer.

Tim was not convinced that his question on shared libraries had been answered yet. He felt there was an unmitigated risk there: in his mind, shared libraries should go through the same compliance checks before any application teams were allowed to use them. But he didn’t want to disrupt the flow either.

“Okay, I am good with this. For now. But I need your team to have a backlog item to revisit this shared library scenario,” Tim said. “You can consult my team any time if you need help. Does that sound alright?”

Everyone nodded.

Carol continued. “Great. Now if a control is failing, why wouldn’t the pipeline just break the build?”

Michelle looked puzzled.

“Why not just break the build? That would alleviate the need to distinguish which code bases are consumed after publish or deploy,” Jennifer asserted.

Jennifer was persuasive, but Michelle countered, “What if there’s an emergency? When a team builds their code it could be a break-glass situation. In that situation, a failed build could lead to a disaster.”

Carol interjected, “I think what Michelle is saying is that it seems draconian to fail a build for a missing code review. And what if there’s an emergency? Do we really want to be so strict that we can’t allow individual teams to use their discretion?”

Jada leaned forward. “So let me confirm my understanding. You’re saying that you want to build and publish all applications to the internal artifact repository, regardless of their compliance.”

“But—” Omar tried to interject but yielded to Jada.

“But you’re going to allow—” Jada took a breath, frustrated that she had to keep digging to get the answer she was looking for. “Who is going to be the one to decide whether to deploy a non-compliant application?”

“I don’t know. That can be up to you. We just want to make sure that someone can accept the risk,” Carol responded. “Let’s just finish the demo and then we can discuss at the end.”

“Sure,” said Jada. “But can I ask one more question while we’re on this?”

“Of course.” Carol was in fact happy that Jada was fully engaged in the demo.

“Where are you storing all these data?” Jada asked.

“Let me take that one,” Omar finally interjected. “We store the data in a database, and we call that an evidence store. Only Team Kraken has access to the evidence store.”

“Is that how it’s going to be in the future?” It was clear that Jada was not fully on board with the idea that an engineering team would still have control over the evidence store.

“Well, we can talk about the ownership when we are through the crisis,” Carol replied. “I do understand where you’re coming from, Jada. If your team wants to control the admin access to that evidence store, we can make that happen. We didn’t know if you would have someone who could administer it . . .”

“That’s true,” Jada nodded. “I’ll have to think about that some more but please continue.”

Michelle navigated to the demo app’s deployment repository and pointed out the version of the artifact: iui-demo-app:0.0.24.

She opened a pull request from the staging branch to the dev branch, indicating the intention of deploying to the dev environment. Carol approved the pull request, and the deployment pipeline kicked off before failing abruptly. Michelle turned around to face the large monitor in the meeting room and pointed to the deploy pipeline logs at the bottom of the screen.

“See? Right there! It failed.” She looked back at the audience to gauge their reactions, happy to see that they were visibly pleased by her team’s work.

“This is great,” volunteered Jada, who was clearly enjoying this demo but was also somewhat puzzled. “But I have to admit I assumed this type of thing was already happening. Was I naive?”

“There was no automation to ensure our policies,” Michelle replied, “and to be honest, nobody was actually doing all of this manually either. We were not doing what we said we were doing. Hence the MRIA that has brought us all here.” Michelle meant it as a bit of a joke to lighten the mood, but no one even smiled.

“Uh,” Michelle continued, “when the policies are dependent upon manual validation, there is a risk that steps will be skipped. This automation ensures that we aren’t missing the required steps. Additionally, we just made the right way easy. We paved the road, if you will. And when you make the right way easy, people tend to do the right thing.”

“I have a feeling that our developers will love this,” Jada exclaimed. “This is a much better developer experience; do it automatically, do it early, and fail fast.”

“There is one thing that we did not demo here,” Michelle said, “and that’s testing—automated functional and performance testing that we perform. A couple of years ago we developed a modern test automation platform using open-source technologies. We named it Orion. This platform allows teams to run automated tests, captures test results, and runs many reports that teams use before production deployment.”

“Do we need to change anything in there?” Jada asked.

“I don’t think so. To be honest, that is the only platform that does what it’s supposed to do. Developers like it, and as far as I can tell, all of IUI uses it,” Jennifer replied convincingly.

“Anyway, that is what we have so far. Any questions?” Michelle was relieved that the demo went as she had expected.

“This is great, team. Wow—just great,” Jada said, before redirecting the conversation. “But can we go back to the topic of accepting risk?”

“Sure,” said Carol. “Where did we leave off?”

Jada briefly glanced at her notes. “In a scenario where a control fails, who makes the decision of ‘go’ or ‘no go,’ and what controls have we placed on that process?”

“Ultimately the application owner or the product owner will make the decision to accept the risk, and Risk, Change, whoever else can monitor those decisions. That stuff we can work out later,” Carol responded, “but for now may I propose a workflow?”

“Let’s hear it,” said Jada.

Carol uncapped a dry-erase marker and rolled her chair over to the board.

“As we all know, when a team goes to deploy, they need a change record. Teams are currently expected to manually upload evidence to the CMDB to satisfy required policies.”

“Yes, okay,” Jada said.

“I propose that Turbo Eureka orchestrates the creation of this change record during the promote stage of the deployment process. By doing so, it allows each attestation to populate the change record so that the final reviewer—the application owner—has the opportunity to view the proposed change and the current state of compliance.”

Omar took a breath before speaking up. “And—”

“Hang on Omar, just a second,” replied Carol. She continued, “By creating the change record directly from the pipeline, we can ensure that the proper artifact versions are documented, as well as the compliance status for each. So when the application owner reviews the change prior to deployment and sees failed controls, he or she can choose to accept the risk or reject the change. Turbo Eureka’s automation will allow them to make that decision.”

Satisfied with the synopsis of Carol’s proposal, Jada exclaimed, “I love it. I mean, anything you can do to accelerate that process will be a tremendous success. But, I don’t want to speak for our partners in change management, so please be sure to bring them along as you build this out.”

“Of course,” said Carol.

“So what happens if they’re not failing any compliance check?” asked Jennifer. She already knew the answer that she wanted to hear, but she often employed the Socratic method to steer the conversation.

“Right! Thank you, I almost forgot,” said Carol. “We want to incentivize teams. We want to make the right thing the easy thing, agreed?”

Jada nodded.

Carol continued, “So if a change is submitted with 100% compliance, let’s eliminate CAB.”

“What do you mean when you say eliminate?” asked Jada.

“You’re preapproved. No paperwork or meetings required,” Carol replied.

Jada leaned back but didn’t respond. It was evident that Carol’s proposal had caused her to pause for a moment and think.

“You’re not proposing to completely disband CAB, are you?” Jada was trying to grasp the full impact of this in the future.

“Well, I sure would like to explore if we can at least skip the CAB meetings!” Carol wanted to thoughtfully choose her words. “Even the CAB members feel that the meetings don’t add value. They also know the meetings become the bottleneck for teams that want to deploy more frequently.”

“I can see how this can completely change how we think about CAB.” Jada had the ‘thinking out loud’ expression on her face. “We can actually arm CAB with more real-time, trustworthy data for them to act only when it’s needed. They can be consulting partners rather than an approval authority. I’m seeing win-win all over the place!”

“Exactly!” Michelle jumped in. “Application or product owners can reach out to CAB members if they have questions or need guidance before accepting any risk.”

For a brief moment the room fell silent.
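
To make Carol’s proposed workflow concrete, here is a rough sketch of a change record assembled from pipeline attestations, with the application owner’s accept-the-risk decision and the fully compliant preapproval path. The class, field names, and statuses are invented for illustration; they are not Turbo Eureka’s actual design.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class ChangeRecord:
        application: str
        artifact_version: str
        attestations: dict                 # policy gate name -> pass/fail
        created_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat())
        status: str = "PENDING_OWNER_REVIEW"

        @property
        def fully_compliant(self) -> bool:
            return all(self.attestations.values())

        def owner_review(self, accept_risk: bool = False) -> str:
            """Fully compliant changes are preapproved and skip the CAB meeting;
            otherwise the owner either accepts the documented risk or rejects."""
            if self.fully_compliant:
                self.status = "PREAPPROVED"
            elif accept_risk:
                self.status = "APPROVED_WITH_ACCEPTED_RISK"
            else:
                self.status = "REJECTED"
            return self.status

    record = ChangeRecord("iui-demo-app", "0.0.24",
                          {"Code Review": False, "Code Signature": True})
    print(record.owner_review(accept_risk=False))  # -> REJECTED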

Sensing an opening and a chance to redirect the conversation, Tim asked, “How many teams in IUI are using Turbo Eureka right now?”

Michelle replied, “Right now, just one team.”

Tim responded, “Well, that doesn’t do us any good, does it?” He gestured to the large monitor across the room. “I love what you all have built here and I think it’s really slick. But we need to focus on onboarding more apps to this platform instead of adding new policies and features.” He continued, “We can’t satisfy regulators with only one team following the new system. Remind me, how many TLCs do we have here at IUI?”

TLC stood for three-letter code. The codes were used as unique identifiers to keep inventory of different technology assets throughout IUI. Unsure of the answer, Michelle looked around for help. When nobody indicated that they knew the answer, Carol offered a response, “I think we have somewhere around 1,900 code repositories.”

“That’s not what I’m asking,” Tim said. “How many TLCs do we have in our CMDB?”

Carol jumped in. “We have 587,” she said.

Tim glanced at his watch. “Alright, we’re over time. I want to know what the plan is to onboard all 587 TLCs. Bonus credit if you can also quickly put together a way to prevent any new TLCs from being created without this capability. Let’s pick it up there when we meet next week.”

Friday, October 7th

Michelle, Carol, Omar, and the rest of Team Kraken reconvened the following morning to debrief on the stakeholder demo the day before. Barry and Andrea also joined.

“I just wanted to say how impressed I am with the progress that you’re making. The demo yesterday was very exciting. And you all know I don’t say things like that lightly,” Barry said before continuing to the topic he really wanted to discuss. “Now, let’s talk about making it available for others. How are we going to onboard 587 TLCs, and fast?”

He looked around the table, inviting the whole team to participate.

Omar jumped at the opportunity to speak up. “Five hundred and eighty-seven is a meaningless number! Some TLCs have one code repository and others have tens. Some are IUI-developed software applications, but most of them are either infrastructure or COTS applications—uh, that’s commercial off-the-shelf . . .” he explained, turning to Andrea.

“I actually knew that one, but thanks,” Andrea replied in a whisper.

Michelle picked up on Omar’s point and attempted to expand on it. “I think what we’re saying is that the first iteration of Turbo Eureka is designed for software that’s developed in house. So we should probably focus our onboarding efforts there.”

“We also need to prioritize any of the apps called out in the original MRAs,” Andrea added. “And anything high to moderate risk, like internet facing, customer service, and so on.”

Omar nodded in agreement.

Michelle continued, “After yesterday’s meeting, I combed through each of the 587 TLCs in the CMDB and identified any that indicated they contained custom, IUI-developed software components. After filtering out the rest, I was left with a list of 183 that are eligible. I believe this is the list we need to use for onboarding.”

“Only 183?” Barry responded. “Do we trust the numbers in our CMDB? In any case, I don’t think Tim is going to be happy to hear that you only plan to cover 183 TLCs. Carol, I thought you said that IUI had over 1,900 code repositories. How does that align with this list of 183?”

“Some TLCs have more than one code repository. Think of a code repository as a component of a TLC. But on the other hand, some repositories do not belong to any TLC,” Michelle tried to clarify, but she only ended up introducing more confusion.

“How do we not know how many code repositories actually belong to TLCs?” Barry was getting irritated.

“Well, anybody can create a code repository, and some teams choose to include their TLC number in the repository name. We require any repository that builds using a standard pipeline to declare the TLC number in its build file.”

“That’s it!” Barry exclaimed. “We know it!”

“But there’s no requirement that your repository must use the standard pipeline.” Omar rolled his eyes and continued, “Many teams use their own stuff.”

“Duh!” Barry almost gave up in frustration. “So make a new rule—repositories must be named after the TLCs.”

“The last thing we need is rules around naming repositories. This place is already way more restrictive than any tech company I’ve ever worked for.”

“I don’t disagree with you. But what are you going to use for onboarding?” Carol asked.

“I think I hate CMDB!” Omar looked away. “And I hate TLCs too. Why do we have to use them for anything? Why do we care?”

“Omar! We have to use the TLCs. And the CMDB too! The change tickets for production releases are generated by CMDB and are issued against TLCs.” Michelle tried to calm Omar down.

“We’re almost at time, and I have a 10:30 that I need to be at,” said Barry, turning to Carol. “Carol, as you know, Security is very interested in knowing how Turbo Eureka will be rolled out. As is the entire board, frankly. We still need to show the regulators that we’re doing what we say we’re doing. This group has built a great tool. I just want to make sure it’s successful. Tim has told me this is his top priority.”

“Of course,” said Carol.

After Barry and Carol left the meeting, Omar got up to close the door. When he sat back down he sighed. “So what are we going to do?” he asked.

When no one answered immediately, Dillon said, “Last year one of the contractors wrote a script to comb through repositories and capture metadata for an internal audit. He’s gone, but I think I know where he kept it. It won’t be perfect, but what if we used that to onboard?”

“That still won’t solve the problem of identifying the TLC numbers,” Michelle reminded him.

“I guess not,” Dillon responded.

“Hmm,” Omar said, perking up, “what if we ingest their build file and use that to get the TLC? If there’s no build file, then it’s not being built in the pipeline. If we block everything without a valid checksum from deploying, then we can safely assume that anything without a build file cannot be deployed, and therefore we don’t need to onboard it.”

“I like it!” Michelle exclaimed. “You try it out.”
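
A sketch of the build-file idea Omar describes: look for a declared three-letter code in the repository’s build file, and treat a missing build file as “not built on the standard pipeline, so it cannot be deployed and need not be onboarded.” The file name pipeline.yaml and the tlc: key are assumptions; the book never specifies the build-file format.

    import re
    from pathlib import Path
    from typing import Optional

    # Hypothetical convention: the standard pipeline's build file declares "tlc: ABC".
    TLC_PATTERN = re.compile(r"^\s*tlc\s*:\s*([A-Za-z]{3})\s*$", re.MULTILINE)

    def extract_tlc(repo_path: str, build_file: str = "pipeline.yaml") -> Optional[str]:
        """Return the TLC declared in the repo's build file, or None.

        No build file means the repo is not built on the standard pipeline, so
        (per Omar's reasoning) it cannot be deployed and need not be onboarded.
        """
        path = Path(repo_path) / build_file
        if not path.is_file():
            return None
        match = TLC_PATTERN.search(path.read_text())
        return match.group(1).upper() if match else None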

That afternoon, Omar found the contractor’s script and executed it against IUI’s source code management system. It produced 1,204 of the 1,900 code repositories at IUI, which meant that nearly 700 repositories were not using a standard pipeline.

The next day the team and Carol reconvened. Carol opened up the discussion. “Omar’s script produced 1,204 results, which seems accurate. But it took over 45 minutes to run.”

“The contractor’s script,” Omar clarified with a wink and a smile. He did not want to be known for writing inefficient code.

“Yes, thanks Omar, the contractor’s script. It actually put strain on the source code platform. Many developers called the help desk saying they couldn’t push code during the time the script was running. This isn’t going to be a sustainable solution.”

“What if we ran it each night?” Omar offered, although his body language conveyed that even he wasn’t thrilled with the solution he just proposed.

“We can do better,” said Carol. “I know we can. What are other ideas?”

“So we somehow need to capture the TLCs and onboard them onto Turbo Eureka . . .” Omar was thinking out loud.

“What about this?” Michelle began to share her screen, which showed a colorful body of text against the dark background of her code editor. “Here’s an example of the data we’re capturing during the ‘checkout’ stage of the build pipeline. It contains all of the environment variables, including the TLC number as well as the repository information. What if we added some custom code into the ‘checkout’ stage that automatically onboards the application to Turbo Eureka? That way, our inventory is updated in real time.”

“That’s a great idea! Screw the contractor’s script,” Omar said cheekily. “I know exactly how we can do that.”
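
A rough sketch of the checkout-stage hook Michelle proposes and Omar offers to build: read the environment variables the pipeline already exposes and register the component with Turbo Eureka so the inventory updates in real time. The variable names and the onboarding endpoint are invented for illustration.

    import json
    import os
    import urllib.request

    def onboard_from_checkout() -> None:
        """Register the component being built with the Turbo Eureka inventory."""
        payload = {
            "tlc": os.environ.get("IUI_TLC"),
            "repository": os.environ.get("GIT_REPOSITORY_URL"),
            "branch": os.environ.get("GIT_BRANCH"),
            "pipeline_run": os.environ.get("BUILD_ID"),
        }
        if not payload["tlc"]:
            return  # no TLC declared, so nothing to onboard

        request = urllib.request.Request(
            "https://turbo-eureka.iui.example/api/onboard",  # hypothetical endpoint
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            response.read()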

Monday, October 17th

A week later, Michelle’s automated onboarding idea was working. Turbo Eureka went from monitoring a single application component to monitoring over sixty components across five TLCs. Since onboarding depended on when each application’s build next ran, it would take some time for all 183 TLCs to be onboarded, but it was still an elegant solution that showed steady growth. However, the applications that didn’t use the standard pipeline would still remain outside Turbo Eureka.

This made Carol and Jennifer somewhat happy because they could show consistent progress to Tim, Jada, and the external audit team on a weekly basis. It also meant Team Kraken could go back to focusing on building out more policy features in Turbo Eureka.


Another eleventh-hour eureka squeaks Michelle’s accountability prototype past the skeptical brass! But grumbles remain over whether full-blooded buy-in across all of sprawling IUI can ever emerge. Sharp-tongued Tim and legal eagle Jada clearly remain unsold on ceding control while lukewarm Laura from external audit vouches for leeway! Has Michelle done enough to protect her vision and avoid outside interference? Or could executive angst still undermine morale when optimizing the last uncertain miles of the audit marathon? Join us next time for the continuation of the story. Or, go to your favorite book retailer and pick up a copy of Investments Unlimited today.

About The Authors

IT Revolution

Trusted by technology leaders worldwide. Since publishing The Phoenix Project in 2013, and launching DevOps Enterprise Summit in 2014, we’ve been assembling guidance from industry experts and top practitioners.


Helen Beal

Coauthor of Investments Unlimited.


Jason Cox

Jason Cox is a champion of DevOps practices, promoting new technologies and better ways of working. His goal is to help businesses and organizations deliver more value, inspiration, and experiences to our diverse human family across the globe better, faster, safer, and happier. He currently leads SRE teams at Disney and is the coauthor of the book Investments Unlimited. He resides in Los Angeles with his wife and their children.


Michael Edenzon

Michael Edenzon is a senior IT leader and engineer who modernizes and disrupts the technical landscape for highly regulated organizations. Michael provides technical design, decisioning, and solutioning across complex verticals and leverages continuous learning practices to drive organizational change. He is a fervent advocate for the developer experience and believes that enablement-focused automation is the key to building compliant software at scale.


Dr. Tapabrata "Topo" Pal

Dr. Tapabrata "Topo" Pal is a thought leader, keynote speaker, evangelist in the areas of DevSecOps, Continuous Delivery, Cloud Computing, Open Source Adoption and Digital Transformation. He is a hands-on developer and Open Source contributor. Topo has been leading and contributing to industry initiatives around automated governance in DevOps practices. Topo resides Richmond, Virginia with his wife and two children.

