January 24, 2023
This post is excerpted from The Value Flywheel Effect: Power the Future and Accelerate Your Organization to the Modern Cloud.
The concept of inefficiency or degradation in software has always been difficult to explain to nonprogrammers. All systems have issues: some are caused by mistakes, others by the simple march of time. Even well-written software exists within a system that keeps changing, and it may no longer perform as originally intended.
Over thirty years ago, Ward Cunningham (a pioneer of Extreme Programming and agile software development, among many other things) used the term “debt” to explain inefficiencies that appear over time. The term evolved into technical debt, which is now often used to describe older software that requires some type of maintenance. And as with a financial loan, the longer you wait to address that maintenance, the higher the price you pay, because interest is always accumulating. In the software world, the interest is added complexity to untangle as engineers build on top of the older software.
Technical debt was a strong concept, but its financial impact on the business was hard to describe. As the internet evolved and security became more prominent, older software ran the risk of exposure through attack. When data privacy laws such as GDPR were passed, older software with complex data carried a higher risk of exposure than newer, compliant software. When we moved to the cloud and started to pay for consumption, older software cost more because of its inefficiencies. Even with all these commercial reasons, we still find ways to tolerate inefficient software. Too many of today’s systems are inefficient, overly complex, not fit for purpose, or outdated.
What if we had a metric for inefficient software? Cost almost served as that metric, but so much software was inefficient that cloud providers started offering savings and cost-reduction plans to sweeten the deal. It’s not a smart commercial move to penalize a multibillion-dollar customer for having poor software; they’ll just move to a competitor.
Some cloud providers are starting to measure how much carbon their datacenters produce. They can invest in designing an energy-efficient, sustainably powered datacenter that we can feel better about using. In this way, cloud providers can improve the sustainability of the cloud. But what about sustainability in the cloud? We, the customers, can still forget to switch off servers and implement poor, inefficient architectures.
Cloud providers are constantly improving how they report the carbon usage of your cloud workload. Your bill will include services used, price, and carbon usage. What happens when companies are asked to report on their carbon usage from travel, buildings, physical products, virtual products—including applications in the cloud? Some digital companies may have to optimize their software systems to meet sustainability goals.
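To make that concrete, imagine a billing export in which the provider reports carbon alongside cost. The sketch below is hypothetical: the file name and the CSV columns (service, cost_usd, carbon_kg) are assumptions for illustration, not any provider's actual billing format. It shows how a team could fold carbon into the same reporting loop it already uses for spend.

```python
# A minimal sketch, assuming a hypothetical billing export with columns
# "service,cost_usd,carbon_kg". No provider's bill uses exactly this
# format; it only illustrates reporting carbon next to cost.
import csv
from collections import defaultdict


def summarize(bill_path: str) -> dict:
    """Total cost and carbon per service from the hypothetical export."""
    totals = defaultdict(lambda: {"cost_usd": 0.0, "carbon_kg": 0.0})
    with open(bill_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["service"]]["cost_usd"] += float(row["cost_usd"])
            totals[row["service"]]["carbon_kg"] += float(row["carbon_kg"])
    return dict(totals)


if __name__ == "__main__":
    for service, t in summarize("bill.csv").items():
        print(f"{service}: ${t['cost_usd']:.2f}, {t['carbon_kg']:.3f} kg CO2e")
```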
More to the point, software teams will become aware of how much carbon their software uses and will likely feel bad about an inefficient system with a high carbon burn. What if the carbon usage is reported at the end of quarterly earnings calls? It’s already a buzzword in earnings calls, as reported in the Financial Times in May 2021. Let’s fast forward: if your company has a very poor sustainability score due to inefficient software and this fact is reported, how would this affect your attempts to recruit the software engineers of the future at graduate fairs?
Carbon usage could become a leading metric for modern cloud efficiency. Despite all the fancy presentations, the slick marketing, the stories, and the cool developer advocates, a single metric at the end-of-quarter call would cut through everything. There are many stories of engineers joining a company only to realize that the tech stack is not what was promised. That happens less often now because every job-seeking engineer spends considerable time assessing the technical debt they would have to deal with in a new job, and they will not join if the picture looks ugly.
A well-architected, serverless system will score very well on sustainability. The compute is very efficient, managed services (run by the cloud provider) will typically be more efficient than most companies can achieve on their own, and the portability requirements of a Well-Architected Framework mean it may be possible to run in a low-carbon region (i.e., you are not tied to one region, like US East). Further, it’s possible to reduce the size of payloads, compress more, use a slightly smaller machine, or run a batch job in an off-peak window. Many of the practices we used decades ago, when compute, storage, and network were in short supply, could be resurfaced.
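As a small illustration of the "run the batch job in a low-carbon region or window" idea, here is a sketch of a carbon-aware scheduling decision. The regions, hours, and grid-intensity figures are placeholders, not real data; a production system would pull forecast intensity from a grid-data service.

```python
# A minimal sketch of carbon-aware batch scheduling: given candidate
# (region, start hour) slots with forecast grid carbon intensity, pick
# the greenest one. All figures below are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class Slot:
    region: str
    start_hour_utc: int     # candidate start hour for the batch job
    gco2_per_kwh: float     # forecast grid carbon intensity


def pick_greenest(slots: list) -> Slot:
    """Choose the slot with the lowest forecast carbon intensity."""
    return min(slots, key=lambda s: s.gco2_per_kwh)


if __name__ == "__main__":
    candidates = [
        Slot("us-east-1", 14, 410.0),   # placeholder values
        Slot("eu-north-1", 2, 35.0),
        Slot("eu-west-1", 2, 290.0),
    ]
    best = pick_greenest(candidates)
    print(f"Run batch job in {best.region} at {best.start_hour_utc:02d}:00 UTC")
```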
Read more in The Value Flywheel Effect: Power the Future and Accelerate Your Organization to the Modern Cloud.
David Anderson has been at the leading edge of the technology industry for twenty-five years. He formed The Serverless Edge and continues to work with clients and partners to prove out the thinking in his book, The Value Flywheel Effect. He is also a member of the Wardley Mapping community.
Mark McCann is a Cloud Architect and leader focused on enabling organizations and their teams to rapidly deliver business value through well-architected, sustainable, serverless-first solutions. He was heavily involved with Liberty Mutual's journey to the cloud, leverages Wardley Mapping, and writes for The Serverless Edge.
Michael O'Reilly is a software architect who specializes in arming organizations with the ability to develop ideas into world-class products by leveraging the capabilities of the modern cloud.