
June 23, 2024

Observing the Impact of AI on Law Firms, Software, and Writing: Winners and Losers

By Gene Kim

You may be reading this because of a certain Steve Yegge article called “Death of the Junior Developer.”  (Yes, that Steve Yegge, famous for his work at Amazon and Google, and for his depiction of the famous Jeff Bezos “thou shalt communicate only by APIs” memo.)

If you haven’t read it, absolutely do read it — especially if you have kids who are early in their careers, regardless of the profession.  It’s Yegge at his best.

I was talking with him last week, and he’s as hilarious, irreverent, and brilliant as you might expect.  We shared some stories about how AI is affecting work, and one particular theme emerged: AI seems to disproportionately help the senior people in a profession. This has some very real and consequential impacts on the junior people in so many professions, including software.

He makes a very nuanced point, with some fantastic evidence and reasoning to back up his intuition.

In this post, I provide more background, specifics, and data from the conversation I had with Steve last week, which he wrote about, as well as some additional musings. I cover the following:

  • A Story About AI and Law Firms — how partners are using AI, and the implications for associates
  • Brief Aside: But, AI Can’t Do Law!
  • Lessons For Tech Leaders — a similar pattern emerges for senior vs. junior devs
  • My Story On AI-Assisted Writing — I share how I’m using Claude Opus/Sonnet for one of my writing projects
  • Aside: When Claude Didn’t Work For This Project — when AI doesn’t work so well
  • Forrest Brazeal’s Take On GenAI and Knowledge Work, And My Kids — existential dread?
  • Interested in Learning More About GenAI? See you at ETLS Vegas!

We are surely entering an era where the cost of production of all sorts of knowledge work is going to decrease by 5x, 10x, or maybe more. I’m starting to think that the people disproportionately affected will be those earliest in their careers.

As a parent of a 16-year-old and two 14-year-olds, how do I best prepare them for a world where entry-level jobs for new college graduates might be very, very different, and potentially far fewer in number?

(And I’m so excited that Steve Yegge is presenting at ETLS Vegas in a couple of months on his experiences with GenAI, as well as more of his thoughts on the “death of the junior developer!”)

A Story About AI and Law Firms

I have a friend who keeps getting asked to be the managing director of his mid-sized law firm. That’s quite the compliment.  The managing director does what a department head does in an academic department: they’re the “highest-functioning academic,” chosen by their peers to do what a CFO and COO would do in a commercial organization. They own strategy, financial management, HR, and all the other back-office functions that support the people doing billable legal work for clients (which pays all the bills).

Talking with my lawyer friend gave me a glimpse into how things like ChatGPT and LLMs are already upending how people are doing legal work — and I think it offers some insights into what could happen in software, as well.

I learned that he now uses LLMs to do many things an associate would normally do. I’ve heard from others that LLMs are quite good at opining on many things you’d ask a lawyer.  An example is “look at this other firm’s non-disclosure agreement, compare it to ours, and highlight any material differences between them.” He described how useful this was, and that these were tasks that he would normally give to an associate. (Associates are junior lawyers who work long hours to eventually become managers, who work long hours to become partners. This is a competitive journey that might take eight years.)
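
For the technically inclined: that NDA-comparison task is essentially a single prompt. Here’s a minimal sketch of what it might look like, assuming the OpenAI Python SDK; the file names and prompt wording are illustrative stand-ins, not anything my friend actually runs:

```python
# A minimal sketch of the NDA-comparison task described above.
# Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; the file names are illustrative.
from openai import OpenAI

client = OpenAI()

def read(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()

# The request, phrased exactly the way you'd ask an associate.
prompt = (
    "Look at this other firm's non-disclosure agreement, compare it "
    "to ours, and highlight any material differences between them.\n\n"
    "THEIR NDA:\n" + read("their_nda.txt") + "\n\n"
    "OUR NDA:\n" + read("our_nda.txt")
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any capable model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```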

He then asked himself, “If I am doing these tasks that I’d normally give to an associate, what should my associates be doing?” I joked that it’s not just him: his clients are also likely using LLMs to do things that they’d normally give to his law firm!  (That question about NDAs came from someone who wasn’t a lawyer — that’s one more request to internal or external counsel that was never made!)

He then told me one thing that blew me away, because it was so familiar. He said they put in a new rule for all associates: if you’re using LLMs, you must disclose it, and you must verify all the information yourself. Specifically, you cannot rely on a partner to review all your work, to catch all your mistakes. (This will become even more important in a world where every associate is doing 10x more work thanks to LLMs —  that’s a number I totally made up, by the way.)

Brief Aside: But, AI Can’t Do Law!

You might be thinking: AI can’t do law.  Gene, you obviously don’t know how badly AI has gotten lawyers into trouble, such as the well-publicized incidents of attorneys sanctioned for filing briefs containing hallucinated case citations.

I was talking with my friend Joseph Enochs, who described how betting that AI will never be able to do [[INSERT COGNITIVE TASK HERE: generate video, write code, generate legal opinions, creative writing]] is probably a dangerous position to take.  He said, “It’s like you’re betting against Moore’s Law, when we already know that in AI, capabilities are doubling every 3.5 months (compared to 16 months for semiconductors).”  (Citation: Shiqiang Zhu, Ting Yu, et al., https://spj.science.org/doi/10.34133/icomputing.0006)

Case in point: here’s an article about an OpenAI collaboration with Harvey, a 100-person startup focused on AI for law.   They published a case study of what they can achieve by further training GPT-4 on the law.  They cite the following:

“To test the case law model, Harvey worked with 10 of the largest law firms. They provided attorneys with side-by-sides of the output from the custom case law model versus the output from GPT-4 for the same question. They were surprised by how strong the reaction was. ‘97% of the time, the lawyers preferred the output from the case law model,’ Weinberg said. ‘Usually, it was because it was a longer, more complete answer. It went into the nuance of what the question was asking and covered more relevant case law.’ Hallucination reduction was one of Harvey’s motivations for building a custom model, and the investment paid off. ‘Not only does the case law model not make up cases, but every sentence is actually supported with the case it’s citing,’ Weinberg said. As they roll this out to more users, Harvey is eager to explore other applications of the case law model, such as drafting briefs and motions, or helping attorneys understand how case law varies across different jurisdictions.”

Okay, admittedly, the comparison was between their new custom model and GPT-4, as opposed to human-written analyses.  But check out their earlier aha moment, the one that made them so bullish on this direction:

“An early proof point came when Weinberg and Pereyra pulled Reddit’s r/legaladvice for landlord/tenant questions and used GPT-3 to generate answers, which they shared with attorneys. ‘For 86 out of 100 questions, the attorneys said they would have just sent the answer to the client without editing,’ Weinberg said. ‘It was an aha moment.’”

(Thanks to Nathan Labenz on The Cognitive Revolution Podcast for mentioning this great work!)

Lessons For Tech Leaders

There are relevant lessons here for technology leaders. 

I’ve heard so many stories about technology leaders in enterprises doing carefully thought-out pilots for coding assistants (e.g., GitHub Copilot, JetBrains AI Assistant, Sourcegraph Cody). In particular, one leader described how the pilot program did not achieve its hoped-for goals. What happened?  They found that yes, copilots enabled junior developers to write and commit much more code. The problem is that upon review, senior developers found that the code was not fit to deploy into production, due to small or even large errors.

When I was talking with Steve Yegge, he said something really astonishing.  He hilariously described working with LLMs like this: they’re like wildly productive junior developers, who are super fast, super full of energy, but also potentially whacked out on mind-altering drugs, often coming up with crazy and fundamentally unworkable approaches. 

The problem: when a junior developer sees that proposal, they might say, “Looks good to me! Let’s do it!” (Imagine what hilarity ensues over the next sixty days from that type of collaboration.  Funny, right?  In his blog post, he describes this exact scenario in brilliant detail.)

So what do LLM use in law and LLM coding assistants for developers have in common? Done wrong, junior staff using LLMs can put an incredibly high review burden on senior staff, which is neither sustainable nor tenable.  (It is an organizational wiring that makes the senior developers the bottleneck.)

Let’s go back to the legal context again: one of the things managing partners are often responsible for is helping create and execute the long-term organizational strategy: what markets they compete in, what skills and talent are needed to execute the strategy (including how many people), and ensuring that the strategy is sufficiently profitable.

You can’t get very far into this type of planning process before you start thinking about workforce composition.  When partners with LLMs are vastly more productive, how many junior staff do you now need to execute all that billable work? What are the margins? (Oh, and by the way, many firms have retired partners who are still being paid — for hundreds of years, one of the perks of being a partner was being paid for the rest of your life. Many firms are thinking about shortening those terms, or have already done so.)

So my lawyer friend was starting to think about the big questions: Just how many associates do we need? How different will the law firm of the future be from the current ones, which for centuries have had a pyramid-like shape — lots of associates and comparatively few partners?  (In fact, many professional services firms share this characteristic: accounting, tax, audit, etc.)

(His funny comment: “This is a question that the next generation of partners will have to solve — hopefully I’ll be retired by then.”  Not an unreasonable feeling!)

These questions foreshadow conversations that I’m sure every technology leader will be having in the software delivery space — just what is the composition of technology teams of the future? 

Steve Yegge, in his typical pithy genius, describes LLMs as potentially being “the death of the junior developer.” (And I’m so delighted that Steve will be presenting on this topic at ETLS Vegas in August!)

My Story On AI-Assisted Writing

Here’s one more example: I consider myself a researcher and professional writer. My books have sold over 1MM copies, and in a good month, I spend 50% of my time writing. (Except for last year, when I was working on Wiring the Winning Organization with Dr. Steven Spear, when it was 150% of my time!)

Recently, I’ve been working with an amazing team on a position paper that we’ve chosen to write as a business fable.  A couple of weeks ago, I told the team lead that I didn’t think we’d make the deadline for this particular journal.  I loved the concept, but I didn’t think we could get from here to there in the time allotted.

But the team lead was adamant — he really wanted to do it, and he said he was willing to do whatever it would take.

My reaction: let’s give it a shot, and I secretly hoped that Claude Opus could be a good tool to help.

I recorded a call where I interviewed the team lead for an hour, asking him to describe each of the missing scenes that we needed to write.  For each scene, I asked: how does it begin, how does it end, and what lessons does the protagonist learn?

I then spent 45 minutes writing a prompt, which started with “you are the world’s best editor and operations researcher. Your goal is to help me fill in the blanks within a 20K word short story, written in the style of The Phoenix Project by Gene Kim, et al.”  

I inserted specific examples of text from The Phoenix Project.  I put in scene descriptions and outlines, and I added all the relevant summaries from the aforementioned call transcript (generated using Claude).  I included descriptions of the characters, giving lots of hints about people in popular culture from whom the characters could be drawn.

I added an elaborate outline and description of the first scene we wanted written, along with several hundred words that were already written. I pasted that whole prompt into Claude, and then added, “give me three alternatives of the missing parts of this scene, 1000 words long.”  
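
For the curious, here’s roughly what that assembled prompt and request look like as code — a minimal sketch using the Anthropic Python SDK, where the file names and prompt skeleton are illustrative stand-ins for my actual materials:

```python
# A minimal sketch of the "assemble one giant prompt, ask for three
# alternatives" workflow described above. Assumes the Anthropic Python
# SDK (pip install anthropic) and an ANTHROPIC_API_KEY in the
# environment; file names and prompt text are illustrative.
import anthropic

client = anthropic.Anthropic()

def read(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()

# Role framing, style examples, character notes, transcript summaries,
# the scene outline, and the few hundred words already written all get
# concatenated into a single long prompt.
prompt = "\n\n".join([
    "You are the world's best editor and operations researcher. "
    "Your goal is to help me fill in the blanks within a 20K word "
    "short story, written in the style of The Phoenix Project.",
    "STYLE EXAMPLES:\n" + read("phoenix_project_excerpts.txt"),
    "CHARACTERS:\n" + read("character_descriptions.txt"),
    "CALL TRANSCRIPT SUMMARIES:\n" + read("transcript_summaries.txt"),
    "SCENE OUTLINE AND DRAFT FRAGMENT:\n" + read("scene_one.txt"),
    "Give me three alternatives of the missing parts of this scene, "
    "1000 words long.",
])

message = client.messages.create(
    model="claude-3-opus-20240229",  # Claude Opus, as of mid-2024
    max_tokens=4096,
    messages=[{"role": "user", "content": prompt}],
)
print(message.content[0].text)
```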

What it generated blew my mind.  It generated three 1,000-word versions of the section that were… not terrible! Some parts were laugh-out-loud funny (which we wanted!).  Other parts were huge misses. But no big deal!  I rewrote the prompt, adding many more concrete elements to the outline, and pulled the Claude slot machine lever again.

Repeat a couple of times, and there’s enough there that I feel like I’m ready to start editing the section myself. As in, roll up my sleeves and start editing the text in Google Docs.

Well, almost. I found myself highlighting portions of the text and copying them into Claude for rewriting.  I’d write things like, “more showing, less telling, more dialogue.”  Or “change from past tense to present tense.”  Or “make the protagonist more hapless.”  Or “make dialogue less overheated.”  Or “write a paragraph that describes how the team on the ground are trying to signal to abort mission, like waving hands.” (Its suggestion: “she motions at me frantically, one hand gesturing swiftly to her throat, slicing it horizontally in a universally recognized gesture to abort mission.” Nice!)
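
If you scripted that highlight-and-rewrite loop rather than pasting into a chat window, it might look something like the sketch below — again assuming the Anthropic Python SDK, with a hypothetical rewrite helper and illustrative file name and instructions:

```python
# Hypothetical sketch of the highlight-and-rewrite loop: take a
# selected passage plus a one-line editing instruction, and get back
# a rewritten version to paste into the draft.
import anthropic

client = anthropic.Anthropic()

def rewrite(passage: str, instruction: str) -> str:
    """Ask Claude to rewrite a highlighted passage per one instruction."""
    message = client.messages.create(
        model="claude-3-opus-20240229",
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": f"Rewrite the passage below. {instruction}\n\nPASSAGE:\n{passage}",
        }],
    )
    return message.content[0].text

# A chunk pulled from the Google Doc (illustrative file name),
# run through the same kinds of micro-edits described above.
passage = open("highlighted_section.txt", encoding="utf-8").read()
passage = rewrite(passage, "More showing, less telling, more dialogue.")
passage = rewrite(passage, "Change from past tense to present tense.")
passage = rewrite(passage, "Make the protagonist more hapless.")
print(passage)
```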

A couple of hours later, I texted one of my coauthors, letting him know that I had several thousand words he could look at. It was amazing seeing his comments and the changes he then made. It felt awesome and exhilarating, because this is the fun part of writing! He and I were working together to make something really awesome.

This is as opposed to working by myself, staring at a blank page, sometimes banging my head on the screen for hours, trying to write enough words so that other people could read and comment on it.

Over the last decade, I’ve learned that I can write 1,000 words per day — on a good day. Using Claude as a brainstorming partner, transcript summarizer, sounding board, and editor with endless energy and readiness to suggest alternatives, I was able to write several thousand words I was pretty proud of — something that could have taken me a week to do unassisted.  Let’s call that a 10x productivity advantage, at least in the earliest “blank page” stage of writing a manuscript.

I felt like a director of a movie, instead of just a scriptwriter.  I give Claude sections of the text, asking it to make the changes I want, until it is close enough — then I start rewriting and refining it until I love every word on the page.  To me, this is the most fun and rewarding part of the writing process.

Searching my history, here are some of the other actual prompts I wrote:

  • Rewrite this, and show how ____ is excited, and responds to the boss’s vision
  • Tone this down so it reads more like a political thriller
  • Can you introduce that she’s a brilliant civilian somehow in the first paragraph or two? 3 ways.
  • Play up how bewildered protagonist is at mention of the project
  • Not quite: let’s try something like this?  3 alternatives  (I scratched out 4 short, grammatically incorrect sentences, to convey what I wanted)
  • focus on protagonist paragraph saying oh and uh-huh. he’s embarrassed that he doesn’t actually understand, and is trying to hide it — 3 ways
  • focus on how protagonist says “hold on, everyone”.  He needs to exclaim how everyone isn’t seeing the big picture. To others, he seems unexplainably angry.
  • Suggest rewrites for this section — what’s missing is protagonist scanning the list, and realizing he’s forgotten the users, as he learned from last section.  come up with 3 suggestions:
  • write one more paragraph, of team having aha, and putting their heads together to come up with the best, most experienced users.  3 alternatives
  • Split this into multiple sentences — too long!

Even reading these prompts, I get so excited about writing with Claude. All those platitudes about automation freeing up time for your most creative work? Yes, 100% true — it can feel absolutely magical.

We’re nearing a finished first draft, so I’ve read through the manuscript in its entirety.  I think it’s amazing and hilarious, and I’m so proud to be one of the co-authors.  Heck, Steve Yegge looked at the draft and said, “I’ve read it, and it’s amazing.”  Other people in the target audience have mentioned how it hits oddly close to home, because it describes their problems, some happening to them right now, while being absolutely hysterical to read.

A mere fraction of those original 7,500 words are still there.  Most have been rewritten, some by me, some by others, some by Claude, but there were always humans in the loop, steering and directing. What Claude did was help solve the “blank-page problem.” As the amazing Brian Scott said, “it helped you go from the zero-yard line to the 20-yard line faster, then the 40-yard line, and hopefully all the way to the end.”

Yes!  So true!  (And I’m so excited that Brian Scott will be presenting at ETLS Vegas too, describing all the amazing work he’s doing, co-leading the governance processes for all things GenAI-related at Adobe, to maximize liberty and responsibility, so creators can go as fast as the business needs!)

I mentioned before that Claude helped me get to “first shareable draft to co-author” 10x faster — but of course, that’s just a small fraction of the total effort.  At this point, my co-author has put 100+ hours into the draft, while I’ve put in 30+ hours. Looking at what I’ve written, I’m estimating that Claude has made me 2-3x more productive overall.

I would never want to go back to writing the old way! (And my thanks to @storyhobbit for his fantastic YouTube channel, The Nerdy Novelist.)

Aside: When Claude Didn’t Work For This Project

I was talking with our head of publishing, Anna Noak (who has personally edited every book I’ve done since The DevOps Handbook), about this today.  She had some questions about how necessary good human input was in the beginning.  I’m sure part of her question was this: if AIs are generating the content, then technically, the author is the AI, not the human.

(This is just one of the many huge questions being argued in the courts about copyright and AI. The other big question is how companies who train their LLMs on copyrighted content will compensate the publishers and authors.)

So I went into the Google Docs revision history and looked more closely at which text survived.  About 30% was thrown away altogether, almost immediately.  Which portions were they?  The portions that didn’t have a human-written first draft, and instead relied only on the Claude-generated outlines from the call transcript.

Interesting, right?  It’s still Garbage In, Garbage Out — or rather, Nothing In, still Nothing Out.

In other words, to get good narrative fiction from an LLM, you need to write at least a minimal skeleton draft: the equivalent of a storyboard, from which the LLM can start “tweening” between the key frames (read this ChatGPT explanation of the division of responsibility between senior and junior animators).

Without that equivalent of a storyboard, the LLM will just start making stuff up, without the constraints you want it to stay within, which results in text that doesn’t make any sense at all, and certainly doesn’t achieve the writer’s goals.

I believe this furthers the case that AI helps the experienced people far more than inexperienced people — the seniors more than the juniors.

(A little aside: what also happens when you reduce the cost of production for writing by 100x? You get 100x more things written and published. I’m thinking about turning off ads on my Kindle, because instead of recommending interesting books, it bombards me with spam promoting low-quality, AI-generated content. Even the book covers are terrible, often with misspelled titles — a classic text-to-image AI error. Yuck! Jevons Paradox strikes again.)

Forrest Brazeal’s Take On GenAI and Knowledge Work, And My Kids

Last week, I was also talking with the amazingly talented Forrest Brazeal: creator of the Cloud Resume Challenge, and musician and performer of genius works such as Transformed!, The Re-Org Rag (I’m My Own VP), and That Sinking Feeling.

We were talking about how creators and knowledge workers might feel existential dread about LLMs. (I’m excited that he’ll be performing an amazing new song on this topic at ETLS Vegas!)

I told him that I’ve had so many amazing experiences with LLMs, including the story above, and my feeling is quite the opposite: exhilaration, appreciation, awe, excitement, etc. 

But upon some reflection, I do feel existential dread about LLMs in one context: the future of my kids, who are 16, 14, and 14. They will be entering a workforce undergoing incredible tumult and disruption, changing in difficult-to-predict ways.

As a parent, how do I best prepare them for a world where entry-level jobs for new college graduates might be very, very different, and potentially far fewer in number?

In summary, I have no doubt that AI will change the nature of work, in wonderful ways. But it may change the composition of the workforce, in ways that will create disruption and upheaval.

But without a doubt, I worry about people in the early parts of their careers and people just now entering the workforce (which includes my high school kids).  And the best advice I can give them is to read Steve Yegge’s post.

PS: This is literally true: my wife just told me she met someone who is a junior in computer science at Boston College and asked me if I had any advice for her.  I told my wife to have her email me, because I will send her a link to Steve’s post the instant it’s up.

Interested in Learning More About GenAI? See you at ETLS Vegas!

Holy cow, the Enterprise Technology Leadership Summit Las Vegas is less than three months away! I’ve always been proud of the fact that we have had some of the most senior technology leaders in this community, all of whom have achievements I deeply admire.

As always, we have three programming objectives:

  1. Experience reports from technology leaders, on how they improved the “socio” parts of their sociotechnical systems.
  2. Subject matter experts, sharing valuable knowledge that technology leaders need to achieve their goals.
  3. Career advice, for both technology leaders and their teams.

Regarding #2, we have a significant focus on GenAI, because so many technology leaders in this community are leading GenAI initiatives within their organizations.


A sampling of the GenAI talks I’m excited about:

Devlin McConnell, Emerging Tech Strategy Lead & Matt Butler, Director Analytics & Automation,  Center for Audit Practices and Enablement — Vanguard: Devlin will talk about all the amazing things they’re doing experimenting with GenAI across their enterprise, and Matt will talk about the pioneering work they’ve been doing within internal audit to help with the audit planning process. 

Brian Scott, Principal Architect & Dan Neff, Principal Cloud Architect — Adobe: Brian and Dan discuss how they are rolling out and governing AI across one of the ten largest software companies in the world. They worked with a broad, cross-functional team—including legal, compliance, and information security—to maximize liberty for thousands of developers while also maximizing responsibility.

John Rauser, Director of Software Engineering — Cisco: John is returning to the conference to share Cisco’s Generative AI journey over the last year. John is an engineering leader building security products in the Cloud Security group. He will present Cisco’s experiences building GenAI into the security portfolio, highlighting an organizational platform strategy to enable execution. He will also present discovered patterns and unexpected antipatterns when using GenAI.

Fernando Cornago, VP Digital Tech — adidas: Fernando will share what they’ve learned from their pilot programs to study how GenAI can improve engineering productivity. Their goal was to explore how GenAI could help people have more time for focused work, as opposed to overhead meetings, design, alignment, reporting, etc.

Shawn “Swyx” Wang: If you’re like me, to keep up with everything going on in GenAI, you’ve probably listened to countless Latent Space podcast episodes by Swyx. He’s pioneered defining the role of the emerging AI engineer as a separate discipline, and he’ll share advice for technology leaders on what they need to know.

Patrick Debois: We know him as the “godfather of DevOps,” having coined the word back in 2009. Over the last 1.5 years, we’ve followed him as he continued his adventures in GenAI, bringing AI features to market as VP of Engineering at Showpad. He continues his pioneering work, understanding how technology leaders build and run GenAI-powered services in production.

Adam Seligman, VP Developer Experience — AWS: Adam is responsible for building tools for every kind of builder and developer and their teams, including using GenAI. He will share experience reports and lessons learned deploying GenAI within AWS, including in HR, finance, and datacenter operations, as well as AWS’s strategy for its customers.

Paige Bailey, GenAI Developer Experience Lead — Google: Paige was formerly Lead Product Manager for Generative Models for Code AI, PaLM 2, and more. I am so delighted that she’ll be sharing how Gemini has been growing internal market share for the AI that powers so many of the legendary Google properties (beyond just dev productivity).

Joe Beutler, Solutions Engineer — OpenAI: Joe will share learnings from their customer engagements, including how Spotify translates popular podcasters into different languages in their original voices, how Klarna automated many customer support activities, saving $40MM annually, and a creative campaign with Coca-Cola. He will dive into how to avoid common pitfalls, such as the recent high-profile chatbot errors, and give us mind-expanding glimpses of what is possible with available tools today.

George Proorocu, IT OPS Chapter Lead – Cybersecurity & Fraud — ING Bank: All the talks above showcase incredible ways we can use GenAI for a positive impact. However, George will talk about how adversaries use deep fakes and other new techniques to impersonate executives and circumvent critical controls in enterprises. He will demo these attacks and describe what ING is doing about it, including training, new controls and procedures, etc. I suspect you’ll show this talk to all your colleagues after you see it.

Steve Yegge, Cody Platform Guy, Sourcegraph: You likely know him from his Amazon and Google days, and his depiction of the famous Bezos API memo. He will discuss his favorite dev tools that he helped build at Google, why they were so useful, and what drew him out of retirement. He will talk about lessons building and operating the Cody AI assistant and services relying on LLMs, and where we go from here.

John Willis: “Dear CIO: Sorry For All The GenAI Tech Debt” — this is an incredible working paper from a group of technology leaders who are inventorying the types of technical debt being built up during this incredible frenzy of distributed innovation, which someone will eventually have to clean up. As they put it, it’s like Shadow IT all over again. They will provide practical guidance on how to mitigate these risks.

Nearly one-third of the plenary programming is on GenAI — our goal is to put together the “ultimate GenAI learning day” for technology leaders! See you there — and register now!

About The Author

Gene Kim

Gene Kim is a best-selling author whose books have sold over 1 million copies. He authored the widely acclaimed book "The Unicorn Project," which became a Wall Street Journal bestseller. Additionally, he co-authored several other influential works, including "The Phoenix Project," "The DevOps Handbook," and the award-winning "Accelerate," which received the prestigious Shingo Publication Award. His latest book, “Wiring the Winning Organization,” co-authored with Dr. Steven Spear, was released in November 2023.

