
April 7, 2016

Behind the Scenes of the 5th State of DevOps Survey

By IT Revolution

State of DevOps Survey

 

“I only got four hours of sleep last night. I woke up after an anxiety dream about deadlocks in the database.”

Anyone who has run an online service probably knows this feeling. And this is what Jez Humble wrote on our Slack channel, 24 hours before the launch of the 5th annual State of DevOps Survey. As I have mentioned many times, I've learned more doing this project than on any other project in my professional career. This has been a four-year collaboration with Jez Humble and Dr. Nicole Forsgren, as well as Nigel Kersten and Alanna Brown from Puppet Labs.

This year Jez, Nicole, and I created DevOps Research and Assessment (DORA) with the goal of taking what we've learned from analyzing the more than 20,000 responses we've collected over the last four years and using it to help organizations assess and improve how their teams are performing, in terms of both practices and performance.

If you’re interested in survey design or survey analysis, you can read more in the 2014 State of DevOps Report: Statistics Class Edition.

In this post, I want to give you a behind-the-scenes look at the 24 hours leading up to the launch of the survey, which ran on a survey tool we built ourselves, as well as some of our lessons learned. In previous years, we used a fantastic tool called SurveyGizmo to execute the survey. Puppet Labs uses SurveyGizmo for all their internal and external surveys, so it made sense to use something that was familiar.

However, this year, we decided to use a survey engine that Jez Humble wrote. The DORA team has been using this tool for our customers, and Jez persuaded the rest of us that we should use it for the 2016 State of DevOps research. Nigel Kersten, a former Google SRE and now CIO at Puppet Labs, and I both thought this was a preposterous idea because we had never tested the survey engine at scale, but Jez can be very convincing.

I wanted to share with you what that 24 hours before our launch was like, focusing on how we shored up our production telemetry and made some last-minute contingency plans, as well as some of our top lessons learned. I realize that compared to the stories one hears about at Velocity or DevOps Enterprise Summit, our launch is small potatoes, but for us, the stakes were high, with over six months of preparation depending on how the next few days went.

(And to be perfectly honest, one of my biggest fears was the reputational fallout of screwing this up. I could just imagine the headlines: “co-authors of The Phoenix Project and Continuous Delivery totally screw up, doing a Phoenix Project to themselves.”)  

The TL;DR version:

  • Even last-minute preparations can pay off
  • Production monitoring can compensate for many shortcuts necessitated by real-world constraints
  • Hosted Graphite and New Relic APM are amazing
  • Chat tools radically transform the nature of production launches — we used Slack, but anyone not using something like it (e.g., HipChat, Flowdock, even IRC) needs to!

Take the survey! >>>

The 24 Hours Before Launch

The launch of the State of DevOps survey was Tuesday, March 22, 2016. This is when tens of thousands of emails would go out, announcing that the survey was live, from IT Revolution, Puppet Labs, and fellow sponsors Atlassian, Automic, CA Technologies, HP Enterprise, Splunk, and ThoughtWorks.

One minor complication: that week, everyone was scattered around the globe. Jez was working full-time at the amazing 18F organization inside the US Federal Government, but was in London doing a speaking engagement, having just been in India. As the sole engineer, he was doing all the last-minute work during his evenings.

Nicole, our resident researcher and protector of the sanctity of the survey instrument, was in the middle of a weeklong trip to India for her work at Chef, with unreliable hotel internet access.

And me? I was under the gun finishing up the developmental editing work for the upcoming DevOps Handbook (Yes, it’s coming!), to stay ahead of the copyeditors. I was also down with pneumonia.

So, 24 hours before launch, the entire team was scattered around the globe, each with lots of daytime obligations. Alanna and I had just finished reviewing the survey instrument, and we had a list of issues that we needed to fix.

It was at this moment, pondering the implications of potential code changes to the survey engine, that I realized how many things I had wanted to do for this project but never finished, such as writing a testing harness in Ruby and Nokogiri so that we could do load testing. But with only 24 hours to launch, there are only so many corrective actions you can take.
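For what it's worth, the shape of that harness isn't complicated. Here is a minimal sketch in Python rather than the Ruby and Nokogiri version I had in mind, with an entirely hypothetical base URL, page count, and form fields:

> # Hypothetical load-test harness: walk N concurrent "respondents" through the survey.
> # The URL, endpoints, and form fields below are placeholders, not the real survey engine's.
> import concurrent.futures
> import requests
>
> BASE_URL = "https://survey.example.com"   # placeholder
> PAGES = 8                                 # the survey had a registration page plus 8 pages
>
> def take_survey(respondent_id):
>     session = requests.Session()
>     session.post(f"{BASE_URL}/register",
>                  data={"email": f"loadtest{respondent_id}@example.com"})
>     for page in range(1, PAGES + 1):
>         # Submit a dummy answer for each page; a real harness would parse the form
>         # fields out of the HTML (which is where Nokogiri, or BeautifulSoup, comes in).
>         resp = session.post(f"{BASE_URL}/page/{page}", data={"answer": "3"})
>         resp.raise_for_status()
>     return respondent_id
>
> # Simulate 50 respondents hammering the site at once.
> with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
>     list(pool.map(take_survey, range(50)))

Even a crude script like this would have told us roughly where the survey engine falls over.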

The mental checklist that I felt like we had to get through was along the following lines, all based on what could go wrong and how we could best mitigate those risks:

  • What production metrics were we tracking?
  • Approximately how many concurrent sessions should we be able to handle?
  • How exactly will we know if the survey engine falls over? (Besides Twitter, that is…)
  • What are the exact steps we need to take to fail over to a backup service?

We had the entire project team assembled in Slack, and we started going through these questions.

Last-Minute Production Metrics

On the subject of performance, Jez walked us through his rationale for confidence in his app's ability to handle the expected load. "Even if we had 50K respondents go through in the next month, that's still only 74 respondents per hour, or about 1 completed survey per minute, or 8 survey pages per minute. That is nobody's idea of a high load."
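The back-of-the-envelope math, assuming roughly a 28-day month, does check out:

> # Back-of-the-envelope load estimate for 50K respondents over ~28 days.
> respondents = 50_000
> hours_in_month = 28 * 24                      # 672 hours
> per_hour = respondents / hours_in_month       # ~74 respondents per hour
> per_minute = per_hour / 60                    # ~1.2 completed surveys per minute
> print(round(per_hour), round(per_minute, 1))  # 74 1.2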

Jez further opined that the app runs on the ever-sturdy and well-understood LAMP stack (Linux/Apache/MySQL/PHP), and “there’s only one database connection per page load, there are no sessions, and each database call is fast, so we should be clear of any of the classic performance risks.”

Good enough for me.

To track how the application was performing, we decided to post the following metrics to a Slack channel every 30 minutes:

  • Number of surveys started and completed
  • Average survey completion time
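Conceptually, that periodic Slack post is just a scheduled job hitting an incoming webhook. Here is a minimal sketch, assuming a standard Slack incoming-webhook URL and placeholder counts (this is not the actual code Jez wrote):

> # Post survey counts to a Slack channel every 30 minutes via an incoming webhook.
> # The webhook URL and counts are placeholders; the real numbers came from the survey database.
> import time
> import requests
>
> SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder
>
> def get_counts():
>     # In the real app this would be a SQL query against the MySQL survey database.
>     started, completed, avg_minutes = 123, 45, 22.0  # placeholder values
>     return started, completed, avg_minutes
>
> while True:
>     started, completed, avg_minutes = get_counts()
>     requests.post(SLACK_WEBHOOK_URL, json={
>         "text": f"Surveys started: {started}, completed: {completed}, "
>                 f"average completion time: {avg_minutes:.0f} min"
>     })
>     time.sleep(30 * 60)  # wait 30 minutes (or run once per cron entry instead)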

On top of this, we needed some earlier indicators of problems so that we could take any corrective actions before something catastrophic occurred, and we discussed how to shore up our production metrics. Earlier in the year, I had worked on a fun book tracking project with Tom Limoncelli, and we used a great service called Hosted Graphite.

Hosted Graphite is incredibly simple to use — it’s basically Graphite and Grafana as a service. They have many libraries available, such as for Ruby, PHP, Python, etc. Literally, within a couple of minutes, you can see metrics displayed in a new Hosted Graphite account. (Bonus: there’s a free 14-day trial, and it only takes a minute to sign up. It’s a fantastic service!)

Once you have an account set up, you write one line of code to send a metric, like this:

> HostedGraphite.send_metric("surveys.completed", 1)

Within an hour, we had the following metrics displayed in a nice dashboard:

  • The number of milliseconds to render each web page (which includes any database queries)
  • Number of people completing each survey page: registration page, page 1, page 2, … page 8

(Quite frankly, I was blown away that any of the metric events got displayed at all. Hosted Graphite lets you send via TCP or UDP. Jez was so paranoid about blocking network calls that he insisted on using UDP. To my amazement, the metric events were actually received. I always assumed UDP packets were doomed to get dropped somewhere, especially since I think the Hosted Graphite servers are in Germany. Although looking at the graphs now, I’m pretty sure many of our UDP events aren’t actually making it to Hosted Graphite servers — but some data is better than no data, right?)
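If you're curious what fire-and-forget UDP metrics look like, here is a rough sketch using the plaintext Graphite protocol that Hosted Graphite accepts (the API key, endpoint, and metric names below are placeholders; check Hosted Graphite's docs for the real details):

> # Send a single metric over UDP in the Graphite plaintext format: "<apikey>.<metric> <value>".
> # If the packet gets dropped along the way, nothing blocks and nothing retries.
> import socket
>
> HG_API_KEY = "YOUR-API-KEY"                            # placeholder
> HG_HOST, HG_PORT = "carbon.hostedgraphite.com", 2003   # placeholder endpoint
>
> def send_metric(name, value):
>     message = f"{HG_API_KEY}.{name} {value}\n"
>     sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
>     sock.sendto(message.encode("ascii"), (HG_HOST, HG_PORT))  # no acknowledgement expected
>     sock.close()
>
> # Example metric names, in the spirit of the dashboard described above.
> send_metric("survey.page_render_ms", 4)
> send_metric("surveys.completed", 1)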

Hosted Graphite also has an Alerting module that is in Beta, so we set an alert to post to our Slack channel if any web page took more than 10 ms to render.

Jez also installed New Relic APM, which has saved the butts of an entire generation of programmers who need to figure out “Holy crap, why are all our database calls taking so long?” New Relic would allow us to have lots of telemetry if something started going wrong with our database calls.

(Incidentally, the fact that we were working in Slack not only made it possible for a team scattered around the world to work effectively, it also made it easy to recover accurate timelines after memories fade. I referred to the channel history constantly while writing this article, and all the screenshots came from our Slack channel as well. Seriously, if you're not using something like Slack, Flowdock, or HipChat, you need to try it! Trust me, once you do, you'll never go back!)

Planning For Failure

As a fallback, Alanna also replicated the entire survey in SurveyGizmo. If something went wrong, Jez would redirect all traffic to our SurveyGizmo site. Although as I write this, I now realize that the only person who knows how to do this and has the relevant login credentials is Jez. Given that he was going to be on a transatlantic flight from London to San Francisco three days after launch, this would have left us unable to fail over. Oops! We should have walked through exactly what that procedure was and documented it.

(Huh!  Apparently, Jez had walked Nigel Kersten through this procedure, and provided all the necessary login credentials — I missed that Slack message from 3 a.m.  Having all these procedures in one Google Doc would have been great, so that you don’t need to have read all the Slack messages.)

We also discovered that Jez was being extremely parsimonious (that’s a fancy word for “cheap”), having put us on a relatively small VM (256MB RAM + 1 CPU) on Gandi, his favorite hosting platform. After some persuasion, he finally bumped us up to 2 CPUs and 2GB of RAM (which is still pretty puny — did I tell you that Jez was cheap?)

Seriously, not putting it on a bigger VM is just silly. On that project that Tom Limoncelli and I were working on, Tom had initially created an "f1-micro" VM in Google Compute Engine which has 512MB of RAM. It actually worked great for months, until I needed to install some Python libraries that needed to compile OpenSSL. For hours, I pounded my head against the wall trying to figure out why the "pip install" command was failing, when I finally realized the load average was pegged at 9+. Out of memory. I probably spent four hours on that and similar problems, which could have been fixed by spending $7 more per month to run on a 1.5 GB memory "g1-small" instance. In my opinion, letting Jez run on a measly 2GB of RAM was an unnecessary risk.

We also started hourly database backups to S3 from a cronjob to make sure we didn’t lose data, and verified that we could actually restore the database.
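In spirit, that backup job is nothing fancy. Here is a minimal sketch of an hourly, cron-driven backup script in Python, with placeholder credentials, bucket, and database names (the real job may well have been a shell one-liner):

> # Dump the survey database and push the compressed dump to S3; intended to run hourly from cron.
> # All names and credentials below are placeholders.
> import gzip
> import subprocess
> from datetime import datetime, timezone
>
> import boto3
>
> timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
> dump_path = f"/tmp/survey-{timestamp}.sql.gz"
>
> # mysqldump writes SQL to stdout; gzip it straight to disk.
> dump = subprocess.run(
>     ["mysqldump", "--single-transaction", "-u", "backup_user", "-pPLACEHOLDER", "survey_db"],
>     check=True, capture_output=True)
> with gzip.open(dump_path, "wb") as f:
>     f.write(dump.stdout)
>
> # Upload to S3; restoring is just the reverse (download, gunzip, mysql < dump).
> boto3.client("s3").upload_file(dump_path, "my-survey-backups", f"mysql/{timestamp}.sql.gz")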

Launch Day

It's 7 a.m. on Tuesday — one hour before the email campaigns launch. Jez is working on closing a potential problem involving links that could accidentally disclose user data, but he is leery of making significant changes that could introduce a fatal error. (More on this later.)

Alanna has been making changes to the survey throughout the night, and Nicole is almost finished making her changes, too. We just discovered that only Jez can make certain code changes, and only Nicole and Alanna can make certain survey changes — which means that correcting some errors actually has to involve two people. A very amusing "non-devopsy" situation!

By the way, the fact that Alanna is making changes to the survey, only hours after Jez showed her how, is so cool. I’m always blown away by how fearless people like Alanna are — unlike scaredy-cats like me. People like that are such a tremendous asset. And because Nicole was having intermittent network issues, thank goodness Alanna was able to make these changes on her behalf!

(And in contrast to Alanna's fearlessness, in a very ironic moment, Jez was afraid to make any significant code changes on the day of launch. If you've heard enough of Jez's talks, you'll know that he espouses fearlessly making production changes, so this was quite amusing to us all. But his fear was for a good reason — he had only gotten four hours of sleep because of dreams of database locking problems, and had other obligations throughout the day of launch — and frankly, I was petrified of us introducing a conditional error that would send everyone to a blank page.)

Looking back at the Slack logs, one of my favorite moments can be seen in this screenshot — Nicole is making changes from her hotel in India, and her network connection keeps cutting out, which is resulting in very strange session behavior that is freaking Jez out. And Nicole also needs help from Jez and Alanna to confirm that the changes she's pushed actually went through.

[Screenshot: Slack messages]

But by 7:30 a.m., all our changes are done and we’re finally ready — we all have the Grafana dashboard up from Hosted Graphite, and amazingly, the screenshot below is what we see: the bottom graph shows the people marching through the survey pages, and the top graph shows the time required to render the survey pages (which require a database call), which doubled to 6 ms, but then stabilized at 4 ms, with the load average hovering at 0.5.

[Screenshot: Hosted Graphite dashboard]

After all the stress of the previous day, we’re getting real survey traffic from the email campaigns, and the site is running and staying up!

At 8:24 a.m., Jez exclaims, “Someone got to page 6 of the survey!” We all cheer.

A minute later, Jez observes, “This is like watching a tortoise race,” as we’re all watching and waiting for people to finish taking the survey (which we eventually learn is taking people 22 minutes, on average).

And I suppose that is one of the lessons of this whole exercise. When production launches are sufficiently instrumented, they can be pleasantly boring — a wonderful change from endless crisis and drama! As Jez said, “We instrumented the crap out of the code so I could sleep at night.”

[Screenshot: Hosted Graphite dashboard]

 

The Need for Situational Awareness

Yeah, John Allspaw hates the term "situational awareness," so I try to avoid using it. But one of the assets we had was Nigel constantly scanning social media and all the Slack channels he's on, telling us about things going wrong.

We learned about SSL certificate problems and even a perception that we were leaking user data. And here’s a reality — no amount of production telemetry takes the place of real people who care, observing how people in real communities are perceiving and reacting to your work. Plus, Nigel is such a well-respected person that he was the ideal spokesperson to help correct some misperceptions that surfaced.

Also in hindsight, designating someone who can keep a clear head and isn’t buried in the weeds to do external communications is vital — when we were talking about how to communicate that we weren’t actually leaking survey data, Nigel shortcutted the entire discussion with the following phrase: “I think it’s a pretty simple message.”  And we all trusted that he could do what needed to be done.

Conclusion

Of course, there was a lot more work leading up to launch than is depicted here. The process of creating a survey instrument with this level of rigour and depth takes a lot of passion and commitment from everyone involved.

We sincerely thank all of you for supporting our efforts by taking the survey and sharing it with your peers and friends!

Take the 2016 State of DevOps Survey now! >>>

 

