
June 5, 2018

Context Switches in Software Engineering

By IT Revolution

The following is an excerpt from a presentation by Chris Hill, Software Development Manager at Jaguar Land Rover, titled “Context Switches in Software Engineering.”

You can watch the video of the presentation, which was originally delivered at the 2017 DevOps Enterprise Summit in San Francisco.

A bit of background

  • At Jaguar Land Rover we have about 42,000 employees.
  • We make about 22 billion pounds a year in revenue. (That’s about 30 billion dollars.)
  • We’ve got about 135 software groups across 5000 software personnel, internal and external.

I currently head up systems engineering for the current generation of infotainment. If you were to sit in one of our Jaguar Land Rover vehicles made in the last two years, you would use my current product.

Today, I’m going to share about context switches in software engineering, which is probably one of my favorite topics. Specifically, I want to share:

  • Context switching in infotainment and at Jaguar Land Rover
  • Some penalties associated with the different types of context switches
  • An analysis of the interactions of the software activities between software specialties

Let’s begin.

I like to visualize the software development life cycle in terms of a factory.

You have your inputs.

You have your planning, where your project managers would typically schedule and do your prioritization.

You’ve got your backlog where your change requests and your features come in.

Then you have flow control, which decides whether or not to release work in progress (WIP) into your factory.

You are probably very familiar with these stations: requirements, design, develop, validate, deploy, and ops. Envision all of the software activities that are done there, all of the specialties.

10 years ago I joined a software startup

Like many software startups, they were just starting their software product. I happened to be right out of college, and I was the only software person they hired. Unfortunately, that meant I was involved in every part of the software development lifecycle. One benefit of operating this way was that I had full autonomy.

That meant I was the only one who could interact with the entire factory, and one thing I didn’t have was the ability to plan what I worked on.

I might come in at the beginning of the day thinking that today was going to be an ops day. I might take an hour on customer needs and wants and do some requirements authoring. Then I might remember that I’m about 75% of the way through a bug fix, realize that it’s a higher priority, and switch to that. I might have some code from last week that wasn’t published but that I know is ready for release (this was back in the days of manual deployment), so I actually need to do some deploy work. If I’m really unlucky, since I’m terrible at design, I’ll get asked to do some HMI design work, maybe within the next hour.

Unfortunately, every day was a different station for me, and I was the bottleneck at every point of the factory.

Fast forward to JLR, infotainment specifically.

We’ve got a lot more people. These people could contribute only at their own station, they could be proficient enough to review other people’s work, or they could be proficient enough to move to another station, but typically more people allow you to scale.

Enter the idea of context switch

Imagine we’re all CPUs. If we’re working on a current set of instructions and another, higher priority set of instructions comes into the queue, we need to save the current state of our instructions, address the higher priority set, finish it, take it off the stack, resume the state of the lower priority set, and finish executing that.
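As a rough illustration of that save/restore cycle, here is a minimal Python sketch; the task names are hypothetical examples, not from the talk:

```python
# Minimal sketch of a CPU-style context switch: save the state of the
# current (lower-priority) task, run the higher-priority task to
# completion, then restore the interrupted task and finish it.

def context_switch(current_task, urgent_task, saved_states):
    saved_states.append(current_task)   # save the current task's state
    urgent_task["done"] = True          # address the higher-priority work first
    resumed = saved_states.pop()        # take it off the stack and resume
    resumed["done"] = True              # finish executing the interrupted task
    return resumed

# Hypothetical example: an urgent bug fix interrupts requirements authoring.
saved = []
requirements = {"name": "requirements authoring", "done": False}
bug_fix = {"name": "urgent bug fix", "done": False}
context_switch(requirements, bug_fix, saved)
print(requirements, bug_fix)  # both finished, but only after a save/restore
```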

Humans do the same thing.

If I’m working at the development station and I’ve been asked to work on task number two even though I’m in the middle of task number one, and it’s the same project, I’m going to consider it a lower penalty.

If you look on the right side, I’ve got a ‘Barometer of Penalty’.

The next stage up in penalties is if I ask you to work on a different project.

It happens to be the same station, the same type of work, but it’s a different project. Now I need to save the state of my previous task and previous project and ramp up on the next one to finish it, if I’m told that it’s a higher priority that I need to work on right away. That’s a little bit higher on the penalty scale.

The next one is a different station, such as design. If I ask you to work at a different station, a completely different work type, but I keep you on the same project, I’m hoping you’ll be a little more familiar because you know the project. But this is a completely different type of work, so it’s a higher penalty on my barometer.

The last one is if you switch your station (your task type), your project, your tool set, maybe the computer you’re using, maybe the operating system you’re familiar with; you could even be asked to go to a separate building. There are many other variables.

If you’re completely changing your environment and your variables, this is very taxing on the human mind. I’m sure we’ve all dealt with this at one point in time or another. A CPU just has to worry about addition; a human has to worry about all these other variables. It’s almost like asking a CPU to cook me breakfast tomorrow. It has no idea how to do something like that, but at the same time it’s higher priority and I need it addressed right away.

Questions based on those findings about our penalties

Should we eliminate all higher penalty context switches?

The answer is, it depends.

We found that if you can sustain the capacity and remain efficient across the different specialties, then you can avoid these higher-penalty context switches by keeping capacity in those specializations.

My favorite question: should we train cross-functional teams or train cross-functional people?

The difference is between somebody who can work and be proficient at multiple stations, and a team that is capable of sharing empathy, where each member is specialized at their own station.

Which one is more of a worthwhile investment?

Are some stations and roles easier to switch to than others? Do some roles understand other roles better than others?

Here in infotainment these are the specialties or the roles that contribute in our software factory.

You’ll probably recognize the majority of these because they typically match roles across the industry.

Value contribution areas within our factory

First up, Flow control station: Idea to WIP.

I went around and asked my coworkers, and other people in the software industry, the question that defines those arrows. I call those arrows empathy and proficiency arrows.

Out of all the product owners that you know, on average could they step in and be proficient at being a customer for that product?

Out of all of the project managers you know, on average could they step in and be proficient at being a customer?

Now I know that’s a complicated question. However, the green arrow symbolizes a yes, the red arrow symbolizes a no. We found that the relationship in this case is a highly empathetic relationship towards the customer.

These are the primary actors that exist specifically within this flow control station, where we’re trying to determine whether or not WIP should be released into our factory.

I’m not saying these specialties can’t do each other’s jobs; I’m just saying on average these are the results. Typically what happens in this particular station are what I call micro consulting engagements, where we’re actually interrupting all of those other specialties to determine whether or not we should release WIP. All of those interruptions are context switches in their own right as well.

What’s interesting is if I’m sitting at my desk and I’m absolutely crushing out code, and I’ve got my headphones on, completely in the zone, and somebody walks over to my desk and interrupts me, they’re automatically assuming that what they have to talk about is of higher priority than what I’m currently working on.

I don’t think that’s fair. In fact, I think they should rank whatever they’re going to talk about.

In the CPU’s case, all it has to worry about is addition and this queue line. I kid you not, I’ve actually had a queue line at my desk full of people who were going to interrupt me.

Typically that prioritization works the same way: had the CEO been in the queue line, I imagine I’d treat it the way a CPU treats a real-time thread: you can come to the front, what do you have to say?
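To make that concrete, here is a minimal sketch of ranking interruptions the way a CPU ranks work in its ready queue; the priorities and descriptions are hypothetical examples, not from the talk:

```python
# Minimal sketch: interruptions go into a priority queue instead of a
# first-come-first-served line at my desk. Lower number = higher priority,
# and a "real-time" item (the CEO) jumps straight to the front.
import heapq

interruptions = []  # entries are (priority, arrival_order, description)
heapq.heappush(interruptions, (3, 1, "question about a requirements document"))
heapq.heappush(interruptions, (2, 2, "build broken on a colleague's branch"))
heapq.heappush(interruptions, (0, 3, "the CEO is in the queue"))  # real-time priority

while interruptions:
    priority, _, description = heapq.heappop(interruptions)
    print(f"handling (priority {priority}): {description}")
```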

The next station is the requirements station.

The same relationship exists. One interesting thing I’ve found is that customers on average aren’t good at writing requirements, specifications, or even describing exactly what they want.

They’re very good at talking about needs, talking about wants, talking about vision, but typically when it comes to handing over to a developer, most of the time I’ve found it’s not enough.

They have the same sort of micro consulting engagements that the previous station did, again all interruptions, to ensure that the requirements being written are not going to be impeded further on downstream.

The next station is design: ‘How should we build this?’

This is an interesting one.

This is design, and design to me can be in two different categories. Design is super overloaded. It could be architecture, it could be HMI design.

And what you see here is a lot of red arrows. Basically I asked my coworkers again: out of all the architects you know, on average would they be proficient at being an HMI designer? The answer was no. The reverse relationship exists as well, and the same is true with the customer.

What this actually can show is there are some automatic friction points that exist between these specialties. This could also show you that we could spend some time to make them a little bit frictionless, or maybe we could spend the time developing a relationship that doesn’t have to do with the product or the project, but with the people in general.

Typically there are validation engagements that happen, which are also interruptions. For example, one of the UX designers has a trail-off based on how much effort they plan to put into a product. When they’re finished with their wireframes, or with the majority of iterations, they reserve their remaining capacity for these interruptions. They’re adding it into their workflow, which I thought was pretty smart.

The same consulting engagements exist further on downstream.

In the develop station, if we ask ourselves the same question: out of all the developers you know, on average could they fulfill a QA engineer role and be proficient at it?

A lot of people in these specialties don’t necessarily want to be specialized in one of the other areas, but they could be. We get double greens here across all three of these.

This is one thing that contributes to the value of DevOps — all three of these specialties understand each other’s risk, understand where each other’s coming from, understand what they could do to help the other person complete their task.

Validation engagements exist. We’ve migrated from design or theoretical, and now we’re at implementation. Most of these engagements are “Hey, I went to build the thing you told me to build or the thing that you wrote out, and it’s not going to work for me. It’s definitely not working out.”

How we exploit the double green

All of our build machine infrastructure is done in Docker containers, and it’s all defined within Packer.

Each one of our developers contributing to a product, if they have some sort of library or component they need the build machine to have, can go right to a Packer recipe, create a branch completely in isolation, make their new set of infrastructure, and point their product to build with that new set of infrastructure, all without bothering or disrupting anyone else in the entire workflow.

Here, ops has enabled self-service so the developer can work completely on their own and test whatever they need to. “I’ve got this new version of Python I need to put on the build machines.” “Okay, there’s the Packer repository, go ahead and do it.” We have CI on that Packer repository. We get an automatic deployment to a Docker registry. That Docker registry is pointed to by the product.
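As a rough sketch of what that self-service loop can look like as a CI step, here is a minimal Python script; the template path, image name, tag, and registry are hypothetical placeholders, not our actual setup:

```python
# Hypothetical CI step: validate and build the Packer-defined build-machine
# image from the developer's branch, then push it to a Docker registry so
# the product's pipeline can point at the new infrastructure.
import subprocess

TEMPLATE = "build-machine.json"                 # hypothetical Packer template
IMAGE = "registry.example.com/build-machine"    # hypothetical registry/image
TAG = "feature-new-python"                      # e.g. the developer's branch name

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)             # fail the pipeline on any error

if __name__ == "__main__":
    run(["packer", "validate", TEMPLATE])       # catch template mistakes early
    run(["packer", "build", TEMPLATE])          # bake the Docker image
    run(["docker", "tag", "build-machine:latest", f"{IMAGE}:{TAG}"])
    run(["docker", "push", f"{IMAGE}:{TAG}"])   # publish for the product pipeline
```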

Another way we exploited the double green arrow is automated test feedback with CI/CD pipelines. We can put test cases into an automated pipeline so that developers get the results back more quickly.
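A minimal sketch of that feedback loop, assuming a pytest-based suite; the reporting step is a placeholder for whatever the pipeline would actually post back to the developer:

```python
# Hypothetical pipeline stage: run the automated test suite and report the
# outcome back to the developer who pushed the change.
import subprocess
import sys

def run_tests():
    # pytest returns a non-zero exit code when any test fails.
    result = subprocess.run(["pytest", "--junitxml=results.xml"])
    return result.returncode == 0

if __name__ == "__main__":
    passed = run_tests()
    # In a real pipeline this would post to the commit status or a chat
    # channel; printing keeps the sketch self-contained.
    print("tests passed" if passed else "tests failed - feedback sent to developer")
    sys.exit(0 if passed else 1)
```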

Validation and deploy stations are the same type of relationship. However, your primary actor is typically the QA engineer. There are validation engagements that also exist when you’re in the QA phase.

Sometimes the validation engagement could be, “Should we ship this or not?” or “Should we disable this in production before we actually let it out?” One thing that’s unique about developing for an embedded device is we can actually put it into a production-representative vehicle without officially saying that we’ve released things. It’s very difficult for us to compare to the web world, because in the web world you can release everything out to millions of customers at scale very quickly. For us, we contribute toward an embedded device, or an OS that runs on an embedded device, and there’s a point in time at which we bless that for a release.

One way we exploit this specifically for the validation and deploy stations is to virtualize and simulate the production environments so that we don’t have to use hardware. One of the challenges with hardware is that it typically doesn’t scale, or by the time you’ve scaled it for what your team demands, it’s already outdated.

Here’s the ops station. The only surprise here for me is actually the architect. We found that most architects, on average, could be proficient at an ops role. Now, that’s not necessarily whether they want to be, but they could be.

Here are the lessons that we’ve actually learned

  • We found that if context switches are inevitable, we should factor them into capacity. This was actually really hard for me to swallow, but if it’s unplanned work and it happens so frequently, it’s now become planned work.
  • The capacity at which each of your stations is staffed depends on the project maturity. You may find at the beginning of a project that you have significantly more architects doing requirements at those stations than you do at stations further downstream.
  • We found that some roles are in a perpetual state of interruption. It’s always some sort of higher priority that you must be working on, but it never actually ends. This is a very challenging problem for us to solve when we have a due date that we need to deliver vehicles to customers with.
  • We found that empathy increases if a close team member has an impediment that you could fix. If the person right next to you is struggling because of something that you could actually take care of yourself, you’re more likely to take care of that problem when you’re next to each other and part of a cross-functional team. We typically found that cross-functional teams are more fruitful when they’re all in the same location, or at least bond with each other on a regular basis.
  • Using the same or closely-coupled tool set will create less friction in the more expensive switches. This means if I’m going to a different task that’s a different product, or if I’m going to a different task that’s at a different station, if it is in the same tool set and I’m very comfortable, then it’s easier for me to do that context switch. This is where tools like Tasktop are extremely helpful because you can replicate an entire ALM tool in another ALM tool so that nobody has to go out of their comfort zone. This can help throughput.
  • If one of the other stations or context switches that you’re doing has a significant number of manual tasks, it ends up becoming very draining and adds more friction to whether or not you should switch to it.
  • A culture of mentoring and training typically increases throughput. From a brain surgeon perspective, I’m pretty sure after an entire 8 or 12 years of education, they don’t just walk in and start doing surgery on brains. They probably watched a few first. I think what’s interesting is that I find it very unhealthy if a department doesn’t factor training or mentoring into its capacity. The only way you’re going to train the next generation that will run your company is if you treat training as being as important as everything else.