Optimisation is an obsession of modern workplaces. But despite a glut of technology for streamlining and simplifying work processes, the average office is still plagued with inefficiencies. There’s a lot to be said for shaving a few minutes off a routine task, especially when those extra minutes are being eaten up by something not working as it should. But how long can you spend trying to make a routine task more efficient before you hit a point of diminishing returns?

Fortunately, the ever-brilliant XKCD already worked it out:

According to XKCD, the formula goes roughly like this: if you perform a task N times per day, and an optimisation would shave T off each repetition, you can spend up to N × T × (the number of days in five years) making that optimisation and still break even over five years.

So if you perform a task daily and you want to shave one minute off the time it takes, you can spend up to one day making those improvements and in five years you'll break even on time saved. If you perform this task five times a day, it's worth spending up to six days to improve it. But if you only perform it once a month, it's not worth spending more than an hour trying to make it more efficient.

For an individual, spending a full day on something in order to scrape back those extra minutes over five years might not seem like an intuitive or efficient use of time. But the calculation changes dramatically in a corporate setting, where routine inefficiencies can impact dozens or even hundreds of people multiple times a day.

Let's rederive the formula to account for corporate life:

1 minute/event * 1 event/person/day * 1 person * 240 work days/year * 1 year to positive ROI = 240 minutes = 4 hours

In a corporate setting, if you can improve an incident that affects one person, once per work day, and save one minute per event, it's worth spending up to four hours fixing that *per person it affects*.
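The arithmetic above can be sketched as a tiny helper function (the name, parameters, and defaults are illustrative, not from any library):

```python
# Break-even formula from above: minutes saved per event, times events
# per person per day, times people affected, times work days, times years.
def worthwhile_minutes(minutes_saved_per_event, events_per_person_per_day,
                       people, work_days_per_year=240, years_to_roi=1):
    """Maximum minutes worth spending on a fix before the ROI goes negative."""
    return (minutes_saved_per_event * events_per_person_per_day
            * people * work_days_per_year * years_to_roi)

# One minute saved, once a day, one person, one-year break-even:
print(worthwhile_minutes(1, 1, 1))  # 240 minutes, i.e. 4 hours
```

Scaling up the `people` argument is what makes the corporate numbers get big so quickly.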

For example: you are on a team of five developers. There's a flakey test that fails one tenth of the time. Your test suite takes five minutes to run. When the test flakes, you lose five minutes hitting `rerun`.

Assuming six hours of productive work per day and that each developer is running the test suite on average three times per day, the calculation looks like this:

5 minutes/test run * 1/10 chance of flake * 3 test runs/dev/day * 5 devs * 240 work days/year * 1 year to positive ROI = 1800 minutes = 30 hours = 5 work days (assuming 6 productive hours/dev)

So even if the test fails only one out of every ten runs, your team is still losing a full five days of productive work per year. This means it's worth a single developer spending a whole week chasing down the issue.
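As a sanity check, here is the same calculation in code (the variable names are just for illustration):

```python
minutes_per_suite_run = 5     # full test suite run time
flake_rate = 1 / 10           # test fails one run in ten
runs_per_dev_per_day = 3
devs = 5
work_days_per_year = 240
productive_hours_per_day = 6

minutes_lost_per_year = (minutes_per_suite_run * flake_rate
                         * runs_per_dev_per_day * devs * work_days_per_year)
work_days_lost = minutes_lost_per_year / 60 / productive_hours_per_day
print(minutes_lost_per_year, work_days_lost)  # 1800.0 minutes, 5.0 work days
```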

And this is with a relatively aggressive breakeven time of one year. If your company is post product/market fit, it's easier to justify a two or three year breakeven point, and that's when the maths starts producing ludicrous numbers. The example above assumed a "two pizza" team with its own repo and test suite; if you instead assume a monorepo with 10 devs pushing:

5 minutes/test run * 1/10 chance of flake * 3 test runs/dev/day * 10 devs * 240 work days/year * 3 years to positive ROI = 10800 minutes = 180 hours = 30 work days (assuming 6 productive hours/dev)

In this scenario, it is worth a single developer dedicating *six work weeks* to sort out one flakey test.
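Plugging the monorepo numbers into the same arithmetic (again a sketch, with illustrative names):

```python
minutes_per_suite_run = 5
flake_rate = 1 / 10
runs_per_dev_per_day = 3
devs = 10
work_days_per_year = 240
years_to_roi = 3
productive_hours_per_day = 6

minutes_lost = (minutes_per_suite_run * flake_rate * runs_per_dev_per_day
                * devs * work_days_per_year * years_to_roi)
work_days_lost = minutes_lost / 60 / productive_hours_per_day
print(work_days_lost)  # 30.0 work days, i.e. six work weeks
```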

This maths doesn't apply just to test suites, nor just to developers. Every modern office worker deals with frustrating frictions and inefficiencies. In a 400 person company, you could scrape back more than 130 work days over three years if every person could save just ten seconds on email admin every day. The same can be said for clearing irrelevant Slack notifications, scheduling meetings, and wrangling expense reports. How many days of productive work could be saved if every virtual meeting didn't start with thirty seconds of hang-on-is-my-mic-working, especially in the era of COVID-19?
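A quick check on the email-admin figure, assuming the same 240-work-day year and six productive hours per day used in the earlier examples:

```python
seconds_saved_per_person_per_day = 10
people = 400
work_days_per_year = 240
years = 3
productive_hours_per_day = 6

hours_saved = (seconds_saved_per_person_per_day * people
               * work_days_per_year * years) / 3600  # seconds -> hours
work_days_saved = hours_saved / productive_hours_per_day
print(hours_saved, work_days_saved)  # 800.0 hours, roughly 133 work days
```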

The insight here is that the cost to the company of friction and bad tools is O(n) in the number of employees, while the cost of fixing software is O(1): it stays the same no matter how many people benefit. With enough beneficiaries, *all* changes become worthwhile at some scale, and that scale is not as out-of-reach as it might first seem.

A company of 100 engineers should probably have 10-20% of the team allocated to just internal tools and making things go faster. The calculation gets even easier when buying a solution is possible, because you can realise the benefit immediately rather than waiting for devs to ship the improvement. But in practice, it's likely that most technology companies under-allocate both developer time and money to tooling and reducing friction, both inside and outside of engineering.

So next time you're debating whether to spend your valuable productive hours sorting out a flakey test - or reorganising mailing lists, or researching better videocalling software - and asking yourself "is it worth it?", remember: even for a small team, the answer is probably yes.