10% code coverage
The Road Less Perfect - a newsletter by madewithlove


Unit tests are at the heart of contemporary engineering. There's no agile software development without them. Automated tests for every part of your code make change trivial because they will warn you when you accidentally break something. If all tests are green, the code works.

Since the nineties, test automation has changed how we build our products. Legacy code is often defined as "code without tests", indicating how much we rely on test automation to confidently change our products.

Any software developer worth their salt will stress the importance of code coverage. If 80% of your code is covered, that's good. But more is always better.

Yet one of our most common discoveries during technical due diligence audits is that not all code is well-tested. A lot of our clients' products lack test automation.

It's easy to see why. Startups always rush to build their first prototype. They have limited resources and want to maximize the time spent on building features and wooing customers. That's necessary in those early days.

But as a result, a lot of them find themselves in a tricky situation. Their system works, but change is hard because every small update breaks something.

One of our customers found themselves in this exact situation. Their product was a big data tool installed on the customer's own infrastructure. They'd just landed a big client and were eager to prove themselves.

The first release on this client's servers went OK, but every subsequent update contained a new bug. Every new feature set off a trial-and-error chain reaction of patches. One of the most painful symptoms was regression bugs: issues that had supposedly been fixed resurfaced weeks later.

When I joined their team, I found they already had quite a few tests. Unfortunately, these were heavy integration tests rather than fast unit tests. Running all of them took up to four hours! Unsurprisingly, developers neglected them.
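The difference matters because a unit test exercises pure logic in-process, with no database, network, or deployed system behind it. As a minimal sketch (the function and its behavior are hypothetical, not from the client's codebase), a fast unit test can look like this:

```python
import unittest


# Hypothetical pure function standing in for a small piece of business logic.
def apply_discount(price_cents: int, percent: int) -> int:
    """Return the discounted price in cents, rounded down."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price_cents * (100 - percent) // 100


class ApplyDiscountTest(unittest.TestCase):
    # Each test runs in-process in microseconds: no infrastructure needed.
    def test_full_discount_is_free(self):
        self.assertEqual(apply_discount(1000, 100), 0)

    def test_rounds_down_to_whole_cents(self):
        self.assertEqual(apply_discount(999, 50), 499)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(1000, 150)
```

A suite of thousands of tests like this runs in seconds (for example via `python -m unittest`), which is why developers actually run them before every push, unlike a four-hour integration suite.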

The perfect solution was obvious. Stop developing features and start writing tests. Make those slow tests lightning-fast and push code coverage to at least 80%. From a technical point of view, this was the way to better quality.

But from a business point of view, this would be sabotage. Now that they had finally landed a lucrative customer, would the engineers stop building features? Madness. That's a near-guaranteed way to lose the last bit of enterprise goodwill.

So, we tried another, more pragmatic approach.

  • We ran all the slow tests overnight. The suite still took hours, but at least we got feedback every 24 hours.
  • Even though there were no unit tests yet, we set up a CI pipeline that ran the unit test suite on every push to version control. That made adding and running new unit tests easy.
  • Finally, we agreed to write a test reproducing every customer-reported bug. That way, regression issues would be caught automatically.
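The third step, proving each reported bug with a test, might look like this. The bug and the function are hypothetical stand-ins for the real product code; the pattern is what matters: first write a test that fails the way the customer's report describes, then fix the code so the test passes and keep the test forever.

```python
import unittest


# Hypothetical parser standing in for the product code under test.
# Assume a customer reported that a blank quantity field crashed the importer.
def parse_quantity(field: str) -> int:
    """Parse a quantity field from an uploaded report; blank means zero."""
    field = field.strip()
    if not field:
        # The buggy version called int("") here and raised ValueError.
        return 0
    return int(field)


class ReportedBugRegressionTest(unittest.TestCase):
    """Pins the fix for a (hypothetical) customer ticket about blank fields."""

    def test_blank_field_parses_as_zero(self):
        self.assertEqual(parse_quantity("   "), 0)

    def test_normal_field_still_works(self):
        self.assertEqual(parse_quantity("42"), 42)
```

Because the test encodes the customer's exact complaint, the same bug can never silently ship twice: if a later change reintroduces it, CI fails immediately.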

Of course, this wasn't a perfect approach. The team needed to adopt a mindset that favored test automation; if they didn't, nothing would change. And there was little incentive to speed up the slow integration tests.

However, it turned out that the existing integration tests covered the vital part of their product's core. Every time one of these broke overnight, we prevented shipping another bug. By proving each customer-reported bug with a test, we increased the code coverage of the most brittle parts of the system first.

I don't think this customer ever made it to 80% code coverage. But they went from 0% to 10% overnight, and that made a real difference. It enabled them to roll out the features that mattered to their customers.

Stopping feature development to increase code coverage is not a viable strategy. It's the testing equivalent of rewriting your application from scratch.


Running the tests you already have and proving customer-reported bugs with an automated check is the first pragmatic step to better quality.
