Why I Stopped Chasing 100% Test Coverage

Nov 10, 2024

Coverage is a vanity metric disguised as a safety net. The real question is whether your tests would catch the bug that wakes you up at 3am.

I spent years chasing coverage numbers. 80%, 90%, then the obsessive push to 100%. What I got was a test suite that was technically thorough and practically useless: full of tests that verified implementation details rather than behavior, that broke with every refactor, and that gave me false confidence in the parts that mattered least.
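
Here's a minimal sketch of that trap (the `Cart` class and both tests are hypothetical, invented for this post): the first test pins an internal data structure and fails on any refactor, even a correct one; the second verifies the behavior callers actually depend on.

```python
# Hypothetical Cart, purely for illustration; not from any real codebase.
class Cart:
    def __init__(self):
        self._items = {}  # sku -> (unit_price, quantity)

    def add(self, sku, unit_price, quantity=1):
        _, count = self._items.get(sku, (unit_price, 0))
        self._items[sku] = (unit_price, count + quantity)

    def total(self):
        return sum(price * count for price, count in self._items.values())


# Brittle: pins the internal dict layout. Refactor _items to a list of
# dataclasses and this test fails with no bug present.
def test_add_implementation_detail():
    cart = Cart()
    cart.add("sku-1", 300, quantity=2)
    assert cart._items == {"sku-1": (300, 2)}


# Robust: checks the contract callers rely on. It passes under any
# internal representation that computes totals correctly.
def test_add_behavior():
    cart = Cart()
    cart.add("sku-1", 300, quantity=2)
    cart.add("sku-2", 150)
    assert cart.total() == 750
```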

The shift happened when I started asking a different question: what could go wrong in production? Not "what lines haven't been executed," but "what failure modes does this code have?"

High-value tests:

  • Test the edges, not the happy path (the happy path rarely breaks; see the sketch after this list)
  • Test behavior, not implementation
  • Test the contract, not the internals
  • Test what you're actually afraid of
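
To show what that weighting looks like in practice, here is a minimal pytest sketch built around a hypothetical `clamp_timeout` helper (the helper and its limits are invented for illustration): one happy-path test, and the rest of the effort spent on boundaries, malformed input, and the failure modes you actually fear.

```python
import pytest


# Hypothetical helper, invented for this example: clamps a user-supplied
# timeout (in seconds) into a safe range and rejects non-numeric input.
def clamp_timeout(value, low=1, high=300):
    timeout = float(value)  # raises ValueError/TypeError on garbage
    if timeout != timeout:  # NaN is the one float that isn't equal to itself
        raise ValueError("timeout is NaN")
    return min(max(timeout, low), high)


# The happy path: one test is plenty, because it rarely breaks.
def test_in_range_value_passes_through():
    assert clamp_timeout(30) == 30


# The edges: boundaries and the weird values a flaky upstream
# service or a hostile user will actually send.
@pytest.mark.parametrize("value, expected", [
    (0, 1),        # below the floor clamps up
    (1, 1),        # exact lower boundary
    (300, 300),    # exact upper boundary
    (10**9, 300),  # absurdly large clamps down
    ("15", 15.0),  # numeric strings are accepted
])
def test_edge_values_clamp(value, expected):
    assert clamp_timeout(value) == expected


# What I'm actually afraid of: garbage input crashing the caller.
def test_garbage_input_is_rejected():
    with pytest.raises((ValueError, TypeError)):
        clamp_timeout("soon")
    with pytest.raises(ValueError):
        clamp_timeout(float("nan"))
```

Notice the ratio: one test for the path that rarely breaks, seven cases for the edges that do.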

A 40% coverage suite that tests your most complex, most-changed, most-integrated code paths is worth more than 100% coverage that's mostly exercising getters and setters.

Tests are documentation. Write the ones that tell the story of what this system does and what must never break.