If you’ve been building software long enough, you’ve almost certainly been in this meeting: after a string of escaped bugs, or just one major one, company leadership wanted to know what we were going to do about the quality problems.
The response: More QA. More people, spending more time creating, executing, and maintaining regression tests to find bugs after they appear — hours and hours spent investigating test failures, updating outdated tests, and filing bug reports.
The more you test, the more you find; the more you find, the more you fix — and that’s all well and good. But you’re not getting to the root of your quality problems, which is preventing bugs from occurring at all.
Teams that sincerely want to improve the quality of their products need to shift their focus from bug detection to bug prevention: identifying the places where bugs are introduced and stopping them from getting into the code in the first place.
Edsger Dijkstra said, “If debugging is the process of removing software bugs, then programming must be the process of putting them in.” To code is, inevitably, to create bugs. If your team consistently releases bugs to production, they probably have a code quality issue. The big, glaring bugs that break the user experience and cost the company money start as hidden errors, logical flaws, and inefficient code segments. These would be prevented by a culture that prioritizes white-box unit and integration tests.
By scrutinizing the internal logic and structure of the code, white-box tests identify issues at the source so that developers can fix them early. When practicing test-driven development (TDD), the white-box tests identify mistakes as the code itself is being written. Adding white-box tests to your codebase is like constructing your building with fireproof materials.
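To make that concrete, here’s a minimal sketch of the kind of boundary bug a white-box unit test catches at the source. The function, its discount rule, and the threshold are all invented for illustration, not taken from any real codebase:

```python
# Hypothetical example: a white-box test exercises the internal logic
# of a pricing function at its exact boundary. Prices are in integer
# cents to keep the arithmetic exact.

def apply_bulk_discount(quantity: int, unit_price_cents: int) -> int:
    """Return the total in cents, with 10% off for orders of 10 or more."""
    total = quantity * unit_price_cents
    if quantity >= 10:  # an earlier draft used `> 10`, silently excluding exactly-10 orders
        return total - total // 10
    return total

# The test probes the threshold directly -- the kind of internal edge
# case a black-box regression suite might never happen to hit.
assert apply_bulk_discount(9, 100) == 900   # below threshold: no discount
assert apply_bulk_discount(10, 100) == 900  # at threshold: discount applies
assert apply_bulk_discount(11, 100) == 990  # above threshold
```

Written TDD-style, the at-threshold assertion fails the moment someone types `>` instead of `>=`, so the mistake never leaves the developer’s machine.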
Of course, the best-laid plans will still go awry. While white-box tests significantly reduce the likelihood of small mistakes becoming major bugs, nothing is ever 100%. There’s no way to be completely sure a developer's code is error-free. Poor communication, application complexity, and changing requirements can all lead to unforeseen user interactions, incorrect or incomplete functional specifications, and integration issues. And that’s where reactive, black-box regression testing comes in.
Black-box tests are like fire alarms: They don’t prevent fires from starting, but they let you know when they’re happening.
Many well-intentioned teams neglect unit testing and increase end-to-end regression testing, thinking that an alarm will prevent catastrophe. The problem with that line of thinking is that you’re constantly in reaction mode; you’re constantly putting out fires instead of building new features.
And, as Dijkstra also observed: “Testing can only prove the presence of bugs, not their absence.” A team needs to maintain automated regression tests for at least 80% of their workflows to be confident that an alarm will go off if new code causes a bug.
Bugs are inevitable, but if all your team does is race from one blaze to the next, you need to treat the underlying quality problems in your tinderbox codebase first.
Ultimately, developers own quality. They have a responsibility to build white-box tests that prevent mistakes in their code, and they have a responsibility to fix bugs discovered during regression testing before releasing to production.
Being responsible for fireproofing the building doesn’t mean they need to be responsible for monitoring the alarm system too. That can be QA Wolf: we build, run, and maintain an automated regression suite for 80% of your workflows and provide 24-hour bug reporting. The goal is to give developers time to focus on the true source of quality: the codebase.