What QA Wolf means by "test coverage"

John Gluck
September 12, 2023

If you're reading this, you've probably seen our promise: QA Wolf gets companies to 80% end-to-end test coverage in 4 months. What you're probably wondering is what we mean by test coverage and, therefore, what 80% looks like for your application.

Michael Bolton (the software tester, not the other one) uses a vague definition of coverage: "how thoroughly we have examined the product with respect to some model."

The reason his definition is so vague is that any given coverage metric can be interpreted objectively or subjectively. Code coverage (e.g., unit-test coverage) is a good example because, on its face, it appears entirely objective. The prevailing definition of code coverage is the percentage of lines of code exercised (and, by omission, not exercised) when a set of tests executes. If you don't exercise every line of code, you don't have 100% code coverage. But what if the unexercised code wasn't essential to the working of the application? And what if the tests that exercised the rest made no meaningful assertions?
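To make this concrete, here's a minimal, hypothetical sketch in TypeScript (assuming a Jest-style test runner; the `applyDiscount` function and its test are invented for illustration, not taken from any real suite). The test executes every line, so a line-coverage tool would report 100%, yet it never checks the computed result:

```typescript
// discount.ts — hypothetical module under test
export function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new Error("percent must be between 0 and 100");
  }
  return price - (price * percent) / 100;
}

// discount.test.ts — every line above gets exercised, so line coverage
// reports 100%, but no assertion checks the arithmetic itself.
import { applyDiscount } from "./discount";

test("applyDiscount runs without crashing", () => {
  expect(() => applyDiscount(100, 10)).not.toThrow(); // happy path
  expect(() => applyDiscount(100, 150)).toThrow();    // validation branch
  // Missing: expect(applyDiscount(100, 10)).toBe(90);
  // A bug in the discount math would still pass with "full" coverage.
});
```

The number says 100%; the confidence it buys is considerably lower.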

And the truth is that the definition of coverage isn't really important — what's important is that your team feels increasingly confident that they can release a product that works every time your customers use it.

For a team to have that confidence, the tests themselves need to be meaningful and valuable. For instance, some teams treat a 100% pass rate as proof that a given feature is ready. When those teams designed their coverage, they made decisions specifically to give everyone confidence that the code was ready for release: every test has been thoroughly code-reviewed, functions as expected, and can detect defects in the application. In other words, the coverage metric has high meaning and, therefore, high value.

How teams determine what's valuable

Most teams use one or more of the models below, though rarely formally. Instead, they rely on intuition and product knowledge to determine which test scenarios will contribute to the overall meaning and value of their coverage.

  • Workflow or Path coverage emphasizes observing the user's state as they journey through different paths in the application. It can help identify anomalies when the application interacts with systems outside the application team's direct control.
  • Functionality or Product coverage, like Workflow coverage, looks at the application from the perspective of users interacting with the UI, but asks whether the executed tests exercised all unique combinations of input and output. This model can help identify where the application has coverage gaps.
  • Requirements coverage measures whether the application functions according to customer expectations. It can help assure that application teams meet their implicit and explicit commitments to their customers.
  • Risk coverage helps teams identify where the application exposes the business and its customers to risk. It helps determine how to prioritize test scenarios within the complete set.

If these models appear to be similar, that's because they are. In fact, they frequently overlap. You wouldn't want to rely exclusively on the units of coverage any one model generates: each model has drawbacks that another can counterbalance, so they are best used together. What's more, these aren't the only models available. Revenue, security, compliance, and recency are additional models teams can consider when deriving a coverage plan.

How QA Wolf helps teams define test coverage

Like all teams, QA Wolf blends these models to derive a meaningful and valuable coverage plan. Generally, though, we lean on Workflow coverage.

What we define as a workflow is derived from the Arrange, Act, Assert (AAA) framework. Assertions are what make your coverage deterministic: QA Wolf expects assertions to pass every time we execute them. A test may end with more than one assertion, but once it starts asserting, it never attempts to Arrange or Act again.

We like the AAA framework because it provides consistency across all tests. It also makes it easier to determine whether a bug surfaced by a test run has already been reported.
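To illustrate, here's a minimal sketch of an AAA-shaped end-to-end test written in TypeScript with Playwright (our use of Playwright here, along with the URL, selectors, and credentials, is a hypothetical example rather than an excerpt from any customer's suite):

```typescript
import { test, expect } from "@playwright/test";

test("customer can add an item to the cart", async ({ page }) => {
  // Arrange: put the application into a known starting state.
  await page.goto("https://shop.example.com");
  await page.getByLabel("Email").fill("user@example.com");
  await page.getByLabel("Password").fill("example-password");
  await page.getByRole("button", { name: "Sign in" }).click();

  // Act: perform the workflow under test.
  await page.getByRole("link", { name: "Blue T-Shirt" }).click();
  await page.getByRole("button", { name: "Add to cart" }).click();

  // Assert: the test ends here. Once asserting begins, the test
  // never Arranges or Acts again.
  await expect(page.getByTestId("cart-count")).toHaveText("1");
  await expect(page.getByText("Added to cart")).toBeVisible();
});
```

Because every test follows the same shape, a failure always points at a single workflow, which makes it easier to match a new failure against bugs that have already been reported.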

We use the AAA framework to identify all the meaningful and valuable workflows in an application, and that set of workflows is our measure of full coverage. At that point, customers may request additional coverage for particular requirements, or more test scenarios covering riskier paths in the application. Once the customer finalizes which scenarios to target, we write the test cases and execute the completed tests once the customer approves them.

Based on this collaboration, our newly acquired understanding of the customer's application, and our experience successfully planning coverage for hundreds of customers, QA Wolf calculates the number of test cases required for the customer's application to reach 80% test coverage in 4 months.

Regardless of the strategies or models used to derive the coverage, at the end of the day, test coverage is the result of executing a set of tests constructed with customer feedback and tailored to the customer's specific needs.

In defining test coverage for yourself, use everything available and use it effectively

Predominantly quantitative coverage models such as code coverage and performance coverage can provide helpful information for determining overall coverage and application health, and we encourage customers to use any and all tools available to them. At the same time, we recognize that many tools require investment, so we meet customers where they are and don't require them to implement any coverage tools on their side.

That said, when using the output of any coverage tool, team leaders should avoid treating the number as an end goal: improving the signal-to-noise ratio is a different goal from attaining 100% coverage. Likewise, tools that force teams into complicated processes or bureaucracy can distract the team from delivering high-quality software.

Familiarity with the various models for deriving coverage can help. The three critical ingredients of a practical test coverage metric are:

  • Expert understanding of the application-under-test
  • Expert familiarity with usage patterns of the application
  • Expert-level experience in planning tests for adequate coverage

At QA Wolf, we collaborate with our customers, who act as subject matter experts, to build our initial understanding of what functionality the application offers and how their customers use it. We then apply our expertise in test planning to write a set of test plans that delivers highly meaningful and valuable coverage.
