Guide to planning meaningful test coverage

John Gluck
December 27, 2023

When contractors build homes, they create a punch list of things to finish. When building end-to-end test automation, your team needs a similar document that lists everything the application can do, so that as you complete a test for each capability, you can mark that item as covered. That document is the test coverage plan.

A good test coverage plan is the starting point for automation and helps you budget and prioritize the test automation effort. It shows what is automated, what has yet to be automated, and what will not be automated, and it exposes the gaps between them. The process is not a secret, nor is it complicated. All it takes is some up-front effort, diligence, and a lot of collaboration.

This outlining process takes you from broad groups to detailed test cases and sets you up for a smooth automation effort. As you progress through the outlining process, you’ll add increasing depth and focus, identify what can and can’t be automated, and prioritize the tests you plan to automate first. As you do so, you’ll check in with the developers, designers, product managers, and other team members to make sure that the test cases are accurate and valuable. The most important thing about the test plan is that it covers everything that the people building the product want it to cover. 

Step 1: Workflow naming and grouping

First, break down your application into workflows and group them. A workflow is a testable path your customers can take through the application. Any time you have to return to an earlier step, you should create a new workflow. For example, an eCommerce site might create a group called “Checkout” and list checking out with each payment or shipping method because you will need to start from the very beginning with an empty cart to test each. The workflows in this case would be “Checkout using Mastercard payment,” “Checkout using PayPal,” etc.  

To name a workflow well, you need to understand its scope. The tests you will create within workflows in a later step will share setup and might build on each other. A good example of a workflow name is “Create, read, update, delete <some entity>,” which would contain four test cases that share the same setup (i.e., arrangement). Another example is “Checkout - Validation messages for the billing address.” Tests within that workflow would exercise all possible validations for the associated billing address form.

The grouping mechanism can be somewhat arbitrary. It’s anything that will help your team organize, such as category, navigational area, etc. 

As far as where to capture this information, you have options. Dedicated test management tools like TestRail or the test editor built into the QA Wolf platform are ideal, but an unordered list in any text editor, a mind-mapping tool, or a spreadsheet can all work. 
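
If you prefer to keep the plan next to your code, a plain data structure works too. Here’s a minimal sketch in TypeScript; the `Workflow` type and its field names are illustrative assumptions, not a convention from any particular tool:

```typescript
// A minimal sketch of a coverage plan captured as plain data.
// The Workflow type and its fields are invented for illustration.
type Workflow = {
  group: string; // organizing bucket, e.g., a category or navigational area
  name: string;  // descriptive, unique workflow name
};

const coveragePlan: Workflow[] = [
  { group: "Checkout", name: "Checkout using Mastercard payment" },
  { group: "Checkout", name: "Checkout using PayPal" },
  { group: "Checkout", name: "Checkout - Validation messages for the billing address" },
  { group: "Entities", name: "Create, read, update, delete <some entity>" },
];
```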

Step 2: Workflow prioritization

Having documented all your workflows, you next want to prioritize what you’re going to automate first, second, and last. You can use a simple decision-making process (a heuristic if you’re nasty).

Identify manually tested workflows

Some workflows just can’t be exercised with automated black-box tests (e.g., some security implementations, third-party integrations with licensing requirements, uncontrolled throttling, and anything where the output is non-deterministic and the range of valid responses is ambiguous).

Other workflows are not worth automating if they are too challenging to automate, infrequently used, or unimportant to the business or the customer. 

You should still track these workflows in a list of manual test cases and look them over with the team at least once a quarter to see whether any of them can and should be automated yet. If so, prioritize that work.

Determine workflow priority

Once you’re confident that a workflow should be automated, you’ll determine a very simple priority: automate immediately or place it into the backlog. Each team will score these factors according to its own needs (one way to turn the ratings into a decision is sketched after the list), but some key criteria are:

  • Security: Does this feature expose customers to security risks?
  • Recency: Is this a new feature?
  • Complexity: How hard will this be to automate?
  • Dependencies: Do other things need to be automated before this one?
  • Revenue Impact: How does this workflow impact the bottom line?
  • Frequency: How often are customers using this workflow?
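
Here is one way to make that scoring concrete. The ratings, weights, and threshold below are all assumptions to calibrate with your own team:

```typescript
// A rough scoring sketch: rate each criterion from 0 (low) to 5 (high),
// weight the ratings, and compare the total to a threshold. The weights
// and the threshold are assumptions; tune them to your product.
type CriteriaRatings = {
  security: number;          // security risk exposure if this workflow breaks
  recency: number;           // newer features rate higher
  complexity: number;        // harder-to-automate workflows rate higher
  unmetDependencies: number; // count of workflows that must be automated first
  revenueImpact: number;     // effect on the bottom line
  frequency: number;         // how often customers use this workflow
};

function decide(r: CriteriaRatings): "automate immediately" | "backlog" {
  if (r.unmetDependencies > 0) return "backlog"; // blocked until prerequisites exist
  const score =
    2 * r.security + r.recency - r.complexity + 2 * r.revenueImpact + r.frequency;
  return score >= 12 ? "automate immediately" : "backlog";
}
```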

As you prioritize, capture the priority in the same tool you use to name your workflows. Our testers create a work ticket called a “coverage request,” which allows us to track this information.

Step 3: Test case naming

The next two steps, test case naming and outlining, should be done in batches of no more than 20% of your test cases at a time. You don’t want to bite off more than you can chew: if you outline tests in January, they could be outdated by March. Working in batches shows progress as quickly as possible and prevents a lot of wasted effort.

When naming the test cases in a workflow, be descriptive and unique. For example, a workflow called “Log into the application from the homepage” would contain test names like “Log in with a good password,” “Log in with a bad password,” and “Log in with a bad username.” Avoid extra filler words (e.g., “Able to log in”). Include happy paths as well as unhappy paths.
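
Here’s what those names might look like in a test file. Playwright is assumed here only because it is a common runner; the pattern applies to any framework:

```typescript
import { test } from "@playwright/test";

// Names mirror the coverage plan: descriptive, unique, one validation each.
test.describe("Log into the application from the homepage", () => {
  test("Log in with a good password", async ({ page }) => {
    // ...steps go here once the outline is written
  });
  test("Log in with a bad password", async ({ page }) => {
    // ...
  });
  test("Log in with a bad username", async ({ page }) => {
    // ...
  });
});
```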

You probably already know this, but it bears repeating here: tests should validate one thing and one thing only. If your list of test case names is getting long, ask yourself if you can break the scope of a test into several tests. The name reflects the scope of the test, so bear that in mind when you name your tests. Shorter tests run faster and are easier to debug and maintain. Should you rescope, evaluate whether you need to regroup or reprioritize some of the rescoped tests. 

We recommend getting your team’s feedback as you progress through the test naming process. Work with your team on sizing the reviews so that they can accommodate your requests and still keep to their schedules. This concentrated effort is a one-time push; you may need to remind your team that while these reviews will continue on an ongoing basis, the requests will become much less frequent once the team reaches the desired coverage level.

Step 4: Test case outlining

Write your test outlines directly into a test file as comment blocks. When the automation work gets going, the coder will be able to pick up the test file and have all the context they need to crank out tests efficiently. 
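
A sketch of what such a comment block might look like; the file name, workflow, account, and steps are all invented for illustration:

```typescript
// File: edit-profile-name.spec.ts (hypothetical example)
//
// Workflow: Account settings - Edit profile name
// Account: test user 2 (named explicitly to avoid data collisions)
//
// Arrange:
//   1. Log in as test user 2
//   2. Clean up: revert any name changes left over from earlier runs,
//      even if several exist, BEFORE creating new data
//   3. Navigate to the "Account settings" page
// Act:
//   4. Fill the "Name" input with a new, unique value
//   5. Click the "Save" button to submit the form
// Assert:
//   6. Assert the success toast appears and the name is edited to the new value
```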

A few notes on writing outlines well: 

Use a style guide: A style guide will define conventions you will use to create a unified language for the steps in your test case. This guide should help you standardize your word choice, making it easier to understand and maintain your tests. More on that below.

Be specific about which accounts to use: We want to avoid data collisions in our test suite, so be mindful of how many test user accounts you need and tell the creators which account to use.

Be very, painstakingly, excruciatingly detailed: Pretend you won’t be the one building the test (because you won’t be forever). Force yourself to provide the detail you’d need were you the one attempting to automate the test from the outline. Be explicit about which setup is required, actions to take, and specific assertions to make. For example: “Assert success toast appears and name edited to new value” instead of “Assert name edited.”

Don’t rely on static data: Call out points where tests will fail if static data is deleted since that can cause unnecessary failures and make it harder to adapt tests to different environments.

Cover clean-up: If a test can pollute application systems with excessive data, explicitly call out clean-up steps in outlines. Put the clean-up step before the test’s actions so that it runs before new data is created, and make the clean-up robust to scenarios where multiple entities have been created.

Establish a style guide

Most UI elements and user actions don’t have official definitions in Merriam-Webster, and colloquial usage can vary. A style guide will help everyone speak the same language. 

First, you’ll want to select a pattern language or DSL. We wrote a blog on why we recommend AAA for this, but if organizational constraints force you to use something else (e.g., Gherkin, describe/it), don’t let that be a showstopper.

The other conventions can be more organic, but these are our standards at QA Wolf. A code sketch showing how these verbs might look in practice follows the list.

Actions naming

  • Click: When a user needs to click on a link, image, or anything else not listed below.

    🚨Warning: Be extremely clear about how you describe click actions. Describe what is being clicked, such as a link, button, or image, any associated description or text, and the expected outcome of that action (for example, “Click the ‘Submit’ button to submit the form data and be taken to the ‘View’ page”).🚨

  • Check: When a user needs to click on a checkbox specifically.
  • Fill: When a user needs to enter data into an input field.
  • Type: When a user needs to emulate a human typing into an input field.
  • Navigate: When a user needs to go from one place in the app to another through means other than a click.
  • Create: When a new entity needs to be made.
  • Add: When an entity, piece of data, etc., is added to a larger parent entity.
  • Remove: When an entity, piece of data, etc., is taken off of a larger parent entity.
  • Delete/Archive: When an entity, piece of data, etc., is removed from the persistence layer. This will depend on the capability of the app.
  • Toggle: When a user needs to interact with a binary toggle; call out whether it’s a toggle-on or toggle-off action to avoid confusion.
  • Set: When a user needs to put data or an entity in a specific state (e.g., set date, set time).
  • Select: When a user needs to choose from a list of options, such as a dropdown menu.
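
As a concrete reference, here’s how those verbs might map to Playwright locator actions. Playwright itself is an assumption, and every page, label, and selector below is invented; substitute your own framework’s calls:

```typescript
import { test, expect } from "@playwright/test";

// The style-guide verbs in practice; all selectors here are hypothetical.
test("style-guide verbs mapped to Playwright actions", async ({ page }) => {
  await page.goto("https://example.com/settings");               // Navigate
  await page.getByRole("link", { name: "Billing" }).click();     // Click
  await page.getByLabel("Email").fill("user@example.com");       // Fill
  await page.getByLabel("Search").pressSequentially("invoice");  // Type (emulates human typing)
  await page.getByLabel("Auto-renew").check();                   // Check (a toggle-on, when it's a toggle)
  await page.getByLabel("Country").selectOption("US");           // Select
  await expect(page.getByText("Settings saved")).toBeVisible();  // assert the expected outcome
});
```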

Finally, ready for automation

Once you have your critical workflows broken into test cases and those cases are fully outlined, you are ready to start automating. A single test outliner starting from scratch can reach this point in about a month. This time is worth the investment, as evidenced by our satisfied customers. 

That said, this exercise gets easier the more times you do it, which puts in-house teams at a disadvantage: they won’t get much chance to practice these methods unless they move from application to application, which may not be possible in organizations with fewer applications.

Our outliners are the best in the business because they get so much practice. Come to us and create a meaningful test coverage plan and automate 80% of your application workflows in just 4 months.
