Until recently, if you wanted QA services, there were only two pricing models: pay by the hour, or pay by the test run.
Fixed hourly rates offer the most flexibility. Teams can get QA services on-demand and only pay for the work they need done. While there’s lower financial risk with contractors who can start and stop as needed, there’s an extra administrative burden in tracking hours and paying variable invoices. And make sure you plan your requests with enough lead time to avoid “rush charges” or overtime fees.
Examples: Tricentis, Qualitest, QualityLogic
Like cell phone plans from the year 2000, the pay-per-run model charges a fixed fee for a set number of test runs per month, which typically (but not always) includes test maintenance and bug reporting. It’s simple to understand and can be quite cost-effective for low usage or small test suites. But it discourages thorough testing, because teams try to minimize the cost of running their test suite. Pricing can also be unpredictable: pay-per-run vendors often charge premiums for exceeding your monthly limit, and up-charges for additional capabilities like parallelization.
Examples: Cypress Cloud, Rainforest QA, Testim
While these models never really aligned the goals of the customer and the incentives of the provider, they worked well enough. When the expectation was that only 20–30% of user flows would have automated test coverage and most testing would be done manually at the end of a project, you were less likely to run up against your billing limits.
Today, as teams target 80%+ automated test coverage, and shift to continuous testing and continuous delivery, the older models make it expensive to scale, and complicated to predict what you’ll be paying month to month.
At QA Wolf, we charge a set price per test per month. In that fee, we include everything that’s involved in the QA process: Test creation, the infrastructure to run them, and 24-hour triaging, maintenance, and bug reporting.
We define a test using the Triple-A framework: arrange, act, assert. Our narrowly focused tests assert that one event triggered one result, a much different approach from that of other QA vendors, who typically create large, monolithic flows. We do this because our goal, and your value, is finding bugs fast. With narrow tests, you know exactly what failed and where. With large tests, it takes much longer to get a clear view of what happened.
As we’ll talk about next, it’s an approach that aligns our interests directly with our customers’, and delivers comprehensive coverage for roughly half the cost of an in-house resource.
The main reason we charge by the test is so the cost reflects the real value we provide to our customers, which is the pass/fail results that give you confidence to ship and the bug reports when there’s an issue. With each automated test checking a specific piece of functionality or aspect of the user experience, the price is tied directly to the coverage that you receive, instead of the labor and infrastructure costs of running them.
Pay-per-run vendors might argue that bugs are discovered when the test runs, so your costs should go up the more you run your tests. But all that does is penalize teams for continuous testing and continuous delivery. By making it more expensive for teams to test as they build, pay-per-run vendors promote waterfall-style development where testing only happens at the end of a project when bugs are harder and more expensive to fix.
Hourly billing is even more disconnected from the value you receive as an engineering team. Whether a vendor works one hour or 100 ultimately doesn’t matter — what matters is catching bugs early and often so that developers can ship faster without painful and expensive bugs.
The most expensive part of automated testing is the constant triage and maintenance. 7.5% of test runs fail, so if it takes 30 minutes to triage each failure, a suite of 300 tests running 3 times a day generates more than 30 person-hours of triage work daily. At $50/hour for a QA engineer, that’s $438,000 a year spent investigating failures, not building new coverage.
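The arithmetic behind that estimate can be sketched in a few lines (assuming a 260-workday year, which the $438,000 figure implies; the other numbers come straight from the example above):

```python
# Back-of-envelope triage cost for a flaky test suite.
tests = 300               # tests in the suite
runs_per_day = 3          # full-suite runs per day
failure_rate = 0.075      # 7.5% of test runs fail
triage_minutes = 30       # time to investigate one failure
hourly_rate = 50          # $/hour for a QA engineer
workdays_per_year = 260   # assumption: 5-day work weeks

failures_per_day = tests * runs_per_day * failure_rate         # 67.5 failures
triage_hours_per_day = failures_per_day * triage_minutes / 60  # 33.75 hours
annual_cost = triage_hours_per_day * hourly_rate * workdays_per_year

print(round(annual_cost))  # 438750
```

Note that none of this budget buys new coverage; it only keeps the existing suite on life support.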
Since we charge by the test, our incentive is to create resilient, reliable tests that don’t flake, which minimizes what we spend on maintenance. An hourly contractor, on the other hand, makes more money from broken tests and is incentivized to work slowly, which blocks developers from testing and kills their velocity.
Charging by the test simplifies the pricing model for both QA Wolf and you, giving you transparency and predictability for accurate budgeting. Your investment in QA scales directly and proportionally with the value you receive, because it rises and falls with your decision to automate more or fewer user flows.
Vendors that charge for test runs never make it easy to understand what you’ll be paying. There are premiums for running more than a few tests in parallel, which you can only avoid by extending your QA cycle and slowing down your developers, or by testing only a small fraction of your application with each run.
And with an hourly contractor, you’ll be charged every time you make changes to the product and need to update the tests. And you won’t always know which tests — or how many — will need work until the product is ready to ship. That’s a huge problem for teams that want to move fast so they can test and learn how real users engage with their features.
The per-test pricing model lets your roadmap define your QA budget so you can predict what the current year and upcoming year will look like. If you’re not sure how many tests a new feature will need, you can compare the size and complexity to something you already have — or just ask us to help you scope out a testing strategy.
An even easier rule of thumb is that, on average, a company needs about 30 tests per engineer on the team. If you’re planning to hire three more engineers, you can confidently estimate needing 90-ish additional tests.
When you pay by the hour, you really have no idea how long it’s going to take to build test coverage for the new features you’re planning. The complexity of the tests will vary, as will the skills of the QA engineer doing the work. QA Wolf doesn’t charge for new test set-up: our goal, and our incentive, is to get your tests running (and running well!) as quickly as possible.
You might be wondering why the pay-per-test model isn't more widespread in the QA industry. The answer lies in the design and architecture of the QA Wolf platform, which was built from the ground up with per-test pricing in mind.
End-to-end tests are extremely resource-intensive, but our platform was designed and built for large-scale computing. We have a highly optimized and efficient back-end system that lets us spin up and manage many thousands of Docker containers for parallel test runs while keeping our prices down, savings we pass on to our customers.
QA Wolf's test authoring tools make it easy for us to write reusable, componentized test code, and isolate only the unique tests that we charge for. Since conventional testing solutions were designed for per-run pricing, the tests are typically monolithic which makes the separate test steps difficult to isolate, and the tests harder to maintain overall.
When you pay by the number of tests, your interests are fully aligned with ours: high-quality coverage, fast. We have no incentive to waste your time and spend your money building complex, unstable, or unreliable tests. We have an SLA that strictly defines our response time and makes sure you have the test coverage you need to ship confidently and bug-free. Our Triple-A test framework clearly and concisely describes each test in your suite, and our open-source Playwright code is accessible any time you want to audit our work.
So let’s talk! Whether you’re doing QA in-house or paying an outside vendor, it never hurts to get a quote and see what QA Wolf can do for you.