A well-built test suite runs faster, gives more reliable results, and makes long-term maintenance easier and cheaper. After building thousands of end-to-end tests, we've picked up a few tricks for making them fast, stable, and accurate. So come along and we'll teach you how to build automated tests the QA Wolf way.
When you’re ready to hire a QA provider or invest in testing tools, focus on what you really need and what you’re really paying for. The goal of automated testing is to give developers pass/fail test results with detailed bug reports as often as they need them, and as quickly as possible. To get that, someone has to build, run, and maintain the tests — but labor and infrastructure aren’t what help you ship fast and bug-free. Our industry-first, pay-per-test model is the only pricing system that charges you for coverage, the true value you receive.
One of many things that makes QA Wolf different from conventional QA and testing solutions is that we charge by the test — not by the test run. And that brings up an important question: How do you define a test? There are several frameworks out there for outlining end-to-end tests, but two of the most popular are Arrange-Act-Assert (AAA) and Given-When-Then (GWT). Let’s look at the differences between them and why using AAA gives the engineering teams we work with better coverage and faster feedback.
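To make the comparison concrete, here’s a minimal sketch of an AAA-structured end-to-end test, assuming a Playwright-style TypeScript test. The URL, selectors, and product names are hypothetical, and the trailing comment shows how the same scenario reads when phrased as Given-When-Then.

```typescript
import { test, expect } from "@playwright/test";

test("shopper can add an item to the cart", async ({ page }) => {
  // Arrange: get the app into a known starting state
  await page.goto("https://shop.example.com"); // hypothetical URL
  await page.getByRole("link", { name: "T-shirts" }).click();

  // Act: perform the single behavior under test
  await page.getByRole("button", { name: "Add to cart" }).first().click();

  // Assert: verify the observable outcome
  await expect(page.getByTestId("cart-count")).toHaveText("1");
});

// The same scenario in Given-When-Then terms:
// Given a shopper viewing the T-shirts category,
// When they add an item to the cart,
// Then the cart badge shows one item.
```

Both frameworks describe roughly the same three phases; they differ mainly in vocabulary and in how scenarios get scoped and counted, which is where the pricing question comes in.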
As you might imagine, we talk to a lot of engineering leaders about QA and end-to-end testing. Something we hear all the time is how difficult it’s been to scale their automated test coverage beyond a few key workflows. It’s not uncommon for a CTO or VP of engineering to have a 2-year vision to get 80% test coverage. While every company differs in its tech stack, testing needs, and maturity, there are three obstacles that companies of all sizes share with us: human capacity, test complexity, and testing infrastructure.
Whether you're a small start-up, a large enterprise, or a company that partners with businesses of all sizes like we do, establishing test-writing standards for the QA team ensures that your entire application is covered, the tests are well-structured, and the test suite is more resilient to code changes. We’ve decided to use the Arrange, Act, Assert (AAA) framework because it's flexible, specific, and efficient.
There are lots of reasons why companies struggle with automated test coverage: limited expertise, competing priorities, infrastructure cost—the list goes on. But the number one reason is ballooning test maintenance. Many of our clients choose to work with us after realizing they simply can’t investigate failures and do the necessary maintenance fast enough to support their own release velocity. And that got us thinking: Exactly how long can a test suite survive on its own while developers continue to ship new code? Read on to find out.
As startup operators, we’ve learned that there is often a "story behind the story" that goes untold. In this post, we'll go back in time and cover the winding path that led to our first $1.1 million in funding from Sahil Lavingia, Naval Ravikant, and pre-seed lead Notation Capital.
One of the hardest things for a software development team is to increase and sustain velocity. Often, shipping more or shipping faster means increasing your team’s overall capacity by hiring additional developers. But with a recession looming and many companies freezing their hiring plans, savvy teams can look at other levers they have, like removing bottlenecks in the QA process. Here are five cost-effective changes you can make.
When automated tests are in working order, they provide engineers with rapid feedback on features that are still in development. When they're flaky — when they repeatedly fail without finding a bug — they lose their value to developers and become a liability to the team. And maintaining those complicated, brittle end-to-end tests pulls developers off their roadmap. When teams off-load testing to QA Wolf, they get all of the signal and none of the noise: only human-verified bug reports get passed on, while flaky tests are triaged and fixed in the background.
Engineering leaders often tell us that developers should write and maintain end-to-end test suites because developers are responsible for code quality. We completely agree when it comes to whitebox tests, because whitebox tests influence how a developer writes and structures their code. But when developers have to own blackbox testing as well, productivity is usually much lower and coverage levels fall far below average. In this post, we talk about how whitebox testing improves code quality, and why effective teams offload blackbox end-to-end tests to dedicated experts.
Starting QA Wolf, we didn’t expect to pioneer a whole new business category. But our evolution from an open source command line interface to a full-service QA solution actually happened very naturally as we looked deeper and deeper into the problem at the heart of test automation.
When teams have high end-to-end test coverage they can deliver more value to customers, capture more of the market, and solve problems more efficiently. The value of E2E test coverage isn’t just in spotting regressions—it’s in the safety, security, and confidence to make the big moves that drive successful companies forward.
What it means, why people like it, when it happens, who does it, how to do it, and where things tend to go sideways
Since 2009, the Test Pyramid has given engineering teams an excuse to under-invest in end-to-end coverage. The feeling was that only a few critical areas were worth the time, effort, or expense. But new services and technologies have lowered those barriers, and the teams that take advantage of them ship faster, provide a better experience, and ultimately are more competitive.
The big idea in “shifting left” is moving QA earlier in your development process to find bugs before they impact the roadmap or the customer experience. Many companies take that to mean developers should write E2E tests, but we can tell you that shifting QA onto your developers costs more in the end.
If you don't have automated tests yet, getting started can feel daunting. In this guide, we explore three common pitfalls companies face and show how to avoid them.