End-to-end testing 101

Lauren Gibson
July 8, 2025

End-to-end testing confirms that your app works the way a user expects

Testing end-to-end (E2E) means running through your product the same way a user would. You log in, click around, enter data, trigger workflows—whatever real humans do with your app, from their perspective. E2E tests are unique in that they test the rendered UI and don’t have access to (or awareness of) the underlying code the way unit and integration tests do. 

A well-built set of E2E tests covers 80% or more of the critical paths users rely on most. Each of these tests is necessary because your app doesn’t just log users in or show notifications; it handles hundreds of features and flows. And most of those require multiple test cases to confirm they work and keep working with every release.

Why test end-to-end?

Unit and integration tests are useful but narrow. Even a small feature might rely on multiple components, external APIs, databases, and third-party integrations. You can and should verify each piece on its own to confirm that the feature functions as intended—but only E2E testing tells you if it all works together from the user’s perspective.

When should E2E testing happen? 

As early and often as possible. It’s more efficient and less costly to catch bugs during development than after release. Like fixing faulty wiring before the drywall goes up. 

Who does the testing? 

It varies by team. Some teams have in-house QA specialists. Others have their product engineers handle testing directly. In some cases, non-technical teams like Customer Support or Design take on testing using no-code/low-code tools. Many teams choose to outsource testing entirely, either to hourly contractors or full-service QA providers like QA Wolf. 

Who owns testing has a direct impact on whether teams decide to offload. When developers are responsible, testing often takes a backseat to shipping features. When QA engineers manage it, they may struggle to keep up with rapid product changes. And when testing falls to non-technical roles, they are often limited by the tools and expertise available. Teams choose to offload automation for a few reasons: 

  • There’s a lot to cover: Even a basic application has dozens of critical user flows. Add user roles, permissions, error states, and third-party dependencies, and the number of tests grows quickly. Building and validating them all takes time that most teams don’t have. 
  • Tests don’t maintain themselves: Every product update, UI tweak, or backend change can break test logic. Maintaining coverage means reviewing failures, rewriting selectors, and updating logic, usually without dedicated QA headcount. 
  • Running QA internally is costly: Beyond hiring and onboarding, teams need CI/CD infrastructure, parallel environments, device coverage, and someone to triage failures. The overhead adds up fast. 

👉 See what an in-house QA team really costs

How do you write E2E tests? 

Start with these two things: 

  • Test matrix: A list of every user flow that should be verified. Think add to cart, change password, delete account.
  • Test scripts: Step-by-step instructions to follow. Click this, enter that, submit form.

If you’re doing manual testing, those instructions become checklists for testers. If you’re automating, they become the blueprint for code that simulates each action and confirms the expected outcome. Once you have those two things in place, you can decide whether tooling makes sense.
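To make that concrete, one way to capture a test matrix and its scripts is as plain data that either a human or a framework can execute. This is a minimal sketch in TypeScript; the flows, selectors, and values below are illustrative, not drawn from any real product:

```typescript
// A test matrix as data: each flow is a named list of steps.
// Selectors and URLs here are hypothetical placeholders.
type Step = {
  action: 'goto' | 'click' | 'fill' | 'expectText';
  target: string;
  value?: string;
};

type Flow = { name: string; steps: Step[] };

const testMatrix: Flow[] = [
  {
    name: 'add to cart',
    steps: [
      { action: 'goto', target: '/products/blue-t-shirt' },
      { action: 'click', target: 'button#add-to-cart' },
      // Always end by confirming the outcome, not just that clicks happened
      { action: 'expectText', target: '#cart-count', value: '1' },
    ],
  },
  {
    name: 'change password',
    steps: [
      { action: 'goto', target: '/settings/security' },
      { action: 'fill', target: 'input[name=newPassword]', value: 'hunter2!' },
      { action: 'click', target: 'button#save' },
      { action: 'expectText', target: '.toast', value: 'Password updated' },
    ],
  },
];

// Rendered as a checklist, the same data serves manual testers.
const checklist = testMatrix.map(
  (flow) => `${flow.name}: ${flow.steps.length} steps`
);
console.log(checklist.join('\n'));
```

The same data drives both modes: render it as a checklist for manual testers, or feed it to an automation framework that maps each action to a browser command.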

What tools do you need, if any? 

If your team is automating, you’ll need a testing framework or tool. Options fall into a few buckets:

  • Coded frameworks like Selenium, Cypress, and Playwright offer the most flexibility. Selenium and Cypress can struggle with more modern applications, but Playwright handles them well. For native mobile apps, mobile-specific frameworks like Appium, XCUITest, and UI Automator are often the best fit, though they require ongoing maintenance to stay reliable.

👉 Learn when to use Appium, XCUITest, or UI Automator and what tradeoffs to watch for.

  • Low-code tools like Katalon, Mabl, and Testim are easier to adopt, especially for non-technical teams. But they come with tradeoffs: because test logic is abstracted from your codebase, it’s nearly impossible to model complex workflows. And since these tools are proprietary, you’ll likely be unable to export or reuse tests elsewhere.
  • Full-service QA providers like QA Wolf handle the entire test lifecycle for you — writing, running, and maintaining automated tests. This gives your team reliable coverage without the overhead of managing frameworks, infrastructure, or flaky tests in-house. It’s especially useful for teams that need to scale QA without expanding headcount.

Whether you run tests in parallel or sequentially, you’ll need supporting infrastructure like containers, clean environments, and scheduling logic to manage test execution. Without it, even a small test suite could take hours to complete.
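As one example of that scheduling logic, Playwright exposes parallelism, retries, and environment settings through its config file. The sketch below uses illustrative values you would tune for your own suite:

```typescript
// playwright.config.ts — values here are illustrative, not recommendations
import { defineConfig } from '@playwright/test';

export default defineConfig({
  fullyParallel: true,             // run independent tests concurrently
  workers: 4,                      // parallel workers on one machine
  retries: 2,                      // retry flaky tests before reporting failure
  use: {
    baseURL: process.env.BASE_URL, // point the suite at a clean test environment
    trace: 'retain-on-failure',    // keep debug artifacts only when triage is needed
  },
});
```

Dialing `workers` up shortens wall-clock time but raises the bar for test isolation: tests that share accounts or data will start colliding once they run in parallel.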

Where things go sideways: pushing testing to developers

We see it over and over again. When teams decide to take QA seriously, they try to “shift left” by giving E2E testing to developers. On paper, it tracks—devs know the feature, and they should own the tests. But in practice, it becomes a bottleneck.

Features get delayed while developers maintain test suites. Flaky tests go unaddressed. Eventually, failures get ignored because no one trusts the results.

The problem isn’t shifting left; it’s assuming that means developers should own E2E testing. Shifting left is about catching bugs earlier, closer to where code changes happen. The right approach builds systems that support early testing without forcing developers to split focus between building features and maintaining brittle tests.

👉 Read more about shifting left the right way

