Why in-house QA teams shouldn’t build and maintain end-to-end tests, and what they should do instead

John Gluck
July 17, 2023

If you’ve ever tried to build comprehensive end-to-end test coverage in-house, you know that creating the tests is the easy part. The challenge is maintaining the tests you have, which comes at the expense of adding coverage — or any of the other work you would like to get done. 

And you might say, “Well yeah, that’s the job — maintaining our automated tests and the infrastructure is QA engineering.” But that’s a slippery slope, because there’s a limit to how many tests one person can realistically maintain (somewhere between 50 and 100), at which point you have to keep adding more and more people just to tread water. That’s why just 37% of teams feel they’re getting a good return on their investment in automated testing.

We have a different opinion: QA engineering is a wide-ranging discipline, and for most companies, it’s just not feasible to build a team big enough to get everything done. If you could eliminate the burden of test maintenance, you could shift your attention and get more value from the resources that you have. 

Unlocking the creativity of manual testers

If you’re trying to move from manual to automated regression testing, a coding bootcamp is a great option for manual testers. But there are also a number of things they can shift their attention to right away, some of which may have even more value for the product overall.

User Acceptance Testing

No product should be released without some form of User Acceptance Testing (UAT). Traditionally, UAT was conducted by customers to verify that the original product requirements were fulfilled and that they got what they asked for. Today it’s often conducted by customer proxies, usually product owners and testers.

UAT is a great input for later test automation because acceptance tests are a bridge from an abstract feature or user story to a concrete test outline or checklist that can be automated. Some automated testers prefer to perform UAT before automating for exactly that reason. 
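To make that bridge concrete, here’s a minimal sketch of a UAT checklist for a hypothetical “apply a coupon” story turned into a Playwright test skeleton; the URL, selectors, and amounts are all invented for illustration:

```ts
// User story: "As a shopper, I can apply a coupon at checkout."
// The UAT checklist (validated manually first) becomes the test steps.
import { test, expect } from "@playwright/test";

test("shopper can apply a coupon at checkout", async ({ page }) => {
  // Checklist step 1: open the checkout page (invented URL)
  await page.goto("https://example.com/checkout");
  // Checklist step 2: enter a valid coupon code (invented selector)
  await page.fill("#coupon-code", "SAVE10");
  // Checklist step 3: apply it
  await page.click("text=Apply");
  // Checklist step 4: the discounted total is displayed
  await expect(page.locator("#order-total")).toContainText("$90.00");
});
```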

When UAT comes first, in-house testers can focus on the acceptance side of things while QA Wolf focuses on running the regression suite and converting previously validated acceptance scenarios into automated regression tests. And if you’re stumped about what else your testers can do besides manual UAT, here are some more suggestions.

Exploratory testing

Exploratory testing has been around formally for four decades, but it is seldom used and frequently misunderstood. If you’ve ever asked three people on your team how something was supposed to work just to see whether they’d give you different answers, congratulations: you’ve performed exploratory testing. Teams don’t need expensive tools to start doing it, and it provides immediate value. Often confused with manual testing, exploratory testing can be applied with any testing technique (white-box, gray-box, etc.), and though it is most frequently performed in the testing phase, it can happen at any phase of the life cycle.

Many testers are drawn to QA because they can both empathize with the customer and think outside expected usage patterns. Good exploratory testers are creative and enjoy breaking software. We haven’t yet reached the point where robots can empathize with users: automated tests are deterministic, while exploratory testing requires subjectivity.

Recommended Reading: Explore It! by Elisabeth Hendrickson

Informal usability testing

Formal usability testing is usually the domain of UX researchers and designers, and we aren’t suggesting that testers change careers; nonetheless, a tester who knows a given product well can be a valued partner. The trouble is that a tester who has been testing an application long enough probably knows it too well to be objective about it; they’ve become accustomed to its idiosyncrasies. This is where informal usability testing comes in.

Larger organizations tend to have people in charge of making sure customers’ voices are formally heard. Some of them run regular exercises, such as A/B tests or product demos, in order to collect and analyze feedback.

But here’s where coworkers can help. Testers at small companies may struggle to find colleagues who haven’t already used the application, but once a company gets past a couple hundred people, such colleagues become easier to find, and in larger companies with several products it’s usually easy to find someone who hasn’t used a given application. New hires, in particular, should always be enlisted to see what usability problems they can uncover.

Testers can guide these colleagues through the app, which goes a long way toward seeing the application with fresh eyes. They can hold one or two recorded sessions and, if those prove useful, start organizing them on a regular basis.

Getting this type of testing started takes some goodwill within the organization, so testers may want to buy their helpful colleagues lunch.

In smaller organizations, testers might be able to work with the sales or support team to make sure that any source of customer frustration gets to the developers. They may even be able to organize sessions where the team listens to customer calls about their team’s features.

Up-skilling automators

Automated testers with technical skills have even more opportunities to add value throughout the company when they’re not relegated to end-to-end test maintenance. Let’s talk about a few of them. 

Shift left with unit testing

Many automated testers already have enough technical prowess to write unit tests or component integration tests. Such efforts may require some assistance from developers at first, so testers should be sure to get their team’s support.

Testers can also put their exploratory skills to work here, thinking of cases that developers may not have considered or were simply too busy to write. Developers rarely bother to write tests that confirm error conditions, despite the fact that plenty of escaped defects originate exactly there.
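As a sketch of what that can look like, here’s a hypothetical applyDiscount function with the error-path unit tests that often go unwritten (Vitest syntax; the function and its rules are invented):

```ts
import { describe, expect, it } from "vitest";

// applyDiscount is a hypothetical function under test
function applyDiscount(total: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new RangeError(`discount percent out of range: ${percent}`);
  }
  return total * (1 - percent / 100);
}

// The happy path probably already has a test; the error conditions often don't.
describe("applyDiscount error conditions", () => {
  it("rejects a negative discount", () => {
    expect(() => applyDiscount(100, -5)).toThrow(RangeError);
  });

  it("rejects a discount over 100%", () => {
    expect(() => applyDiscount(100, 110)).toThrow(RangeError);
  });
});
```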

Even if a given tester doesn’t yet consider themselves skilled enough to write unit tests, it’s a great place to start firming up those skills: since it’s test code, it never makes its way to production. And as their coding chops improve, testers earn greater respect from the team.

And for testers already familiar with unit testing and, better yet, Test-Driven Development (TDD), consider asking them to start a TDD dojo.

Static analysis

In all likelihood, your team already uses tools that analyze code without executing it: linters, coverage tools, and so on. Most teams would benefit from more of them, since they help identify areas of an application where defects are likely to show up. Perhaps your team runs some static analysis tools but ignores the output or postpones investigating it; testers can add an enormous amount of value simply by tuning those tools to be less noisy. There are also dashboards for displaying your team’s results that testers can set up and configure. Whatever languages your team uses, suitable tools are usually easy to install, and once a tester has added one and wired up its reporting, they can start bringing findings to the team.
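As one example of tuning for less noise, here’s a sketch of an ESLint flat config that demotes a stylistic complaint to a warning while keeping likely-defect rules as hard errors; the specific rule choices are illustrative, not a recommendation:

```ts
// eslint.config.mjs — an illustrative noise-tuning pass
import js from "@eslint/js";

export default [
  js.configs.recommended,
  {
    rules: {
      // stylistic noise: surface it, but don't fail the build over it
      "no-unused-vars": "warn",
      // likely defects: keep these as hard errors
      "eqeqeq": "error",
      "no-fallthrough": "error",
    },
  },
];
```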

Courtesy: CodeScene

Recommended reading: Your Code As A Crime Scene by Adam Tornhill

Performance testing

Automated testers with free cycles frequently gravitate toward “load testing,” one of the many forms of software performance testing whose results are immediately understandable. There are plenty of great free tools that automated testers can use alongside their existing end-to-end tests.
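k6 is one popular free option; a minimal load-test script looks something like the sketch below (the endpoint and load shape are invented):

```ts
// load-test sketch for k6 (k6 scripts are plain JavaScript/ES modules)
import http from "k6/http";
import { check, sleep } from "k6";

export const options = {
  vus: 20,          // 20 concurrent virtual users
  duration: "1m",   // sustained for one minute
};

export default function () {
  const res = http.get("https://example.com/api/health"); // invented endpoint
  check(res, { "status is 200": (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```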

Some of the more complicated and overlooked forms of performance testing, such as configuration testing, can be almost as easy to conduct and may yield more actionable results: the configuration change needed to speed up an application is often more obvious than whatever you can decipher from a load-test report.

A frequently under-utilized form of performance testing is page analysis. Many front-end developers run these tools on their local boxes, but the tools really shine when pointed at the production website. Some can be easily configured to run in your CI pipeline, giving you fast feedback on where customers might be experiencing slowness on your site.
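Lighthouse CI is one example; the sketch below (URLs and thresholds invented) runs page analysis on every build and warns when the performance score dips:

```ts
// lighthouserc.cjs — a minimal Lighthouse CI sketch
module.exports = {
  ci: {
    collect: {
      url: ["https://example.com/"], // point at the real site, not localhost
      numberOfRuns: 3,               // average out run-to-run variance
    },
    assert: {
      assertions: {
        // warn (don't fail the build) when performance drops below 0.8
        "categories:performance": ["warn", { minScore: 0.8 }],
      },
    },
  },
};
```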

Transitioning to testing in production

For years, the software industry made fun of people who tested in production, but those days are gone: the FAANG companies and other big players have made testing in production all the rage. In fact, if your organization is doing A/B testing, you’ve already taken one of the first steps toward a fully mature production testing strategy. Your release team may run canaries; your InfoSec team may be doing penetration testing. If so, your testers may be able to leverage some of that work.

It can take a lot of influence and buy-in from the organization to start experimenting with new testing methodologies in production, so testers should start in pre-prod and put together a proof of concept of what such testing might look like.

Courtesy: Cindy Sridharan

Recommended reading: Testing in Production, the Safe Way by Cindy Sridharan

Contract testing

When testers write automated tests, they are often writing system tests, which are intended to ensure that a given application plays nicely with others and that bi-directional communication works. In today’s world of microservices, that means making sure that the applications providing services (providers) and the client applications using them (consumers) both adhere to the same agreement (the contract).

Contract testing is no small feat to pull off, and the more services your organization has, the more difficult it becomes. The advantage, however, is the confidence teams get from running contract tests, which can usually be executed before code is merged, depending on how the organization manages application versions.

In a small company, though, contract testing is definitely achievable. Only a handful of tools support it, the main one being Pact, and paid tools can certainly help. It will also help to have support from upper management.
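To give a flavor, here’s a sketch of a Pact consumer test in TypeScript, assuming a Jest-style runner; the service names, endpoint, and fields are all hypothetical:

```ts
import { PactV3, MatchersV3 } from "@pact-foundation/pact";

// hypothetical consumer ("checkout-ui") and provider ("inventory-api")
const provider = new PactV3({ consumer: "checkout-ui", provider: "inventory-api" });

describe("inventory contract", () => {
  it("returns the stock level for a SKU", () => {
    provider
      .given("SKU 123 exists")
      .uponReceiving("a request for the stock level of SKU 123")
      .withRequest({ method: "GET", path: "/stock/123" })
      .willRespondWith({
        status: 200,
        body: MatchersV3.like({ sku: "123", quantity: 5 }),
      });

    // The test runs against a Pact mock server; the recorded contract
    // is later verified against the real provider.
    return provider.executeTest(async (mockServer) => {
      const res = await fetch(`${mockServer.url}/stock/123`);
      const body = await res.json();
      expect(body.sku).toBe("123");
    });
  });
});
```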

Recommended Reading: Pact.io documentation

A little from Column A, a little from Column B

If you are still looking for ways to get more from your testers, whether they’re manual or automated, here are some other strategies: most can be picked up by less technical testers, but developer-level staff may find them interesting too.

Monkey testing and random testing

Monkey testing is a great way to find obscure, edge-case defects in your organization’s application. It’s a form of random testing in which testers throw randomly generated inputs at the system, frequently using specialized tools. And since monkey testing is a black-box technique, those tools don’t need to be written in a language the team is familiar with; a minimal hand-rolled version is sketched below.
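This sketch hammers a hypothetical parsePrice function with random strings and flags suspicious results; everything in it is invented for illustration:

```ts
// monkey.ts — a hand-rolled monkey-testing sketch
// parsePrice is a hypothetical function under test
function parsePrice(input: string): number {
  const n = Number(input.replace(/[$,]/g, ""));
  if (Number.isNaN(n)) throw new Error(`unparseable price: ${input}`);
  return n;
}

// random strings of arbitrary code points, printable or not
function randomString(maxLen: number): string {
  const len = Math.floor(Math.random() * maxLen);
  let s = "";
  for (let i = 0; i < len; i++) {
    s += String.fromCharCode(Math.floor(Math.random() * 0xffff));
  }
  return s;
}

for (let i = 0; i < 10_000; i++) {
  const input = randomString(32);
  try {
    const price = parsePrice(input);
    if (!Number.isFinite(price) || price < 0) {
      // a genuine find: e.g. "1e999" parses to Infinity without throwing
      console.log("suspicious result:", JSON.stringify(input), price);
    }
  } catch {
    // throwing on garbage is the documented behavior, not a bug
  }
}
```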

That said, make sure you understand the pros and cons of this technique before letting testers wreak havoc on their teammates’ testing environments. Bugs found by monkey testing can be difficult to understand, so testers will want the explicit consent of their team and at least one enthusiastic developer willing to listen.

On a bit of a side note, fuzz testing, another form of random testing, is also worth checking out if the InfoSec team isn’t already doing it.

Recommended reading: The Practical Testing Book

Observability

Where static analysis identifies risk in code that isn’t running, observability is about discovering risk by looking at the artifacts of developing and executing code, such as application log output or even source control management history. 

These days, most organizations have tools to aggregate logs and report on trends across environments. Testers rarely use these tools for anything other than debugging application failures during tests, but there are plenty of ways to use the information in your logs to hunt for bugs, if only by observing errors and mapping them to application behavior. Many of the modern analytics platforms that have become de rigueur in the industry let team members create custom dashboards.
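As a trivial example of mining logs for bugs, the sketch below tallies error entries by endpoint from a JSON-lines log file; the log shape (level and path fields) is an assumption, and real log aggregators expose the same idea through their query languages:

```ts
// error-trends.ts — tally error log lines by endpoint to see where failures cluster
import { readFileSync } from "node:fs";

const counts = new Map<string, number>();

for (const line of readFileSync("app.log", "utf8").split("\n")) {
  if (!line.trim()) continue;
  try {
    const entry = JSON.parse(line);
    if (entry.level === "error" && typeof entry.path === "string") {
      counts.set(entry.path, (counts.get(entry.path) ?? 0) + 1);
    }
  } catch {
    // skip lines that aren't JSON
  }
}

// noisiest endpoints first: candidates for a closer look
[...counts.entries()]
  .sort((a, b) => b[1] - a[1])
  .forEach(([path, n]) => console.log(`${n}\t${path}`));
```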

Courtesy: Honeycomb.io

Recommended Reading: Observability Engineering: Achieving Production Excellence by Charity Majors, Liz Fong-Jones, and George Miranda

Synthetic monitoring

Synthetic monitoring combines scripting and observability. The idea is to create transactions on a system and track the results in your logs, establishing what normal behavior looks like. That makes it easier to spot abnormal behavior, so the team can proactively identify potential error conditions and fix them before customers find them, rather than sitting around waiting to see whether the system is resilient. By purposely and frequently exercising an application’s most important features, teams stay one step ahead and provide a top-notch customer experience.
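Here’s a sketch of one such synthetic transaction using Playwright, meant to run on a schedule against production and emit a structured log line for your observability tooling to watch; the URL, selectors, and success criteria are invented:

```ts
// synthetic-login.ts — one scripted transaction, run on a schedule against production
import { chromium } from "playwright";

async function checkLogin(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  const started = Date.now();
  try {
    await page.goto("https://example.com/login");             // invented URL
    await page.fill("#email", "synthetic-user@example.com");  // invented selectors
    await page.fill("#password", process.env.SYNTHETIC_PASSWORD ?? "");
    await page.click("button[type=submit]");
    await page.waitForSelector("text=Dashboard", { timeout: 10_000 });
    // structured log line for the observability pipeline to ingest
    console.log(JSON.stringify({ check: "login", ok: true, ms: Date.now() - started }));
  } catch {
    console.log(JSON.stringify({ check: "login", ok: false, ms: Date.now() - started }));
    process.exitCode = 1; // non-zero exit lets the scheduler raise an alert
  } finally {
    await browser.close();
  }
}

checkLogin();
```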

Courtesy: Grafana

The sky's the limit

As da Vinci supposedly said of art, testing is never completed, only abandoned. There are myriad methods testers can use to deepen their test coverage, some of which put their automation skills to use and even stretch them, and some of which let them get more creative.

What’s more, by adopting one or more of these methods, testers will increase their team’s ability to identify risk ahead of feature releases. And by partnering with QA Wolf’s automated testing teams, your organization can increase the depth at which your teams test while also increasing confidence, resulting in faster, higher-quality releases.

A pessimist sees the difficulty in every opportunity; an optimist sees the opportunity in every difficulty.
—Winston Churchill
