Let’s throw out the antiquated idea that ads exist outside of the “real” product and therefore aren’t beholden to the same quality standards. Monetization tools are real products that should deliver value not only for brands and agencies but for the end users who see the ads too.
Ads that don’t load, aren’t interactable, don’t work across devices, crash the app or browser, or use dark patterns aren’t just unengaging - they’re simply unacceptable. And if you’re monetizing with ads, how can you possibly justify premium rates when pitching your product to advertisers?
The era of “throw some code on a page and pray it works” ended for professional web apps before Bill Clinton left office in 2001, but this attitude is still the norm for ads - particularly for programmatic ads.
So with that in mind, we can identify tools to turn poor ad experiences into ones that deliver real value. Exploratory testing is one such tool.
Exploratory testing can be summarized as “testing like a human”. Humans don’t think in terms of test cases or test suites or even test steps. Humans like to try stuff and watch what happens. They ask themselves, “Does this work? How about this? What if I tweak that and try again? Hmm, now why is THAT happening?”
Then, when an exploratory tester finds an issue, they note what they were doing so they can reproduce it. Over time they learn to identify common problematic behaviors in the product, and their testing becomes more efficient at finding bugs.
The key to successful exploratory testing is relying on a tester’s brain rather than a script. No predetermined suite of tests can capture the creativity and judgment a human brings to creating tests on the fly.
Exploratory testing may seem like “playing around” with a product, and in many ways that’s what it is. But there’s a method to the madness: the tester is trying to uncover what’s behaving according to spec and what’s not. Furthermore, the tester can discover undefined behavior that the spec didn’t cover, and that’s one of the real values of exploratory testing.
There’s a software joke that goes: “A software QA engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 99999999999 beers. Orders a lizard. Orders -1 beers…”
But there’s an appendix to that joke: “The QA engineer signs off on the bar. The first customer walks in and asks where the bathroom is. The bar explodes.”
The customer did something perfectly reasonable for a bar (asking where the bathroom is), but because bathroom breaks weren’t covered in the spec, the QA engineer missed a failure they would have caught had they tested the bar “like a human”.
To test like a human effectively, we first need to identify the risks to quality upfront. Risks exist broadly at the system level, as in Rex Black’s list of quality risk categories. They also exist at the product or feature level, like “clicking on the ad does not register an event” or “the ad breaks the styling of the header”.
Using the bar example, some functionality risks are “when interacting with the bartender, you can only order beer” and “you cannot get directions to the bathroom”.
Enumerating the risks gives us boundaries for exploratory testing. By identifying what’s at stake, we know what could go wrong, even where the spec is silent, and we can watch for those failures as we test.
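To make this concrete, a team might capture feature-level risks in a lightweight charter that testers keep at hand during a session. Here’s a hypothetical sketch; the structure and every entry are illustrative, drawn from the examples above rather than from any particular spec:

```typescript
// Hypothetical risk charter for a single ad placement. Every entry here is
// illustrative; real ones would come from your own spec and risk analysis.
interface QualityRisk {
  category: 'functionality' | 'performance' | 'layout' | 'brand safety';
  risk: string;     // what could go wrong
  watchFor: string; // what an exploratory tester should notice in the moment
}

const adSlotRisks: QualityRisk[] = [
  {
    category: 'functionality',
    risk: 'Clicking on the ad does not register an event',
    watchFor: 'Landing page opens but no click beacon appears in the network log',
  },
  {
    category: 'layout',
    risk: 'The ad breaks the styling of the header',
    watchFor: 'Header elements shift or overlap once the creative renders',
  },
  {
    category: 'performance',
    risk: 'The creative crashes the browser or app',
    watchFor: 'Frozen scrolling or memory spikes while the ad is on screen',
  },
];
```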
Exploratory testing requires time that teams can struggle to set aside, but even quick, surface-level testing by a human can save you from poor ad experiences like these:
If your users can’t escape your ads, thanks to tiny exit buttons or fine print, they can’t (and won’t want to) see your site content. These are infuriating ad experiences that could tarnish their view of your brand.
If your ads crash your users’ browsers and apps, not only do you lose the chance to monetize them during that session, but they may never come back at all.
Surrounding and embedding your organic content with ads can distract users from engaging with either. Where does the ad end and the content begin? If it takes extra brainpower for your users to figure that out, they may just jettison your site altogether for a product with a simpler, ad-free layout.
Improperly sized ads don’t serve your users, your advertisers, or your brand at all. You may have spent years making sure your site is clean and properly sized - only to tolerate ads that make your product look amateurish.
Probably worse than incorrectly sized ads are ads that display raw code (or don’t display anything at all) rather than the intended creatives - leading to a terrible user experience.
Again, we find it amazing that companies with teams dedicated to site design and brand presentation give ads a pass. Here’s an example from The New York Times Crossword section: the ad banner is out-of-place and enormous, with the ad frame taking up a third of the above-the-fold space while the banner itself occupies only a small portion of it.
This is likely because the slot serves differently-sized banners - but for the sake of sleek site design, they could, say, implement rules using ad-serving APIs that identify the rendered banner size and adjust the placement in real time.
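As a sketch of what that could look like, here’s a minimal example using Google Publisher Tag’s slotRenderEnded event. The container id crossword-banner and the collapse-when-empty behavior are our assumptions, not details of the Times’s actual setup:

```typescript
// Minimal GPT sketch: fit the ad frame to the creative that actually
// rendered, instead of reserving space for the largest size the slot accepts.
// The element id "crossword-banner" is a hypothetical placeholder.
declare const googletag: any; // provided by the GPT script tag on a real page

googletag.cmd.push(() => {
  googletag.pubads().addEventListener('slotRenderEnded', (event: any) => {
    if (event.slot.getSlotElementId() !== 'crossword-banner') return;

    const container = document.getElementById('crossword-banner');
    if (!container) return;

    if (event.isEmpty || !Array.isArray(event.size)) {
      // No creative (or a "fluid" size) came back: collapse the frame
      // rather than leaving a block of blank above-the-fold space.
      container.style.display = 'none';
      return;
    }

    const [width, height] = event.size; // rendered creative size in pixels
    container.style.width = `${width}px`;
    container.style.height = `${height}px`;
  });
});
```

(GPT’s built-in collapseEmptyDivs() already covers the empty case; the listener is what lets you snap the frame to the rendered size.)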
It’s also a lot of blank space that could be used instead to upsell their News subscription, Recipe subscription, or insert polished native ads. This is a win-win: users get a crisper, cleaner user experience while The New York Times can charge higher ad rates for better ads (and/or upsell other products).
Off-brand advertisers or PR-nightmare scenarios can easily occur if you’re running programmatic ad traffic. Fortunately, human testing can identify when this occurs - and you can then block certain advertisers and/or ad partners.
To be fair, it’ll be hard to catch every one of these situations if you’re using an ad exchange, and if you’re displaying direct-sold ads, this is far less likely to happen.
You shouldn’t use exploratory testing for quickly identifying regressions. That task is better suited for automated integration tests. Similarly, there are automated tools for detecting known malvertising patterns and perpetrators that are more efficient than a human.
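By way of contrast, here’s a minimal sketch of that kind of automated regression check, written with Playwright; the URL and the ad container id #ad-slot-top are hypothetical placeholders, not details from this article:

```typescript
import { test, expect } from '@playwright/test';

// Regression sketch: assert that the top banner slot renders a creative at
// all. "https://example.com/" and "#ad-slot-top" are placeholders.
test('top banner slot renders a creative', async ({ page }) => {
  await page.goto('https://example.com/');

  // Ad servers typically inject an iframe into the slot's container once a
  // creative renders.
  const adFrame = page.locator('#ad-slot-top iframe');
  await expect(adFrame).toBeVisible({ timeout: 10_000 });

  // Guard against zero-sized or collapsed renders.
  const box = await adFrame.boundingBox();
  expect(box?.width ?? 0).toBeGreaterThan(1);
  expect(box?.height ?? 0).toBeGreaterThan(1);
});
```

A check like this runs on every build in seconds. What it can’t do is notice that a creative is hideous, off-brand, or covering your navigation - which is exactly where the human tester comes back in.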
Overall, exploratory testing comes with a time cost: the time to identify risks, the time to test, and the time for a tester to learn the product and sharpen their skills. And exploratory testing cannot be done casually; an unimaginative, unempathetic tester doesn’t provide much advantage over a test suite.
But when executed well, exploratory testing is a powerful weapon against poor ad experiences.