Is your A/B/n or multivariate testing really delivering the value you think it is?

A/B/n and multivariate testing form a foundation for many digital marketing and product development programs because they promise to squeeze every last drop of conversion potential from your traffic. Who’s going to say no to that?!

Tools like Optimizely and Adobe Target make it easier than ever, even for technophobes, to test creative and optimise digital features with unquestioned, scientific certainty – or so we think.

But what if it were actually hurting your bottom line?

Here are five common pitfalls of testing you need to be addressing right now.


1. Failure can masquerade as success

Approaching testing with too few success criteria, or with a focus that’s too narrow, can inadvertently create hidden consequences.

Take, for instance, notifications or countdown clocks on retail sites that create a sense of urgency to purchase – especially where there’s no real consequence if a customer doesn’t immediately check out.

Sure, you’ll see conversion go up and time to purchase go down. But if the urgency is artificial – the tickets never actually sell out, or the same price is still available next week – all you’ve done is add unnecessary stress to the customer’s experience.

Are you factoring customer satisfaction into your tests? While you might close the sale now, short-term gain could translate to long-term pain as your customer looks for a less stressful, more authentic experience next time.
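
One practical way to factor satisfaction in is to treat it as a guardrail metric: a variant only ‘wins’ if it lifts the primary metric without breaching limits on satisfaction or refunds. A minimal sketch – the metric names and thresholds below are illustrative assumptions, not from any particular tool:

```typescript
// Illustrative sketch: a variant only "wins" if it lifts the primary metric
// without breaching any guardrail (e.g. satisfaction, refund rate).
interface VariantResult {
  name: string;
  conversionLift: number;   // relative lift vs control, e.g. 0.04 = +4%
  csatDelta: number;        // change in satisfaction score vs control
  refundRateDelta: number;  // change in refund/cancellation rate vs control
}

function isGenuineWinner(v: VariantResult): boolean {
  const liftThreshold = 0.02;  // assumed minimum lift worth shipping
  const csatFloor = -0.05;     // assumed tolerable drop in satisfaction
  const refundCeiling = 0.01;  // assumed tolerable rise in refunds

  return (
    v.conversionLift >= liftThreshold &&
    v.csatDelta >= csatFloor &&
    v.refundRateDelta <= refundCeiling
  );
}

// A countdown-clock variant that converts well but stresses customers:
const urgencyVariant: VariantResult = {
  name: "countdown-clock",
  conversionLift: 0.06,   // +6% conversion: looks like a win...
  csatDelta: -0.12,       // ...but satisfaction drops
  refundRateDelta: 0.02,  // ...and refunds rise
};

console.log(isGenuineWinner(urgencyVariant)); // false – short-term gain, long-term pain
```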


2. If you don’t have a specific goal, you might sk(r)ew everything

When designing a new test, it might be tempting to throw in a few other ideas you’ve been considering for the same page. Or you might try to kill two birds with one stone by testing variations of a new feature and variations of temporary onboarding indicators at the same time (a ‘new’ badge or descriptive label, for instance).

In such an example, you’re actually trying to determine two outcomes: the first being which variation of the feature is more successful, and the second being the more successful way to draw attention to it.

But the promotional label or indicator might add context that’s critical to one variation of the feature, which could lead to a false positive result. When it’s time to turn off the promotional label, are you sure you still have a winning feature?

The more variants you add to a multivariate test, the more comparisons you’re making – inflating the chance of a false positive – and the longer it takes each variant to reach statistical significance, which can be a real problem if your traffic is inconsistent.
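
To see why, consider the uncorrected false-positive risk: at a 5% significance level, every extra comparison compounds the chance that at least one variant ‘wins’ by pure luck, while splitting traffic across more cells stretches out the runtime. A rough sketch of the arithmetic (simplified – it assumes independent comparisons and an even traffic split):

```typescript
// Rough sketch: how extra variants inflate false-positive risk and runtime.
// Assumes independent comparisons and an even traffic split – a simplification.
const alpha = 0.05; // per-comparison significance level

for (const variants of [2, 4, 8]) {
  const comparisons = variants - 1; // each variant vs control
  // Chance that at least one comparison is a false positive:
  const familyWiseError = 1 - Math.pow(1 - alpha, comparisons);
  // With traffic split evenly, each variant gets 1/variants of your visitors,
  // so time to reach a fixed per-variant sample size scales with the count:
  const relativeRuntime = variants / 2; // vs a simple A/B test

  console.log(
    `${variants} variants: ~${(familyWiseError * 100).toFixed(1)}% chance of ` +
    `a spurious winner, ~${relativeRuntime}x the runtime of an A/B test`
  );
}
// 2 variants: ~5.0%, 1x | 4 variants: ~14.3%, 2x | 8 variants: ~30.2%, 4x
```

Corrections like Bonferroni bring the false-positive rate back down, but only by demanding stronger evidence – and therefore more samples – per variant. Either way, more cells means more time.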

Add variants wisely and make sure they align to a specific goal for your test.


3. Your technical implementation could render everything invalid

Thankfully, testing tools and internet speeds have come a long way, and the risk of tests creating ‘page flicker’ – where elements of the page visibly flash as control content is replaced with test content – has been greatly reduced.

But complex or buggy tests can still cause flicker, unintentionally drawing attention to content and inadvertently skewing results. Because this jarring effect probably wouldn’t occur under normal production conditions, a winning variant may not be a winner after all.
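
One common mitigation – sketched here vendor-agnostically, since every tool wires this up differently – is an anti-flicker guard: hide the page until the test has applied its changes, with a hard timeout so a slow or broken script degrades to the control experience rather than a blank page. A minimal sketch, where applyVariants() is a hypothetical stand-in for your testing tool:

```typescript
// Minimal anti-flicker sketch (vendor-agnostic; assumptions noted inline).

// `applyVariants()` is a placeholder for whatever your testing tool does to
// swap control content for test content – it is not a real vendor API.
declare function applyVariants(): Promise<void>;

// 1. Hide the page before any swapped content can paint.
const style = document.createElement("style");
style.id = "anti-flicker";
style.textContent = "body { opacity: 0 !important; }";
document.head.appendChild(style);

// 2. Reveal the page once variants are applied – or after a hard timeout,
// so a slow or failing script shows the control experience instead of
// a blank page.
function revealPage(): void {
  document.getElementById("anti-flicker")?.remove();
}

const FAILSAFE_MS = 1000; // assumed budget – tune to your latency profile
const failsafe = setTimeout(revealPage, FAILSAFE_MS);

applyVariants()
  .catch(() => { /* fall back to control content */ })
  .finally(() => {
    clearTimeout(failsafe);
    revealPage();
  });
```

The failsafe is the important part: it bounds the worst case, so a flaky test script can never do worse than showing the control.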

Ensure your test execution mimics real-world experience.


4. Testing can be the enemy of innovation

Depending on the demographics of your users, there’s likely to be a subset that is resistant to change – and that’s a real risk if it includes your high-value, repeat customers.

You might be perplexed when a well-crafted, time-saving experience born from sound insights doesn’t immediately beat a control. It might be because your customers need time to adjust to it.

Even the most intuitive of interfaces might fail to deliver immediate success for all customers.

Imagine if Facebook rolled back changes every time users vocalised their initial objection! In cases where you are trying to change customer behaviour, A/B/n or multivariate testing might not be an appropriate indicator of long-term success.

But novelty can also cut the other way, depending on what you’re testing and the overall design of the page.

A feature or content you’re testing might have a compelling hook that encourages play or exploration at first exposure, but may not drive repeat engagement. In such a case, immediate success may not be sustained.

Continued testing in pursuit of absolute perfection could see you optimise the soul, and risk, out of your digital experiences.

In a dystopian internet where testing ruled everything, we could end up with identical sites where no one was prepared to pioneer new patterns because of a temporary hit to conversion.

Know when to concede and ride out a short-term impact for innovation’s sake. 


5. Be wary of inaccurate conclusions

I wish I had a dollar for every time I've heard “choose a hero image with someone smiling because they test better”.

In many cases, it’s probably true. But in just as many, it’s not.

Framing, composition, content and context all have an impact on whether creative succeeds.

A shot of a reclined business class seat, warm sun shining through the window, is likely to sell long-haul flights more successfully than a busy shot of a middle business class seat with someone sitting in it, because it conveys a completely different tone.

Don’t assume the conditions behind a previous success will produce the same outcome next time – there are likely to be too many other variables at play.

You should definitely share your testing outcomes – but to stimulate conversation and generate ideas that improve your testing capability, not to build a bank of inaccurate assumptions used to bypass future testing.

Testing can give you a powerful edge and ultimately result in better outcomes for your customers. But with great power comes great responsibility.

Build data science and statistical modelling capabilities among those designing and analysing tests, and consider working with a partner that knows how to design and engineer successful testing programs to maximise your investment in this technology.


By Dan Fischer - Manager Digital Customer Experience, Qantas

About Dan Fischer

Dan is passionate about the intersection of commerce and design and the role emotion and context play in building winning digital experiences. Dan leads the Qantas digital direct channel strategy, including shaping and executing the vision for qantas.com and the Qantas app, and is a member of the Digital + Technology Collective Advisory Board. Currently undergoing an end-to-end transformation, qantas.com is Qantas’ key booking channel, handling more than a third of the airline’s bookings.