3 Critical Ingredients for Successful A/B Tests

AB Tasty’s note: this is a guest post by Jack Maden, marketing executive at Decibel Insight.

Sturgeon’s law states that “ninety percent of everything is crap.”

No matter the field, no matter the industry, no matter the area of expertise: 90% of everything produced, recommended, or discussed is not worth paying attention to.

The conversion optimization industry may be relatively new, but chances are Sturgeon’s law still applies.

So how can you avoid slipping into the 90% of crap with your web optimization? Well, ensuring your testing program heeds the following three principles is a good start.

Your hypothesis is tied to a specific metric

Having a testing culture is positive for an organization. However, if every single issue is met with the response, “we should test that!”, it can become dangerous.

Testing needs purpose.

On our blog, agency CountourThis writes that A/B and multivariate tests are not proxies for decision making. And it’s true: testing should help inform decisions, not dictate them.

The way you can ensure a test has purpose is by tying it to a specific metric. Craig Sullivan’s basic hypothesis kit is a brilliant place to start with this. The basic framework runs as follows:

  1. Because we saw (data/feedback)
  2. We expect that (change) will cause (impact)
  3. We’ll measure this using (data metric)

For example, say you find mobile users aren’t converting on a particular landing page. Then you load up a scroll heatmap and find that only 30% of mobile users are scrolling far enough to see the call to action.

We can hypothesize as follows:

  1. Because only 30% of mobile users see the CTA
  2. We expect moving the CTA up the visual hierarchy on mobile will lead to more conversions
  3. We’ll measure this by tracking the CTA conversion rate

This is a testable hypothesis with purpose, and is a better use of resources than testing anything and everything, merely for the sake of calling yourself ‘data-driven’.
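
To make this concrete, here’s a minimal Python sketch of how that hypothesis and its metric might be recorded and tracked. The class, variant names and figures are purely illustrative – they aren’t part of the hypothesis kit or any particular testing tool.

  from dataclasses import dataclass

  @dataclass
  class Hypothesis:
      observation: str   # 1. Because we saw (data/feedback)
      change: str        # 2. We expect that (change) will cause (impact)
      metric: str        # 3. We'll measure this using (data metric)

  mobile_cta = Hypothesis(
      observation="Only 30% of mobile users scroll far enough to see the CTA",
      change="Move the CTA up the visual hierarchy on mobile",
      metric="CTA conversion rate (CTA conversions / mobile sessions)",
  )

  def conversion_rate(conversions: int, sessions: int) -> float:
      """CTA conversion rate for one variant."""
      return conversions / sessions if sessions else 0.0

  # Hypothetical counts for control (original page) and variant (CTA moved up)
  print(conversion_rate(conversions=120, sessions=6000))  # control: 2.0%
  print(conversion_rate(conversions=168, sessions=6100))  # variant: ~2.75%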

Your hypothesis is based on both quantitative and qualitative data

It’s rare that hypothesis generation will be as simple as in the previous example. Website issues are often convoluted, and no obvious fix may present itself.

This is when you need to utilize qualitative data. Traditional analytics tools give a good indication of where website leaks are, but when it comes to understanding how to plug them, watching back recordings of individual user sessions is invaluable.

You can see exactly what your users see – errors and all – follow their mouse movements and clicks, and instantly understand their frustrations. Say they’re filling out a checkout form, and a pop-up covers the entire screen. You instantly empathize with how annoying that is, and it’s this kind of emotional insight that you just can’t get from lists of statistics and graphs.

Another key aspect of qualitative data is customer feedback. This can be obtained with voice of customer tools, which utilize on-site popups and surveys, or with more traditional direct routes (i.e. actually speaking to them!).

Qualitative data can radically alter your perspective on your website. As problems become more convoluted, it’s essential to collect it before generating hypotheses for A/B tests.

You’re not too attached to your hypothesis

When you’ve tied your hypotheses to specific metrics, and based them on a mix of qualitative and quantitative data, you might see testing them as a mere formality. These changes, you think, are guaranteed to get you more conversions.

Hold your horses.

Go into the test with a neutral mindset. These changes might work; they might not. You don’t know for sure: that’s why you’re testing them.

This neutral mindset prepares you for when a test fails. Rather than dismissing the results with such denials as “our users are stupid!”, “our tool must be broken!”, “our analytics is a load of tosh!”, you can accept the outcome and get on with coming up with new hypotheses.

By all means double-, triple- or even quadruple-check the setup of your tool and analytics; but once you know for sure it’s all set up correctly, accept the outcome of the test and move on.
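
If you want to sanity-check an outcome yourself rather than take the tool’s word for it, the maths is straightforward. Below is a rough Python sketch (standard library only) of a two-proportion z-test on control and variant counts – the numbers are invented for illustration, and your testing platform will normally report significance for you.

  from math import sqrt, erfc

  def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
      """Return (z, two-sided p-value) comparing conversion rates of A vs. B."""
      p_a, p_b = conv_a / n_a, conv_b / n_b
      p_pool = (conv_a + conv_b) / (n_a + n_b)
      se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
      z = (p_b - p_a) / se
      p_value = erfc(abs(z) / sqrt(2))  # two-sided
      return z, p_value

  # Hypothetical counts: control vs. variant with the CTA moved up
  z, p = two_proportion_z_test(conv_a=120, n_a=6000, conv_b=168, n_b=6100)
  print(f"z = {z:.2f}, p = {p:.4f}")
  # Only call a winner (or a loser) when p is below your pre-agreed
  # significance threshold; otherwise the honest answer is "inconclusive".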

Besides, a losing A/B test isn’t a failure: now you’ve got more data on which to base your next hypothesis.

So remember, maintain your scientific approach until the bitter end!

What next?

These three principles are a good starting point for running meaningful A/B tests. But, to really ensure you fall on the right side of Sturgeon’s law, download Decibel Insight’s conversion optimization guide. It’s packed full of industry insight and techniques – and it’s free!

About the author: Jack Maden is Marketing Executive at Decibel Insight, the leading digital experience analytics technology. Connect with him on Twitter and LinkedIn.
