
CX Optimization Webseries APAC: Episode #3 – The Importance of Continuous Optimization in A/B Testing

Testing is such a benefit in de-risking that decision-making.

– Tom Shepherd, UX Lead at David Jones

Hosted by Serena Ku, Senior CSM at AB Tasty

Featuring Tom Shepherd, UX Lead at David Jones

In the fast-paced world of digital commerce, A/B testing and continuous optimization are essential processes that allow brands to refine their strategies, improve customer experiences, and increase conversion rates over time.

One huge pitfall many businesses fall into is looking at what their competitors are doing and assuming that it will work for them too. But remember, things are not always as they appear.

In this third episode of our CX Optimization Web Series, Tom Shepherd, UX Lead at David Jones, joins Serena Ku, Senior Customer Success Manager at AB Tasty, to discuss the importance of continuous optimization in A/B testing.

Discover how a business perspective can shift from “we think” to “we know”.

Episode #3:

Why is it important for brands to run A/B tests?

The main benefits are improved content engagement, increased conversion rates and reduced bounce rates. 

If you’re not A/B testing, you may already be behind your direct competitors. This alone is a compelling reason for brands to start testing. Speeding up the time it takes to bring an idea or concept to market is another benefit worth considering.

Take note: businesses need to level up, keep pace with behavioral changes, and look for opportunities where experiences are not achieving the results they should be.

The role of AB Tasty in empowering David Jones’ CRO strategy

In a traditional UX setting, it is quite frustrating to invest a lot of time mocking up experiences and taking them to customers, only to find out later that they just don’t work.

The Australian luxury department store David Jones takes experience optimization seriously. They look closely at their customers to understand them in all facets. Using GA4 and FullStory, they draw out ideas and build solutions that make an experience more seamless by removing friction. With AB Tasty, they launch these experiences quickly and expose them to their customers to gather valuable insights.

Within the user experience team, David Jones treats optimization as a discipline, leveraging AB Tasty and analytics tools to marry quantitative data with qualitative insights that delight every customer.

Winning customer loyalty

Customer loyalty is all about the experience. In the e-commerce landscape, it comes down to a digital store that makes each customer feel highly valued.

Perfecting the art of customer loyalty requires both creativity and precision. That is why, like your local store attendant, EmotionsAI helps brands understand the emotional needs of their audiences, bolstering your experience optimization roadmap with effective messages, designs and CTAs that activate your visitors.

What factors should you consider when testing?

Truly knowing your customer demographics and understanding their behaviors online will allow you to create a well-formulated hypothesis. Consider the time of year when you launch a test: is it an off-peak season, are you running promotions, or are you clearing stock? Analyze your data and focus on where your conversion points are.

Tom suggests iterating and running as many follow-up tests as possible. If you tested something that worked, you might be onto something even greater. So test more iterations to unlock more results.

The wrap:

The strongest path to customer loyalty, higher conversion, and a customer base nobody can touch is having ‘differentiated experiences’. Start with a deeper knowledge of your industry and beyond. Know your customers and empathize with them. Be mindful that behaviors and preferences are ever-changing. Continuous optimization helps you adapt, execute strategies, and stay ahead of the game.


Mutually Exclusive Experiments: Preventing the Interaction Effect

What is the interaction effect?

If you’re running multiple experiments at the same time, you may find their interpretation to be more difficult because you’re not sure which variation caused the observed effect. Worse still, you may fear that the combination of multiple variations could lead to a bad user experience.

It’s easy to imagine a negative cumulative effect of two visual variations. For example, if one variation changes the background color, and another modifies the font color, it may lead to illegibility. While this result seems quite obvious, there may be other negative combinations that are harder to spot.

Imagine launching an experiment that offers a price reduction for loyal customers, whilst in parallel running another that aims to test a promotion on a given product. This may seem like a non-issue until you realize that there’s a general rule applied to all visitors, which prohibits cumulative price reductions – leading to a glitch in the purchase process. When the visitor expects two promotional offers but only receives one, they may feel frustrated, which could negatively impact their behavior.

What is the level of risk?

With the previous examples in mind, you may think that such issues could be easily avoided. But it’s not that simple. Building several experiments on the same page becomes trickier when you consider code interaction, as well as interactions across different pages. So, if you’re interested in running 10 experiments simultaneously, you may need to plan ahead.

A simple solution would be to run these tests one after the other. However, this strategy is very time-consuming, as a typical experiment requires two weeks to be performed properly in order to sample each day of the week twice.

It’s not uncommon for a large company to have 10 experiments in the pipeline, and running them sequentially would take at least 20 weeks. A better solution is to handle the traffic allocated to each test in a way that renders the experiments mutually exclusive.

This may sound similar to a multivariate test (MVT), except the goal of an MVT is almost the opposite: to find the best interaction between unitary variations.

Let’s say you want to explore the effect of two variation ideas: text color and background color. The MVT will compose all combinations of the two and expose them simultaneously to isolated chunks of the traffic. The isolation part sounds promising, but “all combinations” is exactly what we’re trying to avoid: the combination where text and background end up the same color will inevitably occur. So an MVT is not the solution here.
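
To make the “all combinations” point concrete, here is a minimal sketch in plain TypeScript (hypothetical color values, not an AB Tasty API) of how an MVT would enumerate every pairing of two variation ideas, including the illegible ones:

```typescript
// Hypothetical variation values, for illustration only.
const textColors = ["black", "white"];
const backgroundColors = ["white", "black"];

// An MVT exposes every combination to an isolated slice of traffic.
const combinations = textColors.flatMap((text) =>
  backgroundColors.map((background) => ({ text, background }))
);

// The problematic pairings (text and background identical) are part of the set.
const illegible = combinations.filter((c) => c.text === c.background);
console.log(`${combinations.length} combinations, ${illegible.length} of them illegible`);
// -> 4 combinations, 2 of them illegible
```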

Instead, we need a specific feature: A Mutually Exclusive Experiment.

What is a Mutually Exclusive Experiment (M2E)?

AB Tasty’s Mutually Exclusive Experiment (M2E) feature enacts an allocation rule that blocks visitors from entering selected experiments depending on the previous experiments already displayed. The goal is to ensure that no interaction effect can occur when a risk is identified.
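
To illustrate the idea, here is a conceptual sketch of mutually exclusive allocation. It is not AB Tasty’s actual implementation and every name in it is hypothetical: each exclusion group deterministically splits its traffic so that a visitor can enter at most one of the group’s experiments.

```typescript
// Conceptual sketch only, not AB Tasty's implementation.
type Experiment = { id: string; trafficShare: number }; // shares within a group sum to at most 1

// Deterministically hash a visitor into [0, 1) for a given exclusion group,
// so the same visitor always falls into the same bucket.
function bucket(visitorId: string, groupId: string): number {
  const key = `${groupId}:${visitorId}`;
  let h = 0;
  for (let i = 0; i < key.length; i++) {
    h = (h * 31 + key.charCodeAt(i)) >>> 0; // keep as an unsigned 32-bit value
  }
  return h / 2 ** 32;
}

// Each experiment owns a disjoint slice of [0, 1), so a visitor is exposed
// to at most one experiment of the group.
function assign(visitorId: string, groupId: string, experiments: Experiment[]): string | null {
  const b = bucket(visitorId, groupId);
  let upperBound = 0;
  for (const exp of experiments) {
    upperBound += exp.trafficShare;
    if (b < upperBound) return exp.id;
  }
  return null; // visitor sees none of the group's experiments
}

// Example: the two pricing experiments from earlier never overlap for a visitor.
const pricingGroup: Experiment[] = [
  { id: "loyalty-discount", trafficShare: 0.5 },
  { id: "product-promotion", trafficShare: 0.5 },
];
console.log(assign("visitor-123", "pricing", pricingGroup)); // one id, never both
```

The trade-off is visible in the sketch: each experiment placed in a group only receives its own slice of the group’s traffic, which is why exclusion should be used sparingly, as discussed next.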

How and when should we use Mutually Exclusive Experiments?

We don’t recommend setting up all experiments to be mutually exclusive because it reduces the number of visitors available for each experiment. This means it will take longer to achieve significant results, and the tests will have less power to detect real effects.

The best process is to identify the different kinds of interactions you may have and compile them in a list. If we continue with the cumulative promotion example from earlier, we could create two M2E lists: one for user interface experiments and another for customer loyalty programs. This strategy will avoid negative interactions between experiments that are likely to overlap, but doesn’t waste traffic on hypothetical interactions that don’t actually exist between the two lists.
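
Expressed as data, the two lists described above might look like this (a sketch with purely illustrative experiment names):

```typescript
// Experiments inside a group are made mutually exclusive; experiments from
// different groups can still be shown to the same visitor.
const exclusionGroups: Record<string, string[]> = {
  userInterface: ["background-color-test", "font-color-test"],
  loyaltyPricing: ["loyalty-discount-test", "product-promotion-test"],
};

for (const [group, experiments] of Object.entries(exclusionGroups)) {
  console.log(`${group}: a visitor enters at most one of ${experiments.join(", ")}`);
}
```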

What about data quality?

With the help of an M2E, we have prevented any functional issues that may arise due to interactions, but you might still have concerns that the data could be compromised by subtle interactions between tests.

Would an upstream winning experiment induce false discoveries in downstream experiments? Conversely, would a bad upstream experiment make you miss an otherwise winning downstream experiment? Here are some points to keep in mind:

  • Remember that roughly eight tests out of 10 are neutral (show no effect), so most of the time you can’t expect an interaction effect if there is no effect in the first place.
  • In the case where an upstream test does have an effect, the affected visitors are still randomly assigned to the downstream variations. This evens out the effect, allowing the downstream experiment to correctly measure its potential lift. The average conversion rate following an impactful upstream test will be different, but this does not prevent the downstream experiment from correctly measuring its own impact (the toy simulation after this list illustrates the point).
  • Remember that the statistical test is there to account for any drift in the random split process. The drift we’re referring to here is the fact that more impacted visitors from the upstream test could end up in a given variation, creating the illusion of an effect on the downstream test. The gain probability estimation and the confidence interval around the measured effect inform you that there is some randomness in the process. In fact, the upstream test is just one example among a long list of possible interfering events, such as visitors using different computers, different connection quality, etc.
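
To make the second point concrete, here is a toy simulation. The rates are pure assumptions chosen for illustration, not figures from any study; it simply shows that a genuine upstream effect, spread evenly over both downstream arms by random assignment, leaves the measured downstream lift essentially unbiased.

```typescript
// Toy simulation with assumed numbers, for illustration only.
const N = 200_000;
const baseRate = 0.05;        // baseline conversion rate (assumption)
const upstreamLift = 0.01;    // upstream variation adds +1 point (assumption)
const downstreamLift = 0.005; // true downstream effect we want to recover (assumption)

const counts = { control: 0, variant: 0 };
const conversions = { control: 0, variant: 0 };

for (let i = 0; i < N; i++) {
  const seesUpstreamVariant = Math.random() < 0.5;   // upstream 50/50 split
  const seesDownstreamVariant = Math.random() < 0.5; // independent downstream split
  const p = baseRate
    + (seesUpstreamVariant ? upstreamLift : 0)
    + (seesDownstreamVariant ? downstreamLift : 0);
  const arm = seesDownstreamVariant ? "variant" : "control";
  counts[arm]++;
  if (Math.random() < p) conversions[arm]++;
}

const measuredLift =
  conversions.variant / counts.variant - conversions.control / counts.control;
// The upstream lift lands in both downstream arms equally, so the measured
// difference stays close to the true downstream effect (~0.005).
console.log("measured downstream lift:", measuredLift.toFixed(4));
```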

All of these theoretical explanations are supported by an empirical study from the Microsoft Experiment Platform team. This study reviewed hundreds of tests on millions of visitors and saw no significant difference between effects measured on visitors that saw just one test and visitors that saw an additional upstream test.

Conclusion

While experiment interaction is possible in a specific context, there are preventative measures that you may take to avoid functional loss. The most efficient solution is the Mutually Exclusive Experiment, allowing you to eliminate the functional risks of simultaneous experiments, make the most of your traffic and expedite your experimentation process.

References:

https://www.microsoft.com/en-us/research/group/experimentation-platform-exp/articles/a-b-interactions-a-call-to-relax/