During an experimentation campaign, it can be frustrating to keep showing visitors an underperforming variation for the entire duration of a test when other variations are clearly doing better. We’re familiar with this pain point at AB Tasty, which is why we help our clients minimize the ‘waste’ of traffic inherent in every test with our popular Dynamic Traffic Allocation feature, built on Thompson Sampling and a Bayesian statistical model.
This approach solves the tricky ‘multi-armed bandit’ problem by balancing data exploitation and exploration to optimize experiences continuously and quickly. We go above and beyond simply helping clients decide what to do after an A/B test: we offer an automated solution for when decisions need to be made continuously, when time is short, or when you’re working in a constantly changing environment.
Imagine running a promotional campaign during the holiday season that will only be available for a week. You have variations to test, but the goal is optimizing revenue, not necessarily reaching statistical significance. Or suppose a global pandemic is impacting your business results and forcing you to communicate in new ways. Again, you have variations to test, but the goal is optimizing the communication ASAP, not running a classical A/B test.
Dynamic Allocation has been an AB Tasty capability for some time now. However, we have recently streamlined the whole process to make it more consistent and easier to activate when creating your campaign.
What is dynamic traffic allocation?
Dynamic traffic allocation consists of using an algorithm to modify the quantity of traffic sent to each live test variation.
The dynamic traffic allocation algorithm detects the highest-performing variation and sends more traffic to it.
Why is it useful?
It’s useful for limiting the loss of conversions during a test (called ‘regret’), which occurs when part of your website’s traffic is sent to a variation that turns out not to be the winner.
Let’s look at the following A/B test: ConversionRateA = 1%, ConversionRateB = 1.5%, carried out with steady traffic of 10,000 visitors per variation.
The regret of the test is: r = 10,000 × (0.015 − 0.01) = 50 conversions lost. Over the duration of the test, we could have had 300 conversions (20,000 × 0.015), but the test made us lose 50. There would therefore only have been 250 conversions during this time period.
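For readers who want to check the arithmetic, here is the same regret calculation as a short Python snippet (the variable names are ours, purely for illustration):

```python
visitors = 10_000          # per variation, steady 50/50 split
rate_a, rate_b = 0.01, 0.015

conversions_a = visitors * rate_a   # 100 conversions on A
conversions_b = visitors * rate_b   # 150 conversions on B
best_case = 2 * visitors * rate_b   # 300 if everyone had seen B

# Regret: conversions lost by sending half the traffic to the loser.
regret = best_case - (conversions_a + conversions_b)

print(conversions_a + conversions_b, best_case, regret)  # 250.0 300.0 50.0
```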
Of course, we can only make this type of calculation after a test is over when we have the exact figures for the conversion rates. However, this doesn’t mean nothing can be done to limit wasted traffic during a test…
How does dynamic traffic allocation work?
The solution to the above problem is to modify the test’s traffic allocation so as to send fewer visitors to ‘bad’ variations and more towards the ‘good’ variations.
Watch out: doing this manually is very risky because you might invalidate the results. However, there are algorithms that can channel traffic so as to minimize a test’s regret while still identifying the winning variation.
We’ve chosen the most reliable algorithm for the job, based on the following idea: we use the uncertainty of the conversion rate measurements to strike a compromise between ‘exploration’ and ‘exploitation.’ We ‘explore’ a variation when we send it traffic even though the initial readings don’t name it the winner, since we know those first conclusions aren’t reliable. We ‘exploit’ when we send traffic to the variation the collected data deems the winner; this is how we avoid losing too many conversions (assuming it really is the true winner).
Naturally, these two goals pull against each other: exploration means losing conversions, and exploitation means taking a risk if the apparent winner isn’t the real one! It’s therefore critical to model the uncertainty of the measurements accurately, and then to find the right compromise between exploration and exploitation.
We take the uncertainty of each variation’s conversion rate measurement into account using probability distributions. These curves show where the true conversion rate value is most likely to lie: the higher the curve at a given point on the X-axis, the more likely it is that the corresponding value is the real conversion rate.
Here’s one example:
Variation A has 7 successes out of 600 visits (black curve); variation B has 27 successes out of 600 visits (red curve). The situation is clear: variation A’s conversion rate is most likely between 0% and 2%, while variation B’s is most likely between 2.5% and 7%. Since these intervals are distinct, even though we can’t be certain of the exact measurements, we can say with near certainty that B is the winning variation: the curves don’t overlap, so there’s little room for doubt.
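If you’d like to reproduce curves like these yourself, here is a minimal sketch using SciPy, assuming a uniform Beta(1, 1) prior for each conversion rate (a standard Bayesian choice; the article doesn’t specify AB Tasty’s exact model):

```python
from scipy.stats import beta

# Posterior over each variation's true conversion rate, assuming a
# uniform Beta(1, 1) prior (our assumption, for illustration only).
def posterior(successes, visits):
    return beta(successes + 1, visits - successes + 1)

post_a = posterior(7, 600)   # black curve: 7 successes / 600 visits
post_b = posterior(27, 600)  # red curve: 27 successes / 600 visits

# 95% credible intervals: where most of each curve's mass sits.
print(post_a.interval(0.95))  # roughly (0.006, 0.023): ~0.6% to 2.3%
print(post_b.interval(0.95))  # roughly (0.031, 0.064): ~3.1% to 6.4%
```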
Here’s another example:
Variation A has 7 successes out of 300 visits (black curve); variation B has 14 successes out of 400 visits (red curve). The simple conversion rate calculation gives ConversionRateA = 2.33% and ConversionRateB = 3.5%. There seems to be a difference, so we’re tempted to declare variation B the winner, but that would be premature. Looking at the probability distributions makes the uncertainty of these measurements easier to spot: the two curves overlap, so there’s still room for doubt.
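That doubt can be quantified by sampling from each posterior and counting how often A actually beats B. The sketch below makes the same Beta(1, 1) prior assumption as above and is ours, not AB Tasty’s production code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Posterior samples for each variation's true conversion rate.
samples_a = rng.beta(7 + 1, 300 - 7 + 1, size=n)    # A: 7 / 300 visits
samples_b = rng.beta(14 + 1, 400 - 14 + 1, size=n)  # B: 14 / 400 visits

# Fraction of draws where A beats B despite B's higher observed rate;
# this quantifies the overlap between the two curves.
print((samples_a > samples_b).mean())  # roughly 0.2: real room for doubt
```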
The ‘exploration/exploitation’ compromise
Let’s continue with the last example. Where the two curves cross, it’s just as plausible that ConversionRateA equals 3% as that ConversionRateB equals 3%. With this type of approach, we can calculate the probability that A is the winning variation, even though for now B seems to be the better one. These calculations are what let us find the right balance between exploration and exploitation: an algorithm such as Thompson Sampling estimates the usefulness of exploring and the risk associated with exploiting (a minimal sketch follows the list below).
This algorithm:
- is guaranteed, over time, to find the winning variation
- is guaranteed to lose fewer conversions than steady (uniform) traffic allocation
- will find the winning variation more quickly than steady traffic allocation when there are more than two variations. The more variations there are, the more likely it is that a few of them are (very) bad. These poor performers are quickly identified and given less traffic than the better ones, whereas with steady traffic allocation they would keep receiving a non-negligible share of the traffic and wasting conversions.
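To make the behavior concrete, here is a minimal Thompson Sampling simulation in Python, using Beta(1, 1) priors, Bernoulli conversions, and made-up ‘true’ rates that the algorithm doesn’t know. It’s a sketch of the technique, not AB Tasty’s implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 'true' conversion rates, unknown to the algorithm.
# Variation 1 (1.5%) is the real winner.
true_rates = np.array([0.010, 0.015, 0.008])
successes = np.zeros(3)
failures = np.zeros(3)

for _ in range(20_000):
    # Thompson Sampling step: draw one plausible rate per variation
    # from its Beta posterior, then send this visitor to the highest draw.
    draws = rng.beta(successes + 1, failures + 1)
    chosen = int(np.argmax(draws))
    # Simulate the visitor converting (or not) at the true rate.
    converted = rng.random() < true_rates[chosen]
    successes[chosen] += converted
    failures[chosen] += 1 - converted

print(successes + failures)  # visitors sent to each variation
```

Run it and the bulk of the 20,000 simulated visitors typically ends up on the winning variation, while the weaker ones are starved of traffic after a short exploration phase, which is exactly the regret-limiting behavior described above.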
How can you use dynamic traffic allocation?
Using dynamic traffic allocation with AB Tasty is very simple: all you have to do is click the “Change to Dynamic allocation” button and choose the Primary KPI to optimize. In the beginning, traffic is allocated uniformly; it is then automatically reallocated so as to identify the winning variation and maximize the conversion gain.
Once the test is launched, everything stays the same as with a classic test (with uniform traffic allocation). Of course, all statistical measurements take the dynamic allocation into account. Interpreting test results is therefore exactly the same.
Why use Dynamic Allocation?
Thanks to Dynamic Allocation, users who are short on time can let the algorithm route traffic to the best variation automatically, without having to intervene in the campaign. This secures quicker results and better ROI!
Dynamic Allocation can be very useful in the following cases:
- When users want to optimize micro-conversions that are expected to occur shortly after the visitor has been exposed to a variation. On e-commerce websites, for instance, some clients prefer the “add to cart” CTA over the transaction event as the primary goal.
- When users have a very short time to run a test:
  - Running holiday-season promotions with a few variations of the promotional messaging, where the business goal is maximizing revenue during this short period of time.
  - During the Covid crisis, for example, a business might need to communicate with customers quickly and want the winning variation in use ASAP.
- When a page that needs to be tested has very low traffic. Meager traffic can make it difficult to reach statistical significance, but this doesn’t have to get in the way of optimizing the customer experience; Dynamic Allocation is a logical choice here.
- When there are a lot of variations to test (more than 6), Dynamic Allocation enables users to quickly identify the worst-performing variations so the test can continue on the most relevant ones.
Interested in exploring AB Tasty, Dynamic Allocation, and other AI-powered capabilities such as Engagement Level and Content Interest to optimize your brand & product experiences? Contact us.