AB Tasty’s mission is to make A/B testing and content customisation accessible to as many people as possible. The feature that best reflects this proposition is without a doubt our visual editor, which allows you to directly modify your website’s pages, even without any technical know-how. However, bringing your ideas to life is only the first step of your testing campaign: you must then analyse and use the results to make your website better.
True to our mission, we’ve just launched a new feature within AB Tasty called Clever Stats. The result of 12 months of development work, this new method of calculation provides you with greater flexibility to make decisions faster, with less risk of identifying a version of a webpage as being effective when it’s actually not performing so well. AB Tasty is the first testing solution to rely on Bayesian statistics in order to bring you this flexibility.
We will go even further with our promise of simplicity. Having freed marketing teams from technical constraints when modifying their pages and customer experience, we are now liberating them from statistical constraints, making it easier for them to use the data collected and allowing them to take action with absolute confidence. – Remi Aubert, co-founder of AB Tasty
Don’t just watch – act now!
To understand what drove us to develop this new methodology, it’s important to first understand that most existing testing solutions are based on classical statistical methods, such as the Student’s t-test or the z-test. These methods have been handed down to us from the past and have proven themselves effective in sectors such as the pharmaceutical or agricultural industries, where data are not available in real time. However, they are less suited to the current needs of online businesses. This is mostly because these methods require a strict protocol, known as fixed-horizon testing, where a sample size must be determined before the test begins and no conclusions can be drawn before the full sample has been collected. Check out our own free Sample Size Calculator.
Applied to A/B testing for websites, mobile websites or native applications, these older methods no longer match how users actually work. With many marketers and online business owners requiring access to real-time information, asking them to calculate a sample size a priori and not look at the results before reaching it is inconceivable. Moreover, calculating the sample size requires a very subjective input (the minimum detectable effect), which is too often set arbitrarily. Update: we are offering a Minimum Detectable Effect Calculator to help you figure out this value based on your traffic.
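To make the fixed-horizon constraint concrete, here is a minimal sketch of the standard two-proportion sample-size formula that calculators of this kind are typically based on. This is a generic illustration, not AB Tasty's actual calculator; the function name and default parameters are assumptions:

```python
import math
from statistics import NormalDist

def sample_size_per_variation(baseline_rate, relative_mde, alpha=0.05, power=0.80):
    """Approximate per-variation sample size for a two-sided,
    two-proportion z-test (illustrative sketch)."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)  # rate implied by the relative MDE
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.80
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)
```

With a 3% baseline conversion rate and a 10% relative minimum detectable effect, this yields tens of thousands of visitors per variation, which illustrates why being forbidden to look at the results until the full sample is reached feels so constraining.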
Instead, users must be able to quickly interpret the data and act upon them as soon as they appear promising. Why wait when usable data have already been collected? Regularly looking at the results, or “data peeking”, is very common among users, but it is totally contrary to the prerequisites of fixed-horizon statistical methods. This behaviour often leads to misunderstandings and doubts about the very significance of the tests, for example when users find that the reported reliability does not always move in one direction (e.g., your solution indicates that the reliability of the test is at 95% one day, but indicates otherwise the next). While it is up to A/B testing solutions to educate people about these methodological constraints, we at AB Tasty believe that users should not have to conform to them; rather, solutions should adapt and innovate so as to offer greater flexibility.
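To see why peeking is a problem for fixed-horizon methods, consider this hedged illustration: an A/A simulation where both variations share the same true conversion rate, but a z-test is checked at several interim points. All names and parameters below are invented for the sketch; by construction, any “significant” result is a false positive, and repeated checks push the observed error rate well above the nominal 5%:

```python
import math
import random
from statistics import NormalDist

def z_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def peeking_false_positive_rate(runs=500, peeks=(1000, 2000, 3000, 4000, 5000),
                                rate=0.05, seed=0):
    """A/A simulation: both variations have the same true rate, so every
    'significant' call is a false positive. Peeking at each checkpoint
    inflates the error rate above the nominal 5%."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(runs):
        conv_a = conv_b = n = 0
        for target in peeks:
            while n < target:
                conv_a += rng.random() < rate
                conv_b += rng.random() < rate
                n += 1
            if z_test_p_value(conv_a, n, conv_b, n) < 0.05:
                false_positives += 1
                break  # the user would stop the test here
    return false_positives / runs
```

With five peeks per test, the simulated false-positive rate typically lands well above the 5% the test nominally promises.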
Fast and sound decision making
We developed Clever Stats in order to meet marketers’ and online business owners’ needs for immediate access to information and fast decision-making while ensuring the reliability of the results. Based on Bayesian statistics, this new statistical calculation engine allows for rapid action while minimising the risk of ‘false positives’ (declaring a variation as better when it is not).
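As an illustration of the Bayesian idea, the probability that a variation beats the original can be estimated by drawing from Beta posteriors over each conversion rate. This is a generic textbook sketch, not AB Tasty's actual engine; the function name, uniform Beta(1,1) priors, and sample counts are all assumptions:

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, samples=100_000, seed=42):
    """Monte Carlo estimate of P(rate_B > rate_A) under Beta(1,1) priors.
    Each posterior is Beta(1 + conversions, 1 + non-conversions)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(samples):
        rate_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_b > rate_a
    return wins / samples
```

Unlike a p-value, this quantity can be read directly as “the chance that B is really better”, and it remains meaningful no matter when you look at it.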
Furthermore, we ran both the old and the new methodology in parallel on several tests so as to compare their conclusions – a kind of A/B test applied to our own algorithms ;-). And the results are beyond question: the new methodology wins hands down.
- First of all, you do not have to determine your sample size beforehand. Start your test and, as soon as Clever Stats identifies a significant result, it can be relied upon.
- In practice, your trust in the given figures will quickly grow and your tests will take less time.
- You are free to look at the results whenever you want, and the additional information provided, such as the confidence intervals for the conversion rates and the absolute gain, helps you avoid ‘false positives’.
- You also won’t find any sporadic changes in statistical reliability.
Our reporting interface changes accordingly: more data is displayed, such as the medians and measurement intervals of the conversion rate, as well as the gain interval per variation. This information supports better decision-making by making you aware of any uncertainty zones that are identified. For more detailed reading on this enriched reporting, see the article in our online help.
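For readers curious how a median and interval for a conversion rate can be produced, here is a generic sketch (not our reporting code) of a Bayesian credible interval computed from Monte Carlo quantiles of a Beta posterior; the function name, prior, and sample count are illustrative assumptions:

```python
import random

def credible_interval(conversions, visitors, level=0.95, samples=50_000, seed=1):
    """Monte Carlo credible interval and median for a conversion rate,
    under a Beta(1,1) prior (illustrative sketch)."""
    rng = random.Random(seed)
    draws = sorted(
        rng.betavariate(1 + conversions, 1 + visitors - conversions)
        for _ in range(samples)
    )
    lo = draws[int(samples * (1 - level) / 2)]
    hi = draws[int(samples * (1 + level) / 2)]
    median = draws[samples // 2]
    return lo, median, hi
```

A wide interval signals an uncertainty zone where more data is needed; a narrow one means the measured rate can be trusted.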
This new statistical approach also allows for dynamic resource allocation, which we are working on and will soon be available. With conventional statistical methods, each test has a cost: every time you direct a visitor to a lesser variation, you lose conversions. The flexibility afforded by our new approach instead enables the implementation of algorithms that automatically assign less traffic to the least effective variations, thus maximising your gains during experiments. Stay tuned!
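Dynamic allocation of this kind is commonly implemented with Thompson sampling: each new visitor is routed to the variation whose conversion rate, sampled from its Beta posterior, comes out highest. The sketch below is a generic illustration under invented conversion rates, not our forthcoming algorithm:

```python
import random

def thompson_assign(stats, rng):
    """Pick the variation whose sampled posterior rate is highest.
    stats maps a variation name to [conversions, visitors]."""
    draws = {
        name: rng.betavariate(1 + conv, 1 + n - conv)
        for name, (conv, n) in stats.items()
    }
    return max(draws, key=draws.get)

rng = random.Random(7)
true_rates = {"A": 0.030, "B": 0.040}       # hypothetical true rates
stats = {name: [0, 0] for name in true_rates}
for _ in range(20_000):
    pick = thompson_assign(stats, rng)
    stats[pick][1] += 1                      # one more visitor
    stats[pick][0] += rng.random() < true_rates[pick]  # maybe a conversion
```

As evidence accumulates, the weaker variation receives less and less traffic, so fewer conversions are lost during the experiment.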