6 testing methods that are guaranteed to fail

With over five years’ experience, AB Tasty’s teams have witnessed the implementation of thousands of tests, optimisation ideas, ways of formulating hypotheses and ways of analysing results… Among them are some real gems we’re happy to share so they don’t happen again on your watch!

1. Testing useless elements


There is no point spending time testing elements whose impact on your business appears negligible. Very few visitors to your website will interact with elements that have weak visibility, and in most cases your tests risk never reaching statistical reliability. We have already seen users test redesigns of their website footer, or different confirmation messages for resending a forgotten password, which, unsurprisingly, produced no results. Don’t waste your time. Instead, concentrate on areas that are worth the effort, either because they are located at a key stage in your conversion funnel or because they receive a high concentration of web traffic. Rely on both quantitative and qualitative data to identify your testing opportunities.

2. Testing unlikely scenarios


AB Tasty lets you target precisely the pages and users you want to feature in your tests. Targeting options are very fine-grained, which allows us to respond to a variety of test scenarios and to the flexibility requirements of our clients. For instance, if a large retail chain wishes to test the impact of reduced delivery charges in urban areas where it has few physical collection points, running tests geolocated to certain large cities is an interesting option to consider. But targeting pages and user profiles reduces the size of the sample tested. The more precise the targeting, the longer it will take to collect enough visitors to obtain statistical reliability. Your tests will take longer and you risk losing patience. Try our sample size calculator for statistical confidence.
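To make the sample-size problem concrete, here is a minimal sketch of the standard two-proportion sample-size formula (the exact method behind any given calculator, including ours, may differ):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, relative_lift, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect a relative lift over the
    baseline conversion rate with a two-sided two-proportion z-test."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for power = 0.8
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# A narrowly geo-targeted test: 2% baseline, hoping to detect a 10% relative lift.
print(sample_size_per_variant(0.02, 0.10))  # roughly 80,000 visitors per variant
```

At a 2% baseline, even a 10% relative improvement demands tens of thousands of visitors per variant; narrow the targeting further and the wait grows accordingly.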

To help you make a sensible decision as quickly as possible, we recently changed our statistical reliability calculation to one based on a Bayesian approach to statistics. Even so, keep in mind that twenty conversions will never be enough to form a definitive conclusion. Tests on rare scenarios amount to a waste of time and should be abandoned.
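For intuition only (this is not our exact computation), here is a sketch of the Bayesian reading with Beta posteriors. It shows why a handful of conversions can look decisive without being so:

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=100_000, seed=42):
    """Monte Carlo estimate of P(rate_B > rate_A) under uniform Beta(1, 1)
    priors, i.e. Beta posteriors on each variant's conversion rate."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += b > a
    return wins / draws

# Only 20 conversions in total: B "beats" A about 80% of the time, which
# sounds promising but is nowhere near a definitive conclusion.
print(prob_b_beats_a(conv_a=8, n_a=500, conv_b=12, n_b=500))
```

With so few conversions, both posteriors still span roughly 1% to 4%, so an 80% “chance to win” mostly reflects noise.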

3. Testing without taking context into account


The marketing decisions you take influence not just the number of visitors to your site, but also their behaviour. Traffic-boosting campaigns, private sales and seasonal sales are all events that have a direct impact on the results of a test and the conclusions drawn from it. Since visitors are distributed randomly and equally among the variants, you might initially assume that the influence of your marketing strategy has been eliminated, because all variants are affected equally and the analysis focuses on differences in proportion. This is partly true, but you must not forget that these events can profoundly change the behaviour of your sample. For instance, during private sales, internet users are more motivated than usual by prices and the possibility of finding a good deal. Their tendency to complete a purchase will be far stronger and will certainly amplify the apparent impact of the modifications you plan to make.

Fortunately, it is easy to avoid this trap if you know the workings of your acquisition strategy well: either do not test during these periods, or target your tests at a more representative population, for example by excluding visitors who reach your site through an ongoing acquisition campaign, or by limiting the test to certain products. Another option is to filter the results after the fact to exclude users displaying unusual behaviour, as sketched below.
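As a sketch of that last option, assuming a hypothetical raw export of per-visitor records (the field names below are invented for illustration; adapt them to whatever your analytics or testing tool provides):

```python
# Hypothetical raw visitor export: one record per visitor.
visitors = [
    {"variant": "A", "converted": False, "utm_campaign": "flash_sale"},
    {"variant": "B", "converted": True,  "utm_campaign": None},
    # ... one record per visitor ...
]

# Exclude traffic driven by the ongoing acquisition campaign before analysing.
representative = [v for v in visitors if v["utm_campaign"] != "flash_sale"]

for variant in ("A", "B"):
    group = [v for v in representative if v["variant"] == variant]
    if group:
        rate = sum(v["converted"] for v in group) / len(group)
        print(f"{variant}: {rate:.2%} over {len(group)} visitors")
```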

4. Testing to find out whether you or your colleague has the best idea

Testing is often presented as the best way to settle divergent opinions, put an end to debates and save time by avoiding endless back-and-forth. But is it really effective if these opinions are all based on bad ideas, or if the matter at the heart of the argument has no impact on conversion rates? Deciding between them still means you have wasted your time and monopolised web traffic that you could have exploited more intelligently. So, yes, testing does allow you to choose from among several alternatives, but to obtain results, you must still ensure these alternatives are justified by tangible data.

5. Testing a redesign after the fact


Personal motivations behind a test can conflict with the company’s objectives. It is unfortunately common to hear “We’re going to test a redesign to see if it’s more effective” and then to see the redesign kept in production despite its underperformance. And, of course, when you have spent time developing a new version of your website, it is hard to admit that it turns out to be far less effective than the previous one. It is normal not to want to throw away your work.

To avoid such disappointments, when the stakes are high for a page, it is better to carry out incremental, step-by-step tests on small elements rather than overhauling the page completely in one go. Testing lends itself far more readily to a continuous improvement approach, which limits the cost of failures and allows you to attribute differences in performance to specific elements. If a variant completely different from the original does prove more effective, you will not be able to identify which modifications made the biggest contribution to the improvement. You may well be very happy, but you won’t really have learned much about your audience and why they are reacting more favourably. A great deal of uncertainty will remain, and it is highly probable that your version contains one or more modifications that are having a negative effect.

The only time this rule should be broken is when a site has traffic too low to lend itself to testing small elements here and there; in that case, the time needed to obtain statistically reliable results is too long. It is then better to test disruptive modifications that are liable to bring significant gains, and afterwards to retest certain sub-elements to try to refine the results.

6. Testing elements that are too far away from your conversion page

Let’s be clear: a test on your site’s homepage can impact your overall conversion rate (e.g. a purchase on an e-commerce site, a completed form on a lead generation site, etc.). But this will rarely be the case: you are testing an element that plays only a small role in the buyer’s decision, because it is too far from the page where the conversion takes place. It is therefore essential to define intermediate performance indicators adapted to each type of page. Each page has a precise role that makes its own contribution to the final conversion: the objective of a category page is to drive visits to product pages; the objective of a product page is to add items to the basket, etc. Each of these is a micro-conversion that leads visitors along the conversion pathway. These secondary objectives give you a finer sense of a variant’s performance and are generally easier to improve than your main objective. Since these micro-conversions happen more frequently, you reach statistical reliability more quickly on the question of whether your modifications have really had an impact.

In short, if you want quick results on your transactions and revenue, concentrate on tests linked to the purchasing pathway. If you identify obstacles to conversion there, you should achieve significant gains, but you will eventually hit ceilings that are difficult to break through. At that point, optimising micro-conversions becomes a key part of the continuous improvement strategy.
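To illustrate why micro-conversions reach reliability sooner, compare the visitors needed to detect the same relative lift at a rare versus a frequent conversion rate, using the same standard two-proportion formula sketched earlier (figures are illustrative only):

```python
from math import ceil
from statistics import NormalDist

def n_per_variant(p, lift, alpha=0.05, power=0.8):
    """Visitors per variant to detect a relative `lift` over rate `p`
    (same two-proportion formula as the sketch in section 2)."""
    p2 = p * (1 + lift)
    za, zb = NormalDist().inv_cdf(1 - alpha / 2), NormalDist().inv_cdf(power)
    pooled = (p + p2) / 2
    num = (za * (2 * pooled * (1 - pooled)) ** 0.5
           + zb * (p * (1 - p) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(num / (p2 - p) ** 2)

# Detecting the same 10% relative lift:
print(n_per_variant(0.02, 0.10))  # rare final purchase at 2%: ~80,000 per variant
print(n_per_variant(0.30, 0.10))  # frequent micro-conversion at 30%: ~3,800 per variant
```

Roughly a twenty-fold difference in required traffic, which is why intermediate indicators deliver answers so much faster.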

Conclusion

The main stumbling block common to all these examples is a disconnect between business objectives, the elements tested and the indicators defined in the testing tool. Successful testing relies on keeping these three elements coherent. Testing also requires a strict yet simple methodology: formulate a coherent hypothesis, test it on a representative, sufficiently large sample of your audience, interpret the results against the right objectives, and accept that the hypothesis may well have been incorrect.

Anthony Brebion
Anthony is Product Marketing Manager @ABTasty. He was previously an SEO consultant and worked for several years in digital advertising agencies. He is now an A/B testing and optimisation evangelist.
