
A/B Test Hypothesis Definition, Tips and Best Practices

Incomplete, irrelevant or poorly formulated A/B test hypotheses are at the root of many neutral or negative tests.

It’s tempting to imagine that A/B testing to improve your e-commerce site’s performance is simple: change the color of the “add to cart” button, for example, and watch your conversion rate climb. In practice, A/B testing is rarely that simple.

Unfortunately, implementing random changes to your pages won’t always significantly improve your results – there should be a reason behind your web experiments.

This brings us to the following question: how do you know which elements to experiment with, and how can you create an effective A/B test hypothesis?

Determine the problem and the hypothesis

Far too few people question the true origins of the success (or failure) of the changes they put in place to improve their conversion rate.

However, it’s important to know how to determine both the problem and the hypothesis that will allow you to obtain the best results.

Instead of searching for a quick “DIY” solution, it’s often more valuable in the long term to take a step back and do two things:

  1. Identify the real problem – What is the source of your poor performance? Is it a high bounce rate on your order confirmation page, too many single-page sessions, a low-performing checkout CTA or something more complex?
  2. Establish a hypothesis – This should propose an explanation for the root of the problem. For example, a strong A/B testing hypothesis could be: “Our customers do not immediately understand the characteristics of our products when they read the pages on our e-commerce site. Making this information more visible will increase clicks on the ‘add to cart’ button.”

The second step may seem very difficult because it requires a capacity for introspection and a critical look at the existing site. Nevertheless, it’s crucial for anyone who wants to see their KPIs improve drastically.

If you’re feeling a bit uncomfortable with this type of uncertainty around creating an effective hypothesis, know that you’ve come to the right place.

What is an A/B test hypothesis?

Technically speaking, the word hypothesis has a very simple definition:

“A proposal that seeks to provide a plausible explanation of a set of facts and which must be tested against experience or verified in its consequences.”

The first interesting point to notice in this definition is “the set of facts to be explained.” In A/B testing, a hypothesis must always start with a clearly identified problem.

A/B tests should not be done randomly, or you risk wasting time.

Let’s talk about how to identify the problem:

  • Web analytics data – While this data does not explain digital consumers’ behavior exactly, it can highlight conversion problems (identifying abandoned carts, for example) and help prioritize the pages in need of testing.
  • Heuristic evaluation and ergonomic audit – These analyses allow you to assess the site’s user experience at a lower cost using an analysis grid.
  • User tests – This qualitative data is limited by sample size but can be rich in information that quantitative methods would not detect. User tests often reveal problems with how visitors understand the site’s ergonomics. Even if the experience can be painful given the potential for negative remarks, it allows you to gather qualified data with precise insights.
  • Eye tracking or heatmaps – These methods provide visibility into how people interact with items within a page – not between pages.
  • Customer feedback – As well as analyzing feedback, you can implement tools such as customer surveys or live chats to collect more information.

The tactics above will help you highlight the real problems that impact your site’s performance and save you time and money in the long run.
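To make the web analytics tactic above concrete, here is a minimal sketch in Python of how one of these conversion problems, cart abandonment, might be quantified. The event names and counts are hypothetical placeholders; substitute whatever your analytics tool actually exports.

```python
# Minimal sketch: estimating cart abandonment from web analytics event counts.
# The counts below are made-up placeholders, not real data.

def abandonment_rate(carts_created: int, purchases_completed: int) -> float:
    """Share of created carts that never turned into a purchase."""
    if carts_created == 0:
        return 0.0
    return 1 - purchases_completed / carts_created

# Example: 1,200 carts created, 300 purchases completed.
rate = abandonment_rate(carts_created=1200, purchases_completed=300)
print(f"Cart abandonment rate: {rate:.0%}")  # -> Cart abandonment rate: 75%
```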

A/B test hypothesis formula

At first, formulating an A/B test hypothesis may seem almost too simple: you focus on one change and the effect it produces. You should always respect the following format: if I change this, it will cause that effect. For example:

Changing (the element being tested) from ___________ to ___________ will increase/decrease (the defined measurement).

At this stage, this formula is only a theoretical assumption that will need to be proven or disproven, but it will guide you in solving the problem.

An important point, however, is that the impact of the change you want to make must always be measurable in quantifiable terms (conversion rate, bounce rate, abandonment rate, etc.).

Here are two examples of hypotheses phrased according to the formula explained above and that can apply to e-commerce:

  1. Changing our CTA from “BUY YOUR TICKETS NOW” to “TICKETS ARE SELLING FAST – ONLY 50 LEFT!” will improve our sales on our e-commerce site.
  2. Shortening the sign-up form by deleting optional fields such as phone and mailing address will increase the number of contacts collected.

In addition, when you think about the solution you want to implement, include the psychology of the prospect by asking yourself the following:

What psychological impact could the problem cause in the digital consumer’s mind?

For example, if your problem is a lack of clarity in the registration process that hurts purchases, the psychological impact could be that your prospect feels confused when reading the information.

With this in mind, you can begin to think concretely about the solution to correct this feeling on the client side. In this case, we can imagine that one fix could be including a progress bar that shows the different stages of registration.

Be aware: the psychological aspect should not be included when formulating your test hypothesis.

Once you have the results, you should be able to say whether the hypothesis is true or false. This is why we can only rely on concrete, tangible assumptions.
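As an illustration, here is a minimal sketch in Python of how a hypothesis like the ticket CTA example above could be judged once the test has run. A two-proportion z-test is one common way to make that call; the visitor and conversion counts below are made up.

```python
# Minimal sketch: deciding whether an A/B test hypothesis held, using a
# two-proportion z-test. The counts are hypothetical; plug in the numbers
# your testing tool reports.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (observed lift, two-sided p-value) for variant B vs. control A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Hypothesis: the scarcity CTA ("ONLY 50 LEFT!") increases purchases.
lift, p = two_proportion_z_test(conv_a=180, n_a=4000, conv_b=230, n_b=4000)
print(f"Observed lift: {lift:+.2%}, p-value: {p:.3f}")
# By convention, p < 0.05 supports the hypothesis; otherwise the test is
# neutral and becomes a learning experience rather than a win.
```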

Best practices for e-commerce optimization based on A/B hypotheses

There are many testable elements on your website. Looking into these elements and their metrics can help you create an effective test hypothesis.

We are going to give you some concrete examples of common areas to test to inspire you on your optimization journey:

HOMEPAGE

  • The header/main banner explaining the products/services that your site offers can increase customers’ curiosity and extend their time on the site.
  • A visible call-to-action appearing upon arrival will increase the chance visitors will click.
  • A very visible “about” section will build prospects’ trust in the brand when they arrive on the site.

PRODUCT SECTIONS

  • Filters save customers a lot of time by quickly showing them what they are looking for.
  • Highlighting a selection of the most popular products at the top of the sections is an excellent starting point for generating sales.
  • A “find out more” button or link under each product will encourage users to investigate.

PRODUCT PAGES

  • Product recommendations create a more personal experience for the user and help increase average order value.
  • A visible “add to cart” button will catch the prospect’s attention and increase the click rate.
  • An “add to cart and pay” button saves the customer time, as many customers make only one purchase per visit.
  • Adding social sharing buttons is an effective way of turning the product listing into viral content.

Want to start A/B testing elements on your website? AB Tasty is the best-in-class experience optimization platform to help you convert more customers by leveraging intelligent search and recommendations to create a richer digital experience – fast. From experimentation to personalization, this solution can help you achieve the perfect digital experience with ease.

CART PAGE

  • The presence of trust logos such as “Verified by Visa” enhances customer confidence in the site.
  • A very visible button/link to “proceed to payment” greatly encourages users to click.

PAYMENT

  • A single page for payment reduces the exit rate.
  • Paying for an order without registration is very much appreciated by new prospects, who are not necessarily inclined to share their personal information when first visiting the site.
  • Having visibility over the entire payment process reassures consumers and will nudge them to finalize their purchase.

These best practices allow you to build your A/B test hypotheses by comparing your current site with the suggestions above and seeing what directly impacts conversion performance.

The goal of creating an A/B test hypothesis

The end goal of creating an A/B test hypothesis is to quickly identify what will drive the best results. Whether or not your hypothesis “wins,” the test will still serve as a learning experience.

While defining your hypotheses can seem complex and methodical, it’s one of the most important ways for you to understand your pages’ performance and analyze the potential benefits of change.


How to Leverage Disruption in Experimentation | Ben Labay

Ben Labay outlines essential frameworks for a more strategic, tactical and disruptive approach to experimentation

With two degrees, in Evolutionary Behavior and in Conservation Research Science, Ben Labay spent a decade in academia, building a wide-ranging background in research, experimentation and technical data work.

Now CEO of experimentation and conversion optimization agency Speero, Ben describes his work in experimentation as his “geek-out” area: customer experience research and working with customer data.

At Speero, Ben scopes and runs research and testing program strategies for companies including Procter & Gamble, ADP, Codecademy, MongoDB, Toast and many others around the world.

AB Tasty’s VP Marketing Marylin Montoya spoke with Ben about how companies can create mechanisms not only to optimize but also to be more disruptive in their web experimentation to drive growth.

Here are some of the key takeaways from their conversation.

Consider a portfolio approach to managing experimentation

Inspired by Jim Collins and Jerry I. Porras’ book “Built to Last”, Ben discusses a framework the book provides on the ways a company can grow, based on best practices from 18 successful companies.

He identifies one big pillar that many organizations often neglect: experimentation. To tackle this, Ben suggests a portfolio-management approach to experimentation, made up of three portfolio tags that provide a spectrum of solutions, from iterative optimization to more disruptive change.

The first level consists of making small tweaks to a website based on customer feedback, such as improving layouts; the second includes more substantial changes, such as new content pieces.

But there is a bigger third level, which Ben describes as more “disruptive” and “innovative”: a brand-new product or pricing model, for example, that can serve as a massive learning experience.

With three different levels of change, it’s important to set a clear distribution of time spent on each level and have alignment among your team.

In the words of Ben, “Let’s put 20% of our calories over into iterating, 20% onto substantial and 20, 30 or 40% over on disruptive. And that map – that framework has been really healthy to use as a tool to get teams on the same page.”

For Ben, applying such a framework is key to getting all teams on the same page, as it helps ensure companies are not under-resourcing disruptive work and “big needle movers”. Velocity of work is important, he argues, but so is quality of ideas.

Let your goal tree map guide you 

Every A/B test or personalization campaign needs to be fed with good ingredients which determine the quality of the hypothesis. 

“Every agency, every in-house company researches. We do research. We collect data, we have information, we get insights and then they test on insights. But you can’t stop there.” Ben says. 

The trick is not to stop at the insights but to derive a theme from them. This allows companies to pinpoint underlying strengths and weaknesses and map them onto their OKRs.

For example, you may have a number of insights: a page is underperforming, users are confused about pricing, social proof gets skipped over. The key is to conduct a thematic analysis and look for patterns across these different insights.

Consequently, it’s important for companies to create a goal tree map to understand how things cascade down, to become more tactical and SMART about their goals, and to set their OKRs accordingly so they can organize and make sense of the vast amount of data.

When the time comes to set up a testing program, teams will have a strategic testing roadmap for a particular theme that links to these OKRs. This helps transform the metrics into more actionable frameworks. 

And at the end of each quarter, companies can evaluate their performance based on this scorecard of metrics and how the tests they ran during the quarter impacted these metrics.
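For illustration, here is one minimal way such a goal tree could be sketched in Python, using the example insights above. The structure and names are hypothetical placeholders, not Speero’s actual framework.

```python
# Minimal sketch of a goal tree: an OKR cascades down to research themes,
# which cascade down to testable hypotheses. All names are hypothetical.
goal_tree = {
    "okr": "Increase checkout conversion rate by 10% this quarter",
    "themes": [
        {
            "theme": "Pricing clarity",  # derived from clustered insights
            "insights": [
                "Users are confused about pricing",
                "Pricing page is underperforming",
            ],
            "tests": ["Simplify pricing table", "Add plan-comparison FAQ"],
        },
        {
            "theme": "Social proof visibility",
            "insights": ["Social proof gets skipped over"],
            "tests": ["Move testimonials above the fold"],
        },
    ],
}

# A quarterly scorecard can then roll up how each theme's tests moved the OKR.
for theme in goal_tree["themes"]:
    print(theme["theme"], "->", len(theme["tests"]), "tests planned")
```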

Build engagement and efficiency into your testing program strategy 

The main value prop of testing centers on making profit, but Ben advocates for a second value prop that revolves around how a business operates. This requires shifting focus to efficiency and how different teams across an organization work together.

Ben draws a parallel between the A/B testing industry and DevOps: when we refer to a culture of experimentation and being data-driven, we are borrowing from the DevOps cultural movement, a methodology focused on breaking down silos between development and operations teams to enhance collaboration and efficiency. “The whole idea is to optimize the efficiency of a big team working together,” Ben says.

This means organizations should take a hard look at their testing program and the components that make it up, which includes getting the right people behind it. It’s also about becoming more customer-centric and embracing failure.

Ben refers to this as the “programmatic side” of the program which serves as the framework or blueprint for decision making. It helps to answer questions like “how do I organize my team structure?” or “what is my meeting cadence with the team?”

Ultimately, it’s about changing and challenging your current process and transforming your culture internally by engaging your team in your testing program and in the way you’re using data to make decisions.

What else can you learn from our conversation with Ben Labay?

  • Ways to get out of a testing rut 
  • How to structure experimentation meetings to tackle roadblocks 
  • How experimentation relates to game theory 
  • The importance of adopting an actionable framework for decision making

About Ben Labay

Ben Labay combines years of academic and statistics training with customer experience and UX knowledge. Currently, Ben is the CEO at Speero. With two degrees in Evolutionary Behavior and Conservation Research Science (resource management), Ben started his career in academia, working as a staff researcher at the University of Texas focused on research and data modeling. This helped form the foundation for his current passion and work at Speero, which focuses on helping organizations make decisions using customer data.

About 1,000 Experiments Club

The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.