
Sample Ratio Mismatch: What Is It and How Does It Happen?

A/B testing can fall victim to a few types of experimental flaws.

Yes, you read that right – A/B testing is important for your business, but only if you have trustworthy results. To get reliable results, you must be on the lookout for errors that might occur while testing.

Sample ratio mismatch (SRM) is a term that is thrown around in the A/B testing world. It’s essential to understand its importance during experimentation.

In this article, we will break down what sample ratio mismatch means, when it is and is not a problem, why it happens, and how to detect it.

Sample ratio mismatch overview

Sample ratio mismatch is an experimental flaw where the expected traffic allocation doesn’t match the observed number of visitors in each test variation.

In other words, an SRM is evidence that something went wrong.

Being aware of sample ratio mismatch is crucial in A/B testing.

Now that you have the basic idea, let’s break this concept down piece by piece.

What is a “sample”?

The “sample” portion of SRM refers to the traffic allocation.

Traffic allocation refers to how traffic is split between the test variations. Typically, the traffic is split equally (50/50) during an A/B test: half of the visitors see the new variation and the other half see the control version.

This is how an equal traffic allocation will look for a basic A/B test with only one variant:

[Figure: equal traffic allocation between the control and one variant in an A/B test]

If your test has two or even three variants, the traffic is still typically allocated equally to each variant so that every version receives the same amount of traffic. An equal traffic allocation in an A/B/C test is split 33/33/33.

For both A/B and A/B/C tests, traffic can also be split unevenly, such as 60/40, 30/70 or 20/30/50. Although this is possible, it is not a recommended practice if you want accurate and trustworthy results from your experiment.

Even when you follow this best-practice guideline, equally allocated traffic does not eliminate the chance of an SRM. This type of mismatch can still occur and must be checked for, no matter the circumstances of the test.
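To make traffic allocation concrete, here is a minimal sketch in Python of how a deterministic, hash-based split could be implemented. The function name and the use of an MD5 hash are illustrative assumptions, not how AB Tasty or any particular testing tool actually assigns traffic:

```python
import hashlib

def assign_variant(user_id, experiment_id, split):
    """Deterministically bucket a user into a variant for a given traffic split.

    `split` maps variant names to allocation ratios, e.g. {"A": 0.5, "B": 0.5}
    or {"A": 0.6, "B": 0.4}. The same user always lands in the same variant.
    """
    # Hash the experiment and user together, then map the hash to [0, 1).
    digest = hashlib.md5(f"{experiment_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) / 16**32

    cumulative = 0.0
    for variant, ratio in split.items():
        cumulative += ratio
        if bucket < cumulative:
            return variant
    return variant  # guard against floating-point rounding on the last ratio

print(assign_variant("user-123", "homepage-test", {"A": 0.5, "B": 0.5}))
```

With enough visitors, the observed share of users in each variant should hover close to the configured ratios, which is exactly the expectation an SRM check verifies.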

Defining sample ratio mismatch (SRM)

Now that we have a clear picture of what the “sample” is, we can build a better understanding of what SRM means:

  • SRM happens when the ratio of the sample does not match the desired 50/50 (or 33/33/33) traffic allocation
  • SRM occurs when the observed traffic allocation to each variant does not match the allocation chosen for the test
  • The control version and the variation receive samples that are mismatched relative to the planned split

Whichever way you choose to describe SRM, we can now read our original definition with more confidence:

“Sample ratio mismatch is an experimental flaw where the expected traffic allocation doesn’t match the observed number of visitors in each test variation.”


Is SRM always a problem?

To put it simply, SRM occurs when one test version receives a noticeably different number of visitors than originally expected.

Imagine that you have set up a classic A/B test: Two variations with 50/50 traffic allocation. You notice at one point that version A receives 10,000 visitors and version B receives 10,500 visitors.

Is this truly a problem? What exactly happened in this scenario?

The reality is that, while conducting an A/B test, strictly respecting the allocation scheme is not always 100% possible, since the assignment must be random. The small difference in traffic noted in the example above is something we would typically call a “non-problem.”

If you are seeing a similar traffic allocation on your A/B test in the final stages, there is no need to panic.

A randomly generated traffic split has no way of knowing exactly how many visitors will land on the A/B test during its time frame. This is why, toward the end of the test, there may still be a small difference in the observed traffic allocation even though the vast majority (95%+) of traffic has been allocated correctly.
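To illustrate why a perfectly even split almost never happens, here is a small simulation sketch; the visitor count and the number of simulated tests are arbitrary assumptions chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(42)
n_visitors = 20_000       # assumed total traffic for the test
n_simulations = 10_000    # number of hypothetical re-runs of the same test

# In each simulated test, every visitor lands in variant A with probability 0.5.
visitors_in_a = rng.binomial(n_visitors, 0.5, size=n_simulations)
deviation = np.abs(visitors_in_a - n_visitors / 2)

print(f"Mean deviation from a perfect 50/50 split: ~{deviation.mean():.0f} visitors")
print(f"95% of simulated tests deviate by at most ~{np.percentile(deviation, 95):.0f} visitors")
```

Small gaps like these are just randomness at work; an SRM check exists to tell this expected noise apart from a genuinely broken allocation.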

When is SRM a problem?

Some tests, however, end up with an SRM because of a flaw in the experimental setup.

When SRM is a serious problem, the difference in traffic allocation is usually noticeable.

If you see 1,000 visitors directed to one variant and only 200 directed to the other, this is an issue. In cases like this, spotting SRM does not require a dedicated mathematical formula; the mismatch is evident on its own.

However, such an extreme difference in traffic allocation is rare. That’s why it’s essential to run an SRM check on the visitor counts before each test analysis.
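When the gap is less obvious, the usual way to run that check is a chi-square goodness-of-fit test comparing the observed counts to the planned split. Below is a minimal sketch using SciPy; the 0.001 threshold is a commonly used convention for SRM alerts rather than a universal rule, and the function name is our own:

```python
from scipy.stats import chisquare

def srm_check(observed_counts, expected_ratios, threshold=0.001):
    """Return the chi-square p-value and whether an SRM is suspected."""
    total = sum(observed_counts)
    expected_counts = [total * ratio for ratio in expected_ratios]
    _, p_value = chisquare(f_obs=observed_counts, f_exp=expected_counts)
    return p_value, p_value < threshold

# The extreme example above: 1,000 visitors in one variant and 200 in the other,
# under a planned 50/50 split.
p_value, srm_suspected = srm_check([1000, 200], [0.5, 0.5])
print(f"p-value = {p_value:.3g}, SRM suspected: {srm_suspected}")
```

A very small p-value means the observed split is extremely unlikely under the planned allocation, which is the signal that something in the test setup went wrong.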

Does SRM occur frequently?

Sample ratio mismatch can happen more often than we think. According to a study by Microsoft and Booking.com, about 6% of experiments experience this problem.

Furthermore, if the test includes a redirect to an entirely new page, SRM can be even more likely.

Since we heavily rely on tests and trust their conclusions to make strategic business decisions, it’s important that you are able to detect SRM as early as possible when it happens during your A/B test.

Can SRM still affect tests using Bayesian statistics?

The reality is that everyone needs to be on the lookout for SRM, no matter what type of statistical test they are running. This includes experiments using the Bayesian method.

No statistical method is exempt from the possibility of a statistically significant mismatch between the observed and expected traffic allocation. Whatever the test, if its assumptions are not met, the results will be unreliable.

Sample ratio mismatch: why it happens

Sample ratio mismatch can happen due to a variety of different root causes. Here we will discuss three common examples that cause SRM.

One common example is when the redirection to one variant doesn’t work properly for visitors with a poor connection.

Another classic example is when the direct link to one variant is spread on social media, which brings all users who click on the link directly to one of the variants. This error does not allow the traffic to be properly distributed among the variants.

In a more complex case, it’s also possible that a test including JavaScript code crashes one variant on certain visitor configurations. In this situation, some of the visitors sent to the crashing variant are not collected and indexed properly, which leads to SRM.

All of these examples share a selection bias: a non-random group of visitors is excluded. Those visitors arrive directly from a link shared on social media, have a poor connection, or land on a crashing variant.

In any case, when these issues occur, the SRM is an indication that something went wrong and that you cannot trust the numbers or the test conclusion.

Checking for SRM in your A/B tests

Something important to be aware of when doing an SRM check is that the priority metric needs to be “users” and not “visitors.” Users are the unique people allocated to each variation, whereas the visitors metric counts the number of sessions each user makes.

It’s important to differentiate between users and visitors because results may be skewed if a visitor comes back to their variation multiple times. An SRM detected on the “visitors” metric may not be reliable, but an SRM detected on the “users” metric is strong evidence of a problem.
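In practice, that means counting unique users per variant before running the check. Here is a small sketch with pandas, assuming a hypothetical assignment log with `user_id` and `variant` columns, and reusing the `srm_check` function sketched earlier:

```python
import pandas as pd

# Hypothetical assignment log: one row per visit/session.
log = pd.DataFrame({
    "user_id": ["u1", "u1", "u2", "u3", "u3", "u4", "u5", "u6"],
    "variant": ["A",  "A",  "B",  "A",  "A",  "B",  "A",  "B"],
})

# Count unique users per variant rather than raw visits before checking for SRM.
users_per_variant = log.groupby("variant")["user_id"].nunique()
print(users_per_variant)

p_value, srm_suspected = srm_check(users_per_variant.tolist(), [0.5, 0.5])
```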

SRM in A/B testing

Testing for sample ratio mismatch may seem a bit complicated or unnecessary at first glance. In reality, it’s quite the opposite.

Understanding what SRM is, why it happens, and how it can affect your results is crucial in A/B testing. Running an A/B test to help make key decisions is only helpful for your business if you have reliable data from those tests.

Want to get started on A/B testing for your website? AB Tasty is a great example of an A/B testing tool that allows you to quickly set up tests with low code implementation of front-end or UX changes on your web pages, gather insights via an ROI dashboard, and determine which route will increase your revenue.


How to solve real user problems with a CRO strategy

Catch up on the previous installment of our Customer-Centric Data Series, How to Become a Data-Centric Company, or read the series introduction.

In the next installment of our series on a data-driven approach to customer-centric marketing, we spoke with our partner Raoul Doraiswamy, Founder & Managing Director of Conversionry to understand the flow of a customer-centric experimentation process, and why it is critical to tap into insights from experimentation processes to make better decisions.

What do you find is the biggest gap in the marketing & growth knowledge among brands right now?

Many brands today have the right set of tools, such as technology investments, or the right people with marketing expertise. However, brands often face the issue of not knowing how to meet customer needs and give their customers what they want, whether on their website, in their app, or through digital advertising – in other words, how can these brands increase conversions? Raoul identifies a lack of customer understanding as the core of this gap and suggests that brands adopt a customer-centric, customer-driven process that enables a flow of customer insights, complemented by experimentation.

Which key activities deliver the best insights into customer problems?

Raoul believes that to start a strategy that puts customers at the core, it is important to have the right data-gathering approach to get insights. This is the foundation of any experimentation program, but it can be applied to all marketing channels.

“Imagine you are an air traffic controller. You have multiple screens constantly feeding you where the planes are, or when they might crash into each other. From all these constant insights, the person in front of the screens will have to make the right decisions,” he shares. “However, there are also inconsequential insights such as baggage holders being full – and it is up to the decision-makers to pick out the critical data and make use of them.”

Raoul uses this analogy to describe the role of marketing decision-makers, who normally have a dashboard with metrics like revenue, conversion rate, abandoned cart and more. An insights dashboard helps marketers better understand their customers, combining this real-time data with customer feedback from sources like analytics, heatmaps, session recordings, social media comments and user testing. Solid research can be done through a critical analysis of session recordings and user poll forms, and the main takeaways can be fed into this dashboard. How empowering is that for a marketing decision-maker?

Where are the best sources for experimentation ideas?

Raoul asserts that a combination of quantitative and qualitative analysis is key. Heuristic analysis and competitor analysis are also gold when coming up with experimentation ideas. He continues, “Don’t limit yourself to looking at competitors, look at other industries too. For example, for a $90M trade tools client we had to solve the problem of increasing user sign-ins to their loyalty program. By researching Expedia and Qantas, we got the idea to show users points instead of cash to pay for items.” Raoul shares, “Do heat map analysis, look at session recordings, user polls, run surveys to email databases, and user testing. User testing is critical in understanding the full picture.” 

After distilling customer problems and coming up with some rough experimentation ideas, the next step is to flesh out your experiment ideas fully. “Going back to the analogy of the Air Traffic Controller, one person on the team is seeing a potential crash but might have limited experience in dealing with this situation. That’s when more perspectives can be brought in by, let’s say, a supervisor, to make a more well-rounded decision. In the same way, when you are ideating, you do not want to just limit it to yourself but rather have a workshop where you discuss ideas with your internal team. If you are working with an agency, you can still have a workshop with both the agency and the client present, or have your CRO team and product team come together to share ideas. This way, you can get multiple stakeholders involved, each of them being able to provide expertise based on their experience with customers,” says Raoul.

Is there value in running data-gathering experiments (as opposed to improving conversion / driving a specific metric)?

“Yes, absolutely,” replies Raoul. “Aligning growth levers with clients every quarter while working with CRO and Experimentation teams on the experimentation process is important. When working towards the goal of increasing conversions, there are KPIs and predictive models to project the goals.

“On the other hand, if the focus of the program is on product feature validation or reducing the risk of revenue due to untested features, there will be a separate metric for that,” he continues. “It is key to have specific micro KPIs for the tests that are running to generate a constant flow of insights, which then allows us to make better decisions.”

When running data-gathering experiments, features such as personalization can also be applied, which can have a positive impact on conversions on the website.

What do brands need to get started?

“To begin, you need to start running experiments. Every day without a test is a day lost in revenue!” urges Raoul. “For marketing leaders who have yet to start running experiments, you can start by pinpointing customer problems and the flow of insights. To get the insights, you can gather them from Google Analytics, more specifically, by looking at your funnel. Through these insights, identify the drop-off point and observe the Next Page Path to see where users go next.

“Take for example an eCommerce platform. If the users are dropping off at the product page instead of adding to the cart and moving on to the shipping page,  this shows that they are confused about the shipping requirements. This alone can tell you what goes through the user’s mind. Look at heat maps and session recordings to understand the customer’s problems. The next step then is to solve the issue and to do that, you will need an A/B testing platform. Use the A/B testing platform to build tests and launch them as quickly as possible.”

As for established marketing teams who are already doing some testing, Raoul recommends gathering insights and customer problems as they come in every month. “Then to make sense of the data you’ve collected, you need conversion optimization analysts like our experts at Conversionry who are experienced in distilling data down to problems.”

Identifying customer problems is key. If some of the issues your customers encounter stay unaddressed, your initiatives could flatline despite months of experimentation. Instead, by keeping customer feedback top of mind, you can start designing, developing and testing, work with experience optimization platforms like AB Tasty to build the experiments, then gather insights and repeat the cycle to see what wins and what doesn’t.

Get started building your A/B tests today with the best-in-class software solution, AB Tasty. With embedded AI and automation, this experimentation and personalization platform creates richer digital experiences for your customers, fast.