Article

10min read

Digital Customer Journey: Insights and Optimization Tips

In a highly competitive digital marketplace, optimizing your website for a unique and seamless digital customer journey is no longer just a competitive advantage — it’s a necessity.

It’s important to remember that the digital customer journey does not begin and end with a purchase – it’s a web of all customer interactions and touchpoints with your brand.

AB Tasty has mapped out seven customer phases that we consider crucial in the journey. To craft unique experiences, you’ll need to differentiate these seven phases customers pass through and understand how to animate their digital journey.

Once you have a better understanding of these phases, you will be better equipped to set your business goals and properly optimize your website for growth and impact.


digital customer journey infographic

How exactly can you optimize each phase of the digital customer journey? Let’s dive right in and take a look at some examples.

Phase 1: Awareness

When visitors land on your website for the first time, a great first impression is crucial.

Your page needs to be both visually appealing and intuitive. A dynamic above-the-fold design is a great place to start.

In this first phase, it’s important to let your best ideas shine to capture and keep your visitors’ attention. You can accomplish this by creating personalized welcome messages for first-time visitors, displaying your value proposition and organizing high-impact elements for better visibility.

Let’s take a look at Just Over The Top’s experiment to modify the layout of their homepage. They used AB Tasty’s experience optimization platform to test whether users responded better to a layout built around product categories rather than individual products.

Original:

Individual product display - Just Over The Top

Variation:

Product category display - Just Over The Top

After creating a test variation to run against the original layout, they saw a 17.5% click increase on the three blocks below the hero image. This brought many more users into the second phase of their customer journey.

Phase 2: Discovery

When consumers reach the second phase, they’ve already discovered your brand and they’re getting curious.

To accommodate visitors during this phase, your website should be optimized for an excellent browsing experience. Whether this means making your search bar more visible, creating dynamic filters while searching, or using a virtual assistant to get to know your visitors’ interests with a series of questions, an easy browsing experience with intelligent search is key.

In this example, Claudie Pierlot focused on optimizing the customer browsing experience by testing the search bar visibility. In their variation, the small search icon was made more visible by adding the word “recherche” (or search in English) in the top right-hand corner.

Original:

Claudie Pierlot- before

Variation:

Claudie Pierlot - recherche

This clear above-the-fold design made it easier for visitors to identify the search bar and begin browsing.

With this simple A/B test, they saw a 47% increase in search bar clicks and a 7% increase in conversion rates coming directly from the search bar.

In another example, Villeroy & Boch, a ceramic manufacturing company, wanted to leverage intelligent search on their website. With the help of AB Tasty, they implemented an AI-powered search algorithm to guide online shoppers.

With our solution, they designed a new and intuitive navigation complete with filters and a comprehensive autosuggestion feature.

intelligent search - categories

By changing their search functions, Villeroy & Boch saw a 33% increase in search results clicks and a 20% increase in sales through the search function.

Phase 3: Consideration

Now is the time when your visitors are considering your brand and which products they are interested in. Showcasing your product pages in their best light during the consideration phase might be exactly what your visitor needs to continue moving down the funnel.

Let’s look at how Hanna Anderson optimized their product pages during this phase.

The clothing retail company wanted to experiment with the images on their product listing pages (PLPs). Previously, their toddler line showed images of the clothing only in older children’s sizes. Convinced there was room for improvement, they decided to run a test that changed the images to toddler sizes.

Original:

Hanna Anderson - original

Variation:

Hanna Anderson - toddler product images - variation

After implementing age-appropriate clothing images, the results were clear. During the test, clicks on PLPs increased by almost 8% and the purchase rate on those items skyrocketed by 22%.

Phase 4: Intent

During the intent phase, your visitors are on the verge of becoming customers but need to be convinced to make a purchase.

Social proof, urgency messaging and bundling algorithms are a few ideas to lightly nudge visitors to add to cart or add more to cart.

Let’s take a look at the impact that urgency messaging can have: IZIPIZI, an eyewear retailer, decided to add a special message flag next to their product description showing viewers how many people had purchased the product. The idea of this message is to show viewers that the product is popular and to encourage them to take action.

IZIPIZI - social proof

With this simple sentence of social proof to validate a product’s desirability, they saw a 36% increase in add-to-basket rate.

In another scenario, adding a progress bar is a simple way to upsell. With a progress bar, you show your customer how close they are to earning free shipping, which entices them to add more to their cart.

Product bar - Vanessa Bruno

Vanessa Bruno experimented with this addition with the help of AB Tasty and saw a 3.15% increase in transactions and a €6 uplift in average order value (AOV).

Phase 5: Purchase

Purchase frustration is real. If customers experience friction during checkout, you risk losing money.

Friction refers to any issue visitors may encounter, such as unclear messaging during payment (did the payment actually go through?), confusing or expensive shipping options, discounts not working, delays from double-authentication checks, difficult sign-in and more.

Optimizing your checkout sequence for your audience with rollouts and KPI-triggered rollbacks can help you find a seamless fit for your website.

Let’s look at an example for this phase: Galeries Lafayette, the French luxury department store, saw an opportunity to optimize their checkout by displaying default payment methods that do not require double authentication.

Payment options

During this test, they saw a €113,661 increase in profit, a €5 uplift in average order value, and a 38% increase in the conversion rate by adding the CB (bank card) option for a quicker checkout.

Phase 6: Experience

Optimizing the buyer experience doesn’t end after the purchase. Now is the time to grow your customer base and stop churn in its tracks. So, how do you keep your customers interested? By maintaining the same level of quality in your messages and personalization.

Let’s look at how Envie de Fraise, a French boutique, leveraged their user information to transform a normal post-purchase encounter into a personalized experience.

One of their customers had just purchased a maternity dress and had browsed multiple maternity dresses before the purchase. Using this information, they experimented with a “you will love these products” recommendation algorithm to gently nudge the customer to continue shopping.

products you will love algorithm

With a customized recommendation like this, Envie de Fraise saw a €127K increase in their potential profit.

As your customer spends more time with your brand, you will learn more about their habits and interests. The more time they spend with you, the more personalized you can make their experience.

Phase 7: Loyalty

In the final step of your customer’s journey, they move into the loyalty phase. To turn customers into champions of your brand, it’s important to remind them that you value their loyalty.

This can be done by sending emails with individual offers, social proof or product suggestions, or by offering incentives to join a loyalty program, earn rewards or complete product reviews.

Another example of this is sending a personalized email displaying items that are frequently bought together that align with their purchase. This will remind the customer about your brand and give them recommendations for future purchases.

Why Optimizing the Digital Customer Journey is Essential to Boost Conversions

The fierce competition in the e-commerce marketplace is undeniable. In order to attract and retain customers, you have to focus on crafting personalized user experiences to turn passive visitors into active buyers.

Understanding customers’ needs in each phase and optimizing your digital space accordingly is the best way to nudge visitors down the purchasing funnel.

By personalizing the experience of your customers during each phase of the digital customer journey, you can ensure an optimal shopping experience, boost purchases, increase customer satisfaction and see more repeat customers.

Want to start optimizing your website? AB Tasty is the best-in-class experience optimization platform that empowers you to create a richer digital experience – fast. From experimentation to personalization, this solution can help you activate and engage your audience to boost your conversions.


Article

12min read

Understanding Bounce Rate to Improve It

Bounce rate is one of the most important metrics used to understand how well your website is performing. It’s a type of web analytics metric that measures the behavior of visitors to a website or page within the website.

In this article, we will cover everything about bounce rates: how they are calculated, bounce rates vs exit rates, what is considered a good bounce rate, how to use web analytics for tracking and tips for improving this metric.

Let’s get started.

What is bounce rate on a website?

The definition of bounce rate is relatively easy to understand compared to other web analytics concepts. However, that is not to say that the data is somehow superficial or unimportant, as there are many insights that a bounce rate analysis can provide.

The bounce rate yields information on the behavior of a website’s visitors and how well the website is engaging them.

To “bounce” from a website simply means to leave before interacting with the page in some way such as leaving a comment, clicking on something, scrolling or visiting another page on the site.

In other words, to enter and to leave without engaging beyond the initial entrance of the website is a bounce. However, a bounce is not always a bad thing or a sign that the website is not performing well.

Each visitor to a website can be seen as a drop of water, with the website presenting a surface that is either porous or waterproof. The goal of a website is to be as porous as possible, absorbing each visitor with content that is as relevant and interesting as possible. Non-porous websites “bounce” visitors immediately, often indicating that the website is not performing as required and flagging issues that can be addressed with further web analytics.

How to calculate bounce rate

Calculating bounce rate is very straightforward and can be summed up in a simple equation.

The number of visitors who leave a website after visiting only the landing page (the page that led them to the website) without interacting in any way is divided by the total number of visitors to the site.

Bounce Rate Formula

For example, if 40 visitors leave without being “absorbed” into further interaction with a site and there have been 100 visitors overall, the bounce rate will be 40%.
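
Expressed as a minimal Python sketch (the function name is ours), the same calculation looks like this:

```python
def bounce_rate(single_page_sessions, total_sessions):
    """Bounce rate: single-page, no-interaction sessions as a share of all sessions."""
    if total_sessions == 0:
        return 0.0
    return 100 * single_page_sessions / total_sessions

# The example above: 40 of 100 visitors leave without interacting.
print(bounce_rate(40, 100))  # 40.0
```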

There are a few main ways that a visitor may bounce. For example:

  • Clicking on a link to another website
  • Clicking the back arrow that takes them to the previous page
  • Entering a new URL and hitting enter
  • Closing the browser or tab

One other way in which a visitor may bounce is if they stop interacting entirely, causing the session to time out.

If a visitor stays idle for more than half an hour, the session times out and, having had no further interaction, counts as a bounce. Any activity after this point, even within the same site, is counted as a new session.
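
To make the rule concrete, here is a minimal sketch (the names and threshold are ours, following the 30-minute rule above) that groups a visitor’s hits into sessions and flags single-hit sessions as bounces:

```python
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)  # idle time after which a session ends

def split_into_sessions(timestamps):
    """Group a visitor's hits into sessions; a gap over 30 minutes starts a new one."""
    sessions = []
    for ts in sorted(timestamps):
        if sessions and ts - sessions[-1][-1] <= SESSION_TIMEOUT:
            sessions[-1].append(ts)  # same session: last hit was recent enough
        else:
            sessions.append([ts])    # idle too long (or first hit): new session
    return sessions

# Two hits one hour apart become two sessions, each a single-hit (bounced) session.
hits = [datetime(2024, 1, 1, 9, 0), datetime(2024, 1, 1, 10, 0)]
print([len(s) == 1 for s in split_into_sessions(hits)])  # [True, True]
```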

Bounce rate vs exit rate

Bounce rate and exit rate are often thought to be synonyms, or at the very least that they ultimately provide the same data.

This is a huge error and can lead to false bounce rate analysis and poor decision-making.

The confusion is understandable, as both concepts seemingly measure similar things on the surface. However, the difference is quite profound and requires an understanding of what both metrics measure.

As previously described, bounce rate measures the percentage of visitors that leave a website before interacting with it in any way.

Exit rates provide information on specific pages of the website, measuring the percentage of visitors that left the site after viewing a specific page, no matter how many pages they have visited before in the session.

In other words, all bounces are exits, but not all exits are bounces.

Bounce rates are based only on sessions that start and end on a single page, while exit rates are calculated from the last page visited in a user’s journey, regardless of how many pages the user viewed during the session.

Bounce Rate vs Exit Rate

For example, let’s say a site has three pages named Page X, Page Y and Page Z. From Monday to Friday, the interaction might look something like this:

  • Monday – Page Y > Exit
  • Tuesday – Page Y > Page Z > Page X > Exit
  • Wednesday – Page X > Page Z > Page Y > Exit
  • Thursday – Page Z > Exit
  • Friday – Page Y > Page X > Page Z > Exit

The analysis would show:

Page X has an exit rate of 33% and a bounce rate of 0%.

  • Three sessions included Page X and one session exited from Page X.
  • There was no single-page session for Page X.
  • One session began on Page X, but it was not a single-page session, so the bounce rate is zero.
  • No visitor entered and left without any other interaction on this page.

Page Y has an exit rate of 50% and a bounce rate of 33%.

  • Four sessions included Page Y.
  • Two of those four total sessions exited directly from Page Y, so the exit rate is 50%.
  • The bounce rate is less than the exit rate because three sessions started with Page Y and only the single-page session led to a bounce.

Page Z has an exit rate of 50% and a bounce rate of 100%.

  • Just like Page Y, the exit rate of Page Z is 50% because four sessions included Page Z, and two sessions exited from Page Z.
  • The bounce rate is 100% because the only session that started with Page Z was a single-page session that led to a bounce.
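
A short sketch (our own, with the sessions above encoded as page lists) reproduces these numbers:

```python
# The five sessions above, each recorded as the ordered list of pages visited.
sessions = [
    ["Y"],            # Monday
    ["Y", "Z", "X"],  # Tuesday
    ["X", "Z", "Y"],  # Wednesday
    ["Z"],            # Thursday
    ["Y", "X", "Z"],  # Friday
]

for page in ["X", "Y", "Z"]:
    included = [s for s in sessions if page in s]   # sessions that viewed the page
    exits = [s for s in included if s[-1] == page]  # sessions that ended on the page
    starts = [s for s in sessions if s[0] == page]  # sessions that began on the page
    bounces = [s for s in starts if len(s) == 1]    # single-page sessions
    exit_rate = 100 * len(exits) / len(included)
    bounce = 100 * len(bounces) / len(starts) if starts else 0.0
    print(f"Page {page}: exit rate {exit_rate:.0f}%, bounce rate {bounce:.0f}%")

# Page X: exit rate 33%, bounce rate 0%
# Page Y: exit rate 50%, bounce rate 33%
# Page Z: exit rate 50%, bounce rate 100%
```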

High exit rates of some pages can be a good sign. For example, within e-commerce, leaving a page after completing a purchase is a good sign as it points towards a satisfactory outcome of a transaction.

Define average bounce rates

Defining average bounce rates can be a complex task, largely because a bounce rate that is high for some pages might be considered low for others.

Each case is highly subjective, depending on the industry and the purpose of the webpage. Bounce rate can only provide part of the picture of a website’s performance, so it remains important to use other metrics to fully understand visitor behavior.

Industry benchmarks generally consider anything over 50% high and anything between 20% and 50% low, but it’s essential to go beyond this broad definition of an acceptable bounce rate.

Anything under 20% is likely an error and should be looked into.

What is a healthy bounce rate?

Rather than being concerned with an all-encompassing good bounce rate, a better goal would be finding a healthy bounce rate for each specific site and its unique goals.

The goal of 20% to 50% is not without merit, but it can be a superficial reading of behavior or almost impossible to attain depending on your type of page.

For an e-commerce landing page, browsing is often encouraged because it can lead to sales later on. Lower bounce rates will indicate that visitors decided to continue to look around rather than leave.

For other sites, like online recipe sites or information-specific sites, browsing is less likely. A high bounce rate on these types of pages can therefore indicate that the visitor was satisfied: they found the information they needed and then bounced.

Some websites are designed to be only one page, so it’s impossible to know what a “bad” bounce rate is when the design itself ensures it remains at 100%. In these circumstances, and those where a lot of information is presented on the landing page, other metrics can provide more relevant insights.

For example, it is always a bad sign if the visitor leaves within a matter of seconds. However, if they remain on the page for multiple minutes, the bounce rate might not be the best indicator that the site is not performing as desired.

To determine a high or low bounce rate, it’s important to consider the site’s purpose, the average bounce rate in your sector and typical user behavior. From there it will be easier to gain the appropriate insight that the data is providing.

Want to lower bounce rates by testing different aspects of your website? AB Tasty’s best-in-class customer experience optimization platform offers you an A/B testing tool that allows you to quickly set up tests with low-code implementation of front-end or UX changes on your web pages. Go further by gathering insights via an ROI dashboard and determining which route will attract customers and ultimately increase your revenue.

How to check the bounce rate of your website

Once you have defined what you consider to be your desired website bounce rate, the next step is to understand where to find the data.

Thankfully this data is very easy to find and comprehend in Google Analytics, which also provides a multitude of other data to help understand how well your website is performing.

Not being particularly familiar with Google Analytics should not deter you from using it. It is fairly self-explanatory once you have a grasp of the terms and purpose of the data.

Where to find the bounce rate on Google Analytics?

First, sign into the website’s Google Analytics page. Once there, select the “Audience Overview” tab, which also provides a variety of other metrics that can be of great use. From here you will need to choose whether you want to read the entire website’s or individual pages’ bounce rate.

For the entire website, simply click on the bounce rate metric, which will also provide a graph for the defined time period. The time period can be changed as required by the calendar at the top right of the screen and includes a customizable option.

For individual pages, click “Behavior” followed by “All Pages”, which will provide a list of your website’s pages and their specific bounce rates. Just as the site-wide bounce rate needs context, so does the bounce rate of each individual page.

Does all analytics software measure bounce rates the same way?

Google Analytics measures a bounce as a single-page session that has a duration of 0 seconds. Clearly, a visitor will not make any other actions during a 0-second time frame.

If you’re using other analytics software for your website tracking, keep in mind that it may count bounces slightly differently.

Some analytics tools do not use a time frame to count a bounce; rather, they rely strictly on whether the visitor interacted during the session. Both approaches will produce a bounce rate for your website, but it’s important to know the specifics.

The key takeaway here is not to compare results from one analytics tool to another. Consistency is key to tracking your performance.

How to reduce website bounce rate

Once you understand the concept of bounce rate, the next question is “Why is my bounce rate so high?” and, more importantly, “How do I lower it?”

There are many methods that can help reduce your bounce rate, including simpler things like seeing which page is performing best and implementing some of its elements on other pages that have higher bounce rates.

Some other ideas will become self-explanatory, while others might require a little trial and error.

Here are a few ideas to try out:

Improve content

One of the simplest ways to decrease the bounce rate of a page or the entire site is to improve the content itself.

In some ways this should be self-explanatory: the more interesting and higher quality the content, the more likely the reader will stick around to explore what else the site has to offer.

Relevancy is also key. If your website is primarily about camping, publishing unrelated content, such as politics, is likely to be met with disinterest and repel visitors instantly. This will undoubtedly lead to a high bounce rate.

Content requires some planning and forethought if a site is going to “absorb” visitors.

Readability

Content isn’t just about interesting posts or entertaining media; readability is also key.

Text-heavy websites might be full of amazing information, but unless the page is formatted to draw the reader in, the first impression might be a bit overwhelming.

Be sure to use some mixture of imagery, bullet points, subheadings, and bolded words. This creates a balance to attract and retain visitors.

Avoid the use of excessive pop-ups

Nothing cheapens the feel of a site like the dreaded pop-up, and this isn’t just the case for ads.

Pop-ups are distracting, appear pushy and ruin the flow of the experience for a user. There is nothing quite as infuriating as having to click away one or several flashing online forms while trying to read something else.

Pop-ups do have their place when used sparingly: they can help grow a subscriber list with a degree of effectiveness. If you like the idea of pop-ups but want to try something different, there are gentler measures such as email campaigns or personalized banners. With one of these campaigns, visitors can choose to interact with an element after they’ve had a chance to engage with the content.

Targeted keywords

Lowering the bounce rate will always require a mixture of tactics, but one of the most reliable ways to do it is by targeting keywords.

The key to using keywords effectively is relevancy, so it is not just a matter of throwing as many keywords at the problem as possible. Nothing is more likely to lead to a high bounce rate than keywords that draw in visitors who are not interested in a website’s content.

The best place to begin is with high-traffic, low-competition keywords. How do you find out what these are for your site? It isn’t as complex as you might think: Google Keyword Planner is the perfect tool for researching targeted keywords for your site, and it’s free to use for Google account holders.

Lowering bounce rate with meta descriptions

Meta descriptions are the information that appears under the title of a website in a Google search. They add some depth to the description of a website presence on a search page and are therefore essential in garnering relevant traffic.

Meta descriptions do not rank by keyword, but they filter out traffic that would likely bounce while drawing in visitors who are more likely to interact. Remember that meta descriptions only display up to around 155 characters, so target them to get the right information across to users who are less likely to bounce.

Understanding bounce rates

As you can see, bounce rates can provide a lot of insight into your website’s performance.

Although there is no magic number to indicate a “good” or “bad” webpage bounce rate performance, you can determine your own baseline metrics based on each page’s purpose and past performance.

Article

7min read

Sample Ratio Mismatch: What Is It and How Does It Happen?

A/B testing can surface a few types of experimental flaws.

Yes, you read that right – A/B testing is important for your business, but only if you have trustworthy results. To get reliable results, you must be on the lookout for errors that might occur while testing.

Sample ratio mismatch (SRM) is a term that is thrown around in the A/B testing world. It’s essential to understand its importance during experimentation.

In this article, we will break down what sample ratio mismatch means, how to spot it, when it is and is not a problem, and why it happens.

Sample ratio mismatch overview

Sample ratio mismatch is an experimental flaw where the expected traffic allocation doesn’t fit with the observed visitor number for each testing variation.

In other words, an SRM is evidence that something went wrong.

Being aware of sample ratio mismatch is crucial in A/B testing.

Now that you have the basic idea, let’s break this concept down piece by piece.

What is a “sample”?

The “sample” portion of SRM refers to the traffic allocation.

Traffic allocation refers to how the traffic is split toward each test variation. Typically, the traffic will be split equally (50/50) during an A/B test. Half of the traffic will be shown the new variation and the other half will go toward the control version.

This is how an equal traffic allocation will look for a basic A/B test with only one variant:

A/b testing equal traffic allocation

If your test has two or even three variants, the traffic should still be allocated equally so that each version receives the same amount of traffic. An equal traffic allocation in an A/B/C test is a 33/33/33 split.

For both A/B and A/B/C tests, traffic can also be split unevenly, such as 60/40, 30/70 or 20/30/50. Although this is possible, it is not recommended if you want accurate and trustworthy results from your experiment.
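
To make allocation concrete, here is one common, generic way to implement a deterministic split: hashing a visitor ID into a bucket. This is a sketch of the general technique, not a description of how AB Tasty assigns traffic:

```python
import hashlib

def assign_variant(visitor_id, weights):
    """Deterministically bucket a visitor; the same ID always gets the same variant."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to a value in [0, 1]
    cumulative = 0.0
    for variant, weight in weights.items():
        cumulative += weight
        if point <= cumulative:
            return variant
    return variant  # guard against floating-point rounding at the top of the range

# An equal 50/50 split for a basic A/B test, as described above.
print(assign_variant("visitor-42", {"A": 0.5, "B": 0.5}))
```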

Even when following this best-practice guideline, equal traffic allocation will not eliminate the chance of an SRM. A mismatch can still occur and should be checked for no matter the circumstances of the test.

Define sample ratio mismatch (SRM)

Now that we have a clear picture of what the “sample” is, we can build a better understanding of what SRM means:

  • SRM happens when the ratio of the sample does not match the desired 50/50 (or even 33/33/33) traffic allocation
  • SRM occurs when the observed traffic allocation to each variant does not match the allocation chosen for the test
  • The control version and variation receive undesired mismatched samples

Whichever words you choose to describe SRM, we can now understand our original definition with more confidence:

“Sample ratio mismatch is an experimental flaw where the expected traffic allocation doesn’t fit with the observed visitor number for each testing variation.”

sample ratio mismatch

Is SRM always a problem?

To put it simply, SRM occurs when one test version receives a noticeably different number of visitors than originally expected.

Imagine that you have set up a classic A/B test: Two variations with 50/50 traffic allocation. You notice at one point that version A receives 10,000 visitors and version B receives 10,500 visitors.

Is this truly a problem? What exactly happened in this scenario?

The reality is that, while conducting an A/B test, perfectly strict adherence to the allocation scheme is not always possible, since assignment must be random. The small difference in traffic noted in the example above is something we would typically refer to as a “non-problem.”

If you are seeing a similar traffic allocation on your A/B test in the final stages, there is no need to panic.

A random traffic split has no way of knowing exactly how many visitors will stumble upon the A/B test during its time frame. This is why, toward the end of the test, there may be a small difference in traffic allocation even though the vast majority (95%+) of traffic is correctly allocated.
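
A quick simulation (ours, purely illustrative) shows this natural variation: a perfectly fair split over 20,000 visitors almost never lands on exactly 10,000 per variant.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible
group_a = sum(random.random() < 0.5 for _ in range(20_000))
group_b = 20_000 - group_a
print(group_a, group_b)  # close to, but rarely exactly, 10,000 each
```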

When is SRM a problem?

Some tests, however, have SRM because of a flaw in the experimental setup.

When the SRM is a big problem, there will be a noticeable difference in traffic allocation.

If you see 1,000 visitors directed to one variant and only 200 directed to the other, this is an issue. Sometimes spotting SRM does not require a dedicated mathematical formula, as the mismatch is evident enough on its own.

However, such extreme differences in traffic allocation are rare. That’s why it’s essential to run an SRM check on the visitor counts before each test analysis.
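
The article doesn’t prescribe a formula, but the standard SRM check is a chi-squared goodness-of-fit test comparing observed counts against the planned allocation. Here is a minimal sketch using SciPy, with a conventional alpha of 0.001 (the function name is ours):

```python
from scipy.stats import chisquare

def srm_detected(observed, expected_ratios, alpha=0.001):
    """True if the observed counts are inconsistent with the planned allocation."""
    total = sum(observed)
    expected = [total * r for r in expected_ratios]
    _, p_value = chisquare(observed, f_exp=expected)
    return p_value < alpha  # a tiny p-value signals a likely SRM

# The extreme example above: 1,000 vs. 200 visitors on a planned 50/50 split.
print(srm_detected([1000, 200], [0.5, 0.5]))  # True: don't trust the results
```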

Does SRM occur frequently?

Sample ratio mismatch can happen more often than we think. According to a study by Microsoft and Booking.com, about 6% of experiments experience this problem.

Furthermore, if the test includes a redirect to an entirely new page, SRM can be even more likely.

Since we heavily rely on tests and trust their conclusions to make strategic business decisions, it’s important that you are able to detect SRM as early as possible when it happens during your A/B test.

Can SRM still affect tests using Bayesian?

The reality is that everyone needs to be on the lookout for SRM, no matter what type of statistical test they are running. This includes experiments using the Bayesian method.

No methodology is exempt from the possibility of a statistically significant mismatch between the observed and expected allocation. Whatever the test, if its assumptions are not met, the results will be unreliable.

Sample ratio mismatch: why it happens

Sample ratio mismatch can happen due to a variety of different root causes. Here we will discuss three common examples that cause SRM.

One common example is when the redirection to one variant isn’t working properly for poorly connected visitors.

Another classic example is when a direct link to one variant is shared on social media, bringing every user who clicks the link to that single variant. This prevents traffic from being properly distributed among the variants.

In a more complex case, it’s also possible that a test’s JavaScript code crashes one variant for certain visitor configurations. In this situation, some of the visitors sent to the crashing variant won’t be collected and indexed properly, which leads to SRM.

All of these examples involve a selection bias: a non-random subset of visitors is excluded, whether they arrive directly from a link shared on social media, have a poor connection or land on a crashing variant.

In any case, when these issues occur, the SRM is an indication that something went wrong and you cannot trust the numbers and the test conclusion.

Checking for SRM in your A/B tests

Something important to be aware of when doing an SRM check is that the metric to check is “users,” not “visitors.” Users are the unique people allocated to each variation, while the visitors metric counts the sessions each user generates.

It’s important to differentiate between users and visitors because results may be skewed if a user comes back to their variation multiple times. An SRM detected on the “visitors” metric may not be reliable, but an SRM detected on the “users” metric is evidence of a problem.
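
A small sketch with made-up data shows why the two metrics can diverge: counted by visits, the split looks balanced, but counted by unique users it is not.

```python
# Hypothetical session log: one (user_id, variation) row per visit.
log = [("u1", "A"), ("u1", "A"), ("u1", "A"), ("u2", "A"),
       ("u3", "B"), ("u4", "B"), ("u5", "B"), ("u6", "B")]

visits = {"A": 0, "B": 0}
users = {"A": set(), "B": set()}
for user_id, variation in log:
    visits[variation] += 1
    users[variation].add(user_id)

print(visits)                                     # {'A': 4, 'B': 4} -> looks balanced
print({v: len(ids) for v, ids in users.items()})  # {'A': 2, 'B': 4} -> user-level SRM
```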

SRM in A/B testing

Testing for sample ratio mismatch may seem a bit complicated or unnecessary at first glance. In reality, it’s quite the opposite.

Understanding what SRM is, why it happens, and how it can affect your results is crucial in A/B testing. Running an A/B test to help make key decisions is only helpful for your business if you have reliable data from those tests.

Want to get started on A/B testing for your website? AB Tasty is a great example of an A/B testing tool that allows you to quickly set up tests with low-code implementation of front-end or UX changes on your web pages, gather insights via an ROI dashboard, and determine which route will increase your revenue.

Article

7min read

The ROI of Experimentation

When you hear ‘A/B Testing’, do you think straight away of revenue gain? Uplift? A dollars and cents outcome? 

According to David Mannheim, CEO of the Conversion Rate Optimization (CRO) agency User Conversion, you probably do – and shouldn’t. Here’s why:

Unfortunately, it’s just not that simple. 

Experimentation is more than just a quick strategy to uplift your ROI. 

In this article we will discuss why we experiment, the challenges of assessing return on investment (ROI), prioritization, and what A/B testing experimentation is really about. 

Why do we experiment?

Technically speaking, experimentation is performed to support or reject a hypothesis. Experimentation provides you with valuable insights into cause-and-effect relationships by determining the outcome of a certain test when different factors are manipulated in a controlled setting. 

In other words, if there is no experiment, there is no way to refute a hypothesis and reduce the risk of losing business or negatively impacting metrics.

Experimentation is about prioritization, minimizing risk and learning from the outcome, and the tests you choose to implement should be developed accordingly. It’s not necessarily about making the “right” or “wrong” decision; experimentation helps you make better decisions based on data.

In visual terms, experimentation will look something like this:

ROI frustration backlog

Online experiments in the business world must be carefully designed to learn, accomplish a specific purpose, and/or measure a key performance indicator that may not have an immediate financial effect. 

However, far too often it’s the key stakeholders (or HiPPOs, the “highest paid person’s opinion”) who decide which tests get implemented first. Their primary concern? The amount of time it will take to see a neat revenue uplift.

This tendency leads us to the following theory:

The ROI of experimentation is impossible to achieve because the industry is conditioned to think that A/B testing is only about gain.

Frustrations and challenges of ROI expectations 

You may be asking yourself at this point: “What’s so bad about expecting revenue uplift from A/B tests? Isn’t it normal to expect a clear ROI?”

It is normal; however, the issue isn’t that simple.

We’ve been conditioned to expect a neat formula with a clean-cut solution: “We invested X, we need to get Y.”  

This is a misleading CRO myth that gets in the way. 

Stakeholders have come to erroneously believe that every test they run should function like this – which has set unrealistic ROI expectations for conversion optimization practitioners. 

As you can imagine, this way of thinking creates frustration for those implementing online experimentation tests.

Experiment backlog example

What people often overlook is the complexity of the context in which they are running their experimentation tests and assessing their ROI.

It’s not always possible to accurately measure everything online, which makes putting an exact number on it next to impossible. 

Although identifying the impact of experiments can be quite a challenge due to the complexity of the context, there are some online tools that exist to measure your ROI efforts as accurately as possible. 

AB Tasty is an example of an A/B testing tool that allows you to quickly set up tests with low-code implementation of front-end or UX changes on your web pages, gather insights via an ROI dashboard, and determine which route will increase your revenue.

Aside from the frustration that arises from the ingrained expectation that ROI means immediate financial improvement, three of the biggest challenges of the ROI of experimentation are forecasting, working with averages and running multiple tests at once.

Challenge #1: Forecasting

The first challenge with assessing the ROI of experimentation is forecasting. A huge range of factors impacts an analyst’s ability to accurately project revenue uplift from any given test, such as:

  • Paid traffic strategy
  • Online and offline marketing
  • Newsletters
  • Offers
  • Bugs
  • Device traffic evolution
  • Season
  • What your competitors are doing
  • Societal factors (e.g. Brexit)

When it comes to estimating next year’s revenue from a single experiment, it’s impossible to predict an exact figure. It’s only possible to forecast an ROI trend or an expected average.

Expecting a perfectly accurate and precise prediction for each experiment you run just isn’t realistic – the context of each online experimentation test is too complex.

Challenge #2: Working with averages

The next challenge is that your CRO team is working with averages – in fact, the averages of averages.

Let’s say you’ve run an excellent website experiment on a specific audience segment – and you experienced a high uplift in conversion rate. 

If you then take a look at your global conversion rate for your entire site, there’s a very good chance that this uplift will be swallowed up in the average data. 

Your revenue wave will have shrunk to an undetectable ripple. And this is a big issue when trying to assess overall conversion rate or revenue uplift – there are just too many external factors to get an accurate picture.

With averages, the bottom line is that you’re shifting an average. Averages make it very difficult to get a clear understanding. 

On average, an average customer, exposed to an average A/B test, will perform averagely.

Challenge #3: Multiple tests

The third challenge of ROI expectations happens when you want to run multiple online experiments at one time and try to aggregate the results. 

Again, it’s tempting to run simple math equations to get a clear-cut answer for your gain, but the reality is more complicated than this. 

Grouping together multiple experiments and their respective results will give you blurred results.

This makes ROI calculations for experimentation a nightmare for those simultaneously running tests. Keeping experiments and their respective results separate is the best practice when running multiple tests.

Should it always be “revenue first”?

Is “revenue first” the best mentality? When you step back and think about it, it doesn’t make sense for conversion optimizers to treat revenue gain, and only revenue gain, as the primary indicator of success driving their entire experimentation program.

What would happen if all businesses always put revenue first?

That would mean no free returns for an e-commerce site (returns don’t increase gain!), no free sweets in the delivery packaging (think ASOS), the most inexpensive product photographs on the site, and so on.

If you were to put immediate revenue gain first – as stakeholders so often want to do in an experimentation context – the implications are even more unsavory. 

Let’s take a look at some examples: you would offer the skimpiest customer service to cut costs, push ‘buy now!’ offers unendingly, discount everything, and forget any kind of brand loyalty initiatives. Need we go on?

In short, focusing too heavily on immediate, clearly measurable revenue gain inevitably cannibalizes the customer experience. And this, in turn, will diminish your revenue in the long run.

What should A/B testing be about?

One big thing experimenters can do is work with binomial metrics. 

Avoid the fuzziness and much of the complexity by running tests that aim to give you a yes/no, black or white answer.

binomial metrics examples
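
As an illustration (our sketch, not a prescribed method), a binomial metric reduces each visitor to converted/didn’t convert, and a simple contingency test returns the yes/no answer:

```python
from scipy.stats import chi2_contingency

def significant_difference(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Yes/no: is the difference between two conversion rates statistically significant?"""
    table = [[conv_a, n_a - conv_a],  # variant A: converted vs. not converted
             [conv_b, n_b - conv_b]]  # variant B: converted vs. not converted
    _, p_value, _, _ = chi2_contingency(table)
    return p_value < alpha

# 400 of 10,000 control visitors converted vs. 500 of 10,000 on the variant.
print(significant_difference(400, 10_000, 500, 10_000))  # True
```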

Likewise, be extremely clear and deliberate with your hypothesis, and be savvy with your secondary metrics: Use experimentation to avoid loss, minimize risk, and so on.

But perhaps the best thing you can do is modify your expectations. 

Instead of saying, “experimentation should unfailingly lead to a clear revenue gain, each and every time,” you might want to start saying, “experimentation will allow us to make better decisions.”

Good experimentation model

These better decisions – combined with all of the other efforts the company is making – will move your business in a better direction, one that includes revenue gain.

The ROI of experimentation theory

With this in mind, we can slightly modify the original theory of the ROI of experimentation:

The ROI of experimentation is difficult to achieve and should be contextualized for different stakeholders and businesses. We should not move completely away from a dollar-sign way of thinking, but we should deprioritize it. “Revenue first” is not the best mentality in all cases, especially in situations as complex as calculating the ROI of experiments.