A/B Testing: It’s Not Just About the Outcome

A/B testing is often seen as the magic bullet for improving e-commerce performance. Many believe that small tweaks—like changing the color of a “Buy Now” button—will significantly boost conversion rates. However, A/B testing is much more complex. 

Random changes without a well-thought-out plan often lead to neutral or even negative results, leaving you frustrated and wondering if your efforts were wasted. 

Success in A/B testing doesn’t have to be defined solely by immediate KPI improvements. Instead, by shifting your focus from short-term gains to long-term learnings, you can turn every test into a powerful tool for driving sustained business growth. 

This guest blog was written by Trevor Aneson, Vice President Customer Experience at 85Sixty.com, a leading digital agency specializing in data-driven marketing solutions, e-commerce optimization, and customer experience enhancement. In this blog, we’ll show you how to design A/B tests that consistently deliver value by uncovering the deeper insights that fuel continuous improvement. 

Rethinking A/B Testing: It’s Not Just About the Outcome 

Many people believe that an A/B test must directly improve core e-commerce KPIs like conversion rates, average order value (AOV), or revenue per visitor (RPV) to be considered successful. This is often due to a combination of several factors: 

1. Businesses face pressure to show immediate, tangible results, which shifts the focus toward quick wins rather than deeper learnings. 

2. Success is typically measured using straightforward metrics that are easy to quantify and communicate to stakeholders.

3. There is a widespread misunderstanding that A/B testing is a one-size-fits-all solution, which can lead to unrealistic expectations. 

However, this focus on short-term wins limits the potential of your A/B testing program. When a test fails to improve KPIs, you might be tempted to write it off as a failure and abandon further experimentation. That mindset can prevent you from discovering valuable insights about your users that could drive meaningful, long-term growth.

A Shift in Perspective: Testing for Learnings, Not Just Outcomes 

To maximize the success and value of your A/B tests, it’s essential to shift from an outcome-focused approach to a learning-focused one. 

Think of A/B testing not just as a way to achieve immediate gains but as a tool for gathering insights that will fuel your business’s growth over the long term. 

The real power of A/B testing lies in the insights you gather about user behavior — insights that can inform decisions across your entire customer journey, from marketing campaigns to product design. When you test for learnings, every result — whether it moves your KPIs or not — provides you with actionable data to refine future strategies. 

Let’s take a closer look at how this shift can transform your testing approach. 

Outcome-Based Testing vs. Learning-Based Testing: A Practical Example 

Consider a simple A/B test aimed at increasing the click-through rate (CTR) of a red call-to-action (CTA) button on your website. Your analytics show that blue CTA buttons tend to perform better, so you decide to test a color change. 

Outcome-Based Approach 

Your hypothesis might look something like this: “If we change the CTA button color from red to blue, the CTR will increase because blue buttons typically receive more clicks.”

In this scenario, you’ll judge the success of the test based on two possible outcomes: 

1. Success: The blue button improves CTR, and you implement the change.

2. Failure: The blue button doesn’t improve CTR, and you abandon the test.

While this approach might give you a short-term boost in performance, it leaves you without any understanding of why the blue button worked (or didn’t). Was it really the color, or was it something else — like contrast with the background or user preferences — that drove the change? 

Learning-Based Approach 

Now let’s reframe this test with a focus on learnings. Instead of testing just two colors, you could test multiple button colors (e.g., red, blue, green, yellow) while also considering other factors like contrast with the page background. 

Your new hypothesis might be: “The visibility of the CTA button, influenced by its contrast with the background, affects the CTR. We hypothesize that buttons with higher contrast will perform better across the board.” 
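
A hypothesis like this only works if “contrast” is quantified rather than eyeballed. Below is a minimal sketch using the standard WCAG relative-luminance and contrast-ratio formulas; the background and button hex values are illustrative assumptions, not real site colors.

```python
# Minimal sketch: quantify "contrast with the background" for each CTA variant
# using the standard WCAG relative-luminance and contrast-ratio formulas.
# The background and button hex values below are illustrative, not real site colors.

def relative_luminance(hex_color: str) -> float:
    """WCAG relative luminance of an sRGB color such as '#1a73e8'."""
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))

    def linearize(c: float) -> float:
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)


def contrast_ratio(foreground: str, background: str) -> float:
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)


background = "#ffffff"  # assumed page background
variants = {"red": "#d93025", "blue": "#1a73e8", "green": "#188038", "yellow": "#f9ab00"}

for name, color in variants.items():
    print(f"{name}: contrast {contrast_ratio(color, background):.2f}:1")
```

Logging the computed ratio alongside each variant’s results is what later lets you separate “blue won” from “high contrast won.”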

By broadening the test, you’re not only testing for an immediate outcome but also gathering insights into how users respond to various visual elements. After running the test, you discover that buttons with higher contrast consistently perform better, regardless of color. 

This insight can then be applied to other areas of your site, such as text visibility, image placement, or product page design. 

Key Takeaway: 

A learning-focused approach reveals deeper insights that can be leveraged far beyond the original test scenario. This shift turns every test into a stepping stone for future improvements. 

How to Design Hypotheses That Deliver Valuable Learnings

Learning-focused A/B testing starts with designing better hypotheses. A well-crafted hypothesis doesn’t just predict an outcome—it seeks to understand the underlying reasons for user behavior and outlines how you’ll measure it. 

Here’s how to design hypotheses that lead to more valuable insights:

1. Set Clear, Learning-Focused Goals

Rather than aiming only for KPI improvements, set objectives that prioritize learning. For example, instead of merely trying to increase conversions, focus on understanding which elements of the checkout process create friction for users. 

By aligning your goals with broader business objectives, you ensure that every test contributes to long-term growth, not just immediate wins. 

2. Craft Hypotheses That Explore User Behavior 

A strong hypothesis is specific, measurable, and centered around understanding user behavior. Here’s a step-by-step guide to crafting one: 

Start with a Clear Objective: Define what you want to learn. For instance, “We want to understand which elements of the checkout process cause users to abandon their carts.” 

Identify the Variables: Determine the independent variable (what you change) and the dependent variable (what you measure). For example, the independent variable might be the number of form fields, while the dependent variable could be the checkout completion rate. 

Explain the Why: A learning-focused hypothesis should explore the “why” behind the user behavior. For example, “We hypothesize that removing fields with radio buttons will increase conversions because users find these fields confusing.” 
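
As an illustration, these three pieces can be captured in a simple structured record so that every hypothesis on your roadmap states its objective, variables, and rationale explicitly. This is a minimal sketch; all field names and values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """A learning-focused hypothesis, kept explicit so tests stay comparable over time."""
    objective: str             # what you want to learn
    independent_variable: str  # what you change
    dependent_variable: str    # what you measure
    rationale: str             # the "why" behind the expected behavior

# Hypothetical example based on the checkout scenario above.
checkout_friction = Hypothesis(
    objective="Understand which checkout elements cause users to abandon their carts",
    independent_variable="Presence of radio-button form fields in the checkout flow",
    dependent_variable="Checkout completion rate",
    rationale="Users find radio-button fields confusing, so removing them should reduce friction",
)
print(checkout_friction.dependent_variable)
```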

3. Design Methodologies That Capture Deeper Insights 

A robust methodology is crucial for gathering reliable data and drawing meaningful conclusions. Here’s how to structure your tests:

Consider Multiple Variations: Testing multiple variations allows you to uncover broader insights. For instance, testing different combinations of form fields, layouts, or input types helps identify patterns in user behavior. 

Ensure Sufficient Sample Size & Duration: Use tools like an A/B test calculator to determine the sample size needed for statistical significance. Run your test long enough to gather meaningful data but avoid cutting it short based on preliminary results. 

Track Secondary Metrics: Go beyond your primary KPIs. Measure secondary metrics, such as time on page, engagement, or bounce rates, to gain a fuller understanding of how users interact with your site. 
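
Returning to the sample-size point above: if you want to see the arithmetic behind an A/B test calculator, here is a minimal sketch of the standard two-proportion calculation. The 3.0% baseline completion rate, 3.5% target, 5% significance level, and 80% power are all illustrative numbers.

```python
from math import ceil, sqrt

from scipy.stats import norm

def sample_size_per_variant(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed in each variant to detect a shift from p1 to p2 (two-sided test)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # significance threshold
    z_beta = norm.ppf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Illustrative numbers: 3.0% baseline completion rate, hoping to reach 3.5%.
print(sample_size_per_variant(0.030, 0.035))  # roughly 19,700 visitors per variant
```

If the test is stopped well before it reaches that volume, apparent winners are much more likely to be noise.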

4. Apply Learnings Across the Customer Journey 

Once you’ve gathered insights from your tests, it’s time to apply them across your entire customer journey. This is where learning-focused testing truly shines: the insights you gain can inform decisions across all touchpoints, from marketing to product development. 

For example, if your tests reveal that users struggle with radio buttons during checkout, you can apply this insight to other forms across your site, such as email sign-ups, surveys, or account creation pages. By applying your learnings broadly, you unlock opportunities to optimize every aspect of the user experience. 

5. Establish a Feedback Loop 

Establish a feedback loop to ensure that these insights continuously inform your business strategy. Share your findings with cross-functional teams and regularly review how these insights can influence broader business objectives. This approach fosters a culture of experimentation and continuous improvement, where every department benefits from the insights gained through testing. 

Conclusion: Every Test is a Win 

When you shift your focus from short-term outcomes to long-term learnings, you transform your A/B testing program into a powerful engine for growth. Every test, whether it results in immediate KPI gains or not, offers valuable insights that drive future strategy and improvement.

With AB Tasty’s platform, you can unlock the full potential of learning-focused testing. Our tools enable you to design tests that consistently deliver value, helping your business move toward sustainable, long-term success. 

Ready to get started? Explore how AB Tasty’s tools can help you unlock the full potential of your A/B testing efforts. Embrace the power of learning, and you’ll find that every test is a win for your business.

From Clicks to Connections: How AI is Shaping the Future of Digital Optimization

Any marketer will tell you that digital optimization is crucial to running successful e-commerce operations and yielding the best possible return on investment (ROI). This practice includes both A/B testing and website personalization: every website presents a unique set of features and designs, which must, in turn, be optimized through A/B testing.

Building a great website is, unfortunately, not simply a matter of following best practices. Even within a single industry, users will hold varied expectations based on your brand, communication style, target audience, funnel, etc. And while browsing the same website, users’ expectations can vary, with some knowing exactly what they want and others needing to explore, check your returns policy, learn about your sustainability initiatives, and so on.

We have all heard the hype about how AI has been revolutionizing how marketers approach experimentation. Generative AI offers new opportunities for optimizing every aspect of the user journey, allowing marketers to:

  • streamline testing,
  • create new online experiences, 
  • and create new types of user segments for more precise personalized experiences that drive conversions.

This guest blog post was written by Rodolphe Dougoud, Project Lead at fifty-five—a leading data company that helps brands harness the potential of Generative AI and mitigate associated risks effectively with a comprehensive and pragmatic AI strategy, among other services. 

Below, we’ll explore these three perspectives in depth, with real-life examples gleaned from AB Tasty’s new algorithm, Emotions AI, and fifty-five’s work with its clients around GenAI. 

AI in Action for Experiences that Matter

  1. Streamline testing

When thinking about A/B testing, you might immediately picture creating an experiment and launching it live on a website. However, the most time-consuming phases of the A/B testing process generally come before and after that: finding new features to try out in order to build a testing roadmap, and analyzing the results of those tests. Here, AI can increase test velocity by helping to reduce the bottlenecks in both of these stages.

Test ideation

Your roadmap must be built not only top-down but also bottom-up: pay close attention both to insights from your UX designers, grounded in competitor benchmarks and industry trends, and to data-driven insights drawn from your own analytics data. Here, AI can facilitate the process by analyzing large datasets (e.g., on-site analytics data) to surface insights humans might have missed, as in the sketch below.
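
As a deliberately simple illustration of that bottom-up, data-driven input, you could rank pages by unrealized conversion opportunity straight from an analytics export. The file name, column names, and scoring rule below are assumptions about what your data might look like, not a prescribed method.

```python
import pandas as pd

# Hypothetical analytics export with one row per landing page.
df = pd.read_csv("onsite_analytics_export.csv")  # assumed columns: page, sessions, conversions

df["conversion_rate"] = df["conversions"] / df["sessions"]
site_rate = df["conversions"].sum() / df["sessions"].sum()

# Conversions "lost" versus the site-wide average: a crude proxy for where a test
# (or a closer UX look) could pay off the most.
df["opportunity"] = (site_rate - df["conversion_rate"]).clip(lower=0) * df["sessions"]

top_candidates = df.sort_values("opportunity", ascending=False).head(10)
print(top_candidates[["page", "sessions", "conversion_rate", "opportunity"]])
```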

Result analysis

Similarly, it’s essential to analyze the results of your tests thoroughly. Looking at one KPI can sometimes be enough, but it often represents only one part of a bigger story. A well-calibrated AI model can find hidden insights within your testing results.

While we generally know what data we want to access, the actual querying of that data can be time-consuming. Applying a GenAI model to your dataset can also allow you to query your data in natural language, letting the model write and run the query, pull the data for you, and create instant visualizations for major time gains.
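
As a hedged sketch of that pattern, you could hand a model your table schema and a plain-English question, get SQL back, and execute it yourself. The model name, schema, question, and database file below are illustrative assumptions, not a specific vendor setup.

```python
import sqlite3

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

schema = "ab_test_results(variant TEXT, device TEXT, sessions INTEGER, conversions INTEGER)"
question = "Which device segment shows the biggest conversion lift for variant B over variant A?"

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": f"You translate questions into SQLite queries for this schema: {schema}. "
                    "Return only the SQL, with no explanation or formatting."},
        {"role": "user", "content": question},
    ],
)
sql = response.choices[0].message.content.strip()

# Always review generated SQL before running it against anything important.
print(sql)
rows = sqlite3.connect("ab_tests.db").execute(sql).fetchall()  # hypothetical local copy of the results
print(rows)
```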

Content creation

While not necessary for most tests, creating new content to include in the testing phase can take a long time and impact your roadmap. Although GenAI cannot produce the same quality of content as your UX team, a UX designer equipped with a GenAI tool can create more content faster. The model can be trained on your design guidelines to ensure its output integrates with the rest of your content. Overall, adding a GenAI tool as a complement to your design arsenal can yield substantial gains in productivity and, therefore, reinforce your testing roadmap timeline.

  2. Create new online experiences

Marketers should not hesitate to experiment with AI to create unique and interactive experiences. Generative AI can create personalized content and recommendations that can engage users more effectively. 

Consider, for instance, fifty-five’s recent work with Chronodrive, a grocery shopping and delivery app. We used AI to address a common user challenge (and, frankly, near-universal issue): deciding what to make for dinner. 

With our innovative solution, taking a picture of the inside of your fridge will allow the app to create a recipe based on the ingredients it identifies, while a photo of a dish – taken at a restaurant or even downloaded from social media – will generate a recipe for said dish and its associated shopping list. 
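
This is not Chronodrive’s actual implementation, but the general pattern is simple enough to sketch: send the photo to a vision-capable model and ask it to identify ingredients and propose a recipe. The model name, prompt, and file name below are assumptions made for illustration.

```python
import base64

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

with open("fridge_photo.jpg", "rb") as f:  # hypothetical photo taken in the app
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "List the ingredients you can identify in this photo, then propose one dinner "
                     "recipe that uses them and a shopping list for anything that is missing."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```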

Artificial intelligence opens new creative options that weren’t available with earlier generations of models. Chronodrive’s solution may not be applicable to most companies, but every business can think back on its typical users’ pain points and conceptualize how GenAI could help ease them.

  3. Create new types of user segments for more precise personalized experiences

When a customer enters a store, a salesperson can instantly personalize their experience by checking whether they want to be helped or just want to browse, whether they are discovering the brand or are already sold on it, whether they require guidance or know precisely what they want… A website, on the other hand, requires extra effort to present the user with a similarly personalized experience.

Online, segmentation thus becomes indispensable to deliver the most satisfying user experience possible. Even during testing phases, deploying A/B tests on user segments makes achieving significant results more likely, as increased precision helps mitigate the risk of obtaining neutral results.

AI can analyze a wide array of user interactions on a given website to determine which elements drive the most conversions, or how different users respond to specific stimuli. This analysis allows brands to classify users into new segments that could not have been identified otherwise. For instance, fifty-five applied AI to split Shiseido’s website users into low- and high-lifetime-value segments. This allowed Shiseido to run differentiated A/B tests and personalize its website depending on the expected lifetime value of each user, resulting in a 12.6% increase in conversions.
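
The specifics of fifty-five’s model aren’t detailed here, so the following is only a minimal sketch of the general idea: train a regressor on historical customers to predict lifetime value, score current visitors, and split them at the median prediction into two segments that tests and personalization can then target. File names, feature names, and the median cutoff are all illustrative assumptions.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

FEATURES = ["sessions_90d", "avg_order_value", "days_since_last_order", "email_opens"]

# Hypothetical training data: customers with 12 months of observed value.
history = pd.read_csv("customer_history.csv")
model = GradientBoostingRegressor().fit(history[FEATURES], history["ltv_12m"])

# Score identifiable current visitors and split them at the median prediction.
visitors = pd.read_csv("current_visitors.csv")
visitors["predicted_ltv"] = model.predict(visitors[FEATURES])
median_ltv = visitors["predicted_ltv"].median()
visitors["segment"] = (visitors["predicted_ltv"] >= median_ltv).map(
    {True: "high_ltv", False: "low_ltv"}
)

# Each segment can now receive its own A/B tests and personalized experiences.
print(visitors["segment"].value_counts())
```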

Going even further, what if AI could read your emotions? AB Tasty’s new AI algorithm, Emotions AI, can automatically segment your audience into 10 categories based on emotional needs. 

  • If a user needs to be reassured, the website can emphasize its free return policy
  • If they need clarity, the website can highlight factual information about your product
  • And if they need immediacy, the website can hide any unnecessary information to instead focus on its main CTAs

The model estimates the needs of the user by taking into consideration all of their interactions with the website: how long they wait before clicking, whether they scroll through an entire page, where their mouse hovers, how many times they click, etc. This enables stronger personalization, both during testing phases and when deploying online features, by letting you know exactly what your users need. 
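
As a toy illustration only (this is not AB Tasty’s Emotions AI algorithm), signals like these can be aggregated per session and mapped to needs. The hand-written rules below stand in for what a production model would learn from data; every field name and threshold is invented for the example.

```python
def infer_need(session: dict) -> str:
    """Map one session's aggregated interaction signals to a need (names and thresholds invented)."""
    if session["seconds_before_first_click"] > 20 and session["scroll_depth"] > 0.9:
        return "reassurance"  # slow, thorough readers may want guarantees up front
    if session["hovers_on_spec_content"] > 3:
        return "clarity"      # repeated hovering over factual product details
    if session["clicks_per_minute"] > 6 and session["scroll_depth"] < 0.3:
        return "immediacy"    # fast, shallow sessions want the main CTA now
    return "unclassified"

example_session = {
    "seconds_before_first_click": 4,
    "scroll_depth": 0.2,
    "hovers_on_spec_content": 0,
    "clicks_per_minute": 9,
}
print(infer_need(example_session))  # -> "immediacy"
```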

Want to Learn More?

If you would like to dive deeper into current experimentation trends, watch our webinar replay here, where fifty-five and AB Tasty explored key CRO case studies and more. And if you have any questions or insights you’d like to share, please leave a comment – we would love to hear from you!