Article

4min read

Beauty E-Commerce Gets a Glow-Up: Insights from Our Cosmetics Consumer Report

The way consumers shop for cosmetics is evolving fast. Today’s beauty buyers aren’t just looking for the right shade or texture. They care about what’s inside, how it’s made, and whether they can trust the brand behind it.

To help brands stay ahead, we recently hosted a webinar inspired by our e-book, Decoding Online Shopping: Cosmetics Consumer Trends for 2025. Our hosts, Lara Hourquebie and Justin Trout, unpacked what today’s beauty shoppers expect, the digital experiences that build loyalty, and practical test ideas you can apply right away. If you missed the live session, here’s your recap of the new rules shaping beauty e-commerce.


What’s shaping beauty e-commerce in 2025

From our research and client insights, three big themes stood out:

  • Sustainability isn’t optional: eco-friendly, cruelty-free, and ethical sourcing have become the baseline.
  • Ingredient transparency: shoppers want to know exactly what goes into their skincare and cosmetics.
  • Social media’s influence: skincare routines and beauty standards are amplified online, fueling demand for authenticity and trust.

And yes, price still matters, but high-quality reviews are the second most influential factor.


Why social proof works (and how Clarins put it to the test)

One of the strongest insights from both the e-book and webinar was the importance of social validation. Shoppers feel reassured when they see that others have purchased, rated, or recommended a product – especially in beauty, where confidence is key.

Clarins put this into practice by experimenting with a social proof widget on their product pages. The idea was simple: show shoppers in real-time that others were also browsing or buying the same product.


The impact?

  • +5% increase in average order value
  • +€5.8K uplift in revenue

By targeting this experiment to the right audience segments, Clarins proved that even small nudges can build confidence and boost sales.

Your 2025 beauty brand checklist

  • Embed sustainability and ingredient transparency into your brand story.
  • Make reviews and social proof highly visible – don’t leave trust-building to chance.
  • Test new ideas, even small ones like Clarins’ widget – they can create outsized results.
  • Keep the focus on loyalty over discounts: long-term trust beats short-term price cuts.

Building better experiences through relevance

As our research shows, shoppers are happy to share details like their skin type, concerns, or makeup preferences if it helps them find the perfect match. But when it comes to things like personal contact details, they’re far less willing.

The takeaway? Consumers want relevance, not noise. They’re open to sharing what improves their journey – as long as brands use it thoughtfully and transparently.

In short: the beauty brands that blend values, personalization, and experimentation will be the ones to win hearts (and baskets) in 2025.

For more insights, download our e-book Decoding Online Shopping: Cosmetics Consumer Trends for 2025 and see how to turn shopper expectations into results.

Article

4min read

Unlock Any Audience Source with AB Tasty’s Universal Connector

Breaking down silos between your data and your experiments

Every marketing team dreams of having a unified view of their customers. But in reality, data often lives in silos: a CRM here, a CDP there, a custom analytics tool somewhere else. If you want to use these audience segments inside AB Tasty for targeting and personalization, you need a simple way to connect them to our platform—regardless of the tool you use.

That’s exactly what AB Tasty’s Universal Connector delivers.

What is the Universal Connector?

The Universal Connector is built on top of AB Tasty’s Universal Data Connector (UDC). It allows you to import audience data from any third-party tool—even those not available as native integrations—and make them available in AB Tasty’s Segment Builder for experiments, personalizations, and patches.

In simple terms: If your tool can send audience data to AB Tasty, the Universal Connector can make it actionable.

Why it matters

  • Agnostic by design: Works with any CRM, CDP, analytics platform or custom tool capable of sending data via API.
  • Self-service: Marketers can set up their connectors through a simple interface—no heavy dev work required.
  • Unified audience view: Imported audiences automatically appear in AB Tasty’s audience management and, once synced, are instantly available in the Segment Builder—ready to power your campaigns.
  • Compatible with BYID: Works seamlessly with AB Tasty’s Bring Your Own ID feature, ensuring perfect reconciliation with your own user IDs across devices and platforms.

From complex workflows to a streamlined process

Traditional approach:

  • Importing custom audiences often requires manual code injection, custom attributes, and support from technical teams.
  • Segments have to be declared one by one in the JavaScript console, with a risk of errors.

With AB Tasty’s Universal Connector:

  • Configure a connector in a few clicks: declare the source, specify how to identify the visitor (cookie, local storage, BYID), and let AB Tasty handle the reconciliation automatically.
  • Audiences flow directly into the Segment Builder without manual coding.

Who benefits the most?

  • Global brands with complex stacks: Multiple CRMs, custom CDPs, or proprietary data systems.
  • The Travel & Hospitality industry: Hotel groups, cruise lines, and booking platforms that need to unify CRM, loyalty, and reservation data across web and mobile apps.
  • The Retail & e-commerce sector: Brands that want to leverage loyalty IDs or offline CRM segments in their onsite personalizations.

A powerful use case: Travel & Hospitality

Travel brands often need to merge data from several tools: CRM, booking engines, loyalty programs, and mobile apps. The Universal Connector makes it easy to bring these audiences into AB Tasty and deliver hyper-personalized experiences.

For example, a major international hotel group uses the connector to unify audiences from its CDP and CRM, enabling precise targeting based on booking history and loyalty status across devices.

Getting started

The Universal Connector is designed to be quick to set up and easy to maintain. Most of the work can be done by a marketer, with only light support from a technical contact.

  1. Create your connector: Make sure the audience identifier matches your imported file and specify how visitors are identified—via cookie, localStorage, or your own ID (BYID). The connector will handle the rest.
  2. Send your audience data to UDC: Push your segments via a simple API call (see the illustrative sketch after this list).
  3. Target with confidence: Once synced, your imported segments automatically appear in the Segment Builder, ready to use in experiments and personalizations.
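
To make step 2 concrete, here is a minimal, hypothetical sketch of what such a push could look like in Python. The endpoint URL, payload fields, and authentication header below are illustrative placeholders only – not AB Tasty’s actual UDC API contract – so refer to the official documentation for the real call.

```python
# Hypothetical sketch of pushing an audience segment to the Universal Data Connector.
# Endpoint, payload shape, and auth header are placeholders, not the real AB Tasty API.
import requests

UDC_ENDPOINT = "https://udc.example.com/segments"  # placeholder URL (assumption)
API_KEY = "YOUR_API_KEY"                           # placeholder credential (assumption)

payload = {
    "connector_id": "crm-loyalty-tiers",             # the connector created in step 1 (hypothetical name)
    "visitor_id": "byid-12345",                      # must match the identifier chosen: cookie, localStorage, or BYID
    "segments": ["gold_member", "booked_last_90d"],  # audience labels to surface in the Segment Builder
}

response = requests.post(
    UDC_ENDPOINT,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=10,
)
response.raise_for_status()
print("Segment pushed:", response.status_code)
```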

Bonus point: No complex coding. No manual segment declarations. Just a straightforward connection between your data and AB Tasty.

Ready to unlock your audiences?

Go beyond native integrations and make any audience actionable with AB Tasty’s Universal Connector.

Contact your KAM and start importing any audience into AB Tasty today.

Already an AB Tasty client? Let your CSM know your interest in this feature for further activation.

Article

6min read

Which Statistical Model is Best for A/B Testing: Bayesian, Frequentist, CUPED, or Sequential?

If you’ve ever run an A/B test, you know the thrill of watching those numbers tick up and down, hoping your new idea will be the next big winner. But behind every successful experiment is a secret ingredient: the statistical model that turns your data into decisions.

With so many options – Bayesian, Frequentist, CUPED, Sequential – it’s easy to feel like you’re picking a flavor at an ice cream shop you’ve never visited before. Which one is right for you? Let’s dig in!

The Scoop on Statistical Models

Statistical models are the brains behind your A/B tests. They help you figure out if your shiny new button color is actually better, or if you’re just seeing random noise. But not all models are created equal, and each has its own personality – some are straightforward, some are a little quirky, and some are best left to the pros.

Bayesian Testing Model: The Friendly Guide

Imagine you’re asking a friend, “Do you think this new homepage is better?” The Bayesian model is that friend who gives you a straight answer: “There’s a 92% chance it is!” Bayesian statistics use probability to tell you, in plain language, how likely it is that your new idea is actually an improvement.

Bayesian analysis works by updating what you believe as new data comes in. It’s like keeping a running tally of who’s winning the race, and it’s not shy about giving you the odds. This approach is especially handy for marketers, product managers, and anyone who wants to make decisions without a PhD in statistics. It’s clear, actionable, and – dare we say – fun to use.
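
As a rough illustration of that idea – not AB Tasty’s production engine, and with made-up numbers – here is how a Bayesian comparison of two variations can be sketched in a few lines of Python, using Beta distributions to represent what we believe about each conversion rate:

```python
# A minimal Bayesian sketch: start from a prior, update it with observed conversions,
# and read off the probability that the variation beats the original.
import numpy as np

rng = np.random.default_rng(42)

# Observed data: (conversions, visitors) for original (A) and variation (B) -- illustrative numbers
conv_a, n_a = 480, 10_000
conv_b, n_b = 560, 10_000

# A Beta(1, 1) prior updated with the data gives the posterior of each conversion rate
samples_a = rng.beta(1 + conv_a, 1 + n_a - conv_a, size=100_000)
samples_b = rng.beta(1 + conv_b, 1 + n_b - conv_b, size=100_000)

prob_b_wins = (samples_b > samples_a).mean()
print(f"Chance that B beats A: {prob_b_wins:.1%}")  # the plain-language answer the article describes
```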

At AB Tasty, we love Bayesian. It’s our go-to because it helps teams make confident decisions without getting tangled up in statistical spaghetti. Most of our clients use it by default, and for good reason: it’s easy to understand, hard to misuse, and perfect for fast-paced digital teams.

Pros of Bayesian Testing:

  • Results are easy to interpret (“There’s a 92.55% chance to win!”).
  • Great for business decisions (and no need to decode cryptic p-values).
  • Reduces the risk of making mistakes from peeking at your data.

Cons of Bayesian Testing:

  • Some data scientists may prefer more traditional methods.
  • Can require a bit more computing power for complex tests.

Frequentist Testing Model: The Classic Statistician

If Bayesian is your friendly guide, Frequentist is the wise professor. This is the classic approach you probably learned about in school. Frequentist models use p-values to answer questions like, “If there’s really no difference, what are the chances I’d see results like this?”

Frequentist analysis is all about statistical significance. If your p-value is below 0.05, you’ve got a winner. This method is tried and true, and it’s the backbone of academic research and many data teams.

But here’s the catch: p-values can be tricky. They don’t tell you the probability that your new idea is better; they tell you the probability of seeing your data if nothing is actually different. It’s a subtle distinction, but it trips up even seasoned pros. If you’re comfortable with statistical lingo and want to stick with tradition, the Frequentist model is a good choice. Otherwise, it can feel a bit like reading tea leaves.
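
For comparison, here is a minimal sketch of the Frequentist version of the same decision, on the same made-up numbers as the Bayesian example above, using a standard two-proportion z-test:

```python
# A minimal Frequentist sketch: a two-proportion z-test. The p-value answers
# "how likely is data like this if there is no real difference?",
# not "how likely is B to be better".
from statsmodels.stats.proportion import proportions_ztest

conversions = [560, 480]     # variation B, original A (illustrative numbers)
visitors = [10_000, 10_000]

z_stat, p_value = proportions_ztest(conversions, visitors)
print(f"z = {z_stat:.2f}, p-value = {p_value:.4f}")

if p_value < 0.05:
    print("Statistically significant at the conventional 5% threshold")
else:
    print("Not significant: keep collecting data or call it inconclusive")
```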

Pros of Frequentist Testing:

  • Familiar to statisticians and data scientists.
  • Matches legacy processes in many organizations.

Cons of Frequentist Testing:

  • Results can be confusing for non-experts.
  • Easy to misinterpret, leading to “false positives” if you peek at results too often.

CUPED Testing Model: The Speedster (But Only for the Right Crowd)

CUPED (Controlled Experiment Using Pre-Experiment Data) is designed to go fast by using data from before your experiment even started. By comparing your test results to users’ past behavior, CUPED can reduce the noise and help you reach conclusions quicker.
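
Here is a simplified sketch of the core CUPED adjustment on simulated data. Real implementations add refinements, but the variance-reduction idea is the same: each user’s pre-experiment metric is used to strip predictable variance out of the in-experiment metric.

```python
# A minimal CUPED sketch on simulated data. The adjustment only pays off when users
# have meaningful pre-experiment history (high-recurrence platforms).
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

pre = rng.gamma(shape=2.0, scale=10.0, size=n)       # pre-experiment spend per user (simulated)
post = 0.8 * pre + rng.normal(0, 5, size=n) + 2.0    # in-experiment spend, correlated with the past

theta = np.cov(post, pre)[0, 1] / np.var(pre, ddof=1)  # regression coefficient on the covariate
post_cuped = post - theta * (pre - pre.mean())          # CUPED-adjusted metric

print(f"Variance before: {post.var():.1f}, after CUPED: {post_cuped.var():.1f}")
# Lower variance -> smaller effects become detectable with the same traffic.
```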

But here’s the twist: CUPED only shines when your users come back again and again, like on streaming platforms (Netflix) or big SaaS products (Microsoft). If you have an e-commerce site, CUPED can actually steer you wrong, leading to misleading results.

For most e-commerce teams, CUPED is a bit like putting racing tires on a city bike, not the best fit. But if you’re running experiments on a platform with high user recurrence, it can be a powerful tool in your kit.

Pros of CUPED Testing:

  • Can deliver faster, more precise results for high-recurrence platforms.
  • Makes the most of your existing data.

Cons of CUPED Testing:

  • Not suitable for most e-commerce or low-frequency sites.
  • Can lead to errors if used in the wrong context.
  • More complex to set up and explain.

Sequential Testing Model: The Early Warning System

Sequential testing is your experiment’s smoke alarm. Instead of waiting for a set number of visitors, it keeps an eye on your results as they come in. If things are going south – say, your new checkout flow is tanking conversions – it can sound the alarm early, letting you stop the test and save precious traffic.

But don’t get too trigger-happy. Sequential testing is fantastic for spotting losers early, but it’s not meant for declaring winners ahead of schedule. If you use it to crown champions too soon, you risk falling for false positives – those pesky results that look great at first but don’t hold up over time.
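
A quick simulation makes the danger concrete. With no real difference between A and B, repeatedly checking a classic significance test and stopping at the first p < 0.05 produces far more “winners” than the advertised 5% error rate – which is exactly why proper sequential methods use adjusted stopping boundaries.

```python
# Simulating naive peeking: both arms have the same 5% conversion rate, yet stopping
# at the first significant interim look inflates the false-positive rate well above 5%.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(7)
false_positives = 0
n_simulations = 500

for _ in range(n_simulations):
    conv_a = conv_b = n_a = n_b = 0
    for _ in range(20):                       # 20 interim looks
        batch = 500
        conv_a += rng.binomial(batch, 0.05)   # no real difference between the arms
        conv_b += rng.binomial(batch, 0.05)
        n_a += batch
        n_b += batch
        table = [[conv_a, n_a - conv_a], [conv_b, n_b - conv_b]]
        _, p, _, _ = chi2_contingency(table)
        if p < 0.05:                          # "declare a winner" at the first peek
            false_positives += 1
            break

print(f"False-positive rate with naive peeking: {false_positives / n_simulations:.0%}")
# Typically well above 5% -- hence adjusted boundaries in sequential testing.
```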

At AB Tasty, we use sequential testing as an early warning system. It helps our clients avoid wasting time and money on underperforming ideas, but we always recommend waiting for the full story before popping the champagne.


Pros of Sequential Testing:

  • Helps you spot and stop losing tests quickly.
  • Saves resources by not running doomed experiments longer than necessary.

Cons of Sequential Testing:

  • Not designed for picking winners early.
  • Can lead to mistakes if used without proper guidance.

Which Statistical Model is Best for A/B Testing?

If you’re looking for a model that’s easy to use, hard to misuse, and perfect for making fast, confident decisions, Bayesian is your best bet – especially if you’re in e-commerce or digital marketing. It’s the model we recommend for most teams, and it’s the default for a reason.

If you have a team of data scientists who love their p-values, or you’re working in a highly regulated environment, Frequentist might be the way to go. Just be sure everyone’s on the same page about what those numbers really mean.

Running a streaming service or a platform where users log in daily? CUPED could help you speed things up – just make sure you’ve got the right data and expertise.

And if you want to keep your experiments safe from disasters, Sequential is the perfect early warning system.

Conclusion: The Right A/B Testing Model for the Right Job

Choosing a statistical model for A/B testing doesn’t have to be a headache. Think about your team, your users, and your goals. For most, Bayesian is the friendly, reliable choice that keeps things simple and actionable. But whichever model you choose, remember: the best results come from understanding your tools and using them wisely.

Ready to run smarter, safer, and more successful experiments? Pick the model that fits your needs—and don’t be afraid to ask for help if you need it. After all, even the best chefs need a good recipe now and then.

Hungry for more?
Check out our guides on Bayesian vs. Frequentist A/B Testing and When to Use CUPED. Happy testing!

Article

7min read

Is Your Average Order Value (AOV) Misleading You?

Average Order Value (AOV) is a widely used metric in Conversion Rate Optimization (CRO), but it can be surprisingly deceptive. While the formula itself is simple—summing all order values and dividing by the number of orders—the real challenge lies within the data itself.

The problem with averaging

AOV is not a “democratic” measure. A single high-spending customer can easily spend 10 or even 100 times more than your average customer. These few extreme buyers can heavily skew the average, giving a limited number of visitors disproportionate impact compared to hundreds or thousands of others. This is problematic because you can’t truly trust the significance of an observed AOV effect if it’s tied to just a tiny fraction of your audience.

Let’s look at a real dataset to see just how strong this effect can be. Consider the order value distribution:

  • The horizontal axis represents the order value.
  • The vertical axis represents the frequency of that order value.
  • The blue surface is a histogram, while the orange outline is a log-normal distribution approximation.

This graph shows that the most frequent order values are small, around €20. As the order value increases, the frequency of such orders decreases. This is a “long/heavy tail distribution,” meaning very large values can occur, albeit rarely.

A single strong buyer with an €800 order value is worth 40 times more than a frequent buyer when looking at AOV. This is an issue because a slight change in the behavior of 40 visitors is a stronger indicator than a large change from one unique visitor. While not fully visible on this scale, even more extreme buyers exist. 

The next graph, using the same dataset, illustrates this better:

  • The horizontal axis represents the size of the growing dataset of order values (roughly indicating time).
  • The vertical axis represents the maximum order value (in €) observed in the growing dataset.

At the beginning of data collection, the maximum order value is quite small (close to the most frequent value of ~€20). However, we see that it grows larger as time passes and the dataset expands. With a dataset of 10,000 orders, the maximum order value can exceed €5,000. This means any buyer with an order above €5,000 (they might have multiple) holds 250 times the power of a frequent buyer at €20. At the maximum dataset size, a single customer with an order over €20,000 can influence the AOV more than 2,000 other customers combined.
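
You can reproduce this behaviour with a quick simulation. The sketch below draws order values from a log-normal distribution (the parameters are illustrative, not fitted to the dataset above) and shows how the maximum order – and with it the AOV – keeps drifting as the dataset grows:

```python
# Simulating a heavy-tail order-value distribution: as the dataset grows, the maximum
# order keeps climbing and a handful of buyers dominate the average.
import numpy as np

rng = np.random.default_rng(1)
orders = rng.lognormal(mean=3.2, sigma=0.9, size=50_000)  # most orders near ~EUR 20-30 (illustrative)

for size in (100, 1_000, 10_000, 50_000):
    sample = orders[:size]
    print(f"n={size:>6}  AOV={sample.mean():7.2f}  max order={sample.max():9.2f}")
# The AOV moves while the typical order barely changes: the few orders near the
# maximum drag the average around.
```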

When looking at your e-commerce metrics, AOV should not be used as a standalone decision-making metric.


The challenge of AB Test splitting

The problem intensifies when considering the random splits used in A/B tests.

Imagine you have only 10 very large spenders whose collective impact equals that of 10,000 medium buyers. There’s a high probability that the random split for such a small group of users will be uneven. While the overall dataset split is statistically even, the disproportionate impact of these high spenders on AOV requires specific consideration for this small segment. Since you can’t predict which visitor will become a customer or how much they will spend, you cannot guarantee an even split of these high-value users.

This phenomenon can artificially inflate or deflate AOV in either direction, even without a true underlying effect, simply depending on which variation these few high spenders land on.
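
The sketch below illustrates this splitting problem with simulated data: mixing a handful of “whale” orders into a pool of ordinary ones and splitting the pool 50/50 at random produces sizeable AOV gaps between two identical variations, purely depending on where the whales land.

```python
# Simulating the split problem: identical variations, yet the AOV gap swings widely
# depending on where a few extreme buyers land.
import numpy as np

rng = np.random.default_rng(2)
regular = np.full(10_000, 20.0)   # ordinary orders around EUR 20
whales = np.full(10, 5_000.0)     # a few extreme buyers
orders = np.concatenate([regular, whales])

gaps = []
for _ in range(1_000):
    mask = rng.random(orders.size) < 0.5       # random A/B assignment, no real effect
    gaps.append(orders[mask].mean() - orders[~mask].mean())

print(f"AOV gap between identical variations: up to +/- {np.max(np.abs(gaps)):.2f} EUR")
```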

What’s the solution?

If AOV is such an unreliable metric, how can we work with it effectively? The answer is similar to how you approach conversion rates and experimentation.

You don’t trust raw conversion data—one more conversion on variation B doesn’t automatically make it a winner, nor do 10 or 100. Instead, you rely on a statistical test to determine when a difference is significant. The same principle applies to AOV. Tools like AB Tasty offer the Mann-Whitney test, a statistical method robust against extreme values and well-suited for long-tail distributions.
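
As a minimal illustration (on simulated data – AB Tasty runs this test for you in its reports), here is how the order values of two variations can be compared with the Mann-Whitney test instead of their raw AOV difference:

```python
# Comparing two variations' order values with the Mann-Whitney test, which is robust
# to extreme values, rather than trusting the raw AOV gap.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
orders_a = rng.lognormal(mean=3.2, sigma=0.9, size=4_000)   # original (simulated)
orders_b = rng.lognormal(mean=3.25, sigma=0.9, size=4_000)  # variation with a genuine small uplift

print(f"AOV A = {orders_a.mean():.2f}, AOV B = {orders_b.mean():.2f}")
stat, p_value = mannwhitneyu(orders_b, orders_a, alternative="greater")
print(f"Mann-Whitney p-value = {p_value:.4f}")
# Trust the test, not the raw AOV gap: with heavy tails the two can disagree.
```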

AOV behavior can be confusing because you’re likely accustomed to the more intuitive statistics of conversion rates. Conversion data and their corresponding statistics usually align; a statistically significant increase in conversion rate typically means a visibly large difference in the number of conversions, consistent with the statistical test. However, this isn’t always the case with AOV. It’s not uncommon to see the AOV trend and the statistical results pointing in different directions. Your trust should always be placed in the statistical test.

The root cause: Heavy tail distributions

You now understand that the core issue stems from the unique shape of order value distributions: long-tail distributions that produce rare, extreme values.

It’s important to note that the problem isn’t just the existence of extreme values. If these extreme values were frequent, the AOV would naturally be higher, and their impact would be less dramatic because the difference between the AOV and these values would be smaller. Similarly, for the splitting problem, a larger number of extreme values would ensure a more even split.

At this point, you might think your business has a different order distribution shape and isn’t affected. However, this shape emerges whenever these two conditions are met:

  • You have a price list with more than several dozen different values.
  • Visitors can purchase multiple products at once.

Needless to say, these conditions are ubiquitous and apply to nearly every e-commerce business. The e-commerce revolution itself was fueled by the ability to offer vast catalogues.

Furthermore, the presence of shipping costs naturally encourages users to group their purchases to minimize those costs. It means that nearly all e-commerce businesses are affected. The only exceptions are subscription-based businesses with limited pricing options, where most purchases are for a single service.

Here’s a glimpse into the order value distribution across various industries, demonstrating the pervasive nature of the “long tail distribution”:

  • Cosmetics
  • Transportation
  • B2B packaging (selling packaging for e-commerce)
  • Fashion
  • Online flash sales

AOV, despite its simple definition and apparent ease of understanding, is a misleading metric. Its magnitude is easy to grasp, leading people to confidently make intuitive decisions based on its fluctuations. However, the reality is far more complex; AOV can show dramatic changes even when there’s no real underlying effect.

Conversely, significant changes can go unnoticed. A strong negative effect could be masked by just a few high-spending customers landing in a poorly performing variation. So, now you know: just as you do for conversion rates, rely on statistical tests for your AOV decisions.

Article

3min read

Experiment Health Check: Proactive Monitoring for Reliable Experimentation

Introduction

Running hundreds of experiments each year is a sign of a mature, data-driven organization – but it also comes with challenges.

How do you ensure that every test is running smoothly, and that critical issues don’t slip through the cracks?

At AB Tasty, we’ve listened to our clients’ pain points and are excited to announce the launch of Experiment Health Check: a new feature designed to make experimentation safer, smarter, and more efficient.

The Challenge: Keeping Experiments Healthy at Scale

For leading brands running over 100 campaigns a year, experimentation is at the heart of digital optimization.

But with so many campaigns running simultaneously, manually checking reports every day to spot issues is time-consuming and inefficient. Worse, problems like underperforming variations or sample ratio mismatches (SRM) can go unnoticed, leading to lost revenue or inconclusive results.
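
To make one of these failure modes concrete: a sample ratio mismatch check is essentially a statistical comparison of the observed traffic split against the configured allocation. The sketch below – with illustrative numbers, since Experiment Health Check automates this kind of monitoring for you – shows the idea with a chi-square test:

```python
# A minimal SRM check: does the observed split deviate from the configured allocation
# more than chance alone would explain?
from scipy.stats import chisquare

observed = [50_640, 49_360]      # visitors actually bucketed into A and B (illustrative)
expected_ratio = [0.5, 0.5]      # the allocation configured for the test
total = sum(observed)
expected = [r * total for r in expected_ratio]

stat, p_value = chisquare(observed, f_exp=expected)
if p_value < 0.001:              # strict threshold, since SRM checks run continuously
    print(f"Possible SRM: p = {p_value:.2e} -- investigate before trusting the results")
else:
    print(f"Split looks healthy (p = {p_value:.3f})")
```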

Our Solution: Experiment Health Check

Experiment Health Check is an automated monitoring system built directly into AB Tasty. It proactively alerts you to issues in your experiments, so you can act fast and keep your testing program on track.

Key Features:

  • Automated Alerts: Get notified in-product (and by email, if you choose) when an experiment encounters a critical issue, such as:
    • Underperforming variations (sequential testing alert)
    • SRM (Sample Ratio Mismatch) problems
  • Centralized Dashboard: Super-admins can view all alerts across accounts for a global overview.
  • Customizable Notifications: Choose which alerts to display and how you want to receive them.

Why It Matters

  • Proactive, Not Reactive: No more waiting until the end of a test or sifting through reports to find problems. Experiment Health Check surfaces issues as soon as they’re detected.
  • Saves Time: Focus on insights and strategy, not manual monitoring.
  • Peace of Mind: Most clients will rarely see alerts – only about 2% of campaigns encounter SRM issues – so you can be confident your experiments are running smoothly.

What’s Next?

Experiment Health Check is available to all AB Tasty clients as of June 2025.

Simply activate it in your dashboard to start benefiting from automated experiment monitoring. We’re committed to evolving this feature with more alert types and integrations based on your feedback.

Article

6min read

9 AI Features that Transform How Digital Teams Test, Learn, and Grow

Testing doesn’t have to feel like guesswork.

What if you could describe your vision and watch it come to life? What if understanding your visitors’ emotions was as simple as a 30-second scan? What if your reports could tell you not just what happened, but why it mattered?

That’s where AI steps in – not to replace your creativity, but to amplify it.

At AB Tasty, we’ve built AI tools that work the way teams actually think: curious, collaborative, and always moving forward. Here are nine features that help you test bolder, learn faster, and connect deeper with the people who matter most.

Insight: If you’re already an AB Tasty customer, you’ve got access to some of our most popular AI features! But don’t stop scrolling yet – there’s more to discover.

1. Visual Editor Copilot: Your vision, our AI’s creation


Visual Editor Copilot turns your ideas into reality without the endless clicking. Just describe what you want – “make that button green,” “add a fade-in animation,” or “move the CTA above the fold” – and watch our AI bring your vision to life.

No more wrestling with code or hunting through menus. Your creativity leads. Our AI follows.

2. EmotionsAI Insight: Explore 10 emotional profiles


EmotionsAI Insights gives you a free peek into 10 emotional profiles that reveal what your visitors actually feel. Not just what they click – what moves them.

See the missed opportunities hiding in plain sight. Understand the emotional drivers that turn browsers into buyers. It’s personalization that goes beyond demographics to tap into what people really want.

3. Engagement Levels: Segment traffic by affinity and engagement


Our engagement-level segmentation uses AI to cluster visitors based on how they connect with your site. New visitors get the welcome they deserve. Returning customers get the recognition they’ve earned.

It’s traffic segmentation that makes sense – grouping people by affinity, not just attributes.

4. EmotionsAI: The future of personalization


EmotionsAI is personalization with emotional clarity. In just 30 seconds, see what drives your visitors at a deeper level. Turn those insights into targeted audiences and data-driven sales.

Your visitors have unique needs and expectations. Now you can meet them where they are – emotionally and practically.

5. Recommendations and merchandising


Recommendations and Merchandising turns the right moment into new revenue. Our AI finds those perfect opportunities to inspire visitors – whether it’s a complementary product or an upgrade that makes sense.

You stay in control of your strategy. AI accelerates the performance. The result? A delightful experience that drives higher average order value.

6. Content Interest: No more struggling to connect


Content engagement AI identifies common interests among your visitors based on their browsing patterns – keywords, content, products. Build experiences that feel personal because they actually are.

It’s not about pushing content. It’s about finding the connections that already exist and making them stronger.

7. Report Copilot: Meet your personal assistant for reporting


Report Copilot is your personal assistant for making sense of data. It highlights winning variations and breaks down why they drove transactions – so you can feel confident in your next move.

No more staring at charts wondering what they mean. Get clear insights that move you forward.

8. Drowning in feedback? Feedback Analysis Copilot saves you time


Feedback Analysis Copilot takes the heavy lifting out of NPS and CSAT campaigns. Our AI analyzes responses right within your reports, identifying key themes and sentiment trends instantly.

High volumes of feedback? No problem. Get the insights you need without the manual work that slows you down.

9. Struggling to craft the perfect hypothesis for your experiments?


Hypothesis Copilot helps you craft experiments that start strong. Clear objectives, richer insights, better structure – because every great test begins with a rock-solid hypothesis.

No more struggling with the “what if” – start testing with confidence.

AI That Amplifies Human Creativity

These aren’t just features – they’re your teammates. AI that understands how teams really work: with curiosity, collaboration, and the courage to try something new.

Every tool we build asks the same question: How can we help you go further?

Whether you’re crafting your first experiment or your thousandth, these AI features meet you where you are and help you get where you’re going. Because the best optimization happens when human insight meets intelligent tools.

Ready to see what AI-powered experimentation feels like? Let’s test something bold together.

FAQs about AI in digital experimentation

How is AI used in digital experimentation and A/B testing?

AB Tasty offers clients multiple AI features to enhance A/B testing by automating test setup, analyzing emotional responses, segmenting audiences, and generating data-driven recommendations—all aimed at faster insights and better personalization.

What are the benefits of using AI in website optimization?

AI reduces guesswork, accelerates testing, improves personalization, and turns raw data into actionable insights. It empowers teams to learn faster and create better digital experiences.

How does AI help marketing and product teams test and learn faster?

AB Tasty empowers marketing and product teams with AI tools like Report Copilot and Hypothesis Copilot to streamline data analysis and test planning, helping teams move from idea to iteration quickly and confidently.

What AI features does AB Tasty offer for experimentation and personalization?

AB Tasty offers features like Visual Editor Copilot, EmotionsAI, Content Interest segmentation, and Report Copilot to streamline testing, personalization, and reporting.

Article

2min read

Your Domain, Your Rules: Domain Delegation by AB Tasty

In an era where privacy regulations tighten, browser restrictions escalate, and trust is hard-won, brands need more than great ideas to drive their digital experiments — they need full control over how their technologies behave.

 That’s why AB Tasty is proud to introduce Domain Delegation, a groundbreaking feature designed to place independence, performance, and compliance at the heart of your experimentation strategy.

Why Domain Delegation changes the game

The digital landscape is shifting fast. With evolving browser privacy policies (like ITP and ETP), widespread ad blockers, and stricter data regulations, third-party scripts are increasingly vulnerable — slowing down your site, triggering blockers, or worse, being outright rejected.

For enterprises operating under rigorous security standards, these challenges can make it nearly impossible to deploy tools like AB Tasty efficiently.

That’s where Domain Delegation steps in.

This powerful new feature allows you to serve the AB Tasty tag from a custom subdomain you control (e.g., abt.yourdomain.com), while AB Tasty takes care of the heavy lifting behind the scenes.

What you can do with Domain Delegation

  • Host the AB Tasty tag on your own subdomain (e.g., abt.brand.com)
  • Easily delegate DNS management to AB Tasty through an intuitive guided setup
  • Bypass blockers, improve load speed, and boost reliability
  • Deliver the tag under your own brand, reinforcing trust and compliance
  • Minimal technical effort, fully managed from the AB Tasty interface

What’s in It for You

  • Higher tag reliability
  • Better site performance & Core Web Vitals
  • Stronger data governance & security posture
  • More brand trust with white-labeled tag delivery

Who benefits the most?

  • Highly regulated industries: Finance, healthcare, government
  • Privacy-first brands: Total data flow ownership
  • Tech teams optimizing performance and autonomy
  • Any organization battling browser or ad blockers

Why Now?

Privacy restrictions aren’t going away. Ad blockers aren’t easing up.
With Domain Delegation, AB Tasty empowers you to take back control over your experimentation stack — ensuring you stay compliant, performant, and trusted.

This isn’t just a technical feature.
It’s a strategic foundation for the next era of digital experimentation.

How It Works

  1. Define your subdomain (e.g., abt.mybrand.com)
  2. Follow the easy delegation flow in AB Tasty’s interface
  3. Let us handle the rest (provisioning, certificates, delivery)

Your tag. Your domain. All powered by AB Tasty.

Domain Delegation Availability

Interested in Domain Delegation? Contact your AB Tasty Customer Success Manager to get started.

Article

3min read

Evi Feedback (Copilot): Turn The Customer Voice Into Action in Seconds with AI

Every insight starts with a story, and every story deserves to be heard. But when your NPS® or CSAT campaigns generate thousands of responses, how do you turn all that feedback into real action, fast?

That’s why we created Evi Feedback (formerly known as Feedback Copilot), the AI-powered assistant that transforms your NPS® or CSAT campaigns into actionable intelligence – instantly.

The problem collecting feedback: Too many voices, not enough time

Let’s face it: analyzing feedback is a nightmare. Even when users leave valuable insights in NPS campaigns, the manual work required to analyze hundreds (or thousands) of verbatim responses can paralyze teams. One client told us:

“We received 5,000 verbatim responses. That’s two weeks of manual work.”

And because it’s so time-consuming, teams either:

  • Underutilize feedback tools like NPS/CSAT
  • Or don’t act on the insights at all

The solution to overwhelming feedback? Evi Feedback

Evi Feedback was born from this pain point – combining the best of AI with our all-in-one experimentation platform. It’s available for free within AB Tasty, and automatically activated for NPS/CSAT campaigns with over 100 responses.

What Evi Feedback does:

  • Segments feedback by sentiment: Instantly separates positive and negative comments based on campaign scores.
  • Clusters similar comments into key themes: Groups feedback into topics like “price,” “delivery,” or “UX.”
  • Summarizes each theme: Provides a short description, confidence score, and sample comments for every theme.
  • Highlights what matters most: Surfaces the top 3 positive and top 3 negative drivers of satisfaction.
  • Exports labeled feedback: Download results for use in Excel, PowerPoint, and more.

And it does all this while respecting data privacy, using a self-hosted model (Hugging Face) instead of sending sensitive content to third-party LLMs.

What makes Evi Feedback unique?

  • Instant categorization of massive feedback volumes
  • Quantification of qualitative input – finally, your verbatim responses have numbers to back them
  • Integrated NPS/CSAT in your test workflows – measure why something works, not just if it does
  • Enterprise-grade privacy: Comments stay on AB Tasty’s infrastructure

Who benefits from Evi Feedback?

  • CROs & Product Managers: Prioritize optimizations based on real user pain points.
  • UX & Research Teams: Detect trends and go beyond basic survey stats.
  • Marketing & Customer Success: Understand friction points before and after launches.

What’s next?

Early adopters already report major productivity gains – and they’re asking for more:

  • Direct A/B test ideas from negative themes
  • Verbatim-based segmentation for campaign targeting
  • Improved theme granularity for enterprise-scale campaigns

We’re just getting started. Evi Feedback is not just a feature – it’s your co-pilot in delivering better, faster, and more human-centered product decisions.

Article

6min read

Test, Dress, Impress: Top Fashion Consumer Trends 2025

Forget traditional shopping journeys – today’s fashion consumers are rewriting the rules! Our 2025 Fashion Consumer Trends report reveals the shifts in how consumers discover, decide, and commit to fashion brands today.

Introduction

In a recent webinar, 3 experimentation leaders came together to unpack the latest consumer trends shaping the fashion industry. The conversation brought together Ben Labay, CEO of Speero, Jonny Longden, Speero’s Chief Growth Officer, and Mary Kate Cash, Head of Growth Marketing for North America at AB Tasty. They shared valuable insights from AB Tasty’s recent global fashion consumer survey, highlighting what drives inspiration, conversion, and retention in today’s fast-evolving fashion landscape.

Social Media is Changing the Game 

Traditional search engines remain the top channel for fashion discovery, followed by direct website visits, Google Shopping, and Social Media ads. However, the differences between these top four channels are shrinking year over year, with social media rapidly gaining ground, especially among Gen Z consumers, where 60% of survey respondents highlighted Social Media ads as their preferred avenue to finding new products. Jonny predicts this trend will expand across all age groups. 

“Social and fashion just go so hand in hand. The big change that’s happened with social is that fashion itself has become more rapid in the way it changes, and so it’s really driving different consumer behaviour.”

Jonny Longden, Chief Growth Officer at Speero

Different Channels, Different Mindsets

People use search when they know what they want. Social media, on the other hand, encourages experimentation. As Ben pointed out, shoppers arriving from social media are often inspired to try new styles or connect with communities, engaging in “social shopping” and not just focusing on finding a specific product. This opens the door for more tailored experiences based on where customers are coming from and what type of inspiration they’re seeking.

Reward Loyalty in Meaningful Ways – When asked how brands could make customers’ experiences more personal, the top answer was clear: rewarding brand loyalty. Discounts, early access, or perks for repeat buyers make shoppers feel seen and increase the chances of account creation and repeat visits. 

Jonny pointed out that “the really interesting tension in this whole industry at the moment is the difference between what is the right thing to do and what is the profitable thing to do. Finding that balance is about experimentation in the broadest sense of the word.”

Make Recommendations That Actually Fit – Consumers want relevant suggestions that go beyond basic personalization. Jonny compared it to having a personal stylist: a brand should know both the customer and the market, understanding trends and styles while matching these to individual preferences.


What Actually Drives Conversions

When it comes to converting browsers into buyers, shoppers across generations are surprisingly aligned. 

Product quality leads the way across all age groups and regions. Shoppers are still willing to pay for craftsmanship, comfort, and durability, even in a price-sensitive market.

Discounts come next, but the strategy matters. Overuse can cheapen brand perception. As Jonny put it: “Fashion, especially the lower price point fashion has ended up in a kind of race to the bottom where discounting is the way to compete. […] and a lot of consumers wouldn’t consider paying full price. The challenge is how to be careful with the commerciality of discounting.”


Sizing and fit clarity also ranks high, especially in fashion, where hesitation often comes from uncertainty about how something will feel or look. Ben noted that some major retailers are tackling this head-on, investing heavily in tools to improve sizing and try-on experiences.

For Gen Z, high-quality reviews and transparency around production methods, sustainability, and pricing are big drivers. Ben shared tactical approaches to transparency on product detail pages, like using engaging CTAs such as “Do you want to know a secret?” to reveal value props related to sustainability and ethical production.

Why Shoppers Abandon Carts

Cart abandonment remains a major friction point, and two reasons dominate globally:

  1. Not ready to buy – Many shoppers use the cart to explore shipping, delivery timeframes, or total cost before making a decision. Jonny explained it simply: “People use the checkout of an ecommerce website just to see what’s gonna happen. […] When’s it gonna be delivered? What are the delivery options? How much is delivery gonna cost?”
  2. Payment Methods not being accepted – This came in a close second, showing how overlooked payment flexibility still is. Buy-now-pay-later options like Klarna may move the needle, especially in fashion, where customers often purchase multiple sizes with the intention of returning some items. Jonny emphasized that payment method testing is one of the best arguments for AB testing and experimentation, as the “best practice” of offering many payment options doesn’t always lead to better conversion.

Retention: Loyalty Built on Familiarity

Finally, we explored what drives customers to create accounts with fashion brands, buy products from them, and what motivates them to stick around.

Loyalty Rewards Drive Engagement – Globally, the top reason for account creation is earning loyalty points, especially among Gen Z and Millennials. Discounts and sale updates follow closely behind.

Balancing Novelty and Trust – Shoppers crave both newness and familiarity: new products ranked highest in driving retention, but previously purchased items and trusted brands followed close behind. This balance is key to keeping customers engaged long-term.

Jonny raised an interesting point: a lot of loyalty programs end up rewarding people who would have come back anyway. Mary Kate added that tools like segmentation can help brands tell the difference between genuinely loyal customers and those just passing through, making it easier to design rewards that actually make an impact.

While conventional wisdom discourages forced account creation, Ben challenged this assumption, arguing it can work when paired with compelling promotions or rewards, especially in social ads. “Social ads that inspire and combine short-term promotions, rewards, and discounts are increasingly leading into forced account creation sequences.”

Conclusion

As shown in our 2025 Fashion Consumer Trends report, the e-commerce fashion industry is evolving, along with consumer expectations. To remain competitive, brands must go beyond simply selling products. They must deliver seamless, personalized shopping experiences that speak directly to the modern shopper’s needs.

This is where experimentation becomes a critical advantage. The most successful brands are those willing to test assumptions about everything from product discovery and presentation to payment options, loyalty strategies, and the evolving role of social commerce. Experience optimization is no longer a nice-to-have. It’s the foundation for building trust, loyalty, and long-term growth in the fast-moving world of online fashion.


Want a deeper dive? Watch the full webinar below to hear expert insights and practical strategies shaping the future of fashion commerce.

Article

6min read

Minimal Detectable Effect: The Essential Ally for Your A/B Tests

In CRO (Conversion Rate Optimization), a common dilemma is not knowing what to do with a test that shows a small and non-significant gain. 

Should we declare it a “loser” and move on? Or should we collect more data in the hope that it will reach the set significance threshold? 

Unfortunately, we often make the wrong choice, influenced by what is called the “sunk cost fallacy.” We have already put so much energy into creating this test and waited so long for the results that we don’t want to stop without getting something out of this work. 

However, CRO’s very essence is experimentation, which means accepting that some experiments will yield nothing. Yet, some of these failures could be avoided before even starting, thanks to a statistical concept: the MDE (Minimal Detectable Effect), which we will explore together.

MDE: The Minimal Detectable Threshold

In statistical testing, samples have always been valuable, perhaps even more so in surveys than in CRO. Indeed, conducting interviews to survey people is much more complex and costly than setting up an A/B test on a website. Statisticians have therefore created formulas that link the main parameters of an experiment for planning purposes:

  • The number of samples (or visitors) per variation
  • The baseline conversion rate
  • The magnitude of the effect we hope to observe

This allows us to estimate the cost of collecting samples. The problem is that, among these three parameters, only one is known: the baseline conversion rate.

We don’t really know the number of visitors we’ll send per variation. It depends on how much time we allocate to data collection for this test, and ideally, we want it to be as short as possible. 

Finally, the conversion gain we will observe at the end of the experiment is certainly the biggest unknown, since that’s precisely what we’re trying to determine.

So, how do we proceed with so many unknowns? The solution is to estimate what we can using historical data. For the others, we create several possible scenarios:

  • The number of visitors can be estimated from past traffic, and we can make projections in weekly blocks.
  • The conversion rate can also be estimated from past data.
  • For each scenario configuration from the previous parameters, we can calculate the minimal conversion gains (MDE) needed to reach the significance threshold.

For example, with traffic of 50,000 visitors and a conversion rate of 3% (measured over 14 days), here’s what we get:

  • The horizontal axis indicates the number of days.
  • The vertical axis indicates the MDE corresponding to the number of days.

The leftmost point of the curve tells us that if we achieve a 10% conversion gain after 14 days, then this test will be a winner, as this gain can be considered significant. Typically, it will have a 95% chance of being better than the original. If we think the change we made in the variation has a chance of improving conversion by ~10% (or more), then this test is worth running, and we can hope for a significant result in 14 days.

On the other hand, if the change is minor and the expected gain is less than 10%, then 14 days will not be enough. To find out more, we move the curve’s slider to the right. This corresponds to adding days to the experiment’s duration, and we then see how the MDE evolves. Naturally, the MDE curve decreases: the more data we collect, the more sensitive the test becomes to smaller effects.

For example, by adding another week, making it a 21-day experiment, we see that the MDE drops to 8.31%. Is that sufficient? If so, we can validate the decision to create this experiment.


If not, we continue to explore the curve until we find a value that matches our objective. Continuing along the curve, we see that a gain of about 5.44% would require waiting 49 days.


That’s the time needed to collect enough data to declare this gain significant. If that’s too long for your planning, you’ll probably decide to run a more ambitious test to hope for a bigger gain, or simply not do this test and use the traffic for another experiment. This will prevent you from ending up in the situation described at the beginning of this article, where you waste time and energy on an experiment doomed to fail.
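
If you want a rough way to reproduce this kind of curve yourself, standard power-analysis tools get close. The sketch below assumes the 50,000 visitors are counted per variation over 14 days and uses a classic fixed-horizon, two-sided test at 95% confidence and 80% power; AB Tasty’s calculator may use different statistical settings, so the exact figures can differ slightly from those above.

```python
# Approximate MDE curve for the example above: 3% baseline conversion rate,
# 50,000 visitors per variation over 14 days, alpha = 5% (two-sided), power = 80%.
# A classic fixed-horizon approximation, not AB Tasty's own engine.
import numpy as np
from statsmodels.stats.power import NormalIndPower

baseline = 0.03
visitors_per_variation_per_day = 50_000 / 14

power_analysis = NormalIndPower()
for days in (14, 21, 49):
    n = visitors_per_variation_per_day * days
    # Smallest standardized effect (Cohen's h) detectable with this sample size
    h = power_analysis.solve_power(effect_size=None, nobs1=n, alpha=0.05, power=0.80)
    # Convert the arcsine-scale effect back into a target conversion rate
    target = np.sin(np.arcsin(np.sqrt(baseline)) + h / 2) ** 2
    print(f"{days} days -> MDE ~ {(target - baseline) / baseline:.1%} relative uplift")
# Prints roughly 10%, 8.4%, and 5.5% -- in line with the curve described above.
```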

From MDE to MCE

Another approach to MDE is to see it as MCE: Minimum Caring Effect. 

This doesn’t change the methodology except for the meaning you give to the definition of your test’s minimal sensitivity threshold. So far, we’ve considered it as an estimate of the effect the variation could produce. But it can also be interesting to consider the minimal sensitivity based on its operational relevance: the MCE. 

For example, imagine you can quantify the development and deployment costs of the variation and compare it to the conversion gain over a year. You could then say that an increase in the conversion rate of less than 6% would take more than a year to cover the implementation costs. So, even if you have enough traffic for a 6% gain to be significant, it may not have operational value, in which case it’s pointless to run the experiment beyond the duration corresponding to that 6%.


In our case, we can therefore conclude that it’s pointless to go beyond 42 days of experimentation because beyond that duration, if the measured gain isn’t significant, it means the real gain is necessarily less than 6% and thus has no operational value for you.

Conclusion

AB Tasty’s MDE calculator feature shows you the sensitivity of your experimental protocol based on its duration. It’s a valuable aid when planning your test roadmap, helping you make the best use of your traffic and resources.

Looking for a free and minimalistic MDE calculator to try? Check out our free Minimal Detectable Effect calculator here.