Article

8min read

Flying Through Checkout: How Experimentation Shapes Airline Consumer Behavior

The airline checkout is where booking intent becomes revenue. Yet for most airlines, it’s also where the majority of customers drop off. This high abandonment rate isn’t just a cost of doing business; it’s a direct result of a complex booking process failing to meet modern traveler expectations. Fixing this friction is one of the biggest opportunities for growth in the industry.

This drop-off isn’t just a technical problem; it’s a human one. The checkout flow is where a traveler’s excitement meets anxiety, and where price sensitivity clashes with the desire for comfort. For airline and travel professionals, understanding this interplay is the key to conversion.

The answer isn’t to guess what travelers want or to copy a competitor’s design. It’s to listen, learn, and adapt by building a system that lets you ask customers what they prefer, not with a survey, but with their clicks. As we discuss in our Travel Essentials Kit e-book, this is the world of experimentation, where every test becomes part of a continuous cycle of learning and iteration.

Why airline checkout is so complex

Unlike a simple e-commerce purchase, booking a flight is rarely a one-click affair. The complexity is baked right into the business model. You’re not just selling a seat; you’re selling a multi-faceted travel experience, and each component adds another layer to the checkout.

First, there’s the core booking. A simple round-trip flight is one thing, but multi-leg journeys with different carriers, layovers, and time zones create a significant cognitive load for the user. Then come the additional services, such as seats, bags, meals, or insurance, where each choice is a potential exit point. Finally, regulatory requirements can create long, intimidating forms.

The result of this complexity is an abandonment rate that, according to Inai, hits 90%. To put it another way, nine out of every ten potential customers who start booking a flight will leave without paying. That’s significantly higher than the already high average e-commerce cart abandonment rate of 70%. And the problem is even worse on mobile.

This is more than a user experience flaw; it’s a massive financial bleed. The Baymard Institute estimates that $260 billion in lost orders across the US and EU are recoverable through better checkout design alone. It’s a multi-billion dollar design challenge waiting for a solution, but the fix doesn’t require a complete and costly overhaul. A commitment to analyzing user data, testing hypotheses, and letting the results guide incremental, high-impact changes will have your customers soaring through your checkout process in no time.

Decoding consumer behavior at checkout

To optimize the checkout flow, you have to get inside the traveler’s mind. Their behavior is driven by powerful psychological factors, and your data shows exactly where the friction is.

The single biggest culprit is cost ambiguity. The top reason for cart abandonment, cited by 39% of shoppers in research aggregated by the Baymard Institute, is discovering high extra costs at the end of the process. This points directly to the airline industry’s practice of “drip pricing.” The low base fare gets them in the door, but the steady drip of fees erodes trust. It’s not just the final price; it’s the feeling of being misled.

Next is process friction. The same research found a “too long or complicated” checkout will cause 18% of users to leave. Forcing a user to create an account is another major barrier, responsible for another 19% of abandoned carts. This accumulation of friction—multiple pages, endless form fields, and mandatory sign-ups—creates a powerful negative momentum that pushes users to exit.

Finally, there’s the trust deficit. A staggering 19% of users will abandon a purchase simply because they didn’t trust the website with their payment information. This isn’t just about SSL logos. A user who experiences a price increase through drip pricing is psychologically primed to be more skeptical when it’s time to enter their payment details, as the final cost no longer aligns with their initial expectation.

Understanding these behaviors isn’t about exploiting them. It’s about designing a smoother, more transparent, and less stressful experience that guides the traveler confidently toward their destination while also building brand credibility.

Experimentation as a window into the traveler’s mind

So, how do you solve for cost ambiguity or process friction? The answer is to ask your users, not with a survey, but by testing different approaches and measuring the results. Experimentation, through A/B and multivariate testing, is the most effective way to understand what travelers actually do.

The process starts with a data-driven hypothesis. For example, if your analytics show a high drop-off rate on the passenger details page, you could hypothesize that reducing the number of form fields will reduce friction and increase conversions. From there, you can run a simple A/B test: Version A is your current, longer form, and Version B is the new, simplified one. By showing each version to different segments of your audience, you can measure which one leads to more completed bookings. The result is no longer a guess; it’s a data-backed insight that de-risks design changes and allows you to make improvements with a measurable impact.
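To make the mechanics concrete, here is a minimal Python sketch of the kind of A/B test described above. Everything in it is illustrative: hash-based bucketing is one common way to split traffic deterministically, and the booking numbers are invented, not real results.

```python
import hashlib

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into variant A or B.

    Hashing the user ID together with the experiment name gives a
    stable, roughly 50/50 split without storing assignments anywhere.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def conversion_rate(completed: int, started: int) -> float:
    """Share of users who completed a booking after starting checkout."""
    return completed / started if started else 0.0

# Hypothetical numbers for the passenger-details test described above:
# Version A (long form) vs. Version B (simplified form).
rate_a = conversion_rate(410, 5000)   # long form
rate_b = conversion_rate(505, 5000)   # simplified form
uplift = (rate_b - rate_a) / rate_a   # relative improvement of B over A
```

A real experiment would also check statistical significance before declaring a winner, but the core loop is exactly this: assign, measure, compare.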

But this isn’t just about one-size-fits-all fixes. You can take it a step further with personalization and segmentation. A first-time booker might need more guidance and reassurance during checkout, while a frequent flyer would prefer a streamlined experience that pre-fills their preferences and key information. Experimentation allows you to test different, tailored experiences for these segments, ensuring every traveler gets the smoothest possible path to booking.

What airlines can test in checkout

Once you embrace an experimentation mindset, you’ll see test opportunities everywhere. The goal is to challenge assumptions and find what truly moves the needle. Here are a few powerful areas to start:

  • Call-to-action design: Don’t underestimate the power of a button. We worked with Smartbox to test color variations of their “Add to cart” button, and a simple color change resulted in a 16% increase in clicks.
  • Payment options: The payment step is the final hurdle. Adding digital wallets is one of the most impactful changes you can make. An analysis by Stripe found that businesses enabling Apple Pay saw an average 22% increase in conversion. It’s a powerful antidote to checkout friction, especially on mobile. You could even explore digital boarding passes that integrate directly with mobile wallets. 
  • Form factor and flow: Is a single-page checkout less intimidating than a multi-step progress bar? Test it and see!
  • Trust-building elements: Reinforce security at the moment of payment. Test the placement of security seals and clear language around your cancellation policies. A simple statement like “Free 24-hour cancellation” can provide the reassurance a hesitant traveler needs.
  • Upsell placement: How and when you present add-ons matters. Test bundling services versus offering them a la carte. You might find users are more receptive to upsells like early check-in or seat selection via a follow-up email after the booking is confirmed, reducing friction in the initial checkout.
  • Mobile-first experiences: Your mobile checkout shouldn’t just be a shrunken version of your desktop site. Test mobile-specific designs with larger tap targets, simplified navigation, and form fields that trigger the correct mobile keyboard layout.

From insights to impact: Building a culture of experimentation

The true power of optimization isn’t found in a single winning test. It’s found in building a culture of continuous learning. When your product, marketing, and engineering teams are united by an experimentation mindset, you stop debating opinions and start making decisions based on data. You dare to go further.

Take Iberojet, for example. The online travel agency questioned whether the order of tabs on their homepage was ideal. Working with us, they ran a simple A/B test to change the order based on user browsing history. That small change increased clicks on the “Search” button by 25%, pushing more users down the conversion funnel.

Another powerful example is Ulta Beauty. Working with us, they’ve embedded experimentation into their innovation process, scaling their program from 20 tests per year to over 65. Rather than relying on assumptions, their teams use testing to get quick, data-driven answers. For example, by testing an overlay with product recommendations in the shopping cart, they drove a 9% increase in revenue and a 15% increase in “add to bag” clicks, proving the value of a nimble, “fail-fast” environment.

This is how you find your better. It’s not about finding one perfect, final version of your checkout. It’s about the restless, determined pursuit of a better experience for every traveler, on every device, every single day. The journey starts with a single question: What will you try?

Article

4min read

Progressive Rollout: The Safer, Smarter Way to Launch New Features

Let’s face it: launching a new feature can feel a bit like walking a tightrope. You want to wow your users with something fresh, but you also know that even the best-tested releases can have surprises lurking in the shadows.

What if you could take the nerves—and the guesswork—out of your next launch? That’s exactly what Progressive Rollout is here to do.

The Problem: Risky Feature Releases and Manual Workarounds

Picture this: your team has spent weeks (maybe months!) building a new payment system, a revamped booking flow, or a shiny loyalty program. You’re excited. But you’re also worried. What if something breaks? What if a bug slips through and impacts thousands of users at once?

This is the reality for most product and engineering teams. The stakes are high, and the pressure to “get it right” is real. That’s why so many teams look for ways to release new features gradually—starting with a small group, then expanding as confidence grows.

But here’s the catch: most teams don’t have a dedicated tool for this. Instead, they put together workarounds using feature toggles or A/B tests. These methods can work, but they’re clunky, manual, and often lack the visibility and reassurance everyone craves during a launch.

The Solution: Progressive Rollout

Progressive Rollout is our answer to this all-too-common problem. It’s a feature designed not just for the tech wizards, but for everyone involved in a product launch—product managers, developers, and even business stakeholders.

How does it work?
With Manual Progressive Delivery, you can schedule your feature release in stages. Maybe you want to start with 10% of your users, then move to 20%, 40%, and so on. You decide the pace and the audience.

Our platform handles the rest, automatically exposing more users to your new feature at each step. And at every stage, you get clear notifications and a visual overview, so you always know exactly what’s happening.
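Under the hood, staged rollouts like this are commonly implemented with stable hash-based bucketing. The sketch below is a simplified Python illustration of that general technique, not AB Tasty’s actual implementation; the stage percentages and feature name are invented.

```python
import hashlib

# Hypothetical stages of a progressive rollout, as percentages of users.
STAGES = [10, 20, 40, 100]

def in_rollout(user_id: str, feature: str, percent: int) -> bool:
    """Return True if this user falls inside the current rollout percentage.

    Hashing user ID + feature name maps each user to a stable bucket
    from 0 to 99; users in buckets below `percent` see the feature.
    Because the bucket is stable, widening the percentage only ever
    adds users — no one who already has the feature loses it.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent
```

Moving from stage to stage is then just a matter of raising `percent` on a schedule, which is the part a dedicated tool automates for you.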

What Makes Progressive Rollout a Game-Changer?

1. It’s Actually Easy to Use
Let’s be honest: many “enterprise” tools are intimidating. Progressive Rollout is different. The interface is clean, intuitive, and designed so that anyone can set up a rollout in just a few clicks. No advanced segmentation or manual math required. Whether you’re a seasoned developer or a product manager new to experimentation, you’ll feel right at home.

2. Full Control, Full Reassurance
One of the biggest anxieties during a rollout is not knowing what’s happening. With Progressive Rollout, you get a crystal-clear view of your rollout plan: who’s getting the feature, when, and how much of your audience is included at each step. Email notifications keep you in the loop, so you’re never caught off guard. This transparency isn’t just a nice-to-have—it’s a must for teams who want to move fast and stay safe.

3. Flexible for Any Scenario
Want to give early access to your VIPs or most loyal users? Easy. Need to roll out to everyone, but in controlled increments? No problem. You can import user lists, target specific segments, or just roll out to “all users” in stages. Progressive Rollout adapts to your needs, not the other way around.

Fun Fact: Most Teams Aren’t Doing This—Yet

Here’s something surprising: despite the clear benefits, most teams aren’t using dedicated progressive rollout tools. They’re still relying on toggles and A/B tests, or even manual processes. Why? Because until now, the tools have been too complex or not user-friendly enough. Progressive Rollout changes that, making safe, staged launches accessible to everyone.

The Bottom Line: Launch With Confidence

Progressive Rollout isn’t just another feature—it’s peace of mind for your next big launch. By making gradual releases easy, transparent, and accessible, we help you reduce risk, improve user experience, and focus on what matters: delivering value to your customers.

Article

6min read

A New Era for Product Recommendations: AB Tasty’s Semantic Proximity Algorithm

Picture this: You’ve just launched a new product line, or maybe you’re gearing up for a themed campaign – think “Back to School” or a limited-edition collection. You want your customers to discover the right products, right away. But traditional recommendation engines are stuck waiting for data to trickle in, leaving you with generic suggestions and little control over what’s shown. For merchandisers, that’s not just frustrating – it’s a missed opportunity.

That’s exactly why we built AB Tasty’s Semantic Proximity Algorithm. Instead of relying on yesterday’s sales numbers, this new approach lets you craft relevant, business-driven product recommendations from day one. Whether you’re working with a fresh catalog or pivoting to a new campaign, you get the flexibility and control you need – no waiting, no guesswork, just smarter recommendations tailored to your goals.

From Algorithm to Merchandiser Mindset

Most recommendation engines are just that – algorithms. But AB Tasty’s Semantic Proximity Algorithm is a paradigm shift: it allows your catalog to think like a merchandiser. Instead of passively waiting for data, it actively understands your products, your campaigns, and your business goals – giving your catalog a brain and putting you in the driver’s seat from day one.

Why Rethink Product Recommendations?

Traditional recommendation algorithms are built on analytics data – think Google Analytics or similar tools. These models can be effective, but only if you have enough historical data. What happens when you launch a new product line, a new brand, or want to activate a campaign around a specific theme (“Back to School,” “Harry Potter,” etc.)? Merchandisers are often left with little control, unable to quickly tailor recommendations to their business needs or campaign goals.

This is the challenge that inspired us to create the Semantic Proximity Algorithm: a tool that empowers merchandisers to launch relevant, business-driven recommendations instantly, even with zero historical data.

The Semantic Proximity Algorithm: A New Approach

AB Tasty’s Semantic Proximity Algorithm takes a fundamentally different approach. Instead of relying on analytics data, it leverages advanced Natural Language Processing (NLP) to analyze the attributes of your product catalog – such as product name, description, category, price, and even custom metafields. This allows the algorithm to identify products that are semantically related, regardless of whether they have ever been purchased together.

Key benefits include:

  • Fast ROI: ideal for campaign launches, upsell, and cross-sell strategies.
  • Instant setup: No need to wait for analytics data to accumulate. Recommendations are ready as soon as your catalog is integrated.
  • Total flexibility: Merchandisers can select and combine any catalog attributes to build strategies and adapt recommendations on the fly for seasonal events or business needs.
  • Full control and transparency: Preview and iterate on recommendations before going live, ensuring relevance and quality.
  • Adaptable for all expertise levels: The algorithm is as simple or as advanced as you need. SMBs can start with just product names, while advanced users can leverage dozens or even hundreds of attributes for highly customized strategies.

Previously, recommendation engines were blind – waiting for clicks, sales, and data to slowly trickle in before making generic suggestions.

AB Tasty’s Semantic Proximity Algorithm delivers instant, intelligent recommendations. As soon as your catalog is integrated, the algorithm “thinks” like a merchandiser – making smart, relevant suggestions based on product meaning, not just past behavior. No more waiting, no more guesswork – just instant, business-driven recommendations that adapt as quickly as you do.

Unique on the Market

No direct competitor offers this level of semantic attribute selection and flexibility. While some platforms provide basic attribute filtering, none allow merchandisers to select and combine multiple catalog attributes to fine-tune recommendations. Most competitors still rely mainly on analytics and sales data, with only limited semantic analysis capabilities.

This is a true differentiator for AB Tasty, empowering clients to adapt their recommendation strategies to their unique business challenges – without being held back by data limitations.

How Does It Work in Practice?

The Semantic Proximity Algorithm is designed to be both powerful and user-friendly. Merchandisers can choose which attributes to use for each recommendation strategy – whether that’s product name, description, category, price, or even custom fields like Shopify metafields. This means you can tailor recommendations for specific campaigns, themes, or business objectives.

For example, during a seasonal campaign, you might want to recommend products that share a common theme in their description or category, even if they’ve never been purchased together before. Or, you might want to upsell higher-value editions of a product by prioritizing price as an attribute. The algorithm allows you to preview and iterate on these strategies instantly, making it easy to adapt to changing business needs.
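As a rough illustration of the idea (not AB Tasty’s actual algorithm, which uses more advanced NLP), here is a minimal Python sketch that scores catalog items by bag-of-words cosine similarity over whichever attributes the merchandiser selects. The catalog entries are invented.

```python
import math
from collections import Counter

def text_of(product: dict, attrs: list[str]) -> str:
    """Concatenate the chosen catalog attributes into one text blob."""
    return " ".join(str(product.get(a, "")) for a in attrs).lower()

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def recommend(target: dict, catalog: list[dict],
              attrs: list[str], k: int = 3) -> list[str]:
    """Return the k catalog items most semantically similar to `target`."""
    tv = Counter(text_of(target, attrs).split())
    scored = [
        (cosine(tv, Counter(text_of(p, attrs).split())), p["name"])
        for p in catalog if p["name"] != target["name"]
    ]
    return [name for _, name in sorted(scored, reverse=True)[:k]]

# Tiny invented catalog for a themed campaign.
catalog = [
    {"name": "Wizard Wand", "description": "magic wizard collectible"},
    {"name": "Wizard Hat", "description": "magic wizard costume"},
    {"name": "Running Shoes", "description": "sport running footwear"},
]
picks = recommend(catalog[0], catalog, attrs=["name", "description"], k=2)
```

Note how the thematically related item ranks first even though nothing here depends on purchase history — changing `attrs` (say, adding price or a custom metafield) changes the strategy instantly.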

Upsell, Cross-sell, and Beyond with Product Recommendations

The flexibility of the Semantic Proximity Algorithm opens up new possibilities for both upsell and cross-sell strategies. For upsell, you can recommend alternative products that are not only similar but also more profitable. For cross-sell, you can suggest complementary items that enhance the customer’s purchase – think of the classic “chewing gum at the checkout” scenario, but tailored to your specific catalog and business logic.

This approach is especially valuable for businesses with large or complex catalogs, or those looking to launch new products and campaigns quickly. It’s also ideal for expert merchandisers who want granular control over their recommendation logic, as well as for SMBs seeking a fast, easy-to-implement solution.

Fun Facts & Unique Highlights

  • Did you know? This is the first AB Tasty algorithm that works directly from your product catalog – no analytics setup required.
  • Unique on the market: No competitor allows merchandisers to select and combine multiple catalog attributes (including custom metafields) to fine-tune recommendations.
  • Instant preview: You can see and iterate on your recommendation strategies before going live – perfect for adapting to seasonal campaigns or special events.
  • Scalable: The algorithm can handle catalogs with hundreds or even thousands of attributes per product.

Conclusion

AB Tasty’s Semantic Proximity Algorithm ushers in a new era for product recommendations: faster, more flexible, and more intelligent. Whether you’re an SMB looking for simplicity or an enterprise seeking advanced personalization, this innovation lets you transform the customer experience and maximize revenue from day one.

FAQs

Is this just another “black box” AI?

No. You control which attributes are used, can preview results, and iterate. It’s transparent and customizable.

What if the recommendations don’t make sense?

You can filter and combine attributes, set thresholds, and preview results before going live. Early feedback has led to rapid improvements.

Does it work with custom fields?

Yes! Any attribute in your catalog, including custom metafields, can be used.

Article

5min read

Why AB Tasty is the Best Digital Optimization Partner for Your Team

When it comes to digital optimization, you need more than just another tool—you need a partner who understands that every test is a step toward something bigger.

Here’s why AB Tasty stands out as the best choice for teams ready to go further.

1. Built for Everyone: Usability That Empowers Your Whole Team

Your team shouldn’t need a developer for every test.

AB Tasty’s visual editor and theme builder work for everyone—whether you’re a marketer launching your first campaign or a developer building complex experiments. Our intuitive interface means less time wrestling with code and more time testing bold ideas.

Real autonomy, real speed. Teams choose AB Tasty because their previous platform kept them dependent on developers for basic changes. With AB Tasty, they launched campaigns faster and gave their marketing team the independence they needed to iterate quickly.

Widgets that work, right out of the box. Our widget library comes from 12+ years of real-world testing. These aren’t just features—they’re battle-tested components that help teams launch more campaigns with confidence. While newer platforms struggle with bugs and limitations, our widgets deliver reliability when you need it most.

The result? Teams report launching more experiments, faster, with fewer roadblocks.

2. Honest Pricing: What You See Is What You Get

No surprise costs. No hidden fees. Just transparent value.

What starts as your solution stays your complete solution—no extra charges for essential features down the line.

Predictable partnerships. Many platforms lure teams in with low initial costs, then surprise them with steep price increases or essential features locked behind add-ons. We believe in honest pricing from day one, so you can plan your growth without budget surprises.

Long-term value that makes sense. When you calculate total cost over time—including all the features you’ll actually need—AB Tasty delivers better value. We’re smarter for the long haul.

3. Support That Actually Supports You

Customer Success Managers focused on your success—not their sales quotas.

Our CSMs are dedicated to helping you win, not upselling you. They’re your advocates, your strategic partners, and your go-to team for navigating complex challenges. No conflicts of interest, no hidden agendas—just genuine support.

Local expertise when you need it. Whether you’re based in the UK, France, or anywhere else we serve, you get local support that understands your market, your timezone, and your specific needs. Responsive, knowledgeable, and always ready to help.

Technical reliability you can count on. We handle complex environments—React, SPAs, multi-brand setups—with confidence. Teams praise our ability to navigate technical challenges that trip up other platforms. When your setup is complicated, we make the solution simple.

4. Technical Excellence: Speed, AI, and Innovation That Works

The fastest tag performance in the industry. Speed matters. Our script loads at 482ms—significantly faster than major competitors. That means better user experience, higher conversion rates, and tests that don’t slow down your site.

AI that’s transparent and ready now. Our Engagement Level and EmotionsAI aren’t black boxes or future promises—they’re transparent, advanced tools you can use today. While others demo concepts, we deliver production-ready AI that helps you understand and optimize for real user behavior.

Built for modern web experiences. Single Page Applications and dynamic content work out-of-the-box with AB Tasty. No manual workarounds, no technical debt—just seamless experimentation on the modern web.

5. Proven Reliability: Trust Built Over Time

Platform stability when it matters most. Experimentation requires trust—in your data, your results, and your platform. We deliver consistent reliability while others struggle with bugs, lost test goals, and API limitations that disrupt your work.

Recognition from the experts. Industry analysts consistently recognize AB Tasty for experiment design, pricing flexibility, community support, and market presence. But the real validation comes from our customers—teams who’ve switched to us and never looked back.

Real client wins, real results. Multiple teams have moved from other platforms to AB Tasty for better usability, superior support, and genuine value. They stay because we help them accomplish more together.

The Best Choice for Teams Ready to Go Further

What makes AB Tasty the best digital optimization partner isn’t just one thing—it’s how everything works together. Intuitive tools that empower your whole team. Transparent pricing that respects your budget. Support that genuinely cares about your success. Technical excellence that delivers results. And proven reliability you can count on.

We’re not just another platform. We’re your collaborators, your advocates, and your partners in every bold test you want to try.

Other tools might promise quick wins or flashy features. We deliver something better: a partnership that grows with you, technology that works when you need it, and a team that believes in your potential.

Try, learn, iterate—then go again. That’s how teams grow, and that’s how we help you get there.

Ready to experience the difference? Let’s build something better—together.

Article

6min read

From “What If” to “What Works”: How AB Tasty AI Transforms Experimentation

If you’ve ever wondered what to test next, struggled to get developer time, or felt overwhelmed by reporting dashboards, you’re not alone.

These are the frustrations experimentation teams face daily. That’s why we built AB Tasty AI—a suite of AI tools designed not to add hype to your workflow, but to genuinely help you move faster, test smarter, and get real business impact from your experimentation program.

With AB Tasty AI, those roadblocks disappear. Our AI guides you through ideation, building, personalization, and analysis—so you can focus less on the “what ifs” and more on the results that matter.

Let’s walk through how it works.

AI that crushes your “We’re guessing what to test next” problem

Step 1: Ideation generation

In many organizations, idea generation depends on gut feelings or endless whiteboard sessions that rarely produce actionable outcomes. That’s where AB Tasty AI steps in.

Our platform scans your pages and surfaces data-backed test ideas that are proven to make an impact. Instead of guessing, you get a prioritized list of opportunities aligned with your business goals. It’s like having an intelligent co-pilot who not only brainstorms with you but also brings evidence to the table.

AI that eliminates your “Our hypotheses are hunches” frustration

Step 2: Develop a hypothesis

A test idea is only as strong as the hypothesis behind it. Yet many teams struggle to move from fuzzy thinking to clear, structured hypotheses with measurable objectives.

AB Tasty AI eliminates the guesswork by helping you sharpen your hypotheses. You can turn casual “what if we tried this?” conversations into formal statements that define the change, predict the impact, and set up the right metrics for evaluation.

This structured approach not only improves your test quality but also boosts team confidence and stakeholder trust.

AI that annihilates your “I can’t build what I’m thinking” roadblock

Step 3: Start building

One of the biggest blockers in experimentation is the dependency on developer resources. Great ideas often languish in backlogs because the dev team is focused on other priorities.

With AB Tasty AI, you can instantly transform ideas into buildable experiments—no coding required. Whether you want to tweak a button, test a new layout, or launch a more complex variation, our AI makes it possible to build, preview, and launch without waiting weeks for a developer.

This shift not only accelerates testing velocity but also democratizes experimentation, empowering marketers, product managers, and designers to run with their ideas.

AI that ends your “Our personalization feels robotic” paralysis

Step 4: Understand your audience

Many brands struggle with personalization that feels forced, generic, or robotic. Visitors sense it, and the results often disappoint.

AB Tasty AI introduces EmotionsAI Insights, giving you a window into the emotional triggers that shape customer behavior. Instead of relying only on demographic or behavioral data, you get deeper visibility into what truly motivates your audience.

It’s personalization with empathy—designed to feel natural, human, and meaningful.

AI that solves your “I don’t know why visitors convert” mystery

Step 5: Personalize the customer journey

Understanding emotional drivers is just the start. With EmotionsAI Segments, you can act on those insights by creating experiences tailored to specific motivations.

For example, one group of visitors might be motivated by security and reassurance, while another thrives on novelty and excitement. AB Tasty AI combines emotional, behavioral, and contextual data to reveal these distinctions, allowing you to craft experiences that resonate at a deeper level.

The result? More conversions, stronger loyalty, and a customer journey that feels less like a funnel and more like a personalized conversation.

AI that crushes your “I don’t understand this report” problem

Step 6: Analyze your reports

Once experiments are running, the next challenge is often reporting. Traditional dashboards can be dense, and interpreting results takes time—especially if stakeholders want quick answers.

AB Tasty AI simplifies the process with natural language analysis. You can ask plain-English questions like “Which variation performed best with mobile visitors?” and get clear, actionable answers instantly.

This not only saves hours of manual analysis but also democratizes data, empowering non-technical teams to explore results with confidence.

Why AB Tasty AI Stands Out

The market is full of AI solutions, many of which promise more than they deliver. AB Tasty AI is different. We’ve designed it to remove the real blockers experimentation teams face every day:

  • No more guessing what to test
  • No more hunch-based hypotheses
  • No more dev backlog bottlenecks
  • No more robotic personalization
  • No more confusing reports
  • No more lost learnings

In short, AB Tasty AI moves your experiments from start to success.

FAQs about AI in digital experimentation

What type of AI does AB Tasty offer?

AB Tasty offers practical, experimentation-focused AI that supports the full testing journey. This includes AI for idea generation, hypothesis creation, no-code experiment building, emotional personalization (EmotionsAI), natural language reporting, and more.

How does AB Tasty AI help with personalization?

AB Tasty AI uses EmotionsAI to uncover visitor motivations and segment audiences based on emotional, behavioral, and contextual data. This allows businesses to create experiences that feel more human and relevant.

Can AB Tasty AI help non-technical teams run experiments?

Yes. AB Tasty AI empowers marketers, product managers, and designers to launch tests without relying on developers, thanks to its no-code experiment builder.

What makes AB Tasty AI different from other AI solutions on the market?

AB Tasty AI is designed to deliver practical, business-ready solutions. While many AI tools focus on hype, AB Tasty AI helps teams move from “what if” to “what works” by providing tangible results at every stage of the experimentation cycle.

Article

4min read

Beauty E-Commerce Gets a Glow-Up: Insights from Our Cosmetics Consumer Report

The way consumers shop for cosmetics is evolving fast. Today’s beauty buyers aren’t just looking for the right shade or texture. They care about what’s inside, how it’s made, and whether they can trust the brand behind it.

To help brands stay ahead, we recently hosted a webinar inspired by our e-book, Decoding Online Shopping: Cosmetics Consumer Trends for 2025. Our hosts, Lara Hourquebie and Justin Trout, unpacked what today’s beauty shoppers expect, the digital experiences that build loyalty, and practical test ideas you can apply right away. If you missed the live session, here’s your recap of the new rules shaping beauty e-commerce.


What’s shaping beauty e-commerce in 2025

From our research and client insights, three big themes stood out:

  • Sustainability isn’t optional: eco-friendly, cruelty-free, and ethical sourcing have become the baseline.
  • Ingredient transparency: shoppers want to know exactly what goes into their skincare and cosmetics.
  • Social media’s influence: skincare routines and beauty standards are amplified online, fueling demand for authenticity and trust.

And yes, price still matters, but high-quality reviews are the second most influential factor.


Why social proof works (and how Clarins put it to the test)

One of the strongest insights from both the e-book and webinar was the importance of social validation. Shoppers feel reassured when they see that others have purchased, rated, or recommended a product – especially in beauty, where confidence is key.

Clarins put this into practice by experimenting with a social proof widget on their product pages. The idea was simple: show shoppers in real-time that others were also browsing or buying the same product.


The impact?

  • +5% increase in average order value
  • +€5.8K uplift in revenue

By targeting this experiment to the right audience segments, Clarins proved that even small nudges can build confidence and boost sales.

Your 2025 beauty brand checklist

  • Embed sustainability and ingredient transparency into your brand story.
  • Make reviews and social proof highly visible – don’t leave trust-building to chance.
  • Test new ideas, even small ones like Clarins’ widget – they can create outsized results.
  • Keep the focus on loyalty over discounts: long-term trust beats short-term price cuts.

Building better experiences through relevance

As our research shows, shoppers are happy to share details like their skin type, concerns, or makeup preferences if it helps them find the perfect match. But when it comes to things like personal contact details, they’re far less willing.

The takeaway? Consumers want relevance, not noise. They’re open to sharing what improves their journey – as long as brands use it thoughtfully and transparently.

In short: the beauty brands that blend values, personalization, and experimentation will be the ones to win hearts (and baskets) in 2025.

For more insights, download our e-book Decoding Online Shopping: Cosmetics Consumer Trends for 2025 and see how to turn shopper expectations into results.

Article

4min read

Unlock Any Audience Source with AB Tasty’s Universal Connector

Breaking down silos between your data and your experiments

Every marketing team dreams of having a unified view of their customers. But in reality, data often lives in silos: a CRM here, a CDP there, a custom analytics tool somewhere else. If you want to use these audience segments inside AB Tasty for targeting and personalization, you need a simple way to connect them to our platform—regardless of the tool you use.

That’s exactly what AB Tasty’s Universal Connector delivers.

What is the Universal Connector?

The Universal Connector is built on top of AB Tasty’s Universal Data Connector (UDC). It allows you to import audience data from any third-party tool—even those not available as native integrations—and make them available in AB Tasty’s Segment Builder for experiments, personalizations, and patches.

In simple terms: If your tool can send audience data to AB Tasty, the Universal Connector can make it actionable.

Why it matters

  • Agnostic by design: Works with any CRM, CDP, analytics platform or custom tool capable of sending data via API.
  • Self-service: Marketers can set up their connectors through a simple interface—no heavy dev work required.
  • Unified audience view: Imported audiences automatically appear in AB Tasty’s audience management and, once synced, are instantly available in the Segment Builder—ready to power your campaigns.
  • Compatible with BYID: Works seamlessly with AB Tasty’s Bring Your Own ID feature, ensuring perfect reconciliation with your own user IDs across devices and platforms.

From complex workflows to a streamlined process

Traditional approach:

  • Importing custom audiences often requires manual code injection, custom attributes, and support from technical teams.
  • Segments have to be declared one by one in the JavaScript console, with a risk of errors.

With AB Tasty’s Universal Connector:

  • Configure a connector in a few clicks: declare the source, specify how to identify the visitor (cookie, local storage, BYID), and let AB Tasty handle the reconciliation automatically.
  • Audiences flow directly into the Segment Builder without manual coding.

Who benefits the most?

  • Global brands with complex stacks: Multiple CRMs, custom CDPs, or proprietary data systems.
  • The Travel & Hospitality industry: Hotel groups, cruise lines, and booking platforms that need to unify CRM, loyalty, and reservation data across web and mobile apps.
  • The Retail & e-commerce sector: Brands that want to leverage loyalty IDs or offline CRM segments in their onsite personalizations.

A powerful use case: Travel & Hospitality

Travel brands often need to merge data from several tools: CRM, booking engines, loyalty programs, and mobile apps. The Universal Connector makes it easy to bring these audiences into AB Tasty and deliver hyper-personalized experiences.

For example, a major international hotel group uses the connector to unify audiences from its CDP and CRM, enabling precise targeting based on booking history and loyalty status across devices.

Getting started

The Universal Connector is designed to be quick to set up and easy to maintain. Most of the work can be done by a marketer, with only light support from a technical contact.

  1. Create your connector: Make sure the audience identifier matches your imported file and specify how visitors are identified—via cookie, localStorage, or your own ID (BYID). The connector will handle the rest.
  2. Send your audience data to UDC: Push your segments via a simple API call.
  3. Target with confidence: Once synced, your imported segments automatically appear in the Segment Builder, ready to use in experiments and personalizations.
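To illustrate what step 2 could look like in practice, here is a hedged sketch of a segment push. The endpoint URL, payload field names, and header names below are hypothetical placeholders, not AB Tasty's documented API; check the UDC documentation for the real schema before integrating.

```python
import json
import urllib.request

# Hypothetical example: the URL, payload fields, and headers are placeholders.
# Consult AB Tasty's UDC documentation for the actual endpoint and schema.
payload = {
    "segment_name": "loyalty_gold_members",
    "identifier_type": "BYID",                # cookie, localStorage, or BYID
    "visitor_ids": ["user-1842", "user-2210", "user-3375"],
}
req = urllib.request.Request(
    "https://api.example.com/udc/audiences",  # placeholder URL
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json", "X-Api-Key": "<your-key>"},
    method="POST",
)
# urllib.request.urlopen(req) would send the request; it is omitted here so
# the sketch stays runnable without network access or credentials.
print(req.method, req.full_url)
```

The point is the shape of the workflow: one authenticated POST per audience sync, with the visitor identifier type matching what you declared when creating the connector.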

Bonus point: No complex coding. No manual segment declarations. Just a straightforward connection between your data and AB Tasty.

Ready to unlock your audiences?

Go beyond native integrations and make any audience actionable with AB Tasty’s Universal Connector.

Contact your KAM and start importing any audience into AB Tasty today.

Already an AB Tasty client? Let your CSM know your interest in this feature for further activation.

Article

6min read

Which Statistical Model is Best for A/B Testing: Bayesian, Frequentist, CUPED, or Sequential?

If you’ve ever run an A/B test, you know the thrill of watching those numbers tick up and down, hoping your new idea will be the next big winner. But behind every successful experiment is a secret ingredient: the statistical model that turns your data into decisions.

With so many options – Bayesian, Frequentist, CUPED, Sequential – it’s easy to feel like you’re picking a flavor at an ice cream shop you’ve never visited before. Which one is right for you? Let’s dig in!

The Scoop on Statistical Models

Statistical models are the brains behind your A/B tests. They help you figure out if your shiny new button color is actually better, or if you’re just seeing random noise. But not all models are created equal, and each has its own personality – some are straightforward, some are a little quirky, and some are best left to the pros.

Bayesian Testing Model: The Friendly Guide

Imagine you’re asking a friend, “Do you think this new homepage is better?” The Bayesian model is that friend who gives you a straight answer: “There’s a 92% chance it is!” Bayesian statistics use probability to tell you, in plain language, how likely it is that your new idea is actually an improvement.

Bayesian analysis works by updating what you believe as new data comes in. It’s like keeping a running tally of who’s winning the race, and it’s not shy about giving you the odds. This approach is especially handy for marketers, product managers, and anyone who wants to make decisions without a PhD in statistics. It’s clear, actionable, and – dare we say – fun to use.
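To make that "running tally" concrete, here is a minimal sketch of the Beta-Binomial approach that underlies Bayesian A/B statistics, using only Python's standard library and illustrative numbers (this is a toy model, not AB Tasty's actual implementation):

```python
import random

# Illustrative A/B conversion data (not real campaign numbers)
visitors_a, conv_a = 1000, 100   # control: 10.0% conversion
visitors_b, conv_b = 1000, 118   # variation: 11.8% conversion

# With a flat Beta(1, 1) prior, each conversion rate's posterior is
# Beta(conversions + 1, non-conversions + 1). Sample both posteriors
# many times and count how often B beats A.
def beta_sample(successes, failures):
    return random.betavariate(successes + 1, failures + 1)

random.seed(42)
trials = 100_000
b_wins = sum(
    beta_sample(conv_b, visitors_b - conv_b) > beta_sample(conv_a, visitors_a - conv_a)
    for _ in range(trials)
)
print(f"P(B beats A) ≈ {b_wins / trials:.1%}")
```

The printed figure is exactly the kind of plain-language answer Bayesian reporting gives you: the estimated chance that the variation really is better.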

At AB Tasty, we love Bayesian. It’s our go-to because it helps teams make confident decisions without getting tangled up in statistical spaghetti. Most of our clients use it by default, and for good reason: it’s easy to understand, hard to misuse, and perfect for fast-paced digital teams.

Pros of Bayesian Testing:

  • Results are easy to interpret (“There’s a 92.55% chance to win!”).
  • Great for business decisions (and no need to decode cryptic p-values).
  • Reduces the risk of making mistakes from peeking at your data.

Cons of Bayesian Testing:

  • Some data scientists may prefer more traditional methods.
  • Can require a bit more computing power for complex tests.

Frequentist Testing Model: The Classic Statistician

If Bayesian is your friendly guide, Frequentist is the wise professor. This is the classic approach you probably learned about in school. Frequentist models use p-values to answer questions like, “If there’s really no difference, what are the chances I’d see results like this?”

Frequentist analysis is all about statistical significance. If your p-value is below 0.05, you’ve got a winner. This method is tried and true, and it’s the backbone of academic research and many data teams.

But here’s the catch: p-values can be tricky. They don’t tell you the probability that your new idea is better; they tell you the probability of seeing your data if nothing is actually different. It’s a subtle distinction, but it trips up even seasoned pros. If you’re comfortable with statistical lingo and want to stick with tradition, the Frequentist model is a good choice. Otherwise, it can feel a bit like reading tea leaves.
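For comparison, here is the classic Frequentist calculation on illustrative numbers: a two-proportion z-test, computed with nothing but the standard library (a textbook sketch, not any vendor's implementation):

```python
import math

# Illustrative A/B conversion data
visitors_a, conv_a = 1000, 100   # control: 10.0%
visitors_b, conv_b = 1000, 118   # variation: 11.8%

p_a, p_b = conv_a / visitors_a, conv_b / visitors_b
p_pool = (conv_a + conv_b) / (visitors_a + visitors_b)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
z = (p_b - p_a) / se
# Two-sided p-value from the standard normal: P(|Z| > z) = erfc(|z| / sqrt(2))
p_value = math.erfc(abs(z) / math.sqrt(2))
print(f"z = {z:.2f}, p-value = {p_value:.3f}")
```

Note what the p-value here does and does not say: it is the probability of seeing a gap this large if the two variations were truly identical, not the probability that B is better.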

Pros of Frequentist Testing:

  • Familiar to statisticians and data scientists.
  • Matches legacy processes in many organizations.

Cons of Frequentist Testing:

  • Results can be confusing for non-experts.
  • Easy to misinterpret, leading to “false positives” if you peek at results too often.
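That peeking risk is easy to demonstrate with a short simulation: run many A/A tests (where there is no real difference at all) and compare the false positive rate of a single final check against checking after every batch of traffic (synthetic data, for illustration only):

```python
import math, random

random.seed(7)

def z_significant(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    # Two-proportion z-test at the conventional 5% two-sided level
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    if se == 0:
        return False
    return abs(conv_b / n_b - conv_a / n_a) / se > z_crit

def run_aa_test(peek):
    ca = cb = 0
    for batch in range(1, 11):                 # 10 batches of 500 visitors per arm
        ca += sum(random.random() < 0.10 for _ in range(500))
        cb += sum(random.random() < 0.10 for _ in range(500))
        n = batch * 500
        if peek and z_significant(ca, n, cb, n):
            return True                        # stopped early on a phantom "winner"
    return z_significant(ca, 5_000, cb, 5_000)

sims = 200
final_only = sum(run_aa_test(peek=False) for _ in range(sims)) / sims
peeking = sum(run_aa_test(peek=True) for _ in range(sims)) / sims
print(f"false positive rate, single final check: {final_only:.1%}")
print(f"false positive rate, peeking 10 times:   {peeking:.1%}")
```

Checking once keeps the false positive rate near the promised 5%; checking ten times and stopping at the first "significant" result inflates it well beyond that, even though nothing changed.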

CUPED Testing Model: The Speedster (But Only for the Right Crowd)

CUPED (Controlled Experiment Using Pre-Experiment Data) is designed to go fast by using data from before your experiment even started. By comparing your test results to users’ past behavior, CUPED can reduce the noise and help you reach conclusions quicker.

But here’s the twist: CUPED only shines when your users come back again and again, like on streaming platforms (Netflix) or big SaaS products (Microsoft). If you have an e-commerce site, CUPED can actually steer you wrong, leading to misleading results.

For most e-commerce teams, CUPED is a bit like putting racing tires on a city bike, not the best fit. But if you’re running experiments on a platform with high user recurrence, it can be a powerful tool in your kit.

Pros of CUPED Testing:

  • Can deliver faster, more precise results for high-recurrence platforms.
  • Makes the most of your existing data.

Cons of CUPED Testing:

  • Not suitable for most e-commerce or low-frequency sites.
  • Can lead to errors if used in the wrong context.
  • More complex to set up and explain.

Sequential Testing Model: The Early Warning System

Sequential testing is your experiment’s smoke alarm. Instead of waiting for a set number of visitors, it keeps an eye on your results as they come in. If things are going south – say, your new checkout flow is tanking conversions – it can sound the alarm early, letting you stop the test and save precious traffic.

But don’t get too trigger-happy. Sequential testing is fantastic for spotting losers early, but it’s not meant for declaring winners ahead of schedule. If you use it to crown champions too soon, you risk falling for false positives – those pesky results that look great at first but don’t hold up over time.

At AB Tasty, we use sequential testing as an early warning system. It helps our clients avoid wasting time and money on underperforming ideas, but we always recommend waiting for the full story before popping the champagne.


Pros of Sequential Testing:

  • Helps you spot and stop losing tests quickly.
  • Saves resources by not running doomed experiments longer than necessary.

Cons of Sequential Testing:

  • Not designed for picking winners early.
  • Can lead to mistakes if used without proper guidance.

Which Statistical Model is Best for A/B Testing?

If you’re looking for a model that’s easy to use, hard to misuse, and perfect for making fast, confident decisions, Bayesian is your best bet – especially if you’re in e-commerce or digital marketing. It’s the model we recommend for most teams, and it’s the default for a reason.

If you have a team of data scientists who love their p-values, or you’re working in a highly regulated environment, Frequentist might be the way to go. Just be sure everyone’s on the same page about what those numbers really mean.

Running a streaming service or a platform where users log in daily? CUPED could help you speed things up – just make sure you’ve got the right data and expertise.

And if you want to keep your experiments safe from disasters, Sequential is the perfect early warning system.

Conclusion: The Right A/B Testing Model for the Right Job

Choosing a statistical model for A/B testing doesn’t have to be a headache. Think about your team, your users, and your goals. For most, Bayesian is the friendly, reliable choice that keeps things simple and actionable. But whichever model you choose, remember: the best results come from understanding your tools and using them wisely.

Ready to run smarter, safer, and more successful experiments? Pick the model that fits your needs—and don’t be afraid to ask for help if you need it. After all, even the best chefs need a good recipe now and then.

Hungry for more?
Check out our guides on Bayesian vs. Frequentist A/B Testing and When to Use CUPED. Happy testing!

Article

7min read

Is Your Average Order Value (AOV) Misleading You?

Average Order Value (AOV) is a widely used metric in Conversion Rate Optimization (CRO), but it can be surprisingly deceptive. While the formula itself is simple—summing all order values and dividing by the number of orders—the real challenge lies within the data itself.

The problem with averaging

AOV is not a “democratic” measure. A single high-spending customer can easily spend 10 or even 100 times more than your average customer, and these few extreme buyers can heavily skew the average, giving a handful of visitors disproportionate influence over hundreds or thousands of others. This is problematic because you can’t trust the significance of an observed AOV effect if it’s driven by a tiny fraction of your audience.
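A toy calculation makes the distortion obvious (the order values are made up for illustration):

```python
# How a single large order distorts AOV
typical_orders = [20, 25, 18, 30, 22, 27, 19, 24, 21, 26]   # ten €20-ish orders
aov = sum(typical_orders) / len(typical_orders)
print(f"AOV without outlier: €{aov:.2f}")

with_whale = typical_orders + [800]                          # add one €800 order
aov_whale = sum(with_whale) / len(with_whale)
print(f"AOV with one €800 order: €{aov_whale:.2f}")
```

One order out of eleven is enough to roughly quadruple the "average" order, even though ten of your eleven customers behaved exactly the same.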

Let’s look at a real dataset to see just how strong this effect can be. Consider the order value distribution:

  • The horizontal axis represents the order value.
  • The vertical axis represents the frequency of that order value.
  • The blue surface is a histogram, while the orange outline is a log-normal distribution approximation.

This graph shows that the most frequent order values are small, around €20. As the order value increases, the frequency of such orders decreases. This is a “long/heavy tail distribution,” meaning very large values can occur, albeit rarely.

When looking at AOV, a single strong buyer with an €800 order carries 40 times the weight of a typical €20 buyer. This is an issue because a small change in the behavior of 40 visitors is a far stronger signal than a large change from one unique visitor. And while not fully visible at this scale, even more extreme buyers exist.

The next graph, using the same dataset, illustrates this better:

  • The horizontal axis represents the size of the growing dataset of order values (roughly indicating time).
  • The vertical axis represents the maximum order value in the growing dataset, in €.

At the beginning of data collection, the maximum order value is quite small (close to the most frequent value of ~€20). However, we see that it grows larger as time passes and the dataset expands. With a dataset of 10,000 orders, the maximum order value can exceed €5,000. This means any buyer with an order above €5,000 (they might have multiple) holds 250 times the power of a frequent buyer at €20. At the maximum dataset size, a single customer with an order over €20,000 can influence the AOV more than 2,000 other customers combined.
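You can reproduce this "the maximum keeps growing" behavior with a quick simulation. The log-normal parameters below are illustrative, not fitted to the dataset above:

```python
import random

# Draw orders from a long-tail (log-normal) distribution and watch the
# maximum order value climb as the dataset expands.
random.seed(3)
orders = []
maxes = []
for size in (100, 1_000, 10_000, 100_000):
    while len(orders) < size:
        orders.append(random.lognormvariate(mu=4.0, sigma=1.0))
    maxes.append(max(orders))
    print(f"after {size:>7,} orders, max order value ≈ €{maxes[-1]:,.0f}")
```

The exact figures depend on the parameters, but the pattern is the same as in the real data: the longer you collect, the more extreme the most extreme order becomes.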

When looking at your e-commerce metrics, AOV should not be used as a standalone basis for decision-making.


The challenge of AB Test splitting

The problem intensifies when considering the random splits used in A/B tests.

Imagine you have only 10 very large spenders whose collective impact equals that of 10,000 medium buyers. There’s a high probability that the random split for such a small group of users will be uneven. While the overall dataset split is statistically even, the disproportionate impact of these high spenders on AOV requires specific consideration for this small segment. Since you can’t predict which visitor will become a customer or how much they will spend, you cannot guarantee an even split of these high-value users.

This phenomenon can artificially inflate or deflate AOV in either direction, even without a true underlying effect, simply depending on which variation these few high spenders land on.
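A quick A/A simulation shows the effect: one pool of orders, no real difference between variations, yet random splits of a few "whales" create sizable AOV gaps (synthetic data for illustration):

```python
import random

# 5,000 ordinary long-tail orders plus ten very large ones, split 50/50
# at random many times. There is no real effect, only split noise.
random.seed(11)
orders = [random.lognormvariate(3.0, 1.0) for _ in range(5_000)]
orders += [random.uniform(3_000, 8_000) for _ in range(10)]   # ten "whales"

gaps = []
for _ in range(200):
    random.shuffle(orders)
    half = len(orders) // 2
    a, b = orders[:half], orders[half:]
    aov_a, aov_b = sum(a) / len(a), sum(b) / len(b)
    gaps.append(abs(aov_b - aov_a) / aov_a)

print(f"largest AOV gap observed with zero real effect: {max(gaps):.1%}")
```

Every one of those gaps is pure noise: the same orders, reshuffled. If a real test showed a gap of that size, raw AOV alone could not tell you whether it was signal or a lucky whale split.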

What’s the solution?

If AOV is such an unreliable metric, how can we work with it effectively? The answer is similar to how you approach conversion rates and experimentation.

You don’t trust raw conversion data—one more conversion on variation B doesn’t automatically make it a winner, nor do 10 or 100. Instead, you rely on a statistical test to determine when a difference is significant. The same principle applies to AOV. Tools like AB Tasty offer the Mann-Whitney test, a statistical method robust against extreme values and well-suited for long-tail distributions.
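To see why a rank-based test resists extreme values, here is a minimal Mann-Whitney U sketch using the normal approximation (no tie correction; a simplified stand-in for a production implementation, with made-up order values):

```python
import math

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic and two-sided p-value via the normal
    approximation (assumes reasonably sized samples; ignores ties)."""
    u = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    n1, n2 = len(x), len(y)
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    return u, math.erfc(abs(z) / math.sqrt(2))

# Variation B shifts almost every order up slightly; one extreme order
# lands in A. Raw AOV favors A, but the rank-based test favors B.
a = [20, 22, 19, 25, 21, 23, 18, 24, 20, 900]
b = [26, 28, 25, 31, 27, 29, 24, 30, 26, 32]
print(f"AOV A = €{sum(a)/len(a):.0f}, AOV B = €{sum(b)/len(b):.0f}")
u, p = mann_whitney_u(a, b)
print(f"Mann-Whitney p-value = {p:.4f}")
```

Because the test works on ranks rather than raw euros, the €900 order counts as just one high rank instead of dominating the whole comparison.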

AOV behavior can be confusing because you’re likely accustomed to the more intuitive statistics of conversion rates. Conversion data and their corresponding statistics usually align; a statistically significant increase in conversion rate typically means a visibly large difference in the number of conversions, consistent with the statistical test. However, this isn’t always the case with AOV. It’s not uncommon to see the AOV trend and the statistical results pointing in different directions. Your trust should always be placed in the statistical test.

The root cause: Heavy tail distributions

You now understand that the core issue stems from the unique shape of order value distributions: long-tail distributions that produce rare, extreme values.

It’s important to note that the problem isn’t just the existence of extreme values. If these extreme values were frequent, the AOV would naturally be higher, and their impact would be less dramatic because the difference between the AOV and these values would be smaller. Similarly, for the splitting problem, a larger number of extreme values would ensure a more even split.

At this point, you might think your business has a different order distribution shape and isn’t affected. However, this shape emerges whenever these two conditions are met:

  • You have a price list with more than several dozen different values.
  • Visitors can purchase multiple products at once.

Needless to say, these conditions are ubiquitous and apply to nearly every e-commerce business. The e-commerce revolution itself was fueled by the ability to offer vast catalogues.

Furthermore, the presence of shipping costs naturally encourages users to group their purchases to minimize those costs, which reinforces the effect. The only exceptions are subscription-based businesses with limited pricing options, where most purchases are for a single service.

Here’s a glimpse into the order value distribution across various industries, demonstrating the pervasive nature of the “long tail distribution”:

  • Cosmetics
  • Transportation
  • B2B packaging (selling packaging for e-commerce)
  • Fashion
  • Online flash sales

AOV, despite its simple definition and apparent ease of understanding, is a misleading metric. Its magnitude is easy to grasp, leading people to confidently make intuitive decisions based on its fluctuations. However, the reality is far more complex; AOV can show dramatic changes even when there’s no real underlying effect.

Conversely, significant changes can go unnoticed. A strong negative effect could be masked by just a few high-spending customers landing in a poorly performing variation. So, now you know: just as you do for conversion rates, rely on statistical tests for your AOV decisions.

Article

3min read

Experiment Health Check: Proactive Monitoring for Reliable Experimentation

Introduction

Running hundreds of experiments each year is a sign of a mature, data-driven organization – but it also comes with challenges.

How do you ensure that every test is running smoothly, and that critical issues don’t slip through the cracks?

At AB Tasty, we’ve listened to our clients’ pain points and are excited to announce the launch of Experiment Health Check: a new feature designed to make experimentation safer, smarter, and more efficient.

The Challenge: Keeping Experiments Healthy at Scale

For leading brands running over 100 campaigns a year, experimentation is at the heart of digital optimization.

But with so many campaigns running simultaneously, manually checking reports every day to spot issues is time-consuming and inefficient. Worse, problems like underperforming variations or sample ratio mismatches (SRM) can go unnoticed, leading to lost revenue or inconclusive results.

Our Solution: Experiment Health Check

Experiment Health Check is an automated monitoring system built directly into AB Tasty. It proactively alerts you to issues in your experiments, so you can act fast and keep your testing program on track.

Key Features:

  • Automated Alerts: Get notified in-product (and by email, if you choose) when an experiment encounters a critical issue, such as:
    • Underperforming variations (sequential testing alert)
    • SRM (Sample Ratio Mismatch) problems
  • Centralized Dashboard: Super-admins can view all alerts across accounts for a global overview.
  • Customizable Notifications: Choose which alerts to display and how you want to receive them.
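As a sketch of what an SRM check computes: for a 50/50 test, it is a chi-square goodness-of-fit test on the visitor counts. With two groups there is one degree of freedom, so the p-value reduces to a closed form (the visitor numbers below are illustrative, not thresholds AB Tasty uses):

```python
import math

def srm_p_value(visitors_a, visitors_b):
    # Chi-square goodness-of-fit against an expected 50/50 split.
    # For 1 degree of freedom, P(X > stat) = erfc(sqrt(stat / 2)).
    expected = (visitors_a + visitors_b) / 2
    stat = ((visitors_a - expected) ** 2 + (visitors_b - expected) ** 2) / expected
    return math.erfc(math.sqrt(stat / 2))

print(f"5,000 vs 5,050 visitors: p = {srm_p_value(5_000, 5_050):.3f}")   # plausible noise
print(f"5,000 vs 5,600 visitors: p = {srm_p_value(5_000, 5_600):.2e}")   # likely SRM
```

A small imbalance is expected noise; a p-value this tiny for the second split signals that the traffic allocation itself is broken and the test's results cannot be trusted.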

Why It Matters

  • Proactive, Not Reactive: No more waiting until the end of a test or sifting through reports to find problems. Experiment Health Check surfaces issues as soon as they’re detected.
  • Saves Time: Focus on insights and strategy, not manual monitoring.
  • Peace of Mind: Most clients will rarely see alerts – only about 2% of campaigns encounter SRM issues – so you can be confident your experiments are running smoothly.

What’s Next?

Experiment Health Check is available to all AB Tasty clients as of June 2025.

Simply activate it in your dashboard to start benefiting from automated experiment monitoring. We’re committed to evolving this feature with more alert types and integrations based on your feedback.