Article

7min read

Personalization Approach Remastered | David Mannheim

David Mannheim explains a remastered approach to personalization for long-term customer loyalty

With over 15 years of experience in digital businesses, David Mannheim has helped many companies, such as ASOS, Sports Direct and Boots, to improve and personalize their digital experience and conversion strategy. He was also the founder of one of the UK’s largest independent conversion optimization consultancies – User Conversion.

With his experience as an advisor helping e-commerce businesses to innovate and iterate on personalization and creativity at speed, David has recently published a book that tackles the “Person in Personalisation”: why he believes personalization has lost its purpose and what to do about it. David is currently building a solution to tackle this epidemic with his new platform, Made With Intent – a product that helps retailers understand the intent and mindset of their audience, not just their behaviors or what page they’re on.

AB Tasty’s VP Marketing Marylin Montoya spoke with David about the current state of personalization and the importance of going back to the basics and focusing on putting the person back in personalization. He also highlights the need for brands to build a relationship with customers based on trust and loyalty, particularly in the digital sphere instead of focusing on immediate gratification.

Here are some key takeaways from their conversation. 

Personalization is about being personal

David stresses the importance of not forgetting the first three syllables at the beginning of personalization. In other words, it’s imperative to remember that personalization is about being personal and putting the person at the heart of everything – it’s all about customer-centricity.

For David, personalization nowadays has become too commercialized and too focused on immediate gratification. Instead, the focus should be on metrics such as customer lifetime value and loyalty. Personalization should be a strategic value add rather than a tactical add-on used solely to drive short-term sales and growth. 

“If we move our metrics to focus more on the long-term metrics of customer satisfaction, more quality than quantity, more about customer lifetime value and loyalty as well as recognizing the intangibles, not just the tangibles, I think that puts brands in a much better place.”

He further argues that there is a sort of frustration point when it comes to the topic of personalization and who actually does it well. This frustration was clear after David interviewed 153 experts for his book, most of whom struggled to answer the question of “who does personalization well” and found it difficult to name any brands outside of the typical “big players” such as Netflix and Amazon.

This frustration, David believes, stems from the difficulty of replicating an in-store experience in a human-to-screen relationship. Nonetheless, when customers are loyal to a brand, that same loyalty should be reciprocated from the brand side as well to make a customer feel they’re more than just a number. The idea is to achieve a sort of familiarity and acknowledgment with the customer and create a genuine, authentic relationship with them. This is the key to unlocking customer-centricity. 

It’s about offering a personalized experience that focuses on adding value for each individual customer, rather than extracting value, where customers end up with a commercialized experience geared towards driving growth for the company itself.

Disparity between brands’ and customers’ perceptions of personalization 

Citing Sailthru’s Personalization Index, David refers to a particular finding in their yearly report: 71% of brands think they excel in personalization, but only 34% of customers actually agree.

In that sense, there’s a mismatch between customers’ expectations and brands’ own expectations of what is competent customer service.

He refers to recommendations as one example that brands primarily incorporate into their personalization strategy. However, he believes recommendations only address the awareness part of the AIDA model (Awareness, Interest, Desire and Action).

“Product discovery for me is only one piece of a puzzle. If you take personalization back to what it’s designed to be, to be personal, well, where is the familiarity? Where’s the acknowledgment? Where’s the connection? Where’s the conversation?” David argues.

What’s missing is a core, intangible ingredient that helps create a relationship between two individuals, in this case, a human and a brand. Because brands have difficulty pinpointing what that is, they choose instead to base their personalization strategy on something more tangible and visible – recommendations.

For brands, recommendations have become so embedded in customer expectations that they have come to stand for personalization itself, particularly because that’s the approach the “bigger” brands have adopted when it comes to personalizing the user experience.

“It becomes an expectation. I go on X website so I expect the bare minimum, which is to see things that are relevant to what I search for or the things that I’m interested in… This is what people associate personalization with,” David says.

Recommendations are an essential first step of personalization but David argues the future of personalization requires brands to go even further.

Brands should focus on building trust

In order for brands to build that sense of familiarity and truly become more personal with customers, brands need to take personalization to the next stage beyond awareness. For example, customers should be able to trust that a brand is recommending to them what they actually need rather than what makes the most profit.

David believes that the concept of trust is missing in a human-to-screen relationship, which is what’s hindering brands from reaching that next level.

In other words, it’s all about transforming the whole approach of personalization along with its purpose to demonstrate greater care with the few rather than “trying to get the many” to establish trust with customers. Brands should shift their focus to care, which David believes is what makes a brand truly customer-centric.

“I think it’s an initiative, if you can call it that, to focus on care. It does make the brand more customer-centric. You’re putting the customer, their experiences and expectations first with the purpose of providing a better experience for them.”

In that sense, two crucial aspects play into the concept of trust, according to David: competence and care.

Brands need to be competent: customers should be able to trust that they’re being recommended the most suitable products for their needs rather than the ones with the highest profit margin – in other words, products that are best for the customer, not the business. At the same time, brands need to demonstrate care by being more personable with customers, creating a connection between brand and consumer.

“The more caring you are, the more you can demonstrate trust,” David says.

“Think of banking. Banking demonstrates all the competence in the world, but no care whatsoever. And that therefore destroys their trust. Think of the other way around. Think of your grandma giving you a sweater at Christmas. I’m sure you trust your grandma, but you won’t trust her to buy you a Christmas present, for example.”

For David, context is a prerequisite for trust and the best way to understand user context is through intent, which is where the difference between persuasion and manipulation lies. This is why he has been busy building Made With Intent for the past 8 months focused on that very same concept. 

When it comes to recommendations, in particular, it’s essential to contextualize them and understand customer intent. Only then can a brand excel at its recommendation strategy and create a relationship of trust where customers can be confident they’re being recommended products unique to them only.

What else can you learn from our conversation with David Mannheim?

  • His take on AI and its role in personalization
  • Ways brands can demonstrate care to build trust and familiarity with their consumers
  • How brands can shift their personalization approach

About David Mannheim

David has worked in the digital marketing industry for over 15 years and along with founding one of the UK’s largest independent conversion optimization consultancies, he has worked with some of the UK’s biggest retailers to improve and personalize their digital experience and conversion strategy. Today, David has published his own book about personalization and is also building a new platform that helps retailers understand the intent and mindset of their audience, not just their behaviors or what page they’re on.

About 1,000 Experiments Club

The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.

Article

6min read

Maximize the Potential of Experience Optimization Platforms: Key Questions to Ask for Performance Success

In the dynamic realm of e-commerce, selecting the right experience optimization platform (EOP) is essential for achieving success. But, how do you assess the impact on your website performance and unleash its full potential on your site?

We’re here to guide you with key questions to ask experimentation and personalization solutions you’re assessing, specifically designed to help you evaluate performance – so buckle up and continue reading to unlock new levels of success!

Bonus audio resource: Curious to know more about what AB Tasty does to address performance and optimize customer experience? Listen to this insightful discussion between Léo, one of our product managers, and Margaret, our product marketing manager. In this chat, Léo explains what AB Tasty specifically does to improve performance for our customers. Want to know even more? Check out Léo’s in-depth blog post.

#1: Does the platform offer 99.9% uptime and availability?

Downtime can be a nightmare for your business. Make sure the EOP is known for its reliability and high uptime. Although it might not sound like a big deal, the difference between 99.5% uptime and 99.9% uptime is huge. With 99.9% uptime, you can expect less than 9 hours of downtime annually, vs. 99.5% which can mean nearly 2 full days of downtime in a year. It’s crucial to choose a platform that can keep your website accessible to customers as often as possible, ensuring a seamless shopping experience around the clock and more revenue for your business.
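As a quick arithmetic check on those uptime figures (a minimal sketch, assuming a standard 8,760-hour, non-leap year):

```python
HOURS_PER_YEAR = 24 * 365  # 8,760 hours in a non-leap year

def annual_downtime_hours(uptime_pct: float) -> float:
    """Maximum downtime per year implied by an uptime percentage."""
    return HOURS_PER_YEAR * (1 - uptime_pct / 100)

print(annual_downtime_hours(99.9))       # 8.76 hours -> "less than 9 hours"
print(annual_downtime_hours(99.5) / 24)  # ~1.8 days -> "nearly 2 full days"
```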

#2. Does the platform prioritize website speed and load time?

It goes without saying that in the fast-paced online world, speed matters. Does the EOP offer features that prioritize website load time? Look for optimization techniques such as caching, image compression and code optimization to ensure quick and smooth page loading. A snappy website keeps customers engaged and drives conversions.

#3. Does the platform provide a comprehensive performance center?

Acting on detailed performance data ensures your website is always giving users the best experience. Does the EOP offer comprehensive insights into reducing the tag or campaign weight for optimal performance and user experience? Your EOP should have a performance center that guides you to campaign optimization, including ways to reduce tag weight, identify heavy or old campaigns you can delete, or targeting verification.

#4. Do the performance metrics they’re showing you come from sites that are active?

Some EOPs might show you performance metrics that include sites that aren’t actually active. An inactive site has a much lighter tag weight than an active site, which makes their performance metrics look much better than they actually are. Always ask the EOP if their metrics are from active sites to ensure you’re seeing the most accurate representation of what you can expect if you go with them.

#5. Are they regularly adding new features to enhance performance?

To stay ahead in the rapidly evolving digital ecosystem, it’s imperative that your EOP consistently adds new features to optimize performance. With regular updates like these, you can ensure you’re meeting user expectations, addressing emerging challenges, enhancing performance metrics, and keeping an edge on the competition.

Take, for example, dynamic imports. Using dynamic imports has a huge advantage. Under a monolithic approach, as some EOPs still use, removing a semi-colon in one campaign and pushing that change to production meant that all visitors had to download the full package again, even though only one character out of tens of thousands had changed. With dynamic imports, visitors redownload only the new version of that one campaign – and that’s it. Simple.

#6. Can the platform handle spikes in web traffic?

E-commerce sites often face surges in traffic during peak periods or promotional events like Black Friday. How does the EOP handle increased web traffic without compromising performance? Look for platforms with content delivery networks (CDNs) that handle load balancing and scalability to ensure your website remains stable and accessible during high-demand periods.

#7. Does the platform have both server-side and client-side offers?

Having both server-side and client-side EOPs is crucial for e-commerce companies, especially given how much e-commerce is happening on mobile and apps. Server-side optimizes performance with zero flicker and a seamless mobile experience, while client-side enhances user experience and puts the power of experimentation and personalization into the hands of marketers, freeing up developer time. Utilizing both platforms enables holistic optimization and consistent experiences, drives business growth, and leads to more satisfied customers.

#8. What level of local customer support and documentation does the platform offer?

Technical support and comprehensive documentation are vital for a smooth experience with your platform. What kind of reliable customer support channels does the EOP provide? Look for platforms that offer timely assistance in your locality and language, and extensive documentation, empowering you to resolve issues and make the most of your platform’s features. Review peer review sites like G2 to see which EOPs consistently offer the best service.

#9. Is the platform scalable and adaptable to future needs?

As your e-commerce business grows, your optimization needs may change. To what degree is the EOP scalable and flexible enough to accommodate future requirements without affecting performance? Does the platform have well-known medium and large client brands with high traffic demands? Choose a platform that can adapt to evolving business goals and easily incorporate new features. This ensures the platform remains aligned with your growing needs.

#10. Can you test out the tag for yourself?

Tags should be easy to implement. You want to make sure that the one you go with is compatible with your system. While industry reports can give you an idea of what you can expect, they aren’t representative of your site. The best way to tell is to test it for yourself on your site. This lets you see if what the EOP says is actually what you get. It can also give you an idea of implementation, use, accuracy, reliability and confidence. Finally, it lets you see if there may be any issues that could arise and gives the EOP a chance to address them immediately.

Evaluate the Performance of EOPs to unlock your potential

By asking these key questions, you can begin to evaluate the performance of experience optimization platforms and ensure you select one that helps you unlock your potential. Focus on uptime, speed, traffic handling, mobile optimization, integration capabilities, support, and scalability – and ensure the EOP has an answer for every one of these questions, with proof to back it up. This way, you’ll be able to make an informed decision and optimize your e-commerce site for a seamless user experience, driving higher conversions and business growth.

Go through the checklist below, whether you have an EOP already in place, or are looking to start your EOP journey, and ask providers what they offer:

☑️ Does the platform offer 99.9% uptime and availability?
☑️ How does the platform prioritize website speed and load time?
☑️ What does the platform’s performance center look like?
☑️ How does the platform handle spikes in traffic?
☑️ Does the platform offer both server-side and client-side optimization?
☑️ Does the platform integrate with the tools and systems that you already use?
☑️ What level of support and documentation does the platform offer?
☑️ Is the platform scalable and adaptable to your future business needs?

Article

11min read

CRO Metrics: Navigating Pitfalls and Counterintuitive KPIs

Metrics play an essential role in measuring performance and influencing decision-making.

However, relying on certain metrics alone can lead you to misguided conclusions and poor strategic choices. Potentially misleading metrics are often referred to as “pitfall metrics” in the world of Conversion Rate Optimization.

Pitfall metrics are data indicators that can give you a distorted or incomplete view of your performance if analyzed in isolation. If you’re not careful about how you evaluate them, pitfall metrics can even set your performance back.

Metrics are typically split into two categories:

  • Session metrics: Any metrics that are measured on a session instead of a visitor basis
  • Count metrics: Metrics that count events (for instance number of pages viewed)

Some metrics fall into both categories. Needless to say, that’s the worst case, for a few main reasons: no sound statistical model applies to metrics that mix both categories, there is no direct or simple link to business objectives, and these metrics may not be suited to standard optimization.

While metrics are very valuable for business decisions, it’s crucial to use them wisely and be mindful of potential pitfalls in your data collection and analysis. In this article, we will explore why some metrics are unwise to use in practice in CRO.

Session-based metrics vs. visitor-based metrics

One problem with session-based metrics is that “power users” (AKA users returning for multiple sessions during the experiment) will bias the results.

Let’s remember that during experimentation, the traffic split between the variations is a random process.

Typically you think of a traffic split as producing random but even groups. When we talk about big groups of users, this is generally true. However, with a small group, it’s very unlikely that you will get an even split in terms of visitor behaviors, intentions and types.

Let’s say that you have 12 power users that need to be randomly divided between two variations, and that these power users have 10x more sessions than the average user. It’s quite likely that you will end up with a 4 and 8 split, a 2 and 10 split, or some other uneven split: an exactly even 6–6 split occurs less than a quarter of the time. You will then end up in one of two very likely situations:

  • Situation 1: A handful of power users may make you believe you have a winning variation (which doesn’t actually exist)
  • Situation 2: A genuinely winning variation is masked because it received too few of these power users
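A minimal simulation sketch of that random assignment, using the 12-power-user example above (the trial count is arbitrary), shows how rarely the split lands exactly even:

```python
import random
from collections import Counter

random.seed(42)

def split_power_users(n_users: int = 12, trials: int = 100_000) -> Counter:
    """Randomly assign n_users power users to variation A or B (50/50)
    and tally how many land in A on each trial."""
    counts = Counter()
    for _ in range(trials):
        in_a = sum(random.random() < 0.5 for _ in range(n_users))
        counts[in_a] += 1
    return counts

counts = split_power_users()
even = counts[6] / sum(counts.values())
# An exactly even 6-6 split occurs only ~23% of the time; more than
# 3 experiments in 4 start with an uneven power-user split.
print(f"P(exact 6-6 split) ≈ {even:.2f}")
```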

Another problem with session-based metrics is that a session-based approach blurs the meaning of important metrics like transaction rates. The recurring problem here is that not all visitors display the same type of behavior. If average buyers need 3 sessions to make a purchase while some need 10, this is a difference in user behavior and does not have anything to do with your variation. If your slow buyers are not evenly split between the variations, then you will see a discrepancy in the transaction rate that doesn’t actually exist.

Moreover, the metric itself loses part of its intuitive meaning. If your real conversion rate is around 3% of unique visitors, counting by session will likely show only around 1%.

This is not only disappointing but very confusing.

Imagine a variation urging visitors to buy sooner by using “stress marketing” techniques. Let’s say this leads to a purchase in one session instead of three. You will see a huge gain (3x) on conversion per session. BUT this “gain” is not an actual gain, since the number of transactions – and therefore the revenue earned – is unchanged. It’s also good to keep in mind that visitors under pressure may not feel very happy or comfortable with such a quick purchase and may not return.

It’s best practice to avoid session-based metrics unless you have no other choice, as they can be very misleading.

Understanding count metrics

We will come back to our comparison of these two types of metrics. But for now, let’s get on the same page about “count metrics.” To understand why count metrics are harder to assess, you need to have more context on how to measure accuracy and where exactly the measure comes from.

To model the accuracy of a rate measurement, we use the beta distribution. In the graph below, we see the measure of two conversion rates – one blue and one orange. The X-axis is the rate and the Y-axis is the likelihood. When trying to measure the probability that the two rates are different, we implicitly explore the part of the two curves that overlap.

In this case, the two curves have very little overlap. Therefore, the probability that these two rates are actually different is quite high.

The narrower or more compact the distribution is, the easier it is to see that the two rates are different.
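A rough sketch of how that overlap can be quantified, sampling from Beta posteriors with the standard library – the traffic and conversion numbers here are purely hypothetical, and a uniform Beta(1, 1) prior is assumed:

```python
import random

random.seed(0)

def prob_b_beats_a(conv_a: int, n_a: int, conv_b: int, n_b: int,
                   draws: int = 100_000) -> float:
    """Monte Carlo estimate of P(rate_B > rate_A), modelling each
    conversion rate as a Beta(conversions + 1, non-conversions + 1)
    posterior (uniform prior)."""
    wins = 0
    for _ in range(draws):
        ra = random.betavariate(conv_a + 1, n_a - conv_a + 1)
        rb = random.betavariate(conv_b + 1, n_b - conv_b + 1)
        wins += rb > ra
    return wins / draws

# Hypothetical A/B numbers: 300/10,000 vs 360/10,000 conversions.
p = prob_b_beats_a(300, 10_000, 360, 10_000)
print(f"P(B > A) ≈ {p:.3f}")
```

The less the two posterior curves overlap, the closer this probability gets to 1.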


The fundamental difference between conversion and count distributions

Conversion metrics are bounded in [0,1] as a rate, or [0%,100%] as a percentage. For count metrics, however, the range is open: counts lie in [0,+∞).

The following figure shows a gamma distribution (in orange) that may be used with this kind of data, along with a beta distribution (in blue).

These two distributions are based on the same data: 10 visitors and 5 successes. This is a 0.5 success rate (or 50%) when considering unique conversions. In the context of multiple conversions, it’s a process with an average of 0.5 rate conversion per visitor.

Notice that the orange curve (for the count metric) is non-zero above x = 1, which clearly shows that it expects there will sometimes be more than 1 conversion per visitor.

We will see that comparisons between this kind of metric depend on whether we consider it as a count metric or as a rate. There are two options:

  • Either we consider that the process is a conversion process, using a beta distribution (in blue), which is naturally bounded in [0,1].
  • Or we consider that the process is a count process, using a gamma distribution (in orange), which is not bounded on the right side.

On the graph, we see an inherent property of count data distributions: they are asymmetric, with a right tail that goes to 0 more slowly than the left. This makes them naturally more spread out than the beta distribution.

Since both curves are distributions, their surface under the curve must be 1.

As you can see, the beta distribution (in blue) has a higher peak than the gamma distribution (in orange), which shows that the gamma distribution is more spread out. This is a hint that count metrics are harder to estimate accurately than conversion metrics, and it is also why we need more visitors to assess a difference with count metrics than with conversion metrics.
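A small sketch of that spread difference, using the 10-visitors / 5-conversions example. The exact parameterizations – Beta(5, 5) and Gamma(shape=5, scale=1/10), both with mean 0.5 – are our own assumption, chosen only to make the two views of the same data comparable:

```python
import random
import statistics

random.seed(1)
N = 100_000

# Same data viewed two ways: 10 visitors, 5 conversions.
# Rate view: Beta(5, 5), bounded in [0, 1], mean 0.5.
beta_draws = [random.betavariate(5, 5) for _ in range(N)]
# Count view: Gamma(shape=5, scale=1/10), mean 0.5, unbounded above.
gamma_draws = [random.gammavariate(5, 1 / 10) for _ in range(N)]

beta_sd = statistics.stdev(beta_draws)
gamma_sd = statistics.stdev(gamma_draws)
print(f"beta sd  ≈ {beta_sd:.3f}")   # ≈ 0.151
print(f"gamma sd ≈ {gamma_sd:.3f}")  # ≈ 0.224: noticeably more spread out
```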

To understand this problem, imagine two gamma distribution curves, one for each variation of an experiment. Then gradually shift one to the right, showing an increasing difference between the two distributions (see figure below).

Since both curves are right-skewed, the overlap region will occur on at least one of the skewed parts of the distributions.

This means that differences will be harder to assess with count data than with conversion data. This comes from the fact that count data works on an open range, whereas conversion rates work on a closed range.

Do count metrics need more visitors to get accurate results?

It is more complex than that in the CRO context: typical statistical tests for count metrics are not suited to CRO in practice.

Most of these tests come from the industrial world. A classic usage of count metrics is counting the number of failures of a machine in a given timeframe. In this context, the risk of failure doesn’t depend on previous events. If a machine already had one failure and has been repaired, the chance of a second failure is considered to be the same.

This hypothesis is not suited to the number of pages viewed by a visitor. In reality, a visitor who has already seen two pages has a higher chance of seeing a third than a visitor who has seen just one (since the latter has a high probability of “bouncing”).

The industrial model does not fit the CRO context, which deals with human behavior and is therefore much more complex.

Not all conversions have the same value

The next CRO struggle also comes from the direct exploitation of formulas from the industrial world.

If you run a plant that produces goods with machines, and you test a new kind of machine that produces more goods per day on average, you will conclude that these new machines are a good investment. Because the value of a machine is linear with its average production, each extra product adds the same value to the business.

But this is not the same in CRO.

Imagine this experiment result for a media company:

Variation B yields 1,000 more page views than the original A. Based on that data, you put variation B into production. But suppose variation B lost 500 people who each saw 2 pages (−1,000 page views) and won 20 people who saw 100 pages each (+2,000 page views). That makes a net benefit of 1,000 page views for variation B.

But what about the value? These 20 people, even if they spent a lot of time on the site, may not be worth as much as 500 people who come back regularly.

In CRO, each extra unit added to a count metric does not carry the same value, so you cannot treat a measured increment as direct added value.

In applied statistics, one adds an extra layer to the analysis: a utility function, which links extra counts to value. This utility function is very specific to the problem and is unknown for most CRO problems. So even if you gain some conversions on a count metric, you cannot be sure of the real value of that gain (if any).
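A toy illustration of such a utility function, applied to the media example above. The value thresholds are entirely hypothetical; the point is only that value need not be linear in page views:

```python
def visitor_value(page_views: int) -> float:
    """Hypothetical utility function mapping a visitor's page views
    to business value. Assumption: bouncers are worth nothing,
    value grows with engagement, then plateaus."""
    if page_views <= 1:
        return 0.0                 # bouncers: no value
    if page_views <= 10:
        return 1.0 * page_views    # engaged readers: value grows with views
    return 10.0                    # marathon sessions: value plateaus

# The experiment above: B lost 500 two-page visitors (-1,000 page views)
# and won 20 hundred-page visitors (+2,000 page views).
lost = 500 * visitor_value(2)    # 1,000 value units lost
won = 20 * visitor_value(100)    # only 200 value units won
print(won - lost)  # -800: more page views, but less business value
```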

Some count metrics are not meant to be optimized

Let’s see some examples where raising the number of a count metric might not be a good thing:

  • Page views: If the count of page views rises, you may think it’s a good thing because people are seeing more of your products. But it may also mean that people are getting lost and need to browse more pages to find what they need.
  • Items added to cart: The same goes for the number of products added to the cart. If you don’t check how many products remain in the cart at the checkout stage, you don’t know whether the variation helps you sell more or just makes product selection harder.
  • Products purchased: Even the number of products purchased can be misleading if used alone as an optimization objective. Visitors could be buying two cheaper products instead of one high-quality (and more expensive) product.

You can’t tell just by looking at these KPIs if your variation or change is good for your business or not. There is more that needs to be considered when looking at these numbers.

How do we use this count data then?

We have seen in this article how counterintuitive session-based optimization is – and, even worse, how misleading count metrics can be in CRO.

Unless you have both business and statistics expertise on hand, it’s best practice to avoid them, at least as your only KPI.

As a workaround, you can use several conversion metrics with specific triggers using business knowledge to set the thresholds. For instance:

  • Use one conversion metric for counts in the range [1,5], called “light users.”
  • Use another conversion metric for the range [6,10], called “medium users.”
  • Use another for the range [11,+∞), called “heavy users.”

Splitting up the conversion metrics in this way will give you a clearer signal about where you gain or lose conversions.
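The workaround above can be sketched as a simple mapping from a raw count to three binary metrics, using the threshold values suggested in the list:

```python
def engagement_segments(page_views: int) -> dict[str, bool]:
    """Turn a raw page-view count into three binary 'conversion'
    metrics, using business-defined thresholds: [1,5] light,
    [6,10] medium, [11,+inf) heavy."""
    return {
        "light_user": 1 <= page_views <= 5,
        "medium_user": 6 <= page_views <= 10,
        "heavy_user": page_views >= 11,
    }

print(engagement_segments(3))    # light_user only
print(engagement_segments(100))  # heavy_user only
```

Each of the three flags can then be analyzed as an ordinary conversion rate, with the well-behaved beta-distribution statistics described earlier.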

Another piece of advice is to use several KPIs to have a broader view.

For instance, although analyzing the product views alone is not a good idea – you can check the overall conversion rate and average order value at the same time. If product views and conversion KPIs are going up and the average order value is stable or goes up, then you can conclude that your new product page layout is a success.

Counterintuitive Metrics in CRO

Now you see that, except for conversions counted on a unique-visitor basis, nearly all other metrics can be very counterintuitive to use in CRO. Mistakes happen because the statistics work differently, and also because these metrics and their movements can have several interpretations.

It’s important to understand that CRO skill is a mix of statistics, business and UX knowledge. Since it’s very rare to have all this within one person, the key is to have the needed skills spread across a team with good communication.

Article

7min read

Understanding shopping engagement software: How do virtual shopping assistants work?

Every visitor shopping online wants to find a product that precisely meets their expectations quickly and efficiently. To achieve this, you can offer your potential customers purchasing advice to guide them throughout their buying journey.

In this article, you will discover the different forms of virtual shopping assistants available in e-commerce and the advantages they bring to you and your customers.

What are virtual shopping assistants?

Virtual shopping assistants, enabled by shopping engagement software, provide your shoppers with support in their product selection through an interactive and personalized exchange. By asking precise questions, your customers can find products that align with their wishes and needs more quickly.

This approach is based on the purchase advice provided in brick-and-mortar retail, aiming to overcome the impersonal components of online shops and enhance the individual user experience.

How do virtual shopping assistants differ from faceted search?

With faceted search, your customers can filter their search results in the online shop to view the products that interest them. For example, when searching through an e-commerce apparel shop they can use faceted navigation to select features, such as women’s blue capris in size 40, providing a user-friendly experience.

However, customers need to already know exactly what they want to buy to filter accordingly. If a customer is uncertain about their purchase or unsure about the specific product features they desire, they require support in the form of virtual shopping assistants.

What kinds of virtual shopping assistants are available?

There are various formats of virtual shopping assistants in e-commerce that can be integrated at different points of the customer journey. Let’s take a closer look at two categories: human-to-human communication tools and automated tools that can handle multiple customer inquiries in real time.

Virtual shopping assistants with human-to-human communication

Below, we present two examples of virtual shopping assistants that utilize human-to-human communication:

Live chat

Live chat is a messenger tool that allows your customers to directly contact an employee of your online shop. Typically integrated as a pop-up window on the company website, it facilitates one-to-one communication, resembling the experience of brick-and-mortar retail.

Video consultation

Video consultation is a rising trend in the e-commerce industry. 

Customers visiting your e-commerce site may still be exploring their needs, making phone, chat or email interactions insufficient. With video consulting, customers can engage in face-to-face conversations with an employee of your online shop, ask questions, and receive individual advice on your products and processes. 

For instance, customers can share their screens and present their ideas and inspiration to the sales representative, leading to a more targeted sales pitch. This combination of online shopping with personalized attention replicates the experience of boutique purchases and ultimately boosts customer loyalty and satisfaction. 

The advantage: Your customers receive immediate, personalized answers to their questions about products and processes while they browse your shop. Especially for complex products that require explanation, customer-oriented live chat can positively influence purchase decisions. Additionally, you can offer appointments for individual purchase advice.

Virtual shopping assistants with AI-based tools

Now, let’s explore two examples of online consulting software that utilize AI-based tools for real-time interactions with multiple customers at once.

AI-based chatbots

Chatbots using artificial intelligence can respond to hundreds of customer inquiries simultaneously and in real time. 

With the emergence of large language model chatbots such as OpenAI’s ChatGPT and Google’s Bard, brands have the potential to revolutionize how they engage with their customers online.

Depending on how the tool is programmed, it can recognize natural language, generate suitable answers from text blocks and databases on your website, and even escalate queries to a human employee if necessary. This enables personnel-friendly automation of various processes. 

Guided Selling

Guided Selling involves guiding your customers through the product selection process to facilitate a confident purchase decision. This is particularly useful for potential buyers who may not possess enough knowledge about the products to make an informed choice.

For instance, when it comes to purchasing a stroller, expectant parents can feel overwhelmed by the countless models available. Guided Selling assists them in narrowing down the selection through targeted questions, leading to the ideal stroller. This can be seen in the example from babymarkt.de, which uses Guided Selling from AB Tasty to provide better shopping experiences for its customers.

This form of assistance, where a customer is guided step-by-step through the consultation process based on specific questions, is especially suitable for products that require explanation and mirrors the experience of a sales pitch in brick-and-mortar retail. Guided Selling can also be used for self-explanatory products, where customers can find the right product selection by selecting certain tags.

What makes Guided Selling special is that the results can be personalized to display suitable products based on the individual click and buying behavior of your customer. This ensures that your customer receives not only products that match their desired features and requirements but also their unique preferences.

Why is good customer engagement important in e-commerce?

Customers who feel well-advised are happy to come back. This applies to both brick-and-mortar stores and e-commerce shops. In addition, there are other reasons for using shopping engagement software like virtual shopping assistants.

Personalized shopping experience

When potential buyers walk into a brick-and-mortar store, they can approach the on-site sales consultants to find the right product. 

By integrating this service into your online shop in the form of live chats, video advice or Guided Selling, you enable your customers to recreate the feeling of an interactive, personalized shopping experience.

Shoppers become customers

Virtual shopping assistants help you convert potential buyers into customers. By putting customers in direct contact with your team or catalog, they get answers to their questions that can positively influence their purchase decision. 

For very personal products such as mattresses, a virtual shopping assistant tool helps visitors to find the one that exactly meets their needs from the multitude of models. 

A better user experience

Your visitors appreciate positive experiences throughout their customer journey. 

Support through virtual shopping assistants gives them a secure feeling when choosing a product and more frequently leads to a purchase decision. In addition, virtual shopping assistants make shopping easier: You present your customers with suitable solutions, they feel understood and the positive user experience is anchored in their memory.

Higher conversion

With virtual shopping assistants and shopper engagement software, you can reduce lost sales opportunities and thus increase your conversions. Sometimes potential buyers leave a shop because they didn’t find a product that is actually there. If they can easily ask a sales representative about the product via live chat, it will improve their shopping experience.

Your potential customers have already added products to their shopping cart, so why are they abandoning the checkout process? One possible reason: They had a question about a process that was not answered quickly enough. With an AI-based chatbot available during the checkout, these questions can be solved quickly and efficiently.

Higher customer satisfaction

The personalized service of a virtual shopping assistant creates an intimate atmosphere – a 1:1 exchange reminiscent of brick-and-mortar experiences. This not only strengthens potential buyers’ trust in your company but also their satisfaction. And satisfied customers turn into loyal customers. 

Fewer Returns

Implementing virtual shopping assistants in your shop reduces the risk of returns. The two most common reasons for returns are either that the product didn’t fit or they didn’t like it. 

With personal, targeted advice, you can help your customers to choose the right products that meet their wishes and needs as precisely as possible. This reduces your costs and makes your returns management easier.

Conclusion: Virtual shopping assistants make e-commerce more human

Virtual shopping assistants are a must-have in e-commerce. They offer advantages for you as an e-commerce marketer as well as for your customers. 

Live chats or chatbots, video advice and Guided Selling make it easier for potential buyers to select a product and improve their user experience. In a 1:1 exchange, they receive personalized answers to their questions – the online shop becomes more human. At the same time, you benefit from higher customer loyalty and fewer returns, which means you can increase your sales.


Put Data in the Driver’s Seat | Marianne Stjernvall

Marianne Stjernvall explains the evolution of CRO and the importance of centralizing your CRO Program to create a data-driven organization

Before becoming a leading specialist in CRO and A/B testing, Marianne Stjernvall was studying computer and systems science when a company reached out to her on LinkedIn about a position as a CRO specialist, which turned out to be the perfect mix of logic, programming, data, business and people for her. 

Since then, she has founded the Queen of CRO, where she acts as an independent CRO consultant helping organizations with experimentation, CRO, personalization and creating a data-driven culture for growth. 

Previously, Marianne worked for companies such as iProspect, TUI and Coop Sverige where she spearheaded their CRO roadmap and developed a culture of experimentation. Additionally, she was awarded CRO Practitioner of the Year in 2020.

AB Tasty’s VP Marketing Marylin Montoya spoke with Marianne on the importance of contextualizing A/B test data to make better-informed decisions. Marianne also shared her own take on the much debated build vs buy topic and some wise advice from her years of experience with CRO and experimentation.

Here are some key takeaways from their conversation. 

The importance of contextualizing data

For Marianne, CRO is becoming a big part of product development and delivery. She highlights the importance of this methodology when it comes to collecting data and acting on it in order to drive decisions. 

Marianne stresses the importance of putting data into context and deriving insights from that data. This means companies need to be able to answer why they’re collecting certain information and what they plan to do with that information or data. 

CRO is the key to unlocking many of those insights from the vast amount of data organizations have at hand and to pinpoint exactly what they need to optimize. 

“What are you going to do with that information? You need context to provide insights and that, I think, is what CRO actually is about,” Marianne says. 

This is what makes CRO so powerful as it enables organizations to take more valuable actions based on the insights derived from data. 

When done right, testing within the spectrum of CRO can move organizations onto a completely different path than the one they were on before, toward a more innovative and transformative journey.

Centralize and standardize your experimentation processes first

When companies are just starting to create their experimentation or CRO program, Marianne recommends centralizing parts of it and running tests within a shared framework or process, so that teams don’t run their own tests on top of each other. 

Otherwise, you could have different teams, such as marketing, product development and CRO teams, executing tests with no set process in place which could potentially lead to chaos. 

“You will be taking decisions on A/B tests on basically three different data sets because you will be checking different kinds of data. So having an ownership of that to produce this framework and process, this is how the organization should work with these kinds of tests,” says Marianne. 

With established frameworks and processes in place, organizations can set rules on how to carry out tests, get better value out of them and create ownership across the entire organization. The trick is to start small with one team and, over time, extend these processes to the next team and so on.

This is especially important as Marianne argues that many organizations cannot increase their test velocity because they don’t have set processes to act on the data they get from their A/B tests. This includes how they’re calculating the tests, how they’re determining the winning or losing variation and what kind of goals or KPIs they’ve set up.

In other words, experimentation needs to be democratized as a starting point to allow an organization to naturally evolve around CRO. 

Putting people at the center of your CRO program

When it comes to the build vs buy debate, Marianne argues that an A/B testing tool will not automatically solve everything. 

“A great A/B testing tool can make you comfortable in that we have all the grounds covered with that. Now we can actually execute on this, but the rest is people and the organization. That’s the big work.”

In fact, companies tend to blame the tech side of things when their A/B testing is not going as planned. For Marianne, that has nothing to do with the tool; the issue primarily lies with people and processes. 

As for the build vs buy debate, before deciding to build a tool in-house, companies should first ask themselves why they want to build their own tool beyond the fact that it seems more cost-efficient. These tools need time to get set up and running, so building in-house may not be as cost-effective as many think.

Marianne believes that companies should focus their energy and time on building processes and educating teams on these processes instead. In other words, it’s about people first and foremost; that’s where the real investment lies. 

Nevertheless, before starting the journey of building their own tool, companies should evaluate themselves internally to understand how teams are utilizing and incorporating data obtained from tests into their feature releases. 

If you’re just starting on your CRO journey, it’s largely about organizing your teams and involving them in these processes you’re building. The idea is to build engagement across all teams so that this journey happens in the organization as a whole. (An opinion that was shared by 1,000 Experiments Club podcast guest Ben Labay). 

What else can you learn from our conversation with Marianne Stjernvall?

  • What to consider when choosing the right A/B testing tool 
  • Her own learnings from experiments she’s run
  • How to get HIPPOs more involved during A/B testing
  • How “failed” tests and experiments can be a learning experience

 

About Marianne Stjernvall

Having worked with CRO and experimentation for a decade and executed more than 500 A/B tests, Marianne Stjernvall has helped over 30 organizations grow their CRO programs. Today, Marianne has turned her passion for creating experimental organizations with a data-driven culture into her own consultancy, the Queen of CRO. She also regularly teaches at schools to pass on her CRO knowledge and show the full spectrum of what it takes to execute CRO, A/B testing and experimentation.

About 1,000 Experiments Club

The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.

 


The Future of Digital Personalization: EmotionsAI by AB Tasty

At AB Tasty, we understand the importance of personalization in reaching your audience. We also know that up to 80% of consumers are more likely to complete an online purchase with brands that offer personalized customer experiences.

We have worked extensively to enable businesses to dynamically customize website content, product recommendations and promotional offers based on individual user preferences, behavior and demographics.

However, website experiences have not lived up to customer expectations when it comes to feeling understood by brands. If brands can’t bring relevance to their audience, at the very least they should reduce frustration and negative emotions.

The role of emotions

Emotions have a big impact on the entire purchasing journey. Brands not only need to understand customer preferences, but they also need to understand the emotional impact behind each decision. People are not always rational when it comes to making buying decisions – and not all people react in the same way.

Emotions play a huge role in how we make our decisions. In fact, once we start to think of the customer journey as a succession of micro-decisions (e.g. clicking on a CTA is one of them), we can easily understand how important it is to serve a personalized experience depending on emotional profiles. 

What if you could understand your customers beyond the surface level? Make concrete data-driven decisions based on the abstract notion of emotional needs in order to connect with audiences like never before? Be equipped with more knowledge and data on your customers’ behaviors? Use language to describe different shopper personalities? 

How can you optimize according to the distinct desires of each person?

The next step in digital personalization: AB Tasty’s EmotionsAI

Hundreds of behavioral patterns uncover your buyers’ emotional needs and train our EmotionsAI algorithm.

At AB Tasty, we love to push the boundaries of digital experiences which is why we are excited to launch our most recent acquisition. With EmotionsAI, you can experiment with unique, personalized messages for each visitor type, delve into data to understand their needs, conduct tests to identify effective messaging and construct personalized journeys targeting specific emotional needs.

Formerly known under the name Dotaki, this new technology is based on years of psychographic modeling, customer journey mapping and AI technology combined with real-time interactions on your site and device usage.

Brands are already using EmotionsAI and AB Tasty to:

  • Understand the emotional needs of audiences to bolster their Experience Optimization roadmap with effective messages, designs and CTAs that activate their visitors.
  • Have more winning variations by digging deeper into what works and for which type of personality with analytics.
  • Personalize campaigns by targeting based on emotional needs in the AB Tasty Audience Builder.

Customer Segmentation By Personality Type

EmotionsAI can help you understand what type of visitor is on your site. For instance, a visitor classed as “Competitive” would react strongly to social proof or to labels indicating previous sales or limited stock. A “Safety” visitor would look for a clear, secure payment system, with easy reassurance along the way. Pragmatic visitors, who are looking for immediacy, want the quickest route to order completion, with as few blocking points as possible.

Results

Once you are able to classify visitors with EmotionsAI, you can then start using winning variations to address their specific needs.

You can instantly identify when a variation meets the emotional need of a portion of the audience. The impact on the test success rate is impressive: with EmotionsAI, it is possible to detect a significant impact on sales in three times more A/B tests. This opens the door to easily implementing personalizations that target visitors on the most relevant criterion: emotion.

In addition, the emotional segments make it possible to identify which stages of the online journey do not respond well enough to the emotional needs of the audience and generate a shortfall. This gives you ideas for future tests, for example, adding a reassurance strip to a basket stage. A/B tests based on these emotional insights have a success rate twice as high as the average.

We have seen a massive increase in revenue from previous customers. More than 60% of test variations show a successful business impact compared to 10% without EmotionsAI. Additionally, personalization campaigns using EmotionsAI have driven revenue increases ranging from 5% to 10%.

Stay ahead of the curve with the next step in experience optimization by mastering emotional personalization with EmotionsAI. Let your audience be seen by incorporating learning algorithms to map customer behaviors for predictable buying profiles.

EmotionsAI is an AI-Powered Segmentation Tool by AB Tasty, allowing for better personalization and higher conversion rates.

Want to find out more? Get in touch with us today!


A New Chapter for Flagship as it Merges with the AB Tasty Website

We are excited to share that as a part of our ongoing strategy to optimize how you access AB Tasty’s suite of experimentation and personalization tools, Flagship by AB Tasty is now evolving to join the AB Tasty brand and website.

This doesn’t mean your favorite experience rollout and feature management tools are disappearing, but rather it’s part of a new exciting chapter for AB Tasty with the goal to make all our features available in one place under one name.

We have merged the AB Tasty and Flagship websites. All resources and landing pages previously hosted on Flagship’s website (flagship.io) can now be found in one location on the AB Tasty website (abtasty.com).

This branding evolution means the Flagship name will be phased out and then retired. While we feel a little nostalgic for the old name, the end goal is to make it easier to access the AB Tasty umbrella of solutions and features and bring them together, keeping our promise of being your go-to platform for improving and optimizing the customer experience.

If you have questions about what this change means for you, you’ve come to the right place. Below we will dig into what is changing, helpful links and resources and some general FAQs.

As always, our team of AB Tasty magic makers are available to answer any additional questions that might pop up along the way. If you have any more questions after reading this, don’t hesitate to send us an email at hello@abtasty.com and we will update this page as needed.

How are AB Tasty and Flagship related?

AB Tasty and Flagship have always been the same company, just with different names for the server-side and client-side solutions.

AB Tasty’s experimentation suite enables brands to carry out client-side A/B testing and personalizations in order to provide a richer digital experience and boost conversions.

Meanwhile, Flagship by AB Tasty is also built to provide richer experiences that convert through risk-free feature management, server-side experimentation and personalization. Again, same company, just different ways of helping brands provide the best experience for their customers.

What do you mean when you say merge? Will the Flagship website be gone for good?

Yes, everything on the Flagship website (flagship.io) has moved over to the AB Tasty website (abtasty.com). This means links to existing landing pages and resources are all redirected to AB Tasty, and any new resources will be posted directly on AB Tasty from here on out. Easily access resources like e-books, blogs, guides and more by clicking on the resources tab above or following the link here.

Why are we merging the Flagship and AB Tasty websites and names?

From the start, our focus has always been on what we do best, which is giving clients the tools they need to validate ideas while maximizing impact, minimizing risk and accelerating time to market.

Marketing teams and tech teams are working more closely together than ever before to bring new features to market to stay competitive. Our customer-first approach means we want to make our features more accessible and find the tools you need for all your experimentation and personalization needs. For this reason, we have decided to bring Flagship to the AB Tasty website and to position it as AB Tasty’s Feature Experimentation and Experience Rollouts rather than as a separate solution.

Many of our client-side clients have evolved their experimentation needs to the point where they are running more advanced experiments and rolling out more advanced features. For our clients who are ready to start server-side experimentation, this change makes it much easier and faster to find all the information and support they need about all our features, including our server-side functionality, in one place.

What will happen to all the resources (blog posts, guides, e-books, etc.) on the Flagship.io?

As mentioned above, the Flagship content has been migrated and all links from flagship.io now redirect to the AB Tasty website, where all our resources, from guides to blog posts and e-books about feature management, experimentation and more, can be found.

You’ll find your favorite content can be easily accessed here if you filter for the “Rollouts” and “Feature Experimentation” topics.

How can I log into my Flagship account? And where can I access the documentation and SDK libraries?

You can access your accounts by visiting abtasty.com and clicking the login button in the top right-hand corner.

All our documentation and SDKs will have the same links as before. You can access them below:

How will the merger affect existing customers of both Flagship and AB Tasty and the support they receive?

None of our clients, whether they use AB Tasty, Flagship or both, will be affected. You can continue to use our platform for all your experimentation needs without any changes.

Likewise, you can expect to receive the same level of support and have access to the same dedicated team for client- and/or server-side experiments as before.

As always, your CSM will inform you in a timely manner when/if any changes to the platform occur.

How will the merger affect new customers? Where can I sign up for a demo for AB Tasty’s Feature Experimentation and Rollouts?

If you’re new and you’d like to try out AB Tasty’s Feature Experimentation or Experience Rollouts, click the banner below or click the “Get a demo” button on the top right-hand corner of the page to explore how server-side experiments can positively impact your business.

A very special thank you to our customers and our partners for supporting us in this exciting evolution of AB Tasty. Your feedback and support helps shape important changes such as these, and we are grateful for it.

Have any additional questions about AB Tasty? Send us an email at hello@abtasty.com to let us know and stay tuned for more exciting updates and information still to come!


The Ultimate Guide to Experience Rollouts Using Feature Flags

In modern software development, DevOps teams have shifted their attention to the continuous delivery of features to keep up with fast-changing market and consumer demands.

Teams now more than ever have to be in the driver’s seat when it comes to delivering these features and to whom.

This is why feature flags (or feature toggles) have become the ultimate tool to manage the release of new features.

What are experience rollouts?

When we talk about experience rollouts, we’re referring to the risk-free deployment of features that improve and optimize the customer experience.

This could be in the form of progressive deployments where features are gradually released to reduce the risk of big bang releases or by targeting new features to the most relevant users in order to personalize their experience.

But how do you ensure you’re delivering optimal experiences without negatively impacting the user experience? How can you minimize risk when rolling out new features and ensure that they actually meet your customers’ needs and expectations?

The answer to both these questions is feature flags.

Feature flags are a great solution to allow you to continuously deliver new features while limiting user access to these features, thereby reducing risk.

By decoupling deployment from release, feature flags give teams the power to choose who to send new features to and when. Thus, teams can continuously develop and deliver new features without having to make them available to all users.

What are feature flags?

Let’s start with the most basic definition of feature flags.

Feature flags are a software development tool that enable teams to turn functionalities on or off in order to safely test new features by separating code deployment from feature release.

They can also be referred to as feature toggles as they allow you to toggle a feature on or off by hiding that feature behind a flag and then deciding who to make this feature visible for.

This is particularly useful when you’re looking to personalize the customer experience according to the type of user. This means you can enable features to only target certain users to display the right content to the right audience at the right time, while tracking their performance over time.

With AB Tasty Rollouts, you can configure personalization campaigns, for example, to personalize the user experience for new visitors on mobile only and show them discount codes as a welcome offer. You define the targeted users and the flag (with its value) that will activate the discount code for a particular scenario (in this case, new users on mobile) while monitoring the relevant KPIs.
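
In pseudocode, that kind of targeted flag evaluation looks something like the sketch below. The names (`get_flag_value`, `WELCOME10`) are hypothetical, not AB Tasty SDK calls; the point is simply that flag evaluation combines user context to decide which value, if any, to serve.

```python
# Hypothetical sketch: new mobile visitors get a welcome discount code,
# everyone else gets the default experience. Illustrative names only.

def get_flag_value(user):
    """Return a discount code for targeted users, None for everyone else."""
    is_new_visitor = user.get("visit_count", 0) <= 1
    is_mobile = user.get("device") == "mobile"
    if is_new_visitor and is_mobile:
        return "WELCOME10"  # flag value served to the targeted segment
    return None  # flag off: default experience

print(get_flag_value({"visit_count": 1, "device": "mobile"}))   # WELCOME10
print(get_flag_value({"visit_count": 5, "device": "desktop"}))  # None
```

In a real setup, the targeting rules and the flag value live in the platform, so marketers can change them without redeploying code.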

Feature flags can be leveraged across different use cases. This is because there are many different types and categories of feature flags, as seen in the image below, and which one you choose depends on why you’re using the flag in the first place.

For example, release toggles support dev teams as they write new features while experiment toggles are primarily used by product and marketing teams to facilitate A/B testing.

For this reason, feature flags can be used across a wide variety of use cases by multiple teams across an organization, especially when you have a feature management solution to manage all your flags.

In particular, feature flags give teams a very granular level of control and risk management over code, which can be important when modifying backend features that have a wide-ranging impact on how your system performs.

Read more: When to make the leap from client- to server-side testing and how feature flags can help you seamlessly carry out server-side experiments

The following section will provide further details on what the term “experience rollouts” entails and discuss how feature flags can help you regain control of how you roll out experiences to your customers at the flip of a switch.

  • Progressive deployment and rollouts

Perhaps one of the greatest benefits of feature flags is their ability to mitigate risk when it comes to new feature releases.

This is because feature flags empower teams to release their features to any user groups of their choice.

Therefore, teams can safely test out their new features on a preselected number of users, whether with internal or external users, to validate functionality and gather feedback in order to make any necessary changes and optimize future feature releases. By continuously iterating features in real time during the release process, companies can provide more value to their customers and ensure customer satisfaction.

Sophisticated feature flagging functionalities give you the ability to closely monitor metrics that indicate how a new feature is performing and how well-received it is by users.

This way, should anything go wrong with a release, teams can minimize the blast radius and any negative impact due to a faulty feature. This also gives them the time necessary to address the issue by disabling the flag before releasing it to everyone else.

The best thing about progressive deployments and rollouts is that teams are essentially in the driver’s seat: they have control over who sees what and when, allowing them to maintain the momentum of CI/CD but with less risk.

Another great advantage of progressive rollouts is that they increase the velocity of both the development lifecycle and testing: because teams roll out releases in phases, they can quickly test on their chosen user group, make the necessary iterations and then run more tests.
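
The phased approach described above can be sketched as a simple stage machine. The stage percentages and the health threshold below are illustrative assumptions, not AB Tasty defaults:

```python
# Illustrative sketch of a phased rollout: the exposed percentage only
# advances to the next stage while the monitored health metric (e.g. an
# error rate) stays under a threshold; otherwise the rollout holds.

ROLLOUT_STAGES = [1, 5, 25, 50, 100]  # percent of users exposed, in order

def next_stage(current_pct, error_rate, max_error=0.02):
    """Return the percentage to expose next, holding if the feature is unhealthy."""
    if error_rate > max_error:
        return current_pct  # hold (or roll back) while the issue is investigated
    idx = ROLLOUT_STAGES.index(current_pct)
    return ROLLOUT_STAGES[min(idx + 1, len(ROLLOUT_STAGES) - 1)]

print(next_stage(5, 0.01))  # 25: healthy, expand to the next stage
print(next_stage(5, 0.10))  # 5: unhealthy, hold at the current stage
```

Each hold point is also a natural moment to gather user feedback before widening the audience.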

  • Rollbacks

Just as you can roll out new features and experiences to your users, you can also easily roll back these features whenever needed with the help of feature flags.

This means that if anything goes wrong with any feature you’ve rolled out to your chosen users, you can quickly disable the flag so that users no longer have access to the feature.

Releasing new features to real-world users is always a risky endeavor and can cause real harm to your brand's relationship with users, but it doesn't have to be.

Now, after any feature release, teams can isolate any faulty or buggy features and perform a targeted rollback on them. With advanced third-party feature management platforms, you can roll back a feature in real time by toggling a single field with just one click.
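As an illustration, here is a minimal kill-switch sketch, assuming a hypothetical in-memory flag store and flag name. With a real feature-management platform the toggle happens in a dashboard or API call, but the effect on application code is the same.

```python
# NOTE: the in-memory `flags` store and the flag name are hypothetical.
flags = {"new_search": True}

def rollback(flag_name: str) -> None:
    """Disable a flag so that no user is served the associated feature."""
    flags[flag_name] = False

def render_search() -> str:
    # The flag is re-read on every request, so a rollback takes effect immediately.
    return "new search UI" if flags.get("new_search") else "classic search UI"

print(render_search())  # new search UI
rollback("new_search")  # something went wrong: flip the switch off
print(render_search())  # classic search UI
```

The key design point is that the rollback is a data change, not a code change: no redeploy is needed to pull the feature back.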

AB Tasty is one such tool that allows you to roll out new features to subsets of users by assigning specific flag values to different user segments and comes with an automatic triggered rollback in case something goes wrong.

The automatic rollback option enables you to stop the deployment of a feature and to revert all the changes that have been made in order to ensure that your feature isn’t breaking your customer experience. This is done by defining the business KPI you want to track that would indicate the performance of your feature.

When this KPI is set, you then associate a percentage value which, if reached or exceeded, triggers the rollback. To make the rollback meaningful, you must also define a minimum number of visitors from which the rollback comparison will be evaluated.

When the conditions are met, the progressive rollout feature will be rolled back, which means that no more targeted users will see the feature.
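The rule described above boils down to a simple condition. In this sketch the KPI, threshold and visitor counts are hypothetical placeholders, not AB Tasty defaults:

```python
# NOTE: the numbers below are hypothetical placeholders, not platform defaults.
MIN_VISITORS = 500     # minimum sample size before the comparison is evaluated
KPI_THRESHOLD = 5.0    # roll back if the tracked KPI reaches or exceeds 5%

def should_roll_back(visitors: int, kpi_value_pct: float) -> bool:
    """Trigger a rollback only on a large-enough sample with a breached KPI."""
    return visitors >= MIN_VISITORS and kpi_value_pct >= KPI_THRESHOLD

print(should_roll_back(200, 9.0))  # False: too few visitors to judge
print(should_roll_back(800, 6.2))  # True: threshold crossed on a real sample
print(should_roll_back(800, 4.9))  # False: KPI still under the threshold
```

Requiring a minimum number of visitors prevents a handful of early sessions from triggering a rollback on noise alone.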

  • Targeting

We’ve talked a lot about how you can use feature flags to allow certain users to see a feature while hiding it from others.

When you do a targeted rollout, you’re basically releasing new features to a predefined set of users rather than opting for the riskier big bang release.

Here’s a look at some targeting scenarios where feature flags do their best work:

  • Alpha and beta testing
  • A/B testing 
  • Managing entitlements 
  • Blocking users
  • Canary deployments/percentage rollouts 
  • Ring deployments

There are many ways teams can progressively deploy and roll out features to a select audience. With the help of feature flags, teams can manage and streamline these deployment methods to perform highly granular user targeting.
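For canary deployments and percentage rollouts, one common implementation technique is to hash a stable user ID into a bucket, so each user gets a deterministic in-or-out decision. This is a generic sketch of that technique, not any particular platform's algorithm; the flag name and user IDs are made up:

```python
import hashlib

def in_rollout(user_id: str, flag_name: str, percentage: int) -> bool:
    """Deterministically bucket a user into [0, 100) and compare to the rollout %."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < percentage

# The same user always gets the same answer, and widening the rollout from
# 10% to 50% keeps every user from the original 10% included.
decided_at_10 = [u for u in ("alice", "bob", "carol") if in_rollout(u, "new_nav", 10)]
assert all(in_rollout(u, "new_nav", 50) for u in decided_at_10)
```

Salting the hash with the flag name ensures the same user falls into different buckets for different flags, so one rollout's audience doesn't silently overlap another's.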

AB Tasty Rollouts allows you to target users based on certain identifying attributes like beta testers, age group, or any other user attributes you have access to.

Furthermore, our integrations with third-party tools such as Segment, GA4, Mixpanel and Heap mean that you can build audiences in these tools, export those user groups or cohorts to AB Tasty, and use them to target your test and personalization use cases.

  • Flag management

To truly reap the benefits of feature flags, you have to know how to manage them effectively. Otherwise, you will end up with so many flags in your system that you start to lose track of which flag does what. This could ultimately lead to the most dangerous pitfall of feature flags: technical debt.

At that point, your code could become so complex that it is difficult to manage, which could negatively affect the quality of your codebase.

This is why feature management solutions are so essential for modern software development teams. With such solutions, teams have access to advanced capabilities that enable them to work with feature flags at scale and avoid the most common problems associated with them.

AB Tasty is one solution packed with features that help you avoid the dreaded technical debt. Its clear, easy-to-use dashboard lets all your teams, from development to product, efficiently track and manage feature flag usage across your organization, no matter how far along you are in your feature flag journey.

Furthermore, flags can be controlled from another platform using AB Tasty's Remote Control API, allowing teams to work from just one tool without having to log into the platform. This saves a lot of time and effort, as you can perform all AB Tasty tasks directly with API calls, including managing your projects, use cases, variations, variation groups, users, targeting keys and flags.

Experience rollouts with feature flags

As we’ve seen, the idea of experience rollouts revolves around rolling out your best features to end-users. This is when feature flags become the most powerful tool to ensure you’re only releasing optimal features that provide the best customer experience possible.

This is because feature flags give you the ability to progressively deploy and roll out new features to gather feedback from the users – giving you the most relevant feedback to iterate and optimize your releases. This will help your teams to make more informed, data-driven decisions to drive your conversion rates, ultimately aligning the user experience with business objectives.

Consequently, when you finally do a full release, you’re confident that you’re releasing features that provide the most value to your customers and so will have the best impact on your business in terms of revenue and conversions.

Do you want to deliver best-in-class customer experiences? Click on the “Get a Demo” button at the top to see for yourself what feature flags can do for your own experience rollouts.


5 Mistakes to Avoid When Selecting an EOP

If you’re an e-commerce company, you know better than anyone how important it is to optimize your website and have the best possible user experience.

You’ve heard about experience optimization platforms (EOPs) and how they can improve your website’s performance, enhance customer loyalty and increase order size. We know you’re excited – and so are we!

But, before you dive headfirst into the dizzying world of EOP selection, let’s go through the most common mistakes e-commerce companies make so you can avoid them!

Mistake #1: Focusing on quantity over quality

You might be tempted to select an EOP that offers a ton of features, capabilities, and integrations. After all, more is better… right? Not necessarily.

When it comes to experience optimization, quality > quantity.

It’s especially important to make sure that what the EOP provides is right for YOUR company and its goals.

It’s easy to be seduced by an EOP that boasts 50 integrations or hundreds of features. But, a platform that overwhelms you with options and doesn’t deliver real value will end up costing you precious time and resources.

Opt for the platform that offers the most effective features that help you achieve your specific goals.

Mistake #2: Not considering customer success

You’ve worked hard to hire the smartest people for the job and have full faith in your employees. They can probably figure out any problem that comes up. But, when it comes to experience optimization, things don’t always go as planned.

This is why it’s so important to choose a platform that offers excellent customer support.

Look for a platform that not only provides responsive support but also has a community of users with people you genuinely enjoy working with. A platform like this can offer strategic guidance in alignment with your goals.

Mistake #3: Overlooking scalability

We know that part of the reason why you’re looking for an EOP is to grow your customer base. One of the best ways you can do that is by selecting an EOP that can grow with you.

Not every EOP is designed to scale, so make sure you’re choosing one that accommodates your company’s growth goals and can accompany you along the way.

Look for a platform that offers scalability within its plans and can support increased traffic, as well as new features.

Mistake #4: Ignoring mobile and app optimization

More and more customers are choosing to shop from their phones, which means mobile optimization for all devices is critical.

A poorly optimized mobile experience can lead to lost sales – and we know you can’t afford that!

Choose a platform that prioritizes mobile optimization and delivers a seamless experience across all devices, whether it’s web or app.

Omnichannel is the way of the future. A poor mobile experience can leave a bad taste in a customer’s mouth.

Mistake #5: Neglecting data and analytics

Robust data and analytics help you make better business decisions and are absolutely essential for effective experience optimization.

Go for a platform that relies on solid statistical models, creates reports based on your needs as an e-commerce company, and lets you choose the KPIs that are most relevant to your business.

A platform that delivers valuable insights will enable you to make smarter, data-driven decisions, which can lead to higher ROI and revenue.

Selecting the right EOP for e-commerce

Selecting the right EOP is more crucial than ever for e-commerce companies.

We know you have a lot of options out there, big goals to achieve, and even bigger dreams of where you want to take your company next. AB Tasty is here to help guide you through the confusing, often complicated process of selecting an EOP that fits with your business now and in the future.

By choosing a company that addresses the five areas above, you can rest assured that you’re on the right track.

Looking for a solution that addresses all five of these areas? AB Tasty is the best-in-class experience optimization platform that empowers you to create a richer digital experience – fast. From experimentation to personalization, this solution can help you activate and engage your audience to boost your conversions.


How to Leverage Disruption in Experimentation | Ben Labay

Ben Labay outlines essential frameworks for a more strategic, tactical and disruptive approach to experimentation

With two degrees, in Evolutionary Behavior and Conservation Research Science, Ben Labay spent a decade in academia with a wide-ranging background in research and experimentation dealing with technical data work. 

Now as CEO of experimentation and conversion optimization agency Speero, Ben describes his work in experimentation as his “geek-out” area which is customer experience research and dealing with customer data. 

At Speero, Ben works to scope and run research and test program strategies for companies including  Procter & Gamble, ADP, Codecademy, MongoDB, Toast and many others around the world.

AB Tasty’s VP Marketing Marylin Montoya spoke with Ben on how to create mechanisms for companies to not only optimize but also be more disruptive when it comes to web experimentation to drive growth.

Here are some of the key takeaways from their conversation.

Consider a portfolio way of management in experimentation 

Inspired by Jim Collins’ and Jerry I. Porras’ book “Built to Last”, Ben discusses a framework that the book provides on the ways a company can grow based on the best practices from 18 successful companies. 

He identifies one big pillar that many organizations are often neglecting: experimentation. To tackle this, Ben suggests taking a portfolio management way of doing experimentation made up of three portfolio tags which provide a solution spectrum around iterative changes for optimization. 

The first level consists of making small tweaks or changes to a website based on customer feedback, such as improving layouts, while the second includes more substantial changes, such as new content pieces.

But there’s a bigger third level which Ben refers to as more “disruptive” and “innovative” such as a brand new product or pricing model that can serve as a massive learning experience. 

With three different levels of change, it’s important to set a clear distribution of time spent on each level and have alignment among your team.

In the words of Ben, “Let’s put 20% of our calories over into iterating, 20% onto substantial and 20, 30 or 40% over on disruptive. And that map – that framework has been really healthy to use as a tool to get teams on the same page.”

For Ben, applying such a framework is key to getting all teams on the same page as it helps ensure companies are not under-resourcing disruptive and “big needle movers”. Velocity of work is important, he argues, but so is quality of ideas.

Let your goal tree map guide you 

Every A/B test or personalization campaign needs to be fed with good ingredients which determine the quality of the hypothesis. 

“Every agency, every in-house company researches. We do research. We collect data, we have information, we get insights and then they test on insights. But you can’t stop there.” Ben says. 

The trick is not to stop at the insights part but to actually derive a theme based on those insights. This will allow companies to pick underlying strengths and weaknesses to map them into their OKRs. 

For example, you may have a number of insights: a page is underperforming, users are confused about pricing, and social proof gets skipped over. The key is to conduct a thematic analysis and look for patterns across these different insights.

Consequently, it’s important for companies to create a goal tree map to help them understand how things cascade down and to become more tactical and SMART about their goals and set their OKRs accordingly to organize and make sense of the vast amount of data. 

When the time comes to set up a testing program, teams will have a strategic testing roadmap for a particular theme that links to these OKRs. This helps transform the metrics into more actionable frameworks. 

And at the end of each quarter, companies can evaluate their performance based on this scorecard of metrics and how the tests they ran during the quarter impacted these metrics.

Build engagement and efficiency into your testing program strategy 

The main value prop of testing centers around making profit but Ben advocates for a second value prop which revolves around how a business operates. This requires shifting focus to efficiency and how different teams across an organization work together.

Ben parallels the A/B testing industry with DevOps, as it strives to bring in elements of the DevOps cultural movement when referring to a culture of experimentation and data-driven decision-making. In many ways, this echoes the DevOps methodology, which focuses on breaking down silos between development and operations teams to enhance collaboration and efficiency. “The whole idea is to optimize the efficiency of a big team working together,” Ben says.

This means organizations should take a hard look at their testing program and the components that make up the program which includes getting the right people behind it. It’s also about becoming more customer-centric and embracing failure. 

Ben refers to this as the “programmatic side” of the program which serves as the framework or blueprint for decision making. It helps to answer questions like “how do I organize my team structure?” or “what is my meeting cadence with the team?”

Ultimately, it’s about changing and challenging your current process and transforming your culture internally by engaging your team in your testing program and in the way you use data to make decisions.

What else can you learn from our conversation with Ben Labay?

  • Ways to get out of a testing rut 
  • How to structure experimentation meetings to tackle roadblocks 
  • How experimentation relates to game theory 
  • The importance of adopting an actionable framework for decision making

About Ben Labay

Ben Labay combines years of academic and statistics training with customer experience and UX knowledge. Currently, Ben is the CEO at Speero. With two degrees in Evolutionary Behavior and Conservation Research Science (resource management), Ben started his career in academia, working as a staff researcher at the University of Texas focused on research and data modeling. This helped form the foundation for his current passion and work at Speero, which focuses on helping organizations make decisions using customer data.

About 1,000 Experiments Club

The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.