Talia Wolf reveals how emotional marketing can revolutionize your experimentation process and lift conversions.
Talia Wolf, founder and CEO of Getuplift, takes a customer-centric approach to marketing, harnessing the power of emotional marketing techniques to increase visitor conversions.
Her interest in conversion rate optimization (CRO) and experimentation was sparked during her early work at a social media agency. She later became an expert in the field, consulting for many companies on the subject and speaking on stage at Google, MozCon and SearchLove.
Guest host and AB Tasty’s Head of Growth Marketing UK, John Hughes, spoke with Talia about emotional marketing as a tool for optimization, delving into how customer research can facilitate the experimentation process, reduce the rate of failure, and earn the buy-in from company stakeholders.
Here are some of the key takeaways from their conversation.
What is emotional marketing?
Based upon the idea that emotion drives every single decision that we make in life, the emotional targeting methodology shifts the focus of your online marketing content from your solution, features, or pricing, to your customer. Rather than playing a guessing game and simply reshuffling elements on a page, this technique requires a deeper understanding of human behavior. By identifying customer intent and buying motivation, you can create an optimized experience, which meets their needs and increases conversions.
Backed by academic research, the fundamental role of emotion in our daily choices can be integrated into your strategy to better cater to your customers by figuring out a) their biggest challenges and, b) how they want to feel after finding a solution. What is their desired outcome?
With this in mind, you can optimize your digital communications with high-converting copy and visuals that speak directly to your customers’ needs. By shifting the conversation from the product to the customer, an incredible opportunity opens up to scale and multiply conversions.
How do you build and measure an emotion-based experiment?
Firstly, experimentation should be backed by research. From customer and visitor surveys, to review mining, social listening and emotional competitor analysis, Talia encourages extensive research in order to create the most likely hypothesis upon which to base an A/B test.
Once you know more about your customers, you can review the copy and visuals on your product page for example, and from your research you might discover that your content is not relevant to your target customer. You can then come up with a hypothesis based on their actual needs and interests supported by compelling social proof, and write a brief for your designer or copywriter based on the new information.
From there you can build your experiment into your A/B testing platform with a selected North Star metric, whether it’s check-outs, sign-ups or add-to-carts, to prove or disprove your hypothesis. And, while we know that nine out of 10 A/B tests fail, emotional marketing facilitates the hypothesizing process, strengthening the chance of creating a winning experiment by testing variables that can actually impact the customer journey.
How to persuade stakeholders to support your experiments
When it comes to CRO, there are often too many chefs in the kitchen, especially in smaller organizations where founders have a concrete vision of their customers and their messaging.
Talia explains that a research-based approach to experimentation can offer reassurance as part of a slow-and-steady strategy, backed by evidence. This personalized methodology involves talking to your customers and website visitors and scouring the web for conversations about your specific industry, rather than simply following your competitor’s lead.
It becomes a lot easier to propose a test to a founder or CEO when your hypothesis is supported by data and research. However, Talia recommends resisting the urge to change everything at once and instead starting small. Test the emotional marketing in your ads or send out an email sequence requiring only a copywriter, and share the results.
When you’re trying to get buy-in, you need to have a strong hypothesis paired with good research to prove that it makes sense. If this is the case, you can demonstrate the power of emotional marketing by running a couple of A/B tests: one where the control is the current solution-focused content and the variant is a customer-focused alternative, and another which highlights how customers feel right now versus how they want to feel – two important variations which help you to relate better to your customer. The key to garnering support is to take baby steps and continuously share your research and results.
What else can you learn from our conversation with Talia Wolf?
Why B2B purchases are more emotional than B2C. (15:50)
How to stand out in a crowded market by knowing your customer. (20:00)
How emotional marketing impacts the entire customer journey. (25:50)
How to relate to your customer and improve conversions. (32:40)
About Talia Wolf
Conversion optimization specialist Talia Wolf is the founder and CEO of Getuplift – a company that leverages optimization strategies such as emotional targeting, persuasive design, and behavioral data to help businesses generate more revenue, leads, engagement and sales.
Starting her career in a social media agency, where she was introduced to the concept of CRO, Talia went on to become the Marketing Director at monday.com, before launching her first conversion optimization agency, Conversioner, in 2013.
Today, with her proven strategy in hand, Talia teaches companies all over the world to optimize their online presence using emotional techniques.
About 1,000 Experiments Club
The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.
In A/B testing, there are two main ways of interpreting test results: Frequentist vs Bayesian.
These terms refer to two different inferential statistical methods. Debates over which is ‘better’ are fierce – and at AB Tasty, we know which method we’ve come to prefer.
If you’re shopping for an A/B testing vendor, new to A/B testing or just trying to better interpret your experiment’s results, it’s important to understand the logic behind each method. This will help you make better business decisions and/or choose the best experimentation platform.
In this article, we discuss these two statistical methods under the inferential statistics umbrella, compare and contrast their strong points and explain our preferred method of measurement.
As opposed to descriptive statistics (which describes purely past events), inferential statistics try to infer or forecast future events.
Would version A or version B have a better impact on X KPI?
Side note: If we want to geek out, technically inferential statistics isn’t really forecasting in a temporal sense, but extrapolating what will happen when we apply results to a larger pool of participants.
What happens if we apply winning version B to my entire website audience? There’s a notion of ‘future’ events in that we need to actually implement version B tomorrow, but in the strictest sense, we’re not using statistics to ‘predict the future.’
For example, let’s say you were really into Olympic sports, and you wanted to learn more about the men’s swimming team. Specifically, how tall are they? Using descriptive statistics, you could determine some interesting facts about ‘the sample’ (aka the team):
The average height of the sample
The spread of the sample (variance)
How many people are below or above the average
Etc.
This might fit your immediate needs, but the scope is pretty limited.
What inferential statistics allows you to do is draw conclusions about populations that are too large to study descriptively.
If you were interested in knowing the average height of all men on the planet, it wouldn’t be possible to go and collect all that data. Instead, you can use inferential statistics to infer that average from different, smaller samples.
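To make this concrete, here’s a minimal Python sketch of the jump from descriptive to inferential statistics. The heights are invented, illustrative numbers, and the 1.96 multiplier gives an approximate 95% confidence interval for the population mean:

```python
import math

# Heights (cm) of a hypothetical sample of 10 swimmers -- illustrative numbers only.
sample = [188, 192, 185, 190, 187, 193, 189, 186, 191, 184]

n = len(sample)
mean = sum(sample) / n                                   # descriptive: sample average
variance = sum((x - mean) ** 2 for x in sample) / (n - 1)  # unbiased sample variance
std_error = math.sqrt(variance / n)                      # standard error of the mean

# Inferential step: an approximate 95% confidence interval for the
# *population* mean, using the normal approximation (z ~ 1.96).
low, high = mean - 1.96 * std_error, mean + 1.96 * std_error
print(f"sample mean: {mean:.1f} cm, 95% CI: [{low:.1f}, {high:.1f}]")
```

The descriptive numbers summarize only the ten swimmers; the confidence interval is the inferential claim about the larger population they were drawn from.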
Two ways of inferring this kind of information through statistical analysis are the Frequentist and Bayesian methods.
What is the Frequentist statistics method in A/B testing?
The Frequentist approach is perhaps more familiar to you since it’s more frequently used by A/B testing software (pardon the pun). This method also makes an appearance in college-level stats classes.
This approach is designed to make a decision about a unique experiment.
With the Frequentist approach, you start with the hypothesis that there is no difference between test versions A and B. And at the end of your experiment, you’ll end up with something called a P-Value (probability value).
The P-Value is the probability of obtaining results at least as extreme as the observed results assuming that there is no (real) difference between the experiments.
In practice, the P-Value is often loosely interpreted to mean the probability that there is no difference between your two versions. (That’s why it is often “inverted” with the formula p = 1 - pValue, to express the probability that there is a difference.) Strictly speaking this is a simplification, but it’s how most testing tools present the result.
The smaller the P-Value, the stronger the evidence that there is, in fact, a difference, and that the starting hypothesis of “no difference” is wrong.
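As an illustration, here’s a hedged sketch of how a Frequentist A/B test can compute a P-Value, using a standard pooled two-proportion z-test. The visitor and conversion counts are hypothetical, and real testing platforms may use different test statistics:

```python
import math

def p_value_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for H0: 'A and B convert at the same rate',
    via a pooled two-proportion z-test (normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided tail probability from the standard normal CDF (via erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical traffic: 5,000 visitors per variation.
p = p_value_two_proportions(conv_a=500, n_a=5000, conv_b=560, n_b=5000)
print(f"p-value: {p:.4f}")  # a small p-value is evidence against 'no difference'
```

Note that this is computed once, at the end of the test; as discussed below, peeking at intermediate p-values invalidates the design.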
Frequentist pros:
Frequentist models are available in any statistic library for any programming language.
The computation of frequentist tests is blazing fast.
Frequentist cons:
You only estimate the P-Value at the end of a test, not during. ‘Data peeking’ before a test has ended generates misleading results because it actually becomes several experiments (one experiment each time you peek at the data), whereas the test is designed for one unique experiment.
You can’t know the actual gain interval of a winning variation – just that it won.
What is the Bayesian statistics method in A/B testing?
The Bayesian approach looks at things a little differently.
We can trace it back to a charming British mathematician, Thomas Bayes, and his eponymous Bayes’ Theorem.
The Bayesian approach allows for the inclusion of prior information (‘a prior’) into your current analysis. The method involves three overlapping concepts:
Prior – information you have from a previous experiment. At the beginning of the experiment, we use a ‘non-informative’ prior (think ’empty’)
Evidence – the data of the current experiment
Posterior – the updated information you get by combining the prior with the evidence. This is what the Bayesian analysis produces.
By design, this test can be used for an ongoing experiment. When data peeking, the ‘peeked at data’ can be seen as a prior, and the future incoming data will be the evidence, and so on.
This means ‘data peeking’ naturally fits in the test design. So at each ‘data peeking,’ the posterior computed by the Bayesian analysis is valid.
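As a rough sketch of this prior/evidence/posterior loop, the conjugate Beta-Binomial model is the textbook way to update beliefs about a conversion rate. The counts below are hypothetical, and this is a generic illustration rather than AB Tasty’s actual implementation:

```python
import random

random.seed(0)

# Start from a non-informative ('empty') prior: Beta(1, 1) for each variation.
prior_alpha, prior_beta = 1, 1

# Evidence from the current experiment (hypothetical numbers).
conv_a, fail_a = 120, 880   # variation A: 120 conversions out of 1,000 visitors
conv_b, fail_b = 145, 855   # variation B: 145 conversions out of 1,000 visitors

# Posterior = prior updated with the evidence (conjugate Beta-Binomial update).
post_a = (prior_alpha + conv_a, prior_beta + fail_a)
post_b = (prior_alpha + conv_b, prior_beta + fail_b)

# Monte Carlo estimate of P(B beats A): sample both posteriors and compare.
draws = 20_000
wins = sum(
    random.betavariate(*post_b) > random.betavariate(*post_a)
    for _ in range(draws)
)
print(f"P(B > A) = {wins / draws:.3f}")
```

If more data arrives tomorrow, today’s posterior simply becomes tomorrow’s prior and the same update runs again, which is why peeking fits naturally into the design.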
Crucially for A/B testing in a business setting, the Bayesian approach allows the CRO practitioner to estimate the gain of a winning variation – more on that later.
Bayesian pros:
Allows you to ‘peek’ at the data during a test, so you can either stop sending traffic if a variation is tanking or switch earlier to a clear winner.
Allows you to see the actual gain of a winning test.
By its nature, makes you less likely to implement a false positive (more on this below).
Bayesian cons:
Needs a sampling loop, which takes a non-negligible CPU load. This is not a concern at the user level, but could potentially gum things up at scale.
Bayesian vs Frequentist: which is better?
So, which method is the ‘better’ method?
Let’s start with the caveat that both are perfectly legitimate statistical methods. But at AB Tasty, our customer experience optimization and feature management software, we have a clear preference for the Bayesian A/B testing approach. Why?
Gain size
One very strong reason is that with Bayesian statistics, you can estimate a range for the actual gain of a winning variation, instead of only knowing that it was the winner, full stop.
In a business setting, this distinction is crucial. When you’re running your A/B test, you’re really deciding whether to switch from variation A to variation B, not whether you choose A or B from a blank slate. You therefore need to consider:
The implementation cost of switching to variation B (time, resources, budget)
Additional associated costs of variation B (vendor costs, licenses…)
As an example, let’s say you’re a B2B software vendor, and you ran an A/B test on your pricing page. Variation B included a chatbot, whereas version A didn’t. Variation B outperformed variation A, but to implement variation B, you’ll need 2 weeks of developer time to integrate your chatbot into your lead workflow, plus allocate X dollars of marketing budget to pay for the monthly chatbot license.
You need to be sure the math adds up, and that it’s more cost-effective to switch to version B when these costs are weighed against the size of the test gain. A Bayesian A/B testing approach will let you do that.
Let’s take a look at an example from the AB Tasty reporting dashboard.
In this fictional test, we’re measuring three variations against an original, with ‘CTA clicks’ as our KPI.
We can see that variation 2 looks like the clear winner, with a conversion rate of 34.5%, compared to the original of 25%. But by looking to the right, we also get the confidence interval of this gain. In other words, a best and worst-case scenario.
The median gain for version 2 is 36.4%, with the lowest likely gain being +2.25% and the highest +48.40%.
In other words, in 95% of cases the true gain will fall between these two markers.
If we break it down even further:
There’s a 50% chance of the gain percentage lying above 36.4% (the median)
There’s a 50% chance of it lying below 36.4%.
In 95% of cases, the gain will lie between +2.25% and +48.40%.
There remains a 2.5% chance of the gain lying below 2.25% (our famous false positive) and a 2.5% chance of it lying above 48.40%.
This level of granularity can help you decide whether to roll out a winning test variation across your site.
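Here is a hedged sketch of how such a gain interval can be computed by sampling two Beta posteriors. The conversion counts are hypothetical (chosen to loosely echo the dashboard example above), so the exact numbers will differ from the screenshot:

```python
import random

random.seed(42)

# Non-informative Beta(1, 1) prior updated with hypothetical observed data:
# original: 250 conversions / 1,000 visitors; variation 2: 345 / 1,000.
post_orig = (1 + 250, 1 + 750)
post_var2 = (1 + 345, 1 + 655)

def gain_sample():
    """One Monte Carlo draw of the relative gain (var2 - orig) / orig."""
    a = random.betavariate(*post_orig)
    b = random.betavariate(*post_var2)
    return (b - a) / a

gains = sorted(gain_sample() for _ in range(20_000))

# Median and central 95% credible interval of the relative gain.
median = gains[len(gains) // 2]
low, high = gains[int(0.025 * len(gains))], gains[int(0.975 * len(gains))]
print(f"median gain: {median:+.1%}, 95% interval: [{low:+.1%}, {high:+.1%}]")
```

The three printed numbers correspond directly to the median gain and the best/worst-case markers shown in the reporting dashboard.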
Are both the lowest and highest ends of your gain markers positive? Great!
Is the interval small, i.e. you’re quite sure of this high positive gain? It’s probably the right decision to implement the winning version.
Is your interval wide but implementation costs are low? No harm in going ahead there, too.
However, if your interval is large and the cost of implementation is significant, it’s probably best to wait until you have more data to shrink that interval. At AB Tasty we generally recommend that you:
Wait until you have recorded at least 5,000 unique visitors per variation
Let the test run for at least 14 days (two business cycles)
Wait until you have reached 300 conversions on the main goal.
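Those three guidelines can be condensed into a simple readiness check. This is only a sketch of the rules of thumb above, not a statistical significance test:

```python
def ready_to_conclude(visitors_per_variation, days_running, conversions_on_goal):
    """Sketch of the three stopping guidelines above; not a statistical test."""
    return (
        min(visitors_per_variation) >= 5_000   # 5,000+ unique visitors per variation
        and days_running >= 14                 # two full business cycles
        and conversions_on_goal >= 300         # 300+ conversions on the main goal
    )

print(ready_to_conclude([6200, 6050], days_running=15, conversions_on_goal=410))  # True
print(ready_to_conclude([6200, 6050], days_running=9, conversions_on_goal=410))   # False
```

All three conditions must hold; a test with plenty of visitors but only one week of data still isn’t ready.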
Data peeking
Another advantage of Bayesian statistics is that it’s ok for you to ‘peek’ at your data’s results during a test (but be sure not to overdo it…).
Let’s say you’re working for a giant e-commerce platform and you’re running an A/B test involving a new promotional offer. If you notice that version B is performing abysmally – losing you big money – you can stop it immediately!
Conversely, if your test is outperforming, you can switch all of your website traffic to the winning version earlier than if you were relying on the Frequentist method.
This is precisely the logic behind our Dynamic Traffic Allocation feature – and it wouldn’t be possible without Mr. Thomas Bayes.
Dynamic Traffic Allocation
If we pause quickly on the topic of Dynamic Traffic Allocation, we’ll see that it’s particularly useful in business settings or contexts that are volatile or time-limited.
Dynamic Traffic Allocation option in the AB Tasty Interface.
Essentially, (automated) Dynamic Traffic Allocation strikes the balance between data exploitation and exploration.
The test data is ‘explored’ rigorously enough to be confident in the conclusion, and ‘exploited’ early enough so as to not lose out on conversions (or whatever your primary KPI is) unnecessarily. Note that this isn’t manual – a real live person is not interpreting these results and deciding to go or not to go.
Instead, an algorithm is going to make the choice for you, automatically.
In practice, for AB Tasty clients, this means checking the associated box and picking your primary KPI. The platform’s algorithm will then determine if and when to send the majority of your traffic to a winning variation, once one is identified.
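AB Tasty doesn’t publish the internals of this algorithm, but the classic technique for balancing exploration and exploitation in this way is Thompson sampling over Beta posteriors. Here’s a generic, hypothetical sketch; the ‘true’ conversion rates exist only so we can simulate visitors:

```python
import random

random.seed(7)

# One Beta posterior per variation, stored as [alpha, beta] = [successes+1, failures+1].
arms = {"original": [1, 1], "variation_b": [1, 1]}

# Hypothetical 'true' conversion rates, unknown to the algorithm.
true_rates = {"original": 0.10, "variation_b": 0.15}

traffic = {name: 0 for name in arms}
for _ in range(5_000):  # each iteration = one incoming visitor
    # Thompson sampling: draw one sample from each posterior and route the
    # visitor to the variation whose draw is highest.
    chosen = max(arms, key=lambda name: random.betavariate(*arms[name]))
    traffic[chosen] += 1
    converted = random.random() < true_rates[chosen]
    arms[chosen][0 if converted else 1] += 1

print(traffic)  # the better variation ends up receiving most of the traffic
```

Early on, the flat posteriors make the draws essentially random (exploration); as evidence accumulates, the better arm’s posterior dominates and pulls in nearly all the traffic (exploitation).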
This kind of approach is particularly useful:
When optimizing micro-conversions over a short time period
When the time span of the test is short (for example, during a holiday sales promotion)
When your target page doesn’t get a lot of traffic
When you’re testing 6+ variations
Though you’ll want to pick and choose when to go for this option, it’s certainly a handy one to have in your back pocket.
Want to start A/B testing on your website with a platform that leverages the Bayesian method? AB Tasty is a great example of an A/B testing tool that allows you to quickly set up tests with low code implementation of front-end or UX changes on your web pages, gather insights via an ROI dashboard, and determine which route will increase your revenue.
False Positives
In Bayesian statistics, like with Frequentist methods, there is a risk of what’s called a false positive.
A false positive, as you might guess, is when a test result indicates a variation shows an improvement when in reality it doesn’t.
It’s often the case with false positives that version B gives the same results as version A (not that it performs inadequately compared to version A).
While by no means innocuous, false positives certainly aren’t a reason to abandon A/B testing. Instead, you can adjust your confidence interval to fit the risk associated with a potential false positive.
In other words, you consider that your test is statistically significant when you’ve reached a 95% certainty level. You’re 95% sure your version B is performing as indicated, but there’s still a 5% risk that it isn’t.
For many marketing campaigns, this 95% threshold is probably sufficient. But if you’re running a particularly important campaign with a lot at stake, you can adjust your gain probability threshold to be even stricter – 97%, 98% or even 99% – greatly reducing the potential for a false positive.
While this seems like a safe bet – and it is the right choice for high-stakes campaigns – it’s not something to apply across the board.
This is because:
In order to attain this higher threshold, you’ll have to wait longer for results, therefore leaving you less time to reap the rewards of a positive outcome.
You will implicitly only get a winner with a bigger gain (which is rarer), and you will let go of smaller improvements that still could be impactful.
If you have a smaller amount of traffic on your web page, you may want to consider a different approach.
Bayesian tests limit false positives
Another thing to keep in mind: because the Bayesian approach provides a gain interval, and because a false positive typically shows only a slight apparent gain, you’re unlikely to implement one in the first place.
A common scenario would be that you run an A/B test to test whether a new promotional banner design increases CTA click-through rates.
Your result says version B performs better with a 95% gain probability, but that the gain is minuscule (a 1% median improvement). Were this a false positive, you’re unlikely to deploy the version B promotional banner across your website anyway, since the resources needed to implement it wouldn’t be worth the minimal gain.
But since a Frequentist approach doesn’t provide the gain interval, you might be more tempted to implement the false positive. While this wouldn’t be the end of the world – version B likely performs the same as version A – you would be spending time and energy on a modification that won’t bring any added return.
Bottom line? If you play it too safe and wait for a confidence level that’s too high, you’ll miss out on a series of smaller gains, which is also a mistake.
Wrapping up: Frequentist vs Bayesian
So, which is better, Frequentist or Bayesian?
As we mentioned earlier, both approaches are perfectly sound statistical methods.
But at AB Tasty, we’ve opted for the Bayesian approach, since we think it helps our clients make even better business decisions on their web experiments.
It also allows for more flexibility and helps maximize returns (see Dynamic Traffic Allocation above). As for false positives, these can occur with either method – though you’re less likely to fall for one with the Bayesian approach.
At the end of the day, if you’re shopping for an A/B testing platform, you’ll want to find one that gives you easily interpretable results that you can rely on.
The A/B testing method involves a simple process: create two variations, expose them to your customers, collect data, and analyze the results with a statistical formula.
But how long should you wait before collecting data? With 14 days being standard practice, let’s find out why, and look at the exceptions to this rule.
Why 14 days?
To answer this question, we need to understand what we are fundamentally doing: collecting data within a short window in order to forecast what could happen in the future over a more extended period. To keep this article simple, we will focus only on the rules that relate to this principle. Other rules do exist, mostly relating to the number of visitors, but those can be addressed in a future article.
The forecasting strategy relies on the collected data containing samples of all event types that may be encountered in the future. This is impossible to fulfill in practice, as periods like Christmas or Black Friday are exceptional events relative to the rest of the year. So let’s focus on the most common period and set aside these special events that merit their own testing strategies.
If the future we are considering relates to “normal” times, our constraint is to sample each day of the week uniformly, since people do not behave the same on different days. Simply look at how your mood and needs shift between weekdays and weekends. This is why a data sampling period must cover entire weeks, to account for fluctuations between the days of the week. If you sample eight days, for example, one day of the week carries double the weight, which doesn’t realistically represent the future either.
This partially explains the two-week sampling rule, but why not a longer or shorter period? Since one week covers all the days of the week, why isn’t it enough? To understand, let’s dig a little deeper into the nature of conversion data, which has two dimensions: visits and conversions.
Visits: as soon as an experiment is live, every new visitor increments the number of visits.
Conversions: as soon as an experiment is live, every new conversion increments the number of conversions.
It sounds pretty straightforward, but there is a twist: statistical formulas work with the concept of success and failure. The definition is quite easy at first:
Success: the number of visitors that did convert.
Failures: the number of visitors that didn’t convert.
At any given time a visitor may be counted as a failure, but this could change a few days later if they convert, or the visit may remain a failure if the conversion didn’t occur.
So consider these two opposing scenarios:
A visitor begins their buying journey before the experiment starts. During the first days of the experiment they come back and convert. This is counted as a “success”, but they may not have had time to be influenced by the variation, because the buying decision was made before they saw it. The problem is that we are potentially counting a false success: a conversion that could have happened without the variation.
A visitor begins their buying journey during the experiment, so they see the variation from the beginning, but they don’t make a final decision before the experiment ends – finally converting after it finishes. We miss this conversion from a visitor who saw the variation and was potentially influenced by it.
These two scenarios may cancel each other out since they have opposite results, but that is only true if the sample period exceeds the usual buying journey time. Consider a naturally long conversion journey, like buying a house, measured within a very short experiment period of one week. Clearly, no visitors beginning the buying journey during the experiment period would have time to convert. The conversion rates of these visitors would be artificially in the realm of zero – no proper measurements could be done in this context. In fact, the only conversions you would see are the ones from visitors that began their journey before the variation even existed. Therefore, the experiment would not be measuring the impact of the variation.
The delay between exposure to the variation and the conversion skews the measured conversion rate. To mitigate this problem, the experiment period has to be twice as long as the standard conversion journey. This ensures that visitors entering the experiment during the first half have time to convert. You can expect that people who began their journey before the experiment and people entering during the second half of the experiment period will cancel each other out: the first group contains conversions that should not be counted, and some of the second group’s conversions will be missing. Either way, the majority of genuine conversions will be counted.
That’s why a typical buying journey of one week results in a two-week experiment, offering the right balance in terms of speed and accuracy of the measurements.
Exceptions to this rule
A 14-day experiment period doesn’t apply to all cases. If the delay between the exposed variation and the conversion is 1.5 weeks for instance, then your experiment period should be three weeks, in order to cover the usual conversion delay twice.
On the other hand, if you know that the delay is close to zero – such as on a media website, where you are optimizing the placement of an advertisement on a page visitors only stay on for a few minutes – you might think that one day would be enough based on this logic. It isn’t.
The reason is that you would not be sampling every day of the week, and we know from experience that people do not behave the same way throughout the week. So even in a zero-delay context, you still need to run the experiment for an entire week.
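The duration rules above can be condensed into a small helper. This is just a sketch of the “twice the conversion delay, in whole weeks, minimum one week” logic:

```python
import math

def recommended_test_days(conversion_delay_days):
    """Test period = twice the typical conversion journey, rounded up to
    whole weeks, and never less than one full week."""
    weeks = max(1, math.ceil(2 * conversion_delay_days / 7))
    return weeks * 7

print(recommended_test_days(7))     # one-week journey -> 14 days (the standard rule)
print(recommended_test_days(10.5))  # 1.5-week journey -> 21 days
print(recommended_test_days(0))     # near-zero delay -> still a full week: 7 days
```

The rounding to whole weeks is what guarantees every day of the week is sampled equally, and the `max(1, ...)` floor encodes the zero-delay exception.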
Takeaways:
Your test period should mirror the conditions of your expected implementation period.
Sample each day of the week in the same way.
Run a test for a whole number of weeks before closing it.
Respecting these rules will ensure clean measurements. The accuracy of the measurement is determined by another parameter of the experiment: the total number of visitors. We’ll address this topic in another article – stay tuned.
If you ask most e-commerce marketers how to optimize your website to generate more conversions, they’ll tell you to focus on your homepage or product detail pages. While that answer is technically correct, there is a potential goldmine for clicks that even the most seasoned marketers overlook: product listing pages.
While these pages are often used as a catalog for your products and services, they can offer much more than an opportunity to optimize the customer experience. Since visitors browsing your product listing pages are already engaged with your online store, they just need one final push to convert.
In this article, we’ll show you everything you need to know about product listing pages, how to optimize your PLPs, and some examples of great product listing pages.
What are product listing pages?
Product listing pages (sometimes called PLPs or category landing pages) are pages on a website that display products based on a selected category or applied search filters. Product listing pages lead visitors to product detail pages, where they can find more information on the items they’re interested in or add those items to their cart.
One of the main drivers for optimizing your product listing pages is the opportunity they present for optimizing your user experience, as they can be tailored to shoppers with different user intent. On the one hand, we have buyers who visit a website knowing exactly what they are looking for. These potential buyers want to view the items that are most relevant to their search or intent (e.g. a user looking for a mountain bike doesn’t want to view road bikes). On the other hand, other visitors simply love browsing and use your PLPs to sift through the list of products that suit their preferences.
Key elements of a product listing page and how to design your PLPs for better conversions
Creating an effective product listing page starts with the basics. Designing your product listing pages in an optimal way, with all the relevant elements, will increase the odds of shoppers finding the products they are looking for and making a purchase. Here’s what you should make sure to include in your product listing pages:
1. PLP page name: Descriptive title
Remember that Google will only display roughly the first 50 to 60 characters of your page title in search results (the exact cutoff depends on pixel width), so make sure your PLP title is optimized accordingly. For example, if you are selling cell phones, you might want to structure your titles by make, model, memory size, and color so that shoppers see the most important information upfront.
2. Description: Keyword-rich
The product description and title have a big impact on your PLP’s SEO and product discoverability. Make sure that your descriptions are thorough and contain all the relevant keywords that will help you rank higher. Remember: the more specific, the better.
3. Breadcrumbs: Proper category name
Make sure that each product is placed in the most relevant category to both orient your shoppers and help them discover similar products. Breadcrumbs can display the parent category/subcategories so that users can jump back and forth between product listing pages with ease.
4. Imagery: Thumbnail
People process visual information faster than anything else, and your product images will be the first thing a customer sees. Use high-quality photos and be consistent (for example, use the same color background for every image). If you use varying backgrounds, colors, and sizes, your customers will be distracted. Want proof? Read our case study on Hanna Andersson, which shows that keeping all images simple, clear, and harmonized works wonders for results.
5. Price
Make sure that your pricing is competitive. Do your research, benchmark your prices against your competitors, and make adjustments where needed. Highlight any other elements that make your pricing more competitive, like free shipping or buy-one-get-one-free offers.
6. CTAs
Call to action buttons (also known as CTAs) are items that use imperative wording to nudge your users towards the action you want them to take, like “Add to cart now!” or “Save to Wishlist” if a product is unavailable. It’s important to create an effective CTA by following design best practices and carefully testing different variations of your call to action’s copy, location and colors.
Make sure that your button is visible against the background and all the other elements on the page. This not only draws the visitor’s eye to the call to action but shows them that the button is clickable. It’s important that your button looks like a button, even if you want to adopt a more minimalistic design for your website.
Next, make sure that your call to action conveys urgency. Using phrases like “Sign up now,” “Hurry” or “Don’t delay” encourages your users to not only act, but to act fast. It’s also a good idea to utilize first-person copy so that the visitor feels more connected to the CTA.
7. Filter menu
This menu displays the filtering options available to refine searches by attributes, like pricing, color, style, availability, size, and more. This will help your customers find what they are looking for easily.
8. Sorting menu
The sorting menu presents different options for organizing products using a dropdown menu, including “Price: Low to High,” “Newest,” or “Rating: High to Low.”
Sorting options have a single purpose: helping shoppers get to the right products faster in order to increase conversion. Your sorting options should be based on your audience’s needs and expectations regarding your products. Putting yourself in your customers’ shoes is crucial for optimization at this point in the digital customer journey.
Let’s take WatchShop as an example.
WatchShop knows that watches come in all sorts of shapes and colors, so they created various sorting options to match visitors’ requests. This includes water resistance, strap type, case color, movement type, and so on.
It’s all tailored to match customers’ expectations – and it delivers.
10 best practices for creating and optimizing product listing pages
Now that you know how to design your product listing pages, let’s get into how to optimize them for the best results:
1. Optimize headers or banners
Headers play the “title” role of each category and listing page design.
They’re the main indication of the page’s content and should be treated as the most important thing. If the header does not properly describe the page or the category, visitors will not be able to find what they are looking for.
Headers can reinforce your branding, so use the space at the top of your page to create a great-looking banner that engages and informs without adding clutter. In addition, never forget to include your keywords inside your <h1> tags. Not only will this make them more visible on the page, but they’re also a big part of your on-page SEO efforts that will help you appear higher in search results.
In the image below, beauty specialist Ulta bets on shiny visuals to increase its headers’ visibility. It’s a good solution to avoid “all text” headers that can seem dull at first glance.
Note: headers can also be used as promotional spaces to display featured products, special offers, and discounts.
2. Experiment with multiple layouts on your product listing display
Your product listing display has a significant impact on the way your customers interact with the products. There’s no single right answer when it comes to choosing a layout: it all depends on what type of products you’re selling and what experience you want to provide. The most common choices are list view and grid view:
List view
List views require a little more scrolling but can display more product information than a grid. This makes it easy for shoppers to compare product attributes, like dimensions or features. Some sites let customers toggle between a list or grid view, depending on their preferences.
The list view is better suited for products that require extensive information and specifications in order to help customers compare aspects of similar products.
It’s a great fit for technical products like TVs, computers, electronics, DVDs, hardware, etc. However, this isn’t the only time to use list view for your product pages.
If we take a look at the image below, Wine.com sells luxury wines and champagnes. In this case, it’s important that visitors take their time benchmarking the brands and “grands crus” before making a purchase decision.
Notice how they capitalize on the extra horizontal space to display ratings.
Grid view
Grid views allow customers to browse and compare products next to each other. This is a good option if your site is picture-heavy and doesn’t require a lot of description outside the product titles.
Grid view is mostly used for products that rely a lot on pictures and can be compared quickly without paying much attention to the specifications. It also allows for more visual experiences.
Amazon uses grid view to display products inside its “gift ideas” category. They also use tags to rank the bestsellers and lure visitors into clicking on the products based on their popularity.
3. Add persuasion triggers
Persuasion triggers create a sense of urgency or scarcity. You most often see this on hotel booking sites or the sales section of a fashion store (“Only 1 item left at this price!” or “Selling out fast!”). These labels trigger visitors’ fear of missing out and push them to take action, so be sure to add them to your images to nudge them into making a purchase.
4. Personalize the shopping experience
Personalization can dramatically increase conversions, boost engagement, and help shoppers discover relevant products by sorting them according to their individual preferences. Personalization has also been known to reduce bounce rates by 20-30% and increase customer loyalty.
One thing you can do to personalize the experience for your visitors is to display complementary products that they might be interested in. For example, customers shopping for a new bedspread might also be interested in buying pillowcases or sheets, so steer them in that direction.
5. Use recommendations
If someone is already browsing your product listing page, the chances are that they already have the intent of making a purchase. This is the best time to make suggestions and cross-sell or upsell your products:
Some customers suffer from decision fatigue when they are presented with too many options. Gently recommend popular products, others within the same category and with the same tags, or similar (but slightly pricier) alternatives.
Show customers recommended products that might be relevant to the one they are viewing. For example, clients who are interested in a technical product would appreciate a “People who purchased this product also purchased” section that shows the accessories that go along with it.
Present seasonal bestsellers to add specificity and relevance, which could lead to more clicks and conversions. We tend to think that other people’s actions are the correct ones, so if a product is tagged as “trending,” it gains additional legitimacy that could push a customer to make a decision.
6. Simplify site navigation
Your navigation has to be tailored to help prospective customers find what they are looking for with as little friction as possible. There are a few tips and tricks you can try, including:
Put your best-selling items front and center: We’ve already touched on the fact that customers like knowing what items others are buying. The most popular options are often seen as the safest ones to buy. Throw in some social proof messages like user ratings to really drive the point home.
Site speed is a crucial factor for UX: Make sure that your site loads quickly on both desktop and mobile devices to ensure that customers have an enjoyable experience.
Ensure that your navigation bar is fixed to the top of the page and organizes your products in a logical fashion.
No matter the level of page depth, navigation always plays a crucial role in the user’s experience – and your product listing page is no different. Because some products have complex specifications and require extensive sorting options, pay attention to how well your website performs at sorting products and helping customers find their perfect product.
In the image below, RevZilla does a great job of guiding customers through the endless journey of finding the right motorcycle helmet.
They use their left column to help customers sort and rank products according to several criteria (faceted search):
Color
Type
Shape
Category
Size
Gender
Bonus point: RevZilla provides visitors the opportunity to only display products that have a video review. This is a huge value proposition compared to their competitors.
7. What information to display on your PLPs?
There are tons of options regarding which information you can display on your product listing pages and category pages. Simply put, you need to display information that will effectively help and convince consumers to move down the funnel and make a purchase.
In order to help you choose, here’s a list of information that may be displayed on your product listing page:
Star ratings
Discounts
Color options
Stock availability
Best-sellers
Add to cart
New / Used
Short descriptions
As an example, in the image below BestBuy does a great job of providing useful information on its product listing page. Besides the pictures and the price, they also added star ratings, discounts, and an add-to-cart button with a smart color hierarchy.
8. Optimize SEO for product listing pages
SEO is a big deal for most e-commerce players. In fact, search engine traffic accounts for around 50% of all e-commerce traffic according to a 2023 study led by SmartInsights.
There are two main reasons that justify the dominance of product listing pages regarding SEO:
A. Product listing pages are keyword-rich
Because they contain the names, brands, prices, specifications, and descriptions of products, category pages tend to be keyword-rich. This means that they naturally rank for a lot of keywords in search engines.
B. Product listing pages are the most heavily linked to
Product listing pages are typically where you want your customers to start their journey (or alternatively on the product page itself), which is why SEO pros tend to focus their efforts on these pages. Besides this, all products within a category generally link back to that category, which is a strong internal link-building pattern.
Tips for optimizing SEO on your product listing pages:
Optimizing your title tags
Using unique and original product and meta descriptions
Linking to internal pages
Using image alt attributes and rich snippets
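To make the rich-snippet tip concrete, here is a minimal Python sketch that builds the schema.org JSON-LD structured data a product listing entry might embed. The product name, URL, and values are hypothetical placeholders, not real catalog data:

```python
import json

def product_json_ld(name, image_url, price, currency, rating, review_count):
    """Build a schema.org Product rich snippet as a JSON-LD string."""
    snippet = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "image": image_url,
        "offers": {
            "@type": "Offer",
            "price": f"{price:.2f}",
            "priceCurrency": currency,
        },
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": rating,
            "reviewCount": review_count,
        },
    }
    return json.dumps(snippet, indent=2)

# Hypothetical product -- illustrative values only.
print(product_json_ld("Leather Strap Watch", "https://example.com/watch.jpg",
                      129.99, "EUR", 4.6, 212))
```

The resulting string would typically go inside a `<script type="application/ld+json">` tag on the listing page so search engines can display star ratings and prices directly in results.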
9. Should you use Quick View or add-to-cart buttons?
Quick View is an e-commerce function that allows visitors to generate a miniature version of the desired product page. In other words, it’s a mini product page that generally embeds a direct “add to cart” button.
Not all products require lengthy deliberation and consideration before making a purchase, especially for returning customers or shoppers buying fast-moving goods like groceries. Add-to-cart buttons make it easier to speed through the checkout process. You can also implement add-to-wishlist buttons for more complex or expensive items to maximize conversions.
10. Use clear and concise CTAs
Call to action buttons can have a massive impact on your conversions. When Dutch watch brand Cluse noticed that their product listing pages had high bounce rates (and that clicks to the product display pages were low), they turned to AB Tasty to find a solution.
Cluse set up a simple test to see whether changing their CTA’s location and color would improve the results. The team’s hypothesis was correct, and the site saw a 2.39% increase in the click-through rate to the product display page and a 1.12% uplift in transactions during the three-week test.
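Results like Cluse’s are judged by comparing conversion rates between the control and the variation. A minimal sketch of that comparison, using a standard two-proportion z-test with made-up traffic numbers (not Cluse’s actual data):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Absolute uplift and z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return p_b - p_a, (p_b - p_a) / se

# Hypothetical numbers for illustration only.
uplift, z = two_proportion_z(conv_a=1200, n_a=25_000, conv_b=1320, n_b=25_000)
print(f"absolute uplift: {uplift:.4%}, z-score: {z:.2f}")
```

A z-score above roughly 1.96 corresponds to 95% two-sided significance, which is why test duration (here, three weeks) matters: smaller samples produce wider error margins.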
Examples of effective product listing pages
ASOS
ASOS uses short but descriptive copy on their product listing pages.
The clothing retailer’s product listing pages are categorized by trends and style. They use extremely simple copy and appealing photographs to convince shoppers to make a purchase. Users can add items to their wishlist or cart directly from the product listing page and check out using the simple navigation banner.
Everlane
Everlane uses quick add-to-cart buttons to optimize their product listing pages
Everlane uses a number of features from the best practice guidebook, including adding product size options as an overlay in the image, easy navigation using the grid view and sidebar, and quick add-to-cart buttons.
Walmart
Walmart uses compelling headers and content on their product listing page.
Walmart puts bestsellers at the top of their product listing pages, along with engaging headers that feature collections by influencers like Sofia Vergara and Kim Kardashian. They also use quick add-to-cart buttons to make it easier to shop. The copy is clear and concise, and users are able to comfortably scroll through galleries of attractive images. Returning users are greeted with a warm, personalized message.
How many products per page and per row to display on your PLP?
If you opt for a grid view template, there’s no doubt that you’ll eventually come to the question, how many products per row?
As for choosing between list view and grid view, there’s no single answer that will fit everyone’s needs. In fact, the number of products displayed per row depends on 3 main factors:
Image size – If you choose to display big, high-resolution images, there’s no doubt that you will have a hard time squeezing more than 4-5 products into a row.
Number of products – The number of products listed per row also depends on your total number of products for a given category. If you only have 12 products to display, it’s a lot more coherent to opt for a 4X3 grid structure rather than 2X6. You need to fill the page visually.
Volume of information – Not all products are considered equal when it comes to their product description. Some products natively require more information than others. The more space they need, the fewer products you will display.
In the image below, Canada Goose, a high-quality outerwear provider, relies on a 2-products-per-row structure. This strategy highlights the visuals and delivers a more premium feeling to the user’s experience.
How to find what works best on your product listing pages?
A/B test your product listing pages.
There is no secret when it comes to Conversion Rate Optimization (CRO): testing is what makes it work. The recipe for success doesn’t change for your product listing pages: you just have to A/B test them.
Now the question is, how can you do that? We have great news for you: we’re A/B testing specialists.
Making a good product listing page isn’t easy. You will have to identify elements that work and elements that don’t to gradually increase your conversions and offer an overall better user experience to your customers.
Want to start optimizing your product listing pages? AB Tasty is the best-in-class experience optimization platform that empowers you to create a richer digital experience – fast. From experimentation to personalization to smart search and recommendations, our solution can help you activate and engage your audience to boost your conversions.
Conclusion: The ultimate product listing page
Product listing pages can be conversion machines. When they’re properly optimized, they’re key for delivering an exceptional customer experience, helping you rise to the top of search engine results, and increasing basket size.
Whether you’re a seasoned seller or are venturing into the world of e-commerce for the first time, it may seem overwhelming to hit all the right notes – and find the best ideas to take your product listing pages to the next level!
Creating product listing pages will look a bit different depending on your market sector. However, for maximum performance, keep these best practices in mind for your e-commerce brand.
Personalization is a hypothesis that needs to be tested
Ben Combe, Data Director, Optimization & Personalization APAC at Monks
Hosted by Julia Simon, VP APAC at AB Tasty
Featuring Ben Combe, Data Director, Optimization & Personalization APAC at Monks
Conversion Rate Optimization (CRO) is a user-centric approach that emphasizes long-term benefits over just leading customers to click on certain elements or CTAs. To achieve this, understanding your data through the use of experimental and scientific methods is key. In this episode, Ben Combe, Data Director, Optimization & Personalization APAC at Monks joins Julia Simon, VP APAC at AB Tasty to discuss CRO techniques and best practices. They find answers to where companies should start, what to prioritize, which methodologies to use, and how to execute a compelling optimization roadmap.
Whether you’re just starting your CRO journey, or you’re already a CRO expert, this session is for you!
Episode #2:
Where do you start?
Ideas flow from everywhere in the business as data collection happens perpetually. Knowing what your top priorities are is where you should start. You don’t just change the color of your CTA from blue to red because it’s Valentine’s Day and you have a gut feeling.
Ben suggests first taking a look at how the business is doing and where you can focus for the most impact. Should you focus on acquisition, retention, or loyalty? Identify the pain points that need solving and where they sit. Secondly, dive into your customer data by looking at your conversion points: map where your customers are dropping off and combine that with your qualitative insights. Thirdly, brainstorm with your team to come up with ideas.
Prioritization Frameworks: PIE or ICE?
In CRO, time and resources are finite, therefore every experiment counts. You need clear guidelines to choose what ideas to test and what to leave behind. So it’s essential to prioritize – but should you use PIE or ICE?
If you’re just starting your experimentation journey, Ben recommends looking at traffic, value, and ease: how many people visit the page, what it’s worth in dollars, and what development resources it would take. If you’re already mature in CRO, a bespoke checklist tailored to your business needs is the way to go.
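Frameworks like ICE (Impact, Confidence, Ease) or PIE (Potential, Importance, Ease) boil down to averaging a few 1-10 scores per idea and ranking the backlog. A quick sketch, with made-up idea names and scores for illustration:

```python
def ice_score(impact, confidence, ease):
    """ICE prioritization: the average of 1-10 scores for Impact, Confidence, Ease."""
    return (impact + confidence + ease) / 3

# Hypothetical experiment backlog -- names and scores are illustrative.
backlog = {
    "Move CTA above the fold": ice_score(8, 7, 9),
    "Rebuild checkout flow":   ice_score(9, 6, 2),
    "Add urgency badges":      ice_score(6, 5, 8),
}

# Highest-scoring ideas get tested first.
for idea, score in sorted(backlog.items(), key=lambda kv: -kv[1]):
    print(f"{score:.1f}  {idea}")
```

The value of the framework is less in the arithmetic than in forcing the team to score every idea on the same scale before committing development time.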
The importance of UX
Running A/B tests is a great way of conducting UX research while your product is live. It helps you decide on what works and what doesn’t work for your customers. By testing different design options, designers are able to gather valuable user feedback. This can then be used for design improvement that is more user-centric, and that leads to increased user engagement and satisfaction. Keeping the UX Team in the loop is essential for continuous learning and improvement.
The Quick Wins
Looking into easy, quick wins in the beginning of your experimentation strategy will bring you good results. Once you pick all the low-hanging fruit, Ben encourages you to shift your mindset towards a more innovative approach. Think outside the box, analyze your segments deeper, and iterate.
Synchronizing A/B Testing and Personalization
A/B testing allows you to understand the effectiveness of your personalization strategies by comparing various content, design elements, and offers. This insight allows you to deliver an experience that resonates best with customers, leading to higher engagement. It’s important to note that no personalization should go live without being tested. Behaviors change, and it’s necessary to continuously experiment in order to validate that your personalization is still relevant.
Rand Fishkin discusses the importance of “non-attributable” marketing and why companies should take more risks and allow themselves the freedom to fail.
Rand Fishkin is the co-founder and CEO of SparkToro, a software company that specializes in audience research for targeted marketing. Previously, Rand was the co-founder and CEO of Moz, where he started SEOmoz as a blog that turned into a consulting company, then a software business. Over his seven years as CEO, Rand grew the company to 130+ employees, $30M+ in revenue, and brought website traffic to 30M+ visitors/year.
He’s also dedicated his professional life to helping people do better marketing through his writing, videos, speaking, and his latest book, Lost and Founder.
AB Tasty’s VP Marketing Marylin Montoya spoke with Rand Fishkin about the culture of experimentation and fear of failure when it comes to marketing channels and investments. Rand also shares some of his recommendations on how to get your brand in front of the right audience.
Here are some key takeaways from their conversation.
Taking a more risk-based approach
Rand believes there’s so much focus on large markets that people often overlook the enormous potential of smaller ones in favor of the more typical venture path. In that sense, founders become biased towards huge, total addressable markets.
“They don’t consider: here’s this tiny group of people. Maybe there are only three or 4000 people or companies who really need this product, but if I make it for them, they’re going to love it. I think that there’s a tremendous amount of opportunity there. If folks would get out of their head that you have to look for a big market,” Rand says.
People avoid such opportunities because of the regulatory challenges, restrictions, and other barriers to entry that often come with them. But for Rand, these underserved markets are worth the risk because competition is scarce. There’s real potential to build something truly special for those willing to overcome the challenges that come with it, Rand argues.
There are a lot of underserved niches and business opportunities out there in the tech world, if companies would only shift away from the “growth at all costs” mentality.
“The thing about being profitable is once you’re there, no one can take the business from you. You can just keep iterating and finding that market, finding new customers, finding new opportunities. But if you are constantly trying to chase growth unprofitably and get to the metrics needed for your next round, you know all that goes out the window,” Rand says.
Freedom to fail
Similarly, Rand states that there’s a huge competitive advantage in committing resources toward marketing channels where attribution is hard or impossible because no one else is investing in these kinds of channels. That’s where Rand believes companies should allocate their resources.
“If you take the worst 10 or 20%, worst performing 10 or 20% of your ads budget, your performance budget, and you shift that over to hard-to-measure, experimental, serendipitous, long-term brand investment types of channels, you are going to see extraordinary results.”
However, the problem is getting buy-in from more senior stakeholders within a company because of these “hard-to-attribute” and “hard-to-measure” channels. In other words, they refuse to invest in channels where they can’t prove attribution (a change in conversion rate or sales) or return on investment. Thus, any channel that is poor at providing proof of attribution gets underinvested. Rand strongly believes that it’s still possible to get clicks on an organic listing of your website and get conversions even if a brand doesn’t spend anything on ads.
“I think brand and PR and content and social and search and all these other organic things are a huge part of it. But ads are where those companies can charge because the CEO, CMO, CFO haven’t figured out that believing in hard-to-measure channels and hard-to-attribute channels and putting some of your budget towards experimental stuff is the right way to do things,” Rand argues.
According to Rand, these are exactly the kinds of channels where more resources need to be allocated as they generate a higher return on investment than any ad a company might spend on the more typical and bigger name platforms.
“Your job is to go find the places your audience pays attention to and figure out what your brand could do to be present in those places and recommended by the people who own those channels.”
According to Rand, there is a learning curve in finding the message that resonates with this audience and the content that drives their interest as well as the platforms where you can connect with them and this will all depend on who your audience is.
Experiment with AI
For Rand, the AI boom is more realistic and interesting than previous big tech trends. He especially sees its biggest advantage in solving big problems within organizations that can be best solved with large language model generative AI.
However, it’s important not to insert AI in a business or create problems just for the sake of using it or to apply it to the wrong places.
“If you find that stuff fascinating and you want to experiment with it and learn more about it, that’s great. I think that’s an awesome thing to do. Just don’t go trying to create problems just to solve this, to use it.”
He believes the best use case for AI is for more tedious jobs that would be otherwise too time-consuming as opposed to using it for more tactical or strategic marketing advice. Nonetheless, he does believe that there are a lot of interesting and useful solutions and products being built with AI that will solve many problems.
What else can you learn from our conversation with Rand Fishkin?
The importance of brand and long-term brand investments
Why it’s hard to get leadership to shift away from common ad platforms
How social networks have become “closed networks”
Why attention needs to shift to your audience and how they can become “recommenders” of your product
About Rand Fishkin
Rand Fishkin is the co-founder and CEO of SparkToro, makers of audience research software built to make audience research accessible to everyone. He’s also the founder and former CEO of Moz, and co-founded Inbound.org alongside Dharmesh Shah, which was sold to Hubspot in 2014. Rand has become a frequent worldwide keynote speaker on marketing and entrepreneurship, with a mission to help people do better marketing.
About 1,000 Experiments Club
The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.
The opportunity cost of NOT testing is never knowing how much revenue you are losing from not knowing.
Dave Anderson, VP Product Marketing and Strategy
We are living in a time where people treat products and services as commodities. Customers of today expect an experience alongside whatever they have purchased. Optimizing digital experiences can directly impact a company’s bottom line by improving conversion rates, reducing customer frustration, and enhancing brand sentiment.
Hosted by Julia Simon, VP APAC at AB Tasty
Featuring Dave Anderson, VP Product Marketing and Strategy at Contentsquare
In this episode, Dave joins Julia to discuss various facets of customer experience and experimentation trends in Asia Pacific. They unravel key insights regarding the impact of Customer Experience (CX) Optimization on revenue generation, the widespread adoption of optimization practices across industries, the importance of collaboration between teams, and the value of continuous experimentation.
Dive deep into Episode #1
1. Impact of CX Optimization on Revenue:
Businesses that focus on understanding the needs of their customers increase revenue by turning new buyers into loyal customers and encouraging loyal customers to purchase consistently. Providing a great customer experience directly impacts a company’s bottom line by improving conversion rates, reducing customer frustration, and, in the long run, increasing customer lifetime value.
2. Adoption of Optimization Practices Across Industries:
Virtually every industry including education, finance, retail, and telecommunications is now embracing CX optimization as a means to meet evolving customer expectations. They discuss how companies leverage social proof, countdown banners, personalisation strategies and more to enhance digital experiences and stay competitive in today’s market.
3. Importance of Collaboration Between Teams:
Collaboration between different teams in an organization is key to driving a successful CX strategy. The need for alignment between UX, product, tech, and marketing teams is important to ensure that optimization efforts are cohesive and well executed.
4. Value of Continuous Experimentation:
Continuous experimentation is the cornerstone of a successful optimization strategy. The discussion underscores the importance of testing hypotheses, analyzing results, and iterating based on insights to drive ongoing improvements in digital experiences. Closing the episode, they agree that organizations need to adopt a culture of experimentation and data-driven decision-making to remain agile and responsive to evolving customer needs.
AB Tasty and Google BigQuery have joined forces to provide seamless integration, enabling customers with extensive datasets to access insights, automate, and make data-driven decisions to push their experimentation efforts forward.
We have often discussed the complexity of understanding data to power your experimentation program. When companies are dealing with massive datasets they need to find an agile and effective way to allow that information to enrich their testing performance and to identify patterns, trends, and insights.
Go further with data analytics
Google BigQuery is a fully managed cloud data warehouse solution, which enables quick storage and analysis of vast amounts of data. This serverless platform is highly scalable and cost-effective, tailored to support businesses in analyzing extensive datasets for making well-informed decisions.
With Google BigQuery, users can effortlessly execute complex analytical SQL queries, leveraging its integrated machine-learning capabilities.
This integration with AB Tasty’s experience optimization platform means customers with large datasets can use BigQuery to store and analyze large volumes of testing data. By leveraging BigQuery’s capabilities, you can streamline data analysis processes, accelerate experimentation cycles, and drive innovation more effectively.
Here are some of the many benefits of Google BigQuery’s integration with AB Tasty to help you trial better:
BigQuery as a data source
With AB Tasty’s integration, specific data from AB Tasty can be sent regularly to your BigQuery dataset. Each Data Ingestion Task has a name, an SQL query to get exactly what you need, and a scheduled frequency for data retrieval. This data can then power highly targeted ads and messages, making it easier to reach the right people.
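As a rough illustration of what an ingestion task bundles together, here is a Python sketch modeling the three pieces described above: a name, an SQL query, and a retrieval frequency. The field names, dataset path, and columns are assumptions for illustration, not AB Tasty’s or BigQuery’s actual API:

```python
from dataclasses import dataclass

@dataclass
class DataIngestionTask:
    """Illustrative model of a Data Ingestion Task.

    Field names are hypothetical -- they mirror the description in the
    text (name, SQL query, frequency), not a real AB Tasty schema.
    """
    name: str
    sql: str
    frequency_hours: int

task = DataIngestionTask(
    name="daily_campaign_metrics",
    sql="""
        SELECT campaign_id, COUNT(DISTINCT visitor_id) AS visitors
        FROM `my_project.ab_tasty.events`  -- hypothetical dataset path
        GROUP BY campaign_id
    """,
    frequency_hours=24,  # run once a day
)
print(task.name, task.frequency_hours)
```

In practice, the configured query runs against your BigQuery dataset on the chosen schedule, so the downstream activation tools always work from fresh campaign data.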
Centralized storage of data from AB Tasty
The AB Tasty and BigQuery integration simplifies campaign analysis too by eliminating the need for SQL or BI tools. Their dashboard displays a clear comparison of metrics on a single page, enhancing efficiency. You can leverage BigQuery for experiment analysis without duplicating reporting in AB Tasty, getting the best of both platforms. Incorporate complex metrics and segments by querying our enriched events dataset and link event data with critical business data from other platforms. Whether through web or feature experimentation, it means more accurate experiments at scale to drive business growth and success.
Machine learning
BigQuery can also be used for machine learning on experimentation programs, helping you to predict outcomes and better understand your specific goals. BigQuery gives you AI-driven predictive analytics for scaling personalized multichannel campaigns, free from attribution complexities or uncertainties. Access segments that dynamically adjust to real-time customer behavior, unlocking flexible, personalized, and data-driven marketing strategies to feed into your experiments.
Enhanced segmentation and comprehensive insight
BigQuery’s ability to capture behavior means that you can segment better: its data segmentation allows you to categorize users based on various attributes or behaviors. With the data sent to BigQuery from your experiments, you can create personalized content or features tailored to specific user groups, optimizing engagement and conversion rates.
Finally, the massive benefit of this integration is to get joined-up reporting – fully automated and actionable reports on experimentation, plus the ability to feed data from other sources to get the full picture.
A continued partnership
This integration comes after Google named AB Tasty an official Google Cloud Partner last year, making us available on the Google Cloud Marketplace to streamline marketplace transactions. We are also fully integrated with Google Analytics 4, and we were thrilled to be named one of Google’s preferred vendors for experimentation after the Google Optimize sunset.
As we continue to work closely with the tech giant to help our customers continue to grow, you can find out more about this integration here.
When it comes to CRO, or Conversion Rate Optimization, it would be natural to assume that conversion is all that matters. At least, we can argue that conversion rate is at the heart of most experiments. However, the ultimate goal is to raise revenue, so why does the CRO world put so much emphasis on conversion rates?
In this article, we’ll shed some light on the reason why conversion rate is important and why it’s not just conversions that should be considered.
Why is conversion rate so important?
Let’s start off with the three technical reasons why CRO places such importance on conversion rates:
Conversion is a generic term. It can mean that an e-commerce visitor became a customer by buying something, or simply that the visitor went further than the homepage, clicked on a product page, or added a product to the cart. In that sense, it’s the Swiss Army knife of CRO.
Conversion statistics are far easier to work with than those of other KPIs, being the simplest from a maths point of view. The measurement itself is straightforward: success or failure. This means off-the-shelf code or simple spreadsheet formulas can compute the statistical indices used for decisions, such as the chance to win or confidence intervals on the expected gain. This is not as easy for other metrics, as we will see later with Average Order Value (AOV).
Conversion analysis is also the simplest when it comes to decision-making. There’s (almost) no scenario where raising the number of conversions is a bad thing. Therefore, deciding whether or not to put a variation in production is an easy task when you know that the conversion rate will rise. The same can’t be said of the “multiple conversions” metric: unlike the conversion rate, which counts one conversion per visitor even if that visitor made two purchases, every conversion counts, which often makes the metric more complex to analyze. For example, the number of product pages seen by an e-commerce visitor is harder to interpret. A variation increasing this number could have several meanings: the catalog may be more engaging, or visitors may be struggling to find what they’re looking for.
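To illustrate how simple conversion statistics are to compute, here is a minimal sketch (not AB Tasty’s actual implementation) of a Bayesian “chance to win” for conversion data: each variation’s rate gets a Beta posterior under a uniform prior, and Monte Carlo sampling estimates the probability that the variation beats the control.

```python
# Sketch of a "chance to win" index for success/failure conversion data.
# With a Beta(1, 1) prior, a variation with `conv` conversions out of `n`
# visitors has a Beta(1 + conv, 1 + n - conv) posterior for its true rate.
import random

def chance_to_win(conv_a, n_a, conv_b, n_b, draws=100_000, seed=42):
    """Estimate P(true rate of B > true rate of A) by Monte Carlo sampling."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        rate_a = rng.betavariate(1 + conv_a, 1 + n_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + n_b - conv_b)
        wins += rate_b > rate_a
    return wins / draws

# 5.0% vs 6.0% observed conversion, 5,000 visitors per variation:
print(chance_to_win(250, 5000, 300, 5000))
```

With these figures the index lands around 0.94: promising, but short of a typical 95% decision threshold, which is exactly the kind of call the statistical index is there to support.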
For these reasons, the conversion rate is the starting point of all CRO journeys. However, the conversion rate on its own is not enough: to optimize revenue, you also need to pay attention to factors other than conversions.
Beyond conversion rate
Before we delve into a more complex analysis, we’ll take a look at some simpler metrics, including ones that are not directly linked to transactions, such as “add to cart” or “viewed at least one product page”.
If such a metric is statistically assured to win, then it’s a good choice to put the variation into production, with one exception: if the variation is very costly, you will need to dig deeper to ensure that the gains will cover the costs. This can occur, for example, if the variation includes a product recommender system that comes at a cost.
The bounce rate is also simple and straightforward; the only difference is that, unlike the conversion rate, the aim is to drive the figure down. The main idea is the same: if you change your homepage image and see the bounce rate statistically drop, then it’s a good idea to put the change in production.
We will now move onto a more complex metric, the transaction rate, which is directly linked to the revenue.
Let’s start with a scenario where the transaction rate goes up. You assume you will get more transactions from the same traffic, so the only way this could be a bad thing is if you earn less in the end, which would mean your average order value (AOV) has plummeted. The basic revenue formula shows it explicitly:
Total revenue = traffic * transaction rate * AOV
Since we consider traffic as an external factor, then the only way to have a higher total revenue is to have an increase in both transaction rate and AOV or have at least one of them increase while the other remains stable. This means we also need to check the AOV evolution, which is much more complicated.
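A tiny numeric check of the formula above (the figures are made up) shows how a transaction-rate win can still lose revenue when the AOV drops:

```python
# Revenue decomposition: total revenue = traffic * transaction rate * AOV.
# A +10% transaction-rate lift does not pay off if the AOV falls too far.
traffic = 100_000

baseline = traffic * 0.020 * 80.0   # 2.0% transaction rate, 80 EUR AOV
variation = traffic * 0.022 * 70.0  # 2.2% transaction rate, AOV down to 70 EUR

print(baseline, variation)  # 160000.0 154000.0
```

Despite 10% more transactions, the variation brings in €6,000 less, which is why the AOV check below matters.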
On the surface, it looks simple: take the sum of all transaction values, divide by the number of transactions, and you have the AOV. But while the formula is basic, the data isn’t. It’s no longer just success or failure; it’s a set of values that can vary widely.
Below is a histogram of transaction values from a retail ecommerce website. The horizontal axis represents values (in €), the vertical axis is the proportion of transactions with this value. Here we can see that most values are spread between 0 and €200, with a peak at ~€50.
The right part of this curve shows a “long/fat tail”. Now let’s try to see how the difference within this kind of data is hard to spot. See the same graph below but with higher values, from €400 to €1000. You will also notice another histogram (in orange) of the same values but offset by €10.
We see that the €10 offset, which corresponds to a 10-unit shift to the right, is hard to distinguish. And since it affects the highest values, this part of the curve has a huge influence when averaging samples. Due to the shape of this transaction value distribution, any measure of the average value is somewhat blurred, which makes it very difficult to obtain clear statistical indices. For this reason, changes in AOV need to be very drastic, or measured over a huge dataset, to be statistically confirmed, making AOV difficult to use in CRO.
Another important feature is hidden even further on the right of the horizontal axis. Here’s another zoom on the same graph, with the horizontal axis ranging from €1000 to €4500. This time only one curve is shown.
From the previous graph, we could easily have assumed that €1000 was the end, but it’s not. Even with the most common transaction value at ~€50, there are still some transactions above €1000, and even some over €3000. We call these extreme values.
As a result, whether these high values exist or not makes a big difference. Since they occur only rarely, they will not be evenly spread across variations, which can artificially create a difference when computing AOV. By artificially, we mean the difference comes from a small number of visitors and so doesn’t count as “statistically significant”. Also, keep in mind that customer behavior is not the same when buying for €50 as when making a purchase of more than €3000.
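A quick simulation (with made-up, roughly lognormal cart values, not real data) shows how a handful of extreme orders landing in one sample can shift its naive AOV even though both variations draw from the same underlying behavior:

```python
# Sketch: a few rare extreme transactions randomly landing in one variation
# inflate its naive AOV. The distribution parameters are invented for
# illustration: typical carts around 55 EUR, rare 3000+ EUR orders.
import random

rng = random.Random(0)

def simulate_aov(n_typical, n_extreme):
    """Average order value of n_typical ordinary carts plus n_extreme outliers."""
    values = [rng.lognormvariate(4.0, 0.5) for _ in range(n_typical)]
    values += [3000 + rng.random() * 500 for _ in range(n_extreme)]
    return sum(values) / len(values)

# Same behavior in both variations, but B happened to catch 3 extreme orders:
aov_a = simulate_aov(2000, 0)
aov_b = simulate_aov(2000, 3)
print(round(aov_a, 2), round(aov_b, 2))
```

The three outliers alone pull variation B’s average up by several euros, a “difference” driven by three visitors out of two thousand.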
There’s not much to do about this except know that it exists. One good practice, though, is to separate B2B and B2C visitors if you can, since B2B transaction values are typically much larger and less frequent. Setting them apart will limit these problems.
What does this mean for AOV?
There are three important things to keep in mind when it comes to AOV:
Don’t trust the basic AOV calculation: the difference you are seeing probably does not exist, and quite often isn’t even in the observed direction! It’s only displayed to give an order of magnitude for interpreting changes in conversion rates, and shouldn’t be used to claim a difference between variations’ AOV. That’s why we use a specific test, the Mann-Whitney U test, which is adapted to this kind of data.
You should only believe the statistical index on AOV, which is only valid to assess the direction of the difference between AOV, not its size. For example, you notice a +€5 AOV difference and the statistical index is 95%; this only means that you can be 95% sure that you will have an AOV gain, but not that it will be €5.
Since transaction data is far noisier than conversion data, it needs stronger differences or bigger datasets to reach statistical significance. And since there are always fewer transactions than visitors, reaching significance on the conversion rate doesn’t imply significance on the AOV.
This means that a decision on a variation that has a conversion rate gain can still be complex because we rarely have a clear answer about the variation effect on the AOV.
This is yet another reason to have a clear experimentation protocol including an explicit hypothesis.
For example, if the test shows an alternate product page layout based on the hypothesis that visitors have trouble reading the product page, then the AOV should not be impacted. If the conversion rate then rises, we can validate the winner as long as the AOV shows no strong statistical downward trend. However, if the changes are to the product recommender system, which might impact the AOV, then one should be stricter about statistically confirming that the AOV is unharmed before calling a winner: the recommender might bias visitors toward cheaper products, boosting sales numbers but not overall revenue.
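As a sketch of the Mann-Whitney U test mentioned above, here is a standard-library-only version using average ranks and the normal approximation. This is not AB Tasty’s actual implementation; a real analysis would use `scipy.stats.mannwhitneyu`, which also applies tie corrections and exact p-values for small samples.

```python
# Stdlib sketch of the two-sided Mann-Whitney U test: rank both samples
# together, sum sample A's ranks, and approximate the p-value with a
# normal distribution (reasonable at CRO sample sizes).
from math import erf, sqrt

def mann_whitney_u(sample_a, sample_b):
    """Return (U statistic for sample_a, two-sided p-value)."""
    n1, n2 = len(sample_a), len(sample_b)
    pooled = sorted([(v, 0) for v in sample_a] + [(v, 1) for v in sample_b])
    rank_a = 0.0
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1                       # positions i..j-1 hold tied values
        avg_rank = (i + 1 + j) / 2       # they share the average of ranks i+1..j
        rank_a += avg_rank * sum(1 for k in range(i, j) if pooled[k][1] == 0)
        i = j
    u = rank_a - n1 * (n1 + 1) / 2
    mean_u = n1 * n2 / 2
    sd_u = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)   # no tie correction, kept simple
    z = (u - mean_u) / sd_u
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return u, p_value

# Two transaction-value samples where the variation is shifted 20 EUR higher:
control = [50, 52, 48, 60, 55, 47, 53, 49, 51, 54]
variation = [v + 20 for v in control]
u_stat, p = mann_whitney_u(control, variation)
print(u_stat, p)  # U = 0.0: every control value ranks below every variation value
```

Because the test works on ranks rather than raw values, the €3000 outliers discussed earlier can no longer dominate the result, which is exactly why a rank-based test suits this kind of data.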
The real driving force behind CRO
We’ve seen that the conversion rate is at the base of CRO practice because of its simplicity and versatility compared to all other KPIs. Nonetheless, this simplicity must not be taken for granted. It sometimes hides more complexity that needs to be understood in order to make profitable business decisions, which is why it’s a good idea to have expert resources during your CRO journey.
That’s why at AB Tasty, our philosophy is not only about providing top-notch software but also about offering Customer Success support along the way.
In the ever-evolving landscape of fashion and e-commerce, digital innovation has become a driving force behind transforming the customer experience. The intersection of technology and fashion has given rise to new opportunities for brands to connect with their customers in more meaningful and engaging ways.
In this guest blog post from Conversio, a leading UK-based optimization and analytics agency, we explore key trends in fashion e-commerce and how brands can leverage digital strategies to enhance the customer experience.
1. The Mobile Customer: Shopping on the Go
The mobile customer has become a dominant force in the fashion industry. Today’s consumers expect a seamless and intuitive mobile experience when browsing, shopping, and making purchases. Brands must prioritize mobile optimization, ensuring their websites and apps are responsive, fast-loading, and user-friendly. By providing a frictionless mobile experience, fashion brands can capture the attention and loyalty of the on-the-go consumer.
2. The Rise of Social: Influencing Fashion Choices
Social media platforms have revolutionized the way we discover, engage with, and purchase fashion items. From influencers showcasing the latest trends to shoppable posts and personalized recommendations, social media has become an integral part of the customer journey. Fashion brands must embrace social commerce and leverage these platforms to connect with their audience, build brand awareness, and drive conversions. By actively engaging with customers on social media, brands can create a community around their products and foster brand loyalty.
3. Rising Return Rates: The Challenge of Fit and Expectations
One of the ongoing challenges in fashion e-commerce is the issue of rising return rates. Customers want convenience and flexibility when it comes to trying on and returning items. Brands must address this challenge by providing accurate size guides, detailed product descriptions, and visual representations. Additionally, incorporating virtual try-on technologies and utilizing user-generated content can help improve customers’ confidence in their purchase decisions and reduce return rates.
4. Measuring the Customer Experience
To truly enhance the customer experience, brands must measure and analyze key metrics to gain insights into their customers’ behaviors and preferences. Conversion rate optimization (CRO) is a crucial aspect of this process. By A/B testing, tracking and optimizing conversion rates, brands can identify areas for improvement and implement strategies to increase conversions. Additionally, measuring customer satisfaction, engagement, and loyalty through surveys, feedback, and data analytics can provide valuable insights into the effectiveness of the customer experience.
5. Improving the Fashion CX through Experimentation
To stay ahead in the competitive fashion industry, brands must embrace a culture of experimentation. A/B testing different elements of the customer experience, such as website layout, product recommendations, and personalized messaging, can help identify what resonates best with customers. By continuously iterating and refining their digital strategies, fashion brands can deliver a more tailored and enjoyable experience for their customers.
Our Key Takeaways
As fashion brands navigate the digital landscape, there are several key takeaways to keep in mind:
Brand Perception: Recognise that 90% of new customers won’t see your homepage. Focus on delivering a consistent and compelling brand experience across all touchpoints.
Post-Purchase: Extend your focus beyond the conversion. Invest in post-purchase experiences, such as order tracking, personalised recommendations, and exceptional customer service, to foster customer loyalty and encourage repeat purchases.
Measure Everything: Establish a robust measurement framework to track and validate the value of your content, campaigns, and overall customer experience. Use this data to make informed decisions and continuously optimize your strategies.
In conclusion, digital fashion has reshaped the customer experience, offering new avenues for engagement, personalization, and convenience. By understanding and embracing key trends, testing and measuring customer experience, and experimenting with innovative strategies, fashion brands can successfully navigate the digital landscape and deliver exceptional experiences that resonate with their target audience.