
Maximizing the Value of Customer Data Through Experimentation

Check out the introduction to the Customer-Centric Data Series here.

For the first blog in our series on the different ways you can utilize data to help brands build a more customer-centric vision, our partner Aimee Bos, VP of Analytics and Data Strategy at Zion & Zion, and AB Tasty’s Chief Data Scientist, Hubert Wassner, delve into how experimentation data can help you better understand your customers. They explore the who, what, and when of testing, discuss key customer insight metrics, the importance of audience sizes, where your best ideas for testing are lurking, and more. 

 

Why is experimentation important for understanding customers? 

Put simply, experimentation enables brands to “perfect” their products. Improving on the value that’s already been delivered improves the customer experience, and each time a new feature or option is added to a product, A/B testing shows how customers actually react. Experimentation creates a feedback loop with customers that moves beyond conversions and acquisition to improve adoption and retention, eventually making your product indispensable to your customers.

 

Which key metrics deliver the best insights about customers? 

Hubert says, “Basically, the metrics that deliver reliable customer insights are conversion rate and average cart value, segmented on meaningful criteria such as geolocation or CRM data. But there are others that are interesting, such as revenue per visitor (RPV). It’s a low-value metric but important to monitor.

“And average order value (AOV) is another. This metric varies enormously over time, so it shouldn’t be taken at face value. Seasonality (think Christmas or Black Friday, for example) or even one huge buyer can skew the statistics. It needs to be viewed in multiple contexts to get a true picture of progress – not just year over year, but month over month and even week over week.

“AOV and RPV are important because their omission can lead to data bias. People often forget to analyze metrics about non-converting visitors. Of course, AOV only gives you data about those who actually make it fully through the purchase cycle.” 

And Aimee agrees: “Well, win rate, of course. For e-commerce it’s conversions, value, RPV – how they’re moving the needle, whether they’re increasing the value of the average order. We want as much data as possible, at the most granular level possible, for lead generation, gated content, and micro-conversions. These smaller tests can be tied to more customer-centric metrics, as opposed to larger business-level metrics such as revenue, growth, number of customers, ROI, etc.”
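To make Hubert’s point about non-converting visitors concrete, here’s a minimal Python sketch (with made-up numbers) of why AOV and RPV tell different stories: AOV only counts visitors who completed a purchase, while RPV spreads revenue across every visitor, converting or not.

```python
# Illustrative sketch with hypothetical figures: AOV vs. RPV.
# AOV ignores non-converters entirely; RPV includes them, which is
# why omitting one or the other can bias your read of an experiment.

def aov(orders):
    """Average order value: revenue per completed order."""
    return sum(orders) / len(orders)

def rpv(orders, total_visitors):
    """Revenue per visitor: revenue spread over ALL visitors."""
    return sum(orders) / total_visitors

orders = [40.0, 55.0, 120.0, 35.0]  # 4 converting visitors
visitors = 200                      # 196 visitors bought nothing

print(f"AOV: {aov(orders):.2f}")            # AOV: 62.50
print(f"RPV: {rpv(orders, visitors):.2f}")  # RPV: 1.25
```

A variant that nudges a few non-converters into small purchases could lower AOV while raising RPV – looking at either metric alone would give a misleading verdict.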

 

Where are the best sources for experimentation ideas?

Aimee has her own process. “I start by asking myself what my business objectives are (micro/macro). Then I check Google Analytics and ask myself, ‘Where are conversions not happening?’ For experimentation ideas, I check tools like Hotjar, voice-of-customer (VoC) data, and Qualtrics data to see actual customer feedback, and I use user panelists: give them choices and ask what they prefer. Always hypothesize friction points – these will give you your best ideas for testing!”

Hubert likes to get his ideas from NPS scores. “Net promoter score (NPS) has useful information and comments and can be a good starting point for fact-based rather than random hypotheses, which are a waste of time. NPS can add some real focus to well-designed tests. It’s based on a single question: On a scale of 0 to 10, how likely are you to recommend this company’s product or service to a friend or a colleague? NPS is a good way to identify areas that need improvement, but as a signifier of a company’s CX score, it needs to be paired with qualitative insights to understand the context behind the score.”
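The NPS arithmetic behind that single question is simple enough to sketch in a few lines of Python. Under the standard definition (not specific to AB Tasty): respondents scoring 9–10 are promoters, 0–6 are detractors, 7–8 are passives, and the score is the percentage of promoters minus the percentage of detractors. The survey responses below are hypothetical.

```python
# Standard NPS calculation on a hypothetical batch of 0-10 responses.

def nps(scores):
    """Net promoter score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 8, 7, 6, 10, 3, 9, 5, 10]
print(nps(responses))  # 5 promoters, 3 detractors -> 20.0
```

The score alone won’t tell you *why* detractors are unhappy – which is exactly why Hubert pairs it with the free-text comments and other qualitative insights.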

 

How do I pull everything together? What do I need to carry out my tests?

Obviously, you need a tool to run your A/B tests and collect the data necessary to make good hypotheses. But a good way to boost your testing program – and help drive more ROI – is with tools like Contentsquare or Fullstory, which offer richer data on customer behavior and experience to focus your testing. Designed to bridge the gap between the digital experiences companies think they’re offering and what customers are actually getting, these analytics platforms can surface real opportunities for useful testing hypotheses by offering more educated guesses about which variables to test to improve CX.

Aimee has an important note about initial data collection, too. “You also need three months of data before you begin testing if you want reliable results, and you need to be sure it’s accurate. Most people rely on Google Analytics (GA). That’s a lot of data to handle and organize. A Customer Data Platform (CDP) represents a significant investment, but centralizing your data in one is extremely useful for customer segmentation and detailed analysis. The sooner you can invest in a tool like a CDP, the better for a sustainable data architecture strategy.”

 

I’m ready to test, but I have several hypotheses. How do I begin?

According to Aimee, “When that happens, we break large problems into smaller ones. We have a customer that wants to triple their business and also wants a CDP this year, among other goals. It’s a lot! To help them, we build out a customer journey roadmap to see what influences the client’s goals. We select five or six high-level goals (landing page or navigation measured against click-through rate, for example), then test various aspects of each of these goals.”

Hubert notes, “it’s possible to test more than one hypothesis at once if your sample size is big enough. But first, you need to know what the statistical power of your experiment is. Small sample sizes can only detect big effects: it’s important to know the order of magnitude in order to carry out meaningful experiments. It’s always best to test your variables on a large audience, with varied behaviors and needs, in order to get the most reliable and informed results.”
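Hubert’s point about statistical power can be made tangible with a standard normal-approximation sample-size formula for comparing two conversion rates. This is a generic textbook sketch, not AB Tasty’s methodology, and the conversion rates below are hypothetical.

```python
# Sketch: required visitors per variant to detect a conversion-rate
# lift with a two-sided two-proportion z-test (normal approximation).
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.8):
    """Approximate n per arm to detect p1 vs. p2 at the given
    significance level (alpha) and statistical power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha=0.05
    z_b = NormalDist().inv_cdf(power)          # ~0.84 for power=0.8
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return int(num / (p1 - p2) ** 2) + 1

# A small lift (3% -> 3.6%) needs far more traffic per variant
# than a big one (3% -> 6%):
print(sample_size_per_variant(0.03, 0.036))
print(sample_size_per_variant(0.03, 0.06))
```

This is exactly the “small samples only detect big effects” trade-off: halving the detectable effect roughly quadruples the traffic you need, so knowing the order of magnitude up front keeps you from running underpowered tests.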

 

Is there value in running data-gathering experiments (as opposed to improving conversion / driving a specific metric)?

Hubert is a firm believer in testing no matter what you think may happen. “Testing is always useful because a good test teaches you something, win or lose – as long as you have a hypothesis. For instance, measuring the effect of a (supposed) selling feature (like an ad or a sale) is useful: you know how much an ad or a sale costs, but without experimenting you don’t know how much it pays.

“Or say you have a 100% win rate. That means you’re not learning anymore. So you test to gain new information in other areas – you don’t just stand still. You minimize losses to maximize wins.”

 


 

Enjoy what you read? Be sure to read part 2 of the Customer-Centric Data Series here.
