Transaction Testing With AB Tasty’s Report Copilot

Transaction testing, which focuses on increasing the rate of purchases, is a crucial strategy for boosting your website’s revenue. 

To begin, it’s essential to differentiate between conversion rate (CR) and average order value (AOV), as they provide distinct insights into customer behavior. Understanding these metrics helps you implement meaningful changes to improve transactions.

In this article, we’ll delve into the complexities of transaction metrics analysis and introduce our new tool, the “Report Copilot,” designed to simplify report analysis. Read on to learn more.

Transaction Testing

To understand how test variations impact total revenue, focus on two key metrics:

  • Conversion Rate (CR): This metric indicates whether sales are increasing or decreasing. Tactics to improve CR include simplifying the buying process, adding a “one-click checkout” feature, using social proof, or creating urgency through limited inventory.
  • Average Order Value (AOV): This measures how much each customer is buying. Strategies to enhance AOV include cross-selling or promoting higher-priced products.
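As a minimal sketch of how these two metrics are computed (the function names and figures here are illustrative, not part of AB Tasty's API):

```python
def conversion_rate(purchases: int, visitors: int) -> float:
    """CR: the fraction of visitors who complete a purchase."""
    return purchases / visitors


def average_order_value(total_revenue: float, purchases: int) -> float:
    """AOV: revenue generated per completed order."""
    return total_revenue / purchases


# Hypothetical figures: 10,000 visitors, 250 orders, $18,750 in sales
cr = conversion_rate(250, 10_000)       # 0.025, i.e. a 2.5% conversion rate
aov = average_order_value(18_750, 250)  # 75.0, i.e. $75 per order
```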

By analyzing CR and AOV separately, you can pinpoint which metrics your variations impact and make informed decisions before implementation. For example, creating urgency through low inventory may boost CR but could reduce AOV by limiting the time users spend browsing additional products. After analyzing these metrics individually, evaluate their combined effect on your overall revenue.

Revenue Calculation

The following formula illustrates how CR and AOV influence revenue:

Revenue = Number of Visitors × Conversion Rate × AOV

In the first part of the equation (Number of Visitors × Conversion Rate), you determine how many visitors become customers. The second part (× AOV) calculates the total revenue from these customers.
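Expressed in code, the formula is a single multiplication; the figures below are hypothetical:

```python
def revenue(visitors: int, conversion_rate: float, aov: float) -> float:
    """Revenue = Number of Visitors × Conversion Rate × AOV."""
    return visitors * conversion_rate * aov


# 10,000 visitors at a 2.5% CR yields 250 buyers; at a $75 AOV,
# that comes to $18,750 in revenue.
print(revenue(10_000, 0.025, 75.0))  # 18750.0
```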

Consider these scenarios:

  • If both CR and AOV increase, revenue will rise.
  • If both CR and AOV decrease, revenue will fall.
  • If either CR or AOV increases while the other remains stable, revenue will increase.
  • If either CR or AOV decreases while the other remains stable, revenue will decrease.
  • Mixed changes in CR and AOV result in unpredictable revenue outcomes.
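The mixed scenario is the tricky one. A short sketch (all figures hypothetical) shows that the same CR lift can net a gain or a loss depending on how far AOV falls:

```python
def revenue(visitors: int, cr: float, aov: float) -> float:
    return visitors * cr * aov


baseline = revenue(10_000, 0.025, 75.00)  # $18,750

# Variation 1: CR up 20% (2.5% -> 3.0%), AOV down 10% ($75 -> $67.50)
v1 = revenue(10_000, 0.030, 67.50)        # $20,250: a net gain

# Variation 2: same CR lift, but AOV down 25% ($75 -> $56.25)
v2 = revenue(10_000, 0.030, 56.25)        # $16,875: a net loss
```

This is why CR and AOV have to be weighed together rather than declaring a winner from either metric alone.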

The last scenario, where CR and AOV move in opposite directions, is particularly complex due to the variability of AOV. Current statistical tools struggle to provide precise insights on AOV’s overall impact, as it can experience significant random fluctuations. For more on this, read our article “Beyond Conversion Rate.”

While these concepts may seem intricate, our goal is to simplify them for you. Recognizing that this analysis can be challenging, we’ve created the “Report Copilot” to automatically gather and interpret data from variations, offering valuable insights.

Report Copilot

The “Report Copilot” from AB Tasty automates data processing, eliminating the need for manual calculations. This tool empowers you to decide which tests are most beneficial for increasing revenue.

Here are a few examples from real use cases.

Winning Variation:

The left screenshot provides a detailed analysis, helping users draw conclusions about their experiment results. Experienced users may prefer the summarized view on the right, also available through the Report Copilot.

Complex Use Case:


The screenshot above demonstrates a case where CR and AOV show opposite trends, requiring a deeper understanding of the context.

It’s important to note that the Report Copilot doesn’t make decisions for you; it highlights the most critical parts of your analysis, allowing you to make informed choices.

Conclusion

Transaction analysis is complex, requiring a breakdown of components like conversion rate and average order value to better understand their overall effect on revenue. 

We’ve developed the Report Copilot to assist AB Tasty users in this process. This feature leverages AB Tasty’s extensive experimentation dashboard to provide comprehensive, summarized analyses, simplifying decision-making and enhancing revenue strategies.


The Past, Present, and Future of Experimentation | Bhavik Patel

What is the future of experimentation? Bhavik Patel highlights the importance of strategic planning and innovation to achieve meaningful results.

A thought leader in the worlds of CRO and experimentation, Bhavik Patel founded the popular UK-based meetup community CRAP (Conversion Rate, Analytics, Product) Talks seven years ago to fill a gap in the event market, opting to cover a broad range of optimization topics, from CRO, data analysis, and product management to data science, marketing, and user experience.

After following his passion across the industry, from acquisition growth marketing to experimentation and product analytics, Bhavik landed the role of Product Analytics & Experimentation Director at the product measurement consultancy Lean Convert, where his interests have converged. There he is scaling a team and supporting its development in data and product thinking, as well as bringing analytical and experimentation excellence into the organization.

AB Tasty’s CMO Marylin Montoya spoke with Bhavik about the future of experimentation and how we might navigate the journey from the current mainstream approach to the potentialities of AI technology.

Here are some of the key takeaways from their conversation.

The evolution of experimentation: a scientific approach.

Delving straight to the heart of the conversation, Bhavik talks us through the evolution of A/B testing, from its roots in the scientific method, to recent and even current practices – which involve a lot of trial and error to test basic variables. When projecting into the future, we need to consider everything from people, to processes, and technology.

Until recently, conversion rate optimization was mostly driven by marketing teams, with a focus on optimizing the basics such as headlines, buttons, and copy. Over the last few years, product development has become more data-driven. Within the companies taking this approach, the product teams are the recipients of the A/B test results, but the people behind these tests are the analytics and data science teams, who are crafting new and advanced methods from a statistical standpoint.

Rather than making a change on the homepage and trying to measure its impact on outcome metrics, such as sales or new customer acquisition, certain organizations are taking an alternative approach modeled by their data science teams: focusing on driving current user activity and then building new products based on that data.

The future of experimentation is born from an innovative mindset, but also requires critical thinking when it comes to planning experiments. Before a test goes live, we must consider the hypothesis that we’re testing, the outcome metric or leading indicators, how long we’re going to run it, and make sure that we have measurement capabilities in place. In short, the art of experimentation is transitioning from a marketing perspective to a science-based approach.

Why you need to level up your experiment design today.

While it may be a widespread challenge to shift the mindset around data and analyst teams from being cost centers to profit-enablement centers, the slowing economy might have a silver lining: people taking the experimentation process a lot more seriously. 

We know that with proper research and design, an experiment can achieve a great ROI and even prevent major losses when it comes to investing in new developments. However, it can be difficult to convince leadership of the impact, efficiency, and potential growth derived from experimentation.

Given the current market, demonstrating the value of experimentation is more important than ever, as product and marketing teams can no longer afford to make mistakes by rolling out tests without validating them first, explains Bhavik. 

Rather than watching your experiment fail slowly over time, it’s important to have a measurement framework in place: a baseline, a solid hypothesis, and a proper experiment design. With experimentation communities making up a small fraction of the overall industry, not everyone appreciates the ability to validate, quantify, and measure the impact of their work; however, Bhavik hopes this will evolve in the near future.

Disruptive testing: high risk, high reward.

On the spectrum of innovation, at the very lowest end is incremental innovation, such as small tests and continuous improvements, which hits a local maximum very quickly. In order to break through that local maximum, you need to try something bolder: disruptive innovation. 

When an organization is looking for bigger results, they need to switch out statistically significant micro-optimizations for experiments that will bring statistically meaningful results.

Once you’ve achieved better baseline practices – hypothesis writing, experiment design, and planning – it’s time to start making bigger bets and finding other ways to measure them.

Now that you’re performing statistically meaningful tests, the final step in the evolution of experimentation is reverse-engineering solutions by identifying the right problem to solve. Bhavik explains that while we often focus on prioritizing solutions, by implementing various frameworks to estimate their reach and impact, we ought to take a step back and ask ourselves if we’re solving the right problem.

With a framework based on quality data and research, we can identify the right problem and then work on the solution, “because the best solution for the wrong problem isn’t going to have any impact,” says Bhavik.

What else can you learn from our conversation with Bhavik Patel?

  • The common drivers of experimentation and the importance of setting realistic expectations with expert guidance.
  • The role of A/B testing platforms in the future of experimentation: technology and interconnectivity.
  • The potential use of AI in experimentation: building, designing, analyzing, and reporting experiments, as well as predicting test outcomes. 
  • The future of pricing: will AI enable dynamic pricing based on the customer’s behavior?

About Bhavik Patel

A seasoned CRO expert, Bhavik Patel is the Product Analytics & Experimentation Director at Lean Convert, leading a team of optimization specialists to create better online experiences for customers through experimentation, personalization, research, data, and analytics.
In parallel, Bhavik is the founder of CRAP Talks, an acronym that stands for Conversion Rate, Analytics and Product, which unites CRO enthusiasts with thought leaders in the field through inspiring meetup events – where members share industry knowledge and ideas in an open-minded community.

About 1,000 Experiments Club

The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, AB Tasty CMO. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.