Article

5min read

1000 Experiments Club: A Conversation With Elissa Quinby of Quantum Metric

Elissa Quinby explains why a frugal mindset around experimentation can actually accelerate the process and increase resourcefulness.

Elissa Quinby lives and breathes retail, with eight years under her belt at Amazon working across multiple business units and functions on the marketing and product teams, as well as prior positions at Google and American Eagle Outfitters. 

Currently the Senior Director of Retail Marketing at Quantum Metric, an experience analytics company that helps brands gain insights about their customers and make rapid, data-driven decisions, Elissa has put that expertise to good use over the past year.

AB Tasty’s VP of Marketing, Marylin Montoya, spoke with Elissa about ways to encourage customer loyalty, methods for experimentation and how even the smallest piece of data can have a huge impact on tailoring the customer journey for a better overall experience.

Here are some of the key takeaways from their conversation.


Start with ONE key piece of data from your customer and use it to build brand loyalty.

As marketers, we know the value of our current customer base, given the time, effort and cost of acquiring new customers. So it’s only logical to focus on improving the user experience in order to encourage repeat shoppers. 

During her time working at Amazon, Elissa adopted a mindset of frugality and learned how much of an impact can be made with only one piece of customer data. Today, she challenges retailers to ask themselves what data they already have that could revolutionize their customer experience.

With first-party data being the “secret sauce,” Elissa recommends starting small and offering customers value in return for their cooperation. Customers are increasingly hesitant to share their information with brands, so it’s important to offer an enticing incentive that allows you to gather the one valuable piece of data that will improve the consumer experience.

The hardest part of gathering that vital first-party data is encouraging customers to create an account. Once a customer has a profile, trust can be built over time and more data can be gathered, but always in exchange for value. For example, you can encourage customers to sign in to shop by offering personalized filtering or search results. This creates a more efficient and enjoyable online shopping experience for your customers as a reward for their loyalty.


“There’s literally nothing that should not be experimented on.”

Experimentation should be at the core of every marketing strategy. In a process of continual improvement, the possibilities for optimizing the customer journey are endless; however, data is the only way to know for sure which modifications to pursue.

With an emphasis on speed, the idea of experimentation is to test a new solution as quickly as possible, releasing any attachment to perfection, in order to start collecting customer feedback. 

Elissa explains that any new feature must be tested before it launches. Until customers offer feedback via their interactions, it remains a simple hypothesis to be proven. Not only does this save time on development, but you can gauge the user response to the experiment and make the necessary adjustments.

The experimentation process is precise, methodical and data-driven, to ensure the experiment is set up correctly for a reliable and insightful result – regardless of its success or failure. 

As the majority of tests do fail, it’s important to fail fast in order to learn as quickly as possible from the customers’ reaction. Elissa explains that running tests multiple times with slight adjustments can help to pinpoint the issue, which might be as simple as where in the customer journey a prompt is showing up. 


Experimentation tools can help brands optimize customer experience.

While manual methods for testing can yield results, an experimentation tool can supercharge your customer experience optimization. 

An experimentation tool not only saves time, but also ensures you are getting the most out of each test. It begins with data-driven ideation for the best hypotheses, and if your test fails to meet target metrics, a tool will allow you to pivot by ensuring that you have another hypothesis at the ready, also backed by data. 

Secondly, being able to pinpoint why an experiment failed, with comprehensive analysis, is key to improving your results without exhausting your resources. 

Finally, an experimentation tool can offer real-time data. If your experiment isn’t tracking well, you’ll know immediately and can shut it down. Conversely, if it’s a winner, you can start working with the product team to launch the new feature. It allows innovation cycles to be sped up, with decisions based on real-time data analysis of the user journey and browsing behavior. 

By optimizing the experimentation process with an intelligent analytics solution, you can improve efficiency and quickly home in on features that will bring meaningful improvement to the customer experience and, in turn, drive results for the company.

What else can you learn from our conversation with Elissa Quinby?

  • How to do more with fewer resources (both time and money)
  • How to stand out from competitors via a loyalty program
  • Why you should leverage digital during all phases of the customer journey
  • Why all customer insights play a vital role in improving business results


About Elissa Quinby

Elissa Quinby is an expert in retail insights, starting her career as an Assistant Buyer at American Eagle Outfitters followed by two years at Google as a Digital Marketing Strategist. She went on to spend eight years at Amazon across multiple business units and functions including marketing, program management and product.

Today, Elissa is the Senior Director of Retail Marketing at Quantum Metric, an experience analytics company that helps brands to gather customer insights which drive intelligent decision-making.

About 1,000 Experiments Club

The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.



Article

6min read

Statistics: What are Type 1 and Type 2 Errors?

Statistical hypothesis testing implies that no test is ever 100% certain: that’s because we rely on probabilities to experiment.

When online marketers and scientists run hypothesis tests, they’re both looking for statistically significant results. This means the difference they observe must be unlikely to have occurred by chance, at a chosen confidence level (typically 95%).

Even though hypothesis tests are meant to be reliable, there are two types of errors that can still occur.

These errors are known as type 1 and type 2 errors (or Type I and Type II errors).

Let’s dive in and understand what type 1 and type 2 errors are and the difference between the two.

Type 1 and Type 2 Errors explained

Understanding Type I Errors

Type 1 errors – often referred to as false positives – happen in hypothesis testing when the null hypothesis is true but rejected. The null hypothesis is a general statement or default position that there is no relationship between two measured phenomena.

Simply put, type 1 errors are “false positives” – they happen when the tester validates a statistically significant difference even though there isn’t one.


Type 1 errors occur with a probability “α,” known as the significance level, which is determined by the confidence level you set. A test with a 95% confidence level means that there is a 5% chance of getting a type 1 error.
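This relationship between the confidence level and α can be checked empirically. Below is a minimal, hypothetical Python sketch (all traffic numbers are invented for illustration) that simulates many A/A tests – two identical variants with no real difference – and counts how often a two-proportion z-test wrongly declares significance. At a 95% confidence level, roughly 5% of the runs should come back as false positives.

```python
import math
import random

def z_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(42)
ALPHA = 0.05        # significance level implied by a 95% confidence level
TRUE_RATE = 0.10    # both variants truly convert at 10%: no real difference
N = 1_000           # visitors per variant
TESTS = 1_000       # number of simulated A/A experiments

false_positives = 0
for _ in range(TESTS):
    conv_a = sum(random.random() < TRUE_RATE for _ in range(N))
    conv_b = sum(random.random() < TRUE_RATE for _ in range(N))
    if z_test_p_value(conv_a, N, conv_b, N) < ALPHA:
        false_positives += 1  # significance declared where none exists

print(f"Simulated type 1 error rate: {false_positives / TESTS:.1%}")
```

The printed rate should land near the 5% predicted by α; raising the confidence level to 99% (α = 0.01) would shrink it accordingly.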

Consequences of a Type 1 Error

Why do type 1 errors occur? Type 1 errors can happen due to bad luck (the 5% chance has played against you) or because you didn’t respect the test duration and sample size initially set for your experiment.

Consequently, a type 1 error will bring in a false positive. This means that you will wrongfully assume that your hypothesis testing has worked even though it hasn’t.

In real-life situations, this could potentially mean losing possible sales due to a faulty assumption caused by the test.

Related: Sample Size Calculator for A/B Testing

A Real-Life Example of a Type 1 Error

Let’s say that you want to increase conversions on a banner displayed on your website. To do so, you plan to add an image and see whether it increases conversions.

You start your A/B test by running a control version (A) against your variation (B) that contains the image. After 5 days, variation (B) outperforms the control version by a staggering 25% increase in conversions with an 85% level of confidence.

You stop the test and implement the image in your banner. However, after a month, you notice that your month-to-month conversions have actually decreased.

That’s because you’ve encountered a type 1 error: your variation didn’t actually beat your control version in the long run.
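Notice that the test above was stopped after only 5 days, at just 85% confidence, as soon as the result looked good. The hypothetical sketch below (all traffic numbers invented) simulates A/A tests in which the experimenter “peeks” at the p-value every day and stops at the first significant reading – even with a nominal α of 0.05, the realized false positive rate climbs well above 5%.

```python
import math
import random

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(7)
ALPHA = 0.05
RATE = 0.10                 # identical true conversion rate for A and B
VISITORS_PER_DAY = 200      # per variant, per day (hypothetical traffic)
DAYS = 20
TESTS = 400

early_winners = 0
for _ in range(TESTS):
    conv_a = conv_b = n = 0
    for _day in range(DAYS):
        conv_a += sum(random.random() < RATE for _ in range(VISITORS_PER_DAY))
        conv_b += sum(random.random() < RATE for _ in range(VISITORS_PER_DAY))
        n += VISITORS_PER_DAY
        if p_value(conv_a, n, conv_b, n) < ALPHA:
            early_winners += 1  # a "winner" declared where none exists
            break

print(f"False positive rate with daily peeking: {early_winners / TESTS:.1%}")
```

Sticking to the test duration and sample size set before the experiment keeps the realized error rate at the nominal α.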

Related: Frequentist vs Bayesian Methods in A/B Testing

Want to avoid these types of errors during your digital experiments?

AB Tasty is an A/B testing tool embedded with AI and automation that allows you to quickly set up experiments, track insights via our dashboard, and determine which route will increase your revenue.

Understanding Type II Errors

In the same way that type 1 errors are commonly referred to as “false positives”, type 2 errors are referred to as “false negatives”.

Type 2 errors happen when you inaccurately assume that no winner has been declared between a control version and a variation although there actually is a winner.

In more statistically accurate terms, type 2 errors happen when the null hypothesis is false and you subsequently fail to reject it.

If the probability of making a type 1 error is determined by “α,” the probability of a type 2 error is “β.” Beta depends on the power of the test (i.e., the probability of not committing a type 2 error, which is equal to 1 − β).

There are 3 parameters that can affect the power of a test:

  • Your sample size (n)
  • The significance level of your test (α)
  • The “true” value of your tested parameter
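To see how these three parameters interact, here is a small, hypothetical sketch (function name and numbers are my own, not from the article) that approximates the power of a two-sided, two-proportion A/B test under the normal approximation, assuming a baseline conversion rate of 10% and a true rate of 12% for the variation:

```python
from statistics import NormalDist

def power_two_proportions(p_a, p_b, n, alpha=0.05):
    """Approximate power (1 - beta) of a two-sided two-proportion z-test,
    with n visitors per variant, under the normal approximation."""
    norm = NormalDist()
    se = ((p_a * (1 - p_a) + p_b * (1 - p_b)) / n) ** 0.5
    z_crit = norm.inv_cdf(1 - alpha / 2)   # critical value for this alpha
    effect = abs(p_b - p_a) / se           # standardized true effect size
    return (1 - norm.cdf(z_crit - effect)) + norm.cdf(-z_crit - effect)

# Power grows with sample size n, holding alpha and the true effect fixed:
for n in (500, 2_000, 8_000):
    print(f"n = {n:>5} per variant -> power = {power_two_proportions(0.10, 0.12, n):.0%}")
```

With these numbers, power climbs from under 20% at 500 visitors per variant to above 95% at 8,000 – which is exactly why β, and with it the type 2 error rate, shrinks as the sample size grows.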

Consequences of a Type 2 Error

Similarly to type 1 errors, type 2 errors can lead to false assumptions and poor decision-making that can result in lost sales or decreased profits.

Moreover, getting a false negative (without realizing it) can discredit your conversion optimization efforts even though you could have proven your hypothesis. This can be a discouraging turn of events that could happen to any CRO expert and/or digital marketer.

A Real-Life Example of a Type 2 Error

Let’s say that you run an e-commerce store that sells cosmetic products for consumers. In an attempt to increase conversions, you have the idea to implement social proof messaging on your product pages, like NYX Professional Makeup.

You launch an A/B test to see if the variation (B) could outperform your control version (A).

After a week, you do not notice any difference in conversions: both versions seem to convert at the same rate and you start questioning your assumption. Three days later, you stop the test and keep your product page as it is.

At this point, you assume that adding social proof messaging to your store didn’t have any effect on conversions.

Two weeks later, you hear that a competitor had added social proof messages at the same time and observed tangible gains in conversions. You decide to re-run the test for a month in order to get more statistically relevant results based on an increased level of confidence (say 95%).

After a month – surprise – you discover positive gains in conversions for the variation (B). Adding social proof messages under the purchase buttons on your product pages has indeed brought your company more sales than the control version.

That’s right – your first test encountered a type 2 error!
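The story above can be re-created numerically. In this hypothetical sketch (conversion rates and traffic figures invented), the variation truly converts better – 12% vs. 10% – yet the short, underpowered test misses the real effect most of the time, while the longer test usually detects it:

```python
import math
import random

def p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_b / n_b - conv_a / n_a) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(1)
RATE_A, RATE_B = 0.10, 0.12   # the social-proof variation really is better
ALPHA, TESTS = 0.05, 400

results = {}
for n in (400, 4_000):        # a short test vs. a much longer one
    misses = 0
    for _ in range(TESTS):
        conv_a = sum(random.random() < RATE_A for _ in range(n))
        conv_b = sum(random.random() < RATE_B for _ in range(n))
        if p_value(conv_a, n, conv_b, n) >= ALPHA:
            misses += 1       # real improvement, but no significant result
    results[n] = misses / TESTS
    print(f"n = {n} per variant -> type 2 error rate = {results[n]:.0%}")
```

The short test returns “no difference” far more often than the long one, even though the improvement is real – the same false negative the first one-week test ran into.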

Why are Type I and Type II Errors Important?

Type 1 and type 2 errors are errors we may encounter on a daily basis. It’s important to understand them and the impact they can have on your decisions.

With a type 1 error, you are acting on an incorrect assumption and can lose time and resources. A type 2 error can mean a missed opportunity to change, enhance and innovate a project.

To avoid these errors, it’s important to pay close attention to the sample size and the significance level in each experiment.