
How to solve real user problems with a CRO strategy

Catch up on the previous installment of our Customer-Centric Data Series, How to Become a Data-Centric Company, or read the series introduction.

In this installment of our series on a data-driven approach to customer-centric marketing, we spoke with our partner Raoul Doraiswamy, Founder & Managing Director of Conversionry, to understand the flow of a customer-centric experimentation process and why it is critical to tap into insights from experimentation to make better decisions.

What do you find is the biggest gap in the marketing & growth knowledge among brands right now?

Many brands today have the right tools, such as technology investments, or the right people with marketing expertise. However, brands often face the issue of not knowing how to give their customers what they want, whether on their website, in their app or through digital advertising – in other words, how can these brands increase conversions? Raoul identifies a lack of customer understanding at the core of this gap and suggests that brands adopt a customer-centric, customer-driven process that enables a flow of customer insights, complemented by experimentation.

Which key activities deliver the best insights into customer problems?

Raoul believes that to build a strategy that puts the customer at the core, it is important to have the right data-gathering approach for surfacing insights. This is the foundation of any experimentation program, but it can be applied to all marketing channels.

“Imagine you are an air traffic controller. You have multiple screens constantly showing you where the planes are and when they might collide. From this constant stream of insights, the person in front of the screens has to make the right decisions,” he shares. “However, there are also inconsequential insights, such as the baggage holds being full, and it is up to the decision-makers to pick out the critical data and make use of it.”

Raoul likens this to the role of marketing decision-makers, who typically have a dashboard with metrics like revenue, conversion rate and abandoned carts. An insights dashboard helps marketers better understand their customers by combining this real-time data with customer feedback from sources like analytics, heatmaps, session recordings, social media comments and user testing. Solid research can be done through critical analysis of session recordings and user poll forms, and the main takeaways can be fed into this dashboard. How empowering is that for a marketing decision-maker?

Where are the best sources for experimentation ideas?

Raoul asserts that a combination of quantitative and qualitative analysis is key. Heuristic analysis and competitor analysis are also gold when coming up with experimentation ideas. He continues, “Don’t limit yourself to looking at competitors, look at other industries too. For example, for a $90M trade tools client we had to solve the problem of increasing user sign-ins to their loyalty program. By researching Expedia and Qantas, we got the idea to show users points instead of cash to pay for items.” Raoul shares, “Do heat map analysis, look at session recordings, user polls, run surveys to email databases, and user testing. User testing is critical in understanding the full picture.” 

After distilling customer problems and coming up with some rough experimentation ideas, the next step is to flesh out your experiment ideas fully. “Going back to the analogy of the Air Traffic Controller, one person on the team is seeing a potential crash but might have limited experience in dealing with this situation. That’s when more perspectives can be brought in by, let’s say, a supervisor, to make a more well-rounded decision. In the same way, when you are ideating, you do not want to just limit it to yourself but rather have a workshop where you discuss ideas with your internal team. If you are working with an agency, you can still have a workshop with both the agency and the client present, or have your CRO team and product team come together to share ideas. This way, you can get multiple stakeholders involved, each of them being able to provide expertise based on their experience with customers,” says Raoul.

Is there value in running data-gathering experiments (as opposed to improving conversion / driving a specific metric)?

“Yes, absolutely,” replies Raoul. “It is important to align growth levers with clients every quarter while working with CRO and experimentation teams on the experimentation process. When working towards the goal of increasing conversions, there are KPIs and predictive models to project the goals.

“On the other hand, if the focus of the program is on product feature validation or reducing the risk to revenue from untested features, there will be a separate metric for that,” he continues. “It is key to have specific micro KPIs for the tests that are running to generate a constant flow of insights, which then allows us to make better decisions.”

When running data-gathering experiments, you can also apply features such as personalization, which can have a positive impact on website conversions.

What do brands need to get started?

“To begin, you need to start running experiments. Every day without a test is a day of lost revenue!” urges Raoul. “For marketing leaders who have yet to start running experiments, you can begin by pinpointing customer problems and establishing a flow of insights. You can gather these insights from Google Analytics, more specifically by looking at your funnel. From these insights, identify the drop-off point and observe the Next Page Path to see where users go next.

“Take, for example, an eCommerce platform. If users are dropping off at the product page instead of adding to the cart and moving on to the shipping page, this may indicate that they are confused about the shipping requirements. This alone can tell you what goes through the user’s mind. Look at heatmaps and session recordings to understand the customer’s problems. The next step is to solve the issue, and to do that you will need an A/B testing platform. Use the A/B testing platform to build tests and launch them as quickly as possible.”
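To make the funnel analysis concrete, here is a minimal sketch in Python of the drop-off calculation Raoul describes, assuming you have exported step-level session counts from a Google Analytics funnel report; the step names and numbers are hypothetical:

```python
import pandas as pd

# Hypothetical funnel export: each row is a funnel step and the
# number of sessions that reached it.
funnel = pd.DataFrame({
    "step": ["home", "category", "product", "cart", "shipping", "payment"],
    "sessions": [100_000, 62_000, 40_000, 9_000, 7_500, 6_200],
})

# Share of sessions lost between consecutive steps.
funnel["drop_off"] = 1 - funnel["sessions"] / funnel["sessions"].shift(1)

# The step with the highest drop-off entering it marks the page
# worth investigating with heatmaps and session recordings.
worst = funnel.loc[funnel["drop_off"].idxmax()]
print(funnel)
print(f"Biggest drop-off entering '{worst['step']}': {worst['drop_off']:.0%}")
```

In this made-up data, roughly 78% of sessions are lost between the product page and the cart, which is exactly the kind of product-page problem Raoul describes above.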

As for established marketing teams who are already doing some testing, Raoul recommends gathering insights and customer problems as they come in every month. “Then to make sense of the data you’ve collected, you need conversion optimization analysts like our experts at Conversionry who are experienced in distilling data down to problems.”

Identifying customer problems is key. If some of the issues your customers encounter stay unaddressed, your initiatives could flatline despite months of experimentation. Instead, by keeping customer feedback top of mind, you can start designing, developing and testing, work with experience optimization platforms like AB Tasty to build the experiments, then gather insights and repeat the cycle to see what wins and what doesn’t.
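On the mechanics of building those experiments: an experience optimization platform handles this for you, but the core of any A/B test is splitting traffic consistently. Below is a minimal sketch of one common technique, deterministic hash-based bucketing (an illustration, not AB Tasty’s actual implementation), where a visitor ID is hashed so each visitor always sees the same variant:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a visitor to 'control' or 'variant'.

    Hashing (experiment + visitor_id) means the same visitor always
    gets the same experience, and different experiments are
    bucketed independently of each other.
    """
    key = f"{experiment}:{visitor_id}".encode()
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % 10_000
    return "variant" if bucket < split * 10_000 else "control"

# Example: route a visitor into the hypothetical product-page test.
print(assign_variant("visitor-1234", "product-page-shipping-info"))
```

Including the experiment name in the hash key keeps assignments independent across experiments, so running several tests at once does not correlate their audiences.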

Get started building your A/B tests today with the best-in-class software solution, AB Tasty. With embedded AI and automation, this experimentation and personalization platform creates richer digital experiences for your customers, fast.


Using Failed A/B Test Results to Drive Innovation

“Failure” can feel like a dirty word in the world of experimentation. Your team spends time thinking through a hypothesis and crafting a test, and finally, when it rolls out … it falls flat. While it can feel daunting to see negative results from your A/B tests, you have gained valuable insights that can help you make data-driven, strategic decisions for your next experiment. Your “failure” becomes a learning opportunity.

Embracing the risk of negative results is a necessary part of building a culture of experimentation. On the first episode of the 1,000 Experiments Club podcast, Ronny Kohavi (formerly of Airbnb, Microsoft, and Amazon) shared that in experimentation you “fail fast and pivot fast.” As he learned while leading experimentation teams at some of the largest tech companies, your idea might fail. But your next idea could be the solution you were seeking.

“There’s a lot to learn from these experiments: Did it work very well for the segment you were going after, but it affected another one? Learning what happened and why will lead to developing future strategies and being successful,” shares Ronny.

In order to build a culture of experimentation, you need to embrace the failures that come with it. By viewing negative results as learning opportunities, you build trust within your team and encourage them to seek creative solutions rather than playing it safe. Here are just a few benefits to embracing “failures” in experimentation:

  1. Encourage curiosity: With AB Tasty, you can test your ideas quickly and easily. You can bypass lengthy implementations and complex coding. Every idea can be explored immediately and if it fails, you can get the next idea up and running without losing speed, saving you precious time and money.
  2. Eliminate your risks without a blind rollout: Testing out changes on a few pages or with a small audience size can help you gather insights in a more controlled environment before planning larger-scale rollouts.
  3. Strengthen hypotheses: It’s easy to fall prey to confirmation bias when you are afraid of failure. Testing a hypothesis with A/B testing and receiving negative results confirms that your control is still your strongest performer, and you’ll have data to support the fact that you are moving in the right direction (see the sketch after this list).
  4. Validate existing positive results: Experimentation helps determine which small changes can drive a big impact with your audience. Comparing negative A/B test results against positive results for similar experiments can help determine whether the positive metrics stand the test of time, or whether an isolated event skewed the results.
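To make point 3 concrete, here is a minimal sketch of one standard way to check whether a negative result is statistically meaningful rather than noise: a two-proportion z-test. The conversion numbers are hypothetical, and a platform like AB Tasty runs its own statistics for you:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b - p_a, p_value

# Hypothetical result: control converts at 5.0%, variant at 4.0%.
lift, p = two_proportion_z_test(conv_a=500, n_a=10_000, conv_b=400, n_b=10_000)
print(f"lift: {lift:+.2%}, p-value: {p:.4f}")
# A significant negative lift (small p-value) is evidence that the
# control really is the stronger performer, not just random noise.
```

Here the variant’s roughly one-point drop comes out highly significant (p < 0.001), the kind of result that justifies keeping the control and moving on to the next idea.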

In a controlled, time-limited environment, your experiment can show you very quickly whether the changes you have made support your hypothesis. Whether your experiment produces positive or negative results, you will gain valuable insights about your audience. As long as you are leveraging those new insights to build new hypotheses, your negative results will never be a “failure.” Instead, the biggest risk would be allowing the status quo to go unchecked.

“Your ability to iterate quickly is a differentiation,” shares Ronny. “If you’re able to run more experiments and a certain percentage are pass/fail, this ability to try ideas is key.”

Below are some examples of real-world A/B tests and the crucial learnings that came from each experiment:

Lesson learned: Removing “Add to Basket” CTAs decreased conversion

In this experiment, our beauty/cosmetics client tested removing the “Add to Basket” CTA from their product pages. The idea behind this was to test if users would be more interested in clicking through to the individual pages, leading to a higher conversion rate. The results? While there was a 0.4% increase in visitors clicking “Add to Basket,” conversions were down by 2%. The team took this as proof that the original version of the website was working properly, and they were able to reinvest their time and effort into other projects.

[Figure: Beauty client “Add to Basket” use case]

Lesson learned: Busy form fields led to decreased leads

A banking client wanted to test whether adjusting their standard request form would drive more visitors to step 2 and ultimately increase the number of leads from form submissions. The test focused on the mandatory business identification number field, adding a pop-up explaining what the field meant in the hopes of reducing form abandonment. The results? They saw a 22% decrease in leads as well as a 16% decrease in the number of visitors continuing to step 2 of the form. The team’s takeaway was that, in trying to be helpful and explain this field, they overwhelmed their visitors with information. The original version won this experiment, and the team saved themselves a huge potential loss by not hardcoding the new form field.

[Figure: Banking client form use case]

Lesson learned: Product availability couldn’t drive transactions

The team at this beauty company designed an experiment to test whether displaying a message about product availability on the basket page would lead to an increase in conversions by appealing to the customer’s sense of FOMO. Instead, the results proved inconclusive. The conversion rate increased by 1%, but access to checkout and the average order value decreased by 2% and 0.7% respectively. The team determined that without the desired increase in their key metrics, it was not worth investing the time and resources needed to implement the change on the website. Instead, they leveraged their experiment data to help drive their website optimization roadmap and identify other areas of improvement.

[Figure: Beauty client product availability use case]

Despite negative results, the teams in all three experiments leveraged these valuable insights to quickly readjust their strategies and identify other areas for improvement on their websites. By reframing the negative results of failed A/B tests as learning opportunities, they made the customer experience their driver for innovation instead of untested ideas from an echo chamber.

Jeff Copetas, VP of E-Commerce & Digital at Avid, stresses the importance of figuring out who you are listening to when building out an experimentation roadmap.  “[At Avid] we had to move from a mindset of ‘I think …’ to ‘let’s test and learn,’ by taking the albatross of opinions out of our decision-making process,” Jeff recalls. “You can make a pretty website, but if it doesn’t perform well and you’re not learning what drives conversion, then all you have is a pretty website that doesn’t perform.”

Through testing, you collect data on how customers actually experience your website, which will always be more valuable than leaving the status quo untested. Are you seeking inspiration for your next experiment? We’ve gathered insights from 50 trusted brands around the world to understand the tests they’ve tried, the lessons they’ve learned, and the successes they’ve had.