Rethinking the Travel Experience | Danielle Harvey

Danielle Harvey shares how travel customers are using different channels, why testing doesn’t always have to end in success, and how travel companies can integrate AI to provide a more engaging customer experience.

Currently Vice President, Industries, Partnerships & Emerging Products at Quantum Metric, Danielle Harvey has extensive experience in the travel industry, including 11 years at one of the world’s largest hotel brands, Wyndham Hotels & Resorts, where she drove a data-driven approach to optimizing the customer experience. With roles spanning digital acquisition, voice of the customer, CRM, experimentation, and digital analytics, she has a unique understanding of the travel customer journey.

Danielle Harvey spoke with AB Tasty’s Head of Marketing and host of the 1,000 Experiments Club podcast, John Hughes, about the importance of connecting channels in the travel industry, using testing to understand the customer journey, and how brands can best harness the power of AI.

Here are some of the key takeaways from their conversation.

Let customers do what they want where they want

COVID forced the travel industry to undergo an accelerated digital transformation, and travel customers now want a seamless cross-channel experience when booking. This is especially true given that people frequently use different platforms at different stages of their buying journey and visit your website multiple times before making a booking.

“We did some benchmark data and 75-80% of traffic in travel is on mobile at this point, but only about 25% of bookings are. It’s a heavy research channel, a day-of-travel channel, but not necessarily where people are comfortable purchasing yet,” says Danielle.

It therefore becomes vital to enable customers to transact in their preferred channel and to connect those channels. This provides immediate benefits for the customer as well as operational efficiencies for providers.

Omnichannel may have been little more than a buzzword a few years ago. But with the true adoption of digital technology and improved methods of data collection, connecting those experiences is becoming more of a reality.

“A lot of travel can still be pretty siloed, but your customers don’t care,” explains Danielle. “They expect that your teams are speaking to each other, that there’s an overarching strategy.”

Even flat and failed tests can be learning experiences

Testing and experimentation don’t always have to be successful to provide valuable information that helps you improve the customer experience. One example Danielle gave was testing customer ratings and reviews on a website.

“Some of the most interesting testing I did was around reviews. Because the assumption was that if you get those out there on the site, they should really have an impact,” says Danielle.

But given that almost a quarter of people researching travel will typically visit your website at least five times before booking, it’s likely that they’re getting much of their information from other sources.

“It was always interesting that whenever we tried testing reviews, they didn’t really move the needle. So, your website is often not the only place people are going to go for information,” notes Danielle. 

But this flat result helped drive the realization that while reviews might not have a direct financial impact, they were important for transparency and, at the same time, made things easier for the customer.

And just because a test failed doesn’t mean it shouldn’t inform your strategy going forward. The key is to understand what happened and learn from it.

“Over time, I would typically see a 50/50 win to fail rate. But my focus on failed tests was always what do we learn from this, digging into the reason why it failed and then building a pipeline of testing and experimentation off of that,” says Danielle.

Use AI to improve the customer experience

AI-powered tools can create time efficiencies for travel providers and provide valuable context about customer intent. And many travel brands are using AI to help their employees service the customer faster.

“We’re doing some cool stuff at Quantum where an AI chat component will send a summary to a support agent who can immediately see what the customer was trying to do, rather than putting the burden on the customer to repeat themselves,” explains Danielle.

Integrating AI can also be extremely valuable for people involved in testing and experimentation.

“A lot of the excitement around AI, especially in things like personalization, is that you don’t need to come up with ideas yourself and test them, but ideally some of that is automated for you,” says Danielle. 

If you launch a test without specifically tracking certain behaviors, for example, it’s often hard to know how users interacted with it. By using AI to auto-capture data, you can review what users actually did and use heat maps to see where they were engaging.
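
To make that idea concrete, here is a minimal TypeScript sketch of click auto-capture: record every click rather than only predefined goals, then bucket the clicks into a grid that a heat map could render. All names here are hypothetical illustrations, not Quantum Metric’s or AB Tasty’s actual APIs.

```typescript
// Hypothetical sketch: auto-capturing every click so engagement can be
// reviewed after launch, even for behaviors no one thought to track.
// All names are illustrative, not Quantum Metric's or AB Tasty's APIs.

type CapturedClick = {
  x: number;        // viewport X coordinate of the click
  y: number;        // viewport Y coordinate of the click
  selector: string; // rough CSS-style path of the clicked element
  timestamp: number;
};

const capturedClicks: CapturedClick[] = [];

// Build a short selector path for the clicked element.
function describe(el: Element): string {
  const parts: string[] = [];
  let node: Element | null = el;
  while (node && parts.length < 5) {
    parts.unshift(node.tagName.toLowerCase() + (node.id ? `#${node.id}` : ""));
    node = node.parentElement;
  }
  return parts.join(" > ");
}

// Capture every click, not just predefined goals.
document.addEventListener("click", (e) => {
  const target = e.target as Element | null;
  if (!target) return;
  capturedClicks.push({
    x: e.clientX,
    y: e.clientY,
    selector: describe(target),
    timestamp: Date.now(),
  });
});

// Bucket clicks into a coarse grid that a heat map overlay can render.
function toHeatmapGrid(cellSize = 50): Map<string, number> {
  const grid = new Map<string, number>();
  for (const c of capturedClicks) {
    const key = `${Math.floor(c.x / cellSize)},${Math.floor(c.y / cellSize)}`;
    grid.set(key, (grid.get(key) ?? 0) + 1);
  }
  return grid;
}
```

In practice, captured events would be batched to an analytics backend; the grid here just shows how raw clicks become heat-map intensities.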

There’s also an increasing focus from both customers and travel providers on self-service. But many brands are still hesitant to have a lot of AI facing the customer. The key is finding the right balance.

“The unique thing with travel and hospitality is there is always a human element. You don’t want to digitize it completely,” advises Danielle. “You’re ideally delivering a nice experience as well.”

What else can you learn from our conversation with Danielle Harvey?

  • The long-haul effect: How the travel customer journey differs from that of e-commerce.
  • Voice of the customer: The importance of turning qualitative feedback into quantitative data.
  • On brand: Some of the challenges involved in testing across different brand websites.
  • Experience over things: Why travel will continue to be a priority for many people going forward even though it might look different.

About Danielle Harvey

Danielle Harvey is Vice President, Industries, Partnerships & Emerging Products at Quantum Metric. Passionate about the travel industry, she previously spent 11 years leading digital and analytics teams at Wyndham Hotels & Resorts and has also worked for Avis Budget Group.

About 1,000 Experiments Club

The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by John Hughes, Head of Marketing at AB Tasty. Join John as he sits down with the experts in the world of experimentation to uncover their insights into what it takes to build and run successful experimentation programs.

How to Prevent Knowledge Turnover in your Experimentation Program

The Problem: The High Cost of Experimentation Amnesia

In digital optimization, we often obsess over velocity—how fast can we test? But this focus masks a deeper, more expensive problem: Experimentation Amnesia.

At AB Tasty, an analysis of over 1.5 million campaigns revealed a startling trend. While thousands of tests are launched daily, the specific context—why a test won, what surprised us, and the strategic lesson learned—often evaporates the moment the campaign ends.

It vanishes into a 250-slide PowerPoint deck that no one opens again. It disappears into a Slack thread. Or, most painfully, it walks out the door when your CRO Manager or agency partner moves on to their next opportunity.

If you are running tests but not archiving the insights in a retrievable way, you aren’t building a program; you’re just running in circles. It’s time to shift your focus from Execution to Knowledge Management.

The Hidden Cost of “One-and-Done” Testing

The digital industry is notorious for its high turnover. On average, internal digital teams change every 18 months and agencies rotate every two years.

In traditional workflows, knowledge is tied to people, not platforms. When a key manager leaves, they take their “mental hard drive” with them.

This is the “Knowledge Drain.” It is the silent budget killer of CRO programs.

Every time you repeat a test because you couldn’t find the previous results, you are paying double for the same insight. Every time you lose the context of a winning test (i.e., you know that it won, but not why), you lose the ability to iterate and double your gains.

This is why the most mature experimentation teams are moving away from simple testing tools and adopting Program Management platforms that secure their knowledge.

The Solution? AB Tasty’s new Learnings Library.

We designed this feature to serve as a centralized, searchable repository that lives directly where your experiments do. It acts as the institutional memory of your digital team, ensuring that every test—whether a massive win or a “flat” result—contributes to a permanent asset library.

Context is King: Why AI Can’t Replace the Human “Why”

In an era where everyone is rushing to automate everything with AI, you might ask: “Why can’t an AI just write my test conclusions?”

While AI is powerful for analyzing raw numbers, it lacks business context. An AI can tell you that “Variation B increased transactions by 12%.” But it cannot tell you why that matters to your strategy.

  • Was that 12% expected?
  • Was it a shocking surprise that disproved a long-held internal belief?
  • Did it cannibalize another product line?

AB Tasty’s Learnings Library is designed to capture Qualitative Intelligence. It prompts your team to manually qualify results with human tags like “Surprising” or “Expected.” It asks for the narrative behind the numbers.

This human layer is critical. A “failed” test (one that produced no uplift) is often more valuable than a win, provided you document the lesson. By recording, “We learned that our users do not care about social proof on the cart page,” you create a defensive asset. You prevent future teams from wasting budget on that specific hypothesis again.
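
To make the idea concrete, here is a minimal sketch of what such a learning record might look like, pairing the quantitative outcome with the human qualifier and narrative. The field names and types are illustrative assumptions, not AB Tasty’s actual data model.

```typescript
// Hypothetical sketch of a "learning record" that pairs a test's
// quantitative outcome with the human context described above.
// These names are illustrative assumptions, not AB Tasty's data model.

type Outcome = "win" | "loss" | "flat";
type Qualifier = "expected" | "surprising";

interface Learning {
  id: string;
  hypothesis: string;   // what the team believed going in
  outcome: Outcome;     // what the numbers said
  upliftPct?: number;   // omitted for flat or inconclusive results
  qualifier: Qualifier; // the human judgment an AI cannot supply
  lesson: string;       // the narrative behind the numbers
  tags: string[];       // searchable labels, e.g. "cart page"
}

// Even a flat test becomes a defensive asset once its lesson is recorded.
const socialProofTest: Learning = {
  id: "exp-2024-031",
  hypothesis: "Review badges on the cart page will lift conversions",
  outcome: "flat",
  qualifier: "surprising",
  lesson: "Users do not appear to care about social proof on the cart page",
  tags: ["social proof", "cart page"],
};
```

The point of the structure is that the qualifier and lesson fields are mandatory human inputs: the numbers alone never enter the archive without their narrative.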

Visual History: The Power of “Before and After”

One of the biggest friction points in reporting is visual documentation. How much time does your team spend taking screenshots, cropping them, pasting them into PowerPoint, and trying to align the “Control” vs. “Variation” images?

Our Learnings Library removes this friction. It lets you upload your screenshots and automatically generates a Comparison View: a visual “Before and After” that lives alongside the data.

This visual history is vital for continuity. Two years from now, a spreadsheet number won’t spark inspiration. But seeing the exact design that drove a 20% increase in conversions? That is instant clarity for a new Designer, Developer, or Strategist.

Conclusion: Stop Renting Your Insights

If your testing history lives in the heads of your employees or on a local hard drive, you are effectively “renting” your insights. The moment that employee leaves, the lease is up, and you are back to square one.

It is time to own your knowledge.

Don’t let your next great insight slip through the cracks. Start building your library today.

FAQs: Learnings Library

What is AB Tasty’s Learnings Library?

Our Learnings Library is a centralized digital repository that archives the results, visual history, and strategic insights of every A/B test run by an organization. Unlike static spreadsheets, it connects data (uplift/downlift) with qualitative context (hypotheses and observations), transforming individual test results into a permanent, searchable company asset.

How does staff turnover impact A/B testing ROI?

Staff turnover creates a “Knowledge Drain.” When optimization managers leave without a centralized system of record, they take valuable historical context with them. This forces new hires to “restart” the learning curve, often leading to redundant testing (paying for the same insight twice) and a slower velocity of innovation.

Should I document “failed” or inconclusive A/B tests?

Yes. A “failed” test is only a failure if the lesson is lost. Documenting inconclusive or negative results creates “defensive knowledge,” which prevents future teams from wasting budget on the same disproven hypotheses. A robust Learnings Library treats every result as a data point that refines your understanding of the customer.

How do I stop my team from re-running the same A/B tests?

The most effective way to prevent redundant testing is to implement a searchable timeline of experiments that includes visual evidence (screenshots of the original vs. variation). This allows any team member to instantly verify if an idea has been tested previously, under what conditions, and what the specific outcome was.
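
As a rough illustration of the mechanics, the hypothetical TypeScript sketch below checks a searchable archive for prior tests matching a proposed hypothesis. The record shape and function names are assumptions for illustration, not AB Tasty’s API.

```typescript
// Hypothetical sketch: querying a test archive before launching a new
// experiment, so the same insight isn't paid for twice. The record
// shape and function are illustrative, not AB Tasty's actual API.

interface ArchivedTest {
  id: string;
  hypothesis: string;
  lesson: string;
  tags: string[];
}

// Match a free-text query against hypotheses and tags.
function findPriorTests(archive: ArchivedTest[], query: string): ArchivedTest[] {
  const terms = query.toLowerCase().split(/\s+/);
  return archive.filter((t) =>
    terms.some(
      (term) =>
        t.hypothesis.toLowerCase().includes(term) ||
        t.tags.some((tag) => tag.toLowerCase().includes(term))
    )
  );
}

// Before proposing a social-proof test, check what history already says.
const archive: ArchivedTest[] = [
  {
    id: "exp-2024-031",
    hypothesis: "Review badges on the cart page will lift conversions",
    lesson: "Users do not appear to care about social proof on the cart page",
    tags: ["social proof", "cart page"],
  },
];
console.log(findPriorTests(archive, "social proof"));
```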

What is the best platform for scaling a CRO program?

Scaling a program isn’t just about running more tests; it’s about running smarter tests. Unlike competitors that focus on “gadget” features (like AI text generation), AB Tasty invests in Program Management infrastructure. By combining execution with a native Knowledge Management system, AB Tasty allows your program to compound its value over time, rather than resetting every year.