AB Tasty is a complete tool for website and conversion rate optimization. We serve as your digital lab, equipped with everything you need to run experiments that help you better understand your users and customer journeys, so you can create the clearest, most engaging user experience possible and ensure your website performs well and yields maximum results.
At AB Tasty, we love to help you improve your customers’ experiences – and we are here to do the same for you on the AB Tasty platform! We’re constantly gathering feedback from our users, and next month, you’ll see us roll out our new navigation based on that feedback.
We’re doing this for a few reasons:
We want to give you the best – and that means further improving the quality of your experience on the platform. 💖
We want you to be able to find exactly what you need, when you need it – which means improving the organization of information, classifying your favorite (and new!) features in an easy-to-navigate way. 🕵️
We want you to have the most intuitive experience possible – by providing you with better guidance from the first time you log in and getting you from A to B as quickly as possible. 🗺
What does that mean for you?
We’ll guide you through the updates in the coming weeks, but here’s a sneak peek of what to expect:
Better visibility with a new sidebar navigation, allowing you to easily access any area of the platform with a single click – and collapse it for more workspace.
We’ve gotten rid of the hamburger menu in favor of giving you more control over where you want to go within the platform – whether it be Tests, Personalization, Audience, Analysis, or ROI – plus a login button to take you directly to Flagship, our feature management solution. 🧭
Improved access to Settings, reorganized to match our customers’ most-used options.
We’ve designed a sleeker look, consolidating the settings menus for a cleaner appearance and easier navigation. 💅
New header to accompany you through every step of the workflow, from campaign creation to reporting, giving you a better bird’s eye view of a campaign’s status.
Your step-by-step buttons will remain exactly where they are, but the header will shift to make everything more easily visible to you – including an editable campaign name, status, and reporting, right alongside the tag and account info. 👀
We hope these exciting changes make a big impact on how you use AB Tasty! 💥
We know you might have questions as you go through the new navigation, and we are here to help! We also know you might have feedback – about the new design and beyond – and we invite you, as always, to share it with us on our Canny board, accessible via this link.
When it comes to kickstarting experimentation within an organization, Lukas Vermeer recommends starting small and (keeping it) simple.
Lukas Vermeer took this advice to heart when he dove head-first into the world of AI and machine learning during the early stages of its development, when there was little industry demand. Through consulting for various companies, Lukas discovered his ideal work environment: a scale-up, where he could put his data and machine learning expertise to use.
Enter Booking.com. Lukas joined the Dutch digital travel company during the scale-up phase and went on to lead the experimentation team for eight years, scaling the team from three people to 30 people.
Once the experimentation team at Booking.com had reached maturity, he embarked on a new adventure in 2021 as director of experimentation at Vista. He is building and shaping the experimentation culture and tapping into the potential of their data, to further Vista’s impact as an industry leader in design and marketing solutions for small businesses.
Lukas spoke with AB Tasty’s VP of Marketing Marylin Montoya about the process and culture of experimentation; from the methods to the roles of the teams involved within an organization. Here are some of the key insights from their conversation.
Get strategic about experimentation
Knowing the purpose of your experiment is key. Lukas recommends focusing your efforts on testing big features that can drive real change or impact the company’s bottom line, rather than UI design.
Ask yourself, “What are the biggest questions that are driving your business case at the moment? What are the biggest assumptions that are behind your strategic planning?” he says. Rather than increasing the number of experiments, focus on the correct execution of more significant experiments.
When it comes to building a culture of experimentation within an organization, Lukas suggests using the flywheel method. The first experiment should garner attention by splitting the company’s opinion 50/50, as to whether it will work. This demonstrates that it can be hard to predict the success of experiments, thereby underlining the “unquantifiable value of experimentation.” We need to acknowledge that it is equally valuable to avoid shipping a bad product (that could reduce revenue), as it is to figure out strategically what you should invest in going forward.
Structure your organization for experimentation success
The way your business and teams are structured will impact how seamlessly your experiments are executed. Lukas recommends that the product development team take full ownership of the experiments.
The experimentation team should be facilitating experiments by providing the tools, education and troubleshooting support to the product development team, who can then run their experiments autonomously.
By training product managers in the process of experimentation — such as the different tests and tools available, their strengths and weaknesses, the assumptions they make and when to use them — they can work autonomously to test their ideas and select from a portfolio of experimental methods in order to make a decision.
There is, however, a social aspect to experimentation that should not be ignored. Given the subjective nature of data interpretation and analysis, Lukas highlights the importance of discussing the outcomes and giving feedback on the experimentation process in order to optimize it.
“The whole point of an experiment is to (…) drive a decision, and the decision should be supported by the evidence at hand,” Lukas says. Just as scientists peer-review their papers before publishing, experiments using the scientific method should follow the same guidelines to document the hypothesis, method, results and discussion in the reporting. (An opinion that has been echoed by 1,000 Experiments Club podcast guest Jonny Longden.)
The biggest threat to experimentation culture: leadership or roadmaps?
When people in product development talk about “roadmaps,” these aren’t actually roadmaps, Lukas says. They’re more like linear wishlists of steps that teams hope will bring them to the goal. The problem is that there are rarely alternative routes or redirections should they stray from the original plan.
It’s hard to change direction at the first failed experiment, Lukas explains, due to the “escalation of commitment.” That is, the more time and energy you have invested into something, the more difficult it is to change course.
So, is it time to ditch roadmaps altogether? Lukas advises that roadmaps should simply acknowledge that there is inherent uncertainty. There are many unknowns in product development, and these only become visible once the products are being built and exposed to customers. This is why the build-measure-learn model works, because we take a few steps and then check if we’re heading in the right direction.
Lukas says the goal should not be to “deliver a final product in two months”; rather, you should incorporate the uncertainty into the deliverables and word the objective accordingly, for example: to check whether customers are responding in the desired way.
What else can you learn from our conversation with Lukas Vermeer?
When to start experimenting and how to build a culture of experimentation
The importance of autonomy for experimentation teams
The three levels of experimentation: method, design, execution
How to accelerate the experimentation process
About Lukas Vermeer
Lukas Vermeer is an expert in implementing and scaling experimentation with a background in AI and machine learning. Currently, Lukas is the director of experimentation at Vista. Prior to this, he spent over eight years at Booking.com, moving from data scientist to product manager to director of experimentation. He continues to offer his expert consulting services to companies that are starting to implement experimentation. His most recently co-authored paper, “It Takes a Flywheel to Fly: Kickstarting and Keeping the A/B Testing Momentum,” helps companies get started and accelerate experimentation using the “investment follows value follows investment” flywheel.
About 1,000 Experiments Club
The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.
In a world where customers increasingly seek to buy into a brand rather than simply buy from one, it’s critical that companies create experiences that turn customers into loyal fans, rather than treating them as simple business transactions.
Customer satisfaction alone is no longer enough to thrive in today’s economy. The goal is to earn your customers’ fierce loyalty with authenticity and transparency, while aligning your offers and actions with a mission that speaks to them.
By measuring the net promoter score (NPS), businesses gain unique insight into how consumers perceive their customer journey in a number of different ways. Companies that use NPS to analyze customer feedback and identify areas of improvement hold the keys to optimizing rapid and effective business growth.
In this article, we’ll cover why measuring NPS is essential to scaling business sustainably, how to gather and calculate NPS feedback, and best practices to increase response rates and run successful NPS campaigns.
What is NPS?
Let’s start with a little history. The Net Promoter Score was officially pioneered and coined by Fred Reichheld in the early 2000s, and has since become an invaluable methodology for traditional and online businesses alike. The value lies in using data to effectively quantify customer loyalty and its effect on business performance — a factor that was previously challenging to measure at scale.
The system works by asking customers a version of this question: How likely are you to recommend our brand/product/service to a friend or colleague? Answers range on a scale of 0-10, from “not at all likely” to “extremely likely.” Depending on their answers, respondents are separated into one of three categories.
Promoters (score 9-10): Loyal customers who keep buying and actively promote and refer your brand to their circle of friends, family, and/or colleagues.
Passives (score 7-8): Customers who’ve had satisfactory or standard experiences with your brand, and are susceptible to competitors’ offers.
Detractors (score 0-6): Unhappy customers who risk damaging your brand with public complaints and negative word-of-mouth.
To calculate the final net promoter score, subtract the percentage of detractors from the percentage of promoters. The metric can range from a low of -100 (every customer is a detractor) to a maximum of 100 (every customer is a promoter).
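As a quick illustration (a minimal sketch, not tied to any particular survey tool), the scoring rules above translate directly into code:

```python
def nps(scores):
    """Compute a Net Promoter Score from a list of 0-10 survey responses.

    Promoters score 9-10, detractors score 0-6; passives (7-8) count
    toward the total but cancel out of the numerator.
    """
    if not scores:
        raise ValueError("no responses to score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    # NPS = % promoters - % detractors
    return round(100 * (promoters - detractors) / len(scores))

# Example: 4 promoters, 4 passives, 2 detractors out of 10 responses
print(nps([10, 9, 9, 10, 8, 7, 7, 8, 5, 3]))  # → 20
```

Note how the passives pull the score down even though they don’t appear in the numerator: with 4 promoters and 2 detractors, the score is 20, not 50.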
For many e-commerce companies, high customer retention, referral, and positive reviews are all critical drivers of success. NPS helps these businesses understand overall buyer behaviors and identify which customer profiles have the potential to be brand enthusiasts, enabling marketers to adjust their strategy to convert passives into promoters.
Simply put, NPS surveys are a simple and powerful method for companies to calculate how customer experience management impacts their overall business performance and growth.
How to gather NPS feedback
Common methods used to gather NPS feedback are email, SMS, and website pop-ups or chat boxes. Regardless of which method is used, there is a common set of steps to ensure a successful NPS campaign:
Set clear objectives before sending out the NPS survey. Save time and increase the relevance of survey responses by determining exactly what kind of feedback you’re looking for before launching the survey.
Segment recipients with customer behavior profiles. Get specific with your survey questions by customizing them to different audiences based on their unique history and interaction(s) with your brand.
Make surveys short, concise, and timely. Instead of lengthy annual or quarterly feedback requests, increase response rates by sending quick and easy surveys to customers soon after they’ve had meaningful interactions with your brand.
Use an automation tool to optimize survey delivery. Whether it’s with an email marketing platform or website widget integration, using automation tools to design and deliver your NPS surveys streamlines the entire feedback process, while reducing the margin for human error.
Integrating the NPS survey directly into the customer journey on your website increases both the response rate and the relevancy of feedback. To implement an NPS survey like this, try using an intuitive visual editor with NPS widget capabilities, like AB Tasty’s.
AB Tasty’s visual editor enables marketers of all levels to:
Modify visual and interactive elements on the website without any manual coding necessary;
Set up action-tracking to directly measure the performance of variations you’ve created;
Use the NPS widget to customize the content and feel of surveys across one or more pages of the website; and
Track how customer loyalty evolves and benchmark against competitor performance via the NPS report.
Below are two case studies of clients who’ve used the AB Tasty NPS widget with highly successful campaigns to collect customer feedback and gain valuable insight to improve their customer experiences.
How to calculate NPS feedback
So what makes a good NPS score? A general rule of thumb says that anything below 0 means your business has some work to do, while a “good score” falls between 0 and 30. However, the true value of an NPS score depends on several factors, chief among them the industry your business is in.
If your NPS score isn’t as high as you’d hoped, don’t fret! There is always room for improvement and the good news is that it’s easy to implement actionable changes to optimize your NPS campaigns, no matter where you are on the scale.
When benchmarking for NPS, look at competitors that are in the same industry and relatively similar size as your company to get the most accurate visualization possible. Look for graphs that map out average NPS data by industry to get more insights on performance and opportunities for improvement in your sector.
It’s important to understand that comparing your business’s results to significantly larger or unrelated brands can lead not only to inaccurate interpretation of the data, but also sets unrealistic and irrelevant goals for customer experience teams.
How to increase your NPS response rate
Reaching your customers with your NPS survey is just one half of the battle. The other half is getting enough customers to actually respond to it, which is critical to calculate an NPS score that accurately reflects your company’s customer satisfaction performance. Here are some tips for boosting your NPS response rate:
Customize your NPS survey. Take the time to brand your survey with the proper fonts and colors, following your brand design guide. Given that the average person sees upwards of 6,500 ads a day, information overload is a real struggle for consumers and marketers alike. A consistent look and feel helps customers recognize and trust your brand, making it an easy transition to take the next step in their customer journey.
Personalize the message. Studies show that personalized subject lines increase email open rates by 26%. If you’re sending the survey in an email, use merge fields or tags to automatically add each recipient’s name into the subject line or body of the email.
Use responsive design. 75% of customers complete surveys on their phone. Make sure your survey is fully functional and accessible from all devices (i.e., desktop, mobile, and tablet), as well as on as many operating systems and internet browsers as possible.
Offer incentives for completing the survey. From gift cards, cash, and promo codes to raffles, offering monetary rewards is an easy method to increase engagement, especially for longer surveys. However, this should be researched and done carefully to avoid review bias and more seriously, legal issues.
Why you should use NPS
Taking customer feedback seriously is important business. As of 2020, 87% of people read online reviews for local businesses, and 79% said they trust online reviews as much as a personal recommendation from friends or family. This means your customers’ perception of your brand can literally make or break it.
It’s clear that looking at sales revenue as the sole determiner of success is not sustainable for long-term business growth. Neither is assuming that several user case scenarios represent the majority without the data to prove it.
NPS is an especially powerful metric for e-commerce, as it uses data to help businesses identify truly relevant areas for improvement and opportunities to build a strong and loyal customer base that is so vital to thrive in this sector.
Building a strong relationship with your customer base and incentivizing brand promoters is crucial to succeeding in the e-commerce market.
Rather than guesstimating what priorities should be, businesses can use longer surveys with open-ended questions to evaluate how their customers feel about specific aspects of the business (e.g., products, website, and brand) and target strategy accordingly.
When calculated correctly, NPS is the key to determining the likelihood of repeat business and acquisition driven by brand promoters. Marketing and product teams can boost customer retention and increase sales with customized products they know buyers want. Happy customers love loyalty programs and referral rewards, which also bring in new business with significantly less spend than cold advertising.
When is the ideal time to send users an NPS survey?
Deciphering what time customers are most likely to open emails, or when they’re more responsive to brand communications, is one of the biggest challenges for marketing teams.
Some studies suggest that the best time of the week to send emails is Tuesday at 10 a.m. But as many marketers know from experience, a one-size-fits-all solution doesn’t truly exist (though we wish it did!).
Depending on your industry and audience, your brand’s ideal time to hit send will likely change over time — and experimentation and optimization are the best ways to stay on top of it.
Identifying the right time to send customer satisfaction surveys requires continual testing of different elements like message personalization and audience segmentation.
However, it is possible to find ideal times based on data you likely already have, by focusing on meaningful interactions between brand and customer.
One of the optimal times to send an NPS survey is shortly after customers have had a meaningful interaction with the brand. This could be after a customer finishes a purchase cycle, receives a product, or even speaks with customer service.
During this time, the customer experience is still top-of-mind, which means they are more likely to complete a feedback survey with higher chances of providing more detailed — and honest — insights.
It’s also better to send short surveys more frequently. Asking for smaller amounts of feedback more often than once or twice a year enables you to monitor customer satisfaction with a quicker response time.
With regular feedback surveys, businesses can catch onto unhappy customers early on and make prompt changes to address problems in the customer journey, increasing customer retention.
Another benefit of this practice is that businesses can also identify highly successful campaigns throughout the year and prioritize resources on scaling strategies that are already proven to work well.
Do’s and don’ts for running an effective NPS campaign
Do:
Add open-ended questions. If you want more qualitative insight to support your business decisions, ask customers for specific input, as Eurosport did in this campaign.
Send from a person. Humans value real connections. Increase NPS response rate by sending surveys with the name and email of a real employee, not an automatic “no-reply” bot address.
Integrate your NPS survey into the user journey. To boost your reach beyond email surveys, use an NPS widget on your website for increased response rate and in-depth responses. Match your survey’s design to flow with the product page UX.
Don’t:
Disrupt the customer journey. Don’t overdo it with pop-up surveys or make them difficult to close; this can distract customers from their website experience and increase your bounce rate.
Ask only one question. Don’t ask for just a 0-10 score. To collect actionable insight, add a follow-up question after the NPS score to ask why they gave that rating.
Keep NPS results to yourself. Transparency makes cross-team collaboration more effective and creative. NPS data is valuable not only for customer-facing teams, but also for marketing and product teams working to improve the customer experience.
Optimize your NPS strategy
In summary, NPS is incredibly user-friendly and simple to implement. This metric helps brands gain actionable insight into their customer loyalty and satisfaction, and identify opportunities to significantly boost customer retention and acquisition.
NPS widgets and automated feedback collection enable cross-team collaborators to work more cohesively on customer experience campaigns.
Businesses can use this data to run their operations better and smarter, and also improve cross-team collaboration on enhancing the customer experience. Regular testing and following best practices enable teams to continually improve their NPS strategy and reach higher response rates.
Ready to integrate your next NPS campaign directly into your website and customer journey? With an intuitive interface and no-code visual editor, AB Tasty enables you to fully customize the entire NPS survey live on your website, and experiment with different triggers to optimize your NPS strategy.
Our NPS widget makes it easy to scale this process quickly within even the fastest growing companies — give it a spin today.
Join VP Marketing Marylin Montoya as she takes a deep dive into all things experimentation
Today, we’re handing over the mic to AB Tasty’s VP Marketing Marylin Montoya to kick off our new podcast series, “1,000 Experiments Club.”
At AB Tasty, we’re a bunch of product designers, software engineers and marketers (aka Magic Makers), working to build a culture of experimentation. We wanted to move beyond the high-level rhetoric of experimentation and look into the nitty gritty building blocks that go into running experimentation programs and digital experiences.
Enter: “1,000 Experiments Club,” the podcast that examines how you can successfully do experimentation at scale. Our podcast brings together a selection of the best and brightest leaders to uncover their insights on how to experiment and how to fail … successfully.
In each episode, Marylin sits down to interview our guests from tech giants, hyper-growth startups and consulting agencies — each with their own unique view on how they’ve made experimentation the bedrock of their growth strategies.
You’ll learn about why failing is part of the process, how to turn metrics into your trustworthy allies, how to adapt experimentation to your company size, and how to get management buy-in if you’re just starting out. Our podcast is for CRO experts, product managers, software engineers; there’s something for everyone, no matter where you fall on the maturity model of experimentation!
We are kicking things off with three episodes, each guest documenting their journey of where they went wrong, but also the triumphs they’ve picked up from decades of experimentation, optimization and product development.
In the culture of experimentation, there’s no such thing as a “failed” experiment: Every test is an opportunity to learn and build toward newer and better ideas. So have a listen and subscribe to “1,000 Experiments Club” on Apple Podcasts, Spotify or wherever you get your podcasts.
Is experimentation for everyone? A resounding yes, says Jonny Longden. All you need are two ingredients: A strong desire and tenacity to implement it.
There’s a dangerous myth lurking around, and it’s the idea that you have to be a large organization to practice experimentation. But it’s actually the smaller companies and start-ups that need experimentation the most, says Jonny Longden of performance marketing agency Journey Further.
With over a decade of experience in conversion optimization and personalization, Jonny co-founded Journey Further to help clients embed experimentation into the heart of what they do. He currently leads the conversion division of the agency, which also focuses on PPC, SEO, PR — among other marketing specializations.
Any company that wants to unearth any sort of discovery should be using experimentation, especially start-ups who are in the explorative phase of their development. “Experimentation requires no size: It’s all about how you approach it,” Jonny shared with AB Tasty’s VP Marketing Marylin Montoya.
Here are a few of our favorite takeaways from our wide-ranging chat with Jonny.
The democratization of experimentation
People tend to see more experimentation teams and programs built at large-scale companies, but that doesn’t necessarily mean other companies of different sizes can’t dip their toes in the experimentation pool. Smaller companies and start-ups can equally benefit from this as long as they have the tenacity and capabilities to implement it.
You need to truly believe that without experimentation, your ideas won’t work, says Jonny. There are things that you think are going to work and yet they don’t. Conversely, there are many things that don’t seem like they work but actually end up having a positive impact. The only way to arrive at this conclusion is through experimentation.
Ultimately, the greatest discoveries (in space travel, medicine and beyond) have come from a scientific methodology, which is just observation, hypothesis, testing and refinement. Approach experimentation with this mindset, and it’s anyone’s game.
Building the right roadmaps with product teams
Embedding experimentation at the front of the product development process is important, and yet most people don’t do it, says Jonny. From a pure business perspective, it’s about de-risking development and proving the value of a change or feature before investing any more time, money and bandwidth.
Luckily, the agile methodology employed by many modern teams is similar to experimentation. Both rely on iterative customer collaboration and a cycle of rigorous research, quantitative and qualitative data collection, validation and iteration. The sweet spot is the collection of both quantitative and qualitative data — a good balance of feedback and volume.
The success of building a roadmap for an experimentation program comes down to understanding the organizational structure of a company or industry. In SaaS companies, experimentation is embedded into the product teams; for e-commerce businesses, experimentation fits better into the marketing side. Once you’ve determined the owner and objectives of the experimentation, you’ll need to understand whether you can effectively roll out the testing and have the right processes in place to implement results of a test.
Experimentation is, ultimately, innovation
The more you experiment, the more you drive value. Experimentation at scale enables people to learn and build more tests based on these learnings. Don’t use testing to only identify winners because there’s much more knowledge to be gained from the failed tests. For example, you may only have 1 in 10 tests that work. The real value comes in the 9 lessons you’ve acquired, not just the 1 test that showed positive impact.
When you look at it through these lenses, you’ll realize that the post-test research and subsequent actions are vital: That’s where you’ll start to make more gains toward bigger innovation.
Jonny calls this the snowball effect of experimentation. Experimentation is innovation — when done right. At the root, it’s about exploring and seeing how your customers respond. And as long as you’re learning from the results of your tests, you’ll be able to innovate faster precisely because you are building upon these lessons. That’s how you drive innovation that actually works.
What else can you learn from our conversation with Jonny Longden?
Moving from experimentation to validation
How to maintain creativity during experimentation
Using CRO to identify the right issues to tackle
The required building blocks to successful experimentation
About Jonny Longden
Jonny Longden leads the conversion division of Journey Further, a performance marketing agency specializing in PPC, SEO and PR, among other areas. Based in the United Kingdom, the part-agency, part-consultancy helps businesses become data-driven and build experimentation into their programs. Prior to that, Jonny spent over a decade in conversion optimization, experimentation and personalization, working with Sky, Visa, Nike, O2, Mvideo, Principal Hotels and Nokia.
Are you on the brink of launching a new feature – one that will affect many of your high-value clients? You’ve worked hard to build it, you’re proud of it and you should be!
You can’t wait to release it for all your users, but wait! What if you’ve missed something? Something that would ruin all your engineering efforts?
There’s nothing worse than starting the day after a release by immediately dealing with a flood of alerts for production issues, spending the day combing through logging and monitoring systems for errors and, ultimately, having to roll back the feature you just launched. You’d be left frustrated and unmotivated.
Beyond sapping the morale of your technical teams, late bug detection is expensive: NIST has shown that the longer a bug takes to be detected, the more costly it is to fix. This is illustrated by the following graph:
This is because once a feature has been released to production, finding and fixing bugs is difficult and risky. In addition to preventing users from being affected by problems, it’s critical to ensure service availability.
Are you sure your feature is bug-free?
You might think that this won’t happen to you. That your feature is safe and ready to deploy.
History has shown that it can happen to the biggest companies. Let’s name a few examples.
Facebook, May 7, 2020. An update to Facebook’s SDK rolled out to all users with a bug: a server value that was supposed to provide a dictionary of things was changed to provide a simple YES/NO instead. This tiny change was enough to break Facebook’s authentication system and affect tons of other apps, like TikTok, Spotify, Pinterest and Venmo, including apps that didn’t even use Facebook’s authentication system, since it’s extremely common for apps to connect to Facebook for ad attribution regardless of whether they use a Facebook-related feature. The result was unequivocal: apps simply crashed right after launch. Facebook fixed the problem in a hurry, and things were back to normal in about two hours. But do you have the same resources as Facebook?
Apple, September 19, 2012. Another good example, even though it’s a bit older, is the replacement of Google Maps with Apple Maps in iOS 6. Apple usually handles the rollout of new features carefully, but this time it stumbled. Apple no longer wanted to be tied to Google’s app, so it built its own version. In its rush to release, though, Apple shipped some unforgivable navigational mistakes: erased cities, disappearing buildings, flattened landmarks, duplicate islands, distorted graphics, and erroneous location data. Much of this mess could have been avoided with a progressive deployment, which would have let Apple spot the bugs and fix them quickly before a massive rollout.
And now, seeing that even the biggest companies get burned, you may be stressed out and reluctant to release your feature at all.
But don’t worry! At AB Tasty, we know that building a feature is only half of the story and that to be truly effective, that feature has to be well deployed.
Our feature management service has you covered. It offers a set of useful capabilities, such as progressive rollout, that free you from the fear of a release catastrophe and remove feature management friction. You can then focus on value-added work, get high-quality features into production, and direct your energy and innovation where they deliver the most value to your customers.
What’s progressive rollout?
So now you’re curious: what’s progressive rollout? How will this help me monitor the release and make sure everything is okay?
A progressive rollout approach lets you test the waters of a new version with a restricted set of clients. You can set percentages of users to whom your feature will be released and gradually update the percentage to safely deploy your feature. You can also do a canary launch by manually targeting several groups of people at various stages of your rollout.
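To make the mechanics concrete, here is a minimal sketch of how percentage-based rollout bucketing is typically implemented (the function name and hashing scheme are illustrative, not AB Tasty’s actual implementation): each user is hashed into a stable bucket, so raising the percentage only adds users and never flips anyone back.

```python
import hashlib

def in_rollout(user_id: str, feature: str, percentage: float) -> bool:
    """Deterministically decide whether a user is in a percentage rollout.

    Hashing the user ID together with the feature name maps each user to a
    stable bucket from 0 to 99; raising the percentage only adds users, so
    nobody who already has the feature ever loses it mid-rollout.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percentage

# Ramp up gradually: each user keeps the same answer at every stage.
for pct in (1, 10, 50, 100):
    exposed = sum(in_rollout(str(uid), "new-checkout", pct) for uid in range(10_000))
    print(f"target {pct:>3}% -> {exposed / 100:.1f}% of users exposed")
```

Because the bucketing is deterministic, the same user sees a consistent experience across sessions, which is exactly what makes a gradual ramp-up safe to pause and resume.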
This is a practice already used by large companies that have realized the significant benefits of a progressive rollout.
Netflix, for example, is one of the most dynamic companies: its developers are constantly releasing updates and new software, yet users rarely experience downtime and encounter very few bugs or issues. The company delivers such a smooth experience thanks to sophisticated deployment strategies, such as canary and progressive deployments, multiple staging environments, blue/green deployments, traffic splitting, and easy rollbacks, all of which help development teams release software changes with confidence that nothing will break.
Disney is another good example of a company that makes the most of progressive deployment. It has taken the phased approach to a whole new level for its Disney+ and Star streaming services by deploying them regionally rather than globally, a delivery method driven by the needs of the business: making sure everything is ready at the regional level, in line with its focus on the most important markets. Before launching Disney+ in Europe, the company spent a long time building the local infrastructure needed to deliver a high-quality experience to consumers, including establishing local colocation facilities and beefing up data centers to cache content regionally. As the European rollout began, Disney identified markets where the launch could have caused latency and degraded the experience for affected users. It therefore took proactive steps to reduce its overall bandwidth usage by at least 25% ahead of its March 24 launch and delayed the launch in France by two weeks. Without progressive deployment, Disney wouldn’t have been able to catch these issues, and that’s why the launch of Disney+ was so remarkable.
What are the benefits of progressive rollout?
There are three main benefits to the progressive rollout approach.
Avoiding bugs impacting everyone in production at once
First, by slowly ramping up the load, you can monitor and capture metrics about how the new feature impacts the production environment. If any unanticipated issues come to light, you can pause the full launch, fix the issues, and then smoothly move ahead. This data-driven approach guarantees a release with maximum safety and measurable KPIs.
Validating the “Viable part” in your MVP
You can effectively measure how well your feature is welcomed by your users. If you launch a new feature to 10% of your client base and notice revenue or engagement taking a dip, you can pause the release and investigate. The other major advantage? Anticipating costs. Since margin, profit and revenue are an important part of sustainability, unexpected costs that blow up your projected budgets at the end of the month are almost as bad as the night sweats that come from an unexpected bug! Monitoring your costs during a progressive rollout and immediately pausing the launch if those costs spike is a phenomenal level of control that you will absolutely want to get in on.
Progressively deploying services based upon business drivers
Finally, deploying a service or product progressively can also be a way of prioritizing specific markets based on data-driven business plans. Disney, for example, decided not to launch “Star” in the U.S. when it introduced the new channel in the Disney+ catalog for international audiences. Star features more mature R-rated movies, FX TV shows, and other titles Disney owns the rights to but that don’t fit the Disney+ family image. Ironically, U.S. customers have to pay extra on top of their Disney+ subscription to access the same content on another streaming service, Hulu.
The decision followed a complex matrix of rights agreements and revenue streams. Disney found that U.S. subscribers are willing to pay for the separate Hulu and Disney+ libraries, but that Star’s more limited lineup was only enough to justify a standalone paid add-on for international customers, who have to add $2 to their initial $6.99 subscription to access it. Once Star’s content library is large enough to justify bypassing Hulu, U.S. customers will get access to it for just $1 more. This progressive rollout approach lets Disney ensure that once it launches Star in the U.S., everything will be ready and it will achieve good results.
In other words, the progressive rollout approach helps you ensure that your functionality meets the criteria of usability, viability, and desirability in accordance with your business plan.
How to act fast when you identify bugs while progressively deploying a feature?
Now that you know more about the progressive rollout of your features/products, you may be wondering how to take action if you identify bugs or if things aren’t going well. Lucky for you, we’ve thought of that part too. In addition to progressive rollout, you’ll also find automatic rollback on KPIs and feature flagging in the AB Tasty toolkit.
Feature flagging lets you set up flags on your features that work as simply as an on/off switch. If you identify threats during your rollout, or if user engagement isn’t convincing, you can simply toggle the feature off and take the time to fix any issues.
This implies that you are aware of the problem and that someone from the product team is available to turn the feature off. But what if something happens overnight and no one can check on the progress of the deployment? For that eventuality, you can set up automatic rollbacks (also called rollback thresholds) linked to key performance indicators. Our algorithm checks the performance of your deployment against the KPIs you set and, if something goes wrong, automatically rolls back the deployment and notifies you that a problem has occurred. In the morning, your engineers can fix the problem without having had to handle the rollback themselves.
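As a sketch of how these two mechanisms fit together (the class, method names, and threshold are ours for illustration, not AB Tasty’s API), a flag can act as a kill switch while a KPI monitor flips it off automatically when an error-rate threshold is crossed:

```python
class FeatureFlag:
    """Toy feature flag with an automatic KPI-based rollback threshold."""

    def __init__(self, name: str, error_rate_threshold: float):
        self.name = name
        self.enabled = True  # the manual on/off switch
        self.error_rate_threshold = error_rate_threshold
        self.requests = 0
        self.errors = 0

    def record(self, ok: bool) -> None:
        """Record one request outcome and roll back if the KPI degrades."""
        self.requests += 1
        if not ok:
            self.errors += 1
        # Automatic rollback: once we have enough traffic, disable the
        # feature and alert the team if the error rate exceeds the threshold.
        if self.requests >= 100 and self.errors / self.requests > self.error_rate_threshold:
            self.enabled = False
            print(f"[alert] '{self.name}' rolled back automatically")

flag = FeatureFlag("new-checkout", error_rate_threshold=0.05)
for i in range(200):
    if flag.enabled:
        flag.record(ok=(i % 10 != 0))  # simulate a 10% error rate
print(f"enabled={flag.enabled} after {flag.requests} requests")
```

In this simulation the flag turns itself off as soon as the observed error rate crosses the 5% threshold, so no engineer has to be awake to pull the switch.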
Conclusion
Downtime incidents are stressful for both you and your customers. To resolve them quickly and efficiently, you need to have access to the right tools and make the most of them. The progressive rollout, automatic rollback, and feature flagging are great levers to relieve your product teams of stress and let them focus on innovating your product to create a wonderful experience for your users. Highly effective organizations have already realized the importance of having the right approach to deployment with the right tools. What about your organization?
AB Tasty minimizes risk and maximizes results to make the lives of Product teams a whole lot easier. Create a free account today!
Chad Sanderson breaks down the most successful types of experimentations based on company size and growth ambitions
For Chad Sanderson, head of product – data platform at Convoy, the role of data and experimentation are inextricably intertwined.
At Convoy, he oversees the end-to-end data platform team — which spans data engineering, machine learning, experimentation, and data pipelines — among a multitude of other teams all in service of helping thousands of carriers ship freight more efficiently. The role has given him a broad overview of the process, from ideation and construction to execution.
As a result, Chad has had a front-row seat that most practitioners never get: the end-to-end process of experimentation, from hypothesis and data definitions through analysis and reporting to year-end financials. Naturally, he had a few thoughts to share with AB Tasty’s VP Marketing Marylin Montoya in their conversation on the experimentation discipline and the complexities of identifying trustworthy metrics.
Introducing experimentation as a discipline
Experimentation, despite all of its accolades, is still relatively new. You’ll be hard pressed to find great collections of literature or an academic approach (although Ronny Kohavi has penned some thoughts on the subject matter). Furthermore, experimentation has not been considered a data science discipline, especially when compared to areas of machine learning or data warehousing.
While there are a few tips here and there available from blogs, you end up missing out on the deep technical knowledge and best practices of setting up a platform, building a metrics library and selecting the right metrics in a systematic way.
Chad sees experimentation’s accessibility as a double-edged sword. Many companies have yet to apply the same rigor they bring to other data science fields because it’s so easy to start from a marketing standpoint. But as the business grows, so do the maturity and complexity of experimentation, and that’s precisely where the literature on platform creation and scaling is scant, leaving the field undervalued and making it hard to recruit the right profiles.
When small-scale experimentation is your best bet
When you’re a massive-scale company such as Microsoft or Google, with different business units, data sources, technologies, and operations, rolling out new features or changes is an incredibly risky endeavour, given that any mistake could impact millions of users. Imagine accidentally introducing a bug into Microsoft Word or PowerPoint: the impact on the bottom line would be severe.
The best way for these companies to experiment is with a cautious, small-scale approach. The aim is to focus on immediate action, catching things quickly in real time and rolling them back.
On the other hand, if you’re a startup in a hyper-growth stage, your approach will vastly differ. These smaller businesses typically have to show double-digit gains with every new feature rollout to their investors, meaning their actions are more so focused on proving the feature’s positive impact and the longevity of its success.
Make metrics your trustworthy allies
Every business will have very different metrics depending on what they’re looking for; it’s essential to define what you want before going down the path of experimentation and building your program.
One question you’ll need to ask yourself is: What do my decision-makers care about? What is leadership looking to achieve? This is the key to defining the right set of metrics that actually move your business in the right direction. Chad recommends distinguishing between your front-end and back-end metrics: the former are readily available, the latter much less so. Client-side metrics, which he calls front-end metrics, measure things like revenue per transaction. All metrics then lead back to revenue, which in and of itself is not necessarily a bad thing, but it means your decisions are driven by revenue growth rather than by proving the scalability or winning impact of a feature.
Chad’s advice is to start with the measurement problems that you have, and from there, build out your experimentation culture, build out the system and lastly choose a platform.
What else can you learn from our conversation with Chad Sanderson?
Different experimentation needs for engineering and marketing
Building a culture of experimentation from top-down
The downside of scaling MVPs
Why marketers are flagbearers of experimentation
About Chad Sanderson
Chad Sanderson is an expert on digital experimentation and analysis at scale. He is a product manager, writer and public speaker, who has given lectures on topics such as advanced experimentation analysis, the statistics of digital experimentation, small-scale experimentation for small businesses and more. He previously worked as senior program manager for Microsoft’s AI platform. Prior to that, Chad worked for Subway’s experimentation team as a personalization manager.
About 1,000 Experiments Club
The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.
One of the pioneers of experimentation shares a humbling reality check: Most ideas will fail (and it’s a good thing)
Few people have accumulated as much experience as Ronny Kohavi when it comes to experimentation. His work at tech giants such as Amazon, Microsoft and Airbnb — just to name a few — has laid the foundation of modern online experimentation.
Before the idea of “build fast, deploy often” took hold across tech companies, developers followed a waterfall model that saw fewer releases (sometimes every 2-3 years). The shortening of development cycles in the early 2000s thanks to the Agile methodology and an uptick in online experimentation created the perfect storm for a software development revolution ― and Ronny was at the center of it all.
AB Tasty’s VP Marketing Marylin Montoya set out to uncover the early days of experimentation with Ronny and why failure is actually a good thing. Here are some of the key takeaways from their conversation.
Progressive deployments as a safety net
A typical cycle of experimentation involves exposing the test to 50% of the population for an average of two weeks before a gradual release. But Ronny suggests coming at it from a different vantage point: Starting with a small audience of just 2% before ramping up to 50%. The slower ramp-up gives you the time to detect any egregious issues or a degradation in metric values in near real time.
In an experiment, we may focus on just two features, but a large set of guardrail metrics tells us we shouldn’t be degrading X, Y, or Z. The statistical data you collect can also reveal that you’re impacting something you didn’t mean to. Hence the use of progressive deployments, in which you can identify external factors and easily roll back your test.
It’s like if you’re cooling water: You may realize you’re changing the temperature, but it’s not until you reach 0ºC (32ºF) that ice forms. You suddenly realize that when you get to a certain point, something very big happens. So, deploying at a safe velocity and monitoring the results can lead to huge improvements.
Your great idea? It will most likely fail.
Nothing gives you a better reality check than experimentation at scale. Everyone thinks they’re doing the best stuff in the world until it’s in the hands of their users. That’s when the real feedback kicks in.
Over two-thirds of ideas actually fail to move the metrics that they were designed to improve — a statistic Ronny shares from his time at Microsoft, where he founded the experimentation platform team of more than 100 data scientists, developers and program managers.
Don’t be deterred, however. In the world of experimentation, failing is a good thing. Fail fast, pivot fast. Being able to realize that the direction you’re going in isn’t as promising as previously thought enables you to use those new findings to enrich your next actions.
At Airbnb, Ronny’s experimentation team deployed a lot of machine learning algorithms to improve search. Out of 250 ideas tested in controlled experiments, only 20 of them proved to have a positive impact on the key metrics — meaning over 90% of ideas failed to move the needle. On the flip side, however, the 20 ideas that did succeed in some form? Those resulted in a 6% improvement in booking conversion, worth hundreds of millions of dollars.
The starter kit to experimentation
It’s easier today to convince leadership to invest in experimentation because there are plenty of successful use cases out there. Ronny’s advice is to start with a team that has iteration capital: the ability to run many experiments knowing that a certain percentage will fail. That ability to try ideas is key.
Pick a scenario where you can easily integrate the experimentation process into the development cycle and then work your way on to more complex scenarios. The value of experimentation is clearer because deployments are happening more often. If you’re working in a team that deploys every six months, there’s not a lot of wiggle room because everyone has already invested their efforts into this idea that the feature cannot fail. Which, as Ronny pointed out earlier, has a low probability of success.
Is experimentation for every company? The short answer is no. A company has to have certain ingredients in order to unlock the value of experimentation. One ingredient you need is being in a domain where it’s easy to make changes, such as website services or software. A second ingredient is you need enough users. Once you have tens of thousands of users, you can start experimenting and doing it at scale. And lastly, make sure you have trustworthy results from which you are taking your decisions.
What else can you learn from our conversation with Ronny Kohavi?
How experimentation becomes central to your product build
Why experimentation is at the root of top tech companies
The role leaders play in evangelizing an experimentation culture
How to build an environment for true experimentation and trustworthy results
About Ronny Kohavi
Ronny Kohavi is an authority in experimentation, having worked on controlled experiments, machine learning, search, personalization and AI for nearly three decades. Ronny previously was vice president and technical fellow at Airbnb. Prior to that, Ronny led the Analysis and Experimentation team at Microsoft’s Cloud and AI group and was the director of data mining and personalization at Amazon. Ronny has also co-authored “Trustworthy Online Controlled Experiments: A Practical Guide to A/B Testing,” currently the #1 best-selling data-mining book on Amazon.
About 1,000 Experiments Club
The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.
Statistical significance is a powerful yet often underutilized digital marketing tool.
A concept that is theoretical and practical in equal measures, you can use statistical significance models to optimize many of your business’s core marketing activities (A/B testing included).
A/B testing is integral to improving the user experience (UX) of a consumer-facing touchpoint (a landing page, checkout process, mobile application, etc.) and increasing its performance while encouraging conversions.
By creating two versions of a particular marketing asset, both with slightly different functions or elements, and analyzing their performance, it’s possible to develop an optimized landing page, email, web app, etc. that yields the best results. This methodology is also referred to as two-sample hypothesis testing.
When it comes to success in A/B testing, statistical significance plays an important role. In this article, we will explore the concept in more detail and consider how statistical significance can enhance the A/B testing process.
But before we do that, let’s look at the meaning of statistical significance.
What is statistical significance and why does it matter?
According to Investopedia, statistical significance is defined as:
“The claim that a result from data generated by testing or experimentation is not likely to occur randomly or by chance but is instead likely to be attributable to a specific cause.”
In that sense, statistical significance gives you the tools to drill down into a specific cause and make informed decisions that are likely to benefit the business. In essence, it’s the opposite of shooting in the dark.
Make informed decisions with testing and experimentation
Calculating statistical significance
To calculate statistical significance accurately, most people use Pearson’s chi-squared test.
Devised by Karl Pearson, the chi-squared test (chi, written χ, is a Greek letter) works by squaring the differences between observed and expected values to measure how far your results deviate from what chance alone would produce.
The methodology applies to count data. For instance, chi-squared is often used to test marketing conversions — a clear-cut scenario in which users either take the desired action or they don’t.
In a digital marketing context, people apply Pearson’s chi-squared method using the following formula:
Statistically significant: p-value (p) < significance threshold (α)
Based on this notion, a test or experiment is considered statistically significant if the probability (p) turns out lower than the appointed threshold (α), also referred to as the alpha. In plainer terms, a test proves statistically significant when there is a low probability that its result happened by chance.
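To see what “low probability that a result happened by chance” means in practice, here is a small simulation sketch (the function and sample numbers are ours for illustration, not from any particular library): it pools the data from both variations, re-splits it at random many times, and counts how often chance alone produces a conversion-rate gap at least as large as the observed one. That frequency estimates p, which is then compared to α.

```python
import random

def simulated_p_value(conv_a, n_a, conv_b, n_b, trials=2_000, seed=42):
    """Estimate p under the null hypothesis (no real difference) by
    pooling both samples, re-splitting them at random, and counting how
    often the simulated rate gap matches or beats the observed one."""
    random.seed(seed)
    observed_gap = abs(conv_b / n_b - conv_a / n_a)
    pooled = [1] * (conv_a + conv_b) + [0] * (n_a + n_b - conv_a - conv_b)
    hits = 0
    for _ in range(trials):
        random.shuffle(pooled)
        gap = abs(sum(pooled[n_a:]) / n_b - sum(pooled[:n_a]) / n_a)
        if gap >= observed_gap:
            hits += 1
    return hits / trials

# Variation A: 200/1000 conversions; variation B: 250/1000 conversions.
alpha = 0.05
p = simulated_p_value(conv_a=200, n_a=1000, conv_b=250, n_b=1000)
print(f"p is approximately {p:.4f}; significant at alpha={alpha}: {p < alpha}")
```

With these invented numbers, random re-splits almost never reproduce a 5-point gap, so the estimated p falls well below the 0.05 threshold and the result counts as statistically significant.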
Statistical significance is important because applying it to your marketing efforts will give you confidence that the adjustments you make to a campaign, website, or application will have a positive impact on engagement, conversion rates, and other key metrics.
Essentially, statistically significant results aren’t based on chance and depend on two primary variables: sample size and effect size.
Statistical significance and digital marketing
At this point, it’s likely that you have a grasp of the role that statistical significance plays in digital marketing.
Without validating your data and giving your discoveries credibility, you risk taking promotional actions that offer very little value or return on investment (ROI), particularly when it comes to A/B testing.
Despite the wealth of data available in the digital age, many marketers are still making decisions based on their gut.
While shooting in the dark may occasionally yield positive results, creating campaigns or assets that resonate with your audience on a meaningful level requires intelligent decisions based on watertight insights.
That said, when conducting tests or experiments based on key elements of your digital marketing activities, taking a methodical approach will ensure that every move you make offers genuine value, and statistical significance will help you do so.
Using statistical significance for A/B testing
Now we move on to A/B testing, or more specifically, how you can use statistical significance techniques to enhance your A/B testing efforts.
Testing uses
Before we consider its practical applications, let’s consider what A/B tests you can run using statistical significance:
Email clicks, open rates, and engagement
Landing page conversion rates
Notification responses
Push notification conversions
Customer reactions and browsing behaviors
Product launch reactions
Website calls to action (CTAs)
The statistical steps
To conduct successful A/B tests using statistical significance (the chi-squared test), you should follow these definitive steps:
1. Set a null hypothesis
A null hypothesis states that there is no real effect to find. For example, your null hypothesis might be that there is no evidence your audience prefers the new checkout journey to the original one. This statement serves as an anchor or benchmark for the test.
2. Create an alternative theory or hypothesis
Once you’ve set your null hypothesis, create the alternative hypothesis you’re looking to prove. In this context, the alternative statement could be: our audience does favor our new checkout journey.
3. Set your testing threshold
With your hypotheses in place, set the percentage threshold (the alpha, or α) that will dictate the validity of your theory. The lower you set α, the stricter the test. If your test covers a broad asset such as an entire landing page, you might set a higher threshold than if you’re analyzing a very specific element such as a CTA button.
For conclusive results, it’s imperative to set your threshold prior to running your A/B test or experiment.
4. Run your A/B test
With your theories and threshold in place, it’s time to run the A/B test. In this example, you would run two versions (A and B) of your checkout journey and document the results.
Here you might compare cart abandonment and conversion rates to see which version performed better. If checkout journey B (the newer version) outperforms the original (version A), your alternative hypothesis looks promising, but you still need to confirm that the difference is statistically significant.
5. Apply the chi-squared method
Armed with your discoveries, you will be able to apply the chi-squared test to determine whether the actual results differ from the expected results.
To help you apply chi-squared calculations to your A/B test results, here’s a video tutorial for your reference:
By applying chi-squared calculations to your results, you will be able to determine if the outcome is statistically significant (if your (p) value is lower than your (a) value), thereby gaining confidence in your decisions, activities, or initiatives.
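As a worked sketch (the sample numbers are invented for illustration), the chi-squared statistic for a 2×2 conversion table can be computed directly; with one degree of freedom, the p-value even has a closed form via the complementary error function:

```python
import math

def chi2_2x2(conv_a, n_a, conv_b, n_b):
    """Pearson chi-squared test on a 2x2 table of conversions vs. non-conversions."""
    observed = [[conv_a, n_a - conv_a], [conv_b, n_b - conv_b]]
    col_totals = [observed[0][j] + observed[1][j] for j in (0, 1)]
    n = n_a + n_b
    chi2 = 0.0
    for i, row_total in enumerate((n_a, n_b)):
        for j in (0, 1):
            expected = row_total * col_totals[j] / n
            # Sum the squared deviations between observed and expected counts.
            chi2 += (observed[i][j] - expected) ** 2 / expected
    # With 1 degree of freedom, the p-value reduces to erfc(sqrt(chi2 / 2)).
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# Version A converted 200 of 1,000 visitors; version B converted 250 of 1,000.
chi2, p = chi2_2x2(conv_a=200, n_a=1000, conv_b=250, n_b=1000)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, significant at 0.05: {p < 0.05}")
```

Since p comes out below the 0.05 threshold for these sample numbers, version B’s higher conversion rate would be judged statistically significant rather than a fluke.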
6. Put theory into action
If you’ve arrived at a statistically significant result, then you should feel confident transforming theory into practice.
In this particular example, if our checkout journey theory shows a statistically significant relationship, then you would make the informed decision to launch the new version (version B) to your entire consumer base or population, rather than certain segments of your audience.
If your results are not labelled as statistically significant, then you would run another A/B test using a bigger sample.
At first, running statistical significance experiments can prove challenging, but there are free online calculation tools that can help to simplify your efforts.
Statistical significance and A/B testing: what to avoid
While it’s important to understand how to apply statistical significance to your A/B tests effectively, knowing what to avoid is equally vital.
Here is a rundown of common A/B testing mistakes to ensure that you run your experiments and calculations successfully:
Unnecessary usage: If your marketing initiatives or activities are low cost or reversible, you needn’t apply statistical significance testing to your A/B tests, as this will ultimately cost you time. If you’re testing something irreversible, or something that requires a definitive answer, then apply chi-squared testing.
Lack of adjustments or comparisons: When applying statistical significance to A/B testing, you should allow for multiple variations or multiple comparisons. Failing to do so will either throw off or narrow your results, rendering them unusable in some instances.
Creating biases: When conducting A/B tests of this type, it’s common to unwittingly introduce biases into your experiments: the kind that fail to consider the population or consumer base as a whole.
To avoid doing this, you must examine your test with a fine-tooth comb before launch to ensure that there aren’t any variables that could push or pull your results in the wrong direction. For example, is your test skewed towards a specific geographical region or narrow user demographic? If so, it might be time to make adjustments.
Statistical significance plays a pivotal role in A/B testing and, if handled correctly, will offer a level of insight that can help catalyze business success across industries.
While you shouldn’t rely on statistical significance alone for insight or validation, it’s certainly a tool that you should have in your digital marketing toolkit.
We hope that this guide has given you all you need to get started with statistical significance. If you have any wisdom to share, please do so by leaving a comment.
Any business selling products or services online has a conversion funnel — but not everyone realizes it. If you’re unsure what a conversion funnel is or how you can refine yours to sell more online, you’re in the right place. In this post, we’re going to take you through everything you need to know about conversion funnels. We’ll start with the basics — what conversion funnels are and the three key stages — before moving on to some of the most effective strategies to improve your funnels and increase sales. Let’s get stuck in!
In this article, we’ll cover:
[toc]
What is a conversion funnel?
A conversion funnel is a process that takes potential customers on a journey towards buying your products or services. They’re the cornerstone of all e-commerce business models, guiding potential customers from the moment they first become aware of your brand to the moment they make a purchase and beyond.
If you’re new to conversion funnels, think about the shape of a funnel — it’s wider at the top and narrower at the bottom. This represents the flow of people through your marketing strategy. Not everyone who becomes aware of your business will go on to become a paying customer. It’s like brewing coffee using a drip filter — a large volume of coffee grounds go into the top of the brewing equipment and then the funnel filters the high-quality stuff out of the bottom into your mug. A sales funnel works in the same way. The goal is to get as many relevant leads into the top of the funnel as possible, filtering out unsuitable prospects to leave your ideal customers ready to buy from you.
When you optimize your conversion funnel, you maximize the impact of your online marketing strategy and boost sales. This isn’t a once-and-done exercise, but something you need to continually refine throughout your business life. Do you want to know how to do it?
What’s the difference between a conversion funnel and a sales funnel?
The terms conversion funnel and sales funnel are often used interchangeably, but are they the same thing? The answer to this question is no, although they are closely related. A sales funnel typically starts when a potential customer enters the sales pipeline. This can happen online (in an e-commerce environment) as well as offline. However, a prospect typically doesn’t enter your sales funnel until they’re already familiar with your brand and your products or services.
It can take a while to get to this point in the online world, particularly if you’re targeting people who have never heard of your brand before. It takes time to build a connection and trust with your audience.
This is where a conversion funnel comes in. Here, the focus isn’t just on making a sale. It’s about making a connection with your audience, generating leads, and then taking those leads on a journey with your company. Potential customers might come into your funnel cold, without much awareness of who you are or what you do. Over time, your funnel will warm them up, build trust in your offer, and get them ready to buy. It encapsulates the whole process — from the first contact through to purchasing.
The three conversion funnel stages
There are many different conversion funnel models out there. All of them broadly suggest the same thing: breaking the process down into several conversion funnel stages that leads must travel through before making a purchase. Although a customer may enter or exit the funnel at any stage, your personalized model sets out how you intend customers to connect with your business.
The exact model will look different for every organization, but here are the three stages we suggest you follow.
Stage 1: Building awareness at the top of the funnel
The top of the funnel is all about making people aware of your brand and capturing leads. This stage is arguably the most crucial. If you don’t get people into your funnel, how are you going to sell to them? This critical step is often referred to as the awareness stage, and the exact strategy you use to do this will depend on your ideal customer. Who are they? Where do they hang out? What are their fundamental problems and challenges? Why would they be interested in what you have to offer them? The answers to these questions can provide useful directions during the awareness stage. Remember: this isn’t about you; it’s about the customer. Here are a few things that should be happening at the top of the funnel.
Content marketing
To grab attention online, you’re going to need content. This content can take many forms, so it’s essential to think about the types of content your audience is most likely to consume. For example, TikTok videos will likely appeal to 18 to 24-year-olds, but they might not be the best option if you’re targeting an older demographic.
You should consider both onsite and offsite content when outlining your content marketing strategy. An effective conversion funnel needs both. Offsite content helps capture attention and attract people to your website. In contrast, onsite content engages your audience and encourages them to take the next step, such as signing up for your mailing list.
Marketing campaigns
Alongside your content marketing strategy, you should also consider the marketing campaigns you will be running to get people to engage with this content. How will you get your content seen? How will you capture users’ attention? Are you only operating online, or will you use offline marketing to generate leads?
Often, e-commerce businesses are quick to dismiss offline marketing campaigns as irrelevant. However, highly targeted offline campaigns can be extremely useful. The online marketplace is crowded! If you can think of innovative ways to reach your audience offline and direct them to your online content, it could turn out to be a cost-effective way to generate leads for your conversion funnel.
You could also consider how you might automate some of your marketing campaigns. Creating evergreen campaigns that can run in the background while you and your employees focus on other tasks is a useful way to maximize profits. In essence, it means you can generate leads for your business while you sleep.
Lead capture
Lead capture is the final step of the awareness stage. It’s where you move your prospects from the top of your conversion funnel to the middle. Once you’ve directed a potential customer to your website and encouraged them to engage with your content, what’s next? Each piece of content your audience engages with on your website should have a call to action — something that tells them what action to complete next.
To achieve this, you might want to consider a lead magnet. This can be something as simple as a discount code. But, for maximum results, you could develop something that helps solve a problem directly related to the product or service you’re offering.
Not only does this ensure you’re capturing highly qualified leads, but it also means people are likely to sign up even when they’re not ready to make a purchase. Given the point of a conversion funnel is to get them ready to buy from you, this is a vital point to consider when outlining your content marketing strategy.
Once you have that email address, it’s time to move on to the second stage of the conversion funnel: nurturing your audience to build desire for your products or services.
To maximize the number of leads you’re capturing, you should focus your stage one activities across a range of digital marketing channels. Here are some of the most popular options:
Social media
Given there are almost 4 billion social media users worldwide (over half the world’s population), it’s no surprise social media marketing is one of the most popular ways to generate leads. That said, it’s important to note it isn’t an easy option! Many business owners expect social media to be a fast and cheap way to grow an audience. Still, it takes time and persistent effort to get results — just like any other marketing strategy.
Work with a professional to develop a social media marketing plan that helps you stand out from the crowd. Many businesses use social media to attract people into their conversion funnel, but few do it well.
Paid search
What’s the first place you turn to when you need information? It’s estimated there are around 2 trillion Google searches every year — so advertising your content on Google could potentially be very lucrative! Unlike social media marketing, people using search engines are actively looking for the information you’re providing. To get the best click-through rate, make sure the phrases you’re targeting are directly relevant to the content. And test campaigns with a small budget before increasing your spending.
Organic search
It’s also a good idea to optimize your content for organic search. While this isn’t a short-term strategy, Search Engine Optimization (SEO) can deliver large volumes of traffic to your website over time. Focus on creating evergreen content — content that doesn’t become irrelevant or outdated and can appear in organic searches for many years to come. When you gain website visitors organically from search engines, you improve your ability to build a list of qualified leads, improving the quality of people entering your conversion funnel.
Stage 2: Nurturing your audience
Many online businesses make the fundamental mistake of pushing for a sale too soon. While you can (and should) always have an option for potential customers to buy from you on their terms, you should design your conversion funnel to nurture your leads, building trust with your brand before moving them into the sales pipeline.
Staying in contact
Once a potential customer has told you they want to hear more from you, it’s essential to stay in touch with them. If you can, you should aim to use multiple channels to do this. Encourage them to follow you on social media, re-target them with relevant online content, and send them regular emails. Research consistently shows the more opportunities a potential customer has to engage with your brand online, the more likely they will buy from you.
In short, it’s not enough to let people know you exist. If you want to sell to them, you need to put in the work to keep them engaged!
Positioning your products and services
As you stay in touch and nurture your audience, you should also ensure each lead is familiar with your products and services. This step isn’t about pushing for the sale — we’ll come back to that in the next stage — but you should introduce your offering in an interesting and engaging way. Essentially, you need your leads to be ready to make a purchase when you deliver your sales pitch. To get to this stage, they need to know what you’re selling.
Building a desire to buy
And finally, throughout the nurturing stage, you should be gearing up your audience to perform the desired action. In most cases, this is completing a purchase. How do you do this? Use emotion.
Humans are emotional beings. Remember earlier when we discussed the problems and challenges your product or service can solve for your customers? What are the emotions behind that problem? Aim to appeal to these emotions when engaging with your audience, and make it clear that you’re here to help them overcome these feelings to foster more positive and desirable emotions. How will your product or service make them feel? Can you impart some of these feelings with your content?
As well as feeling emotion, people have an inbuilt desire to be understood. The more you can show them you understand them, the more they will connect with your brand, and the more desire they will have to do business with you.
Throughout this step, you should be keeping your competitors in mind, especially if you’re operating in a competitive niche. Why should your audience choose you above your competition?
Stage 3: Convert potential customers into paying customers
Stage three is what it’s all about — securing the sale. Without this stage, your business is nothing — without paying customers, you have no profits. But we hope you now appreciate why it’s important to take your audience on a journey through the preceding stages before you attempt to convert them. Once you’ve optimized your funnel, your leads will now be ready to buy from you.
Continue to nurture leads
It’s crucial to remember that you don’t stop nurturing your prospects once you get them to the end of your funnel. This stage should continue as long as your leads — and eventual customers — are in contact with your business.
Work at your potential customer’s pace
It’s also important to remember your potential customers will all travel at their own pace. Some will be ready to make a purchase sooner than others. For this reason, you should think of your conversion funnel as a process. It isn’t about throwing leads in at one end and spitting them out at the other side but about fostering connections that will help your organization thrive over time.
If you attempt to trigger a sale, but your customers aren’t ready, you should continue to engage and nurture them — and try again further down the line. Similarly, if none of your prospects are buying from you at this stage in your conversion funnel, it’s a sign something needs tweaking — we’ll get back to this in a little while.
Trigger a sale
Now it’s time to encourage your leads to become paying customers, but how should you do it? As always, there are many options here. Finding the right approach will likely involve some trial-and-error. It’s a good idea to test out a few sales tactics and see what works. For some, a simple email or retargeting campaign on social media might do the trick. But for other businesses, you might need to come up with something more personal or creative.
What makes a good call-to-action?
Calls-to-action are the lifeblood of any effective conversion funnel. But how can you make sure yours are effective? Here are some tips to get you started.
Be clear and concise
Your call-to-action shouldn’t be too wordy. Be direct: use short sentences and tell your audience exactly what you want them to do. Use verbs like “buy,” “shop,” or “download.” Telling someone to “shop the new collection” is likely to result in more sales than something like “our new collection is now live on our website.”
Ask yourself why
As you develop your call-to-action, put yourself in your potential customer’s shoes. Why should they do what you’re asking them to? This is where the copy in the rest of your sales pitch comes in. The call-to-action is the final piece of the puzzle. By the time your lead gets to this part of your content, they should already be ready to hit that button. Make it a no-brainer for them.
The role of the shopping cart
The shopping cart on your website can be one of your biggest assets for driving sales. Did you know you can follow up on abandoned carts with your email subscribers? If not, you’re missing out on one of the most effective conversion tools available to e-commerce businesses. Research suggests around 70% of all shopping carts are abandoned online. Think about it: these are leads that have been through the conversion funnel and are almost ready to make a purchase. What is it that stopped them? It might have been something as simple as an interruption. Get back in touch and ask them if they’re ready to complete their purchase. The results may surprise you.
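To see why abandoned carts are worth chasing, here’s a quick back-of-the-envelope estimate in Python. The ~70% abandonment rate comes from the research cited above; the cart volume, recovery rate, and order value are purely hypothetical figures for illustration:

```python
def recoverable_revenue(carts_started, abandonment_rate, recovery_rate, avg_order_value):
    """Estimate the revenue you could win back by following up on abandoned carts."""
    abandoned = carts_started * abandonment_rate     # carts left behind
    recovered = abandoned * recovery_rate            # carts rescued by follow-up emails
    return recovered * avg_order_value               # revenue from rescued carts

# Hypothetical store: 1,000 carts/month, 70% abandoned,
# 10% recovered by email, $80 average order value.
print(recoverable_revenue(1000, 0.70, 0.10, 80))  # 5600.0
```

Even a modest 10% recovery rate turns into thousands of dollars a month in this sketch — which is why abandoned-cart emails are such a popular conversion tool.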
Evaluating your funnel with conversion funnel metrics
As we mentioned at the start of this post, a conversion funnel isn’t something you can create and then forget about. It’s an ongoing, iterative process that you must refine over time. The digital marketing world is dynamic and ever-changing — and your conversion funnel will need to evolve alongside industry trends and technological advances. Evaluating your funnel is an essential part of this, enabling you to improve each stage of the process to generate more qualified leads and convert more of them into paying customers.
Your first step should be to set up Google Analytics to track your conversion funnel. When you do this, you can track a lead from the moment they join your funnel until they make a purchase. This gives you an overview of how well your funnel is performing, as well as helping you access some of the key conversion funnel metrics that help you decide what to focus on next, such as:
Cost per acquisition (CPA)
Marketing costs money, and the expenses associated with your conversion funnel can quickly mount up. It’s vital to understand the benefit these investments bring. What is the return on investment (ROI) associated with your conversion funnel? To find out, calculate your cost per acquisition: divide the costs associated with your conversion funnel by the number of paying customers the funnel generated in the same period. For example, if you invested $500 and generated 10 paying customers, your CPA would be $50.
You can then compare this with the average customer spend to figure out whether your conversion funnel is profitable. Using the example above, if the average customer spends $200, your funnel is profitable. On the other hand, if the average lifetime spend is only $20, the funnel is operating at a loss.
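The arithmetic above can be sketched in a few lines of Python (the figures are the illustrative ones from the example, not real benchmarks):

```python
def cost_per_acquisition(total_spend, new_customers):
    """CPA = marketing spend / paying customers acquired in the same period."""
    return total_spend / new_customers

# Example from above: $500 invested, 10 paying customers.
cpa = cost_per_acquisition(500, 10)
print(cpa)  # 50.0

# The funnel is profitable when average customer spend exceeds the CPA.
average_spend = 200
print(average_spend > cpa)  # True: each $50 acquisition returns $200
```

Tracking CPA over time, rather than as a one-off number, is what tells you whether changes to your funnel are actually paying off.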
Conversion rate
Google Analytics calculates your funnel’s conversion rate by working out how many of the visitors went to the goal page (e.g., “thank you for your purchase”) as well as one of the pages associated with the earlier stages of your conversion funnel. This provides you with useful insight into how well your funnel is working over time, which can help you evaluate any changes that you make to optimize the funnel.
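The calculation itself is simple enough to sketch by hand — here as a small Python function with hypothetical visitor counts (this is an illustration of the ratio, not a call to any Google Analytics API):

```python
def funnel_conversion_rate(entered_funnel, reached_goal):
    """Share of visitors who entered the funnel and reached the goal page."""
    if entered_funnel == 0:
        return 0.0
    return reached_goal / entered_funnel

# Hypothetical month: 2,000 visitors hit an early funnel page,
# 60 of them reached the "thank you for your purchase" goal page.
rate = funnel_conversion_rate(2000, 60)
print(f"{rate:.1%}")  # 3.0%
```

Comparing this rate before and after each change to your funnel is the most direct way to judge whether an optimization worked.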
Are you ready to optimize your funnel?
In summary, conversion funnels are an essential asset to all e-commerce businesses. If you want to improve sales, optimizing your funnel is often the best place to start. What steps will you take after reading this post?