Jeremy Epperson explains why startups should leverage conversion rate optimization to maximize growth.
Jeremy Epperson is about to change the way you approach growth in your business. The chief growth officer at ConversionAdvocates, a top-ranking CRO agency specializing in data analysis, takes a data-driven approach to identifying the roadblocks in testing and optimizing those processes for maximum effectiveness.
Over the past decade, he has launched CRO programs for 150+ growth-stage startups, creating a repeatable proven process for conversion rate optimization that can be implemented across different verticals and business sizes. By collating the insights gained from the different businesses, notably the common mistakes, Jeremy has gathered the expertise to facilitate CRO programs and avoid the steep learning curve that comes with launches.
In his conversation with AB Tasty’s VP Marketing Marylin Montoya, Jeremy delves into the granular level of data analysis and takes on topics that most people in CRO steer clear of.
Focus on customer experience optimization to catapult business growth
In today’s digital landscape, the old-school ideology of branding and push marketing is no longer an effective strategy. These days, customers have easy access to online reviews, forums and price comparison websites to inform their purchasing decisions.
Rather than trying to control the customer journey, Jeremy recommends optimizing the experience of each of its four phases, using a data-driven, scientific-testing approach. This leads to the creation of different processes and reshapes the idea of optimization: the game-changer is that agility (allowing companies to move, learn and improve faster) can trump exorbitant budgets, letting smaller companies take market share from giants.
Passionate about being involved with teams on the ground level to “iteratively work through the entire process,” Jeremy touts CRO as the best mechanism and catalyst for growth, which challenges teams to rethink and rebuild processes and workflows, break down silos and build communication. Jeremy says this team-building aspect is more valuable from a CRO perspective than any individual winning test.
All data is equal: the value of wins, losses and flat tests in post-test analysis
When it comes to testing, certain results are deemed more “sexy” by marketers, and others are often swept under the carpet. However, Jeremy explains the utility of all test results, be that a win, a loss or a flat result, for informing how testing should evolve.
A string of inconclusive tests means that the testing has not been focused on what is actually blocking the conversion. “If we’re not targeted in on the things that are blocking them (users) from converting then we’re not going to see big movement in the conversion rates, so that’s really important,” says Jeremy.
When test results show big changes in the conversion rate, positive or negative, this indicates that an important part of the customer experience has been impacted. While winning tests are celebrated and losing tests shied away from, Jeremy advises that in both cases, the next step should be to double down on test variations to fully resolve the problem, creating at least three variations for each of those hypotheses.
Understand your customer and remove their purchasing roadblocks
Oftentimes, marketers, especially in smaller businesses, are reluctant to spend their budget on research and insights, opting for customer acquisition strategies involving ads and content. However, according to Jeremy, investing in research to better understand the customer can bring us closer to answering one question that’s key to creating the right growth strategy for your business: Why does your customer buy or not buy your product?
Research and testing can offer 360-degree insights into customer behavior, such as buying criteria, decision-making and the buying process, in order to remove any conversion roadblocks. The fix can be as simple as creating an FAQ page to clarify primary questions; in one example Jeremy cites, that alone produced a 23% lift in lead conversion.
Jeremy explains that businesses will naturally experience growth when they focus on offering a better customer experience, eliminating customer frustrations and roadblocks, which would otherwise cause them to abandon their purchase. This customer-centric mindset will actually have a direct positive impact on revenue and growth.
What else can you learn from our conversation with Jeremy Epperson?
How to combine research and testing in CRO to double the average validated win rate
How to encourage teams to embrace the CRO process and cooperate across verticals
Why customer personas fall short and what to replace them with
How to implement CRO for the first time
About Jeremy Epperson
Jeremy Epperson, chief growth officer at ConversionAdvocates, has worked in the field of startup growth and conversion rate optimization (CRO) for 14 years, as a consultant in his own businesses as well as part of digital agencies. Jeremy is passionate about researching, building and implementing processes to generate growth and has launched CRO processes within more than 155 growth-stage startups. He also specializes in customer journey mapping, CRO maturity assessments and marketing and customer research.
About 1,000 Experiments Club
The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.
If you’re looking to get started on building an application, you may be wondering whether to design it as a monolith or build it as a collection of microservices. In fact, this has been a long-standing point of debate for many years among application architects.
So what is the difference between these two architectures and how do you decide which one to choose and which one is best for your organization?
While monolithic architectures have been used for many years, microservices seem to be taking over as they become a key driver of digital transformation.
Indeed, in a world where speed and agility matter more than ever, the more versatile microservices approach, which produces applications that are quicker to create and deploy, may be the go-to strategy for staying competitive and continuously delivering software without delay.
In this post, we will investigate these questions by comparing monolithic and microservices application architectures to help you decide. And since moving to microservices can be a risky endeavor, we will also explain how feature flags may help reduce some of that risk.
Monolithic architecture
Before we move on to the migration process, we will quickly go through the definitions of these architectures and why one may take precedence over the other.
By definition, a monolith refers to a “large block of stone”. In the same way, a monolithic application is an application made up of one piece or block built as a single indivisible unit.
In that sense, in a typical monolithic application, code lives in one single, tightly knit codebase and data is stored in a single database.
Although this type of application is considered to be the common and traditional method to build applications, it may cause some major problems and over time may become unmanageable.
The image below illustrates the makings of this architecture, which consists of a client-side user interface, a server-side application and a database. These all function as a single unit, so any change is made in that one codebase and requires an update of the entire application.
Below, we will list some of the difficulties and drawbacks associated with this architecture, which prompts many to move to microservices.
Drawbacks of monolithic applications
Less scalability – components cannot be scaled independently; the whole application must be scaled instead, and every monolith eventually hits scalability limits.
Reliability issues – because the components of a monolithic application are interdependent, even a minor issue can lead to the breakdown of the entire application.
Tight coupling – the components are tightly coupled inside a single executable, meaning changes are harder to implement. Every code change affects the whole system, which can significantly slow down the development process.
Less flexibility – a monolithic application ties you to a single technology, as integrating any new technology would mean rewriting the entire application, which is costly and time-consuming.
Complexity – as a monolithic application scales up, its tightly connected structure becomes harder to understand and modify, until the system of code within the application may become too complex to manage.
Despite these drawbacks, monoliths do offer some advantages. Firstly, monolithic applications are simple to build, test and deploy. All the source code is located in one place and can be quickly understood.
This offers an added advantage when it comes to debugging: with all the code in one place, issues can be easily identified and fixed.
As already mentioned, the monolithic approach has been in existence for a long time, and because it is such a common method for developing apps, most engineering and development teams already have the knowledge and skills to create a monolithic program.
Nonetheless, the many disadvantages of monolithic architecture have led many businesses to shift to microservices.
Microservices architecture
Unlike a monolithic architecture, a microservices architecture divides an application into smaller, independent units, breaking the app down into its core functions; each function is called a service.
Every application process is handled by these units as a separate service and each service is self-contained; this means that in the event that a service fails, it won’t impact the other services.
In other words, the application is developed as a collection of services, where each service has its own logic and database and the ability to execute specialized functions. The following image depicts how this architecture works:
You can look at each microservice as a way to break down an application into pieces or units that are easier to manage. In the words of Martin Fowler:
“In short, the microservice architectural style [1] is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.”
In other words, microservices architecture is a way to design software applications as suites of independently deployable services that communicate with one another through specific methods, i.e by using well-defined APIs.
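To make this concrete, here is a minimal sketch of the idea in Python, using Flask; the service names, ports and routes are hypothetical rather than taken from any particular system. An orders service owns its own logic and data, and reaches the users service only through that service's public HTTP API:

```python
# orders_service.py - a minimal sketch of one microservice (hypothetical names).
# It assumes a separate users service runs on port 5001 and exposes
# GET /users/<id>; the orders service never touches that service's database.
from flask import Flask, jsonify
import requests

app = Flask(__name__)
USERS_SERVICE_URL = "http://localhost:5001"  # the users service's public API

@app.route("/orders/<int:order_id>")
def get_order(order_id):
    # Cross-service communication happens over a well-defined HTTP API.
    user = requests.get(f"{USERS_SERVICE_URL}/users/1", timeout=2).json()
    return jsonify({"order_id": order_id, "customer": user.get("name")})

if __name__ == "__main__":
    app.run(port=5000)  # deployed, scaled and updated independently
```

Each such service can be rebuilt and redeployed on its own, which is exactly the property the next section turns to.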
Microservices: The answer to accelerated application development and time to market?
More distributed systems architectures such as microservices are increasingly replacing the more traditional monolithic architecture. One of the main reasons is that systems designed with microservices architecture are easier to modify and scale.
Due to its distributed nature, developers can develop multiple microservices simultaneously.
Since services can be deployed independently, each service is a separate codebase that can be managed by a small development team, as can be seen in the image below, which illustrates the major differences between these two architectures:
This results in shortened development cycles so releases are ready for market faster.
Microservices, as a result, are used to speed up the application development process as this type of architecture enables the rapid delivery of large, complex applications on a frequent basis.
Moreover, since these services are deployed independently, a team can update an existing service without redeploying the entire application, unlike with a monolithic architecture. This makes continuous deployment possible.
This also makes these types of applications less risky to work with than monolithic applications. Risk mitigation, then, is one of the key drivers for adoption of microservices.
It is also easier to add new changes or functionality to a microservices application than to a monolithic program, which makes updating the program more straightforward and less troublesome.
With monolithic applications, even the most minor modifications require redeployment of the entire system and so feature releases could be delayed and any bugs require a significant amount of time to be fixed.
Thus, microservices fit within an Agile workflow, as this approach makes it easier to fix bugs and manage feature releases: you can update a service without redeploying the entire application and roll back if something goes wrong.
A microservices architecture also addresses the scalability limitations that come with a monolith: because the application is composed of smaller, autonomous parts, each element can be scaled independently, making scaling more cost- and time-efficient.
Finally, each service can be written in a different language without affecting the other services. Developers are unrestricted in the technology they choose, so they can use a variety of technologies and frameworks instead of a standardized, one-size-fits-all approach.
To sum up the differences…
The table below summarizes some of the major differences between the two architectures:
| | Monolithic | Microservices |
| --- | --- | --- |
| Deployment | Simple deployment of the entire system | More complex; independent services need to be deployed independently |
| Scalability | Harder to scale; the whole system needs to be redeployed | Each element can be scaled independently without downtime |
| Testing | Easier to test: end-to-end testing of a single unit | Harder to test; each component needs to be tested individually |
| Flexibility | Limited to a single technology | Freedom of choice of tech stack |
| Security | Communication within a single unit, so security is handled in one place | A large system of standalone services communicating via network protocols raises security concerns |
| Adoption | The traditional way to build applications, so easier to implement and develop with existing skills | Specialized skills are required |
| Resiliency | Single point of failure: any issue can cause a breakdown of the entire application | A failure in one microservice doesn't affect the other services |
Tread carefully with microservices
In sum, a microservices architecture offers many advantages. Nonetheless, this type of architecture may not suit every company, so each organization needs to make a proper evaluation to choose the best approach, depending on factors such as the type of product or audience.
Before moving on to the migration process, then, it is important to proceed carefully, as a microservices architecture is not without its drawbacks.
Some of the drawbacks of microservices include:
We’ve already mentioned that monolithic architectures have been used for so long that most engineering teams have the knowledge and experience to create a monolithic program. Building a microservices application without the necessary skills, by contrast, is a risky endeavor: a microservices architecture is a distributed system, and you would need to configure all the modules and database connections.
Just as a monolithic application grows complex over time, the standalone services that make up a microservices application can also introduce high development and operational complexity.
Because this architecture is a distributed system with a large number of deployable parts, testing, including integration and end-to-end testing, is more difficult than for a monolithic app, whose single unit makes end-to-end tests easy to run.
Debugging and deploying these many independently deployable components are also much more complex processes. (On the upside, should any individual microservice become unavailable, the entire application will not be disrupted.)
In the end, transitioning to a microservices architecture will ultimately depend on the pain point you’re trying to solve.
You’ve got to ask yourself whether your current (monolithic) architecture is giving you trouble and whether actually migrating to microservices will help solve your issues.
Make the transition less risky: Feature flags and microservices
With the above in mind, DevOps teams may still want to make the transition from a monolithic to a microservices architecture because of its compatibility with Agile development workflows, which come with lower risks and fewer errors.
During this process, teams may be tempted to replace the old code and roll out the new code all at once, which could be very risky.
Migration to a microservices-based ecosystem can therefore turn out to be a challenging and time-consuming process, especially for businesses with large, complex monolithic systems.
This is where feature flags come into play.
Feature flags are a great asset when it comes to releases, and not only front-end releases: they are just as useful for your architectural strategy.
Feature flags give you greater control over the release process: by separating deployment from release, you choose when, and to whom, you release products and features.
Thus, you can turn features on or off for certain users by simply wrapping them up in a feature flag without redeploying, lessening the risk associated with the release process.
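As a simple illustration, here is a minimal sketch of that gating logic in Python. The flag name, user IDs and homepage example are hypothetical, and in a real setup the flag state would come from a feature flag management service rather than a hard-coded dictionary:

```python
# A minimal sketch of gating a deployed-but-unreleased feature behind a flag.
# In production, flag state would come from a flag management service;
# here it is hard-coded for illustration.
FLAGS = {"new-recommendations": {"enabled": True, "allowed_users": {"u1", "u7"}}}

def feature_enabled(flag_name: str, user_id: str) -> bool:
    flag = FLAGS.get(flag_name, {})
    return flag.get("enabled", False) and user_id in flag.get("allowed_users", set())

def homepage(user_id: str) -> str:
    # The new code is deployed for everyone, but released only to flagged users.
    if feature_enabled("new-recommendations", user_id):
        return f"new homepage for {user_id}"
    return f"classic homepage for {user_id}"

print(homepage("u1"))  # sees the new experience
print(homepage("u2"))  # still sees the classic one
```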
Just as feature flags enable progressive delivery of features instead of a big bang release, the same idea applies when it comes to migrating to services: it’s best to do it one piece at a time instead of all at once.
The main idea is to slowly replace functionality in the system with microservices to minimize the impact of the migration.
You would essentially be making small deployments of your microservices by deciding who sees the new service instead of going ahead with a big bang migration.
Precede this by analyzing your current system to identify what you can start to migrate. Pick a functionality within your customer journey to migrate first, gradually direct traffic to it via feature flags and away from your monolith, and then slowly retire the old code.
There are other ways to go about the migration process, which often involve rolling out the new code all at once, but feature flags lessen the risk usually associated with microservices releases through progressive rollout instead.
Split your monolith into microservices using feature flags
The key is to move from monolith to microservices incrementally. Think of it as untangling a knot that's been tightly woven together, with feature flags as the tools that help you gradually unravel it.
Start by identifying a functionality within your monolith to migrate to a microservice. It could be a core functionality or, preferably, an edge functionality, such as the code that sends coupon or welcome emails to users on an e-commerce platform.
Proceed by building a microservice version of this functionality. The code that controls the functionality within the monolith will need to be diverted to where the new functionality lives, i.e within the microservice.
Then, wrap a feature flag around this microservice, with traffic initially still going to the old version. Once the flag is turned on, the microservice code takes over and you can direct traffic to the new version to test it.
Note that you should keep the existing functionality in place in the monolith during the transition, so you can alternate between the two implementations of this functionality: the one in the monolith and the one in the new microservice.
If anything goes wrong, you will be able to revert traffic back to the monolith with the original functionality. Hence, you can switch between the two functionalities until you’re satisfied that the microservice is working properly.
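A sketch of that switch point might look like the following; the flag, the service URL and the welcome email example are hypothetical, and the monolith's original code path is kept as the fallback:

```python
# A minimal sketch of routing one functionality between the monolith's
# original code and a new microservice, controlled by a feature flag.
import requests

def send_welcome_email_monolith(user_id: str) -> None:
    """The original, battle-tested code path inside the monolith."""
    print(f"monolith: welcome email sent to {user_id}")

def send_welcome_email(user_id: str, flag_enabled: bool) -> None:
    if flag_enabled:
        try:
            # Flag on: divert the call to the new microservice.
            requests.post("http://emails-service/welcome",
                          json={"user_id": user_id}, timeout=2)
            return
        except requests.RequestException:
            pass  # if the new service misbehaves, fall back to the monolith
    send_welcome_email_monolith(user_id)  # flag off (or fallback): old path
```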
Using a dedicated feature flag management tool, you can test the microservice to ensure everything is working as expected. Feature flags allow you to target certain users through percentage rollouts (similar to a canary deployment), IP addresses or whatever other user attributes you set.
If no issues come up, then you can turn the flag on for more users and continue to monitor the microservice to ensure that nothing goes wrong as you increase the traffic to it.
Should anything go wrong, you can roll back by turning the flag off (using it as a kill switch). Once the microservice has proven itself, you can delete the old application code.
Make sure you remove the flag once you no longer need it to avoid the accumulation of technical debt.
Then, you will repeat this process with each functionality and validate them with your target users using your feature flag management tool.
Remember, the whole point is to create these microservices progressively to ensure things go smoothly and with feature flags, you further decrease the risk of the migration process.
The strangler fig pattern
This pattern takes its name from a kind of plant: in a similar way to the strangler fig, which grows around a host tree, the pattern describes wrapping an old system with a new one, the microservice architecture, using an HTTP proxy to divert calls from the old monolith functionality to the new microservice.
This would allow the new system to gradually take over more features from the old system, as can be seen in the image below, where the monolith is ‘strangled’:
In this scenario, a feature flag can be applied to the proxy layer to be able to switch between implementations.
Conclusion
Monoliths aren’t all bad. They’re great when you’re just getting started with a simple application and have a small team; the only issue comes from their inability to support your growing business needs.
On the other hand, microservices are a good fit for more complex and evolving applications that need to be delivered rapidly and frequently and particularly when your existing architecture has become too difficult to manage.
There is no one-size fits all approach. It will eventually depend on the unique needs of your company and the capabilities of your team.
Should you decide to take the plunge and shift to microservices architecture, make sure that you have a feature management tool where you can track the flags in your system and how your features are performing.
AB Tasty’s server-side functionality is one such tool: it allows you to roll out new features to subsets of users and comes with an automatically triggered rollback in case something goes wrong during the migration process.
The most important takeaway is to carefully consider whether you really need to migrate and if so, why. You must evaluate your options and think about the kind of outcome you’re hoping to achieve and whether a microservices architecture provides the right path to this outcome.
Once upon a time, driving digital customer experience optimization (EXO) meant having a competitive edge. You went the extra mile, you won. Nowadays, everyone is focused on EXO to the point where it’s the minimum necessary to stay in the game.
“Experience” encompasses the entire user journey across all touchpoints that a consumer encounters when interacting with your brand. Be it website, app, tablet, mobile, bot-generated or in-store, the quality of these interactions will impact your customers’ purchasing decisions and their loyalty.
Customer experience optimization can greatly influence buyers’ purchasing decisions and loyalty
Deliver solid experiences and you will shape your brand reputation and increase your conversion rates – the key is to never stop moving. Remain stagnant and you’ll be overtaken; but if you can figure out what your customers want, find the overlap between what they’re looking for and what you can offer, and then evolve your interactions on an ongoing basis, you can deliver superior experiences and business success.
Here at AB Tasty, we believe that optimization is the bare minimum you should be delivering. To stay competitive and stay ahead, the work should never stop. Establishing a continuous feedback loop through experimentation, data gathering and analysis is what it takes to maximize customer experience and keep your competitive edge.
In this article, we’ll cover why customer experience optimization matters, the three key ingredients to supercharge it, and how it applies across different sectors.
Why is customer experience optimization so relevant?
At base, no matter the product or sales channel, every business tries to satisfy its customers. Customer centricity has been around longer than we might think, but customer experience optimization really started to take flight as technology advanced and brand touchpoints and interactions diversified.
Throw in the fact that data is more readily available, collectible and collected, and suddenly the means to understand your customers better than they understand themselves is out there for the taking.
Use the data you collect to take your customer experience to the next level
Not convinced that it really matters? Think again. PwC’s Future of CX report found that one in three consumers will walk away from a brand after just one negative experience. Furthermore, 73% of consumers nominate their experience in brand interactions as an important factor in making purchasing decisions.
Is customer experience optimization truly essential?
Think about your own experiences when shopping online. How does it feel? Which brands do you gravitate towards and which ones just don’t seem to tickle your fancy? Do they see you as an individual, a real person, or are you just another transaction to them? It only takes a moment’s pause to consider your own experiences to understand why optimizing customer experiences is not just important, but essential.
As consumers, we make decisions about where to shop, which products to buy and which ones to keep buying based on our past experience of acquiring and consuming them. What’s more: the aforementioned Future of CX report from PwC found that customers are more likely to try additional products and services from a brand they trust, and that they’re even willing to pay more, too – up to 16% more depending on the product category. It’s also less expensive to encourage repeat business (customer loyalty) than to acquire new customers, so leveraging customer experience optimization to drive long-term brand affinity and customer lifetime value will pay for itself.
The three key ingredients to supercharge your customer experience optimization
1. Optimize your customer journey
When a customer arrives on your site – whether they’re searching for products, comparing different options or just looking to learn more about your products – there are a number of steps they’ll go through to achieve their end goal. All of these add up to a path taken through your website, one that presents both opportunities and pitfalls when it comes to optimizing your site and meeting your customers’ needs. The more you understand your user journey and implement improvements while removing friction along the purchase funnel, the better your site will perform.
Gathering data about your customers’ behavior and preferences will give you the information you need to run experiments to discern the optimal setup using A/B testing. Not sure if your CTAs have the best wording? Test them! Trying to understand the best configuration for your landing page? Run an experiment! Have doubts about whether product images should be cropped or full body? We can examine that too!
Ultimately, you’re aiming to ensure that all roads lead to an increase in conversions – and driving UX optimization on an ever-changing customer pathway is necessary to keep you ahead of the game.
Continuously optimizing your user experience is essential for staying ahead of the curve
2. Improve your personalization efforts
Know your customers and tailor to their needs!
Tailoring a digital brand interaction to the unique needs of the person behind the screen builds customer loyalty and drives repeat business. In the experience economy, you’re selling not just your product but also the interaction with your brand and the purchase experience itself. The user experience of acquiring and consuming the product is just as important as the utility it delivers. Accordingly, personalizing these digital exchanges with your consumers is key to long-term customer retention.
Building a solid data foundation allows you to understand your customers on a personal level, identify their needs and deliver personalized experiences that keep your shoppers returning again and again. After all, personalization is about getting to the root of what customers have shown you that they want and delivering against that.
Use the data you gather to tailor each user’s experience on your site
As with your customer journey, responding to ever-changing desires can be challenging, so knowing your customers intimately is crucial for personalization success. Get it right and the impact is high, so leave no stone unturned when exploring improvement opportunities.
3. Implement server-side testing and feature management
Bring in the tech teams to expand your optimization activities!
Server-side testing is where we bring in the heavy hitters. While A/B testing can be rapidly implemented by marketing teams, server-side experimentation requires the buy-in and expertise of tech teams and developers.
Collaboration between the two groups is essential to deliver seamless customer experiences where the front-end (client-side) lures in your customers and the back-end (server-side) runs smoothly to ensure an effortless shopping experience. For instance, presenting a promotional offer (front-end) will only deliver results if the payment gateway runs glitch-free and the page loading times are fast (back-end).
Lukas Vermeer, director of experimentation at Vista, champions the value of testing both sides. “A lot of the value from experimentation…comes from two things: One is not shipping the bad stuff – a huge value point for experimentation. The other amount of value [comes from] figuring out strategically, going forward, what you should invest in.”
Lukas Vermeer, a guest speaker on the “1,000 Experiments Club” podcast, champions the value of testing both sides
If your business has reached a certain level of maturity and sophistication, maximizing both client- and server-side testing will ensure that your optimization approaches are working the hardest they possibly can to deliver improved business outcomes.
How can customer experience optimization apply to different sectors?
E-commerce
Delivering digital customer experience optimization through experimentation can drive transactions, increase conversion rates and optimize user experience as you test your site in an effort to offer a smoother purchasing experience that caters to your users’ every need.
B2B
Not every website is for purchasing then and there; sometimes site visits are an initial step on a longer journey. You can drive lead generation for purchases in areas like automotive, bedroom furniture or holiday rentals by optimizing site layout, CTAs, and access to product and store information.
Travel
Offering a range of solutions, from individual products (like hotel or transport bookings) right up to comprehensive packages that take care of every step of a holiday, is a particularity of the travel industry. When bundling items together into packages, finding that pricing sweet spot is especially key. Server-side testing is particularly relevant in this field and can give you the tools to both curate your product offering and increase bookings as well.
Conclusion
When it comes to digital customer experience optimization, improving continuously is essential to your strategy; here at AB Tasty, we can’t stress that enough!
With both technology and customer attitudes evolving every second, the only way to keep the pace is by continuously adapting your company’s own optimization practices to respond to customer demands and unlock increased value and continuing loyalty.
Living and breathing such an approach means setting up your marketing, product and technical teams for smooth cross-collaboration and a shared mission and objectives. Ensuring that they’re also sharing the same experimentation and development roadmap to unlock resources and roll out improvements at the right time will keep your business on the road to success.
For the fourth installment in our series on a data-driven approach to customer-centric marketing, we got together with Filip von Reiche, CTO of Integrated Customer Experiences at Wunderman Thompson, and Gaetan Philippot, Data Scientist at AB Tasty. We discussed the pros and cons of vanity metrics, how they’re different from actionable metrics, and the roles all types of metrics play when measuring a brand’s digital impact.
Let’s begin with digital transformation. What is it, and why have companies been so focused on it over the past few years?
Digital transformation, as defined by Salesforce, is the process of using digital technologies to create new – or modify existing – business processes, culture, and customer experiences to meet changing business and market requirements. It began in the late 20th century and underwent rapid acceleration in the first two decades of the 21st century, spreading across almost all industries.
Resisting digital transformation is risky. TechTarget tells the fateful story of Blockbuster LLC, a once-global entity with video rental stores throughout the US and the world. But its presence and relevance precipitously declined from about 2005, as Netflix harnessed emerging technologies and capitalized on consumer appetite for on-demand entertainment delivered by the then newly-available streaming services.
But digital transformation can also be seen as a buzzword, says Filip, “in the sense that people think it’s something they need to do. The original impetus behind digital transformation was that brands were trying to be more competitive – in how they grew their market share, how they were perceived, and so on. And digital transformation was the engine that enabled them to achieve these things, to react faster, and to be able to measure their impact.
“Initially, it was focused on giving brands an online presence, and of course, it has achieved that, but over time, it has acquired new uses. Its latest purpose is to help brands create personalized experiences by providing them with the right content and flow which allows them to have better conversations with their customers, and that leads to more conversions.”
For Gaetan, “Part of it is imitative: people say ‘Amazon is doing a thousand experiments a year, so we have to do the same,’ but not everyone has the vast resources of Amazon, or can hope for the same results.”
But if the objective is to have personalized brand experiences, Amazon isn’t a website where people want to spend much time. “On the contrary, people go to Amazon because they can get in, buy what they want, and get out fast. It’s totally impersonal,” explains Filip. “However, the reason I spend more time with a brand is because I want a specific product or service they offer, and I expect personalization from brands I’m engaged with.”
For personalization to be successful, there must be constant validation of your perceptions before going live with any website or campaign. “More than half of all campaigns that customers perform using AB Tasty have to do with personalization or experimenting with personalization,” remarks Gaetan. “They’re the foundation on which everything else is built.”
What are the differences between vanity metrics and actionable metrics?
The use of vanity metrics varies across different verticals at different levels and from client to client. The one constant is that vanity metrics are very alluring because they provide what Filip calls “A dopamine rush that lights up your brain – and in some cases, depending on what you’re trying to achieve with your personalization, that ‘rush’ might be sufficient. But ideally, you want to know what the long-range impact will be.”
The problem is that the impact is not always easily attainable. “Let’s take real estate as an example. It’s unfortunately not as simple as the target sees a personalized message, the target clicks, the target purchases a house. Wouldn’t that be great? But in reality, the lapse of time between that initial personalization and the purchase might be 30, 60, 90 days, or even longer. In some cases, you do need a vanity metric such as page likes, favorites, shares, etc., as an indicator to tell you where things are going, but it’s always better to have a conversion metric in the background to tell you what it all really means,” explains Filip.
“This is where more in-depth analytics come into play. If you have a customer who is engaged but not converting, you need to find out what the barrier is and find a way around it. If you can propose a solution using personalization that meets the consumer’s needs and knocks down that barrier, great. But you always have to respect the trust the consumer has placed in you by giving you the data you need for personalization. You can’t just pop out and say, ‘Hi! We see you’re looking at our website!’ That’s creepy. But you can indicate that you, as a brand, are present and listening to your consumers’ needs. It’s a delicate balance.”
Can vanity metrics be transformed into actionable metrics?
It should be emphasized that the use of a “superficial” or vanity metric is always justified when there is a notable response, whether positive or negative, because it may prompt a company to want to dig deeper and analyze further; to do so, they turn to actionable metrics for answers.
Gaetan remarks, “But it’s important to remember that not everything is actionable immediately: sometimes the payoff will be further along. The value of each type of metric varies according to industry and also according to client maturity. For example, e-commerce clients that are just starting out will test all sorts of things before they learn which key metrics are the most useful and offer the best results for their businesses.”
“The entire metric discussion needs to begin as soon as you devise your personalization or testing strategy,” says Filip. “You’ll have a goal in mind: to achieve a certain type of awareness or engagement, or a certain number of conversions, etc. Everything you test that you want to use as a measure of success must align with that goal. If a vanity metric can support that goal, then it’s sufficient. If the final conversion is needed to prove the point, then we need to figure out how to get it. Sometimes that can be more complicated and involve offline integrations, but that’s usually how it works.”
What questions should companies ask to find the right metrics to track?
For Filip, a vital question concerns the scope of the project you’re undertaking. Are you measuring an entire campaign or are you breaking it down into individual parts? A high-level scope is easier to measure, meaning fewer metrics are needed, generally speaking. A detailed scope is more complex, as measuring on an individual basis raises questions of how to determine identity, how to relate conversions back to specific individuals, etc., especially when using data from a Customer Data Platform (CDP). But the most fundamental question is: ‘Should I be testing and personalizing my experiences?’ And Filip’s answer is “Hell yes! But there are lots of different paths to take to do these things. One way is to ask a company like Wunderman Thompson to help you in doing analysis, acting as a consultant to show you what’s working and what isn’t, where there are blockages, places for improvement, etc. (Sorry for the sales pitch).
“But if you’d rather appeal to consumers on your own, from a consumer experience point of view, you need to test to discover what the best way is to have a conversation with them. How can you show them you want to help them without being intrusive? It may help companies to think of this in terms of a retail store experience by asking themselves, ‘How do I, as a customer, want to be welcomed, assisted, guided?’ Understanding this is the best way to start their personalization framework.”
How is Customer Lifetime Value measured?
Customer lifetime value (CLTV) is the profit margin a company expects to earn over the entirety of its business relationship with the average customer. A CleverTap article explains further: “Because CLTV is a financial projection, it requires a business to make informed assumptions. For example, in order to calculate CLTV, a business must estimate the value of the average sale, average number of transactions, and the duration of the business relationship with a given customer. Established businesses with historical customer data can more accurately calculate their customer lifetime value.” A bit blunt, but that’s how it works.
A visual example of calculating customer lifetime value using sale, transactions, and retention metrics – all of which can be impacted by experimentation.
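As a rough worked example of that calculation, with purely hypothetical figures and a simple margin-based model assumed:

```python
# A rough worked example of the simple CLTV model described above.
# All figures are hypothetical.
average_sale = 80.0          # average order value
transactions_per_year = 4    # average purchases per customer per year
retention_years = 3          # average duration of the relationship
profit_margin = 0.25         # share of each sale kept as profit

cltv = average_sale * transactions_per_year * retention_years * profit_margin
print(f"Estimated CLTV: {cltv:.2f}")  # 80 * 4 * 3 * 0.25 = 240.00
```

Experimentation can move any of these inputs: a better purchase funnel lifts transactions, while better personalization extends retention.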
Now, where to find this precious historical customer data?
“CDPs play an essential role in measuring CLTV because they can combine data from dozens of sources to retrace a customer’s entire history of interactions with a brand, from their web and mobile experiences to their in-store and support experiences. And with this data, you can measure how long you’ve been engaging with that customer, what the value of that engagement has been, what things you offer that they’re interested in,” says Filip.
“Obviously, if a consumer has been engaging with a particular brand for a very long time, they’re going to expect a certain level of personalization from you. They’re going to expect the warm embrace and friendly conversation you have with someone you’ve known for years, not just the quick hello and small talk you’d offer to someone you just met. And it’s worth offering this level of personalization because the better you know your customers, the longer you can continue your conversation with them, which results in loyalty and retention and hopefully, referrals.”
There are techniques to maximize CLTV, including segmenting, personalization, increasing marketing channels, cross-selling, and up-selling, to mention but a few.
In today’s economy, where the markets are crowded with competitors vying for the same customers, engagement and conversion are crucial to the success of any business.
Watch for the fifth installment in our Customer-Centric Data Series in two weeks!
Welcome to the first post in our new ‘Feature Experimentation’ series, where we’ll broach different topics related to this essential practice in modern product development.
In this series, we’ll be introducing various scenarios where you can reap the benefits of feature experimentation as well as other relevant guides to help you on your experimentation journey.
In this first post, we will list and discuss some essential best practices when it comes to feature experimentation to ensure that your experiments run smoothly and successfully.
Why running experiments should be a central part of your product development process
Running experiments has grown from a popular trend into a necessity for developing high-quality features and products.
Such experiments are key to uncovering usage patterns and give you insight into how your users interact with your products.
Therefore, experiments are a great way, particularly for product managers and product teams, to validate product quality and to ensure that a product aligns with business objectives.
To measure the outcome of your experiments, metrics can be used to help gauge how your customers are reacting to the new feature and whether it meets their expectations.
This means that experiments help you build and optimize your products so you can make sure that you’re releasing products that can guarantee customer satisfaction.
Experiments are also a great way to learn and prioritize resources so that product teams can focus on the most impactful areas for further iteration.
What is feature experimentation?
We talked about experiments in general in the previous section, but in this series we will focus on a specific type of experimentation.
As the name suggests, feature experimentation involves feature testing or running experiments on developed or modified features with live users in order to see whether they’re performing as intended.
When we talk about feature experimentation, we’re referring to certain areas within your product that may have issues and need further optimization and improvement.
These are the features that define the functionality of your software, make the product as a whole more effective and improve the overall user experience: a sign-up flow, a referral program, a purchase funnel or pricing offers, for example.
In other words, features refer to complete parts of your product that often involve multiple stakeholders or teams and are tied to your internal processes or business logic.
These are the features that often have a major impact, positive or negative. As a result, they need to be tested to avoid the risks of blindly launching them into the wild without a clear understanding of how they will perform or what their impact will be on revenue, sales or product usage, for example.
Thus, your team can compare different variations of a feature with users, instead of going for a big bang release, and see which one confirms your initial hypothesis and shows a positive impact.
This way, only your best features reach your customers after looking at the data that points to the better performing variation.
Experimentation will essentially give you the data you need to do exactly that. Once the winning feature is determined, it can then be rolled out to the rest of your users with the promise of a great user experience.
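As a minimal sketch of what that comparison can look like, here is a two-proportion z-test on hypothetical results from two variations of a feature; the counts and variant names are illustrative only:

```python
# A minimal sketch of comparing two feature variations on conversion rate.
# Exposure and conversion counts are hypothetical.
from math import sqrt
from scipy.stats import norm

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return p_a, p_b, z, p_value

# Control (old sign-up flow) vs. variant (new sign-up flow)
p_a, p_b, z, p = two_proportion_z_test(conv_a=480, n_a=10_000,
                                       conv_b=545, n_b=10_000)
print(f"control: {p_a:.2%}, variant: {p_b:.2%}, z = {z:.2f}, p = {p:.3f}")
# p < 0.05 here, so the variant's lift is unlikely to be random noise.
```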
Some essential best practices for running impactful experiments
As we’ve just seen, feature experimentation, and experimentation in general, is an indispensable tool for any modern tech and product team.
In this section, we will discuss some general best practices when it comes to running experiments so you can achieve the best results and avoid any missteps in your experimentation journey.
Create a culture of experimentation
This should go without saying but in order to get started with experimentation, you need to build and nurture a culture of experimentation within your organization.
Some factors will come into play during this process such as your company size, your team’s workflow and capabilities and the type of industry and market you’re operating in.
What this essentially means is that you primarily need to have a clear strategy and roadmap in place so that your teams are aware of the main business objectives to build efficient tests.
We will look into building an experimentation roadmap in another post within our Feature Experimentation series so stay tuned for that!
In the meantime, what is important to note is that this roadmap will serve as the key to link business objectives with product managers’ ideas in order to execute tests and experiments and to be able to set and track the right metrics.
Furthermore, having a culture of experimentation will enable you to make data-driven decisions.
The data gathered from your experiments will allow you to determine and measure the impact of your ideas to see how they resonate with your customers, enabling you to have a clearer understanding of your target audience’s needs.
Building such a culture means having the right tools in place: tools to segment your audience accordingly, collect the appropriate metrics and analyze the results.
Just as important is having and investing in the right people, management and infrastructure to get the most out of experimentation.
However, keep in mind that building this culture of experimentation doesn’t happen overnight.
It requires time and effort but with the right mindset, you can start nurturing this kind of culture within your organization and motivating your team to get started on their roadmaps.
Make it a team effort
To embrace experimentation as part of your company culture, all the relevant teams need to be involved in product or feature testing and not just engineers and developers.
It is important to remember that a good experiment comes as a result of well-defined, shared goals and metrics by all stakeholders.
For example, as mentioned previously, experimentation is a great way for product teams to test out their ideas, so everyone needs to be part of the brainstorming process and to look at experiments as a learning experience, even when they fail.
In fact, sometimes, it is failed experiments that give the best insight. Any data and learnings gathered from experiments, then, will need to be shared widely among teams so everyone gets a chance to review the results and take the necessary action.
Increasing experiment visibility allows more people within an organization to clearly see the benefits and processes underlying this practice. Highlighting successes and areas for improvement boosts engagement, encouraging people to share their own input and thereby further instilling a culture of experimentation.
Product managers, in turn, can empower the rest of the teams to be part of the decision-making process on how to improve and optimize products so experimentation becomes a collaborative effort.
It also holds them accountable for the experiments they run so that there is a shared sense of commitment. The earlier a team is involved, the more invested they’ll be in the experiment.
Make it easy
You want to build a culture of experimentation; great. But it’s also important not to make it such a complex or time-consuming process that it ends up discouraging your team from running their own experiments.
Remember, experimentation should be a collaborative effort, as mentioned previously. Often, experiments may involve cross-functional teams depending on the type and the scope of the experiment you’re looking to launch.
At the same time, there shouldn’t be too much dependence among teams. We already mentioned that every team, and not just development and engineering teams, should be able to run their own experiments.
Feature flags are one way to decrease risk of running experiments by decoupling release from deployment so that all teams feel confident enough to execute experiments. We will go into further detail on that later.
Set realistic experimentation goals
The goal of running experiments is to improve your product for your customers. The results gathered should give you sufficient data to enable you to make informed decisions to optimize your products.
To be able to obtain relevant data, you will need to have a specific goal or objective that will lead you to create a viable hypothesis that you can prove (or disprove).
This is why having a roadmap, as mentioned previously, will be important to allow you to focus your tests so you can get the right data with statistically significant results.
Also, remember that it’s not always possible to test everything. This means you will need to channel your testing energy into running experiments that are relevant to your goals and objectives.
Additionally, some companies may not have a high enough volume of traffic or users to be able to test everything. This is especially true for feature experiments: a feature needs to receive enough traffic during an A/B test to generate reliable results.
In sum, good tests or experiments should be focused enough that they give you relevant results and data to improve your products to ultimately ensure customer satisfaction.
Learn from failure
If an experiment goes wrong for any reason and you don’t obtain the results you were expecting, this doesn’t mean that the experiment was a waste of time.
Failures when it comes to experimentation can be considered as a learning experience. This encourages your team to take more risks and boosts creativity.
As a result, implementing experimentation as part of your company culture, regardless whether your experiments turn out to be successful or not, means that it becomes embedded within your team’s natural workflow.
Also, remember that knowing what not to do will actually help improve your product: it prevents you from implementing ideas that didn’t perform well and tells you it’s time to move on to the next idea.
Consider the metrics
If you want to make the most of your experiments by making data-driven decisions, then you need to carefully consider the metrics you will track, such as clicks, registrations or sales, to help you judge whether your feature was a success.
This is an essential best practice, as good, efficient experiments are built around a specific goal or metric. The key is to keep a certain focus during experiments, as already mentioned, so as not to deviate from the original goal and lose sight of why you were conducting the experiment in the first place.
This all means that you need to basically tie your experiments to specific KPIs so you can track and analyze the impact of your experiments.
Choosing the right metrics serves as a baseline for your KPIs, enabling you to track the results of your experiments so you can make sound decisions.
Target the right audience
This may seem like a no-brainer but to get the results you need to improve your products, you need to choose the right audience to give you those results.
Proper targeting will allow you to see what kind of changes you need to make to your feature variations and consequently, you will be able to tailor the user experience according to the needs of a specific set of users.
This way, product managers can gain valuable insight into their target audience by observing how they interact with different variations of a feature, allowing these managers to validate theories and assumptions about a certain audience.
There are many ways to go about segmenting your audience, including by region, company or device. The right split will ultimately depend on your own unique objectives.
Remember that to target the right audience, gather the data and analyze the results, you will need to have the appropriate tools at hand depending on your business objectives and teams’ preferences.
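As a toy illustration of such targeting, here is a minimal rule-based segment matcher; the attribute names, values and segment definition are hypothetical, and a dedicated tool would normally handle this for you:

```python
# A minimal sketch of rule-based audience targeting.
# Attribute names and segment rules are hypothetical.
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    country: str
    device: str
    plan: str

def in_segment(user: User, rules: dict) -> bool:
    """A user matches a segment when every rule attribute matches."""
    return all(getattr(user, attr) in allowed for attr, allowed in rules.items())

# Hypothetical segment: mobile users in France or Germany on the free plan
segment_rules = {"country": {"FR", "DE"}, "device": {"mobile"}, "plan": {"free"}}

user = User(user_id="u42", country="FR", device="mobile", plan="free")
if in_segment(user, segment_rules):
    print(f"serve the experiment variant to {user.user_id}")
```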
Consider the duration of the tests
With feature experimentation, you need to run experiments for long enough to gather sufficient data to yield statistically significant results.
You can read more about statistical significance and the type 1 and type 2 errors that may occur during experiments in our dedicated article.
This is important because statistical significance indicates that the results of your experiments can be attributed to a specific cause or trend and are not just a random occurrence.
Therefore, as you start to build your roadmap, you will need to include guidelines for the scheduling and duration of your tests in order to standardize workflows for your team.
However, keep in mind that having a sufficient sample size will be more important than the amount of time an experiment runs.
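To get a feel for how sample size dominates duration, here is a minimal pre-test estimate using the standard two-proportion formula; the baseline conversion rate and minimum detectable effect are hypothetical:

```python
# A minimal sketch of a pre-test sample-size estimate for an A/B test.
# Baseline rate and minimum detectable effect are hypothetical.
from math import ceil
from scipy.stats import norm

def sample_size_per_variant(p_base, mde, alpha=0.05, power=0.80):
    """Users needed per variant to detect an absolute lift of `mde`."""
    p_var = p_base + mde
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided significance threshold
    z_beta = norm.ppf(power)            # desired statistical power
    variance = p_base * (1 - p_base) + p_var * (1 - p_var)
    return ceil(((z_alpha + z_beta) ** 2 * variance) / mde ** 2)

# e.g. 5% baseline conversion, detecting an absolute +1 point lift
print(sample_size_per_variant(p_base=0.05, mde=0.01))  # about 8,150 per variant
```

Run the test until each variant has reached that size, however many days it takes.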
Use feature flags for safer experiments
For some, the idea of testing in production seems risky and stressful.
However, there is a way to run feature experiments safely without any headaches.
Feature flags are software development tools that decouple deployment from release giving you full control over the release process. In that sense, feature flags can be considered as the foundation of a good experiment.
Feature flags allow you to safely conduct experiments by turning on features for certain users and turning them off for everyone else. If anything goes wrong during your experiment, then you can easily turn off the faulty feature until it’s fixed.
Using feature flags alongside feature experimentation will help you maintain the continuous delivery momentum that is required from modern software development while minimizing the risk of disgruntled customers due to an unstable release.
Furthermore, once you have completed your experiment and obtained the results, you can implement the necessary changes through progressive rollout to further test how these new changes perform with users.
Therefore, through progressive delivery using feature flags, you can introduce changes slowly to your users to ensure a smooth user experience before releasing them to everyone else.
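Here is a minimal sketch of such a progressive rollout, assuming a stable user ID; the flag name is hypothetical. Hashing the ID gives each user a sticky bucket, so raising the rollout percentage only ever adds users:

```python
# A minimal sketch of a percentage-based progressive rollout.
# The flag name is hypothetical; bucket assignment is sticky per user.
import hashlib

def rollout_bucket(user_id: str, flag_name: str) -> int:
    """Deterministically map a user to a bucket in [0, 100)."""
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(user_id: str, flag_name: str, rollout_percent: int) -> bool:
    return rollout_bucket(user_id, flag_name) < rollout_percent

# Start at 5%, then raise to 25%, 50% and 100% as confidence grows.
for uid in ("user-1", "user-2", "user-3"):
    print(uid, is_enabled(uid, "new-checkout", rollout_percent=5))
```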
Embrace feature experimentation as part of your company DNA
Some of the biggest companies have achieved their market leadership position precisely because they have embraced experimentation as part of their culture.
Feature experimentation, when done right, allows you to make more powerful decisions based on quantifiable data straight from your users.
This means that instead of making decisions on a whim, experimentation will demonstrate what works and what doesn’t based on mathematically-sound data.
Experimentation is one of the most important capabilities offered by many feature management tools.
Our own feature flagging solution, for example, offers an experiment platform that runs A/B tests to track the business impact of feature releases.
This means that everyone has the tools and confidence to take part in experimentation.
For product managers in particular, it provides the power to set up, monitor and release confidently through a simple, easy-to-use dashboard, without waiting on engineering teams to run the experiments for them.
Our platform focuses specifically on more advanced server-side experiments that let you test deeper modifications tied to your back-end architecture using feature flags, and then measure their impact on the user experience and your business.
Find out how AB Tasty can help you transition seamlessly into the world of experimentation by signing up for a free trial.
Staying ahead of the game to deliver seamless brand experiences for your customers is crucial in today’s experience economy. Today we’ll dip our toe into the “how” by looking at the underlying foundation upon which all of your experiences, optimization and experimentation efforts will be built: data.
Data is the foundation experimentation is built on (Source)
Data powers the experiences you build for your customers: it tells you what they want and how delivering it will best serve your business. It's the special sauce that helps connect the dots between your interpretation of existing information and trends, and the outcomes that you hypothesize will address customer needs (and grow revenue).
If you’ve ever wondered whether the benefits of a special offer are sufficiently enticing for your customer or why you have so many page hits and so few purchases, then you’ve asked the questions the marketing teams of your competitors are both asking and actively working to answer. Data and experimentation will help you take your website to the next level, better understand your customers’ preferences, and optimize their purchasing journey to drive stronger business outcomes.
So, the question remains: Where do you start? In the case of e-commerce, A/B testing is a great way to use data to test hypotheses and make decisions based on information rather than opinions.
A/B testing helps brands make decisions based on data (Source)
“The idea behind experimentation is that you should be testing things and proving the value of things before seriously investing in them,” says Jonny Longden, head of the conversion division at agency Journey Further. “By experimenting…you only do the things that work and so you’ve already proven [what] will deliver value.”
Knowing and understanding your data foundation is the platform upon which you’ll build your knowledge base and your experimentation roadmap. Read on to discover the key considerations to bear in mind when establishing this foundation.
Five things to consider when building your data foundation
Know what data you’re collecting and why
Knowing what you’re dealing with when it comes to slicing and dicing your data also requires that you understand the basic types and properties of the information to which you have access. Firstly, let’s look at the different types of data:
First-party data is collected directly from customers, site visitors and followers, making it specific to your products, consumers and operations.
Second-party data is collected by a secondary party outside of your company or your customers. It’s usually obtained through data-sharing agreements between companies willing to collaborate.
Third-party data is collected by entirely separate organizations with no consideration for your market or customers; however, it does allow you to draw on increased data points to broaden general understanding.
Data also has different properties or defining characteristics: demographic data tells you who, behavioral data tells you how, transactional data tells you what, and psychographic data tells you why. Want to learn more? Download our e-book, “The Ultimate Personalization Guide”!
Gathering and collating a mix of this data will then allow you to segment your audience and flesh out a picture of who your customers are and how to meet their needs, joining the dots between customer behavior and preferences, website UX and the buyer journey.
Chad Sanderson, head of product – data platform at Convoy, recommends making metrics your allies to ensure data collection and analysis are synchronized. Knowing what your business leaders care about, and which metrics will move the business forward, will ensure that your data foundation is relevant and set up for success.
Invest in your data infrastructure
Data is everywhere, in its myriad of forms and gathered from a multitude of sources. Even so, if you’re going to make use of it, you need a robust system for gathering, storing and analyzing it in order to best put it to work. Start by understanding how much first-party data you have the capacity to gather by evaluating your current digital traffic levels. How many people are visiting your site or your app? You can get this information using Google Analytics or a similar platform, and this will help you understand how sophisticated your data-leveraging practices can be and identify gaps where you might need to source supplementary data (second- and third-party).
Next, you’ll need to evaluate your infrastructure. Companies that are further on their data analytics journey will invest in customer data platforms (CDPs) that allow them to collect and analyze data – gathered from a variety of sources and consolidated into a central database – at a more granular level. Stitching together this data via a CDP helps you bring all the pieces together to form a complete picture of your customers and identify any gaps. This is a critical step before you leap into action. Chad Sanderson concurs. “[Start] with the business and what the business needs,” he advises. “Tailoring your… solution to that – whatever that is – is going to be a lot more effective.”
Get consent to build consumer trust
Data security is rightly of foremost concern to consumers. The very users from whom you want to gather that first-party data want to ensure that their private information remains secure. Getting their consent and being transparent about the inherent benefit to them if they agree to your request – be it through giveaways, exclusive offers, additional information or services – will give you the best chance of success. Demonstrating that you adhere to, and take seriously, various data compliance laws (such as GDPR) and good governance will also build trust in your brand and give you the opportunity to make it worth their while through improved UX and personalized experiences.
Build trust in your brand by respecting your users’ private information (Source)
Collect and discover insights to upgrade your customer strategy
We’ve already covered the fact that data is everywhere. As Chad Sanderson highlighted above, identifying immediate business needs and priorities – as well as focusing on quick wins and low-lift changes that can have a quick and high-level impact – can help you navigate through this minefield. It’s best to think of this section as a four-step process:
• Collect data as it flows into your CDP
• Transform or calibrate your data so that it can be compared in a logical manner
• Analyze the data by grouping and categorizing it according to the customer segments you’ve identified and benchmarking against business priorities
• Activate your insights by pushing the learnings back into your platforms and/or your experimentation roadmap and really put this data to work
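In code terms, the loop might look something like this schematic sketch (illustrative types only; a real CDP pipeline is far richer):

```typescript
// Collect -> transform -> analyze -> activate, as a toy pipeline.
interface RawEvent { userId: string; event: string; value?: number }
interface CleanEvent { userId: string; event: string; value: number }

const collect = (events: RawEvent[]): RawEvent[] => events;

// Transform: normalize so events can be compared in a logical manner.
const transform = (events: RawEvent[]): CleanEvent[] =>
  events.map((e) => ({ ...e, value: e.value ?? 0 }));

// Analyze: group and total by event type (stand-in for real segmentation).
const analyze = (events: CleanEvent[]): Map<string, number> => {
  const totals = new Map<string, number>();
  for (const e of events) totals.set(e.event, (totals.get(e.event) ?? 0) + e.value);
  return totals;
};

// Activate: push learnings back into your platforms or test roadmap.
const activate = (insights: Map<string, number>) =>
  console.log("feed into experimentation roadmap:", insights);

activate(analyze(transform(collect([{ userId: "u1", event: "purchase", value: 30 }]))));
```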
Turn your data into actions
It’s crunch time (no pun about numbers intended)! We’ve examined the different types of data and where to source them, how to be responsible with data collection and how to set up the infrastructure needed to consolidate data and generate insights. We’ve also covered the need to understand business priorities and core strategy to drive data collection, analysis and activation in the same direction. Now we need to put that data and those insights to work.
In the experience economy, where constant evolution is the name of the game, innovation and optimization are the key drivers of experimentation. Taking the data foundation that you’ve built and using it to fuel and nourish your experimentation roadmap will ensure that none of the hard work of your tech, marketing and product teams is in vain. Testing allows you to evaluate alternatives in real time and make data-driven decisions about website UX. It also ensures that business metrics are never far from reach, where conversion and revenue growth take center stage.
Use the data you’ve gathered to fuel your experimentation roadmap (Source)
Invest in a solid data foundation to maximize and scale
At AB Tasty, we apply the Bayesian approach to interpreting data and test results because, in A/B testing, this method not only shows whether there is a difference between the tested options, it also quantifies that difference. Knowing the size of the variance lets you understand exactly what you stand to gain by adopting a change permanently.
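To illustrate the intuition (and only the intuition; this is not AB Tasty's actual computation), here's a toy Monte Carlo sketch: sample the posterior conversion rates of two variations from Beta distributions, then estimate both the probability that B beats A and the expected size of the lift:

```typescript
// Toy Bayesian comparison of two variations via Monte Carlo sampling.
// Uses a Beta(1,1) prior; the Beta sampler below is exact for integer
// shape parameters (sums of exponentials), which is all we need here.
function sampleBeta(alpha: number, beta: number): number {
  const gamma = (k: number) => {
    let sum = 0;
    for (let i = 0; i < k; i++) sum += -Math.log(Math.random());
    return sum;
  };
  const x = gamma(alpha);
  return x / (x + gamma(beta));
}

function compare(convA: number, nA: number, convB: number, nB: number) {
  let bWins = 0;
  let liftSum = 0;
  const draws = 10_000;
  for (let i = 0; i < draws; i++) {
    const a = sampleBeta(1 + convA, 1 + nA - convA);
    const b = sampleBeta(1 + convB, 1 + nB - convB);
    if (b > a) bWins++;
    liftSum += (b - a) / a; // relative lift of B over A in this draw
  }
  return { probBBeatsA: bWins / draws, expectedLift: liftSum / draws };
}

// 12/300 conversions for A vs 20/300 for B: B is probably (though not
// certainly) better, and we also get an estimate of how much better.
console.log(compare(12, 300, 20, 300));
```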
Collecting and analyzing data, and then leveraging the insights that you glean, are key to unlocking the next level of experience optimization for your customers and your business. An experimentation roadmap grounded in real-time responsiveness and long-term, server-side improvements will have a solid data foundation approach at its core, where understanding who you want to target and how to act drives success. Furthermore, if you invest in your data foundation – and the five core drivers we’ve explored above – you’ll be equipped to scale your experimentation and allow optimization to become a key business maximizer.
For the third blog in our series on a data-driven approach to customer-centric marketing, we talked with our partner Matt Wright, Director of Behavioral Science at Widerfunnel, and Alex Anquetil, Manager of North America Customer Success at AB Tasty, who discuss what emotional connection means in a marketing context, why it’s critical for brands to forge emotional connections with their customers, and how data can be used to both build and measure the efficacy of these connections.
What do we mean when we talk about creating an “emotional connection” in a marketing context?
Simply put, emotions are the driving force behind every purchase. People don’t buy from a given brand because they need a product they could easily find elsewhere, but because they feel an affinity, a sense of trust, well-being, or inclusion with or loyalty to that brand.
In such a crowded market, forging deep emotional connections with customers is essential for marketers to attract and retain customers today. Marketers can’t merely “appeal to emotions,” but need to understand their behaviors and motivations and ensure that their missions and messages align with customers’ emotions and needs.
Matt prefers to reframe the question: “What’s the role of emotional decision-making in marketing? People build mental models around their emotions, experiences, and cultural associations. They think of some as ‘good’ or ‘bad’… they tie emotion to them. The key for marketers is to understand which emotions resonate with which group of people. And this is where A/B testing can help you find clues as to what works and what doesn’t. Creating strong emotional connections is paramount, and through experimentation, you can create them all throughout your sales funnel.”
“Our brains have limited bandwidth,” remarks Alex, “so we tend to save our resources for the important things. When we make a simple purchase, we take shortcuts. We grab what’s available from the wheel of our basic emotions – happiness, anger, surprise – to enable us to make quick decisions. If brands can leverage these emotions, whether positive or negative, and align their sales tactics to them, they can create frictionless experiences. The fact that every purchase is emotional is the reason why we don’t have ‘one perfect user interface’ or ‘one ideal sales funnel’: every brand, product, and user is different.”
Matt says, “That’s a great analogy. Usability is the foundation, but you need to build upon it. Even if your UI is ugly, in the right circumstances, it will convert. For example, if your website is for a charity, people don’t want you to spend your money on making it look beautiful. They want the money to go to the cause – so they may negatively judge you if you have a digital masterpiece for a website. But if you’re designing for a chic brand, people want it to look and feel exclusive. This is what A/B testing teaches us: it’s not about win or lose, it’s about gathering insights, which I think is often overlooked at a base level of experimentation.”
Why are emotional connections with customers so important for brands?
For Matt, emotion is especially important for positioning. “It’s not something people typically do experimentation around – I wish they did – because the data you can glean from testing things like value propositions or copywriting is extremely valuable for successfully positioning a product. Also, as customers move through their journeys, they’re going to have different emotions at different moments, including doubts, so give them signals to reassure them they’ve made the right decisions. By doing that, you’ll strengthen their loyalty to you.”
Alex thinks that first impressions matter, and if you don’t connect on the first day, you may not get another shot. “People look for meaning in what they buy, even when it’s something as banal as a pack of batteries. Utilitarian products can have ‘the right’ signals attached to them (think of the Energizer bunny, and the tradition and reliability attached to it). No one wants to buy products that have negative connotations. When it comes to clothing or luxury items, these are 100% emotional, and it’s essential for marketers to confer the correct image and status by selling to the right groups (because, of course, there are in-groups and out-groups by the brand’s standards) and by attaching the right emotions and motivators specific to each brand and product.”
Should brands create different types of emotional connections for different audiences?
Again, Matt has a preliminary question to reposition how we approach the subject: “Is it worth it to build multiple experiences? The best way to decide is to start small then go deeper, and keep testing until the data leads you to a value proposition. If the data shows you it’s worth it, then build different approaches, yes.”
But Alex, who’s familiar with both the French and US markets, says yes right away. “When looking at short-term and long-term outcomes, I think there have to be different types of emotional connections for different cultural or geographical audiences. The question is, do you want the emotions to serve sales or marketing at all costs? In other words, do you want your value proposition to associate your brand with specific emotions? When brands expand to new markets, they may require different approaches. For example, certain French luxury brands sell product collections only in France and entirely different ones in the US. With perfume, US customers tend to buy larger bottles, while the French buy smaller ones, due to different cultural priorities and motivators.”
Examples of motivators and leverage:
Source: HBR.org, “The New Science of Customer Emotions,” by Scott Magids, Alan Zorfas, and Daniel Leemon, November 2015
“You can analyze your own market data to find out what your highest-value group is and what their motivators are, then push that to the market and take everyone on your journey, or you can do it the other way around, and make sales your ultimate objective.”
Matt thinks the brand will usually lead and cites the example of Netflix. “There’s a debate going on right now to decide whether, in order to keep growing, Netflix should sell ads. Now, they can probably run an A/B test and find out they’ll make more money if they do sell ads, obviously. But how will that affect their brand image in the short, medium, and long term? They might not lose money, but on an emotional level, they might lose a lot of their historical appeal.
“When dabbling with emotions, it’s not as simple as just an A/B test. When making strategic decisions, experimentation can certainly help incrementally optimize things, but it can do bigger things too, including helping you make key decisions, better understand your customers, innovate, take risks… Not enough people realize the power of advanced testing. Companies that use it see exponential improvements.”
Talking about experimentation tools, Matt explains: “Early on in the industry, we talked about A/B testing in pretty much only an optimization win-or-lose mindset. And it’s so much more than that. When you make this investment, it’s going to help you make decisions, not just find tiny, incremental bits of revenue for your company. There’s a resourcing problem: conversion rate improvement isn’t the only thing you can do, there’s a huge range of other things you can achieve, and teams need more than a CRO manager to realize those full capabilities. It’s a key competitive differentiator.”
How can data be used to create emotional connections in marketing?
It’s a lot harder to target audiences today due to cookie policy changes and new regulations. But as Matt says (and everyone else agrees), “First-party data will lead to strong positioning and really good ads that connect with users. Because it’s owned by brands, it’s going to be the best quality data for testing hypotheses and segmenting data so brands can offer personalized, exclusive experiences.”
Alex puts it this way: “At the end of the day, you’re still going to be tracking conversions and clicks, so you need to do the groundwork in marketing. It’s more advanced than usability testing. To test for emotions, you have to do some groundwork and some guesswork. You need to know your brand; you need to work with market research. And when you find an emotion aligned with what you want your brand to represent, you need to identify a segment of high-potential customers. Then you find the motivators you associate with that segment, thanks to qualitative research and feedback; then you need to quantify all of that to see if you’re correct. Then you push motivators, measure results, see what boosts efficiency, retention, loyalty, customer lifetime value… and discover whether you’ve got a winning proposition.”
Matt grins: “I wouldn’t call any part of that approach ‘guesswork’. You’re simply combining qualitative with quantitative to come up with better hypotheses for testing. It’s the heart of good experimentation.”
The next installment in our Customer-Centric Data Series will be out in two weeks. Don’t miss it!
We teamed up with our friends at Creative CX to take a look at the impact of experimentation on Core Web Vitals. Read our guest blog from Creative CX’s CTO Nelson Sousa giving you insights into how CLS can affect your Google ranking, the pros and cons of server-side and client-side experiments, as well as organisational and technical considerations to improve your site experience through testing, personalisation and experimentation.
What are Core Web Vitals?
Core Web Vitals (CWV) are a set of three primary metrics that affect your Google search ranking. According to StatCounter, the behemoth search engine accounts for 92% of the global market share, so this change has the potential to reshape the way we look at optimising our websites as more and more competing businesses seek to outdo one another for the top spots in search results.
One notable difference with CWV is that the metrics focus on the user experience. Google wants to ensure that users receive relevant content and are directed to optimised applications: that items don’t jump around the screen or move from their initial position, that users can quickly and successfully interact with an interface, and that the largest painted element appears on the screen in a reasonable amount of time.
What is CLS?
Let’s imagine the following scenario:
You navigate to a website and click on an element. It immediately moves away from its position on the page. This is a common frustration: you end up clicking elsewhere on the page, or on a link that navigates you somewhere else entirely, forcing you to go back and attempt to click your desired element again.
You have experienced what is known as Cumulative Layout Shift, or CLS for short: a metric used to determine visual stability during the entire lifespan of a webpage. It is measured by score and, according to Core Web Vitals, webpages should not exceed a CLS score of 0.1.
CLS within Experimentation
When working with client-side experimentation, a large percentage of A/B testing focuses on making changes in the browser. This is a common pattern, which normally involves placing an HTML tag in your website so that the browser can make a request to the experimentation tool’s server. Such experimentation tools have become increasingly important as tech teams are no longer the sole entities making changes to a website.
For many, this is a great breakthrough.
It means marketing and other less technical teams can access friendly user interfaces to manipulate websites without the need for a developer. It also frees up time for programmers to concentrate on other, more technical aspects.
One drawback of the client side is that certain elements can be displayed to the user before the experimentation tool has had a chance to perform its changes. Once the tool finally executes and completes its changes, it may insert new elements in the same position where other elements already exist, pushing those elements further down the page. This downward push is an example of CLS in action.
Bear in mind that this only affects experiments above the fold, i.e. on elements initially visible on the page without the need to scroll.
So when should you check for CLS and its impact upon the application? The answer is up for debate. Some companies begin to consider it during the design phase, while others during the User Acceptance Testing phase. No matter what your approach is, however, it should always be considered before publishing an experiment live to your customer base.
Common CLS culprits
According to Google’s article on optimising CLS, the most common causes of CLS are:
Images without dimensions
Ads, embeds and iframes without dimensions
Dynamically injected content
Web fonts causing FOIT/FOUT (flashes of invisible or unstyled text)
Actions waiting for a network response before updating the DOM
Overall CLS Considerations
Team awareness and communication
Each variation change creates a unique CLS score. This score is a primary input into your prioritisation mechanism: it shapes the way you approach an idea and helps determine whether or not a specific experiment should be carried out.
Including analysis from performance testing tools during your ideation and design phases can help you understand how your experiment will affect your CLS score. At Creative CX, we encourage weekly communication with our clients, and discuss CLS impact on a per-experiment basis.
Should we run experiments despite possible CLS impact?
Although in an ideal world you would keep the CLS score at 0, this isn’t always possible. Some experiment ideas may go over the threshold, but that doesn’t mean you cannot run the experiment.
If you have data-backed reasons to expect the experiment to generate an uplift in revenue or other metrics, the CLS impact can be tolerated for the lifetime of the experiment. Don’t let the CLS score deter you from generating ideas and bringing them to life.
Constant monitoring of your web pages
Even after an experiment is live, it is vital to use performance testing tools and continuously monitor your pages to see if your experiments or changes cause unexpected harmful effects. These tools will help you analyse your CLS impact and other key metrics such as First Contentful Paint and Time to Interactive.
Be aware of everyone’s role and impact
When it comes to the impact of experimentation on Core Web Vitals, you should be aware of two main things:
What is the impact of your provider?
What is the impact of modifications you make through this platform?
Experimentation platforms mainly impact two Web Vitals: Total Blocking Time and Speed Index. The way you use your platform, on the other hand, could potentially impact CLS and LCP (Largest Contentful Paint).
Vendors should do their best to minimise their technical footprint on TBT and Speed Index; meanwhile, there are best practices you should follow to keep your CLS and LCP values in check, without the vendor being held liable.
Here, we’ll cover both aspects:
Be aware of what’s downloaded when adding a tag to your site (TBT and Speed Index)
When you insert any snippet from an experimentation vendor onto your pages, you are basically making a network request to download a JavaScript file that will then execute a set of modifications on your page. This file is, by its nature, a moving piece: its size evolves with your usage, i.e. the number and nature of your experimentations.
The bigger the file, the more impact it can have on loading time, so it’s important to always keep an eye on it, especially as more stakeholders in your company embrace experimentation and want to run tests.
To limit the impact of experimenting on metrics such as Total Blocking Time and Speed Index, you should download strictly the minimum to run your experiment. Providers like AB Tasty make this possible using a modular approach.
Dynamic Imports
Using dynamic imports, the user only downloads what is necessary. For instance, if a user is visiting the website from a desktop, the file won’t include modules required for tests that affect mobile. If you have a campaign that targets only logged in users to your site, modifications won’t be included in the JavaScript file downloaded by anonymous users.
Every import also uses a caching policy based on its purpose. For instance, consent management or analytics modules can be cached for a long time, while campaign modules (the ones that hold your modifications) have a much shorter lifespan, because you want the updates you’re making to be reflected as soon as possible. Some modules, such as analytics modules used for tracking purposes, can also be loaded asynchronously, which has no impact on performance.
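In simplified form, the modular idea looks like this (the module paths and context shape are hypothetical, not AB Tasty's actual internals):

```typescript
// Only download the campaign code that can actually apply to this visitor:
// desktop users never pay for mobile-only test modules, and anonymous
// users never download campaigns that target logged-in users.
async function loadCampaignModules(ctx: { device: string; loggedIn: boolean }) {
  const campaigns: Promise<unknown>[] = [];

  if (ctx.device === "mobile") {
    campaigns.push(import("./campaigns/mobile-checkout.js"));
  }
  if (ctx.loggedIn) {
    campaigns.push(import("./campaigns/loyalty-banner.js"));
  }

  // Tracking-only modules can load asynchronously without blocking rendering.
  import("./modules/analytics.js").then((m: any) => m.init?.());

  await Promise.all(campaigns);
}
```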
To make it easy to monitor the impact on performance, AB Tasty also includes a tool named “Performance Center”. It gives you a real-time preview of your file size and provides ongoing recommendations based on your account and campaign setup:
to stop campaigns that have been running for too long and add unnecessary weight to the file,
to update features on running campaigns that have benefited from performance updates since their introduction (e.g. widgets).
How are you loading your experimentation tool?
A common way to load an A/B testing platform is by inserting a script tag directly into your codebase, usually in the head tag of the HTML. This would normally require the help of a developer; therefore, some teams choose the route of using a tag manager as it is accessible by non-technical staff members.
However, this goes against best practice: tag managers cannot guarantee when a specific tag will fire. Considering the tool will be making changes to your website, it is ideal for it to execute as soon as possible.
Normally the script is placed as high up in the head tag of the HTML as possible: right after any meta tags (as these provide metadata to the entire document) and before external libraries that deal with asynchronous tasks (e.g. tracking vendors such as ad networks). Even if some vendors provide asynchronous snippets so as not to block rendering, it’s better to load the script synchronously to avoid flickering issues, also called FOOC (Flash of Original Content).
Best Practice for flickering issues
Other best practices to solve this flickering issue include:
Make sure your solution uses vanilla JavaScript to render modifications. Some solutions still rely on the jQuery library for DOM manipulation, adding one additional network request. If you are already using jQuery on your site, make sure that your provider relies on your version rather than downloading a second version.
Optimize your code. For a solution to modify an element on your site, it must first select it. You can simplify this targeting by adding unique IDs or classes to the element, which avoids unnecessary processing to spot the right DOM element to update. For instance, rather than having to resolve “body > header > div > ul > li:first-child > a > span”, it is quicker to just resolve “span.first-link-in-header” (see the sketch after this list).
Optimize the code auto-generated by your provider. When playing around with any WYSIWYG editor, you may add several unnecessary JavaScript instructions. Quickly analyse the generated code and optimize it by rearranging it or removing needless parts.
Rely as much as possible on stylesheets. Adding a stylesheet class to apply a specific treatment is generally faster than adding the same treatment using a set of JavaScript instructions.
Ensure that your solution provides a cache mechanism for the script and relies on as many points of presence (CDN) as possible, so the script can be loaded as quickly as possible wherever your user is located.
Be aware of how you insert the script from your vendor. As performance optimisation becomes more advanced, it’s easy to misuse concepts such as async or defer if you don’t fully understand them and their consequences.
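To illustrate the selector point from the list above, here's a small TypeScript snippet (the markup and class name are hypothetical):

```typescript
// Both selectors may find the same element, but the deep structural one
// forces the engine to walk the tree and breaks whenever the markup moves.
const slow = document.querySelector(
  "body > header > div > ul > li:first-child > a > span",
);
const fast = document.querySelector("span.first-link-in-header");

// The dedicated class resolves directly and survives layout refactors.
if (fast) fast.textContent = "New offer";
```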
Be wary of imported fonts
Unless you are using a web-safe font, which many businesses can’t due to their branding, the browser needs to fetch a copy of the font so that it can be applied to the text on the website. This new font may render larger or smaller than the fallback font, causing a reflow of the elements. Using the CSS font-display property, alongside preloading your primary webfonts, can increase the chance of a font meeting the first paint and lets you specify how a font is displayed, potentially eliminating a layout shift.
Think carefully about the variation changes
When adding new HTML to the page, consider whether you can replace an existing element with one of similar size, thus minimising layout shifts. Likewise, if you are inserting a brand-new element, do preliminary testing to ensure that the shift does not exceed the CLS threshold.
Technical CLS considerations
Always use size attributes for the width and height of your images, videos and other embedded items, such as advertisements and iframes. For images specifically, we suggest using the CSS aspect-ratio property: unlike older responsive practices, it determines the size of the image before it is downloaded by the browser. The most common aspect ratios today are 4:3 and 16:9 – for every 4 units across, the screen is 3 units deep, and for every 16 units across, 9 units deep.
Knowing one dimension makes it possible to calculate the other. If you have an element with a width of 1000px and a 4:3 ratio, its height would be 750px:
height = 1000 × (3 / 4) = 750
When rendering elements to the browser, the initial layout often determines the width of an HTML element. With the aspect ratio provided, the corresponding height can be calculated and reserved. Handy tools such as Calculate Aspect Ratio can do the heavy-lifting math for you.
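The calculation can also be wrapped in a small helper if you want to compute reserved heights programmatically (a trivial sketch):

```typescript
// Reserve the height before the image downloads so nothing shifts on arrival.
function reservedHeight(width: number, ratioW: number, ratioH: number): number {
  return Math.round(width * (ratioH / ratioW));
}

console.log(reservedHeight(1000, 4, 3)); // 750
console.log(reservedHeight(1280, 16, 9)); // 720
```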
Use CSS transform property
Changing the CSS transform property triggers neither layout nor paint, which allows you to change an element’s apparent size or position without triggering any layout shifts. Animations and transitions, when done correctly with the user’s experience in mind, are a great way to guide the user from one state to another.
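As a quick sketch of the difference, assuming a hypothetical `.promo-card` element:

```typescript
// Resizing via width/height forces layout and reflows neighbouring elements;
// transform is handled by the compositor and shifts nothing around it.
const card = document.querySelector<HTMLElement>(".promo-card");
if (card) {
  // Avoid: triggers layout, neighbours reflow around the new size.
  // card.style.width = "120%";

  // Prefer: visually scales the element without moving anything else.
  card.style.transform = "scale(1.2)";
  card.style.transition = "transform 200ms ease-in-out";
}
```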
Move experiment to the server-side
Experimenting with many changes at once is considered against best practice, and the weight of the tags used can affect the speed of the site. It may be worth moving these changes to the server side so that they are brought in upon initial page load. We have seen a shift in many sectors where security is paramount, such as banking, towards server-side experimentation that avoids the use of tags altogether. This way, once a testing tool completes the changes, layout shift is minimised.
Working hand in hand with developers is the key to running server-side tests such as this. It requires good synchronisation between all stakeholders, from marketing to product to engineering teams, and some level of experience is necessary. Moving to server-side experiments just for the sake of performance must be properly evaluated.
Server-side testing shouldn’t be confused with server-side tag management. Some websites that implement a client-side experimentation platform through a tag manager (which is a bad idea, as described previously) may be under the impression that they can move their experimentation tag to the server side as well and gain some of the benefits of server-side tag management, namely reducing the number of network requests to third-party vendors. While this works for some tracking vendors (Google Analytics, the Facebook conversions API…), it won’t work with experimentation tags that need to apply updates to DOM elements.
Summary
The above solutions are there to give you an overview of real-life scenarios. Prioritising the work to be done in your tech stack is the key factor in improving the site experience in general. This could include moving requests to the server, using a front-end or server-side library that better meets your needs, or even rethinking your CDN provider based on where its points of presence are located versus where most of your users are.
One way to start is by using a free web tool such as Lighthouse to get reports about your website. This will give you the insight to begin testing elements and features that are directly or indirectly causing low scores.
For example, if you have a large banner image that is the cause of your Largest Contentful Paint appearing long after your page begins loading, you could experiment with different background images and test different designs against one another to understand which one loads the most efficiently. Repeat this process for all the CWV metrics, and if you’re feeling brave, dive into other metrics available in the Lighthouse tools.
While much thought has gone into the exact CWV levels to strive for, missing them does not mean Google will drop you from its search ranking: it will still prioritise relevant content over page experience. Not all companies will be able to hit these metrics, but they certainly set standards to aim for.
Written by Nelson Sousa, Chief Technology Officer, Creative CX
Nelson is an expert in the field of experimentation and website development with over 15 years’ experience, specialising in UX driven web design, development, and optimisation.
Let’s say you have an online shop and in that online shop you have a product. Your product is designer eyewear and prescription glasses. A customer visits your online shop to learn about your product. That customer needs to determine which frames will suit their face and what size to order. A similar shop that sells similar products to yours offers free shipping and free returns of up to 3 pairs at no charge, or the use of a virtual reality assistant, via their mobile app, to help their customers make purchasing decisions without needing to visit a store. Your shop, though well-intentioned and bug-free, does not. The customer’s experience researching and selecting their product is what ultimately drives their decision-making process, and they purchase from the other shop. And the next time they need glasses, they purchase from that other shop again. That’s the experience economy.
In the experience economy, finding a differentiating edge is crucial for brands (Source)
Expressed in more academic terms, the experience economy is the packaging of goods and services into a bundle such that the experience of acquiring or consuming is the key selling point – it’s the reason the customer came into your shop in the first place.
In 1998, an article in the Harvard Business Review detailed the concept of the experience economy for the first time, using a birthday cake analogy to eventually draw out the definition we see above. These days, the concept is more important than ever, as the rapidly evolving digital transformation of the way we consume information and goods creates a never-ending, multi-channel interaction between brands and consumers. And it’s key to your overall business success.
How e-commerce brands can succeed in the experience economy
In the age of digitalization, not only do all brands have websites, incorporating an e-commerce platform for online sales, but they also have Facebook, Instagram, TikTok and Snapchat accounts, more than likely a YouTube channel, a web browser adapted to mobile devices and an app to sit alongside it. In short, multiple channels and touchpoints for their customers to interact and engage with them, and multiple opportunities to create experiences to acquire new customers and drive sales. This all makes for a non-linear shopping experience, and requires careful examination of what customers expect on which channel and at which time.
A customer-first mindset is crucial for businesses that are looking to win the digital CX game (Source)
How can brands adapt to shifting consumer preferences
At AB Tasty, we’re convinced that the brands opting for a “business as usual” approach will quickly be left in the dust. Customers expect better service and more meaningful interactions, and indicate that they’ll spend more when brands deliver. This means having a strategy that considers multiple channels, across physical, digital and social touchpoints, and adapts to the preferences of each individual so that interactions remain authentic and personal. If you’re engaging with customers without being able to have in-person contact, experience matters even more, because consumers still want to be seen as individuals with their own unique needs. Ultimately, their experience will influence their buying decisions: according to Salesforce, 66% of consumers expect companies to understand their unique needs and preferences.
Create a personalized, relevant shopping experience for each customer (Source)
Figuring out what your customers want doesn’t need to be a guessing game: experimentation is standard practice in the experience economy. In B2C environments, marketing teams test website performance using a range of experiments that examine layout, colors, purchase journeys, product information and visual features to ensure no stone is left unturned in maximizing transactions and revenue. And adopting an experimentation mindset really is a win-win. On the one hand, you’re identifying the best way to interact with your customers – identifying what they respond to and what they want – and on the other, you’re maximizing every opportunity to drive purchases and serve your bottom line.
Why prioritizing customer experiences matters
That’s all very well and good, you might say, but what difference does it really make? Plenty, in fact. Relevant and personalized consumer experiences are key to keeping your brand ahead of its competitors. Let’s explore some of the reasons for this.
Loyalty is hard-earned and easily lost
PwC’s 2021 Global Consumer Insights Survey found that 84% of shoppers trust brands that provide exceptional customer service, but one in three will walk away after just one negative shopping experience. In a similar vein, Qualtrics’ 2022 Global Consumer Trends survey reported that 60% of consumers would buy more if businesses treated them better, and determined that 9.5% of your overall revenue is at risk from negative shopping experiences. Still not convinced? Read on!
Seamlessness is synonymous with success
You can design any number of gimmicks to attract attention, but it’s the seamless ones that stick. Take the Clarins Singles Day Wheel of Fortune promotion, where any customer landing on the brand’s desktop or mobile site in EMEA saw a pop-up to spin the wheel. They were then rewarded with one of six special offers, which was automatically added to their basket via a promo code at the checkout. This automatic add proved crucial: Results were strong across all key territories, with Ireland particularly notable, seeing a 495% increase in orders and a 585% increase in revenue. Clarins uncovered a clever, engaging offer and coupled it with a seamless UX process for their shoppers, delivering simply stunning results.
Clarins delivered a customer experience on par with their clients’ expectations (Source)
Stagnate and you’ll be left behind
To innovate or not to innovate, is it even a question? If you’re thinking about it, then your competitors almost certainly are too. And if you’re not trying something new, you almost certainly risk falling behind. While a bug-free website and a smooth journey through the purchase funnel are great, they’re also the bare minimum that you should be doing. Salesforce found that 91% of customers are more likely to make a repeat purchase from a company after a positive customer experience. Delivering a seamless, multichannel experience across all business interactions is integral to staying ahead, and it’s clear there is still scope for brands to optimize.
4 examples of brands that are excelling in the experience economy
As we’ve seen in the above section, brands that embrace the experience economy are best-positioned to see increased loyalty, repeat business, and convert their customers into advocates for their products. Pushing beyond experiences into memorable interactions for their consumers has allowed some of the best-known brands in the world to gain further ground on their direct competitors, all while staying true to their core values. Let’s take a look at the best-in-class trends and examples of the experience economy model.
Nike
Nike is driven by delivering innovative products, services and experiences to inspire athletes. One such experience is their Nike Fit solution: an AI-driven app that allows you to virtually measure and fit your foot to ensure you choose the right pair of Nike shoes, no matter the style or shape of your foot, and without having to leave your living room.
Nike introduces innovative solutions to their clients’ biggest point of friction (Source)
Sephora
In 2019, Sephora pioneered their intelligent digital mirror in the brand’s Madrid flagship, using the power of AI to deliver hyper-personalized experiences and product recommendations to shoppers. The mirror not only allows consumers to “test” products by displaying how they’ll render when applied, it also provides personalized product recommendations and suggestions based on an analysis of the customer’s features.
Sephora develops new ways to offer their customers personalized recommendations (Source)
Starbucks
Starbucks has revolutionized their physical footprint by opening pickup-only stores in key, high-traffic locations where rental space is at a premium and busy lives mean in-and-out transactions are the order of the day. This store concept allows coffee lovers to order and pay ahead of time, via the Starbucks mobile app, and nominate the pickup location, for a speedy service that saves tedious, peak-hour queues. Not to mention a boost to sales per square foot, a key metric in the brick-and-mortar retail space.
Starbucks identifies their customers’ needs and delivers an optimal shopping experience (Source)
Asos
This online fashion retailer was founded in London in 2000, and now sells over 850 brands around the world. In identifying one of the key barriers to online shopping for clothes – choosing the correct size – Asos developed their Fit Assistant tool to ensure customers could navigate the online shopping experience hassle-free. Available on both desktop and mobile, Fit Assistant delivers personalized recommendations according to shoppers’ individual shapes and sizes.
Asos optimizes their customers’ online shopping experience (Source)
Why the experience economy is here to stay
Through a combination of rapid digital transformation, technological innovation of smart devices (phones, tablets, watches and more), and the increasing pace of our daily lives, the manner in which we consume products has evolved beyond mere acquisition. How we consume the product matters. How we feel about how we consume the product matters. How the brand ensures we enjoy our consumption of the product matters. And if your brand is not up for the challenge and staying ahead of the game, consumers will find one that is. It’s as simple as that. Evolve, innovate, and deliver seamless brand experiences, and you’ll lead the competition, win market share and generate growth.
If you’re looking for some guidance on how to deliver impactful brand experiences that will “wow” your customers, draw inspiration from the first-ever digital customer journey that maps out how to drive optimization and innovation to take your customer experience to the next level.
Prior to the launch of a product, a number of tests are usually run to ensure that the software is ready for release and provides a good user experience. The purpose of these tests is to validate the software before going ahead with a final release to your end-users.
These sorts of tests are essential to make sure that the software you’re releasing is free of bugs and meets the quality and requirements expected by your customers.
Among such tests are alpha and beta tests. These are conducted towards the end of the software development life cycle (SDLC) to test releases outside the development team, helping uncover issues that would otherwise not show up in earlier tests run in more controlled environments.
What is alpha testing?
Alpha testing is typically run with internal users by the QA (Quality Assurance) team to make sure that the software meets all expectations and is working as it should. It thus represents an opportunity to evaluate the performance and functionality of a product release, as well as to obtain feedback from technical users.
In other words, the main purpose of this test is to uncover any bugs or issues to resolve them before the final product is released to users. It helps ensure bug-free functionality by carrying out tasks that a typical user may perform.
This test is usually performed when your product is nearly complete towards the end of the software development cycle in a staging environment, which attempts to mimic an actual production environment as closely as possible, but before beta testing, which we’ll get to later.
It seeks to answer one question: does your product actually work?
Alpha testing involves two main phases:
The first phase is run by software developers using debugging tools or software to catch bugs quickly.
The second phase is performed by the QA team and may involve ‘white box’ and ‘black box’ testing. A white box test will test the software system’s design and internal structure allowing QA testers to ‘look inside’ the product. A black box test, meanwhile, will test the system’s input and output functionality.
The advantages of this type of testing are clear.
It allows teams to locate bugs and issues that managed to escape previous tests so that they may be fixed before they reach your end-users.
Up until that point, tests focused on specific parts of the software; alpha testing, by contrast, looks at whether the software as a whole functions correctly.
In other words, it enables teams to validate the quality and functionality of a release before it reaches customers. Put simply, alpha testing opens up the application to initial feedback.
This results in improved software quality as the software is tested in an environment that is a very close replica of the environment it will eventually be used in, hence creating realistic testing conditions. This also allows the QA team to understand how the software will behave when it is later released to end-users.
To sum up, alpha testing provides an opportunity to put your product in real user environments but with technical users who are more adept at identifying and discovering bugs before conducting beta tests with actual real-world users.
However, conducting alpha testing may prolong the test execution cycle thereby delaying the release of the product to your end-users. Also, keep in mind that since the software is still in the development stage, alpha testing doesn’t provide in-depth testing of the functionality of the software.
Now, we will move on to the next testing phase, beta testing.
What is beta testing?
Beta testing involves releasing the software to a limited number of real users external to the organization. As a result, this type of testing is done in a production environment.
These users will then be asked to provide their feedback on the release, also named “beta version”. Beta testing, then, is an opportunity that allows users to test out the product to uncover any bugs or issues related to user experience (UX) before it is rolled out to the rest of your users.
In other words, it represents the final stage of testing before releasing the product to a wider audience.
It also enables teams to run security and reliability tests as those tests cannot be conducted in a staging or lab environment.
There are many ways to implement beta testing. For example, companies will often ask a select number of users to willingly opt in to get early access to the software. The advantage of this is that these users will be aware that the beta version may not be very stable, so they are more forgiving of any potential bugs and are happy to provide the feedback you need to optimize your product.
To be more specific, you may go for a closed or open beta test. In an open test, anyone can use the product but users are given a clear indication that the product is a beta version so they know that it’s still a work in progress.
Meanwhile, in a closed test, as in the example given above, the testing is limited to a specific set of users, which would be by invite only. These users would be composed of early adopters, current customers or even paid beta testers.
Such exclusivity is one way to build close relationships with specific users as you are demonstrating that you value their opinion in particular before doing a wider release.
The advantage of this testing is clear. It is the first chance to test how the software will behave in real-world settings and observe how your end-users interact with it and what the user experience looks like.
Product managers, in particular, can make use of the feedback received to collect ideas and suggestions when planning future releases.
Beta testing is a way these managers can observe usage behavior and analytics to confirm that users are interacting with the product as expected. They may also run experiments and A/B tests of features to decide which one to choose for a general release.
This, in turn, allows developers to uncover any bugs in real production and less controlled environments so that they may be fixed before a full launch.
Many bugs may have been discovered during alpha testing by your internal users but nothing can truly simulate real world users, which is why beta testing is necessary after alpha testing.
However, as we’ve seen, beta testing is conducted in real environments rather than the controlled environments of alpha testing, and so it is more difficult to control.
Feature flags and beta testing: safer testing in production
During beta testing, you are essentially testing in production, which doesn’t come without its risks but luckily there is a way to mitigate those risks: by using feature flags.
A feature flag is a software development tool that helps decouple deployment from release, giving you full control over the release process. With feature flags, you can perform beta tests by enabling features for certain users and turning them off for everyone else.
Feature flags also act as a kill switch so that you can gradually roll out features to users to test performance and if something goes wrong, you can just as easily roll it back or turn off the buggy feature.
Feature flags are a great way for all teams within an organization to carry out beta testing: even non-technical users, such as product and marketing teams, can turn on features for specific users, making them far less reliant on development teams.
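A minimal sketch of a closed beta behind a flag, with invented user IDs and helper names rather than any specific feature-flagging product:

```typescript
// Invited users see the beta feature; everyone else keeps the stable path,
// and the kill switch reverts the beta instantly if something breaks.
const betaTesters = new Set(["user-17", "user-42", "user-88"]);
let betaKillSwitch = false;

function showBetaFeature(userId: string): boolean {
  if (betaKillSwitch) return false; // instant rollback, no redeploy
  return betaTesters.has(userId);
}

// In practice, non-technical teams would flip the kill switch or edit the
// cohort from a dashboard rather than asking engineering to redeploy.
console.log(showBetaFeature("user-42")); // true
console.log(showBetaFeature("user-99")); // false
```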
Alpha vs beta testing
The major advantage of both types of testing is that they help the development team identify issues before the product goes to launch, allowing them to fix these issues early on, ahead of a full release.
However, as already alluded to in above sections, there are still major differences between these two types of testing, some of which are summarized in the table below.
| | Alpha α | Beta β |
| --- | --- | --- |
| Testers | Internal employees | End-users or customers outside the organization |
| Environment | Requires a specific (staging) testing environment | Does not require a dedicated testing environment; runs in production |
| What’s tested | Functionality and usability; security and reliability are not tested in depth | Reliability, security and stability are key aspects |
| Testing technique | Both white box and black box techniques | Mainly black box testing |
| When | Before the product officially launches to the market | After alpha testing, on a limited external audience, before the full launch |
| Purpose | Verify the product works as it should and evaluate product quality | Understand how real users interact with the product and evaluate customer satisfaction |
| Duration | Long execution cycle | Short process, usually lasting only a few weeks |
| Post-test actions | Any bugs or issues discovered are rectified immediately | Most issues identified and feedback received are implemented as improvements in future versions of the product |
Conclusion
Clearly, testing is important to ensure the delivery of high quality, bug-free releases. There are a number of tests carried out throughout a software’s life cycle, each of which serves a unique purpose.
Here we looked at two important ones that occur towards the end of a software’s life cycle: alpha and beta testing.
Both alpha and beta tests are an important part of the testing process as they provide a valuable means to highlight crucial issues with your releases and provide user feedback, both internally and externally.
Alpha testing helps validate the quality of your software while beta testing allows you to obtain real-world feedback to ensure you’re building products that your customers actually like.
Therefore, in the testing lifecycle, both alpha and beta testing are essential.