
The Ultimate Guide to Experience Rollouts Using Feature Flags

In modern software development, DevOps teams have shifted their attention to the continuous delivery of features to keep up with fast-changing market and consumer demands.

Now more than ever, teams have to be in the driver’s seat when it comes to deciding which features to deliver and to whom.

This is why feature flags (or feature toggles) have become the ultimate tool to manage the release of new features.

What are experience rollouts?

When we talk about experience rollouts, we’re referring to the risk-free deployment of features that improve and optimize the customer experience.

This could be in the form of progressive deployments where features are gradually released to reduce the risk of big bang releases or by targeting new features to the most relevant users in order to personalize their experience.

But how do you ensure you’re delivering optimal experiences without negatively impacting the user experience? How can you minimize risk when rolling out new features and ensure that they actually meet your customers’ needs and expectations?

The answer to both these questions is feature flags.

Feature flags are a great solution that allows you to continuously deliver new features while limiting user access to them, thereby reducing risk.

By decoupling deployment from release, feature flags give teams the power to choose who to send new features to and when. Thus, teams can continuously develop and deliver new features without having to make them available to all users.

What are feature flags?

Let’s start with the most basic definition of feature flags.

Feature flags are a software development tool that enables teams to turn functionalities on or off in order to safely test new features by separating code deployment from feature release.

They can also be referred to as feature toggles, as they allow you to toggle a feature on or off by hiding it behind a flag and then deciding who can see it.
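
To make this concrete, here is a minimal sketch of what a flag check can look like in application code. The flag store and flag names are hypothetical illustrations, not AB Tasty’s actual SDK:

```typescript
// Minimal feature-flag sketch (hypothetical flag store, not AB Tasty's SDK).
type FlagStore = Record<string, boolean>;

// In practice this would come from a remote config service or SDK.
const flags: FlagStore = {
  "new-checkout-flow": false, // feature is deployed but toggled off
};

function isEnabled(flag: string): boolean {
  return flags[flag] ?? false; // unknown flags default to off
}

// The new code path ships with the deploy, but stays dark until the flag flips.
if (isEnabled("new-checkout-flow")) {
  // renderNewCheckout();
} else {
  // renderLegacyCheckout();
}
```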

This is particularly useful when you’re looking to personalize the customer experience according to the type of user. It means you can enable features for certain users only, displaying the right content to the right audience at the right time, while tracking performance over time.

With AB Tasty Rollouts, you can configure personalization campaigns; for example, you can personalize the user experience for new visitors on mobile only and show them discount codes as a welcome offer. You would define the targeted users and the flag (with its value) that activates the discount code for that scenario, in this case new visitors on mobile, all while monitoring the relevant KPIs.

Feature flags can be leveraged across different use cases. This is because there are many different types and categories of feature flags, and which one you choose depends on the purpose of using the flag in the first place.

For example, release toggles support dev teams as they write new features while experiment toggles are primarily used by product and marketing teams to facilitate A/B testing.

For this reason, feature flags can be used across a wide variety of use cases by multiple teams across an organization, especially when you have a feature management solution to manage all your flags.

In particular, feature flags give teams a very granular level of control and risk management over code, which can be important when modifying backend features that have a wide-ranging impact on how your system performs.

Read more: When to make the leap from client- to server-side testing and how feature flags can help you seamlessly carry out server-side experiments

The following section will provide further details on what the term “experience rollouts” entails and discuss how feature flags can help you regain control of how you roll out experiences to your customers at the flip of a switch.

  • Progressive deployment and rollouts

Perhaps one of the greatest benefits of feature flags is their ability to mitigate risk when it comes to new feature releases.

This is because feature flags empower teams to release their features to any user groups of their choice.

Therefore, teams can safely test out their new features on a preselected group of users, whether internal or external, to validate functionality and gather feedback in order to make any necessary changes and optimize future feature releases. By continuously iterating on features in real time during the release process, companies can provide more value to their customers and ensure customer satisfaction.

Sophisticated feature flagging functionalities give you the ability to closely monitor metrics that indicate how a new feature is performing and how well-received it is by users.

This way, should anything go wrong with a release, teams can minimize the blast radius and any negative impact of a faulty feature. Disabling the flag also gives them the time necessary to address the issue before releasing the feature to everyone else.

The best thing about progressive deployments and rollouts is that teams are essentially in the driver’s seat: they have control over who sees what and when, allowing them to maintain the momentum of CI/CD but with less risk.

Another great advantage of progressive rollouts is that they increase the velocity of both the development lifecycle and testing: because teams roll out releases in phases, they can quickly test on their chosen user group, make the necessary iterations, and then run more tests.
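
To illustrate the mechanics, here is a minimal sketch of how percentage-based progressive rollouts are commonly implemented: the visitor ID is hashed into a stable bucket, so assignments stay deterministic as you raise the percentage. The hashing scheme is an assumption for illustration, not AB Tasty’s internal algorithm:

```typescript
// Deterministic percentage rollout (illustrative hashing, not AB Tasty's internals).
function bucketOf(userId: string): number {
  // Simple 32-bit string hash mapped into the range 0..99.
  let hash = 0;
  for (const ch of userId) {
    hash = (hash * 31 + ch.charCodeAt(0)) | 0;
  }
  return Math.abs(hash) % 100;
}

// A user is in the rollout if their bucket falls under the current percentage.
// Raising rolloutPercent from 5 to 25 to 100 only ever *adds* users.
function inRollout(userId: string, rolloutPercent: number): boolean {
  return bucketOf(userId) < rolloutPercent;
}

console.log(inRollout("visitor-42", 10)); // stable answer for this visitor
```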

  • Rollbacks

Just as you can roll out new features and experiences to your users, you can also easily roll back these features whenever needed with the help of feature flags.

This means that if anything goes wrong with any feature you’ve rolled out to your chosen users, you can quickly disable the flag so that users no longer have access to the feature.

Releasing new features to real-world users is always a risky endeavor and can cause real harm to your brand’s relationship with users, but it doesn’t have to be that way.

Now, after any feature release, teams can isolate any faulty or buggy features and perform a targeted rollback on them. With advanced third-party feature management platforms, you can roll back a feature in real time by toggling a single field with one click.

AB Tasty is one such tool that allows you to roll out new features to subsets of users by assigning specific flag values to different user segments and comes with an automatically triggered rollback in case something goes wrong.

The automatic rollback option enables you to stop the deployment of a feature and to revert all the changes that have been made in order to ensure that your feature isn’t breaking your customer experience. This is done by defining the business KPI you want to track that would indicate the performance of your feature.

When this KPI is set, you then associate a threshold value (in %) which, if reached or exceeded, triggers the rollback. To make the rollback comparison statistically meaningful, you must also define a minimum number of visitors before it can be triggered.

When the conditions are met, the progressive rollout feature will be rolled back, which means that no more targeted users will see the feature.
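
Sketched in code, that trigger logic looks something like the following. The KPI name, threshold, and minimum-visitor values are illustrative placeholders, not AB Tasty’s actual configuration schema:

```typescript
// Automatic rollback trigger sketch (illustrative thresholds, not AB Tasty's schema).
interface RollbackRule {
  kpi: string;              // e.g. "checkout-error-rate" (hypothetical KPI name)
  thresholdPercent: number; // rollback fires if the KPI reaches/exceeds this
  minVisitors: number;      // comparison only runs once enough visitors are seen
}

function shouldRollback(
  rule: RollbackRule,
  observedPercent: number,
  visitorsSeen: number
): boolean {
  if (visitorsSeen < rule.minVisitors) return false; // not yet significant
  return observedPercent >= rule.thresholdPercent;
}

const rule: RollbackRule = {
  kpi: "checkout-error-rate",
  thresholdPercent: 5,
  minVisitors: 1000,
};

if (shouldRollback(rule, 6.2, 1500)) {
  // disableFlag("new-checkout-flow"); // no more targeted users see the feature
}
```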

  • Targeting

We’ve talked a lot about how you can use feature flags to allow certain users to see a feature while hiding it from others.

When you do a targeted rollout, you’re basically releasing new features to a predefined set of users rather than opting for the riskier big bang release.

Here’s a look at some targeting scenarios where feature flags do their best work:

  • Alpha and beta testing
  • A/B testing 
  • Managing entitlements 
  • Blocking users
  • Canary deployments/percentage rollouts 
  • Ring deployments

There are many ways teams can progressively deploy and roll out features to a select audience. With the help of feature flags, teams can manage and streamline these deployment methods to perform highly granular user targeting.

AB Tasty Rollouts allows you to target users based on identifying attributes such as beta-tester status, age group, or any other user attributes you have access to.

Furthermore, our integrations with third-party tools such as Segment, GA4, Mixpanel and Heap mean that you can also target your test and personalization use cases with audiences built in these tools and then export these user groups or cohorts to AB Tasty to target them.
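
To illustrate attribute-based targeting, here is a hedged sketch that evaluates a flag only for users whose attributes match a segment definition. The attribute names, segment rule, and flag value are assumptions for illustration, echoing the earlier new-visitors-on-mobile discount example:

```typescript
// Attribute-based targeting sketch (hypothetical attributes, not AB Tasty's API).
interface UserContext {
  id: string;
  isBetaTester: boolean;
  device: "mobile" | "desktop";
  isNewVisitor: boolean;
}

// "New visitors on mobile" segment from the earlier discount-code example.
function matchesSegment(user: UserContext): boolean {
  return user.isNewVisitor && user.device === "mobile";
}

function flagValueFor(user: UserContext): string | null {
  // Only targeted users receive the flag value that activates the feature.
  return matchesSegment(user) ? "WELCOME10" : null; // hypothetical flag value
}

const visitor: UserContext = {
  id: "v1",
  isBetaTester: false,
  device: "mobile",
  isNewVisitor: true,
};
console.log(flagValueFor(visitor)); // "WELCOME10"
```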

  • Flag management

To truly reap the benefits of feature flags, you have to know how to manage them effectively. Otherwise, you will end up with so many flags in your system that you start to lose track of which flag does what. This could ultimately lead to the most dangerous pitfall of feature flags: technical debt.

At that point, your code could become so complex that it’s difficult to manage, which negatively affects the quality of your codebase.

This is why feature management solutions are so essential for modern software development teams today. With such solutions, teams have access to advanced capabilities that enable them to work with feature flags at scale and avoid the most common problems associated with them.

AB Tasty is one solution packed with features that help you avoid the dreaded technical debt. Its clear, easy-to-use dashboard lets all your teams, from development to product, efficiently track and manage feature flag usage across your organization, no matter how far along you are in your feature flag journey.

Furthermore, flags can be controlled from another platform using AB Tasty’s Remote Control API, allowing teams to work from just one tool without having to log onto the platform. This saves a lot of time and effort, as you can perform all AB Tasty tasks directly with API calls, including managing your projects, use cases, variations, variation groups, users, targeting keys, and your flags.
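
As a rough sketch of what controlling a flag over HTTP can look like: note that the endpoint, payload shape, and auth header below are invented for illustration and are not AB Tasty’s documented Remote Control API; consult the official API reference for the real contract.

```typescript
// Toggling a flag via a REST call. The endpoint and payload are hypothetical,
// NOT AB Tasty's documented Remote Control API.
async function setFlagEnabled(flagId: string, enabled: boolean): Promise<void> {
  const response = await fetch(`https://api.example.com/flags/${flagId}`, {
    method: "PATCH",
    headers: {
      "Content-Type": "application/json",
      Authorization: "Bearer <YOUR_API_TOKEN>", // placeholder credential
    },
    body: JSON.stringify({ enabled }),
  });
  if (!response.ok) {
    throw new Error(`Flag update failed: ${response.status}`);
  }
}

// Kill a misbehaving feature without logging into the dashboard:
// await setFlagEnabled("new-checkout-flow", false);
```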

Experience rollouts with feature flags

As we’ve seen, the idea of experience rollouts revolves around rolling out your best features to end-users. This is when feature flags become the most powerful tool to ensure you’re only releasing optimal features that provide the best customer experience possible.

This is because feature flags give you the ability to progressively deploy and roll out new features and gather feedback from users, giving you the most relevant input to iterate on and optimize your releases. This helps your teams make more informed, data-driven decisions to drive your conversion rates, ultimately aligning the user experience with business objectives.

Consequently, when you finally do a full release, you’re confident that you’re releasing features that provide the most value to your customers and so will have the best impact on your business in terms of revenue and conversions.

Do you want to deliver best-in-class customer experiences? Click on the “Get a Demo” button at the top to see for yourself what feature flags can do for your own experience rollouts.



A/B, Split or Multivariate Test: How to Choose the Right One

In the fast-paced world of digital marketing, settling for anything less than the best user experience is simply not an option.

Every marketing strategy has room for improvement and unlocking more comes from recognizing hidden opportunities.

With analytics data and a little bit of creativity, you can uncover valuable insights on how to optimize the conversion rate on your website or campaign landing pages. However, turning those assumptions into structured, reliable data requires diligent testing.

Marketing professionals have steadily used different testing methodologies such as A/B testing, split testing, multivariate testing and multipage testing to increase conversion rates and enhance digital performance.

Experimenting and testing are essential as they eliminate opinions and bias from the decision-making process, ensuring data-driven decisions.

With the availability of many diverse testing options, it can be challenging to find your starting point. In this article, we’ll dive into the specifics of different forms of testing to help you navigate this testing landscape.

What is A/B testing?


A/B testing is a method of website optimization where you compare two versions of the same page: variation A and variation B. For the comparison, it’s common to look at conversion rates and the metrics that matter to your business (clicks, page views, purchases, etc.) while using live traffic.

It’s also possible to do an A/B/C/D test when you need to test more than two content variations. The A/B/C/D method will allow you to test three or more variations of a page at once instead of testing only one variation against the control version of the page.
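
For illustration, here is a minimal sketch of how live traffic can be split evenly across two or more variations. The hashing approach is a common, tool-agnostic technique shown with hypothetical names, not any specific platform’s implementation:

```typescript
// Even traffic split across N variations (illustrative, tool-agnostic).
function assignVariation(visitorId: string, variations: string[]): string {
  let hash = 0;
  for (const ch of visitorId) {
    hash = (hash * 31 + ch.charCodeAt(0)) | 0;
  }
  // The same visitor always lands in the same variation.
  return variations[Math.abs(hash) % variations.length];
}

assignVariation("visitor-7", ["A", "B"]);           // classic A/B test
assignVariation("visitor-7", ["A", "B", "C", "D"]); // A/B/C/D test
```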

When to use A/B tests?

A/B tests are an excellent method to test radically different ideas for conversion rate optimization or small changes on a page.

A/B testing is the right method to choose if you don’t have a large amount of traffic to your site. Why is this? A/B tests can deliver reliable data very quickly, without a large amount of traffic. This makes A/B testing a great approach when you want to make the most of limited testing time and achieve fast results.

If you have a high-traffic website, you can evaluate the performance of a much broader set of variations. However, there is no need to test 20 different variations of the same element, even if you have adequate traffic. It’s important to have a strategy when approaching experimentation.

Want to start testing? AB Tasty is the best-in-class experience optimization platform that empowers you to create a richer digital experience – fast. From experimentation to personalization, this solution can help you activate and engage your audience to boost your conversions.

Split testing vs A/B testing

A/B tests and split tests are essentially the same concept.

“A/B” refers to the two variations of the same URL where changes are made “live” using JavaScript on the original page. SaaS tools that provide you with a visual editor, like AB Tasty, allow you to create these changes quickly without technical knowledge.

Meanwhile, “split” refers to the traffic redirection towards one variation or another, each hosted on its own URL and fully redesigned in the code.

In practice, you can treat the two as interchangeable.

The variation page may differ in many aspects depending on the testing hypothesis you put forth and your industry goals (layout, design, pictures, headlines, sub-headlines, calls to action, offers, button colors, etc.).

In any case, the number of conversions on each page’s variation is compared once each variation gets enough visitors.

In A/B tests, the impact of the design as a whole is tracked, not individual elements – even though many design elements might be changed on variations simultaneously.

TIP: Keep in mind that testing is all about comparing the performances of variations. It’s recommended not to make too many changes between the control and variation versions of the page at the same time. You should limit the number of changes to better understand the impact of the results. In the long term, a continuous improvement process will lead to better and lasting performance.

What is multivariate testing?


Multivariate tests (or multi-variant tests) are the same as A/B tests in their core mechanism and philosophy. The difference is that multivariate testing allows you to compare a higher number of variables and the interactions between them. In other words, you can test and track changes to multiple sections on a single page.

For multivariate testing, you need to identify a few key page sections and then create variations for those sections specifically. You aren’t creating variations of a whole page as you do while A/B testing.

TIP: Use multivariate testing when several element combinations on your website or landing page are called into question.

Multivariate testing reveals more information about how these changes to multiple sections interact with one another. In multivariate tests, website traffic is split into each possible combination of a page – where the effectiveness of the changes is measured.

It’s very common to use multivariate testing to optimize an existing website or landing page without making a significant investment in redesign.

Although this type of testing can be perceived as an easier form of experimentation, keep in mind that multivariate testing is more complicated than traditional A/B testing.

Multivariate tests are best suited for more advanced testers because they give many more possibilities of combinations for visitors to experience on your website. Too many changes on a page at once can quickly add up. You don’t want to be left with a very large number of combinations that must be tested.

Multivariate test example

Let’s say that you’ve decided to run a multivariate test on one of your landing pages. You choose to change two elements on your landing page. On the first variation, you swap an image for a video, and on the second variation, you swap the image for a slider.

For each page variation, you add another version of the headline. This means that now you have three versions of the main content and two versions of the headline. This is equal to six different combinations of the landing page.

             Image           Video           Slider
Headline 1   Combination 1   Combination 2   Combination 3
Headline 2   Combination 4   Combination 5   Combination 6

After only changing two sections, you quickly have six variations. This is where multivariate testing can get tricky.
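
The combination count is simply the product of the number of versions per section (3 x 2 = 6 here). As a small illustrative sketch, here is how those combinations can be enumerated in code; the section names mirror the example above:

```typescript
// Enumerate multivariate combinations: one version per section, all crossed.
const mainContent = ["Image", "Video", "Slider"];
const headlines = ["Headline 1", "Headline 2"];

const combinations: string[][] = [];
for (const headline of headlines) {
  for (const content of mainContent) {
    combinations.push([headline, content]);
  }
}

console.log(combinations.length); // 3 x 2 = 6 combinations to split traffic across
```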

When to use multivariate testing?

Multivariate tests are recommended for sites with a large amount of daily traffic. You need a high volume of traffic to test multiple combinations, and it takes longer to obtain meaningful data from the test.

AB Tasty’s reporting allows you to weigh up each element’s impact on the conversion rate

The multivariate testing method allows you to incrementally improve an existing design, while the results can inform a larger website or landing page redesign.

What is multipage testing?

Multipage testing is an experimentation method similar to standard A/B testing. As we’ve discussed, in A/B testing, changes can be made to one specific page or to a group of pages.

If the changed element appears on several pages, you can choose whether or not to change it on each page. However, if the element is on several pages but it’s not identical, appears at a different place or has a different name, you’ll have to set up a multipage test.

Multipage tests allow you to implement changes consistently over several pages. 

This means that multipage tests allow you to link together variations of different pages and are especially useful when funnel testing.

In multipage tests, site visitors are directed into one funnel version or the other. You need to track the way visitors interact with the different pages they are shown so you can determine which funnel variation is the most effective.

You must ensure that the users see a consistent variation of changes throughout a set of pages. This is key to getting usable data and allows one variation to be fairly tested against another.
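
One common way to guarantee that consistency is to assign the funnel variation once per visitor and persist it, for example in a cookie or local storage. Below is a minimal, tool-agnostic sketch under that assumption; the storage key and assignment rule are hypothetical:

```typescript
// Sticky funnel assignment sketch (hypothetical storage key, tool-agnostic).
const FUNNEL_KEY = "funnel-variation";

function getFunnelVariation(visitorId: string): "control" | "coupon" {
  // Reuse a previous assignment so the visitor sees a consistent funnel.
  const saved = localStorage.getItem(FUNNEL_KEY);
  if (saved === "control" || saved === "coupon") return saved;

  // First visit: assign deterministically and persist the choice.
  const variation = visitorId.charCodeAt(0) % 2 === 0 ? "control" : "coupon";
  localStorage.setItem(FUNNEL_KEY, variation);
  return variation;
}
```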

Multipage test example

Let’s say you want to conduct a multipage test with a free shipping coupon displayed in the funnel at different places. You want to run the results of this test against the original purchase funnel without a coupon.

For example, you could offer visitors a free shipping coupon on a product category page, where they can see “Free shipping over €50” as a static banner on the page. Once the visitor adds a product to the shopping cart, you can show them a new dynamic message according to the cart balance: “Add €X to your cart for free shipping”.

In this case, you can experiment with the location of the message (near the “Proceed to checkout” button, near the “Continue shopping” button, near the shipping cost for their order, or somewhere else) and with the call-to-action variations of the message.

This kind of test will help you understand visitors’ purchase behavior better – i.e. how does the placement of a free shipping coupon reduce shopping cart abandonment and increase sales? After enough visitors come to the end of the purchase funnel through the different designs, you will be able to compare the effect of design styles easily and effectively.

How to test successfully?

Remember that the pages being tested need to receive substantial traffic so the tests will give you some relevant data to analyze.

Whether you use A/B testing, split testing, multivariate testing or multipage testing to increase your conversion rate or performance, remember to use them wisely.

Each type of test has its own requirements and is uniquely suited to specific situations, with advantages and disadvantages.

Using the proper test for the right situation will help you get the most out of your site and the best return on investment for your testing campaign. Even though testing follows a scientific method, there is no need for a degree in statistics when working with AB Tasty.

Related: How long you should run a test and how statistics calculation works with AB Tasty