Personalization is a hypothesis that needs to be tested
Ben Combe, Data Director, Optimization & Personalization APAC at Monks
Hosted by Julia Simon, VP APAC at AB Tasty
Conversion Rate Optimization (CRO) is a user-centric approach that emphasizes long-term benefits over simply leading customers to click on certain elements or CTAs. To achieve this, understanding your data through experimental and scientific methods is key. In this episode, Ben Combe, Data Director, Optimization & Personalization APAC at Monks, joins Julia Simon, VP APAC at AB Tasty, to discuss CRO techniques and best practices. Together they explore where companies should start, what to prioritize, which methodologies to use, and how to execute a compelling optimization roadmap.
Whether you’re just starting your CRO journey, or you’re already a CRO expert, this session is for you!
Episode #2:
Where do you start?
Ideas flow from everywhere in the business as data collection happens perpetually. Knowing what your top priorities are is where you should start. You don’t just change the color of your CTA from blue to red because it’s Valentine’s Day and you have a gut feeling.
Ben suggests first looking at how the business is doing and where you can focus for the most impact. Should you focus on acquisition, retention, or loyalty? Identify the pain points that need solving and where they occur. Secondly, dive into your customer data by looking at your conversion points: find where customers are dropping off and combine that with your qualitative insights. Thirdly, brainstorm with your team to come up with ideas.
Prioritization Frameworks: PIE or ICE?
In CRO, time and resources are finite, therefore every experiment counts. You need clear guidelines to choose what ideas to test and what to leave behind. So it’s essential to prioritize – but should you use PIE or ICE?
If you’re just starting your experimentation journey, Ben recommends looking at traffic, value, and ease. In essence: how many people visit the page, what is it worth in dollars, and what development resources would the test require? If you’re mature in CRO, a bespoke checklist tailored to your business needs is recommended.
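As a toy illustration of this traffic/value/ease scoring, here is a minimal sketch in Python. The idea names and scores are hypothetical, and the equal-weight average is just one possible scoring rule, not a prescribed formula:

```python
def prioritize(ideas):
    """Rank test ideas by the average of traffic, value, and ease (1-10 each).
    An equal-weight average is one simple scoring rule; mature teams often
    weight the dimensions to fit their business."""
    return sorted(
        ideas,
        key=lambda idea: (idea["traffic"] + idea["value"] + idea["ease"]) / 3,
        reverse=True,
    )

# Hypothetical backlog: names and scores are invented for illustration.
backlog = [
    {"name": "Checkout CTA copy",   "traffic": 9,  "value": 8, "ease": 9},
    {"name": "Homepage hero image", "traffic": 10, "value": 5, "ease": 6},
    {"name": "New recommender",     "traffic": 6,  "value": 9, "ease": 2},
]
ranked = prioritize(backlog)
```

High-traffic, high-value, low-effort ideas bubble to the top, while costly development work (like the recommender) drops to the bottom of the queue.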
The importance of UX
Running A/B tests is a great way of conducting UX research while your product is live. It helps you decide on what works and what doesn’t work for your customers. By testing different design options, designers are able to gather valuable user feedback. This can then be used for design improvement that is more user-centric, and that leads to increased user engagement and satisfaction. Keeping the UX Team in the loop is essential for continuous learning and improvement.
The Quick Wins
Looking into easy, quick wins in the beginning of your experimentation strategy will bring you good results. Once you pick all the low-hanging fruit, Ben encourages you to shift your mindset towards a more innovative approach. Think outside the box, analyze your segments deeper, and iterate.
Synchronizing A/B Testing and Personalization
A/B testing allows you to understand the effectiveness of your personalization strategies by comparing various content, design elements, and offers. This insight allows you to deliver the experience that resonates best with customers, leading to higher engagement. It’s important to note that no personalization should go live without being tested. Behaviors change, and it’s necessary to continuously experiment to validate that your personalization is still relevant.
Rand Fishkin discusses the importance of “non-attributable” marketing and why companies should take more risks and allow themselves the freedom to fail.
Rand Fishkin is the co-founder and CEO of SparkToro, a software company that specializes in audience research for targeted marketing. Previously, Rand was the co-founder and CEO of Moz, where he started SEOmoz as a blog that turned into a consulting company, then a software business. Over his seven years as CEO, Rand grew the company to 130+ employees, $30M+ in revenue, and brought website traffic to 30M+ visitors/year.
He’s also dedicated his professional life to helping people do better marketing through his writing, videos, speaking, and his latest book, Lost and Founder.
AB Tasty’s VP Marketing Marylin Montoya spoke with Rand Fishkin about the culture of experimentation and fear of failure when it comes to marketing channels and investments. Rand also shares some of his recommendations on how to get your brand in front of the right audience.
Here are some key takeaways from their conversation.
Taking a more risk-based approach
Rand believes there’s so much focus on large markets and the typical venture path that people often overlook the enormous potential of smaller markets. In that sense, founders become biased toward huge total addressable markets.
“They don’t consider: here’s this tiny group of people. Maybe there are only 3,000 or 4,000 people or companies who really need this product, but if I make it for them, they’re going to love it. I think that there’s a tremendous amount of opportunity there, if folks would get out of their head that you have to look for a big market,” Rand says.
People avoid such opportunities because of the regulatory challenges, restrictions, and other barriers to entry that often come with them, but for Rand, these underserved markets are worth the risk because competition is scarce. There’s a real potential to build something truly special for those willing to overcome the challenges that come with it, Rand argues.
There are plenty of underserved niches and business opportunities out there in the tech world, if companies would only shift away from the “growth at all costs” mentality.
“The thing about being profitable is once you’re there, no one can take the business from you. You can just keep iterating and finding that market, finding new customers, finding new opportunities. But if you are constantly trying to chase growth unprofitably and get to the metrics needed for your next round, you know all that goes out the window,” Rand says.
Freedom to fail
Similarly, Rand states that there’s a huge competitive advantage in committing resources toward marketing channels where attribution is hard or impossible because no one else is investing in these kinds of channels. That’s where Rand believes companies should allocate their resources.
“If you take the worst-performing 10 or 20% of your ads budget, your performance budget, and you shift that over to hard-to-measure, experimental, serendipitous, long-term brand investment types of channels, you are going to see extraordinary results.”
However, the problem is getting buy-in from more senior stakeholders within a company for these “hard-to-attribute” and “hard-to-measure” channels. In other words, they refuse to invest in channels where they can’t prove attribution – a change in conversion rate or sales – or return on investment. Thus, any channel that is poor at providing proof of attribution gets underinvested. Rand strongly believes that it’s still possible to get clicks on an organic listing of your website and get conversions even if a brand doesn’t spend anything on ads.
“I think brand and PR and content and social and search and all these other organic things are a huge part of it. But ads are where those companies can charge because the CEO, CMO, CFO haven’t figured out that believing in hard-to-measure channels and hard-to-attribute channels and putting some of your budget towards experimental stuff is the right way to do things,” Rand argues.
According to Rand, these are exactly the kinds of channels where more resources need to be allocated as they generate a higher return on investment than any ad a company might spend on the more typical and bigger name platforms.
“Your job is to go find the places your audience pays attention to and figure out what your brand could do to be present in those places and recommended by the people who own those channels.”
According to Rand, there is a learning curve in finding the message that resonates with this audience, the content that drives their interest, and the platforms where you can connect with them; all of this depends on who your audience is.
Experiment with AI
For Rand, the AI boom is more realistic and interesting than previous big tech trends. He sees its biggest advantage in tackling the kinds of large organizational problems that are well suited to generative AI built on large language models.
However, it’s important not to insert AI in a business or create problems just for the sake of using it or to apply it to the wrong places.
“If you find that stuff fascinating and you want to experiment with it and learn more about it, that’s great. I think that’s an awesome thing to do. Just don’t go trying to create problems just to use it.”
He believes the best use case for AI is for tedious jobs that would otherwise be too time-consuming, as opposed to using it for tactical or strategic marketing advice. Nonetheless, he does believe that there are a lot of interesting and useful solutions and products being built with AI that will solve many problems.
What else can you learn from our conversation with Rand Fishkin?
The importance of brand and long-term brand investments
Why it’s hard to get leadership to shift away from common ad platforms
How social networks have become “closed networks”
Why attention needs to shift to your audience and how they can become “recommenders” of your product
About Rand Fishkin
Rand Fishkin is the co-founder and CEO of SparkToro, a software company that makes audience research accessible to everyone. He’s also the founder and former CEO of Moz and co-founded Inbound.org alongside Dharmesh Shah, which was sold to HubSpot in 2014. Over the years, Rand has become a frequent worldwide keynote speaker on marketing and entrepreneurship, with a mission to help people do better marketing.
About 1,000 Experiments Club
The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.
The opportunity cost of NOT testing is never knowing how much revenue you are losing from not knowing.
Dave Anderson, VP Product Marketing and Strategy
We are living in a time where people treat products and services as commodities. Customers of today expect an experience alongside whatever they have purchased. Optimizing digital experiences can directly impact a company’s bottom line by improving conversion rates, reducing customer frustration, and enhancing brand sentiment.
Hosted by Julia Simon, VP APAC at AB Tasty
Featuring Dave Anderson, VP Product Marketing and Strategy at Contentsquare
In this episode, Dave joins Julia to discuss various facets of customer experience and experimentation trends in Asia Pacific. They unravel key insights on the impact of Customer Experience (CX) Optimization on revenue generation, the widespread adoption of optimization practices across industries, the importance of collaboration between teams, and the value of continuous experimentation.
Dive deep into Episode #1
1. Impact of CX Optimization on Revenue:
Businesses that focus on understanding the needs of their customers increase revenue by turning new buyers into loyal customers and keeping loyal customers purchasing consistently. Providing a great customer experience directly impacts a company’s bottom line by improving conversion rates, reducing customer frustration, and, in the long run, increasing customer lifetime value.
2. Adoption of Optimization Practices Across Industries:
Virtually every industry including education, finance, retail, and telecommunications is now embracing CX optimization as a means to meet evolving customer expectations. They discuss how companies leverage social proof, countdown banners, personalization strategies and more to enhance digital experiences and stay competitive in today’s market.
3. Importance of Collaboration Between Teams:
Collaboration between different teams in an organization is key to driving a successful CX strategy. The need for alignment between UX, product, tech, and marketing teams is important to ensure that optimization efforts are cohesive and well executed.
4. Value of Continuous Experimentation:
Continuous experimentation is the cornerstone of a successful optimization strategy. The conversation also underscores the importance of testing hypotheses, analyzing results, and iterating based on insights to drive ongoing improvements in digital experiences. Closing this section, they agree that organizations need to adopt a culture of experimentation and data-driven decision-making to remain agile and responsive to evolving customer needs.
AB Tasty and Google BigQuery have joined forces to provide seamless integration, enabling customers with extensive datasets to access insights, automate, and make data-driven decisions to push their experimentation efforts forward.
We have often discussed the complexity of understanding data to power your experimentation program. When companies are dealing with massive datasets they need to find an agile and effective way to allow that information to enrich their testing performance and to identify patterns, trends, and insights.
Go further with data analytics
Google BigQuery is a fully managed cloud data warehouse solution, which enables quick storage and analysis of vast amounts of data. This serverless platform is highly scalable and cost-effective, tailored to support businesses in analyzing extensive datasets for making well-informed decisions.
With Google BigQuery, users can effortlessly execute complex analytical SQL queries, leveraging its integrated machine-learning capabilities.
This integration with AB Tasty’s experience optimization platform means customers with large datasets can use BigQuery to store and analyze large volumes of testing data. By leveraging BigQuery’s capabilities, you can streamline data analysis processes, accelerate experimentation cycles, and drive innovation more effectively.
Here are some of the many benefits of Google BigQuery’s integration with AB Tasty to help you test better:
BigQuery as a data source
With AB Tasty’s integration, specific data from AB Tasty can be sent regularly to your BigQuery dataset. Each Data Ingestion Task has a name, an SQL query defining what you need, and a scheduled frequency for data retrieval. This data can then power highly focused ads and messages, making it easier to reach the right people.
Centralized storage of data from AB Tasty
The AB Tasty and BigQuery integration also simplifies campaign analysis by eliminating the need for separate SQL or BI tools: the dashboard displays a clear comparison of metrics on a single page, enhancing efficiency. You can leverage BigQuery for experiment analysis without duplicating reporting in AB Tasty, getting the best of both platforms. Incorporate complex metrics and segments by querying our enriched events dataset, and link event data with critical business data from other platforms. Whether through web or feature experimentation, it means more accurate experiments at scale to drive business growth and success.
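To illustrate the kind of per-variation analysis this enables, here is a minimal sketch using an in-memory SQLite table as a stand-in for a BigQuery events dataset. The schema, table name, and rows are hypothetical, not AB Tasty’s actual export format; the same GROUP BY pattern would apply in BigQuery SQL:

```python
import sqlite3

# Hypothetical schema: one row per visitor with the variation seen,
# a conversion flag, and any revenue generated.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE experiment_events (
        visitor_id TEXT, variation TEXT, converted INTEGER, revenue REAL
    )
""")
rows = [
    ("v1", "original",    1, 80.0), ("v2", "original",    0, 0.0),
    ("v3", "original",    0, 0.0),  ("v4", "variation_1", 1, 95.0),
    ("v5", "variation_1", 1, 60.0), ("v6", "variation_1", 0, 0.0),
]
conn.executemany("INSERT INTO experiment_events VALUES (?, ?, ?, ?)", rows)

# Per-variation visitors, conversion rate, and revenue in a single query.
report = conn.execute("""
    SELECT variation,
           COUNT(*)      AS visitors,
           AVG(converted) AS conversion_rate,
           SUM(revenue)   AS total_revenue
    FROM experiment_events
    GROUP BY variation
    ORDER BY variation
""").fetchall()
```

The benefit at scale is that the same query can join experiment events against CRM or order data living in the same warehouse, with no manual exports.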
Machine learning
BigQuery can also be used for machine learning on experimentation programs, helping you to predict outcomes and better understand your specific goals. BigQuery gives you AI-driven predictive analytics for scaling personalized multichannel campaigns, free from attribution complexities or uncertainties. Access segments that dynamically adjust to real-time customer behavior, unlocking flexible, personalized, and data-driven marketing strategies to feed into your experiments.
Enhanced segmentation and comprehensive insight
BigQuery’s ability to analyze behavioral data means that you can segment better: its data segmentation allows for categorizing users based on various attributes or behaviors. With the data sent to BigQuery from experiments, you can create personalized content or features tailored to specific user groups, optimizing engagement and conversion rates.
Finally, the massive benefit of this integration is to get joined-up reporting – fully automated and actionable reports on experimentation, plus the ability to feed data from other sources to get the full picture.
A continued partnership
This integration comes after Google named AB Tasty an official Google Cloud Partner last year, making us available on the Google Cloud Marketplace to streamline marketplace transactions. We are also fully integrated with Google Analytics 4, and we were thrilled to be named one of Google’s preferred vendors for experimentation after the Google Optimize sunset.
As we continue to work closely with the tech giant to help our customers continue to grow, you can find out more about this integration here.
When it comes to CRO, or Conversion Rate Optimization, it would be natural to assume that conversion is all that matters. At least, we can argue that conversion rate is at the heart of most experiments. However, the ultimate goal is to raise revenue, so why does the CRO world put so much emphasis on conversion rates?
In this article, we’ll shed some light on the reason why conversion rate is important and why it’s not just conversions that should be considered.
Why is conversion rate so important?
Let’s start off with the three technical reasons why CRO places such importance on conversion rates:
Conversion is a generic term. It covers an e-commerce visitor becoming a customer by buying something, or simply that visitor going further than the homepage, clicking through to a product page, or adding a product to the cart. In that sense, it’s the Swiss Army knife of CRO.
Conversion statistics are far simpler than statistics on other KPIs from a maths point of view. In terms of measurement, it’s binary: success or failure. This means off-the-shelf code or simple spreadsheet formulas can compute decision statistics such as the chance to win or confidence intervals on the expected gain. This is not as easy for other metrics, as we will see later with Average Order Value (AOV).
Conversion analysis is also the simplest when it comes to decision-making. There’s (almost) no scenario where raising the number of conversions is a bad thing, so deciding whether to put a variation in production is easy when you know the conversion rate will rise. The same can’t be said for “multiple conversions” metrics: unlike the conversion rate, which counts at most one conversion per visitor even if that visitor made two purchases, every conversion counts, which is often more complex to analyze. For example, the number of product pages seen by an e-commerce visitor is harder to interpret: a variation increasing this number could mean the catalog is more engaging, or it could mean visitors are struggling to find what they’re looking for.
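Because conversion data is binary, the “chance to win” mentioned above can be sketched in a few lines. The snippet below uses a standard Bayesian approach with Beta(1, 1) priors and Monte-Carlo sampling; it is an illustration of the general technique, not necessarily the exact index any given testing tool reports, and the conversion counts are invented:

```python
import random

def chance_to_win(conversions_a, visitors_a, conversions_b, visitors_b,
                  draws=20_000, seed=42):
    """Posterior probability that B's true conversion rate beats A's,
    estimated by sampling each rate's Beta posterior (Beta(1,1) prior)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        rate_a = rng.betavariate(1 + conversions_a, 1 + visitors_a - conversions_a)
        rate_b = rng.betavariate(1 + conversions_b, 1 + visitors_b - conversions_b)
        if rate_b > rate_a:
            wins += 1
    return wins / draws

# A: 200 conversions out of 10,000 visitors (2.0%)
# B: 260 conversions out of 10,000 visitors (2.6%)
p_b_wins = chance_to_win(200, 10_000, 260, 10_000)
```

With a clear lift like this one, the chance to win lands very close to 1; no such off-the-shelf shortcut exists for heavy-tailed metrics like AOV.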
Due to the aforementioned reasons, the conversion rate is the starting point of all CRO journeys. However, conversion rate on its own is not enough. It’s also important to pay attention to other factors other than conversions to optimize revenue.
Beyond conversion rate
Before we delve into a more complex analysis, we’ll take a look at some simpler metrics. This includes ones that are not directly linked to transactions such as “add to cart” or “viewed at least one product page”.
If such a metric is statistically assured to win, it’s a good choice to put the variation into production, with one exception: if the variation is very costly, you will need to dig deeper to ensure that the gains will cover the costs. This can occur, for example, if the variation includes a product recommender system that comes at a cost.
The bounce rate is also simple and straightforward; the only difference from the conversion rate is that the aim is to keep the figure down. But the main idea is the same: if you change your homepage image and you see the bounce rate statistically drop, then it’s a good idea to put it in production.
We will now move onto a more complex metric, the transaction rate, which is directly linked to the revenue.
Let’s start with a scenario where the transaction rate goes up. You assume that you will get more transactions with the same traffic, so the only way it could be a bad thing is if you earn less in the end, meaning your average order value (AOV) has plummeted. The basic revenue formula shows it explicitly:
Total revenue = traffic * transaction rate * AOV
Since we consider traffic as an external factor, then the only way to have a higher total revenue is to have an increase in both transaction rate and AOV or have at least one of them increase while the other remains stable. This means we also need to check the AOV evolution, which is much more complicated.
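The trade-off is easy to see in numbers. Here is a minimal sketch of the formula above; all traffic, rate, and AOV figures are invented for illustration:

```python
def total_revenue(traffic, transaction_rate, aov):
    """Total revenue = traffic * transaction rate * AOV."""
    return traffic * transaction_rate * aov

# Baseline: 100,000 visitors, 2.0% transaction rate, EUR 80 average order value.
baseline = total_revenue(100_000, 0.020, 80.0)   # ~EUR 160,000

# Variation: transaction rate rises to 2.3%, but the AOV drops to EUR 68,
# e.g. because the change nudges visitors toward cheaper products.
variation = total_revenue(100_000, 0.023, 68.0)  # ~EUR 156,400
```

Despite a higher transaction rate, the variation earns less overall, which is exactly why the AOV check below matters.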
On the surface, it looks simple: take the sum of all transactions and divide that by the number of transactions and you have the AOV. While the formula seems basic, the data isn’t. In this case, it’s not just either success or failure; it’s different values that can widely vary.
Below is a histogram of transaction values from a retail ecommerce website. The horizontal axis represents values (in €), the vertical axis is the proportion of transactions with this value. Here we can see that most values are spread between 0 and €200, with a peak at ~€50.
The right part of this curve shows a “long/fat tail”. Now let’s try to see how the difference within this kind of data is hard to spot. See the same graph below but with higher values, from €400 to €1000. You will also notice another histogram (in orange) of the same values but offset by €10.
We see that the €10 offset, which corresponds to a 10-unit shift to the right, is hard to distinguish. And since it affects the highest values, this part has a huge influence when averaging samples. Due to the shape of this transaction value distribution, any measure of the average value is somewhat blurred, which makes it very difficult to obtain clear statistical indices. For this reason, changes in AOV need to be very drastic, or measured over a huge dataset, to be statistically confirmed, making AOV difficult to use in CRO.
Another important feature is hidden even further on the right of the horizontal axis. Here’s another zoom on the same graph, with the horizontal axis ranging from €1000 to €4500. This time only one curve is shown.
From the previous graph, we could have easily assumed that €1000 was the end, but it’s not. Even with a most common transaction value at €50, there are still some transactions above €1000, and even some over €3000. We call these extreme values.
As a result, whether these high values exist or not makes a big difference. Since these values occur only rarely, they will not be evenly spread across variations, which can artificially create a difference when computing AOV. By artificially, we mean the difference comes from a small number of visitors and so doesn’t really count as “statistically significant.” Also, keep in mind that customer behavior is not the same when buying for €50 as when making a purchase of more than €3,000.
There’s not much to do about this except know that it exists. One good practice, though, is to separate B2B and B2C visitors if you can, since B2B transaction values are typically larger and less frequent. Setting them apart will limit these problems.
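A small simulation shows how a single extreme order can move the AOV on its own. The order values below are randomly generated, not real data; the two variations behave identically except that one happens to receive a single €3,000 order:

```python
import random
import statistics

rng = random.Random(7)

# 200 "typical" orders between EUR 20 and EUR 200, shared by both variations.
orders_a = [round(rng.uniform(20, 200), 2) for _ in range(200)]
orders_b = list(orders_a)  # identical typical behaviour...
orders_b[0] = 3000.0       # ...plus one extreme EUR 3,000 order

aov_a = statistics.mean(orders_a)
aov_b = statistics.mean(orders_b)
gap = aov_b - aov_a  # driven entirely by one visitor out of 200
```

One order out of two hundred shifts the AOV by well over €10, a gap that could easily be mistaken for a real effect of the variation.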
What does this mean for AOV?
There are three important things to keep in mind when it comes to AOV:
Don’t trust the basic AOV calculation: the difference you are seeing probably does not exist, and quite often isn’t even in the observed direction! It’s only displayed to give an order of magnitude for interpreting changes in conversion rates, and shouldn’t be used to claim a difference between variations’ AOV. That’s why we use a specific test, the Mann-Whitney U test, which is adapted to this kind of data.
You should only believe the statistical index on AOV, which is only valid to assess the direction of the difference between AOV, not its size. For example, you notice a +€5 AOV difference and the statistical index is 95%; this only means that you can be 95% sure that you will have an AOV gain, but not that it will be €5.
Since transaction data is far noisier than conversion data, it needs stronger differences or bigger datasets to reach statistical significance. And since there are always fewer transactions than visitors, reaching significance on the conversion rate doesn’t imply significance on AOV.
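For readers curious about the mechanics of the Mann-Whitney U test mentioned above, here is a self-contained sketch of the U statistic with a normal-approximation p-value. It omits the tie correction, so it is for illustration only; in practice you would use a library implementation such as scipy.stats.mannwhitneyu:

```python
import math

def average_ranks(values):
    """1-based ranks over a combined sample, tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks

def mann_whitney_u(sample_a, sample_b):
    """U statistics plus a two-sided normal-approximation p-value (no tie correction)."""
    n_a, n_b = len(sample_a), len(sample_b)
    ranks = average_ranks(list(sample_a) + list(sample_b))
    u_a = sum(ranks[:n_a]) - n_a * (n_a + 1) / 2
    u_b = n_a * n_b - u_a
    mu = n_a * n_b / 2
    sigma = math.sqrt(n_a * n_b * (n_a + n_b + 1) / 12)
    z = (min(u_a, u_b) - mu) / sigma
    p = 2 * 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return u_a, u_b, min(p, 1.0)

# Extreme case: every order in B is larger than every order in A.
u_a, u_b, p = mann_whitney_u([1, 2, 3], [4, 5, 6])
```

Because the test compares ranks rather than raw euro amounts, a single €3,000 order can no longer dominate the result the way it dominates a plain average.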
This means that a decision on a variation that has a conversion rate gain can still be complex because we rarely have a clear answer about the variation effect on the AOV.
This is yet another reason to have a clear experimentation protocol including an explicit hypothesis.
For example, if the test shows an alternate product page layout based on the hypothesis that visitors have trouble reading the product page, then the AOV should not be impacted. If the conversion rate rises, we can validate the winner as long as the AOV shows no strong statistical downward trend. However, if the changes are in the product recommender system, which might impact the AOV, then one should be stricter about establishing a statistically neutral effect on the AOV before calling a winner. For example, the recommender might bias visitors toward cheaper products, boosting sales numbers but not overall revenue.
The real driving force behind CRO
We’ve seen that the conversion rate is at the base of CRO practice because of its simplicity and versatility compared to all other KPIs. Nonetheless, this simplicity must not be taken for granted. It sometimes hides more complexity that needs to be understood in order to make profitable business decisions, which is why it’s a good idea to have expert resources during your CRO journey.
That’s why at AB Tasty, our philosophy is not only about providing top-notch software but also Customer Success accompaniment.
In the ever-evolving landscape of fashion and e-commerce, digital innovation has become a driving force behind transforming the customer experience. The intersection of technology and fashion has given rise to new opportunities for brands to connect with their customers in more meaningful and engaging ways.
In this guest blog post from Conversio, a leading UK-based optimization and analytics agency, we explore key trends in fashion e-commerce and how brands can leverage digital strategies to enhance the customer experience.
1. The Mobile Customer: Shopping on the Go
The mobile customer has become a dominant force in the fashion industry. Today’s consumers expect a seamless and intuitive mobile experience when browsing, shopping, and making purchases. Brands must prioritize mobile optimization, ensuring their websites and apps are responsive, fast-loading, and user-friendly. By providing a frictionless mobile experience, fashion brands can capture the attention and loyalty of the on-the-go consumer.
2. The Rise of Social: Influencing Fashion Choices
Social media platforms have revolutionized the way we discover, engage with, and purchase fashion items. From influencers showcasing the latest trends to shoppable posts and personalized recommendations, social media has become an integral part of the customer journey. Fashion brands must embrace social commerce and leverage these platforms to connect with their audience, build brand awareness, and drive conversions. By actively engaging with customers on social media, brands can create a community around their products and foster brand loyalty.
3. Increasing Returns Rates: The Challenge of Fit and Expectations
One of the ongoing challenges in fashion e-commerce is the issue of increasing returns rates. Customers want convenience and flexibility when it comes to trying on and returning items. Brands must address this challenge by providing accurate size guides, detailed product descriptions, and visual representations. Additionally, incorporating virtual try-on technologies and utilizing user-generated content can help improve the customer’s confidence in their purchase decisions and reduce returns rates.
4. Measuring the Customer Experience
To truly enhance the customer experience, brands must measure and analyze key metrics to gain insights into their customers’ behaviors and preferences. Conversion rate optimization (CRO) is a crucial aspect of this process. By A/B testing, tracking and optimizing conversion rates, brands can identify areas for improvement and implement strategies to increase conversions. Additionally, measuring customer satisfaction, engagement, and loyalty through surveys, feedback, and data analytics can provide valuable insights into the effectiveness of the customer experience.
5. Improving the Fashion CX through Experimentation
To stay ahead in the competitive fashion industry, brands must embrace a culture of experimentation. A/B testing different elements of the customer experience, such as website layout, product recommendations, and personalized messaging, can help identify what resonates best with customers. By continuously iterating and refining their digital strategies, fashion brands can deliver a more tailored and enjoyable experience for their customers.
Our Key Takeaways
As fashion brands navigate the digital landscape, there are several key takeaways to keep in mind:
Brand Perception: Recognise that 90% of new customers won’t see your homepage. Focus on delivering a consistent and compelling brand experience across all touchpoints.
Post-Purchase: Extend your focus beyond the conversion. Invest in post-purchase experiences, such as order tracking, personalised recommendations, and exceptional customer service, to foster customer loyalty and encourage repeat purchases.
Measure Everything: Establish a robust measurement framework to track and validate the value of your content, campaigns, and overall customer experience. Leverage data to make data-driven decisions and continuously optimize your strategies.
In conclusion, digital fashion has reshaped the customer experience, offering new avenues for engagement, personalization, and convenience. By understanding and embracing key trends, testing and measuring customer experience, and experimenting with innovative strategies, fashion brands can successfully navigate the digital landscape and deliver exceptional experiences that resonate with their target audience.
Good things happen to those who change. And that’s exactly what we did.
Change is what propels us towards progress. Change is how we find our better. Change is how we dare to go further.
Today marks a significant day in our history as a company. Today, we’re thrilled to share our updated brand identity with you. We’re stepping into a new era that better aligns our forever commitment to “test and learn” with our position in the market as a partner that helps brands push ideas even further.
With over 13 years in the industry, we’ve seen dynamic changes in the market. Brands now understand the importance and impact of continual experience optimization. The thriving experimentation sector has launched us into our most successful financial quarters following our strategic technology acquisitions. Beyond our strengthened AI and personalization portfolios, it’s become crystal clear that what makes us unique is our people. And our people are what make our customers happy.
Time to Talk Tasty
You may have noticed a few recent changes to AB Tasty – and we don’t mean just our new brand colors.
“Electric Blue” and “Crash Test Yellow”
Although our vibrant visual identity may catch you by surprise, our rebrand is much more than just a cosmetic makeover. We’ve been very intentional with our decisions at each step of the way.
Over the past 14 months, we’ve embraced some exciting technological advancements within our platform:
In October 2022, we saw a big need in the market for more personalization and acquired a company specializing in recommendations and search solutions.
In June 2023, we extended our personalization offering to help teams better cater to their different audiences and compete on a higher level. We acquired an emotions-based, personalization technology that enriches and expands our portfolio.
Then, we unified those platforms with our own API-based experimentation, personalization engine, and web solution.
Now, we’re happy to say that we are one unified platform offering everything that brands need for complete experience optimization. With our new brand identity, we proudly promote everything we are, everything we can be, and everything we want to be.
Our strategic shift in branding was the logical next step after our tremendous period of growth.
New Look, Same Commitment
One thing hasn’t changed – and that’s our commitment to our clients. They are, and always will be our focus.
Everything we’ve done will better suit the needs of our clients. Unifying our products into one harmonious platform allows for endless optimization opportunities, and our messaging reflects our human touch and leading expertise.
We are the optimization partners pushing brave ideas from the inside out.
Our Brand Story
Our clients need to be different, not just better. And for that, they need an optimization partner in their progress. Our commitment to customer support is consistently recognized on G2 and is something our clients rave about. Our team and the level of support we offer our clients have always been and will always be what makes AB Tasty great. That’s why we embed ourselves at the heart of company culture to push brave ideas from the inside out.
How can we do that? By focusing on our three pillars as our foundation.
Human Touch: Our people are everything – they bring the soul and substance to our technology. We build relationships with our clients to transcend the transactional with our deep partnerships and client understanding.
Leading Expertise: We back brave ideas with data and knowledge. We stay ahead as leaders of the industry and continue to learn through our “test and learn” culture. By de-risking brave ideas, we make every move by choice, not chance.
Unifying Product: Our product connects teams, platforms, tools, and collaborators. We transform cultures, changing the way our clients work and think. We work as a team with one vision and common goals.
We do all of this so our clients can level up. We make their next step our next challenge. Giving them the courage and push they need to dare to go further.
Conclusion
Every next step looks different for our clients, company, and people. That’s why we provide the courage and conviction to make it happen.
The concept of feature flags is quite straightforward and easy to implement, at least at first. In the beginning, you usually manage a single flag by modifying a configuration file, but once you start using multiple flags, they become harder to manage and it becomes harder to keep everyone in sync across different functions.
Undoubtedly, feature flags become increasingly important as engineering and product teams begin to see their benefits. By separating code deployment from feature release, teams can now deliver new functionalities to users safely and quickly.
Feature flags are also extremely versatile and their uses can extend to a number of different scenarios to achieve various tasks by all teams across your organization. As feature flags help developers release faster with lower risk, it makes sense that teams would want to extend their usage across these additional use cases.
We can look at feature flag implementation as a journey that starts with one simple use case and then evolves into more advanced implementations by different stakeholders. This article will illustrate this journey by introducing you to the different use cases of feature flags, from simple to more complex, and help you consider whether it is in your best interest to build or buy a feature flag management system, according to your goals.
Are you looking for a feature flagging solution packed full of features with an easy-to-use dashboard? AB Tasty is the all-in-one feature flagging, rollout, experimentation and personalization solution that empowers you to create a richer digital experience — fast.
The value of feature flags
Before we go deeper into the build vs buy topic, it’s important to highlight exactly why you need feature flags in your daily workflows and the value they can bring to your teams.
As we’ve mentioned, feature flags can be used across a range of use cases. Here’s a quick overview of when feature flags are especially handy:
User targeting and feature testing: When you have a new feature but you’re not yet ready for a big bang release; instead, you want to have the control to target who sees this new feature to collect necessary feedback for optimization purposes.
Testing in production: When you want to test in production by gradually rolling out a new feature or change to validate it.
Kill switch: When you want to have the ability to quickly roll back a feature in case anything goes wrong and turn it off while the issue is being fixed.
This means that feature flags are a great way to continuously (and progressively) roll out releases with minimal risk by controlling who gets access to your releases and when.
The journey begins with a simple step: if/else statements
A feature flag in code is essentially an IF statement. Here is a very straightforward, basic example:
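A minimal sketch in Python; the flag store, flag name, and return values are illustrative, not part of any real API:

```python
# A hypothetical in-memory flag store; in practice this would come from
# a config file or a flag service.
feature_flags = {"new_checkout": True}

def render_checkout():
    # The IF statement is the feature flag: it gates which code path runs.
    if feature_flags.get("new_checkout", False):
        return "new checkout page"   # new code path being rolled out
    else:
        return "old checkout page"   # existing, stable behavior
```

Flipping the flag's value switches every user between the two code paths without redeploying any code.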
You can start off with a simple if/else statement, which is usually reserved for short-lived flags; it is less suitable if you plan to keep the flag around for a long time or need it for more advanced use cases that require more sophistication. Feature flags have evolved beyond a single use case and can serve a variety of purposes. Inserting a few IF statements is easy; it’s maintaining a feature flag management system that’s hard work, requiring time, resources and commitment.
You can implement a feature flag by reading from a config file in order to control which code path is exposed to a subset of users. Using a config file at the beginning may seem like a viable solution, but in the long term it may not be practical, resulting in technical debt that accumulates over time.
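A sketch of the config-file approach, assuming a simple JSON file; the file name and flag keys are illustrative:

```python
import json

# Write a minimal flags file to stand in for a checked-in config file.
with open("flags.json", "w") as f:
    json.dump({"new_checkout": True, "beta_search": False}, f)

def load_flags(path="flags.json"):
    """Read flag states from the JSON config file."""
    with open(path) as f:
        return json.load(f)

flags = load_flags()
# Gate a code path on the config value, as in the if/else example earlier.
search = "beta search" if flags.get("beta_search") else "classic search"
```

Changing a flag here means editing and redeploying the file, which is exactly where the synchronization and debt problems described above begin.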
Here, a simple flagging solution will not suffice and so you would need to turn to a more advanced solution. Implementing the solution you need in-house can be quite costly and requires a lot of maintenance. In this case, you can turn to a third-party option.
Bumps along the road: Evolving use cases
When you’re just starting out, you’ll implement a feature flag from a config file with an easy on/off toggle to test and roll out new features. Sounds simple enough. Then one flag turns into 10 or 20, and as you keep adding flags you run into the aforementioned technical debt issue: it becomes harder to pinpoint which of your active feature flags need to be removed. At that point, a proactive approach to managing your flags is essential, in the form of a comprehensive feature flag management system.
At the start of your feature flag journey, you may be employing a single use case, such as experimentation through release management, but over time you may want to use feature flags across a variety of use cases once you’ve seen first-hand the difference they make to your releases.
Test in production
You may, for example, want to test in production but only internally, exposing the feature to people within your organization. You may also use feature flags to manage entitlements, where only a small subset of users can access a feature, such as users with a premium subscription to your product or service. These types of flags are referred to as permission toggles, and they require a system that can handle different levels of permissions for different users.
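A sketch of a permission toggle: the feature is exposed based on who the user is. The flag names, plan names, and rule structure are all illustrative assumptions, not a real API:

```python
# Each rule decides, per user, whether the flagged feature is exposed.
RULES = {
    # Entitlement: only premium subscribers see the feature.
    "advanced_analytics": lambda user: user.get("plan") == "premium",
    # Testing in production, internally: only staff accounts see it.
    "internal_preview": lambda user: user.get("email", "").endswith("@example.com"),
}

def is_enabled(flag_name, user):
    """Evaluate a permission toggle for a given user; unknown flags are off."""
    rule = RULES.get(flag_name)
    return bool(rule and rule(user))
```

A call like `is_enabled("advanced_analytics", {"plan": "premium"})` returns True, while a free-plan user gets False for the same flag.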
To be able to carry out such controlled roll-outs, your feature flagging system should enable you to make such context-specific flagging decisions, for example, for carrying out A/B tests.
So, for example, you might want to expose your feature to 5, 10 or 15% of your users, or you might want to test the feature on users from a certain region. A good feature management system provides the means to take such specific contexts into account when making flagging decisions. Context can include additional information about the user, such as the server handling the request or the geographic market the request is linked to.
As a result, feature flags allow you to choose who you want to release your feature to, so the new code can be targeted to a specific group of users whose feedback you need. This would require you to have a system in place that would allow you to perform feature experimentation on those users and attach KPIs to your releases to monitor their reception. However, some companies may not have the time or resources or even experience to collect this kind of rich data.
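One common way to implement percentage and regional targeting like this is deterministic bucketing: hashing the flag name and user ID gives every user a stable bucket, so the same user always gets the same decision. This is a sketch under that assumption; all names are illustrative:

```python
import hashlib

def in_rollout(flag_name, user_id, percentage, user_region=None, regions=None):
    """Decide whether a user falls inside a percentage rollout."""
    # Optional geographic targeting: outside the target regions, never expose.
    if regions is not None and user_region not in regions:
        return False
    # Stable hash of flag + user -> a bucket in [0, 100).
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percentage

# Expose the feature to roughly 10% of users in AU or NZ.
decision = in_rollout("new_search", "user-42", 10,
                      user_region="AU", regions={"AU", "NZ"})
```

Because the bucket is derived from the user ID rather than random on each request, ramping the percentage from 10 to 20 only adds new users; no one already exposed is taken out.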
Kill switches
Feature flags can be used to kill off non-essential features or disable broken features in production. As soon as your team logs an error, they can turn the feature off immediately with the click of a button while they investigate the issue, and turn it back on just as easily once it’s ready for deployment. This requires a two-way communication pathway between your monitoring tools and the internal flag system, which can be complex to set up and maintain; such kill switches usually require a mature feature flag platform.
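The monitoring-to-flag pathway can be sketched as a callback that flips the flag off when the error rate crosses a threshold. The store, threshold, and hook shape are illustrative assumptions:

```python
# Hypothetical shared flag store; a real system would persist this.
flag_store = {"recommendations": True}

def on_health_check(feature, error_rate, threshold=0.05):
    """Hypothetical hook called by a monitoring tool with the latest error rate."""
    if error_rate > threshold and flag_store.get(feature):
        flag_store[feature] = False   # kill the feature immediately, no redeploy
        return "disabled"
    return "unchanged"
```

Once the underlying issue is fixed, setting `flag_store["recommendations"] = True` turns the feature back on just as easily.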
Feature flag hell
We can conclude that when implementing feature flags, you must continuously be aware of the state of each of your flags. Otherwise, you could find yourself overwhelmed by the number of flags in your system and lose control of them when you’re unable to keep track of and maintain them properly. Things can get complicated fast as you add more code to your codebase, so you need to make sure that the system you have in place is well-equipped to handle and reduce those costs.
You’ve probably already come across the term ‘merge hell’, but there’s also such a thing as ‘feature flag hell’: add too many feature flags and your code can turn into a nightmare.
As mentioned above, you can start off with a simple if/else statement but more sophistication will be needed to implement these more advanced use cases.
It is also important to be able to manage the configuration of your in-house system. Any small configuration change can have a major impact on the production environment. Therefore, your system will need to have access controls, audit logs and custom permissions to restrict who can make changes.
Your system will also need an environment-aware configuration that supports different flag values from one environment to the next. Most systems can create at least two environments, one for development and one for production, each with its own SDK key. You can then control the flag’s value depending on the environment in which it’s being used; for example, the flag could be ‘true’ in development but ‘false’ in production.
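A sketch of environment-aware configuration under the two-environment setup described above; the SDK key placeholders and flag names are illustrative:

```python
# Each environment carries its own SDK key and its own flag values.
FLAG_CONFIG = {
    "development": {"sdk_key": "dev-key-placeholder", "new_checkout": True},
    "production":  {"sdk_key": "prod-key-placeholder", "new_checkout": False},
}

def flag_value(environment, flag_name):
    """Resolve a flag for a given environment."""
    # Default to False so an unknown flag is never accidentally exposed.
    return FLAG_CONFIG[environment].get(flag_name, False)
```

Here `flag_value("development", "new_checkout")` is True while `flag_value("production", "new_checkout")` is False: the same flag, resolved per environment.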
Having different environments prevents you from accidentally exposing something in production before you are prepared. When you have all these flags across different environments, it becomes harder to keep everyone in sync, which leads us back to the issue of ‘feature flag hell’ if you don’t have the right system in place.
Feature flags categorization
With such sophisticated use cases, it would not make sense to place all feature flags under one category and call it a day. Here, we will look at feature flags in terms of their longevity and dynamism.
Static vs dynamic
The configuration for some flags will need to be more dynamic than for others. Flipping a toggle can be a simple on/off switch, but other categories of toggle are more dynamic and require sophisticated, context-specific flagging decisions, which advanced use cases such as A/B testing depend on. For example, permission toggles, usually used for the entitlements mentioned earlier, tend to be the most dynamic type of flag, as their state depends on the current user and is evaluated on a per-user basis.
Long- vs short-lived
We can also categorize flags based on how long their decision logic will remain in the codebase. On the one hand, some flags are transient in nature, such as release toggles, which can be removed within a few days and whose decision logic can be implemented through a simple if/else statement. On the other hand, flags that last longer, such as permission toggles and kill switches, require more maintainable implementation techniques.
It is therefore important that your feature management solution can keep track of all the flags, determining which flag is which and indicating which flags are no longer needed or in use and can be removed.
Challenges of an in-house system
As use cases grow, so do the challenges of developing an in-house feature flagging system. The challenges organizations face when developing such a system include:
Many organizations start out with a basic implementation where the config change for every release must be made manually, which is time-consuming. Similarly, when rolling out releases, compiling customer IDs is also done manually, so keeping track of the features rolled out to each user becomes a major challenge.
Most of these manual processes are carried out by the engineering team, so product managers are unable to make changes from their end and depend on engineers to make those changes for them.
The preceding point also raises the question of what you want your engineers to devote their time to. Your engineers will need to dedicate a large chunk of their time to maintaining your in-house feature flagging tool, which could divert their attention from building new features that drive revenue for your company.
This ties into the lack of a UI that could serve as an audit log, tracking when changes are made and by whom. Without a UI, only engineers can control feature rollouts; product managers cannot make such deployments themselves or view which features are rolled out to which users. A centralized dashboard is therefore needed so that all relevant stakeholders can monitor feature impact.
As mentioned previously, monitoring and cleaning up old flags becomes increasingly difficult as more flags are generated. As flag adoption increases, people across your organization will find it harder to track which flags are still active.
Eventually, if your team does not remove these flags from the system, technical debt would become a major issue. Even keeping track of who created which flag and for what purpose could become a problem if the in-house system doesn’t provide such data.
Thus, while the advantages of feature flags are numerous, they can be far outweighed by the technical debt that slows you down if you are unable to take control of and keep track of your feature flags’ lifecycles.
There are often high costs associated with maintaining such in-house tools, as well as costs associated with upgrades, so both these costs and your technical debt accumulate over time.
Besides the rising costs, building and maintaining a feature flagging system requires ample resources and a high degree of technical expertise as such systems require a solid infrastructure to handle large amounts of data and traffic, which many smaller organizations lack.
Such in-house tools are usually built initially to address one pain point so they have minimal functionality and thus cannot be used widely across teams and lack the scalability required to handle a wide range of uses and functions.
Time taken to develop feature flag solutions could be time lost that you could have spent developing features for your customers so you will need to consider how much time you are willing to dedicate to developing such a system.
On the other hand:
Buying a platform from a third-party vendor can be cost-effective, as you avoid the costs associated with building a platform yourself. There are still ongoing costs when buying a platform, but with many options out there, companies can find one that suits their needs and budget.
Third-party systems typically come with ongoing support and maintenance from the vendor, including comprehensive documentation, so you wouldn’t have to worry about handling the upkeep yourself or the costs of maintaining the platform for large-scale implementations.
Perhaps one of the biggest advantages of buying a solution is its immediate availability and market readiness as the solution is ready-made with expert support and pre-existing functionalities. Thus, you can save valuable time and your teams can quickly implement feature flags in their daily workflows to accelerate releases and time-to-market.
Time dedicated to building and maintaining your in-house solution could otherwise be spent developing innovative and new revenue-generating features.
Safe landing: How to proceed
To ensure a safe arrival at the final stop of your feature flag journey (depending on why and how you’re using feature flags), you will need to decide whether an in-house or a third-party solution is right for you. With each additional use case, maintaining an in-house solution may become burdensome. In other words, as the scope of the in-house system grows, so do the challenges of building and maintaining it.
Let’s consider some scenarios where the “buy” end of the argument wins:
Your flag requirements are widening: your company is experiencing high growth. Your teams are expanding, and teams beyond development and engineering are becoming more involved in your feature flag journey, each with different requirements.
With increasing flag usage and build-up, it has become harder to keep track of all of them in your system, eventually leading to messy code.
You’re now working with multiple languages, so maintaining SDKs for each of them may become highly complex.
You have an expanding customer base, which means a higher volume of demand and release velocity, straining home-grown systems.
You need more advanced features that can handle the needs of more complex use cases. In-house systems usually lack advanced functionalities as they are usually built for immediate needs unlike third-party tools that come equipped with sophisticated features.
All these different scenarios illustrate the growing scope of feature flag usage which in turn means an increase in scope for your feature flagging system, which could pose a serious burden on in-house solutions that often lack the advanced functionalities to grow as you grow.
Many third-party feature flagging platforms come equipped with a user-friendly UI dashboard that teams can easily use to manage their feature flag usage.
With AB Tasty’s Feature Experimentation and Rollouts, all teams within an organization, from development to product, can streamline the software development and delivery processes. Product teams can run sophisticated omnichannel experiments to get critical feedback from real-world users, while development teams can continuously deploy new features and test in production to validate them.
Teams also have full visibility over all the flags in their system in our “flag tracking dashboard” where they can control who gets access to each flag so when the time comes they can retire unused flags to avoid build-up of technical debt.
A feature flag system is a must
At this point, you may decide that using a third-party feature flag management tool is the right choice for you. Which one you opt for will largely depend on your needs. As already pointed out, implementing your own solution is possible at first but it can be quite costly and troublesome to maintain.
Keep in mind the following before selecting a feature flag solution:
Pain points: What are your goals? What issues are you currently facing in your development and/or production process?
Use cases: We’ve already covered the many use cases where feature flags can be employed so you need to consider what you will be using feature flags for. You also need to consider who will be using it (is it just your developers or are there stakeholders involved beyond developers such as Product, Sales, etc?)
Needs and resources: Carefully weigh the build vs buy decision, taking into account factors such as total costs and budget, the time required to build the platform, the scope of your solution (consider the long-term plan of your system), and whether there is support across multiple programming languages (the more languages you use, the more tools you will need to support them).
Following the aforementioned points, your feature flag management system will need to be: stable, scalable, flexible, well-supported and multi-language compatible.
It’s more than fine to start simple but don’t lose sight of the higher value feature flags can bring to your company, well beyond the use case of decoupling deploy from release. To better manage and monitor your flags, the general consensus is to rely on a feature flag management tool. This will make feature flags management a piece of cake and can help speed up your development process.
With AB Tasty, formerly known as Flagship, we take feature flagging to the next level where we offer you more than just switching features on and off, offering high performance and highly scalable managed services. Our solution is catered not just to developers but can be widely used across different teams within your organization. Sign up for a free trial today to learn how you can ship with confidence anytime anywhere.