Article

2min read

How We Made AB Tasty’s Feature Experimentation SDK OpenFeature-Compatible — And Even Easier to Adopt

Why We Chose to Adopt OpenFeature

At AB Tasty, we believe a great product experience starts with smooth feature delivery and personalization. Our Feature Experimentation SDK empowers tech teams to control feature rollout and tailor interfaces to each visitor.

But in today’s complex, fast-moving ecosystems, interoperability is key. That’s where OpenFeature comes in.

What is OpenFeature?

OpenFeature is an open-source specification that defines a standardized API for feature flag management. It lets developers manage feature flags consistently across tools and platforms.

Why it matters:

  • Interoperability: A unified API across providers.
  • No vendor lock-in: Switch tools without rewriting business logic.
  • Thriving community: Backed by CNCF and designed for cloud-native development.
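The "no vendor lock-in" point is the crux, so here is a minimal sketch of the idea. This is not the real OpenFeature API; the provider objects and helper names are our own invention. Business logic is written once against a neutral client, and the provider behind it can be swapped without touching that logic.

```javascript
// Two hypothetical providers exposing the same resolution contract.
const providerA = {
  resolveBooleanValue: (flagKey, defaultValue) =>
    flagKey === "new-checkout" ? true : defaultValue,
};
const providerB = {
  resolveBooleanValue: (flagKey, defaultValue) => defaultValue,
};

// The "client" that business logic depends on; it only knows the contract.
function makeClient(provider) {
  return {
    getBooleanValue: (flagKey, defaultValue) =>
      provider.resolveBooleanValue(flagKey, defaultValue),
  };
}

// Business logic is written once, against the client only.
function checkoutVariant(client) {
  return client.getBooleanValue("new-checkout", false) ? "v2" : "v1";
}

console.log(checkoutVariant(makeClient(providerA))); // "v2"
console.log(checkoutVariant(makeClient(providerB))); // "v1"
```

Swapping `providerA` for `providerB` changes the flag's resolution without a single edit to `checkoutVariant` — which is the guarantee the OpenFeature specification standardizes.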

What We Built: The Official AB Tasty OpenFeature Provider

To ensure our SDK plays well with OpenFeature, we created an official provider:
@flagship.io/openfeature-provider-js

You can now use our feature flags in any OpenFeature-compliant setup, like this simple Node.js example:


JavaScript
const express = require("express");
const { ABTastyProvider } = require("@flagship.io/openfeature-provider-js");
const { OpenFeature } = require("@openfeature/server-sdk");

const app = express();
app.use(express.json());

const provider = new ABTastyProvider("<ENV_ID>", "<API_KEY>");

app.get("/item", async (req, res) => {
  const evaluationContext = {
    targetingKey: "visitor-id",
    fs_is_vip: true,
  };

  OpenFeature.setContext(evaluationContext);
  const client = OpenFeature.getClient();

  const fsEnableDiscount = await client.getBooleanValue("fs_enable_discount", false, evaluationContext);
  const fsAddToCartBtnColor = await client.getStringValue("fs_add_to_cart_btn_color", "blue", evaluationContext);

  res.json({
    product: {
      name: "AB Tasty Feature Experimentation T-shirt",
      price: 20,
      discountEnabled: fsEnableDiscount,
      btnColor: fsAddToCartBtnColor,
    },
  });
});

// `await` at the top level only works in an ES module; with CommonJS
// `require`, wait for the provider inside an async wrapper before serving.
(async () => {
  await OpenFeature.setProviderAndWait(provider);
  app.listen(3000, () => console.log("Server running on port 3000"));
})();

Even Smarter: Our Built-In Codebase Analyzer

We’ve made onboarding even easier. Our CLI tool, which is also bundled with the AB Tasty VSCode Extension, includes a powerful codebase analyzer.

What it does:

  • Scans your codebase
  • Detects flags and usage from other providers (e.g., Optimizely, Kameleoon)
  • Identifies if you’re already using OpenFeature
  • Automatically generates corresponding flags in Flagship

Example: Already using OpenFeature with a competitor? Just plug in our CLI, and AB Tasty will detect your flags and preconfigure them for you — saving you hours of manual setup.

What This Means for You

By supporting OpenFeature, we offer:

  • Faster integration with your current stack
  • Greater flexibility in your feature flag strategy
  • Portability across cloud platforms
  • Alignment with open, modern dev practices

Quick Integration Guide

1. Install dependencies


Shell
npm install @openfeature/server-sdk
npm install @flagship.io/openfeature-provider-js

2. Initialize the provider


JavaScript
const { ABTastyProvider } = require("@flagship.io/openfeature-provider-js");
const { OpenFeature } = require("@openfeature/server-sdk");

// Inside an async function, or at the top level of an ES module:
const provider = new ABTastyProvider("<ENV_ID>", "<API_KEY>");
await OpenFeature.setProviderAndWait(provider);

3. Define the evaluation context


JavaScript
const evaluationContext = {
  targetingKey: "visitor-id",
  fs_is_vip: true,
};

OpenFeature.setContext(evaluationContext);

4. Evaluate your flags


JavaScript
const client = OpenFeature.getClient();

const fsEnableDiscount = await client.getBooleanValue(
  "fs_enable_discount",
  false,
  evaluationContext
);

const fsAddToCartBtnColor = await client.getStringValue(
  "fs_add_to_cart_btn_color",
  "blue",
  evaluationContext
);
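As a follow-up, the flag-dependent response from the earlier /item example can be isolated in a pure, framework-free function, which keeps it easy to unit-test. This is our own sketch, and the flat discount applied when the flag is on is hypothetical — the original example only exposes the flag values.

```javascript
// Build the /item payload from already-evaluated flag values.
// Hypothetical: a flat discount of 2 is applied when the flag is on.
function buildItemResponse(discountEnabled, btnColor) {
  const basePrice = 20;
  return {
    product: {
      name: "AB Tasty Feature Experimentation T-shirt",
      price: discountEnabled ? basePrice - 2 : basePrice,
      discountEnabled,
      btnColor,
    },
  };
}

console.log(buildItemResponse(true, "green").product.price); // 18
console.log(buildItemResponse(false, "blue").product.price); // 20
```

The route handler then reduces to evaluating the flags and calling `res.json(buildItemResponse(fsEnableDiscount, fsAddToCartBtnColor))`.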

Want to Learn More?

Check out these resources for more information about Feature Experimentation and OpenFeature.

Article

3min read

Tag V4: Elevating your Experience Building with an Improved Modification Engine 

At AB Tasty, we love providing our users with the best possible experience by making it easy to create and execute optimization campaigns. That’s why we recently significantly improved our Modification Engine, one of the core components of our Visual Editor and our JavaScript tag.

The Modification Engine is the system that dynamically alters our clients' website content and appearance without requiring direct changes to the source code. It applies the modifications defined in campaigns by injecting the changes via JavaScript in the visitor's browser.

Here are the two big improvements:

  • Maximum compatibility with our clients’ websites, including Shadow DOM and iFrames support.
  • Optimized performance for faster loading and smoother execution of modifications, enhancing the experience for visitors on our clients’ sites.

Enhanced Compatibility with Modern Technologies

Our clients are developing increasingly complex websites, utilizing technologies like Shadow DOM and iFrames to structure their web applications. Now, our Visual Editor is compatible with these technologies so you can create, modify, and manage content to deliver the latest in experiences to your visitors. 

Significant Performance Improvements

Our teams have worked extensively to optimize the loading and execution times of the Modification Engine, leading to tangible improvements in overall site performance. (If you don't believe us, check out this blog on how we're four times faster than other solutions.)

Reduced Impact on Website Performance

  • A lighter JavaScript tag: The overall impact of AB Tasty has already been reduced by 2.3 KB, with further optimizations planned.
  • Less impact on overall performance: We observed an improvement of up to 11% in website performance, depending on the number of modifications applied.

Optimized Loading and Execution Times

  • More efficient JavaScript execution: The average execution time for modifications is 30% to 55% faster.
  • Decreased “Render Blocking Time”: Improvements range from 11% to 50%, with even greater benefits for larger campaigns.
  • No longer classified as a “Long main-thread task”

Faster Application and Reapplication of Modifications

Another key improvement in this update is the speed of applying and reapplying modifications:

  • Applying modifications is 2.2 to 2.75 times faster.
  • Reapplying modifications is 4 to 5.7 times faster, a major advantage for dynamic A/B testing.

Real-World Examples

We conducted tests on various campaigns to measure these improvements in action:

  • Campaign with 19 Modifications:
    • Total execution time: 1.79ms (down from 3.96ms, 2.2 times faster).
    • Reapplying time after a modification is removed: 0.58ms (down from 3.34ms, 5.7 times faster).
    • Up to 5% improvement in overall site performance.
  • Campaign with 64 Modifications:
    • Total application time: 4ms (down from 10-11ms, 2.5 to 2.75 times faster).
    • Reapplying time after a modification is removed: 2.7ms (down from 11-12ms, 4 to 4.4 times faster).
    • Up to 11% improvement in overall site performance.

With these enhancements, our newest Modification Engine version is now more robust, faster, and better suited for modern websites.

You benefit from a smoother user experience, and your visitors enjoy faster loading times. And this is just the beginning: stay tuned for even more powerful optimizations in the coming months.

Feel free to test these improvements and share your feedback with us!

Article

2min read

New Visual Studio Code Extension: Dev-Friendly Experimentation & Personalization

AB Tasty’s Visual Studio Code extension lets developers manage their experimentation and personalization campaigns directly from their IDE. This game-changing tool streamlines technical workflows and makes experimentation more accessible across your organization.

We’re all about making experimentation and personalization seamless for all teams. While AB Tasty’s UI is designed to be intuitive, we know that many developers prefer to work directly in their IDE. That’s why we built the AB Tasty Visual Studio Code extension—inspired by user feedback and driven by our mission to simplify the dev experience.

With this extension, you can:

  • Eliminate back-and-forth between your IDE and the AB Tasty web interface.
  • Leverage VS Code’s power: smart autocomplete, built-in linters, syntax highlighting, real-time validation, and more.

The benefits of the VS Code Extension for server-side:

With AB Tasty's Feature Experimentation and Roll-outs, the Visual Studio Code Extension lets you:
✅ Manage feature flags and product experimentation with a code-first approach
✅ Call key campaign resources (flags, targeting keys, goals)
✅ Detect and create feature flags directly from your codebase

The benefits of the VS Code Extension for client-side:

With AB Tasty's Web Experimentation and Personalization, the Visual Studio Code Extension lets you:
✅ Manage product experimentation with a code-first approach
✅ List and access segments, triggers, and favorite-URLs linked to your account
✅ List and access campaigns (variations, targeting, modifications)
✅ Manage JS scripts tied to accounts, campaigns, variations, and modifications

AB Tasty Joins the “Dev-Friendly” Movement

Our vision is clear: AB Tasty should adapt to developers, not the other way around. This extension is just the beginning—we have plenty more enhancements in the pipeline.

Try It & Share Your Feedback!

The extension is now available in beta on the Visual Studio Code Marketplace:
VS Code Marketplace
VS Code Documentation

Ready to experiment differently?

Article

8min read

Harmony or Dissonance: Decoding Data Divergence Between AB Tasty and Google Analytics

The world of data collection has grown exponentially over the years, providing companies with crucial information to make informed decisions. However, within this complex ecosystem, a major challenge arises: data divergence. 

Two analytics tools, even if they seem to be following the same guidelines, can at times produce different results. Why do they differ? How do you leverage both sets of data for your digital strategy?

In this article, we’ll use a concrete example of a user journey to illustrate differences in attribution between AB Tasty and Google Analytics. GA is a powerful tool for gathering and measuring data across the entire user journey. AB Tasty lets you easily make changes to your site and measure the impact on specific goals. 

Navigating these differences in attribution strategies explains why figures can differ across report types. Both are worth looking at, and which one you focus on depends on your objectives:

  • Specific improvements in cross-session user experiences 
  • Holistic analysis of user behavior

Let’s dive in! 

Breaking it down with a simple use case

We’re going to base our analysis on a deliberately very basic use case, based on the user journey of a single visitor.

Campaign A is launched before the visitor's first session and remains live until after their third session.

Here’s an example of the user journey we’ll be looking at in the rest of this article: 

  • Session 1: first visit; Campaign A is not triggered (the visitor didn't match all of the targeting conditions)
  • Session 2: second visit; Campaign A is triggered (the visitor matched all of the targeting conditions)
  • Session 3: third visit; Campaign A, still live, is not re-triggered, and the user carries out a transaction

NB: A visitor triggers a campaign as soon as they meet all the targeting conditions:

  • They meet the segmentation conditions
  • During their session, they visit at least one of the targeted pages 
  • They meet the session trigger condition.
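The three conditions above can be sketched as a single predicate. The helper names and the campaign shape below are ours, not AB Tasty's API; it is only meant to make the AND-logic explicit.

```javascript
// A visitor triggers a campaign only when all three conditions hold.
function triggersCampaign(visitor, session, campaign) {
  const inSegment = campaign.segment(visitor); // segmentation conditions
  const onTargetedPage = session.pagesViewed.some((url) =>
    campaign.targetedPages.includes(url)
  ); // visited at least one targeted page
  const sessionOk = campaign.sessionTrigger(session); // session trigger condition
  return inSegment && onTargetedPage && sessionOk;
}

// A hypothetical Campaign A targeting VIP visitors on the checkout page,
// limited to returning sessions.
const campaignA = {
  segment: (v) => v.isVip,
  targetedPages: ["/checkout"],
  sessionTrigger: (s) => s.isNew === false,
};

const visitor = { isVip: true };
console.log(
  triggersCampaign(visitor, { pagesViewed: ["/home"], isNew: false }, campaignA)
); // false: no targeted page was visited
console.log(
  triggersCampaign(visitor, { pagesViewed: ["/checkout"], isNew: false }, campaignA)
); // true: all three conditions hold
```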

In A/B testing, a visitor exposed to a variation of a specific test will continue to see the same variation in future sessions, as long as the test campaign is live. This guarantees reliable measurement of potential changes in behavior across all sessions.

We will now describe how this user journey will be taken into account in the various AB Tasty and GA reports. 

Analysis in AB Tasty

In AB Tasty, there is only one report and therefore only one attribution per campaign.

The user journey above will be reported as follows for Campaign A:

  • Total Users (Unique Visitors) = 1, based on a unique user ID stored in a cookie; there is only one user in our example.
  • Total Sessions = 2: s2 and s3, the sessions that took place during and after the display of Campaign A, are counted even though s3 didn't re-trigger Campaign A.
  • Total Transactions = 1: the s3 transaction is counted even though s3 did not re-trigger Campaign A.

In short, AB Tasty collects and displays, in Campaign A's report, all of the visitor's sessions and events from the moment the visitor first triggered the campaign.

Analysis in Google Analytics

The classic way to analyze A/B test results in GA is to create an analysis segment and apply it to your reports. 

However, this segment can be designed using 2 different methods, 2 different scopes, and depending on the scope chosen, the reports will not present the same data. 

Method 1: On a user segment/user scope

Here we detail the user scope, which will include all user data corresponding to the segment settings. 

In our case, the segment setup is as follows:

This segment will therefore include all data from all sessions of all users who, at some point during the analysis date range, have received an event with the parameter event action = Campaign A.

We can then see in the GA report for our user journey example: 

  • Total Users = 1, based on a user ID stored in a cookie (like AB Tasty); there is only one user in our example.
  • Total Sessions = 3: s1, s2, and s3 are all sessions created by the same user, who enters the segment, so all of their sessions are included.
  • Total Transactions = 1: the s3 transaction is counted because it took place in session s3, after the campaign was triggered.

In short, in this scenario, Google Analytics counts and displays all the sessions and events linked to this single visitor (over the selected date range), even those prior to the launch of Campaign A.

Method 2: On a session segment/session scope 

The second segment scope detailed below is the session scope. This includes only the sessions that correspond to the settings.

In this second case, the segment setup is as follows:

This segment will include all data from sessions that have, at some point during the analysis date range, received an event with the parameter event action = Campaign A.

As you can see, this setting will include fewer sessions than the previous one. 

In the context of our example:

  • Total Users = 1, based on a user ID stored in a cookie (like AB Tasty); there is only one user in our example.
  • Total Sessions = 1: only s2 triggers Campaign A and therefore sends the campaign event.
  • Total Transactions = 0: the transaction took place in session s3, which does not trigger Campaign A and sends no event, so it is not counted.

In short, in this case, Google Analytics counts and displays all the sessions – and the events linked to those sessions – that triggered Campaign A, and only those.

Attribution model

Here is what each tool counts in the selected timeframe:

  • AB Tasty: all sessions and events that took place after the visitor first triggered Campaign A.
  • Google Analytics – user scope: all sessions and events of a user who triggered Campaign A at least once during one of their sessions.
  • Google Analytics – session scope: only the sessions that triggered Campaign A.

Different attribution for different objectives

Because each report uses a different attribution, we can observe different figures even though the underlying tracking is identical.
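The three attribution models can be reduced to a few lines of code over the example journey (s1: no trigger; s2: triggers Campaign A; s3: no re-trigger, but contains the transaction). The function names and session shape are ours, purely for illustration.

```javascript
// The example journey from earlier in the article.
const sessions = [
  { id: "s1", triggered: false, transactions: 0 },
  { id: "s2", triggered: true, transactions: 0 },
  { id: "s3", triggered: false, transactions: 1 },
];

const totals = (counted) => ({
  sessions: counted.length,
  transactions: counted.reduce((n, s) => n + s.transactions, 0),
});

// AB Tasty: every session from the first trigger onward.
function abTastyCount(sessions) {
  const first = sessions.findIndex((s) => s.triggered);
  return totals(first === -1 ? [] : sessions.slice(first));
}

// GA user scope: if the user triggered at least once, all of their sessions.
function gaUserScopeCount(sessions) {
  return totals(sessions.some((s) => s.triggered) ? sessions : []);
}

// GA session scope: only the sessions that themselves triggered the campaign.
function gaSessionScopeCount(sessions) {
  return totals(sessions.filter((s) => s.triggered));
}

console.log(abTastyCount(sessions));        // { sessions: 2, transactions: 1 }
console.log(gaUserScopeCount(sessions));    // { sessions: 3, transactions: 1 }
console.log(gaSessionScopeCount(sessions)); // { sessions: 1, transactions: 0 }
```

The three outputs reproduce the figures from the report walkthroughs above: same user, same journey, three different session and transaction counts.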

The only metric that always remains constant is the count of Users (Unique Visitors in AB Tasty). It is calculated in a similar (but not identical) way by the two tools, which makes it the benchmark metric and the most reliable one for detecting malfunctions between A/B testing tools and analytics tools that count differently.

On the other hand, the attribution of sessions or events (e.g. a transaction) can be very different from one report to another. All the more so as it’s not possible in GA to recreate a report with an attribution model similar to that of AB Tasty. 

Ultimately, A/B test performance analysis relies heavily on data attribution, and our exploration of the differences between AB Tasty and Google Analytics highlighted significant distinctions in the way these tools attribute user interactions. These divergences are the result of different designs and distinct objectives.

From campaign performance to holistic analysis: Which is the right solution for you?

AB Tasty, as a solution dedicated to the experimentation and optimization of user experiences, stands out for its more specialized approach to attribution. It offers a clear and specific view of A/B test performance, by grouping attribution data according to campaign objectives. 

Making a modification on a platform and testing it aims to measure the impact of this modification on the performance of the platform and its metrics, during the current session and during future sessions of the same user. 

On the other hand, Google Analytics focuses on the overall analysis of site activity. It’s a powerful tool for gathering data on the entire user journey, from traffic sources to conversions. However, its approach to attribution is broader, encompassing all session data, which can lead to different data cross-referencing and analysis than AB Tasty, as we have seen in our example.

It’s essential to note that one is not necessarily better than the other, but rather adapted to different needs. 

  • Teams focusing on the targeted improvement of cross-session user experiences will find significant value in the attribution offered by AB Tasty. 
  • On the other hand, Google Analytics remains indispensable for the holistic analysis of user behavior on a site.

The key to effective use of these solutions lies in understanding their differences in attribution, and the ability to exploit them in complementary ways. Ultimately, the choice will depend on the specific objectives of your analysis, and the alignment of these tools with your needs will determine the quality of your insights.