
How to Create a Modern Data Foundation for Experimentation

Staying ahead of the game to deliver seamless brand experiences for your customers is crucial in today’s experience economy. Today we’ll dip our toe into the “how” by looking at the underlying foundation upon which all of your experiences, optimization and experimentation efforts will be built: data.

Data is the foundation experimentation is built on (Source)

Data powers the experiences you build for your customers by revealing what they want and how delivering it will best serve your business. It’s the special sauce that connects the dots between your interpretation of existing information and trends, and the outcomes that you hypothesize will address customer needs (and grow revenue).

If you’ve ever wondered whether the benefits of a special offer are sufficiently enticing for your customer or why you have so many page hits and so few purchases, then you’ve asked the questions the marketing teams of your competitors are both asking and actively working to answer. Data and experimentation will help you take your website to the next level, better understand your customers’ preferences, and optimize their purchasing journey to drive stronger business outcomes.

So, the question remains: Where do you start? In the case of e-commerce, A/B testing is a great way to use data to test hypotheses and make decisions based on information rather than opinions.

A/B testing helps brands make decisions based on data (Source)

“The idea behind experimentation is that you should be testing things and proving the value of things before seriously investing in them,” says Jonny Longden, head of the conversion division at agency Journey Further. “By experimenting…you only do the things that work and so you’ve already proven [what] will deliver value.”

Your data foundation is the platform upon which you’ll build your knowledge base and your experimentation roadmap. Read on to discover the key considerations to bear in mind when establishing it.

 

Five things to consider when building your data foundation

  1. Know what data you’re collecting and why
    Knowing what you’re dealing with when it comes to slicing and dicing your data requires that you understand the basic types and properties of the information to which you have access. First, let’s look at the different types of data:

    • First-party data is collected directly from customers, site visitors and followers, making it specific to your products, consumers and operations.
    • Second-party data is another organization’s first-party data. It’s usually obtained through data-sharing agreements between companies willing to collaborate.
    • Third-party data is collected by entirely separate organizations with no direct relationship to your market or customers; however, it does allow you to draw on a far larger pool of data points to broaden your general understanding.

     

    Data also has different properties or defining characteristics: demographic data tells you who, behavioral data tells you how, transactional data tells you what, and psychographic data tells you why. Want to learn more? Download our e-book, “The Ultimate Personalization Guide”!


    Gathering and collating a mix of this data will then allow you to segment your audience and flesh out a picture of who your customers are and how to meet their needs, joining the dots between customer behavior and preferences, website UX and the buyer journey.

    Chad Sanderson, head of product – data platform at Convoy, recommends making metrics your allies to ensure data collection and analysis are synchronized. Knowing what your business leaders care about, and which metrics will move the business forward, will ensure that your data foundation is relevant and set up for success.


  2. Invest in your data infrastructure
    Data is everywhere, in its myriad of forms and gathered from a multitude of sources. Even so, if you’re going to make use of it, you need a robust system for gathering, storing and analyzing it in order to best put it to work. Start by understanding how much first-party data you have the capacity to gather by evaluating your current digital traffic levels. How many people are visiting your site or your app? You can get this information using Google Analytics or a similar platform, and this will help you understand how sophisticated your data-leveraging practices can be and identify gaps where you might need to source supplementary data (second- and third-party).
    Next, you’ll need to evaluate your infrastructure. Companies that are further along in their data analytics journey invest in customer data platforms (CDPs) that allow them to collect and analyze data – gathered from a variety of sources and consolidated into a central database – at a more granular level. Stitching together this data via a CDP helps you bring all the pieces together to form a complete picture of your customers and identify any gaps. This is a critical step before you leap into action. Chad Sanderson concurs: “[Start] with the business and what the business needs,” he advises. “Tailoring your… solution to that – whatever that is – is going to be a lot more effective.”
  3. Get consent to build consumer trust
    Data security is rightly of foremost concern to consumers. The very users from whom you want to gather that first-party data want to ensure that their private information remains secure. Getting their consent and being transparent about the inherent benefit to them if they agree to your request – be it through giveaways, exclusive offers, additional information or services – will give you the best chance of success. Demonstrating that you adhere to, and take seriously, various data compliance laws (such as GDPR) and good governance will also build trust in your brand and give you the opportunity to make it worth their while through improved UX and personalized experiences.

    Build trust in your brand by respecting your users’ private information (Source)

  4. Collect and discover insights to upgrade your customer strategy
    We’ve already covered the fact that data is everywhere. As Chad Sanderson highlighted above, identifying immediate business needs and priorities – as well as focusing on quick wins and low-lift changes that can have a quick, high-level impact – can help you navigate through this minefield. It’s best to think of this stage as a four-step process:
    • Collect data as it flows into your CDP
    • Transform or calibrate your data so that it can be compared in a logical manner
    • Analyze the data by grouping and categorizing it according to the customer segments you’ve identified and benchmarking against business priorities
    • Activate your insights by pushing the learnings back into your platforms and/or your experimentation roadmap and really put this data to work
  5. Turn your data into actions
    It’s crunch time (no pun about numbers intended)! We’ve examined the different types of data and where to source them, how to be responsible with data collection and how to set up the infrastructure needed to consolidate data and generate insights. We’ve also covered the need to understand business priorities and core strategy to drive data collection, analysis and activation in the same direction. Now we need to put that data and those insights to work.
    In the experience economy, where constant evolution is the name of the game, innovation and optimization are the key drivers of experimentation. Taking the data foundation that you’ve built and using it to fuel and nourish your experimentation roadmap will ensure that none of the hard work of your tech, marketing and product teams is in vain. Testing allows you to evaluate alternatives in real time and make data-driven decisions about website UX. It also keeps business metrics within easy reach, with conversion and revenue growth taking center stage.
    Use the data you’ve gathered to fuel your experimentation roadmap (Source)

 

Invest in a solid data foundation to maximize and scale

At AB Tasty, we apply the Bayesian approach to interpreting data and test results: in A/B testing, this method not only shows whether there is a difference between the tested options but also calculates a measure of that difference. Being able to quantify that variance allows you to understand exactly what you stand to gain by adopting a change permanently.
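
To make the idea concrete, here is a minimal, illustrative sketch – not AB Tasty’s actual implementation – of the Bayesian reasoning: given hypothetical conversion counts for two variations, it estimates both the probability that B beats A and the expected size of the difference by Monte Carlo sampling from Beta posteriors.

```js
// Illustrative sketch only – not AB Tasty's implementation.
// Estimates P(B beats A) and the expected uplift from Beta posteriors
// over each variation's conversion rate, via Monte Carlo sampling.

// Gamma(n) for integer n can be sampled as a sum of n Exp(1) draws.
function sampleGamma(n) {
  let sum = 0;
  for (let i = 0; i < n; i++) sum -= Math.log(Math.random());
  return sum;
}

// Beta(a, b) = X / (X + Y) with X ~ Gamma(a), Y ~ Gamma(b).
// The +1 on each count corresponds to a uniform Beta(1, 1) prior.
function sampleConversionRate(conversions, visitors) {
  const x = sampleGamma(conversions + 1);
  const y = sampleGamma(visitors - conversions + 1);
  return x / (x + y);
}

// Hypothetical test results.
const A = { conversions: 90, visitors: 1000 };
const B = { conversions: 110, visitors: 1000 };

const draws = 10000;
let bWins = 0;
let upliftSum = 0;
for (let i = 0; i < draws; i++) {
  const pA = sampleConversionRate(A.conversions, A.visitors);
  const pB = sampleConversionRate(B.conversions, B.visitors);
  if (pB > pA) bWins++;
  upliftSum += pB - pA;
}

console.log(`P(B beats A) ≈ ${(bWins / draws).toFixed(3)}`);
console.log(`Expected absolute uplift ≈ ${(upliftSum / draws).toFixed(4)}`);
```

The second number is the “measure of that difference” mentioned above: an estimate of how much conversion you stand to gain by making the winning variation permanent.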

Collecting and analyzing data, and then leveraging the insights that you glean, are key to unlocking the next level of experience optimization for your customers and your business. An experimentation roadmap grounded in real-time responsiveness and long-term, server-side improvements will have a solid data foundation approach at its core, where understanding who you want to target and how to act drives success. Furthermore, if you invest in your data foundation – and the five core drivers we’ve explored above – you’ll be equipped to scale your experimentation and allow optimization to become a key business maximizer.


The Impact of Experimentation on Cumulative Layout Shift (CLS)

We teamed up with our friends at Creative CX to take a look at the impact of experimentation on Core Web Vitals. Read our guest blog from Creative CX’s CTO Nelson Sousa for insights into how CLS can affect your Google ranking, the pros and cons of server-side and client-side experiments, and the organisational and technical considerations that can improve your site experience through testing, personalisation and experimentation.

What are Core Web Vitals?

Core Web Vitals (CWV) are a set of three primary metrics that affect your Google search ranking. According to StatCounter, the behemoth search engine accounts for 92% of the global market share, so this change has the potential to reshape the way we look at optimising our websites as more and more competing businesses seek to outdo one another for the top spots in search results.

One notable difference with CWV is that the changes are focused on the user experience. Google wants to ensure that users receive relevant content and are directed to optimised applications. The change aims to minimise items jumping around the screen or moving from their initial position, to let users quickly and successfully interact with an interface, and to ensure that the largest painted element appears on the screen in a reasonable amount of time.

Core Web Vitals

What is CLS?

Let’s imagine the following scenario:

You navigate to a website and click on an element, but it immediately moves away from its position on the page. This is a common frustration: you end up clicking elsewhere on the page, or on a link that navigates you somewhere else entirely, forcing you to go back and attempt to click your desired element again.

You have experienced what is known as Cumulative Layout Shift, or CLS for short: a metric used to determine visual stability during the entire lifespan of a webpage. It is measured as a score, and according to Core Web Vitals, webpages should not exceed a CLS score of 0.1.

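If you want to see this score on your own pages, browsers expose layout shifts through the Performance API. The snippet below is a simplified sketch: it sums layout-shift entries that were not caused by recent user input, which approximates CLS (production tools such as Google’s web-vitals library use a more refined session-window calculation).

```js
// Simplified sketch: approximate CLS by summing layout-shift entries.
// Real CLS uses "session windows"; this running total is for illustration.
let clsEstimate = 0;

const observer = new PerformanceObserver((entryList) => {
  for (const entry of entryList.getEntries()) {
    // Shifts that happen shortly after user input do not count towards CLS.
    if (!entry.hadRecentInput) {
      clsEstimate += entry.value;
    }
  }
  console.log('CLS estimate so far:', clsEstimate.toFixed(3));
});

observer.observe({ type: 'layout-shift', buffered: true });
```
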
CLS within Experimentation

A large percentage of A/B testing involves making experimentation changes on the client side (in the browser). This is a common pattern, which normally involves placing an HTML tag in your website so that the browser can make a request to the experimentation tool’s server. Such experimentation tools have become increasingly important as tech teams are no longer the sole entities making changes to a website.

For many, this is a great breakthrough.

It means marketing and other less technical teams can access friendly user interfaces to manipulate websites without the need for a developer. It also frees up time for programmers to concentrate on other, more technical aspects.

One drawback of the client-side approach is that certain elements can be displayed to the user before the experimentation tool has had a chance to perform its changes. Once the tool finally executes and completes its changes, it may insert new elements in the same position where other elements already exist, pushing those other elements further down the page. This downward push is an example of CLS in action.

Bear in mind that this only affects experiments above the fold – that is, on elements initially visible on the page without the need to scroll.

So when should you check for CLS and its impact upon the application? The answer is up for debate. Some companies begin to consider it during the design phase, while others during the User Acceptance Testing phase. No matter what your approach is, however, it should always be considered before publishing an experiment live to your customer base.

Common CLS culprits

According to Google’s article on optimising CLS, the most common causes of CLS are:

  • Images without dimensions
  • Ads, embeds, and iframes without dimensions
  • Dynamically injected content
  • Web Fonts causing FOIT/FOUT
  • Actions waiting for a network response before updating DOM

Overall CLS Considerations

Team awareness and communication

Each variation change creates its own CLS score. This score is a primary input in your prioritisation mechanism: it shapes the way you approach an idea and helps to determine whether or not a specific experiment will be carried out.

Including analysis from performance testing tools during your ideation and design phases can help you understand how your experiment will affect your CLS score. At Creative CX, we encourage weekly communication with our clients, and discuss CLS impact on a per-experiment basis.

Should we run experiments despite possible CLS impact?

Although in an ideal world you would look to keep the CLS score at 0, this isn’t always possible. Some experiment ideas may go over the threshold, but that doesn’t mean you cannot run the experiment.

If you have data-backed reasons to expect the experiment to generate an uplift in revenue or other metrics, the CLS impact can be tolerated for the lifetime of the experiment. Don’t let the CLS score deter you from generating ideas and making them come to life.

Constant monitoring of your web pages

Even after an experiment is live, it is vital to use performance testing tools and continuously monitor your pages to see if your experiments or changes cause unintended harmful effects. These tools will help you analyse your CLS impact and other key metrics such as First Contentful Paint and Time to Interactive.

Be aware of everyone’s role and impact

When it comes to the impact of experimentation on Core Web Vitals, you should be aware of two main things:

  • What is the impact of your provider?
  • What is the impact of modifications you make through this platform?

Experimentation platforms mainly impact two Web Vitals: Total Blocking Time and Speed Index. The way you use your platform, on the other hand, could potentially impact CLS and LCP (Largest Contentful Paint).

Vendors should do their best to minimize their technical footprint on TBT and Speed Index. There are also best practices you should follow to keep your CLS and LCP values in check – these are in your hands rather than the vendor’s.

Here, we’ll cover both aspects:

Be aware of what’s downloaded when adding a tag to your site (TBT and Speed Index)

When you insert any snippet from an experimentation vendor onto your pages, you are basically making a network request to download a JavaScript file that will then execute a set of modifications on your page. This file is, by its nature, a moving piece: its size evolves with your usage – the number and nature of your experiments.

The bigger the file, the more impact it can have on loading time, so it’s important to always keep an eye on it – especially as more stakeholders in your company embrace experimentation and want to run tests.

To limit the impact of experimenting on metrics such as Total Blocking Time and Speed Index, you should download only the strict minimum needed to run your experiments. Providers like AB Tasty make this possible using a modular approach.

Dynamic Imports

Using dynamic imports, the user only downloads what is necessary. For instance, if a user is visiting the website from a desktop, the file won’t include modules required for tests that only affect mobile. Likewise, if you have a campaign that targets only logged-in users, its modifications won’t be included in the JavaScript file downloaded by anonymous visitors.

Every import also uses a caching policy based on its purpose. For instance, consent management or analytics modules can be cached for a long time, while campaign modules (the ones that hold your modifications) have a much shorter lifespan, because you want the updates you make to be reflected as soon as possible. Some modules, such as the analytics modules used for tracking purposes, can also be loaded asynchronously, which has no impact on performance.

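To illustrate the general pattern – the module paths and visitor checks below are invented for illustration, not AB Tasty’s actual code – conditional dynamic imports might look something like this:

```js
// Hypothetical sketch of conditional dynamic imports.
async function loadCampaignModules(visitor) {
  const pending = [];

  // Mobile-only campaigns are never downloaded on desktop.
  if (visitor.deviceType === 'mobile') {
    pending.push(import('./campaigns/mobile.js'));
  }

  // Campaigns targeting logged-in users are skipped for anonymous visitors.
  if (visitor.isLoggedIn) {
    pending.push(import('./campaigns/logged-in.js'));
  }

  // Analytics loads asynchronously: it tracks, but never blocks rendering.
  import('./modules/analytics.js').then((analytics) => analytics.init(visitor));

  // Only the campaign modules are awaited before applying modifications.
  return Promise.all(pending);
}
```
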
To make it easy to monitor the impact on performance, AB Tasty also includes a tool named “Performance Center”. The benefit of this is that you get a real-time preview of your file size. It also provides ongoing recommendations based on your account and campaign setup:

  • to stop campaigns that have been running for too long and that add unnecessary weight to the file,
  • to update features on running campaigns that have benefited from performance improvements since their introduction (e.g. widgets).

How are you loading your experimentation tool?

A common way to load an A/B testing platform is by inserting a script tag directly into your codebase, usually in the head tag of the HTML. This would normally require the help of a developer; therefore, some teams choose the route of using a tag manager as it is accessible by non-technical staff members.

However, this goes against best practice: tag managers cannot guarantee when a specific tag will fire. Considering the tool will be making changes to your website, it is ideal for it to execute as soon as possible.

Normally the tag is placed as high up in the head tag of the HTML as possible – right after any meta tags (as these provide metadata for the entire document) and before external libraries that deal with asynchronous tasks (e.g. tracking vendors such as ad networks). Even if some vendors provide asynchronous snippets so as not to block rendering, it’s better to load the tag synchronously to avoid flickering issues, also called FOOC (Flash of Original Content).

Best practices for flickering issues

Other best practices to solve this flickering issue include:

  • Make sure your solution uses vanilla JavaScript to render modifications. Some solutions still rely on the jQuery library for DOM manipulation, adding one additional network request. If you are already using jQuery on your site, make sure that your provider relies on your version rather than downloading a second version.
  • Optimize your code. For a solution to modify an element on your site, it must first select it. You could simplify this targeting process by adding unique ids or classes to the element. This avoids unnecessary processing to spot the right DOM element to update. For instance, rather than having to resolve “body > header > div > ul > li:first-child > a > span”, a quicker way would be to just resolve “span.first-link-in-header” (see the sketch after this list).
  • Optimize the code auto-generated by your provider. When playing around with any WYSIWYG editor, you may add several unnecessary JavaScript instructions. Quickly analyse the generated code and optimize it by rearranging it or removing needless parts.
  • Rely as much as possible on stylesheets. Adding a stylesheet class to apply a specific treatment is generally faster than adding the same treatment using a set of JavaScript instructions.
  • Ensure that your solution provides a caching mechanism for the script and relies on as many points of presence (CDN) as possible, so the script can be loaded as quickly as possible wherever your user is located.
  • Be aware of how you insert the script from your vendor. As performance optimization becomes more advanced, it’s easy to misuse concepts such as async or defer if you don’t fully understand them and their consequences.
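
As a small illustration of the selector and stylesheet points above (the class names here are hypothetical):

```js
// Resolving a long structural selector is brittle and slower:
const slow = document.querySelector(
  'body > header > div > ul > li:first-child > a > span'
);

// A dedicated class on the same element resolves directly:
const fast = document.querySelector('span.first-link-in-header');

// Prefer toggling a stylesheet class over stacking JavaScript style changes;
// the .promo-highlight rules live in your CSS.
fast.classList.add('promo-highlight');
```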

Be wary of imported fonts

Unless you are using a Web Safe font – which many businesses can’t do because of their branding – the browser needs to fetch a copy of the font so that it can be applied to the text on the website. This new font may be larger or smaller than the fallback font, causing a reflow of the elements. Using the CSS font-display property, alongside preloading your primary webfonts, can increase the chance of a font being ready for the first paint, and helps specify how a font is displayed, potentially eliminating a layout shift.

Think carefully about the variation changes

When adding new HTML to the page, consider whether you can replace an existing element with an element of similar size, thus minimising layout shifts. Likewise, if you are inserting a brand-new element, do preliminary testing to ensure that the shift does not exceed the CLS threshold.

Technical CLS considerations

Always use size attributes for the width and height of your images, videos and other embedded items, such as advertisements and iframes. We also suggest using the CSS aspect-ratio property for images specifically: unlike older responsive practices, it lets the browser determine the size of the image before it is downloaded. The most common aspect ratios today are 4:3 and 16:9 – in other words, for every 4 units across, the screen is 3 units deep, and for every 16 units across, 9 units deep, respectively.

Knowing one dimension makes it possible to calculate the other. If you have an element that is 1000px wide with a 4:3 aspect ratio, its height would be 750px. This calculation is made as follows:

height = 1000 x (3 / 4)

When rendering elements to the browser, the initial layout often determines the width of an HTML element. With the aspect ratio provided, the corresponding height can be calculated and reserved. Handy tools such as Calculate Aspect Ratio can do the heavy lifting for you.
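
As a quick sketch of this calculation and of reserving the space up front (the image selector below is hypothetical):

```js
// Height from width and aspect ratio: height = width * (ratioH / ratioW).
function heightForWidth(width, ratioW, ratioH) {
  return width * (ratioH / ratioW); // e.g. 1000 * (3 / 4) = 750
}

// Explicit width/height attributes let the browser reserve the image's
// slot before the file downloads, so nothing shifts when it arrives.
const img = document.querySelector('img.hero-banner'); // hypothetical image
img.width = 1000;
img.height = heightForWidth(1000, 4, 3); // 750
```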

Use the CSS transform property

The CSS transform property does not trigger any geometry changes (layout) or repainting, which allows you to change an element’s apparent size without causing any layout shifts. Animations and transitions, when done correctly with the user’s experience in mind, are a great way to guide the user from one state to another.
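
For example (the element below is hypothetical), scaling with transform leaves the surrounding layout untouched, whereas changing width or height would force a re-layout and could shift neighbouring elements:

```js
const card = document.querySelector('.promo-card'); // hypothetical element

// Avoid: changing geometry forces layout and can shift neighbouring content.
// card.style.width = '120%';

// Prefer: transform scales the element visually without any layout shift.
card.style.transformOrigin = 'top left';
card.style.transform = 'scale(1.2)';
```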

Move experiments to the server side

Experimenting with many changes at once is considered against best practice, as the weight of the tags used can affect the speed of the site. It may be worth moving these changes to the server side so that they are brought in upon initial page load. We have seen a shift in many sectors where security is paramount, such as banking, towards experimenting server-side to avoid the use of tags altogether. This way, because the changes arrive with the initial page, layout shift is minimised.

Working hand in hand with developers is the key to running server-side tests such as this. It requires good synchronisation between all stakeholders, from marketing to product to engineering teams, and some level of experience is necessary. Moving to server-side experiments just for the sake of performance must be properly evaluated.

Server-side testing shouldn’t be confused with server-side tag management. Some websites that implement a client-side experimentation platform through a tag manager (which is a bad idea, as described previously) may be under the impression that they can move their experimentation tag to the server side as well and gain some of the benefits of server-side tag management, namely reducing the number of network requests to third-party vendors. While this works for some tracking vendors (Google Analytics, the Facebook Conversions API…), it won’t work with experimentation tags that need to apply updates to DOM elements.

Summary

The above solutions are there to give you an overview of real-life scenarios. Prioritising the work to be done in your tech stack is the key factor in improving the site experience in general. This could include moving requests to the server, using a front-end or server-side library that better meets your needs, or even rethinking your CDN provider and where its points of presence are located versus where most of your users are.

One way to start is by using a free web tool such as Lighthouse to get reports about your website. This will give you the insight to begin testing elements and features that are directly or indirectly causing low scores.

For example, if you have a large banner image that is the cause of your Largest Contentful Paint appearing long after your page begins loading, you could experiment with different background images and test different designs against one another to understand which one loads the most efficiently. Repeat this process for all the CWV metrics, and if you’re feeling brave, dive into other metrics available in the Lighthouse tools.

While much thought has gone into the exact CWV levels to strive for, falling short does not mean Google will take you out of its search ranking, as it will still prioritise relevant, quality content over page experience. Not all companies will be able to hit these metrics, but they certainly set standards to aim for.

Written by Nelson Sousa, Chief Technology Officer, Creative CX

Nelson is an expert in the field of experimentation and website development with over 15 years’ experience, specialising in UX driven web design, development, and optimisation.