After our amazing digital summit at the end of 2020, we wanted to sit down with Matt Bullock, Director of Growth at Roboboogie, to learn more about ROI-driven design.
Tell us about Roboboogie and your session. Why did you choose this topic?
Matt: Our session was titled Building an ROI-Driven Testing Plan. When working with our existing clients, or talking with new potential clients, we look at UX opportunities from both a data and design perspective. By applying ROI-modeling, we can prioritize the opportunities with the highest potential to drive revenue or increase conversions.
What are the top 3 things you hope attendees took away from your session?
Matt: We have made the shift from “Design and Hope” to a data-backed “Test and Optimize” approach to design and digital transformation, and it’s a change that every organization can make.
An ROI-Driven testing plan can be applied across a wide range of conversion points and isn’t exclusive to eCommerce.
Start small and then evolve your testing plan. Building a test-and-optimize culture takes time. You can lead the charge internally or partner with an agency. As your ROI compounds, everyone is going to want in on the action!
2021 is going to be a transformative year where we hope to see a gradual return to “normalcy.” While some changes we endured in 2020 are temporary, it looks like others are here to stay. What do you think are the temporary trends and some that you hope will be more permanent?
Matt: Produce delivered to your doorstep and curbside pickup were slowly picking up steam before 2020. By the end of the year, they had moved into the territory of being a customer expectation for any retailer with a brick-and-mortar location. While there will undoubtedly be nostalgia and some relief when retailers are able to safely open for browsing, I do think there will be a sizable contingent of users who will stick with local delivery and curbside pickup.
There is a lot of complexity that is added to the e-commerce experience when you introduce multiple shipping methods and inventory systems. I expect the experience will continue evolving quickly in 2021.
We saw a number of hot topics come up over the course of 2020: the “new normal,” personalization, the virtual economy, etc. What do you anticipate will be the hot topics for 2021?
Matt: We’re hopeful that we’ll be safely transitioning out of isolation near the end of 2021, and that could bring some really exciting changes to the user’s digital habits. We could all use less screen time in 2021 and I think we’ll see some innovation in the realm of social interaction and screen-time efficiency. We’ll look to see how we can use personalization and CX data to create experiences that help users efficiently use their screen time so that we can safely spend time with our friends and family in real life.
What about the year ahead excites the team at Roboboogie the most?
Matt: In the last 12 months, the consumer experience has reached amazing new heights and expectations. New generations, young and old, are expanding their personal technology stacks to stay connected and to get their essentials, as they continue to socialize, shop, get their news, and consume entertainment from a safe distance. To meet those expectations, the need for testing and personalization continues to grow and we’re excited to help brands of all sizes meet the needs of their customers in new creative ways.
Facebook’s Ads Manager is a marketing powerhouse. Even with a $5 daily budget, you could reach hundreds of thousands of people in your target audience. A report by Buffer estimated that as many as 91% of marketers use Facebook ads. Facebook marketing continues to push full steam ahead.
Although Facebook ads can be great for drumming up brand awareness, knowing how to A/B test your ads is the secret to long term success. Without it, you’re just guessing at what works instead of rigorously analyzing and improving your approach. Consistent A/B testing (also known as split testing) provides the analytics you need to improve your strategy, boost engagement, and increase your click-through rate.
Read on for a step-by-step guide on how to A/B test your Facebook ads. By the end, you’ll know how to set up your own A/B ads on Facebook, and the best presets to choose for each option along the way.
But First, What is an A/B Test?
A/B testing your Facebook ads can teach you more about your audience’s preferences.
A and B refer to the versions of the ad. Your A version acts as the control. It’s the ad version you’ve seen work in the past or believe will work best. Your B ad version implements a variable. It’s a variation of A and is meant to compete with your A version.
If A beats out B, then you keep running the A ad and make a different change to B to try again.
Once the B version performs better than A, B becomes your new control – your new A. Your original A is discarded or archived. The new A now acts as the baseline to beat when you split test again in the future.
Split testing is meant to help you identify which variables pull the most weight and alter the parts that don't support conversions.
Before you begin split testing, be sure you’re clear on what specific goal you have for that ad. Usually, you’ll be looking for post engagement, such as a click-through to the website or increasing sign-ups.
Don’t forget to check that the click-through destination matches the promise of the ad. If you were offering a discount on a pair of sneakers, make sure that’s precisely where your audience ends up.
Three Options to Get Started in Facebook’s Ad Manager
Facebook gives you three options to create a split test.
Guided creation: Facebook will walk you through the process of creating a split test. Once you complete their questions, your ads will be ready to go. This method works best if you’re new to Facebook advertising or prefer a step-by-step guide. The screenshots below walk through this method.
Quick creation: Facebook lets you design the structure for a split test. This structure can be saved and deployed at a later time. This can be helpful if you know what you plan to test, but your campaign doesn’t start for another week.
Duplication: If you’ve run Facebook Ads before, the Duplication method allows you to add a new ad set or alter an existing campaign for your split test. We’d recommend this if you want to test one variable in an ad you’ve already run.
There’s no wrong choice since it’ll depend on your preference and history of running Facebook ads. For more detailed steps on each option, review Facebook’s Help page.
Select Your Ad’s Objective & Variable
Select the objective that you decided on earlier. Once you choose one of these options, a menu will appear. Select “Create Split Test”, then select the variable you plan to change. The dropdown menu options are creative, delivery optimization, audience, and placement.
Creative: Design changes such as video, image, description, etc.
Delivery Optimization: Delivers ads to the Audience that is most likely to do your desired action (landing page views, link clicks, impressions, daily unique reach).
Audience: Change the target audience for the ad.
Placement: Change which platforms your ad appears on.
Once you choose that, Facebook will walk you through the next several decisions you need to make. This includes deciding where you want to drive traffic, creating an offer, choosing an audience, optimizing ad delivery, and setting a budget. Here are menu screenshots of each. As you can see, there’s a high level of customizability available for each ad set you run.
Variable
Although you selected this already, you have the option to change it again here. In Facebook’s Ads Manager, you’re only allowed to select one variable at a time. As we recommended earlier, this is the best way to know which variables caused which change.
Audience & Placements
The next two sections are audience and placement. These will both depend on your specific brand and location, so you’ll need to navigate this on your own. Audience can be narrowed down by location, sex, age, and even by past engagement with your page. Consider your target audience’s personality, including their hobbies, interests, and lifestyle. Once you determine a target audience, you can save that cohort and alter it in the future.
Because Facebook placements cover such a broad range of formats and platforms (think everything from Instagram stories to Messenger Inbox), it’s probably best to leave it on recommended. Facebook’s Ads Manager uses its database of ad analytics to determine the best combination of placements for your ad. As you continue to analyze your results, you can create custom placements in the future.
Delivery Optimization
In this section, you can optimize your ad delivery for specific conversions such as link clicks, landing page views, impressions, and daily unique reach. This should reflect the original goal you set out for your ad. You also have a choice between getting charged per impression, or per click.
For ad scheduling, we recommend narrowing your time to when your audience is most likely to be interested in your ad, or at the very least awake. For example, if you’ve seen that your ads tend to convert in the morning, that’s when you should schedule your ads to get the best chance at ROI.
Split Test Budget & Schedule
This is where you can determine how much to spend, and the runtime of your ads. Here you have the choice of a daily budget vs. a lifetime budget. For example, if you decide to spend $70 a day for a 4-day campaign, your daily budget would be $70, and your lifetime budget would be $280.
If you choose daily budget, Facebook will spend up to that amount per day, regardless of performance on the account. Daily budgets don’t allow for ad scheduling since it’ll be working to spend that set amount.
Facebook is more budget and result-conscious with the lifetime budget option. Choosing lifetime budget means Facebook will alter daily spend levels in response to campaign results.
Don’t forget to keep an eye on the “Estimated Test Power”. This percentage is the likelihood of detecting a difference in your ad sets if there is one to detect. Facebook recommends you keep this test power at 80% or above for a worthwhile split test.
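Facebook doesn’t publish the exact formula behind this percentage, but the underlying idea is classical statistical power. As a rough, hypothetical sketch (the 5% and 6% click-through rates and the helper function are purely illustrative, not Facebook’s calculation), you can estimate how many impressions each ad set roughly needs to detect a lift at 80% power and 95% confidence:

```javascript
// Illustrative sample-size estimate for a two-proportion test.
// p1 = baseline rate, p2 = rate you hope the variation achieves.
function sampleSizePerVariant(p1, p2, zAlpha = 1.96, zBeta = 0.84) {
  const pBar = (p1 + p2) / 2;
  const numerator = Math.pow(
    zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
      zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2)),
    2
  );
  return Math.ceil(numerator / Math.pow(p1 - p2, 2));
}

console.log(sampleSizePerVariant(0.05, 0.06)); // roughly 8,000+ people per ad set
```

If the estimate dwarfs the traffic your budget can buy, test a bolder change or extend the schedule rather than accept an underpowered result.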
Once you’ve made your selections, you can click continue to upload and design your ad control.
Design Two Versions of your Ad
A/B Test of the Same Ad: Photo Credit to Jeff Bullas
To split test, you’ll need to create one control (A) and one variable (B). Regardless of which variable you’re testing, it’s best to change only one so the results are clear. Some audience-facing variables you might switch include changing your call-to-action, using a different image, or removing the image entirely.
Regardless of which you choose, be sure the final ad is noticeably different from before and tests an aspect that’s broad enough to be applied in the future.
For example, if you’re marketing a winter holiday, don’t A/B test between two different photos of a decorative table setting. Choose a photo with a person, add text to the image, or remove the image entirely. That way if you’re advertising a summer holiday in the future, you’ll be able to paint a more generalized picture of what sparks interest in your audience.
Once you’re ready, input your ad into Facebook’s platform. Be sure to preview your ad (top right) and create URL parameters (bottom left) so you can track which engagement came from where.
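For example, here is a minimal sketch of what those URL parameters might look like (the domain, campaign name, and parameter values are hypothetical):

```javascript
// Build a UTM-tagged destination URL so analytics can attribute clicks
// to this specific ad variation ("variant_a" vs. "variant_b").
const destination = new URL("https://www.example.com/sneaker-sale");
destination.searchParams.set("utm_source", "facebook");
destination.searchParams.set("utm_medium", "paid_social");
destination.searchParams.set("utm_campaign", "winter_sale");
destination.searchParams.set("utm_content", "variant_a"); // use "variant_b" for ad B

console.log(destination.toString());
// https://www.example.com/sneaker-sale?utm_source=facebook&utm_medium=paid_social&utm_campaign=winter_sale&utm_content=variant_a
```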
When you’re ready, click the “Continue to ad B” button in the bottom right corner. This page auto-fills with the same information as ad A. It’s here that you introduce any variables, such as changing the audience, ad format, or other specs.
Finally, you click the green “Confirm” button to finalize and purchase the ad.
Review the Results
Once your ads are finished running, it’s time to review the results of your A/B test. Drawing actionable conclusions is the most important step in increasing your ad’s CTR. Thankfully, Facebook Ads Manager makes this easy.
First, apply a filter so that only relevant campaigns and ad sets that were part of the split tests will show in the reporting table. To do this, click Filter and choose Split Test from the menu.
For a quick initial result, Ads Manager puts a star next to the winning ad set. Facebook determines the winning set by comparing the cost per result of each ad set along with other variables.
Facebook Ads Manager will also send you a detailed email report that includes:
Their Winning Ad Determination
Your A/B Test Settings
Results
Cost
Amount Spent
From these results, you can determine what worked and what changes you’d like to make for your next Facebook campaign.
—
Understanding Facebook advertising and split test marketing is a worthwhile investment for any marketer worth their salt. 80% of all internet users have a Facebook Account, meaning that you’re practically guaranteed to reach your target audience on the platform.
Using their Ads Manager, you can build a robust and ever-improving marketing strategy using their analytics. Over time, you’ll see an increase in revenue, sales, and lead generation. Once you have everything prepared, it only takes minutes to set up a Facebook Ad, so get started today!
A/B testing has been around for decades, even before the advent of the internet or social media. In fact, it spans back to 1747, when James Lind ran what is often considered the first controlled clinical trial.
Many years later, Google famously used an A/B test to decide which shade of blue to use in its campaigns, showing each shade to 1% of their users. Some time in between James Lind and Google, marketers would run tests on TV or newspaper ads. They would then assess the results and make changes accordingly, and then conduct more tests, and so forth. These tests started weeks – or even months – before the campaign launch, making for a time-consuming and tedious process.
Fortunately, testing is an easier process nowadays, and marketers are able to test virtually all elements of a campaign. More specifically, A/B testing has found a special importance in social media. Digital marketers introduce two slightly different posts, testing various elements to see which one gets a better response.
Although testing has become easier, it has certainly become more complex as well. The issue that many marketers now face is knowing where and how to introduce testing in their social media campaigns. To help, we’ve compiled a list of the elements of a social campaign that you should be testing, and how you can start executing these tests right away.
1. Find Your Target Audience
Before you start a campaign, you have to get to know your target audience. This process for testing is unique, in that you won’t be changing the actual contents of the campaign. Instead, you will show the same advertisement or post to various segments to see which one will react best.
For instance, when testing Facebook ads, you will generally want to segment by location, age, gender, device, platform, or interests.
2. Experiment with Hashtags
While using too many hashtags might annoy your audience, just the right amount could get your post more attention. Having said that, you should avoid simply testing a post with hashtags versus a post without hashtags. Companies tend to test posts with multiple hashtags against those with just one, posts with different hashtags, as well as hashtag placement within the post.
3. Test Various Ad Formats
When using social media advertising, you should definitely be testing different ad formats. Specifically, in the case of Facebook, some formats will work best for certain types of posts. Edith McClung, a Digital Marketer at Academized, gives a great example: “While a carousel ad might work for a product launch – viewers will be able to see multiple pictures of your product – an advertisement with ‘Get Directions’ might work better with a restaurant launch”. Keep in mind that different advertisement types will have varied results based on your target audience and the content you are promoting.
4. Change Up the Post Text
This is perhaps the most common practice when it comes to social media split testing, as the various elements of your post text can each affect your success differently.
Here are some things that you could test:
Length of the post
Style
Use of emoji
Tone of voice
Use of numbers and lists
Remember, you always want to proofread your posts. Even though we live in the age of texting and abbreviations, readers still expect your posts to be flawless. Even the smallest mistakes can be off-putting to the reader. Using tools such as AcademAdvisor or Via Writing can help.
5. Use Different Images and Videos
While it’s generally the case that social media users prefer posts that feature images and videos, it’s still important to test this on your own audience for each specific platform. For example, split testing often shows that Twitter users prefer GIFs to regular images, so companies present on this social media platform tend to use GIFs more often than other types of graphics.
The testing possibilities are endless, as you can try posts with no images or videos versus text with images and videos, posts with gifs versus posts with images, the length of the video in posts, etc.
Just be sure to balance informative text out with visual content and use an appealing format. Tools like Boom Essays or Essay Roo can help.
6. Play Around With Your CTAs
Your Call-To-Action is another crucial, yet often overlooked component to your post. Users have varied responses to different CTAs, and you need to find the one that will work best for your audience. Test several CTAs in your posts and use the one that is most relevant yet also gets you the most clicks.
7. Try Out Different Headlines
Headlines are one of the most important aspects of your posts, as they are often the most prominent component. Test the same factors that you normally would in post content – length of the headline, use of numbers, style, etc. If writing headlines isn’t your strength, it might be a good idea to use a guide – websites like StateOfWriting or UK Writings can help you.
Wrapping Up
Split testing is one of the best methods out there for getting things right on social media. The same post can get a different response based on the title, CTA, advertisement type, etc. By continuing to test, you will be able to optimize your social media strategy by finding what works best with your audience.
In this day and age, it has become so apparent how much social media can impact the success of a business or brand, and by adding A/B testing to your repertoire, you could be seeing even more of a benefit from platforms that you are already using. So get creative, have fun with it, and watch your business grow.
Freddie Tubbs is a digital marketing strategist at Paper Fellows. He regularly takes part in online marketing conferences and contributes expert articles to the Vault, Australian Help and Big Assignments blogs.
Breaking news: according to CopyBlogger, 80% of all readers never make it past the headline.
If you read this, you’re among our happy 20% and you shall not be disappointed.
The truth is: it’s a pretty serious issue for all publishers.
Similarly, the Washington Post reported that 6 in 10 Americans acknowledge that they don’t read past the headlines regarding any type of news.
So, should we just stop writing?
Obviously not.
In 2018, written content is still one of the most consumed media formats (in competition with video) and remains a powerful tool to:
Build brand awareness
Generate B2B Leads
Report news
Drive sales
Grow your audience
Rank on search engines
Knowing that most readers won’t spend more than 15 seconds reading an average article (source: Buffer), crafting powerful and catchy headlines has never been more important to ensure that your audience will stick around for a while and that you don’t produce content in vain.
But how do you make sure that your headlines really work?
It’s simple: you need to run some headline A/B testing.
What is headline testing?
Definition: headline testing consists of creating several title variations for the same article (or online media piece) in order to find out which one performs the best.
Based on your objectives, headline testing can be used to track several metrics, such as click-through rate, pageviews, and conversions.
Headline testing requires you to define a title as the “control version” in order to compare it with one or more variants.
While choosing the number of variants, bear in mind that the more variants you wish to test, the larger the sample you will need in order to obtain statistically relevant results.
Once you’ve chosen your variants, you will use an A/B testing tool to run your tests and see which headline outperforms the others.
Typically, an A/B testing tool will send a percentage of your page’s traffic to each variant until it identifies a winner.
From there, the tool will allocate 100% of the traffic to the “winner” in order to maximize your page’s performance.
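Under the hood, the mechanics are simple. Here is a minimal sketch of how a tool might split traffic between headline variants (the variant names, weights, and selector are illustrative, not any specific vendor’s API):

```javascript
// Randomly assign each visitor a headline according to the configured weights.
const headlines = [
  { id: "control", text: "How to A/B Test Your Content Headlines", weight: 0.5 },
  { id: "variant", text: "7 Headline Ideas That Boost Click-Through Rates", weight: 0.5 },
];

function pickHeadline(variants) {
  const roll = Math.random();
  let cumulative = 0;
  for (const variant of variants) {
    cumulative += variant.weight;
    if (roll < cumulative) return variant;
  }
  return variants[variants.length - 1];
}

const chosen = pickHeadline(headlines);
document.querySelector("h1").textContent = chosen.text;
// Once a winner emerges, its weight is raised to 1 and the others drop to 0,
// which is how the tool ends up sending 100% of traffic to the best headline.
```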
Sound good?
Let’s see how to come up with brilliant headline ideas that you will be able to A/B test later on.
How to brainstorm headline ideas
Headlines come in many forms depending on whether you’re writing an article, a landing page or even a product description.
Given this variety of headlines, we’ll try to help you craft various headlines through general guidelines to meet your business objectives.
In 2013, Conductor published a study that showed the impact of adding numbers to your headlines: it appears that readers do prefer headlines that include numbers.
Craft a strong value proposition
Creating a value proposition for your readers means that you need to work on including a real benefit inside your headline.
Working on your value proposition is the cornerstone of every headline creation process: it helps you address your core audience while promising something in exchange for their attention.
Depending on the content you’re working on, crafting your value proposition is a process that basically sells your content: it will determine whether or not your potential readers will click on your content.
In order to grab your visitors’ attention from the beginning, try to avoid headlines that can easily be answered by “Yes” or “No”.
“Yes and No” headlines are dangerous because they force your visitors to form an opinion about your question or your statement, which will eventually lead to a significant share of visitors choosing not to click.
Here’s a list of formulations used to trigger curiosity:
“How to …”
“The 7 facts you didn’t know about …”
“How [insert_name] managed to [action] in [days]”
“The Complete Guide to …”
“What every [target] should know about [subject]”
Watch your competition
There’s no secret for marketing success: practice makes perfect.
Because most businesses typically have dozens of competitors, you should pay attention to your competitors’ headline formulations.
From there, try to identify general trends and success formulas that you could apply to your own content.
Watch headlines used by your competitors
Ideas for effective headlines can often be found by reviewing the headlines your competitors use across their own content and campaigns.
Online visitors and shoppers are over-exposed to marketing messages all day long.
Knowing this, it can be clever to keep your headlines short, simple and clear in order to deliver straightforward information to your potential readers.
Because marketers are always searching for new magic formulas, they sometimes come up with complex, tricky formulations that you should avoid.
Use a headline analyzer
Headline analyzers are online tools that score your headlines based on a certain number of parameters.
Typically, these tools will grade your headlines on a 100-point scale in order to help you craft catchier, better headlines.
They often measure the length and analyze your headline’s structure to determine optimal word order, keyword use, and formulation.
Here are 2 free tools you can use to analyze your headlines:
We’ve analyzed our own headline to see what type of results we would get.
Key Takeaway: our headline “How to Effectively A/B Test your Content Headlines” scored a reassuring 72/100 because it contains a power word “effectively” and an emotional element that triggers curiosity “How to…”.
The tool even identified our main keywords, which is a good starter for search engine optimization.
Run A/B tests and compare results
Impact of Headline Testing on Pageviews. Source: Priceonomics.com
As you know, headline testing can bring tremendous benefits to your key metrics such as page views, CTR and conversions.
To prove this point, Priceonomics published an analysis that showed a 33% improvement in pageviews following headline testing: a major improvement that could drastically change how visitors behave on your website.
Now that you’ve come across our best practices for headline creation, it’s high time you start testing your own headline variations to find out which ones are most effective.
In order to do so, here’s a little checklist you can follow:
Use our A/B Testing tool to set up your experimental environment
Our WYSIWYG editor makes it easy to test headlines
Did you like this article? Feel free to share and check out our other in-depth articles on how to optimize your website, ecommerce and digital marketing.
In a digital world that mainly relies on a customer-centric approach and data-driven technologies, collecting user feedback is key to developing successful products, be they apps, websites, or services.
In order to design products and services that truly answer customers’ needs and expectations, effective companies use iterative design processes whose sole purpose is to constantly allow for better user experiences.
Usability testing is all about asking people and monitoring how intuitive and how easy it is to use a product.
Many people assume that usability testing only happens in the pre-launch design phase.
That’s not the case.
In fact, developing an iterative design process implies implementing repeated user tests at every stage of your product lifecycle.
Why?
Mostly because your product will undergo multiple new versions, features, and services that will all require user tests to validate assumptions.
Because digital marketers and UX researchers have long studied the methods and processes to harvest user insights, many different usability testing options have emerged in recent years.
What Exactly is Usability Testing?
Usability tests are processes designed to observe and track real users while they use a product, measuring its usability and user-friendliness in order to achieve marketing objectives.
Moderated or not, your usability tests are meant to harvest user insights in order to develop an efficient user experience and design an overall better product.
Usability tests are used to confront assumptions before launching a new product or releasing a new feature.
They are also useful to measure a product’s efficiency in its current version in order to identify possible pain points and therefore solve them.
Your Objectives Behind Usability Testing
Because development and marketing teams often have to cope with tight deadlines and management pressure, the temptation to skip any usability testing phase can be strong.
But this could cost you a lot.
In fact, usability testing should be included in your product development roadmap from the beginning.
That way, you’ll be certain to have time to actually carry out proper user tests.
Why is usability testing so important?
As a product developer, your job is to deliver a product or service that is:
Efficient
User-friendly
Profitable
In order to achieve these 3 objectives, your goal is to gather as much feedback as you can before actually releasing the product or the feature.
With this in mind, your user tests will have to deliver meaningful insights that will eventually lead to product updates.
Note: the objectives behind usability testing differ from one product to another.
However, here are some crucial objectives that can be tracked through user tests, regardless of your company’s product.
Do people enjoy using your product?
Are users able to successfully complete pre-determined tasks?
Does the product match your core target’s expectations?
How easy to use is your product?
Are users pleased with the interface, colors, buttons, forms?
Now that we’ve covered the general aspects of usability testing, let’s take a closer look at the different types of usability tests that you can implement in order to develop a better product.
Moderated & Unmoderated User Tests
a) Moderated User Tests
Moderated user testing consists of different tests run on users with the presence of moderators.
These moderators will guide test participants, answer their questions and harvest useful feedback.
Although moderators might interfere with the live experience, moderated tests are useful to ask precise questions at very specific stages in order to collect targeted feedback based on assumptions.
These tests are a great opportunity for companies developing prototypes that require extensive feedback in the early design phases.
Using moderated tests, you will be able to gather actionable insights that will save your company precious time and money that would otherwise have been spent on a costly inefficient prototype.
Key takeaway: moderated user tests are specifically adapted to early-stage products and services because moderators can guide participants through the process. However, be careful that your moderators don’t actually tell users what to do: the user experience has to remain natural.
Good to know: moderated user tests can either be run remotely or with the actual presence of participants.
Naturally, having users come to you or vice versa will cost you more than remote tests.
Although both types of tests are viable, you will usually generate more reaction from the participants during a real live test than a remote test.
b) Unmoderated User Tests
As the name suggests, unmoderated user tests are led without any supervision from your side.
Generally, these types of tests are run remotely without the presence of a moderator.
These tests require the use of specific tools or SaaS platforms to automatically gather user insights and record their interactions for a delayed analysis.
During unmoderated tests, users are assigned pre-determined tasks to complete and are invited to express their thoughts and struggles out loud.
Using this solution, your company will then analyze users’ reactions that have been recorded during the tests.
Key Takeaway: unmoderated tests are definitely cheaper and easier to implement. Solution providers like UserTesting can deliver ready-to-use panels tailored to your core target in a matter of hours, which is extremely convenient compared to having to manually recruit participants.
Because there’s no involvement from your side apart from designing and reviewing user tests, unmoderated tests can also be run simultaneously and on a much larger scale.
Good to know: unmoderated tests don’t necessarily replace moderated tests – rather, the two complement each other.
Because there will be no supervision from your side, it is highly advised to craft crystal-clear guidelines and expectations to avoid confusion among users.
Focus Groups
Focus groups are specific processes that consist of inviting approximately 10 participants to discuss their needs and expectations about your product.
These tests can be run both before and after a product’s release – depending on your objectives.
Contrary to moderated user tests, focus groups are used to discuss participants’ needs, expectations and feelings about your product rather than just evaluating your design’s usability.
Typically, moderators will create a set of predetermined questions that will lead to multiple discussions regarding how participants feel about your product or certain features.
Key Takeaway: focus groups are useful to gather insights about your users’ potential needs and expectations. Used in complement with moderated or unmoderated user tests, they will provide meaningful feedback that can be leveraged to create new features or rethink the user interface.
Beta Tests & Surveys
Although they truly differ from other user tests, beta tests can be extremely useful to provide your usability testing process with a more quantitative approach.
Because beta tests require a large sample, companies can find it difficult to recruit a sufficient and representative number of beta-testers for the test to be viable.
However, beta tests can become a priceless opportunity to uncover many usability issues at once, backed by a wide variety of opinions coming from hundreds or thousands of participants.
Particularly popular in the video game industry, beta tests can also be used to test your MVP (minimum viable product) before your final product actually launches.
Using the same quantitative approach, surveys and online questionnaires are a cheap, quick and semi-reliable way to gather feedback on your product.
For these to work, you will have to address the right audience if you want relevant answers to appear in your questionnaires.
Surveys are useful when it comes to quantitative comparison.
Example: Your company develops a new fashion marketplace and hesitates between two logo designs: you could send survey questionnaires to your target audience asking recipients to choose between the two designs.
A/B Tests
Agreed, these tests are a bit different – but they really work.
As opposed to most of the other tests we’ve mentioned, A/B tests are run on your product’s current version in order to determine which of two design options is better.
Example: let’s say that your company runs an ecommerce website and recently created a new product page layout. Your team wants to decide between the two layouts (version A & B) without compromising on conversions: they will use A/B testing to sort this out and choose a “winner” from these two options.
A/B tests can be conveniently used to track all sorts of “goals” depending on your website or product – which is extremely convenient to gather data and boost your current product’s usability and user-friendliness.
AB Tasty is a complete personalization and A/B testing software integrating cutting-edge features so that you, as a marketer, can take action now and increase your website’s performance.
A/B testing is an effective way to improve your site’s user experience and its ability to convert users to clients.
While changes made to your site may impact your user’s behavior, they are also seen by search engine crawlers, especially Google. The latter is perfectly capable of interpreting JavaScript, the scripting technology behind a lot of A/B tests.
As A/B testing experts, we are often asked about the impact of A/B testing on our clients’ organic search rankings. If SEO is not taken into account, an A/B testing campaign can impact the visibility of the site, notably for tests based on URL redirects.
This post is a good opportunity to review A/B testing best practices for SEO and help you do what’s best when it comes to optimizing conversions, without jeopardizing your rankings and web traffic.
General SEO recommendations
To start, let’s review some general recommendations from Google.
Google completely accepts A/B testing and even encourages it if it’s geared towards improving user experience. Google also offers its own client-side A/B testing tool (Google Optimize) that uses JavaScript to manipulate the DOM (Document Object Model) to create page variations.
On its blog, Google shares rules to be respected so that its algorithms do not penalize your site. The main rule concerns opening your test to the search engine’s robots, which must see the same version of your pages as your visitors do.
So, one of the first best practices for SEO is to not exclude Google’s bot from your A/B tests. Even if your A/B testing solution offers some advanced user-targeting capabilities, like user-agent detection, do not use them to exclude Googlebot.
It is also recommended that you do not display pages that are too different from one another to your users. For one, it will be more difficult to identify which element(s) had a greater impact on the conversion rate. Second, Google may consider the two versions to be different and interpret the setup as a manipulation attempt. You may lose rankings as a result or, in the worst-case scenario, see your site removed from the index entirely.
Depending on your objectives, the A/B testing setup may differ and each way of doing things can have an impact on SEO.
Best practices for A/B tests with URL redirects
A/B testing using URL redirects, also known as split testing, is one of these methods. Instead of using a WYSIWYG (What You See Is What You Get) editor to design your variation, you redirect users to a completely separate page, often hosted on your site, that has its own URL. Using this method is justified if you have a lot of changes to make on your page; for example, when you want to test a different design or another landing page concept.
This use case is the most prone to error and can have a dramatic impact on your search engine ranking, namely your original page being removed from the Google index, and replaced by your variant page. To avoid this, remember the following points:
Never block Google’s bots via your site’s robots.txt file with the Disallow instruction or by adding the noindex command on your alternate pages. The first prevents bots from reading the content of targeted pages, whereas the latter prevents them from adding the pages to Google’s index. It’s a common error, as the site publisher is afraid that the alternate version will appear in results. If you respect the following instructions, there is no reason for your alternate version to “rank” instead of your original version.
Place a canonical attribute on the variant page and set the value to the original page. This tells Google the original page is the one it must take into account and offer to internet users. Search engine bots will understand that page B has no added value compared to A, which is the only version to be indexed. In the case of a test on a set of pages (e.g. you want to test 2 product page formats across your catalog), you must set up this matching for each page.
Redirect visitors via a 302 or JavaScript redirection, both of which Google interprets as temporary redirects. In other words, the search engine considers it to be a temporary modification of your site and does not modify its index accordingly.
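To make the last two points concrete, here is a minimal, purely illustrative sketch (the page URLs are hypothetical, and a real setup would persist the assignment per visitor, for example with a cookie, so each user always sees the same version):

```javascript
// JavaScript redirection for a split URL test: Google treats this, like a 302,
// as a temporary change and keeps the original page in its index.
const VARIANT_URL = "https://www.example.com/landing-page-b";

if (Math.random() < 0.5) {
  window.location.replace(VARIANT_URL);
}

// On the variant page (landing-page-b), declare the original page as canonical
// in the <head> so only version A is indexed:
// <link rel="canonical" href="https://www.example.com/landing-page-a">
```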
When a redirect test is completed, you must put into production the changes that have been shown to be useful. The original page A is then modified to include the new elements that foster conversion. Page B, meanwhile, can either be redirected to page A with a 301 (permanent) or 302 (temporary, if the page will be used for other tests) redirection.
Best practices for standard A/B tests
Applying a JavaScript overlay is by far the most common way to conduct A/B tests. In this case, your variants are no more or less than changes applied on the fly when the page loads into the user’s browser. The A/B testing solution manages the whole process from the JavaScript code interpretation of changes you made via a graphics editor, up to data collection, by randomly assigning users to one of the variants and respecting this assignment throughout the test. In this case, your URLs do not change and changes only occur in the client browser (Chrome, Firefox, Internet Explorer, etc.).
This type of A/B test does not harm your SEO efforts. While Google is perfectly capable of understanding JavaScript code, these changes will not be a problem if you do not try to trick it by showing it an initial content that is very different from that presented to users. Therefore, make sure that:
The number of elements called by the overlay is limited given the overall page and that the test does not overhaul the page’s structure or content.
The overlays do not delete or hide elements that are important for the page’s ranking and improve its legitimacy in the eyes of Google (text areas, title, images, internal links, etc.).
Only run the experiment as long as necessary. Google knows that the time required for a test will vary depending on how much traffic the tested page gets, but says you should avoid running tests for an unnecessarily long time as they may interpret this as an attempt to deceive, especially if you’re serving one content variant to a large percentage of your users.
Tips:
While it’s better to avoid heavy overlays on pages generating traffic, you have complete freedom for pages that Google’s bots do not browse or that have no SEO benefit (account or basket pages, checkout funnel pages, etc.). Don’t hesitate to test new optimizations on these pages that are key to your conversion rate!
What about mobile SEO?
Using your A/B testing solution to improve the user journey on mobile devices is a use case that we sometimes encounter. This is a particularly sensitive point for SEO since Google is rolling out its Mobile First Indexing.
Until now, Google’s ranking algorithm was based primarily on the content of a site’s desktop version to position it in both desktop and mobile search results. With the Mobile First Indexing algorithm, Google is switching this logic around: the search engine will now use the mobile page’s content as a ranking signal rather than the desktop version, no matter what the device.
Therefore, it’s particularly important not to remove from mobile navigation – for UX reasons – elements that are vital to SEO, such as page-top content that takes up too much space on a smartphone.
Can personalization impact your SEO?
Some A/B testing tools also offer user personalization capabilities. AB Tasty, for example, helps you boost user engagement via custom scenarios. Depending on your visitors’ profile or their journeys on your website, you can easily offer them messages or a personalized browsing experience that is more likely to help them convert.
Can these practices have an impact on your SEO? As with A/B tests using JavaScript, the impact on SEO is limited, but some special cases should be taken into consideration.
For instance, highlighting customized content with an interstitial (pop-in) presents a challenge in terms of SEO, notably on mobile. Since January 2017, Google has considered intrusive interstitials harmful to the user experience because they keep the page’s content from being easily accessible. Personalized interstitials must therefore be adjusted to Google’s expectations. Otherwise, you risk seeing your site lose rankings and the resulting traffic.
Note that Google seems to tolerate legal interstitials that take up a majority of the screen (cookie information, age verification, etc.) for which there is no SEO impact.
During an A/B test, you must only modify one element at a time (for example, the wording of an action button) to be able to determine the impact. If you simultaneously change this button’s wording and color (for example, a blue “Buy” button vs. red “Purchase” button) and see an improvement, how do you know which of the wording or the color changes really contributed to this result? The contribution of one may be negligible, or the two may have contributed equally.
The benefits of multivariate tests
A multivariate test aims to answer this question. With this type of experiment, you test a hypothesis for which several variables are modified and determine which is the best combination of all possible ones. If you change two variables and each has three possibilities, you have nine combinations between which to decide (number of variants of the first variable X number of possibilities of the second).
Multivariate testing has three benefits:
avoid having to conduct several A/B tests one after the other, saving you time since we can look at a multivariate test as several A/B tests conducted simultaneously on the same page,
determine the contribution of each variable to the measured gains,
measure the interaction effects between several supposedly independent elements (for example, page title and visual illustration).
Types of multivariate tests
There are two major methods for conducting multivariate tests:
“Full Factorial”: this is the method that is usually referred to as multivariate testing. With this method, all combinations of variables are designed and tested on an equal share of your traffic. If you test two variants for one element and three variants for another, each of the six combinations will be assigned to 16.66% of your traffic (see the sketch after this list).
“Fractional Factorial”: as its name suggests, only a fraction of all combinations is actually shown to your traffic. The conversion rate of untested combinations is statistically deduced from those actually tested. This method has the disadvantage of being less precise but requires less traffic.
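A quick illustration of how fast full factorial combinations multiply (the variable names below are purely illustrative):

```javascript
// 2 headline variants x 3 image variants = 6 combinations, each of which
// receives an equal share (about 16.66%) of traffic in a full factorial test.
const headlines = ["headline-A", "headline-B"];
const images = ["image-1", "image-2", "image-3"];

const combinations = headlines.flatMap((headline) =>
  images.map((image) => ({ headline, image }))
);

console.log(combinations.length);                    // 6
console.log((100 / combinations.length).toFixed(2)); // "16.67" percent of traffic each
```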
While multivariate testing seems to be a panacea, you should be aware of several limitations that, in practice, limit its appeal in specific cases.
Limits of multivariate tests
The first limit concerns the volume of visitors to subject to your test to obtain usable results. By multiplying the number of variables and possibilities tested, you can quickly reach a significant number of combinations. The sample assigned to each combination will be reduced mechanically. Where, for a typical A/B test, you are allocating 50% of your traffic to the original and the variant, you are only allocating 5, 10, or 15% of your traffic to each combination in a multivariate test. In practice, this often translates into longer tests and an inability to achieve the statistical reliability needed for decision-making. This is especially true if you are testing deeper pages with lower traffic, which is often the case if you test checkout funnels or landing pages for traffic acquisition campaigns.
The second disadvantage is related to why a multivariate test is chosen in the first place. In some cases, it is the result of an admission of weakness: users do not know exactly what to test and think that by testing several things at once, they will find something to use. We often find small modifications at work in these tests. A/B testing, on the other hand, imposes greater rigor and better identification of test hypotheses, which generally leads to more creative tests supported by data and with better results.
The third disadvantage is related to complexity. Conducting an A/B test is much simpler, especially in the analysis of the results. You do not need to perform complex mental gymnastics to try to understand why one element interacts positively with another in one case and not in another. Keeping a process simple and fast to execute allows you to be more confident and quickly iterate your optimization ideas.
Conclusion
While multivariate tests are attractive on paper, note that carrying out tests for too long only to obtain weak statistical reliability can make them a less attractive option in some cases. In order to obtain actionable results that can be quickly identified, in 90% of cases, it is better to stick to traditional A/B tests (or A/B/C/D). This is the ratio found among our customers, including those with an audience of hundreds of thousands or even millions of visitors. The remaining 10% of tests are better reserved for fine-tuning when you are comfortable with the testing practice, have achieved significant gains through your A/B tests, and are looking to exceed certain conversion thresholds or to gain a few increments.
Finally, it is always helpful to remember that, more than the type of test (A/B vs. multivariate), it is the quality of your hypotheses – and by extension that of your work of understanding conversion problems – which will be the determining factor in getting boosts and convincing results from your testing activity.
A/A testing is little known and the subject of heated debate about its usefulness, but it brings added value for those who are looking to integrate an A/B testing solution with rigor and precision.
But before we begin…
What is A/A testing?
A/A testing is a derivative of A/B testing (check out A/B testing definition). However, instead of comparing two different versions (of your homepage, for example), here we compare two identical versions.
Two identical versions? Yes!
The main purpose of A/A testing is simple: verify that the A/B testing solution has been correctly configured and is effective.
We use A/A testing in three cases:
To check that an A/B testing tool is accurate
To set a conversion rate as reference for future tests
To decide on an optimal sample size for A/B tests
Checking the accuracy of the A/B Testing tool
When performing an A/A test, we compare two strictly identical versions of the same page.
Of course, an A/A test is expected to show similar conversion values for both versions. The idea here is to prove that the testing solution is reliable.
Logically, we will organize an A/A test when we set up a new A/B test solution or when we go from one solution to another.
However, sometimes a “winner” is declared on two identical versions. Therefore, we must seek to understand “why” and this is the benefit of A/A testing.
The test may not have been conducted correctly
The tool may not have been configured correctly
The A/B testing solution may not be effective.
Setting a reference conversion rate
Let’s imagine that you want to set up a series of A/B tests on your homepage. You set up the solution, but a problem arises: you do not know which conversion rate to compare the different versions to.
In this case, an A/A Test will help you find the “reference” conversion rate for your future A/B tests.
For example, you begin an A/A Test on your homepage where the goal is to fill out a contact form. When comparing the results, you get nearly identical results (and this is normal): 5.01% and 5.05% conversions. You can now use this data with the certainty that it truly represents your conversion rate and activate your A/B tests to try to exceed this rate. If your A/B tests tell you that a “better” variant achieves 5.05% conversion, it actually means that there is no progress.
Finding a sample size for future tests
The problem in comparing two similar versions is the “luck” factor.
Since the tests are formulated on a statistical basis, there is a margin of error that can influence the results of your A/B testing campaigns.
It’s no secret how to reduce this margin of error: you have to increase the sample size to reduce the risk that random factors (so-called “luck”) skew the results.
By performing an A/A test, you can “see” at what sample size the test solution comes closest to “perfect equality” between your identical versions.
In short, an A/A test allows you to find the sample size at which the “luck” factor is minimized; you can then use that sample size for your future A/B tests. That said, detecting a genuine difference in an A/B test generally requires a smaller sample than demonstrating equality in an A/A test.
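A quick way to see this “luck” factor at work is to simulate an A/A test. In this minimal sketch (the 5% conversion rate and sample sizes are arbitrary), both versions share exactly the same true conversion rate, yet small samples can still show a misleading gap:

```javascript
// Simulate one variant: count how many of `visitors` convert at `trueRate`.
function simulateVariant(trueRate, visitors) {
  let conversions = 0;
  for (let i = 0; i < visitors; i++) {
    if (Math.random() < trueRate) conversions++;
  }
  return conversions / visitors;
}

for (const visitors of [500, 5000, 50000]) {
  const a = simulateVariant(0.05, visitors);
  const b = simulateVariant(0.05, visitors);
  console.log(
    `${visitors} visitors per version: A=${(a * 100).toFixed(2)}% ` +
      `B=${(b * 100).toFixed(2)}% gap=${(Math.abs(a - b) * 100).toFixed(2)} points`
  );
}
// The gap caused purely by chance shrinks as the sample size grows.
```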
A/A testing: a waste of time?
The question is hotly debated in the field of A/B Testing: should we take the time to do an A/A test before doing an A/B test?
And that is the heart of the issue: time.
Performing A/A tests takes considerable time and traffic
In fact, performing A/A tests takes time, considerably more time than A/B tests since the volume of traffic needed to prove that the two “identical variants” lead to the same conversion rate is significant.
The problem, according to ConversionXL, is that A/A testing is time-consuming and encroaches on traffic that could be used to conduct “real tests,” i.e., those intended to compare two variants.
Finally, A/A testing is much easier to set up on high-traffic sites.
The idea is that if you run a site that is just launching or has low traffic, it is pointless to waste your time on an A/A test: focus instead on optimizing your purchase funnel or your Customer Lifetime Value – the results will be much more convincing and, especially, much more interesting.
An interesting alternative: data comparison
To check the accuracy of your A/B Testing solution, there is another way that is easy to set up. To do this, your A/B Testing solution needs to integrate another source of analytic data.
By doing this, you can compare the data and see if it points to the same result: it’s another way to check the effectiveness of your test solution.
If you notice significant differences in data between the two sources, you know that one of them is measuring incorrectly and needs to be investigated.
Flickering, also called FOOC (Flash of Original Content) is when an original page is briefly displayed before the alternative appears during an A/B test. This happens due to the time it takes for the browser to process modifications. There is no miracle fix to this problem, and those claiming to be quick fixes have limited effectiveness. The good news is that there are several best practices to accelerate the application of your modifications, effectively masking the flickering effect.
Update: to get rid of flickering, you can switch from Client-Side testing to Server-Side testing. The latter doesn’t involve any kind of Javascript to apply modifications on your pages and completely removes the FOOC. Read more about this feature now available within AB Tasty.
What is flickering, exactly?
Although you may have never heard of flickering, you have undoubtedly experienced it without even knowing: a test page loads and, after a few milliseconds, your modifications show up. In the blink of an eye, you’ve seen two versions of your page—the old and the new. The result is poor user experience, not to mention that your users now know your test is just that: a test.
Flickering is caused by the basic client-side operation of A/B testing solutions that apply a JavaScript overlayer during page loading to ensure elements are modified. In most cases, you will not notice it at all, but if your site takes a while to load or relies on intensive external resources, your modifications can take time to be applied, giving way to a previously unnoticeable flickering.
Is there a miracle cure for flickering?
Some providers claim to use innovative techniques that get rid of flickering. Beware, however, that although the techniques they use are commonplace and available to anyone, they present a certain number of technical limits. By analyzing market leaders’ documentation, it is also clear that such “miracle” methods are only implemented as a last resort, when no other options have worked for a lasting period of time. This is because flickering can be different for each site and depends a great deal on initial performance.
So how does the method work? For starters, displayed content is temporarily masked using CSS properties such as visibility: hidden or display: none for the body element. This property masks page content as quickly as possible (since the solution’s tag is located in the page’s <head> element), before redisplaying it again once the modifications have had enough time to be applied. This effectively eliminates the “before/after” flicker effect, but replaces it with a “blank page/after” effect.
The risk of using such a method is that if the page encounters any loading problems or there are implementation problems, users might end up with a blank page for a few seconds, or they could even be stuck with a blank screen with nowhere to click. Another drawback of this solution is that it gives off the impression that site performance is slow. That’s why it is important to ensure that rendering is not delayed for more than a few milliseconds at most—just enough for the modifications to be applied. And of course, for valid results, you’ll need to apply this delayed rendering to a control group to prevent bias in your analysis of behaviors linked to the various rendering speeds.
So there you have it. If your modifications take time to apply, you won’t want a blank page to greet your users. When it comes down to it, you should always adhere to the best practices listed below. Among other things, they aim to ensure modifications are applied at an accelerated rate.
That’s why we here at AB Tasty don’t recommend the above method for most of our users and why we don’t suggest it by default. Nonetheless, it is easy to implement with just a few lines of JavaScript.
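For reference, here is a minimal sketch of that content-masking approach (the timeout value and the custom event name are illustrative assumptions, not AB Tasty defaults):

```javascript
// Placed as early as possible in the <head>: hide the page while the testing
// script applies its modifications, then reveal it again.
(function () {
  const style = document.createElement("style");
  style.id = "anti-flicker";
  style.textContent = "body { visibility: hidden !important; }";
  document.head.appendChild(style);

  function showPage() {
    const mask = document.getElementById("anti-flicker");
    if (mask) mask.remove();
  }

  // Reveal once modifications are applied (hypothetical custom event name),
  // or after a short safety timeout so users never stay stuck on a blank page.
  window.addEventListener("abtest:applied", showPage);
  setTimeout(showPage, 1500);
})();
```

The safety timeout caps how long a visitor can ever be left staring at a blank page.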
How can flickering be limited?
If you don’t want to use the aforementioned method, what are your options? Here are some best practices you can use to reduce flickering and maybe even eliminate it:
Optimize your site’s loading time by all means possible: page caching, compression, image optimization, CDNs, parallel query processing with the HTTP/2 protocol, etc.
Place the A/B testing solution tag as high as possible in the source code, inside the <head> element and before intensive external resources (e.g. web fonts, JavaScript libraries, etc.) are called.
Use the synchronous version of the AB Tasty script, since the asynchronous version increases the odds of flickering.
Don’t use a tag manager (e.g. Google Tag Manager) to call your solution’s tag. This might not be as convenient, but you’ll have better control over your tag’s firing priority.
Do not insert a jQuery library in the tag if your site already uses one. Most client-side A/B testing solutions rely on jQuery. AB Tasty nonetheless lets you skip bundling the framework when you already load it elsewhere, so you can cross a few kilobytes off your transfer list.
Reduce the size of your solution’s script by removing inactive tests. Some solutions include all of your tests in their script, whether they are suspended or in draft mode; AB Tasty includes only active tests by default. If you have a large number of long-running customizations, however, it may make sense to implement them permanently in your site’s code and deactivate them in AB Tasty.
Pay attention to the nature of modifications. Adding several hundred lines of code to obtain your modification will cause more flickering than a minor change to CSS styles or wording.
Rely as much as possible on style sheets. It is usually possible to achieve the desired visual effect with CSS alone. For example, rather than writing lines of script to manipulate an element’s style, use a single JavaScript instruction that adds a CSS class to the element and let the class handle its appearance (see the sketch after this list).
Optimize your modified code. When fiddling around with the WYSIWYG editor to implement your changes, you may end up with several unnecessary JavaScript instructions. Take a moment to review the generated code in the “Edit Code” tab and optimize it by rearranging it or removing the needless parts.
Ensure that your chosen solution uses one (or more) CDNs so the script containing your modifications can be loaded as quickly as possible, wherever your user is located.
For advanced users: cache your jQuery selections as objects so the DOM doesn’t have to be queried multiple times. You can also make modifications in plain JavaScript rather than jQuery, particularly when targeting elements by class or ID (see the sketch after this list).
Use redirect (split URL) tests where appropriate, after weighing the nature of the modification against the time required to set the test up.
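As a quick illustration of two of the practices above (adding a CSS class instead of manipulating inline styles, and caching jQuery selections), here is a hedged sketch; the selectors, class names, and the presence of jQuery on the page are assumptions made for the example.

```javascript
// Illustrative only: the selectors and class names below are made up.

// 1. Prefer adding a CSS class (defined in your stylesheet) over inline style tweaks.
var banner = document.querySelector('.promo-banner');
if (banner) {
  banner.classList.add('promo--highlight'); // one instruction; the class handles the look
}

// 2. Cache a jQuery selection instead of re-querying the DOM each time (assumes jQuery is loaded).
var $ctaButtons = $('.cta-button');           // queried once...
$ctaButtons.text('Add to Cart');              // ...reused here
$ctaButtons.addClass('cta-button--labelled'); // ...and here

// Plain JavaScript works just as well when targeting elements by ID or class:
var header = document.getElementById('site-header');
if (header) {
  header.classList.add('header--compact');
}
```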
If you still see flickering after performing these optimizations, you may want to use the content-masking technique detailed above. If you’re not comfortable doing this alone, contact our support team.
AB Tasty’s note: This is a guest post by Umesh Kumar, digital marketing evangelist at Designhill.
A/B testing isn’t rocket science to understand or implement. It simply means testing two different versions of a page on your site to see which one resonates more with your audience. More than anything else, this kind of test helps you know and understand your customers better. After running an A/B test, you will often find a few more names added to your customer list.
It surely is one of the best ways to improve your conversion rates. In fact, an article published on CrazyEgg.com reveals that using the right testing methods can increase conversion rates by up to 300 percent. Surprisingly, though, the majority of marketers still choose not to run A/B test experiments.
So how exactly can you optimize your conversions with A/B testing? The answer is simple: do what smart marketers do and learn from companies that have emerged as shining examples of A/B testing done well.
No matter the nature of your business, there is no harm in taking a step back and learning from others’ successes. To help you, we have listed five classic case studies that offer interesting test hypotheses and give you insight into how visitors think. You can learn a lot from these case studies and use those lessons to tackle the conversion challenges standing between you and success. These examples are quite simple to implement with any A/B testing tool.
Case Study 1: Conversions improve by 11.5% by Adding FAQs, Statistics and Social Proof on Websites
Test Carried Out By Kiva.org, an innovative non-profit organization that allows people to lend money via the Internet to low-income entrepreneurs and students around the world. Kiva ran an A/B test because they wanted to increase the number of donations from first-time visitors to their landing page.
Hypothesis: Giving more information to visitors coming to Kiva’s landing page will help boost the number of donors.
Version B: Addition of an information box (FAQs, social proof, and statistics)
What You Can Learn From This Test:
Make sure your landing page is designed to answer every question a visitor might have. In this case, the information box at the bottom of the page helped the organization explain who they are and what they do, backed up by statistics. That information increased the site’s trustworthiness and credibility.
Case Study 2: 2.57% Increase in Open Rates and 5.84% Higher Click-Through Rate (CTR) by Changing the Subject Line of an Email
Test Carried Out By Designhill.com, one of the fastest-growing peer-to-peer crowdsourcing platforms connecting graphic artists with design seekers. The company sent an email blast a few days before Christmas to promote its content and increase its click-through rate.
Hypothesis: Using only the blog post’s title as the email subject line would drive more click-throughs than asking recipients to check out the post alongside the title.
In other words, the subject line "Top 10 Off-Page SEO Strategies for Startups in 2015" would outperform "Check out My Recent Post – Top 10 Off-Page SEO Strategies for Startups in 2015".
Result: The company scored a 5.84% higher CTR and a 2.57% higher open rate by including just the title of the blog post in the subject line.
What You Can Learn From This Test:
Your subject line is the first thing the recipient of your email sees, so it must entice readers to open the message and learn more about your products or services. After all, it doesn’t matter how good your offer is if readers never open the email, so choose your words wisely; they have a strong impact on open rates and click-throughs. Great subject lines aren’t enough on their own, though: the email itself should be laid out so that vital information stands out. Your logo and contact details must be easy to locate, and CTAs and other links should be free of clutter. Read our Beginner’s Guide to A/B Testing your Emails.
Case Study 3: 49% Increase in CTR by Adding Text to the Call-to-Action Button
Test Carried Out By Fab, an online community whose members can buy and sell apparel, home goods, accessories, collectibles, and more.
Hypothesis: Making the “Add to Cart” button clearer (by adding text) will lead to an increase in the number of people adding items to their shopping carts.
Result: There was an increase of 49% in CTR over the original after the text “Add to Cart” was included in the CTA button rather than just an image or symbol.
The original design featured a small shopping cart icon with a "+" sign and no text, while the two variations added text-based buttons. Version A increased cart adds by 49% over the original.
You’ve probably heard that a picture is worth a thousand words. People love visuals, and there is no better place than your own site to impress them with pictures of your products. Having an image of your product or service in the background of your site can drive conversions, because visitors get to see (and visualize) what they’ll be getting. Images have the power to grab your audience’s attention and turn them into customers.
Case Study 5: Leads Increase by 256% after Having a Mobile-Optimized Landing Page
Test Carried Out By Rasmussen College, a for-profit private college and Public Benefit Corporation that wanted to increase leads from pay-per-click traffic on its mobile site.
Hypothesis: Creating a new mobile-friendly website, featuring a click-through menu, will improve conversions.
Hammad Akbar, founder of TruConversion.com, says that "Unpleasant mobile experiences and a lack of mobile-friendliness make 48% of users believe that the company doesn’t care about their business." A mobile-responsive website enhances the browsing experience for visitors, and it is essential if you don’t want to lose customers simply because your site takes too long to load. Keep the design of your site simple, with only the basic information on the first page, and try different ways of improving mobile navigation, such as a drop-down menu.
Conclusion
We hope that after reading this post, you feel inspired to run some A/B tests of your own. It is genuinely exciting to discover what your customers like and dislike. But don’t forget that these case studies are merely a guide; nothing can replace your own tests and your own judgment about your visitors and your site. Remember, there is always room for improvement.
Author Bio: Umesh Kumar is a digital marketing evangelist who works with Designhill, the fastest-growing custom design marketplace, to develop and execute their marketing strategies. He started his online marketing career in 2008 and is actively engaged in internet business modeling, website development, social networks, lead generation, search engine optimization, and search engine marketing. He also loves blogging and sharing tips, tricks, and upcoming trends from the world of digital marketing. Get in touch: Facebook | Twitter | Google+