A Beginner’s Guide to Usability & User Testing

In a digital world built on customer-centric approaches and data-driven technologies, collecting user feedback is key to developing successful products, be they apps, websites, or services.

To design products and services that truly meet customers' needs and expectations, effective companies use iterative design processes aimed at continually improving the user experience.

Usability testing is all about asking people to use a product and observing how intuitive and easy to use it is.

Many people assume that usability testing only happens in the pre-launch design phase.

That’s not the case.

In fact, developing an iterative design process implies implementing repeated user tests at every stage of your product lifecycle.

Why?

Mostly because your product will go through multiple versions, new features, and services, all of which will require user tests to validate assumptions.

Because digital marketers and UX researchers have long studied the methods and processes to harvest user insights, many different usability testing options have emerged in recent years.

What Exactly is Usability Testing?

Usability tests are processes designed to observe and track real users while they use a product, in order to measure its usability and user-friendliness against your objectives.

Moderated or not, your usability tests are meant to harvest user insights in order to develop an efficient user experience and design an overall better product.

Usability tests are used to validate assumptions before launching a new product or releasing a new feature.

They are also useful for measuring a product's efficiency in its current version, in order to identify possible pain points and solve them.

Your Objectives Behind Usability Testing

Because development and marketing teams often have to cope with tight deadlines and management pressure, the temptation to skip the usability testing phase can be strong.

But this could cost you a lot.

In fact, usability testing should be included in your product development roadmap from the beginning.

That way, you’ll be certain to have time to actually carry out proper user tests.

Why is usability testing so important?

As a product developer, your job is to deliver a product or service that is:

  • Efficient
  • User-friendly
  • Profitable

In order to achieve these 3 objectives, your goal is to gather as much feedback as you can before actually releasing the product or the feature.

With this in mind, your user tests will have to deliver meaningful insights that will eventually lead to product updates.

Note: the objectives behind usability testing differ from one product to another.

However, here are some crucial questions that can be answered through user tests, regardless of your company's product.

  1. Do people enjoy using your product?
  2. Are users able to successfully complete pre-determined tasks?
  3. Does the product match your core target’s expectations?
  4. How easy to use is your product?
  5. Are users pleased with the interface, colors, buttons, forms?

Now that we've covered the general aspects of usability testing, let's take a closer look at the different types of usability tests you can implement to develop a better product.

Moderated & Unmoderated User Tests

a) Moderated User Tests

What are moderated user tests?

Moderated user testing consists of different tests run on users in the presence of moderators.

These moderators will guide test participants, answer their questions and harvest useful feedback.

Although moderators might interfere with the live experience, moderated tests are useful for asking precise questions at very specific stages, in order to collect targeted feedback on your assumptions.

These tests are a great opportunity for companies developing prototypes that require extensive feedback in the early design phases.

Using moderated tests, you will be able to gather actionable insights that will save your company precious time and money that would otherwise have been spent on a costly, ineffective prototype.

Key takeaway: moderated user tests are specifically suited to early-stage products and services because moderators can guide participants through the process. Be careful, however, that your moderators don't actually tell users what to do: the user experience has to remain natural.

Good to know: moderated user tests can be run either remotely or in person.

Naturally, having participants come to you (or going to them) will cost you more than remote tests.

Although both types of tests are viable, you will usually get richer reactions from participants during an in-person test than during a remote one.

b) Unmoderated User Tests

As the name suggests, unmoderated user tests are conducted without any supervision on your side.

Generally, these types of tests are run remotely, without a moderator present.

These tests require specific tools or SaaS platforms that automatically gather user insights and record interactions for later analysis.

During unmoderated tests, users are assigned pre-determined tasks to complete and are invited to express their thoughts and struggles out loud.

Your company can then analyze the users' reactions recorded during the tests.

Key Takeaway: unmoderated tests are definitely cheaper and easier to implement. Solution providers like UserTesting can deliver ready-to-use panels tailored to your core target in a matter of hours, which is extremely convenient compared to having to manually recruit participants.

Because there’s no involvement from your side apart from designing and reviewing user tests, unmoderated tests can also be run simultaneously and on a much larger scale.

Good to know: unmoderated tests don't necessarily replace moderated tests; rather, the two complement each other.

Because there will be no supervision from your side, it is highly advised to craft crystal-clear guidelines and expectations to avoid confusion among users.

Focus Groups

Focus groups consist of inviting approximately ten participants to discuss their needs and expectations regarding your product.

These tests can be run both before and after a product’s release – depending on your objectives.

Unlike moderated user tests, focus groups are used to discuss participants' needs, expectations, and feelings about your product rather than just evaluating your design's usability.

Typically, moderators will create a set of predetermined questions that will lead to multiple discussions regarding how participants feel about your product or certain features.

Key Takeaway: focus groups are useful for gathering insights about your users' potential needs and expectations. Used alongside moderated or unmoderated user tests, they provide meaningful feedback that can be leveraged to create new features or rethink the user interface.

Beta Tests & Surveys

Although they truly differ from other user tests, beta tests can be extremely useful to provide your usability testing process with a more quantitative approach.

Simply put, beta tests consist of giving a limited number of volunteer participants access to a new feature or product.

Because beta tests require a large sample, companies can find it difficult to recruit a sufficient and representative number of beta-testers for the test to be viable.

However, beta tests can be a priceless opportunity to uncover many usability issues at once, backed by a wide variety of opinions from hundreds or thousands of participants.

Particularly popular in the video game industry, beta tests can also be used to test your MVP (minimum viable product) before your final product actually launches.

Using the same quantitative approach, surveys and online questionnaires are a cheap, quick, and semi-reliable way to gather feedback on your product.

For these to work, you will have to target the right audience if you want relevant answers to your questionnaires.

Surveys are useful when it comes to quantitative comparison.

Example: your company is developing a new fashion marketplace and hesitating between two logo designs. You could send a survey asking your target audience to choose between the two.

A/B Tests

Agreed, these tests are a bit different – but they really work.

As opposed to most of the other tests we’ve mentioned, A/B tests are run on your product’s current version in order to determine which of two design options is better.

Example: let's say your company runs an ecommerce website and recently created a new product page layout. Your team wants to decide between the two layouts (versions A and B) without compromising conversions: they will use A/B testing to sort this out and choose a "winner" from these two options.

A/B tests can be used to track all sorts of "goals" depending on your website or product, which makes them extremely convenient for gathering data and improving your current product's usability and user-friendliness.
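
Under the hood, tracking such a goal usually boils down to recording which variant a converting visitor was assigned to. Here is a rough, hand-rolled sketch of that idea (the storage key and collection endpoint are hypothetical, not any specific tool's API):

```javascript
// Minimal sketch of per-variant goal tracking. The storage key and
// endpoint are hypothetical; real A/B testing tools handle this for you.
function trackGoal(goalName) {
  // Read the variant this visitor was assigned to (defaulting to "A").
  const variant = localStorage.getItem("ab_variant") || "A";
  // Report the conversion together with the variant label.
  navigator.sendBeacon(
    "/api/ab-goals", // hypothetical collection endpoint
    JSON.stringify({ goal: goalName, variant: variant })
  );
}

// Example: call this when a visitor completes a purchase.
trackGoal("purchase_completed");
```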

 

Did you like this article? Feel free to share it and check out our other in-depth articles on website optimization, ecommerce, and digital marketing.

AB Tasty is a complete personalization and A/B testing platform with cutting-edge features that let you, as a marketer, take action now and increase your website's performance.


How to A/B Test Without Jeopardizing your SEO Efforts

A/B testing is an effective way to improve your site’s user experience and its ability to convert users to clients.

While changes made to your site may impact your users' behavior, they are also seen by search engine crawlers, especially Google, which is perfectly capable of interpreting JavaScript, the scripting technology behind a lot of A/B tests.

As A/B testing experts, we are often asked about the impact of A/B testing on our clients’ organic search rankings. If SEO is not taken into account, an A/B testing campaign can impact the visibility of the site, notably for tests based on URL redirects.

This post is a good opportunity to review A/B testing best practices for SEO and help you do what’s best when it comes to optimizing conversions, without jeopardizing your rankings and web traffic.

General SEO recommendations

To start, let’s review some general recommendations from Google.

Google completely accepts A/B testing and even encourages it if it’s geared towards improving user experience. Google also offers its own client-side A/B testing tool (Google Optimize) that uses JavaScript to manipulate the DOM (Document Object Model) to create page variations.

On its blog, Google shares rules to respect so that its algorithms do not penalize your site. The main rule concerns opening your test to the search engine's robots, which must navigate the same version of your pages as your visitors do.

So, one of the first best practices for SEO is to not exclude Google’s bot from your A/B tests. Even if your A/B testing solution offers some advanced user-targeting capabilities, like user-agent detection, do not use them to exclude Googlebot.

It is also recommended that you do not show your users pages that are too different from one another. For one, it will be more difficult to identify which element(s) had the greater impact on the conversion rate. Second, Google may consider the two versions to be different pages and interpret the test as a manipulation attempt. You may lose rankings or, in the worst-case scenario, your site may be removed from the index entirely.

Depending on your objectives, the A/B testing setup may differ and each way of doing things can have an impact on SEO.

Best practices for A/B tests with URL redirects

A/B testing using URL redirects, also known as split testing, is one of these methods. Instead of using a WYSIWYG (What You See Is What You Get) editor to design your variation, you redirect users to a completely separate page, often hosted on your site, that has its own URL. Using this method is justified if you have a lot of changes to make on your page; for example, when you want to test a different design or another landing page concept.

This use case is the most prone to error and can have a dramatic impact on your search engine ranking, namely your original page being removed from the Google index, and replaced by your variant page. To avoid this, remember the following points:

  • Never block Google's bots via your site's robots.txt file with the Disallow instruction, and never add the noindex directive to your alternate pages. The former prevents bots from reading the content of the targeted pages, while the latter prevents them from adding those pages to Google's index. This is a common error, made because the site publisher is afraid that the alternate version will appear in search results. If you follow the instructions below, there is no reason for your alternate version to "rank" instead of your original version.
  • Place a canonical tag on the variant page pointing to the original page. This tells Google that the original page is the one it must take into account and serve to internet users. Search engine bots will understand that page B has no added value compared to page A, which is the only version to be indexed. If the test covers a set of pages (e.g. you want to test two product page formats across your catalog), you must set up this mapping for each page (see the markup sketch after this list).
  • Redirect visitors via a 302 or JavaScript redirection, both of which Google interprets as temporary redirects. In other words, the search engine considers it to be a temporary modification of your site and does not modify its index accordingly.
  • When a redirect test is completed, you must put into production the changes that have been shown to be useful. The original page A is then modified to include the new elements that foster conversion. Page B, meanwhile, can either be redirected to page A with a 301 (permanent) or 302 (temporary, if the page will be used for other tests) redirection.
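
To make these points concrete, here is a minimal markup sketch, assuming the original page lives at /page-a and the variant at /page-b (both hypothetical URLs); in practice, your A/B testing solution generates the redirect logic for you:

```html
<!-- Variant page B (hypothetical URL: https://www.example.com/page-b):
     a canonical tag pointing back to the original, so only page A is
     indexed. Note: no noindex directive here, and no Disallow rule
     for this page in robots.txt. -->
<link rel="canonical" href="https://www.example.com/page-a" />

<!-- Original page A: a JavaScript redirect, which Google interprets
     as temporary, so its index is left unchanged. -->
<script>
  // Hypothetical 50/50 assignment; a real tool would also keep the
  // assignment sticky so returning visitors see the same version.
  if (Math.random() < 0.5) {
    window.location.replace("https://www.example.com/page-b");
  }
</script>
```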

Best practices for standard A/B tests

Applying a JavaScript overlay is by far the most common way to conduct A/B tests. In this case, your variants are nothing more than changes applied on the fly as the page loads in the user's browser. The A/B testing solution manages the whole process, from translating the changes you made in a graphical editor into JavaScript code, to collecting data, randomly assigning users to one of the variants and keeping that assignment consistent throughout the test. Your URLs do not change, and the changes only occur in the client browser (Chrome, Firefox, Internet Explorer, etc.).
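
As an illustration, here is a minimal sketch of what such an overlay can boil down to, assuming a hypothetical headline test (the storage key and replacement copy are invented; real tools generate equivalent code from your editor changes):

```javascript
// Minimal client-side A/B overlay sketch for a hypothetical headline test.
const TEST_KEY = "ab_headline_test"; // hypothetical storage key

// Assign the visitor once, then keep the assignment sticky across
// page loads so they always see the same variant.
let variant = localStorage.getItem(TEST_KEY);
if (!variant) {
  variant = Math.random() < 0.5 ? "A" : "B";
  localStorage.setItem(TEST_KEY, variant);
}

// Variant B: apply the change on the fly in the browser.
// The URL does not change; only the DOM does.
if (variant === "B") {
  const headline = document.querySelector("h1");
  if (headline) {
    headline.textContent = "Discover our new collection"; // hypothetical copy
  }
}
```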

This type of A/B test does not harm your SEO efforts. While Google is perfectly capable of understanding JavaScript code, these changes will not be a problem as long as you do not try to trick it by showing it initial content that is very different from what users see. Therefore, make sure that:

  • The number of elements changed by the overlay remains limited relative to the overall page, and the test does not overhaul the page's structure or content.
  • The overlays do not delete or hide elements that are important for the page's ranking and its legitimacy in the eyes of Google (text areas, titles, images, internal links, etc.).
  • You only run the experiment as long as necessary. Google knows that the time required for a test varies with how much traffic the tested page gets, but it advises against running tests for an unnecessarily long time, as it may interpret this as an attempt to deceive, especially if you're serving one content variant to a large percentage of your users.

Tip:
While it's better to avoid overly heavy overlays on pages that generate organic traffic, you have complete freedom on pages that Google's bots do not crawl or that have no SEO benefit (account or basket pages, checkout funnel pages, etc.). Don't hesitate to test new optimizations on these pages that are key to your conversion rate!

What about mobile SEO?

Using your A/B testing solution to improve the user journey on mobile devices is a use case that we sometimes encounter. This is a particularly sensitive point for SEO since Google is rolling out its Mobile First Indexing.

Until now, Google’s ranking algorithm was based primarily on the content of a site’s desktop version to position it in both desktop and mobile search results. With the Mobile First Indexing algorithm, Google is switching this logic around: the search engine will now use the mobile page’s content as a ranking signal rather than the desktop version, no matter what the device.

Therefore, it's particularly important not to remove elements that are vital to SEO from mobile navigation for UX reasons, such as page-top content that takes up too much space on a smartphone.

Can personalization impact your SEO?

Some A/B testing tools also offer user personalization capabilities. AB Tasty, for example, helps you boost user engagement via custom scenarios. Depending on your visitors’ profile or their journeys on your website, you can easily offer them messages or a personalized browsing experience that is more likely to help them convert.

Can these practices have an impact on your SEO? As with A/B tests using JavaScript, the impact on SEO is limited, but some special cases should be taken into consideration.

For instance, highlighting customized content with an interstitial (pop-in) presents a challenge in terms of SEO, notably on mobile. Since January 2017, Google has considered interstitials harmful to the user experience when they make a page's content hard to access. Personalized interstitials must therefore be adjusted to Google's expectations; otherwise, you risk seeing your site lose rankings and the resulting traffic.

Note that Google seems to tolerate legal interstitials that take up a majority of the screen (cookie information, age verification, etc.) for which there is no SEO impact.

To learn more, download your free copy of our A/B testing 101 ebook.