After our amazing digital summit at the end of 2020, we wanted to sit down with Matt Bullock, Director of Growth at Roboboogie, to learn more about ROI-driven design.
Tell us about Roboboogie and your session. Why did you choose this topic?
Matt: Our session was titled Building an ROI-Driven Testing Plan. When working with our existing clients, or talking with new potential clients, we look at UX opportunities from both a data and design perspective. By applying ROI-modeling, we can prioritize the opportunities with the highest potential to drive revenue or increase conversions.
What are the top 3 things you hope attendees took away from your session?
Matt: We have made the shift from “Design and Hope” to a data-backed “Test and Optimize” approach to design and digital transformation, and it’s a change that every organization can make.
An ROI-Driven testing plan can be applied across a wide range of conversion points and isn’t exclusive to eCommerce.
Start small and then evolve your testing plan. Building a test-and-optimize culture takes time. You can lead the charge internally or partner with an agency. As your ROI compounds, everyone is going to want in on the action!
2021 is going to be a transformative year where we hope to see a gradual return to “normalcy.” While some changes we endured in 2020 are temporary, it looks like others are here to stay. What do you think are the temporary trends and some that you hope will be more permanent?
Matt: Produce delivered to your doorstep and curbside pickup were slowly picking up steam before 2020. By the end of the year, they were moving into the territory of a customer expectation for all retailers with a brick-and-mortar location. While there will undoubtedly be nostalgia and some relief when retailers are able to safely open for browsing, I do think a sizable contingent of users will stick with local delivery and curbside pickup.
There is a lot of complexity that is added to the e-commerce experience when you introduce multiple shipping methods and inventory systems. I expect the experience will continue evolving quickly in 2021.
We saw a number of hot topics come up over the course of 2020: the “new normal,” personalization, the virtual economy, etc. What do you anticipate will be the hot topics for 2021?
Matt: We're hopeful that we'll be safely transitioning out of isolation near the end of 2021, and that could bring some really exciting changes to users' digital habits. We could all use less screen time in 2021, and I think we'll see some innovation in the realm of social interaction and screen-time efficiency. We'll look to see how we can use personalization and CX data to create experiences that help users spend their screen time efficiently, so that we can safely spend time with our friends and family in real life.
What about the year ahead excites the team at Roboboogie the most?
Matt: In the last 12 months, the consumer experience has reached amazing new heights and expectations. New generations, young and old, are expanding their personal technology stacks to stay connected and to get their essentials, as they continue to socialize, shop, get their news, and consume entertainment from a safe distance. To meet those expectations, the need for testing and personalization continues to grow and we’re excited to help brands of all sizes meet the needs of their customers in new creative ways.
Developers can create feature toggles by coding a “decision point” where the system runs a given feature depending on whether specified conditions are met. In other words, feature toggles allow you to quickly and efficiently deliver context-sensitive software.
Feature toggles have a wide range of possible applications for everything from supporting agile development, to market testing, to streamlining ongoing operations. However, with this power comes the potential for introducing unnecessary complexity into your code. You need to properly manage feature toggles to get the most from them.
In this article, we’ll give you an overview of precisely what feature toggles can do, how you can implement them in your development and production environments, and share some of our own recommended best practices for using them.
What exactly is a feature toggle?
In the simplest possible case, a feature toggle is a powerful "if" statement: at runtime, the application follows one of at least two different codepaths depending on the condition or conditions provided. Here is a straightforward example (a minimal sketch; the configuration source and user model shown are illustrative):
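```python
# Minimal sketch of a feature toggle; the config dict is an illustrative
# stand-in for whatever configuration source the application reads.
config = {"internal_test_users": {"dev_alice", "dev_bob"}}

def normalFeature():
    return "current feature, shown to regular customers"

def testFeature():
    return "new feature, still under development"

def load_feature(username):
    # The toggle itself: a decision point choosing between two codepaths.
    if username in config["internal_test_users"]:
        return testFeature()
    return normalFeature()

print(load_feature("dev_alice"))    # internal tester -> test feature
print(load_feature("customer_42"))  # regular user -> current feature
```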
In this Python code sample, we have defined two generic features: normalFeature and testFeature. At runtime, the application checks its configuration to see whether an internal test user is loading it. If so, the application loads the test feature under development. If not, the regular customer sees the current feature.
Example of a feature toggle controlling two codepaths (Source)
Feature toggles can be anything from a simple “if” statement to complex decision trees, which act upon many different variables. A wide variety of conditions, including fitness test results from other features in the codebase, a setting in feature management software, or a variable provided by a config file, can be used to determine which way a toggle flips.
Different feature toggles for different tasks
You should manage feature toggles differently depending on how you deploy them. One useful way to think about toggles is to break them down into categories across two dimensions: their longevity in your development and operational processes and how dynamic their function is. Considered this way, we can break feature toggles out into four different categories:
Release toggles
Experiment toggles
Operational toggles
Permission toggles
A chart of the four feature toggle categories (Source)
Release toggles
Release toggles usually aren't meant to be permanent fixtures in your codebase. You should remove them once their associated feature is complete. In practice, this usually means they have a lifecycle of a few days to a few weeks, which puts them lower on the longevity scale. Release toggles also tend not to be very dynamic. Either the feature is ready for release, or it isn't.
Release toggle example
An e-commerce company has a new configurator tool in development at the request of one high-profile customer. The configurator monitors items the customer has already selected for a build-out and suggests item sets to complete their order.
The company eventually wants to roll out that feature to all customers, but for now, the configurator only works within that one customer’s specifications. The configurator’s dev team enables a release toggle for this new feature that keeps it inactive.
Experiment toggles
These toggles are used to facilitate A/B testing or multivariate testing. You create a toggle point, behind which the different features you want to test sit on two or more different codepaths. At runtime, the system (or the toggle itself) splits users into different cohorts and exposes them to the different features.
Usually, experiment toggles should only exist as long as data needs to be gathered for feature testing. The exact timeframe will depend on traffic volume to that feature, but typically that means on the order of several weeks to several months. This constraint is more about the test itself than the toggle. The value of the data collected will diminish over time as other feature and code updates invalidate comparisons to earlier gathered user data.
Experiment toggle example
Our e-commerce company has finished debugging its new configurator, but there is some debate over which of the two suggestion algorithms provides the best experience. They decide to set up an A/B test to get some real-world data.
They add an experiment toggle to the production configurator with the two different suggestion algorithms behind it. The toggle splits users into two cohorts with a modulo when they try loading the configurator. After three weeks, the team feels they have conclusive data showing more users complete their orders using the B algorithm. The e-commerce company removes the experiment toggle, and that algorithm goes live for all users.
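The cohort split itself can be as simple as a modulo over a stable user identifier. Here is a minimal sketch in Python (the suggestion algorithms and hashing scheme are illustrative):

```python
import hashlib

def suggestion_algorithm_a(cart):
    return f"algorithm A suggestions for {cart}"

def suggestion_algorithm_b(cart):
    return f"algorithm B suggestions for {cart}"

def suggest_items(user_id, cart):
    # Hash the user ID so cohort assignment is stable across sessions,
    # then split users into two cohorts with a modulo.
    bucket = int(hashlib.md5(user_id.encode()).hexdigest(), 16) % 2
    if bucket == 0:
        return suggestion_algorithm_a(cart)
    return suggestion_algorithm_b(cart)

print(suggest_items("customer_42", ["desk", "chair"]))
```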
Operational toggles
Operational (Ops) toggles are used to turn features off (like a "kill switch") or otherwise adjust their performance. For example, if certain conditions are not met, such as KPI targets dipping below a threshold, the toggle turns that feature off until conditions improve. Operational toggles are useful to code in front of new features just out of testing or in front of resource-intensive features.
The longevity of ops toggles varies depending on their specific use case. If you’re using one to regulate a new feature just out of development, you probably only need the toggle in place for a couple of months. On the other hand, a kill switch toggle is usually designed to be a permanent code fixture. Ops toggles usually are as static or dynamic as the conditions under which the feature they control will operate. For example, ops toggles tied to just one performance metric tend to be relatively static.
Operational toggle example
Our e-commerce company is preparing for a spike in traffic ahead of their popular annual sale. This will be the first such sale with the configurator in production. During testing, devs noticed the user-preferred B algorithm was a little greedy with system resources.
The operators ask for a kill switch to be coded for the configurator before the sale goes live: a single toggle they can flip in their release management software should performance degrade. Lo and behold, on the first day of the sale, the configurator begins to degrade performance, and ops staff quickly kill it before too many users notice.
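In code, an ops toggle like this kill switch can be as simple as a flag that is checked before the feature runs and flipped when a KPI crosses a threshold. A minimal sketch, with an illustrative flag store and metric:

```python
# Illustrative flag store; in practice this would live in release
# management software, not a module-level dict.
flags = {"configurator_enabled": True}

MAX_AVG_RESPONSE_MS = 500  # illustrative KPI threshold

def check_kill_switch(avg_response_ms):
    # Flip the kill switch if the performance KPI degrades.
    if avg_response_ms > MAX_AVG_RESPONSE_MS:
        flags["configurator_enabled"] = False

def render_product_page():
    if flags["configurator_enabled"]:
        return "product page with configurator"
    # Degrade gracefully: serve the page without the feature.
    return "product page without configurator"

check_kill_switch(avg_response_ms=870)  # sale-day slowdown observed
print(render_product_page())            # -> page without configurator
```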
Permission toggles
Permission toggles are intended to be longer-lived or even permanent fixtures in your code. They are used as a method to make features available to specific subsets of users. For example, you might use a permission toggle to show premium content only to premium users logged into your site. Permission toggles tend to be the most dynamic of the four categories defined here, as they usually trigger on a per-user basis.
Permission toggle example
The simple example at the beginning of this article is close to what a permission toggle might look like. After the annual sale is complete, our e-commerce company decides algorithm B is too resource-intensive to make it available to their entire user population. Instead, they decide to make it a premium feature.
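In code, a permission toggle often reduces to a per-user attribute check at the decision point. A minimal sketch, with an illustrative user model:

```python
def suggest_items_standard(cart):
    return "suggestions from the standard algorithm"

def suggest_items_premium(cart):
    return "suggestions from the resource-intensive B algorithm"

def suggest_items(user, cart):
    # Permission toggle: flips on a per-user basis and stays in the
    # codebase for as long as the premium tier exists.
    if user.get("is_premium"):
        return suggest_items_premium(cart)
    return suggest_items_standard(cart)

print(suggest_items({"is_premium": True}, ["desk"]))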
Feature toggles vs. feature flags
As a brief aside, there is some debate over the name "feature toggle" as opposed to "feature flag." "Toggle" is the more appropriate name when code is turned on or off across a few major code branches. "Flag" is the more appropriate term when a decision point fans out into many conditional codepaths.
Including feature toggles in your roadmap supports agile workflows
Applying feature toggles to your development process supports newer agile approaches. You can release software even while code sprints on new features are still in progress. Those features just need to be hidden behind toggles until they’re ready for release, market testing, or whatever the next stage in their development is.
Under more traditional waterfall development models, you would usually write newly requested features on code branches. Those features would then go through a lengthy testing and QA process before your team could integrate them back into trunk code. Using feature toggles, you can perform the entire development and testing process right on trunk code.
Our best practices for using feature toggles
As we’ve discussed, feature toggles are a powerful and flexible development method. If you don’t carefully implement and manage your toggles, they can quickly lead to a messy codebase or increased technical debt.
Many different best practices for coding feature toggles have been proposed, but we wanted to offer some of our own. Once one messy decision point is written into your codebase, many more seem to follow. Applying these best practices from the start will help keep problems like that in check.
Website code under development (Source)
Use feature toggles to gradually transition to agile development
If your team wants to try out agile development and testing practices without jumping entirely into a new methodology, then introducing feature toggles into your roadmap is an excellent place to start. The cost to try them out is low. You could just have one team use an experiment toggle for a single canary deployment they're working on, for example.
If the trial goes well, you can replace that experimental toggle with an ops toggle when the feature goes into production. Then expand toggle use to other teams or other processes from there. Introduce them earlier in development cycles as release toggles. Then, slowly but surely, you’ll be on your way to full agile development.
Use toggles for both internal and external features
As should be clear by now, feature toggles have uses throughout the development and production lifecycle of your software. Don’t limit your toggle usage to just customer-visible features. You can use release and operational toggles to manage backend features too. They give DevOps teams a very granular level of control and risk management over code, which can be important when modifying backend features that have a wide-ranging impact on how your system performs.
Include toggle planning in your design phase
Everything from toggle naming and configuration settings to removal processes and access control trickles down from how you first design new features. Build toggle planning into your design process, and feature management six months from now will be greatly simplified.
Have a standardized toggle naming scheme
Many organizations use a style guide to regulate how developers write and organize code. For example, how they employ everything from spacing, ordering, and parentheses, to naming. If you’re going to use feature toggles, you should also standardize your naming style early in your toggle adoption process.
Brevity is essential in other aspects of coding, but when it comes to toggle names, be verbose. Detail means clarity. Verbose toggle names help devs and ops staff outside your core team understand what they're looking at when their only reference is the toggle name you chose on a whim six months ago.
Some other toggle naming conventions we suggest adopting include:
Include the team or the project name.
Include the toggle’s creation date.
Identify the flag’s category.
Be descriptive of the toggle’s actual behavior.
Here is an example: algteam_10-12-2021_Ops_configurator-killswitch
This name gives some useful information someone on any team can use to understand what they’re looking at when a toggle is called in an error message. They know who wrote the toggle, how long it has been sitting in the codebase, and what the toggle does.
Manage each toggle category differently
This practice sounds self-evident, but it is an important point to underline. As we discussed above, feature toggles can be divided into four general categories. You should manage each of those four categories differently.
Think about our configurator example from earlier as it moved from development to market testing to operational management. The configurator code sat behind a feature toggle of one kind or another the entire time. But the way the development and product teams interact with that toggle needs to change at every stage.
During early development, the toggle might just be configured in source control. Then while the e-commerce company is doing A/B testing, the toggle might be in a feature management platform. When the ops team adds a kill switch, they may decide they want it in the same feature management platform but on a different dashboard.
Always expose feature toggle configurations
As with any other code object, it is beneficial to document feature toggle configurations as metadata, so other developers, testers, and production staff have a “paper trail” they can follow to understand precisely how your feature toggle runs in a given environment. Ideally, store your toggle configurations in a human-readable format so that it is easy for people outside your team to understand what a toggle does.
This best practice is beneficial for features you expect to be toggled for a long time. Think about our configurator example again. A brand new product operator trying to understand a sudden, unexpected performance slowdown will be very grateful to have a human-readable file explaining that the B algorithm was surprisingly resource-intensive in testing a year earlier.
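Stored as human-readable metadata, a toggle configuration might look something like this sketch (the fields are illustrative, not a prescribed schema):

```python
import json

# Illustrative toggle metadata record; the fields are an assumption,
# not a prescribed schema.
toggle_config = {
    "name": "algteam_10-12-2021_Ops_configurator-killswitch",
    "category": "ops",
    "owner": "algorithm-team",
    "created": "2021-10-12",
    "default_state": "on",
    "notes": "Kill switch for the configurator. The B algorithm proved "
             "resource-intensive in testing; flip off if response times "
             "degrade during traffic spikes.",
}

# Persist it in a human-readable format alongside the code.
print(json.dumps(toggle_config, indent=2))
```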
Keep the holding costs of feature toggles in check
When first using feature toggles, try to resist the temptation to use them everywhere in your code all at once. While feature toggles are easy to create, their use requires proper management and testing to realize any benefit. Scale up your feature toggle usage slowly, or consider integrating a feature management platform into your development and testing environments.
Deploy feature toggles strategically and keep your inventory of toggles as low as possible. Use them wherever necessary, but make sure there is a process for vetting whether toggles are the appropriate method for solving a particular problem.
Don’t let old toggles hang around in your code. Prune them as soon as their lifecycle has run its course. The more idle toggles your code has, the greater the management overhead that falls on your team. You can manage toggle removal by adding code cleanup tasks to your team’s backlog or building the process into your management platform.
Keep toggle scope as small as possible
Since toggling can be so powerful, it is often tempting to put large swaths of code under the control of a complex series of toggles. Resist this urge and keep feature toggling within as small a scope as possible to complete any given task.
If a toggle overlaps more than one feature at a time, it can be confusing for the rest of your team and a nightmare to debug weeks or months down the road when it begins to impact other teams’ work.
Consider our configurator example again. Our dev team is building four separate widgets that users will manipulate within the configurator tool. In this scenario, we would recommend setting up five toggles: one for the configurator itself and one for each widget. Code the widget toggles with a dependency on the configurator toggle. In this framework, if one of the widgets fails to load correctly, the others will still be served to the user.
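Here is a minimal sketch of that five-toggle structure (the flag names and flag store are illustrative):

```python
flags = {
    "configurator": True,
    "widget_color": True,
    "widget_size": True,
    "widget_material": False,  # this widget failed to load correctly
    "widget_preview": True,
}

def widget_enabled(name):
    # Widget toggles depend on the configurator toggle: if the
    # configurator is off, every widget goes dark with it.
    return flags["configurator"] and flags[f"widget_{name}"]

def render_configurator():
    if not flags["configurator"]:
        return []
    # A failing widget is simply skipped; the others are still served.
    return [w for w in ("color", "size", "material", "preview")
            if widget_enabled(w)]

print(render_configurator())  # -> ['color', 'size', 'preview']
```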
Feature toggles can transform your entire development process
Feature toggles are powerful methods for developing, testing, and operating code features within a continuous integration and continuous delivery framework. They are a simple method that helps your team deliver higher quality, more stable code according to agile principles.
In this article, we walked through how feature toggles work, what types of toggles you can create, and how you can employ them in your agile process — or try them out in any development process. We also shared some of our recommended best practices for making sure your company gets the most from using feature toggles.
Start small and scale up
There is no reason you can’t start using feature toggles today. Start small and scale up your usage as your team gets comfortable with how they work. If you’re starting to code a brand new feature from your backlog, consider setting up a release toggle in trunk code, so you don’t have to branch. If you’re beginning market testing, consider setting up an experiment toggle for some split testing.
Once your team has a good handle on how they want to use feature toggles, consider whether a feature management platform can streamline their administration. Streamlining development and testing was what we had in mind when we developed our release and feature management platform.
AB Tasty allows your team to use a single tool to streamline toggle workflows and communication. Regardless of a team’s tasks or focus, our feature management product has everything it takes to deliver the right features in the right way.
When it comes to feature testing, you’re in a bind.
On the one hand, you need real-world data and feedback from real-world users. You know that every new feature you develop is, at best, an educated guess about what your real-world users want from you. No matter how educated that guess might be, and no matter how much internal validation you perform, you can only generate meaningful data and feedback on each new feature by releasing it to real-world users to test out in their real-world environments.
On the other hand, it's risky to give real-world users an unproven feature. You know that every new feature you release might have something wrong with it. Maybe there's a technical issue you missed during development. Maybe it just doesn't align with user expectations as closely as you hoped. No matter the issue, releasing an unproven feature can cause real harm to your brand's user relationships.
This is a tricky problem and one that is never going to be fully solved. But, thankfully, there are methods you can follow to minimize the problem, and collect real-world data and feedback while mitigating the impact when something (inevitably) goes wrong.
In this piece, we'll explore one of these methods: rollbacks.
What is a Feature Rollback?
It’s a simple practice, with powerful implications.
When you perform a rollback, you take some code out of a live environment. Back in the day, rollbacks could be truly massive. Software products used to be updated in giant new releases that could include a wide range of changes, including multiple new features and significant changes to existing features. If one of these huge releases had some fatal bugs in it, or just wasn't well received by users, then the entire thing might need to be rolled back (even if the issues were contained within just a few elements of the release).
All of this has changed with the adoption of Agile methodology. Releases keep getting smaller and more incremental, and so do rollbacks. Most modern Product Managers have adopted phased release plans, where they only release a single new or upgraded feature at a time, and often only to individual segments. And when modern Product Managers do release multiple new or upgraded features at once, the different features are kept separate from each other.
This evolution has changed the way rollbacks happen. After a new release, Product Managers can now isolate the individual feature(s) that have proven unfit for live usage and perform a targeted rollback on them, and them alone. The whole rollback process is now much faster, much nimbler, and much more precise, and it delivers much greater benefits.
Why Should Product Managers Perform Rollbacks?
When a Product Manager properly structures and deploys rollbacks, they improve their ability to test new features in a real-world environment with real-world users with a minimal level of risk. An imperfect feature is no longer the end of the world. If a feature has development issues or poor alignment with user requirements, you can perform a rollback and remove it from a live environment in real-time with just one click.
For Product Managers, this changes the game. The more mature your rollback capability, the more you can afford to make mistakes. Your risk shrinks, giving you the freedom to test more features with more users earlier in the development cycle, ultimately leading you to iterate your products faster and faster.
Now, rollbacks are not a silver bullet. They don’t absolve you from doing everything you can to develop the highest-quality features possible before you test them. But rollbacks allow you to test new features with greater confidence and reduced concerns about creating problems for your users.
When Should You Perform a Rollback? Two Common Use Cases
For most Product Managers, there are two common situations in which you might need to perform a rollback.
Rollback Use Case 1: Your Feature has a Bug
This first use case is pretty self-explanatory.
You might have the most robust and thorough QA and testing processes in the world. It's still highly likely your new features will have one or more bugs in them when you release them into a live environment. Maybe they're issues you just didn't think to search for, or didn't know how to search for. Maybe they're issues that only show up in a live environment after hundreds of real-world users tool around with the feature.
Regardless of the reason, if significant technical issues pop up in your new feature, then you’ll likely want to perform a rollback on that feature to fix it. With the right rollback process, you can react to these errors in near-real-time and remove the feature—and maybe even fix it—in minutes before it impacts too many users.
Rollback Use Case 2: Your Feature is Poorly Received
This second use case is a little more sophisticated.
Essentially, after you release a new feature you monitor how users respond to it, and how well it’s hitting your business KPIs. If your new feature is not performing as expected, and is generating negative usage data and user feedback, then you can perform a rollback to remove it from its live environment. If it isn’t hitting—or at least tracking towards—its business goals, then it might not be worth keeping live.
After you roll back your feature, you can either utilize the data and feedback you collected to fix the feature and help it better align to user expectations and business requirements, or you can decide that the feature was fundamentally misguided and just needs to be retired.
With the right rollback process, you can also review and respond to the usage data and user feedback you receive in near-real-time, and prevent too many users from getting too disgruntled about receiving a feature that misses the mark.
What Do These Two Use Cases Have in Common?
In one word: speed.
In both use cases, rollbacks are most effective, and mitigate the most risk, when you are able to first monitor feature performance in real time, then translate that performance into a quick "yes/no" decision to roll back (or not), and finally execute on that rollback decision as rapidly as possible.
The faster you can go through this entire process, the lower the chance that you will create a prolonged negative user experience. In some scenarios, the decision to perform a rollback and the execution of that rollback need to happen in minutes.
It’s a daunting mandate, but here are a few tips to help you meet it.
How to Make Faster Rollback Decisions
It's challenging to decide, in the moment, whether or not to roll back a feature. Even the best feature release can be complex and chaotic.
There are multiple moving parts to monitor…
There are many different data and feedback points to take into consideration…
And there’s a lot of emotion at play…
You and your team just spent weeks, maybe even months, pouring your blood, sweat, and tears into designing and developing the new feature that you're testing. If your users love it, then you get that sugar high of knowing you just completed a job well done, and you can sit back and watch the good data and feedback roll in. But if your users don't immediately respond as positively as you hoped, it's easy to experience an emotional crash and want to roll back the feature before you even know if the bad response is consistent, let alone what you should do to fix your errors in the next iteration.
For these reasons, and many more, it’s hard to make the right rollback decision in the moment during a feature release test. Instead, it’s better to make your rollback decisions before you release your new features into the wild.
Here’s what we mean.
Basically, before you release any new features to any real-world users, you first decide what success and failure look like for this feature in objective, data-driven terms.
Then, you decide how much data your release will need to generate before you can make an accurate call about whether the feature is a success or a failure.
Finally, you use these parameters during your release to make objective "yes/no" decisions about whether or not you should roll back your feature at any point. Instead of getting caught up in the moment, you just monitor the performance metrics that you decided were most important, and once they hit the thresholds you set prior to release, you simply follow the plan: you either roll back the feature or you don't, no real-time agonizing required.
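Encoded ahead of release, those parameters might look something like this sketch (the metric names and thresholds are illustrative):

```python
# Decided before release, not in the heat of the moment. All names and
# thresholds here are illustrative.
ROLLBACK_CRITERIA = {
    "min_sessions": 5000,         # data needed before making any call
    "max_error_rate": 0.02,       # failure threshold
    "min_completion_rate": 0.55,  # success threshold
}

def should_roll_back(sessions, error_rate, completion_rate):
    """Objective yes/no rollback decision; returns None until enough
    data has been gathered to make an accurate call."""
    if sessions < ROLLBACK_CRITERIA["min_sessions"]:
        return None  # keep collecting data
    if error_rate > ROLLBACK_CRITERIA["max_error_rate"]:
        return True
    if completion_rate < ROLLBACK_CRITERIA["min_completion_rate"]:
        return True
    return False

print(should_roll_back(sessions=8200, error_rate=0.031,
                       completion_rate=0.61))  # -> True: roll back
```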
How to Execute Rollbacks Faster
In the past, it was near-impossible to perform a rollback quickly from a technical perspective. You needed to have a technical team standing by, waiting to dig into the code to turn off live features, or to revert to a prior state of the entire platform. The entire process was slow, it was labor-intensive, and it took your technical teams away from their valuable development work.
Software has solved all of these problems. With our own feature management platform, you can roll back a feature in real time by toggling a single field with just one click. You don't need any technical expertise to do so. You don't need to develop and test a complex rollback process prior to feature release. You don't even need to think about the technical details; you can save all of that thinking for creating the right strategic decision trees that we outlined in the prior section.
AB Tasty also gives you, or any non-technical user, the ability to perform sophisticated feature releases and rollbacks. You can release multiple features at once, monitor how each feature is performing individually, and only roll back the features that aren't delivering. You can roll out a feature to multiple user segments and roll it back only for the individual segments that aren't responding well to it. We designed our server-side solution to make the execution of rollbacks faster, easier, and far more intuitive than ever before.
20 years ago, tech companies were hit with the ‘Agile Revolution’. The idea? Shipping working software every week or two would help teams deliver better products, even if this method implied more risk. In other words, the ‘move fast and break things’ mentality reigned.
But that was two decades ago. Today, agile is mainstream, and new philosophies, building on the agile movement, have come to the fore; namely, Continuous Integration, Delivery and Deployment, largely geared towards DevOps teams. Their big draw is that these processes and tools automate quality assessment, assuring that when code is merged in piecemeal fashion – and not on one big bang release day – it works. Even better, software can be deployed to the product environment at any time, by anyone. Now, your product manager can take the reins.
Today, the market is ready to go a step further. From Agile to Continuous Integration, Delivery and Deployment comes a thirst for Continuous Development. Continuous Development – we could even call it Continuous Activation – encompasses all of these ideas, but takes the logical next step. It puts even more control and autonomy in the hands of Product Managers. It allows them to not only deploy software themselves (with mitigated risk), but also to pick and choose according to their own prerogatives which audiences are exposed to a given feature. In other words, they can run experiments, personalize the user experience, and exercise complete rollback control based on real-time data.
Continuous Development platforms and processes transform the Product Manager into a Chief Experimentation Officer, and there are many reasons to embrace this new paradigm shift:
Move Fast, Risk Less
‘Move fast and break things’ only works if you’re willing to accept the consequences of what you’ve broken. Most software developers would still like to move fast, but without the risk.
Continuous Development and the tools that support it factor in risk assessment. By avoiding code merges on one big release day, and by enabling progressive rollout techniques (canary deployment, ring deployment), developers can avoid putting all of their metaphorical eggs in one basket. If your platform has feature flagging or KPI-based rollback built in, switching off a defective or underperforming feature can be done instantaneously and painlessly.
Your Customers, Not Your HIPPOs, Decide
How do decisions get made in your tech company? Chances are, HIPPOs, new bosses, vocal salespeople, consulting groups or the noisiest Product Manager in the room dominate that discussion, letting their personal experience, gut feeling or intuition determine the road map.
With Continuous Development platforms, the focus shifts from subjective ideas to customer feedback and data. Early adopter programs, beta testing, progressive deployment, A/B tests… all of these methods, enabled by feature flagging and other Continuous Development techniques, make your main measurement of success the behavior and opinions of your customers.
In a B2B context, this might look like extensive interviews with early adopters. In B2C, it’s likely your support teams or community manager who will pick up on positive or negative feedback around a new feature launch. Either way, Product Managers get direct access to the Voice of the Customer and can form data-driven arguments for why to rollback, stick with or modify a new feature.
Get off the Ford Line
If your team is project-driven, chances are your Product Managers and developers feel they need to keep their heads down and noses to the grindstone, working on their piece of the software production puzzle. They might be productive, they might be agile, but they might also not really feel the business impact of what they're working on 40+ hours a week.
When you can experiment with and test the features you’re developing; when you can get direct user feedback and adjust your work accordingly; when you have clear, measurable KPIs that determine success, your work all of a sudden feels a lot more meaningful. This keeps teams motivated, fresh and loyal.
Marketing and Product Manager Alignment
When you give your Product Managers more control, it's easier for them to align with the teams around them, especially the Marketing and Communication departments. Depending on the size and importance of your company, a new feature release can mean a big web of marketing and communications campaigns: email campaigns, press releases, articles, social network posts, corporate website updates. Launch timelines and shifting deadlines are much easier to manage when your Product Managers are in the driver's seat and not beholden to developer teams that have other priorities and are even further removed from your marketing and communications personnel.
Developers Focus on Core Business Objectives
If you have a robust developer team, there’s a chance you could set up these types of feature management systems in-house, without the need for a dedicated platform. But this is time-consuming, and one could argue that it diverts skills and resources away from your core business objectives.
I believe that the time is now for Continuous Development. By turning our Product Managers into Experimenters, we’re able to build a better product and bring it to market faster, with less risk; we continue in the vein of ‘customer obsession’; we keep our teams creative and motivated; and we generally build up what, at AB Tasty, we’ve been advocating for since our founding – a test and learn, experimentation culture.
According to a PwC survey, one in three customers would leave a brand after just one bad experience. That's why your company may invest a lot of time and money optimizing your digital product to stay relevant in today's often crowded markets.
A critical part of the overall product experience is user onboarding: get it right and win loyal customers, but get it wrong and lose those users forever.
So it makes sense to continuously tweak the user onboarding process – the perfect job for a product team. Such a team often consists of 5 to 8 people, including product managers, designers, and developers. Different companies work with various product team sizes and configurations – whatever is best for their use case. However, we rarely see DevOps engineers in these teams because many view DevOps as just a vehicle for successful feature releases.
Ultimately, however, it's these DevOps engineers who have to get up in the middle of the night to fix a newly deployed feature that crashes the app every time a user navigates through the onboarding process.
We want to ask you: Can an app be successful if its onboarding process is technically broken? And do release teams significantly impact UX after all? Let's find out.
Make users feel right at home with a great onboarding experience
Most apps require an onboarding process to show new users how to achieve their goals as efficiently and conveniently as possible.
For this, we need to keep in mind that the onboarding experience can affect your relationship with prospects – both positively and negatively.
No matter how good your app actually is, the first impression counts!
Large companies like Slack and Dropbox frequently overhaul their user onboarding to ensure users have a comfortable, fun, and productive start with their product. But see for yourself. The following images show excerpts from Slack's onboarding process in 2014 and 2021. The design has changed drastically, of course, but notice also that instead of reading about where the team name will appear in the Slack interface, we now actually see the user interface with our team name on it. These improvements are certainly not the result of guesswork but of meticulously coordinated optimization workflows.
The evolution of Slack’s onboarding process (Source)
As even big enterprises invest in optimizing their onboarding processes, we realize that we should do the same and not rest on our laurels. The question remains, how do you make sure you are building the right onboarding experience in the right way?
And this is where cross-functional product teams and Flagship come into play!
Leverage Flagship to unite product teams and ensure great UX
At AB Tasty, when we work towards a great user experience, we focus on two main themes:
Release the right feature: We step into our users’ shoes and conduct experiments and tests to ensure that the feature delivers value and looks and feels good.
Deploy the feature right: It's not just about functionality and looks. We utilize feature management to ensure that what we've created works flawlessly at all times and on different platforms.
Flagship provides a shared environment for experimentation and feature management
Flagship gives you the means to get the most out of both: data-driven experimentation and feature management to create and release features for great customer experiences. So we see release teams as an integral part of creating value for our users. This may not be the most popular opinion. Still, now we’d like to tell you more about why we think DevOps should be more closely integrated with product teams.
It’s no secret that teams that work toward a common goal are more likely to reach their true potential than those that don’t. By isolating DevOps from product teams, you probably can’t count on the positive effects of unity and passion necessary to create and release great products. For this reason, we encourage product teams to work more closely with DevOps. Release teams also care about delivering value and great experiences to users. And they bring the skills required to do so to the table.
Flagship provides product managers, developers, and DevOps engineers with a shared environment for experimentation and feature management. You get easy access to all the data and tools needed to have a productive conversation about the product optimization process in a common data-driven language. Simultaneously, instead of isolating specific roles and responsibilities in silos, each member of the product team can focus on doing their job while continuing to work as a collective force.
Now, let’s take a look at how Flagship’s experimentation and feature management capabilities enable product teams to deliver outstanding user experiences.
Deploy the feature right with feature management
First, let’s talk about a few examples of how feature management and releasing a feature right can positively impact your users’ onboarding experience.
Suppose you want to add tooltips to your onboarding process to help users navigate your product’s dashboard confidently. The product team prepares the new feature accordingly and thoroughly tests the functionality on the test servers. After everything seems to be working, they roll out the new feature for all users in one fell swoop. Hopefully, it’s not Friday afternoon, as the changeover could cause unforeseen problems on the production server, like:
Your user is stuck in an infinite loop that they can’t exit
User input isn’t saved, e.g., in a form
The app crashes repeatedly
The user is sent back to the start for no apparent reason
Just imagine what such behavior means for users going through your onboarding process and looking forward to finally using your product when it suddenly stops working. Poof, the magic moment has passed. The user has most likely lost confidence in your app due to bad UX.
Flagship makes code deployments stress-free
With Flagship’s feature management capabilities, your product teams can publish new features with ease – even on Friday afternoons.
Feature management enables release teams to provide the new tooltips feature to a selected target group before continually rolling it out to everyone. This way, you can be sure that the new feature works under realistic conditions, i.e., on production servers with real users.
Through controlled and monitored rollouts, DevOps teams immediately know whether something isn’t working correctly. This enables them to react on time and be glad that only a few users have noticed the error.
For example, suppose the developers wrapped the tooltip feature in a feature flag (which they really should be doing). In that case, they can quickly deactivate it via the Flagship dashboard if a problem occurs. Of course, they can also configure automatic code rollbacks based on KPIs to react even faster.
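Pulling those pieces together in generic terms, a sketch of a flag-wrapped tooltip feature with a KPI-based automatic rollback might look like this (illustrative only; Flagship's actual SDK and dashboard work differently):

```python
# Illustrative sketch only; Flagship's real SDK and dashboard differ.
tooltip_flag = {
    "enabled": True,
    "target_groups": {"beta_testers"},  # limited rollout first
    "max_crash_rate": 0.01,             # KPI for automatic rollback
}

def tooltips_active_for(user):
    return (tooltip_flag["enabled"]
            and user["group"] in tooltip_flag["target_groups"])

def monitor(crash_rate):
    # Deactivate the flag (as one would via a dashboard) when the KPI
    # degrades; only the small target group ever saw the problem.
    if crash_rate > tooltip_flag["max_crash_rate"]:
        tooltip_flag["enabled"] = False

print(tooltips_active_for({"group": "beta_testers"}))  # -> True
monitor(crash_rate=0.04)                               # crashes spike
print(tooltips_active_for({"group": "beta_testers"}))  # -> False
```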
Proper feature management can de-stress your release teams: Gone are the sleepless nights spent dealing with damage control! If you want to learn more about the benefits of feature management for tech teams, we recommend our blog post here.
Release the right feature with experimentation
Perhaps you have great empaths on your product teams and feel like you know your users pretty well. Still, it is wise to experiment and test to create an onboarding process that your users will love.
Let’s look at the tooltip example from before again. Suppose that after your product team successfully integrated the tooltips into user onboarding, your analytics data shows that something must be wrong. Many users still don’t know how to use your app and abandon the process midway through. If you can’t identify and resolve the problem right away, you need to leverage other means to improve the tooltip’s user experience.
First, make sure that everything is fine from a technical point of view. Next, your product team should start working on possible variants to improve the tooltips’ presentation and functionality. You can then experiment and test with Flagship to determine which of these variants and ideas offer the best user experience.
For example, you could utilize A/B tests to see if showing a how-to video before displaying the tooltips helps users get started with your product. Or experiment with different tooltip sequences; perhaps the process is easier to understand if you change the tooltips' order.
You're also free to experiment with different colors, copy, UI elements, calls to action, and so on. To make your experiments as meaningful as possible, you can define which users see which feature variant and track user acceptance, test results, and KPIs in the Flagship dashboard.
Another advantage of Flagship is that you can utilize 1-to-1 personalization based on audience segments to provide users with unique experiences. For example, after a user registers for a paid subscription, show them a customized welcome message and add more value to their onboarding experience.
… What about client-side tools for experimentation?
Many client-side experience optimization tools, such as our own AB Tasty, can also perform most of these experiments without code deployments. However, the advantage of coding your experiments for a critical process such as user onboarding is that you don't risk slowing it down with automatically generated UI overlays. Instead, tests and experiments with Flagship are fast, secure, and flicker-free, as they come directly from the server and don't have to be computed in the user's browser. Of course, client-side tools still have their place and unique uses; Flagship is a great tool to complement your client-side strategy.
Wrapping up
If you want to provide users with the best possible onboarding experience, you need cross-functional teams who know how to release the right feature and how to release a feature right. One of our goals is to advocate the importance of release teams to great UX – whether a product technically works is as important as how it looks and behaves.
Using Flagship’s experimentation and feature management capabilities, product teams can benefit from a shared platform to collaborate on improving the onboarding experience in a productive and data-driven way.
Would you like to try Flagship for your product teams? Book a demo and see how experimentation and feature management can transform your users’ onboarding experience from okay to Yay.
In a perfect world, you would release a product that is bug-free and works exactly as it should, with no need for further testing.
However, both product managers and developers know that it’s not as simple as that. They need a way to make sure that there is a process in place that reveals any issues in code in a live production environment.
This is where testing in production comes in.
But it's also a hotly debated topic, with some saying you should always test in production and others, more wary of the concept, saying you never should.
In this article, we'll look into these two different perspectives, share our own point of view on this controversial topic, and guide you through the best ways to reap the benefits of this type of testing.
What is testing in production?
To keep it short and simple, testing in production is a software development practice of running different tests on your product when it’s in a live environment in real time.
This type of testing is not meant to replace your QA team or eliminate unit and integration tests. In other words, it is not supposed to replace testing before production but to complement it.
To do or not to do: That is the real question
The benefits of testing in production are big, and they are enough to create consensus among many developers and product managers who say "Yes, always!" to the practice.
But there’s also another group of developers and product managers who say “No, never!” to testing in production.
On the one hand, they acknowledge all of the great benefits that testing in production can deliver. On the other hand, they believe the practice carries too many potential downsides and that its benefits just aren't worth the risks it can bring.
Which side are we on?
We believe testing in production is a cornerstone practice for anyone in the software development world. And we believe it is particularly important for Product Managers, as it gives them a powerful method to generate real-world feedback and performance data they need to make sure they are always building a viable pipeline of products.
But even though we are great advocates of this practice, we still want to consider the point of view of those who say "No, never!" to this type of testing.
Once we acknowledge these issues, we can start to map out some ways to mitigate the practice’s potential downsides and focus on its benefits instead.
What are the big risks of testing in production?
To be blunt: a lot of things can go wrong when you test in production.
You risk deploying bad code
You may accidentally leak sensitive data
It can possibly cause system overload
You can mess up your tracking and analytics
You risk releasing a poorly designed product or feature
The list goes on and on. Anything that can go wrong, could go wrong.
Worst of all, if something does go wrong when you are testing in production, your mistake will have real-world consequences. Your product might crash at a critical moment of real-time usage.
You might also end up collecting inaccurate KPIs and creating issues with your business stakeholders.
Worst-case scenario: your poorly designed product or feature might result in multiple paying customers leaving your product for a competitor.
Those who say “No, never!” to testing in production are correct to consider the practice highly risky, and we understand why they stay away from it.
And yet, while we acknowledge these concerns, when it comes down to it, we believe that this form of testing is an essential aspect of modern software development.
Why should you still test in production?
When done properly, testing in production gives you some great benefits that you just can’t get through any other method.
Collect real-world data and feedback
Testing in production allows you to collect data on how users engage with your new features. You gather valuable feedback from the customers who matter most, which in turn allows you to optimize the user experience based on that feedback.
This will also allow you to brainstorm ideas for features that you may not have considered before.
Uncover bugs
Since you're testing on live users, you can discover bugs or issues that you may have otherwise missed in the development stage. Thus, you can ensure your new products and features are stable and capable of handling a high volume of real-world usage.
It is worth noting that there are certain technical issues that will never show up until you put your product or feature in front of real-world users.
You can also monitor the performance of your releases in real life, so that developers can analyze and optimize them accordingly.
Higher quality releases
Because you're receiving continuous feedback from your users, developers can improve the product, resulting in high-quality releases that meet your customers' needs and expectations.
Additionally, you can verify the scalability of your product or feature through load testing in production.
Support a larger strategy of incremental release
Testing in production helps facilitate an environment of continuous delivery.
This is especially true when you roll out your releases to a certain percentage of users, so users no longer have to wait long periods of time before getting access to your brand-new features.
This way, you also limit the blast radius: with incremental releases, a failure never affects all of your users at once.
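A percentage-based rollout can be as simple as hashing each user ID into a 0-99 bucket and comparing it against the rollout percentage. A minimal sketch (the hashing scheme is illustrative):

```python
import hashlib

ROLLOUT_PERCENT = 10  # expose the new feature to 10% of users first

def in_rollout(user_id, percent=ROLLOUT_PERCENT):
    # Stable bucket in 0-99: the same user always gets the same answer,
    # and raising the percentage only ever adds users.
    bucket = int(hashlib.sha1(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

exposed = sum(in_rollout(f"user_{i}") for i in range(1000))
print(f"{exposed} of 1000 users see the new feature")
```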
Perhaps most importantly: you are already testing in production, even if you didn't know it!
Most of Agile development and product management's best practices are forms of testing in production. We're talking about very common practices like canary deployments, progressive rollouts, beta testing, early adopter programs, and A/B tests.
If you are following any of these practices, and many more like them, then you are already running tests with real-world users in a live production environment.
You are already testing in production, whether you call it that or not, even if you thought you were in the “No, never!” camp this whole time.
Testing in production done right
If testing in production is inevitable these days, then you should spend less time debating its pros and cons, and more time finding the most effective and responsible way to follow the practice.
We believe in this perspective so strongly that we’ve built an entire product suite around helping product developers gain all of the benefits of the practice while minimizing their risks.
Feature flags – a software development practice that allows you to enable or disable functionality without deploying code – are at the core of this suite.
By wrapping your features in a flag and deploying them into production without making them visible to all users, you can safely perform all of the testing in production that you need.
With feature flags, combined with the rest of AB Tasty, you can:
Deploy smaller releases that minimize the impact of failure.
Only test your new features on your most loyal and understanding users.
Personalize these users' tests so they know to expect a few hiccups with the release.
Immediately toggle off underperforming features with a single click.
With feature flags and a little planning, you can dramatically reduce the risk and increase the sophistication of the testing in production you are already performing.
This means more real-world user data, more reliable products & features, and less worry about seeing how your hard work performs outside of the safe confines of development and staging environments.
Our UK partner series continues with Andrew Furlong, Managing Director at REO.
In this interview, we asked him the following 3 questions:
At REO, you “believe digital experiences can always be better” – what does that mean?
Like with most things, there is always room for improvement. AB testing and personalization have grown because brands and consumers believe in, and demand, a better experience. So, what we mean is: no matter how good you think your digital experience is, it can always be improved. And we are here to help if you are not sure how!
What are you most proud of at REO?
This one is easy… the team! They are great to work with, they are challenging and speak their mind to help REO be the very best it can be. Having such a brilliant team pays off, as shown by our latest client satisfaction score of 8.5/10.
Which ultimate tip for experience optimization do you have for our readers?
It is important to always have multiple streams of optimization running. I am not talking about concurrent tests, although where feasible that should be done too. I mean having a fallback strategy so that if a test is delayed for reasons beyond your control, you can quickly pivot to a different part of the site, for example.
About REO
REO is a digital experience agency. We are an eclectic mix of bright and creative thinkers, embracing the best of research, strategy, design, and experimentation to solve our clients' toughest challenges. We work across a variety of sectors, with companies such as Amazon, M&S, Tesco, and Samsung. Our mission is to fearlessly transform our clients' businesses and reputations by evolving the digital experience for their customers. We achieve this through:
Our curious and relentless drive to gather insights that matter.
Our proactive mindset and forward thinking to deliver lasting value.
Our adventurous approach to adapt and learn quickly.
In an increasingly cutthroat digital age, standing out in your niche while meeting the exact needs of your consumers is essential to business growth and longevity.
By getting under the skin of your customers, you can tailor your messaging, applications, and touchpoints to meet their exact needs. That's where eye tracking enters the mix.
Consider this for a moment:
You’re running a usability test on a product landing page for a new range of gym shoes. Your test subject, Nancy, browses the page and chooses a shiny new pair of gym shoes with ease. But, on the next page, there is a snag. She hesitates and eventually abandons her cart because the journey was confusing.
You take notes based on Nancy's feedback and think about how you can improve your checkout journey. But if you could view her movements, or see what she sees, you would have the power to make informed improvements that will ultimately increase conversions and drive more sales.
With eye tracking, you can. But while this widely-used sensor-based technology offers a deep glimpse into user browsing behavior, some industry experts believe that eye tracking is an unnecessary expense.
Like many platforms and digital innovations, with the right approach, eye tracking will give you the tools to offer your customers a seamless level of user experience (UX)—the kind that will increase loyalty while helping you boost your bottom line.
Here we explore the dynamics of eye tracking and explain why it could make an excellent investment for your business.
So, what is eye tracking?
Eye tracking is a type of sensor technology that gives a computer or mobile device the tools to understand and trace where a person is looking.
An eye tracker can detect the presence, attention and focus of a user while engaging with a specific app, touchpoint or website.
From a marketing perspective, eye tracking dates back to the 1980s, when it was used to test and measure the value of ads in print newspapers and magazines.
An effective alternative to lie detection-style techniques such as voice stress analysis and galvanic skin response (neither of which offers truly reliable results or data), eye tracking gave the advertisers of the day essential insights into which elements of a page people read and how long they spent engaging with specific pieces of content.
The popularity of the eye tracker rose over the years and the rapid evolution of digital technology paved the way for a wealth of innovative developments.
Now, eye tracking technology offers deep-dive insights into user behavior and into dynamic page and app design, as well as intuitive tools that enhance the user journey for disabled people.
In the modern age, one of the most prominent features of eye tracking is a little something called Facial Expression Analysis (FEA).
Based on ‘points of fixation’—moments during the user journey when someone’s gaze pauses long enough to process the content in front of them (the rapid eye movements between fixations are known as ‘saccades’)—FEA technology helps marketers gauge the effectiveness of their page design and messaging.
But, how does this apply to business and why is it so useful? Let’s find out.
As a marketer or business owner, the more you understand your target audience, the more chance you have of creating a fluent and engaging customer journey across platforms.
As eye tracking provides a visual map of how your users engage with your website, landing pages, and mobile applications, you can identify strengths and weaknesses related to user experience (UX) and content placement.
An essential part of the consumer research process, eye tracking is a powerful medium because it taps into the fact that as much as 95% of human decision-making (particularly online) happens subconsciously.
By using eye tracking tools to trace navigational patterns, you can see through your customers’ eyes, uncovering information that helps you make improvements that boost engagement, improve your customer experience (CX), and ultimately accelerate the growth of your business.
From heat mapping to task-based usability tools, there is a wealth of eye tracking innovations available to businesses in today’s digital world.
Invest in the right eye tracking tool for your business and you will:
Understand what your target audience is looking at and for how long
Identify redundant or disruptive visuals or design elements
Document how users scan and interact with your web pages or apps
Gain a practical understanding of what works and what doesn’t
Prove the value of certain marketing strategies, techniques or campaigns
Continually improve and evolve your efforts in a landscape that is ever-changing
Eye tracking is an effective means of seeing through the lens of your customers. But, as powerful as it is, eye tracking alone is unlikely to give you a complete insight into the content that really sticks in users’ minds.
To gain the additional context you need to improve usability and drive engagement, make eye tracking a pivotal part of your consumer research strategy rather than your sole source of information.
That said, if you use it the right way, eye tracking can help you understand your customers in ways that can give you an all-important edge on the competition.
How eye tracking can help you understand your customers
Using eye tracking to understand your customers on a deeper level boils down to adopting a cohesive mix of the right tools and techniques.
Eye tracking tools and software provide a visual representation of your users’ focus points—returning data based on:
Fixation points: information that can tell you how engaging or eye-catching particular elements or pieces of content on a webpage are to your customers.
Navigational patterns: by understanding common navigational patterns, you can see how people scan or interact with your page. This level of knowledge will give you the data you need to optimize your content and design for increased engagement and conversions.
Problematic elements: As mentioned, an eye tracking test will return invaluable data on any images, graphics, calls to action (CTAs) or command buttons, informational content, or design elements that hinder the user experience and prevent customers from getting what they need from your page or from carrying out a desired action (clicking through to a specific product page, signing up to an email newsletter, etc.).
Automotive repair and wreckage company Truckers Assist conducted eye tracking tests to track the performance of its homepage.
The test showed that while the ‘NO FEES’ graphic (the red point on the image) attracted a lot of attention, it wasn’t clickable. Many users were therefore focusing their attention in the wrong place and being steered away from more valuable information.
To fix this glaring issue, Truckers Assist improved its homepage design, removing the ‘NO FEES’ banner and placing focus on its contact information and service search bar.
Conducting successful eye tracking tests takes consistency as well as a clear-cut goal. Do you want to improve the user journey of your new mobile app? Are you looking to drive more revenue through a specific product page? Perhaps you’re trying to understand whether your general messaging and branding are performing the way they should?
There are many actionable insights you can gain from eye tracking—and outlining your specific goals will give your tests or studies direction.
How eye tracking benefits UX optimization
88% of consumers are less likely to return to a website after a poor user experience. Today’s consumers expect a seamless level of UX from brands and businesses—anything less and you could see customer loyalty as well as sales drop through the floor.
Eye tracking and UX go hand in hand. Through eye tracking, you will gain access to objective and unbiased insights that will show you where improvements are necessary.
With eye trackers, you can drill down into a specific UI element (is it facilitating the right interactions or are your consumers missing it altogether?) to test whether it fits into the user journey while getting to the very root of any distracting, problematic or misleading page elements.
This perfect storm of on-page information will empower you to make very specific improvements to any app, web or landing page—enhancing its usability and performance significantly.
How eye tracking works in a nutshell
As a concept, a significant part of eye tracking is based on Fitts’s Law. Essentially, every visual object or element carries a certain amount of ‘weight’, which determines the amount of attention, as well as the number of clicks, it ultimately earns.
Concerning eye tracking and UX, Fitts’s Law is important because it helps you predict how long it will take a user to move their eyes or cursor to a specific target.
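In its widely used Shannon formulation, Fitts’s Law models the movement time MT needed to acquire a target of width W at distance D:

```latex
MT = a + b \log_2\left(\frac{D}{W} + 1\right)
```

Here a and b are constants fitted from observed data. Larger, closer targets are faster to acquire, which is why generously sized, well-placed CTAs tend to earn more attention and clicks.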
Armed with this information, you can establish a visual hierarchy and optimize your webpages or applications to ensure consumers can connect with the right functions or information at the right times within their journey.
To get your eye tracking tests off to the best possible start, giving your users clear-cut instructions while ensuring good lighting and consistent positioning is essential. Doing so will give you reliable data, as detailed in this infographic from iMotions.
Essential eye tracking methods & techniques
There is a wealth of eye tracking tools out there and countless ways of approaching this powerful form of user testing.
To guide you along the right path, here we’re going to explore the most essential eye tracking methods and techniques.
Heat maps
A branch of eye tracking, a heat map is a dynamic tool that offers a definitive visual representation of where users focus their attention and how they navigate your website based on their on-page interactions.
Heat mapping platforms provide color-coded data to give an indication of the areas of a website or mobile page users are interacting with the most.
As you can see from the image above, the red spots show the areas where users focus their attention most while the lighter colors are the areas with the lowest engagements.
Heat mapping technology also serves concrete data based on how much particular buttons or links are clicked by users on a page while offering navigational information such as scroll rates to show how far people move down the page before bouncing off.
Focus maps
Essentially an inversion of heat mapping, focus mapping provides digestible visual insights into the main fixation points on a specific page.
With focus maps, the page is blanked out except for the spots that receive the most attention or fixation.
A visual technique that complements additional eye tracking tests and consumer research strategies, focus mapping gives you a panoramic view of which elements are working, as well as the content you need to improve to encourage focus and engagement.
Gaze path plots
As a sensor-based technology, eye tracking can provide a wealth of valuable insight in a single browsing session.
By adding time-based metrics to your eye tracking strategy, you can follow the path a user takes on a webpage and see how long they spend on each element.
As the classic ‘Where’s Wally’ example shows, gaze plots are an effective eye tracking technique, offering a dynamic interpretation of how users interact with your site or mobile app.
By following these paths, you get a real-time view through the eyes of your audience. This wealth of visual eye tracking data will empower you to drill down into specific areas of a web or app page, making design or content tweaks to optimize the overall user experience.
Eye tracking metrics
In addition to diversifying your approach to eye tracking and working with a mix of tools and platforms, focusing on the right metrics will improve your chances of success exponentially.
Here are the main eye tracking metrics you should work with during your user research tests and studies:
Areas of Interest (AOIs): Before running an eye tracking test or study, you should determine your AOIs. Mapping out your main areas of interest on a particular web or mobile page will give your test definitive direction while ensuring you only collect and concentrate on data that answers the right questions.
Dwell time: This metric captures the actual amount of time a user or test subject spends interacting with a predetermined AOI. You can, for instance, run an A/B test to compare two versions of a webpage and see which one returns the healthiest AOI dwell times and thus offers the best return on investment (ROI). Computing these metrics is straightforward, as the sketch after this list shows.
Fixation count: Like dwell time, fixation count offers interaction data based on your AOIs. But rather than quantifying time, it records how many fixations your AOIs receive during a test or time period. You can then compare fixation and dwell time data to identify correlations, painting a fuller picture that will make your optimization efforts more successful.
Ratio: In an eye tracking context, ratio tells you what share of users or test subjects guided their gaze to a particular AOI. Tracking it will give you an insight into whether you need to alter a specific piece of content or design element to capture the attention of more users and streamline your site navigation.
Revisits: Based on your AOIs, revisits measure how many times a user returns their gaze to a specific point during a test or browsing session. This metric helps you understand whether your design and layout make a page easy to navigate. If a particular AOI shows a high revisit rate, your content may be a little confusing, or a design element may be causing distraction.
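To make these metrics concrete, here is a minimal sketch that computes dwell time, fixation count, and revisits for one rectangular AOI from a list of fixations. The data format is illustrative; real eye tracking software exports far richer logs:

```python
# Each fixation: (timestamp_ms, x, y, duration_ms) -- illustrative format.
fixations = [
    (0,   120, 80,  220),   # inside the AOI
    (220, 400, 300, 180),   # outside the AOI
    (400, 130, 90,  260),   # back inside: counts as a revisit
]

AOI = (100, 60, 200, 120)  # x_min, y_min, x_max, y_max

def in_aoi(x, y, aoi):
    x0, y0, x1, y1 = aoi
    return x0 <= x <= x1 and y0 <= y <= y1

dwell_ms = fixation_count = revisits = 0
was_inside = False

for _, x, y, duration in fixations:
    inside = in_aoi(x, y, AOI)
    if inside:
        dwell_ms += duration
        fixation_count += 1
        if not was_inside and fixation_count > 1:
            revisits += 1  # gaze returned after leaving the AOI
    was_inside = inside

print(f"dwell={dwell_ms}ms fixations={fixation_count} revisits={revisits}")
```

Ratio then falls out naturally: run the same computation per participant and report the share of participants whose fixation count is non-zero.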
Getting started with eye tracking: best practices
Now that you understand how eye tracking works and you’re acquainted with essential techniques, we’re going to look at some top tips to help you get the most from your eye tracking test efforts:
Once you’ve established the aims and criteria of your test, allow your users to complete the process uninterrupted. Asking questions during the test itself is likely to distract your test subjects, skewing your results in the process.
Make sure that your participants remain within the monitoring range from start to finish. If a subject falls out of position, halt the test to ensure they are back in the monitoring range. Doing so will ensure the best quality data.
For qualitative testing and manual eye tracking, around five test subjects will typically offer the level of insight you need for the job. For heat maps and broader eye tracking studies, it’s recommended to use at least 39 test subjects for well-rounded and actionable results.
For mobile optimization tests, you should focus on the performance and value of your functional icons across devices; the precision of your error messaging; and the consistency and responsiveness of your mobile design and layout.
To maximize your eye tracking efforts, work with customer experience optimization experts: they will help you run successful eye tracking tests that return data aligned with your specific strategies and goals.
Final thoughts
“You can’t solve a problem on the same level that it was created. You have to rise above it to the next level.”
—Albert Einstein
Niche or sector aside, without knowing your audience and understanding how they interact with your business, you are merely shooting in the dark.
In an ever-evolving digital age, a data-driven approach to consumer research and UX optimization will help you meet your users’ needs head on while empowering you to adapt to constant change.
Eye tracking may not be the answer to all of your consumer-centric needs, but this innovative sensor technology could easily play an important role in your ongoing marketing and development strategy.
There is no one-size-fits-all way to approach eye tracking—success will depend on your goals and needs as a business. But, by embracing the methods and techniques we’ve discussed, you will open yourself up to a treasure trove of data-driven insights that will accelerate the growth of your business.
We hope our guide to eye tracking has helped you on your way!
We developed our feature management tool to provide tech teams with the capabilities to deliver frictionless customer experiences more effectively. Naturally, we also use this tool at AB Tasty, but in the past, we also had to master our development cycles without the tool.
In this article, I’d like to give you insight into how our tech teams’ work has changed thanks to our Feature Experimentation and Rollouts solution. How did we work before? What has changed, and why do we appreciate the tool? Without further ado, let’s find out!
What a typical development cycle without our feature management platform looks like
The beginning of a typical development cycle is marked by a problem or user need that we want to solve. We start with a discovery phase, during which we work towards a deep understanding of the situation and issues. This allows us to ideate possible solutions, which we then validate with a Proof of Concept (POC). For this, we usually implement a quick and dirty variant – the Minimum Viable Product (MVP) – which we then test with a canary deployment on one or two clients.
When the solution seems to be responding to customer needs as intended, we start iterating on the MVP. We allocate more resources to the project to get it into a robust, secure, and user-friendly state. During this process, we alternate between developing, deploying, and testing until we feel confident enough to share the solution with our entire user base. This is when we usually learn how most of our users react to the solution and how it performs in a realistic environment.
The pitfalls of this approach, or: Why we developed a server-side solution
Let’s see why we weren’t happy with this strategy and decided to improve it. Here are some of the main weaknesses we discovered:
Unconvincing test results.
A canary release with one or two clients is great for getting first impressions but doesn’t provide a good representation of the solution’s impact on a larger user base. We lacked qualitative and quantitative test data and the ability to use it simply and purposefully. Manual trial and error slowed us down, and our iterations didn’t always produce satisfactory results that we could rely on.
Shaky feature management.
Developers were often nervous about new releases because they didn’t know how the feature would behave under a higher workload. When something went wrong in production, it was always incredibly stressful to go through our entire deployment cycle to disable the buggy code. And that’s just one example of why we needed a proper feature management solution.
We see that tech teams around the world know and fear the same difficulties. That’s why we created a server-side feature flagging solution to help them – and us – innovate and deliver faster than ever before while reducing risks and headaches.
I spoke to some of my tech teammates to determine how their work lives have changed since we started using our new tool. I noticed some major themes that I’d like to share with you now.
With our feature management platform, we no longer have to guess and can follow a scientific approach. We now know for sure whether a business KPI is positively impacted by the feature in question.
Suppose we publish a new feature while the marketing team starts a campaign without us knowing about it. We may get abnormal test results such as increased traffic, engagement, and clicks because of this. The problem: how can we measure the real impact of our feature?
The platform lets us define control groups to reduce this risk. And thanks to statistical modeling (Bayesian statistics), we get accurate data on which we can base a reliable interpretation.
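To illustrate the underlying idea with a deliberately simple example (the numbers are made up, and this generic Beta-Bernoulli model is a sketch, not the platform’s internal implementation), you can estimate the probability that a variation beats the control by sampling from each group’s posterior distribution:

```python
import numpy as np

# Hypothetical results: (conversions, visitors) per group.
control   = (130, 2400)
variation = (161, 2380)

rng = np.random.default_rng(7)

def posterior(conversions, visitors, draws=100_000):
    # Beta(1, 1) prior updated with observed successes and failures.
    return rng.beta(1 + conversions, 1 + visitors - conversions, draws)

p_control, p_variation = posterior(*control), posterior(*variation)
print(f"P(variation beats control) = {(p_variation > p_control).mean():.3f}")
```

Because the output is a probability rather than a bare point estimate, it is much easier to judge when a result is trustworthy enough to act on.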
One time, we worked on a new version of one of our APIs and used our server-side solution for load testing. Fortunately, we discovered that the service crashed once we gradually increased the number of users (the load) past a certain point. The problem wasn’t necessarily the feature itself; it had to do with changes in the environment, which are easy to miss with traditional web testing strategies. We could stop the rollout immediately and prevent the API changes from harming end-users or breaching our SLAs with customers. Instead, we had the opportunity to further stabilize the API and then make it available to all users with confidence.
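Gradually ramping up load like this is usually done with a percentage rollout. Here is a minimal sketch of the common pattern (not any specific vendor’s implementation): hash each user ID into a stable bucket so the same user always gets the same answer, then raise the threshold step by step:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    # Stable bucket in [0, 100): the same user + feature pair
    # always hashes to the same value across sessions.
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10_000 / 100.0
    return bucket < percent

# Ramping from 5% to 50%: users admitted at 5% remain admitted at 50%.
print(in_rollout("user-42", "new-api", 5.0))
print(in_rollout("user-42", "new-api", 50.0))
```

If the error rate climbs at any step, dropping the percentage back to zero acts as an instant, deployment-free rollback.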
We iterate faster by decoupling code releases from feature deployments
We often deploy half-finished features into production – obviously, we wrap them in feature flags to manage their status and visibility. This technique allows us to iterate so much faster than before. We no longer have to wait for the feature to be presentable to do our first experiments and tests. Instead, we enjoy full flexibility and can define exactly when and with whom to test.
Additionally, we no longer have to laboriously work out who can see what in production during feature development, because we no longer hard-code these rules. Instead, we use the Decision API to connect features to the admin interface, through which we can define and change the target groups at any time.
What’s more, everyone in the team can theoretically use this interface and see how the new feature performs without involving us developers. This is a huge time saver and lets us focus on our actual tasks.
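The exact Decision API contract is beyond the scope of this article, but the general shape of dashboard-driven targeting looks something like the following sketch, in which the rule format and field names are invented for illustration:

```python
# Targeting rules live as data: editable from a dashboard, no code changes.
rules = {
    "new-dashboard": [
        {"attribute": "plan", "operator": "equals", "value": "enterprise"},
        {"attribute": "beta_opt_in", "operator": "equals", "value": True},
    ]
}

def matches(rule: dict, user: dict) -> bool:
    # Only "equals" is shown; real targeting engines support many
    # more operators (contains, greater-than, regex, ...).
    if rule["operator"] == "equals":
        return user.get(rule["attribute"]) == rule["value"]
    raise ValueError(f"unsupported operator: {rule['operator']}")

def is_targeted(feature: str, user: dict) -> bool:
    # A user qualifies only if every rule for the feature matches.
    return all(matches(r, user) for r in rules.get(feature, []))

user = {"id": "user-42", "plan": "enterprise", "beta_opt_in": True}
print(is_targeted("new-dashboard", user))  # True
```

Because the rules are plain data, changing who sees what is an edit in the admin interface, not a pull request.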
“Our Feature Experimentation and Rollouts solution helps me take back control of my own features. In my old job, I was asked to justify what I was doing in real-time, and I sometimes had trouble getting my own data in terms of CDP MOA; now I can get it.”
Julien Madiot, Technical Support Engineer
We can rely on secure deployments
Proper feature management has definitely changed how we work and how we feel about our work. And by managing our feature flags with our feature flagging platform, the whole process has become much easier for our large and diverse teams. Here’s why feature flags have won us over:
They’re ON/OFF switches. Let’s not lie: we still make mistakes or overlook problems. But that’s not the end of the world. Especially not if our code is enclosed in a feature flag so that we can “turn it off” when things get hairy! With our feature flagging platform as our main base for feature management, we can do this instantly, without code deployments.
They help us to conduct controlled experiments. We use feature flags to securely perform tests and experiments in real-world conditions, aka in production. A developer or even a non-tech team member can easily define, change, and expand the test target groups in the dashboard. Thanks to this, we don’t have to code these changes or touch our codebase in any way!
They cut the stress of deployments. Sometimes we want to push code into production without letting it work its magic just yet. This comes in handy when a feature is ready, but we’re waiting for the product owner’s final “Go!”. When the time comes, we can activate the feature in our dashboard hassle-free.
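One detail worth sketching here: a kill switch is only as dependable as its default. A common pattern (illustrative code, not a specific SDK) is to fail safe, so an unreachable flag service simply routes users down the stable path:

```python
def get_flag(name: str, fetch_remote, default: bool = False) -> bool:
    """Resolve a flag from remote config, failing safe when unreachable."""
    try:
        return bool(fetch_remote(name))
    except Exception:
        # If the flag service is down, behave as if the switch is OFF
        # so users stay on the proven code path.
        return default

# `fetch_remote` stands in for whatever client your platform provides.
print(get_flag("risky-feature", lambda name: True))
```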
DevOps engineers have many responsibilities when it comes to software delivery. Managing our feature flags with our server-side solution is an effective way to lift the burden off their shoulders:
I honestly sleep better since we started using our server-side solution 🙂 Because I’m the one that puts things in production on Thursdays. When people say ‘Whoops, we accidentally pushed that into production,’ now I can say, ‘Yeah, but it’s flagged!’
Guillaume Jacquart, Technical Team Leader
Wrapping up
I hope you found the behind-the-scenes look at AB Tasty both interesting and informative. And yes, if there was any doubt, we actually use AB Tasty’s Feature Experimentation for all AB Tasty feature development! This helps us improve the product and ensure that it serves its purpose as a valuable addition to modern tech teams.
Website heatmaps are visual representations of attention, engagement, and interactions generated by your visitors as they navigate through your website. Free or paid tools help you generate different maps to better understand user behavior and optimize conversions.
What is a heatmap?
Heatmaps are visual representations of attention, engagement, and interactions generated by your visitors as they navigate through your site. Warm colors indicate areas that attract the most attention or engagement, whereas cool colors show overlooked spots on the page.
Let your own visitors show you the areas of improvement on your site to help you boost sales, then simply make the appropriate changes and measure how well they work.
Source: www.pixeller.co
When should you use heatmaps?
Heatmap tools measure attention, engagement, and even the number of clicks on your website. They are a key component of your optimization toolkit (to learn more, read our complete guide to Conversion Rate Optimization).
To give you some concrete examples, here are some of the main reasons to use these tools:
To measure engagement. Do you write online articles and wonder at what point your audience stops reading? Using a heatmap can help you visualize a user’s scroll behavior and see where they interact with your site. If you notice that only a tiny percentage of people actually reach your CTA, it might be time to make a change.
To measure actions. Where do my visitors click? Are they clicking the right button? Heatmaps help you see if your visitors are completing your desired actions, and also highlight where they might be getting stuck.
To measure attention. What headlines attract the most attention? What images attract the most attention? What elements are distracting from the main content? Do my visitors see my form? Once you have solid answers to these questions, you can start making changes that will increase your conversion rates.
Source: Unbounce
Gaining the answers to the above questions can help you answer even more nagging questions:
Where should I place my most important content?
What’s the best way to use images and videos?
Where are my visitors getting distracted?
Where should I talk about my product/service?
Most heatmap software will let you generate maps that show user interactions from different points of view. The idea is to refer to all of them in order to reveal your visitors’ behavior.
Clickmap
This type of map allows you to quantify actions. It’s a visual representation of all of the clicks visitors make on your page. This ‘map’ generates precious data since it allows you to see precisely where people interact with your site.
Each time someone clicks on a precise area on a page, the heatmap marks the spot with a light dot. If you see large areas of white, this is where the majority of visitors are clicking.
Source: Sumo
By quickly identifying the ‘hot spots’ on your site, you can immediately tell if people are clicking where you want them to click. On the above image from Sumo, we can see that the ‘SHARE’ and ‘IMAGE SHARER’ are the least popular areas.
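Under the hood, a clickmap is simple aggregation: collect click coordinates, bin them into a grid, and color each cell by click density. A minimal sketch with synthetic data:

```python
from collections import Counter

# Synthetic click coordinates in page pixels.
clicks = [(102, 48), (110, 52), (430, 300), (108, 50), (435, 298)]

CELL = 50  # grid cell size in pixels

density = Counter((x // CELL, y // CELL) for x, y in clicks)

# The hottest cells are where most visitors click.
for cell, count in density.most_common(3):
    print(f"cell {cell}: {count} clicks")
```

A rendering layer then maps those counts onto a color scale and overlays the grid on a page screenshot.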
Scroll heatmap
The scroll-map lets you see how far down a page visitors scroll and, more importantly, which elements attract their attention and where they linger.
By using a scroll-map, you can determine if users ‘see’ the right parts of your site, or if they get distracted by unimportant elements.
source: nguyenvincent.com
If we look at the above screenshot of an article that talks about SEO, we can see that the image and the two lines of text below it are the most popular: about 85% of visitors have seen these elements.
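A scroll-map boils down to a reach curve: for each depth on the page, the share of visitors who scrolled at least that far. A quick sketch with made-up session data:

```python
# Maximum scroll depth per session, as a fraction of page height (synthetic).
max_depths = [1.0, 0.85, 0.85, 0.6, 0.4, 0.9, 0.3, 0.85]

for threshold in (0.25, 0.5, 0.75, 1.0):
    reach = sum(d >= threshold for d in max_depths) / len(max_depths)
    print(f"{threshold:.0%} of page height: seen by {reach:.0%} of visitors")
```

If reach collapses just above your CTA, that is a strong hint to move the CTA higher or trim the content above it.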
Percentage of clicks heatmap
The ‘percentage of clicks’ heatmap complements the classic one. It lets you see, element by element, how many clicks were generated by a certain image or CTA. The ability to quantify clicks by element is extremely important.
This allows you to:
Understand how much importance users give to each element
Avoid leaving users to click on images without links
Source: nguyenvincent.com
Confetti heatmap
The confetti heatmap lets you see each individual click on a page, as opposed to a view that shows a ‘density’ of clicks. It allows you to see if people are trying to click on non-clickable areas, and to fix the problem if so!
Source: CrazyEgg
Heatmap vs. eye-tracking
While heatmaps rely mostly on tracking a user’s mouse movements and clicks, eye-tracking analyzes their gaze.
The point of eye-tracking is to see exactly how users look at your site and to analyze the zones where they pay the most attention.
Source: Nielsen Norman Group
As with heat mapping, the areas highlighted in warm colors show the areas where readers pay the most attention.
Although it’s certainly useful, eye-tracking relies on technology that’s a bit more difficult to put in place. It requires specific equipment that most agencies don’t have.
If you’re interested in eye-tracking, there is AI-based software, like Feng-Gui or EyeQuant, that allows you to simulate eye-tracking with the help of algorithms.
Heat mapping tool features
When looking for a heatmap tool, keep in mind the following points:
Segmentation: The tool should allow you to create heatmaps specific to certain audiences that you define using targeting and segmentation criteria (e.g. new visitors, visitors who have converted, visitors from sponsored link campaigns…)
Map Comparison: You should be able to easily and visually compare the results of different maps from different user segments.
Page Template: Having a heatmap specific to each page can make analysis tricky if, for instance, you run an ecommerce site with hundreds or even thousands of product pages. You need to be able to aggregate results for all pages of a certain type.
Responsive Heatmaps: The tool has to work on pages accessed from a mobile device. Actions specific to these devices should be recorded, such as touches, scrolls, and swipes. During the analysis, you should be able to distinguish between behaviors and navigation sessions seen on mobile vs desktop devices so that you can correctly interpret the data.
Exportable Maps: This important feature lets you easily share your results with teammates.
Dynamic Heatmaps: You should be able to see clicks on dynamic elements: drop-down menu, slider, carousel, elements loaded using AJAX or using a JavaScript framework like React.js or Angular.js.
Retroactive Heatmaps: Has your site design changed since your last analysis? Your software should be able to preserve previous results as they appeared on the then-current website design, and not simply superimpose them on your new design – the results wouldn’t make any sense there.
Combine heatmap and A/B testing
Let’s imagine that you’ve used a heatmap to better understand how your website users interact with your brand. You’ve identified strong and weak points on your site, and you’d like to make the appropriate changes.
Question: How can you measure how effective these changes were? There’s only one solution – A/B testing your modifications.
The idea is to create different versions of your web pages, ads, landing pages, etc. in order to compare how they perform.
By combining heatmaps and experimentation, you’ve got yourself a 3-step method (sketched in code after this list) to:
Identify problems
Test potential solutions
Choose the highest performing solution
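A common way to implement step 2 is a deterministic split: hash each visitor ID so returning visitors always see the same version. This is a generic pattern sketch, not any particular testing tool:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str) -> str:
    # Hashing visitor + experiment keeps assignments stable across
    # sessions and independent between experiments.
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "B" if int(digest[:8], 16) % 2 else "A"

print(assign_variant("visitor-1001", "homepage-cta"))  # stable per visitor
```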
On this home-repair website, a preliminary heatmap reveals that users’ attention and engagement are split between too many competing elements.
Insight: attention is divided and conversion is low.
With the help of A/B testing, the company made a few changes to the home page in order to refocus visitor attention on one call-to-action.
A second heatmap is made after the modifications.
Insight: attention is refocused on the phone number, the main call-to-action, and conversions increase.
To sum up, use heatmaps and A/B testing to:
Analyze visitor behavior and engagement
Reveal strong and weak areas on certain web pages
Find specific ways of increasing conversion rates
Test these solutions until you see your conversion rates go up