Ben Labay outlines essential frameworks for a more strategic, tactical and disruptive approach to experimentation
With two degrees, in Evolutionary Behavior and in Conservation Research Science, Ben Labay spent a decade in academia building a wide-ranging background in research, experimentation and technical data work.
Now, as CEO of the experimentation and conversion optimization agency Speero, Ben describes his “geek-out” area as customer experience research and working with customer data.
At Speero, Ben works to scope and run research and test program strategies for companies including Procter & Gamble, ADP, Codecademy, MongoDB, Toast and many others around the world.
AB Tasty’s VP Marketing Marylin Montoya spoke with Ben on how to create mechanisms for companies to not only optimize but also be more disruptive when it comes to web experimentation to drive growth.
Here are some of the key takeaways from their conversation.
Consider a portfolio way of management in experimentation
Inspired by Jim Collins and Jerry I. Porras’ book “Built to Last”, Ben discusses a framework the book draws from the best practices of 18 successful companies on the ways a company can grow.
He identifies one big pillar that many organizations often neglect: experimentation. To tackle this, Ben suggests a portfolio-management approach to experimentation, built on three portfolio tags that form a spectrum of solutions, starting with iterative changes for optimization.
The first level consists of making small tweaks to a website based on customer feedback, such as improving layouts; the second includes more substantial changes, such as new content pieces.
But there’s a bigger third level which Ben refers to as more “disruptive” and “innovative” such as a brand new product or pricing model that can serve as a massive learning experience.
With three different levels of change, it’s important to set a clear distribution of time spent on each level and have alignment among your team.
In the words of Ben, “Let’s put 20% of our calories over into iterating, 20% onto substantial and 20, 30 or 40% over on disruptive. And that map – that framework – has been really healthy to use as a tool to get teams on the same page.”
For Ben, applying such a framework is key to getting all teams on the same page as it helps ensure companies are not under-resourcing disruptive and “big needle movers”. Velocity of work is important, he argues, but so is quality of ideas.
Let your goal tree map guide you
Every A/B test or personalization campaign needs to be fed with good ingredients which determine the quality of the hypothesis.
“Every agency, every in-house company researches. We do research. We collect data, we have information, we get insights, and then they test on insights. But you can’t stop there,” Ben says.
The trick is not to stop at the insights but to derive a theme from them. This allows companies to pick out underlying strengths and weaknesses and map them into their OKRs.
For example, you may have a number of insights: a page is underperforming, users are confused about pricing, and social proof gets skipped over. The key is to conduct a thematic analysis and look for patterns across these different insights.
Consequently, it’s important for companies to create a goal tree map to understand how things cascade down, to become more tactical and SMART about their goals, and to set their OKRs accordingly to organize and make sense of the vast amount of data.
When the time comes to set up a testing program, teams will have a strategic testing roadmap for a particular theme that links to these OKRs. This helps transform the metrics into more actionable frameworks.
And at the end of each quarter, companies can evaluate their performance based on this scorecard of metrics and how the tests they ran during the quarter impacted these metrics.
Build engagement and efficiency into your testing program strategy
The main value prop of testing centers around making profit but Ben advocates for a second value prop which revolves around how a business operates. This requires shifting focus to efficiency and how different teams across an organization work together.
Ben draws a parallel between the A/B testing industry and DevOps: a culture of experimentation and of being data-driven echoes the DevOps movement, which focuses on breaking down silos between development and operations teams to enhance collaboration and efficiency. “The whole idea is to optimize the efficiency of a big team working together,” Ben says.
This means organizations should take a hard look at their testing program and the components that make up the program which includes getting the right people behind it. It’s also about becoming more customer-centric and embracing failure.
Ben refers to this as the “programmatic side” of the program which serves as the framework or blueprint for decision making. It helps to answer questions like “how do I organize my team structure?” or “what is my meeting cadence with the team?”
Ultimately, it’s about changing and challenging your current process and transforming your culture internally by engaging your team in your testing program and in the way you’re using data to make decisions.
What else can you learn from our conversation with Ben Labay?
Ways to get out of a testing rut
How to structure experimentation meetings to tackle roadblocks
How experimentation relates to game theory
The importance of adopting an actionable framework for decision making
About Ben Labay
Ben Labay combines years of academic and statistics training with customer experience and UX knowledge. Currently, Ben is the CEO at Speero. With two degrees in Evolutionary Behavior and Conservation Research Science (resource management), Ben started his career in academia, working as a staff researcher at the University of Texas focused on research and data modeling. This helped form the foundation for his current passion and work at Speero, which focuses on helping organizations make decisions using customer data.
About 1,000 Experiments Club
The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.
At AB Tasty, we have always put client feedback at the heart of our product roadmap. Listening to our clients’ needs and helping them achieve their goals is a top priority for us. We don’t just say it, we do it:
🎁 55 new features brought to users in 2022
📣 10 market releases per year
🤝 1545 feedback requests processed
To go one step further, we have decided to launch our User Club! 🎉
The AB Tasty User Club is a new opportunity for you to share your feedback, experiences, and needs with us. Being part of the Club means you’ll have exclusive access to:
Our new features
A way to interact directly with our Product Managers and Designers
Our product related events
A real community where you can share your usage and hear best practices from other users
A successful launchpad for our User Club
As a first step, we organized our first User Games of the year in January 2023 in Paris, France, on the theme of data, monitoring and performance analysis in CRO activities.
This event was a great success for our Product teams and for the 5 customers who attended to discuss their data understanding and analysis needs. We welcomed participants from different industries, all interested in data analysis techniques and how to use them to improve products and services. We also invited experts in the field of data analysis to share their experience and knowledge.
Baptiste Deroche, Product Designer @AB Tasty:
“This event was the perfect opportunity to validate and challenge assumptions we have about the product. We learned a lot from our customers that day, and it’s a really good start to getting closer and closer to our end users.”
Stéphanie Duchemin, Product Design Team Leader @AB Tasty: “It was a pleasure to meet our users again in a real session and not remotely, and I think that the pleasure was shared. This reinforces my conviction that feedback is not the same in a face-to-face session as in a remote one. Through our discussions we learned and discovered some pain points that were not necessarily related to the initial topic and that will feed our roadmap for at least 6 months!”
“It was a collaborative and enriching moment where each participant presented their feedback, their experimentation process and their vision of the tool. It’s really reassuring to know that AB Tasty values its users’ feedback. I will gladly participate in this type of event again!”
The AB Tasty User Club was created to offer our customers a space to discuss and share their opinions and suggestions. We have received a lot of positive feedback from participants and this gives us even more motivation to create other similar projects and events where you will be at the forefront.
Stay tuned for the upcoming events or announcements! If you are not part of the Club yet, do not hesitate to talk about it with your dedicated Customer Success Manager!
In an age of rapidly changing demands and high competition, it’s imperative for all businesses to understand how well they’re performing and whether they’re moving in the right direction towards accomplishing their objectives and fulfilling customer needs.
To accomplish that, these businesses will need KPIs in the form of actionable data to give insights for every department on how successful they are in reaching their goals. These KPIs can take on various forms depending on the needs and circumstances of each business.
Put simply, KPIs are metrics that measure performance against key business goals. They serve as indicators of what your organization needs to achieve to reach its long-term objectives and help you make more informed strategic decisions.
Thus, their purpose is to provide data on various aspects of a business. In this article, we will focus on the KPIs designed to measure the core of your business: your products. In particular, we’ll be looking at how to measure the success of new features of your products to determine whether they have the intended impact on your business (and your customers).
How to measure new feature success
More often than not, product teams are working on optimizing and updating an existing product based on customer feedback. This means that new iterations of a product are released in the form of features.
We always hear about how to measure the success of new products, but measuring the success of individual product features is just as important: otherwise, teams waste valuable resources developing features that are not actually being used.
This article will go through some of the key KPIs that teams should be tracking to measure the success and performance of new features and to ensure they meet consumer expectations.
Any feature you release should be tied to and aligned with the overall business objectives and consumer needs. This will then serve as the foundation for defining what the criteria for success look like.
Afterwards, you can determine the KPIs that will help you track the success of your product features, which are related to your organization’s overall OKRs and to analyze the impact your features have on your business post-launch.
Each KPI should have thresholds that define good and poor performance, and an action plan should be put in place in case a feature is not performing as well as expected.
Before going further into the typical KPIs to track for feature success, remember that the KPIs you choose should be: measurable, specific, relevant, actionable and aligned with overall business strategy and outcomes.
Setting the right KPIs early on is essential as they allow product managers to evaluate feature usage, engagement and user experience to gauge a feature’s success against its objectives. They give teams a framework for measuring the things that matter most to your business in order to make better-informed decisions (and build better features).
In sum, to determine new feature success, you need to consider the following points:
The goal of the new feature – once you narrow down the objective of your new feature, it’ll be easier to determine which KPIs you need to focus on. Remember that your feature goal should be aligned with the larger business or product goals and overall product vision.
What KPIs to monitor
What success looks like – this will primarily depend on the goal of your feature.
The next section will highlight some of the key KPIs that will help you determine the success of your new feature.
Key KPIs to measure feature success
Usage KPIs
Active users
To analyze user engagement, you could start by looking at the number of active users who are using the feature. Active users are those who engage with your new feature and perform some kind of action. In other words, they are actively using the feature.
These can be divided into three categories:
Daily active users (DAU)
Weekly active users (WAU)
Monthly active users (MAU)
Session duration
You can also track session duration to measure the total time a user spends using the feature. It can give you an indication of how much a user enjoys the experience of the new feature – whether they’re leaving right away or actually spending time using it.
The best way to measure session duration is to take the total time users spend in your feature and divide it by the total number of sessions in a given time frame. This gives you the average time spent using your feature in a given session.
Average Session Duration = Sum of Individual Session Lengths / Total Sessions During That Time Frame
Number of sessions per user
You may also want to look into the number of sessions for each user in a given time period to home in on those users who are using your feature more than once a day. This can reveal the popularity of the feature, since the more a customer interacts with your feature, the more likely they are to remain an active customer.
To obtain this figure, divide the total number of sessions in a given period by the total number of users in that same period.
You can also consider collecting their feedback to gain insights on what they like about your feature and the value they get from it, particularly from those who spent a considerable amount of time using the feature.
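To make these usage calculations concrete, here is a minimal Python sketch over a hypothetical session log (the user names and durations are illustrative, not taken from any particular analytics tool):

```python
# Hypothetical session log: (user_id, session_length_in_seconds)
sessions = [
    ("alice", 120), ("alice", 300), ("bob", 60),
    ("carol", 240), ("bob", 180), ("alice", 90),
]

# Average session duration = sum of individual session lengths / total sessions
total_time = sum(length for _, length in sessions)
avg_session_duration = total_time / len(sessions)  # 990 / 6 = 165.0 seconds

# Number of sessions per user = total sessions / distinct users
distinct_users = {user for user, _ in sessions}
sessions_per_user = len(sessions) / len(distinct_users)  # 6 / 3 = 2.0

print(avg_session_duration, sessions_per_user)
```

In practice, these figures would come from your analytics tooling over a chosen time frame (daily, weekly or monthly, matching the DAU/WAU/MAU split above).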
Customer KPIs
Customer retention
This refers to the percentage of customers retained within a specific time period. Tracking this KPI will help you determine how well your new feature resonated with your customers and whether it helped to improve retention.
This can usually be calculated by picking a certain period that you want to measure and then finding the numbers for:
Customers you had at the beginning of that period
Customers at the end of the same period
New customers added in that period
Customer Retention Rate = ((Total Number of Customers at the End of a Period – New Customers Acquired) / Customers at the Start of the Period) × 100
Customer churn rate
Unlike retention rate that measures the percentage of customers who stayed, churn rate measures those you’ve lost.
A high churn rate usually indicates that your feature or product is not delivering the value expected and not fulfilling your customers’ needs. For example, measuring churn rate after introducing your new feature can give you insight into how satisfied customers are with this feature and how well it resonated with them.
To calculate customer churn rate, you start by finding the number of customers lost during a certain period of time and divide it by the total number of customers at the beginning of this period.
Customer Churn Rate = (Customers Lost / Total Customers at the Start of the Period) × 100
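As an illustration, the retention and churn formulas above can be sketched in a few lines of Python (the customer counts are made-up numbers):

```python
def retention_rate(start_customers: int, end_customers: int, new_customers: int) -> float:
    """Percentage of existing customers kept over a period."""
    return 100 * (end_customers - new_customers) / start_customers

def churn_rate(lost_customers: int, start_customers: int) -> float:
    """Percentage of customers lost over a period."""
    return 100 * lost_customers / start_customers

# 200 customers at the start of the quarter, 190 at the end,
# of which 30 were newly acquired during the quarter
print(retention_rate(200, 190, 30))  # 80.0
# 40 of the original 200 customers left during the quarter
print(churn_rate(40, 200))  # 20.0
```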
Customer satisfaction
Using the Customer Satisfaction Score (CSAT), you can measure how satisfied your customers are with a specific feature – in other words, user sentiment.
Using a customer satisfaction survey, customers can rate their satisfaction on a scale from 1 to 5 with 5 being “very satisfied” (or sometimes on a scale of 1-10) as seen in the image below. The satisfaction score can then be calculated by dividing the number of satisfied/happy customers by the total number of responses.
For example, if the rating is from 1-5 then you would typically be collecting the total number of 4 and 5 responses for the “satisfied customers”.
CSAT score: (Total Number of 4 and 5 Responses) / (Total Number of Responses) × 100 = % of Satisfied Customers
Thus, a CSAT survey could be used to ask customers to rate their experience with a new feature. Make sure you also follow up on low CSAT scores by reaching out directly to these customers to find personalized solutions to any issues they’re facing.
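A minimal sketch of the CSAT calculation, assuming a 1–5 scale where ratings of 4 and 5 count as satisfied (the survey responses are hypothetical):

```python
def csat_score(responses: list[int]) -> float:
    """Percentage of satisfied customers (ratings of 4 or 5 on a 1-5 scale)."""
    satisfied = sum(1 for rating in responses if rating >= 4)
    return 100 * satisfied / len(responses)

# Hypothetical survey responses collected after a feature release
ratings = [5, 4, 3, 5, 2, 4, 5, 1, 4, 5]
print(csat_score(ratings))  # 7 satisfied out of 10 responses -> 70.0
```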
Net promoter score
The Net Promoter Score (NPS) determines customer satisfaction and sentiment by measuring how likely customers are to recommend your product to others on a scale from 0 to 10.
The NPS can be calculated by subtracting the percentage of detractors – those who give a score of 6 or lower – from the percentage of promoters – those who give a score of 9 or 10.
While NPS is usually used to gauge sentiment for your product as a whole and overall customer experience, it can still give you an idea of how happy customers are with your product and could give you insight on how to improve it (by introducing new features, for example). This can be done by following up on customers’ responses to find out why they feel that way and the kind of features they’d like to see introduced to enhance the user experience.
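The NPS arithmetic described above can be sketched as follows (the scores are illustrative):

```python
def nps(scores: list[int]) -> float:
    """% of promoters (9-10) minus % of detractors (0-6) on a 0-10 scale."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 8, 7, 6, 10, 9, 3, 8, 10]
print(nps(responses))  # 5 promoters, 2 detractors out of 10 -> 30.0
```

Note that the result ranges from -100 (all detractors) to 100 (all promoters).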
Why is measuring success important?
As we’ve seen, there are a number of KPIs (among many others) you can track to measure your new feature’s success.
But, why is this so important?
The simple answer is that you’ve invested time, money and valuable resources to build this feature and so if it’s not performing as expected then it’s crucial to look into why that is in order to improve it (or in the worst case scenario, remove it altogether).
Tracking KPIs also helps you stay on top of whether customers are actually using the new feature. For example, the usage KPIs discussed above will allow you to deduce whether your feature is receiving enough engagement from customers.
Setting clear KPIs and designated thresholds for these KPIs as well as an action plan early on will enable teams to ascertain a feature’s performance shortly after it’s released and make informed decisions quicker and more effectively.
Once you’ve decided on the KPIs you want to track, you should start thinking about the kind of tools you will use to gather the necessary data. This will depend on factors such as the resources you have at your disposal and how complex the KPIs you want to track are.
After collecting all the essential KPIs, it’s the product manager’s responsibility to provide information about the performance of the new feature and insights gained to all the relevant stakeholders.
Tips to ensure new feature success
Announce new features
The first step to ensuring the success of your new feature is to bring awareness to it. This means you should make sure you let users know that you’ve launched this feature by using multiple channels such as on your blog, social media or through in-app announcements.
This may not be necessary for every feature release. In other words, you want to be careful about sending out notifications about new releases too often, which could have the opposite effect and put off customers.
In your announcement, make sure you explain your feature in-depth; in other words, how it works and how it will change the user experience.
This is also a chance to collect valuable initial feedback that you can use to optimize your feature.
However, bear in mind the audience you want to target with your announcement. Some user groups may be more relevant than others for a particular feature release. Thus, be selective when it comes to bringing awareness to your new feature.
Set up tutorials and webinars for educational purposes
A great way to explain to your customers the value your feature brings to them is by releasing tutorials and organizing webinars to give a deeper look into the new feature.
This is also a chance to get up close and personal with your customers, have one-on-one interactions about the feature and gather the in-depth feedback you need to optimize it.
Segment users and gather their feedback
Once you start tracking KPIs, you will be able to determine what kind of users are most engaging with your new feature. You can segment users and place them into different groups such as those who engaged with your feature more than once, those who engaged once and those who have never used the feature.
Segmenting users this way allows you to identify usage patterns, deduce which kinds of users are most likely to use your feature, and collect actionable feedback from each segment to better understand how your new feature is being adopted (and why it’s not resonating with some).
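A simple way to sketch this segmentation, assuming you can export a per-user count of feature uses from your analytics (the counts below are hypothetical):

```python
# Hypothetical number of times each user engaged with the new feature
feature_uses = {"alice": 5, "bob": 1, "carol": 0, "dave": 3}

def engagement_segment(uses: int) -> str:
    """Bucket a user by how often they used the feature."""
    if uses == 0:
        return "never used"
    if uses == 1:
        return "used once"
    return "engaged more than once"

segments = {user: engagement_segment(n) for user, n in feature_uses.items()}
print(segments)
```

Each segment can then be targeted with its own feedback survey to understand why the feature is (or isn’t) resonating.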
While KPIs and metrics give you raw data to monitor what your users are doing, it’s important to put this data into context to get to the why.
Therefore, collecting feedback will help you iterate and optimize your feature for better results. It could also give you great insight on how to convert infrequent users of your feature to more engaged users, for example.
Opt for a “soft” launch
Sometimes, releasing a brand new feature may be a risky move. In this case, you might want to consider releasing this feature to a pre-chosen subset of users to collect their feedback and improve the feature based on the feedback before going for a full release.
For example, you could test internally within your organization. This way, internal teams can test out the new feature and learn more about it regardless of whether or not they interact directly with customers. This is also a risk-free release as you’re only testing within your teams, who in turn can give you the right kind of feedback to optimize your feature before releasing externally to customers.
Afterwards, you might still be wary about releasing your feature to all your customers. Luckily, there’s a reliable way to release to a small subset of your target audience, and that’s through feature flags.
Feature flags allow you to put your feature behind a flag and toggle it on for a group of users, so you can monitor their experience and collect feedback while testing live in production.
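As a rough illustration of the mechanism (not AB Tasty’s actual implementation), a percentage-based rollout flag can be sketched by hashing the user ID into a stable bucket:

```python
import hashlib

def is_enabled(flag_name: str, user_id: str, rollout_percent: int) -> bool:
    """Return True if this user falls inside the rollout percentage.

    Hashing flag + user gives each user a stable bucket in [0, 100),
    so the same user always sees the same variant across sessions.
    """
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_percent

# Expose the new feature to roughly 10% of users
if is_enabled("new-checkout", "user-42", rollout_percent=10):
    pass  # render the new feature
else:
    pass  # render the existing experience
```

A dedicated feature-flagging tool adds targeting rules, kill switches and analytics on top of this basic bucketing.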
KPIs provide you with an essential framework to help measure feature success and allow you to identify areas for improvement to ensure customer satisfaction and loyalty.
As already discussed, there is no one-size-fits-all approach when it comes to choosing the right KPIs. It will largely depend on your feature goals, your overall objectives and the industry you’re in.
However, it’s best to make sure that your KPIs are not only aligned with your product and business goals but also focused on customer satisfaction and value. Ask yourself what you really want to learn from these KPIs and remember to put the user at the heart of whichever KPIs you end up choosing.
Tracking feature performance (and measuring its success) early on – or shortly after release – will put you on the right path to customer satisfaction and retention. With any feature you release, it’s important to look for ways to improve it and to find the audience most likely to value and use it.
However, remember that KPIs are valuable but not sufficient on their own: teams must extract learnings and insights from them and give them context in order to drive future planning and deliver better value to your customers.
You know it best: shaping the customer journey on your sites or apps, from search to cart, has become key for conversions. That’s why you’re using state-of-the-art tools to collect data, run campaigns, and experiment with and personalize experiences. But these are just tools that help you execute your plan. A big part of your job is to think about and play with ideas to tackle your business goals around loyalty, conversions and turnover. Did you know that you can rely on your stack to shake up your ideation process and detect ROI-driven business opportunities?
If you’re using AB Tasty, you can now enjoy tailored sources of inspiration, such as Audience Recommendation, available for websites in English and French.
Dedicated to letting your good ideas take flight, Audience Recommendation quickly identifies segments of customers that can be leveraged efficiently in your conversion strategy. Once connected to your site, it suggests audience ideas based on your visitors’ interests that can be converted into a thematic journey.
But that’s not the only way to quickly find opportunities and turn them into wins.
Read this article that suggests 5 ways to detect personalization journeys that will help marketers meet their business goals.
1. Engage your consumers based on the content they like
Let’s say you’re in charge of an e-commerce website: clothing, shoes or books. Think about the wide range of products or services available – thousands of references. Do your buyers browse your entire catalog before adding items to the cart? We doubt it. We rather assume that, on the one hand, you have bestseller items, and on the other, niche, premium or overstock items that have trouble selling. How do you handle them today in your conversion strategy?
Our suggestion: come play matchmaker with AB Tasty’s Content Interest. Identify key audiences that are sensitive to content found on your site – and combine topic-interested visitors with the items that make perfect sense for them. Our in-house AI suggests building segments based on browsing and transaction history, thanks to Natural Language Processing. That enables you to think about customer experience differently, aligning visitors’ interests and business needs in deeply personalized campaigns.
Even better: you will always be aware of current trends on your site – and therefore able to adapt quickly to ever-changing consumer needs.
Let’s take a look at a typical online store, such as a shopping website for shoes, on which our AI runs for content interest. Below is an example of content segments and the volume of views and transactions they represent.
Wearers of black leather boots? Or rather low-top sneakers? The associated views and transactions help you decide which campaigns to build. Of course, content-based messaging with relevant offers – such as targeted discounts, free shipping or loyalty points – is very likely to be effective.
2. Build the journey based on visitor engagement
Looking to entice newcomers to engage with you? Or to reward those who are loyal to your brand? And then there are those who come regularly but never shop. Do you already have a strategy in place for these different groups of visitors?
With AB Tasty you can target shoppers based on the profile they have on your site. Here again, our AI comes into action: it automatically allocates traffic into four logical groups of users – Disengaged, Wanderers, Valuable and Loyal consumers. That means you can have a dedicated strategy for each group and deploy it easily, combined with dedicated triggers to further increase campaign success.
Take newcomers: you don’t want to scare them off, right? Let them browse a bit or take action before displaying your campaign. Timing is key!
A good use case? Kiehl’s Australia decided to display a specific message for visitors who navigate and revisit but don’t buy – the wanderers: “Still deciding? Discover our latest limited-time offers.” Using stress marketing and acknowledging the uncertainty of these shoppers, combined with deep targeting options, resulted in an uplift of 2.26% in transactions for the brand.
3. Seize the low-hanging fruits
They started shopping with you but left – and are now back on your site! Within AB Tasty you can very easily build campaigns based on abandoned carts that target – as the name suggests – those who were just one step away from completing their purchase journey. You can decide to target them all, or build different scenarios depending on cart value or the number of items in the shopping cart.
Our secret tip here? Experiment! Find out whether shipping costs, promo codes or components of your checkout page can be leveraged to optimize conversions and re-engage cart abandoners. You can trust our experience: there’s nothing like A/B testing to find out which ideas work and which don’t.
In the end, only you can really define which elements contribute to retaining abandoners on your site.
4. Rely on your experimentation strategy
We just mentioned it. It seems obvious, especially if A/B testing is part of your strategy, but analyzing campaign results in depth allows you to detect… ideas.
When you test an idea, it might not prove a winner across your entire audience, yet it can still be a large win for certain audiences – for example, mobile users versus desktop users, or returning visitors rather than new ones.
When reading A/B test reports, don’t forget to filter and narrow down to detect these opportunities. Statistics can show that the magic you were trying to achieve for everyone is at least working for certain groups of people.
5. Use the force of your data layer
Segment, Google Analytics 4, Tealium, Mixpanel… No matter what solution you use to analyze, understand and follow your customers, you might already have identified interesting audience segments from your first-party data. Why not use these to run your personalization strategy directly in AB Tasty? Once you have connected your preferred solution, you can launch campaigns on these segments (or cohorts, or traits) and couple them with further targeting and triggering options.
An example? Imagine a site offering holiday flats for rent. They know when their loyal customers usually book their holidays. That’s why they run campaigns targeting either those who enjoy summer or – in the screenshot below – those who like booking their vacations when Santa is around. In the same spirit, we could also couple that targeting with a weather trigger – suggesting snow or sea alternatives when it’s raining at the favorite destination.
We could even make it snow on the screen with AB Tasty’s no-code snowflake widget, but that’s another idea altogether.
With the rising popularity of DevOps practices in the modern software development world, it’s no surprise that there are a lot of myths surrounding the concept of DevOps.
In other words, there’s a lot of confusion about what this concept entails. What does it actually mean when we talk about a DevOps team or organization? Even with its widespread adoption, there are many misconceptions about what the term actually means, how it can be implemented and by whom.
However, one thing’s for certain. DevOps, when properly implemented, can bring a great number of benefits and value to teams and customers alike. Thus, this article seeks to address its most common misconceptions so teams can better understand this concept and reap its benefits for more efficient and streamlined processes.
Before we get started, let’s quickly go over what DevOps actually is to allow us to debunk its most common myths.
DevOps, as its name suggests, combines development and operations, with the aim of promoting better collaboration and breaking down communication barriers between these teams for enhanced productivity.
Today, DevOps has become an umbrella term for an approach which encompasses a set of tools, practices and culture that aim to increase teams’ ability to deliver higher quality software quickly to end users.
Next, we will dispel some of the most common myths about DevOps to help shed light on this concept and get the most value out of it in your own organization.
Myth 1: DevOps is all about the tools
Often, the first question that comes to mind when we hear DevOps is which tools an organization is using or which are the top tools for teams to adopt during their DevOps journey.
However, it’s important to take a step back and look at the bigger picture to understand the real value behind the DevOps methodology.
DevOps is more than just a list of tools to adopt for your software development practices. It’s an approach that brings teams together to deliver more value to end users.
In that sense, DevOps foremost starts with people. It’s about building the right mindset and culture to promote better collaboration so that teams no longer work in silos. Once you’ve established those, you can then choose the right tools that fit in with your team and processes.
It's up to the leaders in an organization to lay the foundation for this culture of DevOps and inspire their teams to adopt its values. Teams can then apply those values in their daily workflows to build better software and meet customer demands faster.
Myth 2: DevOps and CI/CD are one and the same
While CI/CD are essential processes to successfully implement DevOps, these are not the only components that make up this methodology.
Yet, there are many that confuse the two and believe that DevOps and CI/CD are the same thing.
It's true that the continuous integration and delivery of software indicates that an organization has adopted a key principle of DevOps. But as stated above, the concept goes beyond tools and processes, focusing primarily on establishing the right culture and mindset for all these key components to thrive.
CI/CD processes help enable this culture by providing a set of tools that emphasize automation but they are only a part of the DevOps methodology.
It’s important to remember that DevOps grew from a need to create cross-functional teams that can effectively collaborate and communicate throughout the software development lifecycle.
Therefore, CI/CD provides the tools necessary to streamline your software delivery process but it’s only a means to an end. Instead, organizations should focus on bringing together the right combination of people, processes and tools to truly embrace the DevOps methodology.
Myth 3: DevOps is a replacement for Agile
The methodologies of DevOps and Agile are sometimes confused to the point that some claim that DevOps is replacing Agile or believe that the two terms are interchangeable.
In fact, DevOps and Agile can be seen as complementary rather than synonymous; the two are not mutually exclusive and both can exist separately in an organization.
The underlying goal of both is to improve the software development process to deliver products quickly and efficiently.
However, Agile provides a framework that enables teams to break down projects into manageable chunks or “sprints” through iterative development to respond quicker to fast-changing consumer and market needs.
DevOps, for its part, is focused on breaking down silos between development and operations to allow for quicker releases through the use of tools and a fully automated pipeline. It also goes beyond the delivery process and refers to an entire culture that should be adopted within an organization.
You can look at Agile as a methodology for developing software and DevOps as a set of practices for delivering software that necessitates a cultural shift. Both still focus on speed and collaboration.
In that sense, they are complementary approaches, as DevOps enables and builds on Agile practices. Incorporating both into your daily workflows will help improve the efficiency of your software development processes.
Myth 4: DevOps is the answer to all problems
It’s a common misconception that just because you’re implementing DevOps practices within an organization, nothing can ever go wrong. However, you can’t just automate everything and believe that everything will go smoothly.
DevOps also involves developing the right strategy and incorporating the right tools to drive processes that are managed by the right people. If your team isn’t ready to move with the velocity required for these tools to function appropriately then it’s likely your shift to DevOps will only lead to disaster.
DevOps should also go beyond just automation and should incorporate continuous feedback loops from automated processes that developers can use to improve and optimize products.
Myth 5: DevOps means releasing new software non-stop
On the same note, just because DevOps places emphasis on all things continuous, this doesn’t mean that it is a guarantee for non-stop releases.
It’s important to note that the idea of “continuous” shouldn’t be taken too literally. When we say continuous, it rather means that teams have established processes in place that enable them to ship new releases confidently whenever needed. It’s about keeping your code in a constant releasable state so that teams have the confidence and ability to release as often as they want.
Depending on the organization and team objectives, this could mean releasing new software anywhere from several times a day to once a week or every two weeks.
The ultimate goal of DevOps is smaller, more frequent releases but this should never be at the expense of quality. After all, DevOps may be about speed but it’s also about releasing higher quality products to deliver enhanced value to customers.
Myth 6: DevOps engineer is the only means to a DevOps transformation
You can’t hire a DevOps engineer and claim you now have a DevOps team and culture. Similarly, you can’t hire a bunch of engineers, call it a DevOps team and be done with it.
More often than not, DevOps requires a complete organizational transformation with top-down motivation. This means that to successfully adopt DevOps, it’s up to leaders to lay the foundation for DevOps to thrive and aid with the cultural shift that accompanies it.
Undoubtedly, having a DevOps engineer will help facilitate the adoption of DevOps practices with their deep knowledge of DevOps and its tools and can serve as a link between teams to enhance collaboration. However, the fact of the matter is anyone can learn and then implement a DevOps methodology with the right training, tools and leadership.
DevOps engineers cannot single-handedly spearhead this transformation. The true secret to success with DevOps is how effectively the different teams within an organization follow its practices, driven by top-down motivation and backed by the resources they need to do their jobs well.
Myth 7: DevOps only applies to development and operations teams
It’s natural to conclude that DevOps, which comes from a combination of development and operations, applies only to these two teams within an organization. We can even go so far as to say that this myth does have some truth to it.
While it’s true DevOps grew out of a need to enhance collaboration between these two teams, it has greatly evolved since then and is no longer confined to them.
Nowadays, DevOps encompasses teams from across the whole organization, which means that DevOps practices and principles can be applied and used as a way to empower all teams from engineering to sales and marketing. In other words, DevOps applies to each and every team or all stakeholders involved in the software development and delivery processes.
At the end of the day, DevOps is about cross-functional collaboration and working together towards common goals. In that sense, DevOps today can apply to the whole organization and not solely to development and operations teams. With the right training, any team within your organization can take up and successfully implement DevOps practices and reap their benefits.
DevOps can bring real value to organizations when properly implemented
There are many misconceptions about DevOps which makes sense given how popular it’s become but it’s important to be aware of these misconceptions to get the most out of the DevOps methodology.
It’s important to remember at the end that DevOps is not magic or a quick fix for all problems that come up. DevOps should be foremost people-focused but if you’re not willing to change your processes and undergo a cultural transformation in your entire organization then creating a DevOps team and calling it a day just won’t cut it.
DevOps is not only about collaboration and breaking down silos between teams but it’s also about providing these teams with the resources and foundation necessary to be able to successfully adopt DevOps practices in their day-to-day operations.
However, one thing that most can agree on is that DevOps can bring value to teams when implemented correctly. This will also depend on your own organization’s unique needs. What works for one organization may not work for another.
Make sure that you carefully assess your processes, teams and culture as a whole to understand whether it would make sense to take the plunge and adopt a DevOps methodology for your team and your products. Only then can the transformation truly begin.
This was another exciting year for AB Tasty users! Our team worked hard to implement 50+ features and 100+ improvements. Thank you, as 25% of these were suggested by you! As a result, you unleashed your creativity more easily than ever before. During the 2022 Black Friday period, 71% of our customers had at least one live campaign. Pop-ins and banners were your favorites among the thousands of live personalizations and experiments using widgets.
And we can’t wait to see how creative you’re going to be in your next campaigns using Custom Widgets, the first 2022 feature we’re going to dive into!
Engagement
Custom Widgets & the Widget Library
That’s the innovation of the year! Now you can create and customize your own widgets to scale your best ideas! Inject your code to create your own ready-to-use widgets and use them in any campaign you want. Or browse through the new Widget Library to find the perfect widget for your customer scenario. We also brought lots of improvements to the ready-to-use widget configuration, such as the way they appear and disappear or the way to trigger them. You can always customize these and turn them into your own Custom Widget!
New audience segments
The segment builder was enriched with ROI-driven audiences this year. Remember the AI-powered Content Interest that enables the identification of consumers who share an interest in your products or services? You can now see the most relevant topics – including view and transaction rates – linked to each keyword directly in the builder.
Also, easily target low-hanging fruit using the Abandoned Cart segment and choose to address only those with a certain amount or number of items in the cart.
Ecosystem fit
This year we developed a new set of full-circle data flow integrations with the best-of-breed solutions we encounter in our customers' tech stacks. Now, you can analyze the entire customer journey in your favorite solution: Google Analytics 4, Segment, Mixpanel, and more. Expose your audiences (or cohorts, traits, segments) to campaigns and send results back into your tools. Stay tuned: more integrations are coming your way in 2023.
Campaign insights
In 2022, the reporting page was enriched with all kinds of filters: device, date, geography, transaction, loyalty, attributes, action from visitors…the list goes on! Use them to analyze audience behavior on specific dates or compare results between two segments. For example, you might want to check if your experiment has a different impact on mobile versus desktop users.
Performance
Speed matters! In parallel to continuous improvements on our tag, you can now find out whether and why your campaigns might affect your website's load time on the new campaign performance center page. All details about performance are visible here and you can navigate directly to where in AB Tasty you need to go to solve the issue.
Privacy & compliance
What happens with your campaign when a visitor declines cookies? Ally, our virtual assistant, has your back! She will guide you through campaign behavior settings and explain all the details about the choices you have to make to meet requirements around consent and privacy.
Make your life easier
We got used to it so quickly that we almost forgot it’s recent: the new intuitive interface is really easy to use, with the main menu on the left and the drop-down settings on the top right. Continuous dashboard evolutions now let you see what’s essential for you: you can personalize your view and see campaign readiness status right away.
1:1 Personalization and intelligent search
Last but definitely not least, in 2022 we acquired Epoq, adding to our portfolio to better serve brands on their mission to build 1:1 customer journeys from search to cart. Our solution set now includes:
Intelligent search: a fast, powerful search engine that gets customers straight to the product they want to buy.
Intelligent recommendations: an AI-powered recommendation engine that surfaces new revenue opportunities.
Check out this webinar to get more details about recommendations and site-search for your business!
More exciting features are currently a work in progress for 2023! Do you have suggestions for our roadmap? Connect to your AB Tasty account and send or vote for upcoming features using the “submit feedback” link. We can’t wait to see what YOU will build in 2023!
With the end of third-party cookies in sight, first-party data has moved to the forefront of digital marketing.
First-party data is a powerful tool for personalizing your customers’ buying journey. It’s generally more reliable and offers deeper customer insights than third-party data, helping you gain that competitive edge. But these benefits also bring responsibility. It’s essential from both a compliance and customer experience perspective that you practice ethical data collection when it comes to first-party data.
In this article, we take a closer look at first-party data—what it is, how you can collect and use it ethically and the benefits first-party data offers both your customers and your business.
What is first-party data?
First-party data is information about your customers that you collect directly from them via channels you own.
Potential sources of first-party data include your website, social media account, subscriptions, online chat or call center transcripts or customer surveys. Importantly, the first-party data you collect is yours and you have complete control over its usage.
Examples of first-party data include a customer’s
name, location and email address
survey responses
purchase history
loyalty status
search history
email open, click or bounce rates
interest profile
website or app navigational behavior, including the pages they visit and the time they spend on them
interactions with paid ads
feedback
As it comes straight from the customer, first-party data provides you with deep and accurate insights into your audience, their buying behavior and preferences.
These insights are essential for guiding the development of digital marketing strategies that prioritize the human experience, such as personalization. They can also help you create customer personas to connect with new audiences, which may inform key business decisions, including new products or services.
How to collect first-party data
Customers may voluntarily provide first-party data. For example, customers submit their email addresses when signing up for a newsletter, offer their responses when completing a survey or leave comments on a social media post. This is often referred to as declarative data—personal information about your customers that comes from them.
Alternatively, first-party data can be collected via tracking pixels or first-party cookies that record customers’ interactions with your site. This produces behavioral data about your customers.
First-party data is typically stored on a Customer Data Platform (CDP) or Customer Relationship Management (CRM) Platform. From this, you can build a database of information that you can later use to generate customer personas and personalize your marketing efforts.
What is third-party data?
Third-party data removes the direct relationship between your business and your customers during the data collection process. While first-party data comes straight from your customers, third-party data is collected by a separate entity that has no connection to your audience or your business.
Unlike first-party data which is free to collect, third-party data is typically aggregated from various sources and then sold to businesses to use for marketing purposes.
From a marketing perspective, third-party data is further removed and therefore offers less accurate customer insights. You don’t know the source of third-party data and it likely comes from sources that have not used or don’t know your business, limiting its utility.
For many years, marketers relied on third-party cookies to provide the data needed to develop digital marketing strategies and campaigns. But over time, concerns around the ethics of third-party data collection grew, especially in relation to data privacy and users’ lack of control over their data. As a result, most of the major search engines have banned—or will soon ban, in the case of Google Chrome—the use of third-party cookies.
Is first-party data ethical?
First-party data is ethical if it’s collected, stored and used according to data privacy laws, regulations and best practices that require responsible and transparent data handling.
The move away from third-party cookies highlights how first-party data is preferable when it comes to ethical considerations. With full control over the data you collect, you can ensure your first-party data strategy protects the data privacy rights of your customers. You can clearly explain to your customers how you handle their data so they can decide whether they agree to it when using your site or service.
Unfortunately, unethical first-party data collection can and does happen. Businesses that collect data from their customers without informed consent or who use the data in a way the customer didn’t agree to—such as selling it to a third party—violate their data privacy. Not only does this carry potential legal consequences, but it also significantly undermines the relationship of trust between a business and its customers.
How do you collect first-party data ethically?
The first step towards ethical data handling is compliance. There is a range of data privacy laws protecting customer rights and placing obligations on businesses in terms of how they collect, store and use personal data, including first-party data.
Confirming which laws apply to your business and developing an understanding of your legal obligations under them is not only essential for compliance, but it also informs your data architecture structure. The application of data privacy laws depends on your business or activities meeting certain criteria. It’s worth noting that some data privacy laws apply based on where your customer is located, not your business.
Data privacy legislation in Europe
European customers’ data privacy is protected by the General Data Protection Regulation (GDPR). The GDPR requires businesses to demonstrate ethics in data collection and use.
This often means customers must provide informed consent, or opt-in, to their data being collected and used. Businesses must also keep records of this consent. Customers can withdraw their consent at any time and request their data be deleted in certain cases. You must implement reasonable security measures to ensure data is stored securely, according to the level of risk. One option is to use air-gap backups to protect data from cyber threats by isolating it from the network. In certain circumstances, you also need to nominate a data protection officer.
Data privacy legislation in the UK
If you have UK-based customers, you need to comply with the provisions of the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018. These include providing a lawful basis for collecting personal data, such as consumer consent via a positive opt-in.
Consumers have the right to request the use of their data be restricted or their data erased, in certain circumstances. Relevant to first-party data, consumers can object to their data being used for profiling, including for direct marketing purposes.
Data privacy legislation in the US
The US doesn’t have a federal data privacy law. Instead, an increasing number of states have introduced their own. The first state to do so was California.
Under the California Consumer Privacy Act (CCPA)*, you can only collect customer data by informed consent—customers need to know how data, including first-party data, is collected and used. Customers also have the right to opt-out of the sale of their personal data and to request their data be deleted. If a data breach occurs where you have failed to use reasonable security measures to store the data, customers have a right of action.
2023 looks to be a big year for the data privacy landscape in America. In Virginia, the Consumer Data Protection Act (VCDPA) is due to commence on January 1. The VCDPA includes a provision for customers to opt-out of data collection for profiling or targeted advertising processes. Colorado, Connecticut and Utah have introduced similar laws, also ready to commence next year.
Beyond compliance
As you can see, some general principles emerge across the different pieces of data privacy legislation:
Customer consent — customers should consent to the collection and use of their data
Transparency — you should explain to customers what data you collect, how you collect it and what you do with it, typically via a privacy policy or statement
Control — customers should be able to control the use of their data, including requesting its deletion.
From a consumer perspective, compliance is the bare minimum. While the design of your data architecture structure should be guided by the above principles and comply with any relevant data privacy laws, you can also take extra steps to demonstrate your business’s commitment to ethical data handling. This may include appointing a data protection officer to oversee compliance and provide a point of contact for complaints or providing your employees with training, even where it isn’t required by law.
How to use first-party data
In a crowded online marketplace, it’s hard to make yourself heard over the noise. Arming yourself with accurate and reliable first-party data, however, helps you stand out from the crowd and communicate your message to both current and potential customers.
Firstly, you can use the first-party data you collect to create an exceptional customer journey through personas—fictional representations of your customers’ broad wants and needs. Building a series of personas can help you tailor your product or service and business practices to better serve your general customer base.
First-party data is also a crucial ingredient for more specific 1:1 personalization. With it, you can craft a unique user experience for your customers by delivering individual recommendations, messages, ads, content and offers to improve their purchasing journey.
In addition to serving a marketing purpose, first-party data is also essential for retargeting customers, for example, by sending abandoned cart emails. It can also help you identify and address gaps in your customers’ buying experience or your current offerings.
Want to get started with 1:1 personalization or personal recommendations?
Together, AB Tasty and Epoq form a complete platform for experimentation, content personalization, and AI-powered recommendations, equipped with the tools you need to create a richer digital experience for your customers, fast. With embedded AI and automation, the platform can help you achieve omnichannel personalization and revolutionize your brand and product experiences.
Benefits of first-party data
Personalization
First-party data provides deeper insights than second or third-party data, allowing you to incorporate a higher degree of personalization in your marketing. In turn, this improves the buying experience for your customers, gaining their loyalty.
Reduces costs
Engaging a third party to aggregate data costs money. First-party data, on the other hand, doesn’t cost you anything to collect.
Increases accuracy
Collecting data from your specific customer base and their interactions with your company produces tailored insights, rather than generic information. First-party data comes directly from the source, increasing its reliability.
Gives you control over data
You own first-party data collected from your customers. This puts you in full control of how it is collected, stored and used.
Transparency
As you have full control over how you collect and use first-party data, you can clearly explain this to your customers to obtain their informed consent. This transparency builds trust and loyalty with your customer base.
Strengthens customer relationships
In a recent Ipsos poll, 84% of Americans report being at least somewhat concerned about the safety of the personal data they provide on the internet. At the same time, Salesforce found that 61% of consumers are comfortable with businesses using their personal data in a beneficial and transparent way. First-party data builds better customer relationships by balancing customers’ desire for data privacy with their preference for personalized advertising.
Compliance with regional privacy laws
Most countries are strengthening their legislative framework around data privacy and prioritizing users’ rights. With first-party data, you can design your data architecture structure to ensure it complies with any relevant laws.
Ethical first-party data handling benefits both you and your customers
First-party data is the key to accurate and sharp customer insights that help you shape effective, targeted marketing strategies. But with the demand for ethical data collection at an all-time high, it’s important you treat your customers’ first-party data with care.
First-party data should be collected responsibly and transparently, with the customer’s fully informed consent. Your first-party data strategy also needs to comply with any relevant data privacy laws, regulations and best practices. This approach achieves a happy medium between addressing customers’ data privacy concerns with their desire for personalization during the purchasing journey. It also helps you optimize your customer’s experience with your business and, in turn, your profits.
Interested in learning more about how you can use first-party data to benefit your business? Check out our customer-centric data series for more insights from the experts.
*Amendments to the CCPA are due to be introduced in 2023, via the California Privacy Rights Act. Many of the related regulations are still being updated.
With its feature flagging functionality, AB Tasty was able to safely and quickly launch new changes to end users without impacting quality, through progressive delivery and continuous feedback loops.
In the world of SaaS, velocity and quality are of utmost importance. This is an industry that is constantly evolving and companies must work diligently to keep pace with consumers’ fast-changing needs and to maintain competitive advantage.
AB Tasty has seen rapid user growth all around the world. Consequently, the company had to accelerate its development processes, expanding its development and feature teams to enable the delivery of more features and increase speed to market.
The challenges of CI/CD
However, with such high growth and scaling, the company faced many growing pains and bottlenecks that significantly slowed down CI/CD processes. This increased the risk of issues piling up, defeating the initial purpose of accelerating time-to-market.
Even with mature CI/CD processes, developer and product teams are not immune to pitfalls that impact speed of delivery and go-to-market.
Facing these challenges, the team at AB Tasty set four main objectives:
Accelerate time-to-market.
Increase speed of delivery without sacrificing quality.
Allow teams to remain autonomous to avoid delays.
Reduce risk by avoiding big bang releases.
The team essentially needed a stress-free way to push code into production, plus an easy-to-use interface. Development teams could use it to release features as soon as they were ready, eliminating bottlenecks, while product teams could gain more control over the release process by running safe experiments in production to gather useful feedback.
This is when the team at AB Tasty turned to their flagging feature.
Feature flags were a way for the team to safely test and deploy new changes to any users of their choice while keeping them turned off for everyone else.
The team at AB Tasty was able to do this by, first, defining a flag in the feature management interface whose value is controlled remotely by the tool’s API.
The team can then set targeting rules, that is, the specific conditions for the flag to be triggered, based on the user ID. Using feature flags, they can perform highly granular user targeting, drawing on any user attributes to which they have access.
Then, in AB Tasty’s own codebase, teams can simply condition the activation of the feature that interests them, or its behavior, according to the value of the variable, using a simple conditional branch.
At the time, the company was working on a key project to revamp a major part of the UI, namely the navigation system. The revamp included a new vertical navigation and new responsive grids to offer new personalization campaigns, with the goal of making the interface more understandable to users.
For a project of this scope, there were big changes tied to many dependencies, such as the database, and so AB Tasty needed a way to progressively deploy these new changes to obtain initial feedback and avoid a big bang release.
Progressively deliver features
With such a large project, the goal was to mitigate risk by avoiding deploying major changes to all users at once. With feature flags, teams are able to reduce the number of users who can access the new changes.
In particular, the ON/OFF deployment logic of feature toggles within the feature management tool’s interface works like a switch so teams can progressively roll out features based on pre-set targeting rules while turning them off for everyone else.
Easily set up and manage beta and early adopter lists
After testing internally, the product team was looking for a way to easily manage their early adopter list before releasing to the rest of their users. This would enable them to receive quicker feedback from the most relevant (and more forgiving) users.
With AB Tasty’s flagging functionality, teams can simply add these early adopters’ account ID into the targeting of the flag, where they can then immediately access the new feature exclusively before anyone else.
Release without stress by ensuring that developers are ready to tackle any issues
Since most of the development team was based in France, the new changes were initially rolled out to that region so developers could ensure that everything worked and quickly fix any bugs before deploying to other regions (and time zones).
Should anything go wrong, teams can easily roll back the release with a kill switch by immediately toggling a flag off within the feature flagging platform interface so that the feature is no longer visible.
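Conceptually, the kill switch is just a flag store whose values can be flipped at runtime, with unknown flags defaulting to off. A minimal sketch under those assumptions:

```python
class FeatureFlags:
    """Tiny in-memory flag store; a real platform would sync these
    values from a remote service instead of holding them locally."""

    def __init__(self):
        self._flags = {}

    def set(self, name: str, enabled: bool) -> None:
        self._flags[name] = enabled

    def is_enabled(self, name: str) -> bool:
        # Unknown (or killed) flags default to off, hiding the feature.
        return self._flags.get(name, False)

flags = FeatureFlags()
flags.set("new_navigation", True)   # progressive rollout under way
flags.set("new_navigation", False)  # kill switch: instant rollback
```

Because the rollback is a data change rather than a code change, no redeployment is needed to hide a misbehaving feature.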
Enable continuous feedback loops
Teams can now test in production on end users, optimizing features and iterating faster based on real production data. As a result, teams can launch the end product to all users with the reassurance that they have identified and fixed any issues.
This also empowers teams to become more innovative, as they now have a safe way to test and receive feedback on their ideas, and are no longer limited in their scope of work.
Accelerate go-to-market
Furthermore, the ON/OFF deployment logic allows teams to release at their own pace. This accelerates time-to-market, as developers no longer need to wait for all changes to be ready before releasing their own, resulting in fewer delays and fewer disgruntled customers.
This speed does not come at the expense of quality: with continuous feedback loops, teams can iterate on releases, ensuring that only high-quality products reach users.
Teams can send features to production whenever they’re ready, make them visible to some users and officially launch to market once all deliverables are ready and good to go!
As we go deeper into digital transformation and as companies move towards large-scale globally distributed systems, the complexity that comes with them increases. This means that failures in these intricate systems become much harder to predict, as opposed to traditional, monolithic systems.
Yet these failures can be costly to repair, not to mention the risk of losing new and existing customers.
The question then is how can we build confidence in the systems that we put into production? How can teams make sure that they’re releasing stable and resilient software that can handle any unpredictable conditions that they’re put into?
This is when teams turn to what is aptly referred to as “chaos engineering”.
What is chaos engineering?
According to the Principles of Chaos, chaos engineering is “the discipline of experimenting on a system in order to build confidence in the system’s capability to withstand turbulent conditions in production.”
In other words, chaos engineering is the process of testing distributed systems to ensure they can withstand turbulent conditions and unexpected disturbances: strictly speaking, the “chaos” of production.
Chaos engineering is particularly applicable to large-scale, distributed systems. Since such systems are now hosted on globally distributed infrastructures, there are many complex dependencies and moving parts with several points of failure. This makes it harder to predict when an unexpected error will occur.
Because these component failures are so unpredictable, they are hard to test for in a typical software development life cycle.
This is when the concept of chaos engineering came about as a way to predict and test for such failures and uncover hidden flaws within these systems.
In other words, the practice determines the resilience of these systems by identifying their vulnerabilities through controlled experiments that probe for unpredictable and unstable behavior.
This is done by breaking things on purpose: injecting failures and various types of faults into the system to see how it responds, which helps uncover potential outages and weaknesses.
The ultimate goal is to learn how to build more resilient systems.
Where does the term come from?
Before we delve any deeper into chaos engineering, it would be helpful to understand where this concept originated.
Chaos engineering began in 2010, when the engineering team at Netflix developed “Chaos Monkey” (later made open source) as the company migrated from a monolithic architecture to the cloud, deployed on AWS.
For Netflix, this migration to hundreds of microservices brought on a high amount of complexity; therefore, engineers at Netflix were seeking a better approach to prevent sudden outages in the system.
These engineers were mainly looking for a way to disable instances and services within their architecture to ensure that the system could handle such failures with minimal impact on the user experience, allowing them to build a more resilient and reliable architecture.
The idea behind the Chaos Monkey tool was that they would unleash a “wild monkey” to break down individual components in a carefully monitored environment to make sure that a breakdown in this single component wouldn’t affect the entire system.
This, in turn, helped them locate the weaknesses in the system and build automatic recovery plans to address them and alter the system if necessary so that it could easily tolerate unplanned failures in the future.
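The core mechanism described above can be sketched in a few lines: pick one instance at random from a monitored pool and terminate it, then verify the rest of the system still serves traffic. This is a toy model for illustration, not Netflix’s actual tool:

```python
import random

def chaos_monkey(instances: set, seed=None) -> str:
    """Toy Chaos Monkey: terminate one randomly chosen instance
    from a monitored pool and return its ID."""
    rng = random.Random(seed)
    victim = rng.choice(sorted(instances))  # sorted for reproducibility
    instances.discard(victim)               # simulate termination
    return victim
```

After each termination, monitoring would check that the remaining instances absorb the load without affecting users.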
Chaos Monkey subsequently evolved, allowing Netflix engineers to target failures more precisely and test against more failure states, further enhancing the resilience of their system.
From then on, the chaos journey began for Netflix and later on for many organizations dealing with similar distributed systems.
Principles of chaos engineering
We can deduce that chaos engineering involves running experiments to understand how a distributed system behaves when faced with failure.
Unlike other forms of testing, chaos engineering involves experimentation and learning new things about a system by creating a hypothesis and attempting to prove that hypothesis. If it’s not true, this is a chance to learn something new about the system.
Testing, on the other hand, involves making an assumption about a system based on existing knowledge and determining whether it’s true or not by running tests; in other words, the test is conducted based on knowledge of specific properties about the system. The results, therefore, don’t provide new insights or information.
Chaos engineering, for its part, explores scenarios that don’t usually occur during testing, in order to gain new knowledge about the system by considering factors beyond the obvious issues normally tested for.
The following principles provide a basis on which to run such experiments on your system:
Plan an experiment
The first step involves planning an experiment, where you will need to pinpoint things that could go wrong. This will require gaining an understanding of your system’s normal behavior and determining what constitutes a normal state. Afterwards, you start off by forming a hypothesis of how you think the components of your system will behave in case something goes wrong and then create your control and experimental groups accordingly.
Defining a metric to measure at this stage is useful to gauge the level of normalcy within your system. These could include metrics such as error rates or latency.
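For instance, a steady-state error-rate metric could be computed as simply as this (a sketch; in practice these values would come from production telemetry):

```python
def error_rate(status_codes):
    """Fraction of responses with a 5xx status code: one simple
    steady-state metric for a chaos experiment."""
    if not status_codes:
        return 0.0
    errors = sum(1 for code in status_codes if 500 <= code < 600)
    return errors / len(status_codes)
```

A sustained jump in this value during an experiment signals that the injected fault is disturbing the system’s steady state.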
Design real-world events
At this stage, you outline and introduce real-world events that could disrupt your system: hardware or server failures, a sudden spike in traffic, network latency, or any other external event that could disturb the system’s steady state.
Run the experiment
After defining your system’s normal behavior and the events that could disrupt it, you can run experiments on your system, preferably in a production environment, to measure the impact of the failure and gain a better understanding of your system’s real-world behavior.
This will also allow you to prove or disprove your hypothesis. The harder it is to cause an outage in the system, the more confident you can be in your system’s resilience.
However, keep in mind that since your experiments run in production, it’s important to minimize the blast radius in case something goes wrong. This ensures that any adverse effects are kept to a minimum; if things go smoothly, you can gradually increase the radius until it reaches full scale. It’s also wise to have a rollback plan in case something does go wrong.
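The blast-radius idea can be sketched as splitting traffic into a small experimental group that receives the injected fault and a control group served normally. The handler and fault label below are hypothetical:

```python
def run_chaos_experiment(handle_request, users, blast_radius, fault):
    """Deterministically route the first `blast_radius` fraction of
    users to the experimental group (fault injected) and the rest to
    the control group (served normally); return both result lists."""
    cutoff = int(len(users) * blast_radius)
    experiment = [handle_request(u, fault) for u in users[:cutoff]]
    control = [handle_request(u, None) for u in users[cutoff:]]
    return control, experiment
```

Starting with a blast radius of, say, 0.01 and raising it step by step widens exposure gradually while the steady-state metric is compared between the two groups.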
Monitor results
The experiment should give you a clear idea of what’s working and what needs improvement by looking for differences between the control and experimental groups. Teams can then make the necessary changes, as they can identify what led to the outage or service disruption, if any.
Why we should break things on purpose: Benefits of chaos engineering
We can look at chaos engineering as a safeguard that catches worst-case scenarios before they happen and impact the user experience.
Consequently, chaos engineering has a number of benefits.
Increased reliability and resilience
As we’ve already mentioned, running such controlled chaos experiments will help determine your system’s capabilities, thereby preparing the system against unexpected failures.
Information gathered from these experiments can be used to strengthen your system and increase its resilience by locating potential weaknesses and finding ways to resolve them.
In other words, by learning what failure scenarios to prepare for, teams can improve and speed up their response to troubleshooting incidents.
Enhanced user experience
By strengthening your system, you make it less likely to experience major outages and downtime that could negatively affect the user experience. It allows you to pinpoint issues and problems before they actually become customer pain points.
This will, in turn, result in improved user experience and increased customer satisfaction as you are now releasing high performing, more resilient software.
Reduced revenue loss
By running chaos experiments, companies can prevent lengthy disruptions and outages to the system, which otherwise could lead to losses in revenue as well as high maintenance costs.
Improved confidence in the system
The insights gathered from these experiments can help teams build more resilient and robust systems.
This means that teams, by predicting the unexpected, are prepared for worst-case scenarios, which helps to increase confidence in their systems by having a recovery plan set up for such scenarios.
Nonetheless, organizations should still carefully consider the challenges of chaos engineering before implementing it as, despite its benefits, it can also be costly and time-consuming.
Unleashing chaos for better digital experiences
As we’ve seen, chaos engineering is an essential practice when it comes to creating uninterrupted, seamless digital experiences for your customers.
It’s not just breaking things for the sake of breaking things; it’s a way to gain insight on how a system behaves and to gauge its resilience. In other words, chaos engineering is not only breaking things, but it’s also about fixing weaknesses in a system to build its resilience by exposing hidden threats thereby minimizing risk.
It’s important to note that chaos engineering isn’t meant to replace the other types of testing that are carried out throughout the software development life cycle but instead to complement these tests to provide a high performing system.
Finally, chaos engineering has an important role in DevOps. At the heart of DevOps is the idea of continuous improvement, which is why integrating chaos engineering into a DevOps methodology is essential to mitigate security risks. It’s also a way for DevOps teams to deal with the rising complexity of applications nowadays.
Consequently, introducing chaos experiments into your DevOps CI/CD pipeline helps teams detect hidden issues more quickly, building confidence in the system and enabling faster deployment to end users.
Elissa Quinby explains why a frugal mindset around experimentation can actually accelerate the process and increase resourcefulness.
Elissa Quinby lives and breathes retail, with eight years under her belt at Amazon working across multiple business units and functions on the marketing and product teams, as well as prior positions at Google and American Eagle Outfitters.
Currently the Senior Director of Retail Marketing at Quantum Metric, an experience analytics company that helps brands to gain insights about their customers and make rapid, data-driven decisions, her expertise has been put to good use for the past year.
AB Tasty’s VP Marketing Marylin Montoya spoke with Elissa about ways to encourage loyalty from customers, methods for experimentation and how even the smallest piece of data can have a huge impact on tailoring the customer journey for a better overall experience.
Here are some of the key takeaways from their conversation.
Start with ONE key piece of data from your customer and use it to build brand loyalty.
As marketers, we know the value of our current customer base, given the time, effort and cost of acquiring new customers. So it’s only logical to focus on improving the user experience in order to encourage repeat shoppers.
During her time working at Amazon, Elissa adopted a mindset of frugality and learned how much of an impact can be made with only one piece of customer data. Today, she challenges retailers to ask themselves what data they already have that could revolutionize their customer experience.
With first-party data being the “secret sauce,” Elissa recommends starting small and offering value in return for their cooperation. Customers are increasingly hesitant to share their information with brands, so it’s important to offer an enticing incentive that will allow you to gather that one valuable piece of data that will improve the consumer experience.
The hardest part of gathering that vital first-party data is encouraging customers to create an account. Once a customer has a profile, trust can be built over time and more data can be gathered, but always in exchange for value. For example, you can encourage customers to sign in to shop by offering personalized filtering or search results. This creates a more efficient and enjoyable online shopping experience for your customers as a reward for their loyalty.
“There’s literally nothing that should not be experimented on.”
Experimentation should be at the core of every marketing strategy. In a process of continual improvement, the possibilities for optimizing the customer journey are endless; however, data is the only way to know for sure which modifications to pursue.
With an emphasis on speed, the idea of experimentation is to test a new solution as quickly as possible, releasing any attachment to perfection, in order to start collecting customer feedback.
Elissa explains that any new feature must be tested before it launches. Until customers offer feedback via their interactions, it remains a simple hypothesis to be proven. Not only does this save time on development, but you can gauge the user response to the experiment and make the necessary adjustments.
The experimentation process is precise, methodical and data-driven, to ensure the experiment is set up correctly for a reliable and insightful result – regardless of its success or failure.
As the majority of tests do fail, it’s important to fail fast in order to learn as quickly as possible from the customers’ reaction. Elissa explains that running tests multiple times with slight adjustments can help to pinpoint the issue, which might be as simple as where in the customer journey a prompt is showing up.
Experimentation tools can help brands optimize customer experience.
While manual methods for testing can yield results, an experimentation tool can supercharge your customer experience optimization.
An experimentation tool not only saves time, but also ensures you are getting the most out of each test. It begins with data-driven ideation for the best hypotheses, and if your test fails to meet target metrics, a tool will allow you to pivot by ensuring that you have another hypothesis at the ready, also backed by data.
Secondly, being able to pinpoint why an experiment failed, with comprehensive analysis, is key to improving your results without exhausting your resources.
Finally, an experimentation tool can offer real-time data. If your experiment isn’t tracking well, you’ll know immediately and can shut it down. Conversely, if it’s a winner, you can start working with the product team to launch the new feature. It allows innovation cycles to be sped up, with decisions based on real-time data analysis of the user journey and browsing behavior.
By optimizing the experimentation process with an intelligent analytics solution, you can improve efficiency and quickly home in on features that will bring meaningful improvement to the customer experience and therefore drive results for the company.
What else can you learn from our conversation with Elissa Quinby?
How to do more with fewer resources (both time and money)
How to stand out from competitors via a loyalty program
Why you should leverage digital during all phases of the customer journey
Why all customer insights play a vital role in improving business results
About Elissa Quinby
Elissa Quinby is an expert in retail insights, starting her career as an Assistant Buyer at American Eagle Outfitters followed by two years at Google as a Digital Marketing Strategist. She went on to spend eight years at Amazon across multiple business units and functions including marketing, program management and product.
Today, Elissa is the Senior Director of Retail Marketing at Quantum Metric, an experience analytics company that helps brands to gather customer insights which drive intelligent decision-making.
About 1,000 Experiments Club
The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.