
1,000 Experiments Club: A Conversation With Chad Sanderson of Convoy

Chad Sanderson breaks down the most successful types of experimentation based on company size and growth ambitions

For Chad Sanderson, head of product – data platform at Convoy, data and experimentation are inextricably intertwined.

At Convoy, he oversees the end-to-end data platform team — which includes data engineering, machine learning, experimentation and data pipelines — among a multitude of other teams, all in service of helping thousands of carriers ship freight more efficiently. The role has given him a broad overview of the process, from ideation and construction to execution.

As a result, Chad has had a front-row seat that most practitioners never get: the end-to-end process of experimentation, from hypothesis, data definitions, analysis and reporting to year-end financials. Naturally, he had a few thoughts to share with AB Tasty’s VP of Marketing Marylin Montoya in their conversation on the experimentation discipline and the complexities of identifying trustworthy metrics.

Introducing experimentation as a discipline

Experimentation, despite all of its accolades, is still relatively new. You’ll be hard-pressed to find great collections of literature or an academic approach (although Ronny Kohavi has penned some thoughts on the subject). Furthermore, experimentation has not been considered a data science discipline, especially when compared to areas like machine learning or data warehousing.

While a few tips are available here and there on blogs, you end up missing out on the deep technical knowledge and best practices of setting up a platform, building a metrics library and selecting the right metrics in a systematic way.

Chad sees experimentation’s accessibility as a double-edged sword. A lot of companies have yet to apply the same rigor that they do to other data science-related fields because it’s easy to start from a marketing standpoint. But as the business grows, so do the maturity and complexity of experimentation. At that point, the literature on platform creation and scaling is scant, leaving the field undervalued and making it hard to recruit the right profiles.

When small-scale experimentation is your best bet

When you’re a massive-scale company — such as Microsoft or Google, with different business units, data sources, technologies and operations — rolling out new features or changes is an incredibly risky endeavour, considering that any mistake could impact millions of users. Imagine accidentally introducing a bug into Microsoft Word or PowerPoint: the impact on the bottom line would be detrimental.

The best way for these companies to experiment is with a cautious, small-scale approach. The aim is to focus on immediate action, catching things quickly in real time and rolling them back.

On the other hand, if you’re a startup in a hyper-growth stage, your approach will vastly differ. These smaller businesses typically have to show their investors double-digit gains with every new feature rollout, meaning their efforts are focused more on proving the feature’s positive impact and the longevity of its success.

Make metrics your trustworthy allies

Every business will have very different metrics depending on what they’re looking for; it’s essential to define what you want before going down the path of experimentation and building your program.

One question you’ll need to ask yourself is: what do my decision-makers care about? What is leadership looking to achieve? This is the key to defining the right set of metrics that actually moves your business in the right direction. Chad recommends starting by distinguishing your front-end and back-end metrics: the former is readily available, the latter not so much. Client-side metrics, which he refers to as front-end metrics, measure things like revenue per transaction. All metrics then lead back to revenue, which in and of itself is not necessarily a bad thing, but it means all your decisions are based on revenue growth and less on proving the scalability or winning impact of a feature.

Chad’s advice is to start with the measurement problems you have and, from there, build out your experimentation culture, build out the system and, lastly, choose a platform.

What else can you learn from our conversation with Chad Sanderson?

  • Different experimentation needs for engineering and marketing
  • Building a culture of experimentation from top-down
  • The downside of scaling MVPs
  • Why marketers are flagbearers of experimentation

About Chad Sanderson

Chad Sanderson is an expert on digital experimentation and analysis at scale. He is a product manager, writer and public speaker, who has given lectures on topics such as advanced experimentation analysis, the statistics of digital experimentation, small-scale experimentation for small businesses and more. He previously worked as senior program manager for Microsoft’s AI platform. Prior to that, Chad worked for Subway’s experimentation team as a personalization manager.

About 1,000 Experiments Club

The 1,000 Experiments Club is an AB Tasty-produced podcast hosted by Marylin Montoya, VP of Marketing at AB Tasty. Join Marylin and the Marketing team as they sit down with the most knowledgeable experts in the world of experimentation to uncover their insights on what it takes to build and run successful experimentation programs.



Test in Production: Our Favorite Memes

Everyone hates tests. Ever since our school days, just hearing the word ‘test’ puts us on high alert and brings nothing but dread. 

It seems we cannot escape the word even in software development. And it’s not just any test but a ‘test in production’. 

Yes, it is the dreaded phrase that leaves you sweating and your heart pounding. Just reading the phrase may make you envision apocalyptic images of the inevitable disaster that could occur in its wake…

Meme - Test in production. What could go wrong.
“Test in production” they said. “What could go wrong.” they said

We, too, hate tests, but even we have to admit that testing in production is a pretty big deal now. Let us tell you why before you run away in horror…

Meme - I don't always test my code. But when I do, I do it in production.
I don’t always test my code. But when I do, I do it in production.

If it helps, think of it more as an essential part of your software development process and less as an actual ‘test’ where the only two options are pass or fail. For the sake of consistency and clarity, though, we’ll refer to it here as testing in production. And who knows? Maybe by the end of this article, it won’t be so scary anymore!

Meme - There is no TEST. PRODUCTION only there is.
There is no TEST. PRODUCTION only there is.

So here’s the low-down…

First things first, what is testing in production? Testing in production is when you test new code changes on live users rather than in a staging or testing environment.

It may sound downright terrifying when you think about it. So what? You have a brand-new feature and you’re supposed to unleash it into the wild, just like that?

Let us break it down for you with the help of our finest selection of memes about test in production…

At this point, you’re probably vehemently shaking your head. The risks are simply too high for you to consider, especially in this day and age of fickle customers who might leave you at the drop of a hat over a simple mistake.

Meme - I see you test your code in production. I too like to live dangerously.
I see you test your code in production. I too like to live dangerously.

You may have a well-established product and you cannot risk upsetting your customers, especially your most loyal customers, and damaging your well-crafted reputation by releasing a potentially buggy feature. 

Or you might even just be starting out and you simply cannot afford to make any amateur mistakes.

Meme - One does not simply test in production
One does not simply test in production!

Why, oh why, should I test in production?

We’re here to tell you that you should absolutely test in production and here’s your answer as to why:

Testing in production allows you to generate feedback from your most relevant users so that you can adjust and improve your releases accordingly. This means the end result is a high-quality product that your customers are satisfied with.

Meme - There are no finer QA testers than the clients themselves
There are no finer QA testers than the clients themselves

Additionally, when you test in production, you have the opportunity to test your ideas and even uncover new features that you had not considered before. Plus, it’s not just engineers who get to do this: your product teams can also test out their ideas, leading to increased productivity.

Meme - I'm just a project manager but sure, I'll do QA
I’m just a project manager but sure, I’ll do QA

So now you’re thinking: great, but there’s still the issue of it all leading to disaster and disgruntled customers.

But really, it’s not as terrifying as it sounds.

Meme - Stand back, we're trying this in production
Stand back, we’re trying this in production

Wrap up in a feature flag

When you use feature flags while testing in production, you can expose your new features to a certain segment of your users. That way, not everyone will see your feature and in case anything goes wrong, you can roll back the feature with a kill switch.

Meme - What if I told you, you could have both speed and safety
What if I told you, you could have both speed and safety

The result is a quick, easy and low-risk way to roll out your features, and to roll back any buggy ones and fix them before releasing them to everybody else, lessening any negative impact on your user base if issues arise.
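In code, the mechanism is simple. Here is a minimal Python sketch (the flag names and functions are hypothetical, not AB Tasty’s actual API) of a flag with a kill switch and a percentage rollout, hashing each user into a stable bucket so they see a consistent experience across visits:

```python
import hashlib

# Hypothetical flag store: each flag has a kill switch ("enabled")
# and a rollout percentage (0-100) of users who see the new feature.
FLAGS = {
    "new-checkout": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag_name: str, user_id: str) -> bool:
    flag = FLAGS.get(flag_name)
    if flag is None or not flag["enabled"]:
        # Kill switch: flipping "enabled" off instantly hides the feature.
        return False
    # Hash flag name + user ID into a deterministic bucket from 0 to 99,
    # so the same user always lands in the same bucket for this flag.
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < flag["rollout_percent"]

# Usage: gate the new code path on the flag.
# if is_enabled("new-checkout", user_id):
#     show_new_checkout()
# else:
#     show_old_checkout()
```

Raising `rollout_percent` gradually widens the exposed segment, and setting `enabled` to `False` rolls the feature back for everyone without a redeploy.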

Be the king (or queen) of your software development jungle

With feature flags, you are invincible. You are in complete control of your releases. All you need to do is wrap up your features in a feature flag and you can toggle them on and off like a light switch!

Meme - Gave that switch a flick. Switches love flicks
Gave that switch a flick. Switches love flicks

Still confused? Still feeling a bit wary? If you want to find out more about testing in production, read our blog article and let us show you why it’s very much a relevant process and a growing trend that you need to capitalize on today.

Test in Production Meme - We'll do it live
We’ll do it live

With AB Tasty’s flagging functionality, it’s easier than ever to manage testing in production. All you need to do is sit back and reap the benefits. 

Happy testing!