Testing in production…
It’s a simple concept…
But it’s also one of the touchiest topics out there, and it splits the development world into a pair of camps— those who say you should always test in production, and those who say you never should.
In this piece, we’ll explore the two different camps, we’ll share which camp we fall into (and why), and we’ll offer a practical perspective on testing in production that cuts through the noise.
First Things First: What Is Testing in Production?
To keep it short and simple, testing in production is the practice of running different tests on your product when it’s in a live environment.
Instead of running these tests while your product is still in development, or safely hidden in a staging environment, you will instead run these tests when your product is out in the real world, in the hands of real users.
Why Do People Test in Production?
When done properly, testing in production gives you some great benefits that you just can’t get through any other method.
With testing in production, you can:
- Collect real-world data you can never generate during development.
- Validate that your product is actually delivering what your customers want.
- Learn new features your customers want that you may never have thought of.
- See if your product operates reliably in the less-predictable real world.
- Support a larger strategy of incremental—or even continuous—release.
These are big benefits, and they are enough to create a first camp of developers and product managers who say “Yes, always!” to the practice.
But there’s also a second camp of developers and product managers who say “No, never!” to testing in production. They admit all of the great benefits that testing in production can deliver. But they simply feel the practice carries too much potential downside, and that its benefits just aren’t worth taking on the risks the practice can bring.
Which Camp Do We Fall Into?
It shouldn’t come as much of a surprise— we’re in the first camp.
We believe testing in production is a cornerstone practice for anyone in the development world. And we believe it is particularly important for Product Managers, as it gives them a powerful method to generate the real-world feedback and performance data they need to make sure they are always building a viable pipeline of products.
But even though we advocate for testing in production, we still want to give the “No, never!” camp its due. By hearing its members out, we can learn the biggest potential pitfalls of testing in production. And once we know those issues, we can map out ways to mitigate the practice’s downside and capture its benefits with far less risk.
What Are the Big Risks of Testing in Production?
To be blunt: A lot of things can go wrong when you test in production.
- You can deploy bad code…
- You can leak sensitive data…
- You can overburden your infrastructure…
- You can mess up your tracking & analytics…
- You can release a poorly designed product or feature…
The list goes on. Anything that can go wrong could go wrong.
Worst of all— if something does go wrong when you are testing in production, your mistake will have real-world consequences. Your product might crash at a critical moment of real-world usage. You might collect inaccurate KPIs and create issues with your business stakeholders. Your poorly designed product or feature might result in multiple paying customers leaving your product for a competitor’s.
The camp of people who say “No, never!” to testing in production is correct to consider the practice highly risky, and we understand why its members stay away from it.
And yet, while we acknowledge these concerns, when it comes down to it we still advocate you test in production.
Why Should You Still Test in Production?
First, because testing in production does not prevent you from testing before production as well.
Most of the technical problems that can occur during a poor production test can be avoided if you run a full gamut of technical tests during development and QA to ensure your new products and features are stable and capable of handling a high volume of real-world usage.
And if you have been following a strategy of iterative development, then the chances of releasing a misaligned product or feature—from your users’ perspective—are very low. You won’t hit home runs every time, but you also are far less likely to strike out.
Second, because a lot of what might go wrong in a production environment can’t be properly tested in a development or staging environment anyway.
There are certain technical issues that will never show up until you put your product or feature in front of real-world users. In fact, some issues will never show up until you put your product or feature in front of a lot of real-world users. Scale matters, and so do the unpredictable usage patterns only real-world users can bring.
And when it comes to new products and features— even the most robust program of iterative development based on your real-world users’ feedback can still only produce a guess about what your users want next. Some guesses are more educated than others, but ultimately you will never know if you interpreted your users’ feedback correctly until you release your next round of capabilities into the wild and see how they are received.
Third, and most important… because you already are testing in production, even if you didn’t know it!
Many of Agile development and product management’s best practices are forms of testing in production. We’re talking about very common practices like:
- A/B Testing
- Phased Rollouts
- Canary Releases
- User Monitoring
- Usability Testing
- Smoke & Sanity Testing
If you are following any of these practices—and many more like them—then you are already running tests with real-world users in a live production environment. You are already testing in production, whether you call it that or not, even if you thought you were in the “No, never!” camp this whole time.
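To make the phased rollout idea from the list above concrete, here is a minimal sketch of deterministic percentage bucketing. The function name, hashing scheme, and flag names are our own illustrative choices, not the API of any particular tool:

```python
import hashlib

def in_rollout(user_id: str, feature: str, percentage: float) -> bool:
    """Deterministically bucket a user into a phased rollout.

    Hashing the (feature, user) pair means the same user always gets
    the same answer for a given feature, so their experience stays
    consistent as the rollout percentage widens.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash into [0, 1]
    return bucket < percentage / 100

# Expose a hypothetical "new-checkout" feature to 5% of users first...
early = in_rollout("user-42", "new-checkout", 5)
# ...then widen to 50% without reshuffling who already saw it.
wider = in_rollout("user-42", "new-checkout", 50)
```

Because the bucketing is a pure function of the user and feature IDs, widening the percentage only ever adds users to the rollout; nobody who already had the feature loses it.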
A Practical Perspective on Testing in Production
Let’s face facts: Testing in production is now an essential element of product management. Just imagine trying to do your job without A/B testing. Or without phased rollouts. Or without the ability to collect any sort of immediate real-world feedback or technical usage data on your new products and features. You can’t. You need to test in production. You might as well embrace it.
And that is why we remain staunchly in the “Yes, always!” camp, despite recognizing the risks involved in the practice. If testing in production is inevitable these days, then you should spend less time debating its pros and cons, and more time finding the most effective and responsible way to follow the practice.
We believe in this perspective on testing in production so strongly that we’ve built an entire product suite around helping product developers gain all of the benefits of the practice while minimizing their risks. Our product suite is called Flagship, and its core function is the feature flag.
By thoughtfully deploying feature flags, you can safely perform all of the testing in production that you need. With feature flags—combined with the rest of Flagship—you can:
- Deploy smaller releases that minimize the impact of failure.
- Only test your new features on your most loyal and understanding users.
- Personalize those users’ experience so they know to expect a few hiccups with the release.
- Immediately toggle off underperforming features with a single click.
With feature flags and a little planning, you can dramatically reduce the risk and increase the sophistication of the testing in production you are already performing. And that means more real-world user data, more reliable products & features, and less worry about seeing how your hard work performs outside of the safe confines of development and staging environments.
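To show the core mechanic, here is a generic in-memory feature-flag sketch. This is not Flagship’s actual API; the flag store, function names, and feature names are invented for illustration:

```python
# A minimal in-memory flag store. Real systems back this with a
# remote config service so flags can change without a redeploy.
FLAGS = {"new-search": True, "beta-dashboard": False}

def is_enabled(flag: str) -> bool:
    # Unknown flags default to off, so forgetting to register a
    # flag fails safe rather than exposing unfinished code.
    return FLAGS.get(flag, False)

def search(query: str) -> str:
    # The flag gates which code path real users actually hit.
    if is_enabled("new-search"):
        return f"new engine results for {query!r}"
    return f"legacy results for {query!r}"

# If the new engine underperforms, flip the flag off immediately --
# the kill switch needs no rollback and no redeploy.
FLAGS["new-search"] = False
```

The point of the pattern is that the risky code ships dark: it is present in production but reachable only when the flag allows it, which is what makes smaller releases and one-click rollbacks possible.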
If you’d like to learn more about Flagship, feature flags, or just how you can perform better testing in production, reach out today.
This article was first published on flagship.io