Everything You Need to Know About the Freemium Business Model

“Freemium is like a Samurai sword: unless you’re a master at using it, you can cut your arm off.” – Rob Walling

Scary as this may sound, you’re in the right place to learn what it takes to work the freemium business model in your favor.

Definition of a Freemium Business Model

A freemium business model is an extremely popular customer acquisition strategy among SaaS business owners. Think Dropbox, Spotify, Venngage, Trello, MailChimp, Buffer, Grammarly, etc. Without spending a penny, subscribers can experience the product and test out its basic features.

Basically, a freemium offering helps companies not only amplify their reach and popularity (the Network Effect), but also create conversion opportunities. How? By activating cognitive biases.

Let’s say you’re a steadily growing small business subscribed to a freemium plan of ‘product X.’ Owing to free-plan limitations, you agree to upgrade. But there’s more to this than meets the eye. You pay willingly because, thanks to the Endowment Effect, you want to avoid loss of any kind and to keep the product you’ve become familiar with.

That said, not every business successfully converts prospects into paying customers. There are some inherent challenges, but more on that later. For now, let’s cover the basics.

Differences Between Freemium and Free Trial

Unlike the freemium model, a free trial gives prospects complete or partial access to the product free of charge, but only for a limited time.

The idea is to get people to experience the product completely and eliminate doubts within a reasonable time frame. A classic example is Netflix. 

When to Opt for the Freemium Business Model

Freemium might seem like a good fit for your product, but only if:  

#1 You have a problem-solving product with a huge market

Phil Libin, CEO of Evernote, once said: “The easiest way to get 1 million people paying is to get 1 billion people using.”

It certainly makes sense. For your product to be adopted virally, it needs a huge market and must address freemium users’ pain points. This combination generates positive word-of-mouth marketing, resulting in engaged customers and improved conversions.

#2 Your product is easy to use

The easier it is for users to find their way around, the less intervention is needed on your part. Beyond that, users should understand what they’re getting for free as well as the advantages of upgrading. This essentially means you can spend more time and resources on other important aspects of your business.

#3 Your product isn’t too expensive

Consumers are price sensitive. To motivate them to upgrade from freemium to a paid plan, your product has to be within an affordable range, justifying the fee and the value you’re delivering. At the same time, it shouldn’t cost you a lot to support a large, non-paying user base.

What Should Your Target Conversion Rate Be? 

Generally speaking, freemium conversion rates are low, hovering between 2% and 5%. But that’s not to say that anything lower is bad. In fact, you’re doing fine as long as you’re consistently improving month over month.

Problems arise when your conversion rate is either too low or too high. Here’s why:

A low conversion rate means you’re offering way too many features for free, giving prospects no reason to convert. Conversely, a high conversion rate means your freemium offer isn’t enticing enough, which threatens future customer acquisition. So ideally, aim for a number that’s neither too low nor too high.

How to Increase Your Freemium Conversion Rate

Convinced that the freemium model is good for your business? Great. Though for it to work, understand that it doesn’t deliver on its own. Put simply, freemium subscribers don’t magically convert into paid users. Yeah, sorry, but someone had to burst your bubble.

So draft a solid plan to compel prospects to open their wallets and also passionately endorse you. 😉

Let’s see how you can position yourself better and maximize freemium conversions.

#1 Review your freemium limitations

To let users test out your product fully and get a taste of the premium plan’s exclusive features, get rid of feature-based restrictions. Instead, limit the number of users or the amount of usage.

On Slack, for example, there’s a limit placed on the number of users, messages, and app installations. Then there’s Dropbox where users are given only a certain amount of storage before they’re asked to pay to upgrade.

Slack freemium - limitations on usage and features
Slack’s a perfect example of how to make paid plans enticing.

#2 Send subtle reminders

Stop bombarding freemium users with pushy sales emails and in-app pop-ups to upgrade. This ‘money talk’ can wait, especially if they’re still new to using your product. 

Instead, go the subtle, non-aggressive route. Integrate your upgrade message intelligently into the product. Sure, it could take longer for users to seriously consider upgrading, but your subtle hints won’t go unnoticed.

Spotify does this well, allowing freemium users to skip only six songs every hour. On the seventh skip, users are nudged to slow down or go Premium.

Spotify Freemium
Spotify’s message prompt is simple yet packs a punch.

#3 Conduct thorough customer research

Products are created with users in mind. The only way yours will be noticed is if you understand who your customer is and what they want. So as step one, conduct solid customer research. Think customer interviews, email surveys, analytics, social listening, discussion forums, heatmap tools, etc. 

Finally, let your findings guide you in creating and/or improving a product that directly addresses users’ problems. Not only will users engage better, but they’ll also provide a plethora of insights on how to keep innovating, show value, and stay relevant.

For instance, if you notice a consistent spike in demand for certain features, test those to statistically determine which results in maximum conversions. 

#4 Personalize freemium users’ journey

When freemium users are left to their own devices, they don’t learn much about the product’s features. To fix this, tailor each potential customer’s journey, because each of them walks a distinct conversion path and has different reasons for signing up.

Using a marketing automation platform, contact them early and often so they learn how the product can add value to their professional lives. And when they complete a task, send them encouraging emails and cross-promote other useful features to help them get more work done. 

Also, personalize the journey of inactive users and stop them from churning. Trace their previous activity and motivate them to get back in action. 

Before you know it, you’ll have helped them build muscle memory, a new habit of using your product. Let’s say your prospect used your product to create a quiz. Congratulate them and get them to explore other offerings, such as creating a poll. 

#5 Create product/feature-focused content

Make self-learning easy. Produce a vault of content, including blog posts, videos, tutorials, and FAQs. Your efforts to educate your prospects will be appreciated and result in quick adoption of your product.

Besides, as mentioned earlier, the more they get used to your product, the stronger the Endowment Effect, and the likelier they are to become paid users.

Freemium Buffer email updates to engage users
Buffer sends product updates over email to engage users and set them on the path to feature discovery.

#6 Create a sense of urgency

Urgency triggers fear of missing out (FOMO), which makes it a powerful conversion tactic.

One of the ways to create urgency is by giving your highly engaged users an attractive, time-limited upgrade discount. Make sure you highlight what they’re going to miss if they don’t go premium. It’s bound to work since they know their way around the product and might want to keep using it.

Grammarly freemium plan - create urgency with discounts

#7 Make way for friction-free payment

This one’s a no-brainer. Get rid of every possible barrier on the payment page and make it easier for your customers to start using the product right away. It’ll create an excellent customer experience (hopefully resulting in more referrals), reduce payment abandonment rates, and increase your revenue. 

What you can do: 

  • Auto-fill data you already have in your database. 
  • Do not kill their buzz with hidden fees.
  • Give multiple/preferred payment options.
  • Recap what’s included in the paid subscription.
  • Mention when the renewal is due.
Dropbox freemium strategy - easy transition to paid plan
Dropbox’s no-nonsense payment page.

 

That said, if you notice users repeatedly stepping away from signing up, reach out to them and discuss why they’re hesitant to come on board. You’ll be surprised by the kind of customer intelligence you can uncover.

Conclusion

Hopefully, this guide has helped you get a handle on the freemium business model. 

To sum up: before you wrap your SaaS product as a freemium offering, make sure it has huge demand, is easy to use, and doesn’t burn a hole in either your or your customers’ pockets. Aside from that, implement the best practices discussed in this post to run a sustainable business.



Better Understand (And Optimize) Your Average Basket Size

When it comes to using A/B testing to improve the user experience, the end goal is increasing revenue. However, we more often hear about improving conversion rates (in other words, turning a visitor into a buyer).

If you increase the number of conversions, you’ll automatically increase the number of transactions and, with them, revenue. But this is just one method among many: another tactic is to increase the ‘average basket size’. This approach is, however, used much less often. Why? Because the associated change is rather difficult to measure.

A Measurement and Statistical Issue

When we talk about statistical tests associated with average basket size, what do we mean? Usually, we’re referring to the Mann-Whitney U test (also called the Wilcoxon rank-sum test), used in certain A/B testing software, including AB Tasty. It’s a ‘must have’ for anyone who wants to improve their conversion rates. This test gives the probability that variation B will bring in more gain than the original. However, it says nothing about the magnitude of that gain, and keep in mind that the strategies used to increase the average basket size most likely have associated costs. It’s therefore crucial to be sure that the gains outweigh the costs.

For example, if you’re using a product recommendation tool to try to increase your average basket size, it’s imperative to ensure that the associated revenue lift is higher than the cost of the tool.

Unfortunately, you’ve probably already realized that this issue is tricky and counterintuitive…

Let’s look at a concrete example. The beginner’s approach is to calculate the average basket size directly: the sum of all the basket values divided by the number of baskets. This isn’t wrong in itself, since the math makes sense, but it isn’t very precise. The real mistake is comparing two such averages and assuming the comparison is valid: it’s comparing apples and oranges. Let’s do it the right way, using real average basket data, and simulate the average basket gain.

Here’s the process:

  • Take P, a list of basket values (real data collected on an e-commerce site, not during a test).
  • Shuffle this data and split it into two groups, A and B.
  • Leave group A as is: it’s our reference group, which we’ll call the ‘original’.
  • Add 3 euros to all the values in group B, the group we’ll call the ‘variation’, on which we’ve supposedly run an optimization campaign (for example, a system of product recommendations for website visitors).
  • Run a Mann-Whitney test to check that the added gain is significant.

With this, we’re going to calculate the average values of lists A and B, and work out the difference. We might naively hope to get a value near 3 euros (equal to the gain we ‘injected’ into the variation). But the result doesn’t fit. We’ll see why below. 
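The protocol is easy to reproduce in a few lines of Python. To be clear, the basket values below are a hypothetical stand-in (a long-tailed lognormal distribution with a mean around 130 euros), not the real dataset used in this article:

```python
import random
import statistics

random.seed(42)

# Hypothetical stand-in for the basket data: a long-tailed lognormal
# with a mean around 130 euros (this is NOT the article's dataset).
baskets = [random.lognormvariate(4.34, 1.03) for _ in range(10_000)]

# Shuffle, then split into two groups of 5,000.
random.shuffle(baskets)
group_a = baskets[:5_000]                   # the 'original'
group_b = [v + 3 for v in baskets[5_000:]]  # the 'variation': +3 euros injected

measured = statistics.fmean(group_b) - statistics.fmean(group_a)
print(f"measured difference in averages: {measured:+.2f} euros "
      f"(injected gain: +3.00)")
```

Depending on the shuffle, the measured difference can land well above or below the injected 3 euros, and can even come out negative.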

How to Calculate Average Basket Size

The graph below shows the values we talked about: 10,000 average basket size values. The X (horizontal) axis represents basket size, and the Y (vertical) axis, the number of times this value was observed in the data.

The most frequent value is around 50 euros, and there’s another spike at around 100 euros; we see few values over 600 euros.

After mixing the list of amounts, we split it into two different groups (5,000 values for group A, and 5,000 for group B).

Then, we add 3 euros to each value in group B, and we redo the graph for the two groups, A (in blue) and B (in orange): 

Just from looking at the chart, we can’t see the effect of having added the 3 euros to group B: the orange and blue lines look very similar. Even when we zoom in, the difference is barely noticeable:

However, the Mann-Whitney-U test ‘sees’ this gain:

More precisely, we calculate a p-value of 0.01, which translates into a confidence level of 99%, meaning we’re very confident there’s a gain in group B relative to group A. We can now say that this gain is ‘statistically visible.’
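As a reference point, here is a simplified sketch of how such a test can be computed: the Mann-Whitney U statistic with its usual normal approximation, one-sided and without tie correction (real A/B testing software may implement more refined variants):

```python
import math

def mann_whitney_p(xs, ys):
    """One-sided Mann-Whitney U test using the normal approximation.

    Returns the p-value for the alternative hypothesis 'values in ys
    tend to be larger than values in xs'. Ties are ignored, which is
    fine for continuous basket amounts; very small samples would need
    the exact distribution of U instead.
    """
    n1, n2 = len(xs), len(ys)
    pooled = xs + ys
    order = sorted(range(n1 + n2), key=pooled.__getitem__)
    ranks = [0] * (n1 + n2)
    for rank, idx in enumerate(order, start=1):
        ranks[idx] = rank
    # U statistic for ys: rank sum of ys minus its minimum possible value.
    u = sum(ranks[n1:]) - n2 * (n2 + 1) / 2
    # Mean and standard deviation of U under the null hypothesis.
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))  # P(Z >= z)
```

For example, `mann_whitney_p([1, 2, 3], [10, 11, 12])` comes out at about 0.025, flagging a clear shift even though nothing in the calculation measures the size of that shift.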

We now just need to estimate the size of this gain (which we know has a value of 3 euros).

Unfortunately, the calculation doesn’t give the hoped-for result! The average of group A is 130 euros and 12 cents, and for version B, it’s 129 euros and 26 cents. Yes, you read that correctly: by this calculation, the average of B is smaller than the average of A, the opposite of both what we built into the protocol and what the statistical test indicates. Instead of gaining 3 euros, we appear to lose 86 cents!

So where’s the problem? And what’s real? A > B or B > A?

The Notion of Extreme Values

The fact is, B > A! How is this possible? It would appear that the distribution of basket values is subject to ‘extreme values’. We do notice on the graph that the majority of the values are below 500 euros.

But if we zoom in, we can see a sort of ‘long tail’ that shows that sometimes, just sometimes, there are values much higher than 500 euros. Now, calculating averages is very sensitive to these extreme values. A few very large basket size values can have a notable impact on the calculation of the average. 

What’s happening then? When we split up the data into groups A and B, these ‘extreme’ values weren’t evenly distributed in the two groups (neither in terms of the number of them, nor their value). This is even more likely since they’re infrequent, and they have high values (with a strong variance). 

NB: when running an A/B test, website visitors are randomly assigned into groups A and B as soon as they arrive on a site. Our situation is therefore mimicking the real-life conditions of a test. 

Can this happen often? Unfortunately, we’re going to see that yes it can. 

A/A Tests

To give a more complete answer to this question, we’d need to use a program that automates creating A/A tests, i.e. a test in which no change is made to the second group (that we usually call group B). The goal is to check the accuracy of the test procedure. Here’s the process:

  1. Mix up the initial data
  2. Split it into two even groups
  3. Calculate the average value of each group
  4. Calculate the difference of the averages

By doing this 10,000 times and by creating a graph of the differences measured, here’s what we get:

X axis: the difference measured (in euros) between the average from groups A and B. 

Y axis: the number of times this difference in size was noticed.

We see that the distribution is centered around zero, which makes sense since we didn’t insert any gain with the data from group B.  The problem here is how this curve is spread out: gaps over 3 euros are quite frequent. We could even wager a guess that it’s around 20%. What can we conclude? Based only on this difference in averages, we can observe a gain higher than 3 euros in about 20% of cases – even when groups A and B are treated the same!

Similarly, in about 20% of cases we think we see a loss of 3 euros per basket, which is also false! This is actually what happened in the previous scenario: splitting the data ‘artificially’ increased the average of group A, and the gain of 3 euros added to all the values in group B wasn’t enough to cancel this out. The result is that the increase of 3 euros per basket is ‘invisible’ when we calculate the average. If we look only at the simple difference of averages, and set our decision threshold at 1 euro, we have about an 80% chance of believing in a gain or loss that doesn’t exist!
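The A/A procedure is straightforward to reproduce. As before, the basket values are a hypothetical long-tailed stand-in, not the article’s dataset, and we use 200 repetitions rather than 10,000 to keep the sketch fast:

```python
import random

random.seed(0)

# Hypothetical stand-in for the basket data: a long-tailed lognormal
# with a mean around 130 euros (this is NOT the article's dataset).
baskets = [random.lognormvariate(4.34, 1.03) for _ in range(10_000)]

trials = 200  # the article uses 10,000 repetitions; 200 keeps this quick
big_gaps = 0
for _ in range(trials):
    random.shuffle(baskets)                    # 1. mix up the initial data
    a, b = baskets[:5_000], baskets[5_000:]    # 2. split into two even groups
    gap = sum(a) / len(a) - sum(b) / len(b)    # 3-4. difference of the averages
    if abs(gap) > 3:
        big_gaps += 1

print(f"|difference| > 3 euros in {100 * big_gaps / trials:.0f}% of A/A splits")
```

With a tail this heavy, a sizeable share of pure A/A splits shows a spurious gap of more than 3 euros in one direction or the other, even though no gain was injected anywhere.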

Why Not Remove These ‘Extreme’ Values?

If these ‘extreme’ values are problematic, we might be tempted to simply delete them and solve our problem. To do this, we’d need to formally define what we call an extreme value. A classic way of doing this is to use the hypothesis that the data follow ‘Gaussian distribution’. In this scenario, we would consider ‘extreme’ any data that differ from the average by more than three times the standard deviation. With our dataset, this threshold comes out to about 600 euros, which would seem to make sense to cancel out the long tail. However, the result is disappointing. If we apply the A/A testing process to this ‘filtered’ data, we see the following result: 

The distribution of the differences in averages is just as spread out; the curve has barely changed.

If we now run an A/B test on this filtered data (still with an increase of 3 euros for version B), here’s what we get (see the graph below): the difference shows up as negative (the complete opposite of reality) in about 17% of cases, even with the extreme values removed. And in about 18% of cases, we’d be led to believe that the gain of group B is greater than 6 euros, twice the real value!
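The filtering rule itself is simple to state in code. Using the same hypothetical long-tailed stand-in data as before (not the article’s dataset), the three-standard-deviations threshold lands at a few hundred euros:

```python
import random
import statistics

random.seed(1)

# Hypothetical stand-in for the basket data: a long-tailed lognormal
# with a mean around 130 euros (this is NOT the article's dataset).
baskets = [random.lognormvariate(4.34, 1.03) for _ in range(10_000)]

# Classic outlier rule under a Gaussian assumption: treat as 'extreme'
# any value further than three standard deviations from the mean.
mean = statistics.fmean(baskets)
stdev = statistics.stdev(baskets)
threshold = mean + 3 * stdev
filtered = [v for v in baskets if v <= threshold]

removed = len(baskets) - len(filtered)
print(f"threshold: {threshold:.0f} euros, extreme values removed: {removed}")
```

Re-running the A/A procedure on `filtered` instead of the raw values then shows how much (or how little) the spread of the differences in averages actually improves.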

Why Doesn’t This Work?

The reason this doesn’t work is that the basket values don’t follow a Gaussian distribution.

Here’s a visual representation of the approximation mistake that happens:

The X (horizontal) axis shows basket values, and the Y (vertical) axis shows the number of times this value was observed in this data. 

The blue line represents the actual basket values, the orange line shows the Gaussian model. We can clearly see that the model is quite poor: the orange curve doesn’t align with the blue one. This is why simply removing the extreme values doesn’t solve the problem. 

Even if we first applied some transformation to make the data more ‘Gaussian’ (this would mean taking the log of the basket values), significantly improving the fit between the model and the data, it wouldn’t entirely solve the problem: the variance of the difference in averages remains just as great.

During an A/B test, estimating the size of the gain is very important if you want to make the right decision. This is especially true if the winning variation has associated costs. It remains difficult today to accurately measure a change in average basket size. The choice comes down solely to your confidence index, which only indicates the existence of a gain (but not its size). This is certainly not ideal, but in scenarios where the conversion rate and average basket size move in the same direction, the gain (or loss) will be obvious. It becomes difficult, or even impossible, to make a relevant decision when they move in opposite directions.

This is why A/B testing is focused mainly on ergonomic or aesthetic tests on websites, with less of an impact on the average basket size, but more of an impact on conversions. This is why we mainly talk about ‘conversion rate optimization’ (CRO) and not ‘business optimization’. Any experiment that affects both conversion and average basket size will be very difficult to analyze. This is where it makes complete sense to involve a technical conversion optimization specialist: to help you put in place specific tracking methods aligned with your upsell tool.

To understand everything about A/B testing, check out our article: The Problem is Choice.