While not all blogging has the explicit goal of driving sales, most content marketing programs are set up with the purpose of driving ROI.
However, in my experience, I’ve seen that programs mostly focus on loose leading indicators of results: things like organic traffic, keyword rankings, social shares, and engagement.
My argument is that you can and should measure and optimize for business results with content marketing as well.
Conversion rate optimization is simply the process by which you can increase the rate at which blog traffic converts to subscribers, leads, customers, or users (whatever your business metric is).
But, conversion rate optimization (CRO) is sometimes misunderstood, billed as a series of hacks and tactics when that’s not really the case. It’s a process.
This article will walk through my typical CRO process with an emphasis on blog CRO and how to convert visitors on informational pages into users or paying customers.
The CRO Process: Four Steps to Getting Started
Usually, the conversion optimization process consists of the following components:
- Research and discovery
- Hypotheses and prioritization
- Experimentation
- Analysis and repeat
Of course, different companies and individuals follow different processes, but most of them are made up in some way of the above components.
The idea is that we want to uncover opportunity areas, prioritize them, and systematize them in a scientific and objective way, then run experiments to see if we can improve the conversion rate.
As I’ve mentioned before, it’s not so much about the tactics (e.g. “use green for button colors”) but the methodology. Going through a robust process will unearth various insights about your users that will be valuable beyond just A/B testing.
Step One: Research & Discovery
In the research and discovery phase, you’re trying to find opportunity areas for improvement. These could be user experience bottlenecks or broken browser versions—anything that inhibits a visitor from converting.
Issues can range from something as simple as horrible copy on your exit-intent popup to something as complex as the algorithmic logic used to suggest which articles visitors should read next (and how those suggestions are displayed).
The first thing I do in the research phase is to audit the existing site and find opportunity pages to optimize. You can do this easily in Google Analytics by going to Behavior > Site Content > Landing Pages and using the comparison feature on the right-hand side of the screen.
Ideally, you have a view specifically set up for your blog or at least a URL structure that allows easy filtering (which would be the case with either a subdomain, blog.site.com, or a subfolder, site.com/blog/blog-post-title).
You can then choose the metric that you want to compare. Ideally, this is something transactional like e-commerce conversion rate or goal conversion rate. But you can also optimize blog posts with low engagement metrics, such as bounce rate or average session duration.
From there, I filter for pages above a certain traffic threshold (you need traffic to test and to show real business impact with UX improvements) and then pull a list of high traffic but underperforming posts to a spreadsheet or document. These are your focus areas.
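As a sketch, that filtering step could look like this in pandas. The page names, numbers, and thresholds below are made up for illustration; in practice you'd load an export of the GA Landing Pages report.

```python
import pandas as pd

# Hypothetical export from the GA Landing Pages report;
# the page names, numbers, and thresholds are invented.
data = pd.DataFrame({
    "landing_page": ["/blog/a", "/blog/b", "/blog/c", "/blog/d"],
    "sessions": [12000, 900, 8000, 15000],
    "goal_conversion_rate": [0.004, 0.05, 0.021, 0.006],
})

TRAFFIC_THRESHOLD = 5000   # enough traffic to test and show real impact
BENCHMARK_CVR = 0.015      # e.g. the blog-wide median conversion rate

# High traffic but converting below the benchmark = your focus areas
focus = data[(data["sessions"] >= TRAFFIC_THRESHOLD)
             & (data["goal_conversion_rate"] < BENCHMARK_CVR)]
print(focus["landing_page"].tolist())
```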
Once you have that list, you can go in and try to identify why users aren’t converting.
Since these are blog posts, and they probably have a myriad of traffic sources, I’d start the analysis there.
Try to answer the questions, “how are people finding this page?” and “what is their pre-click experience like?” Knowing this will help you home in on the user intent (i.e. what value they hope to get out of the post).
You can find this information easily in Google Analytics. Stay in the same “Landing Pages” report, click into the URL that you want to analyze, and add a secondary dimension, “Default Channel Grouping.”
You can then see the various sources by which people are coming to the page and their associated metrics.
In most of my blog optimization experience, organic is the primary traffic driver, at least to high-traffic posts. If that’s the case, you can use an SEO tool like Ahrefs to see which keywords people are searching to find that page.
For instance, we can see that my blog post about A/B testing has tons of informational search queries such as “A/B testing methodology” and “A/B testing framework.”
These don’t seem to be beginner search queries: they imply someone already knows the fundamentals of A/B Testing and is now looking for a structure to put it into practice, and perhaps looking to set up a program at their company.
If I had previously been trying to convert visitors to an A/B testing software with that post (hypothetically), then this data may indicate that I should change my strategy to try and convert visitors to my email list via an ebook or email course that contains a complete testing methodology. The user intent seems largely informational.
In contrast, take HubSpot’s blog post on best help desk software. We can see here that most of the queries are very commercial (look at the CPC estimates!).
In this case, we may want to get more aggressive with our CTAs and push our help desk software since it looks like visitors actually are looking for a help desk solution. We currently push an offer for a collection of templates, so that could be space for optimization.
The conversion research process should also include on-page factors (as well as the off-page, pre-click stuff like how they’re reaching your site).
To find usability bottlenecks, there are a variety of research methods available. This part resembles the typical CRO research process much more. I use various methods depending on the perceived problem I’m trying to solve, some of which include:
- Digital analytics analysis (e.g. Google Analytics)
- User testing
- Session replays
- Technical analysis
- Heuristic analysis/cognitive walkthrough
- Heat maps and click maps
A great, robust process for the research and discovery phase is the ResearchXL model, though there are plenty of models you can follow on this step.
Step Two: Hypotheses and Prioritization
The second component, hypotheses and prioritization, is the most important and most neglected (in my opinion).
We all have a limited amount of time and resources to pour into activities (even Amazon and Google have some limitations). Therefore, it’s imperative that we don’t waste time A/B testing things that don’t move the needle.
To combat this, we can follow two guidelines:
- Come up with testable hypotheses (follow this guide on how to do so).
- Ruthlessly prioritize ideas based on expected impact and effort.
Basically, how much impact can we expect in exchange for the effort and resources needed to get the test up and running?
A high-impact, low-effort test is, of course, the best option. This is commonly referred to as “low-hanging fruit.”
Unfortunately, low-hanging fruit gets picked off quickly, and eventually, prioritization becomes much more difficult (and therefore more important).
Prioritization is important whether you’re working solo or on a team with multiple people submitting test ideas. The thing is, you want your CRO process to be predictable and repeatable. If you leave the prioritization process open to gut feelings, you’ll get less predictable results. In addition, it will be harder to improve upon your prioritization process in the future.
Eventually, you’ll come down to a choice. Should you run an A/B test on your email sign up form, or on your top-performing blog post headlines? When it comes to that decision (or any other), you’ll want an objective way to choose.
All prioritization frameworks have some flaws. None are perfect, but all are helpful. My advice? Pick one, learn it, and use it. Don’t waste too much time thinking about it, just get started.
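To make this concrete, here’s a minimal sketch of ICE-style scoring (Impact, Confidence, Ease), one common prioritization framework. The ideas and scores below are invented for illustration:

```python
# ICE-style scoring (Impact, Confidence, Ease, each rated 1-10).
# The ideas and scores below are made up for illustration.
ideas = [
    {"idea": "New popup offer",        "impact": 8, "confidence": 6, "ease": 9},
    {"idea": "Rewrite post headlines", "impact": 5, "confidence": 4, "ease": 7},
    {"idea": "Redesign signup form",   "impact": 7, "confidence": 5, "ease": 3},
]

# Average the three ratings into a single score
for i in ideas:
    i["score"] = (i["impact"] + i["confidence"] + i["ease"]) / 3

# Highest score first: that's the order you run tests in
queue = sorted(ideas, key=lambda i: i["score"], reverse=True)
for i in queue:
    print(f"{i['idea']}: {i['score']:.1f}")
```

The exact formula matters less than applying it consistently, so the decision is objective and repeatable rather than a gut call.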
Step Three: Experimentation
Finally, we have experimentation. This is the stage that everyone associates with CRO: A/B testing. Pitting one variant against another and using statistics to infer which is the true winner.
As a data nerd, it’s my favorite part of the process, but that doesn’t mean it’s the most important. What you choose to run experiments on and how you prioritize your tests is just as important as, if not more important than, how you run your experiments.
That’s not to say this step is trivial. It’s not. Testing is a process that requires statistical knowledge in order to get it right. You don’t necessarily need a data scientist over your shoulder, but it wouldn’t hurt to have some guidance on your first few experiments.
In any case, there are a few broad guidelines to follow and a few common mistakes to avoid with testing (specifically A/B testing):
- Calculate your sample size, statistical power, etc. in advance of the test. Use a calculator like this one.
- Run the test to completion (run it to the day you planned on ending it).
- Run the test for full weeks and business cycles (usually 2-4 weeks).
- Look at statistical significance, confidence intervals, and the overall data trend when analyzing the test.
- Use common sense. If your result is surprising (200% lift?) it is probably wrong (Twyman’s Law).
Another question that will probably come up is: what tools can you use to run A/B tests? There are several solutions on the market, some more suitable for bloggers than others. AB Tasty is a solid solution whether you want to run website, blog, or product experiments, and also if you want to do any web personalization.
If you’re experimenting on your popups or on your email list, it’s almost certain that your tool of choice will have some sort of A/B testing feature built into the product. That’s certainly true of something like HubSpot or Mailchimp.
In any case, just make sure you can:
- Set up experiments properly (do you have the requisite traffic?)
- Randomize and deploy experiences (with the help of a testing tool)
- Analyze the experiments properly (are you logging data correctly and can you access it easily?)
In my experience, a lot of blog testing will be on things like forms and popups, simply because that’s where a lot of conversion actions happen. In this regard, blog testing is very similar to any other type of e-commerce or SaaS CRO.
Going back to my A/B testing blog post example, perhaps I’d want to test out a different offer in my popup. This would be an easy change, at least visually, to create.
Or perhaps I could try changing the targeting or where the popup appears on the page. Or I could remove a form field, or add a step to the form. Tons of stuff you can do pretty easily when it comes to testing out blog offers.
If you don’t have enough traffic or conversions to test (a common problem with blog CRO), you can do two things, in order:
- Validate using qualitative research
- Roll it out and watch the time-series data
You can validate copy changes through something like Five Second Test and you can validate usability changes through user testing, session replays, or polls.
Additionally, keep an eye on the data before and after you roll out the changes. If the change is big enough, you can see the bump in the data over time. The numbers certainly shouldn’t go down. You can also try out a Bayesian time series model like GA Effect to see if your changes produced significant results, given other implicit trends like seasonality.
This isn’t the most scientific way to do things, but some data is probably better than just guessing.
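A minimal sketch of that before/after comparison, using made-up weekly (sessions, conversions) counts around a hypothetical rollout date:

```python
# Invented weekly (sessions, conversions) counts around a hypothetical rollout
weeks_before = [(9800, 180), (10100, 195), (9900, 188), (10300, 190)]
weeks_after  = [(10000, 240), (9700, 232), (10200, 251), (9900, 238)]

def rate(weeks):
    """Pooled conversion rate across a set of weeks."""
    sessions = sum(s for s, _ in weeks)
    conversions = sum(c for _, c in weeks)
    return conversions / sessions

print(f"before: {rate(weeks_before):.2%}, after: {rate(weeks_after):.2%}")
```

Pooling several full weeks on each side smooths out day-of-week noise, though it still can’t separate your change from seasonality the way a model like GA Effect attempts to.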
Step Four: Analysis and Repeat
I briefly covered A/B testing analysis in the last step (e.g. how long you should run a test), but obviously, analysis is a nuanced trade. It often helps to have a specialist help you out here.
If you don’t have a trained analyst helping you out, at the very least, I think you should cover the groundwork when it comes to analysis. There are several great resources out there that don’t take a ton of time to consume. Here are a few articles:
- Statistical Analysis and A/B Testing
- Data science you need to know! A/B testing
- Guidelines for A/B Testing
One important note: before you ever run the test, you should decide upfront what action you will take if the test wins, loses or is inconclusive. That way, you mitigate the effects of confirmation bias and cherry-picking. You know what they say, “If you torture the data long enough, it will confess.”
It’s as simple as writing a few sentences:
“If we achieve a statistically significant winning result (estimated lift of 20%+), we’ll roll out the new variant to 100% of traffic. If we get a conclusive loser, we will try a new iteration on our hypothesis. If the test is inconclusive, we will move onto the next idea on the list.”
That way, you limit your ability to sway decisions with your emotions.
Analyzing the test could be quite simple. You may just need to plug in the numbers to an Excel sheet, or even an online calculator, like what Evan Miller provides.
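The numbers you’d plug into such a calculator amount to a two-proportion z-test, which you can sketch in a few lines of Python (the conversion and visitor counts below are invented):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score and two-sided p-value for a difference in conversion rates
    (normal approximation with a pooled standard error)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical result: 200/10,000 control vs. 260/10,000 variant
z, p = two_proportion_z(conv_a=200, n_a=10000, conv_b=260, n_b=10000)
print(f"z = {z:.2f}, p = {p:.4f}")
```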
If you’re more handy in Excel or Google Analytics, your analysis might go deeper. You may start to fish around for effects on smaller segments of your traffic. For instance, maybe mobile visitors reacted differently than desktop visitors, and that could inform your hypothesis for a future test.
However, when you’re first starting out with testing, opt for simplicity. The rabbit hole runs deep. Analysts spend their whole careers learning this stuff, so it’s hubris to think you can do the same with little to no training. You can definitely do the basic analysis, though.
Regardless of the result of your test, it’s likely that you’ve learned something from the experiment. Now, you can incorporate that insight into your research and discovery process and start the process all over again.
It’s very likely that experiments will lead to more ideas which will lead to more experiments. CRO is never done. The more you do it, the more insights you’ll gain to feed into more CRO work. And once you get into the swing of things, that’s when the returns really start to happen.
It’s Not All About Testing: It’s Learning About Your Visitors
While testing is at the forefront of CRO, it’s not the be-all and end-all. In fact, many blogs don’t actually have enough traffic to run A/B tests (and most don’t have enough traffic to make it worth the heavy time investment).
Even without A/B testing, you can do CRO. How? Focus on the qualitative. You can still run many of the research and discovery methods I listed above, such as:
- Heuristic analysis
- Mouse tracking
- Session replays
- User tests
- Usability tests
- Customer surveys and interviews
- Technical audits (site speed, etc.)
It’s about better understanding your visitors and their behavior (what do they engage with, ignore, etc.) and providing a better experience for them.
Conversion optimization is one of my favorite parts of online marketing. It enables people to switch from opinions to data-backed conclusions and shows quantitatively what’s working and what isn’t.
Bloggers and content marketers can make the mistake of neglecting CRO to focus solely on traffic acquisition, but a CRO process helps you not only learn more about readers but convert more of them into subscribers or customers.
It’s an incredibly high-leverage way to drive more business results.