Where email testing goes wrong, and how to switch to objective-led testing for long-term gain
Testing is an essential aspect of email marketing because it can deliver meaningful results you can use to operate a more effective email programme.
You’ll learn what prompts your subscribers and customers to act, which can either help you increase sales or reduce the money you would spend on less-effective strategies and tactics.
Many email platforms offer you the ability to do A/B testing; that is, to test two versions of a variable to find out which one generates more of the actions you want. You might want to find out which subject line produces more opens, or which call to action generates more clicks through to your landing page.
Those tests can give you fast, helpful information that can help you boost results from an individual campaign. But that’s where too many marketers stop.
Email testing works best – that is, it delivers the most useful and accurate results – when you raise your sights beyond simple campaign-level testing and measure, instead, how well your campaigns achieved the objectives you laid out for them at the beginning.
Where testing goes wrong
I see three basic problems in the way email marketers use testing today:
- They do mostly “ad hoc” testing: one-off or occasional testing of individual factors, like subject lines or calls to action, which apply only to that campaign.
- Many email marketers don’t use hypotheses, which allow you to build on previous testing results and then use the findings to guide future research and general programme improvement.
- They use the wrong metrics to measure success. The easiest metrics to track are the actions your subscribers take on your messages, such as opens, clicks, unsubscribes, spam complaints, time spent on the message, and whether they moved it out of the inbox or rescued it from the spam folder.
As you’ll find out soon, tracking a campaign’s overall success via conversions or sales is trickier, in part because many email platforms aren’t set up to capture that data. In most cases, though, those metrics will be the true measure of your success.
Your success metric must map back to your objective
If, say, you’re a publisher and sell ad space based on open rates, testing subject lines to see which one produces a higher open rate in your test audience would be appropriate.
However, most email programmes are built around objectives that have more complex foundations for success, such as increasing revenue or reducing customer churn. Those, for the most part, are what you get paid to do and, thus, how you measure the ultimate success of your email programme.
Do your company’s fortunes rise or fall on your email open rates? The click rate? The unsubscribe rate? I’d venture to say that your chief financial officer would say no!
Not convinced? Try the litmus test. Using six months of data, pull out your top 10 campaigns based on opens, then do the same for the top 10 campaigns based on clicks and the top 10 campaigns based on conversions, however you define them.
The results should be clear: You will see that they are not the same 10 campaigns. By optimising for these top-of-funnel metrics, you could well be optimising for the wrong results.
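The litmus test above is easy to run yourself. Here is a minimal sketch in Python, with hypothetical campaign data (the names and figures are invented for illustration), that ranks the same campaigns by different metrics and checks the overlap:

```python
# A minimal sketch of the "litmus test": rank the same campaigns by
# different metrics and compare the top performers. All data is hypothetical.

def top_n(campaigns, metric, n=3):
    """Return the names of the top-n campaigns ranked by the given metric."""
    ranked = sorted(campaigns, key=lambda c: c[metric], reverse=True)
    return [c["name"] for c in ranked[:n]]

campaigns = [
    {"name": "Spring sale",   "opens": 4200, "clicks": 310, "conversions": 18},
    {"name": "New arrivals",  "opens": 5100, "clicks": 280, "conversions": 35},
    {"name": "Loyalty offer", "opens": 3900, "clicks": 450, "conversions": 61},
    {"name": "Clearance",     "opens": 4800, "clicks": 390, "conversions": 22},
]

by_opens = top_n(campaigns, "opens")
by_conversions = top_n(campaigns, "conversions")

# The rankings rarely line up: optimising for opens is not optimising for sales.
print("Top by opens:      ", by_opens)
print("Top by conversions:", by_conversions)
print("Overlap:           ", set(by_opens) & set(by_conversions))
```

Run the same ranking against your own export, with n=10 over six months of campaigns, and compare the lists.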
Mailchimp’s testing optimisation research discovered that clicks and opens do not equal revenue.
Tim Watson’s research into 50 million retail emails from 196 campaigns found that open rates wrongly predicted success 53% of the time.
Tim notes that while the open rate varied only about 12% across campaigns, conversion rates varied up to 70%.
These metrics do have their place in your email knowledge base when you plot them over time to look for trends. For example: Are open rates falling while spam complaints and unsubs are rising? Time to work on your value proposition, sharpen up your email content, or rethink your email purposes.
When you rely on a single metric that isn’t tied into your campaign or overall objective you could wind up optimising your emails for the wrong results. A high open rate could disguise the fact that the campaign didn’t meet its sales or conversion objectives, perhaps because the offer didn’t live up to the promise of an irresistible subject line.
Successful testing takes more than guesswork
Marketing is both an art and a science. It’s easy to deal with the arty side of marketing – the creative and intuitive aspects of building campaigns – while the scientific aspect often gets overlooked.
This is where marketing in general, and email marketing in particular, begins to take on the aspects of the science experiments you did back in your school days. You didn’t just throw a bunch of things together to see what would happen. (If you did, you probably blew a hole through the classroom ceiling! So, the less said about that, the better.)
Instead, you used what you had already learned in class to create a hypothesis about what might happen when you combine elements together, add heat or perform other operations.
You observed the results, wrote them down in a lab notebook and then analysed them. If others got the same results following your hypothesis and methodology, it bolstered the accuracy of your methods and solidified your findings.
That, in a nutshell, is the basis of scientific enquiry, and it works in your email marketing programme just the way it worked in your chemistry classes. The problem, though, is that many email marketers don’t put the time, effort and thought into designing a useful testing programme that can generate long-term results that measure your progress to your objective.
Holistic testing creates a broader picture of success
Useful testing doesn’t happen in a vacuum. That’s the reality behind the practice of holistic testing. It moves beyond single-channel and basic A/B testing to deliver a more powerful set of customer insights that continually inform, improve and direct your entire marketing programme.
Although my purpose this time is to help you develop a more useful and accurate testing programme, the benefit of using holistic testing is that you end up with results that you can share across all of your marketing channels.
After all, your email database is made up of your customers and prospects – right? Sharing what you learn from them across channels gives your customers a more holistic experience.
Statistical confidence is essential
Right now you’re asking how you can gain statistical confidence when testing an email campaign if you don’t use the open rate. Right? If not, you should be, because this problem is unique to email testing.
So, let’s look at how conversion rate optimisation professionals would do it.
They would set up the test using a hypothesis and select the success metric to be meaningful to their objective. It could be a sale, form submission, registration, download, or whatever is related to the goal. Then they would run the test until they reached statistical significance. We use this same approach when testing our automated email programmes such as welcome programmes.
So why do we treat our business-as-usual emails differently? We email marketers are campaign oriented. It’s often hard to think beyond the campaign. But we can apply the same scientific process we use for testing a website to testing our BAU emails.
By using a hypothesis, you can now test the same hypothesis across multiple campaigns. That will increase your sample size and give you statistical significance via your conversions.
Use our Statistical Significance Calculator
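The CRO approach described above boils down to a standard two-proportion z-test on conversions. Here is a sketch in pure Python; the conversion counts are hypothetical, standing in for results pooled across several campaigns that all tested the same hypothesis:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test comparing conversion rates of variants A and B.
    Returns (z statistic, p-value) using the pooled-proportion formula."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical totals pooled across campaigns testing the same hypothesis
# (e.g. emotional vs directive subject lines), measured on conversions:
z, p = two_proportion_z_test(conv_a=120, n_a=10000, conv_b=160, n_b=10000)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Significant at the 95% confidence level")
```

The point is that the success metric here is conversions, not opens, and the test runs until the pooled sample reaches significance rather than stopping at the end of one send.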
What can holistic testing do for your email programme in particular? Here’s what I have learned over the years:
1. Results are more reliable because you use scientific methodology to set up the tests and analyse results
The hypothesis is the heart of holistic testing because it clarifies what you want to learn from your test. It guides your selection of (potentially multiple) factors to test. And, it structures your test to line up with your programme objectives.
Hypothesis-led testing also improves the validity of your results by making them less vulnerable to chance, error or unknown factors.
2. You get long-term gains, not just short-term wins
Holistic testing uses regular, systematic testing and builds upon previous insights. It delivers not only immediate, short-term uplifts but also valuable insights into your audience and helps you understand what works best for your email programme.
These long-term gains give you a solid foundation for consistent performance and incremental innovation.
Holistic testing doesn’t rely on top-of-funnel metrics, such as open rates, to measure success. Instead, it factors many elements into its insights, giving you more reliable information to use when measuring the overall success of your email programme.
3. Testing across campaigns gives you a larger and more reliable sample size
Relying on campaign-level metrics means your learnings are restricted to that one campaign. Using a hypothesis across multiple campaigns gives you a larger sample size. Thus, you can aim for a metric that applies to the lower end of the marketing funnel, such as conversions, instead of limiting yourself to a top-of-the-funnel metric like open or click rates.
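A quick power calculation shows why one campaign is rarely enough when your success metric is conversions. This sketch uses the standard sample-size approximation for comparing two proportions (95% confidence, 80% power); the conversion rates are hypothetical:

```python
import math

def required_sample_size(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate recipients needed per variant to detect a lift in
    conversion rate from p1 to p2 at 95% confidence and 80% power."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a lift from a 1.2% to a 1.6% conversion rate:
n = required_sample_size(0.012, 0.016)
print(f"~{n:,} recipients per variant")
# That is often far more than a single send reaches, which is why pooling
# the same hypothesis across campaigns matters.
```

Smaller conversion rates and smaller uplifts both push the required sample up, so testing the same hypothesis across several campaigns is usually the only realistic way to get there.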
4. You can apply your findings across all marketing channels
With a holistic approach, every email you send becomes a customer survey of your target market. Their actions on your emails are their vote on what resonates best with them.
Your email database is made up of your target market. So, many of the insights you discover via testing in email will also apply to your other channels. Email testing also is less costly, uses a defined customer base and has a shorter turnaround time than other channels.
Holistic testing in action: Three hypotheses
The three hypotheses below will give you more insight into your email performance than a basic “Which version won: A or B?” test. Bonus: You can apply what you learn from these tests to your web copy, banners, retargeting ads, search terms and other marketing collateral.
- An emotional-language question in the subject line will generate more sales than a directive statement.
- An emotional-language CTA will generate more sales than a pragmatic CTA.
- An emotive image of a person smiling while wearing the outfit will generate more sales than product shots of the outfit’s individual pieces.
Keep in mind that for all of these hypotheses, you will need to carry out multiple tests to ensure the results are not anomalies. That is their beauty: none of them is bound to any specific wording, because you are testing motivation.
Wrapping up
Email gives you a good basic testing structure that you can build on to sharpen your insights and improve your marketing efforts bit by bit across all channels. It’s another one of email’s superpowers that marketers so often overlook or ignore.
Ultimately, it’s another reason why investing both time and money in email pays off across your entire marketing programme. The key is to structure your testing programme with systematic scientific methods that align with your marketing goals and objectives.
It takes more time than just plugging a couple of variables into a testing platform and looking for a winner. But your reward is a stronger email programme, one that produces the results you need along with valuable insight you can apply across all of your marketing channels.
Interested in finding out more? Check out how the awesome team at Holistic Email Marketing can help take your email programme to the next level.
(This blog first appeared on Econsultancy.)