Unlocking the Power of A/B Testing in Email Marketing

By Jonathan Pay

A/B testing is an indispensable tool in email marketing that helps uncover what truly resonates with your audience. Yet for many email marketers, navigating the maze of best practices, data interpretation, and hypotheses can seem daunting. Those challenges were exactly what the panel explored in the latest episode of Email & More, "A/B Testing in Email Marketing," moderated by Adeola Sole and featuring experts Kath Pay, Victoria Scott, Phil Hill, and Andrea Boyer.

In this session, each expert shared valuable insights on how A/B testing can elevate email marketing strategies and yield impactful results. But how can you leverage A/B testing to truly connect with your audience and increase conversions? Let’s break down the key takeaways from the panel.

Why A/B Testing is Essential

Have you ever sent an email campaign that felt perfect, only for the results to be underwhelming? A/B testing is the key to uncovering why. It allows you to test different variables, from subject lines and call-to-action buttons to images and layout, and determine what your audience responds to best.

In the words of Victoria Scott from Marks & Spencer, “A/B testing helps us know exactly what our customers want from that communication.” Kath Pay from Holistic Email Marketing added, “It’s like conducting a survey—except instead of asking directly, you’re watching how customers interact, making it more truthful than a traditional survey.”

The Right Approach: Setting a Hypothesis

As Kath Pay emphasized, a successful A/B test starts with a clear hypothesis. “You need to identify what you want to find out and create your hypothesis from there,” she explained. For example, do your customers respond better to discount offers or benefits-led messages? Without a defined hypothesis, you’re merely guessing, which can lead to inconclusive results.

It’s not just about testing one variable, though. Kath encouraged marketers to think bigger: “If you’re testing something like ‘benefits versus savings,’ you can test multiple factors like subject lines, body copy, and images—all supporting the same hypothesis. This gives you a more robust result than micro-testing things like button colors.”

Identifying the Right Metrics

Another crucial aspect of A/B testing is knowing which metrics to track. While it might be tempting to focus on open rates, the panel stressed that metrics should align with your campaign objectives.

“If your goal is conversions, don’t focus solely on open rates or clicks,” Adeola noted. “You might find that one email has a lower open rate but higher conversions, which is ultimately more valuable.”

Andrea Boyer added, “When you’re doing content testing, look at total clicks versus unique clicks. Even if your click-through rate doesn’t jump, you might find that one version is driving far more total interactions.”
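
To make this concrete, here is a minimal sketch (in Python, with purely hypothetical campaign numbers and variant names) of how the same pair of sends can look very different depending on which metric you compare. None of these figures come from the panel; they simply illustrate the point about aligning metrics with objectives and checking total as well as unique clicks.

```python
# Minimal sketch: comparing two email variants on different metrics.
# All numbers below are hypothetical, purely for illustration.

variants = {
    "A": {"delivered": 50_000, "opens": 12_500, "unique_clicks": 1_000,
          "total_clicks": 1_300, "conversions": 150},
    "B": {"delivered": 50_000, "opens": 10_000, "unique_clicks": 950,
          "total_clicks": 1_800, "conversions": 190},
}

for name, v in variants.items():
    open_rate = v["opens"] / v["delivered"]
    unique_ctr = v["unique_clicks"] / v["delivered"]
    clicks_per_clicker = v["total_clicks"] / v["unique_clicks"]  # total vs unique clicks
    conversion_rate = v["conversions"] / v["delivered"]
    print(f"Variant {name}: open rate {open_rate:.1%}, "
          f"unique CTR {unique_ctr:.1%}, "
          f"clicks per clicker {clicks_per_clicker:.2f}, "
          f"conversion rate {conversion_rate:.2%}")
```

In this made-up example, variant B "loses" on open rate but wins on conversions and total interactions, which is exactly the trap the panellists warn against when a single metric is read in isolation.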

Short-Term Wins vs. Long-Term Gains

One of the most frequent questions around A/B testing is how long to run tests and how to balance short-term quick wins with long-term strategic gains. Victoria shared her experience at Marks & Spencer, where short-term tests—like subject line optimization—are often conducted in a few hours to drive immediate results for time-sensitive campaigns like Mother’s Day.

On the other hand, Kath pointed out that long-term testing strategies, such as personalization or automated email flows, provide deeper insights and have a lasting impact on customer retention and lifetime value. “You want to measure customer lifetime value over time when testing something like personalized recommendations,” she explained.

Real-World A/B Testing Success

One standout moment from the session was when Victoria Scott shared a compelling example from Marks & Spencer’s food division. They were testing new creative concepts involving annotated doodles in their emails. Despite initial resistance from the wider team, the A/B test revealed a clear difference in performance—emails without the doodles performed 20% better in terms of engagement. This case highlights how A/B testing can validate assumptions and guide creative decisions.

Phil Hill from DoorDash shared another example, where a test involving email prompts encouraging app downloads led to a significant 50% conversion rate. This test taught his team that the right message, timed perfectly in a customer’s lifecycle, could make all the difference.

Document and Share Results

A key takeaway for maintaining a consistent and effective A/B testing process is thorough documentation. Phil Hill noted the importance of creating a standardized system to log test results, hypotheses, and learnings: “Documentation helps you and your team grow and prevents you from repeating tests unnecessarily.”

Kath agreed, suggesting that marketers should create a methodology document to ensure that future teams understand the reasons behind past tests and how to replicate or build upon them.
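
As an illustration of what such documentation might capture, here is a minimal sketch of a test-log entry. The fields are assumptions based on the panel's advice (hypothesis, variants, primary metric, outcome, learnings), not a prescribed template, and the example entry is entirely hypothetical.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ABTestRecord:
    """One entry in a shared A/B test log (fields are illustrative)."""
    test_name: str
    hypothesis: str       # what you expected to happen, and why
    primary_metric: str   # the metric the decision was based on
    variants: dict        # variant name -> short description
    result: str           # winner and effect size, or "inconclusive"
    learnings: str        # what to carry into future tests
    run_date: date = field(default_factory=date.today)

test_log = [
    ABTestRecord(
        test_name="Benefits vs savings framing",
        hypothesis="Benefit-led subject lines and copy will outconvert discount-led ones",
        primary_metric="conversion rate",
        variants={"A": "benefit-led", "B": "discount-led"},
        result="inconclusive - rerun with a larger audience",
        learnings="Segment by customer lifetime value next time",
    ),
]
```

Keeping entries in a shared, structured format like this makes it easy for a future team to see why a test was run and whether it is worth repeating or extending.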

How Long Should You Run an A/B Test?

When asked how long an A/B test should run, the panelists agreed: it depends on the metric you’re testing and your audience’s behavior. Kath Pay advised, “If you’re testing conversions, give it a few days to ensure you’re not making decisions based on incomplete data.”

For those unsure about when to stop, Phil recommended using online tools to calculate statistical significance: “There are plenty of tools that can help determine if your sample size is large enough and whether your results are conclusive.”

Use our Statistical Significance Calculator
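
For readers who prefer to sanity-check the maths themselves, below is a minimal sketch of a two-proportion z-test in Python. The conversion counts are hypothetical, and a dedicated calculator (like the one linked above) or a statistics library will handle edge cases this sketch ignores.

```python
from math import sqrt, erfc

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under the null hypothesis
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))                  # two-sided p-value
    return z, p_value

# Hypothetical campaign: 10,000 recipients per variant
z, p = two_proportion_z_test(conv_a=180, n_a=10_000, conv_b=230, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")                    # p < 0.05 -> likely significant
```

With these made-up numbers the difference in conversion rate would clear the conventional p < 0.05 bar; in practice you would also let the test run long enough to cover normal purchase delays, as Kath advises above.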

Key Takeaways

  1. Start with a Hypothesis: Define what you want to test and why. A strong hypothesis will guide the direction of your A/B tests.
  2. Test Big Ideas First: Focus on significant changes—like benefits versus savings—rather than small tweaks like button colors.
  3. Track the Right Metrics: Align your metrics with the goal of your campaign, whether that’s conversions, clicks, or customer engagement.
  4. Document Everything: Keep detailed records of your tests, outcomes, and learnings to build a knowledge base for your team.
  5. Know When to Stop: Use statistical tools to determine when your test results are significant enough to act on.

Don’t miss out on all the insights, questions, and answers from the full episode, A/B Testing in Email Marketing, available to watch on demand on our site and on YouTube!

Many thanks to our special guests Kath Pay, Victoria Scott, Phil Hill, and Andrea Boyer for joining our panel and lending their expertise, as well as to our live audience for their valued participation and questions.

Finally, a big thanks to our sponsors Iterable, Ongage, RPE Origin, and Actito for their generous support of this series.

Hungry for more? Email & More will be back to talk personalization and AI on June 28th. Sign up to receive our newsletter, Holistic Insights, or follow us on social media to stay informed!