
8 powerful tactics to avoid email testing fatigue and keep the channel fresh.

‘You Should Test Everything in Marketing’ – ever heard that? We’re told to challenge assumptions and back up instinct with data. Who else is tired of that email testing talk? For some senders, a big A/B testing programme might be a terrible idea: too much overhead with too little to show for the effort.

Let’s sniff out when testing is a great idea, and when it’s not. What approach should you take to build the best possible email programme?

Confessions of a Testing Sceptic

I have a confession: I’m not a fan of testing. I flinch when I hear A/B. Even the alphabet poster in my kids’ bedroom starts at D. Yes, I recently tore off the C as well, after I read an article about Control Groups.

Okay, I’m not that dysfunctional as a parent. As a CRM professional, though, I find that my aversion to A/B testing puts a brake on growth – both for me and for my channel.

Why small teams need a different approach

The popular opinion is to ‘test frequently and test everything’. If you’d spent time in a small team, or in a business with just one or two CRM managers like mine, you’d agree it isn’t that simple. Teams with juuuuust enough time to keep the day-to-day operation running need a more nuanced and flexible approach to testing.

I’ll share my findings so smaller teams can keep going AND set up a solid and scalable foundation for growth at the same time.

Why we need to rethink A/B testing in Email

I’ve been a follower of the cult of testing in the past, but have slowly lost my religion. I get why it’s a critical business practice, and why it’s worth the investment in effort and organisation. Solid and extensive testing takes the guesswork out of marketing and design. It forces us to quantify our intuition and not make assumptions. That’s all fine, no argument there from me.

But most of the best practice advice I’ve read comes from the wider world of the web. It’s written by and for web product owners who run platforms with at least a reasonable volume of traffic. 

This isn’t to dismiss the scientific rigour that’s driving modern marketing, but we can’t all be scientists.

Society has a working relationship with science. We can read the weather forecast and make a cup of tea – we don’t conduct our own tests every time to be sure it will work.

Sometimes testing everything simply soaks up too much time, particularly for small CRM teams. And AI won’t replace marketers just yet. I want to make the case for a mix of research, analysis, experience, intuition and selected testing. Taken together, these will help us optimise and grow our CRM and Email activities.

Let’s recap some of the commonly understood recommendations for testing. Then I’ll explain why I struggle with many of them in my role as a CRM and Email Marketer, and we’ll look at how we could focus our efforts.

Testing Best Practices (according to the web)

  1. Be Holistic. Test Everything. Don’t make any kind of change without testing it first.
  2. Be Consistent. Be Organised. Test the same way every time, particularly when trying variations of a change. Always use the same metric.
  3. Be Focussed. Test one thing at a time.
  4. Document Everything. Keep it in a format that teammates, bosses and newcomers can easily access and understand.
  5. Be Scientific. Write hypotheses. Get enough data for Big Sample Sizes and Statistical Significance.

What works for web doesn’t work for Email

Email developers like to say that what works for web development doesn’t always work for email. They mean web standards and email client rendering, but the same is true of a testing programme.

Web product teams achieve useful test results much faster than a CRM team can, because they can run tests on their higher volumes of daily traffic. A CRM team has far fewer opportunities to gather enough data, so a single test may take weeks to run.
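
To put rough numbers on that, here’s a back-of-the-envelope sketch in Python. Every figure in it is an assumption for illustration: a 2% baseline conversion rate, a 20% relative lift worth detecting, and about 2,000 recipients per variation reaching the landing page each week.

```python
# Back-of-the-envelope: how long does a conversion test need to run?
# All numbers are assumptions for illustration, not data from the article.
from statistics import NormalDist

baseline = 0.02                      # assumed baseline conversion rate (2%)
variant = baseline * 1.20            # assumed 20% relative lift we want to detect
alpha, power = 0.05, 0.80            # conventional significance and power levels

z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
z_power = NormalDist().inv_cdf(power)

# Standard two-proportion sample-size formula, per variation
n_per_variation = (
    (z_alpha + z_power) ** 2
    * (baseline * (1 - baseline) + variant * (1 - variant))
    / (variant - baseline) ** 2
)

weekly_visitors = 2_000              # assumed recipients per variation reaching the page weekly
print(f"~{n_per_variation:,.0f} visitors needed per variation")
print(f"~{n_per_variation / weekly_visitors:.0f} weeks at {weekly_visitors:,} visitors/week")
```

At those assumed rates, each variation needs roughly 21,000 landing-page visitors – around ten or eleven weeks of sends – before the result means much.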

Consider a CRM programme run by a couple of individuals for a typical SMB. Their list is around 300k opted-in recipients. They have a handful of automated journeys, a weekly newsletter, product updates two or three times a month, and a big promotion each quarter.

The team wants to test a change to CTA text to drive more product demo sign-ups on their landing page. They write their hypothesis and set up a document to track the test results. They choose their weekly newsletter for the test as this combines a consistent format with the largest number of recipients. To ensure that results are not skewed by some other contextual factor, they plan to run the test for several editions of the newsletter.

The ultimate goal is to increase conversions on their landing page, so they use that as their success metric. To attribute the conversions to each variation in their analytics platform, they set up each split as a separate campaign.
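
One common way to make that attribution work is to give each split’s links their own UTM campaign parameters. Here’s a minimal sketch in Python, with a made-up domain and campaign names:

```python
# Hypothetical link tagging so each split's conversions show up separately
# in analytics. The domain and parameter values are made up.
from urllib.parse import urlencode

def tagged_link(base_url: str, campaign: str) -> str:
    params = {
        "utm_source": "newsletter",
        "utm_medium": "email",
        "utm_campaign": campaign,    # one campaign name per split
    }
    return f"{base_url}?{urlencode(params)}"

print(tagged_link("https://example.com/demo", "demo-cta-test-variation-a"))
print(tagged_link("https://example.com/demo", "demo-cta-test-variation-b"))
```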

Week one goes well:

Great! From 45k opens, Variation A converted 33% better than B. Statistical Significance! Run the test for three more weeks to get a decent sample size, then write up the results and plan the next test. Button colour? CTA above or below the fold? Opening paragraph more or less than 150 words?
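
As an aside, here is roughly what that week-one significance check could look like, sketched as a standard two-proportion z-test. The exact conversion counts aren’t given above, so they’re assumed here:

```python
# Rough significance check on a week-one result like the one above.
# The article only gives 45k opens and a 33% relative lift, so these
# counts are assumed for illustration.
from math import sqrt
from statistics import NormalDist

opens_a, conversions_a = 22_500, 450   # assumed: 2.0% of A's openers converted
opens_b, conversions_b = 22_500, 338   # assumed: ~1.5% of B's openers converted

rate_a, rate_b = conversions_a / opens_a, conversions_b / opens_b
pooled = (conversions_a + conversions_b) / (opens_a + opens_b)
std_err = sqrt(pooled * (1 - pooled) * (1 / opens_a + 1 / opens_b))
z = (rate_a - rate_b) / std_err
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  lift: {rate_a / rate_b - 1:.0%}")
print(f"z = {z:.2f}, p = {p_value:.4f}")  # p < 0.05 clears the usual bar
```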

Some of these tests will deliver conclusive results, others less so. In five or six months, the team deliver about one test a month, of varying significance, in one very narrowly focussed area of activity. 

They effectively ran two parallel weekly campaigns in this time. Each required the usual setup in the ESP, plus a specific segment.

Further segmentation or localisation would mean extra complications. They paused that activity for the duration of the test.

The company’s marketing VP looks at how much testing the Web Product team have delivered in the same time period. The CRM team are called in for a meeting to explain their slow pace and it all gets rather uncomfortable.

No-one likes to make excuses. But instead of committing to more test activity, the CRM team wants to show that they can work more effectively.

More focus, less hocus pocus

1. Get to know your audience

What data are you already gathering about your subscribers? If you know who they are, can you track differences in who clicks on what? Can you start gathering deeper insights based on their activity around your website? See what your ESP can do for you, or what its integration with your e-commerce platform offers.

Your copywriter, art director and other co-workers will also have good insights into your audience. Can they help you create personas that can drive some segmentation ideas?

2. Research

Benefit from others’ learnings. Want to include emoji in your copy? See if anyone else has written about their experience. Debating colour choices for CTAs? Perhaps there’s a study out there on colour psychology you can dig up.

3. Pick your battles: testing versus automation

The brand and creative departments often drive newsletter strategy. It tends to evolve quickly. Rigorous, methodical testing slows this process down, creating friction and frustration.

If that’s the case, let the newsletter evolve organically and work with your automations instead. These will be less interesting to the branding teams anyway. Set up a couple of A/B splits and just let them run for a while.

4. Test top of funnel, measure bottom of funnel

Put your efforts into tests that will have an impact on the biggest portion of your list. But the key measure should always be the impact on your ultimate goal – usually a conversion.

For example, test whether generic or branded search creates more valuable subscribers. Identify each source at sign-up, and measure who converts better after the welcome email sequence.
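
As a minimal sketch of what that measurement could look like, assuming each subscriber’s sign-up source and post-welcome conversion status can be exported into a pandas DataFrame (the column names and rows below are invented):

```python
# Hypothetical sketch: compare post-welcome-sequence conversion by sign-up
# source. The export, column names and rows are all invented.
import pandas as pd

subscribers = pd.DataFrame({
    "email":     ["a@example.com", "b@example.com", "c@example.com", "d@example.com"],
    "source":    ["generic_search", "branded_search", "generic_search", "branded_search"],
    "converted": [False, True, False, True],   # converted after the welcome sequence?
})

report = (
    subscribers.groupby("source")["converted"]
    .agg(subscribers="size", conversions="sum")
)
report["conversion_rate"] = report["conversions"] / report["subscribers"]
print(report)
```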

5. Caution: vanity metrics

Open rates, I’m looking at you. 

Sure, open rates are a popular top-funnel metric. But they’re too often seen as a barometer of a healthy email list. Looked at in isolation, they tell you very little about how your campaigns are performing further down the funnel. 

Open rates are like an approval rating for a politician. A nice ego boost if people like you, but meaningless until the actual votes come in.

Design open rate tests with the end conversion in mind. Open rates are affected by sender name, subject line, send time, weekday, among other things. A send-time test shouldn’t only find the best time for opening an email. It should find out when that email is most likely to be acted on.

Imagine a CRM manager for a travel agent. Their ESP data shows that recipients of the vacation rentals newsletter open emails during their lunch break. But web transactions show evening is busiest for bookings. So an evening send time may result in fewer opens, but more bookings. Looking only at open rates would have led to worse performance and a wasted test.
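
The arithmetic behind that trade-off is simple enough to sketch. The rates below are invented purely to illustrate the point:

```python
# Illustrative arithmetic only – these rates are invented to make the point.
audience = 100_000
send_slots = {
    # slot: (open rate, bookings per 1,000 opens)
    "lunchtime": (0.30, 8),
    "evening":   (0.22, 15),
}

for slot, (open_rate, bookings_per_k) in send_slots.items():
    opens = audience * open_rate
    bookings = opens / 1_000 * bookings_per_k
    print(f"{slot}: {opens:,.0f} opens, {bookings:,.0f} bookings")
```

With those made-up rates, the evening send ‘loses’ 8,000 opens but wins roughly 90 extra bookings.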

6. Stick to tests with wider learnings

Subject lines, now I’m looking at you.

Split testing the content of a subject line is like a popularity contest. You will only learn which subject line was more popular on the day.

Instead, try testing personalisation or localisation in the subject line. The learnings might steer you towards further tests using those methods.

7. Trust your intuition

But challenge your assumptions. Soft content only on weekends? Try a sale occasionally. A test could take two campaigns that you already have planned, and switch their sending days for half of the list.

8. Beware the novelty effect

Maybe that Sunday sale worked because it was so unusual. But do your customers want it to become business as usual? A small email programme may never have enough hard data to be sure.

In that case, look at some soft data. Can your social team help with some sentiment analysis around the weekend activity? Did your customer support team get a spike in requests?

This kind of holistic overview will help you make a business decision about Sunday campaigns, even if the hard data is missing.

Plan → Execute → Review

When it’s time to make a change to your CRM and email activity, document as much as you can, even when you haven’t set up a formal test first. Write down:

  • What you’re planning to change.
  • Why you are making the change (‘my research shows that…’).
  • How much effort is required.
  • What you expect to happen.
  • How long it should take.

Then do it.

Afterwards, note any differences in performance and be prepared to share and explain these to your team and management.

This doesn’t have to be a detailed document – one row for each point may be enough to help you explain the changes to your team later.
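
If it helps, here’s a made-up example of what a single entry might capture, whether you keep it in code or a spreadsheet row. The fields mirror the list above, and none of the values come from a real programme:

```python
# A made-up example of what one change-log entry might capture.
change_log_entry = {
    "date": "2024-05-13",                                          # hypothetical
    "change": "Move the newsletter send from Tuesday to Thursday",
    "why": "Research suggests engagement peaks later in the week",
    "effort": "Low – schedule change only",
    "expected": "Small lift in click-to-open rate",
    "review_after": "4 weeks",
}
```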

Prepare for growth

Just because you can’t test everything all the time, doesn’t mean you can’t do it properly when you do.

A day will come when your list has grown to 3m recipients. Your automations will be sending 100s of different emails a week. And on that day, it will be time to hire someone to manage the tests full time!

Getting the basics right with a small team will make it easier to scale. When growth comes, you’ll have laid solid foundations for a comprehensive marketing testing programme.

The case for marketing testing

It all adds up to a more nuanced, considered approach to evolving your CRM programme with testing.

  • Don’t test absolutely everything.
  • When you test, do it right and measure the most important metrics.
  • Work with other teams for insights before and after the test.
  • When you can’t test a change, at least document what you did and why, and keep an eye on the results.

It’s about balancing rigour with agility. Being responsive to challenge and opportunity. About combining considered action with informed intuition.

Anthony Noel