Marketing is a percentages game.
A couple percentage points may not seem like a lot, but when you have an email list approaching 1 million subscribers, 2% means an extra 20,000 leads opening your email, interacting with your brand, and ultimately generating revenue.
The problem marketers face when optimizing emails is determining which factors create that 2% increase. Is it the placement of the CTA? The color? Does it matter who sent the email? With so many variables, it’s easy to turn email optimization into a guessing game and form opinions based on perspective rather than statistical evidence.
There’s a better way, of course—email split testing. When done right, you’ll know exactly why one email performs 2% better than another.
Related Read: Check out this case study where Virgin Holidays made millions from a 2% increase in open rates.
But split testing is tricky. Only carefully designed tests produce reliable results. Just ask split test experts HubSpot and Phrasee: in their recent study, The Science of Split Testing, they say that “Split testing the wrong way is almost as ineffective as not split testing at all.”
There are two common problems that marketers run into when running split test campaigns: first, gathering enough relevant data, and second, analyzing that data to make better marketing decisions.
Fortunately, our research and experience point to a solution to both problems: create clean data by taking the correct scientific approach, then use AI technology to gather more relevant information, analyze it, and automatically offer solutions.
With all this in mind, we’ll take a look at how HubSpot and Phrasee recommend you create some killer split tests.
Let’s go back to 5th grade science.
Remember when you had to run an experiment for the science fair? Well, it turns out that mentality is actually really valuable in marketing. The elements of the scientific method are the exact elements of an effective split test:

1. Ask a question (e.g., will a shorter subject line lift open rates?)
2. Form a hypothesis
3. Change one variable at a time, keeping everything else constant
4. Gather data from a large enough sample
5. Analyze the results and draw a conclusion
Pretty simple, right? Here’s where your split test could go wrong:
Marketers commonly refer to split tests as “A/B” tests, where they test only two variants of a single variable (two different subject lines, for example). But why stop there? Why not test 10 different subject lines?
Sure, it takes a little more work, but the data that larger split tests produce is incredibly valuable. Pair that data with AI, and you start to see some truly impressive results.
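To make the analysis side concrete, here’s a minimal sketch (in Python, using only the standard library) of how you might check whether one variant’s lift over another is statistically meaningful rather than noise. This is a generic two-proportion z-test, not Phrasee’s or HubSpot’s actual method, and the campaign numbers below are made up for illustration:

```python
import math

def z_test_two_proportions(opens_a, sends_a, opens_b, sends_b):
    """Two-sided two-proportion z-test on open rates.

    Returns (z, p_value). A small p-value suggests the difference
    in open rates between variants A and B is unlikely to be chance.
    """
    p1 = opens_a / sends_a
    p2 = opens_b / sends_b
    # Pooled open rate under the null hypothesis (no real difference)
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical campaign: variant B opens 2% more often than variant A
z, p = z_test_two_proportions(opens_a=2000, sends_a=10_000,
                              opens_b=2200, sends_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # small p => the 2% lift is likely real
```

With 10 variants instead of 2, you’d run a chi-squared test across all of them (or correct for multiple comparisons) before declaring a winner—which is exactly the kind of bookkeeping AI tooling can automate.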
That’s exactly what Phrasee does, actually. This AI tool captures data from each of your split tests, then uses machine learning and natural language processing to automatically generate the optimal subject line copy that will get people to click.
For more valuable information, be sure to check out The Science of Split Testing. You’ll be running split tests like a pro in no time!