The A/B testing ladder to grow engagement and sales
A simple roadmap for deciding what to test next—and what to ignore.
David Ogilvy was obsessed with two numbers.
Not one. Two.
He tracked direct response results religiously. Coupons clipped, orders placed, inquiries received. But he also tracked readership scores through Starch Research. That second number measured how many people actually read and engaged with the ad, completely independent of whether they bought anything.
Most people only remember the first half. They quote "if it doesn't sell, it isn't creative" and assume Ogilvy cared about sales above everything else.
He didn't.
He cared about sales AND attention. Because he understood something most still haven't figured out: an ad that converts today but nobody wants to read tomorrow is a dead end.
You can squeeze short-term results out of aggressive tactics. But if your audience stops paying attention, those results dry up fast.
Engagement and conversion are two different animals. And optimizing for one while ignoring the other creates problems that compound over time.
This is exactly the mistake we see when we audit client email programs. Everyone is running A/B tests. But they're only watching one number at a time. Test subject lines, pick the winner based on opens. Test body copy, pick the winner based on clicks. Test send times, pick the winner based on revenue.
Seems logical. But the data tells a different story.
We've seen subject lines that drive a 38% open rate generate fewer sales than subject lines that open at 24%. We've seen the send time that produces the highest click-through rate actually suppress purchases. We've watched body copy that gets the most replies become the worst at closing.
When you only optimize for one metric, you can accidentally kill the other one. And you won't even know it's happening because you weren't tracking it.
So we track both. For everything.
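To make that concrete, here is a minimal sketch of what "track both" can look like in practice. It assumes you can export per-variant sends, opens, clicks, and attributed orders from your email platform; the variant names and numbers below are hypothetical, not real results from the tests described in this post.

```python
# Minimal sketch: score each A/B variant on BOTH engagement and conversion,
# instead of declaring a winner on a single metric.
# Variant names and numbers are hypothetical; plug in exports from your ESP.

from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    sends: int
    opens: int
    clicks: int
    orders: int
    revenue: float

    @property
    def open_rate(self) -> float:
        return self.opens / self.sends

    @property
    def click_rate(self) -> float:
        return self.clicks / self.sends

    @property
    def revenue_per_send(self) -> float:
        return self.revenue / self.sends

variants = [
    Variant("curiosity_subject", sends=10_000, opens=3_800, clicks=610, orders=41, revenue=3_280.0),
    Variant("plain_subject",     sends=10_000, opens=2_400, clicks=540, orders=58, revenue=4_930.0),
]

for v in variants:
    print(f"{v.name}: open {v.open_rate:.1%}, click {v.click_rate:.1%}, "
          f"orders {v.orders}, $/send {v.revenue_per_send:.2f}")

# A variant can "win" on opens and still lose on revenue per send,
# which is why both columns get reported for every test.
```

The point of the printout is that there is never a single winner column; every variant gets judged on an engagement number and a revenue number side by side.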
Our testing process starts in a place most people skip entirely: the sender name. Before anyone reads your subject line, they've already scanned the sender field and mentally categorized your email.
Co-worker, vendor, brand, stranger. That split-second decision shapes whether they open it, how they read it, and what they do next.
A sender name change can reveal a lot about how your audience perceives you.
Sometimes it moves nothing, which tells us brand recall is weak. Sometimes it spikes opens and unsubscribes at the same time, which tells us people recognized the brand but were ignoring it on purpose. And sometimes it lifts engagement and conversion together, which gives us a lever we can use strategically going forward.
From there, we test timing. Not just the hour. The day. Your subscribers process their inbox in completely different psychological states depending on when your email arrives. Monday morning is triage mode. Scanning for fires, prioritizing tasks, mentally sorting what's urgent from what's not. Sunday evening is preparation mode. Winding down, thinking about the week ahead, open to new ideas.
We recently ran an event launch where Sunday evening emails generated a flood of positive replies and follow-up questions. Monday morning sends from the same campaign dropped engagement off a cliff. The content didn't change. The audience didn't change. Only the timing changed.
That's why we always test days against both metrics. What days drive the highest engagement? What days drive the highest conversion? Those are often two completely different answers.
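As a rough sketch, assuming you can export a per-recipient send log, comparing days on both metrics is just a two-column summary. The column names and toy data here are made up for illustration, not pulled from the campaign above.

```python
# Sketch: compare send days on an engagement metric and a conversion metric
# at the same time. Column names and rows are hypothetical; in practice this
# would come from a per-send export out of your ESP.

import pandas as pd

sends = pd.DataFrame({
    "send_day":  ["sun_evening", "sun_evening", "mon_morning", "mon_morning"],
    "opened":    [1, 1, 1, 0],
    "replied":   [1, 0, 0, 0],
    "purchased": [0, 1, 0, 0],
})

by_day = sends.groupby("send_day")[["opened", "replied", "purchased"]].mean()
print(by_day)  # one engagement column and one conversion column per day
```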
After timing, we test the content layer. Subject line types, body copy approaches, CTA formats. Buttons vs. links vs. images. Each one tracked against engagement and conversion independently through a UTM structure that lets us see which specific elements actually move revenue versus which ones just get attention.
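Purely as an illustration, a UTM convention along these lines would let each tested element report its own revenue. The parameter values and helper function are hypothetical, not the exact structure described in this post.

```python
# Illustrative sketch of a UTM structure where campaign, send timing, and the
# specific tested element (subject variant, CTA format) each live in their own
# parameter, so revenue can be attributed to the element that earned it.
# All names and values below are placeholders.

from urllib.parse import urlencode

def tag_link(base_url: str, campaign: str, send_day: str,
             subject_variant: str, cta_format: str) -> str:
    params = {
        "utm_source": "email",
        "utm_medium": "newsletter",
        "utm_campaign": campaign,                           # e.g. spring_launch
        "utm_content": f"{subject_variant}_{cta_format}",   # which element drove the click
        "utm_term": send_day,                               # e.g. sun_evening vs mon_morning
    }
    return f"{base_url}?{urlencode(params)}"

print(tag_link("https://example.com/offer",
               campaign="spring_launch",
               send_day="sun_evening",
               subject_variant="subj_b",
               cta_format="button"))
```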
Over time, this process gives you a map. You start to see which levers to pull when the goal is eyeballs and which levers to pull when the goal is sales. You stop guessing and start making strategic choices about every send.
But there's a layer underneath all of this that determines whether any of it matters.
If the people on your list aren't the right people, no amount of testing will fix that. The brands we see struggling the most with email aren't struggling because of bad subject lines or wrong send times.
They're struggling because their lead gen attracted the wrong audience. So they compensate with complexity. More segments, more flows, more re-engagement sequences.
The brands where email feels simple? Their lead gen did the hard work upfront.
Testing only works when you're testing against an audience that's capable of converting in the first place. Get that part right and the optimization becomes exponentially more powerful.
If you want to see how this two-metric framework applies to your email program, and whether the real bottleneck is your email strategy or your lead gen, that's exactly what we dig into on a strategy call.
We'll walk through your current metrics, show you where the gaps are between engagement and conversion, and map out which tests would have the highest impact for your specific situation.
PS. Download my A/B test list here. While the list is simple, it's one of my prized possessions. Done right, it can create more engagement and sales in your business than almost anything else email-wise.
Next week, I’m going to be revealing some of my favorite email copy angles.