Why We’re All Wrong About Open Rates


If getting consistent results with your cold outreach campaign is keeping you up at night, or if you want to crack cold outreach as a viable growth channel for your business, this newsletter is definitely for you.

I started a cold outreach platform (QuickMail) almost 10 years ago, published a book about cold email, and co-hosted a weekly podcast on cold outreach for more than 6 years.

If deliverability is essential to you, you'll get a kick out of this post in particular.

Email deliverability has become very important in the last couple of years, no doubt about it.

Yet tracking deliverability remains hard: we usually operate with incomplete information and rely on proxies, such as open rates.

We know when an email has been sent and, if we are lucky, we’ll also get a notification when someone opens it (assuming the software can differentiate humans from bots properly).

Check out the aggregated open rate on QuickMail for the last 3 years.

As you can see, it is far from constant (there are some massive spikes and crashes) depending on holidays, the impact of blacklists, new spam tech being deployed, and so on.

But what if I told you that the way every piece of software (including QuickMail a few months back) looks at and calculates open rates is wrong for assessing deliverability?

I’ve been thinking pretty intensely about open tracking lately, and my 12-hour flight from Tokyo helped solidify my thinking.


My conclusion was pretty simple: as an industry, I think we’ve been looking at open rates from the wrong angle from the beginning.

Nowadays, we want to use Open Rate as a proxy for deliverability, but every tool out there uses Open Rate as a proxy for activity.


What's the difference, and how big of a deal is it, you may ask?


Turns out, it's a very big deal, and here is why.

Take a look at this simple example:

Imagine you have a campaign with 3 prospects.

An email to the first prospect is sent on Saturday, one to the second prospect on Sunday, and one to the third prospect on Monday.

The first two prospects open their emails on Monday; the third, who received the email on Monday, never opens it because it landed in spam. We are now a week later.

From an activity standpoint, last Monday got you 2 opens. Seems like a good day to get your email opened.

But from a deliverability standpoint, Saturday’s and Sunday’s emails have been delivered, not Monday’s.


This may sound like splitting hairs, but think about it this way.

If you are interested in deliverability, you have to attribute the open to the day the email was sent to understand how deliverability fared that day.

The emails didn't take 2-3 days to reach the inbox; they only took that long to be opened.


This is what it looks like with every piece of software on the market:

Activity-based calculation

Sat: 0% activity (no one opened your email)

Sun: 0% activity (no one opened your email)

Mon: 66.67% activity (2 opens out of 3 prospects contacted)


This is what QuickMail is doing now:

Deliverability based calculation

Sat: 100% deliverability (1 email sent, 1 confirmed received)

Sun: 100% deliverability (1 email sent, 1 confirmed received)

Mon: 0% deliverability (1 email sent, 0 confirmed received)
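
To make the difference concrete, here is a minimal sketch in Python of both calculations over the same event log. The dates and the sent/opened field names are hypothetical placeholders, not QuickMail's actual implementation; the point is only to show the two attribution rules side by side:

```python
from collections import Counter
from datetime import date

# Hypothetical event log for the three-prospect example above: each
# email records the day it was sent and the day it was opened
# (None if it was never opened, e.g. it landed in spam).
emails = [
    {"sent": date(2023, 7, 1), "opened": date(2023, 7, 3)},  # Sat, opened Mon
    {"sent": date(2023, 7, 2), "opened": date(2023, 7, 3)},  # Sun, opened Mon
    {"sent": date(2023, 7, 3), "opened": None},              # Mon, never opened
]

# Activity-based: credit each open to the day the open happened.
opens_by_day = Counter(e["opened"] for e in emails if e["opened"])

# Deliverability-based: credit each open back to the day the email
# was sent, then divide by the number of emails sent that day.
sent_by_day = Counter(e["sent"] for e in emails)
confirmed_by_day = Counter(e["sent"] for e in emails if e["opened"])

for day in sorted(sent_by_day):
    rate = confirmed_by_day[day] / sent_by_day[day]
    print(day, f"opens that day: {opens_by_day[day]}", f"deliverability: {rate:.0%}")
```

Running this prints 100% deliverability for Saturday and Sunday and 0% for Monday, while all the open activity lands on Monday: the same three emails, two very different pictures.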


The two tell very different stories.

With the second one, you can pinpoint exactly when deliverability problems start to appear: Monday.

Good luck doing that with the activity-based approach.

My example is a bit simplistic because the volume is low, but at larger volume, a pattern like this would indicate that something changed on Monday that negatively affected deliverability.


We implemented this in QuickMail a couple of months ago (Advanced Analytics), so we can now pull up these stats, and man, it is amazing.


Here is what it looks like with a real example:

If you look across the first row, you'll immediately see that first-week performance decreased from June 26th until July 24th, then went back up in August.

Armed with such data, you can now easily investigate problems. Maybe it's just the holidays, maybe it's time to retire an inbox, or maybe the whole domain reputation was affected.
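
If you want to build a week-by-week view like this yourself, the core idea is simply to group emails by the week they were sent, never by the week they were opened. Here is a rough sketch reusing the hypothetical email records from the example above (again, an illustration, not QuickMail's actual code):

```python
from collections import defaultdict
from datetime import date, timedelta

def send_week(d: date) -> date:
    # Bucket a date into the Monday of its week.
    return d - timedelta(days=d.weekday())

def weekly_deliverability(emails: list[dict]) -> dict[date, float]:
    # Group emails by the week they were SENT, then compute the share
    # of emails with a confirmed open, no matter when the open
    # eventually happened. Each value is the deliverability proxy
    # for that send week.
    sent = defaultdict(int)
    confirmed = defaultdict(int)
    for e in emails:
        week = send_week(e["sent"])
        sent[week] += 1
        if e["opened"] is not None:
            confirmed[week] += 1
    return {week: confirmed[week] / sent[week] for week in sorted(sent)}
```

The same grouping works for clicks, replies, and bounces, which is why a table keyed by send week can be read across a row to spot exactly when a metric started degrading.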


If you attribute the open to the time it’s opened (activity-based) instead of when the email is sent, it will be almost impossible to figure out when the deliverability problem occurred. There is too much variability in the time it takes people to open an email, which makes activity-based open rates unreliable as a proxy for deliverability.


It may sound counter-intuitive for a day to show a 0% open rate even though someone opened an email that day, but it’s the absolute right way to look at it if you are interested in deliverability.


Conclusion

With this approach, I believe open tracking is finally really useful for assessing deliverability. Give it a go and try it for yourself: https://quickmail.com


PS: As you can see in the picture, we decided to go one step further and actually implement this for Clicks, Replies, Bounces, and Unsubscribes (and have 2 levels: per email or per journey, a journey being the series of touches a prospect experiences).

This provides really useful stats when comparing week-by-week results.

PPS: Although the display is per week, we actually track this per day, and I will be coming back with more information on that.



