Boost Listener Retention: Master the Art of Podcast Length
Nobody knows how long a show should be... (Photo AKW)


Everybody thinks they know how long a podcast should be — but they don’t. Here’s how to analyse your downloads and work out the best runtime for your show

When you’re a podcast producer, the question you’re asked most often is “How long should a podcast be?” There’s no correct answer, any more than there is to “How long should a TV show or movie be?” The right length is in the eye and ear of the audience, and even within your audience, the answers will vary wildly if you ask.

But let’s say you’ve got 20 episodes under your belt and you want to fine-tune the runtime. Can your data tell you what the optimum length is? Well, yes. And no.

In this article, I'll explain an easy way to get into your data and work out what the numbers are really telling you...


Your podcast download dashboard is lying to you… kind of

Most podcast platforms give you basic metrics, usually downloads over specific periods. However, this isn’t enough to choose an optimal runtime. The raw downloads won’t tell you what you need to know; you need to make some statistical adjustments, using easy maths.

This is because each episode’s data has to be evenly weighted for a fair comparison, which is called normalising your data. If you don’t normalise, two errors will creep in:

  1. Older shows will always look more popular (because downloads accrue over time)
  2. If your shows have a mix of runtimes, the most common runtime will rack up more downloads simply because there are more episodes of that length. That doesn’t indicate a preferred runtime; it’s just the most frequent one.


You can’t just look at downloads per episode and work out the optimum runtime because…

New listeners always go back and listen to episodes that dropped before they discovered the show.

This is pretty obvious if you think about your own listening and watching habits. We’ve all discovered a creator’s latest song, YouTube video, TikTok, Instagram post, book or article, then checked their profile and dug into their earlier stuff.

That means older content will usually look more popular in the raw numbers than newer content because of its age, not its actual popularity.

Look at this example, taken from a chat show I produced back in lockdown (Lockdown Lemonade): the older shows all look more popular than the newer ones… except they’re not (we prove that later in this piece). Logically, they shouldn’t be, unless the audience was bigger on day one than it is for the latest show. Which it never is.


The older shows are more popular? How? Show age bias.


How do you compensate for show age bias? Do the math…

Normalising the downloads by time since publication accounts for the head start that older episodes get simply because they’ve been available for longer. By doing this we can minimise, or at least isolate, the effect of show age on the download count. Here’s the formula:

  1. Work out the days since the show was published: Days Since Published = Current Date - Publish Date
  2. Then adjust the download count by dividing it by the number of days since the episode was published: Normalised Downloads = Total Downloads / Days Since Published

This gives us an average download rate per day, which helps to compare episodes released at different times more fairly.
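If you’d rather not do this in a spreadsheet, here’s a minimal Python sketch of the same calculation. It assumes a hypothetical CSV export with columns called publish_date and total_downloads; rename them to match whatever your hosting platform actually gives you.

```python
import pandas as pd

# Hypothetical export from your hosting dashboard; the column names are assumptions.
episodes = pd.read_csv("episodes.csv", parse_dates=["publish_date"])

# Days Since Published = Current Date - Publish Date (floor at 1 to avoid dividing by zero)
today = pd.Timestamp.today().normalize()
episodes["days_since_published"] = (today - episodes["publish_date"]).dt.days.clip(lower=1)

# Normalised Downloads = Total Downloads / Days Since Published
episodes["normalised_downloads"] = episodes["total_downloads"] / episodes["days_since_published"]

print(episodes[["publish_date", "total_downloads", "normalised_downloads"]])
```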

See how this normalisation changes the graph to show almost the reverse of the raw data we began with. That’s an expected outcome — the audience has grown over time, and so has the average number of downloads per new episode. Phew.


Now the numbers are reversed, which reflects actual audience growth since day 1

Next, compensate for different groups of runtimes

If you release 5 x 30-minute shows, 7 x 40-minute shows, 10 x 45-minute shows and 2 x 55-minute specials, you have to compensate for the availability of those shows as part of the whole series.

If you don’t, the most frequent runtimes (i.e. 40-minute and 45-minute shows) are bound to feature more in your metrics when a new listener back-listens to older shows.

Ultimately your goal here is both quantitative and qualitative.

Quantitative: To make a fair comparison between different show lengths, to learn which lengths correlate with the most downloads.

Qualitative: This process will also surface important editorial factors. For example, you might discover that your short news shows are less popular than your long in-depth specials, and that’s important qualitative insight for steering your editorial decisions.


The math here is tricky but it works…

First, group your shows into bands, using whatever measure seems most appropriate. Here I have banded them into different ‘bins’ to roughly equalise the numbers. This can be a bit tricky if, as in this example, there is a wide range of runtimes to deal with; just try to group them into bands of roughly 2–3 minutes per bin.

Now let’s even out that big spike in 36–39-minute shows, and the dips too, using this formula:

Don’t be scared of the Σ symbol below: it just means summing up the normalised downloads in each bin, i.e. adding each episode’s normalised value into one number per bin.

Average Normalised Downloads for a Bin = (∑ Normalised Downloads in Bin) / (Number of Episodes in the Bin)
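Continuing the Python sketch from earlier, here is one way to do the binning and averaging, assuming the same hypothetical DataFrame now also has a runtime_minutes column. The bin edges below are purely illustrative; pick bands that suit your own spread of runtimes.

```python
import pandas as pd

# Assumed column: runtime_minutes. Illustrative 3-minute bands from 27 to 60 minutes:
# (27, 30], (30, 33], ... (57, 60].
bin_edges = list(range(27, 61, 3))
episodes["runtime_bin"] = pd.cut(episodes["runtime_minutes"], bins=bin_edges)

# Average Normalised Downloads for a Bin =
#   (sum of normalised downloads in the bin) / (number of episodes in the bin)
bin_summary = (
    episodes.groupby("runtime_bin", observed=True)["normalised_downloads"]
    .agg(["sum", "count", "mean"])
    .rename(columns={"mean": "avg_normalised_downloads"})
)
print(bin_summary)
```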

Now you get a very different kind of bar chart…

As you can see, the 48–51-minute bin (2889–3033 seconds) shows the highest average normalised downloads per show, indicating that runtimes in this range tend to perform better on a per-day basis once you adjust for show age and for the number of episodes in that runtime range.


Making sense of it all?

In the end, take your shows and try to plot a line through the normalised data, to find the sweet spot. This is a decent way to measure audience growth and engagement.

Then do the same with the runtime data — try drawing a best-fit curve through all the dots:

This isn’t so easy: it’s a polynomial fit, a statistical technique that draws out the trend in the data using the formula y = a₃x³ + a₂x² + a₁x + a₀ (a cubic, i.e. degree-3, polynomial). Rather than eliminating the outlying dots, it smooths across them and gives you the best-fitting curve through the data. Lots of stats programs (and ChatGPT etc.) can do this for you.
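As a rough sketch of what those tools are doing under the hood, here is how you might fit and plot that cubic curve yourself with NumPy and Matplotlib. The runtimes and averages below are made-up illustrative numbers, not data from the show.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative bin midpoints (minutes) and average normalised downloads (not real data).
runtimes = np.array([30, 33, 36, 39, 42, 45, 48, 51, 54])
avg_downloads = np.array([12, 14, 13, 11, 15, 18, 22, 21, 16])

# Fit y = a3*x^3 + a2*x^2 + a1*x + a0 (a degree-3, i.e. cubic, polynomial)
coeffs = np.polyfit(runtimes, avg_downloads, deg=3)
curve = np.poly1d(coeffs)

xs = np.linspace(runtimes.min(), runtimes.max(), 200)
plt.scatter(runtimes, avg_downloads, label="binned data")
plt.plot(xs, curve(xs), label="cubic best-fit curve")
plt.xlabel("Runtime (minutes)")
plt.ylabel("Average normalised downloads")
plt.legend()
plt.show()
```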


In conclusion: Data-supported insights — plus one huge caveat

For this podcast series, there’s a peak at around 50 minutes, more or less. That’s the sweet spot based on the shows dropped so far. That’s a quantitative measure of which shows have done best.

That doesn’t mean a different length won’t do better. It just means so far, the best shows have been around 50 minutes long. That’s a data-supported quantitative clue to help you steer your production goals.

Also, if you look at the content of the shows in that 48–55 minute zone, there may be another clue as to which show topics and formats have worked best with your audience. That is a data-supported qualitative insight to help you steer your editorial.

Ultimately, this information is useful for steering your decisions, but it’s not an equation for capturing lightning in a bottle. Experiment, measure, analyse.

That’s data, that’s KPIs and that’s showbiz.

