Is Segmentation Dead?

Segmentation is the practice of sorting people into groups on the belief that everyone in a group will react the same way to our ads.

Trying to get donations for your charity? You probably want to show your ads to middle-aged women. Want to sell retirement products? You had better target 45-55-year-olds with an income over 50,000 dollars (these are real examples from a big charity and a business I worked with).

Segmentation groups have historically been based on basic demographics such as age, gender, and location, and increasingly on more refined attributes, including income, leisure activities, or the car a person owns.

These segmentation groups - the jars of people - were used for targeting: choosing who you will show your ad to, essentially matching a segmentation group with an ad.

Now, think of your friendship group. You might all be the same age, live in the same city, and share the same gender. From an advertising point of view, you are all the same person. Yet you and Betty could not be more different from each other.

There are huge assumptions in advertising, assumptions linking your gender or age to certain products and services. Even before DEI teams started looking into them, there were already huge issues with these assumptions.

History of Segmentation

Imagine you owned a nail salon in the 80s. Your business is doing okay and you decide it'd be great to purchase some ad coupons in the local newspapers and magazines.

You go speak with the editors, and they tell you the type of people who read their publications: "My audience is principally women," says a make-up magazine; "My readers tend to be older," says a clock magazine.

Sounds good! You had noticed that nine out of ten of your nail salon's customers were women. Advertising to women sounds great.

This is level one segmentation: the same people, demographically, do the same things.

Then the internet arrived, and with it, databases. Suddenly, you could know much more about your customers, and not only from the moment they walked through your doors: you could follow them outside.

In my view, databases added two layers to segmentation. They added attribute variables (now it's not only women, but women with cars, pets, and red hair) - this could also be called refined profiling. And they added behavioural information - essentially what happened before and after people entered your shop (they visited the bakery next door, they dropped their kids off at school).

This is level two segmentation: the same people demographically do the same things based on what they own and the other things they do.
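The first two levels amount to hand-written filters over a customer database. A minimal sketch in Python, with all field names and records invented for illustration:

```python
# Levels one and two of segmentation as hand-written rules.
# All customer fields and values are invented for illustration.
customers = [
    {"gender": "F", "age": 34, "owns_car": True,  "visited_bakery": True},
    {"gender": "F", "age": 52, "owns_car": False, "visited_bakery": False},
    {"gender": "M", "age": 41, "owns_car": True,  "visited_bakery": True},
]

# Level one: demographics only.
level_one = [c for c in customers if c["gender"] == "F"]

# Level two: demographics plus owned attributes and observed behaviour.
level_two = [
    c for c in customers
    if c["gender"] == "F" and c["owns_car"] and c["visited_bakery"]
]

print(len(level_one), len(level_two))  # 2 1
```

The key point is that in both levels a human decides the rules; the database only supplies more fields to filter on.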

It's great to have the databases, but up to this point, they were only analysed by humans using basic statistical tools. They'd look for correlations, patterns, and apparent rules.

What pushed us to the third level of segmentation is powerful algorithms. Algorithms are connectors with bigger brains than ours. They allowed segmentation not only to connect attributes and behaviours happening right now, but to look at them in context.

Now it wasn't only women with pets and red hair who shopped at the bakery next door. It was women with pets and red hair who shopped at the bakery next door yesterday, buy coconut water and chilli flakes, and watch Friends on TV.

Why coconut water and chilli flakes?

The key difference here is how this behaviour was added to the segmentation. It wasn't a rational human decision; it was a conclusion drawn contextually by an algorithm.

It is not there because, to our human eye, it makes sense to associate coconut water with nail salon customers. It is there because of empirical evidence drawn out by the algorithm.

What's more, the algorithm determined that the nail salon customers had all of the behaviours and attributes above together and in this order.

So if these women are not watching Friends at the moment, they won't be part of the group. If they're watching Friends again in two months, they'll be part of the group.

This adds an extra layer of complexity only fathomable in the third level of segmentation: the way attributes and behaviours affect each other. This dramatically increases the complexity, from the additive 1+1+1+1 of level two segmentation to the multiplying factor of ordered combinations (4 attributes can occur in 4! = 24 different orders).
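To make the combinatorics concrete: the 24 figure corresponds to the possible orderings (permutations) of four attributes, and the count grows factorially as attributes are added, while level two only ever adds one rule per attribute. A quick check in Python, with the attribute labels taken from the example above:

```python
from math import factorial
from itertools import permutations

attributes = ["bakery visit", "coconut water", "chilli flakes", "Friends on TV"]

# Level two treats attributes additively: one independent rule each.
additive = len(attributes)  # 4

# Level three cares about the order attributes occur in,
# so every ordering of the same four attributes is a distinct pattern.
ordered = factorial(len(attributes))  # 4! = 24
assert ordered == len(list(permutations(attributes)))

print(additive, ordered)  # 4 24
```

With ten attributes the ordered count is already 10! = 3,628,800 - far beyond what a human analyst could enumerate by hand.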

And it makes sense.

Everything we do might lead to the behaviour we're researching. In the nail salon example, we could discover that women with pets go to the pet salon, get jealous of their cats, and decide to go to the bakery for a treat - but it is not enough! So they decide to get a treat for themselves as well. They already visited the hairdresser recently to dye their hair red, so they enter the nail salon instead.

Are we beaten?

Why can't humans extrapolate the above?

There are particulars to the third level of segmentation. The first is the scale of the information: the number of observable people, and the data on those people, is monstrous. Humans cannot go through it fast enough.

"Fast enough" is another reason why we're beaten. Behaviours are constantly evolving. The segmentation working at this very minute might stop working the next. It's a mess: the people are not stable, the attributes might change, the combinations might change. There's a constant need for what is called optimisation.

Depth is a third challenge. Humans doing segmentation tend to look at the present. Algorithms go far back into causal chains, looking at dozens of things that happened over time leading up to the researched behaviour.

These causal chains might be priming our behaviours. If we eat an ice cream today, we're primed to buy a holiday if offered one next week. If we did not eat an ice cream, we won't have the feeling of nostalgia pushing us to make the purchase a week from now.

The priming effect is also problematic for humans: how do we decide what counts as priming? In other words, how do we decide what is having an impact, what is relevant? If that person also ate a pancake, crêpes, and a soufflé, which one is relevant?

Too complex

The social world, experienced through markets, is too complex for humans to decipher. Algorithms can help us, because they do not introduce biased logical theories.

They're simply powerful contextual observers.

Rather than starting with a theory (nail salon customers must be women), the algorithms go the other way round: they start with the customers, in all their detail, and look for more of the same.
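One way to picture this bottom-up approach is a simple lookalike score: start from the observed customers and rank prospects by how closely their attributes overlap, with no prior theory about which attributes should matter. A toy sketch, assuming set-valued attribute profiles (all names and attributes invented):

```python
# Toy "lookalike" ranking: start from observed customers, not a theory.
# All names and attribute labels are invented for illustration.
seed_customers = [
    {"pet", "red_hair", "bakery", "coconut_water"},
    {"pet", "bakery", "chilli_flakes", "friends_tv"},
]

prospects = {
    "Alice": {"pet", "bakery", "coconut_water"},
    "Bob":   {"golf", "sports_car"},
    "Cara":  {"pet", "friends_tv", "chilli_flakes"},
}

def lookalike_score(profile, seeds):
    """Average attribute overlap (Jaccard similarity) with the seed customers."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return sum(jaccard(profile, s) for s in seeds) / len(seeds)

ranked = sorted(
    prospects,
    key=lambda name: lookalike_score(prospects[name], seed_customers),
    reverse=True,
)
print(ranked)  # ['Alice', 'Cara', 'Bob'] - most to least similar
```

Production systems use far richer models, of course, but the direction is the same: the group is defined by resemblance to real customers, not by a demographic hypothesis.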

What might happen when you leave your commercial destiny to an algorithm?

You might realise that you were wrong all along. It wasn't women you should have been interested in; it was men. I have seen this happen myself - the algorithm defying and completely obliterating our logic.

Why men?

Well, maybe men offer nail salon vouchers to their mothers, wives and sisters. Perhaps women are less likely to treat themselves. Perhaps, perhaps.

The point of the third level of segmentation is that the explanation does not matter. The rationalisation does not matter. All that matters is the mathematical, contextual behaviour observed and replicated by the algorithm.

And this is why human theoretical segmentation might be dead. With sample sizes growing and the cost of testing going down, spending time designing segmented groups might do nothing but waste resources, introduce human bias, and generate inflexibility.

The world is too complex, and segmentation algorithms are better at dealing with this complexity than we are.
