What's the future of 'surveillance capitalism' in the wake of the Facebook/CA revelations?

A business model which relies on hoovering up huge amounts of personal data from consumers is under threat from GDPR and the Cambridge Analytica revelations. What’s the future for ‘surveillance capitalism’, asks Ardi Kolah.

We tend to use the word ‘surveillance’ to describe state activity, such as spying on citizens for security or other purposes, and we tend to think of ‘mass surveillance’ in the context of the Edward Snowden revelations. But that isn’t what I’m talking about here.

‘Surveillance capitalism’ is a form of real-time, mass surveillance but isn’t an abstract theory of limited academic interest. It’s basically how the internet works. ‘Surveillance capitalism’ neatly summarizes what Facebook and Cambridge Analytica as well as hundreds of thousands of other brand owners are up to on a daily basis.

The term ‘surveillance capitalism’ was coined by Harvard Business School academic Prof. Shoshana Zuboff back in 2015 to describe large-scale commercial surveillance and how it could be used to influence human behaviour to sell more stuff.

It involves predictive analysis of big datasets describing the lives and behaviours of tens or hundreds of millions of people. This allows correlations and patterns to be mapped, insights about individuals’ attitudes, values, perceptions, beliefs and behaviours to be inferred, and future behaviour to be predicted with a high degree of precision. The platform then sells access to these insights to advertisers.

These advertisers (‘data controllers’ in GDPR parlance) will then attempt to exert surreptitious control over consumer behaviour through personalized and dynamic targeted advertising so that the individual – perhaps unconsciously – is influenced to buy a product/service or vote in a particular way.

This technique is part science and part ‘trial and error’ – it’s refined into a ‘dark marketing art’ by testing and re-testing numerous variations of ads on different customer segments to see what works best before unleashing this on the web.

The scary part is that every time you and I use the internet, we’re unwittingly taking part in such an experiment as companies and organizations try to figure out the most effective way of influencing our behaviour to drive incremental sales.

Does that feel creepy or cool? Post-GDPR, it’s creepy.

The challenge for marketers is to re-think how they comply with the higher data protection, privacy and security standards demanded by the GDPR without being creepy.

The charge of ‘surveillance capitalism’ has been levelled against Facebook in the wake of the Cambridge Analytica scandal where 87m people apparently had no idea they’d been subject to such a surreptitious experiment. And it’s been a wake-up call for marketers, judging by the media frenzy and the ‘monsterfication’ of brand owners that dabble in the ‘dark marketing arts’.

Max Schrems, founder of privacy lobbying group None of Your Business, was first off the blocks in launching a legal challenge against Facebook, WhatsApp, Google and Instagram for sending millions of ‘zombie GDPR emails’ that purported to force users into consenting to targeted advertising in order to continue using their services.

Forcing users to accept wide-ranging personal data collection in exchange for using a service is prohibited under the GDPR, because such ‘forced consent’ cannot meet the legal requirement that consent be freely given: a person who declines consent to that processing, but still wishes to use the product, service or app, must not be discriminated against or disadvantaged for doing so.

“The GDPR explicitly allows any data processing that’s strictly necessary for the service - but using the personal data additionally for advertising or to sell it on needs the users’ free opt-in consent,” Schrems observes.

“GDPR is very pragmatic on this point: whatever is really necessary for an app is legal without consent, the rest needs a free ‘yes’ or ‘no’ option. Many users don’t know yet that this annoying way of pushing people to consent is actually forbidden under the GDPR.”

Since it was founded in 2004, Facebook has grown to be the biggest social media platform in the world, gathering a massive amount of highly personalized data about its users, including their names, gender, email addresses and birthdays as well as optional data such as hometown, interests, relationship status, education, work and political or religious views and data derived through behavioural or predictive analytics.

Using this data, Facebook may suggest connections, guide the presentation of information on the platform and publish targeted ads. And as every marketer knows, Facebook’s ability to access information about users and their tastes and preferences is key to its business model.

For example, when a user “likes” a comment, article or video, this generates information about the user. Facebook is then able to help advertisers target – or exclude – users based on their tastes and preferences.

While Facebook doesn’t, for the most part, directly sell or share its detailed digital consumer profiles with third parties (at least not in the form of unified dossiers), it does allow other brand owners to “utilize” the personal data “without fully transferring it”, and it allows others to use its infrastructure to harvest more personal data. This business generated over $40bn in revenues last year alone.

Over the years, Facebook has made several important adjustments to its platform that alter the way it uses or allows others to use personal data. Initially advertisers could target users based on the information they volunteered in their profiles; the addition of the “Like” button allowed advertisers to target users based on their “Likes”, while another feature allowed them to target the Facebook friends of people who had “Liked” or otherwise interacted with their brand.

In 2012, Facebook created a feature that allowed companies to upload their own lists of e-mail addresses and phone numbers and have Facebook match this information to the customers’ Facebook accounts. Companies could then target, or exclude, subsets of these individuals based on other information in Facebook’s possession.
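In practice, list matching of this kind is typically done on normalized, hashed identifiers rather than raw e-mail addresses, so the raw addresses need never leave the advertiser’s systems. A minimal sketch of the matching step (the function names and data here are illustrative assumptions, not Facebook’s actual code):

```python
import hashlib

def normalize_and_hash(email: str) -> str:
    """Trim and lowercase the address, then SHA-256 hash it."""
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

def match_audience(advertiser_emails, platform_accounts):
    """Return the platform account IDs whose stored e-mail hash
    appears in the advertiser's uploaded (hashed) customer list."""
    uploaded = {normalize_and_hash(e) for e in advertiser_emails}
    return {acct_id for acct_id, hashed in platform_accounts.items()
            if hashed in uploaded}

# Illustrative data: the platform holds hashes of its users' e-mails.
platform = {
    "acct_1": normalize_and_hash("alice@example.com"),
    "acct_2": normalize_and_hash("bob@example.com"),
}
matched = match_audience(["Alice@Example.com ", "carol@example.com"], platform)
print(matched)  # {'acct_1'} - Alice matches despite case/whitespace differences
```

The normalization step matters: without it, trivial differences in casing or whitespace would produce different hashes and the match would silently fail.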

Today companies can capture information about very particular activities – such as specific webpage activities, swipes in a smartphone app, or types of purchases – in real-time and tell Facebook to immediately find and target the persons who performed these activities.

In 2014, Facebook started allowing brand owners to target users based on their behaviour on other websites: Facebook tracks users across any website that contains a “Like” button (e.g. “Click here to Like us on Facebook”). If, for example, someone was reading reviews about laptops, Facebook would let advertisers target that person as someone “interested in purchasing a laptop”.
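The mechanics behind this are straightforward: the embedded “Like” button loads from Facebook’s servers, so each page view sends the user’s tracking cookie along with the referring page’s URL. A toy sketch of how such browsing logs could then be turned into interest labels (the keyword rules, URLs and IDs here are all illustrative assumptions):

```python
# Toy model of interest inference from cross-site browsing logs.
# Each log entry pairs a tracking-cookie ID with the URL of a page
# that embedded the social widget.

INTEREST_RULES = {  # hypothetical keyword -> interest-label mapping
    "laptop": "interested in purchasing a laptop",
    "mortgage": "interested in home loans",
}

def infer_interests(browsing_log):
    """Map each cookie ID to the interest labels its visited URLs trigger."""
    profiles = {}
    for cookie_id, url in browsing_log:
        for keyword, label in INTEREST_RULES.items():
            if keyword in url:
                profiles.setdefault(cookie_id, set()).add(label)
    return profiles

log = [
    ("cookie_42", "https://reviews.example/best-laptop-2018"),
    ("cookie_42", "https://news.example/politics"),
]
print(infer_interests(log))
# {'cookie_42': {'interested in purchasing a laptop'}}
```

Real systems use far richer signals and statistical models, but the principle is the same: a persistent identifier plus a stream of visited URLs is enough to build an advertising profile.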

Around the same time, Facebook started allowing advertisers to target people based on “Ethnic Affinity”, which can be used as a proxy for race. For example, users who “Like” numerous rap artists might be “categorized as African-American”.

Beginning in 2013, Facebook partnered with several data brokers, including Acxiom and companies later acquired by Oracle. By 2017, six data brokers provided Facebook with “audience data,” the better to categorize and segment users.

Then, in 2018, after Cambridge Analytica’s use of Facebook data was widely publicized, Facebook severed ties with data brokers.

What Facebook does next in the wake of the full enforcement of the GDPR across the European Union will depend on whether Max Schrems gets to live his dream of putting the social media giant once and for all back in its place.

This article first appeared on the website of the World Advertising Research Center (WARC) on Monday 18 June 2018
