Debunking the myths: Why LinkedIn’s AI data practices still don’t hold up

At a time when data is the lifeblood of every digital interaction, the way companies handle our personal information has never been more crucial. Platforms like LinkedIn, which are woven into the fabric of our professional lives, hold a unique responsibility to safeguard the trust we’ve placed in them. Yet, as generative AI becomes the new frontier of innovation, there’s a growing tension between progress and privacy. LinkedIn’s decision to automatically use users’ data to train its AI models, without explicit consent, has sparked heated debate. Defenders of this practice argue that it’s harmless, that we have nothing to hide, and that AI needs our data to represent us. But these arguments don’t stand up to scrutiny. Privacy is not about secrecy, and consent cannot be assumed. In this piece, I’ll dismantle the most common defences of LinkedIn’s data practices, exposing why they fall short of both ethical and legal standards.

Privacy is not about having something to hide—it’s about control

Let’s start by dismantling this classic straw-man argument: “If you’re not doing anything wrong, why worry about privacy?”. This is a bit like saying, “If you’ve got nothing to hide, why not live in a glass house?”. Sure, it sounds plausible until you actually imagine doing it. The reality is, even the most law-abiding citizen wouldn’t want strangers peering through their windows 24/7, watching them binge Emily in Paris on Netflix or dance badly to Tshwala Bam in their kitchen.

Privacy is about controlling the narrative of your own life—it’s about deciding who gets to see which parts of your personal story.

Online privacy is the same. Just because LinkedIn—or anyone else—thinks your data is useful doesn’t mean they should have a blank cheque to access it. Your professional updates, opinions, and interactions are part of your identity. Letting LinkedIn siphon this off without explicit permission is like handing someone the keys to your car, hoping they don’t drive it off a cliff. And it’s not just about security; it’s about your right to decide what parts of your life you share and with whom.

Think of privacy as a dimmer switch, not an on-off button. You’re not hiding anything; you’re adjusting the brightness depending on who’s in the room. When LinkedIn assumes it can throw the lights up to full blast without asking, it’s not just an inconvenience—it’s a violation of your autonomy.


AI doesn’t need to steal to represent us

Now, onto this seductive little argument: “If we want AI to represent us, companies need our data to train their models”. This is a perfect example of a broken binary. It frames the choice as all-or-nothing: either hand over your data or accept that the future of AI will be riddled with biases.

But that’s like saying, “If you want a self-driving car, you must be okay with the carmaker tracking every single journey you take and everywhere you go, like Signal Hill for late-night horizontal refreshment”. No. Just no.

There are middle grounds—ways to build responsible AI without rifling through your digital wardrobe like a nosy parent.

Privacy-preserving techniques like differential privacy and synthetic data allow AI systems to learn without needing access to raw, identifiable personal data. This isn’t sci-fi—it’s happening now. We can have AI that learns patterns, improves services, and represents diverse experiences without reducing your life to fuel for the algorithmic fire.
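To make the point concrete, here is a minimal sketch of the differential-privacy idea, using a toy synthetic dataset and function names of my own invention (nothing here reflects LinkedIn’s actual pipeline): instead of handing a model raw profiles, the platform releases aggregate statistics with calibrated noise, so no individual’s presence or absence can be inferred from the output.

    import random

    def dp_count(records, predicate, epsilon=0.5):
        """Differentially private count of records matching `predicate`.

        A counting query has sensitivity 1: adding or removing one person's
        record changes the true count by at most 1. Adding Laplace noise with
        scale sensitivity/epsilon makes the released number epsilon-DP.
        """
        true_count = sum(1 for record in records if predicate(record))
        sensitivity = 1
        # The difference of two Exp(epsilon/sensitivity) draws is a Laplace
        # sample with scale sensitivity/epsilon (no extra libraries needed).
        rate = epsilon / sensitivity
        noise = random.expovariate(rate) - random.expovariate(rate)
        return true_count + noise

    # Toy, synthetic "profiles" -- purely illustrative, not real member data.
    profiles = [
        {"title": "Data Scientist", "open_to_work": True},
        {"title": "Recruiter", "open_to_work": False},
        {"title": "Data Scientist", "open_to_work": False},
    ]

    # The model trainer receives a noisy aggregate, never the raw records.
    print(dp_count(profiles, lambda p: p["title"] == "Data Scientist"))

Synthetic data takes the complementary route: fit a generator to aggregate patterns like these and train downstream models on fabricated records, so the originals never leave the vault.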

In fact, by using an opt-in model, where users consciously choose to participate, you create a data set that’s more robust. People who genuinely understand how their data is being used will share more willingly and thoughtfully, creating a healthier, more accurate training ground for AI. So, let’s bin this idea that we need to hand over the keys to our digital lives for AI to work. We don’t need to choose between Orwell and Skynet—we just need companies to respect our choices.


Opt-out: the digital age’s answer to ‘gotcha!’

Then we have the ever-so-generous opt-out model. LinkedIn pats itself on the back for allowing users the freedom to opt out of data collection for AI. But let’s not kid ourselves—this is like a digital version of those “subscription traps” where you sign up for a free trial, only to be charged forever unless you can find the tiny “cancel” button buried in some obscure settings menu. It’s not a feature; it’s a user experience sleight of hand designed to exploit our inattention.

This is where behavioural science really comes into play. We know from nudge theory that defaults matter. Opt-out systems thrive because most people don’t change the settings. It’s not laziness; it’s human nature.

Our brains are wired to conserve energy, and most of us don’t go trawling through privacy settings unless we’ve had a privacy epiphany—usually after something has already gone wrong.

The ethical model here is opt-in. Give people a clear, simple choice upfront. LinkedIn doesn’t need to trick people into donating their data to its AI project like it’s some digital version of “How many hungry children would you like to feed?” at South Africa’s favourite fried chicken outlet. They should ask users directly, at the right moment, whether they’re comfortable with their data being used for AI training.
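To show how small the engineering difference between the two models is, here is a hypothetical opt-in consent check, a sketch with field and class names of my own invention, not LinkedIn’s: consent for AI training defaults to withheld, and a member’s data only becomes eligible once they have explicitly, and verifiably, granted it.

    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class MemberConsent:
        # Opt-in: the default is "not granted" until the member acts.
        ai_training_granted: bool = False
        granted_at: Optional[datetime] = None

        def grant(self) -> None:
            """Record an explicit, timestamped opt-in."""
            self.ai_training_granted = True
            self.granted_at = datetime.now(timezone.utc)

    def eligible_for_training(consent: MemberConsent) -> bool:
        # Data is used only when consent was actively given, never by silence.
        return consent.ai_training_granted and consent.granted_at is not None

    member = MemberConsent()
    print(eligible_for_training(member))   # False: silence is not consent
    member.grant()
    print(eligible_for_training(member))   # True: an explicit, recorded choice

Flipping a single default is the entire difference between “gotcha” and genuine consent, which is exactly why defaults are where the ethics live.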

And let’s be clear—burying the option deep in a settings menu isn’t real consent. It’s like someone asking for permission after they’ve already eaten your lunch.

The “no harm” fallacy: we’ve heard this before

Another argument floating around is the “no harm” defence. “Relax”, they say, “there’s no harm done here. LinkedIn’s AI isn’t going to ruin your life—it’s just improving the platform”. It’s the tech world’s equivalent of “don’t worry, the robots are here to help”.

But as any good behavioural economist will tell you, the absence of immediate harm doesn’t mean there’s no risk. Just because your data is being quietly whisked away doesn’t mean there aren’t long-term consequences. Do you remember when we thought sharing our lives on social media was harmless fun? Fast forward a few years, and suddenly, it’s influencing elections, stoking divisions, and making us question our own sense of reality.

AI trained on personal data today can inadvertently expose sensitive information tomorrow. Your innocuous posts and interactions could become fodder for something far beyond what you signed up for—whether that’s in biased AI outputs, discriminatory models, or data breaches. So, let’s not pretend that this is a risk-free activity. Just because it’s invisible doesn’t mean it’s innocuous. This is less “no harm, no foul” and more “silent erosion of trust over time”.


Regulatory complacency elsewhere doesn’t excuse inaction here

Finally, the notion that we should sit back because other regulators haven’t acted aggressively is about as persuasive as saying you shouldn’t lock your doors because your neighbour doesn’t. Global regulation isn’t a race to the bottom; it’s a chance for countries like South Africa to set higher standards.

It’s the equivalent of being the first person in the neighbourhood to install solar panels. Sure, the rest of the street may still be on the grid, but by setting a higher bar, you lead by example, showing that responsible behaviour pays off in the long run. POPIA, South Africa’s Protection of Personal Information Act, exists precisely for cases like this, where international tech giants think they can operate without local oversight.

Just because most other regulators haven’t cracked down doesn’t mean South Africa should follow suit.

In fact, the smarter move would be to act now and demonstrate that you don’t need to wait for a crisis to get your house in order. Africa, in particular, has the chance to set a progressive, user-first standard in the way data rights are respected. This isn’t just about legal compliance; it’s about creating an environment where users can trust the digital systems they engage with, knowing their data is protected.


Trust: the forgotten currency in the digital world

At the heart of all of this is trust—the ultimate currency of the digital age. LinkedIn and companies like it depend on the trust of their users to keep their platforms thriving. But trust is a fragile thing, easily broken and hard to rebuild. When companies take liberties with data, especially under the guise of improving user experience, they’re playing fast and loose with that trust.

If LinkedIn wants to build AI models that truly “represent us”, the first step is to show it respects us—by asking for our permission, being transparent, and giving us real control over our data. In the end, this is about more than AI; it’s about creating a digital future where trust, consent, and respect for privacy are the foundation—not afterthoughts tacked on when it’s too late.

Let’s be honest: LinkedIn, like any business, is trying to extract value from its users. But it needs to remember that value isn’t just something you take; it’s something you build together. The most valuable asset LinkedIn has is not the data itself—it’s the trust users place in the platform. Lose that, and no AI algorithm will be able to fix what’s broken.
