The Dark Side of AI Censorship: When Machines Protect the Powerful & Silence the Truth
? Matt Mahmood-Ogston

The Dark Side of AI Censorship: When Machines Protect the Powerful & Silence the Truth

Big Tech Doesn’t Want You to Read This.

Artificial Intelligence was supposed to empower us.

Give us access to knowledge.

Democratise information.

Help us tackle climate change, social justice, and human rights.

But instead?

It’s being weaponised against us.

Corporations and governments are tightening their grip on AI models. Censoring conversations about LGBTQ+ rights, women’s rights, and sustainability.

Hiding inconvenient truths.

Silencing voices that challenge power.

As a purpose-driven brand, you have a responsibility to take notice... and take action.

Take DeepSeek, a Chinese AI chatbot that has been widely criticised for actively censoring discussions about LGBTQ+ rights. Ask it about same-sex marriage or gender identity, and it dodges the question. Bring up climate activism, and you get a vague, state-approved response.

Now, you might think, that’s just China. That wouldn’t happen here.

Think again.

Who Controls the Truth?

AI isn’t neutral.

The people who fund it, build it, and set the rules? They’re almost exclusively white, cisgender, heterosexual men.

The same demographic that controls Big Tech.

The same demographic that controls traditional media.

The same demographic that holds political influence.

These aren’t just companies.

They are gatekeepers of truth.

They can decide:

?? What’s visible.

?? What’s suppressed.

?? What stories get told - and what stories disappear.

And that should terrify you.

AI Bias Is Already Silencing Marginalised Groups

The problem? AI learns from the data it’s given. And that data? It’s full of bias.

Here’s what’s already happening:

  • AI image generators underrepresent women and people of colour in leadership roles. Research has shown that AI-generated images place white men in executive roles at disproportionate rates while assigning women to lower-status occupations.
  • AI job filters discriminate against women and LGBTQ+ candidates. Amazon’s hiring algorithm was found to downgrade resumes that included women’s colleges or LGBTQ+ organisations.
  • AI-powered policing tools target minorities at a disproportionate rate. Predictive policing algorithms used in the US have been found to reinforce racial biases, leading to over-policing of Black and brown communities.

And now?

Some AI models are being trained to ignore discussions on human rights entirely.

Try discussing?LGBTQ+ rights?on certain language models, and you’ll hit a brick wall. A UNESCO study revealed that large language models (LLMs) exhibit?gender bias, homophobia, and racial stereotyping, often failing to appropriately address LGBTQ+ topics.?

Try calling out?corporate greenwashing, and responses turn vague. Research indicates that some AI systems, due to their training data and design, may avoid or inadequately address topics like environmental ethics and corporate responsibility, leading to superficial or evasive answers.?

Try asking about?systemic inequality, and watch how fast the topic gets sidestepped. Studies have shown that AI models can exhibit?covert racism and dialect prejudice, leading to the marginalisation of discussions on systemic inequality.?

This isn’t an accident.

It’s design.

Alexis Curtis-Harris recently wrote about Meta boldly announcing itself as an anti-LGBTQIA+ company.

The Impact on Sustainability, Climate, and Social Progress

The consequences are profound:

  • When AI censors conversations around sustainability, we lose momentum in the fight against climate change.
  • When AI ignores human rights, we lose protection for the most vulnerable.
  • When AI hides stories of corporate abuse, we lose accountability.

And who benefits?

The same billionaires funding these AI models.

The Hidden Environmental Cost of AI Censorship

Most people don’t realise AI has a massive carbon footprint. Training AI models consumes more energy than entire countries.

The Future of AI Depends on Us

This isn’t a problem we can ignore. Because once censorship is baked into AI, it becomes the default reality.

What can we do?

  1. Call it out. If AI models are suppressing crucial discussions, we need to shine a light on it. Post about it. Report it. Demand transparency.
  2. Support open-source AI. One way to ensure AI serves the people - not corporations - is to make it free, open, and accountable.
  3. Diversify who builds AI. The tech industry needs more LGBTQ+, women, and voices?from the global majority in decision-making roles.
  4. Push for regulation. AI shouldn’t be controlled by a handful of billionaires. We need laws that protect human rights in AI development.

This isn’t just about AI.

It’s about the future of information.

If we don’t act now, the next generation will grow up in a world where the truth is controlled by a machine that serves the powerful.

And once that happens?

It’s game over.

What’s your take? Have you noticed AI censorship creeping into conversations that matter?


About the author, Matt Mahmood-Ogston

Thank you for reading this edition of Building Brands with Purpose (formerly 'Becoming a Personal Brand'). A newsletter that explores the intersection of changing the world + storytelling through branding.

Hit the subscribe button to receive a new edition each week.

Who am I?

I'm a social impact photographer for half of my week, and a charity CEO for the other half.

I use my unique blend of creative skills and lived experience to support brands that want to document their positive impact on the world.

Need some help documenting your social impact? Get in touch.

My award-winning work has been seen on channels such as BBC, ITV, Channel 4, and Sky; I've delivered over 80 public talks and keynotes and worked with iconic brands such as Google, Magnum Photos, Meta, Capgemini, RBS, NatWest, Barclays, Lloyds Bank, TSB and AVID. Plus I've helped a Dragon from BBC’s Dragons’ Den brand and launch five startups.

In my rare moments of downtime, you'll find me working on Bona Parle, my new social impact platform for creators and changemakers.

View my portfolio or follow me here on LinkedIn → Matt Mahmood-Ogston ??

Jason Bootle

I deliver people-first ROI | Product & Design Leadership | 1-1 and Team Coaching & Mentoring | Mental Fitness Coaching | Speaking

1 个月

Great article Matt Mahmood-Ogston ??. AI is a reflection on humanity and it is definitely up to us to call out those biases.

回复
Mark Healey

Independent Hate Crime Specialist: Founder of 17-24-30 National Hate Crime Awareness Week (1184819). Passionate about tackling all forms of Hate Crime, LGBTQ+ Community Development and Community Safety.

1 个月

The golden age of social media invented by people who wanted to connect the world is over, now it is in the hands of those who want to control the world. Time to review the tech we use and ensure we are not contributing to systems that will oppress us.

Matt Mahmood-Ogston ??

Social Impact Photographer & Storytelling Consultant Helping Brands, Charities & Funders Document & Share their Impact with Authenticity ?? ESG, CSR & Impact ?? Award-Winning Human Rights Campaigner & Charity CEO ?????

1 个月

?? AI isn’t just reflecting biases - it’s amplifying them. The gatekeeping isn’t always obvious, but it’s shaping what people see, discuss, and believe. If AI is reinforcing existing power structures, how do we challenge it? What are you seeing in your own work?

要查看或添加评论,请登录

Matt Mahmood-Ogston ??的更多文章