AI vs. Humans: Who Should Control Online Safety?

As tech giants like TikTok reduce their human moderation workforce and replace it with AI systems, concerns about the phase-out of trust and safety teams are mounting. TikTok's recent layoffs, affecting hundreds of employees globally, underscore a broader shift toward automated content moderation, in which 80% of guideline-violating content is now removed by AI.


Why is this development so concerning?

The answer lies in the nuanced nature of human judgment, especially in sensitive areas like hate speech, DEIA (Diversity, Equity, Inclusion, and Accessibility), and political discourse.

As Meta pivots away from fact-checking toward a "Community Notes" model like X's, relying more on user judgment than on expert moderation, the risk of misinformation amplification grows. This reliance on AI and user-driven moderation could allow harmful material to circulate more freely, degrading the quality and safety of digital spaces.

The Alpha 101: What is a Trust and Safety team?

A Trust and Safety team is like the guardian angel of an online platform. They're there to make sure the environment is safe and welcoming for everyone. This team handles everything from keeping an eye on user posts to make sure nothing harmful gets through, to enforcing the platform's rules. They also jump into action during crises to protect users, and they work on tools and features that help keep everyone safe. Essentially, they're all about making sure the platform is a safe place where users can confidently interact and share.

This shift also has significant consequences for employment in the tech sector. Many skilled professionals who have honed their ability to manage complex human interactions online may find themselves displaced, not by better or more empathetic problem-solvers but by algorithms that promise efficiency at the cost of nuance and contextual awareness.

Expert Take: The Future of Trust & Safety Roles

Franklin Graves, attorney and author of the Creator Economy Law newsletter, sees the phase-out of Trust & Safety teams as part of a larger industry trend:

“In many ways, this wave of layoffs in Trust & Safety roles isn’t surprising. I expect these types of roles to continue decreasing as AI tools surpass previous limitations in detecting and managing problematic content… But with the rapid advancements in generative AI, particularly in reasoning capabilities, companies are increasingly relying on automation to handle more complex moderation tasks with fewer human touchpoints.”

“That said, there’s a broader trend at play here. Across the industry, we’ve seen a widespread reduction in T&S teams, often impacting those working on politically sensitive or socially complex areas—hate speech, misinformation, DEIA, and politics-related content. While some of these reductions may be driven by cost-cutting measures or strategic shifts, others seem to reflect broader societal and regulatory pressures on platforms to redefine their approach to content governance.”

His concern? AI might not yet be ready to replace the human oversight required for nuanced, high-stakes content moderation.

Why You Should Care

AI-driven moderation isn’t just a tech industry problem—it’s a societal issue that affects what we see, believe, and how we interact online. Do we really want algorithms to decide what is harmful, what is offensive, or what is "true"? The internet was built as a space for human connection, yet we’re handing over its gatekeeping to machines that lack the ability to understand context, intent, or lived experience.

There’s a balance to strike here. AI can be a useful tool, but it shouldn’t replace human oversight where it matters most. If we let cost-cutting and efficiency fully dictate moderation policies, we risk an internet that’s either too sanitized or dangerously unchecked—both of which threaten the spaces we rely on to communicate, learn, and engage with the world.
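To make that balance concrete, here is a minimal sketch of what a hybrid moderation flow can look like: an automated classifier handles the clear-cut cases, while uncertain or sensitive content is routed to a human reviewer. Everything in it is hypothetical (the thresholds, the topic list, the toy scoring function) and is not any platform's actual system.

```python
# Hypothetical hybrid moderation flow: AI handles clear-cut cases,
# uncertain or sensitive content is escalated to a human reviewer.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    text: str
    topic: str  # e.g. "hate_speech", "politics", "general"


def classifier_score(post: Post) -> float:
    """Return a 0-1 probability that the post violates guidelines.

    Illustrative placeholder: a real system would call a trained model here.
    """
    flagged_terms = {"scam", "threat"}
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, 0.4 * hits)


SENSITIVE_TOPICS = {"hate_speech", "politics", "deia"}
REMOVE_THRESHOLD = 0.95   # auto-remove only when the model is very confident
ALLOW_THRESHOLD = 0.10    # auto-allow only when the risk looks negligible


def route(post: Post) -> str:
    score = classifier_score(post)
    # Nuanced, high-stakes categories always get human review.
    if post.topic in SENSITIVE_TOPICS:
        return "human_review"
    if score >= REMOVE_THRESHOLD:
        return "auto_remove"
    if score <= ALLOW_THRESHOLD:
        return "auto_allow"
    return "human_review"  # the uncertain middle goes to people


if __name__ == "__main__":
    posts = [
        Post("1", "Check out my new video!", "general"),
        Post("2", "This is a scam and a threat", "general"),
        Post("3", "A take on the election results", "politics"),
    ]
    for p in posts:
        print(p.post_id, route(p))
```

The detail worth noticing is that the thresholds and the "sensitive topics" list are policy decisions, not model outputs. Cutting humans out of the loop doesn't remove that judgment; it just hides who is making it.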

Other headlines to check out:

AI

Creator Economy

Web3

The Hidden Struggles of Creators—And Why We Need to Talk About Them

This video perfectly captures the unseen battles creators everywhere face—burnout, pressure, and the relentless need to keep up.

That’s why I started Creators 4 Mental Health—an initiative dedicated to bringing mental well-being tools to the creator economy through events, content, research, and education. Our goal is to ensure that creators have the resources to prioritize their mental health, find real support, and manage stress—without sacrificing their passion.

Because thriving as a creator shouldn’t come at the cost of your mental health.

We've also launched our own monthly newsletter to share related wellness and mental health insights, resources, and updates on how we're making an impact. Subscribe here!

If you or your company want to collaborate or support this mission, message me—let’s build a healthier creator economy together.

Gentle Reminder

This might surprise you, but I hear a lot of no’s—or sometimes, nothing at all. It’s a reminder that progress isn’t always immediate, and persistence is part of the journey.

So when the yeses come, when things start moving forward—even if it’s taken years—I make sure to celebrate them because every small win is still a step forward.

A little reminder to appreciate your own progress, no matter how long it takes.

Advertise with Us

Remember, I'm Bullish on you!

With gratitude,


