From AI to teen protections: How 2024’s online safety trends will shape 2025

In late 2023, we predicted five key trends in online safety for 2024, all of which came to pass. Unsurprisingly, artificial intelligence – particularly generative AI – remained at the forefront of industry conversations and market innovation. While some speculated that AI might eliminate the need for human moderation, we saw the opposite: our business grew as this advanced technology increasingly required human oversight to ensure appropriate application and mitigate risks.

Equally significant was the growing wave of global tech regulation, targeting not just AI but also social media and digital platforms. Governments worldwide introduced policies aimed at addressing societal concerns such as misinformation, data privacy, and the impact of digital platforms on youth. Speaking of misinformation, we certainly saw and moderated it, but the torrent we predicted – particularly AI-generated, election-related misinformation – proved surprisingly modest.

We also predicted that content moderation teams’ real-time responses to global crises would remain essential, and 2024 was no exception. Between the ongoing wars in Ukraine and the Middle East, natural disasters like the catastrophic earthquakes in South Asia, and widespread political unrest in multiple regions, the need for swift, accurate, and culturally informed content moderation was more critical than ever.

Finally, we rightly anticipated that 2024 would bring heightened focus on social media’s impact on teens and kids, and let’s just say that Australia’s recent landmark ban on social media for teens is a precursor of much more to come.

With that in mind, our VP of Trust & Safety, Alexandra Popken, offers her predictions for 2025.


1. Kids/teen age verification will take center stage.

2025 will likely solidify age verification as a global standard. Governments across Europe, North America, and Asia are likely to adopt similar measures. While aimed at protecting young users, these policies will also ignite debates about privacy, enforcement challenges, and the potential for inequitable access to online spaces.

2. Trump administration's impact on legislation.

With the return of Donald Trump to the White House, the regulatory landscape in the US is expected to shift. This includes a potential pivot towards deregulation in AI and social media to promote innovation and competitiveness, particularly against China. Simultaneously, the administration’s stance could embolden conservative voices in content moderation, steering social media policy enforcement further to the right.

3. Generative AI will require more moderation than ever.

Even with a potential slowdown in Trump-era AI regulations, generative AI platforms will increasingly depend on robust moderation. The tragedy involving Character.AI highlighted the urgent need for proactive safeguards. As generative AI becomes more pervasive, companies must prioritize moderation to mitigate risks, protect users, and build trust in their technologies. WebPurify is proud to partner with leading AI companies to provide these protections.

4. Content moderation x human rights challenges will continue.

The intersection of content moderation and human rights has never been more challenging, as evidenced by the ongoing Israel/Palestine conflict. Platforms face immense pressure to balance free expression with the need to prevent harmful content, disinformation, and incitement to violence. This tension is further complicated by the global nature of online discourse, where different cultural, legal, and political norms often conflict with one another. Platforms have come under fire for making decisions perceived as biased or inconsistent – an easy trap to fall into when enforcing policies at scale. We can expect this tension to continue.

5. The trust & safety solution space will grow.

As unique use cases continue to emerge, the Trust & Safety landscape will see an expansion of specialized suppliers offering tailored solutions. This growth reflects the rising demand for niche expertise, from generative AI moderation to misinformation management. A notable trend gaining momentum is the transition of Trust & Safety leaders from platform roles to solution providers. Drawing on their firsthand experience, these leaders are now building the tools and systems they once needed at tech companies, driving innovation and reshaping the industry with practical, user-centered solutions.

Last year underscored the complex, evolving challenges and opportunities in the online safety landscape.

From the rise of generative AI requiring human oversight to the intensifying global push for regulation, 2024 proved that this space is anything but static.

As we look to 2025, the trends we foresee – such as a growing focus on youth protections, shifts in US regulatory dynamics, and the continued evolution of trust & safety solutions – will demand innovation, collaboration, and resilience from all stakeholders.

We remain committed to leading in this space, leveraging our expertise to help platforms navigate these challenges responsibly and effectively. The work of ensuring safe online environments has never been more critical, and we’re ready to partner with our clients and peers to meet the moment.
