STAR FRAMEWORK

CCDH's Global Standard for Regulating Social Media

What is CCDH's STAR Framework? How can collective action and legislation address the dangers of online hate and misinformation?


Through CCDH’s STAR Framework, we aim to establish key global standards for social media reform to ensure effectiveness, connectedness and consistency for a sector whose reach impacts people globally.

We need to reset our relationship with technology companies and collectively legislate to address the systems that amplify hate and dangerous misinformation around the globe. The STAR Framework draws on the most important elements for achieving this: Safety by Design, Transparency, Accountability and Responsibility.

Introduction from CCDH’s CEO Imran Ahmed

Our digital spaces

Internet content is dominated by a handful of companies, owned by a small coterie of Big Tech billionaires. This elite owns the technology that connects 4.5 billion people around the world and creates a platform on which individuals can share information, form new relationships, build communities, develop their brands, and transact business. The platforms produce little content themselves; instead, they have built business models that monetize the content produced by billions of people, selling both audiences and data on the psychology of those audiences to anyone seeking to sell their own products, services, brands and ideas.

The communities that form on these platforms, and the behaviours, beliefs and values that emerge from them, increasingly touch every aspect of offline society too. Sometimes that is good. Sometimes it is bad. The tech companies and their executives know it can be bad, which one would expect to lead them to curate these environments; instead, their only concern is the imperative for growth and the acquisition of eyeballs to sell to advertisers. This was most pithily explained by the Chief Technology Officer of Meta, who wrote in an internal memo:

“So we connect more people. That can be bad if they make it negative. Maybe it costs a life by exposing someone to bullies.

Maybe someone dies in a terrorist attack coordinated on our tools. And still, we connect people.

The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. It is perhaps the only area where the metrics do tell the true story as far as we are concerned.”

Regulation and Big Tech

For many years, this elite of billionaire owners has postured as operating under a utopian charter myth of neutrality and virtuous contribution to the growth of human understanding through social media. At its core, their proposition is an old-fashioned advertising business that monetizes cheaply-acquired content by promoting whatever is most salacious, titillating, controversial, and therefore “engaging”. This posture was designed to stave off the moment when regulators might turn their eye to the industry.

‘Social media’ is not a synonym for the Internet, or even for technology; yet by hiding behind the techno-utopian halo of online innovation, these companies have both hidden the banal atavism of their core business models and avoided real scrutiny of the harms they cause. Many opine that this harm is inadvertent or unavoidable. In fact, neither claim is true.

The laws that seek to regulate this enormous industry, which directly affects billions of people, were for the most part created before social media companies existed. In the United States, they were codified in Section 230 of the Communications Decency Act of 1996, which sought to protect bulletin boards and newspaper comment sections from third-party liability in order to foster innovation and growth in a fledgling industry.

This led to decades of regulatory ambivalence, with the international community adopting a ‘hands-off’ or, at best, an individual content-based approach to regulating online harm, and technology companies treated as neutral actors in this environment. Through this permissive regulatory environment, which functions without checks and balances, tech companies were encouraged to adopt aggressive profit-driven business strategies following what Mark Zuckerberg described as a “move fast and break things” maxim, as outlined in his 2012 letter to investors.

Big Tech's failure

Things are, indeed, broken. Through our work at the Center for Countering Digital Hate (CCDH), we have developed a deep understanding of the online harm landscape. Since 2016, we have researched the rise of online hate and disinformation and have shown how easily nefarious actors exploit digital platforms and search engines that promote and profit from their content. CCDH has studied the way anti-vaccine extremists, hate actors, climate change deniers, and misogynists weaponize platforms to spread lies and attack marginalized groups. Through our work, we have seen the depth and breadth of harm that tech companies profit from on a daily basis.

What has remained consistent across all types of harmful content is an absence of proper transparency and a failure by platforms and search engines to act. Our research and advocacy work show repeated failures by social media companies to act on harmful content or on the actors and networks sharing it. We have demonstrated how the companies’ algorithms, systematically biased towards hate and misinformation, have had a damaging impact on our information ecosystem.

The failure of social media companies to act on known harmful content connected with terrorism, racism, misogyny and online hate is a violation of their own terms and conditions, of the pledges made to the international community while the cameras were rolling, and of the inherent dignity to which the victims of tragedies like Buffalo, Christchurch and Myanmar were entitled: the right to live safely in their communities and to be safe from extremist, racist terrorism. This failure to act is the reality of the self-regulation environment. Self-regulation means no regulation.

A Framework that makes our digital spaces safe for all

The status quo cannot stand. It damages individuals, communities and our democracies. CCDH research has demonstrated the need for legislation that changes the fundamental business models, and therefore the behaviour, of the platforms that profit from the spread of misinformation, disinformation, conspiracy theories and online hate, whether by bad actors or by the platforms’ own systems. CCDH has advised the UN, the UK, the US and other governments on disinformation, violent extremism and how conspiracy theories can overwhelm fact-checking countermeasures and cause considerable real-world harm.

Following the CCDH Global Summit in May 2022, we saw a need to develop a values- and research-driven framework to support global efforts to regulate social media and search engine companies. In this document, we set out the core elements of the STAR Framework with explanations and examples from our research. Through the STAR Framework, we aim to establish key global standards for social media reform, ensuring effectiveness, connectedness and consistency for a sector which impacts people globally.

The impact is real – on people, communities and democracy. We cannot continue on the current trajectory, with bad actors muddying an already dangerous information ecosystem and a broken Big Tech business model driving offline harm. We need to reset our relationship with technology companies and collectively legislate to address the systems that amplify hate and dangerous misinformation around the globe. The STAR Framework draws on the most important elements for achieving this:

  • Safety by Design
  • Transparency of algorithms, rules enforcement and economics (advertising)
  • Accountability to independent and democratic bodies
  • Responsibility of companies and their senior executives

Find out more at counterhate.com
