Meta's policy pivot: Implications for New Zealand's Code of Practice for Online Safety and Harms

Context of the Code

Having been involved in the awful, cosmetic consultations in New Zealand before its launch, I have been publicly dismissive of, and extremely critical of, the Aotearoa New Zealand Code of Practice for Online Safety and Harms.

Released in 2022 to much fanfare, and championed by Netsafe New Zealand - ironically, an institution that few trust, and one defined at the time by human rights breaches, financial mismanagement, bullying, and much worse - the Code was Facebook's strategic gamble in New Zealand to avoid hard regulation on platform accountability, integrity, and safety.

Referencing tweets by Jason Kint on how Facebook was grilled by the Canadian Parliament, I tweeted at the time that the company did one thing there while, with the Code in New Zealand, promising to be, and do, something else entirely. The hypocrisy was evident even then.


Incredibly, Facebook tried surreptitiously, and repeatedly, to introduce the same flawed Code to Sri Lanka - and I mean quite literally the exact same text as what was adopted in New Zealand, sans all references to Māori.

Upon the launch of the Code in New Zealand, Meta noted,

At Meta, we’re looking forward to working with the stakeholders to ensure the Code sets in place a framework to keep Kiwis safe across multiple platforms by preventing, detecting, and responding to harmful online content. Combating online harmful content will take a whole of society effort and the Code is not intended as a total solution to this challenge. It is a genuine attempt by responsible industry players to increase safety outcomes in New Zealand, focusing on trust through transparency.

How does this highfalutin promise hold up with Meta's significant policy pivot in early January 2025?

Meta's new policies and the Code

Meta's new Hateful Conduct policies are a significant shift away from the policies previously enforced to strengthen platform integrity. My position on this in relation to New Zealand, and the rest of the world is clear - it will cost lives.

However, as far as I know, no one's studied the extent to which Meta's new policies impact the Code in New Zealand, and its meaningful implementation moving forward. The prognosis is terrible.

Fundamental policy contradictions: The Code explicitly commits signatories, including Meta, to "implement, enforce and/or maintain policies and processes that seek to prohibit or reduce the prevalence of hate speech" (Measure O3M10). However, Meta's policy revisions appear to deliberately weaken several key protections against hate speech, particularly regarding vulnerable groups. This creates a direct conflict with the Code's guiding principle of Mana Tangata (dignity), which "emphasises the importance of civility and humanity in the care and protection of all people online."

Protected characteristics, and enforcement: The Code requires signatories to provide safeguards against online hate speech targeting protected characteristics. Meta's revised policy maintains a nominal framework of protected characteristics but introduces significant carve-outs that may effectively nullify these protections:

Gender Identity and Sexual Orientation

  • The removal of specific protections against dehumanising transgender people (e.g., using "it" pronouns)
  • The explicit allowance of "allegations of mental illness or abnormality" when based on gender or sexual orientation
  • Permission for exclusionary language in "political and religious discourse about transgenderism and homosexuality"

These changes contradict the Code's Outcome 3, which requires "safeguards to reduce the risk of harm arising from online hate speech."

Dehumanising, denigrating speech, and related harms

While Meta maintains some baseline prohibitions against dehumanising comparisons (e.g., comparing protected groups to insects or animals), the revised policy creates new permissible categories of dehumanising speech:

  • Comparisons of women to household objects or property
  • Comparisons of people to faeces, filth, bacteria, viruses, diseases
  • Use of terms like "primitives"
  • Denial of existence of certain identity groups

This violates both the letter, and spirit of the Code's commitment to reduce "harmful stereotypes historically linked to intimidation or violence" (O1M4).

Impact on bullying, and harassment protections

The Code requires signatories to "implement, enforce and/or maintain policies and processes that seek to reduce the risk to individuals (both minors and adults) or groups from being the target of online bullying or harassment" (O2M6). Meta's deletion of warnings against certain explicit insults and expressions of hate (e.g., specific derogatory terms) undermines this commitment.

Transparency, and accountability challenges

The Code requires signatories to:

  • "Enhance transparency of policies, processes and systems" (Commitment 4.3)
  • "Publish and make accessible information on relevant policies, processes, and products that aim to reduce the spread and prevalence of harmful content online" (O10M40)

Meta's policy revisions raise questions about compliance with these transparency requirements, particularly regarding:

  • The rationale for weakening existing protections
  • The evidence base supporting these changes
  • The potential impact on vulnerable communities

New Zealand's cultural context, and Te Ao Māori

The Code explicitly incorporates Māori cultural values and principles, including:

  • Mahi tahi (solidarity)
  • Kauhanganuitanga (balance)
  • Mana tangata (dignity)
  • Mana (respect)

Meta's policy revisions, particularly those allowing for increased exclusionary and dehumanising speech, appear to conflict with these cultural values and principles that emphasise collective wellbeing and mutual respect. Much more on this below.

Disinformation, truth decay

The allowance of terms/speech acts like "China virus" conflicts with the Code's commitments regarding misinformation, and disinformation (Outcomes 6 and 7), particularly in relation to "highly significant issues of societal importance" such as public health.

Compliance, and enforcement implications

The policy changes raise serious questions about Meta's ongoing compliance with the Code. The Code provides for:

  • Regular compliance reporting (Section 5.4)
  • Review by an Oversight Committee
  • Possible termination for non-compliance (Section 3.5.1)

Given the extent of the divergence between Meta's new policies, and the Code's requirements - which I'd submit are irreconcilable - this situation may warrant review by the Administrator and Oversight Committee to determine whether Meta remains in compliance with its commitments as a signatory. But no one, to date, has asked these questions.

Four core values

The Code's website notes that it is guided by four key values, sourced in Te Ao Māori. Every one of these is massively impacted, and significantly undermined by Meta's policy changes.

Mahi tahi | Solidarity

  • The policy changes demonstrate a significant departure from the collaborative ethos inherent in mahi tahi. Meta's unilateral relaxation of hate speech protections undermines the collective industry-government-civil society framework established by the Code.
  • Specific manifestations include, but will not be limited to, the removal of protections against self-admitted discriminatory statements (e.g., racism, homophobia), the weakening of collaborative content moderation frameworks, and the potential fragmentation of industry standards for online safety.
  • It's significantly concerning that the policy revisions will create asymmetric, majority user-driven, algorithmically managed protection levels across different platforms with vast variance, reduce collective capability to address coordinated harmful behaviors, and diminish capacity for cross-platform harm mitigation.

Kauhanganuitanga | Balance

  • Meta's policy changes exhibit a fundamental misalignment with the principle of balanced representation and fair process.
  • Key distortions include the explicit allowance of "allegations of mental illness" targeting gender and sexual orientation, asymmetric protection mechanisms favoring certain forms of political and religious discourse, and reduced transparency in decision-making processes.

  • This may result in inequitable weighting of perspectives regarding vulnerable communities, vastly eroded accountability in content moderation decisions, and systematic bias in permitted speech categories (adding to New Zealand's structural racism).

Mana tangata | Dignity

  • The policy revisions represent a substantial regression in protecting human dignity online.
  • This can manifest through what's now a permissive environment for dehumanising comparisons (e.g., of wāhine Māori to objects, bacteria), the heightened allowance of denial of the existence, identity, and history of Māori, and reduced protections against targeted harassment.

Mana | Respect

  • The policy changes fundamentally undermine high-trust relationships, and respectful, responsible behaviours on Meta's product, and platform surfaces (e.g., on Threads, Pages, Groups, and Instagram).
  • Respect risks rapid, and sustained erosion through the removal of prohibitions against explicit, borderline or dog-whistled incitement to hate, and vastly weakened protections against discriminatory language.


New technologies, human rights and te Tiriti o Waitangi

In May 2023, Paul Hunt, the former head of the New Zealand Human Rights Commission, released a prescient, grounded, and succinct brief that I was also consulted on prior to publication, outlining the impact of new technologies like generative AI, and social media, on the country's human rights frameworks, and Te Tiriti o Waitangi.

There's now a fundamental misalignment between Meta's policy direction, and the human rights, and te Tiriti/Treaty based framework Hunt proposed for managing the socio-political, and cultural impact of communications technologies in New Zealand (including social media).

Impact on Human Rights framework

Meta's policy revisions fundamentally challenge the human rights framework Hunt describes. The briefing emphasises that "human rights provide an ethical framework for tackling difficult issues," yet Meta's changes prioritise potentially violative forms of expression over protections for vulnerable groups, including Māori, and minorities. This creates particular tension with Hunt's assertion that "there is no hierarchy of human rights: they are inter-related and depend on each other."

Te Tiriti o Waitangi/Treaty implications

The policy changes raise significant concerns regarding Meta's alignment with te Tiriti obligations. Hunt explicitly notes that the Crown must work in partnership with Māori communities to ensure protection from "harmful effects of the communication revolution." Meta's relaxation of hate speech protections, particularly regarding dehumanising language and cultural insensitivity, completely contradicts this imperative.

Social Cohesion

Hunt identifies key values including "whanaungatanga (kinship), kaitiakitanga (stewardship), manaakitanga (respect), dignity, decency, fairness, equality." Meta's new policies, particularly those allowing increased latitude for dehumanising, denigrating speech inciting hate, and identity-based attacks diverge significantly from these foundational values of New Zealand's liberal democratic firmament.

Meta's responsibilities

The Human Rights Commission briefing explicitly states that social media companies have responsibilities to "give effect to our values and human rights" and "be transparent about their operations." Meta's policy changes, particularly the removal of previously existing protections, raise significant questions about corporate social accountability in the context of these obligations.

National, and human security

Hunt cites the New Zealand Security Intelligence Service's concerns about "challenges to democratic norms" and "anti-authority extreme ideologies". Meta's relaxation of content moderation standards, particularly regarding disinformation, and harmful speech (independent of its sunsetting of fact-checking) exacerbates these national, and human security concerns in a context where my own research establishes significant threats - from outside, and within the country - growing at pace, aided by the normalisation of violent extremist ideas, and rhetoric on social media platforms.

Impact on vulnerable communities

The briefing emphasises that "negative impacts are especially felt by at-risk and disadvantaged groups." Meta's policy changes, particularly those reducing protections against identity-based harassment and dehumanising speech, increase rather than mitigate these vulnerabilities.

Information ecosystem health

Hunt's discussion of mis-, and disinformation's impact on "anxiety, anger, and mistrust" becomes particularly relevant given Meta's new allowances for certain types of previously prohibited speech which will invariably contribute to affective polarisation, social division, and stigmatisation - especially of Māori, Muslims, persons of colour, immigrants, and other minorities in New Zealand.

Democratic, civil discourse

The briefing's call for "constructive places where we can share reliable information and exchange different views in a spirit of openness" is seriously challenged by Meta's reduced moderation standards, which enable more polarising, and harmful forms of discourse. The resulting chilling effects, at scale, will only ensure less diversity, and fewer citizens participating in democratic dialogues, and processes - significantly impacting electoral integrity, and social cohesion.

Future implications

Hunt's vision of managing the communication revolution by keeping "hold of our values, respect all human rights, honour te Tiriti, empower the most disadvantaged, provide safeguards" is, I would argue, impossible to achieve under Meta's revised policies, which prioritise minimal content moderation over comprehensive protection of vulnerable groups.


How to improve the Aotearoa New Zealand Code of Practice for Online Safety and Harms?

This was the title of a report issued by the New Zealand Human Rights Commission in December 2023, authored by an eminent group, the Independent Accountability Group (IAG).

Meta's policy pivot wholly contradicts the framework, and completely undermines recommendations provided by the IAG, in turn eroding the (already limited) effectiveness of the Code, and its ability to protect vulnerable communities in New Zealand.

  • The report emphasises (and I completely agree) that context is vital for understanding, and addressing online harms, particularly given New Zealand's unique historical, demographic, and cultural landscape. This includes the ongoing impacts of colonisation, systematic dispossession of Māori land, and patterns of enduring discrimination. Meta's new policies, which permit previously prohibited forms of dehumanising speech, and reduce protections for vulnerable communities/identities/groups, now stand in opposition to this crucial contextual framework.
  • Compared against what the authors state, Meta's policy changes present serious challenges to the constitutional obligations arising from te Tiriti o Waitangi. The principle of active protection, which requires preventing harm to Māori, is compromised by Meta's relaxation of content moderation standards. This includes the removal of specific protections against dehumanising or denigrating language, and reduced moderation of discriminatory content. Furthermore, the unilateral nature of these policy changes contradicts the partnership principle established by te Tiriti. Mirroring the Code's four pillars, the IAG report identifies crucial values that should guide online safety approaches, including kaitiakitanga (stewardship), manaakitanga (respect), dignity, decency, fairness and equality. Meta's policy changes fundamentally contradict these foundational values that the IAG identifies as essential for online safety governance.
  • The IAG emphasises that New Zealand's transparency standards should not be lower than those adopted in other jurisdictions. However, Meta's new policies will invariably erode transparency around content moderation decisions, weaken accountability mechanisms, and lower protection standards compared to, for example, the EU, where Meta operates - and is bound by stronger, stricter regulations like the DSA.
  • The changes raise significant issues regarding the human rights framework the IAG identifies as crucial. This includes implementation gaps in protecting vulnerable groups, weakened safeguards against discrimination, and diminished accountability mechanisms. The policy changes also create accountability deficits through unclear remediation processes, and reduced third-party, and company-led monitoring capabilities.
  • The structural impact of the policy changes represents a systematic weakening of protective mechanisms across content moderation, platform safety, and social cohesion. As has been the case in the Global South over many years, this in turn gravely risks content, and commentary on Meta's product, and platform surfaces significantly contributing to increased divisive content, and reduced protection against the incitement of hate, harms, and violence in New Zealand.


Conclusion

Meta's business imperative has always been to increase profit even at the cost of principles, especially through the seed, and spread of manufactured outrage on its products, platforms, and apps. I've studied this across five continents, and since 2013. New Zealand's Code, as profoundly flawed as it was, could nevertheless have been interpreted as a DSA-lite - enabling a degree of platform accountability in a country that doesn't have any other meaningful, fit-for-purpose legislation, regulation, or framework after the incumbent government killed proposed hate speech laws.

The policy pivot by Meta is a seismic shift for content moderation, and platform integrity considerations. The changes will invariably accelerate the normalisation of previously restricted forms of discriminatory, racist, nativist, and violent extremist discourse, particularly targeting Māori, Muslims, and other marginalised communities who are already burdened with negotiating disproportionate online, and offline harms, and hate every day. Meta's new policy effectively creates vast, varied new vectors for harmful, and hateful content to proliferate while simultaneously reducing accountability mechanisms, and transparency requirements that previously allowed for community oversight, including through the application of the Code.

The cumulative effect will be an immediate, sustained erosion of platform integrity, adding to what my research clearly establishes in New Zealand as a vaulting anti-Māori racism online, information disorders worsening at pace, and politicians on Meta instigating hate.

It all raises the question of what purpose the Aotearoa New Zealand Code of Practice for Online Safety and Harms serves anymore.

A timely piece by Dr Cassandra Mudgway: "Online violence and misogyny are still on the rise – NZ needs a tougher response" https://theconversation.com/online-violence-and-misogyny-are-still-on-the-rise-nz-needs-a-tougher-response-250033
