Meta's policy pivot: Implications for New Zealand's Code of Practice for Online Safety and Harms
Context of the Code
Having been involved in the awful, cosmetic consultations in New Zealand before it was launched, I have been publicly dismissive of, and extremely critical of, the Aotearoa New Zealand Code of Practice for Online Safety and Harms.
Released in 2022 to much fanfare, and championed by Netsafe New Zealand - ironically, an institution that few trust, and one at the time defined by human rights breaches, financial mismanagement, bullying, and much worse - the Code was Facebook's strategic gamble in New Zealand to avoid hard regulation on platform accountability, integrity, and safety.
Referencing tweets by Jason Kint on how Facebook was grilled by the Canadian Parliament, I tweeted at the time that the company did one thing abroad while promising, through the Code in New Zealand, to be, and do, something else entirely. The hypocrisy was evident even then.
Incredibly, Facebook tried surreptitiously, and repeatedly, to introduce the same flawed Code to Sri Lanka - and I mean quite literally the exact same text as what was adopted in New Zealand, sans all references to Māori.
Upon the launch of the Code in New Zealand, Meta noted,
At Meta, we’re looking forward to working with the stakeholders to ensure the Code sets in place a framework to keep Kiwis safe across multiple platforms by preventing, detecting, and responding to harmful online content. Combating online harmful content will take a whole of society effort and the Code is not intended as a total solution to this challenge. It is a genuine attempt by responsible industry players to increase safety outcomes in New Zealand, focusing on trust through transparency.
How does this highfalutin promise hold up with Meta's significant policy pivot in early January 2025?
Meta's new policies and the Code
Meta's new Hateful Conduct policies are a significant shift away from what was previously enforced to strengthen platform integrity. My position on this in relation to New Zealand, and the rest of the world is clear - it will cost lives.
However, as far as I know, no one's studied the extent to which Meta's new policies impact the Code in New Zealand, and its meaningful implementation moving forward. The prognosis is terrible.
Fundamental policy contradictions: The Code explicitly commits signatories, including Meta, to "implement, enforce and/or maintain policies and processes that seek to prohibit or reduce the prevalence of hate speech" (Measure O3M10). However, Meta's policy revisions appear to deliberately weaken several key protections against hate speech, particularly regarding vulnerable groups. This creates a direct conflict with the Code's guiding principle of Mana Tangata (dignity), which "emphasises the importance of civility and humanity in the care and protection of all people online."
Protected characteristics, and enforcement: The Code requires signatories to provide safeguards against online hate speech targeting protected characteristics. Meta's revised policy maintains a nominal framework of protected characteristics but introduces significant carve-outs that may effectively nullify these protections:
Gender Identity and Sexual Orientation
These changes contradict the Code's Outcome 3, which requires "safeguards to reduce the risk of harm arising from online hate speech."
Dehumanising, denigrating speech, and related harms
While Meta maintains some baseline prohibitions against dehumanising comparisons (e.g., comparing protected groups to insects or animals), the revised policy creates new permissible categories of dehumanising speech:
This violates both the letter, and spirit of the Code's commitment to reduce "harmful stereotypes historically linked to intimidation or violence" (O1M4).
Impact on bullying, and harassment protections
The Code requires signatories to "implement, enforce and/or maintain policies and processes that seek to reduce the risk to individuals (both minors and adults) or groups from being the target of online bullying or harassment" (O2M6). Meta's deletion of warnings against certain explicit insults and expressions of hate (e.g., specific derogatory terms) undermines this commitment.
Transparency, and accountability challenges
The Code requires signatories to:
Meta's policy revisions raise questions about compliance with these transparency requirements, particularly regarding:
New Zealand's cultural context, and Te Ao Māori
The Code explicitly incorporates Māori cultural values and principles, including:
Meta's policy revisions, particularly those allowing for increased exclusionary and dehumanising speech, appear to conflict with these cultural values and principles that emphasise collective wellbeing and mutual respect. Much more on this below.
Disinformation, truth decay
The allowance of terms/speech acts like "China virus" conflicts with the Code's commitments regarding misinformation, and disinformation (Outcomes 6 and 7), particularly in relation to "highly significant issues of societal importance" such as public health.
Compliance, and enforcement implications
The policy changes raise serious questions about Meta's ongoing compliance with the Code. The Code provides for:
Given the extent of the divergence between Meta's new policies, and the Code's requirements - which I'd submit are irreconcilable - this situation may warrant review by the Administrator and Oversight Committee to determine whether Meta remains in compliance with its commitments as a signatory. But no one, to date, has asked these questions.
Four core values
The Code's website notes that it is guided by four key values, sourced in Te Ao Māori. Every one of these is massively impacted, and significantly undermined by Meta's policy changes.
Mahi tahi | Solidarity
Kauhanganuitanga | Balance
Mana tangata | Dignity
Mana | Respect
New technologies, human rights and te Tiriti o Waitangi
In May 2023, Paul Hunt, the former head of the New Zealand Human Rights Commission, released a prescient, grounded, and succinct brief - one I was consulted on prior to publication - outlining the impact of new technologies like generative AI, and social media, on the country's human rights frameworks, and Te Tiriti o Waitangi.
There's now a fundamental misalignment between Meta's policy direction, and the human rights and te Tiriti/Treaty-based framework Hunt proposed for managing the socio-political, and cultural impact of communications technologies in New Zealand (including social media).
Impact on Human Rights framework
Meta's policy revisions fundamentally challenge the human rights framework Hunt describes. The briefing emphasises that "human rights provide an ethical framework for tackling difficult issues," yet Meta's changes prioritise potentially violative forms of expression over protections for vulnerable groups, including Māori, and minorities. This creates particular tension with Hunt's assertion that "there is no hierarchy of human rights: they are inter-related and depend on each other."
Te Tiriti o Waitangi/Treaty implications
The policy changes raise significant concerns regarding Meta's alignment with te Tiriti obligations. Hunt explicitly notes that the Crown must work in partnership with Māori communities to ensure protection from "harmful effects of the communication revolution." Meta's relaxation of hate speech protections, particularly regarding dehumanising language and cultural insensitivity, completely contradicts this imperative.
Social Cohesion
Hunt identifies key values including "whanaungatanga (kinship), kaitiakitanga (stewardship), manaakitanga (respect), dignity, decency, fairness, equality." Meta's new policies, particularly those allowing increased latitude for dehumanising, denigrating speech, incitement to hate, and identity-based attacks, diverge significantly from these foundational values of New Zealand's liberal democratic firmament.
Meta's responsibilities
The Human Rights Commission briefing explicitly states that social media companies have responsibilities to "give effect to our values and human rights" and "be transparent about their operations." Meta's policy changes, particularly the removal of previously existing protections, raise significant questions about corporate social accountability in the context of these obligations.
National, and human security
Hunt cites the New Zealand Security Intelligence Service's concerns about "challenges to democratic norms" and "anti-authority extreme ideologies." Meta's relaxation of content moderation standards, particularly regarding disinformation, and harmful speech (independent of its sunsetting of fact-checking), exacerbates these national, and human security concerns. My own research establishes significant threats, from outside, and within the country, growing at pace, aided by the normalisation of violent extremist ideas, and rhetoric on social media platforms.
Impact on vulnerable communities
The briefing emphasises that "negative impacts are especially felt by at-risk and disadvantaged groups." Meta's policy changes, particularly those reducing protections against identity-based harassment and dehumanising speech, increase rather than mitigate these vulnerabilities.
Information ecosystem health
Hunt's discussion of mis-, and disinformation's impact on "anxiety, anger, and mistrust" becomes particularly relevant given Meta's new allowances for certain types of previously prohibited speech which will invariably contribute to affective polarisation, social division, and stigmatisation - especially of Māori, Muslims, persons of colour, immigrants, and other minorities in New Zealand.
Democratic, civil discourse
The briefing's call for "constructive places where we can share reliable information and exchange different views in a spirit of openness" is seriously challenged by Meta's reduced moderation standards, which enable more polarising, and harmful forms of discourse. The resulting chilling effects, at scale, will only ensure less diversity, and fewer citizens participating in democratic dialogues, and processes - significantly impacting electoral integrity, and social cohesion.
Future implications
Hunt's vision of managing the communication revolution by keeping "hold of our values, respect all human rights, honour te Tiriti, empower the most disadvantaged, provide safeguards" is, I would argue, impossible to achieve under Meta's revised policies, which prioritise minimal content moderation over comprehensive protection of vulnerable groups.
How to improve the Aotearoa New Zealand Code of Practice for Online Safety and Harms?
This was the title of a report issued by the New Zealand Human Rights Commission in December 2023, authored by an eminent panel of experts known as the Independent Accountability Group (IAG).
Meta's policy pivot wholly contradicts the framework, and completely undermines recommendations provided by the IAG, in turn eroding the (already limited) effectiveness of the Code, and its ability to protect vulnerable communities in New Zealand.
Conclusion
Meta's business imperative has always been to increase profit even at the cost of principles, especially through the seeding, and spread of manufactured outrage on its products, platforms, and apps. I've studied this across five continents since 2013. New Zealand's Code, as profoundly flawed as it was, could nevertheless have been interpreted as a DSA-lite - enabling a degree of platform accountability in a country that has no other meaningful, fit-for-purpose legislation, regulation, or framework after the incumbent government killed proposed hate speech laws.
The policy pivot by Meta is a seismic shift for content moderation, and platform integrity considerations. The changes will invariably accelerate the normalisation of previously restricted forms of discriminatory, racist, nativist, and violent extremist discourse, particularly targeting Māori, Muslims, and other marginalised communities who are already burdened with negotiating disproportionate online, and offline harms, and hate every day. Meta's new policy effectively creates vast, varied new vectors for harmful, and hateful content to proliferate while simultaneously reducing accountability mechanisms, and transparency requirements that previously allowed for community oversight, including through the application of the Code.
The cumulative effect will be an immediate, sustained erosion of platform integrity, adding to what my research clearly establishes in New Zealand: vaulting anti-Māori racism online, information disorders worsening at pace, and politicians on Meta instigating hate.
It all raises the question of what purpose the Aotearoa New Zealand Code of Practice for Online Safety and Harms serves anymore.
Online violence and misogyny are still on the rise – NZ needs a tougher response https://theconversation.com/online-violence-and-misogyny-are-still-on-the-rise-nz-needs-a-tougher-response-250033 A timely piece by Dr Cassandra Mudgway.