I was permanently banned from Meta products because of a movie quote I shared from my Facebook Memories.
Therese B. Fessenden
Experience & Service Design Research @ NN/g || USAR Veteran || Co-Owner @ Lake Road Vineyards
I wish I were exaggerating, but it's true: I was automatically suspended and eventually banned, without any warning and without a reasonable appeals process.
Honestly, I was hoping I wouldn't have to write this, especially on LinkedIn, because I hoped the company would recognize the misunderstanding and want to retain a user who has been an active, contributing member in good standing for about 15 years. But since that hasn't happened, I started researching Meta's policies and procedures around content moderation and account suspensions.
After much reflection, my conscience is telling me this is too important to stay silent about, and I'm willing to stick my neck out and even risk my professional reputation because I believe:
This article covers:
Quick Recap: What Happened to Therese's Meta Account
Let's briefly summarize what happened to my account, as a play-by-play recap. You're welcome to skip this and go straight to the lessons, but I feel it's important to lay out exactly what happened in order to understand why Meta's response is unreasonable.
Lesson #1: What is a Fascist Design Pattern?
While I'm reticent to use such strong terms that imply moral judgment, I believe it's imperative to call a spade a spade. Fascism, as defined by Merriam-Webster, is "a tendency toward or actual exercise of strong autocratic or dictatorial control," often characterized by severe economic and social regimentation and by forcible suppression of opposition. In other words: fascism is an authoritarian form of control which prioritizes organizational control over human agency.
A fascist design pattern is a design pattern which exerts severe penalties or restrictions on individuals without due process, and destructively prioritizes organizational goals over user needs.
While Meta does not have an authoritarian approach to content moderation -- there are multiple ways content can be reviewed and appealed -- the same is not true of account moderation. To be honest, it would be more forgivable if Meta were authoritarian about content and not about accounts. Content restriction does not impact user agency in the same way account restriction does.
Further, it's even more problematic when decisions on human agency are made by an algorithm and then ratified by a single individual. Humans are more than the jokes they make, and to have one human decide their digital fate because of one joke is akin to asking one person to decide whether or not to set a house ablaze because of an annoying picture frame that came with the house.
One might say, "It's just a social media platform. They have the right to ban anyone they want." Perhaps, but it's important to reflect on how Meta's monopoly is inherently harmful, and prone to fascist patterns.
Why is a Meta Ban Particularly Fascist?
There are many studies substantiating the impact that a good internet connection has on socioeconomic mobility and human agency. I firmly believe the same theory applies to access to a monopolistic social media conglomerate, which owns the primary means of communication that small businesses rely on, often more than their own websites.
To paraphrase my husband:
"Banning people from Meta is like saying 'You're not allowed to participate in society.'"
Over 90% of businesses are active on Facebook, and one 2017 study noted that only 45% of surveyed business owners even had a website, with 55% using Facebook as their primary means of communicating with customers. Banning access to these communication tools is economically devastating, especially for business owners, but also for customers wishing to participate in community events, which are typically announced through social media.
When a Meta account gets disabled, it's not just the Facebook profile that's lost (and all the photos and fun memories). Users lose the ability to:
Re: the Meta monopoly. Meta has certainly argued that there are other social media platforms out there. Yes, but none that so uniquely holds near-total market share across every possible aspect of social media. TikTok only has videos: a phenomenal algorithm for videos, but just videos and DMs nonetheless. It doesn't have photos, Threads, posts, groups, Marketplace, Messenger, or, heck, even WhatsApp (ironically, drug dealers probably do use WhatsApp for its end-to-end encryption, and yet it appears I have not been banned from that, for now... maybe because I hardly use it). LinkedIn and X/Twitter don't come anywhere close to monopolizing the online experience in the same way.
Perhaps even more disconcerting is that they now hold a very large amount of my PII, including my phone number, my face, my location (video metadata), and my driver's license. Meaning: Meta has stored this information about me and could likely use facial recognition data without my consent. After all, I no longer have a Meta account to have any say over what happens to my data.
But let's talk about accountability and ownership:
Lesson #2: AI Lacks Context, Feigns Accountability, & Perpetuates Bias
One might argue that Meta's monopoly is a business choice, not a design choice, and that artificial intelligence is merely a way to efficiently handle business at scale. I beg to differ: when you make AI the primary arbiter of decision-making, that's just as much a design choice as a business choice, and it has disastrous consequences. AI merely serves as a convenient third party that distances the company from the responsibility it owes its users: due process and thorough review.
While AI is currently being heralded for its potential to streamline work and remove bias from decision-making, there is a fundamental conflict between how AI currently works and how good decisions are made. Objectively good decisions typically acknowledge important external, contextual factors. Objectively bad decisions are often made in a vacuum, in isolation from contextual data.
Artificial intelligence functions as a probability machine: it uses probabilistic data to estimate how likely or unlikely things are to be associated with each other. Which is why, when used for things like mortgage approval algorithms, AI perpetuates bias. If there is a statistically significant difference between approval rates for people of color and white people, that difference persists, because the AI is approximating real-life probabilities. It does not account for the context behind those denials: systemic racism and prejudicial lending, all taking place outside the paperwork and the referenced data points.
Trusting AI to make sound decisions is like asking a child to identify the most common color M&Ms in a pile, without acknowledging that you took out 50 of the blue ones before you set the pile in front of them.
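To make that concrete, here is a minimal, entirely hypothetical sketch in Python. The groups, numbers, and "model" are invented purely for illustration and have nothing to do with any real lending system: the point is only that a system which learns approval rates from biased history will faithfully reproduce the gap, because nothing in the data tells it why the gap exists.

```python
# Toy illustration only: invented data, not any real lending system.
import random

random.seed(42)

def make_history(n=10_000):
    """Simulate historical lending decisions in which equally qualified
    applicants from group B were approved less often than group A."""
    history = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        income = random.gauss(60_000, 15_000)
        qualified = income > 50_000
        if group == "A":
            approved = qualified
        else:
            # The historical bias: this "context" never appears in the data.
            approved = qualified and random.random() < 0.6
        history.append((group, approved))
    return history

def approval_rate(history, group):
    """A purely probabilistic 'model': the base rate observed per group."""
    outcomes = [approved for g, approved in history if g == group]
    return sum(outcomes) / len(outcomes)

history = make_history()
print(f"Group A approval rate: {approval_rate(history, 'A'):.0%}")
print(f"Group B approval rate: {approval_rate(history, 'B'):.0%}")
```

The "model" here is nothing more than the observed base rates, and that is exactly the problem: probability alone cannot see the missing blue M&Ms.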
Why does context matter here? All humor exists precisely because of contextual awareness: the acknowledgement of shared truth beyond what is explicitly stated. Humor is funny because you don't have to explicitly state the punchline - it is known by the audience's shared experiences. AI is gradually improving at learning humor. However, until we allow AI systems to watch our every move, or make them privy to every private conversation we have, they will never account for personal humor shared between close friends, or inside jokes.
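For illustration only, here is what a context-free filter looks like in miniature. This is emphatically not Meta's system (their moderation models are not public), and these are not my actual posts; it is just a toy keyword matcher, which flags a famous film quote and a common figure of speech exactly as readily as it would flag a genuine threat.

```python
# Toy illustration only: a hypothetical watchlist, not Meta's actual system.
FLAGGED_TERMS = {"sleeps with the fishes", "sell a kidney"}

def naive_flag(text: str) -> bool:
    """Flag any post containing a watch-listed phrase, regardless of whether
    it is a quote, a joke, or a figure of speech."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

posts = [
    'Throwback from my Memories: "Luca Brasi sleeps with the fishes."',
    "Concert tickets are so expensive I might have to sell a kidney!",
]

for post in posts:
    # Both harmless posts get flagged: the filter sees keywords, never the
    # shared context that makes them obviously jokes to a human reader.
    print(naive_flag(post), "-", post)
```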
Still, it doesn't take much context to know my joke was a joke. First, I used a movie quote. But also:
You might then retort, "But Therese, AI didn't make the final decision; your appeal was reviewed by a human who looked at your message."
Yes. One human reviewer. In other words, my ability to participate digitally in society was completely eliminated because of one person's opinion. This is an inherently biased outcome.
I'm not ascribing this to a staff member's personal failing; it's more likely due to an incentive to review quickly rather than thoroughly (which is ironic, given that a "review" is implied to be thorough, not quick). For a review to take place in under an hour tells me the reviewer likely rushed my appeal with no further context (the appeal process did not let me offer any), perhaps even "erring on the side of caution" by disabling every account they're asked to review, even accounts that were not truly violating any terms. Meta can then conveniently point at a single reviewer, and fire them, if anyone disagrees with the outcome.
You might also say, "But Therese! There's an Oversight Board! It's not just one human."
Except it was just one human, because that is how their system is designed, which leads me to:
Lesson #3: Design is an Expression of Priorities and Accountability (or Lack Thereof)
While Meta boasts roughly 15,000 content reviewers, I am certain they are underpaid and asked to work through thousands of pieces of content each day, some of it truly horrifying, disturbing, and harmful, leaving them burnt out and deeply emotionally impacted.
Make no mistake: this content review work is necessary and important. Real users are harmed every day by illegal activity and need to be protected. However, it is abundantly clear that accuracy and thorough review are not prioritized as highly as the speed of removing offending content. I would happily wait a whole month if it meant my account was being thoroughly investigated, not just given a cursory glance. This commitment to speed over integrity protects Meta (they can say they "reviewed" accounts, even when they truly haven't), but it genuinely harms people...
...and not just me.
A quick Google search reveals many people whose accounts were disabled for other wrongful reasons, such as "adding someone they did not know" (i.e., their brother, as their very first Facebook friend) or violations they did not commit. The subreddit r/FacebookDisabledMe features over 1.2k posts, and many more can be found on other Facebook-related subreddits like r/FacebookAds. There's even a Change.org petition, signed by over a thousand people, asking Facebook to change its appeals process.
Facebook Blames Users for Its Content Moderation Decisions & Design Flaws
If I haven't beaten the drum loudly enough: at the root of this is an accountability problem.
I cannot stress this enough: this post was promoted to me by Facebook Memories, which not only displayed the post but encouraged me to share it. If this feature did not exist, I would likely never have re-engaged with the offending material. I probably would have forgotten about a post from 14 years ago; I certainly wouldn't have missed it. (And if it's not abundantly clear: I obviously did NOT anticipate that the joke had the potential to jeopardize my entire account. I regret ever sharing it.)
I get that these standards did not exist in 2009 and 2010, and that holding an entire generation of users' old content to a new standard is unfair. One might even argue that I should thank Facebook for not disabling my account as soon as the community standards took effect, before I ever re-engaged with that old post.
But I know it wasn't a benevolent choice, for two reasons:
If we're considering this offending material to be "grandfathered" in, then Facebook is giving that content its "blessing." If that wasn't the intended effect, then that content should have been deprioritized or even removed. Truth be told, I'd be thrilled if their decision were to restore my account, remove the offending content, and warn me not to do it again. I would happily have complied with that.
But, by virtue of its design, Facebook promoted what should have been deprioritized content, and then blamed me for unknowingly taking the bait.
What's Next
The good news, I guess, is that a trial is coming.
On Nov 13, 2024 (a mere 10 days before my debacle began), a judge in Washington ruled that Meta must stand trial in April in the FTC's antitrust lawsuit, four years after the FTC sued Meta over its acquisitions of Instagram and WhatsApp. If this is the first you're hearing of it, here's a quick summary:
"The FTC sued in 2020, during the Trump administration, alleging the company acted illegally to maintain a monopoly on personal social networks. Meta, then known as Facebook, overpaid for Instagram in 2012 and WhatsApp in 2014 to eliminate nascent threats instead of competing on its own in the mobile ecosystem, the FTC claims." (Jody Godoy, Reuters)
So, if it took 12+ years from those acquisitions to even reach the point of setting a trial date (April 14, 2025), I can imagine I won't get my Instagram account back anytime soon. Maybe not at all. However, I can see a significant class action lawsuit on the horizon to finally bring justice to thousands of impacted customers and business owners whose accounts were wrongfully seized by a social media conglomerate that prioritized efficiency and profit over justice.
I'll bet the Oversight Board will see us then.
Ways You Can Help
If you've gotten this far in the article and feel as incensed about this as I do, there are a few ways you can help:
I'll never claim to have all the answers to these problems, but if this field has taught me anything, it's that the only way we can have just and equitable outcomes is by facing deeply uncomfortable questions. It's time we start asking them.
Ruby on Rails developer
3 days ago: Wow! Thank you for sharing, Therese. I experienced a similar situation. I was permanently banned from Meta on 02/19. They deleted my Instagram (10-year-old account), Facebook (16-year-old account), WhatsApp, Threads, and Messenger accounts, all at the same time. They said it was for "human exploitation." I had posted a conversation to raise funds for a friend's son's operation. In the text, my friend said: "at any moment, I'll take out a kidney and sell it" (which is a figure of speech, something we say in my country when we urgently need cash for something very important). The AI interpreted that I was trafficking human organs. I got a restriction. I appealed, and in my opinion the answer they gave me was also from AI. Then suddenly all my accounts were disabled. I didn't even get an email I could reply to. The only one who answered was WhatsApp support, but I think it was an automated response ("you violated the community standards and your account will remain disabled"). I never thought they would enforce such extreme measures against a law-abiding, long-time user.
Public Relations | Corporate Communications | Branding | Media Relations | Event Planning
1 week ago: I completely agree, and I can't stand their ridiculous community standards. I've seen blatant racism and defamation against key individuals go unchecked, even after being reported, only to receive the classic "This doesn't violate our Community Guidelines" response. Seriously, HELLO????? Meanwhile, a simple curse word or an obviously sarcastic comment gets instantly flagged and removed. Community Guidelines and Standards? More like AI garbage, and don't even get me started on the so-called Oversight Board. If strangling someone to near death were legal, they'd be first in line, right after whoever wrote those absurd guidelines. We seriously need a Meta replacement, just like how they took over from Friendster and MySpace back in the day. They lured everyone in with their "free to use, no payment needed" model, only to turn it into a playground for advertisers, bots, and inconsistent moderation. It's time for a new platform that actually values real users over ad revenue and shady policies.
Pharmacist || Drug Discovery || Cancer Research Enthusiast(R&D) || Social Impact || Project Manager
3 周I’m so sorry you experienced this, Did you later get the account back My WhatsApp and Instagram has been disabled last week and I’m hoping to get this resolved soon
Entrepreneur looking to help others live the life they dream!
1 month ago: Going through this right now. My Instagram account was hacked a couple of weeks ago, then this past Tuesday I got a notification about both Instagram and Facebook violating community standards. No clue how or why. I did the Facebook appeal; it was rejected within 2 minutes and the account was disabled. I have gotten a bit farther with my Instagram complaint, so I'm waiting to hear back on that. It's ridiculous that there is no way to contact Meta support and talk to an actual person.