Beyond 'true or false': making our own decisions
Araceli Higueras
Author | Product Owner | <How to be the CEO of your Career> Coach | UX Designer | Copywriter | Business Analyst
Have you read or heard about this interview?
"The team that built X’s Community Notes talks about their design process and the philosophy behind their approach to combatting false information on the platform."
I think it's worth your while, especially if you've been hearing about certain social media apps laying off their fact-checkers. Losing a fact-checking team doesn't mean an app stops doing fact-checking altogether.
Why move away from human fact-checking?
In the interview, Keith Coleman (VP of product at X, and previously at Twitter) highlights several key limitations of traditional human fact-checking:
1. Speed
Traditional fact-checking methods take "multiple days to check a claim, which is the equivalent of infinity in internet time."
2. Scale
The challenge is that "small groups of people" simply cannot review and fact-check the vast amount of content being posted.
3. Trust
Even if speed and scale could be addressed, there's a fundamental trust issue.
Many people "did not want a tech or media company deciding what was or was not misleading."
These limitations led the team to explore crowdsourcing as an alternative, drawing inspiration from Wikipedia's model of collaborative knowledge creation.
They wanted a solution that could:
- act at "internet speed",
- work at "internet scale", and
- be trusted by people across different political perspectives.
The existing approaches (internal trust and safety teams, or partnerships with professional media organisations) fell short on all three counts.
The team saw crowdsourcing as a way past these limitations: a large, diverse group of contributors could provide context and fact-checking quickly, and in a way that feels more transparent and trustworthy to users.
How are they checking their facts then?
X takes a distinctive approach to crowdsourced fact-checking, built on several key mechanisms:
1. Open Contribution
Anyone can contribute a "note" (context) to a post.
Notes must be specific to the content of the post.
Contributors must provide sources for their information.
2. Anonymous Rating System
Contributors anonymously rate whether a note is helpful.
Ratings don't affect a contributor's personal reputation, so there's no backlash for influencing a note's outcome.
(Anonymity became important after realising people were uncomfortable having their name attached to notes about controversial posts.)
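To make these mechanics concrete, here is a minimal data-model sketch in Python. Every name in it (`Note`, `post_id`, `rater_alias`, and so on) is my own invention for illustration; X hasn't published this schema, and the real system is certainly richer.

```python
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class Note:
    """A proposed piece of context attached to one specific post."""
    post_id: str
    text: str
    sources: list[str]   # supporting URLs; a note without sources is rejected
    author_alias: str    # pseudonymous handle, not a real name
    note_id: str = field(default_factory=lambda: uuid4().hex)

    def is_submittable(self) -> bool:
        # Notes must have body text and at least one source.
        return bool(self.text.strip()) and len(self.sources) > 0

@dataclass
class Rating:
    """An anonymous helpfulness rating; no public identity is attached."""
    note_id: str
    rater_alias: str     # internal pseudonym, never shown alongside the note
    helpful: bool

# Example: a note with a source passes the basic submission check.
note = Note(
    post_id="12345",
    text="The photo predates the event; it was taken in 2019.",
    sources=["https://example.com/original-report"],
    author_alias="contributor_abc",
)
assert note.is_submittable()
```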
3. The "Bridging Algorithm" (the most innovative part)
The system doesn't just count votes (it's not a popularity contest!). Instead, it looks for notes that people who typically disagree can both find helpful.
For example: If people who have historically disagreed on ratings both find a particular note helpful, that note is more likely to be shown.
The most effective notes are those that provide neutral, factual context that people across different political perspectives can agree on.
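Here is a toy sketch of the bridging idea in Python: instead of summing votes, it scores each note by its *minimum* approval across two groups of raters who historically disagree, so popularity with one side alone can't lift a note. This is a deliberate simplification; X's production algorithm (which it has open-sourced) uses matrix factorisation over the full rating matrix rather than two fixed clusters, and every name below is hypothetical.

```python
from collections import defaultdict

def bridging_score(ratings, rater_cluster):
    """ratings: iterable of (note_id, rater_id, helpful: bool) tuples.
    rater_cluster: dict mapping rater_id -> "A" or "B", two groups
    whose members have historically disagreed with each other.
    Returns {note_id: score in [0, 1]}."""
    helpful = defaultdict(lambda: {"A": 0, "B": 0})
    total = defaultdict(lambda: {"A": 0, "B": 0})
    for note_id, rater_id, is_helpful in ratings:
        cluster = rater_cluster[rater_id]
        total[note_id][cluster] += 1
        helpful[note_id][cluster] += int(is_helpful)

    scores = {}
    for note_id, counts in total.items():
        fractions = [
            helpful[note_id][c] / counts[c] if counts[c] else 0.0
            for c in ("A", "B")
        ]
        # The minimum across clusters is the whole trick: a note only
        # scores highly if *both* sides found it helpful.
        scores[note_id] = min(fractions)
    return scores
```

A note rated helpful by 95% of cluster A but only 10% of cluster B scores 0.10 here, while a note at 70%/70% scores 0.70: broad agreement beats one-sided enthusiasm.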
4. Quality Control
Contributors earn the ability to write notes.
They can lose this ability if they consistently write unhelpful or poor-quality notes.
Not every note gets published: only those meeting a high quality threshold.
The goal isn't to declare something definitively "true" or "false", but to provide additional context that helps people make more informed judgments.
As Coleman puts it, the system "provides context and information and then lets you make your own decision about how to trust it."
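Here is a hedged sketch of what that quality gate might look like in code. The interview gives no concrete numbers, so both thresholds below are invented for illustration:

```python
# Assumed values; the real thresholds aren't given in the interview.
PUBLISH_THRESHOLD = 0.5   # minimum bridging score for a note to go live
WRITING_FLOOR = 0.3       # average score below which writing ability is lost

def should_publish(note_score: float) -> bool:
    """Only notes clearing the quality bar ever become visible."""
    return note_score >= PUBLISH_THRESHOLD

def keeps_writing_ability(author_note_scores: list[float]) -> bool:
    """Contributors who consistently write unhelpful notes lose the
    ability to write new ones."""
    if not author_note_scores:
        return True  # nothing written yet, nothing to judge
    average = sum(author_note_scores) / len(author_note_scores)
    return average >= WRITING_FLOOR
```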
Which leaves one last question: how is quality measured?
What is quality?
We've briefly touched on it but it's worth recapping:
According to the interview, quality is primarily measured through the "bridging algorithm," which has a few key characteristics:
1. Political Spectrum Agreement
The algorithm looks for notes where people who have historically disagreed on their ratings actually agree that a particular note is helpful.
This means the note can't just be popular with one side of the political spectrum; it must be found useful by people with different viewpoints.
2. Visualisation of Quality
They created a graph that shows notes plotted by helpfulness and polarisation.
Most notes are either:
- very polarised and only mildly helpful, or
- completely unhelpful.
Only a few notes fall into a "sweet spot" of being both helpful and non-polarising.
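As a rough illustration of that graph, here is how one might plot notes on those two axes. The data below is synthetic, invented to mimic the shape described above, not taken from X:

```python
import random
import matplotlib.pyplot as plt

random.seed(42)

# Synthetic notes as (helpfulness, polarisation) pairs:
# the bulk are polarised and/or barely helpful...
notes = [(random.uniform(0.0, 0.4), random.uniform(-1, 1)) for _ in range(300)]
# ...and a rare few sit in the helpful, non-polarising "sweet spot".
notes += [(random.uniform(0.7, 1.0), random.uniform(-0.1, 0.1)) for _ in range(10)]

helpfulness = [h for h, _ in notes]
polarisation = [p for _, p in notes]

plt.scatter(polarisation, helpfulness, alpha=0.4)
plt.xlabel("Polarisation (appeal skews to one side)")
plt.ylabel("Helpfulness")
plt.title("Few notes are both helpful and non-polarising")
plt.show()
```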
3. Specific Criteria
Notes need to be specific to the content.
They must include sources.
They should provide context that actually helps people understand the post.
4. Earned Contributor System
They developed a system where contributors earn the ability to write notes.
They can lose this ability if they write notes that others don't find helpful.
In conclusion
Not every proposed note becomes visible, because not every note is accurate or well-written. And that is the point of the design: rather than declaring posts definitively "true" or "false", Community Notes surfaces vetted context and lets us make our own decisions.