On the Ingenuity of Community Notes
If product-making had difficulty levels, fighting misinformation would be a boss fight. You're not just dealing with honest mistakes; you're facing a horde of actors, some deliberately trying to break your system, and others doing it by accident.
That's where Community Notes comes in. With X having used it for a while, and Meta planning to use it soon, I spent some time trying to understand how it works and was struck by the thought and iteration that have led to this solution. Here are some quick notes on Community Notes, along with thoughts on what it takes to build something like this.
A Community Note is basically a crowdsourced annotation displayed alongside a post. It provides helpful context, either refuting misinformation or adding valuable information relevant to the original content. Contributors write notes, and other eligible users then rate whether those notes are helpful.
The way the note is chosen is where the magic happens: the algorithm is designed to surface notes that receive agreement from people who usually disagree with each other.
The core product insight is that when people who are normally on opposite sides find common ground on a note's helpfulness, it's a strong signal that the context it provides is valuable. The ingenuity is in the fact that no one is defining these "sides"—these are implicit preferences that are derived from users' voting behaviours. The solution is the result of a strong product insight meeting math.
Here's a very simplified explanation of how the model works:
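The sketch below is a minimal, hypothetical illustration of the core idea, loosely based on the matrix-factorization approach X has publicly documented: each rating is modeled as a baseline plus a user intercept, a note intercept, and the product of one-dimensional user and note "viewpoint" factors. The factor term absorbs agreement explained by shared viewpoint, so a note only earns a high intercept when raters across viewpoints find it helpful. The function name, hyperparameters, and toy data are all illustrative; the production system has far more machinery (rater reputation, thresholds, stability checks).

```python
import numpy as np

def score_notes(ratings, n_users, n_notes, lr=0.05, reg=0.03, epochs=200, seed=0):
    """ratings: list of (user_id, note_id, value), value 1.0 = helpful, 0.0 = not helpful.
    Returns a per-note intercept: higher means 'helpful across viewpoints'."""
    rng = np.random.default_rng(seed)
    mu = 0.0
    user_int = np.zeros(n_users)
    note_int = np.zeros(n_notes)
    user_fac = rng.normal(0, 0.1, n_users)   # latent "viewpoint" per rater
    note_fac = rng.normal(0, 0.1, n_notes)   # latent "viewpoint lean" per note

    for _ in range(epochs):
        for u, n, r in ratings:
            # prediction: baseline + user intercept + note intercept + viewpoint interaction
            pred = mu + user_int[u] + note_int[n] + user_fac[u] * note_fac[n]
            err = r - pred
            # plain SGD with L2 regularization on the per-user/per-note terms
            mu += lr * err
            user_int[u] += lr * (err - reg * user_int[u])
            note_int[n] += lr * (err - reg * note_int[n])
            uf, nf = user_fac[u], note_fac[n]
            user_fac[u] += lr * (err * nf - reg * uf)
            note_fac[n] += lr * (err * uf - reg * nf)
    return note_int

# Toy data: note 0 is rated helpful by raters on both "sides";
# note 1 is rated helpful by only one side.
ratings = [
    (0, 0, 1.0), (1, 0, 1.0), (2, 0, 1.0), (3, 0, 1.0),
    (0, 1, 1.0), (1, 1, 1.0), (2, 1, 0.0), (3, 1, 0.0),
]
intercepts = score_notes(ratings, n_users=4, n_notes=2)
print(intercepts)  # note 0 should end up with the higher intercept
```

In the real system, only notes whose intercept clears a threshold get shown, which is what operationalizes "agreement from people who usually disagree."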
How does one build a feature like this?
Do you think this was the stroke of one person's genius, or a group effort? Did they one-shot their way to a solution, or was it a process of iteration?
It's easy to look at a product and assume it was a single "aha" moment. But it hardly plays out that way in reality. Most of the takeaways below should not come as a surprise, but an interview with the Community Notes team puts the process into perspective. Here are some takeaways and thoughts...
1. Know your problem space intimately: The team knew that existing approaches to misinformation were struggling with speed (news moves faster than fact-checkers can keep up!), scale (billions of posts to monitor!), and trust (who gets to decide what's true?).
This wasn't just about recognizing that misinformation was "bad"; it was about acknowledging the divisiveness in the discourse & dissecting the specific challenges that made it so difficult to combat. This can only come from a deep immersion in the problem space and was crucial for everything that followed.
2. Get inspired by looking around, adapt: Knowing the problem space, the team considered crowdsourcing as a potential solution based on what they knew from Wikipedia (it's massive, mostly reliable, and fresh). Crowdsourcing isn't exactly a new idea. The challenge was adapting it to the unique context of social media.
Generally speaking, you often find an approach in one place that can be adapted to solve a different problem somewhere else. This highlights a crucial point about innovation: it often comes from thoughtful exposure to a wide range of ideas and then deliberately adapting them. You need to actively seek out inspiration and understand what makes different approaches successful. And then when it's time, you may feel your product-spidey-sense tingle.
3. Cut the red tape and move fast: The team operated as a "thermal project" within the company: a small, focused team with the freedom to build and iterate quickly, and a framework to keep the risks in check (small 500-user pilots). The algorithm wasn't there from the start; they landed on it after trying out multiple approaches.
This seemed like a great example of a framework mentioned in Loonshots, where the author draws a distinction between "artists" and "soldiers". There are parts of your org that need to do more of what already works, and those which need to innovate. You need to nurture and invest in both + have a process where promising ideas transfer from the lab (inhabited by the artists) to the field (inhabited by the soldiers).
4. Talk to users, obsess over the details: Would contributors feel ok with their name attached to the note? Would readers think the note was added by a fact checker? Would this be seen as a high-handed move from the platform?
Building a product at scale means making hundreds of micro-decisions, each of which can be construed in different ways by different users. Each decision, however small, can have significant consequences. This is where they seem to have benefited from their connections with users, consultations with expert researchers, and knowing the data cold!