Can AI be a force of good for fending off fake news and deepfakes?

Last week, one of my favorite columnists, Thomas L. Friedman, wrote about "a trend ailing America today: how much we've lost our moorings as a society". The metaphor he used is a type of tree, the mangrove: "Mangroves filter toxins and pollutants through their extensive roots, they provide buffers against giant waves set off by hurricanes and tsunamis, they create nurseries for young fish to safely mature because their cabled roots keep out large predators, and they literally help hold the shoreline in place. To my mind, one of the saddest things that has happened to America in my lifetime is how much we've lost so many of our mangroves. They are endangered everywhere today — but not just in nature."

This made me think about responsible AI again: what is the "mangrove" that AI practitioners could foster to fortify trusted information sources, cultivate a solid foundation of scientific literacy, and "re-moor" society around facts rather than opinions?

AI is a double-edged sword when it comes to information. While it can be used to create deepfakes and spread misinformation, it also holds immense potential to fight fake news. Here are some examples of how AI can help us fend off fake information:

1. Identifying Patterns and Red Flags: AI can analyze massive amounts of data to identify patterns in language and content creation that are often associated with fake news (a minimal code sketch follows this list). This includes:

  • Emotionally charged language: AI can recognize the use of inflammatory words and phrases commonly used in manipulative content.
  • Deviations from journalistic norms: AI can compare writing styles to established journalistic standards and flag articles with significant stylistic departures.
  • Suspicious source behavior: AI can track the behavior of unknown or suspicious online sources that might be spreading misinformation.
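
To make this concrete, here is a minimal sketch in Python of the kind of stylistic red-flag scoring described above. The training examples, labels, and the red_flag_score helper are purely illustrative assumptions, not a production detector:

```python
# A minimal sketch (not any production system) of flagging articles whose
# language resembles known manipulative content, assuming a small labeled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = previously fact-checked as manipulative, 0 = reliable.
train_texts = [
    "SHOCKING truth THEY don't want you to know!!!",
    "You won't BELIEVE what happens next - share before it's deleted!",
    "The city council approved the budget after a public hearing on Tuesday.",
    "Researchers reported the findings in a peer-reviewed journal this week.",
]
train_labels = [1, 1, 0, 0]

# TF-IDF over words and word pairs picks up inflammatory phrasing and stylistic
# departures from typical journalistic writing; logistic regression scores them.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(),
)
model.fit(train_texts, train_labels)

def red_flag_score(article_text: str) -> float:
    """Return the model's estimated probability that the text is manipulative."""
    return float(model.predict_proba([article_text])[0][1])

if __name__ == "__main__":
    print(red_flag_score("BREAKING: secret cure HIDDEN from the public!!!"))
```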

2. Fact-checking at Scale: AI can automate some aspects of fact-checking (see the sketch after this list) by:

  • Cross-referencing information: AI can compare claims to established knowledge bases and credible sources to verify their accuracy.
  • Identifying inconsistencies: AI can analyze the content of an article and flag inconsistencies within the text itself or with known facts.
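
As a rough illustration, the sketch below cross-references a claim against a tiny, hypothetical in-memory knowledge base by retrieving the most similar verified statement; real systems would use much larger corpora and far more careful matching:

```python
# A minimal sketch of cross-referencing a claim against trusted sources.
# Assumption: a tiny in-memory "knowledge base" of verified statements stands in
# for the credible sources a real system would query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical verified statements drawn from credible sources.
knowledge_base = [
    "The Eiffel Tower is located in Paris, France.",
    "Water boils at 100 degrees Celsius at sea-level atmospheric pressure.",
    "The 2024 Summer Olympics were hosted by Paris.",
]

vectorizer = TfidfVectorizer()
kb_vectors = vectorizer.fit_transform(knowledge_base)

def closest_reference(claim: str):
    """Return the most similar verified statement and its similarity score."""
    claim_vec = vectorizer.transform([claim])
    scores = cosine_similarity(claim_vec, kb_vectors)[0]
    best = scores.argmax()
    return knowledge_base[best], float(scores[best])

reference, score = closest_reference("The Eiffel Tower stands in Berlin.")
# A very low score, or a highly similar reference that contradicts the claim,
# is a signal to route the article to human fact-checkers.
print(reference, round(score, 2))
```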

3. Tracking the Spread of Misinformation: AI can monitor social media platforms to identify how fake news spreads. This allows researchers and platforms to understand how misinformation campaigns operate and develop strategies to disrupt them.
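
Here is one way that kind of tracking could look in code. The sketch assumes a hypothetical export of (resharer, source) pairs and uses the networkx library to surface the biggest amplifiers and the depth of a share cascade; it is an illustration, not any platform's actual tooling:

```python
# A minimal sketch of mapping how a piece of misinformation propagates,
# assuming we already have hypothetical (resharer, source) pairs from a platform.
import networkx as nx

# Hypothetical share events: (account_that_reshared, account_it_reshared_from).
shares = [
    ("bot_17", "origin_account"),
    ("bot_23", "origin_account"),
    ("user_a", "bot_17"),
    ("user_b", "bot_17"),
    ("user_c", "user_a"),
]

# Direct the edges from source to resharer so out-degree measures amplification.
graph = nx.DiGraph()
graph.add_edges_from((src, dst) for dst, src in shares)

# Accounts whose posts are reshared the most are candidate amplifiers to review.
amplifiers = sorted(graph.out_degree(), key=lambda pair: pair[1], reverse=True)
print("Top amplifiers:", amplifiers[:3])

# Cascade depth from the original post gives a rough sense of how far it spread.
depths = nx.single_source_shortest_path_length(graph, "origin_account")
print("Max cascade depth:", max(depths.values()))
```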

However, it's important to remember that AI is just a tool, and Gen AI in particular is a reflection of the content on the internet. We simply cannot feed AI synthetic information and let it generate synthetic intelligence that further pollutes our society. We need unbiased, trusted news media and journalism. That's why the progress of legislative bills such as CJPA is so interesting to watch.

Overall, AI is a valuable weapon in the fight against fake information, but it needs to be supported by good sources of content, and developed and used responsibly for human betterment.


Sam Keen

Engineering Leader | Cloud Architecture | AWS, Lululemon, Nike

9 months ago

Great post Wei. Definitely a repeating pattern in technology: new tech that can cause harm is the same tech we use to mitigate that harm. I see (want) a core responsibility of these personal AI assistants we are supposedly getting to be that of "misinformation mitigation". To that point, I feel open source models are key to providing personal use models we can trust.

Vasudev Sharma

Senior Director (Data, Analytics, ML) at Arcteryx

9 months ago

Will be interesting to see the evolution. Purpose-built tools will bring the right balance.

Helen Hockx-Yu

Principal Architect, McDonald's Global Technology

9 months ago

To combat this, I wonder if it would be possible for owners of quality content to authorise copyright exemption so that curated corpora can be used to benchmark or train AI.

Matt Quinn

Supervisor, Enterprise Communications at McDonald’s | Strategic Communications Professional Experienced in Employee Engagement, Generative AI, and Business Transformation.

9 months ago

Wei Manfredi - agreed all around! Leveraging genAI's pattern-recognition abilities to spot fake news can be a universally beneficial application. That ability to flag as well as clearly explain the indicators to a user - from an objective perspective - can go a long way.
