Can AI be a force of good for fending off fake news and deepfakes?
Wei Manfredi
Visionary Tech Executive | Data & AI Keynote Speaker | Board Member | Ex-Googler Posts = My Opinions
Last week, one of my favorite columnists, Thomas L. Friedman, wrote about “a trend ailing America today: how much we’ve lost our moorings as a society”. The metaphor he used is a type of tree, the mangrove: “Mangroves filter toxins and pollutants through their extensive roots, they provide buffers against giant waves set off by hurricanes and tsunamis, they create nurseries for young fish to safely mature because their cabled roots keep out large predators, and they literally help hold the shoreline in place. To my mind, one of the saddest things that has happened to America in my lifetime is how much we’ve lost so many of our mangroves. They are endangered everywhere today — but not just in nature.”
This made me think about responsible AI again: what is the “Mangrove” that AI practitioners could foster to fortify trusted information sources, cultivate a solid foundation of scientific literacy, and “remoor” society around facts rather than opinions?
AI is a double-edged sword when it comes to information. While it can be used to create deepfakes and spread misinformation, it also holds immense potential to fight fake news. Here are some examples of how AI can help us fend off fake information:
1. Identifying Patterns and Red Flags: AI can analyze massive amounts of data to identify patterns in language and content creation that are often associated with fake news (a minimal classifier sketch follows this list).
2. Fact-checking at Scale: AI can automate some aspects of fact-checking, for example by matching new claims against claims that have already been verified (see the second sketch after this list).
3. Tracking the Spread of Misinformation: AI can monitor social media platforms to identify how fake news spreads. This allows researchers and platforms to understand how misinformation campaigns operate and develop strategies to disrupt them (the third sketch below models spread as a simple share graph).
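To make point 1 concrete, here is a minimal sketch of the pattern-spotting idea, assuming a Python environment with scikit-learn installed. The toy headlines, labels, and probability output are hypothetical placeholders, not a real misinformation dataset; the point is only that red-flag wording patterns can be learned by a simple classifier.

# Minimal sketch: flagging suspicious wording with a simple text classifier.
# The training examples and labels below are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "SHOCKING cure doctors don't want you to know about!!!",
    "Scientists are LYING to you about the climate",
    "City council approves budget for new transit line",
    "Quarterly report shows inflation easing to 3.1 percent",
]
labels = [1, 1, 0, 0]  # 1 = previously fact-checked as false, 0 = reliable

# TF-IDF over word 1- and 2-grams captures sensational, clickbait-style phrasing.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

new_headline = "MIRACLE pill melts fat overnight, experts stunned"
prob_fake = model.predict_proba([new_headline])[0][1]
print(f"Estimated probability of misinformation: {prob_fake:.2f}")

A production system would train on large labeled corpora and pair the score with human review rather than blocking content automatically.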
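For point 2, one common building block is retrieving the closest already-verified claim for an incoming statement. This sketch uses simple TF-IDF cosine similarity; the verified_claims list and the threshold are made up for illustration, and a real pipeline would query a large fact-check archive.

# Minimal sketch: matching an incoming claim against previously checked claims.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

verified_claims = [
    ("Vaccines cause autism", "false"),
    ("The Eiffel Tower is in Paris", "true"),
    ("5G towers spread viruses", "false"),
]

vectorizer = TfidfVectorizer()
claim_matrix = vectorizer.fit_transform([claim for claim, _ in verified_claims])

def check_claim(text, threshold=0.4):
    """Return the closest previously checked claim if it is similar enough."""
    scores = cosine_similarity(vectorizer.transform([text]), claim_matrix)[0]
    best = scores.argmax()
    if scores[best] >= threshold:
        claim, verdict = verified_claims[best]
        return f"Close to checked claim '{claim}' (verdict: {verdict}, score {scores[best]:.2f})"
    return "No close match; route to a human fact-checker"

print(check_claim("New study says 5G antennas are spreading a virus"))

Simple matching like this will miss paraphrases that share few words, so real systems layer in semantic embeddings, but even basic retrieval lets a small fact-checking team operate at scale.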
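For point 3, tracking spread can be framed as graph analysis. The sketch below builds a directed share graph with the networkx library; the edge list is invented for illustration, whereas real data would come from platform share or repost logs.

# Minimal sketch: modeling how a story propagates as a directed share graph.
import networkx as nx

# Edge (a, b) means account b reshared a post from account a (hypothetical data).
shares = [
    ("origin_account", "user_1"), ("origin_account", "user_2"),
    ("user_1", "user_3"), ("user_1", "user_4"), ("user_1", "user_5"),
    ("user_2", "user_6"),
]
G = nx.DiGraph(shares)

# Accounts whose posts get reshared the most act as amplification hubs.
top_amplifiers = sorted(G.out_degree(), key=lambda pair: pair[1], reverse=True)[:3]
print("Top amplifiers:", top_amplifiers)

# How many hops the story has travelled from its origin.
depth = nx.single_source_shortest_path_length(G, "origin_account")
print("Maximum share depth:", max(depth.values()))

Researchers use far richer signals (timing, bot-likeness, community structure), but even degree counts and share depth show where an amplification campaign is concentrated.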
However, it's important to remember that AI is just a tool, and generative AI in particular is a reflection of the content on the internet. We simply cannot feed AI synthetic information and let it generate synthetic intelligence that further pollutes our society. We need unbiased, trusted news media and journalism. That's why the progress of legislative bills such as the CJPA (California Journalism Preservation Act) is so interesting to watch.
Overall, AI is a valuable weapon in the fight against fake information, but it needs to be supported by good sources of content and developed and used responsibly for human betterment.
Engineering Leader | Cloud Architecture | AWS, Lululemon, Nike
9 months ago: Great post Wei. Definitely a repeating pattern in technology: the new tech that can cause harm is the same tech we use to mitigate that harm. I see (want) a core responsibility of these personal AI assistants we are supposedly getting to be that of “misinformation mitigation”. To that point, I feel open source models are key to providing personal-use models we can trust.
Senior Director (Data, Analytics, ML) at Arcteryx
9 months ago: It will be interesting to see the evolution; purpose-built tools will bring the right balance.
Principal Architect, McDonald's Global Technology
9 months ago: To combat this, I wonder if it would be possible for owners of quality content to authorise copyright exemption so that curated corpora can be used to benchmark or train AI.
Supervisor, Enterprise Communications at McDonald’s | Strategic Communications Professional Experienced in Employee Engagement, Generative AI, and Business Transformation.
9 months ago: Wei Manfredi - agreed all around! Leveraging genAI's pattern-recognition abilities to spot fake news can be a universally beneficial application. That ability to flag, as well as clearly explain the indicators to a user from an objective perspective, can go a long way.