How We Can Combat the Negative Impacts of AI
Lorenzo Thione
OG AI Founder (2003) / Keynote Speaker & Investor / Broadway Producer / LGBTQIA+ Advocate
In six months, nearly all online content, as much as 90%, may be AI-generated.
This may lead to even more disinformation, spam, and disruption of SEO and search, a trend the recent Google algorithm leaks confirmed. It’s a big deal, and it’s already hurting the media and publishing industry.
That’s just part of the story.
AI can be weaponized to inflict harm in other ways, too.
Here’s what we need to know to combat these risks and protect ourselves.
1. AI systems aren’t inherently evil.
But they can still do damage, even when used as intended.
Even though people are already using AI in malicious ways, AI’s negative consequences are not always intentional. Without proper safeguards, it’s easy to end up with biases in algorithmic decision-making, infringement of copyright and image rights, or circulation of unsourced, plagiarized content.
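(To make the bias point concrete: a lightweight audit can surface skewed outcomes before a system ships. The Python sketch below is purely illustrative, using made-up decision data and the common "four-fifths" review threshold as assumptions; it isn’t from any particular toolkit.)

```python
from collections import defaultdict

def selection_rates(decisions):
    """Approval rate per demographic group.

    decisions: iterable of (group, approved) pairs, e.g. ("A", True).
    """
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest selection rate divided by the highest.

    The 'four-fifths' rule of thumb flags ratios below 0.8 for review.
    """
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 0.0

# Made-up decisions from a hypothetical loan-approval model.
decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = selection_rates(decisions)
if disparate_impact_ratio(rates) < 0.8:
    print("Potential disparate impact, send for human review:", rates)
```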
2. Containment is critical.
We need to make significant investments in AI containment strategies: AI-powered defenses and countermeasures to detect, neutralize, and prevent AI misuse and its side effects. Some of my own AI portfolio companies, Reality Defender, Jericho Security, and Protopia AI, are already building cutting-edge technology solutions to keep us and our data safe.
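(The article doesn’t detail how these products work internally; as a rough sketch of what one containment layer could look like, the snippet below screens an upload through a detection model before publication. The detector, labels, and threshold are hypothetical placeholders, not Reality Defender’s actual API.)

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str    # e.g. "likely_synthetic" or "likely_authentic"
    score: float  # detector confidence in [0, 1]

def detect_synthetic(media: bytes) -> Detection:
    """Stand-in for a real deepfake/AI-content detection model."""
    # A production system would run a trained classifier here.
    return Detection(label="likely_synthetic", score=0.93)

def screen_upload(media: bytes, threshold: float = 0.9) -> bool:
    """Quarantine media the detector flags with high confidence."""
    result = detect_synthetic(media)
    if result.label == "likely_synthetic" and result.score >= threshold:
        print(f"Quarantined for human review (score {result.score:.2f})")
        return False
    return True  # safe to publish

screen_upload(b"...uploaded video bytes...")
```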
3. Policing AI requires worldwide cooperation.
This isn’t just a technical challenge: it’s a societal imperative. Just as we have global efforts to contain the spread of nuclear weapons or biological agents, we need a worldwide effort to limit the downsides of AI use and misuse. We must adopt changes in our criminal and regulatory frameworks to face the new challenges to privacy, democracy, and individual rights posed by threats like deepfake porn, impersonation, and superhuman persuasion. And we need proper auditing of critical systems, and of those that can amplify and inflict inequities and injustice upon minorities, such as in hiring, housing, and access to financial products.
4. We are the architects of AI.
Continued work on red-teaming models and increasing our investment in AI Safety research is critical. We need to advance work in AI observability and interpretability, build AI models with robust fail-safes and moderation, and monitor and mitigate any harmful effects we detect.
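(As a minimal sketch of what a fail-safe around model output might look like, assuming a hypothetical moderation classifier; the function names and blocklist here are illustrative, not any vendor’s API.)

```python
def harm_score(text: str) -> float:
    """Stand-in moderation classifier returning a risk score in [0, 1]."""
    blocklist = ("build a weapon", "dox")  # illustrative terms only
    return 1.0 if any(term in text.lower() for term in blocklist) else 0.0

def safe_generate(prompt: str, generate, max_risk: float = 0.5) -> str:
    """Wrap a text generator with input- and output-side moderation."""
    if harm_score(prompt) > max_risk:
        return "[request declined by safety filter]"
    output = generate(prompt)
    if harm_score(output) > max_risk:
        return "[response withheld by safety filter]"
    return output

# Usage with a stand-in generator:
print(safe_generate("Summarize today's AI news.", lambda p: "Here's a summary..."))
```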
Performance Marketing and Ecommerce Expert | Acquisition Partner and Area President - Arthur J Gallagher | Author
9mo · The risk of misinformation and disruption in SEO is alarming.
Transforming stuck startups and advising conscious leaders
9mo · The rapid rise of AI-generated content is definitely a double-edged sword.
Much better.
[human only post] The banks created FINRA to self-police, and it’s a model that works. Maybe this already exists, but why couldn’t social media companies build a not-for-profit consortium of human verifiers (community members) to manually mark posts as “human verified, human produced content” under approved security protocols? They could fine each other for violating any principles that undermine the integrity of human content markets, and use the fines to fund the verification service. I’m afraid a service paid for by the user would squash the small voices. It’s a way for social media companies to work together and enact meaningful change. The service can be as simple as a short video call with the content producer to get a small badge, and it should be equally available to all individual humans. Just some thoughts... this impacts everyone.