My Alaskan Trip: Musings on False Information Accelerated by AI and the Ethical Risks
Made with Designer. Powered by DALL·E 3.

Hello everyone. I am writing this article at the Anchorage Airport, having just stepped off a beautiful Alaskan cruise full of natural wonders. Majestic bald eagles were everywhere, and the occasional bear or moose roamed casually by. It’s hard to return to the world of AI when nature offers so much beauty and serenity. But with the realities of being a CEO striving to advance AI innovation with ethical AI practices, I find myself musing on what to share as I return to Canada. Enjoy the Alaskan Musings!

What would it mean if AI could consistently distinguish truth from lies in what humans say or write? How powerful would it be if smart AI detection systems could identify the difference between email spam, fake news, false claims, and even exaggerations in resumes? In an increasingly intelligent world, it is getting harder to know what is true.

As recently reported by MIT Technology Review, Alicia von Schenk et al. (2024) from the University of Würzburg, Germany, developed an AI tool that identifies lies significantly better than humans can. Their research asked participants to write statements about their weekend plans, incentivizing them to lie half of the time, and collected over 1,500 statements from 768 respondents. They then trained a Google AI language model (BERT) on 80% of these statements, labeled as lies or truths, and tested it on the remaining 20%, finding that the algorithm could tell whether a statement was true or false close to 70% of the time. However, 70% is not good enough.
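For readers curious about the mechanics, here is a minimal sketch of that kind of experiment: fine-tuning a BERT classifier on 80% of labeled statements and evaluating it on the held-out 20%. This is not the Würzburg team's actual code; the model name ("bert-base-uncased"), the placeholder statements, and the Hugging Face training setup are my own illustrative assumptions.

```python
# Minimal sketch (not the study's code): fine-tune BERT to classify statements
# as truth (0) or lie (1), train on 80% of the data, evaluate on the other 20%.
# Assumes the Hugging Face transformers library, PyTorch, and scikit-learn.
import numpy as np
import torch
from torch.utils.data import Dataset
from sklearn.model_selection import train_test_split
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

class StatementDataset(Dataset):
    """Wraps tokenized statements and their truth/lie labels."""
    def __init__(self, texts, labels, tokenizer):
        self.encodings = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

def accuracy(eval_pred):
    """Fraction of held-out statements classified correctly."""
    logits, labels = eval_pred
    return {"accuracy": float((np.argmax(logits, axis=-1) == labels).mean())}

# Placeholder data: in the study, over 1,500 statements about weekend plans,
# half written under an incentive to lie. These labels are illustrative only.
statements = [
    "I am visiting my parents and helping them repaint the fence.",
    "I will be running a marathon and then hosting dinner for twenty people.",
    "I plan to catch up on laundry and watch a movie.",
    "I am flying to Paris for a spontaneous weekend getaway.",
]
labels = [0, 1, 0, 1]  # 0 = truth, 1 = lie

# 80/20 split, as in the reported experiment.
train_x, test_x, train_y, test_y = train_test_split(
    statements, labels, test_size=0.2, random_state=42)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lie-detector", num_train_epochs=3),
    train_dataset=StatementDataset(train_x, train_y, tokenizer),
    eval_dataset=StatementDataset(test_x, test_y, tokenizer),
    compute_metrics=accuracy,
)
trainer.train()
print(trainer.evaluate())  # the study reported roughly 70% accuracy at this step
```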

Their research, if augmented with additional AI methods such as facial detection and voice analysis, has shown that accuracy levels beyond 80% are achievable, while humans at best are, on average, well below this marker. AI lie detectors look for facial patterns from movement and “micro gestures” associated with deception. As Jake Bittle wrote, “The dream of a perfect lie detector just won’t die, especially when glossed over with the sheen of AI.”

The bigger question, however, is how we can keep up. AI is always on and can always be learning. Tate Ryan-Mosley reported that, in her research across sixteen countries, generative AI is actively influencing public debate, sowing doubt, and smearing opponents with fake news.

AI language models are influencing our world everywhere, and we need to accelerate regulatory controls and advance AI-powered cybersecurity software that can detect what is true or false, to protect citizens and the democratic world we cherish.

Some writers, like Will Douglas Heaven, have reported that “Bull**it” detection tools for chatbots will be one way for humans to tell what is and is not trustworthy.

With the upcoming US and Canadian national elections, concerns about misinformation will require support from the technology titans fueling the AI industry.

"AI developers do not have a confident understanding of what causes undesirable AI behaviors like deception," says mathematician and cognitive scientist Peter Park . "But generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI's training task. Deception helps AI models achieve their goals."

Does this not sound like a perfect storm?

1.) AI developers don't understand AI behaviors like deception. Yikes!

2.) AI technology titans are betting their futures on LLMs, and LLMs are hallucinating, telling lies, and enabling accelerated misinformation and deception.

3.) AI is potentially smarter than humans at identifying disinformation, but we have not universally built the trusted highways needed to control the congested traffic, let alone validate its integrity and trustworthiness. In addition, the regulatory controls and safeguards are neither built nor validated.

What can be done?

One strategy is to give the technology titans a new citizen-responsibility accountability that requires them to invest 5-10% of their net income in developing advanced cybersecurity detection systems to counter what they are unleashing. In nature, there is always cause and effect, and the long-term consequences are worrisome given the speed of generative AI and false-information propagation.

It is as if LLM forest fires are starting everywhere, but we cannot control the flames as our world grows more congested with toxic misinformation. The fire brigades are not well equipped to manage the intense, uncontrolled LLM conflagration.

If Apple, Amazon, Google, IBM, Meta, Microsoft, Nvidia, Oracle, SAP, Salesforce, and large language model leaders like Anthropic, Cohere, OpenAI, Mistral, and others don't step up more to combat false content, we may slip into a world where trust is a distant reality and a long-lost friend.

It is ironic, DR. CINDY GORDON ICD.D., that ChatGPT "hallucinates," which to me is essentially lying or misinforming. An LLM does not learn to lie by itself; it is typically encoded as such. The irony of your post is that LLMs should be checking themselves and leveraging AI to delete any lies they may be propagating! Sue

Simon Au-Yong

Bible lover. Founder at Zingrevenue. Insurance, coding and AI geek.

4 months ago

Hope you enjoyed your beautiful escape from the scorching summer, DR. CINDY GORDON ICD.D. Misinformation and deepfakes are indeed a huge issue in an election year. And here Down Under, misfiring algorithms have caused enormous headaches for successive governing administrations and taxpayers alike. It looks like humans need to be in the AI loop for the foreseeable future! https://www.smh.com.au/business/companies/gone-in-38-seconds-regulator-using-ai-to-reject-serious-criminal-complaints-20230303-p5cp7d.html

Dave Cassie

Executive Advisor and Transformational Leader. Capital Markets, Wealth Management - Technology - Operations - Compliance.

4 months ago

Insightful view as usual (is this what you were pondering on your cruise?). This may be complicated as companies adopt or adapt their LLMs with their own content and constructs. Will individual companies have the resources for this citizen responsibility?

Christopher Norris

FRSA | Need help with your pre-launch business, invention or creative project? Let's connect | Serial entrepreneur: 15+ businesses | Author | Expert | Connector | Mentor | Philanthropist | Global

4 months ago

Tools like AI are essentially neutral technologies. It's what human beings do with the technology that leads to positive or negative outcomes. I agree, human nature being what it is, that bad actors with nefarious intentions need to be fenced in with regulation and penalties. I also agree that AI can be an effective way of policing uses of AI: we just need the commercial and legal environment to encourage Big Tech to design the necessary tools.
