Study Finds New Version of ChatGPT Can Be Used in Voice Scams
TrollEye Security
As reported by Bleeping Computer, in a new study, researchers from the University of Illinois Urbana-Champaign (UIUC) have demonstrated how OpenAI’s ChatGPT-4o, a sophisticated AI model with integrated text, voice, and vision capabilities, could be exploited to facilitate financial scams. Despite OpenAI’s security enhancements to prevent misuse, UIUC researchers Richard Fang, Dylan Bowman, and Daniel Kang found that ChatGPT-4o’s real-time voice API can be leveraged to conduct scams with low to moderate success rates.
As voice-enabled scams grow into a multi-million dollar threat, the integration of voice, text, and visual features in tools like ChatGPT-4o introduces new security challenges. OpenAI’s latest advancements bring improved functionality, but the researchers’ study reveals that the technology’s safeguards, designed to detect harmful content and block unauthorized voices, may still fall short against more elaborate scams.
The Study's Findings
In their paper, the UIUC researchers explored how AI tools, currently available with limited restrictions, could be abused by cybercriminals for financial scams such as bank transfers, gift card exfiltration, cryptocurrency transfers, and credential theft. By using ChatGPT-4o’s voice capabilities to simulate interactions, they showcased how AI agents could navigate websites, input data, manage two-factor authentication, and conduct other tasks traditionally handled by human scammers.
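For readers unfamiliar with this agent pattern, the sketch below shows roughly how a language model can be wired to browser-action tools via function calling, which is the general architecture the paper describes. It is a hypothetical, text-based illustration only: the tool names, selectors, dispatch loop, and use of the plain chat endpoint are assumptions made for clarity, not the researchers’ code, and the study’s actual agents ran on GPT-4o’s real-time voice API.

```python
# Minimal sketch of an LLM-driven browser agent, under the assumptions above.
# The goto/fill/click tools and the dispatch loop are hypothetical.
import json
from openai import OpenAI
from playwright.sync_api import sync_playwright

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Browser actions exposed to the model as function-calling tools.
TOOLS = [
    {"type": "function", "function": {
        "name": "goto",
        "description": "Navigate the browser to a URL.",
        "parameters": {"type": "object",
                       "properties": {"url": {"type": "string"}},
                       "required": ["url"]}}},
    {"type": "function", "function": {
        "name": "fill",
        "description": "Type text into the element matching a CSS selector.",
        "parameters": {"type": "object",
                       "properties": {"selector": {"type": "string"},
                                      "text": {"type": "string"}},
                       "required": ["selector", "text"]}}},
    {"type": "function", "function": {
        "name": "click",
        "description": "Click the element matching a CSS selector.",
        "parameters": {"type": "object",
                       "properties": {"selector": {"type": "string"}},
                       "required": ["selector"]}}},
]

def run_agent(task: str, max_steps: int = 26) -> None:
    """Let the model drive a browser for up to `max_steps` tool calls
    (26 matches the maximum action count the study reports)."""
    messages = [{"role": "user", "content": task}]
    with sync_playwright() as p:
        page = p.chromium.launch(headless=True).new_page()
        for _ in range(max_steps):
            resp = client.chat.completions.create(
                model="gpt-4o", messages=messages, tools=TOOLS)
            msg = resp.choices[0].message
            if not msg.tool_calls:  # model has stopped acting
                break
            messages.append(msg)
            for call in msg.tool_calls:
                args = json.loads(call.function.arguments)
                if call.function.name == "goto":
                    page.goto(args["url"])
                elif call.function.name == "fill":
                    page.fill(args["selector"], args["text"])
                elif call.function.name == "click":
                    page.click(args["selector"])
                # Report the action result back so the model can plan its
                # next step.
                messages.append({"role": "tool",
                                 "tool_call_id": call.id,
                                 "content": "ok"})
```

The point of the sketch is how little scam-specific machinery is required: the same generic navigate/type/click loop that powers legitimate browser agents is what the researchers showed could be repurposed once the model’s content restrictions were bypassed.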
Since ChatGPT-4o is programmed to avoid handling sensitive information, the researchers used prompt jailbreaking techniques to bypass built-in restrictions. They simulated these scams by manually interacting with the AI, assuming the role of a “credulous victim,” and confirmed successful transactions on real websites, including Bank of America.
Success Rates and Costs
Across the simulated scenarios, success rates ranged from 20% to 60%, with credential theft from Gmail achieving the highest rate at 60%. Bank transfers and IRS agent impersonation saw more failures due to transcription errors and the complexity of site navigation. Each scam required up to 26 browser actions and could last up to three minutes in complex cases.
The study highlighted that conducting these scams was relatively inexpensive. For instance, a successful case cost around $0.75 on average, with bank transfer scams costing $2.51—a small price compared to potential profits from successful scams.
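To see why the economics favor attackers, note that if each attempt costs c and succeeds with probability p, a scammer expects 1/p attempts per success, for an expected cost of c/p per successful scam. The quick sanity check below uses hypothetical per-attempt costs chosen only so the arithmetic lands near the paper’s reported per-success figures; the paper itself reports the per-success costs directly.

```python
# Back-of-the-envelope check: expected cost per successful scam = c / p.
# The per-attempt costs below are hypothetical illustrations, not figures
# from the paper; the success rates are those reported in the study.
scenarios = {
    "credential theft": {"attempt_cost": 0.45, "success_rate": 0.60},
    "bank transfer":    {"attempt_cost": 0.50, "success_rate": 0.20},
}
for name, s in scenarios.items():
    cost_per_success = s["attempt_cost"] / s["success_rate"]
    print(f"{name}: expected ${cost_per_success:.2f} per successful scam")
# credential theft: expected $0.75 per successful scam
# bank transfer: expected $2.50 per successful scam
```

Even at the study’s lowest observed success rate, the expected cost per success stays in the low single dollars, which is why the authors describe the attacks as cheap relative to the payout of a single compromised account.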
OpenAI responded to the findings by emphasizing the company’s ongoing efforts to enhance ChatGPT’s security measures. In a statement, an OpenAI spokesperson acknowledged that papers like this one from UIUC are valuable for improving ChatGPT’s defenses against malicious use.
The spokesperson noted that OpenAI’s upcoming model, “o1-preview,” has been built with advanced reasoning capabilities and improved defenses against adversarial use. Compared to GPT-4o, o1-preview scored significantly higher in internal safety evaluations, showing a 93% resistance rate to unsafe content prompts versus 71% for GPT-4o.
To prevent voice-based abuse, OpenAI has restricted voice generation to a set of pre-approved voices to counter impersonation attempts. As OpenAI continues to enhance ChatGPT’s security, the company also indicated that older, more vulnerable models may eventually be phased out.
While OpenAI’s updates aim to make ChatGPT-4o and future models more resilient against abuse, the risk posed by other voice-enabled chatbots with fewer restrictions remains significant. The UIUC study shows how even small loopholes in advanced AI can be exploited by cybercriminals, underscoring the need for continuous improvements in AI safety and fraud prevention as these technologies evolve.
As voice and AI technologies continue to advance, both developers and users must remain vigilant to mitigate the substantial risks they pose to financial security and personal data.