OpenAI Confirms State Sponsored Threat Actors Using ChatGPT For Cyber Attacks

OpenAI has disclosed that cybercriminals are exploiting its ChatGPT AI model to develop malware and carry out cyberattacks.

In a recent report, OpenAI outlined more than 20 incidents since early 2024 in which attackers attempted to misuse ChatGPT for harmful activities.

The report, titled “Influence and Cyber Operations: An Update,” indicates that state-sponsored hacking groups, particularly those linked to China and Iran, have been using ChatGPT to bolster their cyberattack capabilities. These activities include refining malware code, generating content for phishing campaigns, and spreading disinformation on social media.

The report details several notable attacks involving the generative AI tool.

The first attack was carried out by a Chinese threat group tracked as "SweetSpecter," first reported by Cisco Talos in November 2023 for targeting Asian government entities. The group sent spear-phishing emails carrying a ZIP file with a malicious payload; when the archive was downloaded and opened, it triggered an infection chain on the victim's system. OpenAI found that the attackers used multiple ChatGPT accounts to write scripts and research vulnerabilities with the model's help.

The second was executed by an Iranian group known as "CyberAv3ngers," which used ChatGPT to research vulnerabilities and to look into stealing user passwords from macOS devices. A third, Iran-based group, Storm-0817, used ChatGPT to help develop Android malware designed to steal contact lists, extract call logs and browser histories, obtain the device's precise location, and access files on infected devices.

Because these attacks relied on conventional malware-development techniques, the report concludes that ChatGPT gave the attackers no fundamentally new capabilities. Nevertheless, the incidents show how readily threat actors can bend generative AI services toward building harmful tools, even with safeguards in place. While security researchers continue to uncover and report such abuse, these cases underscore the need for a serious discussion about restrictions on generative AI usage.

OpenAI has stated that it will continue to enhance its AI systems to prevent such abuses, collaborating with internal safety and security teams in the process. The company also pledged to share its findings with industry partners and the research community to help prevent similar incidents in the future.

As generative AI becomes more widely adopted, it is crucial for other major AI platforms to implement protections to mitigate the risk of such attacks. While it is challenging to eliminate these threats entirely, AI companies must prioritize preventive measures over reactive ones to strengthen cybersecurity.


Read The Complete OpenAI Report Here


Boštjan Dolinšek

Will digital forensics stop the CyberAv3ngers?

ibrahim fatai

IT Specialist at gencon-solution

5 months

I'm sure that's not the only process they are working on... the best policies should be implemented and monitored...

Jupyo Seo

Digital Audit Director | Cybersecurity & IT Risk Management, Process Improvement, Data Analytics

5 months

This is an aspect we need to focus on for safer AI tools in the future. Before that, we also need to define how broad the scope of 'safer' should be, and how we can measure and provide assurance of their safety.

Mohamed Abd Elhalim

IT Engineer, Linux / UNIX administrator, Windows Administrator System Engineer, Network Engineer, Infrastructure manager

5 months

It is expected that thieves will use technology just as the righteous do, but this calls for caution, not for burying technology.
