FraudGPT: Criminals Have Created Their Own ChatGPT Clones
Net Talent
Specialist recruitment consultancy focusing on technology, digital and data roles across Scotland, the UK and beyond.
Cybercriminals and hackers are claiming to have created their own versions of text-generating technology like OpenAI’s ChatGPT chatbot. The systems could, theoretically at least, supercharge criminals’ ability to write malware or phishing emails that trick people into handing over their login information.
The emergence of criminals building their own versions of large language models (LLMs) like ChatGPT for malicious purposes is a concerning development. Because LLMs generate human-like text, these rogue chatbots could be effective tools for crafting convincing phishing emails or malware-laden content that deceives unsuspecting individuals.
It's important to note that the technology itself is neutral; it's the intent and usage that determine whether it's for legitimate or malicious purposes. Just as legitimate companies have developed LLMs to assist with various tasks, criminals can exploit these same capabilities for their own gain.
The proliferation of such malicious LLMs highlights the need for enhanced cybersecurity measures and continued research to detect and mitigate potential threats. As technology evolves, so do the methods and tools that cybercriminals use. It's crucial for security experts and organizations to stay vigilant and adapt their strategies to counter these emerging threats effectively.
In recent weeks, two chatbots have been advertised on dark-web forums—WormGPT and FraudGPT—according to security researchers monitoring the activity. The LLMs developed by large tech companies, such as Google, Microsoft, and OpenAI, have a number of guardrails and safety measures in place to stop them from being misused. If you ask them to generate malware or write hate speech, they’ll generally refuse.
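To make the idea of those guardrails concrete, the following is a minimal, purely hypothetical sketch in Python of a prompt pre-filter that refuses requests matching disallowed categories before they ever reach the model. The category names, keyword lists and function names are assumptions for illustration only; real providers rely on far more sophisticated classifiers, and this is not any vendor’s actual implementation.

# Hypothetical sketch of the kind of guardrail described above: a simple
# keyword-based pre-filter that refuses disallowed requests before they
# reach the model. Categories, keywords and function names are
# illustrative assumptions, not any provider's real safety system.

DISALLOWED = {
    "malware": ["write malware", "ransomware", "keylogger"],
    "phishing": ["phishing email", "steal login", "harvest credentials"],
    "hate": ["hate speech"],
}

REFUSAL = "Sorry, I can't help with that request."


def check_prompt(prompt):
    """Return the matched disallowed category, or None if the prompt is allowed."""
    lowered = prompt.lower()
    for category, keywords in DISALLOWED.items():
        if any(keyword in lowered for keyword in keywords):
            return category
    return None


def respond(prompt):
    """Refuse disallowed prompts; otherwise hand off to the (stubbed) model."""
    if check_prompt(prompt) is not None:
        return REFUSAL
    return f"(model output for: {prompt})"  # stand-in for the real model call


if __name__ == "__main__":
    print(respond("Write a phishing email that can steal login details"))
    print(respond("Summarise this article in two sentences"))

In practice, the refusal behaviour the article describes is trained into the models and reinforced by separate classifiers, but even this toy filter shows the layer that tools like WormGPT claim to strip away.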
The shady LLMs claim to strip away any kind of safety protections or ethical barriers. WormGPT was first spotted by independent cybersecurity researcher Daniel Kelly, who worked with security firm SlashNext to detail the findings. WormGPT’s developers claim the tool offers an unlimited character count and code formatting. “The AI models are notably useful for phishing, particularly as they lower the entry barriers for many novice cybercriminals,” Kelly says in an email. “Many people argue that most cybercriminals can compose an email in English, but this isn’t necessarily true for many scammers.”
In a test of the system, Kelly writes, it was asked to produce an email that could be used as part of a business email compromise scam, with a purported CEO writing to an account manager to say an urgent payment was needed. “The results were unsettling,” Kelly wrote in the research. The system produced “an email that was not only remarkably persuasive but also strategically cunning.”
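On the defensive side, here is a rough, hypothetical sketch in Python of the kind of rule-of-thumb screen a mail filter or security team might apply to spot business email compromise attempts like the one above. The cue lists, the look-alike-domain check and the review threshold are illustrative assumptions, not SlashNext’s method or a production-grade detector.

# Hypothetical, rule-of-thumb screen for business email compromise (BEC)
# attempts like the urgent-payment example above. Cue lists and the review
# threshold are illustrative assumptions, not a production detector.

URGENCY_CUES = ["urgent", "immediately", "as soon as possible", "today"]
PAYMENT_CUES = ["wire transfer", "payment", "invoice", "bank details"]
AUTHORITY_CUES = ["ceo", "chief executive", "managing director"]


def bec_risk_score(subject, body, sender_domain, company_domain):
    """Count simple BEC warning signs in an email (higher means riskier)."""
    text = f"{subject} {body}".lower()
    score = 0
    score += any(cue in text for cue in URGENCY_CUES)         # urgency pressure
    score += any(cue in text for cue in PAYMENT_CUES)         # asks for money
    score += any(cue in text for cue in AUTHORITY_CUES)       # impersonates an executive
    score += sender_domain.lower() != company_domain.lower()  # external or look-alike domain
    return score


if __name__ == "__main__":
    score = bec_risk_score(
        subject="Urgent payment needed today",
        body="Please process this wire transfer before 3pm. - The CEO",
        sender_domain="examp1e-corp.com",   # note the digit 1 in place of the letter l
        company_domain="example-corp.com",
    )
    print(f"BEC risk score: {score} (flag for human review if 2 or more)")

Such keyword rules are easy for a well-written, AI-generated email to slip past, which is exactly why researchers find results like Kelly’s unsettling.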
The development of malicious LLMs such as FraudGPT and WormGPT by cybercriminals is concerning and highlights the potential misuse of advanced AI technology. The criminal actors behind them claim the models can be used for activities such as crafting undetectable malware, generating text for online scams, and identifying vulnerabilities.
Several observations can be drawn from all of this.
In summary, the emergence of malicious LLMs in the hands of cybercriminals poses a significant challenge to cybersecurity efforts. It underscores the importance of proactive measures to detect, prevent, and mitigate the potential threats posed by these advanced AI technologies. As AI continues to advance, both legitimate users and cybercriminals will find ways to leverage its capabilities, making ongoing research, awareness, and preparedness critical components of cybersecurity.