Humans Still More Effective Than ChatGPT at Phishing
Human social engineers still outperform artificial intelligence (AI) programs when it comes to persuading potential victims to click on malicious links.
The claims come from a new research paper by HoxHunt, which analyzed 53,127 emails sent to users in over 100 countries as part of its phishing training workflow.
HoxHunt co-founder and CTO, Pyry Avist, suggests that professional red teamers managed to induce a 4.2% click rate compared to the 2.9% achieved by ChatGPT, outperforming the AI by 44.8%.
“Interestingly, there is some geographical variance between user failure rates on human vs AI-originated phishing simulations,” Avist wrote. “The greatest delta between the effectiveness of human vs AI-generated phishing attacks was among the Swedish population. AI was most effective against US respondents.”
HoxHunt clarified that the experiment was performed before the release of GPT-4, which is set to bring substantial improvements to the model.
“Large language models like ChatGPT will likely rapidly evolve and improve at tricking people into clicking.”
At the same time, Avist added that current human risk controls should remain relevant even as AI-augmented phishing tools evolve.
“The more time people spend in training, the less likely they’ll fall for an attack, human or AI. You don’t need to reconfigure your security training to address the potential misuse of ChatGPT.”
Potential measures to improve protection against such attacks include updating awareness training programs to inform employees about the emerging technologies and trends in phishing tactics, according to Tanium’s director of endpoint security research, Melissa Bischoping.
“While the recipient of a phish is often the first line of defense, it’s important that you’re also investing in layers of defense like email, DNS, network?and endpoint security monitoring and response capabilities.”