Generative AI Amplifies Malware Threats by Producing 10,000 Variants with 88% Evasion Rate

Cybersecurity researchers have demonstrated how large language models (LLMs) can create numerous variants of malicious JavaScript code, making the rewritten samples significantly harder for detection systems to flag.

“While LLMs may struggle to generate malware from scratch, they can effectively rewrite or obfuscate existing malware, making it more challenging to detect,” stated researchers from Palo Alto Networks Unit 42 in a recent analysis. The transformations enabled by LLMs produce code that appears more natural, thereby complicating the task of identifying it as malicious.

LLM-Powered Obfuscation

Unit 42 researchers used LLMs to iteratively rewrite malware samples, aiming to bypass detection by machine learning (ML) models such as Innocent Until Proven Guilty (IUPG) and PhishingJS. The process involved various techniques (sketched in code after this list), including:

  • Variable renaming
  • String splitting
  • Insertion of junk code
  • Whitespace removal
  • Complete reimplementation of code
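
For illustration only, the Python sketch below applies simplified versions of the first four transformations to a harmless JavaScript snippet. The sample, identifier names, and regex-based rewrites are stand-ins invented for this example; Unit 42's actual pipeline prompts an LLM to perform the rewriting, including full reimplementation, which no mechanical script like this can replicate.

```python
import random
import re

# Harmless JavaScript stand-in for a malicious sample.
JS_SAMPLE = '''var targetUrl = "https://example.com/payload";
function fetchPayload() { return fetch(targetUrl); }'''

def rename_variables(src: str) -> str:
    """Variable renaming: swap known identifiers for random names."""
    for name in ("targetUrl", "fetchPayload"):
        new = "v" + "".join(random.choices("abcdefgh", k=6))
        src = re.sub(rf"\b{name}\b", new, src)
    return src

def split_strings(src: str) -> str:
    """String splitting: break long literals into concatenations."""
    def splitter(m: re.Match) -> str:
        s = m.group(1)
        mid = len(s) // 2
        return f'"{s[:mid]}" + "{s[mid:]}"'
    return re.sub(r'"([^"]{8,})"', splitter, src)

def insert_junk(src: str) -> str:
    """Junk-code insertion: prepend a dead statement that never runs usefully."""
    return f"var unused{random.randint(0, 99)} = {random.randint(0, 99)};\n" + src

def strip_whitespace(src: str) -> str:
    """Whitespace removal: collapse runs of whitespace, as in minified code."""
    return re.sub(r"\s+", " ", src).strip()

variant = strip_whitespace(insert_junk(split_strings(rename_variables(JS_SAMPLE))))
print(variant)  # behaviorally identical, textually very different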

The result was the creation of 10,000 unique JavaScript malware variants, all retaining the same functionality but often classified as benign by malware detection systems. An internal test showed that 88% of the transformed variants successfully evaded detection.

Adding to the concern, these rewritten variants often bypassed detection on platforms like VirusTotal, which aggregate results from multiple security tools. Unlike traditional obfuscation tools such as obfuscator.io, the LLM-generated variants appeared more organic, making them harder to fingerprint.

Dual-Use Potential

While the risks of such techniques are clear, researchers highlighted their potential for defensive use. The same LLM-driven obfuscation could be used to generate diverse training data to enhance the robustness of ML-based detection models.
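
A minimal sketch of that defensive idea follows, assuming a hypothetical generate_variant() hook that calls an LLM (or transformations like those sketched above) to rewrite a sample while preserving behavior. The scikit-learn pipeline is an illustrative stand-in, not the IUPG or PhishingJS architecture.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def generate_variant(js_source: str) -> str:
    """Hypothetical hook: an LLM call (or mechanical transformations
    like those sketched earlier) rewrites the sample, same behavior."""
    raise NotImplementedError("plug in an LLM-based rewriter here")

def augment_training_set(samples, labels, variants_per_sample=10):
    """Expand each malicious sample with rewritten variants so the
    classifier sees obfuscation-style diversity during training."""
    aug_samples, aug_labels = list(samples), list(labels)
    for src, label in zip(samples, labels):
        if label == 1:  # 1 = malicious
            for _ in range(variants_per_sample):
                aug_samples.append(generate_variant(src))
                aug_labels.append(1)
    return aug_samples, aug_labels

# Illustrative usage with a character n-gram model (not the actual detectors):
# X, y = augment_training_set(samples, labels)
# clf = make_pipeline(TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
#                     LogisticRegression(max_iter=1000))
# clf.fit(X, y)
```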

TPUXtract Attack Targets Google Edge TPUs

In a separate disclosure, researchers from North Carolina State University introduced a side-channel attack called TPUXtract, targeting Google Edge Tensor Processing Units (TPUs).

The attack exploits electromagnetic signals emitted during neural network inference to extract hyperparameters, including:

  • Layer type and configurations
  • Number of nodes, filters, and strides
  • Kernel sizes and activation functions

With a 99.91% accuracy rate, the attack enables adversaries to replicate the architecture of AI models, potentially leading to intellectual property theft or facilitating further attacks. Notably, the attack requires physical access to the target device and specialized equipment, limiting its scalability.
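
To make the risk concrete, the sketch below shows how an adversary who has recovered per-layer hyperparameters might rebuild a matching architecture in Keras. The layer configuration is invented for illustration, and the electromagnetic capture and trace analysis that TPUXtract actually performs are not reproduced here; extraction recovers the architecture only, so the clone's weights would still need to be trained.

```python
import tensorflow as tf

# Invented per-layer hyperparameters of the kind TPUXtract recovers
# (layer type, filter count, kernel size, stride, activation).
extracted_layers = [
    {"type": "conv2d", "filters": 32, "kernel": 3, "stride": 1, "activation": "relu"},
    {"type": "maxpool", "pool": 2},
    {"type": "conv2d", "filters": 64, "kernel": 3, "stride": 1, "activation": "relu"},
    {"type": "flatten"},
    {"type": "dense", "units": 10, "activation": "softmax"},
]

def rebuild(layer_cfgs, input_shape=(28, 28, 1)) -> tf.keras.Model:
    """Instantiate an architecture matching the recovered hyperparameters."""
    model = tf.keras.Sequential([tf.keras.layers.Input(shape=input_shape)])
    for cfg in layer_cfgs:
        if cfg["type"] == "conv2d":
            model.add(tf.keras.layers.Conv2D(
                cfg["filters"], cfg["kernel"],
                strides=cfg["stride"], activation=cfg["activation"]))
        elif cfg["type"] == "maxpool":
            model.add(tf.keras.layers.MaxPooling2D(cfg["pool"]))
        elif cfg["type"] == "flatten":
            model.add(tf.keras.layers.Flatten())
        elif cfg["type"] == "dense":
            model.add(tf.keras.layers.Dense(cfg["units"], activation=cfg["activation"]))
    return model

clone = rebuild(extracted_layers)
clone.summary()  # matching architecture; the weights still have to be trained
```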

EPSS Framework Vulnerabilities

Finally, researchers at Morphisec identified weaknesses in the Exploit Prediction Scoring System (EPSS), a machine learning-based scoring system used to estimate how likely a software vulnerability is to be exploited in the wild. By manipulating key input features, such as social media mentions and public exploit code availability, attackers can distort the model's outputs.

A proof-of-concept demonstrated that artificially boosting these signals through fabricated posts on social media and placeholder GitHub repositories could increase a vulnerability’s predicted exploitation probability and percentile ranking, misleading organizations that rely on EPSS for risk management.
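
The toy model below, with entirely invented weights, illustrates the mechanism: a logistic score over a few features shows how inflating attacker-controllable signals shifts the predicted probability. The real EPSS model, maintained by FIRST, is a far richer machine learning model over many more features; this is a schematic of the manipulation, not the actual scoring function.

```python
import math

# Made-up feature weights for illustration only; not the real EPSS model.
WEIGHTS = {
    "cvss_score": 0.30,
    "social_media_mentions": 0.08,  # inflatable via fabricated posts
    "public_exploit_code": 1.50,    # inflatable via placeholder repos
}
BIAS = -6.0

def predicted_probability(features: dict) -> float:
    """Logistic score: sigmoid(bias + sum of weight * feature value)."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))

baseline = {"cvss_score": 7.5, "social_media_mentions": 2, "public_exploit_code": 0}
inflated = {"cvss_score": 7.5, "social_media_mentions": 40, "public_exploit_code": 1}

print(f"baseline: {predicted_probability(baseline):.3f}")  # ~0.027
print(f"inflated: {predicted_probability(inflated):.3f}")  # ~0.721
```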

Implications for Cybersecurity

These developments underscore the double-edged nature of generative AI and its profound impact on cybersecurity. While bad actors can exploit LLMs and AI frameworks to enhance their attacks, defenders must adapt by leveraging the same tools to improve detection systems and mitigate risks.

The findings highlight the growing sophistication of cyber threats and emphasize the importance of staying ahead in the arms race between attackers and defenders. Collaboration between researchers, organizations, and AI developers will be key to ensuring robust security measures in an evolving technological landscape.


Noveo's Commitment to AI-Driven Cybersecurity

At Noveo, we recognize the critical role of AI in modern cybersecurity. Our comprehensive training programs are designed to equip professionals with the skills needed to implement and manage AI-powered incident response systems effectively. By enrolling in our training, you will:

  • Gain In-Depth Knowledge: Understand the integration of AI in cybersecurity and its application in real-world scenarios.
  • Develop Practical Skills: Engage in hands-on exercises with cutting-edge technologies and strategies to build and manage AI-driven security solutions.
  • Stay Ahead of Threats: Learn to leverage AI to proactively detect and respond to emerging cyber threats, ensuring your organization stays ahead of malicious actors.


Join Us in Securing the Future

Don't leave your cybersecurity to chance. Choose Noveo—the industry leader in AI security training—and empower your team to safeguard your digital assets effectively. Sign up for our training today and take the first step toward a more secure future.
