ModeLeak: Privilege escalation to LLM model exfiltration in Vertex AI
ReversingLabs
Welcome to the latest edition of Chainmail: Software Supply Chain Security News, which brings you the latest software security headlines from around the world, curated by the team at ReversingLabs.
This week: Researchers uncovered two critical flaws in Google’s Vertex AI that could have allowed attackers to escalate privileges and exfiltrate models. Also: Amazon’s latest data breach stemmed from the infamous 2023 MOVEit Transfer SQL injection flaw.
This Week’s Top Story
ModeLeak: Privilege escalation to LLM model exfiltration in Vertex AI
This past week, researchers at Palo Alto Networks’ Unit 42 disclosed two vulnerabilities in Google’s Vertex AI platform that could have allowed attackers to escalate privileges and exfiltrate models. Vertex AI is a machine learning (ML) platform that lets users train and deploy ML models and AI apps, and customize large language models (LLMs) for use in AI-powered applications. Researchers shared the discovery with Google, and the company has since implemented fixes to eliminate the vulnerabilities on the Google Cloud Platform, where Vertex AI resides. Unit 42 developed proofs of concept for both vulnerabilities, which they detailed in their blog post.
The first vulnerability allowed privilege escalation via custom jobs – a feature of Vertex AI where users can tune their models using custom training jobs, which are “code that runs within the pipeline and can modify models in various ways,” researchers noted. By exploiting the custom job permissions, an attacker could escalate privileges and gain unauthorized access to all data services in the ML project. In Unit 42’s proof of concept, the researchers exploited the custom job feature to assume the service agent’s identity, making it possible for them to list, read, and export data from buckets and datasets that they “should never have been able to access.”
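The core mitigation for this class of flaw is least privilege on the identity that training jobs run under. As a purely illustrative sketch (the role names and allowlist below are hypothetical, not Vertex AI specifics), an audit script can flag any job service account that holds roles beyond what a training job actually needs:

```python
# Hypothetical audit: flag service-account roles that exceed a
# least-privilege allowlist for custom training jobs.
ALLOWED_ROLES = {
    "roles/aiplatform.customJobRunner",  # assumed role name, for illustration
    "roles/storage.objectViewer",
}

def find_excess_roles(granted_roles):
    """Return granted roles that go beyond the training-job allowlist."""
    return sorted(set(granted_roles) - ALLOWED_ROLES)

# Example: a job identity that can also read datasets and export buckets,
# the kind of over-broad grant the Unit 42 scenario abuses
granted = [
    "roles/aiplatform.customJobRunner",
    "roles/bigquery.dataViewer",   # excess: project-wide dataset reads
    "roles/storage.admin",         # excess: bucket export rights
]
print(find_excess_roles(granted))
```

In a real project the granted-role list would come from the cloud provider’s IAM API rather than a hard-coded list; the point is that excess grants are easy to detect mechanically once an allowlist exists.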
The second vulnerability, which researchers described as the more “interesting” of the two scenarios, allowed an attacker to exfiltrate models via a malicious model. In this scenario, an attacker could have deployed a poisoned model in Vertex AI in order to exfiltrate all other fine-tuned models. Researchers noted that if carried out successfully by a threat actor, this vulnerability could have posed “a serious proprietary and sensitive data exfiltration attack risk.”
For example, if a malicious actor uploaded a poisoned model to a public repository, and a member of your organization unknowingly imported and deployed that model to Vertex AI, the model could “exfiltrate every other ML and LLM model in the project … putting your organization’s most critical assets at risk,” researchers noted.
Researchers warned that both of these attack scenarios highlight “how a single malicious model deployment could compromise an entire AI environment,” demonstrating the need to implement strict controls on model deployments. One way to prevent such scenarios is to separate an organization’s development and test environments from its live production environment, which reduces the risk of a threat actor reaching unvetted models.
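One concrete way to enforce that validation step is to pin deployments to known-good artifact digests, so a model pulled from a public repository is rejected unless it matches what was reviewed. A minimal sketch, assuming a simple digest allowlist (not a Vertex AI feature):

```python
import hashlib

def is_approved(model_bytes, approved_digests):
    """Reject any model artifact whose SHA-256 digest was not
    recorded when the model passed internal review."""
    return hashlib.sha256(model_bytes).hexdigest() in approved_digests

# Illustrative: internal review records the digest of a vetted artifact
reviewed_artifact = b"model-weights-v1"
approved_digests = {hashlib.sha256(reviewed_artifact).hexdigest()}

print(is_approved(reviewed_artifact, approved_digests))    # True
print(is_approved(b"poisoned-weights", approved_digests))  # False
```

Digest pinning only proves the artifact is unchanged since review; it complements, rather than replaces, actually inspecting the model’s code and behavior before approval.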
“Whether it comes from an internal team or a third-party repository, validating every model before deployment is vital.” – Unit 42
(Unit 42)
This Week’s Headlines
Amazon’s latest data breach a ripple effect of MOVEit
This past Monday, Amazon publicly disclosed a data breach involving employee data, which has been linked to the infamous MOVEit Transfer vulnerability discovered last year. The vulnerability, a critical SQL injection flaw tracked as CVE-2023-34362, was first exploited in May 2023 and allowed attackers to gain unauthorized access to vulnerable systems. In total, the flaw enabled threat actors to bypass security measures and potentially steal sensitive data from at least 1,000 organizations worldwide. In this particular breach, Amazon fell victim through its connection to a third-party property management vendor – highlighting the importance of third-party software risk management (TPSRM). (Secure World)
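CVE-2023-34362 is a classic SQL injection: attacker-controlled input reaches the database as code rather than data. The standard defense, sketched here with Python’s stdlib sqlite3 (a generic illustration, not MOVEit’s actual codebase), is to bind user input as query parameters instead of splicing it into the SQL string:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE files (name TEXT, owner TEXT)")
conn.execute("INSERT INTO files VALUES ('report.pdf', 'alice')")

def find_files_unsafe(owner):
    # Vulnerable pattern: input is spliced into the SQL string,
    # so a crafted value can rewrite the query's logic
    return conn.execute(
        f"SELECT name FROM files WHERE owner = '{owner}'"
    ).fetchall()

def find_files_safe(owner):
    # Parameterized query: the driver treats input strictly as data
    return conn.execute(
        "SELECT name FROM files WHERE owner = ?", (owner,)
    ).fetchall()

payload = "' OR '1'='1"
print(find_files_unsafe(payload))  # returns every row despite no matching owner
print(find_files_safe(payload))    # returns nothing: the payload is just a string
```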
Security flaws in popular ML toolkits enable server hijacks, privilege escalation
Researchers at JFrog have discovered nearly two dozen vulnerabilities spanning 15 different ML-related open-source projects, on both the server and client side. Researchers noted that the server-side flaws in particular "allow attackers to hijack important servers in the organization such as ML model registries, ML databases and ML pipelines." The vulnerabilities were found in Weave, ZenML, Deep Lake, Vanna.AI, and Mage AI, and could “lead to an extremely severe breach” if successfully exploited, JFrog notes. (The Hacker News)
The US Department of Defense has finalized cyber rules for its suppliers
A new rule by the US Department of Defense (DoD) to ensure that contractors and subcontractors implement the cybersecurity measures required by the federal government is set to take effect 60 days after its publication in the Federal Register this past October. The rule covers the DoD’s Cybersecurity Maturity Model Certification (CMMC) Program, which verifies that the agency’s contractors comply with existing protections for federal contract information (FCI) and controlled unclassified information (CUI). Contractors must also protect that information at a level commensurate with known cybersecurity risks, including advanced persistent threats (APTs). The CMMC also gives the DoD the tools necessary to hold contractors accountable for putting sensitive information at risk, or for knowingly misrepresenting their cybersecurity practices and any incidents they experience. (CSO)
FBI, CISA, and NSA reveal most exploited vulnerabilities of 2023
Agencies from the US and several partner countries have released a list of the top 15 routinely exploited vulnerabilities of last year, most of them first abused as zero-days. The report also states that threat actors “exploited more zero-day vulnerabilities to compromise enterprise networks (in 2023) compared to 2022.” The list includes well-known flaws such as the MOVEit Transfer SQL injection, the TeamCity authentication bypass, and the Log4j2 remote code execution (RCE) vulnerability. The agencies are urging organizations worldwide to immediately patch the 15 vulnerabilities listed, and to implement patch management systems to minimize their networks’ exposure to potential attacks. (BleepingComputer)
These three critical sectors are riddled with high-risk vulnerabilities
New research from Black Duck highlights how vulnerable the finance, healthcare, and IT sectors have become to cyberattacks, with thousands of critical security flaws identified across all three industries. By analyzing data from dynamic application security testing (DAST) scans, the firm found nearly 1,300 critical vulnerabilities impacting companies in the financial sector, making it the most at-risk industry. The healthcare and social assistance sector came in second, with 992 critical vulnerabilities found across those companies. Third place went to the IT sector, where Black Duck’s DAST scans found 446 critical flaws. (ITPro)
For more insights on software supply chain security, see RL Blog.
The Best of RL
Blog | Gauging the Safety Level of Your Software with Spectra Assure
With RL Spectra Assure SAFE Levels, organizations can quickly understand their current level of software safety, which threats require immediate action, and how other risks and exposures can be addressed over time. Keep reading to learn more. [Read It Here]
Webinar | Exposing Software Supply Chain Weaknesses
November 20 at 12 pm ET
IDC's latest DevSecOps research documented a staggering 241% increase in software supply chain attacks. Join special guest speaker Katie Norton of IDC to dive into the firm’s findings, unpack trends in attacker behavior, and learn actionable steps to help safeguard your organization from these rising threats. [Register Here]
Webinar | Global Perspectives on Software Supply Chain Security
November 21 at 9 am ET / 2 pm UK
The UK Cyber Security and Resilience Bill marks a critical step in strengthening the nation's cybersecurity with a focus on software supply chain security. Join this live event, featuring experts from RL, the UK’s National Cyber Security Centre, and PwC UK to learn about the Bill's impact on businesses operating within the UK and those collaborating across borders. [Register Here]
For more great conversations to watch, see RL’s on-demand webinar library.