To make AI more secure, AI vendors should share their vulnerability research

We recently found, fixed, and disclosed vulnerabilities in our Vertex AI platform. Google Cloud CISO Phil Venables explains why all AI vendors should share their vulnerability research.


Importantly, the rapid growth of AI technology means that its attack surface can also shift quickly. This is a crucial time to invest in AI security research and to ensure that AI is as secure as possible as it matures.

At Google, we test our products, platform, and infrastructure, and we open ourselves to and partner with vulnerability researchers through our bug bounty program. We also have an in-house Google Cloud Vulnerability Research (CVR) team, which was tasked with focusing on our AI platform, Vertex AI, ahead of the launch of Gemini in 2023.

During their research, the CVR team discovered previously unknown vulnerabilities in Vertex AI and remediated them. You can read the CVR team's detailed research, including the architectural adjustments we made to structurally harden the platform, on the Bug Hunters blog.

“We detail our findings, how we found and fixed the issues internally, and how we reported our findings to similar cloud providers,” the CVR team said. Importantly, they didn’t limit their research to Vertex AI. “We continued this research on another large cloud provider, discovered similar vulnerabilities in their tuning architecture, and reported these vulnerabilities using their standard vulnerability disclosure process.”

At Google, we place great importance on how we deliver AI technology to our customers, and part of that process means security testing and researching our AI products. It's simply part of our culture. Fixing and mitigating vulnerabilities is crucially important for building trust in new technology, and so is disclosing and discussing those findings.

As an emerging technology, AI will face intense scrutiny from attackers and researchers. While the AI industry is learning together how best to analyze and secure AI, it's essential that we normalize the discussion of AI vulnerability research.

That's why it's vital for AI developers to normalize sharing AI security research now. Google Cloud intends to lead efforts to enhance security standards globally by promoting transparency, sharing insights, and driving open discussions about AI vulnerabilities, so we can collectively work towards a future where gen AI is secure by default.

Conversely, not sharing vulnerabilities once they’ve been remediated raises the risk that similar or identical vulnerabilities will continue to exist on other platforms. As an industry, we should be making it easier to find and fix vulnerabilities, not harder.

Reaching that future will require communication and collaboration. The Coalition for Secure AI and the open-source Secure AI Framework (SAIF) that it’s based on have important roles to play. By investing in and developing an AI security framework that stretches across the public and private sectors, we can make sure that developers safeguard the technology that supports AI advancements. This will help ensure that AI models are secure by default when they’re implemented.

We want to expand the strong security foundations that have been developed over the past two decades to protect AI systems, applications, and users. Similarly, we advocate for consistent control frameworks that can support AI risk mitigation and scale protections across platforms and tools. Doing so can help ensure that the best protections are available to all AI applications in a scalable and cost-efficient manner.

Stigmatizing the discovery of vulnerabilities will only help attackers. We hope that by encouraging vulnerability transparency and driving open discussions, we can empower developers and other cloud providers to follow suit and address security issues without fear of reprisal. It is this mentality that will ultimately help push the AI industry forward.

Let’s raise the bar of AI security industry-wide, as we collectively work towards a future where foundation models are secure by default.



MD FAKHRUL HOSSAIN CHOWDHURY

Marketing at Any textile company

3 weeks ago

Very helpful

Harish Prasad

IT Infrastructure Delivery Leader |Managed Services | Cloud Transformation, Migration, Operations |AIOps | Portfolio, Program, Project Management | Client Relationship Management | Account Management MBA, PMP, ITIL

3 weeks ago

I think transparency, sharing insights, and driving open discussions about AI vulnerabilities are important to avoid duplication of effort and to make products, services, and operations that use AI secure collectively. It can be part of a global, democratic framework for responsible AI.
