GCP Large Language Model Security

GCP Large Language Model Security

The shared responsibility model on Google Cloud Platform (GCP) is a framework that outlines the division of security tasks between #GCP and its customers. This model ensures that both parties are accountable for safeguarding their respective areas of responsibility, leading to a more robust and secure cloud environment.

In the use cases where organizations develop their own applications (as opposed to using AI applications developed by a third party,) the core responsibilities between Google and the customer include the following:

Google’s Responsibilities include:

  • Infrastructure Security: Google is responsible for the physical and logical security of the underlying Google Cloud infrastructure, including data centers, networks, and operating systems.
  • Platform Security: Google Cloud is built with security in mind, offering features like encryption, vulnerability management, and identity and access management (IAM) controls.
  • PlatformCompliance: Google adheres to rigorous compliance standards, such as FedRAMP, HIPAA, and PCI DSS, to ensure data privacy and security.

Customer Responsibilities include:

  • Identity and Access Management (IAM): Customers are responsible for implementing IAM best practices to control access to AI resources and data. This includes using least privilege, and service accounts, and regularly reviewing and updating IAM policies.
  • Network Security: Customers should configure network security controls to protect AI workloads, such as VPC segmentation, firewall rules, web application firewall protection, and API management.
  • Application Security: Securing the applications that interact with AI models is crucial. This involves input validation, secure coding practices, and regular security testing.
  • Data Security and Governance: Customers must implement robust data security measures to protect sensitive data used in AI training and operation. This includes encryption, data loss prevention (DLP), and proper data lifecycle management.
  • Logging and Monitoring: Continuous monitoring of AI workloads is essential for detecting and responding to security incidents. Customers should leverage Google Cloud logging, monitoring, and SIEM tools to gain visibility into their AI systems.
  • Incident Response: Having a well-defined incident response plan is crucial for minimizing the impact of security breaches. This plan should outline steps for identifying, containing, and remediating security incidents.

The Enterprise Foundations Blueprint is a well-designed foundation that enables consistent governance, security controls, scale, visibility, and access to shared services across all workloads in your Google Cloud environment

Model Security

While securing applications and infrastructure is a vital component of AI security, prioritizing the security of the AI models themselves must not be overlooked. AI models are the core decision-making engines of the system, making them prime targets for sophisticated attacks. While using pre-trained LLM(Foundation or open source) the customer is not training or refining the model hence model security is reduced to matching the model to the business problem to be solved and protecting the model from unauthorized access, modification, and disclosure.

Application Security

Protection In-line Prompt and Response

Since gen AI models take unstructured prompts from users and generate new, possibly unseen responses, you may also want to protect sensitive data in line. Many known prompt-injection attacks have been seen in the wild. The main goal of these attacks is to manipulate the model into sharing unintended information.

While there are multiple ways to protect against prompt injection, SDP(Sensitive Data Protection) can provide a data-centric security control on data going to and from gen AI foundation models by scanning the input prompt and generated response to ensure that sensitive elements are identified or removed.

Sensitive Data Protection

While many hackers may attempt prompt injection techniques in an attempt to exfiltrate sensitive data, others may want to manipulate your models to generate content that may be offensive, misleading, or dangerous. Content processed through the AI is assessed against a list of safety attributes, which include "harmful categories" and topics that can be considered sensitive. By default, these APIs block unsafe content based on a list of safety attributes and their configured blocking thresholds. You may choose to enforce a different threshold for each safety attribute, allowing you to take control over the type of content your AI application accepts or generates. You should also consider using text embeddings as the third layer of defense to protect against model manipulation and evasion. A text embedding is a vector representation of text, and they are used in many ways to and similar items. When you create text embeddings, you get vector representations of natural text as arrays of floating point numbers — all of your input text is assigned a numerical representation. By comparing the numerical distance between the vector representations of two pieces of text, an application can determine the similarity between the text or the objects represented by the text.

This becomes quite useful in the security context: although your team has taken great effort and care to consider the ways in which an attacker can create a prompt to manipulate your model, they can’t identify every possible attempt. Therefore, text embeddings can determine that a new prompt is similar to a known malicious prompt — and defenders can use this information to enforce your security guardrails.

Infrastructure Security

A fundamental pillar of AI security lies in safeguarding the underlying infrastructure upon which these systems function. Compromising a system’s infrastructure — the hardware, networks, and software it relies on — can expose sensitive AI models, training data, and the overall system to harmful attacks and manipulation. Therefore, the AI infrastructure should ensure that unauthorized users are unable to establish unauthorized access to the model, are prohibited from unauthorized appropriation of the model, and cannot insert unauthorized models or corrupt responses.

When you build an image with Cloud Build, the image’s build provenance is automatically recorded. Build provenance is a collection of verifiable data and includes details such as the digests of the built images, the input source locations, the build arguments, and the build duration. You can leverage this build provenance to confirm that build artifacts are being generated from trusted sources and builders and ensure that provenance metadata describing your build process is complete and authentic. You may also choose to encrypt the build-time persistent disk with a unique ephemeral Customer-Managed Encryption Key (CMEK) that is generated for each build. Once a build starts, the key is accessible only to the build processes requiring it for up to 24 hours. Then, the key is wiped from memory and destroyed. It is recommended that you enable the Container Analysis API before pushing any images to the Artifact Registry. The Container Analysis API initiates an automatic vulnerability scan when images are pushed to the Artifact Registry. The vulnerability information is continuously updated when new vulnerabilities are discovered and are available in the Security Command Center.

Serving Infrastructure

Google Cloud’s web application firewall service, Cloud Armor, provides WAF and anti-DDoS capabilities, protecting applications against layer 3, 4, and layer 7 attacks, the Open Web Application Security Project (OWASP) Top 10, and sophisticated application exploits.

A Google Cloud Application Load Balancer is the first entry point for traffic attempting to reach your application’s API. As the track reaches the External Application Load Balancer, Cloud Armor immediately assesses the track to detect and mitigate network attacks. This assessment is based on preconfigured and custom security policies that you can apply to allow, deny, rate-limit, or redirect requests before traffic reaches your API and backend services. If Cloud Armor determines that the traffic is legitimate, the user will be able to access the application. It is recommended that you contain the backend infrastructure within a VPC Service Controls Perimeter, to help you mitigate exfiltration risks by isolating multi-tenant services. The VPC Service Controls perimeter denies access to restricted Google Cloud services from trac that originates outside the perimeter, which includes the console, developer workstations, and the foundation pipeline used to deploy resources

Before the perimeter is created, you must design ingress and egress rules and exceptions to the perimeter that allow the access paths that you intend.

You should also:

  • use dry run mode to identify API access violations without interruption to the applications before enforcing the perimeter.
  • Design a process to consistently add new projects to the perimeter.
  • Design a process to design exceptions when developers have a new use case that is denied by your current perimeter configuration.

Data Security

Building an AI/ML system requires a large corpus of data to appropriately train models and oftentimes the data may be considered sensitive. Securing that data appropriately becomes of paramount importance, and we can protect against data leakage risks with encryption and anonymization techniques.

All data stored within Google Cloud is encrypted at rest using the same hardened key management systems that Google uses for our own encrypted data

Sensitive Data Protection

Sensitive Data Protection includes more than 150 built-in info types to help quickly identify sensitive data elements like names, personal identifiers, financial data, medical context, or demographic data.

Protection for Data Preparation

Customers frequently use their own data to create datasets to train custom AI models, such as when they deploy an AI model on prediction endpoints. In another common example, customers use their own data to fine-tune a LLM to enhance model outputs and better advance relevant business priorities. The tuning process described above uses customer-specific datasets and creates parameters that are then used at inference time; the parameters reside in front of the “frozen” foundation model, inside the user’s project. To ensure that these datasets do not include sensitive data, we recommend that your organization use the Sensitive Data Protection service to scan the data that was used to create the datasets. Similarly, this method can be used for Vertex AI Search to ensure uploaded data does not include sensitive information.

Logging, Detection, and Response

It’s important to note that Google Cloud does not log your end-user’s interactions with your AI applications. Logging these interactions must be done at the application layer. We recommend that you use the Cloud Logging client library to log the end-user’s prompts and the AI application’s responses.

Detection for Vertex AI

The security posture feature of the Security Command Center includes predefined postures that help you secure Vertex AI workloads. Such policies include, but are not limited to, restricting public IP addresses and disabling le downloads, root access, and the Vertex AI Workbench terminal. These postures include detective controls using Security Health Analytics that will, for example, identify when CMEK’s are disabled across Vertex AI models, datasets, endpoints, training pipelines, and data labeling and custom jobs.

Conclusion

GCP offers robust security measures for LLMs, including data encryption at rest and in transit, granular access controls, and regular vulnerability scanning. By implementing these safeguards, organizations can protect their sensitive data and ensure the integrity of their AI models while leveraging GCP’s scalable infrastructure for efficient LLM deployment.

Please feel free to reach out to me to share your experience managing LLM.-based deployment in enterprises.

#LLMsecurity hashtag#AIsecurity hashtag#enterpriseAI hashtag#datasafety hashtag#modelpoisoning hashtag#AIethics


要查看或添加评论,请登录

社区洞察

其他会员也浏览了