Managing Generative AI Security Risks

The rapid adoption of generative AI models after the launch of ChatGPT promises to radically change how enterprises do business and interact with customers and suppliers.

Generative AI can support a wide range of business needs, such as writing marketing content, improving customer service, generating source code for software applications, and producing business reporting. The numerous benefits of generative AI tools -- especially reduced costs and enhanced work speed and quality -- have encouraged businesses and individuals alike to test such tools' capabilities in their work.

However, as with any emerging technology, rapid implementation could be risky, opening the door to threat actors to exploit organizations' vulnerabilities. In today's complex IT threat landscape, using generative AI tools without careful consideration could result in catastrophic consequences for enterprises.

Security risks associated with using generative AI in enterprise environments

Understanding the potential risks of using generative AI in an enterprise context is crucial to benefit from this technology while maintaining regulatory compliance and avoiding security breaches. Keep the following risks in mind when planning a generative AI deployment.

1. Employees exposing sensitive work information

In enterprise environments, users should be cautious about any piece of data they share with others -- including ChatGPT and other AI-powered chatbots. A notable incident is the data leak caused by Samsung employees who shared sensitive data with ChatGPT. Engineers at Samsung uploaded confidential source code to the ChatGPT model, in addition to using the service to create meeting notes and summarize business reports containing sensitive work-related information. The Samsung case is just one highly publicized example of leaking sensitive information to AI-powered chatbots. Many other companies and employees using generative AI tools could make similar mistakes by revealing sensitive work information, such as internal code, copyrighted materials, trade secrets, personally identifiable information (PII), and confidential business information.

OpenAI's standard policy for ChatGPT is to retain users' conversations for 30 days to monitor for possible abuse, even if a user chooses to turn off chat history. For companies that integrate ChatGPT into their business processes, this means employees' ChatGPT accounts might contain sensitive information. A threat actor who successfully compromises an employee's ChatGPT account could therefore access any sensitive data included in that user's queries and the AI's responses.

2. Security vulnerabilities in AI tools

Like any other software, generative AI tools themselves can contain vulnerabilities that expose companies to cyber threats. In March 2023, for example, a ChatGPT bug made it possible to see the titles of other active users' conversations -- and, if both users were active around the same time, the first message of a newly created conversation -- in one's own chat history. The same bug also revealed payment-related information for 1.2% of ChatGPT Plus subscribers who were active during a specific period, including customers' first and last names, email addresses, and the last four digits of their credit card numbers.

3. Data poisoning and theft

Generative AI tools must be fed with massive amounts of data to work properly. This training data comes from various sources, many of which are publicly available on the internet -- and, in some cases, could include an enterprise's previous interactions with clients. In a data poisoning attack, threat actors could manipulate the pre-training phase of the AI model's development. By injecting malicious information into the training data set, adversaries could influence the model's prediction behavior down the line, leading to false or otherwise harmful responses.
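
To make the mechanics concrete, the following Python sketch shows the simplest form of the idea on a toy classifier: flipping the labels of a small fraction of training examples measurably degrades the resulting model. It assumes scikit-learn and NumPy are available and uses entirely synthetic data; real generative models are trained very differently, but the principle -- corrupt the training data, corrupt the model's behavior -- is the same.

```python
# Toy illustration of label-flipping data poisoning (synthetic data only).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification data set standing in for "training data".
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

def train_and_score(labels):
    """Train on the given training labels and score on the clean test set."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, labels)
    return model.score(X_test, y_test)

# Baseline model trained on clean labels.
print(f"clean accuracy:    {train_and_score(y_train):.3f}")

# Simulate an attacker who slips mislabeled records into 10% of the training set.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.10 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]
print(f"poisoned accuracy: {train_and_score(poisoned):.3f}")
```

The defensive takeaway is that provenance checks and anomaly detection on training data limit how much an adversary can inject unnoticed.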

Another data-related risk involves threat actors stealing the data set used to train a generative AI model. Without sufficient encryption and controls around data access, any sensitive information contained in a model's training data could become visible to attackers who obtain the data set.
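
As a minimal sketch of one such control, the example below encrypts a training-data file at rest using the Fernet recipe from the Python cryptography package. The file names are placeholders, and the key is generated inline purely for illustration; in practice, keys would live in a managed secrets store, separate from the data they protect.

```python
# Sketch: keeping a training-data file encrypted at rest.
# Requires the "cryptography" package; file names and key handling are
# illustrative placeholders, not a production key-management design.
from pathlib import Path
from cryptography.fernet import Fernet

DATA_FILE = Path("training_data.jsonl")      # hypothetical plaintext data set
ENCRYPTED_FILE = Path("training_data.enc")

# Create a tiny placeholder data set so the sketch runs end to end.
DATA_FILE.write_text('{"prompt": "example", "completion": "example"}\n')

# In practice the key comes from a secrets manager, never from code or a file
# stored next to the data it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the data set so a stolen copy is unreadable without the key ...
ENCRYPTED_FILE.write_bytes(fernet.encrypt(DATA_FILE.read_bytes()))
DATA_FILE.unlink()  # remove the plaintext copy

# ... and decrypt only inside the training pipeline, right before use.
plaintext = fernet.decrypt(ENCRYPTED_FILE.read_bytes())
print(plaintext.decode())
```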

4. Breaching compliance obligations

When using AI-powered chatbots in enterprise environments, IT leaders should evaluate the following risks related to violating relevant regulations:

- Incorrect responses. AI-powered tools sometimes give false or superficial answers. Exposing customers to misleading information could give rise to legal liability, in addition to negatively affecting the enterprise's reputation.

- Data leakage. Employees could share sensitive work information, including customers' PII or protected health information (PHI), during conversations with an AI chatbot. This, in turn, could violate regulatory standards such as GDPR, PCI DSS, and HIPAA, risking fines and legal action. (A minimal prompt-redaction sketch follows this list.)

- Bias. AI models' responses sometimes demonstrate bias based on race, gender, or other protected characteristics, which could violate anti-discrimination laws.

- Breaching intellectual property and copyright laws. AI-powered tools are trained on massive amounts of data and are typically unable to accurately cite the specific sources of their responses. Some of that training data might include copyrighted materials, such as books, magazines, and academic journals. Using AI output based on copyrighted works without attribution could expose enterprises to legal action.

- Laws concerning chatbot use. Many enterprises have begun integrating ChatGPT and other generative AI tools into their applications, with some using AI-powered chatbots to answer their customers' inquiries immediately. But doing so without informing customers in advance risks penalties under statutes such as California's bot disclosure law.

- Data privacy. Some enterprises might want to develop their own generative AI models, a process likely to involve collecting large amounts of training data. If threat actors successfully breach enterprise IT infrastructure and gain unauthorized access to that training data, the resulting exposure of sensitive information in the compromised data sets could violate data privacy laws.
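
One practical mitigation for the data leakage and data privacy risks above is to screen prompts for obvious PII before they ever reach an external chatbot. The Python sketch below uses a few simple regular expressions for email addresses, US-style Social Security numbers, and card-like digit sequences; the patterns and the redact_prompt helper are illustrative assumptions only, and a production system would more likely rely on a dedicated PII-detection or data loss prevention service.

```python
# Sketch: redacting obvious PII from a prompt before it is sent to an
# external AI chatbot. The patterns and the redact_prompt helper are
# illustrative assumptions, not an exhaustive PII detector.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace matches of the known PII patterns with placeholder tags."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Customer jane.doe@example.com (SSN 123-45-6789) reported a billing issue."
    print(redact_prompt(raw))
    # Customer [EMAIL REDACTED] (SSN [SSN REDACTED]) reported a billing issue.
```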

Edited by Nashya Haider (Ph.D.)
