Data Protection for ChatGPT, Generative AI, and Shadow IT
With the rise of hybrid work, data leakage has become a significant issue. Employees are now working from a variety of locations, including their homes, coffee shops, and even public libraries. This makes it more difficult to keep track of data moving between managed endpoints and your organization's SaaS applications or private apps.
Shadow IT, the use of unauthorized SaaS and cloud services by employees, has always been a challenge for IT departments. Left unchecked, shadow IT can pose a significant security risk, as it can expose your organization's data to unauthorized access.
Generative AI platforms like ChatGPT are a form of SaaS or cloud app, and they have emerged as a new frontier of shadow IT. These platforms let users generate text and images, troubleshoot software bugs, and create content that is often indistinguishable from human-created work. That usefulness makes them popular with employees, who often adopt them without IT approval and paste sensitive information into them without realizing the risk.
The problem is that many platforms use the data submitted by end users to train their models, meaning anything proprietary can become exposed once it has been ingested. Samsung, for example, had to ban ChatGPT after three separate instances of employees unintentionally sharing sensitive data, including confidential source code, with the generative AI platform.
The good news is that generative AI platforms are not all that different from other internet destinations you need to protect your data from. With the right tools, you can block access to these platforms outright or block specific user actions, such as uploading files or submitting sensitive data, and prevent data leakage.
To reduce the risk of data exfiltration and rein in shadow IT, you need a modern secure web gateway (SWG) solution with native data loss prevention (DLP) functionality. A SWG monitors all internet traffic and blocks access to unauthorized websites and applications. A modern SWG with DLP can also scan files and posts for sensitive data and prevent that data from being uploaded to, or downloaded from, public websites and unauthorized cloud apps.
In a hybrid work environment, you must take steps to protect your data while ensuring productivity continues. A SWG with DLP functionality can help you do both.
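To make this concrete, here is a minimal sketch of the kind of decision an SWG with inline DLP makes for each outbound request: check the destination against a list of unsanctioned apps, then inspect the request body for sensitive content. It is an illustration only, not Lookout's implementation; the host list, detector names, and patterns are placeholder assumptions.

```python
import re

# Hypothetical blocklist of unsanctioned generative AI destinations; a real SWG
# resolves destinations against a continuously updated catalog of cloud apps.
UNSANCTIONED_AI_HOSTS = {"chat.openai.com", "chatgpt.com", "gemini.google.com"}

# Illustrative DLP detectors; production engines use richer techniques such as
# exact data matching, document fingerprinting, and machine-learning classifiers.
DLP_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def evaluate_outbound_request(host: str, body: str) -> str:
    """Return 'block' or 'allow' for a single outbound web request."""
    if host in UNSANCTIONED_AI_HOSTS:
        return "block"  # the destination itself is unsanctioned shadow IT
    if any(pattern.search(body) for pattern in DLP_PATTERNS.values()):
        return "block"  # sensitive data found in the post or upload body
    return "allow"

if __name__ == "__main__":
    print(evaluate_outbound_request("chatgpt.com", "summarize this memo"))       # block
    print(evaluate_outbound_request("example.com", "customer SSN 123-45-6789"))  # block
    print(evaluate_outbound_request("example.com", "lunch menu for Friday"))     # allow
```

In a real deployment this decision runs inline on proxied traffic, so the block happens before the data ever reaches the destination.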
How data is leaked to ChatGPT and other generative AI
There are several ways that data can be leaked to ChatGPT and other generative AI platforms. Some of the most common scenarios include:
Pasting sensitive data into AI apps for formatting or grammar checks
This is a common practice, but it can be risky if the sensitive data is not properly anonymized.
Developers pasting source code into AI apps to improve performance and efficiency
Source code is proprietary to the enterprise and often contains sensitive information about the company's products or services.
Adding AI apps to sensitive company meetings to transcribe them
This can be a convenient way to create a transcript of a meeting, but it is also risky if the call covers sensitive information.
Accidental uploading of sensitive or regulated data
Individuals may unknowingly input confidential data into the chat interface under the false impression that it provides a secure means of communication. This information can include personally identifiable information (PII), software code, financial information, protected health information (PHI), or any other confidential or regulated data.
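The anonymization point from the first scenario can be made concrete with a small sketch that strips common identifiers before text leaves the user's hands. The patterns below are illustrative placeholders; a real DLP or anonymization engine uses far more robust detectors for PII, PHI, and secrets.

```python
import re

# Illustrative redaction rules; these placeholders are not a complete PII/PHI detector.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[REDACTED-EMAIL]"),
    (re.compile(r"\b(?:\+?1[ -]?)?\(?\d{3}\)?[ -]?\d{3}[ -]?\d{4}\b"), "[REDACTED-PHONE]"),
]

def redact(text: str) -> str:
    """Replace common PII patterns before the text is pasted into an external AI app."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    draft = "Contact Jane at jane.doe@example.com or 555-123-4567 about claim 123-45-6789."
    print(redact(draft))
    # Contact Jane at [REDACTED-EMAIL] or [REDACTED-PHONE] about claim [REDACTED-SSN].
```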
How Lookout prevents data leaks to ChatGPT and other generative AI
Lookout Secure Internet Access is a data-centric SWG built on the principles of zero trust that protects users, underlying networks, and corporate data from internet threats like malware, zero-day exploits, and browser-based attacks. With native DLP capabilities, Secure Internet Access inspects outbound traffic for sensitive data and applies data loss prevention policies to prevent data leakage to the public internet. This enables your organization to restrict generative AI platforms and other unauthorized websites and apps with granularity.
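To illustrate what that granularity can look like, the sketch below encodes a simplified policy decision: users can still browse generative AI sites, but file uploads to them are blocked, and text submissions are blocked when DLP inspection finds sensitive data. The category names and actions are hypothetical stand-ins, not Lookout's policy syntax.

```python
from dataclasses import dataclass

GENERATIVE_AI = "generative_ai"  # hypothetical app-category label

@dataclass
class Request:
    category: str                  # app category resolved by the gateway
    action: str                    # "browse", "post", or "upload"
    contains_sensitive_data: bool  # verdict from DLP inspection of the request body

def decide(request: Request) -> str:
    """Granular policy sketch: allow browsing generative AI sites, block file
    uploads to them, and block text submissions that trip DLP inspection."""
    if request.category == GENERATIVE_AI:
        if request.action == "upload":
            return "block"
        if request.action == "post" and request.contains_sensitive_data:
            return "block"
    return "allow"

if __name__ == "__main__":
    print(decide(Request(GENERATIVE_AI, "browse", False)))  # allow: reading is fine
    print(decide(Request(GENERATIVE_AI, "post", True)))     # block: sensitive submission
    print(decide(Request("news", "post", False)))           # allow: sanctioned category
```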
Different ways to protect your data
Lookout uses a variety of techniques to prevent data leaks, including:
Customize policies that best suit your needs
We ensure that you can quickly write policies that protect data while leaving room to customize and refine them for more specific circumstances. With Lookout Secure Internet Access, the same policies apply whether the data is structured, as in posts, or unstructured, as in file uploads.
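As a generic illustration of that idea, the sketch below applies one rule set to both channels: text pasted into a form or chat box (structured) and the contents of an uploaded file (unstructured). The rule names and patterns are placeholder assumptions, not Lookout policy definitions.

```python
import re
from pathlib import Path

# One illustrative rule set applied to both channels; placeholders only.
RULES = {
    "source_code": re.compile(r"(\bdef |\bclass |\bimport |#include|\bpublic static void\b)"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scan_text(text: str) -> list[str]:
    """Return the names of every rule the text violates."""
    return [name for name, pattern in RULES.items() if pattern.search(text)]

def scan_post(post_body: str) -> list[str]:
    """Structured data: text a user types or pastes into a form or chat box."""
    return scan_text(post_body)

def scan_upload(path: Path) -> list[str]:
    """Unstructured data: the contents of a file attached to an upload request."""
    return scan_text(path.read_text(errors="ignore"))

if __name__ == "__main__":
    print(scan_post("please fix this: def charge_card(ssn='123-45-6789'): ..."))
    # ['source_code', 'us_ssn'] -> block the submission and coach the user
```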
How Lookout applies data loss prevention (DLP) to all data
The Lookout Cloud Security Platform offers centralized DLP policy management and enforcement across every platform and app. Using advanced DLP, Lookout can identify, assess, and protect sensitive data in every format and app using exact data matching and fingerprinting. Once sensitive data has been identified, Lookout provides adaptive data protection policies that go beyond the basic allow-or-deny capabilities offered by other DLP solutions.
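The two detection techniques named above can be sketched in a few lines. The example below is a simplified illustration, not Lookout's engine: exact data matching indexes hashes of known sensitive values (so the plaintext never has to be stored alongside the policy), while fingerprinting hashes overlapping word shingles of a protected document and measures how much of it reappears in outbound text.

```python
import hashlib

def _hash(text: str) -> str:
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# --- Exact data matching (EDM): index hashes of known sensitive values ---
SENSITIVE_VALUES = {"ACCT-00112233", "ACCT-00998877"}   # placeholder records
EDM_INDEX = {_hash(value) for value in SENSITIVE_VALUES}

def edm_hit(outbound_text: str) -> bool:
    """True if any token in the outbound text exactly matches an indexed value."""
    return any(_hash(token) in EDM_INDEX for token in outbound_text.split())

# --- Document fingerprinting: hash overlapping word shingles of a protected file ---
def fingerprints(text: str, shingle_size: int = 5) -> set[str]:
    words = text.lower().split()
    return {_hash(" ".join(words[i:i + shingle_size]))
            for i in range(max(1, len(words) - shingle_size + 1))}

PROTECTED_DOC = "the merger agreement between acme corp and globex is confidential until q3"
PROTECTED_PRINTS = fingerprints(PROTECTED_DOC)

def fingerprint_overlap(outbound_text: str) -> float:
    """Fraction of the protected document's shingles found in the outbound text."""
    shared = fingerprints(outbound_text) & PROTECTED_PRINTS
    return len(shared) / len(PROTECTED_PRINTS)

if __name__ == "__main__":
    print(edm_hit("please review ACCT-00112233 before Friday"))        # True
    print(fingerprint_overlap("fyi: the merger agreement between acme corp "
                              "and globex is confidential until q3"))  # 1.0
```

In practice, both indexes are built ahead of time from the records and documents you designate as sensitive, so inspection at traffic time only needs fast hash lookups.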
Lookout’s DLP extends data classification and governance to any document in the cloud, integrating with Microsoft Azure Information Protection (AIP), Titus classifications, and cloud-native labels. It can also apply standard compliance policies that cover regulations like GDPR, SOX, PCI DSS, HIPAA, and many others.
With its comprehensive data protection features, you can trust Lookout to protect your sensitive corporate data, even in the face of new challenges like generative AI.