GenAI/AI: An Aggravating Factor to a Persistent Problem – Securing Sensitive Data

As a CISO, you’ve spent years waging war against an ever-growing list of challenges: over-provisioned access, building a security-conscious culture, navigating complex regulatory compliance, insider threats, data privacy and protection, and the inherent risks of third-party collaboration. Now, the rise of AI has added fuel to the fire, exposing cracks in your data security strategy while regulators raise the stakes with stricter compliance mandates. Amid this chaos, one truth remains constant: your organization’s sensitive data is both its greatest asset and its greatest liability. Protecting it isn’t just about preventing breaches; it’s about safeguarding your company’s reputation, avoiding regulatory penalties, ensuring operational resilience in an AI-driven world, and, of course, keeping the wolves (attorneys) away.

Recent incidents, including the Samsung data leak via ChatGPT and the Microsoft GitHub breach, along with reports that more than 11% of the data employees share with GenAI tools each day is sensitive, underscore the risks posed by AI when sensitive data is improperly managed. However, these examples also reveal an opportunity. By focusing on data-centric security and leveraging solutions like Seclore, organizations can not only mitigate AI risks but also strengthen their overall data security posture.

The top categories of confidential information being input into GenAI tools include internal business data (43%), source code (31%), and personally identifiable information (PII) (12%).
- Revealing the True GenAI Data Exposure Risk, June 2024

With 328.77 million terabytes of data created each day, the inherent risk of data leaks is greater than ever. The dynamic nature of the cybersecurity landscape demands constant vigilance and adaptability from organizations.

Let’s simplify things by breaking down each of these persistent problems, explaining how GenAI/AI aggravates it, and showing how Seclore uniquely solves it.

1. Enforcing Least Privilege Access

The principle of least privilege (PoLP) ensures that employees and systems have access only to the data they absolutely need. In practice, however, this principle is often neglected, creating excessive exposure and a larger attack surface for threat actors. We know empirically that most organizations are about as far from least privilege as possible. Just take a look at some of the stats from Microsoft’s own State of Cloud Permissions Risk report; extrapolating those numbers highlights the underlying problem. It may come as no surprise that, on average, companies have:

  • 40+ million unique permissions
  • 113K+ sensitive records shared publicly
  • 27K+ sharing links

Adding to the complexity, permissions are largely managed by the Lines of Business (LoB), not IT or security teams. This poses a significant challenge for information security teams. Consider solutions like Copilot, which can access all the sensitive data a user can access, often an excessive amount. Alarmingly, approximately 10% of a company’s M365 data is open to all employees, potentially exposing the organization to serious risks.

How AI Aggravates These Issues

AI tools like ChatGPT and Microsoft Copilot rely on broad data access to deliver comprehensive insights. Without strict controls, this can and will lead to overexposure of sensitive data, both internally and externally.
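To make the risk concrete, here is a minimal sketch, in Python, of the least-privilege idea applied to an AI assistant’s retrieval step. All names here (`Document`, `least_privilege_corpus`, the sample users) are hypothetical illustrations, not a real Copilot or Seclore API: the assistant only sees documents the requesting user is explicitly granted, rather than everything its service account can reach.

```python
# Hypothetical sketch: constrain what an AI assistant can retrieve to the
# documents the requesting user is explicitly granted. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    content: str
    allowed_users: set = field(default_factory=set)  # explicit grants only

def least_privilege_corpus(user: str, corpus: list) -> list:
    """Return only the documents this user is explicitly granted."""
    return [d for d in corpus if user in d.allowed_users]

corpus = [
    Document("fin-q3", "Q3 revenue forecast...", {"cfo", "analyst1"}),
    Document("hr-comp", "Compensation bands...", {"hr_lead"}),
    Document("wiki-onboard", "Laptop setup guide...", {"analyst1", "hr_lead", "cfo"}),
]

# An assistant acting for analyst1 never sees the HR document, even if the
# assistant's own service account could technically read it.
visible = least_privilege_corpus("analyst1", corpus)
print([d.doc_id for d in visible])  # ['fin-q3', 'wiki-onboard']
```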


Seclore’s Solution:

Seclore goes beyond role-based access control by implementing dynamic, granular policy enforcement at the file level. Even if tools like ChatGPT try to access data:

• Data Rights Travel with the File: Seclore’s persistent protection ensures that sensitive files remain encrypted and access-controlled, regardless of where they’re shared or who accesses them. Even if organizations like Samsung lock down the use of GenAI internally, they will continue to share IP (intellectual property) with third parties who may continue to use these technologies. Maintaining protection and control is now possible, even when sensitive data no longer sits within the four walls of the organization.

• Revocation on Demand: If an employee mistakenly shares a file or the sensitivity of that file changes, the organization can revoke access instantly, anywhere, anytime. Seclore ensures that PoLP is enforced at all times, preventing unauthorized AI access to sensitive data (see the sketch after this list).

• Evidence-Based Cybersecurity Policies through the Insights Dashboard: If information is power, insights are the unfair advantage organizations have been seeking when developing the policies they enforce within the security technologies they deploy (CASB, DSPM, DLP, etc.). Seclore provides unique insights into how your data and files are being used, helping to inform better policies with empirical evidence. Policy effectiveness is driven by how a control is implemented, not by a binary yes or no on whether it is implemented. Neither mathematical models nor hypothetical scenarios can easily capture how data is shared and handled in the real world. Seclore avoids this problem by providing a full ‘chain-of-custody’ so organizations know how their data is being handled.
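The sketch below is a toy model, not Seclore’s implementation, of the first two ideas above: if the ciphertext is what travels and the decryption key stays under the owner’s control, protection follows every copy of the file, and access can be revoked anywhere by withholding the key. The `ProtectionServer` class and its methods are invented for illustration; the example assumes the third-party `cryptography` package.

```python
# Toy model of "protection travels with the file" and revocation on demand.
# Not Seclore's implementation; names are hypothetical.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

class ProtectionServer:
    """Holds per-file keys; deleting a key revokes every copy of the file."""
    def __init__(self):
        self._keys = {}

    def protect(self, file_id: str, plaintext: bytes) -> bytes:
        key = Fernet.generate_key()
        self._keys[file_id] = key
        return Fernet(key).encrypt(plaintext)  # the ciphertext is what gets shared

    def open(self, file_id: str, ciphertext: bytes) -> bytes:
        key = self._keys.get(file_id)
        if key is None:
            raise PermissionError(f"access to {file_id} has been revoked")
        return Fernet(key).decrypt(ciphertext)

    def revoke(self, file_id: str) -> None:
        self._keys.pop(file_id, None)  # every copy, everywhere, goes dark

server = ProtectionServer()
blob = server.protect("design-spec", b"confidential design notes")
print(server.open("design-spec", blob))  # authorized read succeeds
server.revoke("design-spec")
try:
    server.open("design-spec", blob)
except PermissionError as e:
    print(e)  # access to design-spec has been revoked
```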


2. Third-Party Sensitive Data Handling

Third-party vendors and partners are critical to business operations, but they also represent significant risk. The rise of AI has added complexity as vendors increasingly integrate AI tools into their workflows, often without robust safeguards. Shadow IT gave us a preview of the risks that arise when technology outpaces governance, and the lesson is stark: we cannot afford to make the same mistake with artificial intelligence. Shadow AI could become the proverbial Trojan horse within our own organizations. The larger threat, though, may be the third parties we work with daily while trusting that NDAs are protecting our most sensitive digital assets.

There is a separate discussion to be had about the regulatory frameworks for generative AI, which are emerging and evolving quickly. This article avoids a comprehensive discussion of existing or proposed regulations and focuses instead on third-party relationships.

How AI Aggravates Third-Party Risk

AI can amplify the risks in third-party relationships, because you are only as secure as your weakest link.

• Case Study: Semiconductor Company Leaking Sensitive IP

A recent engagement came to Seclore after the company experienced a breach while submitting design elements (IP) to third-party fabricators for bids. After investigating the breach, I learned that more than 10% of all bidders had leveraged GenAI to respond to the RFP. This confidential data was fed into AI models and is now subject to review by AI trainers. Imagine: your organization’s IP could be used to train models and appear in other users’ outputs.

Seclore’s Solution:

Seclore ensures that data shared with third parties remains secure. Long gone are the days of assuming your third parties provide an “adequate” level of security; “adequate” can be interpreted in many ways. To decrease your organization’s chances of third-party data leaks, be clear about your expectations and manage them accordingly with Seclore. It’s the simple things. For example: (1) Does everyone in the third-party organization, or in your own, need to have access to your data? (2) Do they need to have access to your data forever? (3) Does the third party need to access your data from anywhere? Seclore can implement all of these controls with:

• Granular Usage Controls: Limit what third parties can do with your data, such as restricting editing, printing, or forwarding.

• Third-Party Monitoring: Continuously track how third-party vendors interact with your files. If a vendor uses an AI tool that might expose data, Seclore provides visibility into where and how your data is being accessed.

• Data Expiry: Set expiration dates for shared files to ensure sensitive data doesn’t linger in third-party systems indefinitely.
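As a thought experiment, the three questions above can be expressed as a simple file policy: who may access, until when, and what they may do. The sketch below is a hypothetical illustration in Python; the `SharingPolicy` class and its field names are invented and do not reflect Seclore’s actual policy schema (a location restriction for question 3 would extend the same pattern).

```python
# Hypothetical sketch of a third-party sharing policy: who, until when, what.
# Invented for illustration; not Seclore's policy schema.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class SharingPolicy:
    allowed_users: set                 # (1) not everyone, just these users
    expires_at: datetime               # (2) not forever
    allowed_actions: set = field(default_factory=lambda: {"view"})  # view-only default

    def check(self, user: str, action: str) -> bool:
        if datetime.now(timezone.utc) >= self.expires_at:
            return False               # the file has expired in the vendor's hands
        return user in self.allowed_users and action in self.allowed_actions

policy = SharingPolicy(
    allowed_users={"vendor_engineer_1"},
    expires_at=datetime.now(timezone.utc) + timedelta(days=30),
    allowed_actions={"view"},          # no edit, print, or forward
)

print(policy.check("vendor_engineer_1", "view"))   # True
print(policy.check("vendor_engineer_1", "print"))  # False: action not granted
print(policy.check("vendor_engineer_2", "view"))   # False: user not granted
```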

By deploying Seclore, organizations can securely collaborate with third parties while maintaining complete control over their data, even when AI tools are in play. The DOJ’s 2024 Evaluation of Corporate Compliance Programs (ECCP) places a strong emphasis on using data analytics and continuous monitoring to strengthen compliance programs, alongside its expectations for proactive risk management and data-driven compliance. And what we’ve seen over the past decade is a rising number of breaches originating with third parties who handle your most sensitive data.

3. Deploy Persistent Protection for Data

Sensitive data is no longer confined to organizational boundaries. It flows across devices, platforms, and ecosystems, creating numerous points of vulnerability. AI tools like Copilot exacerbate this by increasing the mobility, processing, and searchability of sensitive data.

How AI Aggravates This Issue

AI tool adoption is growing exponentially, as are the velocity, volume, and value of your most sensitive digital assets. IDC predicts that the global datasphere will reach 175 zettabytes by 2025, up from 64.2 zettabytes in 2020, with most of that data unstructured. A significant portion of this increase stems from AI-driven automation and decision-making processes.

• Example: Copilot-Generated Outputs

An employee uses Microsoft Copilot to draft a client presentation. Copilot accesses shared drives and CRM systems, pulling sensitive customer data into its drafts. This might be M&A information, financials, CAD files (IP), etc. Even if the output seems harmless, fragments of sensitive information may remain in temporary storage or external servers, exposing the organization to breaches.

Seclore’s Solution:

Seclore enables data-centric security by embedding protection directly into the data itself:

• End-to-End Encryption: Files are encrypted from creation to deletion, ensuring that sensitive information remains secure even if accessed by AI tools.

• Usage Tracking and Forensics: Seclore tracks every interaction with a file, providing detailed insights into who accessed it, where, and when, ensuring accountability for AI usage.

• AI Integration Safeguards: Seclore’s APIs enable organizations to integrate AI tools like Copilot securely, ensuring that any data accessed by AI remains protected and auditable.
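To illustrate what usage tracking and forensics buy you, here is a minimal, hypothetical sketch of a chain-of-custody log: every access attempt, allowed or denied, is appended as an entry that hashes the previous one, so tampering with history is detectable. The `AuditLog` class is invented for illustration and is not Seclore’s API.

```python
# Minimal sketch of a chain-of-custody audit log: each entry links to the
# hash of the previous one, making after-the-fact edits detectable.
# Invented for illustration; not Seclore's API.
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value for the hash chain

    def record(self, file_id: str, user: str, action: str, allowed: bool) -> None:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "file": file_id,
            "user": user,
            "action": action,
            "allowed": allowed,
            "prev": self._last_hash,  # links this entry to the chain so far
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(entry)

log = AuditLog()
log.record("client-deck", "copilot-service", "read", allowed=True)
log.record("client-deck", "unknown-ai-agent", "read", allowed=False)
for e in log.entries:
    print(e["ts"], e["user"], e["action"], "OK" if e["allowed"] else "DENIED")
```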


Seclore: Turning AI from a Risk into a Strategic Advantage

AI tools like ChatGPT and Microsoft Copilot are reshaping productivity, but they’re also stress-testing your data security framework. Incidents like the Samsung leak and Microsoft’s GitHub breach are cautionary tales, highlighting the need for robust, data-centric security.

With Seclore, you don’t just patch vulnerabilities—you address the root cause of data insecurity. By embedding protection into your data, enforcing least privilege access, and securing third-party collaborations, Seclore ensures your sensitive information remains safe, even in an AI-driven world.

GenAI/AI isn’t your enemy. With the right strategy and tools, it can be your competitive advantage. Seclore makes that vision a reality. It’s time to secure your data—and your future.


