Data Security in the Age of Generative AI

Generative AI models are powerful tools that can create realistic text, images, sounds, and more from a given input. They can be used for many purposes, such as content creation, enhancing creativity, and research. We have seen a huge increase in the use of generative AI models such as ChatGPT, Copilot, and Bard in our organizations, which shows that employees find these models genuinely helpful in their day-to-day activities.

With this increased use comes security and compliance risk that organizations should be aware of and actively mitigate. These generative AI models are here to stay, so organizations need to adopt a security strategy that ensures their safe use at work while still empowering employees to unleash their creativity and improve their productivity.

One of the main compliance risks is that some generative AI models may store the input our employees provide and use it to train their models; these inputs might include sensitive company information such as business plans, financial plans, or even customer PII. Another compliance risk is that some generative AI models may not have adequate measures in place to protect the input they receive from unauthorized access or disclosure, which could expose the data to hackers and other malicious actors.

With this in mind, we should take a defense-in-depth approach to governing the use of generative AI models in the organization. This strategy layers multiple measures: monitoring for generative AI use, preventing the sharing of sensitive company information with risky generative AI models, and restricting the use of non-compliant ones. Some of these measures include:

  • Data Loss Prevention policies

Tools like Microsoft Purview DLP allow you to create DLP policies that prevent employees from copying and pasting sensitive company information into generative AI models.
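
Purview DLP policies themselves are configured in the Microsoft Purview compliance portal, but it helps to see what a policy rule actually checks. The Python sketch below is purely illustrative (it is not the Purview API): it shows the kind of sensitive-information-type matching a DLP rule performs, using credit card numbers validated with a Luhn checksum as the example.

```python
import re

# Illustrative sketch only -- not the Purview API. Shows the kind of
# pattern matching behind a DLP "sensitive information type": find
# candidate card numbers, then confirm with the Luhn checksum.

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def contains_sensitive_data(text: str) -> bool:
    """Flag text containing at least one Luhn-valid card number."""
    return any(luhn_valid(m.group()) for m in CARD_PATTERN.finditer(text))

if __name__ == "__main__":
    prompt = "Summarize Q3 results. Corp card: 4111 1111 1111 1111"
    print(contains_sensitive_data(prompt))  # True -> block the paste
```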

  • Restricting the use of risky generative AI models

There are hundreds of generative AI models in use today. Some of them have strict compliance measures in place, meaning they respect the privacy of the information they receive from our employees. To understand how compliant these models are, we can use tools like Microsoft Defender for Cloud Apps, which can discover the different AI models being used by our employees. As part of this discovery, we also get information about the compliance state of each generative AI model in use. With this information, we can tag an application as Unsanctioned directly within Defender for Cloud Apps, which automatically blocks employees from accessing the risky application.
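
As a rough illustration of automating this discovery step, the Python sketch below pulls discovered apps over the Defender for Cloud Apps REST API, which authenticates with an API token. Note that the endpoint path, the "Generative AI" category value, and the response field names here are assumptions for illustration, not a documented contract; check the current API reference for your tenant before relying on them.

```python
import requests

# Illustrative sketch only: Defender for Cloud Apps exposes a REST API
# authenticated with an API token, but the discovery endpoint path and
# the response field names below are ASSUMPTIONS for illustration --
# verify them against the current API reference for your tenant.

TENANT_URL = "https://mytenant.us3.portal.cloudappsecurity.com"  # placeholder
API_TOKEN = "<api-token>"  # created under Settings > Security extensions

HEADERS = {"Authorization": f"Token {API_TOKEN}"}

def list_discovered_ai_apps():
    """Fetch discovered apps and keep those in the Generative AI category."""
    resp = requests.get(
        f"{TENANT_URL}/api/v1/discovery/discovered_apps/",  # assumed path
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    apps = resp.json().get("data", [])
    return [a for a in apps if a.get("category") == "Generative AI"]

for app in list_discovered_ai_apps():
    print(app.get("name"), app.get("riskScore"))
```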

The screenshot below from Microsoft Defender for Cloud Apps shows the compliance state for one of the generative AI models being used by employees (Microsoft Copilot).

Defender for Cloud Apps Compliant Application

The screenshot below shows the compliance state of a different generative AI model.

Defender for Cloud Apps Risky Application

With this information, you can choose to restrict access manually or create an automation policy in Defender for Cloud Apps that automatically restricts employee access to a generative AI model based on its compliance, legal, or security risk factors. For example, the policy can automatically restrict access if the generative AI model does not comply with GDPR, ISO 27018, or PCI DSS.
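
The decision logic behind such a policy can be sketched in a few lines. The Python below is a conceptual illustration only; the field names are hypothetical, but the idea matches the policy described above: any app that fails a required compliance framework gets tagged as Unsanctioned.

```python
# Conceptual sketch of the policy logic described above. Field names
# here are hypothetical; Defender for Cloud Apps evaluates equivalent
# attributes from its cloud app catalog.

REQUIRED_FRAMEWORKS = ("GDPR", "ISO 27018", "PCI DSS")

def evaluate_app(app: dict) -> str:
    """Return 'Unsanctioned' if any required framework is not met."""
    supported = set(app.get("complianceFrameworks", []))
    missing = [f for f in REQUIRED_FRAMEWORKS if f not in supported]
    return "Unsanctioned" if missing else "Sanctioned"

apps = [
    {"name": "Microsoft Copilot",
     "complianceFrameworks": ["GDPR", "ISO 27018", "PCI DSS"]},
    {"name": "UnknownChatBot",
     "complianceFrameworks": ["GDPR"]},
]
for app in apps:
    print(app["name"], "->", evaluate_app(app))
```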

  • Managing consent for applications

Some generative AI applications require user or admin consent before they can be used in the organization. You have probably seen this when you join a meeting and notice an AI model taking notes. As mentioned earlier, we should be concerned about the compliance state of these models, since we are granting them access to sensitive company information. These models are granted permissions by a user or an administrator, as shown below.

Application consent request
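
Before tightening consent settings, it is useful to audit which applications already hold delegated permissions in the tenant. The Microsoft Graph oauth2PermissionGrants endpoint used in the sketch below is a documented endpoint; the tenant ID, client ID, and secret are placeholders, and the calling app registration needs a permission such as Directory.Read.All.

```python
import msal
import requests

# Audit which applications have already been granted delegated
# permissions in the tenant via Microsoft Graph. Credentials below
# are placeholders for an app registration with Directory.Read.All.

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

resp = requests.get(
    "https://graph.microsoft.com/v1.0/oauth2PermissionGrants",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    timeout=30,
)
resp.raise_for_status()

# Each grant records the client app (clientId = service principal
# object ID) and the delegated scopes it holds, e.g. "User.Read".
for grant in resp.json().get("value", []):
    print(grant["clientId"], grant["consentType"], grant["scope"])
```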

To limit which models are granted access to organization data, it is important to enforce admin restrictions in Microsoft Entra ID so that, before a user can grant consent to an application, they must request approval from an admin. The admin can then investigate the compliance state of the application before granting or denying access, as shown below.

Application consent request with admin approval
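
The admin consent workflow shown above can also be enabled programmatically through the Microsoft Graph adminConsentRequestPolicy resource. The sketch below follows the documented endpoint and payload shape; the credentials and reviewer object ID are placeholders, and the calling app needs the Policy.ReadWrite.ConsentRequest application permission. For the workflow to gate every app, user consent should also be restricted under Enterprise applications > Consent and permissions.

```python
import msal
import requests

# Enable the Entra ID admin consent workflow so users request approval
# instead of consenting directly. Endpoint and payload follow the Graph
# adminConsentRequestPolicy resource; credentials are placeholders.

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<client-id>"
CLIENT_SECRET = "<client-secret>"
REVIEWER_OBJECT_ID = "<admin-user-object-id>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

policy = {
    "isEnabled": True,              # users must request admin approval
    "notifyReviewers": True,        # email the reviewers on new requests
    "remindersEnabled": True,
    "requestDurationInDays": 30,    # requests expire after 30 days
    "reviewers": [
        {"query": f"/users/{REVIEWER_OBJECT_ID}", "queryType": "MicrosoftGraph"}
    ],
}

resp = requests.put(
    "https://graph.microsoft.com/v1.0/policies/adminConsentRequestPolicy",
    headers={"Authorization": f"Bearer {token['access_token']}"},
    json=policy,
    timeout=30,
)
resp.raise_for_status()
print("Admin consent workflow enabled")
```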

In conclusion, generative AI models are here to stay. We should therefore adopt a defense-in-depth approach and put multiple measures in place to ensure our employees only use generative AI models that have been verified to have proper privacy and data security measures. In the next part of this blog series, I will discuss how to actively hunt for non-compliant generative AI models in the organization and how to enforce the use of compliant alternatives such as Microsoft Copilot.
