AI risk management with Microsoft tools

Now that everybody is talking about DeepSeek and its vague data protection terms, I thought it would be a good time to think about what you can do about those risks.

In general, there are three common types of AI risk: data leak, data oversharing and non-compliant usage. The Microsoft E5 toolset includes quite good options for reducing these risks.

Data Leak

Users may leak data to AI apps that use the sensitive data to train language models, after which that data may become visible to other users or to the service owner. This risk is real: Samsung employees, for example, famously demonstrated it by leaking confidential material to ChatGPT.

From the E5 toolset we can bring in AI Hub, which gives good visibility into AI tool usage across company-managed devices, and Endpoint DLP, which can prevent pasting sensitive data into, or uploading classified documents to, unapproved websites. There are others as well, but these two are the most effective ones to start using now.
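
To make the endpoint DLP idea concrete, here is a minimal conceptual sketch in Python of the kind of check such a policy enforces: classify outgoing content against sensitive information patterns and block the action when the destination is not approved. The patterns, domain list and function names are my own illustrative assumptions, not Microsoft's implementation or API.

```python
import re

# Illustrative patterns only; real Endpoint DLP uses Microsoft Purview's
# built-in sensitive information types, not hand-written regexes.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "finnish_personal_id": re.compile(r"\b\d{6}[-+A]\d{3}[0-9A-Z]\b"),
}

# Hypothetical allow-list of approved AI services.
APPROVED_DOMAINS = {"copilot.microsoft.com"}

def classify(text: str) -> list[str]:
    """Return the names of sensitive info types detected in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def allow_paste(text: str, destination_domain: str) -> bool:
    """Block pasting classified content to unapproved sites; allow otherwise."""
    findings = classify(text)
    if findings and destination_domain not in APPROVED_DOMAINS:
        print(f"Blocked paste to {destination_domain}: found {findings}")
        return False
    return True

# Example: pasting a card-like number into an unapproved chat service.
allow_paste("My card is 4111 1111 1111 1111", "chat.example-ai.com")  # blocked
```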

Once DLP is in place, Insider Risk Management can be used to elevate a user's risk score and apply stricter controls when needed.
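
The idea behind this risk-adaptive approach can be sketched in a few lines. The numeric thresholds and names below are hypothetical; the real risk levels are configured in the Purview portal, not in code.

```python
from enum import Enum

class DlpAction(Enum):
    AUDIT = "audit"  # log only
    WARN = "warn"    # block, but allow user override
    BLOCK = "block"  # block outright

# Hypothetical thresholds; Insider Risk Management exposes discrete
# risk levels rather than a raw score you would threshold yourself.
def action_for_risk(risk_score: int) -> DlpAction:
    """Map an insider-risk score (0-100) to a DLP enforcement mode."""
    if risk_score >= 80:
        return DlpAction.BLOCK
    if risk_score >= 40:
        return DlpAction.WARN
    return DlpAction.AUDIT

print(action_for_risk(25))  # AUDIT: low-risk users keep friction low
print(action_for_risk(85))  # BLOCK: high-risk users get hard blocks
```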

Data Oversharing

Through AI apps, users may access sensitive data that they are not supposed to have access to.

From the E5 toolset I would choose sensitivity labels with strict enough access control and automatic labeling of sensitive content. Data Lifecycle Management is also a powerful tool: deleting obsolete data greatly reduces data risks.
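
As a conceptual sketch of these two controls working together (hypothetical label names and retention period, not the Purview auto-labeling or lifecycle API): content matching sensitive patterns gets a stricter label, and content past its retention period is flagged for deletion.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Document:
    name: str
    text: str
    last_modified: datetime
    label: str = "General"

# Hypothetical label taxonomy; real labels are defined in Microsoft Purview.
def auto_label(doc: Document) -> None:
    """Assign a stricter label when sensitive content is detected."""
    if "salary" in doc.text.lower() or "personal id" in doc.text.lower():
        doc.label = "Confidential"  # the label would also carry encryption/ACLs

# Hypothetical 5-year retention; real policies live in Data Lifecycle Management.
RETENTION = timedelta(days=5 * 365)

def is_obsolete(doc: Document, now: datetime) -> bool:
    """Obsolete data should be deleted so it can't be overshared via AI apps."""
    return now - doc.last_modified > RETENTION

doc = Document("hr_notes.txt", "Salary review for 2018",
               datetime(2018, 3, 1, tzinfo=timezone.utc))
auto_label(doc)
print(doc.label, is_obsolete(doc, datetime.now(timezone.utc)))  # Confidential True
```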

I have written a couple of articles in this field before, so refer to those for more detailed information.

Non-compliant usage

Users may use AI apps to generate unethical or otherwise high-risk content. The same tools that address the other risks help here too, along with Communication Compliance, but training and the built-in protections of the AI tools themselves are more relevant.

Risks manageable by training only

For example, a lack of diversity in job application processing is not something technology can prevent; users must know that this kind of risk exists. If you, say, feed previously successful applications to an AI tool to screen new ones, the AI will favor applicants similar to those chosen before.
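
A toy example of the mechanism, with made-up data: if all past accepted applications share one attribute, any tool that ranks new candidates by similarity to them reproduces that skew, regardless of qualifications.

```python
# Toy illustration: past "good" applications all came from one university.
# Any tool that scores new applicants by similarity to them inherits the skew.
past_hires = [
    {"university": "A", "years_experience": 5},
    {"university": "A", "years_experience": 7},
    {"university": "A", "years_experience": 6},
]

candidates = [
    {"name": "Candidate 1", "university": "A", "years_experience": 4},
    {"name": "Candidate 2", "university": "B", "years_experience": 9},
]

def similarity(candidate: dict, hire: dict) -> float:
    """Naive similarity: the shared university dominates the score."""
    score = 1.0 if candidate["university"] == hire["university"] else 0.0
    score -= abs(candidate["years_experience"] - hire["years_experience"]) * 0.05
    return score

def avg_similarity(candidate: dict) -> float:
    return sum(similarity(candidate, h) for h in past_hires) / len(past_hires)

# The less experienced candidate wins purely on the shared attribute.
for c in sorted(candidates, key=avg_similarity, reverse=True):
    print(c["name"], round(avg_similarity(c), 2))
```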

We also cannot control the use of these tools from unmanaged devices, so training users is a mandatory part of risk prevention. We can prevent our IPR and other sensitive data from ending up on unmanaged devices, but that takes some time and does not remove data already copied, sent or transferred out of our managed containers.


