Risks of Artificial Intelligence
I know that this is a long article, but I hope it is worth your time as I feel we need to bring some structure into the discussion…
Thank you Dominik Kessler and Swe Geng for your valuable input!
A lot is currently being written about AI, especially about ChatGPT, and the challenges that come with it. After launching our preview version of Bing, I saw coverage there as well – with all the side-effects. What we learned in customer discussions over the last few weeks is mainly that there is a lot of fear, uncertainty, and doubt when it comes to AI.
Let me try to structure the problem and rely on your feedback to understand how far these thoughts make sense and where my thinking falls short.
But first, let’s start at the beginning, as I see a lot of confusion because many terms get mixed up: ChatGPT, OpenAI, Azure OpenAI, Bing’s usage of OpenAI, Large Language Models (LLMs). I asked Bing to help me there:
Personally, I feel it is important to understand that Bing uses the models from OpenAI and ChatGPT with a completely different governance around them. While the ChatGPT version you can see on the net and Bing are based on the same technology, at the end of the day they are fundamentally different baselines.
Now, as we clarified a few terms, I would like to structure the security and privacy challenges we see:
My data is sent to Microsoft
Challenge
With all these AI-powered applications, like the text suggestions in Word and Outlook and lots more to come, there is a fear that this data lands “with Microsoft”. In other words, you write a sensitive mail and Outlook suggests how to complete the sentence. Customers are heavily worried that the sensitive content is uploaded to Microsoft in an uncontrolled way.
Approach
In the case of Microsoft, we made three commitments to our customers:
In this context it is absolutely crucial to think about two questions: Where is the model trained, and – more importantly – where does it run? Very often, the models which help you write code or e-mails etc. are trained by your AI provider. However, the application of the AI model to your specific problem then runs in your environment. This might be your cloud subscription or even locally.
I feel this is the key question to ask whenever this concern comes up.
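To make this split concrete, here is a toy sketch of the idea – all names, labels, and thresholds are hypothetical illustrations, not an actual Microsoft API: a completion request can be routed to a locally hosted model or to a cloud endpoint depending on a sensitivity label attached to the document. The model may well have been trained by the provider, but *applying* it to your data can stay inside your environment.

```python
from dataclasses import dataclass

# Hypothetical sensitivity labels, loosely modeled on a typical
# data-classification scheme (public < internal < confidential).
SENSITIVITY_RANK = {"public": 0, "internal": 1, "confidential": 2}

@dataclass
class Document:
    text: str
    sensitivity: str  # one of the labels above

def choose_inference_target(doc: Document, max_cloud_rank: int = 1) -> str:
    """Decide where the (already trained) model should run for this input.

    Training location and inference location are independent choices:
    only the inference location is decided here.
    """
    rank = SENSITIVITY_RANK[doc.sensitivity]
    # Confidential content stays in the customer-controlled environment.
    return "local-model" if rank > max_cloud_rank else "cloud-endpoint"

print(choose_inference_target(Document("quarterly plan", "confidential")))  # local-model
print(choose_inference_target(Document("press release", "public")))         # cloud-endpoint
```

The point of the sketch is simply that "where does it run" is a policy decision you control, separate from "where was it trained".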
I do not want my employees to use AI-empowered Bing
Challenge
A lot of customers are now afraid that their employees copy and paste sensitive documents to Bing to let it summarize the content – or that their developers paste code into Bing to look for performance improvements etc.
Approach
While I definitely understand the concerns, this is not new from my point of view. We have had Data Loss Prevention (DLP) challenges for a long time. Whether your employee translates texts or pastes code into Bing or Stack Overflow, it is the same problem to be solved. I would even argue that the risk of leakage via Bing is lower than via Stack Overflow, where the code is publicly accessible.
Think about your DLP policies – you might even want to rethink DLP along the lines of adaptive data governance, as we propose – but this problem is definitely not new.
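To make the DLP point concrete, here is a deliberately minimal sketch of the kind of check a DLP control performs before content leaves the environment. The patterns below are illustrative only – real DLP products use far richer classifiers, exact-data matching, and fingerprinting – and the destination (Bing, Stack Overflow, a translation site) does not matter to the check.

```python
import re

# Illustrative patterns only; production DLP goes far beyond regexes.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def dlp_findings(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def allow_paste(text: str) -> bool:
    """Block the paste to any external service on any finding."""
    return not dlp_findings(text)

print(allow_paste("how do I sort a list in python?"))   # True
print(allow_paste("card: 4111 1111 1111 1111"))         # False
```

The same policy applies whether the paste target is an AI chat or a public forum – which is exactly why this is an existing DLP problem rather than a new AI problem.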
How are AI models protected?
Challenge
I want to understand how my AI providers protect their AI systems, and I need to understand how I manage my risks in this context for first- and third-party models.
Approach
At Microsoft, a lot of work is currently going into this topic. The discussion happens at different levels, and different material is already available that you can leverage. We are working on extending this content:
I am afraid that AI drives too much automation in security
Challenge
I have already heard this notion: I do not want AI to automate too much in security – especially when it comes to reactive security.
Approach
Security response is all about speed these days. In order to be fast, you need a complete (I know…), integrated, and automated environment. AI will change the game! With incredible speed we are approaching the point where AI will really become a game changer here, supporting your security analysts in new ways: helping to reduce mean time to respond and scaling out human expertise by finding known patterns in massive datasets. Or it shines a light into human blind spots and finds unexpected clusters of similar activity.
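As a toy sketch of the “unexpected clusters of similar activity” idea – a deliberately simplified stand-in, not how any Microsoft product actually works: normalize events into coarse signatures, then surface signatures that suddenly occur far more often than the rest.

```python
from collections import Counter

def signature(event: dict) -> tuple:
    """Collapse an event to a coarse signature so similar activity groups together."""
    return (event["action"], event["result"], event["country"])

def unexpected_clusters(events: list[dict], factor: float = 3.0) -> list[tuple]:
    """Flag signatures whose count exceeds `factor` times the median count."""
    counts = Counter(signature(e) for e in events)
    ordered = sorted(counts.values())
    median = ordered[len(ordered) // 2]
    return [sig for sig, n in counts.items() if n > factor * median]

# A burst of failed logins from one country stands out against baseline noise.
events = (
    [{"action": "login", "result": "failure", "country": "XX"}] * 40
    + [{"action": "login", "result": "success", "country": "CH"}] * 5
    + [{"action": "read", "result": "success", "country": "CH"}] * 4
)
print(unexpected_clusters(events))  # [('login', 'failure', 'XX')]
```

A human analyst scanning raw logs would drown in the noise; even this trivial aggregation surfaces the burst instantly, which is the scaling argument in miniature.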
The same is true for defense. We need to say goodbye to things like manual role-based access control (RBAC) models. We have already announced the integration of our Insider Risk Management platform with data governance, and this is just the beginning.
To be completely clear and transparent: this is not new. AI is already embedded in your security products across the board – Azure Active Directory Conditional Access, Sentinel, Defender for Endpoint, Defender for Identity, just to name a few. Without AI, you would either be flooded with alerts or, on the other hand, unknown attacks would go unseen. A lot more will come very soon – stay tuned. Therefore, requirements like transparency (how did the AI come to a certain conclusion?) play a crucial role.
Conclusion
From my point of view, we need to apply these technologies now and learn. You might not want to apply AI in your most sensitive areas yet. But not allowing your employees to use AI is the wrong approach. Just to give you one figure: a third-party study showed that developers using GitHub Copilot are up to 55% more efficient – effectively like having 55% more developers all of a sudden. Is that something you really want to miss? Yes, there are risks, but I feel they are manageable.
I would love to get feedback. What am I missing? Where do I oversimplify? What additional questions do we need to address? Let’s continue this dialogue.