Risks of Artificial Intelligence

I know that this is a long article, but I hope it is worth your time as I feel we need to bring some structure into the discussion…

Thank you Dominik Kessler and Swe Geng for your valuable input!

A lot is currently being written about AI, especially about ChatGPT, and the challenges that come with it. After launching our preview version of Bing, I saw coverage there as well – with all the side effects. What we have learned in customer discussions over the last few weeks is mainly that there is a lot of fear, uncertainty, and doubt when it comes to AI.

Let me try to structure the problem and rely on your feedback to understand how far these thoughts make sense and where my thinking falls short.

But first, let’s start at the beginning, as I see a lot of confusion because many terms are mixed up: ChatGPT, OpenAI, Azure OpenAI, Bing’s usage of OpenAI, Large Language Models (LLMs). I asked Bing to help me there:

  • OpenAI is an American AI research laboratory that consists of a non-profit organization and a for-profit subsidiary. It was founded in 2015 by a group of entrepreneurs, investors, and researchers, including Elon Musk, Peter Thiel, Reid Hoffman, and Sam Altman. Its mission is to ensure that artificial general intelligence (AGI) benefits all of humanity and avoids existential risks.
  • Some of its projects include ChatGPT, a conversational AI system based on GPT-3; DALL·E, an image generation system; Whisper, a speech-to-text system; and Codex, a code generation system based on GPT-3.
  • A large language model (LLM) is a deep learning algorithm that can recognize, summarize, translate, predict and generate text and other content based on knowledge gained from massive datasets. It can perform a variety of natural language processing (NLP) tasks.
  • Bing is using a new AI-powered chat system that leverages ChatGPT and Bing Search with Prometheus. ChatGPT is a neural network model that can generate natural language responses based on user input. Bing Search with Prometheus is a technology that can synthesize information from multiple web sources to answer complex queries.
  • Azure OpenAI is a new product offering on Azure that gives customers access to advanced language AI models from OpenAI such as GPT-3, Codex and DALL·E. It is a managed service that exposes these models as REST APIs that developers can consume through simple API calls (see the sketch right after this list). Azure OpenAI also provides security and enterprise features such as private networking, regional availability and responsible AI content filtering.
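
To make those “simple API calls” tangible, here is a minimal sketch of calling an Azure OpenAI completions deployment over REST. The resource name, deployment name, and environment variable are placeholders of my own, and the API version shown was current at the time of writing – check the official documentation for yours.

```python
# Minimal sketch: calling an Azure OpenAI completions deployment via REST.
# RESOURCE, DEPLOYMENT and the environment variable are placeholders.
import os
import requests

RESOURCE = "my-openai-resource"      # hypothetical Azure OpenAI resource name
DEPLOYMENT = "my-gpt3-deployment"    # hypothetical model deployment name
API_VERSION = "2022-12-01"           # check the docs for the current version

url = (
    f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT}/completions?api-version={API_VERSION}"
)
headers = {
    "api-key": os.environ["AZURE_OPENAI_KEY"],  # key from the Azure portal
    "Content-Type": "application/json",
}
payload = {"prompt": "Summarize the risks of AI in one sentence.",
           "max_tokens": 100}

response = requests.post(url, headers=headers, json=payload, timeout=30)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```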

Personally, I feel it is important to note that Bing uses the models from OpenAI and ChatGPT with completely different governance around them. The ChatGPT version you can see on the net and Bing are based on the same technology, but at the end of the day, they are fundamentally different baselines.

Now that we have clarified a few terms, I would like to structure the security and privacy challenges we see:

My data is sent to Microsoft

Challenge

With all these AI-powered applications, like the text suggestions in Word and Outlook and many more to come, there is a fear that this data ends up “with Microsoft”. In other words: you write a sensitive mail, and Outlook suggests how to complete the sentence. Customers are seriously worried that this sensitive content is uploaded to Microsoft in an uncontrolled way.

Approach

In the case of Microsoft, we made three commitments to our customers:

  1. Your data is your data: We do not want your data; you have all the control over your data and what happens to it.
  2. Your data is not used to train the foundational AI models: We are not using customer data to train the underlying models. You may use your data to train your models but – coming back to the point above – your data is your data.
  3. Your data is protected by the most comprehensive enterprise compliance and security controls: This has not changed in the AI era. We remain committed to this because you have entrusted us with your data.

In this context it is absolutely crucial to think about two questions: Where is the model trained, and – more importantly – where does it run? Very often, the models that help you write code, e-mails, etc. are trained by your AI provider. However, the application of the AI model to your specific problem then runs in your environment – this might be your cloud subscription or even locally.
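
To make the “or even locally” case concrete, here is a minimal sketch assuming the Hugging Face transformers package: the provider trained the model, but inference runs entirely on your own machine, so no prompt data leaves it. The model name is just a small example, not a recommendation.

```python
# Minimal sketch of local inference: the provider trained the model,
# but applying it runs entirely in your environment. "gpt2" is just a
# small example model; nothing in the prompt leaves the machine.
from transformers import pipeline

# Downloads the pretrained weights once; afterwards this runs offline.
generator = pipeline("text-generation", model="gpt2")
print(generator("Dear customer,", max_new_tokens=20)[0]["generated_text"])
```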

I feel this distinction is the key point whenever the concern comes up.

I do not want my employees to use AI-empowered Bing

Challenge

A lot of customers are now afraid that their employees will copy and paste sensitive documents into Bing to have it summarize the content – or that their developers will paste code into Bing to look for performance improvements, etc.

Approach

While I definitely understand the concerns, this is not new from my point of view. We have had Data Loss Prevention (DLP) challenges for a long time. Whether your employee translates texts or pastes code into Bing or into Stack Overflow, it is the same problem to be solved. I would even argue that the risk of leakage is lower in Bing than on Stack Overflow, where the code is publicly accessible.

Think about your DLP policies – you might even want to re-think DLP in the direction of adaptive data governance, as we propose – but this problem is definitely not new.
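
To show how simple the core idea of such a policy check is, here is a deliberately naive sketch. Real DLP products (such as Microsoft Purview DLP) use far richer classifiers and policy engines; the patterns below are illustrative only.

```python
# Deliberately simplified illustration of the kind of pattern check a
# DLP policy performs before content leaves the organization. Real
# products use trained classifiers, not two regexes.
import re

PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def flags(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in the text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

if __name__ == "__main__":
    sample = "Please wire to IBAN CH9300762011623852957."
    print(flags(sample))  # -> ['iban']
```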

How are AI models protected?

Challenge

I want to understand how my AI providers protect their AI systems, and I need to understand how I manage my risks in this context for first- and third-party models.

Approach

At Microsoft, a lot of work is currently going into this topic. The discussion takes place on different levels, and material you can leverage is already available. We are working on extending this content:

  • The core of everything we do circles around Responsible AI. This is the framework we use, the one we build our internal policies around, and there is a lot of material there for you to leverage as well.
  • If you look at the AI Risk Assessment I published earlier, you will realize that a lot of the risks actually remain the same – and some are new.
  • Counterfit is an interesting tool to help you understand and protect your own models (a toy illustration of the kind of probing such tools automate follows after this list).
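
This is not Counterfit itself – just a minimal, self-invented sketch of the evasion-style probing such tools automate: searching for a small input perturbation that flips a model’s decision. The “model” here is a made-up linear scorer.

```python
# Toy illustration of evasion probing: nudge an input until a simple
# model flips its decision. Weights and inputs are invented.
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.8, -0.5, 0.3])        # toy model weights

def model(x):                          # toy binary classifier
    return int(x @ w > 0)

x = np.array([1.0, 2.0, 0.5])          # benign input, classified as 0
assert model(x) == 0

# Random-search "attack": find a small perturbation that flips the label
for _ in range(1000):
    delta = rng.normal(scale=0.3, size=3)
    if model(x + delta) == 1:
        print("Flipped with perturbation:", np.round(delta, 2))
        break
```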

I am afraid that AI drives too much automation in security

Challenge

I have already heard this notion: I do not want AI to drive too much automation in security – especially when it comes to reactive security.

Approach

Security response is all about speed these days. In order to be fast, you need a complete (I know…), integrated, and automated environment. With incredible speed we are approaching the point where AI really becomes a game changer here, supporting your security analysts in new ways: helping to reduce mean time to respond, scaling out human expertise by finding known patterns in massive datasets, and shining a light into human blind spots by finding unexpected clusters of similar activity. The sketch below shows that last idea in toy form.
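
A toy sketch of surfacing “unexpected clusters of similar activity”, assuming scikit-learn: events that fall outside every dense cluster are flagged for analyst review. The features and thresholds are invented purely for illustration and say nothing about how any Microsoft product actually works.

```python
# Toy sketch: cluster sign-in events by simple numeric features and
# surface the points that belong to no cluster (label -1) for review.
import numpy as np
from sklearn.cluster import DBSCAN

# Each row: [hour_of_day, failed_logins, distinct_ips] for one account
events = np.array([
    [9, 0, 1], [10, 1, 1], [9, 0, 1], [11, 0, 2],  # normal work pattern
    [3, 25, 7],                                    # odd hour, many failures
])

labels = DBSCAN(eps=3.0, min_samples=2).fit_predict(events)
for row, label in zip(events, labels):
    if label == -1:  # noise point = fits no known pattern
        print("Review:", row)
```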

The same is true for defense. We need to say goodbye to things like manual RBAC models. We have already announced the integration of our Insider Risk Management platform with data governance, and this is just the beginning.

To be completely clear and transparent: this is not new. AI is already embedded in your security products across the board – Azure Active Directory Conditional Access, Sentinel, Defender for Endpoint, Defender for Identity, just to name a few. Without AI, you would either be flooded with alerts or, on the other hand, unknown attacks would go unseen. A lot more will come very soon – stay tuned. Therefore, requirements like transparency (how did the AI come to a certain conclusion?) play a crucial role.

Conclusion

From my point of view, we need to apply these technologies now and learn. You might not want to apply AI in your most sensitive areas yet, but not allowing your employees to use AI at all is the wrong approach. Just to give you one figure: a third-party study showed that developers using GitHub Copilot are up to 55% more efficient – as if you suddenly had 55% more developers. Is that something you really want to miss? Yes, there are risks, but I feel they are manageable.

I would love to get feedback. What am I missing? Where do I oversimplify? Which additional questions do we need to address? Let’s continue this dialogue.

Reto Zeidler

Information Security Executive & Lecturer | Executive MBA | CISSP | GSLC

1 yr

Thank you Roger Halbheer for this brilliant article. I agree with your conclusions. Bottom line: we (society) gain new benefits at the cost of risks. Does AI raise new ethical or regulatory questions? Sure. Will AI be abused for the bad? For certain. Does AI change any fundamental principle of security, privacy or safety? Probably not.

Excellent article, Roger Halbheer. Thank you for sharing! If anything, I may suggest making the title a bit more specific. If one considers the case where AI-enabled search engines "decide" that this is the seminal article on the topic of AI risk, there may be a question of comprehensiveness. It clearly illustrates the challenges customers are expressing in specific areas of AI application, but there may be many more issues or areas to cover. Changing the title slightly might make it even more impactful. Looking at it differently, this may actually be the start of a series on "customer challenges regarding risks of AI" that highlights a range of topics, such as the "impact of non-verified information and search psychology" or "dealing with intellectual property and ease in LLM-based applications". I think this is what you aimed for by asking for a dialogue?

Juan Avellan V.

General Manager ELCA Security Services | Organizational Resilience | Cyber Security | GRC Management | Data Governance | Cyber Law

1 yr

Hi Roger, it did indeed clarify a few important points for me. The key question for organizations wanting to use their data systematically, going beyond the fast-emerging AI services, is the one you raise: "Where is the model trained and – more importantly – where does it run?" I think this point merits a bit more fleshing out, as it is, I believe, what will be the true differentiator of organizational competitiveness: those who do this well versus those who don't, or not at all. BTW, thanks again for your previous piece. The additional links in your article were very informative and useful.

Martin Slijkhuis

Industry Advisor | Defense and Intelligence | Microsoft

1 yr

Great analysis. I would be interested to see your perspective on next-generation verifiable-claims-based identity services.

Daniel Kubin

Cybersecurity & Modern Work Expert

1 yr

I very much like your summary. What can definitely be added to your last challenge – customers not wanting too much AI/automation in defense – is that attackers will use automation too, and attacks will become even more sophisticated for companies of all sizes. This will make manual defense even more complex.
