Enterprise Security Policies for ChatGPT
The hype will pass, but the threat remains. There is unquestionably a hype surrounding Artificial Intelligence these days. Businesses are rapidly adopting AI models like ChatGPT to tap into their transformative, competitive power. Yet cutting-edge tools like ChatGPT, MidJourney, or Perplexity bring challenges of security, privacy, and, more importantly, misuse that demand attention and action.
As a security researcher, manager, and mentor with a background in Information Retrieval (IR) technologies and search engines, I have had the fortunate opportunity to witness the evolution of these technologies first-hand. Now, as the Group Cybersecurity Lead at a premium online entertainment enterprise that comprises a diverse group of companies with varying technological intensity, I balance different needs and risk appetites. My mission is to ensure that we end up with something that makes us more productive and cuts costs without blocking progress, while ensuring that no sensitive information is compromised.
My team's journey with language models in IT Security began with GPT-2, and even earlier with NLP and sentiment-detection technology, when we explored potential applications in RegTech (Regulatory Technology) and EdTech (Education Technology), aiming to transform security awareness methodologies and make security compliance a streamlined process. Today, more and more use cases are being adopted as people discover the technology's potential, whether the organization recognizes it or not (since most people may also be using it privately). I regularly catch colleagues with ChatGPT screens open, and I am increasingly faced with letters that I can tell were written by ChatGPT. I don't like them in their vanilla style, but that is my personal opinion.
This is something you cannot stop, as I argued when I presented this case at the ICT Spring conference in Luxembourg a year ago. To better understand the potential risks and apply appropriate mitigation strategies, we leveraged our collective intelligence, our security team, early on. Since we run a high-tech security team, it was no surprise that we were already playing with new technologies and had prior hands-on experience. Of course, we also saw unexpected interest from colleagues outside the cybersecurity domain: more than a year ago, in February, people crowded at the door of our meeting room to join the discussion, before the topic really went mainstream.
Here, I would like to share some insights and practical steps for creating an effective security policy for AI use, learned from my own experience. Even though it fails miserably at many reasoning tasks and is nowhere near what we would call Artificial General Intelligence, the hype gives a great look into what people would like to use it for, even if all it does is give good general answers based on the vast amount of text it was trained on. From the cybersecurity perspective, I would not say that we are looking ahead to a great future in terms of AI risks, but there are some crucial points that everyone should adopt, and that is why I am writing this article.
Outputs and Assessing Use-Cases
My first and primary concern lies in the use of AI outputs without validation, because these outputs will later influence people's perceptions and decisions. The most important consideration is that ChatGPT is merely a language model trained on the vast amount of text data available on the Internet. It does not think and reason like us, and that is important to understand even if you have no intention of becoming an AI expert. It is trained on the product of the whole of human culture, but it is not an anthropomorphic intelligence. Its responses, while appearing somewhat generally intelligent, should be treated like an unfalsified scientific hypothesis: plausible, but not yet shown to be true. The model has, in some sense, begun to generalize over the processes that created those texts, but it is still not a human intelligence. Relying solely on generated linguistic constructs, using language models for decision-making and planning, is like using a GPS without cross-checking: it can easily misguide you. And it often does.
While many have voiced concerns about nuances like copyright, legal, and privacy issues, I must emphasize the importance of evaluating whether the AI-generated text truly and completely represents your intended message, and whether it is actually reasonable. Without proper prompting techniques, rules, and instructions, and without appropriate time spent validating the responses, the result can come across as cringeworthy in style and fail to achieve the desired effect.
Understanding how AI technologies like this are being used within your organization is the seed of an effective policy. Be open-minded and discover how it is being used! Open discussions with your colleagues can reveal many unconventional uses that you might never have thought of but will then need to address.
In my case, in January 2023, we found that 5-10% of our workforce was already accessing chat.openai.com, using AI for a variety of tasks ranging from programming and marketing to recruitment. That rate is much, much higher as of today. Yet ChatGPT's capabilities have limitations, with noticeable inconsistencies in tasks like business planning, legal or financial work, and anything number-related, not to mention models from other service providers, or ones run locally, which pose other, unknown risks.
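If you want a rough estimate of your own numbers, web-proxy or DNS logs are usually enough. Below is a minimal sketch of the idea in Python; the log file name, column names, host list, and headcount are hypothetical placeholders, not a description of any particular proxy product.

```python
# Minimal sketch: estimate what share of the workforce reaches known AI services,
# assuming a CSV export of web-proxy logs with "user" and "host" columns.
import csv

AI_HOSTS = {"chat.openai.com", "chatgpt.com"}  # extend with the services you track
TOTAL_HEADCOUNT = 1200  # hypothetical; replace with your actual workforce size

def ai_usage_ratio(log_path: str) -> float:
    """Return the fraction of distinct users who contacted a known AI host."""
    ai_users = set()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["host"].strip().lower() in AI_HOSTS:
                ai_users.add(row["user"])
    return len(ai_users) / TOTAL_HEADCOUNT

if __name__ == "__main__":
    ratio = ai_usage_ratio("proxy_log.csv")
    print(f"{ratio:.1%} of staff accessed an AI service in this log window")
```

Even a coarse measurement like this turns the policy discussion from speculation into facts about your own organization.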
Education is Key
My third key takeaway is the importance of educating your colleagues about the responsible use of generative AI technologies. This is not just a technological problem but also a communication and coordination challenge. As policies become more complex, some people resort to summarizing them with ChatGPT, which poses potential comprehension problems (!), including, but not limited to, your own acceptable use policy for ChatGPT. This highlights the need to employ modern educational techniques like micro-learning to relay messages effectively. Security and privacy are important, but harnessing them for the progression of the business is even more important; so to say, security is here to serve the business, not the other way around.
Input and Security Controls
The inputs your staff feed into ChatGPT deserve attention. Arguably this should be my primary concern, though it is not the first on my list. It is essential to establish guidelines discouraging the upload of sensitive data such as draft contracts, proprietary code, and personally identifiable information (PII). Information you would not otherwise communicate through a third-party service does not belong in a chatbot prompt. Period. It also needs to be highlighted that, in numerous cases, shared accounts may leak information in unintended ways, so ensuring traceability of prompts and histories is an important aspect of auditing the use of such services. Simply copy-pasting entire contracts into ChatGPT is a big no-no!
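To make the "no sensitive inputs" rule more than a slogan, even a crude pre-submission screen helps. Here is a minimal Python sketch, assuming prompts pass through a gateway you control; the regular expressions are illustrative toy patterns, nowhere near a production DLP rule set.

```python
# Minimal sketch of a pre-submission screen that flags obviously sensitive
# content before a prompt leaves the organization. Patterns are toy examples.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
    "IBAN-like string": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of sensitive patterns found in the prompt."""
    return [name for name, rx in SENSITIVE_PATTERNS.items() if rx.search(prompt)]

findings = screen_prompt("Contact j.doe@example.com, card 4111 1111 1111 1111")
if findings:
    print("Blocked: prompt appears to contain", ", ".join(findings))
```

A real deployment would add named-entity detection and document fingerprinting, but even this level catches the grossest copy-paste accidents.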
I assume that you already have an effective SaaS or third-party management process in place. This allows better policing of AI usage and provides the means to address issues when things go wrong. Enforcement may involve technical measures such as DLP (Data Leak Prevention) software or further AI guardrailing. It is a somewhat ironic turn of affairs when an AI-enforced policy controls AI use, but this is where we are heading. With the prospect of private deployments and other large language models, it may become an unavoidable reality.
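As a simple illustration of what such guardrailing can look like, here is a Python sketch of an audit-logging wrapper placed in front of the model call; `forward_to_model` is a hypothetical stand-in for whatever API or private deployment your organization actually uses.

```python
# Minimal sketch of an audit-logging guardrail in front of an LLM service.
# forward_to_model() is a hypothetical stub, not a real vendor API.
import json
import logging
import time
import uuid

logging.basicConfig(filename="ai_audit.log", level=logging.INFO, format="%(message)s")

def forward_to_model(prompt: str) -> str:
    return "stubbed model response"  # replace with your real model call

def guarded_completion(user: str, prompt: str) -> str:
    """Record who asked what, and what came back, before returning the answer."""
    request_id = str(uuid.uuid4())
    logging.info(json.dumps({"id": request_id, "ts": time.time(),
                             "user": user, "prompt": prompt}))
    response = forward_to_model(prompt)
    logging.info(json.dumps({"id": request_id, "response": response}))
    return response

print(guarded_completion("alice", "Summarize our acceptable use policy"))
```

Combined with the input screen above, this gives you both prevention and the prompt-level traceability needed for audits.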
Non-Human-Like Errors
The reliability of these systems is another concern. The issue is not their reliability per se, but rather the misconceptions people have about them. People tend to overreact or underreact when something new comes along, and they tend to think of these AIs as human or even superhuman, which can lead to over-reliance and misuse. ChatGPT and similar models do make mistakes, but unlike human errors, theirs are unique and unexpected.
For example, if you sit next to someone driving a car, you don't expect them to drive into a concrete pillar or into a truck with clouds painted on its side. It is important to understand that there is a difference between anthropomorphic and neuromorphic intelligence: hallucinations aside, we currently know very little about how an AI will fail at a given task. To err is human, but we don't know how an AI model will fail, and this may be the greatest risk. As I mentioned, don't build a financial model with ChatGPT, because the numbers definitely won't add up.
Balancing Innovation with Security
The intersection of enterprise security and AI technologies is an intricate web of excitement, potential, and challenges. As we advance on this frontier, embrace the fair and responsible use of these technologies, openly assess their use-cases within your organization, and create an acceptable use policy that is aligned with your business. Educate your team. This approach will empower you to harness the potential of AI technologies like ChatGPT while ensuring that your organization remains secure in this ultra-fast-changing environment.