Cybersecurity Issues of ChatGPT
Conversational chatbots introduce a range of new cybersecurity issues that will require attention from practitioners.


Enterprise teams are now asking our TAG Cyber team about the cybersecurity of ChatGPT. While it is too early to provide decent empirical guidance, this note offers a preliminary glimpse into the types of issues likely to emerge with conversational chatbots. (Our Promise: We will mercifully not include ChatGPT-generated paragraphs to mischievously demonstrate its capability.)

First, it should be acknowledged that ChatGPT is not synonymous with artificial intelligence (AI), nor does it represent the wide range of chatbots in general. Instead, ChatGPT is an early working prototype of a conversational system from OpenAI that provides human-like responses to questions.

Second, it should be acknowledged that the current capabilities offered by ChatGPT, and other AI-based systems, provide just a glimpse into the future support such systems will provide to a variety of applications, including education, manufacturing, transportation, retail, health care, and government.

Finally, it should be acknowledged that every new scientific or engineering advance generates early hype, followed by calm application by engineers and practitioners to integrate the new capability into the right types of usage scenarios. As security experts will attest, however, many fraudulent and criminal use-cases often emerge as well.

This note is intended to give readers an early glimpse into the security scenarios that will emerge from intelligent AI-based conversational systems such as ChatGPT. We do not address autonomous machines, robot factory automation, and other non-conversational applications. We also focus on what we expect to emerge in the coming years versus current prototype functionality from OpenAI.

Adversaries and Motivations for Chatbots

To examine security issues, we must first identify the adversaries and motivations that are likely to occur. To do so, we introduce a model that mimics the progression of adversaries and motivations that were present for computer security (now cybersecurity) during its first few decades of relevance. The model is shown below:


Figure 1. Adversary and Motivational Model for Conversational AI Chatbots

Two points are worth highlighting: First, it is likely that fake conversational AI bots will soon emerge that cannot be trusted. An entire infrastructure will be required to help people steer clear of fake systems so that they can stick with real ones. These fake bots will be domain specific, offering answers on various useful topics (e.g., health) and questionable ones (e.g., porn).

Second, it is likely that the brand-new infrastructure at OpenAI is going to be insufficient for the barrage of security issues that will emerge in the coming months. No company that goes from a few users to many millions in such a short period of time can deal with that growth without massive security vulnerabilities. This is common sense.

Observers should not interpret any security weaknesses in the OpenAI infrastructure as an indictment of AI or conversational bot security. Rather, these are the growing pains of a company scaling at an unprecedented rate. Hopefully, the OpenAI team will work quickly to establish a mature security infrastructure.

Security Issues of Conversational Chatbots

The ten issues listed below represent an early glimpse into the types of security issues that will emerge with conversational chatbots. The summary is guided by underlying models such as the well-known CIA model of cybersecurity, and includes reference (during list development) to other models such as MITRE ATT&CK.

The descriptions here are high-level, so developers might find the discussion too abstract to put into immediate practice – although starting with high-level views is better than diving into the minutiae anyway. Our hope is that this note helps to establish useful dialogue as this topic drives both security initiatives and the inevitable attacks on systems such as ChatGPT.

Issue 1: Availability Attacks

Businesses that are now beginning to depend on ChatGPT and similar services may be accustomed to the robust search infrastructure built over decades by Google. DDoS attacks targeting OpenAI, by contrast, seem likely to succeed, so be warned that building your business operations around ChatGPT could result in outages.
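Until provider-side resilience matures, teams that depend on such a service can at least wrap API calls in retry logic so that transient outages degrade gracefully rather than halting operations. A minimal sketch in Python (the `request_fn` callable, retry counts, and delay values are illustrative assumptions, not part of any OpenAI SDK):

```python
import random
import time

def call_with_backoff(request_fn, max_retries=5, base_delay=1.0):
    """Call an external chatbot API, retrying failed requests with
    exponential backoff plus jitter to ride out transient outages."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the outage to the caller
            # Sleep base, 2x base, 4x base, ... plus proportional jitter.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))
```

Pairing this with a cached or degraded fallback path keeps the business function alive when the upstream bot is unreachable.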

Issue 2: Fake Chatbots

Now that ChatGPT has taken the world by storm, we should expect to see competing conversational bots produced one after another by companies, search firms, and, yes, fraudsters. We will need to begin training employees to be careful, and some sort of DMARC-like infrastructure will be needed to ensure the authenticity of any bot service.
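To make the DMARC analogy concrete: one basic building block of bot authenticity is having the genuine service sign each response so that clients can reject impostors. A minimal sketch using a shared-secret HMAC (the function names and key handling are illustrative assumptions; DMARC itself layers DNS-published policy over DKIM/SPF, and a real bot-authenticity scheme would more likely use public-key signatures):

```python
import hashlib
import hmac

def sign_response(secret: bytes, body: bytes) -> str:
    """Provider side: attach an HMAC tag to every bot response."""
    return hmac.new(secret, body, hashlib.sha256).hexdigest()

def verify_response(secret: bytes, body: bytes, tag: str) -> bool:
    """Client side: accept only responses whose tag verifies, i.e.
    responses actually produced by the expected bot service."""
    expected = sign_response(secret, body)
    # compare_digest avoids leaking the tag through timing differences
    return hmac.compare_digest(expected, tag)
```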

Issue 3: Output Corruption

Business models will probably drive “intentional corruption” of output – for example, listing some firm that has paid money to appear first in those little bullet lists that are so common in chatbot output. And we all know that what can be done intentionally can be done maliciously. Watch for hackers using crafted data to train weird or biased output from bots.
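The mechanism is easy to demonstrate with a toy model: if a bot's recommendation is driven by frequency in its training data, then a handful of injected records is enough to flip the answer. A deliberately simplified sketch (the vendor names and record counts are made up for illustration):

```python
from collections import Counter

def top_recommendation(training_answers):
    """Toy 'model': recommend whichever answer appears most often
    in the training data."""
    return Counter(training_answers).most_common(1)[0][0]

# Honest training data: VendorA is genuinely the common answer.
honest = ["VendorA"] * 10 + ["VendorB"] * 3

# An advertiser or attacker injects a few crafted records.
poisoned = honest + ["VendorB"] * 8
```

Real systems are far more complex, but the underlying risk is the same: whoever can tilt the training distribution can tilt the output.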

Issue 4: Insider Attacks

The sad fact is that nation-state adversaries will begin targeting successful firms such as OpenAI to integrate insiders into their working operational teams. Anyone working in critical infrastructure knows this to be true, despite any public pronouncements to the contrary. We should expect to see insider threats emerge from the most prominent firms in this area.

Issue 5: Malware Generation

The potential to generate code implies the potential to generate bad code. And the potential to generate bad code implies the potential to generate malware. It is impossible to imagine that the technology will not move in this direction, probably with malware generators located on the Dark Web and accessible via Tor. This will happen quickly.

Issue 6: Social Engineering Attacks

The use of fake chatbots will provide a fertile breeding ground for misdirecting users to fake sites, fraudulent businesses, and other dangerous Internet destinations. It will soon become an art to determine how to safely use the output from a conversational chatbot to ensure that one is not being misdirected or tricked. Social engineering will benefit from chatbots for sure.

Issue 7: Bidirectional Conversations

The current one-way “user-to-ChatGPT” exchange will quickly evolve into a conversation that is more real-time and bidirectional. This will give chatbots the ability to steer users toward spilling lots of information – and the AI will know how to do this (e.g., using methods learned from the greatest interrogators who ever lived).

Issue 8: Supply Chain Security

The usual problems associated with the software supply chain will emerge quickly in the conversational chatbot space. Maybe SBOMs will help, but the truth is that we will probably not have a clue about the software packages and interconnections that exist behind the bot we are using. The situation will be somewhat akin to the confusion that exists with TikTok.

Issue 9: Military Use

Current conversational bots like ChatGPT have limited use for military attackers, but expect to see data harvesting by nation-states as a means of informing their intelligent weapons. That conversational bot recommending good places to go skiing in Utah might actually be run by an adversary nation-state trying to learn more about your habits and whereabouts.

Issue 10: Malicious Text

There is a science to generating spam or phishing text, usually with intentionally introduced misspellings. This is done to weed out the more alert recipients and select for the true dummies who will follow the scam to completion. It is likely that chatbots will help optimize such text for email usage – so expect to see phishing attacks get better soon.
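On the defensive side, one small countermeasure is flagging lookalike domains in links that arrive via chatbot output or email before users follow them. A rough heuristic sketch using standard-library string similarity (the allowlist and threshold are illustrative assumptions; production anti-phishing systems rely on far richer signals):

```python
from difflib import SequenceMatcher

# Hypothetical allowlist of domains the organization trusts.
LEGITIMATE_DOMAINS = ["openai.com", "example-bank.com"]

def lookalike_score(domain: str) -> float:
    """Highest similarity between the candidate and any known-good
    domain; near 1.0 without an exact match suggests typosquatting."""
    return max(SequenceMatcher(None, domain, good).ratio()
               for good in LEGITIMATE_DOMAINS)

def is_suspicious(domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that closely resemble, but are not, trusted ones."""
    return domain not in LEGITIMATE_DOMAINS and lookalike_score(domain) >= threshold
```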

Cybersecurity is a major concern with ChatGPT, since any data stored by a chatbot system is at risk of exposure, including potentially sensitive data. Threats also come from malicious users attempting to exploit a chatbot’s weaknesses to gain access to confidential data and systems. For ChatGPT, it is important to have sufficient cybersecurity protocols in place to protect users’ data and systems.

Ondrej Krehel, PhD

ex-CEO, Innovator, Cyber Warrior Scholar, Practitioner in Digital Forensics, Cybersecurity, AI, Mathematics and Physics, Cyber Hostage Negotiator

1 year ago

Edward Amoroso I enjoyed this read, and it makes me wonder how well algorithms in AI will be enhanced, and in which direction: shaping input better, as with static and boundary dynamic conditions, or internal algorithm processing optimization – which will be faster? We saw the same race in cryptography a decade ago.

Andrew Hopkins

Store, secure and control data on all distributed devices such as UAV's, sensor's, medical devices. Distributed data management for a Distributed World.

1 year ago

Output corruption is an interesting one and points to the broader AI issues of data integrity and data provenance. How do you trust the output when you know nothing about the input (data) and, as you point out, the data is subject to mass manipulation? Data security is enough of a problem and still relies on perimeter defenses that are only as good as their weakest link (ie people). Data integrity is another issue entirely and potentially far more dangerous in an AI enabled world.

Fred Callis

CEO/President at Cybervore

1 year ago

Hi Ed, The ChatGPT topic was a discussion this week in our company both for and against its future. Thanks for sharing!

Ed….great start to addressing the cybersecurity issues with these types of products….aside from cyber issues, I’m having a difficult time understanding the difference between ChatGPT and other speech answering services that utilize learning models. Companies like Interactions have very sophisticated automated speech answering and conversational systems that have been around forever and continue to be improved….. Is the big deal that it’s open AI?
