A Cyber Security Perspective on ChatGPT
ChatGPT is a new Artificial Intelligence technology. Many are interested in its capabilities, and there is much hype. ChatGPT interprets large sets of textual data and provides users with information based on their inputs and the mechanisms the model uses to analyze the data in the context of their questions. ChatGPT may yield significant benefits to businesses that can get the most out of their vast stores of textual data, whether through private or public ChatGPT platforms. ChatGPT may help automate tasks, deal more effectively with customers, support research, contextualize text data stores, provide a platform for structured or unstructured learning, and more. With appropriate controls, the technology may be deployed safely by the Bank from a cybersecurity viewpoint. Like everything, ChatGPT, whether a private or a public instance, needs controls to ensure its usage is consistent with an organization's strategies and data protection requirements.
Background Information
Being a Security person, naturally, my first reaction on learning of this new technology was to find ways to block it. Just kidding. ChatGPT and AI offer considerable opportunities to change the way we work, the way we learn, and the way we get stuff done. My personal experiences with it have been good. It has helped me decide what to cook for supper and has provided me with recipes. When I ran out of ideas for things to do in the gym, it created a Workout of the Day for me (luckily, it could not see me do the exercises, but I expect that will follow). It has provided a shortcut to get basic facts or its interpretation of the facts. It has helped me read about historical topics of interest and has provided me with descriptions of books I have considered reading. ChatGPT's knowledge of cyber events and standards, and its understanding of cyber best practices, are sound. Overall, ChatGPT has the potential to disrupt, for better or worse. Of course, from my cyber security perspective, the challenge is how to use ChatGPT positively in ways that don't harm ourselves or others.
So, what exactly is ChatGPT? When I asked ChatGPT to describe how it works, this was its response, "I work by using a deep learning-based language model that has been trained on a massive amount of text data, allowing me to generate human-like text based on the input I receive. When you ask me a question or give me a prompt, I analyze it, and using my training, I generate the most likely response. My training data includes a wide range of topics, from general knowledge to language patterns, so I can answer various questions and converse on many topics. However, I don't have personal experiences or emotions, and my responses are limited to the information I was trained on."
GPT, in case you were wondering, "stands for Generative Pre-trained Transformer. It's a type of language model. The 'pre-trained' part means the model was trained on a massive amount of text data. The 'transformer' part refers to the type of architecture used. GPT models have achieved state-of-the-art results on various natural language processing tasks and are widely used in the field."
GPT uses reinforcement learning from human feedback (RLHF) to improve itself, allowing human platform users to contribute to enhancing its effectiveness.
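To make the prompt-and-response loop described above a little more concrete, here is a minimal sketch of how a program might send a prompt to a hosted GPT model and read back the generated text. It assumes the openai Python client (v1.x) with an API key supplied via the OPENAI_API_KEY environment variable; the model name and prompt are illustrative only, not something this paper prescribes.

```python
# A minimal sketch of the prompt-and-response loop described above.
# Assumptions: the openai Python client (v1.x) is installed and an API key
# is available in the OPENAI_API_KEY environment variable; the model name
# and prompt are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In two sentences, what is spear-phishing?"},
    ],
)

# The model returns one or more candidate completions; print the first.
print(response.choices[0].message.content)
```

Note that everything placed in the messages list is sent to the platform for processing, which is exactly why the data protection controls discussed later in this paper matter.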
Some Pros and Cons from a Cyber Risk Viewpoint
ChatGPT can produce textual content based on inputs from the end user.
ChatGPT can find patterns in large data sets.
ChatGPT has Guardrails
ChatGPT and Models
ChatGPT uses Training Data.
ChatGPT Privacy Concerns and Control of the Intellectual Property
Learning using ChatGPT
Critical Thinking
Recruiting
ChatGPT threw in the following suggestions:
· The possibility of unintended consequences: While ChatGPT is designed to generate human-like text, there may be times when the responses it generates could have unintended consequences. For example, the responses could be misinterpreted or contain offensive or harmful content.
· The potential for data breaches: ChatGPT has access to large amounts of data, and if this data is not secured correctly, there is a risk that it could be breached. This could result in sensitive information being leaked or falling into the hands of malicious actors.
· The need for continual monitoring: As ChatGPT is a self-learning system, it is essential to monitor its behavior to ensure that it operates within the desired parameters and is not used for malicious purposes.
· The need for transparency: When using ChatGPT, it is crucial to be transparent about how the technology is used and what data it has access to. This can help build trust with users and ensure they are aware of potential privacy concerns.
Cyber Security Recommendations
1. Protect training data and the models you create with robust access controls, change management, and logging.
2. Monitor the computer systems on which ChatGPT runs.
3. If your organization receives a lot of emails and other external communications, consider the risks associated with improved attempts at phishing. Do you think your users need more training to notice spear-phishing? Do you believe your mail scanning tools detect phishing well enough to prevent such messages from arriving in your inboxes?
4. Ensure AI guardrails are in place and working.
5. Ensure you recognize that the guardrails have limits and that you have ways to deal with those limitations.
6. Figure out how to incorporate data leakage prevention tools into the mix and stop users from sharing sensitive data, not only on public platforms but also on private ones (a minimal sketch of such a screening step follows this list).
7. Remind users of their responsibilities when using such a platform.
8. Authenticate users to the platform and log their actions, if possible.
9. Consider using ChatGPT to enable users to ask cyber security questions; with luck, they will learn from the responses.
10. Security people may find its threat modeling and general security knowledge helpful if they get stuck.
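As a hedged illustration of recommendations 6 and 8, the sketch below shows one way an internal gateway could screen prompts for obviously sensitive data and log who sent what before anything is forwarded to a public or private ChatGPT platform. The patterns, user names, and blocking policy are assumptions made for the example; they are not a complete data leakage prevention control.

```python
# A minimal, illustrative screening step for prompts bound for a ChatGPT
# platform (recommendations 6 and 8). The patterns and policy below are
# assumptions for the sake of the example, not a complete DLP control.
import logging
import re

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("chatgpt-gateway")

# Illustrative patterns only; a real deployment would rely on a proper
# data leakage prevention engine and the organization's own classifications.
SENSITIVE_PATTERNS = {
    "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credential keyword": re.compile(r"(?i)\b(api[_-]?key|password|secret)\b"),
}


def screen_prompt(user: str, prompt: str) -> bool:
    """Return True if the prompt may be forwarded to the ChatGPT platform."""
    hits = [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    if hits:
        log.warning("blocked prompt from %s: matched %s", user, ", ".join(hits))
        return False
    log.info("forwarded prompt from %s (%d characters)", user, len(prompt))
    return True


if __name__ == "__main__":
    sample = "Summarise this: card 4111 1111 1111 1111, owner jane@example.com"
    print("forward to ChatGPT?", screen_prompt("j.smith", sample))
```

In practice, a check like this would sit in front of, not replace, a commercial data leakage prevention tool and the platform's own authentication and audit logging.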
Conclusions
ChatGPT has caught the imagination of many with its recent introduction into the market. Within days of OpenAI launching ChatGPT publicly at the end of 2022, over a million people had signed up. Like all machine learning and Artificial Intelligence, it can scan masses of textual data and return responses to users based on the questions asked. ChatGPT indicated that it had consumed millions of pieces of literature. It can even return software code to meet specifications. The public version has guardrails, which attempt to reduce the risks of malicious use of the platform. The guardrails cannot always prevent users from finding ways to get what they want by asking seemingly legitimate questions. Businesses must consider data protection in environments that meet their legal and other obligations to customers, partners, and employees. We should not be frightened of ChatGPT. We should embrace it while securing it. Security must partner with technology leaders and others considering operating such platforms.
About this paper
This paper was written by Jonathan Gill, with inputs from colleagues, whitepapers, and other information found online, as well as from ChatGPT itself. AI ensured the grammar in this document was acceptable, if not perfect. No data was harmed in the production of this document.
Last Words
I asked ChatGPT to help me write a statement advising users to urgently change their passwords, as it is in their self-interest to do so. I then had ChatGPT rewrite the message in the writing personas of various famous people and movie characters. This is its take on the message in the voice of Yoda (from Star Wars). I have to admit I was strangely drawn to it and almost went and changed my password:
“Subject: Security Breach: Passwords Must Reset
Valued client, hmm? News of security breach have we. Of three hundred thousand accounts, thirty have been compromised, including yours.
Important, the security of your personal and financial information is. Take immediate action, we must, to protect your account. Hurry, reset your password you must. Link provided, https://user.passwordchange.com
Once reset, new password creating, we shall assist you.
Mandatory, this request is. Fail to reset your password within twenty-four hours, your account will be locked, permanently. Inconvenient, we know it may be, but important, the security of your account and personal information is.
Calls or messages asking for username or password, ignore them. Suspicious, they are. Never shall we call you or ask for this information over the phone, email, or other means.
Beware of phishing sites, you must. False websites, they are, mimicking legitimate ones to steal personal information. Always check the URL and look for the SSL certificate before entering any sensitive information.
Our team, tirelessly working, they are, to address this issue and prevent further breaches. Apologies for any inconvenience, we offer. Cooperation and understanding, we appreciate.
Questions or concerns, have you? Assist you, our customer support team will. Reach them at [email protected]
Prompt attention, give it.
In Service,”
Finally, this paper is not the last word on security for ChatGPT. Please assess the risks for your organization if you choose to go ahead with this technology.