A Cyber Security Perspective on ChatGPT
DALL-E 2: Table Mountain in a Sea of Flowers in a Desert.

ChatGPT is a new Artificial Intelligence technology. Many are interested in its capabilities, and there is plenty of hype. ChatGPT interprets large sets of textual data and provides users with information based on their inputs and the mechanisms the model uses to analyze that data in the context of their questions. ChatGPT may yield significant benefits to businesses that can get the most out of their vast stores of textual customer data, whether through private deployments or public ChatGPT platforms. It may help automate tasks, serve customers more effectively, support research, contextualize text data stores, provide a platform for structured or unstructured learning, and more. With appropriate controls, the technology may be deployed safely by the Bank from a cybersecurity viewpoint. Like everything, ChatGPT, whether a private or a public instance, needs controls to ensure its usage is consistent with an organization's strategies and data protection requirements.

Background Information

Being a security person, naturally my first reaction on learning about this new technology was to find ways to block it. Just kidding. ChatGPT and AI offer considerable opportunities to change the way we work, the way we learn, and the way we get things done. My personal experiences with it have been good. It has helped me decide what to cook for supper and provided me with recipes. When I ran out of ideas for things to do in the gym, it created a Workout of the Day for me (luckily, it could not see me do the exercises, but I expect that will follow). It has provided a shortcut to basic facts, or at least its interpretation of the facts. It has helped me read up on historical topics of interest and has provided me with descriptions of books I have considered reading. ChatGPT's knowledge of cyber events and standards, and its grasp of cyber best practices, are sound. Overall, ChatGPT has the potential to disrupt, for better or worse. From my cyber security perspective, the challenge is how to use ChatGPT positively, in ways that don't harm ourselves or others.

So, what exactly is ChatGPT? When I asked ChatGPT to describe how it works, this was its response, "I work by using a deep learning-based language model that has been trained on a massive amount of text data, allowing me to generate human-like text based on the input I receive. When you ask me a question or give me a prompt, I analyze it, and using my training, I generate the most likely response. My training data includes a wide range of topics, from general knowledge to language patterns, so I can answer various questions and converse on many topics. However, I don't have personal experiences or emotions, and my responses are limited to the information I was trained on."

GPT, in case you were wondering, "stands for Generative Pre-trained Transformer. It's a type of language model. The "pre-trained" part means training models on a massive amount of text data. The "transformer" part refers to the type of architecture used. The GPT models have achieved state-of-the-art results on various natural language processing tasks widely used in the field."

GPT uses reinforcement learning from human feedback (RLHF) to improve itself, allowing human platform users to contribute to enhancing its effectiveness.

Some Pros and Cons from a Cyber Risk Viewpoint

ChatGPT can produce textual content based on inputs from the end user.

  • ChatGPT responds to user inputs, providing content on various topics, including general information about cyber security.
  • It can produce software code.
  • It can advise on ways to secure software code.
  • With the correct prompts, ChatGPT can create high-quality phishing emails and take on the personas of fictional characters and real people. Spear phishing may become easier to orchestrate.

ChatGPT can find patterns in large data sets.

  • Use ChatGPT to scan emails for phishing or threatening word patterns, preventing them from reaching their target.
  • ChatGPT can scan large datasets for evidence of compromise or the source of threats, and may help to detect threats, phishing emails, or people expressing threatening or negative sentiments (a minimal sketch follows this list).
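
As a rough illustration of how a security team might put the second point to work, the sketch below asks a model for a structured verdict on a suspicious message. It assumes the OpenAI Python client; the score_email() helper, prompt wording, and model choice are my own illustrative assumptions, and any real deployment would need to keep message content on approved, private infrastructure.

    # Minimal sketch: ask a language model to flag phishing indicators in an email.
    # Assumes the OpenAI Python client (openai>=1.0) with an API key in the
    # OPENAI_API_KEY environment variable; score_email() is an illustrative helper.
    from openai import OpenAI

    client = OpenAI()

    def score_email(subject: str, body: str) -> str:
        """Return the model's assessment of whether the message looks like phishing."""
        prompt = (
            "You are an email security analyst. Assess the message below for "
            "phishing indicators (false urgency, credential requests, mismatched links) "
            "and reply with 'phishing', 'suspicious', or 'benign' plus one sentence of reasoning.\n\n"
            f"Subject: {subject}\n\nBody:\n{body}"
        )
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # model choice is an assumption
            messages=[{"role": "user", "content": prompt}],
            temperature=0,          # keep the verdict as repeatable as possible
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(score_email("Urgent: reset your password",
                          "Click https://user.passwordchange.com within 24 hours or lose access."))

Setting the temperature to zero keeps the verdict as repeatable as possible, which matters more for triage than for creativity.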

ChatGPT has Guardrails

  • Guardrails limit user action by constraining undesirable requests (a minimal sketch of one such pre-check follows this list).
  • It is noted in the public ChatGPT that the application will not build phishing emails or ransomware if you request them as such. Still, it is possible to obtain code for specific malicious tasks by asking for it without referring to the crime directly. With some skill, building malicious code is possible simply by asking the right questions.
  • Guardrails need maintenance.
  • Guardrails may be bypassed.
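
To make the first point concrete, one common guardrail pattern is to screen a request before it ever reaches the model. The sketch below uses the OpenAI moderation endpoint as an illustrative pre-check; the check_request() and guarded_chat() names are assumptions, and real guardrails are layered inside the model and the platform as well, not bolted on like this.

    # Minimal sketch: screen a user request with a moderation check before passing
    # it to the chat model. Assumes the OpenAI Python client; the helper names are
    # illustrative and not part of any product.
    from openai import OpenAI

    client = OpenAI()

    def check_request(user_prompt: str) -> bool:
        """Return True if the prompt is flagged by the moderation endpoint."""
        result = client.moderations.create(input=user_prompt)
        return result.results[0].flagged

    def guarded_chat(user_prompt: str) -> str:
        """Refuse flagged prompts; otherwise forward them to the model."""
        if check_request(user_prompt):
            return "Request declined by policy."
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": user_prompt}],
        )
        return response.choices[0].message.content

This kind of pre-check is exactly the sort of guardrail that needs maintenance and can be bypassed by careful rewording, which is why the later points in the list still apply.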

ChatGPT and Models

  • Working models provide sound, beneficial results.
  • Compromised models could introduce bias or reflect inadequate training, leading to poor outcomes and decisions.

ChatGPT uses Training Data

  • Good quality data will lead to good outputs and a generally helpful platform.
  • Poor choices of training material will produce poor-quality outputs containing bias or harmful information. The selection of data to load into the application needs careful consideration.

ChatGPT Privacy Concerns and Control of Intellectual Property

  • Where access controls function properly within and between applications, access to such data might yield helpful business information.
  • Poorly controlled private data can be exposed to unauthorized people and leaked to external sources.
  • Intellectual property may be lost if fed into public or insecure ChatGPT platforms.
  • Key public platforms require some form of authentication. Still, it remains unclear how widely the data users share with the system feeds into models that other users of the system might see.

Learning using ChatGPT

  • AI is an excellent tool for learning about things within its data set and models.
  • AI can shorten the time it takes to find the right person or solve organizational problems.
  • Users may use such applications to cheat on courses. In the broader community, there is evidence that such tools can pass law exams when the relevant material is in their training data; notably, ChatGPT has passed law exams (with a C grade).

Critical Thinking

  • Appropriately used, a user would view the results of ChatGPT critically, assess the value of the responses, and then act.
  • Users who are unfamiliar with the topic, or simply lazy, may accept the results at face value, leading to poor outcomes. ChatGPT is only as good as its training data.

Recruiting

  • ChatGPT may be able to scan CVs and resumes for mentions of specific skills and other parameters, speeding up this part of the recruitment process (a rough sketch follows this list).
  • Tests form part of the recruitment process, and tools such as ChatGPT might provide candidates with the information they need to prepare their responses, meaning we may hire people who do not have the required skills.
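
As a sketch of the first point, a recruiter could ask a model to pull named skills out of CV text and compare them against a role profile. The snippet below is illustrative only: REQUIRED_SKILLS, the helper names, and the prompt are assumptions, and any real use would need the privacy controls discussed above, since CVs contain personal data.

    # Minimal sketch: extract skills mentioned in a CV and compare them against a
    # role profile. REQUIRED_SKILLS and the helper names are illustrative; the
    # model may not always return clean JSON, so real code would validate it.
    import json
    from openai import OpenAI

    client = OpenAI()
    REQUIRED_SKILLS = {"python", "threat modelling", "incident response"}

    def extract_skills(cv_text: str) -> set:
        """Ask the model for a JSON array of skills it finds in the CV text."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{
                "role": "user",
                "content": "List the professional skills mentioned in this CV as a "
                           "JSON array of lowercase strings, with no other text:\n\n" + cv_text,
            }],
            temperature=0,
        )
        return set(json.loads(response.choices[0].message.content))

    def missing_skills(cv_text: str) -> set:
        """Required skills that the CV does not appear to mention."""
        return REQUIRED_SKILLS - extract_skills(cv_text)

Keeping the comparison logic in plain code, and using the model only for extraction, makes the screening step easier to audit.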

ChatGPT threw in the following suggestions:

  • The possibility of unintended consequences: While ChatGPT is designed to generate human-like text, there may be times when the responses it generates could have unintended consequences. For example, the responses could be misinterpreted or contain offensive or harmful content.

  • The potential for data breaches: ChatGPT has access to large amounts of data, and if this data is not secured correctly, there is a risk that it could be breached. This could result in sensitive information being leaked or falling into the hands of malicious actors.

  • The need for continual monitoring: As ChatGPT is a self-learning system, it is essential to monitor its behavior to ensure that it operates within the desired parameters and is not used for malicious purposes.

  • The need for transparency: When using ChatGPT, it is crucial to be transparent about how the technology is used and what data it has access to. This can help build trust with users and ensure they are aware of potential privacy concerns.

Cyber Security Recommendations

1. Protect training data and the models you create with robust access controls, change management, and logging.

2. Monitor ChatGPT computer systems.

3. If your organization receives a lot of emails and other external communications, consider the risks associated with improved attempts at phishing. Do your users need more training to notice spear-phishing? Do your mail scanning tools detect phishing well enough to prevent such messages from arriving in your inboxes?

4. Ensure AI guardrails are in place and working.

5. Recognize that the guardrails have limits, and have ways to deal with those limitations.

6. Figure out how to incorporate data leakage prevention tools into the mix and stop users from sharing sensitive data, particularly on public platforms, but also on private ones (a minimal sketch follows this list).

7. Remind users of their responsibilities when using such a platform.

8. Authenticate users to the platform and log their actions, if possible.

9. Consider using ChatGPT to enable users to ask cyber security questions; with luck, they will learn from the responses.

10. Security people may find its threat modeling and general security knowledge helpful if they get stuck.
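
Relating to recommendation 6, one lightweight supplement to a dedicated data leakage prevention product is to screen prompts for obviously sensitive patterns before they leave for a public platform. This is only a minimal sketch: the regular expressions and the redact_or_block() helper are illustrative assumptions, and a real control would rely on proper DLP tooling and policy rather than a few patterns.

    # Minimal sketch: block or redact prompts that contain obviously sensitive data
    # before they are sent to a public ChatGPT-style platform. Patterns are
    # illustrative; a production control would use a dedicated DLP tool.
    import re

    SENSITIVE_PATTERNS = {
        "payment_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "national_id": re.compile(r"\b\d{13}\b"),  # example 13-digit ID format
    }

    def redact_or_block(prompt: str, block: bool = True) -> str:
        """Raise if sensitive data is found (block=True); otherwise mask it."""
        findings = [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]
        if findings and block:
            raise ValueError(f"Prompt blocked, sensitive data detected: {findings}")
        for name, pattern in SENSITIVE_PATTERNS.items():
            prompt = pattern.sub(f"[REDACTED {name}]", prompt)
        return prompt

    # Example: this raises before the text can be pasted into a public platform.
    # redact_or_block("Customer 4111 1111 1111 1111 complained via jane@example.com")

Blocking outright is the safer default; redaction is shown only to illustrate the alternative.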

Conclusions

ChatGPT has caught the imagination of many since its recent introduction to the market. More than a million people had signed up after OpenAI launched ChatGPT publicly at the end of 2022. Like other machine learning and Artificial Intelligence systems, it can scan masses of textual data and return responses to users based on the questions asked. ChatGPT indicates that it has consumed millions of pieces of literature. It can even return software code to meet a specification. The public version has guardrails, which attempt to reduce the risks of malicious use of the platform. The rails cannot prevent users from finding ways to get what they want by phrasing their requests in apparently legitimate terms. Businesses must consider data protection and run these environments in ways that meet their legal and other obligations to customers, partners, and employees. We should not be frightened of ChatGPT. We should embrace it while securing it. Security must partner with technology leaders and others considering operating such platforms.

About this paper

This paper was written by Jonathan Gill, with inputs from colleagues, whitepapers, and other information found online, as well as from ChatGPT itself. AI ensured the grammar in this document was acceptable, if not perfect. No data was harmed in the production of this document.

Last Words

I asked ChatGPT to help me write a statement advising users to change their passwords urgently, as it is in their self-interest to do so. I then got ChatGPT to rewrite the message in the writing personas of various famous people and movie characters. This is its take on the message in the character of Yoda (from Star Wars). I have to admit I was strangely drawn to it and almost went and changed my password:

“Subject: Security Breach: Passwords Must Reset

Valued client, hmm? News of security breach have we. Of three hundred thousand accounts, thirty have been compromised, including yours.

Important, the security of your personal and financial information is. Take immediate action, we must, to protect your account. Hurry, reset your password you must. Link provided, https://user.passwordchange.com

Once reset, new password creating, we shall assist you.

Mandatory, this request is. Fail to reset your password within twenty-four hours, your account will be locked, permanently. Inconvenient, we know it may be, but important, the security of your account and personal information is.

Calls or messages asking for username or password, ignore them. Suspicious, they are. Never shall we call you or ask for this information over the phone, email, or other means.

Beware of phishing sites, you must. False websites, they are, mimicking legitimate ones to steal personal information. Always check the URL and look for the SSL certificate before entering any sensitive information.

Our team, tirelessly working, they are, to address this issue and prevent further breaches. Apologies for any inconvenience, we offer. Cooperation and understanding, we appreciate.

Questions or concerns, have you? Assist you, our customer support team will. Reach them at [email protected]

Prompt attention, give it.

In Service,”

Finally, this paper is not the last word on security for ChatGPT. Please assess the risks for your own organization if you choose to go ahead with this technology.

#chatgpt #ai #cybersecurity

Cameron Losco

GM Cybersecurity @ DR Insight | Helping businesses navigate the evolving threat landscape

1y

Nice read Jonathan!

Val Rademan

CIO Cash and Business Credit

1y

Very thought provoking and useful article Jonathan

Sudashin Pillay

Lead Solution Architect at Absa Group

1y

Nice read Jono

Vernon Fryer

Cyber Defence Advisory Council (BISO), CISA Contributor.

1y

Nice read, thx Johnathan. Insightful

Andre Beley

Director of Sales Operations driving sales growth and profitability

1y

I've tested ChatGPT with several interesting questions, and the responses have been interesting. Overall, each answer is well structured and not only answers my question but makes me think differently! I've wondered how this could affect professions like medicine. I asked my son, who is a medical student. He said to ask this question: “I've got a fever, cough and a snotty nose. What do I have?” The response answered my question.
