ChatGPT: Cybersecurity friend or foe?

If you haven’t heard about ChatGPT yet, perhaps you’ve just been thawed from cryogenic slumber or returned from six months off the grid. ChatGPT—the much-hyped, artificial intelligence (AI) chatbot that provides human-like responses from an enormous knowledge base—has been embraced practically everywhere, from private sector businesses to K–12 classrooms.

Upon its launch in November 2022, tech enthusiasts quickly jumped at the shiny new disruptor, and for good reason: ChatGPT has the potential to democratize AI, personalize and simplify digital research, and assist in both creative problem-solving and tackling “busywork.” But the security community and other technology leaders have started raising the alarm, worried about the program’s potential to write malware and spread mis- and disinformation.


Why ChatGPT security concerns are both overblown and valid

Artificial intelligence (AI) was once a science fiction cautionary tale—stories consistently warned against designing machines capable of surpassing human ingenuity. But over the last 20 years, AI has transitioned from the stuff of nightmares to mainstream adoption. From the first Roomba in 2002 to the first virtual assistant (Siri) in 2011, AI has slowly and steadily penetrated the consumer technology market, often without buyers realizing that artificial intelligence actually powers the functionality behind their favorite devices.

As of November 2022, people can no longer ignore the artificial elephant in the room. Enter: ChatGPT. Developed by OpenAI, ChatGPT (Chat Generative Pre-Trained Transformer) is an AI-powered chatbot that uses an enormous and sophisticated language model to generate human-like responses in text format. ChatGPT amassed an incredible 100 million users within two months of its explosive launch, and its website receives over 1 billion visits per month.

Unlike other chatbots, which are typically only able to produce canned replies on a few subjects, ChatGPT can mimic the style of celebrity CEOs, craft business pitches, compose music and song lyrics, answer test questions, simulate an entire chat room, write computer code, and much more. For example, Malwarebytes asked ChatGPT to write the opening paragraph of a novel about an antiquated antivirus program that relies on signature-based detection to stop new and emerging threats.

Here's what the program came back with:


“The antivirus program blinked to life, its archaic interface flickering on the outdated CRT monitor. The program’s once cutting-edge signature-based detection technology was now decades out of date, unable to keep pace with the rapidly evolving landscape of digital threats. But for the small team of programmers responsible for maintaining the program, it was still their best line of defense against the unrelenting tide of new and unknown viruses that threatened to breach their systems. As they watched the program scan through the code, line by line, they knew that they were fighting an uphill battle - but they had no other choice but to keep trying.”


Such articulate, detailed responses caused a near frenzy upon ChatGPT’s initial release. Many corporate customers and app developers have rushed to integrate the platform, along with OpenAI’s DALL-E image generator, into their business processes and products. However, the security community and those wary of artificial intelligence’s steady drumbeat forward have warned organizations to exercise caution over myriad potential risks.

Because of its meteoric rise into public consciousness and rapid adoption, the generative AI chatbot has been the subject of continuing, complex conversations about its impact on the cybersecurity industry, threat landscape, and humanity as a whole. Will ChatGPT be the sentient harbinger of death some have claimed? Or is it a unicorn that’s going to solve every business, academic, and creative problem? The answer, as usual, lies somewhere in the gray.


Read also:

What is ChatGPT?

OpenAI’s terms of use


Security pros of ChatGPT

AI can be a powerful tool for cybersecurity and information technology professionals. It will change the way we defend against cyberattacks by improving the industry’s ability to detect and respond to threats in real time. And it will help businesses shore up their IT infrastructure to better withstand the constant stream of increasingly sophisticated attacks. Most effective security solutions today, including Malwarebytes, already employ some form of machine learning. That’s why some in the security community argue that generative AI tools can be safely deployed to strengthen an organization’s cybersecurity posture as long as they’re implemented according to best practices.


Increases efficiency

ChatGPT can increase efficiency for cybersecurity staff on the front lines. For one, it can significantly reduce notification fatigue, a growing concern within the field. With companies grappling with limited resources and a widening talent gap, a tool like ChatGPT could simplify certain labor-intensive tasks and give defenders back valuable time to commit to higher-level strategic thinking. ChatGPT can be trained to identify and mitigate network security threats like DDoS attacks when used in conjunction with other technologies. It can also help automate security incident analysis and vulnerability detection, as well as more accurately filter spam.
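
To make that kind of triage automation concrete, here is a minimal sketch, assuming the official OpenAI Python client (the openai package, v1.x) and an OPENAI_API_KEY environment variable; the model name, prompt, and labels are illustrative placeholders rather than a recommended production pipeline.

    # Minimal triage sketch: ask a chat model to label a suspicious email.
    # Assumes the official "openai" Python package (v1.x) and an
    # OPENAI_API_KEY environment variable. Illustrative only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def triage_email(subject: str, body: str) -> str:
        """Return a one-word label: benign, spam, or phishing."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder model name
            temperature=0,          # keep labels as deterministic as possible
            messages=[
                {"role": "system",
                 "content": "You are a security analyst. Reply with exactly "
                            "one word: benign, spam, or phishing."},
                {"role": "user",
                 "content": f"Subject: {subject}\n\n{body}"},
            ],
        )
        return response.choices[0].message.content.strip().lower()

    print(triage_email("Payroll update required",
                       "Click here to confirm your banking details."))

In practice, a label like this would be one signal among many, feeding an analyst’s queue rather than triggering automated action on its own.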


Assists engineers

Malware analysts and reverse engineers could also benefit from ChatGPT’s assistance on traditionally challenging tasks, such as writing proof-of-concept code, comparing language- or platform-specific conventions, and analyzing malware samples. The chatbot can also help engineers learn how to write in different programming languages, master difficult software programs, and understand vulnerabilities and exploit code.
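
As a hypothetical example of that kind of assistance, the short helper below (same assumed openai package and API key as the earlier sketch) asks the model for a step-by-step description of a small, non-sensitive snippet; it is a first-pass aid, not a substitute for proper analysis, and nothing confidential should ever be pasted into an external service.

    # Hypothetical "explain this snippet" helper for analysts. Assumes the
    # same "openai" package and OPENAI_API_KEY as the earlier sketch.
    # Never submit confidential or customer code to an external service.
    from openai import OpenAI

    client = OpenAI()

    def explain_snippet(snippet: str, language: str = "PowerShell") -> str:
        """Ask the model for a step-by-step description of a code snippet."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "You are a malware analyst. Explain what the "
                            "following code does, step by step. Do not "
                            "improve, extend, or execute it."},
                {"role": "user",
                 "content": f"{language} snippet:\n{snippet}"},
            ],
        )
        return response.choices[0].message.content

    print(explain_snippet("Get-Process | Where-Object {$_.CPU -gt 100}"))

Keeping the prompt explicit about not improving or extending the code is a small guardrail against the tool drifting from analysis into authoring.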


Trains employees

ChatGPT’s security applications aren’t limited to Information Security (IS) personnel. The program can help close the security knowledge gap by assisting in employee training. Cybersecurity training is crucial for organizations interested in mitigating cyberattacks and fraud, yet IT departments are often far too busy to offer more than a single course per year. ChatGPT can step in to offer insights on identifying the latest scams, avoiding social engineering pitfalls, and setting stronger passwords in concise, conversational text that may be more effective than a lecture or slide presentation.


Aids law enforcement

Finally, ChatGPT has the potential to assist law enforcement with investigating and anticipating criminal activities. In a March 2023 report from Europol, subject matter experts found that ChatGPT and other large language models (LLMs) opened up “explorative communication” for law enforcement to quickly gather key information without having to manually search through and summarize data from search engines. LLMs can significantly speed up the learning process, enabling a much faster gateway into technological comprehension than was previously thought possible. This could help officers get a leg up on cybercriminals, whose understanding of emerging technologies has typically outpaced their own.


Read also:

ChatGPT helps both criminals and law enforcement, says Europol report


Security concerns overblown

Not long after ChatGPT was first introduced, the inevitable hand-wringing by technology decision-makers took hold. In a February survey of IT professionals by BlackBerry, 51 percent predicted we are less than a year away from a successful cyberattack being credited to ChatGPT, and 71 percent believed nation states are likely already using the technology for malicious purposes.

The following month, thousands of tech leaders, including Steve Wozniak and Elon Musk, signed an open letter to all AI labs calling on them to pause the development of systems more powerful than the latest version of ChatGPT for at least six months. The letter cites the potential for profound risks to society and humanity that arise from the rapid development of advanced AI systems without shared safety protocols. More than 27,500 signatures have since been added to the letter.

However, even when ChatGPT is engaged in ominous activities, the outcomes at present are rather harmless. Since OpenAI lets developers build on its official APIs, some have tested a few nefarious theories by creating ChaosGPT, an internet-connected “evil” agent instructed to act autonomously on destructive goals. One user commanded the AI to destroy humanity, and it planned a nuclear winter, all while maintaining its own Twitter account, which was ultimately suspended.

So maybe ChatGPT isn’t going to take over the world just yet—what about some of the more realistic security concerns being voiced, like the ability to develop malware or phishing kits?

When it comes to writing malicious code, ChatGPT isn't yet ready for prime time. In fact, the platform is a terrible programmer in general. It's currently easier for an expert threat actor to create malware from scratch than to spend time correcting what ChatGPT has produced. The fear that ChatGPT would hand script kiddies the programming power to produce thousands of new malware strains is unfounded: amateur cybercriminals lack both the knowledge to spot and fix minor errors in generated code and the deeper understanding of how that code works.

One of our researchers recently embarked on an experiment to get ChatGPT to write ransomware, and despite the chatbot’s initial protests that it couldn’t “engage in activities that violate ethical or legal standards, including those related to cybercrime or ransomware,” with a little coaxing, ChatGPT eventually complied. The result: snippets of ransomware code that switched languages throughout, stopped short after a certain number of characters, dropped features at random, and were essentially incoherent and useless.

Since the primary focus of ChatGPT’s training was on language skills, security pros have been most anxious about its ability to generate believable phishing kits. While the chatbot can produce a clean phishing email that’s free from grammatical or spelling errors, many modern phishing samples already do the same. The AI tool’s phishing skills begin and end with writing emails because, again, it lacks the coding talent to produce other elements like credential harvesters, infected macros, or obfuscated code. Its attempts so far have been rudimentary at best—and that’s with the assistance of other tools and researchers.

ChatGPT can only draw on the data it was trained on, which extends only through 2021. Even today, there are simply not enough well-written phishing scripts in the wild for ChatGPT to surpass what cybercriminals have already developed. In addition, OpenAI has safety protocols that explicitly prohibit the use of its models for malware development, fraud (including spam and scams), and invasions of privacy. Unfortunately, that hasn’t stopped crafty individuals from “jailbreaking” ChatGPT to get around them.


Read also:

ChatGPT gut check: cybersecurity threats overhyped

Malwarebytes Labs: ChatGPT happy to write ransomware, just really bad at it


ChatGPT security cons

Just because some of the worst fears about ChatGPT are overhyped doesn’t mean there are no justifiable concerns. According to the NIST AI Risk Management Framework published in January 2023, an AI system can only be deemed trustworthy if it adheres to the following seven criteria:

  1. Valid and reliable
  2. Safe
  3. Secure and resilient
  4. Accountable and transparent
  5. Explainable and interpretable
  6. Privacy-enhanced
  7. Fair, with harmful bias managed

However, risks can emerge from socio-technical tensions and ambiguity related to how an AI program is used, its interactions with other systems, who operates it, and the context in which it is deployed.


Racial and gender bias

There are many inherent uncertainties in LLMs that render them opaque by nature: limited explainability and interpretability, and a lack of transparency and accountability, including insufficient documentation. Researchers have also reported multiple cases of harmful bias in AI, including crime prediction algorithms that unfairly target Black and Latino people and facial recognition systems that have difficulty accurately identifying people of color. Without proper controls, ChatGPT could amplify, perpetuate, and exacerbate toxic stereotypes, leading to undesirable or inequitable outcomes for certain communities and individuals.


Lack of verifiable metrics

AI systems suffer from a deficit of verifiable measurement metrics, which would help security teams determine whether a particular program is safe, secure, and resilient. What little data exists is far from robust and lacks consensus among AI developers and security professionals alike. What’s worse, different AI developers interpret risk in different ways and measure it at different intervals in the AI lifecycle, which could yield inconsistent results. Some threats may be latent at one time but increase as AI systems adapt and evolve.


Cybercriminal experimentation

Despite its struggles with malicious code, ChatGPT has already been weaponized by enterprising cybercriminals. By January, threat actors in underground forums were experimenting with ChatGPT to recreate malware variants and techniques described in research publications. Criminals shared malicious tools, such as an information stealer, an automated exploit, and a program designed to phish for credentials. Researchers also discovered cybercriminals exchanging ideas about how to create dark web marketplaces using ChatGPT that sell stolen credentials, malware, or even drugs in exchange for cryptocurrency.


Vulnerabilities and exploits

There are few ways to know in advance if an LLM is free from vulnerabilities. In March, OpenAI temporarily took down ChatGPT because of a bug that allowed some users to see the titles of other people’s chat histories and the first messages of newly created conversations. After further investigation, OpenAI discovered the vulnerability had exposed some user payment and personal data, including first and last names, email addresses, payment addresses, the last four digits of credit card numbers, and card expiration dates. While OpenAI claims, “We are confident that there is no ongoing risk to users’ data,” there’s no way (at present) to confirm or deny whether personal information was exfiltrated for criminal purposes.

Also in March, OpenAI massively expanded ChatGPT’s capabilities to support plugins that allow access to live data from the web, as well as from third-party applications like Expedia and Instacart. In code provided to ChatGPT customers interested in integrating the plugins, security analysts found a potentially serious information disclosure vulnerability. The bug can be leveraged to capture secret keys and root passwords, and researchers have already seen attempted exploits in the wild.


Privacy concerns

Compounding worries that vulnerabilities could lead to data breaches, several top brands recently chastised employees for entering sensitive business data into ChatGPT without realizing that all messages are saved on OpenAI’s servers. When Samsung engineers asked ChatGPT to fix errors in their source code, they accidentally leaked confidential notes from internal meetings and performance data in the process. An executive at another company cut and pasted the firm's 2023 strategy into ChatGPT to create a slide deck, and a doctor submitted a patient's name and medical condition for ChatGPT to craft a letter to the patient's insurance company.

Both privacy and security concerns have prompted major banks, including Bank of America, JPMorgan Chase, Goldman Sachs, and Wells Fargo, to restrict or outright ban ChatGPT and other generative AI models until they can be further vetted. Other major companies, including Amazon, Microsoft, and Walmart, have likewise warned their staff to refrain from divulging proprietary information or sharing personal or customer data on ChatGPT.
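
One practical safeguard, in the spirit of the policies above, is to scrub obvious identifiers before a prompt ever leaves the organization. The sketch below is a deliberately simple, regex-based redactor with illustrative patterns; a real deployment would need far more robust detection (names, account numbers, source code), so treat it as a sketch of the idea rather than a complete control.

    # Rough pre-submission scrubber: mask emails, card-like digit runs, and
    # phone numbers before text is sent to any external LLM. The patterns
    # are illustrative, not a complete PII detector.
    import re

    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "card":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "phone": re.compile(r"\b\+?\d{1,3}[ -]?\(?\d{3}\)?[ -]?\d{3}[ -]?\d{4}\b"),
    }

    def redact(text: str) -> str:
        """Replace each match with a labeled placeholder such as [EMAIL]."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label.upper()}]", text)
        return text

    prompt = "Contact jane.doe@example.com, card 4111 1111 1111 1111."
    print(redact(prompt))  # -> Contact [EMAIL], card [CARD].

Even a crude filter like this reduces what ends up on a third-party server, though policy and access controls remain the primary defense.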


Social engineering

Finally, cybercriminals wouldn’t be cybercriminals if they didn’t capitalize on ChatGPT’s wild popularity. Because of its explosive growth, OpenAI throttled ChatGPT’s free tier and launched a $20/month paid plan for those wanting unlimited access. This gave threat actors the ammunition to develop convincing social engineering schemes that promised uninterrupted, free access to ChatGPT but really lured users into entering their credentials on malicious webpages or unknowingly installing malware. Security researchers also found more than 50 malicious Android apps on Google Play and elsewhere that spoof ChatGPT's icon and name but are designed for nefarious purposes.


Read also:

NIST Artificial Intelligence Risk Management Framework

ChatGPT data breach confirmed as security firm warns of vulnerability

Malwarebytes Labs: Stop! Are you putting sensitive company data into ChatGPT?


ChatGPT’s disinformation problem

While vulnerabilities, data breaches, and social engineering are valid concerns, what’s causing the most anxiety at Malwarebytes is ChatGPT’s ability to spread misinformation and disinformation on a massive scale. That which enamors the public most—ChatGPT’s ability to generate thoughtful, human-like responses—is the very same capability that could lull users into a false sense of security. Just because ChatGPT’s answers sound natural and intelligent doesn’t mean they are accurate. Incorrect information and associated biases are often incorporated into its responses.

OpenAI CEO Sam Altman himself expressed worries that ChatGPT and other LLMs have the potential to sow widespread discord through extensive disinformation campaigns. Altman said the latest version, GPT-4, is still susceptible to “hallucinating” incorrect facts and can be manipulated to produce deceptive or harmful content. “The model will boldly assert made-up things as if they were completely true,” he told ABC News.

In the age of clickbait journalism and social media, it can be challenging to distinguish fake content from authentic, or propaganda from legitimate fact. With ChatGPT, bad actors can use the AI to quickly write fake news stories that mimic the voice and tone of established journalists, celebrities, or even politicians. For example, Malwarebytes was able to get ChatGPT to write a story in the voice of Barack Obama about the earthquake in Turkey, which could easily be modified to spread disinformation or collect fraudulent payments through fake donation links.


Educational concerns

In education, mis- and disinformation are especially troubling byproducts of ChatGPT that have led some of the biggest school districts in the US to ban the program from K–12 classrooms. From its lack of cultural competency to its potential to undermine human teachers, academia is understandably apprehensive. For every student using ChatGPT to research debate prompts or develop study guides, there’s another abusing the platform to plagiarize essays or take exams.

The education industry might be willing (for now) to let teachers use ChatGPT for simple tasks like creating lesson plans and emailing parents, but the tool will likely remain off-limits for students, or at least highly regulated in public schools. Educators are aware that over-reliance on AI-powered tools and generated content could lead to a decrease in problem solving, creativity, and critical thinking—the very skills teachers and administrators aim to develop in students. Without them, it’ll be that much harder to recognize and avoid misinformation.


Final verdict


Suggesting that ChatGPT is low risk and unworthy of the security community’s attention is like putting your head in the sand and pretending AI doesn’t exist. ChatGPT is only the start of the generative AI revolution. Our industry should take its potential for disruption—and destruction—seriously and focus on developing safeguards to combat AI threats. Halting “dangerous” research on advanced models ignores the reality of rampant AI use today. Instead, it’s better to demand adherence to NIST’s trustworthiness criteria and to establish regulation around the development of AI through both government intervention and corporate security innovation.

Some artificial intelligence regulation is already in the works: the proposed Algorithmic Accountability Act of 2022 would require US businesses to assess critical AI algorithms and provide public disclosures for increased transparency. The legislation was endorsed by AI advocates and experts, and it sets the stage for future government oversight. With AI laws proposed in Canada and Europe as well, we’re one step closer to providing some important guardrails for AI. In fact, expect to see changes (aka limitations) implemented in ChatGPT in the near future in response to a country-wide ban by the Italian government.

Just as cybersecurity relies on commercial software to defend people and businesses, so too might generative AI models. New companies are already springing up that specialize in AI vulnerability detection, bot mitigation, and data input cleansing. One such company, Kasada Pty, has been tracking ChatGPT misuse and abuse. Another new tool from Robust Intelligence, modeled after VirusTotal, scans AI applications for security flaws and tests whether they’re as effective as advertised or if they have issues around bias. And Hugging Face, one of the most popular repositories of machine learning models, has been working with Microsoft’s threat intelligence team on an application that scans AI programs for cyberthreats.

As organizations look to integrate ChatGPT—whether to augment employee tasks, make workflows more efficient, or supplement cyberdefenses—it will be important to note the program’s risks alongside its benefits, and recognize that generative AI still requires an appreciable amount of oversight before large-scale adoption. Security leaders should consider AI-related vulnerabilities across their people, processes, and technology—especially those related to mis- and disinformation. By putting the right safeguards in place, generative AI tools can be used to support existing security infrastructures.

Awareness alone won’t solve the more nebulous threats associated with ChatGPT. To bring disparate security efforts together, the AI community will need to adopt a similar modus operandi to traditional software, which benefits from an entire ecosystem of government, academia, and enterprise that has developed over more than 20 years. That system is in its infancy for LLMs like ChatGPT today, but continued diligence—plus a learning model of its own—should weave cybersecurity and generative AI into a symbiotic relationship. The benefits of ChatGPT are many, and there’s no doubt that generative AI tools have the potential to transform humanity. In what way remains to be seen.




Want to learn more about how we can help protect your business? Get a free trial of Malwarebytes Endpoint Detection and Response below.

TRY NOW
