Enhancing Cybersecurity Resilience: Revolutionizing Security Operations with AI
PC: Google

Enhancing Cybersecurity Resilience: Revolutionizing Security Operations with AI

Starting as a cloud security engineer, I was excited about using AI to make things easier. I thought of AI as a way to automate tasks and make processes in the digital world more efficient. I liked the idea of algorithms going through data, finding patterns, clusters, and connections without getting tired. This would save me from doing boring tasks, letting me concentrate on important security strategies.

Little did I anticipate that AI, my trusted ally in fortifying the cloud against threats, would evolve into an unexpected adversary. It dawned on me that threat actors had embraced the very technology designed to safeguard against them. AI, once a beacon of security, had now become the clandestine companion of those seeking to exploit vulnerabilities.

GenAI, especially, comes with a lot of risks. A concerning example of its potential harm is seen in tools like WormGPT, which is like ChatGPT but geared toward doing illegal things. Fortunately, AI is being used in cybersecurity to protect against these threats. The AI in cybersecurity market is expected to reach $60.6 billion by 2028. This shows that human security teams might find it hard to deal with and fix big cyberattacks driven by AI without using AI tools themselves.

AI in cybersecurity will remain pivotal in countering security threats powered by AI. This is crucial because malicious actors may use LLM prompts to manipulate GenAI models and extract sensitive information. Cloud Service Providers (CSPs) are expected to wholeheartedly adopt the AI revolution soon, indicating that substantial decisions related to infrastructure and development will be assisted by AI chatbots. The utilization of chatbots as potential weapons, such as WormGPT or FraudGPT, implies that businesses will face numerous unforeseeable challenges in AI-related cybersecurity.

AI serves as the driving force behind contemporary development processes, workload automation, and big data analytics. Within enterprise cybersecurity, AI security emerges as a crucial element, dedicated to safeguarding AI infrastructure against cyberattacks.

The security of AI is a critical aspect of enterprise cybersecurity, with a primary focus on safeguarding AI infrastructure against cyberattacks. Prioritizing AI security is essential as various AI technologies are deeply integrated into the framework of organizations.

AI vulnerabilities are a common vector for data breaches, and software development lifecycles (SDLCs) that incorporate AI are increasingly susceptible to vulnerabilities.?

As stated by Etay Maor, Senior Director of Security Strategy at Cato Networks,

Generative AI can also be exploited by criminals. For instance, they can employ it to compose phishing emails. Previously, one of the primary methods for detecting phishing emails involved identifying spelling mistakes and poor grammar. These served as indicators that something might be suspicious. However, with the advent of ChatGPT, criminals can effortlessly craft phishing emails in multiple languages with impeccable grammar.

AI security risks

The best way to tackle AI security is to thoroughly understand the risks. Let’s take a look at the biggest AI security risks.?

Increased attack surface

When we bring AI, like GenAI, into the Software Development Life Cycles (SDLCs), it really changes how a company's IT system works and brings in a lot of new risks that we might not know about. It's like making the area where cyber-attacks can happen even larger. The main security challenge with AI is making sure that the security teams are in control of all the AI systems. If we can see and understand everything about AI systems, it helps fix any problems, lowers the risks, and makes the area where cyber-attacks can happen smaller.

Higher likelihood of data breaches and leaks

The risks of a broader attack surface include downtime, disruption, profit losses, reputational damage, and other major long-term consequences. According to The Independent, 43 million sensitive records were compromised in just August 2023 alone. Suboptimal AI security can compromise your crown jewels and add you to the list of data breach victims.?

Vulnerable development pipelines

Security Risk in the AI Pipeline (PC: Wiz)

AI pipelines tend to broaden the vulnerability spectrum. For instance, the realm of data science, encompassing data and model engineering, often operates beyond traditional application development boundaries, leading to novel security risks.

The process of gathering, processing, and storing data is fundamental in the domain of machine learning engineering. Integrating with model engineering tasks demands robust security protocols to protect data from breaches, intellectual property theft, supply chain attacks, and data manipulation or poisoning. Ensuring data integrity is pivotal in reducing both deliberate and accidental data discrepancies.

Data poisoning

Data poisoning refers to the manipulation of GenAI models by introducing malicious datasets to steer outcomes and introduce biases. An illustration of this is the Trojan Puzzle, an attack crafted by researchers, demonstrating how threat actors could potentially manipulate and contaminate datasets that a GenAI model learns from, orchestrating malicious payloads.

Direct prompt injections

Direct prompt injections are a type of attack where threat actors deliberately design LLM prompts intending to compromise or exfiltrate sensitive data. There are numerous risks associated with direct prompt injection, including malicious code execution and the exposure of sensitive data.

Indirect prompt injections

An indirect prompt injection is when a threat actor shepherds a GenAI model toward an untrusted data source to influence and manipulate its actions. This external, untrusted source can be custom-designed by threat actors to deliberately induce certain actions and influence payloads. Repercussions of indirect prompt injections include malicious code execution, data leaks, and provisioning end users with misinformation and malicious information.

Hallucination abuse

AI has consistently tended to generate inaccurate information, prompting innovators globally to strive to minimize this phenomenon. However, until this is achieved, AI hallucinations remain a substantial cybersecurity concern. Threat actors are now taking steps to record and "legitimize" potential AI hallucinations, resulting in end users receiving information influenced by malicious and illegitimate datasets.

Chatbot credential theft

Stolen ChatGPT and other chatbot credentials are the new hot commodity in illegal marketplaces on the dark web. More than 100,000 ChatGPT accounts were compromised between 2022 and 2023, highlighting a dangerous AI security risk that's likely to increase.

Recently, I was reading about Google's flagship large language model (LLM) and GPT-4 competitor, Gemini.

Duet AI is extensively integrated into Google Cloud Platform's security operations products, particularly in Chronicle Security Operations and Security Command Center. By incorporating Duet AI into Security Operations, Google Cloud emerges as the pioneer among major cloud providers, making generative AI accessible to defenders through a unified SecOps platform.

This integration includes leading security intelligence, such as Mandiant's frontline information on vulnerabilities, malware, and threat indicators, empowering defenders to enhance organizational protection. Security teams can now enhance their capabilities and increase efficiency by leveraging generative AI to expedite threat detection, investigation, and response. Do check the video link [6]

With Duet AI in Chronicle Security Operations, users can:

  • Search vast amounts of data in seconds with custom queries generated from natural language.
  • Reduce time-consuming manual reviews and quickly surface critical context by leveraging automatic summaries of case data and alerts.
  • Improve response time using recommendations for next steps to support incident remediation.

More and more companies want to adopt or some have already adopted the latest cloud-based artificial intelligence (AI) and machine learning (ML) technologies, but they are subject to an increasing array of data privacy regulations. This is an important concern for customers, who are interested in using AI and ML systems to drive better business outcomes while complying with new data privacy laws.

Google was one of the first in the industry to publish an AI/ML Privacy Commitment that reflects the belief that customers should have both the highest level of security and the highest level of control over data stored in the cloud. Don't forget to check Google's perspective on Responsible AI

The Artificial Intelligence Risk Management Framework (AI RMF) by NIST

When engaging with AI and AI-based solutions, it's important to understand AI's limitations, risks, and vulnerabilities. The Artificial Intelligence Risk Management Framework (AI RMF) by NIST is a set of guidelines and best practices designed to help organizations identify, assess, and manage the risks associated with the deployment and use of artificial intelligence technologies.

The framework consists of six elements:

  1. Valid and Reliable - AI can provide the wrong information, which is also known in GenAI as "hallucinations". It's important that companies can validate the AI they're adopting is accurate and reliable.
  2. Safe - Ensuring that the prompted information isn't shared with other users, like in the infamous Samsung case.
  3. Secure and Resilient - Attackers are using AI for cyber attacks. Organizations should ensure the AI system is protected and safe from attacks and can successfully thwart attempts to exploit it or use it to assist with attacks.
  4. Accountable and Transparent - It's important to be able to explain the AI supply chain and to ensure there is an open conversation about how it works. AI is not magic.
  5. Privacy-enhanced - Ensuring the prompted information is protected and anonymized in the data lake and when used.
  6. Fair - This is one of the most important elements. It means managing harmful bias. For example, there is often bias in AI facial recognition, with light-skinned males being more accurately identified compared to women and darker skin colors. When using AI for law enforcement, for example, this could have severe implications.

Additional resources for managing AI risk include the MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems), OWASP Top 10 for ML, Wiz PEACH Framework, and Google's Secure AI Framework (SAIF).

Integrate AI safely with cybersecurity in your organization

To ensure the safe integration of AI with cybersecurity, experts in cybersecurity, along with industry professionals and policymakers, must oversee these models. Their role includes strengthening the models against attacks, responding to new threats (including those powered by AI), dealing with bias and ethical concerns, and upholding privacy standards.

Integrating AI safely with cybersecurity means understanding its benefits and addressing the challenges it brings. However, it's important to know that AI models aren't flawless. It also involves training cybersecurity experts to effectively use AI-driven security solutions and understand the risks associated with AI-powered cyberattacks.

Must Read:

[1] LLM AI Security & Governance Checklist

[2] Not what you’ve signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection

[3] Wiz AI Security

[4] Will AI Replace Cybersecurity?

[5] AI-powered security operations platform

[6] Detect, investigate, and respond to threats with Duet AI in Security Operations

[7] Duet AI in Google Cloud

[8] Google's AI Principles

[9] Awesome LLM Security



I appreciate you reading The Security Chef.

Thanks for reading The Security Chef! Subscribe for free to receive new posts and support my work.

要查看或添加评论,请登录

Swapnil Pawar的更多文章