Contributing to the OWASP Top 10 for LLM
(Cover image: prompt by Steve Wilson, rendered with GPT-4/DALL-E)

The OWASP Top 10 for Large Language Model (LLM) Security project is a community-driven effort to identify and tackle the biggest security challenges in LLM applications. People who want to contribute often ask me how to get started. Your input is invaluable whether you're a seasoned pro or just getting started. Here's how you can get involved:

Check Out the Roadmap

To get a sense of where the project is headed, look at our roadmap. It outlines all the key milestones and goals for the upcoming version.

Knowing what’s coming up helps you see where you can make the most impact. Whether it’s giving feedback on proposed changes or suggesting new ideas, your contributions are crucial.

Participate in Surveys

One of the easiest ways to contribute is to respond to our surveys. We want to hear from you! Your insights help shape the project and ensure that we’re focusing on the most critical issues.

Recently, we ran a survey that re-ranked the existing Top 10 vulnerabilities and then asked respondents which areas needed further investigation.

(Chart: survey re-ranking of the Top 10 for LLM vulnerabilities)

Here are some of the areas identified by the survey for future research.

  • Hallucinations and Bias Injection: Highlight the need to mitigate hallucinations and bias that cause incorrect or unethical LLM outputs.
  • Prompt Leakage and Jailbreaking: Address vulnerabilities such as system prompt leakage that lets attackers bypass controls, and multi-turn jailbreak attacks.
  • Denial of Wallet and Resource Abuse: Point out issues from misuse or overuse of LLM resources that lead to unnecessary costs.
  • Third-party Data Exposure and Model Inversion Attacks: Focus on risks of exposing sensitive data via third-party integrations and model inversion.
  • Insecure Agent and Proxy Style Attacks: Discuss the risks of insecure agents and potential MITM-style attacks within complex app flows.
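To make one of these areas concrete: "Denial of Wallet" attacks exploit per-token billing on metered LLM APIs. Below is a minimal sketch of one common mitigation, a per-user token budget enforced over a rolling time window. All class and variable names here are hypothetical illustrations, not part of any OWASP tooling or specific vendor API.

```python
import time
from collections import defaultdict


class TokenBudget:
    """Track per-user token spend in a rolling window to limit
    'denial of wallet' abuse of a metered LLM API (illustrative only)."""

    def __init__(self, max_tokens: int, window_seconds: int = 3600):
        self.max_tokens = max_tokens
        self.window = window_seconds
        self.spend = defaultdict(list)  # user_id -> [(timestamp, tokens)]

    def allow(self, user_id: str, requested_tokens: int) -> bool:
        now = time.time()
        # Drop entries that have aged out of the window.
        self.spend[user_id] = [
            (t, n) for (t, n) in self.spend[user_id] if now - t < self.window
        ]
        used = sum(n for _, n in self.spend[user_id])
        if used + requested_tokens > self.max_tokens:
            return False  # reject or queue instead of paying for the call
        self.spend[user_id].append((now, requested_tokens))
        return True


budget = TokenBudget(max_tokens=10_000)
print(budget.allow("alice", 4_000))  # True: within budget
print(budget.allow("alice", 7_000))  # False: would exceed 10k in the window
```

The same gate can sit in front of retries and agent loops, which are often where runaway spend actually happens.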

Suggest New Vulnerabilities

Have you spotted a new type of attack or weakness in LLM applications? We want to hear about it! Proposing new vulnerabilities is a fantastic way to contribute. Your real-world examples and suggestions for mitigating these issues help keep the OWASP Top 10 relevant and up-to-date.

Submitting a new vulnerability is easy: just check out the instructions here. You can propose something new or rewrite an existing vulnerability with a fresh spin; it's up to you. This is a generative phase, so get your ideas out now and we'll prune and combine them later.

We've already had a ton of submissions, so get yours in now!

  • Adversarial Use of AI for Red Teaming and Cyber Operations: Using AI to simulate attacks and improve defensive strategies can be exploited to create sophisticated cyber threats.
  • Adversarial Inputs: Maliciously crafted inputs designed to exploit vulnerabilities in LLMs, leading to incorrect or harmful outputs.
  • Improper Error Handling: Poor error handling practices in LLM systems can expose sensitive information and facilitate further attacks.
  • Insecure Design: Design flaws in LLM systems can introduce vulnerabilities that compromise security and functionality.
  • Model Inversion: Attackers can reconstruct input data by exploiting the outputs of LLM models, potentially revealing sensitive information.
  • Unrestricted Resource Consumption: Without proper limits, LLM systems can be overwhelmed by excessive resource usage, causing service disruptions.
  • Agent Autonomy Escalation: Granting excessive autonomy to AI agents can lead to unintended and potentially harmful actions.
  • Malicious LLM Tuner: Malicious actors can tamper with LLM tuning processes to introduce biased or harmful behaviors.
  • Deepfake Threat: Advanced deepfake techniques powered by LLMs can create convincing but false digital content, posing significant security risks.
  • Unauthorized Access and Entitlement Violations: Improper access controls can lead to unauthorized usage of LLMs, exposing sensitive data and violating entitlements.
  • Alignment & Value Mismatch: Discrepancies between the values of LLMs and their users can lead to unintended and potentially harmful outcomes.
  • RAG & Finetuning: Improper fine-tuning and retrieval-augmented generation can introduce vulnerabilities and degrade model performance.
  • Unwanted AI Actions by General Purpose LLMs: General-purpose LLMs may perform unintended actions that can cause harm or disruption.
  • Dangerous Hallucinations: LLMs can generate highly plausible but incorrect information, leading to dangerous consequences if trusted without verification.
  • Resource Exhaustion: Intensive resource demands from LLMs can lead to system slowdowns and outages, affecting overall performance and availability.
  • Privacy Violation: LLMs can inadvertently expose private or sensitive user data, leading to privacy breaches.
  • Voice Model Misuse: Misuse of voice models can lead to impersonation and unauthorized actions, posing significant security risks.
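As an illustration of one submission above, "Improper Error Handling" covers cases where raw stack traces or provider error payloads leak internals (keys, hostnames, infrastructure details) to end users. A minimal sketch of sanitizing errors at the application boundary follows; the function names and the simulated failure message are hypothetical, standing in for a real provider call.

```python
import logging
import uuid

logger = logging.getLogger("llm_app")


def call_llm_unsafe(prompt: str) -> str:
    # Stand-in for a real provider call; it always fails here to
    # demonstrate the handling path. The leaked-looking details are fake.
    raise RuntimeError("Auth failed for key sk-live-... at api.internal:8443")


def call_llm(prompt: str) -> str:
    """Wrap the provider call so internal details are logged server-side
    but never returned to the user."""
    try:
        return call_llm_unsafe(prompt)
    except Exception:
        incident = uuid.uuid4().hex[:8]  # correlation id for ops/support
        logger.exception("LLM call failed (incident %s)", incident)
        return f"Sorry, something went wrong (incident {incident})."


print(call_llm("hello"))
```

The correlation id lets support staff find the full traceback in server logs without ever exposing it to the caller.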

Get Involved!

  • Join Our Mailing List Get the latest updates, news, and announcements straight to your inbox. Join our mailing list here.
  • Follow Us on Social Media Stay in the loop by following us on LinkedIn and Twitter/X. We share updates, insights, and opportunities to get involved.
  • Sign Up for Slack Join our Slack community to engage in real-time discussions, ask questions, and collaborate with other contributors. Sign up here. Once you're in, join the #project-top10-for-llm channel.

Full details here: https://github.com/OWASP/www-project-top-10-for-large-language-model-applications/wiki/Contributing

Contribute to Additional Projects

In addition to the core OWASP Top 10 list, there are several other exciting projects you can get involved with. These projects offer more ways to contribute and make a difference in the field of LLM security.

CISO Checklist Project

The CISO Checklist Project aims to create a comprehensive checklist for evaluating the security of LLM applications. This checklist will serve as a valuable resource for Chief Information Security Officers (CISOs), helping them ensure their applications meet the highest security standards.

Data Gathering Project

The Data Gathering Project focuses on collecting real-world data on LLM security vulnerabilities. This data is crucial for understanding the prevalence and impact of different security issues, and it helps inform the OWASP Top 10 list and other initiatives.

By participating in these additional projects, you can help expand our understanding of LLM security and develop practical tools and resources for the community. Every bit of effort counts, and together we can make a significant impact.

