
AI Vulnerability Database

Data Infrastructure and Analytics

An open-source, extensible knowledge base of AI failures.

About us

The AI Vulnerability Database (AVID) is an open-source knowledge base of failure modes for large-scale AI/ML models and datasets. AVID has two parts: a taxonomy and a database. These efforts complement each other, enabling data scientists and ML engineers to proactively and efficiently screen their ML systems for potential harms. AVID is the flagship project of the AI Risk and Vulnerability Alliance, a US 501(c)(3) nonprofit.
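
As a purely illustrative sketch (not AVID's official API or schema), the snippet below shows how a team might load locally downloaded AVID-style vulnerability reports and filter them by taxonomy risk domain before screening a model. The directory name and the JSON field names ("taxonomy", "risk_domain", "description") are assumptions made for illustration, not confirmed details of the database.

# Hypothetical sketch: filter locally downloaded AVID-style reports by risk domain.
# All file paths and JSON field names below are assumptions for illustration only.
import json
from pathlib import Path

def load_reports(report_dir):
    # Load every JSON report found in a local directory of AVID-style reports.
    return [json.loads(p.read_text()) for p in Path(report_dir).glob("*.json")]

def reports_for_domain(reports, domain):
    # Return reports whose (assumed) taxonomy block lists the given risk domain.
    matches = []
    for report in reports:
        taxonomy = report.get("taxonomy", {})       # assumed field name
        domains = taxonomy.get("risk_domain", [])   # e.g. ["Security", "Ethics"]
        if domain in domains:
            matches.append(report)
    return matches

if __name__ == "__main__":
    reports = load_reports("avid_reports")          # hypothetical local folder
    for r in reports_for_domain(reports, "Ethics"):
        print(r.get("description", "(no description)"))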

Website
https://avidml.org
Industry
Data Infrastructure and Analytics
Company size
2-10 employees
Type
Nonprofit
Founded
2022


Updates

  • View AI Vulnerability Database's organization page

    1,225 followers

    AVID: 2023 in review

    As AI became a household topic in 2023, the need for rigorous approaches to managing the risks of this emergent technology became obvious to the general public, companies, and governments around the world. AVID provided an outlet for this groundswell of interest. Over the rest of the year, we launched a slew of public events to channel this interest into productive outlets, partnered with like-minded organizations in community efforts, and had a number of releases seeding technical resources that would be indispensable to AI risk practitioners for decades to come.

    As a purely grassroots organization, we scaled unfathomable peaks during 2023. We partnered with AI Village to organize the biggest-ever AI red-teaming event, organized a number of community events, won grants, seeded developer tooling, and published research on AI safety and risk management. We couldn't have done it without our community and network of collaborators spanning the globe. To all our friends, thank you for your support!

    Read more here: https://lnkd.in/grasRD8C

    We wish everyone Happy Holidays. See you all in 2024!

  • Check out this new podcast from our very own Jekaterina Novikova, PhD, to learn more about women making waves in AI research. Knowing her, it promises to be both deeply informative and challenging.

    View Jekaterina Novikova, PhD's profile

    AI Researcher | 10+ Years Advancing State-of-the-Art | High-Impact Publications | Research Collaborations | Research Leader & Mentor | Expert Advisor | Multidisciplinary Research | Novel Applications

    Following up on my last post, I am glad to finally share the big announcement! I am proud to announce that Malikeh Ehghaghi and I are launching a new podcast: Women in AI Research. This podcast is dedicated to highlighting the contributions, challenges, and achievements of women shaping the future of AI. We look forward to bringing you insightful conversations and inspiring stories from leading women in the field. Follow WiAIR Women in AI Research to stay updated on our journey! #WomenInAI #WiAIR

    • Logo of the Women in AI Research podcast
  • We're excited to see the outputs of this collaboration between Data & Society Research Institute and ARVA taking shape. We hope you can join this event to hear from the team, including our board member Borhane Blili-Hamelin, PhD!

    View Data & Society Research Institute's organization page

    21,231 followers

    On Thursday, February 20 at 1 p.m. ET, join Lama Ahmad, Camille François, Tarleton Gillespie, Briana Vecchione, PhD, and Borhane Blili-Hamelin, PhD as they examine red-teaming's place in the evolving landscape of genAI evaluation and governance. The discussion draws on a forthcoming report by Data & Society and AI Risk and Vulnerability Alliance (ARVA) that investigates how red-teaming methods are being adapted to confront uncertainty about flaws in systems and to encourage public engagement with evaluation and oversight. Look out for that report next week, and RSVP for this online event! https://lnkd.in/e24PjCd9

    • Text on a blue background with the details of the Red-Teaming Generative AI Harm event on February 20.
  • Our Science Lead Jekaterina Novikova, PhD will be part of this panel discussing efficient LLMs tomorrow. Be there!

    View Arcee AI's organization page

    8,260 followers

    What does efficiency mean in the context of Large Language Models (#LLMs)? How do you approach the trade-off between model size, accuracy, and efficiency? Can smaller models really compete with larger ones in terms of performance? Join the hosts of The Small Language Model (SLM) Show, Arcee AI's Julien SIMON & Malikeh Ehghaghi, for a discussion of the latest approaches to language model efficiency – with insights from two experts with deep knowledge of the topic: Jekaterina Novikova, PhD, who's the Science Lead at the AI Vulnerability Database, and Milica Cvetkovic, who works in Artificial Intelligence Services at Google. The show will livestream right here on LinkedIn this Wednesday at 11am PT / 2pm ET, and you can also watch it on X... Drop your questions below or join live to chat directly with the hosts and guests!

    The SLM Show: How to Make LLMs Efficient?


  • Join us tomorrow!

    View AI Vulnerability Database's organization page

    1,225 followers

    Interested in the problem of expanding participation in AI evaluation? What other innovative approaches can empower the public to hold AI accountable? How can everyday users contribute to responsible AI development? Join us for this conversation!


  • AI Vulnerability Database reposted this

    View Carol Anderson's profile

    Machine Learning | AI Ethics

    I'm looking forward to participating in this event tomorrow, where we'll talk about end-user audits of AI systems. Join us! #aiethics #responsibleai

    View AI Vulnerability Database's organization page

    1,225 followers

    Interested in the problem of expanding participation in AI evaluation? What other innovative approaches can empower the public to hold AI accountable? How can everyday users contribute to responsible AI development? Join us for this conversation!


  • The power to uncover harmful flaws in AI shouldn't rest solely with experts. But how can everyday users contribute to responsible AI development? Join Michelle Lam, Christina Pan, Carol Anderson, Nathan Butters, and Borhane Blili-Hamelin, PhD to explore IndieLabel, an innovative tool that puts algorithmic audits in the hands of everyday users. Come discuss the power of public participation in shaping the future of AI.

    Speakers: Michelle S. Lam, Christina A. Pan, Carol Anderson, Nathan Butters, and Borhane Blili-Hamelin
    Date & Time: Wednesday, July 24, noon to 1 p.m. ET

    View AI Vulnerability Database's organization page

    1,225 followers

    Interested in the problem of expanding participation in AI evaluation? What other innovative approaches can empower the public to hold AI accountable? How can everyday users contribute to responsible AI development? Join us for this conversation!


  • Interested in the problem of expanding participation in AI evaluation? What other innovative approaches can empower the public to hold AI accountable? How can everyday users contribute to responsible AI development? Join us for this conversation!


  • AI Vulnerability Database reposted this

    Join Jekaterina Novikova, PhD, Science Lead at AI Vulnerability Database! Explore the "Dual Nature of Consistency in Foundation Models: Challenges and Opportunities" in her upcoming talk.

    What You'll Learn:
    - Understanding Consistency: Gain insights into why consistency is crucial for trustworthy LLMs and foundation models.
    - Measuring Consistency: Learn how to evaluate consistency in these models effectively.
    - Mitigation Practices: Discover strategies to address the negative consequences of inconsistencies.
    - Leveraging Inconsistencies: Find out how observed inconsistencies can be turned to your advantage with practical examples.

    Why Attend?
    - In-Depth Knowledge: Benefit from Jekaterina's expertise on the complex issue of consistency in AI models.
    - Practical Strategies: Learn actionable mitigation practices to enhance model reliability.
    - Innovative Examples: Explore innovative ways to leverage inconsistencies for positive outcomes.

    About the Speaker: Jekaterina Novikova is the Science Lead at the AI Risk and Vulnerability Alliance. She specializes in understanding and addressing the risks and vulnerabilities associated with AI models, particularly focusing on consistency in foundation models.

    Don't miss this opportunity! Secure your spot now: https://lnkd.in/g6EEsEc

    Join Jekaterina Novikova for a comprehensive exploration of consistency in foundation models!

  • AI Vulnerability Database reposted this

    View Freyam M.'s profile

    AI/ML Security Engineer at Realm Labs | IIIT Hyderabad

    Thrilled to announce that our paper, "Closing the Loop: Embedding Observability in the GenAI Product Lifecycle for Systematic Bias Mitigation," has been accepted under the "Harms: Privacy" theme at Generative AI and HCI (GenAICHI) at ACM #CHI2024 in Honolulu, Hawaii!

    Under Nimmi Rangaswamy's guidance, I explored the transformative potential of the Observability and Governance Layer within AI systems. This layer acts like a high-powered microscope, zooming in on AI operations, ensuring adherence to strict security protocols, and providing a feedback loop that sends critical insights back to the core of AI development. This continuous cycle of feedback and improvement enhances the overall integrity and fairness of AI technologies.

    Paper: https://lnkd.in/dRkw6mJH

    A cornerstone of our approach is the BiasAware initiative, inspired by my collaboration with the AI Vulnerability Database last year. This highlights the strength of open and trusted platforms in enhancing our methodologies and broadening the impact of responsible AI practices.

    It's important to recognize that AI is an evolving field. As we continue to refine our understanding and implementation of observability, our work with the GenAI product lifecycle aims to lead the way in creating AI systems that are not only intelligent but also responsible and equitable.

    If you're at CHI, come join me in Room 323A at GenAICHI to dive deeper into these discussions! #GenAICHI #CHI2024 #EthicalAI #Observability #AIResearch

