AI Security Insider — June 2024


This month, we are eager to share two foundational resources that will help teams better understand and contend with the complex challenges of AI security: our AI Security and Safety Taxonomy and our AI Security Reference Architectures.

AI applications introduce safety and security risks that differ fundamentally from traditional cybersecurity challenges. Our taxonomy classifies these risks with supporting examples, suggested mitigations, standards mappings, and more. Our vendor-agnostic reference architectures serve as templates that teams can use to address these risks and design more secure AI applications.

We’re excited to hear how these resources help you in your own AI journey. Of course, there’s more to see below, including our monthly threat roundup and a conversation with the CTO of IBM Security.


Blogs & Press

Discover the most common threats to AI safety and security along with mitigations, mappings, and more in our new taxonomy. See the taxonomy.

Learn to design secure RAG applications, AI chatbots, and AI agents with our all-new AI security reference architectures. Read the architectures.

Our AI security research team shares threats from June, including a new pickle file exploit and method for training data extraction. Read the blog.

Robust Intelligence and MLflow integrate to make safety and security validation a seamless, automated part of model development. Read the blog.


Featured Events

Our AI Security Fireside Chat series continues with Srinivas Tummalapenta, Distinguished Engineer & CTO at IBM Security. Watch our conversation.

We spoke with AI security researcher Kai Greshake about indirect prompt injections being used to modify or steal sensitive data. Watch our conversation.


AI Security Spotlight

"One of our IBM Institute for Business Value (IBV) reports said that only 24% of CEOs factored in security for AI. That means we have 76% who have not even considered any security… but about 80% plus CEOs said that they’re using some sort of AI."

Srinivas Tummalapenta, Distinguished Engineer & CTO at IBM Security


AI Policy Roundup

The AI regulatory landscape is always evolving. Here are some recent developments.

  • NIST announced Assessing Risks and Impacts of AI (ARIA), a new evaluation program for safe and trustworthy AI. According to ARIA program lead Reva Schwartz, this is the “evaluation program that brings people directly into the equation, and how they use, adapt to or are impacted by AI technology.” It covers three levels of evaluation: model testing, red teaming, and field testing.
  • A new bipartisan bill (S.4495) was introduced in Congress, which would “require government contracts for AI capabilities to include safety and security terms for data ownership, civil rights, civil liberties and privacy, adverse incident reporting and other key areas.”
  • Singapore released its Model AI Governance Framework for Generative AI, a voluntary framework organizations can adopt when deploying AI systems to meet best practices for AI risk management.
  • The EU AI Office has officially opened, comprising a team of 140 technology specialists, lawyers, and policy experts. It recently held its first webinar on the risk management logic of the EU AI Act, clarifying requirements and explaining how to navigate overlapping international standards.

To learn more about these and other policy developments, check out our Policy Roundup blogs.


About Robust Intelligence

Robust Intelligence protects enterprises from AI security and safety vulnerabilities using an automated approach to assess and mitigate threats. Our end-to-end solution gives AI and security leaders the confidence to build and deploy AI-powered applications that meet rigorous standards and successfully secure their AI transformation.

Interested in learning more? Schedule a demo here.
