Safe & Responsible AI
Praveen Anantharaman
CRM + AI + Data + Trust | Product & Program Leader | Digital Transformation Pioneer | Cybersecurity & SaaS Expertise | Strategy Design | Agile Delivery | Ex-IBM, DXC, HireRight | 4x Cloud, 4x Salesforce, SAS ML
Did you know cyberattacks are reaching record highs, with over 750 million attempts daily? As AI becomes more powerful, the risks of cyber threats and social engineering attacks grow with it.
More than 80% of penetrations and hacks start with a social engineering attack, as do more than 70% of nation-state attacks [FBI 2011; Verizon 2014]. It is also empirically evident that humans are a fundamental weakness of cyber systems.
With the advent of frontier AI and its rapid adoption, there is a need to explore safe and responsible AI: its risks and challenges.
AI poses a broad spectrum of risks: bias, harm from AI system malfunction or unsuitable deployment and use, and loss of control.
It is important to mitigate these risks while fostering innovation; we explore the key challenges below.
Challenge 1: Ensuring Trustworthiness of AI & AI Alignment
Trustworthiness spans privacy, robustness, and other AI alignment challenges: hallucination, fairness, toxicity, stereotyping, machine ethics, and jailbreaks. The alignment goals are helpfulness, harmlessness, and honesty.
Challenge 2: Mitigating misuse of AI (we shall take a deeper dive into this space)
Will Frontier AI Benefit Attackers or Defenders More?
Let's look at the impact of frontier AI on the attack-defense balance.
By adopting a multi-layered approach that combines reactive, proactive, and secure-by-design principles, organizations can better protect their AI systems and mitigate risks.
Misused AI can make attacks more effective: deep learning can empower both vulnerability discovery and phishing attacks. Overall, integrating AI into cyberattacks can significantly enhance their effectiveness and make them harder to detect and prevent. Current AI capabilities strengthen attackers across the kill chain, which highlights the importance of developing robust cybersecurity measures to counter these threats.
The good news is AI has the potential to significantly enhance cybersecurity defenses!
AI can also be used by attackers to launch more sophisticated and effective attacks. Moreover, an asymmetry exists between attack and defense: the cost of failure is high for defenders but low for attackers, who can exploit delays in patch deployment and mount repeated, probabilistic attacks. Attackers need to be right only once; defenders have to get it right every time.
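The attacker's advantage from repeated, probabilistic attempts can be sketched numerically. The per-attempt success rate below is an illustrative assumption, not a figure from this article:

```python
# Illustrative: even a tiny per-attempt success rate compounds quickly
# when an attacker can retry, while a defender must block every attempt.
p = 0.01  # assumed probability that a single attempt succeeds

for n in (10, 100, 1000):
    # P(at least one success in n independent attempts)
    at_least_once = 1 - (1 - p) ** n
    print(f"{n:>4} attempts -> P(at least one success) = {at_least_once:.2%}")
```

With a 1% per-attempt success rate, an attacker who tries 100 times succeeds with roughly 63% probability, which is why "right only once" so strongly favors the attacker.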
However, by automating tasks, improving detection capabilities, and accelerating response times, defenders can make up for these gaps. It is crucial to stay ahead of the curve by investing in AI research and development so that defenses keep pace with evolving threats. Current AI capabilities mainly enhance the early stages (proactive testing, attack detection, and triage/forensics) and have little impact on the remediation development and deployment stages.
We can employ behavioral monitoring for anomaly detection alongside context-based measures. Such AI analytics deliver real-time insights that help defenders make informed decisions and take countermeasures, enhancing defensive capability.
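As a minimal sketch of what behavioral anomaly detection can look like, the snippet below flags deviations from a per-user activity baseline using a simple z-score rule. The function name, threshold, and login-count data are illustrative assumptions, not this article's actual tooling:

```python
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """Flag `current` if it deviates more than `threshold` standard
    deviations from the historical baseline (simple z-score rule)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Hourly login counts for one user; a sudden spike may indicate
# credential abuse or automated attack activity.
baseline = [4, 5, 6, 5, 4, 6, 5, 5]
print(is_anomalous(baseline, 5))    # typical activity, not flagged
print(is_anomalous(baseline, 60))   # spike flagged for investigation
```

Real deployments layer richer context (geolocation, device fingerprint, time of day) on top of such statistical baselines, but the core idea, scoring behavior against an expected profile, is the same.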
Several key areas demand our focus if we are to mitigate risks while fostering innovation in AI.
References:
Sincere thanks to Prof. Dawn Song and the staff team at UC Berkeley for the 12-week Large Language Model Agents MOOC, Fall 2024.
Towards Building Safe & Trustworthy AI Agents and A Path for Science- and Evidence-based AI Policy, Dawn Song, UC Berkeley.
Qinbin Li et al., VLDB 2024, Best Paper Award Finalist.
RedCode: Risky Code Execution and Generation Benchmark for Code Agents, Guo et al., NeurIPS 2024.
https://www.wsj.com/articles/the-ai-effect-amazon-sees-nearly-1-billion-cyber-threats-a-day-15434edd