Pragmatist Guide to AI Risks

Hey folks, I wanted to provide some light reading before and during the holiday break. For this article, I really felt like punishing myself, so we're diving into the scary and exciting waters of AI risks, which somehow manage to be simultaneously mundane yet profoundly alarming.

This article is an abridged version of a longer piece that you can read (for free) on my Substack.



1. The Pendulum of Public Discourse

The discourse around AI swings dramatically, from lauding its potential to revolutionize our lives through innovations like self-driving cars and healthcare improvements, to fears of job loss, rogue algorithms, and loss of human control. Lately, I've started to picture this dynamic as a powerful coal train chugging up a steep hill.

The train, loaded with the promise of a brighter future, reflects the immense potential of AI to revolutionize our world. Yet as it powers forward, it inevitably spews a long, dark cloud behind it: smog filled with fears, doubts, and uncertainties about what this new technology might bring. Public discourse tends to fixate on either the gleaming engine or the smog, and that black-and-white thinking stifles balanced discussion and pragmatic preparation for managing AI's real-world impacts.


2. Understanding Possibility vs. Probability

The terms 'possibility' and 'probability' often get tossed around like interchangeable synonyms, but they couldn't be more distinct. Let's clear the air.

  • Possibility is a straightforward concept – a binary, an on-off switch. If something can happen, no matter how outlandish or improbable, it falls into the realm of possibility. Superintelligent AI overthrowing humanity? Possible. AI curing all diseases? Also possible.

  • Probability doesn't deal in yes or no; it dwells in the 'how likely.' It's a nuanced spectrum, a sliding scale that weighs the likelihood of an event occurring. Probability forces us to move past whether something can happen and grapple with the more pertinent question of whether it's likely to happen – and under what conditions (see the sketch below).
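To make the distinction concrete, here's a minimal sketch in Python (my choice for illustration; the events and the numbers are invented for this example, not real estimates):

    # Possibility is binary: an event either can or cannot happen.
    # Probability is a scalar in [0, 1]: how likely the event is.
    events = {
        # event:                        (possible?, rough probability)
        "Chatbot gives harmful advice": (True, 0.30),    # happens regularly without guardrails
        "Model leaks training data":    (True, 0.15),    # documented in research settings
        "Superintelligent AI takeover": (True, 0.0001),  # possible in principle, remote near-term
    }

    for name, (possible, prob) in events.items():
        verdict = "possible" if possible else "impossible"
        print(f"{name}: {verdict}, estimated likelihood ~{prob:.2%}")

All three events sit on the same side of the possibility switch, yet they are orders of magnitude apart on the probability scale – and that gap is what the rest of this article cares about.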


3. Navigating AI Risks with a Probability Mindset

Adopting a probability-focused mindset in AI risk discussions encourages realistic assessments of potential outcomes, like job displacement or the misuse of AI systems. Grounding decisions in likelihood rather than mere possibility informs more effective resource allocation and policy development. Making decisions amid uncertainty means balancing risks and benefits while keeping human values in view. And history shows how poorly we predict the long-term outcomes of transformative technologies, which argues for cautious optimism and ethical consideration as we navigate AI's future.
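Here's what that mindset looks like in practice, as a hedged sketch (again in Python, with hypothetical risks and made-up numbers): rank risks by expected loss, probability times impact, so mitigation effort follows likelihood-weighted harm rather than the scariest headline.

    # Rank AI risks by expected annual loss (probability x impact),
    # so mitigation budget follows likelihood-weighted harm.
    risks = [
        # (risk,                              annual probability, impact in $)
        ("Model produces biased decisions",   0.25,    500_000),
        ("Prompt injection exfiltrates data", 0.40,    250_000),
        ("Rogue superintelligence",           0.00001, 1_000_000_000),
    ]

    for name, prob, impact in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
        print(f"{name}: expected loss ${prob * impact:,.0f}/yr")

Notice that the scenario with the most dramatic possible impact ranks last once likelihood enters the equation. That's the point: possibility grabs attention; probability allocates budget.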


4. How to Get Started

Effective AI risk management means identifying and mitigating the threats and vulnerabilities that endanger assets like data, models, and devices. The process faces real challenges, though: the rapidly evolving nature of AI, a lack of standardized terminology, and limited empirical data all make risks hard to assess and manage accurately. Understanding these complexities is key to developing adaptive strategies for a fast-changing AI landscape.

Resources to Help You Figure It Out

The following frameworks, guidelines, and tools can provide comprehensive guidance on the wide and developing spectrum of AI risks:

  1. NIST AI RMF: A structured approach for managing risks in AI system design, development, deployment, and operation.
  2. Guidelines for Secure AI System Development: A detailed guide covering the secure AI system lifecycle, emphasizing security at every stage and addressing AI's unique vulnerabilities.
  3. OWASP Top 10 for LLM Applications: Targets unique security vulnerabilities in LLM applications, offering critical vulnerability lists and remediation strategies.
  4. OWASP LLM AI Security & Governance Checklist: Provides a comprehensive guide for managing Generative AI technology rollouts in organizations.
  5. Huntr: The world's first bug bounty platform for AI/ML applications, facilitating the identification and resolution of vulnerabilities in AI/ML software.
  6. Microsoft Responsible AI: Focuses on six guiding principles for ethical AI development and usage, operationalized through governance, policy, and research.
  7. AWS Generative AI Security Scoping Matrix: Assesses and mitigates risks in generative AI, categorizing solutions and outlining necessary security disciplines.
  8. My Guide to Secure AI: Lastly, if you’re looking for a complete lifecycle while staying at a 20,000 to 50,000-foot view, you can check out my comprehensive guide to securing AI systems here.


That concludes our wild ride through the ever-twisty, turny world of AI risks. Here's my hot take for 2024: it will be a rollercoaster that makes the craziest theme park ride look like a kiddie carousel. I've been strapped into this AI thrill ride for about a year and a half now, and let me tell you, it's been a fucking blast, and I'm going again in 2024! If you're not ready, I recommend you grab your puke bag and fasten your seatbelt!


Disclaimer: The views and opinions expressed in this article are my own and do not reflect those of my employer. This content is based on my personal insights and research, undertaken independently and without association to my firm.
