“AI Security” Just Got Interesting

I have to confess that as much as I have enjoyed learning about AI security and safety risks, the conversations I had with other risk leaders in 2023 sometimes put me to sleep. Thankfully, the dialogue about how to manage the risks related to the explosive increase in the use of artificial intelligence is much more interesting in 2024.

The world tends to think of 2023 as the year in which mainstream audiences began, through the rapid adoption of ChatGPT and other AI tools, to understand the power and potential of AI. I, on the other hand, remember that year as one long, monotone, theoretical conversation about risk. There was a lot of noise but not a lot of substance:

  • 2023 was the year every security product company had to claim both that its product defended against AI risks and that it had AI baked in.
  • 2023 was the year every security executive started to get questions from their board of directors about how they were addressing AI risks.
  • 2023 was the year the mainstream media joined the AI hype train. Every day we saw articles about the amazing potential of AI, but also about the incredible risks that would lead to the destruction of society once AI takes over everything…
  • And 2023 was the year that AI talks started showing up at security conferences.

In 2023, once we cut through that noise, the conversations were often abstract, filled with theoretical concerns about AI ethics, data privacy, and potential biases in algorithms. While these topics are undoubtedly important, they lacked the immediacy that grabs attention. Theoretical risks, though intellectually stimulating to some, rarely inspire action when the threat seems distant or speculative.

In all respects, 2024 has been much more motivating and action-oriented, because the risks are no longer theoretical. The shift in 2024 toward more engaging discussions of AI risks reflects the rapid evolution of the technology and the corresponding increase in its adoption across sectors.

But even though AI is in use everywhere, appropriate security and safety controls are not yet in place. As a result, bad things are starting to happen. Theoretical discussions have evolved into urgent conversations driven by real-world incidents and regulatory developments. High-profile AI failures, coupled with significant breaches and mishandling of data by organizations using AI, have forced risk leaders to move beyond theory. Everyone I speak with has a story of a real situation that went awry because AI risks were not managed well. These real examples are sparking deeper thinking about mitigation. The conversation has shifted from "what could happen" to "what is happening," and more importantly, "what do we do about it?"

In a survey of security leaders released today, Lakera, an AI security company I advise, found that only about 5% of organizations express high confidence in their GenAI security frameworks, even though nearly 90% are actively implementing or exploring LLM use cases. There is real uncertainty about whether existing security approaches protect against sophisticated AI threats: 86% of respondents report moderate or low confidence in their current measures. That caution reflects limited experience with AI-specific threats and controls. I won’t go into the details of the study, but I highly recommend security professionals give it a read, with a focus on the concrete vulnerabilities it catalogs. For example, 19% of respondents reported internal incidents involving unauthorized access to AI models. That list is a good starting point for the areas that need attention now inside every organization.

As the survey shows, the adoption of AI introduces new risks into organizations unprepared to address them. This shift has pushed risk leaders to engage more deeply with technical teams and rethink traditional risk management approaches. In 2023, the focus was predominantly on safeguarding data and ensuring compliance with evolving regulations. In 2024, however, the discussions have expanded to include sophisticated AI-generated attacks that challenge existing cybersecurity protocols. This has led to a renewed emphasis on adaptive security measures, requiring risk leaders to stay ahead of AI’s capabilities and vulnerabilities.

New regulation is also driving this transformation. 2024 has seen a surge in governmental and industry-specific regulations aimed at AI governance, and their implications have made the subject matter far more tangible for risk leaders. As companies scramble to comply with these mandates, risk leaders are no longer just engaging in hypothetical debates; they are developing actionable strategies to mitigate AI-related risks and meet concrete regulatory requirements.

Security executives are being pushed out of their traditional data security comfort zone and into new dimensions of the risk management conversation. Public and stakeholder demand for ethical AI has pushed organizations to adopt responsible AI practices, not just as a regulatory requirement but as a competitive differentiator. This has made risk management a more dynamic and strategic field, as leaders navigate the intersection of ethics, technology, and business performance.

As AI continues to permeate every facet of business and society, the role of security leaders will become increasingly critical in shaping how organizations manage and mitigate these evolving risks. The challenge now lies not just in understanding AI risks but in developing robust strategies to address them in an ever-changing landscape. The shift from passive discussion to active management marks a significant step forward in the responsible deployment of AI technologies.

#AI #Security #Innovation #GenAI #Cybersecurity #Lakera

Full disclosure: I used the help of AI tools in writing this article.

Thanks for the shoutout, Joe Sullivan!

Joe, great summary! I have seen many of the same things as you. It is not an easy challenge. Even some of the "solutions" create consternation and additional challenges. You always need to factor in the law of unintended consequences. Users want what they want.

Todd Fitzgerald

#1 Best Selling Cybersecurity Leadership Author | Former F500 CISO | Keynote Speaker | Board Advisor | Podcaster | Educator | Mentor

3 months ago

Great writeup, Joe Sullivan. 2023 was a year of noise and, to some extent, misinformation regarding the threat, real solutions, and mitigations across the CISO community, with many people trying to “figure it out.” I recall moderating an AI panel where the panelists were concerned about talking about it because they had not fully implemented the ideas they were considering. I said that’s OK, as we are all in this boat and need to have the discussions. If CISOs/CPOs/CIOs/Business executives are not in the room having these discussions, they need to be. #generativeai #artificialintelligence #cyberleadership #cisos #cybersecurity

Peter Dawson

Project Manager - IT Demand and Project Portfolio Management, America's Region at DB Schenker

3 months ago

Gagandeep Sidhu worth reading

Robert Caplehorn

Non Executive Director - FINTECH, Crypto and E-Payments. Current roles: PSR Payment Industry Panel Chair MyPOS Payments Ltd, GVS Prepaid Ltd (Blackhawk), Bitstamp Europe S.A., Unipaas Financial Services Ltd.

3 months ago

Very helpful article, Joe, and timely.
