Lessons Learned Leading AI Security

Lessons Learned Leading AI Security

AI makes headlines, but AI security leadership often stays in the shadows. This article aims to shed light on this field with insights from my experiences and personal research. I'll share what works, what doesn't, and how to navigate unique challenges.

Here are top takeaways:

Adapt, Build Your Team, and Drive Growth

  • Every organization faces different challenges, whether it’s dealing with outdated systems or navigating strict regulations. The key is understanding those differences and adapting security practices accordingly.
  • Don’t fall into the trap of thinking you can solve AI security with just a few people. Focus on building a cross-functional team that combines technical expertise with a broad understanding of enterprise risks. If that's not feasible, consider hiring the right talent or partnering with vendors and specialists to fill those gaps.
  • Integrate AI security into the business and use it to fuel growth. When you align security with your company’s goals and show how it adds real value, it can become a revenue driver. If it’s not adding value, it’s just another expense that could get cut.



This article is a shortened version, but if you’re looking for more detail, you can check out the full version on Substack (free to read) here.






Adaptability is key.

AI security challenges can look very different depending on the type of organization. Understanding these differences is key to putting in place the right security measures. I've worked in a variety of environments, and each one needed its own specific strategy:

  • In big companies, adding AI to existing systems can be tough because of older technology that may be outdated (we call this technical debt). It's all about balancing new emerging technologies with the old systems that are already in place.
  • Startups move fast and focus on growth, which means they often take risks with security. I've worked with startups to help them keep growing quickly but also build in security from the start.
  • Industries like healthcare and finance have strict rules about privacy and ethics. Working with these organizations means understanding all the regulations. I've helped by training boards and creating policies that keep AI projects in line with these rules.

One big lesson stands out: clarity is the most valuable but underestimated resource in AI.

A clear understanding helps organizations make confident decisions and align AI projects / initiates with their goals. Without clarity, it’s hard to measure success, and without measurement, managing becomes guesswork. Unfortunately, many organizations are still guessing.

That brings me to my next point: to be an effective AI security leader, you need to be adaptable, develop a context-specific understanding, and demonstrate the ability to align security practices with your organizational objectives and regulatory demands.






Become fluent in AI.

Because you'll need to be able to explain this clearly and often—many people simply don't know the difference, and they'll need help understanding, especially when making risk decisions.

Start with Traditional AI vs. Generative AI.

In traditional AI, models are built using individualized data, giving developers full control over safeguards like fairness and data protection. It’s great for analyzing large datasets, making predictions, and providing recommendations. Generative AI, on the other hand, uses general-purpose models shared by millions, excelling at tasks like summarizing natural language and role-playing (e.g., "customer service rep"). However, it also comes with unique risks due to inputs that are often unpredictable, meaning even small changes can lead to different outcomes.

Then, move to AI as a Platform.

Platforms like social media and e-commerce have transformed digital ecosystems, allowing users to create, connect, and transact. AI is set to follow in their footsteps, becoming the next major platform technology with the potential to build vast ecosystems and drive value through user-generated activity.

Finally, learn the AI Tech Stack.

To manage AI risks effectively, it’s crucial to understand the key layers: serving infrastructure, applications, AI models, safety measures, user experience, and the broader ecosystem. Since each layer of the stack relies on the others, this interconnectedness brings its own unique risks.

That means understanding tools like Kubernetes and Docker for orchestration and containerization. Get comfortable with automated infrastructure management, or infrastructure-as-code, using Terraform or Ansible. Know that TensorFlow and PyTorch are essential for AI/ML work, while tools like MLflow and Seldon ensure reproducibility and secure model deployment in production. Pre-trained models can be sourced from places like Hugging Face or cloud provider model registries, but these need continuous monitoring for vulnerabilities. ONNX (an open-source format that allows models to be easily transferred between platforms like PyTorch and TensorFlow) helps with cross-platform model standardization and optimization. Be familiar with frameworks like NIST AI RMF and MITRE ATLAS, and use security tools like PyRIT, Adversarial Robustness Toolbox, and Purple Llama.

The stack can become quite complex, depending on how much you narrow your focus. But by breaking it down into its distinct components, such as infrastructure, the tools running on it, and people and systems interact with it, then it becomes easier to see how everything connects and address risks more effectively.

Equally important is being able to communicate clearly across all these layers.

Each layer has its own challenges, and all stakeholders, including technical teams, leadership, and end users, need to be aligned. You will likely play an important part in bridging that divide. When you can explain how each part works together, it becomes much easier to manage risks within the broader ecosystem.





Keep up with emerging AI risks.

The fast-paced implementation of AI often comes with a “move fast and break things” mindset, which leaves security gaps. AI tools have lowered the barriers for sophisticated cyberattacks, allowing attackers who once needed advanced skills to execute complex attacks more easily.

One of the biggest security issues in AI/ML development is insecure coding. Many AI engineers aren’t trained in secure coding practices, leading to vulnerabilities when sensitive data is handled or model APIs are left unprotected. Additionally, misconfigured infrastructure can expose models to malicious actors, increasing the risk of attacks.

Improperly secured language models are particularly attractive to attackers, vulnerable to techniques like prompt injections or model inversion. As threat actors begin sharing AI tools and methods, it’s becoming more difficult to attribute specific Tactics, Techniques, and Procedures (TTPs) to individual groups, making threat intelligence more complex. (For a deeper dive into AI attacks, check out my article:

Another growing risk is shadow AI, where employees use unapproved tools to perform work tasks. Many organizations are still in the process of rolling out approved large language models (LLMs) like GPT-4o, Claude Sonnet 3.5, or Llama 3.2. But that hasn’t stopped employees from finding and using unauthorized tools, creating significant security challenges. Companies must secure their AI environments while meeting user demand for these tools, often without a clear understanding of what’s being used internally.

For those companies, getting this balance right is essential.

Tools like Microsoft Copilot for M365 are sometimes seen as replacements for ChatGPT or Claude, but they’re not designed to be. Copilot takes significantly longer—2 to 6 times more—to respond due to its extensive safety, security, and privacy features. While this added security is a positive, the slower response times can frustrate employees who are used to faster tools.

Though improvements may come with time, managing expectations in the interim is essential. Employees often expect a fast, ChatGPT-like experience, which is why features like asynchronous streaming are important. These features allow companies to deliver the quick responses users expect without compromising security.

Keeping up with all of this is a daily challenge.

I dedicate time each night to staying informed, and building relationships with trusted partners has been key. You need people who will call or text you the moment something happens. Casting a wide net and using AI to sift through sources helps stay ahead. If you have a threat intelligence team or platform, leverage it fully.

Use what you have at your disposal to stay proactive and ensure you have the right systems and contacts in place to respond quickly.





Focus on people, not just tech.

AI security is not a one-person job. But many companies, no matter their size, try to give huge challenges to just a few people, expecting them to handle everything without enough support. It’s important to build a strong team early on—one that includes both technical experts and strategic thinkers who understand AI risks as a whole. Training, certifications, and community involvement are great ways to build this team.

It's also not enough to focus only on technical skills. Security skills should be part of everyone's knowledge, from leadership to IT to business teams, and even end users.?If you don’t have enough people thinking about AI security, it can lead to burnout and high risk if key people leave. This is true in many areas, but with AI moving so fast, it’s even more relevant.

This could mean including basic AI security awareness in company-wide training sessions or creating cross-functional teams for AI projects that include members from various departments.

While bigger organizations can afford to build out entire teams, that’s often not realistic for smaller companies. Instead of trying to do everything in-house, they can lean on external experts. Partnering with third-party providers for things like red teaming or regular vulnerability checks can give them the oversight they need without stretching their resources too thin. And with so many automation tools becoming more affordable, even smaller businesses can start catching potential risks early and staying ahead without breaking the bank.

Make sure to push for the right investments early. Ask yourself: Are we growing in a way that matches our team's skills? Are we willing to wait for the right resources before taking on more?

When you’re just getting started, picking your battles is very important. Remember, Rome wasn't built in a day, and neither is a successful AI security program.

You also want to break down the silos.

In a large organization, knowledge silos are a big problem. One of the most successful things I did was create a global environment for sharing and learning about AI security. This helped different parts of the company work together, prevented isolated teams, and brought in fresh ideas to solve tough problems. This kind of environment helps everyone grow and makes sure the team has a wide base of knowledge instead of relying on one person.

If you haven’t started yet, now is the time to do this. Tools like Slack or Mattermost for communication, Confluence, SharePoint, or Notion for documentation, and GitHub for version control can help create a collaborative culture. Setting up dedicated channels for discussions, projects, and shared documentation helps create an environment where knowledge flows freely.






Make it a revenue driver.

One of the first things to consider is how AI security can bring in revenue. In most organizations, adding to the bottom line is important because it’s how they continue to operate and grow. Finding ways for AI security to add value helps justify the investment. It might not be easy at first, but if you can align your efforts with the company’s growth goals, you'll create a compelling reason for continued support.

The two most common main ways to do this are (1) creating new revenue streams and (2) building customer trust.

For me, it was about finding out how to build up our team to meet client needs and show that AI security can bring in revenue. When AI security becomes essential to delivering trusted solutions, it becomes part of the company’s growth story, offering value and assurance.

For other organizations, it means finding the right people and strategies to improve their capabilities, meet client needs, and use AI security as a key advantage. I go into much more detail on this in the full article.

In short, when AI security is positioned as both a revenue driver and a differentiator, it transforms from a cost center into a strategic asset that fuels long-term success.





Test and evaluate thoroughly.

Building generative AI applications can happen fast, but the real challenge lies in thorough testing and tuning for real-world use.

I prioritize what’s most likely to break or be exploited, such as: LLM integration, API security, and container vulnerabilities. It’s practical and targeted. Beyond security, it is important to evaluate accuracy, bias, and safety. These evals and more can help determine whether the model is reliable, resilient, and ready for production.

Not every system needs constant testing, though. For high-risk AI models or those that change frequently, regular retesting is critical. But for lower-risk, internal tools used by smaller teams, periodic testing may be enough.

The key is understanding the system’s risk profile, how often it’s updated, and how critical it is to the business.

For high-risk models, AI red teaming is a must. Red teaming simulates real-world attacks, uncovering issues that regular testing might miss. But for lower-risk systems, a lighter approach will often suffice.

If you want to dive deeper into AI red teaming, check out my series on the topic:





Know your limits.

It’s easy to want to do too much, especially in AI, but that’s never a good idea. It’s important to work within your limits, set realistic goals, and make clear boundaries. This doesn’t mean you shouldn’t be ambitious—it’s about being realistic with what you have. Your customers will appreciate it, your organization will too, and you’ll be thankful in the long run.

I've experienced times when there was pressure to take on every opportunity without the right resources. In those cases, I had to either get creative or push back. It’s worth fighting for smart growth, avoiding overextension, and focusing on quality over quantity.

Not everyone can push back, though. That’s why it’s important to find allies who share your vision, explain the risks of taking on too much, and make sure the health of your team and quality of work are priorities. Getting support from peers and focusing on long-term impact can help you manage these pressures.





A lone figure stands at the threshold of a bright, glowing doorway, framed by deep blue and green shadows. The contrasting light and dark suggest a sense of transition and boldness, symbolizing the courage, clarity, and collaboration required to step forward and lead in the complex world of AI security.

Takes a lil’ bit of courage, clarity, and collaboration.

Leading an AI security initiative is like climbing a tough mountain. It takes planning, teamwork, and effort at every step. There are times when you’re exposed, don’t have enough support, or face big risks, but each step forward needs a clear plan, support from the right people, and resilience.

In the end, collaboration is key. 99.9% of climbers do not climb Everest alone, and the same goes for building a strong AI security program. You need a committed team that shares the same purpose and puts in the effort to make an emerging field something mature and trusted.

It’s not easy, but it's worth the effort. To get started, take the first step—whether it's organizing a training session, establishing a new policy, or having a meeting to assess AI risks.

If this felt a bit too high-level for you and you're craving more detail, head over to Substack for the full article.


Disclaimer: The views and opinions expressed in this article are my own and do not reflect those of my employer. This content is based on my personal insights and research, undertaken independently and without association to my firm.

These are great insights and should lead to serious organizational think tanks. Regardless of the platform or forum security must and should be in very early conversions.

要查看或添加评论,请登录

Kris Kimmerle的更多文章

  • The Hidden Complexity of Securing AI Embeddings in Enterprise Chatbots

    The Hidden Complexity of Securing AI Embeddings in Enterprise Chatbots

    I've been researching how to secure general-purpose chatbots that leverage embedding models, and I see a lot of…

  • When Machines Start Fighting Machines

    When Machines Start Fighting Machines

    A bit of a departure from my usual, but I wanted to share some thoughts on where I think cybersecurity is headed in the…

  • AI Red Team Assessment Strategies

    AI Red Team Assessment Strategies

    In my previous article, 'Breaking Your AI Before Someone Else Does,' we tipped our toes into the pool of AI red…

    1 条评论
  • Break Your AI Before Someone Else Does

    Break Your AI Before Someone Else Does

    AI red teaming means intentionally breaking your own systems to build them back better. Seven months ago, I wrote the…

  • The Many Faces of AI Risk

    The Many Faces of AI Risk

    Artificial Intelligence brings a whole new set of risks. But here's the kicker - not everyone sees these risks the same…

  • Automating Tasks, Not Jobs

    Automating Tasks, Not Jobs

    Lately, I have seen more and more articles discussing how AI will replace human jobs wholesale. This framing isn't…

    5 条评论
  • Pragmatist Guide to AI Risks

    Pragmatist Guide to AI Risks

    Hey folks, I wanted to provide some light reading before/during the holiday break, and in this article, I really felt…

    1 条评论
  • Analysis of Hallucinations

    Analysis of Hallucinations

    AI models like ChatGPT create content by connecting disparate information, leading to creative but sometimes inaccurate…

    1 条评论
  • Why Purple Llama is a BIG Deal

    Why Purple Llama is a BIG Deal

    Meta announced the Purple Llama project this morning, marking a pivotal moment for AI trust and safety. This…

  • Practical Guide to Secure AI

    Practical Guide to Secure AI

    It's essential to recognize that AI systems, whether internally hosted models or those leveraging external application…

社区洞察