Start GenAI Security with these free resources.

Start GenAI Security with these free resources.

Learn GenAI and LLM while keeping security in mind.


When I started looking for GenAI and LLM learning resources through Google search, chatGPT, etc., I found too many courses on udemy, Coursera, edX, and other popular LMS platforms. My challenge was to find courses that are easy to understand and provide some labs for basic hands-on.

I divided my GenAI and LLM security study plans into three stages

  1. Learn the basics of GenAI and LLM
  2. Learn the fundamentals of GenAI and LLM Security
  3. Practice those learnings through online labs and CTFs

Here are the courses list that will give you a good kickstart while starting your learning journey into GenAI and LLM Security:

Basics of GenAI and LLM

  1. GenAI for Everyone by Andrew Ng: https://www.coursera.org/learn/generative-ai-for-everyone/ If you don’t know anything about GenAI, you must start with this short course. This course provides a comprehensive introduction to generative AI, making complex concepts accessible to all, regardless of technical background. Andrew NG has done a fantastic job in explaining not just GenAI basics but also its future, job market etc.
  2. LLM CS-324 from Stanford University: https://stanford-cs324.github.io/winter2022/ I am adding it as a second course because you need to know LLM in little detail, and what could be better than a course from Standford University! It’s a rigorous course that deepens into large language models (LLMs), combining theoretical insights with practical applications. It is ideal for learners interested in cutting-edge AI research and its implications.

Fundamentals of GenAI and LLM Security

  1. GenAI with LLM from Coursera: https://www.coursera.org/learn/generative-ai-with-llms#modules If you think a course by Stanford University is too long for you and you want to understand in less time, this could be the course on LLM again by Andrew NG and Deep Learning. It focuses on leveraging large language models in generative AI applications, equipping learners with the skills to create innovative AI solutions in various domains.?
  2. The foundation of AI Security by AttackIQ: https://www.academy.attackiq.com/courses/foundations-of-ai-security Now, as you know about GenAI and LLM fundamentals, it’s time to understand its security aspects. It provides essential knowledge about the security challenges associated with AI technologies and lays the groundwork for understanding how to protect AI systems from potential threats.
  3. MLSecOps Foundation course by ProtectAI: https://protectai.com/mlsecops-foundations-certification It’s not DevSecOps with ML, but surely a fair idea of DevSecOps would make you understand MLSecOps part quickly, but not mandatory. Protect AI has done a good job creating this specialised course focusing on the intersection of Machine Learning and Security Operations, designed to build foundational skills in MLSecOps.

Practical GenAI and LLM Security

  1. Web LLM attacks from portswigger: https://portswigger.net/web-security/llm-attacks I know, you would get bored quickly if it’s just theories. But, it was mandatory to start hands-on in GenAI security with LLM. Portswigger has done a fantastic job explaining and providing labs for LLM attacks. This resource is crucial for security professionals aiming to fortify their defences against emerging threats.
  2. OWASP Top 10 for LLM:?https://application.security/free/llm?by Kontra (Security Compass now). This free resource assesses the security of LLM applications and provides practical insights into identifying and mitigating risks associated with these technologies.
  3. Red Teaming LLM Applications by Coursera: https://www.coursera.org/projects/red-teaming-llm-applications I enjoyed this course thoroughly as it had lab associated and explanation for each possible security issues in LLM-based applications in brief. You will gain essential hands-on experience in identifying and mitigating security risks in LLMs, a necessary skill for AI security professionals.
  4. LLM security challenge with Gandalf: https://gandalf.lakera.ai/ Test your LLM security skills in an engaging and interactive challenge hosted by Gandalf. Perfect for enthusiasts and experts alike. It was easy till level 6, however. The good thing is it has even more interactive GenAI-based challenges, and it will surely keep you engaged.?
  5. AI security challenges, CTF style: https://promptairlines.com/ I know the list would not be complete without considering bug bounty hunters. You can participate in Capture-The-Flag (CTF) style challenges that expose you to real-world AI security scenarios and adversarial prompt techniques.


With these free resources, I hope you will enjoy learning GenAI and LLM security.

Connect with me on my social profiles for more guidance and insights!

1. Linkedin: https://www.dhirubhai.net/in/jassics?

2. Linkedin Newsletter: https://www.dhirubhai.net/build-relation/newsletter-follow?entityUrn=7004103439039287296

3. Github: https://www.github.com/jassics ?

4. Medium: https://jassics.medium.com ?

SriPhaniKrishna Chinnapuvvula

Seasoned Digital Transformation Leader & Enterprise Agile, DevSecOps, SRE Coach | Driving Digital Excellence in Global Enterprises | Proven Success Enterprise Integration, Enterprise Architecture, Engineering Mgmt.

2 个月

Excellent Resources

Utkarsh Sawant

Global Head of Cyber Strategy

2 个月

Very helpful

要查看或添加评论,请登录

Sanjeev Kumar Jaiswal的更多文章

社区洞察

其他会员也浏览了