AI Offensive Security Engineer

AI red teaming is a proactive way to strengthen AI systems and reduce risk, preventing costly incidents. It helps companies deploy AI responsibly by surfacing security threats before attackers do. For effective red teaming, organizations should involve security researchers skilled in AI and prompt hacking; their expertise uncovers unknown issues that standard testing misses. By doing this, companies demonstrate a commitment to responsible AI and help build safer systems.

A great article by Dane Sherrets, along with Micron’s Sr. Manager of Offensive Security, Andrew Mayen, is helping me learn more:

The vital role of red teaming in safeguarding AI systems and data | InfoWorld

We also have a vital role available now. If you are working remotely and open to travel, let’s talk. You must have red teaming experience (not the same as penetration testing).

Additional requirements: C#, Python, Go, or Java (you don’t need to be a developer, but you must know how to attack when a threat appears).

This is a senior-level position that requires a strong understanding of cloud environments and attacks (AWS, Azure…).

See the posted position here and apply today!

Careers at Micron Technology
