Most people who've talked to me over the past year know I'm borderline obsessed with one thing: getting diverse voices into AI safety before it's too late.
When we're tackling questions like "How do we make AI systems reliably do what humans want?" or "How do we ensure AI benefits all of humanity?" we desperately need different viewpoints in the room. We need people who think differently about risk, about social impact, and about unintended consequences.
That's why I'm incredibly excited about our Women in AI Safety Hackathon this International Women's Day weekend (March 7-10). We've brought together pioneering organizations and brilliant minds to tackle this challenge from multiple angles:
• Goodfire, the groundbreaking AI interpretability lab, is leading our Mechanistic Interpretability track, where participants will dive deep into understanding and steering AI models. You'll get hands-on experience with their cutting-edge Ember API and tools.
• BlueDot Impact, with their 4,500+ strong community of AI safety professionals, is championing our Public Education track to make AI safety knowledge accessible to everyone.
• Our Social Sciences track, shaped by The London School of Economics and Political Science (LSE) Visiting Fellow Andreea Damien, explores how AI "thinks" and transforms society - because we need both technical and human perspectives to build safe AI systems.
We're proud to partner with Women Who Do Data (W2D2), a community dedicated to supporting diverse talent in AI technology.
The caliber of women contributing to this effort is extraordinary:
- Myra Deng (Goodfire)
- Tarin Rickett (BlueDot Impact Product and Engineering)
- Bessie O'Dell (AI Security Institute)
- ChengCheng Tan (FAR.AI and Women Who Do Data (W2D2))
- Zainab Ali Majid (AI Safety & Cybersecurity)
- Hannah Betts (FAR.AI)
- Natalia Pérez-Campanero Antolín (Apart Research)
And many more brilliant minds ready to mentor participants!
Lambda is backing every team with $400 in compute credits (including A100s!) because they understand that innovation needs real resources.
Whether you're a developer, researcher, policy enthusiast, or simply curious about AI safety, this is your chance to make an impact. No prior AI safety experience is required.
Want to be part of this mission? We need:
- Participants ready to innovate
- Sponsors who understand that diverse perspectives are crucial for safe AI
- Mentors willing to share their expertise
This isn't just another hackathon. It's a statement that women belong at the forefront of making AI systems safer and more reliable.
Drop a comment if you want to help shape the future of AI safety, or DM me to discuss opportunities.
https://lnkd.in/gvMfJhiR