AI Safety @ UCLA


Research Services

Los Angeles, CA · 76 followers

Research-focused club that provides students with tools and guidance to tackle challenges in AI Safety.

About us

AI Safety at UCLA works to ensure that the development of powerful AI systems is done safely. We are a research-focused club that strives to provide students with the tools and guidance to tackle the problems that interest them. Join us if you'd like to learn about AI safety and discover how we can use AI to build a better future.

Website
https://ais-ucla.org/
Industry
Research Services
Company size
11-50 employees
Headquarters
Los Angeles, CA
Type
Nonprofit
Founded
2022


Updates

  • AI Safety @ UCLA
    76 followers

    An amazing time at EA Global 2024 Bay Area (Global Catastrophic Risk) and EAGxAustin!

    Tejas Kamtam

    Doing cool stuff! | Incoming SWE @ Amazon | ucla cs

    These past two quarters have been a fantastic dive into the landscape of pragmatic AI safety research at a professional level.

    In early February, some of our members at AI Safety @ UCLA (Avi Parrack, Emma Vidal, Govind Pimpale, and others) had the opportunity to attend the Effective Altruism Global 2024: Global Catastrophic Risk conference. There we heard from some of the top minds at Google DeepMind, Anthropic, Redwood Research, FAR AI, and many others on the current state of mechanistic interpretability, model evaluations, and broader advances in AI interpretability research, along with a long-term view of the future and progress toward solving alignment. Perhaps the most significant takeaways included the case that an AGI could cause havoc despite a "shutdown button," sparse autoencoders for eliciting monosemanticity in transformer-based language models, and the amazing progress organizations like METR and Redwood Research have made in capabilities and control evaluations! From a student-organizer perspective, this conference was by far one of the best ways to hook students on possible career paths in AI safety.

    Fast-forward to mid-April, and we (Michael Ward, Steven Veld, Chengheng Li, and others) made our way to EAGxAustin, held at The University of Texas at Austin and focused on student engagement and organization. There, I had the opportunity to join a panel of student organizers from across the country on how best to approach building and maintaining a student organization focused on Effective Altruism and AI safety. It was a blessing to meet some fantastic peers and familiar faces who made the event a blast while also being highly impactful; thank you, Ivy Mazzola, Alex Dial, Max Gehred, Nathan Reed, Tzu Kit C., Cody Rushing, Satvik Duddukuru, Thomas Kwa, Akash S.

    These conferences were some of the most impactful opportunities for our members and myself to engage with humanity's long-term future, and a chance to speak with some incredible people and discover what everyone is doing to lead society to a brighter and safer technological future. Here are some pictures from the events!

