Welcome to the Responsible AI Weekly Rewind: September 23rd Edition
In the fast-paced world of AI, staying informed is crucial. That's why the team at the Responsible AI Institute curates the week's most significant AI news stories, saving you time and effort.
Subscribe to receive the Rewind every Monday to catch up on the top headlines and stay informed about the evolving AI landscape.
1. UN report calls for AI to be governed with the same urgency as climate change

A new United Nations report suggests that the organization should spearhead a global effort to monitor and govern artificial intelligence (AI) with the same urgency it applies to climate change. The report, produced by the UN Secretary General’s High-Level Advisory Body on AI, recommends the creation of a body akin to the Intergovernmental Panel on Climate Change (IPCC) to gather real-time information on AI's risks and benefits.
The global push to govern AI is essential, given both AI's transformative potential and the risks of unregulated development and use. This initiative could help bridge the digital divide by focusing on equitable access, especially for developing nations, and promote ethical AI adoption worldwide. A coordinated global effort could also streamline fragmented regulations, creating a more consistent regulatory landscape for AI.
However, geopolitical competition and AI's rapid evolution pose significant challenges. The initiative's success will depend on how well it navigates political complexities and adapts to AI's fast-changing nature.
2. A big win for the EU? How California's new AI bill compares to the EU AI Act
California's proposed AI safety bill, currently awaiting the governor's decision, aims to introduce safety protocols for large-scale AI models, mirroring aspects of the EU AI Act. While the EU's risk-based approach covers a broad range of AI systems, California's bill is more specific, targeting large-scale models that do not yet exist and allowing for quicker action against unsafe AI. Both laws face criticism for potentially stifling innovation, but proponents argue that these regulations are essential for managing AI risks globally.
3. Sam Altman leaves OpenAI board's safety and security committee
Sam Altman, CEO of OpenAI, has stepped down from the company's safety and security committee, which is now composed entirely of independent board members. The move follows criticism over the potential conflict of interest in Altman overseeing the very practices he was meant to regulate. The committee, now chaired by Zico Kolter, will continue to oversee the safety of major AI models and has the authority to delay releases if necessary.
4. Dutch AI Risk Report Summer 2024: turbulent rise of AI calls for vigilance by everyone
The Dutch Data Protection Authority (Dutch DPA) released its AI & Algorithmic Risks Report for Summer 2024, emphasizing the need for vigilance as AI technology rapidly develops in the Netherlands. The report highlights low public trust in AI, the risks of misinformation from generative AI, and the lack of oversight of AI use by local governments. It urges stronger regulatory frameworks and democratic control to ensure safe, responsible AI deployment in sectors such as healthcare, education, and public transport.
5. Meta reignites plans to train AI using UK users’ public Facebook and Instagram posts
Meta has announced that it is resuming its plans to train AI systems on public posts from Facebook and Instagram users in the UK, after pausing the initiative earlier this year over regulatory concerns. Meta says it has revised its approach in response to feedback from regulators, aiming for greater transparency. Critics, however, remain skeptical about how much the new approach differs from the original.
6. Oracle to Power AI Data Center with Small Modular Nuclear Reactors
Oracle has revealed plans to use small modular nuclear reactors (SMRs) to power a new AI data center with a projected one-gigawatt capacity. This announcement was made during Oracle's quarterly earnings call, with Chief Technology Officer Larry Ellison emphasizing the significance of sustainable energy solutions for powering the company’s cloud data centers. Oracle has already secured permits to build these reactors.
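For a rough sense of scale, here is a minimal back-of-envelope sketch of how many reactors a one-gigawatt facility might imply. The per-unit outputs used below are illustrative assumptions about typical SMR designs, not figures from Oracle's announcement:

```python
# Back-of-envelope: reactors needed for a 1 GW data center.
# Assumption (not from the article): SMR designs typically target
# roughly 50-300 MWe of electrical output per unit.
import math

TARGET_MW = 1_000  # Oracle's projected one-gigawatt capacity

for unit_mw in (50, 77, 300):  # illustrative per-unit outputs
    print(f"{unit_mw:>4} MWe/unit -> {math.ceil(TARGET_MW / unit_mw)} reactors")

# Prints:
#   50 MWe/unit -> 20 reactors
#   77 MWe/unit -> 13 reactors
#  300 MWe/unit -> 4 reactors
```

Whatever the unit size, a one-gigawatt target implies multiple reactors rather than a single installation, which underscores the scale of the commitment behind the announcement.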
7. Companies carry more liability for AI than they realize
A 100-page article published in NYU's Journal of Legislation and Public Policy highlights that companies using generative AI may be carrying more legal liability than they realize. Co-authored by experts including EqualAI CEO Miriam Vogel and former Homeland Security secretary Michael Chertoff, the article warns businesses that current laws on housing, lending, and employment still apply even when AI is involved, and companies can be held responsible for discriminatory or flawed AI outcomes.
Ready to dive deeper into responsible AI? Head over to our website to explore our groundbreaking work, discover membership benefits, and access exclusive content that keeps you at the forefront of trustworthy AI and innovation.