Welcome to the Responsible AI Weekly Rewind: October 28th Edition

In the fast-paced world of AI, staying informed is crucial. That's why the team at the Responsible AI Institute curates the week's most significant AI news stories, saving you time and effort.

What to expect:

  • Key developments in AI technology and policy
  • Insights on responsible AI practices
  • A concise summary of the week's important stories

Subscribe to receive the Rewind every Monday to catch up on the top headlines and stay informed about the evolving AI landscape.


Congressional leaders negotiating potential lame-duck deal to address AI concerns

Congressional leaders, including Chuck Schumer and Mitch McConnell, are in talks to pass AI-related legislation during the lame-duck session before January 2025. While there’s bipartisan interest in AI research and workforce training, more divisive issues like AI's role in misinformation, elections, and national security may pose challenges. Schumer has been leading the effort with an "AI policy roadmap," and there’s potential for any AI package to be included in must-pass bills like government funding or the National Defense Authorization Act.

As Congress weighs how to draft AI legislation before new members are sworn in later this year, several factors are shaping the content of the bills under consideration. While improving the U.S.'s position in AI workforce readiness and publishing AI research are top bipartisan priorities, other critical AI topics remain politicized. The parties currently disagree on how much weight to give AI's role in spreading misinformation and disrupting elections, and that dispute will affect whether an AI package passes before the end of the year. Congressional leadership may push to fold these topics into must-pass legislation, like the National Defense Authorization Act, but even then, several members may delay an AI package that touches on these subjects. Invested congressional representatives will need to think creatively this fall about how to structure any upcoming AI deal to maximize its chances of passing before the end of the 2024 lame-duck period.

Read the article


Paving the Way for Responsible AI: UNESCO and the G7 Toolkit Initiative

UNESCO, in partnership with the G7 and OECD, has developed the G7 Toolkit for AI in the Public Sector to promote responsible AI governance. This framework addresses privacy, transparency, and inclusivity, fostering trust in AI systems. The initiative encourages collaboration between governments and the private sector to safeguard ethical principles, human rights, and sustainability. UNESCO's work includes launching a voluntary AI compliance disclosure mechanism, aligning AI practices with certification standards, and mitigating environmental impacts through strategic partnerships, such as with France’s Ministry of Ecological Transition.

Read the article


AI helped the feds catch $1 billion of fraud in one year. And it’s just getting started

AI-driven fraud detection at the U.S. Treasury helped recover $1 billion in check fraud in fiscal 2024, nearly tripling the prior year's figures. In total, Treasury prevented and recovered over $4 billion in fraud, leveraging machine learning to sift through vast data, spot hidden patterns, and enhance prevention efforts. As the Treasury continues expanding these tools, concerns grow around AI’s broader risks to the financial system, with human oversight remaining integral in final fraud determinations.

Read the article


Virginia Candidate Hosts Debate Against Incumbent’s AI Chatbot

Independent congressional candidate Bentley Hensel hosted a debate against an AI chatbot he created to stand in for Democratic incumbent Don Beyer, highlighting Beyer’s refusal to participate in debates ahead of Virginia's 8th district election. Hensel's AI, called DonBot, answered policy questions using Beyer’s public records, addressing topics like gun control and U.S. aid to Israel. Though the debate received little attention, it sparked discussions about the future role of AI in political communication and transparency.

Read the article


Penguin Random House books now explicitly say ‘no’ to AI training

Penguin Random House has updated its copyright page to explicitly prohibit the use of its books for AI training, both for new publications and reprints. This move includes a clause that excludes its content from text and data mining exceptions under EU law, marking it as the first major publisher to adopt such a policy. While not legally enforceable on its own, this statement reflects the company's commitment to protecting authors' intellectual property against AI misuse, contrasting with other publishers who have made AI training deals.

Read the article


Midjourney plans to let anyone on the web edit images with AI

Midjourney plans to release a new web tool allowing users to edit any uploaded images using its generative AI, including retexturing objects based on captions. Set to launch soon, this upgrade will initially be limited to a subset of the community with increased moderation. However, concerns about misuse, such as facilitating copyright infringement or deepfakes, persist. While Midjourney has committed to some metadata standards, it has yet to adopt broader tools for tracing image provenance. The platform is also seeking community feedback to guide access.

Read the article


Bonus Feature: Responsible AI Report Podcast - Episode 2

In this episode, we feature mpathic's Caraline Brzezinski and Dr. Amber Jolley-Paige. They discuss the role of AI in healthcare, focusing on AI accuracy, standardized testing, acceptable error rates, and the need for human oversight to ensure safety and effectiveness.

Watch on demand


Ready to dive deeper into responsible AI? Head over to our website to explore our groundbreaking work, discover membership benefits, and access exclusive content that keeps you at the forefront of trustworthy AI and innovation.


Join the RAI Institute Community

