Last week was eventful in terms of AI Risk and Safety.
QuantUniversity is tracking all of these developments and will incorporate them into the next iteration of the ML and AI Risk Management Certificate program.
Here's a summary of the major declarations and news related to AI risk and safety from last week:
- President Biden's Executive Order on AI: President Biden signed an executive order on artificial intelligence that introduces preliminary guardrails intended to balance technological innovation with national security and consumer rights. This order is expected to set the stage for future legislation and international agreements[1].
- Establishment of the U.S. Artificial Intelligence Safety Institute: The U.S. Department of Commerce, through the National Institute of Standards and Technology (NIST), announced the creation of the U.S. Artificial Intelligence Safety Institute (USAISI). This body will lead the government's efforts on AI safety and trust, with a particular focus on evaluating advanced AI models[2].
- AI Safety and Security Standards: President Biden's executive order also sets new standards for AI safety and security, ensuring protection for Americans' privacy, advancing equity and civil rights, standing up for consumers and workers, and promoting innovation[3].
- DHS Incorporating AI Safety and Security Guidance: The U.S. Department of Homeland Security (DHS) is integrating the White House Office of Science and Technology Policy’s AI Bill of Rights and NIST’s AI Risk Management Framework, among other security guidance, to enhance safety and security measures for critical infrastructure[4].
- International Collaboration on AI Safety: At the AI Safety Summit in Bletchley Park, England, the "Bletchley Declaration" was unveiled, a policy paper that seeks to build a global consensus on managing AI risks. The summit highlighted plans for international collaboration, with commitments from various countries to tackle AI safety issues together. The summit will become a recurring event, with future gatherings planned in Korea and France[5].
These steps indicate a growing global awareness of, and a proactive approach toward, the potential risks posed by AI technologies, with an emphasis on collaboration, regulation, and ethical considerations.
Now that the time for declarations and meetings is done, we have to roll up our sleeves and do the hard work! We will see many pages pop up on AI-related company websites announcing that "something" is being done. But until we have actionable items and expectations for companies, codified through laws with formal obligations, we will just keep hearing about what should and must be done.