Last week was significant for the AI industry, marking crucial milestones for the future of AI's regulatory frameworks, adoption guidelines, and the competitive landscape.
For those who have been keeping up with technology and AI news, these developments will be familiar, yet they are so pivotal that I am compelled to share my key takeaways. If this news has passed you by, I hope this post is a solid introduction. At the end of this post, you will find a list of articles and newsletters for those interested in a deeper dive.
Takeaways:
- Key Emphasis on Testing: The Executive Order (EO) and the AI Summit underscore the importance of rigorous testing for foundational AI models (Axios, DeepLearning.ai).
- The Role of the AI Officer: The proposed role of the AI Officer could catalyze private and public organizations to develop robust AI strategies and structures.
- Focus on Talent Development: There is a strong emphasis on talent development, enabling the use of AI across various sectors. This approach is expected to open up new avenues for up-skilling and re-skilling, as well as the emergence of novel roles and responsibilities across industries (O'Reilly).
- Post Section 230: Ricky Sutton asserts that the AI era will not have a Section 230 equivalent, thus imposing new levels of responsibility and liability on platforms (Future Media).
- Open Questions on Scope and Applicability: The proposed testing rules are set to apply to AI models trained with more than 10^26 (ten to the 26th power) integer or floating-point operations. According to experts, this threshold is likely to exclude nearly all currently available AI services (Axios).
UK AI Summit and Biden's Executive Order
Here is a summary of the updates from the UK AI summit and the executive order announced by President Biden:
- The UK AI summit, hosted by Prime Minister Rishi Sunak at Bletchley Park on Nov. 1 and 2, was the first global event to address the safety and security of artificial intelligence, especially the advanced large language models (LLMs) that can generate text, images, audio, and code. The summit brought together representatives from 27 governments, including the US, China, India, Japan, and the EU, as well as leading AI companies, civil society groups, and researchers.
- The summit resulted in a landmark agreement among the participants to require safety testing and information sharing for the most powerful AI systems before and after they are deployed. The agreement also established the AI Safety Institute, a London-based hub for AI safety research and collaboration, and the AI Safety and Security Board, a body that will oversee the implementation of the safety standards.
- The summit also addressed the risks of using AI to engineer dangerous biological materials, and agreed to develop new standards for biological synthesis screening as a condition of federal funding for life-science projects. Additionally, the summit showcased how ensuring the safe development of AI can enable AI to be used for good globally, such as in healthcare, education, and climate change.
- President Biden's executive order on AI, issued on Oct. 30, was a comprehensive policy that aimed to ensure that AI is trustworthy and beneficial for Americans. The order established new standards for AI safety and security, protected Americans' privacy, advanced equity and civil rights, stood up for consumers and workers, promoted innovation and competition, and advanced American leadership around the world.
- The order directed federal agencies to root out bias and discrimination in their design and use of AI, and to protect the public from algorithmic harms. It also introduced new consumer protections, such as requiring industry to label and watermark AI-generated content, and giving consumers the right to opt out of AI-based decisions that affect their lives.
- The order also supported the development of AI talent and infrastructure, such as increasing funding for AI research and education, creating an AI workforce advisory board, and launching a national AI research resource to provide access to computing power and data. It also called for international cooperation on AI governance and ethics, and for strengthening the US's competitiveness and leadership in AI.
Potential Implications for Startups, Corporations, and Individuals
The announcements from the UK AI summit and the executive order have significant implications for startups, corporations, and individuals involved in or affected by the development and use of AI. Here are some of the possible implications:
- Startups and corporations that develop powerful AI systems, especially the large language models (LLMs) that can generate text, images, audio, and code, will have to comply with the new safety and security standards and share their test results and other information with the government before and after deploying their models. This could increase their costs and timelines, but also improve their trustworthiness and accountability. They will also have to label and watermark their AI-generated content and give consumers the option to opt out of AI-based decisions.
- Startups and corporations that use AI systems in their products and services will have to ensure that those systems are free of bias and discrimination, and that they protect the privacy and civil rights of their customers and workers. They will also have to monitor and mitigate the potential harms of AI systems on society, such as displacing jobs, spreading misinformation, or undermining democracy.
- Individuals who consume or interact with AI systems will gain more rights and protections, such as the ability to access and correct their personal data, to be informed of the use and source of AI-generated content, and to challenge or appeal AI-based decisions that affect their lives. They will also have more opportunities to benefit from AI systems, such as access to better healthcare, education, and public services.
Key Principles for AI
According to the fact sheet from the White House, the Executive Order promotes the following key principles for AI:
- Safety and security: AI systems should be safe, secure, and trustworthy, and should not pose serious risks to national security, national economic security, or national public health and safety.
- Privacy: AI systems should respect and protect the privacy of Americans and their personal data, and should not be used for unlawful or unethical surveillance or data collection.
- Equity and civil rights: AI systems should advance equity and civil rights, and should not discriminate, oppress, or harm any group of people based on their race, ethnicity, gender, sexual orientation, disability, or any other protected characteristic.
- Consumer and worker protection: AI systems should protect the rights and interests of consumers and workers, and should not deceive, exploit, or harm them in any way.
- Innovation and competition: AI systems should foster innovation and competition, and should not stifle or undermine them through unfair or anti-competitive practices.
- American leadership: AI systems should reflect and uphold American values and principles, and should not be used to undermine or threaten them. The US should also cooperate and collaborate with other countries and international organizations on AI governance and ethics.
AI Officer
The AI officer, as described in the Executive Order, oversees and coordinates the implementation of AI policy and standards within each federal agency, and is also responsible for driving AI adoption so that agencies can deliver better services. According to the fact sheet from the White House, the Executive Order directs the following actions for AI officers:
- Designate a Chief AI Officer within 60 days of the issuance of the Order. The Chief AI Officer will ensure that the agency complies with the AI safety and security standards, protects the privacy and civil rights of Americans, and advances the agency's mission and public service through AI.
- Participate in the interagency AI Council, which will be chaired by the Director of the Office of Science and Technology Policy. The AI Council will coordinate federal action on AI, share best practices and lessons learned, and identify and address cross-cutting challenges and opportunities.
- Develop and implement an AI strategy and action plan for the agency within 180 days of the issuance of the Order. The AI strategy and action plan will outline the agency's goals, priorities, and initiatives for AI, as well as the resources, metrics, and timelines for achieving them.
- Report annually on the agency's progress and performance on AI, including the results of the safety tests, the impacts of AI on equity and civil rights, and the outcomes and benefits of AI for the agency and the public.
AI Showcase
There were many examples and case studies showcased at the AI Summit in the UK that demonstrated how AI can bring a positive impact to society through implementations in healthcare, education, and other areas. Here are some of them:
- In healthcare, one example was the AI-powered diagnostic tool developed by the University of Oxford and the John Radcliffe Hospital, which can detect heart disease and lung cancer earlier and more accurately than human doctors. Another was Ameca, the AI-enabled humanoid social robot created by Engineered Arts, which can interact with patients and provide emotional support and companionship.
- In education, one example was Century Tech, an AI-based personalized learning platform that adapts to the needs and preferences of each student and provides feedback and guidance. Another was Kinetikos, an AI-driven pose tracking system that monitors and improves the posture and movement of students and teachers.
- In other areas, such as climate change, poverty, and inequality, examples included the AI-powered satellite imagery analysis projects by Imperial College London and Stanford University, which can identify and measure the living conditions and economic status of different regions and populations, and One Concern, an AI-based disaster response system that predicts and mitigates the impacts of natural hazards such as earthquakes, floods, and wildfires.
There are many more applications of AI that can bring a positive impact to society across various domains and sectors. It's certainly encouraging that world leaders are considering the benefits of applying AI in a secure and reliable manner to create a positive impact on our society.
Scope and Applicability
If the testing rules are indeed set to apply only to AI models that have been trained with more than 10^26 integer or floating-point operations, this would exclude most, if not all, currently available AI services.
- Current Large-Scale AI Models: The largest AI models today, like OpenAI's GPT-3 or potentially larger models like GPT-4, require enormous computing power for training, on the order of hundreds of zettaFLOPs of total operations (one zettaFLOP is 10^21 floating-point operations); GPT-3's training run, for instance, is commonly estimated at roughly 3×10^23 operations.
- Threshold of 10^26 Operations: Setting a threshold at 10^26 operations is orders of magnitude higher than the computing power used for training current AI models. This suggests that the testing requirements are targeting future AI developments that may use an unprecedented scale of computational resources.
The purpose of setting such a high threshold could be to focus on future systems that are expected to be significantly more powerful (and potentially more risky) than today's models. It might be a forward-looking measure, anticipating the development of next-generation AI that may have profound impacts on society and national security.
By setting the bar this high, it seems the intention is not to burden current AI developers with regulatory overhead but to prepare the groundwork for oversight of future AI systems that could pose new challenges due to their scale and capabilities. This approach allows for the current pace of AI innovation to continue while establishing a framework to ensure that when AI systems reach this immense scale, they will be developed and deployed responsibly and safely.
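The gap between today's models and the threshold can be checked with back-of-the-envelope arithmetic. The GPT-3 figure below is the widely cited ~3.14×10^23 FLOPs estimate from public reporting; the "hypothetical frontier model" entry is an illustrative assumption, not a real system.

```python
# Compare estimated training compute against the Executive Order's
# 10^26-operation reporting threshold.
THRESHOLD_OPS = 1e26

# Rough public estimates (total integer/floating-point operations).
estimates = {
    "GPT-3 (published estimate)": 3.14e23,
    "hypothetical frontier model": 5e25,  # assumed value for illustration
}

for name, ops in estimates.items():
    if ops >= THRESHOLD_OPS:
        print(f"{name}: {ops:.2e} ops -> subject to testing rules")
    else:
        factor = THRESHOLD_OPS / ops
        print(f"{name}: {ops:.2e} ops -> exempt ({factor:.0f}x below threshold)")
```

Even the assumed frontier model, trained with more than a hundred times GPT-3's estimated compute, would still fall short of the 10^26 bar, which is consistent with the reading that the rules target future systems rather than anything deployed today.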
Further Reading