State of AI Regulation: September 2023 Newsletter
Greetings from the Dallas AI team in Texas to all AI enthusiasts!
In this month's newsletter, our focus is on AI regulation. What is AI regulation? Why do we need it? What are the key AI global powers doing about regulating AI? Let's start with a definition.
AI regulation refers to the legal and policy framework put in place by governments and international organizations to oversee and govern the development, deployment, and use of artificial intelligence (AI) technologies. The primary goal of AI regulation is to ensure that AI systems are developed and used in a manner that is safe, ethical, and aligned with societal values.
The use of AI has steadily increased over the last decade, and with it a rising interest in regulating AI, as illustrated by search trends.
Through the end of 2022, however, there was not enough momentum to make it happen. In contrast, 2023 has seen a flurry of activity around AI regulation; even private companies have testified publicly in support of it. Perhaps the popularity of ChatGPT and past negative media coverage of AI have brought us to this point. With the increase in public discussion and concern, politicians have taken notice, and there have been high-profile government hearings.
In the recent Ethics issue of MIT Technology Review, technology thought leader and author Eric Schmidt writes about the need for regulation:
AI is such a powerful tool because it allows humans to accomplish more with less: less time, less education, less equipment. But these capabilities make it a dangerous weapon in the wrong hands.
In the same Technology Review issue, Coursera founder and one of the top voices in AI, Andrew Ng, shares his thoughts on AI regulation in the section "How to Be An Innovator: Take Responsibility For Your Work":
Regulation helps us avoid mistakes and enables new benefits as we move into an uncertain future. I welcome regulation that calls for more transparency into the opaque workings of large tech companies; this will help us understand their impact and steer them toward achieving broader societal benefits.
Let’s dive deeper into the world of AI regulation. A key driver behind the push to regulate is the growing impact of AI across industries.
Impact of AI on Key Industries
Even though it is generally understood that AI is becoming more prevalent in business, companies still sometimes see AI as something “other industries” are doing. As a result, many companies may not be aware of the extent of AI use in their own operations. Companies should assume they are already using AI and start considering how to manage AI regulatory risk. Given the sheer breadth of AI adoption in recent years, many industries now have considerable AI dependencies in their daily operations.
AI Regulations from Key Global Players
Let's take a look at the three big AI powers and examine their approach and current actions around AI regulation.
1 - EU AI Act 2023
The EU has been at the forefront of AI regulation, developing the most comprehensive draft AI regulation to date, which has spurred discussion, praise, and concern all at once. Let's unpack what the EU Act is and how it will impact AI across geographies and segments.
The EU Artificial Intelligence Act is one of the most comprehensive frameworks for artificial intelligence. The Act was originally proposed by the European Commission in April 2021. A general position on the legislation was adopted by the European Council in late 2022. The European Parliament adopted its amendments to the EU Artificial Intelligence Act on June 14, 2023, and the draft text of the legislation serves as the negotiating position between member states and the European Commission. The EU Artificial Intelligence Act endeavors to strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects European values and rules, and harness the potential of AI for industrial use.
The cornerstone of the AI Act is a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. The framework includes four risk tiers: unacceptable, high, limited, and minimal.
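To make the tiering concrete, here is a minimal, purely illustrative Python sketch of how an organization might tag an internal inventory of AI systems with the Act's four tiers. The tier names come from the Act; the example systems and their assignments are hypothetical assumptions, not legal guidance.

# Illustrative sketch only: labeling an internal AI inventory with the EU AI
# Act's four risk tiers. System names and tier assignments are hypothetical.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # heavily regulated uses (e.g., hiring, credit)
    LIMITED = "limited"            # transparency obligations (e.g., chatbots)
    MINIMAL = "minimal"            # largely unregulated (e.g., spam filters)

# Hypothetical inventory of AI use cases mapped to tiers.
ai_inventory = {
    "resume_screening_model": RiskTier.HIGH,
    "customer_support_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.value} risk")

Such an inventory is only a starting point; where a given system actually falls under the Act will depend on its final text and on legal interpretation.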
Beyond this risk classification, the EU's AI Act includes multiple other components.
“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the Executive Vice-President for a Europe Fit for the Digital Age and Competition, said in a statement. “Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”
Not everyone agrees that the rules are innovation-friendly; there is concern that they may stifle innovation.
The Artificial Intelligence Act proposes steep non-compliance penalties. For companies, fines can reach up to €30 million or 6% of global annual turnover. Submitting false or misleading documentation to regulators can also result in fines. The proposed law also aims to establish a European Artificial Intelligence Board, which would oversee the implementation of the regulation and ensure its uniform application across the EU.
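As a back-of-the-envelope illustration of how that penalty cap scales, here is a short Python sketch. It assumes the Commission draft's formulation for the most serious violations, under which the fine can reach €30 million or 6% of global annual turnover, whichever is higher; the turnover figures below are hypothetical.

# Rough illustration only: maximum fine under the draft Act's headline cap,
# assuming the "whichever is higher" rule. Turnover figures are hypothetical.
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

print(max_fine_eur(200_000_000))     # smaller firm: the EUR 30 million floor applies
print(max_fine_eur(5_000_000_000))   # larger firm: 6% of turnover, EUR 300 million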
To this end, and despite various shortcomings, the Act is a valuable start in helping to shape global norms and standards and in promoting trustworthy AI systems that are, at least to some degree, more consistent with human values and interests. The Act also promotes innovation, including through regulatory sandboxes and specific measures to support small-scale users and providers. Some may say that it does not go far enough in preserving rights, while innovators and developers may argue that it goes too far. But promoting ethical innovation and fair competition while balancing rights is not easily done; compromise and the right regulatory framework are often needed.
If you are interested in hearing expert opinions about AI regulation and the EU AI Act, we recommend checking out this event from Stanford's Institute for Human-Centered AI (HAI). The full discussion is available on YouTube.
2 - U.S. AI Regulation in 2023
The rapid growth of AI has led to increasing focus on how best to regulate it. Several use case-specific AI rules emerged in 2022, and 2023 is bringing further, more general AI compliance obligations through state data privacy laws, FTC rulemaking, and new NIST AI standards.
State Privacy Laws – General Requirements for Consumer-Facing AI in 2023
2023 is likely to see some of the first general obligations that apply across AI use cases, contained in privacy legislation passed by certain states. California, Connecticut, Colorado, and Virginia recently passed general data privacy legislation that goes into effect at various times in 2023. These laws contain provisions governing “automated decision-making,” which includes technology that facilitates AI-powered decisions.
New Rules for Artificial Intelligence
Senate Majority Leader Chuck Schumer launched an effort Wednesday to write new rules for the emerging realm of artificial intelligence, aiming to accelerate U.S. innovation while staving off a dystopian future.
Schumer called for more federal involvement in maintaining U.S. competitiveness, which he said will require careful attention to mitigating AI’s potential harms.
“The first issue we must tackle is encouraging, not stifling, innovation,” Schumer said at the Center for Strategic and International Studies. “But if people don’t think innovation can be done safely, that will slow AI’s development and even prevent us from moving forward.”
He added in a fact sheet that “with so much potential, the U.S. must lead in innovation and write the rules of the road on AI and not let adversaries like the Chinese Communist Party craft the standards for a technology set to become as transformative as electricity.”
Schumer joins the Biden administration, tech-industry leaders, and other members of Congress in seeking to put limits on the technology, amid fears that AI tools can be abused to manipulate voters, pull off sophisticated financial crimes, displace millions of workers, or create other harms.
But imposing new regulations on a set of technologies that are still under development will be difficult for Congress, which often waits years or even decades before establishing guardrails for new industries.
In addition, lawmakers will be trying to impose new rules in several areas, such as copyright and liability, where tech companies have battled with other industries and consumers for years.
Schumer plans a series of forums in the fall to gather insights from industry leaders, interest groups, and AI developers. He also will be pushing Senate committees and other groups to develop bipartisan solutions to potential problems. He is aiming for the process to produce legislation in a matter of months.
3 - China’s AI Regulations
It is well known that China has vowed to become a technology and AI leader. As noted in the working paper discussed below, "China is in the midst of rolling out some of the world’s earliest and most detailed regulations governing artificial intelligence (AI). These include measures governing recommendation algorithms—the most omnipresent form of AI deployed on the internet—as well as new rules for synthetically generated images and chatbots in the mold of ChatGPT. China’s emerging AI governance framework will reshape how the technology is built and deployed within China and internationally, impacting both Chinese technology exports and global AI research networks."
Over the past two years, China has rolled out some of the world’s first binding national regulations on artificial intelligence (AI). These regulations target recommendation algorithms for disseminating content, synthetically generated images and video, and generative AI systems like OpenAI’s ChatGPT.
The rules create new requirements for how algorithms are built and deployed, as well as for what information AI developers must disclose to the government and the public. These measures are laying the intellectual and bureaucratic groundwork for a comprehensive national AI law that China will likely release in the years ahead, a potentially momentous development for global AI governance on the scale of the European Union’s pending AI Act. Together, these moves are turning China into a laboratory for experiments in governing perhaps the most impactful technology of this era. China already counts over 100 artificial intelligence companies capable of producing services similar to OpenAI’s ChatGPT.
Many in the US and the West do not pay enough attention to China's AI policy, or dismiss it as irrelevant. That is a mistake. Despite China’s drastically different political system, policymakers in the United States and elsewhere can learn from its regulations. In an insightful working paper from the Carnegie Endowment, Matt Sheehan reverse-engineers China's AI regulations and shares his insights.
China’s regulations create new bureaucratic and technical tools, including disclosure requirements, model auditing mechanisms, and technical performance standards.
These tools can be put to different uses in different countries, ranging from authoritarian controls on speech to democratic oversight of automated decision-making. Charting the successes, failures, and technical feasibility of China’s AI regulations can give policymakers elsewhere a preview of what is possible and what might be pointless when it comes to governing AI.
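As a loose sketch only, a provider might track those three kinds of obligations in a simple internal checklist along the following lines; the field names are hypothetical assumptions, not drawn from any actual Chinese filing requirements.

# Hypothetical compliance checklist mirroring the three tools named above:
# disclosure requirements, model auditing mechanisms, and technical
# performance standards. All field names are illustrative assumptions.
algorithm_filing = {
    "disclosure": {
        "algorithm_registered_with_regulator": False,
        "public_explanation_published": False,
    },
    "model_audit": {
        "internal_audit_completed": False,
        "audit_report_retained": False,
    },
    "performance_standards": {
        "accuracy_benchmarks_documented": False,
        "content_safety_tests_passed": False,
    },
}

# List any items still outstanding before the system goes live.
outstanding = [
    f"{area}.{item}"
    for area, items in algorithm_filing.items()
    for item, done in items.items()
    if not done
]
print("Outstanding compliance items:", outstanding)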
The paper notes that Chinese AI regulations share three structural similarities.
China’s existing AI regulations are also motivated by three main goals.
Chinese AI governance is approaching a turning point. After spending several years exploring, debating, and enacting regulations that address specific AI applications, China’s policy-making community is now gearing up to draft a comprehensive national AI law.
This paper presents a four-layered policy funnel through which China formulates and promulgates AI governance regulations. Those four layers are real-world roots, Xi Jinping & CCP ideology, the world of ideas, and the party and state bureaucracies. These layers are porous, and regulations do not proceed through them in a purely linear fashion.
What Can Companies Do to Get Ready for AI Regulation
You may be wondering what you and your company can do about all these regulations you may be subject to.
The first thing is to bring in experts to determine which policies you are subject to and which jurisdictions apply to you.
Second, have a clear strategy and make sure leadership understands both the power and the peril of AI. As discussed in the Stanford HAI event, these are early days, and most of the frameworks for understanding AI models are still being built. There are new challenges around digital labor and labor surveillance. The best way to get ready for coming AI regulation is to educate yourself and follow best practices for ethics and privacy protection.
The big challenge is dealing with differing standards, for example between the EU and the US. Businesses don't want confusion and inconsistency, and the pressure is on the US government to align its standards.
If we play our cards right with sensible regulation and proper support for innovative uses of AI to address science’s most pressing issues, it can rewrite the scientific process. - Eric Schmidt
We want to hear from you. We will post a poll on our LinkedIn page asking whether there is a need for a fresh approach to regulation in the age of AI. Find the poll there, add your comments, and let us know your thoughts and opinions on AI regulation!