State of AI Regulation: September 2023 Newsletter

Greetings from the Dallas AI team in Texas to all AI enthusiasts!

In this month's newsletter, our focus is on AI regulation. What is AI regulation? Why do we need it? What are the key global AI powers doing to regulate it? Let's start with a definition.

AI regulation refers to the legal and policy framework put in place by governments and international organizations to oversee and govern the development, deployment, and use of artificial intelligence (AI) technologies. The primary goal of AI regulation is to ensure that AI systems are developed and used in a manner that is safe, ethical, and aligned with societal values.

The use of AI has steadily increased over the last decade, and with this increase there has been rising interest in regulating AI, as illustrated by the search trends below.

Search trends for "AI Regulation", Sep 2018 – Sep 2023. Data by Google.

However, until the end of 2022 there was not enough momentum to make it happen. In contrast, 2023 has seen a flurry of activity around AI regulation; even some private companies have testified in public in support of it. Perhaps the popularity of ChatGPT, along with some of the past negative media coverage of AI, has brought us to this stage. With the increase in chatter and public concern, politicians have taken notice, and there have been high-profile government hearings.

In the recent Ethics issue of MIT Technology Review, technology thought leader and author Eric Schmidt writes about the need for regulation:

AI is such a powerful tool because it allows humans to accomplish more with less: less time, less education, less equipment. But these capabilities make it a dangerous weapon in the wrong hands.

In the same Technology Review issue, Coursera co-founder and one of the top voices in AI, Andrew Ng, shares his thoughts on AI regulation in the section "How to Be an Innovator: Take Responsibility for Your Work":

Regulation helps us avoid mistakes and enables new benefits as we move into an uncertain future. I welcome regulation that calls for more transparency into the opaque workings of large tech companies; this will help us understand their impact and steer them toward achieving broader societal benefits.

Let’s dive deeper into the world of AI regulation. In this newsletter we look at:

  • Key aspects and drivers of AI regulation
  • Industries and job roles impacted by AI regulation
  • AI regulation by the three big players: the EU, the US, and China
  • What companies can do to get ready for AI regulation


Here are some key drivers of AI regulation:

  1. Ethical Considerations: Regulations often include guidelines and principles that promote ethical AI development. This can include requirements to prevent bias and discrimination in AI algorithms, protect privacy, and ensure transparency in decision-making processes.
  2. Safety Standards: AI regulation may establish safety standards for AI systems, especially in critical areas like autonomous vehicles, healthcare, and aviation. These standards aim to minimize the risk of accidents and harm caused by AI.
  3. Data Privacy: Regulations often address data privacy concerns related to AI. They may require informed consent for data collection, storage, and processing, and set rules for how personal data can be used in AI applications.
  4. Transparency: Many regulations emphasize the importance of transparency in AI systems. This includes requirements for clear explanations of AI decisions, especially in contexts where they impact individuals' lives.
  5. Accountability: Regulations can establish mechanisms for holding developers and users of AI accountable for any harm caused by AI systems. This may involve liability frameworks and auditing procedures.
  6. Consumer Protection: In cases where AI is used in consumer products and services, regulations may focus on ensuring that consumers are protected from fraudulent or harmful AI applications.
  7. Sector-Specific Regulations: Some industries, like healthcare and finance, have unique AI-related challenges. Regulations in these sectors may be tailored to address specific risks and concerns.
  8. International Cooperation: Given the global nature of AI, there is growing interest in international cooperation and harmonization of AI regulations. This ensures consistency and interoperability of regulations across borders.
  9. Research and Development: Regulations may also affect the research and development of AI technologies. They can set guidelines for responsible AI research and the sharing of AI-related knowledge.
  10. Government Oversight: Regulatory agencies are often tasked with overseeing AI development and use, ensuring compliance with the established regulations.

Impact of AI on Key Industries

Even though it is generally understood that AI is becoming more prevalent in business, companies still at times see AI as something "other industries" are doing. As a result, many companies may not yet be aware of the extent of AI use in their own operations. Companies should assume they are already using AI and start considering how to manage AI regulatory risk. Given the sheer breadth of AI growth in recent years, examples of industries with considerable AI dependencies in daily operations include:

  • Financial Services, FinTech, and Payments: Consumers can sign up for credit cards, apply for loans, open brokerage accounts, and seek financial or investing advice – all without interacting directly with a live human. Credit decisions may be made by AI algorithms while advising can be done by robo-advisors. Payments are now protected by AI-powered fraud detection and security. FinTech develops new AI technology that, via acquisitions, finds its way throughout the financial services ecosystem.
  • Insurance: Insurance depends on underwriting and modeling, making it ripe for a variety of AI use cases. Similar to financial services, insurance companies can use AI in underwriting decisions. AI also enables insurers to take in data over time to provide insurance on a more dynamic basis – like mobile apps that let consumers share real-time driving data for AI-powered technology to calculate premiums or safe-driving bonuses.
  • Automotive: As AI continues to advance, it is expected to play an even greater role in the automotive industry, enabling vehicles to become smarter and more capable. "Driver assist" technology can make AI-powered decisions that alert inattentive drivers or take emergency action (like braking at intersections). Autonomous and self-driving vehicles are in the spotlight for safety incidents and are already under study; pressure for regulation will only increase.
  • Logistics: Similar to the automotive industry, logistics may see continued movement to autonomous delivery vehicles powered by self-driving AI.
  • Health Care & Medical Devices: This sector is already heavily regulated, and AI will bring even higher levels of scrutiny and control. While AI can improve patient care, reduce burdens on providers, and help avoid medical errors, it can also cause harm, so careful regulation has to be imposed.
  • Retail, E-commerce, and Hospitality: AI enables retailers and hospitality companies to provide a much more personalized relationship with customers. AI can also help optimize product recommendations – imagine, for example, showing your face to an in-store kiosk and receiving makeup and beauty product recommendations. Privacy concerns will trigger regulation in retail.
  • Marketing & Advertising:?Digital advertising already relies to a large extent on AI in campaign planning, bidding on paid media, and campaign measurement. As advertising moves away from cookies and individual-level tracking, AI-powered probabilistic modeling becomes more intrusive and likely to be subjected to regulation.
  • Manufacturing: Manufacturing has largely moved to automated, robot-driven assembly processes. AI has the potential to automate additional parts of the manufacturing process that have continued to rely on human input despite the robotics revolution, like quality control or safety inspections.
  • Media & Entertainment:?Many people may have experienced the moment where “the algorithm” of their content providers (like Netflix or Spotify) started to get their preferences right. AI enables personalized recommendations for viewing or listening. But AI is also enabling content creation itself. For example, electronic gaming already enables “worlds” where some characters’ behavior is partially created in real-time by AI.
  • Education:?AI is being utilized in a variety of applications within the education industry, such as in personalized / remote learning and in systems that assist teachers in creating and delivering lessons.

AI Regulations from Key Global Players

Let's take a look at the three big AI powers and examine their approach and current actions around AI regulation.

1 - EU AI Act 2023

The EU has been at the forefront of AI regulation and has developed the most comprehensive draft AI regulation to date, which has spurred discussion, praise, and concern all at once. Let's unpack what the EU Act is and how it will impact AI across geographies and segments.

The EU Artificial Intelligence Act is one of the most comprehensive frameworks for artificial intelligence. The AI Act was originally proposed by the European Commission in April 2021. A general position on the legislation was adopted by the European Council in late 2022, and the European Parliament adopted its amendments to the Act on June 14, 2023; that draft text now serves as the negotiating position in talks between the member states and the European Commission. The EU Artificial Intelligence Act endeavors to strengthen Europe's position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects European values and rules, and harness the potential of AI for industrial use.

The cornerstone of the AI Act is a classification system that determines the level of risk an AI technology could pose to the health and safety or fundamental rights of a person. The framework includes four risk tiers: unacceptable, high, limited, and minimal.


The EU's risk-based approach to AI regulation.
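
To make the four-tier framework concrete, here is a minimal illustrative sketch in Python of how an organization might tag its own AI use cases against the Act's tiers. Only the tier names come from the Act; the example use cases, the table, and the lookup helper are our own assumptions for illustration, not a legal classification.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright under the Act
    HIGH = "high"                  # permitted, but subject to strict obligations
    LIMITED = "limited"            # lighter transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Illustrative examples only (our assumptions); the Act's annexes and
# legal analysis, not a lookup table, determine the real classification.
EXAMPLE_USE_CASES = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "credit scoring for loan decisions": RiskTier.HIGH,
    "resume screening for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "email spam filter": RiskTier.MINIMAL,
}

def lookup_tier(use_case: str) -> RiskTier:
    """Return the illustrative tier for a known example use case,
    defaulting conservatively to HIGH for anything not in the table."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case, tier in EXAMPLE_USE_CASES.items():
        print(f"{case}: {tier.value}")
```

An inventory like this, however rough, is a useful first step toward the compliance exercise discussed later in this newsletter: knowing which of your systems could fall into the high-risk tier tells you where the bulk of the obligations would land.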


The AI Act sits alongside several other components of the EU's broader digital regulatory framework:

  • The Data Governance Act introduces new methods for managing data to enhance trust and facilitate data sharing.
  • The Digital Markets Act aims to create fair and competitive digital markets to foster innovation and growth in the digital sector.
  • The Digital Services Act is designed to establish a safer digital environment that protects the rights of all digital service users.
  • The Data Act regulates data access in various business relationships, including B2B, B2C, and interactions with government entities, as well as data migration between cloud service providers.
  • The AI Act imposes strict regulations on high-risk AI systems and prohibits certain practices.

EU policy on AI.

“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the Executive Vice-President for Europe Fit for the Digital Age and Competition, added in a statement. “Future-proof and innovation-friendly, our rules will intervene where strictly needed: when the safety and fundamental rights of EU citizens are at stake.”

Not everyone agrees that the rules are innovation-friendly; there is concern that they may stifle innovation.

The Artificial Intelligence Act proposes steep non-compliance penalties. For companies, fines can reach up to €30 million or 6% of global annual turnover, whichever is higher. Submitting false or misleading documentation to regulators can also result in fines. The proposed law further aims to establish a European Artificial Intelligence Board, which would oversee the implementation of the regulation and ensure uniform application across the EU.
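
As a quick arithmetic sketch of how that penalty ceiling scales with company size (assuming, per the Commission's proposal, that the cap is the higher of €30 million or 6% of global annual turnover), consider:

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling on fines for the most serious violations under the draft
    AI Act: EUR 30 million or 6% of global annual turnover, whichever is
    higher (an assumption based on the Commission's proposal)."""
    return max(30_000_000.0, 0.06 * global_annual_turnover_eur)

# A company with EUR 2 billion in global annual turnover faces a ceiling
# of EUR 120 million rather than EUR 30 million.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 120,000,000
```

For smaller companies the flat €30 million figure dominates, which is one reason the Act also includes measures aimed at supporting small-scale providers, as noted below.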

To this end, and despite various shortcomings, the Act is a valuable start in helping to shape global norms and standards and promote trustworthy AI systems that are, at least to some degree, more consistent with human values and interests. The Act also promotes innovation, including through regulatory sandboxes and specific measures to support small-scale users and providers. Some may say that in preserving rights it does not go far enough, while innovators and developers may argue that it goes too far. But promoting ethical innovation and fair competition while balancing rights is not easily done; compromise and the right regulatory framework are often needed.

If you are interested in hearing expert opinions about AI regulation and the EU AI Act, we recommend checking out this event from Stanford's Institute for Human-Centered AI (HAI). The full discussion is available on YouTube.

2 - U.S. AI Regulation in 2023

The rapid growth of AI has led to increasing focus on how best to regulate it. Several use case-specific AI rules emerged in 2022, and 2023 promises further and more general AI compliance obligations, driven by state data privacy laws, FTC rulemaking, and new NIST AI standards.


State Privacy Laws – General Requirements for Consumer-Facing AI in 2023

2023 is likely to see some of the first general obligations that apply across AI use cases, contained in privacy legislation passed by certain states. California, Connecticut, Colorado, and Virginia recently passed general data privacy legislation that goes into effect at various times in 2023. These laws contain provisions governing “automated decision-making,” which includes technology that facilitates AI-powered decisions.

New Rules for Artificial Intelligence

Senate Majority Leader Chuck Schumer has called for new rules for artificial intelligence, launching an effort to write new rules for the emerging realm of AI with the aim of accelerating U.S. innovation while staving off a dystopian future.

Schumer called for more federal involvement in maintaining U.S. competitiveness, which he said will require careful attention to mitigating AI’s potential harms.

“The first issue we must tackle is encouraging, not stifling, innovation,” Schumer said at the Center for Strategic and International Studies. “But if people don’t think innovation can be done safely, that will slow AI’s development and even prevent us from moving forward.”

He added in a fact sheet that “with so much potential, the U.S. must lead in innovation and write the rules of the road on AI and not let adversaries like the Chinese Communist Party craft the standards for a technology set to become as transformative as electricity.”

Schumer joins the Biden administration, tech-industry leaders, and other members of Congress in seeking to put limits on the technology, amid fears that AI tools can be abused to manipulate voters, pull off sophisticated financial crimes, displace millions of workers, or create other harms.

But imposing new regulations on a set of technologies that are still under development will be difficult for Congress, which often waits years or even decades before establishing guardrails for new industries.

In addition, lawmakers will be trying to impose new rules in several areas—such as copyright and liability—where tech companies have battled with other industries and consumers for years.

Schumer plans a series of forums in the fall to gather insights from industry leaders, interest groups, and AI developers. He also will be pushing Senate committees and other groups to develop bipartisan solutions to potential problems. He is aiming for the process to produce legislation in a matter of months.

3 - China’s AI Regulations

It is well-known that China has vowed to become a technology and AI leader. As noted in the working paper mentioned below, "China is in the midst of rolling out some of the world’s earliest and most detailed regulations governing artificial intelligence (AI). These include measures governing recommendation algorithms—the most omnipresent form of AI deployed on the internet—as well as new rules for synthetically generated images and chatbots in the mold of Chat-GPT. China’s emerging AI governance framework will reshape how the technology is built and deployed within China and internationally, impacting both Chinese technology exports and global AI research networks."

Over the past two years, China has rolled out some of the world’s first binding national regulations on artificial intelligence (AI). These regulations target recommendation algorithms for disseminating content, synthetically generated images and video, and generative AI systems like OpenAI’s ChatGPT.

The rules create new requirements for how algorithms are built and deployed, as well as for what information AI developers must disclose to the government and the public. Those measures are laying the intellectual and bureaucratic groundwork for a comprehensive national AI law that China will likely release in the years ahead, a potentially momentous development for global AI governance on the scale of the European Union’s pending AI Act. Together, these moves are turning China into a laboratory for experiments in governing perhaps the most impactful technology of this era. China currently lists over 100 artificial intelligence companies capable of producing services similar to OpenAI’s ChatGPT.

Many in the US and the West do not pay enough attention to China's AI policy, or dismiss it as not relevant. That is a mistake. Despite China’s drastically different political system, policymakers in the United States and elsewhere can learn from its regulations. In an insightful working paper from the Carnegie Endowment, Matt Sheehan reverse engineers China's AI regulations and shares his insights.

China's AI regulation.

China’s regulations create new bureaucratic and technical tools, including disclosure requirements, model auditing mechanisms, and technical performance standards.

These tools can be put to different uses in different countries, ranging from authoritarian controls on speech to democratic oversight of automated decision-making. Charting the successes, failures, and technical feasibility of China’s AI regulations can give policymakers elsewhere a preview of what is possible and what might be pointless when it comes to governing AI.

The paper notes that:

Chinese AI regulations share three structural similarities:

  • The choice of algorithms as a point of entry
  • The building of regulatory tools and bureaucratic know-how
  • The vertical and iterative approach that is laying the groundwork for a capstone AI law

China’s existing AI regulations are motivated by three main goals:

  • The first, overriding goal is to shape the technology so that it serves the CCP’s agenda, particularly for information control and, flowing from this, political and social stability.
  • The second major goal behind Chinese AI governance is to address the myriad social, ethical, and economic impacts AI is having on people in China.
  • The third goal is to create a policy environment conducive to China becoming the global leader in AI development and applications.

Chinese AI governance is approaching a turning point. After spending several years exploring, debating, and enacting regulations that address specific AI applications, China’s policy-making community is now gearing up to draft a comprehensive national AI law.

The paper presents a four-layered policy funnel through which China formulates and promulgates AI governance regulations. Those four layers are real-world roots, Xi Jinping and CCP ideology, the world of ideas, and the party and state bureaucracies. These layers are porous, and regulations do not proceed through them in a purely linear fashion.

What Can Companies Do to Get Ready for AI Regulation

You may be wondering what you and your company can do about all of the regulations you may be subject to.

The first step is to bring in experts to determine which policies you are subject to and which jurisdictions apply to you.

Second, have a clear strategy and make sure leadership understands both the power and the peril of AI. As discussed in the Stanford HAI event, these are early days, and most of the frameworks for understanding AI models are still being built. There are new challenges around digital labor and labor surveillance. The best way to get ready for coming AI regulation is to educate yourself and follow best practices for ethics and privacy protection.

The big challenge is dealing with differing standards, e.g., from the EU and the US. Businesses don't want confusion and inconsistency, so the pressure is on the US government to align standards.


If we play our cards right with sensible regulation and proper support for innovative uses of AI to address science’s most pressing issues, it can rewrite the scientific process. - Eric Schmidt

We want to hear from you! We will post a poll on our LinkedIn page about this, and we look forward to hearing your thoughts and opinions on AI regulation.

Poll preview: Is there a need for a fresh approach to regulation in the age of AI? Find the poll on our LinkedIn page, choose one of the options below, and add your comments!

  • A new, centralized external regulator and enforcer is needed to address all aspects of AI
  • Existing regulatory bodies such as the FTC should add AI to their portfolio; no need to create a central AI regulatory body
  • Self-regulation is better. Current laws are sufficient but need better enforcement


