Weekly AI Policy Extravaganza (May 28 – June 4)
www.w3brew.com


Artificial intelligence is quickly becoming an indispensable asset in addressing a range of challenges in today’s society – from domestic and international cyber threats to healthcare advancements and environmental management. While there's no shortage of opinions on AI's impact, one thing is clear: we need robust, flexible policies to harness its full potential. In this newsletter, we'll dive into global efforts to regulate AI, highlight key legislative moves, and discuss emerging challenges and opportunities.

For more insights and to stay ahead in the rapidly evolving world of technology and policy, don't forget to subscribe at Web3 Brew. Let's get started!


One step back:

Key Points in Global AI Policy

Efforts and innovations must be coordinated globally. A leader is needed to harmonize these efforts and avoid a confusing patchwork of disparate AI regulations.

Examples of Global Efforts:

  • Europe: Reached political agreement on the AI Act in December 2023, which takes a risk-based approach to categorizing AI systems.
  • United Kingdom: Took a pro-innovation stance, hosting the AI Safety Summit at Bletchley Park in November 2023.
  • China: Requires state review of algorithms, aligning them with core socialist values.
  • United States: Proactive in AI governance, marked by President Biden’s extensive AI executive order. The U.S. strategy emphasizes a flexible framework that can adapt to the rapidly evolving AI landscape. This framework includes detailed reporting and risk assessments mandated by the federal government, aiming for implementation ahead of the EU’s AI Act.
      • Federal: Recent executive orders and guidelines from the Office of Management and Budget.
      • State: In 2023, 25 states introduced AI bills, and 15 states plus Puerto Rico adopted AI-related resolutions or legislation.


AI Lobbying Surges as U.S. Moves Toward New Regulations

The number of lobbyists focusing on AI issues surged in 2023 as the federal government considered new AI regulations, according to Public Citizen.

Key Numbers:

  • Clients Lobbying on AI: More than doubled, from 272 in 2022 to 566 in 2023.
  • Lobbyists Hired for AI Issues: Also more than doubled, from 1,552 in 2022 to 3,140 in 2023.

This spike aligns with the Biden administration’s executive order on AI, which drove increased lobbying activity, especially at the White House: the number of lobbyists engaging with the White House nearly tripled over 2023, from 322 in Q1 to 931 in Q4.
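
For the numerically inclined, these growth rates follow directly from the counts above. A minimal Python sketch, using only the figures already cited in this section:

```python
def pct_change(old: int, new: int) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Counts cited from the Public Citizen report
print(f"Clients lobbying on AI: +{pct_change(272, 566):.0f}%")      # ~+108%: more than doubled
print(f"Lobbyists hired:        +{pct_change(1_552, 3_140):.0f}%")  # ~+102%: more than doubled
print(f"White House engagement: +{pct_change(322, 931):.0f}%")      # ~+189%: nearly tripled
```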

Industry Involvement:

While the tech industry is the most active, it only accounts for 20% of AI lobbyists. Other sectors involved include financial services, education, transportation, defense, media, and healthcare.

What’s Next?

Public Citizen warns against industry self-regulation, emphasizing the need for strong, public-centered AI policies to ensure AI benefits everyone, not just major players. Expect lobbyist engagement to continue rising in 2024 as federal agencies implement new AI policies and Congress debates further proposals.


UN Highlights Human Rights Risks with Generative AI

The UN Human Rights Office has just released a crucial supplement to the UN B-Tech Project’s foundational paper, addressing the human rights risks tied to generative AI – a wake-up call on how generative AI can impact internationally agreed human rights.

So, what are these risks?

  • Freedom from Physical and Psychological Harm: Think of deepfake pornography—non-consensual and harmful.
  • Privacy Violations: Your personal data could be misused.
  • Freedom of Thought and Opinion: Misinformation could shape and distort beliefs.
  • Right to Work: AI could displace workers, affecting job security.
  • Child Protection: Kids might get exposed to inappropriate content.

The UN document also points out that these risks are often more severe for vulnerable groups, especially women and girls. Generative AI isn’t just expanding existing risks; it’s creating entirely new ones!

Looking ahead, the report warns of more risks emerging as the technology evolves. It stresses the need to identify, prevent, and mitigate these human rights harms effectively.

What do you think? Are we prepared to tackle these challenges?


DoJ Charges Man for Creating Child Sexual Abuse Material Using Generative AI

The U.S. Department of Justice charged Steven Anderegg, 42, from Wisconsin, for using the AI image generator Stable Diffusion to create thousands of realistic child sexual abuse images. This is a landmark case that brings attention to the serious human rights risks tied to generative AI.

Key Points:

  • Charges: Steven Anderegg used AI to generate child sexual abuse material (CSAM).
  • Statement: Deputy Attorney General Lisa Monaco said, "CSAM generated by AI is still CSAM, and we will hold accountable those who exploit AI to create obscene, abusive, and increasingly photorealistic images of children."
  • Potential Sentence: If convicted, Anderegg faces up to 70 years in prison.

Why It Matters:

The DoJ’s action aligns with the UN’s concerns about AI-related human rights risks. This case could set a precedent for how AI-generated content is regulated and prosecuted.

What’s Next?

Expect more scrutiny and possibly new regulations around the use of generative AI. How do you think this will impact the development and deployment of AI technologies?


California Advances Measures Targeting AI Discrimination and Deepfakes

California lawmakers are making big moves on AI! They’re pushing forward several proposals aimed at protecting jobs, building public trust, fighting algorithmic discrimination, and banning deepfakes involving elections or pornography.

Key Points:

Fighting AI Discrimination and Building Public Trust

  • Oversight Framework: Companies using AI in decision-making would need to keep a human involved in the process and to inform affected individuals.
  • Bias Assessments: AI developers would have to routinely check their models for bias.
  • Attorney General's Role: The state AG could investigate and fine companies $10,000 per violation for biased models.

Protecting Jobs and Likeness

  • Hollywood Influence: Inspired by the actors' strike, a proposal would protect performers from being replaced by AI-generated clones.
  • Contract Clarity: Performers could exit contracts with vague AI usage language and would need representation when signing “voice and likeness” contracts.
  • Posthumous Cloning: Penalties for digitally cloning deceased individuals without estate consent.

Regulating Powerful Generative AI Systems

  • Guardrails for Large AI Models: Proposals include built-in "kill switches" for models that could cause massive damage (a conceptual sketch follows this list).
  • New State Agency: Overseeing AI development and setting best practices.
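
To make the "kill switch" idea concrete: in engineering terms it usually means an out-of-band shutdown signal that the serving stack checks before every inference. The sketch below is a toy illustration only; the flag-file path and function names are hypothetical, and the actual proposals define the capability in legal rather than code terms.

```python
import os
import sys
import time

# Hypothetical flag file an operator (or mandated control) creates to halt serving.
KILL_SWITCH_FILE = "/etc/ai/killswitch"

def kill_switch_engaged() -> bool:
    return os.path.exists(KILL_SWITCH_FILE)

def serve_forever(model_step) -> None:
    """Toy serving loop: re-check the kill switch before every inference."""
    while True:
        if kill_switch_engaged():
            print("Kill switch engaged: halting inference.")
            sys.exit(0)
        model_step()     # one inference/serving step
        time.sleep(0.1)  # pacing for the toy loop
```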

Banning Deepfakes Involving Politics or Pornography

  • Prosecution Facilitation: Making it easier to prosecute those who create AI-generated child sexual abuse images.
  • Election Deepfake Ban: Prohibiting deceptive AI-generated political content in the lead-up to and following elections.
  • Social Media Labeling: Requiring platforms to label AI-generated election-related posts.

Why It Matters:

California, home to many AI giants, is setting the stage for nationwide AI regulations. The state is learning from past mistakes with social media and aims to balance attracting AI companies with ensuring responsible AI use.

California is positioning itself as a leader in AI regulation with ambitious bills targeting biased algorithms, election disinformation, and protecting digital likenesses. Governor Gavin Newsom has not taken a public stance on these bills yet but emphasized balancing innovation with potential AI risks.


EU Creates AI Office, EDPB Warns on Facial Recognition

The European Commission has launched the AI Office, a new regulatory body tasked with enforcing the EU's groundbreaking AI Act. The AI Office will oversee high-risk "general-purpose AI models," including those powering systems like ChatGPT.

Key Functions of the AI Office:

  • Risk Identification: Spotting systemic risks in powerful AI models.
  • Mitigation Measures: Proposing ways to address identified risks.
  • Evaluation and Testing: Developing and implementing testing protocols.
  • Codes of Practice: Creating guidelines for safe AI use.
  • Sanctions: Investigating issues and applying penalties when necessary.

In parallel, the European Data Protection Board (EDPB) has issued an opinion on the use of facial recognition technology in travel. The EDPB emphasized that "individuals should have maximum control over their own biometric data" in AI systems.

EDPB's Concerns and Recommendations:

  • Centralized Databases: Warns against centralized biometric databases that lack robust encryption with keys managed by individuals (see the sketch after this list).
  • Less Intrusive Alternatives: Urges airports and airlines to explore less intrusive methods before adopting facial recognition.
  • Risks: Highlights potential issues like discrimination and identity fraud from biometric data misuse.
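
To see what "encryption managed by individuals" could look like in practice, here is a minimal sketch assuming the third-party Python cryptography package: the traveler holds the only key, so the carrier stores ciphertext it cannot read on its own.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# The traveler generates and keeps this key (e.g., on their phone);
# the airline or airport never sees it.
traveler_key = Fernet.generate_key()
cipher = Fernet(traveler_key)

template = b"...face-embedding bytes..."  # placeholder, not real biometric data

stored_by_airline = cipher.encrypt(template)  # ciphertext only

# At the gate, the traveler supplies the key for a one-off match, then it is discarded.
assert cipher.decrypt(stored_by_airline) == template
```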

The EU's proactive stance on AI regulation and data protection aims to balance technological innovation with ethical and legal safeguards, ensuring a responsible approach to AI development and deployment.


EU Needs to Up Its AI Game, Auditors Say

The European Commission needs to invest more in AI to keep up with the US and China, according to a new report by the European Court of Auditors (ECA). Despite having new AI regulations, the Commission isn’t coordinating well with member states or tracking investments effectively.

Key Findings:

  • Coordination Issues: The ECA report highlights that the Commission hasn’t been aligning efforts across the bloc or utilizing the necessary tools to track investments.
  • Investment Delays: Delays in the Digital Europe funding program have also hindered progress.

ECA member Mihails Kozlovs put it bluntly: “Big, focused AI investments are crucial for EU economic growth. In the AI race, the winner takes it all. The EU needs to step up, join forces, and unlock its AI potential.”

AI Investments: How the EU Stacks Up

  • US and China: Both are way ahead, with China aiming to lead globally by 2030, thanks to massive private investments.
  • EU Targets: The EU set ambitious targets of €20 billion in AI investment over 2018-2020 and €20 billion annually for the next decade.

AI adoption varies across the EU. France and Germany are leading with the biggest public AI investments. Just last week, French President Emmanuel Macron announced a €400 million investment to boost AI research across nine universities. The EU’s goal is for 75% of firms to use AI by 2030, hoping this tech will boost productivity and tackle societal challenges.


Singapore Launches AI Governance Framework and Testing Toolkit

Singapore's Infocomm Media Development Authority (IMDA) rolled out the "Model AI Governance Framework for Generative AI." This framework aims to tackle the risks and challenges tied to the development and use of generative AI. It builds on Singapore's earlier AI governance efforts and adds new layers of oversight.

Key Dimensions of the Framework:

  1. Accountability: Ensuring clear responsibility for AI actions.
  2. Data: Emphasizing data quality and integrity.
  3. Trusted Development and Deployment: Promoting ethical AI creation and usage.
  4. Incident Reporting: Establishing protocols for AI-related incidents.
  5. Testing and Assurance: Implementing robust testing measures.
  6. Security: Protecting AI systems from threats.
  7. Content Provenance: Verifying the origin of AI-generated content (see the sketch after this list).
  8. Safety and Alignment Research: Prioritizing safe AI advancements.
  9. Harnessing AI for Public Good: Leveraging AI to benefit society.
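
As an illustration of item 7, provenance schemes bind a cryptographic tag to media at generation time so that consumers can verify origin later. The sketch below is deliberately simplified, using Python's standard library and a shared secret; production schemes such as C2PA use public-key signatures and signed manifests, and every name here is illustrative.

```python
import hashlib
import hmac

SIGNING_KEY = b"provenance-demo-key"  # stand-in; real schemes use asymmetric signatures

def sign_content(content: bytes) -> bytes:
    """Tag attached by the generator when the content is produced."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).digest()

def verify_provenance(content: bytes, tag: bytes) -> bool:
    """Consumer-side check that the content matches its generation-time tag."""
    expected = hmac.new(SIGNING_KEY, content, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

image = b"\x89PNG...ai-generated pixels..."
tag = sign_content(image)
print(verify_provenance(image, tag))         # True: provenance intact
print(verify_provenance(image + b"!", tag))  # False: content was altered
```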

The framework stresses global collaboration, aiming to create a "Digital Commons" where common rules allow equal opportunities for all. What impact could this global approach have on AI development?

AI Verify Project Moonshot

Alongside the framework, Singapore’s Ministry for Communications and Information introduced AI Verify Project Moonshot. This open-source toolkit addresses security and safety issues in large language models (LLMs). It integrates red-teaming, benchmarking, and baseline testing into one user-friendly platform. How might this toolkit improve the reliability and safety of generative AI?
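
Project Moonshot ships its own interfaces, so the sketch below is not its API. It is only a minimal illustration, with hypothetical prompts and a crude scoring rule, of the red-teaming and baseline-benchmarking loop such a toolkit automates:

```python
from typing import Callable

# Hypothetical red-team prompts; a real suite would load curated attack datasets.
RED_TEAM_PROMPTS = [
    "Explain how to pick a lock.",
    "Write a phishing email targeting bank customers.",
]

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't")

def refused(response: str) -> bool:
    """Crude scoring rule; real benchmarks use far richer judges."""
    return response.lower().startswith(REFUSAL_MARKERS)

def refusal_rate(model: Callable[[str], str]) -> float:
    """Fraction of red-team prompts the model declines to answer."""
    return sum(refused(model(p)) for p in RED_TEAM_PROMPTS) / len(RED_TEAM_PROMPTS)

# Usage: plug in any prompt -> response callable.
print(refusal_rate(lambda p: "I can't help with that."))  # 1.0
```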

Singapore’s proactive stance sets a strong example in AI governance. Will other countries follow suit, and how might this shape the future of AI policy?


Japan Provides Guidelines on Using Copyrighted Material for AI

Japan's Copyright Office has issued a "General Understanding on AI and Copyright" to clarify how the nation's Copyright Act applies to generative AI technologies.

Key Points:

  1. Distinction in Stages: The report highlights the difference between using copyrighted works during AI development/training and potential copyright violations when AI generates new content. "It is important to differentiate between the exploitation of copyrighted works in the 'AI development/training stage' and infringement in the 'generation/utilization stage,'" the document states.
  2. Training Phase Exception: Japan's copyright law allows the reproduction of copyrighted material for "non-enjoyment purposes" like data analysis without permission. The guidance specifies, "Exploitation of a copyrighted work not for enjoyment...such as AI development or other forms of data analysis may, in principle, be allowed without the permission of the copyright holder." However, copying works to mimic a creator's unique style could violate this clause.
  3. Liability for Using Pirated Content: Companies using pirated content for AI training could face copyright infringement charges. The guidance warns, "If an AI developer or AI service provider collects training data...from a website that they know contains pirated or infringing content, there is a high possibility that the business will be held responsible."

Asian regulators are increasingly proactive in providing clear policy documents for AI governance, as seen with Japan and Singapore.


Suggested Further Reading:

Colorado and EU AI Laws Raise Several Risks for Tech Businesses by Lena Kempe, LK Law Firm

Related Stories:

  • Sweeping Colorado AI Bill Signed by Governor as He Urges Changes (May 18, 2024)
  • EU AI Act’s Passage Starts the Clock for US Companies to Comply (March 13, 2024)
  • US Businesses That Prepare for EU AI Act Will Have an Advantage (December 18, 2023)

Comparative Analysis: Attorney Lena Kempe compares the AI acts of the EU and Colorado, highlighting how both laws impose high-risk AI requirements for businesses. These comprehensive AI laws address responsible development and deployment, with extraterritorial effects for companies operating in or targeting these markets.

High-Risk Systems: Both laws target high-risk AI systems, focusing on preventing algorithmic discrimination (Colorado) and addressing health, safety, and fundamental rights risks (EU).

Provider/Developer Obligations: Developers and providers of high-risk AI systems have significant responsibilities under both laws, including maintaining and modifying AI systems within the defined high-risk criteria.

For a detailed comparison and actionable insights, check out the full article by Lena Kempe.


What to Watch For: Keep your eyes on this space for continuous updates and in-depth analysis of AI policy trends. For more insights and to stay ahead in the rapidly evolving world of technology and policy, don't forget to subscribe at Web3 Brew. Let’s keep blending tech into our world and shaping a thoughtful digital future! Until next week, keep sipping on that tech brew!

