AI and Environmental Impact
Luiza Jarovsky
Co-founder of the AI, Tech & Privacy Academy, LinkedIn Top Voice, Ph.D. Researcher, Polyglot, Latina, Mother of 3. Join our AI governance training (1,000+ participants) & my weekly newsletter (40,000+ subscribers)
AI Policy, Compliance & Regulation | Edition #146
PERSONAL REQUEST: You are currently subscribed to this newsletter's LinkedIn version; please move your subscription to my independent newsletter here. You'll receive the upcoming editions earlier and gain access to additional features. Thank you!
Hi, Luiza Jarovsky here. Welcome to the 146th edition of this newsletter on the latest developments in AI policy, compliance & regulation, read by 38,300+ subscribers in 155+ countries. I hope you enjoy reading it as much as I enjoy writing it!
In this week's AI Governance Professional Edition, I'll discuss the latest legal decisions on AI copyright and the potential outcomes everyone in the AI field should know. Paid subscribers will receive it on Friday. If you are not a paid subscriber yet, upgrade your subscription to receive two weekly newsletter editions (this free newsletter + the AI Governance Professional Edition), access all previous analyses, and stay ahead in the fast-paced field of AI governance.
Don't miss my intensive winter training! This December, join me for a 3-week AI Governance Training (8 live sessions; 12 hours total), already in its 15th cohort. Join over 1,000 people who have benefited from our training programs. Use coupon code EARLYBIRD1516 to secure your spot and enjoy a 10% early bird discount—available only until Wednesday. Learn more and register here.
A special thanks to Usercentrics for sponsoring this week's free edition of the newsletter. Check out their content:
Privacy-led marketing practices allow you to gather high-quality user data and customer preferences with consent. With the Usercentrics consent management platform, you can strengthen trust with your audience, stay ahead of privacy regulations, and foster sustainable growth. Put privacy and consent at the center of your 2025 growth plan. Learn how here.
AI and Environmental Impact
If you are interested in the environmental impact of AI, including water consumption, climate, and sustainability implications, here are five great resources to learn more. Download, read, and share:
1. The Environmental Impacts of AI - Primer, by Sasha Luccioni, Bruna Sellin Trevelin, and Margaret Mitchell. → Read it here
2. The Climate and Sustainability Implications of Generative AI, by Noman Bashir, Priya Donti, James Cuff, Sydney Sroka, Marija Ilic, Vivienne Sze, Christina Delimitrou, and Elsa Olivetti. → Read it here
3. Measuring the Environmental Impacts of AI Compute and Applications, by the OECD. → Read it here
4. How Much Water Does AI Consume? The Public Deserves to Know, by Shaolei Ren. → Read it here
5. Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models, by Pengfei Li, Jianyi Yang, Mohammad Atiqul Islam, and Shaolei Ren. → Read it here
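Since the last two resources focus on AI's water footprint, here is a minimal back-of-envelope sketch, in Python, of the kind of accounting they describe: operational water use combines on-site cooling water with the water embedded in off-site electricity generation. The parameter values below are illustrative assumptions, not figures from the papers.

```python
# Back-of-envelope operational water footprint of an AI workload, loosely
# following the on-site + off-site accounting discussed in resource 5.
# All parameter values are illustrative assumptions, not the papers' data.

def water_footprint_liters(
    server_energy_kwh: float,  # energy drawn by the servers for the workload
    wue_onsite: float = 1.8,   # assumed on-site water use effectiveness (L/kWh)
    pue: float = 1.2,          # assumed power usage effectiveness of the site
    ewif: float = 3.1,         # assumed off-site water per kWh of grid power (L/kWh)
) -> float:
    """On-site cooling water plus water embedded in electricity generation."""
    onsite_water = server_energy_kwh * wue_onsite
    offsite_water = server_energy_kwh * pue * ewif
    return onsite_water + offsite_water

# Hypothetical training run drawing 1,000,000 kWh of server energy.
print(f"{water_footprint_liters(1_000_000):,.0f} liters")
```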
Here is one of my favorite quotes from Kate Crawford, from her book "Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence" (this was the first book I recommended in our AI Book Club; join 1,750+ participants here):
"AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material, made from natural resources, fuel, human labor, infrastructures, logistics, histories, and classifications. AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large datasets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes AI systems are ultimately designed to serve existing dominant interests. In this sense, artificial intelligence is a registry of power"
Judge Sides with OpenAI
A U.S. judge sides with OpenAI in an AI copyright lawsuit filed by Raw Story and AlterNet over copyright management information (CMI) removal. Take note of the second quote below and my commentary:
"Plaintiffs argue that they have standing to pursue two forms of relief. First, Plaintiffs argue that they have standing to pursue damages because "the unlawful removal of CMI from a copyrighted work is a concrete injury." Opp. at 7. Second, Plaintiffs argue that they have standing to pursue injunctive relief, because they have alleged that there is a substantial risk that Defendants' program will "provide responses to users that incorporate[] material from Plaintiffs' copyright-protected works or regurgitate[] copyright-protected works verbatim or nearly verbatim." Compl. , 52; see also Opp. , at 9-10. Defendants respond that neither theory of harm identifies a concrete injury-in-fact sufficient to establish standing."
"Let us be clear about what is really at stake here. The alleged injury for which Plaintiffs truly seek redress is not the exclusion of CMI from Defendants' training sets, but rather Defendants' use of Plaintiffs' articles to develop ChatGPT without compensation to Plaintiffs. See Compl. ~ 57 ("The OpenAI Defendants have acknowledged that use of copyright-protected works to train ChatGPT requires a license to that content, and in some instances, have entered licensing agreements with large copyright owners ... They are also in licensing talks with other copyright owners in the news industry, but have offered no compensation to Plaintiffs."). Whether or not that type of injury satisfies the injury-in-fact requirement, it is not the type of harm that has been "elevated" by Section 1202(b )(i) of the DMCA. (...). Whether there is another statute or legal theory that does elevate this type of harm remains to be seen. But that question is not before the Court today."
→ In simple terms, the lawsuit filed by Raw Story & AlterNet argued that: a) OpenAI removed copyright management information (CMI), infringing the Digital Millennium Copyright Act (DMCA) and causing harm; and b) a court order was needed to stop OpenAI from continuing to do so.
→ The judge was not convinced and dismissed both claims, siding with OpenAI. The judge concluded that the plaintiffs actually wanted to sue OpenAI over the lack of compensation for using their content to train AI.
→ Essential and controversial AI copyright issues, such as consent, compensation, and infringing outputs, were not addressed in this decision (the plaintiffs can still appeal). Also, in my view, the plaintiffs' lawyers should probably have tried a different legal approach.
Unpopular Opinion
Many tech executives have expressed categorical opinions about how AI should be regulated (e.g., the recent letter by Meta, Spotify and others). Most of them don't have any legal expertise and, at the same time, have too much at stake to offer an impartial perspective. Their opinion should be worth the same as anyone else's.
The EU is Building AI Factories
Did you know that the EU is building AI factories, and they're expected to be ready in early 2025? Here's what you need to know, including what was announced this week:
→ The AI factories are part of the package of measures announced by the EU in January 2024, designed to support AI startups and innovation.
→ In a nutshell, according to the EU plan:
"AI Factories are dynamic ecosystems that foster innovation, collaboration, and development in the field of AI. They bring together the necessary ingredients – computer power, data, and talent– to create cutting-edge generative AI models. They serve as hubs driving advancements in AI applications across various key domains, from health to energy, and from manufacturing to meteorology."
→ More specifically, these AI Factories are meant to be a new pillar of the activities of the EU's supercomputing Joint Undertaking, including:
→ "Acquiring, upgrading and operating AI-dedicated supercomputers to enable fast machine learning and training of large General Purpose AI (GPAI) models;
→ Facilitating access to the AI-dedicated supercomputers, contributing to the widening of the use of AI to a large number of public and private users, including startups and SMEs;
→ Offering a one-stop shop for startups and innovators, supporting the AI startup and research ecosystem in algorithmic development, testing, evaluation and validation of large-scale AI models, providing supercomputer-friendly programming facilities and other AI-enabling services;
→ Enabling the development of a variety of emerging AI applications based on General Purpose AI models."
→ This week, the EU announced that the first 7 proposals for AI Factories were submitted by 15 Member States and two associated participating states. These proposals will be evaluated by an independent panel of experts, the selected AI factories will be announced next month, and they're expected to be ready by early 2025.
→ The EU hopes that these AI factories will enhance its competitiveness in the field of AI, particularly against strong US and Chinese rivals. It's an interesting package of efforts, and if it works, we should see the results in the coming years.
Early AI Governance Efforts Matter
As AI laws are enacted worldwide and the compliance clock starts ticking, every tech company should be working on its AI governance strategy, including early compliance efforts. Inaction now might become costly later. People are watching. Join the discussion on LinkedIn or Twitter/X.
Join our Intensive Winter Training
If you are dealing with AI-related challenges, you can't miss our acclaimed live online AI Governance Training—now in its 15th cohort. In December, we’re offering a special intensive format: all 8 lessons with me (12 hours of live learning) condensed into just 3 weeks, wrapping up before the holidays.
→ This is an excellent opportunity to jumpstart 2025 and advance your career, especially for those who couldn’t participate earlier due to other commitments.
→ Our unique curriculum, carefully curated over months and constantly updated, focuses on the legal and ethical aspects of AI governance, helping you elevate your career and stay competitive in this emerging field.
→ Over 1,000 professionals from 50+ countries have advanced their careers through our programs, and alumni consistently praise their experience—see their testimonials.
Use coupon code EARLYBIRD1516 to reserve your spot and enjoy a 10% early bird discount—available only until Wednesday. We offer additional discounts for students, NGO members, and individuals in career transition. Don’t hesitate to fill out this form. I hope to see you there!
*You can also sign up for our learning center to receive updates on future training programs along with educational and professional resources.
AI and Judicial Futurism
The paper "Predictability,?AI, And Judicial Futurism: Why Robots Will Run The Law And Textualists Will Like It," by Jack Kieffaber, foresees that machines will replace judges & lawyers. If you like provocative articles, read this:
"I’m adamant that the textualist’s highest end—the most he can aspire to—is predictability. Now, let’s get weird; I’m equally adamant that predictability is best served by eliminating the primary (if not only) source of unpredictability in the legal system: humanity. That’s because the enemy of predictability is arbitrariness, which has three abstract causes: bias, error, and indeterminacy. Activist judges cause the first of these—and that’s what textualism seeks to remedy. But human judges are all it takes cause the latter two—and they cause it inevitably."
"I’ll finish with arguments that JudgeAI is morally undesirable. But I want to remind you from the outset that I’ve never claimed the contrary—I’ve never argued that JudgeAI is good. I’m arguing that, if you think it’s bad, you’re not a textualist. That’s where I’m heading—though I’m going to hash out JudgeAI’s moral merits as best I can while en route. So let’s begin: JudgeAI offers perfect predictability. If it still feels irksome … what could it be missing?"
"I’d like to take a step back and close with a broader point: I am not joking around here. My JudgeAI hypothetical, at the end of the day, really isn’t a hypothetical at all. It’s coming. It’s coming because there’s one institutional lodestar that trumps even predictability: money. Law, today, is an untenably expensive field; New York big law bills a first year associate who took the Bar last week at $700 an hour. But the machines I’m describing can already do better work for a fraction of the cost—and the system I’m positing can do the work for nothing. If you’re a partner at a major, multinational law firm, please listen to me: It’s time to cash out. Right now. It’s over."
Potential Solution for AI Copyright Issues?
Is there a fair and legal way out of AI copyright challenges? According to the paper “Win-win: How to Remove Copyright Obstacles to AI Training While Ensuring Author Remuneration (and Why the European AI Act Fails to Do the Magic),” by Martin Senftleben, output-based remuneration systems are the solution. Here's what the author proposes:
"Implementing output-based remuneration systems, lawmakers can establish a legal framework that supports the development of unbiased, high quality AI models while, at the same time, ensuring that authors receive a fair remuneration for the use of literary and artistic works for AI training purposes – a fair remuneration that softens displacement effects in the market for literary and artistic creations where human authors face shrinking market share and loss of income. Instead of imposing payment obligations and administrative burdens on AI developers during the AI training phase, output-based remuneration systems offer the chance of giving AI trainers far-reaching freedom. Without exposing AI developers to heavy administrative and financial burdens, lawmakers can permit the use of the full spectrum of human literary and artistic resources. Once fully developed AI systems are brought to the market, however, providers of these systems are obliged to compensate authors for the unbridled freedom to use human creations during the AI training phase and displacement effects caused by AI systems that are capable of mimicking human literary and artistic works."
"(...) the input-based remuneration approach in the EU – with rights reservations and complex transparency rules blocking access to AI training resources – is likely to reduce the attractiveness of the EU as a region for AI development. Moreover, the regulatory barriers posed by EU copyright law and the AI Act may marginalize the messages and values conveyed by European cultural expressions in AI training datasets and AI output. Considering the legal and practical difficulties resulting from the EU approach, lawmakers in other regions should refrain from following the EU model. As an alternative, they should explore output-based remuneration mechanisms. In contrast to the burdensome EU system that requires the payment of remuneration for access to human AI training resources, an output-based approach does not weaken the position of the domestic high-tech sector: AI developers are free to use human creations as training material. Once fully developed AI systems are offered in the marketplace, all providers of AI systems capable of producing literary and artistic output are subject to the same payment obligation and remuneration scheme – regardless of whether they are local or foreign companies (...)."
Next Week: Live Talk with Gary Marcus
If you are interested in AI, particularly in how we can ensure it works for us, you can't miss my live conversation with Gary Marcus [register here]:
→ Marcus is one of the most prominent voices in AI today. He is a scientist, best-selling author, and serial entrepreneur known for anticipating many of AI's current limitations, sometimes decades in advance.
→ In this live talk, we'll discuss his new book "Taming Silicon Valley: How We Can Ensure That AI Works for Us," focusing on Generative AI's most imminent threats, as well as Marcus' thoughts on what we should insist on, especially from the perspective of AI policy and regulation. We'll also talk about the EU AI Act, U.S. regulatory efforts, and the false choice, often promoted by Silicon Valley, between AI regulation and innovation.
→ This will be the 20th edition of my AI Governance Live Talks, and I invite you to attend live, participate in the chat, and learn from one of the most respected voices in AI today. Don't miss it!
To join the live session, register here. I hope to see you there!
Find all my previous live conversations with privacy and AI governance experts on my YouTube Channel.
AI Book Club: What Are You Reading?
More than 1,750 people have joined our AI Book Club and receive our bi-weekly book recommendations.
The 14th recommended book was The Quantified Worker: Law and Technology in the Modern Workplace by Ifeoma Ajunwa.
Ready to discover your next favorite read? See our previous reads and join the book club here.
Generative AI's Open Source Challenge
The paper "Generative AI’s open source challenge: policy options to balance the risks and benefits of openness in AI regulation" by Nick Botton and Mathias Vermeulen is a must-read for open-source enthusiasts. Key findings:
1?? "Lack of clarity regarding what constitutes “open source” in Generative AI has resulted in open washing;
2?? Open washing disproportionately focuses on promoting the benefits of openness without fully addressing its risks;
3?? Opening up access to external parties can improve risk mitigation measures;
4?? An open science approach to releasing models can lead to increased safety;
5?? Barriers exist that limit the potential of openness to external researchers;
6?? Current policy approaches do not adequately tackle the openness challenge."
→ There has been a positive buzz around open-source AI models, but we don't hear much about potential downsides and cases in which open-source might not be the best option.
→ This paper does a great job of breaking down the risks and outlining a practical policy framework to help balance the risks & benefits associated with openness. Among the policy options are:
→ "Threshold criteria for high-risk Generative AI models
→ Standards for responsible release
→ Systematic researcher vetting
→ A safe harbour for independent researchers
→ Subsidies for external research
→ Standards on levels of access
→ Due diligence requirements for model hosting platforms"
Job Opportunities in AI Governance
Below are 8 new AI Governance positions posted in the last few days. This is a competitive field: if you see a relevant opportunity, apply today:
More job openings: subscribe to our AI governance & privacy job boards and receive our weekly email with job opportunities. Good luck!
Thank you for reading!
If you have comments on this edition, write to me, and I'll get back to you soon.
You are currently subscribed to this newsletter's LinkedIn version; please move your subscription to my independent newsletter here. You'll receive the upcoming editions earlier and gain access to additional features.
AI is more than just hype—it must be properly governed. If you found this edition valuable, consider sharing it with friends and colleagues to help spread awareness about AI policy, compliance, and regulation. Thank you!
Have a great day.
All the best, Luiza