Supervised influence: when AI models our laws
Oscar Kavanagh
Product Strategist - AI Alignment & Investment @ Peru Consulting | Responsible AI @ University of Cambridge
Big tech remains hell-bent on celebrating an AI revolution that has yet to fully manifest [1][2][3]. Every day, consumers are pummeled with promotion for generative tools [4][5]. While AI may offer genuine potential for the future, the industry’s biggest bulls have churned out an oversaturated market complete with skyrocketing valuations and eye-watering sums spent on AI products. Whatever the future holds, right now, we’re in a hype cycle. Bubbles carry their own repercussions down the road, but one very real disaster is here and notably absent from the public discourse: weaponized lobbying.
Part 1 of 2 stories. "Big Tech votes to own its destiny" (part 2) is featured below this article.
Technology companies have been market makers for years, and the AI inflection has only amplified their influence to volatile levels. The S&P 500 reached record highs this year, with just three companies (Microsoft, Nvidia, and Apple) accounting for roughly a third of its gains. Alongside this is a significant increase in lobbying spend. According to Issue One, tech giants have spent $51 million on lobbying this year, a 14% increase from 2023. This may not seem enormous, but for firms like Meta, which increased spend by 29% over 2023, that’s a whole new brigade of anti-regulation lawyers for the United States alone.
Don't touch our stuff
This expense is to fight legislation like the Kids Online Safety and Privacy Act in the US Congress (which passed the Senate 91-3) or the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act in California (considered a modest mandate to avoid “critical harm”) [6].
It is also to tussle with regulators in the EU enforcing the General Data Protection Regulation (GDPR) and developing the EU AI Act. In opposition to measures establishing privacy, public safety, and developer guardrails around an undeniably nascent technology, ‘stifling innovation’ is the rallying cry of tech leaders. “Europe has become less competitive and less innovative compared to other regions and it now risks falling further behind in the AI era,” says Mark Zuckerberg in an open letter signed by several tech CEOs asking for less ‘fragmented’ regulation.
Fragmentation is defined in this letter as inconsistent and unpredictable regulation, which this author would argue is wordplay marketing genius on par with popularizing the term ‘hallucinations’ to describe the illogical goop created by model confabulations. Much like the alignment problem the engineers behind this letter confront in developing their own AI systems, insisting upon fully actualized and invariable parameters is a senseless approach. Protective regulation has to start somewhere.
A game of definitions
Big tech’s dismissive stance on regulation is a present danger. This attitude is often reflected by anti-regulation lobbyists skilled at spreading sweeping definitions of innovation, framing it as a force that democratizes fundamental rights, like free speech.
Daniel Leufer, who appeared on our podcast The Good Robot in the episode "The EU AI Act part 1", has seen this performance first-hand:
“I think [innovation is] used in two different ways that are not mutually compatible. It has a very neutral meaning, which is just ‘something new’. And then it has a more loaded meaning, which is ‘something new that's good’. And often industry will say, “we need more innovation”. But do we need innovation in chemical weapons? Do we need innovations in ways to undermine people's rights, etc.? Or do we need socially beneficial innovation? I find that if you listen to anti-regulation lobbying lines, et cetera, it switches between those two meanings depending on what's convenient. So I often like to say to them you need to pin down what you mean here. Are you using the word innovation in a value-laden way?” - Daniel Leufer
Animosity may be expected between regulators and technologists, but indifference to the details that enable mutual consideration shows something darker. It appears that AI leaders are not merely determined to remove guardrails; they display an active refusal to take part in the collaborative system meant to protect their users. This is also demonstrated in the enactment of policy. During the EU AI Act's formation, industry lobbyists argued for a transparency loophole in the passing of an amendment to the Act’s foundational risk provisions in Article 6(1). This article codified that potentially high-risk systems must be subject to distinct transparency requirements and responsible development practices. However, the loophole stipulation added that “A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment before that system [is placed on the market or put into service].” The amendment, which passed to the astonishment of Daniel and his colleague Caterina Rodelli, effectively permitted developers in the high-risk category to “decide whether you pose a risk” (Daniel Leufer).
In the United States, regulators have been brought to heel by an inexhaustible army of lobbyists skilled at injecting skepticism and using technical expertise to recast themselves as educators to policymakers. Craig Albright, a top lobbyist in Washington, D.C., said this brand of ‘educating’ was “the primary thing that we do.” This state of collaboration, seen in both the EU and the US, reveals a rooted willingness among policymakers to collude in an asymmetric dialogue to the detriment of civil liberties throughout the Western world. A lack of technical understanding is no excuse for regulatory restraint. In many cases, it is unnecessary to understand the intricacies of AI systems to enforce essential protections. One example, provided by Daniel Leufer, is the use of facial recognition in publicly accessible spaces, using unique identifiers to mark individuals from a watch list. This method blatantly undermines human rights protections that already exist in public spaces but remains challenged or outright ignored by large-scale developer demands.
Bridging policy and power
As AI development continues to surge in spend and market support, how can public advocates bring big tech back to the table? Innovation, under whatever definition, will remain a paramount value to tech leaders in any case brought against them.
"Artificial intelligence really builds on top of this pre-existing surveillance business model that emerged for the Internet and, an important part of that story is that a few tech companies, in developing that business model began to amass network effects, massive amounts of concentrated power and control over key resources. The infrastructures needed to build AI through compute and cloud infrastructure, hoovering up lots and lots of data about people amassing, the most skilled labor in being able to train and deploy AI systems. And to answer your question, our definition of AI is, in many ways, something that proceeds from that dynamic of concentrated power and that I think also influences the possible future scope of what could be AI for the public good." - Dr. Sarah Myers West on The Good Robot podcast
Lawmakers in the United States are swayed by lobbying culture, facing gridlock on any regulation of AI, including privacy protections. The challenge is to fight back in a landscape defined by deep pockets. While tech giants continue to exert influence against concerted reform, lawmakers can repurpose existing regulations and levy injunctions or fines when user protections are demonstrably abused.
To counter undue collaboration with tech, compromised lawmaking can be remedied by investing in institutional advocates who teach unbiased AI fundamentals to policymakers. A paper by Stuart Russell et al. provides one such framework (When code isn’t law: rethinking regulation for artificial intelligence). Human rights advocates, researchers, ethicists, and economists are uniting to channel a firm rebuke of tech leaders' developer dogma, working to critically define how AI industrialization can be challenged and molded to avoid the disastrous social effects of unbridled expansion. A highly collaborative publication by AI Now released in November embodies the value of this work: Redirecting Europe's AI Industrial Policy.
Where large-scale policy fails, states and cities must learn to apply precise enforcement where AI systems are deployed, an increasing trend [7][8]. Finally, persistence. Politicians and activists continue to lead the charge, including a new congressional group determined to pass bipartisan reform.
Today, humanity is subjected to a revolution it did not ask for, handled by a collective it does not trust. But the origins of AI are not dependent on the influence of its developers, nor are they reliant upon the surveillance business model that came to define our internet. According to Sarah Myers West, “AI has meant lots of different things over the course of almost 70 years” (EU AI Act Part 2 ep). To win AI as a force for the public good, regulators need a war chest that cuts through hype, calls out mendacious negotiation, and forces developers' hands.
Further reading & listening
领英推荐
Big Tech votes to own its own destiny
Silicon Valley is voting from the shadows. We’ve seen big-name backers support US presidential candidates and congressional races. Tech industry workers are also active contributors, notably to Vice President Harris. But a much larger influence has emerged in 2024 behind the scenes of the electorate: the silent play of Big Tech toward both US presidential campaigns.
The tech industry has a history of progressive members, but has recently blurred this paradigm with several high-profile endorsements and campaign contributions. It was hard to miss the outsized support for Donald Trump from Elon Musk, followed by other prominent technologists, including Marc Andreessen, Peter Thiel, and former Sequoia Capital head Douglas Leone [1][2]. Outspoken supporters of the Democratic ticket for Vice President Kamala Harris include LinkedIn founder Reid Hoffman, Mark Cuban, and Bill Gates. It’s a toss-up in cash and signaling. One thing that’s clear, however, is the willingness of tech leaders to convey a kind of neutrality, even if that means shedding any former progressive sentiment (such as the tech industry’s support of Biden in 2020). To this end, several Big Tech CEOs have made overtures to former President Trump along the campaign trail.
Meta CEO Mark Zuckerberg called Trump after his assassination attempt to commend his heroism. Google CEO Sundar Pichai allegedly praised his McDonald's visit. And Apple CEO Tim Cook apparently called in to discuss his frustration with EU fines levied on Apple. These reports are unconfirmed and may be examples of flattery tailored to this president’s distinct ego, but they would align with an unusual courting of presidential candidates compared to previous election cycles. Vice President Harris has a history of interactions with the tech community in her home state of California, but in response to her recent declarations on tech regulation and trust-busting, active impartiality may be the best strategy for its leaders. On this note, we also saw Amazon founder Jeff Bezos end the Washington Post’s longstanding tradition of endorsing presidential candidates, citing a “perception of bias” as a core concern.
Why the impartiality? A unifying theme is that tech leaders are more hostile to regulatory interference than ever. At the beginning of the year, the DoJ took aim at Big Tech’s large mergers and its increasing acquisition of AI outfits, scrutiny that echoes Meta’s earlier Instagram and WhatsApp purchases. In August, Google lost a major antitrust case over its search business. Antitrust lawsuits have also been filed by the DoJ and FTC against Amazon, Apple, Google, and Meta. Amidst this crackdown, tech giants have been battling fiercely to defend their moats and to fight regulation that would improve user privacy and safety. These bills include the Algorithmic Accountability Act, the Federal Artificial Intelligence Risk Management Act, the Kids Online Safety Act, and California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act. A record volume of legislation (120 AI bills in Congress) has put the tech industry on its toes, causing a groundswell of political spending to safeguard an unimpeded future. Loyalties are being calculated on this condition, to the detriment of fundamental protections and privacy for millions of Americans.
“There are some that seem to be waking up to the fact that like, ‘Holy sh*t, this guy might get elected again. I don’t want to have him, his administration, going after us,’” a person close to Trump told CNN. “What he’s saying out loud, I think they hear, and they’re taking it seriously.” - CNN source (linked below)
At the time of this article’s writing, the 2024 US Presidential Election had not been called for either candidate. That has since changed. Donald Trump’s sweeping victory across the seven swing states has put the tech industry’s strategic positioning into sharper focus. Despite anti-regulatory positions in his previous term, Trump has shown disdain for Big Tech’s size and for manufacturing moving offshore. It appears likely that Trump’s biggest backers, like Elon Musk, will gain greater influence on policy by leveraging their close relationships with the president. His unpredictable approach to regulatory scrutiny will also introduce considerable volatility. The tech landscape remains uncertain under a Trump administration 2.0, but it appears that direct lines to the president’s ear are the surest bet to dictate which restrictions or antitrust investigations are carried out or scrapped.
We saw personal appeals by tech leaders to Trump throughout the election, and we will likely see more of them in the coming years as tech companies seek to protect unmitigated AI development, acquisitions, and muted policy over their handling of the digital landscape. Whatever regulatory posture Trump takes toward Big Tech in 2025, the departments and bureaus he presides over are more likely to execute his will directly. This election has laid bare the lengths to which technocrats will go to ensure their interests are untouched. As trust in these tech organizations continues to plummet, the question remains: how much of our political future is up for determination by the few who control our digital infrastructures? Or are we willing to fight back against the encroachment on our fundamental rights and, increasingly, our democratic principles?
Further reading:
Story roundup of technology ethics stories in 2024:
Articles by Oscar Kavanagh capturing the year's integral technology-in-society developments
More on technology in 2024 elections:
According to Axios, over 1 billion people headed to polling stations worldwide in 2024, the largest number ever.
Thank you for reading! For more information about me, please check out my profile or visit oscarkavanagh.com.