Why the AI apocalypse is good for net-zero

Let’s be honest – we have no idea what to do about the risk of runaway tech. But we do know how to tackle runaway emissions

In 2019, GPT-2 could not count to ten.

This is the first line of ‘Managing AI Risks in an Era of Rapid Progress,’ an open paper authored by twenty-four of the world’s leading scholars, including Geoffrey Hinton, Dawn Song, and Yuval Noah Harari. The rest of the piece is not nearly as endearing. “Humanity is pouring vast resources into making AI systems more powerful but far less into safety and mitigating harms,” they argue. ‘AI could cement or exacerbate global inequities, or facilitate automated warfare, customized mass manipulation, and pervasive surveillance.’ Governments urgently need to respond with global safety standards, they conclude, whilst funders should pivot a third of R&D budgets towards safety and ethical development.

The paper, released at the end of last year, is an example of the great and good usefully stating the obvious: we have no idea what we are doing when it comes to cutting-edge tech. For innovations which long ago lost their novelty, like smartphones and social media, the evidence-based consensus around harms is only now starting to emerge. It includes issues around mental health, attention, IQ, and addiction, and governments still have little idea how to address them.

For AI – whose risks should be considered in the same league as pandemics and nuclear war, according to another statement organised by the Center for AI Safety in May 2023 – the default wait-and-see approach may be a luxury we cannot afford.

What are we to do?

Slaying the (slippery) dragon

I put this question to Laurent Muzellec, Dean of Trinity Business School, in the latest episode of Conversations on Climate. Like many leaders in similar positions, he is confronting the task of regulating AI in education. Fortunately for Trinity, Professor Muzellec has spent his entire career studying digital marketing and platform technologies. He founded the Trinity Centre for Digital Business before GPT was even born.

“It is disturbing that a tool that has such profound effect potentially on humanity has been released with minimum safeguards, and thrown in the hands of the public,” was his view on the current state of regulation. “I was surprised to see that a lot of people in the industry, even the tech industry, were kind of saying: you need to be careful.”

So why, I asked him, are these technologies historically so difficult for governments to get to grips with?

The answer is that regulating tech in the twenty-first century has been like fighting a hydra covered in baby oil. A brief selection of challenges and dilemmas might include:

  • Tech platforms are multi-sided, so they can deny responsibility for content they host (the ‘Section 230’ problem) whilst their algorithms remain secret and proprietary
  • Regulating social and communication technologies may conflict with fundamental freedoms of speech
  • Innovation and diffusion both move faster than regulators can keep up with, and expertise in complex technologies is concentrated in the private sector
  • These technologies are developed by some of the world’s most powerful companies with strong political connections. Between 2005 and 2019, the Big Five spent half a billion dollars lobbying the US Congress.
  • Market monopolies, platform/scale effects and IP make even the more traditional services like Amazon a challenge (see the 2017 classic, ‘Amazon’s Antitrust Paradox’)
  • Reluctance on the part of governments to appear totalitarian. The fact that China regulates tech so strongly may actually be a drag on action in the rest of the world. (State censorship behind the ‘Great Firewall of China’ is so sensitive that it briefly banned the letter ‘N’ in 2018!)

These concerns originally applied to the likes of Facebook and Google, but they are even more relevant today for AI. In particular, innovation is moving at a pace that is almost difficult to comprehend for us wetbrains living in the analogue world. As the authors of ‘Managing AI Risks’ point out, big tech already has the cash on hand to scale up LLM training compute by a factor of 100 to 1,000. Once AI starts autonomously training itself, the horse will have well and truly bolted.

Should we just love the bomb?

Perhaps we should be more optimistic. Firstly, there is no guarantee that AI will ever ‘get there’ to the point it could pose a major risk; it’s a better search engine, but it still can’t even illustrate a lower-case r. Creativity and judgement remain far beyond the uncanny valley. It is certainly not conscious, and perhaps never will be; who can say when technology may plateau or run up against inherent limits? Regardless, humanity may develop alongside it. We eventually learned to effectively regulate twentieth-century harms like tobacco and nuclear weapons.

Alternatively, AI may turn out to be a net boon. We are certainly due an empowering, democratising technology that lives up to its promise, and this might be it. It could unlock a productivity revolution that reshapes the world for the better, solving complex problems and liberating workers from drudgery. Should it achieve consciousness, we might also discover that AGI is a much kinder, more generous being than we are. Ceding more and more social functions to our new overlord might make the world a safer, happier place.

AI: climate’s wingman?

That may sound too Pollyanna-ish for you. It probably does for me, too. But rather than chase an unknown future around in a circle, let’s take the rest of this article to consider the lesson here for climate, starting with the parallel that the authors of ‘Managing AI Risks’ draw:

We are already behind schedule for this reorientation. We must anticipate the amplification of ongoing harms, as well as novel risks, and prepare for the largest risks well before they materialize. Climate change has taken decades to be acknowledged and confronted; for AI, decades could be too long.

I agree that we need to start doing some serious thinking about AI. But if we are going to put the world on emergency footing to deal with a problem of unknown magnitude, which may or may not transpire at some unknown point in the future, and for which we have no good solutions…why don’t we use known tools to solve for a well-defined, existential and already-present threat first?

We don’t know what to do about AI.

But we know exactly what to do about climate change.

Solving at the source

This is not to underestimate the challenge of climate action. It is a wicked problem. But I sincerely believe we are unnecessarily overcomplicating the solutions.

Last week, I read a story about start-ups like Winnow and Afresh using AI to track food waste as an emissions-reduction technology. Western societies do throw away a lot of food, and that wastes a lot of carbon – as much as 10% of global emissions, by some measures. But having our brightest entrepreneurs pour their lives into creating tattle-tale rubbish bins feels like a profound waste of talent. At least the Chinese version of Big Brother works at scale. Have we all forgotten the Pareto principle?

Instead, let’s remind ourselves of the most obvious point of leverage in the system – producers. This month saw the release of the new Carbon Majors report, which lists the 57 organisations responsible for 80% of global fossil-fuel and cement CO2 emissions since the Paris Agreement. After Chinese coal, the biggest emitters were ExxonMobil (3.6 gigatonnes of CO2, or 1.4% of the world’s emissions), Shell, BP, Chevron and Total. And a majority – 65% of state-owned enterprises and 55% of private firms – expanded production in that period too.
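
Those headline numbers pass a quick sanity check. Here is a back-of-envelope sketch in Python, using the rounded figures above – the exact report values will differ slightly:

    # Back-of-envelope check on the Carbon Majors figures
    # (rounded values from the article, not the report's own maths).
    exxon_gt = 3.6          # ExxonMobil's attributed emissions since Paris, GtCO2
    exxon_share = 0.014     # stated share of global emissions (1.4%)

    implied_global_gt = exxon_gt / exxon_share
    print(f"Implied global emissions since Paris: {implied_global_gt:.0f} GtCO2")
    # ~257 GtCO2 over roughly seven years, i.e. ~37 GtCO2 per year --
    # consistent with annual global fossil CO2 emissions in the high-30s
    # of gigatonnes.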

Most of these big private producers are listed and domiciled in the West; the US, not Saudi Arabia, is now the world’s biggest producer of fossil fuels. If Western governments got together and passed a unified wellhead tax on these producers at source, we would at a stroke reprice the entire global economy for a sustainable future. Combine that with an immediate ban on exploration and a series of progressively lower production caps, and decarbonisation would become a reality by default – no matter what happens in the consumer-facing economy.
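
To make the ‘reprice by default’ mechanism concrete, here is a minimal sketch of how a wellhead tax would flow through to fuel prices. The tax rate is purely illustrative – I am not proposing a number – and the emissions factor is the standard figure of roughly 0.43 tonnes of CO2 per barrel of crude:

    # Rough sketch of how a wellhead tax propagates into fuel prices.
    # Both numbers below are assumptions for illustration only.
    TAX_PER_TONNE_CO2 = 50.0   # USD per tonne of CO2, illustrative rate
    CO2_PER_BARREL = 0.43      # tonnes of CO2 per barrel of crude when burned

    tax_per_barrel = TAX_PER_TONNE_CO2 * CO2_PER_BARREL
    print(f"Wellhead levy: ${tax_per_barrel:.2f} per barrel")
    # ~$21.50 per barrel at $50/tCO2: every downstream product inherits
    # the carbon price automatically, with no consumer-facing scheme needed.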

That would still leave state-entwined producers in the rest of the world, particularly the Middle East, China and South Asia (and any Western firms who flee to the Badlands). But the EU’s new Carbon Border Adjustment Mechanism (CBAM) is exactly the tool we need to enforce a similar carbon price on external agents. A unified front has a good chance of drawing China into the agreement too; and no matter how powerful Aramco and Adnoc are at home, business eventually bends to global markets.
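
The border-adjustment logic itself fits in a few lines. A simplified sketch – the real CBAM covers specific sectors and phases in over time, and the steel figures below are illustrative assumptions:

    # Simplified border-adjustment logic: imports pay for their embedded
    # emissions at the gap between the EU carbon price and any carbon
    # price already paid at origin.
    def cbam_levy(embedded_tco2: float, eu_price: float, origin_price: float) -> float:
        return embedded_tco2 * max(eu_price - origin_price, 0.0)

    # A tonne of steel with ~1.9 tCO2 embedded, an EU carbon price of
    # 70 EUR/t, and no carbon price paid at origin (illustrative numbers):
    print(f"{cbam_levy(1.9, 70.0, 0.0):.2f} EUR per tonne of steel")
    # 133.00 EUR -- the import faces the same carbon cost as EU production.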

It's not a perfect plan. It would require great political courage, and some technical work at the border. But it is a radically simple, cheap, scalable, and effective solution. Compared to the genuine dilemmas at the heart of regulating tech and AI, it feels like a no-brainer.

The world doesn’t need GPT to count to 57. Why not start there?


Sarah Needham

Inclusive Leadership Accelerates Results | Executive Leadership Advisor - ICF Professional Certified Coach | Chartered Engineer | B-Corp Certified Business

7 months

I love the way you challenge my thinking... We need more of these type of challenges to help open up perspectives & opportunities. Now my brain will take a little time to digest & we can talk further when we meet in Malta at the end of the month - I'm looking forward to it!

Marc Lawn

CEO | Global Business Advisor | People Centric Solutions | Turning Sustainable Visions into Operational Realities | Delivering Growth Through Innovation and Collaboration

7 months

Christopher Caldwell - fascinating post, & thanks for the prompt. In terms of the tax, it's an interesting thought, but may have practical challenges as I think from the data 65% of the emissions are state-owned entities. I suspect a reasonable proportion are not exporting. That aside, & turning to AI. I see AI having a pretty significant overlap with sustainability - due to its significant demand for energy. There is increasing risk that AI becomes part of the net-zero problem, as well as the longer term societal challenges. What, I think, is critical, is for us to think carefully. Many of the patterns we are seeing are 'human nature' as described in the 1972 Limits to Growth book. Now that's just a little scary.

NICO JOHNSON

18 yrs Solar & Renewables, Podcaster, Clean Energy Investor, Advisor & Executive Coach

7 months

Always bringing fascinating conversations to light. Thank you Christopher Caldwell. Also, beautiful videography!

Scott Newton

Managing Partner, Thinking Dimensions | LinkedIN Top Voice 2024 | Bold Growth, M&A, Strategy, Value Creation, Sustainable EBITDA | NED, Senior Advisor to Boards, C-Level, Family Office, Private Equity | Techstars Lead Mentor

7 months

Thank You for the mention. I am already seeing examples of where Machine Learning is being used to help companies take better decisions on Sustainability and eliminating waste. This LinkedIN Live I hosted dives into the subject in detail: https://www.dhirubhai.net/events/utilizingaitosolvethetoughestch7155932442925674496/theater/
