The Duality of AI

Over the last 12 months, I must have read hundreds of articles and essays about AI. This does not make me an expert in AI, not by a long stretch. I am just now finishing The Coming Wave by Mustafa Suleyman, co-founder of DeepMind and Inflection AI. As I ponder the implications of AI on work, life and humanity, I realise that perhaps one of the most important things to know about AI is that everything about it is a duality.

Here are a few simple examples from the 2023 Expert Survey on Progress in AI (ESPAI).

  • 68% thought good outcomes from superhuman AI are more likely than bad (presumably, the remaining 32% thought bad outcomes are more likely than good)
  • Of the 68% (ESPAI calls them ‘net optimists’), 48% gave at least a 5% chance of extremely bad outcomes such as human extinction
  • 59% of net pessimists gave 5% or more to extremely good outcomes

AI authors and experts cannot agree among themselves whether outcomes from superhuman AI are more likely to be good or bad. Nearly half the net optimists hedged that AI could lead to human extinction. Over half the net pessimists made a similar hedge that AI could lead to extremely good outcomes.

Another example cited in a white paper on the productivity effects of generative AI is about grant writing. Tools like ChatGPT can empower companies to write their own grant applications, thereby putting grant writers out of work. On the other hand, generative AI could make grant writers much more efficient and productive, making their services more affordable and thereby expanding the overall demand for grant writing services.

The grant writing example above is a microcosm of what Frank et al. (2019) see as the labour market implications of AI: a doomsayer’s perspective and an optimist’s perspective. Frey and Osborne (2013) estimate that 47% of total US employment is at risk of automation over the next decade; Bowles (2014) puts the equivalent figure for the EU at 54%. Optimists, on the other hand, believe that the productivity and reinstatement effects of AI will more than compensate for the substitution effect. The World Economic Forum, for instance, projects that by 2025 AI will generate 97 million new jobs in the fields of big data, machine learning and information security, more than making up for the 85 million jobs that AI will eliminate.

Here’s another duality pointed out by Ethan Mollick, one of the most prolific writers about AI on the internet. I highly recommend his newsletter One Useful Thing. According to Mollick, surveys have repeatedly found that workers enjoy using AI at work despite knowing the threats AI poses to their continued employment. They would rather outsource the drudgery of low-value work to AI so they can focus on the parts of their work that they enjoy and that others find valuable.

In a recent experiment conducted by Harvard Business School researcher Fabrizio Dell’Acqua, a group of recruiters were given algorithms to assist them in deciding which candidates to interview. The catch is that some algorithms were better than others. Somewhat surprisingly, recruiters given the inferior algorithm (75 percent accurate) outperformed those given the superior one (85 percent accurate). It turns out that when recruiters used algorithms known to be unreliable, they paid more attention and applied their own judgment. Conversely, recruiters using the superior algorithms were happy to sit back and let the algos decide.

Tim Harford, the economist who writes the Undercover Economist column for the FT weekend, believes the Dell’Acqua experiment shows “a low-grade algorithm and a switched-on human make better decisions together than a top-notch algorithm with a zoned-out human”. Again, the duality of AI. Human intelligence augmented with AI can lead to superior outcomes, but over-reliance on AI can lead to disappointing results and, in some instances, disastrous consequences.

In The Coming Wave, Mustafa Suleyman writes about the grand bargain we are facing. It is naive to think that social and political problems can be solved with technology alone. It is equally naive to think that these problems can be solved without technology. Technological breakthroughs will help us meet humanity-scale challenges: food security, climate change, natural disaster detection, affordable healthcare, and the one closest to my heart, education reform.

At the same time, Suleyman warns us against pessimism aversion and urges us to consider the new risks that might arise from omni-use technologies like artificial general intelligence. The dismantling of the modern, liberal democratic nation-state, for example. Or what AI researcher Stuart Russell calls the “gorilla problem”: gorillas are put in cages despite being physically stronger than human beings because we have puny muscles but big brains. If human beings dominate our environment because of our intelligence, it follows that a more intelligent entity like AGI could dominate us.

As I finish Suleyman’s book on the urgent need to contain AI, here are a few more books I just ordered on Amazon.

The duality of AI in ways big and small is perhaps what is most fascinating about a technology that will fundamentally change human lives. Learning about the implications of AI is also a terrific exercise in critical thinking. Yet another reason why schools should be embracing AI rather than banning it. When everything about AI is nuanced, forming your own opinion is immensely challenging and rewarding.

The goal is not to position myself as an AI expert (there are enough of them out there, real and self-proclaimed). I am not one and I don’t aspire to be one. The goal is to ponder, explore and discuss the implications of AI on human flourishing.

This article was first published in Education & Catastrophe, a weekly newsletter for anyone anxious about the future of learning and work.

Cristina Román

Director, Europe @ The Clios

7 mo

Thanks for the invitation, John. Human 'Flourishing' is a very positive positioning that I unfortunately do not share. During a Yahoo Conference last week, William highlighted the lack of investment in human intelligence (HI) despite excessive focus on artificial intelligence (AI). This approach doesn't seem to be leading us to 'flourishing'. Let's not even talk about the 'resources' that it needs and where they come from. I was reading this morning about Norway opening up a vast territory as big as Italy to 'commercial deep-sea mining' to help the country break China and Russia's rare earths dominance.

It's refreshing to see someone taking a broader perspective on the implications of AI. Can't wait to read the newsletter!

DR MUHAMMAD BILAL S.

Child Health Legal Epidemiologist | Health System Thinker & Health System Strengthening Specialist | Global Health Interventionist & Entrepreneur | Transformative Mentor | Facilitator

7 mo

Great initiative, John Tan! Looking forward to gaining insights in your newsletters.

Ben Dixon

Follow me for tips on SEO and the AI tools I use daily to save hours

7 mo

Love the initiative! Looking forward to reading your newsletter.

Sheikh Shabnam

Producing end-to-end Explainer & Product Demo Videos || Storytelling & Strategic Planner

7 mo

That sounds fascinating! Looking forward to reading your insights.
