Of Nobel Prizes, AI and the impossible calculus of risks and rewards


By now, most people who read the news are aware that last week’s Nobel prizes for Physics and Chemistry were a little unusual. The prizes were won by Physics Laureates Geoff Hinton and John Hopfield, and Chemistry Laureates David Baker, Demis Hassabis and John Jumper. They were all chosen because AI not only played a central role in their discoveries, but was the protagonist, hero and romantic lead in their narratives. (I use the phrase 'romantic lead' with intent - the AI community fell into a near-erotic swoon after news of these two awards broke.)


Much has been written over the last week about the inner workings of the AIs that were at the core of the awards, which I won't rehash here. Many cliches like 'game-changing' have been bandied about, but they are far too flaccid to actually describe what was developed, tested and formalised by the recipients. What is clear is that AI has, in a remarkably short time (about two decades since machine learning started to bite), changed science forever. It offers the promise of a new age of discovery, one that will unfold faster than any other fertile period of invention in human history. The Nobel adjudicators clearly wanted to make this point loudly. So they made it twice over.


But enough of the superlatives. Another narrative has become equally important, and is a heavy counterweight to the many enthusiasms that echoed around the world after the announcement.


Physics Laureate Geoff Hinton, whose moniker ‘the Godfather of AI’ has been affixed to him for decades (a little unfairly, given the number of smart pioneers in the field), has a second string to his bow, which has been somewhat lost in all the applause. In 2023 Hinton famously quit Google, where he was a Vice President and Google Fellow, citing the need to speak more openly about AI and the risks it poses to the wellbeing of our species.


I wonder about the dissonance going on in Hinton's head right now. He helped invent this untamed new creature, and it has brought him this highest of possible scientific accolades. He must surely be proud of his work, while simultaneously trying to digest the many truly dystopian scenarios likely to follow in its wake. Hinton is worried about misinformation, he is worried about jobs and, most importantly, he is worried about the future of humanity in the face of a technology that will certainly outstrip our cognitive capabilities in many, if not most, areas of human economic activity.


When exactly? Where are we on this shrinking timeline? A recent public lecture sponsored by Bloomberg, with guests Yuval Noah Harari, the historian, philosopher and author, and Tristan Harris, who heads up the Center for Humane Technology, offered some deep and fascinating insights.


Harris reflected on the question of AGI (Artificial General Intelligence) and its timeline. He had previously been reluctant to make predictions, preferring gentle deflections into the foreseeable future.


As a stunning example of how fast AI is moving, the recent (12 September 2024) release of OpenAI's new model, named o1, has reduced Harris' estimate. "So suddenly my timelines went from like, oh, I don't know, it could be in the next decade or earlier is now like, oh, certainly in the next thousand days." A thousand days. A mere blink of an eye.


OpenAI’s o1 is not simply a better-trained GPT-4. It is fundamentally different because it thinks before it answers. In short, it can reason. And to those detractors still poking holes, Harris pithily points out that today "AI is the slowest and dumbest it will ever be in our lifetimes".


OK, so what? This just means that we are all probably underestimating the rate of acceleration in AI innovation. But this in turn means that we cannot even begin to calculate risk; we simply do not know. It is this unpredictability that has led Hinton, Harris and many others to say - wait a minute, let's just think about this for a while.


This caution amongst AI leaders culminated in the release of two well-publicised documents last year.


One came out of the Future of Life Institute in March 2023, in the form of an open letter. It was a call to action. It said: “We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”. It was signed by 30,000 concerned citizens, most of them in the field.


The second, perhaps more impactful because of its signatories, was released by the Center for AI Safety in May 2023. The document was signed by over 100 of the top AI researchers and academics in the world. It states, simply and without subtlety: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” That was it, short and sharp; there was no further context or explanation.


The intended audience for this statement was national governments and policy-makers. Among the top three signatories on the list are two of our Nobel Prize winners, Geoff Hinton and Demis Hassabis.


What has happened since? Well, nothing really, besides a few desultory national and regional policy frameworks in some Western countries. The world's biggest and best AI companies are developing at breakneck speed, funded both by profit-hungry capital and power-hungry states. The answer as to why this reckless course is being pursued with such abandon lies in the nature of competitive dynamics.


Harari puts it this way - "...when you talk with the people leading the revolution, most of them, maybe after an hour or two of discussion, they generally say, yes, it would be a good idea to slow down and to give humans a bit more time, but we cannot slow down because we are the good guys and we want to slow down, but our competitors will not slow down."


The implication, of course, is that the other guys (the Chinese come to mind) are probably thinking exactly the same thing. The number Harris bats around, with purposeful hyperbole, when he talks about the possibility of losing this race is that someone else will have a "trillion" times more cognitive power than you. It is not a race that anyone can contemplate losing.


To give one example of what's at stake, let's check out what's going on in the world of misinformation. We are not talking about text here, but about the more emotive messaging media - videos, photos and audio. You know, the stuff that most of us consume on social media.


It turns out that there are a bunch of technologies that have been developed to spot fakes. They can identify tiny aberrations in lip movements or lip-sync, or evidence of photos edited from original material, even if only slightly. A company called TrueMedia.org has aggregated all known fake detectors. You need only upload a photo or URL to their site, and it will indicate the probability that it is fake. Neat, right?


In a recent podcast, CEO Oren Etzioni freely admitted there are a couple of problems with his initiative. One is that it requires the user to be proactive. You have to actually go to their site and upload stuff and, realistically, who is going to do that in the daily overwhelm of TikTok videos and X/Twitter posts? Certainly not the social media companies. They have zero incentive to do so; they are not in the business of truth, they are in the business of getting your attention and not letting go.


Secondly, the system is 90% accurate. Sounds good, but it means that one in ten of its verdicts is wrong, so a huge amount of the stuff washing around the Internet cannot reliably be certified as original and unmodified. Some of it might be malevolent. Enough to swing an election. My guess is that the 90% accuracy rate will drop quickly as deepfake technology gets better and better, driven by advances in AI.


Our default position (within months, not years) must then be - everything we see online is fake. Everything. I am not sure that is a great world to live in.


There is a final point to be made. Harris points out that the human body spends about 15% of its energy on its immune system. The US spends about 20-25% of its GDP on its immune system – police, defence, firefighters, etc. And the AI industry? They spend only $1 on AI security for every $1000 committed to the juggernaut of better, faster, smarter AI. That’s 0.1%. We clearly are not taking the risk seriously.


We celebrate the role that AI will play in the upliftment of our lives and we congratulate the Laureates. We also worry about the darker side of AI. And yet it is not simply a question of weighing the risks against the rewards, as Harari clearly articulates in the public lecture with Harris.


He says - "Do the benefits outweigh the risks? Social media taught us that is the wrong question to ask. The right question to ask is, will the risks undermine the foundations of society so that we can't actually enjoy the benefits?"


Steven Boykey Sidley is a professor of practice at JBS, University of Johannesburg. His new book, It’s Mine: How the Crypto Industry is Redefining Ownership, is published by Maverick451 in SA and Legend Times Group in UK/EU, and is available now. Copy-edited by Bryony Mortimer.


As Steven's ex AI teacher, I feel obliged to comment on his news. First up, the terms "AI" and "Nobel Prize" have both been abused so much in recent years (e.g. giving warmongers the Peace Prize) that both are now merely spin-doctor bollocks. The greatest risk to the biological world comes not from computer technology but from the short-sighted predatory species H. sapiens, not very sapient. The prize for the scientific breakthrough that created the impressive achievements of DCNN - which is very, very good at statistical pattern recognition - should have gone to the man who thought DCNN up to do optical character recognition (I forgot his name; the fellow at Facebook), and not to bandwagon-jumping, snake-oil-selling twonks like Hinton and Hassabis, neither of whom could think their way out of a paper bag, as evidenced by the human-generated imaginary conversation entitled "Demis and Noam", which is enough to make you fall off your chair laughing if you weren't crying so hard into your beer. https://www.youtube.com/watch?v=3RSiORA5NN8&list=PL4y5WtsvtduqNW0AKlSsOdea3Hl1X_v-S&index=28

Andrew Turpin

Managing Director at Cyber-Mint (Pty) Ltd

5 months

The article makes compelling points about the recent Nobel Prize and AI’s pivotal role in shaping the winners’ groundbreaking discoveries. But why, after a decade of rapid AI development, are we still swinging between boundless optimism and near-apocalyptic fears? Even if I were to accept that AI is advancing so quickly that its creators struggle to keep pace with the risks, and that various calls for action are being ignored - despite the backing of prominent figures - and even if we accept, as Harari suggests, that the "competitive dynamics" are so unyielding that no state, institution, or tech company can afford to hit the brakes for humanity's sake, I still remain unconvinced that a coordinated response from big tech or big government is the right approach.
