THE CHALLENGE OF BUILDING AN AI FRIENDLY FUTURE
Dr. Brian Richard Joseph, Ph.D. (Harvard)
President at TTN
AI versus Humans (Part Three)
Tuesday Theorems for June 11, 2024
In Part One (May 28, 2024) of this series, “AI versus Humans”, we noted the widespread excitement and concern about the rapidly increasing use of Artificial Intelligence. In Part Two (June 4, 2024) we zeroed in on some specific - and, to use Geoffrey Hinton’s term, “scary” - dangers posed by AI’s future development and use. Today, a look at our options for maximizing the benefits of AI while minimizing its dangers.
The task of building an AI-friendly future for our children and grandchildren is greatly complicated by one extremely uncomfortable historical fact: most inventions and most new technology end up in the hands of the military.
There, generals and their political leaders have often lacked the wisdom and restraint to refrain from using new technologies, or have used them for purposes that caused their developers regret and grief.
In introducing Lilas Toward's rich and revealing portrait of MABEL BELL, the amazing woman behind Alexander Graham Bell's inventiveness in aerodynamics, acoustics, and agriculture, Hugh MacLennan (yes, THAT Hugh MacLennan!) observed of the progress of inventions:
[Bell and his brilliant contemporaries, Marconi, Edison, Nobel and others lived in] an age in which most dynamic people learned prodigiously how to manipulate material nature without thinking it necessary to understand the essential nature of man himself. Even a man as shrewd as Mark Twain took it for granted that the domination of nature by mechanical devices was certain to produce a Golden Age in which humanity would at last rise to its full stature.
In such a time ... inventors were presented to school children as heroes and supermen, and it was assumed ... that material progress would result in enormous human happiness. As we all know, that dream ended in 1914.
And further, in another sobering observation, MacLennan writes:
Even sadder was the disillusionment of Alfred Nobel. When he invented dynamite, he had the sincere hope it would be a blessing to mankind because it would eliminate much of the brutal pick and shovel work required for construction. To his horror, dynamite was immediately grabbed by the armies and put into high explosive artillery shells.
What followed was the largest bloodbath ever inflicted on humankind, in the form of World War One.
If we substitute the word "digital" for "mechanical" in MacLennan's account above, we get a fairly good approximation of the naive and ill-informed hype currently surrounding the development and deployment of Artificial Intelligence in our own day.
In surveying the near and far horizons of AI, the Canadian godfather of Artificial Intelligence, University of Toronto computer scientist Geoffrey Everest Hinton, recently observed that the use of AI to generate thousands of military targets (one such AI application was recently made by the Israeli military in Gaza) is "merely the thin edge of the wedge".
Professor Hinton is particularly concerned about the development of autonomous weapons systems, whereby imperfect and often racially biased algorithms decide who is killed during policing actions and military conflicts WITHOUT human intervention.
Just a few weeks ago, on May 30, 2024, the University of Toronto - Professor Hinton’s academic home when away from his tenure at Google - released a large-scale survey [21 countries and 24,000 replies] on global attitudes toward AI.
This survey revealed a striking bifurcation along international ‘class’ lines: poorer developing countries were quite enthusiastic about the benefits of AI for communication, medicine, and governance. This may be because AI helps them leapfrog over the older infrastructure characteristic of wealthier nations in the global north. (For example, phone companies no longer need to install telephone poles and run expensive copper wire to build a national phone grid.) As a result of this leapfrogging, the survey director, Peter Loewen, claimed it is now easier to open a bank account online in Kenya than in Canada, and easier to receive government payments in India than in Canada! [CBC Radio, Sydney, Nova Scotia’s Main Street interview of May 31, 2024]
But Professor Hinton’s hopes for a Geneva Convention kind of solution to the threat of run-away AI may be less practical than the model for the control of nuclear weapons which emerged out of the Cold War between the United States and the USSR; namely, Mutually Assured Destruction (“MAD”). The AI analogue might be termed Mutually Assured Disruption (or “MAD2”).
It might even be hoped that the ease and power of current ransomware and state-sponsored infrastructure attacks could serve as the necessary "nasty" precursors Professor Hinton predicted would be required before serious control measures are implemented.
However, given a) the tripolar nature of current big-power geopolitics; b) the unfortunate examples of some smaller states ignoring international legal rulings (continued attacks against the civilian population in Gaza and in other conflict zones may be an example of decreasing respect for an international rule of law); and c) the poor record of modern tech giants like Google and Meta in practicing responsible self-regulation, the current prospects for reaching binding international agreements to control the dangers of AI misuse appear dim.
For that reason, while hoping to be wrong, I share Professor Hinton’s pessimism about the longer-term prospects that our children and grandchildren will inhabit an AI-friendly world.
In the current AI arms race between China, Russia, and the United States, the West still retains a narrow technological advantage, thanks in part to the inventiveness of researchers at INTEL, GOOGLE, and MICROSOFT. If this advantage is to be maintained, it may be advisable to curtail future exports of new chip technology and computing algorithms to our ideological and military rivals.
Lest our readers think this exaggerates the costs of a Chinese or Russian victory in the current AI arms race, I encourage them to hear the experiences of refugees from the former Soviet East Bloc nations, or of more recent escapees from Chinese-dominated regions such as Tibet and Xinjiang, or from North Korea.
In the meantime, it behooves us to pressure our governments to take seriously the dangers which Professor Hinton and other leading technology experts have raised, if not for our benefit, then for the safety and wellbeing of future generations.
Unless there is a sharp increase in the critical awareness among citizens in the Western democracies of how dangerously the current vogue for Artificial Intelligence could misfire, there is little prospect of serious regulation occurring.
For this reason, we have focused in this series mainly on the dangers posed by runaway, unregulated development of Artificial Intelligence, in order to offer some counterbalance to the exaggerated hype currently being advanced by poorly informed, often naïve, advocates of this new technology. Such uncritical advocates would be well advised to consider the startling conclusion of the white-hat AI researchers Fabio Urbina and colleagues, described in Part Two of this series, whose warnings include the following:
The reality is that this is not science fiction. We are but one very small company in a universe of many hundreds of companies using AI software for drug discovery and de novo design. How many of them have even considered repurposing, or misuse, possibilities? Most will work on small molecules and many of the companies are very well funded and likely using the global chemistry network to make their AI designed molecules. How many people are familiar with the know-how to find the pockets of chemical space that can be filled with molecules predicted to be orders of magnitude more toxic than VX? We do not currently have answers to these questions.
The full details of their white-hat exploration of AI’s ability to rapidly generate molecules of extreme toxicity can be found in their report in Nature Machine Intelligence, March 2022, vol. 4, no. 3, pp. 189-191, available in the U.S. National Library of Medicine (“NLM”) collection.
Next week, a look at the broader geopolitical context in which AI and other new technologies are currently being developed.
RESOURCES FOR DEEPER UNDERSTANDING AND FURTHER LEARNING
1. Mabel Bell, Alexander's Silent Partner by Lilas M. Toward. Breton Books, 1996.
2. Geoffrey Everest Hinton, the Romanes Lecture, Oxford University, Feb. 19, 2024. The full lecture is available via: Prof. Geoffrey Hinton - "Will digital intelligence replace biological intelligence?" Romanes Lecture
3. Geoffrey Everest Hinton, The Possible End of Humanity from AI? Geoffrey Hinton at MIT Technology Review's EmTech, available via:
4. Carl Benedikt Frey and Michael Osborne, The Future of Employment: How Susceptible Are Jobs to Computerisation? Oxford Martin School, University of Oxford, 2013. Dated but revealing!
5. Margaret MacMillan, The War That Ended Peace: The Road to 1914 - a look at the folly and immense cost of failing to "understand the essential nature of man himself" - a failure, amid mere enthusiasm over new technologies, that helped trigger World War One.
6. Ken Follett, NEVER - a very well researched fictional account of an incremental failure to prevent World War Three, due to factors resembling MacMillan's scholarly analysis of World War One - a risk AI progress appears to increase.
7. On the difficulty of obtaining international agreements, even in cases of extreme urgency, and the lessening effectiveness of the post-war rule of law, see the BBC’s Stephen Sackur HARDTALK interview with Joan Donoghue, the former head of the International Court of Justice, available via:
8. On the enormous downside risks of uncontrolled AI development and the ease of misuse possible by bad actors - both state and non-state actors - see:
"Dual Use of Artificial Intelligence-powered Drug Discovery" by Fabio Urbina and colleagues, Nat Mach Intell. 2022 Mar; 4(3): 189-191. Published online 2022 Mar 7. doi: 10.1038/s42256-022-00465-9
(c) brianrjoseph 2024. Permission is given for all electronic, digital, and print dissemination if authorial credit is provided.