What Future for AI, and Us?
Ernest Worthman
IEEE senior/life member. High-Technology Industry Analyst. Editor 6G World, Electronic Design, and other publications. High-Technology Writer. Conference Panelist/Speaker
Perhaps nothing, in quite some time, has stirred the global technology sector as loudly as the controversy over the future of artificial intelligence (AI).
What seemed to hit the publicity pedal was a letter written by the Future of Life Institute back in March of this year. The letter went viral. It was signed by a myriad of reputable individuals, including Elon Musk, Steve Wozniak, Max Tegmark, and other knowledgeable, respected thought leaders across a variety of fields, both in and out of the AI space. Their dire warning is the worry of losing control, because, already, we do not understand some of the larger models that have been developed with the aid of AI.
And the warnings are still coming. Just recently, an essay penned by Ian Bremmer and Mustafa Suleyman in Foreign Affairs warned: “As artificial intelligence technology continues to advance at a breakneck pace, it will only become better, cheaper, more ubiquitous, and more dangerous. Its arrival marks a Big Bang moment, the beginning of a world-changing technological revolution that will remake politics, economies, and societies.”
Now, however, as with most sky-is-falling scenarios, the attention from the “if it bleeds, it leads” media predictably went away almost as quickly as it came, as outlets went off in search of something more sensational. Since then, things have quieted down on the sky-is-falling AI scene.
But that does not defang the case nor discredit the issues. Concerns about AI, particularly generative AI, are real and valid. However, they seem, for the most part, to have fallen on deaf ears; AI development continues across all platforms, unabated, with little regard for the potential dark side. In fact, nearly everybody is jumping on the AI product bandwagon with all kinds of offerings. Of late, USTA and Amazon, among others, have been adding new generative AI offerings at a record pace.
This AI evolution reminds me of a time long past: the development of the atomic bomb. Back then, the same concerns were raised by the segment that really knew the technology, and they were right.
However, there was a need for the dark side of nuclear development. But once WWII was over, the race began to develop the biggest and baddest atomic bomb, and it continued until the world stood at the brink of nuclear extinction. We eventually dodged that bullet, but not without a lot of consternation. Are we at a similar point with AI?
When all is said and done, the bottom line is, yes, we are at a pivotal point with AI.
We have a lot more historical experience with doomsday technology than we did in the mid-1940s. It took a lot of years, fears, and tears before we became comfortable that the worst-case scenario, nuclear war, was not going to happen. But it is still a threat. For now, we just understand that if it happens, everybody dies!
AI is the 21st century’s version of the nuclear age. Just as back then, we have the ability to use it for good and for ill. And only time will tell which path we take.
The Good
On the bright side, there is no doubt AI has the potential to work miracles. One promising use case is AI as an early warning system for cancer. This was demonstrated in an MIT study that revealed AI-interpreted low-dose CT images of lungs could spot potential cancer development that the radiologist could not see. The project, called Sybil, was 94 percent accurate in predicting where a tumor would appear within two years.
A second such case involves a brain cancer called glioblastoma, a rapidly spreading and nearly always fatal cancer. Unfortunately, today’s treatment is still somewhat archaic. The present method is to take out as much of the tumor as possible and send it out for analysis. This is important because this particular cancer has vastly different mutation profiles in each case. With glioblastoma, unlike most other cancers, there is no one-size-fits-all approach that can be applied to cure it.
Initial treatment is similar to that of other cancers, requiring surgery. However, unlike other cancers, follow-up treatment requires a precision, targeted approach, because each case has unique mechanisms that propel its growth and proliferation. Identifying the molecular mechanisms that drive each patient’s mutation profile takes weeks. And, with a fast-moving cancer such as this, time is of the essence. The reason it has such a high mortality rate is that it takes too long to find the case-specific, post-surgery treatment.
This is where AI shines. Combining stimulated Raman histology (SRH) with AI can provide, in minutes, results that formerly took weeks. And it is done in the operating room during the surgery. This allows a uniquely targeted, individual approach to be formulated immediately, giving the patient a much better chance of survival. In fact, this approach shows promise across the entire cancer environment.
There are many more cases where AI can work near miracles. Or at least, improve the odds in favor of a good outcome significantly.
So, kudos to those working with AI in such environments. Keep up the good work. But regardless of how good AI can be, do the doomsayers have a valid case? Can it really threaten the human race with a risk of extinction, as some intimate? Are we back in 1942? Some think so and, quite frankly, so do I. Let us look at why.
The Bad
On the public side, there is some movement toward reeling in AI a bit. In the EU, the European Data Protection Board (EDPB) is considering creating a task force to look into generative AI. Italy, Spain, and China are weighing measures of their own. In the United States, on August 10, 2023, the Department of Defense (DOD) announced the creation of Task Force Lima, which will focus on generative artificial intelligence (AI) responsibility and strategy issues.
The task force announcement follows several DOD actions to ensure that AI tools are developed responsibly and in a coordinated manner, including the DOD’s February 2020 adoption of five ethical principles for AI development. And a laundry list of other bodies is sounding warnings and looking at some sort of regulation.
On the private side, well-respected field leaders are sounding alarms. Geoffrey Hinton, a pioneer in AI and chatbots, believes he underestimated the existential threat they pose. He noted that once AI can create its own goals, humans will no longer be needed. That is a chilling thought, one that evokes imaginings of Terminator scenarios. He made a strong statement by resigning from Google over ethical fears. He has coined what is likely the future scenario of the human race: “humanity is just a ‘passing phase’ in the evolution of intelligence.” I believe he is spot on. That is certainly one potential path for humanity.
As well, Craig Burland, CISO of Inversion6, noted that ChatGPT is evoking fears pulled from science fiction movies.
And Sam Altman, CEO of Microsoft-backed OpenAI, recently called for the US to regulate the deployment of advanced large language models, warning of the dangers of generative AI without solid policy frameworks in place. These individuals are not a bunch of crackpots who know nothing about AI. They are among the world’s foremost authorities on it. They are educated, reputable, and respected scientists and business leaders who understand the technology.
However, in spite of all of this, there seems to be a lot of talk but little action or actionable direction at the moment. Development has hardly been stifled. Nearly every day, there is news about a new AI product, company merger or acquisition, or derivative technology.
Most recently, Meta has been trying to outdo OpenAI, releasing two new open-source projects, Code Llama and SeamlessM4T. Meanwhile, OpenAI made fine-tuning for GPT-3.5 Turbo available, and IBM is using generative AI to modernize COBOL mainframes. Hugging Face is getting $235 million from Big Tech. And that is only a sampling of the forward movement.
As history has shown, the further this advances, the more difficult it will be to reel it in: to curtail the building of new models, the expansion of capabilities, or the pursuit of new use cases around AI (particularly generative AI). As with so many other cases, unless there is a major crisis of some sort, any regulation will largely be ceremonial and practically unenforceable (as seen recently with TikTok).
Still, like so many other discoveries over the centuries, AI promises substantial benefit to mankind, but it is also being weaponized. By that, I mean used by the dark side for nefarious purposes.
I am not necessarily referring to the Terminator type of out-of-control AI (although that certainly is not out of the question down the road). I mean its potential for abuse by despicable evil-doers, and through ignorance on the part of legitimate organizations such as law enforcement and security.
Segments such as facial recognition have revealed the flaws in AI. There is also concern over AI being used to manipulate financial data, hack all kinds of databases, and create nefarious applications such as the now-infamous fake Joe Rogan podcast. Then there is the deepfake angle: defeating security protocols and creating all kinds of fake documents and images. It can be used to create fake reviews, even for blackmail. And this is just the tip of the iceberg.
Making History or Ending It
AI will go down in history as the most significant paradigm shift since the industrial revolution. And it is certainly conceivable that, at some future point, AI will surpass general human intelligence, the turning point of the singularity, thereby becoming the most superior form of intelligence on the planet. A chronicle oft penned in sci-fi.
In fact, AI has the potential to evolve into the primary intelligent life on this planet, if left unmanaged. Then the Terminator scenario is much more worrisome.
A snippet of that is the famous case where Deep Blue, the then-IBM supercomputer, beat world chess champion Garry Kasparov in a game of chess in 1997. And that was a fledgling, emerging case of AI.
Since then, there have been various other cases, all showing more and more potential, good and bad. In 2011, IBM’s Watson, a question-and-answer computer system capable of analyzing natural language, took on two former winners of the Jeopardy! show. It took home the grand prize of $1,000,000 quite handily. That was more than a decade ago.
Today, AI is in medicine, ecology, energy, transportation, wireless, the metaverse, and the internet of anything and everything (IoX), and it is creating data whose origin is difficult to determine. There are hundreds, perhaps thousands, of other segments and industries that can benefit tremendously from it. Already, there is concern that AI is so pervasive across so many industries and segments that, as some have noted, we really do not know all that is happening across the AI landscape.
So here comes another classic conflict between good and evil. Only this time, it is not about religion, or race, or land, or resources. It is about intelligence. Can the dark side win? To paraphrase a line from the movie Apocalypse Now, “there is a struggle in every human heart between good and evil. And good does not always triumph.” We have a conscience; AI does not (at least so far), so it can act without feeling or emotion. That is the turning point.
Soon we will have the internet of anything and everything (IoX), autonomous vehicles, all manner of robots, smartphones, refrigerators, buildings, cars, and so many more intelligent machines around us, all operated by an artificially superintelligent autonomous system, an AI “God” if you will. Human life will change dramatically; this is the culmination of all that we will both achieve and worry about.
This future world has been prophesied across so many media platforms: books, movies, chats, videos, yada yada. A life of ease in which we will no longer be burdened by menial, laborious tasks or boring chores. Cleaning, cooking, ironing, and washing will all be done for us. We will no longer need to know how to drive, or even flick a switch to turn on the lights. Utopia? Perhaps. But beware: if this God-like intelligence eventually decides the planet is better off without the human footprint, what then? Is this not the way Skynet (the Terminator movies’ supercomputer) evolved? Could it play out that the greatest human invention will also be our last?
Sounds far out, does it not? Yet that is exactly the scenario some see on the horizon. Hence the sudden pushback against what we are doing with AI.
I would like to believe that the better nature of mankind will get past this, as it has gotten past so many critical turning points throughout its history. But there is no guarantee of that.
Only time will tell whether we will survive the development of AI as we did the development of the nuclear ecosystem. Let us just hope the better side of mankind can grasp the AI monster before it grasps us.