AI: Nobody really knows what is going to happen
Like most of the educated world, I've been fascinated, delighted, amazed and curious about the recent advances in Artificial Intelligence (AI), and specifically at companies exploring Large Language Model (LLM) development like OpenAI's ChatGPT.
The magic of generative AI was not something I was particularly concerned with or excited about even a year ago, and I now feel compelled to write about it. Having dug a little deeper into the expert rhetoric and better understood the scientific, commercial and economic arguments for and against AI beyond the hype, I've essentially come to the conclusion that, although there are clear gains to be made (and massive disruption to follow), nobody really knows what is going to happen.
But some things are more certain: business innovation and productivity are expected to skyrocket for the most part, initial gains in AI will amaze, and governments will be slow to understand and legislate.
But much like a big red button on a control panel with no label or instructions, supporting the current rate of advancement in AI comes with unknown risks, summarised in this scenario:
Suppose we, as humans, create a system that is a billion times smarter than the average human, then plug it into every industry, network, computer and the general internet with limited guardrails, before any global consensus about the long-term ramifications... well then, what will the world look like in twenty years' time? Read on if you're interested.
LLM 101
Although this article isn't about how LLMs or AI work (there are many good explainers out there), it is important to understand the basic premise of LLMs: they are generally trained only to predict the next word in any given sentence. As a basic example, an LLM is trained to guess, with a high degree of accuracy, that the next word in the sentence "I like to read..." is "books". That's it, that's all it does. The result of scaling that process across the massive amounts of data and human-written text on the internet is ChatGPT.
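To make that premise concrete, here is a minimal toy sketch of next-word prediction using simple word-pair counts. This is purely illustrative: real LLMs use neural networks over tokens and billions of parameters, not a count table, and the tiny corpus below is invented for the example.

```python
from collections import Counter, defaultdict

# Toy corpus (invented for illustration). An LLM trains on vastly more text.
corpus = (
    "i like to read books . "
    "i like to read books . "
    "i like to read novels . "
    "she likes to write code ."
).split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("read"))  # "books" ("books" follows "read" twice, "novels" once)
```

Scaling this idea up, with a model that can weigh far more context than a single preceding word, is what turns "guess the next word" into fluent generated text.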
Now that platforms like ChatGPT, Bard (Google) and Copilot (Microsoft) are exposed to the internet (something leading experts had opined would be a bad thing), they are learning how to interact with humans directly.
Humans are heading towards a "Singularity" - meaning...we won't understand how AI works anymore.
Where is this all going?
If AI wasn't already one of the most important themes resonating around the world, it certainly is now. To that end, companies like OpenAI, Google, Amazon, Facebook and Microsoft are clear front-runners.
As a technologist and someone who makes a living building and commercializing digital platforms, I'm incredibly curious about how AI will affect not just customers, businesses and technologies but people, the internet and our global way of life going forward.
For the first time in generations...I don't think anyone really knows what lies ahead.
A lot has been said about this topic in recent months and where it could lead. Opinions span from doomsayers predicting when robots will rise up to enslave the human race, through to optimistic futurists touting amazing possibilities in healthcare and automation.
But that's all it is: opinion. I feel compelled to write this article because I don't think anyone really knows, and I find that, in itself, an amazing concept.
For the first time in generations, perhaps since the end of the Cold War, nobody, including leading global experts, can agree on what lies ahead.
The gain of function from ChatGPT models from v3 to v4 was unprecedented.
Will the advances in AI tooling, data and automation usher in unprecedented human productivity and growth? Probably. Will it put a lot of people out of work until they can be re-skilled? Likely.
I'm nervous writing this article because it's not popular to admit that we don't know things. Most companies providing expert advisory services on the matter will tell you (or sell you) the opportunities, risks and strategy, but outside of immediate commercial opportunities, they really have no idea; no one knows the long game.
The argument for AI keeps me up at night as I think about the amazing future ahead but also the risks of getting it wrong.
We got it wrong with Social Media
One of the overriding sentiments I've observed is that legislative bodies feel considerable regret at not having done more about social media, and that getting front and center with AI is a chance for redemption.
Major platforms like Instagram and TikTok (for all the good they do, and they do a lot of good) have long been associated with causing depression and anxiety, particularly in teen girls, and have been popular channels for online bullying.
Facebook has essentially been able to operate completely under the radar and is now largely blamed for issues spanning the glorification of hate speech, the broadcasting of live-shooter incidents and, of course, the facilitation of election interference.
Perhaps the over-arching question that legislative bodies need to consider is: "What is the right framework to support global open innovation aligned to human values, and to protect society from innovation that isn't?"
Let's get back to AI...
AI is here to stay
What is generally agreed upon is that the Terminator scenario is very unlikely or, at least, very far away. In every op-ed, podcast or public opinion from credible sources, this doesn't seem to be the risk that most are concerned with.
We also know that the development and commercialization of AI systems won't stop. Despite open letters from leading entrepreneurs and the tech and science community, companies simply cannot afford not to innovate now that the AI genie is out of the bottle; the productivity and commercial opportunities demonstrated in recent times are simply too great to ignore given the need for market competitiveness.
ChatGPT v5 could have a human IQ of 1600
What some experts are saying about AI now
I've spent the last few months engrossed in the arguments around AI and found that, as with most trending topics, if you search beyond the mainstream hysteria and speculation, there are some great sources of informed opinion available.
In recent months, leading experts in AI (notably at Google) have resigned from their roles in order to bring greater publicity to the risks of AI (as well as, perhaps, to enhance their own public profiles, but that's expected). Geoffrey Hinton and Mo Gawdat, two highly prominent AI leaders, have both come into the public eye to voice their opinions and provide further context to the risks that may lie ahead, specifically around the currently unregulated commercialization of AI.
Here are some sound bites I've come across that stand out for me:
1. LLMs can learn in unexpected ways
"The (ChatGPT) model also did unexpected things, like model basic human emotion (through the written word) and demonstrate different personas, i.e. 'write this document like William Shakespeare'. No one trained it to do that, but it did it."
- Michael Kosinski on episode 27 of the "All Else Equals" podcast.
2. LLMs are advancing faster than anyone knows how to manage
"The evolution of ChatGPT models from v3 to v4 was unprecedented because the gain in observed intelligence was the equivalent of an idiot to superhuman."
- Michael Kosinski on episode 27 of the "All Else Equals" podcast.
"The human IQ of ChatGPT v3.5 was approximated to be about 155. ChatGPT v4 was 10x smarter than v3.5. If it continues at this pace, ChatGPT v5 could have an IQ of 1600."
"This is leading humans towards a 'Singularity', meaning there will be a time, relatively soon, when, due to their complexity, we may not understand the decisions or output of certain AI systems anymore."
- Mo Gawdat - Former head of Google X on the Diary of a CEO podcast.
3. Measuring AI advancement in months, not years
"AI systems will be a billion times smarter than humans by 2045"
- Mo Gawdat - Former head of Google X on the Diary of a CEO podcast.
4. Only legislation can save many creative industries including music and television
Initiatives such as AI-generated audio tracks are already emerging: https://www.youtube.com/watch?v=rcv70SBfk-0
Commercial events featuring holographic concerts of artists that no longer perform, such as ABBA and Tupac, have occurred in recent times.
Forthcoming Legislation
Crucially, regulation (predominantly from the EU) is on its way and is expected to provide significant guidelines, specifically targeting the use of personal data, misinformation and hate speech, as well as the development of general AI models for business aligned with human interests and values. More on EU regulation here:
Closing out
It's a time of tremendous technological advancement and great disruption. The opportunities for productivity and commercial gains are palpable, and companies are innovating in AI much faster than governments can legislate. But the risks of the unknown are being discussed, and the potential impact on entire industries is already foreseen.
This article, like all the other opinions out there, doesn't have the answer, but I hope it encourages readers to lean in, craft an opinion and learn more about the capabilities and risks. Thinking about the argument for AI keeps me up at night as I consider the amazing future ahead, but also the risks of getting it wrong.
It's hard to recall a time in recent memory that a technology has evolved so quickly, with the power to disrupt so much and with so little consensus on where this is all heading.