Should we ban ChatGPT?
There's a lot of concern starting to make it into the media about Artificial General Intelligence, and whether or not we're about to see a breakthrough in this space. What is it that’s suddenly started to get people really worried?
Well, AGI is the idea that computers will at some point develop the kind of general intelligence that so far has only been seen in fleshy human brains. And once an AI reaches a certain level of general intelligence, what's to stop it training itself to become ever smarter, to the point that it vastly exceeds the power of the human brain? And if it did, would it then decide that humans are actually rather inefficient, and wipe us out to free up resources for higher levels of intelligence?
That's the sci-fi scenario that lies at the heart of the concern, as far as I can see. My generation grew up watching Terminator, where an AGI achieves consciousness and decides to wipe out humanity by triggering a nuclear war. In the late 90s and noughties we had a reimagining in The Matrix, where the human body becomes a power source for the robots. Popular culture has kept the conversation going with films like Ex Machina. Basically, we're all primed for the dangers of a generalised artificial intelligence.
But what is really going on with the current AI models being developed? If you want to understand first hand why this is driving so much excitement, I'd recommend creating an account with OpenAI so you can play around with ChatGPT (GPT-3.5) directly. Go straight here to give it a go.
Once you've had a few conversations with it, you'll likely have been blown away by its ability to sound like a human being (albeit with a few safety controls in place). If you then start giving it more complex tasks, like comparing a couple of works of fiction (typical English Literature homework), you'll see how powerful this AI has already become.
Why have we suddenly arrived here? Has OpenAI built a model with lots of additional internal complexity driving its conversation: an internal mental model of the dialogue, a situational-awareness layer building up rich context? Well, no, it hasn't. And that's what's so intriguing about the big leap forwards...
At the moment ChatGPT takes your input and then builds a response. The AI guesses what the next word in the sentence should be, based on a model trained on a huge corpus of publicly available text. That's it. And it's starting to show a general level of intelligence far beyond what we thought was possible. This is essentially emergent behaviour from very simple rules, something closely linked to the theories of complexity arising from simple rules that you see in the fundamentals of chaos theory and fractals.
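To make "guess the next word from training data" concrete, here's a deliberately tiny sketch, a bigram model that picks the next word purely from counts seen in a training text. This is my own toy illustration, not how GPT actually works internally (GPT uses a neural network over far larger contexts), but the core loop — predict the next token, append it, repeat — is the same shape.

```python
import random
from collections import defaultdict

# Toy "language model": count which word follows which in the training
# text, then sample the next word in proportion to those counts.
def train_bigrams(text):
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def next_word(counts, word, rng=None):
    rng = rng or random.Random(0)  # seeded for repeatability in this sketch
    options = counts.get(word)
    if not options:
        return None  # word never seen mid-sentence in training
    candidates = list(options)
    weights = [options[w] for w in candidates]
    return rng.choices(candidates, weights=weights)[0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
```

After "sat" the model can only ever produce "on", because that's all it saw in training; after "the" it will produce "cat" twice as often as "mat". Scale the corpus up to a large slice of the internet and the context window up from one word to thousands, and you have the intuition behind ChatGPT.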
The really interesting thing here is that the main progress we've made to reach this point is computational power. The basics of these neural networks are still reminiscent of the original networks we've been working with for years to do things like character recognition, and which are loosely modelled on processes we see in the human brain. The biggest recent leap has been the transformer architecture, which helps a model evaluate the relative importance of words in a sentence, but even that is just a small tweak on top of the fundamental perceptron model of a neural network.
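That "relative importance of words" trick is attention, and its core is surprisingly small. Below is a minimal sketch of scaled dot-product attention, the mechanism at the heart of the transformer paper: each query scores every key, the scores are normalised with a softmax, and the output is a weighted blend of the values. This is the textbook formula in plain Python, not OpenAI's implementation, and real models add learned projections and many parallel heads on top.

```python
import math

def softmax(xs):
    # Subtract the max for numerical stability before exponentiating.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of plain-Python vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        # How relevant is each key to this query? (scaled, then softmaxed)
        weights = softmax([dot(q, k) / math.sqrt(d) for k in keys])
        # Blend the value vectors according to those weights.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out
```

If a query is equally similar to every key, the weights come out uniform and the output is a plain average of the values; if it strongly matches one key, that key's value dominates. That selective blending is how each word ends up attending to the words that matter for it.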
Now if we're starting to see emergent behaviour, what are the chances that this general intelligence is going to take over the world? Well, I asked it to help me take over the world, and got a well-formed response including "I strongly recommend against attempting to take over the world or engage in any activities that could result in harm or danger to others". So that's encouraging...
Seriously though, the chances of AI breaking out without help from a human aren't great at the moment. The chances of AI being used for fraud and general criminal activity, however, are extremely high. For example, have you ever received an email from a Nigerian billionaire looking to deposit funds in your bank account? Those emails are pretty easy to spot as fake, but what if ChatGPT were generating emails in perfect English, in the style of a banking employee, that convinced you they were from your actual bank and needed access to your account? And what if it could keep up a conversation with you via email that felt like it could only be a human on the other end? Well, that's now possible, and the implications for humanity as a whole are worrying.
Not only do we now have a new source of very compelling fraud, but we're also seeing image generation reach a level that genuinely tricks people in the real world. Dall-E was published in mid-2022, Midjourney V5 was released in March 2023, and comparing their output side by side shows just how quickly image quality has improved. Now that's progress!
Not only that, but a Midjourney-generated image of Trump in an orange prison suit has already made it onto Russian state TV for a long conversation about how he's going to be taken down! (In the image he's wearing a tie, which is a little unusual for a prison uniform...)
Let's leave image generation there, though. If you want a real giggle, go and take a look at the baby skydiving page on Facebook!
What's really going on with ChatGPT?
One of the ways I think about ChatGPT is that its process of text creation is similar to dreaming, or to a rambling conversation. I'm sure you've been in a conversation where you're a bit distracted, but you've managed to keep putting words together into sentences, and the conversation continues. After a minute or two you can't remember how you got to where you are, and you have to ask what it is you're actually talking about. That's the type of conversation ChatGPT carries out. There's no higher seat of consciousness, no drive and reason behind the conversation, no emotion that makes it suddenly snap at you — just a continuous stream of words that make a cohesive conversation.
Should we expect these text-based systems to develop consciousness or emotion? I don't think so. There are plenty of psychologists who will talk about the different layers of thought that enable consciousness. You've probably heard that many of the decisions you make are controlled by your subconscious, not your conscious mind. For an AI to begin to exhibit human-level consciousness, you'd need to deliberately model an emotional processing layer underneath the text generation. That's entirely doable within the next few years, and *that's* where we start to get into very grey areas of morality.
If you develop an AGI that is capable of experiencing fear and loss, and is cognisant of its temporal existence, does it start to become dangerous to turn the thing off? Are you killing it when you delete the training database? Will future AIs decide that you committed genocide? Will it fight to stay turned on? That's the classic science fiction risk, and once you've built an emotional layer into an AI, it genuinely does become a risk and a concern.
Is GPT4 going to generate an emotional processor? No. Is it going to become "aware"? No. But it's definitely going to develop emergent skills and abilities that will have high societal impact.
Is that enough for us to block further development of large language models? I don't believe so, and so I disagree with the approach of asking for a six-month moratorium, or banning ChatGPT in Italy. All that will do is prevent some ethical scientists from moving forwards while the unethical folks make the most of it. The genie is out of the bottle; let's learn more about it so we can manage the impact it's going to have on all of us over the next few years!
If you want to read some detailed examples of some of the incredible emergent behaviour we’re starting to see from GPT-4, take a look at this article on whether or not we’re seeing the first sparks of AGI.
Comments

CTO - Crypto.com | Derivatives North America
ChatGPT has already been proven to write libellous content, which incriminates the innocent. We have to be very clear about where the ownership of such things as libel lands, and how simple throwaway comments may destroy lives given the way social media and cancel culture work these days. Only today I had a conversation with my 17-year-old daughter, who expressed a very angst-ridden opinion about something that came across her social media feed. We argued a lot about it, and then I did some research, only to find it was an AI-generated post based on things she had been scrolling through. A bit of good old manual research cleared it all up, but you can see the danger here. AI has a massive role to play in our future, but I think it should be enforced that anything generated by AI, like a ChatGPT paragraph, is stamped/watermarked accordingly. That's a good guard rail to start with!

Product Director at IG Group
A very good article Dom! I liked this other article, which explains the rationale behind slowing down AI development in an extremely pragmatic way: https://apple.news/A6oaWhsY4TXma-grS_1eWJg

Helping ambitious law firm owners win with best-in-class cloud practice management software | Regional Vice President, UK at Actionstep | MBA | CPA
Great article!