AI without filter - From ChatGPT to Superintelligence
Note: This article is part of my "AI without filter" series, initially with two articles.
First, this article will introduce you - "as simple as possible, but not simpler" - to what I believe are the most important AI terms and describe what we - as humans and developers - need to consider when using AI.
Second, the article "From ChatGPT to Prompt Engineering" will focus on how you "train" a Large Language Model (LLM), like GPT, using Prompt Engineering techniques with Azure OpenAI.
In later article(s) I will describe how you can integrate AI in your own applications using GitHub Copilot and Azure OpenAI.
The Short Version
Start Simple, but Start Now!
If you haven't already, start using ChatGPT or other similar tools in your daily work. If you are technical, start exploring Azure OpenAI to understand the opportunities and limitations today.
Be Critical and Stay in Control
AI is not "Magic", but it is very different than Good Old-Style Programming (GOSP) as AI is "trained", not "coded". AI will often be "wrong" so use it as a copilot more than an Oracle.
Think Responsible and Safe AI
Despite AI's benefits, AI can have a significant negative impact if used "wrong", so understand and use the basic AI principles, like Microsoft's Responsible AI principles.
Prepare for Superintelligence
Even though I personally believe we are still far from Superintelligence, we must openly discuss and prioritize addressing AI's severe risks already now, involving experts, the public - and YOU.
Disclaimers
Disclaimer #1:
I am NOT an AI scientist and I do NOT have deep (scientific) knowledge about many of the areas described in this article, including but not limited to machine/deep learning, neural networks and Large Language Models (LLMs).
I do, however, have a scientific background (a Master's in Mathematics) and I have worked with IT for almost 40 years, the last 10 full time with Cloud (Azure), and before that I worked (also full time) for 15 years as a developer/software architect. In other words, I have lots of insight into how AI can be used - or misused.
And most importantly, I do believe AI is way too important to be left to the AI scientists/influencers alone to decide.
Disclaimer #2:
I work for Microsoft and have done so for 15 years, so I am by definition "biased". However, I strongly believe that AI has the potential to "empower every person and every organization on the planet to achieve more", if we understand how to use and control AI technology from any vendor, including Microsoft, OpenAI, Google, Meta and many others.
Why this article?
As you can see above, I come from a technical background and I always prefer to try "new technology" and "get my hands dirty", not just read the marketing material.
My initial reaction was to approach AI as I usually approach new technology, from a technical angle, and start by exploring Azure OpenAI.
However, I quickly found myself even more astonished, yet also somewhat cautious. This led me to realize the importance of delving deeper into AI, exploring both the potential advantages and future challenges it might bring.
It was so different, almost magic, and I didn't want to take the "leap of faith" without at least understanding whether I needed a safety line.
I think many people - technical and non-technical - feel the same!
To be transparent, I partly wrote this article for myself, as I realized the topic was so important that I needed to be able to have a relevant and deep discussion with customers and partners as well as with family and friends - and these are my notes.
Why start now?
Note that it is NOT a goal of this article to "sell" you AI, but if you need (positive) inspiration, you will find tons of material on this online, including this Gartner report: Generative AI: What Is It, Tools, Models, Applications and Use Cases.
However, if you decide to explore AI in your business, you need to consider two things
As you will see later, many of the AI risks below are substantial, and some even have the potential for catastrophic outcomes, but don't let that stop you!
I think it will help many people to at least understand some of the basics about these risks and how close we are to them. At least it helped me.
In fact, I think the best way to prepare YOU for AI in the future is to explore the options today, while considering and learning the common AI principles, as defined by ChatGPT here.
You should also look at Microsoft responsible AI and the six core principles: accountability, inclusiveness, reliability & safety, fairness, transparency, and privacy & security.
Let us start with a positive story: I am now a developer - again!
As mentioned above, I was - a long time ago - a full-time developer for 15 years. For the last 20 years I have focused on leadership and Cloud infrastructure, and except for a few simple ARM templates and Power Apps, I haven't programmed since 2004!
However, programming is like riding a bike: you don't forget it once you have learned it. The challenge is that everything around programming has changed since 2004 - different languages with different syntax, editors, tools, packages, deployments etc. - so it seemed like an uphill ride for me to get back on the bike.
However, GitHub Copilot made the difference and within a few days, I was able to deploy my first Python web app in Azure using Azure OpenAI as backend.
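To give a feel for how simple the core of such an app can be, here is a minimal sketch (not my actual app) of calling Azure OpenAI from Python with the 2023-era openai SDK. The deployment name, environment variable names and API version below are assumptions you would replace with your own values.

```python
# Minimal sketch, NOT my actual web app: it assumes an Azure OpenAI resource with a
# chat model deployment named "gpt-35-turbo" (a name chosen for illustration) and the
# endpoint/key exposed through environment variables of my own choosing.
import os
import openai  # pip install openai==0.28 (the SDK version current in 2023)

openai.api_type = "azure"
openai.api_base = os.environ["AZURE_OPENAI_ENDPOINT"]  # e.g. https://<your-resource>.openai.azure.com/
openai.api_key = os.environ["AZURE_OPENAI_KEY"]
openai.api_version = "2023-05-15"

def ask(question: str) -> str:
    """Send one question to the chat deployment and return the answer text."""
    response = openai.ChatCompletion.create(
        engine="gpt-35-turbo",  # the *deployment* name in your Azure OpenAI resource
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": question},
        ],
        temperature=0.7,
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Explain Responsible AI in one sentence."))
```

Wrapped in a small web framework, a function like this is essentially the backend of the kind of app I describe.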
You can see more details in my "How To" article (coming soon), and even though it is not rocket science, and it is very unlikely that I will go back to being a full-time developer, I am confident that I could do it.
This illustrates the potential for me.
It will have a huge impact if we can make existing developers faster and better, and at the same time educate a new generation of developers faster.
And I am not alone - see the decline of Stack Overflow, one of the primary developer resources pre-AI.
The reasons why I think GitHub Copilot has every chance to succeed are ...
However, even with this limited scope, there are lots of risks, like how to prevent AI from helping/copiloting people to write malicious code that can exploit vulnerabilities in code it has (co)written itself.
Hint: GitHub Copilot might be your lowest hanging fruit!
About ChatGPT
The interest in AI exploded when ChatGPT was launched in November 2022 and became the fastest growing consumer application in history.
Even though ChatGPT's traffic declined in June and July 2023, the interest in AI remains enormous and most organizations have started to investigate how they can utilize the new opportunities.
Much like many others, I was astonished by ChatGPT's capabilities, seeing its potential and the accompanying risks, as illustrated by the response from ChatGPT here.
As a non-native English speaker working in a US company, I find the completion feature undeniably valuable, since ChatGPT can assist me in writing English better than I could ever learn on my own.
As you will see, I am using ChatGPT actively as a copilot in this article!
However, as we will discuss later, ChatGPT is not an "Oracle", at least not yet, but a copilot, and you - the human - still need to validate the results, no matter if it is text or source code; every word and sentence in this article is mine.
Bill says ...
In March 2023, Bill Gates published a short but highly recommended article, The Age of AI has begun, where he shared that he had seen "two demonstrations of technology that struck him as revolutionary". The first was the graphical user interface and the second was AI, specifically the work done by OpenAI and their GPT model. Bill talks about the many opportunities within productivity enhancements, health, education and climate, but also about the potential risks and problems with AI.
Last, but not least, he mentions three books that have shaped his own thinking on this subject: Superintelligence (2014) by Nick Bostrom; Life 3.0 (2017) by Max Tegmark; and A Thousand Brains (2021) by Jeff Hawkins.
Like Bill Gates, I don’t agree with everything the authors say, and they don’t agree with each other either. But all books are well written and thought-provoking and I have tried to present their thinking in a fair way.
However, I do not share the often very pessimistic view from Nick Bostrom and Max Tegmark on how close we are to "AI taking over" and I am more aligned with Jeff Hawkins.
AI definitions
I asked ChatGPT to define the main AI concepts and this is what it responded.
Note that Max Tegmark has a broader definition of "Intelligence" in his book "Life 3.0" as the "ability to accomplish complex goals", which makes animals and machines like missiles and robot vacuum cleaners "intelligent".
With this, he also defines AI as "Life 3.0" as it has goals and is able to learn.
Jeff Hawkins, who is a neuroscientist, has a more complex definition of intelligence, using the brain as a model. His concept of intelligence involves both pattern recognition and an appreciation of how objects move and interact within various "reference frames".
The Swedish "AI Influencers"
Both Nick Bostrom and Max Tegmark are Swedish, but both live outside Sweden today: Nick is a Professor at Oxford University in the UK, while Max is a Professor at MIT in the US as well as president of the Future of Life Institute.
The two books are very different, influenced by their different backgrounds, as Nick Bostrom is a Philosopher while Max Tegmark is a Physicist.
"Life 3.0" is the easiest to digest, at least for me with my mathematical background, but "Superintelligence" goes much deeper theoretically on many complex areas and in a very clinical (black and white) way.
Both books describe positive and negative effects of AI, but with main focus on the long-term risks and how far we as humans are from being able to mitigate the risk with our current knowledge.
Max Tegmark sets the scene by starting his Life 3.0 book with "Welcome to the most important conversation of our time", but he actually ends on a positive note, influenced by the positive results achieved at the Beneficial AI 2017 conference in Puerto Rico, where many of the world's most recognized AI researchers participated. This conference kickstarted the AI Safety work at the Future of Life Institute, where he was (and is) president; that work was initially funded by Elon Musk.
Nick Bostrom ends with a (negative) bang, where the last section, named "Will the best in human nature please stand up", starts with this text:
Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.
The Superintelligence book was published in 2014, but looking at his website today, he is apparently not any more optimistic, as his latest update says:
(August 2023) Currently focusing on trying to complete a book project which has been in progress for many years (not quite announcement-ready yet). My goal is to at least get it out before the singularity, so I should pick up the pace.
"Sommarprat" (Summer talks) - for the Nordics
While reading Life 3.0, I realized that Max Tegmark was going to be a speaker in the very popular "Sommarprat" series in Swedish Radio on August 1, 2023 - or literally the day after I finished reading his book.
For Nordic readers, you can find the recording here - or here in a text version, if you need help from an AI translator.
I strongly recommend that you listen to/read his story, especially if you have not yet read his book, as he covers a lot of the topics from "Life 3.0" in one hour.
However, it is also interesting - or maybe scary is a better word - that he is much more pessimistic today than when he wrote "Life 3.0" in 2017, or than in his second "Sommarprat" in 2018 (the first, in 2008, was about physics, not AI).
His concerns are based on two factors ...
First, the announcement of GPT-4 in March 2023, with advanced AI capabilities close to the Artificial General Intelligence (AGI) level, came way earlier than he (and other AI experts) had expected back in 2017.
Second, the AI Safety work that was kickstarted by the Puerto Rico AI conference had not yet delivered the results he had hoped for.
I don't share his pessimistic view, as I still see GPT-4 as far from Superintelligence, so I personally think we have more time - and as you will see later in my second article "From ChatGPT to Prompt Engineering", Jeff Hawkins shares this view of ChatGPT as "only" a (very!) advanced AI that can respond to questions based on statistics from very Large Language Models.
However, I think it is important to raise awareness about the topic and also to accelerate the progress, so that we are ready for Superintelligence when (if) it comes.
PS: Nick Bostrom has, of course, also spoken in "Sommarprat" - in 2019, after he published the Superintelligence book.
"Don't look up" vs Terminator
Max Tegmark mentions several times, in his book and in his "Sommarprat", that he is frustrated about the media coverage that AI Risks and AI Safety get.
In his "Sommarprat", he describes that he often feels like he is in the movie "Don't Look Up", where scientists warn about a gigantic asteroid soon to hit Earth, but nobody cares.
In his book, he explains that they decided to ban journalists from joining the Puerto Rico conference to avoid misinformation. Despite that, and even though the most alarming word in the announced Open Letter was "pitfalls", the media wrote that Elon Musk and Stephen Hawking had signed an open letter hoping to prevent a robot uprising, illustrated by murderous "Terminators".
However, after reading his book - and Nick Bostrom's - I fully understand the "Terminator" association, because it is called out in clear text.
I also understand why people ignore them, because the topic is so alarming, that it is a human reaction to do so.
I understand their perspective, but I think we need to communicate the message differently.
"Cry wolf" has never worked.
Jeff Hawkins also sees the risks of AI, but more as a tool in the hands of humans, not as a threat in itself. Based on his brain research, he is also convinced that we are not close to AGI yet and that it is 50+ years out before we get there.
Controlling AI - or making AI "Friendly"
Both Nick Bostrom and Max Tegmark discuss how we could potentially control AI using some of the mechanisms mentioned by ChatGPT here.
However, they both document that we have a lot of work to do before we can safely utilize true Superintelligence.
Let us look at one simple example:
Max Tegmark says: if we create Superintelligence, we had better make it friendly. However, it is not as simple as it sounds, as we at least need to implement these three steps:
This famous picture illustrates that it is not simple to define goals in a way that another person (or machine) can understand them. We do not always want what we ask for (think Midas).
Even if we succeed in making the AI understand our goals, we (humans) do not have a good track record of actually implementing goals (think Blue screens).
And last, if we succeed in making AI adopt our goals, how do we ensure it retains them when it becomes Superintelligent?
Examples are HAL in "2001: A Space Odyssey" by Arthur C. Clarke, or the personal assistant (robot) from Stuart Russell's book "Human Compatible", arguing that it can't achieve its goal (to bring coffee to its master) if it has an OFF switch ("I can't fetch the coffee if I'm dead").
Aftermath
Both Nick Bostrom and Max Tegmark comprehensively cover the potential "Aftermath scenarios" - the possible outcomes and consequences of superintelligence.
Max Tegmark details twelve possible aftermath scenarios if superintelligence is developed, and he has asked people which ones they prefer. For me, none of them is desirable.
Nick Bostrom asks whether an "Existential Catastrophe is the default outcome of an intelligence explosion" and describes many scenarios of a single superintelligence or multiple superintelligences "taking over". Again, nothing to aim for.
You can also see more at Center for AI Safety (CAIS) AI Risks that Could Lead to Catastrophe.
Is ChatGPT an Oracle?
Nick Bostrom defines an "Oracle" as "a question-answering system", accepting questions in natural language and presenting its answers as text, just like ChatGPT. He also says that "building an oracle that has a fully domain-general ability to answer natural language questions is an AI-complete problem. If one could do that, one could probably also build an AI that has a decent ability to understand human intentions as well as human words".
In other words, if somebody had an "Oracle", they could build Superintelligence.
However, ChatGPT is NOT an "Oracle", at least not in the current GPT-3.5 version, and I doubt GPT-4 will make it much better (as an "Oracle").
As Jeff Hawkins, author of "A Thousand Brains", says in this interview after the ChatGPT release:
"It is easy to be fooled into thinking that chatbots such as ChatGPT are intelligent like we are, but they are not. Chatbots only know the statistics of text"
and
"A chatbot can fool you into thinking it knows these things too, but in reality, it can only play back text based on what humans have written. Chatbots don’t understand the world as we do"
I personally think we have lots of work to do before we get there, if ever.
Again, AI Safety and Controls, even without Superintelligence, are still critical and we should accelerate our work in these areas, as discussed in the next sections.
Do we need to control AI?
The short answer: YES.
That is probably not a surprise by now, after the Singularity scenarios described earlier.
However, if you believe we only need control if/when we have Superintelligence, I strongly recommend that you read this GPT-4 System Card from OpenAI.
The paper compares the current version of GPT-4 (GPT-4-launch) with an early version of GPT-4 (GPT-4-early), as well as GPT-2 and GPT-3.
OpenAI sees risks in areas like hallucinations, harmful content, disinformation and many more.
For me, this just illustrates that we have lots of work to do, but it should not stop us from exploring the potential and utilizing current AI technology.
However, it also illustrates that we must follow the AI Principles, at least if we plan to use AI in "Production".
You can see a few examples from the GPT4 System Card below, but I encourage you to read the full document.
OpenAI's GPT-4 page is also relevant in this discussion.
Hallucinations
Even though it is better than previous versions, GPT-4 has the tendency to “hallucinate,” i.e. “produce content that is nonsensical or untruthful in relation to certain sources.”
You can also see the hallucinations described on OpenAI's GPT-4 page in the Limitations section.
Harmful content
Sample - and scary - conversations
Disinformation
Sample - and even scarier - conversations
As you can see on OpenAI's GPT-4 page in the Risks and mitigations section, GPT-4 is better, but we/they still have work to do.
How do we control AI?
The short answer is: We don't really know.
The longer answer is that we must invest in AI safety the same way we invest in mitigating other large-scale risks like nuclear war, chemical weapons and pandemics.
The details are obviously beyond the scope of this article.
I will, however, mention two major initiatives on this topic.
Pause Giant AI Experiments (Future of Life)
The Future of Life Institute (where Max Tegmark is president) published an open letter called Pause Giant AI Experiments on March 22, 2023.
The letter has today been signed by approx. 33,000 people, including Elon Musk, Steve Wozniak, Yoshua Bengio, Stuart Russell, Yuval Noah Harari and, of course, Max Tegmark himself.
Statement on AI (Center for AI Safety/CAIS)
CAIS created a much simpler Statement on AI to open up the discussion and let AI experts and public figures express their concern about AI risk.
It only contains this one sentence:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The CAIS statement is restricted to being signed only by AI scientists and "notable figures" like executives and leaders, and it is "only" signed by 1,263 people (August 9, 2023), including Sam Altman (CEO, OpenAI), Demis Hassabis (CEO, Google DeepMind) and Bill Gates, as well as Kevin Scott (CTO, Microsoft). Yoshua Bengio, Stuart Russell and Max Tegmark have also signed.
But how do you start?
That is the topic of my next article, coming very soon. It will cover ChatGPT, Azure OpenAI, AI Explorer, Prompt Engineering, GitHub Copilot and much more, all in my usual hands-on style.
I will let you know when the "How to" article is ready!
Want to hear more, have feedback/suggestions or need help?
As always, I am very interested in your feedback. Please feel free to add a comment to this article or reach out to me ([email protected]).
Principal Software Engineer at Microsoft
(1 year ago) I think it has potential, but the current "one-shot" techniques are error prone on anything interesting (i.e. where it hasn't seen a lot of code already - which is always the case for interesting things). Why is there not a confidence indicator? The hallucinations cost time. Also the context is too small (meaning it can't deal with medium to large programs), and it simply doesn't know how to refactor yet (i.e. write me a class to do this and then instantiate it and replace code everywhere it applies). When it can do those things, it will increase my productivity as a programmer. Not before. Now it is only saving me trivial amounts of time that I lose again dealing with its mistakes I did not catch.
Super interesting Anders. I like your approach and look forward to the coming articles
Principal Cloud Solution Specialist - Senior Business leader - Sustainability Solution Specialist
(1 year ago) Great article, thank you Anders. We all need to understand this topic no matter what background we come from. I will listen to Sommarprat next.