I visited the future… and all I got was this lousy chatbot

Just last week it was announced that ChatGPT has become the fastest-growing app of all time, reaching 100 million users in a little over two months. And over that time there has been so much said about OpenAI's new chatbot, some of it fawning fan reviews, some of it harsh critique… and from my own explorations I would suggest both these responses are entirely valid. If I were to summarise all that I've learnt so far, it would be this:


  1. ChatGPT is incredible
  2. Yet it’s also deeply, deeply flawed
  3. Those flaws may never be fixed
  4. This is both disappointing and dangerous
  5. But people will use it anyway


1. ChatGPT is incredible

For anyone who has ever used a chatbot, ChatGPT is incredible in its ability to 'hold a conversation' and respond to natural-language prompts. Personally I'm not a fan of chatbots; in fact, I've never understood why any company would voluntarily put a piece of technology in front of their customers when they are at their most impressionable (when they have a problem, request or sales enquiry) and then proceed to frustrate them with circular arguments, technology drop-outs and constant miscommunication.

BUT, if a chatbot were powered by ChatGPT then I imagine I could have quite a nice (though slightly verbose) conversation. It's eerily human-like in its responses, and you get them fast. And if there is anything incredible about ChatGPT, it's that you get fast, human-like answers… it's definitely not the quality or reliability of its responses.


2. Yet it’s also deeply, deeply flawed

Unfortunately you cannot rely on ChatGPT to provide you with accurate information. There are countless examples of this floating around the internet, and I shared an example on LinkedIn recently of ChatGPT being a flat earther. Supporters of the platform will claim that most of the responses are accurate, but in some ways it doesn't matter whether this is true or not. If you can't be sure whether the response you receive is accurate, then you're still going to have to validate it through your own research. Not only does this remove one of the two incredible traits of ChatGPT (speed), the way ChatGPT provides references doesn't always make validation easy.

If you ask ChatGPT where it sources its information from, it will often tell you its 'training data' but not provide links to specific books, websites or articles. Then, at other times, it will provide references, but when you check those references they turn out to be inaccurate or even entirely made up.

For example, I recently asked ChatGPT about the amount of milk produced in Turkey (don’t ask me why) and I got the following responses*


Simon: How much milk is produced in Turkey?

ChatGPT: 11 million tonnes in 2021

Simon: Where did you get this data from?

ChatGPT: https://www.fao.org/faostat/en/#data/QL

~ Simon goes and checks the FAO website and finds out that Turkey produced 23.2 million tonnes of milk products in 2021 ~

~ Simon then thinks that he must have phrased the question incorrectly ~

Simon: How much total milk products are produced in Turkey?

ChatGPT: 5 million tonnes in 2021 (includes products such as milk, yogurt, cheese, and butter)


* These responses have all been truncated because, as I mentioned earlier, ChatGPT tends to give quite long-winded answers


So neither of the two answers ChatGPT provided aligns with the reference it gave… or even with each other (it might be possible for the amount of milk produced to be less than the total amount of milk products, but not the other way around). Now this may feel fairly benign, but there are a number of other examples of ChatGPT providing dangerous financial advice, including suggesting people take on high-risk loans they can't afford.

And although spitting out information that is absolutely wrong and incorrectly referenced is a major problem, I would suggest there's actually a bigger flaw. It's not only that ChatGPT is sometimes absolutely wrong; it's also wrong absolutely. What I mean by this is that ChatGPT is entirely committed to its wrong answers. There is no hesitation or doubt in its responses, no caginess, no upward inflection on the last word to suggest that it's not entirely convinced it's correct. The problem is that without 'ifs, buts and maybes' we are likely to believe the responses ChatGPT provides even when we shouldn't.


3. Those flaws may never be fixed

Some might argue that this technology is still in its infancy and will get better over time. Although we can most definitely expect the technology to get better, it may not get more accurate. In fact, the main objective of ChatGPT is not actually to provide accurate information; its main objective is purely to sound more human.

The whole premise of a generative AI model (like the one ChatGPT is based on) is to predict the next word(s) in a sentence given a particular prompt. It does this by ingesting large amounts of text data and finding statistically significant patterns. The more statistically significant a pattern of words, the more likely it is to be regurgitated by ChatGPT when you ask a question. The objective of ChatGPT is plausibility, not perfection. In fact, you could argue that when ChatGPT provides an accurate response to your question, it happened by chance, not by design.
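To make that premise concrete, here's a minimal sketch in Python of "predict the next word from statistical patterns". This is a toy bigram model over an invented corpus, nowhere near the transformer architecture actually behind ChatGPT, but the principle is the same: it emits whatever most often followed your word in its training data, with no notion of whether that continuation is true.

```python
from collections import Counter, defaultdict

# A toy training corpus (invented for illustration; ChatGPT's is vastly larger)
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a bigram model)
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the statistically most common continuation of `word`,
    or None if the word never appeared in the training data."""
    counts = follows.get(word)
    if not counts:
        return None
    # The model optimises for plausibility, not truth: it simply
    # returns whatever most frequently followed `word` in the corpus.
    return counts.most_common(1)[0][0]

print(next_word("sat"))  # "on" — chosen because it's frequent, not because it's verified
```

Notice there is nothing in this loop that checks facts; scale it up by billions of parameters and you get fluency, not accuracy.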


4. This is both disappointing and dangerous

Humans have been pursuing the dream of creating a computer that can think like a human for over 50 years. The goal of Artificial Intelligence has consumed vast amounts of money, resources and time, and with ChatGPT it suddenly feels like we're getting close.

Except we’re not.

Although ChatGPT sounds human-like, it doesn't necessarily 'think' human-like. It doesn't have the capacity to reason in the same way that we do; it can only consult its statistical models and provide a contextual response. It also has no capacity to shape its responses based on personal experience and individual circumstance (for a great critique of this I encourage you to read this post by Nick Cave responding to a song written by ChatGPT in the style of Nick Cave), and perhaps most importantly it lacks imagination. It cannot generalise concepts from one domain and apply them to another, or dream a new idea into existence.

And that's why, like Professor Adam Frank, I now hesitate to use the words Artificial Intelligence when talking about ChatGPT. At best it's misleading, at worst it can be dangerous. By implying a form of intelligence we are more likely to believe in it, and trust in it, than we should.

But perhaps the most disappointing thing about ChatGPT (and the pursuit of AI more generally) is that rather than compensating for our human flaws, it compounds them. Historically humans have created technologies to make us better, faster, stronger and more consistent.

For example, most of us are poor at maths. We can do the simple stuff OK, but if someone asks what the square root of 247 multiplied by 368 is, we will probably take a long time to get to the answer, and even after a whole bunch of time and effort there's a good chance we will get it wrong. But thankfully we have calculators. As long as you push the right buttons in the right sequence you will get the right answer. Every. Single. Time.
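That determinism is the whole point of a calculator, and it's worth noting the question above is actually ambiguous in a way a calculator forces you to resolve: "the square root of 247, multiplied by 368" is not the same number as "the square root of (247 multiplied by 368)". A quick sketch in Python:

```python
import math

# Reading 1: take the square root of 247 first, then multiply by 368
answer = math.sqrt(247) * 368

# Reading 2: multiply first, then take the square root of the product
other_answer = math.sqrt(247 * 368)

# Same inputs, same button sequence, same result — every single time
print(round(answer, 2))
print(round(other_answer, 2))
```

Run it twice, or on a different machine, and the numbers don't change; that repeatability is exactly what the next paragraph says ChatGPT lacks.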

ChatGPT is like having a calculator that randomly provides you with the wrong answer… and then convinces you that it's 100% correct.


5. But people will use it anyway

Unfortunately we all seem to be obsessed with doing more with less. More output, less effort, and preferably delivered instantly. And if that's your objective, it's hard to go past the output of chatbots like ChatGPT. Don't want to write your own blog post slash academic article slash mid-term paper? Just feed a few prompts and parameters into ChatGPT and it will spit out an entirely average but nicely worded answer.

And unfortunately we live in a world where something that sounds nice is often quite enough. Apart from a ranking algorithm that automatically crawls the internet, most people's blogs aren't going to be read that much anyway. And to be honest, if your ChatGPT-created article or paper can pass scrutiny, it probably says as much about the flaws in our academic systems as it does about the quality of the work.

The optimist in me would like to think that the inception of ChatGPT can be a catalyst to question a whole bunch of things about the technology we create, the type of work we get people to do, and how we define human success both in school and in the workplace. But the realist in me knows that if this were ever to happen, it's unlikely to happen any time soon. In the short term, the pure novelty of having a chatbot spit out nice-sounding words with next to zero effort means our classrooms, boardrooms, inboxes and the internet more broadly will be awash with this type of content.

And as the technology gets better, the problems with this will become more pronounced. The latest evidence suggests artificially generated human faces now look more real than photographs. Over time we can expect text generated by ChatGPT to appear more real and more trustworthy than text that has been researched and anguished over by humans… but, just like artificially generated faces, this artificially generated text will also lack depth, personal experience and, ultimately, a deeper sense of purpose.

Darren Hill

Co-Founder 3-Time AFR Fast 100 company, Pragmatic Thinking

1y

Nice piece Simon Waller, and yet I finished reading with some big questions: 1. Are we placing our own expectations upon technology to be something it's not? 2. Are we judging AI against the right criteria? You see, in my own research into ChatGPT I see it as a technology that is a tool to help us get a job done. I don't expect it to be human. I don't delegate my humanness to it. As such I can see enormous time savings and a rudimentary brainstorming and organising tool. Much of the critique I read around AI is that it's not human enough… but what if it's not supposed to be human? Then we stop trying to classify it as such. And I think that's useful.

Helen Palmer

I help people learn so something different is possible

1y

I'm convinced you wrote this article yourself, Simon, on the basis that there were typo errors. And I can't imagine you took time to intentionally insert them. The presence of typo/spelling errors could be a new key human performance indicator. What a world!

Steph Clarke

Helping the C-Suite see around corners

1y

Great thoughts Simon. I saw a post the other day saying the impressive thing about GPT-3 is not what it *can* do, but what it *couldn't* do a few months ago. Sam Altman keeps talking about how this is essentially a demo, but the problem is people are treating and using it like a 'final' product. I wonder if we'll see social platforms / communication methods starting to add some kind of tag for 'generated with AI' (and maybe a filter to filter these posts in/out?). The integration with Bing is really interesting, and what they demo'd last week shows that GPT-powered searches also include a citation of which site the information was pulled from, so people can verify slightly more easily. It's a wild time for sure.

Michael Schiffner

High Performance Executive Coach for Professionals Specialising in Business Development + Leadership | Keynote Speaker | Managing Director of Collective Intelligence

1y

Great share Simon! I always appreciate your perspective on tech.
