Three points to watch out for when using ChatGPT

Speech is one of the things that sets humans apart from animals. So with ChatGPT capable of generating conversation, AI – a field born more than 60 years ago – has crossed an important line. What's next? I won't bore you with predictions of how many jobs ChatGPT will create or kill. Instead, I want to draw your attention to three things to watch out for that seem to get lost between the exuberant optimism and the pessimism dominating most accounts of ChatGPT.

Don't let confirmation bias close your eyes to the upside

With ChatGPT eloquently compiling high-quality essays – apparently even passing bar exams with honors – it is not surprising that job fears are creeping up. Expect your subconscious to cling to any evidence that AI is not that powerful after all as proof that you cannot be replaced so easily. The consequence is negative confirmation bias: we feverishly look for examples where AI has gone wrong, and as a result we subconsciously close our eyes to the true strengths of AI and the myriad ways it can help us become more efficient and effective.

Technology has completely eliminated so many roles already – imagine, there used to be people operating telephone switchboards or calculating interest, one savings account at a time, in their heads! – that we should not doubt for a moment that ChatGPT will all but wipe out certain job descriptions. And yes, people in those jobs will also cling to the belief that they cannot be replaced by a machine. When my mom was a kid, German banks still hosted competitions between man and machine in balancing current accounts, and initially the fastest mental arithmetic workers actually beat the living-room-sized machines munching their punch cards. Last time I checked, none of these whizzkids was still calculating interest charges – but none of them was out of a job, either, as banks now desperately lack programmers to run their IT.

So, rather than trying to find evidence that ChatGPT cannot do your job, focus on figuring out how you can actually help ChatGPT do your job even better – and then figure out how you can make a living from that, be it by helping others make use of AI or by getting AI to help you do things you never dreamt of (think of quantitative hedge funds harnessing computers to compete with the best traders).

In fact, when I try to get my head around how ChatGPT could make my life easier, I remember how a couple of colleagues at McKinsey tried to convince me and others of the benefits of e-commerce back in 1996. I mean, what could possibly be more convenient than going to a department store (which at that time in Germany would close on Saturdays at noon and not reopen before Monday morning), finding the last item of a category in stock (unfortunately one size too small and in screaming pink, but still better than no pants at all to wear on Monday!), and lining up for 30 minutes to pay? My colleagues actually built a fake "online store" where everyone could buy an item for free with a virtual coin – and still, it was a few more months until I placed my first online order in the real world.

It took years for Amazon and Google to find the winning formula (and sadly, numerous pioneers before them failed) – so don't expect the huge upside of ChatGPT, and of generative AI in general, to be evident in just a year or two.

Watch out for the continuing death of expertise

High on my list of "books I should have written but haven't" is "The Death of Expertise" by Tom Nichols, who five years ago already drew our attention to the problem that Wikipedia, YouTube & co. seemingly put exhaustive and authoritative information in everyone's hands, rendering experts redundant. Unfortunately, so far we have seen just the tip of the iceberg: an epidemic of ubiquitous information crowding out true expertise.

In fact, even before Microsoft put ChatGPT into Bing, I marveled at how much time I spend validating information I find on the internet. How often has buying a simple kitchen tool on Amazon cost me an hour, as one highly rated offer after another (all looking the same and peddled by unheard-of companies with names consisting of randomly juxtaposed letters) apparently was propped up by fake reviews trying to crowd out the few genuine ones reporting exploding appliances and rubbish not even surviving first use? How often did I discard reviews as thinly veiled infomercials, and how often was I shocked by how poorly researched even articles in The Wall Street Journal and other reputable newspapers were?

ChatGPT does not do this validation for you. In the best case, it will exclude certain sources – but just like the human brain, it will misinterpret the frequency of a statement as a sign of its validity, it will almost certainly disregard the academic debates in the most recently published articles, and it very well might start to believe and perpetuate its own fabrications, nicely called "hallucinations."

And Ladies and Gentlemen, don't think that ChatGPT's occasional slip-ups today are a baseline from which it can only get better – remember, search engine optimization is one of the new job titles that replaced telephone switchboard operator, but ChatGPT search optimization has not even been proposed as an activity yet.

This is where we need to take a step back to appreciate what AI is and what it is not. "Artificial intelligence" is actually a big misnomer, as it patently mimics not human but animal intelligence. More specifically, it is an artificial version of the pattern recognition performed by the animal brain – what in humans is often called the subconscious, or the "system 1" brain. What sets humans apart from animals – and machines – is logical thinking.

ChatGPT can sound incredibly sophisticated – yet it does not actually think logically; rather, it produces plausible sentences based on incredibly complex probabilistic models. It reminds me of an anecdote about the oral exam a librarian in Germany took upon completing his apprenticeship: when asked which books he would recommend to a mother looking for books on minerals for her son, he simply made up plausible book titles – and got full marks!

There is a parallel to food. Processed food – some of it rather Frankensteinian, such as mechanically separated meat or fruit deskinned in acid baths – has so thoroughly conquered our food chain that healthy, naturally produced artisanal fare has become rare (and prohibitively expensive). I am certain the same will happen to knowledge: AI-fabricated knowledge will become so widespread that true, logically derived thought will increasingly become an expensive luxury. Processed food, high in sugar, fat, and other unholy ingredients, has caused an epidemic of obesity and other diseases – what will an epidemic of fake expertise do to us?

And we also should not forget that many far more basic tasks are still well beyond AI's abilities today. As the image for this article, I chose a picture from the cockpit of a rental car that purported to read traffic signs and indicate the current speed limit. It was hilariously random – in villages, it at times claimed I could race at 100 km/h, while on the highway it sometimes wanted to slow me down to 30 km/h. Or have you ever stood in front of a restaurant, wondered whether there might be better options nearby, and realized that when googling "restaurants near me," the very restaurant in front of you was missing from almighty Google's result list (which meanwhile included some patently irrelevant items)?

So I actually believe that AI has gotten a bit ahead of itself: thanks to the incredible human minds behind ChatGPT and other large language models, it sounds more knowledgeable than it actually is – maybe not unlike that colleague of yours who is perfect at playing Mr. Know-it-all while being dangerously ignorant.

To be clear: this is not a fatal argument against ChatGPT but rather a reminder of where we stand on a journey toward artificial intelligence that has already been under way for more than half a century. It suggests that we still need to use this emerging technology with proper caution and safeguards while relentlessly improving the quality of the information flowing into its fancy, chatty presentation. Mistaking input data for truth is a fundamental limitation of all algorithms, ChatGPT included.

And beware the seduction

I mentioned in the previous section that ChatGPT produces plausible sentences. Yet it can do more – it can be programmed to produce specifically pleasing sentences. Did you notice how on Bing you can choose between "more creative" and "more precise"? I mentioned earlier that ChatGPT chooses words based on probabilistic models. On the receiving end, our brains are also complex probabilistic models. In fact, if I tell you that I wrote an article on "Three Points to Watch Out for when Using ChatGPT," your brain will most likely form an expectation of what these three points could be. If I tell you exactly those points, you have wasted your time reading my article, and your brain will report that the article was flat and not interesting at all. If I tell you that ChatGPT will cause the return of the dinosaurs, the flattening of the globe, and time running backwards starting July 17, 2029, your brain will deem these predictions so bizarre that you will label me a tinfoil-hat lunatic. Yet if I formulate moderately surprising theses, your brain will deem them "interesting" – plausible yet new, and hence valuable information.

So if you want to develop a "good" ChatGPT algorithm, you need to calibrate it to pick the right level of plausibility – plausible enough to be credible, but surprising enough to be interesting. As the technology evolves, you will obviously want to optimize further for user reaction. What would a TikTok version of ChatGPT look like? Well, it would be the most captivating, sweet-talking "person" on the planet.
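For readers curious what this "plausibility dial" looks like under the hood: language models score every candidate next word, and a single knob – usually called the sampling temperature – controls whether the model almost always picks the most plausible word ("more precise") or spreads its bets across surprising ones ("more creative"). Below is a minimal, illustrative sketch; the scores are made up, and real models choose among tens of thousands of candidate tokens, but the mechanism is the same in spirit:

```python
import math
import random

def sample_with_temperature(scores, temperature, rng=random):
    """Pick an index from a list of model scores ("logits").

    Low temperature  -> almost always the single most plausible choice.
    High temperature -> choices spread out, more surprising output.
    """
    # Divide the scores by the temperature, then turn them into
    # probabilities with a softmax (shifted by the max for stability).
    scaled = [s / temperature for s in scores]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]

    # Draw one index according to those probabilities.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1  # guard against floating-point rounding

# Toy scores for three candidate next words, most plausible first.
word_scores = [2.0, 1.0, 0.5]
```

At a temperature near zero this sketch returns index 0 (the most plausible word) virtually every time; at a very high temperature the three choices become nearly equally likely. The "creative"/"precise" toggle is, plausibly, little more than a preset on exactly this kind of dial.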

This is dangerous. Imagine using this technology to create a digital salesperson. Whatever question or concern you voice, the answer will be optimized to put you at ease. Every piece of marketing psychology discovered to date (well, at least until 2021…) could go into honing the sales pitch. Have you ever encountered a salesperson too good to escape from? ChatGPT could be that person on steroids. Or imagine a storyteller convincing you that the earth is flat, or that your ears will fall off unless you eat your own children here and now. If you believe that social media today are addictive, be warned: TikTok was just generative AI's warm-up!

And don't get me wrong: with an epidemic of loneliness, digital companions may well prove a boon to the mental health of millions of people, especially the elderly. Maybe properly designed counseling bots can help overcome the ubiquitous dearth of psychotherapists. Maybe racier versions of ChatGPT will even reduce the incidence of rape by offering a more compelling digital alternative. So the very ingredients of a persuasive chat algorithm can be decidedly beneficial – yet they can also be very, very dangerous.

***

It is easy to say that every new technology is dangerous. When the first railway in Germany traveled at 30 km/h, warnings about the catastrophic health effects of traveling at such high speed were loud. Today we safely fly at 900 km/h from Singapore to New York, feel that this is rather slow, and muse that we really should resurrect supersonic travel. When British trains switched from coal to other fuels, unions initially insisted, out of fear of unemployment, that the workers shoveling coal into the furnace continue to be employed – today we employ neither these folks nor telephone switchboard operators, yet the Western world is so woefully short of workers that in my hometown, bus service even had to be reduced for lack of drivers.

Likewise, I am convinced that today we cannot even imagine the incredible benefits AI can bring us. Electricity, engines, and the internet have all transformed the world beyond recognition. Yet we have adopted all these advancements without fully understanding their dangers and second-order effects – so today we are also battling grave issues such as debilitating obesity, global warming, and skyrocketing suicide rates among teenagers. As we start embracing generative AI, we therefore need to be hyper-alert to its weaknesses and second-order effects, and forcefully take measures to protect ourselves from harm. With the plague of fake information and the potential for toxic addictiveness, I certainly have not compiled an exhaustive list of generative AI's potential problems – but we already have more than enough to chew on.

Have you already found ways to exploit ChatGPT for your benefit? How do you protect yourself from fake information? And have you fallen victim to AI's addictiveness, too? What could be strategies to achieve healthy and responsible AI that brings us all the benefits without imposing disastrous cost?
