The problem with using ChatGPT & similars

#ChatGPT #AI

My first experience with an automated text-generation tool was seven years ago, when I had to investigate the pros and cons of generating automated financial reports. As with most new tools, the demo was impressive. But when digging into the tool itself and learning how to set up those reports, it was clear the technology was not yet mature. It required lots of code and lots of parametrisation, which would have cost more effort to fine-tune and maintain than keeping humans writing the reports.

I gave negative advice on deploying the tool, but not because of any lack of performance or quality. When ChatGPT was first released to the general public, there was the same scepticism about the quality of the generated texts. I remember a journalist asking ChatGPT which eggs are bigger, those from a chicken or those from a cow. Unsurprisingly, ChatGPT wrote a scientific-sounding explanation of why cow eggs are bigger.

But quality is not the real issue here. Comparing today's tools with those from seven years ago, the improvement in quality and ease of use is impressive. Those who now dismiss tools like ChatGPT because of their poor content will face an existential problem when they realise what these tools will be capable of within seven years.


So, what is the problem with ChatGPT?

The problem is that it gives answers to research questions.


Again, the issue is not so much that the answer will be biased. Take, for example: “Why is capitalism better than eco-socialism?”, “Which car is the best for a budget of 30k?”, “Who started the war in country X?”, “What is the best way to seduce a woman?”, “What recipe can I prepare with the content of my fridge?”. Giving a single answer to such questions is a problem in itself, not only because it leads to one-dimensional thinking but mostly because we have no information about the sources or the algorithms used. But this is not new; we are used to it. For generations we have been exposed to constant advertising trying to push us to buy or believe something. Manipulating minds has existed for a long time, so the biased answer, while a big concern, is not the fundamental issue with such tools.

The fundamental problem with ChatGPT is that it cuts out the research needed to investigate a topic. And by doing that, it blocks human development. When you look up information on the net, you don't just type some keywords and click on the first result in Google. If you do that, you're already braindead. You may start from a search engine, but then you investigate the sources, knowing what leaning they follow and how reliable they are, and you compare the information with other points of view. Only then do you start forming an idea of the answer you're looking for, which still takes some time to refine. This is basic analysis, and it is not limited to scientific or philosophical questions. You apply the same process when choosing which blender best suits your needs and budget.

It may sound a bit like Taoism, but it's the path that matters. The answer is just the logical conclusion of the analysis, of the way you have selected information and thought about it. If you suppress the path and keep only the result, you get something haphazard, less useful and less accurate than the information we already have in encyclopaedias. But most of all, you lose intellectual capacity.

Why do we still ask primary-school children to learn their multiplication tables by heart? Every smartphone and computer has a calculator. No human is as fast as a calculator; no human can perform very complex calculations in less than a second. So why bother learning those tables by heart? The same goes for languages: why bother learning a new language when you can use Google Lens to translate from and to any language? Why write a paper for school if ChatGPT can generate a result probably more in line with the teacher's formal requirements?

Those questions are not limited to education. Similar questions arise in private and public enterprises. Why still make the effort to write a text if ChatGPT can generate a convincing result? ChatGPT can write code in any programming language, propose marketing slogans to pick from, draft advocacy work in defence of a client, and much more. Why should a company still pay for human labour if ChatGPT can do the same at almost no cost?

The answer is simple. If you don't train your brain, you will lose the capacity to think. And thinking is what makes us human. It's not the rather naïve idea of an apocalyptic takeover of the world by computers that we must fear. For that science-fiction script to play out, the parameters of all machines would have to converge and agree on a single goal; in the chaos of data, languages, algorithms and parametrisations, that is not a credible threat. What we must fear is the disappearance of our intellectual capacities and, with it, of our freedom of choice.

What impact will the abuse of ChatGPT have on enterprises?

I predict a huge short-term return on investment for companies that use ChatGPT massively: you no longer have to pay all those annoying human collaborators. But I also predict that, within a year, those companies will cease to exist, because they will be braindead. Algorithms are just tools: you can't let them manage your company, and you can't let them create meaningful content. And if you do, the quality of that content will quickly be diluted in the mass production of boring, senseless content.

That is the reason why, seven years ago, I recommended against creating automated financial reports. When writing those documents, financial analysts analyse the data. They are aware of the information and, thanks to their insights, they can advise on trends and raise alerts before it is too late. That is what financial analysts are paid for. Or would we prefer a new financial crisis?
