Clever cats, clueless AI - The intelligence gap

“While tools like ChatGPT are impressive at predicting the next word in a sequence, they are regurgitating information they’ve been trained on, not reasoning. AI is not even close to the intelligence of cats, which have basic reasoning, memory, and planning abilities, qualities AI lacks.” – Yann LeCun (2024)

Introduction

Who doesn’t know ChatGPT by now? It’s very tempting to use, especially for people who think it can save them a lot of time. Using these tools is an example of the Efficiency-Thoroughness Trade-Off (Hollnagel, 2009). Everyone’s time is limited, and we have a natural tendency to conserve effort, partly to maintain a resource reserve for unexpected events. Besides individual habits and ambitions, we feel pressure from colleagues and managers, as well as organizational pressures such as conflicting priorities.

Many organizations, too, anticipate that AI will lead to significant gains in efficiency by enhancing the "if-then" logic within their operations. But AI also introduces new coordination challenges. Automated systems function effectively only because of the careful planning and problem-solving that address the potential risks of automation. While AI simplifies certain processes, it simultaneously generates new complexities at a higher level. Although some tasks become less burdensome, organizations must still manage the coordination issues that arise from this simplification. Human intervention is required to make the necessary adjustments (Kühl, 2024).

LLMs – The good, the bad, and the ugly

Large Language Models (LLMs) such as ChatGPT and Bard are cognitive tools that can assist us with tasks like writing, translating, summarizing, and generating text. LLMs differ from search engines like Google in that people can engage with them in conversation-like interactions. This conversational nature can make LLMs seem more trustworthy than they are (Heersmink et al., 2024). It can quickly mislead users, who often project human-like qualities onto LLMs, such as intelligence, reasoning, and consciousness. This tendency, combined with the fluent and confident tone of LLM responses, quickly creates an unwarranted sense of trust in their output. That misplaced trust carries real risk, especially since LLMs can produce erroneous or biased outputs. Because LLMs can hallucinate and generate incorrect answers, it is difficult for users to assess their reliability.

When I tested ChatGPT with a question about the history of my home town, it said: “It received city rights in 1608 from the province bailiff”, something that never happened. ChatGPT got one of the city founders right; the other two it made up. And that’s a problem. In this case I knew the correct information, but users often don’t. Nor can they trace or evaluate the origins of the information provided: the underlying datasets and algorithmic processes are opaque.

The commodification of LLMs

When people criticize a model, they often get a swift reply from someone quoting George E.P. Box: “All models are wrong, but some are useful”. With ChatGPT and other LLMs, it has long since stopped being only a handful of early adopters who see their usefulness: they are used by people of all ages and occupations. Of course, many people also try to make money with them, for example by selling courses on “writing prompts for ChatGPT”. Critical commentators see their comments deleted and find themselves blocked by the commercial consultant. This happened to me, and to others.

The branding of LLMs

AI consultants consistently brand themselves through weekly updates on LinkedIn. This builds trust, while product variety (“NEW products added daily!”) keeps people engaged.

In the AI advertisements I currently see, visually attractive and urgent wording is used (“View product NOW”). The surface appeal of discounts and terms like "AI Expert" suggests value. Without deeper analysis, consumers rely on these superficial cues.

AI products such as an “AI prompting course” are presented as if the consumer were simply choosing what will make them more productive or successful. Given my desire to improve productivity, I might be tempted to think that taking such a course could help me work smarter or more efficiently.

A striking example is a “GPT Humanizer”, which transforms robotic or overly formal AI text into content that flows smoothly and mimics the style of a human writer.

(An offer for an in-company AI training package states: “SALE! Original price was: € 200.000,00. Current price is: € 49.000,00.” Wow, we can save… through a huge expenditure!)

There’s always a catch

AI should not be rejected out of hand, but we should think carefully about how we do and do not use it. We can become too dependent on AI. In the process, we can lose human skills. Not to mention the growing environmental impact of technological developments (Jacobs & Meester, 2024).

As with many tools, the misapplication of LLMs is already clear: from academic papers written by ChatGPT, via the copy-pasting of classified company documents into its dialogue box, to the automatic generation of PowerPoint presentations. It’s hard to say which one bothers me most. In time, we may lose the skills to write and present for ourselves. Remember, creativity comes from personal struggle and suffering; LLMs can experience neither.

References:

- Heersmink, R., de Rooij, B., Clavel Vazquez, J., & Colombo, M. (2024), A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness, in: Ethics and Information Technology, Vol. 26, Issue 3, pp. 1-15.

- Hollnagel, E. (2009), The ETTO Principle: Efficiency-Thoroughness Trade-Off – Why Things That Go Right Sometimes Go Wrong, Boca Raton/London/New York: CRC Press.

- Kühl, S. (2024), On the Fantasies of Control Associated with the New Technologies, in: Madness as Usual: Digitalization, Versus Online Magazine (versus-online-magazine.com).

- Meester, R., & Jacobs, M. (2024), De onttovering van AI – Een pleidooi voor het gebruik van gezond verstand, Zutphen: Mazirel Pers.


Martijn Flinterman