AI Fatigue: How Low Would the Intellectual Baseline Fall?

Let's define the intellectual baseline of a society (a collection of humans) as its overall level of intellectual capability. Like physical capabilities, intellectual capabilities need exercise and development. Over time, both have improved, and their baselines have risen far above those of centuries past.

What is fatigue here? Consider someone with a picture frame on the wall that gradually leans to one side. Each time, the person levels the frame, hoping it will stay straight. At some point, the fatigue caused by this repetitive chore with no end in sight makes the person give up and accept the frame as it is (this acceptance can be seen as a shift in his/her baseline).

Fatigue could influence the intellectual baseline in a similar way. Most probably, it would arise from some form of overhead: an intellectual activity that keeps consuming effort, with no hope that the overhead will ever be resolved.

AI chatbots, such as ChatGPT-likes, are assumed to be AI assistants. In practice, however, this is not the case. A traditional assistant would never issue direct statements; he/she would always wrap a suggestion in a form like "please keep in consideration that ...", and so on. This is not how a ChatGPT-like behaves. If you ask it a question, the answer comes as direct statements, something like "Step 1: do this. Step 2: do that. ...". That means if the user who asked the question were removed from the picture, no one could tell whether those statements came from an assistant or from an actual decision maker.

In terms of intellectual capability, AI assistants (such as ChatGPT-likes) are bound to what they have observed (plus generalizations), within the boundaries set by their creators' designs. Outside those boundaries, they produce plausible outputs (speculations) that require correction. At the beginning, making those corrections is fine, even fun, for the user. At some point, however, he/she gives up and accepts the output as-is, along with the possibility that it is not optimal (or even not accurate). This is a decline in the baseline, probably more common in fields that are highly complex and complicated. And since the impact of such decisions might not be observable right away, the adjusted, lower baseline could become the norm! In addition, the person making decisions with the help of an AI assistant would be pushed to make more decisions, and to make them faster, precisely because an assistant has been provided. This would accelerate the development of AI fatigue and its consequences.

As an extreme case, one that does not directly involve AI, we could blame fatigue for the fictional Two-Face and his famous coin-flipping habit: a former attorney gives up and lets a simple coin-flip system make his decisions for him.

Just a thought.
