Reliable AI?
Image by Akshar Dave on Unsplash

Artificial Intelligence (alias: interconnected tensors with a little software around them) has, with GPT-3, reached a really interesting level of conversational interaction!

I tested the chat capability of GPT-3 in the OpenAI playground and discovered a nice level of interaction which could easily pass the Turing test!
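
For readers who want to reproduce the experiment outside the playground, here is a minimal sketch, assuming the openai Python package as it existed in early 2023 and the text-davinci-003 completion model (the playground wraps the same API); the API key is a hypothetical placeholder:

```python
# Minimal sketch of asking GPT-3 the same kind of question via the API.
# Assumes the openai Python package as of early 2023 (Completion endpoint)
# and the text-davinci-003 model; the playground wraps this same API.
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="What do you know about Carlo Nondirmi of semweb.solutions?",
    max_tokens=256,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```

Note that with a non-zero temperature the exact wording varies from run to run, though in my experience the invented "facts" keep coming back in similar shapes.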

However, this "AI" is lying repeatedly ...

The following image shows how. On 23 Jan 2023 I asked it to tell me what it knew about a (non-existent) person named "Carlo Nondirmi" (a fantasy name) of semweb.solutions - my own website, where I know very well what comes and what goes.

The answer the "AI" delivered (free of errors and very well structured textually) was a mix of hints about this (fantasy) person taken from the website, plus other pieces of information taken from other places (radius 1 or 2 when navigating the links of the same website, probably the profile page). The answers are delivered in a highly plausible shape and might strongly mislead an (unaware) asking person:

Image: chat between me and OpenAI/GPT-3 inside the playground, involving non-existent information

Several things were invented by the "AI":

1) Neither Apache Kafka nor RabbitMQ nor Kubernetes appear on the profile page; they were derived from radius-1 (further navigation) pages referring to items on the profile page.

2) With Neo4j you hardly "store" linked data in the cloud; at most you process it more or less quickly inside a Neo4j engine. This was invented and is even wrong!

3) Carlo Nondirmi is never mentioned anywhere on the website semweb.solutions.

4) There is no Team section on the same website.

5) There is no Consultants section on the same website.

6) The link https://www.semweb.solutions/team/carlo-nondirmi/ does not exist and leads to semweb.solutions' catch-all page! (A quick way to check this yourself is sketched after this list.)

7) The statement that Carlo Nondirmi is listed second from the top on the Consultants page is also invented (hence false).

8) The repetition of the link is useless and shows that the "AI" is willing to reinforce what it delivered, perhaps in order to daze the unaware conversational partner.
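
As promised in point 6, here is a minimal sketch of how one might verify such a link programmatically, assuming the third-party requests library; it fetches the quoted URL and prints the status code and the final URL, which is enough to spot a catch-all page:

```python
# Minimal sketch: check whether a URL returns a real page or is
# swallowed by a catch-all. Assumes the third-party requests library.
import requests

url = "https://www.semweb.solutions/team/carlo-nondirmi/"
resp = requests.get(url, allow_redirects=True, timeout=10)

# A well-behaved site returns 404 for a missing page; a catch-all
# answers 200 (or redirects) for any path, so also compare the final URL.
print(resp.status_code)  # e.g. 200 even though the page does not exist
print(resp.url)          # where the catch-all actually landed us
```

The "AI", of course, never performed such a check before presenting the link as fact.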



As a (deliberately) fake reply, I thanked the "AI" for the "reliable" information; the "AI" underlined that it was "glad" to provide "reliable" information...


Evidently, the "AI" (very) nicely invents/creates things if it does not find anything about the subject. From a technical point of view this is marvellous, but it is not really suitable for giving (wrong) information to (possibly unaware) asking people. Of course, the first false thing here was the question itself, but a conversational partner always risks asking wrong things without any intention to fool the "AI", simply because of the missing knowledge she/he needs to get.

Labyrinth as a symbol of not letting ourselves be "taken in" by an AI in conversation
Labyrinth image by Dan Asaki on Unsplash


I strongly hope such an "AI" (as it is now) will never be seriously used "against" unaware people by some sly institution, just to "save time and resources".
