My recent experience with Artificial Intelligence

ChatGPT seems to have become the subject of many conversations, and some variants built on its API have appeared that specialize, for example, in extracting information from PDF files.

A few days ago, I had the chance to test one such variant, ChatPDF, using an official report on an aviation accident. Two things stood out in the process:

1- The system handled natural language remarkably well; it gave the appearance of holding a conversation with a human interlocutor.

2- The apparent authority of the answers, delivered without a shadow of doubt and in equally natural language.

However, there were some surprising details:

When asked about the experience of the pilot in command, the system answered that the report said nothing on that point, which was already strange: official accident reports routinely document crew experience.

When asked whether the aircraft had suffered any technical failures, the system answered that none had been reported.

Although I did not know this case in depth, both answers seemed unlikely. I therefore decided to read the complete report and found that it did indeed contain quite a lot of information on the pilots' training. That information covered not only their general and type experience and their roles; it also included the different courses both pilots had taken.

As for technical failures, the whole incident was triggered by a failure of the Take-Off Warning System, so a prior technical failure was indeed present.

In both cases, the report had no heading labeled "pilot training" or "technical issues", but the information was there, and the system failed to detect it.

The handling of natural language, which is by no means a minor achievement, and the certainty of the responses may give the false impression that the system can be fully relied upon. However, it does not yet seem ready to be entrusted with a genuinely serious task.

I understand that someone might retort that the results need to be monitored, and obviously that is so. In that case, however, its usefulness may be very limited in some areas. It is reminiscent of that curious old rule which, to warn of the danger of automobiles, required a person to walk ahead of the vehicle carrying a lantern to announce its imminent arrival.

Lukas W.

Captain, FAA ATP, EASA & UK CAA ATPL, Gulfstream GIV, Learjet 45/75, Phenom, Military & Civilian Flying Instructor, UAS Pilot, Aeronautical Engineer, Future Flight & Green Technology, Photo & Technical Journalism, MRAeS

Overconfidence on the platform's part, combined with users who think it is infallible, is a serious problem. Music is not simply sound.

Raul Sosa Riera

Aviation Consultant and Researcher

Prompted by this comment, I asked ChatGPT about its learning model and how it generates responses, and whether this was a form of autopoiesis. This is what it told me: “In a certain sense, one could say there is something of autopoiesis in an AI project like ChatGPT, since these systems have the capacity to continuously improve their own performance through machine learning. In the case of ChatGPT, the model is able to "self-generate" new responses and build its own set of responses based on the data it has "learned" from interactions with users. In this way, the model self-regulates and adjusts as it is given more information and data, which allows it to improve its ability to produce accurate and relevant responses. However, it is important to note that autopoiesis in ChatGPT is a very limited form of autopoiesis compared with the complexity of the biological systems studied in the theory of autopoiesis. Although ChatGPT can adapt and improve based on its experience, it is still a system designed and programmed by humans, and is therefore not a living system in the biological sense of the term.”
