
ChatGPT - Can we trust AI-powered chatbots?

ChatGPT, the latest model from OpenAI, has gone viral, and there is no stopping the buzz it has created on the internet. Many people are praising its ability to do everything from writing articles to writing code, but not everyone is paying attention to its downsides.

What are chatbots?

A chatbot is software that simulates human-like conversation with its users via chat. Its main objective is to answer users' questions instantly, in well-punctuated prose. Chatbots are commonly used in customer support, sales, marketing, data collection, and similar applications, helping businesses achieve results with minimal effort.
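
To make that concrete, here is a minimal, purely illustrative sketch of the simplest kind of customer-support chatbot, a rule-based one. The keywords and canned replies are invented for this example and do not come from any real product.

```python
# A minimal rule-based chatbot: match keywords in the user's message
# against a small set of canned intents and reply instantly.
# The intents and replies below are made up for illustration.
RESPONSES = {
    "refund": "You can request a refund from the Orders page within 30 days.",
    "shipping": "Standard shipping usually takes 3-5 business days.",
    "hours": "Our support team is available 9am-6pm, Monday to Friday.",
}

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't catch that. Could you rephrase?"

if __name__ == "__main__":
    print("Support bot ready. Type 'quit' to exit.")
    while True:
        user = input("You: ")
        if user.strip().lower() == "quit":
            break
        print("Bot:", reply(user))
```

Real products replace the keyword matching with intent classifiers or, increasingly, with large language models, but the request-and-instant-reply loop is the same.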

Nowadays, multiple companies and academic labs are building advanced chatbots. ChatGPT, created by OpenAI, is a prime example. Even though these are pieces of software, they seem to chat as a human would, only at lightning speed. An easy way to think of them is as a digital assistant, like Alexa, Siri, or Google Assistant, on steroids.
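
To get a feel for how a ChatGPT-style chatbot is driven from code, here is a minimal sketch of a chat loop, assuming the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY environment variable; the model name and system prompt are placeholders, not anything prescribed by OpenAI or this article.

```python
# Minimal chat loop against a hosted language model, sketched with the
# OpenAI Python SDK (openai>=1.0). Requires OPENAI_API_KEY to be set.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user = input("You: ")
    if user.strip().lower() == "quit":
        break
    history.append({"role": "user", "content": user})
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=history,
    )
    answer = completion.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print("Bot:", answer)
```

Each turn resends the whole conversation history because the model itself is stateless; that simple loop is essentially the "digital assistant on steroids" experience.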

Some people strongly believe that this software could replace internet search engines like Google, because instead of serving up a page of links, it presents information in easy-to-read paragraphs.

Some problems with these chatbots

However, not everything is perfect with these chatbots. We should not trust everything they give us: they make mistakes and do not always tell the truth. The mistakes might be as innocent as incorrect arithmetic or as severe as blending fact with fiction. Although such errors might look easy to spot and avoid, people can use these tools to generate and spread misinformation. Chatbots can be a bit of a fabulist because of their vast training corpora, which consist largely of information posted to the internet.

Parts of an answer will be true and the rest will be made up; this problem is known as hallucination. Like a good storyteller, a chatbot takes what it has learned from its training corpus and reshapes it into something new, and in that process it does not verify whether the output is true. Another big problem with chatbots like ChatGPT is that they do not cite the sources of their answers.
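
Because the model does not verify its own output, a practical habit is to check anything checkable outside the chatbot. The sketch below illustrates the idea for arithmetic: ask_model is a hypothetical stand-in for whatever chatbot you actually call, and its canned answer is deliberately wrong so the check has something to catch.

```python
# Illustrative sketch: verify a checkable claim (here, arithmetic)
# instead of trusting the chatbot's prose. `ask_model` is a hypothetical
# stand-in for a real chatbot call; its canned answer is deliberately wrong.
import re

def ask_model(prompt: str) -> str:
    return "1234 multiplied by 5678 is 7,006,952."  # fabricated, incorrect

def last_integer(text: str) -> int | None:
    numbers = re.findall(r"-?\d+", text.replace(",", ""))
    return int(numbers[-1]) if numbers else None

a, b = 1234, 5678
claimed = last_integer(ask_model(f"What is {a} times {b}?"))
actual = a * b  # 7,006,652

if claimed == actual:
    print("The model's arithmetic checks out:", claimed)
else:
    print(f"The model said {claimed}, but {a} x {b} = {actual}. Don't trust it blindly.")
```

The same pattern applies beyond arithmetic: anything the chatbot asserts that can be recomputed or looked up independently should be.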

Many groups have published papers in various journals pointing out situations in which ChatGPT fails to give correct answers. All of them end on a similar note: even though ChatGPT can help you with your homework, you should be careful when using it in the real world. Although OpenAI warns users that it may produce incorrect information, harmful instructions, or biased content, people often ignore these warnings and take whatever it spits out at face value.

ChatGPT is not the only chatbot making headlines for both the right and the wrong reasons. Others, such as LaMDA, BlenderBot 2, and the Alexa Teacher Model, are similar to ChatGPT in their pros and cons.

In conclusion, ChatGPT is slick, but it still spits out ridiculous answers at times.

Tharun Poobalan

Graduate Student @ ASU | MS in Robotics AI | Actively Seeking Full-Time | Computer Vision | Generative AI | Conversational AI

1y

To be more precise, it’s still in its infancy. Nevertheless, a very good article.

Prakhar Deo

Product@Angel one | ex-Kotak, IBM | BITS Pilani

1y

Great piece! What do you think about the problem of giving deterministic answers related to a particular enterprise? Domain training does help give niche answers but compromises a bit on accuracy and completeness.

Akash Pareek

Product @ Groww, India

1y

Very insightful, brother. In your opinion, what is the best tool for a customer-support type of application?
