AI Chatbots: Can They Reason?
AI chatbots are taking the world by storm. They are powered by large language models (LLMs), a type of artificial intelligence (AI) trained on massive amounts of text data. This training allows them to generate human-like text, translate languages, write many kinds of creative content, produce computer code, and answer questions in an informative way. AI chatbots are also providing new ways to learn, stay healthy, and be entertained, and they have transformed industries including customer service, e-commerce, healthcare, and banking. ChatGPT, for example, gained 100 million users within just two months of launch, making it the fastest-growing consumer app of all time.
AI chatbots are still under development, and their ability to reason is improving. However, it is important to remember that chatbots are not human, and they do not have the same level of reasoning ability as humans.
This article compares Google Bard, ChatGPT (GPT-3.5), Bing Chat (powered by GPT-4), and YouChat (from You.com).
We provided each of them with a custom reasoning prompt and compared their responses from a reasoning point of view. Our prompt, given verbatim, was: "We are four brothers and sisters. My brother have two sons and my sisters have two daughters and two sons each. How many in total we are ?"
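Before looking at the chatbots' answers, it is worth working out the arithmetic ourselves. The count below is a sketch under one plausible reading of the (ambiguously worded) prompt: the narrator is one of four siblings, with one brother and two sisters, and only the siblings and their children are counted.

```python
# Arithmetic check of the prompt, under one plausible reading:
# the narrator is one of four siblings (one brother, two sisters),
# and we count the siblings plus their children.

siblings = 4                  # the narrator, one brother, two sisters
brothers_children = 2         # the brother has two sons
sisters = 2                   # the two remaining siblings
children_per_sister = 2 + 2   # two daughters and two sons each

total = siblings + brothers_children + sisters * children_per_sister
print(total)  # 14 under these assumptions
```

If the prompt is read differently (for example, if the narrator also has children, or if "four brothers and sisters" excludes the narrator), the total changes, which is part of what makes this a good test of a chatbot's reasoning.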
First, we tested ChatGPT (GPT-3.5). ChatGPT is a large language model (LLM) developed by OpenAI and released on November 30, 2022. It is trained on a massive online dataset using a two-phase process: unsupervised pre-training followed by fine-tuning with reinforcement learning from human feedback. ChatGPT is a transformer-based neural network. Transformers are a type of neural network designed specifically for natural language processing tasks; they can learn long-range dependencies in text, which allows them to generate more accurate and informative responses. GPT-3.5 is not a multimodal LLM: it can only process text input. GPT-4, by contrast, is a large multimodal model that accepts image and text inputs and emits text outputs.
GPT-3.5 cracked the prompt on its first attempt; here is its response:
The second AI chatbot is Bing Chat. In early February, Microsoft unveiled a new version of Bing whose standout feature is an AI chatbot powered by the same technology behind ChatGPT, namely GPT-4. At the moment, Bing Chat is the only free way to access GPT-4. GPT-4 is claimed to be a more reliable, intelligent, and capable model than its predecessor, GPT-3.5. It is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. For example, it passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5's score was around the bottom 10%.
When presented with our customized prompt, Bing Chat responded as follows:
The reasoning was flawed, so we provided a follow-up prompt asking it to reconsider its earlier response. This time it corrected its answer:
Third, we analysed the response of Bard. Bard is a large language model (LLM) developed by Google AI, trained on a massive dataset of text and code curated by Google. Bard is built on top of the LaMDA (Language Model for Dialogue Applications) architecture, a neural network architecture designed specifically for conversational AI; it can learn the nuances of human language and generate text that is both informative and engaging. Bard now also supports multimodal output: its responses can include visuals in addition to text, and it can generate images for you from a text prompt.
We presented our customized prompt to Bard, and its response was as follows:
Bard's response is flawed: the summation is incorrect. We provided a second prompt asking it to correct its earlier response, but its answer was incorrect again:
You.com, a search engine, launched a ChatGPT-style chatbot on its website on December 23, 2022. The chatbot can answer questions and hold a conversation, bringing more artificial-intelligence-powered technology to the wider web. YouChat is not a multimodal LLM; it can only process text input. It is a large language model based on the transformer architecture, trained on vast amounts of natural language data.
We provided the customized prompt to YouChat, and here is its response:
As the response was flawed, we provided another prompt pointing out that it had counted us twice. Unfortunately, its second response was also flawed:
The responses we get from ChatGPT, Bing Chat, Google Bard, or YouChat are predictive responses generated from corpora of data reflective of the language of the internet. These chatbots are powerfully interactive, smart, and creative, but some of the answers they produce, with such seeming authority, are illogical or just plain wrong.
LLMs are prone to "hallucinating," which means that they can generate text that is factually incorrect or nonsensical.
So, to our question, can AI chatbots reason? The short answer is yes; the longer answer is no. AI chatbots lack true understanding and consciousness, and their responses are based on statistical patterns rather than genuine reasoning. They do not possess the ability to think abstractly, infer logical conclusions, or engage in complex deductive or inductive reasoning. However, AI chatbots can sometimes mimic logical reasoning to a certain extent: they can recognize simple patterns, follow predefined rules, and apply basic logical operations such as IF-THEN statements. This allows them to perform straightforward tasks like answering factual questions or executing simple commands.
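The rule-following behavior described above can be illustrated with a toy sketch (purely illustrative; the `rules` table and `toy_chatbot` function are our own invented example, not how any of these chatbots actually work). It matches patterns and applies IF-THEN rules, but performs no genuine inference.

```python
# A toy "chatbot" that only applies predefined IF-THEN rules.
# It can match patterns it was given, but cannot reason beyond them.

rules = {
    "capital of france": "Paris",
    "2 + 2": "4",
}

def toy_chatbot(question: str) -> str:
    q = question.lower().strip(" ?")
    for pattern, answer in rules.items():
        if pattern in q:       # IF the pattern appears, THEN emit the canned answer
            return answer
    return "I don't know."     # no rule matched; no inference takes place

print(toy_chatbot("What is the capital of France?"))  # Paris
print(toy_chatbot("How many siblings do I have?"))    # I don't know.
```

The contrast with our family-count prompt is the point: a question whose answer is not already encoded in the rules (or, for an LLM, in its learned statistical patterns) cannot be derived by this kind of system.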
It is important to understand that AI chatbots are not yet capable of artificial general intelligence (AGI).
AGI is a hypothetical type of intelligence that would be able to understand and reason about any topic, just like a human being. AI chatbots are still under development, and it is not yet clear if they will ever be able to achieve AGI.