ChatGPT – Has the future arrived, or just another internet fad?

(Disclaimer: This article has been written by a human. We tried to assign the task to a bot, but it failed miserably :D)

Introduction:

In 1999, when personal computers were the latest trend, a science fiction movie titled ‘The Matrix’ became hugely popular. If you haven’t watched this exciting film franchise yet, the plot is as follows. In the year 2199, artificial intelligence has taken over the human race. Human bodies have been enslaved and plugged into a huge machine to generate the bio-electricity that the AI machines feed on. Meanwhile, human minds are kept engaged in a simulation designed and controlled by the AI, called the ‘Matrix’. Humans live under the impression that they are on Earth in the year 1999, while in reality it is 2199. Fascinating, isn’t it? Though fiction, the film highlights the perception of AI in the late 1990s – “it is a threat to humanity.” And that perception hasn’t changed much to this day. Automation has always been perceived as a threat to humanity. When the industrial revolution began, people revolted because they believed their jobs would be lost. When the computer revolution began, people again feared losing their jobs. The same happened when the internet revolution began. Yet humans are still employed, and the unemployment rate hasn’t undergone any major change.

Once again, a new discussion has emerged as ChatGPT, a chatbot, flexes its skills – can artificial intelligence imitate the human capability of thinking? For the past month, LinkedIn (the social media website where everyone fakes how glamorous their work is) has gone crazy over ChatGPT. The website is full of optimists claiming that ChatGPT is going to change content marketing, ChatGPT is going to change journalism, ChatGPT is going to change education, ChatGPT is MBA qualified, and CA exams are the next target… Has the future we were told about arrived sooner than expected, or is this just another internet fad that will fade soon? Let’s find out in this article.

What is ChatGPT?

We perform our day-to-day tasks through body movements, logical reasoning, communication and intelligence. Modern technologies try to replicate these human capabilities so that human tasks can be automated and performed by machines. For example, natural language processing tries to replicate human communication, robotics replicates our body movements, machine learning replicates logical reasoning, and artificial intelligence replicates human intelligence. Many other related technologies, such as computer vision, drones and autonomous vehicles, 3D printing, neural networks, and blockchains, try to imitate human abilities and make our lives easier.

Chat Generative Pre-trained Transformer, shortened to ChatGPT, is an artificial intelligence-based language model. AI language models are trained to generate human-like text based on the input they receive. They use deep learning techniques such as neural networks to analyse large amounts of text data and generate new text that resembles the data they were trained on. The generated text can be used for various applications such as chatbots, language translation, and question answering. ChatGPT was developed by San Francisco-based OpenAI, a research organisation dedicated to developing and promoting friendly AI that benefits humanity as a whole. This not-for-profit organisation was founded in 2015 by Elon Musk, Sam Altman, Greg Brockman, Ilya Sutskever, John Schulman and Wojciech Zaremba. Microsoft is a partner in OpenAI and has invested USD 1 billion in the company.
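
To make the idea of “generating text that resembles the training data” concrete, here is a minimal sketch. It assumes the Hugging Face transformers package and uses the small, publicly downloadable GPT-2 model as a stand-in, since ChatGPT itself is only accessible through OpenAI’s hosted service.

```python
# A minimal sketch of text generation with a GPT-style language model.
# GPT-2 (small and publicly downloadable) stands in for ChatGPT, which is
# only accessible through OpenAI's hosted service.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence will change the way we work because"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model simply continues the prompt with words it finds most plausible.
print(result[0]["generated_text"])
```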

How was ChatGPT developed?

The central theme in the development of all these technologies is data – which is why you see the growing importance of generating data (and also the privacy concerns around its misuse). Large datasets help in analysing the possible variations in language and developing responses for those variations. ChatGPT is a large language model (LLM). LLMs are trained on massive datasets to predict what word comes next in a sentence – and, from there, the next sentences – much like auto-complete, but at a mind-bending scale. This allows them to write paragraphs and entire pages of content; however, they don’t always understand exactly what a human wants. This is where ChatGPT has done a marvellous job with Reinforcement Learning from Human Feedback (RLHF) training. Researchers also discovered that sheer scale enabled language models to do more. According to Stanford University, GPT-3, the technology behind ChatGPT, has 175 billion parameters and was trained on 570GB of text. For comparison, its predecessor, GPT-2, was over 100 times smaller, with only 1.5 billion parameters. This increase in scale changed the behaviour of the model: GPT-3 can perform tasks it was not explicitly trained for, such as translating sentences, without any specific training examples. GPT-3 was trained on data from the internet, including sources like Reddit discussions, to learn the human style of dialogue and responses. Because of this training, ChatGPT can understand the intent behind a question and provide helpful, truthful, and harmless answers, something that simpler bots cannot do. It can challenge questions and discard parts of a question that make no sense.
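
The “predict the next word” mechanic described above can be seen directly by inspecting a model’s output scores. The sketch below again assumes the Hugging Face transformers package, with GPT-2 standing in for the far larger GPT-3, whose weights are not publicly downloadable.

```python
# Inspecting next-word prediction: the model assigns a score to every word
# in its vocabulary for the position after the prompt. GPT-2 stands in for
# the much larger GPT-3, whose weights are not public.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The industrial revolution changed the way people"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # shape: (1, sequence_length, vocab_size)

next_token_scores = logits[0, -1]        # scores for the very next word
top5 = torch.topk(next_token_scores, k=5)

for token_id, score in zip(top5.indices, top5.values):
    print(f"{tokenizer.decode([int(token_id)]):>10}  score={float(score):.2f}")
```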

What are the limitations of ChatGPT?

ChatGPT is an advanced language model developed by OpenAI, but it is not perfect and has limitations. It is specifically tuned not to provide toxic or harmful responses and therefore avoids answering such questions. Here are a few key limitations of ChatGPT –

Incorrect answers – ChatGPT can write answers that sound plausible but, in reality, are incorrect or nonsensical. Many people have posted on social media how ChatGPT gave them incorrect answers, some of them wildly incorrect. According to OpenAI, fixing this issue is challenging because there is currently no perfect source of truth – the model only knows what the internet knows. The model can be trained to be more cautious, but it would then decline too many questions.

Lack of context – ChatGPT can generate a response based on the input it is given, however, it does not have a complete understanding of the context of the conversation or the world. This can lead to responses that are inaccurate or inappropriate.

Limited common sense – ChatGPT has been trained on a large dataset of text, however, it does not have the same level of common sense and knowledge as a human. It struggles to understand complex questions or provide answers that are not supported by the data it was trained on.

Bias in training data – The training data that ChatGPT was trained on can contain biases and inaccuracies which often lead to biased or incorrect responses.

Input sensitivity – ChatGPT is a machine learning model, which means its responses are influenced by the input it is given. Unclear or malicious input can lead to unexpected or inappropriate responses. In simple words, well-crafted prompts generate better answers, while casual, vague prompts result in more errors (see the sketch after this list).
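
As a hypothetical illustration of input sensitivity, the sketch below feeds the same underlying request to a model phrased two different ways. The prompts are invented for illustration, and GPT-2 via the Hugging Face transformers package again stands in for ChatGPT.

```python
# Hypothetical illustration of input sensitivity: the same request, phrased
# vaguely versus precisely, can produce very different completions. The
# prompts are invented; GPT-2 stands in for ChatGPT.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # fix the sampling seed so the comparison is repeatable

vague_prompt = "tax india salary how much"
precise_prompt = (
    "Explain in two sentences how income tax is calculated "
    "for a salaried employee in India."
)

for prompt in (vague_prompt, precise_prompt):
    out = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print("PROMPT:", prompt)
    print(out[0]["generated_text"])
    print("-" * 60)
```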

Is ChatGPT the only generative model?

GPT is a highly advanced language model developed by OpenAI, but other models and companies offer similar capabilities. Here are a few of its competitors –

1. BERT by Google – Bidirectional Encoder Representations from Transformers (BERT) is a pre-trained language model developed by Google.

2. XLNet by Carnegie Mellon University – XLNet is a pre-trained language model developed by Carnegie Mellon University that uses a permutation-based training process.

3. RoBERTa by Facebook AI – RoBERTa is a pre-trained language model developed by Facebook AI that was created to address some of the limitations of BERT.

4. ALBERT by Google – ALBERT (A Lite BERT) is a pre-trained language model developed by Google that uses a lighter model architecture for improved performance.

These are some of the major competitors to ChatGPT in the field of NLP and pre-trained language models. Each of these models has its strengths and weaknesses, and the best choice will depend on the specific use case and requirements.
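
For a sense of how these BERT-family models differ from GPT, note that they are trained to fill in masked words in a sentence rather than to continue text left to right. A minimal sketch, assuming the Hugging Face transformers package and the public bert-base-uncased checkpoint:

```python
# BERT-style models fill in masked words rather than continuing text the way
# GPT models do. A minimal sketch using the public bert-base-uncased weights.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The model ranks candidate words for the [MASK] position.
for prediction in fill_mask("ChatGPT is an artificial [MASK] chatbot."):
    print(f"{prediction['token_str']:>15}  score={prediction['score']:.3f}")
```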

ChatGPT and Search Engines:

While ChatGPT helps generate texts, paragraphs, letters, emails, articles and essays, one of its major future implementations could be in search engines. People usually type questions into search engines expecting quick, straightforward answers; however, search engines have traditionally been limited to providing links to articles. Google has done a fantastic job of generating questions related to key search terms and providing excerpts from various articles, which is often useful. ChatGPT can take this a step further with direct answers and responses that feel more like an interaction. But calling it a ‘Google killer’ doesn’t seem plausible. It is unlikely that ChatGPT will completely replace search engines. While it is capable of generating human-like responses to questions, search engines like Google use complex algorithms to retrieve relevant and accurate information from across the internet, offering a much wider range of information and resources. Google also has a stronghold on the search engine market and is constantly updating its algorithms to improve results. Search engines will likely use language models like ChatGPT to enhance search results; for now, however, it doesn’t seem like such models will replace search engines entirely.
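
One way a search engine could use a language model without being replaced by it is to pair retrieval with generation: the engine fetches relevant snippets, and the model drafts a direct answer from them. The sketch below is hypothetical: the search_snippets list and the prompt template are invented, and GPT-2 via the Hugging Face transformers package stands in for a production-grade model.

```python
# Hypothetical sketch of pairing search retrieval with a language model:
# snippets retrieved by the search engine are pasted into a prompt, and the
# model drafts a direct answer. The snippets and prompt template are invented;
# GPT-2 stands in for a production-grade model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# In a real system these snippets would come from the search index, with URLs.
search_snippets = [
    "GPT-3 has 175 billion parameters and was trained on 570GB of text.",
    "ChatGPT was released by OpenAI in November 2022.",
]

question = "How large is the model behind ChatGPT?"
prompt = (
    "Answer the question using only the snippets below.\n\n"
    + "\n".join(f"- {snippet}" for snippet in search_snippets)
    + f"\n\nQuestion: {question}\nAnswer:"
)

result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```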

How should ChatGPT be used?

While ChatGPT is a powerful tool for generating human-like responses, it is important to understand its limitations and use it with caution. ChatGPT recreates human language using data from the internet, and online data is not necessarily factual. People are often mean, wrong or even sarcastic – things that artificial intelligence may not be capable of detecting. Therefore, such chatbots should be used as a tool for answering questions or for brainstorming captions, strategies or lists, and it is important to cross-check the output. As generative AI gains traction, some predict that a new category of professionals, ‘prompt engineers’, will replace traditional programmers. Just the way we use search engines today, prompting generative AI will become part of our jobs in the future. However, the current version of the technology falls far short of the expectations around it. An email or text composed by ChatGPT may be indistinguishable from one composed by a human, but the human ability to put difficult sentiments into flawless sentences cannot be replaced. We cannot make blanket statements about when it is okay to use AI to compose personal messages; however, for people who struggle with written or spoken communication, ChatGPT can be a life-changing tool. As we increase our use of generative chatbots, it is important to ask ourselves: are we enhancing our communication, or deceiving and shortchanging ourselves?
