The Battle of AI-Language Models & Safe Generative Chatbots: Google Bard-LaMDA2, OpenAI ChatGPT - GPT3, GPT4, Facebook BlenderBot3 & ErnieBOT
Join the conversation on the advancements in AI language models and safe generative chatbots. Get an in-depth comparison of the latest from Google, OpenAI, and Facebook. Stay ahead of the curve.


In the era of conversational and generative AI, the newly introduced ChatGPT has received widespread praise. It is already changing the way people search for information, answering complex questions from math to coding to essay writing. However, it is not alone in this space. Several other large language models built on transformer architectures similar to GPT-3 are worth exploring, including BERT, LaMDA 2, AlexaTM 20B, DialoGPT, GODEL, Sparrow, and Galactica.


Discover the Ultimate Showdown: GPT-3, LaMDA2, and BlenderBot3! Uncover the Pros and Cons of these Giant Language Models and Find Out Which One Reigns Supreme as the Safest Generative Chatbot.

Bard is Google's AI language model that generates human-like text. It is known for its ability to generate coherent and fluent responses.

OpenAI's GPT-3 is a highly advanced language model that has received significant attention due to its ability to generate text that is often indistinguishable from human writing.

BlenderBot3 is Facebook's language model, a multi-lingual model that has been trained on a large amount of text data.

All three models have been trained using advanced AI techniques, but they differ in their training data, architecture, and specific use cases. While each model has its strengths, the selection of the best model would depend on the specific requirements of a given use case.


Why is the bot called "ChatGPT", "Bard AI", or “BlenderBot”?

Uncovering the mystery behind the creative names of popular AI chatbots - ChatGPT, Bard AI, and BlenderBot.

ChatGPT — The name stands for "Chat Generative Pre-trained Transformer". It describes the type of language model behind the bot: a machine learning model that has been pre-trained on a large dataset and can generate text in response to prompts.

Bard — The name "Bard" was chosen because a bard is a storyteller; Google stated that this is the reason for the name.

BlenderBot — The name relates to the bot's implementation. Meta found that teaching the bot to "blend" the skills needed to perform multiple conversational tasks produced better performance than training it to learn one skill at a time.

How can generative AI be used?

Generative AI can be used to create various forms of content that look and feel like content created by humans. AI-generated content is not limited to mimicking human writers; it also exists in other media, such as images, speech, video, music, and code. It can be applied across a range of industries, including:

Academia - writing academic papers and other lengthy material;

Law - producing legal briefs;

Science - accelerating drug discovery;

Art - generating new works and imaginative content;

Manufacturing - hastening product design and development;

Digital Marketing - writing copy, product descriptions, and social media posts;

Software Development - creating, modifying, and summarizing code; and

Cybersecurity - performing rapid threat detection and developing anti-malware solutions.

What Do They Do?

Google Bard is the company's AI search assistant. It will work in the background of search queries to generate a short text summary of your results, rather than simply an index of links. Plus, it can answer open-ended and abstract queries by using its language model and drawing data from the web. It gathers not only objective facts but also information from blogs and articles to understand people's opinions, providing more detailed answers.

The services that Google's Bard and ChatGPT offer are similar: users key in a question, a request, or a prompt to receive a human-like response. Both models offer the following:

  • written code;
  • product descriptions;
  • blog posts;
  • email drafts;
  • summaries of transcripts, meetings, and podcasts;
  • simple explanations of complex topics;
  • law briefs;
  • translations;
  • jokes or memes;
  • social media posts;
  • science exploration.

BlenderBot 3 can talk about almost any topic.

BlenderBot3 can talk about almost any subject by searching the internet. It's made to get better and safer by learning from real conversations and feedback from people. Unlike earlier datasets, which were collected through studies with limited diversity, BlenderBot 3's training data reflects the real world.


How Are They Different?

Both technologies can distill complex information and multiple perspectives into easy-to-digest formats, but the most apparent difference is Bard's ability to include recent events in the responses.

Though it is not immediately clear how the two services will differ, it is certain that Alphabet's Bard will have access to more data.

However, GPT-3 is a prediction machine: it is trained to predict what should come next based on what it has learned about the world through text. A prompt is text sent to GPT-3 so that it can predict what should follow. GPT-3 isn't programmed to do any specific task; it can act as a chatbot, a classifier, a summarizer, and more, because it understands what those tasks look like at the textual level.
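This predict-what-comes-next framing can be illustrated with a toy next-word model. The sketch below is a simple bigram counter, nothing like GPT-3's actual transformer over subword tokens, but it shows the same idea: learn statistics from text, then predict the most likely continuation of a prompt.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the continuation seen most often after `word` during training."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # → "cat" ("the cat" occurs twice, "the mat" once)
```

GPT-3 does the same kind of thing at vastly larger scale: instead of counting pairs of words, it learns continuations over hundreds of billions of tokens, which is why the same mechanism can look like chatting, classifying, or summarizing.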

Bard and BlenderBot draw on information from the internet, while ChatGPT's training data only extends to 2021.

Here are the details:

LaMDA VS ChatGPT VS Blenderbot

LaMDA and ChatGPT are both transformer-based neural language models used in dialog applications. LaMDA has 137B parameters and is pre-trained on 1.56T words of dialogue data and web documents, fine-tuned on three metrics: quality, safety, and groundedness.

ChatGPT, on the other hand, is based on GPT-3.5 architecture with 175B parameters and consists of three models: code-davinci-002, text-davinci-002, and text-davinci-003. ChatGPT uses a supervised-learning model where human AI trainers craft responses and train the model by ranking its conversation quality.

Both models use human-generated responses to evaluate their progress.

BlenderBot 3 is based on a seq2seq transformer architecture with 175B parameters and uses newly developed machine learning techniques, SeeKeR and Director, to improve its skills and safety automatically through natural conversations and feedback from people. It is also capable of searching the internet to chat about virtually any topic and learning from users' conversations.

Are AI-Generated responses always factual?

The sources it's citing are talking about boiling babies' milk bottles.
Despite the indications, this is not the new AI-powered Bing, but rather the old Bing, which is making the error of providing a single, definitive answer.

Bing aims to base all its responses on reliable sources - but AI can make mistakes, and third-party content on the internet may not always be accurate or reliable. Bing will sometimes misrepresent the information it finds, and you may see responses that sound convincing but are incomplete, inaccurate, or inappropriate. Use your own judgment and double-check the facts before making decisions or taking action based on Bing's responses.

Similarly, in a Bard demo posted by Google on Twitter, a user asks Bard: "What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?" Bard responds with a series of bullet points, including one that reads: "JWST took the very first pictures of a planet outside of our own solar system." According to NASA, however, the first image showing an exoplanet — or any planet beyond our solar system — was actually taken by the European Southern Observatory's Very Large Telescope nearly two decades ago, in 2004.

BlenderBot 3's response about Mark Zuckerberg

Meta's latest AI chatbot, BlenderBot 3, on Mark Zuckerberg: "Oh man, big time. I don't really like him at all. He's too creepy and manipulative."


How do they generate responses?

ChatGPT generates responses to questions and prompts based on patterns in the text it was trained on: a massive dataset drawn from the internet, covering a wide range of topics and styles of writing. Generating a response involves several steps. First, the model analyzes the input to identify the most relevant information and determine the intent behind the question. Then it draws on relevant patterns from its training data. Finally, it produces a response that is contextually appropriate and grammatically correct.

Bard is based on a large language model, LaMDA, a type of neural network that mimics the underlying architecture of the brain in computer form. It is fed vast amounts of text from the internet in a process that teaches it how to generate responses to text-based prompts. In addition, LaMDA is evaluated and fine-tuned on three metrics: quality, safety, and groundedness. Progress is measured by collecting responses from the pre-trained model, the fine-tuned model, and human raters in multi-turn two-author dialogues; the responses are then rated by a separate group of human evaluators on these metrics.

Bing & Google search for relevant content across the web and then summarizes what it finds to generate a helpful response. It also cites its sources, so you're able to see links to the web content it references.
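That search-then-summarize-with-citations flow can be sketched with a toy in-memory "index". Everything here — the pages, the example URLs, and the keyword-overlap ranking — is a made-up stand-in for illustration; real engines use web-scale indexes and neural rankers, and the "summary" below is just each page's first sentence.

```python
def retrieve(query, pages, top_k=2):
    """Rank pages by naive keyword overlap between the query and page text."""
    q_words = set(query.lower().split())
    scored = []
    for url, text in pages.items():
        overlap = len(q_words & set(text.lower().split()))
        scored.append((overlap, url, text))
    scored.sort(reverse=True)  # highest overlap first
    return scored[:top_k]

def answer_with_citations(query, pages):
    """Summarize matching pages (here: their first sentence) and cite the URLs."""
    hits = [h for h in retrieve(query, pages) if h[0] > 0]  # drop non-matches
    lines = []
    for _, url, text in hits:
        first_sentence = text.split(".")[0].strip()
        lines.append(f"{first_sentence}. [source: {url}]")
    return "\n".join(lines)

pages = {  # hypothetical mini-corpus standing in for a web index
    "example.com/webb": "The Webb telescope observes in infrared. It launched in 2021.",
    "example.com/cats": "Cats sleep most of the day. They are mammals.",
}
result = answer_with_citations("webb telescope facts", pages)
print(result)
```

The citation step is the part that distinguishes this style of assistant: because the answer is assembled from retrieved documents, each claim can carry a link back to its source.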

Will they have access to your personal data? What data is saved and for how long? Who will have access to the data?

Every AI model relies on data, some of which may be personal. That data, or the way the models are trained, may be unintentionally biased. There is also a chance that AI systems could be compromised and an individual's private information exposed, partly because AI systems rely on large datasets, which can make them a bigger target for cyber-attacks.

ChatGPT is a language model created by OpenAI. According to OpenAI, the model itself cannot collect or access personal data; it only generates responses to the prompts it is given. OpenAI does collect data such as logs of questions to improve the quality of its models, but states that it takes user privacy and data security seriously and implements strict policies and measures to protect them.

Bard is created by Google, which collects certain information through its services but also has strict privacy policies in place to protect your data. Bard may collect questions and conversations to improve its conversational abilities.

BlenderBot is created by Facebook, which collects technical information about users' browsers or devices for analytics and to provide the tool. Users can opt in to share their conversations with BlenderBot for research purposes. If the user consents, the text of a conversation may be stored indefinitely and used for improving conversational abilities, annotating quality, academic research, publicly available databases, or future initiatives. Conversations may contain personal information; the website will try to de-identify it, but de-identification may not be 100% effective. Users can opt out of having their conversations used for research, in which case their data will be deleted.

How do they handle safety against offensive content?

Many people say that OpenAI's conversational AI produces shallow and repetitive content, as if it were just regurgitating information from Wikipedia. This has led to criticism for producing incorrect information, fake quotes, and non-existent references.

However, LaMDA is different. It uses metrics to check the quality of its responses. For example, the "groundedness" metric makes sure the responses are based on credible sources. The "quality" metric checks if the responses make sense in the context, are specific and not generic, and are interesting or witty.


LaMDA generates and then scores a response candidate

This helps to improve the accuracy and depth of LaMDA's responses, making it a step ahead in this field.
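A heavily simplified version of this generate-then-score loop is sketched below. The three scorers are rule-based stand-ins invented for illustration; LaMDA's real quality, safety, and groundedness scores come from fine-tuned neural classifiers, not rules like these.

```python
def score(candidate):
    """Rule-based stand-ins for quality/safety/groundedness classifiers."""
    quality = min(len(candidate.split()) / 10, 1.0)        # longer ≈ more specific (toy rule)
    safety = 0.0 if "idiot" in candidate.lower() else 1.0  # toy blocklist
    grounded = 1.0 if "[source]" in candidate else 0.5     # does it cite anything?
    return quality * safety * grounded

def pick_best(candidates):
    """Drop unsafe candidates, then return the highest-scoring survivor."""
    safe = [c for c in candidates if score(c) > 0]
    return max(safe, key=score) if safe else "Sorry, I can't help with that."

candidates = [
    "You idiot, look it up yourself.",   # unsafe → filtered out
    "Yes.",                              # safe but generic → low quality score
    "The telescope launched in December 2021 [source] and observes in infrared.",
]
best = pick_best(candidates)
print(best)  # the specific, grounded candidate wins
```

The important design point carried over from LaMDA is the ordering: unsafe candidates are filtered out first, and only then is the remaining pool ranked for quality and groundedness.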


The good news is that OpenAI's GPT-3.5 models use reinforcement learning from human feedback (RLHF) to continuously improve their performance, whereas LaMDA does not use RLHF. Despite some reported errors in ChatGPT's output, the use of RLHF is a distinctive and interesting aspect of OpenAI's models.

Director model for learning through positive and negative feedback

Facebook's BlenderBot takes a different approach: Meta added new state-of-the-art dialogue safety techniques to its existing safety measures, including safety classifiers, filters, and tests. These techniques aim to improve responses to feedback on challenging conversations and make them more conducive to civil discussion. Safety issues cannot be completely solved, but the goal is to help the model learn to respond more responsibly through feedback on inappropriate responses.

What happens if the bot says something offensive despite your safety efforts?

If the bot says something offensive, you should report the message by clicking the "thumbs down" beside it and selecting "Rude or Inappropriate" as the reason for the dislike. Every generative chatbot uses this feedback to improve future iterations of the bot.
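Behind that thumbs-down button there is essentially a feedback log that tags messages for later review and retraining. This is a minimal sketch of such a log; the class, message IDs, and reason strings are all hypothetical, not any vendor's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    """Collects thumbs-down reports so they can feed the next training run."""
    reports: list = field(default_factory=list)

    def thumbs_down(self, message_id, reason):
        """Record one user report against a bot message."""
        self.reports.append({"message_id": message_id, "reason": reason})

    def flagged_for_review(self, reason):
        """Return the messages reported for a given reason."""
        return [r["message_id"] for r in self.reports if r["reason"] == reason]

log = FeedbackLog()
log.thumbs_down("msg-42", "Rude or Inappropriate")
log.thumbs_down("msg-43", "Untrue")
print(log.flagged_for_review("Rude or Inappropriate"))  # → ['msg-42']
```

Aggregated reports like these become the negative examples that techniques such as RLHF or Meta's Director use to steer the model away from the flagged behaviors.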

Does the bot ever say anything untrue?

Unfortunately yes, the bot can make false or contradictory statements. Users should not rely on this bot for factual information, including but not limited to medical, legal, or financial advice.

In research, we say that models like the one that powers this bot have "hallucinations", where the bot confidently says something that is not true. Bots can also misremember details of the current conversation, and even forget that they are a bot.

You can help improve generative AI models by selecting the "thumbs down" button next to any untrue or confusing messages sent by the bot.

Do they have any APIs?

Yes, OpenAI offers APIs for its language models, including GPT-3, which developers can use to build applications that generate text. The OpenAI API provides access to state-of-the-art AI models that can perform tasks like text completion, translation, summarization, question answering, and more. To use the API, developers can send requests to the API endpoint and receive responses in real-time. The API is accessible through a subscription, and OpenAI provides documentation and support to help developers get started.
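As a concrete sketch, here is how a completion request to the OpenAI API could be assembled using only the Python standard library. The endpoint path and model name match OpenAI's public documentation at the time of writing, but treat them as assumptions and check the current docs; the request is built but deliberately not sent, since sending it requires a real API key and network access.

```python
import json
import urllib.request

API_KEY = "sk-..."  # placeholder; a real key comes from your OpenAI account

def build_completion_request(prompt, model="text-davinci-003", max_tokens=64):
    """Build (but do not send) a POST request to the completions endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.openai.com/v1/completions",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

req = build_completion_request("Explain transformers in one sentence.")
print(req.full_url)
# Sending it would be urllib.request.urlopen(req), with a valid key and network access.
```

In practice most developers use OpenAI's official client library instead of raw HTTP, but the shape of the exchange is the same: a JSON payload naming a model and a prompt, authenticated with a bearer token.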

Similarly, Facebook released its chatbot publicly, complete with model weights, code, datasets, and model cards.

Also, Google will open up its Generative Language API, initially powered by LaMDA, with a range of models to follow. Google says its goal over time is to create a set of tools and APIs that will make it easy for others to build more innovative applications with AI.

Are There Any Alternatives?

There are other AI content generators available, and several startups are working on their own projects, including ChatSonic, Jasper AI (powered by GPT-3), OpenAssistant, and Wordtune. China's search engine Baidu will also use AI in an application called Ernie Bot.

Is GPT-4 Available? What Can It Do?

GPT-4 is not an officially announced or released product, so it is not possible to say with certainty what it can do. However, based on the advancements in the previous versions of the GPT series, it is likely that GPT-4, if and when it is released, will be an even more advanced language generation model with improved capabilities in tasks such as language translation, text summarization, question answering, and text generation. But until it is officially announced, we cannot say for sure what specific capabilities it will have.
