One million people used ChatGPT in just 5 days. To put this in context, Facebook, which is also free, took 10 months to reach that milestone, and Instagram 2.5 months. Some of my friends proclaimed, "Here is the beginning of the end" and "we are doomed." In this post, I aim to dive deeper into the question of whether ChatGPT is truly going to usher in the end of the world.
Here are a few reasons why ChatGPT is built to evolve responsibly.
What is impressive is that OpenAI has checked almost every transparency box for releasing AI, a standard no other AI-creating organization has met before.
- Documentation Transparency: OpenAI provides detailed documentation and guidelines on how to use the model responsibly. It has also published research papers and blog posts about the development and capabilities of its models, including the technical details of their architecture and performance as well as their ethical considerations and limitations.
- Model Transparency: OpenAI provides ample information on which model was used, the model's design and limitations, the steps taken to reduce biases and stereotypes, and what it is willing to do if users find problems with ChatGPT. OpenAI has also done an excellent job of documenting the lineage of its technology and models, which helps reduce the black-box nature of ChatGPT, and it has released the source code for some of its models so researchers and developers can examine how they work. Each model comes with a "Model Card" summarizing its capabilities, the data it was trained on, and its ethical considerations and limitations, so users can understand the model's performance and limits before using it.
- Data Transparency: OpenAI openly admits that its AI can be biased and elaborates on every step it has taken to mitigate that bias. It describes the data used to train its models and explains how to use them responsibly. OpenAI also has a "Responsible AI" team to handle ethical concerns, and it continually reviews its data and outcomes to ensure they are not perpetuating biases or stereotypes.
- Technology Transparency: OpenAI offers the "OpenAI API," a cloud-based service that lets developers access the company's models from their own applications without having to download and host the models themselves. Detailed documentation, usage limits, and pricing details make it easy for developers to understand how to use the models responsibly and to stay within the usage policies.
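To make the API point concrete, here is a minimal sketch of what calling the OpenAI API looks like over plain HTTPS. The endpoint and payload shape follow OpenAI's public chat-completions API; the model name and prompt are illustrative assumptions, and a real application would also handle errors and rate limits.

```python
"""Minimal sketch of an OpenAI API call (assumptions: model name, prompt)."""
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Assemble the JSON payload the chat-completions endpoint expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(prompt: str) -> str:
    """Send the prompt; requires the OPENAI_API_KEY environment variable."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # The answer text lives in the first choice's message
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Inspect the payload without sending anything
    print(json.dumps(build_request("Explain model cards in one sentence.")))
```

Because the hosting, authentication, and quotas all live on OpenAI's side, usage policies can be enforced centrally, which is part of what makes this delivery model more controllable than shipping raw model weights.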
- We need an understanding of human consciousness to build sentience in machines: To build sentient AI, AI that can act on its own, we first need to understand what consciousness actually is. Many leading AI experts don't believe we have the tools, infrastructure, or knowledge to build sentient AI. Current-generation chatbots are built on pattern recognition, token matching against millions of internet documents, and sophisticated sentence construction that assembles the matched tokens in a meaningful way. They are nowhere close to sentient AI. In essence, ChatGPT is just an automated version of search that can be illogical at times. The ultimate responsibility lies with the users and creators.
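The "pattern recognition plus sentence construction" point can be made concrete with a toy next-word predictor. This is a hugely simplified sketch and emphatically not OpenAI's architecture: real models learn statistical patterns over billions of tokens with neural networks, but the underlying principle, predicting the next token from previously seen patterns with no understanding involved, is the same.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count how often each word follows each other word."""
    words = corpus.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(follows: dict, start: str, length: int = 8) -> str:
    """Emit the most frequent continuation, one word at a time."""
    out = [start]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break  # never saw this word followed by anything
        out.append(choices.most_common(1)[0][0])
    return " ".join(out)

corpus = (
    "the model predicts the next word the model has seen before "
    "the model has no understanding of the next word"
)
table = train_bigrams(corpus)
print(generate(table, "the", length=4))
```

The generated text is locally fluent yet carries no comprehension at all, which is exactly why such systems can sound authoritative while being illogical.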
- ChatGPT gives opinions but doesn't make decisions. It simply augments the human mind with a more sophisticated compilation of existing research. If a human acts on ChatGPT's irresponsible or nonsensical answers, ChatGPT's creator OpenAI states that the human is to blame.
In many modern uses of AI, decisions are made by the technology itself. For example, AI has been used to make hiring, mortgage, and prison-sentencing decisions, with harmful results. AI can be asked for an opinion, but it should not be asked for a decision. The way ChatGPT is built moves AI toward the opinion function rather than the decision function. This is exactly how AI is meant to be used: as a less intelligent but more specialized "human helper," in ChatGPT's case, a crawler of internet documents and a sensible sentence constructor.
This is the ultimate test of responsibility. Until now, AI of this caliber was proprietary technology. Opening public access to AI conveys two things:
- Being open to crowdsourcing feedback, problems, and concerns from the wider public and from users of AI builds trust in it. People are free to experiment with ChatGPT and to point out where things go wrong or when someone is acting irresponsibly. We collectively have the ability, access, and opportunity to use ChatGPT responsibly. Here is a look at some of the wild conversations people have had with ChatGPT.
- If a company is confident enough to make ChatGPT public, it suggests the technology is genuinely impressive and has been tested well.
Prior to this kind of information augmentation, AI systems such as Meta's Galactica and IBM's Watson were closed to the public, which is why they came to be referred to as "black boxes."