ChatGPT: What's Next - 1000+ Use Cases
Image generated by Ashraf with DALL·E 2


Summary of contents presented to the North American forum of engineers (KEAN USA) on 26 March 2023 at 8 PM EST.

It is not easy to tell what happens next.

It's hard to predict how the world will look just 10 years from today. It will be very different: we have passed the inflection point, the rocket engines have been lit, and the rocket has just taken off.


Why Chatbots?

Chatbots are becoming increasingly popular because they offer a number of benefits for businesses and individuals alike. Here are some reasons why chatbots are being used:

  1. Improved customer service: Chatbots can provide 24/7 customer support, which can help businesses to provide better customer service and respond to customer inquiries and issues more quickly.
  2. Cost savings: Chatbots can help businesses to save money by automating certain tasks and reducing the need for human customer support agents.
  3. Increased efficiency: Chatbots can handle multiple customer inquiries at once, which can help to improve efficiency and reduce wait times for customers.
  4. Personalization: Chatbots can use data and machine learning to provide personalized recommendations and support to customers, which can help to improve the customer experience.
  5. Accessibility: Chatbots can help to make products and services more accessible to people who may have difficulty accessing them through traditional channels.
  6. Scalability: Chatbots can handle large volumes of inquiries and interactions, making them an effective solution for businesses with a large customer base.


What is ChatGPT?

ChatGPT is a large language model developed by OpenAI, based on the GPT-3.5 architecture. It is designed to understand natural language and respond to a wide variety of questions and prompts in a conversational manner. ChatGPT has been trained on vast amounts of text data and uses machine learning algorithms to generate human-like responses. Its purpose is to assist users in a variety of tasks, from answering general knowledge questions to helping with creative writing prompts.

What are Generative Models?

Generative models are machine learning models that are designed to generate new data that is similar to the input data that they were trained on. They are used to learn the underlying patterns and structures in the input data and then use that knowledge to generate new examples of the same type of data.

Generative models can be used for a variety of tasks, such as image generation, text generation, and speech synthesis. They can also be used for data augmentation, where new data is generated to expand the size of a training dataset, and for anomaly detection, where inputs that the model assigns low probability to (or reconstructs poorly) are flagged as unlike the training data, and therefore anomalous.

Some examples of generative models include variational autoencoders, generative adversarial networks (GANs), and autoregressive models such as GPT. These models have been used in a wide range of applications, including image and video generation, natural language processing, and drug discovery.
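As a concrete illustration of one of the model families above, here is a minimal GAN sketch in Keras/TensorFlow (the same stack that appears in the demo list later). It is a sketch under simplifying assumptions: tiny dense networks, MNIST as training data, and an untuned loop, not a production implementation.

```python
# Minimal GAN sketch: a generator learns to produce MNIST-like images by
# trying to fool a discriminator. Sizes and step counts are illustrative.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 64

# Generator: maps random noise to a flattened 28x28 image in [0, 1].
generator = keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(latent_dim,)),
    layers.Dense(784, activation="sigmoid"),
])

# Discriminator: classifies inputs as real (1) or generated (0).
discriminator = keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(784,)),
    layers.Dense(1, activation="sigmoid"),
])
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Combined model: freezing the discriminator here only affects the combined
# model; the discriminator itself still trains via its own compile above.
discriminator.trainable = False
gan = keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

(x_train, _), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

batch = 128
for step in range(1000):  # enough to show the loop shape, not to converge
    noise = np.random.normal(size=(batch, latent_dim))
    fake = generator.predict(noise, verbose=0)
    real = x_train[np.random.randint(0, len(x_train), batch)]
    # Train the discriminator on labeled real (1) and fake (0) batches.
    discriminator.train_on_batch(
        np.concatenate([real, fake]),
        np.concatenate([np.ones(batch), np.zeros(batch)]),
    )
    # Train the generator to make the discriminator output 1 for its fakes.
    gan.train_on_batch(np.random.normal(size=(batch, latent_dim)),
                       np.ones(batch))
```

The same adversarial recipe scales up to the image and video generators mentioned above; only the network architectures and the data change.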

What are LLMs?

LLMs, or Large Language Models, are machine learning models that are used to understand and generate natural language. They build on several underlying language-modeling approaches, each with its own strengths and weaknesses. Here are some examples:

  1. n-gram models: These models are based on statistical patterns of word co-occurrences, where n refers to the number of words in the sequence. For example, a 2-gram model looks at pairs of adjacent words, while a 3-gram model looks at sequences of three adjacent words. These models are relatively simple, but they can be effective for modeling short text sequences (see the toy bigram sketch after this list).
  2. Recurrent Neural Networks (RNNs): RNNs are a type of neural network that can process sequential data, such as sentences or paragraphs. They can be used to generate new text by predicting the probability distribution of the next word given the previous words in the sequence. Variants such as LSTMs can capture longer-range dependencies than n-gram models, although plain RNNs struggle with very long sequences.
  3. Transformer models: These models are based on the self-attention mechanism, which allows the model to attend to different parts of the input sequence when generating the output. Transformer models have become very popular for natural language processing tasks, such as language translation and text generation, because of their ability to model long-range dependencies and generate high-quality text.
  4. Probabilistic Context-Free Grammars (PCFGs): These are models that generate sentences by recursively applying grammatical rules. Each rule has a probability associated with it, and the model generates a sentence by choosing the most likely rule at each step. PCFGs are useful for generating structured text, such as sentences that follow a specific grammatical structure.
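To make the n-gram idea in item 1 concrete, here is a toy bigram model in plain Python. The one-sentence corpus and the absence of smoothing are deliberate simplifications for illustration.

```python
# A toy bigram (2-gram) language model: count which word follows which,
# then generate text by sampling in proportion to those counts.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to observed bigram counts."""
    words, weights = zip(*bigrams[prev].items())
    return random.choices(words, weights=weights)[0]

# Generate a short sequence starting from "the".
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Real n-gram models add smoothing for unseen word pairs and train on far larger corpora, but the core mechanism, counting and sampling, is exactly this.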

Many large language models have been developed by various companies and organizations; here are some examples:

  1. GPT-3 (Generative Pretrained Transformer 3) - developed by OpenAI, is one of the largest and most well-known language models to date, with 175 billion parameters.
  2. T5 (Text-to-Text Transfer Transformer) - developed by Google, is a large-scale LLM that can be trained on a variety of natural language tasks, including translation, summarization, and question-answering.
  3. Megatron - developed by NVIDIA, is a large-scale transformer-based LLM, together with the software infrastructure for training such models efficiently across many GPUs.
  4. BERT (Bidirectional Encoder Representations from Transformers) - developed by Google, is a transformer-based LLM that has been trained on a variety of natural language tasks, including question-answering and sentiment analysis.
  5. RoBERTa (Robustly Optimized BERT Pretraining Approach) - developed by Facebook AI, is a modified version of BERT that has been further optimized for a wide range of natural language processing tasks.
  6. GShard - developed by Google, is a technique (and a family of models built with it) for sharding very large models, including mixture-of-experts models, across many machines so they can be trained on massive amounts of data.
  7. Switch Transformer - developed by Google, is a sparse mixture-of-experts LLM that routes each token to a single expert, allowing very large parameter counts at a modest compute cost.
  8. ProphetNet - developed by Microsoft, is a large-scale transformer-based LLM that has been specifically designed for generating high-quality text.
  9. Other notable models include the Pathways Language Model (PaLM), GLaM, LaMDA, Gopher, and Megatron-Turing NLG.

APIs & Tokens - "Use Your Words Carefully"


ChatGPT and Whisper models are available as APIs and are charged by the number of tokens, so it is important to have a basic understanding of tokens. At launch, the ChatGPT API charged $0.002 per 1,000 tokens; by OpenAI's rule of thumb, 1,000 tokens translates to approximately 750 words of common English.
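Because billing is per token, it is worth counting tokens before sending a prompt. Here is a minimal sketch using OpenAI's tiktoken library; it assumes the cl100k_base encoding used by the gpt-3.5-turbo models and the launch price quoted above, both of which may change.

```python
# Count tokens locally and estimate the cost of a prompt before calling
# the API. Price below is the ChatGPT API launch price ($0.002 / 1K tokens).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by gpt-3.5-turbo

text = "It's hard to predict how the world will look just 10 years from today."
tokens = enc.encode(text)

print(f"tokens: {len(tokens)}")
print(f"estimated cost: ${len(tokens) / 1000 * 0.002:.6f}")
```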

Whisper - OpenAI's Whisper API can be used by transcription service providers to transcribe audio and video content in multiple languages accurately and efficiently. The API's near-real-time transcription and support for multiple file formats allow for greater flexibility and faster turnaround times.
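As a sketch, a transcription call looked like this with the openai Python library at launch (early 2023); the file name is a placeholder and the API key would normally come from configuration, not source code.

```python
# Transcribe an audio file with the Whisper API (openai library, early 2023).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use an env var in practice

with open("meeting.mp3", "rb") as audio_file:  # hypothetical file name
    result = openai.Audio.transcribe("whisper-1", audio_file)

print(result["text"])
```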


Responsible AI

Responsible AI refers to the development and deployment of artificial intelligence (AI) systems that are designed to be fair, transparent, accountable, and trustworthy. Responsible AI seeks to address ethical and social considerations that arise with the use of AI, and to ensure that these systems are developed and used in a way that benefits society as a whole.

The concept of responsible AI encompasses a range of principles, such as:

  1. Fairness: AI systems should not discriminate against individuals or groups based on factors such as race, gender, or age.
  2. Transparency: AI systems should be designed and deployed in a way that is transparent, so that users can understand how the system works and how decisions are made.
  3. Accountability: AI systems should be accountable for their actions, and mechanisms should be put in place to ensure that they can be held responsible for any negative consequences.
  4. Privacy: AI systems should respect individual privacy and data protection laws, and any data collected or used should be handled in a secure and ethical manner.
  5. Safety: AI systems should be designed and deployed in a way that is safe for humans, animals, and the environment, and any potential risks or harms should be identified and mitigated.

Responsible AI is becoming increasingly important as AI systems become more prevalent in society and have a greater impact on people's lives. Many organizations, including governments, NGOs, and tech companies, are working to develop principles and guidelines for responsible AI, and to ensure that these principles are incorporated into the development and deployment of AI systems.


Topics covered in the session:

  1. What is ChatGPT?
  2. ChatGPT news
  3. ChatGPT explained
  4. What are chatbots?
  5. Chatbot architecture
  6. Why chatbots?
  7. Impact on jobs
  8. Large Language Models
  9. ChatGPT use cases
  10. Security risks
  11. Responsible AI
  12. The engine and corpus - AI
  13. Neural networks and classification
  14. Generative models
  15. Reinforcement learning
  16. GitHub Copilot
  17. Microsoft 365 Copilot, Office, ERP and CRM
  18. Google Bard, LaMDA, OPT, BLOOM
  19. What's next?
  20. Demo – text classification, Python, Keras, APIs, code generation, image generation, existing models (ChatGPT-4, DALL·E 2)
  21. Q&A

ChatGPT-generated use cases: a 100-use-case list and a 1,000-use-case list.

Use Cases 1

Use Cases 2

Demo Items

1) Anatomy Q&A for medical students

2) Writing emails to your boss

3) Creating PowerPoint presentations

4) Training it wrongly through dialogue

5) Counselling help

6) Code generation

7) Movie scripts

8) Speech content

9) Research papers

10) Summarization

11) Python chatbots with Flask (see the sketch after this list)

12) Movie corpus and sentiment analysis with Keras/TensorFlow
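For demo item 11, a Python chatbot with Flask can be as small as the sketch below. The /chat route, the request shape, and the use of gpt-3.5-turbo through the early-2023 openai library are illustrative assumptions, not the exact demo code.

```python
# Minimal Flask chatbot that relays each user message to the ChatGPT API.
import openai
from flask import Flask, request, jsonify

openai.api_key = "YOUR_API_KEY"  # placeholder; load from config in practice

app = Flask(__name__)

@app.route("/chat", methods=["POST"])
def chat():
    user_message = request.json["message"]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": user_message}],
    )
    return jsonify({"reply": response["choices"][0]["message"]["content"]})

if __name__ == "__main__":
    app.run(debug=True)
```

A client would POST JSON such as {"message": "hello"} to /chat and read the reply field; a fuller version would also keep per-user conversation history in the messages list.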

