Generative AI -Opportunities and Challenges

ChatGPT and Generative AI are no longer just buzzwords!

Generative AI can perform many impressive tasks: it can write optimized code for you, create unique images from a text prompt, understand and summarize articles, and much more. People have already started using ChatGPT and Google Bard, which are examples of generative AI applications. They are based on Large Language Models (LLMs).

What are LLMs?

Large language models (LLMs) have generated a lot of hype in recent months. LLMs are trained on vast corpora of data and are also called foundation models (FMs).

Foundation models differ from traditional machine learning models. A traditional machine learning model is trained on a specific dataset and can accomplish a specific task. A foundation model, on the other hand, is trained on a vast amount of data, and a single FM can accomplish multiple tasks. For example, one foundation model can generate text, summarize it, and extract information from it.

These FMs are pre-trained on vast amounts of data and are very large in size. They are built using deep learning techniques from natural language processing (NLP) and predict the probability of the next word in a sequence. These models can be fine-tuned on domain-specific data (a relatively small dataset) for better performance.
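The core idea of next-word prediction can be sketched in a few lines: the model assigns a raw score (logit) to every candidate word, and a softmax turns those scores into a probability distribution. The vocabulary and logit values below are made-up toy numbers purely for illustration, not from any real model.

```python
import math

def softmax(logits):
    """Convert raw model scores (logits) into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    return [e / total for e in exps]

# Toy logits an LLM might assign to candidate next words after "The cat sat on the"
vocab = ["mat", "roof", "moon"]
logits = [3.0, 1.5, 0.2]

probs = softmax(logits)
prediction = vocab[probs.index(max(probs))]
print(prediction)  # "mat" receives the highest probability
```

A real LLM does the same thing over a vocabulary of tens of thousands of tokens, with logits produced by a deep transformer network instead of hand-picked numbers.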


Opportunities:

Generative AI is taking AI to the next level and creating many new business opportunities. LLMs such as PaLM and ChatGPT have achieved outstanding performance on a variety of natural language processing tasks.

  • Gen AI can generate product descriptions from product reviews and summarize negative reviews.
  • Gen AI can search for similar products in a catalog based on a product description given by the user.
  • It can summarize the transcript of a user-agent conversation and check its sentiment for a better customer experience.
  • Getting clean, real-time data to train a model is always a challenge. Gen AI can help generate synthetic data to train ML models.
  • Gen AI is taking conversational AI to the next level, understanding user queries and context better and improving the customer experience.
  • It can generate new and creative content and images for marketing.
  • It can improve the accuracy of AI applications such as fraud detection by analyzing large amounts of unstructured data.

Challenges:

Along with these opportunities, there are many challenges in adopting and fine-tuning these models.

  1. Infrastructure Optimization: These models are very large, so optimizing the infrastructure needed to build foundation models is a major challenge.
  2. Model Monitoring: Another big challenge is monitoring these models. Because they generate new text and images, validating their content in production is always difficult.
  3. Security: Data privacy and security are major challenges for industries fine-tuning these models.
  4. Scaling: Because these models are so large, scaling and automating LLMs is one of the key challenges.
  5. Sustainability: FMs are trained on vast amounts of data. Such AI demands more processing, network, and storage capacity, increasing its carbon footprint.

Consumption of LLMs

Despite these challenges, we need to understand why LLMs are gaining so much popularity and how companies are consuming them and building LLM-powered applications.

As an API:

It is quite expensive to build and train your own large language model. Most people prefer to use pre-trained models, such as those hosted on Hugging Face or offered by OpenAI, which are accessible through an API.

LLMs are available as APIs that can be used directly for many NLP tasks without any training.

When calling the API, we need to pass in some parameters, such as temperature, top_p, and the prompt. The prompt is where you describe to the model what you want it to do.

For example, you can get an OpenAI API key and call the API:

import openai
from langchain import PromptTemplate
from langchain.llms import OpenAI

openai.api_key = "<API KEY>"
llm = OpenAI(openai_api_key=openai.api_key, temperature=0)


def get_completion(loc):
    # Describe the task in the template; {location} is filled in at call time.
    template = """I want to travel to {location}. What should I do there? Respond in one short sentence."""
    prompt = PromptTemplate(input_variables=["location"], template=template)
    final_prompt = prompt.format(location=loc)
    response = llm(final_prompt)
    return response


get_completion('india')


Fine-Tuning with Prompt Engineering: An LLM's output depends on the prompt. By giving a correct and concise prompt, you can get the desired output from the LLM. You can also include explicit instructions in the prompt to shape the result.
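The effect of adding instructions can be shown without calling any API at all: the sketch below just builds the prompt strings that would be sent to a model. The template layout and wording here are illustrative assumptions, not a prescribed format.

```python
from string import Template

def build_prompt(task_text, instructions):
    """Combine explicit instructions with the user's text into a single prompt."""
    template = Template("$instructions\n\nText:\n$task_text\n\nAnswer:")
    return template.substitute(instructions=instructions, task_text=task_text)

# A vague prompt leaves the model free to answer in any length or style.
vague = build_prompt("LLMs are large models trained on text.", "Summarize.")

# A precise prompt constrains format and content, so the output is more predictable.
precise = build_prompt(
    "LLMs are large models trained on text.",
    "Summarize the text in exactly one sentence, in plain English, "
    "without adding any information not present in the text.",
)
```

The same input text is sent both times; only the instruction portion changes, which is the essence of prompt engineering.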

Fine-Tuning LLMs with Parameters: You can also shape the output by adjusting the LLM's parameters. For example, if you want unique and creative content, set the temperature parameter to 1; if you want the most likely (most frequent) content, set the temperature to 0.
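What temperature actually does can be sketched in plain Python: logits are divided by the temperature before the softmax, so low values sharpen the distribution toward the single most likely token and high values flatten it toward more varied choices. The logit values are toy numbers for illustration.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then apply softmax.

    Low temperature -> sharper distribution (near-deterministic output).
    High temperature -> flatter distribution (more varied, creative output).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]          # toy scores for three candidate tokens
sharp = softmax_with_temperature(logits, 0.1)  # almost all mass on token 0
flat = softmax_with_temperature(logits, 2.0)   # mass spread across tokens
```

With temperature near 0, sampling effectively always picks the top token; with a high temperature, lower-scored tokens get a real chance of being chosen.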

Fine-Tuning LLMs with One and Few Shots: Companies can get more accurate output in the desired format with few-shot prompting. In few-shot prompting, you provide a few examples; the LLM learns the pattern from these examples and returns output in the same format.

Building LLMs from a Base Model: Companies can leverage pre-trained transformer models (public or proprietary), for example from Hugging Face or AI21 Labs, that are already trained on a large corpus of data in a self-supervised fashion. These base models can then be fine-tuned on a specific downstream task.


Happy Learning!!
