There are two types of LLMs:
- Base LLMs: trained only to predict the next word from large amounts of text. They complete text rather than answer requests, so they are suited only to basic tasks.
- Instruction-Tuned LLMs: base models fine-tuned to follow instructions, typically refined with RLHF toward being helpful, honest, and harmless. These are the models used in practice.
Principles of Prompt Engineering for Instruction-Tuned LLMs:
- Write clear and specific instructions.
- Give the model time to think.
Steps for Prompt Engineering:
- Install the openai library, import it along with os (used to read the API key), and load the helper function. Note: the helper below uses the pre-1.0 openai interface; from version 1.0.0 onward the library replaced openai.ChatCompletion.create with a client-object API.
- The helper function includes:
def get_completion(prompt, model="gpt-3.5-turbo"):
    messages = [{"role": "user", "content": prompt}]
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0,  # this is the degree of randomness of the model's output
    )
    return response.choices[0].message["content"]
This Python function, get_completion, calls OpenAI's chat completion API (here with the "gpt-3.5-turbo" model) to generate responses to prompts. Here's a breakdown of how it works:
- Input: prompt is the text sent to the model, representing the starting point or context for the conversation; model (optional) selects which model to use, defaulting to "gpt-3.5-turbo".
- Formatting the input: the prompt is wrapped in a list of dictionaries called messages, each dictionary representing one message in the conversation; here it becomes a single user message.
- Generating the response: the function calls OpenAI's ChatCompletion.create method, passing the model to use (model), the messages formatted earlier (messages), and additional parameters such as temperature. temperature controls the randomness of the generated response; setting it to 0 makes the output essentially deterministic.
- Extracting the response: the choices attribute of the response object holds the list of generated completions. Only one completion is requested by default (controlled by the n parameter, not by temperature), so the function takes the first one with response.choices[0] and reads its text via response.choices[0].message["content"].
- Output: the function returns the content of the generated response as a string.
In summary, this function provides a simple way to send a prompt to an OpenAI chat model and get the generated text back, offering a streamlined way to integrate AI-generated text into applications or projects.
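To make the shapes concrete, here is a small sketch with no network call; the mock response below is a hand-written stand-in imitating the structure that the pre-1.0 openai library returns:

```python
# Illustrative sketch only: mock_response imitates the shape of the object
# returned by openai.ChatCompletion.create in the pre-1.0 library.

# Request side: the prompt becomes a single user message.
prompt = "What is the capital of France?"
messages = [{"role": "user", "content": prompt}]

# Response side: a dict with a list of choices, each holding a message.
mock_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "The capital of France is Paris."}}
    ]
}

# get_completion takes the first choice and reads its message content.
answer = mock_response["choices"][0]["message"]["content"]
print(answer)  # The capital of France is Paris.
```

This makes the indexing in response.choices[0].message["content"] easy to see: first pick a choice, then read its message's content.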
- Use delimiters to indicate distinct parts of the input. These could be triple quotes ("""), triple backticks, angle brackets (<>), or XML-style tags (<tag></tag>).
- Ask for structured output, such as JSON or HTML.
- Ask the model to check whether conditions are satisfied.
- Use few-shot prompting.
- Specify the steps required to complete a task.
- Instruct the model to work out its own solution before rushing to a conclusion.
- Instruct the model to test/validate its own results.
- Ensure human-in-the-loop for checking model hallucinations.
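A minimal sketch combining several of the tactics above: XML-style delimiters mark the input, the steps are spelled out, and the prompt asks for structured JSON output. The review text and key names are invented for illustration; the get_completion call is the helper defined earlier.

```python
# Hypothetical example: delimiters + specified steps + structured output.
review = "The camera is excellent, but the battery barely lasts half a day."

prompt = f"""Perform the following steps on the review delimited by <review> tags:
1 - Identify each product aspect mentioned.
2 - Classify the sentiment toward each aspect as "positive" or "negative".
3 - Output a JSON list of objects with keys "aspect" and "sentiment".

<review>{review}</review>"""

# reply = get_completion(prompt)  # would send the prompt to the model
print(prompt)
```

Because the review sits inside explicit tags, the model cannot confuse the instructions with the text to be analyzed, and the JSON request makes the output easy to parse downstream.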
Iterative: prompt development works much like building a machine learning model: idea -> implement -> evaluate -> iterate.
- If the output is too long, ask the model to shorten it.
- If the LLM output focuses on the wrong details, get it back on track.
- Ask it to extract information and organize it in a table based on the use case.
- Ask the LLM to summarize the text.
- Provide a word or character limit.
- Ask it to focus on a topic within the provided text input.
- If the output introduces unrelated topics, ask the LLM to extract information instead of summarizing.
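The iteration loop above can be sketched as two versions of the same prompt: a first attempt, then a refinement that adds a word limit and narrows the focus. The product description is invented for illustration.

```python
# Hypothetical iteration: v1 is the naive prompt, v2 is the refined one.
fact_sheet = ("A mid-century office chair with an aluminum base, adjustable "
              "height, and a choice of fabric or leather finishes.")

# Version 1: plain summary request -- output may be too long or off-focus.
prompt_v1 = (
    "Summarize the product description delimited by <desc> tags.\n"
    f"<desc>{fact_sheet}</desc>"
)

# Version 2: iterate -- add a word limit and steer toward the relevant details.
prompt_v2 = (
    "Summarize the product description delimited by <desc> tags "
    "in at most 20 words, focusing on the materials, "
    "for an audience of furniture retailers.\n"
    f"<desc>{fact_sheet}</desc>"
)

# summary = get_completion(prompt_v2)  # would send the refined prompt
print(prompt_v2)
```

Each round, you inspect the output, diagnose what is off (length, focus, format), and fold the fix back into the prompt.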
LLMs can identify sentiment and emotions from a text.
- Ask LLMs to identify specific sentiments.
- Extract specific information, such as names of persons mentioned in the text.
- Ask LLMs to multitask, accomplishing several of these tasks in a single prompt.
- Ask LLMs to infer topic titles based on the text.
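A sketch of such a multitask inference prompt: one call that identifies sentiment, extracts person names, and proposes a topic title, all returned as JSON. The story text is invented for illustration.

```python
# Hypothetical multitask prompt: sentiment + name extraction + topic inference.
story = ("Maria Lopez, the town librarian, delighted residents by reopening "
         "the reading room after months of repairs.")

prompt = f"""For the text delimited by <text> tags, perform these tasks:
- sentiment: "positive" or "negative"
- names: a list of persons mentioned
- topic: a short title for the text
Return the answer as a JSON object with keys "sentiment", "names", "topic".

<text>{story}</text>"""

# result = get_completion(prompt)  # one call accomplishes all three tasks
print(prompt)
```

Batching related inferences into one prompt saves API calls and keeps the extracted fields aligned with each other.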
I used the inferring functionality in my personal podcast app.
Translation: LLMs translate from one language to another and can even act as a universal translator, converting sentences from several languages into one target language in a single request. They can also produce or change tones (e.g., formal vs. casual) and convert between text formats, such as JSON to HTML.
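A sketch of a transformation prompt covering translation, tone, and format conversion in one request; the input data and the instruction wording are invented for illustration.

```python
# Hypothetical transformation prompt: translate + retone + reformat.
data = {"name": "Chaise de bureau", "price": "120 EUR"}

prompt = f"""Perform the following on the JSON delimited by <data> tags:
1 - Translate all values from French to English.
2 - Rewrite the product name in a formal tone suitable for a catalog.
3 - Output the result as an HTML table with one row per key.

<data>{data}</data>"""

# html_table = get_completion(prompt)  # would return the transformed output
print(prompt)
```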
LLMs can integrate information from two different texts.