Generative AI Glossary

Auto-Regressive Model: “A model that infers a prediction based on its own previous predictions. For example, auto-regressive language models predict the next token based on the previously predicted tokens. Most modern Transformer-based large language models are auto-regressive.”
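The feedback loop in that definition can be sketched in a few lines. This is a toy illustration only: `next_token` is a hypothetical stand-in for a real language model, not an actual API.

```python
# Toy sketch of auto-regressive decoding. next_token is a
# hypothetical placeholder for a real language model.
def next_token(tokens):
    # Fake "model": deterministically emits "tokN" for position N.
    return f"tok{len(tokens)}"

def generate(prompt_tokens, max_new_tokens=3):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # Each prediction is conditioned on all tokens so far,
        # including the model's own previous predictions.
        tokens.append(next_token(tokens))
    return tokens

print(generate(["hello"]))  # ['hello', 'tok1', 'tok2', 'tok3']
```

The essential point is the loop: every new token is appended to the sequence and becomes part of the input for the next prediction.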

Chain-of-Thought Prompting: “A prompt engineering technique that encourages a large language model (LLM) to explain its reasoning, step by step.”
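A hypothetical example of such a prompt, written as a plain string (the question and wording are invented for illustration):

```python
# A chain-of-thought prompt: the trailing phrase "think step by step"
# nudges the model to show intermediate reasoning before answering.
cot_prompt = (
    "Q: A bakery sells 12 muffins per tray. "
    "How many muffins are on 3 trays?\n"
    "A: Let's think step by step."
)
print(cot_prompt)
```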

Chat: “The contents of a back-and-forth dialogue with an ML system, typically a large language model. The previous interaction in a chat (what you typed and how the large language model responded) becomes the context for subsequent parts of the chat.”

Contextualized Language Embedding: “An embedding that comes close to “understanding” words and phrases in ways that native human speakers can. Contextualized language embeddings can understand complex syntax, semantics, and context.”

Context Window: “The number of tokens a model can process in a given prompt. The larger the context window, the more information the model can use to provide coherent and consistent responses to the prompt.”

Distillation: “The process of reducing the size of one model (known as the teacher) into a smaller model (known as the student) that emulates the original model’s predictions as faithfully as possible.”
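One common way to train the student is to minimize the cross-entropy between the teacher's output distribution and the student's. A minimal numeric sketch (the probability values are invented):

```python
import math

# Toy distillation loss: train the student to match the teacher's
# probability distribution via cross-entropy against soft targets.
def cross_entropy(teacher_probs, student_probs):
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

teacher = [0.7, 0.2, 0.1]    # soft targets from the larger teacher model
student = [0.6, 0.25, 0.15]  # current student predictions
loss = cross_entropy(teacher, student)
```

The loss is smallest when the student's distribution exactly matches the teacher's, which is what "emulating the original model's predictions" means in practice.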

Few-Shot Prompting: “A prompt that contains more than one (a “few”) example demonstrating how the large language model should respond.”
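A sketch of what a few-shot prompt looks like as text, here with two worked examples preceding the new query (the translation task is illustrative):

```python
# Few-shot prompt: two demonstrations show the model the desired
# input/output format before the actual query.
few_shot_prompt = (
    "Translate English to French.\n"
    "sea otter -> loutre de mer\n"
    "cheese -> fromage\n"
    "butter ->"
)
```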

Fine-Tuning: “A second, task-specific training pass performed on a pre-trained model to refine its parameters for a specific use case.”

Instruction Tuning: “A form of fine-tuning that improves a generative AI model’s ability to follow instructions. Instruction tuning involves training a model on a series of instruction prompts, typically covering a wide variety of tasks. The resulting instruction-tuned model then tends to generate useful responses to zero-shot prompts across a variety of tasks.”

Low-Rank Adaptation (LoRA): “A parameter-efficient tuning algorithm that freezes a large language model’s pre-trained weights and instead trains small, low-rank update matrices, drastically reducing the number of trainable parameters.”
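The core idea can be sketched with matrices. Rather than updating a full d × d weight matrix W, LoRA learns two thin matrices whose product is a low-rank update (shapes and values below are illustrative, not from any real model):

```python
import numpy as np

# LoRA sketch: learn a rank-r update B @ A instead of a full d x d
# weight update. Only A and B would be trained; W stays frozen.
d, r = 64, 4
W = np.random.randn(d, d)         # frozen pre-trained weights
A = np.random.randn(r, d) * 0.01  # trainable, r x d
B = np.zeros((d, r))              # trainable, initialized to zero
W_adapted = W + B @ A             # effective weights at inference

# Fraction of parameters actually trained:
print((2 * r * d) / (d * d))  # 0.125
```

Initializing B to zero means the adapted model starts out identical to the pre-trained one, and training gradually moves it toward the task.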

Model Cascading: “A system that picks the ideal model for a specific inference query.”

Model Router: “The algorithm that determines the ideal model for inference in model cascading. A model router is itself typically a machine-learning model that gradually learns how to pick the best model for a given input. However, a model router could sometimes be a simpler, non-machine learning algorithm.”
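The definition's simpler, non-machine-learning case can be sketched in one function. The threshold and model names below are invented for illustration:

```python
# A trivial non-ML model router: send short queries to a cheap model
# and long queries to a more capable one. Names are hypothetical.
def route(query, threshold=50):
    return "small-model" if len(query) < threshold else "large-model"
```

A learned router would replace this rule with a classifier trained on past queries and outcomes, but the interface (query in, model name out) is the same.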

One-Shot Prompting: “A prompt that contains one example demonstrating how the large language model should respond.”

Parameter-Efficient Tuning: “A set of techniques to fine-tune a large pre-trained language model (PLM) more efficiently than full fine-tuning. Parameter-efficient tuning typically fine-tunes far fewer parameters than full fine-tuning, yet generally produces a large language model that performs as well (or almost as well) as a large language model built from full fine-tuning.”

Pre-Trained Model: “Models or model components (such as an embedding vector) that have already been trained.”

Pre-Training: “The initial training of a model on a large dataset.”

Prompt: “Any text entered as input to a large language model to condition the model to behave in a certain way.”

Prompt-based Learning: “A capability of certain models that enables them to adapt their behavior in response to arbitrary text input (prompts).”

Prompt Engineering: “The art of creating prompts that elicit the desired responses from a large language model.”

Prompt Tuning: “A parameter-efficient tuning mechanism that learns a “prefix” that the system prepends to the actual prompt.”
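The learned "prefix" is typically a small block of embedding vectors prepended to the prompt's embeddings before they enter the frozen model. A shape-level sketch (dimensions are illustrative):

```python
import numpy as np

# Prompt-tuning sketch: a trainable "soft prefix" of k embedding
# vectors is prepended to the frozen prompt embeddings. Only the
# prefix would be updated during training.
k, d = 4, 8                                # prefix length, embedding size
soft_prefix = np.random.randn(k, d)        # trainable
prompt_embeddings = np.random.randn(6, d)  # frozen, from the model's embedder
model_input = np.concatenate([soft_prefix, prompt_embeddings], axis=0)
print(model_input.shape)  # (10, 8)
```

Because only k × d numbers are trained, this is far cheaper than updating the model's full weight set.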

Reinforcement Learning from Human Feedback: “Using feedback from human raters to improve the quality of a model’s responses.”

Role Prompting: “An optional part of a prompt that assigns a persona to a generative AI model or identifies a target audience for its response.”

Soft Prompt Tuning: “A technique for tuning a large language model for a particular task, without resource-intensive fine-tuning. Instead of retraining all the weights in the model, soft prompt tuning automatically adjusts a prompt to achieve the same goal.”

Temperature: “A hyperparameter that controls the degree of randomness of a model’s output. Higher temperatures result in more random output, while lower temperatures result in less random output.”
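Concretely, temperature divides the model's logits before the softmax: values below 1 sharpen the distribution toward the top token, values above 1 flatten it. A self-contained sketch:

```python
import math

# Temperature-scaled softmax: logits are divided by the temperature
# before normalizing. T < 1 sharpens the distribution; T > 1 flattens it.
def softmax_with_temperature(logits, temperature=1.0):
    scaled = [l / temperature for l in logits]
    m = max(scaled)                           # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, temperature=0.5)
hot = softmax_with_temperature(logits, temperature=2.0)
# The top token gets more probability mass at low temperature,
# so sampling is less random; at high temperature the choices even out.
```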

Zero-Shot Prompting: “A prompt that does not provide an example of how you want the large language model to respond.”

