Prompting Techniques for LLMs Compared to Fine-Tuning




  • Prompting techniques are methods for using pre-trained language models on natural language processing tasks without fine-tuning or adding new parameters.
  • They rely on designing natural language prompts and demonstrations that elicit the desired output from the language model.


Some common prompting techniques are:



  • Zero-Shot Prompting: no examples provided; leverage the model’s pre-training.
  • Few-Shot Prompting: provide a few demonstrations of inputs and outputs; show the model the desired output format.
  • Chain-of-Thought Prompting: include intermediate reasoning steps in the demonstrations or ask the model to reason step by step; elicit the reasoning before the final answer.
  • Self-Consistency Prompting: sample multiple reasoning paths and pick the most frequent final answer; improve robustness over a single decoding (see the sketch after this list).
  • Tree-of-Thought Prompting: generate and evaluate multiple intermediate thoughts; allow backtracking and exploration of alternative reasoning paths.
  • Verifiers: train a separate model to score candidate responses; filter out or down-rank incorrect ones.
  • Fine-Tuning on Prompted Explanations: fine-tune the model on an explanation dataset generated via prompting; improve its reasoning abilities.
  • Prompting techniques can reduce the gap between pre-training and downstream tasks and enable few-shot or zero-shot learning in new scenarios. They can also improve the performance, accuracy, and confidence of language models across a variety of tasks.
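To make the ideas above concrete, here is a minimal Python sketch of how few-shot, chain-of-thought, and self-consistency prompting compose. The generate function is a hypothetical stand-in for whatever model or API you call, and the arithmetic demonstrations are made up for illustration; the sketch is not tied to any particular library.

```python
import re
from collections import Counter

# Hypothetical stand-in for any text-generation call (API or local model);
# it should return a single completion string for the given prompt.
def generate(prompt: str, temperature: float = 0.8) -> str:
    raise NotImplementedError("plug in your own model or API client here")

# Few-shot chain-of-thought prompt: the demonstrations show both the
# intermediate reasoning steps and the final answer format.
FEW_SHOT_COT = """\
Q: A shop sells pens at 3 for $2. How much do 12 pens cost?
A: 12 pens is 4 groups of 3 pens. 4 groups x $2 = $8. The answer is 8.

Q: Tom has 5 apples and buys 7 more, then gives away 4. How many are left?
A: 5 + 7 = 12 apples. 12 - 4 = 8 apples. The answer is 8.

Q: {question}
A:"""

def extract_answer(completion: str) -> str | None:
    """Pull the value after 'The answer is' from a chain-of-thought completion."""
    match = re.search(r"The answer is\s*(-?\d+(?:\.\d+)?)", completion)
    return match.group(1) if match else None

def self_consistent_answer(question: str, n_samples: int = 5) -> str | None:
    """Self-consistency: sample several reasoning paths and majority-vote the answer."""
    prompt = FEW_SHOT_COT.format(question=question)
    answers = []
    for _ in range(n_samples):
        # Temperature above zero is needed so the samples actually differ.
        completion = generate(prompt, temperature=0.8)
        answer = extract_answer(completion)
        if answer is not None:
            answers.append(answer)
    if not answers:
        return None
    # The most frequent answer across samples becomes the final prediction.
    return Counter(answers).most_common(1)[0][0]
```

Dropping the worked examples from FEW_SHOT_COT turns this into zero-shot prompting, and setting n_samples to 1 reduces self-consistency back to plain chain-of-thought.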

Prompt techniques


  • #NLP
  • #Prompting
  • #AI
  • #LanguageModels
