From Simple Queries to Complex Reasoning: Evolution of LLM Prompting Techniques
Florent LIU
Data Architect, Full-Stack Data Engineer (Big Data), and Full-Stack AI Developer.
Introduction
Prompt engineering has emerged as a pivotal technique for unlocking the reasoning capabilities of large language models (LLMs).
This article systematically compares five major prompting strategies—Basic Input-Output (IO), Chain of Thought (CoT), Multiple CoTs with Self-Consistency (CoT-SC), Tree of Thoughts (ToT), and Graph of Thoughts (GoT)—analyzing their strengths, limitations, and ideal use cases.
By evaluating metrics such as accuracy, computational cost, and reasoning flexibility, the article reveals a clear hierarchy: simpler methods like IO excel in efficiency but fail at complex tasks, while advanced frameworks like ToT and GoT achieve state-of-the-art performance through structured exploration of reasoning paths, albeit at significantly higher computational expense.
The analysis concludes with actionable guidelines for researchers and practitioners to optimize LLM performance across domains.
1. Basic Input-Output (IO)
Basic IO is the simplest prompting strategy, relying on direct mapping of input to output without intermediate reasoning steps. It is computationally efficient but struggles with complex, multi-step problems due to lack of transparency in reasoning.
IO operates via pattern recognition from training data, making it prone to errors in tasks requiring logical or arithmetic reasoning. Its performance plateaus on benchmarks like GSM8K (math problems) and CommonsenseQA, where step-by-step analysis is critical.
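As a concrete baseline, the IO strategy can be sketched as a single direct call. The `call_llm` function below is a hypothetical stand-in for a real chat-completion API, stubbed with a canned reply so the example runs offline:

```python
# Minimal sketch of Basic IO prompting. `call_llm` is a hypothetical
# stand-in for a real chat-completion API, stubbed for illustration.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call (stub): returns a canned reply for the demo prompt."""
    canned = {
        "Classify the sentiment of: 'I loved this movie!'": "positive",
    }
    return canned.get(prompt, "unknown")

def io_prompt(task: str) -> str:
    # IO strategy: the input maps directly to an output, with no
    # intermediate reasoning steps to inspect or trace.
    return call_llm(task)

print(io_prompt("Classify the sentiment of: 'I loved this movie!'"))  # positive
```

Note the trade-off this illustrates: one cheap call, but when the answer is wrong there is no intermediate step to examine.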
2. Chain of Thought (CoT)
CoT introduces explicit intermediate reasoning steps, mimicking human problem-solving. It significantly improves performance on tasks requiring logic, arithmetic, or symbolic manipulation.
By decomposing problems into sub-steps (e.g., "Step 1: Calculate 25% of 80..."), CoT reduces cognitive load and allows error tracing. However, it remains linear and deterministic, limiting exploration of alternative pathways.
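The decomposition idea can be sketched as a prompt template plus answer extraction. The stubbed `call_llm` below is hypothetical and returns a canned step-by-step reply so the example runs offline:

```python
# Sketch of Chain-of-Thought prompting: the prompt asks for intermediate
# steps before the final answer. `call_llm` is a hypothetical stub.

COT_TEMPLATE = "{question}\nLet's think step by step."

def call_llm(prompt: str) -> str:
    """Stubbed reply illustrating the expected CoT output format."""
    return ("Step 1: 25% of 80 is 0.25 * 80 = 20.\n"
            "Step 2: Therefore the answer is 20.\n"
            "Answer: 20")

def cot_answer(question: str) -> str:
    reply = call_llm(COT_TEMPLATE.format(question=question))
    # The final line carries the answer; the earlier lines enable error tracing.
    for line in reply.splitlines():
        if line.startswith("Answer:"):
            return line.split(":", 1)[1].strip()
    return reply.strip()

print(cot_answer("What is 25% of 80?"))  # 20
```

The linearity limitation is visible here: the chain is a single fixed sequence of steps, with no mechanism to explore an alternative path if Step 1 goes wrong.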
3. Multiple CoTs with Self-Consistency (CoT-SC)
CoT-SC generates multiple independent reasoning chains and selects the most consistent final answer via majority voting, reducing the bias of single-path reasoning.
By sampling diverse reasoning paths (e.g., 5-10 chains at non-zero temperature), CoT-SC mitigates errors from individual flawed chains. However, it multiplies computational cost and provides no explicit coordination between chains.
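The sample-then-vote scheme can be sketched in a few lines. `sample_chain` is a hypothetical stub for one CoT run; it returns canned answers (mostly correct, one flawed) so the example is deterministic and runs offline:

```python
# Sketch of self-consistency (CoT-SC): sample several reasoning chains
# and majority-vote on their final answers.

from collections import Counter

def sample_chain(question: str, i: int) -> str:
    """Stub for one sampled CoT run; returns only that chain's final answer."""
    canned = ["20", "20", "18", "20", "20"]  # one flawed chain out of five
    return canned[i % len(canned)]

def cot_sc(question: str, n_chains: int = 5) -> str:
    answers = [sample_chain(question, i) for i in range(n_chains)]
    # Majority voting filters out errors made by individual flawed chains.
    return Counter(answers).most_common(1)[0][0]

print(cot_sc("What is 25% of 80?"))  # 20
```

The cost multiplier is explicit: `n_chains` LLM calls per question, with no information shared between chains.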
4. Tree of Thoughts (ToT)
ToT extends CoT by exploring a tree of potential reasoning paths, enabling backtracking and parallel exploration of hypotheses.
ToT frames problem-solving as a search process over a tree, where each node represents a partial solution. It uses LLM-based evaluation heuristics (e.g., the model scoring how promising each partial solution looks) to prioritize branches, enabling strategic exploration.
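The tree search can be sketched as a beam search over partial solutions. Here `propose` and `score` are hypothetical stand-ins for the LLM's thought generator and value heuristic, hard-coded on a toy task (reach a sum of 24) so the example runs offline:

```python
# Sketch of Tree-of-Thoughts search: nodes are partial solutions, a
# generator proposes child thoughts, and a value heuristic prunes the
# frontier (beam search over the tree).

from typing import List

TARGET = 24

def propose(state: List[int]) -> List[List[int]]:
    """Stub thought generator: extend the partial solution with a candidate."""
    return [state + [n] for n in (4, 6, 8)]

def score(state: List[int]) -> float:
    """Stub value heuristic: prefer partial sums closer to the target."""
    return -abs(TARGET - sum(state))

def tot_search(depth: int = 4, beam: int = 2) -> List[int]:
    frontier: List[List[int]] = [[]]
    for _ in range(depth):
        children = [c for s in frontier for c in propose(s)]
        # Keep only the top-`beam` branches; weaker branches are pruned,
        # while keeping siblings in the frontier allows implicit backtracking.
        frontier = sorted(children, key=score, reverse=True)[:beam]
        if any(sum(s) == TARGET for s in frontier):
            break
    return max(frontier, key=score)

best = tot_search()
print(best, sum(best))  # [8, 8, 8] 24
```

In a real ToT system both `propose` and `score` would themselves be LLM calls, which is where the high computational cost comes from.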
5. Graph of Thoughts (GoT)
GoT generalizes ToT by modeling thoughts as a graph, allowing cyclic dependencies and merging of reasoning paths. It excels in tasks requiring non-linear or interdependent reasoning.
Nodes represent intermediate thoughts, and edges define relationships (e.g., contradiction, support). GoT dynamically aggregates information (e.g., merging valid sub-solutions) and outperforms ToT in tasks like theorem proving.
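This split-solve-merge pattern can be sketched on a toy sorting task, one of the benchmark tasks in the GoT paper. `solve` is a hypothetical stub standing in for an LLM call on each sub-problem:

```python
# Sketch of a Graph-of-Thoughts pipeline: split the problem into
# sub-thoughts (fan-out edges), solve each one, then aggregate the
# validated sub-solutions back into a single answer (fan-in edges).

from typing import List

def generate(thought: List[int], k: int = 2) -> List[List[int]]:
    """Split a thought into roughly k sub-thoughts (graph fan-out)."""
    step = max(1, len(thought) // k)
    return [thought[i:i + step] for i in range(0, len(thought), step)]

def solve(thought: List[int]) -> List[int]:
    """Stub LLM solver for one sub-problem; here it simply sorts a short list."""
    return sorted(thought)

def aggregate(a: List[int], b: List[int]) -> List[int]:
    """Merge two solved thoughts (graph fan-in), keeping the result sorted."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def got_sort(xs: List[int]) -> List[int]:
    subs = [solve(s) for s in generate(xs)]
    result = subs[0]
    for s in subs[1:]:
        result = aggregate(result, s)  # merging edges join reasoning paths
    return result

print(got_sort([9, 3, 7, 1, 8, 2]))  # [1, 2, 3, 7, 8, 9]
```

The aggregation step is what distinguishes GoT from ToT: a tree can only branch, whereas a graph lets independently solved branches flow back together.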
Comparative Analysis
1. IO: minimal compute; lowest accuracy on multi-step benchmarks; no visible reasoning.
2. CoT: low overhead; large gains on arithmetic and logic tasks; a single, linear path.
3. CoT-SC: cost grows with the number of sampled chains; more robust answers via majority voting; chains remain uncoordinated.
4. ToT: high cost; strong results on search-heavy puzzles (74% on Game of 24*); supports backtracking.
5. GoT: highest cost; best on non-linear, interdependent tasks; supports merging of reasoning paths.
*ToT’s 74% applies to Game of 24, not GSM8K.
Strategic Recommendations
1. IO: Use for low-stakes, single-step tasks (e.g., classification).
2. CoT: Deploy for educational tools or explainable QA systems.
3. CoT-SC: Optimal for high-stakes decisions (e.g., medical diagnosis) where answer robustness justifies the extra sampling cost.
4. ToT: Apply to puzzles, strategic games, or constrained optimization.
5. GoT: Reserve for R&D tasks with interdependencies (e.g., drug discovery).
This hierarchy reflects a trade-off between reasoning depth and computational efficiency, with GoT offering maximal flexibility at the cost of scalability.
References:
Wei, J., et al. "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models." NeurIPS 2022.
Wang, X., et al. "Self-Consistency Improves Chain of Thought Reasoning in Language Models." ICLR 2023.
Yao, S., et al. "Tree of Thoughts: Deliberate Problem Solving with Large Language Models." NeurIPS 2023.
Besta, M., et al. "Graph of Thoughts: Solving Elaborate Problems with Large Language Models." AAAI 2024.