Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models
Why Should You Care About "Logic-of-Thought" (LoT) Prompting?
To accomplish downstream tasks with generative LLMs, it is essential to make use of their reasoning capability. Logic-of-Thought (LoT) is one such technique that helps achieve this.
Research article - Link
Introduction
Large Language Models (LLMs) have showcased impressive abilities across diverse Natural Language Processing (NLP) tasks, such as translation and text generation. However, their performance in complex logical reasoning remains suboptimal.
Despite enhancements brought by Chain-of-Thought (CoT) prompting, LLMs often produce reasoning chains that do not faithfully support their conclusions. This research introduces a novel method, Logic-of-Thought (LoT), which addresses logical reasoning limitations by injecting propositional logic into prompts, thereby enhancing LLMs' ability to maintain consistency and correctness in reasoning tasks.
Existing Methods
Prominent methods like CoT prompting have improved LLMs' reasoning abilities by structuring the thought process into intermediate logical steps.
Building on this, Tree-of-Thought (ToT) and Graph-of-Thought (GoT) further extend reasoning capabilities into more complex topologies, supporting backtracking and exploration of multiple paths.
Despite these advancements, methods like Self-Consistency (SC) and CoT-SC still encounter issues with faithfulness, where the reasoning chain does not align with the final conclusions. Neuro-symbolic approaches, such as LINC and SatLM, integrate symbolic logic with LLMs but suffer from information loss during the extraction of logical expressions, leading to incorrect results.
Proposed Method: Logic-of-Thought (LoT)
LoT introduces an advanced prompting mechanism that systematically extracts and translates logical propositions from input contexts into augmented logical information. This logical information is appended to the original context, thereby enriching the prompt and enhancing the logical reasoning ability of LLMs.
Key Phases of LoT:
Logic Extraction: Using LLMs to identify and extract propositions and logical relationships (e.g., A→B) from input contexts.
Logic Extension: Applying logical laws (e.g., Transitive Law, Contraposition Law) to extend the extracted expressions into new logical forms using a Python-based deduction program (a minimal sketch of this step follows this list).
Logic Translation: Translating the extended logical expressions back into natural language and incorporating them into the original context.
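To make the Logic Extension step concrete, here is a minimal sketch of such a deduction program. It is an illustrative reimplementation under my own assumptions (implications represented as pairs of literals, with "¬" marking negation), not the authors' code.

```python
# Minimal sketch of the Logic Extension phase (not the authors' code).
# Assumes implications were already extracted as (antecedent, consequent)
# pairs of literals, with "¬" marking a negated proposition.

def negate(p: str) -> str:
    """Negate a literal, applying the Double Negation Law (¬¬p ⇔ p)."""
    return p[1:] if p.startswith("¬") else "¬" + p

def extend_logic(implications: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Close a set of implications under the Contraposition and Transitive Laws."""
    extended = set(implications)
    changed = True
    while changed:
        changed = False
        # Contraposition Law: (p → q) ⇔ (¬q → ¬p)
        for p, q in list(extended):
            contra = (negate(q), negate(p))
            if contra not in extended:
                extended.add(contra)
                changed = True
        # Transitive Law: (p → q) ∧ (q → r) ⇒ (p → r)
        for p, q in list(extended):
            for q2, r in list(extended):
                if q == q2 and (p, r) not in extended:
                    extended.add((p, r))
                    changed = True
    return extended

# Running example used later in this article: A → B, B → C
print(extend_logic({("A", "B"), ("B", "C")}))
# contains ("A", "C"), ("¬B", "¬A"), ("¬C", "¬B"), ("¬C", "¬A"), ...
```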
Key Findings
LoT significantly boosts logical reasoning capabilities when integrated with various prompting methods.
The improvements are demonstrated across multiple datasets:
ReClor: CoT accuracy improved by +4.35%, CoT-SC by +2.17%.
LogiQA: LoT enhanced Chain-of-Thought’s performance by +2.50%.
ProofWriter: Tree-of-Thoughts (ToT) accuracy increased by +8%.
Models & Prompting Techniques Used
GPT-3.5-turbo and GPT-4 were employed as LLMs in the experiments. LoT prompting was integrated with prompting techniques like CoT, CoT-SC, and ToT.
Datasets
Five logical reasoning datasets were used, including ReClor, LogiQA, and ProofWriter.
Key Techniques
- Logical Proposition Extraction: Deriving formal logic from the natural language context.
- Logic-based Prompt Augmentation: Expanding the context with logical implications (sketched below).
- Orthogonal Integration: Seamless combination with existing prompting techniques.
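To illustrate the prompt augmentation and the orthogonal integration with CoT, here is a small hypothetical sketch; the CoT trigger phrase and the variable names are assumptions for illustration, not taken from the paper.

```python
# Hypothetical example of Logic-based Prompt Augmentation: LoT's translated
# sentences are appended to the original context, so any existing prompting
# method (Direct, CoT, CoT-SC, ToT, ...) can consume the enriched prompt as-is.
context = ("If a person reads a book, that person gains knowledge. "
           "If a person gains knowledge, they become smarter.")
lot_sentences = ["If a person reads a book, they become smarter."]

augmented_context = context + " " + " ".join(lot_sentences)

# Orthogonal integration with Chain-of-Thought: reuse the usual CoT trigger.
cot_prompt = augmented_context + "\nQuestion: ...\nLet's think step by step."
print(cot_prompt)
```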
Outcomes and Evaluation
The evaluation showed that LoT substantially improves the logical reasoning abilities of LLMs, outperforming both direct prompting and existing advanced methods:
- Achieved a 4.35% accuracy improvement on the ReClor dataset when combined with CoT.
- Enhanced ToT's performance on ProofWriter by 8%.
Takeaway
LoT addresses the critical issue of information loss in logical reasoning by injecting propositional logic directly into the prompt context. This augmentation avoids reliance on external symbolic solvers and helps LLMs reason more faithfully.
Future Research Directions
Future work will focus on expanding the range of supported logical connectives and reasoning laws to further strengthen LoT’s logical extraction and translation capabilities. Moreover, LoT will be adapted for additional complex reasoning frameworks, integrating with newer prompting methods and expanding its application across diverse NLP tasks.
Example Prompts
The Logic-of-Thought (LoT) framework uses a series of prompts during its three main phases: Logic Extraction, Logic Extension, and Logic Translation. Each prompt is tailored to guide the model in extracting, expanding, and translating logical information from natural language contexts. Below are examples of how these prompts are structured in each phase:
1. Logic Extraction Prompt
This prompt guides the model to extract logical propositions and expressions from the provided context.
Example Prompt:
Please identify all possible logical propositions from the following context using uppercase English letters such as A, B, C, etc. Do not include negative tones such as "not" in the propositions. For example, if the sentence is "It is not bored," you should use "A: bored" to represent it. For each proposition, use the symbol ¬ to represent its negative form (e.g., A becomes ¬A).
Next, analyze the context and find causal relationships between propositions. Use arrows (→) to indicate causal relationships. For example:
- "If A, then B", "B if A", and "A causes B" can be represented as A → B.
Output the propositions and causal expressions. If necessary, add additional contextual information for clarity.
Output Example:
Context: "If a person reads a book, that person gains knowledge. If a person gains knowledge, they become smarter."
- Propositions:
A: a person reads a book
B: gains knowledge
C: becomes smarter
- Causal Relationships:
A → B, B → C
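One possible way to run this Logic Extraction prompt programmatically is sketched below, assuming the official OpenAI Python SDK and gpt-3.5-turbo (one of the models used in the experiments); the abbreviated prompt string and the parsing regexes are my own simplification, not part of the paper.

```python
import re
from openai import OpenAI  # assumes the official OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Abbreviated version of the Logic Extraction prompt shown above.
EXTRACTION_PROMPT = (
    "Please identify all possible logical propositions from the following "
    "context using uppercase English letters such as A, B, C, etc. "
    "Use ¬ for negative forms and → for causal relationships. "
    "Output the propositions and causal expressions.\n\nContext: {context}"
)

def extract_logic(context: str):
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": EXTRACTION_PROMPT.format(context=context)}],
    ).choices[0].message.content

    # Lines like "A: a person reads a book" become a proposition table.
    propositions = dict(re.findall(r"^\s*-?\s*([A-Z]):\s*(.+)$", reply, re.MULTILINE))
    # Expressions like "A → B" (optionally negated) become literal pairs.
    expressions = re.findall(r"(¬?[A-Z])\s*→\s*(¬?[A-Z])", reply)
    return propositions, expressions
```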
2. Logic Extension Prompt
This prompt helps the model expand the logical expressions derived from the previous phase using logical reasoning laws.
Example Prompt:
Using the extracted logical expressions and reasoning laws (e.g., Transitive Law: (A → B) ∧ (B → C) ⇒ (A → C)), generate new logical expressions. Use logical rules like:
- Double Negation Law: ¬¬p ⇔ p
- Contraposition Law: (p → q) ⇔ (¬q → ¬p)
- Transitive Law: (p → q) ∧ (q → r) ⇒ (p → r)
For example:
- Input: A → B, B → C
- Output: A → C
Output Example:
- Input: A → B, B → C
- Extended Logical Expressions: A → C
3. Logic Translation Prompt
This prompt converts the expanded logical expressions back into natural language descriptions, which can be seamlessly integrated with the original context.
Example Prompt:
Translate the logical expressions into natural language sentences using the provided propositions. For example:
- A → B can be translated as "If a person reads a book, they gain knowledge."
- ¬B → ¬A can be translated as "If a person does not gain knowledge, they did not read a book."
Only provide the translated sentences as output.
Output Example:
- "If a person reads a book, they gain knowledge."
- "If a person gains knowledge, they become smarter."
Full Example Prompts for LoT
Logic Extraction Phase Prompt: This prompt is designed for extracting propositions and logical relationships from contexts in datasets like ReClor and LogiQA.
Please extract logical propositions from the context using labels like A, B, C, etc. For negative forms, use ¬ before the label. Identify logical relationships (e.g., implications, negations) and represent them using symbols like →.
For example, "If A, then B" can be represented as A → B. Output propositions and causal expressions accordingly.
Logic Extension Phase Prompt: This prompt guides the extension of extracted logical expressions using logical laws like transitivity or contraposition.
Extend the extracted logical expressions using logical rules. For instance, if the input contains "A → B" and "B → C", derive a new expression "A → C".
Use logical reasoning laws such as:
- Transitive Law: (p → q) ∧ (q → r) ⇒ (p → r)
- Contraposition Law: (p → q) ⇔ (¬q → ¬p)
Logic Translation Phase Prompt: This prompt translates extended logical expressions back into natural language.
Translate the logical expressions into natural language sentences. For example, "A → B" can be translated as "If A is true, then B is true". Provide the translated sentences only.
These prompts guide the LLM through extracting, extending, and translating logical expressions, helping it maintain logical consistency and minimize information loss in complex reasoning scenarios.