Faithful Logical Reasoning- Symbolic Chain-of-Thought & GNN-RAG - Graph Neural Retrieval for Large
Language Model Reasoning


While the Chain-of-Thought (CoT) technique enhances the reasoning ability of large language models (LLMs) with the theory of mind, it can still struggle with logical reasoning that relies heavily on symbolic expressions and rigid deduction rules. To strengthen the logical reasoning capability of LLMs, the paper proposes Symbolic Chain-of-Thought, namely SymbCoT, a fully LLM-based framework that integrates symbolic expressions and logic rules with CoT prompting.

Building upon an LLM, SymbCoT -

  1. First translates the natural language context into a symbolic format.
  2. Then derives a step-by-step plan to solve the problem with symbolic logical rules.
  3. Finally employs a verifier to check the translation and the reasoning chain.

Through thorough evaluations on 5 standard datasets with both First-Order Logic and Constraint Optimization symbolic expressions, SymbCoT shows consistent, striking improvements over the CoT method, meanwhile refreshing the current state-of-the-art performance.
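This pipeline can be sketched as a chain of prompts. A minimal sketch, assuming a generic chat-completion client: `call_llm` is a hypothetical stand-in (stubbed here so the sketch runs), and the prompt wording is illustrative, not the paper's exact prompts.

```python
def call_llm(prompt: str) -> str:
    # Stub: replace with a real chat-completion API call.
    return f"<llm-response to {len(prompt)}-char prompt>"

def symbcot(context: str, question: str) -> str:
    # 1. Translator: natural language -> symbolic format (e.g. first-order logic).
    symbolic = call_llm(f"Translate to first-order logic:\n{context}\n{question}")
    # 2. Planner: decompose the problem into a step-by-step plan.
    plan = call_llm(f"Derive a step-by-step plan using symbolic logic rules:\n{symbolic}")
    # 3. Solver: execute the plan with symbolic inference.
    chain = call_llm(f"Solve step by step following the plan:\n{plan}\n{symbolic}")
    # 4. Verifier: trace the translation and each step back to the original context.
    return call_llm(f"Verify the translation and reasoning against:\n{context}\n{chain}")
```

Each stage consumes the previous stage's output, which is what makes the chain trackable and verifiable.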

SymbCoT comprises four main modules: Translator, Planner, Solver, and Verifier. It is characterized by the following three core aspects:

  1. SymbCoT integrates symbolic expressions into CoT to describe intermediate reasoning processes, facilitating more precise logical calculations. However, relying solely on symbolic representation still has its limitations, as it often fails to capture certain content, such as implicit intentions or crucial contextual information embedded within questions. Yet LLMs excel at interpreting such nuanced information and contexts. Thus, we consider a combination of symbolic and natural language expressions to leverage the mutual strengths of both: freely expressed implicit intents and contextual information in natural language and rigorous expression in symbolic forms.
  2. Unlike the straightforward prompting of “thinking step by step” in vanilla CoT, SymbCoT considers a plan-then-solve architecture. This involves decomposing the original complex problem into a series of smaller, more manageable sub-problems, which are then addressed one by one. This way, the entire reasoning process becomes more trackable, enabling a clearer and more structured approach to problem-solving.
  3. Furthermore, the paper devises a retrospective verification mechanism. At both the translation and the subsequent problem-solving stages, it retrospectively validates the correctness of each step’s outcome by tracing back to the original given conditions. This verification ensures the accuracy and reliability of the operations performed during the reasoning process.
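The retrospective verification idea can be illustrated with a small sketch: each derived step is accepted only if it is entailed by the original premises plus the steps already verified. The `entails` check is supplied by the caller; in SymbCoT that role is played by the LLM itself, so the toy predicate here is purely illustrative.

```python
def verify_chain(premises, steps, entails):
    # Accept each step only if it follows from the original premises
    # plus the steps already verified (tracing back to given conditions).
    verified = list(premises)
    for i, step in enumerate(steps):
        if not entails(verified, step):
            return i          # index of the first unsupported step
        verified.append(step)
    return None               # every step traces back to the premises
```

Returning the index of the first unsupported step mirrors how the Verifier pinpoints where a reasoning chain goes wrong instead of only judging the final answer.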


Summarizing Technical Contributions -

  1. Proposes a fully LLM-based logical reasoning framework based on CoT, demonstrating that LLMs can achieve robust logical reasoning capabilities without external reasoning tools. Compared to existing SoTA solutions that rely on external resolvers, SymbCoT offers better robustness against translation errors and more human-understandable explanations.

  2. Innovatively integrates the strengths of symbolic forms and natural language expressions, enabling precise reasoning calculations while fully interpreting implicit information and capturing rich contexts.

  3. Introduces a plan-then-solve architecture for CoT reasoning, along with a retrospective verification mechanism, enhancing the faithfulness of the reasoning process.

Reference Reading Links -

Paper - https://arxiv.org/abs/2405.18357

GNN-RAG: Graph Neural Retrieval for Large Language Model Reasoning -

Large Language Models (LLMs) are the state-of-the-art models in many NLP tasks due to their remarkable ability to understand natural language.

LLMs’ power stems from pretraining on large corpora of textual data, through which they acquire general human knowledge.

However, because pretraining is costly and time-consuming, LLMs cannot easily adapt to new or in-domain knowledge and are prone to hallucinations. Knowledge Graphs (KGs) are databases that store information in a structured form that can be easily updated.

KGs represent human-crafted factual knowledge in the form of triplets (head, relation, tail), which collectively form a graph. The stored knowledge is updated by adding or removing facts. Because KGs capture complex interactions between the stored entities, e.g., multi-hop relations, they are widely used for knowledge-intensive tasks such as Question Answering (QA).
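A minimal in-memory sketch makes the triplet model concrete: facts are (head, relation, tail) tuples, and updating the KG is just adding or removing a tuple, with no retraining involved. The `KnowledgeGraph` class and its method names are illustrative, not from any specific KG library.

```python
class KnowledgeGraph:
    """Toy KG storing (head, relation, tail) triplets."""

    def __init__(self):
        self.triplets = set()

    def add(self, head, relation, tail):
        self.triplets.add((head, relation, tail))

    def remove(self, head, relation, tail):
        self.triplets.discard((head, relation, tail))

    def neighbors(self, entity):
        # One-hop facts about an entity; chaining these gives multi-hop relations.
        return [(r, t) for (h, r, t) in self.triplets if h == entity]

kg = KnowledgeGraph()
kg.add("Jamaica", "language_spoken", "English")
```

Contrast this with an LLM, where "removing" a fact would require retraining or unlearning.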

Retrieval-augmented generation (RAG) is a framework that alleviates LLM hallucinations by enriching the input context with up-to-date and accurate information, e.g., obtained from the KG. In the KGQA task, the goal is to answer natural-language questions by grounding the reasoning in the information provided by the KG. For instance, the input for RAG becomes “Knowledge: Jamaica → language_spoken → English \n Question: Which language do Jamaican people speak?”, where the LLM has access to KG information for answering the question. RAG’s performance highly depends on which KG facts are retrieved.
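Building that RAG input is a simple verbalization step, sketched below. The `build_rag_prompt` helper is a hypothetical name; the format mirrors the "Knowledge: ... \n Question: ..." example above.

```python
def build_rag_prompt(triplets, question):
    # Verbalize retrieved KG triplets, then append the question.
    facts = "\n".join(f"{h} → {r} → {t}" for h, r, t in triplets)
    return f"Knowledge: {facts}\nQuestion: {question}"

prompt = build_rag_prompt(
    [("Jamaica", "language_spoken", "English")],
    "Which language do Jamaican people speak?",
)
```

The quality of `triplets` is exactly where retrieval methods differ, which is the problem GNN-RAG addresses.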

The challenge is that KGs store complex graph information (they usually consist of millions of facts) and retrieving the right information requires effective graph processing, while retrieving irrelevant information may confuse the LLM during its KGQA reasoning.

Existing retrieval methods that rely on LLMs to retrieve relevant KG information (LLM-based retrieval) underperform on multi-hop KGQA as they cannot handle complex graph information or they need the internal knowledge of very large LMs, e.g., GPT-4, to compensate for missing information during KG retrieval.

GNN-RAG is a method for improving RAG for KGQA. GNN-RAG relies on Graph Neural Networks (GNNs), which are powerful graph representation learners, to handle the complex graph information stored in the KG.

Although GNNs cannot understand natural language the same way LLMs do, GNN-RAG repurposes their graph processing power for retrieval.

  1. A GNN reasons over a dense KG subgraph to retrieve answer candidates for a given question.
  2. Shortest paths in the KG that connect question entities and GNN-based answers are extracted to represent useful KG reasoning paths. The extracted paths are verbalized and given as input for LLM reasoning with RAG.
  3. GNN-RAG can be augmented with LLM-based retrievers to further boost KGQA performance. Experimental results show GNN-RAG’s superiority over competing RAG-based systems for KGQA, outperforming them by up to 15.5 percentage points on complex questions.
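Step 2 above, extracting and verbalizing shortest KG paths between question entities and GNN-scored answers, can be sketched with a plain BFS over directed triplet edges. The function names are illustrative, not from the GNN-RAG codebase.

```python
from collections import deque

def shortest_path(triplets, start, goal):
    # BFS over directed (head, relation, tail) edges; returns the list of
    # edges on a shortest path from start to goal, or None if unreachable.
    adj = {}
    for h, r, t in triplets:
        adj.setdefault(h, []).append((r, t))
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for r, t in adj.get(node, []):
            if t not in seen:
                seen.add(t)
                queue.append((t, path + [(node, r, t)]))
    return None

def verbalize(path):
    # "Jamaica → language_spoken → English" style string for the LLM input.
    return " → ".join([path[0][0]] + [f"{r} → {t}" for _, r, t in path])

path = shortest_path([("Jamaica", "language_spoken", "English")], "Jamaica", "English")
```

Verbalized paths like these, rather than raw subgraphs, are what GNN-RAG feeds to the LLM.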

GNN-RAG repurposes GNNs for KGQA retrieval to enhance the reasoning abilities of LLMs. In the GNN-RAG framework, the GNN acts as a dense subgraph reasoner to extract useful graph information, while the LLM leverages its natural language processing ability for the ultimate KGQA. Moreover, the authors’ retrieval analysis guides the design of a retrieval augmentation (RA) technique to boost GNN-RAG’s performance.

GNN-RAG's Effectiveness & Faithfulness

GNN-RAG achieves state-of-the-art performance on two widely used KGQA benchmarks (WebQSP and CWQ). GNN-RAG retrieves multi-hop information that is necessary for faithful LLM reasoning on complex questions (8.9–15.5% improvement).

Efficiency: GNN-RAG improves vanilla LLMs’ KGQA performance without incurring the additional LLM calls that existing RAG systems for KGQA require. In addition, GNN-RAG outperforms or matches GPT-4 performance with a tuned 7B LLM.

LLMs for KGQA use KG information to perform retrieval-augmented generation (RAG) as follows:

The retrieved subgraph is first converted into natural language so that it can be processed by the LLM. The input given to the LLM contains the KG factual information along with the question and a prompt, in the “Knowledge: ... \n Question: ...” format shown earlier.

Landscape of KGQA methods. Figure 2 presents the landscape of existing KGQA methods with respect to KG retrieval and reasoning. GNN-based methods, such as GraftNet and ReaRev, reason over a dense KG subgraph, leveraging the GNN’s ability to handle complex graph information.

Recent LLM-based methods leverage the LLM’s power for both retrieval and reasoning. ToG [Sun et al., 2024] uses the LLM to retrieve relevant facts hop-by-hop. RoG [Luo et al., 2024] uses the LLM to generate plausible relation paths which are then mapped on the KG to retrieve the relevant information.

LLM-based Retriever. As an example of an LLM-based retriever, consider RoG: given training question-answer pairs, it extracts the shortest paths to the answers starting from the question entities and uses them for fine-tuning the retriever. Based on the extracted paths, an LLM (LLaMA2-Chat-7B) is fine-tuned to generate reasoning paths for a question q:

LLM(prompt, q) ⇒ {r1 → · · · → rt}k,    (2)

where the prompt is “Please generate a valid relation path that can be helpful for answering the following question: {Question}”. Beam-search decoding is used to generate k diverse sets of reasoning paths for better answer coverage, e.g., for the question “Which language do Jamaican people speak?”. The generated paths are mapped on the KG, starting from the question entities, in order to retrieve the intermediate entities for RAG.
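Mapping a generated relation path onto the KG can be sketched as follows: starting from the question entity, follow each relation in the path in turn, keeping the set of entities reached at each hop. The `follow_relation_path` helper is a hypothetical illustration of this mapping step, not RoG's actual implementation.

```python
def follow_relation_path(triplets, start, relation_path):
    # Walk a relation path (e.g. ["language_spoken"]) from the start entity,
    # collecting every entity reachable at each hop.
    frontier = {start}
    for rel in relation_path:
        frontier = {t for (h, r, t) in triplets if h in frontier and r == rel}
    return frontier

triplets = [("Jamaica", "language_spoken", "English")]
answers = follow_relation_path(triplets, "Jamaica", ["language_spoken"])
```

With k beam-searched paths, this mapping is simply repeated per path and the retrieved entities are pooled for RAG.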

GNN-RAG: The paper introduces GNN-RAG, a novel method for combining the language understanding abilities of LLMs with the reasoning abilities of GNNs in a retrieval-augmented generation (RAG) style.

First, the GNN reasons over a dense KG subgraph to retrieve answer candidates for a given question. Second, the shortest paths in the KG that connect question entities and GNN-based answers are extracted to represent useful KG reasoning paths.

The extracted paths are verbalized and given as input for LLM reasoning with RAG. In the GNN-RAG framework, the GNN acts as a dense subgraph reasoner to extract useful graph information, while the LLM leverages its natural language processing ability for the ultimate KGQA.

Reference Reading Links -

Paper - https://arxiv.org/abs/2405.20139

Github - https://github.com/cmavro/GNN-RAG

For more information on AI Research Papers you can visit my Github Profile -

https://github.com/aditikhare007/AI_Research_Junction_Aditi_Khare

For receiving the latest updates on advancements in AI research - Gen-AI, Quantum AI & Computer Vision - you can subscribe to my AI Research Papers Summaries Newsletter using the link below -

https://www.dhirubhai.net/newsletters/7152631955203739649/

Thank you & Happy Reading!
