Neuro-Symbolic AI: A brief overview and challenges
A "new old buzzword" that may actually provide a solution to many challenges the AI community faces today.
News about AI not being able to reason logically, about models no longer getting better, and so on has filled our feeds over the past few months -- talk of a third AI winter started to emerge just as the northern hemisphere actually enters its winter!
I want to briefly highlight one potential solution that is not new but has started to make a comeback: Neuro-Symbolic AI -- integrating symbolic systems into LLMs to make them more reliable. It combines the neural networks behind LLMs with symbolic reasoning. Symbolic AI is an approach to artificial intelligence that uses explicit symbols and rules to represent knowledge and perform logical reasoning.
Using the attached graphic, I will walk through the example prompt "Does a whale breathe air?" and show how a Neuro-Symbolic AI system always ends up with "Yes, a whale breathes air" and never hallucinates.
First, how does symbolic reasoning actually work?
We have a knowledge base (see attached image) that contains the following pieces of information:
- A whale is a mammal.
- All mammals breathe air.
Based on this information, an inference engine can now deduce "whales breathe air" from this knowledge: all mammals breathe air, a whale is a mammal, ergo: whales breathe air. What is critical to realize is that this is actual logical reasoning, something LLMs can't do.
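To make this concrete, here is a minimal sketch in Python of what such an inference step could look like. The facts, the rule, and the forward-chaining loop are illustrative toy code, not the interface of any particular reasoning engine.

```python
# Toy knowledge base: facts as (subject, predicate, object) triples,
# plus one rule encoding "all mammals breathe air".
facts = {("whale", "is_a", "mammal")}
rules = [
    # If ?x is_a mammal, then ?x breathes air.
    (("?x", "is_a", "mammal"), ("?x", "breathes", "air")),
]

def infer(facts, rules):
    """Forward chaining: keep applying rules until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for subj, pred, obj in list(derived):
                if (pred, obj) == premise[1:]:            # fact matches the rule's premise
                    new_fact = (subj,) + conclusion[1:]   # bind ?x to the fact's subject
                    if new_fact not in derived:
                        derived.add(new_fact)
                        changed = True
    return derived

print(("whale", "breathes", "air") in infer(facts, rules))  # True
```

Because the conclusion follows mechanically from the facts and the rule, the answer is guaranteed by the logic, not by statistical pattern matching.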
How do we integrate symbolic systems into LLMs?
There are many, many ideas on how to marry the two systems. A prominent one is denoted Neuro[Symbolic], which means we have a neural network-based LLM (like a Transformer model) as a base and integrate into its architecture a means to call a symbolic reasoning engine.
In the attached image, we use a model specializing in biology questions. You see the purple-highlighted "Attention Schema" -- this part of the model "counts" how often relevant trigger phrases appear (e.g. "whale", "air", "breathe"), and as soon as enough of them have appeared, it programmatically triggers the model to call for external help from a symbolic reasoning engine.
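As a rough illustration of that trigger idea, the sketch below counts domain phrases in the prompt and routes the query to the symbolic engine once a threshold is reached. The phrase list and threshold are made-up values for this example, not parameters of a real system.

```python
# Illustrative trigger logic: count relevant phrases and decide whether
# to hand the query over to the symbolic reasoning engine.
TRIGGER_PHRASES = {"whale", "air", "breathe", "mammal"}
THRESHOLD = 2  # arbitrary value for the sketch

def should_call_symbolic_engine(prompt: str) -> bool:
    tokens = prompt.lower().replace("?", "").split()
    hits = sum(1 for token in tokens if token in TRIGGER_PHRASES)
    return hits >= THRESHOLD

print(should_call_symbolic_engine("Does a whale breathe air?"))  # True
```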
Part of the prompt is now sent to the symbolic reasoning engine, which infers the answer "TRUE" (i.e. a whale breathes air) and sends it back to the LLM. The rest of the model then formats the answer the way the user expects to see it, i.e. TRUE becomes "Yes, a whale breathes air."
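Tying the pieces together, here is a self-contained end-to-end sketch of that flow. All names (symbolic_engine, route_prompt, the hard-coded query) are placeholders I'm using for illustration; in a real system the LLM itself would extract the structured query and verbalize the result.

```python
# End-to-end sketch of the Neuro[Symbolic] flow described above.
KNOWN_TRUE = {("whale", "breathes", "air")}  # what the inference engine derived earlier

def symbolic_engine(subject: str, predicate: str, obj: str) -> bool:
    # Stand-in for the symbolic reasoning engine's verdict.
    return (subject, predicate, obj) in KNOWN_TRUE

def route_prompt(prompt: str) -> str:
    # Step 1: the "Attention Schema" has decided the symbolic engine is needed.
    # Step 2: a structured query is extracted -- hard-coded here for the running example.
    verdict = symbolic_engine("whale", "breathes", "air")
    # Step 3: the rest of the model turns the boolean back into natural language.
    return "Yes, a whale breathes air." if verdict else "No, a whale does not breathe air."

print(route_prompt("Does a whale breathe air?"))  # "Yes, a whale breathes air."
```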
Some core challenges
While there are many more, for the purpose of this article I'm sticking to three.
CxO Advisor & Senior Technology Executive | Partner, North America AI Lead, Kearney | ex: McKinsey Partner, Tech Startup Founder, IBM Executive | Venture Partner
3 个月Great read.
Software Development Manager | Team Building & Management | Software Development | Python | Machine Learning
4 个月Thanks! Your posts/articles always make me think. This is exactly the kind of content I want to see more of on LinkedIn. Building these knowledge bases and ensuring the data they hold is correct without requiring substantial manual intervention does feel a long way off. What happens when there are conflicting 'truths', or when our knowledge changes over time with new discoveries?