Chicken and the Egg: it is more impactful to think of knowledge graphs and causal graphs supporting LLMs than vice versa (Part One)
Background
This is a three-part blog that raises some unconventional viewpoints based on my experience.
I hope that by sharing these, I can learn and evolve my thinking.
The three parts cover this overview, the reasoning use case, and the role of ontologies.
I am overall optimistic about knowledge graphs (KGs) and causal graphs (CGs), but I have been rethinking my approach and resetting my expectations. In a nutshell: while KGs and CGs are important, in light of the progress made by LLMs they are no longer the central theme. Instead, they should be seen as assisting LLMs (especially for enterprises) rather than as standalone solutions.
That’s what I mean by ‘chicken and egg’.
Considering the progress towards reasoning made by both OpenAI and Anthropic (Claude), and leaving aside the AGI debate, agents and reasoning are centre stage.
At the end of 2024, that is a market reality.
Once you accept that reality, you have to rethink KGs and CGs in that light.
One comment: I consider KGs and CGs together here. A causal graph can be considered a special case of a knowledge graph with additional outputs such as counterfactuals, and both raise the same considerations from our perspective.
For the purposes of this discussion, we treat them together.
The dark side of KG and CG
If you read the literature from the KG and CG community, it looks as if these tools solve all the problems of AI.
I am no longer optimistic that KGs/CGs will be the dominant paradigms.
I jokingly said to someone .. I was a fan of KGs and CGs .. until I tried them myself in a real project.
Then I was not :)
There is a dark side to all this optimism from the KG/CG community that not many people talk about:
Discovery is hard
The structure is brittle
The industry typically adopts a SaaS model, which is now under threat from agents
It does not scale
The last part should not surprise us
After all, these are hybrid AI techniques, and they retain the shortcomings of hybrid AI, i.e. lack of scale.
Most importantly, the KG/CG community does not adequately address the reasoning use case (except, to some extent, in research papers); the products are nowhere near the same league as LLMs.
A different perspective
There are many use cases of KG/CG that will continue
GraphRAG reduces hallucination (but note it does not eliminate it)
Large-scale fraud detection uses KGs
Causal methods are used in medicine
All these will be enhanced by LLMs
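To make the GraphRAG point above concrete, here is a minimal, hypothetical sketch (not from the original post) of graph-grounded retrieval: facts live as triples, only the neighbourhood of the query entity is retrieved, and the LLM prompt is constrained to those facts. That constraint is why hallucination is reduced; it is not eliminated, because the model can still misread the facts it is given. The example triples and the `build_prompt` helper are illustrative assumptions.

```python
# Toy knowledge graph stored as (subject, relation, object) triples.
TRIPLES = [
    ("AcmeCorp", "acquired", "BetaLabs"),
    ("BetaLabs", "develops", "fraud-detection models"),
    ("AcmeCorp", "headquartered_in", "London"),
]

def neighbourhood(entity, triples):
    """Return every triple that mentions the entity (1-hop retrieval)."""
    return [t for t in triples if entity in (t[0], t[2])]

def build_prompt(question, entity, triples):
    """Ground the LLM prompt in retrieved graph facts (GraphRAG-style)."""
    facts = "\n".join(f"- {s} {r} {o}" for s, r, o in neighbourhood(entity, triples))
    return (
        "Answer using ONLY these facts:\n"
        f"{facts}\n\n"
        f"Question: {question}"
    )

print(build_prompt("What does AcmeCorp own?", "AcmeCorp", TRIPLES))
```

The resulting prompt would then be sent to whichever LLM the stack uses; the graph acts as the retrieval layer, the LLM as the reasoner.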
Also:
The cloud is (largely) becoming LLM-agnostic, an aggregator and integrator of the best possible services
Agents are dominant
However, KGs/CGs will not become the dominant paradigms, due to the issues above
To take a step back: what is a KG/CG?
A network of nodes, relationships, and weights.
You can actually simulate it entirely within the LLM very easily using what I call a conceptual knowledge graph (the same applies to CGs).
While this approach is suited to many cases, it is ideally suited to the reasoning use case.
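One way to read the ‘conceptual knowledge graph’ idea, as a hedged sketch rather than the author’s implementation: instead of maintaining a graph database, the nodes, relationships, and weights are serialised directly into the LLM’s context, and the model itself performs the traversal and reasoning. The example edges and prompt wording below are illustrative assumptions.

```python
# Conceptual knowledge graph: weighted edges serialised as text so an
# LLM can traverse and reason over them in-context (no graph DB needed).
EDGES = [
    ("smoking", "increases_risk_of", "lung cancer", 0.9),
    ("exercise", "reduces_risk_of", "heart disease", 0.7),
    ("lung cancer", "treated_by", "chemotherapy", 0.8),
]

def graph_as_context(edges):
    """Serialise weighted edges into plain text for the LLM's context window."""
    return "\n".join(
        f"{s} --[{rel}, weight={w}]--> {o}" for s, rel, o, w in edges
    )

def reasoning_prompt(question, edges):
    """The LLM, not a graph engine, performs the multi-hop traversal."""
    return (
        "Knowledge graph:\n"
        f"{graph_as_context(edges)}\n\n"
        f"Reason step by step over the graph to answer: {question}"
    )

print(reasoning_prompt("How might smoking lead to chemotherapy?", EDGES))
```

The design choice here is the point of the post: the graph assists the LLM (as context) rather than the LLM assisting a graph engine.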
In the next two sections
I will explore the reasoning use case and the role of ontologies
Currently, most approaches think of LLMs as supporting KGs/CGs. This is logical but does not scale. The opposite, KGs/CGs assisting LLMs, is far more interesting and not yet fully explored.
I believe that the reasoning use case will be the dominant one, and that requires us to think LLM-first, especially in a world dominated by agentic workflows.
Finally ..
If you want to study #AI with me at the #universityofoxford, please see my course on AI (almost full now, only a few places remaining): https://conted.ox.ac.uk/courses/artificial-intelligence-generative-ai-cloud-and-mlops-online
If you want to be a part of my community, see Creating a community (LinkedIn group) for my blog, where you can ask me questions about AI.
If you want to work with me, we are recruiting