The evolution of LLMs within the Enterprise will be different from that outside the Enterprise.
Introduction
Last week, I spoke at the European Economic and Social Committee on behalf of the European Internet Forum.
In my talk, for an audience of policy makers, I mentioned that the evolution of LLMs is likely to be complex and different from what we see today.
Here is the broad flow of thought.
Consider LLMs in the Enterprise
The implementation of LLMs within the Enterprise could be very disruptive - and different from that outside the Enterprise. Specifically, within the enterprise, LLMs could be used with knowledge graphs (KGs) for the reasons we discuss below.
Outside the enterprise, LLMs will grow in terms of number of parameters, i.e. size. To create these LLMs, you need large, well-funded companies. OpenAI leads this pack.
However, inside the enterprise, the same LLMs may be implemented differently.
Within the Enterprise, knowledge graphs help to create a semantic layer for the Enterprise that complements (and overcomes the limitations of) LLMs.
Today, knowledge graphs in the enterprise solve two big problems of LLMs:
a) the lack of explainability, and
b) the potential for hallucination - a minimal sketch of how grounding addresses both follows below.
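To make this concrete, here is a minimal sketch in pure Python. The knowledge graph is represented as (subject, predicate, object) triples; the company facts are invented and the call_llm endpoint mentioned in the comments is a hypothetical placeholder. The idea is that the LLM is asked to answer only from facts retrieved from the graph and to cite them, which supports explainability and reduces hallucination.

```python
# Minimal sketch, assuming a plain-Python knowledge graph of
# (subject, predicate, object) triples and a hypothetical call_llm endpoint:
# the LLM is restricted to facts retrieved from the graph and asked to cite them.

KG_TRIPLES = [
    ("AcmeCorp", "headquartered_in", "Berlin"),
    ("AcmeCorp", "founded_in", "2004"),
    ("AcmeCorp", "acquired", "WidgetWorks"),
]

def facts_about(entity: str):
    """Return every triple that mentions the entity."""
    return [t for t in KG_TRIPLES if entity in (t[0], t[2])]

def grounded_prompt(question: str, entity: str) -> str:
    """Build a prompt that restricts the LLM to the retrieved facts."""
    facts = "\n".join(f"- {s} {p} {o}" for s, p, o in facts_about(entity))
    return (
        "Answer the question using ONLY the facts below and cite the facts used.\n"
        f"Facts:\n{facts}\n\n"
        f"Question: {question}"
    )

# In a real system the prompt would be sent via call_llm(grounded_prompt(...)).
print(grounded_prompt("Where is AcmeCorp based?", "AcmeCorp"))
```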
Over time, knowledge graphs could help to create a reasoning engine for AI within the Enterprise.
Since last year, I have been working with LLMs and KGs. It's nice to see that Gartner also has knowledge graphs at the center of their impact radar this year.
The Enterprise Semantic Layer
In an Enterprise, the semantic layer could have a number of components, such as metadata, taxonomy and information architecture, a business glossary, an ontology and, of course, knowledge graphs. Knowledge graphs represent factual data about real-world entities in the form of networked nodes and edges. When this symbolic knowledge is linked to LLM data, we are able to ground the LLM, providing explainability, reliability and, in future, reasoning.
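As an illustration of the "networked nodes and edges" idea, here is a small sketch using the networkx library; the entities, relations and glossary text are hypothetical, and any property-graph database could play the same role. Semantic-layer information (glossary definitions, ontology classes) is attached to the nodes as attributes.

```python
# Sketch of a knowledge graph as nodes and edges, with semantic-layer metadata
# (business glossary text, ontology classes) carried as attributes.
# Entity and relation names are illustrative only.
import networkx as nx

G = nx.DiGraph()

# Nodes are real-world entities; attributes hold glossary / ontology information.
G.add_node("Customer", ontology_class="Party",
           glossary="A person or organisation that purchases goods or services.")
G.add_node("Order", ontology_class="BusinessEvent",
           glossary="A confirmed request by a customer to purchase products.")

# Edges are typed relationships between entities.
G.add_edge("Order", "Customer", relation="placed_by")

# The structured facts can be serialised as grounding context for an LLM prompt.
for source, target, data in G.edges(data=True):
    print(f"{source} --{data['relation']}--> {target}")
```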
LLMs, and deep learning models in general, are excellent at learning patterns from large amounts of data. However, they are black boxes. Also, they need vast amounts of data to learn and might struggle with tasks that humans find easy. Knowledge graphs - or, more broadly, neurosymbolic AI systems - provide a set of constraints / context that effectively narrows down the search space that the neural network needs to consider. This not only improves efficiency but can also enhance the system's ability to generalise from fewer examples, as it leverages structured human knowledge to guide learning.
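One simple way to read "narrowing the search space" is to treat the knowledge graph as a validator for LLM output, as in the sketch below (pure Python; the supplier names and the idea of sampling several candidate answers are assumptions for illustration, not a specific product's behaviour).

```python
# Sketch of symbolic constraints narrowing the answer space: candidate answers
# from the LLM are only accepted if they match entities known to the knowledge graph.
# VALID_SUPPLIERS and the candidate list are illustrative; in practice the candidates
# would come from several sampled LLM completions.

VALID_SUPPLIERS = {"WidgetWorks", "Acme Components", "Nordic Metals"}  # drawn from the KG

def constrained_answer(candidates):
    """Return the first candidate that satisfies the symbolic constraint, else None."""
    for candidate in candidates:
        if candidate in VALID_SUPPLIERS:  # constraint supplied by structured knowledge
            return candidate
    return None  # better to abstain than to hallucinate an unknown supplier

print(constrained_answer(["GizmoCo", "WidgetWorks"]))  # -> WidgetWorks
```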
Tomaz Bratanic provides one such implementation in Enhancing Interaction between Language Models and Graph Databases via a Semantic Layer, where parameters are extracted from the user input and a template, chosen based on user intent, provides the LLM with context.
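The sketch below illustrates the general pattern only - it is not Tomaz Bratanic's actual implementation, and the intents, templates and extract_entity helper are hypothetical. A parameter is pulled from the user input and an intent-specific template supplies knowledge-graph context to the LLM.

```python
# Simplified illustration of the pattern: extract a parameter from the user input,
# pick a prompt template by user intent, and fill it with knowledge-graph context.
# Intents, templates and extract_entity are hypothetical examples.
import re

TEMPLATES = {
    "movie_info": "Using these graph facts about {entity}:\n{facts}\nAnswer: {question}",
    "recommendation": "Given what the graph knows about {entity}:\n{facts}\nRecommend similar titles.",
}

def extract_entity(question: str) -> str:
    """Very naive parameter extraction: grab a quoted title from the question."""
    match = re.search(r'"([^"]+)"', question)
    return match.group(1) if match else ""

def build_prompt(question: str, intent: str, kg_facts: str) -> str:
    """Fill the intent-specific template with the extracted parameter and KG facts."""
    entity = extract_entity(question)
    return TEMPLATES[intent].format(entity=entity, facts=kg_facts, question=question)

print(build_prompt('Who directed "Inception"?', "movie_info",
                   "- Inception directed_by Christopher Nolan"))
```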
In my teaching at the University of Oxford, we are also exploring other approaches, such as small LLMs like Phi and Orca, and chaining GPTs to create more complex software.
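For the chaining idea, a minimal sketch follows; call_gpt is a placeholder stub, not a real API, and the three steps are just one illustrative pipeline. The point is that each step's output becomes the next step's input, which is how more complex software can be composed from simple LLM calls.

```python
# Minimal sketch of chaining GPT calls: the output of one prompt step feeds the next.
# call_gpt is a hypothetical stand-in for any GPT API call.

def call_gpt(prompt: str) -> str:
    """Placeholder for a real GPT API call."""
    return f"<response to: {prompt[:40]}...>"

def chained_pipeline(document: str) -> str:
    summary = call_gpt(f"Summarise this document:\n{document}")                    # step 1
    entities = call_gpt(f"List the key entities in this summary:\n{summary}")      # step 2
    return call_gpt(f"Draft KG triples (subject, relation, object) for:\n{entities}")  # step 3

print(chained_pipeline("AcmeCorp acquired WidgetWorks in 2021 to expand in Berlin."))
```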
Many thanks to Louise Grabo for hosting the event, and to the European Economic and Social Committee and the European Internet Forum for inviting me. My talk was 45 mins and the Q&A was almost an hour! I will share more about the talk, including the #AIact discussions.
You can meet us at the Oxford AI Summit. We also announced two of our well-known courses: the low-code AI course at the University of Oxford (open to non-developers) and the digital twins for AI course,
with Eusebiu Croitoru and Maria Rosa Gibellini. Also thanks to Anthony Alcaraz, David Stevens and Philip Rathle for their ongoing insights into our work at the #universityofoxford on #knowledgegraphs.
Director Data Architecture
Thanks for sharing this nice material Ajit Jaokar and for preparing interesting educational paths https://conted.ox.ac.uk/courses/artificial-intelligence-generative-ai-cloud-and-mlops-online !
GenAI | Quantum | Startup Advisor | TEDx Speaker | Author | Google Developer Expert for GenAI | AWS Community Builder for #data
Insightful
Well said Ajit Jaokar! I am likewise finding it increasingly clear that there are, broadly, two classes of use cases as you describe. At one end lie the consumer-grade LLMs that we have become familiar with. Here, black-box answers with "good enough" accuracy are acceptable. At the other are enterprise use cases. Here, a wrong answer can have impacts ranging from brand & reputational to health & safety to safety & compliance. And even a right answer might require explaining the basis for the decision: either to the person ultimately responsible for the AI decision, or to a government regulator. The wide & high-value class of enterprise problems that fit the above description motivates the case for Knowledge Graphs, and the reason Gartner places them at the center of the bullseye in their 2024 Impact Radar for GenAI.
Communications and Marketing Leader | Board Director | Technology Storyteller
"Knowledge graphs help to create a semantic layer for the Enterprise that complements (and overcomes the limitations of) LLMs" - well said Ajit Jaokar, and love the Tomaz Bratanic example, your work with Neo4j's Philip Rathle and David Stevens, and the broader community you're curating on #genai with the University of Oxford.
ERP Cloud/ SaaS Solution Architect / Consultant
Great synopsis about the evolution of LLMs within the Enterprise. This is a real struggle for enterprises and needs solutions soon and fast.