Knowledge Graphs and Supply Chains
Copyright 2024 Kurt Cagle / The Cagle Report
From one of my readers:
Hi Kurt, have you written any article on how a knowledge graph differs from an inventory built using a graph? We are in a constant struggle to differentiate knowledge graphs from inventory apps. Questions like: can inventory systems built using graph technology use the same ontology defined for a KG? Can such an inventory deal with the transactional NFRs? It would be good to know your point of view.
Defining Knowledge Graphs
This is a very good question, and it actually lets me lead into a topic I've addressed before, but perhaps with a different twist this time:
What is a knowledge graph, and how does it differ from a standard ontology?
Let me address the latter part first, because I think it clarifies the former. A knowledge graph is a form of ontology. Like all ontologies, it generally consists of at least three, and perhaps four parts:
- A schema. This identifies the shapes and properties that are of significance to the knowledge graph. You can have an ontology with a minimal schema (essentially RDFS) but the richer the schema, the more powerful the ontology.
- A taxonomy. These are the classes that identify categorization or type information, and are used typically to identify variants that don't necessarily have structural distinctions in the model. For instance, in an address, an AddressType is part of the taxonomy, and this generally indicates the role or purpose of the address without explicitly requiring subclassing to add new properties.
- Entity (Event) Data. Entities are things that have existence, meaning that they generally have both a physical locus and a temporal one. Entities are created, fulfill their purpose, and then cease to exist.
- Data Structures. These are underlying abstractions that identify interfaces (schema) but don't in general have instances, and most frequently these involve blank nodes as their intercessors. Ordered and unordered sets, linked lists, bags, etc., all fall into this category.
These are broad categories, and you'll find that determining whether something is schematic, categorizational, or spatiotemporal in nature can often get tricky at the edges.
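The four parts above can be sketched as plain triples. Here is a minimal, library-free Python sketch, assuming a toy set-of-tuples store; the `ex:` identifiers and the address example are illustrative, not a real vocabulary.

```python
# A minimal triple-store sketch in pure Python (no RDF library assumed);
# the "ex:" names are hypothetical identifiers, not a published vocabulary.
graph = set()

def add(s, p, o):
    graph.add((s, p, o))

# 1. Schema: classes and the properties that attach to them.
add("ex:Address", "rdf:type", "rdfs:Class")
add("ex:addressType", "rdfs:domain", "ex:Address")

# 2. Taxonomy: category terms that distinguish variants without subclassing.
add("ex:HomeAddress", "rdf:type", "ex:AddressType")
add("ex:BillingAddress", "rdf:type", "ex:AddressType")

# 3. Entity data: a concrete address instance carrying a taxonomy term.
add("ex:addr1", "rdf:type", "ex:Address")
add("ex:addr1", "ex:addressType", "ex:HomeAddress")
add("ex:addr1", "ex:street", "123 Main St")

# 4. Data structure: a blank-node-headed list (an ordered collection).
add("ex:addr1", "ex:occupants", "_:b0")
add("_:b0", "rdf:first", "Alice")
add("_:b0", "rdf:rest", "rdf:nil")

print(len(graph))  # → 10
```

Note how the taxonomy term `ex:HomeAddress` rides along as ordinary data; no new subclass of `ex:Address` was needed to express the address's role.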
What's critical to understand is that a knowledge graph is an abstraction, just as RDF is an abstraction. It is independent of the specific implementation. Certain processing tools, such as SPARQL, are not as independent, though I think SPARQL is pretty good for what it does, which is match triple patterns to process them abstractly.
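To make the triple-pattern idea concrete, here is a toy matcher in the spirit of a SPARQL basic graph pattern. The matcher, the `?`-prefixed variable convention, and the sample data are all simplifications for illustration, not how any real SPARQL engine is implemented.

```python
# A toy triple-pattern matcher: terms starting with "?" are variables
# that bind to any value; all data and names here are invented.
def match(graph, pattern):
    """Yield variable bindings for one (s, p, o) pattern."""
    for triple in graph:
        binding = {}
        ok = True
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                if binding.get(pat, val) != val:  # repeated variable must agree
                    ok = False
                    break
                binding[pat] = val
            elif pat != val:  # constant term must match exactly
                ok = False
                break
        if ok:
            yield binding

graph = {
    ("ex:pkg1", "rdf:type", "ex:Package"),
    ("ex:pkg2", "rdf:type", "ex:Package"),
    ("ex:pkg1", "ex:destination", "ex:Seattle"),
}

# Analogous to: SELECT ?p WHERE { ?p rdf:type ex:Package }
packages = sorted(b["?p"] for b in match(graph, ("?p", "rdf:type", "ex:Package")))
print(packages)  # → ['ex:pkg1', 'ex:pkg2']
```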
Beyond the abstraction, I think the key recognition is that a knowledge graph needs to be implemented with spatio-temporal awareness (STA) in mind: it has to capture what exists, where it is, and when.
Supply Chains and Knowledge Graphs
A supply chain is precisely the kind of ontology that requires spatio-temporal awareness (STA). A thing (such as a package) exists. That package is boxed or wrapped and has a barcode attached to it (its identifier). The package is moved from place to place until it reaches its final destination, at which point it is deconstructed and its existence ends.
While this is somewhat simplified, here's an example of a supply chain from the perspective of a single package being moved:
Any semantic mapping should be reproducible as a story. This one runs as follows: Two items, a box of cookies and a tin of cocoa, are placed in a package, then are delivered. That delivery consists of multiple routes, where each route is a leg of the trip from New York to Denver to Seattle, and from there to the house in Seattle. It can even tell me what the current transport vehicle is.
Note that this describes a plan. The actual delivery follows that plan by reporting events in transit, including a delay in Seattle. The events are what make it possible to track the package. This can be modified to include other events, including those where the package was expected to arrive by a certain time but didn't (someone stole the cookies and cocoa en route).
The package here, as indicated, is pretty boring ... most of the interest actually is in the delivery process. However, the package also contains the relevant reported manifest (my cookies and cocoa here).
What isn't shown here is infrastructure. Someone (or some automated system) needs to register the manifest, identify the package, design and initiate the delivery, handle the event notifications, and so forth. A knowledge graph by itself is not going to do that; it is simply a record of events as they happen.
On the other hand, the value of the knowledge graph is that at any given point, if I have the package identifier, then I know everything there is to know about the delivery process. I can use this to draw a map indicating where my goods are (either in transit or at a specific transit point), and can use it with queries to notify when a given expected event doesn't show up after a certain time.
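As a sketch of that "the package identifier tells you everything" property, the following Python walks outgoing edges from a package node to gather the full delivery record, then flags a late event. The graph contents and `ex:` property names are invented for illustration, and ISO-8601 timestamp strings are compared lexically, which happens to order them correctly.

```python
# Hypothetical delivery graph: a package, its delivery, routes, and one event.
graph = {
    ("ex:pkg1", "ex:delivery",   "ex:del1"),
    ("ex:del1", "ex:route",      "ex:NYC-Denver"),
    ("ex:del1", "ex:route",      "ex:Denver-Seattle"),
    ("ex:del1", "ex:event",      "ex:ev1"),
    ("ex:ev1",  "ex:expectedBy", "2024-03-01T12:00"),
    ("ex:ev1",  "ex:reportedAt", "2024-03-01T15:30"),
}

def describe(graph, node, seen=None):
    """Walk outgoing edges from a node, collecting every reachable triple."""
    seen = seen if seen is not None else set()
    for s, p, o in graph:
        if s == node and (s, p, o) not in seen:
            seen.add((s, p, o))
            describe(graph, o, seen)
    return seen

record = describe(graph, "ex:pkg1")
print(len(record))  # → 6 (every triple is reachable from the package id)

# Flag events reported after their expected time.
expected = {s: o for s, p, o in graph if p == "ex:expectedBy"}
reported = {s: o for s, p, o in graph if p == "ex:reportedAt"}
late = [e for e in expected if reported.get(e, "") > expected[e]]
print(late)  # → ['ex:ev1']
```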
As indicated, this is a very simple model, and shows only one package being delivered. However, with it, you can not only track resources, but also aggregate them (how many packages were delivered across a specific route? which trucks broke down most consistently? what were the most commonly shipped items in a given period? and so forth).
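The aggregation questions above reduce to grouping and counting over triples. A minimal sketch, again with invented `ex:` data, answers "how many deliveries crossed a specific route?":

```python
# Counting deliveries per route over hypothetical triples.
from collections import Counter

graph = [
    ("ex:del1", "ex:route", "ex:Denver-Seattle"),
    ("ex:del2", "ex:route", "ex:Denver-Seattle"),
    ("ex:del3", "ex:route", "ex:NYC-Denver"),
]

# Analogous to:
#   SELECT ?route (COUNT(?d) AS ?n) WHERE { ?d ex:route ?route } GROUP BY ?route
per_route = Counter(o for s, p, o in graph if p == "ex:route")
print(per_route["ex:Denver-Seattle"])  # → 2
```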
Can it replace dedicated supply chain systems? I think the cost involved in retrofitting an existing SCS would probably be prohibitive, but I'd also argue that such systems are fragile: adding properties can have downstream consequences that ripple through the rest of the stack.
So, on balance, I'd have to say that if you were looking to develop a new supply chain tracking and inventory system, a knowledge graph would be a solid foundation to build upon.
Hope that helps.
In media res,
Kurt Cagle
Editor, The Cagle Report
If you want to shoot the breeze or have a cup of virtual coffee, I have a Calendly account at https://calendly.com/theCagleReport, and I am available for consulting and full-time work.
Here, our Fletcher has been cornered by the Bowes. They (cowards!) want their arrows to work on demand before their competition slays them with primitive closer combat spears or god forbid face-to-face swords. But in Editor-in-Chief's defense, I would say they are not regarding the issues from an architectural perspective. The criticism is largely implementation, and mostly restricted to the Relational repository. It's the Rag wagging the Dog here. But the most valuable take-away for an architect is their insight in considering the faultlines they've highlighted and to find a way of accommodating them into the lofty abstractions being established here for necessary commutativity between KGs and ERs.
Holistic Management Analysis and Knowledge Representation (Ontology, Taxonomy, Knowledge Graph, Thesaurus/Translator) for Enterprise Architecture, Business Architecture, Zero Trust, Supply Chain, and ML/AI foundation.
Useful. As a former US Army Logistician, I used my Knowledge Representation (#KR) method since 1982 to model and manage product supply chains and value chains, and the collection of all an organization's value/supply chains, which I call the Value Lattice (#VL) of the organization and its environment's stakeholders. Supplier/Value Chain, for single product.
Bible lover. Founder at Zingrevenue. Insurance, coding and AI geek.
One would be shocked at the number of times ALTER TABLE statements are issued in a busy e-commerce environment. Now think of the stack that sits atop these frequent changes, and we understand how darn challenging this can be.
Calculating the cost of infrastructure - from talent availability to cloud resources to data governance - is absolutely key if organisations are dead serious about a project like this. Otherwise it's just frivolous fluffing around. A great way to de-risk is to build prototypes that track changes in production. It would be a great and cheap source of information to various stakeholders without spending major slosh on a live but ultimately doomed project.
Having built inventory systems for e-commerce, I can say this: it will be an expensive undertaking, as the business is continually changing. Not only will underlying assumptions (from data structures to entities and attributes to relationships to cardinalities) change frequently, but keeping track of that and reflecting it in the ontology is a financially challenging commitment. I would only proceed if there were mid- to long-term financial guarantees from the executive team, as well as suitably resourced teams available to keep this parallel structure in sync. And LLMs aren't necessarily the best AI front end: gradient boosted decision forests have been known to provide enormous predictive bang for the buck (cheap as chips, performant, and easily kept up to date), unlike hulking, slow, and unreliable transformer-based models.