The EMPWR platform: Data and Knowledge-driven Processes for Knowledge Graph Lifecycle
Amit Sheth
NCR Chair & Professor; Founding Director, AI Institute at the University of South Carolina
Cite as: Hong Yung Yip and Amit Sheth, "The EMPWR Platform: Data and Knowledge-driven Processes for Knowledge Graph Lifecycle," IEEE Internet Computing, Jan/Feb 2024.
The unparalleled volume of data generated has heightened the need for approaches that can consume these data in a scalable and automated fashion. While modern data-driven, deep learning-based systems are cost-efficient and can learn complex patterns, they are black boxes, and their world model is highly dictated by the underlying input data. On the contrary, traditional knowledge-centric, symbolic approaches provide notational efficacy and declarative capabilities that can be used to make implicit data explicit. Knowledge Graphs (KGs), as one such technology, have surfaced as a compelling approach for using structured knowledge representation to integrate knowledge from diverse sources and formats. While tools exist to create KGs, a KG lifecycle involves data- and knowledge-driven processes for continuous maintenance and update of the KG. We present EMPWR, a comprehensive KG development and lifecycle support platform that uses a broad variety of techniques from symbolic and modern data-driven systems. We discuss the system design guiding principles behind EMPWR, its system architecture, and its workflow components. We illustrate some of EMPWR's abilities by describing the process of creating and maintaining a KG for the pharmaceutical domain.
Introduction
With the rapid advancement and widespread use of digital technologies, we are witnessing unprecedented volumes of data being generated, from social media interactions to online transactions, from sensor readings to healthcare records. The unparalleled volume, variety, and velocity of data generated in the current digital era have rendered the manual, rule-based declarative approach to symbolic knowledge acquisition, representation, and reasoning less effective and have thus propelled the need for approaches that can consume (process, analyze, and glean insights from) these data in a scalable and automated fashion. The emergence of modern data-driven systems and the continuous evolution of Artificial Intelligence (AI) applications are clear exemplars of how this abundance of data has transformed the way we live, work, and interact with the world around us, as well as how it supports the efficient functioning of modern society.
These systems based on neural networks and deep learning (e.g., ChatGPT) are cost-efficient to use (albeit expensive to build) and can consume, recognize, and learn complex patterns and relationships in the underlying data on their own, without the need for arduous human labor in knowledge curation and feature engineering. In addition, modern self-supervised systems (e.g., zero-shot learning) can learn from a small pool of data without a large investment in human annotation, which appeals to individuals and organizations with resource constraints. The ease of scaling and deploying current state-of-the-art architectures is the cherry on top. However, they are not silver bullets: their world model – the spatial and temporal representation and understanding of the environment – is highly dictated by the underlying input data. In other words, unsanitized data and non-validated inputs may lead to factually incorrect models, biases, and hallucinations, which can have adverse consequences. In addition, these systems are usually black-box in nature and fall short in explainability and provenance, as their performance and capabilities are determined by tuning underlying weights and parameters that are not readily human-understandable. Other aspects, such as ethics, governance, and safety, are still at the forefront of research. Therefore, we should recognize the merit of traditional symbolic approaches and understand the underlying sources, characteristics, and implications of this deluge of data, models, and systems to harness their potential and navigate the complexities they bring.
Traditional symbolic technologies (e.g., the Semantic Web) rely on various representational and logic-based formalisms to provide a foundation for building knowledge representation and reasoning systems. They provide notational efficacy and declarative capabilities that can be used to make implicit data explicit, enabling high-quality linguistic and situational knowledge, the ability to formally capture the structure and behavior of the objects around us, and the ability to audit the reasoning. These systems are more computationally tractable, and domain scope and constraints can be easily enforced, which, in turn, supports data governance and provenance and provides more explainable output. Despite these advantages, such systems have been sidelined due to the cost of manual labor in knowledge acquisition and curation, their computational complexity and scalability limits, and their brittleness in the face of unrepresented information, compared to the more modern data-driven systems. Nonetheless, as we enter what DARPA describes as the third phase of AI, which combines statistical and symbolic approaches (i.e., neuro-symbolic AI [7]), the role of knowledge is becoming indispensable in making sense of data. We are witnessing an increased adoption of KGs, a form of Semantic Web technology, as a key enabler for data-driven solutions that involve intelligent transformation of data into insights, actionable information, and decisions, as well as for making AI systems more transparent and auditable.
At its core, a KG represents real-world entities as nodes and the types of relationships among entities as edges. It is founded on ontological commitment, where the meanings of entities and relationships that domain experts agree upon are explicitly defined and published. As such, it has surfaced as a compelling approach for imparting definitions, structure, and uniformity over raw data and for integrating data from diverse sources (typically siloed) and formats (unstructured, semi-structured, and structured). KGs are increasingly used to power consumer applications such as search engines (e.g., Google KG [https://bit.ly/google_kg]), social media (e.g., LinkedIn KG [https://bit.ly/linkedingraph]), chatbots, and recommendation systems (e.g., Amazon Product KG [https://bit.ly/amazonproductkg]), as well as healthcare research (e.g., Amazon COVID-19 KG [https://bit.ly/amazoncovid19kg]). However, designing and creating a KG from the ground up requires a substantial upfront investment of time and human labor in knowledge curation. While tools exist to support the semi-automated development of KGs, they are often (singular) domain- and application-driven, with specific use cases, requirements, and purposes, and most are designed to extract knowledge from a specific corpus. We taxonomize existing KG development tools into several categories:
Tools that specialize in NLP
Tools developed for a specific domain and application with target use cases and datasets
Tools developed for any domain
In this article, we advocate an approach that hybridizes multiple techniques from traditional symbolic and modern data-driven systems to design a platform (EMPWR) for the KG lifecycle that encompasses broad-based applications and broad sources of data. We first review current end-to-end knowledge lifecycle design practice and the associated challenges. We then discuss the system design guiding principles behind EMPWR, its system architecture, and its workflow components. Finally, we illustrate the process of creating a KG with EMPWR, drawing on our experience in pharmaceutical KG construction (in partnership with our collaborator WIPRO): a KG with over 6M triples, 1.5M nodes, and 3,000 relation types, interconnecting knowledge from broad-based open and domain-specific knowledge sources.
While deep learning-based systems are cost-efficient and can learn complex patterns without explicit knowledge curation, they are not silver bullets. The underlying input data highly dictate their world model. On the contrary, knowledge graphs impart definitions, structure, and uniformity.
The KG Lifecycle
The standard practice of an end-to-end KG lifecycle consists of different phases: (i) design and requirements scoping, (ii) data ingestion, (iii) data enrichment, (iv) storage, (v) consumption, and (vi) maintenance. We describe each phase and its associated challenges next.
The design phase entails scoping the target use case and application's requirements, followed by creating the domain ontology. This is currently one of the more resource-consuming phases because domain experts and communities from different disciplines must converge on the design and development of appropriate schemas and representation formats and assess which data sources fit the intended use case. With the sheer availability and ease of accessibility of data today, we believe a bottom-up approach, where the domain ontology is inferred from the underlying data and then reviewed and edited by domain experts, can drastically reduce the initial upfront commitment and bootstrap the design process. As such, we aim to streamline, scale, and automate such an approach with EMPWR.
The data ingestion phase involves collecting, extracting, and transforming data from various sources (e.g., databases, APIs, web scraping, and user entry) and heterogeneous formats (e.g., unstructured, semi-structured, structured) into a unified format that can be used to build a KG. This is a fundamental and critical step, as it lays the foundation for organizing and connecting data from multiple siloed sources for information discovery and analysis. It requires careful consideration in creating and structuring data pipelines and workflows, mapping legacy data to the established ontology, and standardizing the representation. The challenges lie in implementing measures to ensure consistent and reliable transformation; data governance, storage, and access rights (e.g., different groups of users with various levels of access privilege); data validation (e.g., ensuring data sanity); lineage and provenance (i.e., the ability to pinpoint the data origin); and efficient scalability to accommodate the volume and diversity of data. In later sections, we describe how we consider the above in designing EMPWR.
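To make the notion of a unified format concrete, the following is a minimal sketch (in Python) of normalizing a tabular record and a semi-structured API record into one provenance-tagged triple representation. The field names, relation names, and sources are illustrative assumptions, not EMPWR's actual schema.

```python
# Minimal sketch: unify heterogeneous records into provenance-tagged triples.
# Field names, relations, and sources below are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    predicate: str
    obj: str
    source: str  # provenance: where the statement originated

def from_table_row(row: dict, source: str) -> list:
    """Map a structured (CSV/database) row onto triples."""
    drug = row["drug_name"]
    return [
        Triple(drug, "treats", row["indication"], source),
        Triple(drug, "has_dosage_form", row["dosage_form"], source),
    ]

def from_api_record(record: dict, source: str) -> list:
    """Map a semi-structured JSON record onto the same representation."""
    return [Triple(record["subject"], record["predicate"], record["object"], source)]

# Both sources now feed one uniform, provenance-tagged stream of triples.
triples = from_table_row(
    {"drug_name": "paracetamol", "indication": "headache", "dosage_form": "tablet"},
    source="drug_db.csv",
) + from_api_record(
    {"subject": "paracetamol", "predicate": "relieves", "object": "fever"},
    source="symptom_api",
)
print(triples)
```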
The data enrichment phase involves various processes and methodologies to improve data quality, impart definitions and meanings (e.g., entity class and named relation) on data, expand the initial vocabulary scope with external authoritative knowledge bases, and enforce any constraints per the domain or application specifications. This is the most important step, as the raw data are contextualized and abstracted into information, translating into potentially valuable and actionable insights and enabling new knowledge discovery. It includes the use of an assorted array of Natural Language Processing (NLP) techniques, such as named entity recognition, relationship extraction, entity mapping and disambiguation, and relationship linking, as well as inferencing and reasoning approaches to derive new information, which may be used to expand the initial ontology from the design phase.
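As a small illustration of the named entity recognition step in this phase, the sketch below uses spaCy's off-the-shelf English pipeline (assuming spaCy and its en_core_web_sm model are installed); a production pipeline would rely on domain-tuned, e.g., biomedical, models and on the other techniques listed above.

```python
# Named entity recognition sketch with spaCy's general-purpose English model.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Paracetamol was approved by the FDA and is marketed as Tylenol in the US.")

for ent in doc.ents:
    # ent.label_ is a generic class (ORG, GPE, PRODUCT, ...); a domain-tuned
    # model would instead yield classes such as Drug or Symptom.
    print(ent.text, ent.label_)
```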
The storage phase covers the repository for managing and hosting the KG, typically a graph database or triple store, for consumption. The consumption phase typically accommodates the design of user interfaces (e.g., a frontend data portal) and software interfaces (e.g., APIs) to serve users and developers, respectively, for KG access, management, and query. In addition, it should support exporting the KG to various popular formats (e.g., JSON-LD, RDF/XML, TTL) to enable import into and extension by other graph databases. The maintenance phase involves the ongoing effort to maintain the ever-growing schemas and KGs through versioning and suitable provenance measures. The challenge lies in scalability and extensibility, i.e., keeping up with the dynamicity and temporal updates of real-world events and the evolving graph structure.
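As an illustration of this export capability, the following sketch uses rdflib (version 6 or later, where JSON-LD serialization is built in) to serialize the same toy graph to Turtle and JSON-LD; the namespace and triple are illustrative placeholders rather than our actual schema.

```python
# Export sketch with rdflib (>= 6, which bundles the JSON-LD serializer).
# The namespace and triple below are illustrative placeholders.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/kg/")
g = Graph()
g.bind("ex", EX)
g.add((EX.paracetamol, EX.relieves, EX.headache))

print(g.serialize(format="turtle"))   # TTL, e.g., for triple stores such as Virtuoso
print(g.serialize(format="json-ld"))  # JSON-LD, e.g., for web clients
```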
Developing a KG is not a one-off process. It is a lifecycle consisting of different phases that requires continuous maintenance, which necessitates a scalable platform providing a wide range of capabilities, such as data extraction and ingestion from multiple sources and continuous KG updates with schedulers, with the capacity to scale in both computation and storage to ensure timely updates that reflect real-world knowledge changes.
The EMPWR Platform for KG Lifecycle
We bring forth an approach and a set of system design guiding principles and critical elements outlined in the Open Knowledge Network (OKN) report [https://bit.ly/oknreport] to design a platform (which we named EMPWR) for KG creation. The guiding principles include governance, ethics, provenance, scalability and interoperability, sustainability, access rights, and data validation.
(A) Data Ingestion: Users may upload a list of unstructured datasets (e.g., MashQA, MEDIQA) via the frontend portal. The ingestion endpoints are extended to incorporate existing KGs stored in popular graph databases (Neo4j, Stardog, Amazon Neptune) to support KG curation as well as workflows (B) and (C).
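As a hedged sketch of ingesting an existing KG from one of these databases, the snippet below reads edges from a Neo4j instance with the official Python driver and flattens them into triples; the connection details, the "name" property, and the blanket MATCH pattern are assumptions, and a real pipeline would map each node label and property onto the target ontology.

```python
# Sketch: pull (subject, predicate, object) tuples out of an existing Neo4j graph.
# Assumes: pip install neo4j; connection details and the "name" property are
# illustrative assumptions about the source graph.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    result = session.run(
        "MATCH (s)-[r]->(o) "
        "RETURN s.name AS subject, type(r) AS predicate, o.name AS object "
        "LIMIT 1000"
    )
    triples = [(rec["subject"], rec["predicate"], rec["object"]) for rec in result]

driver.close()
print(triples[:5])
```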
(B) Knowledge Extraction: The knowledge extraction module consists of state-of-the-art information extraction toolkits (NLTK, AMR, spaCy, Stanford CoreNLP) as well as large language models that perform a series of natural language tasks on the underlying datasets, drawing upon our large body of work in entity extraction, compound entity extraction, implicit entity extraction, entity linking, relationship extraction, semantic path computation and ranking, and federated learning. The module is extended with the Knowledge Graph Toolkit (KGTK) [https://github.com/usc-isi-i2/kgtk] from USC ISI for KG transformation and manipulation functionalities.
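To give a flavor of one extraction step, the sketch below performs a deliberately naive subject-verb-object pass over spaCy's dependency parse; the platform combines several toolkits, learned models, and our prior extraction work rather than this single heuristic.

```python
# Naive subject-verb-object extraction over a dependency parse (illustrative only).
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Paracetamol relieves headaches and reduces fever.")

triples = []
for token in doc:
    if token.pos_ == "VERB":
        subjects = [c for c in token.children if c.dep_ == "nsubj"]
        objects = [c for c in token.children if c.dep_ in ("dobj", "obj")]
        triples += [(s.text, token.lemma_, o.text) for s in subjects for o in objects]

print(triples)  # e.g., [('Paracetamol', 'relieve', 'headaches'), ...]
```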
(C) Knowledge Enrichment: The extracted entities and relations are then augmented and enriched with high-quality knowledge from external knowledge stores (DBpedia, Mayo Clinic, and DrugBank) through our web-crawling engines and publicly available APIs. For example, an entity paracetamol extracted in (B) is queried through DBpedia Spotlight to retrieve information such as alternative names (e.g., Tylenol, Panadol), synonyms (e.g., N-acetyl-para-aminophenol), and existing URIs linked to other knowledge sources (e.g., dbo:pubchem-ID:1983) that are otherwise not available in the underlying datasets ingested in (A).
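The lookup itself can be as simple as a call to the public DBpedia Spotlight REST endpoint, sketched below; the endpoint URL, parameters, and response fields reflect the service's public documentation at the time of writing and may change.

```python
# Entity enrichment via the public DBpedia Spotlight annotation endpoint.
# Endpoint, parameters, and response fields are per the public docs and may change.
import requests

def spotlight_annotate(text: str, confidence: float = 0.5) -> list:
    resp = requests.get(
        "https://api.dbpedia-spotlight.org/en/annotate",
        params={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    # Each resource carries a DBpedia URI that can be dereferenced for
    # alternative names, synonyms, and links to other knowledge sources.
    return resp.json().get("Resources", [])

for r in spotlight_annotate("Paracetamol relieves headaches."):
    print(r["@URI"], r["@surfaceForm"])
```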
(D) Knowledge Alignment: The entities and relations are disambiguated, deduplicated, and mapped using concept similarity and alignment techniques (supervised: synonym and synset matching; unsupervised: fuzzy matching; neural networks; large language models; reinforcement learning; unsupervised learning), drawing on our history of work in ontology alignment (winner of the SWSA/ISWC ten-year award). Users can validate the aligned KG and connect it with community-curated KGs (e.g., WikiData, RxNorm, UMLS, GeoNames, LNEx, Empathi, KnowWhereGraph) using similar approaches. As we align knowledge from various ontologies and sources, we also support modeling of provenance.
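As a toy illustration of the unsupervised fuzzy-matching component, the following sketch flags likely duplicate entity labels using only the Python standard library; the similarity threshold is an illustrative assumption, and the platform layers synonym/synset matching, embedding-based methods, and expert validation on top of this.

```python
# Unsupervised fuzzy matching for entity deduplication (toy example, stdlib only).
# The 0.75 threshold is an illustrative assumption.
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

entities = ["Paracetamol", "paracetamol 500mg", "Acetaminophen", "Ibuprofen"]

candidates = [
    (a, b, round(similarity(a, b), 2))
    for a, b in combinations(entities, 2)
    if similarity(a, b) >= 0.75
]
# Pairs above the threshold are flagged as potential duplicates for review.
print(candidates)
```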
(E) Schema Inference: We generate schemas from the underlying triple instances. For example, the schema (drug, relieves, symptom) can be inferred from the triple instance (paracetamol, relieves, headaches) by means of entity tagging. The inferred schemas are then subject to user validation, and invalid schemas and their underlying triple instances are pruned. The KG construction workflow metadata, from knowledge extraction (the best-performing NLP models with their configurations) to knowledge enrichment, is captured and logged by the HPE CMF framework for provenance and lineage.
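The lifting from instances to schema can be illustrated in a few lines of Python: instance triples are mapped onto class-level schema triples using an entity-type lookup (which in EMPWR comes from the earlier extraction and enrichment stages) and counted so that weakly supported patterns can be routed to experts for validation. The type tags and triples below are illustrative.

```python
# Schema inference sketch: lift instance triples to class-level schema triples
# via an entity-type lookup, then count support for each candidate schema.
from collections import Counter

entity_types = {              # illustrative type tags; in practice these come
    "paracetamol": "drug",    # from entity tagging in earlier pipeline stages
    "ibuprofen": "drug",
    "headaches": "symptom",
    "fever": "symptom",
}

instance_triples = [
    ("paracetamol", "relieves", "headaches"),
    ("ibuprofen", "relieves", "headaches"),
    ("paracetamol", "relieves", "fever"),
]

schema_support = Counter(
    (entity_types[s], p, entity_types[o]) for s, p, o in instance_triples
)
print(schema_support)  # Counter({('drug', 'relieves', 'symptom'): 3})
```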
(F) Knowledge Storage and Query: The constructed KG is then stored in the Intelligent Data Store with access rights control for semantic querying and visualization. While we conform to the W3C RDF/OWL standard as the default format for triple representation, the KG can be exported to various supported formats such as JSON-LD, XML, and TTL to support downstream use cases on different open-source and commercial graph databases such as Virtuoso, Neo4j, etc. The graph can be queried via the data portal either (a) in natural language or (b) through a SPARQL (RDF query language) endpoint or similar capability.
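A SPARQL query over such an export can be run locally with rdflib, as in the hedged sketch below; the namespace and data are illustrative, and in EMPWR the same kind of query would be issued against the data portal's SPARQL endpoint.

```python
# Local SPARQL query sketch with rdflib; namespace and triples are illustrative.
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/kg/")
g = Graph()
g.add((EX.paracetamol, EX.relieves, EX.headache))
g.add((EX.ibuprofen, EX.relieves, EX.headache))

query = """
PREFIX ex: <http://example.org/kg/>
SELECT ?drug WHERE { ?drug ex:relieves ex:headache . }
"""
for row in g.query(query):
    print(row.drug)  # prints the URIs of drugs that relieve headache
```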
(G) Quality Assurance, Consistency Checking, and Evaluation: Our evaluation encompasses periodic and/or longitudinal analyses and experiments to continually and iteratively assess, evaluate, and improve the platform.
Next, we describe the CMF and IDS frameworks from HPE.
Common Metadata Framework (CMF)
CMF is an open-source framework developed by HPE to record, query, and visualize the lineage and provenance of input/output artifacts (datasets), parameters, and metrics used in computational workflows, in a Git-like fashion. Integrating CMF involves instrumenting EMPWR's knowledge extraction and enrichment workflow pipelines with CMF's logging API. CMF is built on ML Metadata and Data Version Control (DVC) and takes a pipeline-centric approach while incorporating features from experiment-centric frameworks. It automatically records pipeline metadata from the different stages in a pipeline and offers fine-grained experiment tracking. The framework adopts a data-first approach: all recorded artifacts are versioned and identified by their content hash. It enables metadata tracking for each workflow variant for reproducibility, audit trail, and traceability. CMF's metadata is stored in its relational database. CMF supports importing and exporting metadata in external formats such as OpenLineage to prevent metadata from being siloed in a particular cloud or datastore and to facilitate sharing via open standards. CMF also offers querying APIs and a visualization engine for the captured metadata (lineage graphs to visualize the KG construction process). Any site (including a cloud resource) can be set up as a CMF server to facilitate the hosting, sharing, and discovery of workflow metadata (a metadata hub).
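For readers who want a feel for this instrumentation, the sketch below follows the usage pattern shown in the open-source cmflib project (https://github.com/HewlettPackard/cmf); the call names and arguments mirror its public examples at the time of writing and should be treated as assumptions rather than a definitive EMPWR integration, and the pipeline, stage, and artifact names are purely illustrative.

```python
# Hedged sketch of instrumenting a pipeline stage with cmflib (HPE CMF).
# Call names follow the project's public examples and may differ across versions;
# pipeline, stage, and artifact names here are illustrative.
from cmflib import cmf

metawriter = cmf.Cmf(filename="mlmd", pipeline_name="empwr-kg-construction")
_ = metawriter.create_context(pipeline_stage="knowledge_extraction")
_ = metawriter.create_execution(
    execution_type="entity_extraction",
    custom_properties={"model": "en_core_web_sm"},
)
_ = metawriter.log_dataset("data/raw/mashqa.json", "input")          # upstream artifact
_ = metawriter.log_dataset("data/triples/extracted.tsv", "output")   # produced artifact
```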
Intelligent Data Storage (IDS)
IDS is integrated into EMPWR as both a back-end server and a query endpoint for KGs that regulates access rights, data governance, and ethical standards. IDS is an in-memory triple datastore that (i) hosts and serves data in different shapes (documents, graphs, feature vectors, vector embeddings), (ii) allows pattern search on the hosted data with AI models, (iii) supports a query language (e.g., SPARQL) to orchestrate database retrieval (exact search), pattern search using machine learning (approximate search), and user-defined functions (domain-specific search), (iv) offers easy-to-use programming interfaces for database operations, (v) runs on differentiated server architectures, and (vi) is the fastest massively parallel processing database for unstructured data that scales out (query latencies in seconds instead of minutes or hours). The core technology behind IDS is described in [8], and recent success stories hosting drug discovery KGs are documented in [9].
Developing a large-scale KG is a continuous process and requires a platform that provides a wide range of capabilities. EMPWR supports data extraction and ingestion from multiple sources; continuous KG updates with schedulers; integration with HPE CMF for metadata logging and provenance; and the capacity to scale in computation and storage with IDS to ensure timely updates to reflect real-world knowledge changes.
Conclusion
In this article, we propose a hybrid framework combining multifaceted approaches from traditional symbolic and modern data-driven systems, exemplified by the EMPWR platform for knowledge graph creation. We discuss the advantages and limitations of both families of systems and taxonomize the existing tools for creating KGs. We then review KG lifecycle design practice and propose a platform, EMPWR, which supports broad-based applications and broad sources of data for the large-scale development and maintenance of KGs.
Acknowledgment
This effort was supported in part by WIPRO and by National Science Foundation grants #2133842, "EAGER: Advancing Neuro-symbolic AI with Deep Knowledge-infused Learning," and #2335967, "EAGER: Knowledge-guided neurosymbolic AI with guardrails for safe virtual health assistants." The opinions are those of the authors and not the sponsors. We also acknowledge AIISC's collaboration with Aalap Tripathy and Sreenivas Rangan Sukumar of HPE on CMF and thank Revathy Venkataramanan for the feedback.
References
[1] Dessì, D., Osborne, F., Reforgiato Recupero, D., Buscaldi, D., Motta, E., & Sack, H. (2020). AI-KG: an automatically generated knowledge graph of artificial intelligence. In The Semantic Web–ISWC 2020: 19th International Semantic Web Conference, Athens, Greece, November 2–6, 2020, Proceedings, Part II 19 (pp. 127-143). Springer International Publishing.
[2] Kertkeidkachorn, N., & Ichise, R. (2018). An automatic knowledge graph creation framework from natural language text. IEICE Transactions on Information and Systems, 101(1), 90-98.
[3] Rotmensch, M., Halpern, Y., Tlimat, A., Horng, S., & Sontag, D. (2017). Learning a health knowledge graph from electronic medical records. Scientific reports, 7(1), 5994.
[4] Xu, J., Kim, S., Song, M., Jeong, M., Kim, D., Kang, J., ... & Ding, Y. (2020). Building a PubMed knowledge graph. Scientific data, 7(1), 205.
[5] Rossanez, A., Dos Reis, J. C., Torres, R. D. S., & de Ribaupierre, H. (2020). KGen: a knowledge graph generator from biomedical scientific literature. BMC medical informatics and decision making, 20(4), 1-24.
[6] Lu, R., Fei, C., Wang, C., Gao, S., Qiu, H., Zhang, S., & Cao, C. (2020). HAPE: A programmable big knowledge graph platform. Information Sciences, 509, 87-103.
[7] Sheth, A., Roy, K., & Gaur, M. (2023). Neurosymbolic Artificial Intelligence (Why, What, and How). IEEE Intelligent Systems, 38(3), 56-62.
[8] Rickett, C. D., Maschhoff, K. J., & Sukumar, S. R. (2020, December). Massively parallel processing database for sequence and graph data structures applied to rapid-response drug repurposing. In 2020 IEEE International Conference on Big Data (Big Data) (pp. 2967-2976). IEEE.
[9] Sukumar, S. R., Balma, J. A., Rickett, C. D., Maschhoff, K. J., Landman, J., Yates, C. R., ... & Khan, I. A. (2021, October). The convergence of HPC, AI, and Big Data in rapid-response to the COVID-19 pandemic. In Smoky Mountains Computational Sciences and Engineering Conference (pp. 157-172). Cham: Springer International Publishing.
Hong Yung (Joey) Yip is a Ph.D. student at the AI Institute at the University of South Carolina (AIISC). His research interests include the semantic web, natural language understanding, and generative AI. Contact him at [email protected].
Amit Sheth is the founding director of the AIISC and an NCR Chair and Professor of Computer Science & Engineering at the University of South Carolina. https://aiisc.ai/amit
This article is part of the IEEE-IC department on Knowledge Graphs. Many of the previous articles can be found here.