What would a language model say about attending Extended Semantic Web Conference 2024?

Imagine a research conference-goer, immersed in the lively intellectual environment of 'Fabrics of Knowledge: Knowledge Graphs and Generative AI' at the recently concluded Extended Semantic Web Conference (ESWC) 2024, pondering a curious counterfactual:

What if Large Language Models (LLMs) were connected to the output of the ESWC sessions? How would they perceive the experience?

Perhaps they would respond with an insightful reflection:

"Attending 'Fabrics of Knowledge: Knowledge Graphs and Generative AI' at the ESWC'24 felt like embarking on a journey where my mental models underwent a knowledge distillation process (leveraging techniques like RML, SHACL, RDF, SPARQL, OWL, and other ontological/knowledge graph frameworks) that refined my pre-trained understanding of Knowledge Graphs (KGs) and Large Language Models (LLMs). Throughout the week, my embedding vectors (parametric model weights) were updated with new vocabulary, skills, methods, tools, and knowledge gleaned from keynotes, workshops, symposiums, demos, posters, and project networking. By the end of the week, I felt sufficiently fine-tuned, yet much like LLM architectures, I recognized the importance of Retrieval-Augmented Generation (RAG) as a computationally efficient approach for continually updating knowledge bases. By connecting with targeted external and structured sources, RAG allows the system to seamlessly integrate new information and refine existing knowledge.. The conference underscored this transformative potential of Retrieval-Augmented Generation (RAG) for knowledge acquisition, akin to connecting with the external knowledge base represented through the ongoing human pursuit of learning fuelled by expert research, discourse and collaborative refinement"

This contemplation is not, in the remotest sense, an attempt to anthropomorphize LLMs. On the contrary, it is merely a reflection on how learning paths, sequences, and sources can be similar between computational methods and human experience. Just as a language model benefits from structured knowledge and continual learning, human learning thrives on collaboration, exposure to new information, and iterative refinement. Both processes underscore the importance of community and shared knowledge in driving progress and pushing the frontiers of innovation.
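Stretching the metaphor into practice for a moment: below is a minimal, illustrative sketch of what 'connecting an LLM to targeted external and structured sources' can look like over a knowledge graph. The DBpedia endpoint, the query shape, and the generate() stub are all assumptions chosen for illustration, not the method of any system presented at the conference.

```python
# A minimal RAG-over-a-knowledge-graph sketch (illustrative only).
# Assumptions: the public DBpedia SPARQL endpoint and a placeholder
# generate() function standing in for whichever LLM you use.
from SPARQLWrapper import SPARQLWrapper, JSON

def retrieve_facts(entity_uri: str, limit: int = 10) -> list[str]:
    """Fetch English literal facts about an entity to ground the prompt."""
    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery(f"""
        SELECT ?p ?o WHERE {{
            <{entity_uri}> ?p ?o .
            FILTER(isLiteral(?o) && lang(?o) = "en")
        }} LIMIT {limit}
    """)
    sparql.setReturnFormat(JSON)
    rows = sparql.query().convert()["results"]["bindings"]
    return [f'{r["p"]["value"]} -> {r["o"]["value"]}' for r in rows]

def generate(prompt: str) -> str:
    """Placeholder: swap in a call to any LLM client here."""
    return f"[LLM answer grounded in {prompt.count(chr(10))} lines of context]"

def answer(question: str, entity_uri: str) -> str:
    """Retrieve structured facts, then ask the model to answer from them."""
    facts = "\n".join(retrieve_facts(entity_uri))
    prompt = f"Answer using only these facts:\n{facts}\n\nQuestion: {question}"
    return generate(prompt)

print(answer("What is a knowledge graph?",
             "http://dbpedia.org/resource/Knowledge_graph"))
```

The point of the pattern is simply that retrieval grounds the prompt in curated, structured facts rather than relying on parametric memory alone.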

Several keynote addresses were particularly thought-provoking. These include "LLM for Knowledge Base vs. LLM as Knowledge Base" by Paul Groth, "Evolution of Knowledge Engineering" by Elena Simperl, "Skills, Roles and Building an Enterprise Knowledge Graph" by Katariina Kari, and "Structured Reasoning over Natural Language Statements" by Peter Clark.

I feel immensely privileged to have collaborated on the paper 'NeOn-GPT: A Large Language Model-Powered Pipeline for Ontology Learning', which was presented in the special track on Large Language Models for Knowledge Engineering at ESWC this year. I am sincerely grateful to all my collaborators: Nadeen Fathallah, Stefano De Giorgis, Peter Haase, and Andrea Poltronieri.

Selecting between the workshops and conference sessions occurring simultaneously posed a delightful challenge. While my attendance was primarily devoted to the captivating sessions on 'Knowledge Graphs from Text' and 'Knowledge Graph Construction', with fleeting participation in the PhD Symposium, I am confident that the calibre and enriching experience remained consistent across all sessions. It was truly a privilege to meet Fajar J. Ekaputra, Vasily Orlov, Achim Reiz, @irenecelino, and Alexander Brinkmann, and to engage in enriching conversations thereafter.

My heartfelt gratitude to Albert Meroño Peñuela and the entire ESWC team for orchestrating such a remarkable conference experience.

#ESWC #ESWC2024 #SemanticWeb #semanticsconf #linkeddata #ontology #GlobalGraphCommunity

Achim Reiz

Build High-Quality Knowledge Graphs. With neonto.

9 months ago

I absolutely share the sentiment! Frank van Harmelen's keynote stuck most in my head with his call to further work on the semantics in the much-discussed neuro-symbolic (or, as he proposed, neuro-semantic) approaches. I'm eager to see how to integrate more logic into the NLP-connected applications.
