Knowledge Graphs and Large Language Models

A special issue of Machine Learning and Knowledge Extraction (ISSN 2504-4990).

Deadline for manuscript submissions: closed (26 May 2025)

Special Issue Editors


Guest Editor
High Council of Arabic Language, HCLA, Algiers, Algeria
Interests: Arabic language processing; machine translation; language identification; speech recognition

Guest Editor
Center for Language and Speech Processing, Johns Hopkins University, Baltimore, MD 21218, USA
Interests: speech processing and modeling; speaker and language recognition; audio segmentation; emotion recognition and health applications

Guest Editor
CNRS-SAMOVAR Institut Polytechnique de Paris, 91120 Palaiseau, France
Interests: speech and natural language processing; spoken dialogue systems; speaker and language recognition

Special Issue Information

Dear Colleagues,

In recent years, the fields of knowledge graphs (KGs) and large language models (LLMs) have witnessed remarkable advancements, revolutionizing the landscape of artificial intelligence and natural language processing. KGs, which are structured representations of knowledge, and LLMs, which are powerful language models trained on vast amounts of text data, have individually demonstrated their prowess in various applications.

However, the combination of, and synergy between, KGs and LLMs has emerged as a new frontier, offering unprecedented opportunities for enhancing knowledge representation, understanding, and generation. This integration not only enriches the semantic understanding of textual data but also empowers AI systems with the ability to reason, infer, and generate contextually relevant responses.

This Special Issue aims to delve into theoretical foundations, historical perspectives, and practical applications concerning the fusion between knowledge graphs and large language models. We invite contributions that explore the following areas:

  1. Theoretical Frameworks: Papers elucidating the theoretical underpinnings of the integration of KGs into LLMs, including methodologies, algorithms, and models for knowledge-enhanced language understanding and generation.
  2. Historical Perspectives: Insights into the evolution of KGs and LLMs, tracing their development trajectories, seminal works, and the transformative milestones leading to their integration.
  3. Design and Implementation: Research articles focusing on design principles, architectures, and techniques for effectively combining KGs and LLMs to facilitate tasks such as information retrieval, question answering, knowledge inference, and natural language understanding.
  4. Explanatory Capabilities: Explorations into how the fusion of KGs and LLMs enables the development of explainable AI systems, providing transparent and interpretable insights into model decisions and outputs.
  5. Human-Centered Intelligent Systems: Studies examining the design and deployment of interactive AI systems that leverage KGs and LLMs to facilitate seamless human–computer interactions, catering not only to experts but also to a broader, less specialized audience.
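As a toy illustration of the design-and-implementation theme above, the sketch below grounds a language-model prompt in facts retrieved from a small knowledge graph, the pattern commonly used for KG-enhanced question answering. The triples, the retrieval heuristic, and the prompt template are all invented for illustration; real systems use learned retrieval over large-scale graphs.

```python
# Minimal sketch of KG-grounded prompting (illustrative, not from any
# submission): retrieve triples relevant to a question and inject them
# into the prompt so the model answers from structured knowledge.

KG = [
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
]

def retrieve(question: str, kg=KG):
    """Naive retrieval: keep triples whose subject or object
    appears (case-insensitively) in the question."""
    q = question.lower()
    return [t for t in kg if t[0].lower() in q or t[2].lower() in q]

def build_prompt(question: str) -> str:
    """Format retrieved triples as a fact list ahead of the question."""
    facts = "\n".join(f"- {s} {p.replace('_', ' ')} {o}"
                      for s, p, o in retrieve(question))
    return (f"Use only these facts to answer.\nFacts:\n{facts}\n"
            f"Question: {question}\nAnswer:")

print(build_prompt("Where was Marie Curie born?"))
```

The resulting prompt would then be passed to any LLM; the KG supplies the verifiable facts, while the model supplies language understanding and generation.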

We encourage submissions that contribute to advancing our understanding of the synergistic relationship between knowledge graphs and large language models, fostering interdisciplinary collaborations across computer science, artificial intelligence, linguistics, cognitive science, and beyond. By shedding light on this burgeoning area of research, this Special Issue aims to propel the field forward and inspire further innovations in AI-driven knowledge representation and natural language processing.

Dr. Mourad Abbas
Dr. Najim Dehak
Prof. Dr. Gérard Chollet
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, you can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machine Learning and Knowledge Extraction is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • knowledge graphs
  • large language models
  • generative and neurosymbolic AI

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)

Research

24 pages, 3832 KiB  
Article
Stitching History into Semantics: LLM-Supported Knowledge Graph Engineering for 19th-Century Greek Bookbinding
by Dimitrios Doumanas, Efthalia Ntalouka, Costas Vassilakis, Manolis Wallace and Konstantinos Kotis
Mach. Learn. Knowl. Extr. 2025, 7(3), 59; https://doi.org/10.3390/make7030059 - 24 Jun 2025
Abstract
Preserving cultural heritage can be efficiently supported by structured, semantic representations of historical artifacts. Bookbinding, a critical aspect of book history, provides valuable insights into past craftsmanship, material use, and conservation practices. However, existing bibliographic records often lack the depth needed to analyze bookbinding techniques, provenance, and preservation status. This paper presents a proof-of-concept system that explores how Large Language Models (LLMs) can support knowledge graph engineering within the context of 19th-century Greek bookbinding (1830–1900), generating a domain-specific ontology and a knowledge graph as a result. Our ontology encapsulates materials, binding techniques, artistic styles, and conservation history, integrating metadata standards such as MARC and Dublin Core to ensure interoperability with existing library and archival systems. To validate its effectiveness, we construct a Neo4j knowledge graph based on the generated ontology and use Cypher queries—including LLM-generated queries—to extract insights about bookbinding practices and trends. This study also explores how semantic reasoning over the knowledge graph can identify historical binding patterns, assess book conservation needs, and infer relationships between bookbinding workshops. Unlike previous bibliographic ontologies, our approach provides a comprehensive, semantically rich representation of bookbinding history, methods, and techniques, supporting scholars, conservators, and cultural heritage institutions. By demonstrating how LLMs can assist in ontology/KG creation and query generation, we introduce and evaluate a semi-automated pipeline as a methodological demonstration for studying historical bookbinding, contributing to digital humanities, book conservation, and cultural informatics. Finally, the proposed approach can be applied to other domains, making it generally applicable in knowledge engineering.
(This article belongs to the Special Issue Knowledge Graphs and Large Language Models)
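To make the abstract's graph-querying step concrete, the sketch below runs a pattern query over a tiny in-memory triple set. The schema (books, workshops, materials) and data are invented for illustration; the paper itself stores the graph in Neo4j and queries it with Cypher, as noted in the comments.

```python
# Illustrative sketch (hypothetical schema and data): the kind of
# relationship query the paper describes running over a bookbinding
# knowledge graph, here against a plain Python set of triples.

TRIPLES = {
    ("book:A", "bound_by", "workshop:Athens1"),
    ("book:B", "bound_by", "workshop:Athens1"),
    ("book:A", "material", "leather"),
    ("book:B", "material", "cloth"),
    ("book:C", "bound_by", "workshop:Patras2"),
    ("book:C", "material", "leather"),
}

def objects(subject, predicate, triples=TRIPLES):
    """All objects linked to `subject` via `predicate`."""
    return {o for s, p, o in triples if s == subject and p == predicate}

def books_by_workshop(workshop, triples=TRIPLES):
    """Rough analogue of a Cypher pattern such as:
    MATCH (b:Book)-[:BOUND_BY]->(w:Workshop {id: $workshop}) RETURN b
    """
    return sorted(s for s, p, o in triples
                  if p == "bound_by" and o == workshop)

print(books_by_workshop("workshop:Athens1"))  # books bound by one workshop
print(objects("book:A", "material"))          # materials of one book
```

An LLM-assisted pipeline of the kind the paper evaluates would generate the Cypher shown in the docstring from a natural-language request such as "which books were bound by this workshop?".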

18 pages, 2743 KiB  
Article
Context-Aware Few-Shot Learning SPARQL Query Generation from Natural Language on an Aviation Knowledge Graph
by Ines-Virginia Hernandez-Camero, Eva Garcia-Lopez, Antonio Garcia-Cabot and Sergio Caro-Alvaro
Mach. Learn. Knowl. Extr. 2025, 7(2), 52; https://doi.org/10.3390/make7020052 - 13 Jun 2025
Abstract
Question answering over domain-specific knowledge graphs poses several challenges. It requires sufficient knowledge of the world and the domain to understand what is being asked, familiarity with the knowledge graph’s structure to build a correct query, and knowledge of the query language. However, mastering all of these is time-consuming. This work proposes a prompt-based approach for generating SPARQL queries from natural language. By leveraging the advanced language capabilities of large language models (LLMs), we constructed prompts that include a natural-language question, relevant contextual information from the domain-specific knowledge graph, and several examples of how the task should be executed. To evaluate our method, we applied it to an aviation knowledge graph containing accident report data. Our approach improved the results of the original work—in which the aviation knowledge graph was first introduced—by 6%, demonstrating its potential for enhancing SPARQL query generation for domain-specific knowledge graphs.
(This article belongs to the Special Issue Knowledge Graphs and Large Language Models)
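The prompt-construction step the abstract describes can be sketched as follows. The schema snippet, the worked example, and the template are invented stand-ins; the paper's actual prompts draw their context from the aviation knowledge graph itself.

```python
# Hypothetical sketch of few-shot prompt construction for NL-to-SPARQL
# generation: combine schema context from the knowledge graph with
# worked question/query examples, then append the new question.

SCHEMA = "Classes: :Accident, :Aircraft. Properties: :involves, :date."

EXAMPLES = [
    ("How many accidents are recorded?",
     "SELECT (COUNT(?a) AS ?n) WHERE { ?a a :Accident }"),
]

def few_shot_prompt(question: str) -> str:
    """Assemble schema context + few-shot examples + target question."""
    shots = "\n\n".join(
        f"Question: {q}\nSPARQL: {s}" for q, s in EXAMPLES)
    return (f"Graph schema:\n{SCHEMA}\n\n{shots}\n\n"
            f"Question: {question}\nSPARQL:")

print(few_shot_prompt("Which aircraft were involved in accidents in 2010?"))
```

The completed prompt is then sent to an LLM, whose continuation after "SPARQL:" is taken as the candidate query and executed against the graph.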
