Search Results (581)

Search Parameters:
Keywords = ontological domain

18 pages, 1305 KiB  
Article
Curriculum–Vacancy–Course Recommendation Model Based on Knowledge Graphs, Sentence Transformers, and Graph Neural Networks
by Valiya Ramazanova, Madina Sambetbayeva, Sandugash Serikbayeva, Aigerim Yerimbetova, Zhanar Lamasheva, Zhanna Sadirmekova and Gulzhamal Kalman
Technologies 2025, 13(8), 340; https://doi.org/10.3390/technologies13080340 - 5 Aug 2025
Abstract
This article addresses the task of building personalized educational recommendations based on a heterogeneous knowledge graph that integrates data from university curricula, job vacancies, and online courses. To recommend courses by their relevance to a user’s competencies, a graph neural network (GNN)-based approach is proposed, specifically utilizing and comparing the Heterogeneous Graph Transformer (HGT) architecture, the Graph Sample and Aggregate network (GraphSAGE), and the Heterogeneous Graph Attention Network (HAN). Experiments were conducted on a heterogeneous graph comprising various node and relation types, and the models were evaluated using regression and ranking metrics. The results demonstrated the superiority of the HGT-based recommendation model, formulated as a link regression task, especially on the ranking metrics, confirming its suitability for generating accurate and interpretable recommendations in educational systems. The proposed approach can be useful for developing adaptive learning recommendations aligned with users’ career goals.
(This article belongs to the Section Information and Communication Technologies)
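The evaluation above relies on ranking metrics for the link-regression recommender. The abstract does not say which metrics were used, but two standard choices for recommender evaluation, Hit@K and mean reciprocal rank (MRR), can be sketched in a few lines of Python (the course IDs below are invented for illustration):

```python
def hit_at_k(ranked_items, relevant, k):
    """Hit@K: 1.0 if any relevant item appears in the top-k of the ranking."""
    return 1.0 if any(item in relevant for item in ranked_items[:k]) else 0.0

def mrr(ranked_items, relevant):
    """Reciprocal rank for one ranking: 1/rank of the first relevant item, else 0."""
    for rank, item in enumerate(ranked_items, start=1):
        if item in relevant:
            return 1.0 / rank
    return 0.0

# Courses ranked by predicted relevance for one user (hypothetical IDs).
ranking = ["course_B", "course_D", "course_A", "course_C"]
relevant = {"course_A"}

print(hit_at_k(ranking, relevant, k=2))  # 0.0: course_A is not in the top 2
print(hit_at_k(ranking, relevant, k=3))  # 1.0
print(mrr(ranking, relevant))            # 1/3: first relevant item at rank 3
```

Averaging these per-user scores over a test set yields the corpus-level metric; the paper's exact protocol may differ.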

17 pages, 1707 KiB  
Article
A Structural Causal Model Ontology Approach for Knowledge Discovery in Educational Admission Databases
by Bern Igoche Igoche, Olumuyiwa Matthew and Daniel Olabanji
Knowledge 2025, 5(3), 15; https://doi.org/10.3390/knowledge5030015 - 4 Aug 2025
Viewed by 77
Abstract
Educational admission systems, particularly in developing countries, often suffer from opaque decision processes, unstructured data, and limited analytic insight. This study proposes a novel methodology that integrates structural causal models (SCMs), ontological modeling, and machine learning to uncover and apply interpretable knowledge from an admission database. Using a dataset of 12,043 records from Benue State Polytechnic, Nigeria, we demonstrate this approach as a proof of concept by constructing a domain-specific SCM ontology, validating it using conditional independence testing (CIT), and extracting features for predictive modeling. Five classifiers were evaluated using stratified 10-fold cross-validation: Logistic Regression, Decision Tree, Random Forest, K-Nearest Neighbors (KNN), and Support Vector Machine (SVM). SVM and KNN achieved the highest classification accuracy (92%), with precision exceeding 95% and recall reaching 100%. Feature importance analysis revealed ‘mode of entry’ and ‘current qualification’ as key causal factors influencing admission decisions. This framework provides a reproducible pipeline that combines semantic representation and empirical validation, offering actionable insights for institutional decision-makers. Comparative benchmarking, ethical considerations, and model calibration are integrated to enhance methodological transparency. Limitations, including reliance on single-institution data, are acknowledged, and directions for generalizability and explainable AI are proposed.
(This article belongs to the Special Issue Knowledge Management in Learning and Education)

27 pages, 2496 KiB  
Article
A Context-Aware Tourism Recommender System Using a Hybrid Method Combining Deep Learning and Ontology-Based Knowledge
by Marco Flórez, Eduardo Carrillo, Francisco Mendes and José Carreño
J. Theor. Appl. Electron. Commer. Res. 2025, 20(3), 194; https://doi.org/10.3390/jtaer20030194 - 2 Aug 2025
Viewed by 238
Abstract
The Santurbán páramo is a sensitive high-mountain ecosystem exposed to pressures from extractive and agricultural activities, as well as increasing tourism. In response, this study presents a context-aware recommendation system designed to support sustainable tourism through the integration of deep neural networks and ontology-based semantic modeling. The proposed system delivers personalized recommendations—such as activities, accommodations, and ecological routes—by processing user preferences, geolocation data, and contextual features, including cost and popularity. The architecture combines a trained TensorFlow Lite model with a domain ontology enriched with GeoSPARQL for geospatial reasoning. All inference operations are conducted locally on Android devices, supported by SQLite for offline data storage, which ensures functionality in connectivity-restricted environments and preserves user privacy. Additionally, the system employs geofencing to trigger real-time environmental notifications when users approach ecologically sensitive zones, promoting responsible behavior and biodiversity awareness. By incorporating structured semantic knowledge with adaptive machine learning, the system enables low-latency, personalized, and conservation-oriented recommendations. This approach contributes to the sustainable management of natural reserves by aligning individual tourism experiences with ecological protection objectives, particularly in remote areas like the Santurbán páramo.
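The geofencing step described above reduces, at its core, to a point-in-radius test against a great-circle distance. A minimal sketch, with hypothetical coordinates standing in for a protected zone (these are not taken from the article):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two latitude/longitude points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(user, zone_center, radius_m):
    """True if the user's position falls inside the circular geofence."""
    return haversine_m(user[0], user[1], zone_center[0], zone_center[1]) <= radius_m

# Hypothetical sensitive zone and a nearby user position.
zone = (7.225, -72.875)
user = (7.226, -72.875)  # about 111 m north of the zone center
print(inside_geofence(user, zone, radius_m=500))  # True -> trigger a notification
```

A production app would typically delegate this to the platform's geofencing API rather than polling distances, but the triggering condition is the same.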

25 pages, 3632 KiB  
Article
A Semantic Web and IFC-Based Framework for Automated BIM Compliance Checking
by Lu Jia, Maokang Chen, Chen Chen and Yanfeng Jin
Buildings 2025, 15(15), 2633; https://doi.org/10.3390/buildings15152633 - 25 Jul 2025
Viewed by 295
Abstract
In the architectural design phase, the inspection of design deliverables is critical, yet traditional manual checking methods are time-consuming, labor-intensive, and inefficient. With the development of BIM technology, automated rule compliance checking has become a trend. This paper presents a method combining semantic web technology and IFC data to enhance human–machine collaborative inspection capabilities. First, a five-step process integrated with domain specifications is designed to construct a building object ontology covering most architectural objects in the AEC domain. Second, a set of mapping rules is developed based on the expression mechanisms of IFC entities to establish a semantic bridge between IfcOWL and the building object ontology. Then, by analyzing regulatory codes, query rule templates for the major constraint types are developed using the semantic web query language SPARQL. Finally, the feasibility of the method is validated through a case study based on the Jena framework.
(This article belongs to the Section Construction Management, and Computers & Digitization)
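The rule templates above are SPARQL queries over IfcOWL data; as a toy stand-in, the same pattern-plus-filter idea can be shown with a hand-rolled triple matcher in Python (the entity names and the 800 mm clear-width threshold are invented for illustration, not taken from the article):

```python
# Toy triple store standing in for an IfcOWL-derived graph (names hypothetical).
triples = {
    ("Door_01", "type", "Door"),
    ("Door_01", "clearWidth_mm", 850),
    ("Door_02", "type", "Door"),
    ("Door_02", "clearWidth_mm", 760),
}

def query(store, pattern):
    """Match an (s, p, o) pattern; None plays the role of a SPARQL variable."""
    s, p, o = pattern
    return [(ts, tp, to) for ts, tp, to in store
            if (s is None or ts == s) and (p is None or tp == p) and (o is None or to == o)]

def check_min_width(store, min_mm):
    """A minimum-value constraint, analogous to a SPARQL FILTER clause."""
    doors = {s for s, _, _ in query(store, (None, "type", "Door"))}
    violations = [s for s, _, w in query(store, (None, "clearWidth_mm", None))
                  if s in doors and w < min_mm]
    return sorted(violations)

print(check_min_width(triples, 800))  # ['Door_02'] fails the width rule
```

In the actual approach, such checks would be SPARQL templates executed by Jena against the mapped IfcOWL graph.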

19 pages, 2689 KiB  
Article
A Multi-Temporal Knowledge Graph Framework for Landslide Monitoring and Hazard Assessment
by Runze Wu, Min Huang, Haishan Ma, Jicai Huang, Zhenhua Li, Hongbo Mei and Chengbin Wang
GeoHazards 2025, 6(3), 39; https://doi.org/10.3390/geohazards6030039 - 23 Jul 2025
Viewed by 316
Abstract
In the landslide chain from pre-disaster conditions to mitigation and recovery, time is an important factor in understanding the geological hazard process and managing landslides. Static knowledge graphs are unable to capture the temporal dynamics of landslide events. To address this limitation, we propose a systematic framework for constructing a multi-temporal knowledge graph of landslides that integrates multi-source temporal data, enabling the dynamic tracking of landslide processes. Our approach comprises three key steps. First, we summarize domain knowledge and develop a temporal ontology model based on the disaster chain management system. Second, we map heterogeneous datasets (both tabular and textual data) into triples or quadruples and represent them using the RDF (Resource Description Framework) and quadruple approaches. Finally, we validate the utility of the multi-temporal knowledge graph through multidimensional queries and develop a web interface that allows users to input landslide names to retrieve location and time-axis information. A case study of the Zhangjiawan landslide in the Three Gorges Reservoir Area demonstrates the multi-temporal knowledge graph’s capability to track temporal updates effectively, and the query results show that it supports multi-temporal queries. This study advances landslide research by combining static knowledge representation with the dynamic evolution of landslides, laying the foundation for hazard forecasting and intelligent early-warning systems.
(This article belongs to the Special Issue Landslide Research: State of the Art and Innovations)
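The quadruple representation mentioned above extends an RDF triple with a time component, which is what enables the time-axis queries. A minimal illustration in Python (the state labels and dates below are invented for illustration, not drawn from the Zhangjiawan case study):

```python
from collections import namedtuple

# A quadruple extends an RDF (subject, predicate, object) triple with a timestamp.
Quad = namedtuple("Quad", "subject predicate obj time")

quads = [
    Quad("Zhangjiawan", "hasState", "creeping", "2003-06"),
    Quad("Zhangjiawan", "hasState", "accelerating", "2008-09"),
    Quad("Zhangjiawan", "hasMitigation", "drainage_works", "2009-04"),
]

def states_between(store, subject, predicate, start, end):
    """Time-axis query: values of a predicate within [start, end].

    ISO-style "YYYY-MM" strings compare correctly as plain strings.
    """
    return [(q.time, q.obj) for q in sorted(store, key=lambda q: q.time)
            if q.subject == subject and q.predicate == predicate
            and start <= q.time <= end]

print(states_between(quads, "Zhangjiawan", "hasState", "2000-01", "2010-01"))
# [('2003-06', 'creeping'), ('2008-09', 'accelerating')]
```

A real implementation would store the quadruples as RDF named graphs or reified statements in a triple store, but the query semantics are the same.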

33 pages, 2593 KiB  
Article
Methodological Exploration of Ontology Generation with a Dedicated Large Language Model
by Maria Assunta Cappelli and Giovanna Di Marzo Serugendo
Electronics 2025, 14(14), 2863; https://doi.org/10.3390/electronics14142863 - 17 Jul 2025
Viewed by 347
Abstract
Ontologies are essential tools for representing, organizing, and sharing knowledge across various domains. This study presents a methodology for ontology construction supported by large language models (LLMs), with an initial application in the automotive sector. Specifically, a user preference ontology for adaptive interfaces in autonomous machines was developed using ChatGPT-4o. Based on this case study, the results were generalized into a reusable methodology. The proposed workflow integrates classical ontology engineering methodologies with the generative and analytical capabilities of LLMs. Each phase follows well-established steps: domain definition, term elicitation, class hierarchy construction, property specification, formalization, population, and validation. A key innovation of this approach is the use of a guiding table that translates domain knowledge into structured prompts, ensuring consistency across iterative interactions with the LLM. Human experts play a continuous role throughout the process, refining definitions, resolving ambiguities, and validating outputs. The ontology was evaluated in terms of logical consistency, structural properties, semantic accuracy, and inferential completeness, confirming its correctness and coherence. Additional validation through SPARQL queries demonstrated its reasoning capabilities. The methodology is generalizable to other domains, provided domain experts adapt the guiding table to the specific context. Despite the support provided by LLMs, domain expertise remains essential to guarantee conceptual rigor and practical relevance.
(This article belongs to the Special Issue Role of Artificial Intelligence in Natural Language Processing)

16 pages, 1534 KiB  
Article
Clinician-Based Functional Scoring and Genomic Insights for Prognostic Stratification in Wolf–Hirschhorn Syndrome
by Julián Nevado, Raquel Blanco-Lago, Cristina Bel-Fenellós, Adolfo Hernández, María A. Mori-Álvarez, Chantal Biencinto-López, Ignacio Málaga, Harry Pachajoa, Elena Mansilla, Fe A. García-Santiago, Pilar Barrúz, Jair A. Tenorio-Castaño, Yolanda Muñoz-GªPorrero, Isabel Vallcorba and Pablo Lapunzina
Genes 2025, 16(7), 820; https://doi.org/10.3390/genes16070820 - 12 Jul 2025
Viewed by 425
Abstract
Background/Objectives: Wolf–Hirschhorn syndrome (WHS; OMIM #194190) is a rare neurodevelopmental disorder caused by deletions in the distal short arm of chromosome 4. It is characterized by developmental delay, epilepsy, intellectual disability, and distinctive facial dysmorphism. Clinical presentation varies widely, complicating prognosis and individualized care. Methods: We assembled a cohort of 140 individuals with genetically confirmed WHS from Spain and Latin America, and developed and validated a multidimensional Clinician-Reported Outcome Assessment (ClinRO) based on the Global Functional Assessment of the Patient (GFAP), derived from standardized clinical questionnaires and weighted by HPO (Human Phenotype Ontology) term frequencies. The GFAP score quantitatively captures key functional domains in WHS, including neurodevelopment, epilepsy, comorbidities, and age-corrected developmental milestones (selected based on clinical experience and disease burden). Results: Higher GFAP scores are associated with worse clinical outcomes. GFAP showed strong correlations with deletion size, the presence of additional genomic rearrangements, sex, and epilepsy severity. Ward’s clustering and discriminant analyses confirmed GFAP’s discriminative power, classifying over 90% of patients into clinically meaningful groups with different prognoses. Conclusions: Our findings support GFAP as a robust, WHS-specific ClinRO that may aid in stratification, prognosis, and clinical management. This tool may also serve future interventional studies as a standardized outcome measure. Beyond its clinical utility, GFAP also revealed substantial social implications, underscoring the broader socioeconomic burden of WHS and the potential value of GFAP in identifying high-support families that may benefit from targeted resources and services.
(This article belongs to the Special Issue Molecular Basis of Rare Genetic Diseases)

38 pages, 2791 KiB  
Review
Digital Platforms for the Built Environment: A Systematic Review Across Sectors and Scales
by Michele Berlato, Leonardo Binni, Dilan Durmus, Chiara Gatto, Letizia Giusti, Alessia Massari, Beatrice Maria Toldo, Stefano Cascone and Claudio Mirarchi
Buildings 2025, 15(14), 2432; https://doi.org/10.3390/buildings15142432 - 10 Jul 2025
Viewed by 797
Abstract
The digital transformation of the Architecture, Engineering and Construction sector is accelerating the adoption of digital platforms as critical enablers of data integration, stakeholder collaboration and process optimization. This paper presents a systematic review of 125 peer-reviewed journal articles (2015–2025), selected through a PRISMA-guided search using the Scopus database, with inclusion criteria focused on English-language academic literature on platform-enabled digitalization in the built environment. Studies were grouped into six thematic domains, i.e., artificial intelligence in construction, digital twin integration, lifecycle cost management, BIM-GIS for underground utilities, energy systems, and public administration, based on a combination of literature precedent and domain relevance. Unlike existing reviews focused on single technologies or sectors, this work offers a cross-sectoral synthesis, highlighting shared challenges and opportunities across disciplines and lifecycle stages. It identifies the functional roles, enabling technologies and systemic barriers affecting digital platform adoption, such as fragmented data sources, limited interoperability between systems and siloed organizational processes. These barriers hinder the development of integrated and adaptive digital ecosystems capable of supporting real-time decision-making, participatory planning and sustainable infrastructure management. The study advocates for modular, human-centered platforms underpinned by standardized ontologies, explainable AI and participatory governance models. It also highlights the importance of emerging technologies, including large language models and federated learning, as well as context-specific platform strategies, especially for applications in the Global South.
(This article belongs to the Section Construction Management, and Computers & Digitization)

20 pages, 4177 KiB  
Article
Joint Entity–Relation Extraction for Knowledge Graph Construction in Marine Ranching Equipment
by Du Chen, Zhiwu Gao, Sirui Li, Xuruixue Guo, Yaqi Wu, Haiyu Zhang and Delin Zhang
Appl. Sci. 2025, 15(13), 7611; https://doi.org/10.3390/app15137611 - 7 Jul 2025
Viewed by 352
Abstract
The construction of marine ranching is a crucial component of China’s Blue Granary strategy, yet the fragmented knowledge system in marine ranching equipment impedes intelligent management and operational efficiency. This study proposes the first knowledge graph (KG) framework tailored for marine ranching equipment, integrating hybrid ontology design, joint entity–relation extraction, and graph-based knowledge storage: (1) the limitations of existing KGs were identified through targeted questionnaires for diverse users and employees; (2) a domain ontology was constructed through a combination of top-down and bottom-up approaches, defining seven key concepts and eight semantic relationships; (3) semi-structured data from enterprises and standards, combined with unstructured data from the literature, were systematically collected, cleaned via Scrapy and regular expressions, and standardized into JSON format, forming a domain-specific corpus of 1456 annotated sentences; (4) a novel BERT-BiGRU-CRF model was developed, leveraging contextual embeddings from BERT, parameter-efficient sequence modeling via a BiGRU (Bidirectional Gated Recurrent Unit), and label dependency optimization using a CRF (Conditional Random Field), with the TE + SE + Ri + BMESO tagging strategy introduced to address multi-relation extraction challenges by linking theme entities to secondary entities; (5) the Neo4j-based KG encapsulated 2153 nodes and 3872 edges, enabling scalable visualization and dynamic updates. Experimental results demonstrated superior performance over BiLSTM-CRF and BERT-BiLSTM-CRF, achieving 86.58% precision, 77.82% recall, and an 81.97% F1 score. This study not only proposes the first structured KG framework for marine ranching equipment but also offers a transferable methodology for vertical-domain knowledge extraction.
(This article belongs to the Section Marine Science and Engineering)
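The BMESO part of the tagging strategy above marks each token as the Begin, Middle, End, or Single token of an entity span, with O for tokens outside any entity. A simplified sketch in Python (the paper's full TE + SE + Ri + BMESO scheme additionally encodes entity roles and relation types, which is omitted here; the tokens and entity types are invented):

```python
def bmeso_tags(tokens, spans):
    """Tag tokens with B/M/E/S (begin/middle/end/single) plus an entity type.

    `spans` maps (start, end) token-index ranges (end exclusive) to entity types;
    tokens outside every span get the O tag.
    """
    tags = ["O"] * len(tokens)
    for (start, end), etype in spans.items():
        if end - start == 1:
            tags[start] = f"S-{etype}"
        else:
            tags[start] = f"B-{etype}"
            tags[end - 1] = f"E-{etype}"
            for i in range(start + 1, end - 1):
                tags[i] = f"M-{etype}"
    return tags

tokens = ["deep", "water", "net", "cage", "uses", "HDPE", "pipe"]
spans = {(0, 4): "Equipment", (5, 7): "Material"}
print(bmeso_tags(tokens, spans))
# ['B-Equipment', 'M-Equipment', 'M-Equipment', 'E-Equipment', 'O',
#  'B-Material', 'E-Material']
```

Sequences tagged this way are what the CRF layer scores, so that label transitions like M following B stay consistent.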

12 pages, 349 KiB  
Article
Agentic AI for Cultural Heritage: Embedding Risk Memory in Semantic Digital Twins
by George Pavlidis
Computers 2025, 14(7), 266; https://doi.org/10.3390/computers14070266 - 7 Jul 2025
Viewed by 768
Abstract
Cultural heritage preservation increasingly relies on data-driven technologies, yet most existing systems lack the cognitive and temporal depth required to support meaningful, transparent, and policy-informed decision-making. This paper proposes a conceptual framework for memory-enabled, semantically grounded AI agents in the cultural domain, showing how the integration of the ICCROM/CCI ABC method for risk assessment into the Panoptes ontology enables the structured encoding of risk cognition over time. This structured risk memory becomes the foundation for agentic reasoning, supporting prioritization, justification, and long-term preservation planning. It is argued that this approach constitutes a principled step toward the development of Cultural Agentic AI: autonomous systems that remember, reason, and act in alignment with cultural values. Proof-of-concept simulations illustrate how memory-enabled agents can trace evolving risk patterns, trigger policy responses, and evaluate mitigation outcomes through structured, explainable reasoning.

17 pages, 1955 KiB  
Article
Development of Safety Domain Ontology Knowledge Base for Fall Accidents
by Hyunsoung Park and Sangyun Shin
Buildings 2025, 15(13), 2299; https://doi.org/10.3390/buildings15132299 - 30 Jun 2025
Viewed by 374
Abstract
Extensive research in the field of construction safety has predominantly focused on identifying the causes and impacts of construction accidents, evaluating safety plans, assessing the effectiveness of safety education materials, and analyzing relevant policies. However, comparatively limited attention has been given to the systematic formation, management, and utilization of safety-related information and knowledge. Despite significant advancements in information and knowledge management technologies across the architecture, engineering, and construction (AEC) industries, their application in construction safety remains underdeveloped. This study addresses this gap by proposing a novel ontology-based framework specifically designed for construction safety management. Unlike previous models, the proposed ontology integrates diverse safety regulations and terminologies into a unified and semantically structured knowledge model. It comprises three primary superclasses covering key areas of construction safety, with an initial focus on fall hazards—one of the most frequent and severe risks, particularly in roofing activities. This domain-specific approach not only improves semantic clarity and standardization but also enhances reusability and extensibility for other risk domains. The ontology was developed using established methodologies and validated through reasoning tools and competency questions. By providing a formally structured, logic-driven knowledge base, the model supports automated safety reasoning, facilitates communication among stakeholders, and lays the foundation for future intelligent safety management systems in construction. This research contributes a validated, extensible, and regulation-aligned ontology model that addresses critical challenges in safety information integration, sharing, and application.

22 pages, 1763 KiB  
Article
A FIT4NER Generic Approach for Framework-Independent Medical Named Entity Recognition
by Florian Freund, Philippe Tamla, Frederik Wilde and Matthias Hemmje
Information 2025, 16(7), 554; https://doi.org/10.3390/info16070554 - 29 Jun 2025
Viewed by 327
Abstract
This article focuses on assisting medical professionals in analyzing domain-specific texts and selecting and comparing Named Entity Recognition (NER) frameworks. It details the development and evaluation of a system that utilizes a generic approach alongside the structured Nunamaker methodology. This system empowers medical professionals to train, evaluate, and compare NER models across diverse frameworks, such as Stanford CoreNLP, spaCy, and Hugging Face Transformers, independent of their specific implementations. Additionally, it introduces a concept for modeling a general training and evaluation process. Finally, experiments using various ontologies from the CRAFT corpus are conducted to assess the effectiveness of the current prototype.

28 pages, 7367 KiB  
Article
Ontology Modeling Using Fractal and Fuzzy Concepts to Optimize Metadata Management
by Siku Kim, Yee Yeng Liau and Kwangyeol Ryu
Appl. Sci. 2025, 15(13), 7193; https://doi.org/10.3390/app15137193 - 26 Jun 2025
Viewed by 295
Abstract
To address the data management limitations of traditional ontology models in dynamic industrial settings, this study introduces the Fractal Fuzzy Ontology Modeling (FFOM) framework, a novel methodology for optimizing data management, integration, and decision making. FFOM’s value is rooted in two major contributions: first, the strategic use of fractal structures to achieve scalability and modularity, which significantly reduces the effort required during data hierarchy updates by enabling self-similar, expandable data architectures; and second, the synergistic use of fuzzy logic to manage ambiguity and uncertainty, including the representation of imprecise relationships and support for flexible, rule-based reasoning. The practical value of this integrated approach is demonstrated through a mold assembly case study, which validates FFOM’s effectiveness in structuring complex data hierarchies, managing uncertainty, and enabling automated reasoning. Implemented in the Web Ontology Language (OWL) for standardization and interoperability purposes, FFOM ultimately provides a clear pathway toward developing more intelligent, adaptive, and scalable data ecosystems in demanding manufacturing domains, where real-time data analysis is critical.
(This article belongs to the Special Issue Advances in Ontology and the Semantic Web)
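The fuzzy-logic side of FFOM represents imprecise relationships as membership degrees rather than crisp true/false facts. A minimal sketch using a trapezoidal membership function (the "surface roughness" fuzzy sets and their breakpoints below are hypothetical, not taken from the mold assembly case study):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), ramps up on (a, b), 1 on [b, c], down on (c, d)."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical fuzzy sets over a surface-roughness attribute Ra (micrometers).
def roughness_fine(ra):
    return trapezoid(ra, -1.0, 0.0, 0.4, 0.8)

def roughness_coarse(ra):
    return trapezoid(ra, 0.4, 0.8, 10.0, 12.0)

ra = 0.6
print(roughness_fine(ra), roughness_coarse(ra))
# 0.5 0.5 -> Ra = 0.6 belongs partly to both sets, so rules over either can fire with degree 0.5
```

In an OWL-based implementation these degrees would typically be attached via annotation or data properties and consumed by fuzzy rule-based reasoning.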

24 pages, 7080 KiB  
Review
Responsible Resilience in Cyber–Physical–Social Systems: A New Paradigm for Emergent Cyber Risk Modeling
by Theresa Sobb, Nour Moustafa and Benjamin Turnbull
Future Internet 2025, 17(7), 282; https://doi.org/10.3390/fi17070282 - 25 Jun 2025
Cited by 1 | Viewed by 344
Abstract
As cyber systems increasingly converge with physical infrastructure and social processes, they give rise to Complex Cyber–Physical–Social Systems (C-CPSS), whose emergent behaviors pose unique risks to security and mission assurance. Traditional cyber–physical system models often fail to address the unpredictability arising from human and organizational dynamics, leaving critical gaps in how cyber risks are assessed and managed across interconnected domains. The challenge lies in building resilient systems that not only resist disruption, but also absorb, recover, and adapt—especially in the face of complex, nonlinear, and often unintentionally emergent threats. This paper introduces the concept of ‘responsible resilience’, defined as the capacity of systems to adapt to cyber risks using trustworthy, transparent agent-based models that operate within socio-technical contexts. We identify a fundamental research gap in the treatment of social complexity and emergence in the existing cyber–physical system literature. To address this, we propose the E3R modeling paradigm—a novel framework for conceptualizing Emergent, Risk-Relevant Resilience in C-CPSS. This paradigm synthesizes human-in-the-loop diagrams, agent-based Artificial Intelligence simulations, and ontology-driven representations to model more effectively the interdependencies and feedback loops driving unpredictable cyber risk propagation. Compared to conventional cyber–physical system models, E3R accounts for adaptive risks across social, cyber, and physical layers, enabling a more accurate and ethically grounded foundation for cyber defence and mission assurance. Our review of the literature reveals the underrepresentation of socio-emergent risk modeling, and our results indicate that existing models, especially those in industrial and healthcare applications of cyber–physical systems, lack the generalizability and robustness necessary for complex, cross-domain environments. The E3R framework thus marks a significant step forward in understanding and mitigating emergent threats in future digital ecosystems.
(This article belongs to the Special Issue Internet of Things and Cyber-Physical Systems, 3rd Edition)

24 pages, 3832 KiB  
Article
Stitching History into Semantics: LLM-Supported Knowledge Graph Engineering for 19th-Century Greek Bookbinding
by Dimitrios Doumanas, Efthalia Ntalouka, Costas Vassilakis, Manolis Wallace and Konstantinos Kotis
Mach. Learn. Knowl. Extr. 2025, 7(3), 59; https://doi.org/10.3390/make7030059 - 24 Jun 2025
Viewed by 799
Abstract
Preserving cultural heritage can be efficiently supported by structured and semantic representation of historical artifacts. Bookbinding, a critical aspect of book history, provides valuable insights into past craftsmanship, material use, and conservation practices. However, existing bibliographic records often lack the depth needed to analyze bookbinding techniques, provenance, and preservation status. This paper presents a proof-of-concept system that explores how Large Language Models (LLMs) can support knowledge graph engineering within the context of 19th-century Greek bookbinding (1830–1900), generating a domain-specific ontology and a knowledge graph. Our ontology encapsulates materials, binding techniques, artistic styles, and conservation history, integrating metadata standards like MARC and Dublin Core to ensure interoperability with existing library and archival systems. To validate its effectiveness, we construct a Neo4j knowledge graph based on the generated ontology and utilize Cypher queries, including LLM-generated queries, to extract insights about bookbinding practices and trends. This study also explores how semantic reasoning over the knowledge graph can identify historical binding patterns, assess book conservation needs, and infer relationships between bookbinding workshops. Unlike previous bibliographic ontologies, our approach provides a comprehensive, semantically rich representation of bookbinding history, methods, and techniques, supporting scholars, conservators, and cultural heritage institutions. By demonstrating how LLMs can assist in ontology/KG creation and query generation, we introduce and evaluate a semi-automated pipeline as a methodological demonstration for studying historical bookbinding, contributing to digital humanities, book conservation, and cultural informatics. Finally, the proposed approach can be used in other domains and is thus generally applicable in knowledge engineering.
(This article belongs to the Special Issue Knowledge Graphs and Large Language Models)
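Inferring relationships between workshops, as described above, amounts to applying a derivation rule over knowledge-graph triples. A toy Python sketch of one such rule, deriving a symmetric "shares a technique" edge (the workshop and technique names are invented; the paper itself works with Neo4j and Cypher rather than in-memory tuples):

```python
# Toy knowledge-graph triples (names hypothetical).
triples = [
    ("Workshop_Athens_A", "usesTechnique", "gold_tooling"),
    ("Workshop_Athens_B", "usesTechnique", "gold_tooling"),
    ("Workshop_Nafplio_C", "usesTechnique", "blind_stamping"),
]

def infer_shared_technique(store):
    """Derive symmetric sharedTechniqueWith edges from common usesTechnique objects."""
    by_technique = {}
    for s, p, o in store:
        if p == "usesTechnique":
            by_technique.setdefault(o, set()).add(s)
    inferred = set()
    for workshops in by_technique.values():
        for a in workshops:
            for b in workshops:
                if a != b:
                    inferred.add((a, "sharedTechniqueWith", b))
    return inferred

print(sorted(infer_shared_technique(triples)))
# [('Workshop_Athens_A', 'sharedTechniqueWith', 'Workshop_Athens_B'),
#  ('Workshop_Athens_B', 'sharedTechniqueWith', 'Workshop_Athens_A')]
```

In Neo4j the same rule would be a single Cypher pattern match that creates the derived relationship between the two workshop nodes.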
