Search Results (773)

Search Parameters:
Keywords = semantic ontology

17 pages, 2340 KB  
Article
From Descriptors to Decisions: Structuring the Libyan National Land Cover Reference System with Land Cover Meta Language
by Bashir Nwer, Gautam Dadhich, Akram Alkasih, Abdourahman Maki and Fatima Mushtaq
Land 2026, 15(2), 257; https://doi.org/10.3390/land15020257 - 2 Feb 2026
Abstract
The accurate representation of land cover is fundamental to sustainable land management, environmental monitoring, and spatial policy development. However, many national systems lack semantic interoperability and flexibility, and are often developed for narrowly focused purposes. This study presents an ontology-based approach to developing the Libyan National Land Cover Reference System (LLCRS) using the Land Cover Meta Language (LCML), defined in ISO 19144-2. The aim is to shift from fixed class labels to a structured set of observable descriptors—such as cover percentage, phenology, height, and spatial pattern—allowing for more precise, scalable, and interoperable representations of land cover. Using Libyan national classification schemes as a foundation, land cover classes were translated into LCML descriptors through iterative modeling and validation, supported by the Land Characterization System (LCHS) software. The resulting reference system offers a standardized, modular structure that facilitates crosswalks between national, regional, and global classification frameworks. It enhances consistency across mapping efforts and supports integration into national land monitoring workflows. The framework is tailored to Libya’s arid context but offers potential for adaptation and reuse in other arid/semi-arid regions, such as those in the Sahel or Arabian Peninsula, by adjusting descriptors to local environmental conditions while maintaining a biophysical focus and excluding socio-economic or land-use dynamics.
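As an illustration of the descriptor-based idea, here is a minimal Python sketch (not the authors' implementation, and not the ISO 19144-2 schema) of a land cover class expressed as observable descriptors, with a toy crosswalk test based on range overlap; all field names and values are hypothetical.

```python
# A minimal sketch of describing a land cover class through LCML-style
# observable descriptors instead of a fixed label. Field names are
# illustrative, not the ISO 19144-2 schema.
from dataclasses import dataclass

@dataclass
class LandCoverClass:
    name: str               # national class label being translated
    cover_percent: tuple    # (min, max) cover range of the dominant stratum
    phenology: str          # e.g. "deciduous", "evergreen", "annual"
    height_m: tuple         # (min, max) height of the dominant stratum
    spatial_pattern: str    # e.g. "scattered", "clustered", "continuous"

def crosswalk(a: LandCoverClass, b: LandCoverClass) -> bool:
    """Two classes are crosswalk candidates if their descriptor ranges overlap."""
    overlap = lambda x, y: x[0] <= y[1] and y[0] <= x[1]
    return (overlap(a.cover_percent, b.cover_percent)
            and overlap(a.height_m, b.height_m)
            and a.phenology == b.phenology)

# Example: testing whether a national class maps onto a global one.
sparse_shrub = LandCoverClass("Sparse shrubland", (10, 40), "evergreen", (0.3, 2.0), "scattered")
open_shrub = LandCoverClass("Open shrubs (global)", (15, 65), "evergreen", (0.3, 5.0), "scattered")
print(crosswalk(sparse_shrub, open_shrub))  # True -> candidate crosswalk
```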
29 pages, 473 KB  
Article
Sem4EDA: A Knowledge-Graph and Rule-Based Framework for Automated Fault Detection and Energy Optimization in EDA-IoT Systems
by Antonios Pliatsios and Michael Dossis
Computers 2026, 15(2), 103; https://doi.org/10.3390/computers15020103 - 2 Feb 2026
Abstract
This paper presents Sem4EDA, an ontology-driven and rule-based framework for automated fault diagnosis and energy-aware optimization in Electronic Design Automation (EDA) and Internet of Things (IoT) environments. The escalating complexity of modern hardware systems, particularly within IoT and embedded domains, presents formidable challenges for traditional EDA methodologies. While EDA tools excel at design and simulation, they often operate as siloed applications, lacking the semantic context necessary for intelligent fault diagnosis and system-level optimization. Sem4EDA addresses this gap by providing a comprehensive ontological framework developed in OWL 2, creating a unified, machine-interpretable model of hardware components, EDA design processes, fault modalities, and IoT operational contexts. We present a rule-based reasoning system implemented through SPARQL queries, which operates atop this knowledge base to automate the detection of complex faults such as timing violations, power inefficiencies, and thermal issues. A detailed case study, conducted via a large-scale trace-driven co-simulation of a smart city environment, demonstrates the framework’s practical efficacy: by analyzing simulated temperature sensor telemetry and Field-Programmable Gate Array (FPGA) configurations, Sem4EDA identified specific energy inefficiencies and overheating risks, leading to actionable optimization strategies that resulted in a 23.7% reduction in power consumption and a 15.6% decrease in operating temperature for the modeled sensor cluster. This work establishes a foundational step towards more autonomous, resilient, and semantically aware hardware design and management systems.
(This article belongs to the Special Issue Advances in Semantic Multimedia and Personalized Digital Content)
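The rule-based detection step can be pictured with a small rdflib sketch; the ex: vocabulary below is a placeholder, not the published Sem4EDA ontology, and the 85 C threshold is an assumed illustrative limit.

```python
# A minimal sketch of SPARQL-driven fault detection over an RDF knowledge
# base: flag FPGAs whose operating temperature exceeds a rule threshold.
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/sem4eda#")
g = Graph()
for name, temp in [("fpga1", 91.5), ("fpga2", 62.0)]:
    node = EX[name]
    g.add((node, RDF.type, EX.FPGA))
    g.add((node, EX.operatingTempC, Literal(temp, datatype=XSD.double)))

# Rule: any FPGA above 85 C is an overheating risk.
results = g.query("""
    PREFIX ex: <http://example.org/sem4eda#>
    SELECT ?dev ?t WHERE {
        ?dev a ex:FPGA ; ex:operatingTempC ?t .
        FILTER (?t > 85.0)
    }""")
for dev, t in results:
    print(f"overheating risk: {dev} at {t} C")
```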
32 pages, 3003 KB  
Article
FARM: A Multi-Agent Framework for Automated Construction of Multi-Species Livestock Health Knowledge Graphs
by Songxue Zhang, Shanshan Cao, Nan Ma, Wei Sun and Fantao Kong
Agriculture 2026, 16(3), 356; https://doi.org/10.3390/agriculture16030356 - 2 Feb 2026
Abstract
Livestock health knowledge graphs are essential for decision-making and reasoning in animal husbandry, yet existing knowledge is scattered across unstructured literature and encoded in narrowly scoped, species-specific models, resulting in semantic fragmentation and limited reusability. To address these issues, we propose FARM (Four-Dimensional Automated-Reasoning Multi-agent), a zero-shot multi-agent framework for constructing multi-species livestock health knowledge graphs. FARM is grounded in a Four-Dimension Livestock Health Framework encompassing Rearing Environment, Physiological Status, Feed & Water Inputs, and Production Performance, and employs a unified ontology strategy that integrates cross-species general labels with species-specific constraints to achieve semantic alignment. The framework orchestrates five specialized agents—Coordination, Entity Extraction, Ontology Normalization, Relation Extraction, and Knowledge Fusion—to automate the construction process. Experiments on 2478 expertly annotated text samples demonstrate that FARM achieves an entity-level F1 score of 0.8070 (IoU ≥ 0.5), surpassing the strongest baseline by 0.1627. Moreover, it attains a corrected entity label accuracy of 90.44% and an F1 score of 0.9277 in relation existence identification, outperforming the baseline by 0.1114. Validation on 500 image samples further confirms its capability in multimodal evidence fusion. The resulting knowledge graph contains 29,064 entities and 26,662 triples, providing a reusable foundation for zero-shot extraction and unified cross-species semantic modeling.
(This article belongs to the Section Artificial Intelligence and Digital Agriculture)
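A schematic sketch of the agent orchestration follows; the stub functions stand in for the LLM-backed agents the paper describes, and all names, aliases, and rules are illustrative.

```python
# A schematic sketch of FARM-style orchestration: a coordination step runs
# extraction, normalization, and relation agents, then fuses triples.
def entity_extraction(text):
    # Stand-in for zero-shot LLM extraction of livestock-health entities.
    return [w for w in text.split() if w.istitle()]

def ontology_normalization(entities):
    # Map species-specific mentions to unified cross-species labels.
    aliases = {"Holstein": "Cattle", "Duroc": "Swine"}
    return [aliases.get(e, e) for e in entities]

def relation_extraction(entities):
    # Naive pairwise relation candidates; the paper uses an LLM agent here.
    return [(a, "related_to", b) for a, b in zip(entities, entities[1:])]

def coordinate(text):
    """Coordination agent: run the pipeline and fuse results into triples."""
    entities = ontology_normalization(entity_extraction(text))
    return relation_extraction(entities)

print(coordinate("Holstein cows showed Fever after Feed change"))
```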
29 pages, 5294 KB  
Article
Building a Regional Platform for Monitoring Air Quality
by Stanimir Nedyalkov Stoyanov, Boyan Lyubomirov Belichev, Veneta Veselinova Tabakova-Komsalova, Yordan Georgiev Todorov, Angel Atanasov Golev, Georgi Kostadinov Maglizhanov, Ivan Stanimirov Stoyanov and Asya Georgieva Stoyanova-Doycheva
Future Internet 2026, 18(2), 78; https://doi.org/10.3390/fi18020078 - 2 Feb 2026
Abstract
This paper presents PLAM (Plovdiv Air Monitoring)—a regional multi-agent platform for air quality monitoring, semantic reasoning, and forecasting. The platform uses a hybrid architecture that combines two types of intelligent agents: classic BDI (Belief-Desire-Intention) agents for complex, goal-oriented behavior and planning, and ReAct agents based on large language models (LLMs) for quick response, analysis, and interaction with users. The system integrates data from heterogeneous sources, including local IoT sensor networks and public external services, enriching it with a specialized OWL ontology of environmental norms. Based on this data, the platform performs comparative analysis, detection of anomalies and inconsistencies between measurements, as well as predictions using machine learning models. The results are visualized and presented to users via a web interface and mobile application, including personalized alerts and recommendations. The architecture demonstrates essential properties of an intelligent agent such as autonomy, proactivity, reactivity, and social capabilities. The implementation and testing in the city of Plovdiv demonstrate the system’s ability to provide a more objective and comprehensive assessment of air quality, revealing significant differences between measurements from different institutions. The platform offers a modular and adaptive design, making it applicable to other regions, and the paper outlines future development directions, such as creating a specialized small language model and expanding sensor capabilities.
(This article belongs to the Special Issue Intelligent Agents and Their Application)
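The norm-checking idea can be sketched as follows; in PLAM the limit values would come from the OWL ontology of environmental norms, while the hard-coded limits here are illustrative placeholders (loosely modeled on EU guideline values).

```python
# A minimal sketch of comparing heterogeneous sensor readings against limit
# values that the platform would load from its ontology of norms.
NORMS = {"PM10": 50.0, "PM2.5": 25.0, "NO2": 200.0}  # micrograms / m^3, illustrative

def assess(readings: dict, source: str):
    for pollutant, value in readings.items():
        limit = NORMS.get(pollutant)
        if limit is not None and value > limit:
            print(f"[{source}] {pollutant} = {value} exceeds norm {limit}")

# Detecting inconsistencies between two institutions' measurements.
assess({"PM10": 81.0, "NO2": 44.0}, source="municipal-station")
assess({"PM10": 47.0, "NO2": 41.0}, source="IoT-node-12")
```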
45 pages, 2716 KB  
Article
WoR+ Ontology: Modeling Data and Services in Web Connected Environments
by Lara Kallab, Khouloud Salameh and Richard Chbeir
Sensors 2026, 26(3), 941; https://doi.org/10.3390/s26030941 - 1 Feb 2026
Abstract
The Web of Things (WoT) is a set of standards established by the World Wide Web Consortium (W3C) to enable interoperability across various Internet of Things (IoT) platforms. These standards facilitate seamless device-to-device interactions and application-to-application communication across heterogeneous environments. To identify and utilize resources, whether data or services, offered by Web-connected devices and applications, these resources must be described using an open, shared, and dynamic knowledge representation capable of supporting both syntactic and semantic interoperability. In this paper, we present WoR+, a Web of Resources ontology based on a modular and unified vocabulary for describing Web resources (Web services and Web data). WoR+ offers several advantages: (a) it supports the discovery, selection, and composition of data and services provided by Web-connected devices and applications; (b) it provides reasoning capabilities for inferring new knowledge; and (c) it supports extensibility and adaptability to emerging domain requirements. Experimental evaluation shows that the WoR+ ontology achieves high effectiveness, strong performance, and good clarity and consistency.
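As a rough sketch of the kind of description WoR+ targets, the rdflib snippet below models a device service and a discovery query; the wor: terms are placeholders, not the published vocabulary.

```python
# A minimal sketch of describing a Web-connected device's service in RDF and
# discovering it with SPARQL, in the spirit of a Web of Resources ontology.
from rdflib import Graph, Namespace, Literal, RDF

WOR = Namespace("http://example.org/wor#")
g = Graph()
svc, dev = WOR.setTemperature, WOR.thermostat42
g.add((svc, RDF.type, WOR.WebService))   # the resource is a Web service
g.add((svc, WOR.providedBy, dev))        # offered by a specific device
g.add((svc, WOR.hasInput, Literal("targetTempC")))

# Discovery: list the services a given device exposes.
q = """PREFIX wor: <http://example.org/wor#>
       SELECT ?s WHERE { ?s a wor:WebService ; wor:providedBy wor:thermostat42 . }"""
for (s,) in g.query(q):
    print("discovered service:", s)
```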
25 pages, 1018 KB  
Article
Ontology Quality Improvement in the Semantic Web: Evidence from Educational Knowledge Graphs
by Wassim Jaziri and Najla Sassi
Systems 2026, 14(2), 154; https://doi.org/10.3390/systems14020154 - 31 Jan 2026
Abstract
Intelligent systems draw much of their reliability from the quality of their ontologies; however, manual ontology assessment remains patchy, time-consuming, and difficult to scale. To address these limitations, this paper proposes a domain-independent, machine-learning-driven framework for ontology quality assessment and improvement in the Semantic Web. The framework combines structural, semantic, and documentation metrics with supervised learning models to predict quality issues and recommend targeted refinements through a four-phase workflow comprising ML model development, metric definition, automated improvement, and empirical evaluation. The approach is validated on educational knowledge graphs using 1500 ontology modules from the EDUKG repository, including a 100-module expert-annotated gold set (κ = 0.82). Experimental results show structural precision of 93.5% and semantic precision of 90.2%, with overall F1-scores close to 90%, while reducing ontology development time by 42% and quality assessment time by 65%. These findings demonstrate that coupling ML with structured quality metrics substantially enhances ontology reliability while preserving pedagogical and operational relevance in educational settings. Although empirical validation is conducted in the education domain, the modular and ontology-agnostic architecture can be adapted to other knowledge-intensive domains through retraining and domain-specific calibration, offering a reproducible foundation for continuous ontology quality improvement in Semantic Web applications.
(This article belongs to the Special Issue Digital Engineering: Transformational Tools and Strategies)
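The coupling of quality metrics with supervised learning can be sketched in a few lines; the three features, the toy training rows, and the choice of logistic regression are all illustrative assumptions, not the paper's model.

```python
# A toy sketch of metric-driven quality prediction: simple structural and
# documentation metrics per ontology module feed a supervised classifier.
from sklearn.linear_model import LogisticRegression

# Each row: [class_count, mean_hierarchy_depth, documented_fraction]
X = [[120, 4.2, 0.91], [15, 1.1, 0.20], [88, 3.7, 0.75], [9, 1.0, 0.05]]
y = [0, 1, 0, 1]  # 1 = module flagged for quality issues by experts

clf = LogisticRegression().fit(X, y)
print(clf.predict([[30, 2.0, 0.35]]))  # predicted flag for a new module
```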
19 pages, 3593 KB  
Article
Mapping the ECC–Saliva Neuroimmune Axis Using AI: A System-Level Framework
by Ahmed Alamoudi and Hammam Ahmed Bahammam
Children 2026, 13(2), 185; https://doi.org/10.3390/children13020185 - 29 Jan 2026
Abstract
Background/Objectives: Early childhood caries (ECC) and saliva have been studied across disparate domains, including microbiome, fluoride, immune, oxidative-stress, and neuroendocrine research. However, the ECC–saliva literature has not previously been mapped as a connected system using modern natural language processing (NLP). This study treats PubMed titles and abstracts as data to identify major themes, emerging topics, and candidate neuroimmune axes in ECC–saliva research. Methods: Using the NCBI E-utilities API, we retrieved 298 PubMed records (2000–2025) matching (“early childhood caries” [Title/Abstract]) AND saliva [Title/Abstract]. Text was cleaned with spaCy and embedded using a transformer encoder; BERTopic combined UMAP dimensionality reduction and HDBSCAN clustering to derive thematic topics. We summarised topics with class-based TF–IDF, constructed keyword co-occurrence networks, defined an internal topic-level Novelty Index (semantic distance plus temporal dispersion), and mapped high-novelty topics to Gene Ontology and Reactome pathways using g:Profiler. Prophet was used to model temporal trends and forecast topic-level publication trajectories. Finally, we generated a fully synthetic neuroimmune salivary dataset, based on realistic ranges from the literature, to illustrate how the identified axes could be operationalised in future ECC cohorts. Results: Seven coherent ECC–saliva topics were identified, including classical microbiome and fluoride domains as well as antioxidant/redox, proteomic, peptide immunity, and Candida–biofilm themes. High-novelty topics clustered around total antioxidant capacity, glutathione peroxidase, superoxide dismutase, and peptide-based host defence. Keyword networks and ontology enrichment highlighted “Detoxification of Reactive Oxygen Species”, “cellular oxidant detoxification”, and cytokine-mediated signalling as central processes. Temporal forecasting suggested plateauing growth for classical epidemiology and fluoride topics, with steeper projected increases for antioxidant and peptide-immunity themes. A co-mention heatmap revealed a literature-level Candida–cytokine–neuroendocrine triad (e.g., Candida albicans, IL-6/TNF, cortisol), which we propose as a testable neuro-immunometabolic hypothesis rather than a confirmed mechanism. Conclusions: AI-assisted topic modelling and network analysis provide a reproducible, bibliometric map of ECC–saliva research that highlights underexplored antioxidant/redox and neuroimmune salivary axes. The synthetic neuroimmune dataset and modelling pipeline are illustrative only, but together with the literature map, they offer a structured agenda for future ECC cohorts and mechanistic studies.
(This article belongs to the Section Pediatric Dentistry & Oral Medicine)
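The topic-modelling pipeline can be condensed into a short BERTopic sketch; it assumes a user-supplied corpus file of title/abstract strings (a few hundred documents, in the spirit of the study's 298 records), and the hyperparameters are illustrative, not those reported in the paper.

```python
# A condensed sketch of a BERTopic pipeline: transformer embeddings + UMAP
# dimensionality reduction + HDBSCAN clustering, summarised per topic with
# class-based TF-IDF. Requires bertopic, umap-learn, and hdbscan.
from umap import UMAP
from hdbscan import HDBSCAN
from bertopic import BERTopic

# User-supplied corpus, one title/abstract per line (assumed file name).
docs = open("ecc_saliva_abstracts.txt").read().splitlines()

topic_model = BERTopic(
    umap_model=UMAP(n_neighbors=15, n_components=5, metric="cosine"),
    hdbscan_model=HDBSCAN(min_cluster_size=8, prediction_data=True),
    calculate_probabilities=False,
)
topics, _ = topic_model.fit_transform(docs)
print(topic_model.get_topic_info().head())  # c-TF-IDF keywords per topic
```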
24 pages, 5682 KB  
Article
An Ontology-Driven Digital Twin for Hotel Front Desk: Real-Time Integration of Wearables and OCC Camera Events via a Property-Defined REST API
by Moises Segura-Cedres, Desiree Manzano-Farray, Carmen Lidia Aguiar-Castillo, Rafael Perez-Jimenez, Vicente Matus Icaza, Eleni Niarchou and Victor Guerra-Yanez
Electronics 2026, 15(3), 567; https://doi.org/10.3390/electronics15030567 - 28 Jan 2026
Abstract
This article presents an ontology-driven Digital Twin (DT) for hotel front-desk operations that fuses two real-time data streams: (i) physiological and activity signals from wrist-worn wearables assigned to staff, and (ii) 3D people-positioning and occupancy events captured by reception-area cameras using a proprietary implementation of Optical Camera Communication (OCC). Building on a previously proposed front-desk ontology, the semantic model is extended with positional events, zone semantics, and wearable-derived workload indices to estimate queue state, staff workload, and service demand in real time. A vendor-agnostic, property-based REST API specifies the DT interface in terms of observable properties, including authentication and authorization, idempotent ingestion, timestamp conventions, version negotiation, integrity protection for signed webhooks, rate limiting and backoff, pagination and filtering, and privacy-preserving identifiers, enabling any compliant backend to implement the specification. The proposed layered architecture connects ingestion, spatial reasoning, and decision services to dashboards and key performance indicators (KPIs). This article details the positioning pipeline (calibration, normalized 3D coordinates, zone mapping, and confidence handling), the wearable workload pipeline, and an evaluation protocol covering localization error, zone classification, queue-length estimation, and workload accuracy. The results indicate that a spatially aware, ontology-based DT can support more balanced staff allocation and improved guest experience while remaining technology-agnostic and privacy-conscious.
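Two of the specified API behaviours, signed-webhook integrity checking and idempotent ingestion, can be sketched with the Python standard library; the field names, secret, and return codes are illustrative, not the published interface.

```python
# A minimal sketch of HMAC webhook verification plus idempotent ingestion
# keyed by event id, so replayed deliveries are no-ops.
import hmac, hashlib, json

SECRET = b"shared-webhook-secret"   # illustrative; provisioned out of band
_seen_events = set()                # in production: a persistent store

def verify_signature(body: bytes, signature_hex: str) -> bool:
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

def ingest(body: bytes, signature_hex: str):
    if not verify_signature(body, signature_hex):
        return 401, "bad signature"
    event = json.loads(body)
    if event["event_id"] in _seen_events:   # idempotent: duplicates ignored
        return 200, "duplicate ignored"
    _seen_events.add(event["event_id"])
    # ... update the digital twin (zone occupancy, workload indices) ...
    return 201, "ingested"

payload = json.dumps({"event_id": "e-1", "zone": "desk-A", "count": 3}).encode()
sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
print(ingest(payload, sig))   # (201, 'ingested')
print(ingest(payload, sig))   # (200, 'duplicate ignored')
```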
35 pages, 1504 KB  
Article
Scientific Artificial Intelligence: From a Procedural Toolkit to Cognitive Coauthorship
by Adilbek K. Bisenbaev
Philosophies 2026, 11(1), 12; https://doi.org/10.3390/philosophies11010012 - 27 Jan 2026
Abstract
This article proposes a redefinition of scientific authorship under conditions of algorithmic mediation. We shift the discussion from the ontological dichotomy of “tool versus author” to an operationalizable epistemology of contribution. Building on the philosophical triad of instrumentality—intervention, representation, and hermeneutics—we argue that contemporary AI systems (notably large language models, LLMs) exceed the role of a merely “mute” accelerator of procedures. They now participate in the generation of explanatory structures, the reframing of research problems, and the semantic reconfiguration of the knowledge corpus. In response, we formulate the AI-AUTHorship framework, which remains compatible with an anthropocentric legal order while recognizing and measuring AI’s cognitive participation. We introduce TraceAuth, a protocol for tracing cognitive chains of reasoning, and AIEIS (AI epistemic impact score), a metric that stratifies contributions along the axes of procedural (P), semantic (S), and generative (G) participation. The threshold between “support” and “creation” is refined through a battery of operational tests (alteration of the problem space; causal/counterfactual load; independent reproducibility without AI; interpretability and traceability). We describe authorship as distributed epistemic authorship (DEA): a network of people, artifacts, algorithms, and institutions in which AI functions as a nonsubjective node whose contribution is nonetheless auditable. The framework closes the gap between the de facto involvement of AI and de jure norms by institutionalizing a regime of “recognized participation,” wherein transparency, interpretability, and reproducibility of cognitive trajectories become conditions for acknowledging contribution, whereas human responsibility remains nonnegotiable.
26 pages, 2177 KB  
Article
A Semantic Similarity Model for Geographic Terminologies Using Ontological Features and BP Neural Networks
by Zugang Chen, Xinyu Chen, Yin Ma, Jing Li, Linhan Yang, Guoqing Li, Hengliang Guo, Shuai Chen and Tian Liang
Appl. Sci. 2026, 16(2), 1105; https://doi.org/10.3390/app16021105 - 21 Jan 2026
Abstract
Accurate measurement of semantic similarity between geographic terms is a fundamental challenge in geographic information science, directly influencing tasks such as knowledge retrieval, ontology-based reasoning, and semantic search in geographic information systems (GIS). Traditional ontology-based approaches primarily rely on a narrow set of features (e.g., semantic distance or depth), which inadequately capture the multidimensional and context-dependent nature of geographic semantics. To address this limitation, this study proposes an ontology-driven semantic similarity model that integrates a backpropagation (BP) neural network with multiple ontological features—hierarchical depth, node distance, concept density, and relational overlap. The BP network serves as a nonlinear optimization mechanism that adaptively learns the contributions of each feature through cross-validation, balancing interpretability and precision. Experimental evaluations on the Geo-Terminology Relatedness Dataset (GTRD) demonstrate that the proposed model outperforms traditional baselines, including the Thesaurus–Lexical Relatedness Measure (TLRM), Word2Vec, and SBERT (Sentence-BERT), with Spearman correlation improvements of 4.2%, 74.8%, and 80.1%, respectively. Additionally, comparisons with Linear Regression and Random Forest models, as well as bootstrap analysis and error analysis, confirm the robustness and generalization of the BP-based approach. These results confirm that coupling structured ontological knowledge with data-driven learning enhances robustness and generalization in semantic similarity computation, providing a unified framework for geographic knowledge reasoning, terminology harmonization, and ontology-based information retrieval.
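The core idea, a backpropagation-trained network that learns how to weight several ontological features into one similarity score, can be sketched with scikit-learn's MLPRegressor; the feature encoding, training pairs, and network size are illustrative assumptions, not the paper's model.

```python
# A toy sketch: four ontological features per term pair are mapped to an
# expert-rated similarity score by a small backpropagation-trained network.
from sklearn.neural_network import MLPRegressor

# Features per pair: [hierarchy depth, node distance, density, relation overlap]
X = [[0.8, 0.1, 0.6, 0.9], [0.3, 0.7, 0.2, 0.1], [0.6, 0.3, 0.5, 0.6]]
y = [0.92, 0.15, 0.64]  # gold similarity ratings (illustrative values)

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)
print(model.predict([[0.7, 0.2, 0.5, 0.8]]))  # predicted similarity
```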
18 pages, 12523 KB  
Article
Automatic Generation of NGSI-LD Data Models from RDF Ontologies: Developmental Studies of Children and Adolescents Use Case
by Franc Drobnič, Gregor Starc, Gregor Jurak, Andrej Kos and Matevž Pustišek
Appl. Sci. 2026, 16(2), 992; https://doi.org/10.3390/app16020992 - 19 Jan 2026
Abstract
In the era of ever-greater data production and collection, public health research is often limited by the scarcity of data. To improve this, we propose data sharing in the form of Data Spaces, which provide technical, business, and legal conditions for easier and trustworthy data exchange for all participants. The data must be described in a commonly understandable way, which can be assured by machine-readable ontologies. We compared the semantic interoperability technologies used in the European Data Spaces initiatives and adopted them in our use case of physical development in children and youth. We propose an ontology describing data from the Analysis of Children’s Development in Slovenia (ACDSi) study in the Resource Description Framework (RDF) format and a corresponding Next Generation Service Interface-Linked Data (NGSI-LD) data model. For this purpose, we have developed a tool to generate an NGSI-LD data model using information from an ontology in RDF format. The tool builds on the standard’s declaration that the NGSI-LD information model follows the graph structure of RDF, so that such a translation is feasible. The source RDF ontology is analyzed using the standardized SPARQL Protocol and RDF Query Language (SPARQL), specifically using Property Path queries. The NGSI-LD data model is generated from the definitions collected in the analysis. The translation has been verified on the Smart Applications REFerence (SAREF) ontology SAREF4BLDG and its corresponding Smart Data Models (52 models at the time). The generated artifacts have been tested on a Context Broker reference implementation. The tool supports basic ontology structures, and for it to translate more complex structures, further development is needed.
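The central translation step can be sketched with rdflib: a SPARQL property-path query collects class definitions, from which a minimal NGSI-LD-style model entry is emitted. The tiny ontology and the output shape are illustrative, not the authors' tool.

```python
# A minimal sketch: query an RDF ontology with a SPARQL property path, then
# emit a skeleton NGSI-LD-style entity type description.
from rdflib import Graph

ttl = """
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/acdsi#> .
ex:Measurement a rdfs:Class .
ex:BodyHeight  rdfs:subClassOf ex:Measurement .
"""
g = Graph().parse(data=ttl, format="turtle")

# Property path: all (transitive) superclasses of ex:BodyHeight.
q = """PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
       PREFIX ex: <http://example.org/acdsi#>
       SELECT ?parent WHERE { ex:BodyHeight rdfs:subClassOf+ ?parent . }"""
parents = [str(row.parent) for row in g.query(q)]

ngsi_ld_model = {            # skeleton of a generated data model entry
    "entityType": "BodyHeight",
    "parents": parents,
    "@context": ["https://uri.etsi.org/ngsi-ld/v1/ngsi-ld-core-context.jsonld"],
}
print(ngsi_ld_model)
```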
29 pages, 4179 KB  
Article
Ontology-Enhanced Deep Learning for Early Detection of Date Palm Diseases in Smart Farming Systems
by Naglaa E. Ghannam, H. Mancy, Asmaa Mohamed Fathy and Esraa A. Mahareek
AgriEngineering 2026, 8(1), 29; https://doi.org/10.3390/agriengineering8010029 - 13 Jan 2026
Abstract
Early and accurate date palm disease detection is the key to successful smart farming ecosystem sustainability. In this paper, we introduce DoST-DPD, a new Dual-Stream Transformer architecture for multimodal disease diagnosis utilizing RGB, thermal, and NIR imaging. In contrast with standard deep learning approaches, our model receives ontology-based semantic supervision (via per-dataset OWL ontologies), enabling knowledge injection via SPARQL-driven reasoning during training. This structured knowledge layer not only improves multimodal feature correspondence but also enforces label consistency, improving generalization performance, particularly in early disease diagnosis. We tested our proposed method on a comprehensive set of five benchmarks (PlantVillage, PlantDoc, Figshare, Mendeley, and Kaggle Date Palm) together with domain-specific ontologies. An ablation study validates the effectiveness of incorporating ontology supervision, consistently improving the performance across Accuracy, Precision, Recall, F1-Score, and AUC. We achieve state-of-the-art performance over five widely recognized baselines (PlantXViT, Multi-ViT, ERCP-Net, and ResNet), with our model DoST-DPD achieving the highest Accuracy of 99.3% and AUC of 98.2% on the PlantVillage dataset. In addition, ontology-driven attention maps and semantic consistency contributed to high interpretability and robustness across multiple crops and imaging modalities. This work presents a scalable roadmap for ontology-integrated AI systems in agriculture and illustrates how structured semantic reasoning can directly benefit multimodal plant disease detection systems. The proposed model demonstrates competitive performance across multiple datasets and highlights the unique advantage of integrating ontology-guided supervision in multimodal crop disease detection.
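Ontology-based supervision of this kind is often realised as an auxiliary loss; the PyTorch sketch below penalises probability mass on labels the ontology rules out, with toy tensors and an assumed weighting, not the paper's exact formulation.

```python
# A schematic sketch of ontology-guided supervision: alongside the usual
# cross-entropy, penalise probability assigned to disease labels that the
# ontology rules out for each sample's crop. All values are toy data.
import torch
import torch.nn.functional as F

logits = torch.randn(2, 4)                  # batch of 2, 4 disease classes
targets = torch.tensor([0, 2])
# allowed[i, c] = 1 if class c is ontologically valid for sample i's crop
allowed = torch.tensor([[1, 1, 0, 0], [0, 0, 1, 1]], dtype=torch.float)

ce = F.cross_entropy(logits, targets)
probs = F.softmax(logits, dim=1)
consistency = (probs * (1 - allowed)).sum(dim=1).mean()  # mass on invalid labels
loss = ce + 0.5 * consistency                            # 0.5: illustrative weight
print(float(loss))
```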
16 pages, 328 KB  
Article
SemanticHPC: Semantics-Aware, Hardware-Conscious Workflows for Distributed AI Training on HPC Architectures
by Alba Amato
Information 2026, 17(1), 78; https://doi.org/10.3390/info17010078 - 12 Jan 2026
Abstract
High-Performance Computing (HPC) has become essential for training medium- and large-scale Artificial Intelligence (AI) models, yet two aspects remain under-exploited: the semantic coherence of training data and the interaction between distributed deep learning runtimes and heterogeneous HPC architectures. Existing work tends to optimise multi-node, multi-GPU training in isolation from data semantics or to apply semantic technologies to data curation without considering the constraints of large-scale training on modern clusters. This paper introduces SemanticHPC, an experimental framework that integrates ontology and Resource Description Framework (RDF)-based semantic preprocessing with distributed AI training (Horovod/PyTorch Distributed Data Parallel) and hardware-aware optimisations for Non-Uniform Memory Access (NUMA), multi-GPU and high-speed interconnects. The framework has been evaluated on 1–8 node configurations (4–32 GPUs) on a production-grade cluster. Experiments on a medium-size Open Images V7 workload show that semantic enrichment improves validation accuracy by 3.5–4.4 absolute percentage points while keeping the additional end-to-end overhead below 8% and preserving strong scaling efficiency above 79% on eight nodes. We argue that bringing semantic technologies into the training workflow—rather than treating them as an offline, detached phase—is a promising direction for large-scale AI on HPC systems. We detail an implementation based on standard Python libraries, RDF tooling and widely adopted deep learning runtimes, and we discuss the limitations and practical hurdles that need to be addressed for broader adoption.
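The training side can be pictured with a condensed PyTorch DDP sketch, meant to be launched with torchrun on a GPU cluster; the semantic filter predicate is a stand-in for the paper's RDF/ontology checks, and the NCCL/GPU setup is assumed.

```python
# A condensed sketch: filter samples for semantic coherence, then train with
# PyTorch Distributed Data Parallel. Launch: torchrun --nproc_per_node=4 train.py
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def semantically_coherent(label: str, tags: set) -> bool:
    # Stand-in for RDF/ontology-based coherence checks on sample metadata.
    return label in tags

def main():
    dist.init_process_group("nccl")          # env vars provided by torchrun
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())
    model = DDP(torch.nn.Linear(128, 10).cuda())
    # ... build a DataLoader over the semantically filtered sample list ...
    if rank == 0:
        print("training with", dist.get_world_size(), "workers")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```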
17 pages, 1538 KB  
Article
A Mobile Augmented Reality Integrating KCHDM-Based Ontologies with LLMs for Adaptive Q&A and Knowledge Testing in Urban Heritage
by Yongjoo Cho and Kyoung Shin Park
Electronics 2026, 15(2), 336; https://doi.org/10.3390/electronics15020336 - 12 Jan 2026
Abstract
A cultural heritage augmented reality system overlays virtual information onto real-world heritage sites, enabling intuitive exploration and interpretation with spatial and temporal contexts. This study presents the design and implementation of a cognitive Mobile Augmented Reality (MAR) system that integrates KCHDM-based ontologies with large language models (LLMs) to facilitate intelligent exploration of urban heritage. While conventional AR guides often rely on static data, our system introduces a Semantic Retrieval-Augmented Generation (RAG) pipeline anchored in a structured knowledge base modeled after the Korean Cultural Heritage Data Model (KCHDM). This architecture enables the LLM to perform dynamic contextual reasoning, transforming heritage data into adaptive question-answering (Q&A) and interactive knowledge-testing quizzes that are precisely grounded in both historical and spatial contexts. The system supports on-site AR exploration and map-based remote exploration to ensure robust usability and precise spatial alignment of virtual content. To deliver a rich, multisensory experience, the system provides multimodal outputs, integrating text, images, models, and audio narration. Furthermore, the integration of a knowledge-sharing repository allows users to review and learn from others’ inquiries. This ontology-driven, LLM-integrated MAR design enhances semantic accuracy and contextual relevance, demonstrating the potential of MAR for socially enriched urban heritage experiences.
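The Semantic RAG step can be sketched as retrieve-then-ground; the tiny triple list and prompt template below are illustrative, not the KCHDM model or the system's actual prompts.

```python
# A minimal retrieve-then-ground sketch: pull facts about a landmark from a
# small triple store, then build an LLM prompt grounded in those facts.
KB = [
    ("Sungnyemun", "constructedIn", "1398"),
    ("Sungnyemun", "locatedIn", "Seoul"),
    ("Sungnyemun", "heritageType", "gate"),
]

def retrieve(entity: str):
    return [f"{p}: {o}" for s, p, o in KB if s == entity]

def build_prompt(entity: str, question: str) -> str:
    facts = "\n".join(retrieve(entity))
    return (f"Answer using only these facts about {entity}:\n{facts}\n"
            f"Question: {question}")

print(build_prompt("Sungnyemun", "When was it built?"))
# The returned prompt would then be sent to the LLM for grounded Q&A.
```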
44 pages, 9272 KB  
Systematic Review
Toward a Unified Smart Point Cloud Framework: A Systematic Review of Definitions, Methods, and a Modular Knowledge-Integrated Pipeline
by Mohamed H. Salaheldin, Ahmed Shaker and Songnian Li
Buildings 2026, 16(2), 293; https://doi.org/10.3390/buildings16020293 - 10 Jan 2026
Abstract
Reality capture has made point clouds a primary spatial data source, yet processing and integration limits hinder their potential. Prior reviews focus on isolated phases; by contrast, Smart Point Clouds (SPCs)—augmenting points with semantics, relations, and query interfaces to enable reasoning—have received limited attention. This systematic review synthesizes state-of-the-art SPC terminology and methods to propose a modular pipeline. Following PRISMA, we searched Scopus, Web of Science, and Google Scholar up to June 2025. We included English-language studies in geomatics and engineering presenting novel SPC methods. Fifty-eight publications met eligibility criteria: Direct (n = 22), Indirect (n = 22), and New Use (n = 14). We formalize an operative SPC definition—queryable, ontology-linked, provenance-aware—and map contributions across traditional point cloud processing stages (from acquisition to modeling). Evidence shows practical value in cultural heritage, urban planning, and AEC/FM via semantic queries, rule checks, and auditable updates. Comparative qualitative analysis reveals cross-study trends: higher and more uniform density stabilizes features but increases computation, and hybrid neuro-symbolic classification improves long-tail consistency; however, methodological heterogeneity precluded quantitative synthesis. We distill a configurable eight-module pipeline and identify open challenges in data at scale, domain transfer, temporal (4D) updates, surface exports, query usability, and sensor fusion. Finally, we recommend lightweight reporting standards to improve discoverability and reuse.
(This article belongs to the Section Construction Management, and Computers & Digitization)
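The review's operative SPC definition (queryable, ontology-linked, provenance-aware) can be made concrete with a small sketch; all field names and values are illustrative.

```python
# A minimal sketch of a Smart Point Cloud record: points carry a semantic
# label, an ontology link, and provenance, behind a simple query interface.
from dataclasses import dataclass

@dataclass
class SmartPoint:
    xyz: tuple            # coordinates
    label: str            # semantic class, e.g. "wall"
    ontology_iri: str     # link into a domain ontology
    provenance: str       # acquisition source / processing history

cloud = [
    SmartPoint((0.1, 2.0, 1.5), "wall", "http://example.org/bldg#Wall", "scan-2025-06"),
    SmartPoint((0.4, 2.1, 0.0), "floor", "http://example.org/bldg#Floor", "scan-2025-06"),
]

def query(cloud, label):  # rule check: select points by semantic class
    return [p for p in cloud if p.label == label]

print(len(query(cloud, "wall")), "wall points")
```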