Search Results (4,880)

Search Parameters:
Keywords = knowledge domain

24 pages, 2299 KB  
Systematic Review
Advancing Low-Carbon Construction: A Systematic Literature Review of Carbon Emissions of Prefabricated Construction
by Shengxi Zhang, Yinghao Zhao, Xianhua Fang, Yan Liu, Wenhao Bai and Shengbin Ma
Buildings 2025, 15(19), 3578; https://doi.org/10.3390/buildings15193578 - 4 Oct 2025
Abstract
Prefabricated Construction (PC) Technology is recognized for its advantages in reducing carbon emissions, lowering energy consumption, conserving materials, and improving waste management. Despite significant research efforts, few systematic analyses have been conducted to consolidate the current understanding of carbon emissions in PC. To address this gap, the present study undertakes a comprehensive review using a synergistic approach that integrates scientometric and rigorous qualitative analyses. The aim is to synthesize state-of-the-art research on carbon emissions in PC and provide insightful directions for future academic work in this field. A database of 114 relevant journal articles was compiled through a meticulous data collection process, followed by scientometric analysis to map influential journals, key articles, active countries, and emerging research trends. The qualitative analysis identifies prevailing research domains, highlights critical research gaps, and anticipates future needs. This study contributes to enriching the existing knowledge base and offers both theoretical insights and practical guidance for advancing low-carbon construction, optimizing assessment frameworks, and promoting interdisciplinary collaboration and informed policymaking.
81 pages, 4442 KB  
Systematic Review
From Illusion to Insight: A Taxonomic Survey of Hallucination Mitigation Techniques in LLMs
by Ioannis Kazlaris, Efstathios Antoniou, Konstantinos Diamantaras and Charalampos Bratsas
AI 2025, 6(10), 260; https://doi.org/10.3390/ai6100260 - 3 Oct 2025
Abstract
Large Language Models (LLMs) exhibit remarkable generative capabilities but remain vulnerable to hallucinations—outputs that are fluent yet inaccurate, ungrounded, or inconsistent with source material. To address the lack of methodologically grounded surveys, this paper introduces a novel method-oriented taxonomy of hallucination mitigation strategies in text-based LLMs. The taxonomy organizes over 300 studies into six principled categories: Training and Learning Approaches, Architectural Modifications, Input/Prompt Optimization, Post-Generation Quality Control, Interpretability and Diagnostic Methods, and Agent-Based Orchestration. Beyond mapping the field, we identify persistent challenges such as the absence of standardized evaluation benchmarks, attribution difficulties in multi-method systems, and the fragility of retrieval-based methods when sources are noisy or outdated. We also highlight emerging directions, including knowledge-grounded fine-tuning and hybrid retrieval–generation pipelines integrated with self-reflective reasoning agents. This taxonomy provides a methodological framework for advancing reliable, context-sensitive LLM deployment in high-stakes domains such as healthcare, law, and defense.
(This article belongs to the Section AI Systems: Theory and Applications)
24 pages, 1008 KB  
Article
A New Approach in Detecting Symmetrical Properties of the Role of Media in the Development of Key Competencies for Labor Market Positioning using Fuzzy AHP
by Aleksandra Penjišević, Branislav Sančanin, Ognjen Bakmaz, Maja Mladenović, Branislav M. Ranđelović and Dušan J. Simjanović
Symmetry 2025, 17(10), 1645; https://doi.org/10.3390/sym17101645 - 3 Oct 2025
Abstract
The result of accelerated development and technological progress is manifested through numerous changes in the labor market, primarily concerning the competencies of future employees. Many of those competencies have a symmetrical character. The determinants that may influence the development of specific competencies are variable and dynamic, yet they share the characteristic of transcending temporal and spatial boundaries. In this paper we propose the use of a combination of Principal Component Analysis (PCA) and Fuzzy Analytic Hierarchy Process (FAHP) to rank 21st-century competencies that are developed independently of the formal educational process. The ability to organize and plan, appreciation of diversity and multiculturalism, and the ability to solve problems appeared to be the highest-ranked competencies. The development of key competencies is symmetrical to the skills for the labor market. Also, the development of key competencies is symmetrical to the right selection of the quality of media content. The paper proves that the development of key competencies is symmetrical to the level of education of both parents. One of the key findings is that participants with higher levels of media literacy express more readiness for the contemporary labor market. Moreover, the family, particularly parents, exerts a highly significant positive influence on the development of 21st-century competencies. Parents with higher levels of education, in particular, provide a stimulating environment for learning, foster critical thinking, and encourage the exploration of diverse domains of knowledge.
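The FAHP ranking step described in the abstract can be illustrated with a minimal sketch of Buckley's geometric-mean method. The three competency names and all pairwise judgments below are invented for demonstration; they are not the paper's data.

```python
# Toy Fuzzy AHP (Buckley's geometric-mean method) over triangular
# fuzzy numbers (l, m, u). Judgments are hypothetical.

def fahp_weights(matrix):
    """matrix[i][j] is a triangular fuzzy comparison (l, m, u) of
    criterion i vs. criterion j. Returns crisp, normalized weights."""
    n = len(matrix)
    # Fuzzy geometric mean of each row, component-wise.
    geo = []
    for row in matrix:
        l = m = u = 1.0
        for (lo, mi, up) in row:
            l *= lo; m *= mi; u *= up
        geo.append((l ** (1 / n), m ** (1 / n), u ** (1 / n)))
    # Fuzzy weights: row mean divided by the total (division swaps bounds).
    sl = sum(g[0] for g in geo)
    sm = sum(g[1] for g in geo)
    su = sum(g[2] for g in geo)
    fuzzy_w = [(g[0] / su, g[1] / sm, g[2] / sl) for g in geo]
    # Defuzzify by centroid and renormalize to sum to 1.
    crisp = [(l + m + u) / 3 for (l, m, u) in fuzzy_w]
    total = sum(crisp)
    return [c / total for c in crisp]

EQ = (1, 1, 1)            # equally important
W = (2, 3, 4)             # moderately more important
IW = (1 / 4, 1 / 3, 1 / 2)  # reciprocal of W

comparisons = [
    [EQ, W,  W],    # organize & plan
    [IW, EQ, W],    # problem solving
    [IW, IW, EQ],   # multiculturalism
]
weights = fahp_weights(comparisons)
ranking = sorted(zip(["organize&plan", "problem solving", "multiculturalism"],
                     weights), key=lambda t: -t[1])
print(ranking)
```

With these invented judgments the first criterion dominates; the actual paper combines FAHP with PCA on survey data, which this sketch does not attempt.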
22 pages, 2445 KB  
Article
The Construction of a Design Method Knowledge Graph Driven by Multi-Source Heterogeneous Data
by Jixing Shi, Kaiyi Wang, Zhongqing Wang, Zhonghang Bai and Fei Hu
Appl. Sci. 2025, 15(19), 10702; https://doi.org/10.3390/app151910702 - 3 Oct 2025
Abstract
To address the fragmentation and weak correlation of knowledge in the design method domain, this paper proposes a framework for constructing a knowledge graph driven by multi-source heterogeneous data. The process involves collecting multi-source heterogeneous data and subsequently utilizing text mining and natural language processing techniques to extract design themes and method elements. A “theme–stage–attribute” three-dimensional mapping model is established to achieve semantic coupling of knowledge. The BERT-BiLSTM-CRF (Bidirectional Encoder Representations from Transformers-Bidirectional Long Short-Term Memory-Conditional Random Field) model is employed for entity recognition and relation extraction, while the Sentence-BERT (Sentence Bidirectional Encoder Representations from Transformers) model is used to perform multi-source knowledge fusion. The Neo4j graph database facilitates knowledge storage, visualization, and querying, forming the basis for developing a prototype of a design method recommendation system. The framework’s effectiveness was validated through experiments on extraction performance and knowledge graph quality. The results demonstrate that the framework achieves an F1 score of 91.2% for knowledge extraction, an 8.44% improvement over the baseline. The resulting graph’s node and relation coverage reached 94.1% and 91.2%, respectively. In complex semantic query tasks, the framework shows a significant advantage over traditional classification systems, achieving a maximum F1 score of 0.97. It can effectively integrate dispersed knowledge in the field of design methods and support method matching throughout the entire design process. This research is of significant value for advancing knowledge management and application in innovative product design.
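The kind of graph-backed method matching the abstract describes can be sketched with a toy in-memory triple store standing in for the paper's Neo4j database. The entity names, relation names, and query below are invented for illustration, not taken from the paper's schema.

```python
# Minimal (head, relation, tail) triple store mimicking a
# "theme-stage-attribute" style design-method graph. All names here
# are hypothetical examples.

from collections import defaultdict

class TinyKG:
    def __init__(self):
        self.out = defaultdict(list)   # head -> [(relation, tail)]

    def add(self, head, relation, tail):
        self.out[head].append((relation, tail))

    def query(self, head, relation=None):
        """Return tails linked from `head`, optionally filtered by relation."""
        return [t for (r, t) in self.out[head] if relation in (None, r)]

kg = TinyKG()
kg.add("Brainstorming", "applies_to_stage", "concept generation")
kg.add("Brainstorming", "has_attribute", "divergent")
kg.add("TRIZ", "applies_to_stage", "concept generation")
kg.add("TRIZ", "has_attribute", "systematic")

# Method matching: which methods apply to the concept-generation stage?
methods = sorted(h for h in kg.out
                 if "concept generation" in kg.query(h, "applies_to_stage"))
print(methods)   # ['Brainstorming', 'TRIZ']
```

In Neo4j the same lookup would be a Cypher `MATCH` over typed relationships; this dictionary version only shows the shape of the data and query.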
15 pages, 272 KB  
Article
Prevalence and Practice Domains of Advanced Practice Nurses Among Participants in the Latin American Nursing Leadership School: A Cross-Sectional Study
by Patricia Rebollo-Gómez, Esperanza Barroso-Corroto, Joseba Rabanales-Sotos, Ángel López-González, José Alberto Laredo-Aguilera and Juan Manuel Carmona-Torres
Healthcare 2025, 13(19), 2515; https://doi.org/10.3390/healthcare13192515 - 3 Oct 2025
Abstract
Aims: This study aimed to determine the prevalence of nurses in a Latin American leadership school who meet advanced nurse standards. Design: A descriptive cross-sectional study was conducted. Methods: Data were collected between January and November 2024 from a total of 92 participants from the Latin American Leadership School of FUDEN-FEPPEN (Foundation for the Development of Nursing—Pan American Federation of Professional Nurses). The response rate was 13%. The Spanish-validated version of the APRD (Advanced Practice Role Delineation) instrument was used. The study was approved by the Social Ethics Committee of UCLM (Universidad Castilla-La Mancha). Inference analysis was performed to examine factors associated with advanced practice domains. Results: A total of 92 nurses participated in the study. Among the participants, 35.86% (33 nurses) met the requirements for advanced practice nurses and the minimum training required by the International Council of Nurses. Nurses in both primary care and specialized care perform more advanced practice activities in direct care; however, nurses practicing teaching and research perform more advanced practice activities in the indirect practice domains (training, research and teaching). Conclusions: The percentage of nurses participating in the Latin American leadership school who met the standards was determined, with the most frequent domains being those related to direct care, such as expert care planning, integrated care, and inter-professional collaboration. Implications for the profession and patient care: To our knowledge, this is the first study that describes the profile of advanced practice nurses in the Latin American context. This study shows that advanced practice activities exist and are practiced, but there is no clear delimitation or regulation of these activities. Reporting method: The study was conducted following the STROBE guidelines.
Public contribution: This study did not include patient or public involvement in its design, conduct, or reporting.
(This article belongs to the Section Nursing)
32 pages, 6548 KB  
Article
Smart City Ontology Framework for Urban Data Integration and Application
by Xiaolong He, Xi Kuai, Xinyue Li, Zihao Qiu, Biao He and Renzhong Guo
Smart Cities 2025, 8(5), 165; https://doi.org/10.3390/smartcities8050165 - 3 Oct 2025
Abstract
Rapid urbanization and the proliferation of heterogeneous urban data have intensified the challenges of semantic interoperability and integrated urban governance. To address this, we propose the Smart City Ontology Framework (SMOF), a standards-driven ontology that unifies Building Information Modeling (BIM), Geographic Information Systems (GIS), Internet of Things (IoT), and relational data. SMOF organizes five core modules and eleven major entity categories, with universal and extensible attributes and relations to support cross-domain data integration. SMOF was developed through competency questions, authoritative knowledge sources, and explicit design principles, ensuring methodological rigor and alignment with real governance needs. Its evaluation combined three complementary approaches against baseline models: quantitative metrics demonstrated higher attribute richness and balanced hierarchy; LLM-as-judge assessments confirmed conceptual completeness, consistency, and scalability; and expert scoring highlighted superior scenario fitness and clarity. Together, these results indicate that SMOF achieves both structural soundness and practical adaptability. Beyond structural evaluation, SMOF was validated in two representative urban service scenarios, demonstrating its capacity to integrate heterogeneous data, support graph-based querying and enable ontology-driven reasoning. In sum, SMOF offers a robust and scalable solution for semantic data integration, advancing smart city governance and decision-making efficiency.
(This article belongs to the Special Issue Breaking Down Silos in Urban Services)
18 pages, 3371 KB  
Article
Fusing Geoscience Large Language Models and Lightweight RAG for Enhanced Geological Question Answering
by Bo Zhou and Ke Li
Geosciences 2025, 15(10), 382; https://doi.org/10.3390/geosciences15100382 - 2 Oct 2025
Abstract
Mineral prospecting from vast geological text corpora is impeded by challenges in domain-specific semantic interpretation and knowledge synthesis. General-purpose Large Language Models (LLMs) struggle to parse the complex lexicon and relational semantics of geological texts, limiting their utility for constructing precise knowledge graphs (KGs). Our novel framework addresses this gap by integrating a domain-specific LLM, GeoGPT, with a lightweight retrieval-augmented generation architecture, LightRAG. Within this framework, GeoGPT automates the construction of a high-quality mineral-prospecting KG by performing ontology definition, entity recognition, and relation extraction. The LightRAG component then leverages this KG to power a specialized geological question-answering (Q&A) system featuring a dual-layer retrieval mechanism for enhanced precision and an incremental update capability for dynamic knowledge incorporation. The results indicate that the proposed method achieves a mean F1-score of 0.835 for entity extraction, representing a 17% to 25% performance improvement over general-purpose large models using generic prompts. Furthermore, the geological Q&A model, built upon the LightRAG framework with GeoGPT as its core, demonstrates a superior win rate against the DeepSeek-V3 and Qwen2.5-72B general-purpose large models by 8–29% in the geochemistry domain and 53–78% in the remote sensing geology domain. This study establishes an effective and scalable methodology for intelligent geological text analysis, enabling lightweight, high-performance Q&A systems that accelerate knowledge discovery in mineral exploration.
26 pages, 2248 KB  
Article
Exploring Critical Success Factors of AI-Integrated Digital Twins on Saudi Construction Project Deliverables: A PLS-SEM Approach
by Aljawharah A. Alnaser and Haytham Elmousalami
Buildings 2025, 15(19), 3543; https://doi.org/10.3390/buildings15193543 - 2 Oct 2025
Abstract
Artificial intelligence-enhanced digital twins are widely acknowledged as effective instruments for facilitating digital transformation in the building industry. Nonetheless, their implementation remains uneven, with little knowledge regarding the organizational conditions that convert these technologies into enhanced project outcomes. This study investigates the critical success factors (CSFs) that shape the effectiveness of AI-integrated digital twins in Saudi Arabia’s construction industry. A hierarchical structural equation model was developed to capture three dimensions of CSFs (human-centric, technological, and governance-related) and to evaluate their impact on project deliverables, including time, cost, resource utilization, quality, and risk. Data from a survey of 120 industry professionals were assessed utilizing a PLS-SEM approach, incorporating rigorous measurement and structural assessments. Results indicate that technology and infrastructural factors have the most significant impact on critical success factors, followed by governance and human-related enablers. Consequently, CSFs substantially forecast project outcomes, mediating the influences of all three domains. These findings underscore the importance of investing in data quality, scalable infrastructure, and governance frameworks, complemented by workforce training and incentives, to fully realize the benefits of AI-enabled digital transformations. The study presents a validated paradigm that elucidates how enabling conditions enhance performance improvements, providing practical direction for industry players and policymakers.
(This article belongs to the Special Issue The Power of Knowledge in Enhancing Construction Project Delivery)
27 pages, 5542 KB  
Article
ILF-BDSNet: A Compressed Network for SAR-to-Optical Image Translation Based on Intermediate-Layer Features and Bio-Inspired Dynamic Search
by Yingying Kong and Cheng Xu
Remote Sens. 2025, 17(19), 3351; https://doi.org/10.3390/rs17193351 - 1 Oct 2025
Abstract
Synthetic aperture radar (SAR) exhibits all-day and all-weather capabilities, granting it significant application in remote sensing. However, interpreting SAR images requires extensive expertise, making SAR-to-optical remote sensing image translation a crucial research direction. While conditional generative adversarial networks (CGANs) have demonstrated exceptional performance in image translation tasks, their massive number of parameters poses substantial challenges. Therefore, this paper proposes ILF-BDSNet, a compressed network for SAR-to-optical image translation. Specifically, first, standard convolutions in the feature-transformation module of the teacher network are replaced with depthwise separable convolutions to construct the student network, and a dual-resolution collaborative discriminator based on PatchGAN is proposed. Next, knowledge distillation based on intermediate-layer features and channel pruning via weight sharing are designed to train the student network. Then, the bio-inspired dynamic search of channel configuration (BDSCC) algorithm is proposed to efficiently select the optimal subnet. Meanwhile, the pixel-semantic dual-domain alignment loss function is designed. The feature-matching loss within this function establishes an alignment mechanism based on intermediate-layer features from the discriminator. Extensive experiments demonstrate the superiority of ILF-BDSNet, which significantly reduces the number of parameters and computational complexity while still generating high-quality optical images, providing an efficient solution for SAR image translation in resource-constrained environments.
15 pages, 1081 KB  
Article
Digital Tools for Decision Support in Social Rehabilitation
by Valeriya Gribova and Elena Shalfeeva
J. Pers. Med. 2025, 15(10), 468; https://doi.org/10.3390/jpm15100468 - 1 Oct 2025
Abstract
Objectives: The process of social rehabilitation involves several stages, from assessing an individual’s condition and determining their potential for rehabilitation to implementing a personalized plan with continuous monitoring of progress. Advances in information technology, including artificial intelligence, enable the use of software-assisted solutions for objective assessments and personalized rehabilitation strategies. The research aims to present interconnected semantic models that represent expandable knowledge in the field of rehabilitation, as well as an integrated framework and methodology for constructing virtual assistants and personalized decision support systems based on these models. Materials and Methods: The knowledge and data accumulated in these areas require special tools for their representation, access, and use. To develop a set of models that form the basis of decision support systems in rehabilitation, it is necessary to (1) analyze the domain, identify concepts and group them by type, and establish a set of resources that should contain knowledge for intellectual support; (2) create a set of semantic models to represent knowledge for the rehabilitation of patients. The ontological approach, combined with the cloud cover of the IACPaaS platform, has been proposed. Results: This paper presents a suite of semantic models and a methodology for implementing decision support systems capable of expanding rehabilitation knowledge through updated regulatory frameworks and empirical data. Conclusions: The potential advantage of such systems is the combination of the most relevant knowledge with a high degree of personalization in rehabilitation planning.
(This article belongs to the Section Personalized Medical Care)
42 pages, 7970 KB  
Review
Object Detection with Transformers: A Review
by Tahira Shehzadi, Khurram Azeem Hashmi, Marcus Liwicki, Didier Stricker and Muhammad Zeshan Afzal
Sensors 2025, 25(19), 6025; https://doi.org/10.3390/s25196025 - 1 Oct 2025
Abstract
The astounding performance of transformers in natural language processing (NLP) has motivated researchers to explore their applications in computer vision tasks. A detection transformer (DETR) introduces transformers to object detection tasks by reframing detection as a set prediction problem. Consequently, it eliminates the need for proposal generation and post-processing steps. Despite competitive performance, DETR initially suffered from slow convergence and poor detection of small objects. However, numerous improvements have been proposed to address these issues, yielding substantial gains and enabling DETR to achieve state-of-the-art performance. To the best of our knowledge, this paper is the first to provide a comprehensive review of 25 recent DETR advancements. We dive into both the foundational modules of DETR and its recent enhancements, such as modifications to the backbone structure, query design strategies, and refinements to attention mechanisms. Moreover, we conduct a comparative analysis across various detection transformers, evaluating their performance and network architectures. We aim for this study to encourage further research in addressing the existing challenges and exploring the application of transformers in the object detection domain.
(This article belongs to the Section Sensing and Imaging)
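The set-prediction formulation at the heart of DETR pairs each decoder query one-to-one with a ground-truth object before computing the loss. The sketch below does that bipartite matching by brute force on a small, invented cost matrix; real implementations use the Hungarian algorithm, and the cost values here are hypothetical.

```python
# Toy bipartite matching for set prediction: rows are queries, columns
# are ground-truth objects, cost[q][g] stands in for a combined
# classification + box-regression cost. Values are made up.

from itertools import permutations

def best_matching(cost):
    """Return the query->object permutation minimizing total cost."""
    n = len(cost)
    best_total, best_perm = float("inf"), None
    for perm in permutations(range(n)):      # O(n!) - toy sizes only
        total = sum(cost[q][perm[q]] for q in range(n))
        if total < best_total:
            best_total, best_perm = total, perm
    return best_perm, best_total

cost = [
    [0.2, 0.9, 0.8],
    [0.7, 0.1, 0.9],
    [0.6, 0.8, 0.3],
]
assignment, total = best_matching(cost)
print(assignment)   # (0, 1, 2): each query takes its cheapest object
```

Because matching is one-to-one over the whole set, duplicate predictions become expensive, which is why DETR can drop non-maximum suppression as a post-processing step.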
38 pages, 4628 KB  
Article
Towards Optimal Sensor Placement for Cybersecurity: An Extensible Model for Defensive Cybersecurity Sensor Placement Evaluation
by Neal Wagner, Suresh K. Damodaran and Michael Reavey
Sensors 2025, 25(19), 6022; https://doi.org/10.3390/s25196022 - 1 Oct 2025
Abstract
Optimal sensor placement (OSP) is concerned with determining a configuration for a collection of sensors, including sensor type, number, and location, that yields the best evaluation according to a predefined measure of efficacy. Central to the OSP problem is the need for a method to evaluate candidate sensor configurations. Despite the wide use of cybersecurity sensors for the protection of network systems against cyber attacks, there is limited research focused on OSP for defensive cybersecurity, and limited research on evaluation methods for cybersecurity sensor configurations that consider both the sensor data source locations and the sensor analytics/rules used. This paper seeks to address these gaps by providing an extensible mathematical model for the evaluation of cybersecurity sensor configurations, including sensor data source locations and analytics, meant to defend against cyber attacks. We demonstrate model usage via a case study on a representative network system subject to multi-step attacks that employ real cyber attack techniques recorded in the MITRE ATT&CK knowledge base and protected by a configuration of defensive cybersecurity sensors. The proposed model supports the potential for adaptation of techniques and methods developed for OSP in other problem domains than the cybersecurity domain.
(This article belongs to the Section Sensor Networks)
20 pages, 5435 KB  
Article
Do LLMs Offer a Robust Defense Mechanism Against Membership Inference Attacks on Graph Neural Networks?
by Abdellah Jnaini and Mohammed-Amine Koulali
Computers 2025, 14(10), 414; https://doi.org/10.3390/computers14100414 - 1 Oct 2025
Abstract
Graph neural networks (GNNs) are deep learning models that process structured graph data. By leveraging their graphs/node classification and link prediction capabilities, they have been effectively applied in multiple domains such as community detection, location sharing services, and drug discovery. These powerful applications and the vast availability of graphs in diverse fields have facilitated the adoption of GNNs in privacy-sensitive contexts (e.g., banking systems and healthcare). Unfortunately, GNNs are vulnerable to the leakage of sensitive information through well-defined attacks. Our main focus is on membership inference attacks (MIAs) that allow the attacker to infer whether a given sample belongs to the training dataset. To prevent this, we introduce three LLM-guided defense mechanisms applied at the posterior level: posterior encoding with noise, knowledge distillation, and secure aggregation. Our proposed approaches not only successfully reduce MIA accuracy but also maintain the model’s performance on the node classification task. Our findings, validated through extensive experiments on widely used GNN architectures, offer insights into balancing privacy preservation with predictive performance.
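The first of the three posterior-level defenses named in the abstract, posterior encoding with noise, can be sketched as perturbing a model's released class probabilities so an attacker sees a blurred confidence signal. This is a minimal stand-alone sketch, not the paper's LLM-guided mechanism; the noise scale and posterior values are illustrative.

```python
# Perturb a class-probability vector with bounded uniform noise,
# clip to stay positive, and renormalize so it remains a valid
# distribution. Membership inference attacks often key on
# overconfident posteriors, which this blunts.

import random

def noisy_posterior(probs, scale=0.1, rng=None):
    """Add uniform noise in [-scale, scale] to each probability,
    clip, and renormalize."""
    rng = rng or random.Random(0)   # fixed seed for reproducibility
    noisy = [max(p + rng.uniform(-scale, scale), 1e-6) for p in probs]
    total = sum(noisy)
    return [p / total for p in noisy]

posterior = [0.92, 0.05, 0.03]      # confident, membership-leaking output
released = noisy_posterior(posterior)
print(released)
```

The trade-off the abstract reports (lower MIA accuracy at little cost to task accuracy) corresponds here to choosing a scale large enough to mask membership signal but small enough to keep the argmax prediction stable.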
18 pages, 2718 KB  
Article
Metamodel-Based Digital Twin Architecture with ROS Integration for Heterogeneous Model Unification in Robot Shaping Processes
by Qingxin Li, Peng Zeng, Qiankun Wu and Hualiang Zhang
Machines 2025, 13(10), 898; https://doi.org/10.3390/machines13100898 - 1 Oct 2025
Abstract
Precision manufacturing requires handling multi-physics coupling during processing, where digital twin and AI technologies enable rapid robot programming under customized requirements. However, heterogeneous data sources, diverse domain models, and rapidly changing demands pose significant challenges to digital twin system integration. To overcome these limitations, this paper proposes a digital twin modeling strategy based on a metamodel and a virtual–real fusion architecture, which unifies models between the virtual and physical domains. Within this framework, subsystems achieve rapid integration through ontology-driven knowledge configuration, while ROS provides the execution environment for establishing robot manufacturing digital twin scenarios. A case study of a robot shaping system demonstrates that the proposed architecture effectively addresses heterogeneous data association, model interaction, and application customization, thereby enhancing the adaptability and intelligence of precision manufacturing processes.
(This article belongs to the Section Advanced Manufacturing)
19 pages, 1182 KB  
Article
HGAA: A Heterogeneous Graph Adaptive Augmentation Method for Asymmetric Datasets
by Hongbo Zhao, Wei Liu, Congming Gao, Weining Shi, Zhihong Zhang and Jianfei Chen
Symmetry 2025, 17(10), 1623; https://doi.org/10.3390/sym17101623 - 1 Oct 2025
Abstract
Edge intelligence plays an increasingly vital role in ensuring the reliability of distributed microservice-based applications, which are widely used in domains such as e-commerce, industrial IoT, and cloud-edge collaborative platforms. However, anomaly detection in these systems encounters a critical challenge: labeled anomaly data are scarce. This scarcity leads to severe class asymmetry and compromised detection performance, particularly under the resource constraints of edge environments. Recent approaches based on Graph Neural Networks (GNNs)—often integrated with DeepSVDD and regularization techniques—have shown potential, but they rarely address this asymmetry in an adaptive, scenario-specific way. This work proposes Heterogeneous Graph Adaptive Augmentation (HGAA), a framework tailored for edge intelligence scenarios. HGAA dynamically optimizes graph data augmentation by leveraging feedback from online anomaly detection. To enhance detection accuracy while adhering to resource constraints, the framework incorporates a selective bias toward underrepresented anomaly types. It uses knowledge distillation to model dataset-dependent distributions and adaptively adjusts augmentation probabilities, thus avoiding excessive computational overhead in edge environments. Additionally, a dynamic adjustment mechanism evaluates augmentation success rates in real time, refining the selection processes to maintain model robustness. Experiments were conducted on two real-world datasets (TraceLog and FlowGraph) under simulated edge scenarios. Results show that HGAA consistently outperforms competitive baseline methods. Specifically, compared with the best non-adaptive augmentation strategies, HGAA achieves an average improvement of 4.5% in AUC and 4.6% in AP. Even larger gains are observed in challenging cases: for example, when using the HGT model on the TraceLog dataset, AUC improves by 14.6% and AP by 18.1%. Beyond accuracy, HGAA also significantly enhances efficiency: compared with filter-based methods, training time is reduced by up to 71% on TraceLog and 8.6% on FlowGraph, confirming its suitability for resource-constrained edge environments. These results highlight the potential of adaptive, edge-aware augmentation techniques in improving microservice anomaly detection within heterogeneous, resource-limited environments.
(This article belongs to the Special Issue Symmetry and Asymmetry in Embedded Systems)
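The dynamic adjustment step the HGAA abstract describes, raising the selection probability of augmentations that succeed and lowering those that fail, can be sketched as a multiplicative-weights update. The augmentation names, learning rate, and feedback below are invented for illustration; they are not HGAA's actual update rule.

```python
# Multiplicative-weights update over augmentation selection
# probabilities: multiply by (1 + lr) on observed success,
# (1 - lr) on failure, then renormalize.

def update_probs(probs, successes, lr=0.5):
    """Reweight selection probabilities from one round of feedback."""
    weights = [p * (1 + lr if ok else 1 - lr)
               for p, ok in zip(probs, successes)]
    total = sum(weights)
    return [w / total for w in weights]

augs = ["edge_drop", "node_mask", "subgraph_sample"]
probs = [1 / 3, 1 / 3, 1 / 3]
# Hypothetical feedback from one round of online anomaly detection:
probs = update_probs(probs, [True, False, True])
print(dict(zip(augs, [round(p, 3) for p in probs])))
# edge_drop and subgraph_sample gain probability mass; node_mask shrinks
```

Repeating this each round concentrates selection on augmentations that help the detector, which is the adaptive, feedback-driven behavior the abstract credits for HGAA's gains over non-adaptive strategies.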