Search Results (1,037)

Search Parameters:
Keywords = language network analysis

31 pages, 3673 KB  
Article
Unveiling Systemic Risks in Sustainable Safety Management: Integrating BERTopic, LLM, and SNA for Accident Text Mining
by Lanjing Wang, Rui Huang, Yige Chen, Yunxiang Yang, Jing Zhan and Haiyuan Gong
Sustainability 2026, 18(8), 3787; https://doi.org/10.3390/su18083787 - 10 Apr 2026
Abstract
To unveil the underlying risk structures in complex industrial systems, this paper proposes a hybrid analytical framework that integrates BERTopic modeling, a large language model (LLM), and social network analysis (SNA). This framework aims to extract systemic safety intelligence from unstructured accident reports. It first employs BERTopic to identify latent causal topics based on 745 Chinese accident investigation reports and utilizes DeepSeek-V3.1 (LLM) for semantic refinement and causal mapping of these topics. Subsequently, a semantic network of causal keywords based on positive pointwise mutual information (PPMI) is constructed, and its topological structure is analyzed using SNA methods. The study identifies and analyzes five major risk communities: confined spaces, fire, mining, construction, and road traffic. It reveals that accident causation exhibits the small-world characteristics of multi-factor coupling and non-linearity, with core risk nodes concentrated in systemic inducements such as organizational management and compliance deficiencies. The results demonstrate that this framework effectively identifies the latent systemic risk patterns embedded within the texts, providing methodological support for developing sustainable safety management mechanisms based on design for safety. Full article
(This article belongs to the Special Issue Achieving Sustainability in Safety Management and Design for Safety)
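The PPMI-weighted semantic network step described in this abstract can be sketched as follows. This is a minimal illustration only: the toy keyword lists, the document-level co-occurrence counting, and the use of networkx for the SNA statistics are assumptions, not the authors' actual corpus or pipeline.

```python
import math
from collections import Counter
from itertools import combinations

import networkx as nx

# Toy "causal keyword" lists standing in for topics extracted from accident reports.
docs = [
    ["ventilation", "confined_space", "gas", "training"],
    ["confined_space", "gas", "supervision"],
    ["fire", "training", "supervision"],
    ["fire", "gas", "ventilation"],
]

# Document frequencies for single keywords and unordered keyword pairs.
word_counts = Counter(w for d in docs for w in set(d))
pair_counts = Counter(
    tuple(sorted(p)) for d in docs for p in combinations(set(d), 2)
)
n_docs = len(docs)

def ppmi(w1, w2):
    """Positive pointwise mutual information over document co-occurrence."""
    p_joint = pair_counts[tuple(sorted((w1, w2)))] / n_docs
    if p_joint == 0:
        return 0.0
    p1, p2 = word_counts[w1] / n_docs, word_counts[w2] / n_docs
    return max(0.0, math.log2(p_joint / (p1 * p2)))

# Keep only positively associated pairs as weighted edges, then read off
# the usual SNA-style topology statistics.
G = nx.Graph()
for (w1, w2), _ in pair_counts.items():
    w = ppmi(w1, w2)
    if w > 0:
        G.add_edge(w1, w2, weight=w)

density = nx.density(G)
clustering = nx.average_clustering(G)
degree_centrality = nx.degree_centrality(G)
```

On a real corpus, high-degree nodes in `G` would correspond to the "core risk nodes" the abstract describes.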
14 pages, 871 KB  
Article
Validation of a Dermatology-Focused Multimodal Image-and-Data Assistant in Diagnosis and Management of Common Dermatologic Conditions
by Joshua Mijares, Emma J. Bisch, Eanna DeGuzman, Kanika Garg, David Pontes, Neil K. Jairath, Vignesh Ramachandran, George Jeha, Andjela Nemcevic and Syril Keena T. Que
Medicina 2026, 62(4), 715; https://doi.org/10.3390/medicina62040715 - 9 Apr 2026
Abstract
Background and Objectives: Shortages of dermatologists create significant barriers to care, particularly for inflammatory and history-dependent conditions where image-only artificial intelligence (AI) classifiers have limited applicability. Current teledermatology solutions largely focus on single-task, morphology-based neoplasm classifiers, leaving the vast majority of dermatologic presentations underserved. This study evaluated the diagnostic accuracy and management plan quality of Dermflow (Prava Medical, Delaware, USA), a proprietary dermatology-focused Multimodal Image-and-Data Assistant (MIDA) that autonomously gathers dermatology-specific history, integrates data with patient-submitted images, and outputs structured differential diagnoses and management summaries. Materials and Methods: Two AI systems, Dermflow and Claude Sonnet 4 (Claude, a leading vision–language model), analyzed 87 clinical images from the Skin Condition Image Network and Diverse Dermatology Images databases, representing 10 inflammatory dermatoses and 9 neoplastic conditions stratified across Fitzpatrick Skin Tone (FST) categories (I–II, III–IV, V–VI). For the diagnostic comparison, Dermflow received images and autonomously gathered clinical history, while Claude received identical images without history. For the management plan comparison, both systems received the correct diagnosis and the clinical histories gathered by Dermflow. The primary outcome was diagnostic accuracy. The secondary outcome was management plan quality, assessed by two blinded dermatologists across eight clinical dimensions using 5-point Likert scales. Chi-square tests compared diagnostic accuracy between models; t-tests and ANOVA compared management quality scores. Results: Dermflow achieved markedly superior diagnostic accuracy compared to Claude (86.2% vs. 24.1%, p < 0.001). 
Both models maintained consistent diagnostic performance across FST categories without significant within-model differences (Dermflow p = 0.924; Claude p = 0.828). Management plan quality showed no significant overall differences between models. However, composite management quality scores declined significantly for darker skin tones across both systems: Dermflow scored 4.20 (FST I–II), 3.99 (FST III–IV), and 3.47 (FST V–VI); Claude scored 4.35, 3.97, and 3.44, respectively (p < 0.001 for most pairwise FST comparisons within each model). Conclusions: Multimodal AI integrating targeted history with image analysis achieves substantially higher diagnostic accuracy than image-only approaches across both inflammatory and neoplastic dermatologic conditions. Autonomous history gathering addresses fundamental limitations of morphology-only classifiers and enables scalable, patient-facing triage across the full spectrum of dermatologic disease. However, both models demonstrated reduced management plan quality for darker skin tones despite receiving the correct diagnosis, suggesting persistent training data limitations that require targeted bias-mitigation strategies beyond domain-specific instruction. Full article

19 pages, 1466 KB  
Article
D2MNet: Difference-Aware Decoupling and Multi-Prompt Learning for Medical Difference Visual Question Answering
by Lingge Lai, Weihua Ou, Jianping Gou and Zhonghua Liu
J. Imaging 2026, 12(4), 162; https://doi.org/10.3390/jimaging12040162 - 9 Apr 2026
Abstract
Difference visual question answering (Diff-VQA) aims to answer questions by identifying and reasoning about differences between medical images. Existing methods often rely on simple feature subtraction or fusion to model image differences, while overlooking the asymmetric descriptive requirements of changed and unchanged cases and providing limited task-specific guidance to pretrained language decoders. To address these limitations, we propose D2MNet (Difference-aware Decoupling and Multi-prompt Network), a framework for medical Diff-VQA that combines change-aware reasoning with prompt-guided answer generation. Specifically, a Change Analysis Module (CAM) predicts whether a change is present and produces a binary change-aware prompt; a Difference-Aware Module (DAM) uses dual attention to capture fine-grained difference features; and a multi-prompt learning mechanism (MLM) injects question-aware, change-aware, and learnable prompts into the language decoder to improve contextual alignment and response generation. Experiments on the MIMIC-DiffVQA benchmark show that D2MNet achieves a CIDEr score of 2.907 ± 0.040, outperforming the strongest baseline, ReAl (2.409), under the same evaluation setting. These results demonstrate the effectiveness of the proposed design on benchmark medical Diff-VQA and suggest its potential for assisting difference-aware medical answer generation. Full article
(This article belongs to the Section Medical Imaging)

22 pages, 725 KB  
Review
From In Silico Hypothesis to Validation: The Role of Real-World Evidence in the Preliminary Verification of AI-Generated Drug-Repositioning Candidates: A Comprehensive Review
by Michał Gałuszewski, Jan Olszewski, Karolina Jankowska, Krzysztof Wójcik and Anna Bielecka-Wajdman
J. Clin. Med. 2026, 15(7), 2801; https://doi.org/10.3390/jcm15072801 - 7 Apr 2026
Abstract
Background/Objectives: Drug repositioning has emerged as a promising strategy to address the innovation crisis in pharmaceutical development. While artificial intelligence enables efficient in silico hypothesis generation, clinical translation remains challenging. This study aims to evaluate the role of Real-World Evidence (RWE) in validating AI-generated drug-repositioning candidates. Methods: A comprehensive literature review was conducted in PubMed using a predefined search strategy integrating drug repositioning, artificial intelligence, and real-world data. After multi-stage screening, 22 original research articles were included for analysis. Results: Network-based algorithms and natural language processing dominated AI-driven hypothesis generation. Validation using Electronic Health Records and insurance databases enabled retrospective assessment of drug efficacy across large populations. Successful applications were identified in neurodegenerative, metabolic, infectious, autoimmune, and psychiatric diseases. Conclusions: The integration of AI-based analytics with RWE provides a promising framework for the preliminary verification of computational predictions, potentially informing the translational pathway toward clinical practice. However, the effectiveness of this approach remains dependent on data quality and the specific therapeutic context, requiring further standardization of clinical data. Full article
(This article belongs to the Section Pharmacology)

35 pages, 3162 KB  
Article
An LLM-Based Agentic Network Traffic Incident-Report Approach Towards Explainable-AI Network Defense
by Chia-Hong Chou, Arjun Sudheer and Younghee Park
J. Sens. Actuator Netw. 2026, 15(2), 32; https://doi.org/10.3390/jsan15020032 - 7 Apr 2026
Abstract
Traditional intrusion detection systems for IoT networks achieve high classification accuracy but lack interpretability and actionable incident-response capabilities, limiting their operational value in security-critical environments. This paper presents a graph-based multi-agent framework that integrates ensemble machine learning with Large Language Model (LLM)-powered incident report generation via Retrieval-Augmented Generation (RAG). The system employs a three-phase architecture: (1) a lightweight Random Forest binary pre-detection, achieving 99.49% accuracy with a 6 MB model size for edge deployment; (2) ensemble classification combining Multi-Layer Perceptron, Random Forest, and XGBoost with soft voting and SHAP-based feature attribution for explainability; and (3) a ReAct-based summary agent that synthesizes classification results with external threat intelligence from Web search and scholarly databases to generate evidence-grounded incident reports. To address the challenge of evaluating non-deterministic LLM outputs, we introduce custom RAG evaluation metrics—faithfulness and groundedness implemented via the LLM-as-Judge framework. Experimental validation on the ACI IoT Network Dataset 2023 demonstrates ensemble accuracy exceeding 99.8% across 11 attack classes; perfect groundedness scores (1.0), indicating all generated claims derive from the retrieved context; and moderate faithfulness (0.64), reflecting appropriate analytical synthesis. The ensemble approach mitigates individual model weaknesses, improving the UDP Flood F1 score from 48% (MLP alone) to 95% through soft voting. This work bridges the gap between high-accuracy detection and trustworthy, actionable security analysis for automated incident-response systems. Full article
(This article belongs to the Special Issue Feature Papers in the Section of Network Security and Privacy)
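The soft-voting ensemble stage can be illustrated with scikit-learn. This is a toy sketch rather than the paper's pipeline: the ACI IoT data are replaced with synthetic data, GradientBoosting stands in for the XGBoost member, and the SHAP attribution and LLM report-generation stages are omitted.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (
    GradientBoostingClassifier,
    RandomForestClassifier,
    VotingClassifier,
)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for labeled network-flow features.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# "Soft" voting averages the members' predicted class probabilities, so one
# confident member can outweigh two uncertain ones -- the mechanism the paper
# credits for rescuing the UDP Flood F1 score.
ensemble = VotingClassifier(
    estimators=[
        ("mlp", MLPClassifier(max_iter=500, random_state=0)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
accuracy = ensemble.score(X_te, y_te)
```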

24 pages, 3164 KB  
Article
Research on Evolution Characteristics and Dynamic Mechanism of Global Photovoltaic Raw Material Trade Network Under the Carbon Neutrality Target
by Yingying Fan and Yi Liang
Sustainability 2026, 18(7), 3574; https://doi.org/10.3390/su18073574 - 6 Apr 2026
Abstract
With the acceleration of the global energy transition, the photovoltaic industry has become a significant force in the promotion of green development, and photovoltaic raw materials play a crucial role in this process. In this paper, 177 countries during the period of 2001 to 2024 were taken as the research subjects, with a focus on polysilicon and silicon wafers as components of upstream photovoltaic raw materials. Through a combination of the evolutionary analysis of nodes, the overall structure, and the three-dimensional structure with an exponential random graph model, the evolution and dynamic mechanisms of the global photovoltaic raw material trade network are explored. The study reveals the following: (1) The global PV raw material trade volume tended to increase from 2001 to 2024. (2) The global photovoltaic raw material trade network showed a tendency towards the “enhanced dominance of core countries and denser trade connections,” with the trade volume between core countries continuously expanding and the network density, average clustering coefficient, and connection efficiency increasing annually, which is a reflection of the globalization and regional cooperation of the global photovoltaic industry. (3) From the weighted out-degree and in-degree ranking evolution of the global photovoltaic raw materials trade network, it can be seen that China consolidated its core position, while Southeast Asian countries tended to transfer their processing and manufacturing links. The status of the United States and traditional industrial powers gradually declined, which is a reflection of the restructuring of the global industrial chain along with regional geopolitical agglomeration effects. (4) Internal attributes such as the national economic level, population size, and urbanization rate, as well as external network effects such as common language and geographical proximity, significantly influence the formation path of the photovoltaic raw material trade network. 
Moreover, the network exhibits distinct heterogeneous complementarity mechanisms and path dependence characteristics, with a structural evolution that tends toward stability and cooperative relationships showing significant time inertia. Overall, the global trade volume of photovoltaic raw materials continues to grow, and the core positions of major countries such as China, the United States, and Germany remain prominent but show a transitional trend towards Southeast Asian countries. Strengthening coordination and cooperation among global photovoltaic raw material producers is necessary to ensure supply chain stability, promote resource sharing and technological progress, and achieve the sustainable development of green energy. Full article
(This article belongs to the Special Issue Carbon Neutrality and Green Development)
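The network-level indicators named in this abstract (density, average clustering coefficient, weighted in-/out-degree rankings) can be computed directly with networkx; the country codes and trade volumes below are invented for the sketch and do not reflect the paper's data.

```python
import networkx as nx

# Toy directed trade network: (exporter, importer, trade volume).
G = nx.DiGraph()
trade_flows = [
    ("CHN", "USA", 120.0),
    ("CHN", "DEU", 90.0),
    ("DEU", "USA", 40.0),
    ("MYS", "CHN", 30.0),
    ("USA", "MYS", 10.0),
]
G.add_weighted_edges_from(trade_flows)

# Overall structure indicators.
density = nx.density(G)
avg_clustering = nx.average_clustering(G.to_undirected())

# Weighted out-/in-degree ("strength") rankings, as used to track each
# country's status as exporter or importer over time.
out_strength = dict(G.out_degree(weight="weight"))
in_strength = dict(G.in_degree(weight="weight"))
top_exporter = max(out_strength, key=out_strength.get)
```

In the paper these statistics are tracked year by year over 2001 to 2024; the exponential random graph model stage is a separate estimation step not shown here.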

46 pages, 3809 KB  
Review
Overview on Predictive Maintenance Techniques for Turbomachinery
by Pierpaolo Dini, Damiano Nardi and Sergio Saponara
Machines 2026, 14(4), 396; https://doi.org/10.3390/machines14040396 - 5 Apr 2026
Abstract
Within the Industry 5.0 paradigm, the management of critical assets requires advanced digital architectures capable of ensuring resilience and operational sustainability. The present systematic review analyzes the state of the art in predictive maintenance (PdM) technologies for turbines and turbomachinery, providing a technical examination of anomaly and fault detection frameworks, extended to remaining useful life (RUL) estimation and root cause analysis (RCA). The work addresses inherent sectoral challenges, ranging from the processing of high-dimensional multivariate time series (MTS) from Supervisory Control and Data Acquisition (SCADA) systems to labeled data scarcity and signal non-stationarity in real-world environments. Both purely data-driven frameworks and hybrid physics-informed models, such as Physics-Informed Neural Networks (PINNs), are critically evaluated against performance indicators. A significant contribution of this study lies in the classification of methodologies based on their readiness for real-time inference, emphasizing the role of Explainable AI (XAI) in providing transparent insights to domain experts, who remain central to decision-making processes. The primary objective of this review is to offer an analytical overview of progress to date against current technological gaps, tracing a clear trajectory for future developments. In this regard, the adoption of Generative AI and Large Language Models (LLMs) is identified as a fundamental step toward evolving into interactive, human-centric decision support systems. Full article

39 pages, 6349 KB  
Article
Bilingualism in Context: A Bayesian Psychometric Network Analysis of Language and Culture Among U.S. Heritage Spanish–English Speakers of Latin American Descent
by William Rayo and Ivan Carbajal
Behav. Sci. 2026, 16(4), 522; https://doi.org/10.3390/bs16040522 - 1 Apr 2026
Abstract
Bilingualism has increasingly been understood as a multidimensional and context-sensitive experience, prompting growing interest in how specific aspects of bilingual language use relate to cognition. We used Bayesian psychometric network analysis to examine how bilingual language practices, bicultural identity management, and cognition relate within the same system in a sample of 404 U.S.-born heritage Spanish–English bilingual adults of Latin American descent. This approach conceptualizes bilingualism as a complex system, quantifies uncertainty in the estimated network structure, and identifies aspects of bilingual experience that serve as bridges to cognition and bicultural identity. The strongest bridges between domains were the edge between language mixing and attentional control and the edge between unintended language switching and bicultural harmony. These findings provide a more holistic and socially infused characterization of how bilingualism, biculturalism, and cognition interact in U.S. heritage speakers of Spanish. Full article
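A simplified (non-Bayesian) flavor of the psychometric-network idea can be shown with numpy: edges are partial correlations, recovered from the inverse covariance (precision) matrix of the item data. The simulated responses below and the plain Gaussian graphical model are assumptions for illustration; the paper's Bayesian estimation and bridge analysis are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated responses: 300 participants x 4 standardized items, with direct
# dependencies 0 -> 1 -> 2 and item 3 independent of the rest.
X = rng.normal(size=(300, 4))
X[:, 1] += 0.8 * X[:, 0]
X[:, 2] += 0.8 * X[:, 1]

# In a Gaussian graphical model, the edge weight between items i and j is the
# partial correlation r_ij = -P_ij / sqrt(P_ii * P_jj), where P is the
# precision (inverse covariance) matrix.
P = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(P))
partial_corr = -P / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# Items 0 and 2 are linked only through item 1, so their estimated partial
# correlation should be near zero while the direct 0-1 and 1-2 edges are strong.
```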

45 pages, 6749 KB  
Article
An Ontology-Based Architecture for Interoperable Healthcare Systems-of-Systems: Structure, Interaction Patterns, and Covenant-Based Governance
by Mohamed Mogahed and Mo Mansouri
Systems 2026, 14(4), 376; https://doi.org/10.3390/systems14040376 - 31 Mar 2026
Abstract
Healthcare fragmentation—characterized by poor coordination among independently operating organizations—systematically degrades care quality while escalating costs. While healthcare delivery inherently operates as a System of Systems (SoS), existing approaches lack semantic rigor to bridge governance principles with implementable architectures, and digital engineering paradigms remain disconnected from formal representations of regulatory constraints and organizational interdependencies. This paper presents a comprehensive Web Ontology Language (OWL 2 DL)-based ontology integrating structural, behavioral, and regulatory dimensions of healthcare SoS into a unified, computationally tractable framework. Developed following the Methontology engineering methodology and validated using the HermiT reasoner, the ontology formalizes constituent system categories through functional decomposition, establishes an interaction taxonomy distinguishing intra-category coordination from inter-category integration, and introduces the Covenant class as a novel governance mechanism. The covenant embeds legal frameworks (HIPAA, GDPR), interoperability protocols (FHIR, HL7), and technical standards (SNOMED, LOINC, ICD-11, ISO) as first-class ontological entities with explicit relationships to interaction properties. Governance enforcement is operationalized through a layered validation architecture comprising SWRL rules for deductive compliance checking, SHACL shapes for structural constraint validation, and OWL equivalentClass axioms for automated conflict detection. 
The ontology is further validated through four operational scenarios that demonstrate automated consent validation, standards compliance verification, protocol interoperability checking, and temporal compliance with conflict detection, alongside extended SPARQL queries that reveal constituent system landscapes, standards coverage, interaction networks, and topological properties through node degree calculation, hub identification, and network density analysis. The ontology enables pre-implementation governance assessments, evidence-based policy simulation, digital twin implementations with continuous compliance monitoring, and resilience planning through network analysis, transforming governance from reactive compliance checking to proactive coordination engineering. Full article

35 pages, 1234 KB  
Article
EHMN 2026: A Thermodynamically Refined, SBML-Standardised Human Metabolic Network for Genome-Scale Analysis and QSP Integration
by Igor Goryanin, Leonid Slovianov, Stephen Checkley and Irina Goryanin
Metabolites 2026, 16(4), 236; https://doi.org/10.3390/metabo16040236 - 31 Mar 2026
Abstract
Background: Genome-scale metabolic models (GEMs) are foundational tools for systems biology, enabling quantitative interrogation of human metabolism across physiological and pathological states. However, many legacy reconstructions exhibit heterogeneous identifier usage, incomplete pathway integration, and limited thermodynamic refinement, constraining reproducibility, interoperability, and translational applicability. Methods: We present EHMN 2026, an update of the Edinburgh Human Metabolic Network. The reconstruction was refined through systematic identifier reconciliation using MetaNetX and ChEBI mappings, duplicate reaction consolidation, thermodynamic directionality assessment, and structured pathway annotation via Reactome. The final model was encoded in Systems Biology Markup Language (SBML) Level 3 Version 2 with the Flux Balance Constraints (FBC2) package, ensuring explicit gene–protein–reaction (GPR) representation and compatibility with modern constraint-based modelling toolchains. Results: EHMN 2026 comprises 11 compartments, 14,321 metabolites (species), and 22,642 reactions, supported by 3996 gene products. Of all reactions, 9638 (42.6%) contain GPR associations, linking metabolic transformations to 2887 unique Ensembl gene identifiers (ENSG). Pathway integration yielded 2194 unique Reactome identifiers, providing structured pathway-level organisation of metabolic functions. Thermodynamic refinement reduced infeasible energy-generating cycles and improved reaction directionality coherence while preserving global network connectivity. The reconstruction is fully SBML-compliant and portable across major modelling platforms. 
Compared with Recon3D and Human1, EHMN 2026 uniquely combines native Reactome reaction-level annotation, systematic MetaNetX identifier harmonisation, documented thermodynamic cycle elimination (37 cycles, 0 remaining), and an 11-compartment architecture supporting organelle-specific modelling—features designed for QSP and multi-layer integration applications. Conclusions: EHMN 2026 delivers a rigorously harmonised, thermodynamically refined, and pathway-annotated human metabolic reconstruction with enhanced annotation depth and standards-based interoperability. By combining genome-scale coverage with structured gene and pathway integration, the model establishes a robust computational backbone for reproducible metabolic analysis and provides a scalable foundation for future multi-layer systems pharmacology and integrative modelling frameworks. Full article

34 pages, 1140 KB  
Article
LLM-DSaR: LLM-Enhanced Semantic Augmentation for Temporal Knowledge Graph Reasoning
by Ruoxi Liu, Chunfang Liu and Xiangyin Zhang
Electronics 2026, 15(7), 1446; https://doi.org/10.3390/electronics15071446 - 30 Mar 2026
Abstract
Temporal Knowledge Graph Inference (TKGI) is a cornerstone for intelligent decision-making in dynamic scenarios, but existing models face critical bottlenecks, including inadequate complex-context modeling, a lack of entity importance quantification, insufficient novel-event reasoning accuracy, and weak domain adaptability. To address these issues, this study proposes a semantics-enhanced model (LLM-DSaR) integrating Large Language Models (LLMs), temporal attention networks, and optimized contrastive learning. Specifically, a two-stage LLM semantic enhancement (LLM1 + LLM2) framework first generates structured semantic analysis reports via adaptive prompt engineering, and then extracts domain-specific semantic embeddings from the last-layer hidden states through pooling and linear projection, which are further fused with TransE-based structural embeddings; meanwhile, LLM2 mitigates data sparsity in novel-event reasoning; a dynamic weight fusion (DWF) framework adaptively assigns feature weights to achieve deep feature synergy; an LLM-enhanced contrastive-learning module strengthens event clustering and discrimination. Experiments on five public datasets and a self-constructed Robotics Temporal Knowledge Graph (RTKG) show LLM-DSaR outperforms 16 baselines: on RTKG, its MRR is 10.35 percentage points higher than GCR, and Hits@10 reaches 88.87%. Ablation experiments validate core modules’ effectiveness, confirming LLM-DSaR adapts to professional scenarios like robot maintenance prediction, providing a novel technical paradigm for complex-domain TKG reasoning. Full article
(This article belongs to the Section Artificial Intelligence)
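The TransE structural embeddings mentioned in the abstract score a triple (h, r, t) by the distance ||h + r - t||: a relation is modeled as a translation from head to tail entity. The sketch below uses random, untrained vectors and invented entity names, so the scores only illustrate the mechanics, not a trained ranking.

```python
import numpy as np

rng = np.random.default_rng(42)
dim = 16

# Toy (untrained) entity and relation embeddings; names are invented.
entities = {name: rng.normal(size=dim) for name in ("robot", "gripper", "arm")}
relations = {"has_part": rng.normal(size=dim)}

def transe_score(h, r, t):
    """TransE plausibility: smaller ||h + r - t|| means a more plausible triple."""
    return float(np.linalg.norm(entities[h] + relations[r] - entities[t]))

# After training, true tails would score lower (closer) than corrupted ones;
# here both values are arbitrary.
s_true = transe_score("robot", "has_part", "gripper")
s_corrupt = transe_score("robot", "has_part", "arm")
```

In the paper these structural embeddings are fused with LLM-derived semantic embeddings; that fusion step is not reproduced here.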

26 pages, 2798 KB  
Article
Toward Sustainable Education: Generative AI-Powered Argument Mining in Student Writing
by Yupei Ren, Ning Zhang, Xiaoyu Li, Yadong Zhang, Yuqing Chen and Man Lan
Sustainability 2026, 18(7), 3338; https://doi.org/10.3390/su18073338 - 30 Mar 2026
Abstract
As critical elements in argumentative writing, argument components and strategies significantly influence argument quality. However, the existing research lacks an in-depth exploration of how students construct and utilize these elements in argumentative writing. This study first evaluates the performance of leading large language models (LLMs) in identifying argument components and strategies using three approaches: single-task learning (STL), chain-of-thought (CoT), and multi-task learning (MTL). With the aid of learning analytics methods (Epistemic Network Analysis (ENA) and two-mode network), the study further reveals the intrinsic mechanisms linking argument components, strategies, and writing quality. Specifically, the research trains and evaluates LLMs on 226 argumentative essays, encompassing 4726 components and 4837 strategies. Compared to basic STL, the CoT and MTL methods significantly improve LLMs’ performance in both tasks. Moreover, learning analytics indicate that high-quality essays possess rich and complex logical relations, presenting multidimensional and multi-layered reasoning structures, whereas low-quality essays predominantly rely on simple and repetitive connections, lacking deeper logical support. These findings have significant implications for the automated analysis of argumentative writing and the sustainable development of education, not only providing valuable insights for educators in argumentation instruction but also contributing to the systematic enhancement of students’ argumentative abilities and critical thinking. Full article

31 pages, 2016 KB  
Article
Measuring Complexity at the Requirements Stage: Spectral Metrics as Development Effort Predictors
by Maximilian Vierlboeck, Antonio Pugliese, Roshanak Rose Nilchiani, Paul T. Grogan and Rashika Sugganahalli Natesh Babu
Systems 2026, 14(4), 364; https://doi.org/10.3390/systems14040364 - 30 Mar 2026
Abstract
Complexity in engineered systems presents one of the most persistent challenges in modern development, since it drives cost overruns, schedule delays, and outright project failures. Yet while architectural complexity has been studied, the structural complexity embedded within requirements specifications remains poorly understood and inadequately quantified. This gap is consequential: requirements fundamentally drive system design, and complexity introduced at this stage propagates through architecture, implementation, and integration. To address this gap, we build on Natural Language Processing methods that extract structural networks from textual requirements. Using these extracted structures, we conduct a controlled experiment employing molecular integration tasks as structurally isomorphic proxies for requirements integration, leveraging the topological equivalence between molecular graphs and requirement networks while eliminating confounding factors such as domain expertise and semantic ambiguity. Our results demonstrate that spectral measures predict integration effort with correlations exceeding 0.95, while structural metrics achieve correlations above 0.89. Notably, density-based metrics show no significant predictive validity. These findings indicate that eigenvalue-derived measures capture cognitive and effort dimensions that simpler connectivity metrics cannot. As a result, this research bridges a critical methodological gap between architectural complexity analysis and requirements engineering practice, providing a validated foundation for applying these metrics to requirements engineering, where similar structural complexity patterns may predict integration effort. Full article
(This article belongs to the Section Systems Engineering)
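Eigenvalue-derived measures of the kind the article contrasts with density can be computed in a few lines of NumPy. A toy sketch on a made-up four-requirement dependency network; the specific metrics shown (spectral radius, graph energy) are common spectral measures and are assumptions here, not necessarily the paper's exact set:

```python
import numpy as np

# Toy requirement network: nodes are requirements, edges are extracted
# structural dependencies. The adjacency matrix is illustrative only.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

eigvals = np.linalg.eigvalsh(A)        # real spectrum (A is symmetric)
spectral_radius = eigvals.max()        # largest eigenvalue
graph_energy = np.abs(eigvals).sum()   # sum of absolute eigenvalues
density = A.sum() / (A.shape[0] * (A.shape[0] - 1))

print(spectral_radius, graph_energy, density)  # ≈ 2.562 5.123 0.833
```

Spectral radius and graph energy respond to how dependencies concentrate in the network, which is exactly the structural information a plain density ratio discards.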

27 pages, 4695 KB  
Article
A Novel Weighted Ensemble Framework of Transformer and Deep Q-Network for ATP-Binding Site Prediction Using Protein Language Model Features
by Jiazhi Song, Jingqing Jiang, Chenrui Zhang and Shuni Guo
Int. J. Mol. Sci. 2026, 27(7), 3097; https://doi.org/10.3390/ijms27073097 - 28 Mar 2026
Abstract
Adenosine triphosphate (ATP) serves as a central energy currency and signaling molecule in cellular processes, with ATP-binding sites in proteins playing critical roles in enzymatic catalysis, signal transduction, and gene regulation. The accurate identification of ATP-binding sites is essential for understanding protein function mechanisms and facilitating drug discovery, enzyme engineering, and disease pathway analysis. In this study, we present a novel hybrid deep learning framework that synergizes heterogeneous learning paradigms based on protein sequence information for accurate ATP-binding site prediction. Our approach integrates two complementary base classifiers. One is a Transformer-based model, which leverages high-level contextual embeddings generated by Evolutionary Scale Modeling 2 (ESM-2), a state-of-the-art protein language model, combined with a local–global dual-attention mechanism that enables the model to simultaneously characterize short-segment and long-range contextual dependencies across the entire protein sequence. The other is a deep Q-network (DQN)-inspired classifier that frames residue-level prediction as a sequential decision-making process. The final predictions are generated using a weighted ensemble strategy, where optimal weights are determined via cross-validation to leverage the strengths of both models. The prediction results on benchmark independent testing sets indicate that our method achieves satisfactory performance on key metrics. Beyond predictive efficacy, this work uncovers the intrinsic biological mechanisms underlying protein–ATP interactions, including the synergistic roles of local structural motifs and global conformational constraints, as well as family-specific binding patterns, endowing the research with substantial biological significance. This work offers a deeper understanding of protein–ligand recognition mechanisms and supports the large-scale functional annotation efforts that are critical for systems biology and drug target discovery. Full article
(This article belongs to the Section Molecular Informatics)
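The weighted ensemble stage can be illustrated as a convex combination of the two base classifiers' per-residue probabilities, with the weight grid-searched on held-out labels. All probabilities and labels below are fabricated for illustration, and the grid search stands in for the paper's cross-validated weight selection:

```python
import numpy as np

# Made-up per-residue ATP-binding probabilities from the two base models.
p_transformer = np.array([0.90, 0.20, 0.75, 0.10])
p_dqn         = np.array([0.80, 0.40, 0.30, 0.20])
y_true        = np.array([1, 0, 1, 0])  # hypothetical held-out labels

def ensemble(w, p1, p2):
    """Convex combination of the two classifiers' probabilities."""
    return w * p1 + (1.0 - w) * p2

# Grid-search the ensemble weight by held-out accuracy at threshold 0.5.
weights = np.linspace(0.0, 1.0, 101)
accs = [((ensemble(w, p_transformer, p_dqn) >= 0.5) == y_true).mean()
        for w in weights]
best_w = weights[int(np.argmax(accs))]
print(best_w, max(accs))
```

The point of the convex combination is that neither model need dominate: residues where one classifier is weakly confident can be rescued by the other.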

20 pages, 1191 KB  
Article
Bridging the Semantic Gap in 5G: A Hybrid RAG Framework for Dual-Domain Understanding of O-RAN Standards and srsRAN Implementation
by Yedil Nurakhov, Nurislam Kassymbek, Duman Marlambekov, Aksultan Mukhanbet and Timur Imankulov
Appl. Sci. 2026, 16(7), 3275; https://doi.org/10.3390/app16073275 - 28 Mar 2026
Abstract
The rapid evolution of the Open Radio Access Network (O-RAN) architecture and the exponential growth in specification complexity create significant barriers for researchers translating 5G standards into practical implementations. Existing evaluation frameworks for large language models, such as ORAN-Bench-13K, focus predominantly on the theoretical comprehension of regulatory documents while neglecting the critical aspect of software execution. This disparity results in a profound semantic gap, defined here as the structural and conceptual misalignment between abstract normative requirements and their concrete realization in the source code of open platforms like srsRAN. To bridge this divide and enable advanced cognitive reasoning, this paper presents a Hybrid Retrieval-Augmented Generation (RAG) framework designed to unify two heterogeneous knowledge domains: the O-RAN/3GPP specification corpus and the srsRAN C++ codebase. The proposed architecture leverages a hierarchical Parent–Child Chunking strategy to preserve the structural integrity of complex code and normative protocols. Additionally, it introduces a probabilistic Semantic Query Routing mechanism that dynamically selects the relevant context domain based on query intent. This routing actively mitigates semantic interference—a phenomenon where merging conflicting cross-domain terminology introduces informational noise, which our baseline tests showed degrades response accuracy by 4.7%. Empirical evaluation demonstrates that the hybrid approach successfully overcomes this, achieving an overall accuracy of 76.70% and outperforming the standard RAG baseline of 72.00%. Furthermore, system performance analysis reveals that effective context filtering reduces the average response generation latency to 3.47 s, compared to 3.73 s for traditional RAG methods, rendering the framework highly suitable for real-time telecommunications engineering tasks. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
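The routing idea, sending a query either to the specification corpus or to the codebase depending on its semantic profile, can be sketched with a toy bag-of-words cosine router. The term profiles and the `route` function below are hypothetical stand-ins; the actual framework performs probabilistic routing over learned embeddings:

```python
import math
from collections import Counter

# Tiny stand-ins for the two knowledge domains. A real system would use
# embedding vectors over the O-RAN spec corpus and the srsRAN codebase.
SPEC_TERMS = Counter("o-ran e2 interface specification procedure message shall".split())
CODE_TERMS = Counter("class function cpp implementation pointer buffer srsran".split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def route(query):
    """Pick the context domain whose term profile best matches the query,
    a toy version of semantic query routing."""
    q = Counter(query.lower().split())
    s_spec, s_code = cosine(q, SPEC_TERMS), cosine(q, CODE_TERMS)
    return "spec" if s_spec >= s_code else "code"

print(route("what does the e2 interface specification say"))
print(route("which cpp class holds the buffer pointer"))
```

Keeping the retrieved context to a single domain is what avoids the semantic interference the paper measures: conflicting spec and code terminology never enters the same prompt.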
