Search Results (85)

Search Parameters:
Keywords = hybrid graph attention network

15 pages, 1308 KB  
Article
Evolution of Convolutional and Recurrent Artificial Neural Networks in the Context of BIM: Deep Insight and New Tool, Bimetria
by Andrzej Szymon Borkowski, Łukasz Kochański and Konrad Rukat
Infrastructures 2026, 11(1), 6; https://doi.org/10.3390/infrastructures11010006 - 22 Dec 2025
Viewed by 99
Abstract
This paper discusses the evolution of convolutional (CNN) and recurrent (RNN) artificial neural networks in applications for Building Information Modeling (BIM). The paper outlines the milestones reached in the last two decades. The article organizes the current state of knowledge and technology in terms of three aspects: (1) computer visualization coupled with BIM models (detection, segmentation, and quality verification in images, videos, and point clouds), (2) sequence and time series modeling (prediction of costs, energy, work progress, and risk), and (3) integration of deep learning results with the semantics and topology of Industry Foundation Classes (IFC) models. The paper identifies the most widely used architectures, typical data pipelines (synthetic data from BIM models, transfer learning, mapping results to IFC elements), and practical limitations: lack of standardized benchmarks, high annotation costs, a domain gap between synthetic and real data, and discontinuous interoperability. We indicate directions for development: combining CNNs/RNNs with graph models and transformers, wider use of synthetic data and semi-supervised learning, and explainability methods that increase trust in AECOO (Architecture, Engineering, Construction, Owners & Operators) processes. A practical case study presents a new application, Bimetria, which uses a hybrid CNN/OCR (Optical Character Recognition) solution to generate 3D models with estimates based on two-dimensional drawings. A deep review shows that although the importance of attention-based and graph-based architectures is growing, CNNs and RNNs remain an important part of the BIM process, especially in engineering tasks, where, in our experience and in the Bimetria case study, mature convolutional architectures offer a good balance between accuracy, stability, and low latency. The paper also raises some fundamental questions to which we are still seeking answers. Thus, the article not only presents the new Bimetria tool but also aims to stimulate discussion about the dynamic development of AI (Artificial Intelligence) in BIM. Full article
(This article belongs to the Special Issue Modern Digital Technologies for the Built Environment of the Future)

21 pages, 886 KB  
Article
A Dual-Attention CNN–GCN–BiLSTM Framework for Intelligent Intrusion Detection in Wireless Sensor Networks
by Laith H. Baniata, Ashraf ALDabbas, Jaffar M. Atwan, Hussein Alahmer, Basil Elmasri and Chayut Bunterngchit
Future Internet 2026, 18(1), 5; https://doi.org/10.3390/fi18010005 - 22 Dec 2025
Viewed by 187
Abstract
Wireless Sensor Networks (WSNs) are increasingly being used in mission-critical infrastructures. In such applications, they are exposed to cyber intrusions that can target their already constrained resources. Traditionally, Intrusion Detection Systems (IDS) in WSNs have been based on machine learning techniques; however, these models fail to capture the nonlinear, temporal, and topological dependencies across the network nodes. As a result, they often suffer degradation in detection accuracy and exhibit poor adaptability against evolving threats. To overcome these limitations, this study introduces a hybrid deep learning-based IDS that integrates multi-scale convolutional feature extraction, dual-stage attention fusion, and graph convolutional reasoning. Moreover, bidirectional long short-term memory components are embedded into the unified framework. Through this combination, the proposed architecture effectively captures the hierarchical spatial–temporal correlations in the traffic patterns, thereby enabling precise discrimination between normal and attack behaviors across several intrusion classes. The model has been evaluated on a publicly available benchmark dataset, where it attains high classification performance in multiclass scenarios and outperforms conventional IDS-focused approaches. In addition, the proposed design aims to retain suitable computational efficiency, making it appropriate for edge and distributed deployments and an effective solution for next-generation WSN cybersecurity. Overall, the findings emphasize that combining topology-aware learning with multi-branch attention mechanisms offers a balanced trade-off between interpretability, accuracy, and deployment efficiency for resource-constrained WSN environments. Full article
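
The combination described above lends itself to a compact sketch. The following minimal PyTorch illustration is not the authors' implementation: kernel sizes, hidden widths, and the shape layout are my assumptions, the dual-stage attention is reduced to one scale-level and one node-level attention, and the GCN is a dense-adjacency approximation.

```python
import torch
import torch.nn as nn

class HybridIDS(nn.Module):
    """Sketch: multi-scale CNN -> BiLSTM -> dense-adjacency GCN -> attention pool."""
    def __init__(self, n_feats, n_classes, hid=32):
        super().__init__()
        # multi-scale convolutional feature extraction (kernel sizes assumed)
        self.convs = nn.ModuleList(
            [nn.Conv1d(n_feats, hid, k, padding=k // 2) for k in (3, 5, 7)])
        self.scale_attn = nn.Parameter(torch.zeros(3))   # attention over scales
        self.lstm = nn.LSTM(hid, hid, batch_first=True, bidirectional=True)
        self.gcn = nn.Linear(2 * hid, hid)               # dense GCN: A @ H @ W
        self.node_attn = nn.Linear(hid, 1)               # second-stage attention
        self.head = nn.Linear(hid, n_classes)

    def forward(self, x, adj):
        # x: (B, N_nodes, T, F) traffic windows; adj: (N, N) row-normalized
        b, n, t, f = x.shape
        z = x.reshape(b * n, t, f).transpose(1, 2)        # (B*N, F, T)
        scales = torch.stack([c(z) for c in self.convs])  # (3, B*N, hid, T)
        w = torch.softmax(self.scale_attn, 0).view(3, 1, 1, 1)
        h = (w * scales).sum(0).transpose(1, 2)           # fused: (B*N, T, hid)
        h = self.lstm(h)[0][:, -1].reshape(b, n, -1)      # last step: (B, N, 2*hid)
        h = torch.relu(adj @ self.gcn(h))                 # graph reasoning over nodes
        a = torch.softmax(self.node_attn(h), dim=1)       # which nodes matter
        return self.head((a * h).sum(1))                  # (B, n_classes)

model = HybridIDS(n_feats=8, n_classes=5)
x, adj = torch.randn(2, 10, 20, 8), torch.full((10, 10), 0.1)
print(model(x, adj).shape)  # torch.Size([2, 5])
```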

27 pages, 3290 KB  
Article
Intelligent Routing Optimization via GCN-Transformer Hybrid Encoder and Reinforcement Learning in Space–Air–Ground Integrated Networks
by Jinling Liu, Song Li, Xun Li, Fan Zhang and Jinghan Wang
Electronics 2026, 15(1), 14; https://doi.org/10.3390/electronics15010014 - 19 Dec 2025
Viewed by 167
Abstract
The Space–Air–Ground Integrated Network (SAGIN), a core architecture for 6G, faces formidable routing challenges stemming from its highly dynamic topological evolution and strongly heterogeneous resource characteristics. Traditional protocols like OSPF suffer from excessive convergence latency due to frequent topology updates, while existing intelligent methods such as DQN remain confined to a passive, reactive decision-making paradigm, failing to leverage the spatiotemporal predictability of network dynamics. To address these gaps, this study proposes an adaptive routing algorithm (GCN-T-PPO) integrating a GCN-Transformer hybrid encoder, Particle Swarm Optimization (PSO), and Proximal Policy Optimization (PPO) with spatiotemporal attention. Specifically, the GCN-Transformer encoder captures spatial topological dependencies and long-term temporal traffic evolution, with PSO optimizing hyperparameters to enhance prediction accuracy. The PPO agent makes proactive routing decisions based on predicted network states (the next K time steps) to adapt to both topological and traffic dynamics. Extensive simulations in environments parameterized by real datasets (CelesTrak TLE data, CAIDA 100G traffic statistics, CRAWDAD UAV mobility models) demonstrate that under 80% high load and bursty Pareto traffic, GCN-T-PPO reduces end-to-end latency by 42.4% and packet loss rate by 75.6%, while improving the QoS satisfaction rate by 36.9% compared to DQN. It also outperforms SOTA baselines including OSPF, DDPG, D2-RMRL, and Graph-Mamba. Ablation studies validate the statistical significance (p < 0.05) of key components, confirming the synergistic gains from joint spatiotemporal modeling and proactive decision-making. This work advances SAGIN routing from passive response to active prediction, significantly enhancing network stability, resource utilization efficiency, and QoS guarantees, and providing an innovative solution for 6G global seamless coverage and intelligent connectivity. Full article
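
The hybrid encoder side of such a design can be sketched minimally as below. This is a sketch under my own assumptions, not the paper's code: a single adjacency is shared across the K snapshots (a simplification for a dynamic topology), dimensions and layer counts are illustrative, and the PSO hyperparameter search and the PPO agent itself are omitted.

```python
import torch
import torch.nn as nn

class GCNTransformerEncoder(nn.Module):
    """Sketch: a dense GCN mixes per-node link features at each step, then a
    Transformer attends over the last K snapshots of temporal traffic evolution."""
    def __init__(self, n_feats, d_model=64, heads=4):
        super().__init__()
        self.gcn = nn.Linear(n_feats, d_model)
        layer = nn.TransformerEncoderLayer(d_model, heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x, adj):
        # x: (B, K_steps, N_nodes, F); adj: (N, N) normalized adjacency
        h = torch.relu(adj @ self.gcn(x))        # spatial mixing: (B, K, N, d)
        b, k, n, d = h.shape
        h = h.permute(0, 2, 1, 3).reshape(b * n, k, d)
        h = self.temporal(h)                     # temporal attention over K steps
        return h[:, -1].reshape(b, n, d)         # per-node state for the PPO policy

enc = GCNTransformerEncoder(n_feats=4)
x, adj = torch.randn(2, 12, 20, 4), torch.eye(20)
print(enc(x, adj).shape)   # torch.Size([2, 20, 64])
```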

27 pages, 3305 KB  
Article
SatViT-Seg: A Transformer-Only Lightweight Semantic Segmentation Model for Real-Time Land Cover Mapping of High-Resolution Remote Sensing Imagery on Satellites
by Daoyu Shu, Zhan Zhang, Fang Wan, Wang Ru, Bingnan Yang, Yan Zhang, Jianzhong Lu and Xiaoling Chen
Remote Sens. 2026, 18(1), 1; https://doi.org/10.3390/rs18010001 - 19 Dec 2025
Viewed by 245
Abstract
The demand for real-time land cover mapping from high-resolution remote sensing (HR-RS) imagery motivates lightweight segmentation models running directly on satellites. By processing on-board and transmitting only fine-grained semantic products instead of massive raw imagery, these models provide timely support for disaster response, environmental monitoring, and precision agriculture. Many recent methods combine convolutional neural networks (CNNs) with Transformers to balance local and global feature modeling, with convolutions as explicit information aggregation modules. Such heterogeneous hybrids may be unnecessary for lightweight models if similar aggregation can be achieved homogeneously, and operator inconsistency complicates optimization and hinders deployment on resource-constrained satellites. Meanwhile, lightweight Transformer components in these architectures often adopt aggressive channel compression and shallow contextual interaction to meet compute budgets, impairing boundary delineation and recognition of small or rare classes. To address this, we propose SatViT-Seg, a lightweight semantic segmentation model with a pure Vision Transformer (ViT) backbone. Unlike CNN-Transformer hybrids, SatViT-Seg adopts a homogeneous two-module design: a Local-Global Aggregation and Distribution (LGAD) module that uses window self-attention for local modeling and dynamically pooled global tokens with linear attention for long-range interaction, and a Bi-dimensional Attentive Feed-Forward Network (FFN) that enhances representation learning by modulating channel and spatial attention. This unified design overcomes common lightweight ViT issues such as channel compression and weak spatial correlation modeling. SatViT-Seg is implemented and evaluated in LuoJiaNET and PyTorch; comparative experiments with existing methods are run in PyTorch with unified training and data preprocessing for fairness, while the LuoJiaNET implementation highlights deployment-oriented efficiency on a graph-compiled runtime. Compared with the strongest baseline, SatViT-Seg improves mIoU by up to 1.81% while maintaining the lowest FLOPs among all methods. These results indicate that homogeneous Transformers offer strong potential for resource-constrained, on-board real-time land cover mapping in satellite missions. Full article
(This article belongs to the Special Issue Geospatial Artificial Intelligence (GeoAI) in Remote Sensing)
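
The homogeneous local-global split behind LGAD can be illustrated compactly. The sketch below is my own simplification: it substitutes standard multi-head attention for the paper's linear attention in the global branch, and assumes a flattened token sequence, a fixed window size, and average pooling for the dynamic global tokens.

```python
import torch
import torch.nn as nn

class LGADBlock(nn.Module):
    """Sketch of a local-global token mixer: window self-attention for local
    context, plus attention over pooled global tokens for long-range interaction."""
    def __init__(self, dim=64, heads=4, win=8, n_global=16):
        super().__init__()
        self.local = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.glob = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.pool = nn.AdaptiveAvgPool1d(n_global)   # dynamic global tokens
        self.win = win

    def forward(self, x):                  # x: (B, L, dim), L divisible by win
        b, l, d = x.shape
        w = x.reshape(b * l // self.win, self.win, d)
        x = x + self.local(w, w, w)[0].reshape(b, l, d)    # local branch
        g = self.pool(x.transpose(1, 2)).transpose(1, 2)   # (B, n_global, dim)
        return x + self.glob(x, g, g)[0]                   # global branch

blk = LGADBlock()
print(blk(torch.randn(2, 64, 64)).shape)   # torch.Size([2, 64, 64])
```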

35 pages, 8987 KB  
Article
A Method for UAV Path Planning Based on G-MAPONet Reinforcement Learning
by Jian Deng, Honghai Zhang, Yuetan Zhang, Mingzhuang Hua and Yaru Sun
Drones 2025, 9(12), 871; https://doi.org/10.3390/drones9120871 - 17 Dec 2025
Viewed by 228
Abstract
To address the issues of efficiency and robustness in UAV trajectory planning under complex environments, this paper proposes a Graph Multi-Head Attention Policy Optimization Network (G-MAPONet) algorithm that integrates Graph Attention (GAT), Multi-Head Attention (MHA), and Group Relative Policy Optimization (GRPO). The algorithm adopts a three-layer architecture of “GAT layer for local feature perception – MHA for global semantic reasoning – GRPO for policy optimization”, achieving dynamic graph convolution quantization and globally adaptive, parallel, decoupled dynamic strategy adjustment. Comparative experiments in multi-dimensional spatial environments demonstrate that the combined GAT–MHA mechanism is significantly superior to single attention mechanisms, which verifies the efficient representation capability of the dual-layer hybrid attention mechanism in capturing environmental features. Additionally, ablation experiments integrating GAT, MHA, and GRPO confirm that the dual-layer fusion of GAT and MHA yields the larger improvement. Finally, comparisons with traditional reinforcement learning algorithms across multiple performance metrics show that the G-MAPONet algorithm reduces the number of convergence episodes (NCE) by an average of more than 19.14%, increases the average reward (AR) by over 16.20%, and successfully completes all dynamic path planning (PPTC) tasks; meanwhile, the algorithm’s reward values and obstacle avoidance success rate are significantly higher than those of other algorithms. Compared with the baseline APF algorithm, its reward value is improved by 8.66%, and the obstacle avoidance repetition rate is also improved, further verifying the effectiveness of the improved G-MAPONet algorithm. In summary, through the dual-layer complementary mode of GAT and MHA, the G-MAPONet algorithm overcomes the bottlenecks of traditional dynamic environment modeling and multi-scale optimization, enhances the decision-making capability of UAVs in unstructured environments, and provides a new technical solution for trajectory planning in intelligent logistics and distribution. Full article
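
The GAT-then-MHA trunk of such a policy network can be sketched as follows, with a dense single-head GAT approximation and nn.MultiheadAttention for the global reasoning stage; all sizes are assumptions, and the GRPO policy-optimization loop is omitted entirely.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseGAT(nn.Module):
    """Sketch of a single-head GAT layer on a dense node-feature tensor."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, mask):           # x: (B, N, in_dim); mask: (N, N) {0,1}
        h = self.w(x)                     # (B, N, out)
        n = h.size(1)
        pair = torch.cat([h.unsqueeze(2).expand(-1, -1, n, -1),
                          h.unsqueeze(1).expand(-1, n, -1, -1)], -1)
        e = F.leaky_relu(self.a(pair).squeeze(-1), 0.2)    # attention logits
        e = e.masked_fill(mask == 0, float('-inf'))        # keep only edges
        return torch.softmax(e, -1) @ h                    # neighbor-weighted features

gat = DenseGAT(16, 32)
mha = nn.MultiheadAttention(32, num_heads=4, batch_first=True)
x, mask = torch.randn(2, 6, 16), torch.ones(6, 6)
h = gat(x, mask)                  # local structural feature perception
h, _ = mha(h, h, h)               # global semantic reasoning over all nodes
print(h.shape)                    # torch.Size([2, 6, 32])
```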

25 pages, 3766 KB  
Article
WiFi RSS and RTT Indoor Positioning with Graph Temporal Convolution Network
by Lila Rana and Aayush Dulal
Sensors 2025, 25(24), 7622; https://doi.org/10.3390/s25247622 - 16 Dec 2025
Viewed by 479
Abstract
Indoor positioning using commodity WiFi has gained significant attention; however, achieving sub-meter accuracy across diverse layouts remains challenging due to multipath fading and Non-Line-Of-Sight (NLOS) effects. In this work, we propose a hybrid Graph–Temporal Convolutional Network (GTCN) model that incorporates Access Point (AP) geometry through graph convolutions while capturing temporal signal dynamics via dilated temporal convolutional networks. The proposed model adaptively learns per-AP importance using a lightweight gating mechanism and jointly exploits WiFi Received Signal Strength (RSS) and Round-Trip Time (RTT) features for enhanced robustness. The model is evaluated across four experimental areas (lecture theatre, office, corridor, and building floor) with footprints ranging from 15 m × 14.5 m to 92 m × 15 m. We further analyze the sensitivity of the model to AP density under both LOS and NLOS conditions, demonstrating that positioning accuracy systematically improves with denser AP deployment, especially in large-scale mixed environments. Despite its high accuracy, the proposed GTCN remains computationally lightweight, requiring fewer than 10^5 trainable parameters and only tens of MFLOPs per inference, enabling real-time operation on embedded and edge devices. Full article
(This article belongs to the Special Issue Signal Processing for Satellite Navigation and Wireless Localization)
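
A minimal sketch of this graph-temporal combination follows, assuming two input channels per access point (RSS and RTT), a row-normalized adjacency derived from AP geometry, and illustrative dilation and width choices; the paper's exact gating form may differ.

```python
import torch
import torch.nn as nn

class GTCN(nn.Module):
    """Sketch: dilated temporal convs over RSS/RTT, a dense graph conv over the
    AP graph, and a sigmoid gate for per-AP importance before 2-D regression."""
    def __init__(self, n_aps, n_feats=2, hid=32):
        super().__init__()
        self.tcn = nn.Sequential(
            nn.Conv1d(n_feats, hid, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(hid, hid, 3, padding=4, dilation=4), nn.ReLU())
        self.gcn = nn.Linear(hid, hid)
        self.gate = nn.Linear(hid, 1)
        self.head = nn.Linear(hid * n_aps, 2)            # (x, y) position

    def forward(self, x, adj):
        # x: (B, N_aps, T, 2); adj: (N, N) from AP geometry, row-normalized
        b, n, t, f = x.shape
        h = self.tcn(x.reshape(b * n, t, f).transpose(1, 2))   # (B*N, hid, T)
        h = h.mean(-1).reshape(b, n, -1)                       # temporal pooling
        h = torch.relu(adj @ self.gcn(h))                      # AP-graph smoothing
        h = torch.sigmoid(self.gate(h)) * h                    # per-AP gating
        return self.head(h.reshape(b, -1))

model = GTCN(n_aps=10)
x, adj = torch.randn(2, 10, 50, 2), torch.full((10, 10), 0.1)
print(model(x, adj).shape)   # torch.Size([2, 2])
```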

30 pages, 15770 KB  
Article
A Hybrid Deep Learning Framework for Enhanced Fault Diagnosis in Industrial Robots
by Jun Wu, Yuepeng Zhang, Bo Gao, Linzhong Xia, Xueli Zhu, Hui Wang and Xiongbo Wan
Algorithms 2025, 18(12), 779; https://doi.org/10.3390/a18120779 - 10 Dec 2025
Viewed by 330
Abstract
Predominant fault diagnosis in industrial robots depends on dedicated vibration or acoustic sensors. However, their practical deployment is often limited by installation constraints, susceptibility to environmental noise, and cost considerations. Applying Energy-Based Maintenance (EBM) principles to achieve enhanced fault diagnosis under practical industrial conditions, we propose a hybrid deep learning framework for industrial robots, the Multi-head Graph Attention Network (MGAT) with Multi-scale CNNBiLSTM Fusion (MGAT-MCNNBiLSTM). This approach obviates the need for additional dedicated sensors, effectively mitigating the associated deployment complexities. The framework embodies four core innovations: (1) Based on the EBM paradigm, motor current is established as the most effective and practical signal for enabling cost-efficient and scalable industrial robot fault diagnosis, and a corresponding motor-current dataset has been acquired from industrial robots operating under diverse fault scenarios. (2) An integrated MGAT-MCNNBiLSTM architecture synergistically models multiscale local features and complex dynamics through its MCNNBiLSTM module while capturing nonlinear interdependencies via MGAT; this comprehensive feature representation enables robust and highly accurate fault detection. (3) The study found that spectral preprocessing yields a marked and statistically significant enhancement in diagnostic performance, and a comprehensive, systematic analysis was undertaken to uncover the underlying reasons for this improvement. (4) To emulate challenging industrial settings and cost-sensitive implementations, noise injection was employed to evaluate model robustness in high-electromagnetic-interference environments and low-cost, low-resolution ADC implementations. Experimental validation on real-world industrial robot datasets demonstrates that MGAT-MCNNBiLSTM achieves a superior diagnostic accuracy of 90.7560%. This performance marks a significant absolute improvement of 1.51–8.55% over competing models, including LCNNBiLSTM, SCNNBiLSTM, MCCBiLSTM, GAT, and MGAT. Under challenging noise and low-resolution conditions, the proposed model consistently outperforms CNNBiLSTM variants, GAT, and MGAT by 1.37–10.26%, demonstrating enhanced industrial utility and deployment potential. Full article
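
Of the four innovations, the spectral preprocessing step (3) is the easiest to illustrate in isolation. A hedged sketch follows, with the window length, log compression, and per-window normalization chosen by me rather than taken from the paper.

```python
import torch

def spectral_preprocess(current, eps=1e-8):
    """Sketch: raw motor-current window (B, T) -> normalized log-magnitude
    spectrum (B, T//2 + 1), fed to the multi-scale CNN-BiLSTM branch."""
    spec = torch.fft.rfft(current, dim=-1).abs()     # one-sided magnitude spectrum
    spec = torch.log1p(spec)                         # compress dynamic range
    return (spec - spec.mean(-1, keepdim=True)) / (spec.std(-1, keepdim=True) + eps)

x = torch.randn(4, 1024)                             # 4 windows of 1024 samples
print(spectral_preprocess(x).shape)                  # torch.Size([4, 513])
```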

21 pages, 4695 KB  
Article
A Graph-Based Deep Learning Framework with Gating and Omics-Linked Attention for Multi-Omics Integration and Biomarker Discovery
by Zhanpeng Huang, Yutao Deng, Jinyuan Liu and Zhaohan Cai
Biology 2025, 14(12), 1764; https://doi.org/10.3390/biology14121764 - 10 Dec 2025
Viewed by 385
Abstract
Integration of multi-omics data provides a comprehensive perspective on complex biological systems, facilitating advances in disease classification and biomarker discovery. However, the heterogeneity and high dimensionality of omics data present significant analytical challenges. To achieve effective and interpretable multi-omics integration, we propose a novel deep learning framework named MOGOLA (Multi-Omics integration by Gating and Omics-Linked Attention). MOGOLA consists of three core components: (1) a hybrid graph learning module that integrates Graph Convolutional Networks and Graph Attention Networks for intra-omics feature extraction; (2) a gating and confidence mechanism that adaptively weighs feature importance across different omics types; and (3) a cross-omics attention-based fusion module that captures inter-omics relationships. Comprehensive evaluations on four benchmark datasets (BRCA, KIPAN, ROSMAP, and LGG) demonstrate that MOGOLA consistently outperforms eleven state-of-the-art approaches. Ablation studies further validate the contribution of each module, while biomarker identification highlights the framework’s clinical potential. These results show that MOGOLA is a robust and interpretable approach for multi-omics data integration, contributing to advances in computational biology and precision medicine. Full article
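
The fusion stage combining components (2) and (3) might look like the sketch below, which assumes three pre-computed omics embeddings of equal width and collapses the paper's confidence mechanism into a single sigmoid gate.

```python
import torch
import torch.nn as nn

class CrossOmicsFusion(nn.Module):
    """Sketch: gate each omics embedding by a learned confidence, then let
    modalities attend to one another before classification."""
    def __init__(self, dim=64, heads=4, n_omics=3, n_classes=5):
        super().__init__()
        self.gate = nn.Linear(dim, 1)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim * n_omics, n_classes)

    def forward(self, omics):             # omics: (B, n_omics, dim)
        g = torch.sigmoid(self.gate(omics))          # per-omics confidence in [0, 1]
        h = g * omics
        h, _ = self.attn(h, h, h)                    # inter-omics relationships
        return self.head(h.flatten(1))

fusion = CrossOmicsFusion()
z = torch.randn(8, 3, 64)     # mRNA, methylation, miRNA embeddings (assumed order)
print(fusion(z).shape)        # torch.Size([8, 5])
```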

19 pages, 2656 KB  
Article
A Novel Hybrid Temporal Fusion Transformer Graph Neural Network Model for Stock Market Prediction
by Sebastian Thomas Lynch, Parisa Derakhshan and Stephen Lynch
AppliedMath 2025, 5(4), 176; https://doi.org/10.3390/appliedmath5040176 - 8 Dec 2025
Viewed by 1168
Abstract
Forecasting stock prices remains a central challenge in financial modelling, as markets are influenced by, among other factors, market sentiment, firm-level fundamentals, and complex interactions between macroeconomic and microeconomic variables. This study evaluates the predictive performance of both classical statistical models and advanced attention-based deep learning architectures for daily stock price forecasting. Using a dataset of major U.S. equities and Exchange Traded Funds (ETFs) covering 2012–2024, we compare traditional statistical approaches, Seasonal Autoregressive Integrated Moving Average (SARIMA) and Exponential Smoothing (ES) in the Error, Trend, Seasonal (ETS) framework, with deep learning architectures such as the Temporal Fusion Transformer (TFT), and a novel hybrid model, the TFT-Graph Neural Network (TFT-GNN), which incorporates relational information between assets. All models are assessed under consistent experimental conditions in terms of forecast accuracy, computational efficiency, and interpretability. Our results indicate that while statistical models offer strong baselines with high stability and low computational cost, the TFT outperforms them in capturing short-term nonlinear dependencies. The hybrid TFT-GNN achieves the highest overall predictive accuracy, demonstrating that relational signals derived from inter-asset connections provide meaningful enhancements beyond traditional temporal and technical indicators. These findings highlight the advantages of integrating relational learning into temporal forecasting frameworks and emphasise the continued relevance of statistical models as interpretable and efficient benchmarks for evaluating deep learning approaches in high-frequency financial prediction. Full article
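
The relational enhancement can be sketched with a GRU standing in for the full Temporal Fusion Transformer (a deliberate simplification) and a dense, correlation-derived asset graph; shapes, widths, and the forecast target are my assumptions.

```python
import torch
import torch.nn as nn

class RelationalForecaster(nn.Module):
    """Sketch: encode each asset's history, then mix the per-asset embeddings
    over an inter-asset graph before forecasting."""
    def __init__(self, n_feats=1, hid=32):
        super().__init__()
        self.temporal = nn.GRU(n_feats, hid, batch_first=True)
        self.gnn = nn.Linear(hid, hid)
        self.head = nn.Linear(hid, 1)                   # next-day value per asset

    def forward(self, x, adj):
        # x: (B, N_assets, T, F) histories; adj: (N, N) correlation-derived graph
        b, n, t, f = x.shape
        h = self.temporal(x.reshape(b * n, t, f))[0][:, -1].reshape(b, n, -1)
        h = torch.relu(adj @ self.gnn(h))               # relational signal
        return self.head(h).squeeze(-1)                 # (B, N_assets)

model = RelationalForecaster()
x, adj = torch.randn(2, 5, 30, 1), torch.eye(5)
print(model(x, adj).shape)   # torch.Size([2, 5])
```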

27 pages, 6182 KB  
Article
Graph-Based Deep Learning and Multi-Source Data to Provide Safety-Actionable Insights for Rural Traffic Management
by Taimoor Ali Khan and Yaqin Qin
Vehicles 2025, 7(4), 151; https://doi.org/10.3390/vehicles7040151 - 5 Dec 2025
Viewed by 331
Abstract
This study confronts the significant challenges inherent in Traffic State Estimation (TSE) for rural arterial networks, where sparse sensor coverage and complex, dynamic traffic flows complicate effective management and safety assurance. Traditional TSE methodologies, often dependent on single-source data streams, fail to accurately model the intricate spatiotemporal dependencies present in such environments. This fundamental limitation gives rise to critical safety hazards, including pervasive over-speeding and dangerous queue spillback at intersections. To address these deficiencies, we introduce a novel hybrid intelligence framework that combines a Graph Attention Temporal Convolutional Network (GAT-TCN) with advanced Kalman Filter variants, specifically the Extended, Unscented, and Sliding Window Kalman Filters. The GAT-TCN component is engineered to learn complex, non-linear correlations across both space and time through multi-source data fusion. Empirical validation on a real-world rural toll corridor demonstrates that the proposed model achieves statistically significant improvements over conventional benchmarks, quantified by substantial reductions in both Mean Absolute Error (MAE) and Root Mean Square Error (RMSE). Beyond predictive accuracy, the framework delivers safety enhancements by facilitating the proactive identification of hazardous events, enabling earlier detection of over-speeding and queue spillback compared to existing methods. Consequently, this research provides a scalable and robust framework for proactive rural traffic management, shifting the paradigm from incremental predictive improvements to decisive, safety-actionable insights for infrastructure operators. Full article
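
The Kalman correction step at the core of the filtering variants is standard and can be shown directly. In the sketch below, the GAT-TCN output plays the role of the predicted state; the observation model and covariances are illustrative, not the paper's calibration.

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, H, R):
    """Standard linear Kalman update: correct the predicted traffic state
    x_pred (covariance P_pred) with a sensor measurement z."""
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x_pred + K @ (z - H @ x_pred)        # corrected state estimate
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P

x_pred = np.array([55.0, 0.8])               # e.g., speed (km/h), occupancy
P_pred = np.diag([4.0, 0.05])
H, R = np.eye(2), np.diag([1.0, 0.01])       # direct observation model (assumed)
print(kalman_update(x_pred, P_pred, np.array([51.0, 0.9]), H, R)[0])
```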

25 pages, 1910 KB  
Review
Natural Language Processing in Generating Industrial Documentation Within Industry 4.0/5.0
by Izabela Rojek, Olga Małolepsza, Mirosław Kozielski and Dariusz Mikołajewski
Appl. Sci. 2025, 15(23), 12662; https://doi.org/10.3390/app152312662 - 29 Nov 2025
Viewed by 649
Abstract
Deep learning (DL) methods have revolutionized natural language processing (NLP), enabling industrial documentation systems to process and generate text with high accuracy and fluency. Modern deep learning models, such as transformers and recurrent neural networks (RNNs), learn contextual relationships in text, making them ideal for analyzing and creating complex industrial documentation. Transformer-based architectures, such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), are ideally suited for tasks such as text summarization, content generation, and question answering, which are crucial for documentation systems. Pre-trained language models, tuned to specific industrial datasets, support domain-specific vocabulary, ensuring the generated documentation complies with industry standards. Deep learning-based systems can use sequential models, such as those used in machine translation, to generate documentation in multiple languages, promoting accessibility and global collaboration. Using attention mechanisms, these models identify and highlight critical sections of input data, resulting in the generation of accurate and concise documentation. Integration with optical character recognition (OCR) tools enables DL-based NLP systems to digitize and interpret legacy documents, streamlining the transition to automated workflows. Reinforcement learning and human feedback loops can enhance a system’s ability to generate consistent and contextually relevant text over time. These approaches are particularly effective in creating dynamic documentation that is automatically updated based on data from sensors, registers, or other sources in real time. The scalability of DL techniques enables industrial organizations to efficiently produce massive amounts of documentation, reducing manual effort and improving overall efficiency. NLP has become a fundamental technology for automating the generation, maintenance, and personalization of industrial documentation within the Industry 4.0, 5.0, and emerging Industry 6.0 paradigms. Recent advances in large language models, search-assisted generation, and multimodal architectures have significantly improved the accuracy and contextualization of technical manuals, maintenance reports, and compliance documents. However, persistent challenges such as domain-specific terminology, data scarcity, and the risk of hallucinations highlight the limitations of current approaches in safety-critical manufacturing environments. This review synthesizes state-of-the-art methods, comparing rule-based, neural, and hybrid systems while assessing their effectiveness in addressing industrial requirements for reliability, traceability, and real-time adaptation. Human–AI collaboration and the integration of knowledge graphs are transforming documentation workflows as factories evolve toward cognitive and autonomous systems. The review included 32 articles published between 2018 and 2025. The high percentage of conference papers among them (69.6%) may indicate a field still in its conceptual phase, which contextualizes the literature’s emphasis on proposed architectures rather than their industrial validation. Most research was conducted in computer science, suggesting early stages of technological maturity. The leading countries were China and India, but neither had large publication counts, nor were leading researchers or affiliations observed, suggesting significant research dispersion. However, the most frequently observed Sustainable Development Goals (SDGs) indicate a clear applied and health context, focusing on “industry, innovation and infrastructure” and “good health and well-being”. Full article
(This article belongs to the Special Issue Emerging and Exponential Technologies in Industry 4.0)

20 pages, 3174 KB  
Article
Decoding Multi-Omics Signatures in Lower-Grade Glioma Using Protein–Protein Interaction-Informed Graph Attention Networks and Ensemble Learning
by Murtada K. Elbashir, Afrah Alanazi and Mahmood A. Mahmood
Diagnostics 2025, 15(22), 2894; https://doi.org/10.3390/diagnostics15222894 - 14 Nov 2025
Viewed by 413
Abstract
Background/Objectives: Lower-grade gliomas (LGGs) are a biologically and clinically heterogeneous group of brain tumors for which molecular stratification plays an essential role in diagnosis, prognosis, and therapeutic decision-making. Conventional unimodal classifiers do not necessarily capture the cross-layer regulatory dynamics that underlie the heterogeneity of glioma. Methods: This paper presents a protein–protein interaction (PPI)-informed hybrid model that combines multi-omics profiles, including RNA expression, DNA methylation, and microRNA expression, with a Graph Attention Network (GAT), Random Forest (RF), and logistic stacking ensemble learning. The proposed model utilizes ElasticNet-based feature selection to obtain the most informative biomarkers across omics layers, and the GAT module learns biologically significant topological representations in the PPI network. The Synthetic Minority Over-Sampling Technique (SMOTE) was used to mitigate class imbalance, and model performance was assessed using repeated five-fold stratified cross-validation with the following metrics: accuracy, precision, recall, F1-score, ROC-AUC, and AUPRC. Results: The findings illustrate that combining multi-omics data improves subtype classification accuracy (up to 0.984 ± 0.012) relative to single-omics methods, with DNA methylation proving the most discriminative modality. In addition, attention-based interpretability analysis revealed major subtype-specific biomarkers, including UBA2, LRRC41, ANKRD53, and WDR77, that show strong biological relevance and could support diagnosis and therapy. Conclusions: The proposed biologically informed and explainable multi-omics framework provides a solid computational approach to molecular stratification and biomarker identification in lower-grade glioma, bridging predictive power, biological interpretation, and clinical benefit. Full article
(This article belongs to the Special Issue A New Era in Diagnosis: From Biomarkers to Artificial Intelligence)
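
The stacking stage can be sketched with scikit-learn, substituting an MLP for the PPI-informed GAT base learner (which would require a graph library) and synthetic data for the multi-omics features; everything here is illustrative rather than the authors' pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for selected multi-omics features and subtype labels.
X, y = make_classification(n_samples=400, n_features=50, n_informative=10,
                           n_classes=3, random_state=0)

# Base learners emit class probabilities; a logistic meta-learner combines them.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("nn", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                                     random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    stack_method="predict_proba", cv=5)
print(stack.fit(X, y).score(X, y))
```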

17 pages, 863 KB  
Article
A Hybrid Graph Neural Network Framework for Malicious URL Classification
by Sarah Mohammed Alshehri, Sanaa Abdullah Sharaf and Rania Abdulrahman Molla
Electronics 2025, 14(22), 4387; https://doi.org/10.3390/electronics14224387 - 10 Nov 2025
Viewed by 612
Abstract
The increasing reliance on Internet-based services has been accompanied by a rapid growth in cyber threats, particularly phishing attacks that use deceptive Uniform Resource Locators (URLs) to mislead users and compromise sensitive data. This paper proposes a hybrid deep learning architecture that integrates Graph Convolutional Networks (GCNs), an attention mechanism, and Long Short-Term Memory (LSTM) networks for accurate classification of malicious and benign URLs. The model combines sequential pattern recognition through the LSTM, structural graph representations via the GCN, and feature prioritization using attention to enhance detection performance. Experiments were conducted on a labeled URL dataset of 100,000 and subsequently 200,000 samples, using consistent training and testing splits. The proposed model showed stable performance across different dataset sizes and ultimately outperformed other approaches on the expanded dataset, demonstrating stronger generalization capabilities. These findings highlight the effectiveness of the proposed hybrid model in capturing structural URL features, provide a reliable approach for detecting phishing attacks via structural URL analysis, and offer a foundation for future research on graph-based cybersecurity systems. Full article
(This article belongs to the Section Computer Science & Engineering)
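
The sequential branch of such a model can be sketched as below; the character vocabulary, layer sizes, and attention pooling are my assumptions, and the GCN branch over URL structure is omitted.

```python
import torch
import torch.nn as nn

class URLEncoder(nn.Module):
    """Sketch of the sequential branch: character embeddings -> LSTM ->
    attention pooling -> benign/malicious logits."""
    def __init__(self, vocab=128, emb=32, hid=64):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)
        self.attn = nn.Linear(hid, 1)
        self.head = nn.Linear(hid, 2)

    def forward(self, ids):                       # ids: (B, L) character codes
        h, _ = self.lstm(self.emb(ids))           # (B, L, hid)
        a = torch.softmax(self.attn(h), dim=1)    # which characters matter
        return self.head((a * h).sum(1))

enc = URLEncoder()
url = "http://example.com/login"
ids = torch.tensor([[min(ord(c), 127) for c in url]])
print(enc(ids).shape)                             # torch.Size([1, 2])
```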

23 pages, 3168 KB  
Article
Spatio-Temporal Feature Fusion-Based Hybrid GAT-CNN-LSTM Model for Enhanced Short-Term Power Load Forecasting
by Jia Huang, Qing Wei, Tiankuo Wang, Jiajun Ding, Longfei Yu, Diyang Wang and Zhitong Yu
Energies 2025, 18(21), 5686; https://doi.org/10.3390/en18215686 - 29 Oct 2025
Viewed by 619
Abstract
Conventional power load forecasting frameworks face limitations in capturing dynamic spatial topology and modeling long-term dependencies. To address these issues, this study proposes a hybrid GAT-CNN-LSTM architecture for enhanced short-term power load forecasting. The model integrates three core components synergistically: a Graph Attention Network (GAT) dynamically captures spatial correlations via adaptive node weighting, resolving static topology constraints; a CNN-LSTM module extracts multi-scale temporal features, with convolutional kernels decomposing load fluctuations while bidirectional LSTM layers model long-term trends; and a gated fusion mechanism adaptively weights and fuses the spatio-temporal features, suppressing noise and enhancing sensitivity to critical load periods. Experimental validation on multi-city datasets shows significant improvements: the model outperforms baseline models by a notable margin in error reduction, exhibits stronger robustness under extreme weather, and maintains superior stability in multi-step forecasting. This study concludes that the hybrid model balances spatial topological analysis and temporal trend modeling, providing higher accuracy and adaptability for short-term load forecasting (STLF) in complex power grid environments. Full article
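
The gated fusion mechanism is the distinctive step and can be sketched in a few lines, assuming the GAT and CNN-LSTM branches each emit an embedding of the same width; the gate form and dimensions are my choices.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Sketch: a sigmoid gate computed from both branches decides, feature by
    feature, how much spatial vs. temporal embedding reaches the forecast head."""
    def __init__(self, dim=64):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)
        self.head = nn.Linear(dim, 1)                   # next-step load

    def forward(self, h_spatial, h_temporal):
        g = torch.sigmoid(self.gate(torch.cat([h_spatial, h_temporal], -1)))
        fused = g * h_spatial + (1 - g) * h_temporal    # adaptive weighting
        return self.head(fused)

fuse = GatedFusion()
print(fuse(torch.randn(4, 64), torch.randn(4, 64)).shape)  # torch.Size([4, 1])
```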

20 pages, 719 KB  
Article
Quantum-Driven Chaos-Informed Deep Learning Framework for Efficient Feature Selection and Intrusion Detection in IoT Networks
by Padmasri Turaka and Saroj Kumar Panigrahy
Technologies 2025, 13(10), 470; https://doi.org/10.3390/technologies13100470 - 17 Oct 2025
Viewed by 701
Abstract
The rapid development of the Internet of Things (IoT) poses significant problems in securing heterogeneous, massive, high-volume network traffic against cyber threats. Traditional intrusion detection systems (IDSs) often scale poorly or are computationally inefficient because of redundant or irrelevant features, and they suffer from high false-positive rates. Addressing these limitations, this study proposes a hybrid intelligent model that combines quantum computing, chaos theory, and deep learning to achieve efficient feature selection and effective intrusion classification. The proposed system offers four novel modules for feature optimization: chaotic swarm intelligence, quantum diffusion modeling, transformer-guided ranking, and multi-agent reinforcement learning, all of which work with a graph-based classifier enhanced with quantum attention mechanisms. This architecture allows as much as 75% feature reduction while achieving 4% better classification accuracy and reducing computational overhead by 40% compared to the best-performing baseline models. When evaluated on benchmark datasets (NSL-KDD, CICIDS2017, and UNSW-NB15), it shows superior performance in intrusion detection tasks, marking it as a viable candidate for scalable, real-time IoT security analytics. Full article
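
The chaotic component can be illustrated with a logistic-map-driven mask search; this is my own loose reading of chaos-based feature selection, with the map parameter, threshold, and scoring function entirely illustrative.

```python
import numpy as np

def logistic_map(x, steps, r=3.99):
    """Generate a chaotic sequence in (0, 1) via the logistic map."""
    seq = np.empty(steps)
    for i in range(steps):
        x = r * x * (1 - x)                 # chaotic update
        seq[i] = x
    return seq

def chaotic_select(score_fn, n_feats, n_iters=50, seed=0.42):
    """Chaotic sequences propose binary feature masks; keep the best-scoring one."""
    best_mask, best = None, -np.inf
    chaos = logistic_map(seed, n_iters * n_feats).reshape(n_iters, n_feats)
    for row in chaos:
        mask = row > 0.5                    # chaotic values -> binary mask
        if mask.any() and (s := score_fn(mask)) > best:
            best_mask, best = mask, s
    return best_mask

# toy score: prefer small masks that keep the first three "informative" features
score = lambda m: m[:3].sum() - 0.05 * m.sum()
print(chaotic_select(score, n_feats=20).nonzero()[0])
```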
