Search Results (1,816)

Search Parameters:
Keywords = graph convolutional networks

31 pages, 7528 KB  
Article
Shield Machine Attitude Prediction Method Based on Causal Graph Convolutional Network
by Liang Zeng, Xingao Yan, Chenning Zhang, Xue Wang and Shanshan Wang
Algorithms 2026, 19(3), 224; https://doi.org/10.3390/a19030224 - 16 Mar 2026
Abstract
Accurately predicting and controlling the attitude of a shield tunneling machine is critical for quality assurance in shield tunneling projects. Existing prediction methods use historical data to train machine learning models that forecast future attitude deviations, but such models are poorly interpretable and offer little practical engineering guidance. To address these shortcomings, this study proposes a deep learning method called causal graph convolutional network (C-GCN-GRU), with the goal of improving the interpretability of shield attitude prediction. The causal relationships between key attitude features of the shield machine are identified and quantified by the PCMCI+ method. The discovered causal relationships are converted into adjacency matrices and input into a model consisting of a GCN and a GRU, combined with multi-head causal attention, to forecast the shield machine attitude. Results on a dataset from the Karnaphuli River Tunnel Project in Bangladesh show that the C-GCN-GRU model predicts the four variables characterizing shield attitude and position more accurately than four comparable models and provides decision support for attitude and position adjustments in shield tunnels. Full article
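The causal-graph-to-GCN pipeline described above can be sketched in a few lines. This is not the authors' implementation; the adjacency matrix, feature shapes, and single ReLU layer are illustrative assumptions about how a discovered causal graph might feed a graph convolution:

```python
import numpy as np

def gcn_layer(adj, feats, weights):
    """One graph-convolution step, D^{-1/2}(A+I)D^{-1/2} X W, followed by ReLU.

    adj:     (n, n) adjacency, e.g. built from discovered causal links
    feats:   (n, f_in) node features (one node per attitude variable)
    weights: (f_in, f_out) trainable projection (fixed here for illustration)
    """
    a_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt             # symmetric normalization
    return np.maximum(norm @ feats @ weights, 0.0)     # ReLU
```

In the paper, the outputs of such layers would be passed on to a GRU over successive time steps; the sketch only shows the per-step graph propagation.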

31 pages, 2256 KB  
Article
Trust Assessment of Distributed Power Grid Terminals via Dual-Domain Graph Neural Networks
by Cen Chen, Jinghong Lan, Yi Wang, Zhuo Lv, Junchen Li, Ying Zhang, Xinlei Ming and Yubo Song
Electronics 2026, 15(6), 1211; https://doi.org/10.3390/electronics15061211 - 13 Mar 2026
Abstract
As distributed terminals are increasingly integrated into modern power systems with high penetration of renewable energy and decentralized resources, access control mechanisms must support continuous and highly detailed trust assessment. Existing approaches based on machine learning primarily rely on network traffic features from a single source and analyze terminals in isolation, which limits their ability to capture complex device states and correlated attack behaviors. This paper presents a trust assessment framework for distributed power grid terminals that combines multidimensional behavioral modeling with dual domain graph neural networks. Behavioral features are collected from network traffic, runtime environment, and hardware or kernel events and are fused into compact representations through a variational autoencoder to mitigate redundancy and reduce computational overhead. Based on the fused features and observed communication relationships, two graphs are constructed in parallel: a feature domain graph reflecting behavioral similarity and a topological domain graph capturing communication structure between terminals. Graph convolution is performed in both domains to jointly model individual behavioral risk and correlation across terminals. A fusion mechanism based on attention is further introduced to adaptively integrate embeddings specific to each domain, together with a loss function that enforces both shared and complementary representations across domains. Experiments conducted on the CIC EV Charger Attack Dataset 2024 show that the proposed framework achieves a classification accuracy of 96.84%, while maintaining a recall rate above 95% for the low trust category. These results indicate that incorporating multidimensional behavior perception and dual domain relational modeling improves trust assessment performance for distributed power grid terminals under complex attack scenarios. Full article
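The attention-based fusion of the two domain embeddings might look like the sketch below. This is a generic softmax-gate fusion, not the paper's code; the scoring vector stands in for a trained attention layer:

```python
import numpy as np

def fuse_domains(emb_feat, emb_topo, score_vec):
    """Attention-style fusion of feature-domain and topology-domain embeddings.

    emb_feat, emb_topo: (n, d) per-terminal embeddings from the two graph branches
    score_vec: (d,) scoring vector -- a stand-in for the trained attention layer
    """
    a = emb_feat @ score_vec                  # relevance score per terminal, feature domain
    b = emb_topo @ score_vec                  # relevance score per terminal, topology domain
    m = np.maximum(a, b)                      # stabilized 2-way softmax
    ea, eb = np.exp(a - m), np.exp(b - m)
    wa = ea / (ea + eb)
    return wa[:, None] * emb_feat + (1.0 - wa)[:, None] * emb_topo
```

When one domain scores much higher for a terminal, its embedding dominates the fused representation for that terminal, which is the adaptive behavior the framework's fusion mechanism is designed to provide.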
(This article belongs to the Special Issue Advances in Data Security: Challenges, Technologies, and Applications)

19 pages, 1198 KB  
Article
GSMTNet: Dual-Stream Video Anomaly Detection via Gated Spatio-Temporal Graph and Multi-Scale Temporal Learning
by Di Jiang, Huicheng Lai, Guxue Gao, Dan Ma and Liejun Wang
Electronics 2026, 15(6), 1200; https://doi.org/10.3390/electronics15061200 - 13 Mar 2026
Abstract
Video Anomaly Detection aims to identify video segments containing abnormal events. Detecting anomalies relies heavily on temporal modeling, particularly when anomalies exhibit only subtle deviations from normal events, yet most existing methods inadequately model the heterogeneity in spatiotemporal relationships, especially the dynamic interactions between human pose and video appearance. To address this, we propose GSMTNet, a dual-stream heterogeneous unsupervised network integrating gated spatio-temporal graph convolution and multi-scale temporal learning. First, we introduce a dynamic graph structure learning module, which leverages gated spatio-temporal graph convolutions with manifold transformations to model latent spatial relationships via human pose graphs. This is coupled with a normalizing flow-based density estimation module to model the probability distribution of normal samples in a latent space. Second, we design a hybrid dilated temporal module that employs multi-scale temporal feature learning to simultaneously capture long- and short-term dependencies, thereby enhancing the separability between normal patterns and potential deviations. Finally, we propose a dual-stream fusion module to hierarchically integrate features learned from pose graphs and raw video sequences, followed by a prediction head that computes anomaly scores from the fused features. Extensive experiments demonstrate state-of-the-art performance, achieving 86.81% AUC on ShanghaiTech and 70.43% on UBnormal, outperforming existing methods in rare anomaly scenarios. Full article
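A dilated temporal module widens its temporal context by stacking dilated causal convolutions; the receptive-field arithmetic below is standard TCN bookkeeping, not code from the paper (kernel size and dilation schedule are assumed):

```python
def receptive_field(kernel_size, dilations):
    """Temporal receptive field of stacked dilated causal convolutions.

    Each layer with dilation d extends the context by (kernel_size - 1) * d frames.
    """
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf
```

With kernel size 3 and dilations 1, 2, 4 the stack sees 15 frames, which is how a multi-scale dilated stack captures long- and short-term dependencies with few layers.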

24 pages, 4228 KB  
Article
From Layout to Data: AI-Driven Route Matrix Generation for Logistics Optimization
by Ádám Francuz and Tamás Bányai
Mathematics 2026, 14(5), 910; https://doi.org/10.3390/math14050910 - 7 Mar 2026
Abstract
This study proposes an end-to-end mathematical framework to automatically transform warehouse layout images into optimization-ready route matrices. The objective is to convert visual spatial information into a discrete, graph-based representation suitable for combinatorial route optimization. The problem is formulated as a mapping from continuous image space to a structured grid representation, integrating image segmentation, graph construction, and Traveling Salesman Problem (TSP)-based routing. Synthetic warehouse layouts were generated to create labeled training data, and a U-Net convolutional neural network was trained to perform multi-class segmentation of warehouse elements. The predicted grid representation was then converted into a graph structure, where feasible cells define vertices and adjacency defines edges. Shortest path distances were computed using Breadth-First Search, and the resulting distance matrix was used to solve a TSP instance. The segmentation model achieved approximately 98% training accuracy and 95–97% validation accuracy. The generated route matrices enabled successful construction of feasible and optimal round-trip routes in all tested scenarios. The proposed framework demonstrates that warehouse layouts can be automatically transformed into discrete mathematical representations suitable for logistics optimization, reducing manual preprocessing and enabling scalable integration into digital logistics systems. Full article
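The grid-to-route pipeline above (feasible cells as vertices, BFS distances, TSP over the distance matrix) can be sketched end to end. The hand-written grid replaces the paper's U-Net segmentation output, and the brute-force TSP solver is an illustrative simplification suitable only for small stop counts:

```python
from collections import deque
from itertools import permutations

def bfs_distances(grid, start):
    """Shortest-path lengths from `start` over free cells (0 = free, 1 = shelf/obstacle)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                queue.append((nr, nc))
    return dist

def route_matrix(grid, stops):
    """Pairwise BFS distance matrix between pick locations."""
    return [[bfs_distances(grid, a)[b] for b in stops] for a in stops]

def best_round_trip(dmat):
    """Brute-force TSP round trip starting and ending at stop 0."""
    def cost(perm):
        order = (0, *perm, 0)
        return sum(dmat[order[i]][order[i + 1]] for i in range(len(order) - 1))
    best = min(permutations(range(1, len(dmat))), key=cost)
    return (0, *best, 0), cost(best)
```

A production system would swap the brute-force solver for a dedicated TSP heuristic, but the route matrix it consumes is built exactly this way.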
(This article belongs to the Special Issue Soft Computing in Computational Intelligence and Machine Learning)

14 pages, 1034 KB  
Article
Causal-Enhanced LSTM-RF: Early Warning of Dynamic Overload Risk for Distribution Transformers
by Hao Bai, Yipeng Liu, Yawen Zheng, Ming Dong, Qiaoyi Ding and Hao Wang
Energies 2026, 19(5), 1354; https://doi.org/10.3390/en19051354 - 7 Mar 2026
Abstract
Extreme weather events have become more frequent, and electricity consumption patterns more complex. These changes increase the risk of overload in distribution transformers (DTs), which threatens the stability and reliability of the power grid. Existing methods have significant limitations: traditional static threshold methods (based on DGA gas ratios and electrical signal thresholds) fail to consider temporal changes and complex links between factors, while modern machine learning models lack temporal cause–effect relationships and explicit uncertainty quantification. Motivated by these gaps, this paper proposes a causal-enhanced hybrid framework that combines Long Short-Term Memory (LSTM) networks and Random Forest (RF) algorithms. The framework uses causal Seasonal Trend decomposition using Loess (STL) to reveal load patterns at different time scales. The mutual information index and a spatiotemporal graph convolutional network (ST-GCN) are used to explore nonlinear relations and reveal how temperature affects load changes. The LSTM model captures time dependence in the load series, and a Bayesian-optimized Random Forest addresses data imbalance and quantifies uncertainty. In addition, the framework constructs an early warning system that combines real-time data from many sources. Test results show that the proposed algorithm exhibits excellent performance in multi-source data environments. Full article

34 pages, 4142 KB  
Article
Subject-Independent Multimodal Interaction Modeling for Joint Emotion and Immersion Estimation in Virtual Reality
by Haibing Wang and Mujiangshan Wang
Symmetry 2026, 18(3), 451; https://doi.org/10.3390/sym18030451 - 6 Mar 2026
Abstract
Virtual Reality (VR) has emerged as a powerful medium for immersive human–computer interaction, where users’ emotional and experiential states play a pivotal role in shaping engagement and perception. However, existing affective computing approaches often model emotion recognition and immersion estimation as independent problems, overlooking their intrinsic coupling and the structured relationships underlying multimodal physiological signals. In this work, we propose a modality-aware multi-task learning framework that jointly models emotion recognition and immersion estimation from a graph-structured and symmetry-aware interaction perspective. Specifically, heterogeneous physiological and behavioral modalities—including eye-tracking, electrocardiogram (ECG), and galvanic skin response (GSR)—are treated as relational components with structurally symmetric encoding and fusion mechanisms, while their cross-modality dependencies are adaptively aggregated to preserve interaction symmetry at the representation level and introduce controlled asymmetry at the task-optimization level through weighted multi-task learning, without introducing explicit graph neural network architectures. To support reproducible evaluation, the VREED dataset is further extended with quantitative immersion annotations derived from presence-related self-reports via weighted aggregation and factor analysis. Extensive experiments demonstrate that the proposed framework consistently outperforms recurrent, convolutional, and Transformer-based baselines. Compared with the strongest Transformer baseline, the proposed framework yields consistent relative performance gains of approximately 3–7% for emotion recognition metrics and reduces immersion estimation errors by nearly 9%. 
Beyond empirical improvements, this study provides a structured interpretation of multimodal affective modeling that highlights symmetry, coupling, and controlled symmetry breaking in multi-task learning, offering a principled foundation for adaptive VR systems, emotion-driven personalization, and dynamic user experience optimization. Full article
(This article belongs to the Section Computer)

32 pages, 23347 KB  
Article
Dynamically Weighted Spatiotemporal Fusion for Deep Learning-Based Prediction of EHA Degradation in Aviation Systems
by Tianyuan Guan, Dianrong Gao, Jiangwei Ma, Jing Wu, Yunpeng Yuan, Yun Ji, Jianhua Zhao and Yingna Liang
Sensors 2026, 26(5), 1662; https://doi.org/10.3390/s26051662 - 6 Mar 2026
Abstract
Electro-hydrostatic actuators (EHAs) are increasingly deployed in modern aircraft due to their compact size, fast response, and high power-to-weight ratio. However, existing airborne QAR and EICAS data are typically recorded as independent parameters without explicit correspondence to system health states, making degradation assessment and remaining useful life (RUL) prediction challenging. To address this issue, this paper proposes a spatiotemporal degradation modeling framework, termed PreDyn-ST, based on multivariate time series (MTS) data. The method integrates SimCLR-based contrastive pretraining and a dynamic feature fusion mechanism to capture evolving temporal dependencies and spatial sensor correlations. Specifically, graph convolutional networks (GCNs) incorporating physical connectivity priors are employed for spatial modeling, while a Transformer extracts long-range temporal patterns. A learnable dynamic weighting mechanism adaptively balances spatial and temporal features during training. The adaptive behavior is further analyzed using correlation statistical index (CSI) curves for interpretability. Experimental validation on a self-developed EHA degradation test bench and the C-MAPSS benchmark dataset demonstrates that PreDyn-ST achieves competitive and stable prediction performance. In particular, the method shows robust performance under complex operating conditions such as FD004. These results indicate the effectiveness of the proposed framework for accurate and interpretable degradation modeling in aerospace applications. Full article
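The learnable dynamic weighting between spatial (GCN) and temporal (Transformer) features can be illustrated with a scalar sigmoid gate. This is a minimal sketch, not the PreDyn-ST mechanism itself; the gate logit would be a trained parameter rather than a function argument:

```python
import math

def dynamic_fusion(spatial_feats, temporal_feats, gate_logit):
    """Blend spatial (GCN) and temporal (Transformer) features with a scalar gate.

    gate_logit: would be a trained parameter; passed in directly here for illustration.
    """
    alpha = 1.0 / (1.0 + math.exp(-gate_logit))   # sigmoid keeps the weight in (0, 1)
    return [alpha * s + (1.0 - alpha) * t
            for s, t in zip(spatial_feats, temporal_feats)]
```

Because the gate is differentiable, training can shift the balance toward whichever branch is more informative for the current degradation regime.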
(This article belongs to the Section Fault Diagnosis & Sensors)

24 pages, 2685 KB  
Article
Research on an Intelligent Scheduling Method Based on GCN-AM-LSTM for Bus Passenger Flow Prediction
by Xiaolei Ji, Zhe Li, Zhiwei Guo, Haotian Li and Hongpeng Nie
Appl. Sci. 2026, 16(5), 2525; https://doi.org/10.3390/app16052525 - 5 Mar 2026
Abstract
With the acceleration of urbanization, public transit systems face prominent challenges, including insufficient passenger flow prediction accuracy and low scheduling efficiency. This study analyzes passenger flow variation patterns from both spatial and temporal dimensions, constructs spatiotemporal matrices, and employs matrix dimensionality reduction methods to extract key features. We propose a passenger flow prediction model based on GCN-AM-LSTM and a dynamic real-time intelligent scheduling strategy. For passenger flow prediction, the model first utilizes Graph Convolutional Networks (GCNs) to extract spatial features of the transit network, then employs Attention Mechanism-enhanced Long Short-Term Memory networks (AM-LSTM) to perform weighted extraction of temporal features, and finally integrates external factors such as weather conditions to generate prediction outputs. For scheduling optimization, a dynamic real-time scheduling mode is adopted: the foundational framework optimizes dynamic departure timetables using a multi-objective particle swarm optimization algorithm, which is then combined with real-time passenger flow data to adjust departure intervals at the route level and implement stop-skipping strategies at the station level. Validation was conducted using Xiamen BRT Line 1 as a case study. Experimental results demonstrate that the proposed GCN-AM-LSTM prediction model reduces Mean Absolute Error (MAE) by 14% and 22% compared to CNN and LSTM models, respectively, achieving significantly improved prediction accuracy. Regarding scheduling optimization, the number of departures decreased by 15.24%, passenger waiting time costs were reduced by 3.7%, and transit operating costs decreased by 3.19%, effectively balancing service quality and operational efficiency. Full article
(This article belongs to the Special Issue Research and Estimation of Traffic Flow Characteristics)

25 pages, 2728 KB  
Article
GDNN: A Practical Hybrid Book Recommendation System for the Field of Ideological and Political Education
by Yanli Liang, Hui Liu and Songsong Liu
Electronics 2026, 15(5), 1086; https://doi.org/10.3390/electronics15051086 - 5 Mar 2026
Abstract
Ideological and political education (IPE) is a cornerstone of higher education in China. As IPE-related book collections expand rapidly, university libraries face a growing challenge of information overload, which hinders the accurate characterization of student reading preferences and the efficient matching of resources to demand. To address these issues, this study proposes GDNN, a practical hybrid recommendation system designed for both warm-start and cold-start scenarios. For warm-start users with historical borrowing records, we develop the PPSM-GCN framework. This framework enhances the classical graph convolutional collaborative filtering model LightGCN by integrating a novel potential positive sample mining (PPSM) strategy, which effectively mitigates data sparsity and improves the modeling of latent interests. For cold-start users without interaction history, we introduce an embedding and MLP architecture. This deep neural network learns implicit reader–book associations from reader attributes and book metadata, enabling personalized recommendations even in the absence of historical data. Experimental results demonstrate that PPSM-GCN and the embedding and MLP method achieve significant performance gains in their respective scenarios. This research provides both technical support and practical insights for the precise delivery of IPE resources and the overall enhancement of educational effectiveness in higher education. Full article

20 pages, 7825 KB  
Article
STAG-Net: A Lightweight Spatial–Temporal Attention GCN for Real-Time 6D Human Pose Estimation in Human–Robot Collaboration Scenarios
by Chunxin Yang, Ruoyu Jia, Qitong Guo, Xiaohang Shi, Masahiro Hirano and Yuji Yamakawa
Robotics 2026, 15(3), 54; https://doi.org/10.3390/robotics15030054 - 4 Mar 2026
Abstract
Most existing research in human pose estimation focuses on predicting joint positions, paying limited attention to recovering the full 6D human pose, which comprises both 3D joint positions and bone orientations. Position-only methods treat joints as independent points, often resulting in structurally implausible poses and increased sensitivity to depth ambiguities—cases where poses share nearly identical joint positions but differ significantly in limb orientations. Incorporating bone orientation information helps enforce geometric consistency, yielding more anatomically plausible skeletal structures. Additionally, many state-of-the-art methods rely on large, computationally expensive models, which limit their applicability in real-time scenarios, such as human–robot collaboration. In this work, we propose STAG-Net, a novel 2D-to-6D lifting network that integrates Graph Convolutional Networks (GCNs), attention mechanisms, and Temporal Convolutional Networks (TCNs). By simultaneously learning joint positions and bone orientations, STAG-Net promotes geometrically consistent skeletal structures while remaining lightweight and computationally efficient. On the Human3.6M benchmark, STAG-Net achieves an MPJPE of 41.8 mm using 243 input frames. In addition, we introduce a lightweight single-frame variant, STG-Net, which achieves 50.8 mm MPJPE while operating in real time at 60 FPS using a single RGB camera. Extensive experiments on multiple large-scale datasets demonstrate the effectiveness and efficiency of the proposed approach. Full article
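MPJPE, the benchmark metric quoted above, is the mean Euclidean distance between predicted and ground-truth joints. A minimal sketch (array shape (joints, 3), in millimetres; protocol details such as root alignment are omitted):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: average Euclidean distance across joints."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```

The 6D formulation adds a bone-orientation error term on top of this positional metric, which is what enforces the geometric consistency the abstract describes.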
(This article belongs to the Special Issue Human–Robot Collaboration in Industry 5.0)

28 pages, 4123 KB  
Article
Nonlinear Impacts of Air Pollutants and Meteorological Factors on PM2.5: An Interpretable GT-iFormer Model with SHAP Analysis
by Dong Li, Mengmeng Liu, Houzeng Han and Jian Wang
Atmosphere 2026, 17(3), 266; https://doi.org/10.3390/atmos17030266 - 3 Mar 2026
Abstract
Accurate prediction of PM2.5 concentration is crucial for air quality management and public health protection. However, existing methods often struggle to capture and interpret the nonlinear relationships among multiple atmospheric variables. This study proposes GT-iFormer, a novel interpretable deep learning model that integrates graph convolutional networks (GCNs), Temporal Convolutional Networks (TCNs), and inverted Transformer (iTransformer) for PM2.5 concentration prediction. The model features a GTCN-Block that encapsulates GCN and TCN with residual-style fusion, preserving feature-level dependencies alongside temporal patterns to prevent information degradation. The Pearson correlation coefficients and KNN algorithm are innovatively integrated to build a data-driven graph structure, which allows GCNs to flexibly model the nonlinear relationships between pollutants and meteorological factors based on observed data. TCNs obtain multi-scale temporal patterns via causal dilated convolutions. Subsequently, the concatenated representations of GTCN-Block are input into iTransformer to model global inter-variable interactions using attention mechanisms along the axis of the variable. We incorporated SHAP (SHapley Additive exPlanations) analysis to expose feature importance and nonlinear relationships with PM2.5 predictions. Results on the hour-level data of Beijing (2020–2021) and Shenzhen (2021) show that our proposed GT-iFormer surpasses all baseline models, with an RMSE of 8.781 μg/m3 and R2 of 0.978 for Beijing, and an RMSE of 3.871 μg/m3 and R2 of 0.957 for Shenzhen on single-step prediction, equating to RMSE reductions of 15.75% and 17.92%, respectively, over the best baseline model. The SHAP analysis shows clearly distinct regional patterns, with combustion sources dominant in Beijing (represented by CO at 28.231%), and traffic emissions dominant in Shenzhen (represented by NO2 at 25.908%). 
Crucial threshold effects are established for all variables, with significant cross-city differences that can serve as general forecasts and guidance for city-specific air quality management policies. Full article
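The data-driven graph construction described above (Pearson correlation filtered by k-nearest neighbours) might look like the sketch below; the selection details are assumptions, not the paper's exact procedure:

```python
import numpy as np

def knn_correlation_graph(x, k):
    """Adjacency from |Pearson correlation|: link each variable to its k most correlated peers.

    x: (timesteps, variables) matrix of pollutant/meteorological observations.
    """
    corr = np.abs(np.corrcoef(x, rowvar=False))
    np.fill_diagonal(corr, 0.0)                   # no self-edges before kNN selection
    n = corr.shape[0]
    adj = np.zeros((n, n))
    for i in range(n):
        adj[i, np.argsort(corr[i])[-k:]] = 1.0    # keep the k strongest correlations
    return np.maximum(adj, adj.T)                 # symmetrize for an undirected GCN graph
```

The resulting adjacency is what lets the GCN propagate information between, say, PM2.5 and its most correlated co-pollutants rather than across a fixed hand-designed graph.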
(This article belongs to the Section Air Quality)

21 pages, 7860 KB  
Article
D-SFANet: Application of a Multimodal Fusion Framework Based on Attention Mechanisms in ADHD Identification and Classification
by Li Zhang, Guangcheng Dongye and Ming Jing
Mathematics 2026, 14(5), 851; https://doi.org/10.3390/math14050851 - 2 Mar 2026
Abstract
The diagnosis of attention-deficit/hyperactivity disorder (ADHD) has long relied on subjective scales, lacking objective neuroimaging biomarkers. Static functional connectivity (sFC) and dynamic functional connectivity (dFC), as commonly used metrics in resting-state functional magnetic resonance imaging (rs-fMRI) analysis, provide important perspectives for related research. However, existing unimodal approaches struggle to effectively integrate the spatiotemporal characteristics of functional connectivity. To address this, this paper proposes the multimodal fusion framework D-SFANet, which synergistically models the static and dynamic features of brain functional connectivity through an attention mechanism: in the static path, it integrates a multi-scale convolutional network with phenotypic information extraction to extract hierarchical topological features; in the dynamic path, it combines graph theory with a bidirectional long short-term memory network (BiLSTM) to capture key state transition patterns in brain networks. Experimental validation demonstrates that D-SFANet achieves significantly higher classification accuracy than existing mainstream methods, robustly validating the effectiveness of its spatiotemporal fusion strategy. Full article
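Dynamic functional connectivity (dFC) is commonly computed as correlation matrices over sliding windows of the rs-fMRI signals. The sketch below shows that generic construction only; window length and stride are arbitrary choices, and the paper's dFC pipeline may differ:

```python
import numpy as np

def sliding_window_fc(series, win, step):
    """Dynamic functional connectivity: one correlation matrix per sliding window.

    series: (timepoints, regions) BOLD-like signals.
    Returns a list of (regions, regions) correlation matrices.
    """
    return [np.corrcoef(series[s:s + win], rowvar=False)
            for s in range(0, series.shape[0] - win + 1, step)]
```

Static FC (sFC) is simply the single correlation matrix over the whole scan; the sequence of windowed matrices is what the BiLSTM branch consumes to track state transitions.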

19 pages, 2509 KB  
Article
Emotion Recognition Using Multi-View EEG-fNIRS and Cross-Attention Feature Fusion
by Ni Yan, Guijun Chen and Xueying Zhang
Biosensors 2026, 16(3), 145; https://doi.org/10.3390/bios16030145 - 2 Mar 2026
Abstract
To improve the accuracy of emotion recognition, this paper proposes a multi-view EEG-fNIRS and cross-attention fusion module named FGCN-TCNN-CAF, which employs a differentiated modeling strategy for the frequency, spatial, and temporal features of EEG-fNIRS signals. First, frequency-domain and time-domain features are extracted from EEG, and time-domain features are obtained from fNIRS signals. Then, a frequency-domain graph convolutional network (FGCN) and a time-domain convolutional network (TCNN) are deployed in parallel. The EEG feature views from different frequency bands are modeled using an FGCN module to capture graph-structured relationships, while the time-domain views of EEG and fNIRS are processed by a TCNN module to extract spatial and temporal features. Finally, a cross-attention fusion network (CAF) is applied to achieve interactive fusion of multimodal features. Experiments demonstrate that the proposed multi-view EEG approach achieves higher recognition accuracy compared to using only the EEG view. Additionally, the multimodal recognition results outperform single-modal EEG and single-modal fNIRS by 1.73% and 6.65%, respectively. When compared with other emotion recognition models, the proposed method achieves the highest accuracy of 96.09%, proving its superior performance. Full article
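The core of a cross-attention fusion step is scaled dot-product attention where one modality's features act as queries over the other's. This is a generic sketch, not the CAF module: the learned query/key/value projections are omitted, and which modality queries which is an assumption:

```python
import numpy as np

def cross_attention(queries, keys_values):
    """Scaled dot-product cross-attention: one modality's features attend to the other's.

    queries:     (m, d) e.g. EEG token features
    keys_values: (n, d) e.g. fNIRS token features (used as both keys and values)
    """
    d = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over key positions
    return weights @ keys_values, weights
```

A full fusion network would add trained projections and typically run the attention in both directions (EEG→fNIRS and fNIRS→EEG) before combining the results.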
(This article belongs to the Special Issue Applications of AI in Non-Invasive Biosensing Technologies)

18 pages, 2882 KB  
Article
Fault Detection and Identification of Wind Turbines via Causal Spatio-Temporal Features and Variable-Level Normalized Flow
by Xiheng Gao, Weimin Li and Hongxiu Zhu
Math. Comput. Appl. 2026, 31(2), 35; https://doi.org/10.3390/mca31020035 - 1 Mar 2026
Abstract
Anomaly identification and fault localization of wind turbines through Supervisory Control and Data Acquisition (SCADA) data is a popular topic today, but most studies overlook the complex time–space interdependence between wind turbine (WT) SCADA variables, which results in low detection accuracy for anomalies in critical moving components of the wind turbine. To address this problem, this paper proposes a fault detection and identification method based on a dynamic graph model with a causal spatio-temporal attention mechanism and variable-level normalizing flow. First, it introduces a spatio-temporal attention mechanism under causality to extract spatio-temporal features of the variables and uses a graph convolutional neural network to represent the extracted spatio-temporal features as a dynamic graph. Secondly, a dynamic normalizing flow is suggested for calculating the logarithmic density estimation between variables. Finally, anomaly scores are calculated through logarithmic density estimation, and based on these scores, anomalies are detected and localized. Experimental validation on real SCADA data from wind turbines demonstrates that the method can effectively identify abnormal operating states and provide early warnings, achieving higher accuracy and greater stability. Full article

17 pages, 1664 KB  
Article
STGCformer: Spatio-Temporal Graph Convolutional Transformer for Short-Term Wind Power Forecasting
by Chenyu Tian, Min Xia, Shi Yuan, Liwen Wang and Wei Zhuang
Energies 2026, 19(5), 1214; https://doi.org/10.3390/en19051214 - 28 Feb 2026
Abstract
The accuracy of short-term wind power forecasting (STWPF) is crucial for the stable operation of power systems. To address the insufficient capture of spatio-temporal dependencies in existing models, which leads to low prediction accuracy, this paper proposes a novel Transformer-based spatio-temporal graph convolutional (STGCformer) model. The time series decomposition module (TSDM) captures periodic fluctuations and long-term variations within the data by performing seasonal trend decomposition. The spatio-temporal graph convolutional (STGC) architecture combines a Graph Attention Network (GAT) with convolutional layers (Convs) to capture both spatial and temporal dependencies, jointly processing the spatio-temporal characteristics inherent in wind power data. The Transformer’s attention mechanism simultaneously handles both short-term and long-term fluctuations. Extensive experimental results show that STGCformer achieves the best prediction accuracy across multiple time steps (24, 48, 72, and 96 h), with the mean absolute error (MAE) and mean absolute percentage error (MAPE) at 48 h being 41.383 and 3.862, respectively. This model provides a new methodological framework for STWPF. Full article
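The two error metrics quoted throughout these forecasting papers are worth pinning down, since MAPE is conventionally reported in percent. A minimal sketch of both:

```python
def mae(y_true, y_pred):
    """Mean absolute error, in the units of the target (e.g. kW)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent; assumes no zero targets."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)
```

MAE preserves the target's physical units, while MAPE normalizes per-point, so the two can rank models differently when wind power output varies widely in magnitude.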
