Search Results (4)

Search Parameters:
Keywords = temporal-spatial learnable graph convolutional neural network

23 pages, 3845 KB  
Article
A Spatiotemporal Forecasting Method for Cooling Load of Chillers Based on Patch-Specific Dynamic Filtering
by Jie Li, Zhengri Jin and Tao Wu
Sustainability 2025, 17(21), 9883; https://doi.org/10.3390/su17219883 - 5 Nov 2025
Cited by 1 | Viewed by 623
Abstract
Accurate cooling load forecasting in chiller units is critical for building energy optimization, yet remains challenging due to non-stationary nonlinear dynamics driven by coupled external weather variability (solar radiation, ambient temperature) and internal thermal loads. Conventional models fail to capture the spatiotemporal coupling inherent in load time series, violating their stationarity assumptions. To address this, this paper proposes OptiNet, a spatiotemporal forecasting framework integrating patch-specific dynamic filtering with graph neural networks. OptiNet partitions multi-sensor data into non-overlapping time patches to develop a dynamic spatiotemporal graph. A learnable routing mechanism then performs adaptive dependency filtering to capture time-varying temporal–spatial correlations, followed by graph convolution for load prediction. Validated on long-term industrial logs (52,075 multi-sensor samples at 20 min intervals; district cooling plant in Zhangjiang, Shanghai, with multiple chillers, towers, pumps, building meters, and a weather station), OptiNet achieves consistently lower MAE and MSE than Graph WaveNet across 6–144-step horizons and sampling frequencies of 20–60 min; among 30 settings it leads in 26, with MSE reductions up to 27.8% (60 min, 72-step) and typical long-horizon (72–144 steps) gains of ≈2–18% MSE and ≈1–15% MAE. Crucially, the model provides interpretable spatial-temporal dependencies (e.g., "Zone B solar radiation influences Unit 2 load with 4-h lag"), enabling data-driven chiller sequencing strategies that reduce electricity consumption by 12.7% in real-world deployments, directly advancing energy-efficient building operations.
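The patch-partitioning and adaptive-graph idea described in this abstract can be sketched in a few lines of numpy. This is purely illustrative: the shapes, the random "embeddings", and the projection matrix are stand-ins for OptiNet's learned parameters, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shapes (stand-ins, not the paper's): N sensors, T timesteps, patch length P
N, T, P = 4, 24, 6
x = rng.normal(size=(N, T))              # multi-sensor series (load, weather, ...)

# 1) Partition each series into non-overlapping time patches: (N, T // P, P)
patches = x.reshape(N, T // P, P)

# 2) Adaptive dependency filtering, approximated here by a row-softmax
#    adjacency built from node embeddings (random stand-ins for learned ones)
E = rng.normal(size=(N, 8))
logits = E @ E.T
A = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# 3) One graph-convolution step per patch: mix sensors via A, then project
W = rng.normal(size=(P, P))              # stand-in for a learned projection
h = np.einsum('ij,jtp->itp', A, patches) @ W
```

In the paper the adjacency would be re-derived per patch by the learnable routing mechanism; here a single static softmax adjacency stands in for that step.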

21 pages, 2245 KB  
Article
Frequency-Aware and Interactive Spatial-Temporal Graph Convolutional Network for Traffic Flow Prediction
by Guoqing Teng, Han Wu, Hao Wu, Jiahao Cao and Meng Zhao
Appl. Sci. 2025, 15(20), 11254; https://doi.org/10.3390/app152011254 - 21 Oct 2025
Viewed by 1587
Abstract
Accurate traffic flow prediction is pivotal for intelligent transportation systems, yet existing spatial-temporal graph neural networks (STGNNs) struggle to jointly capture the long-term structural stability, short-term dynamics, and multi-scale temporal patterns of road networks. To address these shortcomings, we propose FISTGCN, a Frequency-Aware Interactive Spatial-Temporal Graph Convolutional Network. FISTGCN enriches raw traffic flow features with learnable spatial and temporal embeddings, thereby providing comprehensive spatial-temporal representations for subsequent modeling. Specifically, it utilizes an interactive dynamic graph convolutional block that generates a time-evolving fused adjacency matrix by combining adaptive and dynamic adjacency matrices. It then applies dual sparse graph convolutions with cross-scale interactions to capture multi-scale spatial dependencies. The gated spectral block projects the input features into the frequency domain and adaptively separates low- and high-frequency components using a learnable threshold. It then employs learnable filters to extract features from different frequency bands and adopts a gating mechanism to adaptively fuse low- and high-frequency information, thereby dynamically highlighting short-term fluctuations or long-term trends. Extensive experiments on four benchmark datasets demonstrate that FISTGCN delivers state-of-the-art predictive accuracy while maintaining competitive computational efficiency.
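The gated spectral block described above can be sketched with a plain FFT split. This is a minimal illustration, not FISTGCN's implementation: the cut-off frequency and the gate would be learned in the paper, whereas here both are fixed stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 64
x = rng.normal(size=T)                      # one traffic-flow series

# Project into the frequency domain
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(T)

# A (normally learnable) cut-off separates low- and high-frequency bands
cutoff = 0.1                                # stand-in for the learned threshold
low_mask = freqs <= cutoff
X_low, X_high = X * low_mask, X * ~low_mask

# Back to the time domain: long-term trend vs. short-term fluctuation
trend = np.fft.irfft(X_low, n=T)
fluct = np.fft.irfft(X_high, n=T)

# Gated fusion: a sigmoid gate mixes the two bands element-wise
g = 1.0 / (1.0 + np.exp(-rng.normal(size=T)))   # stand-in for a learned gate
fused = g * trend + (1.0 - g) * fluct
```

Because the two masks are complementary, `trend + fluct` reconstructs the original series exactly; the gate decides, per element, how much of each band reaches the next layer.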

16 pages, 741 KB  
Article
Speech Emotion Recognition Based on Temporal-Spatial Learnable Graph Convolutional Neural Network
by Jingjie Yan, Haihua Li, Fengfeng Xu, Xiaoyang Zhou, Ying Liu and Yuan Yang
Electronics 2024, 13(11), 2010; https://doi.org/10.3390/electronics13112010 - 21 May 2024
Cited by 8 | Viewed by 2267
Abstract
Graph Convolutional Neural Networks (GCNs) have shown excellent performance in the field of deep learning, and using graphs to represent speech data is a computationally efficient and scalable approach. To enhance the adequacy of graph neural networks in extracting speech emotional features, this paper proposes a Temporal-Spatial Learnable Graph Convolutional Neural Network (TLGCNN) for speech emotion recognition. TLGCNN first utilizes the openSMILE toolkit to extract frame-level speech emotion features. Then, a bidirectional long short-term memory (Bi-LSTM) network processes the long-term dependencies of the speech features to further extract deep frame-level emotion features. The extracted frame-level emotion features are then fed into the subsequent network through two pathways. One pathway constructs the deep frame-level emotion feature vectors into a graph structure, applying an adaptive adjacency matrix to capture latent spatial connections, while the other pathway concatenates the emotion feature vectors with the graph-level embedding obtained from the learnable graph convolutional network for prediction and classification. Through these two pathways, TLGCNN simultaneously obtains temporal speech emotional information through the Bi-LSTM and spatial speech emotional information through the Learnable Graph Convolutional Network (LGCN). Experimental results demonstrate that this method achieves weighted accuracies of 66.82% and 58.35% on the IEMOCAP and MSP-IMPROV databases, respectively.
(This article belongs to the Special Issue Applied AI in Emotion Recognition)
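The two-pathway fusion described in this abstract can be sketched in numpy. Everything here is a stand-in: the random matrix plays the role of Bi-LSTM frame features, the scaled dot-product adjacency approximates the adaptive adjacency matrix, and mean-pooling stands in for the learned graph-level embedding.

```python
import numpy as np

rng = np.random.default_rng(2)
F, D = 10, 16                    # F frame-level vectors of dimension D
feats = rng.normal(size=(F, D))  # stand-in for Bi-LSTM frame features

# Pathway 1: build a frame graph with an adaptive adjacency
# (here: row-softmax over scaled dot-product similarity, a common choice)
S = feats @ feats.T / np.sqrt(D)
A = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)

# One graph-convolution step, then mean-pool to a graph-level embedding
W = rng.normal(size=(D, D))                             # stand-in weights
graph_emb = np.maximum(A @ feats @ W, 0).mean(axis=0)   # ReLU + mean pool

# Pathway 2: concatenate an utterance-level feature summary with the
# graph-level embedding before the final classifier
utt_emb = feats.mean(axis=0)
fused = np.concatenate([utt_emb, graph_emb])            # shape (2 * D,)
```

The fused vector is what a downstream classifier would see: temporal information summarized from the frame features alongside spatial information aggregated over the graph.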

16 pages, 9199 KB  
Article
STAGCN: Spatial–Temporal Attention Graph Convolution Network for Traffic Forecasting
by Yafeng Gu and Li Deng
Mathematics 2022, 10(9), 1599; https://doi.org/10.3390/math10091599 - 8 May 2022
Cited by 19 | Viewed by 9059
Abstract
Traffic forecasting plays an important role in intelligent transportation systems. However, the prediction task is highly challenging due to the mixture of global and local spatiotemporal dependencies involved in traffic data. Existing graph neural networks (GNNs) typically capture spatial dependencies with a predefined or learnable static graph structure, ignoring the hidden dynamic patterns in traffic networks. Meanwhile, most recurrent neural networks (RNNs) or convolutional neural networks (CNNs) cannot effectively capture temporal correlations, especially long-term temporal dependencies. In this paper, we propose a spatial–temporal attention graph convolution network (STAGCN), which acquires a static graph and a dynamic graph from data without any prior knowledge. The static graph aims to model global space adaptability, and the dynamic graph is designed to capture local dynamics in the traffic network. A gated temporal attention module is further introduced for long-term temporal dependencies, where a causal-trend attention mechanism is proposed to increase the awareness of causality and local trends in time series. Extensive experiments on four real-world traffic flow datasets demonstrate that STAGCN achieves an outstanding prediction accuracy improvement over existing solutions.
(This article belongs to the Topic Machine and Deep Learning)
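The static-plus-dynamic graph idea in this abstract can be sketched as follows. This is a toy illustration under assumed shapes: the node embeddings and weights are random stand-ins for STAGCN's learned parameters, and the dynamic adjacency is simply recomputed from the current node features.

```python
import numpy as np

rng = np.random.default_rng(3)
N, D = 6, 8                          # N road-network nodes, feature dim D

# Static graph: learned node embeddings yield one global adjacency
E1, E2 = rng.normal(size=(N, D)), rng.normal(size=(N, D))
A_static = np.exp(E1 @ E2.T)
A_static /= A_static.sum(axis=1, keepdims=True)

# Dynamic graph: recomputed at each step from the current node features,
# capturing local, time-varying dependencies
x_t = rng.normal(size=(N, D))        # node features at one time step
S = x_t @ x_t.T / np.sqrt(D)
A_dyn = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)

# Graph convolution over both graphs, with the outputs summed
W = rng.normal(size=(D, D))          # stand-in for learned weights
h = A_static @ x_t @ W + A_dyn @ x_t @ W
```

The split mirrors the abstract's division of labor: `A_static` encodes stable, network-wide structure learned from all the data, while `A_dyn` tracks dependencies that only hold at the current moment.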
