Dynamic Pattern Matching Network for Traffic Prediction
Abstract
1. Introduction
- We reformulate the task of capturing spatiotemporal dependencies via Graph Convolutional Networks (GCNs) into a dynamic pattern matching framework. This allows our model to better distinguish node-specific characteristics and reduces reliance on the static topology of the road network, making it more adaptable to varying spatial and flow distribution patterns across traffic nodes.
- We propose a novel Dynamic Pattern Matching Network (DPMNet), which leverages a memory module to store representative patterns over long temporal ranges. These patterns encapsulate both long-term traffic trends and intra-node flow regularities, enhancing the model’s capacity for long-term traffic prediction (a minimal sketch of this memory-based matching idea follows this list).
- Based on DPMNet, we develop a comprehensive forecasting framework, DPMformer, and conduct extensive experiments on four real-world traffic datasets. Our results demonstrate that DPMformer significantly outperforms existing baseline methods in prediction accuracy, validating the effectiveness and generalizability of our approach.
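As a rough illustration of the dynamic pattern matching idea described above, the following PyTorch-style sketch shows how node representations can query a learnable memory of representative patterns through scaled dot-product attention. The class name, memory size, and residual fusion scheme are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatternMatchingMemory(nn.Module):
    """Illustrative memory module: node states query a bank of learnable patterns."""

    def __init__(self, hidden_dim: int, num_patterns: int = 20):
        super().__init__()
        # Memory bank of representative traffic patterns (keys and values), learned end to end.
        self.keys = nn.Parameter(torch.randn(num_patterns, hidden_dim))
        self.values = nn.Parameter(torch.randn(num_patterns, hidden_dim))
        self.query_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, num_nodes, hidden_dim) node representations from the embedding layer.
        q = self.query_proj(h)
        scores = q @ self.keys.t() / self.keys.shape[-1] ** 0.5   # similarity to each stored pattern
        attn = F.softmax(scores, dim=-1)                          # soft pattern matching weights
        matched = attn @ self.values                              # retrieve the matched pattern
        return h + matched                                        # fuse with a residual connection

# Example: 16 samples, 207 nodes (METR-LA), hidden dimension 32.
out = PatternMatchingMemory(hidden_dim=32)(torch.randn(16, 207, 32))
```

Because the memory is queried per node rather than through a fixed adjacency matrix, this formulation does not depend on the static road-network topology, which is the motivation stated in the first contribution.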
2. Related Work
2.1. Traffic Flow Prediction
2.2. Neural Memory Networks
3. Methodology
3.1. Encoder and Decoder Structure
3.2. Embedding Layer
3.3. Dynamic Pattern Matching Network
3.4. Multi-Head Self-Attention Module
3.5. Regression Layer
4. Experiments
4.1. Datasets and Settings
- METR-LA: This dataset consists of traffic speed data recorded by 207 sensors on highways in Los Angeles. It spans four months, starting from March 2012.
- PEMS-BAY: Collected by the California Department of Transportation (Caltrans), this dataset includes readings from 325 sensors over the period from 1 January 2017 to 31 May 2017.
- PEMSD7 (M) and PEMSD7 (L): These datasets represent traffic data from California District 7, covering weekdays from May to June 2012. The primary difference between the two lies in the number of sensors used.
4.2. Baselines
- HA [1]: This method uses the historical average as the prediction for future values (a short worked example follows this list).
- ARIMA [2]: This method represents the current value of traffic flow as a linear combination of its historical values, and corrects the prediction using a linear combination of past error terms.
- VAR [3]: This extended auto-regressive model is designed for modeling the relationships between multiple variables.
- SVR [4]: This method maps time series data to a high-dimensional feature space and finds the optimal hyperplane for prediction within that space.
- STGCN [7]: This method uses graph convolution and 1D convolutional neural networks to extract spatial and temporal information, respectively.
- DCRNN [5]: This method integrates diffusion convolution into a recurrent neural network to capture both temporal and spatial dependencies.
- GWN [36]: This method designs an adaptive adjacency matrix combined with temporal convolution networks (TCNs) to extract spatiotemporal dependencies.
- ASTGCN [8]: This method incorporates spatial and temporal attention mechanisms into graph convolution to capture dynamic spatiotemporal correlations in traffic flow.
- STSGCN [9]: This method captures localized spatiotemporal dependencies through a synchronous modeling mechanism, accounting for the heterogeneity of spatiotemporal data.
- GMAN [37]: This method introduces spatial and temporal attention mechanisms to separately model dynamic spatial and nonlinear temporal correlations.
- STEP [38]: This method designs a pre-training model to learn segment representations from very long historical sequences.
- STGM [39]: This method investigates how the historical information of node i affects the future state of node j for more accurate predictions.
- STAEformer [40]: This method introduces a spatiotemporal adaptive embedding that enables a vanilla Transformer to capture intricate traffic patterns without relying on a predefined graph.
- DDGCRN [41]: This method combines dynamic graphs with RNN architectures to effectively extract spatiotemporal dependencies via a decoupling structure.
- STD-MAE [42]: This method constructs a pre-training framework using two decoupled auto-encoders and integrates them seamlessly to improve performance.
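For reference, the historical-average baseline listed first above can be written in a few lines. This sketch uses the simplest per-node mean over the observed history; some works instead average readings from the same time of day or week, and the paper's exact protocol is not restated here.

```python
import numpy as np

def historical_average(history: np.ndarray, horizon: int) -> np.ndarray:
    """Predict every future step as the per-node mean of the observed history.

    history: array of shape (num_steps, num_nodes) with past readings.
    horizon: number of future steps to forecast.
    """
    mean = history.mean(axis=0)           # per-node historical average
    return np.tile(mean, (horizon, 1))    # repeat the same value for each future step

# Example: forecast 12 five-minute steps (one hour) for 207 METR-LA sensors.
preds = historical_average(np.random.rand(2016, 207), horizon=12)
```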
4.3. Evaluation Metrics
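The experiments report MAE, RMSE, and MAPE. For reference, the standard definitions these abbreviations usually denote can be computed as follows; the handling of missing or zero readings (e.g., masking) is benchmark-specific and not restated here.

```python
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true: np.ndarray, y_pred: np.ndarray, eps: float = 1e-8) -> float:
    # Reported as a percentage; traffic benchmarks usually mask zero/missing readings
    # rather than adding eps.
    return float(np.mean(np.abs((y_true - y_pred) / (y_true + eps))) * 100.0)
```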
4.4. Overall Performance
4.5. Ablation Studies
- w/o DPMNet: The DPMNet module is removed and replaced with an MLP with a similar parameter scale, while the time embedding is retained. This setup tests the contribution of the pattern matching capability introduced by DPMNet.
- w/ GCN: DPMNet is replaced with a GCN model containing two graph convolution layers, constructed using a static geographical adjacency matrix to capture spatial dependencies between nodes (an illustrative version of this replacement is sketched after this list).
- w/o Time Embedding: The time embedding module is removed to evaluate the model’s reliance on time information for modeling.
- w/o MSA (Multi-Head Self-Attention): The multi-head self-attention module is removed and replaced with a simple feature fusion operation to verify the effectiveness of the attention mechanism in spatiotemporal modeling.
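As an illustration of the "w/ GCN" ablation above, the replacement module can be sketched as a standard two-layer graph convolution over a symmetrically normalized static adjacency matrix. The constructor details and layer sizes below are assumptions for illustration, not the released code.

```python
import torch
import torch.nn as nn

class TwoLayerGCN(nn.Module):
    """Illustrative static-graph module used in place of DPMNet for the 'w/ GCN' ablation."""

    def __init__(self, adj: torch.Tensor, hidden_dim: int):
        super().__init__()
        a = adj + torch.eye(adj.shape[0])          # add self-loops
        d_inv_sqrt = a.sum(dim=-1).pow(-0.5)
        # Symmetrically normalized geographical adjacency: D^{-1/2} (A + I) D^{-1/2}.
        self.register_buffer("a_norm", d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0))
        self.lin1 = nn.Linear(hidden_dim, hidden_dim)
        self.lin2 = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, num_nodes, hidden_dim); two rounds of neighborhood aggregation.
        h = torch.relu(self.lin1(self.a_norm @ h))
        return self.lin2(self.a_norm @ h)

# Example: a random 207-node adjacency stands in for the real geographical matrix.
gcn = TwoLayerGCN(adj=torch.rand(207, 207), hidden_dim=32)
out = gcn(torch.randn(16, 207, 32))
```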
4.6. Hyperparameter Sensitivity Analysis
4.6.1. Sensitivity to Hidden Dimension
4.6.2. Sensitivity to Numbers of Encoders (N) and Decoders (M)
4.7. Visualizing Prediction Results
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Liu, J.; Guan, W. A summary of traffic flow forecasting methods. J. Highw. Transp. Res. Dev. 2004, 21, 82–85.
- Zhang, G.P. Time series forecasting using a hybrid ARIMA and neural network model. Neurocomputing 2003, 50, 159–175.
- Williams, B.M.; Hoel, L.A. Modeling and forecasting vehicular traffic flow as a seasonal ARIMA process: Theoretical basis and empirical results. J. Transp. Eng. 2003, 129, 664–672.
- Wu, C.H.; Ho, J.M.; Lee, D.T. Travel-time prediction with support vector regression. IEEE Trans. Intell. Transp. Syst. 2004, 5, 276–281.
- Li, Y.; Yu, R.; Shahabi, C.; Liu, Y. Diffusion Convolutional Recurrent Neural Network: Data-Driven Traffic Forecasting. In Proceedings of the 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, 30 April–3 May 2018.
- Lai, G.; Chang, W.C.; Yang, Y.; Liu, H. Modeling long- and short-term temporal patterns with deep neural networks. In Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, Ann Arbor, MI, USA, 8–12 July 2018; pp. 95–104.
- Yu, B.; Yin, H.; Zhu, Z. Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; International Joint Conferences on Artificial Intelligence Organization: Washington, DC, USA, 2018; pp. 3634–3640.
- Guo, S.; Lin, Y.; Feng, N.; Song, C.; Wan, H. Attention based spatial-temporal graph convolutional networks for traffic flow forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019; Volume 33, pp. 922–929.
- Song, C.; Lin, Y.; Guo, S.; Wan, H. Spatial-temporal synchronous graph convolutional networks: A new framework for spatial-temporal network data forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 914–921.
- Chen, X.; Tang, H.; Wu, Y.; Shen, H.; Li, J. AdpSTGCN: Adaptive spatial–temporal graph convolutional network for traffic forecasting. Knowl.-Based Syst. 2024, 301, 112295.
- Yang, S.; Liu, J.; Zhao, K. Space meets time: Local spacetime neural network for traffic flow forecasting. In Proceedings of the 2021 IEEE International Conference on Data Mining (ICDM), Auckland, New Zealand, 7–10 December 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 817–826.
- Yan, X.; Gan, X.; Tang, J.; Zhang, D.; Wang, R. ProSTformer: Progressive space-time self-attention model for short-term traffic flow forecasting. IEEE Trans. Intell. Transp. Syst. 2024, 25, 10802–10816.
- Liu, Q.; Sun, S.; Liu, M.; Wang, Y.; Gao, B. Online spatio-temporal correlation-based federated learning for traffic flow forecasting. IEEE Trans. Intell. Transp. Syst. 2024, 25, 13027–13039.
- Cai, J.; Wang, C.H.; Hu, K. LCDFormer: Long-term correlations dual-graph transformer for traffic forecasting. Expert Syst. Appl. 2024, 249, 123721.
- Fang, Y.; Qin, Y.; Luo, H.; Zhao, F.; Xu, B.; Zeng, L.; Wang, C. When Spatio-Temporal Meet Wavelets: Disentangled Traffic Forecasting via Efficient Spectral Graph Attention Networks. In Proceedings of the 39th IEEE International Conference on Data Engineering, ICDE 2023, Anaheim, CA, USA, 3–7 April 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 517–529.
- Jiang, W.; Zhang, L. Geospatial data to images: A deep-learning framework for traffic forecasting. Tsinghua Sci. Technol. 2018, 24, 52–64.
- Agafonov, A. Traffic flow prediction using graph convolution neural networks. In Proceedings of the 2020 10th International Conference on Information Science and Technology (ICIST), London, UK, 9–15 September 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 91–95.
- Fukuda, S.; Uchida, H.; Fujii, H.; Yamada, T. Short-term prediction of traffic flow under incident conditions using graph convolutional recurrent neural network and traffic simulation. IET Intell. Transp. Syst. 2020, 14, 936–946.
- Bai, L.; Yao, L.; Li, C.; Wang, X.; Wang, C. Adaptive graph convolutional recurrent network for traffic forecasting. Adv. Neural Inf. Process. Syst. 2020, 33, 17804–17815.
- Shi, Z.; Zhang, Y.; Wang, J.; Qin, J.; Liu, X.; Yin, H.; Huang, H. DAGCRN: Graph convolutional recurrent network for traffic forecasting with dynamic adjacency matrix. Expert Syst. Appl. 2023, 227, 120259.
- Xia, Z.; Zhang, Y.; Yang, J.; Xie, L. Dynamic spatial–temporal graph convolutional recurrent networks for traffic flow forecasting. Expert Syst. Appl. 2024, 240, 122381.
- Shao, Z.; Zhang, Z.; Wei, W.; Wang, F.; Xu, Y.; Cao, X.; Jensen, C.S. Decoupled Dynamic Spatial-Temporal Graph Neural Network for Traffic Forecasting. Proc. VLDB Endow. 2022, 15, 2733–2746.
- Weston, J.; Chopra, S.; Bordes, A. Memory networks. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015.
- Sukhbaatar, S.; Weston, J.; Fergus, R. End-to-end memory networks. Adv. Neural Inf. Process. Syst. 2015, 28.
- Madotto, A.; Wu, C.; Fung, P. Mem2Seq: Effectively Incorporating Knowledge Bases into End-to-End Task-Oriented Dialog Systems. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, ACL 2018, Melbourne, Australia, 15–20 July 2018; Gurevych, I., Miyao, Y., Eds.; Association for Computational Linguistics: Stroudsburg, PA, USA, 2018; Volume 1, pp. 1468–1478.
- Yang, T.; Chan, A.B. Learning dynamic memory networks for object tracking. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 152–167.
- Oh, S.W.; Lee, J.Y.; Xu, N.; Kim, S.J. Video object segmentation using space-time memory networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 9226–9235.
- Chang, Y.; Sun, F.; Wu, Y.; Lin, S. A Memory-Network Based Solution for Multivariate Time-Series Forecasting. arXiv 2018, arXiv:1809.02105.
- Lee, H.; Jin, S.; Chu, H.; Lim, H.; Ko, S. Learning to Remember Patterns: Pattern Matching Memory Networks for Traffic Forecasting. In Proceedings of the Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, 25–29 April 2022.
- Jiang, R.; Wang, Z.; Yong, J.; Jeph, P.; Chen, Q.; Kobayashi, Y.; Song, X.; Fukushima, S.; Suzumura, T. Spatio-temporal meta-graph learning for traffic forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 8078–8086.
- Peng, D.; Zhang, Y. MA-GCN: A memory augmented graph convolutional network for traffic prediction. Eng. Appl. Artif. Intell. 2023, 121, 106046.
- Liu, Y.; Guo, B.; Meng, J.; Zhang, D.; Yu, Z. Spatio-Temporal Memory Augmented Multi-Level Attention Network for Traffic Prediction. IEEE Trans. Knowl. Data Eng. 2023, 36, 2643–2658.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Ba, J.L.; Kiros, J.R.; Hinton, G.E. Layer normalization. arXiv 2016, arXiv:1607.06450.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30.
- Wu, Z.; Pan, S.; Long, G.; Jiang, J.; Zhang, C. Graph WaveNet for Deep Spatial-Temporal Graph Modeling. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI 2019, Macao, China, 10–16 August 2019; Kraus, S., Ed.; AAAI Press: Washington, DC, USA, 2019; pp. 1907–1913.
- Zheng, C.; Fan, X.; Wang, C.; Qi, J. GMAN: A graph multi-attention network for traffic prediction. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 1234–1241.
- Shao, Z.; Zhang, Z.; Wang, F.; Xu, Y. Pre-training enhanced spatial-temporal graph neural network for multivariate time series forecasting. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Washington, DC, USA, 14–18 August 2022; pp. 1567–1577.
- Lablack, M.; Shen, Y. Spatio-temporal graph mixformer for traffic forecasting. Expert Syst. Appl. 2023, 228, 120281.
- Liu, H.; Dong, Z.; Jiang, R.; Deng, J.; Deng, J.; Chen, Q.; Song, X. Spatio-temporal adaptive embedding makes vanilla transformer SOTA for traffic forecasting. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, Birmingham, UK, 21–23 October 2023; pp. 4125–4129.
- Weng, W.; Fan, J.; Wu, H.; Hu, Y.; Tian, H.; Zhu, F.; Wu, J. A decomposition dynamic graph convolutional recurrent network for traffic forecasting. Pattern Recognit. 2023, 142, 109670.
- Gao, H.; Jiang, R.; Dong, Z.; Deng, J.; Ma, Y.; Song, X. Spatial-Temporal-Decoupled Masked Pre-training for Spatiotemporal Forecasting. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, IJCAI 2024, Jeju, Republic of Korea, 3–9 August 2024; Larson, K., Ed.; International Joint Conferences on Artificial Intelligence Organization: Washington, DC, USA, 2024; pp. 3998–4006.
Dataset | Nodes | Time Steps | Time Range | Time Interval | Train/Valid/Test |
---|---|---|---|---|---|
METR-LA | 207 | 34,272 | 03/2012–06/2012 | 5 min | 7/1/2 |
PEMS-BAY | 325 | 52,116 | 01/2017–05/2017 | 5 min | 7/1/2 |
PEMSD7 (M) | 228 | 12,672 | 05/2012–06/2012 | 5 min | 6/2/2 |
PEMSD7 (L) | 1026 | 12,672 | 05/2012–06/2012 | 5 min | 6/2/2 |
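The chronological Train/Valid/Test ratios in the table above (7/1/2 for METR-LA and PEMS-BAY, 6/2/2 for the PEMSD7 datasets) are typically realized by splitting the raw sensor series in time order and then slicing it into sliding windows. The sketch below assumes the common 12-step input and 12-step output setting used by the baselines; the paper's exact preprocessing is not restated here.

```python
import numpy as np

def chronological_split(series: np.ndarray, ratios=(7, 1, 2)):
    """Split a (time, nodes) series in time order, e.g. 7/1/2 or 6/2/2."""
    total = sum(ratios)
    i = len(series) * ratios[0] // total
    j = len(series) * (ratios[0] + ratios[1]) // total
    return series[:i], series[i:j], series[j:]

def make_windows(series: np.ndarray, in_len: int = 12, out_len: int = 12):
    """Slice the series into (input, target) pairs of consecutive steps."""
    xs, ys = [], []
    for t in range(len(series) - in_len - out_len + 1):
        xs.append(series[t:t + in_len])
        ys.append(series[t + in_len:t + in_len + out_len])
    return np.stack(xs), np.stack(ys)

# Example: METR-LA has 34,272 five-minute steps from 207 sensors.
train, valid, test = chronological_split(np.random.rand(34272, 207), ratios=(7, 1, 2))
x_train, y_train = make_windows(train)   # shapes: (samples, 12, 207) each
```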
Datasets | N (Encoders) | M (Decoders) | Batch Size | Hidden Dim | Learning Rate |
---|---|---|---|---|---|
METR-LA | 2 | 1 | 16 | 32 | 0.002 |
PEMS-BAY | 2 | 1 | 16 | 32 | 0.002 |
PEMSD7 (M) | 2 | 1 | 16 | 32 | 0.002 |
PEMSD7 (L) | 2 | 1 | 16 | 16 | 0.002 |
Prediction horizons: 3 steps (15 min), 6 steps (30 min), and 12 steps (60 min).

Dataset | Model | MAE (15 min) | RMSE (15 min) | MAPE (15 min) | MAE (30 min) | RMSE (30 min) | MAPE (30 min) | MAE (60 min) | RMSE (60 min) | MAPE (60 min)
---|---|---|---|---|---|---|---|---|---|---
METR-LA | HA | 5.811 | 12.061 | 14.94% | 5.940 | 12.162 | 15.52% | 5.915 | 12.152 | 15.17%
METR-LA | ARIMA | 3.99 | 8.21 | 9.60% | 5.15 | 10.45 | 12.70% | 6.90 | 13.23 | 17.40%
METR-LA | VAR | 4.42 | 7.89 | 10.20% | 5.41 | 9.13 | 12.70% | 6.52 | 10.11 | 15.80%
METR-LA | SVR | 3.99 | 8.45 | 9.30% | 5.05 | 10.87 | 12.10% | 6.72 | 13.76 | 16.70%
METR-LA | STGCN | 2.988 | 5.205 | 7.03% | 3.229 | 6.101 | 8.52% | 3.872 | 7.922 | 10.05%
METR-LA | DCRNN | 2.899 | 4.986 | 6.98% | 3.211 | 6.044 | 8.36% | 3.854 | 7.865 | 10.01%
METR-LA | GWN | 2.814 | 5.213 | 6.92% | 3.101 | 5.999 | 8.25% | 3.538 | 7.656 | 9.98%
METR-LA | ASTGCN | 3.015 | 5.214 | 7.11% | 3.376 | 6.173 | 8.58% | 3.942 | 7.997 | 10.12%
METR-LA | STSGCN | 3.31 | 7.62 | 8.06% | 4.13 | 9.77 | 10.29% | 5.06 | 11.66 | 12.91%
METR-LA | GMAN | 2.80 | 5.55 | 7.41% | 3.12 | 6.49 | 8.73% | 3.44 | 7.35 | 10.07%
METR-LA | STEP | 2.61 | 4.98 | 6.60% | 2.96 | 5.97 | 7.96% | 3.37 | 6.99 | 9.61%
METR-LA | STGM | 2.569 | 4.891 | 6.52% | 2.857 | 5.759 | 7.80% | 3.229 | 7.099 | 9.39%
METR-LA | STAEformer | 2.65 | 5.11 | 6.85% | 2.97 | 6.00 | 8.13% | 3.34 | 7.02 | 9.70%
METR-LA | STD-MAE | 2.62 | 5.02 | 6.70% | 2.99 | 6.07 | 8.04% | 3.40 | 7.07 | 9.59%
METR-LA | DPMformer (Ours) | 2.52 | 4.86 | 6.45% | 2.74 | 5.41 | 7.36% | 3.06 | 6.13 | 8.31%
PEMS-BAY | HA | 3.333 | 6.687 | 8.10% | 3.333 | 6.686 | 8.10% | 3.332 | 6.685 | 8.10%
PEMS-BAY | ARIMA | 1.62 | 3.30 | 3.50% | 2.33 | 4.76 | 5.40% | 3.38 | 6.50 | 8.30%
PEMS-BAY | VAR | 1.74 | 3.16 | 3.60% | 2.32 | 4.25 | 5.00% | 2.93 | 5.44 | 6.50%
PEMS-BAY | SVR | 1.85 | 3.59 | 3.80% | 2.48 | 5.18 | 5.50% | 3.28 | 7.08 | 8.00%
PEMS-BAY | STGCN | 1.327 | 2.827 | 2.79% | 1.698 | 3.887 | 3.81% | 2.055 | 4.748 | 5.02%
PEMS-BAY | DCRNN | 1.377 | 2.867 | 2.96% | 1.726 | 3.905 | 3.97% | 2.091 | 4.798 | 4.99%
PEMS-BAY | GWN | 1.322 | 2.759 | 2.78% | 1.660 | 3.737 | 3.75% | 1.991 | 4.562 | 4.75%
PEMS-BAY | ASTGCN | 1.435 | 3.057 | 3.25% | 1.795 | 4.066 | 4.40% | 2.103 | 4.770 | 5.30%
PEMS-BAY | STSGCN | 1.44 | 3.01 | 3.04% | 1.83 | 4.18 | 4.17% | 2.26 | 5.21 | 5.40%
PEMS-BAY | GMAN | 1.802 | 4.219 | 4.47% | 1.794 | 4.143 | 4.40% | 2.186 | 5.034 | 5.29%
PEMS-BAY | STEP | 1.26 | 2.73 | 2.59% | 1.55 | 3.58 | 3.43% | 1.79 | 4.20 | 4.18%
PEMS-BAY | STGM | 1.254 | 2.623 | 2.699% | 1.584 | 3.687 | 3.70% | 1.857 | 4.369 | 4.34%
PEMS-BAY | STAEformer | 1.31 | 2.78 | 2.76% | 1.62 | 3.68 | 3.62% | 1.88 | 4.34 | 4.41%
PEMS-BAY | STD-MAE | 1.23 | 2.62 | 2.56% | 1.53 | 3.53 | 3.42% | 1.77 | 4.20 | 4.17%
PEMS-BAY | DPMformer (Ours) | 1.16 | 2.54 | 2.37% | 1.39 | 3.09 | 2.96% | 1.49 | 3.40 | 3.33%
Model | PEMSD7 (M) MAE | PEMSD7 (M) RMSE | PEMSD7 (M) MAPE | PEMSD7 (L) MAE | PEMSD7 (L) RMSE | PEMSD7 (L) MAPE
---|---|---|---|---|---|---
HA | 4.59 | 8.63 | 14.53% | 4.84 | 9.03 | 14.90%
ARIMA | 7.27 | 13.20 | 15.38% | 7.51 | 12.39 | 15.83%
VAR | 4.25 | 7.61 | 10.28% | 4.45 | 8.09 | 11.62%
SVR | 4.09 | 7.47 | 10.03% | 4.41 | 8.11 | 11.58%
STGCN | 3.86 | 6.79 | 10.06% | 3.89 | 6.83 | 10.09%
DCRNN | 3.83 | 7.18 | 9.81% | 4.33 | 8.33 | 11.41%
GWN | 3.19 | 6.24 | 8.02% | 3.75 | 7.09 | 9.41%
ASTGCN | 3.14 | 6.18 | 8.12% | 3.51 | 6.81 | 9.24%
STGM | 3.002 | 6.331 | 8.01% | 3.37 | 6.62 | 9.02%
DDGCRN | 2.59 | 5.21 | 6.48% | 2.79 | 5.68 | 7.06%
STD-MAE | 2.52 | 5.20 | 6.35% | 2.64 | 5.50 | 6.65%
DPMformer (Ours) | 2.47 | 4.93 | 5.98% | 2.61 | 5.37 | 6.39%