TimesNet-PM2.5: Interpretable TimesNet for Disentangling Intraperiod and Interperiod Variations in PM2.5 Prediction
Abstract
1. Introduction
- We modified the TimesBlock within the TimesNet architecture by integrating targeted temporal patterns and refining the time-sequence transformation; in particular, we incorporated four task-related temporal variations for recognition (a minimal period-folding sketch follows this list).
- We designed a more specific loss function that dynamically shifts the learning focus between general prediction and specific time spots, following an easy-to-hard learning strategy for better forecasts (see the loss sketch after this list).
- We introduced a gate mechanism into TimesNet that combines each feature's statistical correlation with PM2.5 with the model's own learned weighting. This dual-aspect feature selection supplies better-conditioned input to the TimesBlock and improves PM2.5 time-series forecasts (see the gate sketch after this list).
- The redesigned TimesNet is interpretable, supporting visual explanations of its predictions, and it outperforms both classical time-series forecasting baselines and the original TimesNet.
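To make the first contribution concrete, the following is a minimal PyTorch sketch of folding a 1D series into 2D intraperiod × interperiod tensors for a fixed set of task-related periods, in the spirit of TimesBlock. The period list (6, 12, 24, and 168 h), the single 3×3 convolution, and the residual averaging are illustrative assumptions, not the exact design of TimesNet-PM2.5; in TimesNet itself the 2D backbone is an Inception-style block and the periods are weighted by FFT amplitudes.

```python
import torch
import torch.nn as nn

def fold_to_2d(x: torch.Tensor, period: int) -> torch.Tensor:
    """Reshape [batch, time, channels] into [batch, channels, cycles, period],
    zero-padding the time axis up to a multiple of the period."""
    B, T, C = x.shape
    if T % period != 0:
        x = torch.cat([x, x.new_zeros(B, period - T % period, C)], dim=1)
    cycles = x.shape[1] // period
    return x.reshape(B, cycles, period, C).permute(0, 3, 1, 2)

class TaskPeriodBlock(nn.Module):
    """2D convolution over (interperiod, intraperiod) axes for fixed, task-related
    periods (hypothetical choices: 6 h, 12 h, 24 h, 168 h)."""
    def __init__(self, channels: int, periods=(6, 12, 24, 168)):
        super().__init__()
        self.periods = periods
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:      # x: [batch, time, channels]
        B, T, C = x.shape
        outs = []
        for p in self.periods:
            z = self.conv(fold_to_2d(x, p))                   # intraperiod x interperiod variation
            z = z.permute(0, 2, 3, 1).reshape(B, -1, C)[:, :T]  # unfold back to [batch, time, channels]
            outs.append(z)
        return x + torch.stack(outs, dim=-1).mean(dim=-1)     # residual average over the periods
```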
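For the second contribution, a minimal sketch of what such a loss could look like: a general MSE term blended with an MSE term restricted to "specific time spots", with the spot weight ramped up over training as an easy-to-hard schedule. The boolean `spot_mask` (e.g., flagging pollution peaks or the farthest horizons), the linear ramp, and `alpha_max` are placeholders; the paper's exact weighting may differ.

```python
import torch
import torch.nn.functional as F

def specific_loss(pred: torch.Tensor, target: torch.Tensor, spot_mask: torch.Tensor,
                  epoch: int, total_epochs: int, alpha_max: float = 0.5) -> torch.Tensor:
    """General forecasting loss blended with a loss on flagged time spots.
    spot_mask: boolean [batch, time] marking the 'hard' positions."""
    general = F.mse_loss(pred, target)
    spots = F.mse_loss(pred[spot_mask], target[spot_mask]) if spot_mask.any() else general
    alpha = alpha_max * min(1.0, epoch / max(1, total_epochs))  # easy-to-hard: weight grows over epochs
    return (1.0 - alpha) * general + alpha * spots
```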
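For the gate mechanism, a minimal sketch of one way to blend a statistical prior with learned weighting: each feature is rescaled by a sigmoid gate whose input is the sum of a fixed |Pearson r| prior (the "Correlation with PM2.5" column reported later) and a learned logit. The additive blend and sigmoid form are assumptions for illustration; such a gate can sit in front of the embedding so the TimesBlock receives correlation-informed inputs.

```python
import torch
import torch.nn as nn

class CorrelationGate(nn.Module):
    """Per-feature gate blending a fixed |Pearson r| prior (correlation with PM2.5)
    with a learned logit; the sigmoid output rescales each input feature."""
    def __init__(self, corr_with_pm25):                        # e.g. [0.91, 0.20, 0.023, ...]
        super().__init__()
        prior = torch.as_tensor(corr_with_pm25, dtype=torch.float32).abs()
        self.register_buffer("prior", prior)                   # statistical evidence, not trained
        self.logits = nn.Parameter(torch.zeros_like(prior))    # learned evidence

    def forward(self, x: torch.Tensor) -> torch.Tensor:        # x: [batch, time, features]
        gate = torch.sigmoid(self.logits + self.prior)         # combine both sources of evidence
        return x * gate                                        # broadcast over batch and time
```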
2. Related Works
2.1. Previous Studies on PM2.5 Forecasting
2.2. Time-Series Representation Learning
2.3. TimesNet
3. Methodology
3.1. Overview
3.2. Modified TimesBlock
3.3. Specific Loss
3.4. Gate Mechanism for Feature Selection
3.5. Sensor Specifications
4. Experiments
4.1. Dataset and Task
4.2. Details of Training
4.2.1. Hyperparameters
4.2.2. Metrics
4.3. Results
5. Analysis
5.1. Feature Importance in Shapley Method
5.2. Temporal 2D-Variations in TimesNet-PM2.5
5.3. Future PM2.5 Situations Predicted by Our Model
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Parameter | Range | Resolution | Accuracy
---|---|---|---
PM2.5 | 0–1000 µg/m³ | 1 µg/m³ | ±10% (<500 µg/m³)
PM10 | 0–1000 µg/m³ | 1 µg/m³ | ±10% (<500 µg/m³)
TSP | 0–1000 µg/m³ | 1 µg/m³ | ±10% (<500 µg/m³)
CO (carbon monoxide) | 0–10 ppm | 0.001 ppm | ±5% F.S.
SO2 (sulfur dioxide) | 0–5 ppm | 0.001 ppm | ±5% F.S.
NO2 (nitrogen dioxide) | 0–5 ppm | 0.001 ppm | ±5% F.S.
O3 (ozone) | 0–5 ppm | 0.1 ppm | ±5% F.S.
Temperature | −40–60 °C | 0.01 °C | ±0.3 °C (at 25 °C)
Relative humidity | 0–100% RH | 0.01% RH | ±3% RH
Wind speed | 0–60 m/s | 0.01 m/s | ±0.1 m/s
Wind direction | 0–360° | 1° | ±2°
Atmospheric pressure | 30–110 kPa | 0.01 kPa | 4%
Piezoelectric rainfall | 0–4 mm/min | 0.01 mm | 4%
Optical rainfall | 0–4 mm/min | 0.01 mm | 4%
Illuminance | 0–20 W LUX | 1 lux | 3%
Total radiation | 0–1800 W/m² | 1 W/m² | 3%
Noise | 30–120 dB | 0.1 dB | +1.5 dB
CO2 (carbon dioxide) | 400–5000 ppm | 1 ppm | ±(50 ppm + 5% reading)
No. | Feature | Statistical Value | Correlation with PM2.5
---|---|---|---
1 | PM10 | 26 ± 26 µg/m³ | 0.91
2 | CO | 0.62 ± 0.75 ppm | 0.20
3 | NO2 | 11 ± 12 ppm | 0.023
4 | SO2 | 9.9 ± 4.9 ppm | 0.043
5 | Atmospheric pressure | 101 ± 1.00 kPa | 0.31
6 | Humidity | 60 ± 13% RH | 0.008
7 | Temperature | 30 ± 6.0 °C | −0.25
8 | Wind direction | 160 ± 87° | −0.092
9 | Wind speed | 1.02 ± 0.72 m/s | 0.13
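The mean ± std summaries and the "Correlation with PM2.5" column above follow standard definitions; below is a minimal pandas sketch, assuming an hourly numeric DataFrame `df` whose column names (`PM2.5`, `PM10`, `CO`, ...) are placeholders for the sensor fields.

```python
import pandas as pd

def feature_summary(df: pd.DataFrame, target: str = "PM2.5") -> pd.DataFrame:
    """Per-feature mean, standard deviation, and Pearson correlation with the PM2.5 column."""
    corr = df.corr(numeric_only=True)[target].drop(target).rename("corr_with_pm25")
    stats = df.drop(columns=[target]).agg(["mean", "std"]).T
    return stats.join(corr)
```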
Model | ARIMA | | | | | LSTM | | | |
Metric | MSE | MAE | RMSE | MAPE | MSPE | MSE | MAE | RMSE | MAPE | MSPE
---|---|---|---|---|---|---|---|---|---|---
MeiLan 1 h | 3.04 | 1.21 | 0.21 | 0.00 | 0.00 | 49.39 | 4.41 | 7.03 | 0.17 | 0.11 |
MeiLan 6 h | 7.89 | 2.32 | 2.81 | 0.06 | 0.00 | 96.19 | 6.81 | 9.81 | 0.23 | 0.09 |
MeiLan 12 h | 26.39 | 3.90 | 5.14 | 0.09 | 0.01 | 146.19 | 8.04 | 12.09 | 0.28 | 0.18 |
MeiLan 24 h | 58.30 | 6.22 | 7.64 | 0.17 | 0.04 | 115.97 | 7.71 | 10.77 | 0.30 | 0.24 |
MeiLan Avg | 23.15 | 3.16 | 3.95 | 0.08 | 0.02 | 101.93 | 6.74 | 9.93 | 0.24 | 0.16 |
All 1 h | 5.48 | 2.18 | 2.34 | 0.18 | 0.02 | 54.21 | 4.53 | 7.36 | 0.19 | 0.06 |
All 6 h | 50.26 | 6.48 | 7.08 | 0.73 | 0.71 | 120.16 | 6.90 | 10.95 | 0.33 | 0.53 |
All 12 h | 52.09 | 6.85 | 7.21 | 0.75 | 0.72 | 193.82 | 9.22 | 13.91 | 0.35 | 0.39 |
All 24 h | 62.39 | 7.55 | 7.90 | 0.90 | 1.02 | 160.44 | 8.07 | 12.67 | 0.36 | 0.44 |
All Avg | 42.55 | 5.76 | 6.13 | 0.64 | 0.62 | 132.16 | 7.18 | 11.22 | 0.31 | 0.36 |
Model | LinearRegression | | | | | RandomForestRegressor | | | |
Metric | MSE | MAE | RMSE | MAPE | MSPE | MSE | MAE | RMSE | MAPE | MSPE
---|---|---|---|---|---|---|---|---|---|---
MeiLan 1 h | 19.68 | 3.29 | 4.44 | 0.31 | 0.01 | 20.51 | 3.38 | 4.53 | 0.32 | 0.02 |
MeiLan 6 h | 19.69 | 3.29 | 4.44 | 0.31 | 0.01 | 20.89 | 3.41 | 4.57 | 0.32 | 0.02 |
MeiLan 12 h | 19.74 | 3.29 | 4.44 | 0.31 | 0.01 | 21.00 | 3.43 | 4.59 | 0.33 | 0.02 |
MeiLan 24 h | 19.75 | 3.30 | 4.44 | 0.31 | 0.02 | 21.01 | 3.44 | 4.58 | 0.33 | 0.02 |
MeiLan Avg | 19.72 | 3.29 | 4.44 | 0.31 | 0.01 | 20.86 | 3.42 | 4.57 | 0.32 | 0.02 |
All 1 h | 13.68 | 2.18 | 3.60 | 0.11 | 0.06 | 9.56 | 1.83 | 3.01 | 0.095 | 0.038 |
All 6 h | 44.38 | 4.17 | 6.49 | 0.23 | 0.24 | 37.61 | 3.87 | 5.98 | 0.218 | 0.276 |
All 12 h | 60.81 | 4.96 | 7.60 | 0.29 | 0.34 | 51.68 | 4.64 | 7.01 | 0.266 | 0.304 |
All 24 h | 85.33 | 5.86 | 9.01 | 0.35 | 1.61 | 73.11 | 5.55 | 8.33 | 0.352 | 1.65 |
All Avg | 51.06 | 4.29 | 6.96 | 0.247 | 0.57 | 42.99 | 3.97 | 6.39 | 0.238 | 0.57 |
Model | GradientBoostingRegressor | | | | | SVR | | | |
Metric | MSE | MAE | RMSE | MAPE | MSPE | MSE | MAE | RMSE | MAPE | MSPE
---|---|---|---|---|---|---|---|---|---|---
MeiLan 1 h | 19.86 | 3.30 | 4.46 | 0.31 | 0.01 | 12.96 | 2.69 | 3.60 | 0.10 | 0.02 |
MeiLan 6 h | 20.03 | 3.32 | 4.48 | 0.31 | 0.01 | 1.58 | 3.39 | 4.40 | 0.15 | 0.04 |
MeiLan 12 h | 19.88 | 3.31 | 4.46 | 0.31 | 0.01 | 22.48 | 4.00 | 4.74 | 0.17 | 0.04 |
MeiLan 24 h | 19.92 | 3.31 | 4.46 | 0.31 | 0.01 | 25.12 | 4.20 | 5.01 | 0.19 | 0.06 |
MeiLan Avg | 19.92 | 3.31 | 4.46 | 0.31 | 0.01 | 19.99 | 3.57 | 4.47 | 0.15 | 0.04 |
All 1 h | 13.25 | 2.11 | 3.49 | 0.11 | 0.06 | 9.26 | 1.78 | 2.92 | 0.092 | 0.037 |
All 6 h | 42.97 | 4.04 | 6.28 | 0.22 | 0.23 | 36.42 | 3.74 | 5.79 | 0.212 | 0.267 |
All 12 h | 58.89 | 4.80 | 7.36 | 0.28 | 0.33 | 50.05 | 4.49 | 6.79 | 0.258 | 0.294 |
All 24 h | 82.63 | 5.68 | 8.72 | 0.34 | 1.55 | 70.84 | 5.37 | 8.07 | 0.340 | 1.60 |
All Avg | 49.44 | 4.16 | 6.74 | 0.239 | 0.55 | 41.67 | 3.85 | 6.19 | 0.230 | 0.55 |
Model | TimesNet | | | | | TimesNet-PM2.5 (Ours) | | | |
Metric | MSE | MAE | RMSE | MAPE | MSPE | MSE | MAE | RMSE | MAPE | MSPE
---|---|---|---|---|---|---|---|---|---|---
MeiLan 1 h | 8.17 | 1.80 | 2.86 | 0.15 | 0.05 | 4.98 | 1.32 | 2.46 | 0.13 | 0.03 |
MeiLan 6 h | 8.09 | 2.01 | 2.84 | 0.17 | 0.06 | 7.52 | 1.88 | 2.74 | 0.17 | 0.05 |
MeiLan 12 h | 13.57 | 2.39 | 3.68 | 0.17 | 0.05 | 12.74 | 2.11 | 3.57 | 0.18 | 0.07 |
MeiLan 24 h | 19.73 | 2.99 | 4.44 | 0.27 | 0.22 | 20.10 | 2.84 | 4.48 | 0.25 | 0.12 |
MeiLan Avg | 12.38 | 2.27 | 3.46 | 0.19 | 0.10 | 11.83 | 2.07 | 3.36 | 0.18 | 0.07 |
All 1 h | 20.22 | 1.95 | 4.50 | 0.26 | 0.47 | 13.46 | 1.42 | 3.67 | 0.14 | 0.90 |
All 6 h | 38.30 | 3.10 | 6.19 | 0.41 | 0.24 | 24.79 | 2.47 | 4.98 | 0.24 | 0.32 |
All 12 h | 51.21 | 3.66 | 7.16 | 0.55 | 0.87 | 30.33 | 2.78 | 5.51 | 0.29 | 0.66 |
All 24 h | 64.51 | 4.08 | 8.03 | 0.51 | 0.55 | 42.47 | 3.26 | 6.52 | 0.35 | 1.16 |
All Avg | 43.56 | 3.20 | 6.47 | 0.43 | 0.69 | 27.76 | 2.48 | 5.16 | 0.25 | 0.76 |
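The five metrics reported in the tables above (MSE, MAE, RMSE, MAPE, MSPE) have standard definitions; the NumPy sketch below shows how they are conventionally computed (the paper's exact implementation, e.g., its handling of zero targets, may differ).

```python
import numpy as np

def forecast_metrics(pred: np.ndarray, true: np.ndarray, eps: float = 1e-8) -> dict:
    """MSE, MAE, RMSE, MAPE, and MSPE between predictions and ground truth."""
    err = pred - true
    mse = np.mean(err ** 2)
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(mse)
    mape = np.mean(np.abs(err / (true + eps)))      # relative error, guarded against zero targets
    mspe = np.mean((err / (true + eps)) ** 2)
    return {"MSE": mse, "MAE": mae, "RMSE": rmse, "MAPE": mape, "MSPE": mspe}
```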
Models | MSE | RMSE | MAE |
---|---|---|---|
AR | 5.62 | 2.37 | 1.43 |
MA | 10.16 | 3.19 | 2.08 |
ARMA | 5.81 | 2.41 | 1.45 |
ANN | 6.75 | 2.60 | 1.61 |
GRU | 6.23 | 2.50 | 1.56 |
ST-GCN | 5.76 | 2.39 | 1.44 |
ST-CCN-PM2.5 | 3.43 | 1.85 | 1.41 |
TimesNet-PM2.5 | 4.98 | 2.46 | 1.32 |