DESTformer: A Transformer Based on Explicit Seasonal–Trend Decomposition for Long-Term Series Forecasting
Abstract
1. Introduction
- A Transformer architecture based on explicit seasonal–trend decomposition is proposed; it effectively decouples complex long sequences and learns dedicated representations for the seasonal and trend components (a decomposition sketch follows this list).
- A multi-view attention mechanism (MVI-Attention) is proposed that models the seasonal component holistically from multiple frequency-domain perspectives, capturing the important periodic structures in a time series (see the second sketch below).
- A multi-scale attention mechanism (MSC-Attention) is proposed that improves information utilization in the trend view by modeling variable-length sub-trends, yielding information-rich trend representations (see the third sketch below).
- Extensive experiments are conducted on six benchmark datasets spanning multiple domains (energy, transportation, economics, weather, and disease). The results show that DESTformer improves on state-of-the-art methods by 6.0% and 4.8% in multivariate and univariate long-term time series forecasting, respectively.
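To make the first contribution concrete, the following is a minimal sketch of an explicit FFT-based seasonal–trend split in PyTorch. The function name, the top-k amplitude selection rule, and the (batch, length, channels) layout are assumptions of this sketch, not details taken from the paper's Section 3.2.1.

```python
# Minimal sketch of an FFT-based seasonal-trend split. The top-k selection
# rule and tensor layout are this sketch's assumptions, not the paper's.
import torch

def frequency_decompose(x: torch.Tensor, top_k: int = 3):
    """Split series x of shape (batch, length, channels) into seasonal + trend."""
    xf = torch.fft.rfft(x, dim=1)                      # complex spectrum: (batch, freq, channels)
    amp = xf.abs()
    amp[:, 0] = 0.0                                    # never pick the DC term as "seasonal"
    idx = amp.topk(top_k, dim=1).indices               # k dominant frequencies per channel
    keep = torch.zeros_like(amp, dtype=torch.bool).scatter_(1, idx, True)
    seasonal = torch.fft.irfft(torch.where(keep, xf, torch.zeros_like(xf)),
                               n=x.size(1), dim=1)     # periodic part of the signal
    return seasonal, x - seasonal                      # residual is treated as trend
```

On a toy input, `frequency_decompose(torch.randn(8, 96, 7))` returns two tensors of the input's shape. Treating the residual after removing dominant periodic components as the trend is one common convention; the paper's exact decomposition rule may differ.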
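For MVI-Attention, the paper's formulation is not reproduced here; the sketch below only illustrates the general idea of attending over FFT coefficients from several frequency-domain views. The equal sub-band splitting is an assumed stand-in for the paper's views, and `d_model` is assumed even and divisible by `n_heads`.

```python
# Hedged sketch of attention computed in the frequency domain from several
# "views" (sub-bands of the spectrum); an illustration, not MVI-Attention itself.
import torch
import torch.nn as nn

class FrequencyViewAttention(nn.Module):
    def __init__(self, d_model: int, n_views: int = 4, n_heads: int = 4):
        super().__init__()
        self.n_views = n_views
        self.attn = nn.MultiheadAttention(2 * d_model, n_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, length, d_model), the seasonal component in the time domain
        xf = torch.fft.rfft(x, dim=1)                    # (batch, freq, d_model), complex
        z = torch.cat([xf.real, xf.imag], dim=-1)        # real-valued view of the spectrum
        bands = torch.chunk(z, self.n_views, dim=1)      # each sub-band is one "view"
        z = torch.cat([self.attn(b, b, b)[0] for b in bands], dim=1)
        d = z.size(-1) // 2
        xf = torch.complex(z[..., :d], z[..., d:])       # back to complex coefficients
        return torch.fft.irfft(xf, n=x.size(1), dim=1)   # back to the time domain
```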
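For MSC-Attention, the sketch below models variable-length sub-trends by average pooling at several window sizes and lets the original trend attend over all of them. The window sizes, padding scheme, and fusion by cross-attention are illustrative assumptions, not the paper's mechanism.

```python
# Hedged sketch of multi-scale sub-trend modeling: pooled sub-trends at several
# window lengths, fused by attending the original trend against every scale.
import torch
import torch.nn as nn

class MultiScaleTrendAttention(nn.Module):
    def __init__(self, d_model: int, scales=(4, 8, 16), n_heads: int = 4):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.AvgPool1d(k, stride=1, padding=k // 2, count_include_pad=False)
            for k in scales
        )
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, trend: torch.Tensor) -> torch.Tensor:
        # trend: (batch, length, d_model)
        L = trend.size(1)
        x = trend.transpose(1, 2)                        # (batch, d_model, length) for pooling
        subs = [p(x)[..., :L].transpose(1, 2) for p in self.pools]
        keys = torch.cat(subs, dim=1)                    # all sub-trends stacked along time
        out, _ = self.attn(trend, keys, keys)            # query trend against every scale
        return out
```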
2. Related Work
2.1. Long-Term Time Series Forecasting
2.2. Time Series Decomposition
3. Methodology
3.1. Problem Definition
3.2. DESTformer Architecture
3.2.1. Frequency Decomposition
3.2.2. Model Inputs
3.2.3. Encoder
3.2.4. Decoder
3.3. Multi-View Attention
3.4. Multi-Scale Attention
3.5. Complexity Analysis
Algorithm 1: Overall DESTformer procedure.
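The pseudocode of Algorithm 1 is not reproduced here. As rough orientation only, one plausible composition of the modules sketched in the Introduction could look as follows; this is an assumption of the sketch, not the paper's verbatim procedure.

```python
# Plausible composition of the earlier sketches (frequency_decompose,
# FrequencyViewAttention, MultiScaleTrendAttention); not the paper's Algorithm 1.
def destformer_like_forward(x, seasonal_attn, trend_attn, head, top_k: int = 3):
    seasonal, trend = frequency_decompose(x, top_k)   # explicit seasonal-trend split
    s = seasonal_attn(seasonal)                       # MVI-style frequency branch
    t = trend_attn(trend)                             # MSC-style multi-scale branch
    return head(s + t)                                # fuse and map to the forecast
```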
4. Experiments
4.1. Experimental Settings
4.1.1. Datasets
4.1.2. Baselines
4.1.3. Evaluation Metrics
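All results below are reported as mean squared error (MSE) and mean absolute error (MAE). Assuming the standard definitions (the paper's formulas are not reproduced here), over a prediction horizon of length $O$, averaged over variables in the multivariate case:

$$
\mathrm{MSE} = \frac{1}{O}\sum_{t=1}^{O}\left(\hat{y}_t - y_t\right)^2,
\qquad
\mathrm{MAE} = \frac{1}{O}\sum_{t=1}^{O}\left\lvert \hat{y}_t - y_t \right\rvert .
$$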
4.1.4. Implementation Details
4.2. Main Results
4.2.1. Multivariate Forecasting Results
4.2.2. Univariate Forecasting Results
4.3. Ablation Studies
4.3.1. Traditional Time Series Decomposition vs. Frequency Decomposition
4.3.2. Self-Attention Family vs. MVI-Attention
4.3.3. Self-Attention vs. MSC-Attention
4.4. Efficiency Analysis
4.5. Representation Disentanglement
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; Zhang, W. Informer: Beyond efficient transformer for long sequence time-series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 2–9 February 2021; Volume 35, pp. 11106–11115.
- Wu, H.; Xu, J.; Wang, J.; Long, M. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. Adv. Neural Inf. Process. Syst. 2021, 34, 22419–22430.
- Zhou, T.; Ma, Z.; Wen, Q.; Wang, X.; Sun, L.; Jin, R. FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In Proceedings of the International Conference on Machine Learning, PMLR, Baltimore, MD, USA, 17–23 July 2022; pp. 27268–27286.
- Guo, C.; Li, D.; Chen, X. Unequal Interval Dynamic Traffic Flow Prediction with Singular Point Detection. Appl. Sci. 2023, 13, 8973.
- Han, L.; Du, B.; Sun, L.; Fu, Y.; Lv, Y.; Xiong, H. Dynamic and multi-faceted spatio-temporal deep learning for traffic speed forecasting. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, Virtual Event, 14–18 August 2021; pp. 547–555.
- He, Z.; Zhao, C.; Huang, Y. Multivariate Time Series Deep Spatiotemporal Forecasting with Graph Neural Network. Appl. Sci. 2022, 12, 5731.
- Qin, H.; Ke, S.; Yang, X.; Xu, H.; Zhan, X.; Zheng, Y. Robust spatio-temporal purchase prediction via deep meta learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 2–9 February 2021; Volume 35, pp. 4312–4319.
- An, Y.; Zhang, L.; Yang, H.; Sun, L.; Jin, B.; Liu, C.; Yu, R.; Wei, X. Prediction of treatment medicines with dual adaptive sequential networks. IEEE Trans. Knowl. Data Eng. 2021, 34, 5496–5509.
- Zhu, J.; Tang, H.; Zhang, L.; Jin, B.; Xu, Y.; Wei, X. A Global View-Guided Autoregressive Residual Network for Irregular Time Series Classification. In Proceedings of the Pacific-Asia Conference on Knowledge Discovery and Data Mining, Osaka, Japan, 25–28 May 2023; Springer: Berlin/Heidelberg, Germany, 2023; pp. 289–300.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30.
- Kitaev, N.; Kaiser, Ł.; Levskaya, A. Reformer: The efficient transformer. arXiv 2020, arXiv:2001.04451.
- Du, D.; Su, B.; Wei, Z. Preformer: Predictive transformer with multi-scale segment-wise correlations for long-term time series forecasting. arXiv 2022, arXiv:2202.11356.
- Wang, Z.; Xu, X.; Zhang, W.; Trajcevski, G.; Zhong, T.; Zhou, F. Learning Latent Seasonal-Trend Representations for Time Series Forecasting. In Proceedings of the Advances in Neural Information Processing Systems, New Orleans, LA, USA, 28 November–9 December 2022.
- Box, G.E.; Jenkins, G.M. Some recent advances in forecasting and control. J. R. Stat. Soc. Ser. C Appl. Stat. 1968, 17, 91–109.
- Box, G.E.; Jenkins, G.M.; Reinsel, G.C.; Ljung, G.M. Time Series Analysis: Forecasting and Control; John Wiley & Sons: Hoboken, NJ, USA, 2015.
- Woo, G.; Liu, C.; Sahoo, D.; Kumar, A.; Hoi, S. ETSformer: Exponential smoothing transformers for time-series forecasting. arXiv 2022, arXiv:2202.01381.
- Toharudin, T.; Pontoh, R.S.; Caraka, R.E.; Zahroh, S.; Lee, Y.; Chen, R.C. Employing long short-term memory and Facebook prophet model in air temperature forecasting. Commun. Stat. Simul. Comput. 2020, 52, 279–290.
- Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv 2018, arXiv:1810.04805.
- Chen, W.; Xing, X.; Xu, X.; Pang, J.; Du, L. SpeechFormer++: A Hierarchical Efficient Framework for Paralinguistic Speech Processing. IEEE/ACM Trans. Audio Speech Lang. Process. 2023, 31, 775–788.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
- Li, S.; Jin, X.; Xuan, Y.; Zhou, X.; Chen, W.; Wang, Y.X.; Yan, X. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. Adv. Neural Inf. Process. Syst. 2019, 32.
- Cleveland, R.B.; Cleveland, W.S.; McRae, J.E.; Terpenning, I. STL: A seasonal-trend decomposition. J. Off. Stat. 1990, 6, 3–73.
- Jarrah, M.; Derbali, M. Predicting Saudi Stock Market Index by Using Multivariate Time Series Based on Deep Learning. Appl. Sci. 2023, 13, 8356.
- Asadi, R.; Regan, A.C. A spatio-temporal decomposition based deep neural network for time series forecasting. Appl. Soft Comput. 2020, 87, 105963.
- Ju, J.; Liu, F.A. Multivariate time series data prediction based on Att-LSTM network. Appl. Sci. 2021, 11, 9373.
- Taylor, S.J.; Letham, B. Forecasting at scale. Am. Stat. 2018, 72, 37–45.
- Oreshkin, B.N.; Carpov, D.; Chapados, N.; Bengio, Y. N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. arXiv 2019, arXiv:1905.10437.
- Sen, R.; Yu, H.F.; Dhillon, I.S. Think globally, act locally: A deep neural network approach to high-dimensional time series forecasting. Adv. Neural Inf. Process. Syst. 2019, 32.
- Woo, G.; Liu, C.; Sahoo, D.; Kumar, A.; Hoi, S. CoST: Contrastive learning of disentangled seasonal-trend representations for time series forecasting. arXiv 2022, arXiv:2202.01575.
- Gao, J.; Sultan, H.; Hu, J.; Tung, W.W. Denoising nonlinear time series by adaptive filtering and wavelet shrinkage: A comparison. IEEE Signal Process. Lett. 2010, 17, 237–240.
- Gao, J.; Hu, J.; Tung, W.W. Facilitating joint chaos and fractal analysis of biosignals through nonlinear adaptive filtering. PLoS ONE 2011, 6, e24331.
- Wiener, N. Generalized harmonic analysis. Acta Math. 1930, 55, 117–258.
- Hyndman, R.J.; Khandakar, Y. Automatic time series forecasting: The forecast package for R. J. Stat. Softw. 2008, 27, 1–22.
- Bahdanau, D.; Cho, K.; Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv 2014, arXiv:1409.0473.
- Van der Maaten, L.; Hinton, G. Visualizing data using t-SNE. J. Mach. Learn. Res. 2008, 9.
- Lai, G.; Chang, W.C.; Yang, Y.; Liu, H. Modeling long- and short-term temporal patterns with deep neural networks. In Proceedings of the 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, Ann Arbor, MI, USA, 8–12 July 2018; pp. 95–104.
- Ariyo, A.A.; Adewumi, A.O.; Ayo, C.K. Stock Price Prediction Using the ARIMA Model. In Proceedings of the 2014 UKSim-AMSS 16th International Conference on Computer Modelling and Simulation, Cambridge, UK, 26–28 March 2014; IEEE: Piscataway, NJ, USA, 2014; pp. 106–112.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980.
- Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; et al. PyTorch: An imperative style, high-performance deep learning library. Adv. Neural Inf. Process. Syst. 2019, 32.
| Methods | Training Time | Training Memory | Testing Steps |
|---|---|---|---|
| LSTM [34] | O(L) | O(L) | L |
| Transformer [10] | O(L²) | O(L²) | L |
| LogTrans [21] | O(L log L) | O(L²) | 1 |
| Informer [1] | O(L log L) | O(L log L) | 1 |
| Autoformer [2] | O(L log L) | O(L log L) | 1 |
| FEDformer [3] | O(L) | O(L) | 1 |
| DESTformer | | | 1 |
| Models | | DESTformer | | FEDformer [3] | | Autoformer [2] | | Informer [1] | | LogTrans [21] | | ARIMA [37] | | LSTM [34] | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Metric | | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |
| ETT* | 96 | 0.198 | 0.266 | 0.203 | 0.287 | 0.255 | 0.339 | 0.365 | 0.453 | 0.768 | 0.642 | 1.354 | 0.829 | 2.041 | 1.073 |
| | 192 | 0.255 | 0.317 | 0.269 | 0.328 | 0.281 | 0.340 | 0.533 | 0.563 | 0.989 | 0.757 | 1.562 | 0.986 | 2.249 | 1.112 |
| | 336 | 0.311 | 0.358 | 0.325 | 0.366 | 0.339 | 0.372 | 1.363 | 0.887 | 1.334 | 0.872 | 1.842 | 1.212 | 2.568 | 1.238 |
| | 720 | 0.402 | 0.398 | 0.421 | 0.415 | 0.422 | 0.419 | 3.379 | 1.388 | 3.048 | 1.328 | 2.315 | 1.712 | 2.720 | 1.287 |
| Electricity | 96 | 0.187 | 0.296 | 0.193 | 0.308 | 0.201 | 0.317 | 0.274 | 0.368 | 0.258 | 0.357 | 0.252 | 0.362 | 0.375 | 0.437 |
| | 192 | 0.195 | 0.307 | 0.201 | 0.315 | 0.222 | 0.334 | 0.296 | 0.386 | 0.266 | 0.368 | 0.301 | 0.393 | 0.442 | 0.473 |
| | 336 | 0.203 | 0.325 | 0.214 | 0.329 | 0.231 | 0.338 | 0.300 | 0.394 | 0.280 | 0.380 | 0.322 | 0.403 | 0.439 | 0.473 |
| | 720 | 0.229 | 0.338 | 0.246 | 0.355 | 0.254 | 0.361 | 0.373 | 0.439 | 0.283 | 0.376 | 0.377 | 0.462 | 0.980 | 0.814 |
| Exchange | 96 | 0.127 | 0.259 | 0.148 | 0.278 | 0.197 | 0.323 | 0.847 | 0.752 | 0.968 | 0.812 | 1.365 | 0.911 | 1.453 | 1.049 |
| | 192 | 0.155 | 0.355 | 0.271 | 0.380 | 0.300 | 0.369 | 1.204 | 0.895 | 1.040 | 0.851 | 1.235 | 0.953 | 1.846 | 1.179 |
| | 336 | 0.436 | 0.469 | 0.460 | 0.500 | 0.509 | 0.524 | 1.672 | 1.036 | 1.659 | 1.081 | 1.563 | 1.123 | 2.136 | 1.231 |
| | 720 | 0.998 | 0.732 | 1.195 | 0.841 | 1.447 | 0.941 | 2.478 | 1.310 | 1.941 | 1.127 | 1.780 | 1.240 | 2.984 | 1.427 |
| Traffic | 96 | 0.562 | 0.354 | 0.587 | 0.366 | 0.613 | 0.388 | 0.719 | 0.391 | 0.684 | 0.384 | 0.632 | 0.534 | 0.843 | 0.453 |
| | 192 | 0.593 | 0.355 | 0.604 | 0.373 | 0.616 | 0.382 | 0.696 | 0.379 | 0.685 | 0.390 | 0.695 | 0.592 | 0.847 | 0.453 |
| | 336 | 0.603 | 0.377 | 0.621 | 0.383 | 0.622 | 0.337 | 0.777 | 0.420 | 0.733 | 0.408 | 0.732 | 0.620 | 0.853 | 0.455 |
| | 720 | 0.599 | 0.356 | 0.626 | 0.382 | 0.660 | 0.408 | 0.864 | 0.472 | 0.717 | 0.396 | 0.793 | 0.711 | 1.500 | 0.805 |
| Weather | 96 | 0.202 | 0.276 | 0.217 | 0.296 | 0.266 | 0.336 | 0.300 | 0.384 | 0.458 | 0.490 | 0.423 | 0.446 | 0.369 | 0.406 |
| | 192 | 0.254 | 0.318 | 0.276 | 0.336 | 0.307 | 0.367 | 0.594 | 0.544 | 0.658 | 0.589 | 0.492 | 0.511 | 0.416 | 0.435 |
| | 336 | 0.307 | 0.369 | 0.339 | 0.380 | 0.359 | 0.395 | 0.578 | 0.523 | 0.797 | 0.652 | 0.533 | 0.562 | 0.455 | 0.454 |
| | 720 | 0.395 | 0.417 | 0.403 | 0.428 | 0.419 | 0.428 | 1.059 | 0.741 | 0.869 | 0.675 | 0.591 | 0.612 | 0.535 | 0.520 |
| ILI | 24 | 3.019 | 1.256 | 3.228 | 1.260 | 3.483 | 1.287 | 5.764 | 1.677 | 4.480 | 1.444 | 4.121 | 1.562 | 5.914 | 1.734 |
| | 36 | 2.418 | 1.069 | 2.679 | 1.080 | 3.103 | 1.148 | 4.755 | 1.467 | 4.799 | 1.467 | 4.992 | 1.588 | 6.631 | 1.854 |
| | 48 | 2.586 | 1.065 | 2.622 | 1.078 | 2.669 | 1.085 | 4.763 | 1.469 | 4.800 | 1.468 | 5.312 | 1.895 | 6.736 | 1.857 |
| | 60 | 2.789 | 1.146 | 2.857 | 1.157 | 2.770 | 1.125 | 5.264 | 0.564 | 5.278 | 1.560 | 5.882 | 1.989 | 6.870 | 1.879 |
| Models | | DESTformer | | FEDformer [3] | | Autoformer [2] | | Informer [1] | | LogTrans [21] | | ARIMA [37] | | LSTM [34] | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Metric | | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE | MSE | MAE |
| ETT* | 96 | 0.058 | 0.177 | 0.072 | 0.206 | 0.065 | 0.189 | 0.080 | 0.217 | 0.075 | 0.208 | 0.211 | 0.362 | 1.921 | 0.963 |
| | 192 | 0.096 | 0.234 | 0.102 | 0.245 | 0.118 | 0.256 | 0.112 | 0.259 | 0.129 | 0.275 | 0.261 | 0.406 | 2.122 | 1.007 |
| | 336 | 0.127 | 0.258 | 0.130 | 0.279 | 0.154 | 0.305 | 0.166 | 0.314 | 0.154 | 0.302 | 0.317 | 0.448 | 2.448 | 1.146 |
| | 720 | 0.154 | 0.317 | 0.178 | 0.325 | 0.182 | 0.335 | 0.228 | 0.380 | 0.160 | 0.366 | 0.487 | 0.334 | 2.554 | 1.120 |
| Electricity | 96 | 0.244 | 0.356 | 0.253 | 0.370 | 0.341 | 0.438 | 0.258 | 0.367 | 0.288 | 0.393 | 0.367 | 0.391 | 0.382 | 0.446 |
| | 192 | 0.264 | 0.354 | 0.282 | 0.386 | 0.345 | 0.428 | 0.285 | 0.388 | 0.432 | 0.483 | 0.423 | 0.446 | 0.410 | 0.463 |
| | 336 | 0.337 | 0.429 | 0.346 | 0.431 | 0.406 | 0.470 | 0.336 | 0.423 | 0.430 | 0.483 | 0.401 | 0.422 | 0.437 | 0.495 |
| | 720 | 0.419 | 0.479 | 0.422 | 0.484 | 0.565 | 0.581 | 0.607 | 0.599 | 0.491 | 0.531 | 0.533 | 0.576 | 0.884 | 0.756 |
| Exchange | 96 | 0.117 | 0.274 | 0.154 | 0.304 | 0.241 | 0.387 | 1.327 | 0.944 | 0.237 | 0.377 | 0.118 | 0.285 | 1.563 | 0.995 |
| | 192 | 0.257 | 0.381 | 0.286 | 0.420 | 0.300 | 0.369 | 1.258 | 0.924 | 0.738 | 0.619 | 0.304 | 0.404 | 1.754 | 1.008 |
| | 336 | 0.497 | 0.450 | 0.511 | 0.555 | 0.509 | 0.524 | 2.179 | 1.296 | 2.018 | 1.070 | 0.736 | 0.598 | 2.035 | 1.003 |
| | 720 | 1.229 | 0.862 | 1.301 | 0.879 | 1.260 | 0.867 | 1.280 | 0.953 | 2.405 | 1.175 | 1.871 | 0.935 | 2.785 | 1.023 |
| Traffic | 96 | 0.198 | 0.298 | 0.207 | 0.312 | 0.246 | 0.346 | 0.257 | 0.353 | 0.226 | 0.317 | 0.422 | 0.435 | 0.756 | 0.369 |
| | 192 | 0.193 | 0.307 | 0.205 | 0.312 | 0.266 | 0.370 | 0.299 | 0.376 | 0.314 | 0.408 | 0.466 | 0.472 | 0.856 | 0.358 |
| | 336 | 0.215 | 0.319 | 0.219 | 0.323 | 0.263 | 0.371 | 0.312 | 0.387 | 0.387 | 0.453 | 0.562 | 0.518 | 0.779 | 0.430 |
| | 720 | 0.230 | 0.300 | 0.244 | 0.344 | 0.269 | 0.372 | 0.366 | 0.436 | 0.437 | 0.491 | 0.634 | 0.590 | 0.458 | 0.750 |
| Weather | 96 | 0.006 | 0.059 | 0.006 | 0.062 | 0.011 | 0.081 | 0.004 | 0.044 | 0.005 | 0.052 | 0.126 | 0.149 | 0.360 | 0.395 |
| | 192 | 0.005 | 0.058 | 0.006 | 0.062 | 0.008 | 0.067 | 0.002 | 0.040 | 0.006 | 0.060 | 0.188 | 0.212 | 0.452 | 0.421 |
| | 336 | 0.004 | 0.045 | 0.004 | 0.050 | 0.006 | 0.062 | 0.004 | 0.049 | 0.006 | 0.054 | 0.237 | 0.271 | 0.367 | 0.426 |
| | 720 | 0.006 | 0.053 | 0.006 | 0.059 | 0.009 | 0.070 | 0.003 | 0.042 | 0.007 | 0.059 | 0.312 | 0.351 | 0.522 | 0.503 |
| ILI | 24 | 0.629 | 0.596 | 0.708 | 0.627 | 0.948 | 0.732 | 5.282 | 2.050 | 3.607 | 1.662 | 3.442 | 1.612 | 4.886 | 1.652 |
| | 36 | 0.572 | 0.601 | 0.584 | 0.617 | 0.634 | 0.650 | 4.554 | 1.916 | 2.407 | 1.363 | 3.634 | 1.920 | 5.426 | 1.698 |
| | 48 | 0.702 | 0.652 | 0.717 | 0.697 | 0.791 | 0.752 | 4.273 | 1.846 | 3.106 | 1.575 | 3.885 | 1.989 | 5.265 | 1.775 |
| | 60 | 0.820 | 0.763 | 0.855 | 0.774 | 0.874 | 0.797 | 5.214 | 2.057 | 3.698 | 1.733 | 4.263 | 2.147 | 5.994 | 1.754 |
| Input Length I | | 96 | | | 192 | | | 336 | | |
|---|---|---|---|---|---|---|---|---|---|---|
| Prediction Length O | | 336 | 720 | 1440 | 336 | 720 | 1440 | 336 | 720 | 1440 |
| DESTformer | MSE | 0.291 | 0.398 | 0.503 | 0.302 | 0.397 | 0.467 | 0.312 | 0.388 | 0.552 |
| | MAE | 0.355 | 0.393 | 0.488 | 0.327 | 0.402 | 0.455 | 0.366 | 0.407 | 0.533 |
| DESTformer-f | MSE | 0.318 | 0.416 | 0.522 | 0.320 | 0.408 | 0.472 | 0.325 | 0.401 | 0.568 |
| | MAE | 0.366 | 0.401 | 0.508 | 0.335 | 0.418 | 0.472 | 0.379 | 0.425 | 0.549 |
| DESTformer-s | MSE | 0.362 | 0.457 | 0.544 | 0.377 | 0.458 | 0.505 | 0.358 | 0.433 | 0.598 |
| | MAE | 0.398 | 0.453 | 0.532 | 0.365 | 0.452 | 0.497 | 0.346 | 0.425 | 0.587 |
| DESTformer-t | MSE | 0.327 | 0.453 | 0.563 | 0.355 | 0.446 | 0.544 | 0.346 | 0.435 | 0.604 |
| | MAE | 0.325 | 0.447 | 0.556 | 0.346 | 0.435 | 0.537 | 0.334 | 0.424 | 0.597 |