A Consistency-Aware Hybrid Static–Dynamic Multivariate Network for Forecasting Industrial Key Performance Indicators
Abstract
1. Introduction
- Consistency-Aware Dynamic Segmentation: A novel optimization-based segmentation method is proposed to adaptively partition time series data while explicitly maintaining consistency across multiple related variables, minimizing redundancy while preserving essential information.
- Hybrid Static–Dynamic Representation Network: The forecasting module integrates a dual-stream architecture to effectively extract complex features from both static and dynamic variables, and it fuses their representations through feature concatenation and nonlinear modeling.
- Hierarchical Attention Module: The model fuses disentangled and unified approaches to capture both independent temporal patterns and cross-factor dependencies, thereby improving feature representation in complex multivariate scenarios.
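The hierarchical attention idea above can be sketched as a two-stage pass: per-variable temporal self-attention (the disentangled view) followed by cross-variable attention at each time step (the unified view). This NumPy sketch is illustrative only, with random inputs and no learned projections; all shapes and names are assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product self-attention over (T, d) arrays
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def hierarchical_attention(x):
    """x: (C, T, d) -- C variables, T steps, d features.
    Stage 1 (disentangled): temporal attention within each variable.
    Stage 2 (unified): attention across variables at each step."""
    per_var = np.stack([attention(xi, xi, xi) for xi in x])       # (C, T, d)
    per_step = np.stack([attention(s, s, s)
                         for s in per_var.transpose(1, 0, 2)])    # (T, C, d)
    return per_step.transpose(1, 0, 2)                            # (C, T, d)

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 20, 8))   # six influencing factors, 20 steps
y = hierarchical_attention(x)
print(y.shape)  # (6, 20, 8)
```

A real implementation would add learned query/key/value projections and residual connections; the two transposes are the whole "hierarchy" here.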
2. Literature Review
3. Problem Statement
4. Methodology
4.1. Consistency-Aware Dynamic Segmentation
Algorithm 1: Genetic Algorithm for Dynamic Segmentation
- Initialization: Generate an initial population, where each individual encodes a candidate segmentation scheme.
- Selection: Individuals are selected for reproduction via a roulette-wheel strategy, with fitness evaluated using grey relational analysis (GRA) scores.
- Crossover: Selected individuals are paired and crossed to exchange segmentation points, producing new offspring.
- Mutation: With a defined probability, individuals undergo mutation, modifying segmentation points to explore new solutions.
- Evaluation: The fitness of all individuals is assessed, and the population is iteratively refined over successive generations.
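The steps above can be sketched end to end. The example below is a minimal illustration, assuming the mean GRA grade over the induced segments as the fitness, roulette-wheel selection, one-point crossover of boundary sets, and random boundary mutation; the population size, rates, and helper names are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

def gra_grade(ref, cmp_, rho=0.5):
    """Grey relational grade between a (min-max normalised) reference
    and comparison series; rho is the distinguishing coefficient."""
    ref = (ref - ref.min()) / (np.ptp(ref) + 1e-9)
    cmp_ = (cmp_ - cmp_.min()) / (np.ptp(cmp_) + 1e-9)
    delta = np.abs(ref - cmp_)
    dmin, dmax = delta.min(), delta.max()
    return np.mean((dmin + rho * dmax) / (delta + rho * dmax))

def fitness(bounds, target, factor, n):
    # mean GRA score over the segments induced by a candidate boundary set
    cuts = [0, *sorted(int(b) for b in bounds), n]
    scores = [gra_grade(target[a:b], factor[a:b])
              for a, b in zip(cuts, cuts[1:]) if b - a > 2]
    return float(np.mean(scores)) if scores else 0.0

def evolve(target, factor, n_seg=4, pop=30, gens=15, p_mut=0.2):
    n = len(target)
    popu = [np.sort(rng.choice(np.arange(1, n), n_seg - 1, replace=False))
            for _ in range(pop)]
    for _ in range(gens):
        fit = np.array([fitness(b, target, factor, n) for b in popu]) + 1e-9
        idx = rng.choice(pop, size=pop, p=fit / fit.sum())   # roulette wheel
        parents = [popu[i] for i in idx]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = int(rng.integers(1, n_seg - 1))            # one-point crossover
            for c in (np.concatenate([a[:cut], b[cut:]]),
                      np.concatenate([b[:cut], a[cut:]])):
                if rng.random() < p_mut:                     # boundary mutation
                    c[rng.integers(len(c))] = rng.integers(1, n)
                children.append(np.sort(c))
        popu = children
    best = max(popu, key=lambda b: fitness(b, target, factor, n))
    return best, fitness(best, target, factor, n)

# toy data: a target KPI and one correlated influencing factor
t = np.linspace(0, 12, 240)
target = np.sin(t) + 0.1 * rng.normal(size=240)
factor = np.sin(t + 0.3) + 0.1 * rng.normal(size=240)
best, score = evolve(target, factor)
```

`best` holds the segment boundaries and `score` the resulting consistency value in (0, 1].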
4.2. Hybrid Multivariate Forecasting Model
- (1) Static Representation Module
- (2) Dynamic Temporal Disentanglement and Attention Module
- (3) Predictor
4.3. Running Flow of Proposed Method
- Data Acquisition and Processing: Taking radar data as an example, multivariate time series are collected, including six influencing factors along with dR data. The raw datasets are then preprocessed and aligned to ensure consistency across all variables, followed by a consistency-aware segmentation process aimed at maximizing the consistency score. This leads to the partitioning of data into training, validation, and test sets for subsequent modeling.
- Hybrid Static–Dynamic Network Construction: The segmented data are processed by CHSDM-Net, which consists of parallel branches for static and dynamic feature extraction. Static factors are encoded through a representation module, while dynamic factors undergo instance normalization, temporal disentanglement, and hierarchical attention mechanisms to capture local and global dependencies. The resulting features are fused and passed to the prediction module for forecasting the target indicator.
- Model Evaluation and Analysis: The performance of the proposed model is comprehensively assessed through segmentation analysis, comparative experiments, ablation studies, and sensitivity analysis, thereby validating its effectiveness and robustness.
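The forward pass described in the second step above can be sketched with random (untrained) weights: instance normalization and a moving-average trend/residual split for the dynamic branch, a tanh encoding for the static branch, and fusion by concatenation into a linear predictor. Shapes, weight names, and the kernel size are illustrative assumptions, not CHSDM-Net's actual layers.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # normalise each dynamic series over its own time axis
    mu, sd = x.mean(-1, keepdims=True), x.std(-1, keepdims=True)
    return (x - mu) / (sd + eps)

def disentangle(x, k=5):
    # moving-average trend plus residual part, per series
    pad = np.pad(x, ((0, 0), (k // 2, k // 2)), mode="edge")
    kernel = np.ones(k) / k
    trend = np.stack([np.convolve(row, kernel, mode="valid") for row in pad])
    return trend, x - trend

def forward(static_feats, dynamic_series, w_s, w_d, w_out):
    h_s = np.tanh(static_feats @ w_s)              # static representation
    z = instance_norm(dynamic_series)
    trend, resid = disentangle(z)                  # temporal disentanglement
    h_d = np.tanh(np.concatenate([trend, resid], axis=-1).ravel() @ w_d)
    return np.concatenate([h_s, h_d]) @ w_out      # fused features -> prediction

rng = np.random.default_rng(1)
static = rng.normal(size=4)            # static configuration factors
dyn = rng.normal(size=(6, 20))         # six dynamic factors over 20 steps
w_s = rng.normal(size=(4, 8))
w_d = rng.normal(size=(6 * 2 * 20, 8))
w_out = rng.normal(size=16)
pred = forward(static, dyn, w_s, w_d, w_out)
```

The attention stages are omitted here for brevity; in the real model they sit between the disentanglement and the fusion step.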
5. Experiments and Results
5.1. Experimental Setup
5.1.1. Dataset Description
- 1. Numerical case: the influencing variables are randomly generated, with one assigned a value between 1 and 5, one selected from the range of 15 to 30, and two chosen from 1 to 4.
- 2. Illustrative case: the input variables represent filtering, temperature, wind level, and flight path data, respectively; RCS and SNR are recorded over the same time period T.
5.1.2. Evaluation Metrics
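The result tables report MAE, RMSE, R², and MASE, which can be computed with their standard definitions; the MASE scaling by the in-sample one-step naive forecast error is an assumption about the exact variant used.

```python
import numpy as np

def mae(y, yhat):
    return float(np.mean(np.abs(y - yhat)))

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def r2(y, yhat):
    # coefficient of determination: 1 - residual / total sum of squares
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def mase(y, yhat, y_train):
    # scaled by the in-sample error of a one-step naive forecast
    scale = np.mean(np.abs(np.diff(y_train)))
    return float(np.mean(np.abs(y - yhat)) / scale)
```

Lower is better for MAE, RMSE, and MASE; higher is better for R², matching the ↓/↑ arrows in the tables.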
5.1.3. Baselines
- Autoformer [28]: Autoformer introduces a decomposition architecture combined with an auto-correlation mechanism, which enables the model to effectively capture long-term dependencies and periodic patterns in time series data.
- FEDformer [29]: By incorporating frequency-enhanced decomposition and seasonal-trend separation, FEDformer enhances both the accuracy and efficiency of long-term forecasting with transformer-based models.
- Pyraformer [30]: Leveraging a pyramidal attention structure, Pyraformer is designed to efficiently model long-range dependencies while significantly reducing computational complexity.
- N-HiTS [32]: The N-HiTS model employs neural hierarchical interpolation, effectively utilizing multi-resolution representations to improve forecasting performance on complex and diverse time series.
- PatchTST [55]: PatchTST segments time series into patches to serve as input tokens and applies channel-independent transformers, resulting in efficient and accurate long-term multivariate forecasting.
- DLinear [56]: As a lightweight linear model, DLinear decomposes time series into trend and seasonal components, offering both strong predictive performance and computational efficiency.
- Pathformer [33]: Pathformer stands out by introducing a multi-scale transformer architecture with adaptive pathway selection, which allows the model to capture both local and global temporal dependencies.
- TiDE [53]: TiDE adopts a dense encoder structure and incorporates advanced feature extraction and aggregation strategies, thereby achieving robust results in long-term time series forecasting tasks.
- SARIMAX [37]: An extension of ARIMA that incorporates seasonal effects and exogenous variables, providing an interpretable linear baseline for time series with complex seasonality and external influences.
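To make the simplest baseline concrete, here is a minimal sketch of the DLinear idea: split each input window into a moving-average trend and a seasonal remainder, map each component to the forecast horizon with its own linear head, and sum the two outputs. A closed-form least-squares fit stands in for gradient training, and the kernel size and window lengths are assumptions.

```python
import numpy as np

def decompose(x, k=5):
    # DLinear-style split: moving-average trend + seasonal remainder
    pad = np.pad(x, (k // 2, k // 2), mode="edge")
    trend = np.convolve(pad, np.ones(k) / k, mode="valid")
    return trend, x - trend

def dlinear_fit(series, seq_len=20, pred_len=20):
    # build (window -> horizon) pairs; fit both linear heads jointly
    Xt, Xs, Y = [], [], []
    for i in range(len(series) - seq_len - pred_len + 1):
        t, s = decompose(series[i:i + seq_len])
        Xt.append(t); Xs.append(s)
        Y.append(series[i + seq_len:i + seq_len + pred_len])
    X = np.hstack([np.array(Xt), np.array(Xs)])
    W = np.linalg.lstsq(X, np.array(Y), rcond=None)[0]
    return W[:seq_len], W[seq_len:]         # trend head, seasonal head

def dlinear_predict(window, Wt, Ws):
    t, s = decompose(window)
    return t @ Wt + s @ Ws                  # sum of per-component forecasts

series = np.sin(np.arange(300) * 0.1)
Wt, Ws = dlinear_fit(series)
forecast = dlinear_predict(series[-20:], Wt, Ws)
```

On a clean sinusoid this linear baseline extrapolates almost exactly, which is why DLinear is a surprisingly strong reference point for the transformer baselines above.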
5.1.4. Implementation Details
5.2. Overall Performance
5.2.1. Segmentation Results
5.2.2. Main Results
5.3. Ablation Study
- w/o Seg: The model is trained and evaluated on the dataset without performing the optimized segmentation process.
- w/o Cons: Instead of using only consistent segmented datasets, all available data—including inconsistent samples—are merged for model training and evaluation.
- w/o Sta: The Static Representation Module is removed to assess its contribution to overall performance.
- w/o Dyn: The Dynamic Temporal Disentanglement and Attention Module is omitted, and only the static branch is retained.
- w/o Tr: The Trend-wise Attention block within the dynamic module is removed, while other components remain unchanged.
- w/o Pe: The Periodic-aware Attention block within the dynamic module is removed, while other components remain unchanged.
5.4. Comparative Analysis of Segmentation and Decomposition Methods
- 1. Segmentation Methods Comparison
- 2. Decomposition Methods Comparison
5.5. Sensitivity Analysis
- 1. Epoch Sensitivity Analysis
- 2. Sequence Length Sensitivity Analysis
6. Discussion
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Odufuwa, O.Y.; Tartibu, L.K.; Kusakana, K. Artificial neural network modelling for predicting efficiency and emissions in mini-diesel engines: Key performance indicators and environmental impact analysis. Fuel 2025, 387, 134294.
- Frohmann, M.; Karner, M.; Khudoyan, S.; Wagner, R.; Schedl, M. Predicting the Price of Bitcoin Using Sentiment-Enriched Time Series Forecasting. Big Data Cogn. Comput. 2023, 7, 137.
- Dioubi, F.; Hundera, N.W.; Xu, H.; Zhu, X. Enhancing stock market predictions via hybrid external trend and internal components analysis and long short term memory model. J. King Saud Univ. Comput. Inf. Sci. 2024, 36, 102252.
- Alharthi, M.; Mahmood, A. Enhanced Linear and Vision Transformer-Based Architectures for Time Series Forecasting. Big Data Cogn. Comput. 2024, 8, 48.
- AlSharabi, K.; Bin Salamah, Y.; Aljalal, M.; Abdurraqeeb, A.M.; Alturki, F.A. Long-Term Forecasting of Solar Irradiation in Riyadh, Saudi Arabia, Using Machine Learning Techniques. Big Data Cogn. Comput. 2025, 9, 21.
- Elwahsh, H.; Tawfeek, M.A.; Abd El-Aziz, A.A.; Mahmood, M.A.; Alsabaan, M.; El-shafeiy, E. A new approach for cancer prediction based on deep neural learning. J. King Saud Univ. Comput. Inf. Sci. 2023, 35, 101565.
- Xie, H.; Wei, L.; Ruan, G.; Zhang, H.; Shi, J.; Lin, S.; Liu, C.; Liu, X.; Zheng, X.; Chen, Y.; et al. Performance of anthropometry-based and bio-electrical impedance-based muscle-mass indicators in the Global Leadership Initiative on Malnutrition criteria for predicting prognosis in patients with cancer. Clin. Nutr. 2024, 43, 1791–1799.
- Su, C.; Peng, X.; Yang, D.; Lu, R.; Huang, H.; Zhong, W. A Transferable Ensemble Additive Network for Interpretable Prediction of Key Performance Indicators. IEEE Trans. Instrum. Meas. 2024, 73, 2532214.
- Azam, M.A.; Siddiqui, M.A.; Ali, H. Development of performance indicator for metal-organic frameworks in atmospheric water harvesting. Sep. Purif. Technol. 2025, 355, 129660.
- Zhang, P.; Cao, L.; Dong, F.; Gao, Z.; Zou, Y.; Wang, K.; Zhang, Y.; Sun, P. A Study of Hybrid Predictions Based on the Synthesized Health Indicator for Marine Systems and Their Equipment Failure. Appl. Sci. 2022, 12, 3329.
- Han, H.; Li, H.; Wu, X.; Yang, H.; Zhao, D. Cascaded LSTM-Based State Prediction of Equipment in Wastewater Treatment Process. IEEE Trans. Instrum. Meas. 2024, 73, 3541112.
- Kim, D.; Baek, J.-G. Bagging ensemble-based novel data generation method for univariate time series forecasting. Expert Syst. Appl. 2022, 203, 117366.
- Sun, L.; Ji, Y.; Li, Q.; Yang, T. A process knowledge-based hybrid method for univariate time series prediction with uncertain inputs in process industry. Adv. Eng. Inform. 2024, 60, 102438.
- Balderas, L.; Lastra, M.; Benítez, J.M. An Efficient Green AI Approach to Time Series Forecasting Based on Deep Learning. Big Data Cogn. Comput. 2024, 8, 120.
- Liu, Z.; Feng, Y.; Liu, H.; Tang, R.; Yang, B.; Zhang, D.; Jia, W.; Tan, J. TVC Former: A transformer-based long-term multivariate time series forecasting method using time-variable coupling correlation graph. Knowl.-Based Syst. 2025, 314, 113147.
- Bao, X.; Zheng, Y.; Zhong, J.; Chen, L. SIMTSeg: A self-supervised multivariate time series segmentation method with periodic subspace projection and reverse diffusion for industrial process. Adv. Eng. Inform. 2024, 62, 102859.
- Wu, H.; Jing, S.; Zhang, R.; Zhang, F.; Jiang, C. Phase unwrapping error identification and suppression method in ϕ-OTDR systems based on PELT-VMD-ARIMA. Opt. Express 2024, 32, 29344–29361.
- Cheng, X.; Huang, B.; Zong, J. Device-Free Human Activity Recognition Based on GMM-HMM Using Channel State Information. IEEE Access 2021, 9, 76592–76601.
- Machado, A.P.F.; Munaro, C.J.; Ciarelli, P.M. Enhancing one-class classifiers performance in multivariate time series through dynamic clustering: A case study on hydraulic system fault detection. Expert Syst. Appl. 2025, 286, 128088.
- Wang, L.; Shen, P. Memetic segmentation based on variable lag aware for multivariate time series. Inf. Sci. 2024, 657, 120003.
- Heo, T.; Manuel, L. Greedy copula segmentation of multivariate non-stationary time series for climate change adaptation. Prog. Disaster Sci. 2022, 14, 100221.
- Huang, J.; Ren, L.; Ji, Z.; Yan, K. Single-channel EEG automatic sleep staging based on transition optimized HMM. Multimed. Tools Appl. 2022, 30, 43063–43081.
- Guo, S.; Zheng, S.; Li, J.; Zhou, Q.; Xu, H. A lightweight social cognitive risk potential field model for path planning with dedicated dynamic and static traffic factors. IET Intell. Transp. Syst. 2025, 19, e12595.
- Shi, X.; Hao, K.; Chen, L.; Wei, B.; Liu, X. Multivariate time series prediction of complex systems based on graph neural networks with location embedding graph structure learning. Adv. Eng. Inform. 2022, 54, 101810.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Volume 30.
- Woo, G.; Liu, C.; Sahoo, D.; Kumar, A.; Hoi, S. CoST: Contrastive learning of disentangled seasonal-trend representations for time series forecasting. arXiv 2022, arXiv:2202.01575.
- Yu, G.; Zou, J.; Hu, X.; Aviles-Rivero, A.I.; Qin, J.; Wang, S. Revitalizing multivariate time series forecasting: Learnable decomposition with inter-series dependencies and intra-series variations modeling. In Proceedings of the International Conference on Machine Learning, Vienna, Austria, 21–27 July 2024; pp. 57818–57841.
- Wu, H.; Xu, J.; Wang, J.; Long, M. Autoformer: Decomposition transformers with auto-correlation for long-term series forecasting. In Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS 2021), Online, 6–14 December 2021; pp. 22419–22430.
- Zhou, T.; Ma, Z.; Wen, Q.; Wang, X.; Sun, L.; Jin, R. FEDformer: Frequency enhanced decomposed transformer for long-term series forecasting. In Proceedings of the International Conference on Machine Learning, Baltimore, MD, USA, 17–23 July 2022; pp. 27268–27286.
- Liu, S.; Yu, H.; Liao, C.; Li, J.; Lin, W.; Liu, A.X.; Dustdar, S. Pyraformer: Low-complexity pyramidal attention for long-range time series modeling and forecasting. In Proceedings of the International Conference on Learning Representations, Virtual Event, Austria, 3–7 May 2021.
- Shabani, A.; Abdi, A.; Meng, L.; Sylvain, T. Scaleformer: Iterative multi-scale refining transformers for time series forecasting. arXiv 2022, arXiv:2206.04038.
- Challu, C.; Olivares, K.G.; Oreshkin, B.N.; Garza Ramirez, F.; Canseco, M.M.; Dubrawski, A. N-HiTS: Neural hierarchical interpolation for time series forecasting. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 22 February–1 March 2022; pp. 6989–6997.
- Chen, P.; Zhang, Y.; Cheng, Y.; Shu, Y.; Wang, Y.; Wen, Q.; Yang, B.; Guo, C. Pathformer: Multi-scale transformers with adaptive pathways for time series forecasting. arXiv 2024, arXiv:2402.05956.
- Xu, M.; Yang, F.; Fang, Y.; Li, F.; Yan, R. Research on time series-based pipeline ground penetrating radar calibration angle prediction algorithm. Sensors 2024, 24, 379.
- William, W.S.W. Multivariate Time Series Analysis and Applications; Wiley-Blackwell: Hoboken, NJ, USA, 2019.
- Yuan, J.; Li, D. Epidemiological and clinical characteristics of influenza patients in respiratory department under the prediction of autoregressive integrated moving average model. Results Phys. 2021, 24, 104070.
- Jang, G.; Seo, J.; Lee, H. Analyzing the impact of COVID-19 on seasonal infectious disease outbreak detection using hybrid SARIMAX-LSTM model. J. Infect. Public Health 2025, 18, 102772.
- Mulla, S.; Pande, C.B.; Singh, S.K. Times series forecasting of monthly rainfall using seasonal auto regressive integrated moving average with exogenous variables (SARIMAX) model. Water Resour. Manag. 2024, 38, 1825–1846.
- Cortes, C.; Vapnik, V. Support-Vector Networks. Mach. Learn. 1995, 20, 273–297.
- Van Ryzin, J. Classification and Regression Trees (Book). J. Am. Stat. Assoc. 1986, 81, 253.
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
- LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989, 1, 541–551.
- Hopfield, J.J. Neural networks and physical systems with emergent collective computational abilities. Proc. Natl. Acad. Sci. USA 1982, 79, 2554–2558.
- Nigam, S. Forecasting time series using convolutional neural network with multiplicative neuron. Appl. Soft Comput. 2025, 174, 112921.
- Salazar, C.; Banerjee, A.G. A distance correlation-based approach to characterize the effectiveness of recurrent neural networks for time series forecasting. Neurocomputing 2025, 629, 129641.
- Ye, H.; Chen, J.; Gong, S.; Jiang, F.; Zhang, T.; Chen, J.; Gao, X. ATFNet: Adaptive time-frequency ensembled network for long-term time series forecasting. arXiv 2024, arXiv:2404.05192.
- Kheir, N.A.; Holmes, W.M. On validating simulation models of missile systems. Simulation 1978, 30, 117–128.
- Montgomery, D.C.; Conard, R.G. Comparison of simulation and flight-test data for missile systems. Simulation 1980, 34, 63–72.
- Abdel-Magid, Y.L.; Abido, M.A. Optimal multiobjective design of robust power system stabilizers using genetic algorithms. IEEE Trans. Power Syst. 2003, 18, 1125–1132.
- Qiu, X.; Hu, J.; Zhou, L.; Wu, X.; Du, J.; Zhang, B.; Guo, C.; Zhou, A.; Jensen, C.S.; Sheng, Z.; et al. TFB: Towards Comprehensive and Fair Benchmarking of Time Series Forecasting Methods. Proc. Very Large Data Bases 2024, 17, 2363–2377.
- Kim, T.; Kim, J.; Tae, Y.; Park, C.; Choi, J.-H.; Choo, J. Reversible instance normalization for accurate time-series forecasting against distribution shift. In Proceedings of the International Conference on Learning Representations, Virtual Event, Austria, 3–7 May 2021.
- Luo, Y.; Lyu, Z.; Huang, X. TFDNet: Time-Frequency Enhanced Decomposed Network for Long-term Time Series Forecasting. arXiv 2023, arXiv:2308.13386.
- Das, A.; Kong, W.; Leach, A.; Mathur, S.; Sen, R.; Yu, R. Long-term forecasting with TiDE: Time-series dense encoder. arXiv 2023, arXiv:2304.08424.
- Li, Z.; Qi, S.; Li, Y.; Xu, Z. Revisiting long-term time series forecasting: An investigation on linear mapping. arXiv 2023, arXiv:2305.10721.
- Nie, Y.; Nguyen, N.H.; Sinthong, P.; Kalagnanam, J. A time series is worth 64 words: Long-term forecasting with transformers. arXiv 2022, arXiv:2211.14730.
- Zeng, A.; Chen, M.; Zhang, L.; Xu, Q. Are transformers effective for time series forecasting? Proc. AAAI Conf. Artif. Intell. 2023, 37, 11121–11128.
- Wang, M.; Meng, Y.; Sun, L.; Zhang, T. Decomposition combining averaging seasonal-trend with singular spectrum analysis and a marine predator algorithm embedding Adam for time series forecasting with strong volatility. Expert Syst. Appl. 2025, 274, 126864.
- Simon, J.; Moll, J.; Krozer, V. Trend decomposition for temperature compensation in a radar-based structural health monitoring system of wind turbine blades. Sensors 2024, 24, 800.
Dataset | Radar1 | Radar2 | Radar3 | Radar4 | Radar5 | Radar6 |
---|---|---|---|---|---|---|
dataset1 | 13,775 | 14,371 | 14,948 | 15,373 | 15,580 | 14,608 |
dataset2 | 14,901 | 14,681 | 10,472 | 14,379 | 14,241 | 14,384 |
dataset3 | 14,779 | 14,929 | 12,594 | 14,191 | 14,840 | 14,784 |
dataset4 | 14,297 | 15,665 | 13,687 | 14,312 | 14,415 | 14,823 |
After preprocessing | 9163 | 9380 | 7890 | 8818 | 10,028 | 10,308 |
Hyperparameter | Range |
---|---|
Learning rate | [1 × 10⁻⁴, 1 × 10⁻³] |
Dropout rate | [0, 0.1] |
Batch size | [4, 8, 16, 32, 64] |
Sequence length | [20, 30, 40] |
Prediction length | [20, 30, 40] |
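The hyperparameter ranges above can be explored with simple random search; this sketch just draws candidate configurations from the listed ranges (the search strategy itself is an assumption, not stated in the table).

```python
import random

# search space taken from the hyperparameter table above
space = {
    "learning_rate": (1e-4, 1e-3),        # continuous range
    "dropout":       (0.0, 0.1),          # continuous range
    "batch_size":    [4, 8, 16, 32, 64],  # discrete choices
    "seq_len":       [20, 30, 40],
    "pred_len":      [20, 30, 40],
}

def sample_config(rng=random):
    """Draw one candidate configuration uniformly from the space."""
    return {
        "learning_rate": rng.uniform(*space["learning_rate"]),
        "dropout":       rng.uniform(*space["dropout"]),
        "batch_size":    rng.choice(space["batch_size"]),
        "seq_len":       rng.choice(space["seq_len"]),
        "pred_len":      rng.choice(space["pred_len"]),
    }

configs = [sample_config() for _ in range(20)]
```

Each config would then be trained on the validation split and the best-scoring one kept.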
Segment | Original Count | Original Score | Optimized Count | Optimized Score
---|---|---|---|---
1 | 2609 | 0.661 | 2580 | 0.661
2 | 2537 | 0.681 | 2563 | 0.674
3 | 2689 | 0.690 | 2748 | 0.691
4 | 2078 | 0.646 | 2022 | 0.646
Method | Dataset 1 vs. Dataset 2 | Dataset 1 vs. Dataset 3 | Dataset 2 vs. Dataset 3 |
---|---|---|---|
Original | 0.7096 | 0.6437 | 0.5857 |
GA Optimized | 0.7099 | 0.6437 | 0.5857 |
Dataset | Metric | CHSDM-Net | Autoformer | FEDformer | Pyraformer | Pathformer | PatchTST | N-HiTS | TiDE | DLinear | SARIMAX
---|---|---|---|---|---|---|---|---|---|---|---
Numeric | MAE↓ | 0.0568 | 0.1209 | 0.0857 | 0.1682 | 0.0967 | 0.1066 | 0.0857 | 0.0768 | 0.2278 | 0.5031
Numeric | RMSE↓ | 0.1155 | 0.5013 | 0.3568 | 0.2849 | 0.3618 | 0.3611 | 0.3675 | 0.3535 | 0.3125 | 0.6771
Numeric | R2↑ | 0.9987 | 0.9801 | 0.9909 | 0.9923 | 0.9906 | 0.9914 | 0.9912 | 0.9909 | 0.9908 | 0.9524
Numeric | MASE↓ | 0.0125 | 0.0318 | 0.0192 | 0.0388 | 0.0205 | 0.0241 | 0.0183 | 0.0149 | 0.0587 | 0.2315
Radar1 | MAE↓ | 0.0643 | 0.0986 | 0.1399 | 0.0868 | 0.1234 | 0.1337 | 0.0670 | 0.0826 | 0.0689 | 1.4368
Radar1 | RMSE↓ | 0.1274 | 0.1540 | 0.1944 | 0.1450 | 0.1954 | 0.1928 | 0.1292 | 0.1368 | 0.1417 | 1.7694
Radar1 | R2↑ | 0.9906 | 0.9865 | 0.9784 | 0.9877 | 0.9775 | 0.9788 | 0.9907 | 0.9893 | 0.9879 | −0.3071
Radar1 | MASE↓ | 0.3277 | 0.4462 | 0.6521 | 0.3720 | 0.6613 | 0.5742 | 0.3260 | 0.3933 | 0.2995 | 3.2291
Radar2 | MAE↓ | 0.0533 | 0.0877 | 0.1254 | 0.0784 | 0.1108 | 0.1149 | 0.0530 | 0.0704 | 0.0660 | 0.9565
Radar2 | RMSE↓ | 0.0998 | 0.1304 | 0.1673 | 0.1167 | 0.1541 | 0.1589 | 0.0926 | 0.1086 | 0.1108 | 1.1904
Radar2 | R2↑ | 0.9786 | 0.9648 | 0.9485 | 0.9697 | 0.9457 | 0.9510 | 0.9825 | 0.9756 | 0.9748 | −0.5423
Radar2 | MASE↓ | 0.1343 | 0.2463 | 0.3533 | 0.2269 | 0.2887 | 0.3541 | 0.1372 | 0.1858 | 0.1939 | 2.9407
Radar3 | MAE↓ | 0.1470 | 0.2152 | 0.2926 | 0.1949 | 0.3028 | 0.2913 | 0.1709 | 0.1769 | 0.1920 | 2.1623
Radar3 | RMSE↓ | 0.2361 | 0.3021 | 0.3886 | 0.2747 | 0.4023 | 0.3865 | 0.2514 | 0.2563 | 0.2846 | 2.6083
Radar3 | R2↑ | 0.9836 | 0.9750 | 0.9575 | 0.9809 | 0.9547 | 0.9557 | 0.9836 | 0.9828 | 0.9750 | −0.3206
Radar3 | MASE↓ | 0.0979 | 0.1688 | 0.2266 | 0.1249 | 0.2142 | 0.2102 | 0.1273 | 0.1257 | 0.1447 | 3.5609
Radar4 | MAE↓ | 0.0415 | 0.0521 | 0.0877 | 0.0500 | 0.0653 | 0.0867 | 0.0464 | 0.0462 | 0.0618 | 1.1584
Radar4 | RMSE↓ | 0.0936 | 0.1008 | 0.1306 | 0.1015 | 0.1235 | 0.1430 | 0.0948 | 0.0963 | 0.1198 | 1.4514
Radar4 | R2↑ | 0.9893 | 0.9871 | 0.9798 | 0.9871 | 0.9794 | 0.9736 | 0.9895 | 0.9889 | 0.9826 | −0.4982
Radar4 | MASE↓ | 0.1985 | 0.2541 | 0.4198 | 0.2530 | 0.3006 | 0.3536 | 0.2229 | 0.2100 | 0.2575 | 3.2294
Radar5 | MAE↓ | 0.0714 | 0.0978 | 0.1106 | 0.0715 | 0.0980 | 0.1101 | 0.0630 | 0.1866 | 0.0718 | 0.8926
Radar5 | RMSE↓ | 0.1089 | 0.1533 | 0.1493 | 0.1102 | 0.1390 | 0.1524 | 0.1021 | 0.2417 | 0.1208 | 1.1174
Radar5 | R2↑ | 0.9804 | 0.9864 | 0.9648 | 0.9778 | 0.9667 | 0.9576 | 0.9804 | 0.9114 | 0.9767 | −0.2622
Radar5 | MASE↓ | 0.1814 | 0.4260 | 0.2872 | 0.1825 | 0.2554 | 0.3085 | 0.1630 | 0.6197 | 0.1906 | 2.7657
Method | MAE↓ | RMSE↓ | R2↑ | MASE↓
---|---|---|---|---
CHSDM-Net | 0.0568 | 0.1155 | 0.9987 | 0.0923
STL | 0.5754 | 0.7343 | 0.9468 | 0.2437
Moving Average | 0.5806 | 0.7567 | 0.9496 | 0.2573
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Long, J.; Jia, X.; Li, B.; Zhu, L.; Wang, M. A Consistency-Aware Hybrid Static–Dynamic Multivariate Network for Forecasting Industrial Key Performance Indicators. Big Data Cogn. Comput. 2025, 9, 163. https://doi.org/10.3390/bdcc9070163