A VMD-Based Four-Stage Hybrid Forecasting Model with Error Correction for Complex Coal Price Series
Abstract
1. Introduction
2. Literature Review
2.1. Evolution of Single Forecasting Models
2.2. Technical Development of the Decomposition–Ensemble Paradigm
2.3. Evolution of Approaches to Error Handling
2.4. Research Gap Identification and Study Positioning
3. Methodology
3.1. Overall Design of the Enhanced Coal Price Forecasting Framework
- Module 1: Series Decomposition. The VMD algorithm decomposes the original coal price series into several modal components, each with distinct frequency characteristics. This lays the foundation for the subsequent forecasting stage.
- Module 2: Parallel Modal Forecasting. For each decomposed mode, an ARIMA and a GRU-Attention model are deployed in parallel, with the former capturing linear features and the latter mining non-linear dynamic patterns.
- Module 3: Optimized Ensemble. Using a data-driven, multi-scheme weight optimization strategy, the linear and non-linear forecasts for each mode are optimally ensembled. These modal forecasts are then aggregated to reconstruct a preliminary overall price forecast.
- Module 4: Systematic Error Correction. The systematic error accumulated from the preceding modules is treated as an independent modeling object. An error correction model is then trained to correct the preliminary forecast, yielding the final price forecast.
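The end-to-end flow of the four modules can be sketched in miniature. The code below is a toy illustration, not the authors' implementation: a moving average stands in for VMD, a last-value rule for ARIMA, a short-window mean for GRU-Attention, fixed 0.5/0.5 ensemble weights, and a mean-bias corrector for the error-correction model; all function names are hypothetical.

```python
def moving_average(series, w=3):
    """Trailing moving average, used here as a toy low-frequency 'mode'."""
    return [sum(series[max(0, i - w + 1): i + 1]) / (i - max(0, i - w + 1) + 1)
            for i in range(len(series))]

def predict_one(history):
    """Modules 1-3: decompose, forecast each mode in parallel, ensemble, reconstruct."""
    trend = moving_average(history)                  # Module 1 stand-in for VMD
    resid = [y - t for y, t in zip(history, trend)]  # second 'mode': what the trend misses
    pred = 0.0
    for mode in (trend, resid):                      # Module 2: per-mode dual forecasts
        linear = mode[-1]                            # linear unit stand-in (ARIMA)
        nonlinear = sum(mode[-3:]) / len(mode[-3:])  # non-linear unit stand-in (GRU-Attention)
        pred += 0.5 * linear + 0.5 * nonlinear       # Module 3: weighted ensemble, then sum
    return pred

def forecast_next(series):
    """Module 4: correct the preliminary forecast with the mean in-sample one-step error."""
    raw = predict_one(series)
    errors = [predict_one(series[:t]) - series[t] for t in range(4, len(series))]
    bias = sum(errors) / len(errors) if errors else 0.0
    return raw - bias
```

On a constant series every stand-in agrees, so the corrected forecast reproduces the level exactly; the point of the sketch is only the module boundaries, not forecast quality.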
3.2. Series Decomposition: VMD-Based Implementation
3.3. Parallel Modal Forecasting: A Synergistic Adaptive Dual-Model Design
3.3.1. Linear Forecasting Unit: Dynamic ARIMA
3.3.2. Non-Linear Forecasting Unit: GRU with Attention Mechanism
- Calculate Compatibility Scores: The final hidden state, $h_T$, is compared with all preceding states ($h_1, \dots, h_{T-1}$) in the look-back window to assess their relevance to the current forecast, yielding compatibility scores.
- Generate Attention Weights: The softmax function normalizes these scores to produce attention weights, $\alpha_i$, where each weight quantifies the attention placed on the $i$-th historical time step.
- Construct the Context Vector: A dynamic context vector, $c = \sum_{i} \alpha_i h_i$, is formed as a weighted sum of the hidden states. This vector, summarizing the most relevant historical information, is then passed through a fully connected layer to yield the final modal forecast.
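The three steps above can be sketched in plain Python. This assumes dot-product compatibility scoring (one common choice; the exact score function is not pinned down here), and `attention_step` and `w_out` are illustrative names:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention_step(hidden_states, w_out):
    """hidden_states: GRU hidden vectors h_1..h_T; w_out: output-layer weights."""
    h_T = hidden_states[-1]
    # 1. Compatibility scores: similarity of h_T with every state in the window
    scores = [dot(h_T, h_i) for h_i in hidden_states]
    # 2. Attention weights via softmax (max-shifted for numerical stability)
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    alphas = [e / total for e in exps]
    # 3. Context vector: attention-weighted sum of the hidden states
    context = [sum(a * h[j] for a, h in zip(alphas, hidden_states))
               for j in range(len(h_T))]
    # Final modal forecast from a (linear) fully connected layer
    return dot(context, w_out), alphas
```

The returned weights sum to one, and states more similar to the final hidden state receive proportionally more attention.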
3.3.3. Synergistic Adaptive Design
- The Dynamic ARIMA Unit performs coefficient-level adaptation, dynamically adjusting model coefficients for each forecast via the “fixed orders, re-estimated coefficients” strategy.
- The GRU-Attention Unit performs information-processing-level adaptation, recalculating attention weights on historical data in real-time for each forecast.
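The "fixed orders, re-estimated coefficients" strategy can be illustrated with a toy AR(1) refit, assuming least-squares estimation on a sliding window. The actual unit fits full ARIMA(p, d, q) models; `fit_ar1` and `rolling_forecasts` are hypothetical helpers:

```python
def fit_ar1(window):
    """Re-estimate the AR(1) coefficient by least squares on the current window."""
    num = sum(window[t - 1] * window[t] for t in range(1, len(window)))
    den = sum(y * y for y in window[:-1])
    return num / den

def rolling_forecasts(series, window=4):
    """'Fixed orders, re-estimated coefficients': the order stays AR(1),
    but the coefficient is refit before every one-step forecast."""
    preds = []
    for t in range(window, len(series)):
        phi = fit_ar1(series[t - window:t])  # coefficient-level adaptation
        preds.append(phi * series[t - 1])
    return preds
```

Keeping the order fixed avoids a costly order search at every step, while the refit lets the coefficient track local dynamics, which is exactly the trade-off the dynamic ARIMA unit makes.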
3.4. Optimized Ensemble: A Multi-Scheme Weight Optimization Strategy
3.4.1. Intra-Modal Ensemble: Weight Optimization Based on Multi-Scheme Evaluation
3.4.2. Cross-Modal Reconstruction
3.5. Systematic Error Correction: Explicit Error Modeling
3.5.1. Error Accumulation and the Necessity for Correction
- Decomposition Error: As an approximation algorithm, VMD incurs information loss during the decomposition of the original series, resulting in a structural decomposition residual.
- Forecasting Error: The forecasting models for each mode are essentially function approximations of the true dynamics; they cannot perfectly capture all variation patterns, thus inevitably introducing forecasting biases.
- Intra-modal Ensemble Error: The linear weighted ensemble is a simplified treatment of the relationship between the two heterogeneous models and cannot fully capture their potential non-linear complementary effects.
- Cross-modal Ensemble Error: Direct summation for reconstruction overlooks the complex coupling relationships that may exist among different frequency components in the real economic system.
3.5.2. Residual Modeling Based on GRU-Attention
3.6. Hyperparameter Optimization Strategy: Optuna Framework
3.7. Model Evaluation Metrics
- MAPE measures relative error as a percentage, making it suitable for cross-scale comparisons. RMSE, due to its squared term, is highly sensitive to large errors and effectively highlights extreme deviations. MAE represents the average magnitude of absolute errors, providing straightforward interpretability. For all three metrics, smaller values indicate better forecasting accuracy. The formulas are $\mathrm{MAPE} = \frac{100\%}{n}\sum_{t=1}^{n}\left|\frac{y_t - \hat{y}_t}{y_t}\right|$, $\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{t=1}^{n}(y_t - \hat{y}_t)^2}$, and $\mathrm{MAE} = \frac{1}{n}\sum_{t=1}^{n}|y_t - \hat{y}_t|$, where $y_t$ is the actual value and $\hat{y}_t$ the forecast.
- Theil’s U Statistic: Compares the model’s accuracy to a naive “random walk” forecast, where the next value equals the current value. A value below 1 indicates the model outperforms the naive benchmark, while a value above 1 suggests inferior performance.
- DA: Assesses the model’s ability to predict the direction of changes (e.g., increase or decrease). This metric is particularly important in economic forecasting, where the direction can be as critical as the magnitude. DA is expressed as the percentage of correct directional predictions.
- DM test: Evaluates whether the difference in predictive accuracy between two models is statistically significant. The null hypothesis ($H_0$) assumes both models have equal forecasting accuracy. A p-value below a chosen significance level (e.g., 0.05) indicates a statistically significant difference, supporting the superiority of one model.
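A minimal sketch of the accuracy metrics above. It assumes the RMSE-ratio form of Theil's U implied by the description (a value of 1 means parity with the random walk; other variants of the statistic exist) and sign agreement of predicted and actual changes for DA; the DM test is omitted as it needs a long-run variance estimate:

```python
import math

def mape(y, f):
    """Mean absolute percentage error, in percent."""
    return 100.0 / len(y) * sum(abs((a - b) / a) for a, b in zip(y, f))

def rmse(y, f):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, f)) / len(y))

def mae(y, f):
    return sum(abs(a - b) for a, b in zip(y, f)) / len(y)

def theils_u(y, f):
    """Model RMSE relative to the naive random-walk forecast y_hat_t = y_{t-1}."""
    return rmse(y[1:], f[1:]) / rmse(y[1:], y[:-1])

def directional_accuracy(y, f):
    """Percentage of steps where the predicted and actual changes share a sign."""
    hits = sum((y[t] - y[t - 1]) * (f[t] - y[t - 1]) > 0 for t in range(1, len(y)))
    return 100.0 * hits / (len(y) - 1)
```

Feeding the naive forecast itself into `theils_u` returns exactly 1, which is the benchmark threshold the text describes.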
3.8. Forecasting System Implementation and Workflow
3.8.1. Stage One: Initial Model Training
- VMD Parameter Optimization and Decomposition: After determining the optimal VMD parameters (K and α) on the training set, the set is decomposed into K modal components.
- Modal Forecaster Training: For each mode, an ARIMA and a GRU-Attention model are trained.
- Ensemble Weight Determination: The multi-scheme optimization strategy is applied to determine the optimal ensemble weights for each mode.
- Residual Series Generation: A preliminary forecast is generated on the training set using the trained models and weights, from which the residual series is calculated.
- Error Correction Model Training: An independent GRU-Attention model is trained on the generated residual series to serve as the error corrector.
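The ensemble-weight determination step can be illustrated with the simplest scheme: a grid search for the convex-combination weight that minimizes MSE on held-out data. The actual strategy evaluates multiple weighting schemes; `optimal_weight` is a hypothetical helper:

```python
def optimal_weight(y_true, linear_pred, nonlinear_pred, grid=101):
    """Grid-search the weight w in [0, 1] minimizing the MSE of the
    combined forecast w * linear + (1 - w) * nonlinear."""
    best_w, best_mse = 0.0, float("inf")
    for k in range(grid):
        w = k / (grid - 1)
        mse = sum((y - (w * l + (1 - w) * n)) ** 2
                  for y, l, n in zip(y_true, linear_pred, nonlinear_pred)) / len(y_true)
        if mse < best_mse:
            best_w, best_mse = w, mse
    return best_w
```

When one model is exactly right the search pushes all weight onto it; in practice the optimum sits strictly between 0 and 1 whenever the two models' errors are complementary.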
3.8.2. Stage Two: Rolling Forecast Validation
4. Data and Experimental Design
4.1. Data Source and Description
4.2. Dataset Partitioning and Statistical Features
- Training Set: The first 588 observations (June 2010–September 2021), used for model training and hyperparameter optimization.
- Test Set: The final 147 observations (October 2021–May 2025), used exclusively for final model performance evaluation.
4.3. Preprocessing Parameter Settings
4.3.1. VMD Parameter Optimization
- Determination of Modes (K): The selection of K involves a trade-off between decomposition accuracy and economic interpretability: a larger K improves reconstruction accuracy but makes the modes harder to interpret. To find the optimal value, we identify the “elbow point” on a curve plotting VMD reconstruction error (measured by MSE) against K for a given α. As shown in Figure 4, the marginal improvement in MSE diminishes sharply beyond K = 6. Since this result is consistent across various α values, we select K = 6 as the optimal number of modes.
- Optimization of Penalty Factor (α): With K fixed at 6, we test different α values and find that α = 1000 yields the best modal separation, as it produces the clearest spectral boundaries and highest compactness.
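One way to automate the elbow-point rule above is a relative-improvement threshold: stop at the first K whose next increment no longer buys a meaningful MSE reduction. A hedged sketch (the threshold `tol` is an assumption for illustration, not a value from the study):

```python
def elbow_k(mse_by_k, tol=0.2):
    """Return the smallest K whose relative MSE improvement at the
    next candidate K drops below tol; fall back to the largest K."""
    ks = sorted(mse_by_k)
    for a, b in zip(ks, ks[1:]):
        if (mse_by_k[a] - mse_by_k[b]) / mse_by_k[a] < tol:
            return a
    return ks[-1]
```

For a curve with a large drop into K = 6 and near-flat behavior afterward, the rule returns 6, matching the visual elbow reading.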
4.3.2. Data Normalization
4.4. Model Training and Hyperparameter Optimization
4.4.1. Time Series Cross-Validation Strategy
4.4.2. Hyperparameter Optimization Using Optuna
- Objective: Minimize the average MSE over the 3-fold cross-validation. We run 100 trials for each model and select the hyperparameter set with the lowest average MSE.
- Search Space Design:
- GRU units: Integer sampled from [16, 192] with a step of 16.
- Dropout rate: Uniformly sampled from [0.1, 0.5].
- Learning rate: Log-uniformly sampled from [1 × 10^−4, 1 × 10^−2].
- Look-back window: Categorical choice from {2, 4, 8, 13, 15, 25, 49}.
- Fixed Parameters: To balance efficiency and complexity, we use 1 GRU layer and a batch size of 32.
- Look-back Window Design: The window size candidates are chosen based on two factors: (1) standard financial periods, including annual (49 weeks), semi-annual (25 weeks), quarterly (13 weeks), bi-monthly (8 weeks), and monthly (4 weeks); and (2) characteristic periods identified by the VMD (2 and 15 weeks). This design enhances interpretability by incorporating both domain knowledge and data-driven insights.
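The search space above maps directly onto Optuna's `suggest_int(..., step=16)`, `suggest_float(...)`, `suggest_float(..., log=True)`, and `suggest_categorical(...)` calls. For a dependency-free illustration, the same space can be written as plain draw functions; this is a random-search sketch only, whereas the study uses Optuna's sampler over 100 trials:

```python
import random

# Section 4.4.2 search space, expressed as draw functions over a shared RNG.
SPACE = {
    "units":    lambda r: r.randrange(16, 193, 16),             # [16, 192], step 16
    "dropout":  lambda r: r.uniform(0.1, 0.5),                  # uniform
    "lr":       lambda r: 10 ** r.uniform(-4, -2),              # log-uniform [1e-4, 1e-2]
    "lookback": lambda r: r.choice([2, 4, 8, 13, 15, 25, 49]),  # categorical
}

def sample_trial(rng):
    """Draw one hyperparameter configuration from the space."""
    return {name: draw(rng) for name, draw in SPACE.items()}
```

Each `sample_trial` call yields one candidate configuration; a tuner would train the model with it, score the 3-fold average MSE, and keep the best of 100 draws.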
4.5. Benchmark Models and Ablation Study
4.5.1. Benchmark Models
- ARIMA: A classic linear time-series model used to establish a performance baseline.
- GRU: A standard deep learning model used to verify the advantage of non-linear modeling over linear models.
- GRU-Attention: An advanced single-unit deep learning model used to measure the performance gain of our composite framework over a single model.
4.5.2. Ablation Study
- VMD-Sum: This model independently forecasts each VMD-decomposed mode with GRU-Attention and sums the results. It tests the effectiveness of the decomposition–ensemble strategy against direct forecasting of the original series.
- VMD-Ensemble: Building on VMD-Sum, this model uses a dual-model (ARIMA and GRU-Attention) forecast for each mode, which are then ensembled via our weight optimization strategy. It validates the superiority of this dual-model ensemble over a simple summation of single-model forecasts.
- Hybrid VMD-Ensemble-EC (The Complete Model): This is our final proposed framework, which adds an Error Correction (EC) module to VMD-Ensemble. It is used to validate the contribution of the systematic error correction.
4.6. Experimental Environment
5. Results and Discussion
5.1. VMD: Unifying Technical Effectiveness and Economic Interpretability
- Low-Frequency Components: Long-Term Trends & Policy Cycles
- IMF1 delineates the long-term trend driven by macroeconomic fundamentals, with its turning point around 2016 reflecting shifting market expectations from the “New Normal” slowdown to the “Supply-Side Structural Reform.”
- IMF2 represents policy-driven fluctuations. Its peaks and troughs map to key policy impacts: administrative de-capacity in 2016 drove prices up; post-2017 environmental policies suppressed demand; and a post-2021 shift to “energy security” due to pandemic-related supply chain issues again pushed prices higher.
- Mid-Frequency Components: Industrial Rhythms and Seasonality
- IMF3 shows a stable annual cycle, stemming from work schedules around the Spring Festival and annual maintenance in steam coal consuming sectors (e.g., cement, chemicals), creating industrial demand fluctuations independent of power-sector seasonality.
- IMF4 reflects climate-driven seasonality in electricity consumption, with its semi-annual cycle perfectly matching peak demand in summer and winter.
- High-Frequency Components: Market Dynamics and Exogenous Shocks
- IMF5 represents short-term fluctuations, reflecting market participants’ trading strategies based on inventory levels and supply–demand gaps.
- IMF6, a high-frequency irregular component, captures major unpredictable events. Its extreme spikes record structural breaks such as the COVID-19 impact (2020–2021), the geopolitical premium from the war in Ukraine (2022), and the failure of hydropower substitution due to extreme drought (2022).
5.2. Overall Performance Comparison
5.3. Ablation Analysis: Investigating the Sources of Performance Gain
- Introducing VMD (Stage 0 → 1): Adding VMD reduces MAPE by 1.4%, demonstrating its effectiveness in lowering overall error. However, the temporary worsening of Theil’s U (1.0037) and DA (45.62%) underscores the need for subsequent modules to refine forecasts. Importantly, VMD lays the groundwork for tailored ensemble strategies and improved interpretability by decomposing complex series into simpler sub-series.
- Introducing the Dual-Model Weighted Ensemble (Stage 1 → 2): The dual-model ensemble further reduces MAPE by 6.2%, significantly improving Theil’s U (0.7376) and partially recovering DA. This demonstrates the effectiveness of combining complementary models with optimized weights.
- Introducing Error Correction (Stage 2 → 3): The error correction module delivers the largest performance boost, cutting MAPE by 25.1% and achieving the lowest Theil’s U (0.5086) and highest DA (49.56%) across all stages. By addressing systematic biases in residual errors, the module plays a crucial role in optimizing overall performance. The DM test further validates this, showing that the final model (Stage 3) is significantly more accurate than the baseline (Stage 0) and VMD-Sum (Stage 1) models (p < 0.05). Although its performance gain over Stage 2 does not reach conventional statistical significance (p = 0.0546), the error correction module demonstrates a strong trend toward improving accuracy and remains essential for achieving the best overall results.
5.4. Robustness Analysis: Performance Under Different Market Conditions
6. Conclusions and Future Work
6.1. Main Conclusions
6.2. Limitations of the Model
1. Inability to Support Multi-Step Forecasting
2. Data and Feature Constraints
3. Limited Component Transparency
6.3. Future Research Prospects
1. Expanding Data Dimensions and Exploring Advanced Models
2. Deepening Interpretability Research
3. Broadening the Scope of Application
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
| Mode | Look-Back | GRUs | Dropout Rate | Learning Rate |
|---|---|---|---|---|
| IMF1 | 4 | 176 | 0.35 | 0.002607 |
| IMF2 | 4 | 128 | 0.20 | 0.001947 |
| IMF3 | 8 | 160 | 0.10 | 0.009414 |
| IMF4 | 4 | 112 | 0.10 | 0.009212 |
| IMF5 | 8 | 160 | 0.10 | 0.009414 |
| IMF6 | 4 | 160 | 0.35 | 0.002620 |
References
- Jones, D. Global Electricity Review 2024; Ember: London, UK, 2024.
- Box, G.E.P.; Jenkins, G.M.; Reinsel, G.C.; Ljung, G.M. Time Series Analysis: Forecasting and Control; John Wiley & Sons: Hoboken, NJ, USA, 2015.
- Makridakis, S.; Andersen, A.; Carbone, R.; Fildes, R.; Hibon, M.; Lewandowski, R.; Newton, J.; Parzen, E.; Winkler, R. The accuracy of extrapolation (time series) methods: Results of a forecasting competition. J. Forecast. 1982, 1, 111–153.
- Tsay, R.S. Analysis of Financial Time Series; John Wiley & Sons: Hoboken, NJ, USA, 2005.
- Zhang, G.; Patuwo, B.E.; Hu, M.Y. Forecasting with artificial neural networks: The state of the art. Int. J. Forecast. 1998, 14, 35–62.
- Smola, A.J.; Schölkopf, B. A tutorial on support vector regression. Stat. Comput. 2004, 14, 199–222.
- Bengio, Y.; Simard, P.; Frasconi, P. Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw. 1994, 5, 157–166.
- Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
- Cho, K.; van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv 2014, arXiv:1406.1078.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008.
- Taylor, S.J.; Letham, B. Forecasting at scale. Am. Stat. 2018, 72, 37–45.
- Li, X.; Sengupta, T.; Si Mohammed, K.; Jamaani, F. Forecasting the lithium mineral resources prices in China: Evidence with Facebook Prophet (Fb-P) and Artificial Neural Networks (ANN) methods. Resour. Policy 2023, 82, 103580.
- Mitchell, T.M. Machine Learning; McGraw-Hill: Columbus, OH, USA, 1997.
- Adadi, A.; Berrada, M. Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access 2018, 6, 52138–52160.
- Wu, Z.; Huang, N.E. Ensemble empirical mode decomposition: A noise-assisted data analysis method. Adv. Adapt. Data Anal. 2009, 1, 1–41.
- Dragomiretskiy, K.; Zosso, D. Variational mode decomposition. IEEE Trans. Signal Process. 2014, 62, 531–544.
- Yu, L.; Wang, Z.; Tang, L. A decomposition–ensemble model with data-characteristic-driven reconstruction for crude oil price forecasting. Appl. Energy 2015, 156, 251–267.
- Meng, E.; Huang, S.; Huang, Q.; Fang, W.; Wang, H.; Leng, G.; Liang, H. A hybrid VMD-SVM model for practical streamflow prediction using an innovative input selection framework. Water Resour. Manag. 2021, 35, 1321–1337.
- Zhao, L.; Li, Z.; Qu, L.; Zhang, J.; Teng, B. A hybrid VMD-LSTM/GRU model to predict non-stationary and irregular waves on the east coast of China. Ocean Eng. 2023, 287, 114136.
- Bates, J.M.; Granger, C.W.J. The combination of forecasts. J. Oper. Res. Soc. 1969, 20, 451–468.
- Wolpert, D.H. Stacked generalization. Neural Netw. 1992, 5, 241–259.
- Domingos, P. A unified bias-variance decomposition and its applications. In Proceedings of the 17th International Conference on Machine Learning, San Francisco, CA, USA, 29 June–2 July 2000; University of Washington: Seattle, WA, USA, 2000; pp. 231–238.
- Granger, C.W.; Morris, M.J. Time series modelling and interpretation. J. R. Stat. Soc. Ser. A Gen. 1976, 139, 246–257.
- Friedman, J.H. Greedy function approximation: A gradient boosting machine. Ann. Stat. 2001, 29, 1189–1232.
- Khashei, M.; Bijari, M. A novel hybridization of artificial neural networks and ARIMA models for time series forecasting. Appl. Soft Comput. 2010, 11, 2664–2675.
- Pai, P.F.; Lin, C.S. A hybrid ARIMA and support vector machines model in stock price forecasting. Omega 2005, 33, 497–505.
- Livieris, I.E.; Pintelas, E.; Pintelas, P. A CNN–LSTM model for gold price time-series forecasting. Neural Comput. Appl. 2020, 32, 17351–17360.
- Yu, L.; Wang, S.; Lai, K.K. Forecasting crude oil price with an EMD-based neural network ensemble learning paradigm. Energy Econ. 2008, 30, 2623–2635.
- Bahdanau, D.; Cho, K.; Bengio, Y. Neural machine translation by jointly learning to align and translate. arXiv 2014, arXiv:1409.0473.
- Luong, M.-T.; Pham, H.; Manning, C.D. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, Lisbon, Portugal, September 2015; pp. 1412–1421.
- Makridakis, S.; Hibon, M. The M3-Competition: Results, conclusions and implications. Int. J. Forecast. 2000, 16, 451–476.
- Akiba, T.; Sano, S.; Yanase, T.; Ohta, T.; Koyama, M. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, Anchorage, AK, USA, 4–8 August 2019; pp. 2623–2631.
- Bergstra, J.; Bardenet, R.; Bengio, Y.; Kégl, B. Algorithms for hyper-parameter optimization. Adv. Neural Inf. Process. Syst. 2011, 24.
- Li, L.; Jamieson, K.; DeSalvo, G.; Rostamizadeh, A.; Talwalkar, A. Hyperband: A novel bandit-based approach to hyperparameter optimization. J. Mach. Learn. Res. 2017, 18, 6765–6816.
- Zhang, Y.; Liu, D.; Wang, L. A novel hybrid model for crude oil price forecasting based on VMD and GRU. Energy 2021, 214, 118931.
- Li, C.; Chen, Z.; Li, M.; Liu, Y. A hybrid approach for wind power forecasting based on variational mode decomposition and long short-term memory networks. J. Clean. Prod. 2020, 259, 120857.
- Zhang, K.; Cao, H.; Thé, J.; Yu, H. A hybrid model for multi-step coal price forecasting using decomposition technique and deep learning algorithms. Appl. Energy 2022, 306, 118011.
- Moreno-Torres, J.G.; Raeder, T.; Alaiz-Rodríguez, R.; Chawla, N.V.; Herrera, F. A unifying view on dataset shift in classification. Pattern Recognit. 2012, 45, 521–530.
- Geman, S.; Bienenstock, E.; Doursat, R. Neural networks and the bias/variance dilemma. Neural Comput. 1992, 4, 1–58.
- Lundberg, S.M.; Lee, S.I. A unified approach to interpreting model predictions. Adv. Neural Inf. Process. Syst. 2017, 30, 4765–4774.
- Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794.
| | N | Mean | Standard Deviation | Median | Minimum | Maximum |
|---|---|---|---|---|---|---|
| Full Set | 735 | 624.02 | 117.09 | 598 | 371 | 854 |
| Training Set | 588 | 599.97 | 119.10 | 578 | 371 | 854 |
| Test Set | 147 | 720.23 | 15.70 | 724 | 671 | 742 |
| Model Type | Stage | Model Abbreviation | Key Description | Main Parameters and Determination Method |
|---|---|---|---|---|
| Benchmark | - | ARIMA | Classic linear model | Box-Jenkins: ARIMA (5, 1, 5) |
| Benchmark | - | GRU | Standard RNN | Optuna Opt.: Lookback = 49, Units = 192, Dropout = 0.45 |
| Benchmark | Stage 0 (Baseline) | GRU-Attention | GRU with attention | Optuna Opt.: Lookback = 4, Units = 176, Dropout = 0.35 |
| Ablation | Stage 1 | VMD-Sum | VMD + GRU-A per mode + Sum | VMD (K = 6, α = 1000); Optuna for GRU-A (see Appendix A) |
| Ablation | Stage 2 | VMD-Ensemble | VMD + Dual-model per mode + Ensemble | Ensemble strategy per Section 3.4; parameters inherited |
| Ablation | Stage 3 (Complete) | Hybrid VMD-Ensemble-EC | VMD-Ensemble + Error Correction | EC model via Optuna: Lookback = 8, Units = 192, Dropout = 0.2 |
| | Model Name | RMSE | MAE | MAPE (%) | Theil's U | DA (%) | DM Test (p-Value) * |
|---|---|---|---|---|---|---|---|
| Benchmark | ARIMA (5, 1, 5) | 6.5255 | 3.5390 | 0.4752 | 1.1883 | 43.52 | 0.0371 |
| Benchmark | GRU | 3.7553 | 2.6114 | 0.3632 | 1.1572 | 49.01 | <0.0001 |
| Benchmark | GRU-Attention | 3.3311 | 2.4906 | 0.3480 | 0.7415 | 49.26 | 0.0156 |
| Proposed Model | Hybrid VMD-Ensemble-EC | 2.7166 | 1.7280 | 0.2408 | 0.5086 | 49.56 | - |
| Experimental Stage | Model Configuration | RMSE | MAE | MAPE (%) | Theil's U | DA (%) | DM Test (p-Value) | Relative MAPE Improvement * |
|---|---|---|---|---|---|---|---|---|
| Stage 0: Baseline | GRU-Attention (single model) | 3.3311 | 2.4906 | 0.3480 | 0.7415 | 48.26 | 0.0156 | - |
| Stage 1: +VMD | VMD-Sum (Decomposition + Sum) | 3.0359 | 2.4710 | 0.3430 | 1.0037 | 45.62 | 0.0401 | 1.4% |
| Stage 2: +Ensemble | VMD-Ensemble (Dual-model & multi-scheme) | 3.1866 | 2.3136 | 0.3216 | 0.7376 | 47.36 | 0.0546 | 6.2% |
| Stage 3: +EC | Hybrid VMD-Ensemble-EC | 2.7166 | 1.7280 | 0.2408 | 0.5086 | 49.56 | - | 25.1% |
| Model | RMSE (Intense Volatility Period, N = 33) | MAE (Intense Volatility Period, N = 33) | RMSE (Mild Decline Period, N = 31) | MAE (Mild Decline Period, N = 31) |
|---|---|---|---|---|
| GRU | 5.2906 | 3.3384 | 3.3830 | 2.7328 |
| GRU-Attention | 3.9465 | 2.7173 | 4.0199 | 3.5335 |
| VMD-Sum | 4.1456 | 3.0998 | 2.3774 | 2.0302 |
| VMD-Ensemble | 4.1725 | 2.8454 | 2.8929 | 2.2323 |
| Hybrid VMD-Ensemble-EC | 3.8241 | 2.4253 | 2.5751 | 1.9212 |
Share and Cite
Qin, Q.; Li, L. A VMD-Based Four-Stage Hybrid Forecasting Model with Error Correction for Complex Coal Price Series. Mathematics 2025, 13, 2912. https://doi.org/10.3390/math13182912