# A Hybrid Model Based on a Two-Layer Decomposition Approach and an Optimized Neural Network for Chaotic Time Series Prediction


## Abstract


## 1. Introduction

- A hybrid model based on a two-layer decomposition technique is proposed. Because a prediction model built on a single decomposition technique cannot fully handle the nonlinearity and non-stationarity of chaotic time series, this paper proposes a two-layer decomposition technique based on CEEMDAN and VMD, which fully extracts the complex characteristics of the time series and improves prediction accuracy.
- A firefly algorithm (FA) is applied to optimize the weights between the input and hidden layers, the weights between the hidden and output layers, and the thresholds of the neuron nodes, which reduces the human interference of parameter settings and improves the function-approximation ability of the neural network. A BPNN optimized by the FA is then used to predict the subsequences obtained by the two-layer decomposition.
- A real-world chaotic time series, the daily maximum temperature in Melbourne, is used to assess the validity of the proposed hybrid model. The experimental results indicate that the hybrid model significantly improves prediction accuracy compared to existing single-model approaches and hybrid models based on a single-layer decomposition technique.
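The overall scheme summarized above (decompose the series, predict each subsequence separately, sum the subsequence forecasts) can be sketched in a few lines. The sketch below is illustrative only: it uses a ridge-regression autoregressor as a simple stand-in for the FA-optimized BPNN, and `subseqs` is assumed to hold the modes produced by the two-layer CEEMDAN/VMD decomposition.

```python
import numpy as np

def lagged(x, p):
    """Build a lag matrix X and target vector y from a 1-D series."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    return X, x[p:]

def fit_ridge(X, y, lam=1e-3):
    """Least squares with a small ridge penalty for numerical stability."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

def hybrid_forecast(subseqs, p=4, n_test=50):
    """Fit one predictor per decomposed subsequence, then sum the forecasts."""
    total = np.zeros(n_test)
    for s in subseqs:
        X, y = lagged(s[:-n_test], p)              # train on the history
        w = fit_ridge(X, y)
        X_test, _ = lagged(s[-(n_test + p):], p)   # one-step-ahead test lags
        total += X_test @ w
    return total
```

In the paper, each `fit_ridge` call would be replaced by training an FA-optimized BPNN on that subsequence; the decompose-predict-aggregate structure is the same.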

## 2. Preliminaries and Related Works

#### 2.1. Complete Ensemble Empirical Mode Decomposition with Adaptive Noise

#### 2.2. Variational Mode Decomposition

#### 2.3. Firefly Algorithm

#### 2.4. Related Works

## 3. Methodology

#### 3.1. The Structure of CEEMDAN–VMD–FABP Model

#### 3.2. Algorithm Design

#### 3.2.1. CEEMDAN for Original Time Series

#### 3.2.2. VMD for IMF1

**Algorithm 1: ADMM Optimization Process for VMD**

Initialize $\left\{{\widehat{u}}_{k}^{1}\right\}$, $\left\{{\omega}_{k}^{1}\right\}$, ${\widehat{\lambda}}^{1}$, $n = 0$

**repeat**

$n = n + 1$

**for** $k = 1:K$ **do**

Update ${\widehat{u}}_{k}$ for all $\omega \ge 0$:

$${\widehat{u}}_{k}^{n+1}\left(\omega\right) = \frac{\widehat{f}\left(\omega\right) - {\displaystyle \sum_{i \ne k} {\widehat{u}}_{i}\left(\omega\right)} + \frac{\widehat{\lambda}\left(\omega\right)}{2}}{1 + 2\alpha{\left(\omega - {\omega}_{k}\right)}^{2}}$$

Update ${\omega}_{k}$:

$${\omega}_{k}^{n+1} = \frac{{\displaystyle \int_{0}^{\infty} \omega{\left|{\widehat{u}}_{k}^{n+1}\left(\omega\right)\right|}^{2}\,d\omega}}{{\displaystyle \int_{0}^{\infty}{\left|{\widehat{u}}_{k}^{n+1}\left(\omega\right)\right|}^{2}\,d\omega}}$$

**end for**

Update ${\widehat{\lambda}}$ for all $\omega \ge 0$:

$${\widehat{\lambda}}^{n+1}\left(\omega\right) \leftarrow {\widehat{\lambda}}^{n}\left(\omega\right) + \gamma\left[\widehat{f}\left(\omega\right) - \sum_{k}{\widehat{u}}_{k}^{n+1}\left(\omega\right)\right]$$

**until** convergence: $\sum_{k}{\left\Vert{\widehat{u}}_{k}^{n+1} - {\widehat{u}}_{k}^{n}\right\Vert}_{2}^{2} / {\left\Vert{\widehat{u}}_{k}^{n}\right\Vert}_{2}^{2} < \epsilon$
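The ADMM loop above can be prototyped directly in the frequency domain. The following numpy-only sketch is a simplified illustration of those updates, not the authors' implementation: it works on a one-sided spectrum, uses a fixed iteration budget, pools the per-mode convergence check into one ratio, and disables the dual-ascent step by default (`tau=0`).

```python
import numpy as np

def vmd(f, K=3, alpha=2000.0, tau=0.0, tol=1e-7, n_iter=500):
    """Simplified frequency-domain ADMM loop for VMD (sketch)."""
    N = len(f)
    f_hat = np.fft.fft(f)
    freqs = np.fft.fftfreq(N)              # normalized frequencies in [-0.5, 0.5)
    pos = freqs >= 0                       # updates run over omega >= 0 only
    u_hat = np.zeros((K, N), dtype=complex)
    omega = np.linspace(0.05, 0.45, K)     # spread the initial center frequencies
    lam = np.zeros(N, dtype=complex)       # Lagrangian multiplier (lambda-hat)
    for _ in range(n_iter):
        u_prev = u_hat.copy()
        for k in range(K):
            # Wiener-filter update of mode k around its current center frequency
            residual = f_hat - (u_hat.sum(axis=0) - u_hat[k]) + lam / 2
            u_hat[k, pos] = residual[pos] / (1 + 2 * alpha * (freqs[pos] - omega[k]) ** 2)
            # center frequency = power-weighted mean frequency of the mode
            power = np.abs(u_hat[k, pos]) ** 2
            omega[k] = np.sum(freqs[pos] * power) / (np.sum(power) + 1e-12)
        lam = lam + tau * (f_hat - u_hat.sum(axis=0))  # dual ascent (tau=0 disables it)
        change = np.sum(np.abs(u_hat - u_prev) ** 2) / (np.sum(np.abs(u_prev) ** 2) + 1e-12)
        if change < tol:
            break
    # real modes from the one-sided spectra (factor 2 restores the mirrored half)
    u = np.real(np.fft.ifft(2 * u_hat, axis=1))
    return u, omega
```

On a sum of two well-separated tones, each recovered center frequency settles near one tone and the modes sum back to the input; production implementations additionally mirror the signal to reduce boundary effects.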

#### 3.2.3. BPNN Optimized by a Firefly Algorithm

**Algorithm 2: Process of the Firefly Algorithm**

Initialize $n$, ${\beta}_{0}$, $\gamma$, $\alpha$, $\epsilon$, $t = 0$; define the maximum number of iterations (MaxGeneration)

**while** $t < \mathrm{MaxGeneration}$

$t = t + 1$

**for** $i = 1:n$

**for** $j = 1:i$

Calculate the light intensity ${I}_{i}$ at position ${s}_{i}$

**if** ${I}_{j} > {I}_{i}$, move firefly $i$ towards firefly $j$ **end if**

Update the attractiveness values

Evaluate the new solutions and update the light intensities

**end for**

**end for**

Rank the fireflies and find the current best

**end while**

Output the global optimal value
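A compact numpy sketch of this process is given below, here minimizing a generic objective (a lower objective value is treated as a brighter firefly). The attractiveness model $\beta = \beta_0 e^{-\gamma r^2}$ and the decaying random-walk step are standard FA ingredients; all parameter values are illustrative, not the paper's settings.

```python
import numpy as np

def firefly_optimize(objective, dim, n=25, beta0=1.0, gamma=0.01,
                     alpha=0.2, max_gen=100, bounds=(-5.0, 5.0), seed=0):
    """Minimal firefly algorithm: lower objective value = brighter firefly."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    s = rng.uniform(lo, hi, size=(n, dim))              # firefly positions
    light = np.array([objective(x) for x in s])         # light intensities
    for t in range(max_gen):
        step = alpha * 0.97 ** t                        # decaying random-walk step
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:                 # firefly j is brighter
                    r2 = np.sum((s[i] - s[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                    s[i] = s[i] + beta * (s[j] - s[i]) + step * (rng.random(dim) - 0.5)
                    s[i] = np.clip(s[i], lo, hi)
                    light[i] = objective(s[i])
    best = int(np.argmin(light))                        # rank and return the global best
    return s[best], light[best]
```

In the paper, `objective` would be the BPNN training error as a function of the network's weights and thresholds.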

## 4. Experimental Results

## 5. Conclusions

- Real-world time series are usually non-stationary and noisy, which makes the original series difficult to analyze directly. CEEMDAN is a noise-robust decomposition method, and VMD handles non-stationary signals well; the subsequences obtained from CEEMDAN and VMD are therefore much easier to analyze and predict.
- After decomposition of the original signal, a BPNN was used for prediction. At this stage, the BPNN parameters strongly influence prediction accuracy; therefore, to select the model parameters in a principled way, the firefly algorithm was introduced to optimize the BPNN parameters.
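The FA-BP coupling described above amounts to flattening all BPNN weights and thresholds into one parameter vector and letting the FA minimize the network's training error over that vector. A minimal sketch of this objective follows; shapes and names are illustrative, not the authors' code.

```python
import numpy as np

def bpnn_shapes(n_in, n_hid, n_out):
    """Shapes of the parameters the FA searches over: W1, b1, W2, b2."""
    return [(n_in, n_hid), (n_hid,), (n_hid, n_out), (n_out,)]

def decode(theta, shapes):
    """Unpack one flat parameter vector into weight matrices and thresholds."""
    parts, i = [], 0
    for sh in shapes:
        size = int(np.prod(sh))
        parts.append(theta[i:i + size].reshape(sh))
        i += size
    return parts

def bpnn_mse(theta, X, y, shapes):
    """Training error of a one-hidden-layer network; this is the FA's objective."""
    W1, b1, W2, b2 = decode(theta, shapes)
    h = np.tanh(X @ W1 + b1)        # hidden layer with tanh activation
    out = h @ W2 + b2               # linear output layer
    return float(np.mean((out.ravel() - y) ** 2))
```

Any derivative-free optimizer can minimize `bpnn_mse` over `theta`; in the proposed model, that optimizer is the firefly algorithm.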

## Author Contributions

## Funding

## Conflicts of Interest

## References


**Figure 3.** Decomposition results of the daily maximum temperature time series in Melbourne based on complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN).

**Figure 6.** Prediction results and prediction errors of the daily maximum temperature in Melbourne based on the hybrid model of CEEMDAN and FABP.

**Figure 7.** Decomposition results of IMF1 based on the variational mode decomposition (VMD) algorithm.

**Figure 8.** Prediction results and prediction errors of the daily maximum temperature in Melbourne based on the two-layer decomposition algorithm and a BPNN optimized by a firefly algorithm (FABP).

Notation | Meaning
---|---
$\epsilon(t)$ | independent Gaussian white noise with unit variance
${\omega}_{0}$ | noise coefficient
${r}_{1}\left(t\right)$ | the first residue
${E}_{j}(\cdot)$ | operator extracting the $j$-th intrinsic mode function (IMF) decomposed by EMD
${u}_{k}\left(t\right)$ | the $k$-th mode of the decomposition
${\omega}_{k}$ | the center frequency of mode $k$
${\partial}_{t}(\cdot)$ | partial derivative with respect to time
$\delta(t)$ | the Dirac distribution
$\ast$ | convolution
$\alpha$ | balancing parameter of the data-fidelity constraint
$\lambda$ | the Lagrangian multiplier
$\widehat{f}\left(\omega\right)$ | the Fourier transform of $f\left(t\right)$
${I}_{0}$ | the light intensity at the source
$\gamma$ | the light absorption coefficient
${r}_{ij}$ | the distance between fireflies $i$ and $j$
${\beta}_{0}$ | the attractiveness at the source $(r = 0)$
${s}_{i}$ | the spatial position of firefly $i$

Prediction Error | RMSE | NRMSE | MAPE | SMAPE
---|---|---|---|---
Overall prediction error | 0.7763 | 0.0325 | 0.0277 | 0.0138
IMF1 prediction error | 0.6361 | 0.0808 | 1.8515 | 0.5829

**Table 3.** Prediction errors of the daily maximum temperature time series based on different algorithms (one step ahead).

Model | RMSE | NRMSE | MAPE | SMAPE | Training Time | Testing Time
---|---|---|---|---|---|---
RBF | 1.7241 | 0.0721 | 0.0695 | 0.0345 | 0.9984 | 0.1092
ANFIS | 3.2410 | 0.1356 | 0.1224 | 0.0598 | 22.3393 | 0.0936
BP | 1.3818 | 0.0578 | 0.0511 | 0.0255 | 0.4368 | 0.0468
FABP | 1.3618 | 0.0570 | 0.0505 | 0.0251 | 34.4294 | 0.0780
CEEMDAN–FABP | 0.7763 | 0.0325 | 0.0277 | 0.0138 | 151.3834 | 0.1404
VMD–FABP | 0.7026 | 0.0294 | 0.0266 | 0.0132 | 197.1852 | 0.2340
CEEMDAN–VMD–FABP | 0.5131 | 0.0215 | 0.0198 | 0.0099 | 307.5092 | 0.2964

**Table 4.** Prediction errors of the daily maximum temperature time series based on different algorithms (two steps ahead).

Model | RMSE | NRMSE | MAPE | SMAPE
---|---|---|---|---
RBF | 2.6741 | 0.1119 | 0.1051 | 0.0517
ANFIS | 3.4725 | 0.1453 | 0.1317 | 0.0644
BP | 2.4105 | 0.1009 | 0.0912 | 0.0454
FABP | 2.4032 | 0.1006 | 0.0924 | 0.0456
CEEMDAN–FABP | 0.9292 | 0.0389 | 0.0346 | 0.0172
VMD–FABP | 0.7240 | 0.0303 | 0.0276 | 0.0138
CEEMDAN–VMD–FABP | 0.6910 | 0.0289 | 0.0262 | 0.0130

**Table 5.** Prediction errors of the daily maximum temperature time series based on different algorithms (three steps ahead).

Model | RMSE | NRMSE | MAPE | SMAPE
---|---|---|---|---
RBF | 3.3242 | 0.1391 | 0.1286 | 0.0628
ANFIS | 3.4715 | 0.1453 | 0.1330 | 0.0651
BP | 3.1497 | 0.1318 | 0.1211 | 0.0589
FABP | 3.1435 | 0.1315 | 0.1194 | 0.0587
CEEMDAN–FABP | 1.1266 | 0.0471 | 0.0420 | 0.0209
VMD–FABP | 0.9105 | 0.0381 | 0.0345 | 0.0172
CEEMDAN–VMD–FABP | 0.8692 | 0.0364 | 0.0333 | 0.0166

**Table 6.** Prediction errors of the daily maximum temperature time series based on different algorithms (five steps ahead).

Model | RMSE | NRMSE | MAPE | SMAPE
---|---|---|---|---
RBF | 3.4960 | 0.1463 | 0.1353 | 0.0662
ANFIS | 3.4408 | 0.1440 | 0.1333 | 0.0654
BP | 3.3497 | 0.1402 | 0.1295 | 0.0633
FABP | 3.3595 | 0.1406 | 0.1285 | 0.0632
CEEMDAN–FABP | 1.5822 | 0.0662 | 0.0598 | 0.0297
VMD–FABP | 1.2500 | 0.0523 | 0.0487 | 0.0242
CEEMDAN–VMD–FABP | 0.9864 | 0.0413 | 0.0370 | 0.0183

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Xu, X.; Ren, W.
A Hybrid Model Based on a Two-Layer Decomposition Approach and an Optimized Neural Network for Chaotic Time Series Prediction. *Symmetry* **2019**, *11*, 610.
https://doi.org/10.3390/sym11050610
