Research on a Novel Hybrid Decomposition–Ensemble Learning Paradigm Based on VMD and IWOA for PM2.5 Forecasting

The non-stationarity, nonlinearity and complexity of PM2.5 series make PM2.5 prediction difficult. To improve prediction accuracy, many forecasting methods have been developed. However, these methods usually neglect the importance of data preprocessing and are limited by the use of a single forecasting model. Therefore, this paper proposes a new hybrid decomposition–ensemble learning paradigm based on variational mode decomposition (VMD) and an improved whale optimization algorithm (IWOA) to address complex nonlinear environmental data. First, VMD is employed to decompose the PM2.5 sequences into a set of variational modes (VMs) with different frequencies. Then, an ensemble method based on four individual forecasting approaches is applied to forecast all the VMs, and the IWOA is applied to optimize the ensemble weight coefficients; the final forecasting results are obtained by reconstructing the refined sequences. To verify and validate the proposed learning paradigm, four daily PM2.5 datasets collected from the Jing-Jin-Ji area of China are chosen as test cases for the empirical research. The experimental results indicate that the proposed learning paradigm achieves the best results in all cases and on all metrics.


Introduction
Environmental pollution is one of the most serious issues facing humankind today, and badly polluted air can cause great damage to economies and people's lives. According to the World Health Organization (WHO), almost 3 million children die every year from a range of problems caused by air pollution [1]. With the progress of industrialization and urbanization, air pollution is becoming increasingly serious and hazy weather has grown rapidly, especially in developing countries. In recent years, foggy weather in many areas of China has become increasingly serious. Since the beginning of 2013, sustained haze has turned Beijing-Tianjin-Hebei (the Jing-Jin-Ji region) into a heavily polluted region. Fine particulate matter is one of the key contributors to air pollution and hazy weather, and it carries many adverse health effects, such as respiratory diseases and premature death [2].
Recently, an increasing number of countries have set up environmental monitoring systems, which provide a large amount of PM monitoring data. However, PM data are affected by many factors and fluctuate greatly over time, making them very challenging to predict. Therefore, many models and tools have been developed to predict PM2.5 and other air pollutant concentrations more accurately. These models can be generally categorized into physical, statistical and hybrid models. Physical methods simulate the processes of emission, diffusion and transfer of pollutants through meteorological, emission and chemical models [3][4][5]. Statistical methods, which mainly include the autoregressive integrated moving average (ARIMA) model, artificial neural networks (ANN) and multiple linear regression (MLR) [2,6-9], have been broadly applied to pollutant concentration prediction. For instance, Ref. [10] proposed a forecasting model based on MLR and bivariate correlation analysis to predict the annual and seasonal concentrations of PM10 and PM2.5. Ref. [11] studied the effects of meteorological factors on ultrafine particulate matter (UFP) and PM10 concentrations under traffic congestion conditions using the ARIMA model. In practice, however, most pollutant sequences are non-linear and irregular, which may involve non-linear dynamical systems, so these linear algorithms remain problematic for predicting PM concentrations. By contrast, artificial neural network models can overcome the limitations of traditional linear models and handle nonlinear problems well. Ref. [12] developed an extended model based on the long short-term memory neural network; the model takes the spatiotemporal correlation into account to predict pollutant concentrations and shows excellent performance.
Ref. [13] applied the cuckoo search (CS) algorithm to optimize a BPNN to predict PM concentrations in four major cities in China.
Recently, to predict air quality more accurately, many hybrid models have been proposed based on ensemble learning paradigms, data preprocessing techniques and heuristic algorithms. For example, Ref. [14] developed a new prediction model based on the multidimensional k-nearest neighbor model and the ensemble empirical mode decomposition (EEMD) method. Ref. [15] developed a novel hybrid model based on the wavelet transform (WT), stacked autoencoder (SAE) and long short-term memory (LSTM) to simulate PM2.5 at six sites in China. Ref. [16] developed a model based on a combination of WT and a neural network algorithm to decompose the PM2.5 data, perform sub-series prediction, and finally reconstruct the data. Ref. [17] proposed a novel hybrid PM2.5 prediction model that includes a new pre-processing method (wavelet transform and variational mode decomposition) and uses a BPNN optimized by the differential evolution (DE) algorithm to predict each decomposed sequence. The drawback of such decomposition-based prediction models is that a single method is used to predict all signal sequences. Since different decomposed sequences have different characteristics, a single model cannot fit all of them [18]. An ensemble prediction model that integrates multiple single models can therefore avoid the shortcomings of any single model and further improve prediction accuracy. Furthermore, many heuristic algorithms have been used to optimize the weight coefficients of ensemble models. Ref. [19] developed an ensemble model based on differential evolution (DE) to determine the optimum weights for electricity demand forecasting. Ref. [20] employed the cuckoo search algorithm (CSO) to optimize the weight coefficients of an ensemble model. The whale optimization algorithm (WOA), proposed by Ref. [21], is a novel heuristic algorithm that imitates whale behavior in nature.
However, the WOA encounters problems such as becoming stuck in local optima and converging slowly when solving more complex problems. Thus, an improved whale optimization algorithm (IWOA) is proposed in this study to strengthen the local search capability of the WOA.
Based on the above analysis, and considering the criticality of data pre-processing and the limitations of any single prediction model, a new hybrid decomposition-ensemble learning paradigm based on variational mode decomposition (VMD) and the improved whale optimization algorithm (IWOA) is introduced. First, the original PM sequence is decomposed into different VM sequences using VMD. Then, a weight-determined ensemble model, whose weights are optimized by the IWOA, is employed to forecast each decomposed component. Finally, the prediction subsets are assembled into the final prediction result.
The paper is structured as follows: in Section 2, several single forecasting models, the ensemble prediction theory and VMD are introduced. In Section 3, the proposed decomposition-ensemble model is presented. In Section 4, the study areas and the evaluation criteria are described. In Section 5, the comparative results of the proposed model and other models are presented. Finally, in Section 6, the conclusions and the important results of this paper are summarized.

Related Methodology
The four individual forecasting models, the VMD and the IWOA employed in the suggested ensemble model are described as follows.

Four Individual Prediction Methods
In recent years, many prediction models have been developed and applied to PM2.5 concentration prediction. This paper uses four popular methods, BPNN, ANFIS, ANFIS-FCM and GMDH, which show good performance in PM2.5 prediction, to construct the ensemble models.

The Back Propagation Neural Network (BPNN)
The BPNN is a multi-layer feed-forward neural network that is widely used in many fields. The BPNN algorithm seeks the parameters with the minimum error, i.e., it minimizes the error between the output value and the actual value by following the negative gradient direction. The process of the BPNN is mainly divided into update and learning stages, with the weight update given by

Δw_ij(t+1) = −η ∂E/∂w_ij + α Δw_ij(t)

where η denotes the learning speed, w_ij represents the weight between nodes i and j, α denotes the impulse (momentum) parameter, E denotes the error hypersurface and t denotes the current iteration step.
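As a hedged illustration (not the authors' code), the update stage above can be sketched as a negative-gradient step plus an impulse term; the names eta and alpha mirror the learning speed and impulse parameter:

```python
def bp_update(w, grad, delta_prev, eta=0.1, alpha=0.9):
    """One BPNN weight update: delta_w(t+1) = -eta*dE/dw + alpha*delta_w(t)."""
    delta = -eta * grad + alpha * delta_prev
    return w + delta, delta

# toy error surface E(w) = 0.5*w^2, so dE/dw = w
w, delta = 1.0, 0.0
for _ in range(200):
    w, delta = bp_update(w, grad=w, delta_prev=delta)
# w is driven toward the minimum at 0
```

The impulse term reuses the previous update direction, which damps oscillation on steep error surfaces.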

The Adaptive Network Based Fuzzy Inference System (ANFIS)
Ref. [22] proposed the ANFIS, which combines fuzzy systems and neural networks. It exploits the advantages of both and makes up for the shortcomings of each. ANFIS can form an adaptive neuro-fuzzy controller by using a neural network learning mechanism to automatically retrieve rules from the input and output sample data. Through offline training and online learning algorithms, it can create fuzzy inferences and control the self-adjustment of rules, thereby making the system adaptive, self-organizing and self-learning. ANFIS comprises a five-layer network, and each layer contains several node functions.
Layer 1: This is the fuzzification layer. Each node in layer 1 is adaptive, has a node function, and generates the membership of a fuzzy set:

O_1,i = µ_Ai(x)

where x denotes the input to node i, and A_i denotes the linguistic label associated with this node. Here µ denotes the membership function of A_i, described by a generalized Gaussian function.
Layer 2: Every node in this layer is a fixed node labeled ∏, which performs a product (t-norm) operation:

w_i = µ_Ai(x) · µ_Bi(y)

Layer 3: At each node in this layer, the ratio of the firing strength of the ith rule to the sum of the firing strengths of all rules is calculated:

w̄_i = w_i / (w_1 + w_2)

The outputs of this layer are collectively referred to as normalized firing strengths.
Layer 4: In this layer, the contribution of the ith rule to the overall output is calculated:

O_4,i = w̄_i f_i = w̄_i (a_i x + b_i y + c_i)

where w̄_i is the output of layer 3 and (a_i, b_i, c_i) is the parameter set.
Layer 5: In this layer, the single node ∑ calculates the final output of the adaptive neuro-fuzzy inference system as the sum of all incoming signals:

O_5 = ∑_i w̄_i f_i

The ANFIS with Fuzzy C-Means Clustering (ANFIS-FCM)
The FCM is a data clustering method in which each data point is assigned a membership level in every cluster. FCM divides a collection of n vectors x_j (j = 1, 2, . . . , n) into c fuzzy groups and finds a clustering center in each group such that the cost function of the similarity measure is minimized. The stages of the FCM algorithm are briefly as follows. First, the cluster centers c_i (i = 1, 2, . . . , c) are chosen randomly from the n points (x_1, x_2, . . . , x_n). Second, the membership matrix U is calculated using

u_ij = 1 / ∑_{k=1}^{c} (d_ij / d_kj)^{2/(m−1)}

where d_ij = ‖c_i − x_j‖ is the Euclidean distance between the ith cluster center and the jth data point, and m is the fuzziness index. Third, the cost function is computed as

J = ∑_{i=1}^{c} ∑_{j=1}^{n} u_ij^m d_ij^2

The process stops when J falls below a certain threshold. Finally, the new fuzzy clustering centers c_i (i = 1, 2, . . . , c) are calculated using

c_i = ∑_{j=1}^{n} u_ij^m x_j / ∑_{j=1}^{n} u_ij^m
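The FCM stages above (membership update, cost, center update) can be sketched in NumPy; the function name, defaults, and the optional explicit initialization are illustrative, not from the paper:

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0, init=None):
    """Fuzzy c-means: returns cluster centers and membership matrix U (n points x c clusters)."""
    rng = np.random.default_rng(seed)
    if init is not None:
        centers = np.asarray(init, dtype=float)
    else:
        centers = X[rng.choice(len(X), c, replace=False)].astype(float)
    for _ in range(n_iter):
        # d[j, i] = Euclidean distance between point j and center i
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        # u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0)), axis=2)
        Um = (U ** m).T                                   # (c, n)
        new_centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        if np.linalg.norm(new_centers - centers) < tol:   # cost has stabilized
            centers = new_centers
            break
        centers = new_centers
    return centers, U
```

On two well-separated 1-D blobs the centers converge near the blob means, and each point's largest membership identifies its blob.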

The Group Method of Data Handling (GMDH)
The GMDH is a family of computer-based inductive algorithms for the mathematical modeling of multi-parameter data sets. It is characterized by fully automatic structural and parametric optimization of the model. The GMDH can be used for data mining, knowledge discovery, prediction, modeling of complex systems, optimization and pattern recognition [23]. The GMDH algorithm uses an inductive procedure to sort increasingly complex polynomial models and select the best solution by an external criterion.
The GMDH model generally has multiple inputs and one output, and is a subset of the components of the base function:

Y(x_1, . . . , x_n) = a_0 + ∑_{i=1}^{m} a_i f_i

where a denotes the coefficients, f_i denotes the elementary functions that depend on different inputs, and m represents the number of base function components. The base function is called the partial model; the GMDH considers various subsets of this function and thus finds the optimal solution. The coefficients of the model are first estimated using the least squares method. Then, the number of partial components of the model is gradually increased. Finally, the GMDH algorithm finds the model structure of optimal complexity by minimizing the external criterion. This process is called self-organization of the model.
The main idea of the GMDH neural network learning algorithm is as follows: a series of candidate neurons is generated by cross-combining the input units of the system, and the mean square error of the output of each neuron is calculated; then, the neurons whose mean square error is smaller than a predetermined threshold are selected and used as the input units of the next generation; this process of survival of the fittest and gradual evolution is repeated until the new generation of neurons is no better than the previous generation.
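One GMDH generation can be sketched as follows: every pair of inputs yields a quadratic partial model fitted by least squares, and candidates are ranked by their error on an external (validation) set. This is an illustrative simplification, not the paper's implementation:

```python
import numpy as np
from itertools import combinations

def _design(xi, xj):
    # quadratic Ivakhnenko partial model: 1, xi, xj, xi*xj, xi^2, xj^2
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi ** 2, xj ** 2])

def gmdh_layer(X_tr, y_tr, X_val, y_val, keep=3):
    """Fit all pairwise partial models; keep the best by external (validation) RMSE."""
    candidates = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        coef, *_ = np.linalg.lstsq(_design(X_tr[:, i], X_tr[:, j]), y_tr, rcond=None)
        pred = _design(X_val[:, i], X_val[:, j]) @ coef
        rmse = np.sqrt(np.mean((pred - y_val) ** 2))
        candidates.append((rmse, (i, j), coef))
    candidates.sort(key=lambda c: c[0])        # survival of the fittest
    return candidates[:keep]
```

The surviving candidates' outputs would feed the next generation; iteration stops when a new generation no longer improves the external criterion.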

Variation Mode Decomposition (VMD)
The VMD is a non-recursive signal processing algorithm that decomposes the original signal into a family of modes, each with a specific bandwidth in the frequency domain [24]. During the decomposition process, each mode is compacted around a center pulsation. Determining the bandwidth of each mode requires three steps, after which a constrained variational problem can be formulated. The details of VMD are described in [25].
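For intuition, a compact and deliberately simplified VMD sketch is given below: modes are updated in the Fourier domain by Wiener-like filters centered at the current center frequencies, which are in turn updated as spectral centroids. It omits the mirror extension and other refinements of the full algorithm in [24]; parameter defaults are illustrative:

```python
import numpy as np

def vmd(signal, K=3, alpha=200.0, tau=0.0, n_iter=500, tol=1e-7):
    """Simplified VMD: returns (modes, center frequencies in cycles/sample)."""
    x = np.asarray(signal, dtype=float)
    N = len(x)
    freqs = np.fft.fftfreq(N)
    f_hat = np.fft.fft(x)
    f_hat[N // 2 + 1:] = 0.0                     # work on the non-negative half-spectrum
    u_hat = np.zeros((K, N), dtype=complex)
    omega = np.linspace(0.01, 0.25, K)           # initial center pulsations
    lam = np.zeros(N, dtype=complex)             # Lagrangian multiplier (idle when tau=0)
    for _ in range(n_iter):
        u_prev = u_hat.copy()
        for k in range(K):
            rest = u_hat.sum(axis=0) - u_hat[k]
            # Wiener-like filter centered at omega[k]
            u_hat[k] = (f_hat - rest + lam / 2) / (1.0 + 2.0 * alpha * (freqs - omega[k]) ** 2)
            p = np.abs(u_hat[k, :N // 2]) ** 2
            omega[k] = np.sum(freqs[:N // 2] * p) / (np.sum(p) + 1e-12)
        lam = lam + tau * (f_hat - u_hat.sum(axis=0))
        diff = np.sum(np.abs(u_hat - u_prev) ** 2) / (np.sum(np.abs(u_prev) ** 2) + 1e-12)
        if diff < tol:
            break
    modes = np.empty((K, N))
    for k in range(K):
        full = u_hat[k].copy()
        full[N // 2 + 1:] = np.conj(full[1:N // 2][::-1])  # restore Hermitian symmetry
        modes[k] = np.real(np.fft.ifft(full))
    return modes, omega
```

On a two-tone test signal the two recovered modes separate the tones and their sum closely reconstructs the input.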

Optimization Algorithm-IWOA
An improved heuristic algorithm, the IWOA, is developed to enhance the performance of the ensemble model; it determines the optimal weight coefficients of the ensemble model. The basic whale optimization algorithm (WOA), chaotic local search (CLS), and the WOA modified by CLS are described below.

Overview of the Whale Optimization Algorithm
The WOA, put forward by S. Mirjalili in 2016, simulates the hunting mechanism of humpback whales, called the bubble-net feeding method. Humpback whales create distinctive bubbles by circling around their prey in a circular or "9"-shaped path during foraging. First, a humpback whale forms a spiral of bubbles 10-15 m below its prey while swimming up toward the surface; then it encircles the prey to prevent it from escaping and catches it [21]. The mathematical principles of this behavior are described as follows:
(a) Encircling prey: The humpback whale orbits its prey and updates its position toward the best search agent as the number of iterations increases. This can be depicted mathematically as

D = |C · X*(t) − X(t)|,  X(t+1) = X*(t) − A · D

where X* denotes the position vector of the best solution obtained so far, X is the position vector, and A and C are coefficient vectors given by A = 2a · r − a and C = 2r, with a decreasing linearly from 2 to 0 over the iterations and r a random vector in [0, 1].
(b) Bubble-net attacking: The bubble-net behavior of the humpback whale is modeled by the following two methods.
1. Shrinking encircling mechanism: This predation mechanism places the new agent position anywhere between the agent's original position and the position of the current best agent. The values of A in this process lie in the interval [−1, 1].
2. Spiral updating position: A spiral equation between the positions of the prey and the whale mimics the helix-shaped movement of the humpback whale:

X(t+1) = D' · e^{bl} · cos(2πl) + X*(t)

where D' = |X*(t) − X(t)| is the distance between the whale and the prey, b is a constant defining the shape of the logarithmic spiral, and l is a random number in [−1, 1].
The probability p, a random number in [0, 1], is used to select between the shrinking encircling mechanism and the spiral-shaped path during the optimization process.
(c) Search for prey: During the search phase, the variation of the vector A can also be used to search for prey at random. To force the whale to move away from a reference whale, A is used with random values greater than 1 or less than −1. The mathematical model of this stage is

D = |C · X_rand − X|,  X(t+1) = X_rand − A · D

where X_rand is a random position vector (a random whale) selected from the current population.
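The three mechanisms above can be combined into a compact WOA sketch; this is a generic implementation under the usual parameter choices (a decreasing linearly from 2 to 0, spiral constant b = 1), not the paper's exact code:

```python
import numpy as np

def woa(f, dim, n_whales=30, max_iter=200, lb=-10.0, ub=10.0, seed=0):
    """Whale optimization algorithm minimizing f over [lb, ub]^dim."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_whales, dim))
    fitness = np.array([f(x) for x in X])
    best = X[fitness.argmin()].copy()
    best_fit = fitness.min()
    b = 1.0                                      # logarithmic-spiral shape constant
    for t in range(max_iter):
        a = 2.0 - 2.0 * t / max_iter             # decreases linearly from 2 to 0
        for i in range(n_whales):
            A = 2.0 * a * rng.random() - a
            C = 2.0 * rng.random()
            if rng.random() < 0.5:               # encircling / search for prey
                if abs(A) < 1:
                    ref = best                   # exploit: shrink toward the best whale
                else:
                    ref = X[rng.integers(n_whales)]  # explore: move w.r.t. a random whale
                D = np.abs(C * ref - X[i])
                X[i] = ref - A * D
            else:                                # spiral updating position
                l = rng.uniform(-1.0, 1.0)
                D = np.abs(best - X[i])
                X[i] = D * np.exp(b * l) * np.cos(2.0 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            fit_i = f(X[i])
            if fit_i < best_fit:
                best_fit, best = fit_i, X[i].copy()
    return best, best_fit
```

On a simple sphere function the best fitness shrinks rapidly as a decays and the population contracts around the best agent.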

IWOA
As mentioned earlier, the WOA was recently proposed and has been widely used in many fields. However, it also has shortcomings, such as slow convergence in the late stage and a tendency to fall into local optima. Chaotic local search (CLS), based on chaotic sequences, can effectively avoid local optima and converge toward the global optimum. Blending the WOA with CLS improves global convergence and helps prevent falling into local solutions. To accelerate the local convergence of the WOA, the chaotic local search is applied: after each WOA iteration finds the current best solution, CLS searches for a better solution close to it. The logistic equation applied in CLS is defined as

cx_{i+1} = µ · cx_i · (1 − cx_i)

where cx_i denotes the ith chaotic variable and i is the iteration number. When µ = 4, the above equation exhibits chaotic dynamics, with cx_i ranging in (0, 1) and cx_0 ∉ {0.25, 0.5, 0.75}. For more details about CLS, please refer to [26].
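A minimal sketch of the chaotic local search step, using the logistic map above to probe a small neighborhood of the current best solution (the radius parameter and function name are illustrative assumptions):

```python
import numpy as np

def chaotic_local_search(f, best, radius=0.5, n_steps=50, cx0=0.7):
    """Refine `best` by sampling candidates driven by the logistic map (mu = 4)."""
    cx = cx0                                     # cx0 must avoid {0.25, 0.5, 0.75}
    refined, refined_fit = best.copy(), f(best)
    for _ in range(n_steps):
        cx = 4.0 * cx * (1.0 - cx)               # logistic map in the chaotic regime
        cand = best + radius * (2.0 * cx - 1.0)  # map chaos variable to [-radius, radius]
        cand_fit = f(cand)
        if cand_fit < refined_fit:               # accept only improvements
            refined_fit, refined = cand_fit, cand.copy()
    return refined, refined_fit
```

Because the logistic sequence is dense in (0, 1), the candidates sweep the neighborhood of the best solution without getting trapped in a regular pattern.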
The pseudo-code of the IWOA algorithm is outlined as follows:

Algorithm: Improved whale optimization algorithm (IWOA)
Objective: minimize or maximize the objective function f(x)
Parameters: Maxiter - the maximum number of iterations; I - a population pop; p - the switch probability

Decomposition-Ensemble Learning Paradigm
In this part, we suggest a new hybrid decomposition-ensemble learning paradigm that integrates the VMD method, several prediction models and IWOA optimization. The main process of the developed decomposition-ensemble paradigm is shown in Figure 1. The three main steps of the ensemble model are as follows:
- Step 1: Decomposition process: First, the features and noise of the original pollution data need to be cleaned and processed so that an effective prediction model can be built. In this study, VMD is used to decompose the original pollution datasets into a set of VMs and a residual component with corresponding frequencies.
- Step 2: Ensemble forecasting and IWOA optimization: The decomposed sequences obtained via the VMD process have different properties, so a single prediction method can no longer effectively adapt to all the characteristics of the VMs. Thus, the ensemble strategy is adopted: M types of prediction methods are combined with a suitable selection of weight coefficients, and the results of the multiple models are added together. Assume that E_Model (Model ∈ {BPNN, ANFIS, ANFIS-FCM, GMDH}) is the prediction result of each VM obtained by the corresponding method. The IWOA-optimized ensemble output can then be expressed as

Ê = ∑_{i=1}^{N} w_i E_i

where w_i (i = 1, 2, . . . , N) is the weight coefficient of model i, with w_i ∈ [−2, 2] as the range of the weight coefficients according to the NNCT [27]. To obtain the optimal weight coefficients w_i, the IWOA was employed to search for the optimal ensemble weights. Before optimization, the objective function needed to be defined; in this paper it is set by Equation (18). The optimization process terminates when the predefined minimum value of the objective function or the maximum number of iterations is reached. In the experiments, the search boundary of the IWOA is set to [−2, 2], the dimension is 5 and the maximum number of iterations is 500.
- Step 3: Assemble the forecasting results: Through the above steps, the prediction results of all the VMs are obtained and then combined to obtain the final result.
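To make Step 2 concrete, the sketch below combines several models' forecasts of one VM with weights constrained to [−2, 2]. The paper searches the weights with the IWOA; as a hedged stand-in, this sketch uses a clipped least-squares fit of the same RMSE objective on the weight-tuning subset:

```python
import numpy as np

def fit_ensemble_weights(preds, target, w_min=-2.0, w_max=2.0):
    """preds: (n_models, T) forecasts of one VM; target: (T,) actual VM values.
    Least-squares stand-in for the IWOA weight search; weights clipped to the NNCT range."""
    w, *_ = np.linalg.lstsq(preds.T, target, rcond=None)
    return np.clip(w, w_min, w_max)

def ensemble_forecast(preds, w):
    """Weighted sum of the individual forecasts: E_hat = sum_i w_i * E_i."""
    return w @ preds
```

In the paper's setting the same weighted sum is kept, but the weights come from the IWOA minimizing the RMSE fitness rather than from a closed-form fit.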

Data Description
In this paper, PM2.5 concentration data from the Ministry of Environmental Protection of the People's Republic of China (http://www.mep.gov.cn/) were collected to verify the performance of the proposed model. The selected daily PM2.5 concentration samples are for Beijing, Tianjin, Baoding and Shijiazhuang from 1 August 2015 to 31 August 2017. The total number of daily PM2.5 concentration data points for each city was 763. In each experiment, the first 572 data points (approximately 75% of the total) of each VM were used as the training subset and the rest as the test subset. When all predicted VMs were integrated into the overall result, the 189 data points (about 25%) in the test results were used for optimizing the weights of the ensemble model, and the rest were used for model testing.

Model Assessment Standards
To effectively assess the prediction performance of the developed model, the popular error criteria shown in Table 1 were employed to evaluate the prediction capacity of the developed model; smaller values denote better prediction performance.

Table 1. Model assessment criteria.
RMSE (the root mean-square forecast error): RMSE = sqrt((1/N) ∑_{n=1}^{N} (y_n − ŷ_n)^2)
MAE (the average absolute error): MAE = (1/N) ∑_{n=1}^{N} |y_n − ŷ_n|
TIC (Theil's inequality coefficient): TIC = RMSE / (sqrt((1/N) ∑_{n=1}^{N} y_n^2) + sqrt((1/N) ∑_{n=1}^{N} ŷ_n^2))

Here, y_n and ŷ_n denote the actual and predicted values at time n, respectively, and N denotes the sample size.
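The error criteria named in Table 1 (RMSE, MAE and Theil's inequality coefficient) can be computed directly; a brief sketch with illustrative function names:

```python
import numpy as np

def rmse(y, y_hat):
    """Root mean-square forecast error."""
    return float(np.sqrt(np.mean((np.asarray(y, float) - np.asarray(y_hat, float)) ** 2)))

def mae(y, y_hat):
    """Average absolute error."""
    return float(np.mean(np.abs(np.asarray(y, float) - np.asarray(y_hat, float))))

def tic(y, y_hat):
    """Theil's inequality coefficient (0 means a perfect forecast)."""
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return rmse(y, y_hat) / (np.sqrt(np.mean(y ** 2)) + np.sqrt(np.mean(y_hat ** 2)))
```

All three are zero for a perfect forecast; TIC is scale-free, which makes it convenient for comparing cities with different concentration levels.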

Data Decomposition by VMD
In the proposed VMD-IWOA ensemble model, the original PM2.5 concentration sequence is first decomposed into several independent VMs using VMD. However, too many VMs introduce new problems: during the integrated prediction process, each VM generates estimation errors, so too many VMs cause errors to accumulate and increase the time consumed in each prediction step. To prevent these problems, the VMs were restructured into three VMs and a residual.

The Process of Ensemble Forecast on VMs
The BPNN, ANFIS, ANFIS-FCM and GMDH prediction models were applied to forecast each VM reconstructed in Section 5.1. The ensemble model then integrates the results of the four prediction models on each VM and optimizes the weights of the four prediction results using the IWOA. Before the simulation, the parameters of the four neural network models need to be initialized. The input nodes of each neural network are set to four, the hidden nodes to nine and the output nodes to one. In addition, the rolling single-step forecasting method is applied to the PM2.5 concentration data of the four cities to test predictive performance. The detailed experimental parameters of the four neural networks are shown in Table 2. Table 3 shows the prediction results of the single models and the proposed ensemble model for each VM. To evaluate model performance, the RMSE was utilized as the evaluation index. As can be seen from Table 3, each model performs best on particular VMs. The results for Beijing are as follows: the BPNN provides the lowest RMSE among all single models at VM2 and VM3, while at VM1 and the residual, the GMDH has the lowest RMSE. The results for Tianjin show that the ANFIS presents the best results at VM1, the ANFIS-FCM performs best at VM3, and at VM2 and the residual the GMDH provides the best results. The results for Baoding show that among all the single models, the ANFIS achieves the lowest RMSE at VM1 and the residual, while at VM2 and VM3 the GMDH presents the best results. The results for Shijiazhuang reveal that the GMDH performs better than the others at VM2, VM3 and the residual, while the ANFIS performs best at VM1. Based on the above analysis, each model has its advantages on particular VMs.
A single prediction model cannot be used to predict all the decomposed signals uniformly; the most suitable model should be selected according to the conditions, which suggests that an ensemble model can incorporate the virtues of multiple individual models to overcome their individual limitations. Therefore, this study proposes an ensemble model based on the IWOA to seek the best weight coefficients. The search boundary is set to [−2, 2] based on the NNCT, and the RMSE criterion is used as the fitness function of the IWOA. Table 3 presents the best weights and final results of the ensemble model. Comparison with each single model indicates that the developed ensemble model gives the desired prediction results.
Comparing the ensemble model with the BPNN, ANFIS, ANFIS-FCM and GMDH, the average RMSE over the four cities at VM1 was reduced by 26.10%, 2.62%, 7.80% and 3.97%, respectively; at VM2, by 5.81%, 11.15%, 11.51% and 3.59%; at VM3, by 7.19%, 58.21%, 13.92% and 6.22%; and for the residual, by 17.79%, 33.09%, 22.27% and 7.51%, respectively. Consequently, compared with the single models, the forecasting results of the ensemble model are significantly improved on every VM component.

Model Performance Evaluation and Comparison
To evaluate the proposed ensemble model, three types of model comparison experiments were designed to compare the proposed ensemble model with other individual models, VMD-based models, and existing benchmark models.

Experiment 2: The Comparison between the Ensemble Model and Individual Models
This experiment compared the developed ensemble model with four individual models: BPNN, ANFIS, ANFIS-FCM and GMDH. Table 5 reports the comparative forecasting results of the ensemble model and the single models. From Table 5, it can be seen that the ensemble model achieves the best results in all cases and on all metrics.

Experiment 3: The Comparison between the Proposed Model and the Existing Models
This experiment was conducted to further verify that the suggested hybrid decomposition-ensemble method can effectively improve prediction performance. Several existing models widely used in environmental prediction were applied in comparative studies to assess the suggested model. The existing models include two simple algorithms (i.e., ARIMA and RBFNN) and three hybrid algorithms (i.e., SSA-ENN, EEMD-GRNN and EEMD-WOA-BPNN). The results of the comparative study are given in Table 6 and Figure 4, from which it can be seen that the proposed model achieves the best results. In addition, the error mean and error STD are also used to evaluate the models' accuracy and stability, and the results show that the developed model has higher accuracy and stability than the existing models. Therefore, it can be concluded that the proposed ensemble model can be successfully and effectively employed for PM2.5 concentration prediction. Furthermore, the proposed ensemble model has the following highlights compared to previous works [15,16]: 1. data decomposition; 2. multi-model integrated prediction; and 3. optimized ensemble weight coefficients.

Conclusions
Reliable and precise PM2.5 concentration forecasting is important for air quality early warning and pollution control. Owing to the uncertainty and instability of PM2.5 datasets, the original PM2.5 series is very difficult to forecast accurately, so predicting and simulating PM2.5 reasonably remains a challenging task. In this study, a new hybrid decomposition-ensemble learning paradigm based on variational mode decomposition (VMD) and an improved whale optimization algorithm (IWOA) is proposed to predict the PM2.5 concentration. In this paradigm, the VMD method is employed to decompose the original PM2.5 sequence into several VM series for forecasting. The prediction results show that a single prediction model has limited capability for pollutant concentration prediction and is not appropriate for all VMs. To this end, an ensemble model based on four individual forecasting approaches, BPNN, ANFIS, ANFIS-FCM and GMDH, is proposed to predict all the VM components. Furthermore, to ascertain the best ensemble weight coefficients, an improved whale optimization algorithm, the IWOA, is proposed, and the final forecasting results are achieved by reconstructing the refined sequences. The main contributions of this paper are summarized as follows: (1) a new decomposition-ensemble learning paradigm is developed for PM2.5 concentration forecasting; (2) the VMD technique is adopted to decompose the original PM2.5 series; (3) BPNN, ANFIS, ANFIS-FCM and GMDH are utilized for PM2.5 forecasting; (4) an improved heuristic algorithm, the IWOA, is developed to optimize the weight coefficients of the ensemble model.
To evaluate the developed model, daily PM2.5 sequences from four cities located in the Jing-Jin-Ji area of China were collected as test cases for the comparison study. The comparison results indicate that the developed ensemble model is superior to the comparison models, including four VMD-based models, four individual models, two benchmark models and three existing models. Thus, the developed ensemble model provides effective forecasting ability, especially for highly volatile and irregular data (e.g., PM2.5 concentrations), and can be a powerful tool for decision makers in air quality monitoring and early warning systems.

Conflicts of Interest:
The authors declare no conflict of interest.