Incremental Granular Model Improvement Using Particle Swarm Optimization

This paper proposes an incremental granular model (IGM) based on the particle swarm optimization (PSO) algorithm. An IGM is a combination of linear regression (LR) and a granular model (GM), where the global part calculates the error using LR. However, traditional CFCM clustering presents some problems because the number of clusters generated in each context is the same and a fixed value is used for the fuzzification coefficient. To solve these problems, we optimize the number of clusters and the fuzzification coefficient according to the characteristics of the data, using the PSO algorithm, a nature-inspired optimization method. We evaluate the performance of the proposed method against the existing IGM by comparing their prediction performance on the Boston housing dataset. The Boston housing dataset contains housing price information for Boston, USA, and features 13 input variables and 1 output variable. The prediction results confirm that the proposed PSO-IGM shows better performance than the existing IGM.


Introduction
Various studies have been conducted on complex real-world problems with nonlinear characteristics. A linear regression (LR) method models the linear correlation between a dependent variable and one or more independent variables. Fuzzy inference [1] solves vague and uncertain problems in a way similar to human reasoning. The adaptive neuro-fuzzy inference system (ANFIS) model employs an artificial neural network with adaptation and learning that imitates information processing in the human brain [2]. The auto-regressive integrated moving average (ARIMA) model [3], which combines autoregressive, integrated, and moving-average components, has also been studied and applied to various fields.
Zhang [4] studied the LR method, the ANFIS model, and the ARIMA model to predict blood pressure using classification and regression trees. Krueger [5] proposed a model for predicting semiconductor yield using a linear prediction model. Yahia [6] proposed a model for predicting SAR filtering results in speckle filtering using linear regression analysis. Zhang [7] proposed a model for predicting on-line damping ratio using locally weighted linear regression. Drouard [8] proposed a model to estimate the head posture of a robot using linear regression. Martin [9] proposed a model that predicts graduate student productivity using support vector regression (SVR). Amikhani [10] proposed a model to predict the performance of solar power plants using an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system. Naderloo [11] proposed a model that uses ANFIS to predict crop yields based on various energy inputs. Umrao [12] proposed a model to predict the strength and elastic modulus of heterogeneous sedimentary rocks using ANFIS. Zare [13] proposed a model for predicting groundwater fluctuations using ANFIS and wavelet-ANFIS. Adigizel [14] proposed a model to predict the effect of dust particles on photovoltaic modules using ANFIS. Ordonez [15] proposed … Yifei [49] conducted a study comparing PSO and GA, confirming that PSO achieves superior performance, with faster computation and variable optimization than GA.
In this paper, we propose a method that uses particle swarm optimization (PSO), a nature-inspired optimization algorithm, to optimize the number of clusters and the fuzzification coefficient, which are internal parameters of the CFCM clustering method that models the local part of the IGM. To verify the predictive performance of the proposed method, we conduct an experiment using average housing prices in the Boston area from the Boston housing dataset. The experimental results show that the proposed PSO-IGM method generates an appropriate number of clusters for each context and optimizes the fuzzification coefficient to fit the model. It also shows better predictive performance than the existing IGM. The composition of this paper is as follows. Section 1 explains the research background. Section 2 describes the existing IGM and the proposed method, PSO-IGM. Section 3 uses the Boston housing dataset to predict housing prices in Boston and compare performance. Section 4 discusses the experimental results. Finally, Section 5 concludes and discusses future research.

Incremental Granular Model (IGM)
The incremental granular model is an information granule model proposed by Pedrycz [31] to predict modeling error. The IGM consists of a global part, the LR model, and a local part, a context-based fuzzy C-means (CFCM) clustering-based GM. The global part is modeled by LR to calculate the model error, and the local part compensates for the calculated error with the GM to obtain the final value.

Global Part: Linear Regression (LR)
Arbitrary input-output data are organized in the form {x_k, y_k}, k = 1, 2, ..., n, where x_k represents the input vector and y_k represents the output. Figure 1 shows the LR of the global part, which can be expressed as Equation (1): z_k = r^T x_k.
Here, r^T denotes the coefficients of the LR calculated by the least square error (LSE). The error of the model, which is the result of the LR, is e_k = y_k − z_k, and is expressed by linguistic rules. The error of the model is transformed into the form {x_k, e_k}, a new type of input-error data to be used in the granular model (GM) of the local part. The GM, which models the local part of the incremental granular model, is designed with the CFCM clustering method; CFCM clustering is a modification of the conventional fuzzy C-means clustering method and is described in the next subsection.
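The global part above can be sketched in a few lines. The data here are synthetic stand-ins (not from the paper); the least-squares fit and residual follow the definitions z_k = r^T x_k and e_k = y_k − z_k.

```python
import numpy as np

# Hypothetical data: n samples, d input features, one output.
rng = np.random.default_rng(0)
X = rng.random((100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(100)

# Global part: least-squares linear regression z_k = r^T x_k (with bias term).
Xb = np.hstack([X, np.ones((len(X), 1))])   # append a bias column
r, *_ = np.linalg.lstsq(Xb, y, rcond=None)  # LSE coefficients
z = Xb @ r                                  # LR output z_k
e = y - z                                   # modeling error e_k = y_k - z_k

# The local GM is then trained on the new input-error pairs {x_k, e_k}.
```

The residual vector `e` is exactly the input-error data handed to the local part.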

Context-Based Fuzzy C-Means (CFCM) Clustering
The context-based fuzzy C-means (CFCM) clustering method proposed by Pedrycz [24] is the clustering method used in the local part of a granular model (GM). CFCM clustering can cluster the information granules more precisely by considering the characteristics of the output variables as well as the input variables. Figure 2 shows the difference between the general fuzzy C-means (FCM) clustering method and the CFCM clustering method. Given input-error data, the FCM clustering method creates clusters using only the initial center values and the Euclidean distance between the cluster centers and the data. The CFCM clustering method, on the other hand, also considers the characteristics of the output variable. In Figure 2a, the input-error data are all the same color. In Figure 2b, however, the input-error data have different colors because the characteristics of the output variable are considered. Accordingly, where FCM generates only two clusters, CFCM generates three clusters with better accuracy.

The procedure of the CFCM clustering method is as follows.
[Step 1] Set the fuzzification coefficient m (1 < m < ∞) and the number of clusters c (2 < c < n).
[Step 2] Specify the initial partition matrix U, the threshold value ε, and the number of iterations.
[Step 3] Compute the center of each cluster c_i (i = 1, 2, ..., c) using the membership matrix U and Equation (3).
[Step 4] Update the membership matrix U using the cluster centers c_i and Equation (4) [24]. Here, f_j represents the degree of membership of x_j in the generated cluster. In other words, the linguistic form defined on the output variable can be represented by a fuzzy set A, A : Y → [0, 1], which is computed using a fuzzy equalization algorithm. Then f_j = A(y_j), j = 1, 2, ..., n, represents the degree to which y_j belongs to A.
[Step 5] If ||J_r − J_{r+1}|| ≤ ε is satisfied, the procedure stops. If not, return to Step 3.
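As a rough sketch of the procedure above (the function name, initialization scheme, and example data are illustrative, not from the paper), the CFCM update differs from plain FCM only in that each sample's memberships sum to its context value f_j rather than to 1:

```python
import numpy as np

def cfcm(X, f, c=3, m=2.0, iters=50, eps=1e-5, seed=0):
    """Sketch of context-based fuzzy C-means clustering.

    X : (n, d) input data; f : (n,) context membership f_j of each sample's
    output; c : number of clusters; m : fuzzification coefficient (m > 1).
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=c, replace=False)]
    U = None
    for _ in range(iters):
        # Distance of every sample to every center, shape (c, n).
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        # CFCM membership update: each column of U sums to f_j instead of 1.
        ratio = (d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))
        U = f / ratio.sum(axis=1)
        # Center update: membership-weighted mean of the samples.
        new_centers = (U ** m) @ X / (U ** m).sum(axis=1, keepdims=True)
        if np.linalg.norm(new_centers - centers) <= eps:
            centers = new_centers
            break
        centers = new_centers
    return centers, U

# Example: 2-D points under one context with uniform membership 0.5.
pts = np.random.default_rng(1).random((20, 2))
centers, U = cfcm(pts, np.full(20, 0.5), c=3)
```

Setting f_j = 1 for every sample recovers ordinary FCM, which makes the role of the context explicit.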
Granular Model (GM)
Figure 3 shows the structure of the GM using the CFCM clustering method. The premise parameter values are included in the first-layer nodes as the values obtained with CFCM clustering. The consequent values are W, the contexts created in the output space. Each context has a triangular shape with a lower limit y_lower, a modal value y, and an upper limit y_upper; the respective equations are as follows. Assuming a triangular fuzzy set for the context, the triangular fuzzy number can be expressed as Equation (8) [22,23]. Here, the operation in Equation (8) is performed on a set of fuzzy numbers, and ξ_k is the sum of the activation level values generated in the kth context. The activation level value of each context is calculated as follows, where e_k represents the modeling error value obtained from the LR. Finally, the predicted value of the IGM is calculated by combining the modeling error value obtained from the LR with the activation value obtained from the GM [31]. Figure 4 shows the structure of the GM: the triangular fuzzy numbers are calculated through CFCM clustering in the first layer, and the values are combined to obtain the final value.
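A minimal sketch of how triangular contexts over the error space yield membership values f_j and a combined local output. The context layout (evenly spaced, overlapping triangles) and the helper names are assumptions for illustration, not the paper's exact construction:

```python
import numpy as np

def triangular(e, lower, modal, upper):
    """Membership of e in a triangular fuzzy set (lower, modal, upper)."""
    return np.maximum(np.minimum((e - lower) / (modal - lower),
                                 (upper - e) / (upper - modal)), 0.0)

# Hypothetical modeling errors from the global LR part.
e = np.linspace(-1.0, 1.0, 9)

# p overlapping triangular contexts spanning the error range.
p = 3
nodes = np.linspace(e.min(), e.max(), p + 2)
contexts = [(nodes[t], nodes[t + 1], nodes[t + 2]) for t in range(p)]

# F[t, j] is the membership f_j of error e_j in context t; a simple local
# output is the membership-weighted combination of the context modal values.
F = np.array([triangular(e, *ctx) for ctx in contexts])
modal = np.array([ctx[1] for ctx in contexts])
gm_out = (F * modal[:, None]).sum(axis=0) / np.maximum(F.sum(axis=0), 1e-12)
```

An error lying exactly at a context's modal value is reproduced exactly; errors between modal values are interpolated by the overlapping memberships.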

Particle Swarm Optimization-Based Incremental Granular Model (PSO-IGM)
The problem with the existing incremental granular model is that the same number of clusters is created for each context. This results in excess clusters for some contexts and reduces the prediction performance of the model. To solve this problem, we propose an IGM that optimizes the number of clusters and the fuzzification coefficient using particle swarm optimization.

Particle Swarm Optimization (PSO)
The particle swarm optimization (PSO) algorithm, proposed by Kennedy and Eberhart, is a nature-inspired algorithm based on the social behavior patterns of biological communities rather than the evolutionary mechanism of natural selection [39–50]. PSO finds the optimal solution by mimicking the behavioral habits of animals such as birds, fish, bees, and ants. In this method, several particles are dispersed in the search space and repeatedly adjust their positions toward better solutions, so that the swarm gradually converges toward the optimal solution. PSO is a heuristic method, a kind of computational optimization technique. Unlike conventional methods, it does not require a specific termination condition such as a convergence value; it terminates after a predetermined number of iterations.
The PSO determines the position X^{i+1}_k at the next time step using the position P^i_k = (p_k1, p_k2, ..., p_kn) of the best solution experienced by the particle and the position G^i = (g_1, g_2, ..., g_n) of the best solution experienced by the swarm. The principle of PSO is shown in Figure 5, and the procedure is as follows.
[Step 1] Set the initial position vector X^0_k, k = 1, 2, ..., n, and the velocity vector V^0_k using random numbers for all particles, and set each X^pbest_k to its initial position vector.
[Step 2] Set the X^pbest_k with the minimum cost function as the global optimum position vector X^gbest of the whole swarm.
[Step 3] Update the position vector X^i_k and the velocity vector V^i_k of each particle using the following formulas [43,44].
[Step 4] For each particle, if the cost function value J(X^i_k) at the current position is better than J(X^pbest_k), replace X^pbest_k with X^i_k.
[Step 5] Over the whole swarm, if J(X^i_k) has a better cost function value than J(X^gbest), replace X^gbest with X^i_k.
[Step 6] Stop when a solution with a satisfactory cost function is obtained; otherwise, iterate from Step 3 until the maximum number of generations is reached.
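The six steps above can be sketched as follows. The bounds and the toy cost function are illustrative; the default inertia, damping, and learning coefficients mirror the settings reported later in the experiments (inertia weight 1, damping 0.99, personal coefficient 1.5, global coefficient 2):

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=50, w=1.0, wdamp=0.99,
        c1=1.5, c2=2.0, lo=0.0, hi=1.0, seed=0):
    """Particle swarm optimization sketch following Steps 1-6 above."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_particles, dim))   # [Step 1] random positions
    V = np.zeros((n_particles, dim))              # [Step 1] zero velocities
    pbest = X.copy()
    pcost = np.array([cost(x) for x in X])
    gbest = pbest[pcost.argmin()].copy()          # [Step 2] global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # [Step 3] velocity and position update formulas.
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, lo, hi)
        c = np.array([cost(x) for x in X])
        better = c < pcost                        # [Step 4] update pbest
        pbest[better], pcost[better] = X[better], c[better]
        gbest = pbest[pcost.argmin()].copy()      # [Step 5] update gbest
        w *= wdamp                                # inertia weight damping
    return gbest, float(pcost.min())              # [Step 6] best found

# Toy usage: minimize the 2-D sphere function; the optimum is the origin.
best, best_cost = pso(lambda x: float((x ** 2).sum()), dim=2, lo=-1.0, hi=1.0)
```

With the personal-best and global-best memories, the swarm's best cost is non-increasing across generations even when individual particles overshoot.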


Particle Swarm Optimization-Based Incremental Granular Model (PSO-IGM)
The structure of the incremental granular model based on particle swarm optimization, the nature-inspired optimization algorithm described above, is shown in Figure 6, and the procedure is as follows.
[Step 1] First, linear prediction is performed using the LR that models the global part from the numerical input-output data. Here, the modeling error between the actual desired output and the LR output is obtained, and from it a new type of input-error data is formed from the input-output data.
[Step 2] Contexts are generated in the newly formed input-error space, and each context is calculated using the statistical characteristics of the error distribution.
[Step 3] CFCM clustering is performed in the input space corresponding to each context generated in the error space. Here, the GM that models the local part generates clusters considering the characteristics of the output variable in each context, and the optimal number of clusters and the fuzzification coefficient are selected using the PSO.
[Step 4] Particles are created randomly in the search space, each particle is set as its own pbest, and an initial swarm is created. Each particle has a position vector and a velocity vector.
[Step 5] The generated particles are evaluated using the fitness function. If the fitness value obtained here is better than that of the previous generation, it is set as pbest. The best value among all pbest is set as the gbest of the whole swarm.
[Step 6] The position vector and the velocity vector of every particle are updated based on gbest and pbest, and Steps 4-6 are repeated to obtain the optimal number of clusters and fuzzification coefficient.
[Step 7] The activation values of the clusters generated by the corresponding context are obtained, together with the triangular fuzzy number using the weights through the context.
[Step 8] The output of the global LR is combined with the output of the local GM to obtain the final predicted value.
As shown in the structure of the PSO-based IGM, the number of clusters for each context and the fuzzification coefficient constitute the parameters of a PSO particle. In Figure 6, the parameter size is 7 because there are 6 contexts and 1 fuzzification coefficient. These parameters are selected as optimal values within given ranges, and the value for each context is used to create its clusters.
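One plausible way to encode such a particle, assuming each particle holds one value per context plus one for the fuzzifier, all kept in [0, 1] and decoded into the search ranges used in the experiments (clusters 2-9, fuzzification coefficient 1.5-2.5). The function name and mapping are illustrative:

```python
import numpy as np

def decode_particle(x, n_contexts, c_min=2, c_max=9, m_min=1.5, m_max=2.5):
    """Decode a PSO particle (values in [0, 1]) into IGM parameters.

    The first n_contexts entries map to integer cluster counts per context,
    the last entry maps to the fuzzification coefficient, matching the
    parameter size of n_contexts + 1 described above (e.g. 6 + 1 = 7).
    """
    clusters = (c_min + np.floor(x[:n_contexts] * (c_max - c_min + 1))).astype(int)
    clusters = np.clip(clusters, c_min, c_max)  # guard the x == 1.0 edge case
    m = m_min + x[n_contexts] * (m_max - m_min)
    return clusters, m

# Example: a 7-dimensional particle for 6 contexts + 1 fuzzifier.
clusters, m = decode_particle(np.array([0.0, 0.5, 0.99, 0.2, 0.7, 1.0, 0.37]), 6)
```

The fitness function then builds one CFCM-based GM per context with these decoded values and returns the validation RMSE, which the PSO loop minimizes.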

Results
In this section, we use the Boston housing dataset to evaluate the predictive performance of the PSO-based IGM described in Section 2 and conduct an experiment to predict house prices in Boston, USA.

Boston Housing Dataset
The Boston housing dataset is provided by the StatLib library maintained by Carnegie Mellon University. The data on housing prices in the Greater Boston area consist of 13 input variables and 1 output variable. The input variables include per capita crime rate by town, proportion of residential land zoned for lots over 25,000 sq. ft., proportion of non-retail business acres per town, Charles River dummy variable, nitric oxide concentration (parts per 10 million), average number of rooms per dwelling, proportion of owner-occupied units built prior to 1940, weighted distances to five Boston employment centers, index of accessibility to radial highways, full-value property-tax rate per $10,000, proportion of blacks by town, pupil-teacher ratio by town, and % lower status of the population. The output variable is the median value of owner-occupied homes in $1000s. The size of the data is 506 × 14. In our experiment, the training data and the test data were split 50:50 and normalized to the range 0 to 1.
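The 50:50 split and min-max normalization can be sketched as follows; the random matrix is only a stand-in for the actual 506 × 14 StatLib data, which would be loaded from file in practice:

```python
import numpy as np

# Random stand-in for the 506 x 14 Boston housing matrix (13 inputs +
# 1 output); in practice the values are loaded from the StatLib file.
data = np.random.default_rng(0).random((506, 14))
X, y = data[:, :13], data[:, 13]

# Min-max normalize every input column to the range [0, 1].
X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# 50:50 split into training and test data.
half = len(X) // 2
X_train, X_test = X[:half], X[half:]
y_train, y_test = y[:half], y[half:]
```

Note that with 506 samples, a 50:50 split gives 253 training and 253 test samples.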

Experimental Method
The experimental procedure is as follows. We compare the predictive performance of the existing IGM with that of the PSO-IGM proposed in this paper. As described above, the existing IGM uses the LR model as the global part and the GM as the local part. The global part of the PSO-IGM uses the same LR method as the existing IGM, while the internal parameters of the local-part GM are optimized using the PSO algorithm.
First, in the experiment using the existing IGM, the number of contexts and the number of clusters of the GM that models the local part were varied. The fuzzification coefficient (m) was fixed at 1.5, the number of contexts was increased from 5 to 8, and the number of clusters was increased from 2 to 20 in steps of 1 to confirm the prediction performance. Next, in the experiment using the proposed PSO-based IGM, the number of contexts of the GM that models the local part is likewise increased from 5 to 8, but the number of clusters and the fuzzification coefficient are selected by the PSO algorithm. The range of the number of clusters to be optimized is 2 to 9, and the range of the fuzzification coefficient is 1.5 to 2.5. The number of PSO iterations was set to 50, and the prediction performance was confirmed with the inertia weight set to 1, the inertia weight damping to 0.99, the personal learning coefficient to 1.5, and the global learning coefficient to 2.
The predictive performance was evaluated using the root mean square error (RMSE). RMSE is a prediction measure based on the difference between the predicted value and the observed value of the model, and can be expressed as Equation (13): RMSE = √((1/n) Σ (y_i − ŷ_i)²).
Here, y_i represents the predicted value of the model, and ŷ_i represents the actual observed value. When both values are equal, the prediction is perfect and the RMSE value becomes 0; therefore, the smaller the RMSE value, the better the prediction performance.
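Equation (13) is straightforward to implement:

```python
import numpy as np

def rmse(y_pred, y_true):
    """Root mean square error, Equation (13)."""
    y_pred, y_true = np.asarray(y_pred, float), np.asarray(y_true, float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

# Perfect prediction gives RMSE 0; smaller values mean better performance.
```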

Result Analysis
The prediction performance of the existing IGM on the Boston housing dataset is shown in Tables 1-4. The number of contexts was fixed at 5, 6, 7, and 8, and the number of clusters was increased from 2 to 20 in steps of 1. As a result, with 7 contexts and 9 clusters, the validation RMSE was 3.74, the best prediction performance. Figure 7 summarizes the predictive performance of the IGM for contexts fixed at 5, 6, 7, and 8; as seen in the figure, the best performance is obtained with 7 contexts and 9 clusters. The proposed PSO-IGM method fixes the number of contexts at 5, 6, 7, and 8, respectively, and then generates the final model by optimizing the number of clusters and the fuzzification coefficient for each context. Table 5 shows the prediction performance of the PSO-IGM. Figure 9 compares the predictive performance of the IGM when the number of clusters per context and the fuzzification coefficient are optimized with the number of contexts fixed at 5, 6, 7, and 8; the best prediction performance is obtained with 8 contexts (cluster counts 7, 5, 3, 7, 3, 4, 5, 2) and a fuzzification coefficient of 1.8734. Next, Figures 10-13 show the cost function decreasing during PSO-IGM optimization (context = 5, 6, 7, 8): before optimization, the cost function value is about 0.1, and it gradually decreases as the optimization process is repeated. Figure 14 visualizes the cluster counts and fuzzification coefficients obtained from the PSO-IGM: 5, 6, 7, and 8 on the x-axis represent the respective contexts, the black bars represent the number of clusters, and the orange bars represent the fuzzification coefficient.

Discussion
Figure 15 compares the prediction performance of the existing IGM and the proposed PSO-IGM, which optimizes the internal parameter values (the number of clusters and the fuzzification coefficient of each context) using PSO, with the number of contexts fixed at 5, 6, 7, and 8. For the conventional IGM with 8 contexts, 14 clusters, and a fuzzification coefficient of 1.5, the validation RMSE was 3.72; for the proposed PSO-IGM with 8 contexts, cluster counts of 7, 5, 3, 7, 3, 4, 5, 2, and a fuzzification coefficient of 1.8734, the validation RMSE was the best at 3.55. These results confirm that the prediction performance of the PSO-IGM, which optimizes the number of clusters and the fuzzification coefficient, is superior to that of the IGM, which uses the same number of clusters and the same fuzzification coefficient for every context. Table 6 summarizes the performance of each IGM and PSO-IGM shown in Figure 15.


Conclusions
In this paper, we propose a PSO-IGM method that optimizes the number of clusters and the fuzzification coefficient of the incremental granular model using the particle swarm optimization algorithm. The deficiency of the existing IGM is that the same number of clusters is created for each context and the same fuzzification coefficient is used throughout. To solve these problems, we optimize the number of clusters required per context using PSO, and optimize the fuzzification coefficient in the same way. Experimental results show that the PSO-IGM method proposed in this paper has better prediction performance than the existing IGM, and that prediction performance can be improved by optimizing the internal parameters according to the characteristics of the data. In future research, we plan to optimize not only the number of clusters and the fuzzification coefficient, but also the number of contexts.

Figure 2. Difference between fuzzy C-means clustering and context-based fuzzy C-means clustering: (a) fuzzy C-means clustering; (b) context-based fuzzy C-means clustering.


Figure 3. Structure of a triangular fuzzy number.


Figure 4. Structure of granular model.


Figure 5. Structure of particle swarm optimization.



Figure 6. Structure of particle swarm optimization-based incremental granular model.


Figure 8 shows the average predicted performance of the IGMs. The red line in the middle shows the average of the predicted performance, and the blue box shows the range from the 25th to the 75th percentile. The red cross marks indicate outliers.


Figure 11. IGM optimization process for six contexts.

Figure 12. IGM optimization process for seven contexts.

Figure 13. IGM optimization process for eight contexts.

Table 1. Predictive performance of the incremental granular model using five contexts.

Table 2. Predictive performance of the incremental granular model using six contexts.

Table 3. Predictive performance of the incremental granular model using seven contexts.

Table 4. Predictive performance of the incremental granular model using eight contexts.

Table 5. Predictive performance of the incremental granular model using particle swarm optimization.


Table 6. Predictive performance of all incremental granular models.
