Article

Incremental Granular Model Improvement Using Particle Swarm Optimization

Department of Control and Instrumentation Engineering, Chosun University, Gwangju 61452, Korea
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(3), 390; https://doi.org/10.3390/sym11030390
Submission received: 16 February 2019 / Revised: 12 March 2019 / Accepted: 14 March 2019 / Published: 18 March 2019

Abstract

This paper proposes an incremental granular model (IGM) based on the particle swarm optimization (PSO) algorithm. An IGM combines linear regression (LR) and a granular model (GM): the global part computes a coarse prediction and its error using LR, and the local part compensates for that error with the GM. However, the context-based fuzzy C-means (CFCM) clustering used in the conventional IGM presents some problems, because the number of clusters generated in each context is the same and a fixed value is used for the fuzzification coefficient. To solve these problems, we optimize the number of clusters and the fuzzification coefficient according to the characteristics of the data using PSO, a nature-inspired optimization algorithm. We evaluate the proposed method against the existing IGM by comparing their predictive performance on the Boston housing dataset, which contains housing price information for Boston, USA, and features 13 input variables and 1 output variable. The prediction results confirm that the proposed PSO-IGM shows better performance than the existing IGM.

1. Introduction

Various studies have been conducted on complex real-world problems with nonlinear characteristics. Representative approaches include the linear regression (LR) method, which models the linear correlation between a dependent variable and one or more independent variables; fuzzy inference [1], which handles vague and uncertain problems in a way similar to humans; the adaptive neuro-fuzzy inference system (ANFIS) model, which employs an artificial neural network with adaptation and learning that imitates information processing in the human brain [2]; and the autoregressive integrated moving average (ARIMA) model [3], which combines autoregressive, integration, and moving-average components and has been studied and applied in various fields.
Zhang [4] conducted an empirical study on predicting blood pressure using classification and regression trees. Krueger [5] proposed a model for predicting semiconductor yield using generalized linear models. Yahia [6] proposed a model for predicting the infinite-number-of-looks result of SAR speckle filtering using linear regression analysis. Zhang [7] proposed a model for predicting the online damping ratio using locally weighted linear regression. Drouard [8] proposed a robust head-pose estimation model based on a partially-latent mixture of linear regressions. Martin [9] proposed a model that predicts graduate student productivity using support vector regression (SVR). Amirkhani [10] proposed models to predict the performance of solar chimney power plants using an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). Naderloo [11] proposed a model that uses ANFIS to predict crop yields based on various energy inputs. Umrao [12] proposed a model to predict the strength and elastic modulus of heterogeneous sedimentary rocks using ANFIS. Zare [13] proposed a model for predicting groundwater level fluctuations using ANFIS and wavelet-ANFIS. Adiguzel [14] proposed a model to predict the effect of dust particle size on the efficiency of photovoltaic modules using ANFIS. Ordonez [15] proposed a model that predicts the remaining useful life of aircraft engines using a hybrid ARIMA-SVM approach. Torbat [16] proposed a model to predict consumption in commodity markets using a hybrid probabilistic fuzzy ARIMA. Ohyver [17] proposed a model to predict the price of medium-quality rice using ARIMA. Barak [18] proposed an energy consumption prediction model using an ensemble ARIMA-ANFIS hybrid. Ramos [19] proposed state space and ARIMA models to forecast consumer retail sales. Suhermi [20] proposed a model for predicting roll motion using a hybrid of deep learning and ARIMA. Musaylh [21] proposed models that predict short-term electricity demand in Queensland, Australia, using MARS, SVR, and ARIMA.
Clustering methods for data analysis have also been the subject of numerous studies; among them, particular interest has been paid to context-based fuzzy C-means (CFCM) clustering, a variant of fuzzy C-means (FCM) clustering. Several studies have extracted information granules using CFCM clustering and designed a granular model (GM) or linguistic model (LM) from the extracted data [22,23]. Unlike the existing FCM clustering method, CFCM clustering can model the data more precisely because it creates contexts in the output space and builds the clusters for each context considering the characteristics of both the output and the input space [24]. Zhu [25] proposed a granular Takagi–Sugeno (TS) fuzzy model that combines the TS fuzzy model with fuzzy subspace clustering and optimal allocation of information granularity. Hmouz [26] proposed a time series prediction model using granular time series. Froelich [27] proposed a granular time series modeling method that uses fuzzy cognitive maps. Cimino [28] proposed genetic interval neural networks for regression on granular data. Zhao [29] proposed a model that predicts the amount of energy generated in steel production using a GM. Pedrycz [23] proposed the LM to model user-centric systems. Building on the GM, an incremental granular model (IGM) [30] was suggested, which is a combination of GM and LR. The IGM computes the model error using the LR in the global part, and the GM in the local part compensates for this error to produce the final predicted value.
However, since the traditional GM method described above generates the same number of clusters for each context, it is difficult to obtain good prediction performance on problems with strongly nonlinear characteristics. To solve such problems, internal parameters have been optimized using the genetic algorithm (GA), an evolutionary optimization algorithm. Oztekin [31] proposed a model that predicts the quality of life of lung transplant patients by combining a support vector machine (SVM) with GA. Garcia [32] proposed a model that predicts short-term traffic congestion using cross entropy and GA. Sharma [33] proposed a model that predicts the sea surface temperature of the Arabian Sea by applying GA to empirical orthogonal functions (EOF). Sadi [34] proposed a GA-optimized group method of data handling (GMDH) model to predict asphaltene precipitation. Esfe [35] proposed a model to predict the viscosity of CuO-ethylene glycol nanofluid using GA-based artificial neural networks. Francescomarino [36] proposed a GA-based method that optimizes the hyperparameters of predictive business process monitoring. Rotta [37] proposed a model for predicting disruptions by applying GA to the APODIS architecture. Byeon [38] suggested a method to optimize internal parameters by applying GA to the IGM.
In addition to GA, which imitates biological evolution and solves optimization problems using crossover and mutation operations, there is also particle swarm optimization (PSO) [39]. The advantage of PSO is that it can quickly find a global solution by exchanging information among multiple individuals using a simple algorithm. Huang [40] proposed a short-term load prediction model using PSO to identify an autoregressive moving average with exogenous variables (ARMAX) model. Chan [41] proposed a short-term traffic forecasting model using intelligent PSO for road sensor systems. Bashir [42] applied wavelets to short-term load forecasting using a PSO-based artificial neural network. Anamika [43] proposed an electricity price prediction and classification model using a wavelet dynamic-weighted PSO-FFNN. Rocha [44] proposed a model that predicts the capacity of a distributed power generation system by applying PSO to an extreme learning machine (ELM). Ma [45] proposed a railroad track irregularity prediction model using an improved grey model and PSO-SVM. Catalao [46] proposed a short-term electricity price forecasting model using a hybrid wavelet-PSO-ANFIS. Liao [47] proposed a model that controls the temperature of a reheating furnace by combining a fuzzy artificial neural network with PSO. Alik [48] and Yifei [49] compared PSO and GA and confirmed that PSO achieves superior optimization performance at a faster computation speed than GA.
In this paper, we propose a method to optimize the number of clusters and the fuzzification coefficient, the internal parameters of the CFCM clustering method that models the local part of the IGM, with particle swarm optimization (PSO), a nature-inspired optimization algorithm. To verify the predictive performance of the proposed method, we conduct an experiment on average housing prices in the Boston area using the Boston housing dataset. The experimental results show that the proposed PSO-IGM generates an appropriate number of clusters for each context and optimizes the fuzzification coefficient to fit the model, yielding better predictive performance than the existing IGM. The composition of this paper is as follows. Section 1 explains the research background. Section 2 describes the existing IGM and the proposed PSO-IGM. Section 3 uses the Boston housing dataset to predict housing prices in Boston and compares the performance of both models. Section 4 discusses the experimental results. Finally, Section 5 concludes and discusses future research.

2. Proposed Methods

2.1. Incremental Granular Model (IGM)

The incremental granular model is an information granule-based model proposed by Pedrycz [30] that predicts the modeling error. The IGM consists of a global part, the LR model, and a local part, a context-based fuzzy C-means (CFCM) clustering-based GM. The global part is modeled by LR to calculate the model error, and the local part compensates for the calculated error with the GM to obtain the final value.

2.1.1. Global Part: Linear Regression (LR)

The arbitrary input-output data are organized in the form $\{ x_k, y_k \},\ k = 1, 2, \ldots, n$, where $x_k$ represents the input vector and $y_k$ represents the output. Figure 1 shows the LR of the global part, which can be expressed as Equation (1).

$$z_k = \mathbf{r}^T x_k + r_0 \qquad (1)$$

where $\mathbf{r}$ denotes the coefficients of the LR calculated by the least square error (LSE) method. The error of the model, which is the result of the LR, is $e_k = y_k - z_k$, and is expressed by linguistic rules. The model error is transformed into the form $\{ x_k, e_k \}$, a new type of input-error data to be used in the granular model (GM) of the local part.
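As a minimal sketch of this global part (the function name and structure are ours, not the authors' code), the LR coefficients can be fit by least squares and the residual errors extracted for the local GM:

```python
import numpy as np

def fit_global_lr(X, y):
    """Fit the global LR part (Equation (1)) by least squares and return
    the coefficients, the predictions z_k, and the errors e_k = y_k - z_k."""
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])  # bias column carries r_0
    coef, *_ = np.linalg.lstsq(Xa, y, rcond=None)  # LSE solution
    z = Xa @ coef                                  # LR predictions z_k
    e = y - z                                      # targets for the local GM
    return coef, z, e
```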

2.1.2. Local Part: Granular Model (GM)

The granular model (GM), which models the local part of the incremental granular model, is designed with the CFCM clustering method. CFCM clustering is a modification of the existing fuzzy C-means clustering method and is described in Section 2.1.3.

2.1.3. Context-Based Fuzzy C-Means (CFCM) Clustering

The context-based fuzzy C-means (CFCM) clustering method proposed by Pedrycz [24] is a clustering method used in the local part of a granular model (GM). The CFCM clustering can cluster the information particles more precisely by considering the characteristics of the output variables as well as the input variables. Figure 2 shows the difference between general fuzzy C-means (FCM) clustering method and CFCM clustering method. When the input-error data is given, the FCM clustering method creates a cluster using only its initial center value and the Euclidean distance between the cluster center and the data. On the other hand, CFCM clustering method considers the characteristics of the output variables. In Figure 2a input-error data are of the same color. In Figure 2b, however, input-error data are of different colors because we consider the characteristics of the output variable. Accordingly, where FCM generates only two clusters, CFCM generates three clusters with better accuracy.
The procedure of the CFCM clustering method is as follows; a short code sketch is given after the steps.
[Step 1] Set the fuzzification coefficient $m$ $(1 < m < \infty)$ and the number of clusters $c$ $(2 < c < n)$.
[Step 2] Specify the initial partition matrix $U$ as in Equation (2), the threshold value $\varepsilon$, and the number of iterations.

$$U = [u_{ij}], \quad i = 1, \ldots, c,\ j = 1, \ldots, n \qquad (2)$$
[Step 3] Compute the center of each cluster $c_i$ $(i = 1, 2, \ldots, c)$ using the membership matrix $U$ and Equation (3).

$$c_i = \frac{\sum_{j=1}^{n} u_{ij}^{m} x_j}{\sum_{j=1}^{n} u_{ij}^{m}} \qquad (3)$$
[Step 4] Update the membership matrix $U$ using the cluster centers and Equation (4) [24].

$$u_{ij} = \frac{f_j}{\sum_{k=1}^{c} \left( d_{ij} / d_{kj} \right)^{2/(m-1)}} \qquad (4)$$
Here, $f_j$ represents the degree to which the data point belongs to the given context. In other words, the linguistic form defined on the output variable can be represented by a fuzzy set $A$, $A: \mathbf{B} \rightarrow [0, 1]$, which is computed using a fuzzy equalization algorithm. Then $f_j = A(y_j),\ j = 1, 2, \ldots, n$, represents the membership value of $y_j$ in $A$.
[Step 5] If $\| J_r - J_{r+1} \| \le \varepsilon$ is satisfied, the procedure stops; otherwise, return to Step 3. The objective function $J$ is given by Equation (5).

$$J = \sum_{j=1}^{n} \sum_{i=1}^{c} u_{ij}^{m} \| x_j - c_i \|^2 \qquad (5)$$
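The sketch below implements Steps 1-5 in Python. It is a minimal illustration under our own choices of initialization and stopping tolerance, not the authors' implementation; the context memberships f (one value per data point, so that the memberships of each point sum to f_j) are assumed to have been computed beforehand from the context defined in the error space.

```python
import numpy as np

def cfcm(X, f, c, m=2.0, n_iter=100, eps=1e-5, seed=0):
    """Context-based FCM: each point's memberships sum to its context
    membership f_j rather than 1 (Equations (2)-(5))."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U = U / U.sum(axis=0) * f            # enforce sum_i u_ij = f_j
    J_prev = np.inf
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)        # Eq. (3)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)
        d = np.fmax(d, 1e-12)            # guard against zero distances
        inv = d ** (-2.0 / (m - 1.0))
        U = f * inv / inv.sum(axis=0)                             # Eq. (4)
        J = np.sum((U ** m) * d ** 2)                             # Eq. (5)
        if abs(J_prev - J) <= eps:       # Step 5: stopping criterion
            break
        J_prev = J
    return centers, U
```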

2.1.4. Granular Model (GM)

The GM is designed with the CFCM clustering method: the premise parameter values contained in the first-layer nodes are the values obtained with CFCM clustering, and the consequent values are the contexts $W$ created in the output space. Each context has a triangular shape (Figure 3) and contains a lower limit value $y_{lower}$, a modal value $y$, and an upper limit value $y_{upper}$; assuming a triangular fuzzy set for the context, these are expressed by Equations (6)-(8) [22,23].
$$y_{lower} = \sum_{t=1}^{p} z_t w_t^{-} + w_0^{-} \qquad (6)$$

$$y_{upper} = \sum_{t=1}^{p} z_t w_t^{+} + w_0^{+} \qquad (7)$$

$$y = W_1 \otimes \xi_1 \oplus W_2 \otimes \xi_2 \oplus \cdots \oplus W_p \otimes \xi_p \qquad (8)$$
Here, the circled operators indicate that the operations are carried out on fuzzy numbers, and $\xi_k$ is the sum of the activation levels generated in the $k$-th context; the activation level of each context is determined by the membership degrees of the clusters generated within it. Recall that $e_k$ represents the modeling error value obtained from the LR; the final predicted value of the IGM is calculated by combining this LR modeling error estimate with the activation values obtained from the GM [30]. Figure 4 shows the structure of the GM: it calculates the triangular fuzzy number through CFCM clustering in the first layer and combines the values to obtain the final value, as in Equation (9).
$$Y = W_1 \otimes z_1 \oplus W_2 \otimes z_2 \oplus \cdots \oplus W_p \otimes z_p \qquad (9)$$
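For illustration only: if each context $W_t$ is stored as a triple (lower, modal, upper) and the activation levels are normalized, the triangular output can be assembled with simple weighted interval arithmetic. This is a simplification of the fuzzy-number operations in Equations (8) and (9), not the exact arithmetic used in the paper.

```python
import numpy as np

def granular_output(contexts, activations):
    """Combine triangular contexts W_t = (lower, modal, upper) with
    activation levels z_t into one triangular prediction."""
    W = np.asarray(contexts, dtype=float)      # shape (p, 3)
    z = np.asarray(activations, dtype=float)   # shape (p,)
    z = z / z.sum()                            # normalize activation levels
    return tuple(z @ W[:, j] for j in range(3))  # (lower, modal, upper)
```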

2.2. Particle Swarm Optimization-Based Incremental Granular Model (PSO-IGM)

The problem with the existing incremental granular model is that the same number of clusters is created for each context. This results in excess clusters for each context and reduces the prediction performance of the model. In order to solve these problems, we propose IGM which optimizes the number of clusters and fuzzy coefficients using particle swarm optimization.

2.2.1. Particle Swarm Optimization (PSO)

The particle swarm optimization (PSO) algorithm, proposed by Kennedy and Eberhart [39], is a nature-inspired algorithm based on the social behavior patterns of biological communities rather than the evolutionary mechanism of natural selection, and it has since been applied to a wide range of prediction and control problems [40,41,42,43,44,45,46,47,48,49,50]. PSO finds the optimal solution by mimicking the behavioral habits of animals such as birds, fish, bees, and ants: several particles are dispersed in the search space and repeatedly adjust their positions toward better solutions, so that the swarm gradually converges toward the optimal solution. PSO is a heuristic computational optimization technique; unlike conventional methods, it does not require a specific termination condition such as a convergence threshold and terminates after a predetermined number of iterations.
The PSO determines the position $X_k^{i+1}$ at the next unit time using the position $P_k^i = (p_{k1}, p_{k2}, \ldots, p_{kn})$ of the best solution experienced by the particle and the position $G^i = (g_1, g_2, \ldots, g_n)$ of the best solution experienced by the swarm. The principle of PSO is shown in Figure 5, and the procedure is as follows; a code sketch is given after the steps.
[Step 1] The initial position vector $X_k^0,\ k = 1, 2, \ldots, n$, and the velocity vector $V_k^0$ are set using random numbers for all particles. Then, $X_k^{pbest}$ is set to the initial position vector $X_k^0$.
[Step 2] Among the $X_k^{pbest}$ values set in Step 1, the one with the minimum cost function value is set as the global optimum position vector $X^{gbest}$ of the whole swarm.
[Step 3] Update the next velocity vector $V_k^{i+1}$ and position vector $X_k^{i+1}$ of each particle using Equations (10) and (11) [43,44].

$$V_k^{i+1} = w V_k^i + c_1 r_1 ( X_k^{pbest} - X_k^i ) + c_2 r_2 ( X^{gbest} - X_k^i ) \qquad (10)$$

$$X_k^{i+1} = X_k^i + V_k^{i+1}, \quad k = 1, 2, \ldots, n \qquad (11)$$
[Step 4] For each particle, if the cost function value $J(X_k^i)$ at the current position is better than $J(X_k^{pbest})$, replace $X_k^{pbest}$ with $X_k^i$.
[Step 5] For the whole swarm, if $J(X_k^i)$ is better than $J(X^{gbest})$, replace $X^{gbest}$ with $X_k^i$.
[Step 6] Stop when a solution with a satisfactory cost function value is obtained; otherwise, return to Step 3 until the predetermined number of generations is reached.
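A compact sketch of Steps 1-6 in Python is shown below. The hyperparameter defaults mirror the settings reported in Section 3.2; the function itself (its name, bound handling, and clipping) is our own illustration, not the authors' implementation.

```python
import numpy as np

def pso(cost, dim, lo, hi, n_particles=30, n_iter=50,
        w=1.0, damp=0.99, c1=1.5, c2=2.0, seed=0):
    """Minimal particle swarm optimization following Steps 1-6."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_particles, dim))        # Step 1: positions
    V = np.zeros((n_particles, dim))                   # Step 1: velocities
    P, Pc = X.copy(), np.array([cost(x) for x in X])   # personal bests
    g, gc = P[Pc.argmin()].copy(), Pc.min()            # Step 2: global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)  # Eq. (10)
        X = np.clip(X + V, lo, hi)                         # Eq. (11)
        C = np.array([cost(x) for x in X])
        better = C < Pc                                # Step 4: update pbest
        P[better], Pc[better] = X[better], C[better]
        if Pc.min() < gc:                              # Step 5: update gbest
            g, gc = P[Pc.argmin()].copy(), Pc.min()
        w *= damp                                      # inertia weight damping
    return g, gc                                       # Step 6: best solution
```

For example, `pso(lambda x: float(((x - 0.5) ** 2).sum()), dim=7, lo=0.0, hi=1.0)` returns a vector close to 0.5 in every component.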

2.2.2. Particle Swarm Optimization-Based Incremental Granular Model (PSO-IGM)

The structure of the incremental granular model based on particle swarm optimization, the nature-inspired optimization algorithm described above, is shown in Figure 6, and the procedure is as follows.
[Step 1] First, linear prediction is performed using the LR that models the global portion from the numerical input–output data. Here, a modeling error between the actual desired output and the LR output can be obtained. Based on this, a new type of input-error data is formed from the input–output data.
[Step 2] Contexts are generated in the newly generated input-error space, and each context is calculated using the statistical characteristics of the error distribution.
[Step 3] CFCM clustering is performed in the input space corresponding to each context generated in the error space. Here, the GM that models the local part generates the clusters considering the characteristics of the output variable in each context, and the optimal number of clusters and the fuzzification coefficient are selected using the PSO.
[Step 4] Particles are created at random positions in the search space, each particle is set as its own pbest, and an initial cluster configuration is created. Each particle has a position vector and a velocity vector.
[Step 5] The generated particles are evaluated using the fitness function. If the fitness value obtained here is better than that of the previous generation, it is set as pbest. The best value among all pbest values is set as the gbest of the whole swarm.
[Step 6] The position and velocity vectors of all particles are updated based on gbest and pbest, and Steps 4-6 are repeated to obtain the optimal number of clusters and fuzzification coefficient.
[Step 7] Obtain the activation levels of the clusters generated in each context, and compute the triangular fuzzy number using the weights of the contexts.
[Step 8] Combine the output of the global LR with the output of the local part of the GM to obtain the final predicted value.
$$Y = W_1 \otimes z_1 \oplus W_2 \otimes z_2 \oplus \cdots \oplus W_p \otimes z_p \qquad (12)$$
As shown in the structure of the PSO-based IGM, the number of clusters for each context and the fuzzification coefficient together form the parameter vector of a PSO particle. In Figure 6, the particle has 7 parameters because there are 6 contexts, each with its own number of clusters, plus 1 fuzzification coefficient. Each parameter is selected within a given range, and the resulting values are used to create the clusters for each context.
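As a small illustration of this encoding (the helper below and its clamping are our own, with bounds matching the search ranges given in Section 3.2), a particle can be decoded into per-context cluster counts and a fuzzification coefficient:

```python
def decode_particle(x, n_contexts=6, c_min=2, c_max=9, m_min=1.5, m_max=2.5):
    """Split a PSO particle into per-context cluster counts and the
    fuzzification coefficient carried in the last component."""
    counts = [int(round(min(max(v, c_min), c_max))) for v in x[:n_contexts]]
    m = min(max(x[-1], m_min), m_max)   # clamp m to its search range
    return counts, m
```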

3. Results

In this section, we use the Boston housing dataset to evaluate the predictive performance of the PSO-based IGM described in Section 2, conducting an experiment to predict house prices in Boston, USA.

3.1. Boston Housing Dataset

The Boston housing dataset is provided by the StatLib library maintained by Carnegie Mellon University. The data on housing prices in the Greater Boston Area consist of 13 input variables and 1 output variable. The input variables are: per capita crime rate by town, proportion of residential land zoned for lots over 25,000 sq. ft., proportion of non-retail business acres per town, Charles River dummy variable, nitric oxides concentration (parts per 10 million), average number of rooms per dwelling, proportion of owner-occupied units built prior to 1940, weighted distances to five Boston employment centers, index of accessibility to radial highways, full-value property-tax rate per $10,000, proportion of blacks by town, pupil-teacher ratio by town, and percentage of lower-status population. The output variable is the median value of owner-occupied homes (in $1000s). The size of the data is 506 × 14. In our experiment, the training data and the test data were divided 50:50 and normalized to the range 0 to 1.
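For reference, the preparation described above might look as follows, assuming the whitespace-delimited housing.data file from the StatLib archive is available locally (the filename, and the choice to compute normalization statistics on the training half, are our assumptions):

```python
import numpy as np

# 506 rows: 13 input columns followed by the output (median home value).
data = np.loadtxt("housing.data")
X, y = data[:, :13], data[:, 13]

# 50:50 train/test split, then min-max normalization to [0, 1].
n_tr = len(data) // 2
X_tr, X_te, y_tr, y_te = X[:n_tr], X[n_tr:], y[:n_tr], y[n_tr:]
mn, mx = X_tr.min(axis=0), X_tr.max(axis=0)
X_tr = (X_tr - mn) / (mx - mn)
X_te = (X_te - mn) / (mx - mn)
```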

3.2. Experimental Method

The experimental procedure is as follows. We compare the predictive performance of the existing IGM with that of the PSO-IGM proposed in this paper. As described above, the existing IGM uses the LR model as the global part and the GM as the local part. The global part of the PSO-IGM uses the same LR method as the existing IGM, while the internal parameters of the local-part GM are optimized with the PSO algorithm.
First, in the experiment with the existing IGM, we varied the number of contexts and the number of clusters of the GM that models the local part. The fuzzification coefficient (m) was fixed at 1.5, the number of contexts was increased from 5 to 8, and the number of clusters was increased from 2 to 20 in steps of 1. Next, in the experiment with the proposed PSO-based IGM, the number of contexts of the GM was likewise increased from 5 to 8, but the number of clusters and the fuzzification coefficient were determined by the PSO algorithm. The search range for the number of clusters was 2 to 9, and the range for the fuzzification coefficient was 1.5 to 2.5. The number of PSO iterations was set to 50, with the inertia weight set to 1, the inertia weight damping to 0.99, the personal learning coefficient to 1.5, and the global learning coefficient to 2; the resulting optimal number of clusters and fuzzification coefficient were then used to measure prediction performance.
The predictive performance was evaluated using the root mean square error (RMSE). RMSE is a prediction measure that uses the difference between the predicted value and the observed value of the model, and can be expressed as Equation (13).
$$RMSE = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} ( y_i - \hat{y}_i )^2 } \qquad (13)$$
Here, $y_i$ represents the predicted value of the model, and $\hat{y}_i$ represents the actual observed value. When the two values are equal, the RMSE is 0; therefore, the smaller the RMSE value, the better the prediction performance.
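Equation (13) translates directly into a small helper (the function name is ours):

```python
import numpy as np

def rmse(y_pred, y_obs):
    """Root mean square error, Equation (13)."""
    y_pred, y_obs = np.asarray(y_pred), np.asarray(y_obs)
    return float(np.sqrt(np.mean((y_pred - y_obs) ** 2)))
```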

3.3. Result Analysis

The prediction performance of the existing IGM on the Boston housing dataset is shown in Table 1, Table 2, Table 3 and Table 4. The number of contexts was fixed to 5, 6, 7, and 8, and the number of clusters was increased from 2 to 20 in steps of 1. The best prediction performance, a testing RMSE of 3.74, was obtained with 7 contexts and 9 clusters.
Figure 7 shows a graph summarizing the predictive performance of the IGM with contexts fixed to 5, 6, 7 and 8, respectively. As seen in the figure, the best prediction performance is obtained when the context is 7 and the number of clusters is 9.
The proposed PSO-IGM fixes the number of contexts to 5, 6, 7, and 8, respectively, and generates the final model by optimizing the number of clusters per context and the fuzzification coefficient. Table 5 shows the prediction performance of the PSO-IGM.
Figure 8 shows the average predictive performance of the IGMs. The red line in the middle of each box shows the average predictive performance, the blue box shows the range from the 25th to the 75th percentile, and the red cross marks indicate outliers.
Figure 9 compares the predictive performance of the PSO-IGMs that optimize the number of clusters per context and the fuzzification coefficient with the contexts fixed to 5, 6, 7, and 8, respectively. Figure 10, Figure 11, Figure 12 and Figure 13 show how the cost function decreases during PSO-IGM optimization (context = 5, 6, 7, 8): before optimization, the cost function value is about 0.1, and it gradually decreases as the optimization proceeds. Figure 14 visualizes the number of clusters and the fuzzification coefficient obtained for each PSO-IGM; 5, 6, 7, and 8 on the x-axis represent the respective numbers of contexts, the black bars represent the numbers of clusters, and the orange bars represent the fuzzification coefficients. As seen in the figures, the best prediction performance is obtained when the number of contexts is 8, the numbers of clusters are 7, 5, 3, 7, 3, 4, 5, 2, and the fuzzification coefficient is 1.8734.

4. Discussion

Figure 15 compares the prediction performance of the existing IGM and the PSO-IGM, which optimizes the internal parameters (the number of clusters per context and the fuzzification coefficient) with the contexts fixed to 5, 6, 7, and 8. For the conventional IGM, the best testing RMSE was 3.72, obtained with 8 contexts, 14 clusters, and a fuzzification coefficient of 1.5. For the proposed PSO-IGM, the best testing RMSE was 3.55, obtained with 8 contexts, cluster numbers of 7, 5, 3, 7, 3, 4, 5, 2, and a fuzzification coefficient of 1.8734. These results confirm that the PSO-IGM, which optimizes the number of clusters and the fuzzification coefficient, outperforms the IGM, which uses the same number of clusters and a fixed fuzzification coefficient for every context. Table 6 summarizes the performance of each IGM and PSO-IGM shown in Figure 15.

5. Conclusions

In this paper, we proposed the PSO-IGM, which optimizes the number of clusters and the fuzzification coefficient of the incremental granular model using the particle swarm optimization algorithm. The deficiency of the existing IGM is that the same number of clusters is created for each context and a fixed fuzzification coefficient is used. To solve these problems, we used PSO to optimize the number of clusters required for each context and, in the same way, the fuzzification coefficient. Experimental results show that the proposed PSO-IGM has better prediction performance than the existing IGM, and that prediction performance can be improved by optimizing the internal parameters according to the characteristics of the data. In future research, we plan to optimize not only the number of clusters and the fuzzification coefficient, but also the number of contexts.

Author Contributions

C.-U.Y. suggested the idea of the work and performed the experiments; K.-C.K. designed the experimental method; both authors wrote and critically revised the paper.

Funding

This study was supported by research fund from Chosun University, 2018.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zadeh, L.A. Fuzzy sets. Inf. Control 1965, 8, 338–353.
2. Han, M.; Sun, Y.; Fan, Y. An improved fuzzy neural network based on T.S. model. Expert Syst. Appl. 2008, 34, 2905–2920.
3. Dhekale, B.S.; Sahu, P.K.; Vishwajith, K.P.; Narsimahaiah, L. Structural time series analysis towards modeling and forecasting of groundwater fluctuations in Murshidabad district of West Bengal. Ecosystem 2015, 5, 117–126.
4. Zhang, B.; Wei, Z.; Ren, J.; Cheng, Y.; Zheng, Z. An empirical study on predicting blood pressure using classification and regression trees. IEEE Access 2018, 6, 21758–21768.
5. Krueger, D.C.; Montgomery, D.C.; Mastrangelo, C.M. Application of generalized linear models to predict semiconductor yield using defect metrology data. IEEE Trans. Semicond. Manuf. 2011, 24, 44–58.
6. Yahia, M.; Hamrouni, T.A.; Abdelfattah, R. Infinite number of looks prediction in SAR filtering by linear regression. IEEE Geosci. Remote Sens. Lett. 2017, 14, 2205–2209.
7. Zhang, J.; Chung, C.Y.; Han, Y. Online damping ratio prediction using locally weighted linear regression. IEEE Trans. Power Syst. 2016, 31, 1954–1962.
8. Drouard, V.; Horaud, R.; Deleforge, A.; Ba, S.; Evangelidis, G. Robust head-pose estimation based on partially-latent mixture of linear regressions. IEEE Trans. Image Process. 2017, 26, 1428–1440.
9. Martin, C.L.; Cazarez, R.L.U.; Floriano, A.G. Support vector regression for predicting the productivity of higher education graduate students from individually developed software projects. IET Softw. 2017, 11, 265–270.
10. Amirkhani, S.; Nasirivatan, S.; Kasaeian, A.B.; Hajinezhad, A. ANN and ANFIS models to predict the performance of solar chimney power plants. Renew. Energy 2015, 83, 597–607.
11. Naderloo, L.; Alimardani, R.; Omid, M.; Sarmadian, F.; Alimardani, F. Application of ANFIS to predict crop yield based on different energy inputs. Measurement 2012, 45, 1406–1413.
12. Umrao, R.K.; Sharma, L.L.; Singh, R.; Singh, T.N. Determination of strength and modulus of elasticity of heterogenous sedimentary rocks: An ANFIS predictive technique. Measurement 2018, 126, 194–201.
13. Zare, M.; Koch, M. Groundwater level fluctuations simulation and prediction by ANFIS- and hybrid wavelet-ANFIS/fuzzy C-means (FCM) clustering models: Applications to the Miandarband plain. J. Hydro-Environ. Res. 2018, 18, 63–76.
14. Adiguzel, E.; Ozer, E.; Akgundogdu, A.; Yilmaz, A.E. Prediction of dust particle size effect on efficiency of photovoltaic modules with ANFIS: An experimental study in Aegean region, Turkey. Sol. Energy 2019, 177, 690–702.
15. Ordonez, C.; Lasheras, F.S.; Pardinas, J.R.; Juez, F.J.D.C. A hybrid ARIMA-SVM model for the study of the remaining useful life of aircraft engines. J. Comput. Appl. Math. 2019, 346, 184–191.
16. Torbat, S.; Khashei, M.; Bijari, M. A hybrid probabilistic fuzzy ARIMA model for consumption forecasting in commodity markets. Econ. Anal. Policy 2018, 58, 22–31.
17. Ohyver, M.; Pudjihastuti, H. ARIMA model for forecasting the price of medium quality rice to anticipate price fluctuations. Procedia Comput. Sci. 2018, 135, 707–711.
18. Barak, S.; Sadegh, S.S. Forecasting energy consumption using ensemble ARIMA-ANFIS hybrid algorithm. Int. J. Electr. Power Energy Syst. 2016, 82, 92–104.
19. Ramos, P.; Santos, N.; Rebelo, R. Performance of state space and ARIMA models for consumer retail sales forecasting. Robot. Comput.-Integr. Manuf. 2015, 34, 151–163.
20. Suhermi, N.; Suhartono; Prastyo, D.D.; Ali, B. Roll motion prediction using a hybrid deep learning and ARIMA model. Procedia Comput. Sci. 2018, 144, 251–258.
21. Musaylh, M.S.A.; Deo, R.C.; Adamowski, J.F.; Li, Y. Short-term electricity demand forecasting with MARS, SVR and ARIMA models using aggregated demand data in Queensland, Australia. Adv. Eng. Inform. 2018, 35, 1–16.
22. Pedrycz, W.; Vasilakos, A.V. Linguistic models and linguistic modeling. IEEE Trans. Syst. Man Cybern. Part B 1999, 29, 745–757.
23. Pedrycz, W.; Kwak, K.C. Linguistic models as a framework of user-centric system modeling. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2006, 36, 727–745.
24. Pedrycz, W. Conditional fuzzy C-means. Pattern Recognit. Lett. 1996, 17, 625–632.
25. Zhu, X.; Pedrycz, W.; Li, Z. A design of granular Takagi-Sugeno fuzzy model through the synergy of fuzzy subspace clustering and optimal allocation of information granularity. IEEE Trans. Fuzzy Syst. 2018, 26, 2499–2509.
26. Hmouz, R.A.; Pedrycz, W.; Balamash, A. Description and prediction of time series: A general framework of granular computing. Expert Syst. Appl. 2015, 42, 4830–4839.
27. Froelich, W.; Pedrycz, W. Fuzzy cognitive maps in the modeling of granular time series. Knowl.-Based Syst. 2017, 115, 110–122.
28. Cimino, M.G.C.A.; Lazzerini, B.; Marcelloni, F.; Pedrycz, W. Genetic interval neural networks for granular data regression. Inf. Sci. 2014, 257, 313–330.
29. Zhao, J.; Han, Z.; Pedrycz, W.; Wang, W. Granular model of long-term prediction for energy system in steel industry. IEEE Trans. Cybern. 2016, 46, 388–400.
30. Pedrycz, W.; Kwak, K.C. The development of incremental models. IEEE Trans. Fuzzy Syst. 2007, 15, 507–518.
31. Oztekin, A.; Ebbini, L.A.; Sevkli, Z.; Delen, D. A decision analysis approach to predicting quality of life for lung transplant recipients: A hybrid genetic algorithms-based methodology. Eur. J. Oper. Res. 2018, 266, 639–651.
32. Garcia, P.L.; Onieva, E.; Osaba, E.; Masegosa, A.D.; Perallos, A. A hybrid method for short-term traffic congestion forecasting using genetic algorithms and cross entropy. IEEE Trans. Intell. Transp. Syst. 2016, 17, 557–569.
33. Neetu; Sharma, R.; Basu, S.; Sarkar, A.; Pal, P.K. Data-adaptive prediction of sea-surface temperature in the Arabian Sea. IEEE Geosci. Remote Sens. Lett. 2011, 8, 9–13.
34. Sadi, M.; Shahrabadi, A. Evolving robust intelligent model based on group method of data handling technique optimized by genetic algorithm to predict asphaltene precipitation. J. Pet. Sci. Eng. 2018, 171, 1211–1222.
35. Esfe, M.H.; Bahiraei, M.; Mahian, O. Experimental study for developing an accurate model to predict viscosity of CuO-ethylene glycol nanofluid using genetic algorithm based neural network. Powder Technol. 2018, 338, 383–390.
36. Di Francescomarino, C.; Dumas, M.; Federici, M.; Ghidini, C.; Maggi, F.M.; Rizzi, W.; Simonetto, L. Genetic algorithms for hyperparameter optimization in predictive business process monitoring. Inf. Syst. 2018, 74, 67–83.
37. Rotta, G.A.; Vega, J.; Murari, A.; Canto, S.D. Global optimization driven by genetic algorithms for disruption predictors based on APODIS architecture. Fusion Eng. Des. 2016, 112, 1014–1018.
38. Byeon, Y.H.; Kwak, K.C. A design of genetically oriented rules-based incremental granular models and its application. Symmetry 2017, 9, 324.
39. Kennedy, J.; Eberhart, R. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks (ICNN '95), Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
40. Huang, C.M.; Huang, C.J.; Wang, M.L. A particle swarm optimization to identifying the ARMAX model for short-term load forecasting. IEEE Trans. Power Syst. 2005, 20, 1126–1133.
41. Chan, K.Y.; Dillon, T.S.; Chang, E. An intelligent particle swarm optimization for short-term traffic flow forecasting using on-road sensor systems. IEEE Trans. Ind. Electron. 2013, 60, 4714–4725.
42. Bashir, Z.A.; El-Hawary, M.E. Applying wavelets to short-term load forecasting using PSO-based neural networks. IEEE Trans. Power Syst. 2009, 24, 20–27.
43. Anamika; Peesapati, R.; Kumar, N. Electricity price forecasting and classification through wavelet-dynamic weighted PSO-FFNN approach. IEEE Syst. J. 2018, 12, 3075–3084.
44. Rocha, H.R.O.; Silvestre, L.J.; Celeste, W.C.; Coura, D.J.C.; Rigo, L.O., Jr. Forecast of distributed electrical generation system capacity based on seasonal micro generators using ELM and PSO. IEEE Lat. Am. Trans. 2018, 16, 1136–1141.
45. Ma, Z.; Dong, Y.; Liu, H.; Shao, X.; Wang, C. Forecast of non-equal interval track irregularity based on improved grey model and PSO-SVM. IEEE Access 2018, 6, 34812–34818.
46. Catalao, J.P.S.; Pousinho, H.M.I.; Mendes, V.M.F. Hybrid wavelet-PSO-ANFIS approach for short-term electricity prices forecasting. IEEE Trans. Power Syst. 2011, 26, 137–144.
47. Liao, Y.X.; She, J.H.; Wu, M. Integrated hybrid-PSO and fuzzy-NN decoupling control for temperature of reheating furnace. IEEE Trans. Ind. Electron. 2009, 56, 2704–2714.
48. Alik, B.; Teguar, M.; Mekhaldi, A. Minimization of grounding system cost using PSO, GAO, and HPSGAO techniques. IEEE Trans. Power Deliv. 2015, 30, 2561–2569.
49. Yifei, T.; Meng, Z.; Jingwei, L.; Dongbo, L.; Yulin, W. Research on intelligent welding robot path optimization based on GA and PSO algorithms. IEEE Access 2018, 6, 65397–65404.
50. Shivakumar, R.; Lakshmipathi, R. Implementation of an innovative bio-inspired GA and PSO algorithm for controller design considering steam GT dynamics. Int. J. Comput. Sci. Issues 2010, 7, 18–28.
Figure 1. Structure of linear regression.
Figure 2. Difference between fuzzy C-means clustering and context-based fuzzy C-means clustering: (a) fuzzy C-means clustering; (b) context-based fuzzy C-means clustering.
Figure 3. Structure of a triangular fuzzy number.
Figure 4. Structure of granular model.
Figure 5. Structure of particle swarm optimization.
Figure 6. Structure of particle swarm optimization-based incremental granular model.
Figure 7. Predictive IGM performance (context = 5, 6, 7, 8).
Figure 8. Average IGM performance (context = 5, 6, 7, 8).
Figure 9. Predictive performance of the PSO-IGMs (context = 5, 6, 7, 8).
Figure 10. IGM optimization process for 5 contexts.
Figure 11. IGM optimization process for 6 contexts.
Figure 12. IGM optimization process for 7 contexts.
Figure 13. IGM optimization process for 8 contexts.
Figure 14. Optimized internal parameters of the PSO-IGMs (context = 5, 6, 7, 8).
Figure 15. Predictive performance of all IGMs.
Table 1. Predictive performance of the incremental granular model using five contexts.

No. of Clusters (m = 1.5) | Training RMSE | Testing RMSE
2 | 4.26 | 4.34
3 | 3.83 | 4.46
4 | 3.68 | 4.84
5 | 3.57 | 4.36
6 | 3.47 | 4.43
7 | 3.44 | 4.12
8 | 3.24 | 4.46
9 | 3.36 | 4.40
10 | 3.30 | 4.33
11 | 3.25 | 4.49
12 | 3.20 | 4.56
13 | 3.36 | 4.65
14 | 3.35 | 4.80
15 | 3.54 | 4.75
16 | 3.69 | 4.51
17 | 3.59 | 4.62
18 | 3.96 | 4.55
19 | 3.94 | 4.50
20 | 4.00 | 4.56
Table 2. Predictive performance of the incremental granular model using six contexts.

No. of Clusters (m = 1.5) | Training RMSE | Testing RMSE
2 | 4.27 | 4.32
3 | 4.35 | 5.34
4 | 3.71 | 4.31
5 | 3.54 | 4.18
6 | 3.61 | 4.15
7 | 3.49 | 3.95
8 | 3.25 | 4.29
9 | 3.56 | 4.29
10 | 3.52 | 4.14
11 | 3.23 | 4.07
12 | 3.69 | 4.23
13 | 3.76 | 4.20
14 | 3.34 | 4.18
15 | 3.68 | 4.22
16 | 3.84 | 4.26
17 | 3.71 | 4.27
18 | 4.12 | 4.39
19 | 4.34 | 4.48
20 | 4.61 | 4.77
Table 3. Predictive performance of the incremental granular model using seven contexts.

No. of Clusters (m = 1.5) | Training RMSE | Testing RMSE
2 | 4.28 | 4.27
3 | 4.21 | 4.96
4 | 3.62 | 4.05
5 | 3.55 | 4.10
6 | 3.70 | 3.96
7 | 3.60 | 3.94
8 | 3.39 | 3.76
9 | 3.11 | 3.74
10 | 3.27 | 3.75
11 | 3.45 | 3.88
12 | 3.11 | 3.91
13 | 3.28 | 3.98
14 | 3.54 | 3.98
15 | 3.76 | 4.19
16 | 3.77 | 4.13
17 | 4.00 | 4.15
18 | 4.22 | 4.20
19 | 4.47 | 4.18
20 | 4.49 | 4.13
Table 4. Predictive performance of the incremental granular model using eight contexts.

No. of Clusters (m = 1.5) | Training RMSE | Testing RMSE
2 | 4.28 | 4.26
3 | 4.04 | 4.24
4 | 3.60 | 4.07
5 | 3.63 | 4.08
6 | 3.21 | 3.91
7 | 3.56 | 3.83
8 | 3.96 | 3.83
9 | 3.19 | 3.77
10 | 3.16 | 3.92
11 | 3.29 | 3.83
12 | 3.70 | 3.77
13 | 3.37 | 3.84
14 | 3.36 | 3.72
15 | 3.67 | 3.81
16 | 3.89 | 4.11
17 | 4.11 | 4.08
18 | 4.39 | 4.11
19 | 4.55 | 4.11
20 | 4.74 | 4.05
Table 5. Predictive performance of the incremental granular model using particle swarm optimization.

Algorithm | No. of Contexts | No. of Clusters / Fuzzification Coefficient | Training RMSE | Testing RMSE
PSO-IGM | 5 | 5 5 4 3 2 / 2.3703 | 3.60 | 3.94
PSO-IGM | 6 | 4 7 8 4 6 3 / 1.5740 | 3.17 | 3.56
PSO-IGM | 7 | 6 3 3 6 5 3 7 / 1.9901 | 3.42 | 3.73
PSO-IGM | 8 | 7 5 3 7 3 4 5 2 / 1.8734 | 3.24 | 3.55
Table 6. Predictive performance of all incremental granular models.

Algorithm | No. of Contexts | No. of Clusters / Fuzzification Coefficient | Training RMSE | Testing RMSE
IGM | 5 | 7 / 1.5 | 3.44 | 4.12
PSO-IGM | 5 | 5 5 4 3 2 / 2.3703 | 3.60 | 3.94
IGM | 6 | 7 / 1.5 | 3.49 | 3.95
PSO-IGM | 6 | 4 7 8 4 6 3 / 1.5740 | 3.17 | 3.56
IGM | 7 | 9 / 1.5 | 3.11 | 3.74
PSO-IGM | 7 | 6 3 3 6 5 3 7 / 1.9901 | 3.42 | 3.73
IGM | 8 | 14 / 1.5 | 3.36 | 3.72
PSO-IGM | 8 | 7 5 3 7 3 4 5 2 / 1.8734 | 3.24 | 3.55
