Article

An Initial Condition Optimization Approach for Improving the Prediction Precision of a GM(1,1) Model

1 Department of Statistics, Faculty of Science, Sebha University, Sebha, Libya
2 School of Informatics and Applied Mathematics, Universiti Malaysia Terengganu (UMT), Kuala Terengganu 21300, Terengganu, Malaysia
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2017, 22(1), 21; https://doi.org/10.3390/mca22010021
Submission received: 28 December 2016 / Revised: 8 February 2017 / Accepted: 8 February 2017 / Published: 22 February 2017

Abstract

The grey model GM(1,1) attains good prediction accuracy with limited data and has been broadly utilized in a range of areas. However, the GM(1,1) forecasting model sometimes yields large forecasting errors, which directly affect its simulation and prediction precision. Improving the GM(1,1) model is therefore an essential issue, and the current study aims to enhance its prediction precision. In particular, improving the prediction precision of the GM(1,1) model requires improving the initial condition in its time response function. Consequently, the purpose of this paper is to put forward a new method to enhance the performance of the GM(1,1) model by optimizing its initial condition. The minimum sum of squared errors was used to derive the new initial condition of the model. The numerical outcomes show that the improved GM(1,1) model performs considerably better than the traditional grey model GM(1,1), demonstrating that the improved model achieves the objective of minimizing the forecast errors.

1. Introduction

The grey systems theory, established by Julong Deng in 1982, focuses on the investigation of problems involving small samples and poor information [1]. Through more than 30 years of development, the theory has been extensively applied in various research branches and has achieved great results.
The grey model GM(1,1) is one of the core components of this theory, and it is the most commonly used model in the literature because of its computational efficiency [2]. However, the traditional GM(1,1) model has some restrictions that affect its applicability and prediction precision. Therefore, to remedy the weaknesses of the traditional grey model GM(1,1), numerous scholars have given careful consideration to improving the model. These improvements can be categorized into three types, as follows:
(1) A number of scholars have paid much attention to enhancing the grey derivative. Wang et al. [3] proposed a GM(1,1) direct modeling technique with step-by-step optimization of the whitened values of the grey derivative for modeling sequences with unequal time intervals. They also proved that the new technique retains the same linear-transformation consistency properties as the traditional technique. Sun and Wei [4] introduced the direct GM(1,1) with an enhanced grey derivative, which raised the precision of the modeling method; the new approach was confirmed to preserve the exponent and coefficient characteristics under continuous superposition. The model is appropriate for low-growth as well as high-growth sequences, and also for non-homogeneous exponential data. Another study by Zhou et al. [5] presented a new approach for optimizing the whitened differential equation by starting from the original grey differential equation and observing the correlation between the original data X(0) and the derivative of its 1-AGO. A new whitened equation of GM(1,1), equivalent to the original grey differential equation, was constructed, and a new GM(1,1) model closer to the variation of the data was obtained.
(2) Many scholars have concentrated on reconstructing the background value. For example, Chang et al. [6] proposed a new approach for finding the optimum background-value weight to reduce the grey prediction model's modeling error, treating every background value at a discrete point as an independent parameter. Dai and Chen [7] asserted that the background value is a significant element affecting the prediction accuracy of the GM(1,1) model, and substituted the conventional background value of the GM(1,1) model with a new reconstruction based on the Gauss-Legendre formula. In another study, Yao and Wang [8] utilized a modified GM(1,1) model based on restructuring the background value to predict electrical energy consumption in eastern China. Mahdi and Norizan [9] developed an optimization of the traditional GM(1,1) model based on reconstructing the background value. Another study by Mahdi and Norizan [10] proposed a new approach to enhance the prediction accuracy of a GM(1,1) model through optimization of the background value; the new background value was deduced under the supposition that the discrete function follows a non-homogeneous exponential law.
(3) Some scholars have focused on improving the initial condition in the time response function. For instance, Dang et al. [11] proposed taking the last item of X(1) as the initial value of the grey differential equation to enhance prediction accuracy. Xie and Liu [12] discussed the influence of different fitting points and proposed optimizing the fitting point of the model. Another study by Wang et al. [13] introduced a modified initial condition using both the first item and the last item of X(1) to enhance the prediction accuracy of the traditional GM(1,1) model. Chen and Li [14] also proposed an optimal weighted combination GM(1,1) model with different initial values. The improvements these new grey models make to the initial condition may raise forecast accuracy in certain practical applications. Nevertheless, various possibilities to improve the prediction precision of the GM(1,1) model still exist. Conventionally, the first term of X(1) is used as the initial condition of the GM(1,1) model; consequently, in practical applications, the new information contained in terms other than the first term of X(1) is not adequately utilized.
The novelty of this paper is a new method to enhance the prediction precision of the GM(1,1) model by optimizing its initial condition, which is compared empirically with the traditional GM(1,1) model using standard measures of forecasting performance. The new initial condition is derived by minimizing the sum of squared errors.
The remaining parts of this paper are structured as follows: Section 2 presents the improved GM(1,1) model. Section 3 demonstrates the application of the modified technique through numerical examples. Section 4 discusses the results, and Section 5 draws conclusions.

2. Materials and Methods

Improved GM(1,1) Model: Let X(0) = {x(0)(1), x(0)(2), …, x(0)(n)}, n ≥ 4, be a sequence of raw data. Denote its accumulated generating operation (1-AGO) sequence by X(1) = {x(1)(1), x(1)(2), …, x(1)(n)}, where

x^{(1)}(k) = \sum_{i=1}^{k} x^{(0)}(i), \quad k = 2, 3, \ldots, n,

and x^{(1)}(1) = x^{(0)}(1).
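As an illustration, the 1-AGO is simply a running sum of the raw sequence. A minimal Python sketch (the function name is ours, not from the paper):

```python
# 1-AGO: x1(k) is the cumulative sum of the raw sequence up to index k.
def ago(x0):
    x1, total = [], 0.0
    for value in x0:
        total += value
        x1.append(total)
    return x1

print(ago([1.0, 2.0, 3.0, 4.0]))  # [1.0, 3.0, 6.0, 10.0]
```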
Then

x^{(0)}(k) + a x^{(1)}(k) = b \qquad (1)

is referred to as the original form of the GM(1,1) model, where the symbol GM(1,1) stands for "first-order grey model in one variable."
Let Z(1) = {z(1)(2), z(1)(3), …, z(1)(n)} be the sequence of mean values generated from consecutive neighbors of X(1). That is,

z^{(1)}(k) = 0.5\, x^{(1)}(k) + 0.5\, x^{(1)}(k-1), \quad k = 2, 3, \ldots, n \qquad (2)
Then,
x^{(0)}(k) + a z^{(1)}(k) = b \qquad (3)

is referred to as the basic form of the GM(1,1) model, also called the grey differential equation, and the equation
\frac{dx^{(1)}}{dt} + a x^{(1)} = b \qquad (4)
is the whitened (or image) equation of GM(1,1). If we let
Y = \begin{bmatrix} x^{(0)}(2) \\ x^{(0)}(3) \\ \vdots \\ x^{(0)}(n) \end{bmatrix}, \qquad B = \begin{bmatrix} -z^{(1)}(2) & 1 \\ -z^{(1)}(3) & 1 \\ \vdots & \vdots \\ -z^{(1)}(n) & 1 \end{bmatrix}
then the least squares estimates of the parameters are

[a, b]^{T} = (B^{T} B)^{-1} B^{T} Y \qquad (5)
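Since the model has only two parameters, the least squares estimate in Equation (5) reduces to a simple linear regression of x(0)(k) on −z(1)(k). A sketch in plain Python (the function name and variable names are ours):

```python
def gm11_params(x0):
    """Least squares estimate of (a, b) in x0(k) + a*z1(k) = b."""
    # 1-AGO sequence
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    # Background values z1(k), k = 2..n
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, len(x0))]
    y = x0[1:]
    # Regress y on z: y(k) = -a*z(k) + b
    zbar, ybar = sum(z) / len(z), sum(y) / len(y)
    slope = (sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
             / sum((zi - zbar) ** 2 for zi in z))
    return -slope, ybar - slope * zbar  # (a, b)
```

On the in-sample data of Example 3.1 below, this yields a ≈ −0.2978 and b ≈ 1.1489, matching the values reported there.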
The solution of the whitened equation, also known as the time response function, is given by

\hat{x}^{(1)}(t) = \left( x^{(0)}(1) - \frac{b}{a} \right) e^{-a(t-1)} + \frac{b}{a} \qquad (6)
The time response sequence of the GM(1,1) model in Equation (3) is given below:
\hat{x}^{(1)}(k) = \left( x^{(0)}(1) - \frac{b}{a} \right) e^{-a(k-1)} + \frac{b}{a} \qquad (7)
The restored values of x(0)(k) are given as follows:

\hat{x}^{(0)}(k) = \hat{x}^{(1)}(k) - \hat{x}^{(1)}(k-1) = \left( 1 - e^{a} \right) \left( x^{(0)}(1) - \frac{b}{a} \right) e^{-a(k-1)} \qquad (8)

where k = 2, 3, …, n.
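Given a and b, the restored values in Equation (8) can be evaluated directly. A sketch under the sign convention of the equations above (the function name is ours):

```python
import math

def gm11_restored(x0_first, a, b, n):
    """Traditional GM(1,1) restored values, Equation (8):
    xhat0(k) = (1 - e^a)(x0(1) - b/a) e^{-a(k-1)}, k = 2..n."""
    coeff = (1.0 - math.exp(a)) * (x0_first - b / a)
    return [coeff * math.exp(-a * (k - 1)) for k in range(2, n + 1)]
```

With the Example 3.1 estimates a = −0.2978, b = 1.1489 and x(0)(1) = 1.3499, the first restored value comes out near the tabulated 1.8064.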
Using the discrete form in Equation (7), \hat{x}^{(1)}(k) can be expressed as:

\hat{x}^{(1)}(k) = C e^{-ak} + \frac{b}{a}, \quad k = 2, 3, \ldots, n \qquad (9)
where, in the traditional GM(1,1) model, the initial condition is

C = \left[ x^{(1)}(1) - \frac{b}{a} \right] e^{a} \qquad (10)
By applying the inverse accumulated generating operation (IAGO) to \hat{x}^{(1)}(k), the restored values of the raw data are given as follows:

\hat{x}^{(0)}(k) = \hat{x}^{(1)}(k) - \hat{x}^{(1)}(k-1) = C \left[ e^{-ak} - e^{-a(k-1)} \right], \quad k = 2, 3, \ldots, n \qquad (11)
The initial condition is one of the important factors affecting the accuracy of the traditional grey model GM(1,1). Since the optimal fitting curve need not pass through any particular point of the raw data, the original initial condition in Equation (10) is not necessarily the best, meaning that optimization of the initial condition is required.
The estimate of C can be found by minimizing the sum of squared errors of the restored values. Construct the function

f(C) = \sum_{k=2}^{n} \left[ \hat{x}^{(0)}(k) - x^{(0)}(k) \right]^{2} \qquad (12)
Substituting (11) into (12) gives
f(C) = \sum_{k=2}^{n} \left[ C \left( e^{-ak} - e^{-a(k-1)} \right) - x^{(0)}(k) \right]^{2} = C^{2} \sum_{k=2}^{n} \left( e^{-ak} - e^{-a(k-1)} \right)^{2} - 2C \sum_{k=2}^{n} \left( e^{-ak} - e^{-a(k-1)} \right) x^{(0)}(k) + \sum_{k=2}^{n} \left[ x^{(0)}(k) \right]^{2} \qquad (13)
\frac{df(C)}{dC} = 2C \sum_{k=2}^{n} \left( e^{-ak} - e^{-a(k-1)} \right)^{2} - 2 \sum_{k=2}^{n} \left( e^{-ak} - e^{-a(k-1)} \right) x^{(0)}(k)

Setting \frac{df(C)}{dC} = 0 gives

C \sum_{k=2}^{n} \left( e^{-ak} - e^{-a(k-1)} \right)^{2} = \sum_{k=2}^{n} \left( e^{-ak} - e^{-a(k-1)} \right) x^{(0)}(k)
The optimum C can therefore be written as

C = \frac{\sum_{k=2}^{n} \left( e^{-ak} - e^{-a(k-1)} \right) x^{(0)}(k)}{\sum_{k=2}^{n} \left( e^{-ak} - e^{-a(k-1)} \right)^{2}} \qquad (14)

Since the coefficient of C^{2} in (13) is positive, this value of C minimizes f(C) = \sum_{k=2}^{n} \left[ \hat{x}^{(0)}(k) - x^{(0)}(k) \right]^{2}.
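Equation (14) is a single pass over the data. A Python sketch (names are ours):

```python
import math

def optimal_c(x0, a):
    """Least squares initial condition C from Equation (14):
    C = sum(d_k * x0(k)) / sum(d_k^2), with d_k = e^{-a k} - e^{-a (k-1)}."""
    num = den = 0.0
    for k in range(2, len(x0) + 1):
        d = math.exp(-a * k) - math.exp(-a * (k - 1))
        num += d * x0[k - 1]   # x0 is 0-indexed, so x0[k-1] is x0(k)
        den += d * d
    return num / den
```

For the Example 3.1 data with a = −0.2978, this gives C ≈ 3.943, matching the value used in the improved model there.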
From the above descriptions, we can summarize prediction steps of the improved grey model GM(1,1) as follows:
Step 1: Calculate the background values z(1)(k) in Equation (2) from the 1-AGO sequence X(1).
Step 2: Calculate the parameters a and b in Equation (5) from the vector Y and the matrix B.
Step 3: Calculate the optimum value of C by substituting a and the raw data x(0)(k) into (14).
Step 4: Substitute a and the optimum value of C into (11) to obtain the sequence of restored and predicted values \hat{x}^{(0)}(k), k = 2, 3, …, n + p, where p denotes the prediction step size.
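Steps 1-4 can be combined into one routine. The sketch below re-derives a, the optimum C, and the restored/predicted values; all names are ours, and the sign convention follows the equations above (b is also estimated in Step 2, but Equation (11) only needs a and C):

```python
import math

def improved_gm11(x0, p=0):
    """Improved GM(1,1): returns xhat0(k) for k = 2 .. n + p (Steps 1-4)."""
    # Step 1: 1-AGO and background values
    x1, s = [], 0.0
    for v in x0:
        s += v
        x1.append(s)
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, len(x0))]
    # Step 2: least squares development coefficient a
    y = x0[1:]
    zbar, ybar = sum(z) / len(z), sum(y) / len(y)
    slope = (sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
             / sum((zi - zbar) ** 2 for zi in z))
    a = -slope
    # Step 3: optimum initial condition C, Equation (14)
    d = [math.exp(-a * k) - math.exp(-a * (k - 1))
         for k in range(2, len(x0) + 1)]
    c = (sum(dk * xk for dk, xk in zip(d, y))
         / sum(dk * dk for dk in d))
    # Step 4: restored values (k <= n) and predictions (k > n), Equation (11)
    return [c * (math.exp(-a * k) - math.exp(-a * (k - 1)))
            for k in range(2, len(x0) + 1 + p)]
```

On the Example 3.1 in-sample data with p = 3, the fitted and predicted values reproduce the improved-model column of Table 1 to within rounding.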
Evaluative accuracy of forecasting models: We consider two examples to demonstrate the advantage of the improved GM(1,1) model. Prediction accuracy is assessed using the mean absolute percentage error (MAPE), given by
\mathrm{MAPE} = \frac{1}{n} \sum_{k=1}^{n} \frac{\left| x^{(0)}(k) - \hat{x}^{(0)}(k) \right|}{x^{(0)}(k)} \times 100\% \qquad (15)

where the absolute relative error of a single observation is \left| x^{(0)}(k) - \hat{x}^{(0)}(k) \right| / x^{(0)}(k) \times 100\%.
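The MAPE criterion is straightforward to compute over whatever range of k the model fits or predicts. A sketch (the function name is ours):

```python
def mape(actual, fitted):
    """Mean absolute percentage error, in percent, over paired values."""
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actual, fitted)) / len(actual)

print(mape([2.0, 4.0], [1.0, 5.0]))  # 37.5
```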

3. Results

Example 3.1:

Let us take the simulated data sequence generated by f(t) = e^{0.3t}, t = 1, 2, …, 11:
(1.349859, 1.822119, 2.459603, 3.320117, 4.481689, 6.049647, 8.16617, 11.02318, 14.87973, 20.08554, 27.11264). To evaluate the prediction accuracy of the traditional GM(1,1) and the improved GM(1,1) models put forward in this study, we use the first eight observations in the simulated data sequence (in-sample data) to construct the two models, i.e.,
X(0) = (1.349859, 1.822119, 2.459603, 3.320117, 4.481689, 6.049647, 8.16617, 11.02318)
On the other hand, the last three observations in the simulated data sequence, i.e., (14.87973, 20.08554, 27.11264) (out-of-sample) are used for predictive inspection.
First, the parameters are estimated and the traditional grey model GM(1,1) is constructed as follows:

a = -0.2978, \quad b = 1.1489, \quad \hat{x}^{(0)}(k) = \left( 1 - e^{-0.2978} \right) \left( x^{(0)}(1) + \frac{1.1489}{0.2978} \right) e^{0.2978(k-1)}, \quad k = 2, 3, \ldots, 8
Second, the parameters are estimated and the improved GM(1,1) model is constructed as follows:

a = -0.2978, \quad C = 3.9431, \quad \hat{x}^{(0)}(k) = 3.9431 \left[ e^{0.2978 k} - e^{0.2978 (k-1)} \right], \quad k = 2, 3, \ldots, 8
The comparison of the prediction accuracy results for the above mentioned models is displayed in Table 1.
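The Table 1 comparison can be reproduced end to end. The script below re-estimates the parameters from the in-sample data and compares the in-sample MAPE of both models; the variable names are ours, and small discrepancies from the tabulated values come from rounding a and b in the text:

```python
import math

x0 = [1.349859, 1.822119, 2.459603, 3.320117, 4.481689,
      6.049647, 8.16617, 11.02318]            # in-sample data of Example 3.1

# Least squares (a, b) from the background values.
x1, s = [], 0.0
for v in x0:
    s += v
    x1.append(s)
z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, len(x0))]
y = x0[1:]
zbar, ybar = sum(z) / len(z), sum(y) / len(y)
slope = (sum((zi - zbar) * (yi - ybar) for zi, yi in zip(z, y))
         / sum((zi - zbar) ** 2 for zi in z))
a, b = -slope, ybar - slope * zbar

# Traditional model: initial condition fixed at x0(1).
trad = [(1 - math.exp(a)) * (x0[0] - b / a) * math.exp(-a * (k - 1))
        for k in range(2, len(x0) + 1)]

# Improved model: least squares initial condition C.
d = [math.exp(-a * k) - math.exp(-a * (k - 1)) for k in range(2, len(x0) + 1)]
c = sum(dk * xk for dk, xk in zip(d, y)) / sum(dk * dk for dk in d)
impr = [c * dk for dk in d]

def mape(actual, fitted):
    return 100.0 * sum(abs(u - v) / u for u, v in zip(actual, fitted)) / len(actual)

print(mape(y, trad), mape(y, impr))  # roughly 1.52 and 0.50, as in Table 1
```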

Example 3.2:

We employ the historical annual LCD TV output of China from 1996 to 2005 from reference [15] as our research data, which are listed in Table 2.
To evaluate the prediction accuracy of the traditional GM(1,1) and the improved GM(1,1) models put forward in this study, we use the first seven observations in the research data sequence (in-sample data) to construct the two models, i.e.,
X(0) = (3.28, 5.48, 10.07, 17.70, 29.73, 49.39, 92.67)
On the other hand, the last three observations in the research data sequence, i.e., (162.23, 280.86, 513.40) (out-of-sample) are used for predictive inspection.
First, the parameters are estimated and the traditional grey model GM(1,1) is constructed as follows:

a = -0.5521, \quad b = 1.7999, \quad \hat{x}^{(0)}(k) = \left( 1 - e^{-0.5521} \right) \left( x^{(0)}(1) + \frac{1.7999}{0.5521} \right) e^{0.5521(k-1)}, \quad k = 2, 3, \ldots, 7
Second, the parameters are estimated and the improved GM(1,1) model is constructed as follows:

C = 4.4932, \quad \hat{x}^{(0)}(k) = 4.4932 \left[ e^{0.5521 k} - e^{0.5521 (k-1)} \right], \quad k = 2, 3, \ldots, 7
The comparison of the prediction accuracy results for the above mentioned models is displayed in Table 3.
The prediction accuracy results for the two models are displayed in Table 1 and Table 3, respectively. From Table 1, the in-sample MAPE of the traditional GM(1,1) model and the improved GM(1,1) model is 1.5195% and 0.5004%, respectively, and the out-of-sample MAPE is 2.6112% and 0.6913%, respectively.
From Table 3, the in-sample MAPE of the traditional GM(1,1) model and the improved GM(1,1) model is 15.14% and 2.83%, respectively, and the out-of-sample MAPE is 19.63% and 4.09%, respectively.

4. Discussion

The fitting and prediction errors of the improved GM(1,1) model are fairly small, and its absolute relative errors are much smaller than those of the traditional model. Furthermore, Table 1 provides a way of assessing the performance of the predicted values against the function f(t) = e^{0.3t}. An absolute relative error as near to zero as possible is desirable, and Table 1 shows that the absolute relative errors of the improved GM(1,1) model are much closer to zero than those of the traditional GM(1,1) model. Actual and fitted values of the two compared models are given in Table 1. As presented there, the improved GM(1,1) model has a smaller in-sample MAPE (0.5004%) than the traditional GM(1,1) model (1.5195%), which implies that the improved GM(1,1) model reduces the fitting error of the traditional GM(1,1) model. From a short-term forecasting perspective, the improved GM(1,1) model also has a lower out-of-sample MAPE (0.6913%) than the traditional GM(1,1) model, which means that the improved GM(1,1) model reaches the objective of minimizing the forecasting error.
It can be seen from Table 3 that the fitted and predicted values of the traditional GM(1,1) model are consistently underestimated, with absolute relative errors ranging between 11.20% and 22.26%, while the fitted and predicted values of the improved GM(1,1) model are overestimated at some points of the sequence, with absolute relative errors ranging between 0.81% and 7.23%. Table 3 shows the actual and fitted values of the two compared models and reveals major differences between their outcomes. Since a MAPE of 15.14% is too high, the traditional GM(1,1) model is not acceptable for these data. This high error arises because the traditional GM(1,1) model uses x(0)(1), the first entry of the raw data sequence, as its initial condition. The accuracy rises significantly when the optimization of the initial condition proposed in this study is utilized: the improved model attains a MAPE of 2.83%. From a short-term forecasting viewpoint, the prediction MAPE of the improved model for 2003, 2004 and 2005 is 4.09%, far lower than that of the traditional model, so the prediction accuracy of the improved model is considerably higher.
From the comparative analysis of Table 1 and Table 3, it is obvious that the improved model is superior both in fitting and in prediction. Finally, the improved GM(1,1) model also performed better than the models reported in other studies [1,2,13].

5. Conclusions

This study proposed a new formula for calculating the initial condition of the GM(1,1) model. The minimum sum of squared errors was utilized to derive the new initial condition of the GM(1,1) forecasting formula. Hence, the fitted curve of the traditional grey model GM(1,1) is freed from automatically passing through a single point of the raw data. The improved GM(1,1) model significantly enhances the precision of the grey forecasting model, as shown by the results of the numerical examples.

Acknowledgments

We would like to thank the Ministry of Higher Education in Libya for financial support, and Universiti Malaysia Terengganu (UMT) for their support.

Author Contributions

Mahdi Madhi developed and solved the proposed model, carried out the analysis, and wrote the paper. Norizan Mohamed revised the paper. Both authors approved the final version of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Deng, J. Introduction to grey system theory. J. Grey Syst. 1989, 1, 1–24. [Google Scholar]
  2. Liu, S.; Jeffrey, F.; Lin, Y. Grey Systems: Theory and Applications; Springer: Berlin, Germany, 2010. [Google Scholar]
  3. Wang, Y.; Zhijie, C.; Zhiqiang, G.; Mianyun, C. A generalization of the GM(1,1) direct modeling method with a step by step optimizing grey derivative’s whiten values and its applications. Kybernetes 2004, 33, 382–389. [Google Scholar]
  4. Sun, Y.N.; Wei, Y. Optimization of grey derivative in GM(1,1) based on the discrete exponential sequence. In Proceedings of the International Symposium on Information Processing, San Francisco, CA, USA, 13–16 April 2009; pp. 313–315.
  5. Zhou, R.; Li, R.-G.; Chen, Y. The Optimized White Differential Equation Based on the Original Grey Differential Equation. Int. J. Educ. Manag. Eng. 2011, 1, 44. [Google Scholar] [CrossRef]
  6. Chang, T.-C.; Wen, K.-L.; Chang, H.T.; You, M.-L. Inverse approach to find an optimum α for grey prediction model. In Proceedings of the 1999 IEEE International Conference on Systems, Man, and Cybernetics, Tokyo, Japan, 12–15 October 1999; pp. 309–313.
  7. Dai, W.; Chen, Y. Research of GM(1,1) Background Value Based on Gauss-Legendre quadrature and Its Application. In Proceedings of the ICIT’07, IEEE International Conference on Integration Technology, Shenzhen, China, 20–24 March 2007; pp. 100–102.
  8. Yao, M.; Wang, X. Electricity consumption forecasting based on a class of new GM(1,1) model. In Mechatronics and Automatic Control Systems; Springer International Publishing: Switzerland, 2014; pp. 947–953. [Google Scholar] [CrossRef]
  9. Mahdi Hassan, M.; Norizan, M. A Modified Grey Model GM(1,1) Based on Reconstruction of Background Value. Far East J. Math. Sci. 2017, 101, 189–199. [Google Scholar]
  10. Mahdi Hassan, M.; Norizan, M. An Improved GM(1,1) Model Based on Modified Background Value. Inf. Technol. J. 2017, 16, 11–16. [Google Scholar]
  11. Dang, Y.; Liu, S.; Chen, K. The GM models that x (n) be taken as initial value. Kybernetes 2004, 33, 247–254. [Google Scholar]
  12. Xie, N.-M.; Liu, S.-F. Discrete grey forecasting model and its optimization. Appl. Math. Model. 2009, 33, 1173–1186. [Google Scholar] [CrossRef]
  13. Wang, Y.; Dang, Y.; Li, Y.; Liu, S. An approach to increase prediction precision of GM(1,1) model based on optimization of the initial condition. Expert Syst. Appl. 2010, 37, 5640–5644. [Google Scholar] [CrossRef]
  14. Chen, Q.; Li, J. Research on Optimum Weighted Combination GM(1,1) Model with Different Initial Value. In International Conference on Intelligent Computing; Springer International Publishing: Switzerland, 2015; pp. 354–362. [Google Scholar]
  15. Zhao, Y.Z.; Wu, C.-Y. An improved GM(1,1) model of integrated optimizing its background value and initial condition. In Fuzzy Information and Engineering 2010; Springer: Berlin/Heidelberg, Germany, 2010; pp. 693–704. [Google Scholar]
Table 1. The comparison of the prediction accuracy results of the traditional grey model GM(1,1) and the improved model. MAPE: mean absolute percentage error.

k      Actual Values   Traditional GM(1,1)                 Improved GM(1,1)
                       Model Values   Abs. Rel. Error (%)  Model Values   Abs. Rel. Error (%)
1      1.3499          -              -                    -              -
2      1.8221          1.8064         0.8594               1.8421         1.0950
3      2.4596          2.4330         1.0809               2.4810         0.8692
4      3.3201          3.2769         1.3006               3.3415         0.6451
5      4.4816          4.4135         1.5188               4.5005         0.4226
6      6.0497          5.9444         1.7407               6.0616         0.1963
7      8.1662          8.0062         1.9589               8.1641         0.0261
8      11.0232         10.7832        2.1769               10.9958        0.2484
MAPE                                  1.5195                              0.5004
9 *    14.8797         14.5235        2.3942               14.8098        0.4700
10 *   20.0855         19.5610        2.6114               19.9466        0.6915
11 *   27.1126         26.3458        2.8281               26.8652        0.9125
MAPE                                  2.6112                              0.6913

* Forecasting value.
Table 2. The LCD TV output per year (unit: ten thousand).

Year     1996   1997   1998    1999    2000    2001    2002    2003     2004     2005
Output   3.28   5.48   10.07   17.70   29.73   49.39   92.67   162.23   280.86   513.40
Table 3. The comparison of the prediction accuracy results of the traditional GM(1,1) model and the improved model.

k      Actual Values   Traditional GM(1,1)                 Improved GM(1,1)
                       Model Values   Abs. Rel. Error (%)  Model Values   Abs. Rel. Error (%)
1      3.28            -              -                    -              -
2      5.48            4.82           12.06                5.75           4.94
3      10.07           8.37           16.88                9.99           0.81
4      17.70           14.54          17.86                17.35          1.99
5      29.73           25.25          15.06                30.13          1.35
6      49.39           43.86          11.20                52.34          5.96
7      92.67           76.18          17.80                90.90          1.91
MAPE                                  15.14                               2.83
8 *    162.23          132.31         18.44                157.88         2.68
9 *    280.86          229.80         18.18                274.22         2.36
10 *   513.40          399.14         22.26                476.29         7.23
MAPE                                  19.63                               4.09

* Forecasting value.
