Article

GM(1,1;λ) with Constrained Linear Least Squares

Department of Electrical Engineering, Lunghwa University of Science and Technology, Taoyuan 33326, Taiwan
* Author to whom correspondence should be addressed.
Axioms 2021, 10(4), 278; https://doi.org/10.3390/axioms10040278
Submission received: 1 September 2021 / Revised: 20 October 2021 / Accepted: 25 October 2021 / Published: 27 October 2021
(This article belongs to the Special Issue Grey System Theory and Applications in Mathematics)

Abstract

In the original GM(1,1), only the development coefficient a and the grey input b are generally estimated by the ordinary least squares method; the weight of the background value, denoted λ, cannot be obtained simultaneously by such a method. This study, therefore, proposes two simple transformation formulations such that the unknown parameters a, b and λ can be simultaneously estimated by the least squares method. Such a grey model is hence termed the GM(1,1;λ). On the other hand, because the permissible zone of the development coefficient is bounded, the parameter estimation of the GM(1,1) can be regarded as a bound-constrained least squares problem. Since constrained linear least squares problems are generally solved by an iterative approach, this study applies the Matlab function lsqlin to solve such constrained problems. Numerical results show that the proposed GM(1,1;λ) performs better than the GM(1,1) in terms of both model fitting accuracy and forecasting precision.

1. Introduction

Uncertain systems with small samples and poor information exist commonly in the real world. Among them, a system with partially known (white) and partially unknown (black) information is generally regarded as a grey system. Grey system theory, introduced by Deng in the early 1980s, is a methodology that focuses on the study of uncertain systems through generating, excavating, and extracting useful information from what is available [1,2]. The research areas of the grey system theory can generally be categorized into the following six fields: grey generating, grey relational analysis, grey models, grey prediction, grey decision making and grey control [3]. The grey model is the kernel of grey prediction, where the latter is one of the most important and widely used fields in the grey system theory. In the class of grey models, the GM(1,1), namely the first-order and single-variable grey model, is the most basic and widely used model. It requires a relatively small amount of data (typically four or more) to construct a mathematical model, and then employs a simple calculation process to estimate the behavior of unknown systems. Generally speaking, the GM(1,1) has a high forecasting accuracy and has been successfully applied to many fields, such as prediction control [4], power load forecasting [5], fuel production prediction [6,7], agricultural output [8], energy demand forecasting [9,10,11], etc.
The response of the classic GM(1,1) is essentially an exponential model with two internal parameters: the development coefficient a and the grey input b. Generally speaking, these internal parameters are estimated by the least squares method with the background values being part of the observed data. In the grey model, the background values are obtained by a mean generating operation, which is generally defined as the equal-weighted average of two consecutive data. Such an equal-weighted average formula also motivates optimizing the weight of the background value, hereafter denoted as λ, to improve the fitting and prediction precision of the GM(1,1) [12,13,14]. To date, many researchers have proposed improvements to the GM(1,1) to address this limitation. The improved methods can generally be divided into three categories: (i) compensation with a residual GM(1,1) model [2], (ii) combination with different expert schemes, e.g., fuzzy control [15], and (iii) optimization of the internal parameters a, b, and/or λ by evolutionary algorithms, such as the genetic algorithm (GA) [8,12,14] and differential evolution (DE) [13]. Rather than using the classic GM(1,1), the adaptive GA-based optimization approach proposed in [7] simultaneously optimizes the parameters a, b and λ of the GM(1,1;λ) to improve accuracy compared with the traditional GM(1,1).
Linear regression models are often fitted using the least squares approach. Thus, the internal parameters of a classic GM(1,1), i.e., the development coefficient a and the grey input b, are generally estimated by the ordinary least squares method. Since both a and b are bounded [16], the parameter estimation can be regarded as a bound-constrained least squares problem. Linear least squares problems with bound constraints are commonly solved to find model parameters within bounds dictated by physical considerations. Common algorithms include bounded-variable least squares (BVLS) and the Matlab function lsqlin [17]. When the parameter estimation of the GM(1,1) also takes the weight of the background value λ into consideration, however, the problem becomes one of nonlinear regression. Since some nonlinear regression problems can be moved to a linear domain by a suitable transformation of the model formulation [18,19,20], this study proposes two simple transformation formulations to solve this problem. With the proposed transformation, the unknown parameters a, b and λ can all be estimated by the ordinary least squares method. In addition, this study also attempts to improve the fitting and prediction precision of the GM(1,1) by the proposed parameter estimation approach.
The remainder of this paper is organized as follows. Section 2 briefly describes the GM(1,1) and linear least squares problems. The proposed GM(1,1;λ) with constrained linear least squares is given in Section 3. Section 4 presents the simulation results on two numerical problems. Finally, Section 5 contains some conclusions of this study.

2. GM(1,1) and Linear Least Squares Problems

2.1. GM(1,1)

Assume that the raw data sequence is $x^{(0)} = (x^{(0)}(1), x^{(0)}(2), \ldots, x^{(0)}(n))$, where $x^{(0)}(k) \ge 0$ for all $k$. Performing the accumulated generating operation (AGO) on $x^{(0)}$ yields a new generated sequence $x^{(1)}$, where the AGO is defined by
$$x^{(1)}(k) = \sum_{i=1}^{k} x^{(0)}(i), \quad k = 1, 2, \ldots, n, \tag{1}$$
and $x^{(1)}$ is termed the first-order accumulated generating sequence of $x^{(0)}$. Once $x^{(1)}$ is obtained, the whitenization function of the traditional GM(1,1) can be defined as
$$\frac{dx^{(1)}}{dt} + a x^{(1)} = b, \tag{2}$$
where $a$ and $b$ represent the development coefficient and the grey input (control variable), respectively [2,3]. In the GM(1,1), the forecasting value of $x^{(1)}(k)$ is determined by the time response function of (2) with the initial condition $x^{(1)}(1) = x^{(0)}(1)$, that is,
$$\hat{x}^{(1)}(k) = \left( x^{(0)}(1) - \frac{b}{a} \right) e^{-a(k-1)} + \frac{b}{a}, \quad k = 1, 2, \ldots, n. \tag{3}$$
In addition, the grey differential equation corresponding to the whitenization function given in (2) is
$$x^{(0)}(k) + a z^{(1)}(k) = b, \quad k = 2, 3, \ldots, n, \tag{4}$$
where
$$z^{(1)}(k) = \lambda x^{(1)}(k) + (1 - \lambda) x^{(1)}(k-1). \tag{5}$$
Herein, $z^{(1)}(k)$ is termed the background value and $\lambda \in [0, 1]$. The matrix form of the grey differential Equation (4) is
$$B p = Y, \tag{6}$$
where
$$B = \begin{bmatrix} -z^{(1)}(2) & 1 \\ -z^{(1)}(3) & 1 \\ \vdots & \vdots \\ -z^{(1)}(n) & 1 \end{bmatrix}, \tag{7}$$
$$p = [a,\ b]^T, \tag{8}$$
$$Y = [x^{(0)}(2),\ x^{(0)}(3),\ \ldots,\ x^{(0)}(n)]^T. \tag{9}$$
Then, $a$ and $b$ can be determined by the ordinary least squares method:
$$[a,\ b]^T = (B^T B)^{-1} B^T Y. \tag{10}$$
The forecasting value of $x^{(0)}(k)$ is obtained by applying the inverse AGO (IAGO) to $\hat{x}^{(1)}(k)$ as follows:
$$\hat{x}^{(0)}(k) = \hat{x}^{(1)}(k) - \hat{x}^{(1)}(k-1), \quad k = 2, 3, \ldots, n. \tag{11}$$
Then, substituting (3) into (11) yields
$$\hat{x}^{(0)}(k) = (1 - e^{a}) \left( x^{(0)}(1) - \frac{b}{a} \right) e^{-a(k-1)}, \tag{12}$$
where $k = 2, 3, \ldots, n$, and $\hat{x}^{(0)}(1) = x^{(0)}(1)$. Finally, it should be noted that the GM(1,1) with a constant parameter $\lambda$, usually set to 0.5, is the so-called classic GM(1,1), whereas the GM(1,1) with an adjustable parameter $\lambda$ is denoted GM(1,1;λ) for differentiation.
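As a concrete illustration of Equations (1)–(12), the classic GM(1,1) pipeline (AGO, ordinary least squares with λ = 0.5, time response, IAGO) can be sketched in a few lines of Python/NumPy. The paper itself uses Matlab; this translation, including the function name `gm11`, is only an illustrative sketch.

```python
import numpy as np

def gm11(x0, steps=0):
    """Classic GM(1,1) (lambda = 0.5): fit a and b by ordinary least
    squares and return the fitted (and optionally extrapolated) values.

    x0 : 1-D array of non-negative raw data, length n >= 4.
    """
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)                           # AGO, Eq. (1)
    z1 = 0.5 * (x1[1:] + x1[:-1])                # background values, Eq. (5)
    B = np.column_stack([-z1, np.ones(n - 1)])   # Eq. (7)
    Y = x0[1:]                                   # Eq. (9)
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # Eq. (10)
    k = np.arange(1, n + steps + 1)
    x1_hat = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a  # Eq. (3)
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])      # IAGO, Eq. (11)
    return a, b, x0_hat
```

On near-exponential data the fitted sequence tracks the raw data closely, which is exactly the regime in which the GM(1,1) is intended to be used.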

2.2. Linear Least Squares Problems

As far as the linear system given in (6) is concerned, a classic approach to choosing $p$ is to minimize the least squares (LS) error between $Y$ and $Bp$:
$$\min_{p} \| B p - Y \|^2, \quad \text{subject to } \alpha \le p \le \beta, \tag{13}$$
where $\alpha$ and $\beta$ are given constant vectors and $\|\cdot\|$ denotes the Euclidean norm. A linear least squares problem with such an additional constraint on the solution is termed a bound-constrained least squares problem. Generally speaking, solution methods for bound-constrained least squares problems of the form (13) can be categorized as active set or interior point methods [17]. Active set methods solve a sequence of equality-constrained problems with efficient solution methods, whereas interior point methods use variants of Newton's method to solve the Karush–Kuhn–Tucker (KKT) conditions for (13).
Without the constraints $\alpha \le p \le \beta$, the ordinary least squares formula given in (10) is a simple method for solving the LS problem of (13). However, when the parameter estimation of the GM(1,1) also takes the weight of the background value $\lambda$ into consideration, the estimation becomes a nonlinear regression problem. Even so, some nonlinear regression problems can be moved to a linear domain by a suitable transformation of the model formulation [18,19,20]. With the help of such a transformation, this study can apply the Matlab function lsqlin to solve the constrained problem given in (13). Note that lsqlin is a linear least squares solver with bounds or linear constraints.
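Readers without Matlab can reproduce the role of lsqlin with SciPy's `lsq_linear`, which solves exactly the box-bounded problem (13) (unlike lsqlin, it does not accept general linear constraints). The toy system below is made up for illustration; its unconstrained minimizer lies outside the box, so the constrained solution lands on the boundary.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Toy instance of (13): min ||B p - Y||^2 subject to alpha <= p <= beta.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
Y = np.array([2.0, -1.0, 1.5])

# The unconstrained minimizer is about (2.17, -0.83) -- outside the box.
res = lsq_linear(B, Y, bounds=([0.0, -0.5], [1.0, 0.5]), method='bvls')
print(res.x)  # first component pinned at its upper bound 1.0
```

Here `method='bvls'` selects the bounded-variable least squares (active set) algorithm mentioned in the introduction; the default 'trf' method is an interior-point-style alternative.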

3. GM(1,1;λ) with Constrained Linear Least Squares

Upon inspection of (4), the grey differential equation of the GM(1,1) contains only two undetermined parameters ($a$ and $b$) and does not contain the weight of the background value $\lambda$. This is the main reason that the weighting factor $\lambda$ cannot be determined by the least squares method given in (10). Substituting (5) into (4) yields
$$x^{(0)}(k) + a \lambda x^{(1)}(k) + a (1 - \lambda) x^{(1)}(k-1) = b, \quad k = 2, 3, \ldots, n. \tag{14}$$
As can be seen, two nonlinear terms, $a\lambda$ and $a(1-\lambda)$, appear in the new Equation (14), hereafter termed the grey differential equation of the GM(1,1;λ). Unfortunately, $a$ and $\lambda$ enter only through these inseparable products, so (14) is not in the form of a linear regression model, and it is difficult to apply the method of least squares directly to estimate the unknown parameters in (14). Thus, the main issue to be solved in this study is how to simultaneously estimate the unknown parameters $a$, $b$ and $\lambda$ by the least squares method.

3.1. Parameters Estimation of GM(1,1;λ)

A nonlinear model can sometimes be transformed into a linear one. For example, an exponential model, say $y = \alpha e^{\beta x}$, where $\alpha$ and $\beta$ are unknown parameters, can be transformed into a linear model by taking logarithms, i.e., $\ln y = \ln \alpha + \beta x$. The unknown parameters can then be estimated by the ordinary least squares method [18,19,20]. The main purpose of this section is to find a suitable transformation of the model formulation (14) such that the nonlinear regression problem can be moved to a linear domain.
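The log-linearization just described is easy to verify numerically; the data below are synthetic and chosen only for illustration.

```python
import numpy as np

# Fit y = alpha * exp(beta * x) by regressing ln(y) on x.
x = np.arange(6, dtype=float)
y = 2.0 * np.exp(0.3 * x)                  # noiseless synthetic data

X = np.column_stack([np.ones_like(x), x])  # design matrix [1, x]
ln_alpha, beta = np.linalg.lstsq(X, np.log(y), rcond=None)[0]
alpha = np.exp(ln_alpha)
print(alpha, beta)
```

Because the data are noiseless, the ordinary least squares fit recovers the generating parameters α = 2 and β = 0.3 essentially exactly.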
Let $a_1 = a\lambda$ and $a_2 = a(1 - \lambda)$, where $a_1$ and $a_2$ are the components of the development coefficient $a$. Then, (14) becomes
$$x^{(0)}(k) + a_1 x^{(1)}(k) + a_2 x^{(1)}(k-1) = b, \quad k = 2, 3, \ldots, n. \tag{15}$$
Obviously, (15) is a linear model. The corresponding matrix form is
$$B_\lambda p_\lambda = Y, \tag{16}$$
where
$$B_\lambda = \begin{bmatrix} -x^{(1)}(2) & -x^{(1)}(1) & 1 \\ -x^{(1)}(3) & -x^{(1)}(2) & 1 \\ \vdots & \vdots & \vdots \\ -x^{(1)}(n) & -x^{(1)}(n-1) & 1 \end{bmatrix}, \tag{17}$$
$$p_\lambda = [a_1,\ a_2,\ b]^T. \tag{18}$$
According to the ordinary least squares method, $a_1$, $a_2$ and $b$ can be estimated by
$$[a_1,\ a_2,\ b]^T = (B_\lambda^T B_\lambda)^{-1} B_\lambda^T Y. \tag{19}$$
Once the components $a_1$ and $a_2$ are determined, the development coefficient $a$ and the weighting factor $\lambda$ are obtained by
$$a = a_1 + a_2, \tag{20}$$
$$\lambda = a_1 / (a_1 + a_2). \tag{21}$$
With the proposed simple transformation, the development coefficient $a$, the grey input $b$, and the weight of the background value $\lambda$ can be simultaneously obtained from the ordinary least squares method (19) and the transformation formulations given in (20) and (21).
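The transformation (15)–(21) is straightforward to implement; a minimal Python/NumPy sketch (function name is ours, not from the paper) is:

```python
import numpy as np

def gm11_lambda(x0):
    """Estimate a1, a2, b of Eq. (15) by ordinary least squares,
    then recover a and lambda via Eqs. (20)-(21)."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                 # AGO, Eq. (1)
    # Eq. (17): columns -x1(k), -x1(k-1), 1 for k = 2..n
    B = np.column_stack([-x1[1:], -x1[:-1], np.ones(len(x0) - 1)])
    Y = x0[1:]
    a1, a2, b = np.linalg.lstsq(B, Y, rcond=None)[0]   # Eq. (19)
    a = a1 + a2                                        # Eq. (20)
    lam = a1 / (a1 + a2)                               # Eq. (21)
    return a, b, lam
```

Because the λ = 0.5 model corresponds to the particular choice a1 = a2 = a/2, the three-parameter fit can never have a larger least squares residual than the classic GM(1,1) on the same data.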

3.2. Boundary Constraint on Estimated Parameters

Generally speaking, the parameter estimation of the classic GM(1,1) amounts to solving a linear least squares problem without any additional constraint on the solution, i.e., a simple unconstrained linear least squares problem. Let $\lambda = 0.5$; then substituting (1) and (5) into (10) yields [16]
$$a = \left( \sum_{k=2}^{n} z^{(1)}(k) \sum_{k=2}^{n} x^{(0)}(k) - (n-1) \sum_{k=2}^{n} z^{(1)}(k) x^{(0)}(k) \right) / \Delta, \tag{22}$$
$$b = \left( \sum_{k=2}^{n} \left[ z^{(1)}(k) \right]^2 \sum_{k=2}^{n} x^{(0)}(k) - \sum_{k=2}^{n} z^{(1)}(k) \sum_{k=2}^{n} z^{(1)}(k) x^{(0)}(k) \right) / \Delta, \tag{23}$$
$$\Delta = (n-1) \sum_{k=2}^{n} \left[ z^{(1)}(k) \right]^2 - \left( \sum_{k=2}^{n} z^{(1)}(k) \right)^2. \tag{24}$$
Assume that the length of the raw data sequence $x^{(0)}$ used to construct a classic GM(1,1) is $n$ and that the sequence is bounded, i.e., $0 \le x^{(0)}(k) \le x^{(0)}_{\max}$, where $x^{(0)}_{\max} = \max_k x^{(0)}(k)$ and $k = 1, 2, \ldots, n$. Then, it can be derived from (22)–(24) that [16]
$$-2/(n+1) < a < 2/(n+1), \tag{25}$$
$$-x^{(0)}_{\max} \le b \le x^{(0)}_{\max}. \tag{26}$$
It is obvious that (25) and (26) are the boundary constraints on $a$ and $b$, respectively.
However, these two constraints are neglected when the parameters $a$ and $b$ are estimated by the least squares method (10). Moreover, when the weight of the background value $\lambda$ is taken into consideration, the parameter estimation of the GM(1,1;λ) becomes a linear least squares problem with newly added constraints on the parameters $a_1$ and $a_2$. The main issue of this section is how to properly determine these newly added constraints on the estimated parameters.
It can be inferred from the boundary constraint (25), the condition $0 \le \lambda \le 1$, and the proposed transformations $a_1 = a\lambda$ and $a_2 = a(1-\lambda)$ that $-2/(n+1) < a_i < 2/(n+1)$, $i = 1, 2$. In addition, the condition $0 \le \lambda \le 1$ implies that $a_1$, $a_2$ and $a$ must have the same sign. Then, the parameter estimation of the GM(1,1;λ) can be obtained from one of the following two bound-constrained least squares problems:
(i) $a \ge 0$:
$$\min \| B_\lambda p_\lambda - Y \|^2 \quad \text{subject to} \quad 0 \le a_i < C/(n+1),\ i = 1, 2; \quad 0 \le a < C/(n+1); \quad -x^{(0)}_{\max} \le b \le x^{(0)}_{\max}; \tag{27}$$
(ii) $a < 0$:
$$\min \| B_\lambda p_\lambda - Y \|^2 \quad \text{subject to} \quad -C/(n+1) < a_i \le 0,\ i = 1, 2; \quad -C/(n+1) < a < 0; \quad -x^{(0)}_{\max} \le b \le x^{(0)}_{\max}; \tag{28}$$
where $C = 2$ follows from (25). Our experience shows that the development coefficient $a$ is generally bounded within $(-1/(n+1),\ +1/(n+1))$, i.e., $C = 1$ [16]. This fact reveals that the constant $C$ could take a value from the interval $[1, 2]$; generally, $C = 1.5$ or $2.0$ is used in the simulations. Note that this study also applies the Matlab function lsqlin to solve such constrained problems.
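In SciPy, the two sign cases (27) and (28) can be approximated by solving both box-bounded problems with `lsq_linear` and keeping the lower-cost solution. This is a sketch under one stated simplification: `lsq_linear` supports only box bounds, so the additional bound on a = a1 + a2 (a general linear constraint that Matlab's lsqlin accepts directly) is not imposed here. The function name and the default C = 1.5 follow the paper's notation but the implementation is ours.

```python
import numpy as np
from scipy.optimize import lsq_linear

def gm11_lambda_constrained(x0, C=1.5):
    """Sketch of the bound-constrained estimation (27)/(28) with SciPy's
    lsq_linear in place of Matlab's lsqlin.  Only the box bounds on
    a1, a2 and b are imposed; the extra bound on a = a1 + a2 would
    require a solver that accepts general linear constraints."""
    x0 = np.asarray(x0, dtype=float)
    n = len(x0)
    x1 = np.cumsum(x0)
    B = np.column_stack([-x1[1:], -x1[:-1], np.ones(n - 1)])  # Eq. (17)
    Y = x0[1:]
    r, bmax = C / (n + 1), x0.max()

    candidates = []
    for lo, hi in [((0.0, 0.0, -bmax), (r, r, bmax)),     # case (i):  a >= 0
                   ((-r, -r, -bmax), (0.0, 0.0, bmax))]:  # case (ii): a < 0
        candidates.append(lsq_linear(B, Y, bounds=(lo, hi), method='bvls'))
    best = min(candidates, key=lambda res: res.cost)

    a1, a2, b = best.x
    a = a1 + a2                         # Eq. (20)
    lam = a1 / a if a != 0 else 0.5     # Eq. (21); fallback if degenerate
    return a, b, lam
```

Because each box forces a1 and a2 to share a sign, the recovered λ automatically lies in [0, 1], as required by (5).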

4. Simulation Results

This study selects two numerical examples to verify the effectiveness of the proposed GM(1,1;λ) against the original GM(1,1) and the GA-based GM(1,1). The MAPE (mean absolute percentage error) was employed to measure the fitting/forecasting performance, as follows [21]:
$$\mathrm{MAPE} = \frac{1}{n-1} \sum_{k=2}^{n} \left| \frac{x^{(0)}(k) - \hat{x}^{(0)}(k)}{x^{(0)}(k)} \right| \times 100\%. \tag{29}$$
The lower the MAPE value, the more accurate the grey model. Table 1 lists the MAPE criteria for evaluating a forecasting model [22].
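Computationally, the MAPE above is a one-liner; this helper (our naming) skips the first point, matching the notes under Tables 2 and 3.

```python
import numpy as np

def mape(actual, fitted):
    """MAPE over k = 2..n, excluding the first datum as in Eq. (29)."""
    actual = np.asarray(actual, dtype=float)
    fitted = np.asarray(fitted, dtype=float)
    return np.mean(np.abs((actual[1:] - fitted[1:]) / actual[1:])) * 100.0
```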

4.1. Example 1

Example 1 takes inbound tourists' arrivals in Taiwan from 2003 to 2014 as an experimental example [12]. Table 2 lists the corresponding real values, together with the fitting values obtained by the original GM(1,1), the GA-based GM(1,1) [12] and the proposed GM(1,1;λ). Figure 1 depicts the real and forecasting values of the different forecasting models. As can be seen, the original GM(1,1), i.e., $\lambda = 0.5$, has a MAPE of 6.67%.
In [12], the parameter settings of the GA-based GM(1,1) are given as follows: population size = 70, number of generations = 100, reproduction probability = 0.85, and mutation probability = 0.005. With these parameter settings, the optimal weight of the background value obtained by the GA is $\lambda = 0.47323$, with a minimum MAPE of 6.58%. When the proposed GM(1,1;λ) with $C = 1.5$ is applied to the same problem, the estimated parameters are $a_1 = -0.0769$, $a_2 = -0.0385$ and $b$ = 2,334,485.66. It then follows from (20) and (21) that $a = -0.1154$ and $\lambda = 0.6667$. The corresponding overall fitting result (MAPE) is 6.33%. Although all the grey models provide highly accurate forecasting according to the MAPE criteria, the proposed GM(1,1;λ) has the best fitting ability. This fact also reveals that the proposed approach actually improves the model fitting precision of the original GM(1,1).

4.2. Example 2

Example 2 takes the processing volumes of crude oil given in [7] as an illustration. In the example, the processing volumes from 1983 to 1992 are selected as the fitting (in-sample) data to examine model fitting ability, whereas the volumes from 1993 to 1994 are selected as the predicting (out-of-sample) data for the ex post testing. The real values and the corresponding fitting and forecasting results obtained by the GM(1,1), GA-based GM(1,1) and the proposed GM(1,1;λ) with C = 1.5 are depicted in Table 3 and Figure 2.
It can be seen from Table 3 that the MAPEs of the GM(1,1), the GA-based GM(1,1) and the proposed GM(1,1;λ) were 4.89%, 3.60% and 4.18%, respectively, for model fitting, whereas they were 5.04%, 4.84% and 3.30%, respectively, for the ex post testing. According to the MAPE criteria, all the grey models attain high forecasting ability on both in-sample and out-of-sample data. The GA-based GM(1,1) has the best fitting accuracy, whereas the proposed GM(1,1;λ) achieves the best forecasting accuracy. In addition, the proposed GM(1,1;λ) also performs better than the original GM(1,1) in terms of both fitting and forecasting ability. That is to say, the proposed constrained least squares method can effectively enhance the fitting and forecasting accuracy of the GM(1,1).

5. Conclusions

This study proposes two simple transformation formulations such that the development coefficient $a$, the grey input $b$, and the weight of the background value $\lambda$ can be simultaneously obtained from the ordinary least squares method. The corresponding boundary constraints are also derived in the study. With the help of the estimation of λ, the proposed approach can also be applied to improve the fitting and prediction precision of the original GM(1,1). Two real cases, inbound tourist arrivals in Taiwan and the processing volumes of crude oil, were used to evaluate the model fitting and forecasting performance of the proposed GM(1,1;λ). Numerical results showed that the GM(1,1), the GA-based GM(1,1) and the proposed GM(1,1;λ) all provide highly accurate forecasting according to the MAPE criteria. In particular, the proposed GM(1,1;λ) performs better than the traditional GM(1,1) in terms of both model fitting and forecasting accuracy. That is to say, the proposed approach actually improves the fitting and forecasting precision of the original GM(1,1).

Author Contributions

Conceptualization, M.-F.Y.; methodology, M.-F.Y.; software, M.-F.Y. and M.-H.C.; validation, M.-F.Y. and M.-H.C.; writing—original draft preparation, M.-F.Y.; writing—review and editing, M.-F.Y. and M.-H.C. All authors have read and agreed to the published version of the manuscript.

Funding

The work was supported by the Ministry of Science and Technology, Taiwan, through Grant MOST 108-2221-E-262-003.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Deng, J. Control problems of grey system. Syst. Control. Lett. 1982, 1, 288–294. [Google Scholar]
  2. Liu, S.; Lin, Y. Grey Systems: Theory and Applications (Understanding Complex Systems); Springer: Heidelberg/Berlin, Germany, 2011. [Google Scholar]
  3. Wen, K.L. Grey Systems: Modeling and Prediction; Yang’s Scientific Press: Tucson, AZ, USA, 2004. [Google Scholar]
  4. Hodzic, M.; Tai, L.C. Grey predictor reference model for assisting particle swarm optimization for wind turbine control. Renew. Energy 2016, 86, 251–256. [Google Scholar] [CrossRef]
  5. Zhao, H.; Guo, S. An optimized grey model for annual power load forecasting. Energy 2016, 107, 272–286. [Google Scholar] [CrossRef]
  6. Zhou, W.; He, J.M. Generalized GM (1, 1) model and its application in forecasting of fuel production. Appl. Math. Model. 2013, 37, 6234–6243. [Google Scholar] [CrossRef]
  7. Shi, X.; Wang, Z.; Wang, Z. Adaptive genetic algorithm GM(1,1;λ) model and its application. Int. J. Nonlinear Sci. 2011, 11, 195–199. [Google Scholar]
  8. Ou, S.L. Forecasting agricultural output with an improved grey forecasting model based on the genetic algorithm. Comput. Electron. Agric. 2012, 85, 33–39. [Google Scholar] [CrossRef]
  9. Lee, Y.S.; Tong, L.I. Forecasting energy consumption using a grey model improved by incorporating genetic programming. Energy Convers. Manag. 2011, 52, 147–152. [Google Scholar] [CrossRef]
  10. Wang, Z.-X.; Hao, P. An improved grey multivariable model for predicting industrial energy consumption in China. Appl. Math. Model. 2016, 40, 5745–5758. [Google Scholar] [CrossRef]
  11. Hu, Y.C. Nonadditive grey prediction using functional-link net for energy demand forecasting. Sustainability 2017, 9, 1166. [Google Scholar] [CrossRef] [Green Version]
  12. Nan, H.T. Design a grey prediction model based on genetic algorithm for better forecasting international tourist arrivals. J. Grey Syst. 2016, 19, 7–12. [Google Scholar]
  13. Zhao, Z.; Wang, J.; Zhao, J.; Su, Z. Using a grey model optimized by differential evolution algorithm to forecast the per capita annual net income of rural households in China. Omega 2012, 40, 525–532. [Google Scholar] [CrossRef]
  14. Wang, C.H.; Hsu, L.C. Using genetic algorithms grey theory to forecast high technology industrial output. Appl. Math. Comput. 2008, 195, 256–263. [Google Scholar] [CrossRef]
  15. Li, Y.; Ling, L.; Chen, J. Combined grey prediction fuzzy control law with application to road tunnel ventilation system. J. Appl. Res. Technol. 2015, 13, 313–320. [CrossRef]
  16. Yeh, M.F.; Lu, H.C. On some of the basic features of GM(1,1) model. J. Grey Syst. 1996, 8, 19–35.
  17. Mead, J.L.; Renaut, R.A. Least squares problems with inequality constraints as quadratic constraints. Linear Algebra Appl. 2010, 432, 1936–1949. [Google Scholar] [CrossRef] [Green Version]
  18. Zulkifli, N.; Sorooshian, S.; Anvari, A. Modeling for regressing variables. J. Stat. Econom. Methods 2012, 1, 1–8. [Google Scholar]
  19. Seber, G.A.F.; Wild, C.J. Nonlinear Regression; John Wiley and Sons: New York, NY, USA, 1989. [Google Scholar]
  20. Coleman, T.F.; Li, Y. A reflective Newton method for minimizing a quadratic function subject to bounds on some of the variables. SIAM J. Optim. 1996, 6, 1040–1058. [CrossRef]
  21. de Myttenaere, A.; Golden, B.; Le Grand, B.; Rossi, F. Mean absolute percentage error for regression models. Neurocomputing 2016, 192, 38–48. [CrossRef]
  22. DeLurgio, S.A. Forecasting Principles and Applications; Irwin/McGraw-Hill: New York, NY, USA, 1998. [Google Scholar]
Figure 1. Real and forecasting values of different forecasting models for Example 1.
Figure 2. Real and forecasting values of different forecasting models for Example 2.
Table 1. MAPE criteria for model evaluation [22].

MAPE (%) | Forecasting ability
<10 | High
10–20 | Good
20–50 | Reasonable
>50 | Inaccurate
Table 2. Real values of inbound tourists' arrivals in Taiwan from 2003 to 2014 and fitting values obtained by the GM(1,1), GA-based GM(1,1) and the proposed GM(1,1;λ).

Model | GM(1,1) | GA-based GM(1,1) | Proposed GM(1,1;λ)
Development coefficient a | −0.1313 | −0.1317 | −0.1154
Grey input b | 2,038,364.36 | 2,045,775.78 | 2,334,485.66
Background value λ | 0.5 | 0.47323 | 0.6667

Year | Real value | Fitting value | Rel. error (%) | Fitting value | Rel. error (%) | Fitting value | Rel. error (%)
2003 | 2,248,117 | 2,248,117 | 0 | 2,248,117 | 0 | 2,248,117 | 0
2004 | 2,950,342 | 2,493,504.536 | 15.48 | 2,503,172.299 | 15.16 | 2,749,456.515 | 6.80
2005 | 3,378,118 | 2,843,239.684 | 15.83 | 2,855,549.389 | 15.47 | 3,085,728.822 | 8.65
2006 | 3,519,827 | 3,242,028.150 | 7.89 | 3,257,531.379 | 7.45 | 3,463,128.918 | 1.61
2007 | 3,716,063 | 3,696,750.079 | 0.52 | 3,716,101.261 | 0.00 | 3,886,686.936 | 4.59
2008 | 3,845,187 | 4,215,250.612 | 9.62 | 4,239,225.037 | 10.25 | 4,362,048.222 | 13.44
2009 | 4,395,004 | 4,806,475.241 | 9.36 | 4,835,990.102 | 10.03 | 4,895,548.575 | 11.38
2010 | 5,567,277 | 5,480,624.135 | 1.56 | 5,516,763.102 | 0.91 | 5,494,298.694 | 1.31
2011 | 6,087,484 | 6,249,328.127 | 2.66 | 6,293,370.020 | 3.38 | 6,166,278.953 | 1.29
2012 | 7,311,470 | 7,125,849.368 | 2.54 | 7,179,301.608 | 1.81 | 6,920,445.764 | 5.34
2013 | 8,016,280 | 8,125,310.144 | 1.36 | 8,189,947.741 | 2.17 | 7,766,850.955 | 3.11
2014 | 9,910,204 | 9,264,953.766 | 6.51 | 9,342,864.761 | 5.72 | 8,716,775.741 | 12.04
MAPE (%) | | | 6.67 | | 6.58 | | 6.33

Note: MAPE computed without taking the first datum (year 2003) into consideration.
Table 3. Fitting and forecasting results of Example 2.

Model | GM(1,1) | GA-based GM(1,1) | Proposed GM(1,1;λ)
Development coefficient a | −0.03897 | −0.03879 | −0.03841
Grey input b | 7631.41 | 7632 | 7475.61
Background value λ | 0.5 | 0.46 | 1.0

Year | Real value | Fitting value | Rel. error (%) | Fitting value | Rel. error (%) | Fitting value | Rel. error (%)
1983 | 7490 | 7490 | 0 | 7490 | 0 | 7490 | 0
1984 | 7665 | 8079.68 | 5.41 | 8052 | 5.04 | 7914.36 | 3.25
1985 | 7904 | 8400.75 | 6.28 | 8370 | 5.89 | 8224.29 | 4.05
1986 | 8565 | 8734.58 | 1.98 | 8700 | 1.57 | 8546.36 | 0.22
1987 | 9718 | 9081.67 | 6.54 | 9444 | 2.81 | 8881.05 | 8.61
1988 | 10,164 | 9442.56 | 7.09 | 9802 | 3.56 | 9228.84 | 9.20
1989 | 10,528 | 9817.78 | 6.74 | 9983 | 5.17 | 9590.25 | 8.91
1990 | 9783 | 10,207.92 | 4.34 | 10,158 | 3.83 | 9965.82 | 1.87
1991 | 10,250 | 10,613.56 | 3.54 | 10,560 | 3.02 | 10,356.09 | 1.03
1992 | 10,815 | 11,035.32 | 2.03 | 10,977 | 1.49 | 10,761.65 | 0.49
MAPE (%), fitting | | | 4.89 | | 3.60 | | 4.18
1993 | 11,290 | 11,473.84 | 1.62 | 11,412 | 1.08 | 11,183.09 | 0.95
1994 | 11,000 | 11,929.78 | 8.45 | 11,946 | 8.60 | 11,621.03 | 5.65
MAPE (%), forecasting | | | 5.04 | | 4.84 | | 3.30

Note: MAPE computed without taking the first datum (year 1983) into consideration.

Yeh, M.-F.; Chang, M.-H. GM(1,1;λ) with Constrained Linear Least Squares. Axioms 2021, 10, 278. https://doi.org/10.3390/axioms10040278