Article

Forecasting Economic Growth of the Group of Seven via Fractional-Order Gradient Descent Approach

Xiaoling Wang, Michal Fečkan and JinRong Wang
1 Department of Mathematics, Guizhou University, Guiyang 550025, China
2 Department of Mathematical Analysis and Numerical Mathematics, Comenius University in Bratislava, Mlynská dolina, 842 48 Bratislava, Slovakia
3 Mathematical Institute of Slovak Academy of Sciences, Štefánikova 49, 814 73 Bratislava, Slovakia
* Author to whom correspondence should be addressed.
Axioms 2021, 10(4), 257; https://doi.org/10.3390/axioms10040257
Submission received: 29 August 2021 / Revised: 7 October 2021 / Accepted: 11 October 2021 / Published: 15 October 2021
(This article belongs to the Special Issue Fractional Calculus - Theory and Applications)

Abstract

This paper establishes a model of economic growth for all the G7 countries from 1973 to 2016, in which gross domestic product (GDP) is related to land area, arable land, population, school attendance, gross capital formation, exports of goods and services, general government final consumer spending and broad money. Fractional-order gradient descent and integer-order gradient descent are used to estimate the model parameters, fit the GDP and forecast GDP from 2017 to 2019. The results show that fractional-order gradient descent converges faster and achieves better fitting accuracy and prediction performance.

1. Introduction

In recent years, fractional-order models have become a research hotspot because of their advantages. Fractional calculus has developed rapidly in academic circles, with achievements across many fields [1,2,3,4,5,6,7,8,9,10].
Gradient descent is generally used as a method for solving unconstrained optimization problems and is widely applied in estimation and other areas. The rise of fractional calculus provides a new idea for advancing the gradient descent method. Although numerous achievements have been made in the two fields of fractional calculus and gradient descent separately, research combining the two is still in its infancy. Recently, ref. [11] applied fractional-order gradient descent to image processing and solved the problem of blurred image edges and texture details that arises with traditional integer-order denoising methods. Next, ref. [12] improved the fractional-order gradient descent method and used it to identify the parameters of discrete deterministic systems. Thereafter, ref. [13] applied fractional-order gradient descent to the training of backpropagation (BP) neural networks and proved the monotonicity and convergence of the method.
Compared with traditional integer-order gradient descent, the combination of fractional calculus and gradient descent provides an additional degree of freedom: adjusting the order opens new possibilities for the algorithm. In this paper, economic growth models of the seven countries are established, and their cost functions are trained by gradient descent (fractional- and integer-order). To compare the performance of fractional- and integer-order gradient descent, we visualize the rate of convergence of the cost function, evaluate the model with the MSE, MAD and R^2 indicators, and predict the GDP of the seven countries in 2017–2019 using the trained parameters.

The Group of Seven (G7)

The G6 was set up by France after Western countries were hit by the first oil shock. In 1976, Canada's accession marked the birth of the G7, whose members are seven developed countries: the United States, the United Kingdom, France, Germany, Japan, Italy and Canada. The annual summit mechanism of the G7 focuses on major issues of common interest, such as inclusive economic growth, world peace and security, climate change and the oceans, and has had a profound impact on global economic and political governance. In addition to the G7 members, there are a number of developing countries with large economies, such as China, India and Brazil. In the context of economic globalization, studying G7 economic trends and economy-related factors can provide a useful reference for these countries' development.
The economic crisis broke out in Western countries in 1973, so the data in this paper cover the period from 1973 to 2016, for which data are available for all seven countries. Some G7 members (France, Germany, Italy and the United Kingdom) were members of the European Union (EU) during this period, so this paper also establishes an economic growth model for the EU. Data for this article are from the World Bank.

2. Model Description

The prediction of variables generally uses time series models [14] (for example, ARIMA and SARIMA) or artificial neural networks [15,16], which have become very popular in recent years. A time series model mainly predicts the future trend of a variable, but it is difficult for it to reflect the effect of unexpected factors. A neural network model, in turn, requires tuning more parameters, offers too wide a choice of network structures, trains with limited efficiency, and is prone to overfitting.
The linear model, by contrast, is simple in form and easy to build, and its weights intuitively express the importance of each attribute, so the linear model has good interpretability. It is therefore reasonable to build a linear regression model of economic growth, from which one can clearly learn which factors affect the economy.
Next, we chose eight explanatory variables to describe economic growth. The explained variable y denotes GDP as a function of time and is written as follows (a short code sketch follows the list below):
y(t) = \sum_{j=1}^{8} \theta_j x_j(t) + \theta_0 + \epsilon,   (1)
where t indexes the year (t = 1, ..., 44), \theta_0 is the intercept, \epsilon is an unobservable random error term, and \theta_j represents the weight of each variable. The eight explanatory variables are:
  • x_1: land area (km^2)
  • x_2: arable land (hm^2)
  • x_3: population
  • x_4: school attendance (years)
  • x_5: gross capital formation (in 2010 US$)
  • x_6: exports of goods and services (in 2010 US$)
  • x_7: general government final consumer spending (in 2010 US$)
  • x_8: broad money (in 2010 US$)
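As a point of reference for the code sketches in the later sections, model (1) can be written in Python roughly as follows. The array names (features, gdp) and the use of NumPy are our own illustrative assumptions; the paper only states that Python was used.

```python
import numpy as np

# Illustrative names: "features" is an (m, 8) array whose columns are x_1, ..., x_8
# for the m = 44 years, and "gdp" is the (m,) array of observed GDP values y(t).
def model_output(theta0, theta, features):
    """Evaluate the linear growth model (1): y_hat = theta_0 + sum_j theta_j * x_j."""
    return theta0 + features @ theta

# Placeholder data only; the real series come from the World Bank.
rng = np.random.default_rng(0)
features = rng.normal(size=(44, 8))
gdp = rng.normal(size=44)
theta0, theta = 0.0, np.zeros(8)
y_hat = model_output(theta0, theta, features)   # shape (44,)
```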

3. Fractional-Order Derivative

Due to differing conditions, there are different forms of the fractional calculus definition, the most common of which are the Grünwald–Letnikov, Riemann–Liouville and Caputo definitions. In this article, we choose the Caputo form of the fractional-order derivative. Given a function f(t), the Caputo fractional-order derivative of order \alpha is defined as follows:
{}^{Caputo}D_t^{\alpha} f(t) = \frac{1}{\Gamma(1-\alpha)} \int_c^t (t-\tau)^{-\alpha} f'(\tau)\, d\tau,   (2)
where {}^{Caputo}D_t^{\alpha} is the Caputo derivative operator, \alpha is the fractional order with \alpha \in (0,1), \Gamma(\cdot) is the gamma function, and c is the initial value. For simplicity, {}_c D_t^{\alpha} is used in this paper to represent the Caputo fractional derivative operator instead of {}^{Caputo}D_t^{\alpha}.
The Caputo fractional derivative has good properties. For example, the Laplace transform of the Caputo operator is as follows:
\mathcal{L}\{D^{\alpha} f(t)\} = s^{\alpha} F(s) - \sum_{k=0}^{n-1} f^{(k)}(0)\, s^{\alpha-k-1},
where F(s) is the generalized integral with a complex parameter s, F(s) = \int_0^{\infty} f(t) e^{-st}\, dt, and n := \lceil \alpha \rceil is \alpha rounded up to the nearest integer. It can be seen from the Laplace transform that the initial values appearing in the Caputo derivative are the same as those of integer-order differential equations and have a definite physical meaning. Therefore, the Caputo fractional derivative has a wide range of applications.
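As a small numerical illustration of the Caputo definition (not part of the original paper), the sketch below evaluates the Caputo derivative of f(θ) = θ, the case needed for the gradient update in Section 4.3, and compares it with the closed form (θ − c)^{1−α}/Γ(2 − α). Function and variable names are our own.

```python
from scipy.integrate import quad
from scipy.special import gamma

def caputo_derivative_linear(theta, c, alpha):
    """Caputo derivative of order alpha in (0, 1) of f(x) = x, with lower limit c < theta:
    (1 / Gamma(1 - alpha)) * integral_c^theta (theta - tau)^(-alpha) * f'(tau) dtau,
    where f'(tau) = 1. The integrand has an integrable singularity at tau = theta."""
    value, _ = quad(lambda tau: (theta - tau) ** (-alpha), c, theta)
    return value / gamma(1 - alpha)

theta, c, alpha = 0.7, -1.0, 0.8
numeric = caputo_derivative_linear(theta, c, alpha)
closed_form = (theta - c) ** (1 - alpha) / gamma(2 - alpha)   # = (theta-c)^(1-alpha) / ((1-alpha)*Gamma(1-alpha))
print(numeric, closed_form)   # the two values agree up to quadrature error
```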

4. Gradient Descent Method

4.1. The Cost Function

The cost function (also known as the loss function) is essential for the majority of machine learning algorithms. Optimizing the model means training against the cost function, and the partial derivative of the cost function with respect to each parameter is the gradient referred to in gradient descent. To select appropriate parameters \theta for model (1) and minimize the modeling error, we introduce the cost function:
C(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)} \right)^2,   (3)
where h_{\theta}(x^{(i)}) is a modification of model (1), h_{\theta}(x) = \theta_0 + \theta_1 x_1 + \cdots + \theta_8 x_8, which represents the output value of the model, x^{(i)} are the sample features, y^{(i)} is the true data, and m is the number of samples (m = 44).
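The cost function (3) translates into a few lines of NumPy; the sketch below uses the same illustrative array names (features, gdp) as in Section 2.

```python
import numpy as np

def cost(theta0, theta, features, gdp):
    """Cost function (3): C(theta) = (1 / 2m) * sum_i (h_theta(x^(i)) - y^(i))^2."""
    residuals = theta0 + features @ theta - gdp
    return 0.5 * np.mean(residuals ** 2)
```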

4.2. The Integer-Order Gradient Descent

The first step of integer-order gradient descent is to take the partial derivative of the cost function C(\theta):
\frac{\partial C(\theta)}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)} \right) x_j^{(i)}, \quad j = 1, 2, \ldots, 8,
and the update function is as follows:
\theta_{j+1} = \theta_j - \eta\, \frac{1}{m} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)} \right) x_j^{(i)},   (4)
where \eta is the learning rate, \eta > 0.
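A minimal sketch of a training loop based on update (4) is given below. The intercept update, the fixed iteration count and the default settings (learning rate 0.03, initial weights drawn from a symmetric interval) are our own assumptions that loosely follow the values reported later in Table 1.

```python
import numpy as np

def integer_order_gd(features, gdp, eta=0.03, init_bound=0.5, n_iter=50, rng=None):
    """Batch gradient descent for the linear model using update (4)."""
    rng = rng or np.random.default_rng(0)
    m, p = features.shape
    theta0 = 0.0
    theta = rng.uniform(-init_bound, init_bound, size=p)   # initial weights in (-b, b)
    for _ in range(n_iter):
        residuals = theta0 + features @ theta - gdp        # h_theta(x^(i)) - y^(i)
        theta0 -= eta * residuals.mean()                   # intercept updated alongside
        theta -= eta * (features.T @ residuals) / m        # update (4) for each theta_j
    return theta0, theta
```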

4.3. The Fractional-Order Gradient Descent

The first step of fractional-order gradient descent is to find the fractional derivative of the cost function C(\theta). According to Caputo's definition of the fractional derivative, from [17] we know that if g(h(t)) is a composite function of t, then its fractional derivative of order \alpha with respect to t is
{}_c D_t^{\alpha} g(h) = \frac{\partial g(h)}{\partial h} \cdot {}_c D_t^{\alpha} h(t).   (5)
It can be seen from (5) that the fractional derivative of a composite function can be expressed as the product of an integer-order derivative and a fractional derivative. Therefore, {}_c D_{\theta_j}^{\alpha} C(\theta) is calculated as follows:
{}_c D_{\theta_j}^{\alpha} C(\theta) = \frac{1}{m} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)} \right) \frac{1}{\Gamma(1-\alpha)} \int_c^{\theta_j} (\theta_j - \tau)^{-\alpha}\, \frac{\partial \left[ h_{\theta}(x^{(i)}) - y^{(i)} \right]}{\partial \theta_j}\, d\tau
= \frac{1}{m} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)} \right) x_j^{(i)}\, \frac{1}{\Gamma(1-\alpha)} \int_c^{\theta_j} (\theta_j - \tau)^{-\alpha}\, d\tau
= \frac{1}{m (1-\alpha) \Gamma(1-\alpha)} (\theta_j - c)^{1-\alpha} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)} \right) x_j^{(i)},
and the update function is as follows:
\theta_{j+1} = \theta_j - \eta\, \frac{1}{m (1-\alpha) \Gamma(1-\alpha)} (\theta_j - c)^{1-\alpha} \sum_{i=1}^{m} \left( h_{\theta}(x^{(i)}) - y^{(i)} \right) x_j^{(i)}, \quad j = 1, 2, \ldots, 8,   (6)
where \eta is the learning rate, \eta > 0, \alpha is the fractional order, 0 < \alpha < 1, and c is the initial value of Caputo's fractional derivative with c < \min_j \{\theta_j\}.
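A sketch of the corresponding loop for update (6) is shown below, mirroring the integer-order sketch above. Treating c as a fixed constant chosen below the smallest weight is a simplifying assumption on our part, as are the default parameter values.

```python
import numpy as np
from scipy.special import gamma

def fractional_order_gd(features, gdp, alpha=0.8, eta=0.03, c=-2.0,
                        init_bound=0.5, n_iter=50, rng=None):
    """Batch gradient descent using the Caputo-based update (6).
    c must stay below min(theta_j) so that (theta_j - c)^(1 - alpha) is well defined."""
    rng = rng or np.random.default_rng(0)
    m, p = features.shape
    theta0 = 0.0
    theta = rng.uniform(-init_bound, init_bound, size=p)
    scale = 1.0 / ((1.0 - alpha) * gamma(1.0 - alpha))     # = 1 / Gamma(2 - alpha)
    for _ in range(n_iter):
        residuals = theta0 + features @ theta - gdp
        grad = (features.T @ residuals) / m                # integer-order gradient
        theta0 -= eta * residuals.mean()
        theta -= eta * scale * (theta - c) ** (1.0 - alpha) * grad   # update (6)
    return theta0, theta
```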

5. Model Evaluation Indexes

We use the absolute relative error (ARE) to measure the prediction error:
ARE_i = \frac{\left| y_i - \hat{y}_i \right|}{y_i}.
To evaluate the fitting quality of gradient descent on the model, the following three indicators can be calculated (a short code sketch follows below):
The mean square error (MSE):
MSE = \frac{1}{n} \sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2.
The coefficient of determination (R^2):
R^2 = 1 - \frac{\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}{\sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2}.
The mean absolute deviation (MAD):
MAD = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|.
In these formulas, n is the number of years (n = 44), y_i and \hat{y}_i are the real value and the model output, respectively, and \bar{y} is the mean of the GDP.
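The four indicators above translate directly into NumPy; the helper below is a sketch with illustrative names.

```python
import numpy as np

def evaluation_indexes(y, y_hat):
    """Return MSE, R^2, MAD and the per-year ARE for observations y and fits y_hat."""
    residuals = y - y_hat
    mse = np.mean(residuals ** 2)
    r2 = 1.0 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)
    mad = np.mean(np.abs(residuals))
    are = np.abs(residuals) / y
    return mse, r2, mad, are
```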

6. Main Results

In this article, we standardize the data for each country before running the algorithm, and each iteration of the update of \theta uses all m samples. The grid search method was used to select an appropriate learning rate and initial weight interval, and the effects of different fractional orders were compared to select the best order (see Table 1). The chosen learning rate and initial weight interval are used for both fractional-order and integer-order gradient descent.
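A plausible sketch of this preprocessing and grid search is given below. It reuses the hypothetical helpers from Section 4 (fractional_order_gd, cost); the grid values shown are our own illustration, not the grid used by the authors.

```python
from itertools import product

import numpy as np

def standardize(a):
    """Column-wise z-score standardization."""
    return (a - a.mean(axis=0)) / a.std(axis=0)

def grid_search(features, gdp, orders=(0.6, 0.7, 0.8, 0.9),
                rates=(0.01, 0.03, 0.1), bounds=(0.1, 0.5, 0.8), n_iter=50):
    """Pick (alpha, eta, initial-interval bound) giving the lowest final cost."""
    best, best_cost = None, np.inf
    for alpha, eta, b in product(orders, rates, bounds):
        theta0, theta = fractional_order_gd(features, gdp, alpha=alpha, eta=eta,
                                            init_bound=b, n_iter=n_iter,
                                            rng=np.random.default_rng(0))
        value = cost(theta0, theta, features, gdp)
        if value < best_cost:
            best, best_cost = (alpha, eta, b), value
    return best, best_cost
```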

6.1. Comparison of Convergence Rate of Fractional and Integer Order Gradient Descent

To facilitate visual comparison, updates (4) and (6) are each iterated 50 times, and their convergence rates are compared (see Figure 1).
As shown in Figure 1, for each dataset and the same number of iterations, the convergence of fractional-order gradient descent is faster than that of integer-order gradient descent, which indicates that the method combining fractional calculus with gradient descent outperforms traditional integer-order gradient descent in terms of the convergence rate of the update equation.

6.2. Fitting Result

We then fit GDP with integer-order and fractional-order gradient descent, respectively. We set a threshold and stop iterating once the gradient falls below it. The fitting results are shown in Figure 2, and the performance evaluation of the model is shown in Table 2.
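The threshold-based stopping rule can replace the fixed iteration count used in the earlier sketches; the variant below shows one way to do it (the threshold value and iteration cap are arbitrary illustrations).

```python
import numpy as np
from scipy.special import gamma

def fractional_order_gd_tol(features, gdp, alpha=0.8, eta=0.03, c=-2.0,
                            init_bound=0.5, tol=1e-6, max_iter=100000, rng=None):
    """Fractional-order loop that stops when the gradient norm drops below tol."""
    rng = rng or np.random.default_rng(0)
    m, p = features.shape
    theta0, theta = 0.0, rng.uniform(-init_bound, init_bound, size=p)
    scale = 1.0 / ((1.0 - alpha) * gamma(1.0 - alpha))
    for _ in range(max_iter):
        residuals = theta0 + features @ theta - gdp
        grad = (features.T @ residuals) / m
        if np.linalg.norm(grad) < tol:                     # stopping criterion
            break
        theta0 -= eta * residuals.mean()
        theta -= eta * scale * (theta - c) ** (1.0 - alpha) * grad
    return theta0, theta
```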
It can be seen from Table 2 that the MSE, R^2 and MAD of the GDP fitted by fractional-order gradient descent are better than those obtained by integer-order gradient descent, which indicates that, for the same number of iterations, learning rate and initial weight interval, fractional-order gradient descent fits the data better than the integer-order method.

6.3. Predicted Results

Finally, to test the prediction performance of fractional- and integer-order gradient descent on GDP, we forecast GDP from 2017 to 2019 and use the ARE index to measure the prediction error (see Table 3).

7. Conclusions

In this paper, the gradient descent method is used to study linear model problems, which differs from [18,19]. The results show that, in addition to least squares estimation, the gradient descent method can also solve the regression analysis problem by iterating on the cost function and obtain good results without complicating the model; it also preserves the interpretability of the explanatory variables. We apply the fractional derivative to gradient descent and compare the performance of fractional-order gradient descent with that of integer-order gradient descent. The fractional-order method has a faster convergence rate, higher fitting accuracy and lower prediction error than the integer-order method. This provides an alternative approach for fitting and forecasting GDP and has a certain reference value.

Author Contributions

J.W. supervised and led the planning and execution of this research, proposed the research idea of combining fractional calculus with gradient descent, formulated the overall research objective, and reviewed, evaluated and revised the manuscript. In line with this research goal, X.W. collected the economic indicator data, applied statistics to create the model, wrote Python code to analyze the data and optimize the model, and wrote the first draft. M.F. reviewed, evaluated and revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partially supported by Training Object of High Level and Innovative Talents of Guizhou Province ((2016)4006), Major Research Project of Innovative Group in Guizhou Education Department ([2018]012), the Slovak Research and Development Agency under the contract No. APVV-18-0308 and by the Slovak Grant Agency VEGA No. 1/0358/20 and No. 2/0127/20.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Acknowledgments

The authors are grateful to the referees for their careful reading of the manuscript and valuable comments. The authors thank the help from the editor too.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, J.; Ahmed, G.; O'Regan, D. Topological structure of the solution set for fractional non-instantaneous impulsive evolution inclusions. J. Fixed Point Theory Appl. 2018, 20, 1–25.
  2. Li, M.; Wang, J. Representation of solution of a Riemann–Liouville fractional differential equation with pure delay. Appl. Math. Lett. 2018, 85, 118–124.
  3. Yang, D.; Wang, J.; O'Regan, D. On the orbital Hausdorff dependence of differential equations with non-instantaneous impulses. C. R. Acad. Sci. Paris Ser. I 2018, 356, 150–171.
  4. You, Z.; Fečkan, M.; Wang, J. Relative controllability of fractional-order differential equations with delay. J. Comput. Appl. Math. 2020, 378, 112939.
  5. Wang, J.; Fečkan, M.; Zhou, Y. A survey on impulsive fractional differential equations. Fract. Calc. Appl. Anal. 2016, 19, 806–831.
  6. Victor, S.; Malti, R.; Garnier, H.; Oustaloup, A. Parameter and differentiation order estimation in fractional models. Automatica 2013, 49, 926–935.
  7. Tang, Y.; Zhen, Y.; Fang, B. Nonlinear vibration analysis of a fractional dynamic model for the viscoelastic pipe conveying fluid. Appl. Math. Model. 2018, 56, 123–136.
  8. Li, W.; Ning, J.; Zhao, G.; Du, B. Ship course keeping control based on fractional order sliding mode. J. Shanghai Marit. Univ. 2020, 41, 25–30.
  9. Yasin, F.; Ali, A.; Kiavash, F.; Rohollah, M.; Ami, R. A fractional-order model for chronic lymphocytic leukemia and immune system interactions. Math. Methods Appl. Sci. 2020, 44, 391–406.
  10. Chen, L.; Altaf, M.; Abdon, A.; Sunil, K. A new financial chaotic model in Atangana–Baleanu stochastic fractional differential equations. Alex. Eng. J. 2021, 60, 5193–5204.
  11. Pu, Y.; Zhang, N.; Zhang, Y.; Zhou, J. A texture image denoising approach based on fractional developmental mathematics. Pattern Anal. Appl. 2016, 19, 427–445.
  12. Cui, R.; Wei, Y.; Chen, Y. An innovative parameter estimation for fractional-order systems in the presence of outliers. Nonlinear Dyn. 2017, 89, 453–463.
  13. Wang, J.; Wen, Y.; Gou, Y.; Ye, Z.; Chen, H. Fractional-order gradient descent learning of BP neural networks with Caputo derivative. Neural Netw. 2017, 89, 19–30.
  14. Guo, J.; Dong, B. International rice price forecast based on SARIMA model. Price Theory Pract. 2019, 1, 79–82.
  15. Xu, Y.; Chen, Y. Comparison between seasonal ARIMA model and LSTM neural network forecast. Stat. Decis. 2021, 2, 46–50.
  16. Wang, X.; Wang, J.; Fečkan, M. BP neural network calculus in economic growth modeling of the Group of Seven. Mathematics 2020, 8, 37.
  17. Boroomand, A.; Menhaj, M. Fractional-order Hopfield neural networks. In Advances in Neuro-Information Processing, ICONIP 2008; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2009; Volume 5506, pp. 883–890.
  18. Tejado, I.; Pérez, E.; Valério, D. Fractional calculus in economic growth modeling of the Group of Seven. Fract. Calc. Appl. Anal. 2019, 22, 139–157.
  19. Ming, H.; Wang, J.; Fečkan, M. The application of fractional calculus in Chinese economic growth models. Mathematics 2019, 7, 665.
Figure 1. Comparison of convergence rate and fitting error between fractional- and integer-order gradient descent: (a) Canada (b) France (c) Germany (d) Italy (e) Japan (f) The United Kingdom (g) The United States (h) European Union.
Figure 2. Fitting of GDP of the G7 countries by fractional-order gradient descent method: (a) Canada (b) France (c) Germany (d) Italy (e) Japan (f) The United Kingdom (g) The United States (h) European Union.
Table 1. Parameters for different countries.
Country               α     Learning Rate   Initial Interval
Canada                0.8   0.03            (-0.5, 0.5)
France                0.8   0.03            (-0.8, 0.8)
Germany               0.8   0.03            (-0.1, 0.1)
Italy                 0.8   0.03            (-0.5, 0.5)
Japan                 0.8   0.03            (-0.1, 0.1)
The United Kingdom    0.8   0.03            (-0.5, 0.5)
The United States     0.8   0.03            (-0.1, 0.1)
European Union        0.8   0.03            (-0.5, 0.5)
Table 2. Performance of integer order and fractional order gradient descent.
Country               MSE (×10^20)                 R^2                          MAD (×10^10)
                      Integer (4)  Fractional (6)  Integer (4)  Fractional (6)  Integer (4)  Fractional (6)
Canada                2.2548       1.5689          0.9984       0.9989          1.1015       0.9066
France                7.3396       4.3851          0.9971       0.9983          2.2076       1.68
Germany               7.6262       6.8976          0.9981       0.9983          2.2824       2.0203
Italy                 3.2521       2.701           0.9974       0.9978          1.4947       1.3146
Japan                 19.9103      16.6656         0.9986       0.9989          3.8663       3.2745
The United Kingdom    15.2421      13.7876         0.9946       0.9951          3.1182       2.9489
The United States     98.4201      60.7402         0.9993       0.9995          7.8593       5.714
European Union        197.9143     90.5717         0.9983       0.9992          11.8393      7.2684
Table 3. Integer-order and fractional-order gradient descent for G7 countries’ GDP data from 2017 to 2019.
Country               Year   Actual Value        Predicted (Integer)   Predicted (Fractional)   ARE (Integer)   ARE (Fractional)
Canada                2017   1869939124387.55    1851176120948         1865471720455.36         0.01003         0.00239
Canada                2018   1907592951375.51    1885635961969.18      1897212921116.78         0.01151         0.00544
Canada                2019   1939183469806.34    1913183323405.81      1924536620147.11         0.01341         0.00755
France                2017   2876185347152.35    2945583296625         2913853765393.87         0.02313         0.01211
France                2018   2927751436718.37    2987173241226.19      2955215192748.21         0.0193          0.0084
France                2019   2971919320115.83    3052414282679.98      3007640733954.07         0.02608         0.01103
Germany               2017   3873475897139.37    3992089822476.93      3987473981388.45         0.03062         0.02943
Germany               2018   3922591386837.48    4035516755191.92      4019973502352.35         0.02879         0.02483
Germany               2019   3944379455526.15    4007551577032.44      3942199462068.09         0.01602         0.00055
Italy                 2017   2124019926800.66    2152553322306.66      2148504123256.22         0.01343         0.01053
Italy                 2018   2144072575240.17    2184791916115.44      2178336024841.51         0.01899         0.01598
Italy                 2019   2151420719257.08    1694388219398.54      1946816137097.53         0.21243         0.0951
Japan                 2017   6150456276847.65    6246751221623.44      6217262375879.73         0.01566         0.01086
Japan                 2018   6170335002849.18    6302599251651.13      6266099914852.53         0.02144         0.01552
Japan                 2019   6210698351093.34    6274298653661.42      6272342082178.18         0.01411         0.01379
The United Kingdom    2017   2841238185971.41    2714332507299.13      2737032647202.61         0.04467         0.03668
The United Kingdom    2018   2879331251695.23    2735833239476.11      2760916583838.65         0.04984         0.041126
The United Kingdom    2019   2921446026408.24    2784137398857.08      2812534141119.14         0.047           0.03728
The United States     2017   17403783207186.7    17154216039682.2      17344565695242.3         0.01434         0.0034
The United States     2018   17913248631409.5    17681187933498.6      17835485270334.4         0.01725         0.00434
The United States     2019   18300385513295.6    18004286468803.1      18168502346487.9         0.01618         0.00721
European Union        2017   16012037378199.3    17983491434848.4      18072460558164           0.04479         0.04006
European Union        2018   16351210756244.2    18105516308926.6      18296349316535.8         0.05715         0.04272
European Union        2019   16605351894524      18828265531889.1      19241290506759           0.03446         0.01328
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
