Article

Testing Nonlinearity with Rényi and Tsallis Mutual Information with an Application in the EKC Hypothesis

Elif Tuna, Atıf Evren, Erhan Ustaoğlu, Büşra Şahin and Zehra Zeynep Şahinbaşoğlu
1 Department of Statistics, Faculty of Sciences and Literature, Yildiz Technical University, Davutpasa, Esenler, 34210 Istanbul, Turkey
2 Department of Informatics, Faculty of Management, Marmara University, Göztepe, 34180 Istanbul, Turkey
3 Department of Computer, Faculty of Engineering, Halic University, Eyupsultan, 34060 Istanbul, Turkey
* Author to whom correspondence should be addressed.
Entropy 2023, 25(1), 79; https://doi.org/10.3390/e25010079
Submission received: 25 October 2022 / Revised: 20 December 2022 / Accepted: 28 December 2022 / Published: 31 December 2022
(This article belongs to the Special Issue Non-additive Entropy Formulas: Motivation and Derivations)

Abstract

The nature of the dependence between random variables has been the subject of statistical inquiry for over a century, and a great deal of current research focuses on the analysis of nonlinearity. Shannon mutual information has been considered the most comprehensive measure for evaluating total dependence, and several methods have been suggested for discerning the linear and nonlinear components of the dependence between two variables. In this study, we propose employing the Rényi and Tsallis mutual information measures for measuring total dependence because of their parametric nature. We first use a residual analysis to remove the linear dependence between the variables, and then we compare the Rényi and Tsallis mutual information of the original data with that of the data lacking the linear component to determine the degree of nonlinearity. A comparison against the values of the Shannon mutual information measure is also provided. Finally, we apply our method to the environmental Kuznets curve (EKC) and demonstrate the validity of the EKC hypothesis for East Asian and Asia-Pacific countries.

1. Introduction

An analysis of the dependence between two or more random variables can be traced back to the late 19th century, beginning with the works of mathematicians such as Gauss and Laplace. Later, Galton created the concept of correlation, which enabled Pearson to derive the correlation coefficient that has been used extensively in all kinds of statistical analyses since then [1]. When the dependence is linear or approximately linear, the correlation coefficient is the most effective indicator of the relationship between the random variables. It also provides a simple interpretation of the direction of the relation, whether positive or negative. When the dependence departs from linearity, however, the linear correlation coefficient is of no use, and various methods have been proposed for evaluating nonlinearity. One of these measures is Spearman's correlation coefficient, which is nonparametric and uses ranked values to assess monotonic nonlinearity between two random variables [2]. Another measure of nonlinear dependence is the correlation ratio, which expresses the relationship between random variables as a single-valued function. In the case of nonlinear relationships, the value of the correlation ratio is greater than the correlation coefficient, and therefore the difference between the correlation ratio and the correlation coefficient indicates the degree of nonlinearity of the dependence [3]. Polynomial regression has also been used for modeling nonlinear dependence in various phenomena. Although nonparametric regression models have become more common, polynomial regression is still deployed for modeling dependence in some areas of application, such as biomechanics [4], cosmology [5], climatization [6], and chemistry [7]. As increasingly complex data have been produced through technological development, the need for analyzing these data has given rise to a new field, called functional data analysis, which also includes functional regression. Functional regression models assume functional relationships between responses and predictors, and for polynomial models, these relationships are in polynomial form rather than linear [8].
Shannon entropy plays a central role in information theory as a measure of information, choice, and uncertainty. Conditional entropy can also be used as a measure of missing information [9]. Neither conditional entropy nor mutual information assumes any underlying distribution, and both reflect the stochastic relationship between random variables as a whole, whether linear or nonlinear [10]. These properties have made mutual information a good choice for analyzing dependencies, and it is used extensively for dependency analysis, especially in finance [11,12,13] and in genetics [14,15,16]. Although mutual information is an effective method for determining the dependency between random variables, it does not provide any information on whether the dependence is linear or nonlinear. Very few attempts, such as [1,17], have been made to investigate the nature of the dependence by extracting the linear component of Shannon mutual information.
The environmental Kuznets curve (EKC) hypothesis states that there is an inverted U-shaped relationship between per capita gross domestic product (GDP) and measures of environmental degradation [18]. Because carbon dioxide (CO2) is the major contributor to greenhouse gas emissions, it is accepted as the main cause of environmental degradation; hence, the same relationship is assumed between GDP and CO2. The EKC is thus an indication of the "stages of economic growth" that economies pass through as they make the transition from agriculturally based to industrial and then to postindustrial service-based economies. In a way, the EKC provides a visual representation of these stages of economic growth, as seen in Figure 1 [20].
There are various methods in the literature to test the EKC. Some studies have used panel data, while others have used time series data [19]. Panayotou [20], who first suggested the term EKC, used cross-sectional data and empirically tested the relation between environmental degradation and economic development for the late 1980s. He discovered quadratic patterns in a sample of developing and developed countries. Antle and Heidebrink [21] found turning points for the EKC curve by using cross-sectional data. Vasilev [22] also studied EKC with cross-sectional data.
Although the determination of the exact shape of the Kuznets curve is important, demonstrating its nonlinearity helps support the EKC hypothesis. We aim to determine nonlinearity by deploying mutual information, with an application to the EKC. The Rényi and Tsallis mutual information measures are used in determining the nonlinearity of the EKC, and the results are compared with those of Shannon. If the EKC hypothesis is confirmed, it can be concluded that the "grow and pollute now, clean later" strategy implied by the hypothesis has enormous environmental costs, so alternative strategies for growth should be developed.
The structure of our study is as follows: Section 2 describes the tests for nonlinearity on the basis of mutual information. Section 3 starts with the application by conducting a cross-sectional analysis using ordinary least squares (OLS) and then adds the application of nonlinearity tests. Finally, Section 4 concludes.

2. Relative Entropy, Mutual Information, and Dependence

2.1. Mutual Information

Relative entropy is a special case of statistical divergence. It is a measure of the inefficiency of assuming that the probability distribution is q when the true distribution is p [23]. Shannon, Rényi, and Tsallis relative entropies for the discrete case are defined as follows:
$D_S(p \| q) = \sum_x p(x) \log \dfrac{p(x)}{q(x)}$ (1)
$D_R(p \| q) = \dfrac{1}{\alpha - 1} \log \sum_x p(x) \left( \dfrac{p(x)}{q(x)} \right)^{\alpha - 1}$ (2)
$D_T(p \| q) = \dfrac{1}{\alpha - 1} \sum_x p(x) \left[ \left( \dfrac{p(x)}{q(x)} \right)^{\alpha - 1} - 1 \right]$ (3)
Bivariate extensions are as follows:
$D_S(p(x,y) \| q(x,y)) = \sum_{x,y} p(x,y) \log \dfrac{p(x,y)}{q(x,y)}$ (4)
$D_R(p(x,y) \| q(x,y)) = \dfrac{1}{\alpha - 1} \log \sum_{x,y} p(x,y) \left( \dfrac{p(x,y)}{q(x,y)} \right)^{\alpha - 1}$ (5)
$D_T(p(x,y) \| q(x,y)) = \dfrac{1}{\alpha - 1} \sum_{x,y} p(x,y) \left[ \left( \dfrac{p(x,y)}{q(x,y)} \right)^{\alpha - 1} - 1 \right]$ (6)
To check the independence of variables, the null and alternative hypotheses can be stated as follows:
$H_0: p_{X,Y}(x,y) = q_{X,Y}(x,y)$ (7)
$H_A: p_{X,Y}(x,y) \neq q_{X,Y}(x,y)$ (8)
where $q_{X,Y}(x,y) = p_X(x)\,p_Y(y)$ for all $(x,y) \in \mathbb{R}^2$.
Mutual information can be seen as the divergence of the joint probability function from the product of the two marginal probability distributions. In other words, mutual information is derived as a special case of divergence or relative entropy. Three alternative formulations of mutual information are due to Shannon, Rényi, and Tsallis. Shannon mutual information (the Kullback–Leibler divergence between the joint distribution and the product of the marginals) is defined as follows:
$M(X,Y) = D_S(p_{X,Y}(x,y) \| p_X(x)\,p_Y(y)) = \sum_{x,y} p(x,y) \log \dfrac{p(x,y)}{p_X(x)\,p_Y(y)}$ (9)
Mutual information formulated this way is also called cross entropy.
Rényi order-α divergence (or Rényi mutual information) of $p_{X,Y}(x,y)$ from $p_X(x)\,p_Y(y)$ is given as follows:
$D_R(p_{X,Y}(x,y) \| p_X(x)\,p_Y(y)) = \dfrac{1}{\alpha - 1} \log \sum_{x,y} \dfrac{p_{X,Y}(x,y)^{\alpha}}{\left( p_X(x)\,p_Y(y) \right)^{\alpha - 1}}$ (10)
Tsallis order-α divergence (or Tsallis mutual information) of $p_{X,Y}(x,y)$ from $p_X(x)\,p_Y(y)$ is given as follows:
D T ( p X , Y ( x , y ) p X ( x ) p Y ( y ) ) = 1 p X , Y ( x , y ) α ( p X ( x ) p Y ( y ) ) α 1 1 α
In the case of independence, the Rényi and Tsallis mutual information measures are 0, just like Shannon mutual information, and as α → 1, both approach Shannon mutual information [24]. The mutual information of two variables reflects the reduction in uncertainty about one variable gained by knowing the other. Mutual information becomes 0 if and only if the random variables are independent. It should also be emphasized that mutual information measures general dependence, whereas the correlation coefficient measures only linear dependence [15].
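To make the estimators concrete, the following is a minimal NumPy sketch of plug-in (histogram) versions of Equations (9)–(11); the function name, signature, and binning choice are our own illustrative assumptions, not code from the original study.

```python
import numpy as np

def mutual_information(x, y, bins, alpha=1.0, kind="shannon"):
    """Plug-in estimate of Shannon, Renyi, or Tsallis mutual information
    from a 2D histogram of the samples x and y (illustrative sketch)."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()                 # joint probabilities p_{X,Y}
    p_x = p_xy.sum(axis=1, keepdims=True)      # marginal of X
    p_y = p_xy.sum(axis=0, keepdims=True)      # marginal of Y
    q_xy = p_x * p_y                           # product of marginals
    mask = p_xy > 0                            # skip empty cells: 0 log 0 = 0
    if kind == "shannon" or np.isclose(alpha, 1.0):
        # Equation (9); also the alpha -> 1 limit of (10) and (11)
        return np.sum(p_xy[mask] * np.log(p_xy[mask] / q_xy[mask]))
    s = np.sum(p_xy[mask] ** alpha * q_xy[mask] ** (1.0 - alpha))
    if kind == "renyi":
        return np.log(s) / (alpha - 1.0)       # Equation (10)
    if kind == "tsallis":
        return (s - 1.0) / (alpha - 1.0)       # Equation (11)
    raise ValueError("kind must be 'shannon', 'renyi', or 'tsallis'")
```

As α → 1, the function falls back to the Shannon value, matching the limit noted above.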

2.2. Testing Linearity by Using Mutual Information

The application of the Shannon mutual information measure to the problem of detecting nonlinearity was suggested by Tanaka, Okamoto, and Naito [17] and by Smith [1].
This method utilizes the residuals obtained from an ordinary linear regression model. Note that a linear regression model that fits the data well is a good indicator of a linear relation between the variables, so the residuals obtained from a linear model are considered to contain no linear dependence on the independent variables:
$\xi_i = Y_i - b_0 - \sum_{j=1}^{p} b_j X_{ij}$ (12)
Next, the mutual information between the residuals and the observed values of the independent variable is calculated. The mutual information between the independent and dependent variables, M(X,Y), can be computed, as can the mutual information between the independent variable and the residuals obtained from the linear regression, M(X,ξ). Note that the latter statistic reflects the nonlinear dependence between the original variables. If the mutual information between the independent variable and the residuals does not differ much from the mutual information between the dependent and independent variables, then the relation is nonlinear. By comparing M(X,ξ) with M(X,Y), we can evaluate the degree of nonlinearity in the dependence [1,17].
We suggest that nonlinearity can be detected better by the Rényi and Tsallis mutual information measures because of their parametric nature.
In particular, because the Tsallis mutual information measure is calculated on the basis of the power α, the larger the value of α, the larger the Tsallis mutual information becomes, so the raw difference between these two mutual information measures cannot be interpreted directly. Therefore, we suggest a new measure that still leads to the same result, as seen in Equation (13):
$\lambda_{S,R,T} = \left| 1 - \dfrac{M(X,\xi)}{M(X,Y)} \right|$ (13)
The letters S, R, and T in the index indicate the Shannon, Rényi, and Tsallis mutual information measures, respectively. As M(X,ξ) and M(X,Y) become closer to each other, λ converges to zero, implying nonlinearity. This hypothesis is tested by using two simulated data sets, one representing a linear relationship and the other reflecting curvilinearity. The number of simulated pairs of X and Y values is 1000. The simulated data representing the linear and the curvilinear relationships are modeled by Equations (14) and (15):
$Y = a + bX + e$ (14)
$Y = a + bX + cX^2 + e$ (15)
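As a sketch of this simulation design, reusing the mutual_information helper above, one might generate the two data sets and evaluate Equation (13) as follows; the coefficient values, noise scale, random seed, and bin count are illustrative assumptions, since the study does not report them:

```python
import numpy as np

rng = np.random.default_rng(42)            # illustrative seed
n = 1000
x = rng.uniform(0, 10, n)
e = rng.normal(0, 1, n)
y_lin = 1.0 + 2.0 * x + e                  # Equation (14), assumed coefficients
y_cur = 1.0 + 2.0 * x + 0.5 * x**2 + e     # Equation (15), assumed coefficients

def nonlinearity_lambda(x, y, bins, alpha, kind):
    # OLS residuals, i.e., Equation (12) with a single predictor
    b1, b0 = np.polyfit(x, y, 1)
    resid = y - (b0 + b1 * x)
    m_xy = mutual_information(x, y, bins, alpha, kind)       # M(X, Y)
    m_xr = mutual_information(x, resid, bins, alpha, kind)   # M(X, xi)
    return abs(1.0 - m_xr / m_xy)          # Equation (13)

for alpha in (0.5, 2.0):
    print(alpha,
          nonlinearity_lambda(x, y_lin, 20, alpha, "renyi"),   # close to 1
          nonlinearity_lambda(x, y_cur, 20, alpha, "renyi"))   # close to 0
```

Under this setup, λ for the linear model stays near 1, while λ for the curvilinear model collapses toward 0, mirroring the pattern reported in Table 1.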
Various α values between 0 and 5 were selected randomly from a uniform distribution to assess the effect of α on the nonlinearity measures. Table 1 provides 50 such randomly generated α values and the corresponding λ values for the Rényi and Tsallis measures.
Because λ values close to 1 indicate a linear relationship, λT, λS, and λR all support the linearity hypothesis. It can be observed that λT detects linearity more strongly than λR for α > 1; conversely, λR captures linearity better for α < 1. The mean and the standard deviation of each mutual information measure are also presented in Table 1 as checks on the variability of each measure across the α values. The standard deviation for λT is lower than that for λR, indicating the consistency of the Tsallis measure in the case of linearity.
On the other hand, nonlinearity is captured better by λT than by λR for α < 1, and vice versa for α > 1. When the standard deviations are considered, λR is more stable in determining nonlinearity.
Changing the scale parameter α of the mutual information measures naturally changes their sensitivity, and by plotting λ values against α, this change in sensitivity can be displayed graphically. To interpret the results visually, the λR and λT values for different α values are shown in the following graphs:
As can be seen from Figure 2, for α > 1, λT is more successful and stable than λR for a linear relationship. In addition, the λT measure consistently takes values close to 1, whereas λR gets smaller as α increases.
As seen in Figure 3, for the curvilinear relationship, λT starts to grow after α = 1.4, while λR takes values close to zero at all values of α; even so, λT reaches a maximum of only 0.09101. Both mutual information measures can therefore be used as criteria for nonlinearity, but λR indicates nonlinearity more consistently. Because there is no logarithmic function in the Tsallis mutual information formula, when α takes a value greater than 1, Tsallis mutual information weights deviations from linearity less heavily than Rényi mutual information does, so λT is less sensitive to nonlinearity than λR. For the same reason, λT represents linearity better than λR does in a linear relationship.
An important general property of Rényi entropy is that for a given probability distribution, Rényi entropy is a monotonically decreasing function of α, where α is an arbitrary real number other than 1. Therefore, as can be seen in Figure 2, increasing α values will not provide additional information, so α values are limited to 5.

2.3. Method for Bin-Size Selection

Mutual information depends on both the bin size and the sample size; thus, a natural question arises about the optimal choice of one parameter given the value of the other. Here, we use the Freedman–Diaconis rule for finding the optimal number of bins. According to this rule, the optimal bin width, and hence the number of bins, can be calculated from the interquartile range (IQR = Q3 − Q1) and the number of data points n. Because Freedman and Diaconis use the IQR of the data instead of the standard deviation, the method is more robust than many alternatives.
$\Delta_{\mathrm{bin}} = \dfrac{2 \times \mathrm{IQR}(X)}{\sqrt[3]{n}}$ (16)
The Freedman–Diaconis rule takes into account the asymmetry of the data and sets the bin size to be proportional to the IQR [25].
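As a short sketch (the helper name fd_bins is our own), Equation (16) can be turned into a bin count as follows; NumPy's np.histogram_bin_edges(x, bins='fd') implements the same rule:

```python
import numpy as np

def fd_bins(x):
    """Number of histogram bins implied by the Freedman-Diaconis rule."""
    x = np.asarray(x, dtype=float)
    q1, q3 = np.percentile(x, [25, 75])
    width = 2.0 * (q3 - q1) / len(x) ** (1.0 / 3.0)   # Equation (16): bin width
    # number of bins = data range divided by bin width, rounded up
    return max(1, int(np.ceil((x.max() - x.min()) / width)))
```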

3. Checking the EKC Hypothesis for East Asian and Asia-Pacific Countries (1971–2016)

3.1. Model

To test the EKC hypothesis, a simple linear regression model is applied first. Using the ordinary least squares procedure, we find a quadratic relationship (the "inverted U" hypothesis) between CO2 emissions (metric tons per capita) and GDP per capita (current USD) in a time series of East Asian and Asia-Pacific countries (excluding high-income countries) over a 46-year period.
East Asia and Asia-Pacific countries were classified as low income (LIC) in the 1990s and then as lower middle income (LMC) in 2010. In fact, the highest growth rate of CO2 emissions (5.6% over 1990–2008) was observed in the East Asia and Asia-Pacific region, where the highest GDP growth rates (7.2% over 1990–2000 and 9.4% over 2000–2010) were achieved.
We first examine the residual plots from the linear regression model to determine whether there are serious deviations from the assumptions. In Figure 4a, nonlinearity is apparent, whereas in Figure 4b, the deviation from the normality assumption can be seen:
According to a quick visual check of the residuals in Figure 4a, a quadratic model seems more appropriate. The results of the quadratic model are given in Table 2, and the scatter diagram of the CO2 and GDP variables is shown in Figure 5.
To test the appropriateness of a simple linear regression function, the null and alternative hypotheses are given as follows:
$H_0: E(Y) = \beta_0 + \beta_1 X$ (17)
$H_a: E(Y) \neq \beta_0 + \beta_1 X$ (18)
The general linear test statistic for the simple regression model is as follows:
$F^* = \dfrac{SSLF/(c-2)}{SSPE/(n-c)} = \dfrac{1.641}{0.107} = 15.209$ (19)
When we look at the results shown in Table 3, $F^* > F(0.05; 3, 41) = 2.833$, so we reject the null hypothesis $H_0$. This means that the linear regression function does not provide a good fit for the data. The dependence measures are $r^2 = 0.91$ and $\eta^2_{YX} = 0.96$. A nonzero value of $\eta^2_{YX} - r^2$ is associated with a departure from linearity; here, $\eta^2_{YX} - r^2 = 0.05$. To test the significance of this difference, the hypotheses are given as follows:
H 0 : The relationship between X and Y is linear.
H a : The relationship between X and Y is not linear.
The test statistic is as follows:
$F^* = \dfrac{n-c}{c-2} \cdot \dfrac{\eta^2_{YX} - r^2}{1 - \eta^2_{YX}} = \dfrac{41}{3} \cdot \dfrac{0.05}{0.04} = 17.083$ (20)
This value of F also indicates a significant departure from linearity.
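Both test statistics can be reproduced from the quantities in Table 3; the following SciPy sketch is a numerical cross-check of Equations (19) and (20), not code from the original study, and the value c = 5 is inferred from the degrees of freedom in Table 3:

```python
from scipy import stats

n, c = 46, 5                      # sample size; distinct GDP levels (c - 2 = 3, n - c = 41)
SSLF, SSPE = 4.925, 4.425         # lack-of-fit and pure-error sums of squares (Table 3)

F_lof = (SSLF / (c - 2)) / (SSPE / (n - c))   # Equation (19): ~15.21
F_crit = stats.f.ppf(0.95, c - 2, n - c)      # critical value F(0.05; 3, 41) ~ 2.833
print(F_lof, F_crit, F_lof > F_crit)          # True: reject linearity

r2, eta2 = 0.91, 0.96             # coefficient of determination and correlation ratio
F_eta = ((n - c) / (c - 2)) * (eta2 - r2) / (1 - eta2)   # Equation (20): ~17.08
print(F_eta)
```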

3.2. Testing Linearity on the Basis of Shannon, Rényi, and Tsallis Mutual Information Measures

The method of Tanaka, Okamoto, and Naito [17] and Smith [1] is based on comparing the Shannon mutual information of the original data series with that of a new series obtained by removing the linear dependence from the original one.
Entropy and mutual information calculations are based on a contingency table. A possible reason for the EKC pattern may lie in the fact that in poor countries, most output is produced in the agricultural sector, so CO2 emissions are lower in these countries than in others. In middle-income countries, pollution begins to increase; as a country grows further, it tends to switch to cleaner technologies.
Here, on the basis of the Freedman–Diaconis rule, the optimal number of bins is calculated and presented in Table 4:
To detect nonlinearity by using the Shannon, Rényi, and Tsallis mutual information measures, Table 5 reports the λ values for different values of α. To evaluate the degree of nonlinearity included in the dependence, the two mutual information measures were compared: when M(X,ξ) = M(X,Y), the dependence is interpreted as purely nonlinear, so the proposed λS, λR, and λT measures are taken as criteria of nonlinearity.
As seen in Table 5, λS, λR, and λT are close to zero, so the relationship is nonlinear. As the simulation results in Table 1 showed, λT for α < 1 and λR for α > 1 reveal the curvature more successfully, and the results obtained from the EKC data support this as well. Overall, the λS, λR, and λT values near zero indicate a curvilinear relationship, which supports the EKC hypothesis.
The relationship between λ and α can be seen in Figure 6:
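A curve like Figure 6 could be produced with the helpers sketched earlier. Since the 46 annual observations are not reproduced in the text, the snippet below generates placeholder data from the fitted model in Table 2 and is a schematic pipeline rather than a reproduction:

```python
import numpy as np
import matplotlib.pyplot as plt

# Placeholder series standing in for the 46 annual observations; shaped
# with the Table 2 quadratic fit plus noise purely for illustration.
rng = np.random.default_rng(1)
gdp = np.sort(rng.uniform(100.0, 12000.0, 46))
co2 = 1.0121 + 0.0016 * gdp - 1.4e-7 * gdp**2 + rng.normal(0.0, 0.1, 46)

# mutual_information and fd_bins are the sketches defined earlier.
b1, b0 = np.polyfit(gdp, co2, 1)
resid = co2 - (b0 + b1 * gdp)

bins_xy = [fd_bins(gdp), fd_bins(co2)]     # cf. Table 4 (14 and 7 bins)
bins_xr = [fd_bins(gdp), fd_bins(resid)]   # cf. Table 4 (14 and 9 bins)

alphas = np.linspace(0.05, 5.0, 100)
for kind in ("renyi", "tsallis"):
    lam = [abs(1.0 - mutual_information(gdp, resid, bins_xr, a, kind)
                     / mutual_information(gdp, co2, bins_xy, a, kind))
           for a in alphas]                # Equation (13) across the alpha grid
    plt.plot(alphas, lam, label=kind)
plt.xlabel("alpha")
plt.ylabel("lambda")
plt.legend()
plt.show()
```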

4. Conclusions

The environmental Kuznets curve (EKC) hypothesizes that the relationship between environmental quality and real output has an inverted U shape. Using the ordinary least squares estimation procedure, we found a quadratic relationship between CO2 emissions and GDP in a time series of East Asian and Asia-Pacific countries (excluding high-income countries) over a period of 46 years. One technique to check the EKC hypothesis utilizes an F test, by which we concluded that the linear model does not provide a good fit for the data. As a second technique, comparing the linear coefficient of determination with the correlation ratio may be useful; again, for the EKC data, the difference between these two association measures was found to be significant, indicating curvilinearity. Alternatively, the difference between two dependence measures based on mutual information can be used. Although Shannon mutual information has been used more often in the literature, we suggest that the Rényi and Tsallis mutual information measures capture the nature of the relation between the variables better because of their parametric flexibility.
In this study, the mutual information between dependent and independent variables (M(X,Y)) was found first. Secondly, by using a simple linear regression model, the residuals (ξ) were calculated. Then, the mutual information between the independent variable and the residuals (M(X,ξ)) was obtained. Finally, by comparing these two mutual information measures, the degree of nonlinearity included in the dependence was determined. We also proposed a measure of nonlinearity, λ, and demonstrated that the Rényi and Tsallis mutual information measures determined nonlinearity better for certain ranges of α values compared with the Shannon mutual information measure.
Applications of all these measures to the CO2 emissions and GDP data underlined curvilinearity; hence, the pattern presumed by the EKC hypothesis is realistic. We conclude that the "grow and pollute now, clean later" strategy wastes substantial resources and has enormous environmental costs; therefore, countries should seek alternative growth strategies.

Author Contributions

Formal analysis, E.T., A.E., E.U., B.Ş. and Z.Z.Ş.; Data curation, E.T., A.E., E.U., B.Ş. and Z.Z.Ş.; Writing—original draft, E.T., A.E., E.U., B.Ş. and Z.Z.Ş.; Writing—review & editing, E.T., A.E., E.U., B.Ş. and Z.Z.Ş. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data are given within the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Smith, R. A Mutual Information Approach to Calculating Nonlinearity. Stat 2015, 4, 291–303.
  2. Yi, W.; Li, Y.; Cao, H.; Xiong, M.; Shugart, Y.Y.; Jin, L. Efficient Test for Nonlinear Dependence of Two Continuous Variables. BMC Bioinform. 2015, 16, 260.
  3. Weatherburn, C.E. A First Course in Mathematical Statistics; Cambridge University Press: Cambridge, UK, 1961.
  4. Pestaña-Melero, F.L.; Haff, G.; Rojas, F.J.; Pérez-Castilla, A.; García-Ramos, A. Reliability of the Load-Velocity Relationship Obtained Through Linear and Polynomial Regression Models to Predict the 1-Repetition Maximum Load. J. Appl. Biomech. 2018, 34, 184–190.
  5. Gómez-Valent, A.; Amendola, L. H0 from Cosmic Chronometers and Type Ia Supernovae, with Gaussian Processes and the Novel Weighted Polynomial Regression Method. J. Cosmol. Astropart. Phys. 2018, 2018, 51.
  6. Akhlaghi, Y.G.; Ma, X.; Zhao, X.; Shittu, S.; Li, J. A Statistical Model for Dew Point Air Cooler Based on the Multiple Polynomial Regression Approach. Energy 2019, 181, 868–881.
  7. Gajewicz-Skretna, A.; Kar, S.; Piotrowska, M.; Leszczynski, J. The Kernel-Weighted Local Polynomial Regression (KwLPR) Approach: An Efficient, Novel Tool for Development of QSAR/QSAAR Toxicity Extrapolation Models. J. Cheminform. 2021, 13, 9.
  8. Morris, J.S. Functional Regression. Annu. Rev. Stat. Appl. 2015, 2, 321–359.
  9. Shannon, C.E. A Mathematical Theory of Communication. ACM SIGMOBILE Mob. Comput. Commun. Rev. 2001, 5, 3–55.
  10. Darbellay, G.A.; Wuertz, D. The Entropy as a Tool for Analysing Statistical Dependences in Financial Time Series. Phys. A Stat. Mech. Its Appl. 2000, 287, 429–439.
  11. Dionisio, A.; Menezes, R.; Mendes, D.A. Mutual Information: A Measure of Dependency for Nonlinear Time Series. Phys. A Stat. Mech. Its Appl. 2004, 344, 326–329.
  12. Barbi, A.; Prataviera, G. Nonlinear Dependencies on Brazilian Equity Network from Mutual Information Minimum Spanning Trees. Phys. A Stat. Mech. Its Appl. 2019, 523, 876–885.
  13. Mohti, W.; Dionísio, A.; Ferreira, P.; Vieira, I. Frontier Markets' Efficiency: Mutual Information and Detrended Fluctuation Analyses. J. Econ. Interact. Coord. 2019, 14, 551–572.
  14. Wu, X.; Jin, L.; Xiong, M. Mutual Information for Testing Gene-Environment Interaction. PLoS ONE 2009, 4, e4578.
  15. Li, J.; Dong, W.; Meng, D. Grouped Gene Selection of Cancer via Adaptive Sparse Group Lasso Based on Conditional Mutual Information. IEEE/ACM Trans. Comput. Biol. Bioinform. 2017, 15, 2028–2038.
  16. Rosas, F.E.; Mediano, P.A.; Gastpar, M.; Jensen, H.J. Quantifying High-Order Interdependencies via Multivariate Extensions of the Mutual Information. Phys. Rev. E 2019, 100, 032305.
  17. Tanaka, N.; Okamoto, H.; Naito, M. Detecting and Evaluating Intrinsic Nonlinearity Present in the Mutual Dependence between Two Variables. Phys. D Nonlinear Phenom. 2000, 147, 1–11.
  18. Grossman, G.M.; Krueger, A.B. Economic Growth and the Environment. Q. J. Econ. 1995, 110, 353–377.
  19. Selden, T.M.; Song, D. Neoclassical Growth, the J Curve for Abatement, and the Inverted U Curve for Pollution. J. Environ. Econ. Manag. 1995, 29, 162–168.
  20. Panayotou, T. Empirical Tests and Policy Analysis of Environmental Degradation at Different Stages of Economic Development; Technology and Employment Programme, International Labour Office: Geneva, Switzerland, 1993.
  21. Antle, J.M.; Heidebrink, G. Environment and Development: Theory and International Evidence. Econ. Dev. Cult. Chang. 1995, 43, 603–625.
  22. Vasilev, A. Is There an Environmental Kuznets Curve: Empirical Evidence in a Cross-Section Country Data; ZBW: Kiel, Germany, 2014.
  23. Cover, T.M.; Thomas, J.A. Elements of Information Theory; Wiley-Interscience: Hoboken, NJ, USA, 2006.
  24. Ullah, A. Entropy, Divergence and Distance Measures with Econometric Applications. J. Stat. Plan. Inference 1996, 49, 137–162.
  25. Freedman, D.; Diaconis, P. On the Histogram as a Density Estimator: L2 Theory. Z. Wahrscheinlichkeitstheorie Verw. Geb. 1981, 57, 453–476.
Figure 1. Environmental Kuznets curve.
Figure 2. λ values versus α in the case of linearity.
Figure 3. λ values versus α in the case of curvilinearity.
Figure 4. Residual plots listed as (a) fitted values to residuals and (b) normal Q-Q plot of standardized residuals.
Figure 5. Quadratic regression model estimation.
Figure 6. λ values against different α values for EKC data.
Table 1. λ values for linear and curvilinear relationships, based on simulations.

            Linear Relationship     Curvilinear Relationship
α           λR        λT            λR        λT
0.07        0.9956    0.9906        0.0489    0.0308
0.13        0.9917    0.9830        0.0457    0.0263
0.17        0.9894    0.9788        0.0447    0.0247
0.18        0.9888    0.9778        0.0445    0.0244
0.35        0.9808    0.9664        0.0433    0.0228
0.41        0.9785    0.9640        0.0431    0.0232
0.48        0.9760    0.9619        0.0429    0.0240
0.56        0.9734    0.9604        0.0429    0.0254
0.67        0.9702    0.9595        0.0435    0.0281
0.69        0.9696    0.9595        0.0437    0.0287
0.74        0.9684    0.9596        0.0444    0.0304
0.82        0.9668    0.9605        0.0464    0.0339
0.87        0.9663    0.9618        0.0483    0.0369
1.36        0.9473    0.9640        0.0087    0.0099
1.46        0.9446    0.9662        0.0030    0.0037
1.86        0.9323    0.9749        0.0129    0.0214
2.11        0.9236    0.9797        0.0138    0.0271
2.18        0.9209    0.9810        0.0139    0.0285
2.44        0.9102    0.9851        0.0139    0.0334
2.54        0.9056    0.9865        0.0138    0.0352
2.73        0.8962    0.9888        0.0136    0.0386
2.78        0.8935    0.9893        0.0136    0.0395
2.80        0.8924    0.9895        0.0136    0.0398
2.83        0.8908    0.9898        0.0135    0.0403
2.84        0.8902    0.9899        0.0135    0.0405
2.92        0.8856    0.9907        0.0134    0.0419
3.01        0.8801    0.9914        0.0133    0.0436
3.02        0.8795    0.9915        0.0133    0.0437
3.04        0.8782    0.9917        0.0133    0.0441
3.09        0.8749    0.9921        0.0132    0.0450
3.23        0.8652    0.9931        0.0131    0.0476
3.29        0.8608    0.9934        0.0131    0.0487
3.34        0.8570    0.9937        0.0130    0.0497
3.38        0.8538    0.9940        0.0130    0.0504
3.40        0.8522    0.9941        0.0130    0.0508
3.50        0.8440    0.9946        0.0129    0.0528
3.57        0.8379    0.9949        0.0128    0.0542
3.71        0.8249    0.9954        0.0128    0.0570
3.74        0.8220    0.9956        0.0128    0.0576
3.77        0.8191    0.9957        0.0127    0.0583
3.88        0.8080    0.9960        0.0127    0.0607
3.97        0.7985    0.9963        0.0127    0.0627
4.04        0.7908    0.9964        0.0127    0.0643
4.17        0.7762    0.9967        0.0127    0.0673
4.54        0.7318    0.9974        0.0128    0.0769
4.62        0.7219    0.9975        0.0128    0.0792
4.71        0.7108    0.9976        0.0129    0.0818
4.76        0.7046    0.9976        0.0129    0.0833
4.85        0.6934    0.9977        0.0130    0.0861
4.94        0.6822    0.9978        0.0131    0.0890
λS          0.9589                  0.0121
Mean        0.8783    0.9848        0.0211    0.0443
Std. Dev.   0.0869    0.0134        0.0143    0.0199
Table 2. Summary of the model.

                      a             b             c              F          R²
Parameter estimates   1.0121        0.0016        1.4 × 10⁻⁷     1581.224   0.986
Standard error        0.04599       5.9 × 10⁻⁵    9.35 × 10⁻⁹
p-value               4.99 × 10⁻²⁵  4.39 × 10⁻²⁹  5.08 × 10⁻¹⁹

Model: CO2 = 1.0121 + 0.0016 GDP − (1.4 × 10⁻⁷) GDP²
a: constant; b and c: coefficients of the model; F: F test; R²: coefficient of determination.
Table 3. Related ANOVA table.

Source of Variation                           df   Sum of Squares   Mean Squares
Explained variation by linear regression       1   SSR  =  98.658   98.658
Explained variation by nonlinear regression    3   SSLF =   4.925    1.641
Unexplained variation                         41   SSPE =   4.425    0.107
Total                                         45   SST  = 108.010
Table 4. Optimal number of bins.

Variable     n_bins
CO2           7
GDP          14
Residuals     9
Table 5. λ values for EKC data.

α       λR       λT         α          λR       λT
0.07    0.3892   0.3655     3.01       0.0649   0.1460
0.13    0.3809   0.3444     3.02       0.0646   0.1459
0.17    0.3741   0.3323     3.04       0.0640   0.1458
0.18    0.3722   0.3295     3.09       0.0625   0.1455
0.35    0.3355   0.2902     3.23       0.0587   0.1453
0.41    0.3221   0.2797     3.29       0.0573   0.1455
0.48    0.3071   0.2690     3.34       0.0562   0.1457
0.56    0.2910   0.2586     3.38       0.0553   0.1460
0.67    0.2708   0.2466     3.40       0.0549   0.1461
0.69    0.2673   0.2447     3.50       0.0530   0.1471
0.74    0.2590   0.2402     3.57       0.0518   0.1481
0.82    0.2468   0.2339     3.71       0.0497   0.1505
0.87    0.2400   0.2307     3.74       0.0493   0.1511
1.36    0.1735   0.1961     3.77       0.0489   0.1518
1.46    0.1631   0.1913     3.88       0.0476   0.1545
1.86    0.1271   0.1743     3.97       0.0467   0.1570
2.11    0.1087   0.1653     4.04       0.0461   0.1591
2.18    0.1040   0.1630     4.17       0.0451   0.1635
2.44    0.0888   0.1556     4.54       0.0430   0.1782
2.54    0.0837   0.1532     4.62       0.0427   0.1817
2.73    0.0751   0.1494     4.71       0.0423   0.1858
2.78    0.0731   0.1486     4.76       0.0421   0.1881
2.80    0.0723   0.1483     4.85       0.0418   0.1923
2.83    0.0711   0.1479     4.94       0.0415   0.1965
2.84    0.0708   0.1477     λS         0.2181   0.2181
2.92    0.0679   0.1468     Mean       0.1313   0.1914
                            St. Dev.   0.1147   0.0607
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
