1. Introduction
Many econometric and statistical applications involve tests of nonlinear functions of parameters that can be expressed as the ratio of two unknown parameters: the ratio of regression coefficients, the ratio of two linear functions (such as the ratio of affine transformations of random variables), and, more generally, the ratio of two nonlinear functions.
A non-exhaustive list of econometric models in which inference on a ratio of parameters is used includes the following: the long-run elasticities and flexibilities in dynamic models (
Li & Maddala, 1999;
Dorfman et al., 1990;
Bernard et al., 2007;
J. G. Hirschberg et al., 2008); the willingness to pay value, i.e., the maximum price an agent would pay to obtain an improvement in a particular attribute of a desired good or service (
Lye & Hirschberg, 2018); the turning point in a quadratic specification model where the estimated relationship is either a U-shaped or an inverted U-shaped curve, for example, Kuznets and Beveridge curves, in applications to dynamic panel data (
Bernard et al., 2019;
Lye & Hirschberg, 2018); the determination of the non-accelerating inflation rate of unemployment (NAIRU); for example, a Phillips curve (
Staiger et al., 1997;
J. Hirschberg & Lye, 2010c;
Lye & Hirschberg, 2018); the structural parameter in an exactly identified system of equations as estimated by the two-stage least squares method (
J. Hirschberg & Lye, 2017;
Lye & Hirschberg, 2018;
Andrews et al., 2019); the notion of weak instruments in econometric models (
Woglom, 2001); inequality indices (
Dufour et al., 2018,
2024); structural impulse responses (
Olea et al., 2021).
Lye and Hirschberg (
2018) give some other examples of econometric models.
However, the statistical properties of a ratio of parameters can be problematic because analytical expressions for its moments are generally not available; for example, the ratio of two (asymptotically) normally distributed random variables follows a Cauchy-type distribution with no finite moments. Moreover, if the denominator of the ratio is not significantly different from zero, the probability distribution of the ratio shows unusual behavior, and the confidence intervals are unbounded. Another problem worth highlighting is the finite-sample bias of the estimator of a nonlinear function of parameters.
To test a null hypothesis about a nonlinear function of parameters that can be expressed as the ratio of two unknown parameters, we use confidence intervals (CIs). The two most widely used approaches for constructing CIs are the Fieller method and the Delta method. The advantage of these methods is that they can be implemented in any context and are easy to compute: they do not require the intensive calculation and resampling strategies needed by bootstrap or Bayesian methods (
J. Hirschberg & Lye, 2010c;
J. Hirschberg & Lye, 2017;
Lye & Hirschberg, 2018).
Fieller (
1954) proposed a method to derive the confidence interval (CI) of the ratio of two random variables. Fieller's method assumes that both the numerator and the denominator of the ratio follow a normal distribution. The method is based on inverting a pivotal
t-statistic, which gives an exact CI with the required coverage probability. Fieller's CI is asymmetric around the ratio estimate, a desirable property because it can reflect the skewness of the small-sample distribution of the ratio. However, if the denominator of the ratio is not significantly different from zero, Fieller's CI is unbounded, being either the entire real line or the union of two disconnected infinite intervals; the method therefore has a positive probability of producing a CI of infinite length. Furthermore, Fieller's interval requires finding the roots of a quadratic equation, and these can be imaginary. In addition, if the quadratic equation has a single root, the confidence interval is half-open.
The Delta method is based on a first-order Taylor expansion of the nonlinear function of parameters. By assuming asymptotic normality in large samples, this method produces a symmetric and bounded CI, unlike the Fieller method. However, the Delta method often has inaccurate coverage probability (
Dufour, 1997) and unbalanced tail errors even at moderate sample sizes (
J. Hirschberg & Lye, 2010c). A geometric interpretation of the Fieller and Delta methods can be found in
von Luxburg and Franz (
2009);
J. Hirschberg and Lye (
2010a,
2010b). According to
J. Hirschberg and Lye (
2010c), if the true value of the ratio has the same sign as the correlation coefficient between the numerator and the denominator then the Delta and Fieller intervals may be very similar even if the denominator has a high variance. However, if the signs are opposite and the precision of the denominator is low, then the Delta method has poorer performance.
Moreover, these Fieller and Delta methods share two potential problems: first, the parameter estimator is biased for nonlinear functions of parameters; second, the estimated parameters have a non-normal and asymmetric distribution, so the variance of the estimated parameters is of limited use in constructing confidence intervals (Dorfman et al., 1990; Li & Maddala, 1999).
To overcome the limitations of the previous methods, several numerical procedures have been proposed in the literature, such as the parametric bootstrap and the nonparametric bootstrap (bootstrap standard, bootstrap t-statistic, bootstrap percentile, bootstrap bias-corrected, and bootstrap bias-corrected and accelerated); see Krinsky and Robb (1986); Dorfman et al. (1990); Li and Maddala (1999), among others. The CIs obtained from these iterative procedures are bounded but are more computationally intensive.
Dorfman et al. (
1990) compared the Delta and Fieller methods with three types of single bootstrap and found that, although the bootstrap intervals did not always achieve nominal coverage, all methods performed reasonably well.
Li and Maddala (1999) found that the bootstrap percentile-t and Delta method confidence intervals are, in many cases, very close to each other in terms of interval length.
It should be noted that none of the previous methods takes into account the bias of the estimator, which should be a prerequisite for constructing a reliable confidence interval.
In this regard, the paper has five main contributions.
First, we propose a novel analytical approach that modifies the Delta method to reduce the effect of skewness.
This method is based on the Edgeworth expansion (
Hall, 1992b). We then propose
an easy-to-compute confidence interval for the ratio of parameters, whose coverage probability converges to the nominal level at a rate of
where
n is the sample size.
Second, the source of potential bias is due to the nonlinearity of the ratio
in terms of
and
. It is well known that even when exact unbiased estimators of
and
are available, the ratio estimator
could still be badly biased in finite samples. We consider a second-order term in the Taylor series expansion for bias estimation that captures the nonlinearity of the ratio estimator
and we propose a bias-corrected estimator, which is identical to the almost unbiased ratio estimator proposed by
Tin (
1965).
Third, we investigate the problem of approximating the variance of a nonlinear function of parameters based on a second-order Taylor series expansion. Unfortunately, when calculating the variance from the second-order Taylor expansion, most authors (
Parr, 1983;
Hayya et al., 1975;
Y. Wang et al., 2015) did not take into account the possible covariances between the random variables, although doing so is essential because it provides a better approximation. This variance coincides with the variance of the bias-corrected estimator (or the variance of the almost unbiased ratio estimator of
Tin (
1965)).
Fourth, we define a modified version of the Delta method: we correct the bias of the estimator and calculate the corresponding variance. This can yield more accurate coverage probabilities for the CIs.
Fifth, we propose a novel analytical approach to constructing the CI for the ratio estimate. Our method, the Edgeworth expansion with bias-corrected estimator, uses the Edgeworth expansion but adopts an estimator corrected for bias, together with its variance. The method always produces a bounded CI. Simulation results show that it generally outperforms the plain Edgeworth expansion in controlling the coverage probabilities and the average width, and that it is particularly useful when the data are skewed.
The rest of this paper is organized as follows:
Section 2 presents some highlights.
Section 3 studies the different methods for constructing CIs, namely the Fieller and Delta methods, and develops the Edgeworth expansions for the Delta method.
Section 4 provides an analytical form of the bias that can be used to construct the bias-corrected estimator and to calculate the variance of the bias-corrected estimator.
Section 5 presents the confidence intervals with the bias-corrected estimator.
Section 6 presents some econometric applications. The simulation study and the results are presented in
Section 7 and
Section 8 concludes the paper.
3. Methods
3.1. The Delta Method (or the Taylor’s Series Expansion)
The Delta method, often referred to as the Taylor series expansion (
Dorfman et al., 1990;
Briggs & Fenn, 1998;
Li & Maddala, 1999, among others), estimates the variance of a nonlinear function of two or more random variables by taking a first-order Taylor expansion around the mean values of the variables and computing the variance of this expression. In the case of the ratio of
parameters , the variance of
is (full derivation details can be found in
Appendix A).
which can also be written as
where
is the square of the coefficient of variation for a random variable
for
and
is the coefficient of co-variation of
and
and
is the correlation coefficient between
and
.
To construct a confidence interval for the ratio , we assume that is asymptotically normally distributed with zero mean and variance .
Let
be a consistent estimator of
, the Delta method
confidence limits for the ratio
are given as follows:
where
is the classical estimator,
is the estimated standard error of the classical estimator, and
is the
quantile of the standard normal distribution.
This method assumes that is normally distributed and thus symmetric around its mean. However, the normality assumption is clearly strong, as there is no guarantee that is normally distributed. For large sample sizes (or, rather, small coefficients of variation), the distribution of a ratio may be close to normal, but in general a ratio is unlikely to follow a well-behaved distribution, and the normal approximation may be quite inaccurate when the data have a skewed distribution.
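As an illustrative sketch (the function name is hypothetical, and the 1.96 default corresponds to the 95% normal quantile), the Delta-method interval for a ratio can be computed directly from the two estimates, their variances, and their covariance:

```python
import math

def delta_ci(a_hat, b_hat, var_a, var_b, cov_ab, z=1.96):
    """Delta-method CI for the ratio theta = a/b.

    A first-order Taylor expansion of a/b around (a_hat, b_hat) gives
    Var(theta_hat) ~= (var_a - 2*theta*cov_ab + theta**2 * var_b) / b_hat**2.
    The resulting interval is symmetric around the point estimate.
    """
    theta = a_hat / b_hat
    var_theta = (var_a - 2.0 * theta * cov_ab + theta**2 * var_b) / b_hat**2
    se = math.sqrt(var_theta)
    return theta - z * se, theta + z * se
```

Note that the interval is always bounded and centered on the classical estimator, which is precisely why it cannot reflect skewness.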
3.2. The Fieller Method
Fieller (
1954) proposed a general procedure for constructing confidence limits for the ratio of the means of two normal distributions. In Fieller's method, the ratio is transformed into a linear function, and the confidence interval of the ratio is obtained by solving for the roots of a quadratic equation derived from this linear function.
For testing the null hypothesis
equivalently written as a restriction on a linear combination of the parameters
:
, the method assumes that
and
follow a joint normal distribution, so that
is normally distributed. Hence, the pivotal statistic for this test is
which follows a
t-distribution with
degrees of freedom under the null hypothesis.
Let
denote the
th percentile of the
t-distribution with
degrees of freedom; then we have
Squaring the statistic
T and rearranging gives a quadratic inequality in
.
where , ,
and .
Finding an explicit form for the confidence intervals for
requires solving this quadratic inequality. The solution depends on the sign of
a and
the discriminant of the quadratic equation. Through a simple calculation, we can express
d as follows:
where
is the estimate of the correlation coefficient between
and
. Hence,
also implies
.
If
, let
and
<
be the two real-valued solutions to the quadratic equation in
by changing the inequality into an equality. This gives the bounds of the Fieller interval in the case
. These two roots are the lower and upper limits of the
confidence interval. The bounds of the interval are given by
where
is the Fieller estimator,
is the estimated standard error of the Fieller estimator, and
.
However, if the Fieller CI will be unbounded. Hence, if the Fieller CI will be the complement of a finite interval and if the Fieller CI will be the whole real line .
Other intervals may be considered when , the Fieller CI will be if otherwise, it will be if .
Remark 1. - (1)
Following the Fieller estimator, the term can be considered as a correction factor to the estimated ratio estimator.
- (2)
If then the Fieller estimator is equal to the classical estimator .
- (3)
In the case of finite interval, the condition is equivalent to which means rejecting the null hypothesis , i.e., is significantly different from zero. The test of this null hypothesis is the first step of Scheffé’s procedure (Scheffé, 1970). - (4)
The statistic is equal to the absolute inverse of the coefficient of variation for , so the null hypothesis is rejected if the coefficient of variation for is negligible. (A high coefficient of variation for means a low statistical value).
- (5)
If h is close to zero, the Fieller CI becomes the Delta CI.
It should be noted that the null hypothesis : , was obtained from the nonlinear relationship only when . However, Fieller's method does not take this information into account. Therefore, the Fieller CI can overestimate the length of the confidence interval.
Furthermore, Fieller’s estimator is a linear combination of the ratio estimator (or classical estimator). As we mentioned in
Section 1, the ratio estimator is generally biased, so Fieller’s estimator is also generally biased.
The advantage of Fieller's method over the Delta method is that it takes into account the potential skewness of the sampling distribution of the ratio estimator and therefore need not be symmetric around the point estimate. Fieller's method provides an exact solution under the joint normality assumption. However, it has been argued that the assumption of joint normality may be difficult to justify, particularly when sample sizes are small. In particular, the random variable may follow a skewed distribution, which undermines the normality assumption.
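A minimal sketch of the Fieller construction described above (the function name and return convention are illustrative): the pivot is inverted into a quadratic inequality, and a bounded interval exists only when the coefficient of the squared term is positive and the discriminant is non-negative, i.e., when the denominator is significantly different from zero.

```python
import math

def fieller_ci(a_hat, b_hat, var_a, var_b, cov_ab, t=1.96):
    """Fieller CI for theta = a/b, obtained by inverting the pivot
    (a_hat - theta*b_hat) / sqrt(var_a - 2*theta*cov_ab + theta**2*var_b).

    Returns (lo, hi) when the interval is bounded; returns None when the
    region is unbounded (the whole real line or two infinite rays),
    which happens when the denominator is not significantly nonzero.
    """
    t2 = t * t
    qa = b_hat**2 - t2 * var_b                 # coefficient of theta**2
    qb = -2.0 * (a_hat * b_hat - t2 * cov_ab)  # coefficient of theta
    qc = a_hat**2 - t2 * var_a                 # constant term
    disc = qb * qb - 4.0 * qa * qc
    if qa <= 0 or disc < 0:
        return None                            # unbounded (or empty) region
    r = math.sqrt(disc)
    return (-qb - r) / (2.0 * qa), (-qb + r) / (2.0 * qa)
```

The `qa <= 0` branch corresponds exactly to the unbounded cases discussed in the text, where reporting a finite interval would be misleading.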
The normal approximation is a rather rough approximation, especially when sample sizes are not large; it does not take into account the skewness of the underlying distribution, which is often the main source of error of the normal approximation. To remove the effect of the skewness, we develop the Edgeworth expansion.
3.3. Edgeworth Expansion
The Delta method-based confidence interval is not very robust and can be quite inaccurate in practice for non-normal data. It produces intervals that are symmetric around the point estimate, so it does not take skewness into account. The correction for skewness used in our confidence intervals is based on the Edgeworth expansion.
We propose a method based on the Edgeworth expansion to modify the Delta intervals and remove the effect of skewness. The expansion provides a way to correct for the skewness in the data and to derive new confidence intervals for the ratio of parameters. We consider two aspects: first, an Edgeworth expansion is derived for the Delta method for a ratio of parameters of a normal random variable; second, by inverting the Edgeworth expansion to obtain the quantiles of the distribution (the Cornish–Fisher expansion), we construct an approximate confidence interval that contains a order correction for the effect of skewness.
The Delta method can be easily extended for a better approximation by using Edgeworth expansion.
Let
where
,
and
the estimate of
in the Delta method, we assume that the distribution of the random variable
U has the Edgeworth expansion (
Hwang, 2019;
Hall, 1992a).
where
and
are the standard normal distribution and density functions, respectively,
is the skewness, and
n is the sample size. This expansion can be interpreted as the sum of the normal distribution
, and an error due to the skewness of the distribution. When the error (the
skewness correction) in absolute value is small,
U can be accurately approximated by a normal distribution. Conversely, when the error in absolute value is large, the second term in the formulation cannot be ignored, and therefore, the normal approximation would not be as accurate. The
skewness correction is an even function of
x, which means that it changes the distribution function symmetrically about zero. Thus, the skewness of the distribution
F has a significant effect, especially when the sample size
n is small.
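For illustration, a one-term Edgeworth approximation of the kind just described can be evaluated numerically. The polynomial used below is the classical one for a standardized mean and is an assumption for this sketch; the correction term is an even function of x and vanishes as n grows or as the skewness goes to zero, matching the discussion above.

```python
import math

def edgeworth_cdf(x, skew, n):
    """One-term Edgeworth approximation to the distribution of a
    standardized statistic (illustrative form for a standardized mean):

        F(x) ~= Phi(x) - skew / (6*sqrt(n)) * (x**2 - 1) * phi(x)

    where Phi and phi are the standard normal CDF and density.
    """
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    Phi = 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return Phi - skew / (6.0 * math.sqrt(n)) * (x * x - 1.0) * phi
```

With zero skewness the approximation reduces exactly to the normal CDF; with positive skewness the distribution function is lifted at the center, illustrating why symmetric normal intervals miscover.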
To construct asymptotic confidence intervals, we should invert the Edgeworth expansions to obtain expansions of distribution quantiles. Such expansions are known as Cornish–Fisher expansions.
For any
, let
be the
quantile of distribution
, which is the solution to
. This quantile of distribution
admits a Cornish–Fisher expansion of the form (
Hall, 1992b;
Hwang, 2019).
where
is the estimate of
and
is the
-th quantile of the standard normal distribution.
The
Edgeworth expansion confidence interval for the ratio
is given by
where
and
and
are the
and
quantiles of distribution
.
For positively skewed data, the true upper quantile is larger than the associated standard normal quantile, and similarly the true lower quantile is larger than .
From the Cornish–Fisher expansion, we can state the asymptotic coverage probability of the proposed intervals.
The coverage probability of confidence intervals is given by
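A sketch of the resulting skewness-corrected interval, assuming a one-term Cornish–Fisher quantile adjustment (the quantile formula shown is the classical one for a standardized statistic and is an assumption here, not necessarily the paper's exact expression):

```python
import math

def cornish_fisher_ci(theta_hat, se, skew, n, z=1.96):
    """Skewness-corrected CI for the ratio via Cornish-Fisher quantiles.

    Assumed one-term expansion of the quantile:
        q(z_a) = z_a + skew * (z_a**2 - 1) / (6*sqrt(n))
    The interval inverts U = (theta_hat - theta)/se with the corrected
    quantiles and is no longer symmetric when skew != 0.
    """
    def q(z_a):
        return z_a + skew * (z_a * z_a - 1.0) / (6.0 * math.sqrt(n))
    return theta_hat - se * q(z), theta_hat - se * q(-z)
```

When the skewness estimate is zero, this collapses to the ordinary Delta interval; for positively skewed data, the whole interval shifts relative to the point estimate, in line with the quantile discussion above.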
4. Bias-Correction Analysis
4.1. Bias of Estimator
In
Section 1, we noted that the ratio estimator
is a biased estimator of the ratio of parameters. It is essential to determine the expected direction and magnitude of this bias.
The Fieller estimator and the classical estimator are strongly consistent (they converge to the ratio with probability one) but are generally biased.
In the following, we propose to correct the bias of the classical estimator. The literature on ratios of parameters typically uses only first-order expansions to approximate asymptotic sampling distributions. However, higher-order expansions are also useful: they can be used to estimate the bias of the ratio of parameters, and the analytical form of the bias can then be used to construct the bias-corrected estimator.
We consider a second-order term in the Taylor series expansion for bias estimation that captures the nonlinearity of the ratio of parameters. This additional second-order term can be helpful in the sense of more accurate coverage probabilities for the CIs.
Let
be
; then, from a second-order Taylor series expansion,
where
G is a Jacobian vector containing all the first-order partial derivatives and
H is a Hessian matrix containing all the second partial derivatives for the nonlinear function
evaluated at
and
, and the remainder
is of order
, i.e.,
as
.
We define the bias and variance of the ratio estimator using the first and second moments of the terms in this second-order Taylor series expansion. Taking the expectation of this expansion, and under the conditions for , we obtain the bias of the ratio estimator, given in the following proposition.
Proposition 1. Let be a ratio of parameters; a second-order Taylor's series expansion gives the approximation of the bias
where Σ is the variance-covariance matrix of and , and H is the Hessian matrix of the second partial derivatives. The estimate of the bias is given by
where and are the estimates of H and Σ, respectively. This yields
which can also be written as
where can be considered as a correction factor to the estimated ratio estimator. This bias is identical to Tin's (1965) bias. It uses the same information as the correction factor formed by subtracting
from
. This bias is of the same order as in Tin (1965); our bias is derived by a different method.
Tin (
1965) and
David and Sukhatme (
1974) used an asymptotic series expansion of the ratio estimator under certain conditions. The high-order of Tin’s bias formulation was given by
David and Sukhatme (
1974).
To obtain the sign of the bias, we express it as a function of the coefficients of variation and the coefficient of co-variation
where
is the estimate of the correlation coefficient between
and
.
Following this latter formula, if the coefficient of variation of is close to zero, then the bias may be negligible relative to the variation in . Furthermore, if the coefficient of variation of is greater than the coefficient of variation of , the absolute value of the bias increases as the correlation between and becomes zero or negative. Similarly, if the coefficient of variation of is greater than the coefficient of variation of , the bias is negative for a high positive correlation coefficient. Moreover, if , then the bias is positive; if , then the bias is negative; and if , then the ratio estimator is unbiased.
4.2. The Bias-Corrected Estimator
We have obtained an analytical form of the bias, and the estimated bias of the ratio of parameters can be used to correct the estimator; the bias-corrected estimator is given by
Following this result, the bias-corrected estimator is obtained by adjusting the classical estimator by a term that reduces the bias from order to order .
The bias-corrected estimator can also be written as
where
can be considered as a correction factor to the estimated ratio estimator.
This bias-corrected estimator
has the same structure as
Tin’s (
1965) almost unbiased ratio estimator in the sense that its bias is of
, i.e., the bias of
converges to zero at a faster rate than that of
. Tin called it a "modified ratio estimator". He showed that his estimator is better than other competing estimators of the population mean, up to the second order of approximation, and that it is equivalent to the
Beale (
1962) estimator up to the first order of approximation. Tin's estimator has been studied theoretically and via simulation by
Dalabehera and Sahoo (
1995) and
Swain and Dash (
2020), who found Tin's estimator generally to be less biased and more efficient than other proposed ratio estimators.
The bias-corrected estimator
in terms of coefficient of variation and the coefficient of co-variation of
and
is
where
can be considered as a correction factor to the estimated ratio estimator.
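The second-order bias approximation and the Tin-type correction can be sketched as follows (hypothetical function name; the code restates the relation bias ~ ratio * (squared coefficient of variation of the denominator minus the coefficient of co-variation) derived above):

```python
def tin_bias_correction(a_hat, b_hat, var_b, cov_ab):
    """Second-order Taylor bias of theta_hat = a_hat/b_hat and the
    resulting Tin-type bias-corrected estimator.

    Because the second derivative of a/b with respect to a is zero,
    var(a_hat) does not enter the bias:
        Bias(theta_hat) ~= theta * (var_b/b**2 - cov_ab/(a*b))
        theta_bc         = theta_hat - Bias
                         = theta * (1 + cov_ab/(a*b) - var_b/b**2)
    """
    theta = a_hat / b_hat
    cv_b2 = var_b / b_hat**2          # squared coefficient of variation of b
    cv_ab = cov_ab / (a_hat * b_hat)  # coefficient of co-variation
    bias = theta * (cv_b2 - cv_ab)
    return bias, theta - bias
```

In the independent case (zero covariance), the correction reduces to down-weighting the classical estimator by one minus the squared coefficient of variation of the denominator, as in Proposition 2.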
Next, we examine the case where the numerator and denominator of the ratio are independent. In this case, we specify the bias and the bias-corrected estimator in the following proposition:
Proposition 2. If and are independent, we have the following
- (i)
The estimate of the bias is
where denotes the square of the statistic (or statistic) for and can be considered as a correction factor to the estimated ratio estimator.
The estimate of the bias of the ratio parameters is an estimator of the ratio weighted by the square of the coefficient of variation of (the inverse of the square of the statistic for or the inverse of the statistic).
- (ii)
The bias-corrected estimator is
The bias-corrected estimator of the ratio parameters is an estimator of the ratio weighted by the simple statistic , this weight is less than one because or is positive.
4.3. The Variance of the Bias-Corrected Estimator
As we have shown, the bias-corrected estimator
corresponds to the
Tin (
1965) almost unbiased ratio estimator, also known as the modified ratio estimator. The approximation of the variance of
with a second-order term expressed in terms of the coefficient of variation and the coefficient of co-variation of
and
is identical to the variance of the almost unbiased ratio estimator. We therefore use this variance as the variance of the bias-corrected estimator.
The estimate of the variance of the bias-corrected estimator
is as follows (full derivation details can be seen in
Appendix A):
where the first order part
is the estimate of the variance of
corresponding to a first-order approximation, and the second-order part is an additional term from the second-order approximation, which makes it possible to take the correlation between the random variables into account.
This yields
which can also be written as
Thus, this variance can be expressed in terms of the coefficients of variation of
and
by
where
is the estimate of the correlation coefficient between
and
.
This variance is identical to the variance of the “almost unbiased ratio estimator” (or the variance of the modified ratio estimator) of
Tin (
1965), see also
David and Sukhatme (
1974).
If and are independent, we have:
- (i)
The estimate of the variance of the bias-corrected estimator
is given by
which can also be written as
- (ii)
The variance
can be expressed in terms of the coefficients of variation of
and
6. Some Econometric Applications
6.1. The Ratio of Two Linear Combinations of Parameters
Many of the nonlinear functions studied in economic applications are expressed in the functional form of a ratio of two linear combinations of parameters. In this section, we consider the test of one such nonlinear function.
We will specify the bias of the estimator, the bias-corrected estimator, and its variance. Note that the formulations of the confidence intervals are given in the previous section. We will see that the calculations are quite simple and do not require intensive computation.
Consider the general linear model
where
Y is an
vector of observations,
X is a
full-rank design matrix,
is a
vector of unknown parameters, and
is an
vector of normal random errors with zero mean and variance
:
. The OLS estimators of unknown parameters are
and
where
are the OLS residuals.
Consider a null hypothesis for the ratio of two linear combinations of parameters
where
K and
L are
vectors of known constants.
We have the following different terms:
, ,
, , ,
, , =
By replacing all these terms in the formulation of the bias for , the bias-corrected estimator , and the variance of the bias-corrected estimator , we have the following proposition.
Proposition 3. - (i)
The bias for is
which can also be written as
where can be considered as a correction factor to the estimated ratio estimator. - (ii)
The bias-corrected estimator is given by
which can be written as
where can be considered as a correction factor for the estimated ratio estimator. - (iii)
The estimate of the variance of the bias-corrected estimator is
where is the first-order approximation
and is the additional part from the second-order approximation
Next, we consider the case where the numerator and the denominator of the ratio are independent.
Proposition 4. - (i)
If and are independent, then the bias for becomes
which can be written as
where can be considered as a correction factor for the estimated ratio estimator. - (ii)
The bias-corrected estimator is given by
which can be written as
where can be considered as a correction factor for the estimated ratio estimator. - (iii)
The variance of the bias-corrected estimator
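A numerical sketch of the OLS building blocks for the ratio of two linear combinations (names and the returned layout are illustrative; the bias and variance formulas of Propositions 3 and 4 can be assembled from the returned pieces):

```python
import numpy as np

def ratio_linear_combinations(X, y, K, L):
    """OLS sketch for the ratio K'beta / L'beta.

    Returns the classical estimator together with the quantities used
    in the text: the variances of K'b and L'b and their covariance,
    all taken from s^2 * (X'X)^{-1}.
    """
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)   # OLS estimate of the error variance
    V = s2 * XtX_inv               # estimated covariance matrix of beta
    num, den = K @ beta, L @ beta
    return {
        "theta": num / den,        # classical ratio estimator
        "var_num": K @ V @ K,
        "var_den": L @ V @ L,
        "cov": K @ V @ L,
    }
```

The returned variances and covariance are exactly the inputs needed by the Delta, Fieller, and bias-correction formulas, so no intensive computation is required.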
We will illustrate this result with an econometric application to show the simplicity of calculation for our method. Let us take the case of the turning point, which has been the subject of numerous economic applications.
6.2. The Turning Point
Consider a classical linear model described by the quadratic regression model
where
y is the dependent variable and
x the independent variable and
is an unobserved random error term with expected value
and variance
. A common example of such a model is the
Kuznets (
1955) curve, which proposes that the relationship between income inequality and income can be represented by an inverted U-shaped curve. Following the Kuznets hypothesis, the relation between a country's income inequality and its economic development is concave, with inequality first increasing and then decreasing as the country's economy develops. See
Bernard et al. (
2019),
J. Hirschberg and Lye (
2005), and
Lye and Hirschberg (
2018), among others, for the applications and the extensions of this “Kuznets curve”. The turning point (or extremum value) is given by
assuming
, the extremum value
is a minimum value if
and a maximum value if
.
In this case, and and we have
, ,
, , ,
, ,
By replacing all these terms in the formulations of the bias for , the bias-corrected estimator , and its variance, we obtain the following results:
- (i)
The bias for is
which can be written as
where
can be considered as a correction factor to the estimated ratio estimator. - (ii)
The bias can be expressed in terms of the coefficients of variation and the coefficient of co-variation of and
where denotes the for , and is the estimate of the correlation coefficient between and , and the term can be considered as a correction factor to the estimated ratio estimator. An alternative form of the bias is
where
can be considered as a correction factor to the estimated ratio estimator. - (iii)
The bias-corrected estimator is
which can be written as
where
can be considered as a correction factor to the estimated ratio estimator. - (iv)
The bias-corrected estimator
in terms of the coefficient of variation and the coefficient of co-variation of
and
is
where
can be considered as a correction factor to the estimated ratio estimator. An alternative form is
where
can be considered as a correction factor to the estimated ratio estimator. - (v)
The estimate of the variance of the bias-corrected estimator - (vi)
Thus, this variance
can be expressed in terms of the coefficients of variation of
and
by
This variance is easily calculated using the t-statistics for
for
We have developed a new method for deriving analytical formulae for the bias of the ratio estimator , the bias-corrected estimator , and the variance of the bias-corrected estimator . The advantage of this method is that the calculations are quite simple and do not require intensive computations, unlike the bootstrap methods.
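As a worked sketch of the turning-point application (illustrative names; only the point estimate and its first-order Delta variance are computed, from which the bias and variance corrections above follow):

```python
import numpy as np

def turning_point(x, y):
    """Fit the quadratic regression y = b0 + b1*x + b2*x**2 + e by OLS
    and return the estimated turning point -b1/(2*b2) together with its
    first-order Delta-method variance (a sketch)."""
    X = np.column_stack([np.ones_like(x), x, x * x])
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    s2 = resid @ resid / (n - p)
    V = s2 * XtX_inv                     # estimated covariance of beta
    b1, b2 = beta[1], beta[2]
    tp = -b1 / (2.0 * b2)
    # Gradient of -b1/(2*b2): (0, -1/(2*b2), b1/(2*b2**2))
    g = np.array([0.0, -1.0 / (2.0 * b2), b1 / (2.0 * b2 * b2)])
    return tp, g @ V @ g
```

The same fitted coefficients and covariance matrix feed directly into the bias and bias-corrected formulas of this section, again without any resampling.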
8. Conclusions
We have developed new methods for constructing confidence intervals for nonlinear functions of parameters. In many practical applications, the distribution of the data is not symmetric, particularly when the sample size is small. Applying the Edgeworth expansion to the statistic makes it possible to remedy this inconvenience: the Delta method can be extended using the Edgeworth expansion to obtain a better approximation. Furthermore, we have shown that estimators of nonlinear functions of parameters are biased, and we have given an analytical expression for the bias of the ratio of parameters. This has allowed us to define bias-corrected estimators and, in particular, to calculate the variance associated with these bias-corrected estimators. We have therefore proposed two further new methods: the Delta method with bias correction and the Edgeworth expansion with bias correction. The new methods we propose are straightforward to calculate and do not require intensive computations such as bootstrapping.
The results of the simulation study show that our methods generally have better coverage probabilities and narrower confidence intervals than the Delta method and Fieller's method. In the case of bivariate normality, the Delta intervals with bias correction give better coverage probabilities than the Delta intervals; they are comparable to, and sometimes better than, Fieller's intervals. When the data are generated from a skewed distribution, the Edgeworth methods, both without and with bias correction, perform well in terms of controlling the coverage probabilities and the average interval length. Thus, we recommend using our new bias-corrected methods to construct reliable confidence intervals for nonlinear functions of the estimated parameters.
Finally, it should be noted that the method outlined in this paper for deriving analytical formulae for the bias of ratio estimators, the bias-corrected estimator, and the variance of the bias-corrected estimator can be useful in several econometric and statistical applications, such as the long-run elasticities and flexibilities in dynamic models, the willingness-to-pay value, and structural impulse responses.