Article

Re-Examining Confidence Intervals for Ratios of Parameters

by
Zaka Ratsimalahelo
CRESE (UR 3190), Université Marie et Louis Pasteur, F-25000 Besançon, France
Econometrics 2025, 13(3), 37; https://doi.org/10.3390/econometrics13030037
Submission received: 24 June 2025 / Revised: 28 August 2025 / Accepted: 11 September 2025 / Published: 20 September 2025

Abstract

This paper considers the problem of constructing confidence intervals (CIs) for nonlinear functions of parameters, particularly ratios of parameters, a common issue in econometrics and statistics. Classical CIs (such as the Delta method and the Fieller method) often fail in small samples due to biased parameter estimators and skewed distributions. We extend the Delta method using the Edgeworth expansion to correct for the skewness that arises because estimated parameters have non-normal and asymmetric distributions. The resulting bias-corrected confidence intervals are easy to compute and have a good coverage probability that converges to the nominal level at a rate of O(n^{−1/2}), where n is the sample size. We also propose bias-corrected estimators based on second-order Taylor expansions, aligning with the “almost unbiased ratio estimator”. We then correct the CIs according to the Delta method and the Edgeworth expansion. Thus, our new methods for constructing confidence intervals account for both the bias and the skewness of the distribution of the nonlinear functions of parameters. We conduct a simulation study to compare the confidence intervals of our new methods with the two classical methods. The methods evaluated include Fieller’s interval, the Delta interval with and without bias correction, and the Edgeworth expansion interval with and without bias correction. The results show that our new methods with bias correction generally perform well in terms of controlling the coverage probabilities and average interval lengths. They should be recommended for constructing confidence intervals for nonlinear functions of estimated parameters.

1. Introduction

Many econometric and statistical applications involve tests of nonlinear functions of parameters that can be expressed as the ratio of two unknown parameters, including the ratio of regression coefficients, the ratio of two linear functions (such as the ratio of affine transformations of random variables), and, more generally, the ratio of two nonlinear functions.
A non-exhaustive list of examples of econometric models where inference for the ratio of parameters is used is as follows: the long-run elasticities and flexibilities in dynamic models (Li & Maddala, 1999; Dorfman et al., 1990; Bernard et al., 2007; J. G. Hirschberg et al., 2008); the willingness-to-pay value, i.e., the maximum price an agent would pay to obtain an improvement in a particular attribute of a desired good or service (Lye & Hirschberg, 2018); the turning point in a quadratic specification model where the estimated relationship is either a U-shaped or an inverted U-shaped curve, for example, Kuznets and Beveridge curves, in applications to dynamic panel data (Bernard et al., 2019; Lye & Hirschberg, 2018); the determination of the non-accelerating inflation rate of unemployment (NAIRU), for example, in a Phillips curve (Staiger et al., 1997; J. Hirschberg & Lye, 2010c; Lye & Hirschberg, 2018); the structural parameter in an exactly identified system of equations as estimated by the two-stage least squares method (J. Hirschberg & Lye, 2017; Lye & Hirschberg, 2018; Andrews et al., 2019); the notion of weak instruments in econometric models (Woglom, 2001); inequality indices (Dufour et al., 2018, 2024); and structural impulse responses (Olea et al., 2021). Lye and Hirschberg (2018) give some other examples of econometric models.
Other examples come from statistical applications, including cost-effectiveness analysis (Briggs & Fenn, 1998); the comparison of health outcomes across spatial domains (Beyene & Moineddin, 2005); bioequivalence assessment and dose–response analysis (Sitter & Wu, 1993; Faraggi et al., 2003; Y. Wang et al., 2015); and the estimation of willingness to pay (Leitner, 2024). For other statistical applications, see P. Wang et al. (2021) and Raghav et al. (2025).
However, the statistical properties of the ratio of parameters can be problematic because analytical expressions for the moments are generally not available; e.g., the ratio of asymptotically normally distributed random variables follows a non-central Cauchy distribution. Moreover, if the denominator of the ratio is not significantly different from zero, the probability distribution of the ratio shows unusual behavior, and the confidence intervals are unbounded. Another problem worth highlighting is the bias of the estimator in finite samples when studying a nonlinear function of parameters.
To test the null hypothesis about nonlinear functions of parameters that can be expressed as the ratio of two unknown parameters, we use confidence intervals (CIs). The two most widely used approaches for constructing CIs are the Fieller method and the Delta method. The advantage of these methods is that they can be implemented in any context and are easy to compute; they do not require the intensive computation and sampling strategies that would be needed when using a bootstrap or Bayesian method (J. Hirschberg & Lye, 2010c; J. Hirschberg & Lye, 2017; Lye & Hirschberg, 2018).
Fieller (1954) proposed a method to derive the confidence interval (CI) of the ratio of two random variables. Fieller’s method assumes that both the numerator and the denominator of the ratio follow a normal distribution. The method is based on the inversion of the pivotal t-statistic, which gives an exact CI achieving the required coverage probability. Fieller’s CI is asymmetric around the ratio estimate, which is a good property, as it can reflect the skewness of the small-sample distribution of the ratio. However, if the denominator of the ratio is not significantly different from zero, Fieller’s CI will be unbounded, being either the entire real line or the union of two disconnected infinite intervals. It has a positive probability of producing a CI of infinite length. Furthermore, Fieller’s interval requires finding the roots of a quadratic equation, and these can be imaginary. In addition, if this quadratic equation has one root, the confidence interval will be half-open.
The Delta method is based on the first-order Taylor expansion of the nonlinear function of parameters. By assuming asymptotic normality in large samples, this method produces a symmetric and bounded CI, unlike the Fieller method. However, the Delta method often has an inaccurate coverage probability (Dufour, 1997) and unbalanced tail errors, even at moderate sample sizes (J. Hirschberg & Lye, 2010c). A geometric interpretation of the Fieller and Delta methods can be found in von Luxburg and Franz (2009) and J. Hirschberg and Lye (2010a, 2010b). According to J. Hirschberg and Lye (2010c), if the true value of the ratio has the same sign as the correlation coefficient between the numerator and the denominator, then the Delta and Fieller intervals may be very similar, even if the denominator has a high variance. However, if the signs are opposite and the precision of the denominator is low, then the Delta method performs more poorly.
Moreover, there are two potential problems with the Fieller and Delta methods: first, the parameter estimator is biased for nonlinear functions of parameters; second, the estimated parameters have non-normal and asymmetric distributions. Thus, the variance of the estimated parameters is not useful in constructing confidence intervals (Dorfman et al., 1990; Li & Maddala, 1999).
In order to overcome the limitations of the previous methods, some numerical procedures have been proposed in the literature, such as the parametric bootstrap and the nonparametric bootstrap (standard bootstrap, bootstrap t-statistic, bootstrap percentile, bootstrap bias-corrected, and bootstrap bias-corrected and accelerated); see Krinsky and Robb (1986), Dorfman et al. (1990), and Li and Maddala (1999), among others. The CIs obtained from these iterative procedures are bounded but more computationally intensive to obtain.
Dorfman et al. (1990) compared the Delta and Fieller methods and three types of the single bootstrap, and found that, although the bootstrap did not achieve nominal coverage, all methods performed reasonably well.
The bootstrap percentile-t and Delta-method confidence intervals are, in many cases, very close to each other in terms of interval length (Li & Maddala, 1999).
It should be noted that none of the previous methods takes into account the bias of the estimator, which should be a prerequisite for constructing a reliable confidence interval.
In this regard, the paper makes five main contributions. First, we propose a novel analytical approach that modifies the Delta method to reduce the effect of skewness. This method is based on the Edgeworth expansion (Hall, 1992b). We then propose an easy-to-compute confidence interval for the ratio of parameters whose coverage probability converges to the nominal level at a rate of O(n^{−1/2}), where n is the sample size. Second, the source of potential bias is the nonlinearity of the ratio θ̂ = θ̂₁/θ̂₂ in terms of θ̂₁ and θ̂₂. It is well known that even when exactly unbiased estimators θ̂₁ and θ̂₂ are available, the ratio estimator θ̂ can still be badly biased in finite samples. We consider a second-order term in the Taylor series expansion for bias estimation that captures the nonlinearity of the ratio estimator θ̂, and we propose a bias-corrected estimator, which is identical to the almost unbiased ratio estimator proposed by Tin (1965). Third, we investigate the problem of approximating the variance of a nonlinear function of parameters based on a second-degree Taylor series expansion. Unfortunately, when calculating the variance of the second-degree Taylor expansion, most authors (Parr, 1983; Hayya et al., 1975; Y. Wang et al., 2015) did not take into account the possible covariances between the random variables, which is indispensable because it provides a better approximation. This variance is none other than the variance of the bias-corrected estimator (or the variance of the almost unbiased ratio estimator of Tin (1965)). Fourth, we define a modified version of the Delta method, correct the estimator for the bias, and calculate the corresponding variance. This can be helpful in terms of more accurate coverage probabilities for the CIs. Fifth, we propose a novel analytical approach to construct the CI for the ratio estimate.
Our method, the Edgeworth expansion with a bias-corrected estimator, uses the Edgeworth expansion but adopts an estimator corrected for its bias, together with the corresponding variance. The method always produces a bounded CI. Simulation results show that it generally outperforms the plain Edgeworth expansion in terms of controlling the coverage probabilities and the average width, and it is particularly useful when the data are skewed.
The rest of this paper is organized as follows: Section 2 presents some highlights. Section 3 studies the different methods for constructing CIs, the Fieller and Delta methods, and develops the Edgeworth expansion for the Delta method. Section 4 provides an analytical form of the bias that can be used to construct the bias-corrected estimator and to calculate its variance. Section 5 presents the confidence intervals with the bias-corrected estimator. Section 6 presents some econometric applications. The simulation study and the results are presented in Section 7, and Section 8 concludes the paper.

2. Some Highlights

2.1. Definitions, Notation

Let X and Y be two random variables; we assume that the first and second moments exist. The expected value of X is denoted by E(X) = x and the variance of X by V(X); the square of the coefficient of variation of X is defined by CV(X)² = V(X)/x², and the coefficient of variation of X by CV(X) = √V(X)/x. Similar notation is used for the random variable Y. We assume that E(X) = x and E(Y) = y are non-zero. The covariance of X and Y is defined by Cov(X, Y) = E(XY) − E(X)E(Y), and the correlation coefficient between X and Y by ρ = Cov(X, Y)/(√V(X) √V(Y)), so that |ρ| ≤ 1 and Cov(X, Y) = ρ √V(X) √V(Y). The coefficient of co-variation of X and Y is defined by CV(X, Y) = Cov(X, Y)/(xy), which can be expressed as the product of the correlation coefficient and the coefficients of variation of X and Y, respectively: CV(X, Y) = ρ (√V(X)/x)(√V(Y)/y) = ρ CV(X) CV(Y). We use the notation a ± b for the interval [a − b, a + b] (b ≥ 0).
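As an illustration, the definitions above translate directly into code. This is a minimal sketch; the numerical moments below are hypothetical and serve only to check the identity CV(X, Y) = ρ CV(X) CV(Y):

```python
import math

def cv(var_x, mean_x):
    """Coefficient of variation: CV(X) = sqrt(V(X)) / E(X)."""
    return math.sqrt(var_x) / mean_x

def cv2(var_x, mean_x):
    """Squared coefficient of variation: CV(X)^2 = V(X) / E(X)^2."""
    return var_x / mean_x**2

def co_variation(cov_xy, mean_x, mean_y):
    """Coefficient of co-variation: CV(X, Y) = Cov(X, Y) / (E(X) E(Y))."""
    return cov_xy / (mean_x * mean_y)

# hypothetical moments: V(X) = 0.04, V(Y) = 0.09, E(X) = 2, E(Y) = 3, rho = 0.5
vx, vy, x, y, rho = 0.04, 0.09, 2.0, 3.0, 0.5
cov_xy = rho * math.sqrt(vx) * math.sqrt(vy)
# the identity CV(X, Y) = rho * CV(X) * CV(Y) holds by construction
print(co_variation(cov_xy, x, y), rho * cv(vx, x) * cv(vy, y))
```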

2.2. The Ratio Estimator Is Biased

Let θ̂₁ and θ̂₂ be consistent estimators of θ₁ and θ₂, respectively, with E(θ̂₁) = θ₁ and E(θ̂₂) = θ₂, and let the ratio estimator θ̂ = θ̂₁/θ̂₂ be a consistent estimator of the ratio θ = θ₁/θ₂. It is well known that the ratio of two unbiased estimators is not, in general, itself an unbiased estimator, i.e., E(θ̂₁/θ̂₂) ≠ E(θ̂₁)/E(θ̂₂) = θ₁/θ₂.
The expected value of the ratio of θ̂₁ to θ̂₂, provided that the appropriate moments exist, is given by
E(θ̂₁/θ̂₂) = E(θ̂₁ × 1/θ̂₂) = E(θ̂₁) × E(1/θ̂₂) + Cov(θ̂₁, 1/θ̂₂)
If θ̂₁ and θ̂₂ are independent, or if θ̂₁ and 1/θ̂₂ are uncorrelated, then
E(θ̂₁ × 1/θ̂₂) = E(θ̂₁) × E(1/θ̂₂).
It is well known that E(1/θ̂₂) ≠ 1/E(θ̂₂). Jensen’s inequality implies that E(1/θ̂₂) ≥ 1/E(θ̂₂), because the function 1/z is convex for z > 0 (we take θ̂₂ > 0); then we have
E(θ̂₁/θ̂₂) = E(θ̂₁) × E(1/θ̂₂) ≥ E(θ̂₁)/E(θ̂₂)
and using E(θ̂₁) = θ₁ and E(θ̂₂) = θ₂ we have
E(θ̂₁/θ̂₂) ≥ θ₁/θ₂, i.e., E(θ̂) ≥ θ
This result shows that the estimator of the ratio of two unbiased estimators is, in general, biased.
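A small Monte Carlo experiment illustrates this bias. The sketch below uses hypothetical true values and standard errors; the sample mean of the simulated ratios exceeds θ₁/θ₂ = 0.5, as Jensen’s inequality predicts:

```python
import random

random.seed(1)

theta1, theta2 = 1.0, 2.0          # hypothetical true values, so theta = 0.5
se1, se2 = 0.1, 0.3                # hypothetical standard errors of the estimators
reps = 200_000

total = 0.0
for _ in range(reps):
    t1_hat = random.gauss(theta1, se1)   # unbiased estimator of theta1
    t2_hat = random.gauss(theta2, se2)   # unbiased estimator of theta2
    total += t1_hat / t2_hat

mean_ratio = total / reps
# E(1/t2_hat) > 1/theta2, so the average simulated ratio exceeds 0.5
print(mean_ratio)
```

A second-order approximation predicts E(θ̂) ≈ θ(1 + CV(θ̂₂)²) = 0.5 × 1.0225 ≈ 0.511, which is what the simulation recovers.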
We will now consider a more general framework that precisely defines the bias of the ratio estimator θ̂.
Note that the covariance of θ̂₂ and the ratio θ̂₁/θ̂₂ is
Cov(θ̂₂, θ̂₁/θ̂₂) = E(θ̂₂ × θ̂₁/θ̂₂) − E(θ̂₂) × E(θ̂₁/θ̂₂) = E(θ̂₁) − E(θ̂₂) × E(θ̂₁/θ̂₂)
Then, by rearranging these terms, provided that E(θ̂₂) ≠ 0, we obtain the expected value of the ratio of θ̂₁ to θ̂₂:
E(θ̂₁/θ̂₂) = E(θ̂₁)/E(θ̂₂) − (1/E(θ̂₂)) × Cov(θ̂₂, θ̂₁/θ̂₂)
This result shows that the expected value of a ratio of two random variables is not the ratio of the expected values; using E(θ̂₁) = θ₁ and E(θ̂₂) = θ₂, we get
E(θ̂₁/θ̂₂) = θ₁/θ₂ − (1/θ₂) × Cov(θ̂₂, θ̂₁/θ̂₂), i.e., E(θ̂) = θ − (1/θ₂) × Cov(θ̂₂, θ̂)
Then the bias of the ratio estimator θ̂ is
Bias(θ̂) = E(θ̂) − θ = −(1/θ₂) × Cov(θ̂₂, θ̂)
The ratio estimator θ̂ is thus generally a biased estimator of the true value of the ratio θ even if its components θ̂₁ and θ̂₂ are themselves unbiased, with the size of the bias of θ̂ depending on both θ₂ and the covariance between θ̂₂ and the ratio θ̂₁/θ̂₂.
The bias of θ̂ can be written as
Bias(θ̂) = −ρ* √V(θ̂₂) √V(θ̂) / θ₂
where ρ* = Cov(θ̂₂, θ̂)/(√V(θ̂₂) √V(θ̂)) is the correlation coefficient between θ̂₂ and the ratio estimator θ̂, and √V(θ̂₂) and √V(θ̂) are their respective standard errors.
Consequently, the absolute value of the bias is
|Bias(θ̂)| = |ρ*| √V(θ̂₂) √V(θ̂) / θ₂ ≤ √V(θ̂₂) √V(θ̂) / θ₂
assuming θ₂ > 0, since the correlation coefficient between θ̂₂ and the ratio estimator θ̂ satisfies |ρ*| ≤ 1.
Thus, an upper bound on the ratio of the absolute value of the bias to its standard error is given by
|Bias(θ̂)| / √V(θ̂) ≤ √V(θ̂₂) / θ₂ = CV(θ̂₂)
where CV(θ̂₂) is the coefficient of variation of θ̂₂.
The bias in the ratio estimator θ̂ is negligible relative to its standard error if the coefficient of variation of θ̂₂ is small, which is likely to be the case when the sample size is sufficiently large. It is well known that the variance of the estimator, V(θ̂₂), is of order O(n^{−1}), and then the bias of θ̂ is also of order O(n^{−1}). Cochran (1977) showed that if the coefficient of variation of θ̂₂ is less than 0.1, then the bias in the ratio estimator θ̂ is small relative to its standard error. However, in econometric and statistical models the bias may be considerable, and bias correction can often improve the finite-sample performance of estimators.
Furthermore, the main difficulty in estimating the bias of θ̂, in order to obtain unbiased estimates of θ, is estimating the covariance between θ̂₂ and the ratio θ̂₁/θ̂₂. Thus, it is difficult to obtain an analytical expression for the bias; as we will see in Section 4, using an approximation of the ratio of parameters gives an analytical form of the bias and leads to a reduction in the bias from order O(n^{−1}) to order O(n^{−2}).

3. Methods

3.1. The Delta Method (or the Taylor’s Series Expansion)

The Delta method, often referred to as the Taylor series expansion (Dorfman et al., 1990; Briggs & Fenn, 1998; Li & Maddala, 1999, among others), estimates the variance of a nonlinear function of two or more random variables by taking a first-order Taylor expansion around the mean values of the variables and calculating the variance of this expression. In the case of the ratio of parameters θ̂ = g(θ̂₁, θ̂₂) = θ̂₁/θ̂₂, the variance of θ̂ is (full derivation details can be seen in Appendix A)
V(θ̂) = (1/θ̂₂²) [ V(θ̂₁) − 2 (θ̂₁/θ̂₂) Cov(θ̂₁, θ̂₂) + (θ̂₁²/θ̂₂²) V(θ̂₂) ],
which can also be written as
V(θ̂) = (θ̂₁²/θ̂₂²) [ CV(θ̂₁)² − 2 CV(θ̂₁, θ̂₂) + CV(θ̂₂)² ]
where CV(θ̂ᵢ)² is the square of the coefficient of variation of the random variable θ̂ᵢ for i = 1, 2, CV(θ̂₁, θ̂₂) = ρ CV(θ̂₁) CV(θ̂₂) is the coefficient of co-variation of θ̂₁ and θ̂₂, and ρ = Cov(θ̂₁, θ̂₂)/(√V(θ̂₁) √V(θ̂₂)) is the correlation coefficient between θ̂₁ and θ̂₂.
To construct a confidence interval for the ratio θ = θ₁/θ₂, we assume that n^{1/2}(θ̂ − θ) is asymptotically normally distributed with zero mean and variance V(θ̂).
Let V̂(θ̂) be a consistent estimator of V(θ̂); the Delta method 100(1 − α)% confidence limits for the ratio θ₁/θ₂ are given as follows:
CI_D : θ̂₁/θ̂₂ ± z_{α/2} Q_D
where θ̂₁/θ̂₂ is the classical estimator, Q_D = √V̂(θ̂) = |θ̂₁/θ̂₂| [ ĈV(θ̂₁)² − 2 ĈV(θ̂₁, θ̂₂) + ĈV(θ̂₂)² ]^{1/2} is the estimated standard error of the classical estimator, and z_{α/2} is the 100(1 − α/2)th percentile of the standard normal distribution.
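As a minimal sketch, the Delta interval can be computed directly from the point estimates, their variances, and their covariance; the inputs below are hypothetical:

```python
from statistics import NormalDist

def delta_ci(t1, t2, v1, v2, cov12, alpha=0.05):
    """Delta-method CI for theta1/theta2 from estimates and (co)variances."""
    theta = t1 / t2
    # V(theta_hat) = (1/t2^2) * [ v1 - 2*(t1/t2)*cov12 + (t1/t2)^2 * v2 ]
    var = (v1 - 2.0 * theta * cov12 + theta**2 * v2) / t2**2
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)   # upper alpha/2 normal quantile
    half = z * var**0.5
    return theta - half, theta + half

# hypothetical estimates: theta1_hat = 2, theta2_hat = 4, with given (co)variances
lo, hi = delta_ci(t1=2.0, t2=4.0, v1=0.10, v2=0.20, cov12=0.05)
print(lo, hi)
```

By construction the interval is symmetric around the point estimate θ̂₁/θ̂₂ = 0.5, which is exactly the limitation the Edgeworth correction below addresses.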
This method assumes that θ ^ is normally distributed and thus symmetrical around its mean. However, the assumption of normality is clearly strong as there is no guarantee that θ ^ is normally distributed.
However, for large sample sizes (or rather small coefficients of variation) the distribution of a ratio may be close to normal. The assumption of normality may therefore be justified in large samples, but in general there is no guarantee that the distribution of a ratio will be well behaved, and the normal approximation may be quite inaccurate if the data have a skewed distribution.

3.2. The Fieller Method

Fieller (1954) proposed a general procedure for constructing confidence limits for the ratio of the means of two normal distributions. In Fieller’s method, the ratio is transformed into a linear function, and the confidence interval for the ratio is obtained from the roots of the resulting quadratic equation.
For testing the null hypothesis H₀: θ₁/θ₂ = γ, equivalently written as a linear combination of the parameters, H₀: θ₁ − γθ₂ = 0, the method assumes that θ̂₁ and θ̂₂ follow a joint normal distribution, so that θ̂₁ − γθ̂₂ is normally distributed. Hence, the pivotal statistic for this test is
T = (θ̂₁ − γθ̂₂) / √( V̂(θ̂₁) − 2γ Ĉov(θ̂₁, θ̂₂) + γ² V̂(θ̂₂) )
which follows a t-distribution with df degrees of freedom under the null hypothesis.
Let t_{α/2,df} denote the 100(1 − α/2)th percentile of the t-distribution with df degrees of freedom; we have
P( T² ≤ t_{α/2,df}² ) = 1 − α
Replacing T² by its expression and rearranging gives a quadratic inequality in γ:
a γ² + b γ + c ≤ 0
where a = θ̂₂² − t_{α/2,df}² V̂(θ̂₂), b = −2( θ̂₁θ̂₂ − t_{α/2,df}² Ĉov(θ̂₁, θ̂₂) ), and c = θ̂₁² − t_{α/2,df}² V̂(θ̂₁).
Finding an explicit form for the confidence interval for γ requires solving this quadratic inequality. The solution depends on the sign of a and of d = b² − 4ac, the discriminant of the quadratic equation. Through simple calculation, we can express d as follows:
d = 4 t_{α/2,df}² θ̂₁² θ̂₂² [ ( ĈV(θ̂₂) − ρ̂ ĈV(θ̂₁) )² + (a/θ̂₂²) ĈV(θ̂₁)² (1 − ρ̂²) ]
where ρ ^ is the estimate of the correlation coefficient between θ ^ 1 and θ ^ 2 . Hence, a > 0 also implies d > 0 .
If d > 0, let γ_L and γ_U (γ_L < γ_U) be the two real-valued solutions of the quadratic equation in γ obtained by changing the inequality into an equality. This gives the bounds of the Fieller interval in the case a > 0: the two roots are the lower and upper limits of the (1 − α) confidence interval. The bounds of the interval are given by
CI_F : [γ_L, γ_U] = (θ̂₁/θ̂₂)_F ± t_{α/2,df} Q_F
where (θ̂₁/θ̂₂)_F = (θ̂₁/θ̂₂) (1/(1 − h)) [ 1 − h ρ̂ ĈV(θ̂₁)/ĈV(θ̂₂) ] is the Fieller estimator, Q_F = |θ̂₁/θ̂₂| (1/(1 − h)) [ ĈV(θ̂₁)² − 2 ĈV(θ̂₁, θ̂₂) + ĈV(θ̂₂)² − h ĈV(θ̂₁)² (1 − ρ̂²) ]^{1/2} is the estimated standard error of the Fieller estimator, and h = t_{α/2,df}² ĈV(θ̂₂)².
However, if a < 0 the Fieller CI will be unbounded: if d > 0 the Fieller CI is the complement of a finite interval, (−∞, γ_L] ∪ [γ_U, +∞), and if d < 0 the Fieller CI is the whole real line (−∞, +∞).
Other intervals arise when a = 0: the Fieller CI is (−∞, −c/b] if b > 0 and [−c/b, +∞) if b < 0.
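The case analysis above can be sketched as follows. `fieller_ci` is a hypothetical helper that returns the interval type together with the relevant bounds, and the numerical inputs are made up for illustration:

```python
import math

def fieller_ci(t1, t2, v1, v2, cov12, t_crit):
    """Fieller interval for theta1/theta2 via the quadratic a*g^2 + b*g + c <= 0."""
    t2c = t_crit**2
    a = t2**2 - t2c * v2
    b = -2.0 * (t1 * t2 - t2c * cov12)
    c = t1**2 - t2c * v1
    if a == 0.0:                               # degenerate quadratic: linear case
        return ("half-open", -c / b) if b != 0.0 else ("degenerate", None)
    d = b**2 - 4.0 * a * c
    if a > 0.0:                                # d > 0 follows; bounded interval
        gl = (-b - math.sqrt(d)) / (2.0 * a)
        gu = (-b + math.sqrt(d)) / (2.0 * a)
        return ("bounded", (gl, gu))
    if d > 0.0:                                # a < 0: complement of a finite interval
        gl = (-b + math.sqrt(d)) / (2.0 * a)   # a < 0 flips the root ordering
        gu = (-b - math.sqrt(d)) / (2.0 * a)
        return ("exclusive", (gl, gu))         # (-inf, gl] U [gu, +inf)
    return ("whole-line", None)                # a < 0 and d < 0

# hypothetical estimates with a precisely estimated denominator (a > 0)
kind, bounds = fieller_ci(t1=2.0, t2=4.0, v1=0.10, v2=0.20, cov12=0.05, t_crit=1.96)
print(kind, bounds)
```

With these inputs the denominator is significantly different from zero, so the bounded case applies; inflating `v2` enough would drive `a` negative and produce an unbounded interval.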
Remark 1. 
(1) 
Following the Fieller estimator, the term (1/(1 − h)) [ 1 − h ρ̂ ĈV(θ̂₁)/ĈV(θ̂₂) ] can be considered a correction factor to the classical ratio estimator.
(2) 
If ρ̂ ĈV(θ̂₁)/ĈV(θ̂₂) = 1, then the Fieller estimator (θ̂₁/θ̂₂)_F is equal to the classical estimator θ̂₁/θ̂₂.
(3) 
In the case of a finite interval, the condition a > 0 is equivalent to |θ̂₂|/√V̂(θ̂₂) > t_{α/2,df}, which means rejecting the null hypothesis H₀: θ₂ = 0, i.e., θ₂ is significantly different from zero. The test of this null hypothesis is the first step of Scheffé’s procedure (Scheffé, 1970).
(4) 
The t-statistic |θ̂₂|/√V̂(θ̂₂) is equal to 1/ĈV(θ̂₂), the absolute inverse of the coefficient of variation of θ̂₂, so the null hypothesis is rejected if the coefficient of variation of θ̂₂ is small. (A high coefficient of variation of θ̂₂ means a low t-statistic value.)
(5) 
If h is close to zero the Fieller CI becomes the Delta CI.
It should be noted that the null hypothesis H₀: θ₁ − γθ₂ = 0 is obtained from the nonlinear relationship θ₁/θ₂ = γ only when θ₂ ≠ 0. However, Fieller’s method does not take this information into account. Therefore, the Fieller CI has the potential to overestimate the confidence length.
Furthermore, Fieller’s estimator is a linear transformation of the ratio estimator (or classical estimator). As we showed in Section 2.2, the ratio estimator is generally biased, so Fieller’s estimator is also generally biased.
The advantage of Fieller’s method over the Delta method is that it takes into account the potential skewness of the sampling distribution of the ratio estimator, and the interval therefore need not be symmetric around the point estimate. Fieller’s method provides an exact solution subject to the joint normality assumption. However, it has been argued that the assumption of joint normality may be difficult to justify, particularly when sample sizes are small; in particular, if the random variables follow a skewed distribution, the normality assumption may be problematic.
The normal approximation is a rather rough approximation, especially when sample sizes are not large; it does not take into account the skewness of the underlying distribution, which is often the main source of error of the normal approximation. To remove the effect of the skewness, we develop the Edgeworth expansion.

3.3. Edgeworth Expansion

The Delta method-based confidence interval is not very robust and can be quite inaccurate in practice for non-normal data. It produces intervals that are symmetric around the point estimate, so it does not take skewness into account. The correction for skewness used in our confidence intervals is based on the Edgeworth expansion.
We propose a method based on the Edgeworth expansion to modify the Delta intervals so as to remove the effect of skewness. The expansion provides a way to correct for the skewness in the data and to derive new confidence intervals for the ratio of parameters. We consider two aspects: first, an Edgeworth expansion is derived for the Delta method for a ratio of parameters of a normal random variable; second, by using the inverse of the Edgeworth expansion, i.e., the quantiles of the distribution given by the Cornish–Fisher expansion, we construct an approximate confidence interval that contains an n^{−1/2}-order correction for the effect of skewness.
The Delta method can thus easily be extended for a better approximation by using the Edgeworth expansion.
Let U = V̂(θ̂)^{−1/2} n^{1/2} (θ̂ − θ), where θ̂ = θ̂₁/θ̂₂, θ = θ₁/θ₂, and V̂(θ̂) is the estimate of V(θ̂) in the Delta method. We assume that the distribution of the random variable U has the Edgeworth expansion (Hwang, 2019; Hall, 1992a)
F(x) = P(U ≤ x) = Φ(x) − n^{−1/2} (κ/6) (x² − 1) φ(x) + O(n^{−1})
where Φ(x) and φ(x) are the standard normal distribution and density functions, respectively, κ is the skewness, and n is the sample size. This expansion can be interpreted as the sum of the normal distribution Φ(x) and an error due to the skewness of the distribution. When the error (the n^{−1/2} skewness correction) is small in absolute value, U can be accurately approximated by a normal distribution. Conversely, when the error is large in absolute value, the second term in the formulation cannot be ignored, and the normal approximation is not as accurate. The n^{−1/2} skewness correction is an even function of x, which means that it shifts the distribution function symmetrically about zero. Thus, the skewness of the distribution F has a significant effect, especially when the sample size n is small.
To construct asymptotic confidence intervals, we should invert the Edgeworth expansions to obtain expansions of distribution quantiles. Such expansions are known as Cornish–Fisher expansions.
For any 0 < α < 1, let ξ_α be the αth quantile of the distribution F(·), which is the solution to F(ξ_α) = α. This quantile ξ_α = F^{−1}(α) admits a Cornish–Fisher expansion of the form (Hall, 1992b; Hwang, 2019)
ξ_α = z_α + n^{−1/2} (κ̂/6) (z_α² − 1) + O(n^{−1})
where κ ^ is the estimate of κ and z α is the α -th quantile of the standard normal distribution.
The 100(1 − α)% Edgeworth expansion confidence interval for the ratio θ₁/θ₂ is given by
CI_E : [ θ̂₁/θ̂₂ − ξ_{1−α/2} Q_D , θ̂₁/θ̂₂ − ξ_{α/2} Q_D ]
where Q_D = |θ̂₁/θ̂₂| [ ĈV(θ̂₁)² − 2 ĈV(θ̂₁, θ̂₂) + ĈV(θ̂₂)² ]^{1/2}, and ξ_{α/2} and ξ_{1−α/2} are the (α/2)th and (1 − α/2)th quantiles of the distribution F(·).
For positively skewed data, the true 1 − α/2 quantile ξ_{1−α/2} is larger than the associated standard normal quantile z_{1−α/2}, and similarly the true lower quantile ξ_{α/2} is larger than z_{α/2}.
From the Cornish–Fisher expansion, we can state the asymptotic coverage probability of the proposed intervals.
The coverage probability of confidence intervals is given by
P( θ₁/θ₂ ∈ CI_E ) = 1 − α + O(n^{−1/2}).
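Putting the Cornish–Fisher quantiles and the interval CI_E together, a minimal sketch (with hypothetical values of the point estimate, Q_D, the estimated skewness κ̂, and n) is:

```python
from statistics import NormalDist

def cornish_fisher_quantile(alpha, kappa_hat, n):
    """xi_alpha = z_alpha + n^{-1/2} * (kappa_hat / 6) * (z_alpha^2 - 1)."""
    z = NormalDist().inv_cdf(alpha)
    return z + n**-0.5 * (kappa_hat / 6.0) * (z**2 - 1.0)

def edgeworth_ci(theta_hat, q_d, kappa_hat, n, alpha=0.05):
    """CI_E = [theta_hat - xi_{1-a/2} * Q_D, theta_hat - xi_{a/2} * Q_D]."""
    xi_lo = cornish_fisher_quantile(alpha / 2.0, kappa_hat, n)
    xi_hi = cornish_fisher_quantile(1.0 - alpha / 2.0, kappa_hat, n)
    return theta_hat - xi_hi * q_d, theta_hat - xi_lo * q_d

# hypothetical inputs; with kappa_hat > 0 both Cornish-Fisher quantiles exceed
# their normal counterparts, so both endpoints move below the symmetric Delta CI
lo, hi = edgeworth_ci(theta_hat=0.5, q_d=0.08, kappa_hat=1.2, n=50)
print(lo, hi)
```

Setting κ̂ = 0 recovers the symmetric Delta interval, which makes the skewness correction easy to inspect in isolation.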

4. Bias-Correction Analysis

4.1. Bias of Estimator

In Section 2.2, we showed that the ratio estimator θ̂ = θ̂₁/θ̂₂ is a biased estimator of the ratio of parameters. It is essential to determine the expected direction and magnitude of this bias.
The Fieller estimator and the classical estimator are strongly consistent (they converge to the ratio θ₁/θ₂ with probability one), but they are generally biased.
In the following, we propose to correct the bias of the classical estimator. The literature on ratios of parameters typically uses only first-order expansions to approximate asymptotic sampling distributions. However, higher-order expansions are also useful: they can be used to estimate the bias of the ratio of parameters, and the resulting analytical form of the bias can be used to construct the bias-corrected estimator.
We consider a second-order term in the Taylor series expansion for bias estimation that captures the nonlinearity of the ratio of the parameters. This additional second-order term can be helpful in the sense of more accurate coverage probabilities for the CIs.
Let θ = g(θ₁, θ₂) = θ₁/θ₂; then, from a second-order Taylor series expansion,
g(θ̂₁, θ̂₂) = g(θ₁, θ₂) + G′ (θ̂₁ − θ₁, θ̂₂ − θ₂)′ + (1/2) (θ̂₁ − θ₁, θ̂₂ − θ₂) H (θ̂₁ − θ₁, θ̂₂ − θ₂)′ + R_n
where G is the Jacobian vector containing all the first-order partial derivatives and H is the Hessian matrix containing all the second-order partial derivatives of the nonlinear function g(θ̂₁, θ̂₂), both evaluated at θ₁ and θ₂, and the remainder R_n is of order o(‖(θ̂₁ − θ₁, θ̂₂ − θ₂)‖²), i.e., R_n / ‖(θ̂₁ − θ₁, θ̂₂ − θ₂)‖² → 0 as θ̂ᵢ → θᵢ for i = 1, 2, as n → ∞.
We define the bias and variance of the ratio estimator using the first and second moments of the terms in this second-order Taylor series expansion. Taking the expectation of this expansion, and under the conditions E(θ̂ᵢ − θᵢ) = 0 for i = 1, 2, we obtain the bias of the ratio estimator given in the following proposition.
Proposition 1. 
Let θ = θ₁/θ₂ be a ratio of parameters; a second-order Taylor series expansion gives the approximation of the bias
Bias(θ̂) = E(θ̂) − θ = (1/2) (vec H)′ vec(Σ) + O(n^{−2})
where Σ is the variance-covariance matrix of θ ^ 1 and θ ^ 2 and H is the Hessian matrix of the second partial derivatives.
The estimate of the bias is given by
B̂ias(θ̂) = (1/2) (vec Ĥ)′ vec(Σ̂) + O(n^{−2})
where Ĥ and Σ̂ are the estimates of H and Σ, respectively.
This yields
B̂ias(θ̂) = −(1/θ̂₂²) Ĉov(θ̂₁, θ̂₂) + (θ̂₁/θ̂₂³) V̂(θ̂₂) + O(n^{−2}),
which can also be written as
B̂ias(θ̂) = (θ̂₁/θ̂₂) [ V̂(θ̂₂)/θ̂₂² − Ĉov(θ̂₁, θ̂₂)/(θ̂₂θ̂₁) ] + O(n^{−2})
where V̂(θ̂₂)/θ̂₂² − Ĉov(θ̂₁, θ̂₂)/(θ̂₂θ̂₁) can be considered a correction factor to the ratio estimator.
Proof. 
(see Appendix A). □
This bias is identical to Tin’s bias (Tin, 1965). It uses the same information, with the correction factor formed by subtracting Ĉov(θ̂₁, θ̂₂)/(θ̂₂θ̂₁) from V̂(θ̂₂)/θ̂₂². The remaining bias is of order O(n^{−2}) (Tin, 1965). Our bias is derived by a different method: Tin (1965) and David and Sukhatme (1974) used an asymptotic series expansion of the ratio estimator under certain conditions. The higher-order version of Tin’s bias formula was given by David and Sukhatme (1974).
To obtain the sign of the bias, we express the bias as a function of the coefficient of variation and the coefficient of co-variation
B i a s ^ ( θ ^ ) = θ ^ 1 θ ^ 2 C V ^ ( θ ^ 1 ) C V ^ ( θ ^ 2 ) C V ^ ( θ ^ 2 ) C V ^ ( θ ^ 1 ) ρ ^ + O ( n 2 )
where ρ ^ is the estimate of the correlation coefficient between θ ^ 1 and θ ^ 2 .
Following this latter formula, if the coefficient of variation of θ ^ 2 is close to zero, then the bias may be negligible relative to the variation in θ ^ . Furthermore, if the coefficient of variation of θ ^ 2 is greater than that of θ ^ 1 , the absolute value of the bias increases as the correlation between θ ^ 1 and θ ^ 2 becomes zero or negative. Similarly, if the coefficient of variation of θ ^ 1 is greater than that of θ ^ 2 , the bias is negative for a high positive correlation coefficient. Moreover, if C V ^ ( θ ^ 2 ) C V ^ ( θ ^ 1 ) > ρ ^ then the bias is positive, if C V ^ ( θ ^ 2 ) C V ^ ( θ ^ 1 ) < ρ ^ then the bias is negative, and if C V ^ ( θ ^ 2 ) C V ^ ( θ ^ 1 ) = ρ ^ then the ratio estimator is unbiased to this order of approximation.
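As a concrete sketch, the bias estimate above can be evaluated directly from the point estimates and their (co)variances. In the following Python snippet the function names are ours, and the covariance term enters with a negative sign, as in the standard second-order expansion:

```python
import math

def ratio_bias(t1, t2, v1, v2, cov):
    """Second-order (Tin-type) bias estimate of the ratio t1/t2:
    -Cov(t1, t2)/t2**2 + t1*Var(t2)/t2**3 (covariance term negative)."""
    return -cov / t2**2 + t1 * v2 / t2**3

def ratio_bias_cv(t1, t2, v1, v2, cov):
    """Same bias written with coefficients of variation and rho:
    (t1/t2) * CV1 * CV2 * (CV2/CV1 - rho)."""
    cv1, cv2 = math.sqrt(v1) / t1, math.sqrt(v2) / t2
    rho = cov / math.sqrt(v1 * v2)
    return (t1 / t2) * cv1 * cv2 * (cv2 / cv1 - rho)
```

Both forms agree numerically, and the sign of the bias flips exactly when ρ̂ crosses CV̂(θ̂2)/CV̂(θ̂1), as stated above.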

4.2. The Bias-Corrected Estimator

Having obtained an analytic form of the bias, the estimated bias of the ratio of parameters can be used to correct the estimator. The bias-corrected estimator is given by
θ ^ B C = θ ^ B i a s ^ ( θ ^ ) = θ ^ 1 θ ^ 2 1 2 ( v e c H ^ ) v e c Σ ^ + O ( n 2 )
This yields
θ ^ 1 θ ^ 2 B C = θ ^ 1 θ ^ 2 + 1 θ ^ 2 2 C o v ^ ( θ ^ 1 , θ ^ 2 ) θ ^ 1 θ ^ 2 3 V ^ ( θ ^ 2 ) + O ( n 2 )
Following this result, the bias-corrected estimator is obtained by adjusting the classical estimator by a term that reduces the order of the bias from O ( n 1 ) to O ( n 2 ) .
The bias-corrected estimator can also be written as
θ ^ 1 θ ^ 2 B C = θ ^ 1 θ ^ 2 1 + C o v ^ ( θ ^ 1 , θ ^ 2 ) θ ^ 1 θ ^ 2 V ^ ( θ ^ 2 ) θ ^ 2 2 + O ( n 2 )
where 1 + C o v ^ ( θ ^ 1 , θ ^ 2 ) θ ^ 1 θ ^ 2 V ^ ( θ ^ 2 ) θ ^ 2 2 can be considered as a correction factor to the estimated ratio estimator.
This bias-corrected estimator θ ^ B C has the same structure as Tin's (1965) almost unbiased ratio estimator in the sense that its bias is of O ( n 2 ) , i.e., the bias of θ ^ 1 θ ^ 2 B C converges to zero at a faster rate than that of θ ^ 1 θ ^ 2 . Tin called it a “modified ratio estimator”. He showed that his estimator is better than other competing estimators of the population mean up to the second order of approximation, and that it is equivalent to the Beale (1962) estimator up to the first order of approximation. Tin's estimator has been studied theoretically and via simulation by Dalabehera and Sahoo (1995) and Swain and Dash (2020), who found Tin's estimator generally to be less biased and more efficient compared with other proposed ratio estimators.
The bias-corrected estimator θ ^ B C in terms of coefficient of variation and the coefficient of co-variation of θ ^ 1 and θ ^ 2 is
θ ^ 1 θ ^ 2 B C = θ ^ 1 θ ^ 2 1 + C V ^ ( θ ^ 1 ) C V ^ ( θ ^ 2 ) ρ ^ C V ^ ( θ ^ 2 ) C V ^ ( θ ^ 1 ) + O ( n 2 )
where 1 + C V ^ ( θ ^ 1 ) C V ^ ( θ ^ 2 ) ρ ^ C V ^ ( θ ^ 2 ) C V ^ ( θ ^ 1 ) can be considered as a correction factor to the estimated ratio estimator.
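The correction-factor form of the bias-corrected estimator can be sketched in one line of Python (the function name is ours; signs follow the standard second-order expansion):

```python
def ratio_bc(t1, t2, v1, v2, cov):
    """Bias-corrected ratio: (t1/t2) times the correction factor
    1 + Cov/(t1*t2) - Var(t2)/t2**2 (Tin-type almost unbiased estimator)."""
    return (t1 / t2) * (1.0 + cov / (t1 * t2) - v2 / t2**2)
```

By construction this equals the plain ratio minus the second-order bias estimate, so both routes give the same corrected value.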
Next, we examine the case where the numerator and denominator of the ratio are independent. In this case, we specify the bias and the bias-corrected estimator in the following proposition:
Proposition 2. 
If θ ^ 1 and θ ^ 2 are independent, we have the following
(i) 
The estimate of the bias is B i a s ^ ( θ ^ ) = θ ^ 1 θ ^ 2 C V ^ ( θ ^ 2 ) 2 = θ ^ 1 θ ^ 2 1 t ( θ ^ 2 ) 2
where t ( θ ^ 2 ) 2 denotes the square of the t statistic (or F 1 statistic) for θ ^ 2 and 1 t ( θ ^ 2 ) 2 can be considered as a correction factor to the estimated ratio estimator.
The estimate of the bias of the ratio parameters is an estimator of the ratio weighted by the square of the coefficient of variation of θ ^ 2 (the inverse of the square of the t statistic for θ ^ 2 or the inverse of the F 1 statistic).
(ii) 
The bias-corrected estimator is  θ ^ 1 θ ^ 2 B C = θ ^ 1 θ ^ 2 1 C V ^ ( θ ^ 2 ) 2 = θ ^ 1 θ ^ 2 1 1 t ( θ ^ 2 ) 2 .
The bias-corrected estimator of the ratio parameters is the ratio estimator weighted by the simple statistic 1 1 t ( θ ^ 2 ) 2 ; this weight is less than one because C V ^ ( θ ^ 2 ) 2 (equivalently 1 t ( θ ^ 2 ) 2 ) is positive.
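Under independence, the correction reduces to a simple t-statistic weight; a minimal sketch (names are ours):

```python
def ratio_bc_indep(t1, t2, se2):
    """Independent case: the bias-corrected ratio is the plain ratio
    weighted by (1 - 1/t_stat**2), where t_stat = t2/se(t2)."""
    t_stat = t2 / se2
    return (t1 / t2) * (1.0 - 1.0 / t_stat**2)
```

Since the weight is below one, the corrected estimate always shrinks the plain ratio toward zero in this case.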

4.3. The Variance of the Bias-Corrected Estimator

As we have shown, the bias-corrected estimator θ ^ B C corresponds to the Tin (1965) almost unbiased ratio estimator, also known as the modified ratio estimator. The approximation of the variance of θ ^ with a second-order term expressed in terms of the coefficient of variation and the coefficient of co-variation of θ ^ 1 and θ ^ 2 is identical to the variance of the almost unbiased ratio estimator. We therefore use this variance as the variance of the bias-corrected estimator.
The estimate of the variance of the bias-corrected estimator θ ^ B C is as follows (full derivation details can be seen in Appendix A):
V ^ ( θ ^ B C ) = G ^ Σ ^ G ^ first-order part + 1 2 ( v e c H ^ ) ( Σ ^ Σ ^ ) v e c H ^ second-order part
where the first-order part G ^ Σ ^ G ^ is the estimate of the variance of θ ^ corresponding to a first-order approximation, and the second-order part is an additional part from the second-order approximation, which takes into account the correlation between the random variables.
This yields
V ^ ( θ ^ B C ) = 1 θ ^ 2 2 V ^ ( θ ^ 1 ) 2 θ ^ 1 θ ^ 2 C o v ^ ( θ ^ 1 , θ ^ 2 ) + θ ^ 1 2 θ ^ 2 2 V ^ ( θ ^ 2 ) first-order approximation + 1 θ ^ 2 4 V ^ ( θ ^ 2 ) V ^ ( θ ^ 1 ) 4 θ ^ 1 θ ^ 2 C o v ^ ( θ ^ 1 , θ ^ 2 ) + 2 θ ^ 1 2 θ ^ 2 2 V ^ ( θ ^ 2 ) + 1 θ ^ 2 4 C o v ^ ( θ ^ 1 , θ ^ 2 ) 2 additional part from second-order approximation
which can also be written as
V ^ ( θ ^ B C ) = θ ^ 1 2 θ ^ 2 2 V ^ ( θ ^ 1 ) θ ^ 1 2 2 C o v ^ ( θ ^ 1 , θ ^ 2 ) θ ^ 1 θ ^ 2 + V ^ ( θ ^ 2 ) θ ^ 2 2 first-order approximation + V ^ ( θ ^ 2 ) θ ^ 2 2 V ^ ( θ ^ 1 ) θ ^ 1 2 4 C o v ^ ( θ ^ 1 , θ ^ 2 ) θ ^ 1 θ ^ 2 + 2 V ^ ( θ ^ 2 ) θ ^ 2 2 + C o v ^ ( θ ^ 1 , θ ^ 2 ) 2 θ ^ 1 2 θ ^ 2 2 additional part from second-order approximation
Thus, this variance can be expressed in terms of the coefficients of variation of θ ^ 1 and θ ^ 2 by
V ^ ( θ ^ B C ) = θ ^ 1 2 θ ^ 2 2 C V ^ ( θ ^ 1 ) 2 2 ρ ^ C V ^ ( θ ^ 1 ) C V ^ ( θ ^ 2 ) + C V ^ ( θ ^ 2 ) 2 first-order approximation + C V ^ ( θ ^ 2 ) 2 C V ^ ( θ ^ 1 ) 2 4 ρ ^ C V ^ ( θ ^ 1 ) C V ^ ( θ ^ 2 ) + ρ ^ 2 C V ^ ( θ ^ 1 ) 2 + 2 C V ^ ( θ ^ 2 ) 2 additional part from second-order approximation
where ρ ^ is the estimate of the correlation coefficient between θ ^ 1 and θ ^ 2 .
This variance is identical to the variance of the “almost unbiased ratio estimator” (or the variance of the modified ratio estimator) of Tin (1965), see also David and Sukhatme (1974).
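The CV form of this variance can be checked numerically against the moment form; the sketch below (names are ours; signs follow the standard second-order expansion) implements the CV version:

```python
import math

def ratio_bc_var(t1, t2, v1, v2, cov):
    """Variance estimate of the bias-corrected ratio: the first-order
    delta-method part plus the additional second-order part, written in
    terms of CVs and the correlation rho."""
    cv1, cv2 = math.sqrt(v1) / t1, math.sqrt(v2) / t2
    rho = cov / math.sqrt(v1 * v2)
    first = cv1**2 - 2.0 * rho * cv1 * cv2 + cv2**2
    extra = cv2**2 * (cv1**2 - 4.0 * rho * cv1 * cv2
                      + rho**2 * cv1**2 + 2.0 * cv2**2)
    return (t1 / t2)**2 * (first + extra)
```

Expanding the CV terms reproduces the moment form of the variance term by term, which is a useful sanity check when implementing it.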
If θ ^ 1 and θ ^ 2 are independent, we have:
(i)
The estimate of the variance of the bias-corrected estimator θ ^ B C is given by
V ^ ( θ ^ B C ) = 1 θ ^ 2 2 V ^ ( θ ^ 1 ) + θ ^ 1 2 θ ^ 2 2 V ^ ( θ ^ 2 ) first-order approximation + 1 θ ^ 2 4 V ^ ( θ ^ 2 ) V ^ ( θ ^ 1 ) + 2 θ ^ 1 2 θ ^ 2 2 V ^ ( θ ^ 2 ) additional part from second-order approximation
which can also be written as
V ^ ( θ ^ B C ) = θ ^ 1 2 θ ^ 2 2 V ^ ( θ ^ 1 ) θ ^ 1 2 + V ^ ( θ ^ 2 ) θ ^ 2 2 first-order approximation + V ^ ( θ ^ 2 ) θ ^ 2 2 V ^ ( θ ^ 1 ) θ ^ 1 2 + 2 V ^ ( θ ^ 2 ) θ ^ 2 2 additional part from second-order approximation
(ii)
The variance V ^ ( θ ^ B C ) can be expressed in terms of the coefficients of variation of θ ^ 1 and θ ^ 2 :
V ^ ( θ ^ B C ) = θ ^ 1 2 θ ^ 2 2 C V ^ ( θ ^ 1 ) 2 + C V ^ ( θ ^ 2 ) 2 first-order approximation + C V ^ ( θ ^ 2 ) 2 C V ^ ( θ ^ 1 ) 2 + 2 C V ^ ( θ ^ 2 ) 2 additional part from second-order approximation

5. Confidence Intervals with Bias-Corrected Estimator

In this section, we construct new confidence intervals that take into account the bias of the estimator for the Delta method, and both the bias of the estimator and the asymmetry of the distribution for the Edgeworth expansion method.

5.1. Delta Method Based Confidence Interval with Bias-Corrected Estimator

Let us define the estimated standard error of the bias-corrected estimator θ ^ B C by
Q B C = V ^ ( θ ^ B C )
The bias-corrected estimator is
θ ^ 1 θ ^ 2 B C = θ ^ 1 θ ^ 2 1 + C o v ^ ( θ ^ 1 , θ ^ 2 ) θ ^ 1 θ ^ 2 V ^ ( θ ^ 2 ) θ ^ 2 2
or, in terms of the coefficients of variation and the coefficient of co-variation,
θ ^ 1 θ ^ 2 B C = θ ^ 1 θ ^ 2 1 + C V ^ ( θ ^ 1 ) C V ^ ( θ ^ 2 ) ρ ^ C V ^ ( θ ^ 2 ) C V ^ ( θ ^ 1 ) .
The 100 ( 1 α ) % confidence limits of the bias-corrected Delta method for the ratio θ 1 / θ 2 are given as follows:
C I D b c : θ ^ 1 θ ^ 2 B C ± z α / 2 Q B C
where z α / 2 is the ( α / 2 ) t h quantile of the standard normal distribution.
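CI_Dbc can thus be computed from the five sample moments alone; a sketch using Python's statistics.NormalDist for the normal quantile (function name is ours; signs follow the standard second-order expansion):

```python
import math
from statistics import NormalDist

def delta_bc_ci(t1, t2, v1, v2, cov, alpha=0.05):
    """CI_Dbc: bias-corrected centre +/- z_{1-alpha/2} * Q_BC, where
    Q_BC is the square root of the second-order variance estimate."""
    bc = (t1 / t2) * (1.0 + cov / (t1 * t2) - v2 / t2**2)
    var = (v1 / t2**2 - 2 * t1 * cov / t2**3 + t1**2 * v2 / t2**4
           + (v2 / t2**2) * (v1 / t2**2 - 4 * t1 * cov / t2**3
                             + 2 * t1**2 * v2 / t2**4)
           + cov**2 / t2**4)
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    half = z * math.sqrt(var)
    return bc - half, bc + half
```

The interval is symmetric about the bias-corrected centre, unlike the Edgeworth-based interval of the next subsection.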

5.2. Edgeworth Expansion Based Confidence Interval with Bias-Corrected Estimator

For the Edgeworth expansion based confidence interval, we use the same correction term for the estimator of the ratio of parameters; the 100 ( 1 α ) % confidence interval for the ratio θ 1 / θ 2 based on the Edgeworth expansion then becomes
C I E b c : θ ^ 1 θ ^ 2 B C ξ 1 α / 2 Q B C , θ ^ 1 θ ^ 2 B C ξ α / 2 Q B C
where ξ α / 2 and ξ 1 α / 2 are the ( α / 2 ) t h and ( 1 α / 2 ) t h quantiles of the distribution, with
ξ α = z α + n 1 / 2 κ ^ 1 6 ( z α 2 1 )
where κ ^ is the estimate of the skewness κ and z α is the α t h quantile of the standard normal distribution.
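A sketch of the skewness-adjusted quantiles and the resulting CI_Ebc, assuming the one-term correction ξα = zα + n^{-1/2}(κ̂/6)(zα² − 1) as above (function names are ours):

```python
import math
from statistics import NormalDist

def edgeworth_quantile(p, n, skew):
    """Skewness-adjusted quantile xi_p = z_p + (skew/6)*(z_p**2 - 1)/sqrt(n)."""
    z = NormalDist().inv_cdf(p)
    return z + (skew / 6.0) * (z**2 - 1.0) / math.sqrt(n)

def edgeworth_bc_ci(theta_bc, q_bc, n, skew, alpha=0.05):
    """CI_Ebc: [theta_bc - xi_{1-a/2}*Q_BC, theta_bc - xi_{a/2}*Q_BC]."""
    upper = edgeworth_quantile(1.0 - alpha / 2.0, n, skew)
    lower = edgeworth_quantile(alpha / 2.0, n, skew)
    return theta_bc - upper * q_bc, theta_bc - lower * q_bc
```

With zero skewness the interval collapses to the symmetric Delta-type interval; positive skewness shifts both endpoints, producing the intended asymmetry.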

6. Some Econometric Applications

6.1. The Ratio of Two Linear Combinations of Parameters

Many of the nonlinear functions studied in economic applications are expressed in the functional form of a ratio of two linear combinations of parameters. In this section, we consider the test of one such nonlinear function.
We will specify the bias of the estimator, the bias-corrected estimator, and its variance. Note that the formulations of the confidence intervals are given in the previous section. We will see that the calculations are quite simple and do not require intensive computation.
Consider the general linear model
Y = X β + ε
where Y is an n × 1 vector of observations, X is an n × k full-rank design matrix, β is a k × 1 vector of unknown parameters, and ε is an n × 1 vector of normal random errors with zero mean and variance σ 2 I : ε N ( 0 , σ 2 I ) . The OLS estimators of the unknown parameters are β ^ = ( X X ) 1 X Y and σ ^ 2 = ε ^ ε ^ / ( n k ) where ε ^ are the OLS residuals.
Consider a null hypothesis for the ratio of two linear combinations of parameters
H 0 : θ = K β L β
where K and L are k × 1 vectors of known constants.
We have the following different terms:
θ 1 = K β , θ 1 2 = ( K β ) 2 , V ^ ( θ ^ 1 ) = K V ^ ( β ^ ) K = σ ^ 2 K ( X X ) 1 K
θ 2 = L β , θ 2 2 = ( L β ) 2 , θ 2 3 = ( L β ) 3 , V ^ ( θ ^ 2 ) = L V ^ ( β ^ ) L =   σ ^ 2 L ( X X ) 1 L
θ 1 θ 2 = ( K β ) ( L β ) , θ 1 2 θ 2 2 = ( K β ) 2 ( L β ) 2 , C o v ^ ( θ ^ 1 , θ ^ 2 ) = C o v ^ ( K β ^ , L β ^ ) = σ ^ 2 K ( X X ) 1 L
By replacing all these terms in the formulation of the bias for θ ^ , the bias-corrected estimator θ ^ B C , and the variance of the bias-corrected estimator V ^ ( θ ^ B C ) , we have the following proposition.
Proposition 3. 
(i) 
The bias for θ ^ is
B i a s ^ ( θ ^ ) = 1 ( L β ) 2 σ ^ 2 K ( X X ) 1 L + K β ^ ( L β ^ ) 3 σ ^ 2 L ( X X ) 1 L ,
which can also be written as
B i a s ^ ( θ ^ ) = K β ^ L β ^ σ ^ 2 L ( X X ) 1 L ( L β ^ ) 2 σ ^ 2 K ( X X ) 1 L ( K β ^ ) ( L β ^ )
where σ ^ 2 L ( X X ) 1 L ( L β ^ ) 2 σ ^ 2 K ( X X ) 1 L ( K β ^ ) ( L β ^ ) can be considered as a correction factor to the estimated ratio estimator.
(ii) 
The bias-corrected estimator θ ^ B C is given by
θ ^ 1 θ ^ 2 B C = K β ^ L β ^ + 1 ( L β ) 2 σ ^ 2 K ( X X ) 1 L K β ^ ( L β ^ ) 3 σ ^ 2 L ( X X ) 1 L ,
which can be written as
θ ^ 1 θ ^ 2 B C = K β ^ L β ^ 1 + σ ^ 2 K ( X X ) 1 L ( K β ^ ) ( L β ^ ) L ( X X ) 1 L ( L β ^ ) 2
where 1 + σ ^ 2 K ( X X ) 1 L ( K β ^ ) ( L β ^ ) L ( X X ) 1 L ( L β ^ ) 2 can be considered as a correction factor for the estimated ratio estimator.
(iii) 
The estimate of the variance of the bias-corrected estimator
V ^ ( θ ^ B C ) = ( K β ^ ) 2 ( L β ^ ) 2 A 1 + A 2
where A 1 is the first-order approximation
A 1 = σ ^ 2 K ( X X ) 1 K ( K β ^ ) 2 2 K ( X X ) 1 L ( K β ) ( L β ) + L ( X X ) 1 L ( L β ^ ) 2
and A 2 is the additional part from second-order approximation
A 2 = σ ^ 2 L ( X X ) 1 L ( L β ^ ) 2 σ ^ 4 K ( X X ) 1 K ( K β ^ ) 2 4 K ( X X ) 1 L ( K β ) ( L β ) + 2 L ( X X ) 1 L ( L β ^ ) 2 + σ ^ 4 ( K ( X X ) 1 L ) 2 ( K β ^ ) 2 ( L β ^ ) 2
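Proposition 3 can be implemented directly from an OLS fit; the sketch below (function names and the simulated design are ours; signs follow the standard second-order expansion) estimates θ = K′β/L′β with bias correction:

```python
import numpy as np

def ratio_of_combinations(X, y, K, L):
    """Bias, bias-corrected estimate, and second-order variance for
    theta = (K'b)/(L'b) in the model y = X b + e; the covariance term
    enters the bias with a negative sign."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    s2 = (y - X @ b) @ (y - X @ b) / (n - k)        # sigma^2 estimate
    t1, t2 = K @ b, L @ b
    v1 = s2 * K @ XtX_inv @ K
    v2 = s2 * L @ XtX_inv @ L
    c12 = s2 * K @ XtX_inv @ L
    bias = -c12 / t2**2 + t1 * v2 / t2**3
    var = (v1 / t2**2 - 2 * t1 * c12 / t2**3 + t1**2 * v2 / t2**4
           + (v2 / t2**2) * (v1 / t2**2 - 4 * t1 * c12 / t2**3
                             + 2 * t1**2 * v2 / t2**4)
           + c12**2 / t2**4)
    return bias, t1 / t2 - bias, var

# Illustrative design: true ratio beta_1/beta_2 = 2/4 = 0.5
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0, 4.0]) + rng.normal(size=200)
bias, bc, var = ratio_of_combinations(X, y, np.array([0.0, 1.0, 0.0]),
                                      np.array([0.0, 0.0, 1.0]))
```

As the text notes, everything needed is already produced by the OLS fit, so no resampling is required.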
Next, we consider the case where the numerator and the denominator of the ratio are independent.
Proposition 4. 
(i) 
If θ ^ 1 and θ ^ 2 are independent, then the bias for θ ^ becomes
B i a s ^ ( θ ^ ) = K β ^ ( L β ^ ) 3 σ ^ 2 L ( X X ) 1 L
which can be written as
B i a s ^ ( θ ^ ) = K β ^ L β ^ σ ^ 2 L ( X X ) 1 L ( L β ^ ) 2
where σ ^ 2 L ( X X ) 1 L ( L β ^ ) 2 can be considered as a correction factor for the estimated ratio estimator.
(ii) 
The bias-corrected estimator θ ^ B C is given by
θ ^ 1 θ ^ 2 B C = K β ^ L β ^ K β ^ ( L β ^ ) 3 σ ^ 2 L ( X X ) 1 L
which can be written as
θ ^ 1 θ ^ 2 B C = K β ^ L β ^ 1 σ ^ 2 L ( X X ) 1 L ( L β ^ ) 2
where 1 σ ^ 2 L ( X X ) 1 L ( L β ^ ) 2 can be considered as a correction factor for the estimated ratio estimator.
(iii) 
The variance of the bias-corrected estimator
V ^ ( θ ^ B C ) = ( K β ^ ) 2 ( L β ^ ) 2 σ ^ 2 K ( X X ) 1 K ( K β ^ ) 2 + L ( X X ) 1 L ( L β ^ ) 2 first-order approximation + L ( X X ) 1 L ( L β ^ ) 2 σ ^ 2 K ( X X ) 1 K ( K β ^ ) 2 + 2 L ( X X ) 1 L ( L β ^ ) 2 additional part from second-order approximation
We will illustrate this result with an econometric application to show the simplicity of calculation for our method. Let us take the case of the turning point, which has been the subject of numerous economic applications.

6.2. The Turning Point

Consider a classical linear model described by the quadratic regression model
y = β 0 + β 1 x + β 2 x 2 + ε
where y is the dependent variable, x is the independent variable, and ε is an unobserved random error term with expected value E ( ε ) = 0 and variance V ( ε ) = σ 2 . A common example of such a model is the Kuznets (1955) curve, which proposes that the relationship between income inequality and income can be represented by an inverted U-shaped curve. Following the Kuznets hypothesis, the relation between a country's income inequality and economic development is concave, with income inequality first increasing and then decreasing as the country's economy develops. See Bernard et al. (2019), J. Hirschberg and Lye (2005), and Lye and Hirschberg (2018), among others, for applications and extensions of this “Kuznets curve”. The turning point (or extremum value) is given by
θ = β 1 2 β 2
assuming β 2 ≠ 0 , the extremum value θ is a minimum if β 2 > 0 and a maximum if β 2 < 0 .
In this case, K = ( 0 , 1 , 0 ) and L = ( 0 , 0 , 2 ) and we have
θ 1 = β 1 , θ 1 2 = β 1 2 , V ^ ( θ ^ 1 ) = V ^ ( β ^ 1 ) = σ ^ β 1 2
θ 2 = 2 β 2 , θ ^ 2 2 = 4 β 2 2 , θ ^ 2 3 = 8 β 2 3 , V ^ ( θ ^ 2 ) = 4 V ^ ( β ^ 2 ) = 4 σ ^ β 2 2
θ 1 θ 2 = 2 β 1 β 2 , θ 1 2 θ 2 2 = 4 β 1 2 β 2 2 , C o v ^ ( θ ^ 1 , θ ^ 2 ) = 2 C o v ^ ( β ^ 1 , β ^ 2 ) = 2 σ ^ β ^ 1 β ^ 2
By replacing all these terms in the formulation of the bias for θ ^ , the bias-corrected estimator θ ^ B C , and its variance, we have the following results:
(i)
The bias for  θ ^  is
B i a s ^ ( θ ^ ) = 1 2 1 β ^ 2 2 σ ^ β ^ 1 β ^ 2 β ^ 1 β ^ 2 3 σ ^ β ^ 2 2
which can be written as
B i a s ^ ( θ ^ ) = 1 2 β ^ 1 β ^ 2 σ ^ β ^ 2 2 β ^ 2 2 σ ^ β ^ 1 β ^ 2 β ^ 1 β ^ 2
where  σ ^ β ^ 2 2 β ^ 2 2 σ ^ β ^ 1 β ^ 2 β ^ 1 β ^ 2  can be considered as a correction factor to the estimated ratio estimator.
(ii)
The bias can be expressed in terms of the coefficients of variation and the coefficient of co-variation of  β ^ 1  and  β ^ 2 :
B i a s ^ ( θ ^ ) = 1 2 β ^ 1 β ^ 2 1 t ( β ^ 2 ) 2 ρ ^ 1 t ( β ^ 1 ) 1 t ( β ^ 2 )
where  t ( β ^ i )  denotes the t-statistic for  β ^ i  for  i = 1 , 2 ,  ρ ^  is the estimate of the correlation coefficient between  β ^ 1  and  β ^ 2 , and the term  1 t ( β ^ 2 ) 2 ρ ^ 1 t ( β ^ 1 ) 1 t ( β ^ 2 )  can be considered as a correction factor to the estimated ratio estimator.
An alternative form of the bias is
B i a s ^ ( θ ^ ) = 1 2 β ^ 1 β ^ 2 1 t ( β ^ 1 ) 1 t ( β ^ 2 ) t ( β ^ 1 ) t ( β ^ 2 ) ρ ^
where  1 t ( β ^ 1 ) 1 t ( β ^ 2 ) t ( β ^ 1 ) t ( β ^ 2 ) ρ ^  can be considered as a correction factor to the estimated ratio estimator.
(iii)
The bias-corrected estimator  θ ^ B C
θ ^ B C = 1 2 β ^ 1 β ^ 2 1 2 1 β ^ 2 2 σ ^ β ^ 1 β ^ 2 β ^ 1 β ^ 2 3 σ ^ β ^ 2 2
which can be written as
θ ^ B C = 1 2 β ^ 1 β ^ 2 1 + σ ^ β ^ 1 β ^ 2 β ^ 1 β ^ 2 σ ^ β ^ 2 2 β ^ 2 2
where  1 + σ ^ β ^ 1 β ^ 2 β ^ 1 β ^ 2 σ ^ β ^ 2 2 β ^ 2 2  can be considered as a correction factor to the estimated ratio estimator.
(iv)
The bias-corrected estimator  θ ^ B C  in terms of the coefficient of variation and the coefficient of co-variation of  β ^ 1  and  β ^ 2  is
θ ^ B C = 1 2 β ^ 1 β ^ 2 1 + ρ ^ 1 t ( β ^ 1 ) 1 t ( β ^ 2 ) 1 t ( β ^ 2 ) 2
where  1 + ρ ^ 1 t ( β ^ 1 ) 1 t ( β ^ 2 ) 1 t ( β ^ 2 ) 2  can be considered as a correction factor to the estimated ratio estimator.
An alternative form is
θ ^ B C = 1 2 β ^ 1 β ^ 2 1 + 1 t ( β ^ 1 ) 1 t ( β ^ 2 ) ρ ^ t ( β ^ 1 ) t ( β ^ 2 )
where  1 + 1 t ( β ^ 1 ) 1 t ( β ^ 2 ) ρ ^ t ( β ^ 1 ) t ( β ^ 2 )  can be considered as a correction factor to the estimated ratio estimator.
(v)
The estimate of the variance of the bias-corrected estimator  θ ^ B C
V ^ ( θ ^ B C ) = 1 4 β ^ 1 2 β ^ 2 2 σ ^ β ^ 1 2 β ^ 1 2 2 σ ^ β ^ 1 β ^ 2 β ^ 1 β ^ 2 + σ ^ β ^ 2 2 β ^ 2 2 first-order approximation + σ ^ β ^ 2 2 β ^ 2 2 σ ^ β ^ 1 2 β ^ 1 2 4 σ ^ β ^ 1 β ^ 2 β ^ 1 β ^ 2 + 2 σ ^ β ^ 2 2 β ^ 2 2 + ( σ ^ β ^ 1 β ^ 2 ) 2 β ^ 1 2 β ^ 2 2 additional part from second-order approximation
(vi)
Thus, this variance  V ^ ( θ ^ B C )  can be expressed in terms of the coefficients of variation of  β ^ 1  and  β ^ 2  by
V ^ ( θ ^ B C ) = 1 4 β ^ 1 2 β ^ 2 2 C V ^ ( β ^ 1 ) 2 2 ρ ^ C V ^ ( β ^ 1 ) C V ^ ( β ^ 2 ) + C V ^ ( β ^ 2 ) 2 first-order approximation + C V ^ ( β ^ 2 ) 2 C V ^ ( β ^ 1 ) 2 4 ρ ^ C V ^ ( β ^ 1 ) C V ^ ( β ^ 2 ) + ρ ^ 2 C V ^ ( β ^ 1 ) 2 + 2 C V ^ ( β ^ 2 ) 2 additional part from second-order approximation
This variance is easily calculated using the t-statistics for  β ^ i  for  i = 1 , 2 :
V ^ ( θ ^ B C ) = 1 4 β ^ 1 2 β ^ 2 2 1 t ( β ^ 1 ) 2 2 ρ ^ 1 t ( β ^ 1 ) 1 t ( β ^ 2 ) + 1 t ( β ^ 2 ) 2 first-order approximation + 1 t ( β ^ 2 ) 2 1 t ( β ^ 1 ) 2 4 ρ ^ 1 t ( β ^ 1 ) 1 t ( β ^ 2 ) + ρ ^ 2 1 t ( β ^ 1 ) 2 + 2 1 t ( β ^ 2 ) 2 additional part from second-order approximation
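For the turning point, the mapping above makes the computation a few lines; the sketch below (function names and the simulated quadratic are ours) fits y = β0 + β1 x + β2 x² and returns the bias-corrected turning point −β̂1/(2β̂2) with its second-order variance:

```python
import numpy as np

def turning_point_bc(x, y):
    """Bias-corrected turning point -b1/(2*b2) of a fitted quadratic
    y = b0 + b1*x + b2*x**2 + e, with its second-order variance."""
    X = np.column_stack([np.ones_like(x), x, x**2])
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y
    s2 = (y - X @ b) @ (y - X @ b) / (len(x) - 3)
    V = s2 * XtX_inv
    t1, t2 = -b[1], 2.0 * b[2]                 # theta = t1/t2 = -b1/(2*b2)
    v1, v2, c12 = V[1, 1], 4.0 * V[2, 2], -2.0 * V[1, 2]
    bias = -c12 / t2**2 + t1 * v2 / t2**3
    var = (v1 / t2**2 - 2 * t1 * c12 / t2**3 + t1**2 * v2 / t2**4
           + (v2 / t2**2) * (v1 / t2**2 - 4 * t1 * c12 / t2**3
                             + 2 * t1**2 * v2 / t2**4)
           + c12**2 / t2**4)
    return t1 / t2 - bias, var

# Illustrative quadratic with true turning point at x = 1
rng = np.random.default_rng(1)
x = rng.uniform(-2.0, 4.0, size=300)
y = 1.0 + 2.0 * x - x**2 + 0.5 * rng.normal(size=300)
tp_bc, tp_var = turning_point_bc(x, y)
```

Only the fitted coefficients, their covariance matrix, and the t-statistics are needed, consistent with the claim that no intensive computation is required.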
We have developed a new method for deriving analytical formulae for the bias of the estimator ratios θ ^ , the bias-corrected estimator θ ^ B C , and the variance of the bias-corrected estimator V ^ ( θ ^ B C ) . The advantage of this method is that the calculations are quite simple and do not require intensive computations like the bootstrap methods.

7. Simulation Study

7.1. Overview

In this section, we carry out a simulation study to assess the coverage probabilities of the methods presented in the previous section. We also examine the average length of the confidence intervals. We evaluate the performance of the Fieller interval, the Delta method interval with and without bias correction, and the Edgeworth interval with and without bias correction. Let X 1 , , X n be i.i.d. observations from some distribution F with mean μ X and variance σ X 2 , let Y 1 , , Y n be i.i.d. observations from some distribution G with mean μ Y and variance σ Y 2 , and let ρ σ X σ Y be the covariance between the X i s and Y j s , where ρ is the correlation coefficient. Let X ¯ = 1 n i = 1 n X i and Y ¯ = 1 n i = 1 n Y i ; their ratio θ ^ = X ¯ Y ¯ is a consistent estimator of θ = μ X μ Y .
We generate data from three bivariate distributions: a bivariate normal distribution and two positively skewed families of distributions. The two families that we consider are the bivariate lognormal distribution and a bivariate mixture distribution (the X i s are lognormal and the Y i s are normal). We choose three correlation coefficients between X i and Y i (−0.8, 0.1, 0.8) and four sample sizes (25, 50, 100, 1000). We use 10,000 data sets. The data are generated as follows:
(a)
Bivariate Normal Distribution
X i Y i i . i . d N 2 μ X = 7 μ Y = 5 , σ X 2 = 2 ρ σ X σ Y ρ σ X σ Y σ Y 2 = 1
(b)
Bivariate Mixture Distribution
X i = e X ˜ i
X ˜ i Y i i . i . d N 2 μ X ˜ = 5 μ Y = 4 , σ X ˜ 2 = 0.2 ρ σ X ˜ σ Y ρ σ X ˜ σ Y σ Y 2 = 0.5
(c)
Bivariate Lognormal Distribution
X i Y i i . i . d exp N 2 μ X = 5 μ Y = 4 , σ X 2 = 0.2 ρ σ X σ Y ρ σ X σ Y σ Y 2 = 0.5
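The three data-generating designs can be reproduced as follows (the function name is ours; the skewed designs exponentiate the underlying normal draws):

```python
import numpy as np

def draw_pairs(design, n, rho, rng):
    """Draw n (X, Y) pairs for the three simulation designs: 'normal',
    'mixture' (X lognormal, Y normal), and 'lognormal' (both lognormal)."""
    if design == "normal":
        mean, sx, sy = [7.0, 5.0], np.sqrt(2.0), 1.0
    else:                   # parameters of the underlying bivariate normal
        mean, sx, sy = [5.0, 4.0], np.sqrt(0.2), np.sqrt(0.5)
    cov = [[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]]
    z = rng.multivariate_normal(mean, cov, size=n)
    x, y = z[:, 0], z[:, 1]
    if design in ("mixture", "lognormal"):
        x = np.exp(x)
    if design == "lognormal":
        y = np.exp(y)
    return x, y
```

Looping this generator over the chosen correlations and sample sizes, and recording how often each interval covers the true ratio, reproduces the design of the study.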

7.2. Results

The results of our simulation are presented in Table 1. The values reported are for the confidence intervals based on the Fieller method, the Delta method, the Delta method with bias correction (denoted by Dbc), the Edgeworth method, and the Edgeworth method with bias correction (denoted by Ebc). The values of the average width (denoted by Width) are the average lengths of the corresponding intervals. For data generated from the normal distribution, all intervals perform well: all coverage probabilities are close to the nominal level, and average interval lengths (Width) are comparable across methods. The Fieller and Delta confidence intervals are in many cases very close to each other in terms of coverage probabilities. We also observe that the average interval lengths for the Delta method with bias correction (Dbc) are narrower than for the Delta method without bias correction, which means that the estimator is more accurate; likewise, the average interval lengths for the Edgeworth method with bias correction (Ebc) are narrower than for the Edgeworth method without bias correction. However, for data generated from the bivariate mixture and bivariate lognormal distributions, the Delta method confidence intervals are clearly inadequate: the coverage probabilities are lower than the nominal level. Fieller's intervals are also insufficient in terms of coverage probabilities, and all the other methods give coverage probabilities below the nominal level. The Dbc intervals outperform the Delta intervals in coverage probability, and they are comparable to and sometimes better than the Fieller intervals. Note that the Delta interval has the longest average width, whereas the Dbc interval has the shortest; the same applies to the Ebc interval compared with the Edgeworth interval.
We also observe that the Ebc interval performs much better than the Edgeworth interval, which can be explained by the fact that the estimated ratio is biased. Overall, the Edgeworth and Edgeworth bias-corrected intervals appear to be best in terms of coverage probabilities and average width. To explore how the correlation coefficient affects the coverage probabilities, we performed simulations for the different values (−0.8, 0.1, 0.8) in Table 1. The simulation results showed that the correlation coefficients have an impact on the coverage probabilities. The sample sizes have a substantial impact on the coverage probabilities for almost all methods. Among all the methods, the Edgeworth bias-corrected (Ebc) method gives narrower average widths than the others. The important conclusion from our simulation is that one should use the Edgeworth bias-corrected interval rather than the Edgeworth expansion alone. We also considered other sample sizes and other correlation structures; the results are similar and are not reported here.
In summary, the Edgeworth intervals, with and without bias correction, have good performance in terms of coverage probability and average width and should be recommended for constructing confidence intervals when data come from skewed distributions.

8. Conclusions

We have developed new methods for constructing confidence intervals for nonlinear functions of parameters. In many practical applications, the distribution of the data is not symmetric, in particular when the sample size is small. We propose applying the Edgeworth expansion to the statistic to remedy this shortcoming; the Delta method can then be extended using the Edgeworth expansion to obtain a better approximation. Furthermore, we have shown that estimators of nonlinear functions of the parameters are biased, and we have given an analytical expression for the bias of the ratio of parameters. This has allowed us to define bias-corrected estimators and, in particular, to calculate the variance associated with these bias-corrected estimators. We have therefore proposed two other new methods: the Delta method with bias correction and the Edgeworth expansion with bias correction. The new methods we propose are straightforward to calculate and do not require intensive computations such as bootstrapping.
The results of the simulation study showed that our methods generally have better coverage probabilities and narrower confidence intervals than the Delta method and Fieller's method. In the case of bivariate normality, the Delta with bias correction intervals give better coverage probabilities than the Delta intervals; they are comparable to and sometimes better than Fieller's intervals. When the data are generated from a skewed distribution, the Edgeworth intervals, with and without bias correction, perform well in terms of controlling the coverage probabilities and average interval lengths. Thus, we recommend using our new methods with bias correction to construct reliable confidence intervals for nonlinear functions of the estimated parameters.
Finally, it should be noted that the method outlined in this paper for deriving analytical formulae for the bias of ratio estimators, the bias-corrected estimator, and the variance of the bias-corrected estimator can be useful in several econometric and statistical applications, such as, e.g., the long-run elasticities and flexibilities in dynamic models, the willingness to pay value, structural impulse responses, etc.

Funding

This research received no external funding.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A

The Delta method is useful for approximating the moments of nonlinear functions of parameters using a Taylor series expansion. In the literature, only first-order expansions are used to approximate asymptotic sampling distributions. The Delta method provides an approximation of the asymptotic sampling distribution of the ratio of parameters θ = θ 1 / θ 2 where θ 1 and θ 2 are unknown parameters. However, higher-order expansions are also useful because they can be used to estimate the bias of the ratio of parameters, and the analytical form of the bias obtained can be used to construct the bias-corrected estimator. We begin with how the variance of the ratio of parameters in the main text can be approximated with the Delta method. We then extend this approach to obtain the higher-order terms necessary to estimate the bias and derive a bias-corrected estimator.
The variance of a first-order Taylor series expansion.
Let θ = g ( θ 1 , θ 2 ) = θ 1 / θ 2 . On the basis of a Taylor series expansion, the Delta method approximates the variance of a function of the parameter estimators, θ ^ = g ( θ ^ 1 , θ ^ 2 ) , which estimates g ( θ 1 , θ 2 ) . Since θ ^ 1 and θ ^ 2 are unbiased estimators of θ 1 and θ 2 , respectively, i.e., E ( θ ^ i ) = θ i for i = 1 , 2 , the variance of θ ^ is
V ( θ ^ ) = V ( g ( θ ^ 1 , θ ^ 2 ) ) = G Σ G
where G is a Jacobian vector containing all the first-order partial derivatives of g ( θ ^ 1 , θ ^ 2 ) evaluated at θ i for i = 1 , 2 .
G = g ( θ ^ 1 , θ ^ 2 ) θ ^ 1 , g ( θ ^ 1 , θ ^ 2 ) θ ^ 2 = 1 θ 2 , θ 1 θ 2 2
and Σ is the variance-covariance matrix of θ ^ 1 and θ ^ 2 defined as follows
Σ = V ( θ ^ 1 ) C o v ( θ ^ 1 θ ^ 2 ) C o v ( θ ^ 2 θ ^ 1 ) V ( θ ^ 2 )
Solving Equation (A1) and using the estimators θ ^ 1 and θ ^ 2 to replace the unknown parameters θ 1 and θ 2 , respectively, we get the variance of θ ^
V ( θ ^ ) = 1 θ ^ 2 2 V ( θ ^ 1 ) 2 θ ^ 1 θ ^ 2 C o v ( θ ^ 1 , θ ^ 2 ) + θ ^ 1 2 θ ^ 2 2 V ( θ ^ 2 )
which can be written as
V ( θ ^ ) = θ ^ 1 2 θ ^ 2 2 V ( θ ^ 1 ) θ ^ 1 2 2 C o v ( θ ^ 1 , θ ^ 2 ) θ ^ 1 θ ^ 2 + V ( θ ^ 2 ) θ ^ 2 2
Thus, the variance V ( θ ^ ) can be expressed in terms of the coefficients of variation and the coefficient of co-variation of θ ^ 1 and θ ^ 2 :
V ( θ ^ ) = θ ^ 1 2 θ ^ 2 2 C V ( θ ^ 1 ) 2 2 C V ( θ ^ 1 , θ ^ 2 ) + C V ( θ ^ 2 ) 2 = θ ^ 1 2 θ ^ 2 2 C V ( θ ^ 1 ) 2 2 ρ C V ( θ ^ 1 ) C V ( θ ^ 2 ) + C V ( θ ^ 2 ) 2
where C V ( θ ^ i ) 2 is the square of the coefficient of variation of θ ^ i for i = 1 , 2 and C V ( θ ^ 1 , θ ^ 2 ) is the coefficient of co-variation of θ ^ 1 and θ ^ 2 and ρ = C o v ( θ ^ 1 , θ ^ 2 ) V ( θ ^ 1 ) V ( θ ^ 2 ) is the correlation coefficient between θ ^ 1 and θ ^ 2 .
Bias of estimator
The first-order Taylor’s series approximations may not be accurate in some applications because of bias from truncation of the Taylor’s series or small-sample bias in the asymptotic regression parameter variances used in the Taylor’s series formulas. A second order Taylor’s series expansios of g ( θ ^ 1 , θ ^ 2 ) is
g ( θ ^ 1 , θ ^ 2 ) = g ( θ 1 , θ 2 ) + G θ ^ 1 θ 1 θ ^ 2 θ 2 + 1 2 θ ^ 1 θ 1 θ ^ 2 θ 2 H θ ^ 1 θ 1 θ ^ 2 θ 2 + R n
where H is a Hessian matrix containing all the second partial derivatives of g ( θ ^ 1 , θ ^ 2 ) evaluated at θ i for i = 1 , 2 .
H = 2 g ( θ ^ 1 , θ ^ 2 ) θ ^ 1 2 , 2 g ( θ ^ 1 , θ ^ 2 ) θ ^ 1 θ ^ 2 2 g ( θ ^ 1 , θ ^ 2 ) θ ^ 2 θ ^ 1 , 2 g ( θ ^ 1 , θ ^ 1 ) θ ^ 2 2 = 0 , 1 θ 2 2 1 θ 2 2 , 2 θ 1 θ 2 3
and the remainder R n is of order O θ ^ 1 θ 1 θ ^ 2 θ 2 2 , i.e., R n θ ^ 1 θ 1 θ ^ 2 θ 2 2 0 as θ ^ i θ i for i = 1 , 2 as n .
By taking the expectation of Equation (A2), and since E ( θ ^ i θ i ) = 0 for i = 1 , 2 and E ( R n ) = O ( n 2 ) , we obtain
E ( g ( θ ^ 1 , θ ^ 2 ) ) = g ( θ 1 , θ 2 ) + 1 2 t r H Σ + O ( n 2 ) E ( θ ^ ) = θ + 1 2 t r H Σ + O ( n 2 )
where t r ( . ) denotes the trace of matrix, then the bias for θ ^ is defined by
B i a s ( θ ^ ) = E ( θ ^ ) θ = 1 2 t r H Σ + O ( n 2 ) = 1 2 ( v e c H ) v e c Σ + O ( n 2 )
where v e c ( . ) denotes the vectorisation operator which stacks the columns of the matrix and the matrix H is symmetric so that v e c H = v e c H .
Since H and Σ are unknown, we estimate bias as
B i a s ^ ( θ ^ ) = 1 2 t r H ^ Σ ^ + O ( n 2 ) = 1 2 ( v e c H ^ ) v e c Σ ^ + O ( n 2 )
where H ^ is the estimate of the Hessian matrix of the second-order partial derivatives and Σ ^ is the estimate of the variance–covariance matrix of θ ^ 1 and θ ^ 2 .
This yields
B i a s ^ ( θ ^ ) = 1 θ ^ 2 2 C o v ^ ( θ ^ 1 , θ ^ 2 ) + θ ^ 1 θ ^ 2 3 V ^ ( θ ^ 2 ) + O ( n 2 )
which can be written as
B i a s ^ ( θ ^ ) = θ ^ 1 θ ^ 2 V ^ ( θ ^ 2 ) θ ^ 2 2 C o v ^ ( θ ^ 1 , θ ^ 2 ) θ ^ 1 θ ^ 2 + O ( n 2 )
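The trace form of the bias and the explicit ratio form can be verified against each other numerically, using the ratio Hessian H given above (the numerical values are illustrative):

```python
import numpy as np

# Check that (1/2) tr(H Sigma) reproduces the explicit ratio bias
# -Cov/t2**2 + t1*Var(t2)/t2**3, using the Hessian of g = t1/t2.
t1, t2, v1, v2, c = 7.0, 5.0, 2.0, 1.0, 0.5
H = np.array([[0.0, -1.0 / t2**2],
              [-1.0 / t2**2, 2.0 * t1 / t2**3]])
Sigma = np.array([[v1, c], [c, v2]])
bias_trace = 0.5 * np.trace(H @ Sigma)
bias_explicit = -c / t2**2 + t1 * v2 / t2**3
```

The two quantities agree to machine precision, confirming the derivation step above.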
The approximation of the variance of  θ ^  with a second-order term
The calculation of the variance from the second-order Taylor series reveals the covariances between the random variables and gives a better approximation.
To facilitate notation, let us define the random vector z = θ ^ 1 θ 1 , θ ^ 2 θ 2 with E ( z ) = 0 , E ( z z ) = Σ and z is a normal random variable z N ( 0 , Σ ) .
We can rewrite the second order of Taylor’s expansion as follows
g ( θ ^ 1 , θ ^ 2 ) = g ( θ 1 , θ 2 ) + G z + 1 2 z H z + R n
and its variance is
V ( g ( θ ^ 1 , θ ^ 2 ) ) = V ( G z ) + 1 4 V z H z + C o v G z , z H z
To obtain the variance $V\big(g(\hat{\theta}_1, \hat{\theta}_2)\big)$ we need to calculate three terms:
(i)
$V(G'z) = G'\Sigma G$;
(ii)
$\frac{1}{4}V(z'Hz) = \frac{1}{4}\Big[E\big((z'Hz)^2\big) - \big(E(z'Hz)\big)^2\Big] = \frac{1}{4}\Big[\big(\operatorname{tr}(H\Sigma)\big)^2 + 2\operatorname{tr}\big((H\Sigma)^2\big) - \big(\operatorname{tr}(H\Sigma)\big)^2\Big] = \frac{1}{2}\operatorname{tr}\big((H\Sigma)^2\big)$;
(iii)
$\operatorname{Cov}(G'z,\, z'Hz) = G'\,E(z\,z'Hz) = G'\,E\big(z(z \otimes z)'\big)\operatorname{vec} H = 0$,
since the odd moments of $z$ are zero. Thus, the linear form $G'z$ and the quadratic form $z'Hz$ are uncorrelated.
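Terms (ii) and (iii) can be checked with a quick Monte Carlo experiment (the matrices $\Sigma$, $H$ and the vector $G$ below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])   # arbitrary covariance matrix
H = np.array([[0.0, -1.0], [-1.0, 2.0]])     # arbitrary symmetric matrix
G = np.array([1.0, -0.5])                    # arbitrary gradient vector

z = rng.multivariate_normal([0.0, 0.0], Sigma, size=500_000)
lin  = z @ G                                 # linear form G'z
quad = np.einsum('ni,ij,nj->n', z, H, z)     # quadratic form z'Hz

HS = H @ Sigma
print(np.var(quad), 2 * np.trace(HS @ HS))   # V(z'Hz) ~ 2 tr((H Sigma)^2)
print(np.cov(lin, quad)[0, 1])               # ~ 0: forms are uncorrelated
```

With 500,000 draws the simulated variance matches $2\operatorname{tr}\big((H\Sigma)^2\big)$ to within Monte Carlo error, and the sample covariance between the two forms is negligible.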
By combining these three results, we obtain the following result
$$V\big(g(\hat{\theta}_1, \hat{\theta}_2)\big) = \underbrace{G'\Sigma G}_{\text{first-order part}} + \underbrace{\tfrac{1}{2}\operatorname{tr}\big((H\Sigma)^2\big)}_{\text{second-order part}} = \underbrace{G'\Sigma G}_{\text{first-order part}} + \underbrace{\tfrac{1}{2}(\operatorname{vec} H)'(\Sigma \otimes \Sigma)\operatorname{vec} H}_{\text{second-order part}}$$
where the first-order part $G'\Sigma G$ is the variance of $\hat{\theta}$ corresponding to a first-order approximation, and the second-order part is the additional term from the second-order approximation, which accounts for the correlation between the random variables.
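The equality of the trace and vec–Kronecker forms of the second-order part, $\operatorname{tr}\big((H\Sigma)^2\big) = (\operatorname{vec} H)'(\Sigma \otimes \Sigma)\operatorname{vec} H$ for symmetric $H$ and $\Sigma$, can be verified numerically (arbitrary symmetric example matrices):

```python
import numpy as np

Sigma = np.array([[1.0, 0.3], [0.3, 0.5]])
H = np.array([[0.2, -1.0], [-1.0, 2.0]])       # symmetric

vecH = H.flatten(order='F')                    # stack the columns of H
lhs = np.trace(H @ Sigma @ H @ Sigma)          # tr((H Sigma)^2)
rhs = vecH @ np.kron(Sigma, Sigma) @ vecH      # (vec H)'(Sigma x Sigma) vec H

assert np.isclose(lhs, rhs)
```

The identity follows from $\operatorname{vec}(\Sigma H \Sigma) = (\Sigma \otimes \Sigma)\operatorname{vec} H$ and $(\operatorname{vec} H)'\operatorname{vec}(\Sigma H \Sigma) = \operatorname{tr}(H'\Sigma H \Sigma)$.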
Let $\hat{V}\big(g(\hat{\theta}_1, \hat{\theta}_2)\big)$ be the estimate of the variance $V\big(g(\hat{\theta}_1, \hat{\theta}_2)\big)$ defined by
$$\hat{V}\big(g(\hat{\theta}_1, \hat{\theta}_2)\big) = \underbrace{\hat{G}'\hat{\Sigma}\hat{G}}_{\text{first-order part}} + \underbrace{\tfrac{1}{2}(\operatorname{vec}\hat{H})'(\hat{\Sigma} \otimes \hat{\Sigma})\operatorname{vec}\hat{H}}_{\text{second-order part}}$$
This yields
$$\hat{V}\big(g(\hat{\theta}_1, \hat{\theta}_2)\big) = \underbrace{\frac{1}{\hat{\theta}_2^2}\left[\hat{V}(\hat{\theta}_1) - \frac{2\hat{\theta}_1}{\hat{\theta}_2}\widehat{\operatorname{Cov}}(\hat{\theta}_1, \hat{\theta}_2) + \frac{\hat{\theta}_1^2}{\hat{\theta}_2^2}\hat{V}(\hat{\theta}_2)\right]}_{\text{first-order approximation}} + \underbrace{\frac{1}{\hat{\theta}_2^4}\hat{V}(\hat{\theta}_2)\left[\hat{V}(\hat{\theta}_1) - \frac{4\hat{\theta}_1}{\hat{\theta}_2}\widehat{\operatorname{Cov}}(\hat{\theta}_1, \hat{\theta}_2) + \frac{2\hat{\theta}_1^2}{\hat{\theta}_2^2}\hat{V}(\hat{\theta}_2)\right] + \frac{1}{\hat{\theta}_2^4}\widehat{\operatorname{Cov}}(\hat{\theta}_1, \hat{\theta}_2)^2}_{\text{additional part from second-order approximation}}$$
which can be written as
$$\hat{V}\big(g(\hat{\theta}_1, \hat{\theta}_2)\big) = \underbrace{\frac{\hat{\theta}_1^2}{\hat{\theta}_2^2}\left[\frac{\hat{V}(\hat{\theta}_1)}{\hat{\theta}_1^2} - \frac{2\widehat{\operatorname{Cov}}(\hat{\theta}_1, \hat{\theta}_2)}{\hat{\theta}_1 \hat{\theta}_2} + \frac{\hat{V}(\hat{\theta}_2)}{\hat{\theta}_2^2}\right]}_{\text{first-order approximation}} + \underbrace{\frac{\hat{\theta}_1^2}{\hat{\theta}_2^2}\left\{\frac{\hat{V}(\hat{\theta}_2)}{\hat{\theta}_2^2}\left[\frac{\hat{V}(\hat{\theta}_1)}{\hat{\theta}_1^2} - \frac{4\widehat{\operatorname{Cov}}(\hat{\theta}_1, \hat{\theta}_2)}{\hat{\theta}_1 \hat{\theta}_2} + \frac{2\hat{V}(\hat{\theta}_2)}{\hat{\theta}_2^2}\right] + \frac{\widehat{\operatorname{Cov}}(\hat{\theta}_1, \hat{\theta}_2)^2}{\hat{\theta}_1^2 \hat{\theta}_2^2}\right\}}_{\text{additional part from second-order approximation}}$$
Thus, this variance can be expressed in terms of the coefficients of variation and the coefficient of co-variation of $\hat{\theta}_1$ and $\hat{\theta}_2$, where $\widehat{CV}(\hat{\theta}_i)^2 = \hat{V}(\hat{\theta}_i)/\hat{\theta}_i^2$ and $\widehat{CV}(\hat{\theta}_1, \hat{\theta}_2) = \widehat{\operatorname{Cov}}(\hat{\theta}_1, \hat{\theta}_2)/(\hat{\theta}_1\hat{\theta}_2) = \hat{\rho}\,\widehat{CV}(\hat{\theta}_1)\widehat{CV}(\hat{\theta}_2)$:
$$\hat{V}\big(g(\hat{\theta}_1, \hat{\theta}_2)\big) = \underbrace{\frac{\hat{\theta}_1^2}{\hat{\theta}_2^2}\Big[\widehat{CV}(\hat{\theta}_1)^2 - 2\,\widehat{CV}(\hat{\theta}_1, \hat{\theta}_2) + \widehat{CV}(\hat{\theta}_2)^2\Big]}_{\text{first-order approximation}} + \underbrace{\frac{\hat{\theta}_1^2}{\hat{\theta}_2^2}\Big\{\widehat{CV}(\hat{\theta}_2)^2\Big[\widehat{CV}(\hat{\theta}_1)^2 - 4\,\widehat{CV}(\hat{\theta}_1, \hat{\theta}_2) + 2\,\widehat{CV}(\hat{\theta}_2)^2\Big] + \widehat{CV}(\hat{\theta}_1, \hat{\theta}_2)^2\Big\}}_{\text{additional part from second-order approximation}}$$
$$= \frac{\hat{\theta}_1^2}{\hat{\theta}_2^2}\Big[\widehat{CV}(\hat{\theta}_1)^2 - 2\hat{\rho}\,\widehat{CV}(\hat{\theta}_1)\widehat{CV}(\hat{\theta}_2) + \widehat{CV}(\hat{\theta}_2)^2\Big] + \frac{\hat{\theta}_1^2}{\hat{\theta}_2^2}\widehat{CV}(\hat{\theta}_2)^2\Big[\widehat{CV}(\hat{\theta}_1)^2 - 4\hat{\rho}\,\widehat{CV}(\hat{\theta}_1)\widehat{CV}(\hat{\theta}_2) + \hat{\rho}^2\,\widehat{CV}(\hat{\theta}_1)^2 + 2\,\widehat{CV}(\hat{\theta}_2)^2\Big]$$
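As a final consistency check (again with purely hypothetical values), the closed-form expressions for the ratio agree with the matrix formula $\hat{G}'\hat{\Sigma}\hat{G} + \tfrac{1}{2}\operatorname{tr}\big((\hat{H}\hat{\Sigma})^2\big)$:

```python
import numpy as np

# Hypothetical estimates and moments (for illustration only)
t1, t2 = 2.0, 0.8
V1, V2, C = 0.04, 0.01, 0.005
Sigma = np.array([[V1, C], [C, V2]])

G = np.array([1.0/t2, -t1/t2**2])                              # gradient of t1/t2
H = np.array([[0.0, -1.0/t2**2], [-1.0/t2**2, 2.0*t1/t2**3]])  # Hessian of t1/t2

first = G @ Sigma @ G                      # first-order part G' Sigma G
HS = H @ Sigma
second = 0.5 * np.trace(HS @ HS)           # second-order part (1/2) tr((H Sigma)^2)

# Closed forms in terms of the ratio and its moments
r = t1 / t2
first_cf  = r**2 * (V1/t1**2 - 2*C/(t1*t2) + V2/t2**2)
second_cf = r**2 * (V2/t2**2 * (V1/t1**2 - 4*C/(t1*t2) + 2*V2/t2**2)
                    + C**2/(t1**2 * t2**2))

assert np.isclose(first, first_cf) and np.isclose(second, second_cf)
print(first + second)
```

The second-order correction is typically small relative to the first-order part unless the denominator estimate is noisy (large $\widehat{CV}(\hat{\theta}_2)$), which is exactly the weak-identification regime discussed in the paper.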

References

  1. Andrews, I., Stock, J. H., & Sun, L. (2019). Weak instruments in instrumental variables regression: Theory and practice. Annual Review of Economics, 11(1), 727–753. [Google Scholar] [CrossRef]
  2. Beale, E. M. L. (1962). Some use of computers in operational research. Industrielle Organisation, 31, 27–28. [Google Scholar]
  3. Bernard, J.-T., Chu, B., Khalaf, L., & Voia, M. (2019). Non-standard confidence sets for ratios and tipping point with applications to dynamic panel data. Annals of Economics and Statistics, 134, 79–108. [Google Scholar] [CrossRef]
  4. Bernard, J.-T., Idoudi, N., Khalaf, L., & Yélou, C. (2007). Finite sample inference methods for dynamic energy demand models. Journal of Applied Econometrics, 22, 1211–1226. [Google Scholar] [CrossRef]
  5. Beyene, J., & Moineddin, R. (2005). Methods for confidence interval estimation of a ratio parameter with application to location quotients. BMC Medical Research Methodology, 5, 32. [Google Scholar] [CrossRef] [PubMed]
  6. Briggs, A., & Fenn, P. (1998). Confidence intervals or surfaces? Uncertainty on the cost-effectiveness plane. Health Economics, 7, 723–740. [Google Scholar] [CrossRef]
  7. Cochran, W. G. (1977). Sampling techniques. John Wiley & Sons. [Google Scholar]
  8. Dalabehera, M., & Sahoo, L. N. (1995). Efficiencies of six almost unbiased ratio estimators under a particular model. Statistical Papers, 36, 61–67. [Google Scholar] [CrossRef]
  9. David, I. P., & Sukhatme, B. V. (1974). On the bias and mean square error of the ratio estimator. Journal of the American Statistical Association, 69(346), 464–466. [Google Scholar] [CrossRef]
  10. Dorfman, J. H., Kling, C. L., & Sexton, R. J. (1990). Confidence intervals for elasticities and flexibilities: Reevaluating the ratios of normals case. American Journal of Agricultural Economics, 72, 1006–1017. [Google Scholar] [CrossRef]
  11. Dufour, J.-M. (1997). Some impossibility theorems in econometrics with applications to structural and dynamic models. Econometrica, 65(6), 1365–1387. [Google Scholar] [CrossRef]
  12. Dufour, J.-M., Flachaire, E., Khalaf, L., & Zalghout, A. (2018). Confidence sets for inequality measures: Fieller-type methods. In W. Green, L. Khalaf, P. Makdissi, R. Sickles, M. Veall, & M. Voia (Eds.), Productivity and inequality. Springer. [Google Scholar]
  13. Dufour, J.-M., Flachaire, E., Khalaf, L., & Zalghout, A. (2024). Identification-robust methods for comparing inequality with an application to regional disparities. The Journal of Economic Inequality, 22, 433–452. [Google Scholar] [CrossRef]
  14. Faraggi, D., Izikson, P., & Reiser, B. (2003). Confidence intervals for the 50 per cent response dose. Statistics in Medicine, 22, 1977–1988. [Google Scholar] [CrossRef]
  15. Fieller, E. C. (1954). Some problems in interval estimation. Journal of the Royal Statistical Society, Series B, 16, 175–185. [Google Scholar] [CrossRef]
  16. Hall, P. (1992a). On the removal of skewness by transformation. Journal of the Royal Statistical Society, Series B, 54, 221–228. [Google Scholar] [CrossRef]
  17. Hall, P. (1992b). The bootstrap and Edgeworth expansion. Springer. [Google Scholar]
  18. Hayya, J., Armstrong, D., & Gressis, N. (1975). A note on the ratio of two normally distributed variables. Management Science, 21, 1338–1341. [Google Scholar] [CrossRef]
  19. Hirschberg, J., & Lye, J. (2005). Inferences for the extremum of quadratic regression models. Technical Report 906. Department of Economics, The University of Melbourne. [Google Scholar]
  20. Hirschberg, J., & Lye, J. (2010a). A geometric comparison of the delta and Fieller confidence intervals. The American Statistician, 64(3), 234–241. [Google Scholar] [CrossRef]
  21. Hirschberg, J., & Lye, J. (2010b). A reinterpretation of interactions in regressions. Applied Economics Letters, 17, 427–430. [Google Scholar] [CrossRef]
  22. Hirschberg, J., & Lye, J. (2010c). Two geometric representations of confidence intervals for ratios of linear combinations of regression parameters: An application of the NAIRU. Economics Letters, 108, 73–76. [Google Scholar] [CrossRef]
  23. Hirschberg, J., & Lye, J. (2017). Inverting the indirect—The ellipse and the boomerang: Visualizing the confidence intervals of the structural coefficient from two-stage least squares. Journal of Econometrics, 199, 173–183. [Google Scholar] [CrossRef]
  24. Hirschberg, J. G., Lye, J. N., & Slottje, D. J. (2008). Inferential methods for elasticity estimates. Journal of Econometrics, 147(2), 299–315. [Google Scholar] [CrossRef]
  25. Hwang, J. (2019). Note on Edgeworth expansions and asymptotic refinements of percentile t-bootstrap methods. Working paper. University of Connecticut. [Google Scholar]
  26. Krinsky, I., & Robb, A. L. (1986). On approximating the statistical properties of elasticities. The Review of Economics and Statistics, 68, 715–719. [Google Scholar] [CrossRef]
  27. Kuznets, S. (1955). International differences in capital formation and financing. In Capital formation and economic growth (pp. 19–111). Princeton University Press. [Google Scholar]
  28. Leitner, L. (2024). Imprecision in the estimation of willingness to pay using subjective well-being data. Journal of Happiness Studies, 25(7), 94. [Google Scholar] [CrossRef]
  29. Li, H., & Maddala, G. S. (1999). Bootstrap variance estimation of nonlinear functions of parameters: An application to long-run elasticities of energy demand. The Review of Economics and Statistics, 81(4), 728–733. [Google Scholar] [CrossRef]
  30. Lye, J., & Hirschberg, J. (2018). Ratios of parameters: Some econometric examples. Australian Economic Review, 51(4), 578–602. [Google Scholar] [CrossRef]
  31. Olea, J. L. M., Stock, J. H., & Watson, M. W. (2021). Inference in structural vector autoregressions identified with an external instrument. Journal of Econometrics, 225(1), 74–87. [Google Scholar] [CrossRef]
  32. Parr, W. C. (1983). A note on the jackknife, the bootstrap and the delta method estimators of bias and variance. Biometrika, 70(3), 719–722. [Google Scholar] [CrossRef]
  33. Raghav, Y. S., Ahmadini, A. A. H., Mahnashi, A. M., & Rather, K. U. I. (2025). Enhancing estimation efficiency with proposed estimator: A comparative analysis of poisson regression-based mean estimators. Kuwait Journal of Science, 52(1), 100282. [Google Scholar] [CrossRef]
  34. Scheffé, H. (1970). Multiple testing versus multiple estimation. Improper confidence sets. Estimation of directions and ratios. The Annals of Mathematical Statistics, 41(1), 1–29. [Google Scholar] [CrossRef]
  35. Sitter, R. R., & Wu, C. F. J. (1993). On the accuracy of Fieller intervals for binary response data. Journal of the American Statistical Association, 88, 1021–1025. [Google Scholar] [CrossRef]
  36. Staiger, D., Stock, J., & Watson, M. (1997). The NAIRU, unemployment and monetary policy. Journal of Economic Perspectives, 11, 33–49. [Google Scholar] [CrossRef]
  37. Swain, A. K. P. C., & Dash, P. (2020). On a class of almost unbiased ratio type estimators. Journal of Statistical Theory and Applications, 19(1), 28–35. [Google Scholar] [CrossRef]
  38. Tin, M. (1965). Comparison of some ratio estimators. Journal of the American Statistical Association, 60(309), 294–307. [Google Scholar] [CrossRef]
  39. von Luxburg, U., & Franz, V. H. (2009). A geometric approach to confidence sets for ratios: Fieller’s theorem, generalizations and bootstrap. Statistica Sinica, 19, 1095–1117. [Google Scholar]
  40. Wang, P., Xu, S., Wang, Y. X., Wu, B., Fung, W. K., Gao, G., & Liu, N. (2021). Penalized Fieller’s confidence interval for the ratio of bivariate normal means. Biometrics, 77(4), 1355–1368. [Google Scholar] [CrossRef] [PubMed]
  41. Wang, Y., Wang, S., & Carroll, R. J. (2015). The direct integral method for confidence intervals for the ratio of two location parameters. Biometrics, 71, 704–713. [Google Scholar] [CrossRef] [PubMed]
  42. Woglom, G. (2001). More results on the exact small sample properties of the instrumental variable estimator. Econometrica, 69(5), 1381–1389. [Google Scholar] [CrossRef]
Table 1. Coverage probability and average width (Width) of 95% confidence intervals.
(a) Bivariate Normal Distribution

| n | ρ | Fieller | Width | Delta | Width | Dbc | Width | Edgeworth | Width | Ebc | Width |
|------|------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| 25   | 0.8  | 0.9505 | 1.9625 | 0.9491 | 1.9455 | 0.9495 | 1.9364 | 0.9531 | 2.0523 | 0.9532 | 1.1935 |
| 25   | 0.1  | 0.9463 | 2.0572 | 0.9452 | 2.0443 | 0.9458 | 2.0115 | 0.9512 | 2.0365 | 0.9511 | 2.0136 |
| 25   | −0.8 | 0.9485 | 2.4198 | 0.9482 | 2.4041 | 0.9473 | 2.1464 | 0.9528 | 2.0523 | 0.9529 | 2.0310 |
| 50   | 0.8  | 0.9505 | 1.9625 | 0.9506 | 1.9485 | 0.9504 | 1.9275 | 0.9526 | 2.0497 | 0.9522 | 1.9210 |
| 50   | 0.1  | 0.9489 | 2.0577 | 0.9480 | 2.0443 | 0.9464 | 2.0324 | 0.9510 | 2.0342 | 0.9515 | 2.0387 |
| 50   | −0.8 | 0.9476 | 2.4187 | 0.9469 | 2.4036 | 0.9478 | 2.1685 | 0.9521 | 2.0415 | 0.9519 | 2.0450 |
| 100  | 0.8  | 0.9504 | 1.9753 | 0.9501 | 1.9753 | 0.9503 | 1.9212 | 0.9524 | 2.0520 | 0.9521 | 1.9215 |
| 100  | 0.1  | 0.9477 | 2.0678 | 0.9489 | 2.0621 | 0.9463 | 2.0218 | 0.9503 | 2.0365 | 0.9506 | 2.0240 |
| 100  | −0.8 | 0.9504 | 2.3953 | 0.9468 | 2.3975 | 0.9475 | 2.2358 | 0.9506 | 2.0522 | 0.9505 | 2.0486 |
| 1000 | 0.8  | 0.9501 | 1.9780 | 0.9501 | 1.9658 | 0.9500 | 1.9245 | 0.9520 | 2.0568 | 0.9520 | 1.9146 |
| 1000 | 0.1  | 0.9476 | 2.0749 | 0.9469 | 2.0581 | 0.9460 | 2.0510 | 0.9504 | 2.0412 | 0.9500 | 2.0168 |
| 1000 | −0.8 | 0.9500 | 2.3763 | 0.9470 | 2.3860 | 0.9477 | 2.2045 | 0.9515 | 2.0495 | 0.9514 | 2.0475 |

(b) Bivariate Mixture Distribution

| n | ρ | Fieller | Width | Delta | Width | Dbc | Width | Edgeworth | Width | Ebc | Width |
|------|------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| 25   | 0.8  | 0.8286 | 90.13  | 0.8214 | 86.48  | 0.8297 | 86.42  | 0.8674 | 85.86  | 0.8815 | 84.53  |
| 25   | 0.1  | 0.8713 | 107.30 | 0.8474 | 103.57 | 0.8512 | 103.45 | 0.8671 | 102.93 | 0.8705 | 102.49 |
| 25   | −0.8 | 0.8970 | 139.34 | 0.8570 | 133.28 | 0.8980 | 132.51 | 0.9013 | 131.14 | 0.9051 | 130.57 |
| 50   | 0.8  | 0.8485 | 91.70  | 0.8329 | 88.26  | 0.8496 | 87.57  | 0.8816 | 87.51  | 0.8898 | 86.76  |
| 50   | 0.1  | 0.8707 | 107.18 | 0.8430 | 103.78 | 0.8514 | 102.12 | 0.8904 | 101.45 | 0.9009 | 101.14 |
| 50   | −0.8 | 0.8945 | 138.37 | 0.8553 | 132.36 | 0.8598 | 132.17 | 0.9002 | 130.78 | 0.9121 | 129.41 |
| 100  | 0.8  | 0.8623 | 90.87  | 0.8610 | 89.24  | 0.8726 | 85.21  | 0.9002 | 86.45  | 0.9132 | 87.10  |
| 100  | 0.1  | 0.8798 | 106.87 | 0.8725 | 102.53 | 0.8798 | 101.21 | 0.9045 | 101.24 | 0.9187 | 102.25 |
| 100  | −0.8 | 0.9015 | 137.21 | 0.9104 | 130.87 | 0.8805 | 130.54 | 0.9068 | 130.36 | 0.9208 | 129.21 |
| 1000 | 0.8  | 0.8674 | 91.10  | 0.8735 | 90.13  | 0.8765 | 84.25  | 0.9165 | 88.12  | 0.9218 | 84.59  |
| 1000 | 0.1  | 0.8723 | 105.34 | 0.8806 | 102.14 | 0.8725 | 102.22 | 0.9046 | 101.14 | 0.9284 | 101.21 |
| 1000 | −0.8 | 0.9001 | 136.21 | 0.9312 | 131.51 | 0.9422 | 130.57 | 0.9185 | 130.03 | 0.9298 | 129.25 |

(c) Bivariate Lognormal Distribution

| n | ρ | Fieller | Width | Delta | Width | Dbc | Width | Edgeworth | Width | Ebc | Width |
|------|------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| 25   | 0.8  | 0.8119 | 1.5936 | 0.8076 | 1.4075 | 0.8121 | 1.2761 | 0.8618 | 1.4063 | 0.8715 | 1.2326 |
| 25   | 0.1  | 0.9027 | 2.6677 | 0.8546 | 2.4743 | 0.9037 | 2.4106 | 0.8934 | 2.4530 | 0.8963 | 2.2078 |
| 25   | −0.8 | 0.9232 | 3.4301 | 0.8688 | 3.1422 | 0.8721 | 3.1256 | 0.9066 | 3.1047 | 0.9158 | 3.0985 |
| 50   | 0.8  | 0.8472 | 2.8002 | 0.8351 | 1.4512 | 0.8486 | 1.4150 | 0.8845 | 1.4526 | 0.9005 | 1.4328 |
| 50   | 0.1  | 0.9055 | 2.6200 | 0.8610 | 2.4462 | 0.9065 | 2.3812 | 0.8981 | 2.4175 | 0.9002 | 2.4076 |
| 50   | −0.8 | 0.9169 | 3.0737 | 0.8688 | 3.1422 | 0.8765 | 3.1027 | 0.9058 | 3.1107 | 0.9084 | 2.9615 |
| 100  | 0.8  | 0.8417 | 1.1407 | 0.8427 | 1.0821 | 0.8612 | 1.1835 | 0.8766 | 1.0665 | 0.9106 | 1.1078 |
| 100  | 0.1  | 0.9130 | 1.8217 | 0.8819 | 1.7727 | 0.9139 | 1.6941 | 0.9078 | 1.7516 | 0.9178 | 1.6851 |
| 100  | −0.8 | 0.9244 | 2.2228 | 0.8869 | 2.1603 | 0.8981 | 2.2844 | 0.9134 | 2.1281 | 0.9223 | 2.0675 |
| 1000 | 0.8  | 0.8626 | 1.1691 | 0.8573 | 1.1126 | 0.8621 | 1.1076 | 0.8938 | 1.1010 | 0.9115 | 1.0981 |
| 1000 | 0.1  | 0.9088 | 1.8157 | 0.8823 | 1.7375 | 0.9054 | 1.6975 | 0.9057 | 1.7434 | 0.9182 | 1.6896 |
| 1000 | −0.8 | 0.9248 | 2.2106 | 0.8965 | 2.2186 | 0.9045 | 2.2081 | 0.9146 | 2.1126 | 0.9268 | 2.0198 |
Note: Dbc: Delta method with bias correction; Ebc: Edgeworth method with bias correction; Width: average confidence interval length; ρ: correlation coefficient.
Ratsimalahelo, Z. Re-Examining Confidence Intervals for Ratios of Parameters. Econometrics 2025, 13, 37. https://doi.org/10.3390/econometrics13030037