Abstract
Conditioning is a very useful way of using correlated information to reduce the variability of an estimate. Conditioning an estimate on a correlated estimate reduces its covariance, and so provides more precise inference than using the unconditioned estimate. Here we give expansions in powers of n^{-1/2} for the conditional density and distribution of any multivariate standard estimate based on a sample of size n. Standard estimates include most estimates of interest, such as smooth functions of sample means and other empirical estimates. We also show that a conditional estimate is not a standard estimate, so that Edgeworth-Cornish-Fisher expansions cannot be applied to it directly.
1. Introduction and Summary
Given correlated estimates of unknown parameters , inference on can be made more precise by conditioning on . To see this, suppose that X is a bivariate normal with correlation . Then
So if is an estimate of w based on a sample of size n, and it satisfies the Central Limit Theorem, (CLT),
then
if is non-zero. A similar result holds when for has dimension , which we now assume. To apply this result to obtain inference on given , we need to approximate its distribution, ideally beyond its first approximation given by the CLT. This paper uses expansions in powers of n^{-1/2} for the density and distribution of , for a wide class of estimates, called standard estimates.
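The conditional-variance reduction described above can be checked numerically. A minimal sketch, assuming a standard bivariate normal with correlation rho = 0.8 (the numbers are illustrative, not from the paper):

```python
# Numerical check: for (X1, X2) bivariate normal with correlation rho,
# conditioning on X2 reduces the variance of X1 by the factor (1 - rho^2).
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8
n = 1_000_000
cov = np.array([[1.0, rho],
                [rho, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# The regression residual X1 - rho*X2 is independent of X2, so its
# variance is exactly the conditional variance 1 - rho^2 = 0.36.
resid = x[:, 0] - rho * x[:, 1]
print(np.var(x[:, 0]))   # unconditional variance, close to 1
print(np.var(resid))     # conditional variance, close to 0.36
```

With rho = 0.8 the conditional variance is 36% of the unconditional one, which is the gain conditioning offers here.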
Suppose that is a standard estimate of an unknown parameter of a statistical model, based on a sample of size n. That is, is a consistent estimate, and for , its rth order cumulants have magnitude n^{1-r} and can be expanded in powers of n^{-1}. The coefficients in these expansions are called the cumulant coefficients. This is a very large class of estimates, with potential application to a range of practical problems. For example, may be a smooth function of one or more sample means, or a smooth functional of one or more empirical distributions. A smooth function of a standard estimate is also a standard estimate: see ref. [1]. Ref. [2] gave the multivariate Edgeworth expansions for the distribution and density of
in powers of about the multivariate normal in terms of the Edgeworth coefficients of (3). (For typos, see p. 25 of ref. [1]. Also replace by on the 4th-to-last line of p. 1121 and in (23). To line 3 of p. 1138, add .) Ref. [3] gave the Edgeworth coefficients explicitly for the Edgeworth expansions to .
Choosing an estimate can be a tradeoff between simplicity and efficiency. Conventional point estimation emphasises efficiency as measured by mean square error. The maximum likelihood estimate is attractive as it is asymptotically efficient in this sense. However, its cumulant coefficients generally take much more work to obtain than those of a moment estimate. See refs. [4,5]. But whether one chooses a simple estimate or a more complicated one, it will generally be a standard estimate.
Turning to conditioning: as noted, this is a very useful way of using correlated information to reduce the variability of estimates, and to make inference on unknown parameters more precise. This is the motivation for this paper. To emphasise when are vectors, we bold them. In Section 4 we take , and write and as , and of dimensions . Just as the distribution of allows inference on w, the conditional distribution of given allows inference on for a given . The covariance of can be substantially less than that of . Only when and are uncorrelated is there no advantage in conditioning. Given a statistical model, its unknown parameters w will consist of one or more parameters of primary interest, , and the others. (For example, for an autoregressive time series with mean , autocorrelation , and variance of residuals , the parameter of primary interest is .) When conditioning, one can choose to be all of the other parameters, or more simply, the single parameter which maximises the estimated correlation of with . This is another trade-off between efficiency and simplicity, as increasing will reduce the conditional variance.
We shall see that for V the asymptotic covariance of ,
the multivariate normal on , with density and distribution
where is a function of V, and is a function of V and is also linear in . If , this leads in Section 4 to 1- or 2-sided confidence intervals for , of error or . So unlike traditional confidence regions (including confidence intervals), the conditional versions depend on the value of the unknown . This gives them a level of sophistication beyond traditional confidence regions. While this paper does not deal with Studentized estimates, that next step can be done using ref. [6] or ref. [1]. The of most interest are small.
Theorems 1 and 2 give our main results: explicit expansions to for the conditional density and distribution of given , that is, for the conditional density and distribution of given . In other words, it gives the likely position of for any given . The main difficulty is integrating the density. Theorem 2 does this in terms of of (42), the integral of the multivariate Hermite polynomial, with respect to the conditional normal density. Note 1 gives in terms of derivatives of the multivariate normal distribution. Theorem 3 gives in terms of the partial moments of the conditional distribution. If , then Theorem 4 gives in terms of the unit normal distribution and density.
Section 4 specialises to the case . Examples are the conditional distribution and density of a bivariate sample mean, of entangled gamma random variables, and of a sample mean given the sample variance. Section 5 and Section 6 give conclusions, discussion, and suggestions for future research. Appendix A gives expansions for the conditional moments of . It shows that, given , is neither a standard estimate nor a Type B estimate, so that Edgeworth-Cornish-Fisher expansions do not apply to it.
Ref. [7] (pp. 34–36) argues that an ideal choice of conditioning variable would be one whose distribution does not depend on . But this is generally not possible, except for some exponential families. An example where it is true is when and are location and scale parameters: on p. 54 they essentially suggest choosing . This is our motivation for Example 4. For some examples, see ref. [8]. Their (7.5) gave a form for the 3rd-order expansion for the conditional density of a sample mean to , but they did not attempt to integrate it.
Conditional expansions for the sample mean were given in Chapter 12 of [9], and used in Sections 2.3 and 2.5 of [10] to show bootstrap consistency. For some other results on conditional distributions, see refs. [11,12,13,14,15,16].
2. Multivariate Edgeworth Expansions
Suppose that is a standard estimate of with respect to n. (n is typically the sample size.) That is, as , where we use for expected value, and for and the rth order cumulants of can be expanded as
where ≈ indicates an asymptotic expansion, and the cumulant coefficients may depend on n but are bounded as . So the bar replaces each by k. For example and . We reserve this bar notation to avoid double subscripts. (1) holds with . V may depend on n, but we assume that is bounded away from 0.
for . These are Bell polynomials in the cumulant coefficients of (2), as defined and given in [3]. Their importance lies in their central role in the Edgeworth expansions of of (1). (When and is a sample mean, the Edgeworth coefficients were given for all r in [17]. For typos, see pp. 24–25 of [1].)
Set Probability A is true. By [2], or [1], for non-lattice, the distribution and density of can be expanded as
where and for
is the multivariate Hermite polynomial. We use the tensor summation convention: repetition of indices in (6) implies summation over their range, . Ref. [3] gave explicitly for and for when .
where sums over all N permutations of giving distinct values. For example,
(So the repeated in (11) implies their repeated summation over .) are given explicitly in [3]. So (4) with the in [3] gives the Edgeworth expansions for the distribution and density of of (1) to . and each have terms, but many are duplicates as is symmetric in . This is exploited by the notation of Section 4 of [3] to greatly reduce the number of terms in (6).
By (5), the density of relative to its asymptotic value is
and for measurable ,
If , then for r odd, so that
Examples 3 and 4 of [3] gave for
, and .
3. The Conditional Density and Distribution
For and partition and as and where are vectors of length . Partition as where are .
Now we come to the main purpose of this paper. Theorem 1 expands the conditional density of about the conditional density of . Its derivation is straightforward, the only novel feature being the use of Lemma 2 to find the reciprocal of a series, using Bell polynomials. Theorem 2 integrates the conditional density to obtain the expansion for the conditional distribution of about the conditional distribution of in terms of of (42) below, the integral of the Hermite polynomial of (8), with respect to the conditional normal density. Note 1 gives in terms of derivatives of the multivariate normal distribution. Theorem 3 gives in terms of the partial moments of the conditional normal distribution. For of (15), set
Lemma 1.
The elements of are
Proof.
gives 8 equations relating and . Now solve for .
So for □
Since in the sense that for , is less variable than , and is less variable than , unless and are uncorrelated, that is, is a matrix of zeros.
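The partitioned-covariance identities behind Lemma 1 can be sketched numerically as follows (the block sizes and numbers are illustrative assumptions, not the paper's): the conditional covariance of the first block given the second is V11 - V12 V22^{-1} V21, which is never larger than V11.

```python
import numpy as np

# Illustrative partition of a 3x3 covariance V into blocks of sizes 1 and 2.
V11 = np.array([[4.0]])
V12 = np.array([[1.0, 2.0]])
V22 = np.array([[2.0, 0.5],
                [0.5, 3.0]])

V22_inv = np.linalg.inv(V22)
cond_cov = V11 - V12 @ V22_inv @ V12.T       # conditional covariance of block 1
cond_mean = lambda y2: V12 @ V22_inv @ y2    # conditional mean, given y2 (zero means)

print(cond_cov)   # strictly smaller than V11 unless V12 = 0
```

Here cond_cov is 4 - 9/5.75, about 2.43, compared with the unconditional variance 4, illustrating the variance reduction the text describes.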
The conditional density of is
where is of (6) for , and is the density of of (15). By (4)–(6), Section 2.5 of [18], for of (18),
So the distribution of is
For of (17), of (18), and , set
Corollary 1.
Suppose that . Then for of (23),
Replacing V by an estimate will usually give 1- and 2-sided conditional confidence intervals of error and for given .
So is given by replacing and in by
By (5) and (6), for , of (22) is given by
and implicit summation in (28) for is now over . So,
Ordinary Bell polynomials. For a sequence from R, the partial ordinary Bell polynomial is defined by the identity
where for . They are tabulated on p. 309 of [19]. To obtain (21), we use
Lemma 2.
Proof.
Now swap summations. □
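A sketch of the partial ordinary Bell polynomial defined above, computed from its generating identity: it is the coefficient of x^j in (a1 x + a2 x^2 + ...)^k, as in Comtet [19]. The function name is ours.

```python
import sympy as sp

x, a1, a2, a3 = sp.symbols('x a1 a2 a3')

def ordinary_bell(j, k, coeffs):
    """Coefficient of x**j in (sum_i coeffs[i-1] * x**i)**k."""
    series = sum(c * x ** (i + 1) for i, c in enumerate(coeffs))
    poly = sp.expand(series ** k)
    return poly.coeff(x, j)

print(ordinary_bell(3, 2, [a1, a2, a3]))   # 2*a1*a2
print(ordinary_bell(4, 2, [a1, a2, a3]))   # a2**2 + 2*a1*a3
```

This is consistent with Lemma 2's use of Bell polynomials to obtain the reciprocal of a series: the coefficient of x^j in 1/(1 + f(x)) is the alternating sum over k of these polynomials.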
Theorem 1.
Proof.
This follows from (21) and Lemma 2. □
So of (34) and (35) give the conditional density to . We call (33) the relative conditional density. We now give our main result, an expansion for the conditional distribution of . As noted, Theorem 2 gives this in terms of of (42) below, an integral of the Hermite polynomial of (8), and Note 1 gives in terms of derivatives of the multivariate normal distribution. Theorem 3 gives in terms of the partial moments of the conditional distribution of (24).
When , Theorem 4 gives in terms of and for
Theorem 2.
Proof.
(40) holds by (6). (41) holds by (6). Now use (23). □
Note 1.
By (39), in (37) is given by of (32) and of (40). Viewing as a polynomial in for u of (23), is linear in
for . So can be expanded in terms of the partial moments of
This has only integrals, while (12) has q integrals.
Lemma 3.
For , , where
Proof.
, where , and for of (20). □
Our main result, Theorem 2, gave the conditional distribution expansion in terms of of (42). Note 1 gave these in terms of the derivatives of . We now give in terms of , the partial moments of the conditional distribution of (24). As in (10), for any , set , summed over all, N say, permutations of giving distinct values. For example, .
Theorem 3.
Proof.
Since Substitute into the expressions for Now multiply by and integrate from to u. □
This gives the needed for . The needed for can be written down similarly in terms of the partial moments using for . We now show that if , we only need the partial moments of at v of (36), and that these are easily written in terms of and a polynomial in v of (36).
The case So
Theorem 4.
For , is given by Theorem 3 with
where dot denotes multiplication. Also, .
Proof.
4. The Case
Theorem 2 gave the conditional Edgeworth expansion in terms of of (42). Theorem 3 gave needed for of (41) and of (37), in terms of the partial moments of (46). When , Theorem 4 gave in terms of and its partial moments for v of (36). But now so that or 2. So for of (9), we switch notation to
Similarly, write (2) as
Also, we switch from to
given for in Section 4 of [3]. So,
is just with 1 and 2 reversed. For the other and needed for , see Section 4 of [3]. Our main result for this section, Theorem 7, gives simple formulas for and for of (40), the main ingredient needed in Theorem 2 for the expansion of the conditional distribution.
Theorem 5.
Proof.
This follows from Theorem 1. □
Theorem 6 gives a laborious expression for the conditional distribution. However, Theorem 7 gives a huge simplification.
Theorem 6.
Proof.
This follows from Theorems 3 and 4. □
This gives the needed for for the conditional distribution of (37)–(39) to . The needed for can be written down similarly. We now give a much simpler method for obtaining of (41), and so by (40), and needed for (37) by (38). Theorem 7 gives and in terms of of (53). Theorem 8 gives in terms of of (62), a function of of Theorem 4.
Theorem 7.
Proof.
By (59),
Note 2.
Proof.
For follow from Theorem 2.
By the proof of Theorem 3, can be read off [3] and the univariate Hermite polynomials given in terms of by expanding
□
To summarise, the conditional density of of (15), is given by Theorem 5, and the conditional distribution is given by (37), (41) in terms of of (65) and of Theorem 8.
Example 1.
Conditioning when is the mean of a sample with cumulants . The non-zero were given in Example 6 of [3]. So for and for other are given by (66)–(69) starting
The relative conditional density is given to by (33) in terms of of (6), of (54), of (28) for , and of (55) for .
The conditional distribution is given by (52) with of (65), starting
of Theorem 8, and of (70). As noted this is a far simpler result than using Theorem 6.
for of (71), (72) and above.
Example 2.
We now build on the entangled gamma model of Example 7 of [3], which gave the needed. Let be independent gamma random variables with means . For , set , and let be the mean of a random sample of size n distributed as . So, and where are independent gamma random variables with means . The rth order cumulants of are and otherwise . Now suppose that , the entangled exponential model. So , and have correlation ,
for of (25), that is, . Figure 1 plots the conditional asymptotic quantiles of , that is, , for . To , given n and , this figure is equivalent to a figure of versus . That is, Figure 1 shows to , the likely value of for a given value of In fact by (26), lies between the outer limits with probability 0.98+. So although labelled as versus , the figure can be viewed as showing the likely value of for a given value of
Figure 1.
of (25) for —courtesy of Dr Paul Teal.
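The entangled gamma construction can be simulated directly. A sketch with illustrative shape parameters a0 = 2, a1 = a2 = 1 (the paper's values are elided above): with Y_i = X_0 + X_i and independent Gamma(a_i, 1) components, Cov(Y_1, Y_2) = Var(X_0) = a_0, so corr(Y_1, Y_2) = a_0 / sqrt((a_0 + a_1)(a_0 + a_2)).

```python
import numpy as np

rng = np.random.default_rng(1)
a0, a1, a2 = 2.0, 1.0, 1.0          # illustrative shapes, scale 1
n = 500_000

# Entangled gammas: the shared component X0 induces the correlation.
x0 = rng.gamma(a0, 1.0, n)
y1 = x0 + rng.gamma(a1, 1.0, n)
y2 = x0 + rng.gamma(a2, 1.0, n)

rho = a0 / np.sqrt((a0 + a1) * (a0 + a2))   # theoretical value, here 2/3
print(np.corrcoef(y1, y2)[0, 1], rho)
```

The empirical correlation matches the closed form, confirming the covariance structure used in Example 2.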
We now give of (31), of (33), and of (55), and for , the coefficients of the expansion for the conditional distribution of (37).
By Note 2, of Example 7 of [3], symmetry, and (65),
Let us work through two numerical examples to get the conditional distribution to . We build on Example 7 of [3]. By Theorem 5, if then ,
We worked to 8 significant figures, but display less. If , then
So to the relative conditional density of (33) for is
so that for and 16 we can only include two terms, and for , only three terms. We now give the first three , needed by (37) for the conditional distribution to . By (50), . By (17), .
For example for to
so that divergence begins with the 4th term.
So to the relative conditional density of (33) for is
so that we can only include three terms. Finally, we now give the first three , needed by (37) for the conditional distribution to .
For example for to
so that divergence begins with the 3rd term.
Example 3.
Example 4.
Discussions of pivotal statistics advocate using the distribution of a sample mean given the sample variance. So let be the usual unbiased estimates of the mean and variance from a univariate random sample of size n from a distribution with rth cumulant . So . By the last two equations of Section 12.15 and (12.35)–(12.38) of [20], the cumulant coefficients needed for of (3) for , that is, the coefficients needed for the conditional density to , in terms of , are
(33) gives in terms of and , that is, in terms of and of (28) in terms of . In this example, many of these are 0. The non-zero are in order needed,
For is now given by (13), , and Section 2 of [3]. By (4) and (33), this gives the conditional density to . And (65) gives needed for the conditional distribution to .
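The premise of Example 4, that the sample mean and sample variance are correlated whenever the third cumulant is nonzero, so conditioning on the variance is informative, can be checked by simulation. A sketch assuming Exp(1) data (third central moment 2; the distribution is our choice, not the paper's), using the standard identity Cov(mean, s^2) = mu3 / n:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 20, 400_000
x = rng.exponential(1.0, (reps, n))        # mu3 = kappa_3 = 2 for Exp(1)

means = x.mean(axis=1)
variances = x.var(axis=1, ddof=1)          # unbiased sample variance

# Empirical n * Cov(mean, s^2) should be close to mu3 = 2.
print(np.cov(means, variances)[0, 1] * n)
```

A nonzero covariance here is exactly why conditioning on the sample variance sharpens inference on the mean for skewed data; for symmetric data the covariance vanishes and conditioning gains nothing at this order.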
5. Conclusions
Conditioning is a very useful and basic way to use correlated information to reduce the variability of an estimate. These results provide the means to obtain conditional regions and 1- or 2-sided conditional confidence intervals.
Section 4 gave the conditional density and distribution of given to where is any partition of . The expansion (33) gave the conditional density of any multivariate standard estimate. Our main result, an explicit expansion for the conditional distribution (37) to , is given in terms of the leading of (42). These are given explicitly by Theorems 3 and 4.
When Theorem 5 simplified the conditional density expansion, and Theorem 7 gave a huge simplification, and the coefficients of the conditional distribution expansion in terms of of Theorem 8.
Cumulant coefficients can also be used to obtain estimates of bias for : see [21,22,23].
6. Discussion
Ref. [3] gave the density and distribution of to , for any standard estimate, in terms of functions of the cumulant coefficients of (2), called the Edgeworth coefficients, .
Most estimates of interest are standard estimates, including smooth functions of sample moments, like the sample skewness, kurtosis, correlation, and any multivariate function of k-statistics. (These are unbiased estimates of cumulants and their products, the most common example being that for a variance.) Unbiased estimates are not needed for Edgeworth expansions, although they do simplify the Edgeworth coefficients, as seen in Examples 1, 2, and 4. However, unbiased estimates are not available for most parameters or functions of them, such as the ratio of two means or variances, except for special cases of exponential families. Ref. [1] gave the cumulant coefficients for smooth functions of standard estimates.
A good approximation for the distribution of an estimate is vital for accurate inference. It enables one to explore the distribution’s dependence on underlying parameters. Our analytic method avoids the need for simulation, jackknife, or bootstrap methods while providing greater accuracy than any of them. Ref. [10] used the Edgeworth expansion to show that the bootstrap gives accuracy to . Ref. [24] said that “2nd order correctness usually cannot be bettered”. But this is not true using our analytic method. Simulation, while popular, can at best shine a light on behaviour when there is a small number of parameters, and only for limited values of their range.
Estimates based on a sample of independent, but not identically distributed random vectors, are also generally standard estimates. For example, for a univariate sample mean where has rth cumulant , then where is the average rth cumulant. For some examples, see [2,25,26,27]. The last is for a function of a weighted mean of complex random matrices.
For conditions for the validity of multivariate Edgeworth expansions, see [28] and its references, and Appendix C of [3].
While the use of Edgeworth-Cornish-Fisher expansions is widespread, few papers address how to deal with their divergence for small sample sizes. Refs. [29,30] avoided this question as it did not arise in their examples. In contrast we confronted this in Example 2, the examples of Withers (1984), and in Example 7 of [3].
We now turn to conditioning. Conditioning on makes inference on more precise by reducing the covariance of the estimate. The covariance of can be substantially less than that of . See the references at the end of Section 1.
A conditional distribution by tilting was first given by [31] up to , for a bivariate sample mean. See Chapter 4 of [32]; compare [8]. Tilting (also known as small-sample asymptotics, or saddlepoint expansions) was first used in statistics by [33]. He gave an approximation to the density of a sample mean, good on the whole line, not just in the region where the Central Limit Theorem approximation holds.
Future directions.
1. The results here give the first step for constructing confidence intervals and confidence regions of higher order accuracy. See [6,34]. What is needed next is an application of [1] to obtain the cumulant coefficients of or those of . This should be straightforward.
2. When , our expansion for the conditional distribution of of (15), can be inverted using the Lagrange Inversion Theorem, to give expansions for its percentiles. This should be straightforward. (The quantile expansions of [29] and Withers (1984) do not apply as Appendix A shows that conditional estimates of standard estimates are not standard estimates.)
3. Here we have only considered expansions about the normal. However expansions about other distributions can greatly reduce the number of terms by matching the leading bias coefficient. The framework for this is [2], building on [34]. For expansions about a matching gamma, see [35,36].
4. The results here can be extended to tilted (saddlepoint) expansions by applying the results of [2]. The tilted version of the multivariate distribution and density of a standard estimate are given by Corollaries 3, 4 there, and that of the conditional distribution and density follow from these. For the entangled gamma of Example 2, this requires solving a cubic. See also [37].
5. A possible alternative approach to finding the conditional distribution is to use conditional cumulants, when these can be found. Section 6.2 of [38] uses conditional cumulants to give the conditional density of a sample mean to . Section 5.6 of [39] gave formulas for the first four cumulants conditional on only when and are uncorrelated. He says that this assumption can be removed, but gives no details of how. That is unlikely to give an alternative to our approach, for as well as giving expansions for the first three conditional cumulants, Appendix A shows that the conditional estimate is not a standard estimate.
6. Lastly, we discuss numerical computation. We have used [40] for our calculations. Its input is and , not and . There is a function sub2(sb1, sb2) which takes as arguments the two subscripts of mu and returns its value. If the global variables mu20, mu02, mu11 are symbolic variables (defined using sympy), it returns the answer in terms of those; if they are numeric, it returns a numeric answer. There is another function, biHermite(n, m, y1, y2), which takes the two subscripts of H. If y1 and y2 are symbolic, it returns a symbolic answer; if they are numeric, it returns a numeric answer. A numerical example is given by Example 2, that is, for the case and or .
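We have not reproduced [40] here, but the biHermite interface it describes can be sketched in sympy via the defining derivative identity H_{n,m}(y1, y2) phi = (-1)^{n+m} d^{n+m} phi / dy1^n dy2^m, with phi the (unnormalised) bivariate normal density with moments mu20, mu02, mu11. This identity and the implementation below are our assumptions; the details of [40] may differ.

```python
import sympy as sp

y1, y2, mu20, mu02, mu11 = sp.symbols('y1 y2 mu20 mu02 mu11')
V = sp.Matrix([[mu20, mu11],
               [mu11, mu02]])
y = sp.Matrix([y1, y2])
# Unnormalised N(0, V) density; the constant cancels in the Hermite ratio.
phi = sp.exp(-(y.T * V.inv() * y)[0, 0] / 2)

def biHermite(n, m, z1, z2):
    """Bivariate Hermite polynomial H_{n,m} evaluated at (z1, z2)."""
    d = phi
    for _ in range(n):
        d = sp.diff(d, y1)
    for _ in range(m):
        d = sp.diff(d, y2)
    h = sp.simplify((-1) ** (n + m) * d / phi)
    return h.subs({y1: z1, y2: z2})

print(sp.simplify(biHermite(1, 0, y1, y2)))
```

With symbolic moments and arguments this returns a polynomial; substituting numbers gives a numeric value, matching the symbolic/numeric behaviour described above. For example, H_{1,0} is the first component of V^{-1} y.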
Similar software for numerical calculations for Theorems 5, 7 and 8 would be invaluable, as would software for applying the Lagrange Inversion Theorem. (We mention R-4.4.1 for Windows and the mvtnorm package: dmvnorm for the density of the multivariate normal, pmvnorm for its distribution, qmvnorm for quantiles, and rmvnorm to generate multivariate normal variables.) On bivariate Hermite polynomials, see cran.r-project.org/web/packages/calculus/vignettes/hermite.html accessed on 20 December 2024.
Funding
This research received no external funding.
Data Availability Statement
No new data were created or analyzed in this study. Data sharing is not applicable to this article.
Conflicts of Interest
The author declares no conflict of interest.
Appendix A. Conditional Moments
Here we give expansions for the conditional moments of of (15), in terms of the conditional normal moments of , of (15). And we show that
is neither a standard estimate of , nor a Type B estimate, as defined below.
Consider the case . By (19),
Non-central moments.
Theorem A1.
Proof.
This follows from Theorem 1. □
So by (A3), the sth conditional moment of is
of (A5) and (A6). For example,
So of (A1) is not a standard estimate, as by (A3), the expansion for its mean is a power series in , not . Is it a Type B estimate? These are defined as for a standard estimate, but with cumulant expansions being series in , not . We shall see. Take . By Theorem 6, for of (3), of (A4) is given by
For example,
Finding the
The needed are given in Appendix B of [3] in terms of
For example,
Let us write in terms of of (A2), as
To get a general formula for , set
where if is odd. is just of Appendix B of [3] with V replaced by
Central moments. Set and .
For of (A3), set
Is the conditional estimate a Type B estimate? This requires its rth cumulant to have magnitude for . This is true for and 2 but not for , as has magnitude , since .
References
- Withers, C.S. 5th-Order Multivariate Edgeworth Expansions for Parametric Estimates. Mathematics 2024, 12, 905. Available online: https://www.mdpi.com/2227-7390/12/6/905/pdf (accessed on 20 December 2024).
- Withers, C.S.; Nadarajah, S. Tilted Edgeworth expansions for asymptotically normal vectors. Ann. Inst. Stat. Math. 2010, 62, 1113–1142.
- Withers, C.S. Edgeworth coefficients for standard multivariate estimates. Axioms 2025, 14, 632.
- Shenton, L.R.; Bowman, K.O. Maximum Likelihood Estimation in Small Samples; Griffin’s Statistical Monograph: London, UK, 1977.
- Withers, C.S.; Nadarajah, S. Asymptotic properties of M-estimators in linear and nonlinear multivariate regression models. Metrika 2014, 77, 647–673.
- Withers, C.S. Accurate confidence intervals when nuisance parameters are present. Comm. Statist.-Theory Methods 1989, 18, 4229–4259.
- Barndorff-Nielsen, O.E.; Cox, D.R. Inference and Asymptotics; Chapman and Hall: London, UK, 1994.
- Barndorff-Nielsen, O.E.; Cox, D.R. Asymptotic Techniques for Use in Statistics; Chapman and Hall: London, UK, 1989.
- Bhattacharya, R.N.; Rao, R.R. Normal Approximation and Asymptotic Expansions; SIAM: Philadelphia, PA, USA, 2010.
- Hall, P. The Bootstrap and Edgeworth Expansion; Springer: New York, NY, USA, 1992.
- Booth, J.; Hall, P.; Wood, A. Bootstrap estimation of conditional distributions. Ann. Stat. 1992, 20, 1594–1610.
- DiCiccio, T.J.; Martin, M.A.; Young, G.A. Analytical approximations to conditional distribution functions. Biometrika 1993, 80, 781–790.
- Hansen, B.E. Autoregressive conditional density estimation. Int. Econ. Rev. 1994, 35, 705–730.
- Klüppelberg, C.; Seifert, M.I. Explicit results on conditional distributions of generalized exponential mixtures. J. Appl. Probab. 2020, 57, 760–774.
- Moreira, M.J. A conditional likelihood ratio test for structural models. Econometrica 2003, 71, 1027–1048.
- Pfanzagl, J. Conditional distributions as derivatives. Ann. Probab. 1979, 7, 1046–1050.
- Withers, C.S.; Nadarajah, S.N. Charlier and Edgeworth expansions via Bell polynomials. Probab. Math. Stat. 2009, 29, 271–280.
- Anderson, T.W. An Introduction to Multivariate Statistical Analysis; John Wiley: New York, NY, USA, 1958.
- Comtet, L. Advanced Combinatorics; Reidel: Dordrecht, The Netherlands, 1974.
- Stuart, A.; Ord, K. Kendall’s Advanced Theory of Statistics, 5th ed.; Griffin: London, UK, 1991; Volume 2.
- Withers, C.S.; Nadarajah, S. Nonparametric estimates of low bias. REVSTAT Stat. J. 2012, 10, 229–283.
- Withers, C.S.; Nadarajah, S. Bias reduction: The delta method versus the jackknife and the bootstrap. Pak. J. Statist. 2014, 30, 143–151.
- Withers, C.S.; Nadarajah, S. Bias reduction for standard and extreme estimates. Commun. Stat.-Simul. Comput. 2023, 52, 1264–1277.
- Hall, P. Rejoinder: Theoretical Comparison of Bootstrap Confidence Intervals. Ann. Stat. 1988, 16, 981–985.
- Skovgaard, I.M. Edgeworth expansions of the distributions of maximum likelihood estimators in the general (non i.i.d.) case. Scand. J. Statist. 1981, 8, 227–236.
- Skovgaard, I.M. Transformation of an Edgeworth expansion by a sequence of smooth functions. Scand. J. Statist. 1981, 8, 207–217.
- Withers, C.S.; Nadarajah, S. The distribution and percentiles of channel capacity for multiple arrays. Sadhana 2020, 45, 1–25.
- Skovgaard, I.M. On multivariate Edgeworth expansions. Int. Statist. Rev. 1986, 54, 169–186.
- Cornish, E.A.; Fisher, R.A. Moments and cumulants in the specification of distributions. Rev. l’Inst. Int. Statist. 1937, 5, 307–322.
- Fisher, R.A.; Cornish, E.A. The percentile points of distributions having known cumulants. Technometrics 1960, 2, 209–225.
- Skovgaard, I.M. Saddlepoint expansions for conditional distributions. J. Appl. Probab. 1987, 24, 875–887.
- Butler, R.W. Saddlepoint Approximations with Applications; Cambridge University Press: Cambridge, UK, 2007; pp. 107–144.
- Daniels, H.E. Saddlepoint approximations in statistics. Ann. Math. Statist. 1954, 25, 631–650.
- Hill, G.W.; Davis, A.W. Generalised asymptotic expansions of Cornish-Fisher type. Ann. Math. Statist. 1968, 39, 1264–1273.
- Withers, C.S.; Nadarajah, S. Generalized Cornish-Fisher expansions. Bull. Brazilian Math. Soc. New Series 2011, 42, 213–242.
- Withers, C.S.; Nadarajah, S. Expansions about the gamma for the distribution and quantiles of a standard estimate. Methodol. Comput. Appl. Probab. 2014, 16, 693–713.
- Jing, B.; Robinson, J. Saddlepoint approximations for marginal and conditional probabilities of transformed variables. Ann. Statist. 1994, 22, 1115–1132.
- McCullagh, P. Tensor notation and cumulants of polynomials. Biometrika 1984, 71, 461–476.
- McCullagh, P. Tensor Methods in Statistics; Chapman and Hall: London, UK, 1987.
- Teal, P. A Code to Calculate Bivariate Hermite Polynomials. 2024. Available online: https://github.com/paultnz/bihermite/tree/main (accessed on 20 December 2024).
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).