Article

# 5th-Order Multivariate Edgeworth Expansions for Parametric Estimates

by
C. S. Withers
Callaghan Innovation (formerly Industrial Research Ltd.), Lower Hutt 5011, New Zealand
Mathematics 2024, 12(6), 905; https://doi.org/10.3390/math12060905
Submission received: 30 January 2024 / Revised: 4 March 2024 / Accepted: 8 March 2024 / Published: 19 March 2024

## Abstract

The only cases where exact distributions of estimates are known are for samples from exponential families, and then only for special functions of the parameters. So statistical inference was traditionally based on the asymptotic normality of estimates. To improve on this we need the Edgeworth expansion for the distribution of the standardised estimate. This is an expansion in $n^{-1/2}$ about the normal distribution, where $n$ is typically the sample size. The first few terms of this expansion were originally given for the special case of a sample mean. In earlier work we derived it for any standard estimate, hugely expanding its application. We define an estimate $\hat{w}$ of an unknown vector $w$ in $\mathbb{R}^p$ as a standard estimate if $E\hat{w} \to w$ as $n \to \infty$, and for $r \ge 1$ the $r$th-order cumulants of $\hat{w}$ have magnitude $n^{1-r}$ and can be expanded in $n^{-1}$. Here we present a significant extension. We give the expansion of the distribution of any smooth function of $\hat{w}$, say $t(\hat{w})$ in $\mathbb{R}^q$, giving its distribution to $n^{-5/2}$. We do this by showing that $t(\hat{w})$ is a standard estimate of $t(w)$. This provides far more accurate approximations for the distribution of $t(\hat{w})$ than its asymptotic normality.
MSC:
60B12; 60B20; 60E05; 62E20; 62F12; 62G86; 62H10

## 1. Introduction and Summary

Suppose that $\hat{w}$ is a standard or Type A estimate of an unknown $w$ in $\mathbb{R}^p$ with respect to a given parameter $n$. That is, $E\hat{w} \to w$ as $n \to \infty$, and for $r \ge 1$ its $r$th-order cumulants have magnitude $n^{1-r}$ and can be expanded as
$\bar{k}^{1-r} = \kappa(\hat{w}^{i_1}, \dots, \hat{w}^{i_r}) = \sum_{e=r-1}^{\infty} n^{-e}\, \bar{k}_e^{1-r} \quad \text{for } 1 \le i_1, \dots, i_r \le p,$
where the cumulant coefficients $\bar{k}_e^{1-r} = k_e^{i_1 \dots i_r}$ do not depend on $n$, or at least are bounded as $n \to \infty$. So $\bar{k}_0^1 = w^{i_1}$. For example, (1) holds for $\hat{w}$ a function of a sample mean. We show that if $t(\hat{w})$ is a smooth function of a standard estimate $\hat{w}$, then it is a standard estimate of $t(w)$. We establish this for unbiased $\hat{w}$ in Theorem 2, and for biased $\hat{w}$ in Theorem 3. More generally, we define $\hat{w}$ as a Type B estimate if $E\hat{w} \to w$ as $n \to \infty$, and for $r \ge 1$,
$\bar{k}^{1-r} = \sum_{d=2r-2}^{\infty} n^{-d/2}\, \bar{b}_d^{1-r} \quad \text{for } 1 \le i_1, \dots, i_r \le p, \qquad \bar{b}_d^{1-r} = b_d^{i_1 \dots i_r}.$
For example, this type arises when considering one-sided confidence regions. If $t(\hat{w})$ is a smooth function of a Type B estimate, then it is a Type B estimate of $t(w)$. So for a Type A estimate, $\bar{b}_d^{1-r}$ equals $\bar{k}_e^{1-r}$ for $d = 2e$, and 0 for $d$ odd. Here $n$ is typically the sample size, or the minimum sample size if there is more than one sample.
Section 3 and Section 4 show that a smooth function of $\hat{w}$, say $t(\hat{w})$, is a standard estimate of $t = t(w)$. These sections give the cumulant coefficients of $t(\hat{w})$ in terms of those of $\hat{w}$ and the derivatives of $t(w)$: Section 3 does this for $\hat{w}$ unbiased and Section 4 for $\hat{w}$ biased. So they can be thought of as chain rules for obtaining the cumulant coefficients of $t(\hat{w})$ from those of $\hat{w}$. We use the notation $Y_n = O(n^{-\gamma})$ to mean that $n^{\gamma} Y_n$ is bounded as $n \to \infty$. We provide the cumulant coefficients required for Edgeworth expansions of $\hat{t}$ to $O(n^{-5/2})$. Cumulant coefficients up to $O(n^{-1})$ were given in [1]. Cumulant coefficients up to $O(n^{-r/2})$ use the $r$th derivatives of $t(w)$. Section 5 specialises to univariate $t(w)$, with examples. Theorem 3 and Corollary 4 rectify $\bar{a}_2^{12} = K_2^{j_1 j_2}$ and $a_{22}$ on pages 67 and 59 of [2]. Section 2 extends the shorthand bar notation above and gives the foundation theorem.
We now summarise the Edgeworth expansions of $\hat{w}$ for standard and Type B estimates in terms of the cumulant coefficients $\bar{k}_e^{1-r}$ and $\bar{b}_d^{1-r}$, as given in [3,4,5]:
$\mathrm{Prob.}(Y_{nw} \le x) = \sum_{r=0}^{\infty} n^{-r/2} P_r(x), \qquad p_{Y_{nw}}(x) = \sum_{r=0}^{\infty} n^{-r/2} p_r(x),$
$\text{where } Y_{nw} = n^{1/2}(\hat{w} - w - b_1 n^{-1/2}), \quad (b_1)_i = b_1^i, \quad P_0(x) = \Phi_V(x),$
$P_r(x) = \tilde{B}_r(e(-\partial/\partial x))\, \Phi_V(x) \quad \text{for } r \ge 1,$
$e_j(t) = \sum_{r=1}^{j+2} \bar{b}_{r+j}^{1 \dots r}\, t_{i_1} \cdots t_{i_r}/r!, \qquad \bar{b}_{r+j}^{1 \dots r} = b_{r+j}^{i_1 \dots i_r},$
$\Phi_V(x)$ is the multivariate normal distribution with zero mean and covariance $V = (\bar{b}_2^{12})$, and $\tilde{B}_r(e)$ is the complete ordinary Bell polynomial of [6]:
$\tilde{B}_1(e) = e_1, \quad \tilde{B}_2(e) = e_2 + e_1^2, \quad \tilde{B}_3(e) = e_3 + 2 e_1 e_2 + e_1^3, \quad \tilde{B}_4(e) = e_4 + 2 e_1 e_3 + e_2^2 + 3 e_1^2 e_2 + e_1^4.$
This provides the 5th-order Edgeworth expansion for the distribution of $Y_{nw}$, that is, its expansion to $O(n^{-5/2})$. Note that (5) uses the tensor summation convention of implicitly summing $i_1, \dots, i_r$ over their range $1, \dots, p$. For example,
$\text{for } \partial_i = \partial/\partial x_i \text{ and } \bar{\partial}_k = \partial_{i_k}, \quad P_1(x) = e_1(-\partial/\partial x)\, \Phi_V(x) = \sum_{r=1}^{3} \bar{b}_{r+1}^{1 \dots r} (-\bar{\partial}_1) \cdots (-\bar{\partial}_r)\, \Phi_V(x)/r! = \bar{k}_1^1 (-\bar{\partial}_1) \Phi_V(x) + \bar{k}_2^{1-3} (-\bar{\partial}_1)(-\bar{\partial}_2)(-\bar{\partial}_3) \Phi_V(x)/6$
for a standard estimate. For such an estimate, $b_1 = 0$ in (3), and the cumulant coefficients needed for $P_r(x), p_r(x)$ of (2) are $\bar{k}_0^1 = w^{i_1}$ and
$\text{for } r = 0:\ \bar{k}_1^{12}; \quad \text{for } r = 1:\ \bar{k}_1^1,\ \bar{k}_2^{1-3}; \quad \text{for } r = 2:\ \bar{k}_2^{12},\ \bar{k}_3^{1-4};$
$\text{for } r = 3:\ \bar{k}_2^1,\ \bar{k}_3^{1-3},\ \bar{k}_4^{1-5}; \quad \text{for } r = 4:\ \bar{k}_3^{12},\ \bar{k}_4^{1-4},\ \bar{k}_5^{1-6}.$
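The complete ordinary Bell polynomials above satisfy the recursion $\tilde{B}_0 = 1$, $\tilde{B}_r = \sum_{k=1}^{r} e_k \tilde{B}_{r-k}$. As a sketch (our illustration, not part of the paper), the following Python reproduces $\tilde{B}_1, \dots, \tilde{B}_4$ symbolically, representing a polynomial in $e_1, \dots, e_4$ as a dictionary from exponent tuples to integer coefficients:

```python
def poly_mul(p, q):
    """Multiply two polynomials stored as {exponent_tuple: coefficient}."""
    out = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = tuple(a + b for a, b in zip(m1, m2))
            out[m] = out.get(m, 0) + c1 * c2
    return out

def poly_add(p, q):
    """Add two polynomials, dropping zero coefficients."""
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, 0) + c
    return {m: c for m, c in out.items() if c}

def bell_ordinary(rmax):
    """Complete ordinary Bell polynomials in e_1..e_4, via the recursion
    B~_0 = 1, B~_r = sum_{k=1}^r e_k B~_{r-k}."""
    e = {k: {tuple(1 if i == k else 0 for i in range(1, 5)): 1}
         for k in range(1, 5)}
    B = [{(0, 0, 0, 0): 1}]
    for r in range(1, rmax + 1):
        acc = {}
        for k in range(1, min(r, 4) + 1):
            acc = poly_add(acc, poly_mul(e[k], B[r - k]))
        B.append(acc)
    return B

B = bell_ordinary(4)
# B~_2 = e_2 + e_1^2  (monomials keyed by exponents of e_1..e_4)
print(B[2])
# B~_4 = e_4 + 2 e_1 e_3 + e_2^2 + 3 e_1^2 e_2 + e_1^4
print(B[4])
```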
Therefore, to derive the 5th-order Edgeworth expansion for the distribution of $n^{1/2}(t(\hat{w}) - t(w))$ for $\hat{w}$ a standard estimate, we simply replace the coefficients in (6) and (7) in the expression for $P_r(x), r \le 4$, with those corresponding to $t(\hat{w})$ as provided in Section 3, Section 4 and Section 5.
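As a concrete univariate illustration (our example, not from the paper) of the gain over asymptotic normality: if $\hat{w} = \bar{X}$ is the mean of $n$ Exp(1) variables, the exact distribution of $n^{1/2}(\bar{X} - 1)$ is a shifted, scaled Erlang, and even the single correction term $P_1$, built from the third cumulant $\kappa_3 = 2$, improves substantially on $\Phi(x)$:

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def erlang_cdf(n, s):
    """P(S <= s) for S a sum of n independent Exp(1) variables:
    1 - exp(-s) * sum_{k<n} s^k/k!."""
    term, acc = 1.0, 0.0
    for k in range(n):
        acc += term
        term *= s / (k + 1)
    return 1 - math.exp(-s) * acc

def edgeworth1(x, n, k3):
    """One-term Edgeworth CDF for a standardised mean with third cumulant k3."""
    return Phi(x) - phi(x) * k3 * (x * x - 1) / (6 * math.sqrt(n))

n = 10
for x in (-1.5, 0.0, 1.5):
    exact = erlang_cdf(n, n + math.sqrt(n) * x)  # P(n^{1/2}(Xbar - 1) <= x)
    print(x, exact, Phi(x), edgeworth1(x, n, 2.0))  # Exp(1) has kappa_3 = 2
```

At $n = 10$ the correction already brings the approximation close to the exact Erlang values, while the plain normal approximation is visibly off in both tails and at the centre.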
Equation (9) of [3] provides $P_r(x)$ for the more general case where $P_0(x)$ is the distribution function of a $Y$ in $\mathbb{R}^p$ which depends on $n$ but is asymptotic to $\Phi_V(x)$ and has a Type B expansion. One can choose $P_0(x)$ so that the number of terms in each $P_r(x)$ is greatly reduced: see Withers and Nadarajah (2012d) [7,8]. When $\hat{w}$ is lattice, further terms need to be added: see, for example, Chapter 5 of [9], [10], and, for the density of $Y_{nw}$, p. 211 of [11], Section 5 of [12], and Section 6 of [13]. Corollary 1 of [3] gives the tilted Edgeworth expansion for $t(\hat{w})$, sometimes called the saddlepoint approximation, or the small-sample expansion as it is a series in $n^{-1}$, not just $n^{-1/2}$. It is very useful in the tails of the distribution, where Edgeworth expansions perform poorly. Cumulant coefficients are also needed for bias reduction, Bayesian inference, confidence regions and power: see [7,8,14,15,16,17,18] for examples. For a historical overview of Edgeworth expansions, refer to Section 7.
In summary, this paper gives high-order expansions for the distribution of a wide range of estimates, by determining the cumulant coefficients required for any smooth function of a standard estimate. This approach offers unprecedented accuracy for these distributions and eliminates the necessity for simulation methods.

## 2. Foundations

Considering $w = (w_1, \dots, w_p)$ in $\mathbb{R}^p$ and an estimate $\hat{w}$, assume that $E\hat{w} \to w$ as $n \to \infty$ and that for $r \ge 1$ its $r$th-order cumulants have magnitude $n^{1-r}$. Given $i_1, \dots, i_r$ in $1, 2, \dots, p$, we write these cumulants in shorthand as
For example, if $\hat{w} = \bar{X}$ is the mean of a random sample of size $n$, then (8) holds since $\bar{k}^{1-r} = n^{1-r} \kappa(X^{i_1}, \dots, X^{i_r})$, where $X^i$ is the $i$th component of $X$. According to Theorem 1, Equation (8) is valid if $\hat{w}$ is a smooth function of one or more sample means. Let $t: \mathbb{R}^p \to \mathbb{R}^q$ be a smooth function in a neighbourhood of $w$ with $j$th component $t^j = t^j(w)$, $j = 1, \dots, q$, and finite partial derivatives
where $\partial_i = \partial/\partial w_i$. Superscripts $i$ are reserved for the cumulants of $\hat{w}$, and subscripts for partial derivatives of $t(w)$. Superscripts $j$ are reserved for the components of $t(w)$ and for the joint cumulants of $\hat{t} = t(\hat{w})$. This bar shorthand allows us to shorten expressions by suppressing the $i$s and $j$s. We write the cumulants of $\hat{t} = t(\hat{w})$ as
For example, $\bar{k}^{12} = k^{i_1 i_2}$ and $\bar{K}^{12} = K^{j_1 j_2}$ imply that the covariance of $\hat{w}$ is represented by $(\bar{k}^{12})$, and the covariance of $\hat{t}$ by $(\bar{K}^{12})$, both of which are $O(n^{-1})$. Next, we demonstrate that
In other words, employing the tensor sum convention. The rest of this section and all proofs can be skipped on a first reading. Theorem 1 provides the cumulants of $\hat{t} = t(\hat{w})$ when $\hat{w}$ is unbiased.
We use the notation $\sum_N f^{j_1 j_2 \dots}$ to denote the sum over all $N$ permutations of $j_1, j_2, \dots$ that give distinct terms.
Theorem 1.
Suppose $E\hat{w} = w$ and Equation (8) holds. Then for $r \ge 1$ and $1 \le j_1, \dots, j_r \le q$, $\bar{K}^{1-r}$ of (9) satisfies
and the leading coefficients are as follows.
that is,
where
where
where
where
where
where
where
where
$T 1 − 7 1 − 6 = ∑ 30 t ¯ 16 1 t ¯ 2 2 t ¯ 3 3 t ¯ 4 4 t ¯ 5 5 t ¯ 7 6 , U 1 − 7 1 − 6 = ∑ 60 t ¯ 15 1 t ¯ 2 2 t ¯ 3 3 t ¯ 4 4 t ¯ 6 5 t ¯ 7 6 , T 1 − 8 1 − 6 = ∑ 60 t ¯ 157 1 t ¯ 2 2 t ¯ 3 3 t ¯ 4 4 t ¯ 6 5 t ¯ 8 6 + ∑ 180 t ¯ 15 1 t ¯ 27 2 t ¯ 3 3 t ¯ 4 4 t ¯ 6 5 t ¯ 8 6 + ∑ 120 t ¯ 15 1 t ¯ 67 2 t ¯ 2 3 t ¯ 3 4 t ¯ 4 5 t ¯ 8 6 , U 1 − 8 1 − 6 = ∑ 90 t ¯ 147 1 t ¯ 2 2 t ¯ 3 3 t ¯ 5 4 t ¯ 6 5 t ¯ 8 6 + ∑ 360 t ¯ 14 1 t ¯ 27 2 t ¯ 3 3 t ¯ 5 4 t ¯ 6 5 t ¯ 8 6 + ∑ 90 t ¯ 17 1 t ¯ 48 2 t ¯ 2 3 t ¯ 3 4 t ¯ 5 5 t ¯ 6 6 , T 1 − 9 1 − 6 = ∑ 60 t ¯ 1468 1 t ¯ 2 2 t ¯ 3 3 t ¯ 5 4 t ¯ 7 5 t ¯ 9 6 + ∑ 360 t ¯ 146 1 t ¯ 28 2 t ¯ 3 3 t ¯ 5 4 t ¯ 7 5 t ¯ 9 6 + ∑ 360 t ¯ 146 1 t ¯ 58 2 t ¯ 2 3 t ¯ 3 4 t ¯ 7 5 t ¯ 9 6 + ∑ 180 t ¯ 468 1 t ¯ 15 2 t ¯ 2 3 t ¯ 3 4 t ¯ 7 5 t ¯ 9 6 + ∑ 120 t ¯ 14 1 t ¯ 26 2 t ¯ 38 3 t ¯ 5 4 t ¯ 7 5 t ¯ 9 6 + ∑ 720 t ¯ 14 1 t ¯ 26 2 t ¯ 58 3 t ¯ 3 4 t ¯ 7 5 t ¯ 9 6 + ∑ 360 t ¯ 14 1 t ¯ 56 2 t ¯ 78 3 t ¯ 2 4 t ¯ 3 5 t ¯ 9 6 , T 1 − 10 1 − 6 = ∑ 6 t ¯ 13579 1 t ¯ 2 2 t ¯ 4 3 t ¯ 6 4 t ¯ 8 5 t ¯ 10 6 + ∑ 120 t ¯ 1357 1 t ¯ 29 2 t ¯ 4 3 t ¯ 6 4 t ¯ 8 5 t ¯ 10 6 + ∑ 90 t ¯ 135 1 t ¯ 279 2 t ¯ 4 3 t ¯ 6 4 t ¯ 8 5 t ¯ 10 6 + ∑ 360 t ¯ 135 1 t ¯ 27 2 t ¯ 49 3 t ¯ 6 4 t ¯ 8 5 t ¯ 10 6 + ∑ 360 t ¯ 135 1 t ¯ 27 2 t ¯ 89 3 t ¯ 4 4 t ¯ 6 5 t ¯ 10 6 + ∑ 360 t ¯ 13 1 t ¯ 25 2 t ¯ 47 3 t ¯ 69 4 t ¯ 8 5 t ¯ 10 6 .$
Note 1.
For reference regarding $N$ in $\sum_N$, refer to page 48 of [19]. It is important to note that the notation $\sum_N$ in terms like $T_{1-s}^{1-r}$ only applies for $N < r!$ in the context where they are used. For example, writing $(abc) = \bar{t}_{13}^a \bar{t}_2^b \bar{t}_4^c$ and recalling that $\sum_N$ only permutes superscripts but leaves subscripts alone, we have
$T_{1-4}^{1-3} = \sum_N (123) = (123) + (213) + (321)$
with $N = 3$, not $3!$, since
$\sum_{3!} (123) = (123) + (132) + (213) + (231) + (321) + (312) = \sum_{k=1}^{6} S_k,$
say, which when multiplied by $\bar{k}^{12} \bar{k}^{34}$, as in , gives $\sum_{k=1}^{6} S'_k$, say, where $S'_{2k} = S'_{2k-1}$ for $k = 1, 2, 3$. For example, $T_{1-4}^{1-3} \bar{k}^{12} \bar{k}^{34}$ above is shorthand for $\sum_3 \bar{t}_{13}^1 \bar{t}_2^2 \bar{t}_4^3 \bar{k}^{12} \bar{k}^{34}$. For,
$S'_2 = \bar{t}_4^2 \bar{k}^{43} \bar{t}_{31}^1 \bar{k}^{12} \bar{t}_2^3 = \bar{t}_1^2 \bar{k}^{12} \bar{t}_{13}^1 \bar{k}^{34} \bar{t}_4^3 = S'_1 \ \Rightarrow\ T_{1-4}^{1-3} \bar{k}^{12} \bar{k}^{34} = S'_1 + S'_3 + S'_5.$
Proof
This result can be derived by substituting $\bar{A}_{1-r}^j = A_{i_1 \dots i_r}^j$ by $\bar{t}_{1-r}^1/r! = t^{j_1}_{.i_1 \dots i_r}/r!$ according to [19]. □
Likewise, one can readily derive from pages 51–53 of [19]. The tensor form $2\bar{K}_1^1 = \bar{t}_{12}^1 \bar{k}_1^{12}$ can be conceptualised as a molecule of 2 atoms, $\bar{t}_{12}^1$ and $\bar{k}_1^{12}$, connected by the double bond 1, 2, representing $i_1, i_2$. is a linear combination of $\bar{t}_{1-3}^1 \bar{k}^{1-3}$, 2 atoms linked by the triple bond 1, 2, 3, and secondly $\bar{k}^{12} \bar{t}_{1-4}^1 \bar{k}^{34}$. The last expression has the structure of CO$_2$: 2 identical atoms, each linked by a double bond to a central atom. Just as such bonds are depicted in chemistry to illustrate the structure of a molecule, they can be very useful here to illustrate the difference in structure of similar mathematical expressions. $S'_1$ of Note 1 is a linear molecular form with the 4 single bonds 1, 2, 3, 4 and 4 distinct atoms, $\bar{t}_1^2, \bar{t}_1^1, \bar{t}_{12}^1,$ and $\bar{k}^{12}$. Other expressions have more complex structures. Doubling the last term in yields $T_{1-4}^{12} \bar{k}^{12} \bar{k}^{34} = S^{12} + S^{21} + S$, where $S^{12} = \bar{k}^{12} \bar{t}_{1-3}^1 \bar{k}^{34} \bar{t}_4^2$ exhibits a linear structure with a double bond between 1 and 2, followed by two single bonds, 3 and 4. Additionally, $S = \bar{t}_{31}^1 \bar{k}^{12} \bar{t}_{24}^2 \bar{k}^{43}$ forms a square with the four single bonds 1, 2, 4, 3 arranged along its successive edges. These pictorial forms are a very useful way to distinguish similar expressions in $\sum_N f^{j_1 j_2 \dots}$.
Section 6 provides the 'more complicated' terms referred to (but not given) on p. 49 of [19] when $\hat{w}$ is biased. It can be used for an alternative proof of Theorem 3 below. From Theorem 1, Edgeworth expansions can be obtained for the distribution and density of the standardised form of $t(\hat{w})$,
$Y_{nt} = n^{1/2}(\hat{t} - t) = n^{1/2}(t(\hat{w}) - t(w)),$
of the form
$\mathrm{Prob.}(Y_{nt} \le x) = \sum_{r=0}^{\infty} P_{rn}(x), \qquad p_{Y_{nt}}(x) = \sum_{r=0}^{\infty} p_{rn}(x),$
where $P_{rn}(x), p_{rn}(x)$ are $O(n^{-r/2})$. The coefficients of Theorem 1 needed for $P_{rn}(x), p_{rn}(x)$ are as follows.
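The chain-rule nature of these expansions can be checked exactly in a toy case (our example, with the hypothetical choices $t(w) = w^2$ and $\hat{w}$ the mean of $n$ Exp(1) variables): the leading variance coefficient of $t(\hat{w})$ is $\bar{t}_1 \bar{t}_2 \bar{k}_1^{12}$, the delta method, which here equals $(2w)^2 \cdot 1 = 4$ at $w = 1$:

```python
from fractions import Fraction

def var_mean_sq(n):
    """Exact Var(Xbar^2) when Xbar is the mean of n Exp(1) variables.
    Xbar ~ Gamma(n, 1/n), so E Xbar^k = n(n+1)...(n+k-1)/n^k exactly."""
    m2 = Fraction(n * (n + 1), n**2)
    m4 = Fraction(n * (n + 1) * (n + 2) * (n + 3), n**4)
    return m4 - m2 * m2

# The leading coefficient t_1 t_2 k_1^{12} predicts Var(t(what)) ~ 4/n:
for n in (10, 100, 1000):
    print(n, float(n * var_mean_sq(n)))  # n * Var -> 4 as n grows
```

Exactly, $n \,\mathrm{Var}(\bar{X}^2) = 4 + 10/n + 6/n^2$, so the remaining terms are the higher-order cumulant coefficients the theorems below account for.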

## 3. Cumulant Coefficients for $t(\hat{w})$ when $E\hat{w} = w$

We now show that for $r \ge 1$ and $1 \le j_1, \dots, j_r \le q$, the cumulant $\bar{K}^{1-r}$ from Equation (10) can be expanded as
$\bar{K}^{1-r} = K^{j_1 \dots j_r} = \kappa(\hat{t}^{j_1}, \dots, \hat{t}^{j_r}) = \sum_{e=r-1}^{\infty} n^{-e} \bar{K}_e^{1-r}.$
Substituting $\{\bar{k}^{1-r}\}$ with $\{\bar{K}^{1-r}\}$ on the right-hand side of (4), denoted RHS (4), provides the Edgeworth expansion for $Y_{nt}$ as in Equation (12). If $\pi$ is a product of cumulants as in Equation (1), let $(\pi)_e$ denote the coefficient of $n^{-e}$ in the expansion of $\pi$. For example, $(\bar{k}^{1-r})_e = \bar{k}_e^{1-r},$
$(\bar{k}^{12} \bar{k}^{34})_3 = \bar{k}_1^{12} \bar{k}_2^{34} + \bar{k}_2^{12} \bar{k}_1^{34}, \quad (\bar{k}^{12} \bar{k}^{34})_4 = \bar{k}_1^{12} \bar{k}_3^{34} + \bar{k}_2^{12} \bar{k}_2^{34} + \bar{k}_3^{12} \bar{k}_1^{34}, \quad (\bar{k}^{1-3} \bar{k}^{45})_4 = \bar{k}_2^{1-3} \bar{k}_2^{45} + \bar{k}_3^{1-3} \bar{k}_1^{45}, \quad (\bar{k}^{12} \bar{k}^{34} \bar{k}^{56})_4 = \bar{k}_1^{12} \bar{k}_1^{34} \bar{k}_2^{56} + \bar{k}_1^{12} \bar{k}_2^{34} \bar{k}_1^{56} + \bar{k}_2^{12} \bar{k}_1^{34} \bar{k}_1^{56}.$
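The operation $(\pi)_e$ is just a convolution of the coefficient sequences of the factors. A minimal sketch (our illustration, not from the paper) that reproduces the examples above, representing each cumulant series as a dict from order $e$ to a symbolic coefficient:

```python
from itertools import product

def series_product_coeff(factors, e):
    """Coefficient of n^{-e} in a product of cumulant expansions.

    Each factor is a dict {order: symbol} for k = sum_order n^{-order} k_order
    (orders start at r - 1 for an rth-order cumulant). Returns the terms of
    the coefficient of n^{-e} as tuples of the factors' symbols.
    """
    terms = []
    orders = [sorted(f) for f in factors]
    for combo in product(*orders):
        if sum(combo) == e:
            terms.append(tuple(f[o] for f, o in zip(factors, combo)))
    return terms

# k^{12} and k^{34} are second-order cumulants, so their expansions start at n^{-1}.
k12 = {1: "k1^12", 2: "k2^12", 3: "k3^12"}
k34 = {1: "k1^34", 2: "k2^34", 3: "k3^34"}
# (k^{12} k^{34})_3 = k1^12 k2^34 + k2^12 k1^34, matching the display above.
print(series_product_coeff([k12, k34], 3))
```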
Now, let us provide the elements of the expansion (14) when $E w ^ = w$.
Theorem 2.
Assume that $\hat{w}$ is an unbiased estimate of $w$ satisfying Equation (1) and that $t(w)$ has finite derivatives. In this case, Equation (14) holds with bounded cumulant coefficients
and so forth. The leading coefficients needed for $P_r(x), p_r(x)$ of (4) for the distribution of $Y_{nt}$ of (12) are given in the $T, U, V$ notation of Theorem 1 as follows.
Proof
Substituting (1) into of Theorem 1 gives say. So by (10), (14) and (16) hold. is of [2]. □
Note  2.
(11) made explicit the 3 terms needed in $T_{1-4}^{1-3}$ for $P_1(x)$ of Theorem 2. Similarly, $P_2(x)$ needs the 12 terms
$T_{1-5}^{1-4} = \sum_{12}(1234) = (1234) + (1243) + (2413) + (2431) + (3124) + (3142) + (3241) + (3412) + (4123) + (4132) + (4231) + (4321),$
where $(abcd) = \bar{t}_{14}^a \bar{t}_2^b \bar{t}_3^c \bar{t}_5^d$. It also needs the $4 + 12$ terms $T_{1-6}^{1-4} = A + B$, where
$A = \sum_4 (1234) = (1234) + (2134) + (3124) + (4123) \ \text{for } (abcd) = \bar{t}_{135}^a \bar{t}_2^b \bar{t}_4^c \bar{t}_6^d, \qquad B = \sum_{12}(1234) = (1234) + (1423) + (1432) + (1324) + (2134) + (2314) + (2413) + (3124) + (3214) + (3412) + (4213) + (4312) \ \text{for } (abcd) = \bar{t}_{13}^a \bar{t}_{25}^b \bar{t}_4^c \bar{t}_6^d.$

## 4. Cumulant Coefficients for $t(\hat{w})$ when $E\hat{w} \ne w$

We proceed by removing the assumption that $\hat{w}$ is unbiased. We use $\bar{K}_e^{1-r}$ from Theorem 2 and the shorthand $\bar{f}_{\bullet m} = \partial_{i_m} f$, where again $\partial_i = \partial/\partial w_i$. A significant distinction arises compared with Theorem 2: there, $\bar{k}_e^{1-r}$ could be treated as an algebraic expression, but now we must consider each of them as a function of $w$. Thus we assume that the distribution of $\hat{w}$ is determined by $w$. This assumption is needed to derive higher-order confidence intervals for $t(w)$ when $q = 1$: see [20]. It is demonstrated that for $Y_{nt}$ of Equation (12), $P_2(x), p_2(x)$ require the first derivatives $\bar{k}_{1 \bullet i}^{12} = \partial_i \bar{k}_1^{12}$, $P_3(x), p_3(x)$ need the first derivatives $\bar{k}_{2 \bullet 4}^{1-3}$, and so on. The derivatives of $\bar{K}_e^{1-r}$ are computed using Leibniz's rule for the derivative of a product. For example,
$K ¯ 1 ▪ 3 12 = ( t ¯ 1 1 t ¯ 2 2 k ¯ 1 12 ) ▪ 3 = ( ∑ 12 2 t ¯ 1 1 t ¯ 23 2 ) k ¯ 1 12 + t ¯ 1 1 t ¯ 2 2 k ¯ 1 ▪ 3 12 f o r ∑ 12 2 t ¯ 1 1 t ¯ 23 2 = t ¯ 13 1 t ¯ 2 2 + t ¯ 1 1 t ¯ 23 2 , K ¯ 1 ▪ 3 1 = ( t ¯ 12 1 k ¯ 1 12 ) ▪ 3 / 2 = t ¯ 1 − 3 1 k ¯ 1 12 / 2 + t ¯ 12 1 k ¯ 1 ▪ 3 12 / 2 , K ¯ 1 ▪ 34 12 = ∑ 12 2 [ ( t ¯ 14 1 t ¯ 23 2 + t ¯ 1 1 t ¯ 2 − 4 2 ) k ¯ 1 12 + t ¯ 1 1 t ¯ 23 2 k ¯ 1 ▪ 4 12 + t ¯ 14 1 t ¯ 2 2 k ¯ 1 ▪ 3 12 ] + t ¯ 1 1 t ¯ 2 2 k ¯ 1 ▪ 34 12 , ( t ¯ 1 1 t ¯ 2 2 t ¯ 3 3 k ¯ 2 1 − 3 ) ▪ 4 = ( t ¯ 1 1 t ¯ 2 2 t ¯ 3 3 ) ▪ 4 k ¯ 2 1 − 3 + t ¯ 1 1 t ¯ 2 2 t ¯ 3 3 k ¯ 2 ▪ 4 1 − 3 , ( t ¯ 1 1 t ¯ 2 2 t ¯ 3 3 ) ▪ 4 = t ¯ 1 1 t ¯ 2 2 t ¯ 34 3 + t ¯ 1 1 t ¯ 24 2 t ¯ 3 3 + t ¯ 14 1 t ¯ 2 2 t ¯ 3 3 , ( T 1 − 4 1 − 3 k ¯ 1 12 k ¯ 1 34 ) ▪ 5 = T 1 − 4 ▪ 5 1 − 3 k ¯ 1 12 k ¯ 1 34 + T 1 − 4 1 − 3 ( k ¯ 1 12 k ¯ 1 34 ) ▪ 5 , T 1 − 4 ▪ 5 1 − 3 = ∑ 3 ( t ¯ 135 1 t ¯ 2 2 t ¯ 4 3 + t ¯ 13 1 t ¯ 25 2 t ¯ 4 3 + t ¯ 13 1 t ¯ 2 2 t ¯ 45 3 ) , ( k ¯ 1 12 k ¯ 1 34 ) ▪ 5 = k ¯ 1 ▪ 5 12 k ¯ 1 34 + k ¯ 1 12 k ¯ 1 ▪ 5 34 .$
Theorem 3.
Let $\hat{w}$ in $\mathbb{R}^p$ be a biased standard estimate of $w$ satisfying (1), where the $\bar{k}_e^{1-r}$ depend on $w$. Then $\hat{t} = t(\hat{w})$ in $\mathbb{R}^q$ is a standard estimate of $t(w)$:
$\kappa(\hat{t}^{j_1}, \dots, \hat{t}^{j_r}) = \sum_{e=r-1}^{\infty} n^{-e} \bar{a}_e^{1-r} \quad \text{for } r \ge 1,\ 1 \le j_1, \dots, j_r \le q,$
$\text{where } \bar{a}_e^{1-r} = \bar{K}_e^{1-r} + \bar{D}_e^{1-r}, \qquad \bar{D}_{r-1}^{1-r} = 0,$
for $\bar{K}_e^{1-r}$ of Theorem 2; the other $\bar{D}_e^{1-r} = D_e^{j_1 \dots j_r}$ needed for $P_r(x), p_r(x)$ of (4) for $Y_{nt}$ of (12) are as follows.
$F o r P 0 ( x ) : D ¯ 1 12 = 0 ⇒ a ¯ 1 12 = K ¯ 1 12 = K 1 j 1 j 2 = t ¯ 1 1 t ¯ 2 2 k ¯ 1 12 . F o r P 1 ( x ) : D ¯ 1 1 = t ¯ 1 1 k ¯ 1 1 ⇒ a ¯ 1 1 = K ¯ 1 1 + D ¯ 1 1 = t ¯ 1 1 k ¯ 1 1 + t ¯ 12 1 k ¯ 1 12 / 2 , F o r P 2 ( x ) : D ¯ 2 12 = K ¯ 1 ▪ 3 12 k ¯ 1 3 = [ ( t ¯ 13 1 t ¯ 2 2 + t ¯ 1 1 t ¯ 23 2 ) k ¯ 1 12 + t ¯ 1 1 t ¯ 2 2 k ¯ 1 ▪ 3 12 ] k ¯ 1 3 ⇒ a ¯ 2 12 = t ¯ 1 1 t ¯ 2 2 k ¯ 2 12 + T 1 − 3 12 k ¯ 2 1 − 3 / 2 + T 1 − 4 12 k ¯ 1 12 k ¯ 1 34 / 2 + [ ( t ¯ 13 1 t ¯ 2 2 + t ¯ 1 1 t ¯ 23 2 ) k ¯ 1 12 + t ¯ 1 1 t ¯ 2 2 k ¯ 1 ▪ 3 12 ] k ¯ 1 3 . F o r P 3 ( x ) : D ¯ 2 1 = K ¯ 1 , 1 1 + K ¯ 0 , 2 1 , K ¯ 1 , 1 1 = K ¯ 1 ▪ 3 1 k ¯ 1 3 , K ¯ 0 , 2 1 = t ¯ 1 1 k ¯ 2 1 + t ¯ 12 1 k ¯ 1 1 k ¯ 1 2 / 2 ⇒ a ¯ 2 1 = t ¯ 1 1 k ¯ 2 1 + t ¯ 12 1 ( k ¯ 2 12 + k ¯ 1 1 k ¯ 1 2 + k ¯ 1 ▪ 3 12 k ¯ 1 3 ) / 2 + t ¯ 1 − 3 1 ( k ¯ 2 1 − 3 / 6 + k ¯ 1 1 k ¯ 1 23 / 2 ) + t ¯ 1 − 4 1 k ¯ 1 12 k ¯ 1 34 / 8 , D ¯ 3 1 − 3 = K ¯ 2 ▪ 4 1 − 3 k ¯ 1 4 = ( t ¯ 1 1 t ¯ 2 2 t ¯ 3 3 k ¯ 2 1 − 3 ) ▪ 4 k ¯ 1 4 + ( T 1 − 4 1 − 3 k ¯ 1 12 k ¯ 1 34 ) ▪ 5 k ¯ 1 5 . F o r P 4 ( x ) : D ¯ 3 12 = K ¯ 2 , 1 12 + K ¯ 1 , 2 12 , K ¯ 2 , 1 12 = K ¯ 2 ▪ 3 12 k ¯ 1 3 , K ¯ 1 , 2 12 = K ¯ 1 ▪ 3 12 k ¯ 2 3 / 2 + K ¯ 1 ▪ 34 12 k ¯ 1 3 k ¯ 1 4 , D ¯ 4 1 − 4 = K ¯ 3 ▪ 5 1 − 4 k ¯ 1 5 , D ¯ 5 1 − 6 = 0 .$
For $E t^{j_1}(\hat{w})$ to $O(n^{-5})$ we also need $\bar{D}_j = \bar{D}_j^1$, $j = 3, 4$, given by
$D ¯ 3 = K ¯ 2 , 1 + K ¯ 1 , 2 + K ¯ 0 , 3 , K ¯ 2 , 1 = K ¯ 2 ▪ 1 k ¯ 1 1 , K ¯ 2 ▪ 1 = ( t ¯ 1 − 3 k ¯ 2 23 + t ¯ 23 k ¯ 2 ▪ 1 23 ) / 2 + ( t ¯ 1 − 4 k ¯ 2 2 − 4 + t ¯ 2 − 4 k ¯ 2 ▪ 1 2 − 4 ) / 6 + t ¯ 1 − 5 k ¯ 1 23 k ¯ 1 45 / 8 + t ¯ 2 − 5 k ¯ 1 23 k ¯ 1 ▪ 1 45 / 4 , K ¯ 1 , 2 = K ¯ 1 ▪ 1 k ¯ 2 1 + K ¯ 1 ▪ 12 k ¯ 1 1 k ¯ 1 2 / 2 , 2 K ¯ 1 ▪ 1 = t ¯ 1 − 3 k ¯ 1 23 + t ¯ 23 k ¯ 1 ▪ 1 23 , 2 K ¯ 1 ▪ 12 = t ¯ 1 − 4 k ¯ 1 34 + ∑ 12 2 t ¯ 2 − 4 k ¯ 1 ▪ 1 34 + t ¯ 34 k ¯ 1 ▪ 12 34 , K ¯ 0 , 3 = t ¯ 1 k ¯ 3 1 + t ¯ 12 k ¯ 1 1 k ¯ 2 2 + t ¯ 1 − 3 k ¯ 1 1 k ¯ 1 2 k ¯ 1 3 / 6 , D ¯ 4 = K ¯ 3 , 1 + K ¯ 2 , 2 + K ¯ 1 , 3 + K ¯ 0 , 4 , K ¯ 3 , 1 = K ¯ 3 ▪ 1 k ¯ 1 1 , K ¯ 3 ▪ 1 = t ¯ 1 − 3 k ¯ 3 23 / 2 + t ¯ 23 k ¯ 3 ▪ 1 23 / 2 + t ¯ 1 − 4 k ¯ 3 2 − 4 / 6 + t ¯ 2 − 4 k ¯ 3 ▪ 1 2 − 4 / 6 + t ¯ 1 − 5 k ¯ 1 23 k ¯ 2 45 / 4 + t ¯ 2 − 5 k ¯ 1 23 k ¯ 2 ▪ 1 45 / 2 + ( t ¯ 1 − 5 k ¯ 3 2 − 5 + t ¯ 2 − 5 k ¯ 3 ▪ 1 2 − 5 ) / 24 + ( t ¯ 1 − 6 k ¯ 2 2 − 4 k ¯ 1 56 + t ¯ 2 − 6 k ¯ 2 ▪ 1 2 − 4 k ¯ 1 56 + t ¯ 2 − 6 k ¯ 2 2 − 4 k ¯ 1 ▪ 1 56 ) / 12 + k ¯ 1 23 k ¯ 1 45 ( t ¯ 1 − 7 k ¯ 1 67 / 48 + t ¯ 2 − 7 k ¯ 1 ▪ 1 67 / 16 ) ,$
$K ¯ 2 , 2 = K ¯ 2 ▪ 1 k ¯ 2 1 + K ¯ 2 ▪ 12 k ¯ 1 1 k ¯ 1 2 / 2 , K ¯ 2 ▪ 1 = t ¯ 1 − 3 k ¯ 2 23 / 2 + t ¯ 23 k ¯ 2 ▪ 1 23 / 2 + t ¯ 1 − 4 k ¯ 2 2 − 4 / 6 + t ¯ 2 − 4 k ¯ 2 ▪ 1 2 − 4 / 6 + t ¯ 1 − 5 k ¯ 1 23 k ¯ 1 45 / 8 + t ¯ 2 − 5 k ¯ 1 23 k ¯ 1 ▪ 1 45 / 4 , 2 K ¯ 2 ▪ 12 = t ¯ 1 − 4 k ¯ 2 34 + ∑ 12 2 t ¯ 2 − 4 k ¯ 2 ▪ 1 34 + t ¯ 34 k ¯ 2 ▪ 12 34 + ( t ¯ 1 − 5 k ¯ 2 3 − 5 + ∑ 12 2 t ¯ 13 − 5 k ¯ 2 ▪ 2 3 − 5 + t ¯ 3 − 5 k ¯ 2 ▪ 12 3 − 5 ) / 3 + t ¯ 1 − 6 k ¯ 1 34 k ¯ 1 56 / 4 + ∑ 12 2 t ¯ 13 − 6 k ¯ 1 34 k ¯ 1 ▪ 2 56 / 2 + t ¯ 3 − 6 ( k ¯ 1 ▪ 2 34 k ¯ 1 ▪ 1 56 + k ¯ 1 34 k ¯ 1 ▪ 12 56 ) / 2 , K ¯ 1 , 3 = K ¯ 1 ▪ 1 k ¯ 3 1 + K ¯ 1 ▪ 12 k ¯ 1 1 k ¯ 2 2 + K ¯ 1 ▪ 123 k ¯ 1 1 k ¯ 1 2 k ¯ 1 3 / 6 , 2 K ¯ 1 ▪ 1 = t ¯ 1 − 3 k ¯ 1 23 + t ¯ 23 k ¯ 1 ▪ 1 23 , 2 K ¯ 1 ▪ 12 = t ¯ 1 − 4 k ¯ 1 34 + ∑ 12 2 t ¯ 134 k ¯ 1 ▪ 2 34 + t ¯ 34 k ¯ 1 ▪ 12 34 , 2 K ¯ 1 ▪ 123 = t ¯ 1 − 5 k ¯ 1 45 + ∑ 1 − 3 3 ( t ¯ 1345 k ¯ 1 ▪ 2 45 + t ¯ 3 − 5 k ¯ 1 ▪ 12 45 ) + t ¯ 45 k ¯ 1 ▪ 1 − 3 45 , K ¯ 0 , 4 = t ¯ 1 k ¯ 4 1 + t ¯ 12 ( k ¯ 1 1 k ¯ 3 2 + k ¯ 2 1 k ¯ 2 2 / 2 ) + t ¯ 1 − 3 k ¯ 1 1 k ¯ 1 2 k ¯ 2 3 / 2 + t ¯ 1 − 4 k ¯ 1 1 k ¯ 1 2 k ¯ 1 3 k ¯ 1 4 / 24 .$
Proof
$\bar{K}^{1-r}(w) = \bar{K}^{1-r}$ and $\bar{K}_e^{1-r}(w) = \bar{K}_e^{1-r}$ are functions of $w$. By (14),
$\bar{K}^{1-r}(w_n) = \sum_{e=r-1}^{\infty} n^{-e} \bar{K}_e^{1-r}(w_n) \quad \text{for } w_n = E\hat{w} = w + d_n,$
where by (1), $d_n$ has $i_1$th component $\bar{d}_n^1 = d_n^{i_1} = \sum_{e=1}^{\infty} n^{-e} \bar{k}_e^1$. Consider the Taylor series expansion
$\bar{K}_k^{1-r}(w + d_n) = \bar{K}_k^{1-r} + \bar{K}_{k \bullet 1}^{1-r} \bar{d}_n^1 + \bar{K}_{k \bullet 12}^{1-r} \bar{d}_n^1 \bar{d}_n^2/2! + \cdots = \sum_{e=0}^{\infty} \bar{K}_{k,e}^{1-r} n^{-e}, \text{ say}.$
Substituting into (14) gives (17) with
$\bar{a}_c^{1-r} = \sum_{k+e=c} \bar{K}_{k,e}^{1-r} = \sum_{e=0}^{c-r+1} \bar{K}_{c-e,e}^{1-r}.$
Also $\bar{K}_{k,0}^{1-r} = \bar{K}_k^{1-r}$, so that (18) holds with
$\bar{D}_c^{1-r} = \sum_{e=1}^{c-r+1} \bar{K}_{c-e,e}^{1-r}: \quad \bar{D}_r^{1-r} = \bar{K}_{r-1,1}^{1-r}, \quad \bar{D}_{r+1}^{1-r} = \sum_{e=1}^{2} \bar{K}_{r+1-e,e}^{1-r}, \ \cdots$
An alternative proof can be obtained using Section 6. This corrects $C_e = \bar{a}_e^1$ given in Appendix B of [21]. Ref. [2] uses $K_k^{j_1 \dots j_r} = \bar{K}_k^{1-r}$ for $\bar{a}_k^{1-r}$, but the expression for $K_2^{ab}$ on p. 67, lines 2–3, omitted the term $A_i^a A_j^b k_{1,k}^{ij} k_1^k$. That is, the last term in $\bar{a}_2^{12}$ of Theorem 3 was omitted. Similarly, the results on p. 67 for $r = 3, 4$ are only true when $\hat{w}$ is unbiased or the cumulant coefficients of $\hat{w}$ do not depend on $w$, as they omit the derivatives of $\bar{k}_e^{1-r}$. The examples given there are not affected, as $\hat{w}$ is unbiased. Nor are the nonparametric examples of [22] and [23] affected, as the empirical distribution is an unbiased estimate of a distribution. Likewise, $\hat{w}$ is unbiased for the examples of [20]. M-estimates are biased, but the results of [16] are not affected, as only $K_1^{j_1 j_2}, K_1^{j_1}, K_2^{j_1 j_2 j_3}$ are given. No changes are needed for [3,4,17,24]. Applications to non-parametric and parametric confidence intervals were given in [22] and [20,23], and to ellipsoidal confidence regions and power in [4] and [25]. For nonparametric problems, $F(x)$ and its empirical distribution $F_n(x)$ play the roles of $w$ and $\hat{w}$; since the latter is unbiased, no corrections are needed. For $q = 1$, the $a_{ri} = \bar{a}_i^{1-r}$ were given for parametric and non-parametric problems in [22] and [2,23], and expressions for the classic Edgeworth expansion of $Y_{nw}$ in terms of the $a_{ri}$ were given in [14]. For $q \ge 1$, the $\bar{a}_i^{1-r}$ for parametric problems were given in [2], and can be obtained easily from the $a_{ri}$ given for $q = 1$ for 1-sample and multi-sample non-parametric problems in [22] and [23] and for semi-parametric problems in [16,24]. All these results can be extended to samples with independent non-identically distributed residuals, as in Section 6 of [26] and in [17]. The extension to matrix $\hat{w}$ needs only a slight change in notation.
For example, in [17], $\hat{w}$ can be viewed as a function of the mean of $n$ independent complex random matrices, although $n$ is actually the number of transmitters or receivers. Extensions to dependent random variables are also possible: see [27].
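As a small exact check of the biased-case expansion (our toy example, not from the paper): take $t(w) = w^2$ and $\hat{w} = \bar{X}$, the mean of $n$ Exp(1) variables, which is unbiased with $\bar{k}_1^1 = 0$ and leading covariance coefficient $\bar{k}_1^{11} = 1$. Theorem 3's first bias coefficient $\bar{a}_1^1 = \bar{t}_1 \bar{k}_1^1 + \bar{t}_{12} \bar{k}_1^{12}/2$ then equals $0 + 2 \cdot 1/2 = 1$, and indeed $E\bar{X}^2 = 1 + 1/n$ exactly, so the expansion terminates after the $n^{-1}$ term here:

```python
from fractions import Fraction

def mean_t(n):
    """Exact E Xbar^2 for Xbar the mean of n Exp(1) variables
    (Xbar ~ Gamma(n, 1/n))."""
    return Fraction(n + 1, n)

# The predicted bias coefficient a_1 = 1: E t(what) - t(w) = 1/n exactly.
for n in (5, 50, 500):
    print(n, mean_t(n) - 1)  # exactly 1/n
```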

## 5. Cumulant Coefficients for Univariate $t(\hat{w})$

Now suppose that $q = 1$. Let be the coefficient of $n^{-e}$ in . We write $\bar{K}_e^{1-r}$ as $K_{re}$. For $E\hat{w} = w$, (14), (16) and (20) become
For $E\hat{w} \ne w$, (17)–(19) become
$K_r = \kappa_r(\hat{t}) = \sum_{e=r-1}^{\infty} n^{-e} a_{re}, \ r \ge 1; \qquad a_{re} = K_{re} + D_{re}, \quad D_{rc} = \sum_{e=1}^{c-r+1} K_{r,c-e,e}: \quad D_{r,r-1} = 0, \quad D_{rr} = K_{r,r-1,1}, \quad D_{r,r+1} = \sum_{e=1}^{2} K_{r,r+1-e,e}, \ \cdots$
Here we give the cumulant coefficients $K_{re}$ needed for the Edgeworth expansion of $Y_{nt}$ of (12) for $P_r(x), r \le 4$. We do this when $E\hat{w} = w$ in Corollary 1 and when $E\hat{w} \ne w$ in Corollaries 3 and 4. To show more clearly the expressions we need in molecular form, we introduce the following ions (expressions with unpaired suffixes):
$s^{i_1} = \bar{s}^1 = \bar{k}_1^{12} \bar{t}_2, \quad \bar{u}^1 = \bar{t}_{12} \bar{s}^2 = \bar{t}_{12} \bar{k}_1^{23} \bar{t}_3, \quad \bar{X}^{34} = \bar{k}_1^{31} \bar{t}_{12} \bar{k}_1^{24}, \quad \bar{z}^{12} = \bar{t}_{1-3} \bar{s}^3, \quad \bar{v}^1 = \bar{k}_1^{12} \bar{u}^2 = \bar{X}^{14} \bar{t}_4, \quad \bar{x}^1 = \bar{t}_{12} \bar{v}^2, \quad \bar{S}^1 = \bar{k}_2^{12} \bar{t}_2, \quad \bar{y}^1 = \bar{k}_2^{1-3} \bar{t}_2 \bar{t}_3, \quad \bar{Y}^1 = \bar{t}_{12} \bar{y}^2,$
where a suffix without a match is not summed. For example, the RHS of $\bar{s}^1 = \bar{k}_1^{12} \bar{t}_2$ sums over $i_2$ but not $i_1$. Let $v, c_{01}, c_{02}, c_{21}, c_{22}, c_{23}, c_{11}, \dots, c_{1,10}, c_{31}, \dots, c_{3,11}$ be the 27 functions of $\omega$ given on pp. 4234–4235 of [20], labelled there as $I_2^{20}, I_1^{10}, \dots, I_{301}^{222000}$. By Corollaries 1 and 3 below, those needed for $P_r(x), r \le 2$, of (4), that is, for the Edgeworth expansion of $Y_{nt}$ of (12) to $O(n^{-3/2})$, are the following molecules.
$\text{For } P_0(x):\ v = K_{21} = \bar{t}_1 \bar{k}_1^{12} \bar{t}_2. \quad \text{For } P_1(x),\ K_{11}:\ c_{02} = \bar{t}_{12} \bar{k}_1^{12}; \ \text{for } D_{11}:\ c_{01} = \bar{t}_1 \bar{k}_1^1; \ \text{for } K_{32}:\ c_{21} = \bar{t}_1 \bar{t}_2 \bar{t}_3 \bar{k}_2^{1-3} = \bar{t}_1 \bar{y}^1,\ c_{23} = \bar{s}^1 \bar{t}_{12} \bar{s}^2 = \bar{s}^1 \bar{u}^1. \quad \text{For } P_2(x),\ K_{22}:\ c_{11} = \bar{t}_1 \bar{k}_2^{12} \bar{t}_2 = \bar{t}_1 \bar{S}^1,\ c_{15} = \bar{t}_1 \bar{k}_2^{1-3} \bar{t}_{23},\ c_{19} = \bar{t}_{12} \bar{X}^{12},\ c_{1,10} = \bar{s}^1 \bar{t}_{1-3} \bar{k}_1^{23} = \bar{z}^{23} \bar{k}_1^{23}; \ \text{for } D_{22}:\ c_{12} = \bar{k}_1^1 \bar{k}_{1 \bullet 1}^{23} \bar{t}_2 \bar{t}_3,\ c_{16} = \bar{k}_1^1 \bar{u}^1 = \bar{k}_1^1 \bar{t}_{12} \bar{k}_1^{23} \bar{t}_3; \ \text{for } K_{43}:\ c_{31} = \bar{t}_1 \bar{t}_2 \bar{t}_3 \bar{t}_4 \bar{k}_3^{1-4},\ c_{36} = \bar{y}^3 \bar{u}^3,\ c_{3,10} = \bar{u}^1 \bar{k}_1^{12} \bar{u}^2,\ c_{3,11} = \bar{s}^1 \bar{s}^2 \bar{s}^3 \bar{t}_{1-3}.$
Each molecule can be written as a shape. For example, $c_{19}$ is a rectangle. We now give the molecules $L_j, L_{ij}$ needed for the Edgeworth expansion to $O(n^{-5/2})$, that is, for $P_r(x)$ for $r = 3, 4$. Note that $P_r(x)$ needs the derivatives of $t(w)$ up to order $r + 1$.
$F o r P 3 ( x ) , K 12 : L 1 = t ¯ 12 k ¯ 2 12 , L 2 = t ¯ 1 − 3 k ¯ 2 1 − 3 , L 3 = t ¯ 1 − 4 k ¯ 1 12 k ¯ 1 34 ; f o r K 33 : L 4 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 3 1 − 3 , L 5 = u ¯ 1 S ¯ 1 , L 6 = t ¯ 13 t ¯ 2 t ¯ 4 k ¯ 3 1 − 4 , L 71 = z ¯ 12 k ¯ 2 1 − 3 t ¯ 3 , L 72 = y ¯ 1 t ¯ 145 k ¯ 1 45 , L 73 = t ¯ 12 k ¯ 2 1 − 3 u ¯ 3 , L 74 = t ¯ 14 k ¯ 1 45 t ¯ 52 k ¯ 2 1 − 3 t ¯ 3 , L 81 = k ¯ 1 12 t ¯ 1 − 4 s ¯ 3 s ¯ 4 , L 82 = k ¯ 1 12 t ¯ 1 − 3 v ¯ 3 , L 83 = X ¯ 34 z ¯ 34 , L 84 = X ¯ 14 t ¯ 45 k ¯ 1 56 t ¯ 61 , a s e x a g o n , f o r K 54 : L 9 = t ¯ 1 ⋯ t ¯ 5 k ¯ 4 1 − 5 , L 10 = u ¯ 1 t ¯ 2 t ¯ 3 t ¯ 4 k ¯ 3 1 − 4 , L 11 = y ¯ 1 Y ¯ 1 = y ¯ 1 t ¯ 12 y ¯ 2 , L 121 = y ¯ 1 t ¯ 1 − 3 s ¯ 2 s ¯ 3 , L 122 = t ¯ 1 k ¯ 2 1 − 3 u ¯ 2 u ¯ 3 , L 123 = Y ¯ 2 v ¯ 2 , L 131 = s ¯ 1 ⋯ s ¯ 4 t ¯ 1 − 4 , L 132 = s ¯ 1 s ¯ 2 t ¯ 1 − 3 v ¯ 3 , L 133 = v ¯ 1 t ¯ 12 v ¯ 2 = v ¯ 1 x ¯ 1 . F o r P 4 ( x ) , K 23 : L 14 = t ¯ 1 t ¯ 2 k ¯ 3 12 , L 15 = t ¯ 12 k ¯ 2 1 − 3 t ¯ 3 , L 161 = S ¯ 1 t ¯ 1 − 3 k ¯ 1 23 , L 162 = z ¯ 12 k ¯ 2 12 , L 171 = X ¯ 24 t ¯ 24 . L 181 = t ¯ 1 − 3 k ¯ 3 1 − 4 t ¯ 4 , L 182 = t ¯ 12 k ¯ 3 1 − 4 t ¯ 34 , L 191 = k ¯ 2 1 − 3 t ¯ 1 − 4 s ¯ 4 , L 192 = k ¯ 1 12 t ¯ 1 − 4 k ¯ 2 3 − 5 t ¯ 5 , L 193 = t ¯ 1 − 3 k ¯ 2 2 − 4 t ¯ 45 k ¯ 1 51 , L 194 = t ¯ 12 k ¯ 2 1 − 3 t ¯ 3 − 5 k ¯ 1 45 , L 201 = k ¯ 1 12 k ¯ 1 34 t ¯ 1 − 5 s ¯ 5 , L 202 = k ¯ 1 12 t ¯ 1 − 4 X ¯ 34 , L 203 = k ¯ 1 12 t ¯ 1 − 3 k ¯ 1 34 t ¯ 4 − 6 k ¯ 1 56 , L 204 = t ¯ 135 ( k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 ) t ¯ 246 ; f o r K 44 : L 21 = t ¯ 1 ⋯ t ¯ 4 k ¯ 4 1 − 4 , L 221 = Y ¯ 2 k ¯ 2 23 t ¯ 3 , L 222 = u ¯ 1 t ¯ 2 t ¯ 3 k ¯ 3 1 − 3 , L 231 = s ¯ 1 z ¯ 12 S ¯ 2 . L 241 = x ¯ 2 S ¯ 2 , L 242 = u ¯ 1 k ¯ 2 12 u ¯ 2 . 
L 25 = t ¯ 12 k ¯ 4 1 − 5 t ¯ 3 t ¯ 4 t ¯ 5 , L 261 = t ¯ 1 t ¯ 2 k ¯ 3 1 − 4 t ¯ 3 − 5 s ¯ 5 , L 262 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 3 1 − 4 t ¯ 4 − 6 k ¯ 1 56 , L 263 = t ¯ 12 k ¯ 3 1 − 4 t ¯ 3 u ¯ 4 , L 264 = t ¯ 1 t ¯ 2 k ¯ 3 1 − 4 ( t ¯ 35 t ¯ 46 ) k ¯ 1 56 , L 271 = t ¯ 1 k ¯ 2 1 − 3 t ¯ 2 − 4 y ¯ 4 , L 272 = t ¯ 12 k ¯ 2 1 − 3 Y ¯ 3 , L 273 = t ¯ 1 k ¯ 2 1 − 3 ( t ¯ 24 t ¯ 35 ) k ¯ 3 4 − 6 t ¯ 6 , L 281 = t ¯ 1 k ¯ 2 1 − 3 t ¯ 2 − 5 s ¯ 4 s ¯ 5 , L 282 = s ¯ 1 k ¯ 1 23 t ¯ 1 − 4 y ¯ 4 , L 283 = u ¯ 1 k ¯ 2 1 − 3 z ¯ 23 , L 284 = t ¯ 1 k ¯ 2 1 − 3 t ¯ 2 − 4 v ¯ 4 , L 285 = Y ¯ 2 k ¯ 1 23 t ¯ 3 − 5 k ¯ 1 45 , L 286 = t ¯ 12 k ¯ 2 1 − 3 z ¯ 34 s ¯ 4 , L 287 = t ¯ 1 k ¯ 2 1 − 3 t ¯ 24 k ¯ 1 45 z ¯ 53 , L 288 = y ¯ 1 t ¯ 1 − 3 X ¯ 23 , L 289 = t ¯ 12 k ¯ 2 1 − 3 x ¯ 3 , L 2810 = u ¯ 1 k ¯ 2 1 − 3 ( t ¯ 24 t ¯ 35 ) k ¯ 1 45 , L 2811 = t ¯ 1 k ¯ 2 1 − 3 ( t ¯ 24 k ¯ 1 45 t ¯ 36 k ¯ 1 67 ) t ¯ 57 , L 291 = k ¯ 1 12 t ¯ 1 − 5 s ¯ 3 s ¯ 4 s ¯ 5 , L 292 = k ¯ 1 12 t ¯ 1 − 4 v ¯ 3 s ¯ 4 , L 293 = X ¯ 34 t ¯ 3 − 6 s ¯ 5 s ¯ 6 , L 294 = k ¯ 1 12 t ¯ 1 − 3 k ¯ 1 34 t ¯ 4 − 6 s ¯ 5 s ¯ 6 , L 295 = k ¯ 1 12 ( z ¯ 13 z ¯ 24 ) k ¯ 1 34 , L 296 = k ¯ 1 12 t ¯ 1 − 3 k ¯ 1 34 x ¯ 4 , L 297 = k ¯ 1 12 t ¯ 135 v ¯ 5 t ¯ 24 k ¯ 1 34 , L 298 = X ¯ 14 t ¯ 45 k ¯ 1 56 z ¯ 61 , L 299 = X ¯ 14 t ¯ 45 X ¯ 58 ; f o r K 65 : L 30 = t ¯ 1 ⋯ t ¯ 6 k 5 1 − 6 , L 31 = u ¯ 1 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 5 k ¯ 4 1 − 5 , L 32 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 4 1 − 5 t ¯ 45 , L 331 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 3 1 − 4 z ¯ 45 s ¯ 5 , L 332 = u ¯ 1 u ¯ 2 k ¯ 3 1 − 4 t ¯ 3 t ¯ 4 , L 333 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 3 1 − 4 x ¯ 4 L 341 = t ¯ 1 − 3 y ¯ 1 s ¯ 2 y ¯ 3 , L 342 = Y ¯ 2 k ¯ 2 2 − 4 t ¯ 3 u ¯ 4 , L 343 = Y ¯ 1 k ¯ 1 12 Y ¯ 2 , L 351 = y ¯ 3 t ¯ 3 − 6 s ¯ 4 s ¯ 5 s ¯ 6 , L 352 = t ¯ 1 u ¯ 2 k ¯ 2 1 − 3 z ¯ 34 s ¯ 4 , L 353 = y ¯ 1 t ¯ 12 v ¯ 2 , L 354 = Y ¯ 4 k ¯ 1 45 t ¯ 5 − 7 s ¯ 6 s ¯ 7 , L 355 = u ¯ 1 u ¯ 2 u ¯ 3 k ¯ 2 1 − 3 , L 356 = t ¯ 1 u ¯ 2 k ¯ 2 1 − 3 x ¯ 3 , L 357 = t ¯ 3 − 5 y ¯ 3 v ¯ 4 s ¯ 5 . 
L 361 = s ¯ 1 ⋯ s ¯ 5 t ¯ 1 − 5 , L 362 = s ¯ 1 s ¯ 2 s ¯ 3 t ¯ 1 − 4 v ¯ 4 , L 363 = s ¯ 1 z ¯ 13 k ¯ 1 34 t ¯ 4 − 6 s ¯ 5 s ¯ 6 , L 364 = v ¯ 1 z ¯ 12 v ¯ 2 , L 365 = s ¯ 1 z ¯ 13 k ¯ 1 34 x ¯ 4 , L 366 = x ¯ 1 k ¯ 1 12 x ¯ 2 .$
These $c_{rs}$ and $L_j$ do not use derivatives of the $\bar{k}_e^{1-r}$, the cumulant coefficients of $\hat{w}$.
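Each molecule is simply a tensor contraction in which every matched suffix pair is summed over $1, \dots, p$. A toy numeric sketch (made-up values with $p = 2$; our illustration, not from the paper):

```python
p = 2
t1 = [1.0, 2.0]                      # t_i : first derivatives of t(w)
t2 = [[0.0, 1.0], [1.0, 0.0]]        # t_ij : second derivatives (symmetric)
k1 = [[2.0, 0.5], [0.5, 1.0]]        # k_1^{ij} : leading covariance coefficients

# v = t_1 k_1^{12} t_2 : both suffix pairs matched, so both indices are summed.
v = sum(t1[i] * k1[i][j] * t1[j] for i in range(p) for j in range(p))
# c_02 = t_12 k_1^{12} : a single "double bond" joining the two atoms.
c02 = sum(t2[i][j] * k1[i][j] for i in range(p) for j in range(p))
print(v, c02)
```

An ion such as $\bar{s}^1$ would instead leave one index free, returning a length-$p$ vector rather than a scalar.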
Corollary 1.
Suppose that $\hat{w}$ is an unbiased standard estimate of $w$ in $\mathbb{R}^p$ with respect to $n$, and that $q = 1$. Then the cumulants of $\hat{t} = t(\hat{w})$ can be expanded as (21) with bounded cumulant coefficients $K_{re}$. The leading coefficients needed for $P_r(x)$ of (4) for the distribution of $Y_{nt}$ of (12) are as follows.
$L_7 = L_{71} + 3 L_{72}/2 + 3 \sum_{k=3}^{4} L_{7k}, \qquad L_8 = 3 L_{81}/2 + 6 L_{82} + 3 L_{83} + 3 L_{84},$
Proof
Since $q = 1$, $\sum_N$ becomes $N$. We write $T_{1-s}^{1-r}, U_{1-s}^{1-r}, V_{1-s}^{1-r}$ as $T_{1-s}^r, U_{1-s}^r, V_{1-s}^r$. By Theorem 2 we need the following.
$T 1 − 4 3 / 3 = t ¯ 13 t ¯ 2 t ¯ 4 , T 1 − 3 2 / 2 = t ¯ 12 t ¯ 3 , T 1 − 4 2 / 2 = t ¯ 1 − 3 t ¯ 4 + t ¯ 13 t ¯ 24 , T 1 − 5 4 / 12 = t ¯ 14 t ¯ 2 t ¯ 3 t ¯ 5 , T 1 − 6 4 / 4 = t ¯ 135 t ¯ 2 t ¯ 4 t ¯ 6 + 3 t ¯ 13 t ¯ 25 t ¯ 4 t ¯ 6 , T 1 − 5 3 / 3 = t ¯ 124 t ¯ 3 t ¯ 5 + 3 t ¯ 145 t ¯ 2 t ¯ 3 / 2 + 3 t ¯ 12 t ¯ 34 t ¯ 5 + 3 t ¯ 14 t ¯ 25 t ¯ 3 , T 1 − 6 3 = 3 t ¯ 1235 t ¯ 4 t ¯ 6 / 2 + 6 t ¯ 1 − 3 t ¯ 45 t ¯ 6 + 3 t ¯ 135 t ¯ 24 t ¯ 6 + t ¯ 13 t ¯ 25 t ¯ 46 , T 1 − 6 5 / 20 = t ¯ 15 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 6 , U 1 − 6 5 / 15 = t ¯ 14 t ¯ 2 t ¯ 3 t ¯ 5 t ¯ 6 , T 1 − 7 5 / 30 = t ¯ 146 t ¯ 2 t ¯ 3 t ¯ 5 t ¯ 7 + 2 t ¯ 14 t ¯ 26 t ¯ 3 t ¯ 5 t ¯ 7 + 2 t ¯ 14 t ¯ 56 t ¯ 2 t ¯ 3 t ¯ 7 , T 1 − 8 5 = t ¯ 1357 t ¯ 2 t ¯ 4 t ¯ 6 t ¯ 8 + 60 t ¯ 135 t ¯ 27 t ¯ 4 t ¯ 6 t ¯ 8 + 60 t ¯ 13 t ¯ 25 t ¯ 47 t ¯ 6 t ¯ 8 . U 1 − 4 2 = t ¯ 1 − 3 t ¯ 4 / 3 + t ¯ 12 t ¯ 34 / 4 , T 1 − 5 2 = t ¯ 1 − 4 t ¯ 5 / 3 + t ¯ 1245 t ¯ 3 / 2 + t ¯ 124 t ¯ 35 + t ¯ 145 t ¯ 23 / 2 , T 1 − 6 2 = t ¯ 1 − 5 t ¯ 6 + 2 t ¯ 1235 t ¯ 46 + t ¯ 1 − 3 t ¯ 4 − 6 + 2 t ¯ 135 t ¯ 246 / 3 , T 1 − 5 4 / 12 = t ¯ 14 t ¯ 2 t ¯ 3 t ¯ 5 , U 1 − 5 4 / 4 = t ¯ 12 t ¯ 3 t ¯ 4 t ¯ 5 , U 1 − 6 4 / 2 = 3 t ¯ 125 t ¯ 3 t ¯ 4 t ¯ 6 + 2 t ¯ 156 t ¯ 2 t ¯ 3 t ¯ 4 + 6 t ¯ 12 t ¯ 35 t ¯ 4 t ¯ 6 + 3 t ¯ 15 t ¯ 26 t ¯ 3 t ¯ 4 , V 1 − 6 4 / 6 = t ¯ 124 t ¯ 3 t ¯ 5 t ¯ 6 + t ¯ 12 t ¯ 34 t ¯ 5 t ¯ 6 + t ¯ 14 t ¯ 25 t ¯ 3 t ¯ 6 , T 1 − 7 4 = 6 t ¯ 1246 t ¯ 3 t ¯ 5 t ¯ 7 + 3 t ¯ 1456 t ¯ 2 t ¯ 3 t ¯ 7 + 24 t ¯ 124 t ¯ 36 t ¯ 5 t ¯ 7 + 12 t ¯ 124 t ¯ 56 t ¯ 3 t ¯ 7 + 12 t ¯ 145 t ¯ 26 t ¯ 3 t ¯ 7 + 12 t ¯ 146 t ¯ 23 t ¯ 5 t ¯ 7 + 24 t ¯ 146 t ¯ 25 t ¯ 3 t ¯ 7 + 4 t ¯ 146 t ¯ 57 t ¯ 2 t ¯ 3 + 12 t ¯ 456 t ¯ 17 t ¯ 2 t ¯ 3 + 12 t ¯ 12 t ¯ 34 t ¯ 56 t ¯ 7 + 12 t ¯ 14 t ¯ 25 t ¯ 36 t ¯ 7 + 12 t ¯ 14 t ¯ 26 t ¯ 57 t ¯ 3 , T 1 − 8 4 = 2 t ¯ 12357 t ¯ 4 t ¯ 6 t ¯ 8 + 12 t ¯ 1235 t ¯ 47 t ¯ 6 t ¯ 8 + 6 t ¯ 1357 t ¯ 24 t ¯ 6 t ¯ 8 + 6 t ¯ 123 t ¯ 457 t ¯ 6 t ¯ 8 + 6 t ¯ 135 t ¯ 247 t ¯ 6 t ¯ 8 + 12 t ¯ 123 t ¯ 45 t ¯ 67 t ¯ 8 + 24 t ¯ 135 t ¯ 24 t ¯ 
67 t ¯ 8 + 6 t ¯ 135 t ¯ 27 t ¯ 48 t ¯ 6 + 3 t ¯ 13 t ¯ 25 t ¯ 47 t ¯ 68 , T 1 − 7 6 / 30 = t ¯ 16 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 5 t ¯ 7 , U 1 − 7 6 / 60 = t ¯ 15 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 6 t ¯ 7 , T 1 − 8 6 / 60 = t ¯ 157 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 6 t ¯ 8 + 3 t ¯ 15 t ¯ 27 t ¯ 3 t ¯ 4 t ¯ 6 t ¯ 8 + 2 t ¯ 15 t ¯ 67 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 8 , U 1 − 8 6 / 90 = t ¯ 147 t ¯ 2 t ¯ 3 t ¯ 5 t ¯ 6 t ¯ 8 + 4 t ¯ 14 t ¯ 27 t ¯ 3 t ¯ 5 t ¯ 6 t ¯ 8 + t ¯ 17 t ¯ 48 t ¯ 2 t ¯ 3 t ¯ 5 t ¯ 6 , T 1 − 9 6 / 60 = t ¯ 1468 t ¯ 2 t ¯ 3 t ¯ 5 t ¯ 7 t ¯ 9 + 6 t ¯ 146 t ¯ 28 t ¯ 3 t ¯ 5 t ¯ 7 t ¯ 9 + 6 t ¯ 146 t ¯ 58 t ¯ 2 t ¯ 3 t ¯ 7 t ¯ 9 + 3 t ¯ 468 t ¯ 15 t ¯ 2 t ¯ 3 t ¯ 7 t ¯ 9 + 2 t ¯ 14 t ¯ 26 t ¯ 38 t ¯ 5 t ¯ 7 t ¯ 9 + 12 t$