Preprint

5th Order Multivariate Edgeworth Expansions for Parametric Estimates


A peer-reviewed article of this preprint also exists.

This version is not peer-reviewed

Submitted: 02 March 2024

Posted: 05 March 2024

Abstract
The only cases where exact distributions of estimates are known are for samples from exponential families, and then only for special functions of the parameters. So statistical inference was traditionally based on the asymptotic normality of estimates. To improve on this we need the Edgeworth expansion for the distribution of the standardized estimate. This is an expansion in $n^{-1/2}$ about the normal distribution, where $n$ is typically the sample size. The first few terms of this expansion were originally given for the special case of a sample mean. In earlier work we derived it for any standard estimate, hugely expanding its application. We call an estimate $\hat{w}$ of an unknown vector $w \in R^p$ a standard estimate if $E\,\hat{w} \rightarrow w$ as $n \rightarrow \infty$, and for $r \geq 1$ the $r$th order cumulants of $\hat{w}$ have magnitude $n^{1-r}$ and can be expanded in $n^{-1}$. Here we give another huge extension: the expansion of the distribution of any smooth function of $\hat{w}$, say $t(\hat{w}) \in R^q$, giving its distribution to $n^{-5/2}$. We do this by showing that $t(\hat{w})$ is a standard estimate of $t(w)$. This provides far more accurate approximations for the distribution of $t(\hat{w})$ than its asymptotic normality. The building blocks of the Edgeworth expansions are the cumulant coefficients of the estimate. These cumulant coefficients are also needed for bias reduction, Bayes estimates and confidence regions. We give chain rules for the cumulant coefficients of $t(\hat{w})$ in terms of those of $\hat{w}$ and the derivatives of $t(w)$, up to those needed for 5th order Edgeworth expansions for $t(\hat{w})$ and its tilted expansion, useful for the tail of the distribution.
Keywords: 
Subject: Computer Science and Mathematics - Probability and Statistics

1. Introduction and summary

Suppose that $\hat{w}$ is a standard or Type A estimate of an unknown $w \in R^p$ with respect to a given parameter $n$. That is, $E\,\hat{w} \to w$ as $n \to \infty$, and for $r \geq 1$ its $r$th-order cumulants have magnitude $n^{1-r}$ and can be expanded as
$$\bar{k}^{1\cdots r} = \kappa(\hat{w}^{i_1}, \ldots, \hat{w}^{i_r}) = \sum_{e=r-1}^{\infty} n^{-e}\,\bar{k}_e^{1\cdots r} \quad \mathrm{for}\ 1 \leq i_1, \ldots, i_r \leq p,$$
where the cumulant coefficients $\bar{k}_e^{1\cdots r} = k_e^{i_1\cdots i_r}$ do not depend on $n$, or at least are bounded as $n \to \infty$. So $\bar{k}_0^1 = w^{i_1}$. For example, (1) holds for $\hat{w}$ a function of a sample mean. We show that if $t(\hat{w})$ is a smooth function of a standard estimate $\hat{w}$, then it is a standard estimate of $t(w)$. This is done for $\hat{w}$ unbiased in Theorem 3.1, and for $\hat{w}$ biased in Theorem 4.1. More generally, we call $\hat{w}$ a Type B estimate if $E\,\hat{w} \to w$ as $n \to \infty$, and for $r \geq 1$,
$$\bar{k}^{1\cdots r} = \sum_{d=2r-2}^{\infty} n^{-d/2}\,\bar{b}_d^{1\cdots r} \quad \mathrm{for}\ 1 \leq i_1, \ldots, i_r \leq p, \qquad \bar{b}_d^{1\cdots r} = b_d^{i_1\cdots i_r}.$$
For example, this type arises when considering one-sided confidence regions. If $t(\hat{w})$ is a smooth function of a Type B estimate, then it is a Type B estimate of $t(w)$. So for a Type A estimate, $\bar{b}_d^{1\cdots r}$ is $\bar{k}_e^{1\cdots r}$ for $d = 2e$, and $0$ for $d$ odd. Here $n$ is typically the sample size, or the minimum sample size if there is more than one sample.
§3 and §4 show that a smooth function of $\hat{w}$, say $t(\hat{w})$, is a standard estimate of $t = t(w)$. They give the cumulant coefficients of $t(\hat{w})$ in terms of those of $\hat{w}$ and the derivatives of $t(w)$: §3 does this for $\hat{w}$ unbiased, and §4 for $\hat{w}$ biased. So they can be thought of as chain rules for obtaining the cumulant coefficients of $t(\hat{w})$ from those of $\hat{w}$. We give the cumulant coefficients needed for Edgeworth expansions of $\hat{t}$ to $O(n^{-5/2})$. Those to $O(n^{-1})$ were given in Withers and Nadarajah (2022). Those to $O(n^{-r/2})$ use the $r$th derivatives of $t(w)$. §5 specialises to univariate $t(w)$, with examples. Theorem 4.1 and Corollary 5.4 correct $\bar{a}_2^{12} = K_2^{j_1 j_2}$ and $a_{22}$ on p67 and p59 of Withers (1982). §2 extends the shorthand bar notation above and gives the foundation theorem.
We now summarise the Edgeworth expansions of $\hat{w}$ for standard and Type B estimates in terms of the cumulant coefficients $\bar{k}_e^{1\cdots r}$ and $\bar{b}_d^{1\cdots r}$, given in Withers and Nadarajah (2010b, 2012a, 2014a):
$$Prob.(Y_{nw} \leq x) = \sum_{r=0}^{\infty} n^{-r/2} P_r(x), \qquad p_{Y_{nw}}(x) = \sum_{r=0}^{\infty} n^{-r/2} p_r(x),$$
where $Y_{nw} = n^{1/2}(\hat{w} - w - b_1 n^{-1/2})$, $(b_1)^{i_1} = b_1^{i_1}$, $P_0(x) = \Phi_V(x)$,
$$P_r(x) = \tilde{B}_r(e(-\partial/\partial x))\,\Phi_V(x) \quad \mathrm{for}\ r \geq 1,$$
$$e_j(t) = \sum_{r=1}^{j+2} \bar{b}_{r+j}^{1\cdots r}\,t_{i_1}\cdots t_{i_r}/r!, \qquad \bar{b}_{r+j}^{1\cdots r} = b_{r+j}^{i_1\cdots i_r},$$
$\Phi_V(x)$ is the multivariate normal distribution with zero mean and covariance $V = (\bar{b}_2^{12})$, and $\tilde{B}_r(e)$ is the complete ordinary Bell polynomial of Comtet (1974):
$$\tilde{B}_1(e) = e_1, \quad \tilde{B}_2(e) = e_2 + e_1^2, \quad \tilde{B}_3(e) = e_3 + 2e_1e_2 + e_1^3, \quad \tilde{B}_4(e) = e_4 + 2e_1e_3 + e_2^2 + 3e_1^2e_2 + e_1^4.$$
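The complete ordinary Bell polynomials satisfy the recurrence $\tilde{B}_0(e) = 1$, $\tilde{B}_r(e) = \sum_{k=1}^{r} e_k \tilde{B}_{r-k}(e)$, which reproduces the closed forms above. A minimal numerical sketch for scalar arguments (the values of $e_1, \ldots, e_4$ below are hypothetical):

```python
def complete_ordinary_bell(e):
    """Complete ordinary Bell polynomials B~_0..B~_r for scalar arguments
    e = [e_1, ..., e_r], via B~_r = sum_{k=1}^r e_k * B~_{r-k}, B~_0 = 1."""
    B = [1.0]
    for r in range(1, len(e) + 1):
        B.append(sum(e[k - 1] * B[r - k] for k in range(1, r + 1)))
    return B

# Check against the closed forms with hypothetical e_1=2, e_2=3, e_3=5, e_4=7:
e1, e2, e3, e4 = 2.0, 3.0, 5.0, 7.0
B = complete_ordinary_bell([e1, e2, e3, e4])
assert B[2] == e2 + e1**2
assert B[3] == e3 + 2*e1*e2 + e1**3
assert B[4] == e4 + 2*e1*e3 + e2**2 + 3*e1**2*e2 + e1**4
```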
This gives the 5th order Edgeworth expansion for the distribution of $Y_{nw}$; that is, it gives (2) to $O(n^{-5/2})$. Note that (5) uses the tensor summation convention of implicitly summing $i_1, \ldots, i_r$ over their range $1, \ldots, p$. For example,
for $\partial_i = \partial/\partial x_i$ and $\bar{\partial}_k = \partial_{i_k}$,
$$P_1(x) = e_1(-\partial/\partial x)\,\Phi_V(x) = \sum_{r=1}^{3} \bar{b}_{r+1}^{1\cdots r}\,(-\bar{\partial}_1)\cdots(-\bar{\partial}_r)\,\Phi_V(x)/r! = \bar{k}_1^1\,(-\bar{\partial}_1)\,\Phi_V(x) + \bar{k}_2^{1\cdots3}\,(-\bar{\partial}_1)(-\bar{\partial}_2)(-\bar{\partial}_3)\,\Phi_V(x)/6$$
for a standard estimate. For a standard estimate, $b_1 = 0$ in (3), and the cumulant coefficients needed for $P_r(x), p_r(x)$ of (2) are $\bar{k}_0^1 = w^{i_1}$,
$$\mathrm{for}\ r=0:\ \bar{k}_1^{12};\quad \mathrm{for}\ r=1:\ \bar{k}_1^1,\ \bar{k}_2^{1\cdots3};\quad \mathrm{for}\ r=2:\ \bar{k}_2^{12},\ \bar{k}_3^{1\cdots4};$$
$$\mathrm{for}\ r=3:\ \bar{k}_2^1,\ \bar{k}_3^{1\cdots3},\ \bar{k}_4^{1\cdots5};\quad \mathrm{for}\ r=4:\ \bar{k}_3^{12},\ \bar{k}_4^{1\cdots4},\ \bar{k}_5^{1\cdots6}.$$
So to obtain the 5th order Edgeworth expansion for the distribution of $n^{1/2}(t(\hat{w}) - t(w))$ for $\hat{w}$ a standard estimate, we just need to replace the coefficients in (6) and (7), where they appear in $P_r(x)$, $r \leq 4$, by those of $t(\hat{w})$ given in §3-§5.
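The derivative operators in the expansions above act on the normal distribution through Hermite-polynomial identities; for instance, in the univariate standard-normal case, $(-d/dx)^3\,\Phi(x) = -(x^2-1)\,\phi(x)$. A finite-difference sketch confirming this identity:

```python
import math

# Standard normal cdf and density.
Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
phi = lambda x: math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def minus_d3_Phi(x, h=2e-3):
    """(-d/dx)^3 Phi(x) by a central finite difference."""
    d3 = (Phi(x + 2*h) - 2*Phi(x + h) + 2*Phi(x - h) - Phi(x - 2*h)) / (2*h**3)
    return -d3

x = 0.7
# Hermite-type identity: (-d/dx)^3 Phi(x) = -(x^2 - 1) phi(x).
assert abs(minus_d3_Phi(x) - (-(x**2 - 1.0) * phi(x))) < 1e-5
```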
(9) of Withers and Nadarajah (2010b) gives $P_r(x)$ for the more general case where $P_0(x)$ is the distribution function of $Y \in R^p$ which depends on $n$ but is asymptotic to $\Phi_V(x)$ and has a Type B expansion. One can choose $P_0(x)$ so that the number of terms in each $P_r(x)$ is greatly reduced: see Withers and Nadarajah (2012d, 2014c, 2015). When $\hat{w}$ is lattice, further terms need to be added: see for example Chapter 5 of Bhattacharya and Rao (1976), Cai (2005), and, for the density of $Y_{nw}$, p211 of Barndorff-Nielsen and Cox (1989), §5 of Daniels (1983), and §6 of Daniels (1987). Corollary 1 of Withers and Nadarajah (2010b) gives the tilted Edgeworth expansion for $t(\hat{w})$, sometimes called the saddlepoint approximation, or the small-sample expansion, as it is a series in $n^{-1}$, not just $n^{-1/2}$. It is very useful in the tails of the distribution, where Edgeworth expansions perform poorly. Cumulant coefficients are also needed for bias reduction, Bayesian inference, confidence regions and power. See Withers (1984) and Withers and Nadarajah (2008, 2010a, 2011b, 2011c, 2012b, 2012f, 2014b, 2014c, 2015) for examples. For a history of Edgeworth expansions, see §7.
In summary, this paper gives high order expansions for the distribution of a vast range of estimates, by obtaining the cumulant coefficients needed for any smooth function of a standard estimate. This provides unprecedented accuracy for these distributions and avoids the need for simulation methods.

2. Foundations

Given $w = (w_1, \ldots, w_p) \in R^p$ and an estimate $\hat{w}$, suppose that $E\,\hat{w} \to w$ as $n \to \infty$ and that for $r \geq 1$, its $r$th-order cumulants have magnitude $n^{1-r}$. Given $i_1, \ldots, i_r$ in $1, 2, \ldots, p$, we write these cumulants in shorthand as
$$\bar{k}^{1\cdots r} = k^{i_1\cdots i_r} = \kappa(\hat{w}^{i_1}, \ldots, \hat{w}^{i_r}) = O(n^{1-r}) \quad \mathrm{as}\ n \to \infty.$$
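The $n^{1-r}$ magnitude in (1) can be checked exactly for a sample mean in the univariate case, where $\kappa_r(\bar{X}) = n^{1-r}\kappa_r(X)$. A sketch with Bernoulli data (the values of $n$ and $p$ below are arbitrary):

```python
from math import comb

def third_cumulant_of_mean(n, p):
    """Exact third cumulant (= third central moment) of the sample mean of
    n iid Bernoulli(p) variables, computed from the binomial pmf."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) * (k / n - p)**3
               for k in range(n + 1))

n, p = 5, 0.3
kappa3_X = p * (1 - p) * (1 - 2 * p)   # third cumulant of one Bernoulli(p)
# kappa_3(X_bar) = n^{1-3} kappa_3(X):
assert abs(third_cumulant_of_mean(n, p) - n**(1 - 3) * kappa3_X) < 1e-12
```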
For example, if $\hat{w} = \bar{X}$ is the mean of a random sample of size $n$, then (1) holds, since $\bar{k}^{1\cdots r} = n^{1-r}\kappa(X^{i_1}, \ldots, X^{i_r})$, where $X^i$ is the $i$th component of $X$. By Theorem 2.1, (1) holds if $\hat{w}$ is a smooth function of one or more sample means. Let $t: R^p \to R^q$ be a smooth function in a neighbourhood of $w$ with $j$th component $t^j = t^j(w)$, $j = 1, \ldots, q$, and finite partial derivatives
$$\bar{t}_{s\cdots r}^k = t_{i_s\cdots i_r}^{j_k} = \partial_{i_s}\cdots\partial_{i_r}\,t^{j_k}(w) \quad \mathrm{for}\ s \leq r,$$
where $\partial_i = \partial/\partial w_i$. We reserve $i$'s as superscripts for the cumulants of $\hat{w}$ and subscripts for partial derivatives of $t(w)$. We reserve $j$'s as superscripts for the components of $t(w)$ and for the joint cumulants of $\hat{t} = t(\hat{w})$. This bar shorthand allows us to shorten expressions by suppressing the $i$'s and $j$'s. We write the cumulants of $\hat{t} = t(\hat{w})$ as
$$\bar{K}^{1\cdots r} = K^{j_1\cdots j_r} = \kappa(\hat{t}^{j_1}, \ldots, \hat{t}^{j_r}), \quad \mathrm{where}\ \hat{t} = t(\hat{w}),\ \hat{t}^j = t^j(\hat{w}).$$
For example,
$$\bar{k}^{12} = k^{i_1 i_2}, \quad \bar{K}^{12} = K^{j_1 j_2}, \quad \mathrm{covar}(\hat{w}) = (\bar{k}^{12}), \quad \mathrm{covar}(\hat{t}) = (\bar{K}^{12})$$
are $O(n^{-1})$. We now show that
$$\bar{K}^{12} = {}_1\bar{K}^{12} + O(n^{-2}), \quad \mathrm{where}\ {}_1\bar{K}^{12} = \bar{t}_1^1\bar{t}_2^2\,\bar{k}^{12}, \ \mathrm{that\ is,}\ {}_1K^{j_1 j_2} = t_{i_1}^{j_1}t_{i_2}^{j_2}k^{i_1 i_2},$$
using the tensor sum convention. The rest of this section, and all proofs, can be skipped on a first reading. Theorem 2.1 gives the cumulants of $\hat{t} = t(\hat{w})$ when $\hat{w}$ is unbiased.
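In matrix form, ${}_1K^{j_1 j_2} = t_{i_1}^{j_1}t_{i_2}^{j_2}k^{i_1 i_2}$ is the familiar delta-method covariance $J\,\mathrm{covar}(\hat{w})\,J'$, with $J$ the Jacobian of $t$ at $w$. A minimal numerical sketch, with a hypothetical $t$ and covariance:

```python
import numpy as np

# First-order coefficient  1K^{j1 j2} = t_{i1}^{j1} t_{i2}^{j2} k^{i1 i2}:
# in matrix form, covar(t(w_hat)) ~ J covar(w_hat) J', J = Jacobian of t at w.
def delta_method_cov(jacobian, cov_w):
    return jacobian @ cov_w @ jacobian.T

# Hypothetical example: t(w) = (w1*w2, w1 + w2) at w = (1, 2).
w = np.array([1.0, 2.0])
J = np.array([[w[1], w[0]],    # gradient of w1*w2
              [1.0,  1.0]])    # gradient of w1 + w2
V = np.array([[0.5, 0.1],
              [0.1, 0.8]])     # covar(w_hat), hypothetical
print(delta_method_cov(J, V))
```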
We shall use the notation $\sum_N f^{j_1 j_2 \cdots}$ to mean summing over all $N$ permutations of $j_1, j_2, \ldots$ giving distinct terms.
Theorem 2.1. 
Suppose that $E\,\hat{w} = w$ and that (1) holds. Then for $r \geq 1$ and $1 \leq j_1, \ldots, j_r \leq q$, $\bar{K}^{1\cdots r}$ of (2) satisfies
$$\bar{K}^{1\cdots r} = \sum_{e=r-1}^{\infty} {}_e\bar{K}^{1\cdots r}, \quad \mathrm{where}\ {}_e\bar{K}^{1\cdots r} = {}_eK^{j_1\cdots j_r} = O(n^{-e})\ \mathrm{as}\ n \to \infty,$$
and the leading ${}_e\bar{K}^{1\cdots r}$ are as follows.
$${}_0\bar{K}^1 = \bar{t}^1,\ \mathrm{that\ is,}\ {}_0K^{j_1} = t^{j_1}; \qquad {}_1\bar{K}^1 = \bar{t}_{12}^1\,\bar{k}^{12}/2,\ \mathrm{that\ is,}\ {}_1K^{j_1} = t_{i_1i_2}^{j_1}k^{i_1i_2}/2 = \sum_{i_1,i_2=1}^{p} t_{i_1i_2}^{j_1}k^{i_1i_2}/2;$$
$${}_2\bar{K}^1 = \bar{t}_{1\cdots3}^1\,\bar{k}^{1\cdots3}/6 + \bar{t}_{1\cdots4}^1\,\bar{k}^{12}\bar{k}^{34}/8,\ \mathrm{that\ is,}\ {}_2K^{j_1} = t_{i_1i_2i_3}^{j_1}k^{i_1i_2i_3}/6 + t_{i_1\cdots i_4}^{j_1}k^{i_1i_2}k^{i_3i_4}/8;$$
$${}_3\bar{K}^1 = \bar{t}_{1\cdots4}^1\,\bar{k}^{1\cdots4}/24 + \bar{t}_{1\cdots5}^1\,\bar{k}^{1\cdots3}\bar{k}^{45}/12 + \bar{t}_{1\cdots6}^1\,\bar{k}^{12}\bar{k}^{34}\bar{k}^{56}/48;$$
$${}_4\bar{K}^1 = \bar{t}_{1\cdots5}^1\,\bar{k}^{1\cdots5}/120 + \bar{t}_{1\cdots6}^1\,(\bar{k}^{1\cdots4}\bar{k}^{56}/48 + \bar{k}^{1\cdots3}\bar{k}^{4\cdots6}/72) + \bar{t}_{1\cdots7}^1\,\bar{k}^{1\cdots3}\bar{k}^{45}\bar{k}^{67}/48 + \bar{t}_{1\cdots8}^1\,\bar{k}^{12}\bar{k}^{34}\bar{k}^{56}\bar{k}^{78}/384;$$
$${}_1\bar{K}^{12} = \bar{t}_1^1\bar{t}_2^2\,\bar{k}^{12}; \qquad {}_2\bar{K}^{12} = T_{1\cdots3}^{12}\,\bar{k}^{1\cdots3}/2 + T_{1\cdots4}^{12}\,\bar{k}^{12}\bar{k}^{34}/2, \ \mathrm{where}$$
$$T_{1\cdots3}^{12} = \sum_2 \bar{t}_{12}^1\bar{t}_3^2, \quad T_{1\cdots4}^{12} = \sum_2 \bar{t}_{1\cdots3}^1\bar{t}_4^2 + \bar{t}_{13}^1\bar{t}_{24}^2, \quad \sum_2 \bar{t}_{ab}^1\bar{t}_{cd}^2 = \bar{t}_{ab}^1\bar{t}_{cd}^2 + \bar{t}_{ab}^2\bar{t}_{cd}^1;$$
$${}_3\bar{K}^{12} = U_{1\cdots4}^{12}\,\bar{k}^{1\cdots4} + T_{1\cdots5}^{12}\,\bar{k}^{1\cdots3}\bar{k}^{45} + T_{1\cdots6}^{12}\,\bar{k}^{12}\bar{k}^{34}\bar{k}^{56}/4, \ \mathrm{where}$$
$$U_{1\cdots4}^{12} = \sum_2 \bar{t}_{1\cdots3}^1\bar{t}_4^2/6 + \bar{t}_{12}^1\bar{t}_{34}^2/4, \quad T_{1\cdots5}^{12} = \sum_2(\bar{t}_{1\cdots4}^1\bar{t}_5^2/6 + \bar{t}_{1245}^1\bar{t}_3^2/4 + \bar{t}_{124}^1\bar{t}_{35}^2/2 + \bar{t}_{145}^1\bar{t}_{23}^2/4),$$
$$T_{1\cdots6}^{12} = \sum_2 \bar{t}_{1\cdots5}^1\bar{t}_6^2/2 + \sum_2 \bar{t}_{1235}^1\bar{t}_{46}^2 + \bar{t}_{1\cdots3}^1\bar{t}_{4\cdots6}^2 + \sum_2 \bar{t}_{135}^1\bar{t}_{246}^2/3;$$
$${}_2\bar{K}^{1\cdots3} = \bar{t}_1^1\bar{t}_2^2\bar{t}_3^3\,\bar{k}^{1\cdots3} + T_{1\cdots4}^{1\cdots3}\,\bar{k}^{12}\bar{k}^{34}, \quad \mathrm{where}\ T_{1\cdots4}^{1\cdots3} = \sum_3 \bar{t}_{13}^1\bar{t}_2^2\bar{t}_4^3;$$
$${}_3\bar{K}^{1\cdots3} = T_{1\cdots4}^{1\cdots3}\,\bar{k}^{1\cdots4}/2 + T_{1\cdots5}^{1\cdots3}\,\bar{k}^{1\cdots3}\bar{k}^{45} + T_{1\cdots6}^{1\cdots3}\,\bar{k}^{12}\bar{k}^{34}\bar{k}^{56}, \ \mathrm{where}$$
$$T_{1\cdots5}^{1\cdots3} = \sum_6 \bar{t}_{124}^1\bar{t}_3^2\bar{t}_5^3/2 + \sum_3 \bar{t}_{145}^1\bar{t}_2^2\bar{t}_3^3/2 + \sum_6 \bar{t}_{12}^1\bar{t}_{34}^2\bar{t}_5^3/2 + \sum_3 \bar{t}_{14}^1\bar{t}_{25}^2\bar{t}_3^3,$$
$$T_{1\cdots6}^{1\cdots3} = \sum_3 \bar{t}_{1235}^1\bar{t}_4^2\bar{t}_6^3/2 + \sum_6 \bar{t}_{1\cdots3}^1\bar{t}_{45}^2\bar{t}_6^3 + \sum_6 \bar{t}_{135}^1\bar{t}_{24}^2\bar{t}_6^3/2 + \bar{t}_{13}^1\bar{t}_{25}^2\bar{t}_{46}^3;$$
$${}_3\bar{K}^{1\cdots4} = \bar{t}_1^1\cdots\bar{t}_4^4\,\bar{k}^{1\cdots4} + T_{1\cdots5}^{1\cdots4}\,\bar{k}^{1\cdots3}\bar{k}^{45} + T_{1\cdots6}^{1\cdots4}\,\bar{k}^{12}\bar{k}^{34}\bar{k}^{56}, \quad \mathrm{where}\ T_{1\cdots5}^{1\cdots4} = \sum_{12} \bar{t}_{14}^1\bar{t}_2^2\bar{t}_3^3\bar{t}_5^4,\ T_{1\cdots6}^{1\cdots4} = \sum_4 \bar{t}_{135}^1\bar{t}_2^2\bar{t}_4^3\bar{t}_6^4 + \sum_{12} \bar{t}_{13}^1\bar{t}_{25}^2\bar{t}_4^3\bar{t}_6^4;$$
$${}_4\bar{K}^{1\cdots4} = U_{1\cdots5}^{1\cdots4}\,\bar{k}^{1\cdots5}/2 + U_{1\cdots6}^{1\cdots4}\,\bar{k}^{1\cdots4}\bar{k}^{56} + V_{1\cdots6}^{1\cdots4}\,\bar{k}^{1\cdots3}\bar{k}^{4\cdots6} + T_{1\cdots7}^{1\cdots4}\,\bar{k}^{1\cdots3}\bar{k}^{45}\bar{k}^{67} + T_{1\cdots8}^{1\cdots4}\,\bar{k}^{12}\bar{k}^{34}\bar{k}^{56}\bar{k}^{78}, \ \mathrm{where}$$
$$U_{1\cdots5}^{1\cdots4} = \sum_4 \bar{t}_{12}^1\bar{t}_3^2\bar{t}_4^3\bar{t}_5^4, \quad U_{1\cdots6}^{1\cdots4} = \sum_{12} \bar{t}_{125}^1\bar{t}_3^2\bar{t}_4^3\bar{t}_6^4/2 + \sum_4 \bar{t}_{156}^1\bar{t}_2^2\bar{t}_3^3\bar{t}_4^4/2 + \sum_{24} \bar{t}_{12}^1\bar{t}_{35}^2\bar{t}_4^3\bar{t}_6^4/2 + \sum_6 \bar{t}_{15}^1\bar{t}_{26}^2\bar{t}_3^3\bar{t}_4^4,$$
$$V_{1\cdots6}^{1\cdots4} = \sum_{12} \bar{t}_{124}^1\bar{t}_3^2\bar{t}_5^3\bar{t}_6^4/2 + \sum_{12} \bar{t}_{12}^1\bar{t}_{34}^2\bar{t}_5^3\bar{t}_6^4/2 + \sum_6 \bar{t}_{14}^1\bar{t}_{25}^2\bar{t}_3^3\bar{t}_6^4,$$
$$T_{1\cdots7}^{1\cdots4} = \sum_{12} \bar{t}_{1246}^1\bar{t}_3^2\bar{t}_5^3\bar{t}_7^4/2 + \sum_{12} \bar{t}_{1456}^1\bar{t}_2^2\bar{t}_3^3\bar{t}_7^4/2 + \sum_{24} \bar{t}_{124}^1\bar{t}_{36}^2\bar{t}_5^3\bar{t}_7^4/2 + \sum_{24} \bar{t}_{124}^1\bar{t}_{56}^2\bar{t}_3^3\bar{t}_7^4/2 + \sum_{24} \bar{t}_{145}^1\bar{t}_{26}^2\bar{t}_3^3\bar{t}_7^4/2 + \sum_{12} \bar{t}_{146}^1\bar{t}_{23}^2\bar{t}_5^3\bar{t}_7^4 + \sum_{24} \bar{t}_{146}^1\bar{t}_{25}^2\bar{t}_3^3\bar{t}_7^4 + \sum_{12} \bar{t}_{146}^1\bar{t}_{57}^2\bar{t}_2^3\bar{t}_3^4/2 + \sum_{12} \bar{t}_{456}^1\bar{t}_{17}^2\bar{t}_2^3\bar{t}_3^4 + \sum_{24} \bar{t}_{12}^1\bar{t}_{34}^2\bar{t}_{56}^3\bar{t}_7^4/2 + \sum_{12} \bar{t}_{14}^1\bar{t}_{25}^2\bar{t}_{36}^3\bar{t}_7^4 + \sum_{12} \bar{t}_{14}^1\bar{t}_{26}^2\bar{t}_{57}^3\bar{t}_3^4,$$
$$T_{1\cdots8}^{1\cdots4} = \sum_4 \bar{t}_{12357}^1\bar{t}_4^2\bar{t}_6^3\bar{t}_8^4/2 + \sum_{24} \bar{t}_{1235}^1\bar{t}_{47}^2\bar{t}_6^3\bar{t}_8^4/2 + \sum_{12} \bar{t}_{1357}^1\bar{t}_{24}^2\bar{t}_6^3\bar{t}_8^4/2 + \sum_{12} \bar{t}_{123}^1\bar{t}_{457}^2\bar{t}_6^3\bar{t}_8^4/2 + \sum_{12} \bar{t}_{135}^1\bar{t}_{247}^2\bar{t}_6^3\bar{t}_8^4/2 + \sum_{24} \bar{t}_{123}^1\bar{t}_{45}^2\bar{t}_{67}^3\bar{t}_8^4/2 + \sum_{24} \bar{t}_{135}^1\bar{t}_{24}^2\bar{t}_{67}^3\bar{t}_8^4 + \sum_{12} \bar{t}_{135}^1\bar{t}_{27}^2\bar{t}_{48}^3\bar{t}_6^4/2 + \sum_3 \bar{t}_{13}^1\bar{t}_{25}^2\bar{t}_{47}^3\bar{t}_{68}^4;$$
$${}_4\bar{K}^{1\cdots5} = \bar{t}_1^1\cdots\bar{t}_5^5\,\bar{k}^{1\cdots5} + T_{1\cdots6}^{1\cdots5}\,\bar{k}^{1\cdots4}\bar{k}^{56} + U_{1\cdots6}^{1\cdots5}\,\bar{k}^{1\cdots3}\bar{k}^{4\cdots6} + T_{1\cdots7}^{1\cdots5}\,\bar{k}^{1\cdots3}\bar{k}^{45}\bar{k}^{67} + T_{1\cdots8}^{1\cdots5}\,\bar{k}^{12}\bar{k}^{34}\bar{k}^{56}\bar{k}^{78}, \ \mathrm{where}$$
$$T_{1\cdots6}^{1\cdots5} = \sum_{20} \bar{t}_{15}^1\bar{t}_2^2\bar{t}_3^3\bar{t}_4^4\bar{t}_6^5, \quad U_{1\cdots6}^{1\cdots5} = \sum_{15} \bar{t}_{14}^1\bar{t}_2^2\bar{t}_3^3\bar{t}_5^4\bar{t}_6^5,$$
$$T_{1\cdots7}^{1\cdots5} = \sum_{30} \bar{t}_{146}^1\bar{t}_2^2\bar{t}_3^3\bar{t}_5^4\bar{t}_7^5 + \sum_{60} \bar{t}_{14}^1\bar{t}_{26}^2\bar{t}_3^3\bar{t}_5^4\bar{t}_7^5 + \sum_{60} \bar{t}_{14}^1\bar{t}_{56}^2\bar{t}_2^3\bar{t}_3^4\bar{t}_7^5,$$
$$T_{1\cdots8}^{1\cdots5} = \sum_5 \bar{t}_{1357}^1\bar{t}_2^2\bar{t}_4^3\bar{t}_6^4\bar{t}_8^5/5 + \sum_{60} \bar{t}_{135}^1\bar{t}_{27}^2\bar{t}_4^3\bar{t}_6^4\bar{t}_8^5 + \sum_{60} \bar{t}_{13}^1\bar{t}_{25}^2\bar{t}_{47}^3\bar{t}_6^4\bar{t}_8^5;$$
$${}_5\bar{K}^{1\cdots6} = \bar{t}_1^1\cdots\bar{t}_6^6\,\bar{k}^{1\cdots6} + T_{1\cdots7}^{1\cdots6}\,\bar{k}^{1\cdots5}\bar{k}^{67} + U_{1\cdots7}^{1\cdots6}\,\bar{k}^{1\cdots4}\bar{k}^{5\cdots7} + T_{1\cdots8}^{1\cdots6}\,\bar{k}^{1\cdots4}\bar{k}^{56}\bar{k}^{78} + U_{1\cdots8}^{1\cdots6}\,\bar{k}^{1\cdots3}\bar{k}^{4\cdots6}\bar{k}^{78} + T_{1\cdots9}^{1\cdots6}\,\bar{k}^{1\cdots3}\bar{k}^{45}\bar{k}^{67}\bar{k}^{89} + T_{1\cdots10}^{1\cdots6}\,\bar{k}^{12}\bar{k}^{34}\bar{k}^{56}\bar{k}^{78}\bar{k}^{9,10}, \ \mathrm{where}$$
$$T_{1\cdots7}^{1\cdots6} = \sum_{30} \bar{t}_{16}^1\bar{t}_2^2\bar{t}_3^3\bar{t}_4^4\bar{t}_5^5\bar{t}_7^6, \quad U_{1\cdots7}^{1\cdots6} = \sum_{60} \bar{t}_{15}^1\bar{t}_2^2\bar{t}_3^3\bar{t}_4^4\bar{t}_6^5\bar{t}_7^6,$$
$$T_{1\cdots8}^{1\cdots6} = \sum_{60} \bar{t}_{157}^1\bar{t}_2^2\bar{t}_3^3\bar{t}_4^4\bar{t}_6^5\bar{t}_8^6 + \sum_{180} \bar{t}_{15}^1\bar{t}_{27}^2\bar{t}_3^3\bar{t}_4^4\bar{t}_6^5\bar{t}_8^6 + \sum_{120} \bar{t}_{15}^1\bar{t}_{67}^2\bar{t}_2^3\bar{t}_3^4\bar{t}_4^5\bar{t}_8^6,$$
$$U_{1\cdots8}^{1\cdots6} = \sum_{90} \bar{t}_{147}^1\bar{t}_2^2\bar{t}_3^3\bar{t}_5^4\bar{t}_6^5\bar{t}_8^6 + \sum_{360} \bar{t}_{14}^1\bar{t}_{27}^2\bar{t}_3^3\bar{t}_5^4\bar{t}_6^5\bar{t}_8^6 + \sum_{90} \bar{t}_{17}^1\bar{t}_{48}^2\bar{t}_2^3\bar{t}_3^4\bar{t}_5^5\bar{t}_6^6,$$
$$T_{1\cdots9}^{1\cdots6} = \sum_{60} \bar{t}_{1468}^1\bar{t}_2^2\bar{t}_3^3\bar{t}_5^4\bar{t}_7^5\bar{t}_9^6 + \sum_{360} \bar{t}_{146}^1\bar{t}_{28}^2\bar{t}_3^3\bar{t}_5^4\bar{t}_7^5\bar{t}_9^6 + \sum_{360} \bar{t}_{146}^1\bar{t}_{58}^2\bar{t}_2^3\bar{t}_3^4\bar{t}_7^5\bar{t}_9^6 + \sum_{180} \bar{t}_{468}^1\bar{t}_{15}^2\bar{t}_2^3\bar{t}_3^4\bar{t}_7^5\bar{t}_9^6 + \sum_{120} \bar{t}_{14}^1\bar{t}_{26}^2\bar{t}_{38}^3\bar{t}_5^4\bar{t}_7^5\bar{t}_9^6 + \sum_{720} \bar{t}_{14}^1\bar{t}_{26}^2\bar{t}_{58}^3\bar{t}_3^4\bar{t}_7^5\bar{t}_9^6 + \sum_{360} \bar{t}_{14}^1\bar{t}_{56}^2\bar{t}_{78}^3\bar{t}_2^4\bar{t}_3^5\bar{t}_9^6,$$
$$T_{1\cdots10}^{1\cdots6} = \sum_6 \bar{t}_{13579}^1\bar{t}_2^2\bar{t}_4^3\bar{t}_6^4\bar{t}_8^5\bar{t}_{10}^6 + \sum_{120} \bar{t}_{1357}^1\bar{t}_{29}^2\bar{t}_4^3\bar{t}_6^4\bar{t}_8^5\bar{t}_{10}^6 + \sum_{90} \bar{t}_{135}^1\bar{t}_{279}^2\bar{t}_4^3\bar{t}_6^4\bar{t}_8^5\bar{t}_{10}^6 + \sum_{360} \bar{t}_{135}^1\bar{t}_{27}^2\bar{t}_{49}^3\bar{t}_6^4\bar{t}_8^5\bar{t}_{10}^6 + \sum_{360} \bar{t}_{135}^1\bar{t}_{27}^2\bar{t}_{89}^3\bar{t}_4^4\bar{t}_6^5\bar{t}_{10}^6 + \sum_{360} \bar{t}_{13}^1\bar{t}_{25}^2\bar{t}_{47}^3\bar{t}_{69}^4\bar{t}_8^5\bar{t}_{10}^6.$$
NOTE 2.1. 
For $\sum_N$, see p48 of James and Mayne (1962). The understanding here is that $\sum_N$ in terms like $T_{1\cdots s}^{1\cdots r}$ only makes sense for $N < r!$ in the context where it occurs. For example, writing $(abc) = \bar{t}_{13}^a\bar{t}_2^b\bar{t}_4^c$ and recalling that $\sum_N$ only permutes superscripts but leaves subscripts alone, we have
$$T_{1\cdots4}^{1\cdots3} = \sum_3(123) = (123) + (213) + (321),$$
with $N = 3$, not $3!$, since
$$\sum_{3!}(123) = (123) + (132) + (213) + (231) + (321) + (312) = \sum_{k=1}^{6} S_k,$$
say, which, when multiplied by $\bar{k}^{12}\bar{k}^{34}$, as in ${}_2\bar{K}^{1\cdots3}$, gives $\sum_{k=1}^{6} S_k$, where for $k = 1, 2, 3$, $S_{2k} = S_{2k-1}$. For example, $T_{1\cdots4}^{1\cdots3}\,\bar{k}^{12}\bar{k}^{34}$ in ${}_2\bar{K}^{1\cdots3}$ above is shorthand for $\sum_3 \bar{t}_{13}^1\bar{t}_2^2\bar{t}_4^3\,\bar{k}^{12}\bar{k}^{34}$. For,
$$S_2 = \bar{t}_4^2\bar{k}^{43}\bar{t}_{31}^1\bar{k}^{12}\bar{t}_2^3 = \bar{t}_1^2\bar{k}^{12}\bar{t}_{13}^1\bar{k}^{34}\bar{t}_4^3 = S_1, \qquad T_{1\cdots4}^{1\cdots3}\,\bar{k}^{12}\bar{k}^{34} = S_1 + S_3 + S_5.$$
PROOF. This follows on replacing $\bar{A}_{1\cdots r}^j = A_{i_1\cdots i_r}^j$ by $\bar{t}_{1\cdots r}^1/r! = t_{i_1\cdots i_r}^{j_1}/r!$ in James and Mayne (1962). □
Similarly one may easily obtain ${}_4\bar{K}^{12}$, ${}_4\bar{K}^{1\cdots3}$ from p51-53 of James and Mayne (1962). The tensor form $2\bar{K}_1^1 = \bar{t}_{12}^1\,\bar{k}_1^{12}$ can be viewed as a molecule or molecular form of 2 atoms, $\bar{t}_{12}^1$ and $\bar{k}_1^{12}$, linked by the double bond 1,2, that is, $i_1, i_2$. ${}_2\bar{K}^1$ is a linear combination of, firstly, $\bar{t}_{1\cdots3}^1\,\bar{k}^{1\cdots3}$, 2 atoms linked by the triple bond 1,2,3, and secondly $\bar{k}^{12}\,\bar{t}_{1\cdots4}^1\,\bar{k}^{34}$. The last expression has the structure of $CO_2$, with 2 identical atoms each linked by a double bond to a central atom. Just as such bonds are depicted in chemistry to illustrate the structure of a molecule, they can be very useful here to illustrate the difference in structure of similar mathematical expressions. $S_1$ of Note 2.1 is a linear molecular form with the 4 single bonds 1,2,3,4 and 4 distinct atoms, $\bar{t}_1^2$, $\bar{t}_1^1$, $\bar{t}_{12}^1$, and $\bar{k}^{12}$. Other expressions have more complex structures. Twice the last term in ${}_2\bar{K}^{12}$ is $T_{1\cdots4}^{12}\,\bar{k}^{12}\bar{k}^{34} = S^{12} + S^{21} + S$, where $S^{12} = \bar{k}^{12}\,\bar{t}_{1\cdots3}^1\,\bar{k}^{34}\,\bar{t}_4^2$ is linear with a double bond, 1 and 2, then 2 single bonds, 3 and 4; $S = \bar{t}_{31}^1\,\bar{k}^{12}\,\bar{t}_{24}^2\,\bar{k}^{43}$ forms a square or rectangle with the 4 single bonds 1,2,4,3 on successive edges of the square. These pictorial forms are a very useful way to distinguish similar expressions in $\sum_N f^{j_1 j_2\cdots}$.
§6 provides the 'more complicated' terms referred to (but not given) on p49 of James and Mayne (1962) when $\hat{w}$ is biased. It can be used for an alternative proof of Theorem 4.1 below. From Theorem 2.1, Edgeworth expansions can be obtained for the distribution and density of the standardized form of $t(\hat{w})$,
$$Y_{nt} = n^{1/2}(\hat{t} - t) = n^{1/2}(t(\hat{w}) - t(w)),$$
of the form
$$Prob.(Y_{nt} \leq x) = \sum_{r=0}^{\infty} P_{rn}(x), \qquad p_{Y_{nt}}(x) = \sum_{r=0}^{\infty} p_{rn}(x),$$
where $P_{rn}(x), p_{rn}(x)$ are $O(n^{-r/2})$. The ${}_e\bar{K}^{1\cdots r}$ of Theorem 2.1 needed for $P_{rn}(x), p_{rn}(x)$ are as follows.
For $P_{0n}(x), p_{0n}(x)$: ${}_0\bar{K}^1 = \bar{t}^1$, ${}_1\bar{K}^{12}$. For $P_{1n}(x), p_{1n}(x)$: ${}_1\bar{K}^1$, ${}_2\bar{K}^{1\cdots3}$. For $P_{2n}(x), p_{2n}(x)$: ${}_2\bar{K}^{12}$, ${}_3\bar{K}^{1\cdots4}$. For $P_{3n}(x), p_{3n}(x)$: ${}_2\bar{K}^1$, ${}_3\bar{K}^{1\cdots3}$, ${}_4\bar{K}^{1\cdots5}$. For $P_{4n}(x), p_{4n}(x)$: ${}_3\bar{K}^{12}$, ${}_4\bar{K}^{1\cdots4}$, ${}_5\bar{K}^{1\cdots6}$.

3. Cumulant Coefficients for $t(\hat{w})$ when $E\,\hat{w} = w$

We now show that for $r \geq 1$, $1 \leq j_1, \ldots, j_r \leq q$, $\bar{K}^{1\cdots r}$ of (3) can be expanded as
$$\bar{K}^{1\cdots r} = K^{j_1\cdots j_r} = \kappa(\hat{t}^{j_1}, \ldots, \hat{t}^{j_r}) = \sum_{e=r-1}^{\infty} n^{-e}\,\bar{K}_e^{1\cdots r}.$$
Replacing $\{\bar{k}^{1\cdots r}\}$ by $\{\bar{K}^{1\cdots r}\}$ in the right-hand side of (4), written RHS(4), gives the Edgeworth expansion for $Y_{nt}$ of (5). For $\pi$ a product of cumulants of type (1), let $(\pi)_e$ be the coefficient of $n^{-e}$ in the expansion of $\pi$. For example, $(\bar{k}^{1\cdots r})_e = \bar{k}_e^{1\cdots r}$,
$$(\bar{k}^{12}\bar{k}^{34})_e = \sum_{a+b=e} \bar{k}_a^{12}\,\bar{k}_b^{34},$$
and similarly for longer products of cumulants.
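Extracting $(\pi)_e$ from a product of series in $n^{-1}$ is a discrete convolution of the factors' coefficient sequences; a minimal sketch with hypothetical scalar coefficients:

```python
def series_coeffs(factors, emax):
    """Coefficients of n^{-e}, e = 0..emax, of a product of series, each
    given as its list of coefficients [a_0, a_1, ...] in powers of n^{-1}."""
    out = [1.0] + [0.0] * emax
    for f in factors:
        new = [0.0] * (emax + 1)
        for a, ca in enumerate(f):          # convolve one factor at a time
            for b in range(emax + 1 - a):
                new[a + b] += ca * out[b]
        out = new
    return out

# (k^{12} k^{34})_e with hypothetical k_1^{12}=2, k_2^{12}=3, k_1^{34}=5, k_2^{34}=7:
k12 = [0.0, 2.0, 3.0]
k34 = [0.0, 5.0, 7.0]
print(series_coeffs([k12, k34], 4))
```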
We now give the elements of the expansion (1) when $E\,\hat{w} = w$.
Theorem 3.1. 
Suppose that $\hat{w}$ is an unbiased estimate of $w$ satisfying (1) and that $t(w)$ has finite derivatives. Then (1) holds with bounded cumulant coefficients
$$\bar{K}_e^{1\cdots r} = K_e^{j_1\cdots j_r} = \sum_{k=r-1}^{e} {}_k\bar{K}_e^{1\cdots r}: \quad \bar{K}_{r-1}^{1\cdots r} = {}_{r-1}\bar{K}_{r-1}^{1\cdots r}, \quad \bar{K}_r^{1\cdots r} = {}_{r-1}\bar{K}_r^{1\cdots r} + {}_r\bar{K}_r^{1\cdots r}, \ \ldots$$
The leading coefficients needed for $P_r(x), p_r(x)$ of (4) for the distribution of $Y_{nt}$ of (5) are given in the $T, U, V$ notation of Theorem 2.1 as follows.
$$\bar{K}_0^1 = \bar{t}^1,\ \mathrm{that\ is,}\ K_0^{j_1} = t^{j_1} = t^{j_1}(w); \qquad {}_0\bar{K}_e^1 = 0 \ \mathrm{for}\ e \geq 1.$$
For $P_0(x)$:
$$\bar{K}_1^{12} = {}_1\bar{K}_1^{12} = \bar{t}_1^1\bar{t}_2^2\,\bar{k}_1^{12},\ \mathrm{that\ is,}\ K_1^{j_1j_2} = t_{i_1}^{j_1}t_{i_2}^{j_2}k_1^{i_1i_2}.$$
For $P_1(x)$:
$$\bar{K}_1^1 = {}_1\bar{K}_1^1 = \bar{t}_{12}^1\bar{k}_1^{12}/2,\ \mathrm{that\ is,}\ K_1^{j_1} = t_{i_1i_2}^{j_1}k_1^{i_1i_2}/2, \qquad \bar{K}_2^{1\cdots3} = {}_2\bar{K}_2^{1\cdots3} = \bar{t}_1^1\bar{t}_2^2\bar{t}_3^3\,\bar{k}_2^{1\cdots3} + T_{1\cdots4}^{1\cdots3}\,\bar{k}_1^{12}\bar{k}_1^{34}.$$
For $P_2(x)$:
$$\bar{K}_2^{12} = {}_1\bar{K}_2^{12} + {}_2\bar{K}_2^{12} \ \mathrm{for}\ {}_1\bar{K}_2^{12} = \bar{t}_1^1\bar{t}_2^2\,\bar{k}_2^{12},\ {}_2\bar{K}_2^{12} = T_{1\cdots3}^{12}\,\bar{k}_2^{1\cdots3}/2 + T_{1\cdots4}^{12}\,\bar{k}_1^{12}\bar{k}_1^{34}/2,$$
$$\bar{K}_3^{1\cdots4} = {}_3\bar{K}_3^{1\cdots4} = (\bar{t}_1^1\cdots\bar{t}_4^4)\,\bar{k}_3^{1\cdots4} + T_{1\cdots5}^{1\cdots4}\,\bar{k}_2^{1\cdots3}\bar{k}_1^{45} + T_{1\cdots6}^{1\cdots4}\,\bar{k}_1^{12}\bar{k}_1^{34}\bar{k}_1^{56}.$$
For $P_3(x)$:
$$\bar{K}_2^1 = {}_1\bar{K}_2^1 + {}_2\bar{K}_2^1 \ \mathrm{for}\ {}_1\bar{K}_2^1 = \bar{t}_{12}^1\bar{k}_2^{12}/2,\ {}_2\bar{K}_2^1 = \bar{t}_{1\cdots3}^1\bar{k}_2^{1\cdots3}/6 + \bar{t}_{1\cdots4}^1\,\bar{k}_1^{12}\bar{k}_1^{34}/8,$$
$$\mathrm{that\ is,}\ K_2^{j_1} = t_{i_1i_2}^{j_1}k_2^{i_1i_2}/2 + t_{i_1i_2i_3}^{j_1}k_2^{i_1i_2i_3}/6 + t_{i_1\cdots i_4}^{j_1}k_1^{i_1i_2}k_1^{i_3i_4}/8,$$
$$\bar{K}_3^{1\cdots3} = \sum_{k=2}^{3}{}_k\bar{K}_3^{1\cdots3} \ \mathrm{for}\ {}_2\bar{K}_3^{1\cdots3} = \bar{t}_1^1\bar{t}_2^2\bar{t}_3^3\,\bar{k}_3^{1\cdots3} + T_{1\cdots4}^{1\cdots3}\,(\bar{k}^{12}\bar{k}^{34})_3,\ {}_3\bar{K}_3^{1\cdots3} = T_{1\cdots4}^{1\cdots3}\,\bar{k}_3^{1\cdots4}/2 + T_{1\cdots5}^{1\cdots3}\,\bar{k}_2^{1\cdots3}\bar{k}_1^{45} + T_{1\cdots6}^{1\cdots3}\,\bar{k}_1^{12}\bar{k}_1^{34}\bar{k}_1^{56},$$
$$\bar{K}_4^{1\cdots5} = {}_4\bar{K}_4^{1\cdots5} = \bar{t}_1^1\cdots\bar{t}_5^5\,\bar{k}_4^{1\cdots5} + T_{1\cdots6}^{1\cdots5}\,\bar{k}_3^{1\cdots4}\bar{k}_1^{56} + U_{1\cdots6}^{1\cdots5}\,\bar{k}_2^{1\cdots3}\bar{k}_2^{4\cdots6} + T_{1\cdots7}^{1\cdots5}\,\bar{k}_2^{1\cdots3}\bar{k}_1^{45}\bar{k}_1^{67} + T_{1\cdots8}^{1\cdots5}\,\bar{k}_1^{12}\bar{k}_1^{34}\bar{k}_1^{56}\bar{k}_1^{78}.$$
For $P_4(x)$:
$$\bar{K}_3^{12} = \sum_{k=1}^{3}{}_k\bar{K}_3^{12} \ \mathrm{for}\ {}_1\bar{K}_3^{12} = \bar{t}_1^1\bar{t}_2^2\,\bar{k}_3^{12},\ {}_2\bar{K}_3^{12} = T_{1\cdots3}^{12}\,\bar{k}_3^{1\cdots3}/2 + T_{1\cdots4}^{12}\,(\bar{k}^{12}\bar{k}^{34})_3/2,\ {}_3\bar{K}_3^{12} = U_{1\cdots4}^{12}\,\bar{k}_3^{1\cdots4} + T_{1\cdots5}^{12}\,\bar{k}_2^{1\cdots3}\bar{k}_1^{45} + T_{1\cdots6}^{12}\,\bar{k}_1^{12}\bar{k}_1^{34}\bar{k}_1^{56}/4,$$
$$\bar{K}_4^{1\cdots4} = \sum_{k=3}^{4}{}_k\bar{K}_4^{1\cdots4} \ \mathrm{for}\ {}_3\bar{K}_4^{1\cdots4} = (\bar{t}_1^1\cdots\bar{t}_4^4)\,\bar{k}_4^{1\cdots4} + T_{1\cdots5}^{1\cdots4}\,(\bar{k}^{1\cdots3}\bar{k}^{45})_4 + T_{1\cdots6}^{1\cdots4}\,(\bar{k}^{12}\bar{k}^{34}\bar{k}^{56})_4,$$
$${}_4\bar{K}_4^{1\cdots4} = U_{1\cdots5}^{1\cdots4}\,\bar{k}_4^{1\cdots5}/2 + U_{1\cdots6}^{1\cdots4}\,\bar{k}_3^{1\cdots4}\bar{k}_1^{56} + V_{1\cdots6}^{1\cdots4}\,\bar{k}_2^{1\cdots3}\bar{k}_2^{4\cdots6} + T_{1\cdots7}^{1\cdots4}\,\bar{k}_2^{1\cdots3}\bar{k}_1^{45}\bar{k}_1^{67} + T_{1\cdots8}^{1\cdots4}\,\bar{k}_1^{12}\bar{k}_1^{34}\bar{k}_1^{56}\bar{k}_1^{78},$$
$$\bar{K}_5^{1\cdots6} = {}_5\bar{K}_5^{1\cdots6} = \bar{t}_1^1\cdots\bar{t}_6^6\,\bar{k}_5^{1\cdots6} + T_{1\cdots7}^{1\cdots6}\,\bar{k}_4^{1\cdots5}\bar{k}_1^{67} + U_{1\cdots7}^{1\cdots6}\,\bar{k}_3^{1\cdots4}\bar{k}_2^{5\cdots7} + T_{1\cdots8}^{1\cdots6}\,\bar{k}_3^{1\cdots4}\bar{k}_1^{56}\bar{k}_1^{78} + U_{1\cdots8}^{1\cdots6}\,\bar{k}_2^{1\cdots3}\bar{k}_2^{4\cdots6}\bar{k}_1^{78} + T_{1\cdots9}^{1\cdots6}\,\bar{k}_2^{1\cdots3}\bar{k}_1^{45}\bar{k}_1^{67}\bar{k}_1^{89} + T_{1\cdots10}^{1\cdots6}\,\bar{k}_1^{12}\bar{k}_1^{34}\bar{k}_1^{56}\bar{k}_1^{78}\bar{k}_1^{9,10}.$$
Also, for $\bar{K}_0^1, \bar{K}_1^1, \bar{K}_2^1$ above, $E\,t^{j_1}(\hat{w}) = \sum_{e=0}^{4} n^{-e}\bar{K}_e^1 + O(n^{-5})$, where
$$\bar{K}_3^1 = \sum_{k=1}^{3}{}_k\bar{K}_3^1 \ \mathrm{for}\ {}_1\bar{K}_3^1 = \bar{t}_{12}^1\,\bar{k}_3^{12}/2,\ {}_2\bar{K}_3^1 = \bar{t}_{1\cdots3}^1\,\bar{k}_3^{1\cdots3}/6 + \bar{t}_{1\cdots4}^1\,\bar{k}_1^{12}\bar{k}_2^{34}/4,\ {}_3\bar{K}_3^1 = \bar{t}_{1\cdots4}^1\,\bar{k}_3^{1\cdots4}/24 + \bar{t}_{1\cdots5}^1\,\bar{k}_2^{1\cdots3}\bar{k}_1^{45}/12 + \bar{t}_{1\cdots6}^1\,\bar{k}_1^{12}\bar{k}_1^{34}\bar{k}_1^{56}/48,$$
$$\bar{K}_4^1 = \sum_{k=1}^{4}{}_k\bar{K}_4^1 \ \mathrm{for}\ {}_1\bar{K}_4^1 = \bar{t}_{12}^1\bar{k}_4^{12}/2,\ {}_2\bar{K}_4^1 = \bar{t}_{1\cdots3}^1\bar{k}_4^{1\cdots3}/6 + \bar{t}_{1\cdots4}^1\,(2\bar{k}_1^{12}\bar{k}_3^{34} + \bar{k}_2^{12}\bar{k}_2^{34})/8,$$
$${}_3\bar{K}_4^1 = \bar{t}_{1\cdots4}^1\,\bar{k}_4^{1\cdots4}/24 + \bar{t}_{1\cdots5}^1\,(\bar{k}_2^{1\cdots3}\bar{k}_2^{45} + \bar{k}_3^{1\cdots3}\bar{k}_1^{45})/12 + \bar{t}_{1\cdots6}^1\,\bar{k}_1^{12}\bar{k}_1^{34}\bar{k}_2^{56}/16,$$
$${}_4\bar{K}_4^1 = \bar{t}_{1\cdots5}^1\,\bar{k}_4^{1\cdots5}/120 + \bar{t}_{1\cdots6}^1\,(\bar{k}_3^{1\cdots4}\bar{k}_1^{56}/48 + \bar{k}_2^{1\cdots3}\bar{k}_2^{4\cdots6}/72) + \bar{t}_{1\cdots7}^1\,\bar{k}_2^{1\cdots3}\bar{k}_1^{45}\bar{k}_1^{67}/48 + \bar{t}_{1\cdots8}^1\,\bar{k}_1^{12}\bar{k}_1^{34}\bar{k}_1^{56}\bar{k}_1^{78}/384.$$
PROOF. Substituting (1) into ${}_k\bar{K}^{1\cdots r}$ of Theorem 2.1 gives
$${}_k\bar{K}^{1\cdots r} = \sum_{e=k}^{\infty} {}_k\bar{K}_e^{1\cdots r}\,n^{-e},$$
say. So by (3), (1) and (3) hold. □
${}_k\bar{K}_e^{1\cdots r} = {}_kK_e^{j_1\cdots j_r}$ is ${}_k\tilde{V}_e^{j_1\cdots j_r}$ of Withers (1982).
NOTE 3.1. (4) made explicit the 3 terms of $T_{1\cdots4}^{1\cdots3}$ needed for $P_1(x)$ of Theorem 3.1. Similarly, $P_2(x)$ needs the 12 terms
$$T_{1\cdots5}^{1\cdots4} = \sum_{12}(1234) = (1234) + (1243) + (2413) + (2431) + (3124) + (3142) + (3241) + (3412) + (4123) + (4132) + (4231) + (4321),$$
where $(abcd) = \bar{t}_{14}^a\bar{t}_2^b\bar{t}_3^c\bar{t}_5^d$. It also needs the $4 + 12$ terms $T_{1\cdots6}^{1\cdots4} = A + B$, where
$$A = \sum_4(1234) = (1234) + (2134) + (3124) + (4123) \quad \mathrm{for}\ (abcd) = \bar{t}_{135}^a\bar{t}_2^b\bar{t}_4^c\bar{t}_6^d,$$
$$B = \sum_{12}(1234) = (1234) + (1423) + (1432) + (1324) + (2134) + (2314) + (2413) + (3124) + (3214) + (3412) + (4213) + (4312) \quad \mathrm{for}\ (abcd) = \bar{t}_{13}^a\bar{t}_{25}^b\bar{t}_4^c\bar{t}_6^d.$$
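The leading bias coefficient $\bar{K}_1^1 = \bar{t}_{12}^1\bar{k}_1^{12}/2$ of Theorem 3.1 can be checked exactly in the simplest case $p = q = 1$, with $t(w) = w^2$ and $\hat{w}$ a sample mean, where $E\,\bar{X}^2 = w^2 + \sigma^2/n$ exactly. A sketch (the numbers below are hypothetical):

```python
# Exact check of the leading bias coefficient K_1 = t''(w) k_1 / 2 for
# p = q = 1, t(w) = w^2, w_hat = X_bar a sample mean with
# Var(X_bar) = sigma2 / n.  Then E t(X_bar) = t(w) + sigma2/n exactly,
# matching K_0 + n^{-1} K_1 with K_0 = t(w), K_1 = t''(w) * sigma2 / 2.
w, sigma2, n = 0.4, 0.24, 50        # e.g. Bernoulli(0.4) has sigma2 = 0.24
t = lambda x: x * x
K0 = t(w)
K1 = 2.0 * sigma2 / 2.0             # t''(w) = 2 for t(w) = w^2
exact = t(w) + sigma2 / n           # E X_bar^2 = (E X_bar)^2 + Var(X_bar)
assert abs((K0 + K1 / n) - exact) < 1e-12
```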

4. Cumulant Coefficients for $t(\hat{w})$ when $E\,\hat{w} \neq w$

We now remove the assumption that $\hat{w}$ is unbiased. We use $\bar{K}_e^{1\cdots r}$ of Theorem 3.1, and the shorthand $\bar{f}_m = \partial_{i_m} f$, where again $\partial_i = \partial/\partial w_i$. There is a key difference from Theorem 3.1: there $\bar{k}_e^{1\cdots r}$ was treated as an algebraic expression, but now we must view each of them as a function of $w$. So we assume that the distribution of $\hat{w}$ is determined by $w$. This is needed to obtain higher order confidence intervals for $t(w)$ when $q = 1$: see Withers (1989). We show that for $Y_{nt}$ of (5), $P_2(x), p_2(x)$ need the first derivatives $\bar{k}_{1\,i}^{12} = \partial_i\bar{k}_1^{12}$ for $\partial_i = \partial/\partial w_i$, $P_3(x), p_3(x)$ need the first derivatives $\bar{k}_{2\,4}^{1\cdots3}$, and so on. The derivatives of $\bar{K}_e^{1\cdots r}$ are given by Leibniz's rule for the derivatives of a product. For example,
$$\bar{K}_{1\,3}^{12} = (\bar{t}_1^1\bar{t}_2^2\,\bar{k}_1^{12})_3 = \Big(\sum_2^{12}\bar{t}_1^1\bar{t}_{23}^2\Big)\bar{k}_1^{12} + \bar{t}_1^1\bar{t}_2^2\,\bar{k}_{1\,3}^{12} \quad \mathrm{for}\ \sum_2^{12}\bar{t}_1^1\bar{t}_{23}^2 = \bar{t}_{13}^1\bar{t}_2^2 + \bar{t}_1^1\bar{t}_{23}^2,$$
$$\bar{K}_{1\,3}^1 = (\bar{t}_{12}^1\,\bar{k}_1^{12})_3/2 = \bar{t}_{1\cdots3}^1\,\bar{k}_1^{12}/2 + \bar{t}_{12}^1\,\bar{k}_{1\,3}^{12}/2,$$
$$\bar{K}_{1\,34}^{12} = \sum_2^{12}\big[(\bar{t}_{14}^1\bar{t}_{23}^2 + \bar{t}_1^1\bar{t}_{2\cdots4}^2)\,\bar{k}_1^{12} + \bar{t}_1^1\bar{t}_{23}^2\,\bar{k}_{1\,4}^{12} + \bar{t}_{14}^1\bar{t}_2^2\,\bar{k}_{1\,3}^{12}\big] + \bar{t}_1^1\bar{t}_2^2\,\bar{k}_{1\,34}^{12},$$
$$(\bar{t}_1^1\bar{t}_2^2\bar{t}_3^3\,\bar{k}_2^{1\cdots3})_4 = (\bar{t}_1^1\bar{t}_2^2\bar{t}_3^3)_4\,\bar{k}_2^{1\cdots3} + \bar{t}_1^1\bar{t}_2^2\bar{t}_3^3\,\bar{k}_{2\,4}^{1\cdots3}, \quad (\bar{t}_1^1\bar{t}_2^2\bar{t}_3^3)_4 = \bar{t}_1^1\bar{t}_2^2\bar{t}_{34}^3 + \bar{t}_1^1\bar{t}_{24}^2\bar{t}_3^3 + \bar{t}_{14}^1\bar{t}_2^2\bar{t}_3^3,$$
$$(T_{1\cdots4}^{1\cdots3}\,\bar{k}_1^{12}\bar{k}_1^{34})_5 = T_{1\cdots4\,5}^{1\cdots3}\,\bar{k}_1^{12}\bar{k}_1^{34} + T_{1\cdots4}^{1\cdots3}\,(\bar{k}_1^{12}\bar{k}_1^{34})_5,$$
$$T_{1\cdots4\,5}^{1\cdots3} = \sum_3(\bar{t}_{135}^1\bar{t}_2^2\bar{t}_4^3 + \bar{t}_{13}^1\bar{t}_{25}^2\bar{t}_4^3 + \bar{t}_{13}^1\bar{t}_2^2\bar{t}_{45}^3), \quad (\bar{k}_1^{12}\bar{k}_1^{34})_5 = \bar{k}_{1\,5}^{12}\bar{k}_1^{34} + \bar{k}_1^{12}\bar{k}_{1\,5}^{34}.$$
Theorem 4.1. 
Let $\hat{w} \in R^p$ be a biased standard estimate of $w$ satisfying (1), where the $\bar{k}_e^{1\cdots r}$ depend on $w$. Then $\hat{t} = t(\hat{w}) \in R^q$ is a standard estimate of $t(w)$:
$$\kappa(\hat{t}^{j_1}, \ldots, \hat{t}^{j_r}) = \sum_{e=r-1}^{\infty} n^{-e}\,\bar{a}_e^{1\cdots r} \quad \mathrm{for}\ r \geq 1,\ 1 \leq j_1, \ldots, j_r \leq q,$$
$$\bar{a}_e^{1\cdots r} = \bar{K}_e^{1\cdots r} + \bar{D}_e^{1\cdots r}$$
for $\bar{K}_e^{1\cdots r}$ of Theorem 3.1; the other $\bar{D}_e^{1\cdots r} = D_e^{j_1\cdots j_r}$ needed for $P_r(x), p_r(x)$ of (4) for $Y_{nt}$ of (5) are as follows.
For $P_0(x)$:
$$\bar{D}_1^{12} = 0, \ \mathrm{so}\ \bar{a}_1^{12} = \bar{K}_1^{12} = K_1^{j_1j_2} = \bar{t}_1^1\bar{t}_2^2\,\bar{k}_1^{12}.$$
For $P_1(x)$:
$$\bar{D}_1^1 = \bar{t}_1^1\bar{k}_1^1, \ \mathrm{so}\ \bar{a}_1^1 = \bar{K}_1^1 + \bar{D}_1^1 = \bar{t}_1^1\bar{k}_1^1 + \bar{t}_{12}^1\bar{k}_1^{12}/2.$$
For $P_2(x)$:
$$\bar{D}_2^{12} = \bar{K}_{1\,3}^{12}\,\bar{k}_1^3 = \big[(\bar{t}_{13}^1\bar{t}_2^2 + \bar{t}_1^1\bar{t}_{23}^2)\,\bar{k}_1^{12} + \bar{t}_1^1\bar{t}_2^2\,\bar{k}_{1\,3}^{12}\big]\,\bar{k}_1^3,$$
$$\bar{a}_2^{12} = \bar{t}_1^1\bar{t}_2^2\,\bar{k}_2^{12} + T_{1\cdots3}^{12}\,\bar{k}_2^{1\cdots3}/2 + T_{1\cdots4}^{12}\,\bar{k}_1^{12}\bar{k}_1^{34}/2 + \big[(\bar{t}_{13}^1\bar{t}_2^2 + \bar{t}_1^1\bar{t}_{23}^2)\,\bar{k}_1^{12} + \bar{t}_1^1\bar{t}_2^2\,\bar{k}_{1\,3}^{12}\big]\,\bar{k}_1^3.$$
For $P_3(x)$:
$$\bar{D}_2^1 = \bar{K}_{1,1}^1 + \bar{K}_{0,2}^1, \quad \bar{K}_{1,1}^1 = \bar{K}_{1\,3}^1\,\bar{k}_1^3, \quad \bar{K}_{0,2}^1 = \bar{t}_1^1\bar{k}_2^1 + \bar{t}_{12}^1\bar{k}_1^1\bar{k}_1^2/2,$$
$$\bar{a}_2^1 = \bar{t}_1^1\bar{k}_2^1 + \bar{t}_{12}^1(\bar{k}_2^{12} + \bar{k}_1^1\bar{k}_1^2 + \bar{k}_{1\,3}^{12}\bar{k}_1^3)/2 + \bar{t}_{1\cdots3}^1(\bar{k}_2^{1\cdots3}/6 + \bar{k}_1^1\bar{k}_1^{23}/2) + \bar{t}_{1\cdots4}^1\,\bar{k}_1^{12}\bar{k}_1^{34}/8,$$
$$\bar{D}_3^{1\cdots3} = \bar{K}_{2\,4}^{1\cdots3}\,\bar{k}_1^4 = (\bar{t}_1^1\bar{t}_2^2\bar{t}_3^3\,\bar{k}_2^{1\cdots3})_4\,\bar{k}_1^4 + (T_{1\cdots4}^{1\cdots3}\,\bar{k}_1^{12}\bar{k}_1^{34})_5\,\bar{k}_1^5.$$
For $P_4(x)$:
$$\bar{D}_3^{12} = \bar{K}_{2,1}^{12} + \bar{K}_{1,2}^{12}, \quad \bar{K}_{2,1}^{12} = \bar{K}_{2\,3}^{12}\,\bar{k}_1^3, \quad \bar{K}_{1,2}^{12} = \bar{K}_{1\,3}^{12}\,\bar{k}_2^3 + \bar{K}_{1\,34}^{12}\,\bar{k}_1^3\bar{k}_1^4/2,$$
$$\bar{D}_4^{1\cdots4} = \bar{K}_{3\,5}^{1\cdots4}\,\bar{k}_1^5, \qquad \bar{D}_5^{1\cdots6} = 0.$$
For $E\,t^{j_1}(\hat{w})$ to $O(n^{-5})$ we also need $\bar{D}_j = \bar{D}_j^1$, $j = 3, 4$, given by
$$\bar{D}_3 = \bar{K}_{2,1} + \bar{K}_{1,2} + \bar{K}_{0,3}, \quad \bar{K}_{2,1} = \bar{K}_{2\,1}\,\bar{k}_1^1,$$
$$\bar{K}_{2\,1} = (\bar{t}_{1\cdots3}\,\bar{k}_2^{23} + \bar{t}_{23}\,\bar{k}_{2\,1}^{23})/2 + (\bar{t}_{1\cdots4}\,\bar{k}_2^{2\cdots4} + \bar{t}_{2\cdots4}\,\bar{k}_{2\,1}^{2\cdots4})/6 + \bar{t}_{1\cdots5}\,\bar{k}_1^{23}\bar{k}_1^{45}/8 + \bar{t}_{2\cdots5}\,\bar{k}_1^{23}\bar{k}_{1\,1}^{45}/4,$$
$$\bar{K}_{1,2} = \bar{K}_{1\,1}\,\bar{k}_2^1 + \bar{K}_{1\,12}\,\bar{k}_1^1\bar{k}_1^2/2, \quad 2\bar{K}_{1\,1} = \bar{t}_{1\cdots3}\,\bar{k}_1^{23} + \bar{t}_{23}\,\bar{k}_{1\,1}^{23}, \quad 2\bar{K}_{1\,12} = \bar{t}_{1\cdots4}\,\bar{k}_1^{34} + \sum_2^{12}\bar{t}_{2\cdots4}\,\bar{k}_{1\,1}^{34} + \bar{t}_{34}\,\bar{k}_{1\,12}^{34},$$
$$\bar{K}_{0,3} = \bar{t}_1\,\bar{k}_3^1 + \bar{t}_{12}\,\bar{k}_1^1\bar{k}_2^2 + \bar{t}_{1\cdots3}\,\bar{k}_1^1\bar{k}_1^2\bar{k}_1^3/6,$$
$$\bar{D}_4 = \bar{K}_{3,1} + \bar{K}_{2,2} + \bar{K}_{1,3} + \bar{K}_{0,4}, \quad \bar{K}_{3,1} = \bar{K}_{3\,1}\,\bar{k}_1^1,$$
$$\bar{K}_{3\,1} = \bar{t}_{1\cdots3}\,\bar{k}_3^{23}/2 + \bar{t}_{23}\,\bar{k}_{3\,1}^{23}/2 + \bar{t}_{1\cdots4}\,\bar{k}_3^{2\cdots4}/6 + \bar{t}_{2\cdots4}\,\bar{k}_{3\,1}^{2\cdots4}/6 + \bar{t}_{1\cdots5}\,\bar{k}_1^{23}\bar{k}_2^{45}/4 + \bar{t}_{2\cdots5}\,\bar{k}_1^{23}\bar{k}_{2\,1}^{45}/2 + (\bar{t}_{1\cdots5}\,\bar{k}_3^{2\cdots5} + \bar{t}_{2\cdots5}\,\bar{k}_{3\,1}^{2\cdots5})/24 + (\bar{t}_{1\cdots6}\,\bar{k}_2^{2\cdots4}\bar{k}_1^{56} + \bar{t}_{2\cdots6}\,\bar{k}_{2\,1}^{2\cdots4}\bar{k}_1^{56} + \bar{t}_{2\cdots6}\,\bar{k}_2^{2\cdots4}\bar{k}_{1\,1}^{56})/12 + \bar{k}_1^{23}\bar{k}_1^{45}(\bar{t}_{1\cdots7}\,\bar{k}_1^{67}/48 + \bar{t}_{2\cdots7}\,\bar{k}_{1\,1}^{67}/16),$$
$$\bar{K}_{2,2} = \bar{K}_{2\,1}\,\bar{k}_2^1 + \bar{K}_{2\,12}\,\bar{k}_1^1\bar{k}_1^2/2,$$
$$\bar{K}_{2\,1} = \bar{t}_{1\cdots3}\,\bar{k}_2^{23}/2 + \bar{t}_{23}\,\bar{k}_{2\,1}^{23}/2 + \bar{t}_{1\cdots4}\,\bar{k}_2^{2\cdots4}/6 + \bar{t}_{2\cdots4}\,\bar{k}_{2\,1}^{2\cdots4}/6 + \bar{t}_{1\cdots5}\,\bar{k}_1^{23}\bar{k}_1^{45}/8 + \bar{t}_{2\cdots5}\,\bar{k}_1^{23}\bar{k}_{1\,1}^{45}/4,$$
$$2\bar{K}_{2\,12} = \bar{t}_{1\cdots4}\,\bar{k}_2^{34} + \sum_2^{12}\bar{t}_{2\cdots4}\,\bar{k}_{2\,1}^{34} + \bar{t}_{34}\,\bar{k}_{2\,12}^{34} + (\bar{t}_{1\cdots5}\,\bar{k}_2^{3\cdots5} + \sum_2^{12}\bar{t}_{13\cdots5}\,\bar{k}_{2\,2}^{3\cdots5} + \bar{t}_{3\cdots5}\,\bar{k}_{2\,12}^{3\cdots5})/3 + \bar{t}_{1\cdots6}\,\bar{k}_1^{34}\bar{k}_1^{56}/4 + \sum_2^{12}\bar{t}_{13\cdots6}\,\bar{k}_1^{34}\bar{k}_{1\,2}^{56}/2 + \bar{t}_{3\cdots6}\,(\bar{k}_{1\,2}^{34}\bar{k}_{1\,1}^{56} + \bar{k}_1^{34}\bar{k}_{1\,12}^{56})/2,$$
$$\bar{K}_{1,3} = \bar{K}_{1\,1}\,\bar{k}_3^1 + \bar{K}_{1\,12}\,\bar{k}_1^1\bar{k}_2^2 + \bar{K}_{1\,123}\,\bar{k}_1^1\bar{k}_1^2\bar{k}_1^3/6,$$
$$2\bar{K}_{1\,1} = \bar{t}_{1\cdots3}\,\bar{k}_1^{23} + \bar{t}_{23}\,\bar{k}_{1\,1}^{23}, \quad 2\bar{K}_{1\,12} = \bar{t}_{1\cdots4}\,\bar{k}_1^{34} + \sum_2^{12}\bar{t}_{134}\,\bar{k}_{1\,2}^{34} + \bar{t}_{34}\,\bar{k}_{1\,12}^{34},$$
$$2\bar{K}_{1\,123} = \bar{t}_{1\cdots5}\,\bar{k}_1^{45} + \sum_3^{1\cdots3}(\bar{t}_{1345}\,\bar{k}_{1\,2}^{45} + \bar{t}_{3\cdots5}\,\bar{k}_{1\,12}^{45}) + \bar{t}_{45}\,\bar{k}_{1\,1\cdots3}^{45},$$
$$\bar{K}_{0,4} = \bar{t}_1\,\bar{k}_4^1 + \bar{t}_{12}(\bar{k}_1^1\bar{k}_3^2 + \bar{k}_2^1\bar{k}_2^2/2) + \bar{t}_{1\cdots3}\,\bar{k}_1^1\bar{k}_1^2\bar{k}_2^3/2 + \bar{t}_{1\cdots4}\,\bar{k}_1^1\bar{k}_1^2\bar{k}_1^3\bar{k}_1^4/24.$$
PROOF. $\bar{K}^{1\cdots r}(w) = \bar{K}^{1\cdots r}$ and $\bar{K}_e^{1\cdots r}(w) = \bar{K}_e^{1\cdots r}$ are functions of $w$. By (1),
$$\bar{K}^{1\cdots r}(w_n) = \sum_{e=r-1}^{\infty} n^{-e}\,\bar{K}_e^{1\cdots r}(w_n) \quad \mathrm{for}\ w_n = E\,\hat{w} = w + d_n,$$
where by (1), $d_n$ has $i_1$th component $\bar{d}_n^1 = d_n^{i_1} = \sum_{e=1}^{\infty} n^{-e}\,\bar{k}_e^1$. Consider the Taylor series expansion
$$\bar{K}_k^{1\cdots r}(w + d_n) = \bar{K}_k^{1\cdots r} + \bar{K}_{k\,1}^{1\cdots r}\,\bar{d}_n^1 + \bar{K}_{k\,12}^{1\cdots r}\,\bar{d}_n^1\bar{d}_n^2/2! + \cdots = \sum_{e=0}^{\infty} \bar{K}_{k,e}^{1\cdots r}\,n^{-e},$$
say. Substituting into (1) gives (1) with
$$\bar{a}_c^{1\cdots r} = \sum_{k+e=c} \bar{K}_{k,e}^{1\cdots r} = \sum_{e=0}^{c-r+1} \bar{K}_{c-e,e}^{1\cdots r}.$$
Also $\bar{K}_{k,0}^{1\cdots r} = \bar{K}_k^{1\cdots r}$, so that (2) holds with
$$\bar{D}_c^{1\cdots r} = \sum_{e=1}^{c-r+1} \bar{K}_{c-e,e}^{1\cdots r}: \quad \bar{D}_r^{1\cdots r} = \bar{K}_{r-1,1}^{1\cdots r}, \quad \bar{D}_{r+1}^{1\cdots r} = \sum_{e=1}^{2} \bar{K}_{r+1-e,e}^{1\cdots r}, \ \ldots \quad \Box$$
An alternative proof can be obtained using §6. This corrects $C_e = \bar{a}_e^1$ given in Appendix B of Withers (1987). Withers (1982) uses $K_k^{j_1\cdots j_r} = \bar{K}_k^{1\cdots r}$ for $\bar{a}_k^{1\cdots r}$, but the expression for $K_2^{ab}$ on p67, lines 2-3, omitted the term $A_i^a A_j^b k_{1,k}^{ij} k_1^k$. That is, the last term in $\bar{a}_2^{12}$ of Theorem 4.1 was omitted. Similarly, the results on p67 for $r = 3, 4$ are only true when $\hat{w}$ is unbiased or the cumulant coefficients of $\hat{w}$ do not depend on $w$, as they omit the derivatives of $\bar{k}_e^{1\cdots r}$. The examples given there are not affected, as $\hat{w}$ is unbiased. Nor are the nonparametric examples of Withers (1983, 1988) affected, as the empirical distribution is an unbiased estimate of a distribution. Likewise $\hat{w}$ is unbiased for the examples of Withers (1989). M-estimates are biased, but the results of Withers and Nadarajah (2010a) are not affected, as only $K_1^{j_1 j_2}$, $K_1^{j_1}$, $K_2^{j_1 j_2 j_3}$ are given. No changes are needed for Withers and Nadarajah (2010b, 2011a, 2011b, 2012a, 2012b, 2014b). Applications to non-parametric and parametric confidence intervals were given in Withers (1983, 1988, 1989), and to ellipsoidal confidence regions and power in Withers and Nadarajah (2012a) and Kakizawa (2015). For nonparametric problems, $F(x)$ and its empirical distribution $F_n(x)$ play the roles of $w$ and $\hat{w}$; since the latter is unbiased, no corrections are needed. For $q = 1$, the $a_r^i = \bar{a}_i^{1\cdots r}$ were given for parametric and non-parametric problems in Withers (1982, 1983, 1988), and expressions for the classic Edgeworth expansion of $Y_{nw}$ in terms of $a_r^i$ were given in Withers (1984). For $q \geq 1$, the $\bar{a}_i^{1\cdots r}$ for parametric problems were given in Withers (1982), and can be obtained easily from the $a_r^i$ given for $q = 1$ for one-sample and multi-sample non-parametric problems in Withers (1983, 1988), and for semi-parametric problems in Withers and Nadarajah (2010a, 2011a, 2014b).
All these results can be extended to samples with independent non-identically distributed residuals, as done in Withers and Nadarajah (2010 §6, 2011b, 2012b). The extension to matrix  w ^ just needs a slight change in notation. For example in Withers and Nadarajah (2011b, 2011c, 2012b), w ^ can be viewed as a function of the mean of n independent complex random matrices, although n is actually the number of transmitters or receivers. Extensions to dependent random variables are also possible: see Withers and Nadarajah (2012c).

5. Cumulant Coefficients for Univariate t ( w ^ )

Now suppose that q = 1 . Let ${}_kK_{re}$ be the coefficient of $n^{-e}$ in ${}_kK^{1\cdots r}$. We write $\bar K_e^{1\cdots r}$ as $K_{re}$. For $E\,\hat w = w$, (1), (3) and (4) become
$$K_r = \kappa_r(\hat t) = \sum_{e=r-1}^{\infty} n^{-e} K_{re},\ r\ge 1;\quad K_{re} = \sum_{k=r-1}^{e} {}_kK_{re}:\quad K_{r,r-1} = {}_{r-1}K_{r,r-1},\ \ K_{rr} = \sum_{k=r-1}^{r} {}_kK_{rr},\ \ K_{r,r+1} = \sum_{k=r-1}^{r+1} {}_kK_{r,r+1}.$$
For $E\,\hat w \neq w$, (1), (2) and (3) become
$$K_r = \kappa_r(\hat t) = \sum_{e=r-1}^{\infty} n^{-e} a_{re},\ r\ge 1;\quad a_{re} = K_{re} + D_{re},\ \ D_{rc} = \sum_{e=1}^{c-r+1} K_{r,c-e,e}:\quad D_{r,r-1} = 0,\ \ D_{rr} = K_{r,r-1,1},\ \ D_{r,r+1} = \sum_{e=1}^{2} K_{r,r+1-e,e}.$$
Here we give the cumulant coefficients K r e needed for the Edgeworth expansion of Y n t of (5) for P r ( x ) ,   r ≤ 4 . We do this when E   w ^ = w in Corollary 5.1 and when E   w ^ ≠ w in Corollaries 5.3 and 5.4. To show the expressions we need more clearly in molecular form, we introduce the following ions (expressions with unpaired suffixes):
[Displayed definitions: image Preprints 100389 i004 in the published preprint.]
Where a suffix does not have a match, summation does not occur. For example, the RHS of s ¯ 1 = k ¯ 1 12 t ¯ 2 sums over i 2 but not i 1 . Let v , c 01 , c 02 , c 21 , c 22 , c 23 ,   c 11 , …, c 1 , 10 , c 31 , …, c 3 , 11 be the 27 functions of ω given on pp. 4234–4235 of Withers (1989), labelled there as I 2 2 0 , I 1 1 0 , …, I 301 222 000 . By Corollaries 5.1 and 5.3 below, those needed for P r ( x ) ,   r ≤ 2 , of (4), that is, for the Edgeworth expansion of Y n t of (5) to O ( n − 3 / 2 ) , are the following molecules.
For P 0 ( x ) :   v = K 21 = t ¯ 1 k ¯ 1 12 t ¯ 2 . For P 1 ( x ) ,   K 11 :   c 02 = t ¯ 12 k ¯ 1 12 ;   D 11 :   c 01 = t ¯ 1 k ¯ 1 1 ; f o r K 32 :   c 21 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 2 1 3 = t ¯ 1 y ¯ 1 ,   c 23 = s ¯ 1 t ¯ 12 s ¯ 2 = s ¯ 1 u ¯ 1 . For P 2 ( x ) , K 22 :   c 11 = t ¯ 1 k ¯ 2 12 t ¯ 2 = t ¯ 1 S ¯ 1 ,   c 15 = t ¯ 1 k ¯ 2 1 3 t ¯ 23 ,   c 19 = t ¯ 12 X ¯ 12 , c 1 , 10 = s ¯ 1 t ¯ 1 3 k ¯ 1 23 = z ¯ 23 k ¯ 1 23 ; f o r D 22 :   c 12 = k ¯ 1 1 k ¯ 1 1 23 t ¯ 2 t ¯ 3 ,   c 16 = k ¯ 1 1 u ¯ 1 = k ¯ 1 1 t ¯ . 12 k ¯ 1 23 t ¯ 3 , f o r K 43 :   c 31 = t ¯ 1 t ¯ 2 t ¯ 3 t ¯ 4 k ¯ 3 1 4 ,   c 36 = y ¯ 3 u ¯ 3 ,   c 3 , 10 = u ¯ 1 k ¯ 1 12 u ¯ 2 ,   c 3 , 11 = s ¯ 1 s ¯ 2 s ¯ 3 t ¯ 1 3 .
Each molecule can be written as a shape; for example, c 19 is a rectangle. We now give the molecules L j , L i j needed for the Edgeworth expansion to O ( n − 5 / 2 ) , that is, for P r ( x ) for r = 3 , 4 . Note that P r ( x ) needs the derivatives of t ( w ) up to order r + 1 .
For P 3 ( x ) ,   K 12 : L 1 = t ¯ 12 k ¯ 2 12 ,   L 2 = t ¯ 1 3 k ¯ 2 1 3 ,   L 3 = t ¯ 1 4   k ¯ 1 12 k ¯ 1 34 ; f o r K 33 : L 4 = t ¯ 1 t ¯ 2 t ¯ 3   k ¯ 3 1 3 ,   L 5 = u ¯ 1 S ¯ 1 ,   L 6 = t ¯ 13 t ¯ 2 t ¯ 4   k ¯ 3 1 4 ,   L 71 = z ¯ 12   k ¯ 2 1 3 t ¯ 3 , L 72 = y ¯ 1 t ¯ 145   k ¯ 1 45 ,   L 73 = t ¯ 12 k ¯ 2 1 3 u ¯ 3 ,   L 74 = t ¯ 14 k ¯ 1 45 t ¯ 52 k ¯ 2 1 3 t ¯ 3 , L 81 = k ¯ 1 12 t ¯ 1 4 s ¯ 3 s ¯ 4 ,   L 82 = k ¯ 1 12 t ¯ 1 3 v ¯ 3 ,   L 83 = X ¯ 34 z ¯ 34 , L 84 = X ¯ 14 t ¯ 45 k ¯ 1 56 t ¯ 61 , a s e x a g o n , f o r K 54 : L 9 = t ¯ 1 t ¯ 5   k ¯ 4 1 5 ,   L 10 = u ¯ 1 t ¯ 2 t ¯ 3 t ¯ 4   k ¯ 3 1 4 ,   L 11 = y ¯ 1 Y ¯ 1 = y ¯ 1 t ¯ 12 y ¯ 2 , L 121 = y ¯ 1 t ¯ 1 3 s ¯ 2 s ¯ 3 ,   L 122 = t ¯ 1 k ¯ 2 1 3 u ¯ 2 u ¯ 3 ,   L 123 = Y ¯ 2 v ¯ 2 , L 131 = s ¯ 1 s ¯ 4   t ¯ 1 4 ,   L 132 = s ¯ 1 s ¯ 2 t ¯ 1 3 v ¯ 3 ,   L 133 = v ¯ 1 t ¯ 12 v ¯ 2 = v ¯ 1 x ¯ 1 . For P 4 ( x ) , K 23 :   L 14 = t ¯ 1 t ¯ 2   k ¯ 3 12 ,   L 15 = t ¯ 12 k ¯ 2 1 3 t ¯ 3 ,   L 161 = S ¯ 1 t ¯ 1 3 k ¯ 1 23 , L 162 = z ¯ 12 k ¯ 2 12 ,   L 171 = X ¯ 24 t ¯ 24 .   L 181 = t ¯ 1 3 k ¯ 3 1 4 t ¯ 4 ,   L 182 = t ¯ 12 k ¯ 3 1 4 t ¯ 34 ,   L 191 = k ¯ 2 1 3 t ¯ 1 4 s ¯ 4 ,
L 192 = k ¯ 1 12 t ¯ 1 4 k ¯ 2 3 5 t ¯ 5 ,   L 193 = t ¯ 1 3 k ¯ 2 2 4 t ¯ 45 k ¯ 1 51 ,   L 194 = t ¯ 12 k ¯ 2 1 3 t ¯ 3 5 k ¯ 1 45 , L 201 = k ¯ 1 12 k ¯ 1 34 t ¯ 1 5 s ¯ 5 ,   L 202 = k ¯ 1 12 t ¯ 1 4 X ¯ 34 ,   L 203 = k ¯ 1 12 t ¯ 1 3 k ¯ 1 34 t ¯ 4 6 k ¯ 1 56 , L 204 = t ¯ 135   ( k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 )   t ¯ 246 ; f o r K 44 : L 21 = t ¯ t ¯ 4   k ¯ 4 1 4 ,   L 221 = Y ¯ 2 k ¯ 2 23 t ¯ 3 ,   L 222 = u ¯ 1 t ¯ 2 t ¯ 3   k ¯ 3 1 3 , L 231 = s ¯ 1 z ¯ 12 S ¯ 2 .   L 241 = x ¯ 2 S ¯ 2 ,   L 242 = u ¯ 1 k ¯ 2 12 u ¯ 2 . L 25 = t ¯ 12 k ¯ 4 1 5 t ¯ 3 t ¯ 4 t ¯ 5 ,   L 261 = t ¯ 1 t ¯ 2 k ¯ 3 1 4 t ¯ 3 5 s ¯ 5 ,   L 262 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 3 1 4 t ¯ 4 6 k ¯ 1 56 , L 263 = t ¯ 12 k ¯ 3 1 4 t ¯ 3 u ¯ 4 ,   L 264 = t ¯ 1 t ¯ 2 k ¯ 3 1 4 ( t ¯ 35 t ¯ 46 ) k ¯ 1 56 , L 271 = t ¯ 1 k ¯ 2 1 3 t ¯ 2 4 y ¯ 4 ,   L 272 = t ¯ 12 k ¯ 2 1 3 Y ¯ 3 ,   L 273 = t ¯ 1 k ¯ 2 1 3   ( t ¯ 24 t ¯ 35 )   k ¯ 3 4 6 t ¯ 6 , L 281 = t ¯ 1 k ¯ 2 1 3 t ¯ 2 5 s ¯ 4 s ¯ 5 ,   L 282 = s ¯ 1 k ¯ 1 23 t ¯ 1 4 y ¯ 4 ,   L 283 = u ¯ 1 k ¯ 2 1 3 z ¯ 23 , L 284 = t ¯ 1 k ¯ 2 1 3 t ¯ 2 4 v ¯ 4 ,   L 285 = Y ¯ 2 k ¯ 1 23 t ¯ 3 5 k ¯ 1 45 ,   L 286 = t ¯ 12 k ¯ 2 1 3 z ¯ 34 s ¯ 4 , L 287 = t ¯ 1 k ¯ 2 1 3 t ¯ 24 k ¯ 1 45 z ¯ 53 ,   L 288 = y ¯ 1 t ¯ 1 3 X ¯ 23 , L 289 = t ¯ 12 k ¯ 2 1 3 x ¯ 3 ,   L 2810 = u ¯ 1 k ¯ 2 1 3   ( t ¯ 24 t ¯ 35 )   k ¯ 1 45 , L 2811 = t ¯ 1 k ¯ 2 1 3   ( t ¯ 24 k ¯ 1 45   t ¯ 36 k ¯ 1 67 )   t ¯ 57 ,   L 291 = k ¯ 1 12 t ¯ 1 5 s ¯ 3 s ¯ 4 s ¯ 5 ,   L 292 = k ¯ 1 12 t ¯ 1 4 v ¯ 3 s ¯ 4 , L 293 = X ¯ 34 t ¯ 3 6 s ¯ 5 s ¯ 6 ,   L 294 = k ¯ 1 12 t ¯ 1 3 k ¯ 1 34 t ¯ 4 6 s ¯ 5 s ¯ 6 ,   L 295 = k ¯ 1 12 ( z ¯ 13 z ¯ 24 ) k ¯ 1 34 , L 296 = k ¯ 1 12 t ¯ 1 3 k ¯ 1 34 x ¯ 4 ,   L 297 = k ¯ 1 12   t ¯ 135 v ¯ 5 t ¯ 24   k ¯ 1 34 , L 298 = X ¯ 14 t ¯ 45 k ¯ 1 56 z ¯ 61 ,   L 299 = X ¯ 14 t ¯ 45 X ¯ 58 ; f o r K 65 :   L 30 = t ¯ 1 t ¯ 6   k 5 1 6 ,   L 31 = u ¯ 1 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 5 k ¯ 4 1 5 ,   L 32 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 4 1 5 t ¯ 45 , L 331 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 3 
1 4 z ¯ 45 s ¯ 5 ,   L 332 = u ¯ 1 u ¯ 2 k ¯ 3 1 4 t ¯ 3 t ¯ 4 ,   L 333 = t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 3 1 4 x ¯ 4 , L 341 = t ¯ 1 3 y ¯ 1 s ¯ 2 y ¯ 3 ,   L 342 = Y ¯ 2 k ¯ 2 2 4 t ¯ 3 u ¯ 4 ,   L 343 = Y ¯ 1 k ¯ 1 12 Y ¯ 2 , L 351 = y ¯ 3 t ¯ 3 6 s ¯ 4 s ¯ 5 s ¯ 6 ,   L 352 = t ¯ 1 u ¯ 2 k ¯ 2 1 3 z ¯ 34 s ¯ 4 , L 353 = y ¯ 1 t ¯ 12 v ¯ 2 ,   L 354 = Y ¯ 4 k ¯ 1 45 t ¯ 5 7 s ¯ 6 s ¯ 7 ,   L 355 = u ¯ 1 u ¯ 2 u ¯ 3 k ¯ 2 1 3 , L 356 = t ¯ 1 u ¯ 2 k ¯ 2 1 3 x ¯ 3 ,   L 357 = t ¯ 3 5 y ¯ 3 v ¯ 4 s ¯ 5 .   L 361 = s ¯ 1 s ¯ 5   t ¯ 1 5 , L 362 = s ¯ 1 s ¯ 2 s ¯ 3 t ¯ 1 4 v ¯ 4 ,   L 363 = s ¯ 1 z ¯ 13 k ¯ 1 34 t ¯ 4 6 s ¯ 5 s ¯ 6 ,   L 364 = v ¯ 1 z ¯ 12 v ¯ 2 , L 365 = s ¯ 1 z ¯ 13 k ¯ 1 34 x ¯ 4 ,   L 366 = x ¯ 1 k ¯ 1 12 x ¯ 2 .
These c r s and L j do not use the derivatives of k ¯ e 1 r , the cumulant coefficients of w ^ .
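The paired-suffix convention above is just tensor contraction: a suffix appearing twice is summed, an unpaired suffix (an "ion") survives. A minimal numpy sketch with hypothetical small arrays (a p = 2 toy example, not values from the paper) illustrating the molecule v = t̄ 1 k̄ 1 12 t̄ 2 and the ion s̄ 1 = k̄ 1 12 t̄ 2 :

```python
import numpy as np

# Hypothetical p = 2 example: t_grad stands for the gradient t-bar_i of t(w),
# k1 for the leading (covariance) cumulant-coefficient matrix of w-hat.
t_grad = np.array([1.0, 2.0])
k1 = np.array([[2.0, 0.5],
               [0.5, 1.0]])

# v = t-bar_1 k-bar_1^{12} t-bar_2: both suffixes are paired, so both are summed.
v = np.einsum('i,ij,j->', t_grad, k1, t_grad)

# s-bar_1 = k-bar_1^{12} t-bar_2: suffix 1 is unpaired (an "ion"), so it survives.
s = np.einsum('ij,j->i', k1, t_grad)
```

With these toy arrays, v is the scalar quadratic form and s a length-2 vector; larger molecules such as c 23 = s̄ 1 t̄ 12 s̄ 2 chain the same contractions.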
Corollary 1. 
Suppose that w ^ is an unbiased standard estimate of w ∈ R p with respect to n, and that q = 1 . Then the cumulants of t ^ = t ( w ^ ) can be expanded as (1) with bounded cumulant coefficients K r e . The leading coefficients needed for P r ( x ) of (4) for the distribution of Y n t of (5) are as follows.
K 10 = t ¯ = t ( w ) .   F o r P 0 ( x ) :   K 21 = v = t ¯ 1   k ¯ 1 12   t ¯ 2 . F o r P 1 ( x ) :   K 11 = c 02 / 2 ,   K 32 = c 21 + 3 c 23 . F o r P 2 ( x ) :   K 22 = k = 1 2   k K 22 ,   1 K 22 = c 11 ,     2 K 22 = c 15 + c 19 / 2 + c 1 , 10 , K 43 = c 31 + 12 c 36 + 12 c 3 , 10 + 4 c 3 , 11 . F o r P 3 ( x ) :   K 12 = k = 1 2   k K 12 ,     1 K 12 = L 1 / 2 ,     2 K 12 = L 2 / 6 + L 3 / 8 ;
[Displayed equations: images Preprints 100389 i005–i008 in the published preprint.]
L 35 = k = 1 7 d k L 35 k ,   d 1 = 1 ,   d 2 = d 3 = d 7 = 6 ,   d 4 = 3 ,   d 5 = 2 ,   d 6 = 12 , L 36 = k = 1 6 e k L 36 k ,   e 1 = 1 ,   e 2 = 20 ,   e 3 = 15 ,   e 7 = 6 ,   e 4 = e 5 = e 6 = 60 . A l s o , K 13 = t ¯ 12 k ¯ 3 12 / 2 + t ¯ 1 3 k ¯ 3 1 3 / 6 + t ¯ 1 4   ( k ¯ 12 k ¯ 34 ) 3 / 8 + t ¯ 1 4 k ¯ 3 1 4 / 24 + t ¯ 1 5 k ¯ 2 1 3 k ¯ 1 45 / 12 + t ¯ 1 6   k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 / 48 , K 14 = t ¯ 12 k ¯ 4 12 / 2 + t ¯ 1 3 k ¯ 4 1 3 / 6 + t ¯ 1 4   [ ( k ¯ 12 k ¯ 34 ) 4 / 8 + k ¯ 4 1 4 / 24 ] + t ¯ 1 5   [ k ¯ 4 1 5 / 120 + ( k ¯ 1 3 k ¯ 45 ) 4 / 12 ] + t ¯ 1 6   [ ( k ¯ 12 k ¯ 34 k ¯ 56 ) 4 / 48 + k ¯ 3 1 4 k ¯ 1 56 / 48 + k ¯ 2 1 3 k ¯ 2 4 6 / 72 ] + t ¯ 1 7   k ¯ 2 1 3 k ¯ 1 45 k ¯ 1 67 / 48 + t ¯ 1 8   k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 k ¯ 1 78 / 384 .
PROOF Since q = 1 ,   N becomes N. We write T 1 s 1 r , U 1 s 1 r , V 1 s 1 r as T 1 s r , U 1 s r , V 1 s r . By Theorem 3.1 we need the following.
T 1 4 3 / 3 = t ¯ 13 t ¯ 2 t ¯ 4 ,   T 1 3 2 / 2 = t ¯ 12 t ¯ 3 ,   T 1 4 2 / 2 = t ¯ 1 3 t ¯ 4 + t ¯ 13 t ¯ 24 , T 1 5 4 / 12 = t ¯ 14 t ¯ 2 t ¯ 3 t ¯ 5 ,   T 1 6 4 / 4 = t ¯ 135 t ¯ 2 t ¯ 4 t ¯ 6 + 3 t ¯ 13 t ¯ 25 t ¯ 4 t ¯ 6 , T 1 5 3 / 3 = t ¯ 124 t ¯ 3 t ¯ 5 + 3 t ¯ 145 t ¯ 2 t ¯ 3 / 2 + 3 t ¯ 12 t ¯ 34 t ¯ 5 + 3 t ¯ 14 t ¯ 25 t ¯ 3 , T 1 6 3 = 3 t ¯ 1235 t ¯ 4 t ¯ 6 / 2 + 6 t ¯ 1 3 t ¯ 45 t ¯ 6 + 3 t ¯ 135 t ¯ 24 t ¯ 6 + t ¯ 13 t ¯ 25 t ¯ 46 , T 1 6 5 / 20 = t ¯ 15 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 6 ,   U 1 6 5 / 15 = t ¯ 14 t ¯ 2 t ¯ 3 t ¯ 5 t ¯ 6 , T 1 7 5 / 30 = t ¯ 146 t ¯ 2 t ¯ 3 t ¯ 5 t ¯ 7 + 2 t ¯ 14 t ¯ 26 t ¯ 3 t ¯ 5 t ¯ 7 + 2 t ¯ 14 t ¯ 56 t ¯ 2 t ¯ 3 t ¯ 7 , T 1 8 5 = t ¯ 1357 t ¯ 2 t ¯ 4 t ¯ 6 t ¯ 8 + 60 t ¯ 135 t ¯ 27 t ¯ 4 t ¯ 6 t ¯ 8 + 60 t ¯ 13 t ¯ 25 t ¯ 47 t ¯ 6 t ¯ 8 .   U 1 4 2 = t ¯ 1 3 t ¯ 4 / 3 + t ¯ 12 t ¯ 34 / 4 , T 1 5 2 = t ¯ 1 4 t ¯ 5 / 3 + t ¯ 1245 t ¯ 3 / 2 + t ¯ 124 t ¯ 35 + t ¯ 145 t ¯ 23 / 2 , T 1 6 2 = t ¯ 1 5 t ¯ 6 + 2 t ¯ 1235 t ¯ 46 + t ¯ 1 3 t ¯ 4 6 + 2 t ¯ 135 t ¯ 246 / 3 , T 1 5 4 / 12 = t ¯ 14 t ¯ 2 t ¯ 3 t ¯ 5 ,   U 1 5 4 / 4 = t ¯ 12 t ¯ 3 t ¯ 4 t ¯ 5 , U 1 6 4 / 2 = 3 t ¯ 125 t ¯ 3 t ¯ 4 t ¯ 6 + 2 t ¯ 156 t ¯ 2 t ¯ 3 t ¯ 4 + 6 t ¯ 12 t ¯ 35 t ¯ 4 t ¯ 6 + 3 t ¯ 15 t ¯ 26 t ¯ 3 t ¯ 4 , V 1 6 4 / 6 = t ¯ 124 t ¯ 3 t ¯ 5 t ¯ 6 + t ¯ 12 t ¯ 34 t ¯ 5 t ¯ 6 + t ¯ 14 t ¯ 25 t ¯ 3 t ¯ 6 , T 1 7 4 = 6 t ¯ 1246 t ¯ 3 t ¯ 5 t ¯ 7 + 3 t ¯ 1456 t ¯ 2 t ¯ 3 t ¯ 7 + 24 t ¯ 124 t ¯ 36 t ¯ 5 t ¯ 7 + 12 t ¯ 124 t ¯ 56 t ¯ 3 t ¯ 7 + 12 t ¯ 145 t ¯ 26 t ¯ 3 t ¯ 7 + 12 t ¯ 146 t ¯ 23 t ¯ 5 t ¯ 7 + 24 t ¯ 146 t ¯ 25 t ¯ 3 t ¯ 7 + 4 t ¯ 146 t ¯ 57 t ¯ 2 t ¯ 3 + 12 t ¯ 456 t ¯ 17 t ¯ 2 t ¯ 3 + 12 t ¯ 12 t ¯ 34 t ¯ 56 t ¯ 7 + 12 t ¯ 14 t ¯ 25 t ¯ 36 t ¯ 7 + 12 t ¯ 14 t ¯ 26 t ¯ 57 t ¯ 3 , T 1 8 4 = 2 t ¯ 12357 t ¯ 4 t ¯ 6 t ¯ 8 + 12 t ¯ 1235 t ¯ 47 t ¯ 6 t ¯ 8 + 6 t ¯ 1357 t ¯ 24 t ¯ 6 t ¯ 8 + 6 t ¯ 123 t ¯ 457 t ¯ 6 t ¯ 8 + 6 t ¯ 135 t ¯ 247 t ¯ 6 t ¯ 8 + 12 t ¯ 123 t ¯ 45 t ¯ 67 t ¯ 8 + 24 t ¯ 135 t ¯ 24 t ¯ 67 t ¯ 8 + 6 t ¯ 135 t ¯ 27 t ¯ 48 t ¯ 6 + 
3 t ¯ 13 t ¯ 25 t ¯ 47 t ¯ 68 ,   T 1 7 6 / 30 = t ¯ 16 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 5 t ¯ 7 ,   U 1 7 6 / 60 = t ¯ 15 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 6 t ¯ 7 ,
T 1 8 6 / 60 = t ¯ 157 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 6 t ¯ 8 + 3 t ¯ 15 t ¯ 27 t ¯ 3 t ¯ 4 t ¯ 6 t ¯ 8 + 2 t ¯ 15 t ¯ 67 t ¯ 2 t ¯ 3 t ¯ 4 t ¯ 8 , U 1 8 6 / 90 = t ¯ 147 t ¯ 2 t ¯ 3 t ¯ 5 t ¯ 6 t ¯ 8 + 4 t ¯ 14 t ¯ 27 t ¯ 3 t ¯ 5 t ¯ 6 t ¯ 8 + t ¯ 17 t ¯ 48 t ¯ 2 t ¯ 3 t ¯ 5 t ¯ 6 , T 1 9 6 / 60 = t ¯ 1468 t ¯ 2 t ¯ 3 t ¯ 5 t ¯ 7 t ¯ 9 + 6 t ¯ 146 t ¯ 28 t ¯ 3 t ¯ 5 t ¯ 7 t ¯ 9 + 6 t ¯ 146 t ¯ 58 t ¯ 2 t ¯ 3 t ¯ 7 t ¯ 9 + 3 t ¯ 468 t ¯ 15 t ¯ 2 t ¯ 3 t ¯ 7 t ¯ 9 + 2 t ¯ 14 t ¯ 26 t ¯ 38 t ¯ 5 t ¯ 7 t ¯ 9 + 12 t ¯ 14 t ¯ 26 t ¯ 58 t ¯ 3 t ¯ 7 t ¯ 9 + 6 t ¯ 14 t ¯ 56 t ¯ 78 t ¯ 2 t ¯ 3 t ¯ 9 , T 1 10 6 / 6 = t ¯ 13579 t ¯ 2 t ¯ 4 t ¯ 6 t ¯ 8 t ¯ 10 + 20 t ¯ 1357 t ¯ 29 t ¯ 4 t ¯ 6 t ¯ 8 t ¯ 10 + 15 t ¯ 135 t ¯ 279 t ¯ 4 t ¯ 6 t ¯ 8 t ¯ 10 + 60 t ¯ 135 t ¯ 27 t ¯ 49 t ¯ 6 t ¯ 8 t ¯ 10 + 60 t ¯ 135 t ¯ 27 t ¯ 89 t ¯ 4 t ¯ 6 t ¯ 10 + 60 t ¯ 13 t ¯ 25 t ¯ 47 t ¯ 69 t ¯ 8 t ¯ 10 . F o r P 1 ( x ) :   T 1 4 3   k ¯ 1 12 k ¯ 1 34 = 3 c 23 . F o r P 2 ( x ) :   T 1 3 2   k ¯ 2 1 3 / 2 = c 15 ,   T 1 4 2   k ¯ 1 12 k ¯ 1 34 / 2 = c 19 / 2 + c 1 , 10 ; K 43 = c 31 + g 1 + g 2 f o r g 1 = T 1 5 4   k ¯ 2 1 3 k ¯ 1 45 = 12 c 36 , g 2 = T 1 6 4   k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 = 4 c 3 , 11 + 12 c 3 , 10 . F o r P 3 ( x ) :   2 K 33 = L 4 + 3 L 5 ,   L 5 = T 1 4 3 / 3   ( k ¯ 12 k ¯ 34 ) 3 = t ¯ 13 t ¯ 2 t ¯ 4   ( k ¯ 12 k ¯ 34 ) 3 = 2 L 5 , L 6 = T 1 4 3 / 3   k ¯ 3 1 4 ,   L 7 = T 1 5 3 / 3   k ¯ 2 1 3 k ¯ 1 45 , L 71 = s ¯ 4 t ¯ 412   k ¯ 2 1 3 t ¯ 3 ,     L 8 = T 1 6 3   k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 , L 81 = t ¯ 1235 t ¯ 4 t ¯ 6   k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 ,   L 82 = t ¯ 1 3 t ¯ 45 t ¯ 6   k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 ,   L 83 = t ¯ 135 t ¯ 24 t ¯ 6   k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 , L 84 = t ¯ 13 t ¯ 25 t ¯ 46   k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 . 
K 54 i s g i v e n b y ( ) w i t h L 10 = T 1 6 5 / 20   k ¯ 3 1 4 k ¯ 1 56 ,   L 11 = U 1 6 5 / 15   k ¯ 2 1 3 k ¯ 2 4 6 , L 12 = T 1 7 5 / 30   k ¯ 2 1 3 k ¯ 1 45 k ¯ 1 67 ,   L 121 = t ¯ 2 t ¯ 3 k ¯ 2 1 3 t ¯ 146 s ¯ 4 s ¯ 6 ,   L 13 = T 1 8 5   k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 k ¯ 1 78 . F o r P 4 ( x ) :   2 K 23 = L 15 + L 16 + L 17 / 2 ,   L 15 = T 1 3 2 / 2   k ¯ 3 1 3 ,   L 16 = k = 1 2 L 16 k , L 162 = s ¯ 2 t ¯ 2 4 k ¯ 2 34 ,   L 17 = T 1 4 2 / 2   ( k ¯ 12 k ¯ 34 ) 3 = k = 1 2 L 17 k ,   L 172 = L 171 ;   3 K 23 = k = 18 20 L k :   L 18 = U 1 4 2 k ¯ 3 1 4 = L 181 / 3 + L 182 / 4 , L 19 = T 1 5 2 k ¯ 2 1 3 k ¯ 1 45 = k = 1 4 L 19 k / g k f o r g 1 = 3 , g 2 = g 3 = g 4 = 1 , L 192 = t ¯ 3 k ¯ 2 1 3 t ¯ 1245 k ¯ 1 45 ,   L 193 = t ¯ 412 k ¯ 2 1 3 t ¯ 35 k ¯ 1 54 ,   L 194 = k ¯ 1 45 t ¯ 451 k ¯ 2 1 3 t ¯ 23 , L 20 = T 1 6 2 k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 / 4 = L 201 / 4 + L 202 / 2 + L 203 / 4 + L 204 / 6 , L 202 = k ¯ 1 12 t ¯ 1235   ( k ¯ 1 34 k ¯ 1 56 )   t ¯ 46 .   3 K 44 = L 21 + 12 L 22 + 4 L ,   L = T 1 6 4 / 4   ( k ¯ 12 k ¯ 34 k ¯ 56 ) 4 = L 23 + 3 L 24 , L 22 = T 1 5 4 / 12   ( k ¯ 1 3 k ¯ 45 ) 4 = ( L 221 + L 222 ) b y ( ) ,
L 23 = t ¯ 135 t ¯ 2 t ¯ 4 t ¯ 6   ( k ¯ 12 k ¯ 34 k ¯ 56 ) 4 = k = 1 3 L 23 k b y ( ) ,   L 23 k L 231 . B y ( ) , L 24 = t ¯ 13 t ¯ 25 t ¯ 4 t ¯ 6   ( k ¯ 12 k ¯ 34 k ¯ 56 ) 4 = 2 L 241 + L 242 ;   4 K 44 = 2 L 25 + 2 L 26 + 6 L 27 + L 28 + L 29 ,   L 25 = k ¯ 4 1 5 U 1 5 4 / 4 , L 26 = U 1 6 4 / 2   k ¯ 3 1 4 k ¯ 1 56 ,   L 27 = V 1 6 4 / 6   k ¯ 2 1 3 k ¯ 2 4 6 = k = 1 3 L 27 k , L 28 = T 1 7 4 k ¯ 2 1 3 k ¯ 1 45 k ¯ 1 67 ,   L 29 = T 1 8 4 k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 k ¯ 1 78 ; f o r K 65 :   L 31 = T 1 7 6 / 30   k 4 1 5 k 1 67 , L 32 = U 1 7 6 / 60   k 4 1 5 k 1 67 / v ,   L 33 = T 1 8 6 / 60   k ¯ 3 1 4 k ¯ 1 56 k ¯ 1 78 ,   L 34 = U 1 8 6 / 90   k ¯ 2 1 3 k ¯ 2 4 6 k ¯ 2 78 , L 35 = T 1 9 6 / 60   k ¯ 2 1 3 k ¯ 1 45 k ¯ 1 67 k ¯ 1 89 ,   L 36 = T 1 10 6 / 6   k ¯ 1 12 k ¯ 1 34 k ¯ 1 56 k ¯ 1 78 k ¯ 1 9 , 10 .              
Example 1. 
Suppose that E   w ^ = w and t ( w ) is linear in w. Then K 1 e = 0 for e ≥ 1 . For r ≤ 4 , the K i j needed for P r ( x ) of (4) for the distribution of Y n t of (5) are as follows.
$$K_{10} = \bar t = t(w).\ \ \text{For } P_0(x):\ K_{21} = v.\ \ \text{For } P_1(x):\ K_{32} = c_{21}.\ \ \text{For } P_2(x):\ K_{22} = {}_1K_{22} = c_{11},\ \ K_{43} = c_{31}.\ \ \text{For } P_3(x):\ K_{33} = {}_2K_{33} = L_4,\ \ K_{54} = L_9.\ \ \text{For } P_4(x):\ K_{23} = {}_1K_{23} = L_{14},\ \ K_{44} = {}_3K_{44} = L_{21},\ \ K_{65} = L_{30}.$$
This is because u ¯ 1 , v ¯ 1 , x ¯ 1 , z ¯ 12 are 0, as are most c i j , L k and   2 K 22 ,   3 K 33 ,   2 K 23 ,   3 K 23 ,   4 K 44 . So for r = 0 , …, 4 , for P r ( x ) we only need to calculate these 3 c i j and 5 L j .
Let G γ be a gamma random variable with known mean γ . Its rth cumulant is ( r 1 ) ! γ .
Example 2. 
Linear combinations of scale parameters. Suppose that E   w ^ = w and t ( w ) is linear, the components of w ^ are independent, and for 1 ≤ i ≤ p ,   w ^ i / w i has a distribution with known rth cumulant n 1 − r κ r i . Then K r e = 0 for e ≠ r − 1 and
$$K_{r,r-1} = \sum_{i=1}^{p} t_i^r\,\kappa_{ri}\,(w_i)^r.$$
For example if w ^ i / w i is a standard exponential variable then κ r i = ( r 1 ) ! .
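The exponential cumulants quoted here can be checked by differentiating the cumulant generating function K ( t ) = − log ( 1 − t ) of a standard exponential; a minimal sympy sketch:

```python
import sympy as sp

t = sp.symbols('t')
# Cumulant generating function of a standard exponential: K(t) = -log(1 - t)
K = -sp.log(1 - t)

# kappa_r is the r-th derivative of K at t = 0; it should equal (r - 1)!
kappas = [sp.diff(K, t, r).subs(t, 0) for r in range(1, 6)]
```

The r-th derivative of − log ( 1 − t ) is ( r − 1 ) ! / ( 1 − t ) r , so at t = 0 the cumulants are 1, 1, 2, 6, 24, …, as claimed.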
For s r and any function f i s i r , set s r f ¯ s r = i s , , i r f i s i r summed over their range. In Example 5.3 their range is 1 , 2 ; for example in L 123 ,   1 t 2 i 1 v ¯ 1 = i 1 = 1 k t 2 i 1 v ¯ 1 . In Example 5.4 their range is 1 , , k ; for example u ¯ 1 = 2 t ¯ 12 s ¯ 2 = i 2 = 1 k t ¯ 12 s ¯ 2 .
Example 3. 
Suppose that μ ^ ∼ N 1 ( μ , V / n ) and V ^ / V = χ f 2 / f = G γ / γ are independent, where γ = f / 2 has magnitude n. Set ν = γ / n . Then
$$\kappa_r(\hat\mu) = \mu\,\delta_{r1} + V n^{-1}\delta_{r2},\qquad \kappa_r(\hat V) = k_r\,n^{1-r}\ \ \text{for}\ \ k_r = (r-1)!\,\nu^{1-r} V^r,$$
and cross-cumulants of w ^ are zero. Take p = 2 ,   w 1 = μ ,   w 2 = V . Then by Corollary 5.1, K r e are given in terms of
s 1 = t 1 V ,   s 2 = t 2 k 2 ,   u 1 = t 11 t 1 V + t 12 t 2 k 2 ,   u 2 = t 12 t 1 V + t 22 t 2 k 2 , v 1 = V u 1 ,   v 2 = k 2 u 2 ,
as follows.
F o r P 0 ( x ) :   K 21 = v = t 1 2 V + t 2 2 k 2 / 2 . F o r P 1 ( x ) :   c 02 = t 11 + t 22 k 2 ,   c 21 = t 2 3 k 3 ,   c 23 = i = 1 2 t i i ( s i ) 2 . F o r P 2 ( x ) :   c 11 = 0 ,   c 15 = t 22 t 2 k 3 ,   c 19 = ( t 11 V ) 1 + ( t 22 k 2 ) 2 + 2 t 12 V k 2 , c 31 = t 2 4 k 4 ,   c 36 = t 2 2 k 3 u 2 ,   c 3 , 10 = u 1 2 V + u 2 2   k 2 , c 3 , 11 = i = 1 2 ( s i ) 3 t i i i + 3 12 2 ( s 1 ) 2 s 2 t 112 w h e r e 12 2 f 12 = f 12 + f 21 . F o r P 3 ( x ) :   L 1 = L 4 = L 5 = 0 ,   L 2 = t 222 k 3 , L 3 = t 1111 V 2 + 2 t 1122 V k 2 + t 2222 k 2 2 , L 6 = t 22 t 2 2 k 4 ,   L 71 = z 22 k 3 t 2 ,   L 72 = y 2 ( t 211 V + t 222 k 3 ) ,   L 73 = t 22 k 3 u 2 , L 74 = ( V t 12 2 + k 2 t 22 2 ) k 3 t 2 ,   L 81 = ( V t 1 i 2 i 3 i 4 + k 2 t 2 i 2 i 3 i 4 ) s ¯ 3 s ¯ 4 ,
L 82 = 1 ( V t 11 i 1 + k 2 t 22 i 1 ) v i 1 ,   L 83 = V 2 t 11 z 11 + 2 V k 2 t 12 z 12 + k 2 2 t 22 z 22 , L 84 = 1 3 k ¯ 1 11 k ¯ 1 22 k ¯ 1 33 t ¯ 12 t ¯ 23 t ¯ 31 , L 9 = t 2 5 k 5 ,   L 10 = u 2 t 2 3 k 4 ,   L 11 = ( y 2 ) 2 t 22 ,   L 121 = y 2 12 t 2 i 1 i 2 s ¯ 1 s ¯ 2 , L 122 = t 2 k 3 u 2 2 ,   L 123 = y 2 1 t 2 i 1 v ¯ 1 ,   L 13 = 1 3 s ¯ 1 s ¯ 2 t ¯ 1 3 k ¯ 1 33 u ¯ 3 .
Similarly one can write down the Ls needed for P 4 ( x ) .
Example 4. 
Suppose that we have the summary statistics from k samples of sizes n i from normal populations with means and variances μ i , V i ,   1 ≤ i ≤ k . Take p = 2 k ,   w i = μ i ,   w i + k = V i ,   1 ≤ i ≤ k . So we have p independent statistics, μ ^ i ∼ N ( μ i , V i / n i ) and V ^ i ∼ V i χ f i 2 / f i = V i G γ i / γ i ,   1 ≤ i ≤ k , where γ i = f i / 2 has magnitude n, the total sample size. Set
ν i = γ i / n ,   λ i = n i / n ,   τ i = V i / λ i .
Then $\kappa_r(\hat\mu_i) = \mu_i\,\delta_{r1} + V_i(\lambda_i n)^{-1}\delta_{r2}$, $\kappa_r(\hat V_i) = k_{ri}(\lambda_i n)^{1-r}$ for $k_{ri} = (r-1)!\,\nu_i^{1-r}$, and cross-cumulants of w ^ are zero. Suppose that t ( w ) only depends on μ 1 , …, μ k , as in Example 3.3 of Withers (1982). (The notation there is slightly different.) Then
s ¯ 1 = t ¯ 1 τ ¯ 1 ,   u ¯ 1 = 2 t ¯ 12 s ¯ 2 ,   v ¯ 1 = τ ¯ 1 u ¯ 1 ,   z ¯ 12 = 3 t ¯ 1 3 s ¯ 3 ,
and by Corollary 5.1, the coefficients needed are as follows.
F o r P 0 ( x ) :   K 21 = v = t ¯ 1 s ¯ 1 = 1 t ¯ 1 2 τ ¯ 1 , F o r P 1 ( x ) :   c 02 = 1 t ¯ 11 τ ¯ 1 ,   c 21 = 0 ,   c 23 = s ¯ 1 t ¯ 12 s ¯ 2 = 12 t ¯ 1 τ ¯ 1 t ¯ 12 t ¯ 2 τ ¯ 2 , F o r P 2 ( x ) ,   K 22 :   c 11 = c 15 = 0 ,   c 19 = 12 t ¯ 12 2 τ ¯ 1 τ ¯ 2 , c 1 , 10 = 12 t ¯ 1 τ ¯ 1 t ¯ 122 τ ¯ 2 ; F o r K 43 :   c 31 = c 36 = 0 ,   c 3 , 10 = 1 u ¯ 1 2 τ ¯ 1 ,   c 3 , 11 = 1 3 s ¯ 1 s ¯ 2 s ¯ 3 t ¯ 1 3 . F o r P 3 ( x ) ,   K 12 = L 3 / 6 = 12 t ¯ 1122 τ ¯ 1 τ ¯ 2 / 6 a s L 1 = L 2 = 0 ; f o r K 33 :   L 4 = L 5 = L 6 = L 7 = 0 ,   L 81 = 1 3 τ ¯ 1 t ¯ 1123 s ¯ 2 s ¯ 3 ,   L 82 = 12 τ ¯ 1 t ¯ 122 v ¯ 2 , L 83 = 12 t ¯ 12 τ ¯ 1 τ ¯ 2 z ¯ 12 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 12 t ¯ 123 t ¯ 3 ,   L 83 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 12 t ¯ 23 t ¯ 31 ; f o r K 54 :   L k = 0 f o r 10 k 14 . F o r P 4 ( x ) ,   K 23 :   L i , k = 0 f o r i = 15 , 16 , 18 , 19 ,   L 171 = 12 t ¯ 12 2 τ ¯ 1 τ ¯ 2 , L 201 = 1 3 τ ¯ 1 τ ¯ 2 t ¯ 11223 s ¯ 5 ,   L 202 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 1123 t ¯ 23 ,   L 203 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 112 t ¯ 233 , L 204 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 123 2 ;
K 44 = L 29 a s L k = 0 f o r k = 21 , ( 22 , k ) , ( 23 , 1 ) , ( 24 , k ) , 25 , ( 26 , k ) , ( 27 , k ) , ( 28 , k ) , L 291 = 1 4 τ ¯ 1 s ¯ 2 s ¯ 3 s ¯ 4 t ¯ 11234 ,   L 292 = 1 3 τ ¯ 1 t ¯ 1123 v ¯ 2 s ¯ 3 ,   L 293 = 1 4 τ ¯ 1 τ ¯ 2 s ¯ 3 s ¯ 4 t ¯ 1234 , L 294 = 1 4 τ ¯ 1 τ ¯ 2 t ¯ 112 t ¯ 234 s ¯ 3 s ¯ 4 ,   L 295 = 12 τ ¯ 1 z ¯ 12 2 τ ¯ 2 ,   L 296 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 112 t ¯ 23 , L 297 = 12 τ ¯ 1 x ¯ 12 t ¯ 12 f o r x ¯ 12 = 3 t ¯ 123 v ¯ 3 ,   L 298 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 12 t ¯ 23 z ¯ 31 , L 299 = 1 4 τ ¯ 1 τ ¯ 2 τ ¯ 3 τ ¯ 4 t ¯ 12 t ¯ 23 t ¯ 34 t ¯ 41 ; K 65 = 6 L 36 , a s t h e o t h e r c o m p o n e n t s o f K 65 a r e 0 . A l s o , K 13 = 1 3 τ ¯ 1 τ ¯ 2 τ ¯ 3 t ¯ 112233 / 48 ,   K 14 = 1 4 τ ¯ 1 τ ¯ 2 τ ¯ 3 τ ¯ 4 t ¯ 11223344 / 384 .
Corollary 2. 
Set Y n = v − 1 / 2 Y n t = ( n / v ) 1 / 2 ( t ( w ^ ) − t ( w ) ) . Then
$$\mathrm{Prob}(Y_n \le x) = \sum_{r=0}^{\infty} n^{-r/2} P_{rv}(x),$$
where P r v ( x ) is P r ( x ) of Corollary 5.1 with K r e replaced by K r e / v r / 2 .
PROOF This is straightforward. □
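The leading term of this standardization can be checked by simulation: the variance of Y n t should approach v = K 21 . A Monte Carlo sketch under an assumed toy model (exponential data, t ( w ) = w ² ; these choices are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 50_000
w = 1.0                      # true mean of a standard exponential
t = lambda x: x ** 2         # a smooth function of the estimate

# reps replications of w-hat, the mean of n standard exponentials
w_hat = rng.exponential(1.0, size=(reps, n)).mean(axis=1)

# Y_n^t = sqrt(n) (t(w-hat) - t(w)); its leading variance is
# v = K_21 = t'(w)^2 Var(X) = (2w)^2 * 1 for this toy model
y = np.sqrt(n) * (t(w_hat) - t(w))
v = (2.0 * w) ** 2
```

For this model the simulated variance of y agrees with v to within the O ( n − 1 / 2 ) correction terms that the higher P r ( x ) capture.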
Looking at K r e , c a b , v , s ¯ 2 , u ¯ 3 as functions of w, we denote their partial derivatives with respect to w ¯ 3 = w i 3 , say, by K ¯ r e 3 , c ¯ a b 3 , v ¯ 3 , s ¯ 3 2 , u ¯ 3 3 , and similarly for higher derivatives. We shall give the ones we need in Lemma 5.1. When constructing confidence regions, one needs to assume that the distribution of w ^ is determined by w. So far we have not assumed this. For w ^ biased, we need
Corollary 3. 
Let w ^ ∈ R p be a biased standard estimate of w satisfying (1), where k ¯ e 1 r may depend on w. Then for q = 1 , t ^ = t ( w ^ ) is a standard estimate of t = t ( w ) ∈ R :
$$\kappa_r(\hat t) = \sum_{e=r-1}^{\infty} n^{-e} a_{re}\ \ \text{for}\ r\ge 1,\ \ \text{where}\ a_{re} = K_{re} + D_{re},\ \ D_{r,r-1} = 0,$$
and the other D r e needed for P j ( x ) of (4) for the distribution of Y n t of (5) are as follows.
$$\text{For } P_0(x):\ D_{21} = 0 \Rightarrow a_{21} = K_{21} = v = \bar t_1 \bar k_1^{12} \bar t_2.\ \ \text{For } P_1(x):\ D_{11} = c_{01} \Rightarrow a_{11} = c_{01} + c_{02}/2,\ \ a_{32} = K_{32} = c_{21} + 3c_{23},$$
F o r P 2 ( x ) :   D 22 = v ¯ 3   k ¯ 1 3 = c 12 + 2 c 16 ,   a 43 = K 43 . F o r P 3 ( x ) :   D 12 = K 11 , 1 + K 10 , 2 ,   K 11 , 1 = K ¯ 11 3   k ¯ 1 3 ,   K 10 , 2 = t ¯ 1 k ¯ 2 1 + k ¯ 1 1 t ¯ 12 k ¯ 1 2 / 2 , D 33 = K ¯ 32 4   k ¯ 1 4 ,   a 54 = K 54 . F o r P 4 ( x ) :   D 23 = K 22 , 1 + K 21 , 2 ,   K 22 , 1 = K ¯ 22 3   k ¯ 1 3 ,   K 21 , 2 = v ¯ 3   k ¯ 2 3 / 2 + v ¯ 34   k ¯ 1 3 k ¯ 1 4 , D 44 = K ¯ 43 5   k ¯ 1 5 ,   a 65 = K 65 .
PROOF This follows from Theorem 4.1, with $D_{rc} = \sum_{e=1}^{c-r+1} K_{r,c-e,e}$, where K r k e is the coefficient of n − e in the expansion of K r k ( E   w ^ ) about K r k . □
For r ≤ s and any X ¯ r s , let $\sum_r^{s,N} \bar X_r^{\cdots s}$ denote the sum over all N permutations of i r , …, i s giving distinct terms. For example,
$$\sum_3^{5,3} \bar t_{13}\,\bar t_{245} = \bar t_{13}\,\bar t_{245} + \bar t_{14}\,\bar t_{235} + \bar t_{15}\,\bar t_{234}.$$
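This distinct-permutation count can be enumerated mechanically; a small sketch, using frozensets to detect terms that coincide because t ¯ is symmetric in its suffixes:

```python
from itertools import permutations

# Sum t-bar_{1a} t-bar_{2bc} over permutations (a, b, c) of (3, 4, 5),
# keeping only distinct terms: t-bar is symmetric in its suffixes,
# so each term is identified by its two (unordered) suffix sets.
terms = set()
for a, b, c in permutations((3, 4, 5)):
    terms.add((frozenset({1, a}), frozenset({2, b, c})))

distinct = sorted(tuple(sorted(p)) + tuple(sorted(q)) for p, q in terms)
```

The six permutations collapse to the N = 3 distinct terms of the displayed example, indexed by which of 3, 4, 5 is paired with suffix 1.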
The derivatives of v = K 21 and K r e needed for Corollary 5.3 are given by
Lemma 1. 
v ¯ 3 = 2 u ¯ 3 + T ¯ 3 ,   v ¯ 34 = k = 0 2 v ¯ 34 k , w h e r e T ¯ 3 = t ¯ 1 t ¯ 2 k ¯ 1 3 12 , v ¯ 340 = 2 z ¯ 34 + 2 t ¯ 31 k ¯ 1 12 t ¯ 24 ,   v ¯ 341 = 2 t ¯ 1 ( k ¯ 1 3 12 t ¯ 24 + k ¯ 1 4 12 t ¯ 23 ) ,   v ¯ 342 = t ¯ 1 t ¯ 2 k ¯ 1 34 12 .
[Displayed equations: images Preprints 100389 i009–i010 in the published preprint.]
K ¯ 43 5 = c 31 5 + 12 c 36 5 + 12 c 3 , 10 5 + 4 c 3 , 11 5 f o r c 31 5 = 4 t ¯ 51   k ¯ 3 1 4 t ¯ 2 t ¯ 3 t ¯ 4 + t ¯ 1 t ¯ 4   k ¯ 3 5 1 4 ,   c 36 5 = 2 t ¯ 51 k ¯ 2 1 3 t ¯ 2 u ¯ 3 + t ¯ 1 t ¯ 2 u ¯ 3 k ¯ 2 5 1 3 + y ¯ 3 t ¯ 3 5 s ¯ 4 + Y ¯ 4 ( k ¯ 1 5 41 t ¯ 1 + k ¯ 1 41 t ¯ 15 ) . c 3 , 10 3 = u ¯ 1 k ¯ 1 3 12 u ¯ 2 + 2 v ¯ 1 s ¯ 2 t ¯ 1 3 + 2 t ¯ 1 x ¯ 2 k ¯ 1 3 12 + 2 x ¯ 1 k ¯ 1 12 t ¯ 23 ,   c 3 , 11 5 = 3 s ¯ 1 s ¯ 2 t ¯ 1 3 s ¯ 5 3 = 3 s ¯ 1 s ¯ 2 t ¯ 1 3 t ¯ 4 k ¯ 1 5 34 + 3 s ¯ 1 s ¯ 2 t ¯ 1 3 k ¯ 1 34 t ¯ 45 .
PROOF For example substitute u ¯ 3 5 = t ¯ 3 5 s ¯ 4 + t ¯ 34 k ¯ 1 5 41 t ¯ 1 + t ¯ 34 k ¯ 1 41 t ¯ 15 into
c 36 5 = 2 t ¯ 1 t ¯ 25 k ¯ 2 1 3 u ¯ 3 + t ¯ 1 t ¯ 2 ( k ¯ 2 5 1 3 u ¯ 3 + k ¯ 2 1 3 u ¯ 3 5 ) .
So now we can write D r e needed for Corollary 5.3 in molecular form:
Corollary 4. 
Assume that the conditions of Corollary 5.3 hold. Then D r e and K r e , j given there satisfy
D 22 = c 12 + 2 c 16 a 22 = c 11 + c 15 + c 19 / 2 + c 1 , 10 + c 12 + 2 c 16 . F o r D 12 ,   K 11 , 1 = ( t ¯ 1 3   k ¯ 1 12 + t ¯ 12   k ¯ 1 3 12 ) k ¯ 1 3 / 2 . D 33 = ( 3 y ¯ 3 t ¯ 34 + t ¯ 1 t ¯ 2 t ¯ 3 k ¯ 2 4 1 3 + 3 s ¯ 1 s ¯ 2 t ¯ 124 + 6 v ¯ 1 t ¯ 14 + 6 u ¯ 1 k ¯ 1 4 12 t ¯ 2 ) k ¯ 1 4 . F o r D 23 ,   K 22 , 1 = ( 2 t ¯ 31 S ¯ 1 + t ¯ 1 t ¯ 2   k ¯ 2 3 12 ) k ¯ 1 3 + ( t ¯ 14 k ¯ 2 1 3 t ¯ 23 + t ¯ 1 k ¯ 2 1 3 t ¯ 2 4 + t ¯ 1 k ¯ 2 4 1 3 t ¯ 23 ) k ¯ 1 4 + ( t ¯ 125 X ¯ 12 + k ¯ 1 5 23 t ¯ 34 k ¯ 1 41 t ¯ 12 ) k ¯ 1 5 + [ k ¯ 1 23 t ¯ 1 3 ( k ¯ 1 15 t ¯ 54 + t ¯ 5 k ¯ 1 4 15 ) + s ¯ 1 k ¯ 1 23 t ¯ 1 4 + s ¯ 1 t ¯ 1 3 k ¯ 1 4 23 ] k ¯ 1 4 ; K 21 , 2 = ( 2 u ¯ 3 + t ¯ 1 t ¯ 2 k ¯ 1 3 12 )   k ¯ 2 3 / 2 + [ 4 t ¯ 1 k ¯ 1 3 12 t ¯ 24 + s ¯ 1 t ¯ 134 + t ¯ 31 k ¯ 1 12 t ¯ 24 + t ¯ 1 t ¯ 2 k ¯ 1 34 12 ]   k ¯ 1 3 k ¯ 1 4 . D 44 = [ 4 t ¯ 51   k ¯ 3 1 4 t ¯ 2 t ¯ 3 t ¯ 4 + t ¯ 1 t ¯ 4   k ¯ 3 5 1 4 + 24 t ¯ 51 k ¯ 2 1 3 t ¯ 2 u ¯ 3 + 12 t ¯ 1 t ¯ 2 u ¯ 3 k ¯ 2 5 1 3 + 12 y ¯ 3 ( t ¯ 3 5 s ¯ 4 + t ¯ 34 k ¯ 1 5 41 t ¯ 1 + t ¯ 34 k ¯ 1 41 t ¯ 15 ) ]   k ¯ 1 5 + 12 [ 2 v ¯ 2 t ¯ 2 4 s ¯ 4 + 2 t ¯ 1 x ¯ 2 k ¯ 1 3 12 + 2 x ¯ 1 k ¯ 1 12 t ¯ 23 + 12 u ¯ 1 k ¯ 1 3 12 u ¯ 2 ]   k ¯ 1 3 + 12 ( s ¯ 1 s ¯ 2 t ¯ 1 3 t ¯ 4 k ¯ 1 5 34 + s ¯ 1 s ¯ 2 t ¯ 1 3 k ¯ 1 34 t ¯ 45 )   k ¯ 1 5 .
PROOF The a i e were given for i ≤ 4 by Theorem 4.1. Corollaries 5.3 and 5.4 agree with a 11 , a 32 , a 22 , a 43 given for P r ( x ) , r ≤ 2 , on p59 of Withers (1982), except that D 22 in a 22 was overlooked. □

6. An Extension to Theorem 2.1

Here we remove the condition in Theorem 2.1 that E   w ^ = w , and give the extra terms referred to but not given on p49 of James and Mayne (1962). We use   e K ¯ 1 r of Theorem 2.1, and the shorthand $\bar f_m = \partial_{i_m}\bar f$ where $\partial_i = \partial/\partial w_i$. Suppose that for r ≥ 1 , the rth order cumulants of (1) can be expanded as
$$\bar k^{1\cdots r} = \kappa(\hat w^{i_1},\ldots,\hat w^{i_r}) = \sum_{e=r-1}^{\infty} {}_e\bar k^{1\cdots r}\ \ \text{for}\ 1\le i_1,\ldots,i_r\le p,\ \ \text{where}\ {}_e\bar k^{1\cdots r} = O(n^{-e}),$$ and that $\hat w \rightarrow w$ as $n\rightarrow\infty$, so that ${}_0\bar k^1 = \bar w^1$.
There is a key difference from Theorem 2.1: there,   e k ¯ 1 r was treated as an algebraic expression, but now we must view each of them as a function of w. So we assume that the distribution of w ^ is determined by w.
The derivatives of   e K ¯ 1 r of Theorem 2.1 are given by Leibniz’s rule for the derivatives of a product:
  1 K ¯ 3 12 = ( t ¯ 1 1 t ¯ 2 2   k ¯ 12 ) 3 = ( t ¯ 13 1 t ¯ 2 2 + t ¯ 1 1 t ¯ 23 2 )   k ¯ 12 + t ¯ 1 1 t ¯ 2 2   k ¯ 3 12 ,   1 K ¯ 3 1 = ( t ¯ 12 1 k ¯ 12 ) 3 / 2 = t ¯ 1 3 1 k ¯ 12 / 2 + t ¯ 12 1 k ¯ 3 12 / 2 , ( t ¯ 1 1 t ¯ 2 2 t ¯ 3 3   k ¯ 1 3 ) 4 = ( t ¯ 1 1 t ¯ 2 2 t ¯ 3 3 ) 4 k ¯ 1 3 + t ¯ 1 1 t ¯ 2 2 t ¯ 3 3   k ¯ 4 1 3 ,   ( t ¯ 1 1 t ¯ 2 2 t ¯ 3 3 ) 4 = t ¯ 1 1 t ¯ 2 2 t ¯ 34 3 + t ¯ 1 1 t ¯ 24 2 t ¯ 3 3 + t ¯ 14 1 t ¯ 2 2 t ¯ 3 3 , ( T 1 4 1 3   k ¯ 12 k ¯ 34 ) 5 = ( T 1 4 1 3 ) 5   k ¯ 12 k ¯ 1 34 + T 1 4 1 3   ( k ¯ 12 k ¯ 34 ) 5 , ( T 1 4 1 3 ) 5 = 3 ( t ¯ 135 1 t ¯ 2 2 t ¯ 4 3 + t ¯ 13 1 t ¯ 25 2 t ¯ 4 3 + t ¯ 13 1 t ¯ 2 2 t ¯ 45 3 ) ,   ( k ¯ 12 k ¯ 34 ) 5 = k ¯ 5 12 k ¯ 34 + k ¯ 12 k ¯ 5 34 .
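The identities above are Leibniz's rule applied term by term; the first one, for the derivative of t ¯ 1 t ¯ 2 k ¯ 12 , can be verified symbolically. A sympy sketch with generic smooth functions standing in for t ¯ 1 , t ¯ 2 , k ¯ 12 (the names are illustrative, not the paper's notation):

```python
import sympy as sp

w1, w2, w3 = sp.symbols('w1 w2 w3')
# Generic smooth scalar functions of w standing in for t-bar_1, t-bar_2, k-bar^{12}
t1 = sp.Function('t1')(w1, w2, w3)
t2 = sp.Function('t2')(w1, w2, w3)
k12 = sp.Function('k12')(w1, w2, w3)

# Leibniz: d/dw3 (t1 t2 k12) = (t1' t2 + t1 t2') k12 + t1 t2 k12'
lhs = sp.diff(t1 * t2 * k12, w3)
rhs = (sp.diff(t1, w3) * t2 + t1 * sp.diff(t2, w3)) * k12 + t1 * t2 * sp.diff(k12, w3)
```

The remaining derivative formulas follow the same pattern applied to longer products.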
Theorem 4. 
Let w ^ ∈ R p be a biased standard estimate of w satisfying (1). Then t ^ = t ( w ^ ) ∈ R q is a standard estimate of t = t ( w ) :
$$\kappa(\hat t^{j_1},\ldots,\hat t^{j_r}) = \sum_{e=r-1}^{\infty} {}_e\bar a^{1\cdots r}\ \ \text{for}\ r\ge 1,\ \ 1\le j_1,\ldots,j_r\le q,$$
[Displayed equations: image Preprints 100389 i011 in the published preprint.]
and the other   e D ¯ 1 r =   e D j 1 j r needed for P r n ( x ) of (6) for the distribution of Y n t of (5) are as follows.
F o r P 0 ( x ) :     1 D ¯ 12 = 0   1 a ¯ 12 =   1 K ¯ 12 =   1 K j 1 j 2 = t ¯ 1 1 t ¯ 2 2   1 k ¯ 12 ] , F o r P 1 ( x ) :   1 D ¯ 1 =   1 t ¯ 1 k ¯ 1   1 a ¯ 1 = t ¯ 1 1   1 k ¯ 1 + t ¯ 12 1   1 k ¯ 12 / 2 , F o r P 2 ( x ) :     2 D ¯ 12 =   1 K ¯ 3 12     1 k ¯ 3 a ¯ 2 12 = t ¯ 1 1 t ¯ 2 2     2 k ¯ 12 + T 1 3 12     2 k ¯ 1 3 / 2 + T 1 4 12     1 k ¯ 12   1 k ¯ 34 / 2 + ( t ¯ 13 1 t ¯ 2 2 + t ¯ 1 1 t ¯ 23 2 )     1 k ¯ 12 + t ¯ 1 1 t ¯ 2 2     1 k ¯ 3 12   1 k ¯ 3 . F o r P 3 ( x ) :     2 D ¯ 1 =   1 K ¯ 1 1 +   0 K ¯ 2 1 ,     1 K ¯ 1 1 = (   1 K ¯ 1 ) 3     1 k ¯ 3 ,   0 K ¯ 2 1 = t ¯ 1 1   2 k ¯ 1 + t ¯ 12 1   1 k ¯ 1   1 k ¯ 2 / 2   2 a ¯ 1 = t ¯ 1 1   2 k ¯ 1 + t ¯ 12 1 (   2 k ¯ 12 +   1 k ¯ 1   1 k ¯ 2 +   1 K ¯ 3 12   1 k ¯ 3 ) / 2 + t ¯ 1 3 1 (   2 k ¯ 1 3 / 6 +   1 k ¯ 1   1 k ¯ 23 / 2 ) + t ¯ 1 4 1   1 k ¯ 12   1 k ¯ 34 / 8 ,
  3 D ¯ 1 3 =   2 K ¯ 4 1 3   k ¯ 1 4 = ( t ¯ 1 1 t ¯ 2 2 t ¯ 3 3     2 k ¯ 1 3 ) 4     1 k ¯ 4 + ( T 1 4 1 3     1 k ¯ 12   1 k ¯ 34 ) 5     1 k ¯ 5 . F o r P 4 ( x ) :     3 D ¯ 12 =   2 K ¯ 1 12 +   1 K ¯ 2 12 ,     2 K ¯ 1 12 =   2 K ¯ 3 12     1 k ¯ 3 ,   1 K ¯ 2 12 =   1 K ¯ 3 12     2 k ¯ 3 / 2 +   1 K ¯ 34 12     1 k ¯ 3   1 k ¯ 4 ,     4 D ¯ 1 4 =   3 K ¯ 5 1 4     1 k ¯ 5 ,     5 D ¯ 1 5 = 0 .
PROOF K ¯ 1 r ( w ) = K ¯ 1 r and   e K ¯ 1 r ( w ) =   e K ¯ 1 r are functions of w. By (3),
$$\bar K^{1\cdots r}(w_n) = \sum_{k=r-1}^{\infty} {}_k\bar K^{1\cdots r}(w_n)\quad \text{for}\ w_n = E\,\hat w = w + d_n,$$
where by (1), d n has i 1 th component $\bar d_n^1 = d_n^{i_1} = \sum_{e=1}^{\infty} {}_e\bar k^1$. Consider the Taylor series expansion
$${}_k\bar K^{1\cdots r}(w+d_n) = {}_k\bar K^{1\cdots r} + {}_k\bar K_1^{1\cdots r}\,\bar d_n^1 + {}_k\bar K_{12}^{1\cdots r}\,\bar d_n^1 \bar d_n^2/2! + \cdots = \sum_{e=0}^{\infty} {}_k\bar K_e^{1\cdots r},$$
say. Substituting into (3) gives (2) with
$${}_c\bar a^{1\cdots r} = \sum_{k+e=c} {}_k\bar K_e^{1\cdots r} = \sum_{e=0}^{c-r+1} {}_{c-e}\bar K_e^{1\cdots r}.$$
Also ${}_k\bar K_0^{1\cdots r} = {}_k\bar K^{1\cdots r}$, so that (4) holds with
$${}_c\bar D^{1\cdots r} = \sum_{e=1}^{c-r+1} {}_{c-e}\bar K_e^{1\cdots r}:\quad {}_r\bar D^{1\cdots r} = {}_{r-1}\bar K_1^{1\cdots r},\ \ {}_{r+1}\bar D^{1\cdots r} = \sum_{e=1}^{2} {}_{r+1-e}\bar K_e^{1\cdots r}.$$
The Edgeworth expansion (6) holds if {   e K ¯ 1 r } are replaced by {   e a ¯ 1 r } .

7. Discussion

Approximations to the distributions of estimates are of vital importance in statistical inference. Asymptotic normality uses just the 1st term of the Edgeworth expansion. That approximation can be greatly improved with further terms. When the estimate is a sample mean, basic results were given by Chebyshev, Charlier and Edgeworth in the 19th century, with major advances in the 20th century by Cramér, Rao and many others. See Stuart and Ord (1987) for some historical references. For a derivation of the Edgeworth expansion for a sample mean from the Gram-Charlier expansion, see Withers and Nadarajah (2009, 2014a) for the univariate and vector cases. These showed for the 1st time that the coefficients in these expansions are Bell polynomials in the cumulants.
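For orientation, the classical one-term expansion for a standardized sample mean, the case treated by these early authors, reads:

```latex
P\left( \sqrt{n}\,(\bar X - \mu)/\sigma \le x \right)
  = \Phi(x) - \phi(x)\,\frac{\lambda_3\,(x^2 - 1)}{6\sqrt{n}} + O(n^{-1}),
\qquad \lambda_3 = \kappa_3/\sigma^3 ,
```

where $\Phi$ and $\phi$ are the standard normal distribution and density; the present paper's $P_r(x)$ generalize the correction term to any standard estimate and to order $n^{-5/2}$.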
The first extension beyond a sample mean for univariate estimates was by Cornish and Fisher (1937) and Fisher and Cornish (1960). They assumed that the rth cumulant of the estimate was κ r ( w ^ ) = n 1 − r k r , where k r is a constant. However, in applications they assumed that w ^ was a Type A estimate, and collected terms. It was not until Withers (1984) that explicit results were given for a univariate Type A estimate. Major advances were made in Withers and Nadarajah (2010b). This gave explicit results for the terms in the Edgeworth expansion of a Type A or B estimate using Bell polynomials, as outlined in §1. It also allowed for expansions about asymptotically normal random variables. The advantage of this approach in greatly reducing the number of terms in each P r ( x ) was illustrated in Withers and Nadarajah (2012d, 2014c).
For univariate estimates, Cornish and Fisher (1937) also showed how to invert the Edgeworth expansion to obtain an expansion for the distribution quantiles. This was extended to Type A estimates in Withers (1984). For extensions to transformations of multivariate estimates, like t ( w ^ ) = ( w ^ − w ) ′ V − 1 ( w ^ − w ) , see Hill and Davis (1968) and Withers and Nadarajah (2012a, 2012e). An application to the amplitude and phase of the mean of a complex sample is given in Withers and Nadarajah (2013b).
Turning now to smooth functions of a Type A estimate, the 1st univariate results were given by Withers (1982, 1983). These built on a deep result of James and Mayne (1962), which showed that if w ^ is a Type A (or B) estimate of w, then a smooth function of w ^ , say t ( w ^ ) , is a Type A (or B) estimate of t ( w ) .
The extension from a vector to a matrix estimate is just a matter of relabelling: a single sum becomes a double sum. The first examples of this we know of are in Withers and Nadarajah (2011a, 2011b, 2011c, 2012b, 2020). The extension to a complex scalar, vector or matrix w was given in these same papers. The 1st of these 5 papers applied it to the multi-tone problem in electrical engineering, and the other 4 papers to channel capacity problems where w ^ is a weighted mean of complex matrix random variables, and n is no longer a sample size but the number of transmitters or receivers.
A different type of extension can be obtained by identifying a sample mean w ^ = X ¯ from a distribution F ( x ) with its empirical distribution F n ( x ) , and t ( w ) with T ( F ) , a smooth functional of F ( x ) , such as the bivariate correlation. T ( F n ) is a Type A estimate of T ( F ) , and its cumulant coefficients can be read off from those of t ( w ^ ) . In this way one obtains the Edgeworth expansion for n 1 / 2 ( T ( F n ) T ( F ) ) . See Withers (1983) and Withers and Nadarajah (2008, 2010c, 2012c, 2013a).
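As a small numerical sketch of such an expansion (our illustration, not from the papers cited: the one-term univariate Edgeworth formula for a sample mean, with lam3 denoting the standardized third cumulant of one observation), compare the corrected CDF with the exact one for the mean of n unit exponentials:

```python
import numpy as np
from scipy import stats

def edgeworth_cdf(x, n, lam3):
    """One-term Edgeworth CDF for the standardized mean of n iid variables:
    Phi(x) - phi(x) * lam3/(6 sqrt(n)) * (x^2 - 1), with error O(1/n)."""
    return (stats.norm.cdf(x)
            - stats.norm.pdf(x) * lam3 / (6 * np.sqrt(n)) * (x**2 - 1))

# Mean of n Exp(1) observations: n*Xbar ~ Gamma(n, 1) and lam3 = 2.
n, x = 40, 1.5
exact = stats.gamma.cdf(n + x * np.sqrt(n), a=n)  # P(sqrt(n)(Xbar - 1) <= x)
edge = edgeworth_cdf(x, n, lam3=2.0)
norm_approx = stats.norm.cdf(x)
```

At n = 40 and x = 1.5 the one-term correction cuts the error of the normal approximation by a factor of several.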
A caveat on the use of an Edgeworth expansion is that including more terms makes it more inaccurate in the tails. This is where the tilted expansions, also known as saddlepoint or small-sample expansions, become essential. Results for the density of Y n w for a sample mean were given in §5 of Daniels (1983) and §6 of Daniels (1987). Withers and Nadarajah (2010b) shows how the cumulant coefficients given in this paper can be used to obtain the tilted expansion for the distribution and density of any Type A estimate.
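A minimal sketch of the tilted (saddlepoint) idea for a sample mean (our illustration with assumed names, not the general formulas of Withers and Nadarajah (2010b)): for the mean of n Exp(1) variables the cumulant generating function is K(t) = -log(1 - t), and the saddlepoint density reproduces the exact Gamma density up to a constant Stirling factor, so its relative error is uniform across the tails.

```python
import numpy as np
from scipy import stats

def saddlepoint_density(xbar, n):
    """Saddlepoint density for the mean of n Exp(1) variables.
    K(t) = -log(1 - t); the saddlepoint t_hat solves K'(t_hat) = xbar."""
    t_hat = 1 - 1 / xbar                       # from K'(t) = 1/(1 - t) = xbar
    K, K2 = -np.log(1 - t_hat), 1 / (1 - t_hat)**2
    return np.sqrt(n / (2 * np.pi * K2)) * np.exp(n * (K - t_hat * xbar))

n = 10
exact = lambda x: stats.gamma.pdf(x, a=n, scale=1 / n)  # Xbar ~ Gamma(n, 1/n)
r_center = saddlepoint_density(1.0, n) / exact(1.0)
r_tail = saddlepoint_density(2.5, n) / exact(2.5)       # far right tail
```

The two ratios agree (the relative error does not grow in the tail), unlike an Edgeworth approximation whose relative tail error blows up.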

8. Conclusion

Let w ^ be a Type A estimate of an unknown parameter w R p . Its cumulant coefficients are defined by (1). They are the building blocks of the Edgeworth expansion (2) in powers of n 1 / 2 for the distribution of w ^ . Here n is typically the sample size. The coefficients needed for the rth term, P r ( x ) , are given in (6) and (7). Let t ( w ^ ) R q be a smooth function of w ^ . Then it is a Type A estimate of t ( w ) R q . This paper gives its cumulant coefficients in terms of those of w ^ and the derivatives of t ( w ) . Replacing the coefficients in (2) by these provides the Edgeworth expansion of n 1 / 2 ( t ( w ^ ) t ( w ) ) to n 5 / 2 .
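The leading term of these chain rules is the familiar delta method: to first order, the variance of t ( w ^ ) is the quadratic form of Var ( w ^ ) in the first derivatives of t. A toy check (our illustration; the paper's chain rules extend this to higher-order cumulant coefficients) with t ( w ) = w² and w ^ the mean of n Exp(1) observations, so w = 1:

```python
# t(w) = w^2 applied to Xbar, the mean of n Exp(1) observations (w = 1).
# Leading chain-rule (delta-method) term: Var t(Xbar) ~ t'(w)^2 Var(Xbar) = 4/n.
n = 100
# Exact moments via n*Xbar ~ Gamma(n, 1): E (n Xbar)^k = n(n+1)...(n+k-1).
m2 = n * (n + 1) / n**2                        # E Xbar^2
m4 = n * (n + 1) * (n + 2) * (n + 3) / n**4    # E Xbar^4
var_exact = m4 - m2**2                         # exact Var(Xbar^2)
var_delta = 4 / n                              # first-order approximation
```

At n = 100 the two agree to within a few percent; the gap is the O(n⁻²) contribution that the higher cumulant coefficients capture.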
The tilted Edgeworth expansion for w ^ , needed for accuracy in the tails, was given in Withers and Nadarajah (2010b) in terms of its cumulant coefficients. Replacing these by those of t ( w ^ ) given here gives the tilted Edgeworth expansion for t ( w ^ ) .
In many practical statistical estimation problems, simulations are a popular way to approximate distributions. However, these generally have the severe limitation that the parameter values chosen cannot represent the whole parametric landscape.
We have given a number of applications to electrical engineering. For example, numerical comparisons of the 1st 3 approximations to channel capacity for multiple arrays were given in Withers and Nadarajah (2020). In that case p = 1 , so that an expansion for the percentile was possible. There is a host of other possible practical applications in electrical engineering and other fields.
Finally we mention some possible future research directions. Chain rules for t ( w ^ ) can be applied to obtain the cumulant coefficients of its Studentized form. This can be followed up with expansions for the coverage probability of confidence regions, and with corrections making them more accurate. Cumulant coefficients can also be applied to bias reduction, Bayesian inference, confidence regions and power. The Edgeworth expansion can give a negative value in the tails of a distribution. Tilted expansions avoid this. Another way around it is to choose $y \in R^p$ such that $\Delta_{nx} = \mathrm{Prob}(Y_{nw} \leq x + n^{-1/2} y) - \Phi_V(x)$ is $O(n^{-1})$. For $p > 1$ such $y$ can be chosen in an infinite number of ways, so it may be possible to choose it so that $\Delta_{nx} = O(n^{-3/2})$ or smaller. One could also replace $n^{-1/2} y$ by $y(n) = n^{-1/2} y_0 + n^{-1} y_1$ say, giving more choices.
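To see the negativity concretely (our illustration, using the one-term Edgeworth density for a standardized mean, with lam3 the standardized third cumulant of one observation): the factor 1 + lam3 (x³ - 3x)/(6√n) multiplying the normal density is negative far enough into the left tail.

```python
import numpy as np
from scipy import stats

def edgeworth_density(x, n, lam3):
    """One-term Edgeworth density for a standardized mean; He_3(x) = x^3 - 3x."""
    return stats.norm.pdf(x) * (1 + lam3 / (6 * np.sqrt(n)) * (x**3 - 3 * x))

n, lam3 = 10, 2.0                             # e.g. the mean of 10 Exp(1) observations
center = edgeworth_density(0.0, n, lam3)      # positive, as a density should be
left_tail = edgeworth_density(-3.0, n, lam3)  # negative: the expansion fails here
```

Tilted expansions, being exponentials of real quantities, cannot go negative in this way.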


Appendix A: Some comments on the references

Here we give some comments and corrections to some of our papers.
Withers (1982): To the expression for a 22 on p.59 add c 12 + 2 c 16 where
c 12 = I 2 12 01 = t i t j k 1 , k i j k 1 k ,   c 16 = I 11 12 00 = t i k 1 i j t j k k 1 k .
This correction does not affect applications in which ω ^ is unbiased, as in Withers (1982, 1988).
In the expression on p.60 for ( a 22 ) 2 , I 31 23 22 should be c 36 = I 31 23 00 .
On p61, 4 lines before Table 1, replace n / 2 ) r 1 by n / 2 ) 1 r .
On p67 add to K 2 a b ,   t i a t j b k 1 , k i j k 1 k . For r = 3 , 4 see §4.
On p68 in (A3) replace r + 2 r by r + 2 2 . Changing to the simpler notation of Withers and Nadarajah (2008), denote the expressions for I 2 2 0 , . . . , I 301 222 000 and I 3 22 01 , . . . , I 31 222 001 given on p.58–59 by σ 2 = V = V ( w ) = t i k 1 i j t j ,   c 01 , c 02 , c 21 , and c 23 , c 11 , c 15 , c 19 , c 1 , 10 , c 31 , c 36 , c 3 , 10 , c 3 , 11 , c 22 , c 12 , c 14 , c 17 , c 32 , c 34 , c 35 , c 38 , c 39 . So the expressions on pp59-60 become
a 11 = c 01 + c 02 / 2 ,   a 32 = c 21 + 3 c 23 , a 22 = c 11 + c 15 + c 19 / 2 + c 1 , 10 + c 12 + 2 c 16 by Corollary 5.3 , a 43 = c 31 + 12 c 36 + 12 c 3 , 10 + 4 c 3 , 11 , ( a 11 ) 1 = σ 1 ( c 01 + c 02 / 2 ) σ 3 ( c 22 / 2 + c 23 ) ,   ( a 32 ) 2 = σ 3 ( c 21 3 c 22 3 c 23 ) , ( a 22 ) 2 = i = 1 3 V i A i ,   ( a 43 ) 4 = i = 2 3 V i B i for A 1 = c 11 c 14 / 2 + c 15 2 c 17 c 19 / 2 ,   A 2 = ( c 01 + c 02 / 2 ) ( c 22 + 2 c 23 ) c 32 c 34 + c 35 2 c 36 4 c 38 + 2 c 39 2 c 3 , 10 2 c 3 , 11 , A 3 = 7 ( c 22 + 2 c 23 ) 2 / 4 ,   B 2 = c 31 6 c 32 6 c 34 + 3 c 35 24 c 38 12 c 3 , 10 8 c 3 , 11 , B 3 = 6 ( c 22 + 2 c 23 ) ( c 21 + 3 c 22 + 3 c 23 ) .
We now illustrate how the results on p.60 were obtained. Let c r s denote c r s when t ( w ^ ) is replaced by its Studentized form t ( 0 ) ( w ^ ) . Then
( a 22 ) 2 = c 11 + c 15 + c 19 / 2 + c 1 , 10 + c 12 + 2 c 16 ,
The first few derivatives of t ( 0 ) ( ω ^ ) at w, and of V ( w ) , are
t 0 . i = V 1 / 2 t i ,   t 0 . i j = V 1 / 2 t i j V 3 / 2 ( t i V j + t j V i ) / 2 , t 0 . i j k = V 1 / 2 t i j k V 3 / 2 i j k 3 ( t i j V k + V i j t k ) / 2 + 3 V 5 / 2 i j k 3 t i V j V k / 4 , V i = 2 t i a k 1 a b t b + k 1 , i a b t a t b ,   and V i j = 2 t i j a k 1 a b t b + 2 t i a k 1 a b t j b + 2 i j 2 t i a k 1 , j a b t b + k 1 , i j a b t a t b ,   where i j 2 a i b j = a i b j + a j b i ,   i j k 3 a i j b k = a i j b k + a i k b j + a j k b i   for   a i j = a j i . So c 11 = V 1 c 11 ,   c 15 = V 1 c 15 V 2 M 1 , c 19 = V 1 c 19 2 V 2 M 4 + V 3 ( V M 2 + M 3 2 ) / 2 , c 1 , 10 = V 1 c 1 , 10 V 2 ( M 3 c 02 + 2 M 4 + V M 5 + 2 M 6 ) / 2 + 3 V 3 ( V M 2 + 2 M 3 2 ) / 4 , c 12 = V 1 c 12 ,   c 16 = V 1 c 16 V 2 ( V M 7 + c 01 M 3 ) / 2 ,   where
M 1 = t i t j k 2 i j k V k = c 32 + 2 c 36 ,   M 2 = V i k 1 i j V j = c 35 + 4 c 39 + 4 c 3 , 10 , M 3 = t i k 1 i j V j = c 22 + 2 c 23 = c ˜ 22 say ,   M 4 = t i k 1 i j t j k k k l V l = c 39 + 2 c 3 , 10 , M 5 = k 1 i j V i j = 2 c 1 , 10 + 2 c 19 + c 14 + 4 c 17 , M 6 = t i k 1 i j V j k k 1 k l t l = 2 c 3 , 10 + 2 c 3 , 11 + c 34 + 4 c 38 , M 7 = k 1 i V i = c 12 + 2 c 16 c 19 = V 1 c 19 + 2 V 2 ( 4 c 35 c 3 , 10 ) + V 3 c ˜ 22 2 / 2 , c 1 , 10 = i = 1 3 V i b i for b 1 = c 14 / 2 2 c 17 c 19 ,   b 3 = 3 c ˜ 22 2 / 2 , b 2 = c ˜ 22 c 02 / 2 c 34 + 3 c 35 / 4 4 c 38 + 2 c 39 c 3 , 10 2 c 3 , 11 .
Substitution into (A1) yields ( a 22 ) 2 . The other a r i = ( a r i ) r given on p.60 are obtained similarly.
Withers (1984):
p393 In the 5 line expression for f 4 ( x , L ) , replace ( x 3 x ) / 30 by ( x 3 3 x ) / 30 , and + 2473 ) / 7776 by + 2473 x ) / 7776 .
p394 In (3.4) replace ( x 2 1 ) / 2 , by ( x 2 1 ) / 6 ,
p394 In line 2 of Section 4, ’of Section 2’ should be ’of Section 3’.
The following corrigendum for a printer’s error appeared in
Withers, C.S. (1986) Jnl. Royal Statist. Soc. B, 48 p258:
The expression l 3 4 ( 252 x 5 1688 x 3 + 1511 x ) / 7776 should be added to the last line on p393.
That also gives g 5 ( y ) and g 6 ( y ) for the last line on p393.
Withers (1987):
p2371 (2.4): i = 1 n i C i need not converge. We only require an asymptotic expansion. The same is true for (3.2) p2375.
p2371, 3rd to last paragraph. Replace ’Appendix C, which also’ by ’Appendix D. Appendix C’
p2372, Example 2.2, line 2. Replace t 2 j ( θ ) by t ( 2 j ) ( θ ) , the 2 j derivative of t ( θ ) . In line 3 and in Example 3.1, ( y ) j = y ( y 1 ) ( y j + 1 ) .
p2377 line 2: replace (1.2) by (2.4)
p2377 line 3: replace / N | by / N
p2377 line 9: replace ’Section 2’ by ’Section 3’ Since E 1 = 0 ,   C 1 of Example 3.2 p2376 is unaffected.
p2378: these expressions for C j are correct if θ ^ is unbiased. In that case the terms on p2378 with a 1 in the top line are zero, so that C j has only m j terms where m 1 = 1 ,   m 2 = 3 ,   m 3 = 6 ,   m 4 = 12 . However, if θ ^ is biased, then these expressions for C j did not allow for contributions from replacing θ by E   θ ^ in the cumulant coefficients k j a 1 a r of (3.2). These are corrected in Withers, C.S. and Nadarajah, S. (Submitted), Bias-reduced estimates for parametric problems.
p2379 Appendix D. Add at start: For p = 1 see (3.4) of
Withers, C.S. and Nadarajah, S. (2013), Delta and jackknife estimates of low bias for functions of binomial and multinomial parameters. Journal of Multivariate Analysis, 118, 138–147. DOI: 10.1016/j.jmva.2013.02.006
Withers (1988):
p729: in the 10th line from the bottom, replace “their range 1 p ” by “their range 1 k
p732 line 9: T x i x j . . . a i a j . . . should be T x 1 x 2 . . . a 1 a 2 . . . .
p734: in the expression for h 1 in the 5th to last line, replace H e 2 by H e 2 / 6 .
p737: in line 11, “Section 1 and 2 of Withers (1983a)” should read “Section 1 and 2 of Withers (1983b)”.
p741: in the 4th equation from the bottom, at the end of the line, replace [ 1 j ] T 2 j , by [ 1 j ] 1 T 2 j ,
Withers and Nadarajah (2008):
p743 para 2, line 4. Replace ’about zero.’ by ’about zero when G puts mass 1 at x.’
p754, 756. Replace w i n a by w i n a . Different samples can have different weights.
p754 2nd to last line. The first term on RHS, c 11 / 2 , should be c 11 .
p755, line 6. There is a typesetting error in the first of the 2 lines for a 220 . Replace the first line with
a 220 = σ 2 ( c 11 c 12 c 14 / 2 + c 15 2 c 17 c 19 / 2 ) σ 4 [ ( c 01 + c 02 / 2 ) ( c 22 + 2 c 23 ) + c 32
p756. The 3rd and 4th lines after (8), should be
  [ 1 r ] a = T F   x a r d F a ( x ) ,   [ 1 , 12 , 2 r ] a 1 a 2 = T F   x 1 a 1 T F   x 1 x 2 a 1 a 2 T F   x 2 a 2 r d F a 1 ( x 1 ) d F a 2 ( x 2 ) ,
Withers and Nadarajah (2009):
p 272. Line 3: Convergence of S ( t ) is not needed, since B r k is a finite sum.
κ r on LHS(1.1) should be k r .
p 273 last paragraph: also f ( x ) / ϕ ( x ) is only meaningful if X is dimension-free.
p 275. (2.8) is correct but since B 1 = B 2 = 0 , (2.8) can also be written
f 2 / ϕ = 1 + k = 3 B k 2 / k ! .
In the 5th line of Section 3 insert after B r k ( α ) , `at α 1 = α 2 = 0 ’.
The first line of (3.1) should read
B r = ( B r k ϵ r 2 k :   1 k r / 3 )
(3.2) can be written K s = s 2 [ s / 3 ] where [ x ] is the integral part of x.
p276. In the expression for B 10 ,   B 10 , 18 should be B 10 , 1 .
p 277. In the 2nd to last line, b 64 should be B 61 .
p 278. In the expression for b 4 , the first term should be doubled. In the expression for b 5 , b 82 should be B 82 .
Withers and Nadarajah (2010a):
p3. In 5th and 6th to last lines, replace = n 2 K a b + O ( n 2 ) by = n 1 K a b + O ( n 2 )
p5. 2 lines above Theorem 2.2, replace “third moments” by “third central moments”
p7, lines 2-3: delete “and its Studentised version”
p7, lines 3-4: delete “or n 1 / 2 ( β a β a ) J ^ a a 1 / 2
p7, line 7-10: delete from “So, a one-sided” to “by O ( n 1 / 2 ) ;
p9. Move “Set ϕ = m = p + 1 .” on the last 2 lines of p9 and 1st line of p10 to just before “Set” on p9 line 9.
p10, lines 14-15: replace “ g i j where” by “ g i j .” and move the rest of the sentence, “ g i j = g N . i j ” to the line after (6.1) p9, preceded by the word “Set”
Withers and Nadarajah (2010b):
p1129 line 7: replace b k i 1 . . . i r ( Y n ) by b k i 1 . . . i r ( Y )
To the 9th to last line we can add
P ˜ 3 ( t , B ) = e 3 ( t ) + e 1 ( t ) e 2 ( t ) + e 1 ( t ) 3 / 6 ,
From p1130 line 6 to the end of §5: replace s by p, the dimension of θ .
p1130 line 7 is clearer if we replace line 8 by
for p ( n ) ( k ) ( y ) = k 1 k p p ( n ) ( y ) ,   k = / y k ,
p1130 line 9. replace H ν + k ( y ) by H ν + k ( y , V )
p1130 The 5th and 6th to last lines:
for example K j i 1 . . . i r = k j ( i 1 . . . i r ) ( t ) = i 1 i r k j ( t ) where i = / t i .
p1132. A note on Corollary 3.2. For the duality of I ( x ) and k 0 ( t ) see p176 of McCullagh, P., (1987) Tensor methods in statistics. Chapman and Hall, London.
p1133. In line 14 replace H ν + λ ( θ , V t ) by H ν + λ ( 0 , V t )
Withers and Nadarajah (2014a):
p81. In (2.14), replace J r ( x ) and J r ( x ) by J r ( x ) / r ! and J r ( x ) / r ! .
p81. The 2nd line after (2.15) should read
J r ( x ) = V y ¯ V x ¯ H ¯ r ( y ¯ , V ¯ ) ϕ V ¯ ( y ¯ ) d y ¯
The next line is correct:
J r ( x ) = V y ¯ V x ¯ ( y ¯ ) r ϕ V ¯ ( y ¯ ) d y ¯ .
p82. In (2.20), replace J r ( y 1 , y 2 ) by J r ( y 1 , y 2 ) / r ! .
p 85. In Withers, C.S. and Nadarajah, S. (2009), replace ’via’ by ’in terms of’.
Withers and Nadarajah (2014c):
p 676. Multiply RHS of (1.13) by n 1 / 2 . That is, replace it by
Y J K θ = ( n / s 2 K θ ) 1 / 2 ( θ ^ s 1 J θ ) ,   J 0 , K 1 .
p 699. In the editing of the original paper of 64 pages down to 21 pages, some details had to be removed. Here are some more details for Theorem 1.2 after (1.24).
1 = 0 if J 1 ,   2 = A ¯ 43 H 2 if K 2 , 3 = A ¯ 33 H 2 + A ¯ 54 H 4 if J 2 ,   4 = A ¯ 44 H 3 + A ¯ 65 H 5 if K 3 , 5 = A ¯ 34 H 2 + A ¯ 55 H 4 + A ¯ 76 H 6 if J 3 , 6 = A ¯ 45 H 3 + A ¯ 66 H 5 + A ¯ 87 H 7 if K 4 . r e = 0 if r 3 for e = h , f , g ,   4 e = [ 4 2 ] 0   e ( 4 2 ) , 5 e = [ 45 ] 0   e ( 45 ) + [ 34 ] 1   e ( 34 ) , 6 e = { [ π ] 0   e ( π ) : π = 5 2 , 46 , 4 3 } + { [ π ] 1   e ( π ) : π = 4 2 , 35 } + [ 3 2 ] 2   e ( 3 2 ) ,
where the e ( π ) , [ π ] i needed for e r ( x ) ,   1 r 6 , are as follows.
h ( i j ) = H i + j + 1 so that h ( 1 i 1 2 i 2 ) = H 1 i 1 + 2 i 2 + 1 . For r = 4 :   [ 4 2 ] 0 = A ¯ 43 2 / 2 ! ,   h ( 4 2 ) = H 7 , f ( 4 2 ) = H 7 H 1 H 3 2 ,   g ( 4 2 ) = H 7 2 H 3 H 4 + H 1 H 3 2 .
For r = 5 :   [ 45 ] 0 = A ¯ 43 A ¯ 54 ,   h ( 45 ) = H 8 , f ( 45 ) = H 8 H 1 H 3 H 4 ,   g ( 45 ) = H 8 H 3 H 5 H 4 2 + H 1 H 3 H 4 ,   1 = A ¯ 33 A ¯ 43 ,   h ( 34 ) = H 6 ,   f ( 34 ) = H 6 H 1 H 2 H 3 , g ( 34 ) = H 6 H 2 H 4 H 3 2 + H 1 H 2 H 3 . For r = 6 :   [ 5 2 ] 0 = A ¯ 54 2 / 2 ! ,   h ( 5 2 ) = H 9 , f ( 5 2 ) = H 9 H 1 H 4 2 ,   g ( 5 2 ) = H 9 2 H 4 H 5 + H 1 H 4 2 ,   [ 46 ] 0 = A ¯ 43 A ¯ 65 ,   h ( 46 ) = H 9 , f ( 46 ) = H 9 H 1 H 3 H 5 ,   g ( 46 ) = H 9 H 3 H 6 H 4 H 5 + H 1 H 3 H 5 ,   [ 4 3 ] 0 = A ¯ 43 3 / 3 ! ,   h ( 4 3 ) = H 11 ,   f ( 4 3 ) = H 11 3 H 1 H 3 H 7 H 2 H 3 3 + 3 H 1 2 H 3 3 , g ( 4 3 ) = H 11 3 H 3 H 8 3 H 4 H 7 + 3 H 1 H 3 H 7 + 3 H 3 2 H 5 + 6 H 3 H 4 2 9 H 1 H 3 2 H 4 H 2 H 3 3 + 3 H 1 2 H 3 3 ,   [ 4 2 ] 1 = A ¯ 43 A ¯ 44 ,   [ 35 ] 1 = A ¯ 33 A ¯ 54 ,   f ( 35 ) = H 7 H 1 H 2 H 4 ,   g ( 35 ) = H 7 H 3 H 4 H 2 H 5 ,   [ 3 2 ] 2 = A ¯ 33 2 / 2 ! ,   f ( 3 2 ) = H 5 H 1 H 2 2 ,   g ( 3 2 ) = H 5 2 H 2 H 3 + H 1 H 2 2 .
p 702 §2. In the 3rd equation of Theorem 2.1, ln   Γ ( μ ) should be ln   Γ ( m ) . p 704. Disregard Table 3.
Withers and Nadarajah (2015):
In (22) and the formulas for d 2 , , d 6 that follow, replace d r by
d ¯ r = r d r = c r ( x ) / ( r 1 ) ! .
As stated this gives c r = c r ( x ) = r ! d r . For example
c 2 = a 1 ,   c 3 = 2 a 1 2 + a 2 ,   c 4 = 3 ! a 1 3 + 7 a 1 a 2 + a 3 , c 5 = 4 ! a 1 4 + 46 a 1 2 a 2 + 11 a 1 a 3 + 7 a 2 2 + a 4 , c 6 = 5 ! a 1 5 + 326 a 1 3 a 2 + 101 a 1 2 a 3 + 127 a 1 a 2 2 + 16 a 1 a 4 + 25 a 2 a 3 + a 5 .
In the first reference, [1], replace J. J. Alfredo by J. A. Jimenez.

References

  1. Barndorff-Nielsen, O.E. and Cox, D.R. (1989). Asymptotic techniques for use in statistics. Chapman & Hall, London.
  2. Bhattacharya, R.N. and Ranga Rao, R. (1976). Normal approximation and asymptotic expansions. Wiley, New York.
  3. Cai, T. (2005) One-sided confidence intervals in discrete distributions. Journal of Stat. Planning and Inference, 131, 63–88.
  4. Comtet, L. (1974) Advanced combinatorics. Reidel, Dordrecht.
  5. Cornish, E.A. and Fisher, R. A. (1937) Moments and cumulants in the specification of distributions. Rev. de l’Inst. Int. de Statist., 5, 307–322. Reproduced in The collected papers of R.A. Fisher, 4.
  6. Daniels, H.E. (1983) Saddlepoint approximations for estimating equations. Biometrika 70, 89–96.
  7. Daniels, H.E. (1987) Tail probability expansions. Intern. Statist. Review 55, 37–48.
  8. Fisher, R. A. and Cornish, E.A. (1960) The percentile points of distributions having known cumulants. Technometrics, 2, 209–225.
  9. Hill, G.W. and Davis, A.W. (1968) Generalised asymptotic expansions of Cornish-Fisher type. Ann. Math. Statist., 39, 1264–1273.
  10. James, G.S. and Mayne, A.J. (1962). Cumulants of functions of random variables. Sankhya A 24, 47–54.
  11. Stuart, A. and Ord, K. (1987). Kendall’s advanced theory of statistics, 1. 5th edition. Griffin, London.
  12. Withers, C.S. (1982) The distribution and quantiles of a function of parameter estimates. Annals of the Institute of Statistical Mathematics, Series A, 34 (1), 55–68. Corrigendum: http://freepages.misc.rootsweb.com/~kitwithers/research/1982a.pdf.
  13. Withers, C.S. (1983) Expansions for the distribution and quantiles of a regular functional of the empirical distribution with applications to nonparametric confidence intervals. Annals Statist., 11 (2), 577–587.
  14. Withers, C.S. (1984) Asymptotic expansions for distributions and quantiles with power series cumulants. J. Roy. Statist. Soc. B, 46, 389–396.
  15. Withers, C.S. (1987) Bias reduction by Taylor series. Commun. Statist. - Theor. Meth., 16, 2369–2383.
  16. Withers, C.S. (1988) Nonparametric confidence intervals for functions of several distributions. Annals of the Institute of Statistical Mathematics, Part A, 40 (4), 727–746.
  17. Withers, C.S. (1989) Accurate confidence intervals when nuisance parameters are present. Comm. Statist. - Theory and Methods, 18, 4229–4259.
  18. Withers, C.S. and Nadarajah, S. (2008) Edgeworth expansions for functions of weighted empirical distributions with applications to nonparametric confidence intervals, Journal of Nonparametric Statistics, 20, 751–768. http://www.informaworld.com/smpp/1162981568-85928283/content~db=all~content=a905308777.
  19. Withers, C.S. and Nadarajah, S. (2009) Charlier and Edgeworth expansions for distributions and densities in terms of Bell polynomials. Probab. Math. Statist. 29, 271–280.
  20. Withers, C.S. and Nadarajah, S. (2010a) The bias and skewness of M-estimators in regression, Electronic Journal of Statistics, 4, 1–14. http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdfview_1&handle=euclid.ejs/1262876992.
  21. Withers, C.S. and Nadarajah, S. (2010b) Tilted Edgeworth expansions for asymptotically normal vectors. Annals of the Institute of Statistical Mathematics, 62 (6), 1113–1142. [CrossRef]
  22. Withers, C.S. and Nadarajah, S. (2010c) The distribution and quantiles of functionals of weighted empirical distributions when observations have different distributions. Statistics and Probability Letters, 80 (13-14), 1093–1102. [CrossRef]
  23. Withers, C.S. and Nadarajah, S. (2011a) Expansions for the distribution of M-estimates with applications to the multi-tone problem. ESAIM: Probability and Statistics, 15, 139–167. [CrossRef]
  24. Withers, C.S. and Nadarajah, S. (2011b) Channel capacity for MIMO systems with multiple frequencies and delay spread. Applied Mathematics and Information Sciences, 5 (3), 480–483.
  25. Withers, C.S. and Nadarajah, S. (2011c) Reciprocity for MIMO systems. European Transactions on Telecommunications, 22 (6), 276–281. [CrossRef]
  26. Withers, C.S. and Nadarajah, S. (2012a) Improved confidence regions based on Edgeworth expansions. Computational Statistics and Data Analysis, 56 (12), 4366–4380.
  27. Withers, C.S. and Nadarajah, S. (2012b) The distribution of Foschini’s lower bound for channel capacity. Advances in Applied Probability, 44 (1), 260–269. [CrossRef]
  28. Withers, C.S. and Nadarajah, S. (2012c) Cornish-Fisher expansions for sample autocovariances and other functions of sample moments of linear processes. Brazilian Journal of Probability and Statistics, 26 (2), 149–166. [CrossRef]
  29. Withers, C.S. and Nadarajah, S. (2012d) Cornish-Fisher expansions about the F-distribution. Applied Mathematics and Computation, 218 (15), 7947–7957. [CrossRef]
  30. Withers, C.S. and Nadarajah, S. (2012e) Transformations of multivariate Edgeworth-type expansions. Statistical Methodology, 9 (3), 423–439. [CrossRef]
  31. Withers, C.S. and Nadarajah, S. (2012f) Nonparametric estimates of low bias. Revstat Statistical Journal, 10 (2), 229–283.
  32. Withers, C. S. and Nadarajah, S. (2013a) Cornish-Fisher expansions for functionals of the partial sum empirical distribution. Statistical Methodology, 12, 1–15. [CrossRef]
  33. Withers, C.S. and Nadarajah, S. (2013b) The distribution of the amplitude and phase of the mean of a sample of complex random variables. Journal of Multivariate Analysis, 113, 128–152. [CrossRef]
  34. Withers, C.S. and Nadarajah, S. (2014a) The dual multivariate Charlier and Edgeworth expansions. Statistics and Probability Letters, 87 (1), 76–85. [CrossRef]
  35. Withers, C.S. and Nadarajah, S. (2014b) Asymptotic properties of M-estimators in linear and nonlinear multivariate regression models, Metrika, 77, 647–673. [CrossRef]
  36. Withers, C.S. and Nadarajah, S. (2014c) Expansions about the gamma for the distribution and quantiles of a standard estimate. Methodol. Comput. Appl. Probab., 16 (3), 693–713. [CrossRef]
  37. Withers, C.S. and Nadarajah, S. (2015) Edgeworth-Cornish-Fisher-Hill-Davis expansions for normal and non-normal limits via Bell polynomials. Stochastics: An International Journal of Probability and Stochastic Processes, 87 (5), 794–805. [CrossRef]
  38. Withers, C.S. and Nadarajah, S. (2020) The distribution and percentiles of channel capacity for multiple arrays. Sadhana, Indian Academy of Science, 45 (1), 1–25. https://www.ias.ac.in/article/fulltext/sadh/045/0155.
  39. Withers, C.S. and Nadarajah, S. (2022) Chain rules for multivariate cumulant coefficients. Stat. 11 (1). 2022:11:e451. [CrossRef]