
Fast Computation of the Non-Central Chi Square PDF Outside the HDR Under a Requisite Precision Constraint

Paul J. Gendron
College of Engineering, The University of Massachusetts Dartmouth, North Dartmouth, MA 02747-2300, USA
Computation 2015, 3(2), 326-335; https://doi.org/10.3390/computation3020326
Submission received: 27 February 2015 / Revised: 11 June 2015 / Accepted: 15 June 2015 / Published: 19 June 2015
(This article belongs to the Section Computational Engineering)

Abstract
Computation of the non-central chi square probability density function is encountered in diverse fields of applied statistics and engineering. The density is commonly computed as a Poisson mixture of central chi square densities, with the terms of the sum computed starting from the integer nearest half the non-centrality parameter. For evaluation points in either tail region, however, these terms are not the most significant, and starting with them increases the computational load without a corresponding increase in accuracy. The most significant terms are shown to be a function of the non-centrality parameter, the degrees of freedom, and the point of evaluation. A computationally simple approximation to the location of the most significant terms is presented, along with the exact solution based on a Newton–Raphson iteration. A quadratic approximation of the interval of summation is also developed in order to meet a requisite number of significant digits of accuracy. Computationally efficient recursions are used over these improved intervals. The method provides a means of computing the non-central chi square probability density function to a requisite accuracy as a Poisson mixture over all domains of interest.

1. Introduction

Efficiently evaluating the non-central chi square probability density function (PDF) is of practical importance to a number of problems in applied statistics [1]. For instance, the ratio of two densities f(x)/p(x), where p(x) is a non-central chi square density, may be necessary in an importance sampling scheme or for hypothesis testing. The non-central chi square density [2], where λ is the non-centrality parameter and ν the degrees of freedom, arises in the general case where x = e′Σ⁻¹e with e ~ N(μ, Σ), e ∈ Rᵛ, and Σ positive definite. It follows that x ~ χ²ν(λ) where λ = μ′Σ⁻¹μ. The notation e ~ N(μ, Σ) implies that e has the Gaussian distribution [3], $$N(\mu,\Sigma) = (2\pi)^{-\nu/2}\,|\Sigma|^{-1/2} \exp\!\left(-\tfrac{1}{2}(e-\mu)'\Sigma^{-1}(e-\mu)\right),$$ and x ~ χ²ν(λ) denotes that x is distributed as a non-central chi square with ν degrees of freedom and non-centrality parameter λ. For a non-central chi square variate, E[x] = λ + ν and Var[x] = 4λ + 2ν. The density at x can be represented [2] by
$$p(x;\nu,\lambda) = \frac{1}{2}\,(x/\lambda)^{(\nu-2)/4}\, I_{\nu/2-1}\!\left(\sqrt{x\lambda}\right) \exp\!\left(-(x+\lambda)/2\right)$$
where Iν(x) is the modified Bessel function of order ν. However, this representation is numerically problematic for large x or large λ. The asymptotic result $(x-(\lambda+\nu))/\sqrt{2\nu+4\lambda} \sim N(0,1)$ as λ or ν → ∞ can be useful, but it is not a universal substitute for direct computation of p(x; ν, λ), even for relatively large λ. For this reason the discrete mixture representation is preferred:
$$p(x;\nu,\lambda) = \sum_{n=0}^{\infty} \frac{(\lambda/2)^n}{n!}\, e^{-\lambda/2}\, p(x;\nu+2n,0) = E_{n|\lambda}\!\left[\,p(x;\nu+2n,0)\,\right]$$
where it is recognized that the expectation is over the Poisson density [4] with mean λ/2, P(n|λ) = (λ/2)ⁿ exp(−λ/2)/n!, and
$$p(x;\nu,0) = p_\nu(x) = \frac{x^{\nu/2-1}}{2^{\nu/2}\,\Gamma(\nu/2)}\, e^{-x/2}$$
is the central chi square probability density function (PDF) [3] with ν degrees of freedom. The non-central chi square density at the point x is therefore the average of an infinite set of central chi square densities evaluated at x.
The goal of this article is to present a method to compute the value of the probability density function of a non-central chi square variate using the mixture representation, and to do so with minimal computational effort to a specified accuracy in all domains of interest, including both the high density region (HDR) and the tail regions. In doing so, we illuminate the often employed standard method and discuss its computational weaknesses in the tails of the distribution. The cumulative distribution function (CDF) of the non-central chi square, $\int_0^x p(\alpha;\nu,\lambda)\,d\alpha$, has been addressed in [5], and the computational issues associated with the domains of evaluation have been addressed by a treatment dependent on the parameters {x, ν, λ} [6].

2. Computation of p(x; ν, λ)

To compute p(x; ν, λ) to a specified accuracy for arbitrary x, λ and ν in a computationally efficient manner, start with the representation of the non-central chi-square density as a mixture of central chi-square densities:
$$p(x;\nu,\lambda) = \sum_{n=0}^{\infty} f(x,n;\nu,\lambda), \qquad f(x,n;\nu,\lambda) = \frac{(\lambda/2)^n}{n!}\, e^{-\lambda/2}\, \frac{x^{\nu/2+n-1}}{2^{\nu/2+n}\,\Gamma(\nu/2+n)}\, e^{-x/2}$$
The sum is computed efficiently by ordering the terms according to their size, with the larger terms first, and by exploiting the following recursions for each successive term:
$$P(n+1\,|\,\lambda) = \frac{\lambda}{2(n+1)}\, P(n\,|\,\lambda), \qquad p_{\nu+2n+2}(x) = \frac{x}{2n+\nu}\, p_{\nu+2n}(x), \tag{5}$$
thus ensuring that each term is computed in a computationally efficient manner.
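In code, each recursion is a single multiply, so no factorials or gamma functions are re-evaluated once the starting term is in hand. A minimal sketch (the helper names are mine):

```python
def next_poisson_weight(P_n, n, lam):
    """P(n+1 | lam) from P(n | lam): multiply by lam / (2 (n+1))."""
    return lam / (2.0 * (n + 1)) * P_n

def next_central_pdf(p_n, n, nu, x):
    """p_{nu+2(n+1)}(x) from p_{nu+2n}(x): multiply by x / (2n + nu)."""
    return x / (2.0 * n + nu) * p_n
```

The descending directions follow by inverting the same factors.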
It is necessary to determine which terms contribute most to the sum and to include these in the summation in order to meet the requisite accuracy. The domain of summation should provide maximal computational efficiency for a given accuracy or, conversely, minimal error for a pre-specified computational budget.
A popular approach, which is the basis of most algorithms in use, is to start the recursion at the mode of the Poisson density
$$n_P = \big\lfloor E[n\,|\,\lambda] \big\rfloor = \lfloor \lambda/2 \rfloor$$
and proceed to sum in both directions until the relative accuracy is obtained. This approach is termed Algorithm 1.
Algorithm 1.
• n* = ⌊λ/2⌋; recursions of Equation (5); ϵ = 10^{−B}
• Compute P(n*|λ), p_{ν+2n*}(x)
• Initialize: k = 1, [p(x; ν, λ)]_1 = P(n*|λ) · p_{ν+2n*}(x)
• 1. If n* − k ≥ 0, compute R_k = P(n* + k|λ) · p_{ν+2n*+2k}(x) + P(n* − k|λ) · p_{ν+2n*−2k}(x); else, if n* − k < 0, compute R_k = P(n* + k|λ) · p_{ν+2n*+2k}(x), via Equation (5).
 2. Update [p(x; ν, λ)]_k = [p(x; ν, λ)]_{k−1} + R_k
 3. If R_k/[p(x; ν, λ)]_k > ϵ, then k = k + 1 and repeat.
 4. Else [p(x; ν, λ)]_{M1} = [p(x; ν, λ)]_k; stop.
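The following sketch realizes Algorithm 1 under the recursions of Equation (5); it is parameterized by the starting index so that the same loop also serves Algorithm 2 below. The function names are mine and the code is an illustration, not the paper's implementation.

```python
import math

def poisson_weight(n, lam):
    """P(n | lam) = (lam/2)^n exp(-lam/2) / n!, computed in log space."""
    return math.exp(n * math.log(lam / 2.0) - lam / 2.0 - math.lgamma(n + 1))

def central_chi2_pdf(x, dof):
    """Central chi-square density p_dof(x), computed in log space."""
    return math.exp((dof / 2.0 - 1.0) * math.log(x) - (dof / 2.0) * math.log(2.0)
                    - math.lgamma(dof / 2.0) - x / 2.0)

def ncx2_pdf(x, nu, lam, n_start=None, eps=1e-4):
    """Poisson-mixture evaluation of p(x; nu, lam): Algorithm 1 when n_start
    is the Poisson mode, Algorithm 2 when n_start = n*. Assumes x, lam > 0.
    (A well-chosen n_start keeps the leading term from underflowing.)"""
    if n_start is None:
        n_start = int(lam // 2)                 # Algorithm 1 starting index
    w_up = w_dn = poisson_weight(n_start, lam)
    p_up = p_dn = central_chi2_pdf(x, nu + 2 * n_start)
    total = w_up * p_up
    k = 1
    while True:
        n_hi = n_start + k                      # ascend via Equation (5)
        w_up *= lam / (2.0 * n_hi)
        p_up *= x / (2.0 * (n_hi - 1) + nu)
        R = w_up * p_up
        n_lo = n_start - k                      # descend via Equation (5)
        if n_lo >= 0:
            w_dn *= 2.0 * (n_lo + 1) / lam
            p_dn *= (2.0 * n_lo + nu) / x
            R += w_dn * p_dn
        total += R
        if R / total <= eps:                    # Step 3 relative-error test
            return total
        k += 1
```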
This approach is nearly optimal for x in the high probability density region; however, it is quite inefficient in the tail regions of the density. That is, the mode ⌊λ/2⌋ is quite distant from the maximally contributing terms of the summation in regions other than the high density region. For this reason, starting at ⌊λ/2⌋, as in Algorithm 1, wrongly focuses computation on terms that contribute little to the numerical result. The optimal solution starts at the location of the peak of f(x, n; λ, ν). Determine this starting point by noting that, as a function of n,
$$f(x,n;\lambda,\nu) \propto \frac{(x\lambda/4)^n}{\Gamma(n+1)\,\Gamma(\nu/2+n)}.$$
Define
$$\frac{d}{dn}\ln f(n) = \frac{d}{dn}\ln f(x,n;\lambda,\nu) = \ln(x\lambda/4) - \psi(n+1) - \psi(n+\nu/2)$$
and employ the series expansion for ψ(n + 1), the digamma function
$$\psi(n) = \ln(n) - \frac{1}{2n} - \sum_{k=1}^{\infty} \frac{B_{2k}}{2k\, n^{2k}}$$
where B_{2k} is the 2k-th Bernoulli number, to yield
$$\frac{d}{dn}\ln f(n) = \ln(x\lambda/4) - \ln\big((n+1)(n+\nu/2)\big) + \frac{1/2}{n+\nu/2} + \frac{1/2}{n+1} + O(n^{-2}).$$
Define
$$n^* = \arg\max_n f(x,n;\nu,\lambda)$$
and an approximate solution for the integer n ≥ 0 solves
$$\ln(x\lambda/4) = \ln\big((n+1)(n+\nu/2)\big),$$
implying
$$n^* \approx \max\!\left[\,0,\ \frac{1}{2}\left(\sqrt{x\lambda + (\nu/2+1)^2 - 2\nu}\, - (\nu/2+1)\right)\right]. \tag{11}$$
Notice that for x ≈ E[x|λ, ν] = λ + ν and λ > ν > 2 it follows that n*(x = λ + ν) ≈ λ/2 = E[n|λ], the expected value of the Poisson mixture weights. The advantage of Equation (11) is that it is relatively simple, requiring only a square root, and it accurately locates the largest terms of the sum in both the high density region and the tail regions. As will be shown, for domains outside of the high density region, n* is computationally efficient relative to starting at ⌊λ/2⌋.
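A sketch of the closed-form start of Equation (11) follows; note that the discriminant xλ + (ν/2 + 1)² − 2ν = xλ + (ν/2 − 1)² is never negative.

```python
import math

def n_star_approx(x, nu, lam):
    """Approximate argmax of f(x, n; nu, lam) over n, per Equation (11)."""
    disc = x * lam + (nu / 2.0 + 1.0) ** 2 - 2.0 * nu   # = x*lam + (nu/2 - 1)^2 >= 0
    return max(0, int(round(0.5 * (math.sqrt(disc) - (nu / 2.0 + 1.0)))))

print(n_star_approx(108.0, 8, 100.0))  # at x = lam + nu, lands near lam/2 = 50
```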
To determine argmax_n f(x, n; ν, λ) exactly, use a few Newton–Raphson iterations starting from Equation (11):
$$n_{k+1} = n_k + \left(\frac{d}{dn}\ln f(n_k)\right) I^{-1}(n_k), \qquad n_1 = n^* \text{ of Equation (11)}, \tag{12}$$
where
$$I(n) = -\frac{d^2}{dn^2}\ln f(x,n;\lambda,\nu) = \psi'(n+1) + \psi'(n+\nu/2).$$
Since the solution is needed only to the nearest integer, a single iteration of Equation (12) is sufficient.
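A sketch of one refinement step, under my reading of Equation (12), using SciPy's digamma and trigamma functions for the gradient and the information scale:

```python
import math
from scipy.special import digamma, polygamma

def n_star_newton(x, nu, lam, n0):
    """One Newton-Raphson step from n0 toward argmax_n ln f(x, n; nu, lam)."""
    g = math.log(x * lam / 4.0) - digamma(n0 + 1) - digamma(n0 + nu / 2.0)  # d/dn ln f
    I = polygamma(1, n0 + 1) + polygamma(1, n0 + nu / 2.0)                  # I(n0)
    return max(0, int(round(n0 + g / I)))
```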
Algorithm 2.
• Compute n*(x, ν, λ) by Equation (11) or (12); ϵ = 10^{−B}.
• Compute P(n*|λ), p_{ν+2n*}(x).
• Initialize: k = 1, [p(x; ν, λ)]_1 = P(n*|λ) · p_{ν+2n*}(x)
• 1. If n* − k ≥ 0, compute R_k = P(n* + k|λ) · p_{ν+2n*+2k}(x) + P(n* − k|λ) · p_{ν+2n*−2k}(x); else, if n* − k < 0, compute R_k = P(n* + k|λ) · p_{ν+2n*+2k}(x), via Equation (5).
 2. Update [p(x; ν, λ)]_k = [p(x; ν, λ)]_{k−1} + R_k
 3. If R_k/[p(x; ν, λ)]_k > ϵ, then k = k + 1 and repeat.
 4. Else [p(x; ν, λ)]_{M2} = [p(x; ν, λ)]_k; stop.
The result is computationally more efficient than Algorithm 1 for x outside the high probability region at a pre-specified accuracy. It does, however, require the initial computation of n* via Equation (11) or (12), and it retains the comparisons of Step 3 at each iteration. Employing the more accurate starting point of Equation (12) requires a Newton–Raphson iteration and thus evaluation of the information scale I(n). Since integer arithmetic is computationally insignificant relative to floating point arithmetic, the approach has merit for x outside the high probability region.
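In terms of the earlier sketches, Algorithm 2 is the same loop with the improved starting index, e.g.:

```python
x, nu, lam = 250.0, 4, 100.0                               # a point in the upper tail
n0 = n_star_newton(x, nu, lam, n_star_approx(x, nu, lam))  # Equations (11)-(12)
p = ncx2_pdf(x, nu, lam, n_start=n0, eps=1e-4)             # Algorithm 2
```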
There is a further advantage to computing the information scale: I(n) gives an approximate measure of the number of terms required. This brings us to the last method presented: eliminate Step 3 of Algorithm 2 by approximating the domain of summation a priori with a Laplace approximation [7] to f(x, n; λ, ν). The information scale is approximated by
$$I(n) = -\frac{d^2}{dn^2}\ln f(x,n;\lambda,\nu) \approx \frac{1}{n+1} + \frac{1}{n+\nu/2} + \frac{1/2}{(n+\nu/2)^2} + \frac{1/2}{(n+1)^2} + O(n^{-3}).$$
These derivatives are approximations based on the truncated expansion $\ln\Gamma(n) = (n - 1/2)\ln(n) - n + \tfrac{1}{2}\ln(2\pi) + 1/(12n) + O(n^{-3})$ [8], where O(n^{−p}) denotes a remainder of order n^{−p}.
The decimal place contribution of the term f(x, n; λ, ν) is
$$b_x(n) = \log_{10} \sum_{k=0}^{\infty} f(x,k;\lambda,\nu) \;-\; \log_{10} f(x,n;\lambda,\nu)$$
with a lower bound
$$b_x(n) > \log_{10} \max_k \{f(x,k;\lambda,\nu)\} \;-\; \log_{10} f(x,n;\lambda,\nu). \tag{15}$$
We can attain accuracy to B decimal places by including in the sum terms for which
$$\left\{\, n : \ln f(x,n;\lambda,\nu) > -B\ln(10) + \max_k \{\ln f(x,k;\lambda,\nu)\} \,\right\}.$$
Therefore let
$$n^*(x,\nu,\lambda) = \arg\max_n \ln f(x,n;\nu,\lambda) \ \text{(via Equation (12))}, \qquad D^*(B) = \left\{\, n : |n-n^*|^2 < 2B\ln(10)\, I^{-1}(n^*) \,\right\} \tag{17}$$
and include in the summation only those terms within the set D*(B). The advantage of computing the domain of summation a priori is that it obviates the need to compute relative errors, and the computation can be performed with a simple for loop. The terms can be summed from least to greatest, with terms near the edges of D*(B) summed first, in order to minimize accumulation errors. The disadvantage is that the set is larger than necessary, owing to the bound of Equation (15) as well as the Laplace approximation.
Algorithm 3.
• Compute n*(x, ν, λ) by Equation (12);
 L_b = n* − √(2B ln(10)/I(n*)), U_b = n* + √(2B ln(10)/I(n*)), per Equation (17).
• Compute P(n*|λ), p_{ν+2n*}(x)
• Initialize: k_l = k_u = 1, [p(x; ν, λ)]_1 = P(n*|λ) · p_{ν+2n*}(x)
• 1. Compute
  – R_{k_u} = P(n* + k_u|λ) · p_{ν+2n*+2k_u}(x),
  – R_{k_l} = P(n* − k_l|λ) · p_{ν+2n*−2k_l}(x), via Equation (5).
 2. Update [p(x; ν, λ)]_k = [p(x; ν, λ)]_{k−1} + R_{k_l} + R_{k_u}
 3. – If n* + k_u < U_b, then k_u = k_u + 1 and repeat; else set R_{k_u} = 0.
  – If n* − k_l > L_b, then k_l = k_l + 1 and repeat; else set R_{k_l} = 0.
  – If n* + k_u ≥ U_b and n* − k_l ≤ L_b, then [p(x; ν, λ)]_{M3} = [p(x; ν, λ)]_k; stop.
where it is understood that the comparisons of Step 3 need not be made explicitly, as a for loop over D*(B) performs this function implicitly.
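A sketch of Algorithm 3 in the same vein, reusing the earlier helpers; for brevity it evaluates each term directly rather than through the recursions of Equation (5), and the least-to-greatest summation is realized by sorting:

```python
import math
from scipy.special import polygamma

def ncx2_pdf_alg3(x, nu, lam, B=4):
    """Sum only the a-priori window D*(B) of Equation (17); no error tests."""
    n_s = n_star_newton(x, nu, lam, n_star_approx(x, nu, lam))
    I = polygamma(1, n_s + 1) + polygamma(1, n_s + nu / 2.0)   # I(n*)
    half = math.sqrt(2.0 * B * math.log(10.0) / I)             # window half-width
    lo, hi = max(0, int(n_s - half)), int(math.ceil(n_s + half))
    terms = [poisson_weight(n, lam) * central_chi2_pdf(x, nu + 2 * n)
             for n in range(lo, hi + 1)]
    return math.fsum(sorted(terms))  # least to greatest, curbing accumulation error
```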

3. Results

The three algorithms are compared to illustrate the importance of initialization of the algorithm by proper determination of n*(x, ν, λ) in the computation of the non-central chi square PDF in the tail regions. First, Figure 1 depicts the normalized joint densities
$$r(x,n\,|\,\lambda,\nu) = \log_{10} \frac{f(x,n;\lambda,\nu)}{\max_n f(x,n;\lambda,\nu)}$$
for ν = 4 and for diverse non-centrality parameters λ = 1, 8, 20 and 100. The initialization point n* for Algorithms 2 and 3 is shown, as well as the conventional starting point ⌊λ/2⌋ of Algorithm 1. The difference between the actual maximum and ⌊λ/2⌋ is quite stark outside the HDR. Also shown are the Laplace-approximate upper (U_b) and lower (L_b) bounds associated with B = 4 used for Algorithm 3.
Figure 2 likewise depicts the normalized densities r(x, n|λ, ν) for a degrees of freedom parameter of ν = 20 and a range of non-centrality parameters λ = 1, 8, 20 and 100. The increase in the degrees of freedom parameter relative to Figure 1 does not significantly alter n*(x, ν, λ). The upper and lower bounds of Algorithm 3 for B = 8 are shown. Note that for λ = 100, depicted in the lower right of Figures 1 and 2, the difference between n* and ⌊λ/2⌋ in the lower tail of the distribution is also quite stark.
To demonstrate the performance of the three algorithms, each is displayed relative to the value of the PDF computed from
$$[p(x;\nu,\lambda)]_{100} = \sum_{n\,:\,|n-n^*|<100} \frac{(\lambda/2)^n}{n!}\, e^{-\lambda/2}\, \frac{x^{\nu/2+n-1}}{2^{\nu/2+n}\,\Gamma(\nu/2+n)}\, e^{-x/2}$$
using Algorithm 3 and summing least to greatest to obviate accumulation errors. Define the relative errors of the three algorithms as
$$e_l(x) = \left|\frac{[p(x;\nu,\lambda)]_{M_l} - [p(x;\nu,\lambda)]_{100}}{[p(x;\nu,\lambda)]_{100}}\right|, \qquad l = 1,2,3.$$
Figure 3 displays the relative errors for the three algorithms, with ϵ = 10⁻⁴. To the left and center are Algorithms 1 and 2, respectively, and the figure demonstrates the close proximity of each to a relative error of 10⁻⁴. To the right is Algorithm 3; the bound of Equation (15) implies that more terms are included, so that an extra decimal place of accuracy is attained across all non-centrality parameters shown.
Figure 4 provides the number of terms necessary to attain the performance shown in Figure 3 for each of the algorithms. Only for λ = 100 does Algorithm 3 require more terms than Algorithm 1 within the HDR. It is noteworthy that Algorithm 3 outperforms Algorithm 1 by over 2 × 40 floating point operations per evaluation in the tail region, since Algorithm 1 requires two additional floating point operations per iteration that Algorithm 3 does not.

4. Conclusions

The Poisson mixture representation of the non-central chi-square PDF is a useful and commonly employed means of computing the PDF. The summation of terms is often initialized at the integer closest to half the non-centrality parameter, the mode ⌊λ/2⌋ of the Poisson mixture weights, which is efficient in the high probability density region (HDR). Outside of the HDR the method is computationally inefficient, since the dominant terms in the sum are distant from the mode ⌊λ/2⌋. This shortcoming is addressed here by locating the maximally contributing terms in the sum regardless of the domain of evaluation, in order to meet a requisite accuracy at reasonable computational load. The approach provides a simple means to retain the mixture representation for computation of the PDF outside of the HDR. Two methods are presented that accomplish this task outside the HDR with a computation versus accuracy trade-off comparable to that within the HDR.

Acknowledgments

Funding was provided by the Office of Naval Research, the University of Massachusetts College of Engineering, and the Space and Naval Warfare Systems Center Pacific.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Liu, H.; Tang, Y.; Zhang, H.H. A new chi-square approximation to the distribution of non-negative definite quadratic forms in non-central normal variables. Comput. Stat. Data Anal. 2009, 53, 853–856.
  2. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Distributions in Statistics: Continuous Univariate Distributions-2; John Wiley & Sons: Hoboken, NJ, USA, 1970; pp. 433–473.
  3. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Distributions in Statistics: Continuous Univariate Distributions-1; John Wiley & Sons: Hoboken, NJ, USA, 1970; pp. 415–475.
  4. Johnson, N.L.; Kotz, S.; Balakrishnan, N. Univariate Discrete Distributions, 3rd ed.; John Wiley & Sons: Hoboken, NJ, USA, 2005; pp. 156–205.
  5. Nuttall, A.H. Some integrals involving the Q_M function. IEEE Trans. Inf. Theory 1975, 21, 95–96.
  6. Ross, A.H. Algorithm for calculating the non-central chi-square distribution. IEEE Trans. Inf. Theory 1999, 45, 1327–1333.
  7. Laplace, P.S. Memoir on the probability of causes of events. Stat. Sci. 1986, 1, 364–378.
  8. Abramowitz, M.; Stegun, I. Handbook of Mathematical Functions, with Formulas, Graphs, and Mathematical Tables; National Bureau of Standards; Dover Publications, Inc.: Mineola, NY, USA, 1964.
Figure 1. The magnitude of the normalized terms r(x, n|λ, ν) for ν = 4. The peak is shown as a function of x for various non-centrality parameters λ. The difference between ⌊λ/2⌋ and the actual n* = argmax_n f(x, n|λ, ν) in the tail regions is apparent. Also shown are the upper (U_b) and lower (L_b) bounds of Algorithm 3 for the interval of summation associated with B = 4 decimal places of accuracy.
Figure 2. The magnitude of the normalized terms r(x, n|λ, ν) for ν = 20. The peak is shown as a function of x for various non-centrality parameters λ. The difference between ⌊λ/2⌋ and the actual n* = argmax_n r(x, n|λ, ν) in the tail regions is apparent. Also shown are the upper (U_b) and lower (L_b) bounds of Algorithm 3 for the interval of summation associated with B = 8 decimal places of accuracy.
Figure 3. Left to right, the relative errors with B = 4 for the three algorithms. Left, Algorithm 1, iterating from ⌊λ/2⌋. Center, Algorithm 2, iterating from n*. Right, Algorithm 3, iterating from n* with the index set computed a priori.
Figure 4. The number of terms necessary for the three algorithms to attain the accuracy shown in Figure 3, for ν = 4 and λ = 1, 8, 20 and 100. Algorithm 1, blue dashed; Algorithm 2, red dashed; Algorithm 3, black solid.
