Article

Evaluation of the Gauss Integral

Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
* Author to whom correspondence should be addressed. These authors contributed equally to this work.
Stats 2022, 5(2), 538-545; https://doi.org/10.3390/stats5020032
Submission received: 21 May 2022 / Revised: 8 June 2022 / Accepted: 9 June 2022 / Published: 10 June 2022

Abstract

The normal or Gaussian distribution plays a prominent role in almost all fields of science. However, it is well known that the Gauss (or Euler–Poisson) integral over a finite boundary, as is needed, for instance, for the error function or the cumulative distribution function of the normal distribution, cannot be expressed in terms of elementary analytic functions; this follows from the Risch algorithm. Nevertheless, various approximate solutions have been proposed. In this paper, we give a new approximation in terms of normal distributions by applying a geometric procedure iteratively to the problem.

1. Introduction

The normal or Gaussian distribution plays a prominent role in almost all fields of science, since a sum of random variables tends to the normal distribution whenever the rather general conditions of the central limit theorem [1] are satisfied. Besides the unbounded normal integral, the bounded integral or error function is crucial for the determination of probabilities. However, no closed-form analytic expression exists for this function, a fact that can be verified using the Risch algorithm [2,3]. Powerful modern computing facilities, and even simple personal computers, allow for a numerical calculation of the error function to any needed precision. However, if a multitude of such calculations has to be performed in a limited time, for instance, in Monte Carlo simulations, the processing time becomes essential. In order to speed up these calculations, both simple and more sophisticated approximations have been proposed in the literature. The spectrum of approximations includes, for instance, Gaussian exponential functions involving either numerical constants [4] or powers and square roots [5], approximations using ordinary and hyperbolic tangent functions [6], a rational function of an exponential function with the exponent given by a power series [7], and an approximation by Jacobi theta functions [8]. Without knowing the error function explicitly, expectation values can be calculated by approximating the normal distribution by a series of ordinary exponential functions [9].
The present paper continues this line of work. Employing a geometric approach, we provide an approximation of the squared error function by a finite sum of $N$ Gaussian exponential functions with different widths, whose values are constrained to fixed intervals. We show that, by fine-tuning these width parameters, one can optimise the precision, which, even for the leading order $N = 1$, is better than the error estimates given by the constraints in Ref. [5]. In addition, by choosing $N$ appropriately large, one can achieve arbitrary precision. On the other hand, even on a personal computer, our leading order approximation is calculated 34 times faster than an exact numerical evaluation, with the processing time for higher orders being multiplied by $N$.
Our paper is organised as follows. In Section 2, we introduce the basic concepts for the calculation of the Gaussian integral that are necessary for the understanding of our geometric approach. The precisions of the leading order approximation obtained here and simple, straightforward extensions of this approximation are discussed in Section 3. In Section 4, we explain the geometric background for our approximation and provide a systematic way to create higher order approximations. The iterative construction of higher order approximations is explained in general in Section 5 in terms of partitions before we turn to the partition into N = 2 p intervals for increasing values of p. In Section 6, we explain a similar ternary construction. In Section 7, we provide our conclusions and an outlook on possible extensions. The convergence of our iterative procedure is discussed in more detail in Appendix A. In addition, we discuss the continuum limit, which, of course, cannot be part of the algorithm but allows, as a bonus, for a different representation of the error function.

2. Basic Concepts

The error function is based on the standard normal density distribution
$\rho(x) = \frac{1}{\sqrt{2\pi}}\, e^{-x^2/2},$  (1)
which itself does not carry a direct practical meaning; in practice, one wants the integral of this density over a bounded interval $[-t, t]$, which gives the probability $P(t)$ of finding the result within this interval,
$P(t) = \int_{-t}^{t} \rho(x)\, dx = \int_{-t}^{t} \rho(y)\, dy.$  (2)
From Equations (1) and (2), the square of the probability is given by
$P^2(t) = \int_{-t}^{t} \rho(x)\, dx \int_{-t}^{t} \rho(y)\, dy = \frac{1}{2\pi} \int_{-t}^{t}\!\int_{-t}^{t} e^{-(x^2+y^2)/2}\, dx\, dy,$  (3)
where the integration area is the square shown in Figure 1A. Introducing polar coordinates $x = r\cos\varphi$ and $y = r\sin\varphi$, one obtains
$P^2 = \frac{1}{2\pi} \iint e^{-r^2/2}\, r\, dr\, d\varphi.$  (4)
The integral in Equation (4) is analytically calculable if the integration is performed over the interior of a circle with radius $R$. Indeed,
$I^2(R) = \frac{1}{2\pi} \int_0^{2\pi} d\varphi \int_0^R e^{-r^2/2}\, r\, dr = 1 - e^{-R^2/2}.$  (5)
Here, the function $I(R)$ increases monotonically with $R$, as the integral in Equation (4) is taken over a positive function. This is why $I(R = m) < P < I(R = M)$, with $m = t$ and $M = t\sqrt{2}$; see Figure 1A. Therefore,
$P(t) = \sqrt{1 - e^{-k^2(t)\, t^2/2}},$  (6)
where $1 < k(t) < \sqrt{2}$. Analysing the set of Equations (1), (2), and (6) on a PC, one concludes that $k(t)$ is even more constrained, namely $1 < k(t) < 2/\sqrt{\pi}$. Hence,
$P_m(t) < P(t) < P_M(t),$  (7)
and $P_M(t) - P_m(t)$ has a maximum of 0.0592 at $t = t_0 = 1.0668$. Inequality (7) proves to be remarkably elegant, easy to remember, and much more accurate than the best result of Ref. [5], which, transformed into the present formalism, reads
$P_m(t) = 1 - \frac{4}{\sqrt{2\pi}}\, \frac{\exp(-t^2/2)}{3t + \sqrt{t^2 + 8}},$  (8)
$P_M(t) = 1 - \frac{1}{\sqrt{2\pi}} \left( \sqrt{t^2 + 4} - t \right) \exp(-t^2/2).$  (9)
The largest range $P_M(t) - P_m(t) \approx 0.330$ for these constraints occurs at $t = 0$. Compared to this, even at the leading order considered so far, our value $P_M(t) - P_m(t) \le 0.0592 \ll 0.330$ is much more restrictive. In more detail, if, in Equation (6), we choose $k = 2/\sqrt{\pi}$, the error stays below 0.006, but if we take $k = 1.116$, the maximal error is only 0.0033. Even modern reviews on this subject do not report better results [10].
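The leading order bound can be checked numerically. The following sketch (ours, not part of the paper; function names are our own) compares Equation (6) against Python's `math.erf`, using $P(t) = \operatorname{erf}(t/\sqrt{2})$ as ground truth:

```python
import math

def P_exact(t):
    # P(t) = erf(t / sqrt(2)) is the exact probability mass of N(0, 1) on [-t, t]
    return math.erf(t / math.sqrt(2.0))

def P_leading(t, k):
    # Leading-order approximation of Equation (6): sqrt(1 - exp(-k^2 t^2 / 2))
    return math.sqrt(1.0 - math.exp(-k * k * t * t / 2.0))

ts = [0.001 * i for i in range(1, 6001)]          # grid on (0, 6]
err_polya = max(abs(P_exact(t) - P_leading(t, 2.0 / math.sqrt(math.pi))) for t in ts)
err_tuned = max(abs(P_exact(t) - P_leading(t, 1.116)) for t in ts)
print(f"k = 2/sqrt(pi): max error {err_polya:.5f}")   # the text reports below 0.006
print(f"k = 1.116:      max error {err_tuned:.5f}")   # the text reports 0.0033
```

The grid on $(0, 6]$ suffices, since both the exact and the approximate $P$ tend to 1 for larger $t$.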

3. Simple Extensions

By adding additional terms to the leading order approximation, one can increase the precision further. For the normal integral
$P(t) = \frac{1}{\sqrt{2\pi}} \int_{-t}^{t} e^{-x^2/2}\, dx$  (10)
and its leading order approximation with $k_1 = 1.116$, one has
$\left| P(t) - \sqrt{1 - e^{-k_1^2 t^2/2}} \right| < 0.0033.$  (11)
However, the precision increases by a factor of $14 \approx 0.0033/0.00024$ by using
$\left| P(t) - \sqrt{1 - \tfrac{1}{2}\left( e^{-k_1^2 t^2/2} + e^{-k_2^2 t^2/2} \right)} \right| < 0.00024,$  (12)
where $k_1 = 1.01$ and $k_2 = 1.23345$. The next order of precision is given by
$\left| P(t) - \sqrt{1 - \tfrac{1}{3}\left( e^{-k_1^2 t^2/2} + e^{-k_2^2 t^2/2} + e^{-k_3^2 t^2/2} \right)} \right| < 0.00003,$  (13)
where $k_1 = 1.02335$, $k_2 = 1.05674$, and $k_3 = 1.28633$. Therefore, this formula with three exponentials is at least 8 times more precise than the one with two exponentials, and at least 110 times more precise than Equation (11) with one exponential only. Finally, it is $11{,}000 \approx 0.33/0.00003$ times more precise than the approximation in Ref. [5]. As it turns out, the parameters $k_i$ for $i$ running from 1 to $N$ take values between 1 and $\sqrt{2}$, while the sum of the exponential factors is divided by $N$. Still, there is a degree of arbitrariness in the determination of these parameters. In order to remove this arbitrariness, in the following, we develop an iterative method based on geometry.
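The three quoted error levels can be reproduced with a short script (our own sketch; names are ours, `math.erf` serves as ground truth):

```python
import math

def P_exact(t):
    return math.erf(t / math.sqrt(2.0))

def P_multi(t, ks):
    # Equations (11)-(13): sqrt(1 - (1/N) * sum_n exp(-k_n^2 t^2 / 2))
    s = sum(math.exp(-k * k * t * t / 2.0) for k in ks) / len(ks)
    return math.sqrt(1.0 - s)

PARAMS = {1: [1.116],
          2: [1.01, 1.23345],
          3: [1.02335, 1.05674, 1.28633]}

ts = [0.001 * i for i in range(1, 6001)]
errs = {n: max(abs(P_exact(t) - P_multi(t, ks)) for t in ts)
        for n, ks in PARAMS.items()}
for n, e in errs.items():
    print(f"N = {n}: max error {e:.6f}")
```

Each additional exponential term costs one more `math.exp` call per evaluation, which is the linear growth in processing time mentioned in the introduction.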

4. Geometric Background of Our Procedure

In order to understand our method, we refer to Figure 1B for the first step. Starting with the square of side length $2t$ representing the square $P^2(t)$ of the probability, we rotate this square by an angle of $\pi/4$ to obtain $P^2(t)$ again. The union and intersection of these two overlaid squares form two eight-angle figures. In order to obtain the area of the larger figure, one has to subtract the area of the smaller figure from twice the square area, as the smaller figure is covered twice by the two squares. Accordingly, for the integrals over the probability density, one obtains the relation
$\Omega(t) = 2P^2(t) - \omega(t)$  (14)
between the probabilities. Now,
$\omega(t) = 1 - e^{-k_1^2 t^2/2}, \qquad \Omega(t) = 1 - e^{-k_2^2 t^2/2},$  (15)
where
$1 < k_1 < 1/\cos\theta, \qquad 1/\cos\theta < k_2 < \sqrt{2},$  (16)
and the angle $\theta = \pi/8$ is enclosed between the $x$ axis and the vector $u$ shown in Figure 1B. We study $\Delta(k_1, k_2, t) = P(k_1, k_2, t) - P(t)$ with
$P(k_1, k_2, t) = \sqrt{1 - \tfrac{1}{2}\left( e^{-k_1^2 t^2/2} + e^{-k_2^2 t^2/2} \right)}.$  (17)
Drawing three-dimensional graphics and looking for a minimum of | Δ ( k 1 , k 2 , t ) | , one obtains
$|\Delta(k_1, k_2, t)| < 0.00024,$  (18)
where k 1 = 1.01 and k 2 = 1.23345 . This is the starting point.

5. Basic Construction of the Procedure

In order to construct the iteration, one performs a partition of the figure describing $\omega(t)$, $\Omega(t)$, or both, by repeating the geometric construction shown before. For instance, taking only the larger eight-angle figure describing $\Omega(t)$, one can rotate this figure by an angle $\theta = \pi/16$ and overlay the new figure with the old one. In doing so, one can separate a new larger and a new smaller 16-angle figure, in the same way as was done before for the eight-angle figures. Accordingly, by geometric means, one obtains new constraints. In order to describe the procedure in a unique way, in each iterative step we rename $k_n$ as $k_{2n}$, and, if this new $k_{2n}$ is subject to a partition, the smaller and larger figures of this partition are assigned the values $k_{2n-1}$ and $k_{2n}$, respectively.
Using the case of the previous section as an illustrative example, we may keep the smaller eight-angle figure related to $\omega(t)$ but apply a partition to the larger eight-angle figure related to $\Omega(t)$. Accordingly, $k_1$ is renamed $k_2$ and $k_2$ is renamed $k_4$, and this new $k_4$ is then split up into $k_3$ and $k_4$. The constraint for the lowest parameter $k_2$ (the former $k_1$) remains the same,
$1 \le k_2 \le 1/\cos(\pi/8),$  (19)
whereas, for the two new higher parameters, we obtain
$1/\cos(2\pi/16) \le k_3 \le 1/\cos(3\pi/16)$  (20)
and
$1/\cos(3\pi/16) \le k_4 \le 1/\cos(4\pi/16) = \sqrt{2}.$  (21)
The intervals are consecutive, with $\pi/8$ written as $2\pi/16$ in order to indicate the new partition. Finally, the upper limit stays at $1/\cos(4\pi/16) = 1/\cos(\pi/4) = \sqrt{2}$. For these values $k_2$, $k_3$, and $k_4$, one obtains the approximation
$P(t) \approx P(k_2, k_3, k_4, t)$  (22)
with
$P(k_2, k_3, k_4, t) = \sqrt{1 - \tfrac{1}{2} e^{-k_2^2 t^2/2} - \tfrac{1}{4} e^{-k_3^2 t^2/2} - \tfrac{1}{4} e^{-k_4^2 t^2/2}}$  (23)
because of the geometric transformations of Figure 1 and the corresponding double use of Equation (14). Note that the set of parameters $k_2$, $k_3$, $k_4$ is different from the set $k_1$, $k_2$, $k_3$ in Equation (13). Indeed, if, for Equation (23), one uses $k_2 = 1.025187$, $k_3 = 1.1249$, and $k_4 = 1.31336$, the precision improves to 0.000015. Still, this example constitutes only half of an iteration step, and one can achieve a better result by also performing the partition for the smaller eight-angle figure, leading to four parameters separated uniformly,
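The half-step formula of Equation (23), with its unequal weights $1/2$, $1/4$, $1/4$, can also be verified directly (our own sketch; `math.erf` is the reference):

```python
import math

def P_exact(t):
    return math.erf(t / math.sqrt(2.0))

def P_half_step(t, k2=1.025187, k3=1.1249, k4=1.31336):
    # Equation (23): weights 1/2, 1/4, 1/4 arise because only the
    # larger eight-angle figure was partitioned (half an iteration step)
    s = (0.5 * math.exp(-k2 * k2 * t * t / 2.0)
         + 0.25 * math.exp(-k3 * k3 * t * t / 2.0)
         + 0.25 * math.exp(-k4 * k4 * t * t / 2.0))
    return math.sqrt(1.0 - s)

ts = [0.001 * i for i in range(1, 6001)]
err = max(abs(P_exact(t) - P_half_step(t)) for t in ts)
print(f"max error of the half-step formula: {err:.7f}")  # the text reports 0.000015
```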
$1 \le k_1 \le \frac{1}{\cos(\pi/16)} \le k_2 \le \frac{1}{\cos(\pi/8)} \le k_3 \le \frac{1}{\cos(3\pi/16)} \le k_4 \le \sqrt{2}.$  (24)
Therefore, a full iteration step increases the number of parameters $k_n$ by a factor of two, and, after $p$ full iteration steps, one has $N = 2^p$ parameters. Each iteration step is finalised by optimising the $N$ (or fewer) parameters $k_n$. For any finite (even very large) $N$, the constraints
$k_n^{\min}(N) \le k_n \le k_n^{\max}(N)$  (25)
with $k_n^{\min}(N) = k_{n-1}^{\max}(N)$ can be calculated from similar geometric observations. In practice, for a small set of parameters, we use a graphical method. For instance, the method applied to obtain the three values $k_2$, $k_3$, and $k_4$ in Equation (23) was to look for the solution of the system of three equations
$Q(t = 1) = 0, \qquad Q(t = \sqrt{2}) = 0, \qquad Q(t = 2) = 0,$  (26)
where
$Q(t) = P^2(k_2, k_3, k_4, t) - P^2(t).$  (27)
The values $t = 1$, $\sqrt{2}$, and 2 serve as nodes for this approximation. Their choice depends on the application of the approximation and has to be adjusted to the number of width parameters to be determined. Each equation in (26) can be treated individually, so the solution is easy to find. From $Q(t=1) = 0$, one extracts the function $k_2 = k_2(k_3, k_4)$. Inserting this solution into $Q(t=\sqrt{2}) = 0$, one extracts the two positive functions $k_3 = k_3(k_4)$ and $k_3 = \tilde{k}_3(k_4)$. Inserting these solutions into $Q(t=2) = 0$ and plotting the function $Q(t=2, k_4)$, one finds the position of the zero, which proves to be $k_4 = 1.31336$. Using this knowledge, one obtains $k_3 = k_3(k_4)$ and $k_2 = k_2(k_3, k_4)$ as well. However, as $k_3 = \tilde{k}_3(k_4)$ exists for $k_4 < 1$ only, it is not a valid solution, since $k_n \ge 1$ for all $n$. Note that the graphical method can no longer be applied for $N \ge 4$. Instead, we used a random number generator to create values for the parameters $k_n$ in the respective intervals in Equation (24). Proceeding in this way, for $N = 4$ ($p = 2$), we obtain the values $k_1 = 1.00725$, $k_2 = 1.04665$, $k_3 = 1.12192$, and $k_4 = 1.3129$, and a precision of 0.00001, which is, again, the smallest error for a given $N$. As becomes obvious, the smallest errors are obtained for uniform partitions. This is the case not only for $N$ being a power of 2 but also for $N$ being a power of 3, as discussed in the next section.
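The random search over the geometric intervals of Equation (24) can be sketched as follows. This is our own minimal re-implementation of the idea, not the authors' code; the exact sampling strategy and number of trials are assumptions:

```python
import math
import random

def P_exact(t):
    return math.erf(t / math.sqrt(2.0))

def max_err(ks, ts):
    # Worst deviation of the N-term approximation over the grid ts
    n = len(ks)
    return max(abs(P_exact(t)
                   - math.sqrt(1.0 - sum(math.exp(-k * k * t * t / 2.0)
                                         for k in ks) / n))
               for t in ts)

# Interval edges of Equation (24) for N = 4: 1/cos(pi*n/16), n = 0..4
edges = [1.0 / math.cos(math.pi * n / 16.0) for n in range(5)]

random.seed(1)                                # reproducible sketch
ts = [0.01 * i for i in range(1, 601)]        # t in (0, 6]
best_ks, best_err = None, float("inf")
for _ in range(2000):
    # Draw one k_n uniformly from each consecutive interval
    ks = [random.uniform(edges[n], edges[n + 1]) for n in range(4)]
    e = max_err(ks, ts)
    if e < best_err:
        best_ks, best_err = ks, e
print(best_err, [round(k, 5) for k in best_ks])
```

Because every admissible draw lies between the extremal choices $k_n = k_n^{\min}$ and $k_n = k_n^{\max}$, even an unlucky draw stays within the spread bound derived in Appendix A; the random search then narrows this down further.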

6. A Similar Ternary Procedure

As the approximation (13) attained high precision, we sought and found a geometric interpretation for it, as shown in Figure 2. In this ternary approach, the initial step is to rotate the square not by an angle of $\pi/4$, as in the previous approach, but by an angle of $\pi/6$. The overlapping squares in Figure 2 can be split up into three 12-angle figures that, at the same time, determine the constraints for the parameters $k_i$,
$1 \le k_1 \le 1/\cos(\pi/12) \le k_2 \le 1/\cos(\pi/6) \le k_3 \le 1/\cos(\pi/4).$  (28)
Note that the values $k_1 = 1.02335$, $k_2 = 1.05674$, and $k_3 = 1.28633$ chosen in Equation (13) fit into these intervals. This procedure can be continued iteratively in a ternary way, i.e., by rotating the 12-angle figures by an angle of $\pi/18$ and, in general, by the angle $\alpha = \pi/(2 \cdot 3^p)$. In Appendix A, we deal with the convergence of this and the previous procedure for increasing values of $p$.

7. Conclusions and Outlook

In this paper, we have given an approximation for the Gauss integral with a finite boundary in terms of the square root of one minus a normalised sum of Gaussian exponential functions, each of them depending on the (symmetric) boundary $[-t, t]$ of the integral and on one of at most $N$ parameters $k_n$. By simple geometric means, it was shown that these parameters are constrained to intervals given by the inverse cosine of equally distributed angles. We performed this approximation procedure in both a binary ($N = 2^p$) and a ternary ($N = 3^p$) way and showed that the procedure converges for an increasing degree $p$. The continuum limit leads to a further approximation.

Author Contributions

Conceptualization, D.M.; methodology, D.M. and S.G.; writing—original draft preparation, D.M.; writing—review and editing, S.G. and D.M.; visualization, D.M. and S.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the European Regional Development Fund under Grant No. TK133.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. On the Convergence of the Procedure

In general, one has
$P^2(k_1, k_2, \ldots, k_N, t) = 1 - \frac{1}{N} \sum_{n=1}^{N} e^{-k_n^2 t^2/2}.$  (A1)
For geometric reasons, in the limit $N \to \infty$, the largest parameter in the infinite set $\{k_1, k_2, \ldots\}$ must be $\sqrt{2}$, whereas the lowest one must be 1. The reason is that, by using the technique of Figure 1B over and over again, the final areas of integration turn into perfect circles with radii between $t$ and $t\sqrt{2}$. The convergence of this method becomes obvious by considering the backward iteration step. Suppose we start with an approximation for a given set of parameters $k_n$ with a given precision. The degeneration of two adjacent parameters means that a partition is skipped, leading to a less precise approximation, as the freedom of choosing different parameter values is lost.
For the general analysis, we quantify the convergence by fixing the parameters in Equation (A1) to the upper boundaries, $k_n = 1/\cos(\pi n/(4 \cdot 2^p))$, and analyse
$\Delta_N(N, t) = \Delta(k_1, \ldots, k_N, t) = P(k_1, \ldots, k_N, t) - P(t)$  (A2)
for $N = 2^p$ and a fixed value of $t$, e.g., $t_0 = 1.0668$, at which the uncertainty range of Inequality (7) turns out to be maximal. One obtains the values in Table A1, demonstrating the convergence of the approximations.
Table A1. Deviations in the uniform N = 2^p approximations for increasing p. Note that values higher than p = 15 could not be checked with the PC at hand.

p                    | 11      | 12      | 13      | 14       | 15
|Δ_N(2^p, t_0)|      | 0.00004 | 0.00002 | 0.00001 | 0.000005 | 0.0000026
For the ternary procedure, we again fix the parameters to the upper boundaries, $k_n = 1/\cos(\pi n/(4 \cdot 3^p))$, and analyse $\Delta_N(N, t)$ for $N = 3^p$ at the same fixed value $t_0 = 1.0668$. One obtains the values in Table A2.
Table A2. Deviations in the uniform N = 3^p approximations for increasing p. Note that values higher than p = 10 could not be checked with the PC at hand.

p                    | 6      | 7       | 8       | 9        | 10
|Δ_N(3^p, t_0)|      | 0.0001 | 0.00004 | 0.00001 | 0.000004 | 0.000001
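The uniform-boundary deviations can be recomputed directly from Equation (A1); the sketch below (our own, with `math.erf` as reference) reproduces the $1/N$ scaling for the binary case:

```python
import math

def P_exact(t):
    return math.erf(t / math.sqrt(2.0))

def delta(N, t):
    # Parameters fixed at the upper interval boundaries: k_n = 1/cos(pi*n/(4N))
    s = sum(math.exp(-t * t / (2.0 * math.cos(math.pi * n / (4.0 * N)) ** 2))
            for n in range(1, N + 1)) / N
    return math.sqrt(1.0 - s) - P_exact(t)

t0 = 1.0668
for p in range(1, 9):
    N = 2 ** p
    print(f"p = {p}, N = {N:4d}: |Delta| = {abs(delta(N, t0)):.6f}")
```

Doubling $N$ roughly halves the deviation, in agreement with the $0.09/N$ estimate derived below.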
The values in Table A1 and Table A2 can be described by the formula $|\Delta_N(N, t_0)| < 0.09/N$, i.e., the deviation is inversely proportional to $N$. This can be seen as follows. The worst error of the squared Gauss integral $P^2(t)$ is obtained by using the approximations in which the $k_n$ take their maximal or minimal values, respectively. The difference between the squares of these extremal values is given by
$P_M^2(t) - P_m^2(t) = \frac{1}{N} \sum_{n=1}^{N} \left( e^{-(k_n^{\min})^2 t^2/2} - e^{-(k_n^{\max})^2 t^2/2} \right) = \frac{H(t)}{N},$  (A3)
where $H(t) = e^{-t^2/2} - e^{-t^2}$ is obtained by using the property $k_n^{\min} = k_{n-1}^{\max}$ to cancel the intermediate consecutive terms. Using the third binomial formula to obtain $P_M^2(t) - P_m^2(t) \approx 2P(t)\,(P_M(t) - P_m(t))$, one has
$P_M(t) - P_m(t) \approx \frac{H(t)}{2P(t)\,N}.$  (A4)
Finally, one can use $P(t) > P_{\min}(t) = \sqrt{1 - e^{-t^2/2}}$ to obtain
$P_M(t) - P_m(t) < \frac{1}{2N}\, \frac{H(t)}{\sqrt{1 - e^{-t^2/2}}} < \frac{0.09}{N},$  (A5)
where the bound $H(t)/\sqrt{1 - e^{-t^2/2}} \le \sqrt{2^2 3^3/5^5}$ is used. The error for $P(t)$ itself is, at most, the difference between the two extremal values. Equation (A1) can be considered as a discretised form of the Gauss integral. Applying the continuum limit $\sum_i f(z_i)\, \Delta z_i \to \int f(z)\, dz$, one obtains
$P^2(t) = 1 - \frac{4}{\pi} \int_0^{\pi/4} \exp\left( -\frac{t^2}{2\cos^2\phi} \right) d\phi.$  (A6)
The exponential function can be expanded into a series truncated at finite degree $N$. Again, we obtain an approximation, as
$\left| P^2(t) - \frac{4}{\pi} \sum_{n=1}^{N} \frac{(-1)^{n-1}}{n!}\, \frac{t^{2n}}{2^n}\, c_n \right| < \frac{t^{2N}}{N!\, N}$  (A7)
with
$c_n = \int_0^{\pi/4} \frac{d\phi}{\cos^{2n}\phi} = \sum_{k=0}^{n-1} \frac{1}{2k+1} \binom{n-1}{k} = {}_2F_1(1/2,\, 1-n;\, 3/2;\, -1),$  (A8)
where ${}_2F_1(a, b; c; z)$ is the hypergeometric function.
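Both the integral representation (A6) and the series with the coefficients $c_n$ can be cross-checked numerically. The sketch below is ours (names are our own; the quadrature rule and truncation order are arbitrary choices), again using `math.erf` as reference:

```python
import math

def P_exact(t):
    return math.erf(t / math.sqrt(2.0))

def P2_integral(t, M=20000):
    # Midpoint rule for Equation (A6): 1 - (4/pi) * int_0^{pi/4} exp(-t^2/(2 cos^2 phi)) dphi
    h = (math.pi / 4.0) / M
    s = sum(math.exp(-t * t / (2.0 * math.cos((j + 0.5) * h) ** 2)) for j in range(M))
    return 1.0 - (4.0 / math.pi) * h * s

def c(n):
    # Equation (A8) as a finite binomial sum: sum_{k=0}^{n-1} C(n-1, k) / (2k + 1)
    return sum(math.comb(n - 1, k) / (2 * k + 1) for k in range(n))

def P2_series(t, N=25):
    # Truncated series of Equation (A7)
    return (4.0 / math.pi) * sum((-1) ** (n - 1) / math.factorial(n)
                                 * t ** (2 * n) / 2 ** n * c(n)
                                 for n in range(1, N + 1))

for t in (0.5, 1.0, 1.5):
    print(t, P_exact(t) ** 2, P2_integral(t), P2_series(t))
```

Since $\sec\phi \le \sqrt{2}$ on $[0, \pi/4]$, one has $c_n \le (\pi/4)\, 2^n$, so the series terms decay like $t^{2n}/n!$ and the truncation at $N = 25$ is far more than sufficient for moderate $t$.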

References

  1. Kendall, M.; Stuart, A. The Advanced Theory of Statistics; Charles Griffin and Co.: London, UK, 1965. [Google Scholar]
  2. Risch, R.H. The problem of integration in finite terms. Trans. Amer. Math. Soc. 1969, 139, 167–189. [Google Scholar] [CrossRef]
  3. Risch, R.H. The solution of the problem of integration in finite terms. Bull. Amer. Math. Soc. 1970, 76, 605–608. [Google Scholar] [CrossRef] [Green Version]
  4. Ordaz, M. A simple approximation to the Gaussian distribution. Struct. Saf. 1991, 9, 315–318. [Google Scholar] [CrossRef]
  5. Shenton, L.R. Inequalities for the Normal Integral Including a New Continued Fraction. Biometrika 1954, 41, 177–189. [Google Scholar] [CrossRef]
  6. Vazquez-Leal, H.; Castenada-Sheissa, R.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Sanchez Orea, J. High Accurate Simple Approximation of Normal Distribution Integral. Math. Probl. Eng. 2012, 2012, 124029. [Google Scholar] [CrossRef] [Green Version]
  7. Sandoval-Hernandez, M.A.; Vazquez-Leal, H.; Filobello-Nino, U.; Hernandez-Martinez, L. New handy and accurate approximation for the Gaussian integrals with applications to science and engineering. Open Math. 2019, 17, 1774–1793. [Google Scholar] [CrossRef]
  8. Zhang, R. On Uniform Approximations of Normal Distributions by Jacobi Theta Functions. arXiv 2018, arXiv:1810.08535. [Google Scholar]
  9. Chesneau, C.; Navarro, F. On some applicable approximations of Gaussian type integrals. J. Math. Model. 2019, 7, 221–229. [Google Scholar]
  10. Latala, R. On Some Inequalities for Gaussian Measures. In Proceedings of the ICM, Beijing, China, 20–29 August 2002; Volume 2, pp. 813–822. [Google Scholar]
Figure 1. (A) As the integrand is positive, the value of the integral over the interior of the square lies between the values of the integrals over the interiors of the circles of radii m and M. (B) The previous integration square is rotated by π/4 and overlaid with an exact copy of the square. The integral over the common inner area is denoted by ω.
Figure 2. The case of three “boxes” and, accordingly, three approximating exponentials. The picture becomes more and more rotationally symmetric as the number of boxes grows.

Martila, D.; Groote, S. Evaluation of the Gauss Integral. Stats 2022, 5, 538-545. https://doi.org/10.3390/stats5020032
