Next Article in Journal
Multi-Physics Inverse Homogenization for the Design of Innovative Cellular Materials: Application to Thermo-Elastic Problems
Next Article in Special Issue
Stochastic Neural Networks for Automatic Cell Tracking in Microscopy Image Sequences of Bacterial Colonies
Previous Article in Journal
Acknowledgment to Reviewers of MCA in 2021
Previous Article in Special Issue
Approximating the Steady-State Temperature of 3D Electronic Systems with Convolutional Neural Networks
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Arbitrarily Accurate Analytical Approximations for the Error Function

School of Electrical Engineering, Computing and Mathematical Sciences, Curtin University, GPO Box U1987, Perth 6845, Australia
Math. Comput. Appl. 2022, 27(1), 14; https://doi.org/10.3390/mca27010014
Submission received: 26 November 2021 / Revised: 20 January 2022 / Accepted: 27 January 2022 / Published: 9 February 2022
(This article belongs to the Collection Feature Papers in Mathematical and Computational Applications)

Abstract

:
A spline-based integral approximation is utilized to define a sequence of approximations to the error function that converge at a significantly faster manner than the default Taylor series. The real case is considered and the approximations can be improved by utilizing the approximation erf ( x ) 1 for | x | > x o and with x o optimally chosen. Two generalizations are possible; the first is based on demarcating the integration interval into m equally spaced subintervals. The second, is based on utilizing a larger fixed subinterval, with a known integral, and a smaller subinterval whose integral is to be approximated. Both generalizations lead to significantly improved accuracy. Furthermore, the initial approximations, and those arising from the first generalization, can be utilized as inputs to a custom dynamic system to establish approximations with better convergence properties. Indicative results include those of a fourth-order approximation, based on four subintervals, which leads to a relative error bound of 1.43 × 10−7 over the interval [ 0 ,   ] . The corresponding sixteenth-order approximation achieves a relative error bound of 2.01 × 10−19. Various approximations that achieve the set relative error bounds of 10−4, 10−6, 10−10, and 10−16, over [ 0 ,   ] , are specified. Applications include, first, the definition of functions that are upper and lower bounds, of arbitrary accuracy, for the error function. Second, new series for the error function. Third, new sequences of approximations for exp ( x 2 ) that have significantly higher convergence properties than a Taylor series approximation. Fourth, the definition of a complementary demarcation function e C ( x ) that satisfies the constraint e C 2 ( x ) + erf 2 ( x ) = 1 . Fifth, arbitrarily accurate approximations for the power and harmonic distortion for a sinusoidal signal subject to an error function nonlinearity. Sixth, approximate expressions for the linear filtering of a step signal that is modeled by the error function.
MSC (2020):
33B20; 41A10; 41A15; 41A58

1. Introduction

The error function arises in many areas of mathematics, science, and scientific applications including diffusion associated with Brownian motion (Fick’s second law), the heat kernel for the heat equation, e.g., [1], the modeling of magnetization, e.g., [2], the modelling of transitions between two levels, for example, with the modeling of smooth or soft limiters, e.g., [3] and the psychometric function, e.g., [4,5], the modeling of amplifier non-linearities, e.g., [6,7], and the modeling of rubber-like materials and soft tissue, e.g., [8,9]. It is widely used in the modeling of random phenomena as the error function defines the cumulative distribution of the Gaussian probability density function and examples include, the probability of error in signal detection, option pricing via the Black–Scholes formula, etc. Many other applications exist. In general, the error function is associated with a macro description of physical phenomena and the de Moivre–Laplace theorem is illustrative of the link between fundamental outcomes and a higher-level model.
The error function is defined on the complex plane according to
erf ( z ) = 2 π γ e λ 2 d λ ,   z C ,
where the path γ is between the points zero and z and is arbitrary. Associated functions are the complementary error function, the Faddeyeva function, the Voigt function, and Dawson’s integral, e.g., [10]. The Faddeyeva function and the Voigt function, for example, have applications in spectroscopy, e.g., [11]. The error function can also be defined in terms of the spherical Bessel functions, e.g., Equation (7.6.8) of [10], and the incomplete Gamma function, e.g., Equation (7.11.1) of [10]. Marsaglia [12] provides a brief insight into the history of the error function.
For the real case, which is the case considered in this paper, the error function is defined by the integral
erf ( x ) = 2 π 0 x e λ 2 d λ ,   x R .
The widely used, and associated, cumulative distribution function for the standard normal distribution is defined according to
Φ ( x ) = 1 2 π x exp [ λ 2 2 ] d λ = 0 . 5 + 0 . 5 erf [ x 2 ] .
Being defined by an integral, which does not have an explicit analytical form, there is interest in approximations of the error function and, in recent decades, many approximations have been developed. Table 1 details indicative approximations for the real case and their relative errors are shown in Figure 1. Most of the approximations detailed in this table are custom and have a limited relative error bound with bounds in the range of 3.05 × 10−5 [13] to 7.07 × 10−3 [14]. It is preferable to have an approximation form that can be generalized to create approximations that converge to the error function. Examples include the standard Taylor series, the Bürmann series, and the approximation by Abrarov, which are defined in Table 1.
Many of the approximations detailed in Table 1 can be improved upon by approximating the associated residual function, denoted g, via a Padé approximant or a basis set decomposition. Examples of possible approximation forms, and the resulting residual functions, are given in Table 2. One example is that of a 4/2 Padé approximant for the function g 3 , which leads to the approximation:
erf ( x ) 1 exp [ x 2 4 π [ 1 + n 1 x 1 + n 2 x 1 2 + n 3 x 1 3 + n 4 x 1 4 1 + d 1 x 1 + d 2 x 1 2 ] ] ,   x 1 = x x + 1 , x 0 ,
n 1 = 279 10,000,000 ,   n 2 = 303,923 10,000,000 ,   n 3 = 34,783 5,000,000 ,   n 4 = 40,793 10,000,000 d 1 = 21,941,279 10,000,000 ,   d 2 = 3,329,407 2,500,000 .
The relative error bound in this approximation is 4.02 × 10−7. Higher-order Padé approximants can be used to generate approximations with a lower relative error bound. Matic [15] provides a similar approximation with an absolute error of 5.79 × 10−6.
An approximation of the error function can also be obtained by combining separate approximations, which are accurate, respectively, for | x | 1 and | x | 1 , via a demarcation function d
erf ( x ) = 2 x π d ( x ) + [ 1 e x 2 π x ] [ 1 d ( x ) ] ,       x 0 ,
where
d ( x ) = π x erf ( x ) π x + e x 2 2 x 2 π x + e x 2 .
Naturally, an approximation of d requires an approximation of the error function. Unsurprisingly, the relative error in the approximation for the error function equals the relative error in the approximation utilized to approximate the error function in d.
Finally, efficient numerical implementation of the error function is of interest and Chevillard [16] and De Schrijver [17] provide results and an overview. Highly accurate piecewise approximations have long since been defined, e.g., [18].
The two-point spline-based approximations for functions and integrals, detailed in [19], have recently been applied to find arbitrarily accurate approximations for the hyperbolic tangent function [20]. In this paper, the general two-point spline approximation form is applied to define a sequence of convergent approximations for the error function. The basic form of the approximation of order n, denoted f n , is
erf ( x ) f n ( x ) = p n , 0 ( x ) + p n , 1 ( x ) e x 2 ,
where p n , 1 is a polynomial of order n and p n , 0 is a polynomial of order less than n. Convergence of the sequence of approximations f 1 , f 2 , to erf ( x ) is shown and the convergence is significantly better than the default Taylor series. For example, the second-order approximation
f 2 ( x ) = x π [ 1 x 2 30 ] + x π [ 1 + 11 x 2 30 + x 4 15 ] e x 2
yields a relative error bound of 0.056 over the interval [0, 2] which is better than a fifteenth-order Taylor series approximation. The approximations can be improved by utilizing the approximation erf ( x ) 1 for | x | 1 and thereby establishing approximations with a set relative error bound over the interval [ 0 , ) .
Two generalizations are detailed. The first is of the form
erf ( x ) p 0 ( x ) + p 1 ( x ) e k 1 x 2 + + p m ( x ) e k m x 2
and is based on utilizing approximations associated with m equally spaced subintervals of the interval [0, x]. The second is based on utilizing a fixed subinterval within [0, x] and then approximating the error function over the remainder of the interval. Both generalizations lead to significantly improved accuracy. For example, a fourth-order approximation based on four variable subintervals, when used with the approximation erf ( x ) 1 for x 1 , has a relative error bound of 1.43 × 10−7 over the interval [ 0 , ] . The corresponding sixteenth-order approximation has a relative error bound of 2.01 × 10−19. Finally, by utilizing the solutions of a custom dynamical system, approximations with better convergence properties can be established.
Applications of the proposed approximations for the error function include, first, the definition of functions that are upper and lower bounds, of arbitrary accuracy, for the error function. Second, new series for the error function. Third, new sequences of approximations for exp ( x 2 ) that have significantly higher convergence properties than a Taylor series approximation. Fourth, the definition of a complementary demarcation function e C ( x ) that satisfies the constraint e C 2 ( x ) + e r f 2 ( x ) = 1 . Fifth, arbitrarily accurate approximations for the power and harmonic distortion for a sinusoidal signal subject to an error function nonlinearity. Sixth, approximate expressions for the linear filtering of a step signal modeled by the error function.
Section 2 details the spline-based approximation for the error function and its convergence. Improved approximations, obtained by utilizing the nature of the error function for large arguments, are detailed in Section 3. Two generalizations, with the potential for lower relative error bounds, are detailed in Section 4 and Section 5. Section 6 details how the initial approximations, and the approximations arising from the first generalization, can be utilized as inputs to a custom dynamical system to establish approximations with better convergence properties. Applications are specified in Section 7. Conclusions are stated in Section 8.

1.1. Notes and Notation

As erf ( x ) = erf ( x ) , it is sufficient to consider approximations for the interval [ 0 ,   ) .
For a function f defined over the interval [ α ,   β ] , an approximating function f A has a relative error, at a point x 1 , defined according to re ( x 1 ) = 1 f A ( x 1 ) / f ( x 1 ) . The relative error bound for the approximating function over the interval [ α ,   β ] is defined according to
re B = max { | r e ( x 1 ) | : x 1 [ α , β ] } .
The notation f ( k ) ( x ) = d k d x k f ( x ) is used. The symbol u denotes the unit step function.
Mathematica has been used to facilitate the analysis and to obtain numerical results.

1.2. Background Results

The following result underpins the bounds proposed for the error function:
Lemma 1. 
Upper and Lower Functional Bounds. A positive approximation  f A  to a positive function  f  over the interval  [ α ,   β ] , with a relative error bound
ε B < 1 f A ( x ) f ( x ) < ε B ,     x [ α , β ] , ε B > 0 ,
leads to the following upper and lower bounded functions:
f A ( x ) 1 + ε B < f ( x ) < f A ( x ) 1 ε B ,   x [ α , β ] .
The relative error bounds, over the interval  [ α ,   β ] , for the upper and lower bounded functions, respectively, are:
2 ε B 1 + ε B ,   2 ε B 1 ε B .
Proof. 
The definition of the relative error bound, as specified by Equation (12), leads to
1 ε B < f A ( x ) f ( x ) < 1 + ε B ,
which implies
1 ε B 1 + ε B < f A ( x ) / ( 1 + ε B ) f ( x ) < 1 ,     1 < f A ( x ) / ( 1 ε B ) f ( x ) < 1 + ε B 1 ε B
and the relative error bounds:
0 < 1 f A ( x ) / ( 1 + ε B ) f ( x ) < 1 1 ε B 1 + ε B = 2 ε B 1 + ε B 1 1 + ε B 1 ε B = 2 ε B 1 ε B < 1 f A ( x ) / ( 1 ε B ) f ( x ) < 0

Convergent Integral Approximations

One application of the proposed approximations for the error function requires knowledge of when function convergence implies convergence of associated integrals.
Lemma 2. 
Convergent Integral Approximation. If a sequence of functions  f 1 , f 2 ,  converges, over the interval  [ 0 , x ] , to a bounded and integrable function  f , then point-wise convergence is sufficient for the associated integrals to be convergent, i.e., for
lim n 0 x f n ( λ ) d λ = 0 x f ( λ ) d λ .
Proof. 
The required result follows if it is possible to interchange the order of limit and integration, i.e.,
lim n 0 x f n ( λ ) d λ = 0 x lim n f n ( λ ) d λ = 0 x f ( λ ) d λ .
Standard conditions for when the interchange is valid are specified by the monotone and dominated convergence theorems, e.g., [29], (p. 26). Sufficient conditions for a valid interchange include point-wise function convergence, and for f to be integrable and bounded. □

2. Spline-Based Approximations for Error Function

2.1. Spline Approximation for Error Function

The following nth-order, two-point spline-based approximation for an integral has been detailed in Equation (48) of [19], for a function f that is at least (n + 1)th-order differentiable:
α x f ( λ ) d λ = k = 0 n c n , k ( x α ) k + 1 [ f ( k ) ( α ) + ( 1 ) k f ( k ) ( x ) ] + R n ( α , x ) n { 0 , 1 , 2 , }
where
c n , k = n ! ( n k ) ! ( k + 1 ) ! ( 2 n + 1 k ) ! 2 ( 2 n + 1 ) ! ,   k { 0 , 1 , , n } ,
R n ( 1 ) ( α , x ) = k = 0 n c n , k ( k + 1 ) ( x α ) k [ f ( k ) ( α ) + ( 1 ) k + 1 n + 1 n k + 1 f ( k ) ( x ) ] + c n , n ( 1 ) n + 1 ( x α ) n + 1 f ( n + 1 ) ( x ) .
Direct application of this result to the integral defining the error function leads to the results stated in Theorem 1:
Theorem 1. 
Spline-Based Integral Approximation for Error Function. The error function can be defined according to
erf ( x ) = f n ( x ) + ε n ( x ) ,
where  f n  is the nth-order spline-based integral approximation defined according to
f n ( x ) = 2 π k = 0 n c n , k x k + 1 [ p ( k , 0 ) + ( 1 ) k p ( k , x ) e x 2 ]
and  ε n ( x )  is the associated residual function whose derivative is
ε n ( 1 ) ( x ) = 2 e x 2 π 2 π k = 0 n c n , k ( k + 1 ) x k p ( k , 0 ) 2 e x 2 π k = 0 n c n , k x k ( 1 ) k [ ( k + 1 2 x 2 ) p ( k , x ) + x p ( 1 ) ( k , x ) ]
In these equations,
p ( k , x ) = p ( 1 ) ( k 1 , x ) 2 x p ( k 1 , x ) ,   p ( 0 , x ) = 1 .
A more general approximation is
erf ( x ) erf ( α ) = 2 π k = 0 n c n , k ( x α ) k + 1 [ p ( k , α ) e α 2 + ( 1 ) k p ( k , x ) e x 2 ] + ε n ( α , x ) = f n ( α , x ) + ε n ( α , x ) .
Proof. 
The proof is detailed in Appendix A. □

2.1.1. Note

The polynomial function p ( k , x ) is equivalently defined by the kth-order Hermite polynomial function ([21], p. 775, Equation (23.3.10)) and an explicit form is
p ( k , x ) = i = 0 k / 2 ( 1 ) i + k k ! i ! ( k 2 i ) ! 2 k 2 i x k 2 i .
A change of variable r = k 2 i , and noting that i { 0 , 1 , , k / 2 } implies r { k , k 2 , , k 2 k / 2 } , leads to the alternative form
p ( k , x ) = r = k 2 k / 2 k d k , r x r ,   d k , r = ( 1 ) ( 3 k r ) / 2 [ 1 + ( 1 ) r [ k 2 k / 2 ] 2 ] k ! 2 r [ ( k r ) / 2 ] ! r ! .
Substitution of this form into Equation (24) leads to the direct polynomial form for the nth-order approximation to the error function:
f n ( x ) = 2 π k = 0 n c n , k p ( k , 0 ) x k + 1 + 2 e x 2 π k = 0 n [ ( 1 ) k c n , k r = k 2 k 2 k d k , r x r + k + 1 ] .

2.1.2. Approximations

The polynomial functions p, as defined by Equations (26), (28), and (29), have the explicit forms:
p ( 0 , x ) = 1 ,           p ( 1 , x ) = 2 x ,           p ( 2 , x ) = 2 [ 1 2 x 2 ] , p ( 3 , x ) = 12 x [ 1 2 x 2 / 3 ] ,               p ( 4 , x ) = 12 [ 1 4 x 2 + 4 x 4 / 3 ] ,          
Approximations for the error function, as defined by Equation (24) and for orders zero to five, are:
f 0 ( x ) = x π + x π e x 2
f 1 ( x ) = x π + x π [ 1 + x 2 3 ] e x 2
f 2 ( x ) = x π [ 1 x 2 30 ] + x π [ 1 + 11 x 2 30 + x 4 15 ] e x 2
f 3 ( x ) = x π [ 1 x 2 21 ] + x π [ 1 + 8 x 2 21 + 17 x 4 210 + x 6 105 ] e x 2
f 4 ( x ) = x π [ 1 x 2 18 + x 4 1260 ] + x π [ 1 + 7 x 2 18 + 37 x 4 420 + 4 x 6 315 + x 8 945 ] e x 2
f 5 ( x ) = x π [ 1 2 x 2 33 + x 4 660 ] + x π [ 1 + 13 x 2 33 + 61 x 4 660 + 67 x 6 4620 + 16 x 8 10,395 + x 10 10,395 ] e x 2

2.2. Results

The relative error in the zero- to tenth-order spline-based series approximations, along with the relative error in Taylor series approximations of orders 1–15, are detailed in Figure 2. The clear superiority, in terms of convergence, of the spline-based series relative to the Taylor series is evident. The relative error in the spline approximations, of orders 16, 20, 24, 28, and 32, are shown in Figure 3.
The Mathematica code underpinning the results shown in Figure 2, is detailed in Supplementary Material. Such code is indicative of the code underpinning the results detailed in the paper.

2.3. Approximation for Large Arguments

Zero- and first-order approximations for the error function, and for the case of x 1 , are
erf ( x ) 1 ,         erf ( x ) 1 e x 2 π x .
The relative errors in such approximations, respectively, are
re ( x ) = 1 1 erf ( x ) ,         re ( x ) 1 1 e x 2 / π x erf ( x ) ,
and their graphs are shown in Figure 3.

2.4. Convergence

To prove the convergence of the sequence of functions f 0 , f 1 , f 2 , , defined by Theorem 1, to the error function, it is sufficient to prove that the corresponding sequence of residual functions ε 0 , ε 1 , ε 2 , converge to zero. This can be shown by considering the derivatives of the residual functions defined by Equation (25). The derivatives of the residual functions of orders zero, one, and two, respectively, are:
ε 0 ( 1 ) ( x ) = 1 π [ 1 + 2 x 2 ] e x 2 1 π
ε 1 ( 1 ) ( x ) = 1 π [ 1 + x 2 + 2 x 4 3 ] e x 2 1 π
ε 2 ( 1 ) ( x ) = 1 π [ 1 + 9 x 2 10 + 2 x 4 5 + 2 x 6 15 ] e x 2 1 π [ 1 x 2 10 ] .
Theorem 2. 
Convergence of Spline-Based Approximations. For all fixed values of x, the derivatives of the residual functions converge to zero as the order increases, i.e., for all fixed values of x it is the case that
lim n ε n ( 1 ) ( x ) = 0 ,   x > 0 .
This is sufficient for the convergence of the residual functions, i.e., lim n ε n ( x ) = 0 ,   x > 0 , and, hence, for x fixed:
lim n f n ( x ) = erf ( x ) ,   x > 0 .
The convergence is nonuniform.
Proof. 
The proof is detailed in Appendix B. □

2.5. Improved Approximation Based on Iteration

Consider the general result
0 x erf ( λ ) d λ = 1 π x [ 1 e x 2 ] + x erf ( x )
By using the approximations, f n , as defined in Theorem 1, in the integral, improved approximations for the error function can be defined.
Theorem 3. 
Improved Approximation via Iteration. An improved approximation, of order n, for the error function is
F n ( x ) = 1 π x [ 1 e x 2 ] + 2 π k = 0 n c n , k p ( k , 0 ) k + 2 x k + 1 + 2 x π k = 0 n [ ( 1 ) k c n , k r = k 2 k / 2 k d k , r 2 [ r + k 2 ] ! [ 1 e x 2 i = 0 r + k 2 x 2 i i ! ] ]
where c n , k and d k , r are defined, respectively, by Equations (21) and (29).
Proof. 
From Equation (45), it follows that
erf ( x ) 1 π x [ 1 e x 2 ] + 1 x 0 x f n ( λ ) d λ .
As
0 x x u e λ 2 d λ = 1 2 [ u 1 2 ] ! [ 1 e x 2 i = 0 ( u 1 ) / 2 x 2 i i ! ] ,           u { 1 , 3 , 5 , } ,
it follows, from the form for f n detailed in Equation (30), that
erf ( x ) 1 π x [ 1 e x 2 ] + 2 π k = 0 n c n , k p ( k , 0 ) k + 2 x k + 1 + 2 x π k = 0 n [ ( 1 ) k c n , k r = k 2 k 2 k d k , r 2 [ r + k 2 ] ! [ 1 e x 2 i = 0 r + k 2 x 2 i i ! ] ]
which is the required result. □

2.5.1. Explicit Approximations

Approximations to the error function, of orders zero to five, are:
f 0 ( x ) = 3 2 π x [ 1 + x 2 3 ] 3 e x 2 2 π x
f 1 ( x ) = 5 3 π x [ 1 + 3 x 2 10 ] 5 3 π x [ 1 + x 2 10 ] e x 2
f 2 ( x ) = 7 4 π x [ 1 + 2 x 2 7 x 4 210 ] 7 4 π x [ 1 + x 2 7 + 2 x 4 105 ] e x 2
f 3 ( x ) = 9 5 π x [ 1 + 5 x 2 18 5 x 4 756 ] 9 5 π x [ 1 + x 2 6 + 23 x 4 756 + x 6 378 ] e x 2
f 4 ( x ) = 11 6 π x [ 1 + 3 x 2 11 x 4 132 + x 6 13,860 ] 11 6 π x [ 1 + 2 x 2 11 + 5 x 4 132 + 16 x 6 3465 + x 8 3465 ] e x 2
f 5 ( x ) = 13 7 π x [ 1 + 7 x 2 26 7 x 4 858 + 7 x 6 51,480 ] 13 7 π x [ 1 + 5 x 2 26 + 37 x 4 858 + 313 x 6 51,480 + 7 x 8 12,850 + x 10 38,610 ] e x 2
Note that integration of these expressions leads to functions defined, in part, by the Gamma function that is an integral. This makes further iteration impractical.

2.5.2. Results

The relative errors in even order approximations, of orders 0–10, are shown in Figure 4. A comparison of the results detailed in Figure 2 and Figure 4 show the clear improvement in the approximations specified by Equation (46).

3. Improved Approximations

3.1. Improved Approximation for Error Function

An improved approximation for the error function can be achieved by noting, as illustrated in Figure 3, that the approximation erf ( x ) 1 is increasingly accurate for the case of x 1 and for x increasing. By switching at a suitable point x o , as illustrated in Figure 5, from a spline-based approximation to the approximation erf ( x ) 1 , an improved approximation is achieved. Naturally, it is possible to switch to the approximation erf ( x ) 1 e x 2 / π x , or higher-order approximations, in a similar manner.
Theorem 4. 
Improved Approximation for Error Function. Improved approximations for the error function, based on the nth-order approximations detailed in Theorems 1 and 3, and consistent with the illustration shown inFigure 5, are
erf ( x ) f n ( x ) u [ x o ( n ) x ] + [ 1 u [ x o ( n ) x ] ] erf ( x ) F n ( x ) u [ x o ( n ) x ] + [ 1 u [ x o ( n ) x ] ]
where the transition points, respectively, are defined according to
x o ( n ) = x : | 1 1 erf ( x ) | = | 1 f n ( x ) erf ( x ) | x o ( n ) = x : | 1 1 erf ( x ) | = | 1 F n ( x ) erf ( x ) |
Proof. 
The improved approximation results follow from optimally switching, as illustrated in Figure 5 and at the point specified by Equation (57), to the approximation erf ( x ) 1 , which has a lower relative error magnitude. □

Transition Points and Relative Error Bounds

The transition points, for various orders of spline approximation, are specified in Table 3. The relationship between the transition point and order is shown in Figure 6 for the case of the approximations f n . This relationship can be approximated, with a second-order polynomial, according to
x o ( n ) = 1.3607 + 0.20511 n 0.002932 n 2 ,         0 n 24 .
However, as small variations in x o ( n ) can lead to significant changes in the maximum relative error in the approximation for the error function, precise values for x o ( n ) are preferable.
The graphs of the relative errors in the approximations f n to erf(x), as specified by Equation (56), are shown in Figure 7 for orders 2, 4, 6, …, 20. The relative error bounds that can be achieved over the interval [ 0 , ] , using the optimally chosen transition points, are detailed in Table 3.

3.2. Improved Approximation for Taylor Series

The approximation of erf ( x ) 1 for x 1 can be utilized to improve the relative error bound for a Taylor series approximation according to
erf ( x ) T n ( x ) u [ x o ( n ) x ] + [ 1 u [ x o ( n ) x ] ] ,
where T n is a nth-order Taylor series, with n odd, as specified in Table 1. The optimum transition points and relative error bounds, for selected orders, are detailed in Table 4. The variation of the relative errors, with order, are shown in Figure 8. The change in the optimum transition point can be approximated according to
x o ( n ) 0.932 + 0.0560 n 0.0003503 n 2 ,   3 n 61 ,
but, again, as small variations in x o ( n ) can lead to significant changes in the maximum relative error in the approximation for the error function, precise values for x o ( n ) are preferable. The clear superiority in the convergence of the spline-based series is evident from a visual comparison of the relative errors shown in Figure 7 and Figure 8.

4. Variable Subinterval Approximations for Error Function

An improved analytic approximation for the error function can be achieved by demarcating the interval [ 0 ,   x ] into variable subintervals, e.g., the subintervals [ 0 ,   x / 4 ] , [ x / 4 ,   x / 2 ] , [ x / 2 ,   3 x / 4 ] , and [ 3 x / 4 ,   x ] for the four-subintervals case, and by utilizing spline-based integral approximations for each subinterval. Chiani [30] utilized subintervals to enhance approximations for the complementary error function.
Theorem 5. 
Variable Subinterval Approximations for Error Function. The nth-order spline-based approximation to the error function, based on m equal-width subintervals, is
f n , m ( x ) = 2 π i = 0 m 1 [ k = 0 n c n , k [ x m ] k + 1 [ p [ k , i x m ] exp [ i 2 x 2 m 2 ] + ( 1 ) k p [ k , ( i + 1 ) x m ] exp [ ( i + 1 ) 2 x 2 m 2 ] ] ]
where
p ( k , x ) = p ( 1 ) ( k 1 , x ) 2 x p ( k 1 , x ) ,   p ( 0 , x ) = 1 .
An alternative form is
f n , m ( x ) = 2 π [ p n , 0 ( x ) + i = 1 m 1 p n , i ( x ) exp [ i 2 x 2 m 2 ] + p n , m ( x ) exp ( x 2 ) ]
where
p n , 0 ( x ) = k = 0 n c n , k m k + 1 p ( k , 0 ) x k + 1 p n , i ( x ) = k = 0 n c n , k m k + 1 [ 1 + ( 1 ) k ] p ( k , i x m ) x k + 1 p n , m ( x ) = k = 0 n c n , k m k + 1 ( 1 ) k p ( k , x ) x k + 1 .
Proof. 
The first result follows by applying Equation (27) in Theorem 1 to the subintervals [ 0 , x / m ] , [ x / m , 2 x / m ] ,   , [ x x m , x ] . The alternative form arises by expanding the outer summation in Equation (61) and collecting terms of similar form. □

4.1. Explicit Expressions

A first-order approximation, based on m subintervals, is
f 1 , m ( x ) = x m π [ 1 + 2 i = 1 m 1 exp [ i 2 x 2 m 2 ] + exp ( x 2 ) ] + x 3 3 m 2 π exp ( x 2 ) .
For the four-subintervals case, explicit expressions are
f 0 , 4 ( x ) = x 4 π [ 1 + 2 exp [ x 2 16 ] + 2 exp [ x 2 4 ] + 2 exp [ 9 x 2 16 ] + exp ( x 2 ) ]
f 1 , 4 ( x ) = x 4 π [ 1 + 2 exp [ x 2 16 ] + 2 exp [ x 2 4 ] + 2 exp [ 9 x 2 16 ] + exp ( x 2 ) ] + x 3 48 π exp ( x 2 )
Using the alternative form, a fourth-order expression is
f 4 , 4 ( x ) = x 4 π [ 1 x 2 288 + x 4 322,560 ] + x 2 π exp [ x 2 16 ] [ 1 x 2 288 + 47 x 4 107,520 x 6 1,290,040 + x 8 61,931,520 ] + x 2 π exp [ x 2 4 ] [ 1 x 2 288 + 187 x 4 107,520 x 6 322,560 + x 8 3,870,720 ] + x 2 π exp [ 9 x 2 16 ] [ 1 x 2 288 + 1261 x 4 322,560 x 6 143,360 + 3 x 8 2,293,760 ] + x 4 π exp ( x 2 ) [ 1 + 31 x 2 288 + 101 x 4 15,360 19 x 6 80,640 + x 8 241,920 ] +
A fourth-order spline approximation, which utilizes 16 subintervals, is detailed in Appendix C. This expression, when utilized with the transition point x o = 7.1544 , yields an approximation with a relative error bound of 4.82 × 10−16.

Results

The relative errors in the spline approximations of orders one to six, and for the case of four equal subintervals [ 0 , x / 4 ] , [ x / 4 , x / 2 ] , [ x / 2 ,   3 x / 4 ] , and [ 3 x / 4 ,   x ] , are shown in Figure 9.

4.2. Improved Approximation

The spline approximations utilizing variable subintervals can be improved by using the transition to the approximation erf ( x ) 1 at a suitable point, as specified by Equation (56). The relative error in the spline approximations of orders one to seven, and for the case of four equal subintervals [ 0 , x / 4 ] , [ x / 4 , x / 2 ] , [ x / 2 ,   3 x / 4 ] , and [ 3 x / 4 ,   x ] , are updated in Figure 10 to show the improvement associated with utilizing the optimum transition point to the approximation erf ( x ) 1 . The relative error bounds, and transition points, are detailed in Table 5 and Table 6 for the cases of four and 16 subintervals.

Examples

A first-order approximation, based on m subintervals, as specified by Equation (65), yields the relative error bound of 7.21 × 10−5 for four subintervals and the transition point x o = 3.2928 ; 4.51 × 10−6 for eight subintervals with a transition point x o = 4.784 ; 2.82 × 10−7 for 16 subintervals and a transition point of x o = 6.88 ; and 1.10 × 10−9 for 64 subintervals and a transition point x o = 15.7888 .
The fourth-order approximation, based on four equal subintervals, as specified by Equation (68), leads to the relative error bound of 1.43 × 10−7 when used with the transition point x o = 3.7208 . A sixteenth-order approximation, based on four equal subintervals, leads to an error bound of 2.01 × 10−19 when used with the optimum transition point of x o = 6.3736 .

5. Dynamic Constant plus Spline Approximation

Consider the demarcation of the areas, as illustrated in Figure 11 and based on a resolution Δ , that define the error function. It follows that
erf ( x ) = k = 0 x / Δ c k + 2 π Δ x / Δ x e λ 2 d λ { c 0 = 0 c k = erf ( k Δ ) erf [ ( k 1 ) Δ ] ,   k 1 .
For the general case of nonuniformly spaced intervals, as defined by the set of monotonically increasing points { x 0 , x 1 , x 2 , , x m } , and where it is not necessarily the case that x > x m , the error function is defined according to
erf ( x ) = k = 1 m c k u ( x x k ) + 2 π x S x e λ 2 d λ ,
where c 0 = 0 , x 0 = 0 and
c k = erf ( x k ) erf [ x k 1 ] ,     x S = k = 1 m [ x k x k 1 ] u ( x x k ) .
A spline-based approximation, as defined by Equation (27), can be utilized for the unknown integrals in Equations (69) and (70). This leads to the results stated in Theorem 6:
Theorem 6. 
Error Function Approximation: Dynamic Constant Plus a Spline Approximation. The error function, as defined by Equations (69) and (70), can be approximated, respectively, by
f n , Δ ( x ) = k = 0 x / Δ c k + 2 π [ k = 0 n c n , k [ x Δ x Δ ] k + 1 [ p [ k , Δ x Δ ] exp [ Δ 2 x Δ 2 ] + ( 1 ) k p ( k , x ) e x p ( x 2 ) ] ]
erf ( x ) k = 1 m c k u ( x x k ) + 2 π [ k = 0 n c n , k ( x x S ) k + 1 [ p ( k , x S ) e x p ( x S 2 ) + ( 1 ) k p ( k , x ) e x p ( x 2 ) ] ]
Proof. 
These results arise from spline approximation of order n, as defined by Equation (27), for the integrals, respectively, over the intervals [ Δ x / Δ , x ] and [ x S , x ] . □

5.1. Approximations of Orders Zeros to Four

Approximations of orders zero to four arising from Theorem 6 are:
f 0 , Δ ( x ) = k = 0 x / Δ c k + x Δ x / Δ π [ e Δ 2 x / Δ 2 + e x 2 ]
f 1 , Δ ( x ) = k = 0 x / Δ c k + x Δ x / Δ π [ e Δ 2 x / Δ 2 + e x 2 ] ( x Δ x / Δ ) 2 3 π [ Δ x / Δ e Δ 2 x / Δ 2 x e x 2 ]
f 2 , Δ ( x ) = k = 0 x / Δ c k + x Δ x / Δ π [ e Δ 2 x / Δ 2 + e x 2 ] 2 ( x Δ x / Δ ) 2 5 π [ Δ x / Δ e Δ 2 x / Δ 2 x e x 2 ] ( x Δ x / Δ ) 3 30 π [ [ 1 2 Δ 2 x / Δ 2 ] e Δ 2 x / Δ 2 + [ 1 2 x 2 ] e x 2 ]
f 3 , Δ ( x ) = k = 0 x / Δ c k + x Δ x / Δ π [ e Δ 2 x / Δ 2 + e x 2 ] 3 ( x Δ x / Δ ) 2 7 π [ Δ x / Δ e Δ 2 x / Δ 2 x e x 2 ] ( x Δ x / Δ ) 3 21 π [ [ 1 2 Δ 2 x / Δ 2 ] e Δ 2 x / Δ 2 + [ 1 2 x 2 ] e x 2 ] + ( x Δ x / Δ ) 4 70 π [ Δ x / Δ [ 1 2 Δ 2 x / Δ 2 3 ] e Δ 2 x / Δ 2 x [ 1 2 x 2 3 ] e x 2 ] .
f 4 , Δ ( x ) = k = 0 x / Δ c k + x Δ x / Δ π [ e Δ 2 x / Δ 2 + e x 2 ] 4 ( x Δ x / Δ ) 2 9 π [ Δ x / Δ e Δ 2 x / Δ 2 x e x 2 ] ( x Δ x / Δ ) 3 18 π [ [ 1 2 Δ 2 x / Δ 2 ] e Δ 2 x / Δ 2 + [ 1 2 x 2 ] e x 2 ] + ( x Δ x / Δ ) 4 42 π [ Δ x / Δ [ 1 2 Δ 2 x / Δ 2 3 ] e Δ 2 x / Δ 2 x [ 1 2 x 2 3 ] e x 2 ] + ( x Δ x / Δ ) 5 1260 π [ [ 1 4 Δ 2 x / Δ 2 + 4 Δ 4 x / Δ 4 3 ] e Δ 2 x / Δ 2 + [ 1 4 x 2 + 4 x 4 3 ] e x 2 ]

5.2. Results

For a resolution of Δ = 1 / 2 , the coefficients are tabulated in Table 7.
A resolution of Δ = 1 / 2 yields a relative error bound of 1.16 × 10−5 for a second-order approximation, a relative error bound of 1.35 × 10−9 for a fourth-order approximation, a relative error bound of 7.15 × 10−14 for a sixth-order approximation and a relative error bound of 9.03 × 10−37 for a sixteenth-order approximation. These bounds are based on 10,000 equally spaced samples in the interval [0, 8].
The variation of the relative error bound with resolution, and order, is detailed in Figure 12. The nature of the variation of the relative error, for orders two, three, and four, is shown in Figure 13 for a resolution of 0.5. It is possible to obtain better results by using nonuniformly spaced intervals but the improvement, in general, does not warrant the increase in complexity.

6. A Dynamic System to Yield Improved Approximations

It is possible to utilize the approximations detailed in Theorems 1 and 5 as the basis for determining new approximations with a lower relative error. The approach is indirect and based on considering the feedback system illustrated in Figure 14, which has varying feedback. The differential equation characterizing the system is
y ( t ) + f M ( t ) y ( t ) = x ( t ) .
For specific input, x, and modulated feedback, f M , signals the output has a known form. For example, for the case of x ( t ) = f M ( t ) = erf ( t ) u ( t ) , the output signal, assuming zero initial conditions, is
y ( t ) = 1 e x p [ 1 π [ 1 e t 2 ] t erf ( t ) ] ,     t 0 .
For the case of
x ( t ) = f M ( t ) = 4 π e t 2 erf ( t ) u ( t )
the output signal, assuming zero initial conditions, is
y ( t ) = 1 e x p [ erf 2 ( t ) ] ,     t 0 .
This case facilitates approximations for the error function, which can be made arbitrarily accurate and which are valid for the positive real line.
Theorem 7. 
Dynamical System Approximations for Error Function. Based on the differential equation specified by Equation (79), a nth-order approximation to erf ( x ) , for the case of x 0 , can be defined according to
f n ( x ) = p n , 0 + p n , 1 ( x ) e x 2 + p n , 2 ( x ) e 2 x 2 ,
where, for the case of n 2 :
p n , 1 ( x ) = α 0 + α 2 x 2 + + α m x m ,     m = { n 1     m   odd n m   even p n , 2 ( x ) = β 0 + β 2 x 2 + + β 2 n x 2 n p n , 0 ( x ) = ( α 0 + β 0 )
and with
α m = 4 π c n , m a m , 0 α m 2 i = ( m 2 i + 2 ) α m 2 i + 2 2 4 π c n , m 2 i a m 2 i , 0 ,   i { 1 , , m 2 1 } α 0 = α 2 4 π c n , 0 a 0 , 0
β 2 n = 2 π ( 1 ) n c n , n a n , n β 2 n 2 i = ( n i + 1 ) β 2 n 2 i + 2 2 2 π k = n i min { 2 n 2 i , n } ( 1 ) k c n , k a k , 2 ( n i ) k ,   i { 1 , , n 1 } β 0 = β 2 2 2 π c n , 0 a 0 , 0
Here, the coefficients a i , j , i , j { 0 , 1 , , n } are defined by the expansion
p ( k , x ) = a k , 0 + a k , 1 x + a k , 2 x 2 + + a k , k x k ,     k { 0 , 1 , , n } ,
arising from the polynomials (Equation (26))
p ( k , x ) = p ( 1 ) ( k 1 , x ) 2 x p [ k 1 , x ] ,     p ( 0 , x ) = 1
Finally, it is the case that
lim n f n ( x ) = erf ( x ) ,     x 0 ,
with the convergence being uniform.
Proof. 
The proof is detailed in Appendix D. □

6.1. Explicit Approximations

Explicit approximations for orders zero to four for erf ( x ) , x 0 are:
f 0 ( x ) = 1 π 3 2 e x 2 e 2 x 2
f 1 ( x ) = 1 π 19 6 2 e x 2 7 e 2 x 2 6 [ 1 + 2 x 2 7 ]
f 2 ( x ) = 1 π 63 20 29 e x 2 15 [ 1 x 2 29 ] 73 e 2 x 2 60 [ 1 + 26 x 2 73 + 4 x 4 73 ]
f 3 ( x ) = 1 π 22 7 40 e x 2 21 [ 1 x 2 20 ] 26 e 2 x 2 21 [ 1 + 10 x 2 26 + x 4 13 + x 6 130 ]
f 4 ( x ) = 1 π 377 120 596 e x 2 315 [ 1 17 x 2 298 + x 4 1192 ] 3149 e 2 x 2 2520 [ 1 + 1258 x 2 3149 + 278 x 4 3149 + 112 x 6 9447 + 8 x 8 9447 ]

6.2. Results

The relative error bounds associated with the approximations to erf ( x ) are detailed in Table 8. The graphs of the relative errors in the approximations are shown in Figure 15. The clear advantage of the proposed approximations is evident, with the improvement increasing with the order of the initial approximation (i.e., a function with an initial lower relative error bound leads to an increasingly lower relative error bound). The other clear advantage of the approximations, as is evident in Figure 15, is that the relative error is bounded as x .

6.3. Extension

By utilizing the approximations detailed in Theorem 5, similar approximations can be detailed, with lower relative error. For example, the first-order approximation, f 1 , 4 , which is based on four equal subintervals and is defined by Equation (67), yields the approximation
f 1 , 4 ( x ) = 1 π 128,177 40,800 e x 2 2 16 e 17 x 2 / 16 17 4 e 5 x 2 / 4 5 16 e 25 x 2 / 16 25 25 e 2 x 2 96 [ 1 + 2 x 2 25 ]
which has a relative error bound of 2.83 × 10−6. With an optimum transition point of 3.292, the original approximation has a relative error bound of 7.21 × 10−5 (see Table 5).

6.4. Notes

First, the constants p n , 0 , n { 0 , 1 , } , as defined in Equation (84), form a series that in the limit converges to 1. It then follows that the corresponding series converges to π:
3 , 19 6 , 63 20 , 22 7 , 377 120 , 174,169 55,440 , 4,528,409 1,441,440 , .
Second, the square root functional structure has been utilized for approximations to the error function, as is evident from the approximations detailed in Table 1. It is easy to conclude that the form
f n ( x ) = p n , 0 p n , 1 ( x ) e k 1 x 2 p n , 2 ( x ) e k 2 x 2
is well suited for approximating the error function.

7. Applications

This section details indicative applications for the approximations to the error function that have been detailed above.
The distinct analytical forms specified in Theorems 1, 3, 5 and 7, for approximations to the error function, in general, facilitate analysis for different applications. For example, the form detailed in Theorem 1 underpins approximations to exp(−x2), as detailed in Section 7.2. The form detailed in Theorem 7 underpins analytical approximations for the power associated with the output of a nonlinearity modeled by the error function when subject to a sinusoidal signal. The approximations are detailed in Section 7.6.
For applications, where a set relative error bound over a set interval is required, the suitable approximation will depend, in part, on the domain over which an approximation is required as well as the level of the relative error bound that is acceptable. For example, the approximations detailed in Theorems 1 and 3 lead to simple analytical forms and modest relative error bounds over [ 0 ,   ) when used with an appropriate transition point for the approximation of erf ( x ) 1 . Without the use of a transition point, such approximations are likely to be best suited for a restricted domain—for example, the domain [ 0 ,     3 2 ] , which is consistent with the three sigma case arising from a Gaussian distribution. The fourth-order approximations, as specified by Equations (36) and (54), have relative error bounds, respectively, of 1.02 × 10−3 and 1.90 × 10−4 over [ 0 ,     3 2 ] . Section 7.1 provides examples of approximations that are consistent with set relative error bounds, over the interval [ 0 ,     ) , of 10−4, 10−6, 10−10, and 10−16.

7.1. Error Function Approximations: Set Relative Error Bounds

Consider the case where an approximation for the error function, with a relative error bound over the positive real line of 10−4, is required. A 47th-order Taylor series, with a transition point of x o = 2.752 , yields a relative bound of 1.00 × 104.
An eighth-order spline approximation, with a transition point of x o = 2.963 , yields a relative error bound of 2.79 × 10−5. The approximation, according to Equation (56), is
f 8 ( x ) = x π u [ x o x ] [ 1 7 x 2 102 + x 4 340 x 6 18,564 + x 8 5,250,960 + [ 1 + 41 x 2 102 + 101 x 4 1020 + 1591 x 6 92,820 + 4793 x 8 2,162,160 + 2017 x 10 9,189,180 + 38 x 12 2,297,295 + 31 x 14 34,459,425 + x 16 34,459,425 ] e x 2 ] + [ 1 u [ x o u ] ]
A seventh-order approximation, with a transition point of x o = 2.65 , yields a relative error bound of 1.79 × 10−4.
A first-order spline approximation, based on four equal subintervals [ 0 , x / 4 ] , [ x / 4 , x / 2 ] , [ x / 2 ,   3 x / 4 ] and [ 3 x / 4 , x ] , is defined according to
f 1 , 4 ( x ) = [ x 4 π [ 1 + 2 exp [ x 2 16 ] + 2 exp [ x 2 4 ] + 2 exp [ 9 x 2 16 ] + exp [ x 2 ] ] + x 3 48 π exp [ x 2 ] ] u [ x o x ] + [ 1 u [ x o u ] ]
and yields a relative error bound of 7.21 × 10−5 with the transition point x o = 3.292 .
A dynamic constant plus a spline approximation of order 2, and based on a resolution of Δ = 19 / 20 , achieves a relative error bound of 8.33 × 10−5 (10,000 points in the interval [0, 5]). The approximation is
f 2 , Δ ( x ) = k = 0 20 x / 19 c k + 1 π [ x 19 20 20 x 19 ] [ exp [ 361 400 20 x 19 2 ] + e x 2 ] 2 5 π [ x 19 20 20 x 19 ] 2 [ 19 20 20 x 19 exp [ 361 400 20 x 19 2 ] x e x 2 ] 1 30 π [ x 19 20 20 x 19 ] 3 [ [ 1 361 200 20 x 19 2 ] exp [ 361 400 20 x 19 2 ] + ( 1 2 x 2 ) e x 2 ]
where
c 0 = 0 ,     c k = erf [ 19 k 20 ] erf [ 19 ( k 1 ) 20 ] ,     k { 1 , 2 , } , c 1 = 0.82089081 ,     c 2 = 0.17189962 , c 3 = 7.1539145 × 10 3 ,     c 4 = 5.5579 × 10 5 ,     c 5 = c 6 = = 0 .
Here, the approximation of erf ( x ) 1 , for x 57 / 20 (after three intervals) can be utilized without impacting the relative error bound.
Utilizing a fourth-order spline approximation and iteration consistent with Theorem 7, the approximation
f 4 ( x ) = 1 π 377 120 596 e x 2 315 [ 1 17 x 2 298 + x 4 1192 ] 3149 e 2 x 2 2520 [ 1 + 1258 x 2 3149 + 278 x 4 3149 + 112 x 6 9447 + 8 x 8 9447 ]
yields a relative error bound of 1.82 × 10−5.
Details of approximations that are consistent with higher-order relative error bounds are detailed in Table 9.

7.2. Approximation for Exp(−x2)

A nth-order approximation to the Gaussian function exp ( x 2 ) is detailed in the following theorem:
Theorem 8. 
Approximation for Gaussian Function. A nth-order approximation, g n , to the Gaussian function exp ( x 2 ) is
g n ( x ) = k = 0 n c n , k ( k + 1 ) x k p ( k , 0 ) 1 + k = 0 n c n , k ( 1 ) k + 1 x k [ p ( k , x ) [ k + 1 2 x 2 ] + x p ( 1 ) ( k , x ) ]
where c n , k is defined by Equation (21) and p ( k , x ) is defined by Equation (26).
Proof. 
The proof is detailed in Appendix E. □

7.2.1. Approximations

Approximations to exp ( x 2 ) , of orders zero to five, are:
g 0 ( x ) = 1 1 + 2 x 2     g 1 ( x ) = 1 1 + x 2 + 2 x 4 3     g 2 ( x ) = 1 x 2 / 10 1 + 9 x 2 10 + 2 x 4 5 + 2 x 6 15
g 3 ( x ) = 1 x 2 / 7 1 + 6 x 2 7 + 5 x 4 14 + 2 x 6 21 + 2 x 8 105 g 4 ( x ) = 1 x 2 / 6 + x 4 / 252 1 + 5 x 2 6 + 85 x 4 252 + 11 x 6 126 + x 8 63 + 2 x 10 945
g 5 ( x ) = 1 2 x 2 / 11 + x 4 / 132 1 + 9 x 2 11 + 43 x 4 132 + x 6 12 + x 8 66 + x 10 495 + 2 x 12 10,395

7.2.2. Results

The relative errors in the above defined approximations to exp ( x 2 ) are detailed in Figure 16 for approximations of order 0, 2, 4, 6, 8, 10, and 12, along with the relative error in Taylor series for orders 1, 3, 5, …, 15. The clear superiority of the defined approximations is evident.

7.2.3. Comparison

The following nth-order approximation for exp(−x2) has been proposed in Equation (77) of [19]:
h n ( x ) = 1 c n , 0 x 2 + c n , 1 x 4 c n , 2 x 6 + + c n , n ( 1 ) n + 1 x 2 n + 2 1 + c n , 0 x 2 + c n , 1 x 4 + c n , 2 x 6 + + c n , n x 2 n + 2
where c n , k is defined by Equation (21). The relative error bounds over the interval [ 0 ,   3 / 2 ] (the three sigma bound case for Gaussian probability distributions) for this approximation, and the approximation defined by Equation (103), are detailed in Table 10. The tabulated results clearly show that this approximation is more accurate than the approximation detailed in Equation (103). The improvement is consistent with the higher-order Padé approximant being used.
The following approximations (seventh- and fifth-order) yield relative error bounds of less than 0.001 over the interval [ 0 ,   3 / 2 ] :
g 7 ( x ) = 1 x 2 5 + x 4 78 x 6 4290 1 + 4 x 2 5 + 61 x 4 195 + 34 x 6 429 + 83 x 8 5270 + x 10 495 + 7 x 12 32,175 + 4 x 14 225,225 + 2 x 16 2,027,025
h 5 ( x ) = 1 x 2 2 + 5 x 4 44 x 6 66 + x 8 792 x 10 15,840 + x 12 665,280 1 + x 2 2 + 5 x 4 44 + x 6 66 + x 8 792 + x 10 15,840 + x 12 665,280
A twenty-seventh-order Taylor series approximation yields a relative bound of 1.03 × 10−3.

7.3. Upper and Lower Bounded Approximations to Error Function

Establishing bounds for erf(x) has received modest research interest, e.g., [31], and published bounds for erf(x) for the case of x > 0 include Chu [32]:
1 exp [ p x 2 ] erf ( x ) 1 exp [ q x 2 ] ,     p ( 0 , 1 ] , q [ 4 / π , ] .
Corollary 4.2 of Neuman [33]:
2 x π exp [ x 2 3 ] erf ( x ) 4 x 3 π [ 1 + exp ( x 2 ) 2 ]
and refinements to the form proposed by Chu [32], e.g., Yang [34,35], Corollary 3.4 of [35]:
1 20 3 π [ 1 π 4 ] exp [ 8 x 2 5 ] 8 3 [ 1 5 2 π ] exp ( x 2 ) erf ( x ) 1 λ ( p 0 ) exp ( p 0 x 2 ) [ 1 λ ( p 0 ) ] exp [ μ ( p 0 ) x 2 ]
where
p 0 = 21 π 60 + 3 ( 147 π 2 920 π + 1440 ) 30 ( π 3 ) , λ ( p ) = 4 [ 7 π 20 5 ( π 3 ) p ] π ( 15 p 2 40 p + 28 ) ,   μ ( p ) = 4 ( 5 p 7 ) 5 ( 3 p 4 ) .
The relative error in these bounds is detailed in Figure 17.
Utilizing the results of Lemma 1, it follows that any of the approximations detailed in Theorems 4, 5, 6, or 7 can be utilized to create upper and lower bounded functions for erf(x), x > 0, of arbitrary accuracy and with an arbitrary relative error bound. For example, the approximation f 1 , 4 specified by Equation (99) yields the functional bounds:
f 1 , 4 ( x ) 1 + ε B erf ( x ) f 1 , 4 ( x ) 1 ε B ,     ε B = 7.21 × 10 5 ,     x > 0 ,
with a relative error bound of 8.33 × 10−5 for the lower bounded function and 1.44 × 10−4 for the upper bounded function. Such accuracy is better than the bounds underpinning the results shown in Figure 17. The sixteenth-order approximation, f 4 , 16 , based on four equal subintervals, specified in Appendix C and when used with a transition point x o = 7.1544 , leads to the functional bounds
f 4 , 16 ( x ) 1 + ε B erf ( x ) f 4 , 16 ( x ) 1 ε B ,   ε B = 4.82 × 10 16 ,   x > 0 ,
with a relative error bound of less than 9.64 × 10−16 for the lower bounded function and 9.32 × 10−16 for the upper bounded function.

7.4. New Series for Error Function

Consider the exact results
erf ( x ) = f n ( x ) + ε n ( x ) ,   n { 0 , 1 , 2 , } ,
where f n is specified by Equation (24) and ε n ( 1 ) ( x ) is specified by Equation (25). By utilizing a Taylor series approximation for exp ( x 2 ) in ε n ( 1 ) ( x ) and then integrating, an approximation for ε n ( x ) can be established. This leads to new series for the error function.
Theorem 9. 
New Series for Error Function. Based on zero-, first-, and second-order approximations, the following series for the error function are valid:
erf ( x ) = x π + x π e x 2 + x 3 π [ 1 3 3 x 2 2 5 + 5 x 4 6 7 7 x 6 24 9 + 9 x 8 120 11 + ( 1 ) k ( 2 k + 1 ) x 2 k ( 2 k + 3 ) ( k + 1 ) ! + ]
erf ( x ) = x π + x π [ 1 + x 2 3 ] e x 2 + x 5 6 π [ 1 5 3 x 2 ( 3 / 2 ) 7 + 5 x 4 4 9 7 x 6 15 11 + 9 x 8 72 13 + ( 1 ) k ( 2 k + 1 ) ( k + 1 ) ! x 2 k ( 2 k + 5 ) r = 1 k [ i = 1 r 2 i + 1 ] + ]
erf ( x ) = x π [ 1 x 2 30 ] + x π [ 1 + 11 x 2 30 + x 4 15 ] e x 2 + 1 π x 7 60 [ 1 3 1 3 7 3 5 x 2 2 3 9 + 5 7 x 4 4 5 11 7 9 x 6 9 10 13 + + ( 1 ) k ( 2 k + 1 ) ( 2 k + 3 ) ( k + 1 ) ! x 2 k 3 ( 2 k + 7 ) r = 2 k + 1 [ 2 i = 2 r i ] + ]
Further series, based on higher-order approximations, can also be established.
Proof. 
The proof is detailed in Appendix F. □

Results

The relative errors associated with the zero- and second-order series are shown in Figure 18 and Figure 19. Clearly, the relative error improves as the number of terms used in the series expansion increases. The significant improvement in the relative error, for small values of x, is evident. A comparison with the relative errors associated with Taylor series approximations, as shown in Figure 2, shows the improved performance.
The second-order approximation arising from Equation (118), i.e.,
erf ( x ) = x π + x π [ 1 + x 2 3 ] e x 2 + x 5 6 π [ 1 5 2 x 2 7 ]  
yields a relative error bound of less than 0.001 for the interval [0, 0.87] and less than 0.01 for the interval [0, 1.1].

7.5. Complementary Demarcation Functions

Consider a complementary function e C which is such that
e C 2 ( x ) + erf 2 ( x ) = 1 , x 0 .
With the approximation detailed in Theorem 7 (and by noting that lim n p n , 0 = 1 —see Equation (96)), it is the case that
e C 2 ( x ) = lim n [ p n , 1 ( x ) e x 2 + p n , 2 ( x ) e 2 x 2 ]
and, thus, e C ( x ) can be defined independently of the error function. This function is shown in Figure 20 along with erf ( x ) 2 . These two functions act as complementary demarcation functions for the interval [ 0 , ) . The transition point is x o = 0.74373198514677 as
erf ( x ) | x = 0.74373198514677   = 1 2 .

7.6. Power and Harmonic Distortion: Erf Modeled Nonlinearity

The error function is often used to model nonlinearities and the harmonic distortion created by such a nonlinearity is of interest. Examples include the harmonic distortion in magnetic recording, e.g., [2,36], and the harmonic distortion arising, in a communication context, by a power amplifier, e.g., [7]. For these cases, the interest was in obtaining, with a sinusoidal input signal defined by a sin ( 2 π f o t ) , the harmonic distortion created by an error function nonlinearity over the input amplitude range of [−2, 2].
Consider the output signal of a nonlinearity modelled by the error function:
y ( t ) = erf [ a sin ( 2 π f o t ) ] .
For such a case, the output power is defined according to
P = 1 T 0 T y 2 ( t ) d t = 1 T 0 T erf 2 [ a sin ( 2 π f o t ) ] d t , T = 1 / f o ,
and the output amplitude associated with the kth harmonic is
2 T 0 T erf [ a sin ( 2 π f o t ) ] sin ( 2 π k f o t ) d t .
To determine an analytical approximation to the output power, the approximations stated in Theorem 7 lead to relatively simple expressions. Consider the third-order approximation, as specified by Equation (93), which has a relative error bound of 2.03 × 10−4 for the positive real line. For such a case, the output signal is approximated according to
y 3 ( t ) = 1 π [ 22 7 40 e [ a sin ( 2 π f o t ) ] 2 21 [ 1 [ a sin ( 2 π f o t ) ] 2 20 ] 26 e 2 [ a sin ( 2 π f o t ) ] 2 21 [ 1 + 10 [ a sin ( 2 π f o t ) ] 2 26 + [ a sin ( 2 π f o t ) ] 4 13 + [ a sin ( 2 π f o t ) ] 6 130 ] ] 1 / 2
and is shown in Figure 21.
The power in y 3 can be readily determined (e.g., via Mathematica) and it then follows that an approximation to the true power is
P ( a ) 22 7 π 40 21 π I 0 [ a 2 2 ] [ 1 a 2 40 ] e a 2 / 2 26 21 π I 0 [ a 2 ] [ 1 + 5 a 2 26 + 41 a 4 1040 + a 6 260 ] e a 2 a 2 21 π I 1 [ a 2 2 ] e a 2 / 2 + 37 a 2 140 π I 1 [ a 2 ] [ 1 + 43 a 2 222 + 2 a 4 111 ] e a 2
where I 0 and I 1 , respectively, are the zero- and first-order Bessel functions of the first kind. The variation in output power is shown in Figure 22.

Harmonic Distortion

To establish analytical approximations for the harmonic distortion, the functional forms detailed in Theorem 7 are not suitable. However, the functional forms detailed in Theorem 1 do lead to analytical approximations that are valid over a restricted domain. Consider a fourth-order spline approximation, as specified by Equation (36), which approximates the error function over the range [−2, 2] with a relative error bound that is better than 0.001 and leads to the approximation
y 4 ( t ) = a sin ( 2 π f o t ) π [ 1 a 2 sin ( 2 π f o t ) 2 18 + a 4 sin ( 2 π f o t ) 4 1260 ] + a sin ( 2 π f o t ) π [ 1 + 7 a 2 sin ( 2 π f o t ) 2 18 + 37 a 4 sin ( 2 π f o t ) 4 420 + 4 a 6 sin ( 2 π f o t ) 6 315 + a 8 sin ( 2 π f o t ) 8 945 ] e a 2 sin ( 2 π f o t ) 2
The amplitude of the kth harmonic in such a signal is given by
c 4 , k = 2 T 0 T y 4 [ a sin ( 2 π f o t ) ] sin ( 2 π k f o t ) d t = 2 T 0 1 y 4 [ a sin ( 2 π λ ) ] sin ( 2 π k λ ) d λ
where the change of variable λ = f o t has been used. The first, third, fifth, and seventh harmonic levels are:
c 4 , 1 T = 2 a 2 π [ 1 a 2 24 + a 4 2016 ] + 2 a 2 π I 0 [ a 2 2 ] [ 1 + 11 a 2 24 + 11 a 4 105 + a 6 70 + a 8 945 ] e a 2 / 2 5 2 a 6 π I 1 [ a 2 2 ] [ 1 + 1481 a 2 4200 + 38 a 4 525 + 29 a 6 3150 + a 8 1575 ] e a 2 / 2
c 4 , 3 T = 2 a 3 144 π [ 1 a 2 56 ] 115 2 a 84 π I 0 [ a 2 2 ] [ 1 + 403 a 2 1380 + 6 a 4 115 + 31 a 6 5175 + 2 a 8 5175 ] e a 2 / 2 + 115 2 21 a π I 1 [ a 2 2 ] [ 1 + 8 a 2 23 + 163 a 4 1840 + 76 a 6 5175 + 11 a 8 6900 + a 10 10,350 ] e a 2 / 2
c 4 , 5 T = 2 a 5 40,320 π + 262 2 15 a π I 0 [ a 2 2 ] [ 1 + 1943 a 2 7336 + 1485 a 4 29,344 + 73 a 6 11,004 + 13 a 8 22,008 + a 10 33,012 ] e a 2 / 2 1048 2 15 a 3 π I 1 [ a 2 2 ] [ 1 + 1943 a 2 7336 + 1201 a 4 14,672 + 5125 a 6 352,128 + 5 a 8 2751 + 41 a 10 264,096 + a 12 132,048 ] e a 2 / 2
c 4 , 7 T = 6784 2 21 a 3 π I 0 [ a 2 2 ] [ 1 + 779 a 2 3392 + 631 a 4 13,568 + 2047 a 6 325,632 + 25 a 8 40,704 + 17 a 10 407,040 + a 12 610,560 ] e a 2 / 2 27,136 2 21 a 5 π I 1 [ a 2 2 ] [ 1 + 779 a 2 3392 + 1055 a 4 13,568 + 137 a 6 10,176 + 2269 a 8 1,302,528 + 67 a 10 407,040 + a 12 92,160 + a 14 2,442,240 ] e a 2 / 2
The variation, with the input signal amplitude, of the harmonic distortion, as defined by c 4 , k 2 / c 4 , 1 2 , is shown in Figure 23.

7.7. Linear Filtering of an Error Function Step Signal

Consider the case of a practical step input signal that is modeled by the error function, erf ( t / γ ) and the case where such a signal is input to a 2nd-order linear filter with a transfer function defined by
H ( s ) = 1 [ 1 + s 2 π f p ] 2     h ( t ) = t e t / τ τ 2 u ( t ) ,   τ = 1 2 π f p .
Theorem 10. 
Linear Filtering of an Error Function Signal. The output of a second-order linear filter, defined by Equation (135), to an error function input signal, defined by erf ( t / γ ) , is
y ( t ) = erf [ t γ ] u ( t ) + π θ e t / τ τ [ [ γ 2 2 τ ( t + τ ) ] exp [ γ 2 4 τ 2 ] [ erf [ γ 2 τ ] erf [ γ 2 τ t γ ] ] γ π exp [ γ 2 4 τ 2 ] exp [ [ t γ γ 2 τ ] 2 ] + γ π ] u ( t )
and can be approximated by the nth-order signal
y n ( t ) = f n [ t γ ] u ( t ) + e t / τ τ [ [ γ 2 2 τ ( t + τ ) ] exp [ γ 2 4 τ 2 ] [ f n [ γ 2 τ ] f n [ γ 2 τ t γ ] ] γ π exp [ γ 2 4 τ 2 ] exp [ [ t γ γ 2 τ ] 2 ] + γ π ] u ( t )
where f n is defined by one of the approximations detailed in Theorems 4, 5, 6, or 7. It is the case that
lim n y n ( t ) = y ( t ) ,   t ( 0 , ) .
Proof. 
The proof is detailed in Appendix G. □

Results

For an input signal erf ( t / γ ) u ( t ) , γ = 1 / 2 , input into a second-order linear filter with f p = 1 , the output signal is shown in Figure 24. The relative errors in the approximations to the output signal are shown in Figure 25 for the case of approximations as specified by Equation (56) and with the use of optimum transition points.

7.8. Extension to Complex Case

By definition, the error function, for the general complex case, is defined according to
erf ( z ) = 2 π γ e λ 2 d λ ,   z C ,
where the path γ is between the points zero and z and is arbitrary. For the case of z = x + j y , and a path along the x axis to the point (x, 0) and then to the point z , the error function is defined according to Equation (5) of [37]:
erf ( x + j y ) = 2 π 0 x e λ 2 d λ + 2 j π 0 y e ( x + j λ ) 2 d λ = erf ( x ) + 2 e x 2 π 0 y e λ 2 sin ( 2 x λ ) d λ + 2 j e x 2 π 0 y e λ 2 cos ( 2 x λ ) d λ
Explicit approximations for erf ( x + j y ) then arise when integrable approximations for the two-dimensional surfaces exp ( y 2 ) sin ( 2 x y ) and exp ( y 2 ) cos ( 2 x y ) over [ 0 , x ] × [ 0 , y ] are available. Naturally, significant existing research exists, e.g., [28,37].

7.9. Approximation for the Inverse Error Function

There are many applications where the inverse error function is required and accurate approximations for this function are of interest. From the research underpinning this paper, the author’s view is that finding approximations to the inverse error function is best treated directly and as a separate problem, rather than approaching it via finding the inverse of an approximation to the error function.

8. Conclusions

This paper has detailed analytical approximations for the real case of the error function, underpinned by a spline-based integral approximation, which have significantly better convergence than the default Taylor series. The original approximations can be improved by utilizing the approximation $\mathrm{erf}(x) \approx 1$ for $x > x_o$, with $x_o$ being dependent on the order of approximation. The fourth-order approximations arising from Theorems 1 and 3, with respective transition points of $x_o = 2.3715$ and $x_o = 2.6305$, achieve relative error bounds over the interval $[0, \infty)$, respectively, of 1.03 × 10−3 and 2.28 × 10−4. The respective sixteenth-order approximations, with $x_o = 3.9025$ and $x_o = 4.101$, have relative error bounds of 3.44 × 10−8 and 6.66 × 10−9.
Further improvements were detailed via two generalizations. The first was based on utilizing integral approximations for each of the m equally spaced subintervals in the required interval of integration. The second was based on utilizing a fixed subinterval within the interval of integration with a known tabulated area, and then utilizing an integral approximation over the remainder of the interval. Both generalizations lead to significantly improved accuracy. For example, a fourth-order approximation based on four subintervals, with $x_o = 3.7208$, achieves a relative error bound of 1.43 × 10−7 over the interval $[0, \infty)$. A sixteenth-order approximation, with $x_o = 6.3726$, has a relative error bound of 2.01 × 10−19.
Finally, it was shown that a custom feedback system, with inputs defined by either the original error function approximations or approximations based on the use of subintervals, leads to analytical approximations with improved accuracy that are valid over the positive real line without utilizing the approximation $\mathrm{erf}(x) \approx 1$ for suitably large values of $x$. The original fourth-order error function approximation yields an approximation with a relative error bound of 1.82 × 10−5 over the interval $[0, \infty)$. The original sixteenth-order approximation yields an approximation with a relative error bound of 1.68 × 10−14.
Applications of the approximations include, first, approximations to achieve the specified error bounds of 10−4, 10−6, 10−10, and 10−16 over the positive real line. Second, the definitions of functions that are upper and lower bounds, of arbitrary accuracy, for the error function. Third, new series for the error function. Fourth, new sequences of approximations for $\exp(-x^2)$ that have significantly higher convergence properties than a Taylor series approximation. Fifth, a complementary demarcation function satisfying the constraint $e_C^2(x) + \mathrm{erf}^2(x) = 1$ was defined. Sixth, arbitrarily accurate approximations for the power and harmonic distortion for a sinusoidal signal subject to an error function nonlinearity. Seventh, approximate expressions for the linear filtering of a step signal that is modeled by the error function.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/mca27010014/s1, File S1. Mathematica Code for Figure 2 Results.

Funding

This research received no external funding.

Acknowledgments

The support of Abdelhak M. Zoubir, SPG, Technische Universität Darmstadt, Darmstadt, Germany, who hosted a visit during which the research for, and writing of, this paper was completed, is gratefully acknowledged. An anonymous reviewer provided a comment, which, after consideration, led to the improved approximations detailed in Theorem 3.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A. Proof of Theorem 1

Consider $f(x) = \exp(-x^2)$. Successive differentiation of this function leads to the iterative formula
$$f^{(k)}(x) = p(k, x)\exp(-x^2),$$
where
$$p(k, x) = p^{(1)}(k-1, x) - 2x\,p(k-1, x), \quad p(0, x) = 1.$$
It then follows from Equation (20) that
$$\frac{2}{\sqrt{\pi}}\int_{\alpha}^{x}\exp(-\lambda^2)\,d\lambda \approx \frac{2}{\sqrt{\pi}}\sum_{k=0}^{n} c_{n,k}(x-\alpha)^{k+1}\left[f^{(k)}(\alpha) + (-1)^k f^{(k)}(x)\right] = \frac{2}{\sqrt{\pi}}\sum_{k=0}^{n} c_{n,k}(x-\alpha)^{k+1}\left[p(k,\alpha)\exp(-\alpha^2) + (-1)^k p(k,x)\exp(-x^2)\right]$$
The result for the case of $\alpha = 0$ then yields the nth-order approximation for the error function:
$$f_n(x) = \frac{2}{\sqrt{\pi}}\sum_{k=0}^{n} c_{n,k}\,x^{k+1}\left[p(k,0) + (-1)^k p(k,x)e^{-x^2}\right]$$
To determine $\varepsilon_n^{(1)}(x)$, consider the equality $\mathrm{erf}(x) = f_n(x) + \varepsilon_n(x)$. Differentiation yields
$$\varepsilon_n^{(1)}(x) = \frac{2e^{-x^2}}{\sqrt{\pi}} - \frac{2}{\sqrt{\pi}}\sum_{k=0}^{n} c_{n,k}(k+1)x^{k}\left[p(k,0) + (-1)^k p(k,x)e^{-x^2}\right] - \frac{2e^{-x^2}}{\sqrt{\pi}}\sum_{k=0}^{n} c_{n,k}\,x^{k+1}(-1)^k\left[p^{(1)}(k,x) - 2x\,p(k,x)\right]$$
and the required result:
$$\varepsilon_n^{(1)}(x) = \frac{2e^{-x^2}}{\sqrt{\pi}} - \frac{2}{\sqrt{\pi}}\sum_{k=0}^{n} c_{n,k}(k+1)x^{k}p(k,0) - \frac{2e^{-x^2}}{\sqrt{\pi}}\sum_{k=0}^{n} c_{n,k}\,x^{k}(-1)^k\left[(k+1-2x^2)p(k,x) + x\,p^{(1)}(k,x)\right]$$
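A minimal computational sketch of the construction of Theorem 1 is given below. The recursion for $p(k, x)$ is as stated above; the quadrature weights $c_{n,k}$, which are defined by Equation (20) and are not reproduced in this appendix, are here assumed to be the standard two-point spline (Hermite) weights $c_{n,k} = (n+1)!\,(2n+1-k)!/[(k+1)!\,(n-k)!\,(2n+2)!]$, which are consistent with the zeroth- and first-order cases $c_{0,0} = 1/2$ and $c_{1,1} = 1/12$.

```python
import math
import sympy as sp

x = sp.symbols('x')

def p_poly(k):
    # p(0, x) = 1,  p(k, x) = p^(1)(k-1, x) - 2 x p(k-1, x)
    p = sp.Integer(1)
    for _ in range(k):
        p = sp.diff(p, x) - 2 * x * p
    return sp.expand(p)

def c(n, k):
    # assumed two-point spline quadrature weights (see lead-in); exact rationals
    return (sp.factorial(n + 1) * sp.factorial(2 * n + 1 - k) /
            (sp.factorial(k + 1) * sp.factorial(n - k) * sp.factorial(2 * n + 2)))

def f_n(n):
    # order-n approximation of Theorem 1 for erf(x)
    total = sum(c(n, k) * x ** (k + 1) *
                (p_poly(k).subs(x, 0) + (-1) ** k * p_poly(k) * sp.exp(-x ** 2))
                for k in range(n + 1))
    return 2 / sp.sqrt(sp.pi) * total

f4 = sp.lambdify(x, f_n(4), 'math')
for xv in (0.5, 1.0, 2.0):
    print(xv, f4(xv), math.erf(xv))
```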

Appendix B. Proof of Theorem 2

The use of a Taylor series expansion for $\exp(-x^2)$ in the definitions of $\varepsilon_0^{(1)}(x)$ and $\varepsilon_1^{(1)}(x)$, as defined by Equations (40) and (41), yields:
$$\varepsilon_0^{(1)}(x) = \frac{x^2}{\sqrt{\pi}}\left[1 - \frac{3x^2}{2} + \frac{5x^4}{6} - \frac{7x^6}{24} + \frac{3x^8}{40} - \frac{11x^{10}}{720} + \frac{13x^{12}}{5040} - \frac{x^{14}}{2688} + \cdots\right] = \frac{x^2}{\sqrt{\pi}}\left[c_{0,0} - c_{0,1}x^2 + \cdots + (-1)^k c_{0,k}x^{2k} + \cdots\right], \quad c_{0,k} = \frac{2}{k!} - \frac{1}{(k+1)!},\ k \ge 0$$
$$\varepsilon_1^{(1)}(x) = \frac{x^4}{6\sqrt{\pi}}\left[1 - 2x^2 + \frac{5x^4}{4} - \frac{7x^6}{16} + \frac{x^8}{8} - \frac{11x^{10}}{420} + \cdots\right]$$
From a consideration of $\varepsilon_0^{(1)}(x)$ and $\varepsilon_1^{(1)}(x)$, as well as higher-order residual functions, it can readily be seen that the polynomial terms of order zero to 2n + 1 in $\varepsilon_n^{(1)}(x)$ have coefficients of zero. It then follows that $\varepsilon_n^{(1)}(x)$ can be written as
$$\varepsilon_n^{(1)}(x) = \frac{1}{\sqrt{\pi}}\cdot\frac{x^{2n+2}}{x_{n,0}}\,g_n(x), \quad n \in \{0, 1, 2, \ldots\},$$
where
$$g_n(x) = 1 - d_{n,1}x^2 + d_{n,2}x^4 - \cdots + (-1)^k d_{n,k}x^{2k} + \cdots$$
and where it can readily be shown that
$$x_{n,0} = 2^n\prod_{i=0}^{n}(2i+1) \ge 2^n\,n!, \quad n \in \{0, 1, 2, \ldots\}.$$
Graphs of the magnitude of the residual functions, ε n ( 1 ) ( x ) , for orders zero, two, four, six, and eight, are shown in Figure A1. The magnitude of the functions defined by g n , for the same orders, are shown in Figure A2 and it is evident that
$$|g_n(x)| \le k_o, \quad x \ge 0,\ n \in \{0, 1, 2, \ldots\},$$
for a fixed constant $k_o$, which is of the order of unity. Hence:
$$|\varepsilon_n^{(1)}(x)| \le \frac{k_o}{\sqrt{\pi}}\cdot\frac{x^{2n+2}}{x_{n,0}}, \quad x \ge 0,\ n \in \{0, 1, 2, \ldots\}.$$
Figure A1. Graphs of $|\varepsilon_n^{(1)}(x)|$ for orders zero, two, four, six and eight.
Figure A2. Graphs of $|g_n(x)|$ for orders zero, two, four, six and eight.
Further, as $x_{n,0} \ge 2^n\,n!$, it follows, for all fixed values of x, that
$$\lim_{n\to\infty}\varepsilon_n^{(1)}(x) = 0, \quad x \ge 0.$$
The convergence is not uniform.
It then follows, for all fixed values of x, that there exists an order of approximation, n, such that the error in the approximation $\varepsilon_n^{(1)}(x)$ can be made arbitrarily small, i.e., for all $\varepsilon_o > 0$ there exists a number $N(x)$ such that
$$|\varepsilon_n^{(1)}(x)| < \varepsilon_o, \quad \forall n > N(x).$$
In general, $N(x)$ increases with x. Thus, for all $\varepsilon_o > 0$ there exists a number $N_{x_o}(x_o)$ such that
$$|\varepsilon_n^{(1)}(x)| < \varepsilon_o, \quad \forall x \in [0, x_o],\ n > N_{x_o}(x_o).$$
Finally, as $\varepsilon_n(0) = 0$ for all n, it then follows, for x fixed, that
$$|\varepsilon_n(x)| = \left|\int_0^x \varepsilon_n^{(1)}(\lambda)\,d\lambda\right| < \varepsilon_o\,x, \quad \forall n > N_x(x),$$
which proves convergence.
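A simple numerical illustration of the bound $|g_n(x)| \le k_o$ is given below for the n = 0 case, where $f_0(x) = x(1 + e^{-x^2})/\sqrt{\pi}$ is the zeroth-order approximation consistent with the series for $\varepsilon_0^{(1)}(x)$ above; the sample points are illustrative choices.

```python
import math

def eps0_prime(x):
    # eps_0^(1)(x) = 2 exp(-x^2)/sqrt(pi) - f_0'(x) = (exp(-x^2)(1 + 2 x^2) - 1)/sqrt(pi)
    return (math.exp(-x * x) * (1.0 + 2.0 * x * x) - 1.0) / math.sqrt(math.pi)

def g0(x):
    # eps_0^(1)(x) = x^2 g_0(x)/sqrt(pi), since x_{0,0} = 1
    return math.sqrt(math.pi) * eps0_prime(x) / (x * x)

for xv in (0.1, 0.5, 1.0, 2.0, 4.0, 8.0):
    print(xv, g0(xv))   # magnitudes remain of order unity, consistent with Figure A2
```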

Appendix C. Fourth-Order Spline Approximation—The Sixteen-Subintervals Case

Consistent with Theorem 5, a fourth-order spline approximation, which utilizes 16 subintervals, is
$$\begin{aligned}
f_{4,16}(x) ={}& \frac{x}{16\sqrt{\pi}}\left[1 - \frac{16x^2}{73728} + \frac{16x^4}{1321205760}\right] + \frac{x}{8\sqrt{\pi}}\exp\!\left[-\frac{x^2}{256}\right]\left[1 - \frac{x^2}{4608} + \frac{47x^4}{27525120} - \frac{x^6}{5284823040} + \frac{x^8}{4058744094720}\right]\\
&+ \frac{x}{8\sqrt{\pi}}\exp\!\left[-\frac{x^2}{64}\right]\left[1 - \frac{x^2}{4608} + \frac{187x^4}{27525120} - \frac{x^6}{1321205760} + \frac{x^8}{253671505920}\right]\\
&+ \frac{x}{8\sqrt{\pi}}\exp\!\left[-\frac{9x^2}{256}\right]\left[1 - \frac{x^2}{4608} + \frac{1261x^4}{82575360} - \frac{x^6}{587202560} + \frac{3x^8}{150323855360}\right]\\
&+ \frac{x}{8\sqrt{\pi}}\exp\!\left[-\frac{x^2}{16}\right]\left[1 - \frac{x^2}{4608} + \frac{249x^4}{9175040} - \frac{x^6}{330301440} + \frac{x^8}{15854469120}\right]\\
&+ \frac{x}{8\sqrt{\pi}}\exp\!\left[-\frac{25x^2}{256}\right]\left[1 - \frac{x^2}{4608} + \frac{389x^4}{9175040} - \frac{5x^6}{1056964608} + \frac{125x^8}{811748818944}\right]\\
&+ \frac{x}{8\sqrt{\pi}}\exp\!\left[-\frac{9x^2}{64}\right]\left[1 - \frac{x^2}{4608} + \frac{5041x^4}{82575360} - \frac{x^6}{146800640} + \frac{3x^8}{9395240960}\right]\\
&+ \frac{x}{8\sqrt{\pi}}\exp\!\left[-\frac{49x^2}{256}\right]\left[1 - \frac{x^2}{4608} + \frac{2287x^4}{27525120} - \frac{7x^6}{754974720} + \frac{343x^8}{579820584960}\right]\\
&+ \frac{x}{8\sqrt{\pi}}\exp\!\left[-\frac{x^2}{4}\right]\left[1 - \frac{x^2}{4608} + \frac{2987x^4}{27525120} - \frac{x^6}{82575360} + \frac{x^8}{990904320}\right]\\
&+ \frac{x}{8\sqrt{\pi}}\exp\!\left[-\frac{81x^2}{256}\right]\left[1 - \frac{x^2}{4608} + \frac{11341x^4}{82575360} - \frac{9x^6}{587202560} + \frac{243x^8}{150323855360}\right]\\
&+ \frac{x}{8\sqrt{\pi}}\exp\!\left[-\frac{25x^2}{64}\right]\left[1 - \frac{x^2}{4608} + \frac{4667x^4}{27525120} - \frac{5x^6}{264241152} + \frac{125x^8}{50734301184}\right]\\
&+ \frac{x}{8\sqrt{\pi}}\exp\!\left[-\frac{121x^2}{256}\right]\left[1 - \frac{x^2}{4608} + \frac{5647x^4}{27525120} - \frac{121x^6}{5284823040} + \frac{14641x^8}{4058744094720}\right]\\
&+ \frac{x}{8\sqrt{\pi}}\exp\!\left[-\frac{9x^2}{16}\right]\left[1 - \frac{x^2}{4608} + \frac{20161x^4}{82575360} - \frac{x^6}{36700160} + \frac{3x^8}{587202560}\right]\\
&+ \frac{x}{8\sqrt{\pi}}\exp\!\left[-\frac{169x^2}{256}\right]\left[1 - \frac{x^2}{4608} + \frac{2629x^4}{9175040} - \frac{169x^6}{5284823040} + \frac{28561x^8}{4058744094720}\right]\\
&+ \frac{x}{8\sqrt{\pi}}\exp\!\left[-\frac{49x^2}{64}\right]\left[1 - \frac{x^2}{4608} + \frac{3049x^4}{9175040} - \frac{7x^6}{188743680} + \frac{343x^8}{36238786560}\right]\\
&+ \frac{x}{8\sqrt{\pi}}\exp\!\left[-\frac{225x^2}{256}\right]\left[1 - \frac{x^2}{4608} + \frac{31501x^4}{82575360} - \frac{5x^6}{117440512} + \frac{375x^8}{30064771072}\right]\\
&+ \frac{x}{16\sqrt{\pi}}\exp\!\left(-x^2\right)\left[1 + \frac{127x^2}{4608} + \frac{3929x^4}{9175040} + \frac{79x^6}{20643840} + \frac{x^8}{61931520}\right]
\end{aligned}$$
When this approximation is utilized with the transition point $x_o = 7.1544$, the relative error bound in the approximation to the error function, over the interval $(0, \infty)$, is 4.82 × 10−16.
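Rather than transcribing the closed form above, the sketch below evaluates the same type of m-subinterval, order-n composite approximation numerically, using the assumed two-point spline weights noted in the Appendix A sketch; the order, subinterval count, and sample points are illustrative choices.

```python
import math

def p_coeffs(kmax):
    # coefficient lists (ascending powers) for p(k, x): p(k) = p'(k-1) - 2 x p(k-1)
    polys = [[1.0]]
    for _ in range(kmax):
        prev = polys[-1]
        nxt = [0.0] * (len(prev) + 1)
        for i in range(1, len(prev)):            # derivative of p(k-1)
            nxt[i - 1] += i * prev[i]
        for i, a in enumerate(prev):             # -2 x p(k-1)
            nxt[i + 1] += -2.0 * a
        polys.append(nxt)
    return polys

def c(n, k):
    # assumed two-point spline quadrature weights (see the Appendix A sketch)
    return (math.factorial(n + 1) * math.factorial(2 * n + 1 - k) /
            (math.factorial(k + 1) * math.factorial(n - k) * math.factorial(2 * n + 2)))

def erf_composite(x, order=4, m=16):
    # composite order-n approximation over m equal subintervals of [0, x]
    polys = p_coeffs(order)
    fk = lambda k, t: sum(a * t ** i for i, a in enumerate(polys[k])) * math.exp(-t * t)
    h = x / m
    total = 0.0
    for j in range(m):
        a, b = j * h, (j + 1) * h
        total += sum(c(order, k) * h ** (k + 1) * (fk(k, a) + (-1) ** k * fk(k, b))
                     for k in range(order + 1))
    return 2.0 / math.sqrt(math.pi) * total

for xv in (1.0, 3.0, 6.0):
    approx = erf_composite(xv)
    print(xv, approx, abs(approx - math.erf(xv)) / math.erf(xv))
```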

Appendix D. Proof of Theorem 7

Consider the differential equation
$$y_n^{(1)}(t) + g_n(t)\,y_n(t) = g_n(t), \quad y_n(0) = 0,$$
for the case where $g_n$ is based on the nth-order approximation $f_n$ to the error function, defined in Theorem 1, and is defined according to
$$g_n(t) = \frac{4}{\sqrt{\pi}}\,e^{-t^2}f_n(t) = \frac{8e^{-t^2}}{\pi}\sum_{k=0}^{n} c_{n,k}\,t^{k+1}\left[p(k,0) + (-1)^k p(k,t)e^{-t^2}\right], \quad t \ge 0.$$
To find a solution to the differential equation for such a driving signal, first note that the solution to the differential equation for the case of $g_n(t) = \frac{4}{\sqrt{\pi}}e^{-t^2}\,\mathrm{erf}(t)$ is
$$y_n(t) = 1 - \exp\!\left[-\mathrm{erf}^2(t)\right].$$
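This solution can be confirmed symbolically; the short sketch below checks that $y(t) = 1 - \exp[-\mathrm{erf}^2(t)]$ satisfies the differential equation and the initial condition.

```python
import sympy as sp

t = sp.symbols('t')
g = 4 / sp.sqrt(sp.pi) * sp.exp(-t ** 2) * sp.erf(t)   # driving signal for the exact case
y = 1 - sp.exp(-sp.erf(t) ** 2)

residual = sp.diff(y, t) + g * y - g
print(sp.simplify(residual))      # 0: the differential equation is satisfied
print(y.subs(t, 0))               # 0: the initial condition is satisfied
```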
Second, with $g_n$ defined by Equation (A20), the following signal form
$$y_n(t) = 1 - \exp\!\left[-\left[p_{n,0} + p_{n,1}(t)e^{-t^2} + p_{n,2}(t)e^{-2t^2}\right]\right], \quad t \ge 0,$$
has potential as a basis for finding solutions for the unknown polynomial functions $p_{n,1}$ and $p_{n,2}$ and the unknown constant $p_{n,0}$. With such a form, the initial condition of $y_n(0) = 0$ implies
$$p_{n,0} = -\left[p_{n,1}(0) + p_{n,2}(0)\right].$$
It is the case that
$$y_n^{(1)}(t) = \left[p_{n,1}^{(1)}(t)e^{-t^2} - 2t\,p_{n,1}(t)e^{-t^2} + p_{n,2}^{(1)}(t)e^{-2t^2} - 4t\,p_{n,2}(t)e^{-2t^2}\right]\left[1 - y_n(t)\right].$$
Substitution of $y_n(t)$ and $y_n^{(1)}(t)$ into the differential equation yields
$$\left[p_{n,1}^{(1)}(t)e^{-t^2} - 2t\,p_{n,1}(t)e^{-t^2} + p_{n,2}^{(1)}(t)e^{-2t^2} - 4t\,p_{n,2}(t)e^{-2t^2}\right]\exp\!\left[-\left[p_{n,0} + p_{n,1}(t)e^{-t^2} + p_{n,2}(t)e^{-2t^2}\right]\right] + \frac{4}{\sqrt{\pi}}e^{-t^2}f_n(t)\cdot\left[1 - \exp\!\left[-\left[p_{n,0} + p_{n,1}(t)e^{-t^2} + p_{n,2}(t)e^{-2t^2}\right]\right]\right] = \frac{4}{\sqrt{\pi}}e^{-t^2}f_n(t)$$
which implies
$$p_{n,1}^{(1)}(t)e^{-t^2} - 2t\,p_{n,1}(t)e^{-t^2} + p_{n,2}^{(1)}(t)e^{-2t^2} - 4t\,p_{n,2}(t)e^{-2t^2} - \frac{4}{\sqrt{\pi}}e^{-t^2}f_n(t) = 0$$
and
$$p_{n,1}^{(1)}(t)e^{-t^2} - 2t\,p_{n,1}(t)e^{-t^2} + p_{n,2}^{(1)}(t)e^{-2t^2} - 4t\,p_{n,2}(t)e^{-2t^2} = \frac{8e^{-t^2}}{\pi}\sum_{k=0}^{n} c_{n,k}\,t^{k+1}\left[p(k,0) + (-1)^k p(k,t)e^{-t^2}\right].$$
Thus:
$$p_{n,1}^{(1)}(t) - 2t\,p_{n,1}(t) = \frac{8}{\pi}\sum_{k=0}^{n} c_{n,k}\,p(k,0)\,t^{k+1}, \qquad p_{n,2}^{(1)}(t) - 4t\,p_{n,2}(t) = \frac{8}{\pi}\sum_{k=0}^{n} c_{n,k}(-1)^k p(k,t)\,t^{k+1}$$
To solve for the polynomials $p_{n,1}$ and $p_{n,2}$, first note (see Equation (26)) that
$$p(k, t) = a_{k,0} + a_{k,1}t + a_{k,2}t^2 + \cdots + a_{k,k}t^k, \quad k \in \{0, 1, \ldots, n\},$$
for appropriately defined coefficients a k , j , j { 0 , 1 , , k } .

Appendix D.1. Solving for Coefficients of First Polynomial

Substitution of p ( k , 0 ) from Equation (A29) into the differential equation defining p n , 1 yields
$$p_{n,1}^{(1)}(t) - 2t\,p_{n,1}(t) = \frac{8}{\pi}\sum_{k=0}^{n} c_{n,k}\,a_{k,0}\,t^{k+1}.$$
With $a_{n,0} = 0$ for n odd, the maximum power for t on the right side of the differential equation is $t^{n+1}$, n even, and $t^n$ for n odd. Thus, the form required for $p_{n,1}$ is
$$p_{n,1}(t) = \begin{cases}\alpha_0 + \alpha_1 t + \cdots + \alpha_{n-1}t^{n-1}, & n\ \text{odd}\\ \alpha_0 + \alpha_1 t + \cdots + \alpha_{n}t^{n}, & n\ \text{even}\end{cases}$$
Substitution then yields
$$\begin{aligned}
\left[\alpha_1 + 2\alpha_2 t + \cdots + (n-1)\alpha_{n-1}t^{n-2}\right] - 2t\left[\alpha_0 + \alpha_1 t + \cdots + \alpha_{n-1}t^{n-1}\right] &= \frac{8}{\pi}\sum_{k=0}^{n-1} c_{n,k}\,a_{k,0}\,t^{k+1}, \quad n\ \text{odd}\\
\left[\alpha_1 + 2\alpha_2 t + \cdots + n\alpha_{n}t^{n-1}\right] - 2t\left[\alpha_0 + \alpha_1 t + \cdots + \alpha_{n}t^{n}\right] &= \frac{8}{\pi}\sum_{k=0}^{n} c_{n,k}\,a_{k,0}\,t^{k+1}, \quad n\ \text{even}
\end{aligned}$$
For the case of n even, equating coefficients associated with set powers of t yields:
$$\begin{aligned}
t^{n+1}&: & -2\alpha_n &= \frac{8}{\pi}c_{n,n}a_{n,0} & \Rightarrow\ & \alpha_n = -\frac{4}{\pi}c_{n,n}a_{n,0}\\
t^{n}&: & -2\alpha_{n-1} &= \frac{8}{\pi}c_{n,n-1}a_{n-1,0} & \Rightarrow\ & \alpha_{n-1} = -\frac{4}{\pi}c_{n,n-1}a_{n-1,0}\\
t^{n-1}&: & n\alpha_n - 2\alpha_{n-2} &= \frac{8}{\pi}c_{n,n-2}a_{n-2,0} & \Rightarrow\ & \alpha_{n-2} = \frac{n}{2}\alpha_n - \frac{4}{\pi}c_{n,n-2}a_{n-2,0}\\
&\vdots\\
t^{2}&: & 3\alpha_3 - 2\alpha_1 &= \frac{8}{\pi}c_{n,1}a_{1,0} & \Rightarrow\ & \alpha_1 = \frac{3\alpha_3}{2} - \frac{4}{\pi}c_{n,1}a_{1,0}\\
t^{1}&: & 2\alpha_2 - 2\alpha_0 &= \frac{8}{\pi}c_{n,0}a_{0,0} & \Rightarrow\ & \alpha_0 = \alpha_2 - \frac{4}{\pi}c_{n,0}a_{0,0}
\end{aligned}$$
With the odd coefficients $a_{1,0}, a_{3,0}, \ldots, a_{n-1,0}$ being zero, it follows that the corresponding odd coefficients $\alpha_{n-1}, \alpha_{n-3}, \ldots, \alpha_1$ are also zero. For the even coefficients, the algorithm is:
$$\begin{aligned}
\alpha_m &= -\frac{4}{\pi}c_{n,m}a_{m,0}\\
\alpha_{m-2i} &= \frac{(m-2i+2)\,\alpha_{m-2i+2}}{2} - \frac{4}{\pi}c_{n,m-2i}\,a_{m-2i,0}, \quad i \in \left\{1, \ldots, \tfrac{m}{2}-1\right\}\\
\alpha_0 &= \alpha_2 - \frac{4}{\pi}c_{n,0}a_{0,0}
\end{aligned}$$
where m = n. For the case of n being odd, the odd coefficients α n , α n 2 , , α 1 are again zero and the algorithm is the same as that specified in Equation (A34) with m = n − 1.

Appendix D.2. Solving for Coefficients of Second Polynomial

Substitution of p ( k , t ) from Equation (A29) into the differential equation defining p n , 2 yields:
$$p_{n,2}^{(1)}(t) - 4t\,p_{n,2}(t) = \frac{8}{\pi}\sum_{k=0}^{n} c_{n,k}(-1)^k\left[\sum_{i=0}^{k} a_{k,i}\,t^{k+i+1}\right].$$
The coefficients $a_{k,i}$ that are associated with a given power of t are illustrated in Figure A3. It then follows, for a fixed power of t, say $t^r$, that the associated coefficients, $a_{k,i}$, are
$$k \in \left\{\left\lfloor\tfrac{r}{2}\right\rfloor, \left\lfloor\tfrac{r}{2}\right\rfloor + 1, \ldots, \min\{r-1, n\}\right\}, \qquad i = r - k - 1.$$
Thus:
$$p_{n,2}^{(1)}(t) - 4t\,p_{n,2}(t) = \frac{8}{\pi}\sum_{r=1}^{2n+1}\left[\sum_{k=\lfloor r/2\rfloor}^{\min\{r-1,\,n\}} c_{n,k}(-1)^k a_{k,r-k-1}\right]t^r.$$
Figure A3. Illustration of the coefficients that potentially are non-zero for a set power of t. The illustration is for the case of n = 8.
With
$$p_{n,2}(t) = \beta_0 + \beta_1 t + \cdots + \beta_m t^m,$$
the differential equation implies
$$\left[\beta_1 + 2\beta_2 t + \cdots + m\beta_m t^{m-1}\right] - 4t\left[\beta_0 + \beta_1 t + \cdots + \beta_m t^m\right] = \frac{8}{\pi}\sum_{r=1}^{2n+1}\left[\sum_{k=\lfloor r/2\rfloor}^{\min\{r-1,\,n\}} c_{n,k}(-1)^k a_{k,r-k-1}\right]t^r.$$
The requirement, thus, is for $m = 2n$. Equating coefficients (see Figure A3) yields:
$$\begin{aligned}
t^{2n+1}&: & -4\beta_{2n} &= \frac{8}{\pi}(-1)^n c_{n,n}a_{n,n} & \Rightarrow\ & \beta_{2n} = -\frac{2}{\pi}(-1)^n c_{n,n}a_{n,n}\\
t^{2n}&: & -4\beta_{2n-1} &= \frac{8}{\pi}(-1)^n c_{n,n}a_{n,n-1} & \Rightarrow\ & \beta_{2n-1} = -\frac{2}{\pi}(-1)^n c_{n,n}a_{n,n-1}\\
t^{2n-1}&: & 2n\beta_{2n} - 4\beta_{2n-2} &= \frac{8}{\pi}\left[(-1)^{n-1}c_{n,n-1}a_{n-1,n-1} + (-1)^n c_{n,n}a_{n,n-2}\right] & \Rightarrow\ & \beta_{2n-2} = \frac{2n\beta_{2n}}{4} - \frac{2}{\pi}\left[(-1)^{n-1}c_{n,n-1}a_{n-1,n-1} + (-1)^n c_{n,n}a_{n,n-2}\right]\\
&\vdots\\
t^{3}&: & 4\beta_4 - 4\beta_2 &= \frac{8}{\pi}\left[-c_{n,1}a_{1,1} + c_{n,2}a_{2,0}\right] & \Rightarrow\ & \beta_2 = \beta_4 - \frac{2}{\pi}\left[-c_{n,1}a_{1,1} + c_{n,2}a_{2,0}\right]\\
t^{2}&: & 3\beta_3 - 4\beta_1 &= -\frac{8}{\pi}c_{n,1}a_{1,0} & \Rightarrow\ & \beta_1 = \frac{3\beta_3}{4} + \frac{2}{\pi}c_{n,1}a_{1,0}\\
t^{1}&: & 2\beta_2 - 4\beta_0 &= \frac{8}{\pi}c_{n,0}a_{0,0} & \Rightarrow\ & \beta_0 = \frac{2\beta_2}{4} - \frac{2}{\pi}c_{n,0}a_{0,0}
\end{aligned}$$
As the coefficients $a_{n,n-1}, a_{n,n-3}, \ldots, a_{1,0}$ are zero, the algorithm is:
$$\begin{aligned}
\beta_{2n} &= -\frac{2}{\pi}(-1)^n c_{n,n}a_{n,n}\\
\beta_{2n-2i} &= \frac{[2n-2i+2]\,\beta_{2n-2i+2}}{4} - \frac{2}{\pi}\sum_{k=n-i}^{\min\{2n-2i,\,n\}}(-1)^k c_{n,k}\,a_{k,2(n-i)-k}, \quad i \in \{1, \ldots, n-1\}\\
\beta_0 &= \frac{\beta_2}{2} - \frac{2}{\pi}c_{n,0}a_{0,0}
\end{aligned}$$

Appendix E. Proof of Theorem 8

Consider the results stated in Theorem 1:
$$\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-\lambda^2}\,d\lambda \approx \frac{2}{\sqrt{\pi}}\sum_{k=0}^{n} c_{n,k}\,x^{k+1}\left[p(k,0) + (-1)^k p(k,x)e^{-x^2}\right].$$
Differentiation then yields
$$e^{-x^2} \approx \sum_{k=0}^{n} c_{n,k}(k+1)x^{k}\left[p(k,0) + (-1)^k p(k,x)e^{-x^2}\right] + \sum_{k=0}^{n} c_{n,k}(-1)^k x^{k+1}\left[p^{(1)}(k,x)e^{-x^2} - 2x\,p(k,x)e^{-x^2}\right]$$
which leads to the required result:
$$e^{-x^2} \approx \frac{\displaystyle\sum_{k=0}^{n} c_{n,k}(k+1)x^{k}\,p(k,0)}{\displaystyle 1 + \sum_{k=0}^{n} c_{n,k}(-1)^{k+1}x^{k}\left[(k+1-2x^2)\,p(k,x) + x\,p^{(1)}(k,x)\right]}$$
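A sketch of this $\exp(-x^2)$ approximation is given below, again under the assumption that the $c_{n,k}$ are the two-point spline weights used in the Appendix A sketch; the comparison with a low-order Taylor polynomial and the sample points are illustrative choices.

```python
import math
import sympy as sp

x = sp.symbols('x')

def p_poly(k):
    # p(0, x) = 1,  p(k, x) = p^(1)(k-1, x) - 2 x p(k-1, x)
    p = sp.Integer(1)
    for _ in range(k):
        p = sp.diff(p, x) - 2 * x * p
    return p

def c(n, k):
    # assumed two-point spline quadrature weights (see the Appendix A sketch)
    return (sp.factorial(n + 1) * sp.factorial(2 * n + 1 - k) /
            (sp.factorial(k + 1) * sp.factorial(n - k) * sp.factorial(2 * n + 2)))

def gauss_approx(n):
    # order-n rational approximation to exp(-x^2) from the relationship above
    num = sum(c(n, k) * (k + 1) * x ** k * p_poly(k).subs(x, 0) for k in range(n + 1))
    den = 1 + sum(c(n, k) * (-1) ** (k + 1) * x ** k *
                  ((k + 1 - 2 * x ** 2) * p_poly(k) + x * sp.diff(p_poly(k), x))
                  for k in range(n + 1))
    return num / den

g4 = sp.lambdify(x, gauss_approx(4), 'math')
taylor = lambda v: sum((-v * v) ** j / math.factorial(j) for j in range(5))
for xv in (0.5, 1.0, 1.5):
    print(xv, g4(xv), taylor(xv), math.exp(-xv * xv))
```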

Appendix F. Proof of Theorem 9

Consider the exact result
$$\mathrm{erf}(x) = f_0(x) + \varepsilon_0(x)$$
where $f_0$ is specified by Equation (32) and the derivative of the error term, $\varepsilon_0^{(1)}(x)$, is specified by Equation (40). By utilizing a Taylor series approximation for $\exp(-x^2)$, $\varepsilon_0^{(1)}(x)$ can be written as
$$\varepsilon_0^{(1)}(x) = \frac{x^2}{\sqrt{\pi}}\left[1 - \frac{3x^2}{2} + \frac{5x^4}{6} - \frac{7x^6}{24} + \frac{9x^8}{120} - \cdots + \frac{(-1)^k(2k+1)x^{2k}}{(k+1)!} + \cdots\right]$$
Integration yields
$$\varepsilon_0(x) = \frac{x^3}{\sqrt{\pi}}\left[\frac{1}{3} - \frac{3x^2}{2\cdot 5} + \frac{5x^4}{6\cdot 7} - \frac{7x^6}{24\cdot 9} + \frac{9x^8}{120\cdot 11} - \cdots + \frac{(-1)^k(2k+1)x^{2k}}{(2k+3)(k+1)!} + \cdots\right]$$
and the following series for the error function then follows:
$$\mathrm{erf}(x) = \frac{x}{\sqrt{\pi}} + \frac{x}{\sqrt{\pi}}e^{-x^2} + \frac{x^3}{\sqrt{\pi}}\left[\frac{1}{3} - \frac{3x^2}{2\cdot 5} + \frac{5x^4}{6\cdot 7} - \frac{7x^6}{24\cdot 9} + \frac{9x^8}{120\cdot 11} - \cdots + \frac{(-1)^k(2k+1)x^{2k}}{(2k+3)(k+1)!} + \cdots\right]$$
The series associated with first- and second-order approximations follow in an analogous manner.
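The series can be checked numerically; the sketch below compares a truncation of the series above with math.erf, with the truncation order and sample points being illustrative choices.

```python
import math

def erf_series(x, terms=40):
    # erf(x) = x/sqrt(pi) + (x/sqrt(pi)) exp(-x^2)
    #          + (x^3/sqrt(pi)) sum_k (-1)^k (2k+1) x^(2k) / ((2k+3)(k+1)!)
    tail = sum((-1) ** k * (2 * k + 1) * x ** (2 * k) / ((2 * k + 3) * math.factorial(k + 1))
               for k in range(terms))
    return x / math.sqrt(math.pi) * (1.0 + math.exp(-x * x)) + x ** 3 / math.sqrt(math.pi) * tail

for xv in (0.5, 1.0, 2.0, 3.0):
    print(xv, erf_series(xv), math.erf(xv))
```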

Appendix G. Proof of Theorem 10

The filter output is given by the convolution integral:
$$y(t) = \int_0^t \mathrm{erf}\!\left[\frac{\lambda}{\gamma}\right]\frac{(t-\lambda)\,e^{-[t-\lambda]/\tau}}{\tau^2}\,d\lambda = \frac{e^{-t/\tau}}{\tau^2}\left[t\int_0^t \mathrm{erf}\!\left[\frac{\lambda}{\gamma}\right]e^{\lambda/\tau}\,d\lambda - \int_0^t \mathrm{erf}\!\left[\frac{\lambda}{\gamma}\right]\lambda\,e^{\lambda/\tau}\,d\lambda\right].$$
Using the integral results, e.g., Equations (4.2.1) and (4.2.5) of [38]:
$$\int_0^t \mathrm{erf}(a\lambda)\,e^{b\lambda}\,d\lambda = \frac{1}{b}\,\mathrm{erf}(at)\,e^{bt} - \frac{1}{b}\exp\!\left[\frac{b^2}{4a^2}\right]\mathrm{erf}\!\left[at - \frac{b}{2a}\right] + \frac{1}{b}\exp\!\left[\frac{b^2}{4a^2}\right]\mathrm{erf}\!\left[-\frac{b}{2a}\right]$$
$$\int_0^t \mathrm{erf}(a\lambda)\,\lambda\,e^{b\lambda}\,d\lambda = \frac{1}{b}\left[t - \frac{1}{b}\right]\mathrm{erf}(at)\,e^{bt} - \frac{1}{b}\exp\!\left[\frac{b^2}{4a^2}\right]\left[\left[\frac{b}{2a^2} - \frac{1}{b}\right]\mathrm{erf}\!\left[at - \frac{b}{2a}\right] - \frac{1}{a\sqrt{\pi}}\exp\!\left[-\left[at - \frac{b}{2a}\right]^2\right]\right] + \frac{1}{b}\exp\!\left[\frac{b^2}{4a^2}\right]\left[\left[\frac{b}{2a^2} - \frac{1}{b}\right]\mathrm{erf}\!\left[-\frac{b}{2a}\right] - \frac{1}{a\sqrt{\pi}}\exp\!\left[-\frac{b^2}{4a^2}\right]\right]$$
with $a = 1/\gamma$, $b = 1/\tau$, $b^2/4a^2 = \gamma^2/4\tau^2$, it then follows that
$$y(t) = \frac{t\,e^{-t/\tau}}{\tau^2}\left[\tau\,\mathrm{erf}\!\left[\frac{t}{\gamma}\right]e^{t/\tau} - \tau\exp\!\left[\frac{\gamma^2}{4\tau^2}\right]\mathrm{erf}\!\left[\frac{t}{\gamma} - \frac{\gamma}{2\tau}\right] + \tau\exp\!\left[\frac{\gamma^2}{4\tau^2}\right]\mathrm{erf}\!\left[-\frac{\gamma}{2\tau}\right]\right] - \frac{e^{-t/\tau}}{\tau^2}\left[\tau(t-\tau)\,\mathrm{erf}\!\left[\frac{t}{\gamma}\right]e^{t/\tau} - \tau\exp\!\left[\frac{\gamma^2}{4\tau^2}\right]\left[\left[\frac{\gamma^2}{2\tau} - \tau\right]\mathrm{erf}\!\left[\frac{t}{\gamma} - \frac{\gamma}{2\tau}\right] - \frac{\gamma}{\sqrt{\pi}}\exp\!\left[-\left[\frac{t}{\gamma} - \frac{\gamma}{2\tau}\right]^2\right]\right] + \tau\exp\!\left[\frac{\gamma^2}{4\tau^2}\right]\left[\left[\frac{\gamma^2}{2\tau} - \tau\right]\mathrm{erf}\!\left[-\frac{\gamma}{2\tau}\right] - \frac{\gamma}{\sqrt{\pi}}\exp\!\left[-\frac{\gamma^2}{4\tau^2}\right]\right]\right]$$
Simplifying, and using the fact that the error function is an odd function, yields the required result:
$$y(t) = \mathrm{erf}\!\left[\frac{t}{\gamma}\right] + \frac{e^{-t/\tau}}{\tau}\left[\exp\!\left[\frac{\gamma^2}{4\tau^2}\right]\left[\frac{\gamma^2}{2\tau} - (t+\tau)\right]\left[\mathrm{erf}\!\left[\frac{\gamma}{2\tau}\right] - \mathrm{erf}\!\left[\frac{\gamma}{2\tau} - \frac{t}{\gamma}\right]\right] - \frac{\gamma}{\sqrt{\pi}}\exp\!\left[\frac{\gamma^2}{4\tau^2}\right]\exp\!\left[-\left[\frac{t}{\gamma} - \frac{\gamma}{2\tau}\right]^2\right] + \frac{\gamma}{\sqrt{\pi}}\right]$$
To prove convergence, consider
$$\lim_{n\to\infty} y_n(t) = \lim_{n\to\infty}\int_0^t f_n\!\left[\frac{\lambda}{\gamma}\right]h(t-\lambda)\,d\lambda = \int_0^t \mathrm{erf}\!\left[\frac{\lambda}{\gamma}\right]h(t-\lambda)\,d\lambda$$
where $\lim_{n\to\infty} f_n(x) = \mathrm{erf}(x)$ and $h$ is the impulse response of the second-order filter. The interchange of limit and integration is valid, consistent with Lemma 2, as the integrand comprises differentiable bounded functions.

References

  1. Lebedev, N.N. Special Functions and Their Applications; Dover Publications: Dover, DE, USA, 1971.
  2. Fujiwara, T. Wavelength response of harmonic distortion in AC-bias recording. IEEE Trans. Magn. 1980, 16, 501–506.
  3. Lee, J.; Woodring, D. Considerations of Nonlinear Effects in Phase-Modulation Systems. IEEE Trans. Commun. 1972, 20, 1063–1073.
  4. Klein, S.A. Measuring, estimating, and understanding the psychometric function: A commentary. Percept. Psychophys. 2001, 63, 1421–1455.
  5. Rinderknecht, M.D.; Lambercy, O.; Gassert, R. Performance metrics for an application-driven selection and optimization of psychophysical sampling procedures. PLoS ONE 2018, 13, e0207217.
  6. Shi, Q. OFDM in bandpass nonlinearity. IEEE Trans. Consum. Electron. 1996, 42, 253–258.
  7. Taggart, D.; Kumar, R.; Raghavan, S.; Goo, G.; Chen, J.; Krikorian, Y. Communication system performance—Detailed modeling of a power amplifier with two modulated input signals. In Proceedings of the 2005 IEEE Aerospace Conference, Bozeman, MT, USA, 4 December 2005; pp. 1398–1409.
  8. Li, W. Damage Models for Soft Tissues: A Survey. J. Med. Biol. Eng. 2016, 36, 285–307.
  9. Ogden, R.W.; Roxburgh, D.G. A pseudo-elastic model for the Mullins effect in filled rubber. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci. 1999, 455, 2861–2877.
  10. Temme, N.M. Error Functions, Dawson’s and Fresnel Integrals. In NIST Handbook of Mathematical Functions; Olver, F.W., Lozier, D.W., Boisvert, R.F., Clark, C.W., Eds.; National Institute of Standards and Technology and Cambridge University Press: Cambridge, MA, USA, 2010.
  11. Schreier, F. The Voigt and complex error function: A comparison of computational methods. J. Quant. Spectrosc. Radiat. Transf. 1992, 48, 743–762.
  12. Marsaglia, G. Evaluating the Normal Distribution. J. Stat. Softw. 2004, 11, 1–11.
  13. Sandoval-Hernandez, M.A.; Vazquez-Leal, H.; Filobello-Nino, U.; Hernandez-Martinez, L. New handy and accurate approximation for the Gaussian integrals with applications to science and engineering. Open Math. 2019, 17, 1774–1793.
  14. Menzel, R. Approximate closed form solution to the error function. Am. J. Phys. 1975, 43, 366–367; Erratum in Am. J. Phys. 1975, 43, 923.
  15. Matíc, I.; Radoičić, R.; Stefanica, D. A sharp Pólya-based approximation to the normal CDF. Appl. Math. Comput. 2018, 322, 111–122.
  16. Chevillard, S. The functions erf and erfc computed with arbitrary precision and explicit error bounds. Inf. Comput. 2012, 216, 72–95.
  17. De Schrijver, S.K.; Aghezzaf, E.-H.; Vanmaele, H. Double precision rational approximation algorithms for the standard normal first and second order loss functions. Appl. Math. Comput. 2012, 219, 2320–2330.
  18. Cody, W.J. Rational Chebyshev approximations for the error function. Math. Comput. 1969, 23, 631–637.
  19. Howard, R.M. Dual Taylor Series, Spline Based Function and Integral Approximation and Applications. Math. Comput. Appl. 2019, 24, 35.
  20. Howard, R.M. Arbitrarily Accurate Spline Based Approximations for the Hyperbolic Tangent Function and Applications. Int. J. Appl. Comput. Math. 2021, 7, 1–59.
  21. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables; US Department of Commerce: Dover, DE, USA, 1964.
  22. Nandagopal, M.; Sen, S.; Rawat, A. A Note on the Error Function. Comput. Sci. Eng. 2010, 12, 84–88.
  23. Schöpf, H.M.; Supancic, P.H. On Bürmann’s theorem and its application to problems of linear and nonlinear heat transfer and diffusion: Expanding a function in powers of its derivative. Math. J. 2014, 16, 1–44.
  24. Winitzki, S. A Handy Approximation for the Error Function and Its Inverse. 2008. Available online: https://scholar.google.com/citations?user=Q9U40gUAAAAJ&hl=en&oi=sra (accessed on 10 November 2020).
  25. Soranzo, A.; Epure, E. Simply explicitly invertible approximations to 4 decimals of error function and normal cumulative distribution function. arXiv 2012, arXiv:1201.1320v1.
  26. Vedder, J.D. Simple approximations for the error function and its inverse. Am. J. Phys. 1987, 55, 762–763.
  27. Vazquez-Leal, H.; Castaneda-Sheissa, R.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Orea, J.S. High Accurate Simple Approximation of Normal Distribution Integral. Math. Probl. Eng. 2012, 2012, 1–22.
  28. Abrarov, S.M.; Quine, B.M. A rapid and highly accurate approximation for the error function of complex argument. arXiv 2013, arXiv:1308.3399.
  29. Champeney, D.C. A Handbook of Fourier Theorems; Cambridge University Press: Cambridge, UK, 1987.
  30. Chiani, M.; Dardari, D. Improved exponential bounds and approximation for the Q-function with application to average error probability computation. In Proceedings of the Global Telecommunications Conference, GLOBECOM’02, IEEE, Taipei, Taiwan, 17–21 November 2002; Volume 2, pp. 1399–1402.
  31. Alzer, H. Error function inequalities. Adv. Comput. Math. 2010, 33, 349–379.
  32. Chu, J.T. On Bounds for the Normal Integral. Biometrika 1955, 42, 263.
  33. Neuman, E. Inequalities and Bounds for the Incomplete Gamma Function. Results Math. 2013, 63, 1209–1214.
  34. Yang, Z.-H.; Chu, Y.-M. On approximating the error function. J. Inequalities Appl. 2016, 2016, 311.
  35. Yang, Z.-H.; Qian, W.-M.; Chu, Y.; Zhang, E. On approximating the error function. Math. Inequalities Appl. 2018, 21, 469–479.
  36. Abuelma’atti, M.T. A note on the harmonic distortion in AC-bias recording. IEEE Trans. Magn. 1988, 24, 3259–3260.
  37. Salzer, H.E. Formulas for calculating the Error function of a complex variable. Math. Tables Other Aids Comput. 1951, 5, 67–70.
  38. Ng, E.W.; Geller, M. A table of integrals of the error functions. J. Res. Nat. Bur. Stand. B Math. Sci. 1969, 73B, 1–20.
Figure 1. Graph of the magnitude of the relative error in the approximations, detailed in Table 1, for erf(x).
Figure 2. Graph of the magnitude of the relative errors in approximations to erf(x): zero to tenth order integral spline based series and first, third, ..., fifteenth order Taylor series (dotted).
Figure 3. Graph of the magnitude of the relative errors associated with the approximations $\mathrm{erf}(x) \approx 1$ and $\mathrm{erf}(x) \approx 1 - \exp(-x^2)/(\sqrt{\pi}\,x)$ along with the relative error in spline approximations of orders 16, 20, 24, 28 and 32.
Figure 4. Graph of the magnitude of the relative errors in the approximations to erf(x), of even orders, as specified by Equation (46). The dotted results are for the fourth order approximation specified by Theorem 1 (Equation (36)).
Figure 5. Illustration of the crossover point where the magnitude of the relative error in the approximation $\mathrm{erf}(x) \approx 1$ equals the magnitude of the relative error in a set order spline approximation.
Figure 6. Graph of the relationship between the optimum transition point $x_o(n)$, as defined by Equation (57) for the case of $f_n$, and the order of the spline approximation.
Figure 7. Graph of the relative errors in the approximations, $f_n$, to erf(x), of orders 2, 4, 6, …, 20, based on utilizing the approximation $\mathrm{erf}(x) \approx 1$ in an optimum manner.
Figure 8. Graph of the magnitude of the relative error in Taylor series approximations to erf(x) that utilize an optimized change to the approximation $\mathrm{erf}(x) \approx 1$.
Figure 9. Graph of the relative errors in spline approximations to erf(x), of orders one to six and based on four variable sub-intervals of equal width.
Figure 10. Graph of the relative errors in approximations to erf(x): first to seventh order spline based series based on four sub-intervals of equal width and with utilization of the approximation $\mathrm{erf}(x) \approx 1$ at the optimum transition point.
Figure 11. Illustration of areas comprising erf(x).
Figure 12. Graph of the relative error bound, versus the order of approximation, for various set resolutions.
Figure 13. Graph of the relative errors, based on a resolution of Δ = 0.5, in second to fourth order approximations to erf(x).
Figure 14. Feedback system with dynamically varying (modulated) feedback.
Figure 15. Graph of the relative errors in approximations, of orders one to eight, to erf(x) as defined in Theorem 7.
Figure 16. Graph of the magnitude of the relative errors in approximations to exp(−x²), as defined by Equation (103), of orders 0, 2, 4, 6, 8, 10 and 12. The dotted curves are the relative errors associated with Taylor series of orders 1, 3, 5, 7, 9, 11, 13 and 15.
Figure 17. Relative error in upper and lower bounds to erf(x) as, respectively, defined by Equations (110)–(112). The parameters p = 1 and q = π/4 have been used for the bounds defined by Equation (110).
Figure 18. Relative error in the approximations $f_0(x)$ and $f_0(x) + \varepsilon_0(x)$ to erf(x) where the residual function $\varepsilon_0(x)$ is approximated by the stated order.
Figure 19. Relative error in the approximations $f_2(x)$ and $f_2(x) + \varepsilon_2(x)$ to erf(x) where the residual function $\varepsilon_2(x)$ is approximated by the stated order.
Figure 20. Graph of the signals $e_C^2(x)$ and $\mathrm{erf}^2(x)$.
Figure 21. Graph of $y_3(t)$ for the case of $f_o = 1$ and for amplitudes of a = 0.5, a = 1, a = 1.5 and a = 2.
Figure 22. Graph of the input power, output power and ratio of output power to input power as the amplitude of the input signal varies.
Figure 23. Graph of the variation of harmonic distortion with amplitude.
Figure 24. Graph of the input signal erf(t/γ), γ = 1/2, and the corresponding approximation to the output of a second order linear filter with $f_p = 1$, τ = 1/(2π).
Figure 25. Graph of the relative errors associated with the output signal, shown in Figure 24, for approximations to the error function (Equation (56)) of orders six to twelve which utilize optimum transition points.
Table 1. Examples of published approximations for $\mathrm{erf}(x)$, $0 < x < \infty$. For the third and second last approximations, the coefficient definitions are detailed in the associated reference. The stated relative error bounds arise from sampling the interval [0, 5] with 10,000 uniformly spaced points.
# | Reference | Approximation | Relative Error Bound
1 | Taylor series | $T_n(x) = \frac{2}{\sqrt{\pi}}\left[x - \frac{x^3}{3\cdot 1!} + \frac{x^5}{5\cdot 2!} - \cdots + \frac{(-1)^{(n-1)/2}x^n}{n\,[(n-1)/2]!}\right]$, $n \in \{1, 3, 5, \ldots\}$ | —
2 | Abramowitz [21], p. 297, Equation (7.1.6) | $\frac{2}{\sqrt{\pi}}\left[x + \frac{2x^3}{1\cdot 3} + \frac{2^2x^5}{3\cdot 5} + \frac{2^3x^7}{3\cdot 5\cdot 7} + \cdots + \frac{2^n x^{2n+1}}{1\cdot 3\cdot 5\cdots(2n+1)}\right]e^{-x^2}$ | —
3 | Abramowitz [21], p. 299, Equation (7.1.26) | $1 - \left[\frac{a_1}{1+px} + \frac{a_2}{(1+px)^2} + \frac{a_3}{(1+px)^3} + \frac{a_4}{(1+px)^4} + \frac{a_5}{(1+px)^5}\right]e^{-x^2}$ | 8.09 × 10−6
4 | Menzel [14] and Nandagopal [22] | $\sqrt{1 - \exp\!\left[-\frac{4x^2}{\pi}\right]}$ | 7.07 × 10−3
5 | Bürmann series [23], Equation (33) | $\frac{2}{\sqrt{\pi}}\sqrt{1 - \exp(-x^2)}\left[\frac{\sqrt{\pi}}{2} + \frac{31}{200}e^{-x^2} - \frac{341}{8000}e^{-2x^2}\right]$ | 3.61 × 10−3
6 | Winitzki [24], Equation (3) | $\sqrt{1 - \exp\!\left[-x^2\,\frac{4/\pi + ax^2}{1 + ax^2}\right]}$, $a = \frac{8(\pi-3)}{3\pi(4-\pi)}$ | 3.50 × 10−4
7 | Soranzo [25], Equation (1) | $\sqrt{1 - \exp\!\left[-x^2\,\frac{a_1 + a_2x^2}{1 + b_2x^2 + b_3x^4}\right]}$, $a_1 = 1.2735457$, $a_2 = 0.1487936$, $b_2 = 0.1480931$, $b_3 = 5.160 \times 10^{-4}$ | 1.20 × 10−4
8 | Vedder [26], Equation (5) | $\tanh\!\left[\frac{167x}{148} + \frac{11x^3}{109}\right]$ | 4.65 × 10−3
9 | Vazquez-Leal [27], Equation (3.1) | $\tanh\!\left[\frac{39x}{2\sqrt{\pi}} - \frac{111}{2}\tan^{-1}\!\left[\frac{35x}{111\sqrt{\pi}}\right]\right]$ | 1.88 × 10−4
10 | Sandoval-Hernandez [13], Equation (23) | $\frac{2}{1 + \exp\!\left[-\left(\alpha_1x + \alpha_3x^3 + \alpha_5x^5 + \alpha_7x^7 + \alpha_9x^9\right)\right]} - 1$ | 3.05 × 10−5
11 | Abrarov [28], Equation (16), N = 6 | $1 - e^{-x^2}\left[\frac{1 - e^{-\tau_m x}}{\tau_m x} + \frac{\tau_m^2 x}{\pi}\sum_{n=1}^{N}a_n\frac{1 - (-1)^n e^{-\tau_m x}}{n^2\pi^2 + \tau_m^2x^2}\right]$, $a_n = \frac{2\sqrt{\pi}}{\tau_m}\exp\!\left[-n^2\pi^2/\tau_m^2\right]$, $\tau_m = 12$ | 3.27 × 10−3
Table 2. Residual functions associated with approximations for $\mathrm{erf}(x)$, $0 < x < \infty$.
# | Error Function | Residual Function
1 | $\tanh\!\left[\frac{2x}{\sqrt{\pi}}\right] + g_1(x)$ | $g_1(x) = \mathrm{erf}(x) - \tanh\!\left[\frac{2x}{\sqrt{\pi}}\right]$
2 | $\tanh\!\left[\frac{2x}{\sqrt{\pi}}\left[1 + g_2(x)\right]\right]$ | $g_2(x) = \frac{\sqrt{\pi}}{2x}\cdot\mathrm{atanh}\!\left[\mathrm{erf}(x)\right] - 1$
3 | $\sqrt{1 - \exp\!\left[-x^2\cdot\frac{4}{\pi}\left[1 + g_3(x)\right]\right]}$ | $g_3(x) = -\frac{\pi}{4x^2}\cdot\ln\!\left[1 - \mathrm{erf}(x)^2\right] - 1$
Table 3. The transition points, $x_o$, and the resulting relative error bounds for the spline-based approximations specified by Equation (56). The transition points are based on sampling the interval [0, 5] with 10,000 points.
Approx. Order: n | Transition Point for $f_n$ | Relative Error Bound for $f_n$ | Transition Point for $F_n$ | Relative Error Bound for $F_n$
0 | 1.3085 | 0.0851 | 1.465 | 0.0400
1 | 1.492 | 0.0362 | 1.769 | 0.0126
2 | 1.658 | 1.95 × 10−2 | 1.929 | 6.42 × 10−3
3 | 1.8975 | 7.36 × 10−3 | 2.1725 | 2.13 × 10−3
4 | 2.3715 | 1.03 × 10−3 | 2.6305 | 2.28 × 10−4
6 | 2.4715 | 4.75 × 10−4 | 2.73 | 1.13 × 10−4
8 | 2.963 | 2.79 × 10−5 | 3.1855 | 6.69 × 10−6
10 | 3.0785 | 1.35 × 10−5 | 3.324 | 2.59 × 10−6
12 | 3.4625 | 9.78 × 10−7 | 3.67 | 2.12 × 10−7
14 | 3.5845 | 4.00 × 10−7 | 3.8205 | 6.57 × 10−8
16 | 3.9025 | 3.44 × 10−8 | 4.101 | 6.66 × 10−9
18 | 4.0285 | 1.22 × 10−8 | 4.257 | 1.75 × 10−9
20 | 4.300 | 1.20 × 10−9 | 4.493 | 2.11 × 10−10
22 | 4.429 | 3.76 × 10−10 | 4.652 | 4.75 × 10−11
24 | 4.6655 | 4.18 × 10−11 | 4.854 | 6.70 × 10−12
Table 4. The transition points, and resulting relative error bounds, for Taylor series approximations specified by Equation (59). The transition points are based on sampling the interval [0, 4] with 10,000 points.
Order: n | Transition Point: $x_o(n)$ | Relative Error Bound in $T_n$
1 | 0.8864 | 0.266
3 | 1.078 | 0.146
5 | 1.222 | 0.0917
7 | 1.344 | 0.0609
9 | 1.4532 | 0.0416
13 | 1.6452 | 0.0204
17 | 1.8144 | 0.0105
21 | 1.9672 | 5.44 × 10−3
25 | 2.1084 | 2.89 × 10−3
29 | 2.24 | 1.55 × 10−3
37 | 2.4812 | 4.53 × 10−4
45 | 2.70 | 1.35 × 10−4
53 | 2.902 | 4.09 × 10−5
61 | 3.09 | 1.24 × 10−5
Table 5. Transition point and relative error bound for the four equal subintervals case. The transition points are based on sampling the interval [0, 8] with 10,000 points.
Spline Order | Transition Point | Relative Error Bound
0 | 2.7016 | 5.32 × 10−3
1 | 3.292 | 7.21 × 10−5
2 | 3.4544 | 1.27 × 10−6
4 | 3.7208 | 1.43 × 10−7
8 | 4.6616 | 4.34 × 10−11
12 | 5.6784 | 9.75 × 10−16
16 | 6.3736 | 2.01 × 10−19
20 | 7.1544 | 4.62 × 10−24
24 | 7.7136 | 1.06 × 10−27
Table 6. Transition point and relative error bound for the 16 equal subintervals case. The transition points are based on sampling the interval [0, 12] with 10,000 points.
Spline Order | Transition Point | Relative Error Bound
0 | 5.5008 | 3.32 × 10−4
1 | 6.8796 | 2.82 × 10−7
2 | 7.0224 | 3.14 × 10−10
4 | 7.1544 | 4.82 × 10−16
8 | 7.5996 | 6.22 × 10−27
12 | 8.2032 | 4.16 × 10−31
16 | 8.9244 | 1.66 × 10−36
20 | 9.7284 | 4.68 × 10−43
24 | 10.584 | 1.21 × 10−50
Table 7. Coefficient values for the case of Δ = 1/2.
k | Definition for $c_k$ | $c_k$
1 | erf(1/2) | 5.204998778 × 10−1
2 | erf(1) − erf(1/2) | 3.222009151 × 10−1
3 | erf(3/2) − erf(1) | 1.234043535 × 10−1
4 | erf(2) − erf(3/2) | 2.921711854 × 10−2
5 | erf(5/2) − erf(2) | 4.270782964 × 10−3
6 | erf(3) − erf(5/2) | 3.848615204 × 10−4
7 | erf(7/2) − erf(3) | 2.134739863 × 10−5
8 | erf(4) − erf(7/2) | 7.276811144 × 10−7
9 | erf(9/2) − erf(4) | 1.522064186 × 10−8
10 | erf(5) − erf(9/2) | 1.950785844 × 10−10
11 | erf(11/2) − erf(5) | 1.530101947 × 10−12
12 | erf(6) − erf(11/2) | 7.336328181 × 10−15
Table 8. Relative error bounds, over the interval (0, ∞), for approximations to erf(x) as defined in Theorem 7.
Order of Approx. | Relative Error Bound: Original Series—Optimum Transition Point (Table 3) | Relative Error Bound: Approx. Defined by Equation (83)
0 | 0.0851 | 2.68 × 10−2
1 | 0.0362 | 3.98 × 10−3
2 | 1.95 × 10−2 | 1.34 × 10−3
3 | 7.36 × 10−3 | 2.03 × 10−4
4 | 1.03 × 10−3 | 1.82 × 10−5
6 | 4.75 × 10−4 | 9.20 × 10−7
8 | 2.79 × 10−5 | 1.69 × 10−8
10 | 1.35 × 10−5 | 7.43 × 10−10
12 | 9.78 × 10−7 | 1.67 × 10−11
14 | 4.00 × 10−7 | 6.47 × 10−13
16 | 3.44 × 10−8 | 1.68 × 10−14
18 | 1.22 × 10−8 | 5.90 × 10−16
20 | 1.20 × 10−9 | 1.73 × 10−17
22 | 3.76 × 10−10 | 5.56 × 10−19
24 | 4.18 × 10−11 | 1.79 × 10−20
Table 9. Approximations that are consistent with a set relative error bound. The actual relative error bound is specified by $re_B$.
Relative Error Bound | Spline Approx.: Theorem 4 | Variable Interval Approx.: Theorem 5 | Dynamic Constant Plus Spline Approx.: Theorem 6 | Iterative Approx.: Theorem 7
10−6 | order = 12, $x_o = 3.4625$, $re_B = 9.78 \times 10^{-7}$ | order = 5, 3 subintervals, $x_o = 3.51$, $re_B = 6.96 \times 10^{-7}$ | order = 3, resolution = 3/4, $re_B = 5.53 \times 10^{-7}$ | order = 6, $re_B = 9.20 \times 10^{-7}$
10−10 | order = 23, $x_o = 4.581$, $re_B = 9.31 \times 10^{-11}$ | order = 8, 4 subintervals, $x_o = 4.6616$, $re_B = 4.34 \times 10^{-11}$ | order = 4, resolution = 3/8, $re_B = 9.12 \times 10^{-11}$ | order = 11, $re_B = 1.34 \times 10^{-10}$; order = 12, $re_B = 1.67 \times 10^{-11}$
10−16 | order = 39, $x_o = 5.9017$, $re_B = 7.21 \times 10^{-17}$ | order = 11, 6 subintervals, $x_o = 5.98$, $re_B = 2.75 \times 10^{-17}$ | order = 6, resolution = 1/4, $re_B = 1.01 \times 10^{-17}$ | order = 19, $re_B = 1.18 \times 10^{-16}$; order = 20, $re_B = 1.73 \times 10^{-17}$
Table 10. Relative error bounds for approximations over the interval [0, 3/2].
Order of Approx. | Relative Error Bound: Equation (103) | Relative Error Bound: Equation (77) of [19]
0 | 8.00 | 35.6
1 | 3.74 | 6.98
2 | 0.957 | 0.767
3 | 1.25 × 10−1 | 5.25 × 10−2
4 | 8.04 × 10−3 | 2.42 × 10−3
5 | 7.71 × 10−3 | 7.98 × 10−5
6 | 2.09 × 10−3 | 1.97 × 10−6
7 | 3.72 × 10−4 | 3.75 × 10−8
8 | 5.02 × 10−5 | 5.69 × 10−10
10 | 4.54 × 10−7 | 7.22 × 10−14
12 | 1.09 × 10−9 | 4.62 × 10−18