1. Introduction
The error function arises in many areas of mathematics, science, and engineering, including diffusion associated with Brownian motion (Fick's second law) and the heat kernel for the heat equation, e.g., [1], the modeling of magnetization, e.g., [2], the modeling of transitions between two levels, for example, the modeling of smooth or soft limiters, e.g., [3], and the psychometric function, e.g., [4,5], the modeling of amplifier non-linearities, e.g., [6,7], and the modeling of rubber-like materials and soft tissue, e.g., [8,9]. It is widely used in the modeling of random phenomena, as the error function defines the cumulative distribution function associated with the Gaussian probability density function; examples include the probability of error in signal detection and option pricing via the Black–Scholes formula. Many other applications exist. In general, the error function is associated with a macro description of physical phenomena, and the de Moivre–Laplace theorem is illustrative of the link between fundamental outcomes and a higher-level model.
The error function is defined on the complex plane according to
erf(z) = (2/√π) ∫_C exp(−t²) dt,
where the path C is between the points zero and z and is arbitrary. Associated functions are the complementary error function, the Faddeyeva function, the Voigt function, and Dawson's integral, e.g., [10]. The Faddeyeva function and the Voigt function, for example, have applications in spectroscopy, e.g., [11]. The error function can also be defined in terms of the spherical Bessel functions, e.g., Equation (7.6.8) of [10], and the incomplete Gamma function, e.g., Equation (7.11.1) of [10]. Marsaglia [12] provides a brief insight into the history of the error function.
For the real case, which is the case considered in this paper, the error function is defined by the integral
erf(x) = (2/√π) ∫₀ˣ exp(−t²) dt.
The widely used, and associated, cumulative distribution function for the standard normal distribution is defined according to
Φ(x) = (1/2)[1 + erf(x/√2)].
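As a check on these definitions, the following sketch (Python; the quadrature step count is an arbitrary choice) evaluates erf(x) = (2/√π)∫₀ˣ exp(−t²) dt by the trapezoidal rule and the normal CDF via Φ(x) = [1 + erf(x/√2)]/2:

```python
import math

SQRT_PI = math.sqrt(math.pi)

def erf_quad(x, n=20000):
    """erf(x) = (2/sqrt(pi)) * integral_{0}^{x} exp(-t^2) dt, trapezoidal rule."""
    if x == 0.0:
        return 0.0
    h = x / n
    total = 0.5 * (1.0 + math.exp(-x * x))   # endpoint terms exp(0) and exp(-x^2)
    total += sum(math.exp(-(i * h) ** 2) for i in range(1, n))
    return (2.0 / SQRT_PI) * h * total

def phi(x):
    """Standard normal CDF: Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
```

For x = 1 the quadrature agrees with `math.erf` to around nine decimal places.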
As the error function is defined by an integral that does not have an explicit analytical form, there is interest in approximations of it and, in recent decades, many approximations have been developed.
Table 1 details indicative approximations for the real case and their relative errors are shown in Figure 1. Most of the approximations detailed in this table are custom and have a limited relative error bound, with bounds in the range of 3.05 × 10⁻⁵ [13] to 7.07 × 10⁻³ [14]. It is preferable to have an approximation form that can be generalized to create approximations that converge to the error function. Examples include the standard Taylor series, the Bürmann series, and the approximation by Abrarov, which are defined in Table 1.
Many of the approximations detailed in Table 1 can be improved upon by approximating the associated residual function, denoted g, via a Padé approximant or a basis set decomposition. Examples of possible approximation forms, and the resulting residual functions, are given in Table 2. One example is a 4/2 Padé approximant for such a residual function, which leads to the approximation:
The relative error bound in this approximation is 4.02 × 10⁻⁷. Higher-order Padé approximants can be used to generate approximations with a lower relative error bound. Matic [15] provides a similar approximation with an absolute error of 5.79 × 10⁻⁶.
An approximation of the error function can also be obtained by combining separate approximations, accurate, respectively, for small and large arguments, via a demarcation function d. Naturally, an approximation of d requires an approximation of the error function. Unsurprisingly, the relative error in the resulting approximation for the error function equals the relative error in the approximation of the error function utilized within d.
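The idea can be sketched with a hard switch in place of a smooth demarcation function (Python; the 30-term Maclaurin branch, the two-term asymptotic branch, and the switch point x0 = 2 are all illustrative choices, not the paper's d):

```python
import math

SQRT_PI = math.sqrt(math.pi)

def erf_small(x, terms=30):
    """Maclaurin series for erf, accurate for modest |x| (small-argument branch)."""
    s, term = 0.0, x
    for k in range(terms):
        s += term / (2 * k + 1)          # term = (-1)^k x^(2k+1) / k!
        term *= -x * x / (k + 1)
    return (2.0 / SQRT_PI) * s

def erf_large(x):
    """Two-term asymptotic form erf(x) ~ 1 - exp(-x^2)/(x sqrt(pi)) * (1 - 1/(2x^2)),
    accurate for large x (large-argument branch)."""
    return 1.0 - math.exp(-x * x) / (x * SQRT_PI) * (1.0 - 1.0 / (2.0 * x * x))

def erf_combined(x, x0=2.0):
    """Hard switch at x0, standing in for a smooth demarcation function d."""
    return erf_small(x) if x < x0 else erf_large(x)
```

With these choices the worst-case absolute error over [0, 5] sits near the switch point, at roughly 1.6 × 10⁻⁴.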
Finally, efficient numerical implementation of the error function is of interest and Chevillard [16] and De Schrijver [17] provide results and an overview. Highly accurate piecewise approximations have long been defined, e.g., [18].
The two-point spline-based approximations for functions and integrals, detailed in [19], have recently been applied to find arbitrarily accurate approximations for the hyperbolic tangent function [20]. In this paper, the general two-point spline approximation form is applied to define a sequence of convergent approximations for the error function. The basic form of the approximation of order n comprises a polynomial of order n and a polynomial of order less than n. Convergence of the sequence of approximations to the error function is shown, and the convergence is significantly better than that of the default Taylor series. For example, the second-order approximation yields a relative error bound of 0.056 over the interval [0, 2], which is better than a fifteenth-order Taylor series approximation. The approximations can be improved by utilizing the approximation erf(x) ≈ 1 above a suitable transition point, thereby establishing approximations with a set relative error bound over the positive real line.
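The transition point itself is easy to locate numerically: replacing erf(x) by 1 for x ≥ x0 incurs a relative error of erfc(x0)/erf(x0), which is monotonically decreasing, so bisection finds the smallest admissible x0 for a target bound (Python sketch; the bracketing interval is an assumption):

```python
import math

def transition_point(eps):
    """Smallest x0 (via bisection) such that approximating erf(x) by 1 for x >= x0
    has relative error erfc(x0)/erf(x0) <= eps. Illustrative only: the paper's
    transition points also account for the spline approximation's own error."""
    lo, hi = 0.5, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) / math.erf(mid) > eps:
            lo = mid
        else:
            hi = mid
    return hi
```

For a target bound of 10⁻⁴ this gives x0 ≈ 2.75.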
Two generalizations are detailed. The first is based on utilizing approximations associated with m equally spaced subintervals of the interval [0, x]. The second is based on utilizing a fixed subinterval within [0, x] and then approximating the error function over the remainder of the interval. Both generalizations lead to significantly improved accuracy. For example, a fourth-order approximation based on four variable subintervals, when used with the approximation erf(x) ≈ 1 above a suitable transition point, has a relative error bound of 1.43 × 10⁻⁷ over the positive real line. The corresponding sixteenth-order approximation has a relative error bound of 2.01 × 10⁻¹⁹. Finally, by utilizing the solutions of a custom dynamical system, approximations with better convergence properties can be established.
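The benefit of subdividing [0, x] can be mimicked with any per-subinterval integral rule; the sketch below (Python) uses Simpson's rule on each of m equal subintervals as a stand-in for the paper's spline-based integral approximations:

```python
import math

def erf_composite(x, m):
    """Approximate erf(x) with Simpson's rule on m equal subintervals of [0, x].
    A stand-in for the paper's per-subinterval spline approximations."""
    f = lambda t: math.exp(-t * t)
    h = x / m
    total = 0.0
    for i in range(m):
        a, b = i * h, (i + 1) * h
        total += (b - a) / 6.0 * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b))
    return 2.0 / math.sqrt(math.pi) * total
```

Accuracy improves rapidly with m, mirroring the improvement reported for the spline-based generalization.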
Applications of the proposed approximations for the error function include, first, the definition of functions that are upper and lower bounds, of arbitrary accuracy, for the error function; second, new series for the error function; third, new sequences of approximations for exp(−x²) that have significantly better convergence properties than a Taylor series approximation; fourth, the definition of a complementary demarcation function; fifth, arbitrarily accurate approximations for the power and harmonic distortion for a sinusoidal signal subject to an error function nonlinearity; and sixth, approximate expressions for the linear filtering of a step signal modeled by the error function.
Section 2 details the spline-based approximation for the error function and its convergence. Improved approximations, obtained by utilizing the nature of the error function for large arguments, are detailed in Section 3. Two generalizations, with the potential for lower relative error bounds, are detailed in Section 4 and Section 5. Section 6 details how the initial approximations, and the approximations arising from the first generalization, can be utilized as inputs to a custom dynamical system to establish approximations with better convergence properties. Applications are specified in Section 7. Conclusions are stated in Section 8.
1.1. Notes and Notation
As erf(−x) = −erf(x), it is sufficient to consider approximations for the interval [0, ∞).
For a function defined over a given interval, an approximating function has a relative error, at a point, defined according to
The relative error bound for the approximating function over the interval is defined according to
The notation is used. The symbol u denotes the unit step function.
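The relative error bound is straightforward to estimate on a grid; the sketch below (Python; the grid density is arbitrary, and the left endpoint is excluded to avoid erf(0) = 0) applies the definition to the one-term approximation erf(x) ≈ 2x/√π:

```python
import math

def relative_error_bound(f, f_approx, a, b, samples=2000):
    """Numerically estimate max |f_approx(x)/f(x) - 1| over (a, b] on a grid
    (the relative error bound of Equation (12), up to grid resolution)."""
    worst = 0.0
    for i in range(1, samples + 1):       # skip x = a, where f may vanish
        x = a + (b - a) * i / samples
        worst = max(worst, abs(f_approx(x) / f(x) - 1.0))
    return worst
```

Over [0, 0.1] the estimated bound for erf(x) ≈ 2x/√π is about 3.3 × 10⁻³, attained at the right endpoint.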
Mathematica has been used to facilitate the analysis and to obtain numerical results.
1.2. Background Results
The following result underpins the bounds proposed for the error function:
Lemma 1. Upper and Lower Functional Bounds.
A positive approximation to a positive function over the interval , with a relative error bound , leads to the following upper and lower bounded functions: The relative error bounds, over the interval , for the upper and lower bounded functions, respectively, are:
Proof. The definition of the relative error bound, as specified by Equation (12), leads to bounds on the ratio of the approximating function to the function, which imply the stated bounded functions and the associated relative error bounds. □
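One standard construction of such bounded functions, assumed here to reflect the lemma's intent since the displayed equations are not reproduced above, scales the approximation by 1/(1 ± ε): if |f̃(x)/f(x) − 1| ≤ ε with ε < 1, then f̃(x)/(1 + ε) ≤ f(x) ≤ f̃(x)/(1 − ε). A Python sketch:

```python
import math

def bounding_functions(f_approx, eps):
    """Given a positive approximation with relative error bound eps < 1 on an
    interval, return guaranteed lower/upper bounding functions on that interval
    (one standard construction; the exact form of Lemma 1 is in the full text)."""
    lower = lambda x: f_approx(x) / (1.0 + eps)
    upper = lambda x: f_approx(x) / (1.0 - eps)
    return lower, upper
```

For example, erf(x) ≈ 2x/√π has a relative error bound below 0.004 on (0, 0.1], so the scaled pair brackets erf there.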
Convergent Integral Approximations
One application of the proposed approximations for the error function requires knowledge of when function convergence implies convergence of associated integrals.
Lemma 2. Convergent Integral Approximation. If a sequence of functions converges, over the interval , to a bounded and integrable function, then point-wise convergence is sufficient for the associated integrals to be convergent.
Proof. The required result follows if it is possible to interchange the order of the limit and integration. Standard conditions for when the interchange is valid are specified by the monotone and dominated convergence theorems, e.g., [29] (p. 26). Sufficient conditions for a valid interchange include point-wise function convergence, and for the limit function to be integrable and bounded. □
7. Applications
This section details indicative applications of the approximations to the error function that have been developed above.
The distinct analytical forms specified in Theorems 1, 3, 5, and 7, for approximations to the error function, in general, facilitate analysis for different applications. For example, the form detailed in Theorem 1 underpins approximations to exp(−x²), as detailed in Section 7.2. The form detailed in Theorem 7 underpins analytical approximations for the power associated with the output of a nonlinearity modeled by the error function when subject to a sinusoidal signal. The approximations are detailed in Section 7.6.
For applications where a set relative error bound over a set interval is required, the suitable approximation will depend, in part, on the domain over which an approximation is required, as well as on the level of the relative error bound that is acceptable. For example, the approximations detailed in Theorems 1 and 3 lead to simple analytical forms and modest relative error bounds when used with an appropriate transition point for the approximation erf(x) ≈ 1. Without the use of a transition point, such approximations are likely to be best suited to a restricted domain; for example, a domain consistent with the three sigma case arising from a Gaussian distribution. The fourth-order approximations, as specified by Equations (36) and (54), have relative error bounds over such a domain, respectively, of 1.02 × 10⁻³ and 1.90 × 10⁻⁴. Section 7.1 provides examples of approximations that are consistent with set relative error bounds, over the positive real line, of 10⁻⁴, 10⁻⁶, 10⁻¹⁰, and 10⁻¹⁶.
7.1. Error Function Approximations: Set Relative Error Bounds
Consider the case where an approximation for the error function, with a relative error bound over the positive real line of 10⁻⁴, is required. A 47th-order Taylor series, with a suitable transition point, yields a relative error bound of 1.00 × 10⁻⁴.
An eighth-order spline approximation, with a suitable transition point, yields a relative error bound of 2.79 × 10⁻⁵. The approximation, according to Equation (56), is
A seventh-order approximation, with a suitable transition point, yields a relative error bound of 1.79 × 10⁻⁴.
A first-order spline approximation, based on four equal subintervals, is defined according to
and yields a relative error bound of 7.21 × 10⁻⁵ with a suitable transition point.
A dynamic constant plus a spline approximation of order 2, based on a set resolution, achieves a relative error bound of 8.33 × 10⁻⁵ (10,000 points in the interval [0, 5]). The approximation is
where
Here, after three intervals, the approximation erf(x) ≈ 1 can be utilized without impacting the relative error bound.
Utilizing a fourth-order spline approximation, and iteration consistent with Theorem 7, the approximation yields a relative error bound of 1.82 × 10⁻⁵.
Approximations that are consistent with tighter relative error bounds are detailed in Table 9.
7.2. Approximation for exp(−x²)
An nth-order approximation to the Gaussian function is detailed in the following theorem:
Theorem 8. Approximation for Gaussian Function. An nth-order approximation to the Gaussian function is as stated, where the component functions are defined by Equations (21) and (26).
7.2.1. Approximations
Approximations to exp(−x²), of orders zero to five, are:
7.2.2. Results
The relative errors in the above-defined approximations to exp(−x²) are detailed in Figure 16 for approximations of order 0, 2, 4, 6, 8, 10, and 12, along with the relative errors in Taylor series of orders 1, 3, 5, …, 15. The clear superiority of the defined approximations is evident.
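The weakness of the Taylor series is easy to reproduce (Python sketch; gauss_taylor(x, n) is the order-2n Maclaurin polynomial of exp(−x²)):

```python
import math

def gauss_taylor(x, n):
    """Order-2n Maclaurin polynomial of exp(-x^2): sum_{k=0}^{n} (-x^2)^k / k!."""
    s, term = 0.0, 1.0
    for k in range(n + 1):
        s += term                  # term = (-x^2)^k / k!
        term *= -x * x / (k + 1)
    return s
```

At x = 2, the order-14 polynomial (n = 7) still has a relative error exceeding one, while n = 15 reduces it to below 0.1, consistent with the slow Taylor convergence visible in Figure 16.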
7.2.3. Comparison
The following nth-order approximation for exp(−x²) has been proposed in Equation (77) of [19]:
where is defined by Equation (21). The relative error bounds over the interval (the three sigma bound case for Gaussian probability distributions), for this approximation and for the approximation defined by Equation (103), are detailed in Table 10. The tabulated results clearly show that this approximation is more accurate than the approximation detailed in Equation (103). The improvement is consistent with the higher-order Padé approximant being used.
The following approximations (seventh- and fifth-order) yield relative error bounds of less than 0.001 over the interval :
A twenty-seventh-order Taylor series approximation yields a relative error bound of 1.03 × 10⁻³.
7.3. Upper and Lower Bounded Approximations to Error Function
Establishing bounds for erf(x) has received modest research interest, e.g., [31], and published bounds for erf(x), for the case of x > 0, include those of Chu [32]:
Corollary 4.2 of Neuman [33]:
and refinements to the form proposed by Chu [32], e.g., Yang [34,35], Corollary 3.4 of [35]:
where
The relative errors in these bounds are detailed in Figure 17.
Utilizing the results of Lemma 1, it follows that any of the approximations detailed in Theorems 4, 5, 6, or 7 can be utilized to create upper and lower bounded functions for erf(x), x > 0, of arbitrary accuracy and with an arbitrary relative error bound. For example, the approximation specified by Equation (99) yields functional bounds
with a relative error bound of 8.33 × 10⁻⁵ for the lower bounded function and 1.44 × 10⁻⁴ for the upper bounded function. Such accuracy is better than that of the bounds underpinning the results shown in Figure 17. The sixteenth-order approximation, based on four equal subintervals, specified in Appendix C, when used with a suitable transition point, leads to functional bounds
with a relative error bound of less than 9.64 × 10⁻¹⁶ for the lower bounded function and 9.32 × 10⁻¹⁶ for the upper bounded function.
7.4. New Series for Error Function
Consider the exact results
where is specified by Equation (24) and is specified by Equation (25). By utilizing a Taylor series approximation for the integrand and then integrating, an approximation for the error function can be established. This leads to new series for the error function.
Theorem 9. New Series for Error Function. Based on zero-, first-, and second-order approximations, the following series for the error function are valid:
Further series, based on higher-order approximations, can also be established.
Results
The relative errors associated with the zero- and second-order series are shown in Figure 18 and Figure 19. Clearly, the relative error improves as the number of terms used in the series expansion increases. The significant improvement in the relative error for small values of x is evident. A comparison with the relative errors associated with Taylor series approximations, as shown in Figure 2, shows the improved performance.
The second-order approximation arising from Equation (118) yields a relative error bound of less than 0.001 for the interval [0, 0.87] and of less than 0.01 for the interval [0, 1.1].
7.5. Complementary Demarcation Functions
Consider a complementary function which is such that
With the approximation detailed in Theorem 7 (and noting Equation (96)), such a function can be defined independently of the error function. This function is shown in Figure 20, along with its complement. These two functions act as complementary demarcation functions for the interval considered, with the transition point being the point at which the two functions intersect.
7.6. Power and Harmonic Distortion: Erf Modeled Nonlinearity
The error function is often used to model nonlinearities, and the harmonic distortion created by such a nonlinearity is of interest. Examples include the harmonic distortion in magnetic recording, e.g., [2,36], and the harmonic distortion arising, in a communication context, from a power amplifier, e.g., [7]. For these cases, the interest was in obtaining, with a sinusoidal input signal, the harmonic distortion created by an error function nonlinearity over the input amplitude range of [−2, 2].
Consider the output signal of a nonlinearity modeled by the error function:
For such a case, the output power is defined according to
and the output amplitude associated with the kth harmonic is
To determine an analytical approximation to the output power, the approximations stated in Theorem 7 lead to relatively simple expressions. Consider the third-order approximation, as specified by Equation (93), which has a relative error bound of 2.03 × 10⁻⁴ for the positive real line. For such a case, the output signal is approximated according to
and is shown in Figure 21. The power in the approximated output signal can be readily determined (e.g., via Mathematica) and it then follows that an approximation to the true power is
where the stated component functions, respectively, are the zero- and first-order Bessel functions of the first kind. The variation in the output power is shown in Figure 22.
Harmonic Distortion
To establish analytical approximations for the harmonic distortion, the functional forms detailed in Theorem 7 are not suitable. However, the functional forms detailed in Theorem 1 do lead to analytical approximations that are valid over a restricted domain. Consider a fourth-order spline approximation, as specified by Equation (36), which approximates the error function over the range [−2, 2] with a relative error bound that is better than 0.001 and leads to the approximation
The amplitude of the kth harmonic in such a signal is given by
where a change of variable has been used. The first, third, fifth, and seventh harmonic levels are:
The variation, with the input signal amplitude, of the harmonic distortion is shown in Figure 23.
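The stated harmonic levels can be cross-checked numerically: the k-th harmonic amplitude of erf(A sin θ) is the Fourier sine coefficient (1/π)∫₀^{2π} erf(A sin θ) sin(kθ) dθ, and the output power is the mean square of the signal. A Python sketch (the grid size is arbitrary; erf is odd, so even harmonics vanish):

```python
import math

def harmonic_amplitude(A, k, n=4096):
    """Fourier sine coefficient b_k of erf(A*sin(theta)) over one period,
    via a uniform-grid discretization of (1/pi) * integral over [0, 2*pi]."""
    h = 2.0 * math.pi / n
    return (2.0 / n) * sum(
        math.erf(A * math.sin(i * h)) * math.sin(k * i * h) for i in range(n)
    )

def output_power(A, n=4096):
    """Mean-square value of erf(A*sin(theta)) over one period."""
    h = 2.0 * math.pi / n
    return sum(math.erf(A * math.sin(i * h)) ** 2 for i in range(n)) / n
```

For A = 2, the amplitude considered above, the even harmonics are zero to machine precision and the odd harmonic levels decrease with k.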
7.7. Linear Filtering of an Error Function Step Signal
Consider the case of a practical step input signal that is modeled by the error function, and the case where such a signal is input into a second-order linear filter with a transfer function defined by
Theorem 10. Linear Filtering of an Error Function Signal. The output of a second-order linear filter, defined by Equation (135), to an error function input signal is as stated and can be approximated by the nth-order signal in which the error function is replaced by one of the approximations detailed in Theorems 4, 5, 6, or 7.
Results
For an input signal modeled by the error function, input into a second-order linear filter, the output signal is shown in Figure 24. The relative errors in the approximations to the output signal are shown in Figure 25 for the case of approximations as specified by Equation (56) and with the use of optimum transition points.
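The response can also be emulated directly in the time domain; the sketch below (Python) integrates a second-order low-pass section H(s) = wn²/(s² + 2·zeta·wn·s + wn²) driven by the practical step u(t) = erf(t), using explicit Euler steps (wn, zeta, the horizon, and the step size are illustrative values, not the paper's):

```python
import math

def filter_erf_step(wn=1.0, zeta=0.7, T=20.0, dt=1e-3):
    """Output at time T of a second-order low-pass filter driven by u(t) = erf(t),
    via explicit Euler integration of y'' = wn^2*(u - y) - 2*zeta*wn*y'."""
    y, v, t = 0.0, 0.0, 0.0
    while t < T:
        u = math.erf(t)                          # practical step input
        a = wn * wn * (u - y) - 2.0 * zeta * wn * v
        y += dt * v
        v += dt * a
        t += dt
    return y
```

The output settles to the step's final value of one, as expected for a unity-gain low-pass filter.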
7.8. Extension to Complex Case
For the general complex case, the error function is defined according to
erf(z) = (2/√π) ∫_C exp(−t²) dt,
where the path C is between the points zero and z and is arbitrary. For a path along the x axis to the point (x, 0), and then parallel to the imaginary axis to the point of interest, the error function is defined according to Equation (5) of [37]:
Explicit approximations then arise when integrable approximations for the two associated two-dimensional surfaces are available. Naturally, significant research already exists, e.g., [28,37].
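Since the defining path is arbitrary, the straight path from 0 to z, parameterized as t = sz with s ∈ [0, 1], is a convenient choice for numerical work; the sketch below (Python, composite Simpson's rule with an arbitrary panel count) reduces to the real-case values on the real axis and respects the odd symmetry erf(−z) = −erf(z):

```python
import cmath
import math

def erf_path(z, n=2000):
    """erf(z) = (2/sqrt(pi)) * integral over the straight path from 0 to z,
    i.e., z * integral_{0}^{1} exp(-(s*z)^2) ds, by composite Simpson's rule."""
    f = lambda s: cmath.exp(-(s * z) ** 2)
    h = 1.0 / n
    total = f(0.0) + f(1.0)
    total += 4.0 * sum(f((i + 0.5) * h) for i in range(n))   # panel midpoints
    total += 2.0 * sum(f(i * h) for i in range(1, n))        # interior nodes
    return (2.0 / math.sqrt(math.pi)) * z * (h / 6.0) * total
```

On the real axis the result matches `math.erf` to high accuracy.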
7.9. Approximation for the Inverse Error Function
There are many applications where the inverse error function is required, and accurate approximations for this function are of interest. From the research underpinning this paper, the author's view is that finding approximations to the inverse error function is best treated directly, as a separate problem, rather than via inverting an approximation to the error function.
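Consistent with that view, a direct treatment can be as simple as Newton's method on erf(x) = y, using the derivative erf′(x) = (2/√π)exp(−x²) (Python sketch; the iteration count and tolerance are arbitrary safeguards):

```python
import math

def erfinv(y, tol=1e-14):
    """Solve erf(x) = y for -1 < y < 1 by Newton's method from x = 0, using
    erf'(x) = (2/sqrt(pi)) * exp(-x^2)."""
    x = 0.0
    for _ in range(100):
        err = math.erf(x) - y
        if abs(err) < tol:
            break
        x -= err * math.sqrt(math.pi) / 2.0 * math.exp(x * x)
    return x
```

The iteration converges quickly for moderate |y|; values of |y| very close to one warrant a better starting point.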
8. Conclusions
This paper has detailed analytical approximations for the real case of the error function, underpinned by a spline-based integral approximation, which have significantly better convergence than the default Taylor series. The original approximations can be improved by utilizing the approximation erf(x) ≈ 1 above a transition point that is dependent on the order of approximation. The fourth-order approximations arising from Theorems 1 and 3, with appropriate transition points, achieve relative error bounds over the positive real line, respectively, of 1.03 × 10⁻³ and 2.28 × 10⁻⁴. The respective sixteenth-order approximations have relative error bounds of 3.44 × 10⁻⁸ and 6.66 × 10⁻⁹.
Further improvements were detailed via two generalizations. The first was based on utilizing integral approximations for each of the m equally spaced subintervals in the required interval of integration. The second was based on utilizing a fixed subinterval, within the interval of integration, with a known tabulated area, and then utilizing an integral approximation over the remainder of the interval. Both generalizations lead to significantly improved accuracy. For example, a fourth-order approximation based on four subintervals achieves a relative error bound of 1.43 × 10⁻⁷ over the positive real line. A sixteenth-order approximation has a relative error bound of 2.01 × 10⁻¹⁹.
Finally, it was shown that a custom feedback system, with inputs defined by either the original error function approximations or the approximations based on the use of subintervals, leads to analytical approximations with improved accuracy that are valid over the positive real line without utilizing the approximation erf(x) ≈ 1 for suitably large arguments. The original fourth-order error function approximation yields an approximation with a relative error bound of 1.82 × 10⁻⁵ over the positive real line. The original sixteenth-order approximation yields an approximation with a relative error bound of 1.68 × 10⁻¹⁴.
Applications of the approximations include, first, approximations that achieve the set relative error bounds of 10⁻⁴, 10⁻⁶, 10⁻¹⁰, and 10⁻¹⁶ over the positive real line; second, the definition of functions that are upper and lower bounds, of arbitrary accuracy, for the error function; third, new series for the error function; fourth, new sequences of approximations for exp(−x²) that have significantly better convergence properties than a Taylor series approximation; fifth, the definition of a complementary demarcation function; sixth, arbitrarily accurate approximations for the power and harmonic distortion for a sinusoidal signal subject to an error function nonlinearity; and seventh, approximate expressions for the linear filtering of a step signal that is modeled by the error function.