Arbitrarily Accurate Analytical Approximations for the Error Function

In this paper a spline based integral approximation is utilized to propose a sequence of approximations to the error function that converge significantly faster than the default Taylor series. The approximations can be improved by utilizing the approximation erf(x) ≈ 1 for x >> 1. Two generalizations are possible: the first is based on demarcating the integration interval into m equally spaced sub-intervals; the second is based on utilizing a larger fixed sub-interval, with a known integral, and a smaller sub-interval whose integral is to be approximated. Both generalizations lead to significantly improved accuracy. Further, the initial approximations, and the approximations arising from the first generalization, can be utilized as inputs to a custom dynamical system to establish approximations with better convergence properties. Indicative results include those of a fourth order approximation, based on four sub-intervals, which leads to a relative error bound of 1.43 x 10^-7 over the positive real line. Various approximations that achieve the set relative error bounds of 10^-4, 10^-6, 10^-10 and 10^-16, over the positive real line, are specified. Applications include, first, the definition of functions that are upper and lower bounds, of arbitrary accuracy, for the error function. Second, new series for the error function. Third, new sequences of approximations for exp(-x^2) which have significantly higher convergence properties than a Taylor series approximation. Fourth, the definition of a complementary demarcation function eC(x) which satisfies the constraint eC(x)^2 + erf(x)^2 = 1. Fifth, arbitrarily accurate approximations for the power and harmonic distortion for a sinusoidal signal subject to an error function nonlinearity. Sixth, approximate expressions for the linear filtering of a step signal that is modelled by the error function.


Introduction
The error function arises in many areas of mathematics, science and scientific applications including diffusion associated with Brownian motion (Fick's second law), the heat kernel for the heat equation (e.g. Lebedev 1971), the modelling of magnetization (e.g. Fujiwara 1980), the modelling of transitions between two levels, for example, with the modelling of smooth or soft limiters (e.g. Lee 1972), the psychometric function (e.g. Klein 2001, Rinderknecht 2018), the modelling of amplifier non-linearities (e.g. Shi 1996 and Taggart 2005) and the modelling of rubber like materials and soft tissue (e.g. Li 2016 and Ogden 1999). It is widely used in the modelling of random phenomena, as the error function defines the cumulative distribution of the Gaussian probability density function, with examples including the probability of error in signal detection, option pricing via the Black-Scholes formula, etc. Many other applications exist. In general, the error function is associated with a macro description of physical phenomena and the De Moivre-Laplace theorem is illustrative of the link between fundamental outcomes and a higher level model.
The error function is defined on the complex plane according to (1), where the path between the points zero and z is arbitrary. Associated functions are the complementary error function, the Faddeyeva function, the Voigt function and Dawson's integral (e.g. Temme 2010). The Faddeyeva function and the Voigt function, for example, have application in spectroscopy (e.g. Schreier 1992). The error function can also be defined in terms of the spherical Bessel functions (e.g. Temme 2010, eqn. 7.6.8) and the incomplete Gamma function (e.g. Temme 2010, eqn. 7.11.1). Marsaglia (2004) provides a brief insight into the history of the error function.
For the real case, which is the case considered in this paper, the error function is defined by the integral . The widely used, and associated, cumulative distribution function for the standard normal distribution is defined according to . Being defined by an integral which does not have an explicit analytical form, there is interest in approximations for the error function, and over recent decades many approximations have been developed. Table 1 details indicative approximations for the real case and their relative errors are shown in Figure 1. Most of the approximations detailed in this table are custom and have a limited relative error bound, with bounds in the range of (Sandoval-Hernandez) to (Menzel). It is preferable to have an approximation form that can be generalized to create approximations that converge to the error function. Examples include the standard Taylor series and the Bürmann series defined in Table 1. Table 1 can be improved upon by approximating the associated residual function, denoted , via a Padé approximant or a basis set decomposition. Examples of some of the possible approximation forms, and the resulting residual functions, are detailed in Table 2. One example is that of a Padé approximant for the function which leads to the approximation:

(4)
The relative error bound for this approximation is . Higher order Padé approximants can be used to generate approximations with a lower relative error bound. Matic 2018 provides a similar approximation with an absolute error of .
An approximation for the error function can also be obtained by combining separate approximations, which are accurate, respectively, for and , via a demarcation function . Naturally, an approximation for the demarcation function is required, which in turn requires an approximation for the error function. Unsurprisingly, the relative error in the resulting approximation for the error function equals the relative error in the approximation utilized to approximate the error function in .
Finally, efficient numerical implementation of the error function is of interest and Chevillard (2012) and De Schrijver (2018) provide results and an overview. Highly accurate piecewise approximations have long been defined, e.g. Cody 1969.
The two point spline based approximations for functions and integrals, detailed in Howard 2019, have recently been applied to find arbitrarily accurate approximations for the hyperbolic tangent function (Howard 2021). In this paper, the general two point spline approximation form is applied to define a sequence of convergent approximations for the error function. The basic form of the approximation of order , denoted , is , where is a polynomial of order and is a polynomial of order less than . Convergence of the sequence of approximations to is shown and the convergence is significantly better than that of the default Taylor series. For example, the second order approximation (9) yields a relative error bound of over the interval , which is better than a fifteenth order Taylor series approximation. The approximations can be improved by utilizing the approximation for and thereby establishing approximations with a set relative error bound over the interval .
Two generalizations are detailed. The first is of the form and is based on utilizing approximations associated with equally spaced sub-intervals of the interval . The second is based on utilizing a fixed sub-interval within and then approximating the error function over the remainder of the interval. Both generalizations lead to significantly improved accuracy. For example, a fourth order approximation based on four variable sub-intervals, when used with the approximation for , has a relative error bound of over the interval . The corresponding sixteenth order approximation has a relative error bound of . Finally, by utilizing the solutions of a custom dynamical system, approximations with better convergence properties can be established.
Applications of the proposed approximations for the error function include, first, the definition of functions that are upper and lower bounds, of arbitrary accuracy, for the error function. Second, new series for the error function. Third, new sequences of approximations for which have significantly higher convergence properties than a Taylor series approximation. Fourth, the definition of a complementary demarcation function which satisfies the constraint . Fifth, arbitrarily accurate approximations for the power and harmonic distortion for a sinusoidal signal subject to an error function nonlinearity. Sixth, approximate expressions for the linear filtering of a step signal modelled by the error function.
Section 2 details the spline based approximation for the error function and its convergence. Improved approximations, obtained by utilizing the nature of the error function for large arguments, are detailed in Section 3. Two generalizations, with potential for lower relative error bounds, are detailed in Sections 4 and 5. Section 6 details how the initial approximations, and the approximations arising from the first generalization, can be utilized as the inputs to a custom dynamical system to establish approximations with better convergence properties. Applications are specified in Section 7. Conclusions are stated in Section 8.
Table 1

Notes and Notation
As the error function is odd, i.e. erf(-x) = -erf(x), it is sufficient to consider approximations for the interval [0, ∞).
For a function defined over the interval , an approximating function has a relative error, at a point , defined according to . The relative error bound for the approximating function over the interval is defined according to (11). The notation is used. The symbol denotes the unit step function.
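The relative error bound of Equation 11 is straightforward to estimate numerically. The following sketch (Python rather than the paper's Mathematica; the sampled grid and the use of the classical approximation erf(x) ≈ sqrt(1 - exp(-4x^2/π)) as the test approximation are illustrative assumptions) evaluates the bound over a sampled interval:

```python
import math

def relative_error_bound(f, f_approx, a, b, samples=10_000):
    """Estimate max |f_approx(x)/f(x) - 1| over a sampled grid on (a, b]."""
    worst = 0.0
    for i in range(1, samples + 1):        # skip x = a, where f(a) may be 0
        x = a + (b - a) * i / samples
        worst = max(worst, abs(f_approx(x) / f(x) - 1.0))
    return worst

# Illustrative test approximation: erf(x) ≈ sqrt(1 - exp(-4x²/π)).
approx = lambda x: math.sqrt(1.0 - math.exp(-4.0 * x * x / math.pi))
print(relative_error_bound(math.erf, approx, 0.0, 6.0))
```

Note that sampling only estimates the supremum; a sufficiently fine grid is required near interior maxima of the relative error.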
Mathematica has been used to facilitate analysis and to obtain numerical results.

Background Results
The following result underpins the bounds proposed for the error function:

Lemma 1 Upper and Lower Functional Bounds
A positive approximation to a positive function over the interval , with a relative error bound (12) leads to the following upper and lower bounded functions: The relative error bounds, over the interval , for the upper and lower bounded functions, respectively, are:

Proof
The definition of the relative error bound, as specified by Equation 12, leads to , which implies (16) and the relative error bounds:
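The bound construction is simple to verify numerically. In the sketch below (an illustration, not the lemma's notation: the bound value eps = 0.008 and the sample approximation are assumptions), an approximation with relative error bound eps is converted into lower and upper bounding functions by dividing by (1 + eps) and (1 - eps), respectively:

```python
import math

eps = 0.008  # an assumed relative error bound for the approximation on (0, 6]
approx = lambda x: math.sqrt(1.0 - math.exp(-4.0 * x * x / math.pi))

# From |approx/f - 1| <= eps it follows that approx/(1+eps) <= f <= approx/(1-eps).
lower = lambda x: approx(x) / (1.0 + eps)   # never exceeds erf(x)
upper = lambda x: approx(x) / (1.0 - eps)   # never falls below erf(x)

xs = [0.01 * k for k in range(1, 601)]
assert all(lower(x) <= math.erf(x) <= upper(x) for x in xs)
```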

Convergent Integral Approximations
One application of the proposed approximations for the error function requires knowledge of when function convergence implies convergence of associated integrals.

Lemma 2 Convergent Integral Approximation
If a sequence of functions converges, over the interval , to a bounded and integrable function , then point-wise convergence is sufficient for the associated integrals to be convergent, i.e. for

Proof
The required result follows if it is possible to interchange the order of limit and integration, i.e.
Standard conditions for when the interchange is valid are specified by the monotone and dominated convergence theorems (e.g. Champeney 1987, p. 26). Sufficient conditions for a valid interchange include point-wise function convergence, and for to be integrable and bounded.

Spline Approximation for Error Function
The following two point spline based approximation of order for an integral has been detailed in Howard 2019 (eqn. 48) for a function that is at least times differentiable: where (21). Direct application of this result to the integral defining the error function leads to the following result:

Theorem 2.1 Spline Based Integral Approximation for Error Function
The error function can be defined according to , where is the order spline based integral approximation defined according to (24), and is the associated residual function whose derivative is . A more general approximation is

Proof
The proof is detailed in Appendix 1.

Note
The polynomial function is equivalently defined by the order Hermite polynomial function (Abramowitz 1964, p. 775, equation 23.3.10) and an explicit form is . Approximations for the error function, as defined by Equation 24 and for orders zero to five, are:

Results
The relative errors in the zero to tenth order spline based series approximations, along with the relative errors in Taylor series approximations of orders one to fifteen, are detailed in Figure 2. The clear superiority, in terms of convergence, of the spline based series relative to the Taylor series is evident. The relative errors in the spline approximations of orders 16, 20, 24, 28 and 32 are shown in Figure 3.
The Mathematica code underpinning the results shown in Figure 2, is detailed in Appendix 2. Such code is indicative of the code underpinning the results detailed in the paper.
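For reference, the Taylor series side of the comparison in Figure 2 can be reproduced with a few lines of Python (a sketch, not the paper's code; `math.erf` is used as ground truth):

```python
import math

def erf_taylor(x, n_terms):
    """Partial sum of the Maclaurin series
    erf(x) = (2/sqrt(pi)) * sum_k (-1)^k x^(2k+1) / (k! (2k+1))."""
    s = 0.0
    for k in range(n_terms):
        s += (-1)**k * x**(2*k + 1) / (math.factorial(k) * (2*k + 1))
    return 2.0 / math.sqrt(math.pi) * s

# Convergence is rapid near the origin but very slow for larger arguments:
for x in (0.5, 2.0, 3.0):
    print(x, erf_taylor(x, 15), math.erf(x))
```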

Approximation for Large Arguments
Zero and first order approximations for the error function, for the case of , are . The relative errors in such approximations, respectively, are , and their graphs are shown in Figure 3.
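As a concrete illustration of the behaviour exploited here, the leading term of the standard large-argument (asymptotic) expansion, erf(x) ≈ 1 - exp(-x^2)/(x·sqrt(π)), can be checked numerically (a sketch; the paper's zero and first order forms are not reproduced here):

```python
import math

def erf_large(x):
    """Leading term of the large-argument expansion:
    erf(x) ≈ 1 - exp(-x²)/(x·sqrt(pi)), valid for x >> 1."""
    return 1.0 - math.exp(-x * x) / (x * math.sqrt(math.pi))

# The relative error falls rapidly as the argument increases:
for x in (1.0, 2.0, 3.0, 4.0):
    print(x, abs(erf_large(x) / math.erf(x) - 1.0))
```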

Convergence
To prove convergence of the sequence of functions , defined by Theorem 2.1, to the error function, it is sufficient to prove that the corresponding sequence of residual functions converges to zero. This can be shown by considering the derivatives of the residual functions defined by Equation 25. The derivatives of the residual functions of orders zero, one and two, respectively, are:

Theorem 2.2 Convergence of Spline Based Approximations
For all fixed values of , the derivatives of the residual functions converge to zero as the order increases, i.e. for all fixed values of it is the case that (43). This is sufficient for the convergence of the residual functions, i.e. , and, hence, for fixed : . The convergence is non-uniform.

Proof
The proof is detailed in Appendix 3.

Improved Approximation via Iteration
Consider the general result . By using the approximations , as defined in Theorem 2.1, in the integral, improved approximations for the error function can be defined.

Theorem 2.3 Improved Approximation via Iteration
An improved approximation, of order , for the error function is , which is the required result.

Explicit Approximations
Approximations to the error function, of orders zero to five, are: . Note that integration of these expressions leads to functions defined, in part, in terms of the Gamma function, which is itself defined by an integral. This makes further iteration impractical.

Results
The relative errors in even order approximations, of orders zero to ten, are shown in Figure 4. A comparison of the results detailed in Figure 2 and Figure 4 shows the clear improvement in the approximations specified by Equation 46.
Improved Approximations

Improved Approximation for Error Function
An improved approximation for the error function can be achieved by noting, as illustrated in Figure 3, that the approximation is increasingly accurate for the case of and for increasing. By switching at a suitable point , as illustrated in Figure 5, from a spline based approximation to the approximation , an improved approximation is achieved. Naturally, it is possible to switch to the approximation , or higher order approximations, in a similar manner.

Theorem 3.1 Improved Approximation for Error Function
An improved approximation for the error function, based on a order spline approximation detailed in Theorem 2.1 or Theorem 2.3, and consistent with the illustration shown in Figure 5, is where the transition points, respectively, are defined according to The improved approximation results follow from optimally switching, as illustrated in Figure 5 and at the point specified by Equation 57, to the approximation which has a lower relative error magnitude.

Transition Points and Relative Error Bounds
The transition points, for various orders of spline approximation, are specified in Table 3. The relationship between the transition point and order is shown in Figure 6 for the case of the approximations . This relationship can be approximated, with a second order polynomial, according to
Figure 5. Illustration of the crossover point where the magnitude of the relative error in the approximation equals the magnitude of the relative error in a set order spline approximation.
However, as small variations in can lead to significant changes in the maximum relative error in the approximation for the error function, precise values for are preferable.
The graphs of the relative errors in the approximations to , as specified by Equation 56, are shown in Figure 7 for orders . The relative error bounds that can be achieved, over the interval , using the optimally chosen transition points, are detailed in Table 3.
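The transition-point idea can be demonstrated with stand-in approximations: a truncated Maclaurin series (accurate near the origin) and the leading asymptotic term (accurate for large arguments). The crossover where their relative error magnitudes are equal is located by a simple scan (a sketch; the paper's spline approximations and exact transition points are not used here):

```python
import math

def erf_taylor(x, n=15):
    """Partial sum of the Maclaurin series for erf."""
    s = sum((-1)**k * x**(2*k + 1) / (math.factorial(k) * (2*k + 1))
            for k in range(n))
    return 2.0 / math.sqrt(math.pi) * s

def erf_tail(x):
    """Leading large-argument term: erf(x) ≈ 1 - exp(-x²)/(x·sqrt(pi))."""
    return 1.0 - math.exp(-x * x) / (x * math.sqrt(math.pi))

def rel_err(f, x):
    return abs(f(x) / math.erf(x) - 1.0)

# Scan for the crossover where the two relative-error magnitudes are equal.
xs = [0.001 * k for k in range(500, 5001)]
xc = min(xs, key=lambda x: abs(rel_err(erf_taylor, x) - rel_err(erf_tail, x)))

def erf_switched(x):
    """Use the series below the transition point, the tail form above it."""
    return erf_taylor(x) if x < xc else erf_tail(x)

print("transition point:", xc)
```

The combined approximation inherits the smaller of the two relative errors on each side of the transition point, which is precisely the mechanism exploited in Theorem 3.1.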

Improved Approximation for Taylor Series
The approximation for can be utilized to improve the relative error bound of a Taylor series approximation according to , where is a order Taylor series ( odd), as specified in Table 1. The optimum transition point depends on the order of the Taylor series and the order of the spline approximation. The optimum transition points and the relative error bounds, for selected orders, are detailed in Table 4. The variation of the relative errors with order is shown in Figure 8. The change in the optimum transition point can be approximated according to

Variable Sub-interval Approximations for Error Function
An improved analytic approximation for the error function can be achieved by demarcating the interval into variable sub-intervals, e.g. the sub-intervals , , and for the four sub-interval case, and by utilizing spline based integral approximations for each sub-interval. Chiani 2002 utilized sub-intervals to enhance approximations for the complementary error function.

Theorem 4.1 Variable Sub-Interval Approximations for Error Function
The order spline based approximation to the error function, based on equal width sub-intervals, is . An alternative form is

Proof
The first result follows by applying Equation 27 in Theorem 2.1 to the sub-intervals , , , . The alternative form arises by expanding the outer summation in Equation 61 and collecting terms of similar form.
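The sub-interval strategy mirrors composite quadrature. The sketch below applies a fixed-order rule per sub-interval, with Simpson's rule standing in for the paper's spline based rule (an assumption made purely for illustration); the error falls rapidly as the number of sub-intervals m increases:

```python
import math

def erf_subintervals(x, m=4):
    """Composite approximation of erf(x) = (2/sqrt(pi)) * integral of
    exp(-t²) over [0, x], using m equal sub-intervals with Simpson's rule
    standing in for the paper's spline based integral approximation."""
    h = x / m
    total = 0.0
    for i in range(m):
        a, b = i * h, (i + 1) * h
        mid = 0.5 * (a + b)
        # Simpson's rule on the sub-interval [a, b]
        total += (b - a) / 6.0 * (math.exp(-a * a)
                                  + 4.0 * math.exp(-mid * mid)
                                  + math.exp(-b * b))
    return 2.0 / math.sqrt(math.pi) * total

print(erf_subintervals(1.0, 4), math.erf(1.0))
```

Because the per-interval error scales with a power of the sub-interval width, quadrupling m reduces the error by orders of magnitude, which is the effect reported for the four and sixteen sub-interval cases.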

Explicit Expressions
A first order approximation, based on sub-intervals, is . For the four sub-interval case, explicit expressions are . Using the alternative form, a fourth order expression is

(68)
A fourth order spline approximation, which utilizes sixteen sub-intervals, is detailed in Appendix 4. This expression, when utilized with the transition point , yields an approximation with a relative error bound of .

Results
The relative errors in the spline approximations of orders one to six, and for the case of four equal sub-intervals , , and , are shown in Figure 9.

Improved Approximation
The spline approximations utilizing variable sub-intervals can be improved by transitioning to the approximation at a suitable point, as specified by Equation 56. The relative errors in the spline approximations of orders one to seven, for the case of four equal sub-intervals , , and , are updated in Figure 10 to show the improvement associated with utilizing the optimum transition point to the approximation . The relative error bounds, and transition points, are detailed in Table 5 and Table 6 for the cases of four and sixteen sub-intervals.

Dynamic Constant plus Spline Approximation
Consider the demarcation of the areas, as illustrated in Figure 11 and based on a resolution , that define the error function. It follows that . For the general case of non-uniformly spaced intervals, as defined by the set of monotonically increasing points , and where it is not necessarily the case that , the error function is defined according to (70), where , and . These results arise from spline approximations of order , as defined by Equation 27, for the integrals, respectively, over the intervals and .

Approximations of Orders Zero to Four
Approximations of orders zero to four, arising from Theorem 5.1, are:

Results
For a resolution of , the coefficients are tabulated in Table 7.
A resolution of yields a relative error bound of for a second order approximation, for a fourth order approximation, for a sixth order approximation and for a sixteenth order approximation. These bounds are based on 10,000 equally spaced samples in the interval .
The variation of the relative error bound with resolution, and order, is detailed in Figure 12. The nature of the variation of the relative error, for orders two, three and four, is shown in Figure 13 for the case of resolution of . It is possible to obtain better results by using non-uniformly spaced intervals but the improvement, in general, does not warrant the increase in complexity.

A Dynamical System to Yield Improved Approximations
It is possible to utilize the approximations detailed in Theorem 2.1 and Theorem 4.1 as the basis for determining new approximations with a lower relative error. The approach is indirect and is based on considering the feedback system illustrated in Figure 14, which has dynamically varying feedback. The differential equation characterizing the system is . For specific input, , and modulated feedback, , signals, the output has a known form. For example, for the case of , the output signal, assuming zero initial conditions, is . For the case of (81), the output signal, assuming zero initial conditions, is . This case facilitates approximations for the error function which can be made arbitrarily accurate and which are valid for the positive real line.
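The specific feedback system of Equation 79 is not reproduced here, but the general idea of recovering erf as the output of a dynamical system can be illustrated minimally: erf satisfies the initial value problem y'(t) = (2/sqrt(π))·exp(-t^2), y(0) = 0, which any standard integrator recovers (a sketch; since the right-hand side is independent of y, classical Runge-Kutta integration reduces to composite Simpson quadrature):

```python
import math

def erf_via_ode(x, steps=1000):
    """Integrate y'(t) = (2/sqrt(pi))·exp(-t²), y(0) = 0, from 0 to x.
    The right-hand side does not depend on y, so the classical fourth
    order step reduces to Simpson's rule on each sub-step."""
    f = lambda t: 2.0 / math.sqrt(math.pi) * math.exp(-t * t)
    h = x / steps
    y = 0.0
    for i in range(steps):
        t = i * h
        y += h / 6.0 * (f(t) + 4.0 * f(t + 0.5 * h) + f(t + h))
    return y

print(erf_via_ode(1.5), math.erf(1.5))
```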

Theorem 6.1 Dynamical System Approximations for Error Function
Based on the differential equation specified by Equation 79, a order approximation to , for the case of , can be defined according to (83) where, for the case of : and with Here, the coefficients , are defined by the expansion Finally, it is the case that with the convergence being uniform.

Proof
The proof is detailed in Appendix 5.

Results
The relative error bounds associated with the approximations to are detailed in Table 8. The graphs of the relative errors in the approximations are shown in Figure 15. The clear advantage of the proposed approximations is evident, with the improvement increasing with the order of the initial approximation (i.e. a function with a lower initial relative error bound leads to an increasingly lower relative error bound). The other clear advantage of the approximations, as is evident in Figure 15, is that the relative error is bounded as .
Table 8. Relative error bounds, over the interval , for approximations to as defined in Theorem 6.1.

Columns: order of approximation; relative error bound of the original series with optimum transition point (Table 3); relative error bound of the approximation .

Extension
By utilizing the approximations detailed in Theorem 4.1 similar approximations can be detailed, with lower relative error. For example, the first order approximation, , which is based on four equal sub-intervals and is defined by Equation 67, yields the approximation (95) which has a relative error bound of . With an optimum transition point of the original approximation has a relative error bound of (see Table 5).

Notes
First, the constants , , as defined in Equation 84, form a series that in the limit converges to .
It then follows that the corresponding series converges to : Second, the square root functional structure has been utilized for approximations to the error function as is evident from the approximations detailed in Table 1. It is easy to conclude that the form is well suited for approximating the error function.

Applications
This section details indicative applications of the approximations for the error function that have been detailed above.
The distinct analytical forms, that are specified in Theorem 2.1, Theorem 2.3, Theorem 4.1 and Theorem 6.1, for approximations to the error function, in general, facilitate analysis for different applications. For example, the form detailed in Theorem 2.1 underpins approximations to as detailed in Section 7.2. The form detailed in Theorem 6.1 underpins analytical approximations for the power associated with the output of a nonlinearity modelled by the error function when subject to a sinusoidal signal. The approximations are detailed in Section 7.6.
For applications, where a set relative error bound over a set interval is required, the approximation that is appropriate will depend, in part, on the domain over which an approximation is required as well as the level of the relative error bound that is acceptable. For example, the approximations detailed in Theorem 2.1 and Theorem 2.3 lead to simple analytical forms and with modest relative error bounds over when used with an appropriate transition point to the approximation of . Without the use of a transition point, such approximations are likely to be best suited for a restricted domain, for example, the domain which is consistent with the three sigma case arising from a Gaussian distribution. The fourth order approximations, as specified by

Error Function Approximations: Set Relative Error Bounds
Consider the case where an approximation for the error function, with a relative error bound over the positive real line of , is required. A order Taylor series, with a transition point of , yields a relative error bound of .
An eighth order spline approximation, with a transition point of , yields a relative error bound of . The approximation, according to Equation 56, is . A first order spline approximation, based on four equal sub-intervals , , and , is defined according to (99) and yields a relative error bound of with the transition point .
A dynamic constant plus a spline approximation of order , and based on a resolution of achieves a relative error bound of (10000 points in the interval ). The approximation is Here, the approximation of , for (after three intervals) can be utilized without impacting the relative error bound.
Utilizing a fourth order spline approximation and iteration consistent with Theorem 6.1, the approximation (102) yields a relative error bound of .   Details of approximations that are consistent with higher order relative error bounds are detailed in Table 9.

Approximation for exp(-x^2)
A order approximation to the Gaussian function is detailed in the following theorem:

Theorem 7.1 Approximation for Gaussian Function
A order approximation to the Gaussian function is , where is defined by Equation 21 and is defined by Equation 26.

Proof
The proof is detailed in Appendix 6.

Approximations
Approximations to , of orders zero to five, are:

Results
The relative errors in the above defined approximations to are detailed in Figure 16 for approximations of orders 0, 2, 4, 6, 8, 10 and 12, along with the relative errors in Taylor series approximations of orders . The clear superiority of the defined approximations is evident.
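The poor behaviour of the Taylor series for exp(-x^2), against which Figure 16 compares, is easy to reproduce (a sketch using the Maclaurin partial sums, with `math.exp` as ground truth):

```python
import math

def gauss_taylor(x, n_terms):
    """Partial sum of the Maclaurin series exp(-x²) = sum_k (-1)^k x^(2k) / k!."""
    return sum((-1)**k * x**(2*k) / math.factorial(k) for k in range(n_terms))

# Accurate near the origin, wildly inaccurate for larger arguments:
for x in (0.5, 2.0, 3.0):
    print(x, gauss_taylor(x, 15), math.exp(-x * x))
```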

Comparison
The following order approximation for has been proposed in Howard 2019 (eqn. 77): , where is defined by Equation 21. The relative error bounds over the interval (the three sigma bound case for Gaussian probability distributions) for this approximation, and the approximation defined by Equation 103, are detailed in Table 10. The tabulated results clearly show that this approximation is more accurate than the approximation detailed in Equation 103. The improvement is consistent with the higher order Padé approximant being used.
The following approximations (seventh and fifth order) yield relative error bounds of less than over the interval :

Upper and Lower Bounded Approximations to Error Function
Establishing bounds for has received modest research interest, e.g. Alzer 2010, and published bounds for , for the case of , include (Chu 1955): . The relative errors in these bounds are detailed in Figure 17.
Utilizing the results of Lemma 1, it follows that any of the approximations detailed in Theorem 3.1, Theorem 4.1, Theorem 5.1 or Theorem 6.1 can be utilized to create upper and lower bounded functions for , of arbitrary accuracy and with an arbitrary relative error bound. For example, the approximation specified by Equation 99 yields the functional bounds: , with a relative error bound of less than for the lower bounded function and for the upper bounded function.

(116)
where is specified by Equation 24 and is specified by Equation 25. By utilizing a Taylor series approximation for in , and then integrating, an approximation for can be established. This leads to new series for the error function.

Theorem 7.2 New Series for Error Function
Based on zero, first and second order approximations, the following series for the error function are valid: . Further series, based on higher order approximations, can also be established.

Proof
The proof is detailed in Appendix 7.

Results
The relative errors associated with the zero and second order series are shown in Figure 18 and Figure 19. Clearly, the relative error improves as the number of terms used in the series expansion increases. The significant improvement in the relative error, for small, is evident. A comparison with the relative errors associated with Taylor series approximations, as shown in Figure 2, shows the improved performance.
The second order approximation arising from Equation 118, i.e.
yields a relative error bound of less than 0.001 for the interval and less than for the interval , where the residual function is approximated by the stated order.

Consider a complementary function which is such that (121).
With the approximation detailed in Theorem 6.1 (and by noting that ; see Equation 96), it is the case that , and, thus, can be defined independently of the error function. This function is shown in Figure 20 along with . These two functions act as complementary demarcation functions for the interval . The transition point is .

Power and Harmonic Distortion: Erf Modelled Non-linearity
The error function is often used to model nonlinearities and the harmonic distortion created by such a nonlinearity is of interest. Examples include the harmonic distortion in magnetic recording, e.g. Abuelma'atti 1988 and Fujiwara 1980, and the harmonic distortion arising, in a communication context, from a power amplifier, e.g. Taggart 2005. For these cases the interest is in obtaining, with a sinusoidal input signal defined by , the harmonic distortion created by an error function nonlinearity over the input amplitude range of .
Consider the output signal of a nonlinearity modelled by the error function: For such a case, the output power is defined according to (125) and the output amplitude associated with the harmonic is To determine an analytical approximation to the output power, the approximations stated in Theorem 6.1 lead to relatively simple expressions. Consider the third order approximation, as specified by Equation 93, which has a relative error bound of for the positive real line. For such a case, the output signal is approximated according to and is shown in Figure 21.
The power in can readily be determined (e.g. via use of Mathematica) and it then follows that an approximation to the true power is (128), where and , respectively, are the zero and first order Bessel functions of the first kind. The variation in output power is shown in Figure 22.

Harmonic Distortion
To establish analytical approximations for the harmonic distortion, the functional forms detailed in Theorem 6.1 are not suitable. However, the functional forms detailed in Theorem 2.1 do lead to analytical approximations which are valid over a restricted domain. Consider a fourth order spline approximation, as specified by Equation 36, which approximates the error function over the range with a relative error bound that is better than and leads to the approximation . The amplitude of the harmonic in such a signal is given by , where the change of variable has been used. The first, third, fifth and seventh harmonic levels are: . The variation, with the input signal amplitude, of the harmonic distortion, as defined by , is shown in Figure 23.
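The harmonic amplitudes can be cross-checked numerically. The sketch below evaluates b_n = (2/π) ∫₀^π erf(A sin θ) sin(nθ) dθ by the trapezoidal rule, with `math.erf` as the reference nonlinearity (the quadrature, rather than the paper's closed-form expressions, is the assumption here):

```python
import math

def harmonic_amplitude(A, n, samples=20_000):
    """n-th harmonic amplitude of erf(A·sin(theta)):
    b_n = (2/pi) * integral over [0, pi] of erf(A sin θ)·sin(nθ) dθ,
    evaluated by the trapezoidal rule. Even harmonics vanish by the
    odd symmetry of the nonlinearity."""
    h = math.pi / samples
    total = 0.0
    for k in range(samples + 1):
        th = k * h
        w = 0.5 if k in (0, samples) else 1.0
        total += w * math.erf(A * math.sin(th)) * math.sin(n * th)
    return 2.0 / math.pi * total * h

# For small amplitude the nonlinearity is nearly linear, so
# b_1 ≈ (2/sqrt(pi))·A and the higher harmonics are small:
print(harmonic_amplitude(0.1, 1), 2.0 / math.sqrt(math.pi) * 0.1)
```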

Linear Filtering of an Error Function Step Signal
Consider the case of a practical step input signal that is modelled by the error function, and the case where such a signal is input to a second order linear filter with a transfer function defined by

Theorem 7.3 Linear Filtering of an Error Function Signal
The output of a second order linear filter, defined by Equation 135, to an error function input signal can be approximated by an nth order signal in which the error function is replaced by one of the approximations detailed in Theorem 3.1, Theorem 4.1, Theorem 5.1 or Theorem 6.1. The associated bound on the approximation error is stated in Equation 138.

Proof
The proof is detailed in Appendix 8.

Results
For an error function step input signal, input into a second order linear filter, the output signal is shown in Figure 24. The relative errors in the approximations to the output signal are shown in Figure 25 for the case of approximations as specified by Equation 56 and with the use of optimum transition points.
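The filtering result can be illustrated by direct time-domain simulation. The sketch below assumes a standard unit-DC-gain second order low-pass and a smooth erf-based step; the parameter values (natural frequency, damping, step location and width) are illustrative choices, not those of Figure 24.

```python
import math

def filtered_erf_step(t_end=10.0, dt=1e-3, wn=2.0, zeta=0.7, tau=1.0, t0=2.0):
    """Response of the second order low-pass
        y'' + 2*zeta*wn*y' + wn**2 * y = wn**2 * u(t)
    to the smooth step u(t) = (1 + erf((t - t0)/tau)) / 2,
    integrated with forward Euler from rest initial conditions."""
    y, v = 0.0, 0.0
    t = 0.0
    out = []
    while t < t_end:
        u = 0.5 * (1.0 + math.erf((t - t0) / tau))
        a = wn * wn * (u - y) - 2.0 * zeta * wn * v
        y += dt * v
        v += dt * a
        t += dt
        out.append(y)
    return out

response = filtered_erf_step()
print(response[-1])  # settles near the unit step level
```

Replacing math.erf in u(t) by one of the analytical approximations of Theorems 3.1 to 6.1 reproduces the approximated outputs whose relative errors are shown in Figure 25.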

Extension to Complex Case
By definition, the error function, for the general complex case, is defined according to an integral between the points zero and z, along an arbitrary path. For the case of z = x + jy, and a path along the real axis to the point x and then parallel to the imaginary axis to the point x + jy, the error function is defined according to Salzer 1951, eqn. 5 (Equation 140). Explicit approximations for erf(z) then arise when integrable approximations for the associated two dimensional surfaces are available. Naturally, significant existing research exists, e.g. Salzer 1951, Abrarov 2013.
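For moderate |z|, complex values of the error function can be generated from the Maclaurin series, which is a generic series rather than the Salzer decomposition referred to above; the function name and term count are illustrative.

```python
import math

def erf_complex(z, terms=64):
    """erf(z) via the Maclaurin series
        erf(z) = (2/sqrt(pi)) * sum_{n>=0} (-1)**n * z**(2n+1) / (n! * (2n+1)),
    adequate for moderate |z|; severe cancellation limits it for large |z|."""
    s = 0.0 + 0.0j
    term = complex(z)  # holds (-1)**n * z**(2n+1) / n!, updated incrementally
    for n in range(terms):
        s += term / (2 * n + 1)
        term *= -z * z / (n + 1)
    return (2.0 / math.sqrt(math.pi)) * s

print(erf_complex(1.0), math.erf(1.0))  # real argument recovers math.erf
print(erf_complex(1 + 1j))
```

Since the series has real coefficients, erf(conj(z)) = conj(erf(z)), which provides a simple consistency check.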

Approximation for the Inverse Error Function
There are many applications where the inverse error function is required, and accurate approximations for this function are of interest. From the research underpinning this paper, the author's view is that finding approximations to the inverse error function is best treated directly, as a separate problem, rather than approached via finding the inverse of an approximation to the error function.
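One direct treatment, consistent with the view stated above, is root finding on the error function itself. The sketch below uses Newton's method with the analytic derivative of erf; the function name and iteration limits are illustrative assumptions.

```python
import math

def inv_erf(y, tol=1e-14, max_iter=60):
    """Solve erf(x) = y for |y| < 1 by Newton's method,
    using d/dx erf(x) = (2/sqrt(pi)) * exp(-x**2).
    Starting from x = 0, the iteration converges monotonically
    since erf is increasing and concave for x > 0."""
    if not -1.0 < y < 1.0:
        raise ValueError("inverse erf requires |y| < 1")
    x = 0.0
    for _ in range(max_iter):
        err = math.erf(x) - y
        if abs(err) < tol:
            break
        x -= err / ((2.0 / math.sqrt(math.pi)) * math.exp(-x * x))
    return x

print(inv_erf(0.5))             # about 0.4769
print(math.erf(inv_erf(0.99)))  # recovers 0.99
```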

Conclusion
This paper has detailed analytical approximations for the real case of the error function, underpinned by a spline based integral approximation, which have significantly better convergence than the default Taylor series. The original approximations can be improved by utilizing the approximation erf(x) ≈ 1 for x >> 1, with the transition point being dependent on the order of approximation. The fourth order approximations arising from Theorem 2.1 and Theorem 2.3, with their respective optimum transition points, achieve set relative error bounds over the positive real line, as do the respective sixteenth order approximations.
Further improvements were detailed via two generalizations. The first was based on utilizing integral approximations for each of m equally spaced sub-intervals within the required interval of integration. The second was based on utilizing a fixed sub-interval within the interval of integration, with a known tabulated area, and then utilizing an integral approximation over the remainder of the interval. Both generalizations lead to significantly improved accuracy. For example, a fourth order approximation based on four sub-intervals achieves a relative error bound of 1.43 x 10^-7 over the positive real line, with a sixteenth order approximation yielding a further improved bound.
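The paper's spline integral approximations are not reproduced here; as a stand-in that illustrates why demarcating the integration interval into m sub-intervals improves accuracy, composite Simpson quadrature of the error function integrand can be used (a named substitute technique, not the paper's method):

```python
import math

def erf_subintervals(x, m):
    """Approximate erf(x) = (2/sqrt(pi)) * integral of exp(-t**2) over [0, x]
    with composite Simpson quadrature over m equal sub-intervals.
    Simpson's rule stands in for the paper's spline integral approximation;
    only the sub-interval demarcation idea is illustrated."""
    def f(t):
        return math.exp(-t * t)
    h = x / m
    total = 0.0
    for i in range(m):
        a = i * h
        total += (h / 6.0) * (f(a) + 4.0 * f(a + h / 2.0) + f(a + h))
    return (2.0 / math.sqrt(math.pi)) * total

for m in (1, 2, 4, 8):
    print(m, abs(erf_subintervals(2.0, m) - math.erf(2.0)))  # error falls rapidly
```

Each doubling of m reduces the error by roughly a fixed factor, mirroring the significantly improved accuracy reported for the sub-interval generalizations.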
Finally, it was shown that a custom feedback system, with inputs defined by either the original error function approximations or approximations based on the use of sub-intervals, leads to analytical approximations with improved accuracy which are valid over the positive real line without utilizing the approximation erf(x) ≈ 1 for x suitably large. The original fourth order error function approximation yields an approximation with a significantly improved relative error bound.
Applications of the approximations were detailed and these include, first, approximations that achieve the specified relative error bounds of 10^-4, 10^-6, 10^-10 and 10^-16 over the positive real line. Second, the definition of functions that are upper and lower bounds, of arbitrary accuracy, for the error function. Third, new series for the error function.
Fourth, new sequences of approximations for exp(-x^2) which have significantly better convergence than a Taylor series approximation. Fifth, a complementary demarcation function eC(x), satisfying the constraint eC(x)^2 + erf(x)^2 = 1, was defined. Sixth, arbitrarily accurate approximations for the power and harmonic distortion for a sinusoidal signal subject to an error function nonlinearity. Seventh, approximate expressions for the linear filtering of a step signal that is modelled by the error function.

Acknowledgement
The support of Prof. A. Zoubir, SPG, Technische Universität Darmstadt, Darmstadt, Germany, who hosted a visit during which the research for, and writing of, this paper was completed, is gratefully acknowledged. An anonymous reviewer provided a comment which, after consideration, led to the improved approximations detailed in Theorem 2.3.

Appendix 1:
Proof of Theorem 2.1
Consider the function underpinning the error function integral. Successive differentiation of this function leads to the stated iterative formula and the result, evaluated at the upper limit of integration, then yields the nth order approximation for the error function. To determine the residual function, consider the stated equality; differentiation yields Equation 145 and the required result. Graphs of the magnitude of the residual functions, for orders zero, two, four, six and eight, are shown in Figure 26. The magnitudes of the associated functions, for the same orders, are shown in Figure 27, and it is evident that the bound stated in Equation 152 holds for a fixed constant which is of the order of unity. Hence, the stated bound on the magnitude of the approximation error follows. Further, the bound decreases, for all fixed values of x, as the order of approximation increases.
Figure 26. Graphs of the residual function magnitudes for orders zero, two, four, six and eight.
The convergence is not uniform. Finally, as the residual bound approaches zero, for all fixed values of x, as the order of approximation increases, pointwise convergence follows, which completes the proof.
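The pointwise-but-not-uniform convergence described above can be illustrated numerically. The classical Maclaurin series of erf is used here purely as an illustration of the same qualitative behaviour; it is not the spline approximation analysed in this appendix.

```python
import math

def erf_taylor(x, n):
    """Partial sum, through term index n, of the Maclaurin series
    erf(x) = (2/sqrt(pi)) * sum_k (-1)**k * x**(2k+1) / (k! * (2k+1)).
    At fixed x the error falls with n; at fixed n it grows with x,
    i.e. the convergence is pointwise but not uniform."""
    s = 0.0
    term = x  # holds (-1)**k * x**(2k+1) / k!, updated incrementally
    for k in range(n + 1):
        s += term / (2 * k + 1)
        term *= -x * x / (k + 1)
    return (2.0 / math.sqrt(math.pi)) * s

for n in (2, 4, 8):
    print(n, abs(erf_taylor(1.0, n) - math.erf(1.0)))   # error falls with order
print(abs(erf_taylor(4.0, 8) - math.erf(4.0)))          # same order, larger x: far worse
```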

Appendix 4: Fourth Order Spline Approximation - Sixteen Sub-interval Case
Consistent with Theorem 4.1, a fourth order spline approximation, which utilizes sixteen sub-intervals, is specified by the stated equation, in which the approximating signal y_n satisfies a first order differential equation, with initial condition y_n(0) = 0, defined by g_n, an nth order approximation to f_n.

A5.1 Solving for Coefficients of First Polynomial
Substitution of the assumed polynomial form, from Equation 169, into the differential equation defining the first polynomial, yields the stated expansion. With the odd-indexed coefficients being zero, the maximum power of t on the right hand side of the differential equation depends on whether n is even or odd, and this determines the required form for the polynomial. Substitution then yields the coefficient relationships. For the case of n even, equating coefficients associated with set powers of t yields the stated equations. With the odd coefficients being zero, it follows that the corresponding odd coefficients of the polynomial are also zero. For the even coefficients, the algorithm is that specified in Equation 174. For the case of n being odd, the odd coefficients are again zero and the algorithm is the same as that specified in Equation 174, with the appropriate change to the index range.

A5.2 Solving for Coefficients of Second Polynomial
Substitution of the assumed polynomial form, from Equation 169, into the differential equation defining the second polynomial, yields the corresponding set of coefficient equations, which can be solved in an analogous manner.