Article

Introducing the Leal Method for the Approximation of Integrals with Asymptotic Behaviour: Special Functions

by Hector Vazquez-Leal 1,*, Mario A. Sandoval-Hernandez 2, Uriel A. Filobello-Nino 1, Jesus Huerta-Chua 3, Rosalba Aguilar-Velazquez 4 and Jose A. Dominguez-Chavez 1

1 Facultad de Instrumentación Electrónica, Universidad Veracruzana, Cto. Gonzalo Aguirre Beltrán s/n, Zona Universitaria, Xalapa 91000, Veracruz, Mexico
2 Centro de Bachillerato Tecnologico Industrial y de Servicios No. 190, Boca del Río 94297, Veracruz, Mexico
3 Instituto Tecnológico Superior de Poza Rica, Calle Luis Donaldo Colosio Murrieta s/n, Col. Arroyo del Maíz, Poza Rica 93230, Veracruz, Mexico
4 Facultad de Contaduría y Administración, Circuito Gonzalo Aguirre Beltrán s/n, Zona Universitaria, Xalapa 91000, Veracruz, Mexico
* Author to whom correspondence should be addressed.
AppliedMath 2025, 5(1), 28; https://doi.org/10.3390/appliedmath5010028
Submission received: 20 December 2024 / Revised: 27 January 2025 / Accepted: 6 February 2025 / Published: 12 March 2025

Abstract: This work presents the Leal method for the approximation of integrals without known exact solutions, capable of expanding simultaneously at multiple points. This method can be coupled with asymptotic approximations and the least squares method to extend the domain of convergence. The complete elliptic integral of the first kind, the Gamma function, and the error function are treated with this new method, resulting in highly accurate and easily computable approximations that exhibit a wide region of convergence compared to other reported works. Finally, a comparison of computing time, using Fortran, between our proposals and other approximations from the literature is presented and discussed.

1. Introduction

In the analysis of phenomena in sciences and engineering, it is common to encounter various types of nonlinear differential equations and integrals without exact solutions. Therefore, a common approach to understanding the behaviour of these solutions indirectly is through the application of numerical methods. Nonetheless, such algorithms suffer from several drawbacks, including false states of equilibrium, oscillations, and numerical instabilities, among others. Consequently, the numerical solution obtained may not represent the real solution [1]. Therefore, a wide field of research is focused on obtaining continuous solutions in the form of analytical approximations of the real solutions of nonlinear problems.
There are several semi-analytic methods for obtaining approximate solutions, such as the Rational Homotopy Perturbation method (RHPM) [2]. In this work, the Power Series Extender method (PSEM) [3,4] serves as the basis and will be used alongside the asymptotic analysis method [5], which examines the behaviour of functions as their arguments approach infinity. This approach includes asymptotic expansions, series, estimates, and notations that are essential for deriving approximate or asymptotic solutions to problems that cannot be solved exactly. One of the key methodologies in asymptotic theory is the Euler–Maclaurin summation formula [6], which aids in analysing complex integrals and sums. In [4], the Lambert W function was approximated using PSEM. However, for large argument values, asymptotic analysis was applied, utilizing asymptotic series to obtain accurate approximations.
Special functions in mathematical physics are solutions to differential equations that model a wide range of mathematical problems. These functions are extensively used in physics to address issues in fields such as astronomy, classical mechanics, quantum mechanics, and fluid dynamics. Special functions are distinguished by their inability to be expressed as finite combinations of elementary functions through arithmetic operations or functional compositions. These functions often exhibit asymptotic behaviour, which can be analysed to understand their properties in specific limits [3]. Typically, these functions are expressed using integrals or series.
The exploration of special functions in mathematical physics began with research across various domains, including planetary motion [7], solving algebraic equations, and Runge’s discovery of errors in polynomial interpolation [8]. Additionally, these functions have been studied in relation to oscillatory motions in both classical and quantum mechanics [9] and in the analysis of specific functions in complex variables [10]. These studies have laid a robust foundation for comprehending and utilizing special functions across numerous areas of physics and mathematics.
The significance of special functions in mathematical physics lies in their extensive applications across mathematics, engineering, and daily scenarios. For instance, the normal function plays a critical role in the processing of digital images, signals, and statistics [3]. The error function is utilized in digital communication and the study of transport phenomena [3]. Fresnel functions are vital for applications such as diffraction [11], electromagnetic wave transmission, and antenna design [12]. Elliptic functions are used in mechanics and mutual inductance [11], among other areas. Additionally, the Gamma function finds applications in fields like statistics and physics [13].
In this work, we propose a novel method for approximating integrals. This method is designed to utilize asymptotic expansions [5] and to correct the non-asymptotic zones by matching the derivatives of the trial function at the expansion points with the derivatives obtained through the Taylor series. For simplicity, this method will be referred to as the Leal method; it produces highly accurate approximations with a wide domain of convergence. This paper is organized as follows. First, in Section 2, the Leal method is introduced. Section 3 presents a basic introduction to the least squares method. The case studies are presented in Section 4: the complete elliptic integral of the first kind K(x) [13,14,15,16,17,18], the Gamma function Γ(x) [19,20], and the error function [3,21,22,23]. Next, a discussion on computing convergence is presented in Section 5. Then, a numerical analysis and discussion are presented in Section 6. Finally, in Section 7, the concluding remarks of this work are presented.

2. Introduction to Leal Method

In general terms, integrals can be approximated by their Taylor series [24,25] using a particular expansion point. Therefore, we can obtain the derivatives of the exact solution at a given expansion point even if we do not know its analytical formulation. Hence, the derivatives of the exact solution are expressed as follows:
$$U(x_k),\ U'(x_k),\ U''(x_k),\ U'''(x_k),\ U^{(iv)}(x_k),\ \ldots \qquad (1)$$
where U represents the exact solution, x k is an expansion point, and the prime denotes the derivative of U with respect to x.
Now, we can express the approximation of the nonlinear problem as
$$\tilde{U}(x) = f(x, a_1, a_2, \ldots, a_n) + w(x), \qquad (2)$$
where Ũ is the approximation of U, f(x, a_1, a_2, …, a_n) is a trial function (TF) [26], a_i (i = 1, …, n) are n constants to be determined by this method, and w(x) represents the asymptotic series expansion [5] of the special function.
Next, we match the derivatives of (1) and (2) for different expansion points to build the following system of equations:
$$\begin{aligned}
&\tilde{U}(x_1) = U(x_1), \quad \tilde{U}'(x_1) = U'(x_1), \quad \ldots, \quad \tilde{U}^{(m)}(x_1) = U^{(m)}(x_1),\\
&\tilde{U}(x_2) = U(x_2), \quad \tilde{U}'(x_2) = U'(x_2), \quad \ldots, \quad \tilde{U}^{(p)}(x_2) = U^{(p)}(x_2),\\
&\qquad\vdots\\
&\tilde{U}(x_j) = U(x_j), \quad \tilde{U}'(x_j) = U'(x_j), \quad \ldots, \quad \tilde{U}^{(q)}(x_j) = U^{(q)}(x_j),
\end{aligned} \qquad (3)$$
where j represents the number of expansion points.
From this point, we propose two variations of the Leal method:
1. If n = j + (m + p + ⋯ + q) with j expansion points, then, to obtain Ũ(x), we solve (3) for the a parameters using the Newton–Raphson method [1], among others.
2. If n > j + (m + p + ⋯ + q) with j expansion points, then we solve (3) symbolically for j + (m + p + ⋯ + q) of the variables and finally use the least squares method with the remaining variables to fit Ũ(x) with respect to U(x). The suggested interval in which to perform the fitting is the zone that exhibits the poorest accuracy for the pure asymptotic solution w(x).
Figure 1 illustrates the flowchart of the Leal method for approximating special functions, detailing the necessary steps. For instance, it includes the derivation of the asymptotic series of the special function represented by w ( x ) and the approximation U ˜ ( x ) . An important aspect to note is the criterion n = j + ( m + p + + q ) within the diagram. If this criterion is satisfied, the first variation of the Leal method is applied, solving for the a parameters. Otherwise, the problem is solved differently, and the remaining constants are determined using the least squares method (LSM).
It is important to remark on some key aspects of this process for both variations of the method. w(x) is obtained by means of asymptotic techniques and their variations. This means that the accuracy of Ũ(x) is assured for the limiting behaviour of U(x) as x approaches a constant or as x → ±∞. Another aspect to consider is that the TF can be formulated using polynomial terms, rational polynomial terms, transcendental functions, and combinations of these, as long as its effect on Ũ(x) vanishes as w(x) reaches the limit region. Furthermore, it is important to note that the solution of (3) produces a symbiotic relationship between f and w that enlarges the domain of convergence of Ũ(x). What is more, U(x) can optionally be transformed into another function U_T(x) in order to simplify or ease the approximation of the limit region. In this case, we apply the same procedure to obtain Ũ_T(x) and later apply the inverse transform to recover the approximation Ũ(x) of the original nonlinear problem. The multiple expansion points help to spread the error among them, extending the convergence domain of the approximation. Finally, the application of w(x) is optional if Ũ_T(x) and Ũ(x) are adequately proposed to model the limiting behaviour, as we will see in the case study of the Gamma function.
In other words, the Leal method is an approximation method inspired by the PSEM framework to solve integral problems, with the potential to address differential problems. The proposed solution consists of two parts. The first part, w(x), is an asymptotic expansion series [5] of the function under study, for example, a special function. This expansion exhibits the correct asymptotic characteristics; however, w(x) may lack accuracy in certain neighbourhoods, such as initial regions with non-asymptotic behaviour. To address this limitation, we introduce the trial function (TF), which improves the behaviour of the approximation in these challenging regions.
The PSEM framework uses series decomposition (e.g., Taylor series) for both the TF and w(x). The method then matches these decompositions in terms of their derivatives, comparing them with the exact derivatives computed for specific integral or differential problems. This procedure allows the adjustment constants a_0, a_1, …, a_n to be determined so as to achieve the best possible approximation. Some of these constants can be obtained analytically by computing derivatives; alternatively, the least squares method (LSM) can be used to determine them. It is important to note that this decomposition process, as employed by the PSEM method, allows the approximate functions to inherit properties of the original function, such as discontinuities and other specific behaviours. These characteristics are preserved in the series expansion of the functions.
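As a minimal illustration of how these pieces fit together, the following Maple sketch applies variant 1 to a toy problem (the same integrand used later in Section 4.1, but with a two-constant trial function chosen here only for illustration, so that n = j + m and the system (3) is solved exactly); it is a sketch of the procedure, not one of the paper's case studies.

  # Sketch of variant 1 of the Leal method on a toy problem (illustrative
  # trial function; not one of the case studies reported in Section 4).
  restart;
  U := int(1/x + 1, x):                              # exact solution ln(x) + x
  w := convert(series(U, x = 1, 6), polynom):        # Taylor part w(x) at x = 1
  TF := (a0 + a1*x)*(1 - x):                         # trial function, n = 2 constants
  Ut := w + TF:                                      # coupled approximation U~(x)
  X0 := 9/10:                                        # one expansion point: j = 1, m = 1
  eqs := { eval(Ut, x = X0) = eval(U, x = X0),
           eval(diff(Ut, x), x = X0) = eval(diff(U, x), x = X0) }:
  sol := solve(eqs, {a0, a1}):                       # square system since n = j + m
  Uapprox := eval(Ut, sol);
  evalf(eval(Uapprox - U, x = 0.5));                 # residual at a sample point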

3. Least Squares Method (LSM)

The least squares method is a mathematical and statistical technique employed to determine the optimal function that fits a given set of observed data. Its primary objective is to minimize the sum of the squared residuals, the differences between the observed values and those estimated by the model. By squaring these residuals, the method assigns greater weight to larger discrepancies, thereby enhancing the precision of the estimation [1,27].
Let ( x 0 , y 0 ), ( x 1 , y 1 ), ( x 2 , y 2 ), ⋯, ( x i , y i ) be the coordinates of a data set, such that i = 0 , 1 , 2 , , n , and the adjustment curve is y = f ( x , a 0 , a 1 , a 2 , , a j ) , where a j ( j = 0 , 1 , 2 , ) are adjustment constants. The least squares approximations attempt to minimize the sum of the squares from the vertical distances of y i values to the ideal model f ( x ) and obtain the model function S ( a 0 , a 1 , a 2 , , a j ) , which minimizes the square error defined by
$$S(a_0, a_1, a_2, \ldots, a_j) = \sum_{i=1}^{n} \bigl(y_i - f(x_i, a_0, a_1, a_2, \ldots, a_j)\bigr)^2. \qquad (4)$$
By construction, this represents a convex quadratic form in the parameters a 0 , a 1 , , a j when the model f is linear with respect to these parameters. Even in the case of nonlinear models, the problem is formulated as the minimization of S, ensuring that the search for critical points corresponds to identifying minima. Thus, (4) is designed to minimize the error with respect to the sample set. To do this, partial derivatives are defined with respect to every adjustment constant, creating a system of nonlinear equations given as
$$\begin{aligned}
\frac{\partial S}{\partial a_0} &= \frac{\partial}{\partial a_0}\sum_{i=1}^{n}\bigl(y_i - f(x_i, a_0, a_1, a_2, \ldots, a_j)\bigr)^2 = 0,\\
\frac{\partial S}{\partial a_1} &= \frac{\partial}{\partial a_1}\sum_{i=1}^{n}\bigl(y_i - f(x_i, a_0, a_1, a_2, \ldots, a_j)\bigr)^2 = 0,\\
\frac{\partial S}{\partial a_2} &= \frac{\partial}{\partial a_2}\sum_{i=1}^{n}\bigl(y_i - f(x_i, a_0, a_1, a_2, \ldots, a_j)\bigr)^2 = 0,\\
&\;\;\vdots\\
\frac{\partial S}{\partial a_j} &= \frac{\partial}{\partial a_j}\sum_{i=1}^{n}\bigl(y_i - f(x_i, a_0, a_1, a_2, \ldots, a_j)\bigr)^2 = 0.
\end{aligned} \qquad (5)$$
The equations in (5) are obtained by setting the partial derivatives of S with respect to each parameter a j to zero. This identifies stationary points (minima, maxima, or saddle points). However, the structure of S (a sum of squares) guarantees that these points are global minima in linear cases or local minima in nonlinear cases under standard regularity conditions. The LSM inherently seeks minima because maximizing S would contradict its purpose of reducing error [28,29]. For linear models, S is strictly convex, ensuring a unique minimum. For nonlinear models, while convexity is not guaranteed, the LSM framework still targets minima by iteratively refining parameters. Second-order conditions (e.g., positive definiteness of the Hessian matrix) confirm minima, but in practice, the LSM focuses on first-order conditions for computational efficiency, relying on the problem’s formulation to avoid maxima [28,29]. By solving the system (5) for the adjustment constants a j , a model can be derived that minimizes the error with respect to the given data set. This approach facilitates the creation of a continuous function in the space, optimizing the fit with the observed samples [28]. Consequently, various models can be tested, and the most suitable one can be selected based on its ability to provide the best fit. In our work, the trial function f ( x , a 0 , a 1 , , a j ) is designed to approximate the exact solution. Maximizing the error would be counterproductive; thus, the LSM’s minimization framework aligns with the goal of achieving the best fit. This is consistent with standard optimization theory and practice [28,29].
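For reference, a minimal Maple sketch of this fitting step is shown below; the model, the sampled function, and the sample grid are illustrative placeholders, not the trial functions used in the case studies.

  # Least squares fit of the free constants of a model to sampled data
  # (illustrative model and data; NonlinearFit minimizes the sum of squared
  # residuals defined in (4)).
  restart;
  with(Statistics):
  datax := [seq(0.01*i, i = 1 .. 200)]:
  datay := [seq(evalf(exp(-t)), t in datax)]:
  X := Vector(datax, datatype = float):
  Y := Vector(datay, datatype = float):
  model := (a0 + a1*x)/(1 + b1*x + b2*x^2):          # f(x, a0, a1, b1, b2)
  NonlinearFit(model, X, Y, x);                      # returns the fitted model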
Figure 2 illustrates this methodology, where the asterisk symbol denotes the data points. Each data point ( x i , y i ) , where i = 1 , 2 , 3 , , contributes to determining the optimal function that best represents the data set, depicted by the blue line.

4. Leal Method Applied to Approximate Special Functions

In this section, we will present the approximations for the following special functions:
$$K(x) = \int_0^{\pi/2} \frac{d\theta}{\sqrt{1 - x^2 \sin^2\theta}}, \qquad (6)$$
$$\Gamma(x) = \int_0^{\infty} e^{-t}\, t^{x-1}\, dt, \qquad (7)$$
and
$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^{x} \exp(-t^2)\, dt, \qquad (8)$$
which are the complete elliptic integral of the first kind K(x) [14,15,16,17,18], the Gamma function Γ(x) [19,20], and the error function [21,22,23], respectively. This section begins with a demonstrative case study illustrating the application of the Leal method. It is important to remark that all the power series expansions in this work were obtained using the built-in package "MultiSeries" of Maple 2021.
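These three functions are available directly in Maple (as EllipticK, GAMMA, and erf, respectively), which is also the numerical reference used later for the comparisons; the snippet below is only a quick sanity check of the conventions assumed here (in particular, that EllipticK uses the same modulus convention as (6)).

  # Numerical reference values for the three special functions (sanity check
  # of the conventions assumed in (6)-(8)).
  evalf(EllipticK(0.5));   # complete elliptic integral of the first kind
  evalf(GAMMA(0.5));       # Gamma function
  evalf(erf(0.5));         # error function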

4.1. Approximation Procedure Using Variant 2 of the Leal Method

To understand the application of the Leal method, we consider the function f(x) given by
$$f(x) = \int \left(\frac{1}{x} + 1\right) dx. \qquad (9)$$
First, we obtain the series expansion w(x) given by the first six terms of f(x) at the expansion point x = 1, using the Maple command "series". The series expansion w(x) is given by
$$w(x) = -\frac{137}{60} + 6x - 5x^2 + \frac{10}{3}x^3 - \frac{5}{4}x^4 + \frac{1}{5}x^5. \qquad (10)$$
Second, the proposed trial function is
$$\mathrm{TF}(x) = \frac{(a_0 + a_1 x)(1 - x)}{1 + b_0 x}, \qquad (11)$$
where we will obtain a_0 and a_1 analytically and b_0 using the LSM. There are n = 3 constants to be determined. Following the Leal method, variant 2, we propose the following coupled approximation:
$$\tilde{f}(x) = w(x) + \frac{(a_0 + a_1 x)(1 - x)}{1 + b_0 x}. \qquad (12)$$
Applying the Leal method with the expansion point X_0 = 0.9, we have j = 1 and m = 1, so n > j + m, and
$$\lim_{X \to \frac{9}{10}} \tilde{f}(X) = \lim_{X \to \frac{9}{10}} f(X), \qquad \lim_{X \to \frac{9}{10}} \tilde{f}'(X) = \lim_{X \to \frac{9}{10}} f'(X). \qquad (13)$$
Solving (13) for a 0 and a 1 , we obtain:
a 0 = ln 9 10 ( 9 b 0 + 10 ) 9 a 0 10 + 948243 b 1 1000000 + 316081 300000 , a 1 = ( 9000000 ( 9 ln 9 10 b 0 + 10 ln 9 10 9 b 0 10 + 948243 b 1 1000000 + 316081 300000 ) b 0 + 81 b 0 2 + 8100000 a 1 81000000 ln 9 10 b 0 + 90000000 ln 9 10 + 8534367 b 0 + 9482530 ) / 90000 ( 81 b 0 + 80 ) .
Now, after substituting a_0 and a_1 into (12) and applying the LSM with the NonlinearFit command of Maple in the range x ∈ [0.001, 0.85] (using 850 samples) to obtain b_1, we have:
f ˜ ( x ) = 137 60 + 6 x 5 x 2 + 10 3 x 3 5 4 x 4 + 1 5 x 5 49 Ξ ( x ) ( x 1 ) 304 1223 18 x + 1 , Ξ ( x ) = Ψ ( x ) 1174 17 ln ( 10 ) 1105 8 ln ( 3 ) + 1138 3 2342 x 5 , Ψ ( x ) = 335 6 + 1241 x 18 .
Figure 3 shows the comparison between f(x) and f̃(x). The figure demonstrates that they maintain good accuracy within the range [0, 2]. Appendix A provides the code for this case study.

4.2. Approximation Procedure for K ( x ) Using Variation 1 of Leal Method

First, we propose the following transform
$$K_T(X) = \ln\!\left(\frac{1}{K(1 - X)} + 1\right), \qquad (16)$$
where X = 1 − x.
Figure 4a,b illustrate the behaviour of the complete elliptic integral K(x) and its transformation K_T(X), introducing the change of variable X = 1 − x, within the interval x ∈ [0, 1]. It is important to notice how the points A and B and their corresponding points A′ and B′ are swapped; in particular, the point B is projected from infinity to zero, simplifying the solution process.
Calculating the order-two series expansion of (16) at X = 0 results in
$$\tilde{K}_{T0}(X) = \ln\!\left(\frac{2 + 3\ln(2) - \ln(X)}{3\ln(2) - \ln(X)}\right) - \frac{\bigl(3\ln(2) - \ln(X) - 1\bigr)\,X}{\bigl(3\ln(2) - \ln(X)\bigr)\bigl(2 + 3\ln(2) - \ln(X)\bigr)}. \qquad (17)$$
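The expansion (17) can, in principle, be reproduced directly in Maple; the sketch below assumes that Maple's EllipticK follows the modulus convention of (6) and uses the MultiSeries package mentioned in Section 4 (its raw output may need simplification to match the form above).

  # Order-two expansion of the transformed elliptic integral at X = 0
  # (sketch; output may require simplification to match (17)).
  restart;
  KT := ln(1/EllipticK(1 - X) + 1):
  MultiSeries:-series(KT, X = 0, 2);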
Now, we propose the trial function for this problem
$$\mathrm{TF}(X) = \frac{\xi(X)}{\Phi(X)} = \frac{a_6 X^6 + a_5 X^5 + a_4 X^4 + a_3 X^3 + a_2 X^2 + a_1 X}{b_6 X^6 + b_5 X^5 + b_4 X^4 + b_3 X^3 + b_2 X^2 + b_1 X + 1}, \qquad (18)$$
where TF(X) tends to vanish in the proximity of X = 0, as expected, ensuring the predominance of K̃_T0(X) in this region.
Then, following the Leal method, we propose the next coupled approximation
$$\tilde{K}_T(X) = \mathrm{TF}(X) + \tilde{K}_{T0}(X), \qquad (19)$$
where a_1, …, a_6 and b_1, …, b_6 are constants to be determined by the Leal method. Now, we obtain a set of 12 equations for the value and derivatives at X = 1 and X = 0.1:
$$\begin{aligned}
&\lim_{X \to \frac{1}{10}} \tilde{K}_T(X) = \lim_{X \to \frac{1}{10}} K_T(X), \quad \lim_{X \to \frac{1}{10}} \tilde{K}_T'(X) = \lim_{X \to \frac{1}{10}} K_T'(X), \quad \ldots, \quad \lim_{X \to \frac{1}{10}} \tilde{K}_T^{(v)}(X) = \lim_{X \to \frac{1}{10}} K_T^{(v)}(X),\\
&\lim_{X \to 1} \tilde{K}_T(X) = \lim_{X \to 1} K_T(X), \quad \lim_{X \to 1} \tilde{K}_T'(X) = \lim_{X \to 1} K_T'(X), \quad \ldots, \quad \lim_{X \to 1} \tilde{K}_T^{(v)}(X) = \lim_{X \to 1} K_T^{(v)}(X).
\end{aligned} \qquad (20)$$
Solving (20) and substituting the result and X = 1 − x into the inverse transform, we obtain
$$\tilde{K}_1(x) = \frac{1}{\exp\!\bigl(\tilde{K}_T(1 - x)\bigr) - 1}, \quad 0 \le x < 1, \qquad \tilde{K}_T(1 - x) = \frac{\xi(1 - x)}{\Phi(1 - x)} + \tilde{K}_{T0}(1 - x), \qquad (21)$$
and
ξ ( 1 x ) = 77804536 36924097 ( 1 x ) 6 34920949 11787120 ( 1 x ) 5 204405104 43666965 ( 1 x ) 4 7285956 9188933 ( 1 x ) 3 7482109 385464945 ( 1 x ) 2 + 77429 4322012602 ( 1 x ) , Φ ( 1 x ) = 115995201 158520115 ( 1 x ) 6 + 10497325 969996 ( 1 x ) 5 177670556 3625651 ( 1 x ) 4 + 66637664 14351787 ( 1 x ) 3 + 208027606 1714409 ( 1 x ) 2 + 157022980 5140819 151882161 5140819 x .
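Systems such as (20) do not need to be typed by hand; a small helper along the following lines (a sketch, with hypothetical names, not taken from the paper) builds the value- and derivative-matching conditions at a given expansion point up to a chosen order.

  # Build the Leal-method matching conditions at expansion point X0 up to
  # derivative order 'ord' (sketch; hypothetical helper, not from the paper).
  MatchConditions := proc(Uapprox, Uexact, X, X0, ord)
      local k, conds;
      conds := [limit(Uapprox, X = X0) = limit(Uexact, X = X0)];
      for k to ord do
          conds := [op(conds),
                    limit(diff(Uapprox, X$k), X = X0) = limit(diff(Uexact, X$k), X = X0)];
      end do;
      conds;
  end proc:

With such a helper, the twelve conditions in (20) would correspond to the union of MatchConditions(K̃_T, K_T, X, 1/10, 5) and MatchConditions(K̃_T, K_T, X, 1, 5).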

4.3. Approximation of K ( x ) Using Variation 2 of Leal Method

Let us propose the following transform:
$$K_{T1}(X) = \ln\!\left(\frac{1}{K(1 - X)} + 1\right). \qquad (23)$$
Next, we obtain the order two power series of (23) at X = 0 , resulting in the following asymptotic approximation
$$\tilde{K}_{T1}(X) = \ln\!\left(\frac{2 + 3\ln(2) - \ln(X)}{3\ln(2) - \ln(X)}\right). \qquad (24)$$
Substituting (24) into the inverse of (23) and replacing X = 1 − x, we obtain
$$K_A(x) = \frac{3}{2}\ln(2) - \frac{1}{2}\ln(1 - x), \qquad (25)$$
which is an asymptotic approximation for K ( x ) .
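A quick numerical check (illustrative only) shows that (25) indeed captures the logarithmic divergence of K(x) as x → 1:

  # Compare the asymptotic form (25) against Maple's EllipticK near x = 1.
  KA := x -> 3/2*ln(2) - 1/2*ln(1 - x):
  evalf(EllipticK(0.999));   # reference value
  evalf(KA(0.999));          # asymptotic approximation (25)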
Now, following the Leal method procedure, we propose
$$\begin{aligned}
\tilde{K}_2(x) &= K_A(x) + \frac{p(x)}{q(x)}\,(1 - x^2),\\
p(x) &= a_6 x^6 + a_5 x^5 + a_4 x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0,\\
q(x) &= b_{10} x^{10} + b_9 x^9 + b_8 x^8 + b_7 x^7 + b_6 x^6 + b_5 x^5 + b_4 x^4 + b_3 x^3 + b_2 x^2 + b_1 x + 1.
\end{aligned} \qquad (26)$$
Then, we construct the following system of equations
$$\lim_{x \to 0} \tilde{K}_2(x) = \lim_{x \to 0} K(x), \qquad \lim_{x \to 0} \tilde{K}_2'(x) = \lim_{x \to 0} K'(x), \qquad \lim_{x \to 0} \tilde{K}_2''(x) = \lim_{x \to 0} K''(x), \qquad (27)$$
to satisfy K(x) and its first two derivatives at x = 0.
Solving (27) for a_0, a_1, and a_2, we obtain
$$\begin{aligned}
a_0 &= \frac{1}{2}\pi - \frac{3}{2}\ln(2),\\
a_1 &= -\frac{1}{2} + \left(\frac{1}{2}\pi - \frac{3}{2}\ln(2)\right) b_1,\\
a_2 &= \frac{5\pi}{8} - \frac{1}{4} - \left(\frac{1}{2} - \left(\frac{\pi}{2} - \frac{3\ln(2)}{2}\right) b_1\right) b_1 + \bigl(b_2 - b_1^2\bigr)\left(\frac{\pi}{2} - \frac{3\ln(2)}{2}\right) - \frac{3\ln(2)}{2}.
\end{aligned} \qquad (28)$$
Now, after substituting a_0, a_1, and a_2 into (26) and applying the LSM in the range x ∈ [0.001, 0.85] (using 849 samples), we obtain
K ˜ 2 ( x ) = K A ( x ) + p ( x ) q ( x ) ( 1 x 2 ) , p ( x ) = 52463 306859 x 6 + 338323 156484 x 5 1073376 185837 x 4 + 1061399 340464 x 3 + 398125 136349 358485 ln ( 2 ) 180098 x 2 + 349355 84514 + 617948 ln ( 2 ) 178087 x + π 2 3 2 ln ( 2 ) , q ( x ) = 92263 390200 x 10 + 104005 188081 x 9 126709 81341 x 8 + 396130 221997 x 7 + 679667 105164 x 6 806763 88483 x 5 2184029 393237 x 4 + 835234 87493 x 3 59644 109137 x 2 439845 190139 x + 1 ,
where the values of the remaining constants were converted into equivalent fractions using the 'convert' command of Maple with the option 'rational'. The factor (1 − x²) guarantees that the effect of the TF vanishes, keeping the limiting behaviour of K̃_2(x) as x → 1 due to K_A(x).

4.4. Approximation of Γ ( x ) Using Variation 1 of Leal Method

First, we propose the following transformation for the Gamma function
$$\Gamma_T(x) = \ln\!\left(\frac{1}{10\,\Gamma(x)} + 1\right). \qquad (30)$$
Figure 5a,b show the behaviour of Γ(x) and Γ_T(x) in the interval 0 < x ≤ 2. It is important to notice how the point A is projected from infinity to zero (A′), allowing us to obtain the derivatives of Γ_T(x) at x = 0.
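A short check (illustrative) confirms the behaviour shown in Figure 5: the transform (30) sends the pole of Γ(x) at x = 0 to the finite value 0, which is what makes derivative matching at that point possible.

  # The transform (30) maps the singularity of GAMMA at x = 0 to zero.
  GT := x -> ln(1/(10*GAMMA(x)) + 1):
  limit(GT(x), x = 0, right);          # expected: 0
  evalf(GT(1.0)); evalf(GT(2.0));      # finite values inside the working interval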
Now, we propose the following approximation for Γ T ( x )
$$\tilde{\Gamma}_T(x) = \frac{\Pi(x)}{\Psi(x)} = \frac{a_5 x^5 + a_4 x^4 + a_3 x^3 + a_2 x^2 + a_1 x + a_0}{b_6 x^6 + b_5 x^5 + b_4 x^4 + b_3 x^3 + b_2 x^2 + b_1 x + 1}, \qquad (31)$$
where a_0, …, a_5 and b_1, …, b_6 are constants to be determined by the Leal method. Next, we obtain a system of 11 equations for the derivatives
$$\begin{aligned}
&\lim_{X \to 0} \tilde{\Gamma}_T(X) = \lim_{X \to 0} \Gamma_T(X), \quad \lim_{X \to 0} \tilde{\Gamma}_T'(X) = \lim_{X \to 0} \Gamma_T'(X), \quad \ldots, \quad \lim_{X \to 0} \tilde{\Gamma}_T^{(vii)}(X) = \lim_{X \to 0} \Gamma_T^{(vii)}(X),\\
&\lim_{X \to \frac{4}{3}} \tilde{\Gamma}_T(X) = \lim_{X \to \frac{4}{3}} \Gamma_T(X), \quad \lim_{X \to \frac{4}{3}} \tilde{\Gamma}_T'(X) = \lim_{X \to \frac{4}{3}} \Gamma_T'(X),\\
&\lim_{X \to 2} \tilde{\Gamma}_T(X) = \lim_{X \to 2} \Gamma_T(X), \quad \lim_{X \to 2} \tilde{\Gamma}_T'(X) = \lim_{X \to 2} \Gamma_T'(X),
\end{aligned} \qquad (32)$$
using three expansion points x = [ 0 , 4 / 3 , 2 ] .
Solving (32) and substituting the result into the inverse transform, we obtain
$$\begin{aligned}
\tilde{\Gamma}_1(x) &= \frac{1}{10\left(\exp\!\left(\dfrac{\Pi(x)}{\Psi(x)}\right) - 1\right)}, \quad 0 < x \le 2,\\
\Pi(x) &= \frac{359065}{156606257}\,x^5 - \frac{308164}{22170535}\,x^4 - \frac{300501}{40357138}\,x^3 + \frac{671569}{6173182}\,x^2 + \frac{1}{10}\,x,\\
\Psi(x) &= \frac{8641}{22448955}\,x^6 + \frac{430048}{89620273}\,x^5 + \frac{416525}{24162662}\,x^4 + \frac{1055311}{15655050}\,x^3 + \frac{2159838}{6348467}\,x^2 + \frac{981206}{1750073}\,x + 1.
\end{aligned} \qquad (33)$$
It is important to notice that we circumvented the calculation of an asymptotic approximation w ( x ) for Γ T ( x ) because Γ ˜ T ( x ) was suitable to reproduce with high accuracy the asymptotic behaviour, as x tends to the limit value at x = 0 .

4.5. Approximation of Γ ( x ) Using Variation 2 of Leal Method

Calculating the order-two power series expansion of Γ(x) as x tends to infinity results in
$$\Gamma_A(x) = \left(\sqrt{\frac{2\pi}{x}} + \frac{1}{12}\sqrt{\frac{2\pi}{x^3}}\right)\left(\frac{1}{x}\right)^{-x}\exp(-x). \qquad (34)$$
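The two-term expansion (34) is Stirling's series for Γ(x); in Maple it can be recovered, for example, with the asympt command (a sketch; the paper reports using the MultiSeries package, and the raw output may be arranged differently from (34)).

  # Asymptotic (Stirling-type) expansion of GAMMA(x) as x -> infinity.
  restart;
  asympt(GAMMA(x), x, 2);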
Next, by substituting (34) into (30), we obtain the following asymptotic approximation:
$$\tilde{\Gamma}_{TA} = \ln\!\left(\frac{1}{10\left(\dfrac{1}{x}\right)^{-x}\exp(-x)\left(\sqrt{\dfrac{2\pi}{x}} + \dfrac{1}{12}\sqrt{\dfrac{2\pi}{x^3}}\right)} + 1\right). \qquad (35)$$
Now, using (35) as a reference, we propose the following trial function:
Γ ˜ T 2 = ln 1 10 ( 1 x + Δ ) x exp ( x ) + L 2 π x + Δ + 1 12 2 π ( x + Δ ) 3 + 1 , L = ( a 3 x 3 + a 2 x 2 + a 1 x 1 ) ( b 2 x 2 + b 1 x + 1 ) Δ , Δ = exp ( 5 x ) ,
where Δ perturbs the asymptotic behaviour of Γ(x) at x = 0, allowing L to replicate the asymptote as x tends to zero while preserving, at the same time, the limiting behaviour as x tends to infinity.
Next, following the Leal method, we construct the following set of equations:
$$\lim_{x \to 0} \tilde{\Gamma}_{T2}(x) = \lim_{x \to 0} \Gamma_T(x), \quad \ldots, \quad \lim_{x \to 0} \tilde{\Gamma}_{T2}'''(x) = \lim_{x \to 0} \Gamma_T'''(x). \qquad (37)$$
Then, isolating a_1, a_2, and a_3 from (37) results in
a 1 = 13 12 2 π b 1 8 , a 2 = 4 2 π ( 13 48 ( b 1 γ ) 121 48 ) π 3 / 2 + 2 π b 1 + 1 8 b 2 + 9 2 , a 3 = 18 2 π ( ( 13 432 γ 2 + 13 216 b 1 121 216 γ 121 216 b 1 13 216 b 2 2147 864 ) π 3 / 2 + 13 2592 π 7 / 2 + ( b 1 + 2 9 b 2 + 677 216 ) 2 π ) .
It is important to notice that a 1 , a 2 , and a 3 guarantee the derivatives at x = 0 , and they are expressed in terms of the b constants; these have to be determined by LSM. Now, substituting (36) into the inverse of (30), we obtain
Γ ˜ 2 ( x ) = 2 π x + Δ + 1 12 2 π ( x + Δ ) 3 ( 1 x + Δ ) x exp ( x ) + L .
Then, we use the LSM to fit Γ̃_2(x) in the interval x ∈ [0.01, 1.2] (239 samples) using the remaining unknown constants, resulting in b_1 = 2.09891849860793 and b_2 = 5.20339042933097. Finally, Figure 6 displays the comparison of Γ̃_1(x) and Γ̃_2(x) with respect to Γ(x); this figure shows that they keep a good accuracy on [0, 2]; however, Γ̃_1(x) quickly loses accuracy for x > 2.55, while Γ̃_2(x) keeps it indefinitely.
Figure 7 shows that the relative error of Γ̃_1(x) is smaller than that of Γ̃_2(x) for x < 2.55; above this value, Γ̃_2(x) becomes the more accurate of the two. It is possible to improve the performance of Γ̃_1(x) and Γ̃_2(x) if the trial function incorporates additional adjustment constants.

4.6. Approximation of Error Function Using Variation 2 of Leal Method

First, we calculate the order-two power series expansion at infinity of the error function [3,21,22,23], resulting in
$$\operatorname{erf}(x) = 1 - \frac{1}{\sqrt{\pi}\,x}\exp(-x^2). \qquad (40)$$
Next, we use (40) as a guide to propose the TF, resulting in
$$\widetilde{\operatorname{erf}}(x) = 1 - \frac{1}{\sqrt{\pi}\,(x + \delta)}\exp(-x^2), \qquad \delta = \frac{a_0 + (a_1 + (a_2 + a_3 x)x)x}{1 + (b_1 + (b_2 + (b_3 + b_4 x)x)x)x}. \qquad (41)$$
Then, in order to replicate the behaviour of erf ( x ) at x = 0 and its first three derivatives, we construct the following system of equations
$$\lim_{x \to 0} \widetilde{\operatorname{erf}}(x) = \lim_{x \to 0} \operatorname{erf}(x), \quad \lim_{x \to 0} \widetilde{\operatorname{erf}}'(x) = \lim_{x \to 0} \operatorname{erf}'(x), \quad \ldots, \quad \lim_{x \to 0} \widetilde{\operatorname{erf}}'''(x) = \lim_{x \to 0} \operatorname{erf}'''(x). \qquad (42)$$
Next, we isolate a_0, …, a_3 from (42), resulting in
$$\begin{aligned}
a_0 &= \frac{1}{\sqrt{\pi}},\\
a_1 &= \frac{2}{\pi} + \frac{b_1}{\sqrt{\pi}} - 1,\\
a_2 &= \frac{4\sqrt{\pi} - \pi^2 b_1 + (b_2 - 1)\pi^{3/2} + 2 b_1 \pi}{\pi^2},\\
a_3 &= -\frac{1}{\pi^{7/2}}\left(\left(-2 b_2 + \frac{8}{3}\right)\pi^{5/2} - 8\pi^{3/2} + \pi^{7/2} b_2 + (b_1 - b_3)\pi^3 - 4\pi^2 b_1\right).
\end{aligned} \qquad (43)$$
Finally, we apply the LSM (x ∈ [0.1, 2] with 98 samples) to (41) in order to calculate the values of the b constants, resulting in b_1 = 3867/3293, b_2 = 941/1425, b_3 = 247/1272, and b_4 = 56/2223.
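Putting (41) and (43) together with these fitted values gives a closed-form approximation that can be checked directly against Maple's erf; the sketch below does exactly that, using the constants as reconstructed above (the expected outcome is a small residual, consistent with Figure 15b).

  # Assemble the error-function approximation (41) with the constants from
  # (43) and the fitted b's, then compare against Maple's erf.
  restart;
  b1 := 3867/3293: b2 := 941/1425: b3 := 247/1272: b4 := 56/2223:
  a0 := 1/sqrt(Pi):
  a1 := 2/Pi + b1/sqrt(Pi) - 1:
  a2 := (4*sqrt(Pi) - Pi^2*b1 + (b2 - 1)*Pi^(3/2) + 2*b1*Pi)/Pi^2:
  a3 := -((-2*b2 + 8/3)*Pi^(5/2) - 8*Pi^(3/2) + Pi^(7/2)*b2
          + (b1 - b3)*Pi^3 - 4*Pi^2*b1)/Pi^(7/2):
  delta := (a0 + (a1 + (a2 + a3*x)*x)*x)/(1 + (b1 + (b2 + (b3 + b4*x)*x)*x)*x):
  erfApprox := 1 - exp(-x^2)/(sqrt(Pi)*(x + delta)):
  evalf(eval(erfApprox - erf(x), x = 1.0));   # residual at x = 1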

5. Computing Convergence

First, we will analyse the error difference among the successive orders of variation 1 of the Leal method in order to visualize the convergence of the approximation procedure as follows:
$$\text{Convergence} = \int_a^b \bigl(\tilde{E}_{i+1}(x) - \tilde{E}_i(x)\bigr)^2\, dx, \qquad i = 1, 2, \ldots \qquad (44)$$
where i denotes the successive orders of the Leal approximations Ẽ, and a and b define the interval chosen to visualize the convergence. The numerical integration was performed using Simpson's 1/3 rule.
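A compact way to evaluate (44) is sketched below; the two successive approximations E1 and E2 are placeholders (crude erf-type stand-ins), and the Simpson 1/3 routine is a generic composite implementation written for illustration, not the authors' code.

  # Convergence metric (44) evaluated with a composite Simpson 1/3 rule
  # (placeholder approximations E1, E2; illustrative only).
  restart;
  Simpson13 := proc(g, a, b, n)   # n subintervals, n must be even
      local h, s, k;
      h := (b - a)/n;
      s := g(a) + g(b)
           + 4*add(g(a + (2*k - 1)*h), k = 1 .. n/2)
           + 2*add(g(a + 2*k*h), k = 1 .. n/2 - 1);
      evalf(s*h/3);
  end proc:
  E1 := x -> 1 - exp(-x^2)/(sqrt(Pi)*x):
  E2 := x -> 1 - exp(-x^2)/(sqrt(Pi)*(x + 1/sqrt(Pi))):
  Convergence := Simpson13(x -> (E2(x) - E1(x))^2, 0.1, 3, 100);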
For three case studies, we employed trial functions (TF) expressed as rational polynomial expressions of a given [ N / M ] order, where N represents the order of the numerator and M denotes the order of the denominator. Table 1 shows the calculated convergence, illustrating the successive orders for the three case studies presented in this paper.
For example, for the case of the erf ˜ ( x ) function, the notation [ 3 / 4 ] indicates that the exponent of the numerator polynomial is 3 and that of the denominator is 4, continuing this pattern until reaching [ 0 / 0 ] , which signifies that the trial function vanishes. For this function, we compute the convergence by using the following successive order shown in the last row of Table 1 in the interval [ 0 , 3 ] .
Figure 8, Figure 9 and Figure 10 show the convergence for the approximations K̃_1(x), K̃_2(x), Γ̃_1(x), Γ̃_2(x), and erf̃(x) for the different orders expressed in the sequences in Table 1. The error tends to reach low values: below 1 × 10^−14 for K̃_1 and 1 × 10^−10 for K̃_2. Additionally, it is important to highlight that the convergence behaviour of the Leal method follows the behaviour exhibited by approximate methods in the literature, such as the Homotopy Analysis method (HAM) [30], the Rational Homotopy Perturbation method (RHPM) [2], and the Variational Iteration method (VIM) [31], among others. That is, by adding more adjustment constants, convergence may present variations in accuracy, but the overall tendency is to reduce the error. This can be seen in Figure 8b and Figure 9a. The convergence of variation 2 of the Leal method relies on the least squares convergence results reported in the literature [32,33,34].
Figure 11, Figure 12 and Figure 13 show the absolute error in the approximations obtained using the successive orders of the Leal Ẽ approximations for Γ(x), K(x), and erf(x). The absolute error decreases as N and M in [N/M] increase in the trial function. It is important to note, as shown in Figure 11b, that the absolute error for K̃_2(x) [4/10] is smaller than that for K̃_2(x) [6/10]. This behaviour arises because the convergence at iteration 5 is greater than that at iteration 6, as observed in Figure 8b. It is worth mentioning that the convergence behaviour of the Leal method aligns with the behaviour exhibited by other approximation methods reported in the literature [2,30,31], among others. In general, the absolute error behaviour shown in Figure 11, Figure 12 and Figure 13 is in good agreement with the convergence behaviour presented in Figure 8, Figure 9 and Figure 10.

6. Numerical Comparison and Discussion

The Leal method employs asymptotic expansions [5] and corrects the non-asymptotic regions by matching the derivatives of the trial function at the expansion points with those derived from the Taylor series. This technique effectively reconstructs the initial regions exhibiting non-asymptotic behaviour, as demonstrated by the approximate functions in the case studies presented in this paper. Once the system is configured with the adjustment constants, the Leal method enables the values of F̃(x) and their corresponding derivatives to be satisfied at the selected expansion points according to the chosen order. Users have the flexibility to select the expansion points and the maximum order of the derivative they wish to guarantee. Notably, this method inherently ties F̃(x) and its derivatives to those of the exact solution in a continuous way. Additionally, it is important to mention that variant 2 continues to meet the requirements for the derivatives at the expansion points while simultaneously allowing for adjustment constants that fit Ũ(x) to U(x).
Figure 14 presents the significant-digits comparison among our proposals (K̃_1(x) and K̃_2(x)) and other reported approximations given by (A1)–(A5) from the literature [13,14,15,17,18]; see the equations in Appendix A. It is important to notice that our proposals are the only ones that keep a good accuracy along the whole interval, exhibiting a clear advantage after x = 0.5, in particular in the proximity of x = 1, where K(x) tends to infinity. Likewise, Figure 14 shows that K̃_1(x) reaches 10 significant digits of precision, which makes it compatible with the standard 32-bit IEEE 754 float of C/C++, whose precision is about 7 digits [35]. On the other hand, K̃_2(x) exhibits better accuracy than K̃_1(x), particularly in the interval [0.2, 0.8].
Likewise, Figure 15a shows a comparison of our approaches Γ̃_1(x) and Γ̃_2(x) versus others from the literature: (A6) and (A7), published in [19,20], and (A8) and (A9), published in [13]. It shows how Γ̃_1(x) reaches 7 S.D. or more in the interval x ∈ [0, 2], while Γ̃_2(x) provides more than 3 digits in its poorest zone but replicates with high accuracy both asymptotic regions (x → 0 and x → ∞). Note that Γ̃_1 and Γ̃_2 present more significant digits near x = 0 than all other approximations in the literature.
Figure 15b demonstrates that erf̃(x) provides more significant digits for x > 0.5 compared to all the approximations (A10)–(A12) reported in [21,23] and [3], respectively. In each case, the significant digits (S.D.) [36] are calculated using
$$\text{Significant Digits} \approx -\log_{10}\left|\frac{H(x) - \tilde{H}(x)}{H(x)}\right|, \qquad (45)$$
where H ( x ) is the exact solution (numerical solution from Maple), and H ˜ ( x ) is the approximate solution.
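Expressed as a small Maple helper (illustrative only), the measure (45) reads:

  # Significant digits of an approximation Happrox against a reference H,
  # evaluated at x0 (illustrative helper implementing (45)).
  SD := (H, Happrox, x0) -> evalf(-log10(abs((H(x0) - Happrox(x0))/H(x0)))):
  SD(erf, x -> 1 - exp(-x^2)/(sqrt(Pi)*x), 2.0);   # e.g. the asymptotic form (40) at x = 2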
The Leal method is an interesting tool for the approximate solution of special functions. Moreover, future work is required to extend the power of the Leal method for the approximation of nonlinear differential equations of initial value and boundary value, fractal calculus, fractional calculus, among others. It is important to propose a systematic procedure for the selection of suitable trial functions and conduct more research on the convergence of the proposed method. Currently, we propose the trial function in order to increase the accuracy of the poorest zone while maintaining accuracy in the asymptotic region.
We can find in the literature several approximation techniques applied to integrals. Among them, we can mention the Variational Iteration method, Taylor series, Adomian Decomposition method, Successive Approximations method, Laplace Transform method, Series Solution method, and asymptotic techniques [37].
We can mention some advantages of the Leal method over the above-mentioned techniques:
The Leal method is capable of providing algebraic expressions, similar to other approximation techniques.
The Leal method can be coupled to work in combination with power series expansions and asymptotic expansions, as reported in the case studies of this work. This strategy can produce a remarkable increase in the domain of convergence (see case studies). In fact, further research will focus on exploring the combination of the Leal method with the Variational Iteration Method, Adomian Decomposition Method, among others.
The Leal method can be applied without requiring the aforementioned coupling with other approximate methods (see Variation 1 of the Leal method in Section 4.2 and Section 4.4). In this case, the Leal method can be applied using basic knowledge of calculus and numerical methods; in contrast, some approximate methods are too cumbersome and require specialized knowledge. Taking these advantages into account, this method can also be applied to solve nonlinear ordinary differential equations.
Further research is required to explore the possibility of replacing the least squares method with a variational principle [38,39,40] to optimize some constants from the Leal method. The order of convergence and the theorem of convergence of the Leal method are open problems that deserve further work. This is a hard task that involves trial function selection and the nature of the integral to solve. Likewise, it is important to develop a systematic methodology for selecting the most appropriate trial functions prior to applying the least squares method to optimize its application. On the other hand, further research is needed to develop enhanced approximations that are compatible with the 64-bit IEEE 754 representation, ensuring 15 significant digits for all intervals of the approximations discussed in the presented case studies.

7. Conclusions

In this paper, we introduced Leal’s method for approximating special functions. The approximations were derived using various methodologies, including the Taylor series of the exact function equated with the Taylor series of the trial function (TF), asymptotic analysis, least squares, and a multiple expansion concept. Importantly, approaches using Leal’s method demonstrated superior performance in the asymptotic regions of each special function examined in the case studies. The Leal method can be combined with other approximate methods such as HPM, Padé, among others. One advantage of combining the least squares method (LSM) with Leal’s method is the extended convergence domain in regions exhibiting asymptotic behaviour. Overall, the approximations obtained with Leal’s method in the asymptotic zones showed more significant digits than other approximations found in the literature. Leal’s method has demonstrated an easy-to-apply methodology, with potential applications for solving nonlinear ordinary differential equations.

Author Contributions

Conceptualization, H.V.-L., M.A.S.-H. and U.A.F.-N.; Formal analysis, H.V.-L., M.A.S.-H., U.A.F.-N., J.H.-C. and R.A.-V.; Investigation, M.A.S.-H. and U.A.F.-N.; Methodology, H.V.-L., M.A.S.-H., U.A.F.-N. and J.H.-C.; Software, H.V.-L., M.A.S.-H., J.H.-C., R.A.-V. and J.A.D.-C.; Validation, H.V.-L., M.A.S.-H., J.H.-C., R.A.-V. and J.A.D.-C.; Visualization, H.V.-L., M.A.S.-H., J.H.-C., R.A.-V. and J.A.D.-C.; Writing—original draft, H.V.-L. and M.A.S.-H.; Writing—review & editing, H.V.-L. and M.A.S.-H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors would like to acknowledge Roberto Ruiz Gomez for his technical support on this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Appendix A.1. Maple Code

  # Maple code for Leal method, variant 2,
  # for case study 1.
  restart;
  with(plots):
  with(Statistics):
  Digits := 10;
  #
  # Function to approximate.
  qq := int(1/x + 1, x);
  dow := qq:
  # Obtain series expansion w(x).
  ser1 := simplify(convert(series(dow, x = 1, 6), polynom), size);
  # Trial function
  TF := (a1*x + a0)*(1 - x)/(b0*x + 1);
  f2 := ser1 + TF;
  #Expansion point.
  X0 := 9/10:
  #Procedure expansion.
  extra0 := eval(f2, x = X0) = eval(dow, x = X0):
  t0 := a0 = solve(extra0, a0);
  # Substituting.
  f2b := eval(f2, [t0]);
  # Compute derivative.
  extra1 := eval(diff(f2, x), x = X0) = limit(diff(dow, x), x = X0):
  t1 := a1 = eval(solve(extra1, a1), t0);
  # Substituting.
  f2b := simplify(eval(f2, [t0, t1]));
  # Save the approximation in this variable.
  Aprox1 := f2b;
  # Samples for LMS
  datax := [seq(x, x = 0.001 .. 0.85, 0.001)]:
  datay := [seq(dow, x = 0.001 .. 0.85, 0.001)]:
  X := Vector(datax, datatype = float):
  Y := Vector(datay, datatype = float):
  Aprox1 := NonlinearFit(Aprox1, X, Y, x);
  # The convert/rational function converts a floating-point number
  # to an approximate rational number.
  Aprox := convert(Aprox1, rational, 5);
  # Ploting ~f(x) vs f(x)
  H5 := plot(qq, x = 0 .. 2, color=red, thickness=0, legend="Exact"):
  H6 := plot(Aprox, x = 0 .. 2, color = blue, linestyle = dot,
  thickness = 4, legend="~f(x)"):
  display({H5, H6}, view = [0..2, -7..3],
  legendstyle = [font = ["TIMES", 13], location = bottom],
  labels = ["x values", "f(x)"], labelfont = ["HELVETICA", 13],
  labeldirections = ["horizontal", "horizontal"], numpoints = 15);

Appendix A.2. Special Functions Equations

Elliptic functions K ( x )
K ( x ) = 0.512 1 0.776 x 2 + 0.258 1 0.987 x 2 + 0.800 1 0.177 x 2 .
K ( x ) = π 1 + 3 16 1 2 x 2 2 + 105 1024 1 2 x 2 4 + 1155 16384 1 2 x 2 6 2 x 2 + 4 .
K ( x ) = π 2 ( 1 + 1 4 x + 9 64 x 2 + 25 256 x 3 + 1225 16384 x 4 + 3969 65536 x 5 + 53361 1048576 x 6 + 184041 4194304 x 7 + 41409225 1073741824 x 8 ) .
K ( x ) = π 2 1 + x 2 + 1 4 x 4 + 1 36 x 6 + 1 576 x 8 + 1 14400 x 10 + 1 518400 x 12 + π x 2 ( 10769 x 6 + 1030 x 4 16128 x 2 110592 ) 294912 + π 24 e x 2 x 2 + 24 e x 2 5 x 8 16 x 6 36 x 4 48 x 2 24 48 .
K ( x ) = 11 7 17 35 ln ( x 2 + 1 ) + 2 23 ln 31 47 x 2 + 1 + 9 46 ln 7 38 x 2 + 1 .
Gamma functions Γ ( x )
Γ ( x ) = π x 1 e x 1 8 ( x 1 ) 3 + 4 ( x 1 ) 2 + x 29 30 1 6 .
Γ ( x ) = 2 π ( x 1 ) x 1 e x 1 1 + 1 12 ( x 1 ) 2 1 10 x 1 .
Γ ( x ) = 1 / ( exp ( 0.2084721215 x exp ( 0.6941829682 x 2 ) + 0.6902220099 x exp ( 0.1818111523 x 2 ) + 0.08383862837 x exp ( 1.832856342 x 2 ) ) 1 ) .
Γ ( x ) = 2 π ( x 1 ) ( x 1 ) e x 1 ( 1 + 1 / ( x 3 2 + 7.614042356 ( x 1 ) exp ( 7.782832554 x + 7.782832554 ) + 0.02209681117 ( x 1 ) exp ( 4.705349849 x + 4.705349849 ) ) ) 1 12 exp ( 7 / 720 ( x 1 ) 3 900 ( x 1 ) 49 1531 / ( 1975680 ( ( x 1 ) 7 + 34595736000 ( x 1 ) 5 16841 10219256619062120 ( x 1 ) 3 3687050653 + 56714988832486825 ( x 1 ) 13412221930189368 ) .
Error functions erf ( x )
erf ( x ) = 1 4 π + 0.14 | x | 2 | x | 2 1 + 0.14 | x | 2 sign ( x ) .
erf ( x ) = tanh 39 x 2 π 111 2 arctan 35 x 111 π .
erf ( x ) = 2 1 + exp ( Φ ) 1 , Φ = x 9 ( 105 π 4 9328 π 3 + 116928 π 2 483840 π + 645120 ) 5670 π 9 / 2 + 2 x 7 ( 15 π 3 532 π 2 + 3360 π 5760 ) 315 π 7 / 2 2 x 5 ( 3 π 2 40 π + 96 ) 15 π 5 / 2 + 4 x 3 ( π 4 ) 3 π 3 / 2 4 x π .

References

  1. Burden, R.L.; Faires, J.D. Numerical Analysis; Cengage Learning: Singapore, 2010. [Google Scholar]
  2. Saad-Albalawi, K.; Saad-Alkahtani, B.; Kumar, A.; Goswami, P. Numerical Solution of Time-Fractional Emden–Fowler-Type Equations Using the Rational Homotopy Perturbation Method. Symmetry 2023, 15, 258. [Google Scholar] [CrossRef]
  3. Sandoval-Hernandez, M.A.; Vazquez-Leal, H.; Filobello-Nino, U.; Hernandez-Martinez, L. New handy and accurate approximation for the Gaussian integrals with applications to science and engineering. Open Math. 2019, 17, 1774–1793. [Google Scholar] [CrossRef]
  4. Vazquez-Leal, H.; Sandoval-Hernandez, M.; Garcia-Gervacio, J.; Herrera-May, A.; Filobello-Nino, U. PSEM approximations for both branches of Lambert function with applications. Discret. Dyn. Nat. Soc. 2019, 2019, 8267951. [Google Scholar] [CrossRef]
  5. de Bruijn, N.G. Asymptotic Methods in Analysis; Dover: Amsterdam, The Netherlands, 1961. [Google Scholar]
  6. Jakimczuk, R. Some Applications of the Euler-Maclaurin Summation Formula. Int. Math. Forum 2013, 8, 9–14. Available online: https://www.m-hikari.com/imf/imf-2013/1-4-2013/jakimczukIMF1-4-2013.pdf (accessed on 10 December 2024). [CrossRef]
  7. Herschel, J.F. A Brief Notice of the Life, Researches, and Discoveries of Friedrich Wilhelm Bessel; G. Barclay: Trangie, NSW, Australia, 1847. [Google Scholar]
  8. Butzer, P.; Jongmans, F. PL Chebyshev (1821–1894): A guide to his life and work. J. Approx. Theory 1999, 96, 111–138. [Google Scholar] [CrossRef]
  9. Schrödinger, E. An Undulatory Theory of the Mechanics of Atoms and Molecules. Phys. Rev. 1926, 28, 1049–1070. [Google Scholar] [CrossRef]
  10. Muñoz, J.L. Riemann: Una visión Nueva de la Geometría; Nivola: New York, NY, USA, 2006. [Google Scholar]
  11. Sandoval-Hernandez, M.; Vazquez-Leal, H.; Hernandez-Martinez, L.; Filobello-Nino, U.; Jimenez-Fernandez, V.; Herrera-May, A.; Castaneda-Sheissa, R.; Ambrosio-Lazaro, R.; Diaz-Arango, G. Approximation of Fresnel integrals with applications to diffraction problems. Math. Probl. Eng. 2018, 2018, 4031793. [Google Scholar] [CrossRef]
  12. Aznar, Á.C.; Roca, L.J.; Casals, J.M.R.; Robert, J.R.; Boris, S.B.; Bataller, M.F. Antenas; Alfaomega: London, UK, 2004. [Google Scholar]
  13. Sandoval-Hernández, M.; Hernández-Méndez, S.; Torreblanca-Bouchan, S.E.; Díaz-Arango, G.U. Actualización de contenidos en el campo disciplinar de matemáticas del componente propedéutico del bachillerato tecnológico: El caso de las funciones especiales. RIDE Rev. Iberoam. Investig. Desarro. Educ. 2021, 12, 23. [Google Scholar] [CrossRef]
  14. Rohedi, A.Y.; Pramono, Y.H.; Widodo, B.; Yahya, E. The Novelty of Infinite Series for the Complete Elliptic Integral of the First Kind. Preprints 2016. [Google Scholar] [CrossRef]
  15. Guedes, E.; Gandhi, K.R.R. On the Complete Elliptic Integrals and Babylonian Identity II: An Approximation for the Complete Elliptic Integral of the first kind. Bull. Math. Sci. Appl. 2013, 2, 72–78. [Google Scholar] [CrossRef]
  16. Borwein, J.M.; Borwein, P.B. Pi and the AGM; Wiley: New York, NY, USA, 1987. [Google Scholar]
  17. Abramowitz, M.; Stegun, I.A. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables; National Bureau of Standards: Gaithersburg, MD, USA, 1973. [Google Scholar]
  18. Vatankhah, A.R. Approximate solutions to complete elliptic integrals for practical use in water engineering. J. Hydrol. Eng. 2011, 16, 942–945. [Google Scholar] [CrossRef]
  19. Ramanujan, S. The Lost Notebook and Other Unpublished Papers; Narosa Publishing House: New Delhi, India, 1988. [Google Scholar]
  20. Nemes, G. New asymptotic expansion for the Gamma function. Arch. Math. 2010, 95, 161–169. [Google Scholar] [CrossRef]
  21. Patel, J.K.; Read, C.B. Handbook of the Normal Distribution, 2nd ed.; Marcel Dekker, Inc.: New York, NY, USA, 1996. [Google Scholar]
  22. Winitzki, S. A Handy Approximation for the Error Function and Its Inverse. A Lecture Note Obtained through Private Communication. 2008. Available online: https://wenku.baidu.com/view/d2a5e11ec5da50e2524d7ff1?pcf=2&bfetype=new&bfetype=new&_wkts_=1738961370306&needWelcomeRecommand=1 (accessed on 12 December 2024).
  23. Vazquez-Leal, H.; Castaneda-Sheissa, R.; Filobello-Nino, U.; Sarmiento-Reyes, A.; Orea, J.S. High Accurate Simple Approximation of Normal Distribution Integral. Math. Probl. Eng. 2012, 2012, 124029. [Google Scholar] [CrossRef]
  24. He, J.-H.; Ji, F.Y. Taylor series solution for Lane–Emden equation. J. Math. Chem. 2019, 57, 1932–1934. [Google Scholar] [CrossRef]
  25. He, J.H. The simplest approach to nonlinear oscillators. Results Phys. 2019, 15, 102546. [Google Scholar] [CrossRef]
  26. Vazquez-Leal, H.; Sarmiento-Reyes, A. Power series extender method for the solution of nonlinear differential equations. Math. Probl. Eng. 2015, 2015, 717404. [Google Scholar] [CrossRef]
  27. Gerald, C.F. Análisis Numérico; Alfaomega: London, UK, 1997. [Google Scholar]
  28. Bates, D.M.; Watts, D.G. Nonlinear regression: Iterative estimation and linear approximations. Nonlinear Regres. Anal. Its Appl. 1988, 32–66. [Google Scholar] [CrossRef]
  29. Boyd, S.; Vandenberghe, L. Convex Optimization; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  30. Alomari, A.K.; Noorani, M.S.M.; Nazar, R.M. Approximate analytical solutions of the Klein-Gordon equation by means of the homotopy analysis method. J. Qual. Meas. Anal. 2008, 4, 45–57. Available online: https://www.ukm.my/jqma/v4_1/JQMA-4-1-04-abstractrefs.pdf (accessed on 12 December 2024).
  31. Shirazian, M. A new acceleration of variational iteration method for initial value problems. Math. Comput. Simul. 2023, 214, 246–259. [Google Scholar] [CrossRef]
  32. Liu, Y.; Ding, F. Convergence properties of the least squares estimation algorithm for multivariable systems. Appl. Math. Model. 2013, 37, 476–483. [Google Scholar] [CrossRef]
  33. Deckers, K.; Bultheel, A. Rational interpolation: I. Least square convergence. J. Math. Anal. Appl. 2012, 395, 455–464. [Google Scholar] [CrossRef]
  34. Dieuleveut, A.; Flammarion, N.; Bach, F. Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression. J. Mach. Learn. Res. 2017, 18, 1–51. [Google Scholar]
  35. Rajaraman, V. IEEE standard for floating point numbers. Resonance 2016, 21, 11–30. Available online: https://www.ias.ac.in/public/Volumes/reso/021/01/0011-0030.pdf (accessed on 10 December 2024). [CrossRef]
  36. Barry, D.; Culligan-Hensley, P.; Barry, S. Real values of the W-function. ACM Trans. Math. Softw. (TOMS) 1995, 21, 161–171. [Google Scholar] [CrossRef]
  37. Wazwaz, A.M. Linear and Nonlinear Integral Equations: Methods and Applications, 1st ed.; Springer Publishing Company, Incorporated: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  38. He, J.H.; Mo, L.F. Variational approach to the finned tube heat exchanger used in hydride hydrogen storage system. Int. J. Hydrogen Energy 2019, 38, 16177–16178. [Google Scholar] [CrossRef]
  39. He, J.H.; Chang, S. A variational principle for a thin film equation. J. Math. Chem. 2013, 57, 2075–2081. [Google Scholar] [CrossRef]
  40. He, J.H. Lagrange crisis and generalized variational principle for 3D unsteady flow. Int. J. Numer. Methods Heat Fluid Flow 2019, 30, 1189–1196. [Google Scholar] [CrossRef]
Figure 1. Flowchart for Leal method for the approximation of integrals.
Figure 2. Least squares method. Asterisks denote data used for the adjustment of the curve.
Figure 3. Convergence for erf̃(x), variant 2.
Figure 4. A comparison between K(x) and its transformation K̃_T(X). (a) K(x). (b) K̃_T(X).
Figure 5. Γ(x) and Γ_T(x) comparison. (a) Γ(x) in the interval 0 < x ≤ 2. (b) Γ_T(x) in the interval 0 < x ≤ 2.
Figure 6. Comparison of Γ(x) vs Γ̃_1(x), Γ̃_2(x), and the numerical solution referred to as the exact solution for practical purposes.
Figure 7. Relative error for Γ̃_1(x), Γ̃_2(x).
Figure 8. Convergence for K̃_1(x) and K̃_2(x). (a) Convergence for K̃_1(x), variant 1. (b) Convergence for K̃_2(x), variant 2.
Figure 9. Convergence for Γ̃_1(x) and Γ̃_2(x). (a) Convergence for Γ̃_1(x), variant 1. (b) Convergence for Γ̃_2(x), variant 2.
Figure 10. Convergence for erf̃(x).
Figure 11. Absolute error in approximations for K̃_1(x) and K̃_2(x). (a) Absolute error in approximations for K̃_1(x), variant 1. (b) Absolute error in approximations for K̃_2(x), variant 2.
Figure 12. Absolute error for Γ̃_1(x) and Γ̃_2(x). (a) Absolute error in approximations for Γ̃_1(x), variant 1. (b) Absolute error in approximations for Γ̃_2(x), variant 2.
Figure 13. Absolute error for erf̃(x), variant 2.
Figure 14. Significant digits for K̃(x) and other approximations.
Figure 15. Significant digits for Γ̃(x) and erf̃(x). (a) Significant digits for Γ̃(x) and other approximations. (b) Significant digits for erf̃(x) and other approximations.
Table 1. Calculated Convergence for the three case studies.

| Case Study | Variant | Order | Chosen Interval |
|---|---|---|---|
| 1 | 1 | i_K̃1 = [[0/0], [1/1], [2/2], …, [6/6]] | [0, 0.9999999] |
| 1 | 2 | i_K̃2 = [[0/0], [1/10], [2/10], …, [5/10]] | [0, 0.9999999] |
| 2 | 1 | i_Γ̃1 = [[1/0], [1/1], …, [4/4], [4/5], [5/5], [5/6]] | [1 × 10^−5, 2.5] |
| 2 | 2 | i_Γ̃2 = [[0/0], [1/1], [1/2], [2/2], [3/2], [4/3], [5/4]] | [1 × 10^−7, 16] |
| 3 | 2 | i_erf̃ = [[0/0], [0/2], [1/2], [2/2], [3/2], [3/3], [3/4]] | [0, 3] |