Abstract
In this paper, we present a comparative study between the Sinc–Galerkin method and a modified version of the variational iteration method (VIM) for solving the non-linear Sturm–Liouville eigenvalue problem. In the Sinc method, the problem under consideration is converted from a non-linear differential equation into a non-linear system of algebraic equations, which we then solve using iterative techniques such as Newton's method. The other method under consideration is the VIM, which has been modified through the use of the Laplace transform; another effective modification of the VIM replaces the non-linear term in the integral equation resulting from the well-known VIM with Adomian's polynomials. In order to explain the advantages of each method over the other, several problems have been studied, including one with an application in the field of spectral theory. The results for these problems, reported in tables, show that the improved VIM outperforms the Sinc method, while the Sinc method retains some advantages over the VIM when dealing with singular problems.
1. Introduction
The problem under consideration in this study is of great importance in mathematics and physics. Many physical problems require evaluating the eigenvalues as well as finding the corresponding eigenfunctions in order to understand the physical behaviour, especially when dealing with vibrations and waves. A general non-linear Sturm–Liouville problem (NSLP) can be written as the following differential equation for a non-linear function F:
subject to the boundary conditions
where the given functions , and are assumed to be analytic on their domain . The constants that appear in the above equation are arbitrary. Solving problem (1) subject to the boundary conditions (2), we obtain non-zero values of the parameter , called eigenvalues. They are infinite in number and enjoy several properties: they are real, simple, and form an increasing sequence. To each eigenvalue there corresponds a solution, say , known as an eigenfunction; see [].
Many previous studies have discussed solutions of problem (1), especially those with applications in physics and engineering. In applied studies, such as the vibration and stability of deformable bodies, it is required to find the eigenvalues first. Engineers are interested in the location of the smallest eigenvalue, as it largely governs the dynamic behaviour of the system. The eigenvalues are also crucial in finding the stability region of solutions of the NSLP []. In [], a collocation method from the class of weighted-residual methods is investigated for the approximate computation of the higher eigenvalues of SLPs. Shannon sampling theory has been used to compute the eigenvalues of regular SLPs []. Asymptotic formulas for eigenvalues associated with Hill's equation have been studied in []. In [], the Sinc–Galerkin method was used to approximate solutions of non-linear problems involving non-linear second-, fourth-, and sixth-order differential equations. The Sinc–Galerkin method was also used in [,] to solve two-point boundary value problems with applications to chemical reactor theory. The authors of [,] compared the performance of Sinc–Galerkin methods using Sinc bases for solving linear and non-linear second-order two-point boundary value problems. The VIM was discovered and developed by He [,,]. Several studies dealt with modifications of the method [,], and several authors used the method to solve difficult problems with physical applications [,,]. In [], eigenvalues and eigenfunctions of singular two-interval Sturm–Liouville problems are computed, while in [] fourth-order linear differential equations are solved. A fractional Sturm–Liouville problem based on the operational matrix method was presented in []. Some existence results for non-linear third-order integro-multi-point boundary value problems are discussed in [].
In this paper, we introduce two methods for solving (1), (2): the Sinc–Galerkin method and the variational iteration method. Stenger [] originally proposed the numerical solution of ordinary differential equations by the Sinc–Galerkin method. An excellent exposition of the use of Sinc functions to approximate solutions of differential equations may be found in [,,]. A basis element may be transformed to any connected subset of the real line via composition with a suitable conformal map. In conjunction with the Galerkin method for differential equations, perhaps the most distinctive feature of the basis is the resulting exponential convergence rate of the error, , where and where basis functions are used to build the approximation. Moreover, this convergence rate is maintained when the solution of the differential equation has boundary singularities. Of equal practical significance is that the technique's implementation requires no modification in the presence of singularities. Specifically, the statement of the quadrature, the mesh definition, and the resulting matrix structure depend only on the parameters of the differential equation, whether it is singular or non-singular.
This paper is organized as follows. The Sinc solution, together with the Galerkin method and the development of the scheme, is treated in Section 2. In Section 3, we formulate an iterative procedure and provide modifications of the VIM to solve (1), (2). Numerical examples that demonstrate the convergence of the Sinc method and compare its performance with the VIM are presented in the last section.
2. Sinc Function Approximation
In this section, we present an overview of the facts and concepts needed for the study of the Sinc function and for its use in solving Equation (1); these concepts are taken from the books of Stenger and Lund [,].
2.1. Sinc Function Properties
It is known that the Sinc function is defined on , and because the interval on which the problem is posed is , we need to adapt the Sinc function to this interval through the use of a conformal mapping. The Sinc function (also known as the band-limited function) is defined on the whole real line by
We denote the uniform spacing between the interpolation points by , and we define the translated Sinc function as
If the function f is defined on the whole real line and h is positive, then the following series, whenever it converges, is called the Whittaker cardinal series of f.
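To make these definitions concrete, the following minimal Python sketch (our own illustration, not part of the original formulation) evaluates the translated Sinc basis and a truncated cardinal series; the step size h, the truncation limit N, and the test function are arbitrary choices.

```python
import numpy as np

def S(j, h, x):
    """Translated Sinc basis S(j, h)(x) = sinc((x - j*h)/h), with sinc(t) = sin(pi*t)/(pi*t)."""
    return np.sinc((x - j * h) / h)      # np.sinc already includes the factor pi

def cardinal_series(f, h, N, x):
    """Truncated Whittaker cardinal series: sum_{j=-N}^{N} f(j*h) * S(j, h)(x)."""
    return sum(f(j * h) * S(j, h, x) for j in range(-N, N + 1))

# Illustration: a rapidly decaying smooth function is reproduced accurately.
f = lambda t: np.exp(-t**2)
x = np.linspace(-2.0, 2.0, 9)
print(np.max(np.abs(cardinal_series(f, h=0.5, N=30, x=x) - f(x))))   # small error
```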
Definition 1.
For we define the infinite open strip
Definition 2.
Let be a simply connected domain with boundary , and let denote two distinct points on . Let ϕ be a conformal map of onto with ψ as its inverse map. Let . For given and a constant h, set for
It is well known that the Sinc approximation has an exponentially small error; to achieve this, the function being approximated has to satisfy a certain decay condition, so we introduce the following class of functions, denoted by . For more details about the terms appearing in the next definitions, readers are referred to [].
Definition 3.
Let be the class of complex-valued functions f defined on a path-connected domain ;
where , and ; and with
where C is a simple closed curve in .
The notion of exponential decay is defined below.
Definition 4.
For , the is exponentially decaying with respect to ϕ if there exist positive constants K and α such that
We will use integrals to calculate the residual of functions that belong to the class . The proof of the following theorem can be found in [].
Theorem 1.
Suppose f belongs to the class . Then we have
If is decaying exponentially with respect to ϕ and belongs to , then for we get
2.2. The General Sinc–Galerkin Method
We describe the general Galerkin method, which is typically used to approximate the solution of an operator equation by a linear combination of the elements of a given linearly independent system. To solve the equation
we assume a suitable Hilbert space and . Let be a dense linearly independent set in and let . For the solution we define the approximation as a linear combination
with . The coefficients are to be determined. Galerkin's idea is to require that the residual
is orthogonal to each with respect to the inner product in :
In matrix form:
These equations determine all . Taking the to be Dirac delta functions, we see that Sinc collocation methods are a special case of Sinc–Galerkin methods. Galerkin's method is a powerful tool not only for finding approximate solutions, but also for proving existence theorems for solutions of linear and non-linear equations, especially in problems involving partial differential equations.
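As a simple illustration of the Galerkin recipe just described (our own sketch, using a sine basis on (0,1) rather than the Sinc basis treated in this paper), consider the model problem -u'' = f with homogeneous boundary conditions:

```python
import numpy as np
from scipy.integrate import quad

def galerkin_sine(f, n):
    """Galerkin solution of -u'' = f on (0,1), u(0) = u(1) = 0, with basis sin(k*pi*x)."""
    A = np.zeros((n, n))          # stiffness matrix (phi_j', phi_k')
    b = np.zeros(n)               # load vector (f, phi_j)
    for j in range(1, n + 1):
        for k in range(1, n + 1):
            A[j - 1, k - 1] = quad(lambda x: j * np.pi * np.cos(j * np.pi * x)
                                             * k * np.pi * np.cos(k * np.pi * x), 0, 1)[0]
        b[j - 1] = quad(lambda x: f(x) * np.sin(j * np.pi * x), 0, 1)[0]
    c = np.linalg.solve(A, b)     # orthogonality of the residual gives A c = b
    return lambda x: sum(c[k - 1] * np.sin(k * np.pi * x) for k in range(1, n + 1))

u = galerkin_sine(lambda x: 1.0, n=8)              # exact solution is x*(1 - x)/2
xs = np.linspace(0.0, 1.0, 5)
print(np.max(np.abs(u(xs) - xs * (1 - xs) / 2)))   # small Galerkin error
```

Replacing the sine basis by the composed Sinc basis of the next subsection gives the Sinc–Galerkin scheme developed below.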
2.3. The Sinc Methodology
To solve the differential equation under consideration (1) using the Sinc methodology for all x in , and since the Sinc function is defined and built on , we have to adapt the Sinc basis via a conformal mapping. Suppose that the function is approximable on and satisfies certain smoothness and boundedness conditions on a that contains . In this manner, we are dealing with the "eye-shaped" region (see Figure , p. 68, of []) that contains our domain . Therefore, to start our approximation on the domain , we define the eye-shaped domain in the -plane:
The eye-shaped domain is mapped conformally onto the infinite strip via
The conformal map maps onto , which is appropriate for using the Sinc methodology to approximate the solution of (1) with boundary conditions (2). Accordingly, we construct the new basis functions defined on as the composition of two functions:
To go back to the original domain, we use the inverse of :
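As a concrete illustration (an assumption on our part, since the maps depend on the interval under consideration), suppose the problem is posed on the unit interval; the standard choices are then φ(x) = ln(x/(1 - x)) and ψ(t) = e^t/(1 + e^t), and the interpolation points introduced next are their images x_k = ψ(kh). A minimal Python sketch:

```python
import numpy as np

# Assumed setting: problem posed on (0,1), with the standard eye-shaped-domain maps.
phi = lambda x: np.log(x / (1.0 - x))            # conformal map onto the infinite strip
psi = lambda t: np.exp(t) / (1.0 + np.exp(t))    # its inverse, back to (0,1)
dphi = lambda x: 1.0 / (x * (1.0 - x))           # phi'(x), used later in the quadrature weights

N, h = 16, 0.5                                   # illustrative truncation limit and step size
k = np.arange(-N, N + 1)
x_k = psi(k * h)                                 # Sinc nodes, clustered near the endpoints
print(np.allclose(phi(x_k), k * h))              # True: the nodes map back to the uniform grid
```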
For the equally spaced interpolation points , we need to define the inverse images of the Sinc domain as
with
respectively. A discrete system for Equation (1) is obtained by rewriting Equation (1) as
where , and . In order to establish the discrete system for (9), we replace and its derivatives by
The are constants appearing in (9); they will be specified later. Recall that
and suppose that the inner product is defined by
Here is a weight function, chosen differently for different cases. Although other reasonable choices exist, the choice made here is a consequence of the requirement that the boundary conditions vanish at the end points. It is appropriate to pick . For a thorough discussion of the choice of weight functions, we refer the reader to [,]. Orthogonalizing the residual with respect to
yields the system
From here on, we simply replace by y. Following the Sinc–Galerkin procedure, the first expression in Equation (14) is integrated by parts twice and the latter expression once. Assuming that the boundary terms vanish at both ends of the interval, Equation (14) reduces to
To build an approximate solution by means of the Sinc–Galerkin method, we must evaluate the integrals in (15), which leads to the discrete system. The standard Sinc quadrature formula (24) will be used; specifics of the quadrature formula and the conditions governing its error bounds can be found in []. For a function defined on that fulfils the hypotheses of the quadrature formula, we obtain for
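For the reader's convenience, and assuming the usual symmetric truncation limits, the standard Sinc quadrature rule referred to above takes the form

$$\int_{\Gamma} F(x)\,dx \;\approx\; h \sum_{j=-N}^{N} \frac{F(x_j)}{\phi'(x_j)}, \qquad x_j = \psi(jh),$$

so that, for the unit interval with φ(x) = ln(x/(1 - x)), the weights reduce to 1/φ'(x_j) = x_j(1 - x_j).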
The Sinc–Galerkin method requires the derivatives of the composite Sinc functions evaluated at the nodes. For a conformal map of onto , we use the following notation
and,
Thus, the expressions in the i-th equation of (15) are approximated by
The second integral in Equation (15) can be written as
while the third integral in Equation (15) becomes
Finally, the last integral on the left-hand side of Equation (15) can be approximated by the finite sum
while the integral on the right-hand side of Equation (15) has the form
The following notation will be needed to construct the system. Define the matrices
For instance, the th components of the matrix are as in (18). Denote , where the superscript "T" indicates the transpose of a matrix. The discrete system obtained from the Sinc–Galerkin method coincides with (15) and can be written in a more convenient form:
Matrix U represents the Sinc approximation of the second derivative of y, written as
while:
With this in view, we arrive at the following result.
Theorem 2.
The discrete solution of the non-linear SLP obtained via the Sinc–Galerkin method, for locating the constants , is given by
In conclusion, we end up with a non-linear system of equations in the unknown constants . The non-linear system can be solved using Newton's method. The values found for give rise to the approximate Sinc solution of .
2.4. Newton’s Method
Here we rewrite the non-linear system of Equations (20) in the form:
Due to the non-linear terms appearing in the system, an appropriate method for solving (24) is Newton's method. Newton's method starts with an initial guess for the unknown vector at iteration i. Let be the guess, and let denote the value of evaluated at the i-th iteration. If the norm of is not small enough, we look for an update of the vector , where , which can be written in components as
with the goal of reaching . Taylor's expansion theorem for functions can be used to approximate close to as
where denotes the Jacobian matrix given by
Ignoring the higher order terms and evaluating the Jacobian at , we arrive at
or
The equation above is a system of linear equations in the unknowns . We stop Newton's iteration whenever for a given . We would also like to mention that other iterative techniques, such as the secant method and quasi-Newton methods, can be used to solve the non-linear system (20).
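A minimal, self-contained Python sketch of this procedure is given below; it uses a finite-difference Jacobian and a toy two-dimensional system in place of the actual discrete system (20), both of which are our own illustrative choices.

```python
import numpy as np

def newton(G, c0, tol=1e-10, max_iter=50):
    """Newton iteration for a nonlinear system G(c) = 0, with a finite-difference Jacobian.

    Generic sketch: in the Sinc-Galerkin setting, G would encode the discrete system (20)
    and c the vector of unknown coefficients."""
    c = np.asarray(c0, dtype=float)
    n = c.size
    for _ in range(max_iter):
        g = G(c)
        if np.linalg.norm(g) < tol:          # stopping criterion ||G(c)|| < tol
            break
        J = np.empty((n, n))
        eps = 1e-7
        for j in range(n):                   # forward-difference approximation of dG_i/dc_j
            e = np.zeros(n)
            e[j] = eps
            J[:, j] = (G(c + e) - g) / eps
        c = c + np.linalg.solve(J, -g)       # solve J * delta = -G(c) and update
    return c

# Toy usage: solve x^2 + y^2 = 4, x*y = 1, starting from (2, 0.5).
G = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
print(newton(G, [2.0, 0.5]))
```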
3. The Variational Iteration Method
The VIM, proposed by Ji-Huan He [,], is an analytical method based on Lagrange multipliers. It is a powerful, simple, and effective method for solving large classes of linear and non-linear problems. For linear problems, the exact solution can be obtained in a single iteration step because the Lagrange multiplier can be identified exactly. Unlike traditional methods, it requires no discretization and no perturbation.
3.1. Analysis of the VIM
To illustrate the basic concepts of He’s VIM, we consider the system
Here, T is a differential operator that acts on the function y defined on an interval , and g is a known analytic function defined for all .
The VIM is based on dividing T into linear and non-linear operators:
Here, L is a linear differential operator, N is a non-linear differential operator, and is the inhomogeneous term of the equation. The correction functional of Equation (26) can be constructed as follows
Here, is the Lagrange multiplier, which can be identified optimally by variational theory, are the -th order approximate solutions, and the are restricted variations, whose own variations vanish, . The successive approximations , are readily obtained once has been determined, starting from any suitable initial function . Consequently, the solution is given by
where has a limit as . The convergence proof for the series solution can be found in []. To increase the convergence rate of the truncated series solution derived from the VIM, we apply a simple modification, the Laplace VIM (LVIM), through the following steps (see [])
- apply the Laplace transform to the truncated series obtained by the VIM;
- approximate the result of the previous step by a Padé approximant (see the sketch after this list);
- apply the inverse Laplace transform to the output of the previous step.
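As an illustration of the Padé step (a generic helper of our own; the transform steps themselves can be carried out symbolically), the [m/n] Padé approximant can be computed from the Taylor coefficients of the truncated series by solving a small linear system:

```python
import numpy as np

def pade(c, m, n):
    """[m/n] Pade approximant p(x)/q(x) (with q(0) = 1) from Taylor coefficients c[0..m+n]."""
    c = np.asarray(c, dtype=float)
    get = lambda i: c[i] if i >= 0 else 0.0
    # Denominator: solve sum_{j=1}^{n} c_{m+k-j} q_j = -c_{m+k} for k = 1,...,n.
    A = np.array([[get(m + k - j) for j in range(1, n + 1)] for k in range(1, n + 1)])
    q = np.concatenate(([1.0], np.linalg.solve(A, -c[m + 1:m + n + 1])))
    # Numerator: p_i = sum_{j=0}^{min(i,n)} c_{i-j} q_j for i = 0,...,m.
    p = np.array([sum(get(i - j) * q[j] for j in range(min(i, n) + 1)) for i in range(m + 1)])
    return p, q

# Example: the [2/2] Pade approximant of exp(x) from its Taylor coefficients 1/k!.
p, q = pade([1, 1, 1/2, 1/6, 1/24], 2, 2)
print(p, q)   # approximately [1, 1/2, 1/12] and [1, -1/2, 1/12]
```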
In the next subsection, we introduce our modification of the VIM, where the main idea is to combine the Laplace transform and Adomian polynomials with the original VIM. For more details, see [].
3.2. Adomian—Variational Iteration Method (AVIM)
Considering the non-linear operator , the first few Adomian polynomials are given by
The basic idea of the VIM combined with Adomian polynomials is to approximate the general differential Equation (1). The main idea of our modified method is to construct the correction functional via the VIM, replacing the non-linear term by Adomian polynomials ([]) as
where the first few Adomian polynomials for a non-linear function F are given as above. Therefore, our approximate solution can be written in the form
Choosing the initial approximation arbitrarily, our first approximate solution has the form
And
And so on; we may compute as many terms as needed to reach a sufficiently good approximation. Some examples will be given to illustrate the idea. For more details on using Adomian's techniques for finding eigenvalues, see [].
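The construction of the Adomian polynomials can be automated symbolically. The following sympy sketch (our own illustration; the cubic nonlinearity is chosen only as an example) implements the standard definition A_n = (1/n!) d^n/dλ^n F(Σ_k λ^k y_k) evaluated at λ = 0:

```python
import sympy as sp

def adomian_polynomials(F, n_terms):
    """Adomian polynomials A_0, ..., A_{n_terms-1} for a scalar nonlinearity F(y)."""
    lam = sp.Symbol('lambda')
    y = sp.symbols(f'y0:{n_terms}')                      # components y_0, ..., y_{n-1}
    series = sum(lam**k * y[k] for k in range(n_terms))
    return [sp.expand(sp.diff(F(series), lam, k).subs(lam, 0) / sp.factorial(k))
            for k in range(n_terms)]

# Illustrative cubic nonlinearity F(y) = y^3 (chosen only as an example).
for A in adomian_polynomials(lambda u: u**3, 4):
    print(A)
# y0**3,  3*y0**2*y1,  3*y0*y1**2 + 3*y0**2*y2,  y1**3 + 6*y0*y1*y2 + 3*y0**2*y3
```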
3.3. Lagrange Multiplier for Special Kind of Equations
Consider the following ODE
where and are continuous real-valued functions and is a continuous and differentiable function with . The Bratu, Emden–Fowler, Lane–Emden, Poisson–Boltzmann, Lagerstrom, and many other equations are special cases of (33). To solve Equation (33) by the VIM, we construct the correction functional as follows
Making the above correction functional stationary with respect to , and noticing that , yields
Therefore, the following stationary conditions are obtained as
Therefore, the Lagrange multiplier can be readily identified with
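To illustrate how a correction functional of this type is iterated in practice, the following sympy sketch applies the VIM to the model problem y'' + y = 0 with y(0) = 0, y'(0) = 1, using the multiplier λ(s) = s - x that arises for a second-order operator when the lower-order terms are treated as restricted variations (a model example of ours, not the specific multiplier derived above):

```python
import sympy as sp

x, s = sp.symbols('x s')

def vim_iterate(y0, n_iter):
    """VIM iterations for the model problem y'' + y = 0, y(0) = 0, y'(0) = 1,
    using the multiplier lambda(s) = s - x (restricted-variation choice)."""
    y = y0
    for _ in range(n_iter):
        residual = sp.diff(y, x, 2) + y                        # operator applied to y_n
        correction = sp.integrate((s - x) * residual.subs(x, s), (s, 0, x))
        y = sp.expand(y + correction)                          # y_{n+1} = y_n + correction
    return y

print(vim_iterate(x, 3))   # x - x**3/6 + x**5/120 - x**7/5040, the Taylor polynomial of sin(x)
```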
4. Numerical Results
To show the efficiency of the methods described in the previous sections, we present two examples. The first example is non-linear and will be solved using the VIM and its modifications described in Section 3.1 and Section 3.2; a Sinc solution will also be provided. The second example deals with finding the eigenvalues of the Titchmarsh equation, and some tabulated numerical results are presented. In the Sinc–Galerkin method, we choose d and . The step size and the summation limit N are selected so that the error is asymptotically balanced. Once N is chosen, the step size is determined by . In using Newton's iterative technique, we start with a zero initial guess, and the stopping criterion is .
Example 1.
We consider a version of the Duffing equation []
With conditions
It is known that the exact solution is given by .
For the VIM solution, we start with the correction functional for (40), written as
By (39), the Lagrange multiplier is identified as
In our case,
Using (43), we arrive at
We apply the LVIM, mentioned at the end of Section 3.1, to obtain the following iteration formula:
Given that the initial guess can be chosen arbitrarily, we take , so our first approximation is given by
We use the conditions appearing in Equation (41) to get and . In the same manner, using the obtained values , the second iteration is given by
Applying Laplace transform to both sides of Equation (47), we get
To simplify the matter, we replace
Equation (50) has its Padé approximation in the form:
As , Equation (51) can be written in terms of the variable s:
To finalize the solution, we take the inverse Laplace transform and arrive at
Some numerical values of our approximate solutions using the VIM and the LVIM are shown in Table 1. The obtained results indicate an excellent agreement with the exact solution .
Table 1.
Numerical results for Example 1 by LVIM.
We now calculate the solution of (40) using the improved method obtained by combining the Adomian method with the VIM, which we call the AVIM.
In (40), the non-linear term is . Therefore, the first few Adomian polynomials for are . By the Adomian–variational iteration method (AVIM) and using (45), the correction functional of (40) can be written as
By choosing our initial guess to be , the first approximation is given by
We use (41) to obtain the specific values and . In the same manner, using (55), we calculate the next iteration as
We continue calculating iterations in this way until we reach the fifth one. Because of their length, we do not record these iterations here. The calculations in Table 2 are based on our fifth iteration. Comparing Table 1 and Table 2 shows that the LVIM solution agrees with the exact solution better than the AVIM solution does.
Table 2.
Numerical results for Example 1 by AVIM.
The notation denotes the absolute error of the corresponding method. Referring to the calculations in the three tables, we can say that the LVIM performs much better than the other methods. We also note that the improved AVIM deviates noticeably from the exact solution away from the origin, whereas the Sinc–Galerkin method gave good results over the whole interval .
Example 2. Titchmarsh Equation.
We consider the Titchmarsh model
where m is a non-negative integer. For the Sinc solution, we follow the procedure outlined in Section 2 and find a numerical solution of (57). The first eigenvalues, listed in Table 3, are computed from the matrix system (20) for and three different values of the parameter m. For the solution using the modified variational iteration method (LVIM), we follow the same steps as in the previous example. Solving for λ, we obtain an estimate of the first eigenvalue for different values of the parameter m. These results are shown in Table 4. Similar results can also be found in [].
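As a schematic of how such eigenvalue estimates can be extracted from a matrix system (a hypothetical sketch: the matrices A and B below are placeholders standing in for the assembled Sinc–Galerkin discretization, whose actual structure is given by (20)):

```python
import numpy as np
from scipy.linalg import eig

def smallest_eigenvalues(A, B, k=3):
    """Smallest real eigenvalues of the generalized problem A c = lam * B c."""
    vals = eig(A, B, right=False)
    vals = np.sort(vals[np.abs(vals.imag) < 1e-10].real)
    return vals[:k]

# Placeholder matrices only: in practice A and B would come from the discretization (20).
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A, B = M @ M.T + 6.0 * np.eye(6), np.eye(6)
print(smallest_eigenvalues(A, B))
```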
Table 3.
Numerical results for Example 1 by Sinc–Galerkin.
Table 4.
An estimate to the first eigenvalue of Equation (57).
The first eigenvalue of the comparison equation is . Table 3 gives an estimate for the least eigenvalue satisfying , which is consistent with the results obtained in []. The eigenfunction corresponding to the first eigenvalue of Equation (57) when was computed at some points of its domain; the results are listed in Table 5.
Table 5.
Comparison of solutions corresponding to the first eigenvalue of Equation (57).
Author Contributions
All authors contributed equally in this work. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Acknowledgments
We wish to thank Bozidar Ivankovic for critically reading the manuscript and providing insight and expertise that were very useful in revising the paper. Our appreciation also goes to our university (JUST), which supported the second author in completing the master's thesis from which some of the results were taken [].
Conflicts of Interest
The authors declare no conflict of interest.
References
- Chanane, B. Computing the eigenvalues of singular Sturm-Liouville problems using the regularized sampling method. Appl. Math. Comput. 2007, 184, 972–978. [Google Scholar] [CrossRef]
- Bender, C.M.; Orszag, S.A. Advanced Mathematical Methods for Scientists and Engineers; McGraw-Hill International Editions; McGraw-Hill: New York, NY, USA, 1987. [Google Scholar]
- Celik, I. Approximate calculation of eigenvalues with the method of weighted residuals-collocation method. Appl. Math. Comput. 2005, 160, 401–410. [Google Scholar]
- Guseinov, G.S.; Karaca, I.Y. Instability intervals of a Hill’s equation with piecewise constant and alternating coefficient. Comput. Math. Appl. 2004, 47, 319–326. [Google Scholar] [CrossRef]
- El-Gamel, M.; Zayed, A.I. Sinc-Galerkin method for solving nonlinear boundary-value problems. Comput. Math. Appl. 2004, 48, 1285–1298. [Google Scholar] [CrossRef]
- Saadatmandi, A.; Razzaghi, M.; Dehghan, M. Sinc-Galerkin solution for nonlinear two-point boundary value problems with applications to chemical reactor theory. Math. Comput. Model. 2005, 42, 1237–1244. [Google Scholar] [CrossRef]
- Stenger, F. A Sinc-Galerkin method of solution of boundary value problems. Math. Comput. 1979, 33, 85–109. [Google Scholar]
- Mohsen, A.; El-Gamel, M. On the Galerkin and collocation methods for two-point boundary value problems using sinc bases. Comput. Math. Appl. 2008, in press. [Google Scholar] [CrossRef]
- Alquran, M.; Al-Khaled, K. Approximations of Sturm-Liouville eigenvalues using sinc-Galerkin and differential transform methods. Appl. Appl. Math. 2010, 5, 128–147. [Google Scholar]
- He, J.H. Variational iteration method—A kind of non-linear analytical technique: Some examples. Int. J. Non-Linear Mech. 1999, 34, 699–708. [Google Scholar] [CrossRef]
- He, J.H. Variational iteration method—Some recent results and new interpretations. J. Comput. Appl. Math. 2007, 207, 3–17. [Google Scholar] [CrossRef]
- He, J.H.; Wu, X.H. Variational iteration method: New development and applications. Comput. Math. Appl. 2007, 54, 881–894. [Google Scholar] [CrossRef]
- Abassy, T.A.; El-Tawil, M.A.; El Zoheiry, H. Toward a modified variational iteration method. J. Comput. Appl. Math. 2007, 207, 137–147. [Google Scholar] [CrossRef]
- Jin, L. Application of modified variational iteration method to the Bratu-type problems. Int. J. Contemp Math. Sci. 2010, 5, 153–158. [Google Scholar]
- Islam, S.U.; Haq, S.; Ali, J. Numerical solution of special 12th-order boundary value problems using differential transform method. Commun. Nonlinear Sci. Numer. Simul. 2009, in press. [Google Scholar] [CrossRef]
- Abbasbandy, S. A new application of He’s variational iteration method for quadratic Riccati differential equation by using Adomian’s polynomials. J. Comput. Appl. Math. 2007, 207, 59–63. [Google Scholar] [CrossRef]
- Wazwaz, A.M. The variational iteration method for solving two forms of Blasius equation on a half-infinite domain. Appl. Math. Comput. 2007, 188, 485–491. [Google Scholar] [CrossRef]
- Mukhtarov, O.S.; Yücel, M. A Study of the Eigenfunctions of the Singular Sturm-Liouville Problem Using the Analytical Method and the Decomposition Technique. Mathematics 2020, 8, 415. [Google Scholar] [CrossRef]
- Qadir, R.R.; Jwamer, K.H.F. Refinement Asymptotic Formulas of Eigenvalues and Eigenfunctions of a Fourth Order Linear Differential Operator with Transmission Condition and Discontinuous Weight Function. Symmetry 2019, 11, 1060. [Google Scholar] [CrossRef]
- Khashshan, M.M.; Syam, M.I.; Al Mokhmari, A. A Reliable Method for Solving Fractional Sturm-Liouville Problems. Mathematics 2018, 6, 176. [Google Scholar] [CrossRef]
- Alsaedi, A.; Alsulami, M.; Srivastava, H.M.; Ahmad, B.; Ntouyas, S.K. Existence Theory for Nonlinear Third-Order Ordinary Differential Equations with Nonlocal Multi-Point and Multi-Strip Boundary Conditions. Symmetry 2019, 11, 281. [Google Scholar] [CrossRef]
- Stenger, F. Numerical Methods Based on Sinc and Analytic Functions; Springer: New York, NY, USA, 1993. [Google Scholar]
- Eggert, N.; Jarratt, M.; Lund, J. Sinc function computation of the eigenvalues of Sturm-Liouville problems. J. Comput. Phys. 1987, 69, 209–229. [Google Scholar] [CrossRef]
- McArthur, K.M. A Collocative Variation of the Sinc-Galerkin Method for Second Order Boundary Value Problems. In Computation and Control; Progress in Systems and Control Theory; Birkhäuser: Boston, MA, USA, 1989; Volume 1. [Google Scholar]
- Hazaimeh, A. Solution of Sturm-Liouville Differential Equation via the Use of Variational Iteration Method. Master’s Thesis, Jordan University of Science and Technology, Ar-Ramtha, Jordan, May 2020. [Google Scholar]
- Adomian, G. A review of the decomposition method and some recent results for nonlinear equations. Math. Comput. Model. 1990, 13, 17–43. [Google Scholar] [CrossRef]
- Singh, N.; Kumar, M. Adomian decomposition method for computing eigen-values of singular Sturm-Liouville problems. Natl. Acad. Sci. Lett. 2013, 36, 311–318. [Google Scholar] [CrossRef]