Math. Comput. Appl. 2018, 23(4), 73; https://doi.org/10.3390/mca23040073
Iterated Petrov–Galerkin Method with Regular Pairs for Solving Fredholm Integral Equations of the Second Kind
Departamento de Matemática, Facultad de Ingeniería, Universidad de Buenos Aires, C1063ACV Buenos Aires, Argentina
Author to whom correspondence should be addressed.
Received: 22 October 2018 / Accepted: 12 November 2018 / Published: 13 November 2018
In this work, we obtain approximate solutions for Fredholm integral equations of the second kind by means of the Petrov–Galerkin method, choosing “regular pairs” of subspaces, which are simply characterized by the positive definiteness of a correlation matrix. This choice guarantees the solvability and numerical stability of the approximation scheme in an easy way, and the selection of orthogonal bases for the subspaces makes the calculations quite simple. Afterwards, we explore an interesting phenomenon called “superconvergence”, observed in the 1970s by Sloan: once the approximations to the solution of the operator equation are obtained, the convergence can be notably improved by means of an iteration of the method. We illustrate both approximation procedures with two numerical examples: one for a continuous kernel, and the other for a weakly singular one.
Keywords: Fredholm integral equations; numerical solutions; Petrov–Galerkin method; regular pairs; iterated methods
Fredholm equations of the second kind are integral equations of the form
$$u(x) - \int_0^1 k(x,y)\,u(y)\,dy = f(x), \qquad x \in [0,1], \tag{1}$$
with the unknown function $u$ in a Banach space $X$. The kernel $k$ and the right-hand side $f$ are given functions.
They appear in different areas of applied mathematics, sometimes as equivalent formulations of boundary value problems for ordinary differential equations, and many problems of mathematical physics are modelled with Fredholm integral equations with different kernels (see, for example, [1,2,3]).
The equation may be written
$$u - Ku = f, \tag{2}$$
by defining the operator $K: X \to X$, $(Ku)(x) = \int_0^1 k(x,y)\,u(y)\,dy$.
If the kernel $k$ is such that the operator $K$ is bounded, a sufficient condition to guarantee the existence and uniqueness of a solution of Equation (2) is that $\|K\| < 1$ (see , Theorem 2.14, p. 23).
The Petrov–Galerkin method is a projection method often proposed to find numerical approximate solutions to this type of integral equation. The idea is to choose appropriate sequences of finite-dimensional subspaces $X_n$ and $Y_n$ of $X$, the trial and test subspaces, respectively, onto which the unknown and the data are projected.
In the case of $X$ being a Hilbert space with inner product $\langle \cdot,\cdot \rangle$, the Petrov–Galerkin method looks for $u_n \in X_n$ such that
$$\langle u_n - K u_n, v \rangle = \langle f, v \rangle \quad \text{for all } v \in Y_n, \tag{3}$$
and, as $X_n$ and $Y_n$ are finite-dimensional subspaces, solving Equation (3) reduces to solving a linear algebraic system of equations represented by a matrix.
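The reduction to a linear system can be sketched as follows. All ingredients here — the bases, the kernel, and the quadrature — are illustrative assumptions, not the subspaces used later in the paper:

```python
import numpy as np

# Hedged sketch of the Petrov-Galerkin linear system on [0, 1]:
# trial basis phi_j, test basis psi_i (both illustrative choices),
# inner products approximated by the midpoint rule with m nodes.
def petrov_galerkin(k, f, trial, test, m=400):
    h = 1.0 / m
    x = h * (np.arange(m) + 0.5)                 # quadrature nodes
    Phi = np.array([p(x) for p in trial])        # trial basis sampled at nodes
    Psi = np.array([q(x) for q in test])         # test basis sampled at nodes
    Kmat = k(x[:, None], x[None, :]) * h
    KPhi = Phi @ Kmat.T                          # rows: (K phi_j) at the nodes
    A = (Phi - KPhi) @ Psi.T * h                 # A[j, i] = <phi_j - K phi_j, psi_i>
    b = Psi @ f(x) * h                           # b[i] = <f, psi_i>
    c = np.linalg.solve(A.T, b)                  # coefficients of u_n in the trial basis
    return x, c @ Phi                            # u_n = sum_j c_j phi_j at the nodes

# Toy run: kernel x*y, f(x) = 2x/3, exact solution u(x) = x, trial = test = {1, x}.
basis = [lambda x: np.ones_like(x), lambda x: x]
x, un = petrov_galerkin(lambda x, y: x * y, lambda x: 2 * x / 3, basis, basis)
```

Because the exact solution already lies in the trial space, `un` reproduces it up to quadrature error.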
In  it is proved that, if $K$ is a compact linear operator not having 1 as an eigenvalue, and the pair $(X_n, Y_n)$ is a regular pair—a concept to be defined in the next section—Equation (3) has a unique solution $u_n$, which satisfies
$$\|u - u_n\| \le C \inf_{\chi \in X_n} \|u - \chi\|, \tag{4}$$
where the constant $C$ does not depend on $n$.
Solvability and numerical stability of the approximation scheme are, in this way, assured, and the accuracy of the approximation to the unique solution of Equation (2) does not depend formally on $Y_n$, as can be noted in Equation (4). The goal is then to choose test subspaces that are easy to handle, while the quality of convergence of the method is preserved.
In addition, the convergence can even be improved by means of an iteration of the method: once the approximations $u_n$ are obtained, a new sequence of approximate solutions can be built by means of a simple procedure (see [6,7,8]):
$$u_n^{it} = f + K u_n. \tag{5}$$
In this work, we choose pairs of simple subspaces generated by Legendre polynomials and show the goodness of the approximations in two numerical examples with known solution, one of them having a singular kernel. We then improve the convergence by means of an iteration of the method and show why the approximation is better, even for small values of $n$.
Let $X$ be a Hilbert space, $\|\cdot\|$ the associated norm, and $K: X \to X$ a compact linear operator. It is shown in  that, if 1 is not an eigenvalue of $K$, there exists a solution to Equation (2) for any given $f$, and it is unique. We are interested in finding a good approximation to the solution $u$ of Equation (2).
For each $n$, let us consider subspaces $X_n, Y_n \subset X$, with $\dim X_n = \dim Y_n < \infty$. The Petrov–Galerkin method for Equation (2) is a numerical method to find $u_n \in X_n$ satisfying Equation (3).
For the method to be useful, it is necessary to establish conditions under which Equation (3) has a unique solution $u_n$ and $u_n \to u$, the unique solution of Equation (2).
It is easy to show that the condition
$$x \in X_n \ \text{ and } \ \langle x, y \rangle = 0 \ \text{ for all } y \in Y_n \ \Longrightarrow \ x = 0 \tag{6}$$
ensures the existence of a unique solution for Equation (3). From  (p. 243), convergence can be expected only if
$$\lim_{n\to\infty} \operatorname{dist}(x, X_n) = 0 \ \text{ and } \ \lim_{n\to\infty} \operatorname{dist}(x, Y_n) = 0 \ \text{ for all } x \in X, \tag{7}$$
so, from now on, the sequences of subspaces $\{X_n\}$ and $\{Y_n\}$ are both chosen to verify this denseness condition.
Following , and denoting by $\{X_n\}$ and $\{Y_n\}$ the sequences of subspaces, the pair $(\{X_n\}, \{Y_n\})$ is said to be “regular” if there exists a linear surjective operator satisfying
It is easy to show that the surjectivity of the operator and condition (i) assure the condition of Equation (6).
From  (p. 411), the following theorem summarizes the conditions for the existence and uniqueness of the solutions of Equation (3) and their convergence to the solution of Equation (2):
Let $X$ be a Hilbert space and $K$ a compact linear operator not having 1 as an eigenvalue. Suppose $X_n$ and $Y_n$ are finite-dimensional subspaces of $X$, with $\dim X_n = \dim Y_n$, verifying that $(X_n, Y_n)$ is a regular pair and, for each $x \in X$, there exist sequences $x_n \in X_n$ and $y_n \in Y_n$ so that $x_n \to x$ and $y_n \to x$. Then, there exists $N$ such that, for $n \ge N$, Equation (3) has a unique solution $u_n$ for any given $f$, which satisfies $\|u - u_n\| \le C \inf_{\chi \in X_n} \|u - \chi\|$, where $u$ is the unique solution of Equation (2) and $C$ is a constant not depending on $n$.
From , the characterization of a regular pair is simple by means of the so-called “correlation matrix”. Let $\{\phi_i\}_{i=1}^{n}$ and $\{\psi_j\}_{j=1}^{n}$ be bases of $X_n$ and $Y_n$, respectively, and define the matrices $A = (\langle \phi_i, \phi_j \rangle)$, $B = (\langle \psi_i, \psi_j \rangle)$, the correlation matrix $R = (\langle \phi_i, \psi_j \rangle)$, and $S = \tfrac{1}{2}(R + R^{T})$, for $i, j = 1, \dots, n$.
Note that, in the real case, $A$ and $B$ are positive definite and $S$ is the symmetric part of the correlation matrix. We have proven (see ) the following:
If $S$ is positive definite, $(X_n, Y_n)$ is a regular pair.
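As an illustration of the criterion, one can assemble the matrix of inner products $\langle \phi_i, \psi_j \rangle$ numerically and test the positive definiteness of its symmetric part. The bases and the quadrature below are illustrative assumptions:

```python
import numpy as np

# Hedged sketch: build the correlation-type matrix <phi_i, psi_j> for two
# small bases on [0, 1] (midpoint-rule inner products, illustrative bases)
# and test whether its symmetric part is positive definite.
def symmetric_part_is_pd(trial, test, m=400):
    h = 1.0 / m
    x = h * (np.arange(m) + 0.5)
    Phi = np.array([p(x) for p in trial])
    Psi = np.array([q(x) for q in test])
    C = Phi @ Psi.T * h                    # C[i, j] = <phi_i, psi_j>
    S = (C + C.T) / 2                      # symmetric part of the correlation matrix
    return bool(np.all(np.linalg.eigvalsh(S) > 0))

# Monomials {1, x} against themselves: the correlation matrix is then a Gram
# matrix, hence positive definite, so the pair qualifies.
print(symmetric_part_is_pd([lambda x: np.ones_like(x), lambda x: x],
                           [lambda x: np.ones_like(x), lambda x: x]))  # prints True
```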
For conciseness, we assume from here on.
Let us consider .
For the interval , is the subspace of polynomials of degree less than on each subinterval and . Since every continuous function with compact support can be approximated by step functions on subintervals of the form , and these are dense in , the denseness condition of Equation (7) is satisfied.
As the basis of , we will choose Legendre polynomials of degree less than , adapted to each of the subintervals : with , the Legendre polynomial of degree on and the characteristic function of the subinterval .
We rename to simplify the notation and choose the sequences of subspaces and , with .
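A sketch of such a piecewise-Legendre construction, restricted for brevity to degrees 0 and 1 on the dyadic subintervals of $[0,1]$ (the function names are illustrative, and the last subinterval is taken half-open):

```python
import numpy as np

# Hedged sketch: a piecewise-Legendre basis on the 2**level dyadic
# subintervals of [0, 1], with degrees 0 and 1 only (i.e. P0 = 1 and the
# shifted Legendre polynomial P1 = 2t - 1 mapped to each subinterval).
def dyadic_legendre_basis(level, degree=1):
    n = 2 ** level
    funcs = []
    for i in range(n):
        a, b = i / n, (i + 1) / n
        for d in range(degree + 1):
            # default arguments freeze a, b, d for each closure
            def phi(x, a=a, b=b, d=d):
                t = (x - a) / (b - a)                  # map subinterval to [0, 1]
                inside = (x >= a) & (x < b)
                vals = np.ones_like(x) if d == 0 else 2 * t - 1
                return np.where(inside, vals, 0.0)
            funcs.append(phi)
    return funcs

basis = dyadic_legendre_basis(1)   # 4 functions on the 2 subintervals of level 1
```

Within each subinterval the constant and linear pieces are orthogonal, and functions supported on different subintervals are trivially orthogonal, which is what keeps the resulting matrices sparse and easy to assemble.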
Note that the condition of Equation (6), which assures the uniqueness of the solution of Equation (3) for each $n$, is fulfilled.
Indeed, suppose that , for between and satisfies that and for every ; then if is even or if is odd, which is impossible, since and for every and every .
Renaming the elements of the basis as for odd, for even and , it is easy to show that is a regular pair, since is a matrix with positive definite blocks on its principal diagonal and 0s everywhere else (for details, see ).
Once the approximations $u_n$ are obtained, an almost natural iteration procedure is possible to obtain new approximations of the exact solution $u$. Since the equation being solved is $u - Ku = f$, or $u = f + Ku$, we can define $u_n^{it} = f + K u_n$. This first iteration, applied to the Galerkin method, has been studied since the 1970s because, under appropriate conditions on the subspaces, it reveals an interesting phenomenon called “superconvergence” (see [6,11], for instance), as the order of convergence can be notably improved.
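A hedged sketch of this iteration, applying $K$ by the midpoint rule to an artificially coarsened approximation of the solution (the kernel and data are illustrative, not the paper's examples):

```python
import numpy as np

# Hedged sketch of the Sloan iteration u_it = f + K u_n, with K applied via
# the midpoint rule; u_n below is a deliberately crude approximation,
# illustrative only, of the exact solution u(x) = x.
def sloan_iterate(k, f, x, un):
    h = x[1] - x[0]
    return f(x) + (k(x[:, None], x[None, :]) * h) @ un

# Toy data: kernel x*y, f(x) = 2x/3, exact solution u(x) = x.
m = 200
x = (np.arange(m) + 0.5) / m
un = np.round(x * 4) / 4          # u rounded to quarters: a crude approximation
uit = sloan_iterate(lambda x, y: x * y, lambda x: 2 * x / 3, x, un)
```

One application of $K$ smooths the piecewise-constant error of `un`, so `uit` is markedly closer to the exact solution; this averaging effect is the mechanism behind the improved convergence order.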
In  (p. 42), the existence of a unique solution for Equation (5) and the improvement of the order of convergence of the iterated approximation for any projection method are guaranteed.
In  (p. 419), the superconvergence in the Petrov–Galerkin scheme applied to Fredholm equations of the second kind is explained and, under the same conditions as in Theorem 1 above, a theorem establishes that $u_n^{it}$ satisfies the error bound of Equation (8) for the unique solution $u$ of Equation (1), showing that the improvement of the order of convergence given by the iteration procedure is due to the approximation of the kernel by elements of the test subspaces $Y_n$. In our work, the elements of the test subspaces are piecewise constant functions on the dyadic subintervals.
We will now follow an idea from  (p. 67). For a Lipschitz function $g$ on an interval, with Lipschitz constant $L$, let $g_h$ be the piecewise constant function defined by $g_h(t) = g(t_i)$ for $t \in [t_i, t_{i+1})$, with $\{t_i\}$ a regular partition of the interval with norm $h$.
For any $t \in [t_i, t_{i+1})$, $|g(t) - g_h(t)| = |g(t) - g(t_i)| \le L\,|t - t_i| \le L\,h$, so $\|g - g_h\|_{\infty} \le L\,h$.
If the kernel $k$ is such that $k(x,\cdot)$ is a Lipschitz function with Lipschitz constant $L$ for each $x$, then the piecewise-constant approximant of $k(x,\cdot)$ on the partition is within $L\,h$ of $k(x,\cdot)$ for each $x$ and, consequently, $\sup_x \operatorname{dist}(k(x,\cdot), Y_n) \le L\,h$.
Moreover, if , from Equation (8), , and the approximation is actually improved.
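The bound $\|g - g_h\|_\infty \le L\,h$ is easy to check numerically; in this sketch, $g(t) = |t - 0.3|$ (Lipschitz with constant $L = 1$ on $[0,1]$) is an illustrative choice:

```python
import numpy as np

# Hedged sketch: for a Lipschitz g with constant L, the piecewise-constant
# interpolant on a regular partition with mesh h satisfies |g - g_h| <= L*h.
def piecewise_constant(g, t, n):
    h = 1.0 / n
    left = np.floor(t / h) * h              # left endpoint of t's subinterval
    return g(np.minimum(left, 1.0 - h))     # clamp t = 1 into the last cell

g = lambda t: np.abs(t - 0.3)               # Lipschitz constant L = 1
t = np.linspace(0.0, 1.0, 5001)
for n in (8, 64, 512):
    err = np.max(np.abs(g(t) - piecewise_constant(g, t, n)))
    assert err <= 1.0 / n + 1e-12           # the L*h bound with L = 1, h = 1/n
```

The observed error halves each time the mesh is halved, matching the first-order estimate used above for the kernel sections $k(x,\cdot)$.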
We will offer two numerical examples of the goodness of the Petrov–Galerkin method and iterated Petrov–Galerkin method with regular pairs, applied to Fredholm integral equations of the second kind: one with a continuous kernel, and the other with a weakly singular kernel (, p. 29; , p. 7).
The kernel is said to be weakly singular if it verifies
$$|k(x,y)| \le M\,|x-y|^{-\alpha}, \qquad x \ne y, \tag{9}$$
with $0 \le \alpha < 1$ and $M > 0$.
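Although such a kernel blows up on the diagonal, it remains integrable in each variable, which is the key to the compactness results quoted below. A quick numerical check, with the illustrative choice $\alpha = 1/2$:

```python
import numpy as np

# Hedged sketch: a weakly singular kernel like |x - y|**(-1/2) blows up on
# the diagonal but is still integrable in y for each fixed x. Midpoint-rule
# nodes never coincide with y = x, so the sum below is well defined.
def row_integral(x, alpha=0.5, m=200000):
    h = 1.0 / m
    y = h * (np.arange(m) + 0.5)
    return float(np.sum(np.abs(x - y) ** (-alpha)) * h)

# At x = 1/2 the exact value is 2 * integral_0^{1/2} t^{-1/2} dt = 2*sqrt(2).
val = row_integral(0.5)   # close to 2*sqrt(2), approximately 2.83
```

The finiteness of such integrals, uniformly in $x$, is precisely what the compactness theorems cited below exploit.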
For both a continuous kernel and a weakly singular one, $K$ is a compact operator (see , p. 28, Theorem 2.28; and , p. 582, Theorem 1, respectively).
We have chosen “regular pairs” of subspaces, and orthogonal bases for them, reducing the difficulty of the calculations.
We worked with and , with for odd, for even and .
Note that the trial space is generated by piecewise constant and piecewise linear orthogonal functions; in , only piecewise linear (not orthogonal) functions are used.
3.1. Numerical Examples
3.1.1. Example 1
The equation (10), with , has the exact solution .
The linear operator $K$ is compact because the kernel is continuous, and , thus 1 is not an eigenvalue of $K$ and convergence of the Petrov–Galerkin method to the (unique) exact solution is guaranteed.
In Figure 1a we plot the exact solution together with the approximations and . The quadratic errors with respect to the exact solution are, respectively, 157, , and .
In Figure 1b, the plots of the exact solution together with and are shown. The quadratic errors with respect to are, in this case, , and .
Note that the plots of the iterated approximations and the real solution are indistinguishable.
All the approximations were obtained by means of ad hoc designed algorithms, implemented with Wolfram Mathematica® 9.
3.1.2. Example 2
The equation (11), with , has the exact solution .
The kernel is weakly singular, with , according to (9).
Theorem 1 from  (p. 582) guarantees the compactness of the operator , because the necessary and sufficient conditions are verified: and for .
It is , thus 1 is not an eigenvalue of $K$ and convergence of the method to the (unique) exact solution is guaranteed.
In Figure 2a we plot the exact solution together with the approximations and obtained with Mathematica®, and in Figure 2b, the exact solution together with and , the last one being practically indistinguishable from the exact solution. By comparing quadratic errors, the improvement of the approximation can be appreciated: for , and .
The Petrov–Galerkin method is applied by choosing appropriate subspaces for the projections. The choice of a “regular pair” of subspaces (easily characterized by the positive definiteness of a correlation matrix), and of orthogonal bases for them, reduces the difficulty of the calculations. Iteration is shown to be a very simple way of improving convergence remarkably, and better orders of convergence can be shown, even for a weakly singular kernel. We note that, in this second numerical example, we had difficulties with the smooth implementation of the computational algorithms because of the improper integrals involved. However, not many computations were necessary, since with we obtained very good results. In [14,15], the authors propose discrete methods to deal with the numerical difficulties arising from the calculation of the improper integrals involved in the case of a weakly singular kernel.
It is appropriate to point out that, in recent papers, different discrete Galerkin approaches were proposed to solve integral equations. In particular, meshless discrete Galerkin methods were successfully developed for solving Fredholm and Hammerstein integral equations for various bases. See, for example,  for an effective and stable method to estimate the solution to Hammerstein integral equations with free shape parameter radial basis functions, constructed on scattered points; [17,18], for effective computational meshless methods for solving Fredholm integral equations of the second kind with logarithmic and weakly singular kernels, using radial basis functions, meshless product integration and collocation methods; and [19,20], for efficient meshless methods for solving non-linear weakly singular Fredholm integral equations, combining discrete collocation method with locally supported radial basis functions and thin-plate splines.
Finally, a plausible line of future work is to explore and take advantage of some of these discrete approximation methods to avoid the computational difficulties arising from the improper integrals when solving Fredholm integral equations of the second kind with weakly singular kernels.
Conceptualization, M.I.T.; formal analysis, S.A.S. and M.I.T.; investigation, S.A.S. and M.I.T.; methodology, M.I.T.; project administration, M.I.T.; software, S.A.S. and M.I.T.; supervision, M.I.T.; validation, S.A.S. and M.I.T.; visualization, S.A.S.; writing—original draft, S.A.S.; writing—review and editing, S.A.S. and M.I.T.
This research was partially supported by Universidad de Buenos Aires, UBACyT 2018-2021, 20020170100350BA.
Conflicts of Interest
The authors declare no conflict of interest.
- Lonseth, A. Sources and Applications of Integral Equations. SIAM Rev. 1977, 19, 241–278. [Google Scholar] [CrossRef]
- Kovalenko, E.V. Some approximate methods of solving integral equations of mixed problems. J. Appl. Math. Mech. 1989, 53, 85–92. [Google Scholar] [CrossRef]
- Assari, P. Thin plate spline Galerkin scheme for numerically solving nonlinear weakly singular Fredholm integral equations. Appl. Anal. 2018, 1–21. [Google Scholar] [CrossRef]
- Kress, R. Linear Integral Equations, 3rd ed.; Springer: New York, NY, USA, 2014. [Google Scholar]
- Chen, Z.; Xu, Y. The Petrov–Galerkin and iterated Petrov–Galerkin methods for second-kind integral equations. SIAM J. Numer. Anal. 1998, 35, 406–434. [Google Scholar] [CrossRef]
- Chandler, G. Superconvergence of Numerical Solutions to Second Kind Integral Equations. Ph.D. Thesis, Australian National University, Canberra, Australia, September 1979. [Google Scholar]
- Sloan, I.H. Improvement by Iteration for Compact Operator Equations. Math. Comput. 1976, 30, 756–764. [Google Scholar] [CrossRef]
- Sloan, I.H. The iterated Galerkin method for integral equations of the second kind. In Miniconference on Operator Theory and Partial Differential Equations; Centre for Mathematics and its Applications, Mathematical Sciences Institute, The Australian National University: Canberra, Australia, 1984; pp. 153–161. [Google Scholar]
- Chen, Z.; Micchelli, C.A.; Xu, Y. The Petrov–Galerkin method for second kind integral equations II: Multiwavelet schemes. Adv. Comput. Math. 1997, 7, 199–233. [Google Scholar] [CrossRef]
- Orellana Castillo, A.; Seminara, S.; Troparevsky, M.I. Regular pairs for solving Fredholm integral equations of the second kind. Poincare J. Anal. Appl. 2018, accepted, in press. [Google Scholar]
- Sloan, I.H. Superconvergence. In Numerical Solution of Integral Equations; Goldberg, M., Ed.; Plenum Press: New York, NY, USA, 1990; pp. 35–70. [Google Scholar]
- Vainikko, G. Weakly Singular Integral Equations. Available online: http://math.tkk.fi/opetus/funasov/2006/WSIElectures.pdf (accessed on 12 November 2018).
- Graham, I.G.; Sloan, I.H. On the Compactness of Certain Integral Operators. J. Math. Anal. Appl. 1979, 68, 580–594. [Google Scholar] [CrossRef]
- Chen, Z.; Xu, Y.; Zhao, J. The Discrete Petrov–Galerkin Method for Weakly Singular Integral Equations. J. Integral Equ. Appl. 1999, 11, 1–35. [Google Scholar] [CrossRef]
- Chen, Z.; Micchelli, C.A.; Xu, Y. Discrete wavelet Petrov–Galerkin methods. Adv. Comput. Math. 2002, 16, 1–28. [Google Scholar] [CrossRef]
- Assari, P.; Dehghan, M. A Meshless Discrete Galerkin Method Based on the Free Shape Parameter Radial Basis Functions for Solving Hammerstein Integral Equations. Numer. Math. Theory Methods Appl. 2018, 11, 540–568. [Google Scholar] [CrossRef]
- Assari, P.; Adibi, H.; Dehghan, M. A meshless discrete Galerkin (MDG) method for the numerical solution of integral equations with logarithmic kernels. J. Comput. Appl. Math. 2014, 267, 160–181. [Google Scholar] [CrossRef]
- Assari, P.; Adibi, H.; Dehghan, M. The numerical solution of weakly singular integral equations based on the meshless product integration (MPI) method with error analysis. Appl. Numer. Math. 2014, 81, 76–93. [Google Scholar] [CrossRef]
- Assari, P.; Dehghan, M. The numerical solution of two-dimensional logarithmic integral equations on normal domains using radial basis functions with polynomial precision. Eng. Comput. 2017, 33, 853–870. [Google Scholar] [CrossRef]
- Assari, P.; Asadi-Mehregan, F.; Dehghan, M. On the numerical solution of Fredholm integral equations utilizing the local radial basis function method. Int. J. Comput. Math. 2018, 1–28. [Google Scholar] [CrossRef]
Figure 1. (a) Approximations to exact solution of Equation (10) before iteration: purple for , blue for , green for , yellow for , and, dashed in red, the exact solution. The quadratic errors with respect to the exact solution are, respectively, 157, , and ; (b) the approximations for and 3 after iteration are graphically indistinguishable from the exact solution of Equation (10). The quadratic errors with respect to are, in this case, , and .
Figure 2. (a) Approximations to exact solution of Equation (11) before iteration: purple for , blue for , green for , yellow for , orange for , and, dashed in red, the exact solution; (b) the same approximations after the iteration. For , the approximation is graphically indistinguishable from the exact solution of Equation (11), and the quadratic error is reduced from to .
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).