Iterated Petrov–Galerkin Method with Regular Pairs for Solving Fredholm Integral Equations of the Second Kind

In this work we obtain approximate solutions for Fredholm integral equations of the second kind by means of the Petrov–Galerkin method, choosing "regular pairs" of subspaces, {X_n, Y_n}, which are simply characterized by the positive definiteness of a correlation matrix. This choice guarantees the solvability and numerical stability of the approximation scheme in a straightforward way, and the selection of orthogonal bases for the subspaces makes the calculations quite simple. Afterwards, we explore an interesting phenomenon called "superconvergence", observed in the 1970s by Sloan: once the approximations u_n ∈ X_n to the solution of the operator equation u − Ku = g are obtained, the convergence can be notably improved by means of an iteration of the method, u*_n = g + Ku_n. We illustrate both approximation procedures with two numerical examples: one with a continuous kernel, and the other with a weakly singular one.


Introduction
Fredholm equations of the second kind are integral equations of the form

u(t) − ∫_a^b k(t, s) u(s) ds = g(t), t ∈ [a, b], (1)

with u an unknown function in a Banach space X. The kernel k : [a, b] × [a, b] → R and the right-hand side g : [a, b] → R are given functions. They appear in different areas of applied mathematics, sometimes as equivalent formulations of boundary value problems for ordinary differential equations, and many problems of mathematical physics are modelled with Fredholm integral equations with different kernels (see, for example, [1][2][3]).
The equation may be written as u − Ku = g (2) by defining the operator K : X → X, K(u)(·) = ∫_a^b k(·, s) u(s) ds. If the kernel k(t, s) is such that the operator K is bounded, a sufficient condition to guarantee the existence and uniqueness of a solution of Equation (2) is that ‖K‖ < 1 (see [4], Theorem 2.14, p. 23).
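When ‖K‖ < 1, the solution can also be reached by successive approximations u_{k+1} = g + Ku_k (a Neumann-series argument). The following is a minimal Python sketch under assumptions of our own: a hypothetical kernel k(t, s) = ts/4 on [0, 1] (not one of the paper's examples) and g chosen so that the exact solution is u(t) = t.

```python
import numpy as np

# Successive approximation u_{k+1} = g + K u_k for a toy kernel
# (hypothetical example: k(t,s) = t*s/4 on [0,1]; with g(t) = 11t/12
#  the exact solution is u(t) = t, and ||K|| < 1 ensures convergence).
t = np.linspace(0.0, 1.0, 201)                    # quadrature grid
w = np.full_like(t, t[1] - t[0])
w[0] = w[-1] = (t[1] - t[0]) / 2                  # trapezoid weights
Kmat = np.outer(t, t) / 4.0 * w                   # discretised integral operator
g = 11.0 * t / 12.0

u = np.zeros_like(t)
for _ in range(50):
    u = g + Kmat @ u                              # Neumann-type iteration

print(np.max(np.abs(u - t)))                      # small (quadrature error only)
```

Since ‖K‖ is tiny here, a handful of iterations already suffices; the loop is run longer only for safety.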
Petrov–Galerkin is a projection method often used to find approximate numerical solutions to this type of integral equation. The idea is to choose appropriate sequences of finite-dimensional subspaces of X, {X_n}_{n∈N} and {Y_n}_{n∈N}, the trial and test subspaces respectively, onto which the unknown u and the data g are projected. When X is a Hilbert space with inner product ⟨·, ·⟩, the Petrov–Galerkin method looks for u_n ∈ X_n such that

⟨u_n − Ku_n, v⟩ = ⟨g, v⟩ ∀v ∈ Y_n (3)

and, since X_n and Y_n are subspaces of dimension d_n < ∞, solving Equation (3) reduces to solving a linear algebraic system represented by a d_n × d_n matrix.
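The reduction to a d_n × d_n system can be sketched concretely. The Python sketch below again assumes the hypothetical kernel k(t, s) = ts/4 with exact solution u(t) = t; the trial and test bases (polynomials of degree ≤ 1 against two piecewise constants) are illustrative choices of ours, not the subspaces used later in the paper.

```python
import numpy as np

# Petrov-Galerkin for u - Ku = g on [0,1] (toy problem: k(t,s) = t*s/4,
# g(t) = 11t/12, exact solution u(t) = t).  Trial space X_n = span{1, t},
# test space Y_n = piecewise constants on [0,1/2] and [1/2,1].
s = np.linspace(0.0, 1.0, 401)
w = np.full_like(s, s[1] - s[0])
w[0] = w[-1] = (s[1] - s[0]) / 2                    # trapezoid weights

phi = [np.ones_like(s), s]                          # trial basis
psi = [(s < 0.5).astype(float),
       (s >= 0.5).astype(float)]                    # test basis
Kphi = [(np.outer(s, s) / 4.0 * w) @ p for p in phi]  # K applied to trial basis
g = 11.0 * s / 12.0

# Entries <phi_i - K phi_i, psi_j> and <g, psi_j> of the 2x2 system
A = np.array([[np.sum(w * q * (p - Kp)) for p, Kp in zip(phi, Kphi)]
              for q in psi])
b = np.array([np.sum(w * q * g) for q in psi])
c = np.linalg.solve(A, b)                           # coefficients of u_n
print(c)                                            # close to [0, 1]: u_n(t) = t
```

Because the exact solution lies in the trial space, the computed coefficients recover it up to quadrature error.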
In [5] it is proved that, if K : X → X is a compact linear operator not having 1 as an eigenvalue, and the pair {X_n, Y_n} is a regular pair (a concept to be defined in the next section), Equation (3) has a unique solution u_n which satisfies

‖u − u_n‖ ≤ C inf_{x∈X_n} ‖x − u‖, (4)

where the constant C does not depend on n. Solvability and numerical stability of the approximation scheme are thus assured, and the accuracy of the approximation u_n to the unique solution u of Equation (2) does not depend formally on Y_n, as can be noted in Equation (4). The goal is therefore to choose test subspaces Y_n that are easy to handle, while the quality of convergence of the method is preserved.
In addition, the convergence can be improved further by an iteration of the method: once the approximations u_n ∈ X_n are obtained, a new sequence of approximate solutions

u*_n = g + Ku_n (5)

can be built by means of this simple procedure (see [6][7][8]). In this work we choose pairs of simple subspaces {X_n, Y_n} generated by Legendre polynomials and show the quality of the approximations in two numerical examples with known solutions, one of them having a singular kernel. We then improve the convergence by means of an iteration of the method and show why the approximation is better, even for small values of n ∈ N.

Method
Let (X, ⟨·, ·⟩) be a Hilbert space, ‖·‖ the associated norm, and K : X → X a compact linear operator. It is shown in [4] that, if ‖K‖ < 1, then for any given g ∈ X there exists a unique solution u ∈ X of Equation (2). We are interested in finding a good approximation to the u ∈ X satisfying Equation (2).
For each n ∈ N₀, let us consider subspaces X_n ⊂ X, Y_n ⊂ X, with dim(X_n) = dim(Y_n) = d_n < ∞. The Petrov–Galerkin method for Equation (2) is a numerical method to find u_n ∈ X_n satisfying Equation (3).
For the method to be useful, it is necessary to establish conditions under which Equation (3) has a unique solution u_n ∈ X_n and lim_{n→∞} ‖u − u_n‖_X = 0 for u the unique solution of Equation (2). It is easy to show that the condition

X_n^⊥ ∩ Y_n = {0} (6)

ensures the existence of a unique solution u_n ∈ X_n of Equation (3). From [4] (p. 243), convergence can be expected only if

∀x ∈ X, ∃{x_n, n ∈ N} ⊂ X_n : lim_{n→∞} ‖x_n − x‖ = 0, (7)

so, from now on, the sequences of subspaces {X_n}_{n∈N} and {Y_n}_{n∈N} are both chosen to verify this denseness condition. Following [5], and denoting by {X_n, Y_n} the sequences of subspaces, the pair {X_n, Y_n} is said to be "regular" if there exists a linear surjective operator Π_n : X_n → Y_n satisfying condition (i).
It is easy to show that the surjectivity of Π_n and condition (i) assure the condition of Equation (6). From [5] (p. 411), the following theorem summarizes the conditions for the existence and uniqueness of the solutions of Equation (3) and their convergence to the solution of Equation (2):

Theorem 1. Let X be a Hilbert space and K : X → X a compact linear operator not having 1 as an eigenvalue. Suppose X_n and Y_n are finite-dimensional subspaces of X, with dim(X_n) = dim(Y_n), such that {X_n, Y_n} is a regular pair and, for each x ∈ X, there exist sequences {x_n, n ∈ N} ⊂ X_n and {y_n, n ∈ N} ⊂ Y_n with lim_{n→∞} x_n = x and lim_{n→∞} y_n = x. Then, there exists n_0 ∈ N such that, for n > n_0, the equation ⟨u_n − Ku_n, v⟩ = ⟨g, v⟩ ∀v ∈ Y_n has a unique solution u_n ∈ X_n for any given g ∈ X, which satisfies ‖u − u_n‖ ≤ C · inf_{x∈X_n} ‖x − u‖, where u ∈ X is the unique solution of u − Ku = g and C is a constant independent of n.
From [9], the characterization of a regular pair is simple by means of the so-called "correlation matrix". Let {ϕ^n_i, i = 1, ..., d_n} and {ψ^n_j, j = 1, ..., d_n} be bases of X_n and Y_n, respectively, and define the d_n × d_n matrices [G(X_n)]_{ij} = ⟨ϕ^n_i, ϕ^n_j⟩, [G(Y_n)]_{ij} = ⟨ψ^n_i, ψ^n_j⟩ and the correlation matrix [G(X_n, Y_n)]_{ij} = ⟨ϕ^n_i, ψ^n_j⟩. Note that, in the real case, G(X_n) and G(Y_n) are positive definite and G⁺(X_n, Y_n) denotes the symmetric part of the correlation matrix. We have proven (see [10]) that {X_n, Y_n} is a regular pair whenever G⁺(X_n, Y_n) is positive definite. As approximating subspaces we take the spaces S^m_n of piecewise polynomials of degree less than m on the dyadic subintervals I_{j,n} = (j/2^n, (j+1)/2^n), j = 0, 1, ..., 2^n − 1; they are dense in L²([0, 1]), so the denseness condition of Equation (7) is satisfied.
As a basis of S^m_n, we choose Legendre polynomials of degree less than m, adapted to each of the subintervals I_{j,n}: S^m_n = span{p^{j,n}_l, l = 0, ..., m − 1, j = 0, ..., 2^n − 1}, with p^{j,n}_l(x) = L_l(2^{n+1}x − 2j − 1) χ_{I_{j,n}}(x), where L_l(x) is the Legendre polynomial of degree l on [−1, 1] and χ_{I_{j,n}} is the characteristic function of the subinterval I_{j,n}.
We rename q^{j,n}_l := p^{j,n+1}_l to simplify the notation and choose the sequences of subspaces X_n = S²_n = span{p^{0,n}_0, p^{0,n}_1, p^{1,n}_0, p^{1,n}_1, ..., p^{2^n−1,n}_0, p^{2^n−1,n}_1} and Y_n = S¹_{n+1} = span{q^{0,n}_0, q^{1,n}_0, ..., q^{2^{n+1}−1,n}_0}. Note that the condition of Equation (6), X_n^⊥ ∩ Y_n = {0}, which assures the uniqueness of the solution of Equation (3) for each n, is fulfilled: indeed, if q^{j,n}_0 ∈ Y_n, for j between 0 and 2^{n+1} − 1, satisfies q^{j,n}_0 ⊥ p^{i,n}_0 and q^{j,n}_0 ⊥ p^{i,n}_1 for every i = 0, ..., 2^n − 1, then q^{j,n}_0 = 0. With ψ^n_j(x) := q^{j,n}_0(x), it is easy to show that {X_n, Y_n} is a regular pair, since G⁺(X_n, Y_n) is a 2^{n+1} × 2^{n+1} matrix with positive definite 2 × 2 blocks on its principal diagonal and 0s everywhere else (for details, see [10]).
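This regular-pair check can be reproduced numerically. The Python sketch below builds (unnormalized) piecewise Legendre trial functions on the 2^n dyadic subintervals against piecewise constants on the 2^{n+1} subintervals, assembles the correlation matrix and verifies that its symmetric part is positive definite; the quadrature grid and normalizations are choices of ours.

```python
import numpy as np

# Correlation-matrix check of the regular-pair criterion for the pair
# X_n = piecewise-linear Legendre functions on the 2^n dyadic subintervals
# of [0,1], Y_n = piecewise constants on the 2^{n+1} dyadic subintervals.
n = 2
N, M = 2**n, 2**(n + 1)
s = np.linspace(0.0, 1.0, 4001)
w = np.full_like(s, s[1] - s[0])
w[0] = w[-1] = (s[1] - s[0]) / 2                      # trapezoid weights

phi = []
for i in range(N):
    a_, b_ = i / N, (i + 1) / N
    chi = ((s >= a_) & (s < b_)).astype(float)
    phi.append(chi)                                   # Legendre degree 0
    phi.append(chi * (2 * (s - a_) / (b_ - a_) - 1))  # Legendre degree 1
psi = [((s >= j / M) & (s < (j + 1) / M)).astype(float) for j in range(M)]

C = np.array([[np.sum(w * p * q) for q in psi] for p in phi])
Gp = (C + C.T) / 2                                    # symmetric part
print(np.min(np.linalg.eigvalsh(Gp)) > 0)             # True: regular pair
```

With this ordering of the bases the matrix is block diagonal with 2 × 2 blocks, so its smallest eigenvalue is bounded away from zero.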
Once the approximations u_n are obtained, an almost natural iteration procedure gives new approximations of the exact solution u. Since the equation being solved is u − Ku = g, that is, u = g + Ku, we can define u*_n = g + Ku_n. This first iteration, applied to the Galerkin method, has been studied since the 1970s because, under appropriate conditions on K and g, it reveals an interesting phenomenon called "superconvergence" (see [6,11], for instance): the order of convergence can be notably improved.
In [11] (p. 42), the existence of a unique solution u*_n of Equation (5) and the improvement of the order of convergence of the iterated approximation are guaranteed for any projection method.
In [5] (p. 419), superconvergence in the Petrov–Galerkin scheme applied to Fredholm equations of the second kind is explained and, under the same conditions as in Theorem 1, a theorem establishes that u*_n satisfies

‖u − u*_n‖ ≤ C ess sup_{s∈[0,1]} inf_{ψ∈Y_n} ‖k(·, s) − ψ‖ · inf_{x∈X_n} ‖x − u‖, (8)

for u the unique solution in L²([0, 1]) of Equation (1), showing that the improvement of the order of convergence obtained by the iteration procedure is due to the approximation of the kernel k by elements ψ of the test subspace Y_n. In our work, the elements of the test subspaces Y_n = S¹_{n+1} are piecewise constant functions on the dyadic subintervals I_{j,n+1} = (j/2^{n+1}, (j+1)/2^{n+1}), j = 0, 1, ..., 2^{n+1} − 1. We now follow an idea from [6] (p. 67): for f a Lipschitz function with constant L on an interval I, the piecewise constant function taking on I the value of f at its midpoint satisfies ‖f − ψ‖_∞ ≤ L|I|/2. If, for each s ∈ [0, 1], k(·, s) is Lipschitz with constant L_s, the subintervals I_{j,n+1} have length 1/2^{n+1}, so inf_{ψ∈Y_n} ‖k(·, s) − ψ‖₂ ≤ (1/2^{n+2}) L_s for each s ∈ [0, 1] and, consequently, ess sup_{s∈[0,1]} inf_{ψ∈Y_n} ‖k(·, s) − ψ‖₂ ≤ (1/2^{n+2}) ess sup_{s∈[0,1]} L_s. Moreover, if ess sup_{s∈[0,1]} L_s < ∞, from Equation (8),

‖u − u*_n‖₂ ≤ C · (1/2^{n+2}) ess sup_{s∈[0,1]} L_s · inf_{x∈X_n} ‖x − u‖₂,

and the approximation is actually improved.
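The effect of the iteration can be observed numerically. The sketch below assumes the same hypothetical toy problem as before (k(t, s) = ts/4, g(t) = 11t/12, exact solution u(t) = t) and plain Galerkin with piecewise constants, rather than the paper's pair; it compares how the plain and iterated L² errors decay as the mesh is refined.

```python
import numpy as np

# Galerkin with piecewise constants on N subintervals for u - Ku = g,
# toy problem k(t,s) = t*s/4, g(t) = 11t/12, exact solution u(t) = t.
# Returns the L2 errors of u_n and of the iterated u*_n = g + K u_n.
def errors(N):
    s = np.linspace(0.0, 1.0, 2001)
    w = np.full_like(s, s[1] - s[0])
    w[0] = w[-1] = (s[1] - s[0]) / 2                  # trapezoid weights
    phi = []
    for i in range(N):
        upper = (s <= 1.0) if i == N - 1 else (s < (i + 1) / N)
        phi.append(((s >= i / N) & upper).astype(float))
    Kop = np.outer(s, s) / 4.0 * w                    # discretised operator
    g = 11.0 * s / 12.0
    A = np.array([[np.sum(w * q * (p - Kop @ p)) for p in phi] for q in phi])
    b = np.array([np.sum(w * q * g) for q in phi])
    c = np.linalg.solve(A, b)
    un = sum(ci * p for ci, p in zip(c, phi))         # plain approximation
    ust = g + Kop @ un                                # iterated approximation
    l2 = lambda f: np.sqrt(np.sum(w * f**2))
    return l2(un - s), l2(ust - s)

e4, e4s = errors(4)
e8, e8s = errors(8)
print(e4 / e8, e4s / e8s)   # first ratio near 2 (order 1), second near 4
```

Halving the mesh roughly halves the plain error but divides the iterated error by about four, matching the extra factor 1/2^{n+2} in the bound above.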

Results
We now offer two numerical examples of the performance of the Petrov–Galerkin method and the iterated Petrov–Galerkin method with regular pairs, applied to Fredholm integral equations of the second kind: one with a continuous kernel, and the other with a weakly singular kernel ([4], p. 29; [12], p. 7).
We have chosen "regular pairs" of subspaces, and orthogonal bases for them, reducing the difficulty of the calculations.
In Figure 1b, the plots of the exact solution together with u*_1, u*_2 and u*_3 are shown. The quadratic errors with respect to te^t are, in this case, ε*_0 ∼ 0.005657, ε*_1 ∼ 0.000849 and ε*_2 ∼ 0.000139. Note that the plots of the iterated approximations and the exact solution are indistinguishable. All the approximations were obtained by means of ad hoc algorithms implemented in Wolfram Mathematica® 9.

Example 2
For this equation, ‖K‖ ≤ ( ∫_0^1 ( ∫_0^1 k(t, s)² ds ) dt )^{1/2} < 1; thus, 1 is not an eigenvalue of K and the convergence of the method to the (unique) exact solution is guaranteed.
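A bound of this type is easy to check numerically. The Python sketch below uses a hypothetical kernel k(t, s) = ts/4 rather than the kernel of this example, for which the Hilbert–Schmidt bound is exactly 1/12.

```python
import numpy as np

# Hilbert-Schmidt bound ||K|| <= ( integral of k(t,s)^2 over [0,1]^2 )^{1/2}
# for a hypothetical kernel k(t,s) = t*s/4; the exact value is 1/12 < 1.
t = np.linspace(0.0, 1.0, 1001)
w = np.full_like(t, t[1] - t[0])
w[0] = w[-1] = (t[1] - t[0]) / 2                  # trapezoid weights
T, S = np.meshgrid(t, t, indexing="ij")
k2 = (T * S / 4.0) ** 2
bound = np.sqrt(np.sum(w[:, None] * w[None, :] * k2))
print(bound)                                      # close to 1/12 = 0.0833 < 1
```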

In Figure 2a we plot the exact solution together with the approximations u_0, u_1, u_2, u_3 and u_4 obtained with Mathematica®, and in Figure 2b, the exact solution together with u*_0, u*_1 and u*_2, the last being practically indistinguishable from the exact solution. Comparing quadratic errors shows the improvement of the approximation: for n = 2, ε_2 = ‖u − u_2‖₂ < 0.0046 and ε*_2 = ‖u − u*_2‖₂ < 0.00023.

Figure 2. (a) Approximations to the exact solution of Equation (11) before iteration: purple for n = 0, blue for n = 1, green for n = 2, yellow for n = 3, orange for n = 4, and, dashed in red, the exact solution; (b) the same approximations after the iteration. For n = 2, the approximation is graphically indistinguishable from the exact solution of Equation (11).

Discussion
The Petrov–Galerkin method is applied by choosing appropriate subspaces onto which to project. The choice of a "regular pair" of subspaces (easily characterized by the positive definiteness of a correlation matrix), and of orthogonal bases for them, reduces the difficulty of the calculations. Iteration is shown to be a very simple way of improving convergence remarkably, and better orders of convergence can be obtained even for a weakly singular kernel. It must be said that, in the second numerical example, we had difficulties with a fluent implementation of the computational algorithms because of the improper integrals involved. However, not many computations were necessary, since with n = 2 we already obtained very good results. In [14,15], the authors propose discrete methods to deal with the numerical difficulties arising from the calculation of the improper integrals involved in the case of a weakly singular kernel.
It is appropriate to point out that, in recent papers, different discrete Galerkin approaches have been proposed to solve integral equations. In particular, meshless discrete Galerkin methods were successfully developed for solving Fredholm and Hammerstein integral equations with various bases. See, for example, [16] for an effective and stable method to estimate the solution of Hammerstein integral equations with free-shape-parameter radial basis functions constructed on scattered points; [17,18] for effective computational meshless methods for solving Fredholm integral equations of the second kind with logarithmic and weakly singular kernels, using radial basis functions, meshless product integration and collocation methods; and [19,20] for efficient meshless methods for solving non-linear weakly singular Fredholm integral equations, combining the discrete collocation method with locally supported radial basis functions and thin-plate splines.
Finally, a plausible line of future work is to explore and take advantage of some of these discrete approximation methods to avoid the computational difficulties arising from the improper integrals when solving Fredholm integral equations of the second kind with weakly singular kernels.