An Ulm-Type Inverse-Free Iterative Scheme for Fredholm Integral Equations of Second Kind

Abstract: In this paper, we present an iterative method based on the well-known Ulm's method to numerically solve Fredholm integral equations of the second kind. We base our strategy on the symmetry between two well-known problems in Numerical Analysis: the solution of linear integral equations and the approximation of inverse operators. In this way, we obtain a twofold algorithm that allows us to approximate, with quadratic order of convergence, the solution of the integral equation as well as the inverse of the derivative of the operator related to the problem at the solution. We have studied the semilocal convergence of the method and we have obtained the expression of the method in a particular case, given by some adequate initial choices. The theoretical results are illustrated with two applications to integral equations, given by symmetric non-separable kernels.


Introduction
In this paper, we are concerned with obtaining an approximate solution of Fredholm integral equations of the second kind, given by

z(x) = g(x) + λ ∫_a^b N(x, t) z(t) dt,   (1)

where −∞ < a < b < +∞, λ ∈ R, g(x) ∈ C[a, b] is a given function, and N(x, t) is a known function in [a, b] × [a, b], called the kernel of the integral equation. Finally, z(x) ∈ C[a, b] is the unknown function to be determined. This integral equation is said to be homogeneous if the function g(x) is identically zero; otherwise, it is said to be non-homogeneous. The problem of solving integral equations is quite general, so in this work we focus on a particular type, Fredholm integral equations [1]. In general, this kind of integral equation appears frequently in Mathematical Physics, Mathematics, and other fields of Science and Engineering [2][3][4][5][6]. Several physical processes (Fluid Mechanics, Biology, Chemistry, etc.) can be modeled by these equations. In addition, other purely mathematical problems, such as initial and boundary value problems, can be transformed into integral equations.
We introduce the linear integral operator N : C[a, b] → C[a, b], defined by

[N(z)](x) = ∫_a^b N(x, t) z(t) dt,

which allows us to express Equation (1) in the following form:

(I − λN)z = g.   (2)

Therefore, if there exists (I − λN)^{−1}, the solution of Equation (1) is given by

z = (I − λN)^{−1} g.   (3)

With Formula (3), it is possible to obtain the exact solution of the integral Equation (1) in a theoretical way. However, in practice, the calculation of the inverse (I − λN)^{−1} can be very complicated (or even impossible). Consequently, we present a strategy based on the symmetry between the problem of numerically solving linear integral equations and the problem of approximating the inverse of a linear operator. With this idea, the use of iterative methods gives us an alternative way of approaching this inverse and, therefore, the solution of the integral equation, instead of trying to calculate the exact solution of the problem (see [7][8][9][10]).
There exist other techniques to numerically solve Fredholm integral equations of the second kind, or even systems of such equations. For instance, in [11], the idea is to approximate the operator associated with the integral equation itself, instead of approximating an inverse operator as we propose in this paper. In [12] or [13], discretization techniques (the Galerkin method, the collocation method) are used to solve the finite-dimensional equations (I − λN_n)z_n = g_n, related to (2), where N_n is a discrete approximation of N and z_n, g_n are the corresponding discrete data. Iterative techniques can also be used for other types of integral equations, as is done in [14] or [15], for example.
Throughout this paper, we consider the operator G : Ω ⊆ C[a, b] → C[a, b] given by

G(z) = z − λN(z) − g.   (4)

Instead of directly approximating (I − λN)^{−1} by means of an iterative method, as is done in [10] for instance, in this paper we apply Newton's method to the equation G(z) = 0. As we will see later (see (7)), the calculation of [G'(z)]^{−1} is also related to the calculation of (I − λN)^{−1}. Consequently, Newton's method is only applicable in cases where this inverse can be calculated (for instance, for separable kernels, as seen in [16]). One of the targets of this work is to use Newton's method for solving G(z) = 0. At this point, we introduce the use of iterative methods for the computation of the inverse. This is the main difference compared with previous works such as [10]: here, we first consider the equation G(z) = 0 and then we use iterative methods for approximating the inverse, instead of approximating the inverses directly.
Amongst the plethora of iterative methods for solving nonlinear equations F(x) = 0, in this work we have chosen what is known as Ulm's method, a Newton-type method given by

x_{n+1} = x_n − B_n F(x_n),
B_{n+1} = 2B_n − B_n F'(x_{n+1}) B_n, n ≥ 0.   (5)

The method presents some attractive features. Firstly, it has quadratic convergence, that is, the same order of convergence as Newton's method. Secondly, the method does not contain inverse operators in its expression or, equivalently, it is not necessary to solve a linear equation per iteration. Thirdly, in addition to solving the nonlinear equation F(x) = 0, the method produces successive approximations {B_n} to the value of the inverse operator F'(x*)^{−1}, where x* is a solution of the equation. The method was first proposed by Ulm in [17], as a variant of a similar method given by Moser [18], which has only superlinear convergence. A study of its semilocal convergence by using the α-theory of Smale, as well as an application to approximate the solution of integral equations of Fredholm type, can be seen in [19]. The local convergence of the method can be seen in [20].
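As a minimal illustration of the two recurrences in (5), consider the scalar equation F(x) = x² − 2 = 0; the starting values x₀ = 1 and B₀ = 0.5 are illustrative choices.

```python
import math

# Ulm's method (5) for F(x) = x^2 - 2 = 0: the step uses B_n instead of
# 1/F'(x_n), and B_n itself is updated without any division.
F = lambda x: x * x - 2.0
dF = lambda x: 2.0 * x

x, B = 1.0, 0.5                   # B0 approximates 1/F'(x0)
for _ in range(8):
    x = x - B * F(x)              # inverse-free Newton-type step
    B = 2 * B - B * dF(x) * B     # B_n converges to 1/F'(x*)

print(x, B)                       # x -> sqrt(2), B -> 1/(2*sqrt(2))
```

Note that no division by F'(x) ever occurs: the sequence {B_n} converges on its own to the inverse of the derivative at the solution, which is the third property listed above.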

Construction of an Iterative Scheme of Ulm-Type
We consider the equation given by the operator G defined in (4). Let us note that finding a solution of the equation G(z) = 0 is equivalent to solving the integral Equation (1). Iterative methods are among the most used techniques to solve these kinds of equations. The idea is to start with an initial approximation z_0 of z*, a solution of the equation G(z) = 0. Next, a sequence {z_n} of approximations to the solution z* is obtained, in such a way that the sequence {||z_n − z_{n−1}||} is strictly decreasing. Obviously, our interest is focused on the case lim_n z_n = z*. It is possible to obtain the sequence of approximations {z_n} by using different iterative algorithms. For instance, Newton's method is one of the most used for this purpose. It is defined by the following recursive process:

z_{n+1} = z_n − [G'(z_n)]^{−1} G(z_n), n ≥ 0.   (6)

In practice, it is not easy to apply an iterative scheme like (6) to operators defined on infinite-dimensional spaces. The main difficulty arises from calculating at each step the inverse of the linear operator G'(z_n) or, equivalently, from solving the associated linear equation.
In our case, the operator G is Fréchet differentiable, with G'(z) given by

[G'(z)]y = y − λN(y), that is, G'(z) = I − λN.   (7)

Then, if there exists [G'(z)]^{−1} and we write y = [G'(z)]^{−1}w, the following equality must be satisfied:

y(x) − λ ∫_a^b N(x, t) y(t) dt = w(x).

Writing J(x) = ∫_a^b N(x, t) y(t) dt, so that y(x) = w(x) + λJ(x), the value of J(x) can be obtained independently of y. To do this, we multiply the equality y(x) = w(x) + λJ(x) by N(ξ, x), ξ ∈ [a, b], and we integrate it between a and b in the x variable. In this way, we obtain

∫_a^b N(ξ, x) y(x) dx = ∫_a^b N(ξ, x) w(x) dx + λ ∫_a^b N(ξ, x) J(x) dx.

By means of the change of variable x = t in the previous integrals, we obtain

J(ξ) = ∫_a^b N(ξ, t) w(t) dt + λ ∫_a^b N(ξ, t) J(t) dt

and, therefore, (I − λN)J = N(w), that is, J = (I − λN)^{−1}N(w). Consequently,

[G'(z)]^{−1}w = w + λ(I − λN)^{−1}N(w).

Now, as a consequence of the last equation, we can rewrite an iteration of Newton's iterative scheme (6) as follows:

z_{n+1} = z_n − G(z_n) − λ(I − λN)^{−1}N(G(z_n)), n ≥ 0.   (8)

Let us note that, if |λ| ||N|| < 1, Banach's lemma on invertible operators guarantees the existence of the operator (I − λN)^{−1}. Now, our target is to approximate the inverse of this linear operator by using iterative methods for solving nonlinear equations.
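The expression obtained for [G'(z)]^{−1} rests on the identity (I − λN)^{−1}w = w + λ(I − λN)^{−1}N(w), which can be checked numerically with a random matrix standing in for the discretized operator N; the matrix and the value of λ below are arbitrary illustrative choices.

```python
import numpy as np

# Check of the identity A^{-1} w = w + lam * A^{-1} K w for A = I - lam*K,
# valid whenever |lam| * ||K|| < 1 so that A is invertible.
rng = np.random.default_rng(0)
n, lam = 6, 0.2
K = rng.standard_normal((n, n))
K /= np.linalg.norm(K, 2)                  # normalize so |lam| * ||K|| = 0.2
A = np.eye(n) - lam * K

w = rng.standard_normal(n)
lhs = np.linalg.solve(A, w)                # A^{-1} w
rhs = w + lam * np.linalg.solve(A, K @ w)  # w + lam * A^{-1} K w
print(np.max(np.abs(lhs - rhs)))           # ~ machine precision
```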
We denote by L(C[a, b]) the set of bounded linear operators from C[a, b] into itself. Within this set, we consider the subset of invertible operators: GL(C[a, b]) = {L ∈ L(C[a, b]) : L^{−1} exists}. Now, we use Newton's method for approaching the inverse of a given linear operator; in our case,

A = I − λN.   (9)

Therefore, proceeding as in [16], Newton's iteration in this case can be written in the following way:

L_{n+1} = L_n(2I − A L_n) = 2L_n − L_n A L_n, n ≥ 0,   (10)

for a given initial operator L_0. We would like to highlight that Newton's method (10) does not use inverse operators for approximating the inverse operator A^{−1}. Now, taking into account the previous reasoning, we approximate (I − λN)^{−1} in (8) by means of the Newton sequence (10), and then we define the following Ulm-type algorithm:

z_{n+1} = z_n − G(z_n) − λ L_n N(G(z_n)),
L_{n+1} = 2L_n − L_n A L_n, n ≥ 0.   (11)

Let us notice that (11) is an inverse-free iterative process, as is Ulm's method given by (5). In the next section, we study the semilocal convergence of this method as well as its order of convergence.
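A finite-dimensional sketch may help fix ideas: a matrix K stands in for the discretized operator N, the Newton sequence (10) approximates A^{−1}, and the current iterate L_n replaces (I − λN)^{−1} in the Newton step. The data below are illustrative, and the exact coupling of the two recurrences is our reading of the scheme.

```python
import numpy as np

# Matrix sketch of the inverse-free scheme: L_n approximates A^{-1} via the
# Newton-Schulz iteration (10), and the z-step uses L_n in place of A^{-1}.
rng = np.random.default_rng(1)
n, lam = 8, 0.3
K = rng.standard_normal((n, n))
K /= np.linalg.norm(K, 2)                 # so that |lam| * ||K|| = 0.3 < 1
A = np.eye(n) - lam * K
g = rng.standard_normal(n)

G = lambda z: z - lam * K @ z - g         # discrete version of G(z) = 0
z, L = g.copy(), np.eye(n)                # initial guesses z0 = g, L0 = I
for _ in range(7):
    Gz = G(z)
    z = z - Gz - lam * L @ (K @ Gz)       # inverse-free Newton-type step
    L = 2 * L - L @ A @ L                 # Newton-Schulz update for A^{-1}

print(np.linalg.norm(G(z)), np.linalg.norm(L - np.linalg.inv(A)))
```

Both residuals collapse to machine precision in a handful of iterations, reflecting the quadratic convergence of the two coupled sequences.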

Main Convergence Result
As a first step in our study, we analyze the convergence of the sequence of linear operators {L_n} defined in (10). Our target is to state a semilocal convergence result, without assuming the existence of the inverse A^{−1}.

Lemma 1. Let A, L_0 ∈ L(C[a, b]) be such that

||I − A L_0|| ≤ η, with η(1 + η) < 1.   (12)

Then, the sequence {L_n} defined in (10) converges to an operator L* such that A L* = I.
Proof. As a direct application of (10), we obtain I − A L_n = (I − A L_{n−1})^2 and, therefore,

||I − A L_n|| ≤ ||I − A L_{n−1}||^2.

On the other hand, as L_n = L_{n−1}(I + (I − A L_{n−1})) and ||I|| ≤ 1, we have

||L_n|| ≤ ||L_{n−1}|| (1 + ||I − A L_{n−1}||).

Now, we apply recursively the previous inequalities and we take into account (12). Thus, we obtain:

||I − A L_n|| ≤ η^{2^n} and ||L_n|| ≤ ||L_0|| (1 + η)^n.   (13)

As a consequence, by the definition of the sequence (10) and by the inequalities (12) and (13), we have the following bound for each m ∈ N:

||L_{m+1} − L_m|| = ||L_m (I − A L_m)|| ≤ ||L_0|| (1 + η)^m η^{2^m} ≤ ||L_0|| (η(1 + η))^m.

Therefore, as η(1 + η) < 1, for n = 0, we have:

||L_m − L_0|| ≤ ∑_{j=0}^{m−1} ||L_{j+1} − L_j|| ≤ ||L_0|| / (1 − η(1 + η)).

Then, L_m ∈ B(L_0, ||L_0|| / (1 − η(1 + η))) for m ≥ 1. Moreover, we obtain:

||L_{m+k} − L_m|| ≤ ||L_0|| (η(1 + η))^m / (1 − η(1 + η)),

and it follows that {L_n} is a Cauchy sequence. Then, {L_n} converges to some operator L*. In addition, as:

||I − A L_m|| ≤ η^{2^m},

we have lim_{m→∞}(I − A L_m) = 0 and then A L* = I. □
Let us notice that, if we prove that A^{−1} exists, then L* = A^{−1}. Otherwise, if we do not suppose the existence of the inverse A^{−1}, but we consider L_0 such that A L_0 = L_0 A, we have:

A L_1 = A(2L_0 − L_0 A L_0) = 2A L_0 − (A L_0)^2 = 2L_0 A − (L_0 A)^2 = L_1 A.

Hence, by following an inductive procedure, we deduce that A L_n = L_n A for all n, and then L* A = A L* = I. Thus, in this case, L* is the inverse operator of A. However, in general, if L_0 A ≠ A L_0, then L* = lim_{n→∞} L_n satisfies only A L* = I, so that the sequence {L_n} converges to a right inverse of A.

Lemma 2. Under the conditions of Lemma 1, let us assume that z_j ∈ Ω for j = 0, 1, . . . , n. Then, the sequence {z_n} defined by (11) satisfies

||G(z_n)|| ≤ (|λ| ||N||)^n η^{2^n − 1} ||G(z_0)||.   (14)

Proof. Firstly, using (4) and (11), as z_{n−1}, z_n ∈ Ω, we have

G(z_n) = G(z_{n−1}) − A(I + λ L_{n−1} N)(G(z_{n−1})) = λ(I − A L_{n−1}) N(G(z_{n−1})).

Secondly, taking norms in the previous equality and from Lemma 1, we get

||G(z_n)|| ≤ |λ| ||N|| η^{2^{n−1}} ||G(z_{n−1})||.

Thirdly, by a recursive procedure, as z_j ∈ Ω for j = 0, 1, . . . , n, we obtain

||G(z_n)|| ≤ (|λ| ||N||)^n η^{1+2+···+2^{n−1}} ||G(z_0)|| = (|λ| ||N||)^n η^{2^n − 1} ||G(z_0)||,

and the result is then proved. □
With the aid of these two technical lemmas, we can prove a result of semilocal convergence for the sequence {z_n} defined by (11).

Theorem 1. Let A be the operator defined in (9) and let L_0 be an initial approach to A^{−1} such that

||I − A L_0|| ≤ η, with η(1 + η) < 1 and |λ| ||N|| < 1.   (15)

In addition, let us assume that the initial guess z_0 satisfies B(z_0, R) ⊆ Ω, for an adequate radius R > 0. Then, the sequence {z_k} defined in (11) belongs to B(z_0, R) and converges quadratically to z*, a solution of G(z) = 0.
Proof. From Lemma 1, it follows that the sequence {L_n} converges quadratically to a right inverse of A and satisfies ||L_n|| ≤ ||L_0||(1 + η)^n. On the other hand, if z_j ∈ Ω for j = 0, 1, . . . , n, from Lemma 2, as 2^n − 1 ≥ n for n ≥ 0, we obtain

||G(z_n)|| ≤ (|λ| ||N|| η)^n ||G(z_0)||.   (16)

Therefore, from (13), it follows that

||z_{n+1} − z_n|| ≤ (1 + |λ| ||L_n|| ||N||) ||G(z_n)|| ≤ (1 + ||L_0||)(1 + η)^n (|λ| ||N|| η)^n ||G(z_0)||.

Then, as (1 + η)|λ| ||N|| η < 1, if we take n = 0, it is obvious that z_m ∈ B(z_0, R) ⊆ Ω for all m ≥ 1. Moreover, it follows that the sequence {z_n} is a Cauchy sequence and, therefore, there exists a function z* such that lim z_n = z*. Finally, by (16) and the continuity of the operator G, we obtain G(z*) = 0.
Next, we prove the quadratic convergence of the sequence {z_n}. It is easy to check that NA = AN and then, from (11),

z_{n+1} − z* = λ(I − L_n A) N(z_n − z*).

Thus, from Lemma 1 and the equality I − L_{n+1}A = (I − L_n A)^2, it follows that

||z_{n+1} − z*|| ≤ ||I − L_0 A||^{2^n} |λ| ||N|| ||z_n − z*||,

and then {z_k} converges quadratically to z*.

A Particular Case
Now, we consider the application of the iterative method (11) to the problem (1) in the particular case given by the initial choices z 0 (x) = g(x) and L 0 = I.
In this case, it is easy to check that, if |λ| ||N|| < (√5 − 1)/2, the condition (15) is verified, since η = ||I − A L_0|| = ||λN|| = |λ| ||N|| and then η(1 + η) < 1.
Then, by Theorem 1, if B(z_0, R) ⊆ Ω, with R ∈ (0, ((1 + ||N||) |λ| ||N|| / (1 − |λ| ||N||)) ||g||), the sequence {z_k} defined in (11) belongs to B(z_0, R) and converges quadratically to z*, a solution of G(z) = 0. In addition, the following sequences of iterates can be explicitly obtained, one, {z_k}, for approaching a solution z*:

z_k = ∑_{j=0}^{2^k + k − 1} λ^j N^j(g),   (17)

and the other one, {L_k}, for approaching the inverses of G'(z_k):

L_k = ∑_{j=0}^{2^k − 1} λ^j N^j.   (18)

Note that the operator L_k is given by the (2^k − 1)th partial sum of the series ∑_{j≥0} (λN)^j, which gives the inverse of the operator A = I − λN, when it exists. We can prove that Formulas (17) and (18) hold by following an inductive procedure. For the operators L_k, it is clear that the Formula (18) is true for k = 0, since L_0 = I. Now, if we assume that the Formula (18) is true for k ≤ n, we have:

I − A L_n = I − (I − λN) ∑_{j=0}^{2^n − 1} λ^j N^j = λ^{2^n} N^{2^n},

and, therefore, taking into account the recurrence for L_n given in (11):

L_{n+1} = L_n(2I − A L_n) = L_n(I + λ^{2^n} N^{2^n}) = ∑_{j=0}^{2^n − 1} λ^j N^j + ∑_{j=2^n}^{2^{n+1} − 1} λ^j N^j = ∑_{j=0}^{2^{n+1} − 1} λ^j N^j,

so the Formula (18) is true for k = n + 1.
For the case of the functions z_k, it is clear again that the Formula (17) is true for k = 0. Now, let us suppose that the Formula (17) holds for k ≤ n. By using the inductive hypothesis, we deduce

g + λN(z_n) = ∑_{j=0}^{2^n + n} λ^j N^j(g), and hence G(z_n) = z_n − λN(z_n) − g = −λ^{2^n + n} N^{2^n + n}(g).

Then, taking into account the recurrence for z_n given in (11) and that I + λ L_n N = ∑_{j=0}^{2^n} λ^j N^j, we have:

z_{n+1} = z_n − (I + λ L_n N)(G(z_n)) = ∑_{j=0}^{2^n + n − 1} λ^j N^j(g) + ∑_{j=0}^{2^n} λ^j N^j (λ^{2^n + n} N^{2^n + n}(g))

and, therefore, rearranging superscripts,

z_{n+1} = ∑_{j=0}^{2^n + n − 1} λ^j N^j(g) + ∑_{j=2^n + n}^{2^{n+1} + n} λ^j N^j(g) = ∑_{j=0}^{2^{n+1} + (n+1) − 1} λ^j N^j(g).

Thus, the Formula (17) is true for k = n + 1. Notice that Equation (17) expresses the iterates z_k as partial sums of the Neumann series ∑_{j≥0} λ^j N^j(g), and in this way they approximate the solution z* of Equation (1).
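The identification of the operators L_k with partial sums of the Neumann series can be verified numerically in a matrix setting; K and λ below are illustrative.

```python
import numpy as np

# With L0 = I, the Newton-Schulz iterates for A = I - lam*K coincide with
# the partial sums sum_{j=0}^{2^k - 1} (lam*K)^j of the Neumann series.
rng = np.random.default_rng(2)
n, lam = 5, 0.4
K = rng.standard_normal((n, n))
K /= np.linalg.norm(K, 2)
A = np.eye(n) - lam * K

L = np.eye(n)
diffs = []
for k in range(1, 5):
    L = 2 * L - L @ A @ L                 # L_k from the recurrence in (11)
    S = sum(np.linalg.matrix_power(lam * K, j) for j in range(2 ** k))
    diffs.append(np.max(np.abs(L - S)))
print(diffs)                              # all differences ~ 0
```

The agreement is exact up to rounding, since the two expressions are algebraically identical whenever L_0 = I.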

Numerical Examples
We finish this paper with two particular examples given by integral equations with symmetric non-separable kernels. Example 1. We consider the Fredholm integral equation of Chandrasekhar type ([5,21]) given by Equation (20). We take z_0(x) = g(x) = 1 and L_0 = I as starting points in the couple of recurrences given by (11). Let us note that, with this choice of initial guesses, the conditions of Theorem 1 are fulfilled. We would like to highlight that, in this example, we can prove the existence of a solution of Equation (20) without prior knowledge of it.
Then, taking into account the procedure explained in Section 4, we can obtain the successive approximations z_1(x), z_2(x), and so on. In this case, we have approximated the integral operator N by means of a Gauss–Legendre quadrature formula with four nodes and weights. We have used ||z_n − z_{n−1}|| ≤ 10^{−4} as a stopping criterion. As we can see in Figure 1, with just a couple of iterations, we obtain a good approximation of the solution. Figure 1. At the top, graphics of the first two iterates of the Ulm-type method (11) applied to Equation (20). At the bottom, comparison between the second and third iterations (graphic of z_3(x) − z_2(x)).

Example 2.
We consider now another example, where g(x) is chosen so that z(x) = 1 is a solution; the resulting equation is given by (21). We take z_0(x) = g(x) and L_0 = I in (11). With this choice of initial values, the conditions of Theorem 1 are fulfilled. Now, following the procedure explained in Section 4 and using, as in the previous example, a Gauss–Legendre quadrature formula with four nodes and weights for approximating the integral operator N : C[0, 1] → C[0, 1], given by N(y)(x) = ∫_0^1 x t e^{xt} y(t) dt, we obtain, with the same stopping criterion, the approximations to the solution shown on the left side of Figure 2. In addition, as the exact solution is known, we can see the error committed by these iterations on the right side of Figure 2.
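A sketch of this experiment, with the stated kernel and a four-node Gauss–Legendre rule, follows. Since the value of λ in Equation (21) is not fixed in this excerpt, λ = 0.25 below is an assumption, and g is manufactured at the discrete level so that z*(x) = 1 is the exact solution.

```python
import numpy as np

# Example 2 sketch: kernel x*t*exp(x*t) on [0,1], 4-node Gauss-Legendre
# quadrature, and g chosen so that z*(x) = 1 solves the discrete equation.
lam, m = 0.25, 4                                     # lam is an assumed value
t, w = np.polynomial.legendre.leggauss(m)
t, w = 0.5 * (t + 1.0), 0.5 * w                      # nodes/weights on [0, 1]
Kmat = (t[:, None] * t[None, :] * np.exp(t[:, None] * t[None, :])) * w[None, :]

z_exact = np.ones(m)
g = z_exact - lam * Kmat @ z_exact                   # so z* = 1 solves (1)

G = lambda z: z - lam * Kmat @ z - g
A = np.eye(m) - lam * Kmat
z, L = g.copy(), np.eye(m)                           # z0 = g, L0 = I
for _ in range(6):
    Gz = G(z)
    z = z - Gz - lam * L @ (Kmat @ Gz)               # inverse-free step
    L = 2 * L - L @ A @ L                            # Newton-Schulz update
print(np.max(np.abs(z - z_exact)))                   # rapidly -> 0
```

With this choice, |λ| ||N|| is well below 1, so I − λN is invertible and the iteration converges to the discrete solution in a few steps.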
Finally, we finish with a numerical comparison between our method and Picard's iterative method, one of the best-known iterative methods for solving equations, based on a fixed-point strategy. For the case of Equation (21), and starting with the same initial seed y_0(x) = g(x), the following sequence of approximations to the solution can be constructed:

y_{n+1}(x) = g(x) + λ ∫_0^1 N(x, t) y_n(t) dt, n ≥ 0.   (22)

In Figure 3, we can appreciate the faster convergence of the Ulm-type method (11) compared with Picard's iteration (22). At the top of Figure 3, we compare the first iteration z_1(x) of method (11) with two iterations, y_1(x) and y_2(x), of Picard's method (22). At the bottom of Figure 3, the second iteration z_2(x) is compared with four iterations of Picard's method. Figure 2. At the top, graphics of the first two iterates of the Ulm-type method (11) applied to Equation (21). At the bottom, graphics of the errors in these iterates. Figure 3. Comparison between iterations of the Ulm-type method (11) applied to Equation (21), denoted by z_n(x), and iterations of Picard's method (22) applied to the same equation, denoted by y_n(x).
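The comparison with Picard's method can be reproduced in the same discretized setting; the kernel and λ = 0.25 are again illustrative assumptions, with g manufactured so that the exact solution is z*(x) = 1.

```python
import numpy as np

# Picard's fixed-point iteration y_{n+1} = g + lam*N(y_n) versus the
# inverse-free Ulm-type scheme, on a Nystrom discretization of Example 2.
lam, m = 0.25, 4                               # lam is an assumed value
t, w = np.polynomial.legendre.leggauss(m)
t, w = 0.5 * (t + 1.0), 0.5 * w
Kmat = (t[:, None] * t[None, :] * np.exp(t[:, None] * t[None, :])) * w[None, :]
g = np.ones(m) - lam * Kmat @ np.ones(m)       # exact solution z* = 1

y = g.copy()
for _ in range(4):
    y = g + lam * Kmat @ y                     # Picard step (22)
err_picard = np.max(np.abs(y - 1.0))

A = np.eye(m) - lam * Kmat
z, L = g.copy(), np.eye(m)
for _ in range(2):
    Gz = z - lam * Kmat @ z - g
    z = z - Gz - lam * L @ (Kmat @ Gz)         # Ulm-type step
    L = 2 * L - L @ A @ L                      # Newton-Schulz update
err_ulm = np.max(np.abs(z - 1.0))
print(err_picard, err_ulm)                     # Ulm-type error is smaller
```

In this setting, two Ulm-type steps already produce a smaller error than four Picard steps, which is the qualitative behavior reported in Figure 3.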

Conclusions
We have considered the numerical solution of Fredholm integral equations of the second kind by means of iterative methods. This problem can be reduced, via the symmetry between solving linear integral equations and calculating inverse operators, to another one in which we need to approximate the inverse of a given operator related to the integral equation. Different iterative methods can be used for this purpose, as we can see in the recent literature on this problem; for instance, Newton-type and Picard-type methods have been used. The use of iterative methods provides an alternative strategy for approaching this inverse and, consequently, the solution of the involved integral equation, instead of calculating the exact solution of the problem (see [7,9,10]).
In this paper, we have considered a variant of the well-known Ulm's method given by (5). The choice of this method is justified by three interesting numerical properties:

1. It has quadratic convergence.

2. It does not contain inverse operators or, equivalently, it is not necessary to solve a linear equation at each iteration.

3. The method generates successive approximations of the inverse of the derivative at the solution, which is of interest for analyzing the stability of the problem.
A semilocal convergence study of the method has been carried out, and a particular case of its application to a given integral Equation (1) has been considered. Actually, the expression of the method for the particular initial choices z_0(x) = g(x) and L_0 = I has been obtained. The paper finishes with two applications to integral equations, given by symmetric non-separable kernels.