A Picard-Type Iterative Scheme for Fredholm Integral Equations of the Second Kind

In this work, we present an application of Newton's method for solving nonlinear equations in Banach spaces to a particular problem: the approximation of the inverse operators that appear in the solution of Fredholm integral equations. From it, we construct an iterative method with quadratic convergence that uses neither derivatives nor inverse operators. Consequently, this new procedure is especially useful for solving non-homogeneous Fredholm integral equations of the second kind. We combine this method with a technique for finding the solution of Fredholm integral equations with separable kernels to obtain a procedure that allows us to approximate the solution when the kernel is non-separable.


Introduction
In general, an equation in which the unknown function appears under an integral sign is called an integral equation [1][2][3]. Both linear and nonlinear integral equations appear in numerous fields of science and engineering [4][5][6], because many physical processes and mathematical models can be described by them; these equations thus provide an important tool for modeling processes [7]. In particular, integral equations arise in fluid mechanics, biological models, solid state physics, chemical kinetics, etc. In addition, many initial and boundary value problems can easily be turned into integral equations.
The definition of an integral equation given previously is very general, so in this work, we focus on some particular integral equations that are widely applied, such as Fredholm integral equations [8]. This kind of integral equation appears frequently in mathematical physics, engineering, and mathematics [9].
Fredholm integral equations of the first kind have the form:

f(x) = λ ∫_a^b N(x, t) y(t) dt, x ∈ [a, b],

and those of the second kind can be written as:

y(x) = f(x) + λ ∫_a^b N(x, t) y(t) dt, x ∈ [a, b]. (1)

In both cases, −∞ < a < b < +∞, λ ∈ R, f(x) ∈ C[a, b] is a given function and N(x, t), defined in [a, b] × [a, b], is called the kernel of the integral equation. y(x) ∈ C[a, b] is the unknown function to be determined. The integral equation is said to be homogeneous when f(x) is the zero function and non-homogeneous otherwise.
For the integral Equation (1), we consider the operator F : C[a, b] → C[a, b] given by:

[F(y)](x) = f(x) + λ ∫_a^b N(x, t) y(t) dt = f(x) + λ[N(y)](x), x ∈ [a, b], λ ∈ R. (2)

If the operator F defined in (2) is a contraction, the Banach fixed point theorem [10] guarantees the existence of a unique fixed point of F in C[a, b]. In addition, this fixed point can be approximated by the iterative scheme:

y_0 given in C[a, b], y_{n+1} = F(y_n), n ≥ 0. (3)

The operator F defined in (2) is a contraction in C[a, b] if there exists α ∈ (0, 1) such that:

‖F(y) − F(z)‖ ≤ α ‖y − z‖, for all y, z ∈ C[a, b].

If the operator F is differentiable in C[a, b], the condition ‖F′(y)‖ < 1, for all y ∈ C[a, b], implies that F is a contraction. Note that, in our case, F′(y) = λN, so F is a contraction whenever |λ| ‖N‖ < 1.

We can find in the literature many techniques to solve, in an exact or approximate way, Fredholm integral equations. For instance, the direct computation method and the successive approximations method are amongst the most used methods for this problem. Another well-known technique consists of transforming the Fredholm equation into an equivalent boundary value problem. We can add to this list other recently developed techniques, such as the Adomian decomposition method [11], the modified decomposition method [12], or the variational iteration method [13]. Although theorems of existence and convergence are important, the emphasis in this paper is on the use of iterative processes rather than on proving theoretical concepts of convergence and existence; our concern is the determination of the solution y(x) of Fredholm integral equations of the first kind and the second kind.
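For illustration, the successive-approximation scheme y_{n+1} = F(y_n) can be sketched numerically by discretizing the integral with a quadrature rule (a Nyström-type approach). This is only a minimal sketch: the kernel, λ, and f below are illustrative choices, not taken from the text, and f is manufactured so that the exact solution is y*(x) = x.

```python
import numpy as np

# Successive approximations y_{n+1} = F(y_n) for the discretized equation
# y(x) = f(x) + lam * \int_a^b N(x,t) y(t) dt  (trapezoidal Nystrom scheme).
# Illustrative data: N(x,t) = x*t and f chosen so that y*(x) = x exactly.
a, b, lam, n = 0.0, 1.0, 0.5, 201
x = np.linspace(a, b, n)
w = np.full(n, (b - a) / (n - 1))
w[0] *= 0.5
w[-1] *= 0.5                      # trapezoidal quadrature weights

N = np.outer(x, x)                # N[i, j] = N(x_i, t_j) = x_i * t_j
f = x - lam * x / 3.0             # manufactured so that y*(x) = x

y = np.zeros(n)                   # any y_0 works when F is a contraction
for _ in range(50):
    y = f + lam * N @ (w * y)     # one step of y_{n+1} = F(y_n)

print(np.max(np.abs(y - x)))      # close to zero
```

Here F is a contraction since |λ| times the norm of the discretized integral operator is at most 1/4, so the iterates converge linearly to the discrete solution.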
In this paper, we present a Newton-type iterative method that shares many properties with Picard-type iterative methods, namely, it is derivative-free and does not use inverse operators, while preserving the quadratic order of convergence that characterizes Newton's method. These features allow us to design an efficient iterative method. Actually, with a very reduced number of iterations, we can find competitive approximations to the solution of the involved Fredholm integral equation. This is one of the main targets of our research: to justify that it is enough to consider a few steps of our iterative procedure to reach a good approximation of the solution.
The new iterative procedure is based on the following idea (see [14]): Newton's method gives a sequence that does not use inverses when it is applied to the problem of approximating the inverse of a given operator.
The iterative scheme (3) is known as the method of successive approximations for the operator F. It converges to the fixed point y* for any function y_0 ∈ C[a, b].
It is known that a fixed point problem can be expressed by means of different iteration functions y = H(y), with H : C[a, b] → C[a, b]. In our case, instead of (3), we consider the well-known iterative scheme of Picard [15], which can be written in the form:

y_0 given in C[a, b], y_{n+1} = P(y_n) = y_n − G(y_n), n ≥ 0,

where G(y) = (I − F)(y). Obviously, a fixed point of P is a fixed point of F and vice versa. Moreover, both methods provide the same iterates, since P(y) = y − G(y) = y − (I − F)(y) = F(y).
Note that the Banach fixed point theorem, as applied above, only allows us to locate the fixed point in the whole space C[a, b], not in a smaller domain. Both the successive approximation method and Picard's method need neither inverse operators nor derivative operators; as a counterpart, they only reach a linear order of convergence. Our aim in this paper is to construct iterative processes with a quadratic order of convergence, but without using inverse operators or derivative operators. In addition, we obtain results that allow us to locate the fixed point in a subset of C[a, b].
Notice that Equation (1) can be expressed as:

(I − λN) y(x) = f(x). (4)

Therefore, the solution y*(x) of Equation (1) is given by:

y*(x) = (I − λN)^{-1} f(x). (5)

From a theoretical point of view, Formula (5) gives the exact solution of Equation (1) or (4). However, for practical purposes, the calculation of the inverse (I − λN)^{-1} may be impossible or very complicated. For this reason, we propose the use of iterative methods to approximate this inverse and, therefore, the solution of the integral Equation (1).
In Section 2, we use Newton's method for this purpose to obtain a method with quadratic convergence for calculating inverse operators. In addition, we describe two procedures for approximating the solution of the integral Equation (1), one for separable kernels and another one for non-separable kernels. The approximate solutions obtained by the methods in Section 2 can be used as initial values for the iterative method explained in Section 3 for finding solutions of integral equations. Actually, by combining the techniques given in these two sections, we can obtain, with a low number of steps, good approximations of the solution of the integral Equation (1), especially in the non-separable case. We illustrate the theoretical results of Section 3 with some numerical examples.

Newton's Method for the Calculus of Inverse Operators
As we said, we can calculate the solution of (1) by Formula (5). Therefore, we consider the problem of the approximation of the inverse of the linear operator A = I − λN by means of iterative methods for solving nonlinear equations.
To do this, we consider the space L(C[a, b]) of bounded linear operators from C[a, b] into itself. Actually, in this section, we use Newton's method to approximate the inverse of a given linear operator A ∈ L(C[a, b]), namely for solving the equation:

T(H) = 0, with T(H) = H^{-1} − A.

Therefore, the Newton iteration in this case can be written in the following way:

H_{m+1} = H_m − [T′(H_m)]^{-1} T(H_m), m ≥ 0,

or equivalently (to avoid inverse operators, as recommended in [16]), we compute the step explicitly. Indeed, given T(H) = H^{-1} − A, we have T′(H)(K) = −H^{-1} K H^{-1}, and then:

[T′(H)]^{-1} T(H) = −H (H^{-1} − A) H = −H + H A H.

As a consequence, Newton's method is now given by the following algorithm:

H_0 given, H_{m+1} = 2H_m − H_m A H_m, m ≥ 0. (6)

Observe that, in this case, Newton's method does not use inverse operators for approximating the inverse operator A^{-1}. Now, we prove a local convergence result for the sequence (6). To do this, we suppose that A^{-1} exists or, equivalently, that |λ| ‖N‖ < 1. Therefore, we obtain the following result.
In the rest of this paper, we use the following notation for open and closed balls centered at a point x_0 of a given Banach space X and with radius R:

B(x_0, R) = {x ∈ X : ‖x − x_0‖ < R}, B̄(x_0, R) = {x ∈ X : ‖x − x_0‖ ≤ R}.

Theorem 1. Let θ ∈ (0, 1), and let H_0 be such that ‖A^{-1} − H_0‖ ≤ θ/‖A‖. Then, the sequence {H_m} defined by (6) belongs to B(A^{-1}, θ/‖A‖) and converges quadratically to A^{-1}. In addition:

‖A^{-1} − H_m‖ ≤ θ^{2^m}/‖A‖, m ≥ 0. (8)

Proof. Taking into account the definition of the sequence (6), we have:

A^{-1} − H_{m+1} = A^{-1} − 2H_m + H_m A H_m = (A^{-1} − H_m) A (A^{-1} − H_m). (7)

Then, for H_m ∈ B(A^{-1}, θ/‖A‖), we can apply (7) to obtain:

‖A^{-1} − H_{m+1}‖ ≤ ‖A‖ ‖A^{-1} − H_m‖² ≤ ‖A‖ (θ/‖A‖)² = θ²/‖A‖ < θ/‖A‖,

and, therefore, H_{m+1} ∈ B(A^{-1}, θ/‖A‖). Consequently, applying this inequality recursively and using the hypothesis on H_0:

‖A^{-1} − H_m‖ ≤ ‖A‖^{2^m − 1} ‖A^{-1} − H_0‖^{2^m} ≤ θ^{2^m}/‖A‖,

so the sequence {H_m} converges quadratically to A^{-1}.

Next, we prove a semilocal convergence result, without assuming the existence of A^{-1}.

Theorem 2. Let H_0 be such that ‖I − H_0 A‖ ≤ δ, with δ(1 + δ) < 1. Then, the sequence {H_m} defined by (6) converges to an operator H* such that H* A = I.

Proof.
A direct application of (7) gives us:

I − H_{m+1} A = I − 2H_m A + H_m A H_m A = (I − H_m A)²,

and therefore:

‖I − H_m A‖ ≤ ‖I − H_0 A‖^{2^m} ≤ δ^{2^m}, m ≥ 0. (9)

On the other hand,

H_{m+1} − H_m = H_m − H_m A H_m = (I − H_m A) H_m.

Now, by applying the previous inequality recursively and taking into account (9), we obtain:

‖H_m‖ ≤ (1 + ‖I − H_{m−1} A‖) ‖H_{m−1}‖ ≤ (1 + δ)^m ‖H_0‖. (10)

Consequently, by the definition of the sequence (6), (9), and (10), we have for k ∈ N:

‖H_{m+k} − H_m‖ ≤ Σ_{j=m}^{m+k−1} ‖I − H_j A‖ ‖H_j‖ ≤ ‖H_0‖ Σ_{j=m}^{m+k−1} δ^{2^j} (1 + δ)^j ≤ ‖H_0‖ Σ_{j=m}^{m+k−1} (δ(1 + δ))^j.

Therefore, as δ(1 + δ) < 1, for m = 0, we have:

‖H_k − H_0‖ ≤ ‖H_0‖ Σ_{j=0}^{k−1} (δ(1 + δ))^j ≤ ‖H_0‖/(1 − δ(1 + δ)).

Then, H_k ∈ B̄(H_0, ‖H_0‖/(1 − δ(1 + δ))) for k ≥ 1. Moreover, we obtain:

‖H_{m+k} − H_m‖ ≤ ‖H_0‖ (δ(1 + δ))^m / (1 − δ(1 + δ)),

and it follows that {H_m} is a Cauchy sequence. Then, {H_m} converges to some H*. Moreover, as:

‖I − H_m A‖ ≤ δ^{2^m} → 0,

it follows that lim_{m→∞} (I − H_m A) = 0 and then H* A = I.
Notice that, if we prove that A^{-1} exists, then H* = A^{-1}. On the other hand, if we do not suppose that A^{-1} exists, but we consider H_0 such that H_0 A = A H_0, we have:

H_1 A = (2H_0 − H_0 A H_0) A = A (2H_0 − H_0 A H_0) = A H_1.

Therefore, by an inductive procedure, we obtain that H_m A = A H_m for all m, and then A H* = H* A = I. Therefore, in this case, H* is the inverse operator of A. However, if H_0 A ≠ A H_0, then H* = lim_{m→∞} H_m satisfies only H* A = I, so that the sequence {H_m} converges to a left inverse of A.
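The inverse-free step H_{m+1} = 2H_m − H_m A H_m derived above is, in the matrix setting, the classical Newton–Schulz (Hotelling) iteration, and its quadratic behavior can be checked numerically. A minimal sketch on an illustrative finite-dimensional stand-in for A (the matrix below is an assumption, not taken from the text):

```python
import numpy as np

# Newton's method for T(H) = H^{-1} - A = 0 reduces to the inverse-free step
# H_{m+1} = 2*H_m - H_m*A*H_m (Newton-Schulz iteration in the matrix case).
# A is an illustrative stand-in, close enough to the identity that the seed
# H_0 = I satisfies ||I - H_0*A|| < 1 in the spectral norm.
rng = np.random.default_rng(0)
n = 8
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))

H = np.eye(n)
errs = []
for _ in range(8):
    errs.append(np.linalg.norm(np.eye(n) - H @ A))  # residual ||I - H_m A||
    H = 2 * H - H @ A @ H                           # inverse-free Newton step

print(errs[:4])                              # residual roughly squares each step
print(np.linalg.norm(H - np.linalg.inv(A)))  # H_m converges to A^{-1}
```

The identity I − H_{m+1}A = (I − H_m A)² explains the observed decay: the residual is squared at every step, so a handful of iterations reaches machine precision.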
Our aim in the rest of the section is to face the problem of the calculation of the inverse of the linear operator A = I − λN that appears in the solution of the Fredholm integral Equation (1). We distinguish two situations, depending on whether the kernel N is separable or not. In the first case, an exact solution can be obtained by means of algebraic procedures, whereas in the second case, we use the sequence defined by Newton's method (6) for approximating A^{-1} = (I − λN)^{-1}.

Separable Kernels
In the first case, we assume that N(x, t) is a separable kernel, that is:

N(x, t) = Σ_{j=1}^m α_j(x) β_j(t). (11)

If we denote A_j = ∫_a^b β_j(t) y*(t) dt, we have by (5):

y*(x) = f(x) + λ[N(y*)](x) = f(x) + λ Σ_{j=1}^m A_j α_j(x). (12)

In addition, the integrals A_j can be calculated independently of y*. To do this, we multiply the second equality of (12) by β_i(x), and we integrate in the x variable. Therefore, we have:

A_i = ∫_a^b β_i(x) f(x) dx + λ Σ_{j=1}^m A_j ∫_a^b β_i(x) α_j(x) dx, i = 1, …, m.

Now, if we denote:

a_ij = ∫_a^b β_i(x) α_j(x) dx, b_i = ∫_a^b β_i(x) f(x) dx,

we obtain the following linear system of equations:

A_i = b_i + λ Σ_{j=1}^m a_ij A_j, i = 1, …, m. (13)

Then, we assume that 1/λ is not an eigenvalue of the matrix (a_ij). Thus, if A_1, A_2, …, A_m is the solution of the system (13), we obtain directly the solution:

y*(x) = f(x) + λ Σ_{j=1}^m A_j α_j(x). (14)

From a practical point of view, we can solve the system defined in (13) by using classical techniques, such as LU decomposition or iterative methods for solving linear systems. We can also use any kind of specific scientific software for this purpose. Notice that System (13) depends directly on the integrals needed for computing the coefficients a_ij and b_i. If analytical integration is not possible, a numerical integration formula must be used.
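As a sketch of this procedure, the following code assembles the coefficients a_ij and b_i with a quadrature rule, solves the linear system (13), and recovers the solution via (14). The kernel, λ, and f below are illustrative choices, not taken from the text:

```python
import numpy as np

# Separable kernel N(x,t) = sum_j alpha_j(x)*beta_j(t): assemble
# a_ij = \int beta_i alpha_j dx and b_i = \int beta_i f dx by quadrature,
# solve A_i = b_i + lam*sum_j a_ij A_j, and recover
# y(x) = f(x) + lam*sum_j A_j alpha_j(x).  All data here are illustrative.
a, b, lam = 0.0, 1.0, 0.5
alphas = [lambda x: x, lambda x: x**2]
betas = [lambda t: np.ones_like(t), lambda t: t]
f = lambda x: np.ones_like(x)

s = np.linspace(a, b, 2001)                 # quadrature nodes
w = np.full(s.size, (b - a) / (s.size - 1))
w[0] *= 0.5
w[-1] *= 0.5                                # trapezoidal weights

aij = np.array([[np.sum(w * be(s) * al(s)) for al in alphas] for be in betas])
bi = np.array([np.sum(w * be(s) * f(s)) for be in betas])

# Solvable when 1/lam is not an eigenvalue of (a_ij)
A = np.linalg.solve(np.eye(len(alphas)) - lam * aij, bi)
y = f(s) + lam * sum(Aj * al(s) for Aj, al in zip(A, alphas))

# Residual of the discretized equation: y - lam*N@(w*y) - f should vanish
Nst = sum(np.outer(al(s), be(s)) for al, be in zip(alphas, betas))
res = y - lam * Nst @ (w * y) - f(s)
print(np.max(np.abs(res)))
```

Since the residual is evaluated with the same quadrature weights used to assemble the system, it vanishes up to rounding error.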

Non-Separable Kernels
Now, we wonder what happens when the kernel is not separable. With the aim of reaching a quadratic convergence, we can apply Newton's method to approximate the inverse of the operator A = I − λN defined in (5) to obtain the solution y * (x) of the Fredholm integral Equation (1).
Our idea is to approximate N(x, t) by a separable kernel Ñ(x, t), that is:

N(x, t) = Ñ(x, t) + R(x, t), (15)

where Ñ(x, t) = Σ_{i=1}^m α_i(x) β_i(t) and R(x, t) is the error in the approximation. We consider the operator:

[Ñ(y)](x) = ∫_a^b Ñ(x, t) y(t) dt. (16)

With this operator, we can consider:

H_0 = (I − λÑ)^{-1} (17)

as the starting seed in the iterative process (6) to approximate A^{-1}.
To check if H_0 defined in (17) is a good choice, we can apply the previous semilocal convergence result. We need to guarantee that ‖I − H_0 A‖ ≤ δ, with δ(1 + δ) < 1. In this case, we have:

I − H_0 A = I − (I − λÑ)^{-1}(I − λN) = λ (I − λÑ)^{-1} R,

where:

[R(y)](x) = ∫_a^b R(x, t) y(t) dt.

Consequently, if the error R in (15) is small enough, H_0 can be considered a good starting point for the iterative process (6). If, for example, N(x, t) is sufficiently differentiable in one of its arguments, we can apply the Taylor series to calculate the approximation given in (15), and then the remainder of the Taylor series allows us to establish how close R is to zero. In general, this approximation improves with the number of terms of the Taylor expansion. Now, we compute m steps of Newton's method (6) for approximating A^{-1} = (I − λN)^{-1}. Therefore, once H_m is calculated, we consider H_m f(x) as an approximate solution of the Fredholm integral Equation (1). This is the main idea developed in the next section.

A Picard-Type Iterative Scheme from the Newton Method
As we said in the previous section, our target now is to obtain the solution y* of Equation (1). As the limit H* of the sequence of linear operators defined in (6) satisfies H* A = I, with A = I − λN, we have:

H* f(x) = H*(A y*)(x) = y*(x).

Therefore, for approximating the solution y* of Equation (1), we can consider the following iterative scheme, for A = I − λN:

y_0 = H_0 f, y_{m+1} = 2y_m − H_m A y_m, m ≥ 0, (18)

where {H_m} is the sequence defined in (6). Notice that, by induction, y_m = H_m f for all m ≥ 0. From the previous theorems, it is easy to prove the local and semilocal convergence of the iterative scheme (18).

Theorem 3. Under the hypotheses of Theorem 1, the sequence {y_m} defined by (18) belongs to B(y*, θ‖f‖/‖A‖) and converges quadratically to y*, the solution of Equation (1).

Proof. From Equation (7), for m = 0, we obtain:

y* − y_1 = (A^{-1} − H_1) f = (A^{-1} − H_0) A (A^{-1} − H_0) f,

and:

‖y* − y_1‖ ≤ ‖A‖ ‖A^{-1} − H_0‖² ‖f‖ ≤ θ² ‖f‖/‖A‖ < θ ‖f‖/‖A‖.

Now, by a mathematical inductive procedure, it is easy to prove that y_m ∈ B(y*, θ‖f‖/‖A‖) for m ∈ N. On the other hand, taking into account (8), we have:

‖y* − y_m‖ = ‖(A^{-1} − H_m) f‖ ≤ θ^{2^m} ‖f‖/‖A‖.

Then, the result follows directly. Now, to prove a semilocal convergence result for the iterative scheme (18), we will not assume the existence of A^{-1}.
Theorem 4. Let H_0 be such that ‖I − H_0 A‖ ≤ δ, with δ(1 + δ) < 1. Then, the sequence {y_m} defined by (18) belongs to B̄(y_0, ‖H_0‖‖f‖/(1 − δ(1 + δ))) and converges quadratically to y*, the solution of Equation (1).
Proof. If we consider the sequence {y_m} given by (18), with y_m = H_m f, we have:

‖y_{m+1} − y_m‖ = ‖(H_{m+1} − H_m) f‖ ≤ δ^{2^m} (1 + δ)^m ‖H_0‖ ‖f‖;

then, as in the proof of Theorem 2, we obtain:

‖y_{m+k} − y_m‖ ≤ ‖H_0‖ ‖f‖ (δ(1 + δ))^m / (1 − δ(1 + δ)),

and we obtain that {y_m} is a Cauchy sequence. Then, {y_m} converges to some limit ȳ.
Notice that ȳ is the solution of Equation (1) if we verify that A ȳ(x) = f(x). Then, from Theorem 2, the sequence {H_m} converges to H* with H* A = I, and ȳ(x) = lim_{m→∞} H_m f(x) = H* f(x), so that, when H_0 commutes with A, we also have A H* = I and A ȳ(x) = f(x). We would like to indicate that this result allows us to locate the solution of Equation (1) in the closed ball B̄(y_0, ‖H_0‖‖f‖/(1 − δ(1 + δ))).
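A minimal discretized sketch of the resulting inverse-free Picard-type scheme, taking y_{m+1} = 2y_m − H_m(A y_m) together with the Newton update of H_m (so that y_m = H_m f). The grid, kernel, λ, and f are illustrative assumptions, with f manufactured so that the exact solution is y*(x) = x:

```python
import numpy as np

# Inverse-free Picard-type scheme: y_{m+1} = 2*y_m - H_m(A y_m), with
# H_{m+1} = 2*H_m - H_m*A*H_m and A = I - lam*N, so that y_m = H_m f.
# Trapezoidal Nystrom discretization; all data are illustrative.
a, b, lam, n = 0.0, 1.0, 0.5, 201
x = np.linspace(a, b, n)
w = np.full(n, (b - a) / (n - 1))
w[0] *= 0.5
w[-1] *= 0.5

K = np.outer(x, x) * w            # discrete action of (N y)(x) = \int x*t y(t) dt
A = np.eye(n) - lam * K
f = x - lam * x / 3.0             # manufactured so that y*(x) = x

H = np.eye(n)                     # seed: valid here since ||lam*K|| < 1
y = H @ f                         # y_0 = H_0 f
for _ in range(6):
    y = 2 * y - H @ (A @ y)       # Picard-type step with the current H_m
    H = 2 * H - H @ A @ H         # Newton step for the inverse

print(np.max(np.abs(y - x)))      # quadratic convergence to y*(x) = x
```

Note that neither A^{-1} nor any derivative is ever computed: the scheme only applies A and the current approximation H_m.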

Examples
We illustrate the theoretical results obtained in the previous sections with some examples. Firstly, we examine a case with a separable kernel. In this case, the technique developed in Section 2.1 can be applied.
Therefore, the solution of the linear system (13) is A_1 = −1/2 and A_2 = 2/π, and by (14), we obtain the solution of the integral Equation (20). In the following example, we establish a procedure for approximating the solution of an integral equation with a non-separable kernel. To this end, we approximate the given integral equation by another integral equation with a separable kernel. Next, we find the exact solution of this last integral equation with the technique developed in Section 2.1. This exact solution provides a good approximation of the solution of the original integral equation with the non-separable kernel.
Then, we consider the linear Fredholm integral equation (23). Obviously, the difference between the solutions of Equations (21) and (23) depends on the remainder R(θ, x, t). The more terms we consider in the expansion of N(x, t), the smaller the remainder, and therefore the closer the solutions of (21) and (23) are. Note that, moreover, Equation (23) has a separable kernel, and we can obtain its exact solution by following the procedure shown in Section 2.1.

Example 3.
We consider the Fredholm integral equation (24). It is easy to check that y*(x) = x² − 1 is its solution.
Obviously, in this case, f(x) = 2x² − 1/3, and the kernel N(x, t) = x³ e^{xt} is non-separable. By Taylor's theorem, for each x and t there exists θ ∈ (0, t) such that e^{xt} equals its Taylor polynomial in t plus a remainder depending on θ. Thus, if we consider m = 3 terms, we obtain a separable approximation Ñ(x, t) = Σ_{i=1}^3 α_i(x) β_i(t) for suitable real functions α_i and β_i. We consider as the initial approximation in the sequence (6) the operator H_0 = (I − λÑ)^{-1}. By the Banach lemma on inverse operators, H_0 exists and ‖H_0‖ ≤ 9. Consequently, we can use the algorithm (18) with this H_0 to approximate the solution of our problem. Actually, we can obtain the initial approximation y_0(x) = H_0 f(x) by following the procedure shown in Section 2.1; in this case, we have b_1 = (11 − 6e)/9. Next, to calculate the first approximation y_1(x), we use (18) to get:

y_1 = 2y_0 − H_0 A y_0 = 2y_0 − H_0 y_0 + λ H_0 N y_0.

We can compute H_0 y_0(x) by the same technique described for separable kernels, just replacing f(x) with y_0(x). As N is a non-separable kernel, we can approximate N y_0(x) by Ñ y_0(x) and follow the same procedure of Section 2.1. Consequently, we obtain an approximation for y_1(x). As we can see in Figure 2, y_1(x) improves considerably on the initial approximation y_0(x). On the left side of Figure 2, we can appreciate that y_1(x) practically overlaps the exact solution x² − 1. On the right side, we plot the errors committed by y_0(x) and y_1(x) in approximating the exact solution, that is, the error functions:

E_i(x) = |y_i(x) − x² + 1|, i = 0, 1,

where y_i(x), i = 0, 1, are the functions defined above by following our procedure. If we consider more terms in the Taylor approximation (25), we can obtain a better approximation of the exact solution of (24). Finally, we compare our procedure with the classical Picard iterative method defined in (3):

p_{i+1}(x) = f(x) + ∫_a^b x³ e^{xt} p_i(t) dt, p_0(x) = f(x), i ≥ 0. (26)
On the left side of Figure 3, we plot the corresponding errors committed by Picard's method. In this case, the error functions are:

P_i(x) = |p_i(x) − x² + 1|, i = 1, 2, 3. (27)

On the right side of this figure, we can appreciate that the error committed by only one iteration of our method is smaller than the error obtained with three Picard iterations and is closer to the error committed by four Picard iterations.

Figure 3. On the left, the error functions P_i(x). On the right, comparison among E_1(x), P_3(x), and P_4(x).

Conclusions
In this work, we consider the numerical solution of Fredholm integral equations of the second kind. We transform this problem into another one where the key is to approximate the inverse of a given operator that allows us to solve the integral equation. In this way, we construct an iterative procedure based on an important characteristic of Newton's method: it does not use the inverse when it is applied to the nonlinear problem of calculating the inverse of an operator (see [14]). With this idea, we obtain a Picard-type iterative method, with quadratic convergence, that does not use either derivatives or inverse operators. This iterative method is more efficient and precise than the classical Picard iteration that is usually used for solving this kind of problem, at least when a discretization procedure is not used.
We think our method can be considered a good way of obtaining starting points, that is, first approximations of the solution of Fredholm integral equations. Therefore, we can see our method as a good predictor: in a first stage, it provides a starting point, which can then be refined by another, corrector, iterative method.