On the Existence of Solutions of Nonlinear Fredholm Integral Equations from Kantorovich's Technique

The well-known Kantorovich technique based on majorizing sequences is used to analyse the convergence of Newton’s method when it is used to solve nonlinear Fredholm integral equations. In addition, we obtain information about the domains of existence and uniqueness of a solution for these equations. Finally, we illustrate the above with two particular Fredholm integral equations.


Introduction
Integral equations have numerous applications in almost all branches of the sciences, and many physical processes and mathematical models in engineering are governed by integral equations. The main feature of these equations is that they are usually nonlinear. In particular, nonlinear integral equations arise in fluid mechanics, biological models, solid state physics, chemical kinetics, etc. In addition, many initial and boundary value problems can easily be turned into integral equations. One particularly interesting type is the nonlinear Fredholm integral equation of the form

x(s) = f(s) + λ ∫_a^b K(s,t) x(t)^p dt,  s ∈ [a,b],  p ≥ 2,   (1)

where λ ∈ R, −∞ < a < b < +∞, the function f(s) is continuous on [a,b] and given, the kernel K(s,t) is a known continuous function on [a,b] × [a,b], and x is the solution to be determined. As integral equations of the form (1) cannot, in general, be solved exactly, we use numerical methods to solve them; different numerical techniques can be applied, and some of them can be found in the references of this work.
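Before turning to Newton's method, a minimal numerical sketch may help fix ideas. The following fragment (our own illustration, not taken from the paper) approximates a solution of an equation of form (1) by simple successive substitution (Picard iteration) on a trapezoidal grid, using the separable test kernel of Application 1 below; the grid size and iteration count are arbitrary choices.

```python
import numpy as np

# Fixed-point (Picard) iteration for x(s) = f(s) + lam * ∫_0^1 K(s,t) x(t)^p dt,
# discretized with the composite trapezoidal rule.  The data follow the
# separable test equation of Application 1; this is an illustrative sketch,
# not the method analysed in the paper (which is Newton's method).
N = 201
s = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1)); w[0] = w[-1] = 0.5 / (N - 1)  # trapezoid weights

lam, p = 1.0 / 5.0, 3
f = np.sin(np.pi * s)
K = np.cos(np.pi * s)[:, None] * np.sin(np.pi * s)[None, :]  # K(s,t) = cos(pi s) sin(pi t)

x = f.copy()
for _ in range(50):
    # x_{n+1}(s) = f(s) + lam * ∫_0^1 K(s,t) x_n(t)^p dt
    x = f + lam * K @ (w * x**p)
```

For this test problem the iteration converges to x*(s) = sin(πs) + c·cos(πs) with c ≈ 0.0754; Picard iteration works here only because the integral operator happens to be contractive, which Newton's method does not require.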
For a general background on numerical methods for integral equations of the form (1), the books of Atkinson [1] and Delves and Mohamed [2] are recommended. For a review of less recent methods, we refer the reader to the survey by Atkinson [3]. There is a great deal of published work on the numerical solution of Equation (1). In recent publications, different mathematical tools and numerical implementations have been applied to solve integral equations of the form (1). In some of these publications, the authors make extensive use of methods based on different kinds of wavelets [4,5]. Polynomial approximation methods using different basis functions, such as Chebyshev polynomials, have been introduced; see, for example, [6,7]. An approximation with Sinc functions has been developed in [8]; Sinc methods have increasingly been recognized as powerful tools for tackling problems in applied physics and engineering [9]. Several other numerical or theoretical studies on (1) have been developed in the literature; for some examples, see [10,11]. In terms of iterative schemes for solving Equation (1), in [12] we can find an iterative scheme based on the homotopy analysis method, a general analytic approach, based on homotopy, for obtaining series solutions of various types of nonlinear equations. In particular, by means of this method, we can construct a continuous mapping of an initial guess onto the exact solution of the equation to be solved. In [13], the authors present an adapted modification of the Newton-Kantorovich method. Finally, in [14], the Newton-Kantorovich method and quadrature methods are combined to develop a new method for solving Equation (1).
In this work, we propose using Newton's method for solving Equation (1). To do this, we first analyse the semilocal convergence of the method and then compare its efficacy with that of the former techniques for solving a particular integral equation of the form (1). Semilocal convergence results require conditions on the operator involved in the equation to be solved and on the starting points of the iterative methods; these results establish the existence of solutions of the equation and allow us to obtain the domain of existence of a solution.
The main interest of this work is two-fold. On the one hand, we conduct a qualitative study of Equation (1) and obtain results on the existence and uniqueness of a solution. On the other hand, we carry out the numerical resolution of the equation. For the latter, we first consider a separable kernel K(s,t) and directly approximate a solution of Equation (1); secondly, by means of Taylor series, we consider the case of a non-separable kernel. For both aims, we use Newton's method, which is the most well-known iterative method for solving nonlinear equations.
For the first aim, we study the application of Newton's method to Equation (1) by analysing the convergence of the method and use its theoretical significance to draw conclusions about the existence and uniqueness of a solution, so that we can locate a solution of the equation within a domain of existence of solutions and then obtain a domain of uniqueness of solutions that allows us to isolate the solution previously located from other possible solutions of the equation. To achieve this aim, we use Kantorovich's technique [15], which was developed by the Russian mathematician L. V. Kantorovich at the beginning of the 1950s and is based on the concept of a "majorizing sequence", which will be introduced later. For the second aim, we apply Newton's method to numerically solve Equation (1).
This paper is organized as follows. In Section 2, we consider a particular equation of the form (1) and present the above-mentioned Kantorovich technique by introducing the concept of a "majorizing sequence". In Section 3, from the theoretical significance of Newton's method, we obtain information about the existence and uniqueness of a solution for the nonlinear Fredholm integral equations introduced in Section 2. Finally, in Section 4, we illustrate all of the above with two applications involving nonlinear Fredholm integral equations, considering both separable and non-separable kernels.

Kantorovich's Technique
As mentioned in the introduction, this paper has two main aims: to obtain conclusions about the existence and uniqueness of a solution of (1) by using the theoretical significance of Newton's method and to numerically approximate a solution of (1).
It is clear that solving (1) is equivalent to solving the equation F(x) = 0, where F : Ω ⊆ C[a,b] → C[a,b] is given by

[F(x)](s) = x(s) − f(s) − λ ∫_a^b K(s,t) x(t)^p dt,   (2)

with the power x(t)^p understood in the usual sense if p ∈ N and suitably defined for any other value of p.
For solving the equation F(x) = 0, Newton's method is

x_{n+1} = x_n − [F′(x_n)]^{−1} F(x_n),  n ≥ 0,  with x_0 given.

The method has already been applied to approximate solutions of nonlinear integral equations [16,17]. However, the novelty of this work lies in using Kantorovich's technique to obtain a convergence result for Newton's method when it is applied to solve (1) and, as a consequence, in using the theoretical significance of the method to draw conclusions about the existence and uniqueness of a solution of (1) and about the region in which it is located, without finding the solution itself; this is sometimes more important than actual knowledge of the solution. This is achieved by constructing an ad hoc scalar function which is used to define a majorizing sequence, instead of using the classical quadratic polynomial of Kantorovich.
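As an illustrative sketch (our own discretization, not one prescribed by the paper), Newton's method for (1) can be applied after a trapezoidal-rule discretization; the data below again follow the separable test equation of Application 1, and the grid size is an arbitrary choice.

```python
import numpy as np

# Newton's method for the discretized operator
#   F(x)_i = x_i - f(s_i) - lam * sum_j w_j K(s_i, t_j) x_j^p,
# whose Jacobian is F'(x)_{ij} = delta_{ij} - lam * p * w_j K(s_i, t_j) x_j^{p-1}.
N = 201
s = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1)); w[0] = w[-1] = 0.5 / (N - 1)

lam, p = 1.0 / 5.0, 3
f = np.sin(np.pi * s)
K = np.cos(np.pi * s)[:, None] * np.sin(np.pi * s)[None, :]

x = f.copy()                      # starting point x_0(s) = sin(pi s)
for _ in range(10):
    F = x - f - lam * K @ (w * x**p)
    J = np.eye(N) - lam * p * K * (w * x**(p - 1))[None, :]
    step = np.linalg.solve(J, F)
    x = x - step                  # x_{n+1} = x_n - [F'(x_n)]^{-1} F(x_n)
    if np.max(np.abs(step)) < 1e-14:
        break
```

For this problem the iterates converge in a handful of steps to x(s) ≈ sin(πs) + 0.0754·cos(πs), in agreement with the approximation reported in Application 1.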
Kantorovich's technique consists of translating the problem of solving the equation F(x) = 0 in Ω into solving a scalar equation ϕ(t) = 0, and this is done once x_0 ∈ Ω is fixed under certain conditions. In addition, the domains of existence and uniqueness of a solution for Equation (1) can be determined from the positive solutions of ϕ(t) = 0.
The idea of Kantorovich's technique is simple: once a real number t_0 is fixed, we define the scalar iterative method

t_{n+1} = N_ϕ(t_n) = t_n − ϕ(t_n)/ϕ′(t_n),  n ≥ 0,   (3)

such that

‖x_{n+1} − x_n‖ ≤ t_{n+1} − t_n,  n ≥ 0.   (4)

Condition (4) means that the scalar sequence {t_n} majorizes the sequence {x_n} or, in other words, that {t_n} is a majorizing sequence of {x_n}. Obviously, if {t_n} is convergent, then {x_n} is as well. Therefore, the convergence of the sequence {x_n} is a consequence of the convergence of the sequence {t_n}, and the latter problem is much easier than the former one.
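As a minimal illustration, the scalar iteration (Newton's method applied to a function ϕ) that generates a majorizing sequence can be sketched as follows. The cubic below is hypothetical (its coefficients are chosen for the example and are not taken from the paper); like the functions constructed later in the text, it satisfies ϕ(0) > 0 and ϕ′(0) = −1.

```python
# Newton's method applied to a scalar function phi(t), with t_0 = 0,
# producing a majorizing sequence {t_n} as in (3).
def phi(t):
    return 0.1 * t**3 + 0.3 * t**2 - t + 0.075

def dphi(t):
    return 0.3 * t**2 + 0.6 * t - 1.0

t, seq = 0.0, [0.0]
for _ in range(25):
    t = t - phi(t) / dphi(t)   # t_{n+1} = N_phi(t_n)
    seq.append(t)
```

Since ϕ is convex and decreasing at t = 0, the sequence is increasing and converges to the smallest positive zero of ϕ, exactly the behaviour required of a majorizing sequence.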

The Auxiliary Scalar Function
We begin by analysing the operator F(x) given in (2). From (2), it follows that the Fréchet derivatives of the operator F are

[F′(x)y](s) = y(s) − λp ∫_a^b K(s,t) x(t)^{p−1} y(t) dt,

[F^{(k)}(x)(y_1 ⋯ y_k)](s) = −λ p(p−1)⋯(p−k+1) ∫_a^b K(s,t) x(t)^{p−k} y_1(t) ⋯ y_k(t) dt,

for k = 2, 3, …, [p], where [p] denotes the integer part of the real number p ≥ 2.
In addition,

‖F^{(k)}(x)‖ ≤ |λ| p(p−1)⋯(p−k+1) S (t − t_0 + ‖x_0‖)^{p−k},  k = 2, 3, …, [p],

provided that ‖x − x_0‖ ≤ t − t_0, where S = max_{s∈[a,b]} ∫_a^b |K(s,t)| dt. On the other hand, we observe that the existence of the operator [F′(x_0)]^{−1} must be guaranteed in the first step of Newton's method, since x_1 = x_0 − [F′(x_0)]^{−1} F(x_0). The existence of [F′(x_0)]^{−1} follows from the Banach lemma on invertible operators: provided that

|λ| p S ‖x_0‖^{p−1} < 1,   (6)

the operator [F′(x_0)]^{−1} exists and is such that

‖[F′(x_0)]^{−1}‖ ≤ 1/(1 − |λ| p S ‖x_0‖^{p−1}).

In addition, we denote β = 1/(1 − |λ| p S ‖x_0‖^{p−1}) and η = β‖F(x_0)‖. Now, we consider p ≥ 3 and denote

ω_k(t; t_0) = β |λ| p(p−1)⋯(p−k+1) S (t − t_0 + ‖x_0‖)^{p−k},  k = 2, 3, …, [p].

Then, as a consequence of the latter, we can find scalar functions y(t) such that y^{(k)}(t) = ω_k(t; t_0), for k = 2, 3, …, [p], to construct a majorizing sequence {t_n} as that given in (3) by solving the following initial value problem (see [18]):

y^{(k)}(t) = ω_k(t; t_0),  y(t_0) = η,  y′(t_0) = −1,  y^{(j)}(t_0) = ω_j(t_0; t_0),  j = 2, …, k − 1.

It is easy to see that this initial value problem has only one solution, namely

ϕ(t) = η − (t − t_0) + β|λ|S( (t − t_0 + ‖x_0‖)^p − ‖x_0‖^p − p‖x_0‖^{p−1}(t − t_0) ).   (7)

Notice that the scalar function defined in (7), used to construct the scalar sequence {t_n} given in (3), which majorizes {x_n} in Ω, is independent of k, so we can choose any k = 2, 3, …, [p] to construct the last initial value problem that gives us ϕ(t).
If p ∈ [2, 3), then, using only condition (5), we consider an analogous initial value problem of second order, whose unique solution is also (7).
Once such a majorizing sequence {t_n} is determined from ϕ(t), we then have to prove its convergence. For this, it is well known [15] that it is necessary for the scalar function ϕ(t) to have at least one positive real zero greater than or equal to t_0; the sequence {t_n} is then increasing and convergent to this zero.

The Majorizing Sequence
We begin by studying the function given in (7). Firstly, we notice that we considered an arbitrary t_0 ≥ 0 in the last section, but we can take t_0 = 0, so that the function ϕ(t) reduces to

φ(t) = η − t + β|λ|S( (t + ‖x_0‖)^p − ‖x_0‖^p − p‖x_0‖^{p−1} t ).   (8)

This is a consequence of the fact that φ(t) = ϕ(t + t_0), which implies that the sequence {t_n = N_ϕ(t_{n−1})}_{n∈N}, for any t_0 > 0, satisfies t_n = N_ϕ(t_{n−1}) = t_0 + N_φ(s_{n−1}), n ∈ N, where s_n = N_φ(s_{n−1}) with s_0 = 0, since we have, for t_0 ≥ 0 and s_0 = 0,

t_n = t_0 + s_n, for all n ∈ N.

Therefore, the real sequences {t_n} and {s_n} given by Newton's method, when constructed from ϕ(t) and φ(t), respectively, can be obtained one from the other by translation. Besides, t_n − t_{n−1} = s_n − s_{n−1}, for all n ∈ N, and all the results obtained previously are independent of the value t_0 ≥ 0, so we choose t_0 = 0 because, in practice, it is the most favourable situation. Secondly, we denote σ = min{t > 0 : φ′(t) ≥ 0}, where φ(t) is given in (8). Note that σ is the unique positive real zero of φ′(t) in (0, +∞), since φ′(0) = −1 < 0, φ″(t) > 0 for t > 0 and φ′(t) > 0 as t → +∞.
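The quantity σ can be located explicitly when φ is a cubic, since φ′ is then a quadratic. The sketch below reuses the hypothetical cubic from the earlier illustration (coefficients chosen for the example, not taken from the paper) and checks the hypothesis φ(σ) ≤ 0 used in the results that follow.

```python
import numpy as np

# sigma = min{ t > 0 : phi'(t) >= 0 } for a hypothetical cubic with
# phi(0) > 0 and phi'(0) = -1.  Since phi'' > 0 for t > 0, sigma is the
# unique positive zero of phi'.
def phi(t):
    return 0.1 * t**3 + 0.3 * t**2 - t + 0.075

def dphi(t):
    return 0.3 * t**2 + 0.6 * t - 1.0

# positive root of the quadratic phi'(t) = 0.3 t^2 + 0.6 t - 1
sigma = (-0.6 + np.sqrt(0.6**2 + 4 * 0.3)) / (2 * 0.3)

# convergence hypothesis of Theorem 2: phi(sigma) <= 0
ok = phi(sigma) <= 0.0
```

Here σ ≈ 1.08 and φ(σ) < 0, so for this example the convergence criterion is satisfied.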
Thirdly, taking into account the classical Fourier conditions [19] for the convergence of Newton's method in the scalar case, we establish in the following result that the sequence {t_n} is increasing and converges to r.
Theorem 2. If φ(σ) ≤ 0, then the sequence {t_n} is increasing and converges to the smallest positive real zero r of φ(t).
Fourthly, we prove a system of recurrence relations in the next theorem that guarantees that {t_n} is a majorizing sequence of {x_n} in Ω; its proof is similar to that given for Lemma 7 in [18].

Theorem 3. Suppose that x_n ∈ Ω, for all n ≥ 0, and p ≥ 3. If φ(σ) ≤ 0, then the four bounds (i)-(iv) are satisfied for all n ∈ N. Note that (i), (ii) and (iv) are obvious for n = 0, and (iii) is not needed to prove (iv), since (iv) follows from the initial condition ‖[F′(x_0)]^{−1}F(x_0)‖ ≤ η.
Finally, if p ∈ [2, 3), we obtain a result similar to the last theorem, which can be seen in [20].

Existence and Uniqueness of a Solution
Following Kantorovich's technique, the convergence of the sequence {x_n} in Ω is then guaranteed by the convergence of the sequence {t_n}, since {t_n} majorizes {x_n}, which allows us to draw conclusions about the location of a solution of Equation (1). After locating a solution of Equation (1), we establish its uniqueness. For this, from now on, we denote B(x_0, ρ) = {x ∈ C[a,b] : ‖x − x_0‖ ≤ ρ}.

Theorem 4. Let x_0 ∈ Ω be such that condition (6) is satisfied and let φ(t) be the function defined in (8). If φ(σ) ≤ 0, where σ = min{t > 0 : φ′(t) ≥ 0}, and B(x_0, r) ⊂ Ω, then Equation (1) has a solution x*(s) in B(x_0, r), and it is unique in B(x_0, R) ∩ Ω if r < R, or in B(x_0, r) if r = R, where r and R are the two positive real zeros of φ(t).
Proof. From (i) and (ii), it is clear that ‖x_1 − x_0‖ ≤ t_1 < r and x_1 ∈ B(x_0, r) ⊂ Ω. If we now suppose that x_j ∈ B(x_0, r) ⊂ Ω, for j = 1, 2, …, n − 1, it follows from Theorem 3 that the operator [F′(x_{n−1})]^{−1} exists, and therefore x_n ∈ B(x_0, r) and x_n is well defined.
After that, since, from Theorem 2, the sequence {t_n} is increasing and bounded above by r, it converges, lim_n t_n = r, and is therefore a Cauchy sequence; as a consequence of the majorization, {x_n} is also a Cauchy sequence. Thus, {x_n} is convergent, lim_n x_n = x*, and

‖x* − x_n‖ ≤ r − t_n,  n ≥ 0.

In addition, the combination of this and (iii) yields F(x*) = 0, where F is defined in (2). Next, from Section 2.1, we have ‖F″(x)‖ ≤ ϕ″(t), provided that ‖x − x_0‖ ≤ t − t_0. Now, as t_0 = 0, it is clear that ‖F″(x)‖ ≤ φ″(t), for ‖x − x_0‖ ≤ t, and, as a consequence of this, the uniqueness of the solution x*(s) follows exactly as in Theorem 11 of [18].

Applications
In this section, we present two applications that illustrate the study carried out above. The two applications arise from the two possibilities for the kernel K(s, t): it may be separable or not.

Application 1
We first consider the following nonlinear Fredholm integral equation, with s ∈ [0, 1], which has been used by other authors as a numerical test [13,21]:

x(s) = sin(πs) + (1/5) ∫_0^1 cos(πs) sin(πt) x(t)^3 dt.   (10)

Observe that, in this case, the kernel K(s,t) = cos(πs) sin(πt) is separable. Firstly, we apply Theorem 4 to obtain domains of existence and uniqueness of a solution. For this, we observe that the corresponding operator F(x) defined in (2) and associated with (10) is defined on Ω = C[0,1]. We then observe that condition (6) is required in Theorem 4. However, if we pay attention to the integral equation, we observe that the kernel is separable, and we can then determine the corresponding operator [F′(x)]^{−1} directly. For this, we write [F′(x)y](s) = z(s), so that, if [F′(x)]^{−1} exists, we have

y(s) = z(s) + (3/5) cos(πs) ∫_0^1 x(t)^2 sin(πt) y(t) dt.
If we now denote I = (3/5) ∫_0^1 x(t)^2 sin(πt) y(t) dt, multiply the next-to-last equality by (3/5) x(s)^2 sin(πs) and integrate between 0 and 1, we obtain

I = (3/5) ∫_0^1 x(s)^2 sin(πs) z(s) ds / ( 1 − (3/5) ∫_0^1 x(s)^2 sin(πs) cos(πs) ds ).

Therefore, [F′(x)^{−1} z](s) = z(s) + cos(πs) I. Now, as a consequence of the last formula, condition (6), which is required to prove the existence of the inverse operator [F′(x_0)]^{−1}, can be omitted, provided that

(3/5) ∫_0^1 x_0(t)^2 sin(πt) cos(πt) dt ≠ 1.

Therefore, it is sufficient to choose a starting point x_0(s) for Newton's method such that the previous inequality holds. As x_0(s) = sin(πs) is a reasonable choice of starting point for Newton's method, as we can see in [12-14], the last inequality holds, since ∫_0^1 sin(πt) cos(πt) x_0(t)^2 dt = 0, and condition (6) can be omitted.
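Because the kernel is separable, F′(x) is a rank-one perturbation of the identity, and the closed-form inverse above is a Sherman-Morrison-type formula. The sketch below (our own numerical check, with an arbitrary right-hand side and grid size) verifies the formula against a direct linear solve on a trapezoidal grid.

```python
import numpy as np

# Closed-form inverse of F'(x) for K(s,t) = cos(pi s) sin(pi t):
#   y(s) = z(s) + cos(pi s) * I,
#   I = (3/5) ∫ x^2 sin(pi t) z dt / (1 - (3/5) ∫ x^2 sin(pi t) cos(pi t) dt),
# checked numerically against a direct solve of the discretized system.
N = 201
s = np.linspace(0.0, 1.0, N)
w = np.full(N, 1.0 / (N - 1)); w[0] = w[-1] = 0.5 / (N - 1)

x = np.sin(np.pi * s)            # evaluation point x(s) = x_0(s)
z = np.cos(3 * s) + s**2         # arbitrary right-hand side z(s)

num = (3.0 / 5.0) * np.sum(w * x**2 * np.sin(np.pi * s) * z)
den = 1.0 - (3.0 / 5.0) * np.sum(w * x**2 * np.sin(np.pi * s) * np.cos(np.pi * s))
y_formula = z + np.cos(np.pi * s) * (num / den)

# direct solve of the discretized equation F'(x) y = z
Jac = np.eye(N) - (3.0 / 5.0) * np.cos(np.pi * s)[:, None] \
      * (w * np.sin(np.pi * s) * x**2)[None, :]
y_direct = np.linalg.solve(Jac, z)
```

On the discrete level the two answers agree to machine precision, since the formula is the exact inverse of the rank-one-perturbed matrix.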
After that, taking into account that λ = 1/5, S = 2/π, p = 3, ‖x_0‖ = 1 and ‖F(x_0)‖ = 3/40, we construct the auxiliary scalar function

φ(t) = η − t + β|λ|S( (t + 1)^3 − 1 − 3t ),

with β = 1/(1 − |λ|pS) and η = β‖F(x_0)‖, and see that it has two positive real zeros, r = 0.1327… and R = 1.0589… Therefore, the domains of existence and uniqueness of a solution of Equation (10) are, respectively, B(sin(πs), 0.1327…) and B(sin(πs), 1.0589…). On the other hand, we can write the function φ(t) in factored form and obtain a priori error estimates from Ostrowski's technique [19], which allow us to determine the number of iterations of Newton's method needed to reach a previously fixed precision.
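The following sketch evaluates the auxiliary cubic from the parameter values stated above (the explicit form of φ is our reconstruction from Section 2.1) and recovers its positive zeros, which should reproduce the stated values r = 0.1327 and R = 1.0589.

```python
import numpy as np

# Auxiliary cubic for Application 1, reconstructed from
#   lambda = 1/5, S = 2/pi, p = 3, ||x0|| = 1, ||F(x0)|| = 3/40:
#   phi(t) = eta - t + beta*|lambda|*S*((t + 1)^3 - 1 - 3 t)
#          = c t^3 + 3 c t^2 - t + eta,   with c = beta*|lambda|*S.
lam, S = 0.2, 2.0 / np.pi
beta = 1.0 / (1.0 - lam * 3 * S)      # ||x0|| = 1, so ||x0||^{p-1} = 1
eta = beta * (3.0 / 40.0)
c = beta * lam * S

roots = np.roots([c, 3.0 * c, -1.0, eta])
pos = np.sort(roots.real[np.abs(roots.imag) < 1e-9])
pos = pos[pos > 0]                     # the two positive zeros r < R
```

The computed positive zeros are ≈ 0.1327 and ≈ 1.0589, matching the domains of existence and uniqueness reported in the text; the third zero of the cubic is negative and plays no role.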
For this, we write α_n = r − t_n and γ_n = R − t_n, for all n ≥ 0, where P = 0.7718, Q = 0.7858, U = 0.0985 and V = 0.0967; then, taking into account that γ_{n+1} = (R − r) + α_{n+1}, we obtain the corresponding bounds for all n ≥ 0. In Table 1, we can see the a priori error estimates, which exhibit the well-known quadratic convergence of Newton's method. To approximate a solution of Equation (10), we compare the obtained results with those given by other authors when different numerical methods are applied to solve (10).
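The a priori estimates of Table 1 can be reproduced directly from the majorizing sequence: computing t_n by Newton's method on φ and forming α_n = r − t_n makes the quadratic decay visible. The sketch below uses our reconstruction of φ for Application 1 (same parameter values as above).

```python
import numpy as np

# A priori error estimates alpha_n = r - t_n for Application 1, using the
# cubic phi reconstructed from lambda = 1/5, S = 2/pi, p = 3, ||x0|| = 1,
# ||F(x0)|| = 3/40 (not the paper's Ostrowski closed-form bounds).
lam, S = 0.2, 2.0 / np.pi
beta = 1.0 / (1.0 - lam * 3 * S)
eta = beta * (3.0 / 40.0)
c = beta * lam * S

phi  = lambda t: eta - t + c * ((t + 1.0) ** 3 - 1.0 - 3.0 * t)
dphi = lambda t: -1.0 + c * (3.0 * (t + 1.0) ** 2 - 3.0)

# smallest positive zero r of phi, by bisection on [0, 0.5]
lo, hi = 0.0, 0.5
for _ in range(200):
    m = 0.5 * (lo + hi)
    lo, hi = (m, hi) if phi(m) > 0 else (lo, m)
r = 0.5 * (lo + hi)

t, errs = 0.0, []
for _ in range(6):
    errs.append(r - t)          # alpha_n = r - t_n
    t = t - phi(t) / dphi(t)    # Newton step on phi
```

The estimates fall roughly as α_{n+1} ≈ C α_n², the quadratic convergence displayed in Table 1.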
In Table 2, we show the real errors for n = 10 and n = 20 when the adapted Newton's method of [13] is used to solve (10) at some chosen points of the interval involved. In Table 3, we show the real errors when a combination of Newton's method and quadrature methods [14] and an iterative scheme based on the homotopy analysis method [12] are applied. Notice that ‖x* − x_n‖ ≤ ε_n, since ‖x* − x_n‖ ≤ r − t_n, so we already improve the results obtained in Tables 2 and 3 by other authors with four iterations of Newton's method, x_4(s) = sin(πs) + (0.075426688904937162) cos(πs), and the stopping criterion ‖x_n − x_{n−1}‖ < 10^{−32}. Finally, although we have already guaranteed that the numerical approximation given by x_4(s) to the solution ψ(s) of Equation (10) is of at least order 10^{−17}, we see in Table 4 that this approximation is, in fact, of at least order 10^{−30}.

Table 2. Real errors for n = 10 and n = 20 when the adapted Newton's method given in [13] is applied to solve (10).

Application 2
Secondly, we consider the following nonlinear Fredholm integral equation (11), with s ∈ [−1/2, 1/2]. Observe that, in this case, the kernel K(s,t) = e^{st} is not separable. In addition, the corresponding operator F(x) defined in (2) and associated with (11) is defined on Ω = C[−1/2, 1/2]. From Equation (11), we see that x_0(s) = s is a reasonable choice of starting point for Newton's method. In addition, condition (6) of Theorem 4 is satisfied, since |λ|pS‖x_0‖^{p−1} = 0.1670 < 1, and the auxiliary scalar function φ(t) involved in our study has two positive real zeros, r = 0.0842 and R = 0.4921. As a consequence of Theorem 4, Equation (11) then has a solution x*(s) in B(x_0, 0.0842), and it is unique in B(x_0, 0.4921).
As the kernel K(s,t) = e^{st} is not separable, the application of Newton's method for solving (11) is not straightforward. Taking this fact into account, we first use Taylor series to approximate K(s,t) = e^{st}. So,

e^{st} = Σ_{k=0}^{j−1} (st)^k / k! + (s^j e^{sξ} / j!) t^j,  with ξ ∈ (min{0, t}, max{0, t}),

and we consider the integral equation (12) obtained by replacing the kernel e^{st} in (11) with its Taylor polynomial. Next, we take into account the relation satisfied by x*(s) and x(s), which are, respectively, the solutions of (11) and (12), where ρ* ≥ ‖x*(s)‖ and ρ ≥ ‖x(s)‖, obtained after taking norms in (11) and (12). Thus, if we want to obtain, for example, an approximation of the solution x*(s) of order 10^{−9}, it is sufficient to choose j = 5 in (12). In this case, S = 1.0104, T = 6.2199 × 10^{−8} and ρ* = ρ = 0.5842, so that ‖x*(s) − x(s)‖ ≤ 9.9790 × 10^{−9}.
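The size of the kernel truncation error can be checked numerically. The sketch below evaluates the Taylor polynomial of e^{st} on [−1/2, 1/2] × [−1/2, 1/2] on a grid (the number of terms and the grid are illustrative choices, and our indexing of j need not coincide with the paper's); since |st| ≤ 1/4, only a few terms already give an error of order 10^{−7}.

```python
import numpy as np
from math import factorial

# Truncation error of the Taylor polynomial of the kernel e^{st} on
# [-1/2, 1/2] x [-1/2, 1/2], evaluated on a grid rather than through the
# Lagrange remainder bound used in the text.
j = 6                                  # terms (st)^k / k!, k = 0 .. j-1
s = np.linspace(-0.5, 0.5, 101)
t = np.linspace(-0.5, 0.5, 101)
S_, T_ = np.meshgrid(s, t)

exact = np.exp(S_ * T_)
poly = sum((S_ * T_) ** k / factorial(k) for k in range(j))
max_err = np.max(np.abs(exact - poly))
```

Adding a couple more terms pushes the kernel error below 10^{−9}, in line with the accuracy discussion above.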

Table 1. A priori error estimates.

Table 5. Sequence {t_n} and a priori error estimates.