Open Access. This article is freely available and re-usable.

*Algorithms* **2017**, *10*(3), 89; https://doi.org/10.3390/a10030089

Article

On the Existence of Solutions of Nonlinear Fredholm Integral Equations from Kantorovich’s Technique

Department of Mathematics and Computation, University of La Rioja, C/ Madre de Dios, 53, 26006 Logroño, Spain

\* Correspondence: [email protected]; Tel.: +34-941-299-447

† These authors contributed equally to this work.

Received: 16 May 2017 / Accepted: 30 July 2017 / Published: 2 August 2017

## Abstract

The well-known Kantorovich technique based on majorizing sequences is used to analyse the convergence of Newton’s method when it is used to solve nonlinear Fredholm integral equations. In addition, we obtain information about the domains of existence and uniqueness of a solution for these equations. Finally, we illustrate the above with two particular Fredholm integral equations.

**Keywords:** Fredholm integral equation; Newton’s method; convergence; Kantorovich’s technique; domain of existence of a solution; domain of uniqueness of a solution

## 1. Introduction

Integral equations have numerous applications in almost all branches of the sciences, and many physical processes and mathematical models in engineering are governed by them. The main feature of these equations is that they are usually nonlinear. In particular, nonlinear integral equations arise in fluid mechanics, biological models, solid state physics, chemical kinetics, etc. In addition, many initial and boundary value problems can easily be turned into integral equations. One particularly interesting type is a nonlinear Fredholm integral equation of the form

$$x\left(s\right)=f\left(s\right)+\varpi {\int}_{a}^{b}\mathfrak{K}(s,t)x{\left(t\right)}^{p}\,dt,\phantom{\rule{1.em}{0ex}}s\in [a,b],\phantom{\rule{1.em}{0ex}}p\in \mathbb{R},\phantom{\rule{1.em}{0ex}}p\ge 2,\tag{1}$$

where $\varpi \in \mathbb{R}$, $-\infty <a<b<+\infty $, the function $f\left(s\right)$ is continuous and given on $[a,b]$, the kernel $\mathfrak{K}(s,t)$ is a known continuous function on $[a,b]\times [a,b]$ and x is a solution to be determined.

As integral equations of the form (1) cannot, in general, be solved exactly, we use numerical methods to solve them; several different numerical techniques can be applied, and some of them can be found in the references of this work.

For a general background on numerical methods for integral equations of the form (1), the books of Atkinson [1] and Delves and Mohamed [2] are recommended. For a review of less recent methods, we refer the reader to the survey by Atkinson [3]. There is a great deal of published work on the numerical solution of Equation (1). In recent publications, different mathematical tools and numerical implementations have been applied to solve integral equations (1). In some of these publications, certain authors make extensive use of methods based on different kinds of wavelets [4,5]. Polynomial approximation methods using different basis functions, such as Chebyshev polynomials, have been introduced; see, for example, [6,7]. An approximation with Sinc functions has been developed in [8]. Sinc methods have increasingly been recognized as powerful tools for tackling problems in applied physics and engineering [9]. Several different variants of numerical or theoretical studies on (1) have been developed in the literature; for some examples, see papers [10,11]. In terms of iterative schemes for solving Equation (1), in [12] we can find an iterative scheme based on the homotopy analysis method, a general analytic approach, based on homotopy, for obtaining series solutions of various types of nonlinear equations. In particular, by means of the aforementioned method, we can construct a continuous mapping from an initial guess approximation to the exact solution of the equation to be solved. In [13], the authors present an adapted modification of the Newton–Kantorovich method. Finally, in [14], the Newton–Kantorovich method and quadrature methods are combined to develop a new method for solving Equation (1).

In this work, we propose using Newton’s method to solve Equation (1). For this, we first analyse the semilocal convergence of the method and then compare its efficacy with the aforementioned techniques on a particular integral equation of the form (1). Semilocal convergence results require conditions on the operator involved in the equation to be solved and on the starting point of the iterative method; these results establish the existence of solutions of the equation and allow us to obtain the domain of existence of a solution.

The main interest of this work is two-fold. On the one hand, we conduct a qualitative study of Equation (1) and obtain results on the existence and uniqueness of a solution. On the other hand, we solve the equation numerically. For the latter, we first consider a separable kernel $\mathfrak{K}(s,t)$ and directly approximate a solution of Equation (1); secondly, by means of Taylor series, we treat the case of a non-separable kernel. For both aims, we use Newton’s method, the best-known iterative method for solving nonlinear equations.

For the first aim, we study the application of Newton’s method to Equation (1) by analysing the convergence of the method and use its theoretical significance to draw conclusions about the existence and uniqueness of a solution: we locate a solution of the equation within a domain of existence of solutions and then obtain a domain of uniqueness of solutions that allows us to separate the solution previously located from other possible solutions of the equation. To achieve this aim, we use Kantorovich’s technique [15], which was developed by the Russian mathematician L. V. Kantorovich at the beginning of the 1950s and is based on the concept of “majorizing sequence”, introduced later. For the second aim, we apply Newton’s method to solve Equation (1) numerically.

This paper is organized as follows. In Section 2, we consider a particular equation of the form (1) and present the above-mentioned Kantorovich’s technique by introducing the concept of “majorizing sequence”. In Section 3, from the theoretical significance of Newton’s method, we obtain information about the existence and uniqueness of a solution for the nonlinear Fredholm integral equations introduced in Section 2. Finally, in Section 4, we illustrate all of the above with two applications involving nonlinear Fredholm integral equations with separable and non-separable kernels.

## 2. Kantorovich’s Technique

As mentioned in the introduction, this paper has two main aims: to obtain conclusions about the existence and uniqueness of a solution of (1) by using the theoretical significance of Newton’s method and to numerically approximate a solution of (1).

It is clear that solving (1) is equivalent to solving the equation $\mathfrak{F}\left(x\right)=0$, where $\mathfrak{F}:\Omega \subseteq \mathcal{C}[a,b]\to \mathcal{C}[a,b]$,

$$\left[\mathfrak{F}\left(x\right)\right]\left(s\right)=x\left(s\right)-f\left(s\right)-\varpi {\int}_{a}^{b}\mathfrak{K}(s,t)x{\left(t\right)}^{p}\,dt,\phantom{\rule{1.em}{0ex}}s\in [a,b],\tag{2}$$

and

$$\Omega =\left\{\begin{array}{ll}\{x\left(s\right)\in \mathcal{C}[a,b]:x\left(s\right)\ge 0\} & \mathrm{if}\ p=\frac{{q}_{1}}{{q}_{2}}\ \mathrm{with}\ {q}_{1},{q}_{2}\in \mathbb{N}\ \mathrm{and}\ {q}_{2}\ \mathrm{even},\\ \{x\left(s\right)\in \mathcal{C}[a,b]:x\left(s\right)>0\} & \mathrm{if}\ p\in \mathbb{R}\backslash \mathbb{Q},\\ \mathcal{C}[a,b] & \mathrm{for\ any\ other\ value\ of}\ p.\end{array}\right.$$

For solving the equation $\mathfrak{F}\left(x\right)=0$, Newton’s method is

$${x}_{n}={x}_{n-1}-{\left[{\mathfrak{F}}^{\prime}\left({x}_{n-1}\right)\right]}^{-1}\mathfrak{F}\left({x}_{n-1}\right),\phantom{\rule{1.em}{0ex}}n\in \mathbb{N},\phantom{\rule{1.em}{0ex}}\mathrm{with}\text{}{x}_{0}\text{}\mathrm{given}\text{}\mathrm{in}\text{}\Omega .$$

The method has already been applied to approximate solutions of nonlinear integral equations [16,17]. However, the novelty of this work is in using Kantorovich’s technique to obtain a convergence result for Newton’s method when it is applied to solve (1) and, as a consequence, in using the theoretical significance of the method to draw conclusions about the existence and uniqueness of a solution of (1) and about the region in which it is located, without finding the solution itself; this is sometimes more important than the actual knowledge of the solution. A solution is located by constructing an ad hoc scalar function, which is used to define a majorizing sequence, instead of using the classical quadratic polynomial of Kantorovich.
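As a first numerical illustration of Newton’s method applied to an equation of the form (1), the following minimal sketch discretizes the integral with the trapezoidal rule (a Nyström-type discretization of our own choosing) and borrows the data of Application 1 from Section 4.1:

```python
import numpy as np

# Newton's method for a trapezoidal (Nystrom) discretization of Equation (1),
# with the data of Application 1 (Section 4.1): f(s) = sin(pi s),
# K(s,t) = cos(pi s) sin(pi t), varpi = 1/5, p = 3 on [0, 1].
m = 201
s = np.linspace(0.0, 1.0, m)
w = np.full(m, 1.0/(m - 1)); w[0] = w[-1] = 0.5/(m - 1)   # trapezoidal weights
varpi, p = 0.2, 3
K = np.cos(np.pi*s)[:, None]*np.sin(np.pi*s)[None, :]     # K(s_i, t_j)
f = np.sin(np.pi*s)

def F(x):            # discretized operator; F(x) = 0 at a solution
    return x - f - varpi*(K @ (w*x**p))

def Fprime(x):       # Jacobian: I - varpi * p * K_ij * w_j * x_j^(p-1)
    return np.eye(m) - varpi*p*K*(w*x**(p - 1))[None, :]

x = f.copy()         # starting point x0(s) = sin(pi s)
for _ in range(6):   # Newton iteration; a few steps already suffice
    x = x - np.linalg.solve(Fprime(x), F(x))

exact = np.sin(np.pi*s) + (20 - np.sqrt(391))/3*np.cos(np.pi*s)  # psi(s), Section 4.1
print(np.max(np.abs(x - exact)))   # limited only by the quadrature error
```

The grid size and the quadrature rule are illustrative choices; any convergent quadrature yields an analogous discrete Newton iteration.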

Kantorovich’s technique consists of translating the problem of solving the equation $\mathfrak{F}\left(x\right)=0$ in $\Omega $ into that of solving a scalar equation $\phi \left(t\right)=0$, once ${x}_{0}\in \Omega $ is fixed under certain conditions. In addition, the domains of existence and uniqueness of a solution for Equation (1) can be determined from the positive solutions of $\phi \left(t\right)=0$.

The idea of Kantorovich’s technique is simple: once a real number ${t}_{0}$ is fixed, we define the scalar iterative method

$${t}_{n}={N}_{\phi}\left({t}_{n-1}\right)={t}_{n-1}-{\displaystyle \frac{\phi \left({t}_{n-1}\right)}{{\phi}^{\prime}\left({t}_{n-1}\right)}},\phantom{\rule{1.em}{0ex}}n\in \mathbb{N},\tag{3}$$

such that

$$\parallel {x}_{n}-{x}_{n-1}\parallel \le {t}_{n}-{t}_{n-1},\phantom{\rule{1.em}{0ex}}\mathrm{for}\ \mathrm{all}\phantom{\rule{1.em}{0ex}}n\in \mathbb{N}.\tag{4}$$

Condition (4) means that the scalar sequence $\left\{{t}_{n}\right\}$ majorizes the sequence $\left\{{x}_{n}\right\}$ or, in other words, $\left\{{t}_{n}\right\}$ is a majorizing sequence of $\left\{{x}_{n}\right\}$. Obviously, if $\left\{{t}_{n}\right\}$ is convergent, $\left\{{x}_{n}\right\}$ also is. Therefore, the convergence of the sequence $\left\{{x}_{n}\right\}$ is a consequence of the convergence of the sequence $\left\{{t}_{n}\right\}$ and the latter problem is much easier than the former one.
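The majorization property can be observed numerically; a minimal sketch, again borrowing the data of Application 1 (Section 4.1), where the scalar function turns out to be the cubic $\varphi(t)=(16t^3+48t^2+8(6-5\pi)t+3\pi)/(40\pi)$, and using a trapezoidal discretization of our own choosing for the function-space iterates:

```python
import numpy as np

# Check of the majorization property (4) on Application 1 (Section 4.1):
# ||x_n - x_{n-1}|| <= t_n - t_{n-1}, with {t_n} the scalar Newton sequence
# for varphi(t) and {x_n} the Newton iterates of a trapezoidal discretization.
m = 201
s = np.linspace(0.0, 1.0, m)
w = np.full(m, 1.0/(m - 1)); w[0] = w[-1] = 0.5/(m - 1)   # trapezoidal weights
K = np.cos(np.pi*s)[:, None]*np.sin(np.pi*s)[None, :]
f = np.sin(np.pi*s)

varphi  = lambda t: (16*t**3 + 48*t**2 + 8*(6 - 5*np.pi)*t + 3*np.pi)/(40*np.pi)
dvarphi = lambda t: (48*t**2 + 96*t + 8*(6 - 5*np.pi))/(40*np.pi)

x, t = f.copy(), 0.0
for _ in range(4):
    J = np.eye(m) - 0.6*K*(w*x**2)[None, :]               # F'(x_n); varpi*p = 3/5
    x_new = x - np.linalg.solve(J, x - f - 0.2*K @ (w*x**3))
    t_new = t - varphi(t)/dvarphi(t)
    assert np.max(np.abs(x_new - x)) <= t_new - t + 1e-10 # condition (4)
    x, t = x_new, t_new
print(t)   # t_4, already close to the zero r = 0.1327... of varphi
```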

#### 2.1. The Auxiliary Scalar Function

We begin by analysing the operator $\mathfrak{F}\left(x\right)$ given in (2). From (2), it follows that the Fréchet derivatives of the operator $\mathfrak{F}$ are

$$\left[{\mathfrak{F}}^{\prime}\left(x\right)y\right]\left(s\right)=y\left(s\right)-\varpi p{\int}_{a}^{b}\mathfrak{K}(s,t)x{\left(t\right)}^{p-1}y\left(t\right)\,dt,$$

and

$$\left[{\mathfrak{F}}^{\left(k\right)}\left(x\right)({y}_{1}{y}_{2}\cdots {y}_{k})\right]\left(s\right)=-\varpi p(p-1)\cdots (p-k+1){\int}_{a}^{b}\mathfrak{K}(s,t)x{\left(t\right)}^{p-k}{y}_{1}\left(t\right){y}_{2}\left(t\right)\cdots {y}_{k}\left(t\right)\,dt,$$

for $k=2,3,\dots ,\left[p\right]$, where $\left[p\right]$ denotes the integer part of the real number $p\ge 2$.

In addition,

$$\parallel {\mathfrak{F}}^{\left(k\right)}\left(x\right)\parallel \le \left|\varpi \right|\left(\genfrac{}{}{0pt}{}{p}{k}\right)k!\phantom{\rule{0.166667em}{0ex}}S{\parallel x\parallel}^{p-k},$$

where $S={max}_{s\in [a,b]}{\int}_{a}^{b}\left|\mathfrak{K}(s,t)\right|dt$ and the infinity norm is used. Next, taking into account that $\parallel x\parallel \le \parallel {x}_{0}\parallel +\parallel x-{x}_{0}\parallel $, it follows that

$$\parallel {\mathfrak{F}}^{\left(k\right)}\left(x\right)\parallel \le \left|\varpi \right|\left(\genfrac{}{}{0pt}{}{p}{k}\right)k!\phantom{\rule{0.166667em}{0ex}}S{\left(\parallel {x}_{0}\parallel +\parallel x-{x}_{0}\parallel \right)}^{p-k}\le \left|\varpi \right|\left(\genfrac{}{}{0pt}{}{p}{k}\right)k!\phantom{\rule{0.166667em}{0ex}}S{\left(\parallel {x}_{0}\parallel +t-{t}_{0}\right)}^{p-k},\tag{5}$$

provided that $\parallel x-{x}_{0}\parallel \le t-{t}_{0}$. Moreover, for $p\ge 3$, we denote

$$\parallel {\mathfrak{F}}^{\left(i\right)}\left({x}_{0}\right)\parallel \le \left|\varpi \right|\left(\genfrac{}{}{0pt}{}{p}{i}\right)i!\phantom{\rule{0.166667em}{0ex}}S{\parallel {x}_{0}\parallel}^{p-i}={M}_{i},\phantom{\rule{1.em}{0ex}}\mathrm{for}\ i=2,\dots ,k-1.$$

On the other hand, we observe that the existence of the operator ${\left[{\mathfrak{F}}^{\prime}\left({x}_{0}\right)\right]}^{-1}$ must be guaranteed in the first step of Newton’s method, since ${x}_{1}={x}_{0}-{\left[{\mathfrak{F}}^{\prime}\left({x}_{0}\right)\right]}^{-1}\mathfrak{F}\left({x}_{0}\right)$. The existence of ${\left[{\mathfrak{F}}^{\prime}\left({x}_{0}\right)\right]}^{-1}$ follows from the Banach lemma on invertible operators, so the operator ${\left[{\mathfrak{F}}^{\prime}\left({x}_{0}\right)\right]}^{-1}$ exists and is such that

$$\parallel {\left[{\mathfrak{F}}^{\prime}\left({x}_{0}\right)\right]}^{-1}\parallel \le \frac{1}{1-\parallel I-{\mathfrak{F}}^{\prime}\left({x}_{0}\right)\parallel},$$

provided that $\parallel I-{\mathfrak{F}}^{\prime}\left({x}_{0}\right)\parallel <1$; namely,

$$\left|\varpi \right|pS\parallel {x}_{0}{\parallel}^{p-1}<1.\tag{6}$$

In addition, we denote $\beta =\frac{1}{1-\left|\varpi \right|pS\parallel {x}_{0}{\parallel}^{p-1}}$ and write

$$\parallel {\left[{\mathfrak{F}}^{\prime}\left({x}_{0}\right)\right]}^{-1}\mathfrak{F}\left({x}_{0}\right)\parallel \le \parallel \mathfrak{F}\left({x}_{0}\right)\parallel \beta =\eta .$$

Now, we consider $p\ge 3$ and denote ${\omega}_{k}(t;{t}_{0})=\left|\varpi \right|\left(\genfrac{}{}{0pt}{}{p}{k}\right)k!\phantom{\rule{0.166667em}{0ex}}S{\left(\parallel {x}_{0}\parallel +t-{t}_{0}\right)}^{p-k}$, for $k=2,3,\dots ,\left[p\right]$. Then, as a consequence of the latter, we can find scalar functions $y\left(t\right)$ such that ${y}^{\left(k\right)}\left(t\right)={\omega}_{k}(t;{t}_{0})$, for $k=2,3,\dots ,\left[p\right]$, to construct a majorizing sequence $\left\{{t}_{n}\right\}$ as that given in (3) by solving the following initial value problem (see [18]):

$$\left\{\begin{array}{l}{\displaystyle {y}^{\left(k\right)}\left(t\right)=\left|\varpi \right|\left(\genfrac{}{}{0pt}{}{p}{k}\right)k!\phantom{\rule{0.166667em}{0ex}}S{\left(\parallel {x}_{0}\parallel +t-{t}_{0}\right)}^{p-k},}\\ {\displaystyle y\left({t}_{0}\right)=\frac{\eta}{\beta},\phantom{\rule{1.em}{0ex}}{y}^{\prime}\left({t}_{0}\right)=-\frac{1}{\beta},}\\ {y}^{\prime\prime}\left({t}_{0}\right)={M}_{2},\phantom{\rule{1.em}{0ex}}{y}^{\prime\prime\prime}\left({t}_{0}\right)={M}_{3},\phantom{\rule{1.em}{0ex}}\dots ,\phantom{\rule{1.em}{0ex}}{y}^{(k-1)}\left({t}_{0}\right)={M}_{k-1}.\end{array}\right.$$

It is easy to see that there exists only one solution for the last initial value problem, that is:

$$\phi \left(t\right)={\int}_{{t}_{0}}^{t}{\int}_{{t}_{0}}^{{\theta}_{k-1}}\cdots {\int}_{{t}_{0}}^{{\theta}_{1}}{\omega}_{k}(z;{t}_{0})\phantom{\rule{0.166667em}{0ex}}dz\phantom{\rule{0.166667em}{0ex}}d{\theta}_{1}\cdots \phantom{\rule{0.166667em}{0ex}}d{\theta}_{k-1}+\sum _{i=2}^{k-1}\frac{{M}_{i}}{i!}{(t-{t}_{0})}^{i}-\frac{t-{t}_{0}}{\beta}+\frac{\eta}{\beta}$$

$$=\left|\varpi \right|S{\left(\parallel {x}_{0}\parallel +t-{t}_{0}\right)}^{p}-(t-{t}_{0})+\parallel \mathfrak{F}\left({x}_{0}\right)\parallel -\left|\varpi \right|S\parallel {x}_{0}{\parallel}^{p}.\tag{7}$$

Notice that the scalar function defined in (7), used to construct the scalar sequence $\left\{{t}_{n}\right\}$ given in (3) that majorizes $\left\{{x}_{n}\right\}$ in $\Omega $, is independent of k, so we can choose any $k=2,3,\dots ,\left[p\right]$ to construct the last initial value problem that gives us $\phi \left(t\right)$.
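A quick numeric sanity check that the closed form (7) does solve the initial value problem, for every admissible k; the parameters below are illustrative choices of our own, not taken from the applications:

```python
import math

# Numeric check that the closed-form phi in (7) solves the initial value
# problem of Section 2.1.  Illustrative parameters (our choice): |varpi| = 0.2,
# S = 1, p = 3.5, ||x0|| = 1, t0 = 0, ||F(x0)|| = 0.1; condition (6) holds
# since |varpi| p S ||x0||^(p-1) = 0.7 < 1.
w, S, p, x0n, Fx0n = 0.2, 1.0, 3.5, 1.0, 0.1
beta = 1.0/(1.0 - w*p*S*x0n**(p - 1))           # beta of Section 2.1
eta  = beta*Fx0n

def phi(t):                                     # closed form (7) with t0 = 0
    return w*S*(x0n + t)**p - t + Fx0n - w*S*x0n**p

def omega(k, t):                                # omega_k(t; 0) = y^(k)(t)
    fall = math.prod(p - i for i in range(k))   # p(p-1)...(p-k+1) = C(p,k) k!
    return w*fall*S*(x0n + t)**(p - k)

h = 1e-3
assert abs(phi(0) - eta/beta) < 1e-12                        # y(t0) = eta/beta
d1 = (phi(h) - phi(-h))/(2*h)                                # central difference
assert abs(d1 + 1/beta) < 1e-5                               # y'(t0) = -1/beta
for k in (2, 3):                                # [p] = 3, so k = 2, 3 are valid
    for t in (0.0, 0.2, 0.5):
        dk = sum((-1)**i*math.comb(k, i)*phi(t + (k/2 - i)*h)
                 for i in range(k + 1))/h**k    # central k-th difference
        assert abs(dk - omega(k, t)) < 1e-4     # y^(k)(t) = omega_k(t; t0)
print("phi in (7) satisfies the IVP for k = 2 and k = 3")
```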

If $p\in [2,3)$, using only condition (5), we consider the initial value problem

$$\left\{\begin{array}{l}{\displaystyle {y}^{\prime\prime}\left(t\right)=\left|\varpi \right|\left(\genfrac{}{}{0pt}{}{p}{2}\right)2!\phantom{\rule{0.166667em}{0ex}}S{\left(\parallel {x}_{0}\parallel +t-{t}_{0}\right)}^{p-2},}\\ {\displaystyle y\left({t}_{0}\right)=\frac{\eta}{\beta},\phantom{\rule{1.em}{0ex}}{y}^{\prime}\left({t}_{0}\right)=-\frac{1}{\beta},}\end{array}\right.$$

whose unique solution is also (7).

Once such a majorizing sequence $\left\{{t}_{n}\right\}$ has been determined from $\phi \left(t\right)$, we then have to prove its convergence. For this, it is well known [15] that the scalar function $\phi \left(t\right)$ must have at least one real zero greater than or equal to ${t}_{0}$, and then the sequence $\left\{{t}_{n}\right\}$ is increasing and converges to this zero.

#### 2.2. The Majorizing Sequence

We begin by studying the function given in (7). Firstly, we notice that any ${t}_{0}\ge 0$ was allowed in the last section, but we can take ${t}_{0}=0$, so that the function $\phi \left(t\right)$ reduces to

$$\varphi \left(t\right)=\left|\varpi \right|S{\left(\parallel {x}_{0}\parallel +t\right)}^{p}-t+\parallel \mathfrak{F}\left({x}_{0}\right)\parallel -\left|\varpi \right|S\parallel {x}_{0}{\parallel}^{p}.\tag{8}$$

This is a consequence of the fact that $\varphi \left(t\right)=\phi (t+{t}_{0})$. Indeed, for any ${t}_{0}>0$, the sequence ${\{{t}_{n}={N}_{\phi}\left({t}_{n-1}\right)\}}_{n\in \mathbb{N}}$ satisfies ${t}_{n}={t}_{0}+{N}_{\varphi}\left({s}_{n-1}\right)$, $n\in \mathbb{N}$, where ${s}_{n}={N}_{\varphi}\left({s}_{n-1}\right)$ with ${s}_{0}=0$, since, for ${t}_{0}\ge 0$ and ${s}_{0}=0$,

$${t}_{0}+{s}_{n}={t}_{0}+{N}_{\varphi}\left({s}_{n-1}\right)={t}_{0}+{s}_{n-1}-\frac{\varphi \left({s}_{n-1}\right)}{{\varphi}^{\prime}\left({s}_{n-1}\right)}={t}_{0}+{s}_{n-1}-\frac{\phi ({s}_{n-1}+{t}_{0})}{{\phi}^{\prime}({s}_{n-1}+{t}_{0})}={t}_{n-1}-\frac{\phi \left({t}_{n-1}\right)}{{\phi}^{\prime}\left({t}_{n-1}\right)}={N}_{\phi}\left({t}_{n-1}\right)={t}_{n},$$

for all $n\in \mathbb{N}$. Therefore, the real sequences $\left\{{t}_{n}\right\}$ and $\left\{{s}_{n}\right\}$ given by Newton’s method, constructed from $\phi \left(t\right)$ and $\varphi \left(t\right)$ respectively, can be obtained one from the other by translation. Besides, ${t}_{n}-{t}_{n-1}={s}_{n}-{s}_{n-1}$, for all $n\in \mathbb{N}$, and all the results obtained previously are independent of the value ${t}_{0}\ge 0$, so we choose ${t}_{0}=0$ because, in practice, it is the most favourable situation.
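The translation property is easy to check numerically; a small sketch with the data of Application 1 (Section 4.1) and an arbitrary shift ${t}_{0}=0.3$ of our own choosing:

```python
import numpy as np

# Numeric check of the translation property t_n = t0 + s_n, using the data of
# Application 1 (Section 4.1): |varpi|S = 2/(5*pi), p = 3, ||x0|| = 1,
# ||F(x0)|| = 3/40, and an arbitrary t0 = 0.3 (our choice).
c, Fx0n, t0 = 2/(5*np.pi), 3/40, 0.3

phi     = lambda t: c*(1 + t - t0)**3 - (t - t0) + Fx0n - c   # (7), built at t0
dphi    = lambda t: 3*c*(1 + t - t0)**2 - 1
varphi  = lambda t: c*(1 + t)**3 - t + Fx0n - c               # (8), i.e. t0 = 0
dvarphi = lambda t: 3*c*(1 + t)**2 - 1

def newton(f, df, t, steps=6):
    seq = [t]
    for _ in range(steps):
        t = t - f(t)/df(t)
        seq.append(t)
    return np.array(seq)

tn = newton(phi, dphi, t0)          # {t_n}, starting from t0
sn = newton(varphi, dvarphi, 0.0)   # {s_n}, starting from s0 = 0
assert np.allclose(tn, t0 + sn, atol=1e-10)                   # t_n = t0 + s_n
print(sn[-1])   # both sequences converge, shifted by t0, to r = 0.1327...
```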

Secondly, we denote $\sigma =min\{t>0:{\varphi}^{\prime}\left(t\right)\ge 0\}$, where $\varphi \left(t\right)$ is given in (8). Note that ${\varphi}^{\prime}\left(t\right)$ has exactly one zero $\sigma $ in $(0,+\infty )$, since ${\varphi}^{\prime}\left(0\right)=-1<0$, ${\varphi}^{\prime\prime}\left(t\right)>0$ and ${\varphi}^{\prime}\left(t\right)>0$ as $t\to +\infty $.

**Theorem 1.** If $\varphi \left(\sigma \right)\le 0$, then $\varphi \left(t\right)$ has two real zeros r and R such that $0\le r\le \sigma \le R$.

Thirdly, by taking into account the classical Fourier conditions [19] for the convergence of Newton’s method in the scalar case, we establish that sequence $\left\{{t}_{n}\right\}$ is increasing and converges to r in the following result.

**Theorem 2.** If $\varphi \left(\sigma \right)\le 0$, then the sequence $\left\{{t}_{n}\right\}$ is increasing and converges to the positive real zero r of $\varphi \left(t\right)$.
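Theorems 1 and 2 can be checked directly on the concrete $\varphi(t)$ of Application 1 (Section 4.1); a short sketch (root extraction via `numpy.roots` is our choice):

```python
import numpy as np

# Check of Theorems 1 and 2 on the varphi(t) of Application 1 (Section 4.1):
# sigma is the unique positive zero of varphi'(t), varphi(sigma) <= 0, the two
# zeros satisfy 0 <= r <= sigma <= R, and Newton's sequence {t_n} increases to r.
c = 2/(5*np.pi)
varphi  = lambda t: c*(1 + t)**3 - t + 3/40 - c
dvarphi = lambda t: 3*c*(1 + t)**2 - 1

sigma = np.sqrt(1/(3*c)) - 1                     # solves varphi'(t) = 0, t > 0
assert varphi(sigma) <= 0                        # hypothesis of both theorems

# varphi(t) = c t^3 + 3c t^2 + (3c - 1) t + 3/40, expanded
pos = sorted(z.real for z in np.roots([c, 3*c, 3*c - 1, 3/40]) if z.real > 0)
r, R = pos[0], pos[1]
assert 0 <= r <= sigma <= R                      # Theorem 1

t, seq = 0.0, [0.0]
for _ in range(12):
    t = t - varphi(t)/dvarphi(t)
    seq.append(t)
assert all(a < b or abs(b - r) < 1e-12 for a, b in zip(seq, seq[1:]))
assert abs(seq[-1] - r) < 1e-12                  # increasing to r (Theorem 2)
print(sigma, r, R)
```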

Fourthly, in the next theorem, we prove a system of recurrence relations guaranteeing that $\left\{{t}_{n}\right\}$ is a majorizing sequence of $\left\{{x}_{n}\right\}$ in $\Omega $; its proof is similar to that of Lemma 7 in [18].

**Theorem 3.** Suppose that ${x}_{n}\in \Omega $, for all $n\ge 0$, and $p\ge 3$. If $\varphi \left(\sigma \right)\le 0$, then the following four bounds are satisfied for all $n\in \mathbb{N}$:

- (i) there exists ${\left[{\mathfrak{F}}^{\prime}\left({x}_{n}\right)\right]}^{-1}$ and $\parallel {\left[{\mathfrak{F}}^{\prime}\left({x}_{n}\right)\right]}^{-1}\parallel \le -{\displaystyle \frac{1}{{\varphi}^{\prime}\left({t}_{n}\right)}}$,
- (ii) $\parallel {\mathfrak{F}}^{\left(i\right)}\left({x}_{n}\right)\parallel \le {\varphi}^{\left(i\right)}\left({t}_{n}\right)$, for $i=2,3,\dots ,k-1$,
- (iii) $\parallel \mathfrak{F}\left({x}_{n}\right)\parallel \le \varphi \left({t}_{n}\right)$,
- (iv) $\parallel {x}_{n+1}-{x}_{n}\parallel \le {t}_{n+1}-{t}_{n}$.

Note that (i), (ii) and (iv) are obvious for $n=0$, and (iii) is not necessary to prove (iv), since the latter follows from the initial condition $\parallel {\left[{\mathfrak{F}}^{\prime}\left({x}_{0}\right)\right]}^{-1}\mathfrak{F}\left({x}_{0}\right)\parallel \le \eta $.

Finally, if $p\in [2,3)$, we obtain a result similar to the last theorem which can be seen in [20].

## 3. Existence and Uniqueness of a Solution

Following Kantorovich’s technique, the convergence of sequence $\left\{{x}_{n}\right\}$ in $\Omega $ is then guaranteed from the convergence of sequence $\left\{{t}_{n}\right\}$, since $\left\{{t}_{n}\right\}$ majorizes $\left\{{x}_{n}\right\}$, which allows us to draw conclusions on the location of a solution of equation (1). After locating a solution of Equation (1), we establish the uniqueness of a solution. For this, from now on, we denote $\overline{B(x,\varrho )}=\{y\in \mathcal{C}[a,b]:\parallel y-x\parallel \le \varrho \}$ and $B(x,\varrho )=\{y\in \mathcal{C}[a,b]:\parallel y-x\parallel <\varrho \}$.

**Theorem 4.** Let ${x}_{0}\in \Omega $ be such that condition (6) is satisfied and let $\varphi \left(t\right)$ be the function defined in (8). If $\varphi \left(\sigma \right)\le 0$, where $\sigma =min\{t>0:{\varphi}^{\prime}\left(t\right)\ge 0\}$, and $B({x}_{0},r)\subset \Omega $, then Equation (1) has a solution ${x}^{*}\left(s\right)$ in $\overline{B({x}_{0},r)}$ and it is unique in $B({x}_{0},R)\cap \Omega $ if $r<R$ or in $\overline{B({x}_{0},r)}$ if $r=R$, where r and R are the two positive real zeros of $\varphi \left(t\right)$.

**Proof.** From $\left(i\right)$ and $\left(ii\right)$, it is clear that $\parallel {x}_{1}-{x}_{0}\parallel \le {t}_{1}<r$ and ${x}_{1}\in B({x}_{0},r)\subset \Omega $. If we now suppose that ${x}_{j}\in B({x}_{0},r)\subset \Omega $, for $j=1,2,\dots ,n-1$, it follows, from Theorem 3, that the operator ${\left[{\mathfrak{F}}^{\prime}\left({x}_{n-1}\right)\right]}^{-1}$ exists with $\parallel {\left[{\mathfrak{F}}^{\prime}\left({x}_{n-1}\right)\right]}^{-1}\parallel \le -\frac{1}{{\varphi}^{\prime}\left({t}_{n-1}\right)}$, $\parallel {\mathfrak{F}}^{\left(i\right)}\left({x}_{n-1}\right)\parallel \le {\varphi}^{\left(i\right)}\left({t}_{n-1}\right)$, for $i=2,3,\dots ,k-1$, $\parallel \mathfrak{F}\left({x}_{n-1}\right)\parallel \le \varphi \left({t}_{n-1}\right)$ and $\parallel {x}_{n}-{x}_{n-1}\parallel \le {t}_{n}-{t}_{n-1}$, so that

$$\parallel {x}_{n}-{x}_{0}\parallel \le \sum _{i=0}^{n-1}\parallel {x}_{i+1}-{x}_{i}\parallel \le \sum _{i=0}^{n-1}({t}_{i+1}-{t}_{i})={t}_{n}<r,$$

and therefore ${x}_{n}\in B({x}_{0},r)$ and ${x}_{n}$ is well defined.

After that, since, from Theorem 2, the sequence $\left\{{t}_{n}\right\}$ is increasing and bounded above by r, it is a Cauchy sequence with ${lim}_{n}{t}_{n}=r$; as a consequence, $\left\{{x}_{n}\right\}$ is also a Cauchy sequence. Thus, $\left\{{x}_{n}\right\}$ is convergent, ${lim}_{n}{x}_{n}={x}^{*}$ and

$$\parallel {x}^{*}-{x}_{n}\parallel \le r-{t}_{n},\phantom{\rule{1.em}{0ex}}n\ge 0.$$

In addition, the combination of this and $\left(iii\right)$ yields $\mathfrak{F}\left({x}^{*}\right)=0$, where $\mathfrak{F}$ is defined in (2).

Next, from Section 2.1, we have

$$\parallel {\mathfrak{F}}^{\prime\prime}\left(x\right)\parallel \le \left|\varpi \right|p(p-1)S{\left(\parallel {x}_{0}\parallel +t-{t}_{0}\right)}^{p-2},$$

provided that $\parallel x-{x}_{0}\parallel \le t-{t}_{0}$. Now, as ${t}_{0}=0$, it is clear that $\parallel {\mathfrak{F}}^{\prime\prime}\left(x\right)\parallel \le {\varphi}^{\prime\prime}\left(t\right)$, for $\parallel x-{x}_{0}\parallel \le t$, and, as a consequence of this, the uniqueness of the solution ${x}^{*}\left(s\right)$ follows exactly as in Theorem 11 of [18]. ☐

## 4. Applications

In this section, we present two applications that illustrate the above study. The two applications arise from the two possibilities for the kernel $\mathfrak{K}(s,t)$: separable or non-separable.

#### 4.1. Application 1

We first consider the following nonlinear Fredholm integral equation, which has been used by other authors as a numerical test [13,21]:

$$x\left(s\right)=sin\left(\pi s\right)+\frac{1}{5}{\int}_{0}^{1}cos\left(\pi s\right)sin\left(\pi t\right)x{\left(t\right)}^{3}\,dt,\phantom{\rule{1.em}{0ex}}s\in [0,1].\tag{10}$$

Observe that, in this case, the kernel $\mathfrak{K}(s,t)=cos\left(\pi s\right)sin\left(\pi t\right)$ is separable.

Firstly, we apply Theorem 4 to obtain domains of existence and uniqueness of a solution. For this, we observe that the corresponding function $\mathfrak{F}\left(x\right)$ defined in (2) and associated with (10) is defined in $\Omega =\mathcal{C}[0,1]$. We then observe that condition (6) is required in Theorem 4. However, if we pay attention to the integral equation, we observe that the kernel is separable and we can then determine the corresponding operator ${\left[{\mathfrak{F}}^{\prime}\left(x\right)\right]}^{-1}$. For this, we write $\left[{\mathfrak{F}}^{\prime}\left(x\right)y\right]\left(s\right)=z\left(s\right)$, so, if there exists ${\left[{\mathfrak{F}}^{\prime}\left(x\right)\right]}^{-1}$, we have

$${\left[{\mathfrak{F}}^{\prime}\left(x\right)\right]}^{-1}z\left(s\right)=y\left(s\right)=z\left(s\right)+cos\left(\pi s\right)\left(\frac{3}{5}{\int}_{0}^{1}x{\left(t\right)}^{2}sin\left(\pi t\right)y\left(t\right)dt\right).$$

If we now denote $\frac{3}{5}{\int}_{0}^{1}x{\left(t\right)}^{2}sin\left(\pi t\right)y\left(t\right)dt=\mathcal{I}$, multiply the next-to-last equality by $\frac{3}{5}x{\left(s\right)}^{2}sin\left(\pi s\right)$ and integrate between 0 and 1, we obtain

$$\mathcal{I}=\frac{{\int}_{0}^{1}x{\left(s\right)}^{2}sin\left(\pi s\right)z\left(s\right)ds}{\frac{5}{3}-{\int}_{0}^{1}x{\left(s\right)}^{2}sin\left(\pi s\right)cos\left(\pi s\right)ds},$$

provided that

$${\int}_{0}^{1}x{\left(s\right)}^{2}sin\left(\pi s\right)cos\left(\pi s\right)ds\ne \frac{5}{3}.$$

Therefore,

$$y\left(s\right)={\left[{\mathfrak{F}}^{\prime}\left(x\right)\right]}^{-1}z\left(s\right)=z\left(s\right)+\frac{3}{5}cos\left(\pi s\right)\frac{{\int}_{0}^{1}sin\left(\pi t\right)x{\left(t\right)}^{2}z\left(t\right)dt}{1-\frac{3}{5}{\int}_{0}^{1}sin\left(\pi t\right)cos\left(\pi t\right)x{\left(t\right)}^{2}dt}.$$

Now, as a consequence of the last formula, condition (6), which is required to prove the existence of the inverse operator ${\left[{\mathfrak{F}}^{\prime}\left({x}_{0}\right)\right]}^{-1}$, can be omitted, provided that

$${\int}_{0}^{1}sin\left(\pi t\right)cos\left(\pi t\right){x}_{0}{\left(t\right)}^{2}dt\ne \frac{5}{3}.$$

Therefore, it is sufficient to choose a starting point ${x}_{0}\left(s\right)$ for Newton’s method such that the previous inequality holds. As ${x}_{0}\left(s\right)=sin\left(\pi s\right)$ is a reasonable choice of starting point for Newton’s method, as we can see in [12,13,14], and ${\int}_{0}^{1}sin\left(\pi t\right)cos\left(\pi t\right){x}_{0}{\left(t\right)}^{2}dt=0$, the last inequality holds and condition (6) can be omitted.

After that, taking into account that $\varpi =\frac{1}{5}$, $S=\frac{2}{\pi}$, $p=3$, $\parallel {x}_{0}\parallel =1$ and $\parallel \mathfrak{F}\left({x}_{0}\right)\parallel =\frac{3}{40}$, we construct the auxiliary scalar function

$$\varphi \left(t\right)=\frac{1}{40\pi}(16{t}^{3}+48{t}^{2}+8(6-5\pi )t+3\pi ),$$

which has two positive real zeros $r=0.1327$ and $R=1.0588$. Therefore, the domains of existence and uniqueness of a solution of Equation (10) are, respectively,

$$\left\{h\in \mathcal{C}[0,1]:\parallel h\left(s\right)-sin\left(\pi s\right)\parallel \le r=0.1327\right\},$$

$$\left\{h\in \mathcal{C}[0,1]:\parallel h\left(s\right)-sin\left(\pi s\right)\parallel <R=1.0588\right\}.$$
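The constants behind this construction are easy to verify numerically; a short check, using a midpoint-rule quadrature of our own choosing:

```python
import numpy as np

# Numeric check of the constants behind varphi(t) in Application 1:
# S = max_s int_0^1 |cos(pi s) sin(pi t)| dt = 2/pi, and, for x0(s) = sin(pi s),
# ||F(x0)|| = (1/5) int_0^1 sin(pi t)^4 dt = 3/40 (the factor |cos(pi s)| <= 1).
N = 200001
tm = (np.arange(N) + 0.5)/N                       # midpoint-rule nodes on [0, 1]
integ = lambda g: np.mean(g(tm))                  # int_0^1 g(t) dt, midpoint rule

S = integ(lambda t: np.abs(np.sin(np.pi*t)))      # times max_s |cos(pi s)| = 1
assert abs(S - 2/np.pi) < 1e-8

Fx0_norm = 0.2*integ(lambda t: np.sin(np.pi*t)**4)
assert abs(Fx0_norm - 3/40) < 1e-8

# With these constants, (8) gives
# varphi(t) = (2/(5 pi))(1+t)^3 - t + 3/40 - 2/(5 pi)
#           = (1/(40 pi))(16 t^3 + 48 t^2 + 8(6 - 5 pi) t + 3 pi):
u = np.linspace(0.0, 1.2, 7)
lhs = (2/(5*np.pi))*(1 + u)**3 - u + 3/40 - 2/(5*np.pi)
rhs = (16*u**3 + 48*u**2 + 8*(6 - 5*np.pi)*u + 3*np.pi)/(40*np.pi)
assert np.allclose(lhs, rhs)
print(S, Fx0_norm)
```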

On the other hand, we can write the function $\varphi \left(t\right)$ in the following way,

$$\varphi \left(t\right)=(r-t)(R-t)\ell \left(t\right),\phantom{\rule{1.em}{0ex}}\ell \left(t\right)=\frac{2}{5\pi}(t+4.19156),$$

and obtain a priori error estimates from Ostrowski’s technique [19], which allow us to determine the number of iterations of Newton’s method needed to reach a previously fixed precision. For this, we write ${\alpha}_{n}=r-{t}_{n}$ and ${\gamma}_{n}=R-{t}_{n}$, for all $n\ge 0$. Then,

$$\varphi \left({t}_{n}\right)={\alpha}_{n}{\gamma}_{n}\ell \left({t}_{n}\right),\phantom{\rule{4pt}{0ex}}{\varphi}^{\prime}\left({t}_{n}\right)={\alpha}_{n}{\gamma}_{n}{\ell}^{\prime}\left({t}_{n}\right)-({\alpha}_{n}+{\gamma}_{n})\ell \left({t}_{n}\right),$$

and

$${\alpha}_{n+1}=r-{t}_{n}+\frac{\varphi \left({t}_{n}\right)}{{\varphi}^{\prime}\left({t}_{n}\right)}=\frac{{{\alpha}_{n}}^{2}\left({\gamma}_{n}{\ell}^{\prime}\left({t}_{n}\right)-\ell \left({t}_{n}\right)\right)}{{\alpha}_{n}{\gamma}_{n}{\ell}^{\prime}\left({t}_{n}\right)-({\alpha}_{n}+{\gamma}_{n})\ell \left({t}_{n}\right)}.$$

From $\frac{{\alpha}_{n+1}}{{\gamma}_{n+1}}=\frac{{\alpha}_{n}^{2}\left({\gamma}_{n}{\ell}^{\prime}\left({t}_{n}\right)-\ell \left({t}_{n}\right)\right)}{{{\gamma}_{n}}^{2}\left({\alpha}_{n}{\ell}^{\prime}\left({t}_{n}\right)-\ell \left({t}_{n}\right)\right)}$, it follows that

$$P{\left(\frac{{\alpha}_{n}}{{\gamma}_{n}}\right)}^{2}\le \frac{{\alpha}_{n+1}}{{\gamma}_{n+1}}\le Q{\left(\frac{{\alpha}_{n}}{{\gamma}_{n}}\right)}^{2},$$

where $P=0.7718$ and $Q=0.7858$, so that

$$\frac{{\alpha}_{n+1}}{{\gamma}_{n+1}}\le {Q}^{{2}^{n+1}-1}{\left(\frac{{\alpha}_{0}}{{\gamma}_{0}}\right)}^{{2}^{n+1}}=\frac{{U}^{{2}^{n+1}}}{Q},\phantom{\rule{1.em}{0ex}}\frac{{\alpha}_{n+1}}{{\gamma}_{n+1}}\ge {P}^{{2}^{n+1}-1}{\left(\frac{{\alpha}_{0}}{{\gamma}_{0}}\right)}^{{2}^{n+1}}=\frac{{V}^{{2}^{n+1}}}{P},$$

where $U=Q\frac{{\alpha}_{0}}{{\gamma}_{0}}=0.0985$ and $V=P\frac{{\alpha}_{0}}{{\gamma}_{0}}=0.0967$. Then, taking into account that ${\gamma}_{n+1}=(R-r)+{\alpha}_{n+1}$, we obtain

$${\delta}_{n}\le r-{t}_{n}\le {\epsilon}_{n},$$

where ${\delta}_{n}=\frac{(R-r){V}^{{2}^{n}}}{P-{V}^{{2}^{n}}}$ and ${\epsilon}_{n}=\frac{(R-r){U}^{{2}^{n}}}{Q-{U}^{{2}^{n}}}$, for all $n\ge 0$. In Table 1, we can see the a priori error estimates that lead to the well-known quadratic convergence of Newton’s method.
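These two-sided bounds can be reproduced for the first few iterates; a short check:

```python
import numpy as np

# Check of the a priori bounds delta_n <= r - t_n <= epsilon_n for the first
# few Newton iterates of Application 1, with P = 0.7718, Q = 0.7858,
# U = 0.0985 and V = 0.0967 as in the text.
c = 2/(5*np.pi)
varphi  = lambda t: c*(1 + t)**3 - t + 3/40 - c
dvarphi = lambda t: 3*c*(1 + t)**2 - 1

pos = sorted(z.real for z in np.roots([c, 3*c, 3*c - 1, 3/40]) if z.real > 0)
r, R = pos[0], pos[1]
P, Q, U, V = 0.7718, 0.7858, 0.0985, 0.0967

t = 0.0
for n in range(4):
    delta = (R - r)*V**(2**n)/(P - V**(2**n))
    eps   = (R - r)*U**(2**n)/(Q - U**(2**n))
    assert delta - 1e-12 <= r - t <= eps + 1e-12   # delta_n <= r - t_n <= eps_n
    print(n, delta, r - t, eps)
    t = t - varphi(t)/dvarphi(t)                   # t_{n+1}
```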

Now, taking into account the exact solution

$$\psi \left(s\right)=sin\left(\pi s\right)+\frac{1}{3}\left(20-\sqrt{391}\right)cos\left(\pi s\right)$$

of Equation (10), we compare the results obtained with those given by other authors when different numerical methods are applied to solve (10).

In Table 2, we show the real errors for $n=10$ and $n=20$ when the adapted Newton's method of [13] is used to solve (10) at some points of the interval involved. In Table 3, we show the real errors when a combination of Newton's method and quadrature methods [14] and an iterative scheme based on the homotopy analysis method [12] are applied. Notice that $\parallel {x}^{*}-{x}_{n}\parallel \le {\epsilon}_{n}$, since $\parallel {x}^{*}-{x}_{n}\parallel \le r-{t}_{n}$, so we already improve the results obtained by other authors in Table 2 and Table 3 with four iterations of Newton's method,

$${x}_{4}\left(s\right)=sin\left(\pi s\right)+\left(0.075426688904937162\right)cos\left(\pi s\right),$$

and the stopping criterion $\parallel {x}_{n}-{x}_{n-1}\parallel <{10}^{-32}$. Finally, although we have already guaranteed that the numerical approximation given by ${x}_{4}\left(s\right)$ to the solution $\psi \left(s\right)$ of equation (10) is accurate to at least order ${10}^{-17}$, we see in Table 4 that this approximation is, in fact, accurate to at least order ${10}^{-30}$.
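As a quick sanity check (a sketch in Python; the coefficient value is the one quoted above), the cosine coefficient of ${x}_{4}\left(s\right)$ can be compared with the exact coefficient $\frac{1}{3}\left(20-\sqrt{391}\right)$ of $\psi \left(s\right)$:

```python
import math

# Exact cosine coefficient of psi(s) = sin(pi s) + (1/3)(20 - sqrt(391)) cos(pi s)
exact_coeff = (20 - math.sqrt(391)) / 3

# Cosine coefficient of the fourth Newton iterate x_4(s) quoted above
x4_coeff = 0.075426688904937162

# The two values agree to double precision, consistent with the
# 10^{-30} errors reported in Table 4.
```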

#### 4.2. Application 2

Secondly, we consider the following nonlinear Fredholm integral equation:

$$x\left(s\right)=s+\frac{1}{2}{\int}_{-\frac{1}{2}}^{\frac{1}{2}}{\mathrm{e}}^{st}x{\left(t\right)}^{\frac{10}{3}}\phantom{\rule{0.166667em}{0ex}}dt,$$

with $s\in \left[-\frac{1}{2},\frac{1}{2}\right]$. Observe that, in this case, the kernel $\mathfrak{K}(s,t)={\mathrm{e}}^{st}$ is not separable. In addition, the corresponding function $\mathfrak{F}\left(x\right)$ defined in (2) and associated with (11) is defined in $\Omega =\mathcal{C}\left[-\frac{1}{2},\frac{1}{2}\right]$.

From Equation (11), we see that ${x}_{0}\left(s\right)=s$ is a reasonable choice of starting point for Newton's method. In addition, condition (6) of Theorem 4 is satisfied, since $\left|\varpi \right|pS\parallel {x}_{0}{\parallel}^{p-1}=0.1670<1$, and the auxiliary scalar function $\varphi \left(t\right)$ involved in our study is

$$\varphi \left(t\right)=\frac{1}{8}\left(-8t+{2}^{\frac{2}{3}}sinh\left(\frac{1}{4}\right){(1+2t)}^{\frac{10}{3}}\right),$$

which has two positive real zeros, $r=0.0842$ and $R=0.4921$. As a consequence of Theorem 4, Equation (11) then has a solution ${x}^{*}\left(s\right)$ in $\overline{B({x}_{0},0.0842)}$ and it is unique in $B({x}_{0},0.4921)$.
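The two zeros can be checked numerically; the following sketch (plain bisection, no external libraries) recovers them from $\varphi$:

```python
import math

# phi(t) = (1/8) * (-8 t + 2^(2/3) sinh(1/4) (1 + 2 t)^(10/3))
def phi(t):
    return -t + 2**(2 / 3) * math.sinh(0.25) * (1 + 2 * t)**(10 / 3) / 8

def bisect(f, a, b, tol=1e-12):
    # Standard bisection; assumes f changes sign on [a, b].
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# phi(0) > 0, phi(0.3) < 0 and phi(1) > 0, so each bracket holds one zero.
r_zero = bisect(phi, 0.0, 0.3)  # smallest positive zero, near 0.0842
R_zero = bisect(phi, 0.3, 1.0)  # largest positive zero, near 0.4921
```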

As the kernel $\mathfrak{K}(s,t)={\mathrm{e}}^{st}$ is not separable, the application of Newton's method for solving (11) is not easy. Taking this fact into account, we first use Taylor's series to approximate $\mathfrak{K}(s,t)={\mathrm{e}}^{st}$. So,

$$\mathfrak{K}(s,t)={\mathrm{e}}^{st}=\tilde{\mathfrak{K}}(s,t)+\mathfrak{R}(\epsilon ,s,t);\phantom{\rule{1.em}{0ex}}\tilde{\mathfrak{K}}(s,t)=\sum _{i=0}^{j-1}\frac{{s}^{i}\phantom{\rule{0.166667em}{0ex}}{t}^{i}}{i!},\phantom{\rule{1.em}{0ex}}\mathfrak{R}(\epsilon ,s,t)=\frac{{\mathrm{e}}^{s\epsilon}}{j!}{s}^{j}\phantom{\rule{0.166667em}{0ex}}{t}^{j},$$

where $\epsilon \in \left(min\{0,t\},max\{0,t\}\right)$, and consider the integral equation

$$x\left(s\right)=s+\frac{1}{2}{\int}_{-\frac{1}{2}}^{\frac{1}{2}}\tilde{\mathfrak{K}}(s,t)x{\left(t\right)}^{\frac{10}{3}}\phantom{\rule{0.166667em}{0ex}}dt,\phantom{\rule{1.em}{0ex}}s\in \left[-\frac{1}{2},\frac{1}{2}\right].$$

Next, we take into account the following relation, satisfied by the solutions ${x}^{*}\left(s\right)$ of (11) and $\tilde{x}\left(s\right)$ of (12):

$$\parallel {x}^{*}\left(s\right)-\tilde{x}\left(s\right)\parallel \le \frac{\left|\varpi \right|T{\left({\rho}^{*}\right)}^{\frac{10}{3}}}{1-\frac{10}{3}\left|\varpi \right|\tilde{S}{\left(|{\rho}^{*}-\tilde{\rho}|+\tilde{\rho}\right)}^{\frac{7}{3}}},$$

where $\tilde{S}={max}_{s\in \left[-\frac{1}{2},\frac{1}{2}\right]}{\int}_{-\frac{1}{2}}^{\frac{1}{2}}\left|\tilde{\mathfrak{K}}(s,t)\right|dt$, $T={max}_{\epsilon \in \left[-\frac{1}{2},\frac{1}{2}\right]}\left({max}_{s\in \left[-\frac{1}{2},\frac{1}{2}\right]}{\int}_{-\frac{1}{2}}^{\frac{1}{2}}\left|\mathfrak{R}(\epsilon ,s,t)\right|dt\right)$, ${\rho}^{*}\ge \parallel {x}^{*}\left(s\right)\parallel $ and $\tilde{\rho}\ge \parallel \tilde{x}\left(s\right)\parallel $, after taking norms in (11) and (12).

Thus, if we want to obtain, for example, an approximation of the solution ${x}^{*}\left(s\right)$ of order ${10}^{-9}$, it is sufficient to choose $j=6$ in (12). In this case, $\tilde{S}=1.0104$, $T=6.2199\times {10}^{-8}$ and ${\rho}^{*}=\tilde{\rho}=0.5842$, so $\parallel {x}^{*}\left(s\right)-\tilde{x}\left(s\right)\parallel \le 9.9790\times {10}^{-9}$.
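The constants $\tilde{S}$ and $T$ quoted above can be reproduced directly. The sketch below assumes that six Taylor terms ($i=0,\dots ,5$) are retained, so that the remainder involves ${s}^{6}{t}^{6}/6!$, and that the maxima are attained at $s=\epsilon =\frac{1}{2}$:

```python
import math

# S~ = max_s int_{-1/2}^{1/2} |K~(s,t)| dt, K~(s,t) = sum_{i=0}^{5} (s t)^i / i!.
# On [-1/2,1/2]^2 the truncated kernel is positive, odd powers of t integrate
# to zero, and int_{-1/2}^{1/2} t^i dt = (1/2)^i / (i + 1) for even i.
S_tilde = sum((0.5**i / math.factorial(i)) * (0.5**i / (i + 1))
              for i in range(0, 6, 2))

# T = max_{eps,s} int_{-1/2}^{1/2} |e^{s eps} s^6 t^6 / 6!| dt, attained at
# s = eps = 1/2, with int_{-1/2}^{1/2} |t|^6 dt = 2 / (7 * 2^7).
T = math.exp(0.25) / math.factorial(6) * 0.5**6 * (2 / (7 * 2**7))
```

Both computed values match the quoted constants $\tilde{S}=1.0104$ and $T=6.2199\times {10}^{-8}$.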

Hence, if we now look for a solution $\tilde{x}\left(s\right)$ of (12) by Newton's method, it suffices to find an approximation ${x}_{n}\left(s\right)$ such that $\parallel \tilde{x}\left(s\right)-{x}_{n}\left(s\right)\parallel $ is of order ${10}^{-10}$, since, taking into account (9),

$$\parallel {x}^{*}\left(s\right)-{x}_{n}\left(s\right)\parallel \le \parallel {x}^{*}\left(s\right)-\tilde{x}\left(s\right)\parallel +\parallel \tilde{x}\left(s\right)-{x}_{n}\left(s\right)\parallel \le \parallel {x}^{*}\left(s\right)-\tilde{x}\left(s\right)\parallel +r-{t}_{n}.$$

In this case, it is sufficient to choose a number of iterations $n$ of Newton's method such that $r-{t}_{n}$ is of order ${10}^{-10}$. Note that this is possible, since the sequence $\left\{{t}_{n}\right\}$ is known a priori, as we can see in Table 5.
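Since $\left\{{t}_{n}\right\}$ is generated by applying Newton's method to the scalar function $\varphi$ itself, it can indeed be computed a priori; a minimal sketch:

```python
import math

# Majorizing sequence: t_0 = 0, t_{n+1} = t_n - phi(t_n) / phi'(t_n),
# with phi(t) = -t + c (1 + 2 t)^(10/3) and c = 2^(2/3) sinh(1/4) / 8.
c = 2**(2 / 3) * math.sinh(0.25) / 8

def phi(t):
    return -t + c * (1 + 2 * t)**(10 / 3)

def dphi(t):
    return -1 + c * (20 / 3) * (1 + 2 * t)**(7 / 3)

t = [0.0]
for _ in range(4):
    t.append(t[-1] - phi(t[-1]) / dphi(t[-1]))
# t increases monotonically to r = 0.08421651..., and the differences
# r - t_n reproduce the a priori errors listed in Table 5.
```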

So, we can then apply Newton's method from ${x}_{0}\left(s\right)=s$ to approximate a solution $\tilde{x}\left(s\right)$ of integral Equation (12), as we do in [22], and then choose the approximation

$${x}_{4}\left(s\right)=(1.1509\times {10}^{-2})+\left(1.0004\right)s+(9.8357\times {10}^{-4}){s}^{2}+(1.2743\times {10}^{-5}){s}^{3}+(1.5568\times {10}^{-5}){s}^{4}+(1.2117\times {10}^{-7}){s}^{5},$$

which is obtained after four iterations of Newton's method with the stopping criterion $\parallel {x}_{n}\left(s\right)-{x}_{n-1}\left(s\right)\parallel <{10}^{-24}$, since $\parallel \tilde{x}\left(s\right)-{x}_{4}\left(s\right)\parallel \le r-{t}_{4}=2.4791\times {10}^{-15}$. In this case,

$$\parallel {x}^{*}\left(s\right)-{x}_{4}\left(s\right)\parallel \le \parallel {x}^{*}\left(s\right)-\tilde{x}\left(s\right)\parallel +\parallel \tilde{x}\left(s\right)-{x}_{4}\left(s\right)\parallel \le 9.9790\times {10}^{-9},$$

and, as a consequence, ${x}_{4}\left(s\right)$ is an approximation of the solution ${x}^{*}\left(s\right)$ of Equation (11) of the order ${10}^{-9}$ looked for.

## Acknowledgments

This research was partially supported by Ministerio de Economía y Competitividad under grant MTM2014-52016-C2-1-P.

## Author Contributions

The contributions of the two authors have been similar. Both authors have worked together to develop the present manuscript.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Atkinson, K.E. The Numerical Solution of Integral Equations of the Second Kind; Cambridge University Press: Cambridge, UK, 1997.
2. Delves, L.M.; Mohamed, J.L. Computational Methods for Integral Equations; Cambridge University Press: Cambridge, UK, 1985.
3. Atkinson, K.E. A survey of numerical methods for solving nonlinear integral equations. J. Integr. Equ. Appl. **1992**, 4, 15–46.
4. Babolian, E.; Shahsavaran, A. Numerical solution of nonlinear Fredholm integral equations of the second kind using Haar wavelets. J. Comput. Appl. Math. **2009**, 225, 87–95.
5. Mahmoudi, Y. Wavelet Galerkin method for numerical solution of nonlinear integral equation. Appl. Math. Comput. **2005**, 167, 1119–1129.
6. Darijani, A.; Mohseni-Moghadam, M. Improved polynomial approximations for the solution of nonlinear integral equations. Sci. Iran. **2013**, 20, 765–770.
7. Yang, C. Chebyshev polynomial solution of nonlinear integral equations. J. Franklin Inst. **2012**, 34, 9947–9956.
8. Maleknejad, K.; Mollapourasl, R.; Alizadeh, M. Convergence analysis for numerical solution of Fredholm integral equation by Sinc approximation. Commun. Nonlinear Sci. Numer. Simul. **2011**, 16, 2478–2485.
9. Stenger, F. Numerical Methods Based on Sinc and Analytic Functions; Springer: New York, NY, USA, 1993.
10. Allouch, C.; Sbibih, D.; Tahrichi, M. Superconvergent Nyström and degenerate kernel methods for Hammerstein integral equations. J. Comput. Appl. Math. **2014**, 258, 30–41.
11. Liang, J.; Yan, S.-H.; Agarwal, R.P.; Huang, T.-W. Integral solution of a class of nonlinear integral equations. Appl. Math. Comput. **2013**, 219, 4950–4957.
12. Awawdeh, F.; Adawi, A.; Al-Shara, S. A numerical method for solving nonlinear integral equations. Int. Math. Forum **2009**, 4, 805–817.
13. Nadir, M.; Khirani, A. Adapted Newton-Kantorovich method for nonlinear integral equations. J. Math. Stat. **2016**, 12, 176–181.
14. Saberi-Nadjafi, J.; Heidari, M. Solving nonlinear integral equations in the Urysohn form by Newton-Kantorovich-quadrature method. Comput. Math. Appl. **2010**, 60, 2018–2065.
15. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: New York, NY, USA, 1982.
16. Ezquerro, J.A.; Hernández, M.A. The Newton method for Hammerstein equations. J. Comput. Anal. Appl. **2005**, 7, 437–446.
17. Gutiérrez, J.M.; Hernández, M.A.; Salanova, M.A. On the approximate solution of some Fredholm integral equations by Newton’s method. Southwest J. Pure Appl. Math. **2004**, 1, 1–9.
18. Ezquerro, J.A.; González, D.; Hernández, M.A. A semilocal convergence result for Newton’s method under generalized conditions of Kantorovich. J. Complexity **2014**, 30, 309–324.
19. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1966.
20. Ezquerro, J.A.; González, D.; Hernández, M.A. Majorizing sequences for Newton’s method from initial value problems. J. Comput. Appl. Math. **2012**, 236, 2246–2258.
21. Rashidinia, J.; Parsa, A. Analytical-numerical solution for nonlinear integral equations of Hammerstein type. Int. J. Math. Model. Comput. **2012**, 2, 61–69.
22. Ezquerro, J.A.; Hernández-Verón, M.A. Newton’s Method: An Updated Approach of Kantorovich’s Theory; Birkhäuser: Cham, Switzerland, 2017.

**Table 1.** A priori error estimates ${\delta}_{n}$ and ${\epsilon}_{n}$.

| n | ${\delta}_{n}$ | ${\epsilon}_{n}$ |
|---|---|---|
| 0 | $1.3272\times {10}^{-1}$ | $1.3272\times {10}^{-1}$ |
| 1 | $1.1368\times {10}^{-2}$ | $1.1577\times {10}^{-2}$ |
| 2 | $1.0513\times {10}^{-4}$ | $1.1095\times {10}^{-4}$ |
| 3 | $9.2090\times {10}^{-9}$ | $1.0444\times {10}^{-8}$ |
| 4 | $7.0677\times {10}^{-17}$ | $9.2563\times {10}^{-17}$ |

**Table 2.**Real errors for $n=10$ and $n=20$ when the adapted Newton’s method given in [13] is applied.

| t | $n=10$ | $n=20$ |
|---|---|---|
| $0.0$ | $5.44\times {10}^{-8}$ | $3.19\times {10}^{-16}$ |
| $0.2$ | $4.40\times {10}^{-8}$ | $2.22\times {10}^{-16}$ |
| $0.4$ | $1.68\times {10}^{-8}$ | $1.11\times {10}^{-16}$ |
| $0.6$ | $1.68\times {10}^{-8}$ | $1.11\times {10}^{-16}$ |
| $0.8$ | $4.40\times {10}^{-8}$ | $2.22\times {10}^{-16}$ |
| $1.0$ | $5.44\times {10}^{-8}$ | $3.19\times {10}^{-16}$ |

**Table 3.** Real errors when the numerical methods given in [14] and [12] are applied.

| t | Errors of [14] | Errors of [12] |
|---|---|---|
| $0.0$ | $4.98\times {10}^{-2}$ | $5.53\times {10}^{-15}$ |
| $0.2$ | $4.03\times {10}^{-2}$ | $4.55\times {10}^{-15}$ |
| $0.4$ | $1.53\times {10}^{-2}$ | $1.77\times {10}^{-15}$ |
| $0.6$ | $1.53\times {10}^{-2}$ | $1.77\times {10}^{-15}$ |
| $0.8$ | $4.03\times {10}^{-2}$ | $4.55\times {10}^{-15}$ |
| $1.0$ | $1.53\times {10}^{-2}$ | $5.53\times {10}^{-15}$ |

**Table 4.** Values of $\psi \left(t\right)$ and ${x}_{4}\left(t\right)$ and the corresponding real errors.

| t | $\psi \left(t\right)$ | ${x}_{4}\left(t\right)$ | $\parallel \psi \left(t\right)-{x}_{4}\left(t\right)\parallel $ |
|---|---|---|---|
| $0.0$ | $7.54\times {10}^{-2}$ | $7.54\times {10}^{-2}$ | $2.87\times {10}^{-30}$ |
| $0.2$ | $6.48\times {10}^{-1}$ | $6.48\times {10}^{-1}$ | $2.32\times {10}^{-30}$ |
| $0.4$ | $9.74\times {10}^{-1}$ | $9.74\times {10}^{-1}$ | $8.87\times {10}^{-31}$ |
| $0.6$ | $9.27\times {10}^{-1}$ | $9.27\times {10}^{-1}$ | $8.87\times {10}^{-31}$ |
| $0.8$ | $5.26\times {10}^{-1}$ | $5.26\times {10}^{-1}$ | $2.32\times {10}^{-30}$ |
| $1.0$ | $-7.54\times {10}^{-2}$ | $-7.54\times {10}^{-2}$ | $2.87\times {10}^{-30}$ |

**Table 5.** Majorizing sequence $\left\{{t}_{n}\right\}$ and a priori errors $r-{t}_{n}$.

| n | ${t}_{n}$ | $r-{t}_{n}$ |
|---|---|---|
| 0 | 0 | $8.4216\times {10}^{-2}$ |
| 1 | $0.07528076$ | $8.9357\times {10}^{-3}$ |
| 2 | $0.08407562$ | $1.4088\times {10}^{-4}$ |
| 3 | $0.08421647$ | $3.6635\times {10}^{-8}$ |
| 4 | $0.08421651$ | $2.4791\times {10}^{-15}$ |

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).