# A Numerical Method for Weakly Singular Nonlinear Volterra Integral Equations of the Second Kind

by
Sanda Micula
Department of Mathematics, Babeş-Bolyai University, 400084 Cluj-Napoca, Romania
Symmetry 2020, 12(11), 1862; https://doi.org/10.3390/sym12111862
Submission received: 21 October 2020 / Revised: 4 November 2020 / Accepted: 10 November 2020 / Published: 12 November 2020
(This article belongs to the Special Issue Integral Equations: Theories, Approximations and Applications)

## Abstract

This paper presents a numerical iterative method for the approximate solutions of nonlinear Volterra integral equations of the second kind, with weakly singular kernels. We derive conditions so that a unique solution of such equations exists, as the unique fixed point of an integral operator. Iterative application of that operator to an initial function yields a sequence of functions converging to the true solution. Finally, an appropriate numerical integration scheme (a certain type of product integration) is used to produce the approximations of the solution at given nodes. The resulting procedure is a numerical method that is more practical and accessible than the classical approximation techniques. We prove the convergence of the method and give error estimates. The proposed method is applied to some numerical examples, which are discussed in detail. The numerical approximations thus obtained confirm the theoretical results and the predicted error estimates. In the end, we discuss the method, drawing conclusions about its applicability and outlining future possible research ideas in the same area.
MSC:
65R20; 45D05; 45E10; 37C25; 65D30

## 1. Introduction

Many fields in the area of Applied Mathematics rely on knowledge of integral equations, as they arise naturally in various applications in Mathematics, Engineering, Physics, and Technology. They can be used to model a wide range of physical problems such as heat conduction, diffusion, continuum mechanics, geophysics, electricity, magnetism, neutron transport, traffic theory, and many more. Integral equations provide solutions in designing efficient parametrization algorithms for algebraic curves, surfaces, and hypersurfaces. Many initial and boundary value problems associated with ordinary and partial differential equations can be reformulated as integral equations.
Singular and weakly singular integral equations are of particular interest, since they are used to solve inverse boundary value problems whose domains are fractal curves, where classical calculus cannot be used. Abel equations and other fractional order integral equations were studied extensively and are used in modeling various phenomena in biophysics, viscoelasticity, electrical circuits, etc.
Solvability and properties of singular Volterra integral equations were studied using various analytical and approximating methods. We mention existence (and uniqueness) results [1,2,3], resolvent methods [4], Laplace transforms [2,5,6], fixed point theorems [3,7], etc. Numerical solutions have been found, using product integration [8], collocation and iterated collocation [9,10,11,12], homotopy perturbation transform method [2,13], Tau method based on Jacobi functions [14], Nyström methods [8], quadrature schemes [15], variational iteration methods [6], block-pulse wavelets [16], modified quadratic spline approximation [17], reproducing kernel method [18], etc.
Researchers around the world have studied properties of the solutions, such as regularity [9,19], properties of the resolvent [4], monotonicity [20] and others [21,22,23].
In many applications modeled by integral equations, the kernels are not smooth, making it difficult both to find a solution and to approximate it numerically, as the convergence of approximate methods depends, in general, on the smoothness of the solution. Thus, classical analytical methods, such as projection methods, perform poorly in such cases, as the linear systems they lead to are generally badly conditioned and difficult to solve. Proofs of convergence and error estimates can also be laborious when classical calculus cannot be used. Oftentimes, these methods also have a high implementation cost. Hence, there is a real need for fast, easy-to-use numerical methods for these types of equations. The method we propose is based on a classical fixed point result, adapted appropriately. For the approximation of the integrals involved, the product integration scheme we use is also quite efficient, since most of the computations need to be done only once, not at each iteration.
In this paper, we consider a Volterra integral equation of the type
$u(t) = \int_0^t K\bigl(t,s,u(s)\bigr)\,ds + f(t), \quad t \in [0,T],$
with the kernel of the form
$K\bigl(t,s,u(s)\bigr) = a\bigl(t,s,u(s)\bigr)\,(t-s)^{\alpha-1},$
where $0 < \alpha < 1$ and $a : [0,T] \times [0,T] \times \mathbb{R} \to \mathbb{R}$, $f : (0,T] \to \mathbb{R}$ are continuous functions. Later on, other smoothness assumptions will be made on $a$ and $f$.
We derive conditions under which results from fixed point theory will provide the existence of a unique solution of this equation, as well as a sequence of successive iterations to approximate it. We briefly summarize the main results for the existence of fixed points of an operator on a Banach space.
Definition 1.
Let $(X, \|\cdot\|)$ be a Banach space. A mapping $T : X \to X$ is called a $q$-contraction if there exists a constant $0 \le q < 1$ such that
$\|Tx - Ty\| \le q\,\|x - y\|,$
for all $x, y \in X$.
On Banach spaces, the well known contraction principle holds:
Theorem 1.
Consider a Banach space $(X, \|\cdot\|)$ and let $T : X \to X$ be a $q$-contraction. Then
(a)
$T$ has exactly one fixed point, i.e., the equation $x = Tx$ has exactly one solution $x^* \in X$;
(b)
the sequence of successive approximations $x_{n+1} = Tx_n$, $n \in \mathbb{N}$, converges to the solution $x^*$, where $x_0$ can be any point of $X$;
(c)
for every $n \in \mathbb{N}$, the error estimate
$\|x_n - x^*\| \le \frac{q^n}{1-q}\,\|Tx_0 - x_0\|$
holds.
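As a quick, self-contained illustration of Theorem 1 (not taken from the paper; the map and the interval are illustrative choices), the Newton map for $\sqrt{2}$ is a $q$-contraction on $[1,2]$ with $q = 1/2$, and the a priori estimate of part (c) can be checked along the iteration:

```python
def T(x):
    # Newton map for sqrt(2): on [1, 2], |T'(x)| = |1/2 - 1/x^2| <= 1/2,
    # and T([1, 2]) lies within [sqrt(2), 1.5], so T is a q-contraction, q = 1/2
    return 0.5 * (x + 2.0 / x)

q = 0.5
x0 = 1.0
c0 = abs(T(x0) - x0)             # ||T x0 - x0|| in the estimate of part (c)
x = x0
for n in range(1, 11):
    x = T(x)                     # successive approximations x_{n+1} = T x_n
    bound = q**n / (1 - q) * c0  # a priori error bound of part (c)
    assert abs(x - 2**0.5) <= bound
# after 10 steps, x approximates sqrt(2) to machine precision
```

Newton's iteration actually converges much faster than the geometric rate $q^n$, so the a priori bound holds with room to spare at every step.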
Remark 1.
Theorem 1 remains valid when $X$ is replaced by a closed subset $Y \subseteq X$ satisfying $T(Y) \subseteq Y$.
We use Banach’s theorem to establish, under certain conditions, the existence and uniqueness of a solution of Equation (1) and to approximate it by applying the operator successively. Then we use a suitable numerical integration scheme to approximate the values of the solution at given nodes. The resulting numerical method is quite easy to use and implement, while giving accurate approximations.
The paper is organized as follows. In Section 2 we derive conditions for the existence and uniqueness of the solution and discuss its regularity. In Section 3 the numerical method is described, based on a special type of product integration; the convergence and error analysis of the method are also discussed in detail. Numerical examples are given in Section 4, illustrating the applicability of the proposed method. In Section 5, the advantages of this new method are summarized and possible future work in the same area is discussed.

## 2. Existence and Uniqueness of the Solution

To solve Equation (1), we apply the contraction principle to the associated integral operator
$Fu(t) = \int_0^t a\bigl(t,s,u(s)\bigr)\,(t-s)^{\alpha-1}\,ds + f(t).$
Remark 2.
Since $a \in C([0,T] \times [0,T] \times \mathbb{R})$ and $f \in C(0,T]$, it is well known that the operator $F : C[0,T] \to C[0,T]$ is well defined, i.e., $F(C[0,T]) \subseteq C[0,T]$ (for the proof, see, e.g., [7]).
Then we solve the integral Equation (1) by finding a fixed point for the operator F:
$u = F u .$
We consider the space $X = C[0,T]$ equipped with the Bielecki norm
$\|u\|_\tau := \max_{t \in [0,T]} |u(t)|\,e^{-\tau t}, \quad u \in X,$
for some suitable constant $\tau > 0$. Then, as is well known, $(X, \|\cdot\|_\tau)$ is a Banach space (see, e.g., [24]) and we have the following result.
Theorem 2.
Let $F : (X, \|\cdot\|_\tau) \to (X, \|\cdot\|_\tau)$ be defined by Equation (3). Assume that there exists a constant $L > 0$ such that
$|a(t,s,u) - a(t,s,v)| \le L\,|u - v|,$
for all $t, s \in [0,T]$ and all $u, v \in \mathbb{R}$. Then
(a)
Equation (4) has a unique solution $u^* \in X$;
(b)
the sequence of successive approximations
$u_{n+1} = Fu_n, \quad n = 0, 1, \dots$
converges to the solution $u^*$ for any $u_0 \in X$;
(c)
for every $n \in \mathbb{N}$, the error estimate
$\|u_n - u^*\|_\tau \le \frac{q^n}{1-q}\,\|Fu_0 - u_0\|_\tau$
holds, where $q := \frac{L\,\Gamma(\alpha)}{\tau^\alpha}$ is the contraction constant.
Proof of Theorem 2.
Let $t \in [0,T]$ be fixed. By Equation (5), we have
$|Fu(t) - Fv(t)| \le \int_0^t |a(t,s,u(s)) - a(t,s,v(s))|\,(t-s)^{\alpha-1}\,ds \le L \int_0^t |u(s)-v(s)|\,(t-s)^{\alpha-1}\,ds = L \int_0^t |u(s)-v(s)|\,e^{-\tau s}\,e^{\tau s}(t-s)^{\alpha-1}\,ds \le L\,\|u-v\|_\tau \int_0^t e^{\tau s}(t-s)^{\alpha-1}\,ds = L\,\|u-v\|_\tau\,e^{\tau t} \int_0^{\tau t} e^{-y}\Bigl(\frac{y}{\tau}\Bigr)^{\alpha-1}\frac{1}{\tau}\,dy \le \frac{L\,\Gamma(\alpha)}{\tau^\alpha}\,\|u-v\|_\tau\,e^{\tau t},$
where the change of variable $y = \tau(t-s)$, $0 \le y \le \tau t$, was used and $\Gamma(\alpha) = \int_0^\infty e^{-x} x^{\alpha-1}\,dx$ denotes Euler’s Gamma function. Then
$|Fu(t) - Fv(t)|\,e^{-\tau t} \le \frac{L\,\Gamma(\alpha)}{\tau^\alpha}\,\|u-v\|_\tau,$
for every $t \in [0,T]$ and, so,
$\|Fu - Fv\|_\tau \le \frac{L\,\Gamma(\alpha)}{\tau^\alpha}\,\|u-v\|_\tau.$
We can choose $\tau > 0$ large enough that $q := \frac{L\,\Gamma(\alpha)}{\tau^\alpha} < 1$, so $F$ is a $q$-contraction. The conclusions now follow from Theorem 1. □
Next, we address the question of smoothness of the solution of Equation (1). The following result holds:
Theorem 3.
Let the conditions of Theorem 2 hold. If, in addition, $f \in C^{2,1-\alpha}(0,T]$ and $a \in C^2([0,T] \times [0,T] \times \mathbb{R})$, then $u^* \in C^{2,1-\alpha}(0,T]$ as well.
Remark 3.
For the proof, see e.g., [19] (with $i = j = k = 0$ and $ν = 1 − α$).
The Lipschitz condition in Theorem 2 can be very prohibitive if required on the entire space. To make it usable for a wider range of applications, we restrict it to a closed subset. Let $\|\cdot\|$ denote the Chebyshev norm on $C[0,T]$ (which is equivalent to the Bielecki norm) and consider the closed ball $B_R := \{u \in C[0,T] : \|u - f\| \le R\}$, for some $R > 0$. Then $B_R \subseteq X$ and we have the following result.
Theorem 4.
Let us suppose that there exists a constant $L > 0$ such that
$|a(t,s,u) - a(t,s,v)| \le L\,|u - v|,$
for all $t, s \in [0,T]$ and all $u, v \in [R_1 - R, R_2 + R]$, where $R_1 := \min_{t \in [0,T]} f(t)$, $R_2 := \max_{t \in [0,T]} f(t)$. Further assume that
$\frac{M T^\alpha}{\alpha} \le R,$
where $M := \max |a(t,s,u)|$ over all $t, s \in [0,T]$ and all $u \in [R_1 - R, R_2 + R]$. Then the conclusions of Theorem 2 hold on $B_R$.
Proof of Theorem 4.
By Remark 1, all we need to show is that $F(B_R) \subseteq B_R$. Let $u \in B_R$. Then, for all $t \in [0,T]$,
$R_1 - R \le u(t) \le R_2 + R,$
so, for $u \in B_R$, the conditions in Equations (8) and (9) hold.
Fix $t \in [0,T]$. We have
$|Fu(t) - f(t)| \le \int_0^t |a(t,s,u(s))|\,(t-s)^{\alpha-1}\,ds \le M \int_0^t (t-s)^{\alpha-1}\,ds \le \frac{M T^\alpha}{\alpha} \le R.$
Thus, $\|Fu - f\| \le R$ and $F(B_R) \subseteq B_R$. □

## 3. Numerical Method

We have now established that under the conditions of Theorem 4 a unique solution of Equation (1) exists and that it can be obtained as the limit of the sequence of successive approximations given in Equation (6). Still, the integrals involved in the iteration process cannot be computed exactly, so they have to be approximated numerically. We now proceed to approximate the values of the solution $u ∗ ( t )$ at a given set of nodes $0 = t 0 < … < t m = T$. That means the singular integrals in Equation (6) have to be approximated numerically at the nodes.

#### 3.1. Product Integration

For the numerical solution, we use product integration (see [24]). The idea is to approximate the integral
$I(\varphi) = \int_a^b \varphi(s)\,w(s)\,ds,$
for $\varphi$ a smooth function and $w$ a singular weight function, using a sequence of functions $\varphi_m$ with $\|\varphi - \varphi_m\| \to 0$ as $m \to \infty$, for which the integrals
$I_m(\varphi) = \int_a^b \varphi_m(s)\,w(s)\,ds$
can be easily computed. Then
$|I(\varphi) - I_m(\varphi)| \le \|\varphi - \varphi_m\| \int_a^b |w(s)|\,ds,$
so $I_m(\varphi) \to I(\varphi)$ as $m \to \infty$, at least as fast as $\varphi_m \to \varphi$. Hence, for a set of nodes $a = s_0 < \dots < s_m = b$, we use the approximation formula
$I(\varphi) = \sum_{k=1}^m \int_{s_{k-1}}^{s_k} \varphi(s)\,w(s)\,ds \approx \sum_{k=1}^m \int_{s_{k-1}}^{s_k} \varphi_m(s)\,w(s)\,ds = \sum_{k=0}^m w_k\,\varphi(s_k),$
with the error given in Equation (10).
One of the simplest product integration methods (in terms of keeping the algebra manageable) is the so-called product trapezoidal rule. The name comes from the fact that the idea is the same as for the classical trapezoidal rule: start from the piecewise linear interpolant of the function $\varphi$ in order to obtain the sequence $\varphi_m$.
Next, we derive the formulas for approximating
$I(\varphi) = \int_0^b \varphi(s)\,(b-s)^{\alpha-1}\,ds,$
for $\varphi \in C^2[0,b]$ and $w(s) = (b-s)^{\alpha-1}$, $0 < \alpha < 1$. Let $s_k = kh = \frac{kb}{m}$, for $k = 0, \dots, m$, and let
$\varphi_m(s) = \frac{1}{h}\bigl[(s_j - s)\,\varphi(s_{j-1}) + (s - s_{j-1})\,\varphi(s_j)\bigr], \quad \text{for } s \in [s_{j-1}, s_j],\ j = 1, \dots, m.$
Then
$\|\varphi - \varphi_m\| \le \frac{h^2}{8}\,\|\varphi''\|_\infty \quad \text{and} \quad |I(\varphi) - I_m(\varphi)| \le \frac{h^2}{8}\,\frac{b^\alpha}{\alpha}\,\|\varphi''\|_\infty = \frac{b^2}{8 m^2}\,\frac{b^\alpha}{\alpha}\,\|\varphi''\|_\infty.$
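The first bound is the standard piecewise linear interpolation error and is easy to verify numerically; a minimal sketch, assuming the illustrative choices $\varphi = \cos$ and $b = 1$ (so that $\|\varphi''\|_\infty = 1$):

```python
import numpy as np

b, m = 1.0, 40
h = b / m
s = np.linspace(0.0, b, m + 1)        # interpolation nodes s_k = k h
x = np.linspace(0.0, b, 20001)        # dense grid to estimate the sup norm
phi_m = np.interp(x, s, np.cos(s))    # piecewise linear interpolant phi_m
err = np.max(np.abs(np.cos(x) - phi_m))
# ||phi - phi_m|| <= (h^2 / 8) ||phi''||_inf, with phi'' = -cos, so the bound is h^2 / 8
assert err <= h**2 / 8
```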
Now,
$I(\varphi) \approx \sum_{k=0}^m w_k\,\varphi(s_k),$
where
$w_0 = \frac{1}{h}\int_{s_0}^{s_1} (s_1 - s)\,w(s)\,ds, \qquad w_m = \frac{1}{h}\int_{s_{m-1}}^{s_m} (s - s_{m-1})\,w(s)\,ds,$
$w_j = \frac{1}{h}\left[\int_{s_{j-1}}^{s_j} (s - s_{j-1})\,w(s)\,ds + \int_{s_j}^{s_{j+1}} (s_{j+1} - s)\,w(s)\,ds\right], \quad j = 1, \dots, m-1.$
To simplify the computations, we make the substitution $s - s_{j-1} = hy$, $0 \le y \le 1$, on each subinterval. We get
$w_0 = h\int_0^1 (1-y)\,w(s_0 + hy)\,dy = h\int_0^1 (1-y)\,(b - hy)^{\alpha-1}\,dy,$
$w_m = h\int_0^1 y\,w(s_{m-1} + hy)\,dy = h\int_0^1 y\,\bigl(b - h(m-1+y)\bigr)^{\alpha-1}\,dy,$
$w_j = h\int_0^1 y\,w(s_{j-1} + hy)\,dy + h\int_0^1 (1-y)\,w(s_j + hy)\,dy = h\int_0^1 y\,\bigl(b - h(j-1+y)\bigr)^{\alpha-1}\,dy + h\int_0^1 (1-y)\,\bigl(b - h(j+y)\bigr)^{\alpha-1}\,dy.$
Let
$\psi_1(i) = \int_0^1 y\,\bigl(b - h(i+y)\bigr)^{\alpha-1}\,dy, \qquad \psi_2(i) = \int_0^1 (1-y)\,\bigl(b - h(i+y)\bigr)^{\alpha-1}\,dy, \qquad i = 0, 1, \dots, m-1.$
Then the coefficients in Equation (13) can be written as
$w_0 = h\,\psi_2(0), \qquad w_m = h\,\psi_1(m-1), \qquad w_j = h\,\psi_1(j-1) + h\,\psi_2(j), \quad j = 1, \dots, m-1.$
Next, we apply these formulas to the integrals in Equation (6), i.e., to
$F u_n(t_k) = \int_0^{t_k} a\bigl(t_k, s, u_n(s)\bigr)\,(t_k - s)^{\alpha-1}\,ds,$
for $h = T/m$ and $t_k = kh$, $k = 0, 1, \dots, m$. For a fixed $k \in \{0, \dots, m\}$, let $w^{(k)}(s) = (t_k - s)^{\alpha-1}$ denote the weight function. On each interval $[0, t_k]$ we use the nodes $\{t_0, \dots, t_k\}$; note that on each such subinterval the step size is still $\frac{t_k}{k} = \frac{kh}{k} = h$. We now have
$\int_0^{t_k} a\bigl(t_k, s, u_n(s)\bigr)\,(t_k - s)^{\alpha-1}\,ds = \int_0^{t_k} a\bigl(t_k, s, u_n(s)\bigr)\,w^{(k)}(s)\,ds = \sum_{j=0}^k w_{j,k}\,a\bigl(t_k, t_j, u_n(t_j)\bigr) + R_{n,k}.$
In analogy to Equation (14), for $i = 0, 1, \dots$, let
$\psi_{1,k}(i) = \int_0^1 y\,\bigl(t_k - h(i+y)\bigr)^{\alpha-1}\,dy = h^{\alpha-1}\int_0^1 y\,(k-i-y)^{\alpha-1}\,dy, \qquad \psi_{2,k}(i) = \int_0^1 (1-y)\,\bigl(t_k - h(i+y)\bigr)^{\alpha-1}\,dy = h^{\alpha-1}\int_0^1 (1-y)\,(k-i-y)^{\alpha-1}\,dy.$
Now, the coefficients in Equation (16) can be expressed as
$w_{0,k} = h\,\psi_{2,k}(0), \qquad w_{k,k} = h\,\psi_{1,k}(k-1), \qquad w_{j,k} = h\,\psi_{1,k}(j-1) + h\,\psi_{2,k}(j), \quad j = 1, \dots, k-1.$
Remark 4.
It is worth mentioning that, by Equation (17), the functions $\psi_{1,k}$ and $\psi_{2,k}$ can be computed once, for $k = 0, \dots, m$, and then be used in Equation (18) to find the coefficients $w_{j,k}$ at every step; they do not have to be recomputed at each iteration $n$. This makes the implementation of the method very efficient.
By Equation (12), the error bound satisfies
$|R_{n,k}| \le \frac{h^2}{8}\,\frac{T^\alpha}{\alpha}\,\Bigl\|\frac{\partial^2}{\partial s^2}\,a\bigl(t, s, u_n(s)\bigr)\Bigr\|_\infty.$
Notice that this bound does not depend on $k$; thus, we will simply write $R_n$ instead of $R_{n,k}$. Also, the following identity will be useful in the next subsection: for a fixed $k \in \{0, \dots, m\}$, we have
$\sum_{j=0}^k w_{j,k} = h\sum_{j=0}^{k-1}\bigl[\psi_{1,k}(j) + \psi_{2,k}(j)\bigr] = \frac{h^\alpha}{\alpha}\sum_{j=0}^{k-1}\bigl[(k-j)^\alpha - (k-j-1)^\alpha\bigr] = \frac{(hk)^\alpha}{\alpha} \le \frac{T^\alpha}{\alpha}.$
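Since $(t_k - h(i+y))^{\alpha-1}$ has an elementary antiderivative, the moments $\psi_{1,k}$, $\psi_{2,k}$ admit closed forms and the weights can be tabulated once. The sketch below (an illustrative implementation, not the author's code; the closed forms follow from substituting $x = c - y$ with $c = k - i$) builds the $w_{j,k}$ and checks the identity above:

```python
import numpy as np

alpha, T, m = 0.5, 1.0, 16          # illustrative parameters
h = T / m

def psi_pair(c):
    # psi_{1,k}(i) / h^(alpha-1) and psi_{2,k}(i) / h^(alpha-1), for c = k - i >= 1:
    # exact integrals of y (c-y)^(alpha-1) and (1-y) (c-y)^(alpha-1) over [0, 1]
    d1 = (c**alpha - (c - 1)**alpha) / alpha
    d2 = (c**(alpha + 1) - (c - 1)**(alpha + 1)) / (alpha + 1)
    return c * d1 - d2, (1.0 - c) * d1 + d2

for k in range(1, m + 1):
    w = np.zeros(k + 1)
    w[0] = psi_pair(k)[1]                    # w_{0,k} = h psi_{2,k}(0)
    w[k] = psi_pair(1)[0]                    # w_{k,k} = h psi_{1,k}(k-1)
    for j in range(1, k):
        w[j] = psi_pair(k - j + 1)[0] + psi_pair(k - j)[1]
    w *= h**alpha                            # common factor h * h^(alpha-1)
    assert np.all(w > 0)
    # the telescoping identity: sum of the weights equals (h k)^alpha / alpha
    assert abs(w.sum() - (h * k)**alpha / alpha) < 1e-12
```

The sum over $\psi_{1,k} + \psi_{2,k}$ telescopes exactly as in the displayed identity, which is a useful sanity check for any implementation of the weights.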

#### 3.2. Convergence and Error Analysis

Assuming the conditions of Theorems 3 and 4 hold, one can choose $u_0 \in B_R \cap C^{2,1-\alpha}(0,T]$, so that $u_n \in B_R \cap C^{2,1-\alpha}(0,T]$ for all $n$. To analyze the convergence and give an error estimate, we introduce the following notation. Let
$M_a = \max_{r \le 2}\Bigl|\frac{\partial^r a(t,s,u)}{\partial t^{r_1}\,\partial s^{r_2}\,\partial u^{r_3}}\Bigr|, \quad r = r_1 + r_2 + r_3, \qquad M_f = \max\{\|f\|, \|f'\|, \|f''\|\},$
over all $t \in [0,T]$, $s \in [0,t)$ and $u \in [R_1 - R, R_2 + R]$. If $f \in C^{2,1-\alpha}(0,T]$ and $a \in C^2([0,T] \times [0,T] \times [R_1 - R, R_2 + R])$, one can find a constant $M_0 > 0$ such that the remainder in Equation (19) satisfies
$|R_n| \le \frac{h^2}{8}\,\frac{T^\alpha}{\alpha}\,M_0, \quad n = 0, 1, \dots$
The constant $M_0$ may depend on $M_a$, $M_f$ or $\tau$, but not on $m$, $k$ or $n$.
To simplify the writing, let
$\gamma = \frac{L\,T^\alpha}{\alpha}.$
Next, we define our numerical method by applying Equation (5) iteratively, with initial point $u_0 \equiv f$. For every $k = \overline{0,m}$, we have
$u_0(t_k) = f(t_k), \qquad u_{n+1}(t_k) = \int_0^{t_k} a\bigl(t_k, s, u_n(s)\bigr)\,(t_k - s)^{\alpha-1}\,ds + f(t_k), \quad n = 0, 1, \dots$
We approximate $u_n(t_k)$ by $\tilde{u}_n(t_k)$, obtained by applying Equation (16) to the integrals above:
$u_1(t_k) = \int_0^{t_k} a\bigl(t_k, s, f(s)\bigr)\,(t_k - s)^{\alpha-1}\,ds + f(t_k) = \sum_{j=0}^k w_{j,k}\,a\bigl(t_k, t_j, f(t_j)\bigr) + R_1 + f(t_k) = \tilde{u}_1(t_k) + \tilde{R}_1,$
with
$\tilde{u}_1(t_k) = \sum_{j=0}^k w_{j,k}\,a\bigl(t_k, t_j, f(t_j)\bigr) + f(t_k).$
Denote the error at the nodes by
$e(u_n, \tilde{u}_n) := \max_{t_k \in [0,T]} |u_n(t_k) - \tilde{u}_n(t_k)|.$
By Equation (21), we have
$e(u_1, \tilde{u}_1) = |\tilde{R}_1| = |R_1| \le \frac{h^2}{8}\,\frac{T^\alpha}{\alpha}\,M_0.$
Similarly, we get
$u_2(t_k) = \int_0^{t_k} a\bigl(t_k, s, u_1(s)\bigr)\,(t_k - s)^{\alpha-1}\,ds + f(t_k) = \sum_{j=0}^k w_{j,k}\,a\bigl(t_k, t_j, u_1(t_j)\bigr) + R_2 + f(t_k) = \sum_{j=0}^k w_{j,k}\,a\bigl(t_k, t_j, \tilde{u}_1(t_j) + \tilde{R}_1\bigr) + R_2 + f(t_k) = \tilde{u}_2(t_k) + \tilde{R}_2,$
where
$\tilde{u}_2(t_k) = \sum_{j=0}^k w_{j,k}\,a\bigl(t_k, t_j, \tilde{u}_1(t_j)\bigr) + f(t_k).$
We have, by Equations (21) and (24),
$e(u_2, \tilde{u}_2) = |\tilde{R}_2| \le L\,|\tilde{R}_1| \sum_{j=0}^k w_{j,k} + |R_2| \le L\,|\tilde{R}_1|\,\frac{T^\alpha}{\alpha} + |R_2| \le L\,\frac{h^2}{8}\,\frac{T^\alpha}{\alpha}\,M_0\,\frac{T^\alpha}{\alpha} + \frac{h^2}{8}\,\frac{T^\alpha}{\alpha}\,M_0 = \frac{h^2}{8}\,\frac{T^\alpha}{\alpha}\,M_0\,(1 + \gamma).$
In a similar fashion, we get
$u_n(t_k) = \int_0^{t_k} a\bigl(t_k, s, u_{n-1}(s)\bigr)\,(t_k - s)^{\alpha-1}\,ds + f(t_k) = \sum_{j=0}^k w_{j,k}\,a\bigl(t_k, t_j, u_{n-1}(t_j)\bigr) + R_n + f(t_k) = \sum_{j=0}^k w_{j,k}\,a\bigl(t_k, t_j, \tilde{u}_{n-1}(t_j) + \tilde{R}_{n-1}\bigr) + R_n + f(t_k) = \tilde{u}_n(t_k) + \tilde{R}_n,$
with
$\tilde{u}_n(t_k) = \sum_{j=0}^k w_{j,k}\,a\bigl(t_k, t_j, \tilde{u}_{n-1}(t_j)\bigr) + f(t_k).$
The values $\tilde{u}_n(t_k)$ can always be computed from the values at the previous step and, for the error, by induction, we have
$e(u_n, \tilde{u}_n) = |\tilde{R}_n| \le L\,|\tilde{R}_{n-1}| \sum_{j=0}^k w_{j,k} + |R_n| \le L\,|\tilde{R}_{n-1}|\,\frac{T^\alpha}{\alpha} + |R_n| \le \frac{h^2}{8}\,\frac{T^\alpha}{\alpha}\,M_0\,\gamma\,\bigl(1 + \dots + \gamma^{n-2}\bigr) + \frac{h^2}{8}\,\frac{T^\alpha}{\alpha}\,M_0 = \frac{h^2}{8}\,\frac{T^\alpha}{\alpha}\,M_0\,\bigl(1 + \gamma + \dots + \gamma^{n-1}\bigr).$
Now we can give the following error estimate for our numerical method.
Theorem 5.
Assume the conditions of Theorem 4 hold, with $f \in C^{2,1-\alpha}(0,T]$ and $a \in C^2([0,T] \times [0,T] \times [R_1 - R, R_2 + R])$. Furthermore, assume that
$\gamma := \frac{L\,T^\alpha}{\alpha} < 1.$
Then the following error estimate holds:
$e(u^*, \tilde{u}_n) \le \frac{q^n}{1-q}\,\|F u_0 - u_0\|_\tau + \frac{T^2}{8 m^2}\,\frac{T^\alpha}{\alpha}\,M_0\,\frac{1}{1-\gamma},$
for all $n = 1, 2, \dots$ and any $m \in \mathbb{N}^*$, where $u^*$ is the true solution of Equation (4) and $\tilde{u}_n$ are the approximations given by Equation (26).
Proof of Theorem 5.
By Equations (27) and (28),
$e(u_n, \tilde{u}_n) \le \frac{T^2}{8 m^2}\,\frac{T^\alpha}{\alpha}\,M_0\,\frac{1}{1-\gamma}.$
Then, by Theorem 4,
$e(u^*, \tilde{u}_n) \le e(u^*, u_n) + e(u_n, \tilde{u}_n) \le \frac{q^n}{1-q}\,\|F u_0 - u_0\|_\tau + \frac{T^2}{8 m^2}\,\frac{T^\alpha}{\alpha}\,M_0\,\frac{1}{1-\gamma},$
where q is the contraction constant given in Theorem 2. □

## 4. Numerical Experiments

In this section, we give numerical examples of nonlinear weakly singular integral equations, to show the applicability of the method proposed.
Example 1.
Consider the integral equation
$u(t) = \frac{1}{12}\int_0^t u^2(s)\,(t-s)^{-1/2}\,ds + \sqrt{t}\,\Bigl(1 - \frac{t}{9}\Bigr), \quad t \in [0,1],$
with exact solution $u^*(t) = \sqrt{t}$ (indeed, $\int_0^t s\,(t-s)^{-1/2}\,ds = \frac{4}{3}\,t^{3/2}$, so the integral term equals $\frac{1}{9}\,t^{3/2}$, which cancels the last term of $f$).
We have $\alpha = 1/2$, $a(t,s,u) = \frac{1}{12}\,u^2$ and $f(t) = \sqrt{t}\,\bigl(1 - \frac{t}{9}\bigr)$. Let us check that all the theoretical assumptions are met.
For the function $f$, $R_1 = 0$, $R_2 = 8/9$, and we choose $R = 1$. Then $M = \frac{1}{12}\bigl(\frac{17}{9}\bigr)^2 \approx 0.2973$ and, for all $u \in [R_1 - R, R_2 + R]$,
$\frac{M T^\alpha}{\alpha} \approx 0.5947 \le R.$
We have $\frac{\partial a}{\partial u} = \frac{1}{6}\,u$, so, taking $L = \max\bigl|\frac{\partial a}{\partial u}\bigr|$ over $u \in [R_1 - R, R_2 + R]$, we get $L \approx 0.3148$ and
$\gamma \approx 0.6296 < 1.$
Also, choosing $\tau = 1$, we have
$\frac{L\,\Gamma(\alpha)}{\tau^\alpha} \approx 0.5580 < 1.$
Thus, all conditions of Theorem 5 are satisfied.
We apply the product trapezoidal rule for the values $m = 12$ and $m = 24$, with the corresponding nodes $t_k = \frac{k}{m}$, $k = \overline{0,m}$. In Table 1 we give the errors $e(u^*, \tilde{u}_n)$, with initial approximation $u_0(t) = f(t)$. Figure 1 displays the graphs of the true solution $u^*(t)$ and of the approximate solution $\tilde{u}_n$, for $n = 10$ iterations and $m = 24$ nodes, for $t \in [0,1]$. As both the errors in Table 1 and the graphs in Figure 1 show, there is very good agreement between the true and the approximate values of the solution at the nodes $t_0, \dots, t_m$.
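For reproducibility, the complete procedure for this example can be sketched in a few dozen lines (an illustrative, self-contained implementation, not the author's code; it combines the closed-form weights for Equations (17) and (18) with the iteration defining $\tilde{u}_n$):

```python
import numpy as np

alpha, T, m, n_iter = 0.5, 1.0, 24, 10
h = T / m
t = h * np.arange(m + 1)                      # nodes t_k = k h

a = lambda tk, s, u: u**2 / 12.0              # a(t, s, u) = u^2 / 12 (depends only on u)
f = lambda x: np.sqrt(x) * (1.0 - x / 9.0)    # free term

def psi_pair(c):
    # closed-form moments psi_{1,k}(i), psi_{2,k}(i) without the factor h^(alpha-1),
    # where c = k - i >= 1
    d1 = (c**alpha - (c - 1)**alpha) / alpha
    d2 = (c**(alpha + 1) - (c - 1)**(alpha + 1)) / (alpha + 1)
    return c * d1 - d2, (1.0 - c) * d1 + d2

# product-trapezoidal weights w[j, k], tabulated once
w = np.zeros((m + 1, m + 1))
for k in range(1, m + 1):
    w[0, k] = psi_pair(k)[1]
    w[k, k] = psi_pair(1)[0]
    for j in range(1, k):
        w[j, k] = psi_pair(k - j + 1)[0] + psi_pair(k - j)[1]
w *= h**alpha

# Picard iteration at the nodes, starting from u_0 = f
u = f(t)
for _ in range(n_iter):
    u = np.array([f(t[k]) + np.dot(w[:k + 1, k], a(t[k], t[:k + 1], u[:k + 1]))
                  for k in range(m + 1)])

err = np.max(np.abs(u - np.sqrt(t)))          # exact solution u*(t) = sqrt(t)
assert err < 1e-5
```

With $m = 24$ and $n = 10$, the resulting error should be of the order reported in Table 1.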
Example 2.
Next, consider the equation
$u(t) = \frac{1}{18}\int_0^t \bigl(\sin^2 s + u^2(s)\bigr)\,(t-s)^{-2/3}\,ds + \cos t - \frac{1}{6}\,t^{1/3}, \quad t \in \Bigl[0, \frac{\pi}{4}\Bigr],$
whose exact solution is $u^*(t) = \cos t$.
Now $\alpha = 1/3$, $a(t,s,u) = \frac{1}{18}\bigl(\sin^2 s + u^2\bigr)$ and $f(t) = \cos t - \frac{1}{6}\,t^{1/3}$. We check the applicability of the method by verifying the theoretical assumptions. Here $R_1 = f(\pi/4) \approx 0.5533$, $R_2 = 1$ and, taking $R = 1/2$, we have, for $u \in [R_1 - R, R_2 + R]$, $M = 11/72 \approx 0.1528$ and
$\frac{M T^\alpha}{\alpha} \approx 0.4229 \le R.$
Again, taking $L = \max\bigl|\frac{\partial a}{\partial u}\bigr|$ over $u \in [R_1 - R, R_2 + R]$, we get $L = 1/6$ and
$\gamma \approx 0.4613 < 1.$
For $\tau = 5$, we have
$\frac{L\,\Gamma(\alpha)}{\tau^\alpha} \approx 0.7227 < 1.$
So all conditions in Theorem 5 are verified.
Table 2 contains the errors $e(u^*, \tilde{u}_n)$, with initial approximation $u_0(t) = f(t)$, for the values $m = 12$ and $m = 24$, with nodes $t_k = \frac{\pi}{4m}\,k$, $k = \overline{0,m}$. Figure 2 shows the graphs of the true solution $u^*(t)$ and of the approximate solution $\tilde{u}_n$, for $n = 10$ iterations and $m = 24$ nodes, for $t \in [0, \pi/4]$.
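This example, too, can be reproduced with a short self-contained script (an illustrative implementation of the closed-form weights and the Picard iteration, not the author's code):

```python
import numpy as np

alpha, T, m, n_iter = 1.0 / 3.0, np.pi / 4.0, 24, 10
h = T / m
t = h * np.arange(m + 1)                          # nodes t_k = k h

a = lambda tk, s, u: (np.sin(s)**2 + u**2) / 18.0
f = lambda x: np.cos(x) - x**(1.0 / 3.0) / 6.0

def psi_pair(c):
    # closed-form moments psi_{1,k}(i), psi_{2,k}(i) without the factor h^(alpha-1)
    d1 = (c**alpha - (c - 1)**alpha) / alpha
    d2 = (c**(alpha + 1) - (c - 1)**(alpha + 1)) / (alpha + 1)
    return c * d1 - d2, (1.0 - c) * d1 + d2

w = np.zeros((m + 1, m + 1))                      # product-trapezoidal weights
for k in range(1, m + 1):
    w[0, k] = psi_pair(k)[1]
    w[k, k] = psi_pair(1)[0]
    for j in range(1, k):
        w[j, k] = psi_pair(k - j + 1)[0] + psi_pair(k - j)[1]
w *= h**alpha

u = f(t)                                          # Picard iteration, u_0 = f
for _ in range(n_iter):
    u = np.array([f(t[k]) + np.dot(w[:k + 1, k], a(t[k], t[:k + 1], u[:k + 1]))
                  for k in range(m + 1)])

err = np.max(np.abs(u - np.cos(t)))               # exact solution u*(t) = cos t
assert err < 1e-5
```

With $m = 24$ and $n = 10$, the error should be of the order reported in Table 2.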
As seen in the examples above, the proposed method produces approximations that are in very good agreement with the exact values of the solution, thus confirming the theoretical results and error estimates given in the previous section.

## 5. Conclusions

In this paper, we presented a numerical iterative method for approximating solutions of nonlinear Volterra integral equations of the second kind with weakly singular kernels. We used Banach’s fixed point theorem to establish the existence and uniqueness of the solution and to construct a sequence converging to it (Picard iteration). Then we employed the product trapezoidal rule to approximate each iteration at a given set of nodes in the domain. The present method is fairly simple to use; its convergence rests on a classical fixed point result. It is also quite efficient and inexpensive to implement, since most of the computations need to be done only once, not at each iteration (see Remark 4). Thus, when only values of the solution at some points are needed (as is the case in many applications), this method is more practical and accessible than other classical methods.
Moreover, the method converges quite fast, with order $O(q^n)$ with respect to the number of successive approximations and order $O\bigl(\frac{1}{m^2}\bigr)$ with respect to the number of nodes. As the examples show, it gives good approximations even with a relatively small number of iterations and quadrature nodes.
In future works, other types of singularity of the kernel can be explored for Volterra or Fredholm integral equations. Also, more complicated kernels can be considered, such as kernels containing modified (or delayed) arguments, or other special types of kernels. Various other iteration techniques for fixed point successive approximations can be employed, such as Mann iteration, Krasnoselskii iteration, and others.

## Funding

This research received no external funding.

## Conflicts of Interest

The author declares no conflict of interest.

## References

1. Agarwal, R.P.; O’Regan, D. Singular Volterra integral equations. Appl. Math. Lett. 2000, 13, 115–120. [Google Scholar] [CrossRef] [Green Version]
2. Gorenflo, R.; Vessella, S. Abel Integral Equations: Analysis and Applications, Lecture Notes in Mathematics (1461); Springer: Berlin, Germany, 1991. [Google Scholar]
3. Wang, J.; Zhu, C.; Fečkan, M. Analysis of Abel-type nonlinear integral equations with weakly singular kernels. Bound. Value Probl. 2014, 2014, 20. [Google Scholar] [CrossRef] [Green Version]
4. Becker, L.C. Properties of the resolvent of a linear Abel integral equation: Implications for a complementary fractional equation. Electron. J. Qual. Theory 2016, 64, 1–38. [Google Scholar] [CrossRef]
5. Aghili, A.; Zeinali, H. Solution to Volterra singular integral equations and non homogenous time. Gen. Math. Notes 2013, 14, 6–20. [Google Scholar]
6. Wu, G.C.; Baleanu, D. Variational iteration method for fractional calculus—A universal approach by Laplace transform. Adv. Differ. Equ. 2013, 2013, 18. [Google Scholar] [CrossRef] [Green Version]
7. Andras, S. Weakly singular Volterra and Fredholm-Volterra integral equations. Stud. Univ. Babeş-Bolyai Math. 2003, 48, 147–155. [Google Scholar]
8. Bertram, B.; Ruehr, O. Product integration for finite-part singular integral equations: Numerical asymptotics and convergence acceleration. J. Comput. Anal. Appl. 1992, 41, 163–173. [Google Scholar] [CrossRef] [Green Version]
9. Brunner, H. The numerical solution of weakly singular Volterra integral equations by collocation on graded meshes. Math. Comput. 1985, 45, 417–437. [Google Scholar] [CrossRef]
10. Diogo, T. Collocation and iterated collocation methods for a class of weakly singular Volterra integral equations. J. Comput. Appl. Math. 2009, 229, 363–372. [Google Scholar] [CrossRef] [Green Version]
11. Assari, P. Solving weakly singular integral equations utilizing the meshless local discrete collocation technique. Alexandria Eng. J. 2018, 57, 2497–2507. [Google Scholar] [CrossRef]
12. Rehman, S.; Pedas, A.; Vainikko, G. Fast solvers of weakly singular integral equations of the second kind. Math. Mod. Anal. 2018, 23, 639–664. [Google Scholar] [CrossRef] [Green Version]
13. Kumar, S.; Kumar, A.; Kumar, D.; Singh, J.; Singh, A. Analytical solution of Abel integral equation arising in astrophysics via Laplace transform. J. Egypt. Math. Soc. 2015, 23, 102–107. [Google Scholar] [CrossRef] [Green Version]
14. Mokharty, P.; Ghoreishi, F. Convergence analysis of the operational Tau method for Abel-type Volterra integral equations. Electron. Trans. Numer. Anal. 2014, 41, 289–305. [Google Scholar]
15. Diogo, T.; Ford, N.J.; Lima, P.; Valtchev, S. Numerical methods for a Volterra integral equation with non-smooth solutions. J. Comput. Appl. Math. 2006, 189, 412–423. [Google Scholar] [CrossRef] [Green Version]
16. Ali, M.R.; Mousa, M.M.; Ma, W.-X. Solution of nonlinear Volterra integral equations with weakly singular kernel by using the HOBW method. Adv. Math. Phys. 2019. [Google Scholar] [CrossRef] [Green Version]
17. Nadir, M.; Gagui, B. Quadratic numerical treatment for singular integral equations with logarithmic kernel. Int. J. Comput. Sci. Math. 2019, 10, 288–296. [Google Scholar]
18. Alvandi, A.; Paripour, M. Reproducing kernel method for a class of weakly singular Fredholm integral equations. J. Taibah Univ. Sci. 2018, 12, 409–414. [Google Scholar] [CrossRef] [Green Version]
19. Brunner, H.; Pedas, A.; Vainikko, G. The piecewise polynomial collocation method for nonlinear weakly singular Volterra equations. Math. Comput. 1999, 68, 1079–1095. [Google Scholar] [CrossRef] [Green Version]
20. Darwish, M.A. On monotonic solutions of an integral equation of Abel type. Math. Bohem. 2008, 133, 407–420. [Google Scholar] [CrossRef]
21. Sidorov, N.A.; Sidorov, D.N.; Krasnik, A.V. Solution of Volterra operator-integral equations in the nonregular case by the successive approximation method. Diff. Equ. 2010, 46, 882–891. [Google Scholar] [CrossRef]
22. Sidorov, D.N.; Sidorov, N.A. Convex majorants method in the theory of nonlinear Volterra equations. Banach J. Math. Anal. 2012, 6, 1–10. [Google Scholar] [CrossRef]
23. Noeiaghdam, S.; Dreglea, A.; He, J.; Avazzadeh, Z.; Suleman, M.; Fariborzi Araghi, M.A.; Sidorov, D.N.; Sidorov, N. Error Estimation of the Homotopy Perturbation Method to Solve Second Kind Volterra Integral Equations with Piecewise Smooth Kernels: Application of the CADNA Library. Symmetry 2020, 12, 1730. [Google Scholar] [CrossRef]
24. Atkinson, K.E. An Introduction to Numerical Analysis, 2nd ed.; John Wiley & Sons: New York, NY, USA, 1989. [Google Scholar]
Figure 1. Example 1, $n = 10$, $m = 24$.
Figure 2. Example 2, $n = 10$, $m = 24$.
Table 1. Errors $e(u^*, \tilde{u}_n)$ for Example 1.

| $n$ | $m = 12$ | $m = 24$ |
|-----|----------|----------|
| 1 | $1.084348 \times 10^{-1}$ | $1.882162 \times 10^{-2}$ |
| 5 | $2.799553 \times 10^{-4}$ | $5.567188 \times 10^{-6}$ |
| 10 | $6.813960 \times 10^{-7}$ | $4.690204 \times 10^{-9}$ |
Table 2. Errors $e(u^*, \tilde{u}_n)$ for Example 2.

| $n$ | $m = 12$ | $m = 24$ |
|-----|----------|----------|
| 1 | $1.002977 \times 10^{-1}$ | $3.014020 \times 10^{-2}$ |
| 5 | $2.315358 \times 10^{-4}$ | $4.412851 \times 10^{-5}$ |
| 10 | $9.363611 \times 10^{-7}$ | $5.525447 \times 10^{-9}$ |
