A Numerical Method for Weakly Singular Nonlinear Volterra Integral Equations of the Second Kind

This paper presents a numerical iterative method for the approximate solution of nonlinear Volterra integral equations of the second kind with weakly singular kernels. We derive conditions under which a unique solution of such equations exists, as the unique fixed point of an integral operator. Iterative application of that operator to an initial function yields a sequence of functions converging to the true solution. Finally, an appropriate numerical integration scheme (a certain type of product integration) is used to produce the approximations of the solution at given nodes. The resulting procedure is a numerical method that is more practical and accessible than the classical approximation techniques. We prove the convergence of the method and give error estimates. The proposed method is applied to some numerical examples, which are discussed in detail. The numerical approximations thus obtained confirm the theoretical results and the predicted error estimates. In the end, we discuss the method, drawing conclusions about its applicability and outlining possible future research directions in the same area.


Introduction
Many fields in the area of Applied Mathematics rely on knowledge of integral equations, as they arise naturally in various applications in Mathematics, Engineering, Physics, and Technology. They can be used to model a wide range of physical problems such as heat conduction, diffusion, continuum mechanics, geophysics, electricity, magnetism, neutron transport, traffic theory, and many more. Integral equations are also used in the design of efficient parametrization algorithms for algebraic curves, surfaces, and hypersurfaces. Many initial and boundary value problems associated with ordinary and partial differential equations can be reformulated as integral equations.
Singular and weakly singular integral equations are of particular interest, since they are used to solve inverse boundary value problems whose domains are fractal curves, where classical calculus cannot be used. Abel equations and other fractional order integral equations have been studied extensively and are used in modeling various phenomena in biophysics, viscoelasticity, electrical circuits, etc.
In many applications modeled by integral equations, the kernels are not smooth, making it difficult both to find a solution and to approximate it numerically, as the convergence of approximate methods depends in general on the smoothness of the solution. Thus, classical analytical methods, such as projection methods, perform poorly in such cases, as the linear system they lead to is generally badly conditioned and difficult to solve. Proofs of convergence and error estimates can also be laborious when classical calculus cannot be used. Oftentimes, these methods also have a high implementation cost. Hence, there is a strong need for fast, easy-to-use numerical methods for these types of equations. The method we propose is based on a classical fixed point result, adapted appropriately. Then, for the approximation of the integrals involved, the product integration scheme we use is also quite efficient, since most of the computations can be done only once, not at each iteration.
In this paper, we consider a Volterra integral equation of the second kind of the type
u(t) = f(t) + ∫_0^t K(t, s, u(s)) ds, t ∈ [0, T], (1)
with a weakly singular kernel of the form
K(t, s, u) = (t − s)^(α−1) a(t, s, u), 0 < α < 1.
Later on, other smoothness assumptions will be made on a and f. We derive conditions under which results from fixed point theory provide the existence of a unique solution of this equation, as well as a sequence of successive iterations approximating it. We briefly summarize the main results for the existence of fixed points of an operator on a Banach space.
Definition 1. Let (X, || · ||) be a Banach space. A mapping T : X → X is called a q-contraction if there exists a constant 0 ≤ q < 1 such that ||Tx − Ty|| ≤ q ||x − y|| for all x, y ∈ X.
On Banach spaces, the well-known contraction principle holds:
Theorem 1. Consider a Banach space (X, || · ||) and let T : X → X be a q-contraction. Then
(a) T has exactly one fixed point, which means the equation x = Tx has exactly one solution x* ∈ X;
(b) the sequence of successive approximations x_{n+1} = Tx_n, n ∈ N, converges to the solution x*, where x_0 can be any arbitrary point in X;
(c) for every n ∈ N, the following error estimate holds:
||x_n − x*|| ≤ (q^n / (1 − q)) ||x_1 − x_0||.
Remark 1. Theorem 1 remains valid when X is replaced by a closed subset Y ⊆ X satisfying T(Y) ⊆ Y.
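The successive approximations of Theorem 1(b) and the a priori bound of Theorem 1(c) can be illustrated on a simple scalar contraction. The example below is ours, not from the paper: T(x) = cos x maps [0, 1] into itself and is a q-contraction there with q = sin 1 < 1.

```python
import math

def picard(T, x0, n):
    """Successive approximations x_{k+1} = T(x_k), as in Theorem 1(b)."""
    x = x0
    for _ in range(n):
        x = T(x)
    return x

# T(x) = cos x is a q-contraction on [0, 1] with q = sin(1) ~ 0.8415 < 1.
q, x0, n = math.sin(1.0), 0.5, 25
x_star = picard(math.cos, x0, 200)   # effectively the unique fixed point
x_n = picard(math.cos, x0, n)
# A priori estimate of Theorem 1(c): |x_n - x*| <= q^n/(1-q) * |x_1 - x_0|
bound = q**n / (1 - q) * abs(math.cos(x0) - x0)
```

The observed error |x_n − x*| sits well below the a priori bound, since the bound uses the worst-case contraction factor q over the whole interval.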
We use Banach's theorem to establish, under certain conditions, the existence and uniqueness of a solution of Equation (1) and to approximate it by applying the operator successively. Then we use a suitable numerical integration scheme to approximate the values of the solution at given nodes.
The resulting numerical method is quite easy to use and implement, while giving accurate approximations.
The paper is organized as follows. In Section 2 we derive necessary conditions for the existence and uniqueness of the solution and discuss its regularity. In Section 3 the numerical method is described, by use of a special type of product integration. The convergence and error analysis of the method are also discussed in detail. Numerical examples are given in Section 4, illustrating the applicability of the proposed method. In Section 5, the advantages of this new method are summarized and possible directions for future work in the same area are discussed.

Existence and Uniqueness of the Solution
To solve Equation (1), we apply the contraction principle to the associated integral operator F : C[0, T] → C[0, T],
F[u](t) = f(t) + ∫_0^t (t − s)^(α−1) a(t, s, u(s)) ds, t ∈ [0, T], (3)
(for the proof, see e.g., [7]).
Then we solve the integral Equation (1) by finding a fixed point of the operator F, i.e., by solving u = F[u]. We consider the space X = C[0, T] equipped with the Bielecki norm
||u||_τ := max_{t ∈ [0, T]} |u(t)| e^(−τt),
for some suitable constant τ > 0. Then, as is well known, (X, || · ||_τ) is a Banach space (see e.g., [24]) and we have the following result.
Theorem 2. Assume f ∈ C[0, T], a is continuous, and a satisfies a Lipschitz condition in its third argument: there exists L > 0 such that |a(t, s, u) − a(t, s, v)| ≤ L |u − v| for all t, s ∈ [0, T] and all u, v. Then Equation (1) has a unique solution u* ∈ C[0, T], which is the limit of the successive approximations u_{n+1} = F[u_n], for any u_0 ∈ C[0, T].
Proof of Theorem 2. Let t ∈ [0, T] be fixed. By Equation (5), we have
|F[u](t) − F[v](t)| ≤ ∫_0^t (t − s)^(α−1) |a(t, s, u(s)) − a(t, s, v(s))| ds ≤ L ∫_0^t (t − s)^(α−1) e^(τs) ds · ||u − v||_τ,
where the change of variables x = t − s gives
∫_0^t (t − s)^(α−1) e^(τs) ds = e^(τt) ∫_0^t x^(α−1) e^(−τx) dx ≤ e^(τt) Γ(α) / τ^α,
for every t ∈ [0, T] and, so,
||F[u] − F[v]||_τ ≤ (L Γ(α) / τ^α) ||u − v||_τ.
We can choose τ > 0 such that q := L Γ(α) / τ^α < 1. The conclusions now follow from Theorem 1.
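A useful consequence of this argument is that no matter how large the Lipschitz constant L is, τ can always be taken large enough to make the operator a contraction. A minimal numerical check, assuming the contraction constant takes the Bielecki-norm form q = L Γ(α)/τ^α (our reading of the constant in the proof of Theorem 2):

```python
import math

def bielecki_q(L, alpha, tau):
    """Contraction constant q = L * Gamma(alpha) / tau**alpha (assumed form,
    following the Bielecki-norm estimate in the proof of Theorem 2)."""
    return L * math.gamma(alpha) / tau**alpha

# Even for a large Lipschitz constant L, a large enough tau gives q < 1.
L, alpha = 2.0, 0.5
tau_min = (L * math.gamma(alpha)) ** (1.0 / alpha)  # q = 1 exactly at tau_min
tau = 1.1 * tau_min                                  # 10% margin above threshold
q = bielecki_q(L, alpha, tau)
```

With these values q = 1/sqrt(1.1) < 1, so F is a contraction in the norm || · ||_τ even though it need not be one in the uniform norm.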
Next, we address the question of smoothness of the solution of Equation (1). The following regularity result (Theorem 3) holds: under additional smoothness assumptions on a and f, the solution of Equation (1) belongs to the weighted space C^{2,1−α}(0, T].

Remark 3. For the proof, see e.g., [19].

The Lipschitz condition in Theorem 2 can be very restrictive if required on the entire space. To be able to use the method on a wider range of applications, we restrict the condition to a closed subset. Let || · || denote the Chebyshev norm on C[0, T] (which is equivalent to the Bielecki norm) and consider the closed ball B_R := {u ∈ C[0, T] : ||u − f|| ≤ R}, for some R > 0. Then B_R ⊆ X and we have the following result.
where M := max |a(t, s, u)| over all t, s ∈ [0, T] and all u ∈ [R_1 − R, R_2 + R], and the Lipschitz condition of Theorem 2 is required only for u, v ∈ [R_1 − R, R_2 + R]. Then the conclusions of Theorem 2 hold on B_R.
Proof of Theorem 4. By Remark 1, all we need to show is that F(B_R) ⊆ B_R; for u ∈ B_R, this follows from the conditions in Equations (8) and (9).

Numerical Method
We have now established that under the conditions of Theorem 4 a unique solution of Equation (1) exists and that it can be obtained as the limit of the sequence of successive approximations given in Equation (6). Still, the integrals involved in the iteration process cannot be computed exactly, so they have to be approximated numerically. We now proceed to approximate the values of the solution u * (t) at a given set of nodes 0 = t 0 < . . . < t m = T. That means the singular integrals in Equation (6) have to be approximated numerically at the nodes.

Product Integration
For the numerical solution, we use product integration (see [24]). The idea is to approximate the integral
I(ϕ) = ∫_a^b w(s) ϕ(s) ds,
for ϕ a smooth function and w a singular weight function, using a sequence of functions ϕ_m such that ||ϕ − ϕ_m|| → 0 as m → ∞ and such that the integrals
I_m(ϕ) := ∫_a^b w(s) ϕ_m(s) ds
can be computed exactly; then I_m(ϕ) → I(ϕ) as m → ∞, at least as fast as ϕ_m → ϕ. Hence, for a set of nodes a = s_0 < . . . < s_m = b, we use the approximation formula I_m(ϕ), with the error given in Equation (10).
One of the easiest (in terms of keeping the algebra simple) product integration methods is the so-called product trapezoidal rule. The name comes from the fact that the idea is the same as the one used to produce the trapezoidal rule, i.e., start with piecewise linear interpolation of the function ϕ, in order to obtain the sequence ϕ m .
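A minimal sketch of these product trapezoidal weights for the weight function w(s) = (t_k − s)^(α−1) on a uniform grid. The function name and the closed-form moment computation are ours; the paper's Equations (17) and (18) may organize the same quantities through the functions ψ_{1,k} and ψ_{2,k} differently:

```python
def prod_trap_weights(k, h, alpha):
    """Weights w_{j,k}, j = 0..k, such that sum_j w_{j,k} * phi(s_j) equals
    the EXACT integral of (t_k - s)**(alpha - 1) times the piecewise linear
    interpolant of phi on the uniform nodes s_j = j*h."""
    w = [0.0] * (k + 1)
    for j in range(k):                        # panel [s_j, s_{j+1}]
        a, b = (k - j - 1) * h, (k - j) * h   # u = t_k - s ranges over [a, b]
        I0 = (b**alpha - a**alpha) / alpha                    # integral of u^(alpha-1)
        I1 = (b**(alpha + 1) - a**(alpha + 1)) / (alpha + 1)  # integral of u^alpha
        w[j] += (I1 - a * I0) / h             # moment of the hat function at s_j
        w[j + 1] += (b * I0 - I1) / h         # moment of the hat function at s_{j+1}
    return w

# Sanity check: for phi = 1 the rule reproduces the exact value
# of the singular integral, namely t_k**alpha / alpha.
w = prod_trap_weights(4, 0.25, 0.5)   # t_k = 1, alpha = 1/2: sum(w) = 2
```

Since piecewise linear interpolation reproduces linear functions exactly, the rule is also exact for ϕ(s) = s, which gives a convenient second check.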

Now, the coefficients w_{j,k} in Equation (16) are obtained by integrating the piecewise linear interpolant exactly against the weight function; they are given in Equation (18) in terms of the functions ψ_{1,k} and ψ_{2,k} defined in Equation (17).
Remark 4. It is worth mentioning that, by Equation (17), the functions ψ_{1,k} and ψ_{2,k} can be computed once, for k = 0, . . . , m, and then be used in Equation (18) to find the coefficients w_{j,k} at every step; they do not have to be recomputed at each iteration n. This makes the implementation of the method very efficient.
By Equation (12), the error bound satisfies
|R_{n,k}| ≤ C / m²,
with a constant C coming from the piecewise linear interpolation error. Let us notice that this bound does not depend on k; thus, we will simply write R_n, not R_{n,k}. Also, let us note the following fact, which will be useful in the next subsection: for a fixed k ∈ {0, . . . , m}, we have
∑_{j=0}^{k} w_{j,k} = ∫_0^{t_k} (t_k − s)^(α−1) ds = t_k^α / α.

Convergence and Error Analysis
Assuming the conditions of Theorems 3 and 4 hold, one can choose u_0 ∈ B_R ∩ C^{2,1−α}(0, T] such that u_n ∈ B_R ∩ C^{2,1−α}(0, T] for all n. To analyze the convergence and give an error estimate, we introduce the following notation. Let
M_a := max { |∂^r a(t, s, u) / ∂t^{r_1} ∂s^{r_2} ∂u^{r_3}| : r = r_1 + r_2 + r_3 ≤ 2 },
and let M_f denote the analogous bound for f. Then one can find a constant M_0 > 0 such that the remainder in Equation (19) satisfies
|R_n| ≤ M_0 / m².
The constant M_0 may depend on M_a, M_f, or τ, but not on m, k, or n.
To simplify the writing, we define our numerical method by using Equation (5) iteratively, with initial point u_0 ≡ f. For every k = 0, . . . , m, we have
u_n(t_k) = f(t_k) + ∫_0^{t_k} (t_k − s)^(α−1) a(t_k, s, u_{n−1}(s)) ds.
We approximate u_n(t_k) by ũ_n(t_k), obtained by applying Equation (16) to the integrals above:
ũ_n(t_k) = f(t_k) + ∑_{j=0}^{k} w_{j,k} a(t_k, t_j, ũ_{n−1}(t_j)).
Denote the error at the nodes by
e(u_n, ũ_n) := max_{k=0,...,m} |u_n(t_k) − ũ_n(t_k)|.
The values ũ_n(t_k) can always be computed from the values at the previous step and, for the error, by Equations (21) and (24) and the Lipschitz condition, we obtain by induction
e(u_n, ũ_n) ≤ L e(u_{n−1}, ũ_{n−1}) ∑_{j=0}^{k} w_{j,k} + |R_n|.
Now we can give the following error estimate for our numerical method.
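The whole scheme (precompute the quadrature weights once, then run the Picard iteration at the nodes) can be sketched as follows. The test problem is manufactured by us, not taken from the paper: α = 1/2, a(t, s, u) = u, and f(t) = t − (4/3) t^(3/2), chosen so that the exact solution is u*(t) = t.

```python
def prod_trap_weights(k, h, alpha):
    """Exact moments of (t_k - s)**(alpha - 1) against the piecewise linear
    hat functions on the uniform grid s_j = j*h (cf. Remark 4)."""
    w = [0.0] * (k + 1)
    for j in range(k):
        a, b = (k - j - 1) * h, (k - j) * h
        I0 = (b**alpha - a**alpha) / alpha
        I1 = (b**(alpha + 1) - a**(alpha + 1)) / (alpha + 1)
        w[j] += (I1 - a * I0) / h
        w[j + 1] += (b * I0 - I1) / h
    return w

def solve_vie(f, a_fun, alpha, T, m, n_iter):
    """Approximate u*(t_k) by the iteration u_0 = f,
    u_n(t_k) = f(t_k) + sum_j w_{j,k} * a(t_k, s_j, u_{n-1}(s_j)).
    The weights are computed once and reused at every iteration."""
    h = T / m
    t = [k * h for k in range(m + 1)]
    W = [prod_trap_weights(k, h, alpha) for k in range(m + 1)]
    u = [f(tk) for tk in t]                   # initial approximation u_0 = f
    for _ in range(n_iter):
        u = [f(t[k]) + sum(W[k][j] * a_fun(t[k], t[j], u[j])
                           for j in range(k + 1))
             for k in range(m + 1)]
    return t, u

# Manufactured problem: u(t) = f(t) + integral_0^t (t-s)^(-1/2) u(s) ds,
# f(t) = t - (4/3) t^(3/2), exact solution u*(t) = t.
t, u = solve_vie(lambda x: x - 4.0 / 3.0 * x**1.5,
                 lambda tt, ss, v: v, 0.5, 1.0, 8, 100)
err = max(abs(uk - tk) for uk, tk in zip(u, t))
```

For this particular problem the piecewise linear interpolation is exact (the solution is linear), so the discrete fixed point coincides with u* at the nodes and the only remaining error is the iteration residual, which decays geometrically.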

Theorem 5. Assume the conditions of Theorem 4 hold, with f and a smooth enough that the constants M_f and M_a above are finite. Then the following error estimate holds for all n = 1, 2, . . . and any m ∈ N*:
e(u*, ũ_n) ≤ C_1 q^n + C_2 / m²,
with constants C_1, C_2 > 0 independent of n and m, where u* is the true solution of Equation (4) and ũ_n are the approximations given by Equation (26).

Proof of Theorem 5. By Equations (27) and (28), we have e(u_n, ũ_n) ≤ C_2/m², uniformly in n. Then, by Theorem 4,
e(u*, ũ_n) ≤ e(u*, u_n) + e(u_n, ũ_n) ≤ (q^n / (1 − q)) ||u_1 − u_0|| + C_2 / m²,
where q is the contraction constant given in Theorem 2.

Numerical Experiments
In this section, we give numerical examples of nonlinear weakly singular integral equations, to show the applicability of the method proposed.

Example 1. Consider the integral equation
with exact solution u * (t) = √ t.
For the function f , R 1 = 0, R 2 = 8/9 and we choose R = 1. Then M = 1 12 17 9 2 ≈ 0.2973 and We have ∂a ∂u Also, choosing τ = 1, we have Thus, all conditions of Theorem 5 are satisfied. We apply the product trapezoidal rule for the values m = 12 and m = 24, with the corresponding nodes t k = 1 m k, k = 0, m. In Table 1 we give the errors e(u * ,ũ n ), with initial approximation u 0 (t) = f (t).  Table 1 and the graphs in Figure 1 show, there is very good agreement between the true values and the approximate ones of the solution at the nodes t 0 , . . . , t m . Table 1. Errors e(u * ,ũ n ) for Example 1. Example 2. Next, consider the equation whose exact solution is u * (t) = cos t.
For τ = 5, all conditions in Theorem 5 are verified. Table 2 contains the errors e(u*, ũ_n), with initial approximation u_0(t) = f(t), for the values m = 12 and m = 24, with nodes t_k = πk/(4m), k = 0, . . . , m. Figure 2 shows the graphs of the true solution u*(t) and of the approximate solution ũ_n, for n = 10 iterations and m = 24 nodes, for t ∈ [0, π/4].

Table 2. Errors e(u*, ũ_n) for Example 2.

As seen in the examples above, the proposed method produces approximations that are in very good agreement with the exact values of the solution, thus confirming the theoretical results and error estimates given in the previous section.

Conclusions
In this paper, we presented a numerical iterative method for approximating solutions of nonlinear Volterra integral equations of the second kind with weakly singular kernels. We used Banach's fixed point theorem to establish the existence and uniqueness of the solution and to find a sequence converging to it (Picard iteration). Then we employed the product trapezoidal rule to approximate each iteration at a given set of nodes in the domain. The present method is fairly simple to use, and its convergence rests on a classical fixed point result. It is also quite efficient and inexpensive to implement, since most of the computations need to be done only once, not at each iteration (see Remark 4). Thus, when only the values of the solution at some points are needed (as is the case in many applications), this method is more practical and accessible than other classical methods.
Moreover, the method converges quite fast, with order O(q^n) with respect to the number of successive approximations and order O(1/m²) with respect to the number of nodes. As the examples show, it gives good approximations even with a relatively small number of iterations and quadrature nodes.
In future work, other types of singularity of the kernel can be explored for Volterra or Fredholm integral equations. Also, more complicated kernels can be considered, such as kernels containing modified (or delayed) arguments, or other special types of kernels. Various other iteration techniques for fixed point successive approximations can be employed, such as Mann iteration, Krasnoselskii iteration, and others.