1. Introduction
This study is devoted to analyzing and computing the solution of nonlinear functional Volterra integral equations. These equations have applications in several areas, such as the physical sciences [1,2,3], optimal control and economics [4,5,6,7], the reformulation of more difficult mathematical problems [8,9], and epidemiology [10,11]. In [12], sufficient conditions for the existence of a principal solution were derived for nonlinear Volterra equations, and an explicit method was also proposed.
Since closed-form analytical solutions generally do not exist, numerical techniques provide a means of approximating them. For example, numerical algorithms have been proposed based on triangular functions [13], collocation methods [14,15], CAS wavelets [16,17], variational iteration methods [18,19,20,21], collocation–trapezoidal methods [22,23], linear programming [24], the Picard–trapezoidal rule [9], and Taylor series [25]; see [10,11,26,27,28,29] for further ideas. Most of these methods directly discretize the original nonlinear integral equation without using any fixed-point iteration (such as the Picard iteration). In this work, we refer to methods of this type as direct discretization (DD) algorithms; a typical example is the one proposed in [22].
It is well known that, under suitable conditions and with an arbitrary initial function in a suitable Banach space, the fixed point of an appropriate operator can be approximated using an applicable fixed-point iteration technique, such as the Picard, Krasnoselskij, Mann, or Ishikawa schemes [30,31,32,33]; see also [34]. These iterative methods can produce analytical expressions for the approximating functions, provided all the operations in the operator are analytically realizable. We refer to approaches of this type as Picard-type (PT) schemes; see [35] for an example.
The challenge with DD schemes is that they lead to nonlinear algebraic systems, which require substantial computational resources, time, and even advanced programming skills to solve. PT schemes, on the other hand, cease to be practically useful once the operations involved in the operator cannot be carried out analytically, which is usually the case in nonlinear problems. Micula [9] introduced the idea of combining PT schemes with DD using the Picard iteration and the trapezoidal rule; see also [36] for Mann's iteration.
It is known that the Mann iteration converges faster than Picard's [36]; however, Theorem 9.4 of [30] proves that, for certain operators, given any Mann iteration converging to the fixed point, there is always a Krasnoselskij iteration that converges faster. Therefore, the present paper develops the combined technique for functional integral equations using the Krasnoselskij iterative algorithm and a one-dimensional quadrature rule defined at collocation points. The advantages of the approach are as follows. First, unlike the DD schemes, it does not lead to a coupled nonlinear algebraic system; indeed, not even linear systems are encountered, so Newton-type and other nonlinear solvers are avoided entirely, and linear iterative algorithms are not needed either. Second, unlike the PT schemes, every integral is explicitly approximated. A systematic analysis of the convergence of the approach is carried out, and numerical examples are provided to show the second-order accuracy of the method. Moreover, the existence and uniqueness results in Micula [9,36] are obtained on the basis of a contraction assumption; in the current work, we prove the solvability of the problem without any contraction assumption by employing the generalized Banach contraction principle.
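For reference, the classical schemes mentioned above take the following standard forms for an operator T on a Banach space (the symbols λ and λ_n below are the usual iteration parameters; the exact notation in [30,31,32,33] may differ):

```latex
\begin{align*}
x_{n+1} &= T x_n && \text{(Picard)}\\
x_{n+1} &= (1-\lambda)\,x_n + \lambda\, T x_n, \quad \lambda \in (0,1) && \text{(Krasnoselskij)}\\
x_{n+1} &= (1-\lambda_n)\,x_n + \lambda_n\, T x_n, \quad \lambda_n \in (0,1) && \text{(Mann)}
\end{align*}
```

The Krasnoselskij scheme is the special case of the Mann scheme with a constant parameter sequence, which is what makes a single, fixed choice of λ possible in the combined method developed here.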
To be precise, the problem investigated in this work is the following nonlinear functional Volterra integral equation of the second kind:
The solvability of the problem is proved in Section 2, whereas the numerical algorithm, which begins with the Krasnoselskij iteration, is derived in Section 3. The error and convergence of the method are analyzed in Section 4, and numerical examples are provided in Section 5 to demonstrate the accuracy. In Section 6 we assess the performance, and some concluding remarks are given in Section 7.
3. Numerical Algorithm
This section details the numerical approximation of problem (1). To this end, let $N \in \mathbb{N}$, set $h = (b-a)/N$, and define the uniform mesh $t_i = a + ih$, $i = 0, 1, \ldots, N$. We also define the corresponding grid functions.
Since T is a contraction, the sequence $(x_k)$ generated by the Krasnoselskij iteration
$$x_{k+1} = (1-\lambda)\, x_k + \lambda\, T x_k, \qquad \lambda \in (0,1), \tag{9}$$
converges to the fixed point $x^*$ of T [30], with the error estimate [31]
$$\|x_k - x^*\| \le \frac{\theta^k}{1-\theta}\, \|x_1 - x_0\|,$$
where $\theta = 1 - \lambda(1-L)$ and $L$ is the contraction constant of T.
Lemma 2 (See [37]). Let $a = t_0 < t_1 < \cdots < t_n = b$ be equally spaced points in the interval $[a,b]$ with step size $h$. Suppose that $f \in C^2([a,b])$. Then
$$\int_a^b f(s)\,ds = h\left[\tfrac{1}{2}f(t_0) + \sum_{j=1}^{n-1} f(t_j) + \tfrac{1}{2}f(t_n)\right] + E_n(f),$$
and there exists $\xi \in (a,b)$ such that $E_n(f) = -\tfrac{(b-a)h^2}{12}\, f''(\xi)$.

To derive the method, we first collocate problem (1) at the mesh points. This gives:
Observe that this is exact, as no approximation has been made. Hence, we can initialize the Krasnoselskij sequence (9) from (14) as follows:
Therefore, the iteration becomes:
We now collocate (15) at the mesh points:
Using the trapezoidal rule of Lemma 2 to approximate the integral in the iterative scheme (16), we obtain the algorithm:
The system (17) constitutes the numerical algorithm for approximating problem (1). We prove in the next section that the sequence of solutions computed with this scheme converges to the exact solution of problem (1).
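As an illustration of how a scheme of the form (17) can be implemented, the following minimal sketch applies the Krasnoselskij–trapezoidal iteration to a model scalar Volterra equation x(t) = g(t) + \int_0^t F(t, s, x(s)) ds on [0, 1] with a known exact solution. The kernel, the parameter λ = 0.7, the iteration count, and all function names are illustrative choices, not the paper's notation:

```python
def solve_volterra(g, F, T=1.0, N=100, lam=0.7, iters=80):
    """Krasnoselskij iteration combined with the composite trapezoidal rule
    for x(t) = g(t) + int_0^t F(t, s, x(s)) ds on a uniform mesh."""
    h = T / N
    t = [i * h for i in range(N + 1)]
    x = [g(ti) for ti in t]  # initial guess on the mesh
    for _ in range(iters):
        x_new = []
        for i in range(N + 1):
            # composite trapezoidal approximation of the integral up to t_i
            if i == 0:
                quad = 0.0
            else:
                quad = 0.5 * h * (F(t[i], t[0], x[0]) + F(t[i], t[i], x[i]))
                quad += h * sum(F(t[i], t[j], x[j]) for j in range(1, i))
            # Krasnoselskij update: convex combination of x_k and T x_k
            x_new.append((1.0 - lam) * x[i] + lam * (g(t[i]) + quad))
        x = x_new
    return t, x

# Model problem with known exact solution x(t) = t:
#   x(t) = t - t**4/4 + int_0^t s * x(s)**2 ds
g = lambda t: t - t**4 / 4.0
F = lambda t, s, xs: s * xs * xs
t, x = solve_volterra(g, F)
err = max(abs(xi - ti) for xi, ti in zip(x, t))
print(f"max error on the mesh: {err:.2e}")
```

Note that each sweep costs only function evaluations and weighted sums; no linear or nonlinear algebraic system is ever assembled, and on this model problem the maximum mesh error is of size O(h²), in line with the second-order accuracy of the trapezoidal rule.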
4. Convergence Analysis
Definition 1 (Maximum Error). The numerical error is the maximum, over all mesh points, of the error committed in approximating the exact solution by means of the scheme (17).

Remark 1. The goal in this section is to show that this quantity vanishes as $h \to 0$ and $k \to \infty$, provided assumption (11) holds. Let us first prove the following lemma.
Lemma 3. Let γ be a Lipschitz operator with a given Lipschitz constant. Then the stated inequality holds.

Proof. The result is trivial in the first two cases, so we only prove the inequality in the remaining case. By the Lipschitz continuity of γ, and because of (20), the left-hand side of the last inequality can be rewritten and bounded as claimed. Hence the result. □
Lemma 4 (Recurrence Relation). The error committed in using the scheme (17) to approximate the iterative process (16) satisfies the following recurrence relation.

Proof. First, since the initialization is exact, the initial error vanishes. Substituting into (17) and subtracting the corresponding terms of (16), we obtain the stated recurrence. This proves the claim. □
Theorem 2 (Convergence). The error committed in using the scheme (17) to approximate the iterative process (16) satisfies the estimate (30), and hence tends to zero under the stated conditions.

Theorem 3 (Convergence). The numerical solution computed using the scheme (17) converges to the exact solution.

Proof. The error between the numerical and the exact solution is bounded by the sum of the fixed-point iteration error and the quadrature error, where the relevant quantity is defined in (11). Hence, the convergence result follows (see Definition 1 and the remark that follows it). □
Remark 2. Observe the quantity appearing in (30). Since the left-hand side of (30) is positive, this quantity is required to be non-negative, and it follows that the requirement is satisfied under the condition given above. This condition is a requirement for the scheme to be convergent.
Remark 3 (Implication of the analysis). As observed in the proof of Theorem 3, the numerical error consists of two parts: the fixed-point iteration error and the quadrature error. Hence, both errors must converge to zero for the proposed method to converge to the exact solution, and the solutions computed with the method are obtained at minimal computational cost compared with methods that involve solving nonlinear systems. Moreover, since the convergence of the fixed-point iteration is guaranteed whenever the operator is a contraction, it follows that, once an appropriate quadrature rule is in place, the contractivity of the associated operator guarantees that numerical solutions can be computed accurately and efficiently.
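This two-part structure can be written out via the triangle inequality; here x denotes the exact solution, x_k the k-th Krasnoselskij iterate, and x_i^k its computed mesh value (hypothetical notation chosen for illustration):

```latex
\bigl| x(t_i) - x_i^k \bigr|
\;\le\;
\underbrace{\bigl| x(t_i) - x_k(t_i) \bigr|}_{\text{fixed-point iteration error}}
\;+\;
\underbrace{\bigl| x_k(t_i) - x_i^k \bigr|}_{\text{quadrature error}}
```

The first term vanishes as k → ∞ by contractivity, and the second as h → 0 by the quadrature estimate.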
6. Performance Analysis
In this section, we reuse the examples of Section 5 to assess the computational efficiency of the proposed method in comparison with a nonlinear-system-based (DD) method, as proposed in [22]. We achieve this by comparing the CPU time used by each method; we also briefly discuss the memory usage of the scheme. The parameter settings used for Examples 1, 2, 3, and 4 are as specified. Both algorithms (the proposed method and that of [22]) solve each of the four example problems on the same number of equal sub-intervals of the computational interval.
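A CPU-time comparison of this kind can be produced with a small harness such as the sketch below; the two solver functions are placeholders standing in for the proposed scheme and the DD scheme (the names are illustrative), and `time.perf_counter` is Python's standard high-resolution timer:

```python
import time

def time_solver(solver, *args, repeats=5):
    """Return the best wall-clock time over several runs of `solver`."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        solver(*args)
        best = min(best, time.perf_counter() - t0)
    return best

# Placeholder workloads standing in for the two methods under comparison.
def proposed_scheme(n):
    return sum(i * i for i in range(n))

def dd_scheme(n):
    return sum(i * i for i in range(n))

print(time_solver(proposed_scheme, 10_000), time_solver(dd_scheme, 10_000))
```

Reporting the best of several runs reduces the influence of transient system load on the measured times.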
Table 5, Table 6, Table 7 and Table 8 display the results, reporting the elapsed CPU time and the error committed by each method in solving each problem. The proposed method clearly outperforms the direct discretization (DD) method [22], showing much better computational efficiency. It is also important to note that the error of the DD scheme is no better than that of the new scheme.
In addition to the above merits, the new scheme is more memory-efficient than the DD scheme, since it does not require solving linear or nonlinear systems. From the programming point of view, the new scheme is also very easy to implement. All these advantages lead to the conclusion that the present method is highly competitive.