1. Introductory Notes
1.1. Background
There exist many works handling the approximate solution of linear and nonlinear integral equations. However, tackling nonlinear integral equations is more challenging due to the presence of the nonlinearity, which can be expensive for many solvers [1,2].
Some authors have discussed the asymptotic error expansion of collocation-type and Nyström-type methods for nonlinear Volterra–Fredholm integral equations; see [3] for a complete discussion of this issue. One class of nonlinear integral equations is the mixed Hammerstein integral equations, with several applications in engineering problems [2].
Since the process of finding the solution of such integral equations usually produces a system of algebraic equations that must be solved quickly and accurately, we here turn our attention to developing and studying a useful numerical scheme for solving nonlinear systems, with application to tackling nonlinear integral equations.
Clearly, there are other nonlinear problems in the literature that also lead to nonlinear systems of equations; see, e.g., [4,5].
1.2. Definition
Consider a nonlinear system of equations of algebraic type as follows [6]:
a_1(x_1, x_2, …, x_m) = 0,
a_2(x_1, x_2, …, x_m) = 0,
⋮
a_m(x_1, x_2, …, x_m) = 0,
which contains m equations with m unknowns x = (x_1, x_2, …, x_m)^T, while a_1, a_2, …, a_m are functions of the coordinates. We can also write (1) using A = (a_1, a_2, …, a_m)^T in the more compact form A(x) = 0.
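As a concrete illustration of this compact form, the snippet below encodes a small hypothetical 2-by-2 system A(x) = 0 (an example of ours, not one taken from the paper) as a single vector-valued function and verifies that it vanishes at a known root:

```python
import numpy as np

def A(x):
    # a hypothetical 2-by-2 system with a known root alpha = (1, 2)
    x1, x2 = x
    return np.array([x1**2 + x2**2 - 5.0,   # a_1(x1, x2)
                     x1 * x2 - 2.0])        # a_2(x1, x2)

alpha = np.array([1.0, 2.0])
print(np.allclose(A(alpha), 0.0))  # True: alpha solves A(x) = 0
```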
The purpose of this work is to study the solution of system (1) via an iteration process and to discuss its application in solving nonlinear integral equations. As such, let us now briefly review, in the next subsection, some of the existing methods for finding its simple roots.
1.3. Existing Solvers
Steffensen's scheme for solving nonlinear systems is written as follows [7]:
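In its commonly cited multidimensional form, Steffensen's scheme reads as below, where [·, ·; A] is the first-order divided difference operator discussed next; this is the standard textbook shape and may differ cosmetically from the paper's Equation (3):

```latex
x^{(k+1)} = x^{(k)} - \big[w^{(k)},\, x^{(k)};\, A\big]^{-1} A\big(x^{(k)}\big),
\qquad w^{(k)} = x^{(k)} + A\big(x^{(k)}\big).
```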
which is based upon the divided difference operator (DDO). The first-order DDO of A for the multidimensional nodes x and y is expressed by a component-to-component procedure as follows [8]:
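A standard component-wise definition of this operator (following the usual convention in the divided-difference literature; the paper's exact display may use a different node ordering) is:

```latex
\big[x, y; A\big]_{ij}
= \frac{a_i\!\left(x_1,\ldots,x_j,\, y_{j+1},\ldots,y_m\right)
      - a_i\!\left(x_1,\ldots,x_{j-1},\, y_j,\ldots,y_m\right)}{x_j - y_j},
\qquad 1 \le i, j \le m .
```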
Recall that the first-order divided difference of A on D is a mapping [·, ·; A] : D × D → L(ℝ^m) that satisfies [x, y; A](x − y) = A(x) − A(y). Here, L(ℝ^m) denotes the set of bounded linear operators. By considering A to be differentiable, one can also express the first-order DDO as follows [8]:
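The standard integral-mean expression here (the Genocchi–Hermite formula, which the paper's display presumably matches) is:

```latex
\big[x, y; A\big] = \int_0^1 A'\big(y + t\,(x - y)\big)\, dt .
```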
Traub in [9] investigated another approach, based on the function w(x) = x + A(x), for approximating the Jacobian matrix of Newton's method and thereby obtaining Steffensen's scheme from a point-wise definition.
An improvement of (3) was given in [10,11] in the form of a two-step composition with a frozen divided difference operator. The point of (8), in contrast to (3), is that it applies two steps and, of course, two m-dimensional functional evaluations to reach a rate higher than quadratic. Here, the idea is to freeze the DDO per cycle and then increase the number of sub-steps so as to gain as much order improvement as possible, as well as some improvement in the computational efficiency index of the scheme.
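The frozen-DDO idea can be sketched in code. The snippet below assumes, for illustration, the classical two-step form y = x − B⁻¹A(x), x⁺ = y − B⁻¹A(y) with B = [x + A(x), x; A] held fixed within the cycle; the paper's scheme (8) may differ in details such as a free parameter, and the 2-by-2 test system is hypothetical:

```python
import numpy as np

def A(x):
    # hypothetical test system with a root at alpha = (1, 2)
    x1, x2 = x
    return np.array([x1**2 + x2**2 - 5.0, x1 * x2 - 2.0])

def divided_difference(A, x, y):
    """Component-wise first-order divided difference [x, y; A]."""
    m = len(x)
    D = np.empty((m, m))
    for j in range(m):
        p = np.concatenate([x[:j + 1], y[j + 1:]])  # x-coordinates up to index j
        q = np.concatenate([x[:j], y[j:]])          # x-coordinates up to index j-1
        D[:, j] = (A(p) - A(q)) / (x[j] - y[j])
    return D

def two_step_frozen_ddo(A, x, cycles=5, tol=1e-12):
    for _ in range(cycles):
        Ax = A(x)
        if np.linalg.norm(Ax) < tol:
            break
        B = divided_difference(A, x + Ax, x)   # frozen for both sub-steps
        y = x - np.linalg.solve(B, Ax)
        x = y - np.linalg.solve(B, A(y))
    return x

x = two_step_frozen_ddo(A, np.array([1.2, 1.8]))
print(np.round(x, 8))  # converges to the root (1, 2)
```

Note that both sub-steps reuse the same matrix B, which is exactly the feature exploited later when one LU factorization serves two right-hand sides.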
Let us now also recall some of the iteration schemes that require Jacobian computation. Jarratt's iteration, having fourth rate of convergence for solving (1), is given by [12]:
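The standard multidimensional form of Jarratt's fourth-order method (reproduced here from the classical literature; the paper's display may be typeset differently) is:

```latex
\begin{aligned}
y^{(k)} &= x^{(k)} - \tfrac{2}{3}\, A'\big(x^{(k)}\big)^{-1} A\big(x^{(k)}\big), \\
x^{(k+1)} &= x^{(k)} - \tfrac{1}{2}\Big[3A'\big(y^{(k)}\big) - A'\big(x^{(k)}\big)\Big]^{-1}
\Big[3A'\big(y^{(k)}\big) + A'\big(x^{(k)}\big)\Big]\, A'\big(x^{(k)}\big)^{-1} A\big(x^{(k)}\big).
\end{aligned}
```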
This fourth-order iteration expression requires the computation of two matrix inverses (based on the resolution of linear systems) to achieve its rate, which manifests that obtaining a higher rate of convergence with a two-step method is costly.
1.4. Motivation
All methods discussed until now are without memory; some improvements over such schemes can be made by considering additional memory terms.
Our motivation for pursuing this aim is not limited to tackling nonlinear systems; a further motivation is to apply such schemes to practical engineering problems such as nonlinear mixed integral equations, see, e.g., [13,14,15,16].
The goal of our development is to reach a higher computational efficiency using as few solutions of linear systems of equations and functional evaluations as possible. This is directly interlinked with the concepts of scientific computing and numerical analysis, which give meaning to investigating and proposing novel numerical procedures.
1.5. Achievement and Contribution
The objective of this work is to present a two-step higher-order scheme to solve systems of nonlinear equations. As such, we present an iteration method with memory for finding both real and complex zeros. Our scheme does not require computing the Fréchet derivatives of the function.
1.6. Organization
We unfold this article as follows. In Section 2, the derivation and contribution of an iteration expression is furnished. Section 3 provides an error analysis for its convergence rate. The computational efficiency of different solvers, including not only the number of functional evaluations but also the number of involved linear systems, the number of LU (lower–upper) factorizations, and other similar operations, is discussed in detail in Section 4. Section 5 discusses the application of the proposed scheme. Concluding remarks are given in Section 6.
2. A Derivative-Free Scheme
Here, our attempt is to increase the computational efficiency index of (8) without imposing several more steps or further DDOs in each cycle. To complete this task, we rely on the concept of methods with memory, which states that the convergence speed and efficiency of iterative methods can be improved by saving and using the already computed values of the functions and nodes.
In fact, the error equation of the uni-parametric family of methods (8) includes a term of the form (11) involving the free nonzero parameter. This parameter clearly affects not only the domain of convergence (the attraction basins of the iterative method) but also the improvement of the convergence order. When tackling a nonlinear system of equations, and since the exact solution is not known, we can use an approximation of it, computed per cycle, to make the whole relation (11) approximately zero.
It is important to discuss how we approximate this matrix by employing estimates computed from the existing data.
To improve the performance of (8) using the notion of methods with memory, we consider the following iteration expression: To ease the implementation of the scheme with memory, let us first fix some auxiliary notation. Thus, we now contribute the following scheme:
Lemma 1. Let D be a nonempty convex domain. Suppose that A is thrice Fréchet differentiable on D, that the first-order divided differences of A exist for any pair of distinct points of D, and that the initial value and the solution α are close to each other. Then one obtains the error relation below. Proof. See [17] for more details. ☐
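While the matrix form of (16) is not reproduced here, the mechanism of memory is easy to illustrate in the scalar case: the free parameter of a Steffensen-type step is updated every cycle from the already computed divided difference, approximating −1/A'(α), which raises the order without any extra function evaluation. The sketch below is a conceptual scalar analogue, not the paper's scheme:

```python
def steffensen_with_memory(f, x, beta=0.01, cycles=10, tol=1e-14):
    # beta plays the role of the accelerator; it approximates -1/f'(alpha)
    for _ in range(cycles):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + beta * fx              # Steffensen-type auxiliary node
        dd = (f(w) - fx) / (w - x)     # first-order divided difference f[w, x]
        x = x - fx / dd                # derivative-free Newton-like step
        beta = -1.0 / dd               # memory: recycle the last divided difference
    return x

root = steffensen_with_memory(lambda t: t * t - 2.0, 1.5)
print(abs(root - 2.0 ** 0.5) < 1e-12)  # True: converges to sqrt(2)
```

The update of beta costs nothing beyond a division, which is precisely the appeal of methods with memory.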
To implement (16), one needs to solve some linear systems of algebraic equations. This means that at each new step a new LU factorization is needed, and no information is exploited from the previous steps. However, there exists a body of literature about recycling this kind of information to obtain updated preconditioners for iterative solvers [18,19,20]. We leave the discussion about constructing and imposing such a preconditioner for future works in this field.
As long as the coefficient matrices are large and sparse, a Krylov subspace method can be employed to speed up the process. However, the merit of (16) is that the two linear systems share one and the same coefficient matrix. Hence, only one LU factorization is enough; by saving the decomposition, one can apply it to two different right-hand-side vectors to get the solution vectors in the sub-steps of (16).
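This remark can be seen directly in code: one factorization of the shared coefficient matrix serves both right-hand sides. In the NumPy sketch below (with a random, well-conditioned stand-in matrix), stacking the right-hand sides column-wise makes a single solve call, and hence a single internal factorization, suffice; with SciPy one could equivalently keep the result of lu_factor and call lu_solve twice:

```python
import numpy as np

m = 50
rng = np.random.default_rng(7)
B = rng.standard_normal((m, m)) + m * np.eye(m)  # stand-in for the frozen coefficient matrix
b1 = rng.standard_normal(m)                      # right-hand side of the first sub-step
b2 = rng.standard_normal(m)                      # right-hand side of the second sub-step

# One factorization, two right-hand sides: solve against the stacked columns.
S = np.linalg.solve(B, np.column_stack([b1, b2]))
s1, s2 = S[:, 0], S[:, 1]
print(np.allclose(B @ s1, b1), np.allclose(B @ s2, b2))  # True True
```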
A challenging part of the implementation of (16) is the incorporation of the acceleration parameter. This is no longer a constant and should be defined as a matrix. In this paper, whenever required, the initial matrix is specified by:
The choice of the initial matrix directly affects the whole process in terms of arriving at the convergence phase as quickly as possible. Here, (18) is in agreement with the dynamical studies of Steffensen-type methods with memory, in which the basins of attraction are larger as long as the free parameter is close to zero. Note also that updating per cycle is again based on the already computed LU factorization, which only needs to act on the identity matrix to proceed.
3. Rate of Convergence
It is known that, via the Taylor expansion of A about the node x and integrating, one obtains: It is assumed here that the Jacobian is not singular; the difference between the n-th iterate and the solution is called the error at the n-th iterate, and one has [6,21]:
Relation (20) is the equation of the error, whereas H is a p-linear function; that is, H belongs to the space of p-linear mappings from ℝ^m × ⋯ × ℝ^m to ℝ^m. Moreover, we consider the following quantity, which is a matrix:
Before stating the main theorem, assume that A is sufficiently Fréchet differentiable in D. As in [22], the l-th derivative of A at u ∈ ℝ^m, l ≥ 1, is the l-linear function A^(l)(u) : ℝ^m × ⋯ × ℝ^m → ℝ^m such that A^(l)(u)(v_1, …, v_l) ∈ ℝ^m. It is also well known that, for α + h ∈ ℝ^m lying in a neighborhood of a root α of (1), the Taylor expansion can be written and we have [22]:
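In the usual notation of such convergence analyses (reconstructed here in its standard textbook shape; the paper's coefficients presumably follow the same convention), the expansion reads:

```latex
A(\alpha + h) = A'(\alpha)\Big[h + C_2 h^{2} + C_3 h^{3} + \cdots + C_p h^{p}\Big] + O\big(h^{p+1}\big),
\qquad C_l = \tfrac{1}{l!}\, A'(\alpha)^{-1} A^{(l)}(\alpha), \quad l \ge 2 .
```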
wherein C_l = (1/l!) A'(α)^{-1} A^{(l)}(α), l ≥ 2. One finds C_l h^l ∈ ℝ^m, because A'(α)^{-1} ∈ L(ℝ^m) and A^{(l)}(α) ∈ L(ℝ^m × ⋯ × ℝ^m, ℝ^m). Moreover, for the derivative A' we have:
where I is the unit matrix of appropriate size.
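The companion expansion of the derivative, in the same standard notation (again a textbook reconstruction rather than the paper's exact display), is:

```latex
A'(\alpha + h) = A'(\alpha)\Big[I + 2C_2\, h + 3C_3\, h^{2} + \cdots\Big].
```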
Theorem 1. Assume that A in (1) is sufficiently Fréchet differentiable at any point of D. Here we also assume that A'(α) is nonsingular and that the initial approximations are sufficiently close to the solution α. Then, (16) with a choice of suitable initial vectors has 3.30 R-order of convergence. Proof. For the iteration scheme (16) in the case without memory, and using (23)–(25), we can obtain:
Let us now rewrite (26) in the asymptotic form as follows: After several symbolic calculations, taking into account Lemma 1 and the fact that the coefficients of the error terms in our m-dimensional case are all matrices whose products do not commute, one obtains:
Combining (28) and (29) into (27), we attain:
This shows that the convergence R-order p of the scheme satisfies the equation p^2 = 3p + 1, whose positive root p = (3 + √13)/2 ≈ 3.30 gives the claimed rate. The proof is ended. ☐
4. Efficiency
Here, we only need to compute one LU factorization per cycle and apply it twice, to linear systems with two different right-hand sides, plus once to an identity matrix for the acceleration matrix, in order to achieve a higher speed rate.
It is recalled that the classical index of efficiency is defined by E = p^(1/C) [8], wherein p is the convergence rate and C is the whole computational burden per cycle, counted in terms of the number of functional evaluations.
When dealing with a nonlinear system of equations, the cost of functional evaluations per cycle can be expressed as follows:
To evaluate A, m evaluations of scalar functions are required.
To evaluate the associated Jacobian matrix, m^2 evaluations of functions are needed.
To evaluate the first-order DDO, m(m − 1) additional evaluations of functions are needed.
In addition, the LU factorization costs about 2m^3/3 floating-point operations, plus about 2m^2 operations for tackling the two involved triangular systems.
wherein ω is a weight that connects the cost of one evaluation of a scalar function with one floating-point operation; it is assumed to be the same for all compared methods. No preconditioning is imposed in any cycle of these methods for solving the linear systems.
To be more precise, we consider the cost of computing each scalar function to be one unit; the costs of all other involved calculations are factors of this unit cost. This is the way to give a flops-like efficiency index [23].
Considering only the consumed functional evaluations per cycle is not a sufficient basis for reporting the indices of efficiency when solving nonlinear systems of equations. The numbers of matrix products, scalar products, LU decompositions, and solutions of triangular systems of linear algebraic equations are significant in estimating the real cost and superiority of a scheme in comparison with the existing solvers in the literature [23].
Hence, the results can be summarized as follows for large m:
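For a rough numerical sense of such comparisons, the snippet below evaluates the flops-like index p^(1/C) under hypothetical leading-order cost models (the convergence orders are taken from the text; the exact per-method cost bookkeeping is assumed here for illustration, not quoted from the paper):

```python
def efficiency_index(p, cost):
    # classical flops-like index: p ** (1 / C), with p the order, C the cost per cycle
    return p ** (1.0 / cost)

m = 100                 # system size
lu = 2 * m**3 / 3       # assumed LU factorization cost in flops
tri = 2 * m**2          # assumed cost of a pair of triangular solves

# (convergence order, assumed total cost per cycle)
models = {
    "NM (Newton)":        (2.00, m + m**2 + lu + tri),
    "PM-like (two-step)": (3.30, 3 * m + m * (m - 1) + lu + 2 * tri),
}
indices = {name: efficiency_index(p, c) for name, (p, c) in models.items()}
best = max(indices, key=indices.get)
print(best)  # the two-step, one-LU method wins under these assumptions
```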
In our comparisons, we applied Newton's quadratically convergent iteration expression (NM), Steffensen's method (SM), the third-order expression of Amat et al. (8), denoted by AM, and the presented approach (16), denoted by PM, for tackling our nonlinear systems of algebraic equations. This is also plotted in Figure 1, showing the competitiveness of the scheme with memory (16).
6. Summary
For derivative-involved iteration schemes for solving nonlinear systems, we use the Jacobian matrix A'(x), with entries ∂a_i/∂x_j. Higher-order schemes, such as Chebyshev-type methods, need higher multi-dimensional derivatives, which makes them less practical. To be more precise, the first Fréchet derivative is a matrix with m^2 elements, while the second-order Fréchet derivative has m^3 entries (ignoring the symmetry feature).
In this work, we have developed and introduced a variant of Steffensen's method with memory for tackling nonlinear problems. The scheme consists of two steps and requires the computation of only one LU factorization, which makes its computational efficiency index higher than that of some of the existing solvers in the literature.
The application of the iteration scheme to nonlinear integral equations via the collocation approach was discussed, and its application to other types of nonlinear discretized sets of equations obtained from practical problems, such as the ones in [26,27], can be investigated similarly.