1. Introduction
The most commonly recurring problems in engineering, the physical and chemical sciences, computing and applied mathematics can usually be summed up as solving a non-linear equation of the form
G(x) = 0,  (1)
with the operator G, defined on D, being Fréchet differentiable, where the underlying spaces are complete normed linear spaces and D is a non-empty, open and convex set.
Researchers have attempted for decades to overcome this nonlinearity. From the analytical viewpoint, these equations are very challenging to solve. For this very reason, researchers predominantly rely on iterative methods (IM) to find solutions of such non-linear equations. The most widely used IM for solving such nonlinear equations is Newton's method. In recent years, with advancements in science and mathematics, many new higher-order iterative methods for dealing with nonlinear equations have been found and are presently being employed [1,2,3,4,5,6,7,8]. Nevertheless, the results on the convergence of the iterative methods utilised in the above-mentioned articles are derived by applying high-order derivatives. In addition, no results address the error bounds, convergence radii or the domain in which the solution is unique.
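For reference, the classical Newton step solves a linear system with the first derivative (the Jacobian in the finite-dimensional case) at every iteration. The following minimal sketch illustrates this baseline method only; the test system, starting point and tolerances are illustrative choices, not taken from this paper.

import numpy as np

def newton(G, JG, x0, tol=1e-12, max_iter=50):
    # Classical Newton iteration: x_{n+1} = x_n - JG(x_n)^{-1} G(x_n).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(JG(x), G(x))   # solve JG(x) s = G(x)
        x = x - step
        if np.linalg.norm(step) < tol:        # stop once the correction is negligible
            break
    return x

# Illustrative 2x2 system: intersection of the unit circle with the line x = y.
G = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
JG = lambda v: np.array([[2.0*v[0], 2.0*v[1]], [1.0, -1.0]])
print(newton(G, JG, x0=[1.0, 0.0]))   # converges to approximately (0.7071, 0.7071)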
The study of local convergence analysis (LCA) and semi-local analysis (SLA) of an IM permits calculating the radii of the convergence domains, error bounds and a region in which the solution is unique. The work in [9,10,11,12] discusses the local and semi-local convergence of different iterative methods. In the above-mentioned articles, important results on the radii of convergence domains and on error estimates are presented, thereby expanding the utility of these iterative methods. Outcomes of this type of study are crucial, as they exhibit the difficulty of selecting starting points.
In this article, we establish theorems of convergence for two multi-step IMs, of fifth (2) and higher (3) order of convergence, proposed in [8]. The methods are given by (2) and (3), where p is a positive integer.
It is worth emphasizing that (2) and (3) are iterative and not analytical methods. That is, the solution is obtained as an approximation using these methods. Iterative methods are more popular than analytical ones, since in general it is rarely possible to find the solution in closed form.
Motivation: The LCA of the methods (2) and (3) is given in [8]. The order is specified using Taylor's formula and requires the employment of higher-order derivatives not present in the methods. Additionally, these works cannot give estimates on the error bounds, the radii of the convergence domains or the uniqueness domain. To observe the limitations of the Taylor series approach, consider a function G (revisited in Example 5) whose relevant higher-order derivative is unbounded on its domain. Then, we can effortlessly observe that, since this derivative is unbounded, the conclusions on the convergence of (2) and (3) discussed in [8] are not appropriate for this example.
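To make the obstruction concrete, here is a representative function of the kind typically used in such motivational examples; since the specific formula of the original text is not recoverable here, the following should be read as a hypothetical illustration rather than as the paper's own example.

% Hypothetical illustration (not the paper's example): a function whose third derivative
% is unbounded near the origin, so Taylor-based conditions involving it fail on any
% domain containing 0, even though the function has the simple root t = 1.
\[
G(t) = t^{3}\ln(t^{2}) + t^{5} - t^{4}, \qquad t \in \left[-\tfrac{1}{2}, \tfrac{3}{2}\right], \qquad G(1) = 0,
\]
\[
G'''(t) = 6\ln(t^{2}) + 60\,t^{2} - 24\,t + 22, \qquad G'''(t) \to -\infty \ \text{ as } t \to 0.
\]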
Novelty: The aforementioned disadvantages encourage us to introduce convergence theorems providing, and hence comparing, the domains of convergence of (2) and (3) by considering hypotheses based only on the first derivative, which is the only one appearing in the methods. This research work also presents important results for the estimation of the error bounds and the radii of the domain of convergence. Discussions about the exact location and the uniqueness of the root are also provided in this work.
The rest of this article is outlined as follows: Section 2 deals with the LCA of the methods (2) and (3). The SLA, considered more important than the LCA and not provided in [8], is also dealt with in this article, in Section 3. The convergence outcomes are tested using numerical examples given in Section 4. Example 4 deals with a real-world application problem. In Example 5, we revisit the motivational example to show that the new results apply to it. Conclusions of this study are given in Section 5.
2. Local Convergence Analysis
Some scalar functions are developed to prove the convergence. Let .
Suppose:
- (i)
There exists a function : which is non-decreasing and continuous (NC) and the equation admits a minimal solution (MS) . Set .
- (ii)
There exists a function
:
which is NC so that the equation
admits a MS
, with the function
:
being
- (iii)
The equation admits a MS . Set , where .
- (iv)
The equation
admits a MS
, provided the function
:
is defined by
where
In applications, the smallest version of the function shall be chosen.
The parameter
r is the radius of the convergence ball (RC) for the method (
2) (see Theorem 1).
Set .
Then, if
, it is implied that
The following conditions justify the introduction of the functions
and
and help in proving the LC of the method (
2).
- ()
There exists with and .
- ()
for each . Set .
- ()
for each .
and
- ()
, with
r given in (
6).
Conditions (
)–(
) are employed to show the LC of the method (
2). Let
.
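In practice, a radius of the type (6) is obtained as the smallest positive solution of scalar equations built from the majorant functions of conditions (i)–(iv). The sketch below shows one way such a radius could be computed numerically; the functions w0, w, g1 and g2 are hypothetical placeholders in the spirit of this kind of analysis, not the functions defined in this paper.

import numpy as np
from scipy.optimize import brentq

# Hypothetical majorant functions (placeholders for the non-decreasing, continuous
# functions of conditions (i)-(iv)); linear choices keep the example simple.
w0 = lambda t: 0.5 * t
w  = lambda t: 0.9 * t

# Illustrative g-functions; each radius is taken where g_i first reaches the value 1.
g1 = lambda t: w(t) / (2.0 * (1.0 - w0(t)))
g2 = lambda t: g1(t) / (1.0 - w0(t))

def minimal_solution(h, upper):
    # Smallest t in (0, upper) with h(t) = 1, located by scanning then root-finding.
    grid = np.linspace(1e-12, upper, 10000)
    for a, b in zip(grid[:-1], grid[1:]):
        if (h(a) - 1.0) * (h(b) - 1.0) < 0.0:
            return brentq(lambda t: h(t) - 1.0, a, b)
    return upper

rho0 = minimal_solution(w0, 10.0)            # where 1 - w0(t) remains positive
r = min(minimal_solution(g1, rho0), minimal_solution(g2, rho0))
print("convergence radius r =", r)           # r is about 0.8 for these placeholder functions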
Theorem 1. Under the conditions ()–(), further assume that the starting point lies in the ball of radius r centred at the solution. Then, the sequence given by the method (2) is convergent to the solution and the estimates (11) and (12) hold, where (6) gives the formula for the radius r and the functions involved are previously provided.
Proof. Let us pick
. By applying the conditions (
), (
), (
6) and (
7), we observe in turn that
Estimate (
13) and the standard Banach lemma on linear invertible operators [9,10,13] guarantee that
together with
Hypothesis
and (
14) imply that the iterate
exists. Thus, by the first sub-step of method (
2), we get in turn that
In view of (
), (
6), (9), (
14) (for
) and (
15), we obtain in turn that
Hence, the iterate
and the assertion (
11) hold if
. Notice also that (
14) holds for
, since
. Hence, the iterate
exists by the second sub-step of the method (
2). Moreover, the third sub-step gives
since the bracket gives
Furthermore, by (
6), (10), (
), (
14) (for
), (
16) and (
17), we can attain in turn that
Therefore, the iterate
and the assertion (
12) remain true for
. The induction for the assertions (
11) and (
12) is completed by switching
,
,
by
in the above calculations. Finally, from the estimate
where
we deduce that the iterate
and
. □
Next, a region containing only one solution is determined.
Proposition 1. Suppose:
- (i)
(1) has a solution for some .
- (ii)
The condition () holds in the ball .
- (iii)
There exist such that
Then, in the region , where , the Equation (1) has only one solution .
Proof. Let us define the linear operator
. By utilizing the conditions (
) and (
), we attain in turn that
Therefore, we deduce that
, since the linear operator
and
□
Remark 1. (1) The parameter can be chosen to be r.
(2) The result of Theorem 1 can immediately be extended to hold for method (3) as follows: Define the following real functions on the interval
Assume that the equations
admits smallest solutions
. Define the parameter
by
Then, the parameter
is a RC for the method (
3).
Theorem 2. Under the conditions ()–() for , the sequence generated by (3) is convergent to .
Proof. By applying Theorem 1, we get in turn that
Then, the calculations for the rest of the sub-steps are in turn:
where we also used the estimates
and
By switching
by
in the above calculations we get
Therefore, we deduce that and all the iterates . □
Remark 2. The conclusions on the solution given in Proposition 1 are also clearly valid for method (3).
3. Semi-Local Analysis
The convergence in this case uses the concept of a majorizing sequence.
Define the scalar sequence for
and
and for each
as follows
The sequence
is shown to be majorizing for method (
3). We now produce a general convergence result for it.
Lemma 1. Suppose that there exists so that condition (20) holds for each . Then, the sequence generated by (19) is non-decreasing (ND) and convergent to some .
Proof. It follows by formula (
19) and condition (
20) that
is bounded above by
and ND. Thus, we can state that there exists
such that
. □
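As a small illustration of how a majorizing sequence is used, the sketch below iterates a scalar recursion of Newton–Kantorovich type and stops when it has numerically reached its least upper bound; this recursion and its data are hypothetical stand-ins for the sequence (19), chosen only to show the mechanism of Lemma 1 (boundedness plus monotonicity imply convergence).

# Hypothetical scalar majorizing recursion (Newton-Kantorovich style), not the
# sequence (19): t_{n+1} = t_n + L (t_n - t_{n-1})^2 / (2 (1 - L t_n)).
def majorizing_limit(eta, L, max_iter=200, tol=1e-14):
    t_prev, t = 0.0, eta              # first two terms of the scalar sequence
    for _ in range(max_iter):
        if L * t >= 1.0:              # a sufficient condition of type (20) fails: no guarantee
            return None
        t_next = t + L * (t - t_prev) ** 2 / (2.0 * (1.0 - L * t))
        if t_next - t < tol:          # non-decreasing and numerically stationary: LUB reached
            return t_next
        t_prev, t = t, t_next
    return t

# Illustrative data: eta bounds the first step, L is a Lipschitz-type constant.
print(majorizing_limit(eta=0.1, L=2.0))   # prints the limit, roughly 0.1127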
Remark 3. (1) The limit point is the unique least upper bound (LUB) for the sequence .
- (2)
A possible choice for , where the parameter is given in condition (i) of Section 2.
- (3)
We can take , if the function is strictly increasing.
Next, again we relate the functions
and the sequence
to the method (
2). Suppose:
- ()
There exists a point and a parameter with and .
- ()
for each . Set .
- ()
for each .
- ()
- ()
.
Next, the preceding notation and the conditions (
)–(
) are employed to show the SLA of the method (
2).
Theorem 3. Assume the conditions ()–() hold. Then, the sequence produced by the method (2) is well-defined in the ball , remains in the ball for each and is convergent to some limit such that the estimate (22) holds.
Proof. Mathematical induction is used to verify the assertions (
21) and (22). Method (
2), sequence (
19) and condition (
) imply
Thus, the iterate
and the assertion (
21) hold for
.
Let
be an arbitrary point. Then, it follows by (
) and the definition of
that for each
Hence, we have
and
In particular, for
,
and the iterate
exists. Suppose that (
21) holds for each
. We need the estimates
where we also used that
Then, by method (
2), (
25) and (
26), it follows that
and
Thus, the iterate
and the estimate (22) hold. Moreover, by the first sub-step of method (
2), we can formulate that
By the induction hypotheses, (
) and (
27), we have in turn
Furthermore, by applying first sub-step of (
2), (
19), (
24) (for
) and (
28) we get in turn
and
Therefore, the iterate
and the induction for the assertions (
21) and (22) is completed. Observe that the sequence
is Cauchy and hence convergent. Thus, the sequence
is also Cauchy by (
21) and (22) in a Banach space
. Consequently, there exists
so that
. Therefore, by the continuity of the operator
G, and the estimate (
27) for
, we deduce that
. Let
be an integer. Then, if we let
in the estimate
we show estimate (22). □
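The limiting argument in this last step is the standard majorization estimate; written in generic notation (the displayed inequality above is not reproduced here), it reads
\[
\|x_{n+j} - x_{n}\| \le t_{n+j} - t_{n} \;\longrightarrow\; t^{*} - t_{n} \quad (j \to \infty),
\]
so letting j tend to infinity yields the a priori bound (22).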
Next, a region is determined in which the solution is unique.
Proposition 2. Suppose:
- (i)
A solution of (1) exists for some .
- (ii)
Condition () holds in the ball .
- (iii)
There exists such that
Then, in the region , where , the only solution of (1) is .
Proof. Let
with
. Then, it follows by (
) and (
29) that for
,
thus, we conclude that
. □
Remark 4. (1) If the condition () is replaced by or , then the conclusions of Theorem 3 are still valid.
- (2)
Under all the conditions ()–(), we can set in Proposition 2 and .
5. Conclusions
Many applications in chemistry and physics require solving abstract equations by employing an iterative method. That is why a new local analysis based on generalized conditions is established using the first derivative, which is the only one present in these methods. The new approach determines upper bounds on the error distances and the domain containing only one solution. Earlier local convergence theories [8] rely on derivatives which do not appear in the methods. Moreover, they do not give computable information on the error distances, especially a priori. The same is true for the convergence region. The methods are extended further by considering the semi-local case, which is considered more interesting than the local one and was not considered in [8]. Thus, the applicability of these methods is increased in different directions. The technique relies on the inverse of the operator appearing in the method; other than that, it is method-free. That is why it can be employed with the same benefits on other such methods [14,15,16,17]. This will be the direction of our research in the near future.