1. Introduction
We look for a unique root x* of the equation

Ω(x) = 0, (1)

where Ω is a continuous operator defined on a convex subset D of ℝ with values in ℝ, and x* ∈ D. This is a relevant issue, since several problems from mathematics, physics, chemistry, and engineering can be reduced to Equation (1).
In general, either the lack or the intractability of analytic solutions forces researchers to adopt iterative techniques. However, when using that type of approach, we find problems such as slow convergence, convergence to an undesired root, divergence, computational inefficiency, or outright failure (see Traub [1] and Petković et al. [2]). The study of the convergence of iterative algorithms can be classified into two categories, namely semi-local and local convergence analysis. The first case is based on the information available in a neighborhood of the starting point and gives criteria for guaranteeing the convergence of the iteration. In both settings, a relevant issue is the convergence domain, as well as the radii of convergence of the algorithm.
Herein, we deal with the second case, that is, the local convergence analysis. Let us consider a fourth-order algorithm defined for n = 0, 1, 2, … as in (2), where x_0 is an initial point, k is an arbitrary natural number, and the weight function H is continuous and satisfies suitable conditions. The fourth-order convergence of Method (2) was studied by Lee and Kim [3] by means of Taylor series, under hypotheses involving up to the fourth-order derivative of function Ω, together with hypotheses on the first and second partial derivatives of function H. However, only divided differences of the first order appear in (2). Favorable computations were also given in comparison with the related Kung–Traub methods [1] of the form (3). Notice that (3) is obtained from (2) if we choose function H appropriately. The assumptions on the derivatives of Ω and H restrict the suitability of Algorithms (2) and (3). For instance, let us consider the following function Ω on D.
From this expression, we obtain the corresponding derivatives. We find that Ω‴ is unbounded on D at the point x = 0. Therefore, the results in [3] cannot be applied to the analysis of the convergence of Methods (2) or (3). Notice that there are numerous algorithms and convergence results available in the literature [
1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]. Nonetheless, practice shows that the initial prediction must lie in a neighborhood of the root for convergence to be achieved. However, how close to the root must the starting point be? Indeed, local results usually do not give any information about the radii of the convergence balls.
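The concrete function behind this counterexample is not reproduced above. As an illustration of the phenomenon, the following sketch assumes the classical choice Ω(x) = x³ ln x² + x⁵ − x⁴ often used for this purpose (our assumption, not necessarily the function intended here): the root x* = 1 is perfectly regular, yet the third derivative blows up inside the domain, so hypotheses on higher-order derivatives fail.

```python
import math

def omega(x):
    # Assumed classical counterexample: Omega(x) = x^3 ln(x^2) + x^5 - x^4,
    # with the simple root x* = 1 (Omega(1) = 0).
    return x**3 * math.log(x**2) + x**5 - x**4

def omega_ppp(x):
    # Third derivative, computed by hand: 6 ln(x^2) + 60 x^2 - 24 x + 22.
    return 6.0 * math.log(x**2) + 60.0 * x**2 - 24.0 * x + 22.0

# The root is regular, yet Omega''' diverges to -infinity as x -> 0:
for x in (1e-1, 1e-3, 1e-6):
    print(f"Omega'''({x:g}) = {omega_ppp(x):.2f}")
```

The logarithmic term dominates near the origin, so any convergence theorem requiring a bounded third (or fourth) derivative on the whole domain is inapplicable, while results using only the first derivative remain in force.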
We broaden the suitability of Methods (2) and (3) by using only assumptions on the first derivative of function Ω. Moreover, we estimate computable radii of convergence and error bounds based on Lipschitz constants. Additionally, we discuss the range of the initial estimate x_0, which tells us how close to the root it must be for the convergence of (2) to be guaranteed. This problem was not addressed in [3], but it is of capital importance in practical applications.
In what follows, Section 2 addresses the local convergence study of (2) and (3), Section 3 contains three numerical examples that illustrate the theoretical formulation, and, finally, Section 4 gives the concluding remarks.
2. Convergence Analysis
Let two positive constants be given. Furthermore, we consider continuous functions h and H satisfying condition (4) for each admissible argument, with H and h nondecreasing on their respective intervals. For the local convergence analysis of (2), we need to introduce a few scalar functions and parameters. Let us first define the required parameters and an auxiliary function on the corresponding interval; from the above definitions, the basic properties used below follow easily. Moreover, we consider the function q, together with a companion function, on the same interval. It is straightforward to find that q is negative at the origin and tends to infinity as the argument approaches the right endpoint of its interval of definition. By the intermediate value theorem, we know that q has zeros in that interval. Let us assume that r is the smallest zero of function q, and set the corresponding quantity accordingly. Furthermore, let us define two further functions on [0, r) through (8) and (10). From (8), the first of them is negative at the origin, and from (10), the second tends to infinity as the argument approaches r. Further, we assume that R is the smallest zero of these functions on [0, r). Therefore, we have that for each t ∈ [0, R), the quantities defined above remain in [0, 1).
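In computations, the radius R is obtained by locating the smallest positive zero of scalar functions of this type. The following sketch illustrates the recipe on a hypothetical Newton-type majorant g(t) = L t / (2(1 − L0 t)) with made-up Lipschitz constants L0 and L; the actual functions of this section differ, but the bisection mirrors the intermediate value theorem reasoning above.

```python
def radius_bisect(L0, L, tol=1e-12):
    # h(t) = g(t) - 1 with the (hypothetical) majorant g(t) = L t / (2 (1 - L0 t)).
    # h(0) = -1 < 0 and h(t) -> +infinity as t -> 1/L0, so by the intermediate
    # value theorem h has a smallest zero in (0, 1/L0); g is increasing there,
    # so bisection locates it.
    h = lambda t: L * t / (2.0 * (1.0 - L0 * t)) - 1.0
    a, b = 0.0, (1.0 / L0) * (1.0 - 1e-12)
    while b - a > tol:
        m = 0.5 * (a + b)
        if h(m) < 0.0:
            a = m
        else:
            b = m
    return 0.5 * (a + b)

# Hypothetical Lipschitz constants; for this particular g the smallest zero
# of g - 1 has the closed form 2 / (2 L0 + L), reproduced by the bisection.
print(radius_bisect(1.0, 1.5), 2.0 / (2.0 * 1.0 + 1.5))
```

Any of the majorant functions of this section can be substituted for h, since only continuity, the sign change, and monotonicity are used.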
Let us denote by B(x, ρ) and B̄(x, ρ) the open and closed balls in ℝ with center x ∈ ℝ and of radius ρ > 0, respectively.
Theorem 1. Let us assume that Ω is a differentiable function and that a divided difference of first order of Ω exists on D. Furthermore, we consider that h and H are functions satisfying (4) and (9), and that conditions (14)–(19) hold. Then, the sequence obtained for x_0 ∈ B(x*, R) \ {x*} by (2) is well defined, remains in B(x*, R) for each n = 0, 1, 2, … and converges to x*, so that the estimates (20)–(22) hold. Moreover, the limit point x* is the unique root of Equation (1) in a suitable closed ball centered at x*. Proof. By hypotheses
, (14), (17) and (19), we further obtain estimates leading to (20) for n = 0. We need to show the invertibility of the divided difference appearing in (2). Using (15) and the definition of R, we obtain a bound strictly smaller than one.
From the Banach lemma on invertible functions [7,14], it follows that the inverse exists and (24) holds. In view of (14) and (18), we have the corresponding estimates, and similarly for the remaining point, since it belongs to B(x*, R). Then, using the second substep of Method (2), together with (11), (14), (16), (25) and (27), we obtain the estimate showing that (21) is true for n = 0 and y_0 ∈ B(x*, R). Next, we need to show the invertibility of the operator appearing in the third substep. Using (14) and (15), and the definition of R, we obtain again a bound smaller than one. Hence, the corresponding inverse exists and the associated bounds hold.
Then, by using (4), (12), (16), (27), (28), (30) and (31), we have the next estimate. Furthermore, x_1 is well defined by (24), (32) and (34). Using the third substep of (2), together with (12), (27), (28), (32) and (34), we get the estimate showing that (22) is true for n = 0 and x_1 ∈ B(x*, R). Replacing x_0, y_0, and x_1 by x_n, y_n, and x_{n+1}, respectively, in the preceding estimates, we arrive at (20)–(22). From these estimates, we conclude that lim n→∞ x_n = x* and x_{n+1} ∈ B(x*, R). Finally, to show the uniqueness, let y* be a root of Equation (1) in the closed uniqueness ball, and suppose that y* ≠ x*. Adopting (15), we get that the corresponding averaged operator is invertible. Therefore, in view of the identity 0 = Ω(y*) − Ω(x*), we conclude that x* = y*. □
Remark 1.
- (a) It follows from condition (15) and the corresponding estimate that Condition (14) can be discarded and M can be substituted by one of the alternative expressions derived there.
- (b) We note that (2) does not change if we adopt the conditions of Theorem 1 instead of the stronger ones given in [3]. In practice, for the error bounds, we can consider the computational order of convergence (COC) [10]:

ξ = ln( |x_{n+1} − x*| / |x_n − x*| ) / ln( |x_n − x*| / |x_{n−1} − x*| ), (37)

or the approximated computational order of convergence (ACOC) [10]:

ξ* = ln( |x_{n+1} − x_n| / |x_n − x_{n−1}| ) / ln( |x_n − x_{n−1}| / |x_{n−1} − x_{n−2}| ). (38)

In this way, we obtain the order of convergence while avoiding bounds that involve estimates of derivatives higher than the first Fréchet derivative.
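Both orders are easy to evaluate numerically from the tail of the iterate sequence. A minimal sketch, using Newton's method on the illustrative problem x² − 2 = 0 (our own test case, not one of the examples below):

```python
import math

def coc(xs, root):
    # Computational order of convergence, as in (37): needs the exact root.
    e = [abs(x - root) for x in xs[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

def acoc(xs):
    # Approximated computational order of convergence, as in (38):
    # uses successive differences only, so no knowledge of the root is needed.
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 4, len(xs) - 1)]
    return math.log(d[2] / d[1]) / math.log(d[1] / d[0])

# Newton iterates for the illustrative problem x^2 - 2 = 0:
xs = [1.5]
for _ in range(4):
    xs.append(xs[-1] - (xs[-1] ** 2 - 2.0) / (2.0 * xs[-1]))

print(coc(xs[:4], math.sqrt(2.0)))  # approx. 2, Newton's theoretical order
print(acoc(xs))                     # approx. 2 as well
```

Note that (37) is only usable when the root is known (as in manufactured tests), whereas (38) works on any computed sequence; both degenerate once the iterates reach machine precision, so they should be evaluated on the last few meaningful iterates.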
3. Numerical Examples
We consider some of the weight functions to solve a variety of univariate problems, as depicted in Examples 1–3.
Table 1, Table 2 and Table 3 display the minimum number of iterations necessary to obtain the required accuracy for the zeros of the functions in Examples 1–3. Moreover, we also include the initial guess, the radius of convergence of the corresponding function, and the theoretical order of convergence. Additionally, we calculate the computational orders of convergence approximated by means of (37) and (38).
All computations used a package with multiple-precision arithmetic, adopting a tolerance ε and the stopping criteria |x_{n+1} − x_n| < ε and |Ω(x_{n+1})| < ε.
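A driver of the following shape reproduces experiments of this kind. The scheme below is only a simplified derivative-free two-step iteration in the spirit of (2), with the trivial weight H ≡ 1 and w_n = x_n + Ω(x_n) (that is, β = k = 1) as our own simplifying assumptions, and an illustrative tolerance:

```python
def steffensen_two_step(f, x0, eps=1e-12, max_iter=50):
    # Simplified derivative-free two-step scheme in the spirit of (2):
    # trivial weight H == 1 and w_n = x_n + f(x_n) (beta = k = 1 assumed).
    x = x0
    for n in range(1, max_iter + 1):
        fx = f(x)
        w = x + fx
        if w == x or f(w) == fx:      # divided difference would degenerate
            return x, n
        d = (f(w) - fx) / (w - x)     # first-order divided difference f[x, w]
        y = x - fx / d                # first substep (Steffensen-type step)
        x_new = y - f(y) / d          # second substep reuses the same f[x, w]
        # stopping criteria of the kind used for the tables:
        if abs(x_new - x) < eps and abs(f(x_new)) < eps:
            return x_new, n
        x = x_new
    return x, max_iter

root, iters = steffensen_two_step(lambda t: t * t - 2.0, 1.5)
print(root, iters)   # converges to sqrt(2) in a handful of iterations
```

Any admissible weight H satisfying the conditions of Theorem 1 can be substituted at the second substep; the stopping test combines the increment criterion with the residual criterion, as in the experiments.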
Example 1. Let us define function Ω on the interval D by the stated expression. Consequently, we obtain the corresponding root and constants. We obtain a different radius of convergence when using distinct types of weight functions (for details, please see [3]); the COC (ξ) and s are presented in Table 1.

Example 2. Let us consider the approximated root, and let us define function Ω on D by the stated expression. As a consequence, we get the corresponding constants. We have distinct radii of convergence when using several weight functions (for details, please see [3]); the COC (ξ) and s are listed in Table 2.

Example 3. Using the example of the Introduction, we have the corresponding constants and the required zero. We have different radii of convergence when adopting distinct types of weight functions (for details, please see [3]); the COC (ξ) and s are given in Table 3.