1. Introduction
Numerical analysis is a wide-ranging discipline with close connections to mathematics, computer science, engineering and the applied sciences. One of the most basic and earliest problems of numerical analysis is to find, efficiently and accurately, an approximate locally unique solution of an equation of the form F(x) = 0, where F is a Fréchet differentiable operator defined on a convex subset D of X with values in Y, and X and Y are Banach spaces.
Analytical methods that yield the exact values of the required roots of such equations are almost non-existent. Therefore, one can only obtain approximate solutions, up to any specified degree of accuracy, by relying on numerical methods based on iterative procedures. Researchers worldwide therefore resort to iterative methods, and a plethora of them has been proposed [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16]. While using these iterative methods, researchers face the problems of slow convergence, non-convergence, divergence, inefficiency or failure (for details, please see Traub [15] and Petković et al. [13]).
The convergence analysis of iterative methods is usually divided into two categories: semi-local and local. Semi-local convergence analysis uses information around an initial point to give criteria ensuring the convergence of the iteration procedure, whereas local convergence analysis uses information around a solution to find estimates of the radii of convergence balls. A very important problem in the study of iterative procedures is the convergence domain; it is therefore important to provide the radius of convergence of an iterative method.
We study the local convergence analysis of the three-step method defined for each n = 0, 1, 2, … by Scheme (2), where x0 is an initial point and the first two substeps constitute any two-point optimal fourth-order scheme. The eighth order of convergence of Scheme (2) was shown in [1] under conditions on H and on the divided difference of first order of operator F [5,6]. The local convergence was established using Taylor series expansions and hypotheses reaching up to the fifth-order derivative. These hypotheses on the derivatives of F and H limit the applicability of Scheme (2). As a motivational example, define function F on its domain by
One can easily find that the relevant higher-order derivative of F is unbounded on the domain at a certain point. Hence, the results in [1] cannot be applied to show the convergence of Scheme (2), or of its special cases, which require hypotheses on the fifth or higher derivatives of F. Notice that, in particular, there is a plethora of iterative methods for approximating solutions of nonlinear equations [1,2,3,4,5,6,7,8,10,11,12,13,14,15,16]. These results show that the initial guess should be close to the required root for the corresponding method to converge. However, how close must the initial guess be to guarantee convergence? These local results give no information on the radius of the convergence ball for the corresponding method. The same technique can be applied to other methods.
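The paper's specific motivational function is not reproduced above; a function commonly used for exactly this purpose in the local-convergence literature (our illustrative stand-in, not necessarily the one in the paper) is f(x) = x^3 ln(x^2) + x^5 − x^4 with f(0) = 0, whose third derivative f'''(x) = 6 ln(x^2) + 60x^2 − 24x + 22 is unbounded near x = 0, so Taylor-based hypotheses on higher derivatives cannot hold on any neighbourhood containing 0:

```python
import math

# Illustrative stand-in for the motivational example (assumption: the
# paper's actual F is not reproduced here). Classic choice:
#   f(x) = x^3 ln(x^2) + x^5 - x^4,  with f(0) = 0
def f(x):
    return 0.0 if x == 0 else x**3 * math.log(x**2) + x**5 - x**4

# Analytic third derivative: f'''(x) = 6 ln(x^2) + 60 x^2 - 24 x + 22
def f3(x):
    return 6 * math.log(x**2) + 60 * x**2 - 24 * x + 22

print(f(1.0))  # x* = 1 is a zero: ln(1) = 0 and 1 - 1 = 0
# f''' blows up as x -> 0, so convergence results needing bounded
# higher derivatives do not apply on a neighbourhood of 0:
for x in (1e-1, 1e-3, 1e-6):
    print(x, f3(x))
```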
In the present study, we expand the applicability of Scheme (2) using only hypotheses on the first-order derivative of function F. We also propose computable radii of convergence and error bounds based on Lipschitz constants. We further present the range of initial guesses that tells us how close an initial guess must be for guaranteed convergence of Scheme (2). This problem was not addressed in [1]. The advantages of our approach are similar to the ones already mentioned for Scheme (2).
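For concreteness, the first two substeps of Scheme (2) may be any two-point optimal fourth-order scheme. A minimal scalar sketch of one such base scheme, Ostrowski's method (our illustrative choice here, not necessarily the one used in [1]), is:

```python
def ostrowski_step(f, df, x):
    """One step of Ostrowski's two-point optimal fourth-order method:
         y     = x - f(x)/f'(x)
         x_new = y - [f(x) / (f(x) - 2 f(y))] * f(y)/f'(x)
    Only the first derivative of f is evaluated."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    fy = f(y)
    return y - (fx / (fx - 2.0 * fy)) * fy / dfx

# Usage: solve x^2 - 2 = 0 starting from x0 = 1.5
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
x = 1.5
for _ in range(2):
    x = ostrowski_step(f, df, x)
print(x)  # close to sqrt(2) after two fourth-order steps
```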
The rest of the paper is organized as follows: Section 2 presents the local convergence analysis of Scheme (2); Section 3 is devoted to numerical examples which demonstrate our theoretical results; finally, conclusions are given in Section 4.
2. Local Convergence: One Dimensional Case
In this section, we define some scalar functions and parameters to study the local convergence of Scheme (
2).
Let , , be given constants. Let us also assume that , is a nondecreasing and continuous function. Further, define functions and .
Then, we have
. By Equation (
3) and the intermediate value theorem, function
has zeros in the interval
. Further, let
be the smallest such zero. Moreover, define functions
and
in the interval
by
We have and for each . We also get and as . Denote by the smallest zero of function on the interval . Furthermore, define functions q and on the interval by and .
Using
and Equation (
3), we deduce that function
has a smallest zero denoted by
.
Finally define functions
and
on the interval
by
Then, we get
and
as
. Denote by
the smallest zero of function
on the interval
. Define
Then, we have that
and for each
and
and
stand, respectively, for the open and closed balls in
X with center
and radius
.
Next, we present the local convergence analysis of Scheme (
2) using the preceding notations.
Theorem 1. Let be a Fréchet differentiable operator and let be a divided difference of order one. Suppose that there exist such that Equation (3) holds and, for each , and . Moreover, suppose that there exist and such that, for each , and , where the radius of convergence r is defined by Equation (4) and . Then, the sequence generated by Scheme (2) for is well defined, remains in for each , and converges to . Moreover, the following estimates hold: and , where the functions are as defined previously. Furthermore, for , the limit point is the only solution of the equation in .
Proof. We shall show, with the help of mathematical induction, that the estimates in Equations (19)–(21) hold. By hypotheses
, Equations (
5) and (
13), we get that
It follows from Equation (
22) and the Banach Lemma on invertible operators [
5,
14] that
is well defined and
Using the first sub step of Scheme (
2) for
, Equations (
4), (
6), (
11) and (
23), we get in turn
which shows Equation (
18) for
and
. Then, from Equations (
3) and (
12), we see that Equation (
20) follows. Hence,
. Next, we shall show that
and
.
Using Equations (
4), (
5), (
7), (
13), (
14) and (
24), we get in turn that
It follows from Equation (
25) that
Similarly, but using Equation (
8) instead of Equation (
7), we obtain in turn that
Hence,
is well defined by the third sub step of Scheme (
2) for
. We can write by Equation (
11)
Notice that
. Hence, we have that
. Then, by Equations (
17) and (
29) we get that
We also have that by replacing
by
in Equation (
30) that
since
.
Then, using the last substep of Scheme (
2) for
, Equations (
4), (
10), (
15), (
20) (for
), (
26), (
28), and (
31), we get in turn that
which shows Equation (
21) for
and
. By simply replacing
,
by
,
in the preceding estimates we arrive at Equations (
19)–(
21). Then, from the estimates
we conclude that
and
. Finally, to show the uniqueness part, let
be such that
. Set
. Then, using Equation (
14), we get that
Hence,
. Then, in view of the identity
, we conclude that
☐
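In the scalar case, the divided difference of order one used in the theorem's hypotheses reduces to the familiar [x, y; F] = (F(x) − F(y))/(x − y), with [x, x; F] = F′(x). A minimal sketch (the central-difference fallback in the confluent case is our own implementation choice):

```python
def divided_difference(F, x, y, h=1e-7):
    """First-order divided difference [x, y; F] for scalar F.
    For x == y it equals F'(x); here approximated by a central difference."""
    if x == y:
        return (F(x + h) - F(x - h)) / (2.0 * h)
    return (F(x) - F(y)) / (x - y)

F = lambda t: t ** 2
print(divided_difference(F, 1.0, 2.0))  # (1 - 4)/(1 - 2) = 3.0
print(divided_difference(F, 1.5, 1.5))  # approximately F'(1.5) = 3.0
```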
Remark 2.2 - (a)
In view of Equation (
11) and the estimate
condition Equation (
13) can be dropped and
M can be replaced by
or
since
- (b)
The results obtained here can be used for operators
F satisfying the autonomous differential equation [
5,
6] of the form
where
P is a known continuous operator. Since
we can apply the results without actually knowing the solution
As an example, let . Then, we can choose .
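With the choice standard in this literature (our assumption, since the example's formulas are not reproduced above), F(x) = e^x − 1 satisfies F′(x) = e^x = F(x) + 1, so P(t) = t + 1 works; a quick numerical check:

```python
import math

F = lambda x: math.exp(x) - 1.0   # assumed example operator (solution x* = 0)
P = lambda t: t + 1.0             # candidate P in F'(x) = P(F(x))
dF = lambda x: math.exp(x)        # true derivative of F

# Verify the autonomous differential equation F'(x) = P(F(x)) pointwise;
# note P is applied to F(x), so no knowledge of the solution x* is needed.
for x in (-1.0, 0.0, 0.5, 1.0):
    assert abs(dF(x) - P(F(x))) < 1e-12
print("F'(x) = P(F(x)) verified on sample points")
```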
- (c)
The radius
was shown in [
5,
6] to be the convergence radius for Newton’s method under conditions Equations (
11) and (
12). It follows from Equation (
4) and the definition of
that the convergence radius
r of the Scheme (
2) cannot be larger than the convergence radius
of the second order Newton’s method. As already noted,
is at least the size of the convergence ball given by Rheinboldt [
14]
In particular, for
, we have that
and
That is, our convergence ball is at most three times larger than Rheinboldt's. The same value for was given by Traub [15].
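In explicit form, with L0 denoting the center-Lipschitz constant and L the Lipschitz constant of F′ (standard notation in [5,6,14,15]; this is a reconstruction, since the displayed formulas are not reproduced above):

```latex
\[
  r_1 = \frac{2}{2L_0 + L}, \qquad
  r_{TR} = \frac{2}{3L}, \qquad
  \frac{r_1}{r_{TR}} = \frac{3L}{2L_0 + L} \le 3,
\]
```

with equality r1 = r_TR when L0 = L, and r1/r_TR approaching 3 as L0/L tends to 0, which is the "at most three times larger" statement above.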
- (d)
We shall show how to define the functions
and
l appearing in condition Equation (
3) for the method
Clearly method (
34) is a special case of Scheme (
2). If
then Method (
34) reduces to the Kung–Traub method [
15]. We shall follow the proof of Theorem 1 but first we need to show that
. We get that
As in the case of function
p, function
, where
has a smallest zero denoted by
in the interval
. Set
. Then, we have from the last sub step of Method (
34) that
It follows from Equation (
36) that
and
. Then, the convergence radius is given by
3. Numerical Example and Applications
In this section, we check the effectiveness and validity of the theoretical results proposed in Section 2 on the scheme proposed by Sharma and Arora [1]. For this purpose, we choose a variety of nonlinear equations, given in the following examples (including the motivational example). In particular, we consider the following eighth-order methods proposed by Sharma and Arora [1]
and
denoted by
,
and
, respectively.
The initial guesses are selected within the convergence domain, which guarantees convergence of the iterative methods. Due to the page limit, all parameter values are displayed to only 5 significant digits in Table 1, Table 2 and Table 3 and examples Equations (1)–(3), although 100 significant digits are available. The considered test examples, with the corresponding initial guess, radius of convergence and number of iterations (n) needed to reach the desired accuracy, are displayed in Table 1, Table 2 and Table 3.
In addition, we want to verify the theoretical order of convergence of Methods (38)–(40). Therefore, we calculate the computational order of convergence (COC) [9], approximated by using the following formulas, or the approximate computational order of convergence (ACOC) [9]
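The standard approximations (our formulation of the formulas referenced above) are COC ρ ≈ ln(‖x_{n+1} − x*‖/‖x_n − x*‖) / ln(‖x_n − x*‖/‖x_{n−1} − x*‖), while ACOC replaces the generally unknown x* by successive differences of iterates. A sketch:

```python
import math

def coc(xs, x_star):
    """Computational order of convergence (COC) from the last three
    iterates, using the known solution x_star."""
    e = [abs(x - x_star) for x in xs[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

def acoc(xs):
    """Approximate COC (ACOC): replaces the unknown x* by successive
    differences of the iterates (needs at least four iterates)."""
    d = [abs(b - a) for a, b in zip(xs, xs[1:])][-3:]
    return math.log(d[2] / d[1]) / math.log(d[1] / d[0])

# Usage: Newton's method on f(x) = x^2 - 2 should report order close to 2
f, df = (lambda x: x * x - 2.0), (lambda x: 2.0 * x)
xs = [1.5]
for _ in range(3):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
print(coc(xs, math.sqrt(2.0)), acoc(xs))
```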
During the numerical experiments, carried out in the programming language Mathematica (Version 9), all computations have been done with multiple precision arithmetic, which minimizes round-off errors. We use as the tolerance. The following stopping criteria are used for the computer programs: and .
Further, we use
and function
as defined above Equation (
37) in all the examples.
Example 1. Let and define function F on D by
Then, we get and . We obtain the different radii of convergence, COC (ρ) and n in Table 1.
Table 1.
Different values of parameters which satisfy Theorem 1.
Cases | | | | | | r | | n | ρ |
---|---|---|---|---|---|---|---|---|---
| 0.66667 | 0.66667 | 0.28658 | 0.27229 | 0.76393 | 0.27229 | 0.25 | 4 | 9.0000 |
| 0.66667 | 0.66667 | 0.28658 | 0.27229 | 0.76393 | 0.27229 | 0.25 | 4 | 9.0000 |
| 0.66667 | 0.66667 | 0.28658 | 0.27229 | 0.76393 | 0.27229 | 0.25 | 4 | 9.0000 |
Example 2. Let , the space of continuous functions defined on , be equipped with the max norm. Let . Define function F on by
Then we have that
For , we obtain that and . We obtain the different radii of convergence in Table 2.
Table 2.
Different values of parameters which satisfy Theorem 1.
| | | | | r |
---|---|---|---|---|---
0.044444 | 0.066667 | 0.011303 | 0.022046 | 0.088889 | 0.011303 |
Example 3. Returning to the motivational example in the introduction of this paper, we have , and our required zero is . We obtain the different radii of convergence, COC (ρ) and n in Table 3.
Table 3.
Different values of parameters which satisfy Theorem 1.
Cases | | | | | | r | | n | ρ |
---|---|---|---|---|---|---|---|---|---
| 0.0075648 | 0.0075648 | 0.0016852 | 0.0094361 | 0.0086685 | 0.0016852 | 0.310 | 6 | 8.0000 |
| 0.0075648 | 0.0075648 | 0.0016852 | 0.0094361 | 0.0086685 | 0.0016852 | 0.310 | 6 | 8.0000 |
| 0.0075648 | 0.0075648 | 0.0016852 | 0.0094361 | 0.0086685 | 0.0016852 | 0.310 | 6 | 8.0000 |
4. Conclusions
Most of the time, researchers mention that the initial guess should be close to the required root for the guaranteed convergence of their proposed schemes for solving nonlinear equations. However, how close must an initial guess be to guarantee the convergence of the proposed method? In this paper, we propose a computable radius of convergence and error bounds based on Lipschitz conditions. Further, we also reduce the hypotheses from the fourth-order derivative of the involved function to only the first-order derivative. It is worth noticing that Scheme (2) does not change if we use the conditions of Theorem 1 instead of the stronger conditions proposed by Sharma and Arora (2015). Moreover, to obtain the error bounds and order of convergence in practice, we can use the computational order of convergence defined in
Section 3. Therefore, we obtain in practice the order of convergence in a way that avoids bounds involving estimates higher than the first Fréchet derivative. Finally, on account of the results obtained in
Section 3, it can be concluded that the proposed study not only expands the applicability but also gives the computable radius of convergence and error bounds of the scheme given by Sharma and Arora (2015) for obtaining simple roots of nonlinear equations.