Article

Unified Convergence Analysis of Chebyshev–Halley Methods for Multiple Polynomial Zeros

Faculty of Physics and Technology, University of Plovdiv Paisii Hilendarski, 24 Tzar Asen, 4000 Plovdiv, Bulgaria
Mathematics 2022, 10(1), 135; https://doi.org/10.3390/math10010135
Submission received: 10 December 2021 / Revised: 30 December 2021 / Accepted: 31 December 2021 / Published: 3 January 2022
(This article belongs to the Special Issue Numerical Analysis and Scientific Computing II)

Abstract:
In this paper, we establish two local convergence theorems that provide initial conditions and error estimates to guarantee the Q-convergence of an extended version of the Chebyshev–Halley family of iterative methods for multiple polynomial zeros due to Osada (J. Comput. Appl. Math. 2008, 216, 585–599). Our results unify and complement earlier local convergence results about the Halley, Chebyshev and Super-Halley methods for multiple polynomial zeros. To the best of our knowledge, the results about Osada's method for multiple polynomial zeros are the first of their kind in the literature. Moreover, our unified approach allows us to compare the convergence domains and error estimates of the mentioned famous methods and of several new randomly generated methods.

1. Introduction

Undoubtedly, the most popular iteration methods in the literature are Newton's method, Halley's method [1] and Chebyshev's method [2]. A vast historical survey of these illustrious iteration methods can be found in the papers of Ypma [3], Scavo and Thoo [4] and Ezquerro et al. [5]. It is well known that Newton's method is quadratically convergent, while Halley's and Chebyshev's methods are cubically convergent to simple zeros. However, all these methods converge only linearly if the zeros are multiple.
In 1870, Schröder [6] presented the following modification of Newton’s method:
$$x_{k+1} = x_k - m\,\frac{f(x_k)}{f'(x_k)}, \tag{1}$$
which restores the quadratic convergence when the multiplicity $m \ge 1$ of the zero is known. Driven by the same reasons, in 1963 Obreshkov [7] developed the following modifications of Halley's and Chebyshev's methods:
$$x_{k+1} = x_k - \left(\frac{m+1}{2m}\,\frac{f'(x_k)}{f(x_k)} - \frac{1}{2}\,\frac{f''(x_k)}{f'(x_k)}\right)^{\!-1} \tag{2}$$
and
$$x_{k+1} = x_k - \frac{m^2}{2}\,\frac{f(x_k)}{f'(x_k)}\left(\frac{3-m}{m} + \frac{f(x_k)\,f''(x_k)}{f'(x_k)^2}\right). \tag{3}$$
The methods (2) and (3) are known as Halley's method for multiple zeros and Chebyshev's method for multiple zeros, respectively; their convergence order is known to be three provided that the multiplicity $m$ of the zero is known.
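To make the modified iterations concrete, here is a small Python sketch (our own illustration, not part of the original paper; the test polynomial is an arbitrary choice) of the steps (1)–(3):

```python
# Sketch of the multiple-zero iterations (1)-(3); illustrative only.
def schroder_step(f, df, x, m):
    # Schroder's step (1): x - m f(x)/f'(x)
    return x - m * f(x) / df(x)

def halley_step(f, df, d2f, x, m):
    # Halley's step (2): x - [ (m+1)/(2m) f'/f - (1/2) f''/f' ]^(-1)
    return x - 1.0 / ((m + 1) / (2 * m) * df(x) / f(x) - 0.5 * d2f(x) / df(x))

def chebyshev_step(f, df, d2f, x, m):
    # Chebyshev's step (3): x - (m^2/2) (f/f') [ (3-m)/m + f f''/f'^2 ]
    u, L = f(x) / df(x), f(x) * d2f(x) / df(x) ** 2
    return x - 0.5 * m * m * u * ((3 - m) / m + L)

# Test polynomial p(x) = (x-1)^3 (x+2): zero xi = 1 of multiplicity m = 3.
p   = lambda x: (x - 1)**3 * (x + 2)
dp  = lambda x: 3*(x - 1)**2 * (x + 2) + (x - 1)**3
d2p = lambda x: 6*(x - 1)*(x + 2) + 6*(x - 1)**2

x = 1.5
for _ in range(6):
    if p(x) == 0.0:                     # stop if we land exactly on the zero
        break
    x = halley_step(p, dp, d2p, x, 3)
assert abs(x - 1.0) < 1e-9              # fast convergence to the triple zero
```

Started from the same point, the unmodified Newton, Halley and Chebyshev iterations would only approach the triple zero linearly.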
In 1994, Osada [8] defined the following third-order modification of Newton's method:
$$x_{k+1} = x_k - \frac{f(x_k)}{2 f'(x_k)}\left( m(m+1) - (m-1)^2\,\frac{f'(x_k)^2}{f(x_k)\,f''(x_k)} \right), \tag{4}$$
known as Osada's method for multiple zeros. In 2008, he [9] used an arbitrary real parameter to construct an iteration family for multiple zeros that includes the methods (1)–(4) as special cases. Another member of the mentioned Osada's iteration family is the Super-Halley method for multiple zeros, which can be defined by (see [10] and the references therein):
$$x_{k+1} = x_k - \frac{f(x_k)}{2 f'(x_k)}\left( m + \frac{f'(x_k)^2}{f'(x_k)^2 - f(x_k)\,f''(x_k)} \right). \tag{5}$$
Let $\alpha \in \mathbb{C}$ and $\delta = 1 - \alpha$. We define the following extension of Osada's iteration family:
$$x_{k+1} = T_\alpha(x_k), \tag{6}$$
where the iteration function $T_\alpha \colon \mathbb{C} \to \mathbb{C}$ is defined by:
$$T_\alpha(x) = \begin{cases} x - \dfrac{m\,F(x)}{2}\;\dfrac{3 - m - 2\alpha(1-m) + m(\delta-\alpha)\,L(x)}{1 - \alpha(1-m) - m\alpha\,L(x)} & \text{if } f(x) \neq 0, \\[2mm] x & \text{if } f(x) = 0, \end{cases} \tag{7}$$
with $F(x)$ and $L(x)$ defined as follows:
$$F(x) = \frac{f(x)}{f'(x)} \quad\text{and}\quad L(x) = \frac{f(x)\,f''(x)}{f'(x)^2} = F(x)\,\frac{f''(x)}{f'(x)}. \tag{8}$$
Clearly, the domain of the iteration function (7) is the set:
$$D = \left\{\, x \in \mathbb{C} \,:\, f(x) = 0 \ \text{ or } \ 1 - \alpha(1-m) - m\alpha\,L(x) \neq 0 \,\right\}. \tag{9}$$
It is easy to see that the iteration (6) includes Halley's method (2) for $\alpha = 1/2$, Chebyshev's method (3) for $\alpha = 0$, the Super-Halley method (5) for $\alpha = 1$ and Osada's method (4) for $\alpha = 1/(1-m)$ and $m > 1$. Hereafter, the iteration (6) shall be called the Chebyshev–Halley family for multiple zeros. Note that the Chebyshev–Halley family for simple zeros ($m = 1$) was first introduced and studied in its explicit form by Hernández and Salanova [11].
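As a cross-check of this unification, the iteration function (7) can be sketched in Python and compared numerically with the stand-alone formulas; this is our own illustration (the function names and the test polynomial are not from the paper):

```python
# Sketch of the Chebyshev-Halley iteration function T_alpha from (7); illustrative only.
def T(alpha, f, df, d2f, x, m):
    if f(x) == 0:
        return x
    delta = 1 - alpha
    F = f(x) / df(x)                    # F(x) = f/f'
    L = f(x) * d2f(x) / df(x) ** 2      # L(x) = f f''/f'^2
    num = 3 - m - 2 * alpha * (1 - m) + m * (delta - alpha) * L
    den = 1 - alpha * (1 - m) - m * alpha * L
    return x - 0.5 * m * F * num / den

# p(x) = (x-1)^3 (x+2): zero xi = 1 with multiplicity m = 3.
p   = lambda x: (x - 1)**3 * (x + 2)
dp  = lambda x: 3*(x - 1)**2 * (x + 2) + (x - 1)**3
d2p = lambda x: 6*(x - 1)*(x + 2) + 6*(x - 1)**2

x, m = 1.4, 3
# alpha = 1/2 reproduces Halley's step (2):
halley = x - 1.0 / ((m + 1)/(2*m) * dp(x)/p(x) - 0.5 * d2p(x)/dp(x))
assert abs(T(0.5, p, dp, d2p, x, m) - halley) < 1e-12
# alpha = 1/(1-m) reproduces Osada's step (4):
osada = x - p(x)/(2*dp(x)) * (m*(m + 1) - (m - 1)**2 * dp(x)**2/(p(x)*d2p(x)))
assert abs(T(1/(1 - m), p, dp, d2p, x, m) - osada) < 1e-12
```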
In 2009, Proinov [12] established two types of local convergence theorems about Newton's method (1) (applied to polynomials) under two different types of initial conditions. In 2015 and 2016, Proinov and Ivanov [13] and Ivanov [14] used the same two types of initial conditions to establish local convergence theorems about Halley's method (2) and Chebyshev's method (3) for multiple polynomial zeros. Very recently, Ivanov [10] has proved two general theorems (Theorems 3 and 4) that provide sets of initial approximations ensuring the Q-convergence ([15] (Definition 2.1)) of the Picard iteration $x_{k+1} = T(x_k)$ and has applied them to investigate the local convergence of the Super-Halley method (5) for multiple polynomial zeros.
In this paper, we use the approach of [10] to investigate the Q-convergence of the Chebyshev–Halley family (6). Thus, we obtain two kinds of local convergence theorems (Theorems 1 and 2) that supply the exact bounds of the sets of initial approximations, accompanied by a priori and a posteriori error estimates, that guarantee the Q-convergence of the iteration (6). An assessment of the asymptotic error constant of the family (6) is also established. Our results unify and complement the results of Proinov and Ivanov [13] and Ivanov [10,14] about the methods (2), (3) and (5). On the other hand, the results about Osada's method (4) (Corollaries 4 and 5) are the first such results in the literature. At the end of our study, we use Theorem 1 to compare the convergence domains and error estimates of the methods (2)–(5) and of several new randomly generated members of (6).

2. Main Results

From now on, $\mathbb{C}[x]$ shall denote the ring of univariate polynomials over $\mathbb{C}$. Let $f \in \mathbb{C}[x]$ be a polynomial of degree $n \ge 2$ and $\xi \in \mathbb{C}$ be a zero of $f$. We define the functions $E \colon \mathbb{C} \to \mathbb{R}_+$ and $\widetilde{E} \colon \mathcal{D} \to \mathbb{R}_+$ by
$$E(x) = \frac{|x-\xi|}{d} \quad\text{and}\quad \widetilde{E}(x) = \frac{|x-\xi|}{\rho(x)}, \tag{10}$$
where $d$ denotes the distance from $\xi$ to the nearest of the other zeros of $f$ and $\rho(x)$ denotes the distance from $x$ to the nearest zero of $f$ different from $\xi$. Note that if $\xi$ is the unique zero of $f$, then we set $\widetilde{E}(x) \equiv 0$. Also, we note that the domain of $\widetilde{E}$ is the set
$$\mathcal{D} = \{\, x \in \mathbb{C} : \rho(x) > 0 \,\}.$$
In this section, we present two local convergence theorems about the Chebyshev–Halley family (7) under two different kinds of initial conditions, stated in terms of the functions defined by (10).

2.1. Local Convergence Theorem of the First Kind

Furthermore, for the integers $n$ and $m$ ($n \ge m \ge 1$) and the number $\alpha \in \mathbb{C}$, we define the real function $\phi_\alpha$ by:
$$\phi_\alpha(t) = \frac{(n-m)\,t^2}{2(m-nt)}\;\frac{g_\alpha(t)}{h_\alpha(t)}, \tag{11}$$
where the function $g_\alpha$ is defined by:
$$g_\alpha(t) = \begin{cases} 2(n-m)\big((n-m)|\delta| + m|\alpha|\big)\,t + m\big((n-m)|3\delta-\alpha| + m\big)(1-t) & \text{if } \alpha \neq 1/2, \\ 2n(m-nt) & \text{if } \alpha = 1/2 \end{cases} \tag{12}$$
and the function $h_\alpha$ is defined by:
$$h_\alpha(t) = \begin{cases} m|\alpha|\big((2m-n)t^2 - 2mt + m\big) - |\delta|\big(m + (n-2m)t\big)^2 & \text{if } \operatorname{Re}(\alpha) > 1/2, \\ |\delta|(m-nt)^2 - m|\alpha|\big(nt^2 - 2mt + m\big) & \text{if } \operatorname{Re}(\alpha) \le 1/2 \text{ and } \alpha \neq 1/2, \\ 2m(1-t)(m-nt) - n(n-m)t^2 & \text{if } \alpha = 1/2, \end{cases} \tag{13}$$
with $\delta = 1 - \alpha$. It is worth mentioning that $g_\alpha$ is positive and increasing, while $h_\alpha$ is decreasing, on the interval $(0, m/n)$.
The following is our first main theorem of this paper:
Theorem 1.
Let $f \in \mathbb{C}[x]$ be a polynomial of degree $n \ge 2$ and $\xi \in \mathbb{C}$ be a zero of $f$ with known multiplicity $m \ge 1$. Suppose $x_0 \in \mathbb{C}$ is an initial guess satisfying the conditions:
$$E(x_0) < m/n \quad\text{and}\quad \Phi_\alpha(E(x_0)) > 0, \tag{14}$$
where $E$ is defined by (10) and the function $\Phi_\alpha$ is defined by:
$$\Phi_\alpha(t) = h_\alpha(t) - \frac{(n-m)\,t^2}{2(m-nt)}\,g_\alpha(t),$$
with $h_\alpha$ and $g_\alpha$ defined by (13) and (12). Then the iteration (6) is well defined and converges Q-cubically to $\xi$ with the following error estimates for all $k \ge 0$:
$$|x_{k+1}-\xi| \le \lambda^{3^k}\,|x_k-\xi| \quad\text{and}\quad |x_k-\xi| \le \lambda^{(3^k-1)/2}\,|x_0-\xi|, \tag{15}$$
where $\lambda = \phi_\alpha(E(x_0))$ and the function $\phi_\alpha$ is defined by (11). Besides, the following estimate of the asymptotic error constant holds:
$$\limsup_{k\to\infty}\frac{|x_{k+1}-\xi|}{|x_k-\xi|^3} \le \begin{cases} \dfrac{(n-m)^2|3\delta-\alpha| + m(n-m)}{2(md)^2\,\big|\,|\alpha|-|\delta|\,\big|} & \text{if } \alpha \neq 1/2, \\[2mm] \dfrac{n(n-m)}{2(md)^2} & \text{if } \alpha = 1/2. \end{cases}$$
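The conditions and estimates of Theorem 1 are easy to check numerically. The following sketch (our own illustration on an arbitrary test polynomial, not part of the paper) verifies the first error estimate in (15) for one Halley step ($\alpha = 1/2$):

```python
# Numerical check of Theorem 1 for alpha = 1/2 on p(x) = (x-1)^3 (x+2):
# n = 4, m = 3, xi = 1, and d = |1 - (-2)| = 3 (distance to the other zero).
n, m, d, xi = 4, 3, 3.0, 1.0

def phi_half(t):
    # phi_{1/2}(t) = n(n-m)t^2 / (2m(1-t)(m-nt) - n(n-m)t^2), as in Corollary 2
    return n*(n - m)*t**2 / (2*m*(1 - t)*(m - n*t) - n*(n - m)*t**2)

p   = lambda x: (x - 1)**3 * (x + 2)
dp  = lambda x: 3*(x - 1)**2 * (x + 2) + (x - 1)**3
d2p = lambda x: 6*(x - 1)*(x + 2) + 6*(x - 1)**2

x0 = 1.1
E0 = abs(x0 - xi) / d                   # E(x0) = |x0 - xi| / d
assert E0 < m / n                       # first condition in (14)
lam = phi_half(E0)
assert 0 < lam < 1                      # here equivalent to the second condition in (14)

# One Halley step (2); the first estimate in (15) gives |x1 - xi| <= lam |x0 - xi|.
x1 = x0 - 1.0 / ((m + 1)/(2*m) * dp(x0)/p(x0) - 0.5 * d2p(x0)/dp(x0))
assert abs(x1 - xi) <= lam * abs(x0 - xi)
```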
In the cases $\alpha = 0$, $\alpha = 1/2$ and $\alpha = 1$, we get the following consequences of Theorem 1 about the Chebyshev, Halley and Super-Halley methods, which were proven in [10,13,14], but without the assessments of the asymptotic error constants:
Corollary 1
([14] (Theorem 2)). Let $f \in \mathbb{C}[x]$ be a polynomial of degree $n \ge 2$ and $\xi \in \mathbb{C}$ be a zero of $f$ with multiplicity $m \ge 1$. Suppose $x_0 \in \mathbb{C}$ satisfies the following initial conditions:
$$E(x_0) < m/n \quad\text{and}\quad \phi_0(E(x_0)) < 1,$$
where the function $E \colon \mathbb{C} \to \mathbb{R}_+$ is defined by (10) and the function $\phi_0$ is defined by:
$$\phi_0(t) = \frac{2(n-m)^3 t^3 + m(n-m)(3n-2m)(1-t)\,t^2}{2(m-nt)^3}.$$
Then the Chebyshev iteration (3) is well defined and converges Q-cubically to $\xi$ with error estimates (15), where $\lambda = \phi_0(E(x_0))$, and with the following estimate of the asymptotic error constant:
$$\limsup_{k\to\infty}\frac{|x_{k+1}-\xi|}{|x_k-\xi|^3} \le \frac{(n-m)(3n-2m)}{2(md)^2}.$$
Corollary 2
([13] (Theorem 4.5)). Let $f \in \mathbb{C}[x]$ be a polynomial of degree $n \ge 2$ and $\xi \in \mathbb{C}$ be a zero of $f$ with multiplicity $m \ge 1$. Suppose $x_0 \in \mathbb{C}$ satisfies the following initial condition
$$E(x_0) < R = \frac{2m}{n + m + \sqrt{(n-m)(5n-m)}},$$
where $E \colon \mathbb{C} \to \mathbb{R}_+$ is defined by (10). Then the Halley iteration (2) is well defined and converges Q-cubically to $\xi$ with error estimates (15), where $\lambda = \phi_{1/2}(E(x_0))$ and the function $\phi_{1/2}$ is defined by
$$\phi_{1/2}(t) = \frac{n(n-m)\,t^2}{2m(1-t)(m-nt) - n(n-m)\,t^2}.$$
Besides, the following estimate of the asymptotic error constant holds:
$$\limsup_{k\to\infty}\frac{|x_{k+1}-\xi|}{|x_k-\xi|^3} \le \frac{n(n-m)}{2(md)^2}. \tag{18}$$
Corollary 3
([10] (Theorem 3)). Let $f \in \mathbb{C}[x]$ be a polynomial of degree $n \ge 2$ and $\xi \in \mathbb{C}$ be a zero of $f$ with multiplicity $m \ge 1$. Suppose $x_0 \in \mathbb{C}$ satisfies the following initial condition
$$E(x_0) < R = \frac{2m}{n + m + \sqrt{3(n-m)(n+m)}},$$
where $E \colon \mathbb{C} \to \mathbb{R}_+$ is defined by (10). Then the Super-Halley iteration (5) is well defined and converges Q-cubically to $\xi$ with error estimates (15), where $\lambda = \phi_1(E(x_0))$ and the function $\phi_1$ is defined by:
$$\phi_1(t) = \frac{(n-m)\big(n + (n-2m)t\big)\,t^2}{2(m-nt)\big((2m-n)t^2 - 2mt + m\big)}.$$
The estimate (18) of the asymptotic error constant holds.
In the case $\alpha = 1/(1-m)$ and $m > 1$, we get the following consequence of Theorem 1 about Osada's method (4):
Corollary 4.
Let $f \in \mathbb{C}[x]$ be a polynomial of degree $n \ge 2$ and $\xi \in \mathbb{C}$ be a zero of $f$ with multiplicity $m > 1$. Suppose $x_0 \in \mathbb{C}$ satisfies the following initial conditions:
$$E(x_0) < \frac{m(m-1)}{m(n-1) + \sqrt{m(n-1)(n-m)}} \quad\text{and}\quad \phi_{1/(1-m)}(E(x_0)) < 1,$$
where $E \colon \mathbb{C} \to \mathbb{R}_+$ is defined by (10) and the function $\phi_{1/(1-m)}$ is defined by:
$$\phi_{1/(1-m)}(t) = \frac{(n-m)\,t^2}{2(m-nt)}\;\frac{2(n-m)(n-m+1)\,t + \big(n(3m+1) - 2m(m+1)\big)(1-t)}{n(n-1)t^2 - 2m(n-1)t + m(m-1)}.$$
Then the Osada iteration (4) is well defined and converges Q-cubically to $\xi$ with error estimates (15), where $\lambda = \phi_{1/(1-m)}(E(x_0))$. Besides, the following estimate of the asymptotic error constant holds:
$$\limsup_{k\to\infty}\frac{|x_{k+1}-\xi|}{|x_k-\xi|^3} \le \frac{(n-m)\big((m+1)(n-2m) + 2nm\big)}{2(md)^2\,(m-1)}.$$

2.2. Local Convergence Theorem of the Second Kind

Before stating our second main theorem, for the integers $n$ and $m$ ($n \ge m \ge 1$) and the number $\alpha \in \mathbb{C}$, we define the real function $\beta_\alpha$ by:
$$\beta_\alpha(t) = \frac{(n-m)\,t^2}{2\big(m-(n-m)t\big)}\;\frac{v_\alpha(t)}{w_\alpha(t)}, \tag{21}$$
where the function $v_\alpha$ is defined by:
$$v_\alpha(t) = \begin{cases} 2(n-m)\big((n-m)|\delta| + m|\alpha|\big)\,t + m(n-m)|3\delta-\alpha| + m^2 & \text{if } \alpha \neq 1/2, \\ 2n\big(m-(n-m)t\big) & \text{if } \alpha = 1/2 \end{cases} \tag{22}$$
and the function $w_\alpha$ is defined by:
$$w_\alpha(t) = \begin{cases} m|\alpha|\big(m - (n-m)t^2\big) - |\delta|\big(m + (n-m)t\big)^2 & \text{if } \operatorname{Re}(\alpha) > 1/2, \\ |\delta|\big(m - (n-m)t\big)^2 - m|\alpha|\big(m + (n-m)t^2\big) & \text{if } \operatorname{Re}(\alpha) \le 1/2 \text{ and } \alpha \neq 1/2, \\ 2m\big(m-(n-m)t\big) - n(n-m)t^2 & \text{if } \alpha = 1/2, \end{cases} \tag{23}$$
with $\delta = 1 - \alpha$. Obviously, the function $v_\alpha$ is positive and increasing, while the function $w_\alpha$ is decreasing, on the interval $(0, m/(n-m))$.
The next theorem is our second main result of this paper.
Theorem 2.
Let $f \in \mathbb{C}[x]$ be a polynomial of degree $n \ge 2$ and $\xi \in \mathbb{C}$ be a zero of $f$ with known multiplicity $m \ge 1$. Suppose $x_0 \in \mathbb{C}$ is an initial guess satisfying the conditions:
$$\widetilde{E}(x_0) < m/(n-m) \quad\text{and}\quad \Psi_\alpha(\widetilde{E}(x_0)) \ge 0, \tag{24}$$
where the function $\widetilde{E}$ is defined by (10) and the function $\Psi_\alpha$ is defined by
$$\Psi_\alpha(t) = 1 - t - (1+t)\,\beta_\alpha(t), \tag{25}$$
with $\beta_\alpha$ defined by (21). Then the iteration sequence (6) is well defined and converges Q-cubically to $\xi$ with the following error estimates for all $k \ge 0$:
$$|x_{k+1}-\xi| \le \theta\,\lambda^{3^k}\,|x_k-\xi| \quad\text{and}\quad |x_k-\xi| \le \theta^k \lambda^{(3^k-1)/2}\,|x_0-\xi|, \tag{26}$$
where $\theta = \psi_\alpha(\widetilde{E}(x_0))$ and $\lambda = \beta_\alpha(\widetilde{E}(x_0))/\psi_\alpha(\widetilde{E}(x_0))$ with $\psi_\alpha = \Psi_\alpha + \beta_\alpha$.
In the cases $\alpha = 0$, $\alpha = 1/2$ and $\alpha = 1$, from Theorem 2 we immediately get [14] (Theorem 3), [13] (Theorem 5.7) and [10] (Theorem 4) about the Chebyshev, Halley and Super-Halley methods, respectively. In the case $\alpha = 1/(1-m)$ and $m > 1$, we get the following consequence of Theorem 2 about Osada's method.
Corollary 5.
Let $f \in \mathbb{C}[x]$ be a polynomial of degree $n \ge 2$ and $\xi \in \mathbb{C}$ be a zero of $f$ with multiplicity $m > 1$. Suppose $x_0 \in \mathbb{C}$ satisfies the following initial conditions:
$$\widetilde{E}(x_0) < \frac{m(m-1)}{m(n-m) + \sqrt{m(n-1)(n-m)}} \quad\text{and}\quad \Psi_{1/(1-m)}(\widetilde{E}(x_0)) \ge 0,$$
where $\widetilde{E}$ is defined by (10) and the function $\Psi_{1/(1-m)}$ is defined by (25) with
$$\beta_{1/(1-m)}(t) = \frac{(n-m)\,t^2}{2\big(m-(n-m)t\big)}\;\frac{2(n-m)(n-m+1)\,t + (n-m)(3m+1) + m(m-1)}{(n-m)(n-m-1)t^2 - 2m(n-m)t + m(m-1)}.$$
Then the Osada iteration (4) is well defined and converges Q-cubically to $\xi$ with error estimates (26).
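As a sanity check of the threshold in Corollary 5 (our own verification, not from the paper), the bound should be the smallest positive root of the quadratic appearing in the denominator of $\beta_{1/(1-m)}$:

```python
import math

# For n = 5, m = 3, the bound m(m-1)/(m(n-m) + sqrt(m(n-1)(n-m))) should be the
# smallest positive root of q(t) = (n-m)(n-m-1)t^2 - 2m(n-m)t + m(m-1),
# the denominator quadratic in beta_{1/(1-m)}. Illustrative check only.
n, m = 5, 3
R = m*(m - 1) / (m*(n - m) + math.sqrt(m*(n - 1)*(n - m)))
q = lambda t: (n - m)*(n - m - 1)*t**2 - 2*m*(n - m)*t + m*(m - 1)
assert abs(q(R)) < 1e-12          # R is a root of q
assert q(R - 1e-6) > 0            # q stays positive just below R
```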

3. Proof of the Main Results

As mentioned, in 2020 Ivanov [10] proved two general convergence theorems that give exact sets of initial approximations guaranteeing the Q-convergence of the Picard iteration
$$x_{k+1} = T(x_k). \tag{28}$$
In this section, we use this general approach to prove our main results stated in the previous section.

3.1. Preliminaries

To make the paper self-contained, we recall some important results that will be applied in the proofs of the main theorems.
Lemma 1
([10] (Lemma 2)). Let $x, \xi \in \mathbb{C}$ and $\xi_1, \ldots, \xi_s \in \mathbb{C}$ be all the zeros of $f$ different from $\xi$. Then for every $j = 1, \ldots, s$ the following inequality holds:
$$|x - \xi_j| \ge (1 - E(x))\,d, \tag{29}$$
where $E \colon \mathbb{C} \to \mathbb{R}_+$ is defined by (10).
Lemma 2
([10] (Lemma 1)). Let $K$ be an arbitrary valued field and $f \in K[x]$ be a polynomial of degree $n \ge 2$ which splits over $K$. Let also $\xi_1, \ldots, \xi_s$ be all the distinct zeros of $f$ with multiplicities $m_1, \ldots, m_s$ ($\sum_{j=1}^{s} m_j = n$).
(i)
If $x \in K$ is such that $f(x) \neq 0$, then for all $i = 1, \ldots, s$ we have
$$\frac{f'(x)}{f(x)} = \frac{m_i + a_i}{x - \xi_i}, \quad\text{where}\quad a_i = (x - \xi_i) \sum_{j \neq i} \frac{m_j}{x - \xi_j}.$$
(ii)
If $x \in K$ is such that $f(x) \neq 0$ and $f'(x) \neq 0$, then for all $i = 1, \ldots, s$ we have
$$\frac{f''(x)}{f'(x)} = \frac{(m_i + a_i)^2 - (m_i + b_i)}{(x - \xi_i)(m_i + a_i)}, \quad\text{where } a_i \text{ is defined in (i) and}\quad b_i = (x - \xi_i)^2 \sum_{j \neq i} \frac{m_j}{(x - \xi_j)^2}.$$
For the proof of our first main result, we apply the next theorem, which was proved in [10] without the estimate of the asymptotic error constant; the latter, however, can easily be proven using the inequality in (30) and the concept of quasi-homogeneity of exact degree of $\phi$ (see [16] (Definition 8)). Such a proof is performed in [16] (Proposition 3).
Theorem 3
([10] (Theorem 1)). Let $T \colon D \subset \mathbb{C} \to \mathbb{C}$ be an iteration function, $\xi \in \mathbb{C}$ and $E \colon \mathbb{C} \to \mathbb{R}_+$ be defined by (10). Let $\phi \colon J \to \mathbb{R}_+$ be a quasi-homogeneous function of exact degree $p \ge 0$ such that for each $x \in \mathbb{C}$ with $E(x) \in J$, we have
$$x \in D \quad\text{and}\quad |Tx - \xi| \le \phi(E(x))\,|x - \xi|. \tag{30}$$
If $x_0 \in \mathbb{C}$ is an initial approximation such that
$$E(x_0) \in J \quad\text{and}\quad \phi(E(x_0)) < 1,$$
then the Picard iteration (28) is well defined and converges to $\xi$ with Q-order $r = p + 1$ and with the following error estimates for all $k \ge 0$:
$$|x_{k+1} - \xi| \le \lambda^{r^k}\,|x_k - \xi| \quad\text{and}\quad |x_k - \xi| \le \lambda^{S_k(r)}\,|x_0 - \xi|,$$
where $\lambda = \phi(E(x_0))$ and $S_k(r) = (r^k - 1)/(r - 1)$. In addition, for all $k \ge 0$ the following error estimate holds:
$$|x_{k+1} - \xi| \le (Rd)^{1-r}\,|x_k - \xi|^{\,r},$$
where $R$ is the smallest solution of the equation $\phi(t) = 1$ in the interval $J \setminus \{0\}$. Moreover, we have the following estimate of the asymptotic error constant:
$$\limsup_{k\to\infty} \frac{|x_{k+1} - \xi|}{|x_k - \xi|^{\,r}} \le \frac{1}{d^{\,p}} \lim_{t \to 0^+} \frac{\phi(t)}{t^p}.$$
The following theorem shall be applied for the proof of our second main result:
Theorem 4
([10] (Theorem 2)). Let $T \colon D \subset \mathbb{C} \to \mathbb{C}$ be an iteration function, $\xi \in \mathbb{C}$ and $\widetilde{E} \colon \mathcal{D} \subset \mathbb{C} \to \mathbb{R}_+$ be defined by (10). Let $\beta \colon J \to \mathbb{R}_+$ be a nonzero quasi-homogeneous function of exact degree $p \ge 0$ such that for each $x \in \mathbb{C}$ with $\widetilde{E}(x) \in J$, we have $x \in D$ and $|Tx - \xi| \le \beta(\widetilde{E}(x))\,|x - \xi|$. If $x_0 \in \mathbb{C}$ is an initial guess satisfying
$$\widetilde{E}(x_0) \in J \quad\text{and}\quad \beta(\widetilde{E}(x_0)) \le \psi(\widetilde{E}(x_0)),$$
where the function $\psi$ is defined by
$$\psi(t) = 1 - t\,(1 + \beta(t)),$$
then the Picard iteration (28) is well defined and converges to $\xi$ with Q-order $r = p + 1$ and with the following error estimates for all $k \ge 0$:
$$|x_{k+1} - \xi| \le \theta\,\lambda^{r^k}\,|x_k - \xi| \quad\text{and}\quad |x_k - \xi| \le \theta^k \lambda^{S_k(r)}\,|x_0 - \xi|,$$
where $\lambda = \phi(\widetilde{E}(x_0))$ with $\phi = \beta/\psi$ and $\theta = \psi(\widetilde{E}(x_0))$. Moreover, the following estimate of the asymptotic error constant holds:
$$\limsup_{k\to\infty} \frac{|x_{k+1} - \xi|}{|x_k - \xi|^{\,r}} \le \frac{1}{d^{\,p}} \lim_{t \to 0^+} \frac{\beta(t)}{t^p}.$$

3.2. Proof of Theorem 1

In the next lemma, we prove two inequalities that play a crucial role in the further proofs.
Lemma 3.
Let $f \in \mathbb{C}[x]$ be a polynomial of degree $n \ge 2$, $\alpha \in \mathbb{C}$ be a parameter and $\xi \in \mathbb{C}$ be a zero of $f$ with known multiplicity $m \ge 1$. Suppose $x \in \mathbb{C}$ is such that:
$$E(x) < m/n, \tag{35}$$
where $E \colon \mathbb{C} \to \mathbb{R}_+$ is defined by (10). Then there exists a complex number $A_\alpha$ such that:
$$|A_\alpha| \ge \frac{m|\alpha|\big((2m-n)E(x)^2 - 2mE(x) + m\big) - |\delta|\big(m + (n-2m)E(x)\big)^2}{(1-E(x))^2}$$
or
$$|A_\alpha| \ge \frac{|\delta|\big(m - nE(x)\big)^2 - m|\alpha|\big(nE(x)^2 - 2mE(x) + m\big)}{(1-E(x))^2},$$
where $\delta = 1 - \alpha$.
Proof. 
Let $x \in \mathbb{C}$ satisfy (35) and let $\xi_1, \ldots, \xi_s$ be all the distinct zeros of $f$ with respective multiplicities $m_1, \ldots, m_s$. Then from $E(x) < m/n \le 1$ and Lemma 1, we infer that $x$ is not a zero of $f$, and so we can define the quantities $a_i$ and $b_i$ as in Lemma 2. Without loss of generality, for some $1 \le i \le s$ we put $\xi = \xi_i$, $m = m_i$, $a = a_i$ and $b = b_i$.
Now, we shall prove that the number $A_\alpha \in \mathbb{C}$ defined by
$$A_\alpha = m\alpha(m+b) + \delta(m+a)^2, \quad\text{with } \delta = 1 - \alpha, \tag{36}$$
satisfies the claims of the lemma.
Using some known techniques (see, e.g., [14] (Lemma 1) and [10] (Lemma 3)), we reach the following estimates:
$$|a| \le \frac{(n-m)E(x)}{1-E(x)}, \qquad |b| \le \frac{(n-m)E(x)^2}{(1-E(x))^2},$$
$$\frac{m - nE(x)}{1-E(x)} \le |m+a| \le \frac{m + (n-2m)E(x)}{1-E(x)},$$
$$\frac{(2m-n)E(x)^2 - 2mE(x) + m}{(1-E(x))^2} \le |m+b| \le \frac{nE(x)^2 - 2mE(x) + m}{(1-E(x))^2}. \tag{37}$$
If $m|\alpha||m+b| > |\delta||m+a|^2$, then using the reverse triangle inequality and (37), we obtain
$$|A_\alpha| \ge m|\alpha||m+b| - |\delta||m+a|^2 \ge \frac{m|\alpha|\big((2m-n)E(x)^2 - 2mE(x) + m\big) - |\delta|\big(m + (n-2m)E(x)\big)^2}{(1-E(x))^2},$$
which proves the first claim. Otherwise, we get
$$|A_\alpha| \ge |\delta||m+a|^2 - m|\alpha||m+b| \ge \frac{|\delta|\big(m - nE(x)\big)^2 - m|\alpha|\big(nE(x)^2 - 2mE(x) + m\big)}{(1-E(x))^2},$$
which proves the second claim and completes the proof of the lemma. □
The following is our main lemma in this section. In its proof, we omit the case $\alpha = 1/2$ since in this case the lemma coincides with Lemma 4.4 of [13] concerning Halley's method.
Lemma 4.
Suppose $f \in \mathbb{C}[x]$ is a polynomial of degree $n \ge 2$, $\alpha \in \mathbb{C}$ is a parameter and $\xi \in \mathbb{C}$ is a zero of $f$ with multiplicity $m \ge 1$. If $x \in \mathbb{C}$ is such that
$$E(x) < m/n \quad\text{and}\quad h_\alpha(E(x)) > 0, \tag{38}$$
where $E \colon \mathbb{C} \to \mathbb{R}_+$ is defined by (10) and $h_\alpha$ is defined by (13), then
$$x \in D \quad\text{and}\quad |T_\alpha(x) - \xi| \le \phi_\alpha(E(x))\,|x - \xi|, \tag{39}$$
where $D$ is the set (9) and the real function $\phi_\alpha$ is defined by (11).
Proof. 
Let $x \in \mathbb{C}$ satisfy (38). If either $m = n$ or $x = \xi$, then $T_\alpha(x) = \xi$, and so the assertions of the lemma hold. Suppose that $m < n$ and $x \neq \xi$. Let $\xi_1, \ldots, \xi_s$ be all the distinct zeros of $f$ with respective multiplicities $m_1, \ldots, m_s$, and let the quantities $a_i$ and $b_i$ be defined as in Lemma 2. Without loss of generality, for some $1 \le i \le s$ we put $\xi = \xi_i$, $m = m_i$, $a = a_i$ and $b = b_i$.
First, we shall prove that $x \in D$. According to (9), we need to prove that $f(x) \neq 0$ implies $1 - \alpha(1-m) - m\alpha L(x) \neq 0$. To do this, we shall prove the inequality $|1 - \alpha(1-m) - m\alpha L(x)| > 0$, which is in fact equivalent to
$$|\delta(m+a)^2 + m\alpha(m+b)| > 0. \tag{40}$$
Indeed, from Lemma 2 (i), (37) and the first condition of (38), we get $|m+a| > 0$, which means that $f'(x) \neq 0$, and so by Lemma 2 we have
$$F(x) = \frac{x-\xi}{m+a} \quad\text{and}\quad L(x) = F(x)\,\frac{f''(x)}{f'(x)} = 1 - \frac{m+b}{(m+a)^2}, \tag{41}$$
which in turn leads to
$$|1 - \alpha(1-m) - m\alpha L(x)| = \left|\delta + m\alpha\,\frac{m+b}{(m+a)^2}\right| = \frac{|\delta(m+a)^2 + m\alpha(m+b)|}{|m+a|^2}.$$
Now, let us define the number $A_\alpha$ by (36). If $\operatorname{Re}(\alpha) > 1/2$, then from the second condition of (38) and the estimates (37) we obtain $m|\alpha||m+b| > |\delta||m+a|^2$. Therefore, by Lemma 3 and the second condition of (38), we get
$$|\delta(m+a)^2 + m\alpha(m+b)| = |A_\alpha| \ge \frac{h_\alpha(E(x))}{(1-E(x))^2} > 0.$$
In the case $\operatorname{Re}(\alpha) \le 1/2$, we reach the same conclusion but with $m|\alpha||m+b| < |\delta||m+a|^2$. Consequently, (40) holds, and therefore $x \in D$. Further, from (7) and (41) we obtain
$$T_\alpha x - \xi = x - \xi - \frac{m(x-\xi)}{2(m+a)}\;\frac{\big(3 - m + (m-2)\alpha + m\delta\big)(m+a)^2 - m(\delta-\alpha)(m+b)}{\delta(m+a)^2 + m\alpha(m+b)} = \sigma_\alpha\,(x-\xi),$$
where
$$\sigma_\alpha = \frac{\big[m(3\delta-\alpha) + 2\delta a\big]a^2 + m(m + 2\alpha a)\,b}{2(m+a)\big[\delta(m+a)^2 + m\alpha(m+b)\big]}.$$
From this, the triangle inequality, the estimates (37) and the claims of Lemma 3, we obtain
$$|\sigma_\alpha| \le \frac{\big(m|3\delta-\alpha| + 2|\delta||a|\big)|a|^2 + m\big(m + 2|\alpha||a|\big)|b|}{2\,|m+a|\,|A_\alpha|} \le \frac{(n-m)E(x)^2}{2(m-nE(x))}\;\frac{g_\alpha(E(x))}{h_\alpha(E(x))} = \phi_\alpha(E(x)),$$
which completes the proof of the lemma. □
which completes the proof of the lemma. □
Proof 
(Proof of Theorem 1). Let $T_\alpha \colon D \subset \mathbb{C} \to \mathbb{C}$ be the Chebyshev–Halley iteration function defined by (7). If $m = n$, then $T_\alpha(x) = \xi$ for all $x \in \mathbb{C}$, and the conclusions of the theorem follow. Suppose $m < n$. From Lemma 4 and Theorem 3, it follows that the conclusions of Theorem 1 hold under the conditions
$$E(x_0) < m/n, \qquad h_\alpha(E(x_0)) > 0 \quad\text{and}\quad \phi_\alpha(E(x_0)) < 1.$$
This completes the proof, since $h_\alpha(t) > 0$ together with $\phi_\alpha(t) < 1$ is equivalent to $\Phi_\alpha(t) > 0$. □

3.3. Proof of Theorem 2

The proof of Theorem 2 is performed in the same way as that of Theorem 1. One just has to use the estimate $|x - \xi_j| \ge \rho(x)$ instead of (29) to reach the estimates
$$|a| \le (n-m)\widetilde{E}(x), \qquad |b| \le (n-m)\widetilde{E}(x)^2,$$
$$m - (n-m)\widetilde{E}(x) \le |m+a| \le m + (n-m)\widetilde{E}(x),$$
$$m - (n-m)\widetilde{E}(x)^2 \le |m+b| \le m + (n-m)\widetilde{E}(x)^2,$$
and then to apply Theorem 4 to the Chebyshev–Halley iteration function (6) with $\beta_\alpha$ defined by (21).

4. Comparative Analysis

In this section, we use Theorem 1 to compare the convergence domains and the error estimates of several particular members of the Chebyshev–Halley iteration family (6). Define the function $\phi_\alpha$ by (11). It is easy to see that the initial condition (14) can be presented in the form $E(x_0) < R_\alpha$, where $R_\alpha$ is the unique solution of the equation $\phi_\alpha(t) = 1$ in the interval $(0, m/n)$ (see, e.g., Corollaries 2 and 3). Observe that the bigger $R_\alpha$ is, the larger the convergence domain of the respective method and the better its error estimates are. In order to compare the convergence domains of Halley's method (2), Chebyshev's method (3), the Super-Halley method (5) and Osada's method (4) with each other and with some other members of the family (6), in the following figures we depict the functions $\phi_\alpha$ obtained for the pairs $n = 5$, $m = 3$ and $n = 10$, $m = 2$ with $\alpha = 0$, $\alpha = 1/2$, $\alpha = 1$, $\alpha = 1/(1-m)$ and with four complex numbers $\alpha$ randomly chosen from the rectangle
$$\{\, z \in \mathbb{C} : |\operatorname{Re}(z)| \le 1.1 \ \text{ and } \ |\operatorname{Im}(z)| \le 0.1 \,\}.$$
Note that these two pairs $(n, m)$ have been chosen to highlight the cases $n \le 2m$ and $n > 2m$, since we know (see [10] (Remark 1)) that in the second case Corollary 3 provides a larger convergence domain with better error estimates for the Super-Halley method than Corollary 2 does for Halley's method, and vice versa in the first case.
One can see from the graphs (Figure 1, Figure 2, Figure 3 and Figure 4) that in all considered cases except the first one (Figure 1), the randomly chosen method has a larger convergence domain and better error estimates than Osada's method. In the second case (Figure 2), the random method is better than Chebyshev's method, while in the last case (Figure 4) the random method is better even than Halley's method.
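The comparison can also be reproduced without plotting. The sketch below (our own illustration, assuming the definitions (11)–(13)) computes the convergence radius $R_\alpha$ by bisection and checks it against the closed forms of Corollaries 2 and 3 for the pair $n = 10$, $m = 2$:

```python
import math

# Compute the convergence radius R_alpha (root of phi_alpha(t) = 1 in (0, m/n))
# by bisection, using the definitions (11)-(13). Larger R_alpha means a larger
# convergence domain and better error estimates. Illustrative sketch only.
def phi(alpha, n, m, t):
    delta = 1 - alpha
    if alpha == 0.5:
        g = 2*n*(m - n*t)
        h = 2*m*(1 - t)*(m - n*t) - n*(n - m)*t**2
    else:
        g = (2*(n - m)*((n - m)*abs(delta) + m*abs(alpha))*t
             + m*((n - m)*abs(3*delta - alpha) + m)*(1 - t))
        if alpha.real > 0.5:
            h = m*abs(alpha)*((2*m - n)*t**2 - 2*m*t + m) - abs(delta)*(m + (n - 2*m)*t)**2
        else:
            h = abs(delta)*(m - n*t)**2 - m*abs(alpha)*(n*t**2 - 2*m*t + m)
    return (n - m)*t**2 / (2*(m - n*t)) * g / h

def radius(alpha, n, m, iters=200):
    lo, hi = 0.0, m/n - 1e-12
    for _ in range(iters):
        mid = (lo + hi) / 2
        try:
            ok = 0 < phi(alpha, n, m, mid) < 1   # still inside the convergence domain
        except ZeroDivisionError:
            ok = False
        lo, hi = (mid, hi) if ok else (lo, mid)
    return lo

n, m = 10, 2                                      # the case n > 2m
R_halley = radius(0.5, n, m)
R_super  = radius(1.0, n, m)
# Bisection agrees with the closed forms in Corollaries 2 and 3:
assert abs(R_halley - 2*m/(n + m + math.sqrt((n - m)*(5*n - m)))) < 1e-9
assert abs(R_super - 2*m/(n + m + math.sqrt(3*(n - m)*(n + m)))) < 1e-9
assert R_super > R_halley                         # Super-Halley wins for n > 2m
```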

5. Conclusions

Two kinds of convergence theorems (Theorems 1 and 2) that provide exact sets of initial conditions, a priori and a posteriori error estimates right from the first step, and assessments of the asymptotic error constant of the Chebyshev–Halley iteration family (6) for multiple polynomial zeros have been proven in this paper. These results unify and complement the existing results about the well-known Halley, Chebyshev and Super-Halley methods for multiple polynomial zeros. The obtained theorems about Osada's method (4) are the first of their kind in the literature. Finally, this unifying study allowed us to compare the mentioned famous iteration methods with some new randomly generated ones. This comparison showed that our results assure larger convergence domains and better error estimates for Halley's method (2) when $n \le 2m$ and for the Super-Halley method (5) when $n > 2m$. However, in the second case, there exist many methods that are better than Halley's method.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Halley, E. A new, exact, and easy method of finding the roots of any equations generally, and that without any previous reduction. Philos. Trans. R. Soc. 1694, 18, 136–148. (In Latin)
2. Chebyshev, P. Complete Works of P.L. Chebishev; USSR Academy of Sciences: Moscow, Russia, 1973; pp. 7–25. (In Russian)
3. Ypma, T. Historical development of the Newton-Raphson method. SIAM Rev. 1995, 37, 531–551.
4. Scavo, T.; Thoo, J.B. On the geometry of Halley's method. Am. Math. Mon. 1995, 102, 417–433.
5. Ezquerro, J.; Gutiérrez, J.M.; Hernández, M.; Salanova, M. Halley's method: Perhaps the most rediscovered method in the world. In Margarita Mathematica; University of La Rioja: Logroño, Spain, 2001; pp. 205–220. (In Spanish)
6. Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365.
7. Obreshkov, N. On the numerical solution of equations. Annu. Univ. Sofia Fac. Sci. Phys. Math. 1963, 56, 73–83. (In Bulgarian)
8. Osada, N. An optimal multiple root-finding method of order three. J. Comput. Appl. Math. 1994, 51, 131–133.
9. Osada, N. Chebyshev–Halley methods for analytic functions. J. Comput. Appl. Math. 2008, 216, 585–599.
10. Ivanov, S.I. General Local Convergence Theorems about the Picard Iteration in Arbitrary Normed Fields with Applications to Super-Halley Method for Multiple Polynomial Zeros. Mathematics 2020, 8, 1599.
11. Hernández, M.; Salanova, M. A family of Chebyshev-Halley type methods. Int. J. Comput. Math. 1993, 47, 59–63.
12. Proinov, P.D. General local convergence theory for a class of iterative processes and its applications to Newton's method. J. Complex. 2009, 25, 38–62.
13. Proinov, P.D.; Ivanov, S.I. On the convergence of Halley's method for multiple polynomial zeros. Mediterr. J. Math. 2015, 12, 555–572.
14. Ivanov, S.I. On the convergence of Chebyshev's method for multiple polynomial zeros. Results Math. 2016, 69, 93–103.
15. Jay, L.O. A note on Q-order of convergence. BIT 2001, 41, 422–429.
16. Proinov, P.D. Two Classes of Iteration Functions and Q-Convergence of Two Iterative Methods for Polynomial Zeros. Symmetry 2021, 13, 371.
Figure 1. Graph of the functions $\phi_0$, $\phi_{1/2}$, $\phi_1$, $\phi_{1/(1-m)}$ and $\phi_\alpha$ for $n = 5$, $m = 3$ and $\alpha = 0.285 + 0.006i$.
Figure 2. Graph of the functions $\phi_0$, $\phi_{1/2}$, $\phi_1$, $\phi_{1/(1-m)}$ and $\phi_\alpha$ for $n = 5$, $m = 3$ and $\alpha = 0.874 - 0.097i$.
Figure 3. Graph of the functions $\phi_0$, $\phi_{1/2}$, $\phi_1$, $\phi_{1/(1-m)}$ and $\phi_\alpha$ for $n = 10$, $m = 2$ and $\alpha = 0.487 - 0.083i$.
Figure 4. Graph of the functions $\phi_0$, $\phi_{1/2}$, $\phi_1$, $\phi_{1/(1-m)}$ and $\phi_\alpha$ for $n = 10$, $m = 2$ and $\alpha = 1.016 + 0.030i$.