# Two Classes of Iteration Functions and Q-Convergence of Two Iterative Methods for Polynomial Zeros

by
Petko D. Proinov
Faculty of Mathematics and Informatics, University of Plovdiv Paisii Hilendarski, 24 Tzar Asen, 4000 Plovdiv, Bulgaria
Symmetry 2021, 13(3), 371; https://doi.org/10.3390/sym13030371
Submission received: 6 February 2021 / Revised: 17 February 2021 / Accepted: 19 February 2021 / Published: 25 February 2021

## Abstract

In this work, two broad classes of iteration functions in n-dimensional vector spaces are introduced. They are called iteration functions of the first and second kind at a fixed point of the corresponding iteration function. Two general local convergence theorems are presented for Picard-type iterative methods with high Q-order of convergence. In particular, it is shown that if an iterative method is generated by an iteration function of first or second kind, then it is Q-convergent under each initial approximation that is sufficiently close to the fixed point. As an application, a detailed local convergence analysis of two fourth-order iterative methods is provided for finding all zeros of a polynomial simultaneously. The new results improve the previous ones for these methods in several directions.
MSC:
47J25; 47J26; 65J15; 65H04

## 1. Introduction

This paper is devoted to the convergence of iterative methods for the simultaneous approximation of all zeros of an algebraic polynomial of degree $n ≥ 2$. Each method is generated by an iteration function in an n-dimensional normed space. The first method for simultaneously finding polynomial zeros was introduced by Weierstrass [1] in 1891. In his work, the main role is played by the symmetric Vieta's system as well as by the elementary symmetric functions. By further developing Weierstrass' ideas, a semilocal convergence theorem for Weierstrass' method was proven in [2] under initial conditions stated via the elementary symmetric functions.

#### 1.1. Classical Iterative Methods for Simultaneous Approximation of Polynomial Zeros

Throughout the paper, $(K, | \cdot |)$ denotes a field with a nontrivial absolute value $| \cdot |$ (see, e.g., Chapter 12 of [3]), and $K[z]$ denotes the ring of polynomials over $K$. Without loss of generality, we assume that $K$ is complete. As usual, $R$ and $C$ stand for the fields of real and complex numbers, respectively. Let
$f(z) = a_0 z^n + a_1 z^{n-1} + \cdots + a_{n-1} z + a_n$
be a polynomial in $K[z]$ of degree $n \ge 2$. If f splits in $K$, we try to find all the zeros $\xi_1, \ldots, \xi_n$ of f as a vector $\xi = (\xi_1, \ldots, \xi_n)$ in the space $K^n$. Every iterative method for simultaneously finding all the zeros of a polynomial $f \in K[z]$ is given by a fixed point iteration
$x^{(k+1)} = T(x^{(k)}), \quad k = 0, 1, 2, \ldots$ (1)
where $T : D \subset K^n \to K^n$ is an iteration function. We identify the iterative method (1) with its iteration function T. Let us recall two well-known iteration functions for simultaneous approximation of polynomial zeros:
Definition 1
(Weierstrass [1]). Weierstrass' iteration function $T : \mathcal{D} \subset K^n \to K^n$ is defined by
$T(x) = x - W(x),$ (2)
where the Weierstrass correction $W : \mathcal{D} \subset K^n \to K^n$ is defined by
$W_i(x) = \frac{f(x_i)}{a_0 \prod_{j = 1, j \ne i}^{n} (x_i - x_j)} \quad (i = 1, \ldots, n),$
and $\mathcal{D}$ denotes the set of all vectors in $K^n$ with pairwise distinct components.
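Definition 1 translates directly into code. The following Python sketch is illustrative only and is not taken from the paper: the cubic polynomial, the starting vector, and the helper name `weierstrass_step` are our own choices.

```python
# Sketch of one Weierstrass step T(x) = x - W(x) from Definition 1.
# coeffs = [a0, a1, ..., an] holds the polynomial coefficients; the vector x
# must have pairwise distinct components (x lies in the set of Definition 1).
def weierstrass_step(coeffs, x):
    def f(z):
        # Horner evaluation of f at z
        v = 0.0
        for c in coeffs:
            v = v * z + c
        return v

    a0, n = coeffs[0], len(x)
    new = []
    for i in range(n):
        prod = a0
        for j in range(n):
            if j != i:
                prod *= x[i] - x[j]
        # W_i(x) = f(x_i) / (a0 * prod_{j != i} (x_i - x_j))
        new.append(x[i] - f(x[i]) / prod)
    return new

# Illustrative example: f(z) = z^3 - 6z^2 + 11z - 6 = (z-1)(z-2)(z-3),
# with arbitrary complex starting points near the zeros.
coeffs = [1.0, -6.0, 11.0, -6.0]
x = [0.5 + 0.5j, 1.8 - 0.3j, 3.4 + 0.2j]
for _ in range(30):
    x = weierstrass_step(coeffs, x)
roots = sorted(x, key=lambda z: z.real)
```

After a few dozen iterations from these starting points, `roots` approximates the zero vector $(1, 2, 3)$.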
Definition 2
(Ehrlich [4]). Ehrlich's iteration function $T : D \subset K^n \to K^n$ is defined by
$T_i(x) = x_i - \frac{f(x_i)}{f'(x_i) - f(x_i) \sum_{j = 1, j \ne i}^{n} \frac{1}{x_i - x_j}} = x_i - \frac{W_i(x)}{1 + \sum_{j = 1, j \ne i}^{n} \frac{W_j(x)}{x_i - x_j}}.$ (4)
The first part of Formula (4) was introduced by Ehrlich [4] in 1967 and was rediscovered by Aberth [5] in 1973. The second formula in (4) is due to Börsch-Supan [6].
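The first form of (4) can be sketched in Python as follows; the example polynomial, the starting points, and the helper name `ehrlich_step` are illustrative choices of ours, not part of the paper.

```python
# Sketch of one Ehrlich step using the first form of (4):
# T_i(x) = x_i - f(x_i) / (f'(x_i) - f(x_i) * sum_{j != i} 1/(x_i - x_j)).
def ehrlich_step(coeffs, x):
    def f(z):
        v = 0.0
        for c in coeffs:
            v = v * z + c
        return v

    def fprime(z):
        # Horner evaluation of the derivative
        v, n = 0.0, len(coeffs) - 1
        for k, c in enumerate(coeffs[:-1]):
            v = v * z + (n - k) * c
        return v

    new = []
    for i, xi in enumerate(x):
        s = sum(1.0 / (xi - xj) for j, xj in enumerate(x) if j != i)
        new.append(xi - f(xi) / (fprime(xi) - f(xi) * s))
    return new

# Illustrative example: f(z) = (z-1)(z-2)(z-3), real starting points.
coeffs = [1.0, -6.0, 11.0, -6.0]
x = [0.6, 2.3, 3.5]
for _ in range(20):
    x = ehrlich_step(coeffs, x)
```

From these starting points, the iterates converge rapidly to the zero vector $(1, 2, 3)$, in line with the third-order convergence of Ehrlich's method for simple zeros.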

#### 1.2. Q-Order of Convergence

Let us recall the notions of Q-convergence and R-convergence (see, e.g., Jay [7]).
Definition 3.
Let $(x^{(k)})_{k=0}^\infty$ be a sequence in $(K^n, \| \cdot \|)$ which converges to a point $\xi \in K^n$. The sequence $(x^{(k)})_{k=0}^\infty$ converges to ξ with Q-order (at least) $r \ge 1$ if there exist two constants $c = c(r)$ and $K = K(r)$ such that
$\| x^{(k+1)} - \xi \| \le c\, \| x^{(k)} - \xi \|^r \quad \text{for all } k \ge K.$
If $x^{(k)} \ne \xi$ for sufficiently large k, then $(x^{(k)})_{k=0}^\infty$ converges to ξ with Q-order $r \ge 1$ if and only if
$\limsup_{k \to \infty} \frac{\| x^{(k+1)} - \xi \|}{\| x^{(k)} - \xi \|^r} < \infty.$
The value of the above limit superior is called the asymptotic error constant or asymptotic constant factor.
The notion of Q-order of convergence does not depend on the norm $\| \cdot \|$ because all norms on $K^n$ are equivalent (see, e.g., Chapter 12 of [3]), but the asymptotic error constant does depend on the choice of the norm.
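The limsup characterization in Definition 3 suggests a simple numerical heuristic: for a superlinearly convergent sequence with errors $e_{k+1} \approx c\, e_k^r$, the ratios $\log e_{k+1} / \log e_k$ approach r. The sketch below is our own illustration on a synthetic quadratically convergent error sequence, not a computation from the paper.

```python
import math

# Heuristic Q-order estimate from successive error norms: if
# e_{k+1} ~ c * e_k^r with e_k -> 0, then log(e_{k+1})/log(e_k) -> r.
# Synthetic illustrative sequence with Q-order 2 and c = 1:
errors = [1e-1]
for _ in range(4):
    errors.append(errors[-1] ** 2)

estimates = [math.log(errors[k + 1]) / math.log(errors[k])
             for k in range(len(errors) - 1)]
```

For this sequence every ratio equals 2, matching the prescribed Q-order; for real iterates the ratios stabilize near r before rounding errors take over.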
A weaker form of order of convergence is given by the concept of R-order of convergence.
Definition 4.
Let $(x^{(k)})_{k=0}^\infty$ be a sequence in $(K^n, \| \cdot \|)$ which converges to a point $\xi \in K^n$. The sequence $(x^{(k)})_{k=0}^\infty$ converges to ξ with R-order (at least) $r \ge 1$ if there exists a sequence of real numbers $(\sigma_k)_{k=0}^\infty$ converging to zero with Q-order (at least) r such that
$\| x^{(k)} - \xi \| \le \sigma_k.$

#### 1.3. A Fourth-Order Root-Finding Method for Simple Polynomial Zeros

Definition 5
(Kyurkchiev [8], Zheng and Sun [9]). Let $f \in K[z]$ be a polynomial of degree $n \ge 2$. Define the iteration function $T : D \subset K^n \to K^n$ by
$T_i(x) = \begin{cases} x_i - \dfrac{1}{A_i(x) + B_i(x)} & \text{if } f(x_i) \ne 0, \\ x_i & \text{if } f(x_i) = 0, \end{cases} \quad (i = 1, \ldots, n),$ (5)
where
$A_i(x) = \frac{f'(x_i)}{f(x_i)} - \sum_{j = 1, j \ne i}^{n} \frac{1}{x_i - x_j} \quad \text{and} \quad B_i(x) = \sum_{j = 1, j \ne i}^{n} \frac{W_j(x)}{(x_i - x_j)^2},$
and the domain D is defined by
$D = \{\, x \in \mathcal{D} : A_i(x) + B_i(x) \ne 0 \text{ whenever } f(x_i) \ne 0 \,\},$
where $\mathcal{D}$ denotes the set of all vectors in $K^n$ with pairwise distinct components.
The iterative method (5) was introduced by Kyurkchiev [8] in 1983 and was rediscovered by Zheng and Sun [9] in 1999. Kyurkchiev [8] also proved the following local convergence theorem for the method (5).
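Method (5) can be sketched in Python as follows. The polynomial, starting vector, and helper name `kzs_step` are illustrative assumptions of ours; only the formulas for $A_i$, $B_i$ and the two branches come from Definition 5.

```python
# Sketch of one step of the fourth-order method (5) for simple zeros.
def kzs_step(coeffs, x):
    def f(z):
        v = 0.0
        for c in coeffs:
            v = v * z + c
        return v

    def fprime(z):
        v, n = 0.0, len(coeffs) - 1
        for k, c in enumerate(coeffs[:-1]):
            v = v * z + (n - k) * c
        return v

    a0, n = coeffs[0], len(x)
    # Weierstrass corrections W_j(x) used inside B_i(x)
    W = []
    for i in range(n):
        p = a0
        for j in range(n):
            if j != i:
                p *= x[i] - x[j]
        W.append(f(x[i]) / p)

    new = []
    for i in range(n):
        if f(x[i]) == 0:
            new.append(x[i])                  # second branch of (5)
            continue
        A = fprime(x[i]) / f(x[i]) - sum(1.0 / (x[i] - x[j])
                                         for j in range(n) if j != i)
        B = sum(W[j] / (x[i] - x[j]) ** 2 for j in range(n) if j != i)
        new.append(x[i] - 1.0 / (A + B))      # first branch of (5)
    return new

# Illustrative example: f(z) = (z-1)(z-2)(z-3), starting near the zeros.
coeffs = [1.0, -6.0, 11.0, -6.0]
x = [0.9, 2.1, 3.1]
for _ in range(6):
    x = kzs_step(coeffs, x)
```

A handful of iterations already reaches machine precision, consistent with the fourth-order convergence stated in Theorem 1.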
Theorem 1
(Kyurkchiev [8]). Let $f \in C[z]$ be a polynomial of degree $n \ge 2$ which has n simple zeros $\xi_1, \ldots, \xi_n$. Let $0 < h < 1$ and $c > 0$ be such that
$c < \delta / (e n + 2),$
where $e = 2.718\ldots$ and $\delta = \min_{i \ne j} | \xi_i - \xi_j |$. Suppose $x^{(0)} \in C^n$ is an initial approximation such that
$\| x^{(0)} - \xi \|_\infty \le c\, h,$
where $\xi = (\xi_1, \ldots, \xi_n)$ and $\| \cdot \|_\infty$ is the max-norm on $C^n$. Then the iteration (5) converges to ξ with R-order of convergence four and with error estimate
$\| x^{(k)} - \xi \|_\infty \le c\, h^{4^k} \quad \text{for all } k \ge 0.$

#### 1.4. A Fourth-Order Root-Finding Method for Multiple Polynomial Zeros with Known Multiplicities

Let $f \in K[z]$ be a polynomial of degree $n \ge 2$ and let $\xi_1, \ldots, \xi_s$ be all distinct zeros of f with multiplicities $m_1, \ldots, m_s$ ($m_1 + \cdots + m_s = n$). In 1998, Iliev [10] generalized the method (5) to polynomials with zeros of arbitrary multiplicities.
Definition 6
(Iliev [10]). Let $f \in K[z]$ be a polynomial of degree $n \ge 2$ which splits in $K$ and has s distinct zeros with multiplicities $m_1, \ldots, m_s$ ($m_1 + \cdots + m_s = n$). Define the iteration function $T : D \subset K^s \to K^s$ by
$T_i(x) = \begin{cases} x_i - \dfrac{m_i}{A_i(x) + B_i(x)} & \text{if } f(x_i) \ne 0, \\ x_i & \text{if } f(x_i) = 0, \end{cases} \quad (i = 1, \ldots, s),$ (6)
where
$A_i(x) = \frac{f'(x_i)}{f(x_i)} - \sum_{j = 1, j \ne i}^{s} \frac{m_j}{x_i - x_j} \quad \text{and} \quad B_i(x) = \sum_{j = 1, j \ne i}^{s} \frac{m_j W_j(x)}{(x_i - x_j)^2} \left( \frac{A_j(x)}{m_j} \right)^{m_j - 1},$
and the domain D is defined by
$D = \{\, x \in \mathcal{D} : A_i(x) + B_i(x) \ne 0 \text{ whenever } f(x_i) \ne 0 \,\},$
where $\mathcal{D}$ denotes the set of all vectors in $K^s$ with pairwise distinct components.
It is easy to see that in the case of simple zeros ($s = n$), Definition 6 coincides with Definition 5. Iliev [10] proved the following local convergence result for the method (6).
Theorem 2
(Iliev [10]). Let $f \in C[z]$ be a polynomial of degree $n \ge 2$ and let $\xi_1, \ldots, \xi_s$ be all distinct zeros of f with multiplicities $m_1, \ldots, m_s$ ($m_1 + \cdots + m_s = n$). Let $0 < h < 1$ and $c > 0$ be such that
$c < \frac{\delta}{2} \quad \text{and} \quad \frac{2 n c^2}{(\delta - 2c)^2} \left( \frac{c}{\delta - 2c} + 1 \right) + \frac{c}{\delta - 2c} \left( N + M N + M \right) < m,$
where $\delta = \min_{i \ne j} | \xi_i - \xi_j |$, $m = \min_{1 \le i \le s} m_i$,
$M = \left( 1 + \frac{c}{\delta - 2c} \right)^{n-1} \quad \text{and} \quad N = \left( 1 + \frac{n c^2}{(\delta - 2c)^2} \right)^{n-1} - 1.$
Suppose $x^{(0)} \in C^s$ is an initial approximation satisfying the condition
$\| x^{(0)} - \xi \|_\infty \le c\, h,$
where $\xi = (\xi_1, \ldots, \xi_s)$ and $\| \cdot \|_\infty$ is the max-norm on $C^s$. Then the iteration (6) converges to ξ with R-order of convergence four and with error estimate
$\| x^{(k)} - \xi \|_\infty \le c\, h^{4^k} \quad \text{for all } k \ge 0.$

#### 1.5. The Purpose of the Paper

The main purpose of the paper is twofold:
• To introduce and study two large classes of iteration functions in $K^n$. They are called iteration functions of the first and second kind at a fixed point of the corresponding iteration function. Two general local convergence theorems are presented for Picard-type iterative methods with high Q-order of convergence. In particular, it is shown that if an iterative method is generated by an iteration function of the first or second kind, then it is Q-convergent under each initial approximation that is sufficiently close to the fixed point.
• To improve and complement Theorems 1 and 2 in several directions. The advantages of the new results over the previous ones in [8,10] include: Q-convergence of the methods (5) and (6); larger convergence domains; sharper a priori error estimates; and new a posteriori error estimates as well as upper bounds for the asymptotic error constants.

## 2. Two Kinds of Iteration Functions

In this section, two kinds of iteration functions are introduced in $K^n$ with respect to two kinds of functions of initial conditions that were presented in [11]. They are called iteration functions of the first and second kind. In Propositions 2–8, the properties of iteration functions of the first and second kind are studied, as well as the relationship between them. In terms of iteration functions of the first and second kind, two general convergence theorems are presented for iterative methods in $K^n$ (Theorems 3 and 4) which play an important role in the next sections. As a consequence of Theorem 3, a general convergence theorem (Corollary 1) of the type of Theorem 1 is proven.
This section can be considered as a continuation of a previous paper [12], where the iteration functions of first and second kinds were used for the first time without a name.

#### 2.1. Notations

Assume that the real vector space $R^n$ is equipped with the coordinate-wise partial ordering defined by
$x \preceq y \quad \text{if and only if} \quad x_i \le y_i \text{ for all } i = 1, \ldots, n.$
As usual, the vector space $K^n$ is endowed with the product topology. In the sequel, $K^n$ is equipped with the norm defined by
$\| x \|_p = \left( \sum_{i=1}^{n} | x_i |^p \right)^{1/p} \quad \text{for some } 1 \le p \le \infty.$
In addition, $K^n$ is equipped with a vector-valued norm (cone norm) $\| \cdot \|$ with values in $R^n$ defined by
$\| x \| = ( | x_1 |, \ldots, | x_n | ).$
For two vectors $x \in K^n$ and $y \in R^n$, we denote by $x / y$ the vector in $R^n$ defined by
$\frac{x}{y} = \left( \frac{| x_1 |}{y_1}, \ldots, \frac{| x_n |}{y_n} \right),$
provided that y has only nonzero components. In the sequel, we use the function $d : K^n \to R^n$ defined by
$d(x) = ( d_1(x), \ldots, d_n(x) ) \quad \text{with} \quad d_i(x) = \min_{j \ne i} | x_i - x_j | \quad (i = 1, \ldots, n),$ (17)
and the function $\delta : K^n \to R_+$ defined by
$\delta(x) = \min_{i \ne j} | x_i - x_j |.$ (18)
We assume by definition that $0^0 = 1$, and we define a sequence $(S_k)_{k=0}^\infty$ of functions $S_k : R \to R$ as follows:
$S_k(t) = \begin{cases} (t^k - 1) / (t - 1) & \text{for } t \ne 1, \\ k & \text{for } t = 1. \end{cases}$
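The notational helpers of this subsection can be sketched directly in Python; the function names `d`, `delta`, and `S` mirror the symbols above and are our own illustrative choices.

```python
# Sketches of d(x), delta(x), and S_k(t) from (17), (18), and the last display,
# for vectors given as Python lists of (possibly complex) numbers.
def d(x):
    """d_i(x) = min over j != i of |x_i - x_j|, returned as a list."""
    return [min(abs(xi - xj) for j, xj in enumerate(x) if j != i)
            for i, xi in enumerate(x)]

def delta(x):
    """delta(x) = min over i != j of |x_i - x_j|."""
    return min(d(x))

def S(k, t):
    """S_k(t) = (t^k - 1)/(t - 1) for t != 1, and k for t = 1."""
    return k if t == 1 else (t ** k - 1) / (t - 1)
```

For example, for $x = (0, 1, 3)$ one gets $d(x) = (1, 1, 2)$ and $\delta(x) = 1$, and $S_3(2) = 7$ is the partial geometric sum $1 + 2 + 4$.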

#### 2.2. Quasi-Homogeneous Function

Definition 7
([13]). Let J be an interval on $R_+$ containing 0. A function $\varphi : J \to R_+$ is called quasi-homogeneous of degree (at least) $m \ge 0$ if
$\varphi(\lambda t) \le \lambda^m \varphi(t) \quad \text{for all } \lambda \in [0, 1] \text{ and } t \in J.$
The next definition complements the previous one by introducing the concept of quasi-homogeneous function of exact degree.
Definition 8.
A function $\varphi : J \to R_+$ is called quasi-homogeneous of exact degree $m \ge 0$ if it is quasi-homogeneous of degree m and
$\lim_{t \to 0^+} \frac{\varphi(t)}{t^m} \ne 0.$
The following proposition gives some properties of quasi-homogeneous functions of exact degree. The proof is left to the reader.
Proposition 1.
The quasi-homogeneous functions of exact degree have the following useful properties:
(i)
A function ϕ is quasi-homogeneous on J of exact degree $m = 0$ if and only if ϕ is positive and nondecreasing on J, right-continuous at 0, and such that $\phi(0) < 1$.
(ii)
A function ϕ is quasi-homogeneous on J of exact degree $m > 0$ if and only if ϕ can be represented in the form $\phi(t) = t^m \sigma(t)$ for all $t \in J$, where σ is a positive nondecreasing function on J which is right-continuous at 0.
(iii)
If ϕ is a quasi-homogeneous function on J of exact degree $m > 0$, then ϕ is strictly increasing on J and $\phi(0) = 0$.
(iv)
If two functions f and g are quasi-homogeneous on J of exact degree $p \ge 0$ and $q \ge 0$, respectively, then $f g$ is quasi-homogeneous on J of exact degree $p + q$.
(v)
If two functions f and g are quasi-homogeneous on J of exact degree $m = 0$, then $f + g$ is also quasi-homogeneous on J of exact degree 0, provided that $f(0) + g(0) < 1$.
(vi)
If two functions f and g are quasi-homogeneous on J of exact degree $p \ge 0$ and $q > 0$, respectively, then $f + g$ is quasi-homogeneous on J of exact degree $m = \min \{ p, q \}$.
(vii)
If a function f is quasi-homogeneous on $J_1$ of exact degree $p \ge 0$ and a function g is quasi-homogeneous on $J_2$ of exact degree $q \ge 0$, then $g \circ f$ is quasi-homogeneous of exact degree $p q$ on the interval $J = \{ t \in J_1 : f(t) \in J_2 \}$, provided that $f(0) \in J_2$.
The following example gives a large class of quasi-homogeneous functions that will be used in the next sections. The proof follows from Proposition 1.
Example 1
([14]). Let $n \in N$ and let ϕ be a quasi-homogeneous function on an interval J of exact degree $m \ge 0$. Then the function
$\varphi(t) = (1 + \phi(t))^n - 1$
is also a quasi-homogeneous function on J of exact degree m.
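Example 1 can be spot-checked numerically. The sketch below is our own illustration: we pick the particular quasi-homogeneous function $\phi(t) = t$ (exact degree 1) and $n = 4$, so $\varphi(t) = (1 + t)^4 - 1$ should satisfy the inequality of Definition 7 with $m = 1$ on a grid of sample points.

```python
# Numeric spot-check of Definition 7 for the function of Example 1 with the
# illustrative choices phi(t) = t (exact degree 1) and n = 4, so that
# varphi(t) = (1 + t)^4 - 1 must satisfy varphi(lam*t) <= lam * varphi(t)
# for all lam in [0, 1] and t >= 0.
def varphi(t, n=4):
    return (1 + t) ** n - 1

ok = all(varphi(lam * t) <= lam * varphi(t) + 1e-12
         for lam in [0.0, 0.25, 0.5, 0.75, 1.0]
         for t in [0.0, 0.1, 0.5, 1.0, 2.0])
```

The check passes on every sample point, as expected: $\varphi$ is convex with $\varphi(0) = 0$, which forces $\varphi(\lambda t) \le \lambda \varphi(t)$.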

#### 2.3. Iteration Function of the First Kind

Definition 9
([12]). A function $T : D \subset K^n \to K^n$ is said to be an iteration function of the first kind at a point $\xi \in D$ if there exists a quasi-homogeneous function $\phi : J \to R_+$ of degree $m \ge 0$ such that for each vector $x \in K^n$ with $E(x) \in J$, the following conditions are satisfied:
$x \in D \quad \text{and} \quad \| T(x) - \xi \| \preceq \phi(E(x))\, \| x - \xi \|,$ (20)
where the function $E : K^n \to R_+$ is defined by
$E(x) = \left\| \frac{x - \xi}{d(\xi)} \right\|_p \quad (1 \le p \le \infty).$ (21)
The function ϕ is said to be a control function of T.
Remark 1.
It is easy to see that if $T : D \subset K^n \to K^n$ is an iteration function of the first kind at a point ξ, then ξ is a fixed point of T.
Before studying the properties of iteration functions of the first kind, we consider two examples of such functions. Let $f \in K[z]$ be a polynomial of degree $n \ge 2$ which has n simple zeros in $K$, and let $\xi \in K^n$ be a root vector of f.
Recall that a vector ξ in $K^n$ is called a root vector of a polynomial f of degree n if
$f(z) = a \prod_{j=1}^{n} (z - \xi_j) \quad \text{for all } z \in K,$
where $a \in K$.
Example 2
([14], Lemma 6.3). Weierstrass' iteration function $T : \mathcal{D} \subset K^n \to K^n$ defined by (2) is an iteration function of the first kind at the point ξ with control function $\phi : [0, 1/b) \to R_+$ of exact degree $m = 1$ defined by
$\phi(t) = \left( 1 + \frac{a t}{(n - 1)(1 - b t)} \right)^{n-1} - 1,$ (22)
where the constants $a = a(n, p)$ and $b = b(p)$ are defined by
$a = (n - 1)^{1/q} \quad \text{and} \quad b = 2^{1/q},$ (23)
and $1 \le q \le \infty$ is defined by $1/p + 1/q = 1$.
Example 3
([15], Lemma 2.3). Ehrlich's iteration function $T : D \subset K^n \to K^n$ defined by (4) is an iteration function of the first kind at the point ξ with control function $\phi : [0, \tau) \to R_+$ of exact degree $m = 2$ defined by
$\phi(t) = \frac{a t^2}{(1 - t)(1 - b t) - a t^2},$
where a and b are defined by (23) and $\tau = 2 / \left( b + 1 + \sqrt{(b - 1)^2 + 4a} \right)$.
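Control functions such as the one in Example 3 determine concrete convergence radii, since the initial condition of Theorem 3 below requires $\phi(t) < 1$. As an illustration under assumptions of ours (the choices $p = \infty$, hence $q = 1$, $a = n - 1$, $b = 2$, and $n = 3$; helper names are also ours), the largest admissible t can be located by bisection:

```python
# Illustrative computation: for Ehrlich's first-kind control function of
# Example 3 with p = infinity (so q = 1, a = n - 1, b = 2) and n = 3,
# bisect for the largest t in [0, tau) with phi(t) < 1.
n = 3
a, b = float(n - 1), 2.0
tau = 2.0 / (b + 1.0 + ((b - 1.0) ** 2 + 4.0 * a) ** 0.5)

def phi(t):
    return a * t ** 2 / ((1 - t) * (1 - b * t) - a * t ** 2)

# phi increases from phi(0) = 0 and blows up as t -> tau, so phi(t) = 1
# has exactly one root in (0, tau); bisect for it.
lo, hi = 0.0, tau * (1.0 - 1e-12)
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if phi(mid) < 1.0:
        lo = mid
    else:
        hi = mid
threshold = lo
```

For these parameters $\tau = 1/3$ and the threshold solves $2t^2 + 3t - 1 = 0$, i.e. $t = (\sqrt{17} - 3)/4 \approx 0.2808$.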
Proposition 2.
Let $T : D \subset K^n \to K^n$ and let $\xi \in K^n$ be a fixed point of T with pairwise distinct components. If T is an iteration function of the first kind at ξ with a control function $\phi : J \to R_+$ of exact degree $m \ge 0$, then for each vector $x \in K^n$ with $E(x) \in J$, the following inequality is satisfied:
$\| T(x) - \xi \|_p \le \frac{\sigma(E(x))}{\delta(\xi)^m}\, \| x - \xi \|_p^r,$ (24)
where $r = m + 1$ and $\sigma : J \to R_+$ is defined by $\sigma(0) = 0$ and $\sigma(t) = \phi(t) / t^m$ if $t > 0$.
Proof.
If $E(x) = 0$, then $x = \xi$ and (24) holds trivially. Let $E(x) > 0$. Taking into account that $E(x) \le \| x - \xi \|_p / \delta(\xi)$, we get
$\phi(E(x)) = E(x)^m \sigma(E(x)) \le \frac{\sigma(E(x))}{\delta(\xi)^m}\, \| x - \xi \|_p^m.$
From (20) and the monotonicity of the norm $\| \cdot \|_p$, we obtain
$\| T(x) - \xi \|_p \le \phi(E(x))\, \| x - \xi \|_p.$
Combining the last two inequalities, we get (24).  □
Proposition 3.
Let $T : D \subset K^n \to K^n$ and let $\xi \in K^n$ be a fixed point of T with pairwise distinct components, and suppose T is an iteration function of the first kind at ξ with a control function $\phi : J \to R_+$ of exact degree $m \ge 0$. If for some initial point $x^{(0)}$ the Picard sequence (1) converges to ξ, then its Q-order is $r = m + 1$ and the following upper bound for the asymptotic error constant holds:
$\limsup_{k \to \infty} \frac{\| x^{(k+1)} - \xi \|_p}{\| x^{(k)} - \xi \|_p^r} \le \frac{1}{\delta(\xi)^m} \lim_{t \to 0^+} \frac{\phi(t)}{t^m}.$
Proof.
If the Picard sequence (1) converges to ξ, then $E(x^{(k)}) \to 0$ as $k \to \infty$. Therefore, $E(x^{(k)}) \in J$ for sufficiently large k. For these values of k, we have both $x^{(k)} \in K^n$ and $E(x^{(k)}) \in J$. Then, by Proposition 2, for these values of k we have
$\| x^{(k+1)} - \xi \|_p \le \frac{\sigma(E(x^{(k)}))}{\delta(\xi)^m}\, \| x^{(k)} - \xi \|_p^r,$ (25)
where $r = m + 1$ and the function $\sigma : J \to R_+$ is defined by $\sigma(0) = 0$ and $\sigma(t) = \phi(t) / t^m$ if $t > 0$. Then it follows from (25) that
$\limsup_{k \to \infty} \frac{\| x^{(k+1)} - \xi \|_p}{\| x^{(k)} - \xi \|_p^r} \le \frac{1}{\delta(\xi)^m} \lim_{k \to \infty} \sigma(E(x^{(k)})) = \frac{1}{\delta(\xi)^m} \lim_{t \to 0^+} \sigma(t),$
which coincides with the desired estimate for the asymptotic error constant.  □
The following general local convergence theorem refers to Picard-type iterative methods that are generated by an iteration function of the first kind. It is a more complete version of Theorem 3.1 of [12].
Theorem 3.
Suppose $T : D \subset K^n \to K^n$ and $\xi \in K^n$ is a fixed point of T with pairwise distinct components. Let T be an iteration function of the first kind at ξ with a control function $\phi : J \to R_+$ of exact degree $m \ge 0$, and let $x^{(0)} \in K^n$ be an initial approximation of ξ such that
$E(x^{(0)}) \in J \quad \text{and} \quad \phi(E(x^{(0)})) < 1,$ (26)
where the function $E : K^n \to R_+$ is defined by (21). Then the following statements hold true.
(i)
Convergence. The Picard iteration (1) starting from $x^{(0)}$ is well-defined, remains in the set
$U = \{\, x \in K^n : E(x) \in J,\ \phi(E(x)) < 1 \,\}$ (27)
and converges to ξ with Q-order $r = m + 1$.
(ii)
A priori error estimate. For all $k \ge 0$, we have the estimate
$\| x^{(k)} - \xi \| \preceq \lambda^{S_k(r)}\, \| x^{(0)} - \xi \|,$ (28)
where $\lambda = \phi(E(x^{(0)}))$.
(iii)
First a posteriori error estimate. For all $k \ge 0$, we have the error estimate
$\| x^{(k+1)} - \xi \| \preceq \lambda^{r^k}\, \| x^{(k)} - \xi \|.$ (29)
(iv)
Second a posteriori error estimate. For all $k \ge 0$, we have the error estimate
$\| x^{(k+1)} - \xi \| \preceq \phi(E(x^{(k)}))\, \| x^{(k)} - \xi \|.$ (30)
(v)
Third a posteriori error estimate. For all $k \ge 0$, we have the error estimate
$\| x^{(k+1)} - \xi \|_p \le C_k\, \| x^{(k)} - \xi \|_p^r,$ (31)
where $(C_k)_{k=0}^\infty$ is a nonincreasing sequence defined by $C_k = \sigma(E(x^{(k)})) / \delta(\xi)^m$ and the function $\sigma : J \to R_+$ is defined by $\sigma(0) = 0$ and $\sigma(t) = \phi(t) / t^m$ if $t > 0$.
(vi)
An estimate for the asymptotic error constant. We have the estimate
$\limsup_{k \to \infty} \frac{\| x^{(k+1)} - \xi \|_p}{\| x^{(k)} - \xi \|_p^r} \le \frac{1}{\delta(\xi)^m} \lim_{t \to 0^+} \frac{\phi(t)}{t^m}.$ (32)
Proof.
The fact that the Picard sequence (1) is well-defined and convergent to ξ, as well as the estimates (28) and (29), were proven in Theorem 3.1 of [12]. It follows from the proof of that theorem that $T(U) \subset U$ and that the sequence $(E(x^{(k)}))_{k=0}^\infty$ is nonincreasing. Consequently, $E(x^{(k)}) \in J$ for all $k \ge 0$ and the sequence $(C_k)_{k=0}^\infty$ is nonincreasing. Then the estimates (30) and (31) follow from Definition 9 and Proposition 2, respectively. Finally, the convergence with Q-order $r = m + 1$ as well as the estimate (32) for the asymptotic error constant follow from Proposition 3.  □
Remark 2.
Note that the convergence domain U given by (27) contains an open ball centered at ξ. This property of the domain U will be established in the proof of Proposition 4.
The following corollary of Theorem 3 complements Proposition 3.
Proposition 4.
Let $T : D \subset K^n \to K^n$ and let $\xi \in K^n$ be a fixed point of T with pairwise distinct components, and suppose T is an iteration function of the first kind at ξ with a control function $\phi : J \to R_+$ of exact degree $m \ge 0$. If an initial guess is sufficiently close to ξ, then the Picard sequence (1) converges to ξ with Q-order $r = m + 1$.
Proof.
It follows from Proposition 1 that ϕ is right-continuous at 0 and $\phi(0) < 1$, i.e.,
$\lim_{t \to 0^+} \phi(t) = \phi(0) < 1.$
Hence, we can choose a positive number $R \in J$ such that $\phi(R) < 1$. Suppose $x^{(0)} \in K^n$ is an initial approximation which lies in the open ball with center ξ and radius $R\, \delta(\xi)$, i.e.,
$\| x^{(0)} - \xi \|_p < R\, \delta(\xi).$
From this and the obvious inequality
$E(x^{(0)}) = \left\| \frac{x^{(0)} - \xi}{d(\xi)} \right\|_p \le \frac{\| x^{(0)} - \xi \|_p}{\delta(\xi)},$ (33)
we get $E(x^{(0)}) < R$, where the function $E : K^n \to R_+$ is defined by (21). By the monotonicity of ϕ, we conclude that $\phi(E(x^{(0)})) \le \phi(R) < 1$. Hence, the initial conditions (26) are satisfied. Then it follows from Theorem 3 that the Picard sequence (1) converges to ξ with Q-order $r = m + 1$.  □
The next corollary is a general convergence theorem of the same type as Theorem 1.
Corollary 1.
Suppose $T : D \subset K^n \to K^n$ and $\xi \in K^n$ is a fixed point of T with pairwise distinct components. Let T be an iteration function of the first kind at ξ with a control function $\phi : J \to R_+$ of exact degree $m \ge 0$. Let $0 < h < 1$ and $c > 0$ be such that
$\frac{c}{\delta(\xi)} \in J \quad \text{and} \quad \phi\!\left( \frac{c}{\delta(\xi)} \right) < 1.$ (34)
Suppose $x^{(0)} \in K^n$ is an initial approximation of ξ satisfying the condition
$\| x^{(0)} - \xi \|_p \le c\, h \quad \text{for some } 1 \le p \le \infty.$ (35)
Then the Picard iteration (1) is well-defined and converges to ξ with Q-order $r = m + 1$ and with error estimates
$\| x^{(k+1)} - \xi \| \preceq h^{r^{k+1} - r^k}\, \| x^{(k)} - \xi \| \quad \text{and} \quad \| x^{(k)} - \xi \| \preceq h^{r^k - 1}\, \| x^{(0)} - \xi \|,$ (36)
$\| x^{(k)} - \xi \|_p \le c\, h^{r^k} \quad \text{for all } k \ge 0.$ (37)
Moreover, the estimate (32) for the asymptotic error constant holds.
Proof.
From (33) and (35), we get
$E(x^{(0)}) \le \frac{c h}{\delta(\xi)} < \frac{c}{\delta(\xi)}.$
From this and the first part of (34), we conclude that $E(x^{(0)}) \in J$. On the other hand, by the quasi-homogeneity of ϕ and the second part of (34), we obtain
$\phi(E(x^{(0)})) \le \phi\!\left( \frac{c h}{\delta(\xi)} \right) \le h^m\, \phi\!\left( \frac{c}{\delta(\xi)} \right) \le h^m < 1.$
Therefore, $x^{(0)}$ satisfies the initial conditions (26). Now it follows from Theorem 3 that the Picard iteration (1) is well-defined and converges to ξ with Q-order $r = m + 1$ and with the estimates (28), (29) and (32). From (28), (29) and $\phi(E(x^{(0)})) \le h^m$, we get the estimates (36). The estimate (37) follows from the second estimate of (36) and the initial condition (35).  □

#### 2.4. Iteration Function of the Second Kind

Definition 10
([12]). A function $T : D \subset K^n \to K^n$ is said to be an iteration function of the second kind at a point $\xi \in K^n$ if there exists a quasi-homogeneous function $\beta : J \to R_+$ of degree $m \ge 0$ such that for each vector $x \in K^n$ with pairwise distinct components and $E(x) \in J$, the following conditions are satisfied:
$x \in D \quad \text{and} \quad \| T(x) - \xi \| \preceq \beta(E(x))\, \| x - \xi \|,$ (38)
where the function E is defined by
$E(x) = \left\| \frac{x - \xi}{d(x)} \right\|_p \quad (1 \le p \le \infty).$ (39)
The function β is said to be a control function of T.
Remark 3.
It can easily be proved that if $T : D \subset K^n \to K^n$ is an iteration function of the second kind at a point ξ which has pairwise distinct components, then ξ is a fixed point of T.
Let us consider two examples of iteration functions of the second kind. Let $f \in K[z]$ be a polynomial of degree $n \ge 2$ which has n simple zeros in $K$, and let $\xi \in K^n$ be a root vector of f.
Example 4
([14], Lemma 7.2). Weierstrass' iteration function $T : \mathcal{D} \subset K^n \to K^n$ defined by (2) is an iteration function of the second kind at the point ξ with control function $\beta : [0, \infty) \to R_+$ of exact degree $m = 1$ defined by
$\beta(t) = \left( 1 + \frac{a t}{n - 1} \right)^{n-1} - 1,$ (40)
where a is defined by (23).
Example 5
([15], Lemma 3.2). Ehrlich's iteration function $T : D \subset K^n \to K^n$ defined by (4) is an iteration function of the second kind at the point ξ with control function $\beta : [0, \tau) \to R_+$ of exact degree $m = 2$ defined by
$\beta(t) = \frac{a t^2}{1 - t - a t^2},$ (41)
where $\tau = 2 / \left( 1 + \sqrt{1 + 4a} \right)$ and a is defined by (23).
Proposition 5.
Let $T : D \subset K^n \to K^n$ and let $\xi \in K^n$ be a fixed point of T with pairwise distinct components. If T is an iteration function of the second kind at ξ with a control function $\beta : J \to R_+$ of exact degree $m \ge 0$, then for each vector $x \in D$ with $E(x) \in J$, the following inequality is satisfied:
$\| T(x) - \xi \|_p \le \frac{\sigma(E(x))}{\delta(x)^m}\, \| x - \xi \|_p^r,$
where $r = m + 1$ and $\sigma : J \to R_+$ is defined by $\sigma(0) = 0$ and $\sigma(t) = \beta(t) / t^m$ if $t > 0$.
Proof.
The proof is analogous to that of Proposition 2.  □
In the next lemma, the functions d and $δ$ defined by (17) and (18) are proven to be Lipschitz continuous. This lemma is used in the proofs of Propositions 6 and 7.
Lemma 1.
Both functions $\delta : K^n \to R_+$ and $d : K^n \to R^n$ defined by (18) and (17) are Lipschitz continuous. More precisely, we have
$| \delta(x) - \delta(y) | \le b\, \| x - y \|_p \quad \text{and} \quad \| d(x) - d(y) \|_p \le b\, \| x - y \|_p \quad \text{for all } x, y \in K^n,$ (42)
where b is defined by (23).
Proof.
We prove only the first inequality of (42) because the proof of the second one is analogous. The first inequality of (42) is equivalent to the inequalities
$\delta(y) - b\, \| x - y \|_p \le \delta(x) \le \delta(y) + b\, \| x - y \|_p.$
We shall prove only the left inequality since the proof of the right one is analogous. By the triangle inequality in $K$ and Hölder's inequality, we obtain
$| x_i - x_j | \ge | y_i - y_j | - | x_i - y_i | - | x_j - y_j | \ge | y_i - y_j | - b\, \| x - y \|_p.$
Taking the minimum over all $i \ne j$, we get $\delta(x) \ge \delta(y) - b\, \| x - y \|_p$, which completes the proof.  □
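The first inequality of Lemma 1 is easy to spot-check numerically. The sketch below is our own illustration under the choice $p = \infty$ (so $q = 1$ and $b = 2^{1/q} = 2$), on random real test vectors.

```python
import random

# Numeric spot-check of |delta(x) - delta(y)| <= b * ||x - y||_inf from
# Lemma 1, with the illustrative choice p = infinity, hence b = 2.
def delta(x):
    return min(abs(xi - xj) for i, xi in enumerate(x)
               for j, xj in enumerate(x) if i != j)

random.seed(0)
b = 2.0
ok = True
for _ in range(200):
    x = [random.uniform(-5.0, 5.0) for _ in range(4)]
    y = [random.uniform(-5.0, 5.0) for _ in range(4)]
    lhs = abs(delta(x) - delta(y))
    rhs = b * max(abs(u - v) for u, v in zip(x, y))
    if lhs > rhs + 1e-12:
        ok = False
```

Random vectors almost surely have pairwise distinct components, so $\delta$ is well-behaved on every sample; the inequality holds on all 200 trials.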
Proposition 6.
Let $T : D \subset K^n \to K^n$ and let $\xi \in K^n$ be a fixed point of T with pairwise distinct components. Suppose T is an iteration function of the second kind at ξ with a control function $\beta : J \to R_+$ of exact degree $m \ge 0$. If for some initial point $x^{(0)}$ the Picard sequence (1) converges to ξ, then its Q-order is $r = m + 1$ and the following upper bound for the asymptotic error constant holds:
$\limsup_{k \to \infty} \frac{\| x^{(k+1)} - \xi \|_p}{\| x^{(k)} - \xi \|_p^r} \le \frac{1}{\delta(\xi)^m} \lim_{t \to 0^+} \frac{\beta(t)}{t^m}.$
Proof.
Let the functions δ, d and E be defined by (18), (17) and (39), respectively. By Lemma 1, both functions δ and d are continuous on $K^n$. The continuity of d implies the continuity of E on its domain.
Let the Picard sequence (1) be convergent to ξ, that is, $x^{(k)} \to \xi$ as $k \to \infty$. From this and the continuity of δ and E, taking into account that $\xi \in D$, we conclude that
$\delta(x^{(k)}) \to \delta(\xi) > 0 \quad \text{and} \quad E(x^{(k)}) \to 0 \quad \text{as } k \to \infty.$ (43)
Therefore, for sufficiently large k, we have both $x^{(k)} \in D$ and $E(x^{(k)}) \in J$. Then it follows from Proposition 5 that for these values of k, we have
$\| x^{(k+1)} - \xi \|_p \le \frac{\sigma(E(x^{(k)}))}{\delta(x^{(k)})^m}\, \| x^{(k)} - \xi \|_p^r,$
where $r = m + 1$ and the function $\sigma : J \to R_+$ is defined by $\sigma(0) = 0$ and $\sigma(t) = \beta(t) / t^m$ if $t > 0$. Suppose $x^{(k)} \ne \xi$ for sufficiently large k. Then it follows from (43) and the continuity of δ that
$\limsup_{k \to \infty} \frac{\| x^{(k+1)} - \xi \|_p}{\| x^{(k)} - \xi \|_p^r} \le \lim_{k \to \infty} \frac{\sigma(E(x^{(k)}))}{\delta(x^{(k)})^m} = \frac{1}{\delta(\xi)^m} \lim_{t \to 0^+} \sigma(t),$
which coincides with the announced estimate for the asymptotic error constant.  □
The next general local convergence theorem refers to Picard-type iterative methods that are generated by an iteration function of the second kind. This result is a more complete version of Theorem 4.1 of [12].
Theorem 4.
Let $T : D \subset K^n \to K^n$ be an iteration function of the second kind at a point $\xi \in K^n$ with a nonzero control function $\beta : J \to R_+$ of exact degree $m \ge 0$, and let $x^{(0)} \in K^n$ be an initial approximation with pairwise distinct components such that
$E(x^{(0)}) \in J \quad \text{and} \quad \Psi(E(x^{(0)})) \ge 0,$ (44)
where the function E is defined by (39) and the function $\Psi : J \to R$ is defined by
$\Psi(t) = 1 - b t - \beta(t)(1 + b t),$ (45)
where b is defined by (23). Then the following statements hold true.
(i)
Fixed point. The vector ξ is a fixed point of T with pairwise distinct components.
(ii)
Convergence. The Picard iteration (1) starting from $x^{(0)}$ is well-defined, remains in the set
$U = \{\, x \in D : E(x) \in J,\ \Psi(E(x)) \ge 0 \,\},$ (46)
and converges to ξ with Q-order $r = m + 1$.
(iii)
A priori error estimate. For all $k \ge 0$, we have the estimate
$\| x^{(k)} - \xi \| \preceq \theta^k \lambda^{S_k(r)}\, \| x^{(0)} - \xi \|,$ (47)
where $\lambda = \phi(E(x^{(0)}))$, $\theta = \psi(E(x^{(0)}))$ and the functions ψ and ϕ are defined by
$\psi(t) = 1 - b t (1 + \beta(t)) \quad \text{and} \quad \phi(t) = \beta(t) / \psi(t).$
(iv)
First a posteriori error estimate. For all $k \ge 0$, we have the error estimate
$\| x^{(k+1)} - \xi \| \preceq \theta\, \lambda^{r^k}\, \| x^{(k)} - \xi \|.$ (48)
(v)
Second a posteriori error estimate. For all $k \ge 0$, we have the error estimate
$\| x^{(k+1)} - \xi \| \preceq \beta(E(x^{(k)}))\, \| x^{(k)} - \xi \|.$ (49)
(vi)
Third a posteriori error estimate. For all $k \ge 0$, we have the error estimate
$\| x^{(k+1)} - \xi \|_p \le C_k\, \| x^{(k)} - \xi \|_p^r,$ (50)
where $(C_k)_{k=0}^\infty$ is a sequence defined by $C_k = \sigma(E(x^{(k)})) / \delta(x^{(k)})^m$ and the function $\sigma : J \to R_+$ is defined by $\sigma(0) = 0$ and $\sigma(t) = \beta(t) / t^m$ if $t > 0$.
(vii)
An estimate for the asymptotic error constant. We have the estimate
$\limsup_{k \to \infty} \frac{\| x^{(k+1)} - \xi \|_p}{\| x^{(k)} - \xi \|_p^r} \le \frac{1}{\delta(\xi)^m} \lim_{t \to 0^+} \frac{\beta(t)}{t^m}.$ (51)
Proof.
First we shall prove that if $t \in J$ is such that $\Psi(t) \ge 0$, then
$0 \le t < \frac{1}{b}, \quad 0 \le \beta(t) < 1, \quad 0 < \psi(t) \le 1, \quad 0 \le \phi(t) \le 1.$ (52)
It follows from $\Psi(t) \ge 0$ that
$\beta(t) \le \frac{1 - b t}{1 + b t} \quad \text{and} \quad \psi(t) \ge \beta(t).$ (53)
We only need to prove the strict inequalities in (52) for $t > 0$ because the remaining ones are trivial. It follows from Proposition 1 that $\beta(t) > 0$ for $t > 0$. From this and (53), we conclude that $t < 1/b$, $\beta(t) < 1$ and $\psi(t) \ge \beta(t) > 0$, which completes the proof of (52).
It follows from (44) and the first inequality in (52) that
$E(x^{(0)}) < \frac{1}{b}.$
According to Proposition 5.3 of [14], the last inequality yields that the vector ξ has pairwise distinct components. From Remark 3, we conclude that ξ is a fixed point of T.
The fact that the Picard sequence (1) is well-defined and convergent to ξ, as well as the estimates (47) and (48), were proven in Theorem 4.1 of [12]. It follows from the proof of that theorem that $T(U) \subset U$. From this, we deduce that $x^{(k)} \in D$ and $E(x^{(k)}) \in J$ for all $k \ge 0$. Then the estimates (49) and (50) follow from Definition 10 and Proposition 5, respectively. Finally, the convergence with Q-order $r = m + 1$ as well as the estimate (51) for the asymptotic error constant follow from Proposition 6.  □
Remark 4.
It follows from (52) that the constants λ and θ of Theorem 4 satisfy the inequalities
$0 \le \lambda \le 1, \quad 0 < \theta \le 1 \quad \text{and} \quad \lambda \theta < 1.$
Remark 5.
We note that the convergence domain U given by (46) contains an open ball with center ξ. This property of domain U will be proved in the proof of Proposition 7.
The next statement complements Proposition 6.
Proposition 7.
Let $T : D ⊂ K n → K n$ and $ξ ∈ K n$ be a fixed point of T with pairwise distinct components. Suppose T is an iteration function of second kind at ξ with a control function $β : J → R +$ of exact degree $m ≥ 0$. If an initial guess is sufficiently close to ξ, then the Picard sequence (1) converges to ξ with Q-order $r = m + 1$.
Proof.
Let the function $Ψ : J → R$ be defined by (45). Then
$lim t → 0 + Ψ ( t ) = 1 − lim t → 0 + β ( t ) = 1 − β ( 0 ) > 0 .$
Therefore, we can choose a positive number $R ∈ J$ such that $Ψ ( R ) > 0$. Let an initial approximation $x ( 0 ) ∈ K n$ lie in the closed ball with center $ξ$ and radius $ε δ ( ξ )$, i.e.,
$∥ x ( 0 ) − ξ ∥ p ≤ ε δ ( ξ ) , where ε = R 1 + b R .$
It follows from Lemma 1 that
$| δ ( ξ ) − δ ( x ( 0 ) ) | ≤ b ∥ x ( 0 ) − ξ ∥ p .$
From this and (54), we obtain
$δ ( ξ ) ≤ δ ( x ( 0 ) ) + b ∥ x ( 0 ) − ξ ∥ p ≤ δ ( x ( 0 ) ) + b ε δ ( ξ ) ,$
which implies
$δ ( ξ ) ≤ 1 1 − b ε δ ( x ( 0 ) ) .$
From this inequality and (54), we get
$∥ x ( 0 ) − ξ ∥ p ≤ ε δ ( ξ ) ≤ ε 1 − b ε δ ( x ( 0 ) ) = R δ ( x ( 0 ) ) .$
From this, we obtain
$E ( x ( 0 ) ) = x ( 0 ) − ξ d ( x ( 0 ) ) p ≤ ∥ x ( 0 ) − ξ ∥ p δ ( x ( 0 ) ) ≤ R .$
By monotonicity of $Ψ$, we conclude that $Ψ ( E ( x ( 0 ) ) ) ≥ Ψ ( R ) > 0$. Consequently, we have both $E ( x ( 0 ) ) ∈ J$ and $Ψ ( E ( x ( 0 ) ) ) > 0$. Hence, it follows from Theorem 4 that the Picard sequence (1) converges to $ξ$ with Q-order $r = m + 1$.  □
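The radius construction in the proof above can be illustrated numerically. The following minimal Python sketch (illustrative only) takes $p = ∞$, for which $b = 2$ by the definition $b = 2^{1/q}$ used later in (110), and a concrete root vector of its own choosing; it checks that an initial guess in the closed ball of radius $ε δ ( ξ )$ with $ε = R / ( 1 + b R )$ indeed satisfies $E ( x ( 0 ) ) ≤ R$:

```python
import numpy as np

def d(x):
    # d_i(x) = min over j != i of |x_i - x_j|
    return np.array([min(abs(xv - xw) for j, xw in enumerate(x) if j != i)
                     for i, xv in enumerate(x)])

def E(x, xi):
    # second-kind measure E(x) = || (x - xi) / d(x) ||_inf
    return float(np.max(np.abs(x - xi) / d(x)))

xi = np.array([0.0, 1.0, 3.0, 6.0])    # root vector with pairwise distinct components
delta = d(xi).min()                     # delta(xi) = min_i d_i(xi) = 1
b, R = 2.0, 0.3                         # b = 2 corresponds to p = infinity
eps = R / (1 + b * R)                   # radius used in the proof of Proposition 7

x0 = xi + 0.18 * np.array([1.0, -1.0, 1.0, -0.5])
assert np.max(np.abs(x0 - xi)) <= eps * delta   # x0 lies in the closed ball around xi
assert E(x0, xi) <= R                           # hence E(x0) <= R, as the proof shows
```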

#### 2.5. Relationship between the First and the Second Kind Iteration Functions

The aim of this section is to show that an iteration function of the second kind can be transformed into an iteration function of the first kind and vice versa.
From Example 2, Weierstrass’ iteration function (2) is seen to be an iteration function of first kind with a control function $ϕ$ defined by (22). On the other hand, Example 4 shows that Weierstrass’ iteration function is an iteration function of second kind with control function $β$ defined by (40). An analogous situation is observed for Ehrlich’s iteration function (4) (see Examples 3 and 5).
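For concreteness, recall that the classical Weierstrass method corrects each component by $W i ( x ) = f ( x i ) / ∏ j ≠ i ( x i − x j )$. A minimal Python sketch of this classical iteration (the cubic and the starting vector are arbitrary illustrative choices):

```python
import numpy as np

def weierstrass_step(x, coeffs):
    """One Weierstrass step: x_i <- x_i - f(x_i) / prod_{j != i} (x_i - x_j)."""
    x_new = x.copy()
    for i in range(len(x)):
        denom = np.prod([x[i] - x[j] for j in range(len(x)) if j != i])
        x_new[i] = x[i] - np.polyval(coeffs, x[i]) / denom
    return x_new

coeffs = [1.0, -6.0, 11.0, -6.0]        # f(z) = (z - 1)(z - 2)(z - 3), monic
x = np.array([0.8, 2.3, 3.1], dtype=complex)
for _ in range(20):
    x = weierstrass_step(x, coeffs)
assert np.allclose(np.sort(x.real), [1.0, 2.0, 3.0], atol=1e-8)
```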
In the next proposition, it is proven that an iteration function of second kind with a control function $β$ is at the same time an iteration function of first kind with control function $ϕ$.
Proposition 8.
Let $T : D ⊂ K n → K n$ and let $ξ ∈ K n$ be a fixed point of T with pairwise distinct components.
(i)
If T is an iteration function of second kind at ξ with a control function $β : J → R +$ of degree $m ≥ 0$, then there exists a quasi-homogeneous function $ϕ : J ¯ → R +$ of the same degree such that T is an iteration function of first kind at ξ with control function ϕ.
(ii)
If T is an iteration function of first kind at ξ with a control function $ϕ : J → R +$ of degree $m ≥ 0$, then there exists a quasi-homogeneous function $β : J ¯ → R +$ of the same degree such that T is an iteration function of second kind at ξ with control function β.
Proof.
For the interval J, there are three cases: $J = [ 0 , R ]$, $J = [ 0 , R )$ and $J = [ 0 , ∞ )$. We shall prove (i) in the case $J = [ 0 , R ]$ and (ii) in the case $J = [ 0 , ∞ )$ as the other cases can be proven analogously.
(i) Suppose T is an iteration function of second kind at $ξ$ with a control function $β : [ 0 , R ] → R +$ of degree $m ≥ 0$, where $R > 0$. We shall prove that T is an iteration function of first kind at $ξ$ with control function $ϕ : [ 0 , R ¯ ] → R +$ defined as follows:
$ϕ ( t ) = β t 1 − b t and R ¯ = R 1 + b R ,$
where b is defined by (23). Note that the function $ϕ$ is well-defined on $[ 0 , R ¯ ]$ since $R ¯ < 1 / b$ and $R ¯ / ( 1 − b R ¯ ) ≤ R$.
It is easy to see that $ϕ$ is a quasi-homogeneous function of degree m. Moreover, if $β$ is quasi-homogeneous of exact degree m, then the same is true for $ϕ$.
Let the functions $E : D → R +$ and $E ¯ : K n → R +$ be defined by
$E ( x ) = x − ξ d ( x ) p and E ¯ ( x ) = x − ξ d ( ξ ) p .$
Let $x ∈ K n$ and $E ¯ ( x ) ≤ R ¯$. We have to prove that
$x ∈ D and ∥ T ( x ) − ξ ∥ ⪯ ϕ ( E ¯ ( x ) ) ∥ x − ξ ∥ .$
It follows from $E ¯ ( x ) < 1 / b$ and Proposition 5.3 of [14] that $x ∈ D$. Then it follows from Proposition 4.1 of [11] that
$E ¯ ( x ) ≥ ( 1 − b E ¯ ( x ) ) E ( x ) .$
From this, we obtain
$E ( x ) ≤ E ¯ ( x ) 1 − b E ¯ ( x ) ≤ R ¯ 1 − b R ¯ = R .$
Thus we have $x ∈ D$ and $E ( x ) ≤ R$, which implies (38). The last inequality yields
$β ( E ( x ) ) ≤ β E ¯ ( x ) 1 − b E ¯ ( x ) = ϕ ( E ¯ ( x ) ) .$
From this and (38), we get (55). To complete the proof, it remains to note that both functions $ϕ$ and $β$ are quasi-homogeneous of the same degree.
(ii) Suppose T is an iteration function of first kind at $ξ$ with a control function $ϕ : [ 0 , + ∞ ) → R +$ of degree $m ≥ 0$. We shall prove that T is an iteration function of second kind at $ξ$ with control function $β : [ 0 , 1 / b ) → R +$ defined by
$β ( t ) = ϕ t 1 − b t .$
Let the functions $E : K n → R +$ and $E ¯ : D → R +$ be defined by
$E ( x ) = x − ξ d ( ξ ) p and E ¯ ( x ) = x − ξ d ( x ) p$
Let $x ∈ D$ and $E ¯ ( x ) < 1 / b$. We have to prove that
$x ∈ D and ∥ T ( x ) − ξ ∥ ⪯ β ( E ¯ ( x ) ) ∥ x − ξ ∥ .$
It follows from Proposition 4.1 of [11] that $E ¯ ( x ) ≥ ( 1 − b E ¯ ( x ) ) E ( x ) .$ From this, we obtain
$E ( x ) ≤ E ¯ ( x ) 1 − b E ¯ ( x ) ,$
which yields (20) as well as
$ϕ ( E ( x ) ) ≤ ϕ E ¯ ( x ) 1 − b E ¯ ( x ) = β ( E ¯ ( x ) ) .$
From this and (20), we obtain (56).  □
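The change of variable $t ↦ t / ( 1 − b t )$ used in Proposition 8 maps the transformed radius $R ¯ = R / ( 1 + b R )$ back to R, so the two control functions agree at the corresponding radii. A quick Python check with a hypothetical quasi-homogeneous control function of degree 3 (the choices of $β$, b and R are illustrative only):

```python
def make_phi(beta, b):
    # first-kind control function obtained from a second-kind one, as in Proposition 8 (i)
    return lambda t: beta(t / (1 - b * t))

b = 2.0
beta = lambda t: 3.0 * t ** 3           # hypothetical control function of exact degree 3
phi = make_phi(beta, b)

R = 0.1
R_bar = R / (1 + b * R)                 # the transformed radius from the proof
assert abs(phi(R_bar) - beta(R)) < 1e-12   # t -> t/(1 - b t) maps R_bar back to R
```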

## 3. Local Convergence of the First Kind of Kyurkchiev-Zheng-Sun’s Method

In this section, a new convergence theorem (Theorem 5) for Kyurkchiev–Zheng–Sun’s method (5) is provided under the first kind of initial conditions. Theorem 5 improves and complements the result of Kyurkchiev [8] (Theorem 1) in several directions. The first kind of initial conditions appears for the first time in 1962 in a well-known work of Dochev [16]. A classification of different kinds of initial conditions for simultaneous methods was given in [11].
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which has n simple zeros in $K$, and let $ξ ∈ K n$ be a root vector of f. In this section, the local convergence of Kyurkchiev–Zheng–Sun’s method (5) is studied with respect to the function of initial conditions $E : K n → R +$ defined by (21).
This section begins with two useful inequalities in $K n$ which play an important role in this paper.
Lemma 2
([14], Lemma 6.1). Let $x , ξ ∈ K n$ and $1 ≤ p ≤ ∞$. If ξ has pairwise distinct components, then for all $i ≠ j$ the following inequalities hold:
$| x i − ξ j | ≥ ( 1 − E ( x ) ) d i ( ξ ) and | x i − x j | ≥ ( 1 − b E ( x ) ) d j ( ξ ) ,$
where $E : K n → R +$ is defined by (21) and b is defined by (23).
Lemma 3
([14], Proposition 5.5). Let $u ∈ K n$ and $1 ≤ p ≤ ∞$. Then the following inequality holds:
$∏ j = 1 n ( 1 + u j ) − 1 ≤ 1 + ∥ u ∥ p n 1 / p n − 1 .$
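Lemma 3 can be checked numerically on sample data; the sketch below uses $p = 2$ and an arbitrary nonnegative vector u (illustrative values):

```python
import numpy as np

u = np.array([0.05, 0.02, 0.07, 0.01])   # illustrative vector
n, p = len(u), 2
lhs = abs(np.prod(1 + u) - 1)
rhs = (1 + np.linalg.norm(u, p) / n ** (1 / p)) ** n - 1   # bound of Lemma 3
assert lhs <= rhs
```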
Lemma 4.
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which has n simple zeros in $K$, $ξ ∈ K n$ be a root vector of f and $1 ≤ p ≤ ∞$. Then $T : D ⊂ K n → K n$ defined by (5) is an iteration function of first kind with control function $ϕ : [ 0 , τ ) → R +$ of exact degree $m = 3$ defined by
$ϕ ( t ) = a t 2 ω ( t ) ( 1 − b t ) 2 − a t 2 ω ( t ) w i t h ω ( t ) = 1 + a t ( n − 1 ) ( 1 − b t ) n − 1 − 1 + t 1 − t ,$
where a and b are defined by (23) and τ is the unique solution in the interval $( 0 , 1 / b )$ of the equation
$( 1 − b t ) 2 − a t 2 ω ( t ) = 0 .$
Proof.
First of all, note that $ϕ$ is a quasi-homogeneous function of exact degree $m = 3$. Let $x ∈ K n$ be such that $E ( x ) < τ$. According to Definition 9 we have to prove that
$x ∈ D and ∥ T ( x ) − ξ ∥ ⪯ ϕ ( E ( x ) ) ∥ x − ξ ∥ .$
First, we shall prove that $x ∈ D$. From the second part of Lemma 2, we conclude that x has pairwise distinct components. Hence, according to (7), we have to prove that $f ( x i ) ≠ 0$ implies
$A i ( x ) + B i ( x ) ≠ 0 .$
We shall prove that (61) follows from $x i ≠ ξ i$. We can represent $A i ( x )$ in the form (see Lemma 2.1 of [15])
$A i ( x ) = 1 + μ i x i − ξ i ,$
where
$μ i = ( x i − ξ i ) ∑ j = 1 , j ≠ i n ξ j − x j ( x i − ξ j ) ( x i − x j ) .$
We also have the following estimate (see Lemma 2.2 of [15]):
$| μ i | ≤ a E ( x ) 2 ( 1 − E ( x ) ) ( 1 − b E ( x ) ) .$
Furthermore, we can represent $B i ( x )$ in the form
$B i ( x ) = ∑ j = 1 , j ≠ i n x j − ξ j ( x i − x j ) 2 ∏ ν = 1 , ν ≠ j n x j − ξ ν x j − x ν = ν i x i − ξ i ,$
where $ν i$ is defined by
$ν i = ( x i − ξ i ) ∑ j = 1 , j ≠ i n x j − ξ j ( x i − x j ) 2 ∏ ν = 1 , ν ≠ j n x j − ξ ν x j − x ν .$
From (62) and (65), we have
$A i ( x ) + B i ( x ) = 1 + μ i + ν i x i − ξ i$
which means that (61) is equivalent to
$μ i + ν i ≠ − 1 .$
It follows from (63) and (66) that
$μ i + ν i = ( x i − ξ i ) ∑ j = 1 , j ≠ i n ξ j − x j ( x i − ξ j ) ( x i − x j ) + x j − ξ j ( x i − x j ) 2 ∏ ν = 1 , ν ≠ j n x j − ξ ν x j − x ν = ( x i − ξ i ) ∑ j = 1 , j ≠ i n x j − ξ j ( x i − x j ) 2 ∏ ν = 1 , ν ≠ j n x j − ξ ν x j − x ν − ξ i − x j x i − ξ j = ( x i − ξ i ) ∑ j = 1 , j ≠ i n ( x j − ξ j ) ω j ( x i − x j ) 2 ,$
where
$ω j = ∏ ν = 1 , ν ≠ j n x j − ξ ν x j − x ν − ξ i − x j x i − ξ j = ∏ ν = 1 , ν ≠ j n 1 + x ν − ξ ν x j − x ν − 1 + ξ j − x j x i − ξ j .$
From Lemma 2 and (21), we obtain
$x ν − ξ ν x j − x ν ≤ | x ν − ξ ν | ( 1 − b E ( x ) ) d ν ( ξ ) ≤ E ( x ) 1 − b E ( x )$
and
$ξ j − x j x i − ξ j ≤ | ξ j − x j | ( 1 − E ( x ) ) d j ( ξ ) ≤ E ( x ) 1 − E ( x ) .$
It follows from the equality (70), the triangle inequality in $K$, Lemma 3 and the estimates (71) and (72) that
$| ω j | ≤ ∏ ν = 1 , ν ≠ j n 1 + x ν − ξ ν x j − x ν − 1 + ξ j − x j x i − ξ j ≤ ω ( E ( x ) ) .$
Now it follows from (69), (71), (73) and $E ( x ) < τ$ that
$| μ i + ν i | ≤ | x i − ξ i | ∑ j = 1 , j ≠ i n | x j − ξ j | | ω j | | x i − x j | 2 ≤ a E ( x ) 2 ω ( E ( x ) ) ( 1 − b E ( x ) ) 2 < 1 ,$
which proves (68), and so $x ∈ D$.
Second, we shall prove the second part of (60). In other words, we have to prove that
$| T i ( x ) − ξ i | ≤ ϕ ( E ( x ) ) | x i − ξ i | for every i = 1 , 2 , … , n .$
If $x i = ξ i$, then (75) becomes an equality. Suppose $x i ≠ ξ i$. By (5), (62) and (65), we obtain
$T i ( x ) − ξ i = ( x i − ξ i ) A i + ( x i − ξ i ) B i − 1 A i + B i = μ i + ν i 1 + μ i + ν i ( x i − ξ i ) .$
From this, the triangle inequality in $K$ and the inequality (74), we get
$| T i ( x ) − ξ i | ≤ | μ i + ν i | 1 − | μ i + ν i | | x i − ξ i | ≤ ϕ ( E ( x ) ) | x i − ξ i | ,$
which proves (75).  □
Now we are ready to state and prove the main result of this section which generalizes, improves, and complements the above-mentioned result of Kyurkchiev [8] (Theorem 1).
Theorem 5.
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which has n simple zeros in $K$, and let $ξ ∈ K n$ be a root vector of f. Suppose $x ( 0 ) ∈ K n$ is an initial approximation satisfying the following condition
$E ( x ( 0 ) ) < 1 / b and Ψ ( E ( x ( 0 ) ) ) < 1 ,$
where the function E is defined by (21), the function $Ψ : [ 0 , 1 / b ) → R +$ is defined by
$Ψ ( t ) = 2 a t 2 ω ( t ) ( 1 − b t ) 2 with ω ( t ) = 1 + a t ( n − 1 ) ( 1 − b t ) n − 1 − 1 + t 1 − t ,$
where the constants a and b are defined by (23). Then Kyurkchiev–Zheng–Sun’s iteration (5) is well-defined and converges to ξ with Q-order $r = 4$ and with error estimates:
$∥ x ( k + 1 ) − ξ ∥ ⪯ λ 4 k ∥ x ( k ) − ξ ∥ and ∥ x ( k ) − ξ ∥ ⪯ λ ( 4 k − 1 ) / 3 ∥ x ( 0 ) − ξ ∥ for k ≥ 0 ,$
where $λ = ϕ ( E ( x ( 0 ) ) )$ and the function ϕ is defined by (58). Besides, the following estimate for the asymptotic error constant holds:
$lim sup k → ∞ ∥ x ( k + 1 ) − ξ ∥ ∥ x ( k ) − ξ ∥ 4 ≤ a ( a + 1 ) δ ( ξ ) 3 .$
Proof.
It follows from Theorem 3 and Lemma 4 that under the initial conditions
$E ( x ( 0 ) ) < τ and ϕ ( E ( x ( 0 ) ) ) < 1 ,$
the iteration (5) is well-defined and converges to $ξ$ with Q-order $r = 4$ and the estimates (80) and (81) are satisfied since
$lim t → 0 ϕ ( t ) / t 3 = a ( a + 1 ) .$
It is easy to show that ${ t ∈ [ 0 , τ ) : ϕ ( t ) < 1 } = { t ∈ [ 0 , 1 / b ) : Ψ ( t ) < 1 } .$ Therefore, the initial conditions (82) and (78) are equivalent. This completes the proof.  □
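The asymptotic behavior used in the proof, $lim t → 0 ϕ ( t ) / t 3 = a ( a + 1 )$, is easy to confirm numerically. The Python sketch below transcribes $ϕ$ and $ω$ from (58) and takes $p = ∞$, for which $a = n − 1$ and $b = 2$ (an assumption consistent with the p = ∞ case discussed after Corollary 2):

```python
def omega(t, n, a, b):
    # omega(t) from (58)
    return (1 + a * t / ((n - 1) * (1 - b * t))) ** (n - 1) - 1 + t / (1 - t)

def phi(t, n, a, b):
    # phi(t) from (58)
    w = omega(t, n, a, b)
    return a * t ** 2 * w / ((1 - b * t) ** 2 - a * t ** 2 * w)

n = 5
a, b = n - 1, 2                     # assumption: p = infinity, so a = n - 1 and b = 2
t = 1e-5
ratio = phi(t, n, a, b) / t ** 3
assert abs(ratio - a * (a + 1)) < 0.05 * a * (a + 1)   # close to a(a + 1) = 20
```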
Corollary 2.
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which has n simple zeros in $K$ and $ξ ∈ K n$ be a root vector of f. Suppose $x ( 0 ) ∈ K n$ is an initial approximation satisfying the following condition
$E ( x ( 0 ) ) ≤ R = min 2 n − 1 − 1 b ( 2 n − 1 − 1 ) + a / ( n − 1 ) , 1 b + 2 ( a + b ) / ( a + b − 1 ) ,$
where the function E is defined by (21). Then Kyurkchiev–Zheng–Sun’s iteration (5) is well-defined and converges to ξ with Q-order $r = 4$, with the estimate (81) for the asymptotic error constant and with error estimates (80).
Proof.
Let $Ψ$ be defined on $[ 0 , 1 / b )$ by (79). It is obvious that $R < 1 / b$. Therefore, according to Theorem 5, we have to prove that
$Ψ ( R ) < 1$
since $Ψ$ is strictly increasing on $[ 0 , 1 / b )$. We write the definition of R in the form
$R = min { u , v } ,$
where
$u = 1 b + a / C ( n ) and v = 1 b + a / F ( n , p ) ,$
$C ( n ) = ( n − 1 ) ( 2 n − 1 − 1 ) and F ( n , p ) = a ( a + b − 1 ) 2 ( a + b ) .$
It is easy to see that
$R = u if C ( n ) ≤ F ( n , p ) , v if C ( n ) > F ( n , p ) .$
For example, this formula yields $R = v < u$ for $n = 2$ and for $n = 3$. On the other hand, the obvious inequality $C ( n ) ≤ 1$ yields $u ≤ 1 / ( a + b )$. Note that this inequality becomes equality if and only if $n = 2$. Hence, for $n = 2$ we have $u = 1 / ( a + b )$ and $R < u$. Using the inequality $u ≤ 1 / ( a + b )$, we obtain
$ω ( R ) ≤ ω ( u ) = 1 + u 1 − u = 1 1 − u ≤ a + b a + b − 1 .$
The last inequality in (88) becomes an equality only if $n = 2$ but in this case the first inequality in (88) is strict since $R < u$. Therefore,
$ω ( R ) < a + b a + b − 1 .$
From this and $R ≤ v$, we obtain
$Ψ ( R ) = 2 a R 2 ω ( R ) ( 1 − b R ) 2 < A R 2 ( 1 − b R ) 2 ≤ A v 2 ( 1 − b v ) 2 = 1 ,$
where $A = 2 a ( a + b ) / ( a + b − 1 )$. Hence $Ψ ( R ) < 1$, which completes the proof.  □
Remark 6.
It follows from (87) that the initial condition of Corollary 2 can also be written in the form
$E ( x ( 0 ) ) ≤ R = 2 n − 1 − 1 b ( 2 n − 1 − 1 ) + a / ( n − 1 ) for C ( n ) ≤ F ( n , p ) , 1 b + 2 ( a + b ) / ( a + b − 1 ) for C ( n ) > F ( n , p ) ,$
where $C ( n )$ and $F ( n , p )$ are defined by (86).
Now we shall prove that
$a ≥ 1 + √ 3 ⇒ C ( n ) ≤ F ( n , p ) .$
Let $a ≥ 1 + √ 3$. We have to prove that $C ( n ) ≤ F ( n , p )$. In view of $C ( n ) ≤ 1$, it is sufficient to prove that $F ( n , p ) ≥ 1$. It is easy to show that the last inequality is equivalent to
$a ≥ ( 3 − b + √ ( b 2 + 2 b + 9 ) ) / 2 ,$
which holds because the maximum of the right side over $b ∈ [ 1 , 2 ]$ equals $1 + √ 3$. This completes the proof of (91).
If $p = ∞$ then the inequality $a ≥ 1 + √ 3$ holds for $n ≥ 4$. Therefore, it follows from (90) and (91) that in the case $p = ∞$, the initial condition of Corollary 2 takes the form
$E ( x ( 0 ) ) ≤ R = 1 2 + 3 ( n − 1 ) for 2 ≤ n ≤ 3 , 2 n − 1 − 1 2 2 n − 1 − 1 for n ≥ 4 .$
Corollary 3.
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which has n simple zeros, $ξ ∈ K n$ be a root vector of f, and let $0 < h < 1$ and $c > 0$ be such that
$c ≤ R δ ,$
where $R = R ( n , p )$ is defined by (90) and $δ = δ ( ξ )$. Suppose $x ( 0 ) ∈ K n$ is an initial approximation satisfying the following condition
$∥ x ( 0 ) − ξ ∥ p ≤ c h for some 1 ≤ p ≤ ∞ .$
Then Kyurkchiev-Zheng-Sun’s iteration (5) is well-defined and converges to ξ with Q-order $r = 4$, with the estimate (81) for the asymptotic error constant and with error estimates
$∥ x ( k + 1 ) − ξ ∥ ⪯ h 4 k ∥ x ( k ) − ξ ∥ and ∥ x ( k ) − ξ ∥ ⪯ h 4 k − 1 ∥ x ( 0 ) − ξ ∥ ,$
$∥ x ( k ) − ξ ∥ p ≤ c h 4 k for all k ≥ 0 .$
Proof.
Let the functions $Ψ$ and $ϕ$ be defined by (79) and (58), respectively. According to the definition of R and (84), we have $R < 1 / b$ and $Ψ ( R ) < 1$ which imply $R < τ$ and $ϕ ( R ) < 1$, where b is defined by (23) and $τ$ is the unique solution of the Equation (59) in the interval $( 0 , 1 / b )$. It follows from (93) and the monotonicity of $ϕ$ that the conditions (34) of Corollary 1 are satisfied with $J = [ 0 , τ )$. Therefore, the claims of Corollary 3 follow immediately from Corollary 1 and Lemma 4.  □
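The second estimate in the corollary follows from the first one by telescoping, since $1 + 4 + ⋯ + 4 k − 1 = ( 4 k − 1 ) / 3$. A short Python check (with the illustrative choice h = 1/2, so every factor is an exact power of two):

```python
h, K = 0.5, 5
e = 1.0                            # plays the role of ||x(0) - xi||
for k in range(K):
    e *= h ** (4 ** k)             # first estimate: contraction factor h^(4^k) at step k
# telescoping the factors gives h^((4^K - 1)/3), the exponent in the second estimate
assert e == h ** ((4 ** K - 1) // 3)
```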
Remark 7.
Corollary 3 improves and complements Theorem 1 in several directions.

## 4. Local Convergence of the Second Kind of Kyurkchiev–Zheng–Sun’s Method

In this section, a local convergence theorem (Theorem 6) for Kyurkchiev–Zheng–Sun’s method (5) is presented under the second kind of initial conditions. To the best of the author’s knowledge, Theorem 6 is the first result of this type about the iterative method (5). As is shown in [11], the second kind of initial conditions is closer to the semilocal initial conditions than the first kind. Note that the second kind of initial conditions were given for the first time in 1989 by Wang and Zhao [17] (see also [14]).
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which splits over $K$, and let $ξ ∈ K n$ be a root vector of f. In this section, the local convergence of Kyurkchiev–Zheng–Sun’s iteration (5) is studied with respect to the function of initial conditions $E : D → R +$ defined by (39).
This section begins with a technical lemma.
Lemma 5
([14], Lemma 7.1). Let $x , ξ ∈ K n$ and $1 ≤ p ≤ ∞$ $( n ≥ 2 )$. If x has pairwise distinct components, then for $i ≠ j$ we have
$| x i − ξ j | ≥ ( 1 − E ( x ) ) d i ( x ) and | x i − x j | ≥ d j ( x ) ,$
where $E : D → R +$ is defined by (39).
Lemma 6.
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which has n simple zeros in $K$, $ξ ∈ K n$ be a root vector of f and $1 ≤ p ≤ ∞$. Then $T : D ⊂ K n → K n$ defined by (5) is an iteration function of second kind with control function $β : [ 0 , τ ) → R +$ of exact degree $m = 3$ defined by
$β ( t ) = a t 2 ω ( t ) 1 − a t 2 ω ( t ) with ω ( t ) = 1 + a t n − 1 n − 1 − 1 + t 1 − t ,$
where a is defined by (23) and τ is the unique solution in the interval $( 0 , 1 )$ of the equation
$1 − a t 2 ω ( t ) = 0 .$
Proof.
First we note that $β$ is a quasi-homogeneous function of exact degree $m = 3$. Suppose $x ∈ D$ is such that $E ( x ) < τ$. According to Definition 10, we have to prove that
$x ∈ D and ∥ T ( x ) − ξ ∥ ⪯ β ( E ( x ) ) ∥ x − ξ ∥ .$
The proof of (99) is the same as the proof of the claim (60) of Lemma 4, except that we must use Lemma 5 instead of Lemma 2 and replace the function $ϕ$ by $β$. More precisely, the estimates (64), (71), and (74) have to be replaced by the following estimates:
$| μ i | ≤ a E ( x ) 2 1 − E ( x ) ,$
$x ν − ξ ν x j − x ν ≤ | x ν − ξ ν | d ν ( x ) ≤ E ( x ) ,$
$| μ i + ν i | ≤ | x i − ξ i | ∑ j = 1 , j ≠ i n | x j − ξ j | | ω j | | x i − x j | 2 ≤ a E ( x ) 2 ω ( E ( x ) ) < 1 ,$
respectively.  □
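The threshold τ of Lemma 6 is defined implicitly by (98) and can be computed by bisection, because $a t 2 ω ( t )$ is strictly increasing on $( 0 , 1 )$. A Python sketch with the illustrative choice $n = 4$ and $p = ∞$ (so that $a = n − 1 = 3$, an assumption); ω is transcribed from (97):

```python
def omega(t, n, a):
    # omega(t) from (97)
    return (1 + a * t / (n - 1)) ** (n - 1) - 1 + t / (1 - t)

def tau(n, a, tol=1e-12):
    """Bisection for the unique root of 1 - a t^2 omega(t) = 0 in (0, 1)."""
    lo, hi = 0.0, 1.0 - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if 1 - a * mid ** 2 * omega(mid, n, a) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n = 4
a = n - 1                          # assumption: p = infinity gives a = n - 1
t = tau(n, a)
assert abs(1 - a * t ** 2 * omega(t, n, a)) < 1e-8
```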
The next theorem is the main result of this section. To the best of the author’s knowledge, this theorem is the first convergence result about Kyurkchiev–Zheng–Sun’s method (5) under the second kind of initial conditions.
Theorem 6.
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which splits over $K$, $ξ ∈ K n$ be a root vector of f, and let $1 ≤ p ≤ ∞$. Suppose $x ( 0 ) ∈ K n$ is a vector with distinct components satisfying
$E ( x ( 0 ) ) < 1 and Φ ( E ( x ( 0 ) ) ) ≤ 1 ,$
where the function E is defined by (39), the function Φ is defined by
$Φ ( t ) = 2 a t 2 ω ( t ) + b t with ω ( t ) = 1 + a t n − 1 n − 1 − 1 + t 1 − t ,$
where the constants a and b are defined by (23). Then f has only simple zeros in $K$, and Kyurkchiev–Zheng–Sun’s iteration (5) is well-defined and converges to ξ with Q-order $r = 4$ and with error estimates
$∥ x ( k + 1 ) − ξ ∥ ⪯ θ λ 4 k ∥ x ( k ) − ξ ∥ and ∥ x ( k ) − ξ ∥ ⪯ θ k λ ( 4 k − 1 ) / 3 ∥ x ( 0 ) − ξ ∥ ,$
for all $k ≥ 0$, where $λ = ϕ ( E ( x ( 0 ) ) )$, $θ = ψ ( E ( x ( 0 ) ) )$, and the functions ϕ and ψ are defined by
$ψ ( t ) = 1 − a t 2 ω ( t ) − b t 1 − a t 2 ω ( t ) and ϕ ( t ) = a t 2 ω ( t ) 1 − a t 2 ω ( t ) − b t .$
Proof.
Let $τ$ be the unique solution in the interval $( 0 , 1 )$ of the Equation (98), and let the functions $β$, $Ψ$, $ψ$ and $ϕ$ be defined on $[ 0 , τ )$ as follows:
$β ( t ) = a t 2 ω ( t ) 1 − a t 2 ω ( t ) , Ψ ( t ) = 1 − b t − β ( t ) ( 1 + b t ) = 1 − Φ ( t ) 1 − a t 2 ω ( t ) ,$
$ψ ( t ) = 1 − b t ( 1 + β ( t ) ) = 1 − a t 2 ω ( t ) − b t 1 − a t 2 ω ( t ) , ϕ ( t ) = β ( t ) ψ ( t ) = a t 2 ω ( t ) 1 − a t 2 ω ( t ) − b t .$
We can see that the functions $ψ$ and $ϕ$ defined by (108) are the same as the functions defined by (106). It follows from Theorem 4 and Lemma 6 that all conclusions of the theorem are satisfied under the initial conditions
$E ( x ( 0 ) ) < τ and Ψ ( E ( x ( 0 ) ) ) ≥ 0 .$
It is easy to see that if $Φ ( t ) ≤ 1$ for some $t ∈ ( 0 , 1 )$, then $t < τ$ and $Ψ ( t ) ≥ 0$. Then it follows from (103) that the conditions (109) are satisfied, which completes the proof.  □
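The algebraic identities in (107) and (108) can be verified numerically at sample values of t (the parameters a = 3, b = 2, n = 4 below are illustrative choices, not tied to a particular polynomial):

```python
def check(t, a, b, n):
    w = (1 + a * t / (n - 1)) ** (n - 1) - 1 + t / (1 - t)   # omega(t) as in Theorem 6
    s = a * t ** 2 * w                                        # shorthand for a t^2 omega(t)
    beta = s / (1 - s)
    Phi = 2 * s + b * t                                       # Phi(t) = 2 a t^2 omega(t) + b t
    Psi = 1 - b * t - beta * (1 + b * t)
    psi = 1 - b * t * (1 + beta)
    assert abs(Psi - (1 - Phi) / (1 - s)) < 1e-12             # identity in (107)
    assert abs(psi - (1 - s - b * t) / (1 - s)) < 1e-12       # identity in (108)
    assert abs(beta / psi - s / (1 - s - b * t)) < 1e-12      # phi = beta / psi

check(0.05, a=3.0, b=2.0, n=4)
```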

## 5. Local Convergence of the First Kind of Iliev’s Method

In this section, a new local convergence theorem (Theorem 7) is proven for Iliev’s method (11) under initial conditions of the first kind. Theorem 7 improves and complements the result of Iliev [10] (Theorem 2) and generalizes, improves, and complements the result of Kyurkchiev [8] (Theorem 1).
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which splits over $K$ and $ξ 1 , … , ξ s$ be all distinct zeros of f with multiplicities $m 1 , … , m s$ ($m 1 + … + m s = n$), respectively.
In this section, the convergence of Iliev’s iteration (11) is studied with respect to the function of initial conditions $E : K s → R +$ defined by (21) with $ξ = ( ξ 1 , … , ξ s )$ for some $1 ≤ p ≤ ∞$.
For the sake of brevity, the quantities $a = a ( p , m 1 , … , m s )$ and $b = b ( p )$ are defined as follows:
$a = max 1 ≤ i ≤ s 1 m i ∑ j = 1 , j ≠ i s m j q 1 / q and b = 2 1 / q ,$
where $1 ≤ q ≤ ∞$ is defined by $1 / p + 1 / q = 1$.
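The quantities in (110) are straightforward to compute. A Python helper (hedged: the handling of the boundary cases p = 1 and p = ∞ is the natural one, with q = ∞ and q = 1, respectively):

```python
import numpy as np

def a_b(ms, p):
    """a and b from (110) for multiplicities ms, where 1/p + 1/q = 1."""
    if p == 1:
        q = np.inf
    elif p == np.inf:
        q = 1.0
    else:
        q = p / (p - 1)
    s = len(ms)
    a = max(np.linalg.norm([float(ms[j]) for j in range(s) if j != i], q) / ms[i]
            for i in range(s))
    b = 1.0 if q == np.inf else 2.0 ** (1.0 / q)
    return a, b

# four simple zeros and p = infinity: q = 1, so a = s - 1 = 3 and b = 2
a, b = a_b([1, 1, 1, 1], np.inf)
assert a == 3.0 and b == 2.0
```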
In this and the next section, the notations introduced at the beginning of Section 2 are also used, but in s-dimensional space $K s$.
In the next lemma, it is shown that T is an iteration function of first kind with control function $ϕ$. Let us introduce a function $ω : [ 0 , 1 / b ) → R +$ defined by
$ω ( t ) = 1 + a γ ( t ) n − 1 n − 1 − 1 + t 1 − t with γ ( t ) = max t 1 − b t , a t 2 ( 1 − t ) ( 1 − b t ) ,$
where a and b are defined by (110).
Lemma 7.
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which splits over $K$, $ξ 1 , … , ξ s$ be all pairwise distinct zeros of f with multiplicities $m 1 , … , m s$ ($m 1 + … + m s = n$) and $1 ≤ p ≤ ∞$. Then $T : D ⊂ K s → K s$ defined by (11) is an iteration function of first kind with control function $ϕ : [ 0 , τ ) → R +$ of exact degree $m = 3$ defined by
$ϕ ( t ) = a t 2 ω ( t ) ( 1 − b t ) 2 − a t 2 ω ( t ) ,$
where the function ω is defined by (111), a and b are defined by (110), and τ is the unique solution in $( 0 , 1 / b )$ of the equation
$( 1 − b t ) 2 − a t 2 ω ( t ) = 0 .$
Proof.
Let $x ∈ K s$ and $E ( x ) ∈ [ 0 , τ )$. According to Definition 9, we have to prove that
$x ∈ D and ∥ T ( x ) − ξ ∥ ⪯ ϕ ( E ( x ) ) ∥ x − ξ ∥ .$
First, we shall prove that $x ∈ D$. It follows from Lemma 2 and $E ( x ) < 1 / b$ that x has pairwise distinct components. Hence, we have to prove that $f ( x i ) ≠ 0$ implies
$A i ( x ) + B i ( x ) ≠ 0 .$
We shall prove that (115) follows from $x i ≠ ξ i$. We can represent $A i ( x )$ in the following form (see [15], Lemma 1):
$A i ( x ) = m i ( 1 + μ i ) x i − ξ i ,$
where
$μ i = x i − ξ i m i ∑ j = 1 , j ≠ i s m j ( ξ j − x j ) ( x i − ξ j ) ( x i − x j ) .$
We define $ν i ∈ K$ by
$ν i = x i − ξ i m i B i ( x ) .$
From (116) and (118), we have
$A i ( x ) + B i ( x ) = m i ( 1 + μ i + ν i ) x i − ξ i ,$
which means that (115) is equivalent to
$μ i + ν i ≠ − 1 .$
Furthermore, using (3) and (116), we can represent $B i ( x )$ in the form
$B i ( x ) = ∑ j = 1 , j ≠ i s m j ( x j − ξ j ) m j ( x i − x j ) 2 A j ( x ) m j m j − 1 ∏ ν = 1 , ν ≠ j s x j − ξ ν x j − x ν m ν = ∑ j = 1 , j ≠ i s m j ( x j − ξ j ) m j ( x i − x j ) 2 1 + μ j x j − ξ j m j − 1 ∏ ν = 1 , ν ≠ j s x j − ξ ν x j − x ν m ν = ∑ j = 1 , j ≠ i s m j ( x j − ξ j ) ( x i − x j ) 2 1 + μ j m j − 1 ∏ ν = 1 , ν ≠ j s x j − ξ ν x j − x ν m ν .$
From (117), (118) and (121), we obtain
$μ i + ν i = x i − ξ i m i ∑ j = 1 , j ≠ i s m j ( ξ j − x j ) ( x i − ξ j ) ( x i − x j ) + x i − ξ i m i B i ( x ) = x i − ξ i m i ∑ j = 1 , j ≠ i s m j ( ξ j − x j ) ( x i − ξ j ) ( x i − x j ) + m j ( x j − ξ j ) ( x i − x j ) 2 1 + μ j m j − 1 ∏ ν = 1 , ν ≠ j s x j − ξ ν x j − x ν m ν$
Therefore, $μ i + ν i$ can be written in the form
$μ i + ν i = x i − ξ i m i ∑ j = 1 , j ≠ i s m j ( x j − ξ j ) ( x i − x j ) 2 x j − x i x i − ξ j + 1 + μ j m j − 1 ∏ ν = 1 , ν ≠ j s x j − ξ ν x j − x ν m ν = x i − ξ i m i ∑ j = 1 , j ≠ i s m j ( x j − ξ j ) ω j ( x i − x j ) 2 ,$
where
$ω j = x j − x i x i − ξ j + 1 + μ j m j − 1 ∏ ν = 1 , ν ≠ j s x j − ξ ν x j − x ν m ν = x j − ξ j x i − ξ j + 1 + μ j m j − 1 ∏ ν = 1 , ν ≠ j s 1 + x ν − ξ ν x j − x ν m ν − 1 .$
Let us present $ω j$ in the form
$ω j = A j + B j − 1 ,$
where
$A j = x j − ξ j x i − ξ j , B j = 1 + μ j m j − 1 ∏ ν = 1 , ν ≠ j s 1 + u ν m ν and u ν = x ν − ξ ν x j − x ν .$
From Lemma 2 and (21), we obtain
$| A j | = ξ j − x j x i − ξ j ≤ | ξ j − x j | ( 1 − E ( x ) ) d j ( ξ ) ≤ E ( x ) 1 − E ( x ) ,$
$| u ν | = x ν − ξ ν x j − x ν ≤ | x ν − ξ ν | ( 1 − b E ( x ) ) d ν ( ξ ) ≤ E ( x ) 1 − b E ( x ) ,$
$| μ j | ≤ | x j − ξ j | m j ∑ ν = 1 , ν ≠ j s m ν | ξ ν − x ν | | x j − ξ ν | | x j − x ν | ≤ a E ( x ) 2 ( 1 − E ( x ) ) ( 1 − b E ( x ) ) .$
The expression $B j$ consists of $m j − 1$ factors equal to $1 + μ j$ and $∑ ν = 1 , ν ≠ j s m ν$ factors of type $1 + u ν$. So the number of factors of $B j$ is equal to
$m j − 1 + ∑ ν = 1 , ν ≠ j s m ν = − 1 + ∑ ν = 1 s m ν = n − 1 .$
Consequently, $B j$ can be represented in the form
$B j = ∏ k = 1 n − 1 1 + v k ,$
where $v k$ equals $μ j$ or $u ν$. Hence, it follows from (127) and (128) that
$| v k | ≤ max a E ( x ) 2 ( 1 − E ( x ) ) ( 1 − b E ( x ) ) , E ( x ) 1 − b E ( x ) = γ ( E ( x ) ) ,$
where the function $γ$ is defined in (111). It follows from (129) and (130) and Lemma 3 that
$| B j − 1 | = ∏ k = 1 n − 1 1 + v k − 1 ≤ 1 + γ ( E ( x ) ) ( n − 1 ) 1 / p n − 1 − 1 .$
It follows from the equality (124), the triangle inequality in $K$ and the estimates (131) that
$| ω j | ≤ | A j | + | B j − 1 | ≤ E ( x ) 1 − E ( x ) + 1 + γ ( E ( x ) ) ( n − 1 ) 1 / p n − 1 − 1 = ω ( E ( x ) ) ,$
where the function $ω$ is defined in (111). Now it follows from (122) and Lemma 2 that
$| μ i + ν i | ≤ | x i − ξ i | ∑ j = 1 , j ≠ i s | x j − ξ j | | ω j | | x i − x j | 2 ≤ a E ( x ) 2 ω ( E ( x ) ) ( 1 − b E ( x ) ) 2 < 1 .$
From this and the definition of $τ$, we conclude that $| μ i + ν i | < 1$, which implies (120), and so $x ∈ D$.
Second, we prove the inequality in (114). In other words, we have to prove that
$| T i ( x ) − ξ i | ≤ ϕ ( E ( x ) ) | x i − ξ i | for every i = 1 , … , s .$
If $x i = ξ i$, then (134) becomes an equality. Suppose $x i ≠ ξ i$. By (11), (116) and (118), we obtain
$T i ( x ) − ξ i = x i − ξ i − m i A i + B i = ( x i − ξ i ) A i + ( x i − ξ i ) B i − m i A i + B i = μ i + ν i 1 + μ i + ν i ( x i − ξ i ) .$
From this, the triangle inequality in $K$ and the inequality (133), we get
$| T i ( x ) − ξ i | ≤ | μ i + ν i | 1 − | μ i + ν i | | x i − ξ i | ≤ ϕ ( E ( x ) ) | x i − ξ i | ,$
which proves (134). This completes the proof.  □
Next, the main result of this section is presented. It improves and complements the result of Iliev [10] and generalizes, improves, and complements Kyurkchiev’s result [8].
Theorem 7.
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which splits over $K$, $ξ 1 , … , ξ s$ be all distinct zeros of f with multiplicities $m 1 , … , m s$$( m 1 + … + m s = n )$, and $1 ≤ p ≤ ∞$. Let $x ( 0 ) ∈ K s$ be an initial approximation satisfying
$E ( x ( 0 ) ) < 1 / b and Ψ ( E ( x ( 0 ) ) ) < 1 ,$
where the function E is defined by (21) with $ξ = ( ξ 1 , … , ξ s )$, and the function Ψ is defined by
$Ψ ( t ) = 2 a t 2 ω ( t ) ( 1 − b t ) 2 ,$
where ω is defined by (111), and a and b are defined by (110). Then Iliev’s iteration (11) is well-defined and converges to ξ with Q-order $r = 4$ and with the following error estimates:
$∥ x ( k + 1 ) − ξ ∥ ⪯ λ 4 k ∥ x ( k ) − ξ ∥ and ∥ x ( k ) − ξ ∥ ⪯ λ ( 4 k − 1 ) / 3 ∥ x ( 0 ) − ξ ∥ for k ≥ 0 ,$
where $λ = ϕ ( E ( x ( 0 ) ) )$ and the function ϕ is defined by (112). Besides, we have the following estimate for the asymptotic error constant:
$lim sup k → ∞ ∥ x ( k + 1 ) − ξ ∥ ∥ x ( k ) − ξ ∥ 4 ≤ a ( a + 1 ) δ ( ξ ) 3 , where a = max 1 ≤ i ≤ s 1 m i ∑ j = 1 , j ≠ i s m j q 1 / q .$
Proof.
First we note that $lim t → 0 ϕ ( t ) / t 3 = a ( a + 1 )$. It follows from Theorem 3 and Lemma 7 that under the initial conditions
$E ( x ( 0 ) ) < τ and ϕ ( E ( x ( 0 ) ) ) < 1 ,$
the iteration (11) is well-defined and converges to $ξ$ with Q-order $r = 4$, and the estimates (137) and (138) hold true. It is easy to prove that $t ∈ [ 0 , 1 / b )$ and $Ψ ( t ) < 1$ imply $t ∈ [ 0 , τ )$ and $ϕ ( t ) < 1$. Therefore, the initial conditions (135) imply the conditions (139). This completes the proof.  □
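As in Section 3, the limit $lim t → 0 ϕ ( t ) / t 3 = a ( a + 1 )$ noted in the proof can be confirmed numerically; γ, ω and ϕ below are transcribed from (111) and (112), and the values a = 3, b = 2 correspond to the illustrative choice of multiplicities (2, 1, 1) with $p = ∞$ (an assumed special case):

```python
def gamma(t, a, b):
    # gamma(t) from (111)
    return max(t / (1 - b * t), a * t ** 2 / ((1 - t) * (1 - b * t)))

def phi(t, n, a, b):
    # phi(t) from (112), with omega(t) from (111)
    w = (1 + a * gamma(t, a, b) / (n - 1)) ** (n - 1) - 1 + t / (1 - t)
    return a * t ** 2 * w / ((1 - b * t) ** 2 - a * t ** 2 * w)

n, a, b = 4, 3.0, 2.0              # multiplicities (2, 1, 1), p = infinity
t = 1e-5
assert abs(phi(t, n, a, b) / t ** 3 - a * (a + 1)) < 0.05 * a * (a + 1)
```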
Corollary 4.
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which splits over $K$, $ξ 1 , … , ξ s$ be all distinct zeros of f with multiplicities $m 1 , … , m s$$( m 1 + … + m s = n )$ and $1 ≤ p ≤ ∞$. Let $0 < h < 1$ and $c > 0$ be such that
$c / δ < 1 / b and Ψ ( c / δ ) ≤ 1 ,$
where Ψ is defined by (136), $δ = δ ( ξ )$ and b is defined by (110). Suppose $x ( 0 ) ∈ K s$ is an initial approximation satisfying the following condition
$∥ x ( 0 ) − ξ ∥ p ≤ c h ,$
where $ξ = ( ξ 1 , … , ξ s )$. Then Iliev’s iteration (11) is well-defined and converges to ξ with Q-order $r = 4$, with the estimate (138) for the asymptotic error constant and with the following error estimates:
$∥ x ( k + 1 ) − ξ ∥ ⪯ h 4 k ∥ x ( k ) − ξ ∥ and ∥ x ( k ) − ξ ∥ ⪯ h 4 k − 1 ∥ x ( 0 ) − ξ ∥ ,$
$∥ x ( k ) − ξ ∥ p ≤ c h 4 k for all k ≥ 0 .$
Proof.
It is easy to show that (140) implies (34) with $J = [ 0 , τ )$ and $ϕ$ defined by (112). Then the claims of the corollary follow immediately from Corollary 1 and Lemma 7.  □
Remark 8.
Corollary 4 improves and complements Theorem 2 in several directions.
Corollary 5.
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which splits over $K$, $ξ 1 , … , ξ s$ be all distinct zeros of f with multiplicities $m 1 , … , m s$$( m 1 + … + m s = n )$, and $1 ≤ p ≤ ∞$. Suppose $x ( 0 ) ∈ K s$ is an initial approximation satisfying the following condition
$E ( x ( 0 ) ) < 1 / ( a + 1 ) and Ψ ( E ( x ( 0 ) ) ) < 1 ,$
where $ξ = ( ξ 1 , … , ξ s )$, the function E is defined by (21), and the function Ψ is defined by
$Ψ ( t ) = 2 a t 2 ω ( t ) ( 1 − b t ) 2 with ω ( t ) = 1 + a t ( n − 1 ) ( 1 − b t ) n − 1 − 1 + t 1 − t .$
Then Iliev’s iteration (11) is well-defined and converges to ξ with Q-order $r = 4$ with the estimate (138) for the asymptotic error constant and with the following error estimates:
$∥ x ( k + 1 ) − ξ ∥ ⪯ λ 4 k ∥ x ( k ) − ξ ∥ and ∥ x ( k ) − ξ ∥ ⪯ λ ( 4 k − 1 ) / 3 ∥ x ( 0 ) − ξ ∥ for k ≥ 0 ,$
where $λ = ϕ ( E ( x ( 0 ) ) )$ and the function ϕ is defined by (112).
Proof.
Let $t ∈ [ 0 , 1 / ( a + 1 ) )$. Then we have $t < 1 / b$ since $b ≤ a + 1$. On the other hand, the function $γ$ defined by (111) takes the form $γ ( t ) = t / ( 1 − b t )$, which means that the function $Ψ$ defined by (136) coincides with the function $Ψ$ defined by (145) on the interval $[ 0 , 1 / ( a + 1 ) )$. These considerations show that the conditions (144) imply (135). Hence, Corollary 5 is a consequence of Theorem 7.  □
Remark 9.
Corollary 5 generalizes, improves and complements Theorem 1 in several directions.

## 6. Local Convergence of the Second Kind of Iliev’s Method

In this section, a local convergence result (Theorem 8) is presented for Iliev’s method (11) under the initial conditions of the second kind. To the best of the author’s knowledge, Theorem 8 is the first result in the literature of this type about the iterative method (11).
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which splits over $K$ and let $ξ 1 , … , ξ s$ be all distinct zeros of f with multiplicities $m 1 , … , m s$, respectively.
The local convergence of the Iliev iteration (11) is studied with respect to the function of initial conditions $E : D ⊂ K s → R +$ defined by (39).
Let us define a function $β : [ 0 , τ ) → R +$ as follows:
$\beta(t) = \dfrac{a t^2 \omega(t)}{1 - a t^2 \omega(t)},$
where a is defined by (110), the function $ω$ is defined by
$\omega(t) = \left( 1 + \dfrac{a\,\gamma(t)}{n-1} \right)^{n-1} - 1 + \dfrac{t}{1-t} \quad \text{with} \quad \gamma(t) = \max\left\{ t,\; \dfrac{a t^2}{1-t} \right\},$
and $τ$ is the unique solution in the interval $( 0 , 1 )$ of the equation
$1 - a t^2 \omega(t) = 0.$
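The radius τ and the control function β are easy to evaluate numerically. The sketch below locates τ by bisection on $( 0 , 1 )$ and checks that β behaves like a cubic at the origin, in line with the exact degree $m = 3$ asserted in Lemma 8 (under the formulas above, $ω ( t ) ≈ ( a + 1 ) t$ for small t, so $β ( t ) / t 3 → a ( a + 1 )$). The values of `a` and `n` are illustrative stand-ins for the constants fixed by (110):

```python
# tau is the unique zero of 1 - a*t^2*omega(t) in (0, 1); beta is the
# control function (147). a and n are illustrative placeholders here.
a, n = 2.0, 4

def gamma(t):
    # gamma(t) = max{t, a*t^2/(1-t)}, as in (148)
    return max(t, a * t ** 2 / (1 - t))

def omega(t):
    # omega(t) = (1 + a*gamma(t)/(n-1))^(n-1) - 1 + t/(1-t), as in (148)
    return (1 + a * gamma(t) / (n - 1)) ** (n - 1) - 1 + t / (1 - t)

def beta(t):
    # beta(t) = a*t^2*omega(t) / (1 - a*t^2*omega(t)), as in (147)
    return a * t ** 2 * omega(t) / (1 - a * t ** 2 * omega(t))

def find_tau(lo=0.0, hi=0.999999):
    # Bisection on f(t) = 1 - a*t^2*omega(t): f(0) = 1 > 0 and f(t) -> -inf
    # as t -> 1, so the sign change brackets the unique root tau of (149).
    f = lambda t: 1 - a * t ** 2 * omega(t)
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

tau = find_tau()
```

For the placeholder values above, τ falls between 0.4 and 0.45, and $β ( t ) / t 3$ approaches $a ( a + 1 ) = 6$ as t → 0.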
In the next lemma, we prove that the iteration function T defined by (11) is of second kind with control function $β$ defined by (147).
Lemma 8.
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which splits over $K$, $ξ 1 , … , ξ s$ be all distinct zeros of f with multiplicities $m 1 , … , m s$$( m 1 + … + m s = n )$ and $1 ≤ p ≤ ∞$. Then $T : D ⊂ K n → K n$ defined by (11) is an iteration function of second kind with control function $β : [ 0 , τ ) → R +$ of exact degree $m = 3$ defined by (147), where τ is the unique solution of the Equation (149) in $( 0 , 1 )$.
Proof.
Suppose $x ∈ K s$ is a vector with distinct components such that $E ( x ) < τ$. According to Definition 10, we have to prove that
$x \in D \quad \text{and} \quad \| T(x) - \xi \| \preceq \beta(E(x))\, \| x - \xi \|.$
The proof of (150) is the same as the proof of the claim (114) of Lemma 7, except that Lemma 5 is used instead of Lemma 2 and the function $ϕ$ is replaced by $β$. More precisely, the estimates (127), (128), (130) and (133) have to be replaced by the following ones:
$|u_\nu| = \left| \dfrac{x_\nu - \xi_\nu}{x_j - x_\nu} \right| \le \dfrac{|x_\nu - \xi_\nu|}{d_\nu(x)} \le E(x),$
$|\mu_j| = \dfrac{|x_j - \xi_j|}{m_j} \left| \sum_{\nu = 1,\, \nu \ne j}^{s} \dfrac{m_\nu (\xi_\nu - x_\nu)}{(x_j - \xi_\nu)(x_j - x_\nu)} \right| \le \dfrac{a\, E(x)^2}{1 - E(x)},$
$|v_k| \le \max\left\{ \dfrac{a\, E(x)^2}{1 - E(x)},\; E(x) \right\} = \gamma(E(x)),$
$|\mu_i + \nu_i| \le |x_i - \xi_i| \sum_{j = 1,\, j \ne i}^{s} \dfrac{|x_j - \xi_j|\,|\omega_i|}{|x_j - x_i|^2} \le a\, E(x)^2\, \omega(E(x)) < 1,$
respectively.  □
We can now state the main result of this section. For brevity, the following function is defined:
$\Phi(t) = 2 a t^2 \omega(t) + b t,$
where the constants a and b are defined by (110), and the function $ω$ is defined by (148). According to Theorem 4, using the function $β$ defined by (147), we have to define the functions $Ψ$, $ψ$ and $ϕ$ by
$\Psi(t) = 1 - b t - \beta(t)(1 + b t) = \dfrac{1 - \Phi(t)}{1 - a t^2 \omega(t)},$
$\psi(t) = 1 - b t\,(1 + \beta(t)) = \dfrac{1 - a t^2 \omega(t) - b t}{1 - a t^2 \omega(t)} \quad \text{and} \quad \phi(t) = \dfrac{\beta(t)}{\psi(t)} = \dfrac{a t^2 \omega(t)}{1 - \Phi(t) + a t^2 \omega(t)},$
where a and b are defined by (110), the function $ω$ is defined by (148), and the function $Φ$ is defined by (155).
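Both closed forms follow by direct algebra from the defining expressions $β ( t ) = a t 2 ω ( t ) / ( 1 − a t 2 ω ( t ) )$, $Ψ ( t ) = 1 − b t − β ( t ) ( 1 + b t )$ and $ψ ( t ) = 1 − b t ( 1 + β ( t ) )$, and this can be double-checked numerically. A short sketch, with illustrative values of `a`, `b`, `n` standing in for (110):

```python
# Verify numerically that the defining expressions of Psi, psi and phi
# agree with their closed forms in (156)-(157).
# a, b, n are illustrative placeholders for the constants from (110).
a, b, n = 2.0, 3.0, 4

def gamma(t):
    return max(t, a * t ** 2 / (1 - t))

def omega(t):
    # omega from (148)
    return (1 + a * gamma(t) / (n - 1)) ** (n - 1) - 1 + t / (1 - t)

def beta(t):
    # control function (147)
    return a * t ** 2 * omega(t) / (1 - a * t ** 2 * omega(t))

def Phi(t):
    # Phi from (155)
    return 2 * a * t ** 2 * omega(t) + b * t

def max_identity_gap(ts):
    gap = 0.0
    for t in ts:
        B = a * t ** 2 * omega(t)
        # Psi: defining form vs closed form (1 - Phi)/(1 - a t^2 omega)
        Psi_def = 1 - b * t - beta(t) * (1 + b * t)
        Psi_closed = (1 - Phi(t)) / (1 - B)
        # psi and phi = beta/psi vs their closed forms; note that
        # 1 - Phi(t) + a t^2 omega(t) = 1 - a t^2 omega(t) - b t
        psi_def = 1 - b * t * (1 + beta(t))
        psi_closed = (1 - B - b * t) / (1 - B)
        phi_def = beta(t) / psi_def
        phi_closed = a * t ** 2 * omega(t) / (1 - B - b * t)
        gap = max(gap, abs(Psi_def - Psi_closed),
                  abs(psi_def - psi_closed), abs(phi_def - phi_closed))
    return gap
```

For any sample of admissible t, the gap is at the level of floating-point roundoff, confirming the algebraic identities behind (156) and (157).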
In the following theorem, a convergence result for Iliev’s method (11) is obtained under the second kind of initial conditions.
Theorem 8.
Let $f ∈ K [ z ]$ be a polynomial of degree $n ≥ 2$ which splits over $K$, $ξ 1 , … , ξ s$ be all distinct zeros of f with multiplicities $m 1 , … , m s$$( m 1 + … + m s = n )$ and $1 ≤ p ≤ ∞$. Suppose $x ( 0 ) ∈ K s$ is a vector with distinct components satisfying
$E(x^{(0)}) < 1 \quad \text{and} \quad \Phi(E(x^{(0)})) \le 1,$
where the function E is defined by (39) with $ξ = ( ξ 1 , … , ξ s )$ and the function Φ is defined by (155). Then Iliev’s iteration (11) is well-defined and converges to ξ with Q-order $r = 4$, with the estimate (81) for the asymptotic error constant and with the following error estimates:
$\| x^{(k+1)} - \xi \| \preceq \theta\, \lambda^{4^k} \| x^{(k)} - \xi \| \quad \text{and} \quad \| x^{(k)} - \xi \| \preceq \theta^{k} \lambda^{(4^k - 1)/3} \| x^{(0)} - \xi \| \quad \text{for } k \ge 0,$
where $λ = ϕ ( E ( x ( 0 ) ) )$, $θ = ψ ( E ( x ( 0 ) ) )$ and the functions ϕ and ψ are defined by (157).
Proof.
It is easy to see that $\{ t \in [0, 1) : \Phi(t) \le 1 \} = \{ t \in [0, \tau) : \Psi(t) \ge 0 \}$, where $τ$ is the unique solution of the Equation (149) in the interval $( 0 , 1 )$, and the function $Ψ$ is defined by (156). From this, we deduce that the initial condition (158) is equivalent to the following one:
$E(x^{(0)}) < \tau \quad \text{and} \quad \Psi(E(x^{(0)})) \ge 0.$
Then Theorem 8 follows immediately from Theorem 4 and Lemma 8.  □
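The a priori estimate in (159) shows why the method is so fast once it starts: the exponent $( 4 k − 1 ) / 3$ grows geometrically with k. A small numerical illustration with placeholder values λ = 0.5 and θ = 0.9 (in Theorem 8 both are determined by $E ( x ( 0 ) )$ through (157)):

```python
# A priori bound of Theorem 8:
#   ||x^(k) - xi|| <= theta^k * lam^((4^k - 1)/3) * ||x^(0) - xi||.
# lam and theta are placeholders; the theorem computes them from E(x^(0)).
lam, theta = 0.5, 0.9

def apriori_factor(k):
    # Contraction factor multiplying the initial error after k steps.
    return theta ** k * lam ** ((4 ** k - 1) / 3)

factors = [apriori_factor(k) for k in range(4)]
# Quartic collapse: roughly 1.0, 0.45, 2.5e-2, 3.5e-7
```

Three iterations already shrink the guaranteed error bound by more than six orders of magnitude, which is the practical meaning of Q-order $r = 4$.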

## 7. Conclusions

In this paper, two large classes of iteration functions have been introduced and studied in n-dimensional vector spaces $K n$ over a valued field $K$. They are called iteration functions of the first and second kind at a fixed point of the corresponding iteration function. Two general local convergence theorems were established for Picard-type iterative methods with high Q-order of convergence. In particular, it was shown that if an iterative method is generated by an iteration function of the first or second kind, then it is Q-convergent under each initial approximation that is sufficiently close to the fixed point.
As an application, a detailed local convergence analysis was provided for two fourth-order iterative methods for finding all zeros of a polynomial simultaneously. Convergence results were presented under two kinds of initial approximations. The convergence results under the first kind of initial approximations improve and complement the previous ones due to Kyurkchiev [8] and Iliev [10]. Some of the advantages of these results are as follows: Q-convergence of the methods, larger convergence domains, sharper a priori error estimates, a posteriori error estimates, as well as upper bounds for the asymptotic error constants.

## Funding

This research was supported by the National Science Fund of the Bulgarian Ministry of Education and Science under Grant DN 12/12.

## Conflicts of Interest

The author declares no conflict of interest.

## References

1. Weierstrass, K. Neuer Beweis des Satzes, dass jede ganze rationale Function einer Veränderlichen dargestellt werden kann als ein Product aus linearen Functionen derselben Veränderlichen. Sitzungsber. Königl. Akad. Wiss. Berlin 1891, II, 1085–1101.
2. Proinov, P.D.; Petkova, M.D. A new semilocal convergence theorem for the Weierstrass method for finding zeros of a polynomial simultaneously. J. Complex. 2014, 30, 366–380.
3. Lang, S. Algebra, 3rd ed.; Graduate Texts in Mathematics; Springer: New York, NY, USA, 2002; Volume 211.
4. Ehrlich, L. A modified Newton method for polynomials. Commun. ACM 1967, 10, 107–108.
5. Aberth, O. Iteration methods for finding all zeros of a polynomial simultaneously. Math. Comput. 1973, 27, 339–344.
6. Börsch-Supan, W. Residuenabschätzung für Polynom-Nullstellen mittels Lagrange Interpolation. Numer. Math. 1970, 14, 287–296.
7. Jay, L.O. A note on Q-order of convergence. BIT 2001, 41, 422–429.
8. Kyurkchiev, N.V. Some modifications of L. Ehrlich’s method for the approximate solution of algebraic equations. Pliska Stud. Math. 1983, 5, 43–50.
9. Zheng, S.; Sun, F. Some simultaneous iterations for finding all zeros of a polynomial with high order convergence. Appl. Math. Comput. 1999, 99, 233–240.
10. Iliev, A.I. Generalization of Ehrlich-Kjurkchiev method for multiple roots of algebraic equations. Serdica Math. J. 1998, 24, 215–224.
11. Proinov, P.D. Relationships between different types of initial conditions for simultaneous root finding methods. Appl. Math. Lett. 2016, 52, 102–111.
12. Proinov, P.D. Unified convergence analysis for Picard iteration in n-dimensional vector spaces. Calcolo 2018, 55, 6.
13. Proinov, P.D. New general convergence theory for iterative processes and its applications to Newton–Kantorovich type theorems. J. Complex. 2010, 26, 3–42.
14. Proinov, P.D. General convergence theorems for iterative processes and applications to the Weierstrass root-finding method. J. Complex. 2016, 33, 118–144.
15. Proinov, P.D. On the local convergence of Ehrlich method for numerical computation of polynomial zeros. Calcolo 2016, 52, 413–426.
16. Dochev, K. A variant of Newton’s method for the simultaneous approximation of all roots of an algebraic equation. Phys. Math. J. Bulg. Acad. Sci. 1962, 5, 136–139.
17. Wang, D.R.; Zhao, F.G. Complexity analysis of a process for simultaneously obtaining all zeros of polynomials. Computing 1989, 43, 187–197.
 Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
