Article

Application of a Generalized Secant Method to Nonlinear Equations with Complex Roots

Computer Science Department, Technion—Israel Institute of Technology, Haifa 32000, Israel
Axioms 2021, 10(3), 169; https://doi.org/10.3390/axioms10030169
Submission received: 7 June 2021 / Revised: 22 July 2021 / Accepted: 22 July 2021 / Published: 29 July 2021
(This article belongs to the Special Issue Modern Problems of Mathematical Physics and Their Applications)

Abstract

The secant method is a very effective numerical procedure used for solving nonlinear equations of the form f(x) = 0. In a recent work (A. Sidi, Generalization of the secant method for nonlinear equations. Appl. Math. E-Notes, 8:115–123, 2008), we presented a generalization of the secant method that uses only one evaluation of f(x) per iteration, and we provided a local convergence theory for it that concerns real roots. For each integer k, this method generates a sequence {x_n} of approximations to a real root of f(x), where, for n ≥ k, x_{n+1} = x_n − f(x_n)/p'_{n,k}(x_n), p_{n,k}(x) being the polynomial of degree k that interpolates f(x) at x_n, x_{n−1}, …, x_{n−k}, the order s_k of this method satisfying 1 < s_k < 2. Clearly, when k = 1, this method reduces to the secant method with s_1 = (1 + √5)/2. In addition, s_1 < s_2 < s_3 < ⋯, such that lim_{k→∞} s_k = 2. In this note, we study the application of this method to simple complex roots of a function f(z). We show that the local convergence theory developed for real roots can be extended almost as is to complex roots, provided suitable assumptions and justifications are made. We illustrate the theory with two numerical examples.

1. Introduction

Let α be the solution to the equation f(x) = 0. An effective iterative method for solving this equation that makes direct use of f(x) (but of no derivatives of f(x)) is the secant method, which is discussed in many books on numerical analysis. See, for example, Atkinson [1], Dahlquist and Björck [2], Henrici [3], Ralston and Rabinowitz [4], and Stoer and Bulirsch [5]. See also the recent note [6] by the author, in which the treatments of the secant, Newton–Raphson, regula falsi, and Steffensen methods are presented in a unified manner.
Recently, this method was generalized by the author in [7] as follows: Starting with x_0, x_1, …, x_k, k + 1 initial approximations to α, we generate a sequence of approximations {x_n} via the recursion
x_{n+1} = x_n − f(x_n)/p'_{n,k}(x_n),   n = k, k+1, …,
p'_{n,k}(x) being the derivative of the polynomial p_{n,k}(x) that interpolates f(x) at the points x_n, x_{n−1}, …, x_{n−k}. (Thus, p_{n,k}(x) is of degree k.) Clearly, the case k = 1 is simply the secant method. In [7], we also showed that, provided x_0, x_1, …, x_k are sufficiently close to α, the method converges with order s_k, that is, lim_{n→∞} |x_{n+1} − α| / |x_n − α|^{s_k} = C ≠ 0 for some constant C, and that 1 < s_k < 2. (We call s_k the order of convergence of the method, or the order of the method for short.) Here s_k is the only positive root of the polynomial s^{k+1} − \sum_{i=0}^{k} s^i. We also have that
(1 + √5)/2 = s_1 < s_2 < s_3 < ⋯ < 2;   lim_{k→∞} s_k = 2.
Actually, rounded to four significant figures,
s_1 ≈ 1.618,  s_2 ≈ 1.839,  s_3 ≈ 1.928,  s_4 ≈ 1.966,  s_5 ≈ 1.984,  s_6 ≈ 1.992,  s_7 ≈ 1.996,  etc.
Note that to compute x_{n+1} we need knowledge of only f(x_n), f(x_{n−1}), …, f(x_{n−k}), and because f(x_{n−1}), …, f(x_{n−k}) have already been computed, f(x_n) is the only new quantity to be computed. Thus, each step of the method requires only one evaluation of f(x). From this, it follows that the efficiency index of this method is simply s_k and that this index approaches 2 even for moderately large k, as can be concluded from the values of s_1, …, s_7 given above.
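The values of s_k quoted above are easy to reproduce. The following minimal Python sketch (ours, not part of the paper) computes s_k as the unique positive root of g_k(s) = s^{k+1} − \sum_{i=0}^{k} s^i by bisection; since g_k(1) = −k < 0 and g_k(2) = 1 > 0, the root is bracketed by the interval (1, 2).

# Bisection for the unique positive root s_k of g_k(s) = s^(k+1) - (1 + s + ... + s^k).
# Illustrative sketch only (not from the paper); any standard root finder would do.
def g(k, s):
    return s ** (k + 1) - sum(s ** i for i in range(k + 1))

def order_sk(k, tol=1e-14):
    lo, hi = 1.0, 2.0                 # g_k(1) = -k < 0 and g_k(2) = 1 > 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(k, mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for k in range(1, 8):
    # prints s_1, ..., s_7: 1.618, 1.839, 1.928, 1.966, 1.984, 1.992, 1.996
    print(k, round(order_sk(k), 3))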
In this work, we consider the application of this method to simple complex roots of a function f(z), where z is the complex variable. Let us denote a real or complex root of f(z) by α again; that is, f(α) = 0 and f'(α) ≠ 0. Thus, starting with z_0, z_1, …, z_k, k + 1 initial approximations to α, we generate a sequence of approximations {z_n} via the recursion
z_{n+1} = z_n − f(z_n)/p'_{n,k}(z_n),   n = k, k+1, …,
p'_{n,k}(z) being the derivative of the polynomial p_{n,k}(z) that interpolates f(z) at the points z_n, z_{n−1}, …, z_{n−k}. As in [7], we can use Newton's interpolation formula to generate p_{n,k}(z) and p'_{n,k}(z). Thus
p_{n,k}(z) = f(z_n) + \sum_{i=1}^{k} f[z_n, z_{n−1}, …, z_{n−i}] \prod_{j=0}^{i−1} (z − z_{n−j})
and
p'_{n,k}(z_n) = f[z_n, z_{n−1}] + \sum_{i=2}^{k} f[z_n, z_{n−1}, …, z_{n−i}] \prod_{j=1}^{i−1} (z_n − z_{n−j}).
Here, g[ζ_0, ζ_1, …, ζ_m] is the divided difference of order m of the function g(z) over the set of points {ζ_0, ζ_1, …, ζ_m}, and it is a symmetric function of these points. For details, we refer the reader to [7].
As proposed in [7], we generate the k + 1 initial approximations as follows: We choose the approximations z_0, z_1 first. We then generate z_2 by applying our method with k = 1 (that is, with the secant method). Next, we apply our method to z_0, z_1, z_2 with k = 2 and obtain z_3, and so on, until we have generated all k + 1 initial approximations, via
z_{n+1} = z_n − f(z_n)/p'_{n,n}(z_n),   n = 1, 2, …, k−1.
Remark 1.
1. 
Instead of choosing z_1 arbitrarily, we can generate it as z_1 = z_0 + f(z_0), as suggested in Brin [8], which is quite sensible since f(z) is small near the root α. We can also use the method of Steffensen (which uses only f(z) and no derivatives of f(z)) to generate z_1 from z_0; thus,
z_1 = z_0 − [f(z_0)]^2 / [f(z_0 + f(z_0)) − f(z_0)].
2. 
It is clear that, in case f(z) takes on only real values for real z and we are looking for nonreal roots of f(z), at least one of the initial approximations must be chosen to be nonreal.
3. 
We would like to mention that Kogan, Sapir, and Sapir [9] have proposed another generalization of the secant method for simple real roots of nonlinear equations f(x) = 0 that resembles our method described in (1). In the notation of (1), this method produces a sequence of approximations {x_n} via
x_{n+1} = x_n − f(x_n)/p'_{n,n}(x_n),   n = 1, 2, …,
starting with arbitrary x_0 and x_1, and it is of order 2. Note that, in (6), p_{n,n}(x) interpolates f(x) at the points x_0, x_1, …, x_n, hence is of degree n, which tends to infinity. In (1), p_{n,k}(x) is of degree k, which is fixed.
4. 
Yet another generalization of the secant method for finding simple real roots of f(x) was recently given by Nijmeijer [10]. This method too requires no derivative information, requires one evaluation of f(x) per iteration, and has the same order of convergence as our method. It follows an idea of applying a convergence acceleration method, such as Aitken's Δ²-process, to approximations obtained from the secant method, as proposed by Han and Potra [11]. Because Nijmeijer's method is not based on polynomial interpolation, it is completely different from our method, however. For Aitken's Δ²-process, see [1,2,3,4,5]. See also [12] (Chapter 15) by the author.
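To fix ideas, the following Python sketch (ours, not the author's code) shows how the recursion (2), the Newton form (4) for p'_{n,k}(z_n), the bootstrap (5) for the initial approximations, and the optional Steffensen seed of Remark 1 fit together. It is a minimal illustration under the stated assumptions (simple root, distinct interpolation points); the helper names, tolerance, and iteration cap are our own choices.

# Generalized secant method (2) for a simple (possibly complex) root of f.
def leading_divided_differences(zs, fs):
    # Return [f[z_n], f[z_n, z_{n-1}], ..., f[z_n, ..., z_{n-m}]], zs given most recent first.
    col = list(fs)
    lead = [col[0]]
    for j in range(1, len(zs)):
        col = [(col[i + 1] - col[i]) / (zs[i + j] - zs[i]) for i in range(len(zs) - j)]
        lead.append(col[0])
    return lead

def pprime_at_zn(zs, fs):
    # p'_{n,k}(z_n) from Eq. (4); zs = [z_n, z_{n-1}, ..., z_{n-k}], distinct points assumed.
    dd = leading_divided_differences(zs, fs)
    k = len(zs) - 1
    total, prod = dd[1], 1.0
    for i in range(2, k + 1):
        prod *= zs[0] - zs[i - 1]     # accumulates (z_n - z_{n-1}) ... (z_n - z_{n-i+1})
        total += dd[i] * prod
    return total

def steffensen_seed(f, z0):
    # Optional z_1 from z_0 as in Remark 1: z_1 = z_0 - f(z_0)^2 / (f(z_0 + f(z_0)) - f(z_0)).
    w = f(z0)
    return z0 - w * w / (f(z0 + w) - w)

def generalized_secant(f, z0, z1, k=2, tol=1e-13, maxit=50):
    # Iterate (2); while fewer than k+1 points are available, use all of them, as in (5).
    zs, fs = [z0, z1], [f(z0), f(z1)]
    for _ in range(maxit):
        m = min(k, len(zs) - 1)
        pts, vals = zs[::-1][:m + 1], fs[::-1][:m + 1]   # most recent point first
        z_new = pts[0] - vals[0] / pprime_at_zn(pts, vals)
        zs.append(z_new)
        fs.append(f(z_new))
        if abs(z_new - pts[0]) < tol:
            break
    return zs

For instance, under these assumptions, generalized_secant(lambda z: z**3 - 8, 2j, -2 + 2j, k=2)[-1] returns, in double precision, an approximation to the complex cube root −1 + i√3 of 8 that is the subject of Example 1 below.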
In the next section, we analyze the local convergence properties of the method as it is applied to complex roots. We show that the analysis of [7] can be extended to the complex case with suitable modifications. We prove that the order s_k of the method is the same as that established in the real case. In Section 3, we provide two numerical examples to confirm the results of our convergence analysis.

2. Local Convergence Analysis

We now turn to the analysis of the sequence {z_n}_{n=0}^{∞} that is generated via (2). Our treatment covers all k ≥ 1.
In our analysis, we will make use of the Hermite–Genocchi formula, which provides an integral representation for divided differences (for a proof of this formula, see Atkinson [1], for example). Even though this formula is usually stated for functions defined on real intervals, it is easy to verify (see Filipsson [13], for example) that it also applies to functions defined in the complex plane under proper assumptions. Thus, provided g(z) is analytic on E, a bounded closed convex set in the complex plane, and provided ζ_0, ζ_1, …, ζ_m are in E, there holds
g[ζ_0, ζ_1, …, ζ_m] = \int_{S_m} g^{(m)}(t_0 ζ_0 + t_1 ζ_1 + ⋯ + t_m ζ_m) dt_1 ⋯ dt_m,   t_0 = 1 − \sum_{i=1}^{m} t_i.
Here S_m is the m-dimensional simplex defined as
S_m = { (t_1, …, t_m) ∈ R^m : t_i ≥ 0, i = 1, …, m, \sum_{i=1}^{m} t_i ≤ 1 }.
We note that (7) holds whether the ζ_i are distinct or not. We also note that g[ζ_0, ζ_1, …, ζ_m] is a symmetric and continuous function of its arguments.
By the conditions we have imposed on g(z), it is easy to see that the integrand g^{(m)}(\sum_{i=0}^{m} t_i ζ_i) in (7) is always defined, because \sum_{i=0}^{m} t_i ζ_i is in the set E and g(z) is analytic on E. This is so because, by (7) and (8),
(t_1, …, t_m) ∈ S_m   ⟹   t_i ≥ 0, i = 0, 1, …, m, and \sum_{i=0}^{m} t_i = 1,
which implies that \sum_{i=0}^{m} t_i ζ_i is a convex combination of ζ_0, ζ_1, …, ζ_m, hence is in the set C = conv{ζ_0, ζ_1, …, ζ_m}, the convex hull of the points ζ_0, ζ_1, …, ζ_m, and C ⊆ E. Consequently, taking moduli on both sides of (7), we obtain, for all ζ_i in E,
|g[ζ_0, ζ_1, …, ζ_m]| ≤ \int_{S_m} |g^{(m)}(\sum_{i=0}^{m} t_i ζ_i)| dt_1 ⋯ dt_m ≤ ‖g^{(m)}‖/m!,   ‖g^{(m)}‖ = max_{z∈E} |g^{(m)}(z)|.
In addition, since \sum_{i=0}^{m} t_i = 1 in (7), as ζ_i → ζ̂ for all i = 0, 1, …, m, there hold \sum_{i=0}^{m} t_i ζ_i → ζ̂ and g^{(m)}(\sum_{i=0}^{m} t_i ζ_i) → g^{(m)}(ζ̂), and hence
lim_{ζ_i → ζ̂, i=0,1,…,m} g[ζ_0, ζ_1, …, ζ_m] = g[ζ̂, ζ̂, …, ζ̂]  ((m+1) times)  = g^{(m)}(ζ̂)/m!.
In (9) and (10), we have also invoked the fact that (see [14] (p. 346), for example)
\int_{S_m} dt_1 ⋯ dt_m = 1/m!.
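As a quick numerical sanity check on the confluent limit (10), the following Python sketch (ours, not from the paper) computes a third-order divided difference of the entire function g(z) = e^z at four points clustering around a chosen ζ̂ and compares it with g'''(ζ̂)/3!; the point ζ̂, the cluster directions, and the radii h are arbitrary illustrative choices, and the discrepancy shrinks roughly in proportion to h.

import cmath, math

def divided_difference(zs, g):
    # g[z_0, ..., z_m] via the recursive divided-difference table (distinct points).
    col = [g(z) for z in zs]
    for j in range(1, len(zs)):
        col = [(col[i + 1] - col[i]) / (zs[i + j] - zs[i]) for i in range(len(zs) - j)]
    return col[0]

zeta_hat = 0.3 + 0.7j
m = 3                                             # order of the divided difference
offsets = (0.0, 1.0, 0.5j, 1.0 + 0.7j)            # m + 1 distinct directions (arbitrary)
limit = cmath.exp(zeta_hat) / math.factorial(m)   # g^{(m)}(zeta_hat)/m! for g = exp
for h in (1e-1, 1e-2, 1e-3):
    pts = [zeta_hat + h * c for c in offsets]
    print(h, abs(divided_difference(pts, cmath.exp) - limit))   # decreases roughly like h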
We will make use of these facts in the proof of our main theorem that follows. This theorem and its proof are almost identical to those given in [7], once we take into account, where and when needed, the fact that we are now working in the complex plane. For convenience, we provide all the details of the proof.
Theorem 1.
Let α be a simple root of f(z), that is, f(α) = 0 but f'(α) ≠ 0. Let B_r be the closed disk of radius r centered at α, that is,
B_r = { z ∈ C : |z − α| ≤ r }.
Let f(z) be analytic on B_r. Choose a positive integer k and let z_0, z_1, …, z_k be distinct initial approximations to α. Generate z_{k+1}, z_{k+2}, … via
z_{n+1} = z_n − f(z_n)/p'_{n,k}(z_n),   n = k, k+1, …,
where p_{n,k}(z) is the polynomial of interpolation to f(z) at the points z_n, z_{n−1}, …, z_{n−k}. Then, provided z_0, z_1, …, z_k are in B_r and sufficiently close to α, we have the following cases:
1. 
If f^{(k+1)}(α) ≠ 0, the sequence {z_n} converges to α, and
lim_{n→∞} ε_{n+1} / \prod_{i=0}^{k} ε_{n−i} = [(−1)^{k+1}/(k+1)!] f^{(k+1)}(α)/f'(α) ≡ L;   ε_n = z_n − α for all n.
The order of convergence is s_k, 1 < s_k < 2, where s_k is the only positive root of the polynomial g_k(s) = s^{k+1} − \sum_{i=0}^{k} s^i and satisfies
2 − 2^{−k−1} e < s_k < 2 − 2^{−k−1} for k ≥ 2;   s_k < s_{k+1};   lim_{k→∞} s_k = 2,
e being the base of natural logarithms, and
lim_{n→∞} |ε_{n+1}| / |ε_n|^{s_k} = |L|^{(s_k−1)/k},
which also implies that
s_k = lim_{n→∞} log|ε_{n+1}/ε_n| / log|ε_n/ε_{n−1}|.
2. 
If f(z) is a polynomial of degree at most k, the sequence {z_n} converges to α, and
lim_{n→∞} ε_{n+1}/ε_n^2 = f''(α)/(2 f'(α));   ε_n = z_n − α for all n.
Thus, {z_n} converges with order 2 if f''(α) ≠ 0, and with order greater than 2 if f''(α) = 0.
Proof. 
We start by deriving a closed-form expression for the error in z_{n+1}. Subtracting α from both sides of (12), and noting that
f(z_n) = f(z_n) − f(α) = f[z_n, α] (z_n − α),
we have
z_{n+1} − α = [1 − f[z_n, α]/p'_{n,k}(z_n)] (z_n − α) = {[p'_{n,k}(z_n) − f[z_n, α]] / p'_{n,k}(z_n)} (z_n − α).
We now note that
p'_{n,k}(z_n) − f[z_n, α] = [p'_{n,k}(z_n) − f'(z_n)] + [f'(z_n) − f[z_n, α]],
and that
f'(z_n) − p'_{n,k}(z_n) = f[z_n, z_n, z_{n−1}, …, z_{n−k}] \prod_{i=1}^{k} (z_n − z_{n−i})
and
f'(z_n) − f[z_n, α] = f[z_n, z_n] − f[z_n, α] = f[z_n, z_n, α] (z_n − α).
Note that (20) can be obtained by starting with the divided difference representation of f(z) − p_{n,k}(z), namely, f(z) − p_{n,k}(z) = f[z, z_n, z_{n−1}, …, z_{n−k}] \prod_{i=0}^{k} (z − z_{n−i}), and by computing lim_{z→z_n} [f(z) − p_{n,k}(z)] / \prod_{i=0}^{k} (z − z_{n−i}) via L'Hôpital's rule.
For simplicity of notation, let
f[z_n, z_n, z_{n−1}, …, z_{n−k}] = D̂_n   and   f[z_n, z_n, α] = Ê_n,
and rewrite (19) and (20) as
p'_{n,k}(z_n) − f[z_n, α] = −D̂_n \prod_{i=1}^{k} (ε_n − ε_{n−i}) + Ê_n ε_n,
p'_{n,k}(z_n) = f'(z_n) − D̂_n \prod_{i=1}^{k} (ε_n − ε_{n−i}).
Substituting these into (18), we finally obtain
ε_{n+1} = C_n ε_n;   C_n ≡ [p'_{n,k}(z_n) − f[z_n, α]] / p'_{n,k}(z_n) = [−D̂_n \prod_{i=1}^{k} (ε_n − ε_{n−i}) + Ê_n ε_n] / [f'(z_n) − D̂_n \prod_{i=1}^{k} (ε_n − ε_{n−i})].
We now prove that convergence takes place. First, let us assume without loss of generality that f'(z) ≠ 0 for all z ∈ B_r, and set m_1 = min_{z∈B_r} |f'(z)| > 0. (This is possible since α ∈ B_r and f'(α) ≠ 0, and we can choose r as small as we wish to also guarantee m_1 > 0.) Next, let M_s = max_{z∈B_r} |f^{(s)}(z)|/s!, s = 1, 2, …. Thus, assuming that {z_n, z_{n−1}, …, z_{n−k}} ⊂ B_r and noting that B_r is a convex set, we have by (9) that
|D̂_n| ≤ M_{k+1},   |Ê_n| ≤ M_2,   because {α, z_n, z_{n−1}, …, z_{n−k}} ⊂ B_r.
Next, choose the ball B_{t/2} sufficiently small (with t/2 ≤ r) to ensure that m_1 > 2 M_{k+1} t^k + M_2 t/2. It can now be verified that, provided z_n, z_{n−1}, …, z_{n−k} are all in B_{t/2}, there holds
|C_n| ≤ [M_{k+1} \prod_{i=1}^{k} |ε_n − ε_{n−i}| + M_2 |ε_n|] / [m_1 − M_{k+1} \prod_{i=1}^{k} |ε_n − ε_{n−i}|] ≤ [M_{k+1} \prod_{i=1}^{k} (|ε_n| + |ε_{n−i}|) + M_2 |ε_n|] / [m_1 − M_{k+1} \prod_{i=1}^{k} (|ε_n| + |ε_{n−i}|)] ≤ C̄,
where
C̄ ≡ [M_{k+1} t^k + M_2 t/2] / [m_1 − M_{k+1} t^k] < 1.
Consequently, by (25), |ε_{n+1}| ≤ C̄ |ε_n| < |ε_n|, which implies that z_{n+1} ∈ B_{t/2}, just like z_n, z_{n−1}, …, z_{n−k}. Therefore, if z_0, z_1, …, z_k are chosen in B_{t/2}, then |C_n| ≤ C̄ < 1 for all n ≥ k, hence {z_n} ⊂ B_{t/2} and lim_{n→∞} z_n = α.
As for (13) when f^{(k+1)}(α) ≠ 0, we proceed as follows: By the fact that lim_{n→∞} z_n = α, we first note that, by (20) and (21),
lim_{n→∞} p'_{n,k}(z_n) = f'(α) = lim_{n→∞} f[z_n, α],
and thus lim_{n→∞} C_n = 0. This means that lim_{n→∞} (ε_{n+1}/ε_n) = 0 and, equivalently, that {z_n} converges with order greater than 1. As a result,
lim_{n→∞} (ε_n/ε_{n−i}) = 0 for all i ≥ 1,
and
ε_n/ε_{n−i} = o(ε_n/ε_{n−j}) as n → ∞, for j < i.
Consequently, expanding in (25) the product \prod_{i=1}^{k} (ε_n − ε_{n−i}), we have
\prod_{i=1}^{k} (ε_n − ε_{n−i}) = \prod_{i=1}^{k} { −ε_{n−i} [1 − ε_n/ε_{n−i}] } = (−1)^k \prod_{i=1}^{k} ε_{n−i} [1 + O(ε_n/ε_{n−1})] as n → ∞.
Substituting (27) into (25), and defining
D_n = −D̂_n / p'_{n,k}(z_n),   E_n = Ê_n / p'_{n,k}(z_n),
we obtain
ε_{n+1} = (−1)^k D_n \prod_{i=0}^{k} ε_{n−i} [1 + O(ε_n/ε_{n−1})] + E_n ε_n^2 as n → ∞.
Dividing both sides of (29) by \prod_{i=0}^{k} ε_{n−i}, and defining
σ_n = ε_{n+1} / \prod_{i=0}^{k} ε_{n−i},
we have
σ_n = (−1)^k D_n [1 + O(ε_n/ε_{n−1})] + E_n σ_{n−1} ε_{n−k−1} as n → ∞.
Now, by (10), (22), and (26),
lim_{n→∞} D_n = −[1/(k+1)!] f^{(k+1)}(α)/f'(α),   lim_{n→∞} E_n = f''(α)/(2 f'(α)).
Because lim_{n→∞} D_n and lim_{n→∞} E_n are finite, and because lim_{n→∞} (ε_n/ε_{n−1}) = 0 and lim_{n→∞} ε_{n−k−1} = 0, it follows that there exist a positive integer N and positive constants β < 1 and D, with |E_n ε_{n−k−1}| ≤ β when n > N, for which (31) gives
|σ_n| ≤ D + β |σ_{n−1}| for all n > N.
Using (33), it is easy to show that
|σ_{N+s}| ≤ D (1 − β^s)/(1 − β) + β^s |σ_N|,   s = 1, 2, …,
which, by the fact that β < 1, implies that {σ_n} is a bounded sequence. Making use of this fact, we have lim_{n→∞} E_n σ_{n−1} ε_{n−k−1} = 0. Substituting this into (31), and invoking (32), we next obtain lim_{n→∞} σ_n = (−1)^k lim_{n→∞} D_n = L, which is precisely (13).
That s k , the order of the method, as defined in the statement of the theorem, satisfies (14) and (15) follows from Traub [15] (Chapter 3). We provide a simplified treatment of this topic in Appendix A.
This completes the proof of part 1 of the theorem.
When f(z) is a polynomial of degree at most k, we first observe that f^{(k+1)}(z) = 0 for all z, which implies that p_{n,k}(z) = f(z) for all z, hence also p'_{n,k}(z) = f'(z) for all z. Therefore, we have that p'_{n,k}(z_n) = f'(z_n) in the recursion of (12). Consequently, (12) becomes
z_{n+1} = z_n − f(z_n)/f'(z_n),   n = k, k+1, …,
which is the recursion for the Newton–Raphson method. Thus, (17) follows. This completes the proof of part 2 of the theorem. □

3. Numerical Examples

In this section, we present two numerical examples that we treated with our method. Our computations were carried out in quadruple-precision arithmetic (approximately 35-decimal-digit accuracy). Note that, in order to verify the theoretical results concerning iterative methods of order greater than unity, we need to use high-precision computer arithmetic (preferably of variable precision, if available), because the number of correct significant decimal digits in the z_n increases dramatically from one iteration to the next as we approach the solution.
In both examples below, we take k = 2. We choose z_0 and z_1 and compute z_2 using one step of the secant method, namely,
z_2 = z_1 − f(z_1)/f[z_0, z_1].
Following that, we compute z_3, z_4, …, via
z_{n+1} = z_n − f(z_n) / { f[z_n, z_{n−1}] + f[z_n, z_{n−1}, z_{n−2}] (z_n − z_{n−1}) },   n = 2, 3, ….
In our examples, we have carried out our computations for several sets of z_0, z_1, and we have observed essentially the same behavior as that displayed in Table 1 and Table 2.
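The recursion (34) and (35) is short enough to try directly. The following double-precision Python sketch (ours, not the author's quadruple-precision code) applies it to Example 1 below, f(z) = z^3 − 8 with the starting values z_0 = 2i and z_1 = −2 + 2i, and prints the two diagnostic columns of Table 1; with only about 16 digits available, it can reproduce roughly the first six rows.

import math

def f(z):
    return z ** 3 - 8

alpha = -1 + 1j * math.sqrt(3)                 # the root sought in Example 1

z = [2j, -2 + 2j]                              # z_0, z_1
z.append(z[1] - f(z[1]) * (z[1] - z[0]) / (f(z[1]) - f(z[0])))   # secant step, as in (34)

for n in range(2, 7):                          # the k = 2 recursion (35)
    zn, zn1, zn2 = z[n], z[n - 1], z[n - 2]
    d1 = (f(zn) - f(zn1)) / (zn - zn1)         # f[z_n, z_{n-1}]
    d1b = (f(zn1) - f(zn2)) / (zn1 - zn2)      # f[z_{n-1}, z_{n-2}]
    d2 = (d1 - d1b) / (zn - zn2)               # f[z_n, z_{n-1}, z_{n-2}]
    z.append(zn - f(zn) / (d1 + d2 * (zn - zn1)))

eps = [w - alpha for w in z]
for n in range(2, 6):                          # diagnostic columns of Table 1
    ratio = eps[n + 1] / (eps[n] * eps[n - 1] * eps[n - 2])
    order = math.log(abs(eps[n + 1] / eps[n])) / math.log(abs(eps[n] / eps[n - 1]))
    print(n, abs(eps[n]), ratio, round(order, 3))

Replacing f, alpha, and the starting values by those of Example 2 (with f(z) = sin(iz) − cos z evaluated through the cmath module) allows the early rows of Table 2 to be checked in the same way.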
Example 1.
Consider f(z) = 0, where f(z) = z^3 − 8, whose solutions are α_r = 2 e^{i2πr/3}, r = 0, 1, 2. We would like to obtain the root α_1 = 2 e^{i2π/3} = −1 + i√3. We chose z_0 = 2i and z_1 = −2 + 2i. The results of our computations are given in Table 1.
From (13) and (16) in Theorem 1, we should have
lim_{n→∞} ε_{n+1}/(ε_n ε_{n−1} ε_{n−2}) = [(−1)^3/3!] f'''(α_1)/f'(α_1) = (1/24)(1 − i√3) ≈ 0.04166 − i 0.07216
and
lim_{n→∞} log|ε_{n+1}/ε_n| / log|ε_n/ε_{n−1}| = s_2 ≈ 1.83928,
and these seem to be confirmed in Table 1. Furthermore, in infinite-precision arithmetic, z_9 should have close to 60 correct significant figures; we do not see this in Table 1 because the arithmetic we have used to generate Table 1 can provide an accuracy of at most 35 digits.
Example 2.
Consider f(z) = 0, where f(z) = sin(iz) − cos z. f(z) has infinitely many roots, α_r = (1 − i)(π/4 + rπ), r = 0, ±1, ±2, …. We would like to obtain the root α_0 = (1 − i)π/4. We chose z_0 = 1.5 − 1.3i and z_1 = 0.6 − 0.5i. The results of our computations are given in Table 2.
From (13) and (16) in Theorem 1, we should have
lim_{n→∞} ε_{n+1}/(ε_n ε_{n−1} ε_{n−2}) = [(−1)^3/3!] f'''(α_0)/f'(α_0) = −i/6 ≈ −i 0.16666
and
lim_{n→∞} log|ε_{n+1}/ε_n| / log|ε_n/ε_{n−1}| = s_2 ≈ 1.83928,
and these seem to be confirmed in Table 2. Furthermore, in infinite-precision arithmetic, z_8 should have close to 50 correct significant figures; we do not see this in Table 2 because the arithmetic we have used to generate Table 2 can provide an accuracy of at most 35 digits.
Remark 2.
In relation to the examples we have just presented, we would like to discuss the issue of estimating the relative errors |ε_n/α| in the z_n. This should help the reader when studying the numerical results included in Table 1 and Table 2. Starting with (13) and (15), we first note that, for all large n,
|ε_{n+1}| ≈ |L|^{(s_k−1)/k} |ε_n|^{s_k}.
Therefore, assuming also that α ≠ 0, we have
|ε_{n+1}/α| ≈ D |ε_n/α|^{s_k},   D = |L|^{(s_k−1)/k} |α|^{s_k−1}.
Now, if z_n has q > 0 correct significant figures, we have |ε_n/α| = O(10^{−q}). If, in addition, D = O(10^{r}) for some r, then we will have
|ε_{n+1}/α| ≈ O(10^{r − q s_k}).
For simplicity, let us consider the case r = 0, which is practically what we have in the two examples we have treated. Then z_{n+1} has approximately q s_k correct significant decimal digits. That is, if z_n has q correct significant decimal digits, then, because s_k > 1, z_{n+1} will have about s_k times as many correct significant decimal digits as z_n.
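For instance (an illustrative computation of ours, not taken from the paper), with k = 2 we have s_2 ≈ 1.84; so, when r = 0, an iterate z_n correct to about 5 significant digits is followed by z_{n+1} correct to about 1.84 × 5 ≈ 9 digits and by z_{n+2} correct to about 17 digits, which is the kind of growth visible in the |ε_n| columns of Table 1 and Table 2.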

Funding

This research received no external funding.

Acknowledgments

The author would like to thank Tamara Kogan for drawing his attention to the paper [9] mentioned in the Introduction.

Conflicts of Interest

The author declares no conflict of interest.

Appendix A

Before ending, we would like to provide a brief treatment of the order of convergence of our method stated in (14) and (15) by considering
ε_{n+1} / \prod_{i=0}^{k} ε_{n−i} = L for all n   ⟺   ε_{n+1} = L \prod_{i=0}^{k} ε_{n−i} for all n,
instead of (13). We will show that |ε_{n+1}| = Q |ε_n|^{s_k} is possible if s_k is a solution to the polynomial equation s^{k+1} = \sum_{i=0}^{k} s^i and Q = |L|^{(s_k−1)/k}. (For a more detailed treatment, we refer the reader to [15] (Section 3.3).)
We start by expressing all the |ε_{n−i}| in terms of |ε_n|. We have
|ε_{n−i}| = |ε_n|^{1/s_k^i} Q^{−m_i},   m_i = \sum_{j=1}^{i} s_k^{−j},   i = 1, 2, ….
Substituting this into |ε_{n+1}| = |L| \prod_{i=0}^{k} |ε_{n−i}|, we obtain
Q |ε_n|^{s_k} = |L| |ε_n| \prod_{i=1}^{k} |ε_n|^{1/s_k^i} Q^{−m_i} = |L| Q^{−M} |ε_n|^{ρ};   ρ = \sum_{i=0}^{k} s_k^{−i},   M = \sum_{i=1}^{k} m_i.
Of course, this is possible when s_k = ρ and Q^{M+1} = |L|.
Now, the requirement that s_k = ρ is the same as s_k^{k+1} = \sum_{i=0}^{k} s_k^i, which implies that the order s_k should be a root of the polynomial
g_k(s) = s^{k+1} − \sum_{i=0}^{k} s^i = (s^{k+2} − 2 s^{k+1} + 1)/(s − 1).
By Descartes' rule of signs, g_k(s) has only one positive root, which we denote by s̃. Since g_k(1) = −k < 0 and g_k(2) = 1 > 0, we have that 1 < s̃ < 2. The remaining k roots of g_k(s) are the zeros of the polynomial g̃(s) = g_k(s)/(s − s̃) = \sum_{j=0}^{k} c_j s^j, the c_j satisfying s̃ c_0 = 1 and s̃ c_j − c_{j−1} = 1, j = 1, …, k, hence
c_j = (1/s̃) \sum_{i=0}^{j} s̃^{−i},   j = 0, 1, …, k   ⟹   0 < c_0 < c_1 < ⋯ < c_{k−1} < c_k = 1.
Therefore, by the Eneström–Kakeya theorem, all k roots of g̃(s) are in the unit disk. We thus conclude that s̃ = s_k, since we already know that the order of our method is greater than 1. (For Descartes' rule of signs and the Eneström–Kakeya theorem, see, for example, Henrici [16] (pp. 442, 462).)
Next, we note that g_k(s) = s g_{k−1}(s) − 1. Therefore, g_{k−1}(s_{k−1}) = 0 implies g_k(s_{k−1}) = −1 < 0, which, along with g_k(2) = 1 > 0, implies that s_{k−1} < s_k < 2. Therefore, the sequence {s_k}_{k=1}^{∞} is monotonically increasing and is bounded from above by 2. Consequently, lim_{k→∞} s_k = ŝ exists and ŝ ≤ 2. Now,
g_k(s_k) = 0   ⟺   s_k^{k+2} − 2 s_k^{k+1} + 1 = 0   ⟺   s_k^2 − 2 s_k = −1/s_k^k.
Upon letting k → ∞ on both sides, we obtain ŝ^2 − 2ŝ = 0, which gives ŝ = 2.
The expression given for M can be simplified considerably, as we show next. First, it is easy to verify that
M = \sum_{i=1}^{k} (k − i + 1) s_k^{−i} = (1/s_k^k) \sum_{i=1}^{k} i s_k^{i−1}.
Next,
s_k^k M = (d/ds) \sum_{i=0}^{k} s^i |_{s=s_k} = (d/ds) [(s^{k+1} − 1)/(s − 1)] |_{s=s_k} = [(k+1) s^k (s − 1) − (s^{k+1} − 1)] / (s − 1)^2 |_{s=s_k}.
By s^{k+1} − 1 = (s − 1) \sum_{i=0}^{k} s^i, this becomes
s_k^k M = [(k+1) s^k − \sum_{i=0}^{k} s^i] / (s − 1) |_{s=s_k} = [k s_k^k − \sum_{i=0}^{k−1} s_k^i] / (s_k − 1).
Now, by the fact that g_k(s_k) = 0, we have \sum_{i=0}^{k−1} s_k^i = s_k^{k+1} − s_k^k. Consequently,
M = [k − (s_k − 1)] / (s_k − 1)   ⟹   M + 1 = k/(s_k − 1),
which is the required result.
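The identity M + 1 = k/(s_k − 1) and the bounds in (14) are easy to confirm numerically. The short Python sketch below (ours, not part of the paper) does so for k = 2, …, 7, computing s_k by a simple bisection; the tolerances are arbitrary illustrative choices.

import math

def g(k, s):
    return s ** (k + 1) - sum(s ** i for i in range(k + 1))

def order_sk(k, tol=1e-14):
    lo, hi = 1.0, 2.0                          # g_k(1) < 0 < g_k(2)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(k, mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

for k in range(2, 8):
    sk = order_sk(k)
    # M = sum_{i=1}^{k} m_i with m_i = sum_{j=1}^{i} s_k^{-j}, as defined above
    M = sum(sum(sk ** (-j) for j in range(1, i + 1)) for i in range(1, k + 1))
    identity_ok = abs((M + 1.0) - k / (sk - 1.0)) < 1e-9                  # M + 1 = k/(s_k - 1)
    bounds_ok = 2 - math.e * 2 ** (-(k + 1)) < sk < 2 - 2 ** (-(k + 1))   # bounds in (14)
    print(k, round(sk, 6), identity_ok, bounds_ok)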

References

  1. Atkinson, K.E. An Introduction to Numerical Analysis, 2nd ed.; John Wiley & Sons Inc.: New York, NY, USA, 1989. [Google Scholar]
  2. Dahlquist, G.; Björck, Å. Numerical Methods in Scientific Computing: Volume I; SIAM: Philadelphia, PA, USA, 2008. [Google Scholar]
  3. Henrici, P. Elements of Numerical Analysis; Wiley: New York, NY, USA, 1964. [Google Scholar]
  4. Ralston, A.; Rabinowitz, P. A First Course in Numerical Analysis, 2nd ed.; McGraw-Hill: New York, NY, USA, 1978. [Google Scholar]
  5. Stoer, J.; Bulirsch, R. Introduction to Numerical Analysis, 3rd ed.; Springer: New York, NY, USA, 2002. [Google Scholar]
  6. Sidi, A. Unified treatment of regula falsi, Newton–Raphson, secant, and Steffensen methods for nonlinear equations. J. Online Math. Appl. 2006, 6, 1–13. [Google Scholar]
  7. Sidi, A. Generalization of the secant method for nonlinear equations. Appl. Math. E-Notes 2008, 8, 115–123. [Google Scholar]
  8. Brin, L.Q. Tea Time Numerical Analysis; Southern Connecticut State University: New Haven, CT, USA, 2016. [Google Scholar]
  9. Kogan, T.; Sapir, L.; Sapir, A. A nonstationary iterative second-order method for solving nonlinear equations. Appl. Math. Comput. 2007, 188, 75–82. [Google Scholar] [CrossRef]
  10. Nijmeijer, M.J.P. A method to accelerate the convergence of the secant algorithm. Adv. Numer. Anal. 2014, 2014, 321592. [Google Scholar] [CrossRef] [Green Version]
  11. Han, W.; Potra, F.A. Convergence acceleration for some root finding methods. Comput. Suppl. 1993, 9, 67–78. [Google Scholar]
  12. Sidi, A. Practical Extrapolation Methods: Theory and Applications; Number 10 in Cambridge Monographs on Applied and Computational Mathematics; Cambridge University Press: Cambridge, UK, 2003. [Google Scholar]
  13. Filipsson, L. Complex mean-value interpolation and approximation of holomorphic functions. J. Approx. Theory 1997, 91, 244–278. [Google Scholar] [CrossRef] [Green Version]
  14. Davis, P.J.; Rabinowitz, P. Methods of Numerical Integration, 2nd ed.; Academic Press: New York, NY, USA, 1984. [Google Scholar]
  15. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  16. Henrici, P. Applied and Computational Complex Analysis; Wiley: New York, NY, USA, 1974; Volume 1. [Google Scholar]
Table 1. Results obtained by applying the generalized secant method with k = 2, as shown in (34) and (35), to the equation z^3 − 8 = 0, to compute the root α_1 = −1 + i√3. The entries denoted "**" mean that the limit of the extended-precision arithmetic has been reached.

n    |ε_n|         ε_{n+1}/(ε_n ε_{n−1} ε_{n−2})     log|ε_{n+1}/ε_n| / log|ε_n/ε_{n−1}|
0    1.035D+00     -                                 -
1    1.035D+00     -                                 -
2    4.808D−01     8.972D−02 + i 1.015D−01           2.516
3    6.979D−02     1.224D−01 − i 2.727D−02           1.437
4    4.355D−03     1.009D−01 − i 4.079D−02           2.023
5    1.591D−05     4.561D−02 − i 9.794D−02           1.839
6    5.223D−10     3.793D−02 − i 7.268D−02           1.839
7    2.967D−18     3.741D−02 − i 7.579D−02           1.838
8    2.083D−33     **                                **
9    0.000D+00     **                                **
Table 2. Results obtained by applying the generalized secant method with k = 2, as shown in (34) and (35), to the equation sin(iz) − cos z = 0, to compute the root α_0 = (1 − i)π/4. The entries denoted "**" mean that the limit of the extended-precision arithmetic has been reached.

n    |ε_n|         ε_{n+1}/(ε_n ε_{n−1} ε_{n−2})     log|ε_{n+1}/ε_n| / log|ε_n/ε_{n−1}|
0    6.608D−01     -                                 -
1    3.403D−01     -                                 -
2    1.341D−01     3.163D−01 + i 1.397D−01           2.743
3    1.043D−02     1.466D−01 − i 1.846D−01           1.774
4    1.122D−04     2.943D−03 − i 1.117D−01           1.934
5    1.755D−08     9.223D−03 − i 1.614D−01           1.766
6    3.320D−15     7.686D−04 − i 1.658D−01           1.857
7    1.084D−27     **                                **
8    9.630D−35     **                                **
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
