Article

Two-Step Solver for Nonlinear Equations

Ioannis K. Argyros 1, Stepan Shakhno 2 and Halyna Yarmola 2

1 Department of Mathematics, Cameron University, Lawton, OK 73505, USA
2 Faculty of Applied Mathematics and Informatics, Ivan Franko National University of Lviv, Universitetska Str. 1, Lviv 79000, Ukraine
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(2), 128; https://doi.org/10.3390/sym11020128
Submission received: 23 December 2018 / Revised: 14 January 2019 / Accepted: 18 January 2019 / Published: 23 January 2019
(This article belongs to the Special Issue Symmetry with Operator Theory and Equations)

Abstract: In this paper, we present a two-step solver for nonlinear equations with a nondifferentiable operator. This method is based on two methods of order of convergence 1 + √2. We study the local and semilocal convergence using weaker conditions in order to extend the applicability of the solver. Finally, we present a numerical example that confirms the theoretical results.

1. Introduction

A plethora of real-life applications from various areas, including Computational Science and Engineering, are converted via mathematical modeling into equations defined on abstract spaces such as the n-dimensional Euclidean, Hilbert, and Banach spaces [1,2]. Researchers then face the great challenge of finding a solution x* of the equation in closed form, a task that is generally impossible to achieve. This is why iterative methods are developed to provide a sequence approximating x* under some initial conditions.
Newton's method and its variants are widely used to approximate x* [1,2,3,4,5,6,7,8,9,10,11,12,13,14]. However, their implementation can be problematic, since inverting the linear operator involved is, in general, costly or impossible. That is why derivative-free secant-type methods were also developed. In these cases, however, the order of convergence drops from 2 to (1 + √5)/2 ≈ 1.618.
One then considers methods that mix Newton and secant steps to increase the order of convergence; this is the first objective of this paper. Moreover, the study of iterative methods involves local convergence, where knowledge about the solution x* is used to determine upper bounds on the distances ‖x_n − x*‖ and on the radii of convergence. Local results are important because they describe the difficulty of choosing initial points. In the semilocal convergence analysis, we use knowledge surrounding the initial point to find sufficient conditions for convergence. It turns out that in both cases the convergence region is small, limiting the applicability of iterative methods. That is why we use our ideas of the center-Lipschitz condition, in combination with the notion of the restricted convergence region, to present local as well as semilocal improvements that extend the applicability of iterative methods.
The novelty of the paper is that since the new Lipschitz constants are special cases of older ones, no additional cost is required for these improvements (see also the remarks and numerical examples). Our ideas can be used to improve the applicability of other iterative methods [1,2,3,4,5,6,7,8,9,10,11,12,13,14].
Let E_1, E_2 be Banach spaces and Ω ⊆ E_1 a convex set. Let F: Ω → E_2 be differentiable in the Fréchet sense and G: Ω → E_2 be continuous, but its differentiability is not assumed. Then, we study the equation

$$H(x) = 0, \quad \text{where } H(x) = F(x) + G(x). \tag{1}$$
This problem was considered by several authors. Most of them used one-step methods for finding an approximate solution of (1), for example, Newton’s type method [14], difference methods [4,5] and combined methods [1,2,3,11].
We proposed a two-step method [6,10,12] to numerically solve (1):

$$x_{n+1} = x_n - \left[ F'\!\left(\tfrac{x_n + y_n}{2}\right) + Q(x_n, y_n) \right]^{-1} (F(x_n) + G(x_n)),$$
$$y_{n+1} = x_{n+1} - \left[ F'\!\left(\tfrac{x_n + y_n}{2}\right) + Q(x_n, y_n) \right]^{-1} (F(x_{n+1}) + G(x_{n+1})), \quad n = 0, 1, \ldots, \tag{2}$$

with Q(x, y) a first-order divided difference of the operator G at the points x and y. This method is related to methods with order of convergence 1 + √2 [7,13].
If Q: Ω × Ω → L(E_1, E_2) satisfies Q(x, y)(x − y) = G(x) − G(y) for all x, y ∈ Ω with x ≠ y, then we call it a first-order divided difference of G.
Two-step methods have some advantages over one-step methods. First, they usually require fewer iterations to find an approximate solution. Second, at each iteration they solve two similar linear problems, so the increase in computational complexity is small. That is why they are often used for solving nonlinear problems [2,6,8,9,10,12,13].
In [6,10,12], the convergence analysis of the proposed method was carried out under classical and generalized Lipschitz conditions, and a superquadratic order of convergence was shown. Numerical results for method (2) were presented in [10,12].
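To make the scheme concrete, here is a minimal sketch of method (2) for a scalar equation H(x) = F(x) + G(x) = 0. The test functions F(x) = x³ + x and G(x) = |x| − 1 are our own illustrative choices (not from the paper); G is continuous but nondifferentiable at 0.

```python
# Sketch of two-step method (2) for a scalar equation H(x) = F(x) + G(x) = 0,
# where F is differentiable and G is continuous but nondifferentiable.
# The test problem below is illustrative, not taken from the paper.

def F(x):  return x**3 + x
def dF(x): return 3 * x**2 + 1          # F'(x)
def G(x):  return abs(x) - 1

def Q(x, y):
    """First-order divided difference of G: Q(x, y)(x - y) = G(x) - G(y)."""
    return (G(x) - G(y)) / (x - y) if x != y else 1.0

def two_step_solve(x, y, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        # One linear operator T is built per iteration and reused for both steps.
        T = dF((x + y) / 2) + Q(x, y)
        x_new = x - (F(x) + G(x)) / T
        y_new = x_new - (F(x_new) + G(x_new)) / T
        x, y = x_new, y_new
        if abs(F(x) + G(x)) < tol:
            break
    return x

root = two_step_solve(1.0, 0.9)
print(root)  # root of x^3 + 2x - 1 = 0 for x > 0, ~0.45340
```

Note that both steps of an iteration reuse the same operator T, which is why the second step costs little extra work.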

2. Local Convergence

Let S(x*, ρ) = {x : ‖x − x*‖ < ρ}.
From now on, by differentiable we mean differentiable in the Fréchet sense. Moreover, F and G are as defined previously.
Theorem 1 ([10,12]).
Assume (1) has a solution x* ∈ Ω, G has a first-order divided difference Q in Ω, and for each x ≠ y the inverse [T(x; y)]^{−1} = [F′((x + y)/2) + Q(x, y)]^{−1} exists with ‖[T(x; y)]^{−1}‖ ≤ B. Moreover, assume for each x, y, u, v ∈ Ω, x ≠ y,

$$\|F'(x) - F'(y)\| \le 2 p_1 \|x - y\|, \tag{3}$$
$$\|F'(x) - F'(y)\| \le p_2 \|x - y\|^{\alpha}, \quad \alpha \in (0, 1], \tag{4}$$
$$\|Q(x, y) - Q(u, v)\| \le q_1 (\|x - u\| + \|y - v\|). \tag{5}$$

Assume S(x*, r*) ⊆ Ω, where r* = min{r_1, r_2} and r_1, r_2 are the minimal positive zeros of the equations q(r) = 1 and 3 B (p_1 + q_1) r q(r) = 1, respectively, with

$$q(r) = B \left( (p_1 + q_1) r + \frac{p_2}{4 (\alpha + 1)(\alpha + 2)} r^{1+\alpha} \right).$$

Then, the sequences {x_n}_{n≥0}, {y_n}_{n≥0} with x_0, y_0 ∈ S(x*, r*) remain in S(x*, r*), lim_{n→∞} x_n = x*, and

$$\|x_{n+1} - x^*\| \le B \left( (p_1 + q_1) \|y_n - x^*\| + \frac{p_2}{4 (\alpha + 1)(\alpha + 2)} \|x_n - x^*\|^{1+\alpha} \right) \|x_n - x^*\|, \tag{6}$$
$$\|y_{n+1} - x^*\| \le B (p_1 + q_1) \left( \|y_n - x^*\| + \|x_n - x^*\| + \|x_{n+1} - x^*\| \right) \|x_{n+1} - x^*\|. \tag{7}$$
The condition ‖[T(x; y)]^{−1}‖ ≤ B used in [10,12] is very strong in general. That is why, in what follows, we provide a weaker alternative. Indeed, assume that there exist a > 0 and b > 0 such that

$$\|F'(x^*) - F'(x)\| \le a \|x^* - x\|, \tag{8}$$
$$\|Q(x, y) - G'(x^*)\| \le b (\|x - x^*\| + \|y - x^*\|) \quad \text{for each } x, y \in \Omega. \tag{9}$$

Set c = (a + 2b)‖T_*^{−1}‖, Ω_0 = Ω ∩ S(x*, 1/c), and T_* = F′(x*) + G′(x*). Provided that T_*^{−1} exists, it follows for each x, y ∈ S(x*, r), r ∈ [0, 1/c), in turn by (8) and (9) that

$$\|T_*^{-1} (T(x; y) - T_*)\| \le \|T_*^{-1}\| \left( \left\| F'\!\left(\tfrac{x+y}{2}\right) - F'(x^*) \right\| + \|Q(x, y) - G'(x^*)\| \right) \le \|T_*^{-1}\| \left( \tfrac{a}{2} + b \right) (\|x - x^*\| + \|y - x^*\|) < \|T_*^{-1}\| (a + 2b) \frac{1}{c} = 1. \tag{10}$$
Then, (10) and the Banach lemma on invertible operators [2] assure that T(x; y)^{−1} exists with

$$\|T(x; y)^{-1}\| \le \bar{B} = \bar{B}(r) = \frac{\|T_*^{-1}\|}{1 - c r}. \tag{11}$$
Then, Theorem 1 holds but with B ¯ , p ¯ 1 , q ¯ 1 , p ¯ 2 , r ¯ 1 , r ¯ 2 , r ¯ * replacing B, p 1 , q 1 , p 2 , r 1 , r 2 , r * , respectively.
Next, we provide a weaker alternative to Theorem 1.
Theorem 2.
Assume x* ∈ Ω exists with F(x*) + G(x*) = 0 and T_*^{−1} ∈ L(E_2, E_1), and that, together with conditions (8) and (9), the following items hold for each x, y, u, v ∈ Ω_0:

$$\|F'(y) - F'(x)\| \le 2 \bar{p}_1 \|y - x\|,$$
$$\|F'(y) - F'(x)\| \le \bar{p}_2 \|y - x\|^{\alpha}, \quad \alpha \in (0, 1],$$
$$\|Q(x, y) - Q(u, v)\| \le \bar{q}_1 (\|x - u\| + \|y - v\|).$$
Let r̄_1, r̄_2 be the minimal positive zeros of the equations

$$\bar{q}(r) = 1, \qquad 3 \bar{B} (\bar{p}_1 + \bar{q}_1) r \, \bar{q}(r) = 1,$$

respectively, where

$$\bar{q}(r) = \bar{B} \left( (\bar{p}_1 + \bar{q}_1) r + \frac{\bar{p}_2}{4 (\alpha + 1)(\alpha + 2)} r^{1+\alpha} \right),$$

and set r̄* = min{r̄_1, r̄_2}. Moreover, assume that S(x*, r̄*) ⊆ Ω.
Then, the sequences {x_n}_{n≥0}, {y_n}_{n≥0} with x_0, y_0 ∈ S(x*, r̄*) remain in S(x*, r̄*), lim_{n→∞} x_n = x*, and

$$\|x_{n+1} - x^*\| \le \bar{B} \left( (\bar{p}_1 + \bar{q}_1) \|y_n - x^*\| + \frac{\bar{p}_2}{4 (\alpha + 1)(\alpha + 2)} \|x_n - x^*\|^{1+\alpha} \right) \|x_n - x^*\|, \tag{12}$$
$$\|y_{n+1} - x^*\| \le \bar{B} (\bar{p}_1 + \bar{q}_1) \left( \|y_n - x^*\| + \|x_n - x^*\| + \|x_{n+1} - x^*\| \right) \|x_{n+1} - x^*\|. \tag{13}$$
Proof. 
It follows from the proof of Theorem 1, (10), (11) and the preceding replacements. ☐
Corollary 1.
Assume the hypotheses of Theorem 2 hold. Then, the order of convergence of method (2) is 1 + √(1 + α).
Proof. 
Let
$$a_n = \|x_n - x^*\|, \quad b_n = \|y_n - x^*\|, \quad \bar{C}_1 = \bar{B} (\bar{p}_1 + \bar{q}_1), \quad \bar{C}_2 = \frac{\bar{B} \bar{p}_2}{4 (\alpha + 1)(\alpha + 2)}.$$
By (12) and (13), we get
$$a_{n+1} \le \bar{C}_1 a_n b_n + \bar{C}_2 a_n^{2+\alpha},$$
$$b_{n+1} \le \bar{C}_1 (a_{n+1} + a_n + b_n) a_{n+1} \le \bar{C}_1 (2 a_n + b_n) a_{n+1} \le \bar{C}_1 \big( 2 a_n + \bar{C}_1 (2 a_0 + b_0) a_n \big) a_{n+1} = \bar{C}_1 \big( 2 + \bar{C}_1 (2 a_0 + b_0) \big) a_n a_{n+1}.$$
Then, for large n and a_{n−1} < 1, we obtain from the previous inequalities

$$a_{n+1} \le \bar{C}_1 a_n b_n + \bar{C}_2 a_n^2 a_{n-1}^{\alpha} \le \bar{C}_1^2 \big( 2 + \bar{C}_1 (2 a_0 + b_0) \big) a_n^2 a_{n-1} + \bar{C}_2 a_n^2 a_{n-1}^{\alpha} \le \big[ \bar{C}_1^2 \big( 2 + \bar{C}_1 (2 a_0 + b_0) \big) + \bar{C}_2 \big] a_n^2 a_{n-1}^{\alpha}. \tag{14}$$
From (14), we relate the order of (2) to the equation t² − 2t − α = 0, whose positive solution is t* = 1 + √(1 + α). ☐
Remark 1.
To relate Theorem 1 and Corollary 2 in [12] to our Theorem 2 and Corollary 1, respectively, notice that under (3)–(5) the constant B can be replaced by B_1 in those results, where

$$B_1 = B_1(r) = \frac{\|T_*^{-1}\|}{1 - c_1 r}, \qquad c_1 = 2 (p_1 + q_1) \|T_*^{-1}\|.$$
Then, we have

$$\bar{p}_1 \le p_1, \quad \bar{p}_2 \le p_2, \quad \bar{q}_1 \le q_1, \quad c \le c_1, \quad \bar{B}(t) \le B_1(t) \ \text{for each } t \in [0, 1/c_1), \quad \bar{C}_1 \le C_1, \quad \bar{C}_2 \le C_2,$$

and

$$\Omega_0 \subseteq \Omega, \qquad r^* \le \bar{r}^*,$$

which justify the advantages claimed in the Introduction of this study.

3. Semilocal Convergence

Theorem 3 ([12]).
We assume that S(x_0, r_0) ⊆ Ω, that the linear operator T_0 = F′((x_0 + y_0)/2) + Q(x_0, y_0), where x_0, y_0 ∈ Ω, is invertible, and that the Lipschitz conditions

$$\|T_0^{-1} (F'(y) - F'(x))\| \le 2 p_0 \|y - x\|, \tag{15}$$
$$\|T_0^{-1} (Q(x, y) - Q(u, v))\| \le q_0 (\|x - u\| + \|y - v\|) \tag{16}$$

are fulfilled. Let λ, μ (μ > λ) and r_0 be non-negative numbers such that

$$\|x_0 - y_0\| \le \lambda, \qquad \|T_0^{-1} (F(x_0) + G(x_0))\| \le \mu, \tag{17}$$
$$r_0 \ge \mu / (1 - \gamma), \qquad (p_0 + q_0)(2 r_0 - \lambda) < 1,$$
$$\gamma = \frac{(p_0 + q_0)(r_0 - \lambda) + 0.5 \, p_0 r_0}{1 - (p_0 + q_0)(2 r_0 - \lambda)}, \qquad 0 \le \gamma < 1.$$
Then, for each n = 0, 1, 2, …,

$$\|x_n - x_{n+1}\| \le t_n - t_{n+1}, \qquad \|y_n - x_{n+1}\| \le s_n - t_{n+1},$$
$$\|x_n - x^*\| \le t_n - t^*, \qquad \|y_n - x^*\| \le s_n - t^*,$$

where

$$t_0 = r_0, \quad s_0 = r_0 - \lambda, \quad t_1 = r_0 - \mu,$$
$$t_{n+1} - t_{n+2} = \frac{(p_0 + q_0)(s_n - t_{n+1}) + 0.5 \, p_0 (t_n - t_{n+1})}{1 - (p_0 + q_0)\big[(t_0 - t_{n+1}) + (s_0 - s_{n+1})\big]} \, (t_n - t_{n+1}), \tag{18}$$
$$t_{n+1} - s_{n+1} = \frac{(p_0 + q_0)(s_n - t_{n+1}) + 0.5 \, p_0 (t_n - t_{n+1})}{1 - (p_0 + q_0)\big[(t_0 - t_n) + (s_0 - s_n)\big]} \, (t_n - t_{n+1}), \tag{19}$$

and {t_n}_{n≥0}, {s_n}_{n≥0} are non-negative, decreasing sequences that converge to some t* such that r_0 − μ/(1 − γ) ≤ t* < t_0; the sequences {x_n}_{n≥0}, {y_n}_{n≥0} ⊂ S(x_0, t*) and converge to a solution x* of Equation (1).
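The behavior of the majorizing sequences {t_n}, {s_n} of Theorem 3 can be illustrated numerically. In the sketch below, the constants p_0, q_0, λ, μ, r_0 are hypothetical values chosen only to satisfy the hypotheses of the theorem; they are not taken from the paper.

```python
# Majorizing sequences {t_n}, {s_n} of Theorem 3, generated from their recurrences.
# The constants below are illustrative choices satisfying the hypotheses,
# not values taken from the paper.
p0, q0 = 0.1, 0.1
lam, mu, r0 = 0.1, 0.2, 0.3

gamma = ((p0 + q0) * (r0 - lam) + 0.5 * p0 * r0) / (1 - (p0 + q0) * (2 * r0 - lam))
assert 0 <= gamma < 1 and r0 >= mu / (1 - gamma) and (p0 + q0) * (2 * r0 - lam) < 1

t = [r0, r0 - mu]          # t_0, t_1
s = [r0 - lam]             # s_0
for n in range(20):
    num = (p0 + q0) * (s[n] - t[n + 1]) + 0.5 * p0 * (t[n] - t[n + 1])
    # s_{n+1} uses quantities with index n in its denominator
    s.append(t[n + 1] - num / (1 - (p0 + q0) * ((t[0] - t[n]) + (s[0] - s[n])))
             * (t[n] - t[n + 1]))
    # t_{n+2} uses the just-computed s_{n+1} in its denominator
    t.append(t[n + 1] - num / (1 - (p0 + q0) * ((t[0] - t[n + 1]) + (s[0] - s[n + 1])))
             * (t[n] - t[n + 1]))

# {t_n} is non-negative and decreasing, with limit t* >= r0 - mu/(1 - gamma)
print(t[:5])
```

The decrements t_n − t_{n+1} shrink rapidly, so the sequences settle quickly to their common limit, mirroring the fast convergence of the iterates they majorize.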
Next, we present the analogous improvements in the semilocal convergence case. Assume that for all x, y ∈ Ω

$$\|T_0^{-1} (F'(z) - F'(x))\| \le 2 \bar{p}_0 \|z - x\|, \quad z = \frac{x_0 + y_0}{2}, \tag{20}$$

and

$$\|T_0^{-1} (Q(x, y) - Q(x_0, y_0))\| \le \bar{q}_0 (\|x - x_0\| + \|y - y_0\|). \tag{21}$$

Set Ω_0 = Ω ∩ S(x_0, r̄_0), where r̄_0 = (1 + λ(p̄_0 + q̄_0)) / (2 (p̄_0 + q̄_0)). Define the parameter γ̄ and the sequences {t̄_n}, {s̄_n} for each n = 0, 1, 2, … by

$$\bar{\gamma} = \frac{(p_0^0 + q_0^0)(\bar{r}_0 - \lambda) + 0.5 \, p_0^0 \bar{r}_0}{1 - (\bar{p}_0 + \bar{q}_0)(2 \bar{r}_0 - \lambda)},$$
$$\bar{t}_0 = \bar{r}_0, \quad \bar{s}_0 = \bar{r}_0 - \lambda, \quad \bar{t}_1 = \bar{r}_0 - \mu,$$
$$\bar{t}_{n+1} - \bar{t}_{n+2} = \frac{(p_0^0 + q_0^0)(\bar{s}_n - \bar{t}_{n+1}) + 0.5 \, p_0^0 (\bar{t}_n - \bar{t}_{n+1})}{1 - (\bar{p}_0 + \bar{q}_0)\big[(\bar{t}_0 - \bar{t}_{n+1}) + (\bar{s}_0 - \bar{s}_{n+1})\big]} \, (\bar{t}_n - \bar{t}_{n+1}), \tag{22}$$
$$\bar{t}_{n+1} - \bar{s}_{n+1} = \frac{(p_0^0 + q_0^0)(\bar{s}_n - \bar{t}_{n+1}) + 0.5 \, p_0^0 (\bar{t}_n - \bar{t}_{n+1})}{1 - (\bar{p}_0 + \bar{q}_0)\big[(\bar{t}_0 - \bar{t}_n) + (\bar{s}_0 - \bar{s}_n)\big]} \, (\bar{t}_n - \bar{t}_{n+1}). \tag{23}$$
As in the local convergence case, we assume, instead of (15) and (16), the restricted Lipschitz-type conditions for each x, y, u, v ∈ Ω_0:

$$\|T_0^{-1} (F'(x) - F'(y))\| \le 2 p_0^0 \|x - y\|, \tag{24}$$
$$\|T_0^{-1} (Q(x, y) - Q(u, v))\| \le q_0^0 (\|x - u\| + \|y - v\|). \tag{25}$$
Then, instead of the estimate in [12] using (15) and (16),

$$\|T_0^{-1} [T_0 - T_{n+1}]\| \le \left\| T_0^{-1} \left( F'\!\left(\tfrac{x_0 + y_0}{2}\right) - F'\!\left(\tfrac{x_{n+1} + y_{n+1}}{2}\right) \right) \right\| + \left\| T_0^{-1} [Q(x_0, y_0) - Q(x_{n+1}, y_{n+1})] \right\|$$
$$\le 2 p_0 \left\| \tfrac{x_0 - x_{n+1} + y_0 - y_{n+1}}{2} \right\| + q_0 (\|x_0 - x_{n+1}\| + \|y_0 - y_{n+1}\|) \le (p_0 + q_0)(\|x_0 - x_{n+1}\| + \|y_0 - y_{n+1}\|)$$
$$\le (p_0 + q_0)(t_0 - t_{n+1} + s_0 - s_{n+1}) \le (p_0 + q_0)(t_0 + s_0) = (p_0 + q_0)(2 r_0 - \lambda) < 1, \tag{26}$$

we obtain more precise results using (20) and (21):

$$\|T_0^{-1} [T_0 - T_{n+1}]\| \le 2 \bar{p}_0 \left\| \tfrac{x_0 - x_{n+1} + y_0 - y_{n+1}}{2} \right\| + \bar{q}_0 (\|x_0 - x_{n+1}\| + \|y_0 - y_{n+1}\|) \le (\bar{p}_0 + \bar{q}_0)(\|x_0 - x_{n+1}\| + \|y_0 - y_{n+1}\|)$$
$$\le (\bar{p}_0 + \bar{q}_0)(\bar{t}_0 - \bar{t}_{n+1} + \bar{s}_0 - \bar{s}_{n+1}) \le (\bar{p}_0 + \bar{q}_0)(\bar{t}_0 + \bar{s}_0) = (\bar{p}_0 + \bar{q}_0)(2 \bar{t}_0 - \lambda) < 1, \tag{27}$$
since

$$\Omega_0 \subseteq \Omega, \quad \bar{p}_0 \le p_0, \quad \bar{q}_0 \le q_0, \quad p_0^0 \le p_0, \quad q_0^0 \le q_0, \quad \bar{\gamma} \le \gamma, \quad \text{and} \quad r_0 \le \bar{r}_0.$$
Then, by replacing p_0, q_0, r_0, γ, t_n, s_n, and (26) with p_0^0, q_0^0 (in the numerators of (18) and (19)) or p̄_0, q̄_0 (in the denominators of (18) and (19)), and with r̄_0, γ̄, t̄_n, s̄_n, and (27), respectively, we arrive at the following improvement of Theorem 3.
Theorem 4.
Assume, together with (17), (20), (21), (24), and (25), that r̄_0 ≥ μ/(1 − γ̄), (p̄_0 + q̄_0)(2 r̄_0 − λ) < 1, and γ̄ ∈ [0, 1). Then, for each n = 0, 1, 2, …,
$$\|x_n - x_{n+1}\| \le \bar{t}_n - \bar{t}_{n+1}, \qquad \|y_n - x_{n+1}\| \le \bar{s}_n - \bar{t}_{n+1},$$
$$\|x_n - x^*\| \le \bar{t}_n - \bar{t}^*, \qquad \|y_n - x^*\| \le \bar{s}_n - \bar{t}^*,$$

where the sequences {t̄_n}_{n≥0}, {s̄_n}_{n≥0} given in (22) and (23) are decreasing, non-negative sequences that converge to some t̄* such that r̄_0 − μ/(1 − γ̄) ≤ t̄* < t̄_0. Moreover, the sequences {x_n}_{n≥0}, {y_n}_{n≥0} ⊂ S(x_0, t̄*) for each n = 0, 1, 2, …, and lim_{n→∞} x_n = x*.
Remark 2.
It follows from (27), the hypotheses of Theorems 3 and 4, and a simple inductive argument that the following items hold:

$$t_n \le \bar{t}_n, \quad s_n \le \bar{s}_n, \quad 0 \le \bar{t}_n - \bar{t}_{n+1} \le t_n - t_{n+1}, \quad 0 \le \bar{s}_n - \bar{t}_{n+1} \le s_n - t_{n+1}, \quad \text{and} \quad t^* \le \bar{t}^*.$$
Hence, the new results extend the applicability of the method (2).
Remark 3.
If we choose F(x) = 0, then p_1 = 0 and p_2 = 0, and the estimates (6) and (7) reduce to similar ones in [7] for the case α = 1.
Remark 4.
Section 3 contains existence results. The uniqueness results are omitted, since they can be found in [2,6] but with center-Lipschitz constants replacing the larger Lipschitz constants.

4. Numerical Experiments

Let E_1 = E_2 = ℝ³ and Ω = S(x*, 1). Define the functions F and G for v = (v_1, v_2, v_3)^T on Ω by

$$F(v) = \left( e^{v_1} - 1, \ \frac{e - 1}{2} v_2^2 + v_2, \ v_3 \right)^T, \qquad G(v) = \left( |v_1|, |v_2|, |v_3| \right)^T,$$

and set H(v) = F(v) + G(v). Moreover, define a divided difference Q(·, ·) by

$$Q(v, \bar{v}) = \mathrm{diag}\left( \frac{|\bar{v}_1| - |v_1|}{\bar{v}_1 - v_1}, \ \frac{|\bar{v}_2| - |v_2|}{\bar{v}_2 - v_2}, \ \frac{|\bar{v}_3| - |v_3|}{\bar{v}_3 - v_3} \right)$$

if v_i ≠ v̄_i, i = 1, 2, 3; otherwise, set Q(v, v̄) = diag(1, 1, 1). Then, T_* = 2 diag(1, 1, 1), so ‖T_*^{−1}‖ = 0.5. Notice that x* = (0, 0, 0)^T solves the equation H(v) = 0. Furthermore, we have Ω_0 = S(x*, 2/(e + 1)), so
$$p_1 = \frac{e}{2}, \quad p_2 = e, \quad q_1 = 1, \quad B = B(t) = \frac{1}{2 (1 - c_1 t)},$$
$$b = 1, \quad \alpha = 1, \quad a = e - 1, \quad \bar{p}_1 = \bar{p}_2 = \frac{1}{2} e^{2/(e+1)}, \quad \bar{q}_1 = 1,$$

and Ω_0 is a strict subset of Ω. Moreover, the new parameters and functions are tighter than the old ones in [12]. Hence, the aforementioned advantages hold. In particular, r* ≈ 0.2265878 and r̄* ≈ 0.2880938.
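The two radii quoted above can be checked numerically. The sketch below assumes the radius equations q(r) = 1 and 3B(p_1 + q_1) r q(r) = 1 exactly as stated in Theorems 1 and 2, with B replaced by the r-dependent bounds B_1(r) and B̄(r), respectively, and recovers both values by bisection.

```python
import math

e = math.e
T_inv = 0.5  # ||T_*^{-1}||

def radius(p1, q1, p2, c, alpha=1.0):
    """Minimal positive zero of q(r) = 1 and 3 B(r) (p1+q1) r q(r) = 1 on (0, 1/c),
    with B(r) = T_inv / (1 - c r)."""
    B = lambda r: T_inv / (1.0 - c * r)
    q = lambda r: B(r) * ((p1 + q1) * r
                          + p2 * r**(1 + alpha) / (4 * (alpha + 1) * (alpha + 2)))

    def bisect(f):
        lo, hi = 1e-12, 1.0 / c - 1e-12   # f < 0 at lo, f -> +inf as r -> 1/c
        for _ in range(200):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
        return lo

    return min(bisect(lambda r: q(r) - 1.0),
               bisect(lambda r: 3.0 * B(r) * (p1 + q1) * r * q(r) - 1.0))

# Old constants: p1 = e/2, q1 = 1, p2 = e, c1 = 2 (p1 + q1) ||T_*^{-1}||
r_old = radius(e / 2, 1.0, e, 2.0 * (e / 2 + 1.0) * T_inv)
# New constants on Omega_0: p1bar = p2bar = exp(2/(e+1))/2, q1bar = 1, c = (a + 2b)/2
pbar = 0.5 * math.exp(2.0 / (e + 1.0))
r_new = radius(pbar, 1.0, pbar, (e - 1.0 + 2.0) * T_inv)

print(r_old, r_new)  # ~0.2265878 and ~0.2880938: the new radius is larger
```

In both cases the second equation yields the smaller zero, and the restricted constants enlarge the radius from about 0.2266 to about 0.2881.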
We now give the results obtained by method (2) for the approximate solution of the considered system of nonlinear equations. We chose the initial approximations x_0 = (0.1, 0.1, 0.1)^T · d (d a real number) and y_0 = x_0 + 0.0001. The iterative process was stopped once ‖x_{n+1} − x_n‖ ≤ 10^{−10} and ‖H(x_{n+1})‖ ≤ 10^{−10}. We used the Euclidean norm. The obtained results are shown in Table 1.
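A minimal implementation of this experiment is sketched below. Since F′(v) and Q(v, v̄) are diagonal here, each step of method (2) reduces to three scalar divisions; the computed step lengths agree with the d = 1 column of Table 1.

```python
import math

# The system of Section 4: H(v) = F(v) + G(v) = 0 with solution x* = (0, 0, 0)^T.
e = math.e
F  = lambda v: [math.exp(v[0]) - 1.0, (e - 1.0) / 2.0 * v[1]**2 + v[1], v[2]]
dF = lambda v: [math.exp(v[0]), (e - 1.0) * v[1] + 1.0, 1.0]   # diagonal of F'(v)
G  = lambda v: [abs(c) for c in v]
H  = lambda v: [f + g for f, g in zip(F(v), G(v))]

def Q(v, w):
    # diagonal divided difference of G; equal components get the value 1
    return [(abs(wi) - abs(vi)) / (wi - vi) if wi != vi else 1.0
            for vi, wi in zip(v, w)]

norm = lambda v: math.sqrt(sum(c * c for c in v))

d = 1.0
x = [0.1 * d] * 3
y = [xi + 0.0001 for xi in x]
steps = []
for n in range(50):
    mid = [(xi + yi) / 2.0 for xi, yi in zip(x, y)]
    T = [a + b for a, b in zip(dF(mid), Q(x, y))]   # diagonal operator of (2)
    x_new = [xi - h / t for xi, h, t in zip(x, H(x), T)]
    y_new = [xi - h / t for xi, h, t in zip(x_new, H(x_new), T)]
    steps.append(norm([a - b for a, b in zip(x_new, x)]))
    x, y = x_new, y_new
    if steps[-1] <= 1e-10 and norm(H(x)) <= 1e-10:
        break

print(steps[:3])  # ~0.1694750, 0.0047049, 0.0000005, as in Table 1 (d = 1)
```

For d = 1 the stopping criterion is met after four iterations, matching the length of the first column of Table 1.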

5. Conclusions

The convergence region of iterative methods is, in general, small under Lipschitz-type conditions, leading to a limited choice of initial points. Therefore, extending the choice of initial points without imposing additional, more restrictive, conditions than before is extremely important in computational sciences. This difficult task has been achieved by defining a convergence region where the iterates lie, that is more restricted than before, ensuring the Lipschitz constants are at least as small as in previous works. Hence, we achieve: more initial points, fewer iterations to achieve a predetermined error accuracy, and a better knowledge of where the solution lies. These are obtained without additional cost because the new Lipschitz constants are special cases of the old ones. This technique can be applied to other iterative methods.

Author Contributions

Conceptualization, Editing, I.K.A.; Investigation, H.Y. and S.S.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K. A unifying local-semilocal convergence analysis and applications for two-point Newton-like methods in Banach space. J. Math. Anal. Appl. 2004, 298, 374–397.
  2. Argyros, I.K.; Magrenán, Á.A. Iterative Methods and Their Dynamics with Applications: A Contemporary Study; CRC Press: Boca Raton, FL, USA, 2017.
  3. Argyros, I.K.; Ren, H. On the convergence of modified Newton methods for solving equations containing a non-differentiable term. J. Comp. App. Math. 2009, 231, 897–906.
  4. Hernandez, M.A.; Rubio, M.J. A uniparametric family of iterative processes for solving nondifferentiable operators. J. Math. Anal. Appl. 2002, 275, 821–834.
  5. Ren, H.; Argyros, I.K. A new semilocal convergence theorem for a fast iterative method with nondifferentiable operators. Appl. Math. Comp. 2010, 34, 39–46.
  6. Shakhno, S.M. Convergence of the two-step combined method and uniqueness of the solution of nonlinear operator equations. J. Comp. App. Math. 2014, 261, 378–386.
  7. Shakhno, S.M. On an iterative algorithm with superquadratic convergence for solving nonlinear operator equations. J. Comp. App. Math. 2009, 231, 222–235.
  8. Shakhno, S.M. On the difference method with quadratic convergence for solving nonlinear operator equations. Mat. Stud. 2006, 26, 105–110. (In Ukrainian)
  9. Shakhno, S.M. On two-step iterative process under generalized Lipschitz conditions for the first order divided differences. Math. Methods Phys. Fields 2009, 52, 59–66. (In Ukrainian)
  10. Shakhno, S.; Yarmola, H. On the two-step method for solving nonlinear equations with nondifferentiable operator. Proc. Appl. Math. Mech. 2012, 12, 617–618.
  11. Shakhno, S.M.; Yarmola, H.P. Two-point method for solving nonlinear equation with nondifferentiable operator. Mat. Stud. 2009, 36, 213–220.
  12. Shakhno, S.; Yarmola, H. Two-step method for solving nonlinear equations with nondifferentiable operator. J. Numer. Appl. Math. 2012, 109, 105–115.
  13. Werner, W. Über ein Verfahren der Ordnung 1 + √2 zur Nullstellenbestimmung. Numer. Math. 1979, 32, 333–342.
  14. Zabrejko, P.P.; Nguen, D.F. The majorant method in the theory of Newton–Kantorovich approximations and the Pták error estimates. Numer. Funct. Anal. Optim. 1987, 9, 671–684.
Table 1. Value of ‖x_n − x_{n−1}‖ for each iteration.

| n | d = 1 | d = 10 | d = 50 |
|---|-------|--------|--------|
| 1 | 0.1694750 | 1.4579613 | 5.9053855 |
| 2 | 0.0047049 | 0.3433874 | 1.9962504 |
| 3 | 0.0000005 | 0.0112749 | 1.3190118 |
| 4 | 4.284 × 10⁻¹⁶ | 0.0000037 | 1.0454772 |
| 5 | | 2.031 × 10⁻¹⁴ | 0.4157737 |
| 6 | | | 0.0260385 |
| 7 | | | 0.0000271 |
| 8 | | | 1.389 × 10⁻¹² |
