Article

Newton-Type Methods for Solving Equations in Banach Spaces: A Unified Approach

1 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Theory of Optimal Processes, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
3 Department of Mathematics, University of Houston, Houston, TX 77204, USA
4 Department of Computational Mathematics, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
* Authors to whom correspondence should be addressed.
Symmetry 2023, 15(1), 15; https://doi.org/10.3390/sym15010015
Submission received: 4 November 2022 / Revised: 15 December 2022 / Accepted: 17 December 2022 / Published: 21 December 2022
(This article belongs to the Section Mathematics)

Abstract:
A plethora of quantum physics problems are related to symmetry principles. Moreover, by using symmetry theory and mathematical modeling, these problems reduce to the iterative solution of finite-difference schemes and systems of nonlinear equations. In particular, Newton-type methods are introduced to generate sequences approximating simple solutions of nonlinear equations in the setting of Banach spaces. Specializations of these methods include the modified Newton method, Newton's method, and other single-step methods. The convergence of these methods is established under similar conditions. However, the convergence region is not large in general. That is why a unified semilocal convergence analysis is developed that handles these methods under weaker conditions than those considered previously. The approach extends the applicability of these methods to cases not covered before, but without introducing new conditions. The idea is to replace the Lipschitz parameters, or other parameters used, by smaller ones in order to force convergence in cases not possible before. It turns out that the error analysis is also extended. Moreover, the new idea does not depend on the method. That is why it can also be applied to other methods to extend their applicability. Numerical applications illustrate and test the convergence conditions.
MSC:
65G99; 47H99; 65H10; 49M15

1. Introduction

Let X and Y stand for Banach spaces and let Ω be a convex and nonempty subset of X. A plethora of applications from diverse disciplines may be solved, if reduced to a nonlinear equation of the form
$$F(x) = 0. \tag{1}$$
This reduction takes place by using mathematical modeling [1,2,3,4,5,6]. Then, a solution denoted by $x^* \in \Omega$ is to be found that answers the application. The solution may be a number, a vector, a matrix, or, in general, an operator. This task is very challenging in general. Obviously, the solution $x^*$ is desired in an analytical form. However, in practice, this is achievable only in rare cases. That is why researchers mostly develop iterative methods convergent to $x^*$ under some conditions on the initial information.
Popular methods are the modified Newton's method (MNM) and Newton's method (NM) defined, respectively, for a starting point $x_0 \in \Omega$ and all $n = 0, 1, 2, \dots$ by
$$x_{n+1} = x_n - F'(x_0)^{-1} F(x_n) \tag{2}$$
and
$$x_{n+1} = x_n - F'(x_n)^{-1} F(x_n). \tag{3}$$
Here, $F'$ denotes the Fréchet derivative of the operator $F$ [7].
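As a concrete scalar illustration of the two iterations (the test equation $e^x = 2$, the starting point, and the tolerances below are our own choices, not taken from the paper), the following sketch contrasts NM, which re-evaluates the derivative at every step, with MNM, which freezes it at $x_0$:

```python
import math

def newton(F, dF, x0, tol=1e-12, itmax=50):
    # Newton's method (NM): the derivative is re-evaluated at every iterate.
    x = x0
    for n in range(itmax):
        step = F(x) / dF(x)
        x -= step
        if abs(step) < tol:
            return x, n + 1
    return x, itmax

def modified_newton(F, dF, x0, tol=1e-12, itmax=200):
    # Modified Newton's method (MNM): the derivative is frozen at x0.
    x, d0 = x0, dF(x0)
    for n in range(itmax):
        step = F(x) / d0
        x -= step
        if abs(step) < tol:
            return x, n + 1
    return x, itmax

F = lambda x: math.exp(x) - 2.0   # sample equation with solution x* = ln 2
dF = lambda x: math.exp(x)

x_nm, k_nm = newton(F, dF, 1.0)
x_mnm, k_mnm = modified_newton(F, dF, 1.0)
```

Both runs reach the same root, but NM does so in a handful of quadratic steps, while MNM needs noticeably more iterations, reflecting its linear convergence.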
Numerous articles have been written on the convergence of these two methods [7,8,9,10,11]. The convergence conditions are mostly sufficient and in rare cases necessary. This observation indicates that there is a possibility to weaken the conditions, especially because these methods may converge even if these conditions are not fulfilled.
That is why in this article the objective is to consider alternatives.
Let us look at the main convergence conditions for these methods.
Remark 1. 
Suppose that there exist parameters $L_0 > 0$ and $L_2 > 0$, such that
$$\|F'(x_0)^{-1}(F'(v) - F'(x_0))\| \le L_0 \|v - x_0\| \tag{4}$$
and
$$\|F'(x_0)^{-1}(F'(w) - F'(v))\| \le L_2 \|w - v\| \tag{5}$$
for all $v, w \in \Omega$. Moreover, consider a parameter $\eta \ge 0$, such that
$$\|F'(x_0)^{-1} F(x_0)\| \le \eta. \tag{6}$$
Then, the corresponding sufficient convergence conditions for MNM and NM are, respectively [9,12,13],
$$L_0 \eta \le \frac{1}{2} \tag{7}$$
and
$$L_2 \eta \le \frac{1}{2}. \tag{8}$$
The conditions (7) and (8) are due to Kantorovich [11]. Clearly, it follows that
$$L_0 \le L_2. \tag{9}$$
Thus, we deduce that
$$L_2 \eta \le \frac{1}{2} \;\Rightarrow\; L_0 \eta \le \frac{1}{2}. \tag{10}$$
That is, the condition (7) is weaker than (8). However, the convergence of MNM is only linear, whereas that of NM is quadratic [11]. Moreover, one can construct even scalar equations, where both conditions (7) and (8) are not fulfilled. That is, convergence is not assured by either convergence result although these methods may converge. Let us look at an elementary but motivational example.
Example 1. 
Let us consider the domain $\Omega = [\lambda, 2 - \lambda]$ for $\lambda \in (0, \frac{1}{2})$ and the starting point $x_0 = 1$. Moreover, we define the function $\varphi: \Omega \to \mathbb{R}$ by
$$\varphi(t) = t^3 - \lambda. \tag{11}$$
The conditions (4)–(6) are fulfilled if $L_0 = 3 - \lambda < L_2 = 2(2 - \lambda)$ and $\eta = \frac{1}{3}(1 - \lambda)$. By plugging these values into the conditions (7) and (8) and solving for the parameter λ, we deduce that (8) does not hold for any $\lambda \in (0, \frac{1}{2})$, whereas (7) does hold provided that $\lambda \in \left(\frac{4 - \sqrt{10}}{2}, \frac{1}{2}\right)$. However, the convergence is only linear in the MNM case.
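The two ranges of λ in Example 1 can be checked numerically. A short sketch using the constants of the example (the grid resolution is our own choice):

```python
import math

def L0(lam): return 3.0 - lam            # center-Lipschitz constant
def L2(lam): return 2.0 * (2.0 - lam)    # Lipschitz constant
def eta(lam): return (1.0 - lam) / 3.0   # bound for |F'(x0)^{-1} F(x0)|, x0 = 1

lams = [k / 1000.0 for k in range(1, 500)]
# Kantorovich condition (8) fails on the whole interval (0, 1/2) ...
kantorovich_ok = [lam for lam in lams if L2(lam) * eta(lam) <= 0.5]
# ... whereas the center condition (7) holds to the right of (4 - sqrt(10))/2.
center_ok = [lam for lam in lams if L0(lam) * eta(lam) <= 0.5]
threshold = (4.0 - math.sqrt(10.0)) / 2.0
```

On the chosen grid, no λ satisfies (8), while the smallest λ satisfying (7) lies just above the analytic threshold $(4 - \sqrt{10})/2 \approx 0.4189$.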
Remark 2. 
In view of the Example 1, the question arises: Can we weaken the condition (8) but maintain the quadratic convergence of NM?
The answer was given in [1,9,12,13] and it is positive. In those articles we looked at condition (8) and realized that a weakening can take place if in condition (8) we replace:
Case 1. Parameter L 2 by a smaller one;
Case 2. Parameter η by a smaller one; and
Case 3. Parameters L 2 and η by smaller ones.
Positive results for Case 1 are reported in [1,11].
Additional benefits include more precise error estimates on the distances $\|x_{n+1} - x_n\|$, $\|x^* - x_n\|$ and an at least as extended uniqueness ball. The novelty is that all these benefits are achieved without additional conditions. Relevant research can be found in [14,15,16,17].
In this article, we present similar contributions for Cases 2 and 3.
The idea is to replace NM with a Newton-type method (NTM) defined for $x_0 \in \Omega$, $M_0 = x_0$ and all $n = 0, 1, 2, \dots$ by
$$x_{n+1} = x_n - F'(M_n)^{-1} b_n F(H_n), \tag{12}$$
where $M, H: \Omega \to \Omega$ are continuous operators and $\{b_n\}$ is a bounded sequence of nonzero parameters. The notation $M_n, H_n$ stands for $M_n = M(x_n)$ and $H_n = H(x_n)$ with $M_0 = x_0$.
Next, we present a general auxiliary result for the convergence of iterative methods.
Lemma 1. 
Let $X_0$ and $Y_0$ be normed spaces and $F: D \subset X_0 \to Y_0$ be a continuous operator, where the set D is open. Define the NTM
$$y_0 \in D, \quad y_{n+1} = y_n - E_n F(H(y_n)), \tag{13}$$
where $\{E_n\}$ is a sequence of invertible linear operators.
Suppose
(i) the sequence $\{E_n^{-1}\}$ exists and is bounded
and
(ii) the sequence $\{y_n\}$ is Cauchy.
Then, there exists a parameter $C > 0$ such that for all $n = 0, 1, 2, \dots$,
$$\|E_n^{-1}\| \le C, \quad y^* := \lim_{n \to +\infty} y_n \ \text{exists and} \ F(H(y^*)) = 0. \tag{14}$$
Proof. 
The sequence $\{E_n^{-1}\}$ exists and is bounded by the hypothesis. Thus, (14) holds. Then, it follows by the method (13) for $n \to +\infty$ that
$$\|F(H(y_n))\| \le \|E_n^{-1}\| \, \|y_{n+1} - y_n\| \to 0,$$
since the sequence $\{y_n\}$ is Cauchy.
Thus, we deduce
$$0 = \lim_{n \to \infty} F(H(y_n)) = F\Big(H\big(\lim_{n \to \infty} y_n\big)\Big) = F(H(y^*)). \ \square$$
Remark 3. 
(a) The space $X_0$ does not have to be complete for the sequence $\{y_n\}$ to converge. In the case of the method (12), set $E_n = b_n F'(M_n)^{-1}$. Then, $H(y^*)$ solves the equation $F(x) = 0$.
(b) Possible choices for H are $H(x) = x - F'(x_0)^{-1} F(x)$ or $H(x) = x - F'(x)^{-1} F(x)$.
(c) Special cases of NTM are MNM (if $M_n = x_0$, $H_n = x_n$ and $b_n = 1$ for all $n = 0, 1, 2, \dots$) and NM (if $M_n = H_n = x_n$ and $b_n = 1$ for all $n = 0, 1, 2, \dots$).
Other choices lead to Newton-like methods [1]. That is why, by studying the convergence of the method NTM, we also unify the convergence of its specializations. Moreover, we may weaken the convergence criteria and improve the error bounds or the information on the uniqueness of the solution $x^*$, at least in some cases (see Section 4). Notice that the smallness of η is determined by $\|F'(M_0)^{-1} b_0 F(H_0)\| \le \mu$ for some $\mu \ge 0$. Then, in some cases, this parameter is such that
$$\mu < \eta.$$
Hence, this case is favorable to our expectations.
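To make the specializations of Remark 3(c) concrete, here is a scalar sketch of the general iteration (12); the test equation $x^3 = 8$ and the starting point are our own illustrative choices:

```python
# A sketch of the Newton-type method (12): x_{n+1} = x_n - F'(M_n)^{-1} b_n F(H_n),
# in the scalar case. The maps M, H and the weights b_n are user-supplied; the
# choices below recover MNM and NM as in Remark 3(c).
def ntm(F, dF, x0, M, H, b, tol=1e-12, itmax=200):
    x = x0
    for n in range(itmax):
        step = b(n) * F(H(x)) / dF(M(x, x0))
        x -= step
        if abs(step) < tol:
            return x
    return x

F = lambda x: x**3 - 8.0          # sample equation, solution x* = 2
dF = lambda x: 3.0 * x**2

# NM: M_n = H_n = x_n, b_n = 1.
x_nm = ntm(F, dF, 3.0, M=lambda x, x0: x, H=lambda x: x, b=lambda n: 1.0)
# MNM: M_n = x_0, H_n = x_n, b_n = 1.
x_mnm = ntm(F, dF, 3.0, M=lambda x, x0: x0, H=lambda x: x, b=lambda n: 1.0)
```

Other choices of M, H and $b_n$ plug into the same driver, which is the unification the paper exploits.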
The semilocal convergence for method (12) is given in Section 2, followed by the local convergence in Section 3. The examples and the concluding remarks appear in Section 4 and Section 5, respectively.

2. The Semi-Local Convergence of the Method NTM

We introduce certain parameters and real sequences that are important in the convergence of the method (12). Let $\eta, d_j, L_1 \ge 0$ for $j = 1, 2, \dots, 7$ be given parameters. Define the parameters
$$\delta_0 = d_4 d_7, \quad \delta_1 = \frac{L_1 d_1 d_6}{2}, \quad \delta_2 = d_6 (L_1 d_2 + d_4 d_5 d_7), \quad \delta_3 = d_6 (L_1 d_3 + d_5). \tag{15}$$
Some of these parameters can be zero (see also Example 2). Moreover, define the sequences for all $n = 0, 1, 2, \dots$
$$t_0 = 0, \quad t_1 = \eta, \quad a_{n+1} = \delta_1 (t_{n+1} - t_n) + \delta_2 t_n + \delta_3 \quad \text{and} \quad t_{n+2} = t_{n+1} + \frac{a_{n+1} (t_{n+1} - t_n)}{1 - \delta_0 t_{n+1}}. \tag{16}$$
This sequence appears often in the study of Newton-like methods [1].
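The recursion (16) is easy to tabulate. The sketch below generates $\{t_n\}$ for illustrative parameter values of our own choosing and checks the monotone convergence asserted in Lemma 2:

```python
# Generates the majorizing sequence (16) and reports whether the denominator
# stayed positive, i.e. whether condition (17) held along the way.
def majorizing(eta, d0, d1, d2, d3, N=60):
    # d0..d3 stand for delta_0..delta_3 of (15)
    t = [0.0, eta]
    for n in range(N):
        a = d1 * (t[-1] - t[-2]) + d2 * t[-2] + d3
        denom = 1.0 - d0 * t[-1]
        if denom <= 0.0:                 # condition (17) violated
            return t, False
        t.append(t[-1] + a * (t[-1] - t[-2]) / denom)
    return t, True

# Newton-type choice (cf. Example 2): delta_0 = d7, delta_1 = L1/2,
# delta_2 = delta_3 = 0; the numbers are illustrative only.
seq, ok = majorizing(eta=0.15, d0=2.0, d1=1.25, d2=0.0, d3=0.0)
```

For these values the sequence increases monotonically and stalls well below $1/\delta_0 = 0.5$, as Lemma 2 predicts.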
The first general convergence result for the sequence { t n } follows.
Lemma 2. 
Suppose that for $\delta_0 \ne 0$ and all $n = 0, 1, 2, \dots$
$$t_{n+1} < \frac{1}{\delta_0}. \tag{17}$$
Then, the sequence $\{t_n\}$ generated by the Formula (16) is convergent and satisfies $0 \le t_n \le t_{n+1}$ and $\lim_{n \to \infty} t_n = t^* \in \left[0, \frac{1}{\delta_0}\right]$.
Proof. 
It follows by (16) and (17) that the sequence $\{t_n\}$ is nondecreasing and also bounded from above by $\frac{1}{\delta_0}$, and as such it is convergent to $t^*$. □
Remark 4. 
We can provide some stronger alternatives to the verification of (17). It is convenient to introduce some polynomials and functions defined on the interval $[0, 1)$ in order to show the second convergence result for the sequence $\{t_n\}$ as follows:
$$p_2(t) = \delta_0 t^2 + (\delta_1 + \delta_2) t - \delta_1,$$
$$g_{n+1}(t) = \delta_1 \eta t^{n+1} + \delta_2 (1 + t + \dots + t^{n+1}) \eta + \delta_0 t (1 + t + \dots + t^{n+1}) \eta - t + \delta_3.$$
Thus, we get by these definitions that
$$g_{n+1}(t) - g_n(t) = p_2(t) t^n \eta.$$
Define the function g on the interval $[0, 1)$ by
$$g(t) = \lim_{n \to \infty} g_n(t).$$
Then, we get
$$g(t) = \frac{\delta_2 \eta}{1 - t} + \frac{\delta_0 t \eta}{1 - t} - t + \delta_3.$$
Set
$$\delta_4 = 1 + \delta_3 - \delta_0 \eta, \quad \delta_5 = \delta_3 + \delta_2 \eta, \quad \Delta = \delta_4^2 - 4 \delta_5 \quad \text{and} \quad f(t) = t^2 - \delta_4 t + \delta_5.$$
Then, the function g can be rewritten as
$$g(t) = \frac{f(t)}{1 - t}.$$
By these definitions, $p_2(0) = -\delta_1 < 0$ and $p_2(1) = \delta_0 + \delta_2 > 0$. It then follows by the intermediate value theorem that the polynomial $p_2$ has zeros in the interval $(0, 1)$. Notice that by Descartes' rule of signs there is only one such zero, which we denote by γ.
Suppose
$(I_1)$: $\delta_4 > 0$, $0 \le \dfrac{\delta_1 \eta + \delta_3}{1 - \delta_0 \eta} \le \gamma$ and $f(\gamma) \le 0$;
or
$(I_2)$: $\delta_4 > 0$, $0 \le \dfrac{\delta_1 \eta + \delta_3}{1 - \delta_0 \eta} \le \gamma$, $\Delta \ge 0$ and $r_1 \le \gamma$, where $r_1$ is the smallest positive solution of $f(t) = 0$, assured to exist by $\Delta \ge 0$;
or
$(I_3)$: $\delta_4 > 0$, $0 \le \dfrac{\delta_1 \eta + \delta_3}{1 - \delta_0 \eta} \le \gamma$, $\Delta \ge 0$ and $f\left(\dfrac{\delta_4}{2}\right) \le 0$.
Notice that the conditions $(I_1)$ are verified in Example 2 and used in Theorem 1, which follows.
Lemma 3. 
Suppose that any of the conditions $(I_1)$–$(I_3)$ hold. Then, the sequence $\{t_n\}$ generated by the Formula (16) is convergent with $0 \le t_n \le t_{n+1}$, $\lim_{n \to \infty} t_n = t^* \in \left[0, \frac{\eta}{1 - \gamma}\right]$ and
$$0 \le t_{n+2} - t_{n+1} \le \gamma (t_{n+1} - t_n) \le \gamma^{n+1} (t_1 - t_0). \tag{18}$$
Proof. 
Mathematical induction is utilized to show for all $k = 0, 1, 2, \dots$ the assertion
$$0 \le \frac{\delta_1 (t_{k+1} - t_k) + \delta_2 t_k + \delta_3}{1 - \delta_0 t_{k+1}} \le \gamma. \tag{19}$$
This assertion holds if $k = 0$, by (16) and the second condition in $(I_1)$–$(I_3)$. It follows that $0 \le t_2 - t_1 \le \gamma (t_1 - t_0)$ and $t_2 \le t_1 + \gamma (t_1 - t_0) = \frac{1 - \gamma^2}{1 - \gamma} \eta < \frac{\eta}{1 - \gamma}$. Suppose that the assertion (19) holds for all integer values smaller than k. Then, we get $0 \le t_{k+1} - t_k \le \gamma^k \eta$ and $t_{k+1} \le \frac{1 - \gamma^{k+1}}{1 - \gamma} \eta < \frac{\eta}{1 - \gamma}$. Then, evidently (19) holds if
$$\delta_1 \gamma^k \eta + \delta_2 \frac{1 - \gamma^{k+1}}{1 - \gamma} \eta + \delta_0 \gamma \frac{1 - \gamma^{k+1}}{1 - \gamma} \eta + \delta_3 - \gamma \le 0. \tag{20}$$
The assertion (20) motivates the introduction of the recurrent polynomials $g_k$ and the showing, instead of (20), that
$$g_k(\gamma) \le 0. \tag{21}$$
However, $g_{k+1}(\gamma) = g_k(\gamma)$, because $p_2(\gamma) = 0$. Thus, we have $g(\gamma) = g_k(\gamma)$.
Consequently, the estimate $g(\gamma) \le 0$ can be shown instead of (21), which is true by any of the last conditions in $(I_1)$–$(I_3)$.
The induction for the assertion is completed. Therefore, the sequence $\{t_n\}$ is nondecreasing and bounded from above by $\frac{\eta}{1 - \gamma}$. Hence, it is convergent to some $t^* \in \left[0, \frac{\eta}{1 - \gamma}\right]$. □
The notation $U(x, a)$, $U[x, a]$ is used for the open and closed ball in X of center x and radius $a > 0$, respectively. We denote by $\mathcal{L}(Y, X)$ the space of bounded linear operators from Y to X.
The following set of conditions is used in the semi-local convergence of NTM.
$(A_1)$ There exist $x_0 \in \Omega$ and $\eta \ge 0$, such that
$$F'(M_0)^{-1} \in \mathcal{L}(Y, X) \quad \text{and} \quad \|F'(M_0)^{-1} b_0 F(H_0)\| \le \eta.$$
$(A_2)$
$$\|F'(M_0)^{-1}(F'(M(x)) - F'(M_0))\| \le d_7 \|M(x) - M(x_0)\|$$
and
$$\|M(x) - M(x_0)\| \le d_4 \|x - x_0\| \ \text{for all} \ x \in \Omega \ \text{and some} \ d_4 \in [0, 1).$$
Define the region $\Omega_1$ as $\Omega_1 = \Omega \cap U\left(M_0, \frac{1}{\delta_0}\right)$ if $\delta_0 \ne 0$, and $\Omega_1 = \Omega$ if $\delta_0 = 0$.
$(A_3)$
$$\|H(y) - H(x)\| \le d_1 \|y - x\|,$$
$$\|H(x) - M(x) - (H(x_0) - M(x_0))\| \le d_2 \|x - x_0\|,$$
$$\|H(x_0) - M(x_0)\| \le d_3, \quad \|H(x) - M(x_0)\| < \frac{1}{\delta_0},$$
$$|1 - b_n| \le d_5, \quad |b_n| \le d_6,$$
$$\|F'(M_0)^{-1}(F'(y) - F'(x))\| \le L_1 \|y - x\| \ \text{for all} \ x, y \in \Omega_1.$$
$(A_4)$ The conditions of any of the last two lemmas are fulfilled,
and
$(A_5)$ $U[M_0, t^*] \subseteq \Omega$.
Next, the semilocal convergence of the method NTM follows.
Theorem 1. 
Suppose that the conditions $(A_1)$–$(A_5)$ and any of the conditions $(I_i)$, $i = 1, 2, 3$, or (17) hold. Then, the sequence $\{x_n\}$ generated by the NTM is well defined in $U(M_0, t^*)$, remains in $U(M_0, t^*)$ and is convergent to some $x^* \in U[M_0, t^*]$ such that $H(x^*)$ solves the equation $F(x) = 0$. Moreover, the following assertion holds:
$$\|x^* - x_n\| \le t^* - t_n. \tag{22}$$
Proof. 
Mathematical induction is used to show
$$\|x_{k+1} - x_k\| \le t_{k+1} - t_k \tag{23}$$
for all $k = 0, 1, 2, \dots$.
The definition of (16) and the condition $(A_1)$ imply
$$\|x_1 - M_0\| = \|x_1 - x_0\| = \|F'(M_0)^{-1} b_0 F(H_0)\| \le \eta = t_1 - t_0 = t_1 < t^*,$$
so the iterate $x_1 \in U(M_0, t^*)$ and the assertion (23) holds for $k = 0$. Let $M(x_k) \in U(M_0, t^*)$. It follows from the conditions $(A_1)$–$(A_3)$ that
$$\|F'(M_0)^{-1}(F'(M(x_k)) - F'(M(x_0)))\| \le d_7 \|M(x_k) - M(x_0)\| \le d_7 d_4 \|x_k - x_0\| \le \delta_0 t^* < 1. \tag{24}$$
Then, the estimate (24) and the perturbation lemma by Banach on linear invertible operators [11,18,19] assert that $F'(M(x_k))^{-1} \in \mathcal{L}(Y, X)$ and
$$\|F'(M(x_k))^{-1} F'(M_0)\| \le \frac{1}{1 - \delta_0 \|x_k - x_0\|}. \tag{25}$$
Moreover, the iterate $x_{k+1}$ is well defined.
The definition of NTM implies the identity
$$\begin{aligned} F(H_{k+1}) ={}& F(H_{k+1}) - F(H_k) - F'(H_k)(x_{k+1} - x_k) + (F'(H_k) - F'(M_k) b_k)(x_{k+1} - x_k) \\ ={}& F(H_{k+1}) - F(H_k) - F'(H_k)(x_{k+1} - x_k) + (F'(H_k) - F'(M_k))(x_{k+1} - x_k) \\ & + F'(M_k)(1 - b_k)(x_{k+1} - x_k). \end{aligned} \tag{26}$$
By applying the condition $(A_3)$ on the identity (26), we obtain in turn the estimates
$$\|H_{k+1} - H_k\| \le d_1 \|x_{k+1} - x_k\| \le d_1 (t_{k+1} - t_k),$$
$$\|F'(M_0)^{-1}(F(H_{k+1}) - F(H_k) - F'(H_k)(x_{k+1} - x_k))\| \le \int_0^1 \|F'(M_0)^{-1}(F'(H_k + \theta(H_{k+1} - H_k)) - F'(H_k))\| \, d\theta \, \|x_{k+1} - x_k\| \le \frac{L_1 d_1}{2} \|x_{k+1} - x_k\|^2 \le \frac{L_1 d_1}{2} (t_{k+1} - t_k)^2,$$
$$\|H_k - M_k\| = \|(H(x_k) - M(x_k) - (H(x_0) - M(x_0))) + (H(x_0) - M(x_0))\| \le \|H(x_k) - M(x_k) - (H(x_0) - M(x_0))\| + \|H(x_0) - M(x_0)\| \le d_2 \|x_k - x_0\| + d_3 \le d_2 t_k + d_3$$
and
$$\|F'(M_0)^{-1} F'(M_k)\| \, |1 - b_k| \le \left(\|F'(M_0)^{-1}(F'(M_k) - F'(M_0))\| + 1\right) |1 - b_k| \le (d_7 d_4 \|x_k - x_0\| + 1) d_5 \le d_4 d_5 d_7 t_k + d_5.$$
By summing up the preceding estimates, we get
$$|b_{k+1}| \, \|F'(M_0)^{-1} F(H_{k+1})\| \le a_{k+1}(t_{k+1} - t_k), \tag{27}$$
and thus
$$\|x_{k+2} - x_{k+1}\| \le \|F'(M_{k+1})^{-1} F'(M_0)\| \, \|F'(M_0)^{-1} F(H_{k+1}) b_{k+1}\| \le \frac{a_{k+1}(t_{k+1} - t_k)}{1 - \delta_0 t_{k+1}} = t_{k+2} - t_{k+1},$$
and
$$\|x_{k+2} - M_0\| \le \|x_{k+2} - x_{k+1}\| + \|x_{k+1} - M_0\| \le t_{k+2} - t_{k+1} + t_{k+1} - t_0 = t_{k+2} < t^*.$$
Hence, the iterate $x_{k+2} \in U(M_0, t^*)$ and the estimate (23) holds for all k. Moreover, the sequence $\{t_n\}$ is Cauchy, being convergent. Thus, the sequence $\{x_n\}$ is also Cauchy in the Banach space X. Hence, it is convergent to some $x^* \in U[M_0, t^*]$. Furthermore, by the continuity of F, Remark 3(a), and letting $k \to +\infty$ in the estimate (27), we obtain $F(H(x^*)) = 0$. □
Next, a result is presented concerning the uniqueness of the solution for the equation $F(x) = 0$.
Proposition 1. 
Suppose
(i) there exists a solution $z \in U(M_0, \rho_1)$ of the equation $F(x) = 0$ for some $\rho_1 > 0$;
(ii) the condition $(A_2)$ holds on the ball $U(M_0, \rho_1)$;
and
(iii) there exists $\rho_2 \ge \rho_1$, such that
$$\delta_0 (\rho_1 + \rho_2) < 2. \tag{28}$$
Define the region $\Omega_2 = \Omega \cap U[M_0, \rho_2]$. Then, the equation $F(x) = 0$ is uniquely solvable by the element z in the region $\Omega_2$.
Proof. 
Let $w \in \Omega_2$ be a solution of the equation $F(x) = 0$. Then, we have $M(z) = z$. Define the linear operator $S = \int_0^1 F'(M(z + \theta(w - z))) \, d\theta$. By using the conditions $(A_2)$ and (28), we obtain in turn that
$$\|F'(M_0)^{-1}(S - F'(M_0))\| \le d_7 d_4 \int_0^1 \left((1 - \theta) \|z - x_0\| + \theta \|w - x_0\|\right) d\theta \le \frac{d_7 d_4}{2}(\rho_1 + \rho_2) = \frac{\delta_0}{2}(\rho_1 + \rho_2) < 1, \tag{29}$$
where we also used
$$\|M(z + \theta(w - z)) - M(x_0)\| \le d_4 \|z + \theta(w - z) - x_0\| \le d_4 \left((1 - \theta)\|z - x_0\| + \theta \|w - x_0\|\right)$$
together with $\|z - x_0\| \le \rho_1$ and $\|w - x_0\| \le \rho_2$.
It follows by (29) that $S^{-1} \in \mathcal{L}(Y, X)$. Consequently, we can have
$$w - z = S^{-1}(F(w) - F(z)) = S^{-1}(0 - 0) = 0.$$
That is, we conclude that $w = z$. □
Remark 5. 
(i) The assumption $M_0 = x_0$ can be dropped if the second condition in $(A_1)$ is replaced by any of $\|x_0 - M_0\| < \eta$, $\|x_0 - M_0\| < \frac{1}{\delta_0}$ or $\|x_0 - M_0\| < t^*$, and $\|x_1 - M_0\| < \eta$, where $x_1 = x_0 - F'(M_0)^{-1} b_0 F(H_0)$. Then, the proof of Theorem 1 still goes through. The limit point $t^*$ can be replaced by $\frac{1}{\delta_0}$ or by $\frac{\eta}{1 - \gamma}$, given in closed form, in the condition $(A_5)$.
(ii) Notice that only the condition $(A_2)$ is used in Proposition 1. However, if all the conditions are used, then we can set $z = x^*$.
(iii) An alternative to the majorizing sequence (16) and the convergence condition can be obtained as follows.
Let $a = 2\delta_1$, $\sigma = \max\{a, \delta_0 + \delta_2\}$, $f_1(t) = \frac{\sigma}{2} t^2 - (1 - \delta_3) t + \eta$ and $q(t) = 1 - \delta_0 t$.
Moreover, define the sequence $\{u_n\}$ by
$$u_0 = 0, \quad u_{n+1} = u_n + \frac{f_1(u_n)}{q(u_n)}. \tag{30}$$
Then, if
$$\nu = \sigma \eta \le \frac{(1 - \delta_3)^2}{2} \quad \text{and} \quad \delta_3 < 1, \tag{31}$$
it was shown in [8] that the sequence $\{u_n\}$ is nondecreasing and convergent to
$$u_1^* = \frac{1 - \delta_3 - \sqrt{(1 - \delta_3)^2 - 2\nu}}{\sigma}.$$
The parameter $u_1^*$ is the smallest of the two roots of the quadratic polynomial $f_1$, with the largest being given by
$$u_2^* = \frac{1 - \delta_3 + \sqrt{(1 - \delta_3)^2 - 2\nu}}{\sigma}.$$
Moreover, simple induction shows
$$t_n \le u_n, \quad 0 \le t_{n+1} - t_n \le u_{n+1} - u_n \quad \text{and} \quad t^* \le u_1^*. \tag{32}$$
Hence, the sequence $\{u_n\}$ and the conditions (31) can replace $\{t_n\}$ and the conditions $(I_i)$ in Theorem 1, respectively.
Concerning the uniqueness of the solution for this case, we already have Proposition 1. However, the uniqueness of the solution (see [8]) can be established in the region
$$\Omega_3 = \begin{cases} U[M_0, u_1^*] \cap \Omega & \text{if} \ \nu = \frac{1}{2}(1 - \delta_3)^2, \\ U(x_0, u_2^*) \cap \Omega & \text{if} \ \nu < \frac{1}{2}(1 - \delta_3)^2. \end{cases}$$
In practice, we shall choose the largest region, the tighter sequence $\{t_n\}$ and $t^*$, provided that both the conditions $(I_i)$ and the conditions (31) hold.
(iv) The sequence of numbers $\{b_n\}$ can be replaced by a sequence $\{\bar{b}_n\}$ of continuous operators from Ω into X. In this case, the proof of Proposition 1 also goes through, provided that
$$F'(M(x))^{-1} b(x) = b(x) F'(M(x))^{-1} \ \text{for all} \ x \in \Omega.$$
That is, the operators $F'(M(x))^{-1}$ and $b(x)$ must commute.
(v) A more general method than (12) is given by the Picard iteration
$$x_{n+1} = P(T(x_n)),$$
where $P, T: X \to X$ are continuous operators and the operator T has the same fixed points as P. The operator P may or may not satisfy
$$\|P(y) - P(x)\| \le \alpha_0 \|y - x\| \ \text{for all} \ x, y \in \Omega$$
with $\alpha_0 \in (0, 1)$. However, suppose that
$$\|T(y) - T(x)\| \le \alpha_1 \|y - x\|$$
and
$$\|P(T(y)) - P(T(x))\| \le \alpha_1 \alpha_2 \|y - x\|$$
with $\alpha_1 \alpha_2 \in (0, 1)$. Then, according to the contraction mapping principle [2], the composed operator $P \circ T$ has a fixed point, provided also that it maps a closed ball into itself.
A possible choice for P and T can be
$$T(x_n) = P(x_n) = x_n - F'(M(x_n))^{-1} b_n F(H(x_n)).$$

3. Local Convergence

Let $l_0, l, l_1, l_2, l_3, l_4 \ge 0$ be given parameters with $l_1 \in [0, 1]$ and $l_3 l_4 < 1$. Moreover, define the parameter r by
$$r = \frac{1 - l_3 l_4}{l_0 + l \, l_2}, \tag{33}$$
provided that $l_0 + l \, l_2 \ne 0$. These parameters are connected with the operators appearing in the NTM through the conditions (C).
Suppose
$(C_1)$ there exists a solution $x^* \in \Omega$ of the equation $F(x) = 0$, such that $H(x^*) = M(x^*) = x^*$ and $F'(M(x^*))^{-1} \in \mathcal{L}(Y, X)$;
$(C_2)$ $\|F'(M(x^*))^{-1}(F'(M(x)) - F'(M(x^*)))\| \le l_0 \|x - x^*\|$ for all $x \in \Omega$.
Define the region $Q = U\left(x^*, \frac{1}{l_0}\right) \cap \Omega$ if $l_0 \ne 0$, and $Q = \Omega$ if $l_0 = 0$.
$(C_3)$
$$\|F'(M(x^*))^{-1}(F'(y) - F'(x))\| \le l \|y - x\|,$$
$$\|H(x) - H(x^*)\| \le l_1 \|x - x^*\| \ \text{for some} \ l_1 \in [0, 1],$$
$$\int_0^1 \|x^* + \theta(H(x) - x^*) - M(x)\| \, d\theta \le l_2 \|x - x^*\|, \quad |1 - b_n| \le l_3$$
and
$$\int_0^1 \|F'(M(x^*))^{-1} F'(x^* + \theta(H(x) - x^*))\| \, d\theta \le l_4$$
for each $x, y \in Q$;
and
$(C_4)$ $U(x^*, r) \subseteq \Omega$, where the parameter r is given by the Formula (33).
Next, the local convergence analysis uses the parameters “l” and the conditions ( C ) .
Theorem 2. 
Suppose that the conditions $(C_1)$–$(C_4)$ hold and the starting point $x_0 \in U(x^*, r) \setminus \{x^*\}$. Then, the sequence $\{x_n\}$ generated by NTM is such that $\{x_n\} \subset U(x^*, r)$, $\lim_{n \to +\infty} x_n = x^*$ and
$$\|x_{n+1} - x^*\| \le \beta_n \|x_n - x^*\| < r, \tag{34}$$
where
$$\beta_n = \frac{l \, l_2 \|x_n - x^*\| + l_3 l_4}{1 - l_0 \|x_n - x^*\|} \in [0, 1). \tag{35}$$
Proof. 
It follows by the conditions $(C_1)$, $(C_2)$, the hypothesis $x_0 \in U(x^*, r) \setminus \{x^*\}$ and the radius r that
$$\|F'(M(x^*))^{-1}(F'(M_0) - F'(M(x^*)))\| \le l_0 \|x_0 - x^*\| < 1. \tag{36}$$
Thus, the operator $F'(M_0)^{-1} \in \mathcal{L}(Y, X)$ and
$$\|F'(M_0)^{-1} F'(M(x^*))\| \le \frac{1}{1 - l_0 \|x_0 - x^*\|}. \tag{37}$$
Moreover, the iterate $x_1$ is well defined by NTM for $n = 0$, and we can write in turn that
$$\begin{aligned} x_1 - x^* &= x_0 - x^* - F'(M_0)^{-1} b_0 F(H_0) \\ &= F'(M_0)^{-1}\left[F'(M_0) - b_0 \int_0^1 F'(x^* + \theta(H_0 - x^*)) \, d\theta\right](x_0 - x^*) \\ &= F'(M_0)^{-1}\left[\left(F'(M_0) - \int_0^1 F'(x^* + \theta(H_0 - x^*)) \, d\theta\right) + (1 - b_0) \int_0^1 F'(x^* + \theta(H_0 - x^*)) \, d\theta\right](x_0 - x^*). \end{aligned}$$
By composing the expression in the bracket with $F'(M(x^*))^{-1}$ and using the conditions $(C_3)$, we see that it is bounded above by
$$l \int_0^1 \|x^* + \theta(H_0 - x^*) - M_0\| \, d\theta + |1 - b_0| \int_0^1 \|F'(M(x^*))^{-1} F'(x^* + \theta(H_0 - x^*))\| \, d\theta \le l \, l_2 \|x_0 - x^*\| + l_3 l_4,$$
leading, together with (36) and (37), to the estimate (34) for $n = 0$, where we also used that $\|H_0 - x^*\| = \|H(x_0) - H(x^*)\| \le l_1 \|x_0 - x^*\| < r$.
Hence, the iterate $x_1 \in U(x^*, r) \setminus \{x^*\}$.
Then, the induction for the estimate (34) is completed if we simply replace the iterates $x_0, x_1, \beta_1$ by $x_k, x_{k+1}, \beta_{k+1}$ in the preceding calculations.
Therefore, we have
$$\|x_{k+1} - x^*\| \le \beta \|x_k - x^*\| < r,$$
where $\beta = \frac{l \, l_2 \|x_0 - x^*\| + l_3 l_4}{1 - l_0 \|x_0 - x^*\|} \in [0, 1)$ implies that the iterate $x_{k+1} \in U(x^*, r) \setminus \{x^*\}$ and $\lim_{k \to +\infty} x_k = x^*$. □
Remark 6. 
The last condition in $(C_3)$ can be dropped in view of the alternative estimate
$$\int_0^1 \|F'(M(x^*))^{-1} F'(x^* + \theta(H_0 - x^*))\| \, d\theta \le \int_0^1 \|F'(M(x^*))^{-1}(F'(x^* + \theta(H_0 - x^*)) - F'(M(x^*)))\| \, d\theta + \|F'(M(x^*))^{-1} F'(M(x^*))\| \le 1 + \frac{l_0 l_1}{2} \|x_0 - x^*\|.$$
By replacing this estimate in the proof of Theorem 2, we see that the radius becomes
$$r_1 = \frac{1 - l_3}{l \, l_2 + \frac{l_0 l_1 l_3}{2} + l_0}, \tag{38}$$
where we suppose $l_3 \in [0, 1)$. Moreover, the new sequence $\{\beta_n^1\}$ is defined by
$$\beta_n^1 = \frac{l \, l_2 \|x_n - x^*\| + l_3 \left(1 + \frac{l_0 l_1}{2} \|x_n - x^*\|\right)}{1 - l_0 \|x_n - x^*\|} \in [0, 1).$$
Even at this generality, the Theorem 2 improves earlier results in the interesting case of NM. Indeed, we have in this case $l_1 = 1$, $l_3 = 0$, $l_2 = \frac{1}{2}$, $l_4 \in [0, +\infty)$, implying that
$$r = r_1 = \frac{2}{2 l_0 + l}.$$
The corresponding radius given independently by Traub [4] and Rheinboldt [2] is
$$r_0 = \frac{2}{3 l_5},$$
where $l_5$ satisfies
$$\|F'(x^*)^{-1}(F'(y) - F'(x))\| \le l_5 \|x - y\| \quad \text{for all} \ x, y \in \Omega.$$
However, then the estimate
$$r_0 \le r$$
holds, because $l_0 \le l_5$ and $l \le l_5$.
Let us look at the function F defined by $F(x) = e^x - 1$ for all $x \in U(x^*, 1)$. Then, we have, for $x^* = 0$, that $l_0 = e - 1$, $l_5 = e$, $Q = U\left(x^*, \frac{1}{l_0}\right)$ and $l = e^{\frac{1}{l_0}}$. Hence, we have
$$l_0 < l < l_5.$$
Moreover,
$$r_0 = 0.24 < r = 0.32.$$
Furthermore, the new sequence $\{\beta_n\}$ is tighter than $\{\beta_n^0\}$ given in [2,4] and defined by
$$\beta_n^0 = \frac{l_5 \|x_n - x^*\|}{2(1 - l_5 \|x_n - x^*\|)}.$$
Finally, notice that the second condition in $(C_3)$ can be replaced by $\|H(x) - x^*\| < \frac{1}{l_0}$. Then, it follows again that $H(x) \in Q$.
Proposition 2. 
Suppose that there exists a solution $x^* \in \Omega$ of the equation $F(x) = 0$ with $M(x^*) = x^*$, that the condition $(C_2)$ holds in the ball $U(x^*, \rho_5)$ for some $\rho_5 > 0$, and that there exists $\rho_6 \ge \rho_5$ such that
$$l_0 \rho_6 < 2. \tag{39}$$
Define the region $Q_1 = U[x^*, \rho_6] \cap \Omega$. Then, the only solution of the equation $F(x) = 0$ in the region $Q_1$ is $x^*$.
Proof. 
Define the linear operator $S = \int_0^1 F'(x^* + \theta(\bar{x} - x^*)) \, d\theta$ for some $\bar{x} \in Q_1$ with $F(\bar{x}) = 0$. It then follows by $(C_2)$ and (39) that
$$\|F'(M(x^*))^{-1}(S - F'(M(x^*)))\| \le \frac{l_0}{2} \|\bar{x} - x^*\| \le \frac{l_0}{2} \rho_6 < 1.$$
Then, by the continuity of F, the invertibility of S and the identity
$$\bar{x} - x^* = S^{-1}(F(\bar{x}) - F(x^*)) = S^{-1}(0) = 0,$$
we conclude that $\bar{x} = x^*$. □
Notice that if all the hypotheses of Theorem 2 hold, then we can choose $\rho_5 = r$.

4. Special Cases and Numerical Problems

The operators appearing in the method (12) are specialized in some interesting cases. Then, a favorable comparison is made with existing methods.
Example 2. 
Let us consider the case of NM. We shall verify the parameters in Theorem 1. It follows from (3) and (12) that the conditions $(A_1)$–$(A_3)$ are verified provided that $d_1 = d_4 = d_6 = 1$, $d_2 = d_3 = d_5 = 0$, with $d_7$ and $L_1$ to be determined once the operator is specified. The delta parameters are: $\delta_0 = d_7$, $\delta_1 = \frac{L_1}{2}$, $\delta_2 = \delta_3 = \delta_5 = 0$, $\delta_4 = 1 - d_7 \eta$, $\Delta = (1 - d_7 \eta)^2$ and $a_{n+1} = \frac{L_1}{2}(t_{n+1} - t_n)$. Then, the conditions $(I_1)$ reduce to
$$1 - d_7 \eta > 0,$$
$$0 \le \frac{L_1 \eta}{2(1 - d_7 \eta)} \le \gamma$$
and
$$\frac{d_7 \gamma \eta}{1 - \gamma} - \gamma \le 0,$$
respectively. This system of inequalities can be written, for
$$\gamma = \frac{2 L_1}{L_1 + \sqrt{L_1^2 + 8 d_7 L_1}}, \tag{40}$$
as
$$\bar{L} \eta \le \frac{1}{2}, \tag{41}$$
where
$$\bar{L} = \frac{1}{8}\left(4 d_7 + L_1 + \sqrt{L_1^2 + 8 d_7 L_1}\right). \tag{42}$$
Notice that $d_7 = L_0$, $L_1 \le L_2$ and $L_0 \le L_1$. It follows that
$$L_2 \eta \le \frac{1}{2} \;\Rightarrow\; \bar{L} \eta \le \frac{1}{2},$$
but not vice versa, unless $L_0 = L_1 = L_2$. Hence, we see that the general Theorem 1, when reduced, provides a weaker convergence criterion than Kantorovich's (8).
Let us return to Example 1 given in the introduction. Then, we have $\Omega_1 = \Omega \cap U\left(M_0, \frac{1}{\delta_0}\right) = U\left(M_0, \frac{1}{\delta_0}\right)$, because $\frac{1}{\delta_0} < 1 - \lambda$ for each $\lambda \in (0, \frac{1}{2})$. It follows by the last condition in $(A_3)$ that $L_1 = 2\left(1 + \frac{1}{L_0}\right) < L_2$ for each $\lambda \in (0, \frac{1}{2})$. The condition (41) is verified provided that $\lambda \in (0.4271907643, \frac{1}{2})$, which improves the convergence range for NM. Recall that the Kantorovich condition (8) does not hold for any $\lambda \in (0, \frac{1}{2})$.
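The weaker condition (41) can be probed numerically. The sketch below uses our reconstruction of the constants of Example 2 ($d_7 = L_0 = 3 - \lambda$, $L_1 = 2(1 + 1/L_0)$, $\eta = (1 - \lambda)/3$); the published interval for λ may rest on slightly sharper constants, but at $\lambda = 0.45$ both readings agree that (41) holds while the Kantorovich condition (8) fails:

```python
import math

def Lbar(lam):
    # L-bar of (42) with d7 = L0 and L1 restricted to Omega_1 (our reading).
    L0 = 3.0 - lam
    L1 = 2.0 * (1.0 + 1.0 / L0)
    return (4.0 * L0 + L1 + math.sqrt(L1**2 + 8.0 * L0 * L1)) / 8.0

def eta(lam):
    return (1.0 - lam) / 3.0

lam = 0.45
new_condition = Lbar(lam) * eta(lam) <= 0.5            # condition (41): holds
kantorovich = 2.0 * (2.0 - lam) * eta(lam) <= 0.5      # condition (8): fails
```

This is exactly the extension of applicability the paper advertises: convergence is guaranteed for values of λ where the classical criterion gives no information.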
Application 1. 
Set $b_n = 1$ and $H_n = x_n$. Then, NTM (12) reduces to
$$x_{n+1} = x_n - F'(M_n)^{-1} F(x_n). \tag{43}$$
Further special cases of the method (43) are Newton's method, if $M_n = x_n$, and the simplified Newton's method, provided that $M_n = x_0$. Other choices of the operators $M_n$ are possible [20,21,22].
An interesting choice seems to be
$$M_n = x^* + \mu_n (x_n - x^*) \ \text{for some} \ \mu_n \in [0, 1]. \tag{44}$$
Next, some local and semilocal convergence results are presented under these choices.
Theorem 3. 
Suppose
(i) the inverses $F'(M_n)^{-1}$ are well defined;
(ii) there exists a solution $x^* \in \Omega$ of the Equation (1);
(iii) there exists a parameter $K > 0$ such that for each $u, v \in \Omega$
$$\|F'(u) - F'(v)\| \le K \|u - v\|. \tag{45}$$
Then, the following assertions hold:
$$\|x_{n+1} - x^*\| \le \frac{1}{2} K \|F'(M_n)^{-1}\| \left(\|M_n - x_n\| + \|M_n - x^*\|\right) \|x_n - x^*\| \tag{46}$$
and
$$\|x_{n+1} - x^*\| \le \frac{1}{2} K \|F'(M_n)^{-1}\| \, \|x_n - x^*\|^2. \tag{47}$$
Moreover, if the operators $F'(M_n)^{-1}$ exist and are uniformly bounded, then the convergence order of the method (43) is two.
Proof. 
In view of the choice (44), we only need to show (46), because then (47) follows from it, since $\|M_n - x_n\| + \|M_n - x^*\| = (1 - \mu_n)\|x_n - x^*\| + \mu_n \|x_n - x^*\| = \|x_n - x^*\|$.
We can write
$$F(x_n) - F(x^*) - F'(M_n)(x_n - x^*) = \int_0^1 \left(F'(\tau x_n + (1 - \tau) x^*) - F'(M_n)\right) d\tau \, (x_n - x^*). \tag{48}$$
By applying (44) and (45) on (48), we get in turn
$$\|F(x_n) - F(x^*) - F'(M_n)(x_n - x^*)\| \le \frac{1}{2} K \left(\|M_n - x_n\| + \|M_n - x^*\|\right) \|x_n - x^*\|,$$
leading to (46) by (43). □
Remark 7. 
Set $\mu_n = 0$ in (44); then $M_n = x^*$. Moreover, set $T = F'(x^*)^{-1}$. Then, the method (43) further reduces to
$$x_{n+1} = x_n - T F(x_n). \tag{49}$$
Suppose
$$\|T\| \le K_1 \ \text{for some parameter} \ K_1 > 0. \tag{50}$$
Then, by Theorem 3, we deduce
$$\|x_{n+1} - x^*\| \le \frac{1}{2} K K_1 \|x_n - x^*\|^2. \tag{51}$$
Thus, the method (49) has convergence order two, as Newton's method, but the ease of the simplified Newton's method, as the operator T is computed only once. Method (49) can be used provided that there exists an operator h, such that
$$F'(x) = h(F(x)). \tag{52}$$
Notice that $F'(x^*) = h(0)$. However, $h(0)$ is known, because h is given. Hence, $F'(x^*)$ is determined. As an example, define the scalar function F to be
$$F(x) = e^x - \alpha \ \text{for some} \ \alpha > 0. \tag{53}$$
Then, we have $x^* = \ln \alpha$ and $F'(x) = F(x) + \alpha$.
A second local convergence result follows under the condition (52).
Proposition 3. 
Suppose
(i) the operator $T = h(0)^{-1}$ exists and $\|T\| \le K_1$;
(ii) there exists $x_0 \in \Omega$ such that
$$\varsigma \in (0, 1), \tag{54}$$
where
$$\varsigma = \frac{1}{2} K K_1 \|x_0 - x^*\|.$$
Then, the following assertions hold:
$$\|x_n - x^*\| \le \varsigma^{2^n - 1} \|x_0 - x^*\| \tag{55}$$
and
$$\lim_{n \to \infty} x_n = x^*.$$
Proof. 
Mathematical induction applied to (51) together with (54) gives the assertions immediately. □
Example 3. 
Method (49) for F, given by (53), is defined by
$$x_{n+1} = x_n - \frac{1}{\alpha}\left(e^{x_n} - \alpha\right)$$
and converges to $x^*$ with order two provided that $x_0 \in U(x^*, K_2)$ for $K_2 = \frac{2}{K K_1}$, because then ς satisfies (54).
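A quick numerical check of Example 3 (the value $\alpha = 2$ and the starting point are our own choices): the fixed operator $T = 1/\alpha$ replaces all derivative evaluations, yet convergence near $x^* = \ln \alpha$ remains fast.

```python
import math

# Method (49) specialized to F(x) = exp(x) - alpha: here T = F'(x*)^{-1} = 1/alpha
# is available in closed form via (52), so the iteration
#   x_{n+1} = x_n - (exp(x_n) - alpha)/alpha
# needs no derivative evaluations, while staying quadratic near x* = ln(alpha).
def simplified_exact_newton(alpha, x0, tol=1e-12, itmax=100):
    x = x0
    for n in range(itmax):
        step = (math.exp(x) - alpha) / alpha
        x -= step
        if abs(step) < tol:
            return x, n + 1
    return x, itmax

x, its = simplified_exact_newton(alpha=2.0, x0=0.5)
```

From $x_0 = 0.5$ the iterate reaches $\ln 2$ to machine precision in a handful of steps, consistent with the order-two bound (51).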
Proposition 4. 
Suppose
(i) the operator T exists with $\|T\| \le K_1$;
(ii) the operator h is δ-Lipschitz continuous and $\|F(x)\| \le \xi$ for some $\xi \ge 0$ and $\delta > 0$;
(iii) $\rho \in (0, 1)$, where $\rho = \xi \delta K_1$.
Then, the following assertions hold: the iteration (49) has a unique limit point $x^*$,
$$\|x_n - x^*\| \le \frac{\rho^n}{1 - \rho} \|x_1 - x_0\|$$
and $\lim_{n \to \infty} x_n = x^*$.
Proof. 
Define the operator $G(x) = h(0) x - F(x)$. Then, the method (49) can be written as $x_{n+1} = T G(x_n)$. By using
$$G'(x) = h(0) - F'(x) = h(0) - h(F(x))$$
and (ii), we obtain
$$\|G'(x)\| \le \delta \|F(x)\| \le \delta \xi.$$
Then, the result follows from the celebrated contraction mapping principle. □
So far, we presented local convergence results. Next, we provide a semilocal convergence result.
Notice that if h and F are δ- and ξ-Lipschitz continuous ($\xi > 0$), then by (52) the operator $F'$ is Lipschitz continuous with parameter $K = \delta \xi$. Moreover, we obtain
$$\|F(x)\| \le \|F(x_0)\| + \xi \|x - x_0\|.$$
Set $\varepsilon = \|x_0 - x\|$ and define the function
$$P(\varepsilon) = \delta K_1 \|F(x_0)\| + K_1 K \varepsilon.$$
Furthermore, define the parameters
$$\Delta_1 = \left(1 - \delta K_1 \|F(x_0)\|\right)^2 - 4 K_1 K \|x_1 - x_0\|, \quad r_1 = \frac{1 - \delta K_1 \|F(x_0)\| - \sqrt{\Delta_1}}{2 K_1 K}$$
and
$$r_2 = \frac{1 - \delta K_1 \|F(x_0)\|}{K_1 K}.$$
Theorem 4. 
Suppose
$$P(0) < 1 \quad \text{and} \quad \Delta_1 > 0.$$
Then, the Equation (1) has a solution $x^* \in U[x_0, r_1]$, which is unique in $U(x_0, r_2)$.
Proof. 
This follows immediately by the contraction mapping principle. □
Remark 8. 
The contraction mapping principle assures that
$$\|x_n - x^*\| \le p_1^n r_1,$$
where
$$p_1 = P(r_1) = \frac{1}{2}\left(1 + \delta K_1 \|F(x_0)\| - \sqrt{\Delta_1}\right).$$
However, by Theorem 3, the convergence order is two for $n = m, m + 1, \dots$, where m is the smallest integer satisfying
$$q = \frac{1}{2} K K_1 p_1^m r_1 < 1.$$
Hence, we have improved earlier works in this case.
Application 2. 
Let $F(x) = x - F_1(x)$ and consider the fixed-point problem
$$F_1(x) = x. \tag{58}$$
It is known that a fixed point $x^* \in \Omega$ satisfies $F_1(x^*) = x^*$. Then, clearly, the method [23,24]
$$x_{n+1} = x_n - F'\big((1 - \mu) x_n + \mu F_1(x_n)\big)^{-1}(x_n - F_1(x_n)), \quad 0 \le \mu \le 1, \tag{59}$$
is another specialization of the method (12). If $\mu = 0$, we get NM (3), while
$$x_{n+1} = x_n - F'\left(\frac{x_n + F_1(x_n)}{2}\right)^{-1}(x_n - F_1(x_n)) \tag{60}$$
and
$$x_{n+1} = x_n - \left[I - F_1'(F_1(x_n))\right]^{-1}(x_n - F_1(x_n)) \tag{61}$$
are obtained for $\mu = 0.5$ [23] and $\mu = 1$ (Stirling's method), respectively.
Example 4. 
Let us apply methods (3), (60) and (61) for solving the Equation (58) with function F 1 ( x ) defined by F 1 ( 1 ) ( x ) = 8 x and F 1 ( 2 ) ( x ) = x e x 1 15 .
Figure 1 contains graphs of the nonlinear functions F 1 ( 1 ) ( x ) , F 1 ( 2 ) ( x ) . The intersection point of the graphs y = x and y = F 1 ( x ) is the solution of the corresponding equation. From graphs (A) and (B) we see that x 1 = 2 and x 2 = 0 are solutions of the equations F 1 ( 1 ) ( x ) = x and F 1 ( 2 ) ( x ) = x , respectively. It is known that a condition F 1 ( x ) < 1 is sufficient for the convergence of methods (60) and (61) [23,24]. It is possible to find intervals on which this condition will hold for both cases (see Figure 2).
Table 1 shows the number of iterations needed to obtain the approximate solutions. The results were obtained under the stopping condition $\| F(x_n) \| \le 10^{-10}$. The initial approximations $x_0$ were chosen from the intervals $[x_1^*, 5]$ and $[x_2^*, 3.2]$. The approximations obtained at each iteration by the methods (60) and (61) remain in the specified intervals; therefore, the condition $|F_1'(x)| < 1$ was fulfilled. The obtained results show that the method (60) converges faster than Newton's and Stirling's methods.
Define the scalar function
$$F_1(x) = \begin{cases} \frac{3}{4} \sin x, & \text{if } x \le \pi \\ \varphi(x), & \text{if } x > \pi \end{cases}$$
and choose $x_0 = \pi$. Then, the method (60) gives the exact solution after only one iteration. The method (61) converges after four iterations. However, NM does not converge, provided that the function $\varphi$ connects smoothly with the other part of the function $F_1$ and $|\varphi'(x)| \le \frac{3}{4}$. Note that in the neighborhood of the point $x = \pi$, NM converges more slowly than methods (60) and (61). More advantages of the methods (60) and (61) over Newton's and other methods follow along the same lines as in Application 1. Some possible choices of the function $\varphi$ are $\varphi(x) = 1 - e^{-0.75 \sin x}$ and $\varphi(x) = \frac{3}{2\sqrt{2}} \cos\left( \frac{\pi}{4} + x \right) + \frac{3}{4}$ for each $x > \pi$.
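The one-step behavior of method (60) from $x_0 = \pi$ can be checked numerically. The sketch below uses the reconstructed piecewise definition with the continuation $\varphi(x) = 1 - e^{-0.75 \sin x}$; it is an illustration under these assumptions, not the authors' code:

```python
import math

def F1(x):
    # (3/4) sin x for x <= pi; a smooth continuation phi for x > pi
    # (phi(x) = 1 - exp(-0.75 sin x) is one of the suggested choices)
    if x <= math.pi:
        return 0.75 * math.sin(x)
    return 1.0 - math.exp(-0.75 * math.sin(x))

def dF(y):
    # derivative of F(x) = x - F1(x) on the matching branch
    if y <= math.pi:
        return 1.0 - 0.75 * math.cos(y)
    return 1.0 - 0.75 * math.cos(y) * math.exp(-0.75 * math.sin(y))

x0 = math.pi
# one step of method (60): evaluate F' at the midpoint of x_n and F1(x_n)
mid = 0.5 * (x0 + F1(x0))
x1 = x0 - (x0 - F1(x0)) / dF(mid)
print(x1)  # essentially the exact solution x* = 0, up to rounding
```

Here $F_1(\pi) = 0$, so the midpoint is $\pi/2$, where $F'(\pi/2) = 1$, and the first iterate already lands on the solution $x^* = 0$ of $x = \frac{3}{4} \sin x$.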

5. Conclusions

Newton-type methods are developed that specialize to many popular methods for solving nonlinear equations involving Banach space operators. The semilocal convergence analysis is based on Lipschitz conditions. The benefits of the new approach include weaker sufficient convergence conditions, better information on the location of the solution, and a more precise error analysis. The new idea is general, and it does not depend on the specific method. Therefore, it can also be used on the method (12) and its specializations under Hölder or generalized Lipschitz conditions on the operator F, as well as on other single-point or multipoint iterative methods such as Secant, Stirling's, Newton-like, Traub, and other methods [14,15,16,17,18,19,20,21,22]. That is the focus of our future research.

Author Contributions

Conceptualization, I.K.A., S.S., S.R. and H.Y.; methodology, I.K.A., S.S., S.R. and H.Y.; software, I.K.A., S.S., S.R. and H.Y.; validation, I.K.A., S.S., S.R. and H.Y.; formal analysis, I.K.A., S.S., S.R. and H.Y.; investigation, I.K.A., S.S., S.R. and H.Y.; resources, I.K.A., S.S., S.R. and H.Y.; data curation, I.K.A., S.S., S.R. and H.Y.; writing—original draft preparation, I.K.A., S.S., S.R. and H.Y.; writing—review and editing, I.K.A., S.S., S.R. and H.Y.; visualization, I.K.A., S.S., S.R. and H.Y.; supervision, I.K.A., S.S., S.R. and H.Y.; project administration, I.K.A., S.S., S.R. and H.Y.; funding acquisition, I.K.A., S.S., S.R. and H.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K. The Theory and Applications of Iterative Methods, 2nd ed.; Engineering Series; CRC-Taylor and Francis Publ. Group: Boca Raton, FL, USA, 2022. [Google Scholar]
  2. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  3. Dennis, J.E., Jr.; Schnabel, R.B. Numerical Methods for Unconstrained Optimization and Nonlinear Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1983. [Google Scholar]
  4. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Hoboken, NJ, USA, 1964. [Google Scholar]
  5. Ezquerro, J.A.; Hernández-Verón, M.A. Newton’s Method: An Updated Approach of Kantorovich’s Theory. Frontiers in Mathematics; Birkhäuser/Springer: Cham, Switzerland, 2017. [Google Scholar]
  6. Verma, R. New Trends in Fractional Programming; Nova Science Publisher: New York, NY, USA, 2019. [Google Scholar]
  7. Potra, F.A.; Pták, V. Nondiscrete induction and iterative processes. In Research Notes in Mathematics; Pitman (Advanced Publishing Program): Boston, MA, USA, 1984; Volume 103. [Google Scholar]
  8. Yamamoto, T. Historical developments in convergence analysis for Newton’s and Newton-like methods. J. Comput. Appl. Math. 2000, 124, 1–23. [Google Scholar] [CrossRef] [Green Version]
  9. Argyros, I.K. Unified Convergence Criteria for Iterative Banach Space Valued Methods with Applications. Mathematics 2021, 9, 1942. [Google Scholar] [CrossRef]
  10. Proinov, P.D. New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems. J. Complex. 2010, 26, 3–42. [Google Scholar] [CrossRef] [Green Version]
  11. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982. [Google Scholar]
  12. Argyros, I.K.; Hilout, S. On an improved convergence analysis of Newton’s scheme. Appl. Math. Comput. 2013, 225, 372–386. [Google Scholar]
  13. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton’s scheme. J. Complex. 2012, 28, 364–387. [Google Scholar] [CrossRef] [Green Version]
  14. Zhanlav, T.; Chun, C.; Otgondorj, K.H.; Ulziibayar, V. High order iterations for systems of nonlinear equations. Int. J. Comput. Math. 2020, 97, 1704–1724. [Google Scholar] [CrossRef]
  15. Sharma, J.R.; Guha, R.K. Simple yet efficient Newton-like method for systems of nonlinear equations. Calcolo 2016, 53, 451–473. [Google Scholar] [CrossRef]
  16. Grau-Sanchez, M.; Grau, A.; Noguera, M. Ostrowski type methods for solving system of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar] [CrossRef]
  17. Kou, J.; Wang, X.; Li, Y. Some eight order root finding three-step methods. Commun. Nonlinear Sci. Numer. Simul. 2010, 15, 536–544. [Google Scholar] [CrossRef]
  18. Shakhno, S.M. Convergence of the two-step combined method and uniqueness of the solution of nonlinear operator equations. J. Comput. Appl. Math. 2014, 261, 378–386. [Google Scholar] [CrossRef]
  19. Shakhno, S.M. On an iterative algorithm with superquadratic convergence for solving nonlinear operator equations. J. Comput. Appl. Math. 2009, 231, 222–235. [Google Scholar] [CrossRef]
  20. Wang, X. An Ostrowski-type method with memory using a novel self-accelerating parameters. J. Comput. Appl. Math. 2018, 330, 710–720. [Google Scholar] [CrossRef]
  21. Moccari, M.; Lofti, T. On a two-step optimal Steffensen-type method: Relaxed local and semi-local convergence analysis and dynamical stability. J. Math. Anal. Appl. 2018, 468, 240–269. [Google Scholar] [CrossRef]
  22. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
  23. Bartish, M.Y.; Shakhno, S.M. On Newton’s method with accelerated convergence. Vest. Kiev Univ. Model. Complex Syst. 1987, 6, 62–66. (In Russian) [Google Scholar]
  24. Werner, W. Newton-like methods for the computation of fixed points. Comput. Math. Appl. 1984, 10, 77–86. [Google Scholar] [CrossRef]
Figure 1. Graphs of the functions $F_1^{(1)}(x)$ (A) and $F_1^{(2)}(x)$ (B), and the function $y = x$.
Figure 2. Graphs of the derivatives of the functions $F_1^{(1)}(x)$ (A) and $F_1^{(2)}(x)$ (B).
Table 1. Number of iterations.
Function          | x₀  | Method (3) | Method (60) | Method (61)
$F_1^{(1)}(x)$    | 2.5 | 4          | 3           | 4
                  | 4   | 4          | 4           | 5
                  | 5   | 5          | 4           | 6
$F_1^{(2)}(x)$    | 0.5 | 4          | 4           | 4
                  | 1.5 | 7          | 6           | 5
                  | 3.2 | 8          | 6           | 7
