Article

Unified Local Convergence for Newton’s Method and Uniqueness of the Solution of Equations under Generalized Conditions in a Banach Space

by Ioannis K. Argyros 1, Ángel Alberto Magreñán 2,*, Lara Orcos 3 and Íñigo Sarría 4

1 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Departamento de Matemáticas y Computación, Universidad de La Rioja, 26006 Logroño, Spain
3 Facultad de Educación, Universidad Internacional de La Rioja, 26006 Logroño, Spain
4 Escuela Superior de Ingeniería y Tecnología, Universidad Internacional de La Rioja, 26006 Logroño, Spain
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(5), 463; https://doi.org/10.3390/math7050463
Submission received: 5 March 2019 / Revised: 7 May 2019 / Accepted: 13 May 2019 / Published: 23 May 2019
(This article belongs to the Special Issue Computational Methods in Analysis and Applications)

Abstract: Under the hypotheses that a function and its Fréchet derivative satisfy some generalized Newton–Mysovskii conditions, precise estimates on the radii of the convergence balls of Newton's method, and of the uniqueness ball for the solution of the equations, are given for Banach space-valued operators. Some of the existing results are improved, with the advantages of a larger convergence region, tighter error estimates on the distances involved, and at-least-as-precise information on the location of the solution. These advantages are obtained using the same functions and Lipschitz constants as in earlier studies. Numerical examples are used to test the theoretical results.

1. Introduction

Let $X$ and $Y$ be Banach spaces. Let $U(x,r)$ and $\bar{U}(x,r)$ stand, respectively, for the open and closed ball in $X$ with center $x$ and radius $r>0$. Denote by $\mathcal{L}(X,Y)$ the space of bounded linear operators from $X$ into $Y$. Further, let $D \subseteq X$ be a nonempty set.
In the present paper, we are concerned with the problem of approximating a locally unique solution $x^*$ of the equation
$F(x) = 0, \qquad (1)$
where $F$ is a Fréchet continuously differentiable operator defined on $D$ with values in $Y$.
Numerous applications from applied mathematics, optimization, mathematical biology, chemistry, economics, physics, engineering, and other disciplines can be brought into the form of Equation (1) by mathematical modelling [1,2,3,4,5,6,7,8,9]. The solution of these equations can rarely be found in closed form; hence, the solution methods for these equations are iterative. In particular, the practice of numerical analysis for finding such solutions is essentially connected to variants of iterative methods [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]. Research on the convergence of Newton-type methods falls into two categories: semi-local and local convergence analysis. The semi-local convergence analysis uses information around an initial point to give criteria ensuring the convergence of an iterative method, whereas the local analysis uses information around a solution to estimate the radii of the convergence balls. The literature contains several studies on weakening and/or extending the hypotheses made on the underlying operators, and there is a plethora of local, as well as semi-local, convergence results; we refer the reader to [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]. In this paper, we assume the existence of $x^*$, but do not address any existence results.
Newton’s method is defined by the iterative procedure
$x_0 \text{ an initial point}, \qquad x_{n+1} = x_n - F'(x_n)^{-1}F(x_n) \quad \text{for each } n = 0,1,2,\ldots, \qquad (2)$
and is, undoubtedly, one of the most popular iterative processes for generating a sequence $\{x_n\}$ approximating $x^*$. Here, $F'(x) \in \mathcal{L}(X,Y)$ denotes the Fréchet derivative of $F$ at $x \in \bar{U}(x_0, R)$.
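For concreteness, the following is a minimal sketch of the iteration (2) for a differentiable map $F:\mathbb{R}^m \to \mathbb{R}^m$ (our own illustration, not part of the paper); the tolerance, the iteration cap, and the small test system are illustrative assumptions.

```python
import numpy as np

def newton(F, dF, x0, tol=1e-10, max_iter=50):
    """Newton's method x_{n+1} = x_n - F'(x_n)^{-1} F(x_n) for F: R^m -> R^m.

    F  : callable returning the residual vector F(x)
    dF : callable returning the Jacobian (Frechet derivative) F'(x)
    x0 : initial point
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(dF(x), F(x))   # solve F'(x) s = F(x)
        x = x - step
        if np.linalg.norm(step) < tol:        # stop when the Newton step is small
            break
    return x

# Illustrative test system (assumption): F(x, y) = (x^2 - 2, x*y - 1), start at (1, 1).
F  = lambda v: np.array([v[0]**2 - 2.0, v[0]*v[1] - 1.0])
dF = lambda v: np.array([[2.0*v[0], 0.0], [v[1], v[0]]])
print(newton(F, dF, [1.0, 1.0]))   # approximately (sqrt(2), 1/sqrt(2))
```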
Newton–Mysovskii-type conditions (see (10)) have been used by several authors [4,7,9,12,21,22] to provide local, as well as semi-local, convergence analyses for Newton's method and Newton-like methods.
A very important problem in the study of iterative procedures is the convergence region. Some of the existing results guarantee convergence only in a small region. Therefore, it is important to enlarge the convergence region without additional hypotheses. Another important problem is to find more precise error estimates on the distances $\|x_n - x^*\|$, as well as uniqueness results for the solution. These are our objectives in this paper.
In particular, we obtain the following advantages over earlier works:
(a1)
At least as large a radius of convergence, allowing at least as many choices of initial points;
(a2)
At least as small a ratio of convergence, so at most as many iterates must be computed to obtain a desired error tolerance; and
(a3)
The information on the location of the solution is at least as precise.
It is worth noticing that these advantages are obtained even though more general and flexible majorant-type conditions are used.
Indeed, these advantages are obtained by specializing the new majorant functions. Hence, the applicability of Newton's method is extended. Our approach can be used to improve local and semi-local results for Newton-like methods, secant-type methods, and other single- or multi-step methods along the same lines.
The paper is structured as follows: Section 2 contains the local convergence analysis of Newton's method. Applications are given in Section 3. Our findings are summarized in Section 4.

2. Local Convergence Analysis

We present the main local convergence result for Newton’s method.
Theorem 1.
Let $F : D \subseteq X \to Y$ be a Fréchet-differentiable operator. Suppose:
(a)
There exists $x^* \in D$ such that $F(x^*) = 0$ and $F'(x)^{-1} \in \mathcal{L}(Y,X)$ for all $x \in D$.
(b)
There exists a function $\varphi : [0,+\infty) \to [0,+\infty)$ with $\varphi(0) = 0$ such that $\dfrac{\varphi(t)}{t^{1+\lambda}}$ is continuous for $t \geq 0$ and non-decreasing, for some $\lambda \geq 0$.
(c)
For all $x \in D$,
$\|F'(x)^{-1}(F(x) - F'(x)(x - x^*))\| \leq \varphi(\|x - x^*\|)\,\|x - x^*\|.$
(d)
There exists a minimal root $\varrho > 0$ of the equation $\varphi(t) = 1$ such that
$\dfrac{\varphi(\varrho)}{\varrho^{\lambda}} \leq 1.$
(e)
$\bar{U}(x^*, \varrho) \subseteq D$.
Then, the sequence $\{x_n\}$ generated for $x_0 \in U(x^*,\varrho)\setminus\{x^*\}$ by Newton's method is well-defined, stays in $U(x^*,\varrho)$ for all $n = 0,1,2,\ldots$, and converges to $x^*$, which is the only root of the equation $F(x) = 0$ in $U(x^*,\varrho)$. Moreover, the following estimate holds:
$\|x_n - x^*\| \leq e_n, \quad n = 0,1,2,\ldots,$
where
$q = \dfrac{\varphi(\|x_0 - x^*\|)}{\|x_0 - x^*\|^{\lambda}} \in [0,1)$
and $e_n = q^{-1/\lambda}\left(q^{1/\lambda}\|x_0 - x^*\|\right)^{(1+\lambda)^n}$.
Proof. 
By $(b)$ and $(d)$, and using (4), we have that
$q = \dfrac{\varphi(\|x_0-x^*\|)\,\|x_0-x^*\|}{\|x_0-x^*\|^{1+\lambda}} \leq \dfrac{\varphi(\varrho)}{\varrho^{1+\lambda}}\,\|x_0-x^*\| \leq \dfrac{\|x_0-x^*\|}{\varrho} < 1.$
If $x_k \in U(x^*,\varrho)$ then, by Newton's method, we can write
$x_{k+1} - x^* = x_k - x^* - F'(x_k)^{-1}F(x_k) = -F'(x_k)^{-1}\bigl(F(x_k) - F'(x_k)(x_k - x^*)\bigr),$
and so, by $(c)$ and (6),
$\|x_{k+1} - x^*\| \leq \varphi(\|x_k - x^*\|)\,\|x_k - x^*\|.$
If $k = 0$ in (7), we obtain, by (4) and (5), that
$\|x_1 - x^*\| \leq \varphi(\|x_0 - x^*\|)\,\|x_0 - x^*\| < \|x_0 - x^*\| < \varrho.$
Hence, $x_1 \in U(x^*,\varrho)$; that is, (7) can be obtained for $k = 0,1,\ldots$. By mathematical induction, all $x_k \in U(x^*,\varrho)$ and $\|x_k - x^*\|$ decreases monotonically. Moreover, for all $k = 0,1,\ldots$, we consequently obtain, from $(b)$ and (7), that
$\|x_{k+1} - x^*\| \leq \dfrac{\varphi(\|x_k-x^*\|)}{\|x_k-x^*\|^{1+\lambda}}\,\|x_k-x^*\|^{1+\lambda}\,\|x_k-x^*\| \leq \dfrac{\varphi(\|x_0-x^*\|)}{\|x_0-x^*\|^{1+\lambda}}\,\|x_k-x^*\|^{1+\lambda}\,\|x_k-x^*\| \leq \dfrac{\varphi(\|x_0-x^*\|)}{\|x_0-x^*\|^{1+\lambda}}\,\|x_0-x^*\|\,\|x_k-x^*\|^{\lambda+1} = \dfrac{\varphi(\|x_0-x^*\|)}{\|x_0-x^*\|^{\lambda}}\,\|x_k-x^*\|^{\lambda+1} = q\,\|x_k-x^*\|^{\lambda+1} \leq q\bigl(q\,\|x_{k-1}-x^*\|^{\lambda+1}\bigr)^{\lambda+1} = q^{1+(\lambda+1)}\,\|x_{k-1}-x^*\|^{(\lambda+1)^2} \leq \cdots \leq e_{k+1},$
which implies (3). Notice that, as $q^{1/\lambda}\|x_0-x^*\| < 1$, we have $\lim_{n\to\infty} e_n = 0$ and so $\lim_{n\to\infty} x_n = x^*$. Let $y^* \in U(x^*,\varrho)$ with $F(y^*) = 0$. Replace $x^*$ by $y^*$ in (6)–(8). Then, we have that
$\|x_{k+1} - y^*\| \leq q\,\|x_k - y^*\|^{\lambda+1},$
and so $\lim_{k\to\infty} x_k = y^*$. However, we showed that $\lim_{k\to\infty} x_k = x^*$. Hence, we conclude that $x^* = y^*$. □
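As a quick numerical illustration of the a priori bound $\|x_n - x^*\| \leq e_n$ (ours, not from the paper), the following sketch evaluates $e_n = q^{-1/\lambda}\bigl(q^{1/\lambda}\|x_0-x^*\|\bigr)^{(1+\lambda)^n}$ for sample values of $q$, $\lambda$, and $\|x_0 - x^*\|$; these values are assumptions chosen only so that $q^{1/\lambda}\|x_0-x^*\| < 1$.

```python
# Evaluate e_n = q**(-1/lam) * (q**(1/lam) * d0) ** ((1 + lam) ** n)
# for illustrative values q = 0.5, lam = 1, d0 = ||x0 - x*|| = 0.4 (assumptions).
q, lam, d0 = 0.5, 1.0, 0.4

for n in range(6):
    e_n = q ** (-1.0 / lam) * (q ** (1.0 / lam) * d0) ** ((1.0 + lam) ** n)
    print(n, e_n)
# Since q**(1/lam) * d0 = 0.2 < 1, the bound decays with exponent (1 + lam)**n,
# i.e., quadratically for lam = 1.
```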
Remark 1.
Estimate $(c)$ generalizes the Newton–Mysovskii-type conditions already in the literature [4,7,9,12,21,22], of the form
$\|F'(z)^{-1}(F(x) - F(y) - F'(x)(x - y))\| \leq K\|x - y\|^{\mu}, \quad K > 0, \ \mu \in [0,2], \qquad (10)$
for each $x, y, z \in D$, if we choose $\bar{\varphi}(t) = K t^{\mu-1}$. However, in this paper, we use the weaker condition
$\|F'(x)^{-1}(F(x) - F(x^*) - F'(x)(x - x^*))\| \leq K_0\|x - x^*\|^{\mu}, \quad K_0 > 0.$
Thus, the function $\varphi$ specializes to $\varphi_0(t) = K_0 t^{\mu-1}$ for $z = x$ and $y = x^*$. Then, we have $\varphi_0(t) \leq \bar{\varphi}(t)$, so $K_0 \leq K$. Moreover, (10) implies $(c)$ in this case, but not necessarily vice versa. Hence, the new results, in this case, are better than the old ones. It is worth noticing that these improvements are obtained under weaker conditions (see also the numerical examples), since, as $K_0 \leq K$, the new radii are at least as large and the new ratio is at least as small.
In the case where ( c ) is difficult to verify, we have the following alternative.
Theorem 2.
Let $F : D \subseteq X \to Y$ be a Fréchet-differentiable operator. Suppose:
(a)
There exist $x^* \in D$ and a function $w_0 : \mathbb{R}_+ \to \mathbb{R}_+$, continuous and non-decreasing with $w_0(0) = 0$, such that
$F(x^*) = 0, \quad F'(x^*)^{-1} \in \mathcal{L}(Y,X),$
and, for all $x \in D$,
$\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \leq w_0(\|x - x^*\|).$
The equation
$w_0(t) = 1$
has a minimal positive root, denoted by $r_0$. Set $D_0 = D \cap U(x^*, r_0)$.
(b)
There exists a function $w : \mathbb{R}_+ \to \mathbb{R}_+$, continuous and non-decreasing with $w(0) = 0$, such that, for all $x \in D_0$,
$\|F'(x^*)^{-1}(F(x) - F'(x)(x - x^*))\| \leq w(\|x - x^*\|)\,\|x - x^*\|.$
(c)
The equation
$w(t) + (w_0(t) - 1)\,t^{\lambda} = 0, \quad \text{for some } \lambda \geq 0,$
has a smallest root $r^* \in [0, r_0)$.
(d)
The function $\dfrac{w(t)}{t^{\lambda}(1 - w_0(t))}$ is continuous and non-decreasing on the interval $(0, r_0)$.
(e)
$\bar{U}(x^*, r^*) \subseteq D$.
Then, the sequence $\{x_n\}$ generated for $x_0 \in U(x^*, r^*)\setminus\{x^*\}$ by Newton's method is well-defined, remains in $U(x^*, r^*)$ for all $n = 0,1,2,\ldots$, and converges to $x^*$, which is the only root of the equation $F(x) = 0$ in $D_2 = U(x^*, r^*) \cap D$. Moreover, the following estimates hold:
$\|x_{n+1} - x^*\| \leq \dfrac{w(\|x_n - x^*\|)\,\|x_n - x^*\|}{1 - w_0(\|x_n - x^*\|)} = \dfrac{w(\|x_n - x^*\|)\,\|x_n - x^*\|^{\lambda+1}}{\|x_n - x^*\|^{\lambda}\,(1 - w_0(\|x_n - x^*\|))} \leq q_0\,\|x_n - x^*\|^{1+\lambda} \leq \|x_n - x^*\| < r^*,$
where
$q_0 = \dfrac{w(\|x_0 - x^*\|)}{\|x_0 - x^*\|^{\lambda}\,(1 - w_0(\|x_0 - x^*\|))} \in [0,1).$
Proof. 
We have that, for all $x \in \bar{U}(x^*, r^*)$,
$\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \leq w_0(\|x - x^*\|) \leq w_0(r^*) < 1,$
by $(a)$, $(c)$, and the definition of $r^*$. It follows, from (13) and the Banach Lemma on invertible operators [7,22], that $F'(x)^{-1} \in \mathcal{L}(Y,X)$ and
$\|F'(x)^{-1}F'(x^*)\| \leq \dfrac{1}{1 - w_0(\|x - x^*\|)}.$
Define the function $\varphi$ on the interval $[0, r^*)$ by
$\varphi(t) = \dfrac{w(t)}{1 - w_0(t)}.$
Then, the result follows from the proof of Theorem 1, by noticing that
$x_{k+1} - x^* = -[F'(x_k)^{-1}F'(x^*)]\,[F'(x^*)^{-1}(F(x_k) - F'(x_k)(x_k - x^*))],$ so that
$\|x_{k+1} - x^*\| \leq \|F'(x_k)^{-1}F'(x^*)\|\,\|F'(x^*)^{-1}(F(x_k) - F'(x_k)(x_k - x^*))\| \leq \dfrac{w(\|x_k - x^*\|)\,\|x_k - x^*\|}{1 - w_0(\|x_k - x^*\|)} = \varphi(\|x_k - x^*\|)\,\|x_k - x^*\|.$
The uniqueness of the solution $x^*$ depends on the functions $w_0$ and $w$. □
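Since $r_0$ and $r^*$ in Theorem 2 are roots of scalar equations, they can be approximated with any one-dimensional root finder. The sketch below is ours; the choices $w_0(t) = L_0 t$, $w(t) = \tfrac{L}{2}t$, $\lambda = 0$, $L_0 = 1$, and $L = 1.5$ are illustrative assumptions, not data from the paper.

```python
def bisect(g, a, b, tol=1e-12):
    """Bisection for a continuous g with a sign change on [a, b].
    Returns a root inside the bracket; for the theorem one wants the minimal
    positive root, so the bracket must be chosen accordingly."""
    fa = g(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = g(m)
        if fa * fm <= 0:
            b = m
        else:
            a, fa = m, fm
        if b - a < tol:
            break
    return 0.5 * (a + b)

# Illustrative choices (assumptions): w0(t) = L0*t, w(t) = (L/2)*t, lambda = 0.
L0, L, lam = 1.0, 1.5, 0.0
w0 = lambda t: L0 * t
w  = lambda t: 0.5 * L * t

r0 = bisect(lambda t: w0(t) - 1.0, 0.0, 10.0)                          # w0(t) = 1
rstar = bisect(lambda t: w(t) + (w0(t) - 1.0) * t ** lam, 1e-12, r0)   # w(t) + (w0(t)-1) t^lam = 0
print(r0, rstar)   # 1.0 and 2/(2*L0 + L) = 0.5714... for these constants
```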
Next, we present a uniqueness result, using only the function w 0 .
Proposition 3.
Suppose that $D$ is a convex set. Moreover, we assume that
$\int_0^1 w_0(\theta r)\,d\theta < 1, \quad 0 \leq r \leq r_0,$
and
$\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \leq w_0(\|x - x^*\|),$
for all $x \in D_3 = D \cap U(x^*, r_0)$, where $w_0 : [0,\infty) \to [0,\infty)$ is a continuous and non-decreasing function. Then, the point $x^*$ is the only solution of the equation $F(x) = 0$ in $D_3$.
Proof. 
The convergence of Newton's method to the root $x^*$ was established in Theorem 2. Let $y^* \in D_3$ with $F(y^*) = 0$. Define $Q = \int_0^1 F'(x^* + \theta(y^* - x^*))\,d\theta$. Using (16), we have
$\|F'(x^*)^{-1}(Q - F'(x^*))\| \leq \int_0^1 w_0(\theta\|x^* - y^*\|)\,d\theta \leq \int_0^1 w_0(\theta r_0)\,d\theta < 1.$
Hence, by (17), $Q^{-1} \in \mathcal{L}(Y,X)$. Then, from the identity
$0 = F(y^*) - F(x^*) = Q(y^* - x^*),$
we conclude that $x^* = y^*$. □
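For the common specialization $w_0(t) = L_0 t$ (an assumption used only for illustration), the hypothesis $\int_0^1 w_0(\theta r)\,d\theta < 1$ reduces to $L_0 r/2 < 1$, so the uniqueness ball can have radius up to $2/L_0$. A short check (ours):

```python
# For w0(t) = L0 * t (illustrative assumption), int_0^1 w0(theta * r) dtheta = L0 * r / 2,
# so the uniqueness condition L0 * r / 2 < 1 holds exactly for r < 2 / L0.
L0 = 1.5                                    # assumed constant
integral = lambda r: L0 * r / 2.0           # closed form of the integral
r_unique = 2.0 / L0
print(r_unique, integral(r_unique))         # the integral equals 1 at the borderline radius
```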
Remark 2.
(a)
If $r = r^*$, then, by Theorem 2, we conclude that the root $x^*$ is unique in $D_3$.
(b)
The local results obtained in this study are better than the earlier results in [5,9,17,23,24,25,26], even when specialized.
(c)
Case of the Radius Lipschitz condition [9,26]:
$\|F'(x^*)^{-1}(F'(x) - F'(x_\theta))\| \leq \int_{\theta\|x - x^*\|}^{\|x - x^*\|} L_1(u)\,du, \quad \text{for all } x \in D \text{ and all } \theta \in [0,1],$
where $L_1$ is a positive integrable function and $x_\theta = x^* + \theta(x - x^*)$.
Moreover, in light of (18), there exists a positive integrable function $L_0$ such that
$\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \leq \int_0^{\|x - x^*\|} L_0(u)\,du, \quad \text{for all } x \in D.$
Notice that
$L_0(t) \leq L_1(t), \quad \text{for all } t \in [0, r_0],$
and $\varrho_0$ is the minimal positive root of the equation
$\int_0^t L_0(u)\,du = 1.$
The radius of convergence, $r_1$, is obtained in [26] under (18), and is given as the root of the equation
$\dfrac{\int_0^r L_1(u)\,u\,du}{r\left(1 - \int_0^r L_1(u)\,du\right)} = 1.$
The radius of convergence $\bar{r}^*$ found by us, if $D_0 = D$, is the positive root of the equation
$\dfrac{\int_0^r L_1(u)\,u\,du}{r\left(1 - \int_0^r L_0(u)\,du\right)} = 1.$
In view of (21)–(23), we have that
$r_1 \leq \bar{r}^*.$
Indeed, let the functions $g_0$ and $g_1$ be defined as
$g_0(r) = \int_0^r L_1(u)\,u\,du + r\int_0^r L_0(u)\,du - r$
and
$g_1(r) = \int_0^r L_1(u)\,u\,du + r\int_0^r L_1(u)\,du - r.$
Then, in light of (21), we get
$g_0(r) \leq g_1(r),$
and, for $r = r_1$,
$g_0(r) \leq g_1(r) = 0,$
by the definition of $r_1$, leading to (24).
We can do even better if $D_0 \subsetneq D$. In this case, the function $w$ (i.e., $L$) depends on $w_0$ (i.e., $L_0$), and we have that
$\|F'(x^*)^{-1}(F'(x) - F'(x_\theta))\| \leq \int_{\theta\|x - x^*\|}^{\|x - x^*\|} L(u)\,du, \quad \text{for all } x \in D_0 \text{ and all } \theta \in [0,1],$
where $L$ is a positive integrable function. Then, we have that
$L(u) \leq L_1(u) \quad \text{for all } u \in [0, \varrho_0],$
since $D_0 \subseteq D$. In general, we do not know which of the functions $L_0$ or $L$ is smaller than the other (see, however, the numerical examples). Then, the radius of convergence $r^*$ is the positive solution of the equation
$\dfrac{\int_0^r L(u)\,u\,du}{r\left(1 - \int_0^r L_0(u)\,du\right)} = 1,$
and we have, by (26), that
$\bar{r}^* \leq r^*,$
using a similar proof as the one below (24). Hence, we have that
$r_1 \leq \bar{r}^* \leq r^*.$
Inequality (29) can be strict if (22) and (26) are strict inequalities. The corresponding ratios of convergence are also improved (see the numerical examples).
Clearly, (18) (or (26), with $r_0$ replaced by $\varrho_0$ in $D_0$) is a special case of condition $(b)$ in Theorem 2, and the corresponding center condition is a special case of condition $(a)$ in Theorem 2. The following sketch illustrates how these radii can be computed numerically for given functions $L_0$, $L$, and $L_1$.
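Before turning to the majorant case, here is a small sketch (ours) of how the radii $r_1$, $\bar{r}^*$, and $r^*$ can be computed for given integrable functions; SciPy is used for quadrature and root finding, and the Smale-type choices of $L_0$, $L$, and $L_1$ below, together with the bracketing intervals, are illustrative assumptions rather than data from the paper.

```python
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative Smale-type choices (assumptions): gamma0 < gamma < gamma1, so L0 <= L <= L1.
gamma1, gamma0, gamma = 1.0, 0.7, 0.8
L1 = lambda u: 2.0 * gamma1 / (1.0 - gamma1 * u) ** 3
L0 = lambda u: 2.0 * gamma0 / (1.0 - gamma0 * u) ** 3
L  = lambda u: 2.0 * gamma  / (1.0 - gamma  * u) ** 3

def radius(L_num, L_den, r_hi):
    """Root of  (int_0^r L_num(u) u du) / ( r * (1 - int_0^r L_den(u) du) ) = 1."""
    def g(r):
        num = quad(lambda u: L_num(u) * u, 0.0, r)[0]
        den = r * (1.0 - quad(L_den, 0.0, r)[0])
        return num - den
    return brentq(g, 1e-8, r_hi)

# The upper brackets are chosen (by assumption) so the integrands stay finite and
# the denominator stays positive on the bracket.
r1    = radius(L1, L1, 0.25)   # classical radius: L1 in numerator and denominator
rbar  = radius(L1, L0, 0.30)   # denominator uses the center function L0
rstar = radius(L,  L0, 0.35)   # both restricted functions L and L0
print(r1, rbar, rstar)         # the ordering r1 <= rbar <= rstar should be visible
```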
(d)
Case of Majorant conditions [5,17]:
$\|F'(x^*)^{-1}(F'(x) - F'(x_\theta))\| \leq f_1'(\|x - x^*\|) - f_1'(\theta\|x - x^*\|),$
where $f_1$ is a convex function with a strictly increasing derivative, $f_1(0) = 0$, and $f_1'(0) = -1$.
Notice that the following functions are convex, with strictly increasing derivative, $f_1(0) = 0$, and $f_1'(0) = -1$:
  • $f_1 : \mathbb{R} \to \mathbb{R}$ with $f_1(t) = e^t - 2t - 1$;
  • $f_1 : [0,1) \to \mathbb{R}$ with $f_1(t) = -\ln(1-t) - 2t$; and
  • $f_1 : [0, \tfrac{1}{a}) \to \mathbb{R}$ with $f_1(t) = \dfrac{t}{1 - at} - 2t$, $a \geq 0$.
In view of (30), there exists a function $f_0$ with the same properties as $f_1$, such that
$\|F'(x^*)^{-1}(F'(x) - F'(x^*))\| \leq f_0'(\|x - x^*\|) - f_0'(0), \quad \text{for all } x \in D.$
Thus, we can choose
$w_0(t) = f_0'(t) - f_0'(0).$
By comparing (30) and (31), we get
$f_0'(t) \leq f_1'(t), \quad \text{for all } t \in [0, \bar{\varrho}_0].$
The radius of convergence $\bar{r}_1$ in [5,17], under (30), is given as the positive root of the equation
$\dfrac{f_1(t) - t f_1'(t)}{t f_1'(t)} = 1.$
In our case, we have that $\bar{r}^*$ solves the equation
$\dfrac{f_1(t) - t f_1'(t)}{t f_0'(t)} = 1.$
Furthermore, by (33),
$\bar{r}_1 \leq \bar{r}^*;$
see the proof in [5,17].
We can do better if $D_0 \subsetneq D$, replacing (30) by
$\|F'(x^*)^{-1}(F'(x) - F'(x_\theta))\| \leq f'(\|x - x^*\|) - f'(\theta\|x - x^*\|)$
for all $x \in D_0$. Then, choose
$w(t) = f'(t) - \dfrac{f(t)}{t}.$
By comparing (30) and (37), we get that
$f'(t) \leq f_1'(t) \quad \text{for all } t \in [0, r_0].$
Then, the radius $r^*$ is given as the root of the equation
$\dfrac{f(t) - t f'(t)}{t f_0'(t)} = 1.$
Once more, we have shown that the new results improve the old ones, since (29) holds.
(e)
We can obtain the radii in explicit form. Indeed, specialize the functions $L_1(t) = L_1 > 0$, $L(t) = L > 0$, and $L_0(t) = L_0 > 0$ in the radius Lipschitz case, and $f_1(t) = \frac{L_1}{2}t^2 - t$, $f_0(t) = \frac{L_0}{2}t^2 - t$, and $f(t) = \frac{L}{2}t^2 - t$ in the majorant case. Then, we have that
$r_1 = \bar{r}_1 = \dfrac{2}{3L_1}, \quad \bar{r}^* = \dfrac{2}{2L_0 + L_1}, \quad \text{and} \quad r^* = \dfrac{2}{2L_0 + L}.$
Hence, we get (29). The inequality (29) is strict if
$L_0 < L < L_1;$
see the radius comparison example (Example 7).
The radius $r_1$ is due to Rheinboldt [23] and Traub [24], whereas $\bar{r}^*$ is due to Argyros [1].
The corresponding error bounds for the radii $r_1$, $\bar{r}^*$, and $r^*$ are given, respectively, by
$\|x_{n+1} - x^*\| \leq \dfrac{L_1\|x_n - x^*\|^2}{2(1 - L_1\|x_n - x^*\|)},$
$\|x_{n+1} - x^*\| \leq \dfrac{L_1\|x_n - x^*\|^2}{2(1 - L_0\|x_n - x^*\|)},$
and
$\|x_{n+1} - x^*\| \leq \dfrac{L\|x_n - x^*\|^2}{2(1 - L_0\|x_n - x^*\|)}.$
Hence, the error bound (44) improves the earlier ones, (42) and (43). The same is true for the uniqueness balls.
(f)
The same advantages are obtained if we use Smale-type conditions [25], or those of Ferreira [5] or Wang [26]. Then, we choose
$L_1(t) = \dfrac{2\gamma_1}{(1 - \gamma_1 t)^3}, \quad L_0(t) = \dfrac{2\gamma_0}{(1 - \gamma_0 t)^3}, \quad L(t) = \dfrac{2\gamma}{(1 - \gamma t)^3},$
$f_1(t) = \dfrac{t}{1 - \gamma_1 t} - 2t, \quad f_0(t) = \dfrac{t}{1 - \gamma_0 t} - 2t, \quad f(t) = \dfrac{t}{1 - \gamma t} - 2t,$
and $r_0$ to be the solution of the equation
$(1 - \gamma_0 t)^2 = \dfrac{1}{2},$
with $\gamma_0 \leq \gamma_1$ and $\gamma \leq \gamma_1$.
It is worth noticing that these advantages are obtained at the same computational cost since, in practice, the computation of the old functions $L_1$ and $f_1$ requires the computation of the functions $L_0$, $L$, $f_0$, and $f$ as special cases.
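For the Smale-type specialization above, the equation $(1-\gamma_0 t)^2 = \tfrac{1}{2}$ has the closed-form solution $r_0 = (1 - 1/\sqrt{2})/\gamma_0$. The short sketch below is ours; the value $\gamma_0 = 0.9$ is an illustrative assumption. It checks the closed form against the defining condition $w_0(r_0) = 1$ with $w_0(t) = f_0'(t) - f_0'(0)$.

```python
import math

gamma0 = 0.9                                    # illustrative assumption
r0 = (1.0 - 1.0 / math.sqrt(2.0)) / gamma0      # closed-form root of (1 - gamma0*t)**2 = 1/2

# w0(t) = f0'(t) - f0'(0) with f0(t) = t/(1 - gamma0*t) - 2t, so f0'(t) = 1/(1 - gamma0*t)**2 - 2.
f0p = lambda t: 1.0 / (1.0 - gamma0 * t) ** 2 - 2.0
w0  = lambda t: f0p(t) - f0p(0.0)

print(r0, w0(r0))                               # w0(r0) should equal 1 up to rounding
```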

3. Numerical Examples

Example 4.
Ammonia Problem [19,27]. Let us consider the quartic equation that describes the fraction (or amount) of the nitrogen–hydrogen feed that is turned into ammonia, known as the fractional conversion. For a pressure of 250 atmospheres and a temperature of 500 degrees Celsius, the equation is
$G(x) = x^4 - 7.79075x^3 + 14.7445x^2 + 2.511x - 1.674.$
We set $X = Y = \mathbb{R}$ and $D = [0,1]$. Then,
(a)
We obtain
$w_0(t) = \tfrac{3}{2}t, \quad r_0 = \tfrac{4}{9}, \quad \text{and} \quad w(t) = 2t.$
Moreover, the equation
$w(t) + (w_0(t) - 1)\,t^{2/5} = 0$
has a minimal root $r^* = 0.0338271$. On the other hand, the function
$\dfrac{w(t)}{t^{2/5}(1 - w_0(t))} = \dfrac{4t^{3/10}}{2 - 3t}$
is continuous and non-decreasing on the interval $[0, r^*)$. Finally, it is clear that $\bar{U}(x^*, r^*) \subseteq D$. Then, we can guarantee that the method (2) converges, due to Theorem 2.
(b)
We obtain
$w_0(t) = 2t, \quad r_0 = \tfrac{1}{4}, \quad \text{and} \quad w(t) = 2t.$
Moreover, the equation
$w(t) + (w_0(t) - 1)\,t^{2/5} = 0$
has a minimal root $r^* = 0.0266048$. On the other hand, the function
$\dfrac{w(t)}{t^{2/5}(1 - w_0(t))} = \dfrac{2t^{3/10}}{1 - 2t}$
is continuous and non-decreasing on the interval $[0, r^*)$. Finally, it is clear that $\bar{U}(x^*, r^*) \subseteq D$. Then, we can guarantee that the method (2) converges, due to Theorem 2.
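As a complement, the quartic $G$ itself can be solved with iteration (2); in the sketch below (ours) the starting point $x_0 = 0.5 \in D = [0,1]$ and the stopping rule are illustrative assumptions.

```python
# Newton's method (2) applied to the ammonia quartic G on D = [0, 1].
# The starting point x0 = 0.5 is an illustrative assumption.
G  = lambda x: x**4 - 7.79075*x**3 + 14.7445*x**2 + 2.511*x - 1.674
dG = lambda x: 4*x**3 - 3*7.79075*x**2 + 2*14.7445*x + 2.511

x = 0.5
for _ in range(20):
    x_new = x - G(x) / dG(x)
    if abs(x_new - x) < 1e-14:
        x = x_new
        break
    x = x_new

print(x, G(x))   # the fractional conversion in [0, 1]; the residual should be ~0
```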
Example 5.
Planck’s Radiation Law Problem [4]
We consider the following problem:
$\varphi(\lambda) = \dfrac{8\pi c P \lambda^{-5}}{e^{cP/(\lambda B T)} - 1},$
which calculates the energy density within an isothermal blackbody. After some changes of variable, the problem is similar to
$1 - \dfrac{x}{5} = e^{-x}.$
Let us define
$f(x) = e^{-x} - 1 + \dfrac{x}{5}.$
We define $D$ as the real interval $[4,6]$. Then, we obtain
$w_0(t) = t, \quad r_0 = 1, \quad \text{and} \quad w(t) = t.$
Moreover, the equation
$w(t) + (w_0(t) - 1)\,t^{2/5} = 0$
has a minimal solution $r^* = 0.060085$. On the other hand, the function
$\dfrac{w(t)}{t^{1+\lambda}(1 - w_0(t))} = \dfrac{t^{1/10}}{1 - t}$
is continuous and non-decreasing on the interval $[0, r^*)$. Finally, it is clear that $\bar{U}(x^*, r^*) \subseteq D$. Then, we can guarantee that the method (2) converges, due to Theorem 2.
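The reduced equation $1 - \tfrac{x}{5} = e^{-x}$ can likewise be solved with iteration (2); in the following sketch (ours) the starting point $x_0 = 5 \in D = [4,6]$ and the fixed number of iterations are illustrative assumptions.

```python
import math

# Newton's method (2) for f(x) = exp(-x) - 1 + x/5 on D = [4, 6]; x0 = 5 is an assumption.
f  = lambda x: math.exp(-x) - 1.0 + x / 5.0
df = lambda x: -math.exp(-x) + 1.0 / 5.0

x = 5.0
for _ in range(20):
    x -= f(x) / df(x)

print(x, f(x))   # root near 4.965; the residual should be ~0
```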
Example 6.
Boundary Value Problem
Let $X = Y = \mathbb{R}^{n-1}$ for a natural number $n \geq 2$, where $X$ and $Y$ are equipped with the max-norm $\|x\| = \max_{1 \leq i \leq n-1}|x_i|$. The corresponding matrix norm is
$\|A\| = \max_{1 \leq i \leq n-1} \sum_{j=1}^{n-1}|a_{ij}|,$
for $A = (a_{ij})_{1 \leq i,j \leq n-1}$. On the interval $[0,1]$, we consider the following two-point boundary value problem:
$v'' + v^2 = 0, \quad v(0) = v(1) = 0. \qquad (46)$
To discretize the above equation, we divide the interval $[0,1]$ into $n$ equal parts, with the length of each part being $h = 1/n$ and the coordinate of each point being $x_i = ih$, for $i = 0,1,2,\ldots,n$. A second-order finite difference discretization of Equation (46) results in the following set of non-linear equations:
$F(v) := v_{i-1} + h^2 v_i^2 - 2v_i + v_{i+1} = 0 \quad \text{for } i = 1,2,\ldots,n-1, \quad \text{and, from (46), } v_0 = v_n = 0,$
where $v = [v_1, v_2, \ldots, v_{n-1}]^T$. For the above system of non-linear equations, the Fréchet derivative is the tridiagonal matrix
$F'(v) = \begin{pmatrix} \frac{2v_1}{n^2} - 2 & 1 & 0 & \cdots & 0 \\ 1 & \frac{2v_2}{n^2} - 2 & 1 & \cdots & 0 \\ 0 & 1 & \frac{2v_3}{n^2} - 2 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 & \frac{2v_{n-1}}{n^2} - 2 \end{pmatrix}.$
Let $n = 101$ and $x_0 = [5, 5, \ldots, 5]^T$. To solve the linear system at each step of Newton's method (2), we employ the MATLAB routine linsolve, which uses LU factorization with partial pivoting. We take the initial guess $x_0 = \mathrm{linspace}(0, 12, 100)$. Figure 1 plots our numerical solution.
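The paper solves the linearized systems with MATLAB's linsolve; the following is a rough Python analogue (ours), which assembles $F$ and the tridiagonal Fréchet derivative $F'(v)$ above and performs a few Newton steps from the linspace-type initial guess of the example. The number of steps is an assumption, and no claim is made about which solution the iteration settles on.

```python
import numpy as np

n = 101
h = 1.0 / n

def F(v):
    """Discretized operator: F_i(v) = v_{i-1} - 2 v_i + v_{i+1} + h^2 v_i^2, with v_0 = v_n = 0."""
    vfull = np.concatenate(([0.0], v, [0.0]))
    return vfull[:-2] - 2.0 * vfull[1:-1] + vfull[2:] + h**2 * vfull[1:-1]**2

def dF(v):
    """Tridiagonal Frechet derivative: diagonal 2 h^2 v_i - 2 (= 2 v_i / n^2 - 2), off-diagonals 1."""
    J = np.diag(2.0 * h**2 * v - 2.0)
    J += np.diag(np.ones(n - 2), 1) + np.diag(np.ones(n - 2), -1)
    return J

v = np.linspace(0.0, 12.0, n - 1)               # linspace(0, 12, 100)-type initial guess
for k in range(10):                             # 10 Newton steps (assumption)
    v = v - np.linalg.solve(dF(v), F(v))        # Newton step: solve F'(v) s = F(v)
    print(k, np.linalg.norm(F(v), ord=np.inf))  # residual in the max-norm used above
```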
Example 7.
Radius Comparison. Suppose that the motion of an object in three dimensions is governed by the system of differential equations
$F_1'(x) - F_1(x) - 1 = 0, \quad F_2'(y) - (e - 1)y - 1 = 0, \quad F_3'(z) - 1 = 0,$
with $x, y, z \in D$ and $F_1(0) = F_2(0) = F_3(0) = 0$. Then, the solution of the system is given, for $v = (x,y,z)^T$, by the function $F := (F_1, F_2, F_3) : D \to \mathbb{R}^3$, defined by
$F(v) = \left(e^x - 1,\; \dfrac{e-1}{2}y^2 + y,\; z\right)^T.$
Then, for $x^* = (0,0,0)^T$, we have
$L_0 = e - 1 < L = e^{\frac{1}{e-1}} < L_1 = e.$
Notice that (41) holds. Hence, (29) holds as a strict inequality. In particular, we have, from (22), (23), and (27), that
$r_1 = 0.245253, \quad \bar{r}^* = 0.324947, \quad r^* = 0.382692.$
Thus, we have improved the previous results.
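The three radii reported above follow from the closed-form expressions of Remark 2(e) with $L_0 = e-1$, $L = e^{1/(e-1)}$, and $L_1 = e$; a quick check (ours):

```python
import math

e = math.e
L0, L, L1 = e - 1.0, math.exp(1.0 / (e - 1.0)), e

r1    = 2.0 / (3.0 * L1)          # Rheinboldt/Traub radius
rbar  = 2.0 / (2.0 * L0 + L1)     # radius using the center constant L0
rstar = 2.0 / (2.0 * L0 + L)      # radius using the restricted constant L
print(r1, rbar, rstar)            # approx 0.245253, 0.324947, 0.382692
```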

4. Conclusions

Generalized Newton–Mysovskii-type majorant convergence results have been introduced in this paper. Special cases of the majorant functions involved lead to conditions considered by other authors [4,7,9,12,21,22]. It turns out that, although the conditions are more general, they are also more flexible, leading to the advantages listed in the introduction, without any additional computational effort. Hence, we have extended the applicability of Newton's method to cases not covered before. This paper paves the way for future research on other iterative procedures involving inverses of linear operators.

Author Contributions

All authors have contributed in a similar way.

Funding

This research was supported in part by the Programa de Apoyo a la Investigación de la Fundación Séneca-Agencia de Ciencia y Tecnología de la Región de Murcia, 20928/PI/18, and by the project PGC2018-095896-B-C21 of the Spanish Ministry of Science and Innovation.

Acknowledgments

We would like to express our gratitude to the reviewers for the constructive criticism of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton’s method. J. Complex. 2012, 28, 364–387. [Google Scholar] [CrossRef]
  2. Argyros, I.K.; Magreñán, Á.A. On the convergence of an optimal fourth-order family of methods and its dynamics. Appl. Math. Comput. 2015, 252, 336–346. [Google Scholar] [CrossRef]
  3. Argyros, I.K.; González, D. Local convergence for an improved Jarratt-type method in Banach space. Int. J. Artif. Intell. Interact. Multimed. 2015, 3, 20–25. [Google Scholar] [CrossRef]
  4. Deuflhard, P. Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms; Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  5. Ferreira, O.P. Local convergence of Newton’s method in Banach space from the viewpoint of the majorant principle. IMA J. Numer. Anal. 2009, 29, 746–759. [Google Scholar] [CrossRef]
  6. Hernández, M.A.; Salanova, M.A. Modification of the Kantorovich assumptions for semilocal convergence of the Chebyshev method. J. Comput. Appl. Math. 2000, 126, 131–143. [Google Scholar]
  7. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982. [Google Scholar]
  8. Kou, J.S.; Li, Y.T.; Wang, X.H. A modification of Newton method with third-order convergence. Appl. Math. Comput. 2006, 181, 1106–1111. [Google Scholar] [CrossRef]
  9. Proinov, P.D. General local convergence theory for a class of iterative processes and its applications to Newton’s method. J. Complex. 2009, 25, 38–62. [Google Scholar] [CrossRef]
  10. Amat, S.; Busquier, S. Third order methods under Kantorovich conditions. J. Math. Anal. Appl. 2007, 336, 243–261. [Google Scholar] [CrossRef]
  11. Amat, S.; Bermúdez, C.; Busquier, S.; Plaza, S. On a third-order Newton-type method free of bilinear operators. Numer. Linear Algebra Appl. 2010, 17, 639–653. [Google Scholar] [CrossRef]
  12. Argyros, I.K. Computational Theory of Iterative Methods; Chui, C.K., Wuytack, L., Eds.; Studies in Computational Mathematics; Elsevier Publ. Co.: New York, NY, USA, 2007; Volume 15. [Google Scholar]
  13. Argyros, I.K. A semilocal convergence analysis for directional Newton methods. Math. Comput. 2011, 80, 327–343. [Google Scholar] [CrossRef]
  14. Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis: Efficient Algorithms, Fixed Point Theory and Applications; World Scientific: Singapore, 2013. [Google Scholar]
  15. Argyros, I.K.; George, S. Ball Convergence for Steffensen-type Fourth-order Methods. Int. J. Artif. Intell. Interact. Multimed. 2015, 3, 37–42. [Google Scholar] [CrossRef]
  16. Ezquerro, J.A.; Hernández, M.A. Recurrence relations for Chebyshev-type methods. Appl. Math. Optim. 2000, 41, 227–236. [Google Scholar] [CrossRef]
  17. Ezquerro, J.A.; Hernández, M.A. Third-order iterative methods for operators with bounded second derivative. J. Comput. Math. Appl. 1997, 82, 171–183. [Google Scholar]
  18. Ezquerro, J.A.; Hernández, M.A. On the R-order of the Halley method. J. Math. Anal. Appl. 2005, 303, 591–601. [Google Scholar] [CrossRef]
  19. Gopalan, V.B.; Seader, J.D. Application of interval Newton’s method to chemical engineering problems. Reliab. Comput. 1995, 1, 215–223. [Google Scholar]
  20. Gutiérrez, J.A.; Magreñán, Á.A.; Romero, N. On the semilocal convergence of Newton Kantorovich method under center-Lipschitz conditions. Appl. Math. Comput. 2013, 221, 79–88. [Google Scholar]
  21. Ortega, L.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  22. Potra, F.A.; Pták, V. Nondiscrete Induction and Iterative Processes; Research Notes in Mathematics; Pitman: Boston, MA, USA, 1984; Volume 103. [Google Scholar]
  23. Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. Banach Cent. Publ. 1977, 3, 129–142. [Google Scholar] [CrossRef]
  24. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  25. Smale, S. Newton’s method estimates from data at one point. In The Merging of Disciplines: New Directions in Pure, Applied, and Computational Mathematics; Springer: New York, NY, USA, 1986; pp. 185–196. [Google Scholar]
  26. Wang, X. Convergence of Newton’s method and uniqueness of the solution of equations in Banach space. IMA J. Numer. Anal. 2000, 20, 123–134. [Google Scholar] [CrossRef]
  27. Shacham, M. An improved memory method for the solution of a nonlinear equation. Chem. Eng. Sci. 1989, 44, 1495–1501. [Google Scholar] [CrossRef]
Figure 1. Solution of the boundary value problem (46).
