Article

Generalized High-Order Classes for Solving Nonlinear Systems and Their Applications

by Francisco I. Chicharro 1,†, Alicia Cordero 2,†, Neus Garrido 1,† and Juan R. Torregrosa 2,*,†
1 Escuela Superior de Ingeniería y Tecnología, Universidad Internacional de La Rioja, 26006 Logroño, Spain
2 Institute for Multidisciplinary Mathematics, Universitat Politècnica de València, 46022 València, Spain
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2019, 7(12), 1194; https://doi.org/10.3390/math7121194
Submission received: 19 November 2019 / Revised: 29 November 2019 / Accepted: 2 December 2019 / Published: 5 December 2019
(This article belongs to the Special Issue Multipoint Methods for the Solution of Nonlinear Equations)

Abstract

A generalized high-order class for approximating the solutions of nonlinear systems of equations is introduced. First, from a fourth-order iterative family for solving nonlinear equations, we propose an extension to nonlinear systems of equations that holds the same order of convergence but replaces the Jacobian by a divided difference in the weight functions for systems. The proposed GH family of methods is designed from this fourth-order family using both composition and weight function techniques. The resulting family has order of convergence nine. The performance of a particular iterative method of both families is analyzed for solving different test systems and also for Fisher's problem, showing the good performance of the new methods.

1. Introduction

During the last few decades, there has been considerable interest in designing iterative schemes for solving both nonlinear equations and systems of nonlinear equations. Since the problem of finding a zero $x^* \in D$ of a system of nonlinear equations $F(x) = 0$, where $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$, arises throughout science and engineering, iterative methods are a natural tool for approximating its solutions.
There is extensive literature on iterative methods for solving nonlinear equations; good overviews can be found in [1,2]. However, the extension of schemes from equations to systems of equations is not always trivial. Based on the Kung–Traub conjecture for nonlinear equations [3], several optimal three-step methods can be found in the recent works [4,5]. Other interesting methods in the literature reach higher orders of convergence [6].
In this paper, we present a new family of iterative methods for solving nonlinear systems of equations with convergence order nine. This class, named the GH family, has two weight functions and four steps in its iterative expression. It needs one evaluation of the Jacobian matrix and four functional evaluations of the nonlinear function per iteration. First, a fourth-order family with a single weight function is proposed. This family is the basis for designing the iterative schemes of the GH family through a composition-type technique. The development of the family covers Section 2. In order to check the feasibility of the proposed schemes for solving nonlinear systems of equations, Section 3 shows the numerical results when the fourth-order scheme is used for solving Fisher's partial differential equation and when the ninth-order family is applied to several test functions. Finally, Section 4 collects the main conclusions of the work.
Some basic definitions must be recalled for analyzing the order of convergence of the methods. Further details, as well as the notation used in this work, can be found in [4,7].

2. The GH Family for Solving Systems of Nonlinear Equations

In [8], a new family of iterative methods for solving nonlinear equations is introduced. Its iterative expression is
$$y_k = x_k - \frac{f(x_k)}{f'(x_k)}, \qquad x_{k+1} = x_k - G(\eta_k)\,\frac{f(x_k)}{f'(x_k)}, \qquad k = 0, 1, 2, \ldots, \tag{1}$$
where $G(\eta_k)$ is a weight function with $\eta_k = \dfrac{f(y_k)}{f(x_k)}$. Its order of convergence is analyzed in the following result. A complete proof can be found in [8]. Our purpose in this paper is to extend this result to the multidimensional case.
Theorem 1.
Let $f: \Omega \subseteq \mathbb{R} \to \mathbb{R}$ be a real sufficiently differentiable function in an open interval $\Omega$ and let $x^* \in \Omega$ be a simple root of $f(x) = 0$. If $x_0$ is close enough to $x^*$ and $G(\eta)$ satisfies the conditions $G(0) = G'(0) = 1$ and $G''(0) = 4$, then all the iterative methods of family (1) converge to $x^*$ with fourth order of convergence, their error equation being
$$e_{k+1} = \left(5c_2^3 - c_2 c_3\right) e_k^4 + O(e_k^5),$$
where $e_k = x_k - x^*$ and $c_j = \dfrac{f^{(j)}(x^*)}{j!\, f'(x^*)}$, $j \ge 2$.
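To make the scalar family concrete, the following minimal Python sketch (our own illustration, with hypothetical function names) implements one member of family (1) with the weight function $G(\eta) = 1 + \eta + 2\eta^2$, which satisfies $G(0) = G'(0) = 1$ and $G''(0) = 4$:

```python
# Member of the scalar family (1) with G(eta) = 1 + eta + 2*eta^2,
# satisfying the conditions G(0) = G'(0) = 1, G''(0) = 4 of Theorem 1.
def fourth_order_scalar(f, df, x0, tol=1e-14, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / df(x)          # Newton predictor
        eta = f(y) / fx             # weight-function variable
        G = 1 + eta + 2 * eta**2    # weight function
        x = x - G * fx / df(x)
    return x

# Simple root of f(x) = x^3 - 2x - 5, starting close to the solution
root = fourth_order_scalar(lambda t: t**3 - 2*t - 5,
                           lambda t: 3*t**2 - 2, x0=2.0)
```

Note that $G(\eta) \equiv 1$ would reproduce Newton's method; the $\eta$-dependent terms of the weight function are what raise the order to four.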
In order to extend family (1) to nonlinear systems, an alternative expression for the variable $\eta_k$ is required. For this purpose, we develop it as follows:
$$\eta_k = \frac{f(y_k)}{f(x_k)} = \frac{f(y_k)}{f'(x_k)(x_k - y_k)} = \frac{f(x_k) - f[y_k, x_k](x_k - y_k)}{f'(x_k)(x_k - y_k)} = \frac{f'(x_k) - f[y_k, x_k]}{f'(x_k)}.$$
So, the extension of family (1) for solving systems of nonlinear equations turns into
$$y^{(k)} = x^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}), \qquad x^{(k+1)} = x^{(k)} - G(\eta_k)\,[F'(x^{(k)})]^{-1} F(x^{(k)}), \tag{3}$$
where
$$\eta_k = [F'(x^{(k)})]^{-1}\left(F'(x^{(k)}) - [y^{(k)}, x^{(k)}; F]\right) \tag{4}$$
and $G: \mathbb{R}^{n\times n} \to \mathbb{R}^{n\times n}$ is a matrix weight function. Furthermore, if $X = \mathbb{R}^{n\times n}$ denotes the space of all $n \times n$ real matrices, then we can define (see [9]) $G: X \to X$ such that its Fréchet derivatives satisfy:
(a) $G'(u)(v) = G_1\, u v$, where $G': X \to \mathcal{L}(X)$, $G_1 \in \mathbb{R}$, and $\mathcal{L}(X)$ denotes the space of linear mappings from $X$ to itself;
(b) $G''(u, v)(w) = G_2\, u v w$, where $G'': X \times X \to \mathcal{L}(X)$ and $G_2 \in \mathbb{R}$;
(c) $G'''(u, v, w)(t) = G_3\, u v w t$, where $G''': X \times X \times X \to \mathcal{L}(X)$ and $G_3 \in \mathbb{R}$.
Moreover, the definition of $\eta_k$ in (4) uses the divided difference operator of $F$ on $\mathbb{R}^n$, $[\cdot,\cdot\,; F]: \Omega \times \Omega \subseteq \mathbb{R}^n \times \mathbb{R}^n \to \mathcal{L}(\mathbb{R}^n)$, defined in [10] as
$$[x, y; F](x - y) = F(x) - F(y), \qquad \text{for any } x, y \in \Omega.$$
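The operator $[\cdot,\cdot\,;F]$ is not unique; one common first-order realization (a sketch of our own, not code from the paper) builds the matrix column by column, mixing components of the two points so that the defining identity holds exactly by telescoping:

```python
import numpy as np

def divided_difference(F, x, y):
    # Column-wise first-order divided difference [x, y; F]: column j uses
    # points that mix the first j components of x with the rest of y, so
    # the identity [x, y; F](x - y) = F(x) - F(y) holds by telescoping.
    n = len(x)
    M = np.zeros((n, n))
    for j in range(n):
        u = np.concatenate([x[:j + 1], y[j + 1:]])
        v = np.concatenate([x[:j], y[j:]])
        M[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return M

# Quick check on a small quadratic map (names and test values are ours):
F = lambda v: np.array([v[0]**2 + v[1], v[1]**2 - v[0]])
x, y = np.array([1.0, 2.0]), np.array([3.0, 5.0])
M = divided_difference(F, x, y)   # M @ (x - y) equals F(x) - F(y)
```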
The next result shows the order of convergence of family (3).
Theorem 2.
Let $F: \Omega \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be a sufficiently Fréchet differentiable function in an open convex set $\Omega$ and let $x^* \in \Omega$ be a solution of the system $F(x) = 0$ such that $F'(x)$ is continuous and nonsingular at $x^*$. Let us suppose that the initial guess $x^{(0)} \in \mathbb{R}^n$ is close enough to $x^*$ and that $G(\eta_k)$ satisfies $G(0) = I$, $G_1 = 1$, $G_2 = 4$ and $|G_3| < +\infty$, where $I$ denotes the identity matrix of size $n \times n$. Then, family (3) converges to $x^*$ with order of convergence four, its error equation being
$$e^{(k+1)} = \left[-C_3 C_2 + \left(5 - \tfrac{1}{6}G_3\right)C_2^3\right]\left(e^{(k)}\right)^4 + O\!\left(\left(e^{(k)}\right)^5\right),$$
$e^{(k)} = x^{(k)} - x^*$ being the error in the $k$th iteration and $C_j = \frac{1}{j!}[F'(x^*)]^{-1} F^{(j)}(x^*)$, $j \ge 2$.
Proof. 
Let us denote by $e^{(k)} = x^{(k)} - x^*$ the error at iteration $k$. By using Taylor expansions around $x^*$, $F(x^{(k)})$ and $F'(x^{(k)})$ can be expressed as
$$F(x^{(k)}) = F'(x^*)\left[e^{(k)} + C_2 (e^{(k)})^2 + C_3 (e^{(k)})^3 + C_4 (e^{(k)})^4\right] + O((e^{(k)})^5),$$
$$F'(x^{(k)}) = F'(x^*)\left[I + 2C_2 e^{(k)} + 3C_3 (e^{(k)})^2 + 4C_4 (e^{(k)})^3\right] + O((e^{(k)})^4),$$
where $C_j = \frac{1}{j!}[F'(x^*)]^{-1} F^{(j)}(x^*)$, $j \ge 2$. In the same way, it holds that
$$[F'(x^{(k)})]^{-1} = \left[X_1 + X_2 e^{(k)} + X_3 (e^{(k)})^2 + X_4 (e^{(k)})^3\right][F'(x^*)]^{-1} + O((e^{(k)})^4).$$
As $[F'(x^{(k)})]^{-1} F'(x^{(k)}) = I$, we have $X_1 = I$ and
$$X_j = -\sum_{i=2}^{j} i\, X_{j-i+1} C_i, \qquad j > 1.$$
From the two previous expansions, we have
$$[F'(x^{(k)})]^{-1} F(x^{(k)}) = e^{(k)} - C_2 (e^{(k)})^2 + J_3 (e^{(k)})^3 + J_4 (e^{(k)})^4 + O((e^{(k)})^5)$$
for the values
$$J_j = C_j + \sum_{i=3}^{j} X_{j-i+2}\, C_{i-1} + X_j, \qquad j > 2.$$
Now, from the above developments,
$$y^{(k)} - x^* = e^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}) = C_2 (e^{(k)})^2 - J_3 (e^{(k)})^3 - J_4 (e^{(k)})^4 + O((e^{(k)})^5).$$
The divided difference operator can be expressed by the Genocchi–Hermite formula [10]
$$[x + h, x; F] = \int_0^1 F'(x + th)\, dt, \qquad x, h \in \mathbb{R}^n.$$
Expanding $F'(x + th)$ in Taylor series around $x$ and integrating, we have
$$[x + h, x; F] = F'(x) + \frac{1}{2} F''(x) h + \frac{1}{6} F'''(x) h^2 + O(h^3).$$
In particular, for $y^{(k)}$ given by the first step of family (3),
$$[y^{(k)}, x^{(k)}; F] = \sum_{i=1}^{3} \frac{1}{i!} F^{(i)}(x^{(k)}) \left(y^{(k)} - x^{(k)}\right)^{i-1} + O((e^{(k)})^4) = F'(x^*)\left[I + C_2 e^{(k)} + S_2 (e^{(k)})^2 + S_3 (e^{(k)})^3\right] + O((e^{(k)})^4)$$
is obtained, where
$$S_2 = C_2^2 + C_3, \qquad S_3 = C_4 + C_3 C_2 - C_2 J_3.$$
Now, the value of $\eta_k$ is given by
$$\eta_k = [F'(x^{(k)})]^{-1}\left(F'(x^{(k)}) - [y^{(k)}, x^{(k)}; F]\right) = C_2 e^{(k)} + A_2 (e^{(k)})^2 + A_3 (e^{(k)})^3 + O((e^{(k)})^4),$$
where
$$A_2 = 2C_3 - 3C_2^2, \qquad A_3 = 8C_2^3 + 3C_4 - 6C_2 C_3 - 4C_3 C_2.$$
By using the expansion of $\eta_k$, its successive powers are obtained:
$$\eta_k^2 = C_2^2 (e^{(k)})^2 + B_3 (e^{(k)})^3 + O((e^{(k)})^4), \quad \text{where } B_3 = C_2 A_2 + A_2 C_2, \quad \text{and} \quad \eta_k^3 = C_2^3 (e^{(k)})^3 + O((e^{(k)})^4).$$
Using the Taylor expansion of $G(\eta_k)$ around $0$, we get
$$G(\eta_k) = G(0) + G_1 \eta_k + \frac{1}{2!} G_2 \eta_k^2 + \frac{1}{3!} G_3 \eta_k^3 + O(\eta_k^4) = G(0) + G_1 C_2 e^{(k)} + \left(G_1 A_2 + \frac{1}{2} G_2 C_2^2\right)(e^{(k)})^2 + \left(G_1 A_3 + \frac{1}{2} G_2 B_3 + \frac{1}{6} G_3 C_2^3\right)(e^{(k)})^3 + O((e^{(k)})^4).$$
Then,
$$G(\eta_k)[F'(x^{(k)})]^{-1} F(x^{(k)}) = G(0) e^{(k)} + (G_1 - G(0)) C_2 (e^{(k)})^2 + \left[G(0) J_3 + G_1 (A_2 - C_2^2) + \frac{1}{2} G_2 C_2^2\right](e^{(k)})^3 + \left[G(0) J_4 + G_1 (C_2 J_3 - A_2 C_2 + A_3) + \frac{1}{2} G_2 (B_3 - C_2^3) + \frac{1}{6} G_3 C_2^3\right](e^{(k)})^4 + O((e^{(k)})^5).$$
Finally, the error equation of family (3) is
$$e^{(k+1)} = e^{(k)} - G(\eta_k)[F'(x^{(k)})]^{-1} F(x^{(k)}) = (I - G(0)) e^{(k)} + (G(0) - G_1) C_2 (e^{(k)})^2 + \left[-G(0) J_3 + G_1 (C_2^2 - A_2) - \frac{1}{2} G_2 C_2^2\right](e^{(k)})^3 + \left[-G(0) J_4 + G_1 (A_2 C_2 - C_2 J_3 - A_3) + \frac{1}{2} G_2 (C_2^3 - B_3) - \frac{1}{6} G_3 C_2^3\right](e^{(k)})^4 + O((e^{(k)})^5).$$
By applying the conditions $G(0) = I$, $G_1 = 1$ and $G_2 = 4$, the error equation turns into
$$e^{(k+1)} = \left[-C_3 C_2 + \left(5 - \frac{1}{6} G_3\right) C_2^3\right](e^{(k)})^4 + O((e^{(k)})^5),$$
so the iterative family (3) is fourth-order convergent. □
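As an illustration of Theorem 2, the following self-contained Python sketch (our own code, not the authors' implementation) performs one iteration of family (3) with the weight function $G(\eta) = I + \eta + 2\eta^2$, which fulfils $G(0) = I$, $G_1 = 1$, $G_2 = 4$, and uses a column-wise divided difference:

```python
import numpy as np

def divided_difference(F, x, y):
    # Column-wise first-order divided difference [x, y; F]; it satisfies
    # [x, y; F](x - y) = F(x) - F(y) by telescoping.
    n = len(x)
    M = np.zeros((n, n))
    for j in range(n):
        u = np.concatenate([x[:j + 1], y[j + 1:]])
        v = np.concatenate([x[:j], y[j:]])
        M[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return M

def family3_step(F, J, x):
    # One iteration of family (3) with G(eta) = I + eta + 2 eta^2,
    # so G(0) = I, G_1 = 1, G_2 = 4, as Theorem 2 requires.
    Jx = J(x)
    d = np.linalg.solve(Jx, F(x))
    y = x - d                                   # Newton predictor
    eta = np.linalg.solve(Jx, Jx - divided_difference(F, y, x))
    G = np.eye(len(x)) + eta + 2 * eta @ eta
    return x - G @ d

# Test system with solution x = y = (1 + sqrt(5))/2 (the golden ratio)
F = lambda v: np.array([v[0]**2 - v[1] - 1, v[1]**2 - v[0] - 1])
J = lambda v: np.array([[2 * v[0], -1.0], [-1.0, 2 * v[1]]])
x = np.array([1.5, 1.5])
for _ in range(10):
    if np.linalg.norm(F(x)) < 1e-12:
        break
    x = family3_step(F, J, x)
```

Only one Jacobian is evaluated per iteration; the divided difference replaces the second derivative evaluation that a scalar-style weight function would need.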
In order to design a higher-order class with the same structure, we now consider the four-step iterative family:
$$\begin{aligned} y^{(k)} &= x^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}),\\ z^{(k)} &= x^{(k)} - G(\eta_k)\,[F'(x^{(k)})]^{-1} F(x^{(k)}),\\ w^{(k)} &= z^{(k)} - [F'(x^{(k)})]^{-1} F(z^{(k)}),\\ x^{(k+1)} &= z^{(k)} - H(\tau_k)\,[F'(x^{(k)})]^{-1} F(z^{(k)}), \end{aligned} \tag{13}$$
where $G(\eta_k)$ and $H(\tau_k)$ are two matrix weight functions with variables defined by
$$\eta_k = [F'(x^{(k)})]^{-1}\left(F'(x^{(k)}) - [y^{(k)}, x^{(k)}; F]\right), \qquad \tau_k = [F'(x^{(k)})]^{-1}\left(F'(x^{(k)}) - [z^{(k)}, w^{(k)}; F]\right).$$
Let us recall that the iterative scheme (13), from now on called the GH family, has been obtained by composing the iterative family (3) with itself. The convergence of the GH family is analyzed in the next result.
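One step of (13) can be sketched in Python (our own illustration) with the weight functions $G(\eta) = I + \eta + 2\eta^2 + 5\eta^3$ and $H(\tau) = I + \tau + \tau^2 + \tau^3$, which satisfy the conditions analyzed below; the early return guards against dividing by a vanishing $z^{(k)} - w^{(k)}$ once the iterate has essentially converged:

```python
import numpy as np

def divided_difference(F, x, y):
    # Column-wise first-order divided difference [x, y; F]
    n = len(x)
    M = np.zeros((n, n))
    for j in range(n):
        u = np.concatenate([x[:j + 1], y[j + 1:]])
        v = np.concatenate([x[:j], y[j:]])
        M[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return M

def gh9_step(F, J, x):
    n = len(x)
    I = np.eye(n)
    Jx, Fx = J(x), F(x)
    d = np.linalg.solve(Jx, Fx)
    y = x - d                                            # first step
    eta = np.linalg.solve(Jx, Jx - divided_difference(F, y, x))
    G = I + eta + 2 * eta @ eta + 5 * eta @ eta @ eta    # G_2 = 4, G_3 = 30
    z = x - G @ d                                        # second step
    Fz = F(z)
    if np.linalg.norm(Fz) < 1e-12:                       # already converged
        return z
    dz = np.linalg.solve(Jx, Fz)
    w = z - dz                                           # third step
    tau = np.linalg.solve(Jx, Jx - divided_difference(F, z, w))
    H = I + tau + tau @ tau + tau @ tau @ tau            # H_2 = 2, H_3 = 6
    return z - H @ dz                                    # fourth step

# Example: 2x2 test system with solution x = y = (1 + sqrt(5))/2
F = lambda v: np.array([v[0]**2 - v[1] - 1, v[1]**2 - v[0] - 1])
J = lambda v: np.array([[2 * v[0], -1.0], [-1.0, 2 * v[1]]])
x = np.array([1.5, 1.5])
for _ in range(5):
    if np.linalg.norm(F(x)) < 1e-12:
        break
    x = gh9_step(F, J, x)
```

All four linear systems share the coefficient matrix $F'(x^{(k)})$, so in an efficient implementation a single LU factorization would be reused.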
Theorem 3.
Let $F: \Omega \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be a sufficiently Fréchet differentiable function in an open convex set $\Omega$ and let $x^* \in \Omega$ be a solution of the system $F(x) = 0$ such that $F'(x)$ is continuous and nonsingular at $x^*$. Let us suppose that the initial estimation $x^{(0)} \in \mathbb{R}^n$ is close enough to $x^*$ and that the weight functions $G(\eta_k)$ and $H(\tau_k)$ satisfy:
(i) $G(0) = I$, $G_1 = 1$, $G_2 = 4$ and $G_3 = 30$;
(ii) $H(0) = I$, $H_1 = 1$, $H_2 = 2$ and $H_3 = 6$.
Then, all the iterative methods of family (13) converge to $x^*$ with order of convergence nine.
Proof. 
By using the developments in the proof of Theorem 2 (with more terms in the error expressions) and proceeding in the same way, we obtain for the second step of family (13)
$$z^{(k)} - x^* = e^{(k)} - G(\eta_k)[F'(x^{(k)})]^{-1} F(x^{(k)}) = -K_4 (e^{(k)})^4 - K_5 (e^{(k)})^5 - K_6 (e^{(k)})^6 - K_7 (e^{(k)})^7 - K_8 (e^{(k)})^8 - K_9 (e^{(k)})^9 + O((e^{(k)})^{10}). \tag{14}$$
The coefficients $K_j$ are obtained using the previous expressions for $X_j$, $J_j$, $S_i$ and $\eta_k$, together with
$$\begin{aligned} S_4 &= C_5 + C_3 C_2^2 + C_4 C_2 - C_2 J_4 - C_3 J_3,\\ S_5 &= C_6 + C_4 C_2^2 + C_5 C_2 - C_2 J_5 - C_3 J_4 - C_4 J_3 - C_3 C_2 J_3 - C_3 J_3 C_2,\\ S_6 &= C_7 + C_4 C_2^3 + C_5 C_2^2 + C_6 C_2 - C_2 J_6 - C_3 J_5 - C_4 J_4 - C_5 J_3 - C_3 C_2 J_4 - C_3 J_4 C_2 - C_4 C_2 J_3 - C_4 J_3 C_2 + C_3 J_3^2,\\ S_7 &= C_8 + C_5 C_2^3 + C_6 C_2^2 + C_7 C_2 - C_2 J_7 - C_3 J_6 - C_4 J_5 - C_5 J_4 - C_6 J_3 - C_3 C_2 J_5 - C_3 J_5 C_2 - C_4 C_2 J_4 - C_4 J_4 C_2 - C_5 C_2 J_3 - C_5 J_3 C_2 + C_4 J_3^2 - C_4 C_2^2 J_3 - C_4 C_2 J_3 C_2 - C_4 J_3 C_2^2 + C_3 J_3 J_4 + C_3 J_4 J_3. \end{aligned}$$
From these expressions, we have
$$A_j = (j+1) C_{j+1} - S_j + X_j C_2 + \sum_{k=2}^{j-1} X_k \left((j-k+2) C_{j-k+2} - S_{j-k+1}\right), \qquad j > 3,$$
and then
$$B_3 = C_2 A_2 + A_2 C_2, \qquad B_j = C_2 A_{j-1} + \sum_{k=2}^{j-2} A_k A_{j-k} + A_{j-1} C_2, \quad j > 3,$$
$$D_4 = C_2 B_3 + A_2 C_2^2, \qquad D_j = C_2 B_{j-1} + \sum_{k=2}^{j-3} A_k B_{j-k} + A_{j-2} C_2^2, \quad j > 4.$$
Thus, it is obtained that
$$F_1 = C_2, \qquad F_2 = 2C_3 - C_2^2, \qquad F_3 = C_2^3 + 3C_4 - 2C_2 C_3, \qquad F_j = A_j + 2B_j + 5D_j, \quad j > 3.$$
Then, the coefficients in (14) are
$$K_4 = J_4 + 3C_2^3 - 2C_2 C_3 - 2C_3 C_2 + F_3, \qquad K_j = J_j + \sum_{k=1}^{j-4} F_k J_{j-k} + 2F_{j-3}\left(C_2^2 - C_3\right) - F_{j-2} C_2 + F_{j-1}, \quad j > 4.$$
Then, being $w^{(k)} = z^{(k)} - [F'(x^{(k)})]^{-1} F(z^{(k)})$,
$$[z^{(k)}, w^{(k)}; F] = F'(z^{(k)}) + \frac{1}{2} F''(z^{(k)})\left(w^{(k)} - z^{(k)}\right) + O((e^{(k)})^6) = F'(x^*)\left[I - C_2 K_4 (e^{(k)})^4 + \left(-C_2 K_5 + C_2 X_2 K_4\right)(e^{(k)})^5\right] + O((e^{(k)})^6).$$
Therefore, the Taylor development of the variable $\tau_k$ of the weight function $H$ is given by
$$\tau_k = [F'(x^{(k)})]^{-1}\left(F'(x^{(k)}) - [z^{(k)}, w^{(k)}; F]\right) = N_1 e^{(k)} + N_2 (e^{(k)})^2 + N_3 (e^{(k)})^3 + N_4 (e^{(k)})^4 + N_5 (e^{(k)})^5 + O((e^{(k)})^6),$$
where
$$N_i = -X_{i+1}, \quad i = 1, 2, 3, \qquad \text{and} \qquad N_4 = C_2 K_4 - X_5.$$
When the conditions of the theorem for the weight function $H$ are applied, we have
$$H(\tau_k) = H(0) + H_1 \tau_k + \frac{1}{2!} H_2 \tau_k^2 + \frac{1}{3!} H_3 \tau_k^3 + O(\tau_k^4) = I + \tau_k + \tau_k^2 + \tau_k^3 + O(\tau_k^4) = I + P_1 e^{(k)} + P_2 (e^{(k)})^2 + P_3 (e^{(k)})^3 + P_4 (e^{(k)})^4 + O((e^{(k)})^5),$$
the coefficients being
$$P_1 = N_1, \quad P_2 = N_2 + N_1^2, \quad P_3 = N_3 + N_1 N_2 + N_2 N_1 + N_1^3, \quad P_4 = N_4 + N_1 N_3 + N_2^2 + N_3 N_1 + N_1^2 N_2 + N_1 N_2 N_1 + N_2 N_1^2.$$
Finally,
$$H(\tau_k)[F'(x^{(k)})]^{-1} F(z^{(k)}) = Q_4 (e^{(k)})^4 + Q_5 (e^{(k)})^5 + Q_6 (e^{(k)})^6 + Q_7 (e^{(k)})^7 + Q_8 (e^{(k)})^8 + O((e^{(k)})^9),$$
being
$$\begin{aligned} Q_4 &= -K_4,\\ Q_5 &= -K_5 - X_2 K_4 - N_1 K_4,\\ Q_6 &= -K_6 - X_2 K_5 - X_3 K_4 - N_1 K_5 - N_1 X_2 K_4 - P_2 K_4,\\ Q_7 &= -K_7 - X_2 K_6 - X_3 K_5 - X_4 K_4 - N_1 K_6 - N_1 X_2 K_5 - N_1 X_3 K_4 - P_2 K_5 - P_2 X_2 K_4 - P_3 K_4,\\ Q_8 &= -K_8 + C_2 K_4^2 - X_3 K_6 - X_2 K_7 - X_4 K_5 - X_5 K_4 - N_1 K_7 - N_1 X_2 K_6 - N_1 X_3 K_5 - N_1 X_4 K_4 - P_2 K_6 - P_2 X_2 K_5 - P_2 X_3 K_4 - P_4 K_4 - P_3 K_5 - P_3 X_2 K_4. \end{aligned}$$
Then, the error equation of the GH family is
$$e^{(k+1)} = z^{(k)} - x^* - H(\tau_k)[F'(x^{(k)})]^{-1} F(z^{(k)}) = T_4 (e^{(k)})^4 + T_5 (e^{(k)})^5 + T_6 (e^{(k)})^6 + T_7 (e^{(k)})^7 + T_8 (e^{(k)})^8 + O((e^{(k)})^9),$$
where
$$T_i = -K_i - Q_i, \qquad i = 4, \ldots, 8.$$
As it can be proven that $T_i = 0$ for $4 \le i \le 8$, the GH family has order of convergence nine. □

3. Numerical Experience

In this section, we apply the iterative families (3) and (13) to several nonlinear systems of equations. In particular, the performance of family (3) is checked by solving Fisher's partial differential equation.

3.1. Application of Family (3) to Fisher’s Equation

Fisher's equation [11]
$$v_t(x,t) = D\, v_{xx}(x,t) + r\, v(x,t)\left(1 - \frac{v(x,t)}{c}\right) \tag{15}$$
represents a model of diffusion in population dynamics, where $D > 0$ is the diffusion constant, $r$ is the growth rate of the species, and $c$ is the carrying capacity. In this section, a specific case of Fisher's equation is solved using iterative methods. In this case, $D = r = c = 1$, so (15) becomes
$$v_t(x,t) = v_{xx}(x,t) + v(x,t) - v^2(x,t). \tag{16}$$
The domain of $x$ is the interval $[-25, 50]$. The boundary conditions are $v(-25, t) = 1$ and $v(50, t) = 0$ for $t > 0$, while the initial condition is
$$v(x, 0) = \begin{cases} 1, & x < -10,\\ 0, & -10 \le x \le 10,\\ 1/4, & 10 < x < 20,\\ 0, & x \ge 20. \end{cases} \tag{17}$$
Discretizing (16) by means of divided differences, the problem can be solved as a family of nonlinear systems. For this purpose, we first select a grid of points in the domain, $(x_i, t_j) \in [-25, 50] \times [0, T_{max}]$, where $x_i = -25 + ih$, $i = 0, 1, \ldots, n_x$, are the nodes in the spatial variable, $t_j = jk$, $j = 0, 1, \ldots, n_t$, are the nodes in time, $h$ and $k$ are the spatial and time steps, respectively, and $n_x$ and $n_t$ are the number of subintervals for the variables $x$ and $t$, respectively. Then, an approximation of the solution at each point $(x_i, t_j)$ of the mesh will be obtained, that is, $v_{i,j} \approx v(x_i, t_j)$.
Applying backward differences to the time derivative and central differences to the spatial one, that is,
$$v_t(x,t) \approx \frac{v(x,t) - v(x,t-k)}{k}, \qquad v_{xx}(x,t) \approx \frac{v(x+h,t) - 2v(x,t) + v(x-h,t)}{h^2},$$
the finite-difference scheme for the approximated problem is
$$\frac{v_{i,j} - v_{i,j-1}}{k} = \frac{v_{i+1,j} - 2v_{i,j} + v_{i-1,j}}{h^2} + v_{i,j} - v_{i,j}^2, \tag{18}$$
for $i = 1, \ldots, n_x - 1$, $j = 1, \ldots, n_t$. After some algebraic manipulations, (18) results in
$$(1 + 2\lambda - k)\, v_{i,j} - \lambda\left(v_{i+1,j} + v_{i-1,j}\right) + k\, v_{i,j}^2 = v_{i,j-1}, \tag{19}$$
where $\lambda = k/h^2$. Depending on the number of subintervals used in the discretization of the variable $x$, a nonlinear system of size $(n_x - 1) \times (n_x - 1)$ is obtained from (19). The nonlinear system defined for a fixed $j$ is the following:
$$A \begin{pmatrix} v_{1,j} \\ v_{2,j} \\ \vdots \\ v_{n_x-1,j} \end{pmatrix} + k \begin{pmatrix} v_{1,j}^2 \\ v_{2,j}^2 \\ \vdots \\ v_{n_x-1,j}^2 \end{pmatrix} - \begin{pmatrix} v_{1,j-1} \\ v_{2,j-1} \\ \vdots \\ v_{n_x-1,j-1} \end{pmatrix} - \lambda \begin{pmatrix} v_{0,j} \\ 0 \\ \vdots \\ v_{n_x,j} \end{pmatrix} = 0, \tag{20}$$
where the matrix $A$ is
$$A = \begin{pmatrix} 1+2\lambda-k & -\lambda & 0 & \cdots & 0 & 0 \\ -\lambda & 1+2\lambda-k & -\lambda & \cdots & 0 & 0 \\ \vdots & & \ddots & \ddots & & \vdots \\ 0 & 0 & \cdots & -\lambda & 1+2\lambda-k & -\lambda \\ 0 & 0 & \cdots & 0 & -\lambda & 1+2\lambda-k \end{pmatrix}.$$
Each system gives an approximated solution of the problem at time step $t_j$ from the one obtained at the instant $t_{j-1}$, so we begin to solve the systems using the solution at $t_0$ provided by (17).
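To make the time-stepping concrete, the sketch below (our own illustration; the function and variable names are ours) assembles system (20) for a fixed $j$ and solves it with plain Newton iteration; any member of family (3) could replace the inner solver:

```python
import numpy as np

def fisher_time_step(v_prev, v_left, v_right, h, k):
    n = len(v_prev)                       # n = nx - 1 interior nodes
    lam = k / h**2
    # Tridiagonal matrix A of (20): 1 + 2*lam - k on the diagonal, -lam off it
    A = ((1 + 2 * lam - k) * np.eye(n)
         - lam * np.eye(n, k=1) - lam * np.eye(n, k=-1))
    b = np.zeros(n)
    b[0], b[-1] = lam * v_left, lam * v_right   # boundary contributions

    F = lambda v: A @ v + k * v**2 - v_prev - b  # residual of system (20)
    J = lambda v: A + 2 * k * np.diag(v)         # its Jacobian

    v = v_prev.copy()
    for _ in range(50):                          # plain Newton iteration
        v = v - np.linalg.solve(J(v), F(v))
        if np.linalg.norm(F(v)) < 1e-12:
            break
    return v

# Grid of this section: x in [-25, 50], nx = 20 subintervals, condition (17)
nx, k = 20, 0.05
h = 75.0 / nx
x = np.linspace(-25.0, 50.0, nx + 1)
v0 = np.where(x < -10, 1.0,
              np.where(x <= 10, 0.0, np.where(x < 20, 0.25, 0.0)))
v1 = fisher_time_step(v0[1:-1], 1.0, 0.0, h, k)  # v(-25,t)=1, v(50,t)=0
```

The value of $k$ above is an assumption for illustration; in the experiments of Table 2 it is determined by $T_{max}$ and $n_t$.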
System (20) can be solved using iterative methods for nonlinear systems, such as family (3). In order to compare the numerical results, another iterative scheme with the same order of convergence as family (3) has been chosen: the method by Sharma et al. [12], denoted in this work by $S_4$, which is fourth-order convergent and has the iterative expression
$$y^{(k)} = x^{(k)} - \frac{2}{3}[F'(x^{(k)})]^{-1} F(x^{(k)}), \qquad x^{(k+1)} = x^{(k)} - \frac{1}{2} L_k [F'(x^{(k)})]^{-1} F(x^{(k)}), \tag{21}$$
where $L_k = -I + \frac{9}{4}[F'(y^{(k)})]^{-1} F'(x^{(k)}) + \frac{3}{4}[F'(x^{(k)})]^{-1} F'(y^{(k)})$.
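A minimal sketch of one $S_4$ iteration (our own code, following the expression above):

```python
import numpy as np

def s4_step(F, J, x):
    # One iteration of the fourth-order Sharma et al. scheme used as
    # comparison method; it evaluates two Jacobians, F'(x) and F'(y).
    Jx = J(x)
    newton = np.linalg.solve(Jx, F(x))
    y = x - (2.0 / 3.0) * newton
    Jy = J(y)
    L = (-np.eye(len(x)) + 2.25 * np.linalg.solve(Jy, Jx)
         + 0.75 * np.linalg.solve(Jx, Jy))
    return x - 0.5 * L @ newton

# Small 2x2 test system with solution x = y = (1 + sqrt(5))/2
F = lambda v: np.array([v[0]**2 - v[1] - 1, v[1]**2 - v[0] - 1])
J = lambda v: np.array([[2 * v[0], -1.0], [-1.0, 2 * v[1]]])
x = np.array([1.5, 1.5])
for _ in range(8):
    x = s4_step(F, J, x)
```

At convergence $F'(y^{(k)}) = F'(x^{(k)})$, so $L_k$ reduces to $2I$ and the step coincides with Newton's correction.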
On the other hand, to check the numerical behavior of family (3), it is necessary to select a weight function satisfying the conditions of Theorem 2, thus obtaining a particular method of this family. Several functions satisfy these conditions, some of them being
$$\text{(a)}\;\; G(\eta_k) = I + \eta_k + 2\eta_k^2, \qquad \text{(b)}\;\; G(\eta_k) = \left(I - 2\eta_k\right)^{-1}\left(I - \eta_k\right), \qquad \text{(c)}\;\; G(\eta_k) = \left(\eta_k^2 - \eta_k + I\right)^{-1}\left(2\eta_k^2 + I\right). \tag{22}$$
An efficiency comparison between the proposed schemes is given in terms of the computational cost and the number of functional evaluations. This comparison helps to choose the most efficient method of family (3). For this purpose, we can use the efficiency index defined by Ostrowski [13] as $I = p^{1/d}$, where $p$ is the order of the method and $d$ is the number of new functional evaluations per iteration required by the method. The computational cost of solving a linear system of equations depends on its size. As the proposed methods can be used for solving large systems of equations, this cost must be taken into account. Thus, we compare the performance of the methods with the computational efficiency index introduced in [7] as $CE = p^{1/(d + op)}$, where $op$ is the number of operations (products and quotients) per iteration.
Table 1 summarizes the computational efficiency index and the number of functional evaluations for each method, where $G4_1$, $G4_2$, and $G4_3$ denote the iterative schemes resulting from family (3) with the weight functions (a), (b) and (c) of (22), respectively.
For each method, Table 1 shows the number of different evaluations of the function ($F$), of the Jacobian matrix ($F'$), and the number of different divided differences used by the method in each iteration (nDD). All the methods need $n$ and $n^2$ evaluations to compute $F$ and $F'$, respectively, and $n(n-1)/2$ for each divided difference operator.
Regarding the operational cost, the value of Mv is the number of matrix–vector products, with $n^2$ operations for each product. To compute an inverse linear operator, one may solve an $n \times n$ linear system of equations, where an LU decomposition is performed and two triangular systems must be solved, with a total cost of $\frac{1}{3}n^3 + n^2 - \frac{1}{3}n$ operations. However, for solving $r$ linear systems with the same matrix of coefficients, the LU decomposition is computed only once, so the computational cost is only $\frac{1}{3}n^3 + r n^2 - \frac{1}{3}n$. The values of s1 and s2 are the number of linear systems that each scheme solves per iteration with matrix of coefficients $F'(x^{(k)})$ or another matrix, respectively.
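The total cost per iteration $d + op$ that enters the computational efficiency index can be evaluated directly; the sketch below (our own helper functions) compares the exponents of $G4_1$ and $S_4$ reported in Table 1:

```python
# CE = p**(1/(d + op)); the cost expressions below are those of Table 1.
def ce(p, cost):
    return p ** (1.0 / cost)

def cost_g41(n):
    # single LU factorization of F'(x) per iteration: n^3/3 leading term
    return n**3 / 3 + 13 * n**2 / 2 + n / 6

def cost_s4(n):
    # two LU factorizations, F'(x) and F'(y): 2n^3/3 leading term
    return 2 * n**3 / 3 + 6 * n**2 + n / 3

vals = {n: (ce(4, cost_g41(n)), ce(4, cost_s4(n))) for n in (5, 20, 100)}
```

Since the leading term of the cost of $G4_1$ is half that of $S_4$, its index is larger for every tested size, in agreement with Figure 1.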
As the results in Table 1 show, the most efficient method of family (3) is $G4_1$; hence, the numerical performance of the family is analyzed with this method. In addition, method $S_4$ requires more functional evaluations than the other ones, as it computes two Jacobian matrices in each iteration.
The results obtained in Table 1 are represented in Figure 1, where the value of $\log_4(CE)$ for the four methods is plotted as the size of the system $n$ varies. As we can see, for small values of $n$, the indices of $G4_1$ and $S_4$ show similar performance; meanwhile, when the value of $n$ increases, the computational efficiency index of all the methods decreases, but the index of method $G4_1$ remains greater than the rest.
We have solved system (20) using methods $S_4$ and $G4_1$ for $n_x = 20$. For the numerical tests we use Matlab R2017b with variable precision arithmetic of 1000 digits of mantissa. The results of applying the methods to the nonlinear system are collected in Table 2, varying the values of $n_t$ and $T_{max}$. For each run, the iterative procedure stops when $\|F(x^{(k+1)})\| < 10^{-6}$ or when the number of iterations reaches 50. The value of iter represents the mean number of iterations needed when all the columns have been calculated, and the terms $a(b)$ stand for $a \cdot 10^{b}$. Moreover, the elapsed time in seconds to obtain the solution of the problem after 10 consecutive executions is shown.
The results in Table 2 show the good performance of method $G4_1$ for solving Fisher's problem. Method $G4_1$ only needs two iterations to calculate a solution of the system, its mean number of iterations always being lower than that of method $S_4$. For a fixed value of $T_{max}$, when $n_t$ increases, so does the elapsed time, but the approximation of the solution is better, since $\|F(x^{(k+1)})\|$ is smaller. In addition, the e-time is lower for method $G4_1$, so it reaches the solution with more computational efficiency and arithmetical precision than the other scheme.

3.2. Application of Family GH to Nonlinear Test Systems

According to the results obtained in Table 1 for methods $G4_{1,2,3}$, the numerical experiments for family (13) are developed by using the following weight functions:
$$G(\eta_k) = I + \eta_k + 2\eta_k^2 + 5\eta_k^3, \qquad H(\tau_k) = I + \tau_k + \tau_k^2 + \tau_k^3, \tag{23}$$
which satisfy the conditions of Theorem 3.
To compare the features of our method with other schemes from the literature, the numerical tests are also performed on two iterative schemes of order eight that can be found in [5,14]. The methods are denoted $GH_9$ for our iterative family using the functions (23), $NLM_8$ for the scheme of [5], and $SLB_8$ for that of [14]. The computational efficiency index and the number of functional evaluations of these methods are collected in Table 3.
Method $GH_9$ requires few functional evaluations, has a low operational cost, and shows a competitive computational efficiency index (see Figure 2). We will now see that the numerical experiments confirm these results. For this purpose, methods $NLM_8$, $SLB_8$, and $GH_9$ are applied to solve the following nonlinear systems:
(a) $F_1(x, y) = \left(f_{11}(x, y), f_{12}(x, y)\right)^T = 0$, where
$$f_{11}(x, y) = x^2 - y - 19, \qquad f_{12}(x, y) = \frac{1}{6} y^3 - x^2 + y - 17;$$
(b) $F_2(x, y, z) = \left(f_{21}(x, y, z), f_{22}(x, y, z), f_{23}(x, y, z)\right)^T = 0$, such that
$$f_{21}(x, y, z) = \sin(x) + y^2 + \log(z) - 7, \qquad f_{22}(x, y, z) = 3x + 2y - \frac{1}{z^3} + 1, \qquad f_{23}(x, y, z) = x + y - z - 5;$$
(c) $F_3(x, y, z) = \left(f_{31}(x, y, z), f_{32}(x, y, z), f_{33}(x, y, z)\right)^T = 0$, where
$$f_{31}(x, y, z) = 2x + y - z - 4, \qquad f_{32}(x, y, z) = 2y + z + x - 4, \qquad f_{33}(x, y, z) = xyz - 1.$$
The results obtained from applying the methods to the nonlinear systems are collected in Table 4, Table 5 and Table 6. The stopping criterion is now a difference between two consecutive iterates lower than $10^{-200}$ or the condition $\|F(x^{(k+1)})\| < 10^{-200}$, with a maximum of 50 iterations. The results have been calculated using Matlab R2017b with variable precision arithmetic of 2000 digits of mantissa; in this way, the numerical noise is far enough from the results not to affect them.
The ACOC [15] is the approximated computational order of convergence, a computational estimate of the theoretical order, defined as
$$ACOC = \frac{\ln\left(\|x^{(k+1)} - x^{(k)}\| \,/\, \|x^{(k)} - x^{(k-1)}\|\right)}{\ln\left(\|x^{(k)} - x^{(k-1)}\| \,/\, \|x^{(k-1)} - x^{(k-2)}\|\right)}. \tag{24}$$
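The ACOC can be computed from the last four iterates of any run; the sketch below (our own helper) verifies it on Newton's method for $x^2 = 2$, whose theoretical order is two:

```python
import numpy as np

def acoc(xs):
    # ACOC estimate (24) from the last four iterates of a sequence
    d1 = np.linalg.norm(xs[-1] - xs[-2])
    d2 = np.linalg.norm(xs[-2] - xs[-3])
    d3 = np.linalg.norm(xs[-3] - xs[-4])
    return np.log(d1 / d2) / np.log(d2 / d3)

# Sanity check: Newton iteration for x^2 = 2 has order p = 2
xs = [np.array([1.0])]
for _ in range(4):
    xs.append((xs[-1] + 2.0 / xs[-1]) / 2.0)
p = acoc(xs)
```

In double precision the estimate is reliable only while the consecutive differences stay well above the rounding level, which is why the experiments above use multiprecision arithmetic.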
For every nonlinear system, the highest value of the ACOC corresponds to the GH family, as expected. In general, our proposed scheme converges in fewer iterations than the other tested methods, with very competitive error estimates.

4. Conclusions

Two families of iterative methods for solving nonlinear systems of equations have been introduced, the first one being a multidimensional extension of a previous scalar class of iterative methods. Both schemes are designed via the matrix weight function procedure, with only one evaluation of the Jacobian and its estimation by divided differences for systems. Two more steps are added to the fourth-order class, keeping the Jacobian matrix evaluated at the same point, which allows us to reach order of convergence nine for the GH family. The numerical tests confirm the quality of the iterative schemes for solving systems of equations of considerable size, improving the results of several methods in the literature.

Author Contributions

The individual contributions of the authors have been: conceptualization, J.R.T. and N.G.; software, F.I.C.; validation, F.I.C., A.C. and J.R.T.; formal analysis, N.G.; investigation, F.I.C.; writing—original draft preparation, N.G.; writing—review and editing, J.R.T. and A.C.

Funding

This research was partially supported by both Ministerio de Ciencia, Innovación y Universidades and Generalitat Valenciana, under grants PGC2018-095896-B-C22 (MCIU/AEI/FEDER/UE) and PROMETEO/2016/089, respectively.

Acknowledgments

The authors would like to thank the anonymous reviewers for their kind and constructive suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput. 2014, 226, 635–660.
2. Cordero, A.; Torregrosa, J.R. On the design of optimal iterative methods for solving nonlinear equations. In Advances in Iterative Methods for Nonlinear Equations; Amat, S., Busquier, S., Eds.; Springer: Cham, Switzerland, 2016; pp. 79–111.
3. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651.
4. Cordero, A.; Gómez, E.; Torregrosa, J.R. Efficient high-order iterative methods for solving nonlinear systems and their application on heat conduction problems. Complexity 2017, 2017, 6457532.
5. Sharma, J.; Arora, H. Improved Newton-like methods for solving systems of nonlinear equations. SeMA J. 2017, 74, 147–163.
6. Amiri, A.; Cordero, A.; Darvishi, M.; Torregrosa, J. Stability analysis of a parametric family of seventh-order iterative methods for solving nonlinear systems. Appl. Math. Comput. 2018, 323, 43–57.
7. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton–Jarratt's composition. Numer. Algorithms 2010, 55, 87–99.
8. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. Wide stability in a new family of optimal fourth-order iterative methods. Comput. Math. Methods 2019, 1, e1023.
9. Artidiello, S. Diseño, Implementación y Convergencia de Métodos Iterativos para Resolver Ecuaciones y Sistemas No Lineales Utilizando Funciones Peso [Design, Implementation and Convergence of Iterative Methods for Solving Nonlinear Equations and Systems Using Weight Functions]. Ph.D. Thesis, Universitat Politècnica de València, Valencia, Spain, 2014.
10. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: Cambridge, MA, USA, 1970.
11. Fisher, R.A. The wave of advance of advantageous genes. Ann. Eugen. 1937, 7, 355–369.
12. Sharma, J.; Guha, R.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323.
13. Ostrowski, A. Solution of Equations and Systems of Equations; Prentice-Hall: Upper Saddle River, NJ, USA, 1964.
14. Soleymani, F.; Lotfi, T.; Bakhtiari, P. A multi-step class of iterative methods for nonlinear systems. Optim. Lett. 2014, 8, 1001–1015.
15. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
Figure 1. $\log_4(CE)$ from Table 1 for methods $G4_1$, $G4_2$, $G4_3$, and $S_4$ and different sizes of the system.
Figure 2. $CE$ from Table 3 for methods $NLM_8$, $SLB_8$, and $GH_9$ for different sizes of the system.
Table 1. Functional evaluations and computational efficiency index of methods $G4_1$, $G4_2$, $G4_3$, and $S_4$.

| Method | F | F′ | nDD | d | s1 | s2 | Mv | op | Order | CE |
|---|---|---|---|---|---|---|---|---|---|---|
| $G4_1$ | 1 | 1 | 1 | $\frac{n(3n+1)}{2}$ | 3 | 0 | 2 | $\frac{1}{3}n^3 + 5n^2 - \frac{1}{3}n$ | 4 | $4^{1/(\frac{1}{3}n^3 + \frac{13}{2}n^2 + \frac{1}{6}n)}$ |
| $G4_2$ | 1 | 1 | 1 | $\frac{n(3n+1)}{2}$ | 3 | 2 | 1 | $\frac{2}{3}n^3 + 6n^2 - \frac{2}{3}n$ | 4 | $4^{1/(\frac{2}{3}n^3 + \frac{15}{2}n^2 - \frac{1}{6}n)}$ |
| $G4_3$ | 1 | 1 | 1 | $\frac{n(3n+1)}{2}$ | 3 | 3 | 2 | $\frac{2}{3}n^3 + 8n^2 - \frac{2}{3}n$ | 4 | $4^{1/(\frac{2}{3}n^3 + \frac{19}{2}n^2 - \frac{1}{6}n)}$ |
| $S_4$ | 1 | 2 | 0 | $2n^2 + n$ | 2 | 1 | 1 | $\frac{2}{3}n^3 + 4n^2 - \frac{2}{3}n$ | 4 | $4^{1/(\frac{2}{3}n^3 + 6n^2 + \frac{1}{3}n)}$ |
Table 2. Numerical results for $n_x = 20$ and different values of $T_{max}$ and $n_t$.

$T_{max} = 0.5$:

| Method | $n_t$ | iter | $\|F(x^{(k+1)})\|$ | e-time |
|---|---|---|---|---|
| $S_4$ | 100 | 16.0 | 9.6054(−7) | 397.1095 |
| $G4_1$ | 100 | 2.0 | 9.233(−19) | 93.5890 |
| $S_4$ | 200 | 15.0 | 7.8637(−7) | 781.0227 |
| $G4_1$ | 200 | 2.0 | 7.2024(−21) | 152.5548 |
| $S_4$ | 500 | 13.0 | 8.4237(−7) | 1.6520(03) |
| $G4_1$ | 500 | 2.0 | 1.179(−23) | 515.5332 |

$T_{max} = 1$:

| Method | $n_t$ | iter | $\|F(x^{(k+1)})\|$ | e-time |
|---|---|---|---|---|
| $S_4$ | 100 | 17.96 | 8.1801(−7) | 838.7530 |
| $G4_1$ | 100 | 2.0 | 1.6217(−16) | 205.8390 |
| $S_4$ | 200 | 16.37 | 6.6969(−7) | 880.8873 |
| $G4_1$ | 200 | 2.0 | 1.2656(−18) | 211.5296 |
| $S_4$ | 500 | 14.592 | 7.1745(−7) | 1.9021(03) |
| $G4_1$ | 500 | 2.0 | 2.0725(−21) | 518.8443 |

$T_{max} = 2$:

| Method | $n_t$ | iter | $\|F(x^{(k+1)})\|$ | e-time |
|---|---|---|---|---|
| $S_4$ | 100 | 19.5 | 7.2062(−7) | 529.8552 |
| $G4_1$ | 100 | 2.0 | 2.4203(−14) | 106.0159 |
| $S_4$ | 200 | 17.98 | 9.6415(−7) | 965.8759 |
| $G4_1$ | 200 | 2.0 | 1.8728(−16) | 209.1218 |
| $S_4$ | 500 | 16.11 | 6.3074(−7) | 2.0914(03) |
| $G4_1$ | 500 | 2.0 | 3.0515(−19) | 513.5127 |
Table 3. Functional evaluations and computational efficiency index of methods $NLM_8$, $SLB_8$ and $GH_9$.

| Method | F | F′ | nDD | d | s1 | s2 | Mv | op | Order | CE |
|---|---|---|---|---|---|---|---|---|---|---|
| $NLM_8$ | 3 | 2 | 0 | $2n^2 + 3n$ | 7 | 0 | 2 | $\frac{1}{3}n^3 + 11n^2 - \frac{2}{3}n$ | 8 | $8^{1/(\frac{1}{3}n^3 + 13n^2 + \frac{7}{3}n)}$ |
| $SLB_8$ | 3 | 2 | 0 | $2n^2 + 3n$ | 2 | 9 | 6 | $\frac{2}{3}n^3 + 17n^2 - \frac{2}{3}n$ | 8 | $8^{1/(\frac{2}{3}n^3 + 19n^2 + \frac{7}{3}n)}$ |
| $GH_9$ | 2 | 1 | 2 | $2n^2 + n$ | 8 | 0 | 6 | $\frac{1}{3}n^3 + 14n^2 - \frac{1}{3}n$ | 9 | $9^{1/(\frac{1}{3}n^3 + 16n^2 + \frac{2}{3}n)}$ |
Table 4. Numerical results for $F_1(x, y)$.

| Method | $x^{(0)}$ | Iter | $\|F(x^{(k+1)})\|$ | ACOC |
|---|---|---|---|---|
| $NLM_8$ | [7, 7] | 3 | 1.929(−201) | 6.9500 |
| $SLB_8$ | [7, 7] | 3 | 1.433(−378) | 7.8627 |
| $GH_9$ | [7, 7] | 3 | 4.151(−343) | 8.2992 |
| $NLM_8$ | [4, −4.5] | 47 | 1.255(−865) | 6.0711 |
| $SLB_8$ | [4, −4.5] | 36 | 3.522(−203) | 7.7505 |
| $GH_9$ | [4, −4.5] | 20 | 1.164(−1218) | 7.9956 |
| $NLM_8$ | [−10, −7.5] | 20 | 3.371(−501) | 6.4336 |
| $SLB_8$ | [−10, −7.5] | 5 | 6.248(−1248) | 8.0000 |
| $GH_9$ | [−10, −7.5] | 4 | 1.722(−416) | 8.1830 |
Table 5. Numerical results for $F_2(x, y, z)$.

| Method | $x^{(0)}$ | Iter | $\|F(x^{(k+1)})\|$ | ACOC |
|---|---|---|---|---|
| $NLM_8$ | [−1.5, −1.5, −1.5] | 4 | 7.484(−633) | 6.3006 |
| $SLB_8$ | [−1.5, −1.5, −1.5] | 4 | 3.902(−1402) | 7.9274 |
| $GH_9$ | [−1.5, −1.5, −1.5] | 4 | 7.286(−1127) | 8.0749 |
| $NLM_8$ | [0, −1, −1.5] | 5 | 9.291(−1174) | 5.892 |
| $SLB_8$ | [0, −1, −1.5] | 4 | 2.557(−1318) | 7.9291 |
| $GH_9$ | [0, −1, −1.5] | 4 | 1.064(−425) | 8.5315 |
| $NLM_8$ | [3, 4, 5] | 5 | 1.200(−958) | 5.9817 |
| $SLB_8$ | [3, 4, 5] | 5 | 1.306(−839) | 8.0606 |
| $GH_9$ | [3, 4, 5] | 4 | 3.982(−350) | 8.3187 |
Table 6. Numerical results for $F_3(x, y, z)$.

| Method | $x^{(0)}$ | Iter | $\|F(x^{(k+1)})\|$ | ACOC |
|---|---|---|---|---|
| $NLM_8$ | [−1, 1, 2] | 4 | 2.587(−412) | 7.8793 |
| $SLB_8$ | [−1, 1, 2] | 13 | 2.419(−854) | 7.9973 |
| $GH_9$ | [−1, 1, 2] | 4 | 6.575(−616) | 8.0173 |
| $NLM_8$ | [−0.6, 0.8, 2.7] | 34 | 1.393(−290) | 7.7554 |
| $SLB_8$ | [−0.6, 0.8, 2.7] | 14 | 7.066(−785) | 7.9959 |
| $GH_9$ | [−0.6, 0.8, 2.7] | 4 | 2.445(−511) | 8.0092 |
| $NLM_8$ | [−2.5, −1, 1] | 15 | 1.667(−420) | 7.9179 |
| $SLB_8$ | [−2.5, −1, 1] | 5 | 7.361(−1165) | 7.9992 |
| $GH_9$ | [−2.5, −1, 1] | 4 | 2.522(−325) | 8.3981 |
