Article

Memory in a New Variant of King’s Family for Solving Nonlinear Systems

Munish Kansal, Alicia Cordero, Sonia Bhalla and Juan R. Torregrosa

1 School of Mathematics, Thapar Institute of Engineering and Technology University, Patiala 147004, India
2 Institute for Multidisciplinary Mathematics, Universitat Politècnica de València, Camino de Vera s/n, 46022 València, Spain
3 Department of Mathematics, Chandigarh University, Gharuan 140413, India
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(8), 1251; https://doi.org/10.3390/math8081251
Submission received: 8 July 2020 / Revised: 24 July 2020 / Accepted: 28 July 2020 / Published: 31 July 2020
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems 2020)

Abstract

In the recent literature, very few high-order Jacobian-free methods with memory for solving nonlinear systems appear. In this paper, we introduce a new variant of King's family with order four to solve nonlinear systems, along with its convergence analysis. The proposed family requires two divided difference operators and the computation of only one matrix inverse per iteration. Furthermore, we extend the proposed scheme up to sixth order of convergence at the cost of two additional functional evaluations. In addition, these schemes are further extended to methods with memory. We illustrate their applicability by performing numerical experiments on a wide variety of practical problems, some of them of large size. It is observed that these methods produce approximations of greater accuracy and are more efficient in practice than the existing methods.

1. Introduction

Nonlinear systems of equations, Ψ(x) = 0, with Ψ : D ⊆ R^n → R^n, appear very frequently in many areas of Science and Engineering, and finding their solutions by analytical methods is hard or rarely possible. For this reason, many authors have estimated the solutions of nonlinear systems of equations by means of iterative techniques. One of the oldest and simplest iterative methods is Newton's method [1,2], defined as
$$x^{(j+1)} = x^{(j)} - \left[\Psi'(x^{(j)})\right]^{-1}\Psi(x^{(j)}), \quad j = 0, 1, 2, \ldots, \tag{1}$$
where Ψ′(x^{(j)}) denotes the Jacobian matrix of Ψ evaluated at x^{(j)}. This method converges quadratically provided the initial guess is chosen close enough to the solution. There are many higher-order techniques available in the literature [3,4,5] that use Newton's method as a predictor step. In many realistic situations, however, the first-order Fréchet derivative Ψ′(x) does not exist or is too costly to compute. For such situations, Traub [2] introduced a Jacobian-free method, of order two, defined by
$$x^{(j+1)} = x^{(j)} - [w^{(j)}, x^{(j)}; \Psi]^{-1}\Psi(x^{(j)}), \quad j = 0, 1, 2, \ldots, \tag{2}$$
where [w^{(j)}, x^{(j)}; Ψ] is the first-order divided difference of Ψ, w^{(j)} = x^{(j)} + βΨ(x^{(j)}), and β ≠ 0 is an arbitrary constant. For β = 1, method (2) reduces to the multidimensional extension of Steffensen's method, presented by Samanskii in [6].
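For illustration purposes only, the following Python sketch implements method (2). The componentwise definition of the divided difference operator used here is one standard choice from the literature; the function names, the tolerance, and the stopping criterion are our own assumptions and are not prescribed by the text.

```python
import numpy as np

def divided_difference(Psi, w, x):
    """First-order divided difference operator [w, x; Psi].

    Componentwise definition: column j uses Psi with the first j+1
    coordinates taken from w and the rest from x, so that
    [w, x; Psi](w - x) = Psi(w) - Psi(x) by telescoping.
    Requires w[j] != x[j] for every j."""
    n = len(x)
    D = np.zeros((n, n))
    for j in range(n):
        up = np.concatenate((w[:j + 1], x[j + 1:]))
        lo = np.concatenate((w[:j], x[j:]))
        D[:, j] = (Psi(up) - Psi(lo)) / (w[j] - x[j])
    return D

def traub_steffensen(Psi, x0, beta=1.0, tol=1e-12, itmax=100):
    """Jacobian-free second-order method (2): w = x + beta*Psi(x),
    x+ = x - [w, x; Psi]^{-1} Psi(x). For beta = 1 this is the
    multidimensional Steffensen scheme mentioned above."""
    x = np.asarray(x0, dtype=float)
    for _ in range(itmax):
        Fx = Psi(x)
        w = x + beta * Fx
        x_new = x - np.linalg.solve(divided_difference(Psi, w, x), Fx)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```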
After that, many researchers developed higher-order Jacobian-free methods [7,8,9,10]. On the other hand, accelerating the order of convergence of an iterative scheme while keeping the same number of computations per iteration has become a popular line of research. Such schemes are known as methods "with memory", i.e., iterative methods that use data from more than one previous iteration. There is still very little literature [11,12,13,14] on methods with memory for solving nonlinear systems. With this motivation, we develop new iterative schemes that attain an order of convergence as high as possible while keeping the number of functional evaluations per iteration as low as possible.
This manuscript is organized as follows. In Section 2, we construct new methods of fourth and sixth order and carry out their convergence analysis. The construction and convergence study of their corresponding iterative families with memory is presented in Section 3. In Section 4, various numerical tests have been made to check the theoretical results and to compare the properties of the presented algorithms with those of some similar existing methods. Some conclusions close the paper.

2. Design of the New Class

For constructing the new methods, we consider the known King's family [15] of iterative schemes to solve ψ(x) = 0, given by the expression
$$x^{(j+1)} = z^{(j)} - \tau\!\left(x^{(j)}, z^{(j)}\right)\frac{\psi(z^{(j)})}{\psi'(x^{(j)})}, \quad j = 0, 1, \ldots, \tag{3}$$
where z^{(j)} = x^{(j)} − ψ(x^{(j)})/ψ′(x^{(j)}), τ(x^{(j)}, z^{(j)}) = [ψ(x^{(j)}) + αψ(z^{(j)})] / [ψ(x^{(j)}) + (α − 2)ψ(z^{(j)})], and α ∈ R. If α = 0, the well-known Ostrowski's method is obtained [2]. Dividing the numerator and denominator of τ(x^{(j)}, z^{(j)}) by ψ(x^{(j)}), we get
$$\tau\!\left(x^{(j)}, z^{(j)}\right) = \frac{1 + \alpha u^{(j)}}{1 + (\alpha - 2)u^{(j)}}, \tag{4}$$
where u^{(j)} = ψ(z^{(j)})/ψ(x^{(j)}). Now, re-writing
$$u^{(j)} = \frac{\psi(z^{(j)})}{\psi(x^{(j)})} = \frac{\psi(z^{(j)}) - \psi(x^{(j)}) + \psi(x^{(j)})}{\psi(x^{(j)})} = 1 + \frac{\psi(z^{(j)}) - \psi(x^{(j)})}{\psi(x^{(j)})}. \tag{5}$$
By using the first step of Equation (3) and the divided difference operator, we have
$$u^{(j)} = 1 - \frac{1}{\psi'(x^{(j)})}\left[z^{(j)}, x^{(j)}; \psi\right]. \tag{6}$$
This idea allows the generalization of the family to the multidimensional case, and it was first used in [16]. Now, using the binomial expansion of τ(x^{(j)}, z^{(j)}) in (4) up to two terms, it can be expressed as
$$\tau\!\left(x^{(j)}, z^{(j)}\right) \approx 1 + 2u^{(j)} - 2(\alpha - 2)\left(u^{(j)}\right)^{2}. \tag{7}$$
Finally, using Equations (5)–(7) in (3), we get
$$x^{(j+1)} = z^{(j)} - \left[1 + 2u^{(j)} - 2(\alpha - 2)\left(u^{(j)}\right)^{2}\right]\frac{\psi(z^{(j)})}{\psi'(x^{(j)})}, \quad j = 0, 1, \ldots, \tag{8}$$
where z^{(j)} = x^{(j)} − ψ(x^{(j)})/ψ′(x^{(j)}).
Finally, we replace ψ′(x^{(j)}) by the central divided difference [w_1^{(j)}, w_2^{(j)}; Ψ] = [x^{(j)} − γΨ(x^{(j)}), x^{(j)} + δΨ(x^{(j)}); Ψ] in (8) and, considering two more steps with two additional functional evaluations of Ψ, we propose the following new variant of King's family in the multidimensional case:
$$\begin{aligned} z_{1}^{(j)} &= x^{(j)} - [w_{1}^{(j)}, w_{2}^{(j)}; \Psi]^{-1}\Psi(x^{(j)}), \\ z_{2}^{(j)} &= z_{1}^{(j)} - Q^{(j)}\Psi(z_{1}^{(j)}), \\ x^{(j+1)} &= z_{2}^{(j)} - Q^{(j)}\Psi(z_{2}^{(j)}), \quad j = 0, 1, \ldots, \end{aligned} \tag{9}$$
where u^{(j)} = I − [w_1^{(j)}, w_2^{(j)}; Ψ]^{-1}[z_1^{(j)}, x^{(j)}; Ψ], Q^{(j)} = [I + 2u^{(j)} − 2(α − 2)(u^{(j)})²][w_1^{(j)}, w_2^{(j)}; Ψ]^{-1}, and γ, δ, α ∈ R. Let us observe that all the linear systems solved per iteration share the same coefficient matrix, which improves the efficiency.
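As an illustration, one iteration of scheme (9) can be implemented as follows, reusing the divided_difference helper sketched in the Introduction; the values γ = 0.01, δ = 0.02 (so that γ ≠ δ) and α = 1/2 are our own illustrative choices. Note that the single matrix inverse computed per iteration is reused in every substep.

```python
import numpy as np

def king_variant4_step(Psi, x, gamma=0.01, delta=0.02, alpha=0.5):
    """One iteration of the fourth-order Jacobian-free scheme (9)."""
    n = len(x)
    I = np.eye(n)
    Fx = Psi(x)
    w1, w2 = x - gamma * Fx, x + delta * Fx
    # the only inverse computed per iteration
    A_inv = np.linalg.inv(divided_difference(Psi, w1, w2))
    z1 = x - A_inv @ Fx
    u = I - A_inv @ divided_difference(Psi, z1, x)
    Q = (I + 2 * u - 2 * (alpha - 2) * (u @ u)) @ A_inv
    z2 = z1 - Q @ Psi(z1)
    return z2 - Q @ Psi(z2)
```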

2.1. Analysis of the Convergence

To explore the convergence of (9), let us recall the results appearing in [1] about the Taylor series expansion of vectorial functions. Let Ψ : D ⊆ R^n → R^n be d-times Fréchet differentiable in a convex set D ⊆ R^n. Then, for any x, h ∈ R^n, the following expression holds:
$$\Psi(x + h) = \Psi(x) + \Psi'(x)h + \frac{1}{2!}\Psi''(x)h^{2} + \frac{1}{3!}\Psi'''(x)h^{3} + \cdots + \frac{1}{(d-1)!}\Psi^{(d-1)}(x)h^{d-1} + R_{d}, \tag{10}$$
where
$$\|R_{d}\| \leq \sup_{0 \leq u \leq 1}\frac{1}{d!}\left\|\Psi^{(d)}(x + uh)\right\|\|h\|^{d} \quad \text{and} \quad h^{d} = (h, h, \overset{(d)}{\ldots}, h).$$
Consider the mapping [·, ·; Ψ] : D × D ⊆ R^n × R^n → L(R^n), i.e., the first-order divided difference operator of Ψ on R^n, which can be expressed by the Genocchi-Hermite formula [17],
$$\left[x^{(j)} + h, x^{(j)}; \Psi\right] = \int_{0}^{1}\Psi'(x^{(j)} + uh)\,du, \qquad (x^{(j)}, h) \in R^{n} \times R^{n}. \tag{11}$$
By developing Ψ′(x^{(j)} + uh) in Taylor series around x^{(j)} and integrating, we get
$$\int_{0}^{1}\Psi'(x^{(j)} + uh)\,du = \Psi'(x^{(j)}) + \frac{1}{2}\Psi''(x^{(j)})h + \frac{1}{6}\Psi'''(x^{(j)})h^{2} + O\!\left(h^{3}\right). \tag{12}$$
Let x* ∈ R^n satisfy Ψ(x*) = 0 and define e^{(j)} = x^{(j)} − x*. Assuming that Ψ′(x*)^{-1} exists, we develop Ψ(x^{(j)}) and its derivatives in a neighbourhood of x*:
$$\Psi(x^{(j)}) = \Psi'(x^{*})\left[e^{(j)} + C_{2}\left(e^{(j)}\right)^{2} + C_{3}\left(e^{(j)}\right)^{3} + C_{4}\left(e^{(j)}\right)^{4} + C_{5}\left(e^{(j)}\right)^{5}\right] + O\!\left(\left(e^{(j)}\right)^{6}\right), \tag{13}$$
where C_i = (1/i!) Γ Ψ^{(i)}(x*) ∈ L_i(R^n, R^n), i = 2, 3, …, and Γ = Ψ′(x*)^{-1}.
From Equation (13), the derivatives of Ψ ( x ( j ) ) can be written as
$$\Psi'(x^{(j)}) = \Psi'(x^{*})\left[I + 2C_{2}e^{(j)} + 3C_{3}\left(e^{(j)}\right)^{2} + 4C_{4}\left(e^{(j)}\right)^{3} + 5C_{5}\left(e^{(j)}\right)^{4}\right] + O\!\left(\left(e^{(j)}\right)^{5}\right), \tag{14}$$
$$\Psi''(x^{(j)}) = \Psi'(x^{*})\left[2C_{2} + 6C_{3}e^{(j)} + 12C_{4}\left(e^{(j)}\right)^{2} + 20C_{5}\left(e^{(j)}\right)^{3}\right] + O\!\left(\left(e^{(j)}\right)^{4}\right), \tag{15}$$
and
$$\Psi'''(x^{(j)}) = \Psi'(x^{*})\left[6C_{3} + 24C_{4}e^{(j)}\right] + O\!\left(\left(e^{(j)}\right)^{2}\right), \tag{16}$$
where I is the identity matrix of order n.
Now, the order of convergence of (9) can be demonstrated through the following result.
Theorem 1.
Let x* ∈ R^n be a solution of the system Ψ(x) = 0, where Ψ : D ⊆ R^n → R^n is sufficiently differentiable in an open neighborhood D of x* at which Ψ′(x*) is non-singular. Then, for an initial guess x^{(0)} sufficiently close to x* and any α ∈ R, the iterative scheme (9) has at least fourth order of convergence, provided γ ≠ δ.
Proof. 
Let e^{(j)} = x^{(j)} − x* be the error in the approximation x^{(j)}, and denote e_1^{(j)} = w_1^{(j)} − x* = e^{(j)} − γΨ(x^{(j)}) and e_2^{(j)} = w_2^{(j)} − x* = e^{(j)} + δΨ(x^{(j)}).
Using (13), we have
$$e_{1}^{(j)} = \left(I - \gamma\Psi'(x^{*})\right)e^{(j)} - \gamma\Psi'(x^{*})\left[C_{2}\left(e^{(j)}\right)^{2} + C_{3}\left(e^{(j)}\right)^{3} + C_{4}\left(e^{(j)}\right)^{4}\right] + O\!\left(\left(e^{(j)}\right)^{5}\right), \tag{17}$$
and
$$e_{2}^{(j)} = \left(I + \delta\Psi'(x^{*})\right)e^{(j)} + \delta\Psi'(x^{*})\left[C_{2}\left(e^{(j)}\right)^{2} + C_{3}\left(e^{(j)}\right)^{3} + C_{4}\left(e^{(j)}\right)^{4}\right] + O\!\left(\left(e^{(j)}\right)^{5}\right). \tag{18}$$
In view of Equations (11), (12), and (14)–(16), setting x^{(j)} + h = e_1^{(j)} + x* and x^{(j)} = e_2^{(j)} + x*, so that h = e_1^{(j)} − e_2^{(j)}, we have
$$[w_{1}^{(j)}, w_{2}^{(j)}; \Psi] = \Psi'(x^{*})\left[I + \left(e_{1}^{(j)} + e_{2}^{(j)}\right)C_{2}\right] + O\!\left(\left(e^{(j)}\right)^{2}\right), \tag{19}$$
$$[w_{1}^{(j)}, w_{2}^{(j)}; \Psi]^{-1} = \Gamma\left[I - \left(e_{1}^{(j)} + e_{2}^{(j)}\right)C_{2}\right] + O\!\left(\left(e^{(j)}\right)^{2}\right), \tag{20}$$
where Γ = [Ψ′(x*)]^{-1}.
Employing Equations (13) and (20) in the first step of scheme (9), one gets the error equation
$$e_{z_{1}}^{(j)} = z_{1}^{(j)} - x^{*} = \left[(\delta - \gamma)\Psi'(x^{*}) + I\right]C_{2}\left(e^{(j)}\right)^{2} - \left[\left((\gamma - \delta)^{2}\Psi'(x^{*})^{2} - 2(\gamma - \delta)\Psi'(x^{*}) + 2I\right)C_{2}^{2} + C_{3}\right]\left(e^{(j)}\right)^{3} + O\!\left(\left(e^{(j)}\right)^{4}\right). \tag{21}$$
Also, we have
$$\begin{aligned} [z_{1}^{(j)}, x^{(j)}; \Psi] = \Psi'(x^{*})\Big[I &+ C_{2}e^{(j)} + \left(C_{2}^{2}\left(\Psi'(x^{*})(\delta - \gamma) + I\right) + C_{3}\right)\left(e^{(j)}\right)^{2} \\ &+ \Big(C_{2}C_{3} + \left(I - \Psi'(x^{*})(\gamma - \delta)\right)C_{3}C_{2} + \left(\Psi'(x^{*})^{2}(\gamma - \delta)^{2} + 2\Psi'(x^{*})(\gamma - \delta) - 2I\right)C_{2}^{3} + 2C_{4}\Big)\left(e^{(j)}\right)^{3}\Big] + O\!\left(\left(e^{(j)}\right)^{4}\right). \end{aligned} \tag{22}$$
Using (20) and (22), we obtain
$$\begin{aligned} u^{(j)} ={}& \left[I + \Psi'(x^{*})(\delta - \gamma)\right]C_{2}e^{(j)} - \left[\left(\Psi'(x^{*})^{2}(\gamma - \delta)^{2} - 3\Psi'(x^{*})(\gamma - \delta) + 3I\right)C_{2}^{2} + C_{3}\right]\left(e^{(j)}\right)^{2} \\ &+ \left[2C_{4} - \left(3I + 2\Psi'(x^{*})(\delta - \gamma)\right)C_{2}C_{3} - \left(I - \Psi'(x^{*})(\gamma - \delta)\right)C_{3}C_{2} - (\gamma - \delta)\left((\gamma - \delta)\Psi'(x^{*})^{2} - 2\Psi'(x^{*})\right)C_{2}^{3}\right]\left(e^{(j)}\right)^{3} + O\!\left(\left(e^{(j)}\right)^{4}\right). \end{aligned} \tag{23}$$
From Equation (23), we get
$$Q^{(j)} = \left[I + (\delta - \gamma)\Psi'(x^{*})C_{2}e^{(j)} + \left(2C_{3} + \left(2(1 + \alpha)I + (1 + 4\alpha)(\gamma - \delta)\Psi'(x^{*}) - (2\alpha - 1)(\gamma - \delta)^{2}\Psi'(x^{*})^{2}\right)C_{2}^{2}\right)\left(e^{(j)}\right)^{2}\right]\Gamma + O\!\left(\left(e^{(j)}\right)^{3}\right). \tag{24}$$
Again, using Taylor series around x * , we obtain
$$\Psi(z_{1}^{(j)}) = \Psi'(x^{*})\left\{\left[(\delta - \gamma)\Psi'(x^{*}) + I\right]C_{2}\left(e^{(j)}\right)^{2} - \left[\left((\gamma - \delta)^{2}\Psi'(x^{*})^{2} - 2(\gamma - \delta)\Psi'(x^{*}) + 2I\right)C_{2}^{2} + C_{3}\right]\left(e^{(j)}\right)^{3}\right\} + O\!\left(\left(e^{(j)}\right)^{4}\right). \tag{25}$$
Substituting Equations (22)–(25) in the second step of technique (9), the error equation up to the second step is
$$\begin{aligned} e_{z_{2}}^{(j)} ={}& -C_{2}^{2}(\delta - \gamma)\Psi'(x^{*})\left[\Psi'(x^{*})(\delta - \gamma) + I\right]\left(e^{(j)}\right)^{3} \\ &+ \Big[(\delta - \gamma)\Psi'(x^{*})C_{2}C_{3} + \left(2I + (\delta - \gamma)\Psi'(x^{*})\right)C_{3}C_{2} \\ &\quad + \left((2\alpha + 1)I + 3(2\alpha + 1)(\delta - \gamma)\Psi'(x^{*}) - (\gamma - \delta)^{2}(6\alpha + 1)\Psi'(x^{*})^{2} - 2\alpha(\gamma - \delta)^{3}\Psi'(x^{*})^{3}\right)C_{2}^{3}\Big]\left(e^{(j)}\right)^{4} + O\!\left(\left(e^{(j)}\right)^{5}\right). \end{aligned} \tag{26}$$
Again, developing Ψ(z_2^{(j)}) in Taylor series around x* and using Equation (23), the error equation of (9) is given by
$$e^{(j+1)} = (\delta - \gamma)^{2}\left(\Psi'(x^{*})\right)^{2}\left[\Psi'(x^{*})(\delta - \gamma) + I\right]C_{2}^{3}\left(e^{(j)}\right)^{4} + O\!\left(\left(e^{(j)}\right)^{5}\right). \tag{27}$$
Hence, (9) has order of convergence four, provided γ ≠ δ. □
Let us remark that the proposed method achieves fourth order of convergence. However, the error equation depends on the two parameters γ and δ, which would allow increasing the order by taking appropriate values. Moreover, further steps of similar shape can be added, increasing the final order of convergence by several units, as can be seen in the following section.

2.2. Development and Convergence Analysis of Sixth-Order Scheme

Now, we propose the following new iterative procedure, obtained by introducing additional steps in the proposed technique (9), to achieve sixth-order convergence:
$$\begin{aligned} z_{1}^{(j)} &= x^{(j)} - [w_{1}^{(j)}, w_{2}^{(j)}; \Psi]^{-1}\Psi(x^{(j)}), \\ z_{2}^{(j)} &= z_{1}^{(j)} - Q^{(j)}\Psi(z_{1}^{(j)}), \\ z_{3}^{(j)} &= z_{2}^{(j)} - Q^{(j)}\Psi(z_{2}^{(j)}), \\ z_{4}^{(j)} &= z_{3}^{(j)} - Q^{(j)}\Psi(z_{3}^{(j)}), \\ x^{(j+1)} &= z_{4}^{(j)} - Q^{(j)}\Psi(z_{4}^{(j)}), \quad j = 0, 1, \ldots, \end{aligned} \tag{28}$$
where u^{(j)} = I − [w_1^{(j)}, w_2^{(j)}; Ψ]^{-1}[z_1^{(j)}, x^{(j)}; Ψ], Q^{(j)} = [I + 2u^{(j)} − 2(α − 2)(u^{(j)})²][w_1^{(j)}, w_2^{(j)}; Ψ]^{-1}, and γ, δ, α ∈ R.
Next, we prove the convergence of scheme (28) in the following result.
Theorem 2.
Let Ψ : D ⊆ R^n → R^n be sufficiently differentiable in D, an open neighborhood of x*, which satisfies Ψ(x*) = 0. Assume that the initial guess x^{(0)} is sufficiently close to the required zero x* and that Ψ′(x) is continuous and non-singular at x*. Then, the local order of convergence of the sequence {x^{(j)}} generated by (28) is six for all α ∈ R, provided the real parameters γ and δ satisfy γ ≠ δ.
Proof. 
From the fourth step of scheme (28), using (23)–(27) and the Taylor series of Ψ(z_3^{(j)}), we have
$$e_{z_{4}}^{(j)} = -C_{2}^{4}(\delta - \gamma)^{3}\left(\Psi'(x^{*})\right)^{3}\left[\Psi'(x^{*})(\delta - \gamma) + I\right]\left(e^{(j)}\right)^{5} + O\!\left(\left(e^{(j)}\right)^{6}\right). \tag{29}$$
In a similar way, considering the Taylor series of Ψ(z_4^{(j)}) and Equations (23)–(29), we get the final sixth-order error equation
$$e^{(j+1)} = C_{2}^{5}(\delta - \gamma)^{4}\left(\Psi'(x^{*})\right)^{4}\left[\Psi'(x^{*})(\delta - \gamma) + I\right]\left(e^{(j)}\right)^{6} + O\!\left(\left(e^{(j)}\right)^{7}\right). \tag{30}$$
Hence, the proposed method (28) has order of convergence six, provided γ ≠ δ. □
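In code, (28) is a small extension of the fourth-order sketch given after (9): the frozen operator Q^{(j)} is applied four times after the first substep instead of twice. The parameter values below remain our own illustrative assumptions.

```python
import numpy as np

def king_variant6_step(Psi, x, gamma=0.01, delta=0.02, alpha=0.5):
    """One iteration of the sixth-order scheme (28); the single inverse
    A_inv and the operator Q are reused in all substeps."""
    n = len(x)
    I = np.eye(n)
    Fx = Psi(x)
    w1, w2 = x - gamma * Fx, x + delta * Fx
    A_inv = np.linalg.inv(divided_difference(Psi, w1, w2))
    z = x - A_inv @ Fx                                  # z1
    u = I - A_inv @ divided_difference(Psi, z, x)
    Q = (I + 2 * u - 2 * (alpha - 2) * (u @ u)) @ A_inv
    for _ in range(4):                                  # z2, z3, z4, x_{j+1}
        z = z - Q @ Psi(z)
    return z
```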

3. Methods with Memory

As stated in (27), the convergence order of the scheme is four whenever the factor Ψ′(x*)(δ − γ) + I does not vanish. On the other hand, if we choose δ and γ such that (γ − δ)I = [Ψ′(x*)]^{-1}, then the convergence is of order at least five. However, further acceleration of the convergence is not possible in the absence of knowledge about the value of [Ψ′(x*)]^{-1}. Nevertheless, we can estimate −[Ψ′(x*)]^{-1} using the already available data, which leads to an accelerated order of convergence. This idea was suggested by Traub in [2] and later used and extended by Petković et al. in [18], in both cases for scalar equations. Motivated by this fact, we approximate −[Ψ′(x*)]^{-1} by
$$B^{(j)} = -\left\{[u_{1}^{(j-1)}, v_{1}^{(j-1)}; \Psi]\right\}^{-1} \approx -\left[\Psi'(x^{*})\right]^{-1}, \quad j \geq 1, \tag{31}$$
where u_1^{(j)} = x^{(j)} − γ_1 B^{(j)}Ψ(x^{(j)}) and v_1^{(j)} = x^{(j)} + δ_1 B^{(j)}Ψ(x^{(j)}), using the current and previously available data. In this manner, we extend the Jacobian-free methods (9) and (28) to schemes with memory for solving nonlinear systems. Thus, we can define the new methods with memory as follows:
$$\begin{aligned} z_{1}^{(j)} &= x^{(j)} - [u_{1}^{(j)}, v_{1}^{(j)}; \Psi]^{-1}\Psi(x^{(j)}), \\ z_{2}^{(j)} &= z_{1}^{(j)} - Q^{(j)}\Psi(z_{1}^{(j)}), \\ x^{(j+1)} &= z_{2}^{(j)} - Q^{(j)}\Psi(z_{2}^{(j)}), \quad j = 0, 1, \ldots, \end{aligned} \tag{32}$$
and
$$\begin{aligned} z_{1}^{(j)} &= x^{(j)} - [u_{1}^{(j)}, v_{1}^{(j)}; \Psi]^{-1}\Psi(x^{(j)}), \\ z_{2}^{(j)} &= z_{1}^{(j)} - Q^{(j)}\Psi(z_{1}^{(j)}), \\ z_{3}^{(j)} &= z_{2}^{(j)} - Q^{(j)}\Psi(z_{2}^{(j)}), \\ z_{4}^{(j)} &= z_{3}^{(j)} - Q^{(j)}\Psi(z_{3}^{(j)}), \\ x^{(j+1)} &= z_{4}^{(j)} - Q^{(j)}\Psi(z_{4}^{(j)}), \quad j = 0, 1, \ldots, \end{aligned} \tag{33}$$
respectively. Here, Q^{(j)} = [I + 2u^{(j)} − 2(α − 2)(u^{(j)})²][u_1^{(j)}, v_1^{(j)}; Ψ]^{-1}, u^{(j)} = I − [u_1^{(j)}, v_1^{(j)}; Ψ]^{-1}[z_1^{(j)}, x^{(j)}; Ψ], and γ_1 and δ_1 are arbitrary constants satisfying δ_1 − γ_1 = 1.
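The following sketch assembles scheme (32) together with its memory update, again relying on the divided_difference helper above. Our reading of (31), in which B^{(j)} is minus the inverse of the previous divided difference operator, is an interpretation (the extracted formulas lose minus signs), and the initialization B^{(0)} = 0.001 I mirrors the choice made in Section 4.

```python
import numpy as np

def pm4_with_memory(Psi, x0, gamma1=1.0, alpha=0.5, tol=1e-12, itmax=50):
    """Scheme (32) with memory (R-order 2 + sqrt(5)), delta1 - gamma1 = 1."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    I = np.eye(n)
    delta1 = gamma1 + 1.0            # enforce delta1 - gamma1 = 1
    B = 0.001 * I                    # accelerating matrix B(0), as in Section 4
    for _ in range(itmax):
        Fx = Psi(x)
        u1 = x - gamma1 * (B @ Fx)
        v1 = x + delta1 * (B @ Fx)
        A_inv = np.linalg.inv(divided_difference(Psi, u1, v1))
        z1 = x - A_inv @ Fx
        u = I - A_inv @ divided_difference(Psi, z1, x)
        Q = (I + 2 * u - 2 * (alpha - 2) * (u @ u)) @ A_inv
        z2 = z1 - Q @ Psi(z1)
        x_new = z2 - Q @ Psi(z2)
        B = -A_inv                   # memory update: our reading of (31)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```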

Convergence Analysis of Methods with Memory

Now, we state and prove the R-order of convergence of schemes with memory (32) and (33).
Theorem 3.
Let Ψ : D ⊆ R^n → R^n be sufficiently differentiable in an open convex neighborhood D of x*, a solution of Ψ(x) = 0, let the matrix B^{(j)} be recursively calculated as in (31), and let the initial guess x^{(0)} be close enough to the solution x*. Then, the R-orders of convergence of the iterative schemes (32) and (33) are at least 2 + √5 ≈ 4.24 and 3 + √10 ≈ 6.162, respectively, provided δ_1 − γ_1 = 1.
Proof. 
Let {x^{(j)}} be the sequence of approximations generated by the iterative expression (32), converging to the solution x* of Ψ(x) = 0 with R-order at least r. Then,
$$e^{(j+1)} \sim D^{(j,r)}\left(e^{(j)}\right)^{r}, \tag{34}$$
where {D^{(j,r)}} is a sequence tending to the asymptotic error constant D_r when j → ∞. Let us also remark that the notation s ∼ t means that the magnitudes of s and t have the same order. Further on,
$$e^{(j+1)} \sim D^{(j,r)}\left(D^{(j-1,r)}\left(e^{(j-1)}\right)^{r}\right)^{r} = D^{(j,r)}\left(D^{(j-1,r)}\right)^{r}\left(e^{(j-1)}\right)^{r^{2}}. \tag{35}$$
By (31) and (20) applied at iteration j − 1, one gets
$$B^{(j)} = -[u_{1}^{(j-1)}, v_{1}^{(j-1)}; \Psi]^{-1} = -\Gamma\left[I - \left(e_{1}^{(j-1)} + e_{2}^{(j-1)}\right)C_{2}\right] + O\!\left(\left(e^{(j-1)}\right)^{2}\right) \tag{36}$$
and
$$I + B^{(j)}\Psi'(x^{*}) = \left(e_{1}^{(j-1)} + e_{2}^{(j-1)}\right)C_{2} + O\!\left(\left(e^{(j-1)}\right)^{2}\right) \sim e^{(j-1)}. \tag{37}$$
Using the accelerating matrix (δ − γ)I = B^{(j)} in (27), we get
$$e^{(j+1)} = C_{2}^{3}\left(B^{(j)}\right)^{2}\Psi'(x^{*})^{2}\left[\Psi'(x^{*})B^{(j)} + I\right]\left(e^{(j)}\right)^{4} + O\!\left(\left(e^{(j)}\right)^{5}\right). \tag{38}$$
Applying (37) in (38), one gets
$$e^{(j+1)} \sim e^{(j-1)}\left(e^{(j)}\right)^{4} \sim e^{(j-1)}\left(\left(e^{(j-1)}\right)^{r}\right)^{4} = \left(e^{(j-1)}\right)^{4r+1}. \tag{39}$$
Then, comparing the powers of e ( j 1 ) in (35) and (39), we obtain
$$r^{2} = 4r + 1. \tag{40}$$
By using Theorem 9.2.9 of [1], the R-order of convergence of this scheme is, at least, the unique positive root of (40), that is, 2 + √5 ≈ 4.24.
In similar terms, setting (δ − γ)I = B^{(j)} in (30) and using (37), one gets
$$e^{(j+1)} = C_{2}^{5}\left(B^{(j)}\right)^{4}\left(\Psi'(x^{*})\right)^{4}\left[\Psi'(x^{*})B^{(j)} + I\right]\left(e^{(j)}\right)^{6} + O\!\left(\left(e^{(j)}\right)^{7}\right) \sim e^{(j-1)}\left(e^{(j)}\right)^{6} \sim e^{(j-1)}\left(\left(e^{(j-1)}\right)^{r}\right)^{6} = \left(e^{(j-1)}\right)^{6r+1}. \tag{41}$$
By means of a comparison of the powers of e ( j 1 ) in (35) and (41), we get
$$r^{2} = 6r + 1. \tag{42}$$
In a similar way as before, solving Equation (42), the R-order of (33) is at least 3 + √10 ≈ 6.162. This completes the proof. □
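The positive roots of (40) and (42) can be verified symbolically, for instance:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
# unique positive roots of r^2 = 4r + 1 and r^2 = 6r + 1
print(sp.solve(sp.Eq(r**2, 4 * r + 1), r))  # [2 + sqrt(5)]  ~= 4.236
print(sp.solve(sp.Eq(r**2, 6 * r + 1), r))  # [3 + sqrt(10)] ~= 6.162
```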
Once this convergence order has been established, the performance of these procedures must be checked on different kinds of problems. In the following section, several real-life problems (some of them large-sized) are solved by using these techniques.

4. Numerical Experiments

In this section, we consider several numerical problems to show the performance of the proposed methods. The new schemes (32), denoted by PM4_1 and PM4_2 for α = 1/2 and α = 1/4, respectively, are compared with the existing techniques with memory proposed by Petković and Sharma (SM4, with c = 0.01) [12] and by Narang et al. (MM4) [14]. The proposed schemes (33) for α = 1/2 and α = 1/4 are denoted by PM6_3 and PM6_4, respectively, and are compared with the existing schemes with memory SM6 and MM6, proposed by Sharma and Arora [13] and Narang et al. [14], respectively. For a fair comparison, the performance of the new methods as well as of the existing ones is tested with the same initial estimation of the accelerating matrix, B^{(0)} = 0.001 I. The numerical results are obtained with γ_1 = 1 and δ_1 satisfying δ_1 − γ_1 = 1. To numerically check the order of convergence proven theoretically, we display the iteration index j, the residual ‖Ψ(x^{(j)})‖, the distance between the two last iterations ‖x^{(j+1)} − x^{(j)}‖, and the approximated computational order of convergence (ρ), using the ACOC defined in [19],
$$ACOC \approx \frac{\ln\left(\|x^{(j+1)} - x^{(j)}\| \,/\, \|x^{(j)} - x^{(j-1)}\|\right)}{\ln\left(\|x^{(j)} - x^{(j-1)}\| \,/\, \|x^{(j-1)} - x^{(j-2)}\|\right)}, \quad j = 2, 3, \ldots,$$
where x^{(j−2)}, x^{(j−1)}, x^{(j)}, and x^{(j+1)} are four consecutive approximations of the iterative process. All numerical computations were done in Mathematica 11 [20] with multiple-precision arithmetic, using 2000 digits of mantissa, with the aim of minimizing round-off errors. In all tables, b(−c) denotes b × 10^{−c}.
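For completeness, the ACOC can be estimated from stored iterates as in the following sketch (the function name and interface are our own):

```python
import numpy as np

def acoc(iterates):
    """ACOC estimates from a list of at least four consecutive iterates,
    following the estimator quoted above."""
    d = [np.linalg.norm(iterates[k + 1] - iterates[k])
         for k in range(len(iterates) - 1)]
    return [np.log(d[k + 1] / d[k]) / np.log(d[k] / d[k - 1])
            for k in range(1, len(d) - 1)]
```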
Example 1.
Let us first consider the problem of kinematic synthesis for automotive steering, described in [21,22,23,24]. It is modeled as the nonlinear system
$$\left[G_{i}\left(x_{2}\sin\phi_{i} - x_{3}\right) - E_{i}\left(x_{2}\sin\psi_{i} - x_{3}\right)\right]^{2} + \left[E_{i}\left(x_{2}\cos\psi_{i} - 1\right) - G_{i}\left(x_{2}\cos\phi_{i} + 1\right)\right]^{2} - \left[x_{1}\left(x_{2}\cos\phi_{i} + 1\right)\left(x_{2}\sin\psi_{i} - x_{3}\right) - x_{1}\left(x_{2}\sin\phi_{i} - x_{3}\right)\left(x_{2}\cos\psi_{i} - x_{3}\right)\right]^{2} = 0, \quad i = 1, 2, 3,$$
where
$$E_{i} = \left(\sin\phi_{i} - \sin\phi_{0}\right)x_{3}x_{2} - \left(x_{2}\sin\phi_{i} - x_{3}\right)x_{1} + \left(\cos\phi_{i} - \cos\phi_{0}\right)x_{2}, \quad i = 1, 2, 3,$$
and
$$G_{i} = \left(x_{3}\sin\psi_{i} - x_{2}\cos\psi_{i}\right)x_{2} + \sin\psi_{0}\left(x_{3} - x_{1}x_{2}\right) + \cos\psi_{0}\left(x_{2} + x_{1}x_{3}\right), \quad i = 1, 2, 3.$$
Table 1 shows the values of ψ_i and φ_i, in radians. We have considered the initial estimation x^{(0)} = (0.91, 0.70, 0.66)^T in order to approximate the solution
$$x^{*} \approx (0.9051567, 0.6977417, 0.6508335)^{T}.$$
The numerical results are displayed in Table 2.
It can be observed in Table 2 that the error estimations of the proposed methods are better than those of the known methods from the first iterations. Moreover, the estimated order of convergence coincides with the theoretical one for all the schemes.
Example 2.
Now, we focus our study on the Van der Pol equation [22,25], defined as
$$z'' - \mu\left(z^{2} - 1\right)z' + z = 0, \quad \mu > 0, \tag{44}$$
which governs the flow in a vacuum tube, with boundary conditions z(0) = 0, z(2) = 1. Moreover, the set of nodes in the interval [0, 2] is given by
$$x_{0} = 0 < x_{1} < x_{2} < x_{3} < \cdots < x_{n}, \quad \text{where } x_{i} = x_{0} + ih \text{ and } h = \frac{2}{n}.$$
Indeed, we assume that
$$z_{0} = z(x_{0}) = 0, \quad z_{1} = z(x_{1}), \; \ldots, \; z_{n-1} = z(x_{n-1}), \quad z_{n} = z(x_{n}) = 1.$$
By discretizing problem (44) with central divided differences for the first and second derivatives,
$$z_{i}' = \frac{z_{i+1} - z_{i-1}}{2h}, \qquad z_{i}'' = \frac{z_{i-1} - 2z_{i} + z_{i+1}}{h^{2}}, \quad i = 1, 2, \ldots, n-1,$$
we get a system of nonlinear equations of size (n − 1) × (n − 1),
$$2h^{2}z_{i} - h\mu\left(z_{i}^{2} - 1\right)\left(z_{i+1} - z_{i-1}\right) + 2\left(z_{i-1} + z_{i+1} - 2z_{i}\right) = 0.$$
We use μ = 1/2 and the initial estimation z_i^{(0)} = 1/2, i = 1, 2, …, n − 1. Also, we employ n = 101, so we obtain a 100 × 100 system of nonlinear equations. The numerical results are shown in Table 3.
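As a reference for the reader, a minimal Python sketch of the resulting residual function is given below; the layout (boundary values appended around the vector of interior unknowns) is our own choice.

```python
import numpy as np

def vdp_residual(z, mu=0.5, n=101):
    """Residual of the discretized Van der Pol boundary value problem.

    z holds the interior values z_1, ..., z_{n-1}; the boundary values
    z_0 = 0 and z_n = 1 are appended before applying the formula above."""
    h = 2.0 / n
    zf = np.concatenate(([0.0], z, [1.0]))
    zm, zc, zp = zf[:-2], zf[1:-1], zf[2:]
    return 2 * h**2 * zc - h * mu * (zc**2 - 1) * (zp - zm) + 2 * (zm + zp - 2 * zc)
```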
In this case, the best results are obtained by schemes MM4 and MM6, although those obtained by our proposed methods are very similar.
Example 3.
Let us consider another nonlinear problem, the coupled Burgers equations [26], defined as
$$\begin{aligned} &\frac{\partial u}{\partial t} - \frac{\partial^{2} u}{\partial x^{2}} - 2u\frac{\partial u}{\partial x} + \frac{\partial (uv)}{\partial x} = 0, \\ &\frac{\partial v}{\partial t} - \frac{\partial^{2} v}{\partial x^{2}} - 2v\frac{\partial v}{\partial x} + \frac{\partial (uv)}{\partial x} = 0, \\ &u(0, t) = v(0, t) = 0, \quad t \geq 0, \qquad u(x, 0) = v(x, 0) = \sin x, \end{aligned} \tag{45}$$
in the intervals x ∈ [0, 5] and 0 ≤ t ≤ 1/4. Again, by using finite differences, Equation (45) is reduced to a nonlinear system. Let w_{i,j} = u(x_i, t_j) and r_{i,j} = v(x_i, t_j) be, respectively, the estimated solutions at the nodes of the mesh, let M and N denote the number of subintervals in x and t, respectively, and let h_1 and h_2 be the corresponding step sizes. We apply central differences to the second derivatives,
$$u_{xx}(x_{i}, t_{j}) \approx \frac{w_{i+1,j} - 2w_{i,j} + w_{i-1,j}}{h_{1}^{2}} \quad \text{and} \quad v_{xx}(x_{i}, t_{j}) \approx \frac{r_{i+1,j} - 2r_{i,j} + r_{i-1,j}}{h_{1}^{2}},$$
backward differences to the time derivatives,
$$u_{t}(x_{i}, t_{j}) \approx \frac{w_{i,j} - w_{i,j-1}}{h_{2}} \quad \text{and} \quad v_{t}(x_{i}, t_{j}) \approx \frac{r_{i,j} - r_{i,j-1}}{h_{2}},$$
and central differences to the first spatial derivatives,
$$u_{x}(x_{i}, t_{j}) \approx \frac{w_{i+1,j} - w_{i-1,j}}{2h_{1}} \quad \text{and} \quad v_{x}(x_{i}, t_{j}) \approx \frac{r_{i+1,j} - r_{i-1,j}}{2h_{1}}.$$
We consider M = 9 and N = 9, which yields a system of size 128. The initial estimation has been built in Mathematica as pts = Range[−0.8, 0.8, 0.025] and x0 = Drop[Drop[Join[pts, pts], 1], −1]. The matrix plots of the two divided differences used in the proposed iterative scheme are shown in Figure 1 and Figure 2. In Figure 3, the approximation of the solution is represented. In Figure 4, the blue line shows the exact solution u(x, t) = e^{−t} sin x and the dotted red points represent the approximated solution.
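Assuming the quoted Mathematica range is Range[−0.8, 0.8, 0.025] (the minus sign is lost in the extraction; 65 equally spaced points are exactly what makes the construction yield 128 components), a Python analogue of this initial estimation is:

```python
import numpy as np

pts = np.arange(-0.8, 0.8 + 1e-12, 0.025)   # 65 equally spaced points
x0 = np.concatenate((pts, pts))[1:-1]        # Join plus the two Drops -> 128 entries
assert x0.size == 128
```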
Example 4.
Now, we check the performance of the methods on the mixed Hammerstein integral equation (see [1], pp. 19–20),
$$x(s) = 1 + \frac{1}{5}\int_{0}^{1} G(s, t)\,x(t)^{3}\,dt,$$
with kernel
$$G(s, t) = \begin{cases} (1 - s)t, & t \leq s, \\ s(1 - t), & s \leq t, \end{cases}$$
where x ∈ C[0, 1] and s, t ∈ [0, 1].
We use the Gauss-Legendre quadrature formula, ∫₀¹ f(t) dt ≈ Σ_{j=1}^{8} w_j f(t_j), to transform this integral equation into a finite-dimensional problem. The nodes t_j and weights w_j, j = 1, 2, …, 8, are determined by means of Legendre polynomials (see Table 4). Denoting the approximation of x(t_i) by x_i, i = 1, 2, …, 8, one gets the system of nonlinear equations F(x) = (f_1(x), …, f_8(x))^T = 0, where
$$f_{i}(x_{1}, \ldots, x_{8}) = 5x_{i} - 5 - \sum_{j=1}^{8} a_{ij}x_{j}^{3} = 0, \quad i = 1, 2, \ldots, 8,$$
and
$$a_{ij} = \begin{cases} w_{j}t_{j}(1 - t_{i}), & j \leq i, \\ w_{j}t_{i}(1 - t_{j}), & i < j. \end{cases}$$
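The finite-dimensional system can be assembled, for instance, as in the following sketch; numpy's Gauss-Legendre routine reproduces the nodes and weights of Table 4 after mapping [−1, 1] to [0, 1], and the helper name hammerstein is ours.

```python
import numpy as np

# 8-point Gauss-Legendre nodes/weights, mapped from [-1, 1] to [0, 1]
t, w = np.polynomial.legendre.leggauss(8)
t = 0.5 * (t + 1.0)
w = 0.5 * w

# coefficients a_ij induced by the kernel G(s, t)
a = np.empty((8, 8))
for i in range(8):
    for j in range(8):
        a[i, j] = w[j] * t[j] * (1 - t[i]) if j <= i else w[j] * t[i] * (1 - t[j])

def hammerstein(x):
    """F_i(x) = 5 x_i - 5 - sum_j a_ij x_j^3."""
    return 5 * x - 5 - a @ x**3
```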
The convergence of the methods toward the solution x* ≈ (1.002096, 1.009900, 1.019727, 1.026436, 1.026436, 1.019727, 1.009900, 1.002096)^T, using the initial guess x^{(0)} = (9/10, 9/10, …, 9/10)^T, is presented in Table 5.
In this example, we observe again that the proposed methods, both the fourth- and the sixth-order ones, get the best error estimations from the first iterations.
Example 5.
Let us also consider the Frank-Kamenetskii problem [27], described by the boundary value problem
$$xy'' + y' + xe^{y} = 0, \qquad y(0) = y(1) = 0. \tag{46}$$
Problem (46) is transformed into a nonlinear system of size 50 × 50 by means of a finite difference discretization with step size h = 1/51. In the tests made, the initial guess x^{(0)} = (1/10, 1/10, …, 1/10)^T has been used, and the obtained results are shown in Table 6.
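A minimal sketch of the corresponding residual, assuming a standard central-difference discretization on the uniform grid x_i = ih with h = 1/51 (our own layout choice; the paper does not detail it), is:

```python
import numpy as np

def fk_residual(y, n=51):
    """Residual of x y'' + y' + x e^y = 0 with y(0) = y(1) = 0;
    y holds the 50 interior unknowns on x_i = i*h, h = 1/n."""
    h = 1.0 / n
    x = h * np.arange(1, n)                  # interior nodes x_1, ..., x_{n-1}
    yf = np.concatenate(([0.0], y, [0.0]))
    ym, yc, yp = yf[:-2], yf[1:-1], yf[2:]
    return x * (ym - 2 * yc + yp) / h**2 + (yp - ym) / (2 * h) + x * np.exp(yc)
```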
For this example, method PM4_1 gets the best error estimations among its fourth-order partners, and PM6_3 also achieves the best error estimates from the first iterations.

5. Concluding Remarks

To summarize, we have developed new Jacobian-free methods of fourth and sixth order for solving systems of nonlinear equations numerically. The convergence of the proposed schemes is accelerated by using memorization based on current and previously available data. A wide range of numerical experiments has been carried out, confirming the theoretical results. It is found that the presented methods with memory perform with the same or higher effectiveness than the existing ones.

Author Contributions

The individual contributions of the authors have been: conceptualization, M.K. and S.B.; software, S.B.; validation, M.K. and J.R.T.; formal analysis, A.C.; writing—original draft preparation, M.K.; writing—review and editing, J.R.T. and A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by PGC2018-095896-B-C22 (MCIU/AEI/FEDER, UE).

Acknowledgments

The authors would like to thank the anonymous reviewers for their useful comments and suggestions, which have improved the final version of this manuscript.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

References

1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
2. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
3. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt's composition. Numer. Algorithms 2010, 55, 87–99.
4. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Efficient high-order methods based on golden ratio for nonlinear systems. Appl. Math. Comput. 2011, 217, 4548–4556.
5. Babajee, D.; Cordero, A.; Soleymani, F.; Torregrosa, J.R. On a novel fourth-order algorithm for solving systems of nonlinear equations. J. Appl. Math. 2012, 165452.
6. Samanskii, V. On a modification of Newton's method. Ukr. Math. J. 1967, 19, 133–138.
7. Zheng, Q.; Zhao, P.; Huang, F. A family of fourth-order Steffensen-type methods with the applications on solving nonlinear ODEs. Appl. Math. Comput. 2011, 217, 8196–8203.
8. Sharma, J.R.; Arora, H. An efficient derivative free iterative method for solving systems of nonlinear equations. Appl. Anal. Discret. Math. 2013, 390–403.
9. Sharma, J.R.; Arora, H.; Petković, M.S. An efficient derivative free family of fourth order methods for solving systems of nonlinear equations. Appl. Math. Comput. 2014, 235, 383–393.
10. Wang, X.; Zhang, T.; Qian, W.; Teng, M. Seventh-order derivative-free iterative method for solving nonlinear systems. Numer. Algorithms 2015, 70, 545–558.
11. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. On the improvement of the order of convergence of iterative methods for solving nonlinear systems by means of memory. Appl. Math. Lett. 2020, 104, 106277.
12. Petković, M.S.; Sharma, J.R. On some efficient derivative-free iterative methods with memory for solving systems of nonlinear equations. Numer. Algorithms 2016, 71, 457–474.
13. Sharma, J.R.; Arora, H. Efficient higher order derivative-free multipoint methods with and without memory for systems of nonlinear equations. Int. J. Comput. Math. 2018, 95, 920–938.
14. Narang, M.; Bhatia, S.; Alshomrani, A.S.; Kanwar, V. General efficient class of Steffensen type methods with memory for solving systems of nonlinear equations. J. Comput. Appl. Math. 2019, 352, 23–39.
15. King, R.F. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879.
16. Abad, M.; Cordero, A.; Torregrosa, J.R. A family of seventh-order schemes for solving nonlinear systems. Bull. Math. Soc. Sci. Math. Roum. 2014, 57, 133–145.
17. Hermite, C. Sur la formule d'interpolation de Lagrange. J. Reine Angew. Math. 1878, 84, 70–79.
18. Petković, M.S.; Džunić, J.; Petković, L.D. A family of two-point methods with memory for solving nonlinear equations. Appl. Anal. Discret. Math. 2011, 5, 298–317.
19. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
20. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media: Champaign, IL, USA, 2003.
21. Awawdeh, F. On new iterative method for solving systems of nonlinear equations. Numer. Algorithms 2010, 54, 395–409.
22. Noor, M.A.; Waseem, M.; Noor, K.I. New iterative technique for solving a system of nonlinear equations. Appl. Math. Comput. 2015, 271, 446–466.
23. Pramanik, S. Kinematic synthesis of a six-member mechanism for automotive steering. ASME J. Mech. Des. 2002, 124, 642–645.
24. Tsoulos, I.G.; Stavrakoudis, A. On locating all roots of systems of nonlinear equations inside bounded domain using global optimization methods. Nonlinear Anal. Real World Appl. 2010, 11, 2465–2471.
25. Burden, R.L.; Faires, J.D.; Burden, A.M. Numerical Analysis; Cengage Learning: Boston, MA, USA, 2016.
26. Polyanin, A.D.; Zaitsev, V.F. Handbook of Nonlinear Partial Differential Equations; Chapman & Hall/CRC: Boca Raton, FL, USA, 2004.
27. Frank-Kamenetskii, D.A. Diffusion and Heat Transfer in Chemical Kinetics; Plenum Press: New York, NY, USA, 1969.
Figure 1. Matrix plot of the divided difference operator [x^{(j)}, w^{(j)}; Ψ].
Figure 2. Matrix plot of the divided difference operator [x^{(j)}, z_1^{(j)}; Ψ].
Figure 3. Estimated solution of the coupled Burgers equations, t ∈ [0, 1/4].
Figure 4. Exact and approximated solution of u(x, t).
Table 1. Values in radians of ψ_i and φ_i for Example 1.

i   ψ_i                      φ_i
0   1.3954170041747090114    1.7461756494150842271
1   1.7444828545735749268    2.0364691127919609051
2   2.0656234369405315689    2.2390977868265978920
3   2.4600678478912500533    2.4600678409809344550
Table 2. Comparative study of different methods for Example 1.

Cases   j   ‖Ψ(x^{(j)})‖   ‖x^{(j+1)} − x^{(j)}‖   ACOC
SM4     2   2.5(−11)       4.7(−10)                4.305
        3   3.3(−37)       4.6(−35)
        4   1.6(−144)      9.1(−143)
MM4     2   1.7(−11)       2.5(−9)                 4.055
        3   5.7(−35)       7.5(−34)
        4   2.9(−135)      2.5(−133)
PM4_1   2   8.3(−13)       8.4(−11)                4.200
        3   2.3(−41)       2.8(−39)
        4   6.6(−161)      7.8(−159)
PM4_2   2   5.0(−13)       5.3(−11)                4.085
        3   5.0(−42)       7.7(−41)
        4   1.1(−164)      9.8(−163)
SM6     2   3.1(−19)       1.9(−17)                6.166
        3   9.7(−100)      7.0(−98)
        4   1.2(−595)      7.2(−594)
MM6     2   1.3(−19)       1.8(−17)                6.134
        3   4.1(−99)       1.7(−97)
        4   1.6(−590)      1.9(−588)
PM6_3   2   1.3(−23)       1.2(−21)                6.165
        3   1.8(−124)      1.7(−122)
        4   5.8(−746)      2.8(−744)
PM6_4   2   4.8(−24)       4.9(−22)                6.167
        3   3.8(−127)      4.9(−125)
        4   8.9(−762)      2.7(−760)
Table 3. Comparative study of different methods for Example 2.

Cases   j   ‖Ψ(x^{(j)})‖   ‖x^{(j+1)} − x^{(j)}‖   ACOC
SM4     2   5.8(−14)       5.8(−12)                4.239
        3   1.3(−55)       1.3(−53)
        4   4.6(−232)      1.3(−230)
MM4     2   7.7(−15)       1.0(−12)                4.235
        3   5.7(−59)       3.8(−57)
        4   2.3(−247)      2.8(−245)
PM4_1   2   2.1(−14)       3.2(−12)                4.259
        3   1.1(−56)       1.2(−54)
        4   1.6(−237)      1.3(−235)
PM4_2   2   2.3(−14)       3.5(−12)                4.262
        3   1.1(−56)       1.2(−54)
        4   1.6(−237)      1.3(−235)
SM6     2   6.6(−28)       3.8(−26)                6.152
        3   4.8(−169)      3.6(−167)
        4   7.4(−1037)     7.3(−1035)
MM6     2   9.8(−30)       9.6(−28)                6.162
        3   4.4(−178)      4.9(−176)
        4   8.2(−1092)     6.9(−1092)
PM6_3   2   7.0(−29)       9.0(−27)                6.174
        3   6.4(−172)      5.2(−170)
        4   4.8(−1056)     1.9(−1054)
PM6_4   2   1.1(−28)       1.4(−26)                6.170
        3   9.8(−171)      6.9(−169)
        4   1.7(−1048)     6.1(−1047)
Table 4. Abscissas and weights of the Gauss-Legendre quadrature formula.

j   t_j                             w_j
1   0.01985507175123188415821957    0.05061426814518812957626567
2   0.10166676129318663020422303    0.11119051722668723527217800
3   0.23723379504183550709113047    0.15685332293894364366898110
4   0.40828267875217509753026193    0.18134189168918099148257522
5   0.59171732124782490246973807    0.18134189168918099148257522
6   0.76276620495816449290886952    0.15685332293894364366898110
7   0.89833323870681336979577696    0.11119051722668723527217800
8   0.98014492824876811584178043    0.05061426814518812957626567
Table 5. Comparative study of different methods for Example 4.

Cases   j   ‖Ψ(x^{(j)})‖   ‖x^{(j+1)} − x^{(j)}‖   ρ
SM4     2   4.1(−31)       8.8(−32)                4.236
        3   2.4(−136)      5.0(−137)
        4   3.7(−582)      7.8(−583)
MM4     2   1.6(−31)       3.5(−32)                4.236
        3   4.8(−138)      1.0(−138)
        4   2.5(−589)      5.4(−590)
PM4_1   2   1.5(−42)       3.3(−43)                4.234
        3   6.7(−185)      1.4(−185)
        4   9.3(−788)      2.0(−788)
PM4_2   2   1.9(−42)       4.1(−43)                4.234
        3   1.7(−184)      3.5(−185)
        4   4.2(−786)      8.9(−787)
SM6     2   4.9(−67)       1.1(−67)                6.162
        3   2.3(−420)      5.0(−421)
        4   1.3(−2597)     2.8(−2598)
MM6     2   9.6(−69)       2.0(−69)                6.162
        3   6.7(−431)      1.4(−431)
        4   1.4(−2662)     3.0(−2663)
PM6_3   2   1.5(−101)      3.3(−102)               6.162
        3   4.2(−633)      9.0(−634)
        4   1.3(−3908)     2.8(−3909)
PM6_4   2   2.9(−101)      6.1(−102)               6.162
        3   1.9(−631)      4.0(−632)
        4   2.0(−3898)     4.3(−3899)
Table 6. Comparative study of different methods for Example 5.

Cases   j   ‖Ψ(x^{(j)})‖   ‖x^{(j+1)} − x^{(j)}‖   ρ
SM4     2   1.5(−19)       2.6(−16)                4.236
        3   2.4(−74)       4.2(−71)
        4   2.1(−306)      3.7(−303)
MM4     2   6.1(−20)       1.0(−16)                4.236
        3   5.7(−76)       9.9(−73)
        4   2.6(−313)      4.4(−310)
PM4_1   2   6.9(−26)       1.2(−22)                4.235
        3   3.6(−101)      6.2(−98)
        4   4.7(−420)      8.1(−417)
PM4_2   2   9.1(−26)       1.6(−22)                4.235
        3   1.2(−100)      2.1(−97)
        4   7.3(−418)      1.3(−414)
SM6     2   2.8(−37)       4.8(−34)                6.162
        3   3.3(−216)      5.8(−213)
        4   9.6(−1319)     1.7(−1315)
MM6     2   6.2(−39)       1.1(−35)                6.162
        3   2.3(−226)      4.0(−223)
        4   2.5(−1381)     4.3(−1378)
PM6_3   2   4.1(−56)       7.2(−53)                6.162
        3   3.4(−332)      5.9(−329)
        4   1.6(−2033)     2.7(−2030)
PM6_4   2   9.2(−56)       1.6(−52)                6.162
        3   4.7(−330)      8.1(−327)
        4   2.4(−2020)     4.2(−2017)
