
Algorithms 2016, 9(1), 5; https://doi.org/10.3390/a9010005

Article
A Family of Iterative Methods for Solving Systems of Nonlinear Equations Having Unknown Multiplicity
1 Dipartimento di Scienza e Alta Tecnologia, Università dell'Insubria, Via Valleggio 11, Como 22100, Italy
2 Departament de Física i Enginyeria Nuclear, Universitat Politècnica de Catalunya, Comte d'Urgell 187, 08036 Barcelona, Spain
3 Department of Information Technology, Uppsala University, Box 337, SE-751 05 Uppsala, Sweden
4 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
* Author to whom correspondence should be addressed.
Academic Editors: Alicia Cordero, Juan R. Torregrosa and Francisco I. Chicharro
Received: 8 December 2015 / Accepted: 22 December 2015 / Published: 31 December 2015

Abstract

The Jacobian of a system of nonlinear equations is singular at a root whose multiplicity is greater than one. The purpose of this article is two-fold. Firstly, we present a modification of an existing method that computes roots with known multiplicities. Secondly, we propose a generalization, to systems of nonlinear equations, of a family of methods for solving nonlinear equations with unknown multiplicities. The inclusion of a nonzero multi-variable auxiliary function is the key idea. Different choices of the auxiliary function give different families of iterative methods for finding roots with unknown multiplicities. A few illustrative numerical experiments and a critical discussion conclude the paper.
Keywords:
systems of nonlinear equations; singular Jacobian; roots with multiplicity; auxiliary function

1. Introduction

We are interested in computing a real root α of a function F : D ⊂ ℝⁿ → ℝⁿ, that is, a vector α ∈ D such that F(α) = 0. The most classical iterative method for solving a system of nonlinear equations, especially in the case of simple zeros, is the Newton method, which offers quadratic convergence [1,2] under certain local regularity conditions. Many researchers have proposed further iterative methods for solving systems of nonlinear equations that are efficient and have a high order of convergence [3,4,5,6,7,8,9]. However, when we are concerned with roots of multiplicity m (≥ 2), the classical Newton method deteriorates: its convergence rate becomes linear, with a convergence factor that worsens as the multiplicity grows. Modified variants of the Newton method offer a good alternative by recovering quadratic convergence, under the hypothesis that the multiplicity is known in advance.
The most classical modified Newton method, for scalar nonlinear equations, can be written as
$$x_0 = \text{initial guess}, \qquad x_{k+1} = x_k - m\,\frac{f(x_k)}{f'(x_k)}, \quad k = 0, 1, \ldots \tag{1}$$
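As a one-dimensional illustration of Equation (1), here is a minimal Python sketch; the test function f(x) = (x − 1)³eˣ, with a root of multiplicity m = 3 at x = 1, is our own choice and not taken from the paper.

```python
import math

def modified_newton(f, df, x0, m, tol=1e-12, maxit=50):
    """Modified Newton iteration x_{k+1} = x_k - m f(x_k)/f'(x_k) (Eq. (1))."""
    x = x0
    for _ in range(maxit):
        fx, dfx = f(x), df(x)
        if fx == 0.0 or dfx == 0.0:   # landed exactly on the root
            break
        step = m * fx / dfx
        x -= step
        if abs(step) < tol:
            break
    return x

# f has a root of multiplicity 3 at x = 1
f  = lambda x: (x - 1.0)**3 * math.exp(x)
df = lambda x: (x - 1.0)**2 * math.exp(x) * (3.0 + (x - 1.0))

print(modified_newton(f, df, x0=2.0, m=3))  # close to 1.0
```

With m = 1 (plain Newton) the same function converges only linearly, which is the deterioration described above.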
The straightforward generalization of Equation (1) is presented in [10]:
$$x_0 = \text{initial guess}, \qquad x_{k+1} = x_k - F'(x_k)^{-1}\,\mathrm{diag}(m)\, F(x_k), \quad k = 0, 1, \ldots \tag{2}$$
where m = [m₁, m₂, …, mₙ]ᵀ is the vector of multiplicities of the system of nonlinear equations F(x) = 0 and diag(·) denotes the diagonal matrix whose main diagonal is the input vector. The proof of quadratic convergence of Equation (2) is given in [10]. Wu [11] proposed a variant of the Newton method with the help of an auxiliary function. For completeness, we recall the developments proposed by Wu. Suppose we have a system of nonlinear equations F(x) = 0 and we define a new system of nonlinear equations that has the same root,
$$U(x) = e^{v \odot x} \odot F(x) = 0 \tag{3}$$
where ⊙ is the component-wise multiplication of two vectors and v = [v₁, v₂, …, vₙ]ᵀ. The Fréchet derivative of U(x) is
$$\begin{aligned} U'(x) &= \mathrm{diag}\left(e^{v \odot x}\right) F'(x) + \mathrm{diag}\left(v \odot e^{v \odot x} \odot F(x)\right) \\ &= \mathrm{diag}\left(e^{v \odot x}\right)\left[F'(x) + \mathrm{diag}\left(v \odot F(x)\right)\right] \end{aligned} \tag{4}$$
The application of the Newton method to Equation (3) is
$$\begin{aligned} x_{k+1} &= x_k - U'(x_k)^{-1} U(x_k) \\ &= x_k - \left[F'(x_k) + \mathrm{diag}\left(v \odot F(x_k)\right)\right]^{-1} \mathrm{diag}\left(e^{v \odot x_k}\right)^{-1} \left(e^{v \odot x_k} \odot F(x_k)\right) \\ &= x_k - \left[F'(x_k) + \mathrm{diag}\left(v \odot F(x_k)\right)\right]^{-1} F(x_k) \end{aligned} \tag{5}$$
The rate of convergence of Equation (5) is quadratic, simply because the proposed iteration coincides with the Newton method. Notice that the vector v is a parameter that provides a degree of freedom in Equation (5). Hueso et al. [10] proposed a modification of Equation (1) by defining a modified function
$$U(x) = e^{v \odot x} \odot F(x)^{1/m} = 0 \tag{6}$$
where 1/m = [1/m₁, 1/m₂, …, 1/mₙ]ᵀ and the power of F(x) is taken component-wise. The application of the Newton method to Equation (6) leads to the scheme
$$x_{k+1} = x_k - \left[F'(x_k) + \mathrm{diag}\left(v \odot F(x_k)\right)\right]^{-1} \mathrm{diag}(m)\, F(x_k) \tag{7}$$
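The auxiliary-function iteration in Equation (5) can be sketched in a few lines of numpy; the 2 × 2 test system, with a simple root at (1, 1), and the choice of v are our own illustrative assumptions, not taken from the paper.

```python
import numpy as np

def wu_newton(F, J, x0, v, tol=1e-12, maxit=50):
    """x_{k+1} = x_k - [F'(x_k) + diag(v * F(x_k))]^{-1} F(x_k)  (Eq. (5))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Fx = F(x)
        A = J(x) + np.diag(v * Fx)      # F'(x) + diag(v ⊙ F(x))
        step = np.linalg.solve(A, Fx)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# simple-root test system with root (1, 1)
F = lambda x: np.array([x[0]**2 - x[1], x[1]**2 - x[0]])
J = lambda x: np.array([[2.0*x[0], -1.0], [-1.0, 2.0*x[1]]])

root = wu_newton(F, J, x0=[1.5, 1.2], v=np.array([0.1, 0.1]))
print(root)  # close to [1, 1]
```

Since diag(v ⊙ F(x)) vanishes at the root, the correction does not move the root itself, which is the point of the construction.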

2. Some Generalizations

The original idea of using an auxiliary function was proposed in [11]. The auxiliary function employed in Equation (3) is the exponential function. The question is: why do we choose the exponential function? The answer is that it is a nonzero function with nonzero derivative. The generalization is then straightforward: we can choose any function that is nonzero everywhere in the vicinity of the root, and in this way we ensure that the roots of F(x) = 0 and their multiplicities are not affected by the auxiliary function. Let G(x) be a nonzero auxiliary function in the neighborhood of the root of the system of nonlinear equations with unknown multiplicity, and define a new system of nonlinear equations associated with F(x) as follows:
$$U(x) = G(x) \odot F(x) = 0 \tag{8}$$
Notice that the roots of U(x) = 0 and F(x) = 0 are the same because G(x) ≠ 0 for all x in the neighborhood of the root. The first-order Fréchet derivative of Equation (8) can be computed as
$$\begin{aligned} U_i(x) &= F_i(x)\, G_i(x) \\ \nabla U_i(x)^T &= F_i(x)\, \nabla G_i(x)^T + G_i(x)\, \nabla F_i(x)^T, \quad i = 1, 2, \ldots, n \\ \begin{bmatrix} \nabla U_1(x)^T \\ \nabla U_2(x)^T \\ \vdots \\ \nabla U_n(x)^T \end{bmatrix} &= \begin{bmatrix} F_1(x) & 0 & \cdots & 0 \\ 0 & F_2(x) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & F_n(x) \end{bmatrix} \begin{bmatrix} \nabla G_1(x)^T \\ \nabla G_2(x)^T \\ \vdots \\ \nabla G_n(x)^T \end{bmatrix} + \begin{bmatrix} G_1(x) & 0 & \cdots & 0 \\ 0 & G_2(x) & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & G_n(x) \end{bmatrix} \begin{bmatrix} \nabla F_1(x)^T \\ \nabla F_2(x)^T \\ \vdots \\ \nabla F_n(x)^T \end{bmatrix} \end{aligned} \tag{9}$$
From Equation (9), the Fréchet derivative of F(x) ⊙ G(x) is
$$\left(F(x) \odot G(x)\right)' = \mathrm{diag}\left(F(x)\right) G'(x) + \mathrm{diag}\left(G(x)\right) F'(x) \tag{10}$$
$$\begin{aligned} U'(x) &= \mathrm{diag}\left(G(x)\right) F'(x) + \mathrm{diag}\left(F(x)\right) G'(x) \\ &= \mathrm{diag}\left(G(x)\right) \left[ F'(x) + \mathrm{diag}\left(F(x)\right) \mathrm{diag}\left(G(x)\right)^{-1} G'(x) \right] \end{aligned} \tag{11}$$
If we apply the Newton method to the system in Equation (8), then we obtain
$$\begin{aligned} x_{k+1} &= x_k - \left[F'(x_k) + \mathrm{diag}\left(F(x_k)\right) \mathrm{diag}\left(G(x_k)\right)^{-1} G'(x_k)\right]^{-1} \mathrm{diag}\left(G(x_k)\right)^{-1} \left(G(x_k) \odot F(x_k)\right) \\ &= x_k - \left[F'(x_k) + \mathrm{diag}\left(F(x_k)\right) \mathrm{diag}\left(G(x_k)\right)^{-1} G'(x_k)\right]^{-1} F(x_k) \end{aligned} \tag{12}$$
The convergence order of Equation (12) is two, under the usual regularity assumptions. The iterative method in Equation (7) can be written as
$$x_{k+1} = x_k - \left[F'(x_k) + \mathrm{diag}\left(F(x_k)\right) \mathrm{diag}\left(G(x_k)\right)^{-1} G'(x_k)\right]^{-1} \mathrm{diag}(m)\, F(x_k) \tag{13}$$
Again, the convergence order of Equation (13) is two. In the numerical simulations we show that the numerical results can be improved by choosing appropriate auxiliary functions: in other words, the use of an auxiliary function can improve the constant hidden in the quadratic convergence.
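A minimal numpy sketch of the iteration in Equation (13), for a decoupled test system with known multiplicities m = (2, 3) and the auxiliary function G(x) = exp(x/100); the system and all parameter values are our own choices, not taken from the paper.

```python
import numpy as np

def aux_multiplicity_newton(F, J, G, JG, m, x0, tol=1e-12, maxit=100):
    """x_{k+1} = x_k - [F' + diag(F) diag(G)^{-1} G']^{-1} diag(m) F  (Eq. (13))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        Fx = F(x)
        if np.linalg.norm(Fx) == 0.0:
            break
        A = J(x) + np.diag(Fx / G(x)) @ JG(x)   # F' + diag(F) diag(G)^{-1} G'
        try:
            step = np.linalg.solve(A, m * Fx)
        except np.linalg.LinAlgError:           # Jacobian numerically singular at the root
            break
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# decoupled test system: root (1, -2) with multiplicities m = (2, 3)
F  = lambda x: np.array([(x[0]-1.0)**2, (x[1]+2.0)**3])
J  = lambda x: np.diag([2.0*(x[0]-1.0), 3.0*(x[1]+2.0)**2])
G  = lambda x: np.exp(x/100.0)                  # auxiliary function, nonzero everywhere
JG = lambda x: np.diag(np.exp(x/100.0)/100.0)

root13 = aux_multiplicity_newton(F, J, G, JG, m=np.array([2.0, 3.0]), x0=[2.0, 0.0])
print(root13)  # close to [1, -2]
```

Quadratic convergence is observed although the plain Jacobian is singular at the root; the method only works because the multiplicity vector m is supplied.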

3. Proposed Method

For the purpose of motivation, we present some developments for single nonlinear equations, and subsequently we establish results for the multi-dimensional case. Recently, Noor et al. [12] constructed a family of methods for solving nonlinear equations with unknown multiplicities, and this represents a crucial improvement with respect to procedures requiring this information. What they established is the following. Let g(x) be a nonzero function and define a new function
$$q(x) = \frac{f(x)\, g(x)}{f'(x)} \tag{14}$$
The application of the classical Newton method to the equation q(x) = 0 leads to the iteration
$$x_{k+1} = x_k - \frac{q(x_k)}{q'(x_k)} = x_k - \frac{f(x_k)\, f'(x_k)\, g(x_k)}{f'(x_k)\left(f(x_k)\, g(x_k)\right)' - \underline{f(x_k)\, f''(x_k)\, g(x_k)}} \tag{15}$$
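The scalar iteration in Equation (15) can be sketched as follows; the test function f(x) = (x − 2)³, treated as having unknown multiplicity, and the auxiliary function g are our own choices.

```python
def noor_step(f, df, d2f, g, dg, x):
    """One step of Eq. (15): x - f f' g / (f' (f g)' - f f'' g)."""
    fx, dfx, d2fx, gx, dgx = f(x), df(x), d2f(x), g(x), dg(x)
    dfg = dfx*gx + fx*dgx                # (f g)'(x)
    return x - fx*dfx*gx / (dfx*dfg - fx*d2fx*gx)

# f has a root of multiplicity 3 at x = 2 (the method never uses this fact)
f   = lambda x: (x - 2.0)**3
df  = lambda x: 3.0*(x - 2.0)**2
d2f = lambda x: 6.0*(x - 2.0)
g   = lambda x: 1.0 + x**3/1000.0        # one auxiliary choice, nonzero near the root
dg  = lambda x: 3.0*x**2/1000.0

x = 3.0
for _ in range(10):
    if f(x) == 0.0:                      # landed exactly on the root
        break
    x = noor_step(f, df, d2f, g, dg, x)
print(x)  # close to 2.0
```

Note that no multiplicity enters the formula: the second derivative f'' carries that information implicitly.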
The order of convergence of Equation (15) is two, under suitable regularity assumptions both on f(·) and g(·). We are interested in developing a possible multidimensional version of Equation (15). Let F(x) = 0 be a system of nonlinear equations having a root with unknown multiplicities. With the help of a nonzero auxiliary function G(x), we define a new function Q(x):
$$Q(x) = F'(x)^{-1} \left( G(x) \odot F(x) \right) = 0 \tag{16}$$
The first-order Fréchet derivative of Equation (16) can be written as
$$Q'(x) = F'(x)^{-2} \left[ F'(x)\left(F(x) \odot G(x)\right)' - F'(x)\, F''(x)\, F'(x)^{-1} \left(F(x) \odot G(x)\right) \right] \tag{17}$$
Further simplification of Q'(x)⁻¹ Q(x) gives
$$Q'(x)^{-1} Q(x) = \left[ F'(x)\left(F(x) \odot G(x)\right)' - \underline{F'(x)\, F''(x)\, F'(x)^{-1} \left(F(x) \odot G(x)\right)} \right]^{-1} F'(x)\left(F(x) \odot G(x)\right) \tag{18}$$
Comparing the underlined expressions in Equations (15) and (18), we clearly see that it is not possible to simplify the expression F'(x) F''(x) F'(x)⁻¹ (F(x) ⊙ G(x)), simply because F'(x) and F''(x) do not commute in general. Clearly, in the scalar case the elimination of f'(x) is possible due to commutativity.
Our idea amounts to artificially eliminating F'(x) and F'(x)⁻¹ from the expression F'(x) F''(x) F'(x)⁻¹ (F(x) ⊙ G(x)): in this way we obtain a new iterative method for solving systems of nonlinear equations with unknown multiplicities,
$$x_{k+1} = x_k - \left[ F'(x_k)\left(F(x_k) \odot G(x_k)\right)' - F''(x_k)\left(F(x_k) \odot G(x_k)\right) \right]^{-1} F'(x_k)\left(F(x_k) \odot G(x_k)\right) \tag{19}$$
We clearly state that our proposed scheme, i.e., the iterative method in Equation (19), is not the application of the Newton method to Equation (16). However, our procedure is simpler and preserves the same quadratic convergence as the Newton method, under the very same regularity assumptions. In the next section, we establish the proof of quadratic convergence for Equation (19).

4. Convergence

If we substitute G ( x ) = 1 in Equation (19), then we obtain
$$x_{k+1} = x_k - \left[ F'(x_k)^2 - F''(x_k)\, F(x_k) \right]^{-1} F'(x_k)\, F(x_k) \tag{20}$$
First we establish the proof of quadratic convergence of the iterative procedure in Equation (20), and then for the main iterative method reported in Equation (19). Let α = [α₁, α₂, …, αₙ]ᵀ be the root of F(x) = 0 with corresponding vector of multiplicities m = [m₁, m₂, …, mₙ]ᵀ. As a consequence, there exists H(x) with H(α) = [h₁(α), h₂(α), …, hₙ(α)]ᵀ ≠ 0 such that the system of nonlinear equations can be written as
$$F(x) = (x - \alpha)^m \odot H(x) \tag{21}$$
where (x − α)ᵐ = [(x₁ − α₁)^{m₁}, (x₂ − α₂)^{m₂}, …, (xₙ − αₙ)^{mₙ}]ᵀ. The first-order and second-order Fréchet derivatives of Equation (21) can be computed as
$$\begin{aligned} F'(x) &= \mathrm{diag}\left((x-\alpha)^m\right) H'(x) + \mathrm{diag}\left(m \odot (x-\alpha)^{m-1} \odot H(x)\right) \\ F'(x)\, w &= \mathrm{diag}\left((x-\alpha)^m\right) H'(x)\, w + \mathrm{diag}\left(m \odot (x-\alpha)^{m-1}\right) \left(H(x) \odot w\right) \\ F''(x)\, w = \left(F'(x)\, w\right)' &= \mathrm{diag}\left(m \odot (x-\alpha)^{m-1}\right) \mathrm{diag}\left(H'(x)\, w\right) + \mathrm{diag}\left((x-\alpha)^m\right) \left(H'(x)\, w\right)' \\ &\quad + \mathrm{diag}\left(m \odot (x-\alpha)^{m-1} \odot w\right) H'(x) + \mathrm{diag}\left(m \odot (m-1) \odot (x-\alpha)^{m-2} \odot w \odot H(x)\right) \end{aligned} \tag{22}$$
where w is the vector to which the second-order Fréchet derivative is applied. By replacing w in Equation (22) by F(x), we get
$$\begin{aligned} F''(x)\, F(x) &= \mathrm{diag}\left(m^2 \odot (x-\alpha)^{2m-2} \odot H(x)^2\right) - \mathrm{diag}\left(m \odot (x-\alpha)^{2m-2} \odot H(x)^2\right) \\ &\quad + \mathrm{diag}\left(m \odot (x-\alpha)^{m-1}\right) \mathrm{diag}\left(H'(x)\left((x-\alpha)^m \odot H(x)\right)\right) + \mathrm{diag}\left(m \odot (x-\alpha)^{2m-1} \odot H(x)\right) H'(x) \\ F'(x)^2 &= \mathrm{diag}\left(m^2 \odot (x-\alpha)^{2m-2} \odot H(x)^2\right) + \mathrm{diag}\left((x-\alpha)^m\right) H'(x)\, \mathrm{diag}\left((x-\alpha)^m\right) H'(x) \\ &\quad + \mathrm{diag}\left((x-\alpha)^m\right) H'(x)\, \mathrm{diag}\left(m \odot (x-\alpha)^{m-1} \odot H(x)\right) + \mathrm{diag}\left(m \odot (x-\alpha)^{m-1} \odot H(x)\right) \mathrm{diag}\left((x-\alpha)^m\right) H'(x) \end{aligned} \tag{23}$$
By using Equations (22) and (23), and after proper simplifications, we have
$$\begin{aligned} F'(x)\, F(x) &= \mathrm{diag}\left(m \odot (x-\alpha)^{2m-1} \odot H(x)^2\right) \left(1 + O(x - \alpha)\right) \\ F'(x)^2 - F''(x)\, F(x) &= \mathrm{diag}\left(m \odot (x-\alpha)^{2m-2} \odot H(x)^2\right) \left(I + O(x - \alpha)\right) \\ \left[F'(x)^2 - F''(x)\, F(x)\right]^{-1} F'(x)\, F(x) &= (x - \alpha) + O\!\left((x - \alpha)^2\right) \end{aligned} \tag{24}$$
Theorem 1. 
Let F : D ⊂ ℝⁿ → ℝⁿ and let α = [α₁, α₂, …, αₙ]ᵀ ∈ D be a root of F(x) = (x − α)ᵐ ⊙ H(x) = 0 with corresponding multiplicity vector m = [m₁, m₂, …, mₙ]ᵀ and H(α) = [h₁(α), h₂(α), …, hₙ(α)]ᵀ ≠ 0, with hᵢ(x) ∈ C²(D). Then there exists a subset S ⊆ D such that, if we choose x₀ ∈ S, the iterative method in Equation (20) has quadratic convergence in S.
Proof. 
We can write Equation (20) as
$$x_{k+1} = R(x_k) = x_k - \left[F'(x_k)^2 - F''(x_k)\, F(x_k)\right]^{-1} F'(x_k)\, F(x_k) \tag{25}$$
By dropping the index k, we obtain
$$R(x) = x - \left[F'(x)^2 - F''(x)\, F(x)\right]^{-1} F'(x)\, F(x) \tag{26}$$
By using Equations (24) and (26), the simplified expression for R ( x ) is
$$\begin{aligned} R(x) &= x - (x - \alpha) + O\!\left((x - \alpha)^2\right) \\ R'(x) &= O + O\!\left(\mathrm{diag}(x - \alpha)\right) \end{aligned} \tag{27}$$
By substituting x = α in Equation (27), we deduce the crucial relationships
$$R(\alpha) = \alpha, \qquad R'(\alpha) = O \tag{28}$$
and from Equation (28) we conclude that the iterative method in Equation (20) has at least quadratic convergence.  ☐
Finally, the quadratic convergence of the method reported in Equation (19) can be proven as follows. Let e = x − α; after a few simplifications, we can write Equation (19) in the form
$$\begin{aligned} M(x) &= \mathrm{diag}\left(G(x)\right)^{-1} F'(x)^{-1} F''(x)\left(F(x) \odot G(x)\right) \\ L(x) &= F'(x) + \mathrm{diag}\left(F(x)\right) \mathrm{diag}\left(G(x)\right)^{-1} G'(x) \\ e_{k+1} &= e_k - \left[L(x) - M(x)\right]^{-1} F(x) \\ e_{k+1} &= e_k - \left[I - L(x)^{-1} M(x)\right]^{-1} L(x)^{-1} F(x) \\ e_{k+1} &\approx e_k - \left[I + L(x)^{-1} M(x)\right] L(x)^{-1} F(x) \\ e_{k+1} &\approx e_k - L(x)^{-1} F(x) - L(x)^{-1} M(x)\, L(x)^{-1} F(x) \end{aligned} \tag{29}$$
From Equation (12), we can see that eₖ − L(x)⁻¹F(x) = O(e²), and hence L(x)⁻¹F(x) = O(e). Moreover, it can easily be seen that L(x)⁻¹M(x) = O(e). It follows that L(x)⁻¹M(x) L(x)⁻¹F(x) = O(e²). Hence we conclude that
$$e_{k+1} = O\!\left(e_k^2\right) \tag{30}$$
Notice that in Equation (24) we showed that the expression [F'(xₖ)² − F''(xₖ)F(xₖ)]⁻¹ F'(xₖ)F(xₖ) is independent of the multiplicity vector m, and in Equation (29) the inclusion of the auxiliary function does not disturb the quadratic convergence. The auxiliary function G(x) works as a parameter that helps achieve rapid convergence by changing the path of convergence, and it is the quotient [F'(xₖ)² − F''(xₖ)F(xₖ)]⁻¹ F'(xₖ)F(xₖ) that actually makes the method independent of the information contained in the multiplicity.

5. Numerical Testing

It is important to test the computational order of convergence (COC) of the iterative methods discussed so far. In all our simulations, we adopt the following definition of the COC:
$$\mathrm{COC} = \frac{\log\left(\|F(x_{k+1})\| / \|F(x_k)\|\right)}{\log\left(\|F(x_k)\| / \|F(x_{k-1})\|\right)} \quad \text{or} \quad \frac{\log\left(\|x_{k+1} - \alpha\| / \|x_k - \alpha\|\right)}{\log\left(\|x_k - \alpha\| / \|x_{k-1} - \alpha\|\right)} \tag{31}$$
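Given three consecutive residual norms, the first formula above can be evaluated as follows; the sample residual sequence, shrinking quadratically, is hypothetical.

```python
import math

def coc(r_before, r_mid, r_after):
    """COC from consecutive residual norms ||F(x_{k-1})||, ||F(x_k)||, ||F(x_{k+1})||."""
    return math.log(r_after / r_mid) / math.log(r_mid / r_before)

# hypothetical residuals where each norm is the square of the previous one
print(coc(1e-2, 1e-4, 1e-8))  # 2.0, i.e. quadratic convergence
```

In practice the last three iterates before the stopping criterion triggers give the most reliable estimate, since the asymptotic regime must have been reached.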
Next, we explain how to compute the term F''(x)(G(x) ⊙ F(x)). Suppose we have a system of three nonlinear equations:
$$\text{Problem 1} = \begin{cases} F_1(x) = (x_1 - 1)^4 \exp(x_2) = 0 \\ F_2(x) = (x_2 - 2)^5 (x_1 x_2 - 1) = 0 \\ F_3(x) = (x_3 + 4)^6 = 0 \end{cases} \tag{32}$$
The Jacobian F'(x) of Equation (32) is
$$F'(x) = \begin{bmatrix} 4\exp(x_2)(x_1-1)^3 & \exp(x_2)(x_1-1)^4 & 0 \\ x_2(x_2-2)^5 & x_1(x_2-2)^5 + 5(x_1 x_2 - 1)(x_2-2)^4 & 0 \\ 0 & 0 & 6(x_3+4)^5 \end{bmatrix} \tag{33}$$
Now we take a constant vector w = [w₁, w₂, w₃]ᵀ. By multiplying F'(x) and w, we get
$$F'(x)\, w = \begin{bmatrix} 4 w_1 \exp(x_2)(x_1-1)^3 + w_2 \exp(x_2)(x_1-1)^4 \\ w_2\left(x_1(x_2-2)^5 + 5(x_1 x_2 - 1)(x_2-2)^4\right) + w_1 x_2 (x_2-2)^5 \\ 6 w_3 (x_3+4)^5 \end{bmatrix} \tag{34}$$
Now, again, we take the Jacobian of F'(x)w with respect to x:
$$F''(x)\, w = \left(F'(x)\, w\right)' = \begin{bmatrix} 12 w_1 \exp(x_2)(x_1-1)^2 + 4 w_2 \exp(x_2)(x_1-1)^3 & 4 w_1 \exp(x_2)(x_1-1)^3 + w_2 \exp(x_2)(x_1-1)^4 & 0 \\ w_2\left(5 x_2 (x_2-2)^4 + (x_2-2)^5\right) & w_1 (x_2-2)^5 + w_2\left(10 x_1 (x_2-2)^4 + 20(x_1 x_2 - 1)(x_2-2)^3\right) + 5 w_1 x_2 (x_2-2)^4 & 0 \\ 0 & 0 & 30 w_3 (x_3+4)^4 \end{bmatrix} \tag{35}$$
By replacing w with F(x) ⊙ G(x) = [f₁, f₂, f₃]ᵀ in Equation (35), we obtain
$$F''(x)\left(F(x) \odot G(x)\right) = \begin{bmatrix} 12 f_1 \exp(x_2)(x_1-1)^2 + 4 f_2 \exp(x_2)(x_1-1)^3 & 4 f_1 \exp(x_2)(x_1-1)^3 + f_2 \exp(x_2)(x_1-1)^4 & 0 \\ f_2\left(5 x_2 (x_2-2)^4 + (x_2-2)^5\right) & f_1 (x_2-2)^5 + f_2\left(10 x_1 (x_2-2)^4 + 20(x_1 x_2 - 1)(x_2-2)^3\right) + 5 f_1 x_2 (x_2-2)^4 & 0 \\ 0 & 0 & 30 f_3 (x_3+4)^4 \end{bmatrix} \tag{36}$$
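As a sanity check (ours, not from the paper), the symbolic Jacobian of Problem 1 can be verified against central finite differences at an arbitrary point:

```python
import numpy as np

def F(x):
    return np.array([(x[0]-1.0)**4 * np.exp(x[1]),
                     (x[1]-2.0)**5 * (x[0]*x[1] - 1.0),
                     (x[2]+4.0)**6])

def J(x):
    """Symbolic Jacobian of Problem 1."""
    return np.array([
        [4.0*np.exp(x[1])*(x[0]-1.0)**3, np.exp(x[1])*(x[0]-1.0)**4, 0.0],
        [x[1]*(x[1]-2.0)**5, x[0]*(x[1]-2.0)**5 + 5.0*(x[0]*x[1]-1.0)*(x[1]-2.0)**4, 0.0],
        [0.0, 0.0, 6.0*(x[2]+4.0)**5]])

x, h = np.array([2.0, 1.0, -2.0]), 1e-6
# central differences, column by column: (F(x + h e_j) - F(x - h e_j)) / (2h)
J_fd = np.column_stack([(F(x + h*e) - F(x - h*e)) / (2.0*h) for e in np.eye(3)])
print(np.max(np.abs(J_fd - J(x))))  # small: central differences are O(h^2) accurate
```

The same trick, applied to x → F'(x)w, checks the second Fréchet derivative entries of Equation (35) column by column.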
For a large system of nonlinear equations, it is not practical to compute the second-order Fréchet derivative symbolically. There is a way to approximate it numerically by using the history of iterations [13]; however, it is then hard to keep the quadratic convergence of the iterative method. On the other hand, the methods that do not use the second-order Fréchet derivative require knowledge of the multiplicities of the roots, and in practice it is hard to know the root multiplicities for a general system of nonlinear equations. For single nonlinear equations, many authors have proposed recipes to approximate the multiplicity of a root iteratively. Finally, we have two kinds of iterative methods with quadratic convergence and with the inclusion of an auxiliary function G(x). We list them as
$$x_{k+1} = x_k - \left[ F'(x_k)\left(\mathrm{diag}\left(F(x_k)\right) G'(x_k) + \mathrm{diag}\left(G(x_k)\right) F'(x_k)\right) - F''(x_k)\left(F(x_k) \odot G(x_k)\right) \right]^{-1} F'(x_k)\left(F(x_k) \odot G(x_k)\right) \tag{37}$$
$$x_{k+1} = x_k - \left[F'(x_k) + \mathrm{diag}\left(F(x_k)\right) \mathrm{diag}\left(G(x_k)\right)^{-1} G'(x_k)\right]^{-1} \mathrm{diag}(m)\, F(x_k) \tag{38}$$
When we take G ( x ) = 1 , the iterative methods Equations (37) and (38) reduce to the following forms
$$x_{k+1} = x_k - \left[F'(x_k)^2 - F''(x_k)\, F(x_k)\right]^{-1} F'(x_k)\, F(x_k) \tag{39}$$
$$x_{k+1} = x_k - F'(x_k)^{-1}\, \mathrm{diag}(m)\, F(x_k) \tag{40}$$
respectively.
$$\text{Problem 2} = \begin{cases} F_1(x) = x_1 x_2 = 0 \\ F_2(x) = x_2 x_3 = 0 \\ F_3(x) = x_3 x_4 = 0 \\ F_4(x) = x_4 x_1 = 0 \end{cases} \tag{41}$$
$$\text{Problem 3} = \begin{cases} F_1(x) = (x_1 - 1)^{1/2}\, x_2 x_3 = 0 \\ F_2(x) = (x_2 - 1)^{1/2}\, x_1 x_3 = 0 \\ F_3(x) = (x_3 - 1)^{1/2}\, x_1 x_2 = 0 \end{cases} \tag{42}$$
In most of the cases, the iterative methods in Equations (39) and (40) are badly conditioned. Table 1, Table 2 and Table 3 show that the exponential function is neither the only nor the best choice of auxiliary function for rapid convergence. In the tables, one can see that a particular choice of auxiliary function for the method in Equation (37) gives an order of convergence greater than two. The multiplicities of the roots in Problem 3 are less than one, and the iterative method in Equation (37) provides better accuracy in the solution of this problem. In the majority of cases, the performance of the iterative method in Equation (37) with unknown multiplicity is better than that of the procedure in Equation (38).
Table 1. Problem 1: initial guess = [2, 1, 2], m = [4, 5, 6].

| Method | G(x) | Iter | ‖x − α‖ | Num. Stability | COC |
| --- | --- | --- | --- | --- | --- |
| Equation (37) | 1 | 6 | O(10^-43) | Badly-conditioned | 2.0 |
| Equation (37) | 6 + cos(x)/10 | 6 | O(10^-51) | Well-conditioned | 2.05 |
| Equation (37) | 1 + x^3/1000 | 6 | O(10^-42) | Well-conditioned | 2.0 |
| Equation (37) | exp(x/100) | 6 | O(10^-46) | Well-conditioned | 2.0 |
| Equation (38) | 1 | 6 | O(10^-30) | Badly-conditioned | 2.0 |
| Equation (38) | 6 + cos(x)/10 | 6 | O(10^-30) | Well-conditioned | 2.0 |
| Equation (38) | 1 + x^3/1000 | 6 | O(10^-30) | Well-conditioned | 2.0 |
| Equation (38) | exp(x/100) | 6 | O(10^-30) | Well-conditioned | 2.0 |
Table 2. Problem 2: initial guess = [1, 2, 4, 3], m = [2, 2, 2, 2].

| Method | G(x) | Iter | ‖F(x)‖ | Num. Stability | COC |
| --- | --- | --- | --- | --- | --- |
| Equation (37) | 1 | 1 | - | Badly-conditioned | - |
| Equation (37) | 6 + cos(x)/10 | 7 | O(10^-1551) | Well-conditioned | 2.98 |
| Equation (37) | 1 + x^3/1000 | 7 | O(10^-8482) | Well-conditioned | 3.98 |
| Equation (37) | exp(x/100) | 7 | O(10^-376) | Well-conditioned | 2.00 |
| Equation (38) | 1 | 1 | - | Badly-conditioned | - |
| Equation (38) | 6 + cos(x)/10 | 20 | O(10^-23) | Well-conditioned | 1.0 |
| Equation (38) | 1 + x^3/1000 | 20 | Not converging | Well-conditioned | - |
| Equation (38) | exp(x/100) | 7 | O(10^-443) | Well-conditioned | 2.0 |
Table 3. Problem 3: initial guess = [2, 4, 3], m = [1/2, 1/2, 1/2].

| Method | G(x) | Iter | ‖F(x)‖ | Num. Stability | COC |
| --- | --- | --- | --- | --- | --- |
| Equation (37) | 1 | 12 | O(10^-2011) | Well-conditioned | 2.00 |
| Equation (37) | 6 + cos(x)/10 | 12 | O(10^-1914) | Well-conditioned | 2.00 |
| Equation (37) | 1 + x^3/1000 | 12 | O(10^-1248) | Well-conditioned | 2.00 |
| Equation (37) | exp(x/10) | 12 | O(10^-2767) | Well-conditioned | 2.00 |
| Equation (38) | 1 | 1 | - | Badly-conditioned | - |
| Equation (38) | 6 + cos(x)/10 | 12 | O(10^-56) | Well-conditioned | 2.00 |
| Equation (38) | 1 + x^3/1000 | 20 | Not converging | Well-conditioned | - |
| Equation (38) | exp(x/10) | 7 | O(10^-35) | Well-conditioned | 2.00 |

6. Conclusions

We have shown that the exponential function is not the only possible choice of auxiliary function (compare our conclusions with [10]). The scalar version of the iterative method in Equation (37) was developed in [12]. Moreover, we have shown that the vector method cannot be constructed plainly by the same procedure. Even though the iterative method in Equation (37) is not a direct consequence of the Newton method, our analysis shows that it achieves quadratic convergence as well. The computed COC confirms our claims regarding the order of convergence of the different iterative methods. The validity and accuracy of the constructed iterative methods are clearly depicted in our computed results for different problems.

Acknowledgments

The work of the second author was partially supported by INdAM-GNCS Gruppo Nazionale per il Calcolo Scientifico and by the Donation KAW 2013.0341 from the Knut & Alice Wallenberg Foundation in collaboration with the Royal Swedish Academy of Sciences, supporting Swedish research in mathematics.

Author Contributions

Fayyaz Ahmad and S. Serra-Capizzano conceived the idea and developed the proofs; Malik Zaka Ullah and A. S. Al-Fhaid performed the experiments, analyzed the data and wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
2. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: London, UK, 1970.
3. Ahmad, F.; Tohidi, E.; Carrasco, J.A. A parameterized multi-step Newton method for solving systems of nonlinear equations. Numer. Algorithms 2015.
4. Ullah, M.Z.; Serra-Capizzano, S.; Ahmad, F. An efficient multi-step iterative method for computing the numerical solution of systems of nonlinear equations associated with ODEs. Appl. Math. Comput. 2015, 250, 249–259.
5. Ahmad, F.; Tohidi, E.; Ullah, M.Z.; Carrasco, J.A. Higher order multi-step Jarratt-like method for solving systems of nonlinear equations: Application to PDEs and ODEs. Comput. Math. Appl. 2015, 70, 624–636.
6. Alaidarous, E.S.; Ullah, M.Z.; Ahmad, F.; Al-Fhaid, A.S. An Efficient Higher-Order Quasilinearization Method for Solving Nonlinear BVPs. J. Appl. Math. 2013, 2013, 259371.
7. Ullah, M.Z.; Soleymani, F.; Al-Fhaid, A.S. Numerical solution of nonlinear systems by a general class of iterative methods with application to nonlinear PDEs. Numer. Algorithms 2014, 67, 223–242.
8. Montazeri, H.; Soleymani, F.; Shateyi, S.; Motsa, S.S. On a New Method for Computing the Numerical Solution of Systems of Nonlinear Equations. J. Appl. Math. 2012, 2012, 751975.
9. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. A modified Newton–Jarratt's composition. Numer. Algorithms 2010, 55, 87–99.
10. Hueso, J.L.; Martinez, E.; Torregrosa, J.R. Modified Newton's method for systems of nonlinear equations with singular Jacobian. J. Comput. Appl. Math. 2009, 224, 77–83.
11. Wu, X. Note on the improvement of Newton's method for systems of nonlinear equations. Appl. Math. Comput. 2007, 189, 1476–1479.
12. Noor, M.A.; Shah, F.A. A Family of Iterative Schemes for Finding Zeros of Nonlinear Equations having Unknown Multiplicity. Appl. Math. Inf. Sci. 2014, 8, 2367–2373.
13. Schnabel, R.B.; Frank, P.D. Tensor methods for nonlinear equations. SIAM J. Numer. Anal. 1984, 21, 815–843.