
Algorithms 2015, 8(3), 415-423; https://doi.org/10.3390/a8030415

Article
A Quartically Convergent Jarratt-Type Method for Nonlinear System of Equations
1 Department of Mathematics, Imam Reza International University, Khorasan Razavi, Mashhad, Sanabaad, Daneshgah 91735-553, Iran
2 Department of Mathematics, Zahedan Branch, Islamic Azad University, Zahedan, Iran
* Author to whom correspondence should be addressed.
Academic Editor: Alicia Cordero
Received: 28 May 2015 / Accepted: 2 July 2015 / Published: 6 July 2015

Abstract: In this work, we propose a new fourth-order Jarratt-type method for solving systems of nonlinear equations. The local convergence order of the method is proven analytically. Finally, we validate our results via some numerical experiments, including an application to the Chandrasekhar integral equation.
Keywords:
iterative methods; Fréchet; systems of nonlinear equations; Chandrasekhar integral equations

1. Introduction

Solving systems of nonlinear equations by iterative methods is of long-standing interest to numerical analysts. One of the most popular methods is the classic multi-dimensional Newton method. It converges quadratically close to a simple zero, i.e., the number of correct digits roughly doubles at each iteration.
Higher-order methods that require second- or higher-order Fréchet derivatives are generally costly and thus time consuming. It is consequently important to study higher-order variants of Newton's method that require only one more function or first-order derivative evaluation and are more robust than Newton's method. Such methods are known as multi-point Newton-like methods in Traub's sense.
Accordingly, generating higher-order schemes free from second- or higher-order derivatives is an efficient approach to solving systems of nonlinear equations. Such methods have been developed in the literature. For more information, one may refer to [3,4]. For applications of Newton-type methods to other problems, consult the papers [5,6].
In this work, we first introduce the basic preliminaries. Then, we describe a third-order Newton-like method derived from quadrature rules for systems of nonlinear equations and discuss the disadvantage of third-order methods in terms of efficiency index. We next extend a third-order method for systems of nonlinear equations to a fourth-order Jarratt-type method and prove its local convergence. We also show that the fourth-order method is more efficient than the second-order Newton method and a third-order method. Finally, we verify the fourth-order convergence of the method through some numerical experiments.

2. Preliminaries

In this study, we use bold font to represent vectors, matrices and tensors. Let $x = (x_1, x_2, x_3, \dots, x_n)^T$, $x^* = (x_1^*, x_2^*, x_3^*, \dots, x_n^*)^T$ and $F(x) = (f_1(x_1, \dots, x_n), f_2(x_1, \dots, x_n), \dots, f_n(x_1, \dots, x_n))^T$ be $n \times 1$ vectors. The Jacobian $F'(x)$ is an $n \times n$ matrix.
The matrix product $F'(x)(x - x^*)$ is an $n \times 1$ vector. The Hessian $F''(x)$ is a third-order tensor (an $n \times n \times n$ array), and the notation $F''(x)(x - x^*)^2$ means the product $F''(x)(x - x^*)(x - x^*)$, which again results in an $n \times 1$ vector. The same notations apply to the higher-order derivatives. Furthermore, we let
$c_j(x) = \frac{1}{j!} F'(x)^{-1} F^{(j)}(x), \quad j = 2, 3, \cdots$
which is an $n \times \cdots \times n$ ($j$ times) tensor.
It is well known that the Newton method ($2ndNM$) in multi-dimensional space is given by
$x^{(k+1)} = G_{2ndNM}(x^{(k)}) = x^{(k)} - u(x^{(k)}), \quad \text{where} \quad u(x^{(k)}) = F'(x^{(k)})^{-1} F(x^{(k)})$
Research on systems of nonlinear equations has widely expanded over the last few decades [7,8]. As is well known, the iteration (2) and its variants, coupled with a direct solution technique such as Gaussian elimination, are good solvers for challenging nonlinear systems provided one has a sufficiently good initial guess $x^{(0)}$ and the dimension of the system is not too large. When the Jacobian is large and sparse, inexact Newton methods or high-order methods may be used. For further reading, one may refer to the literature.
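As an illustration, iteration (2) can be sketched in a few lines of NumPy. The helper name `newton_system` and the 2×2 test system below are our own illustrative choices, not from the paper; the linear solve replaces the explicit Jacobian inverse, as is standard practice.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Multivariate Newton iteration x_{k+1} = x_k - F'(x_k)^{-1} F(x_k),
    implemented by solving the linear system F'(x_k) s = F(x_k) at each step."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        s = np.linalg.solve(J(x), F(x))  # avoid forming the inverse explicitly
        x = x - s
        if np.linalg.norm(s) < tol:
            return x, k
    return x, max_iter

# Illustrative 2x2 system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
root, iters = newton_system(F, J, [2.0, 0.3])
```

From a nearby starting vector the residual drops to machine precision in a handful of iterations, consistent with quadratic convergence.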

3. Description of a New Method

Third-order methods free from second derivatives have been proposed based on quadrature rules for solving systems of nonlinear equations. These methods require one function evaluation and two first-order derivative (Jacobian) evaluations at two points. One such method is the $3rdCON$ method, derived from a closed-open quadrature formula:
$x^{(k+1)} = G_{3rdCON}(x^{(k)}) = x^{(k)} - A(x^{(k)})^{-1} F(x^{(k)})$
where
$A(x) = \frac{1}{4}\left[F'(x) + 3 F'\!\left(x - \frac{2}{3} u(x)\right)\right]$
This method is also a member of the Frontini–Sormani family of third-order methods derived from quadrature rules. A convergence analysis of this method using point-of-attraction theory can be found in the literature. The method is also more efficient than Halley's method because it does not require the evaluation of a third-order tensor of $n^3$ values.
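A single step of the $3rdCON$ iteration above can be sketched as follows; this is our own minimal NumPy rendering, reusing an illustrative 2×2 test system (the function names are ours).

```python
import numpy as np

def con3_step(F, J, x):
    """One step of 3rdCON: A(x) = (F'(x) + 3 F'(x - (2/3) u(x))) / 4,
    with u(x) = F'(x)^{-1} F(x), followed by x_new = x - A(x)^{-1} F(x)."""
    u = np.linalg.solve(J(x), F(x))
    A = (J(x) + 3.0 * J(x - (2.0 / 3.0) * u)) / 4.0
    return x - np.linalg.solve(A, F(x))

# Illustrative system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
x = np.array([2.0, 0.3])
for _ in range(5):
    x = con3_step(F, J, x)
```

Each step costs one function evaluation, two Jacobian evaluations and two linear solves, matching the operation count discussed in the text.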
Let $p$ be the order of a method and let $d = d_0 n + \sum_{j=1}^{q} d_j n^{j+1}$, where $d_0$ and $d_j$ represent the number of times $F$ and $F^{(j)}$ must be evaluated, respectively. The logarithm of the Informational Efficiency, or Efficiency Index for nonlinear systems ($IE$), is defined by
$IE = \frac{\ln p}{d_0 n + \sum_{j=1}^{q} d_j n^{j+1}}$
Accordingly, the efficiency indices for Newton's method ($2ndNM$) and the third-order method free from second derivatives ($3rdCON$) are given by
$IE_{2ndNM} = \frac{\ln 2}{n + n^2} \quad \text{and} \quad IE_{3rdCON} = \frac{\ln 3}{n + 2n^2}$
respectively. We observe that $IE_{3rdCON} > IE_{2ndNM}$ only when $n = 1$; that is, for genuine systems of nonlinear equations ($n \ge 2$), the third-order methods free from second derivatives are less efficient than Newton's method.
Thus, it is natural to develop a fourth-order method from the third-order method to improve the efficiency. For the scalar case, we can suggest the following quartically convergent iteration, which is in fact a Jarratt-type iterative method requiring three functional evaluations to reach the highest possible order, four:
$y_k = x_k - \frac{2}{3}\,\frac{f(x_k)}{f'(x_k)}, \qquad x_{k+1} = x_k - \frac{4 f(x_k)}{f'(x_k) + 3 f'(y_k)}\left[1 + \frac{9}{16}\left(\frac{f'(y_k)}{f'(x_k)} - 1\right)^2\right]$
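Iteration (7) is easy to verify numerically in the scalar case. The following sketch (with our own helper name) applies it to $f(t) = t^3 - 2$, whose simple real zero is $2^{1/3}$:

```python
def jarratt4(f, fp, x, tol=1e-14, max_iter=25):
    """Scalar fourth-order Jarratt-type iteration (7)."""
    for _ in range(max_iter):
        fx, fpx = f(x), fp(x)
        y = x - (2.0 / 3.0) * fx / fpx              # Jarratt predictor step
        fpy = fp(y)
        tau = fpy / fpx
        x_new = x - 4.0 * fx / (fpx + 3.0 * fpy) * (1.0 + 9.0 / 16.0 * (tau - 1.0) ** 2)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# cube root of 2 from a nearby starting point
r = jarratt4(lambda t: t**3 - 2.0, lambda t: 3.0 * t * t, 1.5)
```

Only three functional evaluations ($f(x_k)$, $f'(x_k)$, $f'(y_k)$) are used per cycle, in agreement with the order-four optimality claim.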
We now build our efficient high-order method according to (7), reaching the highest possible order with a fixed, smallest possible number of functional evaluations. The resulting fourth-order method ($4thCON$) for systems of nonlinear equations can be constructed as follows:
$x^{(k+1)} = G_{4thCON}(x^{(k)}) = x^{(k)} - H(\tau(x^{(k)}))\, A(x^{(k)})^{-1} F(x^{(k)})$
where
$H(\tau(x)) = I + \frac{9}{16}\left(\tau(x) - I\right)^2, \qquad \tau(x) = F'(x)^{-1} F'\!\left(x - \frac{2}{3} u(x)\right)$
and $I$ is the $n \times n$ identity matrix. We also note that this method is an improvement of the $3rdCON$ method because it can also be written as:
$G_{4thCON}(x^{(k)}) = x^{(k)} + H(\tau(x^{(k)}))\left[G_{3rdCON}(x^{(k)}) - x^{(k)}\right]$
We now describe the algorithm of the $4thCON$ method, which requires the evaluation of one function of $n$ values, two Jacobians of $n^2$ values each, and the inversion of two matrices (Algorithm $4thCON$):
$F'(x^{(k)})\, u(x^{(k)}) = F(x^{(k)})$
$y^{(k)} = x^{(k)} - \frac{2}{3}\, u(x^{(k)})$
$A(x^{(k)}) = \frac{1}{4}\left[F'(x^{(k)}) + 3 F'(y^{(k)})\right]$
$F'(x^{(k)})\, \tau(x^{(k)}) = F'(y^{(k)})$
$A(x^{(k)})\, h(x^{(k)}) = F(x^{(k)})$
$x^{(k+1)} = x^{(k)} - \left[I + \frac{9}{16}\left(\tau(x^{(k)}) - I\right)^2\right] h(x^{(k)})$
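Algorithm $4thCON$ translates almost line-for-line into NumPy. The sketch below is our own rendering, with linear solves in place of explicit matrix inversion, applied to an illustrative 2×2 test system (all names are ours):

```python
import numpy as np

def con4_step(F, J, x):
    """One step of Algorithm 4thCON."""
    Jx = J(x)
    u = np.linalg.solve(Jx, F(x))            # F'(x) u = F(x)
    y = x - (2.0 / 3.0) * u                  # y = x - (2/3) u
    Jy = J(y)
    A = (Jx + 3.0 * Jy) / 4.0                # A(x) = (F'(x) + 3 F'(y)) / 4
    tau = np.linalg.solve(Jx, Jy)            # F'(x) tau = F'(y)
    h = np.linalg.solve(A, F(x))             # A(x) h = F(x)
    I = np.eye(len(x))
    H = I + (9.0 / 16.0) * (tau - I) @ (tau - I)
    return x - H @ h

# Illustrative system: x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
x = np.array([2.0, 0.3])
for _ in range(4):
    x = con4_step(F, J, x)
```

Per step, the cost is one function evaluation, two Jacobian evaluations, and solves with the two matrices $F'(x^{(k)})$ and $A(x^{(k)})$, exactly as stated above.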
Note that the efficiency index of the fourth-order method free from second derivatives ($4thCON$) is given by
$IE_{4thCON} = \frac{\ln 4}{n + 2n^2}, \qquad \frac{IE_{4thCON}}{IE_{2ndNM}} = \frac{2n + 2}{2n + 1} > 1, \quad n \ge 1$
This shows that the $4thCON$ method is more efficient than both the $2ndNM$ and $3rdCON$ methods.

4. Convergence Analysis

The local convergence of the $4thCON$ method is proved in the following theorem.
Theorem 1. Let $F: D \subset \mathbb{R}^n \to \mathbb{R}^n$ be four times Fréchet differentiable in a convex set $D$ containing a zero $x^*$ of $F(x) = 0$. Then the scheme defined by Equation (10) has order of convergence four.
Proof. Considering the same notation and definitions given above, and by Taylor's series around x,
$F(x^*) = F(x) + F'(x)(x^* - x) + \frac{1}{2} F^{(2)}(x)(x^* - x)^2 + \frac{1}{3!} F^{(3)}(x)(x^* - x)^3 + \frac{1}{4!} F^{(4)}(x)(x^* - x)^4 + O(\|x^* - x\|^5)$
Since $F(x^*) = 0$ and $e = x - x^*$, Equation (12) can be simplified to
$F(x) = F'(x)\left[e - c_2(x) e^2 + c_3(x) e^3 - c_4(x) e^4\right] + O(\|e\|^5)$
where $c_i = \frac{1}{i!} F'(x^*)^{-1} F^{(i)}(x^*)$, $F^{(i)}(x^*) \in L(\mathbb{R}^n, \dots, \mathbb{R}^n)$, $e^p = (\underbrace{e, \dots, e}_{p})$ and $e \in \mathbb{R}^n$. Note that $L$ denotes the set of bounded linear functions. Therefore,
$u(x) = F'(x)^{-1} F(x) = e - c_2(x) e^2 + c_3(x) e^3 - c_4(x) e^4 + O(\|e\|^5)$
Applying Equation (14), we obtain
$F'\!\left(x - \frac{2}{3} u(x)\right) = F'(x)\left[I - \frac{4}{3} c_2(x) e + \frac{4}{3}\left(c_2(x)^2 + c_3(x)\right) e^2 - \left(4 c_2(x) c_3(x) + \frac{32}{27} c_4(x)\right) e^3\right] + O(\|e\|^4)$
and
$A(x) = F'(x)\left[I - c_2(x) e + \left(c_2(x)^2 + c_3(x)\right) e^2 - \left(3 c_2(x) c_3(x) + \frac{8}{9} c_4(x)\right) e^3\right] + O(\|e\|^4)$
Using Equations (15) and (16), we can obtain the error equation of $3rdCON$:
$A(x)\left(G_{3rdCON}(x) - x^*\right) = F'(x)\left[c_2(x)^2 e^3 + \left(-3 c_2(x) c_3(x) + \frac{1}{9} c_4(x)\right) e^4\right] + O(\|e\|^5)$
Using Equation (15), we have
$\tau(x) = I - \frac{4}{3} c_2(x) e + \frac{4}{3}\left(c_2(x)^2 + c_3(x)\right) e^2 - \left(4 c_2(x) c_3(x) + \frac{32}{27} c_4(x)\right) e^3 + O(\|e\|^4)$
and subsequently
$H(\tau(x)) = I + \frac{9}{16}\left(\tau(x) - I\right)^2 = I + c_2(x)^2 e^2 - 2\left(c_2(x)^3 + c_2(x) c_3(x)\right) e^3 + O(\|e\|^4)$
Now,
$G_{4thCON}(x) - x^* = x - x^* + H(\tau(x))\left(G_{3rdCON}(x) - x\right) = \left(I - H(\tau(x))\right) e + H(\tau(x))\left(G_{3rdCON}(x) - x^*\right)$
since $G_{3rdCON}(x) - x = G_{3rdCON}(x) - x^* - e$. Using Equations (19) and (20), we have
$H(\tau(x))\left(G_{3rdCON}(x) - x^*\right) = \left(G_{3rdCON}(x) - x^*\right) + O\!\left(\|e\|^2 \cdot \|G_{3rdCON}(x) - x^*\|\right) = A(x)^{-1} F'(x)\left[c_2(x)^2 e^3 + \left(-3 c_2(x) c_3(x) + \frac{1}{9} c_4(x)\right) e^4\right] + O(\|e\|^5)$
Furthermore, we obtain
$\left(I - H(\tau(x))\right) e + H(\tau(x))\left(G_{3rdCON}(x) - x^*\right) = A(x)^{-1}\left[A(x)\left(I - H(\tau(x))\right) e + F'(x)\left(c_2(x)^2 e^3 + \left(-3 c_2(x) c_3(x) + \frac{1}{9} c_4(x)\right) e^4\right)\right] + O(\|e\|^5)$
From Equations (21) and (22), we obtain
$A(x)\left(I - H(\tau(x))\right) e = F'(x)\left[-c_2(x)^2 e^3 + \left(3 c_2(x)^3 + 2 c_2(x) c_3(x)\right) e^4\right] + O(\|e\|^5)$
Substituting Equation (23) into Equation (22), we obtain
$A(x)\left(G_{4thCON}(x) - x^*\right) = F'(x)\left[3 c_2(x)^3 - c_2(x) c_3(x) + \frac{1}{9} c_4(x)\right] e^4 + O(\|e\|^5),$
which establishes the fourth-order convergence of the method. The proof is now complete.

5. Numerical Examples

In this section, we compare the performance of the contributed method with that of $2ndNM$ (Equation (2)) and $3rdCON$ (Equation (3)). The algorithms were written in Matlab 7.6 and tested on the examples given below.
We start with small systems of nonlinear equations. For the following test problems, the approximate solutions are calculated up to 500 digits using variable-precision arithmetic in Matlab 7.6. We define
$err = \|F(x^{(k)})\|_2 + \|x^{(k+1)} - x^{(k)}\|_2 < 10^{-100}$
We use the approximate computational order of convergence $p_c$ given by
$p_c \approx \frac{\log\left(\|x^{(k+1)} - x^{(k)}\| \,/\, \|x^{(k)} - x^{(k-1)}\|\right)}{\log\left(\|x^{(k)} - x^{(k-1)}\| \,/\, \|x^{(k-1)} - x^{(k-2)}\|\right)}$
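The quotient above is computed directly from the last four stored iterates. A short sketch (our own helper name), checked here on Newton iterates for $t^2 - 2 = 0$, where $p_c$ should be close to 2:

```python
import numpy as np

def acoc(xs):
    """Approximate computational order of convergence from the last four iterates."""
    x0, x1, x2, x3 = xs[-4:]
    num = np.log(np.linalg.norm(x3 - x2) / np.linalg.norm(x2 - x1))
    den = np.log(np.linalg.norm(x2 - x1) / np.linalg.norm(x1 - x0))
    return num / den

# Newton iterates for f(t) = t^2 - 2 starting at 1.5 (quadratically convergent)
xs = [np.array([1.5])]
for _ in range(4):
    t = xs[-1]
    xs.append(t - (t**2 - 2.0) / (2.0 * t))
p = acoc(xs)
```

In double precision the estimate is reliable only while the successive differences stay well above machine epsilon, which is why the paper computes it in 500-digit arithmetic.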
We let $Iter_{max}$ be the number of iterations required before convergence is reached and $err_{min}$ be the minimum residual.
Test Problem 1 (TP1). $F(x_1, x_2) = 0$, where $F: (4,6) \times (5,7) \to \mathbb{R}^2$ and $F(x_1, x_2) = \left(x_1^2 - x_2 - 19,\; x_2^3/6 - x_1^2 + x_2 - 17\right)$. The starting vector is $x^{(0)} = (5.1, 6.1)^T$ and the exact solution is $x^* = (5, 6)^T$. In addition, it is easy to see that $IE_{2ndNM} = \frac{\ln 2}{2 + 4} = 0.1155$, $IE_{3rdCON} = \frac{\ln 3}{2 + 2(4)} = 0.1099$, $IE_{4thCON} = \frac{\ln 4}{2 + 2(4)} = 0.1386$.
Test Problem 2 (TP2). The test is as follows:
$\cos x_2 - \sin x_1 = 0$
$x_3^{x_1} - \frac{1}{x_2} = 0$
$\exp(x_1) - x_3^2 = 0$
with the following solution: $x_1^* = 0.9095694945200\ldots$, $x_2^* = 0.6612268322748\ldots$, $x_3^* = 1.5758341439070\ldots$. We choose the starting vector $x^{(0)} = (1, 0.5, 1.5)^T$. Here, we have
$IE_{2ndNM} = \frac{\ln 2}{12} = 0.0577, \quad IE_{3rdCON} = \frac{\ln 3}{21} = 0.0523, \quad IE_{4thCON} = \frac{\ln 4}{21} = 0.0660$
Systems of four nonlinear equations:
Test Problem 3 (TP3). This example is as follows:
$x_2 x_3 + x_4(x_2 + x_3) = 0$
$x_1 x_3 + x_4(x_1 + x_3) = 0$
$x_1 x_2 + x_4(x_1 + x_2) = 0$
$x_1 x_2 + x_1 x_3 + x_2 x_3 = 1$
We solve this system using the initial approximation $x^{(0)} = (0.5, 0.5, 0.5, -0.2)^T$. The solution is
$x_1^* = x_2^* = x_3^* = 0.57735026918963\ldots, \quad x_4^* = -0.28867513459482\ldots$
Note that
$IE_{2ndNM} = \frac{\ln 2}{4 + 16} = 0.0346, \quad IE_{3rdCON} = \frac{\ln 3}{4 + 2(16)} = 0.0305, \quad IE_{4thCON} = \frac{\ln 4}{4 + 2(16)} = 0.0385$
Table 1 gives the results for TP1–TP3. For all problems, the fourth-order method converges in the fewest iterations, and the computational order of convergence agrees with the theory. $4thCON$ also gives the smallest residuals and is the most efficient method.
Table 1. Comparison of different methods for systems of nonlinear equations. For each test problem, the columns report $Iter_{max}$, $err_{min}$ and $p_c$.

| Methods | TP1 | TP2 | TP3 |
|---|---|---|---|
| $2ndNM$ | 7, 8.8e-113, 2 | 9, 1.7e-107, 2 | 8, 1.7e-144, 2.02 |
| $3rdCON$ | 5, 3.2e-143, 3.02 | 7, 7.3e-285, 3 | 6, 6.3e-276, 3.04 |
| $4thCON$ | 4, 5.0e-104, 4.02 | 6, 2.7e-322, 4 | 5, 5.0e-246, 4.14 |

Application in Integral Equations

The Chandrasekhar H-equation, arising in radiative heat transfer theory, is a nonlinear integral equation that yields a full nonlinear system of equations when discretized. The Chandrasekhar H-equation is given as
$J(H, c) = 0, \quad H: [0, 1] \to \mathbb{R}$
with parameter c and the operator J as
$J(H, c)(U) = H(U) - \left[1 - \frac{c}{2} \int_0^1 \frac{U\, H(v)}{U + v}\, dv\right]^{-1}$
If we discretize the integral in Equation (29) using the midpoint integration rule with $n$ grid points,
$\int_0^1 f(t)\, dt \approx \frac{1}{n} \sum_{j=1}^n f(t_j), \quad t_j = (j - 0.5) h, \quad h = \frac{1}{n}, \quad 1 \le j \le n$
we obtain the resulting system of nonlinear equations:
$F_i(U, c) = U_i - \left[1 - \frac{c}{2n} \sum_{j=1}^n \frac{t_i U_j}{t_i + t_j}\right]^{-1}, \quad 1 \le i \le n$
Starting with the $(1, 1, \dots, 1)^T$ vector, the system (31) has a solution for all $c \in (0, 1)$. The values of $c$ are equally spaced with $\Delta c = 0.01$ in the interval $(0, 1)$, and we choose $n = 100$. The approximate solutions are correct to 14 digits. We note that in this case the Jacobian is a full matrix. The stopping criterion is $\|U^{(k+1)} - U^{(k)}\|_2 + \|F(U^{(k+1)})\|_2 < 10^{-11}$. Table 2 shows the results for the Chandrasekhar H-equation obtained by implementing our codes in double-precision arithmetic.
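The discretized system (31) and its dense Jacobian are straightforward to assemble. The sketch below is our own NumPy version, solved with plain Newton iteration for a single value $c = 0.9$ (all names are ours, and the paper's timed experiments sweep $c$ over the whole interval instead):

```python
import numpy as np

def chandrasekhar_F(U, c):
    """Residual of the discretized Chandrasekhar H-equation (midpoint rule)."""
    n = len(U)
    t = (np.arange(1, n + 1) - 0.5) / n          # midpoint nodes t_j = (j - 0.5)/n
    K = t[:, None] / (t[:, None] + t[None, :])   # K[i, j] = t_i / (t_i + t_j)
    return U - 1.0 / (1.0 - (c / (2.0 * n)) * (K @ U))

def chandrasekhar_J(U, c):
    """Dense Jacobian of the discretized system."""
    n = len(U)
    t = (np.arange(1, n + 1) - 0.5) / n
    K = t[:, None] / (t[:, None] + t[None, :])
    w = 1.0 - (c / (2.0 * n)) * (K @ U)
    return np.eye(n) - (c / (2.0 * n)) * K / w[:, None] ** 2

# Newton iteration from the all-ones starting vector, c = 0.9, n = 100
c, U = 0.9, np.ones(100)
for _ in range(20):
    step = np.linalg.solve(chandrasekhar_J(U, c), chandrasekhar_F(U, c))
    U = U - step
    if np.linalg.norm(step) < 1e-12:
        break
```

At the solution every component satisfies $U_i > 1$, since the bracketed term in (31) is strictly less than one for $c \in (0,1)$.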
Table 2. Key results for the Chandrasekhar H-equation.

| Methods | $Iter_{Total}$ | $Iter_{mean}$ | CPU Time (s) |
|---|---|---|---|
| $2ndNM$ | 414 | 4.18 | 23.28 |
| $3rdCON$ | 361 | 3.65 | 27.08 |
| $4thCON$ | 293 | 2.99 | 23.01 |

6. Concluding Remarks

We have extended a third-order method for systems of nonlinear equations to a fourth-order Jarratt-type method. The local convergence of the fourth-order method has been proved. We have shown that the quartically convergent iterative method is more efficient than the second-order Newton method and a third-order method.
As future work, we would like to improve the order of the fourth-order method through an additional function evaluation in a three-step cycle, so as to achieve higher orders and better efficiencies. Finally, the question of whether the idea of methods with memory can be incorporated into the new schemes will also be considered in future studies.

Author Contributions

The contributions of all of the authors have been similar. All of them have worked together to develop the present manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Cordero, A.; Franques, A.; Torregrosa, J.R. Numerical Solution of Turbulence Problems by Solving Burgers' Equation. Algorithms 2015, 8, 224–233.
2. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982.
3. Soleymani, F. A three-step iterative method for nonlinear systems with sixth order of convergence. Int. J. Comput. Sci. Math. 2013, 4, 363–373.
4. Ullah, M.Z.; Soleymani, F.; Al-Fhaid, A.S. Numerical solution of nonlinear systems by a general class of iterative methods with application to nonlinear PDEs. Numer. Algor. 2014, 67, 223–242.
5. Soleymani, F. An efficient and stable Newton-type iterative method for computing generalized inverse $A_{T,S}^{(2)}$. Numer. Algor. 2015, 69, 569–578.
6. Soleymani, F.; Salmani, H.; Rasouli, M. Finding the Moore–Penrose inverse by a new matrix iteration. J. Appl. Math. Comput. 2015, 47, 33–48.
7. Babajee, D.K.R.; Dauhoo, M.Z.; Darvishi, M.T.; Karami, A.; Barati, A. Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations. J. Comput. Appl. Math. 2010, 233, 2002–2012.
8. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. Accelerated methods of order 2p for systems of nonlinear equations. J. Comput. Appl. Math. 2010, 233, 2696–2702.
9. Cordero, A.; Soleymani, F.; Torregrosa, J.R. Dynamical analysis of iterative methods for nonlinear systems or how to deal with the dimension? Appl. Math. Comput. 2014, 244, 398–412.
10. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106.
11. Frontini, M.; Sormani, E. Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 2004, 140, 771–782.
12. Babajee, D.K.R. Analysis of Higher Order Variants of Newton's Method and Their Applications to Differential and Integral Equations and in Ocean Acidification. Ph.D. Thesis, University of Mauritius, Réduit, Moka, Mauritius, December 2010.
13. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774.
14. Cordero, A.; Torregrosa, J.R. Variants of Newton's Method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
15. Kelley, C.T. Solution of the Chandrasekhar H-equation by Newton's method. J. Math. Phys. 1980, 21, 1625–1628.
16. Lotfi, T.; Mahdiani, K.; Bakhtiari, P.; Soleymani, F. Constructing two-step iterative methods with and without memory. Comput. Math. Math. Phys. 2015, 55, 183–193.