Abstract
In this study, an iterative scheme with sixth order of convergence for solving systems of nonlinear equations is presented. The scheme is composed of three steps, of which the first two are those of the third order Potra–Pták method and the last is a weighted-Newton step. Furthermore, we generalize our work to derive a family of multi-step iterative methods whose order of convergence increases by three with each additional step; the sixth order method is the base case of this multi-step scheme, and the family yields a four-step ninth order method. Since much higher order methods are not used in practice, we study the sixth and ninth order methods in detail. Numerical examples are included to confirm the theoretical results and to compare the methods with some existing ones. Different numerical tests, containing academic functions and systems resulting from the discretization of boundary value problems, are introduced to show the efficiency and reliability of the proposed methods.
1. Introduction
Many applied problems in Science and Engineering [1,2,3] reduce to solving nonlinear systems numerically, that is, for a given nonlinear function F, to find a vector α such that F(α) = 0. The most widely used method for this purpose is the classical Newton’s method [3,4], which converges quadratically under the conditions that the function F is continuously differentiable and a good initial approximation is given. It is defined by
where the inverse of the Fréchet derivative of the function at the current iterate is applied. In order to improve the order of convergence of Newton’s method, several methods have been proposed in the literature; see, for example, [5,6,7,8,9,10,11] and the references therein. In particular, the third order method by Potra–Pták [11] for systems of nonlinear equations is given by
It is quite clear that this scheme requires the evaluation of two functions, one derivative, and one matrix inversion per iteration; the inversion is usually avoided by solving a linear system instead. This algorithm is notable not only for its simplicity but also for its efficient character.
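The two steps of the Potra–Pták scheme (2) can be sketched numerically as follows. This is a minimal illustration in Python with NumPy, not the authors' implementation; the two-dimensional test system at the end is an arbitrary example chosen for this sketch, and both linear solves reuse the same coefficient matrix, as noted above.

```python
import numpy as np

def potra_ptak(F, J, x0, tol=1e-12, max_iter=50):
    """Third order Potra-Ptak method (2):
        y  = x - F'(x)^{-1} F(x)
        x+ = x - F'(x)^{-1} (F(x) + F(y))
    One Jacobian per iteration; both linear systems share the same
    coefficient matrix, so a single factorization would suffice.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Jx, Fx = J(x), F(x)
        y = x - np.linalg.solve(Jx, Fx)        # Newton step
        step = np.linalg.solve(Jx, Fx + F(y))  # Potra-Ptak correction
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Illustrative system: unit circle intersected with the line x0 = x1.
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = potra_ptak(F, J, [1.0, 0.5])
```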
In this paper, based on the Potra–Pták method (2), we develop a three-step scheme with increased order of convergence that still maintains this efficient character. With these considerations, we propose a three-step iterative method with accelerated sixth order of convergence; of the three steps, the first two are those of the Potra–Pták method, whereas the third is a weighted Newton-step. Then, based on this scheme, a multi-step family with increasing order of convergence is developed; the sixth order method is the base case of this multi-step scheme, and the family yields a four-step ninth order scheme. Since much higher order methods are not used in practice, we study the sixth and ninth order methods in particular.
The rest of the paper is organized as follows: In Section 2, we present the new three-step scheme for solving nonlinear systems and we analyze its order of convergence. In Section 2.1, the order of this scheme is increased by three units by adding another step, which needs one new functional evaluation and the solution of a linear system with the same matrix of coefficients as before. This idea can be generalized to design iterative methods of arbitrarily high order of convergence. The computational efficiency of the proposed schemes is studied in Section 3, with a comparative analysis against the efficiency of other known methods. Section 4 is devoted to numerical experiments with academic multivariate functions and nonlinear systems resulting from the discretization of boundary value problems. For some cases, the basins of attraction of the methods are also shown. The paper finishes with some conclusions.
2. The Method and Analysis of Convergence
We start with the following iterative scheme:
where the first two steps are those of the Potra–Pták scheme [11] for nonlinear systems, and the operator in the third step is the first order divided difference of F.
In order to discuss the behavior of scheme (3), we consider the following expression of divided difference operator , see for example, [2,10],
By expanding in Taylor series at the point x and integrating, we have
where
Let . Developing in a neighborhood of and assuming that exists, we have
where and . Also,
Inversion of yields,
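The inline expansions referred to here as (7)–(9) are not legible in this copy. In the standard notation of local convergence analysis, with the error e_k = x_k − α and C_q = (1/q!)[F′(α)]^(−1) F^(q)(α), they take the usual form (a reconstruction consistent with the surrounding derivation, not a verbatim copy of the paper's equations):

```latex
\begin{aligned}
F(x_k) &= F'(\alpha)\left[e_k + C_2 e_k^2 + C_3 e_k^3 + C_4 e_k^4 + O\!\left(e_k^5\right)\right],\\
F'(x_k) &= F'(\alpha)\left[I + 2C_2 e_k + 3C_3 e_k^2 + 4C_4 e_k^3 + O\!\left(e_k^4\right)\right],\\
[F'(x_k)]^{-1} &= \left[I - 2C_2 e_k + \left(4C_2^2 - 3C_3\right)e_k^2 + O\!\left(e_k^3\right)\right][F'(\alpha)]^{-1}.
\end{aligned}
```

The third line follows from the second via the Neumann-series expansion of the inverse, truncated at second order.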
We are in a position to analyze the behavior of scheme (3). Thus, the following result is proven.
Theorem 1.
Let F be a sufficiently differentiable function in an open neighborhood D of its zero α. Let us suppose that the Jacobian matrix is continuous and nonsingular at α. If an initial approximation is sufficiently close to α, then the local order of convergence of method (3) is at least six, provided the parameters b and c take the values identified in the proof.
Proof.
Let the local error of Newton’s method be given by
Employing Equations (10) and (11) in the second step of (3), we get
Using Equations (7)–(9) in (5) for , and , it follows that
With the help of Equations (10) and (13), we can write
Expanding about , we obtain
Using (10), (14) and (15) in the last step of (3), we get
In order to achieve sixth order of convergence, it is clear that the lower order error terms must vanish for some values of b and c; this happens for particular choices of these parameters.
Thus, the error equation (16) becomes
This proves the sixth order of convergence. □
Thus, the proposed scheme (3) is given by
Clearly this formula uses three functional evaluations, one evaluation of the Jacobian matrix, one divided difference, and one matrix inversion per iteration. We denote this scheme by
2.1. Multi-step Method with Increasing Order
In this section, we improve the by adding a functional evaluation per each new step to get the multi-step version called method. The method is defined as
Let us note that the base case of (19) is the sixth order method given by (18). This multi-step version has an order of convergence that grows by three units per step, which we shall prove through the following result.
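The weight matrices defining scheme (19) involve inline expressions not reproduced in this copy, so the sketch below (Python with NumPy) illustrates only the structural idea: one Jacobian evaluation per full iteration, with each extra step costing a single new evaluation of F. The plain frozen-Jacobian composition used here is a stand-in for illustration, not the authors' exact method, and the test system at the end is an arbitrary example.

```python
import numpy as np

def frozen_multistep(F, J, x0, steps=3, tol=1e-12, max_iter=50):
    """Structural sketch of a multi-step scheme in the spirit of (19):
    the Jacobian is computed once per full iteration, and every
    additional step adds only one new evaluation of F.  (The weight
    matrices of the authors' method are omitted; this is plain
    frozen-Jacobian composition.)"""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Jx = J(x)          # one Jacobian per full iteration; in practice
                           # its LU factorization would be reused below
        z = x.copy()
        for _ in range(steps):
            z = z - np.linalg.solve(Jx, F(z))  # one F evaluation per step
        if np.linalg.norm(z - x) < tol:
            x = z
            break
        x = z
    return x

# Illustrative system: unit circle intersected with the line x0 = x1.
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = frozen_multistep(F, J, [1.0, 0.5], steps=3)
```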
Theorem 2.
Assume that F is a sufficiently Fréchet differentiable function in an open convex set D containing the zero α of F, and that the Jacobian matrix is continuous and nonsingular at α. Then, the sequence obtained by using method (19) converges to α, with order of convergence increasing by three units per additional step.
Proof.
As a special case, for four steps this family gives a ninth order method:
It is clear that this scheme requires four functional evaluations, one evaluation of the Jacobian matrix, one divided difference, and one matrix inversion per full iteration. We denote this scheme by
Remark 1.
The multi-step version utilizes one new functional evaluation per additional step, together with one evaluation of the Jacobian matrix and one divided difference per full iteration. Also, scheme (19) requires only one matrix inversion per iteration.
3. Computational Efficiency
To obtain an estimation of the efficiency of the proposed methods, we shall make use of the efficiency index, according to which the efficiency of an iterative method is given by E = p^(1/C), where p is the order of convergence and C is the computational cost per iteration. For a system of m nonlinear equations in m unknowns, the computational cost per iteration is given by (see [9,10])
where represents the number of evaluations of scalar functions used in the evaluations of F and . The divided difference of F is an matrix with elements (see [11,12])
The number of evaluations of scalar functions of , i.e. , , is . represents the number of products or quotients needed per iteration, and and are ratios between products and evaluations required to express the value of in terms of products.
To compute F in any iterative method we calculate m scalar functions and if we compute the divided difference then we evaluate scalar functions, where and are computed separately. We must add quotients from any divided difference. The number of scalar evaluations is for any new derivative . In order to compute an inverse linear operator we solve a linear system, where we have products and quotients in the LU decomposition and products and m quotients in the resolution of two triangular linear systems. We suppose that a quotient is equivalent to l products. Moreover, we add products for the multiplication of a matrix with a vector or of a matrix by a scalar and m products for the multiplication of a vector by a scalar.
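The efficiency index can be sketched as follows. The concrete cost expressions of the paper (in m, the evaluation ratios, and l) are elided in this copy, so the two costs below are purely hypothetical placeholders used only to exercise the formula E = p^(1/C).

```python
def efficiency_index(p, C):
    """Efficiency index E = p**(1/C): order p, computational cost C."""
    return p ** (1.0 / C)

# Hypothetical costs (illustrative only; the paper's cost expressions
# in m and the ratios are not reproduced here).
E6 = efficiency_index(6, 500.0)   # a sixth order method
E9 = efficiency_index(9, 650.0)   # a ninth order method

# For methods of equal order, the comparison reduces to cost alone:
# the lower the cost C, the larger the index E.
```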
Denoting the efficiency indices of ( and ) by and computational cost by , then taking into account the above considerations, we obtain
To check the performance of the new sixth order method, we compare it with some existing sixth order methods belonging to the same class. So, we choose the sixth order methods presented in [10,13]. The methods presented in [10] are given by
and
The sixth order method proposed in [13] is given by
Per iteration, these methods utilize the same number of function evaluations as the proposed scheme. The computational cost and efficiency for these methods are given below:
3.1. Comparison among the Efficiencies
To compare the iterative methods, we consider the ratio
It is clear that if this ratio exceeds one, the first iterative method is more efficient than the second. Moreover, if we need to compare methods having the same order, then using (36) we can say that one method is more efficient than another if its computational cost is lower. This means that the comparison of methods possessing the same order can be done by simply comparing their computational costs.
versus case:
versus case:
versus case:
In addition to the above comparisons, we compare the two proposed methods with each other.
versus case:
The particular boundary, expressed as a function of m and the remaining parameters, is
where and This function has a vertical asymptote for
Note that for , the numerator of (37) is positive and the denominator is negative, which shows that is always negative for . That is, the boundary is out of admissible region for and we have and .
We summarize the above results in the following theorem:
Theorem 3.
For all , and we have:
(a)
(b) and
(c)
Otherwise, the efficiency comparison depends on m, , and l.
4. Numerical Results
This section is devoted to checking the effectiveness and efficiency of the proposed methods on different types of applications. In all cases, we apply our proposed sixth and ninth order schemes and compare the results with those obtained by the known methods described in Section 3. We consider academic examples, a special case of a nonlinear conservative problem, which is transformed into a nonlinear system by approximating the derivatives by divided differences, and also the approximation of the solution of an elliptic partial differential equation that models a chemical problem.
All the experiments have been carried out in Matlab 2017 with variable precision arithmetic with 1000 digits of mantissa. These calculations have been made with an Intel Core i7-4700HQ processor at 2.40 GHz with 16.0 GB of RAM. In the tables we include the number of iterations (iter), the residual error of the corresponding function at the last iterate, and the difference between the two last iterates. We also present the approximated computational order of convergence (ACOC) defined in [14] with the expression
When the components of this vector are stable, it is an approximation of the theoretical order of convergence. Otherwise, it does not give us any information and we denote it by −. Also, the execution time (in seconds) has been calculated (by means of the “cputime” Matlab command) as the mean value of 100 consecutive executions, for the large systems corresponding to Examples 1 to 3.
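The ACOC of [14] compares three consecutive differences of iterates. A minimal sketch of its computation (Python with NumPy; the function name and input format are illustrative choices, not from the paper):

```python
import numpy as np

def acoc(iterates):
    """Approximated computational order of convergence (ACOC) of [14]:
    rho_k = ln(||x_{k+1}-x_k|| / ||x_k-x_{k-1}||)
          / ln(||x_k-x_{k-1}|| / ||x_{k-1}-x_{k-2}||)."""
    xs = [np.asarray(x, dtype=float) for x in iterates]
    d = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return [np.log(d[k + 1] / d[k]) / np.log(d[k] / d[k - 1])
            for k in range(1, len(d) - 1)]
```

For a quadratically convergent sequence of iterates, the last components of this vector stabilize near 2, matching the theoretical order.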
4.1. Example 1
We consider the case of a nonlinear conservative problem described by the differential equation
with the boundary conditions
We transform this boundary problem into a system of nonlinear equations by approximating the second derivative by a divided difference of second order. We introduce mesh points, where m is an appropriate positive integer, and a scheme is then designed to determine numbers as approximations of the solution at these points. By using divided differences of second order
we transform the boundary problem into the nonlinear system
Introducing vectors
and the matrix of size
system (38) can be written in the form
In this case, we choose the law for the heat generation in the boundary problem and we solve system (38) by using the five iterative methods under comparison. In all cases, we use the same initial estimation, and the stopping criterion is that either the residual or the difference between the last two iterates falls below the prescribed tolerance. We can see in Table 1 the results obtained for the considered sizes.
Table 1.
Numerical results for conservative boundary problem.
There are no significant differences among the results obtained for different values of the step h, that is, for different sizes of the nonlinear system resulting from the discretization. Let us remark that, for the lowest number of iterations, the best results in terms of lowest residuals have been obtained by the proposed methods.
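The discretization above can be sketched as follows (Python with NumPy). The heat-generation law chosen in the paper is not legible in this copy, so the nonlinearity phi(u) = u**3 below is purely an illustrative stand-in; the function name `residual` is likewise an assumption of this sketch.

```python
import numpy as np

# Sketch of the nonlinear system (38): the second derivative is replaced
# by the central difference (u[i-1] - 2 u[i] + u[i+1]) / h**2 at interior
# nodes, with homogeneous boundary values folded into the tridiagonal
# matrix M.
def residual(u, h, phi=lambda u: u ** 3):
    n = len(u)                                   # interior nodes
    M = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))          # tridiagonal matrix
    return M @ u + h ** 2 * phi(u)               # F(u) = M u + h^2 phi(u)
```

Any of the iterative methods of the paper can then be applied to this residual together with its (tridiagonal plus diagonal) Jacobian.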
4.2. Example 2
Let us consider the system of nonlinear equations
for an arbitrary positive integer m. We solve this system with the same schemes as before, using two values for the size, with the corresponding solutions. The stopping criterion used is again that the residual or the difference between the last two iterates falls below the tolerance, that is, the process finishes when one of them is satisfied. The obtained results are shown in Table 2.
Table 2.
Numerical results for Example 2.
In this case, all the methods use the same number of iterations to achieve the solution, but the lowest residuals are those of the proposed schemes.
4.3. Example 3
Gas dynamics can be modeled by a boundary problem described by the following elliptic partial differential equation and the boundary conditions:
By using central divided differences with the same step in both variables, this problem is discretized into the nonlinear system
where
where I is the identity matrix of the appropriate size, and
To solve problem (39) we have used the indicated initial guess. Also, the stopping criteria on the residual and on the difference between consecutive iterates have been used, and the process finishes when one of them is satisfied or the number of iterations reaches 50.
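The block structure of the discretized operator can be sketched with Kronecker products (Python with NumPy). This is an assumption-laden illustration of the standard five-point Laplacian assembly, not the paper's exact matrix: the lower-order and nonlinear terms of the chemical model are omitted, and the function name is hypothetical.

```python
import numpy as np

# With central differences on an n x n interior grid, the linear part of
# the discretized operator can be assembled as A = kron(I, T) + kron(T, I),
# where T is the 1-D second-difference matrix and I the identity of size n.
def laplacian_2d(n):
    T = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    I = np.eye(n)
    return np.kron(I, T) + np.kron(T, I)
```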
The results are shown in Table 3. The first column shows the numerical aspects (approximated computational order of convergence ACOC, last difference between consecutive iterates and residual ) analyzed for the schemes used to solve the problem. In the rest of the columns we show the numerical results obtained by the methods , , , , and .
Table 3.
Numerical results for elliptic partial differential equation.
The low number of iterations needed justifies that the ACOC does not approximate the theoretical order of convergence properly. The methods giving the lowest exact error at the last iteration are again the proposed schemes.
4.4. Example 4
Let us consider the nonlinear two-dimensional system with coordinate functions
This system has two real solutions at the points indicated, approximately. By using the chosen initial estimation, we have obtained the results appearing in Table 4, in which the residuals appear for the first three iterations.
Table 4.
Numerical results for Example 4.
It can be observed that, although the difference between two consecutive iterations is not very small, the precision in the estimation of the root is very high, being best for the proposed schemes.
Moreover, dynamical planes help us to obtain global information about the convergence process. In Figure 1 we can see the dynamical planes of the proposed and known schemes when they are applied to this system. These figures are obtained by using the routines described in [15]. To draw these images, a mesh of initial points has been used, 80 was the maximum number of iterations involved, and the stated tolerance was used as the stopping criterion. In this paper, we use a white star to show the roots of the nonlinear system. A color is assigned to each initial estimation (each point of the mesh) depending on the root it converges to: blue and orange correspond to the basins of the roots of system (40) (brighter the fewer iterations needed to converge), and black is adopted when the maximum number of iterations is reached or the process diverges.
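The procedure for generating such a dynamical plane can be sketched as follows (Python with NumPy). Plain Newton iteration is used here for brevity instead of the paper's methods, and the example system is an arbitrary stand-in; the routines actually used by the authors are those of [15].

```python
import numpy as np

def basins(F, J, roots, xs, ys, max_iter=80, tol=1e-3):
    """Assign to each mesh point the index of the root it converges to,
    or -1 (painted black) if the tolerance is not met within max_iter
    iterations or the Jacobian becomes singular."""
    img = -np.ones((len(ys), len(xs)), dtype=int)
    for i, y0 in enumerate(ys):
        for j, x0 in enumerate(xs):
            x = np.array([x0, y0], dtype=float)
            for _ in range(max_iter):
                try:
                    x = x - np.linalg.solve(J(x), F(x))
                except np.linalg.LinAlgError:
                    break
                d = [np.linalg.norm(x - r) for r in roots]
                if min(d) < tol:
                    img[i, j] = int(np.argmin(d))
                    break
    return img

# Example: unit circle intersected with the line x0 = x1 (two roots).
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
roots = [np.array([1.0, 1.0]) / np.sqrt(2), -np.array([1.0, 1.0]) / np.sqrt(2)]
img = basins(F, J, roots, np.linspace(-2, 2, 9), np.linspace(-2, 2, 9))
# A call such as matplotlib's imshow(img) would then render the plane.
```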

Figure 1.
Basins of attraction of known and proposed methods on system (40). Panels (a)–(e) show the five methods compared.
In Figure 1, the shape and wideness of the basins of attraction show that four of the five methods can find either of the two roots from a great number of initial estimations, some of them far from the roots. However, the remaining scheme is hardly able to find the roots, and its basins are very small.
4.5. Example 5
Let us consider the nonlinear two-dimensional system with coordinate functions
This system has four real solutions. Let us remark that they are symmetric, as the system describes the intersection of two conics, an ellipse and a circumference. By using the chosen initial estimation, we have obtained the results appearing in Table 5, in which the residuals also appear for the first three iterations.
Table 5.
Numerical results for Example 5.
The numerical results appearing in Table 5 show that the estimation of the roots has similar errors in all the sixth order methods, although some of them are slightly better than the rest. Indeed, the ninth order scheme has the best precision in the approximation of the roots of the system, as corresponds to its higher order of convergence. In terms of computational time, the mean of one hundred consecutive executions has been used in order to get good estimations of the real time and, in the case of the proposed methods, they need times similar to the existing schemes while achieving better accuracy.
In Figure 2, the colors blue, orange, green, and purple correspond to the basins of the roots of system (41). We can see the dynamical planes of the proposed methods and the known schemes when they are applied to this system.
Figure 2.
Basins of attraction of known and proposed methods on system (41). Panels (a)–(e) show the five methods compared.
Regarding the stability of the proposed and known iterative processes on this system, it can be observed in Figure 2 that, in all cases except one, the connected component of the basin of attraction that holds the root (usually known as the immediate basin of attraction) is very wide. For all the iterative methods, black areas of slow convergence or divergence appear, more diffuse in some cases and wider in others. For most of the methods, these black areas are mainly regions of slower convergence that would be colored if a higher number of iterations were fixed as a maximum; meanwhile, the biggest black area of the remaining dynamical plane corresponds to divergence.
5. Concluding Remarks
From the Potra–Pták third order scheme, an efficient sixth order method for solving nonlinear systems has been proposed. Moreover, it has been extended by adding subsequent steps with the same structure, each requiring one new functional evaluation; the order of the resulting procedure then increases by three units per step. The proposed methods have been shown to be more efficient than several known methods of the same order. Some numerical tests with academic and real-life problems have been made to check the efficiency and applicability of the designed procedures. The numerical performance has been contrasted with the stability shown by the dynamical planes of the new and known methods on two-dimensional systems.
Author Contributions
Methodology, H.A.; Writing—original draft, J.R.T.; Writing—review & editing, A.C.
Funding
This research was partially supported by Ministerio de Economía y Competitividad under grants MTM2014-52016-C2-2-P and Generalitat Valenciana PROMETEO/2016/089.
Acknowledgments
The authors are grateful to the anonymous referees for their valuable comments and suggestions to improve the paper.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1966. [Google Scholar]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
- Kelley, C.T. Solving Nonlinear Equations with Newton’s Method; SIAM: Philadelphia, PA, USA, 2003. [Google Scholar]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
- Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariable case. J. Comput. Appl. Math. 2004, 169, 161–169. [Google Scholar] [CrossRef]
- Darvishi, M.T.; Barati, A. A fourth-order method from quadrature formulae to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 257–261. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algor. 2010, 55, 87–99. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Efficient high-order methods based on golden ratio for nonlinear systems. Appl. Math. Comput. 2011, 217, 4548–4556. [Google Scholar] [CrossRef]
- Grau-Sánchez, M.; Grau, À.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266. [Google Scholar] [CrossRef]
- Grau-Sánchez, M.; Grau, À.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar] [CrossRef]
- Potra, F.-A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Publishing: Boston, MA, USA, 1984. [Google Scholar]
- Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372. [Google Scholar] [CrossRef]
- Sharma, J.R.; Arora, H. On efficient weighted-Newton methods for solving systems of nonlinear equations. Appl. Math. Comput. 2013, 222, 497–506. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
- Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameter planes of iterative families and methods. Sci. World J. 2013, 2013, 780153. [Google Scholar] [CrossRef] [PubMed]
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).