1. Introduction
The design of iterative algorithms for solving nonlinear systems of equations is a much-needed and challenging task in the domain of numerical analysis. Nonlinearity is a phenomenon that occurs widely in physical and natural events. Phenomena with an inherently nonlinear quality occur frequently in fluid and plasma mechanics, gas dynamics, combustion, ecology, biomechanics, elasticity, relativity, chemical reactions, economic modeling, transportation theory, etc. Consequently, much current mathematical research attaches importance to the development and analysis of methods for nonlinear systems. Hence, finding a solution of the nonlinear system $F(x) = 0$ is a classical and difficult problem that unlocks the behavior pattern of many application problems in science and engineering, where $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ is a sufficiently Fréchet differentiable function in an open convex set $D$.
In the last decade, many iterative techniques have been proposed for solving systems of nonlinear equations; see, for instance, [1,2,3,4,5,6,7] and the references therein. The most popular method for obtaining a solution is Newton's method,

$$x^{(k+1)} = x^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}), \qquad k = 0, 1, 2, \ldots,$$

where $x^{(k)}$ denotes the $k$th iterate and $[F'(x^{(k)})]^{-1}$ denotes the inverse of the first Fréchet derivative of $F$ at $x^{(k)}$. This method requires one function evaluation, one derivative evaluation, and one matrix inversion per iteration, and it produces second-order convergence.
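As a concrete illustration (not code from this paper), a minimal Python sketch of this iteration is given below; `F` and `J` are user-supplied callables for the system and its Jacobian, and a linear system is solved instead of forming the inverse explicitly:

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=100):
    """Classical Newton iteration for F(x) = 0 with Jacobian J(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), F(x))  # solve J(x) dx = F(x)
        x = x - dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Illustrative system: circle-line intersection F(x) = (x0^2 + x1^2 - 1, x0 - x1).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
print(newton(F, J, [1.0, 0.5]))  # approx (0.70710678, 0.70710678)
```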
Another classical scheme for solving systems of nonlinear equations is the two-step third-order variant of Newton's method proposed by Traub [8], given by

$$y^{(k)} = x^{(k)} - [F'(x^{(k)})]^{-1} F(x^{(k)}), \qquad x^{(k+1)} = y^{(k)} - [F'(x^{(k)})]^{-1} F(y^{(k)}).$$
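A sketch of one Traub step in the same spirit (an illustrative rendering of the standard scheme above, not the authors' code) shows that a single LU factorization of $F'(x^{(k)})$ serves both substeps:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def traub_step(F, J, x):
    """One step of Traub's two-step third-order method.
    Both substeps reuse the same factorization of J(x)."""
    lu_piv = lu_factor(J(x))           # factor F'(x) once
    y = x - lu_solve(lu_piv, F(x))     # Newton substep
    return y - lu_solve(lu_piv, F(y))  # correction with frozen Jacobian
```

Reusing the factorization is what makes this third-order scheme only marginally more expensive per iteration than Newton's method.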
A fourth-order Newton-type method formed using two steps was proposed by Noor et al. [5] based on the variational iteration technique. Abad et al. [1] combined the Newton and Traub methods to obtain a fifth-order method with three steps.
Sharma and Arora [6] proposed an eighth-order three-step method.
Recently, Madhu et al. [9] proposed a fifth-order two-step method. Furthermore, this method was extended to higher orders of convergence, known as the MBJ method, using an additional function evaluation per step.
To derive the order of convergence of iterative methods, higher-order derivatives are used in the analysis even though such derivatives do not appear in the formulas themselves. The solutions of the equation $F(x) = 0$ are rarely found in closed form, and hence most methods for solving such equations are iterative. Convergence analysis of these iterative methods is an important part of establishing the order of any iterative method. As the convergence domain is narrow in general, one needs additional hypotheses to enlarge it. Furthermore, the choice of initial approximations requires knowledge of the convergence radius. Therefore, the applicability of the methods depends upon the assumptions on the higher-order derivatives of the function. Many research papers concerned with the local and semi-local convergence of iterative methods are available in [10,11,12,13]. One of the most attractive features of any numerical algorithm is how the procedure deals with large-scale systems of nonlinear equations. Recently, Yuan et al. [14,15] proposed conjugate gradient algorithms, globally convergent under suitable conditions, that possess some good properties for solving unconstrained optimization problems.
The primary goal of this manuscript is to propose higher-order iterative techniques which consume less computational cost for very large nonlinear systems. Motivated by the different methods available in the literature, a new efficient two-step method of convergence order five is proposed for solving systems of nonlinear equations. This new method consists of two functional evaluations, two Fréchet derivative evaluations and two inverse evaluations per iteration. The order of this method can be increased by three units whenever an additional step is included. This idea is generalized to obtain new multi-step methods of order $5 + 3k$, $k \geq 1$, increasing the convergence order by three units for every additional step. Moreover, the method requires only one new function evaluation for every new step.
The remaining part of this manuscript is organized as follows. Section 2 presents two new algorithms, one with convergence order five and the other its multi-step version of order $5 + 3k$, $k \geq 1$. In Section 3, the convergence analysis of the new algorithms is presented. Computational efficiencies of the proposed algorithms are calculated based on the computational cost, and a comparison with other methods in terms of efficiency ratios is reported in Section 4. Numerical results for some test problems and their comparison with a few available equivalent methods are given in Section 5. Two application problems, known as the 1-D and 2-D Bratu problems, are solved using the presented methods in Section 6. Section 7 contains a short conclusion.
3. Convergence Analysis
To prove the theoretical convergence, the usual ($n$-dimensional) Taylor expansion is applied. Hence, we recall some important results from [3].
Let $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be sufficiently Fréchet differentiable in $D$. Assume that the $i$th derivative of $F$ at $u \in \mathbb{R}^n$, $i \geq 1$, is the $i$-linear function $F^{(i)}(u): \mathbb{R}^n \times \cdots \times \mathbb{R}^n \to \mathbb{R}^n$ such that $F^{(i)}(u)(v_1, \ldots, v_i) \in \mathbb{R}^n$. Given $x^{(0)}$, which lies near the solution $\alpha$ of the system of nonlinear equations $F(x) = 0$, one may apply Taylor's expansion (whenever the Jacobian matrix $F'(\alpha)$ is nonsingular) to get

$$F(x^{(0)}) = F'(\alpha)\left[e + \sum_{q=2}^{p-1} C_q e^q\right] + O(e^p),$$

where $e = x^{(0)} - \alpha$ and $C_q = \frac{1}{q!}\,[F'(\alpha)]^{-1} F^{(q)}(\alpha)$, $q \geq 2$. It is noted that $C_q e^q \in \mathbb{R}^n$ since $F^{(q)}(\alpha) \in \mathcal{L}(\mathbb{R}^n \times \cdots \times \mathbb{R}^n, \mathbb{R}^n)$ and $[F'(\alpha)]^{-1} \in \mathcal{L}(\mathbb{R}^n)$. Expanding $F'$ using Taylor's series, we get

$$F'(x^{(0)}) = F'(\alpha)\left[I + \sum_{q=2}^{p-1} q\, C_q e^{q-1}\right] + O(e^{p-1}),$$

where $I$ denotes the identity matrix. We remark that $q\, C_q e^{q-1} \in \mathcal{L}(\mathbb{R}^n)$. The error is denoted as $e^{(r)} = x^{(r)} - \alpha$ for the $r$th iteration. The equation

$$e^{(r+1)} = L\,(e^{(r)})^p + O\big((e^{(r)})^{p+1}\big)$$

is called the error equation, where $L$ is a $p$-linear function $L \in \mathcal{L}(\mathbb{R}^n \times \cdots \times \mathbb{R}^n, \mathbb{R}^n)$ and $p$ denotes the order of convergence. Also, $(e^{(r)})^p = (e^{(r)}, e^{(r)}, \ldots, e^{(r)})$.
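As a quick sanity check of these expansions, the scalar case ($n = 1$, with $F'(\alpha)$ normalized to 1) can be verified symbolically; the following illustrative snippet recovers the classical Newton error equation $e^{(r+1)} = C_2 (e^{(r)})^2 + O((e^{(r)})^3)$:

```python
import sympy as sp

e, C2, C3 = sp.symbols('e C2 C3')
# Scalar Taylor model of F around alpha: F = e + C2*e^2 + C3*e^3.
F = e + C2 * e**2 + C3 * e**3
Fp = sp.diff(F, e)
e_next = e - F / Fp  # Newton update expressed in the error variable
print(sp.series(e_next, e, 0, 4))
# -> C2*e**2 + (2*C3 - 2*C2**2)*e**3 + O(e**4)
```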
By using the above concepts, the following theorems are established.
Theorem 1. Let $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be sufficiently Fréchet differentiable at each point of an open convex neighborhood $D$ of $\alpha$. Choose $x^{(0)}$ close enough to $\alpha$, where $\alpha$ is a solution of the system $F(x) = 0$ and $F'(x)$ is continuous and nonsingular at $\alpha$. Then the iterative formula (7), producing the sequence $\{x^{(k)}\}$, converges to $\alpha$ locally with order five, with the error expression derived at the end of the proof.

Proof. Applying Taylor's formula around $\alpha$, we expand $F$ at the current iterate and express the first-order differential of $F$ as a Taylor series as well. Inverting the latter series, we get the corresponding series for the inverse of the Jacobian. Also, the expansion at the intermediate point of the scheme is obtained in the same way.
Combining (11) and (14), one gets the next expansion. After calculating the inverse of the Jacobian at the intermediate point and using (13), we get the subsequent terms. We obtain the required error estimate by using (12), (16) and (17) in (7).
The above result agrees with fifth-order convergence. □
Theorem 2. Let $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be sufficiently Fréchet differentiable at each point of an open convex neighborhood $D$ of $\alpha$. Choose $x^{(0)}$ close enough to $\alpha$, where $\alpha$ is a solution of the system $F(x) = 0$ and $F'(x)$ is continuous and nonsingular at $\alpha$. Then the iterative formula (8), producing the sequence $\{x^{(k)}\}$, converges to $\alpha$ locally with order $5 + 3k$, where $k \geq 1$ is a positive integer.

Proof. Taylor's expansion around $\alpha$ gives the expansion of $F$ at the new point; evaluating the corresponding Jacobian and combining with (19) yields the next estimate. Equations (18) and (20) then lead to the required relation.
Substituting (21) in (8) and calculating the error term, we get
It has already been proved that the base scheme converges with order five. Therefore, taking $j = 1, 2, \ldots$ in (22), we get
Proceeding by induction, we get
The above result shows that the method has order $5 + 3k$ of convergence. □
4. Computational Efficiency of the Algorithms
The efficiency index of an iterative method is measured using Ostrowski's definition [16], $E = p^{1/d}$, where $p$ denotes the order of convergence and $d$ denotes the number of functional evaluations per iteration. The different algorithms considered here are compared with the proposed algorithms in terms of computational cost. To evaluate the Jacobian $F'$ and the function $F$, one needs $n^2$ scalar function evaluations (all the elements of the matrix $F'$) and $n$ scalar function evaluations (the coordinate functions of $F$), respectively. Moreover, any iterative method that solves a system of nonlinear equations needs one or more matrix inversions; that is, a few linear systems must be solved. Hence, the number of operations required for solving a linear system dominates when the computational cost of an iterative method is measured. For this reason, Cordero et al. [3] introduced the computational efficiency index (CE), in which the efficiency index given by Ostrowski is combined with the number of products-quotients required per iteration. The computational efficiency index is defined as $CE = p^{1/(d + op)}$, where $op$ is the number of products-quotients per iteration; the details of its calculation are given in the next paragraph.
The total cost of computation for a nonlinear system of $n$ equations in $n$ unknowns is calculated as follows. For an iterative function, $n$ scalar functions are evaluated per function evaluation, and the computation of a divided difference requires additional scalar function evaluations, where the function values at the two points are computed separately; the corresponding quotients are also added for the divided difference. For calculating an inverse linear operator, $n(n^2 - 1)/3$ products and divisions in the LU decomposition and $n^2$ products and divisions for solving the two triangular linear systems are taken. Moreover, $n^2$ products are required for multiplying a matrix with a vector or a scalar, and $n$ products for multiplying a vector by a scalar.
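As a concrete illustration of these counts, the sketch below evaluates $CE = p^{1/(d+op)}$ for Newton's method under the operation counts just described (illustrative; the other methods follow the same pattern with their own $p$, $d$ and $op$):

```python
def computational_efficiency(p, d, op):
    """CE = p**(1/(d + op)) for order p, d functional evaluations,
    and op products-quotients per iteration."""
    return p ** (1.0 / (d + op))

def newton_cost(n):
    d = n**2 + n                     # Jacobian (n^2) + function (n) evaluations
    op = n * (n**2 - 1) / 3 + n**2   # LU factorization + two triangular solves
    return d, op

for n in (5, 10, 50):
    d, op = newton_cost(n)
    print(n, computational_efficiency(2, d, op))
```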
The computational cost and efficiency of the proposed algorithms are compared with the methods given by (1) to (6) and with the algorithms presented recently by Sharma et al. [17], namely a fifth-order three-step method and an eighth-order four-step method.
Table 1 displays the computational cost and computational efficiency (CE) of various methods.
To compare the CE of the considered iterative methods, we calculate the ratio (23) from [17], where $C_i$ and $CE_i$ denote respectively the computational cost and computational efficiency of method $i$. It is clear that when the ratio exceeds 1, iterative method 1 is more efficient than method 2.
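For orientation, a standard form of this ratio, as commonly used by Sharma et al. [17] (stated here under the assumption that (23) follows that convention), is

$$R_{1,2} = \frac{\log CE_1}{\log CE_2} = \frac{C_2 \log p_1}{C_1 \log p_2},$$

so that $CE_1 > CE_2$ precisely when $R_{1,2} > 1$.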
Each pairwise comparison of the proposed methods with the methods (1)-(6) and with the Sharma et al. schemes proceeds in the same way: the ratio (23) is evaluated for the pair, it is verified computationally that the ratio exceeds 1 for all $n$ beyond a pair-dependent threshold, and the corresponding inequality between the computational efficiencies follows for those $n$.
Consolidating the above ratios, the following theorem summarizes the superiority of the proposed methods.
Theorem 3. The computational efficiencies of the proposed methods satisfy the inequalities verified in the comparisons above: (a) they exceed those of the methods (1)-(6) and (b) those of the Sharma et al. schemes, in each case for all $n$ beyond the respective thresholds.
Remark 1. The ratios of the proposed methods with certain of the compared methods do not satisfy the required condition.
5. Numerical Results
The numerical performance of the presented methods is compared with Newton's method and some of the known methods given at the beginning of this paper. All numerical calculations are done using the MATLAB package for the test examples given below. We use 500-digit accuracy while calculating the approximate numerical solutions, and a prescribed error residual is adopted to stop the iteration.
The following formula is used to find the approximated computational order of convergence ($\rho$):

$$\rho \approx \frac{\ln\big(\|x^{(N+1)} - x^{(N)}\| \,/\, \|x^{(N)} - x^{(N-1)}\|\big)}{\ln\big(\|x^{(N)} - x^{(N-1)}\| \,/\, \|x^{(N-1)} - x^{(N-2)}\|\big)}.$$

Here $N$ denotes the number of iterations needed for obtaining the minimum residual; we also record the total number of inverses of the Fréchet derivative and the total number of function evaluations needed to reach the minimum residual, as in [7].
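A minimal sketch of computing $\rho$ from stored iterates (illustrative Python; assumes at least four iterates are available as NumPy arrays):

```python
import numpy as np

def acoc(xs):
    """Approximated computational order of convergence from iterates xs."""
    e = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    # rho = ln(e_N / e_{N-1}) / ln(e_{N-1} / e_{N-2}) from the last increments
    return np.log(e[-1] / e[-2]) / np.log(e[-2] / e[-3])
```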
The Jacobian matrix is given below. The initial approximation and the analytic solution are as stated below.
The above system is solved by taking the starting approximation given below. The solution and the Jacobian matrix are as follows.
The solution for the above system is given below. The initial vector for the iteration is taken as shown, and the Jacobian matrix produced thus is given by the following.
Test Example 4 (TE4): The following boundary value problem is considered,
where an equally spaced mesh is used for dividing the interval [0, 1], as given below.
Denote the values of the unknown function at the interior mesh points as the unknowns. Discretizing the second derivative by the central difference formula

$$y''_k \approx \frac{y_{k-1} - 2y_k + y_{k+1}}{h^2},$$

we obtain a system of nonlinear equations in the same number of variables, as given below. The above equations are solved by taking the mesh size and initial approximation stated below, where we get a Jacobian matrix with 43 non-zero elements, as shown.
The following solution is obtained for the differential equation at the given mesh points.
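Because only the structure of TE4 matters here, the following sketch illustrates the discretization pattern for a generic BVP $y'' = g(x, y)$ with $y(0) = y(1) = 0$ (the specific right-hand side of TE4 is replaced by a user-supplied `g`; this is an assumption for illustration):

```python
import numpy as np

def bvp_residual(y, g, h):
    """Residual of y'' = g(x, y) on an equal mesh with y(0) = y(1) = 0.
    y holds the interior unknowns; the central difference
    (y_{k-1} - 2*y_k + y_{k+1}) / h^2 approximates y''."""
    n = len(y)
    yext = np.concatenate(([0.0], y, [0.0]))   # attach boundary values
    x = h * np.arange(1, n + 1)                # interior mesh points
    ypp = (yext[:-2] - 2 * yext[1:-1] + yext[2:]) / h**2
    return ypp - g(x, y)
```

If, as is typical for such a discretization, the Jacobian of this residual is tridiagonal, then $3n - 2 = 43$ non-zero entries corresponds to $n = 15$ interior points, i.e., $h = 1/16$.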
Test Example 5 (TE5): The following large nonlinear system is considered. Its solution is given below; choosing the initial vector as shown, we obtain the following Jacobian matrix.
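For large systems such as TE5, exploiting the sparsity of the Jacobian is essential; the sketch below uses SciPy's sparse direct solver on a stand-in bidiagonal system (an assumption for illustration, not the actual TE5 equations):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Stand-in system: F_i(x) = x_i^2 - 1 + 0.1*x_{i+1} (last component uncoupled).
def F(x):
    r = x**2 - 1.0
    r[:-1] += 0.1 * x[1:]
    return r

def Jac(x):
    n = len(x)
    return diags([2.0 * x, 0.1 * np.ones(n - 1)], [0, 1], format="csc")

x = np.full(2000, 0.5)
for _ in range(20):
    dx = spsolve(Jac(x), F(x))  # sparse LU instead of a dense solve
    x = x - dx
    if np.linalg.norm(dx) < 1e-12:
        break
print(np.linalg.norm(F(x)))  # residual near machine precision
```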
The numerical results for the test examples TE1-TE5 are displayed in Table 2, from which we conclude that the proposed algorithm is the most efficient one, with the least residual error and the least CPU time. It is also observed from the results that the proposed algorithms require fewer iterations than the compared methods for the test example TE3. Moreover, when comparing the number of function evaluations and inverse evaluations, the proposed algorithm is the best among the compared methods.