New Iterative Methods for Solving Nonlinear Problems with One and Several Unknowns

In this manuscript, a new type of study regarding iterative methods for solving nonlinear models is presented. The goal of this work is to design a new fourth-order optimal family of two-step iterative schemes, with flexibility through weight functions or free parameters at both substeps, as well as small residual errors and asymptotic error constants. In addition, we generalize these schemes to nonlinear systems, preserving the order of convergence. Regarding the applicability of the proposed techniques, we choose some real-world problems, namely the fractional conversion in a chemical process, the trajectory of an electron in the air gap between two parallel plates under a multi-factor effect, the fractional conversion of a species in a chemical reactor, a Hammerstein integral equation, and a boundary value problem. Moreover, we find that our proposed schemes perform better than or equal to the existing ones in the literature.


Introduction
The role of iterative methods in solving nonlinear problems in many branches of science and engineering has increased dramatically in recent years. One of the most important reasons is the applicability of iterative methods to real-life problems. For example, Shacham, Balaji, and Seader [1,2] described the fraction of the nitrogen-hydrogen feed that gets converted to ammonia (the so-called fractional conversion) in the form of a nonlinear scalar equation. On the other hand, Shacham [3] expressed the fractional conversion of Species A in a chemical reactor also in the form of a scalar equation. In addition, Shacham and Kehat [4] gave several examples of real-life problems that can be modeled by means of real scalar equations whose roots play an important role in the cited problems. Some of them are: the chemical equilibrium calculation problem, the isothermal flash problem, the energy or material balance in a chemical reactor, the azeotropic point calculation problem, the adiabatic flame temperature problem, the calculation of gas volume by the Beattie-Bridgeman method, the liquid flow rate in a pipe, the pressure drop in a converging-diverging nozzle, etc.
On the other hand, Moré [5] proposed a collection of nonlinear model problems, most of them described in terms of nonlinear systems of equations. Further, Grosan and Abraham [6] also discussed the applicability of iterative methods for solving nonlinear systems in different sciences, such as neurophysiology, the kinematics synthesis problem, the chemical equilibrium problem, the combustion problem, and economics modeling. Furthermore, the reactor and steering problems were solved in [7,8] by describing them in the form F(x) = 0. Moreover, Lin et al. [9] also discussed the applicability of these procedures for solving nonlinear systems in transport theory.
The construction of iterative methods for solving nonlinear equations, f(x) = 0, or nonlinear systems, F(x) = 0, with m equations and m unknowns, is an interesting task in the field of numerical analysis. In both cases, there are different ways to develop iterative schemes. Different tools, such as quadrature formulae, Adomian polynomials, the divided difference approach, the composition of known methods, the weight function procedure, etc., have been used for designing iterative schemes to solve nonlinear problems. For a good overview of these procedures and techniques, as well as the different schemes developed in the last half century, we refer to some standard textbooks [10-13]. Some scalar schemes can be translated, in a natural way, to multivariate methods, whereas for others, this translation is not possible or requires special algebraic manipulations.
From the fourth-order methods for scalar equations and systems available in the literature [14-24], it is straightforward to see that they present free parameters or weight functions only at the second step in order to obtain new iterative methods. In this paper, we explore the idea of including free parameters or weight functions also in the first step. In this case, is it possible to obtain new optimal fourth-order methods for scalar equations with simple iterative expressions, smaller asymptotic error constants, and smaller residual errors? We then extend this idea to systems of nonlinear equations.
Motivated and inspired by these questions, our main objective in this paper is to highlight the advantages of the new approach over the traditional one in building new optimal fourth-order iterative methods. In addition, our proposed methods not only offer faster convergence, but also smaller residual errors and asymptotic error constants.
Our proposed schemes only use three functional evaluations per iteration, so they are optimal in the sense of the Kung-Traub conjecture for scalar equations. Then, we extend this family to nonlinear systems, preserving the order of convergence. The efficiency of the proposed schemes is tested on several real-life problems, which allows us to conclude that the new methods perform better than or equal to many other known schemes of the same order.
We organize the rest of the manuscript as follows. Section 2 is devoted to developing the proposed family of optimal iterative schemes, establishing its fourth order of convergence, and presenting some special cases that will be used in the numerical section. The extension of the proposed family to nonlinear systems is developed in Section 3, together with the study of its computational efficiency index and the comparison with known fourth-order schemes. The performance of the new methods on real-life and academic problems of one or several variables is analyzed in Section 4. We finish this work with some conclusions and the references used in it.

Development of Fourth-Order Optimal Schemes
This section is devoted to describing the new family of optimal fourth-order schemes to solve f(x) = 0, where f : D ⊆ R → R is defined in an open interval D. The iterative expression of this family is given by: where ϕ(x) is a real weight function, α is a free disposable parameter, and z_n is the midpoint of x_n and y_n, i.e., z_n = (x_n + y_n)/2.
Under some conditions on the function ϕ, the fourth-order convergence of the elements of (1) is established in the following result. We can observe the role of ϕ(x_n) and α in the construction of the fourth-order convergent schemes.

Convergence Analysis
Theorem 1. Let f : C → C be an analytic function in a neighborhood of the required simple zero x*, and let the initial guess x_0 be close enough to x*. Then, the members of the family (1) have fourth-order convergence if ϕ(x*) = 2 and α = ϕ'(x*).
Proof. Let us denote by e_n = x_n − x* the error at the nth step. By using Taylor expansion around x*, we get: and: where Similarly, we can expand ϕ(x_n) around x* by using Taylor's series, as follows: By using (2)-(4) in the first substep of (1), we have: Once again, we expand f(z_n) around the point x*, which leads to: Now, we use Equations (2)-(6) in the last substep of (1), obtaining: where M_1 and M_2 are functions of c_2, c_3, c_4, ϕ(x*), ϕ'(x*), ϕ''(x*), and ϕ'''(x*).
It is straightforward to see that the coefficient of e_n^3 should be zero in order to obtain fourth-order convergence. Then, we have: Finally, we substitute the value α = ϕ'(x*) in Equation (8), obtaining: where ϕ''(x*), ϕ'''(x*) ∈ R are free disposable parameters. This completes the proof.
Hence, the proposed family (1) reaches fourth-order convergence by using only three functional evaluations (viz. f(x_n), f'(x_n), and f(z_n)) per iteration. Therefore, it satisfies the optimality of the Kung-Traub conjecture [25] for multi-point iterative methods without memory. It is worth noting that the values of ϕ(x_n) and α contribute to the construction of the desired fourth-order convergence.
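As a hedged illustration of an optimal two-step scheme in this sense, the sketch below implements King's classical fourth-order family [35] (used later as comparison method KM, for β = 1), not the proposed family (1) itself, whose weight function appears in its iterative expression above. Like (1), King's scheme needs exactly three functional evaluations per iteration. The test equation x³ − 2 = 0 is purely illustrative.

```python
def king_step(f, df, x, beta=1.0):
    """One iteration of King's fourth-order method: uses f(x), f'(x), f(y)."""
    fx = f(x)
    if fx == 0.0:                  # already at the root
        return x
    dfx = df(x)
    y = x - fx / dfx               # Newton substep (first step)
    fy = f(y)
    # second substep with King's weight (f(x) + beta f(y)) / (f(x) + (beta - 2) f(y))
    return y - (fy / dfx) * (fx + beta * fy) / (fx + (beta - 2.0) * fy)

f = lambda x: x**3 - 2.0           # illustrative equation, simple root 2**(1/3)
df = lambda x: 3.0 * x**2

x = 1.5
for _ in range(3):                 # three iterations reach machine precision
    x = king_step(f, df, x)
```

With fourth-order convergence, three iterations from x_0 = 1.5 already reduce the error to the rounding level of double precision.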

Some Particular Cases
In this section, some particular cases of the family (1) are presented by assigning different weight functions ϕ(x). In Table 1, we show different expressions of the function ϕ(x). Moreover, new and interesting iterative methods can be obtained from different functions ϕ(x) satisfying the conditions of Theorem 1.

ϕ(x)
Second step corresponding to ϕ(x), where a ∈ R and b > 2; otherwise, ϕ'(x*) will be unbounded. It is important to note in Case-2 that if we consider any second-order iteration that only employs evaluations of f(x_n) and f'(x_n), then we will obtain optimal fourth-order convergence. Otherwise, if we consider any other iteration function, then we will also obtain fourth-order convergence, but not optimal in the sense of the Kung-Traub conjecture; e.g., if we choose ϕ(x) as in Steffensen's method or a Steffensen-type method, we obtain a new fourth-order iterative method, but not an optimal one.

Multidimensional Extension
Let us now consider the nonlinear system F(x) = 0, defined by a multidimensional function F : D ⊆ R^m → R^m, whose zero x̄ we search for. Our aim is to generalize the family (1) to nonlinear systems, and the main drawback is the existence of the quotient f(z_n)/f(x_n). This usually makes the method non-extendable to several variables, but in the recent literature (see, for example, [26,27]), the authors solved it by means of the following strategy: the quotient can be written as: where f[x_n, z_n] is the first-order divided difference. Now, the class (1) can be written in the following way for nonlinear systems: where H(x) is a matrix-valued function and z^(n) is the midpoint of x^(n) and y^(n), i.e., z^(n) = (x^(n) + y^(n))/2. We use the divided difference operator [·, ·; F] defined by Ortega and Rheinboldt in [10], such that [x, y; F](x − y) = F(x) − F(y), for all x, y ∈ D. In order to obtain the Taylor expansion of the divided difference operator around the solution x̄, we use the Genocchi-Hermite formula (see [28]): [x + h, x; F] = ∫₀¹ F'(x + th) dt, and by developing F'(x + th) around x, we obtain: If F'(x̄) is nonsingular and, denoting e = x − x̄, we have: Replacing these expressions in (12) and using y = x + h and e_y = y − x̄, we have: In particular, this holds if y is Newton's approximation, i.e., h = −[F'(x)]^(−1) F(x). The following result establishes sufficient conditions for the convergence of the family (11) with order four. The notation used for multidimensional Taylor expansions can be found in [29].
Theorem 2. Let F : D ⊆ R^m → R^m be sufficiently differentiable in a neighborhood of x̄, which is a solution of F(x) = 0, and let x^(0) be an initial guess close enough to x̄. If F'(x) is continuous and nonsingular at x̄, then the sequence {x^(n)}_{n≥0} obtained from (11) converges to x̄ with order four when H(x̄) = 2I, H'(x̄) = 0 (the null matrix), and H''(x̄) is bounded, the error equation being in this case: where C_q = (1/q!)[F'(x̄)]^(−1) F^(q)(x̄), q = 2, 3, ..., and e^(n) = x^(n) − x̄.
Proof. By using the Taylor expansions of F(x^(n)) and F'(x^(n)) around x̄, From the above expression and forcing where: As H(x^(n)) can be developed in the following way: the error at the first step is: + O(e^5), and the error at the midpoint z^(n) is: + O(e^5).
In order to guarantee the quadratic convergence of z^(n), we need to ensure that H(x̄) = 2I. Therefore: In order to obtain the error equation, we calculate: Therefore, the error at the final step is: By assuming H'(x̄) = 0, the error equation of the method is obtained.
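The divided difference operator used in this extension can be realized in several ways; the sketch below implements one standard componentwise choice satisfying the defining identity [x, y; F](x − y) = F(x) − F(y). The paper does not fix a particular realization, so this is an illustrative assumption, checked on a small test system.

```python
import numpy as np

def divided_difference(F, x, y):
    """Componentwise first-order divided difference [x, y; F].

    Column j mixes the first components of x with the last components of y,
    so that summing columns telescopes to F(x) - F(y).
    """
    m = len(x)
    DD = np.zeros((m, m))
    for j in range(m):
        u = np.concatenate((x[: j + 1], y[j + 1 :]))
        v = np.concatenate((x[:j], y[j:]))
        DD[:, j] = (F(u) - F(v)) / (x[j] - y[j])
    return DD

def F(v):
    # illustrative 2x2 nonlinear system
    x1, x2 = v
    return np.array([x1**2 + x2 - 3.0, x1 + x2**3 - 5.0])

x = np.array([1.2, 0.7])
y = np.array([0.9, 1.1])
DD = divided_difference(F, x, y)
```

The telescoping structure makes the identity [x, y; F](x − y) = F(x) − F(y) hold exactly (up to rounding), which is the property required of the operator in the extension above.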

Some Special Cases and Their Computational Indices
Now, we present some particular cases of the family (11) by using different functions H(x). In the following, we show several selected functions. Let us remark that if H(x) ≡ 2I, the method is not new, as it appears in [30]. In what follows, we focus our attention on the new schemes, describing in each case the computational effort they involve in terms of the number of functional evaluations, d, and the number of products and quotients, op. By using this information, we use the multidimensional extension of the efficiency index defined by Ostrowski in [31], I = p^(1/d), and the computational efficiency index defined in [29], CI = p^(1/(d+op)), where p is the order of convergence, d is the number of functional evaluations per iteration, and op is the number of products-quotients per iteration.
In order to compare our proposed schemes with other similar ones (of the same order of convergence) existing in the literature, we introduce in what follows some of them, including their respective iterative expressions.This will allow us to calculate their corresponding efficiency indices.
In 2008, Nedzhibov [14] extended the original Jarratt's method (see [15]) to the multidimensional case with the help of the Chebyshev-Halley family, whose iterative expression is: denoted in the following as JM.
On the other hand, Hueso et al. in [17] (Equations (1)-(5)) designed the fourth-order scheme: where S_2 ∈ R is a free disposable parameter. This scheme is denoted throughout the manuscript as HM, for S_2 = 9/8.
Moreover, Junjua et al. in [18] designed a Jarratt-type scheme of fourth order of convergence, denoted as JAM, whose iterative expression involves a weight function η. In Table 2, the efficiency indices I of the new methods Scase-1, Scase-2, Scase-3, and Scase-4 are presented, together with those of the known methods JM, HM, and JAM. The number of functional evaluations in all these schemes is different, but the order of convergence is the same. To calculate the efficiency index I, it must be taken into account that the numbers of functional evaluations of one F, F', and first-order divided difference [·, ·; F] at certain iterates are m, m², and m(m − 1), respectively, m being the size of the system. Despite the differences in the structure of the new and existing methods, index I is the same for all of them. On the other hand, to compute an inverse linear operator, we solve an m × m linear system; as we know, the number of products-quotients needed to obtain the solution of the system by means of the LU decomposition is (m³ − m)/3 + m². In addition, we need m² products for a matrix-vector multiplication and m² quotients for a divided difference. Therefore, we calculate the CI of method Scase-1. For each iteration, we need to evaluate function F twice, the Jacobian F' once, and the divided difference once, so 2m² + m functional evaluations are needed. In addition, we must solve three linear systems with F'(x^(n)) as coefficient matrix (that is, (m³ − m)/3 + 3m² products-quotients), m² quotients for calculating the divided difference, one matrix-vector product (m² products-quotients), and three vector-vector products (3m products-quotients). Therefore, the value of index CI for method Scase-1 on a nonlinear system of size m × m is: In Table 3, we show index CI of schemes Scase-1, Scase-2, Scase-3, Scase-4, JM, HM, and JAM. In it, NFE denotes the number of functional evaluations, NLS1 is the number of linear
systems with the coefficient matrix F'(x^(n)) to be solved, NLS2 is the number of linear systems with other coefficient matrices that are solved, and M × V and V × V denote the number of matrix-vector and vector-vector products, respectively. Table 3. Functional evaluations and products-quotients of the methods. NFE, number of functional evaluations; NLS1, number of linear systems.

Method
We observe that, although index I is the same in all these cases, this is not the case for index CI, since the number of inverse linear operators is different for each scheme. In Figure 1, index CI for those methods and systems of size from 2 to 20 is shown. We can observe that, for a size of the system greater than eight, the best index corresponds to the proposed methods Scase-1 and Scase-2, due to the number of linear systems to be solved and the factor of the dominating term, that is, m³/3, in comparison with 2m³/3 in other schemes.
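The index comparison above can be sketched numerically. The snippet below computes I = p^(1/d) and CI = p^(1/(d+op)) using the standard LU cost (m³ − m)/3 + m² products-quotients per factorization; the per-method operation counts are illustrative assumptions reflecting the dominant terms discussed above (one factorization reused, m³/3, versus two different coefficient matrices, 2m³/3), not the exact counts of Table 3.

```python
def efficiency_index(p, d):
    """Ostrowski's index I = p**(1/d)."""
    return p ** (1.0 / d)

def computational_index(p, d, op):
    """Computational efficiency index CI = p**(1/(d + op))."""
    return p ** (1.0 / (d + op))

def lu_solve_cost(m, rhs=1):
    # one LU factorization plus `rhs` forward/backward substitutions
    return (m**3 - m) / 3.0 + rhs * m**2

m = 20                                   # size of the system
d = 2 * m**2 + m                         # F twice, F' once, one divided difference
# assumed op counts: single factorization reused for three systems vs. two factorizations
op_one_lu = lu_solve_cost(m, rhs=3) + m**2 + m**2 + 3 * m
op_two_lu = 2 * lu_solve_cost(m, rhs=2) + m**2
CI_new = computational_index(4, d, op_one_lu)
CI_old = computational_index(4, d, op_two_lu)
```

For m = 20, the scheme whose dominant cost is m³/3 has the larger CI, consistent with the behavior shown in Figure 1 for system sizes greater than eight.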

Numerical Examples
In this section, we show the effectiveness and efficiency of some of our proposed methods, and we compare them with other existing optimal (scalar case) iterative schemes of the same order. In the one-dimensional case, for comparison purposes, we consider some cases from Table 1: Case-1, Case-3 (for a = 1/10 and b = 3), Case-4 (for a = 10 and b = 2), and Case-5 (for a = 1 and b = 1), called OM1, OM2, OM3, and OM4, respectively. In addition, we consider three real-life problems, namely a chemical engineering problem, the movement of an electron in the air gap between two parallel plates, and the fractional conversion in a chemical reactor, which are displayed in Examples 1-3. The solution of each problem is also listed in the corresponding example, correct up to 30 significant digits. The desired roots are actually available up to many more significant digits (a minimum of one thousand), but due to page limit restrictions, only 30 significant digits are displayed. Now, we compare our methods with the optimal fourth-order multi-point iteration functions proposed by Khattri and Abbasbandy [32] and Soleymani et al. [33]; from them, we considered methods (6) and (19), denoted as KA and SKV, respectively. In addition, we also compare them with the optimal schemes of order four presented by Chun [34] and King [35]; we have picked expressions (10) and (2) (for β = 1), called CM and KM, respectively. Finally, we also compare them with another optimal family of fourth-order methods derived by Cordero et al. [36], from which we have chosen expression (9), called CHMT.
We compare our proposed methods with the existing ones in terms of the absolute residual error of the corresponding function, |f(x_n)|, the error between two consecutive iterations, |x_{n+1} − x_n|, the ratio |x_{n+1} − x_n|/|x_n − x_{n−1}|^4, and the asymptotic error constant η = lim_{n→∞} |x_{n+1} − x_n|/|x_n − x_{n−1}|^4 in Tables 4-6. We calculate the asymptotic error constant and the other quantities up to a minimum of 1000 significant digits to minimize round-off errors. We show the values of x_n, the absolute residual error of the function, |f(x_n)|, the difference between two consecutive iterations, |x_{n+1} − x_n|, and the values of |x_{n+1} − x_n|/|x_n − x_{n−1}|^4 and η. In the context of nonlinear systems, we also consider two applied science problems to check further the validity of the theoretical results. We compare our schemes with methods (14)-(16), called JM, HM, and JAM, respectively. We have included the iteration index (n), the residual error of the corresponding function, ‖F(x^(n+1))‖, the error between iterations, ‖x^(n+1) − x^(n)‖, and the approximated computational order of convergence ρ ≈ ln(‖x^(n+1) − x^(n)‖/‖x^(n) − x^(n−1)‖)/ln(‖x^(n) − x^(n−1)‖/‖x^(n−1) − x^(n−2)‖) (for details, see Cordero and Torregrosa [37]) in Tables 7 and 8.
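The approximated computational order of convergence (ACOC) above is computed from three consecutive increments. The sketch below implements it and, as a sanity check, applies it to plain Newton iterates (order two) on an illustrative equation; the choice of test equation and starting point is ours.

```python
import math

def acoc(increments):
    """ACOC estimates from a list of consecutive increments |x^(n+1) - x^(n)|."""
    return [
        math.log(increments[n + 1] / increments[n])
        / math.log(increments[n] / increments[n - 1])
        for n in range(1, len(increments) - 1)
    ]

# Newton's method (order 2) on f(x) = x**2 - 2, starting at x0 = 1.3
x, incs = 1.3, []
for _ in range(4):
    x_new = x - (x**2 - 2.0) / (2.0 * x)
    incs.append(abs(x_new - x))
    x = x_new

rhos = acoc(incs)
```

For a method of theoretical order p, the last ACOC estimate should approach p; here it approaches 2.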
All computations have been performed using the software Mathematica 9 (Wolfram Research, Champaign, IL, USA) with multiple precision arithmetic, and in Tables 4-8, A(±B) denotes A × 10^(±B). Example 1. We consider a quartic equation from Shacham, Balaji, and Seader [1,2], which describes the fractional conversion, that is, the fraction of the nitrogen-hydrogen feed that gets converted to ammonia. If we consider 250 atm and 500 °C, then the equation has the form: Function (17) has four zeros: two real zeros and a pair of complex conjugate ones. We want to obtain the zero x* ≈ 3.9485424455620457727 + 0.3161235708970163733i with the initial approximation x_0 = 3.7 + 0.25i. Other initial guesses further away from the solution give similar numerical results, with slower convergence towards the solution.
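A hedged numerical check of Example 1: the quartic itself is a display equation not reproduced above, so the polynomial below is an assumption — the standard fractional-conversion quartic associated with these conditions in the literature, which is consistent with the stated complex zero. Plain Newton iteration (not the proposed scheme) is run from the paper's initial guess x_0 = 3.7 + 0.25i.

```python
def f(x):
    # assumed form: f(x) = x^4 - 7.79075 x^3 + 14.7445 x^2 + 2.511 x - 1.674
    return x**4 - 7.79075 * x**3 + 14.7445 * x**2 + 2.511 * x - 1.674

def df(x):
    return 4.0 * x**3 - 23.37225 * x**2 + 29.489 * x + 2.511

x = 3.7 + 0.25j            # initial approximation from the example
for _ in range(12):
    x = x - f(x) / df(x)   # Newton's iteration in the complex plane
```

Under this assumed form, the iteration converges to the complex zero quoted in the example.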
Example 2. In the analysis of the movement of an electron in the air gap between two parallel plates, the multi-factor effect is given by: where e and m are, respectively, the charge and the mass of the electron, p_0 and v_0 are, respectively, the position and velocity of the electron at instant t_0, and E_0 sin(ωt + α) is the RF electric field between the plates. By selecting some values of the parameters in Equation (18), to simplify the expression, we obtain: This function has a simple zero at x* ≈ −0.309093271541794952741986808924, and we use x_0 = 1 as the initial estimation.
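A hedged check of Example 2: the simplified display equation is not reproduced above, so the form f(x) = x + π/4 − cos(x)/2, common for this example in the related literature and matching the stated zero to the displayed digits, is adopted here as an assumption. Newton's method is run from the paper's initial estimation x_0 = 1.

```python
import math

def f(x):
    # assumed simplified form: f(x) = x + pi/4 - cos(x)/2
    return x + math.pi / 4.0 - 0.5 * math.cos(x)

def df(x):
    return 1.0 + 0.5 * math.sin(x)

x = 1.0                    # initial estimation from the example
for _ in range(8):
    x = x - f(x) / df(x)   # Newton's iteration
```

The iterates settle on the simple zero quoted in the example.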
Example 3. Let us consider the expression (see [3]): where x denotes the fractional conversion of Species A in a chemical reactor. We must take into account that this expression has no physical meaning if x < 0 or x > 1; then, x is considered in the interval 0 ≤ x ≤ 1. The searched zero is x* ≈ 0.757396246253753879459641297929. Indeed, let us remark that expression (20) is undefined in 0.8 ≤ x ≤ 1, very near the zero. Moreover, the derivative of expression (20) is close to zero in 0 ≤ x ≤ 0.5. Therefore, we consider the initial approximation x_0 = 0.76 for this problem.
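A hedged check of Example 3: the display expression is not reproduced above, so the form below is an assumption — the standard fractional-conversion equation attributed to Shacham [3] in the related literature, consistent with the stated zero (its logarithm argument turns negative past x = 0.8, matching the undefined region noted above). A derivative-free secant iteration is used, starting near the paper's initial approximation x_0 = 0.76.

```python
import math

def f(x):
    # assumed form: f(x) = x/(1-x) - 5 ln(0.4(1-x)/(0.4-0.5x)) + 4.45977
    return (x / (1.0 - x)
            - 5.0 * math.log(0.4 * (1.0 - x) / (0.4 - 0.5 * x))
            + 4.45977)

x0, x1 = 0.76, 0.758       # two starting points near the paper's initial guess
for _ in range(8):
    fx0, fx1 = f(x0), f(x1)
    if fx1 == fx0:         # converged to machine precision
        break
    x0, x1 = x1, x1 - fx1 * (x1 - x0) / (fx1 - fx0)
```

Under this assumed form, the iterates converge to the searched zero x* ≈ 0.757396.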

Results and Discussion
We can say from Tables 4-6 that our methods have a smaller residual error, in each test function, than the known methods used for comparison, namely KA, SKV, CM, KM, and CHMT. In addition, our methods also have a smaller distance between two consecutive iterations. Therefore, our methods converge faster towards the exact root than the existing ones. On the other hand, the proposed methods also have simple asymptotic error constants for each test function, which can be seen in Tables 4-6. A similar behavior of our methods is found in the case of the multidimensional extension, as shown in Tables 7 and 8. However, our methods could behave differently depending on the nonlinear equation; actually, the behavior of iterative methods mainly depends on the complexity of the iterative expression, the test function, the initial guess, the programming of the scheme, etc.

Conclusions
In the past, several researchers proposed optimal multi-point fourth-order iterative methods for simple roots of nonlinear equations by using weight functions or free parameters only in the second step. In this paper, we design a family of two-step optimal iterative methods with fourth-order convergence, including weight functions and parameters in both steps of the methods. The main strength of our proposed schemes is that they not only give researchers flexibility at both steps for constructing new optimal fourth-order methods, but also provide faster convergence, smaller residual errors of the involved function, and smaller asymptotic error constants in relation to other known schemes. In addition, the local convergence analysis of the suggested schemes was studied through Lipschitz constants and the Banach lemma in order to calculate the local convergence radius. Moreover, we extended the proposed scheme to systems of nonlinear equations, preserving the order of convergence and the structure of the scalar case. Numerical experiments were performed and compared with earlier existing methods, and the results show that the proposed schemes perform at least as well as the existing ones.

Figure 1. Index CI for different sizes of the system.

Table 2. Efficiency index for different schemes.

Table 5. Convergence behavior of different fourth-order optimal methods for f_3(x).
Example 4. Let us consider the mixed Hammerstein integral equation from Ortega and Rheinboldt [10]:
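A hedged sketch of the discretization of the mixed Hammerstein equation mentioned above. The specific equation is a display not reproduced here, so the form x(s) = 1 + (1/5)∫₀¹ G(s,t) x(t)³ dt, with the usual Green's function kernel, is adopted as an assumption; it is a common choice in this literature and leads to a nonlinear system via Gauss-Legendre quadrature (the abscissas t_j and weights w_j of Table 9). Plain Newton's method is used to solve the discretized system.

```python
import numpy as np

def green(s, t):
    # Green's function kernel: s(1-t) for s <= t, t(1-s) otherwise
    return np.where(s <= t, s * (1.0 - t), t * (1.0 - s))

# 8-point Gauss-Legendre rule mapped from [-1, 1] to [0, 1]
u, v = np.polynomial.legendre.leggauss(8)
t = 0.5 * (u + 1.0)                       # abscissas t_j
w = 0.5 * v                               # weights w_j
A = green(t[:, None], t[None, :]) * w[None, :]

def F(x):
    # discretized equation: x_i - 1 - (1/5) sum_j A_ij x_j^3 = 0
    return x - 1.0 - 0.2 * A @ x**3

def JF(x):
    # Jacobian: I - (3/5) A diag(x^2)
    return np.eye(len(x)) - 0.6 * A * (x**2)[None, :]

x = np.ones(8)
for _ in range(6):
    x = x - np.linalg.solve(JF(x), F(x))
```

The mild nonlinearity makes Newton converge in a few iterations from the constant initial guess x ≡ 1.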

Table 6. Convergence behavior of different fourth-order optimal methods for f_4(x).

Table 7. Convergence behavior of different fourth-order methods for Example 4.
Example 5. Let us consider the following boundary value problem described in [10]:
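A hedged sketch of how such a boundary value problem reduces to a nonlinear system. The specific problem is a display not reproduced here; the classical Ortega-Rheinboldt problem y'' = (1/2)(y + t + 1)³ with y(0) = y(1) = 0 is adopted as an assumption. Central finite differences yield a tridiagonal nonlinear system, solved below with plain Newton's method.

```python
import numpy as np

n = 10                         # number of interior grid points
h = 1.0 / (n + 1)
t = np.linspace(h, 1.0 - h, n)

def F(y):
    # central differences for y'' with boundary conditions y(0) = y(1) = 0
    yp = np.concatenate(([0.0], y, [0.0]))
    return yp[2:] - 2.0 * yp[1:-1] + yp[:-2] - 0.5 * h**2 * (y + t + 1.0)**3

def JF(y):
    # tridiagonal Jacobian: diag -2 - (3/2) h^2 (y+t+1)^2, off-diagonals 1
    J = np.diag(-2.0 - 1.5 * h**2 * (y + t + 1.0)**2)
    J += np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    return J

y = np.zeros(n)
for _ in range(8):
    y = y - np.linalg.solve(JF(y), F(y))
```

Since y'' > 0 on the whole interval, the discrete solution is negative at every interior node, which serves as a quick consistency check.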

Table 8. Convergence behavior of different fourth-order methods for Example 5.

Table 9. Values of the abscissas t_j and weights w_j.