Article

A Fourth-Order Parametric Iterative Approach for Solving Systems of Nonlinear Equations

1 Department of Mathematics, Chandigarh University, Gharuan, Mohali 140413, Punjab, India
2 Scientific Computing Group, Universidad de Salamanca, Plaza de la Merced, 37008 Salamanca, Spain
3 Mathematical Modelling and Applied Computation Research Group (MMAC), Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
4 Department of Mathematics, Saveetha Institute of Medical and Technical Sciences Engineering College, Chennai 602105, Tamil Nadu, India
* Author to whom correspondence should be addressed.
Computation 2025, 13(10), 241; https://doi.org/10.3390/computation13100241
Submission received: 15 August 2025 / Revised: 28 September 2025 / Accepted: 6 October 2025 / Published: 14 October 2025
(This article belongs to the Section Computational Engineering)

Abstract

In this paper, we present a novel one-parameter family of iterative methods with fourth-order convergence for solving nonlinear systems, together with its convergence analysis. Several numerical experiments, including a Bratu problem, a mixed Hammerstein integral equation, and nonlinear optimization problems (namely, the Broyden banded function and the Broyden tridiagonal function), as well as applications to differential equations, are analyzed using the proposed schemes to demonstrate their effectiveness. The results indicate that these methods produce more accurate approximations and exhibit greater efficiency than existing approaches.

1. Introduction

Nonlinear systems of equations, $F(x) = 0$, where $F : \Omega \subseteq \mathbb{R}^n \to \mathbb{R}^n$, frequently arise in many fields of engineering and science [1,2]. Solving these systems analytically is either extremely difficult or rarely feasible. To address this, numerous researchers have proposed iterative techniques for approximating solutions to nonlinear systems.
One of the oldest and simplest iterative methods is Newton’s method [3,4], which is defined as follows:
$$x^{(k+1)} = x^{(k)} - \left[F'(x^{(k)})\right]^{-1} F(x^{(k)}), \qquad k = 0, 1, 2, \ldots,$$
where $F'(x^{(k)})$ represents the Jacobian matrix of $F$ evaluated at $x^{(k)}$. Newton's method exhibits quadratic convergence, provided that the root sought is simple and the initial estimate is sufficiently close to the solution.
Numerous higher-order techniques have been developed in the literature [5,6,7], many of which take Newton's method as a first step. However, in various practical scenarios, the first-order Fréchet derivative $F'(x)$ either does not exist or is computationally expensive to evaluate. To address such situations, Traub [4] proposed a Jacobian-free approach, defined as follows:
$$x^{(k+1)} = x^{(k)} - \left[w^{(k)}, x^{(k)}; F\right]^{-1} F(x^{(k)}), \qquad k = 0, 1, 2, \ldots,$$
where $[w^{(k)}, x^{(k)}; F]$ represents the first-order divided difference operator of $F$, and $w^{(k)} = x^{(k)} + \beta F(x^{(k)})$, with $\beta \neq 0$ an arbitrary constant. For $\beta = 1$, this method reduces to the multidimensional Steffensen's method, as formulated by Samanskii in [8]. These methods also have a quadratic order of convergence. We recall that the mapping $[\cdot, \cdot; F] : \mathbb{R}^n \times \mathbb{R}^n \to \mathcal{L}(\mathbb{R}^n)$ satisfies $[x, y; F](y - x) = F(y) - F(x)$, and the entries of the matrix associated with $[x^{(k)}, y^{(k)}; F]$ can be obtained from the following first-order divided difference operator [3]:
$$\left[x^{(k)}, y^{(k)}; F\right]_{i,j} = \frac{F_i(y_1, \ldots, y_{j-1}, y_j, x_{j+1}, \ldots, x_n) - F_i(y_1, \ldots, y_{j-1}, x_j, x_{j+1}, \ldots, x_n)}{y_j - x_j}.$$
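As an illustration, the componentwise formula above can be implemented directly. The sketch below is a Python/NumPy rendering (the function name is ours, not from the paper); by telescoping over the columns it satisfies the defining identity $[x, y; F](y - x) = F(y) - F(x)$.

```python
import numpy as np

def divided_difference(F, x, y):
    """First-order divided difference operator [x, y; F].

    Column j uses F evaluated at two points that differ only in the j-th
    coordinate: the first j components come from y, the rest from x.
    """
    n = x.size
    M = np.empty((n, n))
    for j in range(n):
        upper = np.concatenate((y[:j + 1], x[j + 1:]))  # (y_1..y_j, x_{j+1}..x_n)
        lower = np.concatenate((y[:j], x[j:]))          # (y_1..y_{j-1}, x_j..x_n)
        M[:, j] = (F(upper) - F(lower)) / (y[j] - x[j])
    return M
```

Summing column $j$ times $(y_j - x_j)$ telescopes to $F(y) - F(x)$, which is exactly the secant identity the operator must satisfy.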
To enhance the order of convergence, several researchers have developed third-order methods [9,10,11,12], each requiring one evaluation of $F$, two evaluations of its derivative $F'$, and two matrix inversions per iteration. Cordero and Torregrosa [13] introduced two additional third-order methods: one requires one evaluation of $F$ and three evaluations of $F'$, while the other demands one evaluation of $F$ and four evaluations of $F'$, along with two matrix inversions. Another third-order method, proposed by Darvishi and Barati [14], employs two evaluations of $F$, two evaluations of $F'$, and two matrix inversions per iteration. Further advancements in third-order methods were made by Darvishi and Barati [15], as well as Potra and Pták [16], both introducing methods that require two function evaluations, one evaluation of the derivative, and one matrix inversion per iteration.
Moreover, several higher-order iterative schemes have been developed. Babajee et al. [17] presented a fourth-order approach involving one evaluation of $F$, two evaluations of $F'$, and two matrix inversions per iteration. Another fourth-order method, proposed by Cordero et al. [18], is based on two function and two Jacobian evaluations along with one matrix inversion. Additionally, the authors in [6] introduced another fourth-order method that uses three evaluations of $F$, one evaluation of $F'$, and one matrix inversion per iteration.
It is evident that increasing the order of convergence of iterative methods often leads to a higher computational cost per iteration, which poses a significant challenge for higher-order methods. Consequently, when developing new iterative methods, maintaining a low computational cost is essential. Motivated by this, we propose new iterative schemes that achieve fourth-order convergence while keeping the number of evaluations per iteration low. The proposed method requires one function evaluation of $F$, one Jacobian evaluation, and one divided difference per iteration.
This manuscript is organized as follows. In Section 2, a new parametric fourth-order method is presented along with its convergence analysis. Section 3 discusses the efficiency index of the proposed method. In Section 4, various numerical experiments are conducted to validate the theoretical results and compare the proposed algorithms with some existing methods. Finally, the paper concludes with a summary of findings.

2. Proposed Method

The proposed two-step iterative method is
$$\begin{aligned} y^{(k)} &= x^{(k)} - \left[F'(x^{(k)})\right]^{-1}F(x^{(k)}),\\ x^{(k+1)} &= x^{(k)} - \left(\alpha\left[x^{(k)}, y^{(k)}; F\right]^{-1} + (1-\alpha)\left(I - k_2\eta\right)^{-1}\left(I - k_1\eta\right)\left[F'(x^{(k)})\right]^{-1}\right)F(x^{(k)}), \end{aligned}$$
where $\eta = I - [F'(x^{(k)})]^{-1}[x^{(k)}, y^{(k)}; F]$. In addition, $[x^{(k)}, y^{(k)}; F]$ is the first-order divided difference operator satisfying $[x^{(k)}, y^{(k)}; F](y^{(k)} - x^{(k)}) = F(y^{(k)}) - F(x^{(k)})$. The parameters $k_1$, $k_2$, and $\alpha \in \mathbb{R}\setminus\{1\}$ can be chosen freely. The convergence analysis is stated in the following theorem.
Theorem 1. 
Let $F : \Omega \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be sufficiently differentiable in an open convex neighborhood $\Omega$ containing a zero $\xi$ of $F$, assume that $F'(x)$ is nonsingular at $x = \xi$, and let the initial guess $x^{(0)}$ be sufficiently close to $\xi$. Then, the iterative scheme defined by (4) exhibits fourth-order convergence for
$$k_1 = \frac{1}{\alpha - 1}, \qquad k_2 = \frac{\alpha}{\alpha - 1}, \qquad \alpha \in \mathbb{R}\setminus\{1\},$$
and the error equation is given by
$$e^{(k+1)} = \left(\frac{2\alpha - 1}{\alpha - 1}B_2^3 - B_2B_3\right)\left(e^{(k)}\right)^4 + O\left(\left(e^{(k)}\right)^5\right),$$
where $B_j = \frac{1}{j!}\left[F'(\xi)\right]^{-1}F^{(j)}(\xi)$, $j \geq 2$, and $e^{(k)} = x^{(k)} - \xi$.
Proof. 
For the error equation of the first step of (4), $y^{(k)} = x^{(k)} - [F'(x^{(k)})]^{-1}F(x^{(k)})$, we expand $F(x^{(k)})$ and $F'(x^{(k)})$ in Taylor series around $\xi$ as follows:
$$F(x^{(k)}) = F'(\xi)\left[e^{(k)} + B_2(e^{(k)})^2 + B_3(e^{(k)})^3 + B_4(e^{(k)})^4 + B_5(e^{(k)})^5 + B_6(e^{(k)})^6\right] + O\left((e^{(k)})^7\right),$$
$$F'(x^{(k)}) = F'(\xi)\left[I + 2B_2e^{(k)} + 3B_3(e^{(k)})^2 + 4B_4(e^{(k)})^3 + 5B_5(e^{(k)})^4 + 6B_6(e^{(k)})^5\right] + O\left((e^{(k)})^6\right).$$
Assuming that the Jacobian matrix $F'(\xi)$ is nonsingular, we derive the Taylor series expansion of $[F'(x^{(k)})]^{-1}$ as follows:
$$\left[F'(x^{(k)})\right]^{-1} = \left[F'(\xi)\right]^{-1}\left[I + C_2e^{(k)} + C_3(e^{(k)})^2 + C_4(e^{(k)})^3 + C_5(e^{(k)})^4 + C_6(e^{(k)})^5\right] + O\left((e^{(k)})^6\right),$$
where $C_2 = -2B_2$, $C_3 = 4B_2^2 - 3B_3$, $C_4 = 6B_3B_2 + 6B_2B_3 - 4B_4 - 8B_2^3$, $C_5 = 16B_2^4 + 9B_3^2 - 12B_3B_2^2 - 12B_2B_3B_2 - 12B_2^2B_3 + 8B_4B_2 + 8B_2B_4 - 5B_5$, and $C_6 = -32B_2^5 - 18B_3^2B_2 + 24B_3B_2^3 + 24B_2B_3B_2^2 + 24B_2^2B_3B_2 - 16B_4B_2^2 - 16B_2B_4B_2 + 10B_5B_2 - 18B_3B_2B_3 - 18B_2B_3^2 + 12B_4B_3 + 24B_2^3B_3 - 16B_2^2B_4 + 12B_3B_4 + 10B_2B_5 - 6B_6$ have been determined from the following:
$$\left[F'(x^{(k)})\right]^{-1}F'(x^{(k)}) = I_{n\times n}.$$
Therefore,
$$\begin{aligned} \left[F'(x^{(k)})\right]^{-1} = \left[F'(\xi)\right]^{-1}\Big[&I - 2B_2e^{(k)} + \left(4B_2^2 - 3B_3\right)(e^{(k)})^2 + \left(6B_3B_2 + 6B_2B_3 - 4B_4 - 8B_2^3\right)(e^{(k)})^3\\ &+ \left(16B_2^4 + 9B_3^2 - 12B_3B_2^2 - 12B_2B_3B_2 - 12B_2^2B_3 + 8B_4B_2 + 8B_2B_4 - 5B_5\right)(e^{(k)})^4\\ &+ \big(-32B_2^5 - 18B_3^2B_2 + 24B_3B_2^3 + 24B_2B_3B_2^2 + 24B_2^2B_3B_2 - 16B_4B_2^2 - 16B_2B_4B_2\\ &\quad + 10B_5B_2 - 18B_3B_2B_3 - 18B_2B_3^2 + 12B_4B_3 + 24B_2^3B_3 - 16B_2^2B_4 + 12B_3B_4\\ &\quad + 10B_2B_5 - 6B_6\big)(e^{(k)})^5\Big] + O\left((e^{(k)})^6\right). \end{aligned}$$
By using (5) and (8), one can obtain
$$\left[F'(x^{(k)})\right]^{-1}F(x^{(k)}) = e^{(k)} - B_2(e^{(k)})^2 + 2\left(B_2^2 - B_3\right)(e^{(k)})^3 + \left(3B_3B_2 + 4B_2B_3 - 3B_4 - 4B_2^3\right)(e^{(k)})^4 + \left(-6B_3B_2^2 - 8B_2^2B_3 + 6B_3^2 - 6B_2B_3B_2 + 6B_2B_4 + 4B_4B_2 + 8B_2^4 - 4B_5\right)(e^{(k)})^5 + O\left((e^{(k)})^6\right).$$
Considering the error term e ( k ) = x ( k ) ξ , the expansion of the error at the first step of the family (4) is given by the following:
$$y^{(k)} - \xi = B_2(e^{(k)})^2 - 2\left(B_2^2 - B_3\right)(e^{(k)})^3 - \left(3B_3B_2 + 4B_2B_3 - 3B_4 - 4B_2^3\right)(e^{(k)})^4 - \left(-6B_3B_2^2 - 8B_2^2B_3 + 6B_3^2 - 6B_2B_3B_2 + 6B_2B_4 + 4B_4B_2 + 8B_2^4 - 4B_5\right)(e^{(k)})^5 + O\left((e^{(k)})^6\right).$$
The Taylor expansion of $F(y^{(k)})$ around $\xi$ can be evaluated similarly to Equation (5):
$$F(y^{(k)}) = F'(\xi)\left[B_2(e^{(k)})^2 + 2\left(B_3 - B_2^2\right)(e^{(k)})^3 + \left(3B_4 + 5B_2^3 - 3B_3B_2 - 4B_2B_3\right)(e^{(k)})^4 + \left(-12B_2^4 - 6B_3^2 + 4B_5 - 6B_2B_4 + 10B_2^2B_3 + 6B_3B_2^2 - 4B_4B_2 + 8B_2B_3B_2\right)(e^{(k)})^5\right] + O\left((e^{(k)})^6\right).$$
Using (3) together with (5), (10), and (11), one obtains
$$\left[x^{(k)}, y^{(k)}; F\right] = F'(\xi)\left[I + B_2e^{(k)} + \left(B_2^2 + B_3\right)(e^{(k)})^2 + \left(-2B_2^3 + 3B_2B_3 + B_4\right)(e^{(k)})^3 + \left(4B_2^4 - 8B_2^2B_3 + 2B_3^2 + 4B_2B_4 + B_5\right)(e^{(k)})^4 + \left(-8B_2^5 + 20B_2^3B_3 - 11B_2^2B_4 + 5B_3B_4 + B_2\left(-9B_3^2 + 5B_5\right) + B_6\right)(e^{(k)})^5\right] + O\left((e^{(k)})^6\right).$$
Now, using Equations (8) and (12), we get
$$\eta = I - \left[F'(x^{(k)})\right]^{-1}\left[x^{(k)}, y^{(k)}; F\right] = B_2e^{(k)} + \left(-3B_2^2 + 2B_3\right)(e^{(k)})^2 + \left(8B_2^3 - 10B_2B_3 + 3B_4\right)(e^{(k)})^3 + \left(-20B_2^4 + 37B_2^2B_3 - 8B_3^2 - 14B_2B_4 + 4B_5\right)(e^{(k)})^4 + \left(48B_2^5 - 118B_2^3B_3 + 51B_2^2B_4 - 22B_3B_4 + B_2\left(55B_3^2 - 18B_5\right) + 5B_6\right)(e^{(k)})^5 + O\left((e^{(k)})^6\right).$$
Using (13), we get
$$I - k_1\eta = I - k_1B_2e^{(k)} + k_1\left(3B_2^2 - 2B_3\right)(e^{(k)})^2 - k_1\left(8B_2^3 - 10B_2B_3 + 3B_4\right)(e^{(k)})^3 - k_1\left(-20B_2^4 + 37B_2^2B_3 - 8B_3^2 - 14B_2B_4 + 4B_5\right)(e^{(k)})^4 - k_1\left(48B_2^5 - 118B_2^3B_3 + 51B_2^2B_4 - 22B_3B_4 + B_2\left(55B_3^2 - 18B_5\right) + 5B_6\right)(e^{(k)})^5 + O\left((e^{(k)})^6\right).$$
In view of (13), one can obtain the following:
$$\left(I - \eta\right)^{-1} = I + B_2e^{(k)} + \left(-2B_2^2 + 2B_3\right)(e^{(k)})^2 + 3\left(B_2^3 - 2B_2B_3 + B_4\right)(e^{(k)})^3 + \left(-3B_2^4 + 11B_2^2B_3 - 4B_3^2 - 8B_2B_4 + 4B_5\right)(e^{(k)})^4 + \left(-10B_2^3B_3 + 14B_2^2B_4 + B_2\left(11B_3^2 - 10B_5\right) + 5\left(-2B_3B_4 + B_6\right)\right)(e^{(k)})^5 + O\left((e^{(k)})^6\right),$$
$$\left(I - k_2\eta\right)^{-1} = I + k_2B_2e^{(k)} + k_2\left(\left(-3 + k_2\right)B_2^2 + 2B_3\right)(e^{(k)})^2 + k_2\left(\left(8 - 6k_2 + k_2^2\right)B_2^3 + 2\left(-5 + 2k_2\right)B_2B_3 + 3B_4\right)(e^{(k)})^3 + k_2\left(\left(-20 + 25k_2 - 9k_2^2 + k_2^3\right)B_2^4 + \left(37 - 32k_2 + 6k_2^2\right)B_2^2B_3 + 4\left(-2 + k_2\right)B_3^2 + 2\left(-7 + 3k_2\right)B_2B_4 + 4B_5\right)(e^{(k)})^4 + O\left((e^{(k)})^5\right).$$
Substituting Equations (5)–(16) into the second substep of scheme (4), we obtain the error equation as follows:
$$\begin{aligned} e^{(k+1)} = {}& -(\alpha - 1)(k_1 - k_2 + 1)B_2(e^{(k)})^2\\ &+ \Big[\left(-2 + k_1(k_2 - 4) + 4k_2 - k_2^2 + \alpha\left(3 - k_1(k_2 - 4) - 4k_2 + k_2^2\right)\right)B_2^2 - 2(\alpha - 1)(k_1 - k_2 + 1)B_3\Big](e^{(k)})^3\\ &+ \Big[\left(4 - 13k_2 + 7k_2^2 - k_2^3 + k_1\left(13 - 7k_2 + k_2^2\right) + \alpha\left(-7 + 13k_2 - 7k_2^2 + k_2^3 - k_1\left(13 - 7k_2 + k_2^2\right)\right)\right)B_2^3\\ &\qquad + \left(-7 + 14k_2 - 4k_2^2 + 2k_1\left(-7 + 2k_2\right) + 2\alpha\left(5 + k_1\left(7 - 2k_2\right) - 7k_2 + 2k_2^2\right)\right)B_2B_3\\ &\qquad - 3(\alpha - 1)(k_1 - k_2 + 1)B_4\Big](e^{(k)})^4 + O\left((e^{(k)})^5\right). \end{aligned}$$
To attain a fourth-order convergence rate, the coefficients of $(e^{(k)})^2$ and $(e^{(k)})^3$ must simultaneously vanish. By equating these coefficients to zero, we obtain the following necessary conditions:
$$k_1 = \frac{1}{\alpha - 1} \quad \text{and} \quad k_2 = k_1 + 1 = \frac{\alpha}{\alpha - 1}, \qquad \text{where } \alpha \in \mathbb{R}\setminus\{1\}.$$
Substituting the conditions in (18) into Equation (17), we obtain the following:
$$e^{(k+1)} = \left(\frac{2\alpha - 1}{\alpha - 1}B_2^3 - B_2B_3\right)\left(e^{(k)}\right)^4 + O\left(\left(e^{(k)}\right)^5\right),$$
where $\alpha \in \mathbb{R}\setminus\{1\}$ represents a free parameter.
Hence, (19) demonstrates the fourth-order convergence of the proposed family in (4). □
Remark 1. 
A sufficient condition for the inverse $(I - k_2\eta)^{-1}$ to exist is $\|k_2\eta\| < 1$. Since, by (13), $\eta = O\left(\|e^{(k)}\|\right)$, this condition holds locally: $\|k_2\eta\|$ remains small once the iterates are sufficiently close to the solution.
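For readers who wish to experiment with the family, the following Python/NumPy sketch implements one iteration of scheme (4) with the parameter choice of Theorem 1. It is an illustrative reading of the scheme, not the authors' code: we take the weight on the second term to be $(1-\alpha)$, and build the divided difference with the componentwise formula from Section 1.

```python
import numpy as np

def divdiff(F, x, y):
    # componentwise first-order divided difference [x, y; F]
    n = x.size
    M = np.empty((n, n))
    for j in range(n):
        up = np.concatenate((y[:j + 1], x[j + 1:]))
        lo = np.concatenate((y[:j], x[j:]))
        M[:, j] = (F(up) - F(lo)) / (y[j] - x[j])
    return M

def pm_step(F, J, x, alpha):
    """One step of the parametric scheme (4) with k1, k2 from Theorem 1."""
    n = x.size
    I = np.eye(n)
    k1 = 1.0 / (alpha - 1.0)
    k2 = alpha / (alpha - 1.0)
    Fx, Jx = F(x), J(x)
    newton = np.linalg.solve(Jx, Fx)          # [F'(x)]^{-1} F(x)
    y = x - newton
    D = divdiff(F, x, y)                      # [x, y; F]
    eta = I - np.linalg.solve(Jx, D)
    part1 = alpha * np.linalg.solve(D, Fx)
    part2 = (1.0 - alpha) * np.linalg.solve(I - k2 * eta, (I - k1 * eta) @ newton)
    return x - (part1 + part2)
```

On a small diagonal test system such as $F(x) = (x_1^2 - 2,\ x_2^3 - 2)$, a few steps from a nearby starting point drive the residual to round-off level.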

3. Efficiency Index

The efficiency of iterative methods is a critical factor in determining their practical applicability, particularly for large-scale problems. The formula suggested by Ostrowski is the following:
$$I_E = p^{1/d},$$
where p is the order of convergence of the method and d represents the number of functional evaluations needed to perform the method per iteration.
Another classical measure of the efficiency of iterative methods is the operational efficiency index proposed by Traub, with the following expression:
$$I_O = p^{1/op},$$
where o p is the number of operations, expressed in units of product, needed to calculate each iteration.
On some occasions, a combination of both is also used, called the computational efficiency index, whose expression is
$$E = p^{1/(d + op)}.$$
In general, for $m$ nonlinear equations in $m$ variables, computing $F$ requires $m$ function evaluations and computing $F'$ requires $m^2$. Using LU factorization, the number of products and quotients needed to solve one linear system is $\frac{1}{3}m^3 + m^2 - \frac{1}{3}m$, while solving linear systems that share the same coefficient matrix (the common matrix repeated $r$ times) requires $\frac{1}{3}m^3 + rm^2 - \frac{1}{3}m$. Because of this, the computational cost rises by $m^2$ for every additional system with a common coefficient matrix. Further, we also count $m^2$ products for each multiplication of a matrix by a vector.
For the proposed method, which achieves fourth-order convergence ($p = 4$), the efficiency index is given by
$$E = 4^{1/\left(\frac{m^3}{3} + 5m^2 + 2m\right)}.$$
There are two functional evaluations and a single Jacobian evaluation, so the number of functional evaluations is
$$d = 2m + m^2.$$
On the other hand, a divided difference operator is calculated and a single system is solved, so the number of operations is
$$op = \frac{m^3}{3} + 4m^2.$$
Therefore, the total number of function evaluations and operations is
$$d + op = 2m + m^2 + \frac{m^3}{3} + 4m^2 = \frac{m^3}{3} + 5m^2 + 2m.$$
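The counts above are easy to tabulate. The helper below is our own illustration (not from the paper): it evaluates the computational efficiency index $E = p^{1/(d+op)}$, and the Newton cost used for comparison is our assumption (one function, one Jacobian, and one LU solve per iteration).

```python
def efficiency_index(p, cost):
    # computational efficiency index E = p**(1/(d + op))
    return p ** (1.0 / cost)

def proposed_cost(m):
    # d + op for the proposed fourth-order method
    return m ** 3 / 3.0 + 5.0 * m ** 2 + 2.0 * m

def newton_cost(m):
    # assumed cost of Newton's method: m + m^2 evaluations plus one LU solve
    return (m + m ** 2) + (m ** 3 / 3.0 + m ** 2 - m / 3.0)
```

For example, with $m = 10$ the proposed method's index exceeds Newton's under these assumptions, reflecting the higher order obtained at a modest extra cost.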

4. Numerical Results

In this section, we examine several numerical problems to evaluate the effectiveness of the proposed methods. The newly introduced schemes, denoted as $PM_1$, $PM_2$, and $PM_3$, correspond to the parameter values $\alpha = \frac{1}{4}$, $\alpha = \frac{1}{9}$, and $\alpha = -\frac{1}{9}$, respectively. These methods are analyzed and compared against the following existing approaches.
1.
Cordero et al. [19], denoted as CL1 and CL2, respectively:
$$y^{(k)} = x^{(k)} - \left[x^{(k)} + \lambda H(x^{(k)}), x^{(k)}; F\right]^{-1}F(x^{(k)}), \qquad x^{(k+1)} = x^{(k)} - \left(\frac{\beta}{6}G^3 + 2G^2 + G + I\right)\left[x^{(k)} + \lambda H(x^{(k)}), x^{(k)}; F\right]^{-1}F(x^{(k)}),$$
for $\beta = 1$ and $G(x^{(k)}, y^{(k)}) = I - \left[x^{(k)} + \lambda H(x^{(k)}), x^{(k)}; F\right]^{-1}\left[x^{(k)}, y^{(k)}; F\right]$;
$$y^{(k)} = x^{(k)} - \left[x^{(k)} + \lambda H(x^{(k)}), x^{(k)}; F\right]^{-1}F(x^{(k)}), \qquad x^{(k+1)} = y^{(k)} - \left(3I - 2\Gamma^{(k)}\right)\left[x^{(k)} + \lambda H(x^{(k)}), x^{(k)}; F\right]^{-1}F(y^{(k)}),$$
where $\Gamma^{(k)} = \left[x^{(k)} + \lambda H(x^{(k)}), x^{(k)}; F\right]^{-1}\left[x^{(k)}, y^{(k)}; F\right]$.
We note that the authors in (20) and (21) used $\lambda = 0.001$ and
$$H(x^{(k)}) = \left(f_1^m(x^{(k)}), f_2^m(x^{(k)}), \ldots, f_n^m(x^{(k)})\right)^T,$$
with $m = 2$.
2.
Grau et al. [11], denoted as GG
$$y^{(k)} = x^{(k)} - \left[F'(x^{(k)})\right]^{-1}F(x^{(k)}), \qquad x^{(k+1)} = x^{(k)} - \frac{1}{2}\left(\left[F'(x^{(k)})\right]^{-1} + \left[F'(y^{(k)})\right]^{-1}\right)F(x^{(k)}).$$
3.
Cordero et al. [20], denoted as CR 1 .
$$y^{(k)} = x^{(k)} - \left[F'(x^{(k)})\right]^{-1}F(x^{(k)}), \qquad x^{(k+1)} = y^{(k)} - \left[F'(x^{(k)})\right]^{-1}\left(p_kF(y^{(k)}) + q_kF(x^{(k)})\right),$$
where $\nu_k = \frac{\|F(y^{(k)})\|^2}{\|F(x^{(k)})\|^2} = \frac{F(y^{(k)})^TF(y^{(k)})}{F(x^{(k)})^TF(x^{(k)})}$, $K_k = \frac{1}{1 + \lambda\nu_k}$, $p_k = K_k(1 + \psi\nu_k)$, $q_k = 2K_k\nu_k$, and $(\lambda, \psi) \in \mathbb{R}^2$.
4.
Cordero et al. [21], denoted as CR 2 .
$$x^{(k+1)} = y^{(k)} - \omega_k\left[F'(x^{(k)})\right]^{-1}F(y^{(k)}),$$
where $y^{(k)}$ represents Newton's method for systems, $(\lambda, \psi) \in \mathbb{R}^2$, and $\omega_k = \frac{1 + 2\mu_k + \psi\nu_k}{1 + \lambda\nu_k}$, $\mu_k = \frac{F(x^{(k)})^TF(y^{(k)})}{\|F(x^{(k)})\|^2}$, $\nu_k = \frac{\|F(y^{(k)})\|^2}{\|F(x^{(k)})\|^2}$.
5.
Sharma and Arora [22], denoted as SA .
$$y^{(k)} = x^{(k)} - \frac{2}{3}\left[F'(x^{(k)})\right]^{-1}F(x^{(k)}), \qquad x^{(k+1)} = x^{(k)} - \left[\frac{23}{8}I - \left[F'(x^{(k)})\right]^{-1}F'(y^{(k)})\left(3I - \frac{9}{8}\left[F'(x^{(k)})\right]^{-1}F'(y^{(k)})\right)\right]\left[F'(x^{(k)})\right]^{-1}F(x^{(k)}).$$
To numerically verify the convergence order established in Theorem 1, we report the iteration number $k$, the residual norm $\|F(x^{(k)})\|$, the norm of the difference between two consecutive iterates $\|x^{(k+1)} - x^{(k)}\|$, and the approximate computational order of convergence (ACOC). The ACOC, denoted $\rho$, is calculated as defined in [13]:
$$\rho \approx \frac{\ln\left(\|x^{(k+1)} - x^{(k)}\| / \|x^{(k)} - x^{(k-1)}\|\right)}{\ln\left(\|x^{(k)} - x^{(k-1)}\| / \|x^{(k-1)} - x^{(k-2)}\|\right)}, \qquad k = 2, 3, \ldots,$$
where $x^{(k-2)}$, $x^{(k-1)}$, $x^{(k)}$, and $x^{(k+1)}$ are four consecutive approximations in the iterative process.
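The ACOC is computed from the norms of three consecutive steps. A minimal Python helper (ours, for illustration):

```python
import math

def acoc(step_norms):
    """Approximate computational order of convergence.

    step_norms holds the norms ||x^{k+1} - x^k|| of at least three
    consecutive steps; the estimate uses the last three of them.
    """
    s = step_norms
    return math.log(s[-1] / s[-2]) / math.log(s[-2] / s[-3])
```

For an exactly fourth-order step sequence such as 1e-1, 1e-4, 1e-16, the helper returns 4.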
Mathematica 11.1.1 [23] was used for all numerical computations; the stopping criteria $\|x^{(k+1)} - x^{(k)}\| < 10^{-500}$ and $\|F(x^{(k)})\| < 10^{-500}$ were used to reduce round-off errors. The configuration of our computer is given below:
  • A processor Intel(R) Core(TM) i3-1005G1 (Intel, Santa Clara, CA, USA).
  • CPU @ 1.20 GHz (64 bit machine) (Intel, Santa Clara, CA, USA).
  • Microsoft Windows 10 Pro (Microsoft Corporation, Albuquerque, NM, USA).
In all tables, $b(\pm c)$ denotes $b \times 10^{\pm c}$. The results in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 show that the proposed methods give better results than the existing methods, and Table 7 presents the number of iterations required by each method for the respective examples. A detailed discussion of the outcomes is provided separately for each example.
Example 1. 
Consider the Bratu problem [24], which has a wide range of applications, including the fuel ignition model in thermal combustion, thermal reactions, radioactive heat transfer, chemical reactor theory, the Chandrasekhar model of the universe’s expansion, and nanotechnology [25,26,27,28]. The problem is formulated as follows:
$$y'' + Ce^y = 0, \qquad y(0) = y(1) = 0.$$
Using a finite-difference discretization, this boundary value problem is transformed into a nonlinear system of size $40 \times 40$ with step size $h = 1/41$. The central difference approach
$$y_k'' = \frac{y_{k-1} - 2y_k + y_{k+1}}{h^2}$$
has been used for the second derivative, which yields the following system of nonlinear equations:
$$y_{k-1} - 2y_k + y_{k+1} + h^2Ce^{y_k} = 0, \qquad k = 1, 2, \ldots, 40.$$
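The discretized Bratu residual can be coded in a few lines. The sketch below (Python/NumPy; the value of $C$ is left as a parameter since the text does not fix it at this point) uses the zero boundary values $y_0 = y_{41} = 0$.

```python
import numpy as np

def bratu_system(y, C=1.0, n=40):
    """Residual of the finite-difference Bratu system with y(0) = y(1) = 0."""
    h = 1.0 / (n + 1)                        # h = 1/41 for n = 40
    yp = np.concatenate(([0.0], y, [0.0]))   # append the boundary values
    return yp[:-2] - 2.0 * yp[1:-1] + yp[2:] + h ** 2 * C * np.exp(y)
```

At the zero vector every component of the residual equals $h^2 C$, which is a quick sanity check of the indexing.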
Table 1 presents the computational comparison for this problem. The convergence of the proposed and existing schemes toward the solution
ξ = ( 0.0556856839360764399 , 0.109484517218912365 , 0.1612922092823206959 , 0.2110028847969697865 , 0.2585096651583685153 , 0.3037053235694527046 , 0.3464830109827774699 , 0.3867370479445188487 , 0.4243637749768939967 , 0.4592624516247244838 , 0.4913361917742592867 , 0.5204929204449964016 , 0.5466463350885622102 , 0.5697168526396365918 , 0.5896325222867928076 , 0.6063298832873388347 , 0.6197547472372992549 , 0.6298628850887883449 , 0.6366206009024921671 , 0.6400051768044093745 , 0.6400051768044093745 , 0.6366206009024921671 , 0.6298628850887883449 , 0.6197547472372992549 , 0.6063298832873388347 , 0.5896325222867928076 , 0.5697168526396365918 , 0.5466463350885622102 , 0.5204929204449964016 , 0.4913361917742592867 , 0.4592624516247244838 , 0.4243637749768939967 , 0.3867370479445188487 , 0.3464830109827774699 , 0.3037053235694527046 , 0.2585096651583685153 , 0.2110028847969697865 , 0.1612922092823206959 , 0.1094845172189123365 ,
0.0556856839360764399 ) t r is listed in Table 1.
The error estimates of the proposed methods outperform the existing methods from the first iteration, as shown in Table 1. Furthermore, for all proposed schemes, the estimated order of convergence aligns perfectly with the theoretical one. The approximate solution obtained with method P M 1 is shown in Figure 1.
Example 2. 
Consider the well-known Hammerstein integral equation, as described in [3], given by $\zeta(p) = 1 + \theta\int_0^1 K(p, u)\,\zeta(u)^3\,du$, where $\zeta \in C[0,1]$, $p, u \in [0,1]$, $\theta = \frac{1}{5}$, and the kernel is
$$K(p, u) = \begin{cases} (1 - p)u, & u \leq p,\\ p(1 - u), & p \leq u. \end{cases}$$
This integral equation is formulated as a system of nonlinear equations by making use of the Gauss–Legendre quadrature formula
$$\int_0^1 g(u)\,du \approx \sum_{j=1}^{8} w_jg(u_j),$$
where $u_j$ and $w_j$ are the $x$-coordinates and the weights, respectively, at the eight nodes given in the table below. By approximating $\zeta(u_i)$ with $x_i$ for $i = 1, 2, \ldots, 8$, we obtain the following system of nonlinear equations:
$$5x_i - 5 - \sum_{j=1}^{8} a_{ij}x_j^3 = 0, \qquad i = 1, 2, \ldots, 8,$$
with
$$a_{ij} = \begin{cases} w_ju_j(1 - u_i), & j \leq i,\\ w_ju_i(1 - u_j), & i < j. \end{cases}$$
The x-coordinates and weights used in the Gauss–Legendre quadrature procedure are as follows:

Node no. | x-coordinates | weights
1 | 0.01985507175123188415821 | 0.05061426814518812957626
2 | 0.10166676129318663020422 | 0.11119051722668723527217
3 | 0.23723379504183550709113 | 0.15685332293894364366898
4 | 0.40828267875217509753026 | 0.18134189168918099148257
5 | 0.59171732124782490246973 | 0.18134189168918099148257
6 | 0.76276620495816449290886 | 0.15685332293894364366898
7 | 0.89833323870681336979577 | 0.11119051722668723527217
8 | 0.98014492824876811584178 | 0.05061426814518812957626
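With the nodes and weights above (or, equivalently, NumPy's Gauss–Legendre rule mapped from $[-1, 1]$ to $[0, 1]$), the discretized system can be assembled as follows; the function name and the simple fixed-point solve in the note below are our own illustration.

```python
import numpy as np

def hammerstein_system(x, nodes, weights):
    """Residual 5*x_i - 5 - sum_j a_ij * x_j**3 of the discretized equation."""
    n = len(nodes)
    a = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if j <= i:
                a[i, j] = weights[j] * nodes[j] * (1.0 - nodes[i])
            else:
                a[i, j] = weights[j] * nodes[i] * (1.0 - nodes[j])
    return 5.0 * x - 5.0 - a @ (x ** 3)
```

Because the kernel is small on $[0,1]$, even the naive fixed-point iteration $x \leftarrow x - F(x)/5$ contracts to a solution with components slightly above 1, consistent with the vector $\xi$ reported below.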
The convergence of the proposed and existing schemes approaching the solution
ξ = ( 1.00209624503115 , 1.00990031618748 , 1.01972696099317 , 1.02643574303062 , 1.02643574303062 , 1.01972696099317 ,
1.00990031618748 , 1.00209624503115 ) t r is listed in Table 2.
In Table 2, the error estimates of the proposed methods outperform those of the existing methods from the first iteration. GG, CR1, and CR2 provided poor results compared with the remaining methods.
Example 3. 
Broyden Tridiagonal Function: The Broyden tridiagonal function [29] is widely used in numerical mathematics and optimization as a benchmark for testing nonlinear solvers and optimization algorithms. Its importance stems from its tridiagonal structure, sparsity, and nonlinearity, which make it a manageable yet sufficiently complex system for evaluating computational methods. The nonlinear system formed by the Broyden tridiagonal function is defined as follows:
$$F_i(x) = (3 - 2x_i)x_i - x_{i-1} - 2x_{i+1} + 1 = 0, \qquad i = 1, 2, \ldots, 100,$$
with the usual convention $x_0 = x_{101} = 0$.
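A vectorized residual for this benchmark (with the boundary convention $x_0 = x_{n+1} = 0$; shown for a general size $n$):

```python
import numpy as np

def broyden_tridiagonal(x):
    """F_i(x) = (3 - 2 x_i) x_i - x_{i-1} - 2 x_{i+1} + 1, with zero padding."""
    xp = np.concatenate(([0.0], x, [0.0]))   # x_0 = x_{n+1} = 0
    xi = xp[1:-1]
    return (3.0 - 2.0 * xi) * xi - xp[:-2] - 2.0 * xp[2:] + 1.0
```

From the classical starting point $x = (-1, \ldots, -1)$ the residual is $-2$ in the first component, $-1$ in the interior, and $-3$ in the last, which is a convenient indexing check.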
The convergence of the proposed and existing schemes approaching the solution
ξ = ( 0.5707611929747512511 , 0.6819101288680880658 , 0.7024860206676488739 , 0.7062605758008976131 , 0.7069518542981373673 , 0.7070784178423320335 , 0.7071015885928448094 , 0.7071058305586259006 , 0.7071066071515205562 , 0.7071067493253026534 , 0.7071067753535900542 , 0.7071067801186861731 , 0.7071067809910501756 , 0.7071067811507571225 , 0.7071067811799952717 , 0.7071067811853480092 , 0.7071067811863279549 , 0.7071067811865073572 , 0.7071067811865402011 , 0.7071067811865462144 , 0.7071067811865473148 , 0.7071067811865475163 , 0.7071067811865475532 , 0.7071067811865475599 , 0.7071067811865475612 , 0.7071067811865475614 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475615 , 0.7071067811865475614 , 0.7071067811865475614 , 0.7071067811865475613 , 0.7071067811865475661 , 0.7071067811865475604 , 0.7071067811865475585 , 0.7071067811865475533 , 0.7071067811865475392 , 0.7071067811865475006 , 0.7071067811865473953 , 0.7071067811865471076 , 0.7071067811865463218 , 0.7071067811865441757 , 0.7071067811865383145 , 0.7071067811865223068 , 0.7071067811864785874 , 0.7071067811863591838 , 0.7071067811860330757 , 0.7071067811851424289 , 0.7071067811827099488 , 0.7071067811760665025 , 0.7071067811579223241 , 0.7071067811083680359 , 0.7071067809730283466 , 0.7071067806033967325 , 0.7071067795938811146 , 0.7071067768367528186 , 0.7071067693066499708 , 
0.7071067487408863958 , 0.7071066925729116003 , 0.7071065391703262674 , 0.7071061202064697042 , 0.7071049759579877528 , 0.7071018508582931619 , 0.7070933157956696782 , 0.7070700055072725268 , 0.7070063430511282925 , 0.7068324809375858107 , 0.7063577059891970941 , 0.7050615273253236235 , 0.7015251953077047754 , 0.6918946289504083648 , 0.6657975233421828793 , 0.5960353126266538993 ,
0.4164123011668417431 ) t r is listed in Table 3.
It is observed in Table 3 that, for all proposed schemes, the computational order of convergence agrees closely with the theoretical one, while all existing methods exhibit a computational order of convergence that differs from their theoretical convergence order.
Example 4. 
Consider the Frank-Kamenetskii problem [30], which is governed by the differential equation:
$$xy'' + y' + xe^y = 0, \qquad y'(0) = y(1) = 0.$$
To transform this boundary value problem into a system of nonlinear equations with 50 unknown variables, we apply a finite-difference discretization using a step size of $\tilde{h} = \frac{1}{51}$. Furthermore, the second derivative is approximated using the central difference scheme as follows:
$$y_j'' = \frac{y_{j-1} - 2y_j + y_{j+1}}{\tilde{h}^2}, \qquad j = 1, 2, \ldots, 50.$$
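A sketch of the resulting residual is given below (Python/NumPy). The handling of the boundary condition $y'(0) = 0$ via a ghost value $y_0 = y_1$, and the central difference for $y'$, are our assumptions about the discretization, since the text does not spell them out.

```python
import numpy as np

def fk_system(y, n=50):
    """Residual of x y'' + y' + x e^y = 0 with y'(0) = 0 and y(1) = 0."""
    h = 1.0 / (n + 1)                         # h = 1/51 for n = 50
    xj = h * np.arange(1, n + 1)              # interior grid points
    yp = np.concatenate(([y[0]], y, [0.0]))   # ghost y_0 = y_1, and y_{n+1} = 0
    ypp = (yp[:-2] - 2.0 * yp[1:-1] + yp[2:]) / h ** 2
    yd = (yp[2:] - yp[:-2]) / (2.0 * h)
    return xj * ypp + yd + xj * np.exp(y)
```

At the zero vector the residual reduces to the grid values $x_j$ themselves, since only the source term $x e^y$ survives.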
The solution ξ = ( 0.4742933 , 0.4738815 , 0.4730579 , 0.4718232 , 0.4701778 ,
0.4681227 , 0.4656589 , 0.4627878 , 0.4595107 , 0.4558294 , 0.4517458 ,
0.447262 , 0.4423803 , 0.437103 , 0.4314332 , 0.425373 , 0.418926 , 0.4120966 ,
0.4048861 , 0.397299 , 0.389338 , 0.381004 , 0.372319 , 0.3632593 , 0.3538470 ,
0.3440823 , 0.333969 , 0.323513 , 0.312719 , 0.301591 , 0.2901340 , 0.2783530 ,
0.266253 , 0.253839 , 0.241118 , 0.228092 , 0.2147698 , 0.2011542 , 0.1872513 ,
0.1730667 , 0.158605 , 0.143873 , 0.128876 , 0.113619 , 0.09810770 , 0.0823470 ,
0.06634287 , 0.05010065 , 0.03362582 , 0.01692381 ) t r of this problem is examined and demonstrated in Table 4.
In this case, the best results were achieved by the proposed methods. Furthermore, for all the proposed schemes, the estimated order of convergence aligns perfectly with the theoretical one.
Example 5.
Broyden Banded Function: The Broyden Banded Function [29] is another benchmark problem often used in the study of numerical methods for solving nonlinear systems of equations. Unlike the Broyden Tridiagonal Function, which is tridiagonal, this function has a more general banded structure, meaning that each equation depends not just on immediate neighbors, but on a subset of variables within a certain band. The Broyden Banded Function for a system of n equations with variables x = ( x 1 , x 2 , , x n ) t r is given by the following:
$$F_i(x) = (2 + 5x_i^2)x_i + 1 - \sum_{j \in S(i)} x_j(1 + 2x_j) = 0, \qquad i = 1, 2, \ldots, 50,$$
where $S(i) = \{j : j \neq i,\ \max(1, i - p) \leq j \leq \min(n, i + p)\}$ and $p$ is the bandwidth.
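A direct implementation of the residual as displayed above. The bandwidth $p$ is not fixed in the text, and the classical benchmark uses $x_j(1 + x_j)$ in the sum, so both choices are exposed as parameters in this sketch (coef = 2 follows the formula displayed here, coef = 1 the classical variant).

```python
import numpy as np

def broyden_banded(x, p=5, coef=2.0):
    """Residual F_i = (2 + 5 x_i^2) x_i + 1 - sum_{j in S(i)} x_j (1 + coef*x_j).

    S(i) collects the indices within distance p of i, excluding i itself.
    """
    n = x.size
    F = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - p), min(n - 1, i + p)
        s = sum(x[j] * (1.0 + coef * x[j]) for j in range(lo, hi + 1) if j != i)
        F[i] = (2.0 + 5.0 * x[i] ** 2) * x[i] + 1.0 - s
    return F
```

At the zero vector every component equals 1, which checks the constant term and the exclusion of $j = i$ from the band sum.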
The convergence of the proposed and existing schemes approaching the solution
ξ = ( 0.4283028635872503067 , 0.4765964243562935888 , 0.5196524636464013979 , 0.5580993248561520037 , 0.5925061559650828611 , 0.6245037074105165235 , 0.6232386691324512479 , 0.6214196767136478016 , 0.6196158428334761759 , 0.6182260179198573987 , 0.6175180248414952073 , 0.6177318303186657405 , 0.6179003162526636765 , 0.6180077985633592471 , 0.6180570610194790544 , 0.6180627237744715955 , 0.6180464123676292322 , 0.6180369432559549751 , 0.6180327968239003001 , 0.6180320109076161335 , 0.6180327484374204368 , 0.6180336522098025557 , 0.6180340391955201848 , 0.6180341290746869223 , 0.6180340902897343633 , 0.6180340279672514792 , 0.6180339892049930822 , 0.6180339788606287306 , 0.6180339805677721225 , 0.6180339850062839112 , 0.6180339881846647394 , 0.6180339893995617135 , 0.6180339894032136724 , 0.6180339890890702474 , 0.6180339888323726205 , 0.6180339887160342564 , 0.6180339886991714529 , 0.6180339887199347385 , 0.6180339887404800992 , 0.6180339887510495647 , 0.6180339887535674478 , 0.6180339887524483498 , 0.6180339887515573876 , 0.6180339887274529494 , 0.6180339894850733806 , 0.6180339646916006426 , 0.6180347757588284862 , 0.6180082405037704871 ,
0.6188732808161922001 , 0.5862791221249134994 ) t r is listed in Table 5.
In this example, we observe again that the proposed methods achieve the best error estimates from the very first iterations.
Example 6.
Application on a Model of Nutrient Diffusion in a Biological Substrate [31].
Nutrient diffusion plays a crucial role in biological systems, affecting cellular metabolism, tissue engineering, and microbial growth. Mathematical models of nutrient diffusion help predict nutrient availability, optimize bioreactors, and enhance medical treatments such as drug delivery. Here, we explore the application of a diffusion model in a biological substrate, analyzing factors influencing diffusion rates and their implications.
Consider a two-dimensional biological substrate that acts as a growth medium or cultivation area for microorganisms. Our goal is to study the diffusion and distribution of nutrients within this medium, as they play a crucial role in the development and viability of the organisms present. The corresponding mathematical formulation is given below:
$$\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = u(x, y)^3 + |u(x, y)|, \qquad \text{for } \Omega = \{(x, y) \in \mathbb{R}^2 : x, y \in [0, 1]\},$$
$$u(x, 0) = 2x^2 - x + 1, \quad u(x, 1) = 2, \qquad 0 \leq x \leq 1,$$
$$u(0, y) = 2y^2 - y + 1, \quad u(1, y) = 2, \qquad 0 \leq y \leq 1.$$
Here, $u(x, y)$ denotes the nutrient concentration at any location $(x, y)$ in the substrate. The equation models nutrient diffusion, with the term $u(x, y)^3 + |u(x, y)|$ representing interactions between nutrient concentration and biochemical processes occurring within the medium.
The boundary conditions describe how nutrients behave at the substrate’s edges. At the lateral boundaries ( x = 0 and x = 1 ), they may indicate initial concentrations or continuous nutrient supply. At the upper and lower boundaries ( y = 0 and y = 1 ), they can correspond to nutrient absorption by plant roots or interactions with microorganisms present on the surface.
Solving this problem provides insights into nutrient distribution within the substrate and its influence on organism growth and health. This knowledge can contribute to optimizing agricultural methods and improving biological productivity.
As an example, the authors solve this equation for a small system using a block-wise finite difference approach. The discretization involves creating a mesh:
$$h = \frac{1}{n + 1}, \qquad k = \frac{1}{m + 1},$$
with mesh points $(x_i, y_j)$ defined as $x_i = ih$, $i = 0, \ldots, n + 1$, and $y_j = jk$, $j = 0, \ldots, m + 1$. The discrete form of the equation is given by the following:
$$u_{xx}(x_i, y_j) + u_{yy}(x_i, y_j) = u(x_i, y_j)^3 + |u(x_i, y_j)|.$$
This discretized representation allows us to numerically solve for u ( x , y ) using computational methods. Therefore, by approximating the partial derivatives using central divided differences, we have the following:
$$\frac{u(x_{i+1}, y_j) - 2u(x_i, y_j) + u(x_{i-1}, y_j)}{h^2} + \frac{u(x_i, y_{j+1}) - 2u(x_i, y_j) + u(x_i, y_{j-1})}{k^2} = u(x_i, y_j)^3 + |u(x_i, y_j)|,$$
for $i = 1, \ldots, n$ and $j = 1, \ldots, m$.
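The central-difference stencil above is second-order accurate, so halving the step should reduce the truncation error by roughly a factor of four. This can be checked numerically with a quick sketch (the helper name `second_derivative_cd` and the test function sin are our illustrative choices, not from the paper):

```python
import math

def second_derivative_cd(u, x, h):
    # central-difference approximation of u''(x); truncation error O(h^2)
    return (u(x + h) - 2 * u(x) + u(x - h)) / h**2

# error against the exact value u''(1) = -sin(1) for u = sin
err = lambda h: abs(second_derivative_cd(math.sin, 1.0, h) + math.sin(1.0))
```

Evaluating `err(1e-2) / err(5e-3)` gives a ratio close to 4, consistent with the $O(h^2)$ order.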
Now, we denote $u(x_i, y_j) = u_{i,j}$. Simplifying the notation, we obtain the following:
$$\frac{2(\Lambda^2 + 1)}{\Lambda^2}\, u_{i,j} - \frac{1}{\Lambda^2}\,(u_{i,j+1} + u_{i,j-1}) - (u_{i+1,j} + u_{i-1,j}) + h^2\left(u_{i,j}^3 + |u_{i,j}|\right) = 0,$$
with $\Lambda = k/h$, $i = 1, \ldots, n$, and $j = 1, \ldots, m$; here, $m = n = 12$.
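The discretized equations form a nonlinear algebraic system in the interior unknowns $u_{i,j}$. As an illustrative baseline, the following sketch assembles this system on a small uniform grid (so $\Lambda = 1$) and solves it with a plain Newton iteration rather than the proposed $PM_i$ schemes; the function names, the grid size, and the boundary profile $2t^2 - t + 1$ are our reading of the problem statement:

```python
import numpy as np

def g(t):
    # assumed boundary profile 2t^2 - t + 1 (it matches the corner
    # values of the stated boundary conditions)
    return 2 * t**2 - t + 1

def solve_nutrient_problem(n=8, tol=1e-10, max_iter=30):
    """Plain Newton iteration for the discretized nutrient problem on a
    uniform (n+2) x (n+2) grid with h = k = 1/(n+1), i.e. Lambda = 1:
        4*u_ij - (u_{i,j+1} + u_{i,j-1}) - (u_{i+1,j} + u_{i-1,j})
            + h^2 * (u_ij**3 + |u_ij|) = 0.
    """
    h = 1.0 / (n + 1)
    t = np.linspace(0.0, 1.0, n + 2)
    U = np.ones((n + 2, n + 2))      # initial guess u = 1 in the interior
    U[:, 0] = g(t)                   # u(x, 0) = 2x^2 - x + 1
    U[0, :] = g(t)                   # u(0, y) = 2y^2 - y + 1
    U[:, -1] = 2.0                   # u(x, 1) = 2
    U[-1, :] = 2.0                   # u(1, y) = 2

    N = n * n
    idx = lambda i, j: (i - 1) * n + (j - 1)   # interior point -> unknown index

    for _ in range(max_iter):
        F = np.zeros(N)
        J = np.zeros((N, N))
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                p, u = idx(i, j), U[i, j]
                F[p] = (4 * u - U[i, j + 1] - U[i, j - 1]
                        - U[i + 1, j] - U[i - 1, j]
                        + h**2 * (u**3 + abs(u)))
                # Jacobian: diagonal 4 + h^2*(3u^2 + sign(u)), -1 for
                # each neighbor that is an interior unknown
                J[p, p] = 4 + h**2 * (3 * u**2 + np.sign(u))
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ii, jj = i + di, j + dj
                    if 1 <= ii <= n and 1 <= jj <= n:
                        J[p, idx(ii, jj)] = -1.0
        delta = np.linalg.solve(J, -F)
        U[1:-1, 1:-1] += delta.reshape(n, n)
        if np.linalg.norm(delta, np.inf) < tol:
            break
    return U, np.linalg.norm(F, np.inf)
```

With $n = m = 12$ and multiple-precision arithmetic, this would correspond to the setting of Table 6; double precision is used here only to illustrate the assembly of the system.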
The convergence of the proposed and existing schemes toward the solution
ξ = ( 0.9159579141800242144 , 0.9072089786307704952 , 0.9128728847479804976 , 0.9369806949845996747 , 0.9817387686770479002 , 1.0485392499801558597 , 1.1384294588808200581 , 1.2524247111387106228 , 1.3918552265712354804 , 1.5589639444123838739 , 1.7582456241777705775 , 0.9072089786307704952 , 0.9226014175996360779 , 0.9439241122376833725 , 0.9766415904120995219 , 1.0242681117073024105 , 1.0892758524850682645 , 1.1736834484673403999 , 1.2795317813663094181 , 1.4094228204235085911 , 1.567337016707685883 , 1.7600861551188621331 , 0.9128728847479804976 , 0.9439241122376833725 , 0.9759760675354321177 , 1.0146447653774112913 , 1.063991607220679141 , 1.1271523601179183468 , 1.2068749715483895544 , 1.3060293623445218922 , 1.4281978317519255005 , 1.5784971707380888588 , 1.7648499158163805706 , 0.9369806949845996747 , 0.9766415904120995219 , 1.0146447653774112913 , 1.0562699741873993097 , 1.1056547558964022744 , 1.1662390118051703965 , 1.2412232244889596071 , 1.3340527273929771553 , 1.4489903276505684513 , 1.5918786748132008886 , 1.7712456396452484821 , 0.981738768677047902 , 1.0242681117073024105 , 1.0639916072206791141 , 1.105654755896402744 , 1.1531829249716005428 , 1.2100399869774647234 , 1.2796254600037224161 , 1.3657198140917823271 , 1.4730213214555072145 , 1.6078497761505851621 , 1.7791442059653217752 , 1.0485392499801558597 , 1.0892758524850682645 , 1.1271523601179183468 , 1.1662390118051703965 , 1.2100399869774647234 , 1.2618193332653576622 , 1.3249558773708321743 , 1.4033536890986037907 , 1.5019501660249639085 , 1.6273856823114507777 , 1.7889450582238398736 , 1.1384294588808200581 , 1.1736834484673403999 , 1.206874971548389544 , 1.2412232244889596071 , 1.2796254600037224161 , 1.3249558773708321743 , 1.3803786842110392967 , 1.4497272368032708017 , 1.5379992136528640005 , 1.6520291742902751577 , 1.8014319391776102766 , 1.2524247111387106228 , 1.2795317813663094181 , 1.3060293623445218922 , 1.3340527273929771553 , 1.3657198140917823271 , 1.4033536890986037907 , 
1.4497272368032708017 , 1.5084039721016773017 , 1.5842350448920806677 , 1.6840828331686575708 , 1.8178602010251390389 , 1.3918552265712354804 , 1.4094228204235085911 , 1.4281978317519255005 , 1.4489903276505684513 , 1.4730213214555072145 , 1.5019501660249639085 , 1.5379992136528640005 , 1.584235044892080667 , 1.6450676969911291586 , 1.7270705866382999624 , 1.8402676260652124239 , 1.5589639444123838739 , 1.5673370167076855883 , 1.5784971707380888588 , 1.5918786748132088866 , 1.6078497761505851621 , 1.6273856823114507777 , 1.6520291742902751577 , 1.6840828331686575708 , 1.7270705866382999624 , 1.7866317582138956501 , 1.8721986766381496795 , 1.758245624177770575 , 1.7600861551188621331 , 1.7648499158163805706 , 1.7712456396452484821 , 1.7791442059653217752 , 1.7889450582238398736 , 1.8014319391776102766 , 1.8178602010251390389 , 1.8402676260652124239 , 1.8721986766381496795 ,
$1.9204682004958430305)^{tr}$ is listed in Table 6.
In this case, the best results were obtained with the proposed methods, whereas GG, CR1, and CR2 performed poorly compared to the remaining methods.
Remark 2.
The graphical error analysis for Examples 1–6 is illustrated in Figure 2. It is evident from all the subfigures in Figure 2 that our method reduces the error faster than existing techniques. Finally, Table 8 shows the CPU times used by each method for each example. It can be seen that the proposed methods provide the best computation times.
A schematic of the algorithm for the family of proposed methods is shown in Appendix A.

5. Discussion

The numerical results demonstrate that our one-parameter family of iterative methods achieves better convergence behavior and accuracy than comparable schemes. However, a limitation of the proposed approach is that it is not globally convergent: like other Newton-type schemes, it requires an initial guess sufficiently close to the solution.

6. Conclusions

In this paper, we propose a family of iterative methods derived from Newton’s scheme, incorporating a real parameter α. The theoretical aspects of convergence and stability have been thoroughly analyzed and subsequently validated through numerical experiments in the final part of the study.
Both negative and positive values of the parameter α are considered in order to compare the performance of the methods numerically with the existing ones. These methods are applied to approximate the solutions of nonlinear systems. Notably, for both negative and positive values of α, the proposed schemes compare favorably with the existing methods. Additionally, a computational approximation of the order of convergence, consistent with the theoretical fourth order, is observed.
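The computational order of convergence ρ reported in the tables is typically estimated from three consecutive step norms via an ACOC-type formula, $\rho \approx \ln(e_2/e_1)/\ln(e_1/e_0)$ with $e_k = \|x^{(k+1)} - x^{(k)}\|$. A minimal sketch of this standard estimate (the function name is ours):

```python
import math

def acoc(steps):
    # approximate computational order of convergence from three
    # consecutive step norms e_k = ||x_{k+1} - x_k||
    e0, e1, e2 = steps
    return math.log(e2 / e1) / math.log(e1 / e0)
```

For a sequence shrinking exactly as $e_{k+1} = e_k^4$, e.g. $10^{-1}, 10^{-4}, 10^{-16}$, the estimate returns 4.0, matching the theoretical order.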

Author Contributions

Conceptualization, S.B., G.S., H.R. and R.B.; methodology, G.S., H.R. and R.B.; software, S.B., G.S., H.R. and R.B.; validation, S.B., G.S., H.R. and R.B.; formal analysis, H.R. and R.B.; investigation, S.B., H.R. and R.B.; resources, H.R. and R.B.; data curation, S.B. and G.S.; writing—original draft preparation, S.B., G.S., H.R. and R.B.; writing—review and editing, S.B., G.S., H.R., R.B. and H.A.; visualization, S.B., G.S., H.R., R.B. and H.A.; funding acquisition, H.R.; supervision, S.B., R.B. and H.R. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Algorithm A1 for the proposed numerical schemes $PM_i$
  •    Step 1: Take an initial guess $x^{(0)}$, set $k = 0$, and choose a tolerance $\epsilon > 0$.
  •    Step 2: Compute $y^{(k)} = x^{(k)} - F'(x^{(k)})^{-1} F(x^{(k)})$.
  •    Step 3: Update $x^{(k+1)} = x^{(k)} - \big[\alpha\,[x^{(k)}, y^{(k)}; F]^{-1} + (I - \alpha)(I_k - 2\eta)^{-1}(I_k - \eta)\,F'(x^{(k)})^{-1}\big] F(x^{(k)})$.
  •    Step 4: If $\|x^{(k+1)} - x^{(k)}\| \le 10^{-500}$ or $\|F(x^{(k+1)})\| \le 10^{-500}$, then stop. Otherwise, set $k = k + 1$ and go to Step 2.
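The update step of Algorithm A1 involves the first-order divided difference operator $[x^{(k)}, y^{(k)}; F]$. A commonly used componentwise definition, which satisfies the secant identity $[x, y; F](x - y) = F(x) - F(y)$, can be sketched as follows (the function name is ours, and the paper may use a different realization of the operator):

```python
import numpy as np

def divided_difference(F, x, y):
    """Componentwise first-order divided difference operator [x, y; F];
    column j mixes the leading entries of y with the trailing entries
    of x so that the columns telescope to F(x) - F(y)."""
    n = len(x)
    M = np.zeros((n, n))
    for j in range(n):
        a = np.concatenate([y[:j], x[j:]])          # (y_1..y_{j-1}, x_j..x_n)
        b = np.concatenate([y[:j + 1], x[j + 1:]])  # (y_1..y_j, x_{j+1}..x_n)
        M[:, j] = (F(a) - F(b)) / (x[j] - y[j])
    return M
```

In a scheme of the proposed type, this matrix plays the role of an inexpensive Jacobian surrogate built from the two iterates $x^{(k)}$ and $y^{(k)}$, and the secant identity above gives a direct numerical check of the implementation.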

References

  1. Regmi, S. Optimized Iterative Methods with Applications in Diverse Disciplines; Nova Science Publishers: New York, NY, USA, 2020. [Google Scholar]
  2. Tsoulos, I.G.; Stavrakoudis, A. On locating all roots of systems of nonlinear equations inside a bounded domain using global optimization methods. Nonlinear Anal. Real World Appl. 2010, 11, 2465–2471. [Google Scholar] [CrossRef]
  3. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  4. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  5. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algor. 2010, 55, 87–99. [Google Scholar] [CrossRef]
  6. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Efficient high-order methods based on the golden ratio for nonlinear systems. Appl. Math. Comput. 2011, 217, 4548–4556. [Google Scholar] [CrossRef]
  7. Babajee, D.K.; Cordero, A.; Soleymani, F.; Torregrosa, J.R. On a novel fourth-order algorithm for solving systems of nonlinear equations. J. Appl. Math. 2012, 2012, 165452. [Google Scholar] [CrossRef]
  8. Samanskii, V. On a modification of the Newton method. Ukr. Math. J. 1967, 19, 133–138. [Google Scholar]
  9. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208. [Google Scholar] [CrossRef]
  10. Frontini, M.; Sormani, E. Third-order methods from quadrature formulae for solving systems of nonlinear equations. Appl. Math. Comput. 2004, 149, 771–782. [Google Scholar] [CrossRef]
  11. Grau-Sanchez, M.; Grau, Á.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266. [Google Scholar] [CrossRef]
  12. Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariable case. J. Comput. Appl. Math. 2004, 169, 161–169. [Google Scholar] [CrossRef]
  13. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  14. Darvishi, M.T.; Barati, A. Super cubic iterative methods to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 1678–1685. [Google Scholar] [CrossRef]
  15. Darvishi, M.T.; Barati, A. A third-order Newton-type method to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 187, 630–635. [Google Scholar] [CrossRef]
  16. Potra, F.A.; Ptak, V. Nondiscrete Induction and Iterative Processes; Pitman Publishing: Boston, MA, USA, 1984. [Google Scholar]
  17. Babajee, D.K.R.; Madhu, K.; Jayaraman, J. On some improved harmonic mean Newton-like methods for solving systems of nonlinear equations. Algorithms 2015, 8, 895–909. [Google Scholar] [CrossRef]
  18. Cordero, A.; Torregrosa, J.R. Iterative methods of order four and five for systems of nonlinear equations. J. Comput. Appl. Math. 2009, 231, 541–551. [Google Scholar] [CrossRef]
  19. Cordero, A.; Leonardo-Sepúlveda, M.A.; Torregrosa, J.R.; Vassileva, M.P. Enhancing the convergence order from p to p + 3 in iterative methods for solving nonlinear systems of equations without the use of Jacobian matrices. Mathematics 2023, 11, 4238. [Google Scholar] [CrossRef]
  20. Cordero, A.; Rojas-Hiciano, R.V.; Torregrosa, J.R.; Vassileva, M.P. A highly efficient class of optimal fourth-order methods for solving nonlinear systems. Numer. Algor. 2024, 95, 1879–1904. [Google Scholar] [CrossRef]
  21. Cordero, A.; Rojas-Hiciano, R.V.; Torregrosa, J.R.; Vassileva, M.P. Maximally efficient damped composed Newton-type methods to solve nonlinear systems of equations. Appl. Math. Comput. 2025, 492, 129231. [Google Scholar] [CrossRef]
  22. Sharma, J.R.; Arora, H. Efficient Jarratt-like methods for solving systems of nonlinear equations. Calcolo 2014, 51, 193–210. [Google Scholar] [CrossRef]
  23. Wolfram, S. The Mathematica Book; Wolfram Media: Champaign, IL, USA, 2003. [Google Scholar]
  24. Alaidarous, E.S.; Ullah, M.Z.; Ahmad, F.; Al-Fhaid, A.S. An efficient higher-order quasilinearization method for solving nonlinear BVPs. J. Appl. Math. 2013, 2013, 259371. [Google Scholar] [CrossRef]
  25. Gelfand, I.M. Some problems in the theory of quasi-linear equations. Trans. Am. Math. Soc. 1963, 2, 295–381. [Google Scholar]
  26. Wan, Y.Q.; Guo, Q.; Pan, N. Thermo-electro-hydrodynamic model for the electrospinning process. Int. J. Nonlinear Sci. Numer. Simul. 2004, 5, 5–8. [Google Scholar]
  27. Jacobsen, J.; Schmitt, K. The Liouville-Bratu-Gelfand problem for radial operators. J. Differ. Equ. 2002, 184, 283–298. [Google Scholar] [CrossRef]
  28. Jalilian, R. Non-polynomial spline method for solving Bratu’s problem. Comput. Phys. Commun. 2010, 181, 1868–1872. [Google Scholar] [CrossRef]
  29. Pollock, S.; Schwartz, H. Benchmarking results for the Newton–Anderson method. Results Appl. Math. 2020, 8, 100095. [Google Scholar] [CrossRef]
  30. Frank-Kamenetskii, D.A. Diffusion and Heat Transfer in Chemical Kinetics; Plenum Press: New York, NY, USA, 1969. [Google Scholar]
  31. Villalba, E.G.; Hernandez, M.; Hueso, J.L.; Martínez, E. Using decomposition of the nonlinear operator for solving non-differentiable problems. Math. Method Appl. Sci. 2022, 48, 7987–8006. [Google Scholar]
Figure 1. Approximate solution for Bratu Problem obtained with method P M 1 .
Figure 2. Graphical error analysis.
Table 1. Convergence behavior of Example 1 with $x^{(0)} = (\sin \pi h, \sin 2\pi h, \ldots, \sin 40\pi h)^{tr}$.
Methods   k   ‖F(x^{(k)})‖   ‖x^{(k+1)} − x^{(k)}‖   ρ
1 1.0 ( 4 ) 3.5 ( 2 )
P M 1 2 7.5 ( 12 ) 2.6 ( 9 ) 3.9986
3 2.1 ( 40 ) 7.3 ( 38 )
1 1.3 ( 4 ) 4.3 ( 2 )
P M 2 2 1.9 ( 11 ) 6.6 ( 9 ) 3.9982
3 1.1 ( 38 ) 3.7 ( 36 )
1 1.5 ( 4 ) 5.3 ( 2 )
P M 3 2 4.8 ( 11 ) 1.7 ( 8 ) 3.9975
3 4.9 ( 37 ) 1.7 ( 34 )
1 2.6 ( 3 ) 1.4 ( 1 )
C L 1 2 3.0 ( 8 ) 1.6 ( 6 ) 4.5698
3 6.8 ( 28 ) 3.7 ( 26 )
1 1.0 ( 2 ) 5.3 ( 1 )
C L 2 2 6.3 ( 6 ) 3.5 ( 4 ) 4.6978
3 1.9 ( 18 ) 1.1 ( 16 )
1 1.3 ( 3 ) 6.7 ( 2 )
G G 2 2.0 ( 8 ) 1.1 ( 6 ) 3.3934
3 8.2 ( 23 ) 4.4 ( 21 )
1 9.0 ( 2 ) 3.7
C R 1 2 1.0 ( 3 ) 8.2 ( 3 ) 2.2443
3 2.4 ( 10 ) 1.3 ( 8 )
1 1.0 ( 2 ) 6.0 ( 1 )
C R 2 2 4.1 ( 7 ) 1.6 ( 5 ) 2.8531
3 1.1 ( 18 ) 5.9 ( 17 )
1 1.0 ( 2 ) 5.3 ( 1 )
S A 2 6.3 ( 6 ) 3.5 ( 4 ) 4.9097
3 1.9 ( 18 ) 1.1 ( 16 )
Table 2. Convergence behavior of Example 2 with initial guess $x^{(0)} = (0.5, 0.5, \ldots, 0.5)^{tr}$.
Methods   k   ‖F(x^{(k)})‖   ‖x^{(k+1)} − x^{(k)}‖   ρ
1 7.6 ( 4 ) 1.6 ( 4 )
P M 1 2 7.3 ( 19 ) 1.6 ( 19 ) 3.9981
3 6.4 ( 79 ) 1.4 ( 79 )
1 7.8 ( 4 ) 1.7 ( 4 )
P M 2 2 8.1 ( 19 ) 1.7 ( 19 ) 3.9981
3 1.0 ( 78 ) 2.1 ( 79 )
1 7.9 ( 4 ) 1.7 ( 4 )
P M 3 2 9.0 ( 19 ) 1.9 ( 19 ) 3.9982
3 1.6 ( 78 ) 3.4 ( 79 )
1 2.6 ( 3 ) 2.0 ( 4 )
C L 1 2 6.4 ( 18 ) 4.8 ( 19 ) 4.0880
3 2.5 ( 76 ) 1.9 ( 77 )
1 2.8 ( 3 ) 2.2 ( 4 )
C L 2 2 1.1 ( 17 ) 8.4 ( 19 ) 4.0851
3 2.7 ( 75 ) 2.0 ( 76 )
1 4.3 ( 2 ) 3.2 ( 3 )
G G 2 1.2 ( 9 ) 9.0 ( 11 ) 3.1016
3 2.8 ( 32 ) 2.1 ( 33 )
1 9.3 ( 3 ) 6.9 ( 4 )
C R 1 2 4.9 ( 12 ) 3.6 ( 13 ) 3.2769
3 1.0 ( 40 ) 7.5 ( 42 )
1 7.1 ( 3 ) 5.4 ( 4 )
C R 2 2 2.3 ( 14 ) 1.7 ( 15 ) 2.9927
3 9.6 ( 48 ) 2.5 ( 49 )
1 2.8 ( 3 ) 2.2 ( 4 )
S A 2 1.1 ( 17 ) 8.3 ( 19 ) 4.0595
3 2.6 ( 75 ) 1.9 ( 76 )
Table 3. Convergence behavior of Example 3 with initial guess $x^{(0)} = (2, 2, \ldots, 2)^{tr}$.
Methods   k   ‖F(x^{(k)})‖   ‖x^{(k+1)} − x^{(k)}‖   ρ
1 1.9 6.4 ( 1 )
P M 1 2 9.9 ( 5 ) 3.5 ( 5 ) 3.8700
3 3.4 ( 21 ) 1.1 ( 21 )
1 2.2 7.4 ( 1 )
P M 2 2 2.2 ( 4 ) 7.7 ( 5 ) 3.8689
3 8.9 ( 20 ) 3.0 ( 20 )
1 2.5 8.2 ( 1 )
P M 3 2 4.1 ( 4 ) 1.4 ( 4 ) 3.8666
3 1.3 ( 18 ) 4.3 ( 19 )
1 40.0 1.3
C L 1 2 5.6 ( 2 ) 2.0 ( 3 ) 5.4623
3 9.7 ( 13 ) 3.3 ( 14 )
1 45.0 1.4
C L 2 2 1.1 ( 1 ) 4.0 ( 3 ) 5.6695
3 2.1 ( 11 ) 7.2 ( 13 )
1 23.0 7.8
G G 2 3.1 ( 3 ) 1.1 ( 4 ) 4.9381
3 4.2 ( 18 ) 1.4 ( 19 )
1 29.0 1.1
C R 1 2 1.8 ( 1 ) 6.3 ( 3 ) 5.0575
3 9.5 ( 9 ) 2.7 ( 10 )
1 17.0 5.7 ( 1 )
C R 2 2 5.3 ( 3 ) 1.7 ( 4 ) 3.0017
3 3.0 ( 12 ) 6.5 ( 14 )
1 45.0 1.4
S A 2 1.1 ( 1 ) 4.0 ( 3 ) 5.6710
3 2.1 ( 11 ) 7.4 ( 13 )
Table 4. Convergence behavior of Example 4 with initial guess $x^{(0)} = (0.1, 0.1, \ldots, 0.1)^{tr}$.
Methods   k   ‖F(x^{(k)})‖   ‖x^{(k+1)} − x^{(k)}‖   ρ
1 6.0 ( 7 ) 1.1 ( 3 )
P M 1 2 1.4 ( 19 ) 2.6 ( 16 ) 4.0000
3 4.5 ( 70 ) 8.2 ( 67 )
1 6.5 ( 7 ) 1.2 ( 3 )
P M 2 2 2.1 ( 19 ) 3.8 ( 16 ) 4.0000
3 2.4 ( 69 ) 4.5 ( 66 )
1 7.0 ( 7 ) 1.3 ( 3 )
P M 3 2 3.1 ( 19 ) 5.6 ( 16 ) 4.0000
3 1.3 ( 68 ) 2.3 ( 65 )
1 8.2 ( 6 ) 2.0 ( 3 )
C L 1 2 3.0 ( 17 ) 7.6 ( 15 ) 4.2409
3 6.1 ( 63 ) 1.5 ( 60 )
1 1.0 ( 5 ) 2.5 ( 3 )
C L 2 2 9.8 ( 17 ) 2.4 ( 14 ) 4.2507
3 8.6 ( 61 ) 2.2 ( 58 )
1 2.0 ( 5 ) 5.0 ( 3 )
G G 2 6.5 ( 13 ) 1.7 ( 10 ) 3.2565
3 2.6 ( 35 ) 7.0 ( 33 )
1 5.4 ( 5 ) 1.3 ( 2 )
C R 1 2 2.0 ( 12 ) 3.1 ( 10 ) 3.0568
3 4.6 ( 34 ) 7.7 ( 32 )
1 5.4 ( 5 ) 1.3 ( 2 )
C R 2 2 6.8 ( 13 ) 1.7 ( 10 ) 3.1970
3 3.3 ( 36 ) 8.6 ( 34 )
1 1.1 ( 5 ) 2.6 ( 3 )
S A 2 1.2 ( 16 ) 2.9 ( 14 ) 4.2522
3 1.8 ( 60 ) 4.5 ( 58 )
Table 5. Convergence behavior of Example 5 with initial guess $x^{(0)} = (0.5, 0.5, \ldots, 0.5)^{tr}$.
Methods   k   ‖F(x^{(k)})‖   ‖x^{(k+1)} − x^{(k)}‖   ρ
1 4.8 ( 2 ) 5.2 ( 3 )
P M 1 2 5.3 ( 11 ) 5.8 ( 12 ) 3.9726
3 9.3 ( 47 ) 1.0 ( 47 )
1 6.9 ( 2 ) 7.6 ( 3 )
P M 2 2 3.4 ( 10 ) 3.7 ( 11 ) 3.9918
3 2.2 ( 43 ) 2.4 ( 44 )
1 9.5 ( 2 ) 1.0 ( 2 )
P M 3 2 1.6 ( 9 ) 1.7 ( 10 ) 3.9909
3 1.3 ( 40 ) 1.5 ( 41 )
1 2.7 ( 1 ) 4.2 ( 2 )
C L 1 2 9.5 ( 6 ) 1.5 ( 7 ) 4.5140
3 1.8 ( 27 ) 2.8 ( 29 )
1 8.5 1.3 ( 1 )
C L 2 2 1.2 ( 3 ) 1.8 ( 5 ) 4.7525
3 6.4 ( 19 ) 1.0 ( 20 )
1 3.8 ( 1 ) 5.9 ( 3 )
G G 2 8.7 ( 8 ) 1.4 ( 9 ) 3.2763
3 1.1 ( 27 ) 1.8 ( 29 )
1 38.0 5.3 ( 1 )
C R 1 2 2.0 ( 1 ) 3.3 ( 3 ) 4.4097
3 2.1 ( 7 ) 3.4 ( 9 )
1 5.5 8.8 ( 2 )
C R 2 2 3.8 ( 4 ) 6.0 ( 6 ) 3.3184
3 2.5 ( 15 ) 4.1 ( 17 )
1 8.5 1.3 ( 1 )
S A 2 1.1 ( 3 ) 1.8 ( 5 ) 4.7518
3 6.1 ( 19 ) 9.6 ( 21 )
Table 6. Convergence behavior of Example 6 with initial guess $x^{(0)} = (1, 1, \ldots, 1)^{tr}$.
Methods   k   ‖F(x^{(k)})‖   ‖x^{(k+1)} − x^{(k)}‖   ρ
1 6.2 ( 3 ) 1.6 ( 3 )
P M 1 2 1.1 ( 17 ) 3.7 ( 17 ) 3.9840
3 5.2 ( 72 ) 1.6 ( 71 )
1 5.8 ( 4 ) 1.4 ( 3 )
P M 2 2 5.8 ( 18 ) 1.7 ( 17 ) 3.9812
3 2.5 ( 73 ) 6.8 ( 73 )
1 5.3 ( 4 ) 1.2 ( 3 )
P M 3 2 2.6 ( 18 ) 6.6 ( 18 ) 3.9757
3 6.1 ( 75 ) 1.3 ( 74 )
1 3.5 ( 4 ) 1.4 ( 2 )
C L 1 2 9.4 ( 18 ) 4.8 ( 17 ) 4.0028
3 1.8 ( 71 ) 6.0 ( 71 )
1 8.0 ( 4 ) 3.7 ( 3 )
C L 2 2 8.5 ( 16 ) 4.2 ( 15 ) 3.9996
3 1.5 ( 63 ) 7.4 ( 63 )
1 1.1 ( 1 ) 2.4 ( 2 )
G G 2 9.5 ( 9 ) 3.2 ( 9 ) 3.3369
3 3.1 ( 29 ) 1.1 ( 29 )
1 9.3 ( 3 ) 3.7 ( 2 )
C R 1 2 3.9 ( 9 ) 1.0 ( 8 ) 2.9306
3 1.3 ( 28 ) 5.9 ( 28 )
1 7.5 ( 3 ) 3.0 ( 2 )
C R 2 2 9.1 ( 10 ) 4.1 ( 9 ) 3.0092
3 6.9 ( 31 ) 3.5 ( 30 )
1 8.8 ( 3 ) 3.7 ( 3 )
S A 2 9.3 ( 15 ) 4.2 ( 15 ) 4.2917
3 1.6 ( 62 ) 7.4 ( 63 )
Table 7. Number of iterations taken by the methods in each example.
Methods   Example 1   Example 2   Example 3   Example 4   Example 5   Example 6
PM1           5           5           5           5           5           5
PM2           5           5           5           5           5           5
PM3           5           5           5           5           5           5
CL1           6           5           6           5           6           5
CL2           6           5           6           5           6           5
GG            6           6           5           6           6           6
CR1           7           6           7           6           7           6
CR2           6           6           6           6           7           5
SA            6           5           6           5           6           5
Table 8. CPU time used by the methods in each example.
Methods   Example 1   Example 2   Example 3   Example 4   Example 5   Example 6   Total Time   Average Time
P M 1 54.311 1.601 289.503 83.269 84.861 881.732 1395.276 232.546
P M 2 55.748 1.727 291.365 82.354 85.493 865.837 1382.524 230.420
P M 3 54.523 1.621 290.154 82.225 84.908 885.475 1398.906 233.151
C L 1 60.569 2.203 489.799 86.236 87.429 890.662 1616.898 269.483
C L 2 66.117 2.189 494.327 86.972 82.168 892.880 1624.650 270.775
G G 60.449 1.871 260.691 91.487 93.795 907.086 1415.376 235.896
C R 1 56.037 1.765 275.593 86.875 90.081 898.348 1408.698 234.783
C R 2 56.497 1.634 272.366 86.858 90.247 899.627 1407.228 234.538
S A 57.489 1.859 284.298 90.929 97.864 888.185 1420.624 236.770

Share and Cite

MDPI and ACS Style

Bhalla, S.; Singh, G.; Ramos, H.; Behl, R.; Alshehri, H. A Fourth-Order Parametric Iterative Approach for Solving Systems of Nonlinear Equations. Computation 2025, 13, 241. https://doi.org/10.3390/computation13100241
