Article

A New Optimal Numerical Root-Solver for Solving Systems of Nonlinear Equations Using Local, Semi-Local, and Stability Analysis

by Sania Qureshi 1,2,*,†, Francisco I. Chicharro 3,*,†, Ioannis K. Argyros 4,†, Amanullah Soomro 1,†, Jihan Alahmadi 5,† and Evren Hincal 6,†
1 Department of Basic Sciences and Related Studies, Mehran University of Engineering & Technology, Jamshoro 76062, Pakistan
2 Department of Computer Science and Mathematics, Lebanese American University, Beirut P.O. Box 13-5053, Lebanon
3 Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, 46022 València, Spain
4 Department of Computing and Mathematics Sciences, Cameron University, Lawton, OK 73505, USA
5 Department of Mathematics, College of Science and Humanities in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
6 Department of Mathematics, Near East University, 99138 Mersin, Turkey
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Axioms 2024, 13(6), 341; https://doi.org/10.3390/axioms13060341
Submission received: 22 April 2024 / Revised: 14 May 2024 / Accepted: 16 May 2024 / Published: 21 May 2024

Abstract: This paper introduces an iterative method with a remarkable level of accuracy, namely fourth-order convergence. The method is specifically tailored to meet the optimality condition under the Kung–Traub conjecture via a linear combination. With an efficiency index of approximately 1.5874, it employs a blend of local and semi-local analysis to improve both efficiency and convergence. This study investigates semi-local convergence, dynamical analysis to assess stability and convergence rate, and the use of the proposed solver for systems of nonlinear equations. The results underscore the potential of the proposed method for several applications in polynomiography and other areas of mathematical research. The improved performance of the proposed optimal method is demonstrated with mathematical models taken from many domains, such as physics, mechanics, chemistry, and combustion, to name a few.

1. Introduction

Root-finding methods are essential in various scientific disciplines for solving nonlinear systems, which are systems of equations that cannot be represented as linear combinations of their inputs. These techniques, which include Newton’s method, the bisection method, and the secant method, are used to determine solutions of equations where the function is zero, called roots [1]. Engineering relies on the use of equations to address the properties of materials and forces, which are crucial to the design of systems and structures. In the field of physics, differential equations are used to elucidate various phenomena related to motion, energy, and waves. Root-finding algorithms are used in finance to calculate the internal rate of return and in option pricing models [2]. These methods are necessary because nonlinear systems are very complicated and often cannot be solved analytically, or must be greatly simplified in order to be solved. Root-finding methods provide an accurate numerical approach to approximate answers with a high level of accuracy. This allows scientists and engineers to effectively model real-world phenomena, optimize systems, and make predictions based on theoretical models.
As mentioned above, iterative root-finding algorithms are essential computational methods used to solve equations in which a function is equal to zero, demonstrated as follows:
$$\Phi(x) = 0. \tag{1}$$
Equations of the (1) type are of utmost importance in the fields of mathematics, engineering, physics, and computer science. They are employed to tackle a wide range of problems, including the optimization of engineering designs and the solution of equations in theoretical physics, among others [3,4].
There exists a continuous operator Φ : D ⊆ S → S in Equation (1), defined on a nonempty convex subset D of a Banach space S, with values in a Banach space S. This investigation aims to determine an approximation of the unique local solution α ∈ S of the problem stated in (1). In the one-dimensional case, the Banach spaces reduce to S = ℝ, and the problem amounts to approximating a single local root α of
$$\varphi(x) = 0, \tag{2}$$
where φ : I ⊆ ℝ → ℝ and I is a neighborhood of α [5].
An essential objective of numerical analysis is to find the roots of nonlinear equations, whether they are single-variable or multivariable. This endeavor has led to the development of multiple algorithms that strive to predict exact solutions with the required level of accuracy. The pursuit of accuracy and efficiency in solving complex equations drives the advancement and application of iterative methods for root determination. Numerical methods play a crucial role in practical situations where it is often impossible to obtain accurate results by analytical means. Moreover, the ability to quickly and accurately determine the solutions of nonlinear equations is crucial in simulations, optimizations, and modeling in various scientific fields, making these techniques indispensable tools for researchers [6,7]. The main numerical analysis techniques are the bisection method, the Newton–Raphson method, and the secant method. The bisection method is a reliable, albeit slow, strategy that guarantees convergence by halving the interval containing the root at each iteration. In contrast, the Newton–Raphson method offers a higher convergence rate by using the derivative of the function, making it a preferable technique for many applications requiring speed and efficiency. The derivative-free secant method offers a middle ground between the bisection and Newton–Raphson methods. It provides an efficient method that does not depend heavily on the differentiability of the function.
Recent studies have led to the development of sophisticated algorithms, such as two- and three-step root-finding approaches, which aim to improve convergence rates and stability [8,9,10,11,12]. These methods improve traditional iterative algorithms by incorporating additional steps or using higher-order derivatives to improve accuracy and speed. For example, two-step methods typically involve an initial Newton–Raphson-like step followed by a corrective phase that improves the accuracy of the root computation. Three-step techniques improve on this idea by introducing an additional level of precision, whereby even higher levels of accuracy can be achieved. Discussions on the optimality of root-finding algorithms revolve mainly around their speed of convergence and computational efficiency. Second-order algorithms, such as the improved Newton–Raphson methodology and some variants of the secant method, offer a practical balance between computational simplicity and fast convergence.
In contrast, fourth-order convergent methods aim to achieve faster convergence at the expense of computational simplicity, as they require higher-order derivatives or additional function evaluations. These approaches are especially valuable for solving complex equations that require fast convergence and are not limited by processing resources. Iterative methods used in numerical analysis to find the roots of nonlinear equations are essential, as they allow for solving otherwise intractable problems. The progression of these algorithms, ranging from traditional textbook approaches to the latest research advances, demonstrates the continuing quest for greater efficiency, accuracy, and practicality. As computing power increases, root-finding algorithms will also improve in sophistication and performance, underscoring their continuing importance in mathematics and the sciences. This summary provides insight into the thorough and precise analysis needed to fully understand and appreciate the breadth and variety of iterative root-finding methods. The original question requires a thorough examination of the subject, including such aspects as theoretical foundations, improvements in algorithms, comparative evaluations of methodologies, and discussions of practical implementations and applications. Engaging in such a task would require a substantial document, which exceeds the brief summary provided here.

2. Existing Optimal Algorithms

The Kung–Traub conjecture [13], introduced in the 1970s, establishes a theoretical upper bound on the effectiveness of iterative techniques used to solve nonlinear equations. According to this hypothesis, the highest convergence order of any iterative root-finding algorithm employing a finite number of function evaluations per iteration is 2^d, where d is the number of derivative evaluations per iteration. Essentially, this implies that for methods that do not compute derivatives (i.e., d = 0), such as the bisection method, the order of convergence is 1 (linear convergence). The maximum order of convergence for methods that evaluate the function and its first derivative (i.e., when d = 1, such as Newton’s method) is 2. The conjecture establishes a standard for judging how well algorithms for finding roots perform: the Kung–Traub framework says that an optimal root-finding algorithm achieves the highest possible level of convergence as a function of the number of derivative evaluations it performs. As a result, several algorithms have been created to achieve these efficiency bounds, including Halley’s approach and other advanced methods. As defined by Kung and Traub, these algorithms attempt to strike a balance between computational cost and convergence speed.
Newton’s method is one of the classical optimal methods for solving equations of the type (2), with the following computational steps:
$$x_{n+1} = x_n - \frac{\varphi(x_n)}{\varphi'(x_n)}, \qquad n = 0, 1, 2, \ldots, \tag{3}$$
where φ′(x_n) ≠ 0. The optimal second-order convergent solver given in (3) requires only two function evaluations (φ and φ′) per iteration.
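As a point of reference, the following minimal Python sketch implements the classical Newton iteration (3); the function names, tolerance, and test equation are illustrative choices rather than part of the original presentation.

```python
# Classical Newton's method (3): a minimal sketch; names and tolerance are illustrative.
def newton(phi, dphi, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        dx = phi(x) / dphi(x)      # requires dphi(x) != 0
        x -= dx
        if abs(dx) < tol:          # stop when the correction is negligible
            break
    return x

# Example: the real root of x**3 - 2 (the cube root of 2)
root = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)
print(root)  # approx. 1.2599210498948732
```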
The optimal two-step fourth-order convergent solver (OPNM1) in [14] is given by the following computational steps:
$$y_n = x_n - \frac{2\varphi(x_n)}{3\varphi'(x_n)}, \qquad x_{n+1} = x_n - \left[\frac{17}{8} - \frac{9\varphi'(y_n)}{4\varphi'(x_n)} + \frac{9}{8}\left(\frac{\varphi'(y_n)}{\varphi'(x_n)}\right)^{2}\right]\left[\frac{7}{4} - \frac{3\varphi'(y_n)}{4\varphi'(x_n)}\right]\frac{\varphi(x_n)}{\varphi'(x_n)}, \qquad n = 0, 1, 2, \ldots. \tag{4}$$
The numerical solver (4) requires, at each iteration, three function evaluations: one evaluation of φ(x_n) and two evaluations of the first-order derivative, φ′(x_n) and φ′(y_n).
In the same research article [14], the authors proposed another optimal two-step fourth-order convergent solver (OPNM2), which is given by the following computational steps:
$$y_n = x_n - \frac{2\varphi(x_n)}{3\varphi'(x_n)}, \qquad x_{n+1} = x_n - \left[\frac{25}{16} - \frac{9\varphi'(y_n)}{8\varphi'(x_n)} + \frac{9}{16}\left(\frac{\varphi'(y_n)}{\varphi'(x_n)}\right)^{2}\right]\frac{4\varphi(x_n)}{\varphi'(x_n) + 3\varphi'(y_n)}, \qquad n = 0, 1, 2, \ldots. \tag{5}$$
The numerical solver (5) also requires, at each iteration, three function evaluations: one evaluation of φ(x_n) and two evaluations of the first-order derivative, φ′(x_n) and φ′(y_n).
Jaiswal [15] proposed an optimal two-step fourth-order convergent solver (OPNM3), which is given by the following computational steps:
$$y_n = x_n - \frac{2\varphi(x_n)}{3\varphi'(x_n)}, \qquad x_{n+1} = x_n - \left[2 - \frac{7\varphi'(y_n)}{4\varphi'(x_n)} + \frac{3}{4}\left(\frac{\varphi'(y_n)}{\varphi'(x_n)}\right)^{2}\right]\frac{2\varphi(x_n)}{\varphi'(x_n) + \varphi'(y_n)}, \qquad n = 0, 1, 2, \ldots. \tag{6}$$
Once again, the numerical solver (6) requires, at each iteration, three function evaluations: one evaluation of φ(x_n) and two evaluations of the first-order derivative, φ′(x_n) and φ′(y_n).
In the same research paper [15], Jaiswal also proposed a second optimal two-step fourth-order convergent solver (OPNM4), which is given by the following computational steps:
$$y_n = x_n - \frac{2\varphi(x_n)}{3\varphi'(x_n)}, \qquad x_{n+1} = x_n - \left[\frac{9}{4} - \frac{9\varphi'(y_n)}{4\varphi'(x_n)} + \left(\frac{\varphi'(y_n)}{\varphi'(x_n)}\right)^{2}\right]\left[\frac{3}{4} - \frac{\varphi'(y_n)}{2\varphi'(x_n)}\right]\frac{\varphi(x_n)}{\varphi'(x_n)}, \qquad n = 0, 1, 2, \ldots. \tag{7}$$
Once again, the numerical solver (7) requires, at each iteration, three function evaluations: one evaluation of φ(x_n) and two evaluations of the first-order derivative, φ′(x_n) and φ′(y_n).
There are several fourth-order optimal numerical solvers for nonlinear equations, both univariate and multivariate. However, it is crucial to note that several recently suggested optimal techniques are not suitable for solving nonlinear equations in higher dimensions. The optimal methods devised in [16,17,18,19] are generally not designed for application to nonlinear systems, whereas the fourth-order optimal method proposed in the present research study solves both scalar and vector versions of nonlinear equations.

3. Construction of the Optimal Fourth-Order Numerical Solver

In this section, we use the notion of a linear combination of two well-established third-order (non-optimal) numerical solvers to obtain an optimal numerical solver for dealing with nonlinear equations of the type (1). Both numerical solvers, chosen for the linear combination, are discussed in [14] and given as follows:
$$x_{n+1} = x_n - \frac{4\varphi(x_n)}{\varphi'(x_n) + 3\varphi'(y_n)}, \tag{8}$$
and
$$x_{n+1} = x_n - \frac{\varphi(x_n)}{4}\left[\frac{1}{\varphi'(x_n)} + \frac{3}{\varphi'(y_n)}\right], \tag{9}$$
where n = 0, 1, 2, … and y_n = x_n − (2/3)·φ(x_n)/φ′(x_n).
Our major aim is to construct a fourth-order optimal convergent numerical solver using a linear combination of solvers given in (8) and (9). The linear combination has the following form:
$$x_{n+1} = x_n - \kappa\,\frac{4\varphi(x_n)}{\varphi'(x_n) + 3\varphi'(y_n)} - (1 - \kappa)\,\frac{\varphi(x_n)}{4}\left[\frac{1}{\varphi'(x_n)} + \frac{3}{\varphi'(y_n)}\right], \tag{10}$$
where κ ∈ ℝ is the adjusting parameter. When κ = 1, the method reduces to (8), while κ = 0 results in (9). The methods given in (8) and (9) are not optimal, while (10) is an optimal fourth-order convergent solver for a suitable choice of κ. The performance of (10) depends on the adjusting parameter κ, which is obtained from the Taylor expansion given in Theorem 1.
Theorem 1.
Assume that the function φ : D ⊆ ℝ → ℝ has a simple root α ∈ D, where D is an open interval. If φ is sufficiently smooth in a neighborhood of the root α, then the order of convergence of the iterative method defined by (10) is at least three, with the error equation
$$e_{n+1} = \frac{1}{3}(\kappa + 2)\,\varsigma_2^2\, e_n^3 + O(e_n^4), \tag{11}$$
where $e_n = x_n - \alpha$ and $\varsigma_r = \frac{\varphi^{(r)}(\alpha)}{r!\,\varphi'(\alpha)}$, r = 2, 3, 4, …. Furthermore, if κ = −2, then the order of convergence is four.
Proof of Theorem 1. 
Let e_n = x_n − α and d_n = y_n − α.
Expanding φ ( x n ) via a Taylor series expansion around α , we have the following:
$$\varphi(x_n) = \varphi'(\alpha)\left[e_n + \varsigma_2 e_n^2 + \varsigma_3 e_n^3 + \varsigma_4 e_n^4 + \varsigma_5 e_n^5 + O(e_n^6)\right]. \tag{12}$$
Expanding φ′(x_n) via a Taylor series expansion around α, we have the following:
$$\varphi'(x_n) = \varphi'(\alpha)\left[1 + 2\varsigma_2 e_n + 3\varsigma_3 e_n^2 + 4\varsigma_4 e_n^3 + 5\varsigma_5 e_n^4 + O(e_n^5)\right]. \tag{13}$$
Expanding 1/φ′(x_n) via a Taylor series expansion around α, we have the following:
$$\frac{1}{\varphi'(x_n)} = \frac{1}{\varphi'(\alpha)}\Big[1 - 2\varsigma_2 e_n + \left(4\varsigma_2^2 - 3\varsigma_3\right)e_n^2 + 4\left(-2\varsigma_2^3 + 3\varsigma_2\varsigma_3 - \varsigma_4\right)e_n^3 + \left(16\varsigma_2^4 - 36\varsigma_2^2\varsigma_3 + 9\varsigma_3^2 + 16\varsigma_2\varsigma_4\right)e_n^4 + O(e_n^5)\Big]. \tag{14}$$
Dividing (12) by (13), we have the following:
$$\frac{\varphi(x_n)}{\varphi'(x_n)} = e_n - \varsigma_2 e_n^2 + 2\left(\varsigma_2^2 - \varsigma_3\right)e_n^3 + \left(7\varsigma_2\varsigma_3 - 4\varsigma_2^3 - 3\varsigma_4\right)e_n^4 + O(e_n^5). \tag{15}$$
Substituting (15) into the first step of (20) gives the following:
$$d_n = \frac{1}{3}e_n + \frac{2}{3}\left[\varsigma_2 e_n^2 - 2\left(\varsigma_2^2 - \varsigma_3\right)e_n^3 + \left(4\varsigma_2^3 - 7\varsigma_2\varsigma_3 + 3\varsigma_4\right)e_n^4\right] + O(e_n^5), \tag{16}$$
where d_n = y_n − α.
Expanding 1/φ′(y_n) via a Taylor series expansion around α and using Equation (16), we have the following:
$$\frac{1}{\varphi'(y_n)} = \frac{1}{\varphi'(\alpha)}\Big[1 - \frac{2\varsigma_2}{3}e_n - \Big(\frac{8\varsigma_2^2}{9} + \frac{\varsigma_3}{3}\Big)e_n^2 + \frac{4}{27}\left(28\varsigma_2^3 - 24\varsigma_2\varsigma_3 - \varsigma_4\right)e_n^3 + \frac{1}{81}\left(-704\varsigma_2^4 + 1332\varsigma_2^2\varsigma_3 - 380\varsigma_2\varsigma_4 - 207\varsigma_3^2\right)e_n^4 + O\left(e_n^5\right)\Big]. \tag{17}$$
Expanding 1/(φ′(x_n) + 3φ′(y_n)) via a Taylor series expansion around α and using Equation (16) gives the following:
$$\frac{1}{\varphi'(x_n) + 3\varphi'(y_n)} = \frac{1}{\varphi'(\alpha)}\Big[\frac{1}{4} - \frac{1}{4}\varsigma_2 e_n - \frac{1}{4}\varsigma_3 e_n^2 + \frac{1}{4}\Big(3\varsigma_2^3 - \varsigma_2\varsigma_3 - \frac{10}{9}\varsigma_4\Big)e_n^3 + \frac{1}{4}\Big(-9\varsigma_2^4 + 13\varsigma_2^2\varsigma_3 - \varsigma_3^2 - \frac{13}{9}\varsigma_2\varsigma_4 - \frac{35}{27}\varsigma_5\Big)e_n^4 + O(e_n^5)\Big]. \tag{18}$$
Finally, substituting the series obtained in (12), (14), (17), and (18) into the structure defined in (10), the error equation is obtained as follows:
$$e_{n+1} = \frac{1}{3}\varsigma_2^2(\kappa + 2)e_n^3 + \frac{1}{9}\Big[-\varsigma_2^3\left(14\kappa + 13\right) + 3\varsigma_2\varsigma_3\left(4\kappa + 5\right) + \varsigma_4\Big]e_n^4 + O(e_n^5). \tag{19}$$
Equation (19) implies that for any κ ∈ ℝ, the method defined by (10) is at least cubically convergent. □
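The error Equation (19) can also be checked symbolically. The following sketch, assuming SymPy is available, expands one step of (10) around a simple root and confirms that the coefficient of e_n^3 is (κ + 2)ς₂²/3, which vanishes exactly for κ = −2.

```python
import sympy as sp

e, kappa, c2, c3 = sp.symbols('e kappa c2 c3')

# Model function with simple root 0 and phi'(0) = 1, so zeta_2 = c2, zeta_3 = c3.
def phi(t):  return t + c2*t**2 + c3*t**3
def dphi(t): return 1 + 2*c2*t + 3*c3*t**2

x = e                                       # current error: x_n - alpha = e
y = x - sp.Rational(2, 3)*phi(x)/dphi(x)    # first step of the scheme
x_next = (x - kappa*4*phi(x)/(dphi(x) + 3*dphi(y))
            - (1 - kappa)*(phi(x)/4)*(1/dphi(x) + 3/dphi(y)))  # combination (10)

err = sp.expand(sp.series(x_next, e, 0, 4).removeO())
print(sp.factor(err.coeff(e, 3)))           # expected: c2**2*(kappa + 2)/3
print(err.coeff(e, 3).subs(kappa, -2))      # expected: 0
```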
To increase the convergence order of (10), the error equation, Equation (19), suggests that the parameter κ should be −2. Using this value of κ in (10), the proposed optimal numerical method (OPPNM) takes the following form:
$$y_n = x_n - \frac{2\varphi(x_n)}{3\varphi'(x_n)}, \qquad x_{n+1} = x_n - \frac{3}{4}\varphi(x_n)\left[\frac{1}{\varphi'(x_n)} + \frac{3}{\varphi'(y_n)}\right] + \frac{8\varphi(x_n)}{\varphi'(x_n) + 3\varphi'(y_n)}, \tag{20}$$
where n = 0, 1, 2, …. Numerical solver (20) is optimal under the Kung–Traub conjecture since its order of convergence equals 2^{θ−1}, where θ represents the number of function evaluations taken by the solver at each iteration. The proposed numerical solver uses three function evaluations (one of the function and two of its first-order derivative) at each iteration and has fourth-order local convergence, which is also discussed in the next section; with convergence order p = 4 and θ = 3 evaluations, this yields the efficiency index p^{1/θ} = 4^{1/3} ≈ 1.5874 quoted in the abstract. The flowchart of the proposed two-step optimal numerical solver is shown in Figure 1.
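For concreteness, a minimal Python sketch of the scalar solver (20) is given below; the stopping rule, tolerance, and iteration cap are illustrative choices and not part of the derivation. The usage example employs Problem 2 of Section 7.

```python
import math

def oppnm(phi, dphi, x0, tol=1e-12, max_iter=50):
    """Proposed optimal fourth-order solver (20): a minimal sketch."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = phi(x), dphi(x)
        y = x - 2.0*fx/(3.0*dfx)                     # first (Jarratt-type) step
        dfy = dphi(y)
        x_new = x - 0.75*fx*(1.0/dfx + 3.0/dfy) + 8.0*fx/(dfx + 3.0*dfy)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Problem 2 of Section 7: phi_2(x) = log(x) - x**3 + 2 sin(x)
root = oppnm(lambda x: math.log(x) - x**3 + 2*math.sin(x),
             lambda x: 1/x - 3*x**2 + 2*math.cos(x), x0=1.5)
print(root)  # approx. 1.2979977432803718
```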

4. Local Convergence Analysis

4.1. Scalar Form

Local convergence analysis is a method used to determine the specific conditions under which a given iterative procedure can effectively converge to a solution of a nonlinear equation of the form (1). More specifically, it calculates the region of the complex plane encompassing a root into which the iterative method will converge. The study begins by assuming that the iterative method has achieved convergence at a root and then investigates the behavior of the procedure at that root. The analysis usually consists of examining the behavior of the iteration function of the method, which establishes a connection between the current approximation of the root and the subsequent approximation. Taylor series expansion is an essential tool for performing local convergence analysis. It allows us to estimate the iteration function in the vicinity of the root. Consequently, this estimation can be used to determine the rate at which the strategy reaches the root, as well as the conditions that must be satisfied for convergence.
The study usually involves determining the radius of convergence, which is the distance from the root within which the iteration function can be approximated by a Taylor series expansion. The iterative approach guarantees convergence to the root if the initial estimate lies within the radius of convergence. The root-finding methods usually examined by local convergence analysis are Newton’s method, the secant method, and the bisection method. By analyzing the behavior of these methods, one can determine their strengths and weaknesses and identify the scenarios in which they are most effective. Therefore, our objective is to examine the local convergence of the suggested numerical solver (20) employing Taylor series expansion. This requires us to give the following theorem:
Theorem 2.
Suppose that α ∈ D is the exact root of a differentiable function φ : D ⊆ ℝ → ℝ for an open interval D. Then, the two-step method given in (20) has fourth-order convergence, and the asymptotic error term is determined to be
$$e_{n+1} = \frac{1}{9}\left(15\varsigma_2^3 - 9\varsigma_2\varsigma_3 + \varsigma_4\right)e_n^4 + O(e_n^5), \tag{21}$$
where $e_n = x_n - \alpha$ and $\varsigma_r = \frac{\varphi^{(r)}(\alpha)}{r!\,\varphi'(\alpha)}$, r = 2, 3, 4, ….
Proof of Theorem 2. 
From (19), the following error equation is obtained for κ = −2:
$$e_{n+1} = \frac{1}{9}\left(15\varsigma_2^3 - 9\varsigma_2\varsigma_3 + \varsigma_4\right)e_n^4 + O(e_n^5). \tag{22}$$
The error equation clearly suggests that the proposed method (20), in the scalar version, has fourth-order convergence. It is also an optimal method in the sense of the Kung–Traub conjecture. □
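The fourth order predicted by Theorem 2 can be verified numerically through the computational order of convergence, ρ_n = ln(e_{n+1}/e_n)/ln(e_n/e_{n−1}), computed from successive error proxies e_n = |x_{n+1} − x_n|. The following sketch, assuming the mpmath library for high-precision arithmetic, applies (20) to Problem 2 of Section 7 and should report values close to 4.

```python
from mpmath import mp, log, sin, cos, fabs

mp.dps = 1000  # enough digits to observe several quartic steps

phi  = lambda x: log(x) - x**3 + 2*sin(x)   # Problem 2 of Section 7
dphi = lambda x: 1/x - 3*x**2 + 2*cos(x)

xs = [mp.mpf('1.5')]
for _ in range(5):                           # OPPNM iterations, scheme (20)
    x = xs[-1]
    fx, dfx = phi(x), dphi(x)
    y = x - 2*fx/(3*dfx)
    dfy = dphi(y)
    xs.append(x - mp.mpf(3)/4*fx*(1/dfx + 3/dfy) + 8*fx/(dfx + 3*dfy))

e = [fabs(xs[i+1] - xs[i]) for i in range(len(xs) - 1)]   # error proxies
for n in range(1, len(e) - 1):
    print(log(e[n+1]/e[n]) / log(e[n]/e[n-1]))            # values near 4
```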

4.2. Vector Form

The suggested method is optimal and can be used effectively for both single-variable and multivariable problems while maintaining simplicity. Within this particular framework, the approach uses the Jacobian matrix, which collects all first-order partial derivatives of the system, to progressively estimate the solution of the system through iterations. The approach starts with an initial estimate of the solution and then applies corrections at each step. In the first step, these corrections are obtained by multiplying the inverse of the Jacobian matrix by two-thirds of the negative of the function vector evaluated at the current estimate; the second step of the suggested method is carried out analogously. The iterative procedure persists until the solution satisfies the system of equations within a preset tolerance. This approach effectively handles the intricate nonlinear relationships between variables. The scheme is highly appreciated for its fast convergence and accuracy in various scientific and technical applications, provided that the initial approximation is close enough to the real solution and the system satisfies specific constraints on the invertibility of the Jacobian.
Let us formalize the approach to discuss the proposed optimal method (20) for solving a system of nonlinear equations. Let us consider a system of n nonlinear equations with n variables, represented as follows:
$$\Phi(x) = 0,$$
where Φ ( x ) = [ ϕ 1 ( x ) , ϕ 2 ( x ) , , ϕ n ( x ) ] T is a vector-valued function of x = [ x 1 , x 2 , , x n ] T , and each ϕ i ( x ) is a nonlinear function of the variables x 1 , x 2 , , x n .
The method seeks to find a vector x , such that Φ ( x ) = 0 . Starting from an initial guess x ( 0 ) , the method iteratively updates the estimate of the root using the following formula:
$$\begin{aligned} y^{(n)} &= x^{(n)} - \frac{2}{3}\big[J(x^{(n)})\big]^{-1}\Phi(x^{(n)}),\\ x^{(n+1)} &= x^{(n)} - \frac{3}{4}\Big(\big[J(x^{(n)})\big]^{-1} + 3\big[J(y^{(n)})\big]^{-1}\Big)\Phi(x^{(n)}) + 8\big[J(x^{(n)}) + 3J(y^{(n)})\big]^{-1}\Phi(x^{(n)}), \end{aligned} \tag{23}$$
where J ( x ( n ) ) is the Jacobian matrix of Φ evaluated at x ( n ) , and n is the iteration index. The Jacobian matrix is defined as follows:
$$\Phi'(x) = J(x) = \begin{bmatrix} \dfrac{\partial \phi_1}{\partial x_1} & \cdots & \dfrac{\partial \phi_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial \phi_n}{\partial x_1} & \cdots & \dfrac{\partial \phi_n}{\partial x_n} \end{bmatrix}.$$
The method applies a correction that is expected to bring the current estimate x ( n ) closer to the true solution by solving a linear system at each iteration. The process is repeated until a convergence criterion is met, such as when the norm of the function vector Φ ( x ( n ) ) is less than a specified tolerance, indicating that x ( n ) is close to the root.
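A minimal NumPy sketch of the system version (23) follows; it solves linear systems rather than forming matrix inverses explicitly, which is the numerically preferable formulation. The function and Jacobian are supplied by the caller, and the tolerance and iteration cap are illustrative choices.

```python
import numpy as np

def oppnm_system(F, J, x0, tol=1e-12, max_iter=50):
    """Vector form (23) of the proposed solver: a minimal sketch."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx, Jx = F(x), J(x)
        a = np.linalg.solve(Jx, Fx)             # [J(x)]^{-1} F(x)
        y = x - (2.0/3.0)*a                     # first step of (23)
        Jy = J(y)
        b = np.linalg.solve(Jy, Fx)             # [J(y)]^{-1} F(x)
        c = np.linalg.solve(Jx + 3.0*Jy, Fx)    # [J(x) + 3 J(y)]^{-1} F(x)
        x_new = x - 0.75*(a + 3.0*b) + 8.0*c    # second step of (23)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```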
Several factors affect how quickly the optimal solver (20) converges locally, including the quality of the initial approximation and the properties of the Jacobian matrix. Under favorable circumstances, the approach exhibits quartic convergence, greatly enhancing its efficiency in solving nonlinear systems. Nevertheless, if the Jacobian matrix is singular or nearly singular during any iteration, the approach may either fail to converge or converge to a point that does not satisfy the given system.
Here, we present the results to demonstrate the error equation and, consequently, the order of convergence of the proposed strategy for a system of nonlinear equations.
Lemma 1
([20]). Let Φ : Θ ⊆ ℝ^N → ℝ^N be r-times Fréchet differentiable in a convex set Θ ⊆ ℝ^N. Then, for any x and Δx ∈ ℝ^N, the following expression holds:
$$\Phi(x + \Delta x) = \Phi(x) + \Phi'(x)\Delta x + \frac{1}{2!}\Phi''(x)\Delta x^2 + \frac{1}{3!}\Phi'''(x)\Delta x^3 + \cdots + \frac{1}{(r-1)!}\Phi^{(r-1)}(x)\Delta x^{r-1} + R_r, \tag{24}$$
where
$$\|R_r\| \le \frac{1}{r!}\sup_{0 < t < 1}\big\|\Phi^{(r)}(x + t\,\Delta x)\big\|\,\|\Delta x\|^{r},$$
and the symbol $\Phi^{(p)}(x)\Delta x^{p}$ denotes the p-linear operator $\Phi^{(p)}(x)$ applied to p copies of $\Delta x \in \mathbb{R}^N$.
We introduce the following theorem to demonstrate the error equation and, consequently, the convergence order for the proposed optimal solver while dealing with a system of nonlinear equations, as follows:
Theorem 3
([21]). Let the function Φ : Θ ⊆ ℝ^N → ℝ^N be sufficiently differentiable in a convex set Θ containing a simple zero α of Φ(x). Let us consider that Φ′(x) is continuous and nonsingular at α. If the initial guess x^{(0)} is close to α, then the sequence {x^{(n)}} obtained with the proposed two-step optimal solver (23) converges to α with fourth-order convergence.
Proof of Theorem 3. 
Let α be the root of Φ(x), let x^{(n)} be the nth approximation to the root by (23), and let e_n = x^{(n)} − α be the error vector after the nth iteration. Expanding Φ(x^{(n)}) via a Taylor series expansion around α, we obtain the following:
$$\Phi(x^{(n)}) = J(\alpha)\left[e_n + \Lambda_2 e_n^2 + \Lambda_3 e_n^3 + \Lambda_4 e_n^4 + \Lambda_5 e_n^5 + O(e_n^6)\right], \tag{25}$$
where $\Lambda_r = \frac{1}{r!}[J(\alpha)]^{-1}\Phi^{(r)}(\alpha)$, r = 2, 3, ….
Expanding J ( x ( n ) ) via a Taylor series expansion around α gives the following:
$$J(x^{(n)}) = J(\alpha)\left[I + 2\Lambda_2 e_n + 3\Lambda_3 e_n^2 + 4\Lambda_4 e_n^3 + 5\Lambda_5 e_n^4 + O(e_n^5)\right]. \tag{26}$$
Expanding [J(x^{(n)})]^{-1} via a Taylor series expansion around α leads to the following:
$$\big[J(x^{(n)})\big]^{-1} = [J(\alpha)]^{-1}\Big[I - 2\Lambda_2 e_n + \left(4\Lambda_2^2 - 3\Lambda_3\right)e_n^2 + 4\left(-2\Lambda_2^3 + 3\Lambda_2\Lambda_3 - \Lambda_4\right)e_n^3 + \left(16\Lambda_2^4 - 36\Lambda_2^2\Lambda_3 + 9\Lambda_3^2 + 16\Lambda_2\Lambda_4\right)e_n^4 + O(e_n^5)\Big]. \tag{27}$$
Multiplying (25) and (27), we have the following:
$$\big[J(x^{(n)})\big]^{-1}\Phi(x^{(n)}) = e_n - \Lambda_2 e_n^2 + 2\left(\Lambda_2^2 - \Lambda_3\right)e_n^3 + \left(7\Lambda_2\Lambda_3 - 4\Lambda_2^3 - 3\Lambda_4\right)e_n^4 + O(e_n^5). \tag{28}$$
Substituting (28) into the first step of (23) yields the following:
$$d_n = \frac{1}{3}e_n + \frac{2}{3}\left[\Lambda_2 e_n^2 - 2\left(\Lambda_2^2 - \Lambda_3\right)e_n^3 + \left(4\Lambda_2^3 - 7\Lambda_2\Lambda_3 + 3\Lambda_4\right)e_n^4\right] + O(e_n^5), \tag{29}$$
where d_n = y^{(n)} − α. Expanding J(y^{(n)}) via a Taylor series expansion around α and using Equation (29), we have the following:
$$J(y^{(n)}) = J(\alpha)\Big[I + \frac{2\Lambda_2}{3}e_n + \frac{1}{3}\left(4\Lambda_2^2 + \Lambda_3\right)e_n^2 - \frac{4}{27}\left(18\Lambda_2^3 - 27\Lambda_2\Lambda_3 - \Lambda_4\right)e_n^3 + \frac{4}{9}\left(12\Lambda_2^4 - 24\Lambda_2^2\Lambda_3 + 11\Lambda_2\Lambda_4 + 6\Lambda_3^2\right)e_n^4 + O\left(e_n^5\right)\Big]. \tag{30}$$
Expanding [J(y^{(n)})]^{-1} via a Taylor series expansion around α and using Equation (29), we obtain the following:
$$\big[J(y^{(n)})\big]^{-1} = [J(\alpha)]^{-1}\Big[I - \frac{2\Lambda_2}{3}e_n - \Big(\frac{8\Lambda_2^2}{9} + \frac{\Lambda_3}{3}\Big)e_n^2 + \frac{4}{27}\left(28\Lambda_2^3 - 24\Lambda_2\Lambda_3 - \Lambda_4\right)e_n^3 + \frac{1}{81}\left(-704\Lambda_2^4 + 1332\Lambda_2^2\Lambda_3 - 380\Lambda_2\Lambda_4 - 207\Lambda_3^2\right)e_n^4 + O\left(e_n^5\right)\Big]. \tag{31}$$
Now, substituting Equations (25)–(27), (30), and (31) in the second step of (23), the error equation is obtained as follows:
$$e_{n+1} = \frac{1}{9}\left(15\Lambda_2^3 - 9\Lambda_2\Lambda_3 + \Lambda_4\right)e_n^4 + O(e_n^5). \tag{32}$$
Error Equation (32) proves the fourth-order convergence of the proposed two-step optimal method (23) presented in the vector form. □

5. Convergence without Taylor Series

Semi-local convergence for the proposed solver (20) requires a careful balance between the proximity of the initial approximation to the root, the behavior of the function and its derivatives near the root, and the inherent properties of the solver. By establishing the necessary conditions on the function and the initial approximation, and by a detailed analysis of the error dynamics, it can be seen that the solver exhibits fourth-order convergence within a certain neighborhood around the root. This analysis not only guarantees the effectiveness of the solver but also provides practical guidance on its application for solving equations where high accuracy is desired.
There are some problems with the Taylor series methodology employed to establish the local convergence of the solver (20), and these problems limit its applicability even when convergence is attainable. We list them as follows:
(P1)
The local convergence is carried out for functions on the real line or the finite-dimensional Euclidean space.
(P2)
The function φ must be at least five times differentiable. Let us consider a function φ : [−1.3, 1.3] → ℝ, defined as follows:
$$\varphi(t) = \begin{cases} d_1 t^3 \log t^2 + d_2 t^5 + d_3 t^4, & \text{if } t \neq 0, \\ 0, & \text{if } t = 0, \end{cases}$$
where d₁ ≠ 0 and d₂ + d₃ = 0. Then, γ = 1 ∈ [−1.3, 1.3] is a solution of the equation φ(t) = 0, but the third derivative φ‴ is not continuous at t = 0. Thus, the results of the previous section, being only sufficient, cannot guarantee the convergence of the sequence {x_n} generated by the method to the solution γ = 1. However, the method converges to γ = 1 if, for example, we start from x₀ = 1.1. This observation indicates that the sufficient convergence conditions can be weakened.
(P3)
There is no a priori knowledge of the number of iterations required to reach a desired error tolerance, since no computable upper bounds on ‖γ − x_n‖ are given.
(P4)
The separation of the solutions is not discussed.
(P5)
The semi-local analysis of convergence, which is considered to be more important, is not considered either.
We positively address the above-listed problems (P1)–(P5) as follows:
(P1)’
The convergence analysis is carried out for Banach space-valued operators.
(P2)’
Both types of analyses use conditions only on the operators appearing in the method (20).
(P3)’
The number of iterations needed to reach the error tolerance is known in advance, since a priori estimates on ‖γ − x_n‖ are provided.
(P4)’
The separation of the solutions is discussed.
(P5)’
The semi-local analysis of convergence relies on majorizing sequences [22,23]. Both analyses are controlled by generalized continuity conditions. This approach allows us to extend the utilization of the method (20).
Let Ξ and Ξ₁ denote Banach spaces, let Ω ⊆ Ξ denote a convex, non-empty set that is open or closed, and let Υ : Ω → Ξ₁ denote an operator that is differentiable in the sense of Fréchet. We shall locate a solution γ of the equation Υ(x) = 0 iteratively. In particular, method (20) is utilized in this setting and is defined, for x₀ ∈ Ω and each n = 0, 1, 2, …, by the following:
$$\begin{aligned} y_n &= x_n - \frac{2}{3}\,\Upsilon'(x_n)^{-1}\Upsilon(x_n),\\ A_n &= \Upsilon'(x_n) + 3\Upsilon'(y_n),\\ x_{n+1} &= x_n - \frac{3}{4}\,\Upsilon'(x_n)^{-1}\Upsilon(x_n) - \frac{9}{4}\,\Upsilon'(y_n)^{-1}\Upsilon(x_n) + 8A_n^{-1}\Upsilon(x_n). \end{aligned} \tag{33}$$
It is clear that the method (33) reduces to (20) if Ξ = Ξ₁ = ℝ^j (j a natural number).

5.1. Local Analysis of Convergence

The conditions below are stated for M = [0, +∞). Suppose we have the following:
(C1)
There exists a smallest solution ρ₀ ∈ M of the equation ϖ₀(t) − 1 = 0, where ϖ₀ : M → ℝ₊ is a continuous and non-decreasing function. Set M₀ = [0, ρ₀).
(C2)
There exists a function ϖ : M₀ → ℝ₊ such that, for g₁ : M₀ → ℝ₊ defined by
$$g_1(t) = \frac{\int_0^1 \varpi((1 - \tau)t)\,d\tau + \frac{1}{3}\left(1 + \int_0^1 \varpi_0(\tau t)\,d\tau\right)}{1 - \varpi_0(t)},$$
the equation g₁(t) − 1 = 0 has a smallest solution in the interval M₀ ∖ {0}, denoted as r₁.
(C3)
The equation p(t) − 1 = 0, where p : M₀ → ℝ₊ is defined by
$$p(t) = \frac{1}{4}\big(3\varpi_0(g_1(t)t) + \varpi_0(t)\big),$$
has a smallest solution in M₀ ∖ {0}, denoted as ρ₁. Set ρ = min{ρ₀, ρ₁} and M₁ = [0, ρ).
(C4)
The equation g₂(t) − 1 = 0, where, with
$$\bar{\varpi}(t) = \varpi\big((1 + g_1(t))t\big) \quad \text{or} \quad \bar{\varpi}(t) = \varpi_0(t) + \varpi_0(g_1(t)t),$$
the function g₂ : M₁ → ℝ₊ is defined by
$$g_2(t) = \frac{\int_0^1 \varpi((1 - \tau)t)\,d\tau}{1 - \varpi_0(t)} + \frac{\bar{\varpi}(t)\left(1 + \int_0^1 \varpi_0(\tau t)\,d\tau\right)}{2\big(1 - p(t)\big)\big(1 - \varpi_0(g_1(t)t)\big)} + \frac{\bar{\varpi}(t)\left(1 + \int_0^1 \varpi_0(\tau t)\,d\tau\right)}{4\big(1 - \varpi_0(t)\big)\big(1 - \varpi_0(g_1(t)t)\big)},$$
has a smallest solution in M₁ ∖ {0}, denoted as r₂. Set
$$r = \min\{r_1, r_2\}. \tag{34}$$
The parameter r is shown in Theorem 4 to be a possible radius of convergence for the method (33). However, we first connect the functions ϖ 0 and ϖ to the operators on the method (33).
(C5)
There exist an invertible operator Δ ∈ 𝓛(Ξ, Ξ₁), with Δ^{-1} ∈ 𝓛(Ξ₁, Ξ), and a solution γ ∈ Ω such that, for each x ∈ Ω, ‖Δ^{-1}(Υ′(x) − Δ)‖ ≤ ϖ₀(‖x − γ‖). Set Ω₁ = S[γ, ρ₀] ∩ Ω.
(C6)
‖Δ^{-1}(Υ′(y) − Υ′(x))‖ ≤ ϖ(‖y − x‖) for each x, y ∈ Ω₁, and
(C7)
S[γ, r] ⊆ Ω.
Under the conditions (C1)–(C7), we present the local analysis of convergence for the method (33).
Theorem 4.
Suppose that the conditions (C1)–(C7) hold. Then, the following assertions hold for the iterates {x_n} produced by the method (33), provided that x₀ ∈ S(γ, r) ∖ {γ}:
$$\{x_n\} \subset S(\gamma, r), \tag{35}$$
$$\|y_n - \gamma\| \le g_1(\|x_n - \gamma\|)\,\|x_n - \gamma\| \le \|x_n - \gamma\| < r, \tag{36}$$
$$\|x_{n+1} - \gamma\| \le g_2(\|x_n - \gamma\|)\,\|x_n - \gamma\| \le \|x_n - \gamma\|, \tag{37}$$
and the sequence {x_n} converges to the solution γ of the equation Υ(x) = 0.
Proof of Theorem 4. 
Let u ∈ S(γ, r) ∖ {γ}. Then, the application of the conditions (C1), (C5), and (34) implies, for x = x₀,
$$\|\Delta^{-1}(\Upsilon'(x_0) - \Delta)\| \le \varpi_0(\|x_0 - \gamma\|) < 1.$$
Thus, the Banach lemma on invertible linear operators [22,23] assures that Υ′(x₀)^{-1} ∈ 𝓛(Ξ₁, Ξ), as well as
$$\|\Upsilon'(x_0)^{-1}\Delta\| \le \frac{1}{1 - \varpi_0(\|x_0 - \gamma\|)}. \tag{38}$$
It also follows that the iterate y₀ exists by the first substep of the method (33) for n = 0, and
$$y_0 - \gamma = x_0 - \gamma - \Upsilon'(x_0)^{-1}\Upsilon(x_0) + \frac{1}{3}\Upsilon'(x_0)^{-1}\Upsilon(x_0) = -\big[\Upsilon'(x_0)^{-1}\Delta\big]\int_0^1 \Delta^{-1}\big(\Upsilon'(\gamma + \tau(x_0 - \gamma)) - \Upsilon'(x_0)\big)\,d\tau\,(x_0 - \gamma) + \frac{1}{3}\Upsilon'(x_0)^{-1}\Upsilon(x_0). \tag{39}$$
Using (34), (C6), (38), and (39), we have the following in turn:
$$\|y_0 - \gamma\| \le \frac{\int_0^1 \varpi((1 - \tau)\|x_0 - \gamma\|)\,d\tau + \frac{1}{3}\left(1 + \int_0^1 \varpi_0(\tau\|x_0 - \gamma\|)\,d\tau\right)}{1 - \varpi_0(\|x_0 - \gamma\|)}\,\|x_0 - \gamma\| \le g_1(\|x_0 - \gamma\|)\,\|x_0 - \gamma\| \le \|x_0 - \gamma\| < r. \tag{40}$$
So, assertion (36) holds if n = 0, and the iterate y₀ ∈ S(γ, r). As in (38), we have the following:
$$\|(4\Delta)^{-1}(A_0 - 4\Delta)\| \le \frac{1}{4}\big[3\varpi_0(\|y_0 - \gamma\|) + \varpi_0(\|x_0 - \gamma\|)\big] = p_0 < 1, \tag{41}$$
so A₀^{-1} ∈ 𝓛(Ξ₁, Ξ), and
$$\|A_0^{-1}\Delta\| \le \frac{1}{4(1 - p_0)}. \tag{42}$$
Consequently, the iterate x₁ exists by the second substep of the method (33), and
$$\begin{aligned} x_1 - \gamma &= \big(x_0 - \gamma - \Upsilon'(x_0)^{-1}\Upsilon(x_0)\big) + \frac{1}{4}\Upsilon'(x_0)^{-1}\Upsilon(x_0) - \frac{9}{4}\Upsilon'(y_0)^{-1}\Upsilon(x_0) + 8A_0^{-1}\Upsilon(x_0)\\ &= \big(x_0 - \gamma - \Upsilon'(x_0)^{-1}\Upsilon(x_0)\big) + \frac{1}{4}\big(\Upsilon'(x_0)^{-1} - \Upsilon'(y_0)^{-1}\big)\Upsilon(x_0) + 2\big(4A_0^{-1} - \Upsilon'(y_0)^{-1}\big)\Upsilon(x_0)\\ &= \big(x_0 - \gamma - \Upsilon'(x_0)^{-1}\Upsilon(x_0)\big) + \frac{1}{4}\Upsilon'(x_0)^{-1}\big(\Upsilon'(y_0) - \Upsilon'(x_0)\big)\Upsilon'(y_0)^{-1}\Upsilon(x_0) + 2A_0^{-1}\big(\Upsilon'(y_0) - \Upsilon'(x_0)\big)\Upsilon'(y_0)^{-1}\Upsilon(x_0), \end{aligned}$$
leading, by (34), (38), (40), (41), (C1), and (C6), to
$$\begin{aligned} \|x_1 - \gamma\| &\le \Bigg[\frac{\int_0^1 \varpi((1 - \tau)\|x_0 - \gamma\|)\,d\tau}{1 - \varpi_0(\|x_0 - \gamma\|)} + \frac{\bar{\varpi}_0\left(1 + \int_0^1 \varpi_0(\tau\|x_0 - \gamma\|)\,d\tau\right)}{4\big(1 - \varpi_0(\|x_0 - \gamma\|)\big)\big(1 - \varpi_0(\|y_0 - \gamma\|)\big)} + \frac{\bar{\varpi}_0\left(1 + \int_0^1 \varpi_0(\tau\|x_0 - \gamma\|)\,d\tau\right)}{2\big(1 - p_0\big)\big(1 - \varpi_0(\|y_0 - \gamma\|)\big)}\Bigg]\|x_0 - \gamma\|\\ &\le g_2(\|x_0 - \gamma\|)\,\|x_0 - \gamma\| \le \|x_0 - \gamma\|, \end{aligned}$$
where ϖ̄₀ = ϖ̄(‖x₀ − γ‖).
Thus, the assertion (37) holds if n = 0, and the iterate x₁ ∈ S(γ, r). By replacing x₀, y₀, x₁ with x_m, y_m, x_{m+1} (m a natural integer), we complete the induction for the assertions (35)–(37). Then, the estimation
$$\|x_{m+1} - \gamma\| \le d\,\|x_m - \gamma\| \le d^{m+1}\|x_0 - \gamma\| < r,$$
where d = g₂(‖x₀ − γ‖) ∈ [0, 1), shows that lim_{m→+∞} x_m = γ. □
Remark 1.
(i) 
The radius r in the condition (C7) can be replaced by ρ 0 .
(ii) 
Possible choices of the operator Δ can be Δ = I or Δ = Υ′(γ), provided that the operator Υ′(γ) is invertible. Other choices are possible, as long as conditions (C5) and (C6) hold.
The separation of the solutions is developed in the following result.
Proposition 1.
Suppose there exists ρ₂ > 0 such that the condition (C5) holds in the ball S(γ, ρ₂), and there exists ρ₃ ≥ ρ₂ such that we have the following:
$$\int_0^1 \varpi_0(\tau\rho_3)\,d\tau < 1. \tag{45}$$
Set Ω₂ = S[γ, ρ₃] ∩ Ω. Then, the equation Υ(x) = 0 is uniquely solvable by γ in the set Ω₂.
Proof of Proposition 1. 
Suppose that there exists γ₁ ∈ Ω₂ such that Υ(γ₁) = 0 and γ₁ ≠ γ. Define the linear operator L = ∫₀¹Υ′(γ + τ(γ₁ − γ))dτ. Then, it follows by (C5) and (45) that
$$\|\Delta^{-1}(L - \Delta)\| \le \int_0^1 \varpi_0(\tau\|\gamma_1 - \gamma\|)\,d\tau \le \int_0^1 \varpi_0(\tau\rho_3)\,d\tau < 1.$$
Then, L^{-1} ∈ 𝓛(Ξ₁, Ξ), and from the identity
$$\gamma_1 - \gamma = L^{-1}\big(\Upsilon(\gamma_1) - \Upsilon(\gamma)\big) = L^{-1}(0) = 0,$$
Consequently, we can conclude that γ 1 = γ .  □
Remark 2.
We can certainly choose ρ 2 = r in Proposition 1.

5.2. Semi-Local Analysis of Convergence

The roles of γ, ϖ₀, and ϖ are exchanged with x₀, ν₀, and ν as follows. Suppose that we have the following:
(H1)
The equation ν₀(t) − 1 = 0 has a smallest solution, denoted by ρ₃, in the interval M ∖ {0}, where ν₀ : M → ℝ₊ is a continuous as well as non-decreasing function. Set M₂ = [0, ρ₃).
(H2)
There exists a function ν : M₂ → ℝ₊, which is continuous as well as non-decreasing. We define the sequence {α_n} for α₀ = 0, some β₀ ≥ 0, and each n = 0, 1, 2, … by the following:
$$\begin{aligned} q_n &= \frac{1}{4}\big(3\nu_0(\beta_n) + \nu_0(\alpha_n)\big),\\ \alpha_{n+1} &= \beta_n + \left[\frac{1}{8} + \frac{27\big(1 + \nu_0(\alpha_n)\big)}{8\big(1 - \nu_0(\beta_n)\big)} + \frac{3\big(1 + \nu_0(\alpha_n)\big)}{1 - q_n}\right](\beta_n - \alpha_n),\\ \delta_{n+1} &= \left(1 + \int_0^1 \nu_0\big(\alpha_n + \tau(\alpha_{n+1} - \alpha_n)\big)\,d\tau\right)(\alpha_{n+1} - \alpha_n) + \frac{3}{2}\big(1 + \nu_0(\alpha_n)\big)(\beta_n - \alpha_n),\\ \beta_{n+1} &= \alpha_{n+1} + \frac{2}{3}\,\frac{\delta_{n+1}}{1 - \nu_0(\alpha_{n+1})}. \end{aligned} \tag{46}$$
The scalar sequence {α_n}, as defined, is shown in Theorem 5 to be majorizing for the method (33). But first, a general convergence condition for it is needed.
(H3)
There exists ρ₄ ∈ [0, ρ₃) such that, for each n = 0, 1, 2, …, ν₀(α_n) < 1, ν₀(β_n) < 1, q_n < 1, and α_n < ρ₄.
It follows by simple induction on (46) and condition (H3) that 0 ≤ α_n ≤ β_n ≤ α_{n+1} < ρ₄. Thus, the real sequence {α_n} is non-decreasing and bounded from above by ρ₄ and, as such, convergent to some a ∈ [0, ρ₄] with lim_{n→+∞} α_n = a. The limit a is the unique least upper bound of the sequence {α_n}. Notice that if ν₀ is strictly increasing, we can take ρ₄ = ν₀^{-1}(1). As in the local analysis, the functions ν₀ and ν relate to the operators of the method (33).
(H4)
There exists an invertible operator Δ ∈ 𝓛(Ξ, Ξ₁) such that, for some x₀ ∈ Ω,
$$\|\Delta^{-1}(\Upsilon'(x) - \Delta)\| \le \nu_0(\|x - x_0\|) \quad \text{for each } x \in \Omega.$$
Set Ω₃ = S(x₀, ρ₃) ∩ Ω. Notice that for x = x₀, we have ‖Δ^{-1}(Υ′(x₀) − Δ)‖ ≤ ν₀(0) < 1. Thus, Υ′(x₀)^{-1} ∈ 𝓛(Ξ₁, Ξ), and we can take β₀ ≥ (2/3)‖Υ′(x₀)^{-1}Υ(x₀)‖.
(H5)
‖Δ^{-1}(Υ′(y) − Υ′(x))‖ ≤ ν(‖y − x‖) for each x, y ∈ Ω₃, and
(H6)
S[x₀, a] ⊆ Ω.
The semi-local analysis relies on the conditions (H1)–(H6).
Theorem 5.
Suppose that the conditions (H1)–(H6) hold. Then, the following assertions hold for the sequence {x_n} produced by the method (33):
$$\{x_n\} \subset S(x_0, a), \tag{47}$$
$$\|y_n - x_n\| \le \beta_n - \alpha_n, \tag{48}$$
$$\|x_{n+1} - y_n\| \le \alpha_{n+1} - \beta_n, \tag{49}$$
and there exists a solution γ ∈ S[x₀, a] of the equation Υ(x) = 0 such that
$$\|\gamma - x_n\| \le a - \alpha_n. \tag{50}$$
Proof of Theorem 5. 
As in the local analysis, the assertions (47)–(49) are shown using induction. The definition of α₀, β₀ and the first substep of the method (33) imply that the assertions (47) and (48) hold for n = 0 and that the iterate y₀ ∈ S(x₀, a). The conditions (H1)–(H4) show, as in the local case, that for each u ∈ S(x₀, a),
$$\|\Delta^{-1}(\Upsilon'(u) - \Delta)\| \le \nu_0(\|u - x_0\|) < 1,$$
so Υ′(u)^{-1} ∈ 𝓛(Ξ₁, Ξ) with ‖Υ′(u)^{-1}Δ‖ ≤ 1/(1 − ν₀(‖u − x₀‖)), and similarly
$$\|(4\Delta)^{-1}(A_m - 4\Delta)\| \le \frac{1}{4}\big(3\nu_0(\|y_m - x_0\|) + \nu_0(\|x_m - x_0\|)\big) \le \frac{1}{4}\big(3\nu_0(\beta_m) + \nu_0(\alpha_m)\big) = q_m < 1,$$
so A_m^{-1} ∈ 𝓛(Ξ₁, Ξ), and
$$\|A_m^{-1}\Delta\| \le \frac{1}{4(1 - q_m)}.$$
Consequently, the iterate x_{m+1} exists by the second substep of the method (33). By subtracting the first substep from the second, we have the following:
$$x_{m+1} - y_m = -\frac{1}{12}\,\Upsilon'(x_m)^{-1}\Upsilon(x_m) - \frac{9}{4}\,\Upsilon'(y_m)^{-1}\Upsilon(x_m) + 8A_m^{-1}\Upsilon(x_m).$$
It follows that
$$\|x_{m+1} - y_m\| \le \frac{1}{8}(\beta_m - \alpha_m) + \frac{27\big(1 + \nu_0(\alpha_m)\big)}{8\big(1 - \nu_0(\beta_m)\big)}(\beta_m - \alpha_m) + \frac{3\big(1 + \nu_0(\alpha_m)\big)}{1 - q_m}(\beta_m - \alpha_m) = \alpha_{m+1} - \beta_m,$$
and
$$\|x_{m+1} - x_0\| \le \|x_{m+1} - y_m\| + \|y_m - x_0\| \le \alpha_{m+1} - \beta_m + \beta_m - \alpha_0 = \alpha_{m+1} < a,$$
so the assertion (47) holds for n = m + 1 , as well as (49) for n = m . By the last substep of the method (33), we can write in turn that
$$\Upsilon(x_{m+1}) = \Upsilon(x_{m+1}) - \Upsilon(x_m) - \frac{3}{2}\,\Upsilon'(x_m)(y_m - x_m) = \int_0^1 \Upsilon'\big(x_m + \tau(x_{m+1} - x_m)\big)\,d\tau\,(x_{m+1} - x_m) - \frac{3}{2}\,\Upsilon'(x_m)(y_m - x_m),$$
leading to
$$\begin{aligned} \|\Delta^{-1}\Upsilon(x_{m+1})\| &\le \left(1 + \int_0^1 \nu_0\big(\|x_m - x_0\| + \tau\|x_{m+1} - x_m\|\big)\,d\tau\right)\|x_{m+1} - x_m\| + \frac{3}{2}\big(1 + \nu_0(\|x_m - x_0\|)\big)\|y_m - x_m\|\\ &\le \left(1 + \int_0^1 \nu_0\big(\alpha_m + \tau(\alpha_{m+1} - \alpha_m)\big)\,d\tau\right)(\alpha_{m+1} - \alpha_m) + \frac{3}{2}\big(1 + \nu_0(\alpha_m)\big)(\beta_m - \alpha_m) = \delta_{m+1}, \end{aligned}$$
where we also use
$$\|\Delta^{-1}\Upsilon'(x_m)\| = \|\Delta^{-1}(\Upsilon'(x_m) - \Delta + \Delta)\| \le 1 + \nu_0(\|x_m - x_0\|) \le 1 + \nu_0(\alpha_m),$$
and
$$\Big\|\int_0^1 \Delta^{-1}\big(\Upsilon'(x_m + \tau(x_{m+1} - x_m)) - \Delta + \Delta\big)\,d\tau\Big\| \le 1 + \int_0^1 \nu_0\big(\|x_m - x_0\| + \tau\|x_{m+1} - x_m\|\big)\,d\tau \le 1 + \int_0^1 \nu_0\big(\alpha_m + \tau(\alpha_{m+1} - \alpha_m)\big)\,d\tau.$$
Therefore, by the first substep of (33), we have the following:
$$\|y_{m+1} - x_{m+1}\| \le \frac{2}{3}\,\|\Upsilon'(x_{m+1})^{-1}\Delta\|\,\|\Delta^{-1}\Upsilon(x_{m+1})\| \le \frac{2}{3}\,\frac{\delta_{m+1}}{1 - \nu_0(\|x_{m+1} - x_0\|)} \le \frac{2}{3}\,\frac{\delta_{m+1}}{1 - \nu_0(\alpha_{m+1})} = \beta_{m+1} - \alpha_{m+1},$$
and
$$\|y_{m+1} - x_0\| \le \|y_{m+1} - x_{m+1}\| + \|x_{m+1} - x_0\| \le \beta_{m+1} - \alpha_{m+1} + \alpha_{m+1} - \alpha_0 = \beta_{m+1} < a.$$
The induction for the assertions (47)–(49) is complete. It follows that the sequence {x_m} is Cauchy in the Banach space Ξ and, as such, convergent to some γ ∈ S[x₀, a]. Letting m → +∞ and using the continuity of the operator Υ, we deduce that Υ(γ) = 0. Moreover, we have the estimation
$$\|x_{m+i} - x_m\| \le \alpha_{m+i} - \alpha_m, \tag{52}$$
for i a natural number. The last assertion (50) of Theorem 5 follows by letting i → +∞ in (52). □
Remark 3.
(i) 
The parameter a can be replaced by ρ₃ in the condition (H6).
(ii) 
As in the local case, possible choices are Δ = I or Δ = Υ′(x₀), provided that the operator Υ′(x₀) is invertible. Other choices satisfying the conditions (H4) and (H5) are possible.
The separation of solutions is given in the following result:
Proposition 2.
Suppose that there exists a solution γ₁ ∈ S(x₀, ρ₅) of the equation Υ(x) = 0 for some ρ₅ > 0, that the condition (H4) holds in the ball S(x₀, ρ₅), and that there exists ρ₆ ≥ ρ₅ such that
$$\int_0^1 \nu_0\big(\tau\rho_5 + (1 - \tau)\rho_6\big)\,d\tau < 1. \tag{53}$$
Set Ω₄ = S[x₀, ρ₆] ∩ Ω. Then, the only solution of the equation Υ(x) = 0 in the set Ω₄ is γ₁.
Proof of Proposition 2. 
Suppose that there exists γ₂ ∈ Ω₄ solving the equation Υ(x) = 0 and satisfying γ₂ ≠ γ₁. Define the linear operator L₁ = ∫₀¹Υ′(γ₁ + τ(γ₂ − γ₁))dτ. Then, by condition (H4) and (53), it follows that
$$\|\Delta^{-1}(L_1 - \Delta)\| \le \int_0^1 \nu_0\big(\tau\|\gamma_1 - x_0\| + (1 - \tau)\|\gamma_2 - x_0\|\big)\,d\tau \le \int_0^1 \nu_0\big(\tau\rho_5 + (1 - \tau)\rho_6\big)\,d\tau < 1.$$
Thus, L₁^{-1} ∈ 𝓛(Ξ₁, Ξ), and we can write the following:
$$\gamma_2 - \gamma_1 = L_1^{-1}\big(\Upsilon(\gamma_2) - \Upsilon(\gamma_1)\big) = L_1^{-1}(0) = 0.$$
Therefore, we deduce that γ 2 = γ 1 . □
Remark 4.
If all of the conditions (H1)–(H6) hold, we can take γ₁ = γ and ρ₅ = a in Proposition 2.

6. Stability Analysis

The stability analysis of the method is performed via complex dynamics. Research about this topic can be found in references [24,25]. In recent years, dynamic studies have been carried out to analyze the stability of iterative methods [26,27,28].
The stability of scheme (20) is analyzed through its application to quadratic polynomials. We work with the general expression p(z) = (z − a)(z − b), where a, b ∈ Ĉ. Applying (20) to p(z), the resulting rational operator is as follows:
$$R(z) = z + \frac{3}{4}(z - a)(z - b)\left[\frac{9(a + b - 2z)}{3a^2 - 8z(a + b) + 2ab + 3b^2 + 8z^2} + \frac{1}{a + b - 2z}\right] - \frac{2(a - z)(b - z)(a + b - 2z)}{a^2 - 3z(a + b) + ab + b^2 + 3z^2}.$$
Let us note that R(z) depends on the variable z and on the roots a and b. However, applying the Möbius transformation M(z) = (z − a)/(z − b), we find that R(z) is conjugate to the operator
$$O(z) = \big(M \circ R \circ M^{-1}\big)(z) = z^4\,\frac{3z^2 + 5z + 5}{5z^2 + 5z + 3},$$
which has a simpler expression and no longer depends on the roots a and b. In fact, a and b have been mapped to 0 and ∞, respectively. The rational operator fits the form
$$R(z) = z^{p}\,\frac{\sum_{i=0}^{n} a_i z^i}{\sum_{i=0}^{n} a_{n-i} z^i} = z^{p}\,\frac{a_0 + a_1 z + \cdots + a_{n-1}z^{n-1} + a_n z^n}{a_n + a_{n-1}z + \cdots + a_1 z^{n-1} + a_0 z^n},$$
where {a_i}_{i=0}^{n} ⊂ ℝ. Properties of this kind of rational operator can be found in [29].
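The conjugation can be verified symbolically. The following sketch, assuming SymPy, applies one step of (20) to p(z) = (z − a)(z − b), conjugates by the Möbius map, and simplifies; the result should be free of a and b. The simplification may take a few seconds.

```python
import sympy as sp

z, a, b, u = sp.symbols('z a b u')

p   = (u - a)*(u - b)                 # generic quadratic with roots a, b
dp  = sp.diff(p, u)
y   = u - sp.Rational(2, 3)*p/dp      # first step of (20)
dpy = dp.subs(u, y)                   # p'(y)

# One step of scheme (20) applied to p, as a map u -> R(u)
R = u - sp.Rational(3, 4)*p*(1/dp + 3/dpy) + 8*p/(dp + 3*dpy)

M_inv = (b*z - a)/(z - 1)             # inverse of the Moebius map M(t) = (t-a)/(t-b)
Rm = R.subs(u, M_inv)
O = sp.cancel(sp.together((Rm - a)/(Rm - b)))   # O = M o R o M^{-1}

print(sp.factor(O))   # expected: z**4*(3*z**2 + 5*z + 5)/(5*z**2 + 5*z + 3)
```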
Proposition 3.
The fixed points of O ( z ) are as follows:
  • z₀ = 0 and z_∞ = ∞, which are super-attracting;
  • z₁ = 1, which is repelling; and
  • z₂ ≈ −987/1040 + (491/392)i, z̄₂, z₃ ≈ −553/1439 + (1022/2015)i, and z̄₃, the roots of the polynomial 3z⁴ + 8z³ + 13z² + 8z + 3, which are repelling.
Proof of Proposition 3. 
Fixed points satisfy O(z) = z:
$$z^4\,\frac{3z^2 + 5z + 5}{5z^2 + 5z + 3} = z \iff z\left(z^3\,\frac{3z^2 + 5z + 5}{5z^2 + 5z + 3} - 1\right) = 0;$$
therefore, z = 0 is a fixed point. In addition,
$$z^3\,\frac{3z^2 + 5z + 5}{5z^2 + 5z + 3} - 1 = 0 \iff 3z^5 + 5z^4 + 5z^3 - 5z^2 - 5z - 3 = 0 \iff (z - 1)\big(3z^4 + 8z^3 + 13z^2 + 8z + 3\big) = 0,$$
so the remaining fixed points are z = 1 and the roots of the polynomial 3z⁴ + 8z³ + 13z² + 8z + 3. The operator O(z) satisfies lim_{z→0} 1/O(1/z) = 0, so z = ∞ is also a fixed point. The derivative of the rational operator is as follows:
$$O'(z) = \frac{6z^3\left(10z^4 + 25z^3 + 34z^2 + 25z + 10\right)}{\left(5z^2 + 5z + 3\right)^2},$$
whose evaluation at the fixed points gives O′(0) = 0, O′(1) = 48/13 > 1, and |O′(z₂)| = |O′(z̄₂)| = |O′(z₃)| = |O′(z̄₃)| ≈ 7.2188 > 1, so z = 0 is super-attracting and the remaining strange fixed points are repelling. Furthermore, the conjugate operator 1/O(1/z) also has derivative zero at z = 0, so z = ∞ is super-attracting. □
Since the only attracting fixed points correspond to the roots of the nonlinear function, stability is guaranteed.
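Proposition 3 can also be double-checked numerically; the sketch below, a hedged illustration using NumPy, confirms that z = 1 and the roots of 3z⁴ + 8z³ + 13z² + 8z + 3 are fixed points of O(z) with multiplier |O′(z)| > 1.

```python
import numpy as np

O  = lambda z: z**4 * (3*z**2 + 5*z + 5) / (5*z**2 + 5*z + 3)
dO = lambda z: 6*z**3 * (10*z**4 + 25*z**3 + 34*z**2 + 25*z + 10) / (5*z**2 + 5*z + 3)**2

# z = 1 and the roots of 3z^4 + 8z^3 + 13z^2 + 8z + 3 (the strange fixed points)
for zf in [1.0] + list(np.roots([3, 8, 13, 8, 3])):
    print(zf, abs(O(zf) - zf) < 1e-9, abs(dO(zf)))  # fixed point? multiplier |O'|
```

The run should report |O′(1)| = 48/13 ≈ 3.69 and |O′| ≈ 7.2188 at the four complex strange fixed points, in agreement with the proposition.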
Proposition 4.
The critical points of O ( z ) are as follows:
  • z₀ = 0 and z_∞ = ∞; and
  • z₄ ≈ −755/1783 + (260/287)i, z̄₄, z₅ ≈ −1301/1574 + (497/883)i, and z̄₅, the roots of the polynomial 10 + 25z + 34z² + 25z³ + 10z⁴ = 0.
Dynamical planes illustrate the basins of attraction of the attracting fixed points, showing the stability of a single method. The implementation of the dynamical plane was developed in Matlab R2022b, following the guidelines of [30]. Figure 2 represents the dynamical plane of the rational operator O(z). Orange and blue represent the basins of attraction of z₀ and z_∞, respectively. White circles refer to strange fixed points, while white squares represent free critical points. The dynamical plane only considers the initial guesses ℜ{z₀} ∈ [−1.3, 1.3], ℑ{z₀} ∈ [−1.3, 1.3], since the rest of the dynamical plane is completely blue.
The dynamical plane evidences the high stability of the iterative method for quadratic polynomials. Although the boundaries between the basins of attraction are intricate, each initial guess (except for the five extraneous fixed points) converges to a fixed attracting point that is coincident with the roots of the polynomial.
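An approximate reproduction of Figure 2 is sketched below in Python; this is our re-implementation, not the Matlab code of [30]. Each point of a grid on [−1.3, 1.3] × [−1.3, 1.3] is iterated under O(z) and colored according to the attractor it approaches; the grid size, iteration count, and thresholds are illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt

O = lambda z: z**4 * (3*z**2 + 5*z + 5) / (5*z**2 + 5*z + 3)

re, im = np.meshgrid(np.linspace(-1.3, 1.3, 600), np.linspace(-1.3, 1.3, 600))
z = re + 1j*im
with np.errstate(all='ignore'):          # isolated poles may warn; harmless here
    for _ in range(40):                  # iterate the rational operator
        z = O(z)
        z = np.where(np.abs(z) > 1e8, 1e8 + 0j, z)   # cap escaped orbits

basin = np.where(np.abs(z) < 1e-3, 0,                # basin of z = 0
        np.where(np.abs(z) >= 1e8, 1, 2))            # basin of infinity / other
plt.imshow(basin, extent=[-1.3, 1.3, -1.3, 1.3], origin='lower')
plt.xlabel('Re(z)'); plt.ylabel('Im(z)')
plt.show()
```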

7. Numerical Results

This section elaborates on the utilization of numerical simulations for both scalar and vector forms of nonlinear equations, integrating both theoretical and practical models. In the comparative analysis, we examine specific parameters such as the number of iterations (i), the magnitude of the absolute error (e_n = |x_{n+1} − x_n| for scalars and e^{(n)} = ‖x^{(n+1)} − x^{(n)}‖ for vectors) at each iteration stage, and the processing time measured in CPU seconds. The simulations employed various iterative methodologies as delineated in Section 2, juxtaposed against the proposed fourth-order optimal iterative technique, as depicted in Equation (20) for scalars and (23) for systems. The numerical analyses in terms of tabular results were conducted using the software MAPLE 2022 on an Intel (R) Core (TM) i7 HP laptop equipped with 24 GB of RAM and operating at a frequency of 1.3 GHz, while Python was used for graphical outputs. Regarding the numerical simulations, a maximum precision threshold of 4000 digits was established. Additionally, a cap of 50 iterations was imposed to attain the requisite solution. The simulations are stopped based on the following halting criteria. For a single-variable nonlinear equation (f(x) = 0), we have the following:
$$e_n = |x_{n+1} - x_n| \le 10^{-200}, \qquad n = 1, 2, 3, \ldots, \tag{55}$$
and for a multi-variable nonlinear system (F(x) = 0), we have the following:
$$e^{(n)} = \|x^{(n+1)} - x^{(n)}\| \le 10^{-200}, \qquad n = 1, 2, 3, \ldots. \tag{56}$$
The following problems are considered from recent papers. The exact solution ( α ) is shown against the test functions:
Problem 1.
φ₁(x) = x² + sin(x) − 1, α ≈ 0.0.
Problem 2.
φ₂(x) = log(x) − x³ + 2 sin(x), α ≈ 1.297997743280371847164479238286.
Problem 3
([31]). Boussinesq’s formula for vertical stress in the fields of soil mechanics and geotechnical engineering is given by the following:
$$\mu_z = \frac{p}{\pi}\big[x + \cos(x)\sin(x)\big]. \tag{57}$$
Equation (57), when μ_z = 1/4, can be written as follows:
$$\varphi_3(x) = \frac{x + \cos(x)\sin(x)}{\pi} - 0.25. \tag{58}$$
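As a usage illustration, Problem 3 can be solved with the scalar sketch of (20) given after Section 3; the initial guess is our choice, and the derivative uses (sin x cos x)′ = cos 2x.

```python
import math

phi3  = lambda x: (x + math.cos(x)*math.sin(x)) / math.pi - 0.25
dphi3 = lambda x: (1 + math.cos(2*x)) / math.pi   # since (sin x cos x)' = cos 2x
print(oppnm(phi3, dphi3, x0=0.5))                 # root near 0.416
```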
In Table 1, the OPPNM method exhibits the most promising results when compared to OPNM1, OPNM2, OPNM3, and OPNM4. This method demonstrates rapid convergence toward the solution, particularly noticeable in the initial iterations across multiple problems and initial guesses, indicating a superior efficiency in approaching the correct solution quickly. While other methods show varying degrees of convergence and accuracy, OPPNM consistently reduces the absolute errors’ magnitude with each iteration, suggesting a robust performance across different nonlinear equations. Although some methods may reach lower errors at the final iteration, the speed and reliability of OPPNM in achieving a significant error reduction early on make it a potentially preferable method, especially in applications where a quick approximation is valuable. This aligns with the expectation that a proposed method should outperform existing ones, and in this context, the proposed optimal method OPPNM given in (20) fulfills the criteria effectively.
The graphical representation in the images illustrates a comparative study of different numerical solvers applied to the functions given in Problems 1–3. The efficiency curves are plotted in Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8 to show the relationship between the precision of the solution, measured by the absolute error on the y-axis, and the computational effort, represented by the number of iterations and CPU time on the x-axis. In both sets of comparisons, the OPPNM method emerges as the most efficient. Specifically, for the initial guesses of 6.5 and 8.0 for φ 1 ( x ) , OPPNM achieves a rapid convergence to a low absolute error, outpacing the other methods. The steep descent of the OPPNM curves demonstrates its swift reduction in error as iterations progress, and it also shows significantly less CPU time required to reach a similar level of accuracy compared to its counterparts. This suggests a high rate of convergence and lower computational cost, making OPPNM the preferred solver based on the data presented. The claim that OPPNM is an optimal fourth-order method is substantiated by its apparent superior performance, both in terms of speed and computational resources. A similar sort of observation is made for Problems 2 and 3 in their efficiency curves.
The proposed fourth-order method (20) showcases notable progress in solving nonlinear equations of the form φ ( x ) = 0 , establishing a fresh standard in computational efficiency. This methodology accelerates the calculation process and minimizes operating expenses by reaching the solution in fewer iterations and requiring the least CPU time compared to other optimal fourth-order algorithms as shown in Figure 3, Figure 4, Figure 5, Figure 6, Figure 7 and Figure 8. This efficiency is especially useful in large computing jobs and intricate simulations where time and resource allocation are critical. The method’s innovative algorithmic framework and exceptional performance make it the preferred choice for researchers and professionals seeking swift, precise, and cost-efficient solutions in applied mathematics, engineering, and other fields.
Given below are some nonlinear systems taken in higher dimensions. Table 2 compares the absolute errors of five numerical methods, labeled from OPPNM to OPNM4, applied to solve nonlinear systems of equations for Problems 4 through 8. The iterations are stopped when the desired accuracy mentioned in (56) is achieved. The OPPNM method, described as a “fourth-order optimal root-solver”, exhibits the lowest absolute errors across all iterations, indicating its superior accuracy and convergence rate. For instance, in Problem 4, the absolute error achieved by OPPNM ranges from 10⁻¹ down to 10⁻¹⁷⁸³, significantly outperforming the other methods. This trend of minimal errors is consistent across the problems listed, highlighting the effectiveness of the OPPNM method. The higher order of convergence inherent to OPPNM likely contributes to its ability to rapidly decrease errors with each iteration. This characteristic is crucial for achieving accurate solutions efficiently. Thus, the table substantiates the claim that OPPNM is the superior method among those compared, due to its optimal convergence properties and the consistently lower absolute errors it achieves, which aligns with expectations for a method touted as a fourth-order optimal root-solver. Moreover, in Problem 8, the focus is on the absolute errors of the last 7 iterations out of 24 for the OPPNM to OPNM4 methods. Here, the OPPNM method stands out with its absolute error reduction, showcasing errors diminishing from 10⁻⁶ to an impressively low 10⁻¹²¹¹. This sharp decline in error magnitude demonstrates the method’s robustness and its capacity for high-precision solutions. In contrast, the other methods, OPNM1 through OPNM4, exhibit higher errors in the last iterations, with the smallest error being in the range of 10⁻⁵², which is significantly higher than that of OPPNM. This indicates that while the other methods are converging, they do so at a slower rate and with less accuracy. The OPPNM’s superior performance in the tail end of the iterations suggests stable convergence without signs of stagnation, underscoring its efficiency and effectiveness as a fourth-order optimal root-solver, particularly as the solution is refined in the final iterations.
Problem 4.
The non-linear system F ( x ) of two equations from [14] is given as follows:
$$x_1^2 - x_2 - 19 = 0, \qquad -x_1^2 + \frac{x_2^3}{6} + x_2 - 17 = 0. \tag{59}$$
The initial guess is taken to be x^{(0)} = [5.1, 6.1]^T, and the exact solution of the system (59) is α = [5, 6]^T.
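As a usage illustration, Problem 4 can be solved with the NumPy sketch of (23) given in Section 4.2; the encoding of the system and its Jacobian below follows (59).

```python
import numpy as np

F = lambda x: np.array([x[0]**2 - x[1] - 19.0,
                        -x[0]**2 + x[1]**3/6.0 + x[1] - 17.0])
J = lambda x: np.array([[2.0*x[0], -1.0],
                        [-2.0*x[0], x[1]**2/2.0 + 1.0]])
print(oppnm_system(F, J, x0=[5.1, 6.1]))  # expected to approach [5, 6]
```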
Problem 5.
The non-linear system F ( x ) of two equations from [14] is given as follows:
$$\log(x_2) - x_1^2 + x_1 x_2 = 0, \qquad \log(x_1) - x_2^2 + x_1 x_2 = 0. \tag{60}$$
The initial guess is taken to be x^{(0)} = [0.5, 1.5]^T, and the exact solution of the system (60) is α = [1, 1]^T.
Problem 6.
The non-linear system F ( x ) of four equations from [14] is given as follows:
$$x_2 x_3 + x_4(x_2 + x_3) = 0, \quad x_1 x_3 + x_4(x_1 + x_3) = 0, \quad x_1 x_2 + x_4(x_1 + x_2) = 0, \quad x_1 x_2 + x_1 x_3 + x_2 x_3 - 1 = 0. \tag{61}$$
The initial guess is taken to be x^{(0)} = [2.5, 2.5, 2.5, −1.5]^T, and the exact solution of the system (61) is α = [0.577350, 0.577350, 0.577350, −0.288675]^T.
Problem 7.
Neurophysiology application [32,33]: The nonlinear model consists of the following six equations:
$$x_1^2 + x_3^2 = 1, \quad x_2^2 + x_4^2 = 1, \quad x_5 x_3^3 + x_6 x_4^3 = c_1, \quad x_5 x_1^3 + x_6 x_2^3 = c_2, \quad x_5 x_1 x_3^2 + x_6 x_4^2 x_2 = c_3, \quad x_5 x_1^2 x_3 + x_6 x_2^2 x_4 = c_4, \tag{62}$$
where the constants c_i in the above model can be chosen randomly. In our experiment, we consider c_i = 0, i = 1, …, 4. The initial guess is taken to be x^{(0)} = [0.9, 0.8, 0.7, 0.5, 0.3, 0.1]^T, where the solution of the above 6 × 6 system to the first few digits is as follows:
$$\alpha \approx \big[0.3162277660,\ 0.4472135955,\ 0.9486832980,\ 0.8944271909,\ 1.42 \times 10^{-12225},\ 5.42 \times 10^{-19336}\big]^T.$$
Problem 8.
Lastly, we consider a 10-dimensional nonlinear system that is related to combustion. The investigation of combustion phenomena at elevated temperatures, specifically at 3000 °C, formulated through a system of ten nonlinear algebraic equations by A. P. Morgan [34], serves as a quintessential exemplar of the intricate confluence of disciplines, including chemical engineering, thermodynamics, and applied numerical analysis.
$$\begin{aligned} &x_2 + 2x_6 + x_9 + 2x_{10} - 10^{-5} = 0,\\ &x_3 + x_8 - 3\times 10^{-5} = 0,\\ &x_1 + x_3 + 2x_5 + 2x_8 + x_9 + x_{10} - 5\times 10^{-5} = 0,\\ &x_4 + 2x_7 - 10^{-5} = 0,\\ &x_1^2 - 0.5140437\times 10^{-7}x_5 = 0,\\ &2x_2^2 - 0.1006932\times 10^{-6}x_6 = 0,\\ &x_4^2 - 0.7816278\times 10^{-15}x_7 = 0,\\ &x_1 x_3 - 0.1496236\times 10^{-6}x_8 = 0,\\ &x_1 x_2 - 0.6914411\times 10^{-7}x_9 = 0,\\ &x_1 x_2^2 - 0.2089296\times 10^{-14}x_{10} = 0. \end{aligned}$$
The initial guess is taken to be x^{(0)} = [0.1, 0.4, 0.2, 0.3, 0.1, 0.6, 0.7, 0.5, 0.1, 0.4]^T, where the solution of the above 10 × 10 system to the first few digits is as follows:
$$\alpha \approx \big[0.00000014709013277155,\ 0.00000022619636102493,\ 0.000000152807633833404,\ 0.0000000006251491477,\ 0.000000042088848007963,\ 0.000000101625122135197,\ 0.000000499996874254262,\ 0.0000001487192366166596,\ 0.0000005371172945330,\ 0.000000360209195086792\big]^T.$$

8. Conclusions and Future Remarks

This research highlights the important role played by nonlinear equations in various scientific fields, such as engineering, physics, and mathematics. Given the prevalence of mathematical models lacking closed-form solutions in these areas, the need for numerical methodologies, in particular root-solving algorithms, becomes evident. This study presents a new fourth-order convergent root solver that advances iterative approaches and offers improved accuracy. This solver, adapted via a linear combination to satisfy the optimality condition posed by the Kung–Traub conjecture, achieves an efficiency index of 1.5874. By means of local and semi-local analyses, we rigorously discuss its convergence and efficiency. Furthermore, the examination of local and semi-local convergence, together with dynamical stability analysis, highlights the robustness of the solver in dealing with systems of nonlinear equations. Empirical validations using mathematical models from various fields such as physics, mechanics, chemistry, and combustion have demonstrated the superior performance of the proposed solver.
For future research, it would be worthwhile to explore the integration of this solver with machine learning techniques to further improve its predictive capabilities and its effectiveness on even more complex systems of equations. This could open new avenues for solving real-world problems with greater accuracy and speed.

Author Contributions

Conceptualization, S.Q. and I.K.A.; methodology, S.Q. and E.H.; software, F.I.C. and A.S.; validation, J.A. and E.H.; formal analysis, S.Q. and I.K.A.; investigation, S.Q.; resources, J.A.; writing—original draft preparation, A.S.; writing—review and editing, F.I.C.; visualization, J.A.; supervision, S.Q.; project administration, S.Q.; funding acquisition, F.I.C. All authors have read and agreed to the published version of the manuscript.

Funding

F.I.C. was partially funded by the grant “Ayuda a Primeros Proyectos de Investigación (PAID-06-23), Vicerrectorado de Investigación de la Universitat Politècnica de València (UPV)”, within the framework of the project MERLIN.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors are especially grateful to the reviewers, whose reports have significantly improved the final version of this article.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Faires, J.; Burden, R. Numerical Methods, 4th ed.; Cengage Learning: Belmont, CA, USA, 2012.
2. Naseem, A.; Rehman, M.; Abdeljawad, T. A novel root-finding algorithm with engineering applications and its dynamics via computer technology. IEEE Access 2022, 10, 19677–19684.
3. Abro, H.A.; Shaikh, M.M. A new family of twentieth order convergent methods with applications to nonlinear systems in engineering. Mehran Univ. Res. J. Eng. Technol. 2023, 42, 165–176.
4. Shaikh, M.M.; Massan, S.-u.-R.; Wagan, A.I. A sixteen decimal places’ accurate Darcy friction factor database using non-linear Colebrook’s equation with a million nodes: A way forward to the soft computing techniques. Data Brief 2019, 27, 104733.
5. Argyros, M.I.; Argyros, I.K.; Regmi, S.; George, S. Generalized three-step numerical methods for solving equations in Banach spaces. Mathematics 2022, 10, 2621.
6. Ramos, H.; Monteiro, M.T.T. A new approach based on the Newton’s method to solve systems of nonlinear equations. J. Comput. Appl. Math. 2017, 318, 3–13.
7. Ramos, H.; Vigo-Aguiar, J. The application of Newton’s method in vector form for solving nonlinear scalar equations where the classical Newton method fails. J. Comput. Appl. Math. 2015, 275, 228–237.
8. Abdullah, S.; Choubey, N.; Dara, S. Optimal fourth- and eighth-order iterative methods for solving nonlinear equations with basins of attraction. J. Appl. Math. Comput. 2024.
9. Yun, J.H. A note on three-step iterative method for nonlinear equations. Appl. Math. Comput. 2008, 202, 401–405.
10. Dehghan, M.; Shirilord, A. Three-step iterative methods for numerical solution of systems of nonlinear equations. Eng. Comput. 2022, 38, 1015–1028.
11. Soleymani, F.; Vanani, S.K.; Afghani, A. A general three-step class of optimal iterations for nonlinear equations. Math. Probl. Eng. 2011, 2011, 469512.
12. Darvishi, M. Some three-step iterative methods free from second order derivative for finding solutions of systems of nonlinear equations. Int. J. Pure Appl. Math. 2009, 57, 557–573.
13. Kung, H.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. ACM 1974, 21, 643–651.
14. Singh, A.; Jaiswal, J. Several new third-order and fourth-order iterative methods for solving nonlinear equations. Int. J. Eng. Math. 2014, 2014, 828409.
15. Jaiswal, J.P. Some class of third- and fourth-order iterative methods for solving nonlinear equations. J. Appl. Math. 2014, 2014, 817656.
16. Sharma, E.; Panday, S.; Dwivedi, M. New optimal fourth order iterative method for solving nonlinear equations. Int. J. Emerg. Technol. 2020, 11, 755–758.
17. Khattri, S.K.; Abbasbandy, S. Optimal fourth order family of iterative methods. Matematički Vesnik 2011, 63, 67–72.
18. Chun, C.; Lee, M.Y.; Neta, B.; Džunić, J. On optimal fourth-order iterative methods free from second derivative and their dynamics. Appl. Math. Comput. 2012, 218, 6427–6438.
19. Panday, S.; Sharma, A.; Thangkhenpau, G. Optimal fourth and eighth-order iterative methods for non-linear equations. J. Appl. Math. Comput. 2023, 69, 953–971.
20. Abro, H.A.; Shaikh, M.M. A new time-efficient and convergent nonlinear solver. Appl. Math. Comput. 2019, 355, 516–536.
21. Qureshi, S.; Ramos, H.; Soomro, A.K. A New Nonlinear Ninth-Order Root-Finding Method with Error Analysis and Basins of Attraction. Mathematics 2021, 9, 1996.
22. Argyros, I. Unified Convergence Criteria for Iterative Banach Space Valued Methods with Applications. Mathematics 2021, 9, 1942.
23. Argyros, I. The Theory and Applications of Iteration Methods, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2022.
24. Devaney, R.L. An Introduction to Chaotic Dynamical Systems; Addison-Wesley: Boston, MA, USA, 1989.
25. Beardon, A.F. Iteration of Rational Functions: Complex Analytic Dynamical Systems; Springer: New York, NY, USA, 1991.
26. Wang, X.; Chen, X.; Li, W. Dynamical behavior analysis of an eighth-order Sharma’s method. Int. J. Biomath. 2023, 2350068.
27. Kroszczynski, K.; Kiliszek, D.; Winnicki, I. Some Properties of the Basins of Attraction of the Newton’s Method for Simple Nonlinear Geodetic Systems. Preprints 2021, 2021120151.
28. Campos, B.; Villalba, E.G.; Vindel, P. Dynamical and numerical analysis of classical multiple roots finding methods applied for different multiplicities. Comput. Appl. Math. 2024, 43, 230.
29. Campos, B.; Canela, J.; Vindel, P. Dynamics of Newton-like root finding methods. Numer. Algorithms 2023, 93, 1453–1480.
30. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing Dynamical and Parameters Planes of Iterative Families and Methods. Sci. World J. 2013, 2013, 708153.
31. Abdullah, S.; Choubey, N.; Dara, S. An efficient two-point iterative method with memory for solving non-linear equations and its dynamics. J. Appl. Math. Comput. 2024, 70, 285–315.
32. Verschelde, J.; Verlinden, P.; Cools, R. Homotopies exploiting Newton polytopes for solving sparse polynomial systems. SIAM J. Numer. Anal. 1994, 31, 915–930.
33. Grosan, C.; Abraham, A. A new approach for solving nonlinear equations systems. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2008, 38, 698–714.
34. Morgan, A. Solving Polynomial Systems Using Continuation for Engineering and Scientific Problems; SIAM: Philadelphia, PA, USA, 2009.
Figure 1. Flowchart of the proposed optimal fourth-order numerical solver given in (20).
Figure 2. Dynamical plane of $O(z)$.
Figure 3. Efficiency curves for the function $\varphi_1(x)$ with the numerical solvers under consideration for $x_0 = 6.5$.
Figure 4. Efficiency curves for the function $\varphi_1(x)$ with the numerical solvers under consideration for $x_0 = 8$.
Figure 5. Efficiency curves for the function $\varphi_2(x)$ with the numerical solvers under consideration for $x_0 = 5$.
Figure 6. Efficiency curves for the function $\varphi_2(x)$ with the numerical solvers under consideration for $x_0 = 7.2$.
Figure 7. Efficiency curves for the function $\varphi_3(x)$ with the numerical solvers under consideration for $x_0 = 0.2$.
Figure 8. Efficiency curves for the function $\varphi_3(x)$ with the numerical solvers under consideration for $x_0 = 0.4$.
Table 1. Numerical simulations for the scalar nonlinear equations presented in Problems 1–3 (IG: initial guess; e_1, …, e_8: errors at successive iterations; a dash marks entries not reported).

| Problem | IG | Method | e_1 | e_2 | e_3 | e_4 | e_5 | e_6 | e_7 | e_8 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 6.5 | OPPNM | 4.58 | 1.26 | 2.72e-2 | 4.51e-8 | 3.64e-31 | 1.54e-123 | 4.87e-493 | - |
| 1 | 6.5 | OPNM1 | 4.35 | 1.46 | 5.31e-2 | 9.65e-7 | 1.21e-25 | 2.95e-101 | 1.05e-403 | - |
| 1 | 6.5 | OPNM2 | 4.40 | 1.42 | 4.73e-2 | 6.08e-7 | 1.91e-26 | 1.84e-104 | 1.61e-416 | - |
| 1 | 6.5 | OPNM3 | 4.33 | 1.47 | 5.79e-2 | 1.56e-6 | 9.82e-25 | 1.53e-97 | 9.16e-389 | - |
| 1 | 6.5 | OPNM4 | 4.30 | 1.50 | 6.23e-2 | 2.14e-6 | 3.61e-24 | 2.91e-95 | 1.22e-379 | - |
| 1 | 8.0 | OPPNM | 5.54 | 1.76 | 6.16e-2 | 1.09e-6 | 1.25e-25 | 2.13e-101 | 1.80e-404 | - |
| 1 | 8.0 | OPNM1 | 5.41 | 1.85 | 1.01e-1 | 1.10e-5 | 2.06e-21 | 2.49e-84 | 5.35e-336 | - |
| 1 | 8.0 | OPNM2 | 5.43 | 1.84 | 9.26e-2 | 7.93e-6 | 5.50e-22 | 1.27e-86 | 3.66e-345 | - |
| 1 | 8.0 | OPNM3 | 5.39 | 1.86 | 1.08e-1 | 1.66e-5 | 1.26e-20 | 4.13e-81 | 4.80e-323 | - |
| 1 | 8.0 | OPNM4 | 5.39 | 1.86 | 1.08e-1 | 1.66e-5 | 1.26e-20 | 4.13e-81 | 4.80e-323 | - |
| 2 | 5.0 | OPPNM | 2.65 | 8.99e-1 | 1.48e-1 | 9.12e-4 | 2.76e-12 | 2.31e-46 | 1.13e-182 | 6.61e-728 |
| 2 | 5.0 | OPNM1 | 2.51 | 9.62e-1 | 2.23e-1 | 5.56e-3 | 6.87e-9 | 1.67e-32 | 5.78e-127 | 8.34e-505 |
| 2 | 5.0 | OPNM2 | 2.54 | 9.50e-1 | 2.07e-1 | 4.14e-3 | 2.14e-9 | 1.56e-34 | 4.38e-135 | 2.74e-537 |
| 2 | 5.0 | OPNM3 | 2.50 | 9.67e-1 | 2.31e-1 | 6.65e-3 | 1.71e-8 | 7.85e-31 | 3.49e-120 | 1.37e-477 |
| 2 | 5.0 | OPNM4 | 2.48 | 9.73e-1 | 2.41e-1 | 7.85e-3 | 3.43e-8 | 1.33e-29 | 3.02e-115 | 7.96e-458 |
| 2 | 7.2 | OPPNM | 3.94 | 1.53 | 4.05e-1 | 2.05e-2 | 6.30e-7 | 6.29e-25 | 6.27e-97 | 6.17e-385 |
| 2 | 7.2 | OPNM1 | 3.68 | 1.62 | 5.34e-1 | 6.13e-2 | 7.20e-5 | 2.00e-16 | 1.20e-62 | 1.57e-247 |
| 2 | 7.2 | OPNM2 | 3.74 | 1.61 | 5.08e-1 | 5.07e-2 | 3.46e-5 | 1.08e-17 | 1.00e-67 | 7.51e-268 |
| 2 | 7.2 | OPNM3 | 3.66 | 1.63 | 5.45e-1 | 6.71e-2 | 1.15e-4 | 1.58e-15 | 5.79e-59 | 1.04e-232 |
| 2 | 7.2 | OPNM4 | 3.63 | 1.64 | 5.60e-1 | 7.43e-2 | 1.72e-4 | 8.44e-15 | 4.88e-56 | 5.44e-221 |
| 3 | 0.2 | OPPNM | 2.16e-1 | 2.35e-4 | 7.47e-16 | 7.64e-62 | 8.37e-246 | - | - | - |
| 3 | 0.2 | OPNM1 | 2.16e-1 | 2.87e-4 | 2.43e-15 | 1.26e-59 | 9.19e-237 | - | - | - |
| 3 | 0.2 | OPNM2 | 2.16e-1 | 2.83e-4 | 2.31e-15 | 1.03e-59 | 4.07e-237 | - | - | - |
| 3 | 0.2 | OPNM3 | 2.16e-1 | 3.06e-4 | 3.67e-15 | 7.62e-59 | 1.41e-233 | - | - | - |
| 3 | 0.2 | OPNM4 | 2.16e-1 | 3.13e-4 | 4.11e-15 | 1.22e-58 | 9.69e-233 | - | - | - |
| 3 | 0.4 | OPPNM | 1.59e-2 | 1.46e-8 | 1.13e-32 | 4.00e-129 | 6.29e-515 | - | - | - |
| 3 | 0.4 | OPNM1 | 1.59e-2 | 2.12e-8 | 7.28e-32 | 1.01e-125 | 3.78e-501 | - | - | - |
| 3 | 0.4 | OPNM2 | 1.59e-2 | 2.11e-8 | 7.19e-32 | 9.64e-126 | 3.11e-501 | - | - | - |
| 3 | 0.4 | OPNM3 | 1.59e-2 | 2.44e-8 | 1.47e-31 | 1.96e-124 | 6.22e-496 | - | - | - |
| 3 | 0.4 | OPNM4 | 1.59e-2 | 2.51e-8 | 1.71e-31 | 3.68e-124 | 7.91e-495 | - | - | - |
Table 2. Numerical simulations for the systems of nonlinear equations presented in Problems 4–8 (e^(k): error at the k-th iteration; a dash marks entries not reported).

| Problem | Method | e^(1) | e^(2) | e^(3) | e^(4) | e^(5) | e^(6) | e^(7) |
|---|---|---|---|---|---|---|---|---|
| 4 | OPPNM | 1.00e-1 | 5.87e-7 | 7.34e-28 | 1.79e-111 | 6.37e-446 | 1.02e-1783 | - |
| 4 | OPNM1 | 9.90e-2 | 1.16e-6 | 2.25e-26 | 3.19e-105 | 1.28e-420 | 3.33e-1682 | - |
| 4 | OPNM2 | 9.90e-2 | 1.15e-6 | 2.15e-26 | 2.66e-105 | 6.16e-421 | 1.78e-1683 | - |
| 4 | OPNM3 | 9.90e-2 | 1.43e-6 | 6.40e-26 | 2.50e-105 | 7.01e-413 | 3.72e-1651 | - |
| 4 | OPNM4 | 9.90e-2 | 1.49e-6 | 8.01e-26 | 6.65e-103 | 3.15e-411 | 1.59e-1644 | - |
| 5 | OPPNM | 5.64e-1 | 7.93e-2 | 2.38e-6 | 1.93e-24 | 9.52e-97 | 5.70e-386 | 7.36e-1543 |
| 5 | OPNM1 | 5.63e-1 | 6.33e-2 | 3.98e-6 | 5.93e-23 | 2.92e-90 | 1.72e-359 | 2.06e-1436 |
| 5 | OPNM2 | 5.66e-1 | 6.86e-2 | 5.09e-6 | 1.55e-22 | 1.37e-88 | 8.25e-353 | 1.09e-1409 |
| 5 | OPNM3 | 5.69e-1 | 6.80e-2 | 6.60e-6 | 5.88e-22 | 3.80e-86 | 6.65e-343 | 6.24e-1370 |
| 5 | OPNM4 | 5.69e-1 | 6.87e-2 | 6.53e-6 | 5.86e-22 | 3.94e-86 | 8.13e-343 | 1.49e-1369 |
| 6 | OPPNM | 1.68 | 2.44e-1 | 1.95e-3 | 2.46e-11 | 3.83e-44 | 5.53e-177 | 4.24e-710 |
| 6 | OPNM1 | 1.63 | 2.91e-1 | 4.94e-3 | 2.16e-9 | 5.68e-36 | 7.07e-144 | 3.05e-577 |
| 6 | OPNM2 | 1.64 | 2.82e-1 | 4.28e-3 | 1.19e-9 | 5.15e-37 | 4.51e-148 | 5.21e-594 |
| 6 | OPNM3 | 1.62 | 2.97e-1 | 5.60e-3 | 4.49e-9 | 1.41e-34 | 3.50e-138 | 2.74e-554 |
| 6 | OPNM4 | 1.61 | 3.02e-1 | 6.14e-3 | 6.84e-9 | 8.16e-34 | 4.34e-135 | 6.31e-542 |
| 7 | OPPNM | 2.96e-1 | 3.54e-3 | 4.37e-15 | 4.03e-67 | 1.16e-279 | 3.17e-1134 | 1.18e-3424 |
| 7 | OPNM1 | 2.91e-1 | 8.67e-3 | 1.51e-13 | 1.44e-61 | 1.28e-258 | 8.43e-1052 | 9.74e-3178 |
| 7 | OPNM2 | 2.92e-1 | 7.85e-3 | 1.24e-13 | 8.18e-62 | 1.62e-259 | 2.63e-1055 | 1.57e-3189 |
| 7 | OPNM3 | 2.90e-1 | 9.74e-3 | 3.44e-13 | 4.44e-60 | 1.03e-252 | 2.48e-1028 | 2.11e-3109 |
| 7 | OPNM4 | 2.80e-1 | 1.04e-2 | 4.40e-13 | 1.13e-59 | 3.93e-251 | 4.59e-1022 | 1.61e-3090 |

For Problem 8, the errors at iterations 18–24 are reported:

| Problem | Method | e^(18) | e^(19) | e^(20) | e^(21) | e^(22) | e^(23) | e^(24) |
|---|---|---|---|---|---|---|---|---|
| 8 | OPPNM | 4.24e-6 | 9.70e-8 | 3.86e-11 | 3.51e-24 | 2.1134e-76 | 2.75e-285 | 7.88e-1121 |
| 8 | OPNM1 | 3.85e-5 | 1.09e-5 | 2.51e-6 | 4.77e-8 | 5.36e-12 | 2.42e-27 | 8.56e-89 |
| 8 | OPNM2 | 7.13e-6 | 7.86e-7 | 3.74e-9 | 7.87e-16 | 1.01e-42 | 1.02e-42 | 2.52e-150 |
| 8 | OPNM3 | 4.68e-5 | 1.29e-5 | 3.52e-6 | 1.22e-7 | 7.08e-11 | 9.71e-23 | 2.75e-70 |
| 8 | OPNM4 | 6.17e-5 | 1.65e-5 | 5.12e-6 | 3.64e-7 | 9.04e-10 | 3.09e-18 | 2.99e-52 |