A New Optimal Numerical Root-Solver for Solving Systems of Nonlinear Equations Using Local, Semi-Local, and Stability Analysis

Abstract: This paper introduces an iterative method with fourth-order convergence. The method is specifically tailored to meet the optimality condition of the Kung–Traub conjecture through a linear combination of two third-order solvers. With an efficiency index of approximately 1.5874, the method is supported by a blend of local and semi-local analysis to establish both efficiency and convergence. This study investigates semi-local convergence, performs a dynamical analysis to assess stability and convergence rate, and applies the proposed solver to systems of nonlinear equations. The results underscore the potential of the proposed method for applications in polynomiography and other areas of mathematical research. The improved performance of the proposed optimal method is demonstrated on mathematical models taken from several domains, including physics, mechanics, chemistry, and combustion.


Introduction
Root-finding methods are essential in various scientific disciplines for solving nonlinear systems, which are systems of equations that cannot be represented as linear combinations of their inputs. These techniques, which include Newton's method, the bisection method, and the secant method, are used to determine solutions of equations where the function is zero, called roots [1]. Engineering relies on the use of equations to address the properties of materials and forces, which are crucial to the design of systems and structures. In the field of physics, differential equations are used to elucidate various phenomena related to motion, energy, and waves. Root-finding algorithms are used in finance to calculate the internal rate of return and in option pricing models [2]. These methods are necessary because nonlinear systems are very complicated and often cannot be solved analytically, or must be greatly simplified in order to be solved. Root-finding methods provide a numerical approach for approximating answers with a high level of accuracy. This allows scientists and engineers to effectively model real-world phenomena, optimize systems, and make predictions based on theoretical models.
As mentioned above, iterative root-finding algorithms are essential computational methods used to solve equations in which a function is equal to zero, written as follows: Φ(x) = 0. (1) Equations of type (1) are of utmost importance in the fields of mathematics, engineering, physics, and computer science. They are employed to tackle a wide range of problems, including the optimization of engineering designs and the solution of equations in theoretical physics, among others [3,4].
There exists a continuous operator, Φ : D ⊂ S → S′, in Equation (1). This operator is defined on a nonempty convex subset D of a Banach space S and takes values in a Banach space S′. This investigation aims to determine an approximation of the unique local solution α ∈ S for the problem stated in (1). In the one-dimensional case, the Banach spaces reduce to S = S′ = R, and the problem becomes that of approximating a single local root of φ : I ⊂ R → R, where I is a neighborhood of α [5].
An essential objective of numerical analysis is to find the roots of nonlinear equations, whether they are single-variable or multivariable. This endeavor has led to the development of multiple algorithms that strive to predict exact solutions with the required level of accuracy. The pursuit of accuracy and efficiency in solving complex equations drives the advancement and application of iterative methods for root determination. Numerical methods play a crucial role in practical situations where it is often impossible to obtain accurate results by analytical means. Moreover, the ability to quickly and accurately determine the solutions of nonlinear equations is crucial in simulations, optimization, and modeling in various scientific fields, making these techniques indispensable tools for researchers [6,7]. The main numerical analysis techniques are the bisection method, the Newton-Raphson method, and the secant method. The bisection method is a reliable, albeit slow, strategy that guarantees convergence by halving the interval containing the root at each iteration. In contrast, the Newton-Raphson method offers a higher convergence rate by using the derivative of the function, making it a preferable technique for many applications requiring speed and efficiency. The derivative-free secant method offers a middle ground between the bisection and Newton-Raphson methods. It provides an efficient method that does not depend heavily on the differentiability of the function.
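The trade-offs above are easy to see in code. The following is a minimal sketch of two of the classical techniques, applied to the illustrative equation x² − 2 = 0 (the test function, bracket, and tolerances are chosen here purely for illustration):

```python
# Sketch of two classical root-finders on the sample equation x^2 - 2 = 0.

def bisection(f, a, b, tol=1e-12):
    """Halve the bracketing interval [a, b] until it is shorter than tol."""
    fa = f(a)
    while b - a > tol:
        m = (a + b) / 2.0
        if fa * f(m) <= 0:
            b = m            # sign change in [a, m]: root lies in the left half
        else:
            a, fa = m, f(m)  # otherwise the root lies in the right half
    return (a + b) / 2.0

def secant(f, x0, x1, tol=1e-12, maxit=50):
    """Derivative-free: the derivative is replaced by a finite difference."""
    for _ in range(maxit):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

f = lambda x: x * x - 2.0
assert abs(bisection(f, 1.0, 2.0) - 2**0.5) < 1e-10
assert abs(secant(f, 1.0, 2.0) - 2**0.5) < 1e-10
```

Bisection needs roughly 40 halvings to reach this tolerance, while the secant iteration reaches it in a handful of steps, illustrating the speed gap discussed above.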
Recent studies have led to the development of sophisticated algorithms, such as two- and three-step root-finding approaches, which aim to improve convergence rates and stability [8-12]. These methods improve traditional iterative algorithms by incorporating additional steps or using higher-order derivatives to improve accuracy and speed. For example, two-step methods typically involve an initial Newton-Raphson-like step followed by a corrective phase that improves the accuracy of the root computation. Three-step techniques build on this idea by introducing an additional level of precision, whereby even higher levels of accuracy can be achieved. Discussions on the optimality of root-finding algorithms revolve mainly around their speed of convergence and computational efficiency. Second-order algorithms, such as the improved Newton-Raphson methodology and some variants of the secant method, offer a practical balance between computational simplicity and fast convergence.
In contrast, fourth-order convergent methods aim to achieve faster convergence at the expense of computational simplicity, as they require higher-order derivatives or additional function evaluations. These approaches are especially valuable for solving complex equations that require fast convergence and are not limited by processing resources. Iterative methods used in numerical analysis to find the roots of nonlinear equations are essential, as they allow for solving otherwise intractable problems. The progression of these algorithms, ranging from traditional textbook approaches to the latest research advances, demonstrates the continuing quest for greater efficiency, accuracy, and practicality. As computing power increases, root-finding algorithms will also improve in sophistication and performance, underscoring their continuing importance in mathematics and the sciences. A comprehensive treatment of the subject would cover its theoretical foundations, algorithmic improvements, comparative evaluations of methodologies, and practical implementations and applications; this introduction summarizes only the aspects relevant to the present work.

Existing Optimal Algorithms
The Kung-Traub conjecture [13], introduced in the 1970s, establishes a theoretical upper bound on the effectiveness of iterative techniques used to solve nonlinear equations. According to this conjecture, the highest convergence order of any iterative root-finding algorithm without memory that uses d function (or derivative) evaluations per iteration is 2^{d−1}. Essentially, this implies that for methods using a single function evaluation per iteration (d = 1), the order of convergence is 1 (linear convergence), while the maximum order of convergence for methods that evaluate the function and its first derivative (d = 2, such as Newton's method) is 2. The conjecture establishes a standard for judging how well root-finding algorithms perform: in the Kung-Traub framework, an optimal algorithm achieves the highest possible order of convergence for the number of evaluations it performs. As a result, several multipoint algorithms have been devised to attain this efficiency bound. As defined by Kung and Traub, these algorithms attempt to strike a balance between computational cost and convergence speed.
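The efficiency index quoted in the abstract follows directly from this framework: for a method of order p using d evaluations per iteration, the index is p^{1/d}. A quick check (the helper name is illustrative):

```python
def efficiency_index(order, evals):
    """Efficiency index p**(1/d) for a method of order p with d evaluations."""
    return order ** (1.0 / evals)

# Newton's method: order 2 with 2 evaluations per iteration.
assert abs(efficiency_index(2, 2) - 2**0.5) < 1e-12
# An optimal fourth-order method with 3 evaluations, as proposed in this paper.
assert abs(efficiency_index(4, 3) - 1.5874) < 1e-4
```

The value 4^{1/3} ≈ 1.5874 is exactly the index claimed for the proposed solver, and it exceeds Newton's 2^{1/2} ≈ 1.4142.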
Newton's method is one of the classical optimal methods for solving equations of the type (2), with the following computational step: x_{n+1} = x_n − φ(x_n)/φ′(x_n), n = 0, 1, 2, . . . , (3) where φ′(x_n) ≠ 0. The optimal second-order convergent solver given in (3) requires only two function evaluations (φ and φ′) per iteration.
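A compact sketch of the Newton iteration in (3); the test function, starting point, and tolerance are illustrative choices:

```python
def newton(phi, dphi, x0, tol=1e-12, maxit=50):
    """Newton's method: two evaluations (phi, phi') per iteration, order 2."""
    x = x0
    for _ in range(maxit):
        step = phi(x) / dphi(x)   # requires dphi(x) != 0
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.5)
assert abs(root - 2**0.5) < 1e-12
```

Starting from x_0 = 1.5, the error roughly squares at each step, reflecting the second-order convergence.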
The optimal two-step fourth-order convergent solver (OPNM1) in [14] is given by the following computational steps: The numerical solver (4) requires, at each iteration, three function evaluations (φ(x_n) and two evaluations of the first-order derivative, φ′(x_n) and φ′(y_n)).
In the same research article [14], the authors proposed another optimal two-step fourth-order convergent solver (OPNM2), which is given by the following computational steps: The numerical solver (5) also requires, at each iteration, three function evaluations (φ(x_n) and two evaluations of the first-order derivative, φ′(x_n) and φ′(y_n)).
Jaiswal [15] proposed an optimal two-step fourth-order convergent solver (OPNM3), which is given by the following computational steps: Once again, the numerical solver (6) requires, at each iteration, three function evaluations (φ(x_n) and two evaluations of the first-order derivative, φ′(x_n) and φ′(y_n)).
In the same research paper [15], Jaiswal also proposed a second optimal two-step fourth-order convergent solver (OPNM4), which is given by the following computational steps: Once again, the numerical solver (7) requires, at each iteration, three function evaluations (φ(x_n) and two evaluations of the first-order derivative, φ′(x_n) and φ′(y_n)).
There are several fourth-order optimal numerical solvers for nonlinear equations, both univariate and multivariate. However, it is crucial to note that several recently proposed optimal techniques are not suitable for solving nonlinear equations in higher dimensions. The optimal methods devised in [16-19] are generally not applicable to nonlinear systems, whereas the fourth-order optimal method proposed in the present research study solves both scalar and vector versions of nonlinear equations.

Construction of the Optimal Fourth-Order Numerical Solver
In this section, we use the notion of a linear combination of two well-established third-order (non-optimal) numerical solvers to obtain an optimal numerical solver for nonlinear equations of the type (1). Both numerical solvers, chosen for the linear combination, are discussed in [14] and given as follows: and where n = 0, 1, 2, . . . and Our major aim is to construct a fourth-order optimal convergent numerical solver using a linear combination of the solvers given in (8) and (9). The linear combination has the following form: where κ ∈ R is the adjusting parameter. When κ = 1, the method reduces to (8), while κ = 0 results in (9). The methods given in (8) and (9) are not optimal, while (10) is an optimal fourth-order convergent solver for a suitable choice of κ. The performance of (10) depends on the adjusting parameter κ, which is obtained from the Taylor expansion given in Theorem 1.
Theorem 1. Assume that the function φ : D ⊂ R → R has a simple root α ∈ D, where D is an open interval. If φ is sufficiently smooth in the neighborhood of the root α, then the order of convergence of the iterative method defined by (10) is at least three, as shown in the following equation: where e_n = x_n − α, and
Proof of Theorem 1. Expanding φ′(x_n) via a Taylor series expansion around α, we have the following: Expanding 1/φ′(x_n) via a Taylor series expansion around α, we have the following: Dividing (12) by (13), we have the following: Substituting (15) into the first step of (10) gives the following: where d_n = y_n − α.
Expanding 1/φ′(y_n) via a Taylor series expansion around α and using Equation (16), we have the following: Expanding 1/(φ′(x_n) + 3φ′(y_n)) via a Taylor series expansion around α and using Equation (16) gives the following: Finally, substituting the series obtained in (12), (14), (17), and (18) into the structure defined in (10), the error equation is obtained as follows: Equation (19) implies that for any κ ∈ R, the method defined by (10) is at least cubically convergent.
To increase the convergence order of (10), the error equation (19) suggests that the parameter κ should be −2. Using this value of κ in (10), the proposed optimal numerical method (OPPNM) takes the following form: where n = 0, 1, 2, . . . . The numerical solver (20) is optimal under the Kung-Traub conjecture since its order of convergence equals 2^{θ−1}, where θ represents the number of function evaluations taken by the solver at each iteration. The proposed numerical solver uses three function evaluations (one of the function and two of its first-order derivative) at each iteration and has fourth-order local convergence, which is also discussed in the next section. The flowchart of the proposed two-step optimal numerical solver is shown in Figure 1.
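The evaluation-count bookkeeping behind this optimality claim can be probed numerically. Since the display equations for (20) are not reproduced here, the sketch below uses the classical Jarratt scheme instead, a well-known two-step solver with the same evaluation pattern (one evaluation of φ and two of φ′ per iteration) and optimal fourth-order convergence; it stands in for (20) purely as an illustration:

```python
def jarratt(phi, dphi, x0, tol=1e-13, maxit=20):
    """Classical Jarratt method: optimal fourth order with 3 evaluations
    per iteration (phi(x), phi'(x), phi'(y))."""
    x = x0
    for _ in range(maxit):
        fx, dfx = phi(x), dphi(x)
        y = x - (2.0 / 3.0) * fx / dfx                       # predictor substep
        dfy = dphi(y)
        x_new = x - 0.5 * (3 * dfy + dfx) / (3 * dfy - dfx) * fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Kung-Traub optimality check: order 4 = 2**(theta - 1) for theta = 3.
assert 4 == 2 ** (3 - 1)
root = jarratt(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.5)
assert abs(root - 2**0.5) < 1e-12
```

From x_0 = 1.5, the first iterate is already accurate to about 2 × 10^{-6}, consistent with fourth-order behavior.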
Figure 1. Flowchart of the proposed optimal fourth-order numerical solver given in (20).

Scalar Form
Local convergence analysis is a method used to determine the specific conditions under which a given iterative procedure can effectively converge to a solution of a nonlinear equation of the form (1). More specifically, it determines the region of the complex plane encompassing a root into which the iterative method will converge. The study begins by assuming that the iterative method has achieved convergence at a root and then investigates the behavior of the procedure at that root. The analysis usually consists of examining the behavior of the iteration function of the method, which establishes a connection between the current approximation of the root and the subsequent approximation. Taylor series expansion is an essential tool for performing local convergence analysis. It allows us to estimate the iteration function in the vicinity of the root. Consequently, this estimation can be used to determine the rate at which the strategy reaches the root, as well as the conditions that must be satisfied for convergence.
The study usually involves determining the radius of convergence, which is the distance from the root within which the iteration function can be approximated by a Taylor series expansion. The iterative approach guarantees convergence to the root if the initial estimate is within the radius of convergence. The root-finding methods usually examined by local convergence analysis are Newton's method, the secant method, and the bisection method. By analyzing the behavior of these solvers, one can determine their strengths and weaknesses and identify the scenarios in which they are most effective. Therefore, our objective is to examine the local convergence of the suggested numerical solver (20) employing Taylor series expansion. This requires the following theorem:
Theorem 2. Assume that the function φ : D ⊂ R → R has a simple root α ∈ D, where D is an open interval, and that φ is sufficiently smooth in a neighborhood of α. Then, the two-step method given in (20) has fourth-order convergence, and the asymptotic error term is determined to be where e_n = x_n − α and ς_r = φ^{(r)}(α)/(r! φ′(α)), r = 2, 3, 4, . . . .

Proof of Theorem 2.
From (19), the following error equation is obtained for κ = −2: The error equation clearly suggests that the proposed method (20), in its scalar version, has fourth-order convergence. It is also an optimal method in the sense of the Kung-Traub conjecture.

Vector Form
The suggested method is optimal and can be used effectively for both single-variable and multivariable problems while maintaining simplicity. Within this framework, the approach uses the Jacobian matrix, which contains all first-order partial derivatives of the system, to progressively estimate the solution of the system through iterations. The approach starts with an initial estimate of the solution and then applies corrections at each step. In the first substep, these corrections are generated by multiplying the inverse of the Jacobian matrix by two-thirds of the negative function vector evaluated at the current estimate; an analogous correction is carried out in the second substep of the suggested method. The iterative procedure persists until the solution satisfies the system of equations within a preset tolerance. This approach effectively handles the intricate nonlinear relationships between variables. The scheme is highly appreciated for its fast convergence and accuracy in various scientific and technical applications, provided that the initial approximation is close enough to the real solution and the system satisfies specific constraints on the invertibility of the Jacobian.
Let us formalize the approach to discuss the proposed optimal method (20) for solving a system of nonlinear equations. Consider a system of n nonlinear equations in n variables, represented as follows: where each ϕ_i(x) is a nonlinear function of the variables x_1, x_2, . . . , x_n. The method seeks a vector x such that Φ(x) = 0. Starting from an initial guess x^(0), the method iteratively updates the estimate of the root using the following formula: where J(x^(n)) is the Jacobian matrix of Φ evaluated at x^(n), and n is the iteration index. The Jacobian matrix is defined as follows: The method applies a correction that is expected to bring the current estimate x^(n) closer to the true solution by solving a linear system at each iteration. The process is repeated until a convergence criterion is met, such as when the norm of the function vector ∥Φ(x^(n))∥ is less than a specified tolerance, indicating that x^(n) is close to the root.
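A minimal sketch of this Jacobian-based correction for a 2 × 2 system; the sample system, the initial guess, and the Cramer-rule linear solve are illustrative choices, not the paper's test problems:

```python
def newton_system(F, J, x0, tol=1e-12, maxit=50):
    """Newton-type iteration x <- x - J(x)^{-1} F(x) for a 2x2 system."""
    x, y = x0
    for _ in range(maxit):
        f1, f2 = F(x, y)
        (a, b), (c, d) = J(x, y)
        det = a * d - b * c           # Jacobian must be nonsingular
        dx = (d * f1 - b * f2) / det  # solve J * delta = F by Cramer's rule
        dy = (a * f2 - c * f1) / det
        x, y = x - dx, y - dy
        if abs(dx) + abs(dy) < tol:   # norm-based convergence criterion
            break
    return x, y

# Illustrative system: x^2 + y^2 = 5, x*y = 2, with a root at (2, 1).
F = lambda x, y: (x * x + y * y - 5.0, x * y - 2.0)
J = lambda x, y: ((2 * x, 2 * y), (y, x))
xr, yr = newton_system(F, J, (2.5, 0.5))
assert abs(xr - 2.0) < 1e-10 and abs(yr - 1.0) < 1e-10
```

For larger systems, the explicit 2 × 2 solve would be replaced by a general linear solver; the structure of the iteration is unchanged.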
Several factors affect how quickly the optimal solver (20) converges locally, including the quality of the initial approximation and the properties of the Jacobian matrix. Under favorable circumstances, the approach exhibits quartic convergence, greatly enhancing its efficiency in solving nonlinear systems. Nevertheless, if the Jacobian matrix is singular or nearly singular during any iteration, the approach may either fail to converge or converge to a solution that does not satisfy the given system.
Here, we present the results to demonstrate the error equation and, consequently, the order of convergence of the proposed strategy for a system of nonlinear equations.

Lemma 1 ([20]). Let Φ : Θ ⊂ R^N → R^N be r-times Fréchet differentiable in a convex set Θ ⊂ R^N. Then, for any x and ∆x ∈ R^N, the following expression holds:
We introduce the following theorem to demonstrate the error equation and, consequently, the convergence order of the proposed optimal solver when dealing with a system of nonlinear equations:
Theorem 3. Let Φ : Θ ⊂ R^N → R^N be sufficiently differentiable in a convex set Θ containing a simple zero α of Φ(x). Let us consider that Φ′(x) is continuous and nonsingular at α. If the initial guess x^(0) is close to α, then the sequence {x^(n)} obtained with the proposed two-step optimal solver (23) converges to α with fourth-order convergence.
Proof of Theorem 3. Let α be the root of Φ(x), x^(n) be the nth approximation to the root by (23), and e_n = x^(n) − α be the error vector after the nth iteration. Expanding Φ(x^(n)) via a Taylor series expansion around α, we obtain the following: where Expanding J(x^(n)) via a Taylor series expansion around α gives the following: Expanding [J(x^(n))]^{−1} via a Taylor series expansion around α leads to the following: Multiplying (25) and (27), we have the following: Substituting (28) into the first step of (23) yields the following: where d_n = y^(n) − α. Expanding J(y^(n)) via a Taylor series expansion around α and using Equation (29), we have the following: Expanding [J(y^(n))]^{−1} via a Taylor series expansion around α and using Equation (29), Now, substituting Equations (25)-(27), (30), and (31) into the second step of (23), the error equation is obtained as follows: Error Equation (32) proves the fourth-order convergence of the proposed two-step optimal method (23) presented in vector form.

Convergence without Taylor Series
Semi-local convergence for the proposed solver (20) requires a careful balance between the proximity of the initial approximation to the root, the behavior of the function and its derivatives near the root, and the inherent properties of the solver.By establishing the necessary conditions on the function and the initial approximation, and by a detailed analysis of the error dynamics, it can be seen that the solver exhibits fourth-order convergence within a certain neighborhood around the root.This analysis not only guarantees the effectiveness of the solver but also provides practical guidance on its application for solving equations where high accuracy is desired.
There are some problems with the Taylor series methodology employed to show local convergence for the solver (20), limiting its applicability even though convergence is possible. We list these problems as follows:
(P1) The local convergence analysis is carried out only for functions on the real line or the finite-dimensional Euclidean space.
(P2) The function φ must be at least five times differentiable. Let us consider a function φ : [−1.3, 1.3] → R, defined as follows: where But the function φ‴ is not continuous at t = 0. Thus, the results of the previous section, being only sufficient, cannot guarantee the convergence of the sequence {x_n} generated by the method to the solution γ = 1. However, the method converges to γ = 1 if, for example, we start from x_0 = 1.1. This observation indicates that the sufficient convergence conditions can be weakened.
(P3) There is no a priori knowledge of the number of iterations required to reach a desired error tolerance, since no computable upper bounds on ∥γ − x_n∥ are given.
(P4) The separation of the solutions is not discussed.
(P5) The semi-local analysis of convergence, which is considered to be more important, is not considered either.
We positively address the above-listed problems (P1)-(P5) as follows:
(P1)′ The convergence analysis is carried out for Banach space-valued operators.
(P2)′ Both types of analyses use conditions only on the operators appearing in the method (20).
(P3)′ The number of iterations needed to reach the error tolerance is known in advance, since a priori estimates on ∥γ − x_n∥ are provided.
(P4)′ The separation of the solutions is discussed.
(P5)′ The semi-local analysis of convergence relies on majorizing sequences [22,23]. These analyses also depend on control by generalized continuity conditions. This approach allows us to extend the utilization of the method (20).
Let Ξ and Ξ_1 denote Banach spaces, Ω ⊂ Ξ denote a convex, non-empty set that is open or closed, and Υ : Ω → Ξ_1 denote a differentiable operator in the sense of Fréchet. We shall locate a solution γ of the equation Υ(x) = 0 iteratively. In particular, the method (20) is utilized in this setting and is defined for x_0 ∈ Ω and each n = 0, 1, 2, . . . by the following: It is clear that the method (33) reduces to (20) if Ξ = Ξ_1 = R^j (j is a natural number).
Theorem 4. Suppose that the conditions (C1)-(C7) hold. Then, the following assertions hold for the iterates {x_n} produced by the method (33), and the sequence {x_n} converges to the solution γ of the equation Υ(x) = 0.

Remark 1.
(i) The radius r in the condition (C7) can be replaced by ρ_0.
(ii) Possible choices of the operator ∆ are ∆ = I or ∆ = Υ′(γ), provided that the operator Υ′(γ) is invertible. Other choices are possible, as long as conditions (C5) and (C6) hold.
The separation of the solutions is developed in the following result.

Semi-Local Analysis of Convergence
The roles of γ, ϖ_0, and ϖ are exchanged with x_0, ν_0, and ν as follows. Suppose that we have the following: (H1) The equation ν_0(t) − 1 = 0 has a smallest solution, denoted by ρ_3, in the interval M − {0}, where ν_0 : M → R_+ is a continuous as well as non-decreasing function. Set M_2 = [0, ρ_3). (H2) There exists a function ν : M_2 → R_+, which is continuous as well as non-decreasing.
Theorem 5. Suppose that the conditions (H1)-(H6) hold. Then, the following assertions hold for the sequence {x_n} produced by the method (33).
{x_n} ⊂ S(x_0, a), (47) and there exists a solution γ ∈ S[x_0, a] of the equation Υ(x) = 0, so that Proof of Theorem 5. As in the local analysis, the assertions (47)-(49) are shown using induction. The definition of α_0, β_0, and the first substep of the method (33) imply that the assertions (47) and (48) hold for n = 0, and the iterate y_0 ∈ S(x_0, a). The conditions (H1)-(H4), as in the local case, show the following: u ∈ S(x_0, a), and similarly Consequently, the iterate x_{m+1} exists by the second substep of the method (33). By subtracting the first substep from the second substep of the method (33), we have the following: It follows that and so the assertion (47) holds for n = m + 1, as well as (49) for n = m. By the last substep of the method (33), we can write, in turn, that where we also use Therefore, by the first substep of (33), we have the following: and The induction for the assertions (47)-(49) is completed. It follows that the sequence {x_m} is Cauchy in the Banach space Ξ, and as such, it is convergent to some γ ∈ S[x_0, a]. By letting m → +∞ and using the continuity of the operator Υ, we deduce that Υ(γ) = 0. Moreover, by the estimation for i a natural number. The last assertion (50) of Theorem 5 follows if i → +∞ in (52).

Remark 3.
(i) The parameter α can be replaced by ρ_3 in the condition (C6).
(ii) As in the local case, possible choices are ∆ = I or ∆ = Υ ′ (x 0 ), provided that the operator Υ ′ (x 0 ) is invertible.Other choices satisfying the conditions (C4) and (C5) are possible.

Stability Analysis
The stability analysis of the method is performed via complex dynamics. Research on this topic can be found in references [24,25]. In recent years, dynamical studies have been carried out to analyze the stability of iterative methods [26-28].
The stability of scheme (20) is analyzed through its application to quadratic polynomials. We work with the general expression p(z) = (z − a)(z − b), where a, b ∈ Ĉ. Applying (20) to p(z), the resulting rational operator is as follows: Let us note that R(z) depends on the variable z and the roots a and b. However, applying the Möbius transformation M(z) = (z − a)/(z − b), we find that R(z) is conjugated with an operator that has a simpler expression and no longer depends on the roots a and b. In fact, a and b have been mapped to 0 and ∞, respectively. The rational operator fits the form where {a_i}_{i=0}^{n} ∈ R. Properties of this kind of rational operator can be found in [29].
Proposition 3. The fixed points of O(z) are as follows: • z_0 = 0 and z_∞ = ∞, which are super-attracting; • the strange fixed point z = 1 and the roots of the polynomial 3z^4 + 8z^3 + 13z^2 + 8z + 3, all of which are repelling.

Proof of Proposition 3. Fixed points satisfy
therefore, z = 0 is a fixed point. In addition, the remaining fixed points are z = 1 and the roots of the polynomial 3z^4 + 8z^3 + 13z^2 + 8z + 3. Analyzing the conjugated operator 1/O(1/z) at z = 0 shows that z = ∞ is also a fixed point. The derivative of the rational operator is as follows: whose evaluation at the fixed points shows that z = 0 is super-attracting and the remaining (strange) fixed points are repelling; the conjugated operator likewise shows that z = ∞ is super-attracting. Since the only attracting fixed points correspond to the roots of the nonlinear function, stability is guaranteed. The dynamical plane evidences the high stability of the iterative method for quadratic polynomials. Although the boundaries between the basins of attraction are intricate, each initial guess (except for the five strange fixed points) converges to an attracting fixed point that coincides with a root of the polynomial.
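The strange fixed points from the quartic in the proof can be located explicitly: the polynomial 3z⁴ + 8z³ + 13z² + 8z + 3 is palindromic, so the substitution w = z + 1/z reduces it to 3w² + 8w + 7 = 0. A pure-Python sketch of this reduction, included for illustration:

```python
import cmath

def strange_fixed_points():
    """Roots of 3z^4 + 8z^3 + 13z^2 + 8z + 3 via the palindromic
    substitution w = z + 1/z, which yields 3w^2 + 8w + 7 = 0."""
    roots = []
    for sign in (1, -1):
        w = (-4 + sign * 1j * cmath.sqrt(5)) / 3  # roots of 3w^2 + 8w + 7 = 0
        disc = cmath.sqrt(w * w - 4)              # then solve z^2 - w z + 1 = 0
        roots += [(w + disc) / 2, (w - disc) / 2]
    return roots

for z in strange_fixed_points():
    residual = 3 * z**4 + 8 * z**3 + 13 * z**2 + 8 * z + 3
    assert abs(residual) < 1e-9   # each candidate is indeed a root
```

All four roots are complex, so together with z = 1 they account for the five strange fixed points mentioned above.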

Numerical Results
This section elaborates on the utilization of numerical simulations for both scalar and vector forms of nonlinear equations, integrating both theoretical and practical models. In the comparative analysis, we examine specific parameters such as the number of iterations (i), the magnitude of the absolute error (e_n = |x_{n+1} − x_n| for scalars and e^(n) = ∥x^(n+1) − x^(n)∥ for vectors) at each iteration, and the processing time measured in CPU seconds. The simulations employed the iterative methodologies delineated in Section 2, juxtaposed against the proposed fourth-order optimal iterative technique, as given in Equation (20) for scalars and (23) for systems. The tabulated numerical analyses were conducted using the software MAPLE 2022 on an Intel(R) Core(TM) i7 HP laptop equipped with 24 GB of RAM and operating at a frequency of 1.3 GHz, while Python was used for the graphical outputs. Regarding the numerical simulations, a maximum precision threshold of 4000 digits was established. Additionally, a cap of 50 iterations was imposed to attain the requisite solution. The simulations are stopped based on the following halting criteria. For a single-variable nonlinear equation (f(x) = 0), we have the following: and for a multi-variable nonlinear system (F(x) = 0), we have the following: The following problems are considered from recent papers. The exact solution (α) is shown alongside each test function:
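A sketch of the halting logic just described, combining the successive-step criterion with the 50-iteration cap (the tolerance value here is an assumption, since the displayed thresholds are not reproduced in this text):

```python
def iterate_until(step, x0, tol=1e-15, maxit=50):
    """Apply a one-step map until |x_{n+1} - x_n| <= tol or maxit is reached."""
    x = x0
    for n in range(1, maxit + 1):
        x_new = step(x)
        if abs(x_new - x) <= tol:   # e_n = |x_{n+1} - x_n|
            return x_new, n
        x = x_new
    return x, maxit

# Illustrative: a Newton step for x^2 - 2 = 0 driven by these halting rules.
newton_step = lambda x: x - (x * x - 2.0) / (2.0 * x)
root, iters = iterate_until(newton_step, 1.5)
assert abs(root - 2**0.5) < 1e-12 and iters <= 50
```

The same driver accepts any one-step map, so each solver in the comparison can be run under identical stopping rules.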

Problem 3 ([31]). Boussinesq's formula for vertical stress in the fields of soil mechanics and geotechnical engineering is given by the following: Equation (57), when µ_z = 1/4, can be written as follows:
In Table 1, the OPPNM method exhibits the most promising results when compared to OPNM1, OPNM2, OPNM3, and OPNM4. This method demonstrates rapid convergence toward the solution, particularly noticeable in the initial iterations across multiple problems and initial guesses, indicating a superior efficiency in approaching the correct solution quickly. While other methods show varying degrees of convergence and accuracy, OPPNM consistently reduces the magnitude of the absolute errors with each iteration, suggesting robust performance across different nonlinear equations. Although some methods may reach lower errors at the final iteration, the speed and reliability of OPPNM in achieving a significant error reduction early on make it a potentially preferable method, especially in applications where a quick approximation is valuable. This aligns with the expectation that a proposed method should outperform existing ones, and in this context, the proposed optimal method OPPNM given in (20) fulfills the criteria effectively. The graphical representations illustrate a comparative study of the numerical solvers applied to the functions given in Problems 1-3. The efficiency curves are plotted in Figures 3-8 to show the relationship between the precision of the solution, measured by the absolute error on the y-axis, and the computational effort, represented by the number of iterations and CPU time on the x-axis. In both sets of comparisons, the OPPNM method emerges as the most efficient. Specifically, for the initial guesses of 6.5 and 8.0 for φ_1(x), OPPNM achieves rapid convergence to a low absolute error, outpacing the other methods. The steep descent of the OPPNM curves demonstrates its swift reduction in error as iterations progress, and it also shows significantly less CPU time required to reach a similar level of accuracy compared to its counterparts. This suggests a high rate of convergence and a lower computational cost, making OPPNM the preferred solver based on the data presented. The claim that OPPNM is an optimal fourth-order method is substantiated by its superior performance, both in terms of speed and computational resources. Similar observations hold for the efficiency curves of Problems 2 and 3. The proposed fourth-order method (20) showcases notable progress in solving nonlinear equations of the form φ(x) = 0, establishing a fresh standard in computational efficiency. This methodology accelerates the calculation process and minimizes operating expenses by reaching the solution in fewer iterations and requiring the least CPU time compared to other optimal fourth-order algorithms, as shown in Figures 3-8. This efficiency is especially useful in large computing jobs and intricate simulations where time and resource allocation are critical. The method's algorithmic framework and performance make it a strong choice for researchers and professionals seeking swift, precise, and cost-efficient solutions in applied mathematics, engineering, and other fields.
Given below are some nonlinear systems taken in higher dimensions. Table 2 compares the absolute errors of five numerical methods, labeled OPPNM to OPNM4, applied to the nonlinear systems of equations in Problems 4 through 8. The iterations are stopped when the accuracy criterion in (56) is met. The OPPNM method, described as a fourth-order optimal root-solver, exhibits the lowest absolute errors across all iterations, indicating its superior accuracy and convergence rate. For instance, in Problem 4, the absolute error achieved by OPPNM decreases from the order of 10^−1 down to 10^−1783, significantly outperforming the other methods. This trend of minimal errors is consistent across the problems listed, highlighting the effectiveness of OPPNM. Its higher order of convergence likely contributes to its ability to rapidly decrease the error with each iteration, a characteristic that is crucial for obtaining accurate solutions efficiently. The table therefore substantiates the claim that OPPNM is the superior method among those compared, owing to its optimal convergence properties and the consistently lower absolute errors it achieves, in line with expectations for a fourth-order optimal root-solver.

Moreover, Problem 8 focuses on the absolute errors of the last 7 of 24 iterations for the OPPNM to OPNM4 methods. Here, OPPNM stands out, with its absolute error diminishing from the order of 10^−6 to an impressively low 10^−1211. This sharp decline in error magnitude demonstrates the method's robustness and its capacity for high-precision solutions. In contrast, the other methods, OPNM1 through OPNM4, exhibit larger errors in the final iterations, the smallest being of the order of 10^−52, which is significantly larger than that of OPPNM. This indicates that while the other methods are converging, they do so at a slower rate and with less accuracy. OPPNM's superior performance in the tail end of the iterations suggests stable convergence without signs of stagnation, underscoring its efficiency and effectiveness as a fourth-order optimal root-solver, particularly as the solution is refined in the final iterations.

Problem 4. The nonlinear system F(x) of two equations from [14] is given as follows: The initial guess is taken to be x^(0) = [5.1, 6.1]^T, where the exact solution of the system (59) is α = [5, 6]^T.
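Problem 4's equations and the OPPNM iteration are not reproduced in this excerpt, so the sketch below illustrates the general shape of an optimal fourth-order solver for systems using Jarratt's classical scheme (one F evaluation and two Jacobian evaluations per step) as a stand-in. The 2×2 system, its Jacobian, and the starting guess are hypothetical assumptions chosen for the demonstration, not the paper's data.

```python
import numpy as np

def jarratt_step(F, J, x):
    # One step of Jarratt's fourth-order method for systems:
    #   y   = x - (2/3) J(x)^{-1} F(x)
    #   x+  = x - (1/2) [3 J(y) - J(x)]^{-1} [3 J(y) + J(x)] J(x)^{-1} F(x)
    Fx = F(x)
    Jx = J(x)
    s = np.linalg.solve(Jx, Fx)              # Newton correction J(x)^{-1} F(x)
    y = x - (2.0 / 3.0) * s
    Jy = J(y)
    t = np.linalg.solve(3.0 * Jy - Jx, (3.0 * Jy + Jx) @ s)
    return x - 0.5 * t

# Hypothetical test system (NOT the paper's Problem 4):
#   F(x) = [x1^2 + x2^2 - 5, x1*x2 - 2], with root alpha = [2, 1].
F = lambda x: np.array([x[0]**2 + x[1]**2 - 5.0, x[0] * x[1] - 2.0])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]],
                        [x[1],       x[0]]])

x = np.array([2.1, 0.9])                     # initial guess near the root
for _ in range(4):
    x = jarratt_step(F, J, x)                # converges rapidly toward [2, 1]
```

In double precision a few fourth-order steps already exhaust machine accuracy; the thousand-digit errors reported in Table 2 require multiple-precision arithmetic.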

Conclusions and Future Remarks
This research highlights the important role played by nonlinear equations in various scientific fields, such as engineering, physics, and mathematics. Given the prevalence of mathematical models lacking closed-form solutions in these areas, the need for numerical methodologies, in particular root-solving algorithms, is evident. This study presents a new fourth-order convergent root-solver that advances iterative approaches and offers improved accuracy. This solver, adapted via a linear combination to satisfy the optimality condition posed by the Kung-Traub conjecture, achieves an efficiency index of approximately 1.5874. By means of localized and semi-localized analyses, we rigorously discuss its convergence and efficiency. Furthermore, the examination of local and semi-local convergence, together with the dynamical stability analysis, highlights the robustness of the solver in dealing with systems of nonlinear equations. Empirical validations using mathematical models from fields such as physics, mechanics, chemistry, and combustion have demonstrated the superior performance of the proposed solver.
For future research, it would be beneficial to explore the integration of this solver with machine learning algorithms to further improve its predictive capabilities and its effectiveness in solving even more complex systems of equations.This could open new avenues for solving real-world problems with unprecedented accuracy and speed.

Proposition 4.
The critical points of O(z) are as follows:
• z0 = 0 and z∞ = ∞; and
• z4 ≈ −755/1783 + (260/287)i together with its conjugate z̄4, and z5 ≈ −1301/1574 + (497/883)i together with its conjugate z̄5, the roots of the polynomial 10 + 25z + 34z^2 + 25z^3 + 10z^4 = 0.
Dynamical planes illustrate the basins of attraction of the attracting fixed points, showing the stability of a single method. The dynamical planes were implemented in Matlab R2022b, following the guidelines of [30]. Figure 2 shows the dynamical plane of the rational operator O(z). Orange and blue represent the basins of attraction of z0 and z∞, respectively. White circles mark strange fixed points, while white squares mark free critical points. The dynamical plane only considers initial guesses with ℜ{z0} ∈ [−1.3, 1.3] and ℑ{z0} ∈ [−1.3, 1.3], since the rest of the plane is completely blue.
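The rational operator O(z) of the paper is not reproduced in this excerpt, so as a minimal stand-in the sketch below computes a dynamical plane over the same window ℜ{z0}, ℑ{z0} ∈ [−1.3, 1.3] for Newton's operator applied to p(z) = z^2 − 1, whose two basins of attraction are simply the left and right half-planes; both the operator and the grid resolution are illustrative assumptions.

```python
import numpy as np

def basin_index(z0, roots, max_iter=50, tol=1e-8):
    # Iterate Newton's operator N(z) = z - p(z)/p'(z) for p(z) = z^2 - 1
    # and return the index of the root the orbit converges to (-1 if none).
    z = z0
    for _ in range(max_iter):
        if z == 0:                       # derivative vanishes; cannot classify
            return -1
        z = z - (z * z - 1.0) / (2.0 * z)
        for k, r in enumerate(roots):
            if abs(z - r) < tol:
                return k
    return -1

roots = [1.0, -1.0]
# Sample the same square window used for the dynamical planes in the paper.
re = np.linspace(-1.3, 1.3, 120)
im = np.linspace(-1.3, 1.3, 120)
plane = [[basin_index(complex(x, y), roots) for x in re] for y in im]
```

Coloring each grid point by its basin index (e.g., with Matplotlib's `imshow`) reproduces the kind of orange/blue basin picture described for Figure 2; for Newton on z^2 − 1 the boundary between basins is exactly the imaginary axis.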

Figure 3. Efficiency curves for the function φ1(x) with the numerical solvers under consideration for x0 = 6.5.

Figure 7. Efficiency curves for the function φ3(x) with the numerical solvers under consideration for x0 = 0.2.

Figure 8. Efficiency curves for the function φ3(x) with the numerical solvers under consideration for x0 = 0.4.

Proof of Theorem 1. Let e_n = x_n − α and d_n = y_n − α.

Table 1. Numerical simulations for the scalar type of nonlinear equations presented in Problems 1-3.