
Efficient Newton-Type Solvers Based on for Finding the Solution of Nonlinear Algebraic Problems

by Haifa Bin Jebreen 1,*, Hongzhou Wang 2 and Yurilev Chalco-Cano 3
1 Mathematics Department, College of Science, King Saud University, P.O. Box 2455, Riyadh 11451, Saudi Arabia
2 Beijing Key Lab on Mathematical Characterization, Analysis, and Applications of Complex Information, MIIT Key Laboratory of Mathematical Theory and Computation in Information Security, School of Mathematics and Statistics, Beijing Institute of Technology, Beijing 100081, China
3 Departamento de Matemática, Universidad de Tarapacá, Casilla 7D, Arica 1000000, Chile
* Author to whom correspondence should be addressed.
Axioms 2024, 13(12), 880; https://doi.org/10.3390/axioms13120880
Submission received: 31 October 2024 / Revised: 9 December 2024 / Accepted: 12 December 2024 / Published: 19 December 2024
(This article belongs to the Special Issue Differential Equations and Inverse Problems, 2nd Edition)

Abstract:
The purpose of this study is to improve the computational efficiency of solvers for nonlinear algebraic problems with simple roots. To this end, a multi-step solver based on Newton’s method is utilized. Divided difference operators are applied at two substeps in various forms to enhance the convergence speed and, consequently, the solver’s efficiency index. Attraction basins for the proposed solvers and their competitors are presented, demonstrating that the proposed solvers exhibit large attraction basins in the scalar case while maintaining high convergence rates. Theoretical findings are supported by numerical experiments.

1. Background and Introductory Notes

Nonlinear equations frequently arise across numerous disciplines within science and engineering, and they play a significant part in the field of applied mathematics. Due to the inherent complexity of these equations, obtaining exact analytical solutions is typically challenging or unattainable. As a result, numerical iterative techniques are widely utilized to compute approximate solutions [1].
The impetus for this manuscript is to develop a rapid iterative method that enhances both efficiency and accuracy when solving nonlinear systems of equations; see the pioneering discussions in [2,3]. By avoiding the need for second Fréchet derivatives, our approach reduces the complexity and improves the efficiency of the available methodologies, hence contributing to advancements in computational mathematics. The first aim is to provide a high-rate solver for tackling nonlinear systems, accommodating both real and complex simple roots. The second aim is to improve computational efficiency by decreasing the number of matrix inversions and functional evaluations, aligning with the foundational frameworks of computational mathematics.
We take into consideration a class of nonlinear algebraic equations presented in ([4] Chapter 5):
$$ L(z) = 0, \tag{1} $$
wherein $L(z) = (g_1(z), g_2(z), \ldots, g_\upsilon(z))^T$ and each $g_i(z)$, $1 \le i \le \upsilon$, represents a coordinate function. We assume that $L(z)$ is sufficiently differentiable inside an open and convex domain $D \subseteq \mathbb{R}^\upsilon$. Initially, we investigate several foundational iterative methods designed to solve Equation (1). A widely recognized approach is Newton's method (NM2), which is articulated as follows [5]:
$$ L'(\sigma^{(\varrho)})\, \delta^{(\varrho)} = -L(\sigma^{(\varrho)}), \qquad \sigma^{(\varrho+1)} = \sigma^{(\varrho)} + \delta^{(\varrho)}, \qquad \varrho = 0, 1, 2, \ldots \tag{2} $$
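As a point of comparison for the later methods, a minimal Python sketch of the Newton iteration above in the scalar case (this example is ours, not from the paper) shows the residual roughly squaring at every step, which is the quadratic rate just mentioned:

```python
# Newton's method (NM2) in the scalar case: solve f(x) = x^2 - 2 = 0.
# The residual |f(x)| roughly squares from one step to the next.
def newton(f, df, x0, steps=6):
    x, residuals = x0, []
    for _ in range(steps):
        x = x - f(x) / df(x)       # x <- x + delta, where f'(x) delta = -f(x)
        residuals.append(abs(f(x)))
    return x, residuals

x, res = newton(lambda t: t * t - 2.0, lambda t: 2.0 * t, 1.5)
print(x)  # ~ 1.41421356..., i.e., sqrt(2)
```

With the starting value 1.5, the residuals decay as roughly 7e-3, 6e-6, then machine precision, matching quadratic convergence for this simple root.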
This method converges quadratically, contingent upon the initial value $\sigma^{(0)}$ being sufficiently close to the simple zero. To circumvent some limitations of NM2, alternative solvers [6], such as Steffensen's method (SM2), have been developed, which do not require derivative computations [7]. The SM2 for addressing nonlinear systems is defined as [8]
$$ \lambda^{(\varrho)} = \sigma^{(\varrho)} + L(\sigma^{(\varrho)}), \qquad \sigma^{(\varrho+1)} = \sigma^{(\varrho)} - [\sigma^{(\varrho)}, \lambda^{(\varrho)}; L]^{-1} L(\sigma^{(\varrho)}), \qquad \varrho = 0, 1, 2, \ldots \tag{3} $$
which employs the divided difference operator (DDO). For points $\tau, y \in \mathbb{R}^m$, the first-order DDO of L is defined componentwise as follows ($1 \le i, j \le m$):
$$ [\tau, y; L]_{i,j} = \frac{L_i(\tau_1, \ldots, \tau_{j-1}, y_j, \ldots, y_m) - L_i(\tau_1, \ldots, \tau_j, y_{j+1}, \ldots, y_m)}{y_j - \tau_j}. \tag{4} $$
In a more general context, the DDO for L in $\mathbb{R}^m$ can be formulated as [8]
$$ [\cdot, \cdot\,; L] : D \times D \subseteq \mathbb{R}^m \times \mathbb{R}^m \to \mathcal{L}(\mathbb{R}^m), \tag{5} $$
which satisfies the relationship $[y, \tau; L](y - \tau) = L(y) - L(\tau)$ for all $\tau, y \in D$. By letting $h = y - \tau$, the first-order DDO can be written via the following [9]:
$$ [\tau + h, \tau; L] = \int_0^1 L'(\tau + t h)\, dt. \tag{6} $$
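A small Python sketch of definition (4) (the paper's own implementation is in Mathematica; this toy map and helper are ours) builds $[\tau, y; L]$ componentwise and confirms the secant property $[y, \tau; L](y - \tau) = L(y) - L(\tau)$ numerically:

```python
# Componentwise first-order divided difference operator, definition (4),
# checked against the secant property [y, tau; L](y - tau) = L(y) - L(tau).
def ddo(L, tau, y):
    m = len(tau)
    A = [[0.0] * m for _ in range(m)]
    for j in range(m):
        p1 = tau[:j] + y[j:]           # (tau_1, ..., tau_{j-1}, y_j, ..., y_m)
        p2 = tau[:j + 1] + y[j + 1:]   # (tau_1, ..., tau_j, y_{j+1}, ..., y_m)
        f1, f2 = L(p1), L(p2)
        for i in range(m):
            A[i][j] = (f1[i] - f2[i]) / (y[j] - tau[j])
    return A

def L(z):  # toy nonlinear map, not from the paper
    return [z[0] ** 2 - z[1], z[0] * z[1] - 1.0]

tau, y = [1.0, 2.0], [1.5, 2.5]
A = ddo(L, tau, y)
d = [y[k] - tau[k] for k in range(2)]
lhs = [sum(A[i][j] * d[j] for j in range(2)) for i in range(2)]
rhs = [L(y)[i] - L(tau)[i] for i in range(2)]
print(lhs, rhs)  # [0.75, 1.75] [0.75, 1.75]
```

The two printed vectors agree because the numerators in (4) telescope over j, which is exactly why (4) satisfies the secant relationship stated above.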
However, the definitions provided in (4)–(6) primarily result in dense matrices for representing the DDO, which limits the applicability of the SM2 to solve (1) effectively. Traub introduced an enhancement to NM2 that achieves local cubic convergence [10] (TM):
$$ \gamma^{(\varrho)} = \sigma^{(\varrho)} - L'(\sigma^{(\varrho)})^{-1} L(\sigma^{(\varrho)}), \qquad \sigma^{(\varrho+1)} = \gamma^{(\varrho)} - L'(\sigma^{(\varrho)})^{-1} L(\gamma^{(\varrho)}). \tag{7} $$
Another prominent and efficient technique to solve (1) is the quartically convergent Jarratt method [11,12], given as follows (JM4):
$$ \begin{aligned} y^{(\varrho)} &= \sigma^{(\varrho)} - \tfrac{2}{3}\, L'(\sigma^{(\varrho)})^{-1} L(\sigma^{(\varrho)}), \\ \sigma^{(\varrho+1)} &= \sigma^{(\varrho)} - \tfrac{1}{2} \left( 3 L'(y^{(\varrho)}) - L'(\sigma^{(\varrho)}) \right)^{-1} \left( 3 L'(y^{(\varrho)}) + L'(\sigma^{(\varrho)}) \right) L'(\sigma^{(\varrho)})^{-1} L(\sigma^{(\varrho)}). \end{aligned} \tag{8} $$
In an effort to refine both (7) and (8), the authors of [13] proposed a three-step sixth-order solver that eliminates the need to compute DDOs and second-order Fréchet derivatives when solving (1), as follows (MSM):
$$ \begin{aligned} \eta^{(\varrho)} &= L'(\sigma^{(\varrho)})^{-1} L(\sigma^{(\varrho)}), \qquad \xi^{(\varrho)} = \sigma^{(\varrho)} - \tfrac{2}{3}\, \eta^{(\varrho)}, \\ \gamma^{(\varrho)} &= \sigma^{(\varrho)} - \left[ \tfrac{23}{8} I - 3 L'(\sigma^{(\varrho)})^{-1} L'(\xi^{(\varrho)}) + \tfrac{9}{8} \left( L'(\sigma^{(\varrho)})^{-1} L'(\xi^{(\varrho)}) \right)^2 \right] \eta^{(\varrho)}, \\ \sigma^{(\varrho+1)} &= \gamma^{(\varrho)} - \left[ \tfrac{5}{2} I - \tfrac{3}{2}\, L'(\sigma^{(\varrho)})^{-1} L'(\xi^{(\varrho)}) \right] L'(\sigma^{(\varrho)})^{-1} L(\gamma^{(\varrho)}). \end{aligned} \tag{9} $$
These solvers are distinguished not only by their straightforward implementation but also by their computational efficiency. For a thorough overview of this subject, please refer to [14,15].
This article introduces two multi-step higher-order solvers aimed at resolving (1), specifically proposing fourth- and fifth-order iterative techniques for computing simple zeros. These solvers circumvent the need to compute the second Fréchet derivative.
This manuscript is structured as follows:
  • Section 2 provides a discussion of multi-step schemes comprising three substeps to reach fast convergence, while utilizing a minimal number of lower–upper (LU) decompositions.
  • Section 3 includes an error analysis and assesses convergence rates.
  • The dynamic characteristics of the proposed schemes in contrast to those of the available solvers are compared in Section 4.
  • Additionally, Section 5 applies the proposed method to several test problems.
  • Finally, Section 6 concludes with findings and outlines future research directions.

2. A New Procedure

We propose an enhanced iterative scheme that builds upon the pioneering methods discussed in Section 1, incorporating multiple substeps while reusing a frozen operator, as follows (PM1):
$$ \begin{aligned} y^{(\varrho)} &= \sigma^{(\varrho)} - L'(\sigma^{(\varrho)})^{-1} L(\sigma^{(\varrho)}), \\ \xi^{(\varrho)} &= y^{(\varrho)} - [\sigma^{(\varrho)}, y^{(\varrho)}; L]^{-1} L(y^{(\varrho)}), \\ \sigma^{(\varrho+1)} &= \xi^{(\varrho)} - [\sigma^{(\varrho)}, y^{(\varrho)}; L]^{-1} L(\xi^{(\varrho)}). \end{aligned} \tag{10} $$
The DDO during the second and third substeps is intentionally held constant to maximize convergence enhancement; see the discussions in [16,17]. However, to boost the convergence, we may use an updated node value in the DDO of the third substep, which would lead to a higher convergence speed as follows (PM2):
$$ \begin{aligned} y^{(\varrho)} &= \sigma^{(\varrho)} - L'(\sigma^{(\varrho)})^{-1} L(\sigma^{(\varrho)}), \\ \xi^{(\varrho)} &= y^{(\varrho)} - [\sigma^{(\varrho)}, y^{(\varrho)}; L]^{-1} L(y^{(\varrho)}), \\ \sigma^{(\varrho+1)} &= \xi^{(\varrho)} - [\xi^{(\varrho)}, y^{(\varrho)}; L]^{-1} L(\xi^{(\varrho)}). \end{aligned} \tag{11} $$
Furthermore, in practical implementations, matrix inversion is to be avoided. Hence, we employ LU decompositions for linear systems. Thus, we attain the following four linear algebraic equations in the process of handling PM1 or PM2 to calculate the simple root of a nonlinear algebraic problem (1):
$$ \begin{aligned} L'(\sigma^{(\varrho)})\, Y^{(\varrho)} &= L(\sigma^{(\varrho)}), \qquad & [\sigma^{(\varrho)}, y^{(\varrho)}; L]\, M^{(\varrho)} &= L(y^{(\varrho)}), \\ [\sigma^{(\varrho)}, y^{(\varrho)}; L]\, N^{(\varrho)} &= L(\xi^{(\varrho)}), \qquad & [\xi^{(\varrho)}, y^{(\varrho)}; L]\, P^{(\varrho)} &= L(\xi^{(\varrho)}). \end{aligned} \tag{12} $$
Then, the iterative expressions PM1 and PM2 can be simply reformulated in implementations as follows:
$$ y^{(\varrho)} = \sigma^{(\varrho)} - Y^{(\varrho)}, \qquad \xi^{(\varrho)} = y^{(\varrho)} - M^{(\varrho)}, \qquad \sigma^{(\varrho+1)} = \xi^{(\varrho)} - N^{(\varrho)}, \tag{13} $$
and
$$ y^{(\varrho)} = \sigma^{(\varrho)} - Y^{(\varrho)}, \qquad \xi^{(\varrho)} = y^{(\varrho)} - M^{(\varrho)}, \qquad \sigma^{(\varrho+1)} = \xi^{(\varrho)} - P^{(\varrho)}. \tag{14} $$
We adhere to a fundamental principle in computational mathematics which asserts that the true evaluation of numerical algorithms is based on computational efficiency. This efficiency is directly related to the quality of the method and inversely related to its computational cost. The quality of a method is measured by its convergence rate and structural properties, while its computational cost refers to the amount of work required for function evaluations, derivatives, and LU decompositions throughout the process. With these considerations in mind, we provide the fourth- and fifth-order methods PM1 and PM2 based on different first-order DDOs.
While several sixth- and seventh-order methods exist in the literature [13,16] that perform well for resolving systems of nonlinear equations, our objective here is to demonstrate that effective fourth- and fifth-order schemes can also be developed. By strategically freezing the DDO or applying it at suitable stages, the convergence domain may be extended, as will be shown in Section 4.
This approach is novel in the literature for nonlinear systems, focusing primarily on expanding the attraction basins (as will be seen in Section 4). In contrast, other frozen-type schemes tend to lose their broad convergence domain due to the application of multiple frozen Jacobians. The process of the proposed solvers is summarized in Algorithm 1 and the flowchart of Figure 1.
Algorithm 1 The algorithmic format of the multi-step iterative schemes (PM1 and PM2).
 1: Input: initial guess $\sigma^{(0)}$, tolerance $\epsilon$, and maximum number of iterations $\sigma_{\max}$
 2: Output: approximate solution $\sigma^{(\varrho+1)}$
 3: Initialize $\varrho \leftarrow 0$
 4: while $\|L(\sigma^{(\varrho)})\| > \epsilon$ and $\varrho < \sigma_{\max}$ do
 5:   Compute $y^{(\varrho)} = \sigma^{(\varrho)} - L'(\sigma^{(\varrho)})^{-1} L(\sigma^{(\varrho)})$ using an LU factorization to avoid the matrix inversion
 6:   Compute $\xi^{(\varrho)} = y^{(\varrho)} - [\sigma^{(\varrho)}, y^{(\varrho)}; L]^{-1} L(y^{(\varrho)})$ using an LU factorization to avoid the matrix inversion
 7:   if using PM1 then
 8:     Update $\sigma^{(\varrho+1)} = \xi^{(\varrho)} - [\sigma^{(\varrho)}, y^{(\varrho)}; L]^{-1} L(\xi^{(\varrho)})$, reusing the LU factorization obtained in Line 6
 9:   else if using PM2 then
10:     Update $\sigma^{(\varrho+1)} = \xi^{(\varrho)} - [\xi^{(\varrho)}, y^{(\varrho)}; L]^{-1} L(\xi^{(\varrho)})$ using a new LU factorization to avoid the matrix inversion
11:   end if
12:   $\varrho \leftarrow \varrho + 1$
13: end while
14: Return $\sigma^{(\varrho+1)}$
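Algorithm 1 can be sketched in Python as follows. This is our illustrative sketch, not the paper's code (which is Mathematica); the helper names are hypothetical, a plain Gaussian-elimination solve with partial pivoting stands in for the LU factorization, and the demo system with root (1, 2) is chosen only for illustration:

```python
# Sketch of Algorithm 1: one full PM1/PM2 iteration for a system L(z) = 0,
# using the divided difference operator of definition (4).
def solve(A, b):
    # Solve A x = b by Gaussian elimination with partial pivoting
    # (a stand-in for the LU factorization used in the paper).
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def ddo(L, tau, y):
    # First-order divided difference operator, definition (4).
    n = len(tau)
    A = [[0.0] * n for _ in range(n)]
    for j in range(n):
        f1 = L(tau[:j] + y[j:])
        f2 = L(tau[:j + 1] + y[j + 1:])
        for i in range(n):
            A[i][j] = (f1[i] - f2[i]) / (y[j] - tau[j])
    return A

def pm_step(L, J, s, variant="PM1"):
    # One iteration (three substeps) of PM1 or PM2.
    Y = solve(J(s), L(s))                           # Line 5: Newton substep
    y = [s[i] - Y[i] for i in range(len(s))]
    D = ddo(L, s, y)                                # Line 6: DDO, reused by PM1
    M = solve(D, L(y))
    xi = [y[i] - M[i] for i in range(len(s))]
    D3 = D if variant == "PM1" else ddo(L, xi, y)   # PM2 refreshes the DDO
    N = solve(D3, L(xi))
    return [xi[i] - N[i] for i in range(len(s))]

# demo on a toy system with simple root (1, 2), not from the paper
Lf = lambda z: [z[0] ** 2 + z[1] - 3.0, z[0] + z[1] ** 2 - 5.0]
Jf = lambda z: [[2.0 * z[0], 1.0], [1.0, 2.0 * z[1]]]
s = [1.2, 2.2]
for _ in range(2):
    s = pm_step(Lf, Jf, s, "PM1")
print(max(abs(v) for v in Lf(s)))  # tiny residual after two PM1 iterations
```

Note how PM1 reuses the same factorized operator D in its third substep, while PM2 pays for one extra factorization in exchange for the higher rate.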

3. Rates for Convergence

In this section, we provide a rigorous theoretical analysis of the convergence rate for the iterative schemes (13)–(14). Before presenting the primary theoretical results concerning convergence, we first introduce the υ -dimensional Taylor expansion.
The convergence rates of the memoryless iterations PM1 and PM2 are established using $\upsilon$-dimensional Taylor expansions. Let $\mathrm{er}^{(\varrho)} = \sigma^{(\varrho)} - \beta$ represent the error at the $\varrho$-th iteration. As discussed in [18],
$$ \mathrm{er}^{(\varrho+1)} = F\, \mathrm{er}^{(\varrho)\,p} + O\!\left(\mathrm{er}^{(\varrho)\,p+1}\right). \tag{15} $$
The derived error equation indicates that F is a p-linear function, wherein $F \in \mathcal{L}(\mathbb{R}^\upsilon, \mathbb{R}^\upsilon, \ldots, \mathbb{R}^\upsilon)$ and p is the convergence order. In addition, one obtains
$$ \mathrm{er}^{(\varrho)\,p} = (\underbrace{\mathrm{er}^{(\varrho)}, \mathrm{er}^{(\varrho)}, \ldots, \mathrm{er}^{(\varrho)}}_{p \ \text{terms}}). \tag{16} $$
Let $L : D \subseteq \mathbb{R}^\upsilon \to \mathbb{R}^\upsilon$ be assumed to possess sufficient differentiability in the Fréchet sense over D. As established in [11], the m-th derivative of L at $u \in \mathbb{R}^\upsilon$ for $m \ge 1$ is defined as an m-linear function, namely,
$$ L^{(m)}(u) : \mathbb{R}^\upsilon \times \cdots \times \mathbb{R}^\upsilon \to \mathbb{R}^\upsilon, \tag{17} $$
so that $L^{(m)}(u)(v_1, \ldots, v_m) \in \mathbb{R}^\upsilon$. For $\beta + h \in \mathbb{R}^\upsilon$ situated in a neighborhood of the zero $\beta$ of (1), the Taylor series can be expressed as follows [11]:
$$ L(\beta + h) = L'(\beta)\left[ h + \sum_{m=2}^{p-1} \Theta_m h^m \right] + O(h^p), \tag{18} $$
where
$$ \Theta_m = \frac{1}{m!}\, [L'(\beta)]^{-1} L^{(m)}(\beta), \qquad m \ge 2. \tag{19} $$
It follows that $\Theta_m h^m \in \mathbb{R}^\upsilon$ and $[L'(\beta)]^{-1} \in \mathcal{L}(\mathbb{R}^\upsilon)$. Moreover, for $L'(\cdot)$, we obtain
$$ L'(\beta + h) = L'(\beta)\left[ I + \sum_{m=2}^{p-1} m\, \Theta_m h^{m-1} \right] + O(h^{p-1}), \tag{20} $$
where I denotes the identity matrix. Also, $m\, \Theta_m h^{m-1} \in \mathcal{L}(\mathbb{R}^\upsilon)$.
Theorem 1.
In (1), let $L : D \subseteq \mathbb{R}^\upsilon \to \mathbb{R}^\upsilon$ possess sufficient Fréchet differentiability at each point within D, with $L(\beta) = 0$ at some $\beta \in \mathbb{R}^\upsilon$. Also, assume $L'(z)$ is continuous and invertible at $\beta$. Then, the sequence $\{\sigma^{(\varrho)}\}_{\varrho \ge 0}$ produced by PM1 converges to $\beta$ with local fourth-order convergence.
Proof. 
To establish the rate of convergence, we expand using (18) and (20) to obtain
$$ L(\sigma^{(\varrho)}) = L'(\beta)\left[ \mathrm{er}^{(\varrho)} + \Theta_2\, \mathrm{er}^{(\varrho)2} + \Theta_3\, \mathrm{er}^{(\varrho)3} + \Theta_4\, \mathrm{er}^{(\varrho)4} \right] + O(\mathrm{er}^{(\varrho)5}), \tag{21} $$
and
$$ L'(\sigma^{(\varrho)}) = L'(\beta)\left[ I + 2\Theta_2\, \mathrm{er}^{(\varrho)} + 3\Theta_3\, \mathrm{er}^{(\varrho)2} + 4\Theta_4\, \mathrm{er}^{(\varrho)3} \right] + O(\mathrm{er}^{(\varrho)4}), \tag{22} $$
where $\Theta_i = \frac{1}{i!}[L'(\beta)]^{-1} L^{(i)}(\beta)$ for $i \ge 2$, and $\mathrm{er}^{(\varrho)} = \sigma^{(\varrho)} - \beta$. From (21) and (22), we derive the following:
$$ y^{(\varrho)} = \beta + \Theta_2\, \mathrm{er}^{(\varrho)2} + O(\mathrm{er}^{(\varrho)3}). \tag{23} $$
For the DDO matrix $[\sigma^{(\varrho)}, y^{(\varrho)}; L]$, we compute the Taylor expansion similarly and finally obtain the following error at the end of the second substep of PM1:
$$ \xi^{(\varrho)} - \beta = \Theta_2^2\, \mathrm{er}^{(\varrho)3} + \left( 3\Theta_3\Theta_2 - 3\Theta_2^3 \right) \mathrm{er}^{(\varrho)4} + O(\mathrm{er}^{(\varrho)5}). \tag{24} $$
We omit the intermediate expansions here for the sake of brevity and clarity. Similarly, we have the following:
$$ L(\xi^{(\varrho)}) = L'(\beta)\left[ \Theta_2^2\, \mathrm{er}^{(\varrho)3} + \left( 3\Theta_3\Theta_2 - 3\Theta_2^3 \right) \mathrm{er}^{(\varrho)4} \right] + O(\mathrm{er}^{(\varrho)5}). \tag{25} $$
Employing (24) and (25) in the third substep of PM1, we derive
$$ \mathrm{er}^{(\varrho+1)} = \Theta_2^3\, \mathrm{er}^{(\varrho)4} + O(\mathrm{er}^{(\varrho)5}), \tag{26} $$
showcasing a fourth-order convergence rate, while necessitating only two LU decompositions per iteration cycle. This concludes the proof.    □
Theorem 2.
In (1), consider that $L : D \subseteq \mathbb{R}^\upsilon \to \mathbb{R}^\upsilon$ is sufficiently Fréchet-differentiable at every point in D, with $L(\beta) = 0$ for some $\beta \in \mathbb{R}^\upsilon$. Furthermore, assume that $L'(z)$ is continuous and invertible at $\beta$. Under these conditions, the sequence $\{\sigma^{(\varrho)}\}_{\varrho \ge 0}$ generated by PM2 converges locally to $\beta$ with a fifth-order convergence rate.
Proof. 
The proof follows the same path as that of Theorem 1. Accordingly, we provide only its final error equation,
$$ \mathrm{er}^{(\varrho+1)} = \Theta_2^4\, \mathrm{er}^{(\varrho)5} + O(\mathrm{er}^{(\varrho)6}), \tag{27} $$
illustrating a fifth-order rate of convergence, while necessitating three LU decompositions per iteration. This completes the proof.    □
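The rates claimed in Theorems 1 and 2 can be checked empirically in the scalar case (consistent with the scalar setting used in Section 4). The following is our own Python sketch, not the paper's Mathematica code: it runs PM1 and PM2 on the test function $f(x) = x^3 - 2$ (our choice) in high-precision decimal arithmetic and estimates the computational order of convergence:

```python
# Scalar-case empirical order check for PM1 (Theorem 1) and PM2 (Theorem 2)
# on f(x) = x^3 - 2. The estimated order
#   rho = ln(e_{k+1}/e_k) / ln(e_k/e_{k-1})
# should approach 4 for PM1 and 5 for PM2.
from decimal import Decimal, getcontext

getcontext().prec = 700
f = lambda x: x ** 3 - 2
df = lambda x: 3 * x ** 2

# reference root 2^(1/3), refined by plain Newton to full working precision
r = Decimal("1.26")
for _ in range(12):
    r -= f(r) / df(r)

def errors(variant, x0, steps=3):
    e, x = [], x0
    for _ in range(steps):
        y = x - f(x) / df(x)                # Newton substep
        dd = (f(x) - f(y)) / (x - y)        # scalar DDO [x, y; f]
        xi = y - f(y) / dd
        if variant == "PM2":
            dd = (f(xi) - f(y)) / (xi - y)  # refreshed DDO [xi, y; f]
        x = xi - f(xi) / dd
        e.append(abs(x - r))
    return e

orders = {}
for v in ("PM1", "PM2"):
    e = errors(v, Decimal("1.3"))
    orders[v] = float((e[2] / e[1]).ln() / (e[1] / e[0]).ln())
print(orders)  # roughly {'PM1': 4.0..., 'PM2': 5.0...}
```

High precision is essential here: with an order-four or order-five scheme the error drops below double precision after two iterations, so standard floats cannot resolve the ratios that define rho.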

4. Dynamical Behavior

Without loss of generality, we examine the one-variable case of the iterative schemes PM1 and PM2 by illustrating the basins of attraction of iterative solvers applied to polynomial problems of varying degrees within the field of complex numbers [19]. This is particularly valuable when the plots are shaded by the number of iterates required to attain convergence. In the complex plane, distinct roots of a polynomial are associated with different areas of attraction. By visualizing such fractal-like basins, we can delineate the areas where starting approximations tend to specific zeros. The boundaries between these basins typically exhibit fractal characteristics, presenting a complex and intricate structure that is challenging to predict. As the polynomial degree increases, these boundaries become even more convoluted, making such visualizations instrumental in comprehending how polynomial complexity influences the convergence behavior of iterative methods.
Moreover, shading the plots according to the number of iterates necessary for convergence provides insights into the effectiveness of the scheme. Areas in which the method exhibits fast convergence can be regarded as zones of higher stability or greater computational efficiency, while regions needing more iterates (or those that fail to converge) indicate potential inefficiencies or instabilities. We include such discussions in Figure 2, Figure 3, Figure 4 and Figure 5, depicting the domain $[-10, 10] \times [-10, 10]$ with the maximum number of iterates set to 150 and a tolerance of $10^{-3}$ on the residual as the stopping criterion. Four different solvers, PM1, PM2, JM4 and NM2, were compared.
In Figure 2, Figure 3, Figure 4 and Figure 5, the darkest color indicates the fewest iterations. In this way, it is not necessary to report the iteration counts in separate tables, since the shading of the colors conveys this information. The white-like areas show divergence or points requiring many iterations to converge.
For the points on the boundaries of the regions in Figure 2, Figure 3, Figure 4 and Figure 5, convergence may not be fast for any of the compared methods, since the lighter color shows the Julia sets in such areas, which exist due to failure in convergence. The computational time required for all compared schemes to draw the attraction basins is negligible. Furthermore, the color scheme used in all the figures is consistent. To reveal how the boundaries of the regions of the figures were obtained, we provide a Mathematica program for NM2 in the case of $p(z) = z^3 - 1$ as follows:
p[z_] = z^3 - 1;
x1 = p[z]/p'[z];
x2 = z - x1;
xx = x2 /. z -> #;
DensityPlot[
 Length[
  FixedPointList[xx &, x + I y, 40,
   SameTest -> (Abs[#1 - #2] < 10^-3 &)]],
 {y, -10, 10}, {x, -10, 10},
 ColorFunction -> "BlueGreenYellow",
 PlotPoints -> 150
 ]
The analysis further demonstrates that examining the variation in the number of iterates across the complex plane provides insight into the method’s global convergence characteristics. The fractal-like boundaries delineate regions where minor perturbations in the initial guess can result in significantly different outcomes, such as convergence to distinct roots or divergence. This pronounced sensitivity is particularly important to comprehend when employing such methods in real-world applications, where the precision of the starting point may be constrained. Finally, by analyzing the fractal attraction basins, we can design new iterative methods that have more desirable basin structures, such as larger regions of fast convergence and simpler basin boundaries.
The primary objective here is to demonstrate that we achieved accelerated convergence while maintaining large attraction basins compared to those of other higher-order solvers in the literature. Additionally, examining attraction basins in the multidimensional case is nearly impossible; therefore, we limit our analysis to the scalar case.

5. Computational Results

The target here is to support the application of our contributed schemes, PM1 and PM2. The computations were carried out using Wolfram Mathematica [20,21]. We intentionally programmed them in multiple-precision arithmetic with 2500 digits in order to observe the high orders of convergence in the numerical results. The linear systems involved were solved via LU factorization using LinearSolve[]. Numerical experiments were performed in a consistent environment. For comparative purposes, we assessed NM2, JM4, and our proposed higher-order techniques for solving the nonlinear systems of equations.
It would be beneficial to provide a more detailed explanation of how the DDOs are implemented in the multi-step solver. To achieve such a goal, the Mathematica implementation is provided below to enhance the reader’s understanding.
(*How to compute the DDO for two arbitrary points when size
        stands for the dimension of the nonlinear system*)
 
(*Define the two points*)
point1[j_] := Join[x[[;; j]], w[[j + 1 ;;]]];
point2[j_] := Join[x[[;; j - 1]], w[[j ;;]]];
 
(*Define the DDO computations*)
DDO = Transpose@
   SparseArray@
    Table[(f[point1[j]] - f[point2[j]])/(x[[j]] - w[[j]]), {j, size}];
 
(*Define its LU factorization*)
fun = LinearSolve[DDO];
Example 1
([22]). We examine the nonlinear system L ( z ) = 0 , which contains a complex solution, as outlined in the following discussions.
L ( z ) = 5 exp ( z 1 2 ) z 2 + 2 z 7 z 10 + 8 z 3 z 4 5 z 6 3 z 9 , 5 tan ( z 1 + 2 ) + cos ( z 9 z 10 ) + z 2 3 + 7 z 3 4 2 sin 3 ( z 6 ) , z 1 2 z 10 z 5 z 6 z 7 z 8 z 9 + tan ( z 2 ) + 2 z 3 z 4 5 z 6 3 , 2 tan ( z 1 2 ) + 2 z 2 + z 3 2 5 z 5 3 z 6 + z 8 cos ( z 9 ) , cos 1 ( z 1 2 ) sin ( z 2 ) 2 z 10 z 5 4 z 6 z 9 + z 3 2 , 10 z 1 2 z 10 + cos ( z 2 ) + z 3 2 5 z 6 3 2 z 8 4 z 9 , z 1 z 2 z 7 z 8 z 10 + z 3 5 5 z 5 3 + z 7 , 10 z 1 + z 3 2 5 z 5 2 + 10 z 6 z 8 sin ( z 7 ) + 2 z 9 , z 1 sin ( z 2 ) 2 z 10 z 8 + z 10 5 z 6 10 z 9 , cos 1 ( 10 z 10 + z 8 + z 9 ) + z 4 sin ( z 2 ) + z 3 15 z 5 2 + z 7 ,
where its root is ($i = \sqrt{-1}$):
β ( 1.3273490 + 0.3502924 i , 1.058599 1.748724 i , 1.0276186 0.0141308 i , 3.273950 + 0.127828 i , 0.8318243 + 0.0017551 i , 0.48532459 + 0.684877 i , 0.1693667 + 0.184091 i , 1.534419 0.321214 i , 2.086379 + 0.4263427 i , 1.989592 + 1.478395 i ) * .
The computational results, along with the numerical convergence rate denoted by $\rho$, are presented in Table 1. For the computations, we employed 500 fixed floating-point digits, with the starting guess chosen as
σ ( 0 ) = ( 1.200 + 0.300 i , 1.100 1.900 i , 1.000 0.100 i , 2.500 + 0.500 i , 0.800 0.100 i , 0.400 + 1.000 i , 0.100 + 0.100 i , 1.300 0.700 i , 2.000 + 0.500 i , 1.900 + 1.400 i ) * .
Furthermore, the residual norm is represented utilizing the notation · 2 .
The linear systems were solved using LU decomposition in Mathematica. However, it is possible to employ other strategies such as approximate matrix inverse preconditioners or matrix iterative methods to solve such linear systems; for more details, see [23,24]. Furthermore, the proposed nonlinear solvers can be used in a multitude of differential equation problems such as those discussed in [25,26,27].
The following question might arise: does NM2 perform better than PM1 and PM2? Generally speaking, in the field of iterative solvers for nonlinear equations, several factors may or may not lead to convergence for a specific solver. This is mainly due to the attraction basins of the different methods for any specific experiment. Hence, readers are advised to follow Theorems 1 and 2 and choose the starting point in the vicinity of a true solution, both to obtain convergence and to conduct a fair comparison.
Example 2.
The effectiveness of the specified schemes is determined through solving the test problem L ( z 1 , z 2 ) = 0 , where L : ( 0 , 30 ) × ( 0 , 30 ) R 2 :
$$ L(z) = L(z_1, z_2) = \left( z_1^2 - z_2 - 19,\ \frac{z_2^3}{6} - z_1^2 + z_2 - 17 \right). $$
We initiate with the vector σ ( 0 ) = ( 40.5 , 5.5 ) * , aiming to reach the exact root, β = ( 5 , 6 ) * .
The results for this case are given in Table 2. The numerical performance is demonstrated by resolving a range of nonlinear equations commonly encountered in applied mathematics. The results of the computational tests reveal that the proposed methods reach the zeros in a few iterates, while also providing high accuracy at each step. Additionally, the numerically observed convergence behavior validates the theoretically predicted higher orders of convergence.
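A quick sanity check on Example 2 can be scripted directly (this Python sketch is ours, not the paper's Mathematica code; the signs in the two components are written so that $\beta = (5, 6)$ is an exact root, consistent with the stated solution, and the nearby starting point is chosen here only for illustration):

```python
# Example 2: L(z) = (z1^2 - z2 - 19, z2^3/6 - z1^2 + z2 - 17) vanishes at
# beta = (5, 6), and Newton's method (NM2) from a nearby guess recovers it.
# The 2x2 linear solves use Cramer's rule.
def L(z):
    z1, z2 = z
    return [z1 ** 2 - z2 - 19.0, z2 ** 3 / 6.0 - z1 ** 2 + z2 - 17.0]

def J(z):
    z1, z2 = z
    return [[2.0 * z1, -1.0], [-2.0 * z1, z2 ** 2 / 2.0 + 1.0]]

print(L([5.0, 6.0]))  # [0.0, 0.0]

z = [5.4, 6.5]  # illustrative nearby start, not the paper's sigma^(0)
for _ in range(8):
    (a, b), (c, d) = J(z)
    f1, f2 = L(z)
    det = a * d - b * c
    z = [z[0] - (f1 * d - b * f2) / det, z[1] - (a * f2 - f1 * c) / det]
print([round(v, 10) for v in z])  # [5.0, 6.0]
```

The Jacobian at the root, [[10, -1], [-10, 19]], is invertible, so the root is simple and the local convergence theory of Sections 2 and 3 applies.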

6. Conclusions

In this study, fourth- and fifth-order solvers were provided, aiming at improving the computational efficiency of solvers for nonlinear algebraic systems with both real and complex solutions. By adhering to the fundamental principle of numerical analysis, which emphasizes the importance of computational efficiency, we ensured that the devised methods balance algorithmic quality and computational cost. Our approach, rooted in multi-step solvers derived from Newton's method, utilizes DDOs at key substeps to elevate the speed of convergence and, consequently, the solver's efficiency index. The theoretical advancements introduced in this work were validated by numerical experiments, confirming the efficiency of the proposed schemes. We demonstrated that higher-order methods, when carefully constructed using divided difference operators, can achieve substantial improvements in the stability regions for the choice of initial approximations, based on the basins of attraction given in Section 4, while maintaining or improving accuracy.

Author Contributions

All writers equally contributed to this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This project was supported by Researchers Supporting Project number RSP2024R210, King Saud University, Riyadh, Saudi Arabia.

Data Availability Statement

No particular data were utilized.

Acknowledgments

We wish to convey our appreciation to all four referees for their perceptive feedback, which greatly strengthened the clarity and rigor of this manuscript.

Conflicts of Interest

The writers declare that they have no conflicts of interest.

References

  1. Khandani, H. New algorithms to estimate the real roots of differentiable functions and polynomials on a closed finite interval. J. Math. Model. 2023, 11, 631–647. [Google Scholar]
  2. Singh, H.; Sharma, J.R. A two-point Newton-like method of optimal fourth order convergence for systems of nonlinear equations. Complexity 2025, 86, 101907. [Google Scholar] [CrossRef]
  3. Moscoso-Martínez, M.; Chicharro, F.I.; Cordero, A.; Torregrosa, J.R.; Ureña-Callay, G. Achieving optimal order in a novel family of numerical methods: Insights from convergence and dynamical analysis results. Axioms 2024, 13, 458. [Google Scholar] [CrossRef]
  4. McNamee, J.M.; Pan, V.Y. Numerical Methods for Roots of Polynomials–Part I; Elsevier: Amsterdam, The Netherlands, 2007. [Google Scholar]
  5. McNamee, J.M.; Pan, V.Y. Numerical Methods for Roots of Polynomials–Part II; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
  6. Khdhr, F.W.; Soleymani, F.; Saeed, R.K.; Akgül, A. An optimized Steffensen–type iterative method with memory associated with annuity calculation. Eur. Phys. J. Plus 2019, 134, 146. [Google Scholar] [CrossRef]
  7. Noda, T. The Steffensen iteration method for systems of nonlinear equations. Proc. Jpn. Acad. 1987, 63, 186–189. [Google Scholar] [CrossRef]
  8. Grau-Sánchez, M.; Grau, À.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266. [Google Scholar] [CrossRef]
  9. Rostami, M.; Lotfi, T.; Brahmand, A. A fast derivative-free iteration scheme for nonlinear systems and integral equations. Mathematics 2019, 7, 637. [Google Scholar] [CrossRef]
  10. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: New York, NY, USA, 1964. [Google Scholar]
  11. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
  12. Kansal, M.; Cordero, A.; Bhalla, S.; Torregrosa, J.R. New fourth-and sixth-order classes of iterative methods for solving systems of nonlinear equations and their stability analysis. Numer. Algorithms 2021, 87, 1017–1060. [Google Scholar] [CrossRef]
  13. Montazeri, H.; Soleymani, F.; Shateyi, S.; Motsa, S.S. On a new method for computing the numerical solution of systems of nonlinear equations. J. Appl. Math. 2012, 2012, 751975. [Google Scholar] [CrossRef]
  14. Chang, C.-W.; Qureshi, S.; Argyros, I.K.; Saraz, K.M.; Hincal, E. A modified fractional Newton’s solver. Axioms 2024, 13, 689. [Google Scholar] [CrossRef]
  15. Ogbereyivwe, O.; Atajeromavwo, E.J.; Umar, S.S. Jarratt and Jarratt-variant families of iterative schemes for scalar and system of nonlinear equations. Iran. J. Numer. Anal. Optim. 2024, 14, 391–416. [Google Scholar]
  16. Soheili, A.R.; Soleymani, F. Iterative methods for nonlinear systems associated with finite difference approach in stochastic differential equations. Numer. Algorithms 2016, 71, 89–102. [Google Scholar] [CrossRef]
  17. Cordero, A.; Leonardo-Sepúlveda, M.A.; Torregrosa, J.R.; Vassileva, M.P. Increasing in three units the order of convergence of iterative methods for solving nonlinear systems. Math. Comput. Simul. 2024, 223, 509–522. [Google Scholar] [CrossRef]
  18. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323. [Google Scholar] [CrossRef]
  19. Shi, L.; Ullah, M.Z.; Nashine, H.K.; Alansari, M.; Shateyi, S. An enhanced numerical iterative method for expanding the attraction basins when computing matrix signs of invertible matrices. Fractal Fract. 2023, 7, 684. [Google Scholar] [CrossRef]
  20. Dubin, D. Numerical and Analytical Methods for Scientists and Engineers Using Mathematica; John Wiley & Sons: Hoboken, NJ, USA, 2003. [Google Scholar]
  21. Clark, J.; Kapadia, D. Introduction to Calculus: A Computational Approach; Wolfram Media: Champaign, IL, USA, 2024. [Google Scholar]
  22. Lotfi, T.; Momenzadeh, M. Constructing an efficient multi-step iterative scheme for nonlinear system of equations. Comput. Methods Differ. Equ. 2021, 9, 710–721. [Google Scholar]
  23. Ma, X.; Nashine, H.K.; Shil, S.; Soleymani, F. Exploiting higher computational efficiency index for computing outer generalized inverses. Appl. Numer. Math. 2022, 175, 18–28. [Google Scholar] [CrossRef]
  24. Shil, S.; Nashine, H.K.; Soleymani, F. On an inversion-free algorithm for the nonlinear matrix problem X α + A X β A + B X γ B = I . Int. J. Comput. Math. 2022, 99, 2555–2567. [Google Scholar] [CrossRef]
  25. Shahriari, M.; Saray, B.N.; Mohammadalipour, B.; Saeidian, S. Pseudospectral method for solving the fractional one-dimensional Dirac operator using Chebyshev cardinal functions. Phys. Scr. 2023, 98, 055205. [Google Scholar] [CrossRef]
  26. Saray, B.N. Abel’s integral operator: Sparse representation based on multiwavelets. BIT 2021, 61, 587–606. [Google Scholar] [CrossRef]
  27. Saray, B.N.; Lakestani, M.; Dehghan, M. On the sparse multiscale representation of 2-D Burgers equations by an efficient algorithm based on multiwavelets. Numer. Methods Partial Differ. Equ. 2023, 39, 1938–1961. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the implementations of the proposed solvers.
Figure 2. NM2 basins of attractions for p ( z ) = z 2 1 , p ( z ) = z 3 1 , p ( z ) = z 4 1 , and p ( z ) = z 5 1 on the upper-left, upper-right, lower-left, and lower-right corners, respectively.
Figure 3. PM1 basins of attractions for p ( z ) = z 2 1 , p ( z ) = z 3 1 , p ( z ) = z 4 1 , and p ( z ) = z 5 1 on the upper-left, upper-right, lower-left, and lower-right corners, respectively.
Figure 4. PM2 basins of attractions for p ( z ) = z 2 1 , p ( z ) = z 3 1 , p ( z ) = z 4 1 , p ( z ) = z 5 1 on the upper-left, upper-right, lower-left, and lower-right corners, respectively.
Figure 5. JM4 basins of attractions for p ( z ) = z 2 1 , p ( z ) = z 3 1 , p ( z ) = z 4 1 , p ( z ) = z 5 1 on the upper-left, upper-right, lower-left, and lower-right corners, respectively.
Table 1. Numerical results for various methods presented in Example 1.
| Methods | ‖L(σ^(4))‖_2 | ‖L(σ^(5))‖_2 | ‖L(σ^(6))‖_2 | ‖L(σ^(7))‖_2 | ‖L(σ^(8))‖_2 | ‖L(σ^(9))‖_2 |
|---|---|---|---|---|---|---|
| NM2 | 2.73 × 10^-2 | 1.79 × 10^-5 | 1.28 × 10^-11 | 2.52 × 10^-23 | 8.28 × 10^-47 | 2.50 × 10^-94 |
| SM2 | 1.83 × 10^-2 | 7.33 × 10^-6 | 5.17 × 10^-12 | 1.51 × 10^-24 | 2.03 × 10^-49 | 3.91 × 10^-99 |
| PM1 | 2.39 × 10^-17 | 3.95 × 10^-71 | 4.92 × 10^-286 | – | – | – |
| PM2 | 3.47 × 10^-82 | 7.55 × 10^-412 | 2.85 × 10^-2059 | – | – | – |
Table 2. Numerical results for various methods presented in Example 2.
| Methods | ‖L(σ^(4))‖_2 | ‖L(σ^(5))‖_2 | ‖L(σ^(6))‖_2 | ‖L(σ^(7))‖_2 | ‖L(σ^(8))‖_2 | ‖L(σ^(9))‖_2 |
|---|---|---|---|---|---|---|
| NM2 | 1.11 × 10^3 | 3.22 × 10^2 | 8.69 × 10^1 | 1.88 × 10^1 | 2.08 | 3.78 × 10^-2 |
| SM2 | 2.25 × 10^6 | 2.14 × 10^6 | 2.04 × 10^6 | 1.93 × 10^6 | 1.82 × 10^6 | 1.70 × 10^6 |
| PM1 | 2.40 × 10^-24 | 1.17 × 10^-101 | 6.71 × 10^-411 | – | – | – |
| JM4 | 7.41 | 1.07 × 10^-3 | 7.26 × 10^-19 | – | – | – |
| PM2 | 2.59 × 10^-23 | 8.74 × 10^-122 | 3.82 × 10^-614 | – | – | – |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
