An Iterative Hybrid Algorithm for Roots of Non-Linear Equations

Abstract: Finding the roots of non-linear and transcendental equations is an important problem in the engineering sciences. In general, such problems do not have an analytic solution, and researchers resort to numerical techniques. We design and implement a three-way hybrid algorithm that blends the Newton–Raphson algorithm with a two-way blended algorithm (a blend of two methods, Bisection and False Position). The hybrid algorithm is a new single-pass iterative approach. In each iteration, the method takes advantage of the best of the three algorithms to estimate an approximate value closer to the root. We show that the new algorithm outperforms the Bisection, Regula Falsi, Newton–Raphson, quadrature based, undetermined coefficients based, and decomposition based algorithms. The new hybrid root finding algorithm is guaranteed to converge. The experimental results and empirical evidence show that the complexity of the hybrid algorithm is far less than that of the other algorithms. Several functions cited in the literature are used as benchmarks to compare and confirm the simplicity, efficiency, and performance of the proposed method.


Introduction
The construction of numerical solutions for non-linear equations is essential in many branches of science and engineering. Most of the time, non-linear problems do not have analytic solutions, and researchers resort to numerical methods. New, efficient methods for solving nonlinear equations are evolving frequently and are ubiquitously explored and exploited in applications. Their purpose is to improve on the existing methods, such as the classical Bisection, False Position, and Newton-Raphson methods and their variants, in efficiency, simplicity, and reliability of the approximate solution.
There are various ways to approach a problem; some methods rely only on continuity of the function, while others take advantage of differentiability. Algorithms such as Bisection (using the midpoint), False Position (using the secant line intersection), and Newton-Raphson (using the tangent line intersection) are ubiquitous. There are improvements of these algorithms for speedup, and more elegant and efficient implementations are emerging. The variations of the continuity-based Bisection and False Position methods are due to Dekker [1,2], Brent [3], and Press [4], and there are several variants of False Position, including the reverse quadratic interpolation (RQI) method. The variations of the derivative-based, quadratic order Newton-Raphson method are 3rd, 4th, 5th, and 6th order methods. From these algorithms, the researcher has to find the one that works best for a given function [5,6]. For example, the Bisection method for the simple equation x − 2 = 0 on the interval [0, 4] will find the solution in one iteration. However, if the same algorithm is used on the different, smaller interval [0, 3], the iterations will go on forever approaching 2; a tolerance on the error or on the number of iterations is needed to terminate the algorithm.
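The bisection behavior described above can be checked directly. The following is a minimal Python sketch written for this illustration (the stopping rule mirrors a standard interval-width test; it is not the paper's implementation):

```python
def bisect(f, a, b, tol=1e-7, max_iter=100):
    """Plain bisection: halve the bracketing interval until f(m) is zero
    or the half-width of the interval drops below tol."""
    for n in range(1, max_iter + 1):
        m = (a + b) / 2.0
        if f(m) == 0.0 or (b - a) / 2.0 < tol:
            return m, n
        if f(a) * f(m) < 0:
            b = m
        else:
            a = m
    return (a + b) / 2.0, max_iter

f = lambda x: x - 2.0
root1, n1 = bisect(f, 0.0, 4.0)  # midpoint of [0, 4] is exactly 2: one iteration
root2, n2 = bisect(f, 0.0, 3.0)  # 2 is never a midpoint of [0, 3]: the tolerance must stop the loop
```

On [0, 4] the first midpoint is the exact root; on [0, 3] every midpoint is of the form 3k/2^n, which is never exactly 2, so only the tolerance terminates the iteration.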
For derivative based methods, several predictor-corrector solutions have been proposed for extending the Newton-Raphson method with the support of quadrature formulas [7]: Midpoint (mean of the endpoints of the domain interval), Trapezoidal (mean of the function values at the endpoints of the domain), and Simpson (quadratic approximation of the function). Further extensions include undetermined coefficients [8], a third order Newton-type method to solve a system of nonlinear equations [9], Newton's method with accelerated convergence using trapezoidal quadrature [10], a fourth order method of undetermined coefficients [11], one derivative and two function evaluations [12], Newton's method using fifth order quadrature formulas [13], and a sixth order rule using Midpoint, Trapezoidal, and Simpson quadrature [14]. Newton-Raphson and its variants use the derivative of the function; in the quadrature based methods, the derivative is handled through integration formulas. The new three-way hybrid algorithm presented here does not use any such variations, quadrature, or the method of undetermined coefficients, and still it competes with all these algorithms.
For these reasons, a two-way blended algorithm was designed and implemented as a blend of the Bisection and Regula Falsi algorithms; it does not take advantage of the differentiability of the function [15]. Most of the time, however, the equations involve differentiable functions. To take advantage of differentiability, we design a three-way hybrid algorithm that blends three algorithms: Bisection, False Position, and Newton-Raphson. The hybrid algorithm is a new single-pass iterative approach. The method does not use a predictor-corrector technique, but predicts a better estimate in each iteration. The new algorithm outperforms the False Position algorithm and the Newton-Raphson algorithm. Table 1 is a listing of all the methods used for the test cases. It is also confirmed [Tables 2-5] that it outperforms the quadrature based [7,13,14], undetermined coefficients based [8], and decomposition based [16] algorithms in terms of the number of function evaluations per iteration, the overall number of iterations, the computational order of convergence (COC), and the efficiency index (EFF). This hybrid root finding algorithm performs at most as many iterations as these cited methods. The Bisection and Regula Falsi algorithms require only continuity and no derivatives, and the algorithm is guaranteed to converge to a root. The hybrid algorithm utilizes the best of the three techniques. The theoretical and empirical evidence shows that the computational complexity of the hybrid algorithm is considerably less than that of the classical algorithms. Several functions cited in the literature are used to confirm the simplicity, efficiency, and performance of the proposed method. The resulting iteration counts are compared with those of the existing iterative methods in Table 2.
Even though the classical methods have been developed and used for decades, enhancements are still being made to improve their performance. A method may perform better than other methods on one dataset and produce inferior results on another; the example above shows that different methods have their own strengths and weaknesses. A dynamic new hybrid algorithm is presented here that takes advantage of the best of the Bisection, False Position, and Newton-Raphson methods to locate the roots, independent of Dekker, Brent, RQI, and the 3rd-6th order methods. Its computational efficiency is validated by comparing it with the existing methods via complexity analysis and empirical evidence. This new hybrid algorithm outperforms all the existing algorithms; see Section 4 for empirical outcomes validating its performance. This paper is organized as follows. Section 2 describes the background methods supporting the new algorithm. Section 3 presents the new algorithm. Section 4 gives an experimental analysis using a multitude of examples from the literature, along with results and comparisons with published findings. Section 5 covers the complexity of computations. Section 6 is the conclusion, followed by the references.

Background, Definitions
To review the literature, we briefly describe methods for (1) root approximation, (2) error calculation and error tolerance, and (3) algorithm termination criteria. There is no single optimal algorithm for root approximation. Normally, we look at the worst-case scenario. The order of complexity does not tell the complete, detailed outcome; the computational outcome may depend on the implementation details, the domain, the tolerance, and the function. In any case, to compare different algorithms accomplishing the same task, we use the same function, the same tolerance, and the same termination criteria to justify the superiority of one algorithm over another. When we are faced with competing choices, the simplest one is often the most accurate; this is particularly true in this case. We provide a new algorithm that is simpler and outperforms these methods. For background, we briefly refer to two types of methods: (1) those requiring only continuous functions, and (2) those requiring differentiable functions, along with the desired order of derivative, in their formulations.
There are two types of problems: (1) continuity based, with no derivative requirement, such as Bisection, False Position, and their extensions; and (2) derivative based, such as Newton-Raphson and its variations. For simulations, we use the error tolerance ε = 0.0000001, the combined tolerance-and-iteration termination criterion (|x_k − x_k−1| + |f(x_k)|) < ε for a function f, and an upper bound of 100 on the number of iterations. Since the solution is obtained by iterative methods, convergence is defined as follows. Definition (Convergence) [10,12,17]. Let x_n, α ∈ R, n ≥ 0. The sequence {x_n} is said to converge to α if lim_{n→∞} |x_n − α| = 0. Definition (Order of Convergence). Let x_n, α ∈ R, n ≥ 0, and let the sequence {x_n} converge to α.
In addition [12,18], if there exists a constant C (0 < C < ∞) and an integer p ≥ 1 such that lim_{n→∞} |x_{n+1} − α| / |x_n − α|^p = C, then the sequence {x_n} is said to converge to α with convergence order p, and C is the asymptotic error constant [10,12,18,19]. Since x_n = α + e_n, the error equation becomes e_{n+1} = C e_n^p + O(e_n^{p+1}). If p = 1, 2, or 3, the convergence is called linear, quadratic, or cubic, respectively.
These theoretical criteria do not take into consideration how much computation the function evaluations require at each step. The order of convergence p can be approximated by the computational order of convergence (COC). Definition (Computational Order of Convergence). Suppose three iterates x_{n−1}, x_n, x_{n+1} are close to the root α; then, the order of convergence is approximated by [10,12,14] COC ≈ ln(|x_{n+1} − α| / |x_n − α|) / ln(|x_n − α| / |x_{n−1} − α|). Additionally, since α is not known a priori, there are other ways to compute the COC, for example using four iterates [8] instead of three [10,12,17].
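The COC can be estimated numerically. The sketch below is our own Python illustration: it applies the standard three-iterate approximation from the cited references to Newton iterates for a test function of our choosing, and recovers the quadratic order of Newton's method:

```python
import math

def coc(x_prev, x_curr, x_next, alpha):
    """Computational order of convergence from three successive iterates near the root alpha."""
    return (math.log(abs(x_next - alpha) / abs(x_curr - alpha))
            / math.log(abs(x_curr - alpha) / abs(x_prev - alpha)))

# Newton iterates for f(x) = x**2 - 2 approach sqrt(2) quadratically
f, df = lambda x: x * x - 2.0, lambda x: 2.0 * x
xs = [1.0]
for _ in range(4):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
alpha = math.sqrt(2.0)
p = coc(xs[1], xs[2], xs[3], alpha)  # close to 2 for a quadratically convergent method
```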
Definition (Efficiency Index). If p is the order of convergence and q is the number of function evaluations per iteration, then p^(1/q) is called the efficiency index and is denoted by EFF [10,12].
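For instance, a one-line Python illustration of the definition (the example orders and evaluation counts are ours):

```python
def efficiency_index(p, q):
    """EFF = p**(1/q): order of convergence p, q function evaluations per iteration."""
    return p ** (1.0 / q)

newton = efficiency_index(2, 2)  # classical Newton: order 2, two evaluations (f and f')
sixth = efficiency_index(6, 4)   # a sixth order method with four evaluations per iteration
```

This shows why a higher order method can still win on EFF despite needing more evaluations per iteration.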

Bisection Method
The Bisection method (1) is a binary approach for eliminating subintervals, (2) is virtually a binary search, and (3) converges slowly to the root. Without regard to the function, an upper bound on the root error is fixed at each iteration and it can be computed a priori. By specifying the tolerance on root error, the upper bound on the number of iterations is predetermined.
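The a priori bound on the iteration count can be computed directly. The following is a minimal Python sketch (the interval and tolerance are our own example):

```python
import math

def bisection_iteration_bound(a, b, tol):
    """Smallest n with (b - a) / 2**n < tol: an a priori cap on bisection iterations,
    valid for any continuous f with a sign change on [a, b]."""
    return math.ceil(math.log2((b - a) / tol))

n_max = bisection_iteration_bound(0.0, 3.0, 1e-7)  # bound is independent of the function
```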

False Position (Regula Falsi) Method
The poor performance of the Bisection method led to the False Position method, which finds zeros of the function by linear interpolation. False Position uses the estimated location of the root to select the next subinterval. Unfortunately, this method does not perform as well as expected for all functions; see Figure 1 [15] for four types of concavity in the function. Iterations are terminated at ten to keep the plots clean. The figures show that the False Position method can perform poorly compared to the Bisection method.
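A minimal Python sketch of the method (our own illustration; the test function is ours, and the stopping rule follows the combined criterion stated earlier in this section):

```python
def false_position(f, a, b, tol=1e-7, max_iter=100):
    """Regula Falsi: replace the midpoint with the x-intercept of the secant
    through (a, f(a)) and (b, f(b))."""
    fa, fb = f(a), f(b)
    x_prev = a
    for n in range(1, max_iter + 1):
        x = b - fb * (b - a) / (fb - fa)     # secant line intersection with the x-axis
        fx = f(x)
        if abs(x - x_prev) + abs(fx) < tol:  # combined stopping criterion
            return x, n
        if fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
        x_prev = x
    return x, max_iter

root, iters = false_position(lambda x: x**3 - 2.0, 1.0, 2.0)
```

On this convex function one endpoint stays fixed, so convergence is only linear; this is the stagnation the figures illustrate.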

Dekker's Method
Dekker's algorithm [2] was designed to circumvent the shortcomings of the slow speed of Bisection and the uncontrolled iteration count of the False Position method. It is a combination of the two methods: Bisection and False Position. First of all, it assumes that f(x) is continuous on [a, b] and f(a)f(b) < 0. It maintains that |f(b)| < |f(a)|; otherwise, a and b are exchanged to ensure that b is the "best" estimate of the approximate root. The algorithm maintains that {b_k} is the sequence of best estimates and that the root-enclosing interval is always [a_k, b_k] or [b_k, a_k]. Dekker's algorithm maintains three values: b, the current best approximation; c, the previous approximation b_{k−1}; and a, the contrapoint, so that the root lies in the interval [a_k, b_k] or [b_k, a_k]. Both the secant point, s, and the midpoint, m, are computed; the next b_k is s or m, whichever is closer to the previous best estimate. Though it is better than the False Position method, it nonetheless has some roadblocks, which are handled by Brent.
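The selection step can be sketched as follows. This is a simplified Python illustration of a single Dekker step; the full algorithm also swaps a and b to keep b the best estimate and handles stagnation, both of which this sketch omits:

```python
def dekker_step(a, b, c, fa, fb, fc):
    """One simplified Dekker step: b is the best estimate, c the previous one,
    a the contrapoint. Propose the secant point through (b, fb) and (c, fc);
    fall back to the bisection midpoint of [a, b] otherwise."""
    m = (a + b) / 2.0
    if fb != fc:
        s = b - fb * (b - c) / (fb - fc)  # secant through the last two best estimates
    else:
        s = m
    # accept the secant point only if it lies strictly between b and the midpoint m
    return s if min(b, m) < s < max(b, m) else m
```

For example, with f(x) = x² − 2, contrapoint a = 2 and estimates b = 1, c = 2, the secant point 4/3 falls inside (1, 1.5) and is accepted over the midpoint.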

Brent's Algorithm
Brent's algorithm [3] evolved from the failures of Dekker's algorithm and is a step towards its improvement. It is a slight improvement at the cost of extra test computations. It is a combination of four methods: Bisection, False Position, Dekker's method, and reverse (a.k.a. inverse) quadratic interpolation (RQI), described next. Reverse quadratic interpolation is based on the method of Lagrange polynomials and leads to extra calculations; thus, it may take more iterations to circumvent pathological cases. It uses three estimates to derive the next estimate. This algorithm is more robust, and more costly.

Detour to Reverse Quadratic Interpolation
Let a, b, and c be three distinct points, and f(a), f(b), and f(c) the corresponding distinct values of the function f(x); there is a unique quadratic polynomial through the three points (a, f(a)), (b, f(b)), (c, f(c)). The Lagrange form of the polynomial is most suitable for evaluating the polynomial at values of x and, inversely, from values of y. In fact, there is a direct formula for a reverse quadratic interpolating (RQI) polynomial that can be evaluated from values of y.
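The direct formula is the Lagrange polynomial in y, through the swapped points (f(a), a), (f(b), b), (f(c), c), evaluated at y = 0. A Python sketch (assuming the three function values are pairwise distinct, as stated above):

```python
def inverse_quadratic_step(a, b, c, fa, fb, fc):
    """Evaluate the Lagrange polynomial x = P(y) through (fa, a), (fb, b), (fc, c)
    at y = 0, giving the next root estimate."""
    return (a * fb * fc / ((fa - fb) * (fa - fc))
            + b * fa * fc / ((fb - fa) * (fb - fc))
            + c * fa * fb / ((fc - fa) * (fc - fb)))
```

When the function is linear near the root (e.g. y = x − 2 sampled at x = 0, 1, 4), the step returns the root exactly.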
The derivative based methods depend on the initial start point, x_0. The simplest such technique is the Newton-Raphson method, which is slower than its variations; these variations outperform the conventional Newton-Raphson method. They include decomposition based [16], quadrature based, and undetermined coefficients based methods [11,13,14]. These methods are quite complex and detailed; the reader may wish to refer to the full papers for details. For completeness, we describe the iteration formulas to show how these methods iterate to get to the root.

Newton-Raphson (1760)
Let f be differentiable on an open interval containing a and b. By Taylor series expansion of f about b, retaining only the first derivative term, we obtain the linear approximation of order one, f(x) ≈ f(b) + f′(b)(x − b). Assuming the value of b is close to the root of f, setting f(x) = 0 further leads to x = b − f(b)/f′(b), which is the standard Newton-Raphson method. Thus, for a function f, the Newton-Raphson [17] successive estimates for the solution are x_{n+1} = x_n − f(x_n)/f′(x_n). This amounts to two function evaluations (f and f′) for each iteration. However, if we approximate the derivative from previously computed function values, it amounts to one function evaluation for each iteration. It is just a matter of how we count the number of function evaluations; it is of theoretical interest, but really, we have to consider the computational cost of the function, too. In fact, we can write x_{n+1} = x_n − f(x_n)(x_n − x_{n−1})/(f(x_n) − f(x_{n−1})), which results in one new function evaluation, f(x_n), in every iteration. We use this form in the algorithm.
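A minimal Python sketch of the classical derivative form (our own illustration, using the combined stopping criterion from Section 2 and a test function of our choosing):

```python
def newton_raphson(f, df, x0, tol=1e-7, max_iter=100):
    """Classical Newton-Raphson: follow the tangent line at x_n to its x-intercept."""
    x = x0
    for n in range(1, max_iter + 1):
        fx = f(x)
        x_new = x - fx / df(x)
        if abs(x_new - x) + abs(fx) < tol:  # combined stopping criterion
            return x_new, n
        x = x_new
    return x, max_iter

root, iters = newton_raphson(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.0)
```

The quadratic convergence is visible in the iteration count: a handful of steps reach machine-level accuracy.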

Oghovese-John Method (2014)
The Oghovese-John sixth order method approximates the iteration estimates by using the average of the Midpoint and Simpson quadrature rules and is shown to have approximation accuracy of order six.
The expression for Oghovese-John's estimates [7] is based on the average of the Midpoint and Simpson 3/8 rules; for n ≥ 0, the iterates x_{n+1} are defined in terms of an intermediate value v_n instead of x_n. Convergence and stopping criteria are specified with an error tolerance and an upper bound on iterations; in practice, the algorithm terminates considerably earlier than either termination condition is reached.
The derivation of the Oghovese-Emunefe [7] method uses the average of the Midpoint and Simpson quadrature rules; we briefly describe it in Appendix A.
The following variations are very interesting and challenging. They give a historical perspective [18-24] of the innovations developed to improve the Newton-Raphson method and its variants for the solution of non-linear equations. We will compare the hybrid algorithm with these. We describe their iteration recurrence relations without derivations, deferring the derivations to the references for simplicity.

Grau-Diaz-Barero (2006)
Grau et al. [18] introduced a new method by correcting Ostrowski's method [12]. The method uses one derivative and two function evaluations per iteration and has sixth order convergence.

Sharma-Guha (2007)
Sharma and Guha [22] modified and parameterized Ostrowski's method [12], with four function evaluations and sixth order convergence. Their formula involves problem dependent parameters a and b, with b = a − 2.

Khattri-Abbasbandy (2011)
Khattri-Abbasbandy [11] introduced an iterative method having fourth order convergence using three function evaluations per iteration, derived with the method of undetermined coefficients.

Fang-Chen-Tian-Sun-Chen (2011)

Fang et al. [23] modified Newton's method with five function evaluations and produced a sixth order convergence method having the following iterations:
z_n = y_n − f(y_n)/(b_n f(y_n) + f′(y_n)) (16)
x_{n+1} = z_n − f(z_n)/(a_n f(z_n) + f′(z_n)) (17)
where a_n, b_n, c_n are real numbers chosen in such a way that 0 ≤ |a_n|, |b_n|, |c_n| ≤ 1 and sign(a_n f(x_n)) = sign(f′(x_n)), sign(b_n f(y_n)) = sign(f′(y_n)), sign(c_n f(z_n)) = sign(f′(z_n)), for n = 1, 2, . . . , where sign(x) is the sign function.

Jayakumar (2013)
Jayakumar generalized the Simpson-Newton method [14] for solving nonlinear equations. His algorithm has third order convergence and four function evaluations per iteration.
Let a be a root of the function (1), and suppose that x_{n−1}, x_n, x_{n+1} are three successive iterates close to the root a. This is a third order convergence formulation.
Recall the Simpson 1/3 iteration formula, Equation (21), in which the arithmetic mean is implicit; there is no end in sight to the extensions. Replacing the arithmetic mean in Equation (21) with the harmonic mean yields the Harmonic-Simpson-Newton (HSN) method.

Nora-Imran-Syamsudhuha (2018)
This is a very interesting and complex analysis paper [8]; the order of convergence is still six. In this article, the authors present a combination of the Khattri-Abbasbandy [11] method with Newton's method using the principle of undetermined coefficients.
Furthermore, using the fourth order method derived by Khattri and Abbasbandy, they propose an iterative method with m = a(b² − 3ab + a²), a = z_n − x_n, b = y_n − x_n.

Weerakoon-Fernando (2000)
Weerakoon-Fernando [10] used trapezoidal quadrature to achieve third order convergence. Writing the Newton predictor as y_n = x_n − f(x_n)/f′(x_n) and approximating the integral of f′ by the trapezoid rule, ∫ f′(t) dt ≈ (x − x_n)(f′(x_n) + f′(x))/2, the Weerakoon-Fernando formula becomes x_{n+1} = x_n − 2f(x_n)/(f′(x_n) + f′(y_n)). This is also called the arithmetic mean formula. If the arithmetic mean is replaced with the harmonic mean, the resulting variation is called the harmonic mean formula.
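The arithmetic mean formula can be sketched as follows (our own Python illustration of the Weerakoon-Fernando iteration; the test function and stopping rule are ours):

```python
def weerakoon_fernando(f, df, x0, tol=1e-7, max_iter=100):
    """Third order variant: average the slopes at x_n and at the Newton
    predictor y_n (the trapezoidal quadrature of f')."""
    x = x0
    for n in range(1, max_iter + 1):
        fx = f(x)
        y = x - fx / df(x)                       # Newton predictor
        x_new = x - 2.0 * fx / (df(x) + df(y))   # arithmetic mean of the two derivatives
        if abs(x_new - x) + abs(fx) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

root, iters = weerakoon_fernando(lambda x: x**2 - 2.0, lambda x: 2.0 * x, 1.0)
```

It costs one extra derivative evaluation per iteration compared to Newton, in exchange for third order convergence.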

Edmond Halley (1995)
Halley [24] improved Newton's method. Halley's method requires that the function be C³, i.e., three times continuously differentiable, and its root iterations have cubic convergence. The function f(x) is expanded to a quadratic approximation in two ways, and the second degree term is cancelled to arrive at a linear formula: multiplying (28) by (x_{n+1} − x_n)f″(x_n) and (29) by 2f′(x_n), and subtracting the resulting (28) from (29), we get x_{n+1} = x_n − 2f(x_n)f′(x_n)/(2f′(x_n)² − f(x_n)f″(x_n)). Halley's algorithm has convergence of order three. This completes our discussion of the derivative based formulas.
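Halley's iteration can be sketched as follows (our own Python illustration of the standard Halley update; the test function is ours):

```python
def halley(f, df, d2f, x0, tol=1e-7, max_iter=100):
    """Halley's method: cubic convergence using f, f', and f''."""
    x = x0
    for n in range(1, max_iter + 1):
        fx, dfx = f(x), df(x)
        x_new = x - 2.0 * fx * dfx / (2.0 * dfx**2 - fx * d2f(x))
        if abs(x_new - x) + abs(fx) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

root, iters = halley(lambda x: x**2 - 2.0, lambda x: 2.0 * x, lambda x: 2.0, 1.0)
```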
In summary, the methods defined in Sections 2.5-2.14 are several variations of the Newton-Raphson method. Table 1 is a description of the symbols used in Tables 2-5 to identify the methods: MN_R stands for the Newton-Raphson method; MOJmis for the Oghovese-John method (it uses the Midpoint-Simpson 1/3 rule); MGDB for the Grau-Diaz-Barero method; MSG for the Sharma-Guha method; MKA for the Khattri-Abbasbandy method (which uses the method of undetermined coefficients); FCTSC for the Fang-Chen-Tian-Sun-Chen method; MJKms and MJKhs for the two versions of the Jayakumar method (the arithmetic mean with the Simpson 1/3 rule and the harmonic mean with the Simpson 1/3 rule, respectively); MNIS for the Nora-Imran-Syamsudhuha method (which extends the Khattri-Abbasbandy method); NWFhm3 for the Weerakoon et al. method using the harmonic mean; NHAL for the Edmond Halley method using second derivatives; and Hybridn for the hybrid method using the Bisection, False Position, and Newton methods. The hybrid method is the three-way method described next in Section 3.
Table 1. Symbols of the methods used in Tables 2-5 with Efficiency Index (EFF).

[Table 1 columns: Method, Year, Name in Tables 2-5; the entries are the methods listed above.]

Three Way Hybrid Algorithm
The three-way algorithm is an extension of the two-way algorithm [15] used for continuous functions; it also takes advantage of the differentiability of the function. First, the Bisection and False Position methods compete for a more accurate approximate root; then, the algorithm computes the smaller enclosing interval for it. In the next step, the better of that approximate root and the Newton root is selected, yielding the best of the three methods. The algorithm is as follows, and the Matlab code for the hybrid algorithm is given in Appendix B.
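A simplified Python sketch of the idea is given below (the authors' Matlab implementation is in Appendix B; the candidate selection and interval updates here are our own simplification and may differ in detail from the actual algorithm):

```python
def hybrid_root(f, a, b, df=None, tol=1e-7, max_iter=100):
    """Three-way hybrid sketch: each iteration compares the bisection midpoint,
    the false-position secant point, and (if df is given) a Newton step,
    keeps the best estimate, and shrinks the bracketing interval around it."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "the interval must bracket a root"
    x_prev = a
    for n in range(1, max_iter + 1):
        candidates = [(a + b) / 2.0,                  # bisection midpoint
                      b - fb * (b - a) / (fb - fa)]   # false-position secant point
        if df is not None:
            s = candidates[1]
            t = s - f(s) / df(s)                      # Newton step from the secant point
            if a < t < b:                             # keep Newton only inside the bracket
                candidates.append(t)
        x = min(candidates, key=lambda c: abs(f(c)))  # best of the competing estimates
        fx = f(x)
        if fx == 0.0 or abs(x - x_prev) + abs(fx) < tol:
            return x, n
        if fa * fx < 0:                               # update the bracketing interval
            b, fb = x, fx
        else:
            a, fa = x, fx
        x_prev = x
    return x, max_iter

root, iters = hybrid_root(lambda x: x**3 - 2.0, 1.0, 2.0, df=lambda x: 3.0 * x**2)
```

Without df, the sketch degenerates to a two-way blend of Bisection and False Position; with df, the Newton candidate typically wins once the bracket is small, while the bracket keeps convergence guaranteed.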

Convergence of Hybrid Algorithm
The new hybrid algorithm is an improvement over the Newton-Raphson algorithm blended with two further algorithms, Bisection and False Position, and is independent of any other algorithm. The new algorithm differs from all the previous algorithms by tracking the best root approximation in addition to the best bracketing interval. The number of iterations needed to find a root depends on the criteria used to determine the root accuracy. The complexity of the new algorithm is better than that of Bisection: it uses fewer iterations than the Bisection algorithm while keeping the root bracketed, whereas other algorithms may use more iterations than the Bisection method. If f(x) is used in the stopping criterion, then the complexity depends on the function as well as the method, because we want to ensure that the function value at the estimate is within tolerance.
If relative error is used, it does not work for every function, because it gets stuck when the relative error is constant. Most of the time, the absolute error ε_s is used as the stopping criterion. For the Bisection method on the interval [a, b], the upper bound n_b(ε_s) on the number of iterations can be found from (b − a)/2^n < ε_s and is n_b(ε_s) = lg((b − a)/ε_s). Since e_{n+1} = (1/2)e_n, it has linear convergence. For the False Position method, the count depends on the location of the root near an endpoint of the bracketing interval and on the convexity of the function. Thus, the bound n_f(ε_s) on the number of iterations for the False Position method cannot be predetermined; it can be less, n_f(ε_s) < n_b(ε_s) = lg((b − a)/ε_s), or greater, n_f(ε_s) > n_b(ε_s). The number of iterations, n(ε_s), in the hybrid algorithm is at most min(n_f(ε_s), n_b(ε_s)). The introduction of Newton-Raphson in the hybrid further reduces the complexity of the computations, resulting in fewer iterations. The Newton-Raphson algorithm, of quadratic order of convergence, is given in Section 2.5. The convergence analysis of Newton's method is standard; for completeness, it is as follows.
Proposition. Let α be a simple real root of a sufficiently differentiable function f on an open interval (a, b) of the real line. If the initial estimate x_0 is sufficiently close to α, then the Newton-Raphson method as defined in Section 2.5 has second order convergence.
Proof. It is standard; for completeness, we derive the error formula formally. If α is the root and x_n the approximation, then f(α) = 0 and x_n = α + e_n with error e_n. Expanding f about x_n and evaluating at α, 0 = f(α) = f(x_n) − f′(x_n)e_n + (1/2)f″(ξ_n)e_n² for some ξ_n between α and x_n. Since α is a simple root, f′(α) ≠ 0, so f′(x_n) ≠ 0 near α. Dividing by f′(x_n), we get f(x_n)/f′(x_n) = e_n − (f″(ξ_n)/(2f′(x_n)))e_n². From the Newton-Raphson recurrence, e_{n+1} = x_{n+1} − α = e_n − f(x_n)/f′(x_n) = (f″(ξ_n)/(2f′(x_n)))e_n². Hence, the order of convergence is quadratic.
Since the hybrid algorithm is a combination of three algorithms, no derivatives are involved in the Bisection and False Position steps, and Newton's method can also use the secant approach to avoid derivatives in the computations. The convergence orders of the three methods are one, one, and two; combined, they yield a composite order of convergence. The computational order of convergence is computed, and it competes with that of the other sixth order methods; thus, the COC indicates that the order of convergence is closer to sixth order (Table 5). Even if the order of convergence is taken as five or four, the efficiency index remains competitive. We safely assume fourth order for the hybrid algorithm, which is supported by Tables 2-5; the efficiency index is computed with convergence order four and three function evaluations per iteration, as shown in Section 2.5. The complexity of the new algorithm is far lower than that of the higher order algorithms, as validated by the experimental computations based on the number of iterations; see Tables 2-5. This algorithm guarantees the successful resolution of the roots of non-linear equations. Other variants of Newton-Raphson converge with 3rd-6th order, but they are not guaranteed to converge without additional constraints on the functions and an initial guess close to the root. The hybrid algorithm differs from all the previous algorithms by tracking the best bracketing interval in addition to the best root approximation. Instead of the brute force application of the Bisection, False Position, or Newton-Raphson method alone, we select the most relevant method at each step and use its value to construct the approximate root and the bracketing interval. Thus, we design a new hybrid algorithm that performs better than the Bisection, False Position, and Newton-Raphson methods.
Since an equation can have several roots, the user should isolate an interval enclosing a single root; otherwise, different algorithms will end up reaching different roots, making it difficult to compare them. The new algorithm also performs better than the other variants cited above for continuous functions, as long as f(a)·f(b) < 0 at the end points of the defining interval. At each iteration, the root estimate is computed from the midpoint, the secant point, and, if the function is differentiable, the tangent point; the best of the three is selected as the next approximation. In addition, the common enclosing interval is updated to the new, smaller interval. This eliminates unnecessary iterations in any single method. There is no requirement of differentiability; the derivative is used only if available, as required by Newton-Raphson and its variants. This is a simple, reliable, and fast algorithm.

Empirical Results of Simulations
The new hybrid algorithm has been tested to confirm that it performs better than the other existing methods in terms of the number of iterations required for the approximations, the computational order of convergence, and the efficiency index for the test cases. There are various types of functions: polynomial, trigonometric, exponential, rational, logarithmic, and heterogeneous combinations of these.
The methods defined in Sections 2.5-2.14 are several variations of the Newton-Raphson method; the symbols used to identify the methods are described in Table 1. For comparison with the hybrid algorithm, the tables have several parameters: the function, initial value, order of convergence, number of function evaluations per iteration (nofe), number of iterations (NIter), overall number of function evaluations (NOFE), computational order of convergence (COC), efficiency index (EFF), root, error between the last two iterations, and function value. The hybrid algorithm is labelled Hybridn; [a, b] is its interval of definition, and a is used as the initial start value for all the other algorithms. All tables show that the hybrid algorithm performs better with respect to the number of iterations, number of function evaluations, efficiency index, and computational order of convergence. The hybrid algorithm computes the root with fewer than or as many iterations as the other algorithms, as evidenced in the tables.
We have compared the hybrid algorithm with most of the functions used in the literature. These methods were extensively tested on all the functions in Table 2.
Since the results are multi-dimensional, each two-dimensional table relates one specific feature to all the other parameters. The 22 functions have been tested on all 10 parameters and all 12 methods. Table 2 compares one parameter, NOFE, for all algorithms on all functions. Instead of 22 display tables, the next three tables compare all parameters using a different function each for NOFE, COC, and EFF. Table 3 compares NOFE for all algorithms and all parameters with one function. Table 4 compares COC for all algorithms and all parameters with another function. Table 5 compares EFF for all algorithms and all parameters with yet another function.