Article

A Comparative Study among New Hybrid Root Finding Algorithms and Traditional Methods

by
Elsayed Badr
1,2,*,
Sultan Almotairi
3,* and
Abdallah El Ghamry
4
1
Scientific Computing Department, Faculty of Computers & Artificial Intelligence, Benha University, Benha 13518, Egypt
2
Higher Technological Institute, 10th of Ramadan City, Embassies District, Nasr City, Cairo 11765, Egypt
3
Department of Natural and Applied Sciences, Community College Majmaah University, Al-Majmaah 11952, Saudi Arabia
4
Computer Science Department, Faculty of Computers & Artificial Intelligence, Benha University, Benha 13518, Egypt
*
Authors to whom correspondence should be addressed.
Mathematics 2021, 9(11), 1306; https://doi.org/10.3390/math9111306
Submission received: 29 April 2021 / Revised: 24 May 2021 / Accepted: 3 June 2021 / Published: 7 June 2021
(This article belongs to the Special Issue Dynamical Systems in Engineering)

Abstract:
In this paper, we propose a novel blended algorithm that has the advantages of the trisection method and the false position method. Numerical results indicate that the proposed algorithm outperforms the secant, the trisection, the Newton–Raphson, the bisection and the regula falsi methods, as well as the hybrid of the last two methods proposed by Sabharwal, with regard to the number of iterations and the average running time.

1. Introduction

Many fields (mathematics, computer science, dynamical systems in engineering, agriculture, biomedicine, etc.) require finding the roots of non-linear equations. When an analytic solution is not available, we try to determine a numerical one. No single algorithm solves every non-linear equation efficiently.
Methods for solving such problems fall into several classes: pure, metaheuristic and blended methods. Pure methods include classical techniques such as the bisection method, the false position method, the secant method and the Newton–Raphson method. Metaheuristic methods apply metaheuristic algorithms such as particle swarm optimization, firefly and ant colony to root finding, whereas blended methods are hybrid combinations of two classical methods.
More details about the classical methods can be found in [1,2,3,4], and about the bisection and Newton–Raphson methods in particular in [5,6,7,8]. Other problems, such as minimization and target shooting, are discussed in [9,10,11,12,13,14].
Sabharwal [15] proposed a novel blended method that is a dynamic hybrid of the bisection and false position methods. He showed that his algorithm outperformed the pure methods (bisection and false position), and also that it outperformed the secant and Newton–Raphson methods with respect to the number of iterations. However, Sabharwal did not analyze his algorithm with respect to running time; he considered the number of iterations only. A method may need few iterations yet have a long execution time, and vice versa. For this reason, both the number of iterations and the running time are important metrics for evaluating the algorithms. Unfortunately, most researchers have not paid attention to the details of measuring the running time. Furthermore, they did not discuss, let alone answer, the following question: why does the running time change from one run to another in the software package used?
The genetic algorithm was used to compare the classical methods [9,10,11] based on the fitness ratio of the equations. The authors deduced that the genetic algorithm is more efficient than the classical algorithms for solving the functions x2 − x − 2 [10] and x2 + 2x − 7 [11]. Mansouri et al. [12] presented a new iterative method to determine the fixed point of a nonlinear function by combining ideas from the artificial bee colony algorithm [13] and the bisection method [14]. They illustrated this method on four benchmark functions and compared the results with those of other methods, such as artificial bee colony (ABC), particle swarm optimization (PSO), genetic algorithm (GA) and firefly algorithms.
For more details about the classical methods, hybrid methods and the metaheuristic approaches, the reader can refer to [16,17].
In this work, we propose a novel blended algorithm that has the advantages of the trisection method and the false position method. The computational results show that the proposed algorithm outperforms the trisection and regula falsi methods. The introduced algorithm also outperforms the bisection, Newton–Raphson and secant methods with respect to the number of iterations and the average running time. Finally, the implementation results show the superiority of the proposed algorithm over the blended bisection and false position algorithm proposed by Sabharwal [15]. The results presented in this paper open the way for new methods that compete with the traditional methods and may replace them in software packages.
The rest of this work is organized as follows: The pure methods for determining the roots of non-linear equations are introduced in Section 2. The blended algorithms for finding the roots of non-linear equations are presented in Section 3. In Section 4, the numerical results analysis and statistical test among the pure methods and the blended algorithms are provided. Finally, conclusions are drawn in Section 5.

2. Pure Methods

In this section, we introduce five pure methods for finding the roots of non-linear equations: the bisection method, the trisection method, the false position method, the secant method and the Newton–Raphson method. We contribute an implementation of the trisection algorithm with equal subintervals that outperforms the bisection algorithm on fifteen benchmark equations, as shown in Section 4. The trisection algorithm also partially outperforms the false position, secant and Newton–Raphson methods, as shown in Section 4.

2.1. Bisection Method

We assume that the function f(x) is defined and continuous on the closed interval [a, b], and that the signs of f(x) at the endpoints a and b differ. We divide the interval [a, b] into two halves at x = (a + b)/2. If f(x) = 0, then x is a solution of the equation f(x) = 0. Otherwise (f(x) ≠ 0), we choose the subinterval [a, x] or [x, b] whose endpoint values of f(x) have different signs. We repeat dividing the new subinterval into two halves until we reach the exact solution x where f(x) = 0, or an approximate solution with |f(x)| ≤ eps, where the tolerance eps is a value close to zero, as used in Algorithm 1 and the other algorithms.
Algorithm 1. Bisection(f, a, b, eps).
     Input: The function f(x),
         The interval [a, b] where the root lies in,
         The absolute error (eps).
     Output: The root (x),
         The value of f(x)
         Numbers of iterations (n),
         The interval [a, b] where the root lies in
       n := 0
       while true do
       n := n + 1
       x := (a + b)/2
        if |f(x)| <= eps
         return x, f(x), n, a, b
       else if f(a) * f(x) < 0
         b := x
       else
         a := x
     end (while)
The size of the interval is halved at each iteration. Therefore, the value eps is determined from the following formula:
eps = (b − a)/2^n (1)
where n is the number of iterations. From (1), the number of iterations is found by
n = log_2((b − a)/eps) (2)
The bisection method is a bracketing method, so it brackets the root in the interval [a, b], and at each iteration, the size of the interval [a, b] is halved. Accordingly, it reduces the error between the approximation root and the exact root for any iteration. On the other hand, the bisection method works quickly if the approximate root is far from the endpoint of the interval; otherwise, it needs more iterations to reach the root [17].
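For concreteness, the bisection loop of Algorithm 1 can be transcribed into Python as a short sketch (the helper name `bisection` is an illustrative choice; the driver uses problem P1 from Table 1, x^2 − 3 on [1, 2]):

```python
def bisection(f, a, b, eps=1e-14):
    """Halve [a, b] until |f(x)| <= eps; f(a) and f(b) must differ in sign.
    Returns (root, f(root), number of iterations)."""
    n = 0
    while True:
        n += 1
        x = (a + b) / 2
        fx = f(x)
        if abs(fx) <= eps:
            return x, fx, n
        if f(a) * fx < 0:
            b = x  # the sign change, and hence the root, lies in [a, x]
        else:
            a = x  # the sign change lies in [x, b]

# Problem P1 from Table 1: f(x) = x^2 - 3 on [1, 2]
root, fval, iters = bisection(lambda x: x**2 - 3, 1.0, 2.0)
```

The returned interval bounds are omitted here for brevity; the bracketing invariant (a sign change inside [a, b]) is what guarantees convergence.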

Advantages and Disadvantages of the Bisection Method

The bisection method is simple to implement, and its convergence is guaranteed. On the other hand, it has a relatively slow convergence, it needs different signs for the function values of the endpoints, and the test for checking this affects the complexity in the number of operations.

2.2. Trisection Method

The trisection method is like the bisection method, except that it divides the interval [a, b] into three subintervals instead of two. Algorithm 2 divides the interval [a, b] into three equal subintervals and searches for the root in the subinterval whose endpoint values of the function have different signs.
If the condition of termination is true, then the iteration has finished its task; otherwise, the algorithm repeats the calculations.
In order to divide the interval [a, b] into three equal parts by the points x1 and x2, we need to know the locations of x1 and x2, as follows.
As shown in Figure 1, since
x1 − a = b − x2 (3)
x2 − x1 = x1 − a (4)
by solving Equations (3) and (4), we get
x1 = (2a + b)/3
and
x2 = (a + 2b)/3
The size of the interval [a, b] decreases to a third with each repetition. Therefore, the value eps is determined from the following formula:
eps = (b − a)/3^n (5)
where n is the number of iterations. From (5), the number of iterations is found by
n = log_3((b − a)/eps) (6)
When we compare Equations (2) and (6), we conclude that the number of iterations of the trisection algorithm is less than that of the bisection algorithm. We might think that the trisection algorithm is therefore better, since it requires fewer iterations. However, one iteration of the trisection algorithm may have a longer execution time than one iteration of the bisection algorithm. Therefore, we consider both the execution time and the number of iterations to evaluate the different algorithms.
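As a quick check of this comparison, Equations (2) and (6) can be evaluated directly; the interval [1, 2] and eps = 10^−14 below are illustrative values:

```python
import math

a, b, eps = 1.0, 2.0, 1e-14

# Equation (2): iterations needed by the bisection method
n_bisection = math.ceil(math.log((b - a) / eps, 2))
# Equation (6): iterations needed by the trisection method
n_trisection = math.ceil(math.log((b - a) / eps, 3))

# Trisection needs fewer iterations (here 30 versus 47),
# although each trisection iteration performs more work.
```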
Algorithm 2. Trisection(f, a, b, eps).
   Input: The function f(x),
       The interval [a, b] where the root lies in,
       The absolute error (eps).
   Output: The root (x),
       The value of f(x)
       Numbers of iterations (n),
       The interval [a, b] where the root lies in
   n := 0
   while true do
      n := n + 1
      x1 := (b + 2*a)/3
      x2 := (2*b + a)/3
      if |f(x1)| < |f(x2)|
       x := x1
      else
       x := x2
      if |f(x)| <= eps
       return x, f(x), n, a, b
      else if f(a) * f(x1) < 0
       b := x1
      else if f(x1) * f(x2) < 0
       a := x1
       b := x2
      else
       a := x2
   end (while)
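A Python transcription of Algorithm 2 might look as follows (a sketch; the driver call on problem P4, x^2 − x − 2 on [1, 4], reproduces the single-iteration result reported in Table 2, since x1 = (2·1 + 4)/3 = 2 is the exact root):

```python
def trisection(f, a, b, eps=1e-14):
    """Split [a, b] into three equal parts at x1, x2 and keep the part
    containing a sign change.  Returns (root, f(root), iterations)."""
    n = 0
    while True:
        n += 1
        x1 = (2 * a + b) / 3
        x2 = (a + 2 * b) / 3
        x = x1 if abs(f(x1)) < abs(f(x2)) else x2
        if abs(f(x)) <= eps:
            return x, f(x), n
        if f(a) * f(x1) < 0:
            b = x1         # root in [a, x1]
        elif f(x1) * f(x2) < 0:
            a, b = x1, x2  # root in [x1, x2]
        else:
            a = x2         # root in [x2, b]

# Problem P4 from Table 1: the first trisection point is the exact root.
root, fval, iters = trisection(lambda x: x**2 - x - 2, 1.0, 4.0)
```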

Advantages and Disadvantages of the Trisection Method

The trisection method has the same advantages and disadvantages as the bisection method, and in addition it is faster, as shown in Tables 1–9.

2.3. False Position (Regula Falsi) Method

There is no unique method suitable for finding the roots of all nonlinear functions; each method has advantages and disadvantages. The false position method is a dynamic and fast method when the function is close to linear. The function f(x), whose root lies in the interval [a, b], must be continuous, and the values of f(x) at the endpoints of [a, b] must have different signs. The false position method uses the two endpoints of the interval [a, b] as initial values (r0 = a, r1 = b). The line connecting the two points (r0, f(r0)) and (r1, f(r1)) intersects the x-axis at the next estimate, r2. The successive estimates rn are determined from the following relationship:
rn = rn−1 − f(rn−1)·(rn−1 − rn−2)/(f(rn−1) − f(rn−2)) (7)
for n ≥ 2.
Remark: The regula falsi method is very similar to the bisection method. However, the next iteration point is not the midpoint of the interval but the intersection of the x-axis with a secant through (a, f(a)) and (b, f(b)).
Algorithm 3 uses the relation (7) to get the successive approximations by the false position method.
Algorithm 3. False Position(f, a, b, eps).
   Input: The function (f),
       The interval [a, b] where the root lies in,
       The absolute error (eps).
   Output: The root (x),
       The value of f(x)
       Numbers of iterations (n),
       The interval [a, b] where the root lies in
   n := 0
   while true do
        n := n + 1
        x := a − (f(a)*(b − a))/(f(b) − f(a))
       if |f(x)| <= eps
        return x, f(x), n, a, b
       else if f(a) * f(x) < 0
        b := x
       else
        a := x
   end (while)
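Algorithm 3 can be sketched in Python as follows (problem P1 from Table 1, x^2 − 3 on [1, 2], serves as an illustrative driver):

```python
def false_position(f, a, b, eps=1e-14):
    """Regula falsi: replace the endpoint whose sign matches f(x).
    Returns (root, f(root), number of iterations)."""
    n = 0
    while True:
        n += 1
        # Intersection of the chord through (a, f(a)), (b, f(b)) with the x-axis
        x = a - f(a) * (b - a) / (f(b) - f(a))
        fx = f(x)
        if abs(fx) <= eps:
            return x, fx, n
        if f(a) * fx < 0:
            b = x
        else:
            a = x

root, fval, iters = false_position(lambda x: x**2 - 3, 1.0, 2.0)
```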

Advantages and Disadvantages of the Regula Falsi Method

It is guaranteed to converge, and it is fast when the function is linear. On the other hand, the number of iterations needed for convergence cannot be determined in advance, and it is very slow when the function is far from linear.

2.4. Newton–Raphson Method

This method depends on a chosen initial point x0, which plays an important role for the Newton–Raphson method: depending on the choice of x0, the method may converge to the root or diverge. The first estimate is determined from the following relation:
x1 = x0 − f(x0)/f′(x0) (8)
The successive approximations for the Newton–Raphson method can be found from the following relation:
xi+1 = xi − f(xi)/f′(xi) (9)
where f′(xi) is the first derivative of the function f(x) at the point xi.
Algorithm 4 uses the relation (9) to get the successive approximations by the Newton–Raphson method.
Algorithm 4. Newton(f, xi, eps).
    This function implements Newton’s method.
    Input: The function (f),
       An initial root xi,
       The absolute error (eps).
    Output: The root (x),
       The value of f(x)
       Numbers of iterations (n),
    g(x) := f’(x)
     n := 0
     while true do
       n := n + 1
       xi := xi − f(xi)/g(xi)
       if |f(xi)| <= eps
        return xi, f(xi), n
    end (while)
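In Python, Algorithm 4 might be sketched as below. Since convergence is not guaranteed, this sketch adds an explicit iteration cap (`max_iter`, our addition), and the derivative is passed in explicitly rather than computed symbolically:

```python
def newton(f, df, x0, eps=1e-14, max_iter=100):
    """Newton-Raphson iteration of relation (9): x_{i+1} = x_i - f(x_i)/f'(x_i).
    Returns (root, f(root), iterations); raises if the cap is reached."""
    x = x0
    for n in range(1, max_iter + 1):
        x = x - f(x) / df(x)
        if abs(f(x)) <= eps:
            return x, f(x), n
    raise RuntimeError("Newton-Raphson did not converge")

# Problem P1: f(x) = x^2 - 3, f'(x) = 2x, starting from x0 = 2
root, fval, iters = newton(lambda x: x**2 - 3, lambda x: 2 * x, 2.0)
```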

Advantages and Disadvantages of the Newton–Raphson Method

It is very fast compared to other methods, but it sometimes fails, meaning that there is no guarantee of its convergence.

2.5. Secant Method

Just as there is the possibility of the Newton method failing, there is also the possibility that the secant method will fail. The Newton method uses the relation (9) to find the successive approximations, but the secant method uses the following relation:
x i + 1 = x i x i x i 1 f ( x i ) f ( x i 1 ) f ( x i )
Algorithm 5 uses the relation (10) to get the successive approximations by the secant method.
Algorithm 5. Secant(f, a, b, eps).
    This function implements the Secant method.
    Input: The function (f),
       Two initial roots: a and b,
       The absolute error (eps).
    Output: The root (x),
       The value of f(x)
       Numbers of iterations (n),
    n := 0
    while true do
      n := n + 1
       x := b − f(b)*(b − a)/(f(b) − f(a))
      if |f(x)| <= eps
       return x, f(x), n
      a := b
      b := x
    end (while)
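A Python sketch of Algorithm 5 follows, again with an iteration cap added (our addition) because convergence is not guaranteed:

```python
def secant(f, a, b, eps=1e-14, max_iter=100):
    """Secant iteration of relation (10): the chord through the last two
    iterates replaces the derivative used by Newton-Raphson."""
    for n in range(1, max_iter + 1):
        x = b - f(b) * (b - a) / (f(b) - f(a))
        if abs(f(x)) <= eps:
            return x, f(x), n
        a, b = b, x  # keep only the two most recent iterates
    raise RuntimeError("secant method did not converge")

root, fval, iters = secant(lambda x: x**2 - 3, 1.0, 2.0)
```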

Advantages and Disadvantages of the Secant Method

It is very fast compared to other methods, but it sometimes fails, meaning that there is no guarantee of its convergence.

3. Hybrid Algorithms

In this section, instead of pure methods such as the bisection, trisection, false position, secant and Newton–Raphson methods, we propose a new hybrid root-finding algorithm (trisection–false position), which outperforms the (bisection–false position) algorithm proposed by Sabharwal [15].

3.1. Blended Bisection and False Position

Sabharwal [15] proposed a new algorithm that has the advantages of both the bisection and the false position methods. He built a novel hybrid method, Algorithm 6, which overcame the pure methods (bisection and false position).
Algorithm 6. blendBF(f, a, b, eps).
This function implements the blended method of bisection and false position methods.
   Input: The function (f),
         The interval [a, b] where the root lies in,
         The absolute error (eps).
   Output: The root (x), The value of f(x); Numbers of iterations (n),
         The interval [a, b] where the root lies in
   n := 0
   a1 := a
   a2 := a
   b1 := b
   b2 := b
   while true do
      n := n + 1
      xB := (a + b)/2
       xF := a − (f(a)*(b − a))/(f(b) − f(a))
      if |f(xB)| < |f(xF)|
       x := xB
      else
       x := xF
       if |f(x)| <= eps
       return x, f(x), n, a, b
      if f(a)*f(xB) < 0
       b1 := xB
      else
       a1 := xB
      if f(a) * f(xF) < 0
       b2 := xF
      else
        a2 := xF
      a := max(a1, a2);
      b := min(b1, b2)
   end (while)
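Algorithm 6 can be transcribed into Python as follows (a sketch; the driver uses problem P4, x^2 − x − 2 on [1, 4], for which Table 2 reports two iterations):

```python
def blend_bf(f, a, b, eps=1e-14):
    """Blended bisection / false position (Algorithm 6, after Sabharwal [15]):
    compute both candidate points, keep the one with smaller |f|,
    and tighten the bracket using both updates."""
    n = 0
    a1 = a2 = a
    b1 = b2 = b
    while True:
        n += 1
        xB = (a + b) / 2                          # bisection point
        xF = a - f(a) * (b - a) / (f(b) - f(a))   # false-position point
        x = xB if abs(f(xB)) < abs(f(xF)) else xF
        if abs(f(x)) <= eps:
            return x, f(x), n
        if f(a) * f(xB) < 0:
            b1 = xB
        else:
            a1 = xB
        if f(a) * f(xF) < 0:
            b2 = xF
        else:
            a2 = xF
        a = max(a1, a2)  # the tighter of the two lower bounds
        b = min(b1, b2)  # the tighter of the two upper bounds

root, fval, iters = blend_bf(lambda x: x**2 - x - 2, 1.0, 4.0)
```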

Advantages and Disadvantages of the Blended Algorithm

It is guaranteed to converge, and it is more efficient than the classical methods, but it sometimes takes a long time to reach the root.

3.2. Blended Trisection and False Position

We exploit the superiority of the trisection method over the bisection method (as shown in Section 4) in order to present a new hybrid method (Algorithm 7) that outperforms the hybrid method presented by Sabharwal [15]. The blended method (trisection–false position) is based on calculating the secant-line point of the false position method as well as the two points that divide the interval [a, b] in the trisection method, and then choosing the best of them, i.e., the one that converges fastest to the approximate root. The number of iterations n(eps) of the proposed hybrid method is less than or equal to min{nf(eps), nt(eps)}, where nf(eps) and nt(eps) are the numbers of iterations of the false position method and the trisection method, respectively. Algorithm 7 outperforms all the classical methods (Tables 1–9).
Algorithm 7. blendTF(f, a, b, eps).
This function implements the blended method of trisection and false position methods.
   Input: The function (f); The interval [a, b] where the root lies in,
         The absolute error (eps).
   Output: The root (x), The value of f(x), Numbers of iterations (n),
         The interval [a, b] where the root lies in
    n := 0; a1 := a; a2 := a; b1 := b; b2 := b
   while true do
       n := n + 1
       xT1 := (b + 2*a)/3
       xT2 := (2*b + a)/3
        xF := a − (f(a)*(b − a))/(f(b) − f(a))
       x := xT1
        fx := f(xT1)
       if |f(xT2)| < |f(x)|
        x := xT2
       if |f(xF)| < |f(x)|
        x := xF
       if |f(x)| <= eps
        return x, f(x), n, a, b
        if f(a) * f(xT1) < 0
        b1 := xT1
       else if f(xT1) * f(xT2) < 0
        a1 := xT1
        b1 := xT2
       else
        a1 := xT2
        if f(a) * f(xF) < 0
         b2 := xF
        else
         a2 := xF
        a := max(a1, a2); b := min(b1, b2)
   end (while)
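The proposed algorithm can likewise be sketched in Python (the driver uses problem P4, for which Table 2 reports a single iteration):

```python
def blend_tf(f, a, b, eps=1e-14):
    """Proposed blended trisection / false position (Algorithm 7):
    pick the best of the two trisection points and the false-position
    point, then tighten the bracket using both updates."""
    n = 0
    a1 = a2 = a
    b1 = b2 = b
    while True:
        n += 1
        xT1 = (2 * a + b) / 3
        xT2 = (a + 2 * b) / 3
        xF = a - f(a) * (b - a) / (f(b) - f(a))
        # candidate with the smallest |f|, preferring xT1, then xT2, on ties
        x = min((xT1, xT2, xF), key=lambda p: abs(f(p)))
        if abs(f(x)) <= eps:
            return x, f(x), n
        if f(a) * f(xT1) < 0:
            b1 = xT1
        elif f(xT1) * f(xT2) < 0:
            a1, b1 = xT1, xT2
        else:
            a1 = xT2
        if f(a) * f(xF) < 0:
            b2 = xF
        else:
            a2 = xF
        a = max(a1, a2)
        b = min(b1, b2)

root, fval, iters = blend_tf(lambda x: x**2 - x - 2, 1.0, 4.0)
```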

Advantages and Disadvantages of the Blended Algorithm (The Proposed Algorithm)

It is guaranteed to converge, and it is more efficient than the classical methods and the blended algorithm proposed in [15], as shown in Tables 1–9.

4. Computational Study

We present the numerical results of the pure methods (the bisection, trisection, false position, secant and Newton–Raphson methods), in addition to the computational results for the hybrid methods (bisection–false position and trisection–false position). We compare the pure methods and Sabharwal's hybrid method with the proposed hybrid method according to the number of iterations and CPU time. We used fifteen benchmark problems for this comparison, as shown in Table 1. We ran each problem ten times and then computed the average CPU time and the number of iterations.
We used the MATLAB v7.01 software package to implement all the codes. All codes were run under the 64-bit Windows 8.1 operating system on a Core(TM) i5 CPU M 460 @ 2.53 GHz with 4.00 GB of memory.

Dataset and Evaluation Metrics

There are different ways to terminate the numerical algorithms, such as the absolute error (eps) and the number of iterations. In this paper, we used the absolute error (eps = 10^−14) to terminate all the algorithms. A method may need a small number of iterations yet have a long execution time, and vice versa. For this reason, the number of iterations and the running time are both important metrics for evaluating the algorithms. Unfortunately, most researchers have not paid attention to the details of measuring the running time, nor have they discussed or answered the following question: why does the running time change from one run to another with the software package used? Therefore, we ran every algorithm ten times and calculated the average running time, to obtain an accurate measurement and avoid operating-system effects.
In Table 2, the abbreviations AppRoot, Error, LowerB and UpperB are used to denote the approximation root, the difference between two successive roots, lower bound and upper bound, respectively. Table 2 shows the performance of all classical methods and blended algorithms for solving the Problem 4. It is clear that both the trisection and the proposed blended algorithm (trisection-false position) outperformed the other algorithms. Because it is not accurate enough to make a conclusion from one function, we used fifteen benchmark functions (Table 1) to evaluate the proposed algorithm.
Demir [23] proved that the trisection method with k-Lucas numbers works faster than the bisection method. From Table 3 and Table 4 and Figure 2, it is clear that the trisection method is better than the bisection method with respect to the running time for all problems except problem 9. Moreover, the trisection method determined the exact root (2.0000000000000000) of problem 4 after one iteration, whereas the bisection method found the approximate root (2.0000000000000284) after 45 iterations. Figure 3 shows that the trisection method always has fewer iterations than the bisection method. We can determine the number of iterations for the trisection method by n = log_3((b − a)/eps) and for the bisection method by n = log_2((b − a)/eps). The authors of [6,11] explained that the secant method is better than the bisection and Newton–Raphson methods for problem 8. It is not accurate to draw a conclusion from one function [15], so we experimented on fifteen benchmark functions. From Table 7, it is clear that the secant method failed to solve problem 11.
From Table 5, Table 6 and Table 7, we deduce that the proposed hybrid algorithm (trisection–false position) is better than the Newton–Raphson, false position and secant methods. The Newton–Raphson method failed to solve problems P6, P9 and P11, and the secant method failed to solve P11.
From Figure 4 and Tables 8 and 9, it is clear that the proposed blended algorithm (trisection–false position) has fewer iterations than the blended algorithm (bisection–false position) [15] on all the problems except problem 5 (i.e., according to the number of iterations, the proposed algorithm won on 93.3% of the fifteen problems, while Sabharwal's algorithm won on 6.6%).
From Figure 5 and Tables 8 and 9, it is clear that the proposed blended algorithm (trisection–false position) outperforms the blended algorithm (bisection–false position) [15] on eight problems versus seven (i.e., the proposed algorithm won on 53.3% of the fifteen problems, while Sabharwal's algorithm won on 46.6%). On the other hand, the trisection method determined the exact root (1.0000000000000000) after nine iterations, but the bisection method found the approximate root (0.9999999999999999) after 12 iterations.

5. Conclusions

In this work, we proposed a novel blended algorithm that has the advantages of the trisection method and the false position method. The computational results show that the proposed algorithm outperforms the trisection and regula falsi methods. The introduced algorithm also outperforms the bisection, Newton–Raphson and secant methods with respect to the number of iterations and the average running time. Finally, the implementation results show the superiority of the proposed algorithm over the blended bisection and false position algorithm proposed by Sabharwal [15]. In future work, we will conduct more numerical studies on benchmark functions to evaluate the proposed algorithm and ensure that it competes with the traditional algorithms, so that it may replace them in software packages such as MATLAB and Python. We will also propose other hybrid algorithms that may be better than the proposed one, such as the bisection–Newton–Raphson and trisection–Newton–Raphson methods.

Author Contributions

Conceptualization, E.B.; methodology, S.A.; software, A.E.G.; validation, E.B.; formal analysis, E.B.; investigation, S.A.; resources, A.E.G.; data curation, A.E.G.; writing—original draft preparation, E.B.; writing—review and editing, E.B.; visualization, E.B.; supervision, E.B.; project administration, E.B.; funding acquisition, S.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at Majmaah University for funding this work under project number (R-2021-140).

Acknowledgments

The help from Higher Technological Institute, 10th of Ramadan City, Egypt for publishing is sincerely and greatly appreciated. We also thank the referees for suggestions to improve the presentation of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hasan, A. Numerical Study of Some Iterative Methods for Solving Nonlinear Equations. Int. J. Eng. Sci. Invent. 2016, 5, 1–10. [Google Scholar]
  2. Hasan, A.; Ahmad, N. Comparative study of a new iterative method with that of Newton's method for solving algebraic and transcendental equations. Int. J. Comput. Math. Sci. 2015, 4, 32–37. [Google Scholar]
  3. Khirallah, M.Q.; Hafiz, M.A. Solving systems of nonlinear equations using the family of Jarratt methods. Int. J. Differ. Equ. Appl. 2013, 12, 69–83. [Google Scholar] [CrossRef]
  4. Remani, C. Numerical Methods for Solving Systems of Nonlinear Equations; Lakehead University: Thunder Bay, ON, Canada, 2012; p. 13. [Google Scholar]
  5. Lally, C.H. A faster, high precision algorithm for calculating symmetric and asymmetric. arXiv 2015, arXiv:1509.01831. [Google Scholar]
  6. Ehiwario, J.C.; Aghamie, S.O. Comparative Study of Bisection, Newton-Raphson and Secant Methods of Root-Finding Problems. IOSR J. Eng. 2014, 4, 1–7. [Google Scholar]
  7. Ait-Aoudia, S.; Mana, I. Numerical solving of geometric constraints by bisection: A distributed approach. Int. J. Comput. Inf. Sci. 2004, 2, 66. [Google Scholar]
  8. Baskar, S.; Ganesh, S.S. Introduction to Numerical Analysis; Department of Mathematics, Indian Institute of Technology Bombay Powai: Mumbai, India, 2016. [Google Scholar]
  9. Srivastava, R.B.; Srivastava, S. Comparison of numerical rate of convergence of bisection, Newton and secant methods. J. Chem. Biol. Phys. Sci. 2011, 2, 472–479. [Google Scholar]
  10. Moazzam, G.; Chakraborty, A.; Bhuiyan, A. A robust method for solving transcendental equations. Int. J. Comput. Sci. Issues 2012, 9, 413–419. [Google Scholar]
  11. Nayak, T.; Dash, T. Solution to quadratic equation using genetic algorithm. In Proceedings of the National Conference on AIRES-2012, Vishakhapatnam, India, 29–30 June 2012. [Google Scholar]
  12. Mansouria, P.; Asadya, B.; Guptab, N. The Bisection–Artificial Bee Colony algorithm to solve fixed point problems. Appl. Soft Comput. 2015, 26, 143–148. [Google Scholar] [CrossRef]
  13. Karaboga, D.; Basturk, B. A powerful and efficient algorithm for numerical function optimization: Artificial Bee Colony (ABC) algorithm. J. Glob. Optim. 2007, 39, 459–471. [Google Scholar] [CrossRef]
  14. Burden, L.R.; Douglas, F.J. Numerical Analysis, Prindle, Weber & Schmidt, 3rd ed.; Amazon: Seattle, WA, USA, 1 January 1985. [Google Scholar]
  15. Sabharwal, C.L. Blended Root Finding Algorithm Outperforms Bisection and Regula Falsi Algorithms. Mathematics 2019, 7, 1118. [Google Scholar] [CrossRef]
  16. Badr, E.M.; Elgendy, H. A hybrid water cycle-particle swarm optimization for solving the fuzzy underground water confined steady flow. Indones. J. Electr. Eng. Comput. Sci. 2020, 19, 492–504. [Google Scholar] [CrossRef]
  17. Chapra, S.C.; Canale, R.P. Numerical Methods for Engineers, 7th ed.; McGraw-Hill: Boston, MA, USA, 2015. [Google Scholar]
  18. Harder, D.W. Numerical Analysis for Engineering. Available online: https://ece.uwaterloo.ca/~dwharder/NumericalAnalysis/10RootFinding/falseposition/ (accessed on 11 June 2019).
  19. Calhoun, D. Available online: https://math.boisestate.edu/~calhoun/teaching/matlab-tutorials/lab_16/html/lab_16.html (accessed on 13 June 2019).
  20. Mathews, J.H.; Fink, K.D. Numerical Methods Using Matlab, 4th ed.; Prentice-Hall Inc.: Upper Saddle River, NJ, USA, 2004; ISBN 0-13-065248-2. [Google Scholar]
  21. Esfandiari, R.S. Numerical Methods for Engineers and Scientists Using MATLAB; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
  22. Joe, D.H. Numerical Methods for Engineers and Scientists, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2001. [Google Scholar]
  23. Demir, A. Trisection method by k-Lucas numbers. Appl. Math. Comput. 2008, 198, 339–345. [Google Scholar] [CrossRef]
Figure 1. How to divide the interval [a, b] into three subintervals.
Figure 2. A comparison among 7 methods on 15 problems according to the number of iterations.
Figure 3. A comparison among 7 methods on 15 problems according to the CPU time.
Figure 4. A comparison among 7 methods on 15 problems according to the number of iterations.
Figure 5. A comparison among 7 methods on 15 problems according to the CPU time.
Table 1. Fifteen benchmark problems.

No. | Problem | Interval | Reference
P1 | x^2 − 3 | [1, 2] | Harder [18]
P2 | x^2 − 5 | [2, 7] | Srivastava [9]
P3 | x^2 − 10 | [3, 4] | Harder [18]
P4 | x^2 − x − 2 | [1, 4] | Moazzam [10]
P5 | x^2 + 2x − 7 | [1, 3] | Nayak [11]
P6 | x^3 − 2 | [0, 2] | Harder [18]
P7 | x e^x − 7 | [0, 2] | Calhoun [19]
P8 | x − cos(x) | [0, 1] | Ehiwario [6]
P9 | x sin(x) − 1 | [0, 2] | Mathews [20]
P10 | x cos(x) + 1 | [−2, 4] | Esfandiari [21]
P11 | x^10 − 1 | [0, 1.3] | Chapra [17]
P12 | x^2 + e^(x/2) − 5 | [1, 2] | Esfandiari [21]
P13 | sin(x) sinh(x) + 1 | [3, 4] | Esfandiari [21]
P14 | e^x − 3x − 2 | [2, 3] | Hoffman [22]
P15 | sin(x) − x^2 | [0.5, 1] | Chapra [17]
Table 2. Comparison among pure methods and blended algorithms according to iterations, approximate root, error and interval bounds.

| Method | Iter. | Approximate Root | Error | Lower Bound | Upper Bound |
|---|---|---|---|---|---|
| Bisection | 19 | 2.0000019073486328 | 0.0000057220495364 | 1.9999961853027344 | 2.0000076293945313 |
| Trisection | 1 | 2.0000000000000000 | 0.0000000000000000 | 1.0000000000000000 | 4.0000000000000000 |
| False Position | 15 | 1.9999983893881288 | 0.0000048318330195 | 1.9999959734735644 | 4.0000000000000000 |
| Secant | 6 | 2.0000000786432022 | 0.0000002359296127 | na | na |
| Newton–Raphson | 5 | 2.0000000006984919 | 0.0001373332926100 | na | na |
| Hybrid [15] | 2 | 2.0000000000000000 | 0.0000000000000000 | 1.5000000000000000 | 2.5000000000000000 |
| Our Hybrid | 1 | 2.0000000000000000 | 0.0000000000000000 | 1.0000000000000000 | 4.0000000000000000 |
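Table 2 compares the methods on a single test problem, P4: f(x) = x^2 − x − 2 on [1, 4], with root x = 2. The single iteration reported for the trisection method and for the proposed hybrid is no accident: the first trisection point of [1, 4] lands exactly on the root, as this quick check (assuming the P4 reading above) shows.

```python
# P4: f(x) = x^2 - x - 2 on [1, 4]; the root inside the interval is x = 2.
f = lambda x: x * x - x - 2
a, b = 1.0, 4.0

m1 = a + (b - a) / 3   # first trisection point
print(m1, f(m1))       # 2.0 0.0 -- the root is hit on the very first split
```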
Table 3. Solutions of fifteen problems by the bisection method.

| Problem | Iter | Average CPU Time | Approximate Root | Function Value | Lower Bound | Upper Bound |
|---|---|---|---|---|---|---|
| P1 | 44 | 0.514839 | 1.7320508075688963 | 0.0000000000000000 | 1.7320508075688394 | 1.7320508075689531 |
| P2 | 44 | 0.339006 | 2.2360679774997720 | 0.0000000000000000 | 2.2360679774994878 | 2.2360679775000563 |
| P3 | 44 | 0.330300 | 3.1622776601683995 | 0.0000000000000000 | 3.1622776601683427 | 3.1622776601684564 |
| P4 | 45 | 0.339274 | 2.0000000000000284 | 0.0000000000000000 | 1.9999999999999432 | 2.0000000000001137 |
| P5 | 48 | 0.413062 | 1.8284271247461916 | 0.0000000000000086 | 1.8284271247461845 | 1.8284271247461987 |
| P6 | 49 | 0.373710 | 1.2599210498948743 | 0.0000000000000054 | 1.2599210498948707 | 1.2599210498948779 |
| P7 | 46 | 0.381111 | 1.5243452049841437 | −0.0000000000000075 | 1.5243452049841153 | 1.5243452049841721 |
| P8 | 44 | 0.345850 | 0.7390851332151556 | −0.0000000000000085 | 0.7390851332150987 | 0.7390851332152124 |
| P9 | 46 | 0.556300 | 1.1141571408719244 | −0.0000000000000079 | 1.1141571408718960 | 1.1141571408719528 |
| P10 | 45 | 0.454494 | 2.0739328090912181 | −0.0000000000000074 | 2.0739328090910476 | 2.0739328090913887 |
| P11 | 44 | 0.338134 | 1.0000000000000058 | 0.0000000000000000 | 0.9999999999999318 | 1.0000000000000795 |
| P12 | 48 | 0.379392 | 1.6490132683031895 | −0.0000000000000028 | 1.6490132683031860 | 1.6490132683031931 |
| P13 | 48 | 0.390438 | 3.2215883990939425 | −0.0000000000000056 | 3.2215883990939389 | 3.2215883990939460 |
| P14 | 46 | 0.354950 | 2.1253911988111298 | −0.0000000000000007 | 2.1253911988111156 | 2.1253911988111440 |
| P15 | 45 | 0.359546 | 0.8767262153950668 | −0.0000000000000048 | 0.8767262153950526 | 0.8767262153950810 |
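The bisection results above can be reproduced with a textbook implementation of the method. The sketch below is illustrative: the tolerance and stopping rule are assumptions, not necessarily the settings used in the experiments.

```python
def bisection(f, a, b, tol=1e-14, max_iter=200):
    """Classic bisection: halve the bracketing interval [a, b] until it
    is shorter than 2*tol.  Returns (approximate root, iterations)."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    n = 0
    while (b - a) / 2 > tol and n < max_iter:
        n += 1
        c = (a + b) / 2
        if f(c) == 0:           # exact hit
            return c, n
        if f(a) * f(c) < 0:     # root lies in the left half
            b = c
        else:                   # root lies in the right half
            a = c
    return (a + b) / 2, n
```

For P1 this converges to sqrt(3) ≈ 1.7320508075688772 with a final interval width of about 2 × 10⁻¹⁴, consistent with the bounds reported in the table.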
Table 4. Solutions of fifteen problems by the trisection method.

| Problem | Iter | Average CPU Time | Approximate Root | Function Value | Lower Bound | Upper Bound |
|---|---|---|---|---|---|---|
| P1 | 26 | 0.292349 | 1.7320508075688856 | 0.0000000000000000 | 1.7320508075680989 | 1.7320508075692791 |
| P2 | 28 | 0.311319 | 2.2360679774997863 | 0.0000000000000000 | 2.2360679774993493 | 2.2360679775000047 |
| P3 | 28 | 0.312939 | 3.1622776601683911 | 0.0000000000000000 | 3.1622776601683040 | 3.1622776601684350 |
| P4 | 1 | 0.011161 | 2.0000000000000000 | 0.0000000000000000 | 1.0000000000000000 | 4.0000000000000000 |
| P5 | 29 | 0.330426 | 1.8284271247461907 | 0.0000000000000036 | 1.8284271247461616 | 1.8284271247462491 |
| P6 | 30 | 0.341553 | 1.2599210498948719 | −0.0000000000000062 | 1.2599210498948623 | 1.2599210498948914 |
| P7 | 31 | 0.349806 | 1.5243452049841439 | −0.0000000000000049 | 1.5243452049841375 | 1.5243452049841473 |
| P8 | 29 | 0.326833 | 0.7390851332151560 | −0.0000000000000078 | 0.7390851332151415 | 0.7390851332151852 |
| P9 | 28 | 0.773690 | 1.1141571408719348 | 0.0000000000000066 | 1.1141571408717601 | 1.1141571408720223 |
| P10 | 28 | 0.316154 | 2.0739328090912146 | 0.0000000000000007 | 2.0739328090906901 | 2.0739328090914770 |
| P11 | 26 | 0.297432 | 1.0000000000000393 | 0.0000000000000000 | 0.9999999999995278 | 1.0000000000010620 |
| P12 | 26 | 0.299995 | 1.6490132683031904 | 0.0000000000000012 | 1.6490132683024035 | 1.6490132683035839 |
| P13 | 31 | 0.360716 | 3.2215883990939425 | −0.0000000000000056 | 3.2215883990939407 | 3.2215883990939456 |
| P14 | 28 | 0.323873 | 2.1253911988111311 | 0.0000000000000065 | 2.1253911988110437 | 2.1253911988111747 |
| P15 | 29 | 0.334640 | 0.8767262153950647 | −0.0000000000000025 | 0.8767262153950502 | 0.8767262153950720 |
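A trisection step cuts the bracket into three equal parts (cf. Figure 1) and keeps the part that still brackets a sign change, so the interval shrinks by a factor of three per iteration rather than two. A minimal sketch, with illustrative tolerance and iteration cap:

```python
def trisection(f, a, b, tol=1e-14, max_iter=200):
    """Trisection: split [a, b] into three equal subintervals each
    iteration and keep the one that brackets a sign change."""
    n = 0
    while b - a > tol and n < max_iter:
        n += 1
        m1 = a + (b - a) / 3          # first interior point
        m2 = b - (b - a) / 3          # second interior point
        if f(m1) == 0:
            return m1, n
        if f(m2) == 0:
            return m2, n
        if f(a) * f(m1) < 0:          # sign change in [a, m1]
            b = m1
        elif f(m1) * f(m2) < 0:       # sign change in [m1, m2]
            a, b = m1, m2
        else:                         # sign change in [m2, b]
            a = m2
    return (a + b) / 2, n
```

On P4 the first interior point is exactly the root, which reproduces the one-iteration row in the table.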
Table 5. Solutions of fifteen problems by the false position method.

| Problem | Iter | Average CPU Time | Approximate Root | Function Value | Lower Bound | Upper Bound |
|---|---|---|---|---|---|---|
| P1 | 12 | 0.134719 | 1.7320508075688599 | 0.0000000000000000 | 1.7320508075686347 | 2.0000000000000000 |
| P2 | 46 | 0.510051 | 2.2360679774997747 | 0.0000000000000000 | 2.2360679774997609 | 7.0000000000000000 |
| P3 | 14 | 0.155169 | 3.1622776601683644 | 0.0000000000000000 | 3.1622776601682516 | 4.0000000000000000 |
| P4 | 34 | 0.382718 | 1.9999999999999558 | 0.0000000000000000 | 1.9999999999998894 | 4.0000000000000000 |
| P5 | 20 | 0.556411 | 1.8284271247461896 | −0.0000000000000027 | 1.8284271247461874 | 3.0000000000000000 |
| P6 | 40 | 0.455044 | 1.2599210498948719 | −0.0000000000000062 | 1.2599210498948701 | 2.0000000000000000 |
| P7 | 29 | 0.330403 | 1.5243452049841437 | −0.0000000000000075 | 1.5243452049841419 | 2.0000000000000000 |
| P8 | 11 | 0.124456 | 0.7390851332151551 | −0.0000000000000092 | 0.7390851332150500 | 1.0000000000000000 |
| P9 | 6 | 0.078088 | 1.1141571408719306 | 0.0000000000000008 | 1.0997501702946164 | 1.1141571408730828 |
| P10 | 12 | 0.138026 | 2.0739328090912146 | 0.0000000000000007 | 2.0739328090912039 | 2.5157197710146586 |
| P11 | 127 | 1.447224 | 0.9999999999999812 | 0.0000000000000000 | 0.9999999999999755 | 1.3000000000000000 |
| P12 | 15 | 0.176429 | 1.6490132683031899 | −0.0000000000000008 | 1.6490132683031871 | 2.0000000000000000 |
| P13 | 44 | 0.507856 | 3.2215883990939416 | 0.0000000000000063 | 3.2215883990939407 | 4.0000000000000000 |
| P14 | 44 | 0.504918 | 2.1253911988111285 | −0.0000000000000079 | 2.1253911988111267 | 3.0000000000000000 |
| P15 | 16 | 0.185620 | 0.8767262153950552 | 0.0000000000000080 | 0.8767262153950091 | 1.0000000000000000 |
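The false position (regula falsi) method replaces the midpoint with the root of the secant line through the bracket endpoints. A minimal sketch, with an illustrative residual-based stopping rule:

```python
def false_position(f, a, b, tol=1e-14, max_iter=500):
    """Regula falsi: move one bracket endpoint to the secant-line root
    c = b - f(b)(b - a)/(f(b) - f(a)) until |f(c)| < tol."""
    fa, fb = f(a), f(b)
    c = a
    for n in range(1, max_iter + 1):
        c = b - fb * (b - a) / (fb - fa)   # secant-line root in [a, b]
        fc = f(c)
        if abs(fc) < tol:
            return c, n
        if fa * fc < 0:                    # root lies in [a, c]
            b, fb = c, fc
        else:                              # root lies in [c, b]
            a, fa = c, fc
    return c, max_iter
```

Note how the upper bounds in the table often remain at the initial b (2, 3, 4 or 7): regula falsi typically freezes one endpoint, so the bracket width does not go to zero even as the iterates converge.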
Table 6. Solutions of fifteen problems by Newton’s method.

| Problem | Iter | Average CPU Time | Approximate Root | Function Value |
|---|---|---|---|---|
| P1 | 6 | 0.240500 | 1.7320508075688774 | 0.0000000000000000 |
| P2 | 5 | 0.186819 | 2.2360679774997898 | 0.0000000000000000 |
| P3 | 5 | 0.185136 | 3.1622776601683795 | 0.0000000000000000 |
| P4 | 7 | 0.244393 | 2.0000000000000000 | 0.0000000000000000 |
| P5 | 6 | 0.214349 | 1.8284271247461901 | −0.0000000000000002 |
| P6 | Fail | — | — | — |
| P7 | 14 | 0.436972 | 1.5243452049841444 | 0.0000000000000002 |
| P8 | 6 | 0.214512 | 0.7390851332151607 | 0.0000000000000001 |
| P9 | Fail | — | — | — |
| P10 | 14 | 0.439575 | −4.9171859252871322 | 0.0000000000000011 |
| P11 | Fail | — | — | — |
| P12 | 6 | 0.221387 | 1.6490132683031902 | 0.0000000000000002 |
| P13 | 6 | 0.221157 | 3.2215883990939420 | 0.0000000000000004 |
| P14 | 5 | 0.191465 | 2.1253911988111298 | 0.0000000000000017 |
| P15 | 9 | 0.306000 | 0.8767262153950625 | 0.0000000000000000 |
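Newton–Raphson needs the derivative and a starting point, and the "Fail" rows (and the wrong-basin root for P10) arise when the iteration diverges or the derivative vanishes. A minimal sketch; the starting points and tolerance below are assumptions, not the experimental settings:

```python
def newton(f, df, x0, tol=1e-14, max_iter=100):
    """Newton-Raphson iteration x <- x - f(x)/f'(x).  Raises when the
    derivative vanishes, which is one way a run can fail."""
    x = x0
    for n in range(1, max_iter + 1):
        d = df(x)
        if d == 0:
            raise ZeroDivisionError("derivative vanished at x = %g" % x)
        x_new = x - f(x) / d
        if abs(x_new - x) < tol:   # successive iterates agree
            return x_new, n
        x = x_new
    return x, max_iter
```

For instance, for P6 (f(x) = x^3 − 2) the choice x0 = 0 fails immediately because f'(0) = 0, illustrating how the method depends on the start value in a way the bracketing methods do not.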
Table 7. Solutions of fifteen problems by the secant method.

| Problem | Iter | Average CPU Time | Approximate Root | Function Value |
|---|---|---|---|---|
| P1 | 6 | 0.068071 | 1.7320508075688772 | 0.0000000000000000 |
| P2 | 7 | 0.077763 | 2.2360679774997898 | 0.0000000000000000 |
| P3 | 5 | 0.055822 | 3.1622776601683764 | 0.0000000000000000 |
| P4 | 8 | 0.089142 | 2.0000000000000000 | 0.0000000000000000 |
| P5 | 6 | 0.066767 | 1.8284271247461907 | 0.0000000000000036 |
| P6 | 10 | 0.114367 | 1.2599210498948716 | −0.0000000000000073 |
| P7 | 9 | 0.101345 | 1.5243452049841444 | 0.0000000000000002 |
| P8 | 6 | 0.067514 | 0.7390851332151607 | 0.0000000000000001 |
| P9 | 5 | 0.073486 | 1.1141571408719304 | 0.0000000000000004 |
| P10 | 8 | 0.091206 | 2.0739328090912150 | −0.0000000000000003 |
| P11 | Fail | — | — | — |
| P12 | 6 | 0.069605 | 1.6490132683031902 | 0.0000000000000002 |
| P13 | 8 | 0.093442 | 3.2215883990939420 | 0.0000000000000004 |
| P14 | 7 | 0.081160 | 2.1253911988111298 | −0.0000000000000007 |
| P15 | 7 | 0.080784 | 0.8767262153950625 | −0.0000000000000000 |
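The secant method replaces Newton's derivative with a finite-difference slope through the last two iterates, so no derivative is required, but the iterates are not guaranteed to stay inside the bracket. A minimal sketch; the guard against a flat secant illustrates one way a run such as P11 can fail, without claiming that is the failure mode observed in the experiments:

```python
def secant(f, x0, x1, tol=1e-14, max_iter=100):
    """Secant method: Newton-like step with a finite-difference slope
    taken through the two most recent iterates."""
    f0, f1 = f(x0), f(x1)
    for n in range(1, max_iter + 1):
        if f1 == f0:
            raise ZeroDivisionError("flat secant line; method fails")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # secant step
        if abs(x2 - x1) < tol:
            return x2, n
        x0, f0 = x1, f1                         # slide the window
        x1, f1 = x2, f(x2)
    return x1, max_iter
```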
Table 8. Solutions of fifteen problems by the hybrid method bisection-false position.

| Problem | Iter | Average CPU Time | Approximate Root | Function Value | Lower Bound | Upper Bound |
|---|---|---|---|---|---|---|
| P1 | 8 | 0.121232 | 1.7320508075688772 | 0.0000000000000000 | 1.7320508075688001 | 1.7350578402209837 |
| P2 | 10 | 0.148720 | 2.2360679774997889 | 0.0000000000000000 | 2.2360679774993639 | 2.2439291539836148 |
| P3 | 7 | 0.103442 | 3.1622776601683702 | 0.0000000000000000 | 3.1622776601625873 | 3.1721597778622157 |
| P4 | 2 | 0.030053 | 2.0000000000000000 | 0.0000000000000000 | 1.5000000000000000 | 2.5000000000000000 |
| P5 | 5 | 0.074941 | 1.8284271247461901 | −0.0000000000000002 | 1.8284271247430004 | 1.8284271247493797 |
| P6 | 9 | 0.137275 | 1.2599210498948723 | −0.0000000000000041 | 1.2599210498939839 | 1.2611286403176987 |
| P7 | 11 | 0.165907 | 1.5243452049841444 | 0.0000000000000002 | 1.5243452049841386 | 1.5260333371087631 |
| P8 | 8 | 0.120322 | 0.7390851332151607 | 0.0000000000000001 | 0.7390851332151470 | 0.7422270732175922 |
| P9 | 6 | 0.101488 | 1.1141571408719302 | 0.0000000000000001 | 1.1132427327642707 | 1.1141571408719768 |
| P10 | 10 | 0.150611 | 2.0739328090912150 | −0.0000000000000003 | 2.0739328090911866 | 2.0789350033373930 |
| P11 | 12 | 0.184145 | 0.9999999999999999 | 0.0000000000000000 | 0.9999999999999305 | 1.0003433632829859 |
| P12 | 8 | 0.123939 | 1.6490132683031895 | −0.0000000000000028 | 1.6490132683026435 | 1.6531557562694839 |
| P13 | 9 | 0.139654 | 3.2215883990939420 | 0.0000000000000004 | 3.2215883990939242 | 3.2224168881395068 |
| P14 | 9 | 0.139775 | 2.1253911988111289 | −0.0000000000000055 | 2.1253911988104042 | 2.1275191334463157 |
| P15 | 7 | 0.107493 | 0.8767262153950581 | 0.0000000000000048 | 0.8767262153886712 | 0.8772684454348731 |
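The exact blending rule of the bisection-false position hybrid [15] is not reproduced in this excerpt. The sketch below is one plausible blend, offered only as an illustration and not as Sabharwal's exact algorithm: at each step compute both the midpoint and the false-position point, keep whichever has the smaller residual, and shrink the bracket around it.

```python
def hybrid_bisection_false_position(f, a, b, tol=1e-14, max_iter=200):
    """Illustrative bisection/false-position blend: per step, take the
    candidate (midpoint or secant-line root) with the smaller |f|."""
    for n in range(1, max_iter + 1):
        fa, fb = f(a), f(b)
        m = (a + b) / 2                       # bisection candidate
        c = b - fb * (b - a) / (fb - fa)      # false-position candidate
        x = m if abs(f(m)) < abs(f(c)) else c
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        if fa * fx < 0:                       # root lies in [a, x]
            b = x
        else:                                 # root lies in [x, b]
            a = x
    return x, max_iter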
Table 9. Solutions of fifteen problems by the hybrid method trisection-false position.
Table 9. Solutions of fifteen problems by the hybrid method trisection-false position.
ProblemTrisection-False Position Method
IterAverage CPU TimeApproximate RootFunction ValueLower BoundUpper Bound
P170.1314181.73205080756887720.00000000000000001.73205080756878241.7324926951584967
P280.1492702.23606797749978940.00000000000000002.23606797749871382.2373661277171197
P360.1112313.16227766016837770.00000000000000003.16227766016231023.1638711488008444
P410.0185352.00000000000000000.00000000000000001.00000000000000004.0000000000000000
P570.1319061.8284271247461901−0.00000000000000021.82842712474615211.8288267084339278
P680.1521301.2599210498948730−0.00000000000000091.25992104989381871.2602675857311345
P770.1316701.52434520498414440.00000000000000021.52434520498406621.5244112793655715
P870.1313450.73908513321516070.00000000000000010.73908513321511930.7396432352779715
P950.2134091.11415714087193260.00000000000000351.11264405191456751.1141571409109841
P1080.1503782.0739328090912150−0.00000000000000032.07393280909120792.0745363211703700
P1190.1706381.00000000000000000.00000000000000000.99999999999990901.0000567349034972
P1260.1158721.6490132683031897−0.00000000000000181.64901326830152551.6496393349802922
P1370.1354813.22158839909394200.00000000000000043.22158839909314983.2217303732361522
P1470.1349902.1253911988111298−0.00000000000000072.12539119881106362.1254846670968397
P1550.0962750.87672621539506160.00000000000000100.87672621511424120.8767286917958327
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Badr, E.; Almotairi, S.; Ghamry, A.E. A Comparative Study among New Hybrid Root Finding Algorithms and Traditional Methods. Mathematics 2021, 9, 1306. https://doi.org/10.3390/math9111306

AMA Style

Badr E, Almotairi S, Ghamry AE. A Comparative Study among New Hybrid Root Finding Algorithms and Traditional Methods. Mathematics. 2021; 9(11):1306. https://doi.org/10.3390/math9111306

Chicago/Turabian Style

Badr, Elsayed, Sultan Almotairi, and Abdallah El Ghamry. 2021. "A Comparative Study among New Hybrid Root Finding Algorithms and Traditional Methods" Mathematics 9, no. 11: 1306. https://doi.org/10.3390/math9111306

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop