Abstract
In this paper, we introduce a quadratically convergent Newton-like normal S-iteration method, free from the second derivative, for the solution of nonlinear equations in which the derivative is permitted to vanish at some points in the neighborhood of the root. Our proposed method works well when Newton's method fails and performs even better than some higher-order convergent methods. Numerical results verify that the Newton-like normal S-iteration method converges faster than Fang et al.'s method. We study different aspects of the normal S-iteration method regarding faster convergence to the root. Lastly, the dynamic results support the numerical results and explain the convergence, divergence, and stability of the proposed method.
MSC:
65H04; 65H05; 30C15; 37N30
1. Introduction
In this work, we propose a Newton-like normal S-iteration method for solving nonlinear algebraic and transcendental equations of the following form [1,2,3]:
Newton’s method [4,5] is a basic method for solving (1), which converges to the root quadratically under some conditions. Newton’s method is defined as follows:
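In standard notation, the iteration reads x_{n+1} = x_n − f(x_n)/f′(x_n). A minimal sketch in Python (the function and variable names are illustrative):

```python
def newton(f, df, x0, tol=1e-10, max_iter=100):
    """Classical Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        d = df(x)
        if d == 0:
            raise ZeroDivisionError("f'(x) vanished; Newton's method breaks down")
        x = x - fx / d
    return x, max_iter

# Example: the root sqrt(2) of f(x) = x^2 - 2, starting from x0 = 1.5
root, iters = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
```

The explicit check on f′(x) makes the method's third weakness below concrete: the iteration simply cannot proceed when the derivative vanishes.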
Although Newton's method is the best-known and most basic method for solving Equation (1), it has some weaknesses [6,7,8,9,10,11,12]:
- (i)
- It is only a second-order method;
- (ii)
- The initial approximation should be near the root;
- (iii)
- The denominator term of Newton’s method must not be zero at the root or near the root.
To overcome these weaknesses, Wu [12] developed a quadratically convergent method in 2000, which is expressed as follows:
where . Fang et al. [11] studied a method in 2008, defined as follows:
where . They claimed that their method (4) is of cubic convergence. More precisely,
Theorem 1
([11]). Let be a function and assume that
(L1) is a simple zero of f;
(L2) f is three times differentiable on I;
(L3) , for all , where is a neighborhood of . Then, method (4) converges cubically to .
Recently, Wang and Liu [13] revealed that Fang et al.'s method given by (4) is only of second order. Wang and Liu [13] revised Theorem 1 as follows:
Theorem 2.
Let be a function and assume that
(i) is a simple zero of f;
(ii) f is three times differentiable on I;
(iii) , for all , where is a neighborhood of . Then, method (4) converges quadratically to .
More recently, Wang and Liu [13] modified method (4) for third-order convergence as follows:
where and it is equal to -. Under the above modification, Wang and Liu [13] established the third-order convergence theorem as follows:
Theorem 3.
Let be a function and assume that
(W1) is a simple zero of f;
(W2) f is three times differentiable on I;
(W3) for all , where is a neighborhood of .
Then, the iterative method (5) is cubically convergent.
It is clear from condition (W2) of Theorem 3 that the sufficient condition for the convergence of method (5) to the zero of the function f is that the third derivative of f must exist. However, we often come across a situation in which the third-order derivative of the function does not exist, while f has a zero in the interval I. Consider the function defined by
Here, . Note that , and does not exist. Hence, we observe that
- (i)
- Newton's method (2) cannot be used;
- (ii)
The objective of this work is to introduce a Newton-like normal S-iteration method for solving nonlinear Equation (1). Taking this into account, we describe a new method for which second-order differentiability of the function f is sufficient for convergence and which compares well with third-order methods. For this purpose, we applied the normal S-iteration process to a second-order convergent Newton-like method. The novelty of our proposed Newton-like normal S-iteration method is that, when compared with other methods, the theoretical results remain the same, but the numerical and dynamic results improve significantly. In the theoretical section, we show that, being a quadratically convergent method, it requires only second-order differentiability. In the numerical section, we verify the theoretical results with numerical examples and show that, in spite of being a second-order method, the present method compares well with previously published second- and third-order methods. Furthermore, through a sensitivity analysis, we determined the optimality conditions for the proposed method by finding suitable values of and to obtain the optimum results, and we also obtained the average number of iterations by performing several runs of the proposed method over 50 grid points. Lastly, we confirmed the theoretical and numerical results using a dynamic analysis of the proposed method; we plotted the fractal pattern graphs of the proposed method alongside those of previously published methods to confirm its applicability. The results show that our method not only answers the research question affirmatively but also behaves very well in comparison with the third-order method developed by Wang and Liu [13].
The rest of this paper is arranged as follows: Section 2 presents preliminary results. In Section 3, we propose the new Newton-like normal S-iteration method and explain its convergence analysis. In Section 4, numerical examples are given to verify the theoretical results. Section 5 is related to the sensitivity analysis. Lastly, dynamic analysis supports the numerical and theoretical results in Section 6.
2. Preliminaries
Let be a root of nonlinear Equation (1), and let f be a sufficiently differentiable function with , where is a neighborhood of . Then, the numerical solution of (1) can be written as
Approximating the integral by with in (6), we obtain
3. New Newton-like Method and Its Theoretical Convergence Analysis
In this section, we introduce the new Newton-like normal S-iteration method and study its convergence analysis.
In [14], Sahu introduced a normal S-iteration process as follows:
Definition 1.
Let D be a nonempty convex subset of a normed space X and be an operator. Then, for arbitrary , the normal S-iteration process is defined by
where the sequence .
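Concretely, the normal S-iteration computes y_n = (1 − α_n)x_n + α_n T(x_n) and then x_{n+1} = T(y_n). A small sketch for a generic self-map T (the map cos and the constant choice α_n = 0.9 are illustrative, not taken from the paper):

```python
import math

def normal_s_iteration(T, x0, alpha, steps=60):
    """Normal S-iteration: y_n = (1 - a_n) x_n + a_n T(x_n), x_{n+1} = T(y_n)."""
    x = x0
    for n in range(steps):
        a = alpha(n)                     # a_n in (0, 1)
        y = (1 - a) * x + a * T(x)       # averaging (S) step
        x = T(y)                         # Picard step applied to the average
    return x

# Illustrative example: drive x toward the unique fixed point of cos
fixed = normal_s_iteration(math.cos, 1.0, lambda n: 0.9)
```

Each pass applies T twice (once inside the average, once outside), which is the intuition behind the acceleration over plain Picard iteration.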
There are many papers in the literature dealing with the S-iteration process and the normal S-iteration process. In [15], Sahu introduced a Newton-like method based on the normal S-iteration process as follows:
where the sequence and is the derivative of f at point x.
We now introduce our new Newton-like normal S-iteration method for solving nonlinear Equation (1), when may be zero in the neighborhood of the root, as
where
and is a sequence in ℜ such that . The parameter is chosen in such a manner that both and have the same sign, and hence the denominator in Equation (8) is nonzero. For this purpose, we use the signum function as follows:
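As an illustration of such a scheme, consider a normal S-iteration built on a Wu-type operator T(x) = x − f(x)/(f′(x) + λ_n f(x)), with λ_n chosen via the signum function so that λ_n f(x) and f′(x) share a sign and the denominator stays nonzero. This sketch is an assumption-laden stand-in; the exact operator in (7) and (8) may differ:

```python
def sign(t):
    return (t > 0) - (t < 0)

def wu_operator(f, df, lam):
    """Wu-type Newton-like map; the signum choice keeps the denominator nonzero."""
    def T(x):
        fx, dfx = f(x), df(x)
        # pick the sign of lambda_n so that lambda_n*f(x) and f'(x) agree in sign
        lam_n = lam * sign(fx * dfx) if dfx != 0 else lam
        return x - fx / (dfx + lam_n * fx)
    return T

def newton_like_normal_s(f, df, x0, lam=1.0, alpha=0.9, tol=1e-12, max_iter=100):
    """Normal S-iteration over the Wu-type operator (illustrative sketch)."""
    T = wu_operator(f, df, lam)
    x = x0
    for n in range(max_iter):
        if abs(f(x)) < tol:
            return x, n
        y = (1 - alpha) * x + alpha * T(x)   # averaging step of normal S-iteration
        x = T(y)
    return x, max_iter

# Example: f(x) = x^3 - 2x - 5 has a simple real root near 2.0945515
root, iters = newton_like_normal_s(lambda x: x**3 - 2*x - 5,
                                   lambda x: 3*x**2 - 2, x0=2.5)
```

Note how the denominator f′(x) + λ_n f(x) never vanishes near the root under the signum choice, which is exactly the property the paper exploits when f′ may be zero nearby.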
The main result of this paper can now be established as follows:
Theorem 4.
Let be a function and assume that
- (i)
- is a simple zero of f;
- (ii)
- f is two times differentiable on I;
- (iii)
- , for all , where is a neighborhood of and .
Then, the Newton-like normal S-iteration method defined by (7) is quadratically convergent locally to the zero of f.
Proof.
Let be a simple zero of a function f, and . Using Taylor expansion about and using , we obtain
Using the above in (8), we obtain
On expanding and about , we obtain
Hence, the Newton-like normal S-iteration method proposed in (7) has second-order convergence.
4. Numerical Analysis
In this section, we present some numerical tests to show the applicability of the proposed method by considering two categories of functions, namely (i) functions that are differentiable three times, and (ii) functions that are differentiable only two times. Numerical computations were carried out in MATLAB 2007 and the stopping criteria were taken as (i) , (ii) , where . We applied the Newton-like normal S-iteration method for the following three values of :
- (i)
- ;
- (ii)
- ;
- (iii)
- ( is taken as in Wang and Liu [13]).
- (i)
- Functions with third-order differentials:
Here, we consider the examples taken by Wang and Liu [13] as follows:
For the two values of and as indicated in Wang's method, we considered and , as shown in Table 1. Starting with the same initial points as in Wang and Liu [13] in all test problems, we observe that for both values of , our normal S-iteration method takes fewer iterations than the method of Wang and Liu [13] for the value of . Thus, in spite of being a second-order method, it performs better than the third-order method of Wang and Liu [13]. Furthermore, it may be noted that in all test problems, classical Newton's method either fails or diverges in most cases. In Table 1, F, D, and denote the failure of the method, the divergence of the method, and non-convergence to the desired root, respectively.
Table 1.
Functions for which third-order differentials exist.
- (ii)
- Functions that are differentiable only two times
Table 2.
Functions for which third-order differential does not exist.
As we know from condition (W2) of Theorem 3, the cubically convergent method of Wang and Liu converges to the root only if the third-order derivative of the function exists in the neighborhood of the root. Hence, Wang and Liu's method is not applicable in this case. Therefore, we compared the present method with the same-order quadratically convergent Newton's method and Fang et al.'s method [11] for different values of and (, as in Wang and Liu [13], and ) in Table 2. In all test problems, for all values of and , the present Newton-like normal S-iteration method takes fewer iterations, except for example 3 (case ), in comparison with the other quadratically convergent methods. Hence, we conclude that the present method is more effective, robust, and stable.
5. Sensitivity Analysis
5.1. The Behavior of Normal S-Iteration Method for Different Values of and
We took the function to investigate the empirical behavior of the proposed normal S-iteration method for different values of and . The numerical results in Table 3 show that, when starting with the initial approximations 0.73 and −3.0, the proposed method is not significantly affected by variation in the value of , but the value of plays a crucial role as different values are considered in the interval (0, 1). Thus, with values of ranging from 0.1 to 0.9, the optimum value of is found to be 0.9, for which the proposed method takes the fewest iterations.
Table 3.
The proposed method for different values of and .
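A sweep of the kind behind Table 3 can be sketched as follows; the Wu-type normal S-iteration used here is an illustrative stand-in for the actual scheme (7), and the test function, starting point, and parameter grids are hypothetical:

```python
def sign(t):
    return (t > 0) - (t < 0)

def iterations_needed(f, df, x0, lam, alpha, tol=1e-12, max_iter=200):
    """Iterations of a Wu-type normal S-iteration (an illustrative stand-in)."""
    def T(z):
        fz, dfz = f(z), df(z)
        lam_n = lam * sign(fz * dfz) if dfz != 0 else lam
        return z - fz / (dfz + lam_n * fz)
    x = x0
    for n in range(max_iter):
        if abs(f(x)) < tol:
            return n
        y = (1 - alpha) * x + alpha * T(x)
        x = T(y)
    return max_iter

f = lambda x: x**3 - 2 * x - 5           # simple real root near 2.0945515
df = lambda x: 3 * x**2 - 2
# sweep alpha over 0.1, 0.2, ..., 0.9 for a fixed lambda
counts = {a / 10: iterations_needed(f, df, 2.5, lam=1.0, alpha=a / 10)
          for a in range(1, 10)}
```

On this toy problem, larger α pushes each step closer to a full double application of T, which matches the observation above that α near 0.9 tends to need the fewest iterations.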
5.2. Normal S-Iteration Method with Variable Value of
We considered two sequences of , namely and , to solve the following two test functions using the proposed method:
We observe from Table 4 that the second sequence takes fewer iterations than the first in converging to the root for both test functions. Hence, we conclude that the sequence that converges near 1, i.e., , gives faster convergence to the root.
Table 4.
Normal S-iteration method with a variable value of .
5.3. Average Number of Iterations in the Normal S-Iteration Method
Table 5 and Table 6 show the average number of iterations denoted by ANI for 50 tests conducted with different values of [6]. For this purpose, we considered the following two test functions, which are three times differentiable:
Table 5.
The average number of iterations in the normal S-iteration method.
Table 6.
The average number of iterations (ANI) in the normal S-iteration method.
Example 1.
.
It has root . We took the initial approximations in the grid as follows: and (see Table 5). The allowed error is .
Example 2.
.
It has root . We took the initial approximations of in the grid as follows: , and (see Table 6). The allowed error is .
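The ANI computation can be sketched as follows, using classical Newton's method as a stand-in for the proposed method and a hypothetical grid of 50 equally spaced starting points on [1, 2]:

```python
def newton_iters(f, df, x0, tol=1e-8, max_iter=100):
    """Return the number of Newton steps needed to reach |f(x)| < tol."""
    x = x0
    for n in range(max_iter):
        if abs(f(x)) < tol:
            return n
        d = df(x)
        if d == 0:
            return max_iter              # count a breakdown as non-convergence
        x = x - f(x) / d
    return max_iter

f = lambda x: x * x - 2                  # root sqrt(2) ~ 1.41421356
df = lambda x: 2 * x
grid = [1 + k / 49 for k in range(50)]   # 50 equally spaced points in [1, 2]
ani = sum(newton_iters(f, df, x0) for x0 in grid) / len(grid)
```

Averaging over a grid rather than a single starting point gives a fairer picture of a method's practical cost, which is the point of the ANI columns in Table 5 and Table 6.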
5.4. Convergence Behavior of the Methods of Newton and Fang et al. and the Present Method
The convergence behavior of Newton's method, Fang et al.'s method [11], and the new Newton-like normal S-iteration method is shown in Figure 1, Figure 2 and Figure 3. To study the convergence behavior, we took the test functions , , and , and for each test function, we considered the following three cases:

Figure 1.
The graph using values of functions and roots: (a) Graph for ; (b) Graph for ; (c) Graph for .
Figure 2.
Graph using roots and number of iterations: (a) Graph for ; (b) Graph for ; (c) Graph for .
Figure 3.
Graph for values of functions and numbers of iterations: (a) Graph for ; (b) Graph for ; (c) Graph for .
- Case 1: The graph between function and root for , , and
Here, from Figure 1a, it is clear that for , we have . Starting with this initial approximation , the values of for Newton's method, Fang et al.'s method [11], and the present method are 0.702195479022049, 0.677964714450141, and 0.576332178830878, respectively. Clearly, from Figure 1a, it can be inferred that the present method (red line) performs best in its very first iteration among the three methods. After successive iterations starting with , the present method converges very rapidly to the root , as shown in the figure. Similarly, for the function in Figure 1b and the function in Figure 1c, we can see that the present method converges to the root faster than the others.
- Case 2: Graph of the number of iterations and roots for , , and
For the function , we have for . It is clear from Figure 2b that, when starting with the initial approximation , Newton's method, Fang et al.'s method [11], and the present method converge to the root in 88, 61, and 49 iterations, respectively. Hence, the new Newton-like normal S-iteration method takes the fewest iterations of all the iterative methods. Similarly, we see the same pattern for and in Figure 2a,c.
- Case 3: Graph of number of iterations and functions for , , and
In Figure 3c, we have for . Starting with , the graph shows that the value of the function in the present method reaches 0 in 33 iterations, while Newton's method and Fang et al.'s method [11] take 89 and 43 iterations, respectively. This shows that the present method converges to the root faster than Newton's method and Fang et al.'s method. Figure 3a,b show the same results for functions and , respectively.
6. Dynamic Analysis of Methods for Functions , , , and
We now state the following definitions in the extended complex plane:
Definition 2
([10,16,17]). Let us consider as a rational map on the Riemann sphere, where I is a subset of the complex numbers . Then, a point is said to be a fixed point of g if
Again, for any point , the orbit of the point z can be defined as the set
Definition 3
A point is said to be a periodic point of period k if k is the smallest positive integer such that .
Remark 1.
If is a periodic point of period k, then clearly, it is a fixed point for .
Definition 4
([10,16,17]). Let be a zero of the function F. Then, the basin of attraction of the zero value is defined as the set of all initial approximations such that any numerical iterative method starting with converges to . It can be written as
Here, is any fixed point iterative method.
Remark 2.
For example, in the case of Newton’s method,
We can write the basin of attraction of the zero value for Newton’s method as follows:
Definition 5
([10,16,17]). The Julia set of a nonlinear map is denoted by and is defined as the closure of the set of its repelling periodic points [18]. The complement of the Julia set is called the Fatou set .
Remark 3.
- (i)
- The Julia set of a nonlinear map may also be defined as the common boundary shared by the basins of roots, and the Fatou set may also be defined as the set that contains the basin of attraction.
- (ii)
- Sometimes, the Fatou set of a nonlinear map may also be defined as the solution space and the Julia set of a nonlinear map may also be defined as the error space;
- (iii)
- Fractals are complicated phenomena: self-similar geometric objects that repeat at every small scale ([19]).
We plotted the dynamics of the iterative methods for various functions and then examined the theoretical and numerical results with the help of the dynamic results. A dynamic study helps us understand the convergence and stability of the methods [10]. We applied our method on a square of points with a tolerance and a maximum of 30 iterations. For any function, if the sequence generated by an iterative method from an initial point converges to a zero in the square, then the point lies in the basin of attraction of this zero, and we assign a fixed color to it ([20]).
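The coloring procedure just described can be sketched with Newton's map for z³ − 1 on a small complex grid (the 60 × 60 resolution, window, and integer color codes are illustrative; the actual figures use a finer grid and the methods under comparison):

```python
def newton_basins(n=60, span=1.5, tol=1e-6, max_iter=30):
    """Label each grid point by the cube root of unity its Newton orbit reaches."""
    roots = [complex(1, 0),
             complex(-0.5, 3 ** 0.5 / 2),
             complex(-0.5, -3 ** 0.5 / 2)]
    labels = []
    for i in range(n):
        row = []
        for j in range(n):
            z = complex(-span + 2 * span * j / (n - 1),
                        -span + 2 * span * i / (n - 1))
            color = 0                          # 0 = no convergence (Julia side)
            for _ in range(max_iter):
                if z == 0:                     # derivative 3z^2 vanishes here
                    break
                z = z - (z ** 3 - 1) / (3 * z ** 2)  # Newton map for z^3 - 1
                hits = [k for k, r in enumerate(roots) if abs(z - r) < tol]
                if hits:
                    color = hits[0] + 1        # colors 1, 2, 3 = three basins
                    break
            row.append(color)
        labels.append(row)
    return labels

basins = newton_basins()
```

Rendering `basins` as an image (one color per label) produces the familiar three-lobed fractal; the boundary between the three basins is the Julia set.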
In the following, we describe the speed of convergence and dynamics of the considered methods under two cases for finding the complex roots of functions. In the first case, we plotted the speed of convergence and dynamics of Newton’s method, Fang et al.’s method [11], and the proposed method for functions , (for which the third-order derivative does not exist). In the second case, we studied the speed of convergence and dynamics of Newton’s method, Wang and Liu’s method [13], and the proposed method for functions and (for which the third-order derivative exists).
6.1. Functions for Which the Third-Order Derivative Does Not Exist
We took the following two functions, which are differentiable only two times.
For the function , the dynamics and speed of convergence of the various methods are shown in Figure 4a–c. It is clear from Figure 4 that the proposed method with and = 0.9 generates a Fatou set with larger orbits and darker color and a Julia set with fewer fractal boundaries and less chaotic behavior. Newton's method shows fractal boundaries and chaotic behavior in the middle and right side of Figure 4a. The dynamics of Fang et al.'s method [11] generates a Fatou set with smaller orbits but a larger Julia set, making it the worst-performing method here.
Figure 4.
Dynamics of different methods for : (a) Newton’s method; (b) Fang et al.’s method; (c) proposed method.
The dynamics and speed of convergence of Newton's method, Fang et al.'s method, and the proposed method for are plotted in Figure 5a, Figure 5b, and Figure 5c, respectively. Clearly, the fractal-pattern graph of Newton's method has a large Julia set with fractal boundaries and chaotic behavior, whereas the proposed method and Fang et al.'s method [11] have a large Fatou set with basins; however, both methods have some nonconverging regions, shown in red on the left side of the figures.
Figure 5.
Dynamics of different methods for : (a) Newton’s method; (b) Fang et al.’s method; (c) proposed method.
6.2. Functions for Which the Third-Order Derivative Exists
We took the following two functions, which are differentiable three times.
For , the dynamics of Newton's method, Wang and Liu's method [13], and the proposed method can be seen in Figure 6a, Figure 6b, and Figure 6c, respectively. Figure 6 shows that the proposed method with and = 0.9 performs best, having a Fatou set with larger orbits and darker color and a Julia set with fewer fractal boundaries and less chaotic behavior. Wang and Liu's method [13] generates chaotic behavior throughout Figure 6b. Newton's method generates a Fatou set with smaller orbits and a Julia set with less chaotic behavior, with reddish color in the middle of the figure (see Figure 6a). This is why Newton's method takes many iterations and sometimes fails.
Figure 6.
Dynamics of different methods for : (a) Newton’s method; (b) Wang and Liu’s method; (c) proposed method.
The dynamics of Newton's method, Wang and Liu's method, and the proposed method for the function are shown in Figure 7a–c. The failure of Newton's method with the starting point , as shown in Table 1, is confirmed in Figure 7a. Newton's method and Wang and Liu's method [13] converge slowly, with fractal Julia sets and chaotic behavior, in comparison with the proposed method.
Figure 7.
Dynamics of different methods for : (a) Newton’s method; (b) Wang and Liu’s method; (c) proposed method.
6.3. Dynamics of Proposed Method with Variable Value of for Example
We plotted the speed of convergence and dynamics of the proposed method with variable values of for . The results are shown in Figure 8. It is clear from the figure that the speed of convergence of the proposed method increases with the value of , as the Fatou set grows with larger orbits and a darker color. Moreover, for , the speed of convergence is optimal, with larger orbits and less chaotic behavior in comparison with , and .
Figure 8.
Dynamics of the proposed method with variable values of for : (a) proposed method at ; (b) proposed method at ; (c) proposed method at ; (d) proposed method at ; (e) proposed method at .
7. Future Work
In future research, we may consider applying our proposed method to algebraic equations that have roots with multiplicity. It will be interesting to see the performance of a derivative-free version of the proposed method on the problems considered in this study. The proposed method may also be discussed in Banach spaces to solve systems of equations. Thus, the proposed method indeed has potential areas of interest that will be the topic of our future research.
8. Conclusions
We presented a new Newton-like normal S-iteration method for finding the root of the nonlinear equation . Our theoretical results show that, due to quadratic convergence, it requires only second-order differentiability rather than third-order differentiability. The numerical results and various graphic illustrations show that, in spite of being a second-order method, the proposed method is effective and superior when Newton's method fails, and it performs better than the same-order method of Fang et al. [11] and the third-order method of Wang and Liu [13], as it converges to the root much faster and very efficiently for different values of with . Further, we showed that this convergence of the proposed method is accelerated for a sequence of variable values of converging to one. The results of the dynamic analysis also support the theoretical and numerical results related to the convergence and stability behavior of the proposed method. Thus, from a practical point of view, the new Newton-like normal S-iteration method has definite practical utility.
Author Contributions
Conceptualization, M.K.S. and I.K.A.; methodology, M.K.S. and I.K.A.; software, M.K.S., I.K.A. and A.K.S.; validation, M.K.S. and I.K.A.; formal analysis, M.K.S. and I.K.A.; investigation, M.K.S. and I.K.A.; resources, M.K.S. and I.K.A.; data curation, M.K.S. and I.K.A.; writing—original draft preparation, M.K.S. and I.K.A.; writing—review and editing, M.K.S. and I.K.A.; visualization, M.K.S. and I.K.A.; supervision, M.K.S. and I.K.A.; project administration, M.K.S. and I.K.A.; funding acquisition, M.K.S. and I.K.A. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare that they have no conflict of interest.
References
- Baccouch, M. A Family of High Order Numerical Methods for Solving Nonlinear Algebraic Equations with Simple and Multiple Roots. Int. J. Appl. Comput. Math. 2017, 3, 1119–1133. [Google Scholar] [CrossRef]
- Singh, M.K.; Singh, A.K. The Optimal Order Newton’s Like Methods with Dynamics. Mathematics 2021, 9, 527. [Google Scholar] [CrossRef]
- Singh, M.K.; Argyros, I.K. The Dynamics of a Continuous Newton-like Method. Mathematics 2022, 10, 3602. [Google Scholar] [CrossRef]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Clifford, NJ, USA, 1964. [Google Scholar]
- Argyros, I.K.; Hilout, S. On Newton’s Method for Solving Nonlinear Equations and Function Splitting. Numer. Math. Theor. Meth. Appl. 2011, 4, 53–67. [Google Scholar]
- Ardelean, G.; Balog, L. A qualitative study of Agarwal et al. iteration procedure for fixed points approximation. Creat. Math. Inform. 2016, 25, 135–139. [Google Scholar] [CrossRef]
- Ardelean, G. A comparison between iterative methods by using the basins of attraction. Appl. Math. Comput. 2011, 218, 88–95. [Google Scholar] [CrossRef]
- Deng, J.J.; Chiang, H.D. Convergence region of Newton iterative power flow method: Numerical studies. J. Appl. Math. 2013, 4, 53–67. [Google Scholar] [CrossRef]
- Kotarski, W.; Gdawiec, K.; Lisowska, A. Polynomiography via Ishikawa and Mann iterations. In International Symposium on Visual Computing; Springer: Berlin/Heidelberg, Germany, 2012; pp. 305–313. [Google Scholar]
- Amat, S.; Busquier, S.; Plaza, S. Review of some iterative root-finding methods from a dynamical point of view. Scientia 2004, 10, 3–35. [Google Scholar]
- Fang, L.; He, G.; Hub, Z. A cubically convergent Newton-type method under weak conditions. J. Comput. Appl. Math. 2008, 220, 409–412. [Google Scholar] [CrossRef]
- Wu, X.Y. A new continuation Newton-like method and its deformation. Appl. Math. Comput. 2000, 112, 75–78. [Google Scholar] [CrossRef]
- Wang, H.; Liu, H. Note on a Cubically Convergent Newton–Type Method Under Weak Conditions. Acta Appl. Math. 2010, 110, 725–735. [Google Scholar] [CrossRef]
- Sahu, D.R. Applications of the S-iteration process to constrained minimization problems and split feasibility problems. Fixed Point Theory 2011, 12, 187–204. [Google Scholar]
- Sahu, D.R. Strong convergence of a fixed point iteration process with applications. In Proceedings of the International Conference on Recent Advances in Mathematical Sciences and Applications; 2009; pp. 100–116. Available online: https://scholar.google.ca/scholar?hl=en&as_sdt=0%2C5&q=Strong+convergence+of+a+fixed+point+iteration+process+with+applications&btnG= (accessed on 18 October 2022).
- Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599. [Google Scholar] [CrossRef]
- Argyros, I.K.; Magrenan, A.A. Iterative Methods and Their Dynamics with Applications: A Contemporary Study; CRC Press, Taylor and Francis: Boca Raton, FL, USA, 2017. [Google Scholar]
- Julia, G. Memoire sure l’iteration des fonction rationelles. J. Math. Pures et Appl. 1918, 81, 47–235. [Google Scholar]
- Mandelbrot, B.B. The Fractal Geometry of Nature; WH Freeman: New York, NY, USA, 1983; ISBN 978-0-7167-1186-5. [Google Scholar]
- Susanto, H.; Karjanto, N. Newton’s method’s basins of attraction revisited. Appl. Math. Comput. 2009, 215, 1084–1090. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).