Article

Newton-like Normal S-iteration under Weak Conditions

by Manoj K. Singh 1, Ioannis K. Argyros 2,* and Arvind K. Singh 3,*
1 College of Technology, Sardar Vallabhbhai Patel University of Agriculture and Technology, Meerut 250110, India
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3 Department of Mathematics, Institute of Science, Banaras Hindu University, Varanasi 221005, India
* Authors to whom correspondence should be addressed.
Axioms 2023, 12(3), 283; https://doi.org/10.3390/axioms12030283
Submission received: 19 October 2022 / Revised: 28 February 2023 / Accepted: 28 February 2023 / Published: 8 March 2023

Abstract: In the present paper, we introduce a quadratically convergent Newton-like normal S-iteration method, free from the second derivative, for the solution of nonlinear equations permitting f'(x) = 0 at some points in the neighborhood of the root. Our proposed method works well when the Newton method fails and performs even better than some higher-order converging methods. Numerical results verify that the Newton-like normal S-iteration method converges faster than Fang et al.'s method. We study different aspects of the normal S-iteration method regarding faster convergence to the root. Lastly, the dynamic results support the numerical results and explain the convergence, divergence, and stability of the proposed method.

1. Introduction

In this work, we propose a Newton-like normal S-iteration method for solving nonlinear algebraic and transcendental equations of the following form [1,2,3]:
f(x) = 0.  (1)
Newton's method [4,5] is a basic method for solving (1), which converges to the root quadratically under some conditions. Newton's method is defined as follows:
x_{n+1} = x_n − f(x_n)/f'(x_n),  n = 0, 1, 2, ….  (2)
Although Newton's method is the best-known and most widely used method for solving Equation (1), it has some weaknesses [6,7,8,9,10,11,12], as follows:
(i)
It is only a second-order method;
(ii)
The initial approximation should be near the root;
(iii)
The derivative f'(x) in the denominator of Newton's method must not be zero at the root or near the root.
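For concreteness, the classical iteration (2) can be sketched as follows. This is a minimal Python sketch for illustration only (the paper's own experiments were run in MATLAB), and the breakdown guard returning None is our addition:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Classical Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n).

    Returns the approximate root, or None when the derivative vanishes
    at an iterate (weakness (iii) above)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = fprime(x)
        if dfx == 0.0:           # the denominator vanishes: Newton breaks down
            return None
        x -= fx / dfx
    return x

# Weakness (iii) in action for F2(x) = x^3 - 2x^2 + x - 1 (used in Section 4):
# the derivative 3x^2 - 4x + 1 vanishes at the starting point x0 = 1.
F2 = lambda x: x**3 - 2 * x**2 + x - 1
dF2 = lambda x: 3 * x**2 - 4 * x + 1
assert newton(F2, dF2, 1.0) is None            # breakdown at x0 = 1
root = newton(F2, dF2, 2.0)                    # a safer starting point works
assert abs(root - 1.75487766624669) < 1e-9
```
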
To overcome these weaknesses, Wu [12] developed a quadratically convergent method in 2000, which is expressed as follows:
x_{n+1} = x_n − f(x_n) / (λ_n f(x_n) + f'(x_n)),  n = 0, 1, 2, …,  (3)
where 0 < |λ_n| < ∞. Fang et al. [11] studied a method in 2008, defined as follows:
y_n = x_n − f(x_n) / (λ_n f(x_n) + f'(x_n)),
x_{n+1} = y_n − f(y_n) / (λ_n f(x_n) + f'(x_n)),  n = 0, 1, 2, …,  (4)
where |λ_n| ≤ 1. They claimed that their method (4) is of cubic convergence. More precisely,
Theorem 1 
([11]). Let f : I ⊆ ℝ → ℝ be a function and assume that
(L1) x* ∈ I is a simple zero of f;
(L2) f is three times differentiable on I;
(L3) λ_n f(x) + f'(x) ≠ 0 for all x ∈ N(x*), where N(x*) is a neighborhood of x*.
Then, method (4) converges cubically to x*.
Recently, Wang and Liu [13] revealed that Fang et al.'s method given by (4) is only of second order. Wang and Liu [13] revised Theorem 1 as follows:
Theorem 2. 
Let f : I ⊆ ℝ → ℝ be a function and assume that
(i) x* ∈ I is a simple zero of f;
(ii) f is three times differentiable on I;
(iii) λ_n f(x) + f'(x) ≠ 0 for all x ∈ N(x*), where N(x*) is a neighborhood of x*.
Then, method (4) converges quadratically to x*.
More recently, Wang and Liu [13] modified method (4) for third-order convergence as follows:
y_n = x_n + f(x_n) / (λ_n f(x_n) − f'(x_n)),
x_{n+1} = y_n + f(y_n) / (λ_n f(x_n) − f'(x_n)),  n = 0, 1, 2, …,  (5)
where |λ_n| ≤ 1 and λ_n = −sign(f(x_n) f'(x_n)) · min{1, |f(x_n)|}. Under the above modification, Wang and Liu [13] settled the third-order convergence theorem as follows:
Theorem 3. 
Let f : I ⊆ ℝ → ℝ be a function and assume that
(W1) x* ∈ I is a simple zero of f;
(W2) f is three times differentiable on I;
(W3) λ_n f(x) − f'(x) ≠ 0 for all x ∈ N(x*), where N(x*) is a neighborhood of x*.
Then, the iterative method (5) is cubically convergent.
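A short Python sketch of method (5) with this choice of λ_n may clarify the scheme. This is our own illustration of the iteration as reconstructed above, not the authors' code; the signum convention sign(0) = 1 follows the definition given later in Section 3:

```python
def sign(x):
    """Signum convention used later in the paper: sign(x) = 1 for x >= 0, else -1."""
    return 1.0 if x >= 0 else -1.0

def wang_liu(f, fprime, x0, tol=1e-12, max_iter=100):
    """Sketch of method (5): both substeps share the denominator
    lam*f(x_n) - f'(x_n), with lam = -sign(f(x_n)*f'(x_n))*min(1, |f(x_n)|),
    so that lam*f(x_n) and f'(x_n) have opposite signs and the denominator
    cannot vanish."""
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        lam = -sign(fx * fprime(x)) * min(1.0, abs(fx))
        den = lam * fx - fprime(x)
        y = x + fx / den
        x = y + f(y) / den
    return x, max_iter

# F2(x) = x^3 - 2x^2 + x - 1: Newton breaks down at x0 = 1 (f'(1) = 0),
# but method (5) still converges from the same starting point.
F2 = lambda x: x**3 - 2 * x**2 + x - 1
dF2 = lambda x: 3 * x**2 - 4 * x + 1
root, iters = wang_liu(F2, dF2, 1.0)
assert abs(root - 1.75487766624669) < 1e-9 and iters < 30
```
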
It is clear from condition (W2) of Theorem 3 that a sufficient condition for the convergence of method (5) to the zero of the function f is the existence of the third derivative of f. However, we often come across situations in which the third-order derivative of the function does not exist, while f still has a zero in the interval I. Consider the function f1 defined by
f1(x) = x^{5/2} − exp(x) + 1.
Here, x* = 0.0. Note that f'1(x) = 0 at some points, and f1'''(x*) does not exist. Hence, we observe that
(i)
Newton's method (2) cannot be used;
(ii)
The method of Wang and Liu (5) does not satisfy the condition (W2) of Theorem 3. At this stage, the following question naturally arises: Is it possible to propose an iterative method for finding the solution of (1), when f is not three times differentiable on I?
The objective of this work is to introduce a Newton-like normal S-iteration method for solving nonlinear Equation (1). Accordingly, we describe a new method for which second-order differentiability of f is sufficient for convergence and which compares well with third-order methods. For this purpose, we apply the normal S-iteration process to a second-order convergent Newton-like method. The novelty of our proposed Newton-like normal S-iteration method is that, compared with other methods, the theoretical convergence order remains the same, but the numerical and dynamic results improve significantly. In the theoretical section, we show that, being a quadratically convergent method, it requires only second-order differentiability. In the numerical section, we verify the theoretical results with numerical examples and show that, in spite of being a second-order method, the present method compares well with previously published second- and third-order methods. Furthermore, through a sensitivity analysis, we determine suitable values of λ_n and β_n for which the proposed method gives optimal results, and we compute the average number of iterations over several runs of the proposed method on 50 grid points. Lastly, we confirm the theoretical and numerical results using a dynamic analysis of the proposed method; thus, we plot the fractal pattern graphs of the proposed method alongside those of the previously published methods to confirm its applicability. The results show that our method not only answers the research question affirmatively but also behaves very well in comparison with the third-order method developed by Wang and Liu [13].
The rest of this paper is arranged as follows: Section 2 presents preliminary results. In Section 3, we propose the new Newton-like normal S-iteration method and explain its convergence analysis. In Section 4, numerical examples are given to verify the theoretical results. Section 5 is related to the sensitivity analysis. Lastly, dynamic analysis supports the numerical and theoretical results in Section 6.

2. Preliminary

Let x* be a root of nonlinear Equation (1), let f be a sufficiently differentiable function, and let x_n ∈ N(x*), where N(x*) is a neighborhood of x*. Then, the numerical solution of (1) can be written as
f(x) = f(x_n) + ∫_{x_n}^{x} f'(t) dt.  (6)
Approximating the integral by (x − x_n) f'(x_n) with x = x* in (6), we obtain
0 ≈ f(x_n) + (x* − x_n) f'(x_n).
Therefore, a new approximation x_{n+1} to x* can be written as (2). Newton's method (2) fails when the derivative of f becomes zero in the neighborhood of the root. Replacing f'(x_n) in (2) by f'(x_n) + λ_n f(x_n), we obtain the approximation x_{n+1} given in (3), which is the quadratically convergent method of Wu [12].

3. New Newton-like Method and Its Theoretical Convergence Analysis

In this section, we introduce the new Newton-like normal S-iteration method and study its convergence analysis.
In [14], Sahu introduced a normal S-iteration process as follows:
Definition 1. 
Let D be a nonempty convex subset of a normed space X and T : D → D be an operator. Then, for arbitrary x_0 ∈ D, the normal S-iteration process is defined by
x_{n+1} = T((1 − β_n) x_n + β_n T(x_n)),  n = 0, 1, 2, …,
where the sequence β_n ∈ (0, 1).
There are many papers dealing with the S-iteration process and the normal S-iteration process in the literature. In [15], Sahu introduced a Newton-like method based on the normal S-iteration process as follows:
x_{n+1} = y_n − f(y_n)/f'(y_n),
y_n = (1 − β_n) x_n + β_n u_n,
u_n = x_n − f(x_n)/f'(x_n),  n = 0, 1, 2, …,
where the sequence β_n ∈ (0, 1) and f'(x) is the derivative of f at the point x.
We now introduce our new Newton-like normal S-iteration method for solving nonlinear Equation (1), when f' may be zero in the neighborhood of the root, as
y_n = (1 − β_n) x_n + β_n G(x_n),
x_{n+1} = G(y_n),  n = 0, 1, 2, …,  (7)
where
G(x_n) = x_n + f(x_n) / (λ_n f(x_n) − f'(x_n)),  (8)
β_n ∈ (0, 1), and λ_n is a sequence in ℝ such that |λ_n| ≤ 1. The parameter λ_n is chosen in such a manner that λ_n f(x_n) and f'(x_n) have opposite signs, and hence the denominator in Equation (8) is nonzero. For this purpose, we use the signum function
sign(x) = 1 if x ≥ 0,  −1 if x < 0.
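A minimal Python sketch of the scheme (7)–(8) may help fix ideas. This is our own illustration (the paper's experiments were run in MATLAB), and the test equation x^2 − 2 = 0 is chosen for illustration only; it does not appear in the paper's tables:

```python
import math

def sign(x):
    """Signum as defined above: sign(x) = 1 for x >= 0, else -1."""
    return 1.0 if x >= 0 else -1.0

def G(x, f, fprime, lam_abs):
    """Substep (8): G(x) = x + f(x)/(lam*f(x) - f'(x)); the sign of lam is
    chosen opposite to f(x)*f'(x) so that the denominator is nonzero."""
    fx = f(x)
    lam = -lam_abs * sign(fx * fprime(x))
    return x + fx / (lam * fx - fprime(x))

def normal_s_newton(f, fprime, x0, lam_abs=0.5, beta=0.9,
                    tol=1e-12, max_iter=100):
    """Newton-like normal S-iteration (7):
    y_n = (1 - beta_n) x_n + beta_n G(x_n),  x_{n+1} = G(y_n)."""
    x = x0
    for n in range(max_iter):
        if abs(f(x)) < tol:
            return x, n
        y = (1.0 - beta) * x + beta * G(x, f, fprime, lam_abs)
        x = G(y, f, fprime, lam_abs)
    return x, max_iter

# Illustrative run: f(x) = x^2 - 2, root sqrt(2).
root, iters = normal_s_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
assert abs(root - math.sqrt(2.0)) < 1e-8
assert iters < 20
```
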
The main result of this paper can now be established as follows:
Theorem 4. 
Let f : I ⊆ ℝ → ℝ be a function and assume that
(i)
x* ∈ I is a simple zero of f;
(ii)
f is two times differentiable on I;
(iii)
λ_n f(x) − f'(x) ≠ 0 for all x ∈ N(x*), where N(x*) is a neighborhood of x*, and |λ_n| ≤ 1.
Then, the Newton-like normal S-iteration method defined by (7) is quadratically convergent locally to the zero of f.
Proof. 
Let x* ∈ I be a simple zero of the function f, e_n = x_n − x*, and A_k = (1/k!) f^(k)(x*)/f'(x*). Using the Taylor expansion about x* and using f(x*) = 0, we obtain
f(x_n) = f'(x*)[e_n + A_2 e_n^2 + A_3 e_n^3 + O(e_n^4)],  (9)
f'(x_n) = f'(x*)[1 + 2A_2 e_n + 3A_3 e_n^2 + 4A_4 e_n^3 + O(e_n^4)].  (10)
Now, from the above two equations, we obtain
λ_n f(x_n) − f'(x_n) = −f'(x*)[1 + (2A_2 − λ_n) e_n + (3A_3 − λ_n A_2) e_n^2 + (4A_4 − λ_n A_3) e_n^3 + O(e_n^4)],  (11)
and from (9) and (11), we have
f(x_n) / (λ_n f(x_n) − f'(x_n)) = −e_n + (A_2 − λ_n) e_n^2 + (2A_2 λ_n − λ_n^2 − 2A_2^2 + 2A_3) e_n^3 + O(e_n^4).  (12)
Using the above in (8), we obtain
G(x_n) = x* + (A_2 − λ_n) e_n^2 + (2A_2 λ_n − λ_n^2 − 2A_2^2 + 2A_3) e_n^3 + O(e_n^4).  (13)
Now, on using (13) in the first substep of (7), we get
y_n = x* + (1 − β_n) e_n + β_n (A_2 − λ_n) e_n^2 + β_n (2A_2 λ_n − λ_n^2 − 2A_2^2 + 2A_3) e_n^3 + O(e_n^4).
On expanding f(y_n) and f'(y_n) about x*, we obtain
f(y_n) = f'(x*)[(1 − β_n) e_n + (A_2 (1 − β_n)^2 + β_n (A_2 − λ_n)) e_n^2 + O(e_n^3)],  (14)
f'(y_n) = f'(x*)[1 + 2A_2 (1 − β_n) e_n + (3A_3 (1 − β_n)^2 + 2A_2 β_n (A_2 − λ_n)) e_n^2 + O(e_n^3)].  (15)
Now, from (14) and (15), we have
λ_n f(y_n) − f'(y_n) = f'(x*)[−1 + (1 − β_n)(λ_n − 2A_2) e_n + ((1 − β_n)^2 (λ_n A_2 − 3A_3) + β_n (A_2 − λ_n)(λ_n − 2A_2)) e_n^2 + O(e_n^3)].  (16)
Furthermore, from (14) and (16), we have
f(y_n) / (λ_n f(y_n) − f'(y_n)) = −(1 − β_n) e_n + (A_2 − λ_n)(1 − 3β_n + β_n^2) e_n^2 + O(e_n^3).  (17)
With the help of (17), the second equation of (7) becomes
x_{n+1} = G(y_n) = y_n + f(y_n) / (λ_n f(y_n) − f'(y_n)) = x* + (A_2 − λ_n)(1 − β_n)^2 e_n^2 + O(e_n^3),  (18)
that is,
e_{n+1} = C e_n^2 + O(e_n^3),
where C = (A_2 − λ_n)(1 − β_n)^2.
Hence, the Newton-like normal S-iteration method proposed in (7) has second-order convergence. Note also that the magnitude of the asymptotic error constant C decreases as β_n approaches 1, which agrees with the numerical observation in the following sections that β_n = 0.9 gives the best performance. □

4. Numerical Analysis

In this section, we present some numerical tests to show the applicability of the proposed method by considering two categories of functions, namely (i) functions that are differentiable three times, and (ii) functions that are differentiable only two times. Numerical computations were carried out in MATLAB 2007, and the stopping criteria were taken as (i) |f(x_k)| < ε, (ii) |x_k − x_{k−1}| < ε, where ε = 10^{−15}. We applied the Newton-like normal S-iteration method for the following three values of λ_n:
(i)
| λ n | = 0.5 ;
(ii)
| λ n | = 1.0 ;
(iii)
λ_n = −sign(f(x_n) f'(x_n)) min{1, |f(x_n)|} (λ_n is taken as in Wang and Liu [13]).
(i)
Functions that are three times differentiable:
Here, we consider the examples taken by Wang and Liu [13] as follows:
F1(x) = x sin x + cos x − 0.6, x* = 2.54623173142842,
F2(x) = x^3 − 2x^2 + x − 1, x* = 1.75487766624669,
F3(x) = ln x, x* = 1.0000,
F4(x) = arctan x, x* = 0.0000,
F5(x) = x + 1 − exp(sin x), x* = 1.69681238680975,
F6(x) = x exp(x^2) − sin^2 x + 3 cos x + 5, x* = −1.20764782713092,
F7(x) = 10 x exp(−x^2) − 1, x* = 1.67963061042845.
For the two values λ_n = 0.5 and λ_n as indicated in Wang's method, we considered β_n = 0.5 and 0.9, as shown in Table 1. Starting with the same initial points as in Wang and Liu [13] in all test problems, we observe that for both values of λ_n, our normal S-iteration method takes fewer iterations than the method of Wang and Liu [13] for β_n = 0.9. Thus, in spite of being a second-order convergent method, it performs better than the third-order method of Wang and Liu [13]. Furthermore, it may be noted that in all test problems, the classical Newton method either fails or diverges in most cases. In Table 1, F, D, and NC denote the failure of the method, the divergence of the method, and non-convergence to the desired root, respectively.
(ii)
Functions that are differentiable only two times
We considered the following real functions f : I ⊆ ℝ → ℝ, and the results are shown in Table 2:
f1(x) = x^{5/2} − exp(x) + 1, x* = 0.0,
f2(x) = x^4 sin(1/x), x ≠ 0, x* = 0.31830988618379 (x0 = 1),
x* = 0.106103295394597 (x0 = 0.1),
f3(x) = x^{7/3} sin x, x* = 0.0,
f4(x) = (x − 2)^{7/3} − x^3 + 3x^2 − 2, x* = 2.475200396019297,
f5(x) = x^{7/3} exp(x), x* = 0.0,
f6(x) = (x + 2)^{5/2} + exp(x) − 1, x* = −1.142466838767107.
As we know from the condition (W2) of Theorem 3, the cubically convergent method of Wang and Liu will converge to the root only if the third-order differential of the function exists in the neighborhood of the root. Hence, Wang’s method is no longer applicable in this case. Therefore, we compared the present method with quadratically convergent same-order Newton’s method and Fang et al.’s method [11] for different values of λ n and β n ( λ n = 0.5 , λ n = 1 , λ as in Wang and Liu [13] and β n = 0.5 , β n = 0.9 ) in Table 2. In all the test problems, for all values of λ n and β n , we can see that the present new Newton-like normal S-iteration method is always taking less number of iterations, except for example 3 (case β n = 0.5 ), in comparison to other quadratically convergent methods. Hence, we conclude that the present method is more effective, robust, and stable.

5. Sensitivity Analysis

5.1. The Behavior of the Normal S-Iteration Method for Different Values of λ_n and β_n

We took the function F6(x) = x exp(x^2) − sin^2 x + 3 cos x + 5 (x* = −1.20764782713092) to investigate the empirical behavior of the proposed normal S-iteration method for different values of λ_n and β_n. The numerical results in Table 3 show that, starting with the initial approximations 0.73 and −3.0, the proposed method is not significantly affected by variation in the value of λ_n, but the value of β_n plays a crucial role as it varies in the interval (0, 1). Thus, for values of β_n ranging from 0.1 to 0.9, the optimum value is found to be β_n = 0.9, for which the proposed method takes the fewest iterations.

5.2. Normal S-Iteration Method with Variable Value of β

We considered the two sequences β_n^(1) = 0.1 + 1/2^(n+2) and β_n^(2) = 1 − 1/2^(n+2) to solve the following two test functions using the proposed method:
(a) F1(x) = x sin x + cos x − 0.6, x* = 2.54623173142842,
(b) f2(x) = x^4 sin(1/x), x* = 0.31830988618379.
We observe from Table 4 that the second sequence, β_n^(2) = 1 − 1/2^(n+2), takes fewer iterations than the first sequence, β_n^(1) = 0.1 + 1/2^(n+2), in converging to the root for both test functions. Hence, we conclude that a sequence converging to 1, i.e., β_n^(2), gives faster convergence to the root.
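The variable-β variant can be sketched as follows. This is our illustration, and the two β sequences encode one plausible reading of the formulas above (the scraped source is ambiguous about the exponent); the test equation x^2 − 2 = 0 is ours and does not appear in Table 4:

```python
import math

def sign(x):
    return 1.0 if x >= 0 else -1.0

def G(x, f, fp, lam_abs):
    fx = f(x)
    lam = -lam_abs * sign(fx * fp(x))        # keeps the denominator nonzero
    return x + fx / (lam * fx - fp(x))

def iterate_variable_beta(f, fp, x0, beta_of_n, lam_abs=0.5,
                          tol=1e-12, max_iter=200):
    """Method (7) with a variable beta_n supplied as a function of n."""
    x = x0
    for n in range(max_iter):
        if abs(f(x)) < tol:
            return x, n
        b = beta_of_n(n)
        y = (1.0 - b) * x + b * G(x, f, fp, lam_abs)
        x = G(y, f, fp, lam_abs)
    return x, max_iter

# One reading of the two sequences: beta_n -> 0.1 versus beta_n -> 1.
beta1 = lambda n: 0.1 + 1.0 / 2.0 ** (n + 2)
beta2 = lambda n: 1.0 - 1.0 / 2.0 ** (n + 2)

f, fp = (lambda x: x * x - 2.0), (lambda x: 2.0 * x)
r1, n1 = iterate_variable_beta(f, fp, 1.0, beta1)
r2, n2 = iterate_variable_beta(f, fp, 1.0, beta2)
assert abs(r1 - math.sqrt(2.0)) < 1e-8
assert abs(r2 - math.sqrt(2.0)) < 1e-8
```

Both sequences keep β_n in (0, 1), so Theorem 4 applies; only the asymptotic error constant differs.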

5.3. Average Number of Iterations in the Normal S-Iteration Method

Table 5 and Table 6 show the average number of iterations, denoted by ANI, for 50 tests conducted with different values of β_n [6]. For this purpose, we considered the following two test functions, which are three times differentiable:
Example 1. 
F2(x) = x^3 − 2x^2 + x − 1 = 0.
It has root x* = 1.75487766624669. We took the initial approximations in the grid as follows: x_0 = 0.25 + ih, i = 1, …, 50, and h = 0.03 (see Table 5). The allowed error is 10^{−14}.
Example 2. 
F6(x) = x exp(x^2) − sin^2 x + 3 cos x + 5.
It has root x* = −1.20764782713092. We took the initial approximations x_0 in the grid as follows: x_0 = −2.0 + ih, i = 1, …, 50, and h = 0.03 (see Table 6). The allowed error is 10^{−14}.

5.4. Convergence Behavior of the Methods of Newton and Fang et al. and the Present Method

The convergence behavior of Newton’s method, Fang et al.’s method [11], and the new Newton-like normal S-iteration method are shown in Figure 1, Figure 2 and Figure 3. To study the convergence behavior, we took the test functions f 2 , f 3 , and f 5 , and for each test function, we considered the three cases as follows:
  • Case 1: The graph between function and root for f 2 , f 3 , and f 5
Here, from Figure 1a, it is clear that for x_0 = 1.0, we have f2(x_0) = 0.841470984807896. Starting with this initial approximation x_0, the values of x_1 for Newton's method, Fang et al.'s method [11], and the present method are 0.702195479022049, 0.677964714450141, and 0.576332178830878, respectively. Clearly, from Figure 1a, the present method (red line) is the best of the three in its very first iteration. After successive iterations starting with x_0 = 1.0, the present method converges very rapidly to the root x* = 0.318309886183791, as shown in the figure. Similarly, for the function f3 in Figure 1b and the function f5 in Figure 1c, we can see that the present method converges to the root x* = 0 faster than the others.
  • Case 2: Graph of the number of iterations and roots for f 2 , f 3 , and f 5
For the function f3, we have f3(x_0) = 0.841470984807896 for x_0 = 1.0. It is clear from Figure 2b that, starting with the initial approximation x_0, Newton's method, Fang et al.'s method [11], and the present method converge to the root x* = 0.0 in 88, 61, and 49 iterations, respectively. Hence, the new Newton-like normal S-iteration method takes the fewest iterations among all the iterative methods. Similarly, we see the same pattern for f2 and f5 in Figure 2a,c.
  • Case 3: Graph of number of iterations and functions for f 2 , f 3 , and f 5
In Figure 3c, we have f5(x_0) = 2.718281828459045 for x_0 = 1.0. Starting with x_0, we can see from the graph that the value of the function f5 in the present method becomes 0 in 33 iterations, while Newton's method and Fang et al.'s method [11] take 89 and 43 iterations, respectively. This shows that the present method converges to the root x* = 0.0 faster than Newton's method and Fang et al.'s method. Figure 3a,b show the same results for functions f2 and f3, respectively.

6. Dynamic Analysis of Methods for Functions f1, f2, F1, and F2

We now give the following definitions in the extended complex plane:
Definition 2 
([10,16,17]). Let us consider g : I ⊆ ℂ → ℂ as a rational map on the Riemann sphere, where I is a subset of the complex numbers ℂ. Then, a point z_0 is said to be a fixed point of g if
g ( z 0 ) = z 0 .
Again, for any point z ∈ ℂ, the orbit of the point z can be defined as the set
Orb(z) = {z, g(z), g^2(z), …, g^n(z), …}.
Definition 3 
([10,16]). A periodic point z_0 is said to be of period k if k is the smallest positive integer such that g^k(z_0) = z_0.
Remark 1. 
If z 0 is a periodic point of period k, then clearly, it is a fixed point for g k .
Definition 4 
([10,16,17]). Let z* be a zero of the function F. Then, the basin of attraction of the zero z* is defined as the set of all initial approximations z_0 such that the numerical iterative method starting with z_0 converges to z*. It can be written as
B(z*) = {z_0 : z_{n+1} = g^n(z_0) converges to z*}.
Here, g n is any fixed point iterative method.
Remark 2. 
For example, in the case of Newton’s method,
z_{n+1} = g(z_n),
g(z_n) = z_n − F(z_n)/F'(z_n),  n = 0, 1, 2, ….
We can write the basin of attraction of the zero z* for Newton's method as follows:
B(z*) = {z_0 : z_{n+1} = g^n(z_0) converges to z*}.
Definition 5 
([10,16,17]). The Julia set of a nonlinear map g(z), denoted J(g), is defined as the closure of the set of its repelling periodic points [18]. The complement of the Julia set J(g) is called the Fatou set F(g).
Remark 3. 
(i)
The Julia set of a nonlinear map may also be defined as the common boundary shared by the basins of roots, and the Fatou set may also be defined as the set that contains the basin of attraction.
(ii)
Sometimes, the Fatou set of a nonlinear map may also be defined as the solution space and the Julia set of a nonlinear map may also be defined as the error space;
(iii)
Fractals are very complicated phenomena that may be defined as self-similar unexpected geometric objects that are repeated at every small scale ([19]).
We plotted the dynamics of the iterative methods for various functions and then examined the theoretical and numerical results with the help of the dynamic results. A dynamic study helps us to understand the convergence and stability of the methods [10]. We applied our method on a square [−5, 5] × [−5, 5] of 700 × 700 points with a tolerance |f(z_n)| < 5 × 10^{−2} and a maximum of 30 iterations. For any function, if the sequence generated by an iterative method with initial point z_0 converges to a zero z* in the square, then the point z_0 lies in the basin of attraction of this zero, and we assign a fixed color to the point z_0 ([20]).
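The grid classification behind these plots can be sketched as follows. This is a small-scale Python sketch of the experiment just described, not the code used for the figures: a coarse 50 × 50 grid replaces the 700 × 700 one, and since the signum rule for λ_n is not defined for complex arguments in the text, a fixed λ = 0.5 is assumed here:

```python
def F2(z):
    return ((z - 2.0) * z + 1.0) * z - 1.0   # z^3 - 2z^2 + z - 1 (Horner form)

def dF2(z):
    return (3.0 * z - 4.0) * z + 1.0         # 3z^2 - 4z + 1

def iterations_to_converge(z0, lam=0.5, beta=0.9, tol=5e-2, max_iter=30):
    """Iterations of method (7) until |F2(z)| < tol; returning max_iter
    marks the point as non-convergent (the red regions in the figures)."""
    def G(w):
        return w + F2(w) / (lam * F2(w) - dF2(w))
    z = z0
    for n in range(max_iter):
        if abs(F2(z)) < tol:
            return n
        try:
            z = G((1.0 - beta) * z + beta * G(z))
        except (ZeroDivisionError, OverflowError):
            return max_iter
    return max_iter

# Coarse grid over the square [-5, 5] x [-5, 5].
pts = [-5.0 + 10.0 * k / 49 for k in range(50)]
basin = [[iterations_to_converge(complex(a, b)) for a in pts] for b in pts]
assert len(basin) == 50 and len(basin[0]) == 50
assert iterations_to_converge(1.7 + 0j) <= 5   # a point near the real root converges fast
```

Coloring each grid point by the returned iteration count (and by which zero was reached) produces the basin pictures in Figures 4–8.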
In the following, we describe the speed of convergence and dynamics of the considered methods under two cases for finding the complex roots of functions. In the first case, we plotted the speed of convergence and dynamics of Newton’s method, Fang et al.’s method [11], and the proposed method for functions f 1 , f 2 (for which the third-order derivative does not exist). In the second case, we studied the speed of convergence and dynamics of Newton’s method, Wang and Liu’s method [13], and the proposed method for functions F 1 and F 2 (for which the third-order derivative exists).

6.1. Functions for Which the Third-Order Derivative Does Not Exist

We took the following two functions, which are differentiable only two times.
f1(x) = x^{5/2} − exp(x) + 1, x* = 0.0,
f2(x) = x^4 sin(1/x), x* = 0.31830988618379.
For the function f1(x) = x^{5/2} − exp(x) + 1, x* = 0.0, the dynamics and speed of convergence of the various methods are shown in Figure 4a–c. It is clear from Figure 4 that the proposed method with |λ_n| = 0.5 and β_n = 0.9 generates a Fatou set with larger orbits and darker color and a Julia set with fewer fractal boundaries and less chaotic behavior. Newton's method shows some fractal boundaries and chaotic behavior in the middle and right side of Figure 4a. The dynamics of Fang et al.'s method [11] generate a Fatou set with smaller orbits but a larger Julia set, and it is thus the worst method here.
The dynamics and speed of convergence of Newton's method, Fang et al.'s method, and the proposed method for f2(x) = x^4 sin(1/x) are plotted in Figure 5a, Figure 5b, and Figure 5c, respectively. Clearly, the fractal pattern graph of Newton's method has a large Julia set with fractal boundaries and chaotic behavior, whereas the proposed method and Fang et al.'s method [11] have a large Fatou set with basins; however, both methods have some nonconverging regions, shown in red on the left side of the figures.

6.2. Functions for Which the Third-Order Derivative Exists

We took the following two functions, which are differentiable three times.
F1(x) = x sin x + cos x − 0.6, x* = 2.54623173142842,
F2(x) = x^3 − 2x^2 + x − 1, x* = 1.75487766624669.
For F1(x) = x sin x + cos x − 0.6, the dynamics of Newton's method, Wang and Liu's method [13], and the proposed method can be seen in Figure 6a, Figure 6b, and Figure 6c, respectively. Figure 6 shows that the proposed method with |λ_n| = 0.5 and β_n = 0.9 is the best method, having a Fatou set with larger orbits and darker color and a Julia set with fewer fractal boundaries and less chaotic behavior. Wang and Liu's method [13] generates chaotic behavior throughout Figure 6b. Newton's method generates a Fatou set with smaller orbits and a Julia set with less chaotic behavior, with reddish color in the middle of the figure (see Figure 6a). This is the reason why Newton's method takes many iterations and sometimes fails.
The dynamics of Newton's method, Wang and Liu's method, and the proposed method for the function F2(x) = x^3 − 2x^2 + x − 1 are shown in Figure 7a–c. The failure of Newton's method with the starting point x_0 = 1.0, shown in Table 1, is confirmed in Figure 7a. The speed of convergence of Newton's method and Wang and Liu's method [13] is slow, with a fractal Julia set and chaotic behavior, in comparison with the proposed method.

6.3. Dynamics of the Proposed Method with Variable Value of β for Example F2

We plotted the speed of convergence and dynamics of the proposed method with variable values of β for F2(x) = x^3 − 2x^2 + x − 1, x* = 1.75487766624669. The results are shown in Figure 8. It is clear from the figure that the speed of convergence of the proposed method increases with the value of β, as the Fatou set grows with larger orbits and a darker color. Moreover, for the value β = 0.9, the speed of convergence is optimal, with larger orbits and less chaotic behavior in comparison with β = 0.1, 0.3, 0.5, and 0.7.

7. Future Work

In future research, using our proposed method, we may consider the problem of solving algebraic equations having roots with multiplicity. It will be interesting to see the performance of a derivative-free version of the proposed method on the problems considered in this study. The proposed method may also be discussed in Banach spaces to solve systems of equations. Thus, the proposed method indeed has potential extensions, which will be the topic of our future research.

8. Conclusions

We presented a new Newton-like normal S-iteration method for finding the root of the nonlinear equation f(x) = 0. Our theoretical results show that, due to quadratic convergence, it requires only second-order rather than third-order differentiability. The numerical results and the various graphic illustrations show that, in spite of being a second-order method, the proposed method is effective and superior when Newton's method fails, and it performs better than the same-order method of Fang et al. [11] and the third-order method of Wang and Liu [13], as it converges to the root faster and more efficiently for different values of λ_n with β_n = 0.9. Further, we showed that this convergence of the proposed method is accelerated for a sequence of variable values of β_n converging to one. The results of the dynamic analysis also support the theoretical and numerical results concerning the convergence and stability behavior of the proposed method. Thus, from a practical point of view, the new Newton-like normal S-iteration method has definite practical utility.

Author Contributions

Conceptualization, M.K.S. and I.K.A.; methodology, M.K.S. and I.K.A.; software, M.K.S., I.K.A. and A.K.S.; validation, M.K.S. and I.K.A.; formal analysis, M.K.S. and I.K.A.; investigation, M.K.S. and I.K.A.; resources, M.K.S. and I.K.A.; data curation, M.K.S. and I.K.A.; writing—original draft preparation, M.K.S. and I.K.A.; writing—review and editing, M.K.S. and I.K.A.; visualization, M.K.S. and I.K.A.; supervision, M.K.S. and I.K.A.; project administration, M.K.S. and I.K.A.; funding acquisition, M.K.S. and I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Baccouch, M. A Family of High Order Numerical Methods for Solving Nonlinear Algebraic Equations with Simple and Multiple Roots. Int. J. Appl. Comput. Math. 2017, 3, 1119–1133. [Google Scholar] [CrossRef]
  2. Singh, M.K.; Singh, A.K. The Optimal Order Newton’s Like Methods with Dynamics. Mathematics 2021, 9, 527. [Google Scholar] [CrossRef]
  3. Singh, M.K.; Argyros, I.K. The Dynamics of a Continuous Newton-like Method. Mathematics 2022, 10, 3602. [Google Scholar] [CrossRef]
  4. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  5. Argyros, I.K.; Hilout, S. On Newton’s Method for Solving Nonlinear Equations and Function Splitting. Numer. Math. Theor. Meth. Appl. 2011, 4, 53–67. [Google Scholar]
  6. Ardelean, G.; Balog, L. A qualitative study of Agarwal et al. iteration procedure for fixed points approximation. Creat. Math. Inform. 2016, 25, 135–139. [Google Scholar] [CrossRef]
  7. Ardelean, G. A comparison between iterative methods by using the basins of attraction. Appl. Math. Comput. 2011, 218, 88–95. [Google Scholar] [CrossRef]
  8. Deng, J.J.; Chiang, H.D. Convergence region of Newton iterative power flow method: Numerical studies. J. Appl. Math. 2013, 4, 53–67. [Google Scholar] [CrossRef]
  9. Kotarski, W.; Gdawiec, K.; Lisowska, A. Polynomiography via Ishikawa and Mann iterations. In International Symposium on Visual Computing; Springer: Berlin/Heidelberg, Germany, 2012; pp. 305–313. [Google Scholar]
  10. Amat, S.; Busquier, S.; Plaza, S. Review of some iterative root-finding methods from a dynamical point of view. Scientia 2004, 10, 3–35. [Google Scholar]
  11. Fang, L.; He, G.; Hu, Z. A cubically convergent Newton-type method under weak conditions. J. Comput. Appl. Math. 2008, 220, 409–412. [Google Scholar] [CrossRef]
  12. Wu, X.Y. A new continuation Newton-like method and its deformation. Appl. Math. Comput. 2000, 112, 75–78. [Google Scholar] [CrossRef]
  13. Wang, H.; Liu, H. Note on a Cubically Convergent Newton–Type Method Under Weak Conditions. Acta Appl. Math. 2010, 110, 725–735. [Google Scholar] [CrossRef]
  14. Sahu, D.R. Applications of the S-iteration process to constrained minimization problems and split feasibility problems. Fixed Point Theory 2011, 12, 187–204. [Google Scholar]
  15. Sahu, D.R. Strong convergence of a fixed point iteration process with applications. In Proceedings of the International Conference on Recent Advances in Mathematical Sciences and Applications; 2009; pp. 100–116. Available online: https://scholar.google.ca/scholar?hl=en&as_sdt=0%2C5&q=Strong+convergence+of+a+fixed+point+iteration+process+with+applications&btnG= (accessed on 18 October 2022).
  16. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599. [Google Scholar] [CrossRef]
  17. Argyros, I.K.; Magrenan, A.A. Iterative Methods and Their Dynamics with Applications: A Contemporary Study; CRC Press, Taylor and Francis: Boca Raton, FL, USA, 2017. [Google Scholar]
  18. Julia, G. Memoire sure l’iteration des fonction rationelles. J. Math. Pures et Appl. 1918, 81, 47–235. [Google Scholar]
  19. Mandelbrot, B.B. The Fractal Geometry of Nature; WH Freeman: New York, NY, USA, 1983; ISBN 978-0-7167-1186-5. [Google Scholar]
  20. Susanto, H.; Karjanto, N. Newton’s method’s basins of attraction revisited. Appl. Math. Comput. 2009, 215, 1084–1090. [Google Scholar] [CrossRef]
Figure 1. Graphs of the function values and the roots: (a) graph for f2; (b) graph for f3; (c) graph for f5.
Figure 2. Graphs of the roots versus the number of iterations: (a) graph for f2; (b) graph for f3; (c) graph for f5.
Figure 3. Graphs of the function values versus the number of iterations: (a) graph for f2; (b) graph for f3; (c) graph for f5.
Figure 4. Dynamics of different methods for f1(x) = x^5 − 2 exp(x) + 1: (a) Newton's method; (b) Fang et al.'s method; (c) proposed method.
Figure 5. Dynamics of different methods for f2(x) = x^4 sin(1/x): (a) Newton's method; (b) Fang et al.'s method; (c) proposed method.
Figure 6. Dynamics of different methods for F1(x) = x sin(x) + cos(x) − 0.6: (a) Newton's method; (b) Wang and Liu's method; (c) proposed method.
Figure 7. Dynamics of different methods for F2(x) = x^3 − 2x^2 + x − 1: (a) Newton's method; (b) Wang and Liu's method; (c) proposed method.
Figure 8. Dynamics of the proposed method with variable values of β for F2(x) = x^3 − 2x^2 + x − 1: (a) proposed method at β = 0.1; (b) proposed method at β = 0.3; (c) proposed method at β = 0.5; (d) proposed method at β = 0.7; (e) proposed method at β = 0.9.
Table 1. Functions for which third-order differentials exist. The last four columns give the normal S-iteration method.

| f(x) | x0 | Newton's Method | Wang and Liu's Method | \|λn\| = 0.5, βn = 0.5 | \|λn\| = 0.5, βn = 0.9 | λn as Wang and Liu, βn = 0.5 | λn as Wang and Liu, βn = 0.9 |
|------|------|----|----|----|----|----|----|
| F1 | 0 | F | 5 | 7 | 5 | 5 | 4 |
| F1 | −4 | 6 | 5 | 5 | 4 | 6 | 5 |
| F2 | 1 | F | 7 | 5 | 5 | 5 | 4 |
| F2 | 3 | 7 | 6 | 6 | 6 | 6 | 5 |
| F3 | 5 | D | 5 | 5 | 4 | 7 | 6 |
| F3 | 2 | 6 | 4 | 3 | 3 | 5 | 4 |
| F4 | 3 | D | 4 | 5 | 4 | 5 | 4 |
| F4 | −1 | 5 | 3 | 4 | 3 | 4 | 3 |
| F5 | 4 | NC | 6 | 6 | 5 | 7 | 6 |
| F5 | 2 | 5 | 4 | 4 | 4 | 4 | 4 |
| F6 | 0.73 | D | 8 | 6 | 4 | 8 | 4 |
| F6 | −3 | 23 | 15 | 11 | 9 | 11 | 9 |
| F7 | 0.7 | D | 5 | 4 | 4 | 4 | 4 |
| F7 | 2 | 6 | 4 | 4 | 3 | 4 | 3 |
Table 2. Functions for which the third-order differential does not exist. The last six columns give the normal S-iteration method.

| f(x) | x0 | Newton Method | Fang et al. Method | λn as Wang and Liu, βn = 0.9 | λn as Wang and Liu, βn = 0.5 | \|λn\| = 0.5, βn = 0.9 | \|λn\| = 0.5, βn = 0.5 | \|λn\| = 1, βn = 0.9 | \|λn\| = 1, βn = 0.5 |
|------|------|----|----|----|----|----|----|----|----|
| f1 | 0.5 | F | 7 | 3 | 4 | 3 | 4 | 4 | 5 |
| f2 | 1.0 | 9 | 9 | 6 | 7 | 6 | 7 | 6 | 8 |
| f2 | 0.1 | 5 | 5 | 3 | 4 | 3 | 4 | 3 | 4 |
| f3 | 0.3 | 85 | 58 | 47 | 60 | 47 | 60 | 47 | 60 |
| f3 | 1.0 | 88 | 61 | 49 | 62 | 49 | 62 | 49 | 62 |
| f4 | 2.0 | F | 9 | 4 | 5 | 5 | 4 | 4 | 4 |
| f5 | 1.0 | 89 | 43 | 33 | 42 | 33 | 42 | 33 | 43 |
| f6 | 2.0 | 10 | F | 3 | 4 | 5 | 4 | 3 | 4 |
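For readers who want to experiment with iteration counts of this kind, a minimal solver sketch is given below. It assumes the generic normal S-iteration form of Sahu, x_{n+1} = T((1 − βn)xn + βn T(xn)), combined with a Wu-type Newton-like map T(x) = x − f(x)/(λn f(x) + f′(x)) in which the sign of λn is chosen so that λn f(x) f′(x) ≥ 0. The paper's exact operator, tolerance, and stopping rule may differ, so the counts this sketch produces are illustrative rather than a reproduction of Table 2.

```python
import math

def normal_s_iteration(f, df, x0, beta=0.9, lam_abs=0.5,
                       tol=1e-12, max_iter=200):
    """Normal S-iteration x_{n+1} = T(y_n), y_n = (1 - beta)*x_n + beta*T(x_n).

    T is a Wu-type Newton-like map T(x) = x - f(x)/(lam*f(x) + f'(x)),
    where sign(lam) is chosen so that lam*f(x)*f'(x) >= 0; this keeps the
    denominator away from zero when f'(x) is small."""
    def T(x):
        fx, dfx = f(x), df(x)
        lam = lam_abs if fx * dfx >= 0 else -lam_abs
        return x - fx / (lam * fx + dfx)

    x = x0
    for n in range(1, max_iter + 1):
        y = (1 - beta) * x + beta * T(x)
        x_new = T(y)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, max_iter

# f2(x) = x^4 sin(1/x) has a root at 1/pi (~0.3183098861837907)
f  = lambda x: x**4 * math.sin(1.0 / x)
df = lambda x: 4 * x**3 * math.sin(1.0 / x) - x**2 * math.cos(1.0 / x)
root, iters = normal_s_iteration(f, df, x0=1.0, beta=0.9, lam_abs=0.5)
```

With these (assumed) choices the iteration converges to 1/π from x0 = 1.0, where the plain Newton step from the same region makes only slow progress because f′ is small near the root.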
Table 3. The proposed method for different values of λn and βn.

| f(x) | x0 | βn | \|λn\| = 0.5 | \|λn\| = 1 | λn as Wang and Liu |
|------|------|-----|----|----|----|
| F6 | 0.73 | 0.1 | 13 | 9 | 9 |
| | | 0.3 | 7 | 8 | 8 |
| | | 0.5 | 6 | 8 | 8 |
| | | 0.7 | 5 | 5 | 5 |
| | | 0.9 | 4 | 4 | 4 |
| F6 | −3.0 | 0.1 | 14 | 15 | 14 |
| | | 0.3 | 12 | 13 | 13 |
| | | 0.5 | 11 | 11 | 11 |
| | | 0.7 | 10 | 10 | 10 |
| | | 0.9 | 8 | 9 | 9 |
Table 4. Normal S-iteration method with a variable value of β. Iterates are listed for the two sequences βn1 and βn2; a blank cell means the column has already converged.

| f(x) | βn1, \|λn\| = 0.5 | βn1, λn as Wang and Liu | βn2, \|λn\| = 0.5 | βn2, λn as Wang and Liu |
|------|------|------|------|------|
| F1(x) | −4.00000000000000 | −4.00000000000000 | −4.00000000000000 | −4.00000000000000 |
| | −3.019890471239318 | −3.269614812666443 | −2.787748595141695 | −3.031336002398129 |
| | −2.647689829523139 | −2.830596759888509 | −2.550732240466982 | −2.602227130430227 |
| | −2.552574309373607 | −2.597269129310047 | −2.546231963106547 | −2.546267106449917 |
| | −2.546259317314531 | −2.547305870288047 | −2.546231731428419 | −2.546231731433164 |
| | −2.546231731968219 | −2.546232155885697 | | −2.546231731428418 |
| | −2.546231731428418 | −2.546231731428486 | | |
| | | −2.546231731428418 | | |
| f2(x) | 1.000000000000000 | 1.000000000000000 | 1.000000000000000 | 1.000000000000000 |
| | 0.690862097114279 | 0.713251419170333 | 0.588489366623379 | 0.613537499145787 |
| | 0.500158984920628 | 0.506202917231944 | 0.388129276855868 | 0.391429495638206 |
| | 0.391718592801076 | 0.390681052832476 | 0.323527870651833 | 0.323501217147855 |
| | 0.338547374719877 | 0.337061756299449 | 0.318314259700137 | 0.318313848040219 |
| | 0.320626856711258 | 0.320222652750527 | 0.318309886184780 | 0.318309886184561 |
| | 0.318346374736978 | 0.318333591884421 | 0.318309886183791 | 0.318309886183791 |
| | 0.318309895590689 | 0.318309889954683 | | |
| | 0.318309886183791 | 0.318309886183791 | | |
Table 5. The average number of iterations (ANI) in the normal S-iteration method.

| β | \|λn\| = 0.5 | \|λn\| = 1 | λn as Wang and Liu |
|-----|----------|----------|----------|
| 0.1 | 5.340000 | 5.080000 | 5.100000 |
| 0.2 | 5.040000 | 4.920000 | 4.920000 |
| 0.3 | 4.800000 | 4.720000 | 4.600000 |
| 0.4 | 4.360000 | 4.480000 | 4.420000 |
| 0.5 | 4.240000 | 4.300000 | 4.280000 |
| 0.6 | 4.100000 | 4.080000 | 4.140000 |
| 0.7 | 3.800000 | 3.760000 | 3.640000 |
| 0.8 | 3.700000 | 3.600000 | 3.540000 |
| 0.9 | 3.620000 | 3.300000 | 3.340000 |
Table 6. The average number of iterations (ANI) in the normal S-iteration method.

| β | \|λn\| = 0.5 | \|λn\| = 1 | λn as Wang and Liu |
|-----|----------|----------|----------|
| 0.1 | 5.725490 | 5.411765 | 5.333333 |
| 0.2 | 5.411765 | 5.196078 | 5.137255 |
| 0.3 | 5.176471 | 4.941176 | 4.882353 |
| 0.4 | 4.980392 | 4.823529 | 4.764706 |
| 0.5 | 4.705882 | 4.666667 | 4.607843 |
| 0.6 | 4.431373 | 4.549020 | 4.450980 |
| 0.7 | 4.254902 | 4.352941 | 4.294118 |
| 0.8 | 3.764706 | 4.137255 | 4.058824 |
| 0.9 | 3.803922 | 3.764706 | 3.666667 |
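Tables 5 and 6 aggregate performance over many starting points. The sketch below shows one way such an average-number-of-iterations (ANI) figure can be computed: run the solver from a grid of starting points and average the iteration counts. The starting grid, the test function F2(x) = x^3 − 2x^2 + x − 1, the Wu-type inner map, and the stopping rule are illustrative assumptions, not the paper's exact experimental setup.

```python
def newton_like_T(f, df, x, lam_abs=0.5):
    # Wu-type Newton-like map; sign of lam chosen so lam*f(x)*f'(x) >= 0
    fx, dfx = f(x), df(x)
    lam = lam_abs if fx * dfx >= 0 else -lam_abs
    return x - fx / (lam * fx + dfx)

def iterations_to_converge(f, df, x0, beta, tol=1e-10, max_iter=100):
    # Count normal S-iteration steps until successive iterates agree to tol.
    x = x0
    for n in range(1, max_iter + 1):
        y = (1 - beta) * x + beta * newton_like_T(f, df, x)
        x_new = newton_like_T(f, df, y)
        if abs(x_new - x) < tol:
            return n
        x = x_new
    return max_iter

# ANI over a grid of starting points for F2(x) = x^3 - 2x^2 + x - 1
f  = lambda x: x**3 - 2 * x**2 + x - 1
df = lambda x: 3 * x**2 - 4 * x + 1
starts = [1.5 + 0.1 * k for k in range(11)]   # 1.5, 1.6, ..., 2.5
for beta in (0.1, 0.5, 0.9):
    ani = sum(iterations_to_converge(f, df, x0, beta)
              for x0 in starts) / len(starts)
    print(f"beta={beta:.1f}  ANI={ani:.2f}")
```

As in the tables, larger β pushes each step closer to a full application of the Newton-like map, so the averaged counts tend to fall as β grows.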