
# Geometrically Constructed Family of the Simple Fixed Point Iteration Method

1. University Institute of Engineering and Technology, Panjab University, Chandigarh 160014, India
2. Department of Mathematics, Goswami Ganesh Dutta Sanatan Dharma College, Chandigarh 160030, India
3. Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
4. Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
5. Department of Computer Science, University of Oklahoma, Norman, OK 73071, USA
6. Institute of IR 4.0, The National University of Malaysia, Bangi 43600, UKM, Malaysia
7. Department of Mathematics & Statistics, McMaster University, Hamilton, ON L8S 4K1, Canada
8. Center for Dynamics and Institute for Analysis, Faculty of Mathematics, Technische Universität Dresden, 01062 Dresden, Germany
\* Author to whom correspondence should be addressed.
Mathematics 2021, 9(6), 694; https://doi.org/10.3390/math9060694
Received: 8 February 2021 / Revised: 10 March 2021 / Accepted: 19 March 2021 / Published: 23 March 2021

## Abstract

This study presents a new one-parameter family of the well-known fixed point iteration method for solving nonlinear equations numerically. The proposed family is derived by implementing approximation through a straight line. The presence of an arbitrary parameter in the proposed family improves the convergence characteristics of the simple fixed point iteration, as it has a wider domain of convergence. Furthermore, we propose several two-step predictor–corrector iterative schemes for finding fixed points, which inherit the advantages of the proposed fixed point iterative schemes. Finally, several examples are given to further illustrate their efficiency.
MSC:
47H99; 49M15; 65G99; 65H10

## 1. Introduction

The fixed point iteration is probably the simplest and most important root-finding algorithm in numerical analysis [1,2]. Fixed point methods and fixed point theorems have many applications in mathematics and engineering. One way to study numerical solvers for ordinary differential equations, such as Runge–Kutta methods, is to recast them as fixed point iterations. The well-known Newton's method [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17] is also a special case of a fixed point iteration. Fixed point theory has been widely used as a tool for solving functional-differential equations. Furthermore, fixed point problems are equivalent to root-finding problems and are sometimes easier to analyze, while also posing interesting problems of their own.
Suppose that we wish to find the approximate solution of the nonlinear equation
$f ( x ) = 0 ,$
where $f : [ a , b ] ⊂ R → R$ is a sufficiently differentiable function with simple zeros.
This can be rewritten to obtain an equation of the form
$x = ϕ ( x ) ,$
in such a way that any solution of (2), which is a fixed point, is a root of the original Equation (1). Root-finding problems and fixed-point problems are equivalent classes in the following sense:
$f ( x ) has a zero at x = α ⇔ ϕ ( x ) = x − f ( x ) has a fixed point at x = α .$
Geometrically, a fixed point occurs where the graph of $y = \phi(x)$ intersects the graph of the straight line $y = x$. Starting from a suitable approximation $x_0$, the recursive process
$x_{n+1} = \phi(x_n), \quad n = 0, 1, 2, \cdots,$
is called the fixed point iteration method. This method is locally linearly convergent if $|\phi'(x)| < 1$ for all $x \in [a, b]$.
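As a quick illustration (ours, not part of the original paper; the function name `fixed_point` and the tolerance settings are illustrative choices), the basic iteration above can be sketched in a few lines of Python:

```python
import math

def fixed_point(phi, x0, tol=1e-12, max_iter=100):
    """Iterate x_{n+1} = phi(x_n) until two successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# x = cos(x): |phi'(x)| = |sin x| < 1 near the fixed point,
# so the iteration converges (to about 0.739085).
root = fixed_point(math.cos, 1.0)
print(root)
```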

## 2. Geometric Derivation of the Family

Assume that Equation (2) has a fixed point at $x = α$. Let
$y = ϕ ( x )$
represent the graph of the function $ϕ ( x )$.
Let $x_0$ be an initial guess to the required fixed point and $\phi(x_0)$ be the corresponding point on the graph of the function $y = \phi(x)$. The idea is to approximate the nonlinear function $y = \phi(x)$ by a linear approximation. Therefore, we assume that
$( y − ϕ ( x 0 ) ) + m ( x − x 0 ) = 0$
be a linear approximation to the curve $y = \phi(x)$, where $m \ge 0$ is the slope of the straight line. The expression (5) can be rewritten as $y = m(x_0 - x) + \phi(x_0)$, and this line passes through the points $(x_0, \phi(x_0))$ and $(x_1, x_1)$, where $x_1$ is its point of intersection with the line $y = x$. More details can be found in Figure 1:
The point of intersection of (5) with the straight line $y = x$ provides the next approximation to the required fixed point; let $x = x_1$ be this point of intersection. Therefore, at the point of intersection, the expression (5) yields
$x 1 = m x 0 + ϕ ( x 0 ) m + 1 .$
Repeating this construction at each iterate, the general form of the above expression (6) can be written as follows:
$x n + 1 = m x n + ϕ ( x n ) m + 1 , n ≥ 0 .$
Next, we want to demonstrate the convergence order of the proposed iterative scheme (7). Therefore, we rewrite the expression (7) in the following way:
$x n + 1 = m x n + ϕ ( x n ) m + 1 = h ( x n ) ,$
or equivalently $h(x) = \frac{m x + \phi(x)}{m+1}$. If $\alpha$ is a fixed point of $\phi$, i.e., $\phi(\alpha) = \alpha$, then
$h(\alpha) = \frac{m \alpha + \phi(\alpha)}{m+1} = \frac{m \alpha + \alpha}{m+1} = \alpha.$
So, we conclude that $\alpha$ is also a fixed point of $h$.
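To make the construction concrete, here is a minimal Python sketch of the family (7) (ours, not from the paper; names are illustrative), applied to the fixed point problem $x = e^{-x} \cos x$ used in the numerical section:

```python
import math

def fp_family(phi, x0, m, tol=1e-12, max_iter=500):
    """One-parameter family (7): x_{n+1} = (m*x_n + phi(x_n)) / (m + 1);
    m = 0 recovers the classical fixed point iteration."""
    x = x0
    for _ in range(max_iter):
        x_new = (m * x + phi(x)) / (m + 1)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Fixed point of x = e^{-x} cos x; m = 1 averages x_n and phi(x_n)
# and damps the oscillation of the plain iteration.
phi = lambda x: math.exp(-x) * math.cos(x)
alpha = fp_family(phi, 0.52, m=1.0)
print(alpha)
```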
Theorem 1.
Let $\phi(x)$ and $\phi'(x)$ be continuous functions on the interval $[a, b]$. In addition, assume that $m \ge 0$, that $a \le x \le b \Rightarrow a \le \phi(x) \le b$, and that $\lambda = \max_{x \in [a,b]} |h'(x)| < 1$. Then (i) $x = h(x)$ has a unique solution $\alpha \in [a, b]$; (ii) for any initial guess $x_0 \in [a, b]$, the iteration $x_{n+1} = h(x_n), n = 0, 1, 2, \cdots$ will converge to α.
Proof.
First of all, we will prove the first part. Since $ϕ ′ ( x )$ exists in the interval $[ a , b ]$, therefore, this implies that $h ′ ( x )$ exists in the interval $[ a , b ]$. For any two points $u , v ∈ [ a , b ]$, we have
$h ( u ) − h ( v ) = h ′ ( c ) ( u − v ) , for some c ∈ ( u , v ) ,$
which yields further
$| h(u) - h(v) | = | h'(c) |\, | u - v | \le \lambda\, | u - v |.$
Let us suppose that there are two solutions $α$ and $β$ of $x = h ( x )$ in the interval $[ a , b ]$. So, we have
$h ( α ) = α , and h ( β ) = β .$
From (8), we further have
$| h ( α ) − h ( β ) | ≤ λ | α − β | .$
If $α ≠ β$, then $λ ≥ 1$ which contradicts the fact that $λ < 1$. Therefore, we have
$α = β .$
Hence, $x = h ( x )$ has a unique solution in [a, b].
Next, we move to the second part.
If $x 0 ∈ [ a , b ]$, then $ϕ ( x 0 ) ∈ [ a , b ]$. This further implies that
$\frac{m x_0 + \phi(x_0)}{m + 1} \in [a, b].$
Therefore, $h ( x 0 ) ∈ [ a , b ]$ and hence $x 1 ∈ [ a , b ]$.
Repeating the above process inductively, one gets ${ x n } ∈ [ a , b ]$.
Furthermore, one can have
$| x_n - \alpha | = | h(x_{n-1}) - h(\alpha) | = | h'(c_n) |\, | x_{n-1} - \alpha | \le \lambda\, | x_{n-1} - \alpha |, \quad \text{for some } c_n \in (\alpha, x_{n-1}).$
Continuing inductively
$| x_n - \alpha | \le \lambda^n\, | x_0 - \alpha | \to 0 \quad (\text{since } \lambda < 1).$
Hence, the sequence ${ x n }$ of $x n + 1 = h ( x n )$ converges to $α$. □
Theorem 2.
Let $\phi : \mathbb{R} \to \mathbb{R}$ be an analytic function in a region containing the fixed point $x = \alpha$. In addition, assume that the initial guess $x = x_0$ is sufficiently close to the required fixed point for guaranteed convergence. Then, the proposed scheme (7) has at least linear convergence, with error equation
$e_{n+1} = \frac{m + \phi'(\alpha)}{m+1}\, e_n + \frac{\phi''(\alpha)}{2(m+1)}\, e_n^2 + O(e_n^3).$
In the case $m = -\phi'(\alpha) \approx -\phi'(x_n) \approx -\frac{\phi(x_{n+1}) - \phi(x_n)}{x_{n+1} - x_n}$, which is computable since $x_{n+1} = \phi(x_n)$, the scheme (7) reaches at least the second order of convergence.
Proof.
Suppose $x n ≈ α$. We can write
$h ( x n ) = h ( α ) + h ′ ( α ) ( x n − α ) + h ″ ( α ) ( x n − α ) 2 2 ! + O ( ( x n − α ) 3 ) ,$
by Taylor’s expansion in the neighborhood of fixed point “$α$”.
Therefore, one gets
$x n + 1 − α = h ′ ( α ) ( x n − α ) + h ″ ( α ) ( x n − α ) 2 2 ! + O ( ( x n − α ) 3 ) .$
As $h ( x ) = m x + ϕ ( x ) m + 1$, we get $h ′ ( α ) = m + ϕ ′ ( α ) m + 1$ and $h ″ ( α ) = ϕ ″ ( α ) m + 1$.
Substituting these values in (10), one can have
$e_{n+1} = \frac{m + \phi'(\alpha)}{m+1}\, e_n + \frac{\phi''(\alpha)}{2(m+1)}\, e_n^2 + O(e_n^3).$
Furthermore, if $m = -\phi'(\alpha)$, then $e_{n+1} = \frac{\phi''(\alpha)}{2(m+1)}\, e_n^2 + O(e_n^3)$.
This implies that scheme (7) has at least second-order convergence. □
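A practical way to realize the choice $m = -\phi'(\alpha)$ without knowing $\alpha$ is to re-estimate $\phi'$ at each iterate, e.g. by a finite difference. The following Python sketch (our illustration, not the paper's implementation; the step size `h` is an assumption) does exactly this:

```python
import math

def fp_adaptive(phi, x0, tol=1e-12, max_iter=50, h=1e-7):
    """Scheme (7) with m recomputed at each step as m = -phi'(x_n),
    estimated by a central difference; by Theorem 2 this choice
    yields (at least) second-order convergence."""
    x = x0
    for _ in range(max_iter):
        m = -(phi(x + h) - phi(x - h)) / (2 * h)  # m ~ -phi'(x_n)
        x_new = (m * x + phi(x)) / (m + 1)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

r = fp_adaptive(lambda x: math.exp(-x), 0.6)  # fixed point of x = e^{-x}
print(r)
```

Note that any error in the estimate of $m$ does not shift the limit: the fixed points of $h$ coincide with those of $\phi$ for every admissible $m$, only the speed of convergence changes.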

#### Special Cases

Here, we consider the role of the parameter $m \ge 0$ and derive the following formulas:
1.
For $m = 0$, Formula (7) corresponds to the classical fixed point method $x n + 1 = ϕ ( x n )$.
2.
For $m = \frac{1 - \gamma}{\gamma}$, where $\gamma \in (0, 1]$, Formula (7) corresponds to the following well-known Schaefer's iteration scheme [18]:
$x n + 1 = ( 1 − γ ) x n + γ ϕ ( x n ) .$
3.
For $m = \frac{1 - \gamma_n}{\gamma_n}$, where ${\gamma_n}$ is a real sequence in $(0, 1]$, Formula (7) corresponds to the following well-known Mann's iteration [19]:
$x n + 1 = ( 1 − γ n ) x n + γ n ϕ ( x n ) .$
4.
By inserting $m = 1$ in scheme (7), one obtains the following well-known Kranselski's iteration [20]:
$x_{n+1} = \frac{x_n + \phi(x_n)}{2},$
denoted by $(KM)$ for the computational results (see also more recent work on this iteration in the book [21]).
Similarly, we can derive several other formulas by taking different specific values of m. Furthermore, we propose the following new schemes on the basis of some standard means of the two quantities $x_n$ and $\phi(x_n)$ of the same sign:
5.
Geometric mean-based fixed point formula is given by
$x_{n+1} = \sqrt{x_n\, \phi(x_n)}, \quad \text{where } x_0 \ne 0.$
6.
Harmonic mean-based fixed point formula is defined by
$x n + 1 = 2 x n ϕ ( x n ) x n + ϕ ( x n ) , where x 0 ≠ 0 .$
7.
Centroidal mean-based fixed point formula is given as follows:
$x_{n+1} = \frac{2 \left( x_n^2 + x_n \phi(x_n) + \phi^2(x_n) \right)}{3 \left( x_n + \phi(x_n) \right)}.$
8.
The following fixed point formula based on the Heronian mean is defined as
$x_{n+1} = \frac{x_n + \sqrt{x_n\, \phi(x_n)} + \phi(x_n)}{3}.$
9.
The fixed point formula based on the contra-harmonic mean is depicted as follows:
$x n + 1 = x n 2 + ϕ 2 ( x n ) x n + ϕ ( x n ) .$
Remark 1.
Geometric mean-based and Heronian mean-based fixed point formulas are applicable only for finding positive fixed points.
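The mean-based variants above all share the same skeleton, differing only in how $x_n$ and $\phi(x_n)$ are combined. A compact Python sketch (ours; the helper `fp_mean` and the test problem $x = e^{-x}$ are illustrative choices, not from the paper):

```python
import math

def fp_mean(phi, x0, mean, tol=1e-12, max_iter=200):
    """Fixed point iteration x_{n+1} = mean(x_n, phi(x_n))."""
    x = x0
    for _ in range(max_iter):
        x_new = mean(x, phi(x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

geometric = lambda a, b: math.sqrt(a * b)            # requires a*b > 0
harmonic = lambda a, b: 2 * a * b / (a + b)
heronian = lambda a, b: (a + math.sqrt(a * b) + b) / 3

phi = lambda x: math.exp(-x)                         # positive fixed point ~ 0.567143
results = [fp_mean(phi, 0.6, m) for m in (geometric, harmonic, heronian)]
print(results)
```

All three means agree with $x$ when $x = \phi(x)$, so each variant has the same fixed points as the original problem.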

## 3. Two-Step Iterative Schemes

In this section, we present new two-step predictor–corrector iterative schemes using the modified fixed point methods as predictors. There are several two-point [1,22,23] and multi-point [24,25] iterative schemes in the literature for finding fixed points. Here, we mention some of them as follows:
1.
Ishikawa [22] has proposed the following iterative scheme:
$x n + 1 = ( 1 − β n ) x n + β n ϕ ( y n ) , y n = ( 1 − γ n ) x n + γ n ϕ ( x n ) ,$
where ${\beta_n}$ and ${\gamma_n}$ are sequences of positive numbers in $(0, 1]$, as a generalization of the Mann [19] iteration scheme. We denote this method as $(IS)$ for the computational work and choose $\beta_n = \gamma_n = \frac{1}{n^3 + 1}$.
2.
Agarwal et al. [1] have proposed the following iteration scheme, defined as
$x n + 1 = ( 1 − β n ) ϕ ( x n ) + β n ϕ ( y n ) , y n = ( 1 − γ n ) x n + γ n ϕ ( x n ) ,$
where ${\beta_n}$ and ${\gamma_n}$ are sequences of positive numbers in $(0, 1]$. We denote this scheme by $(AS)$ for the computational work and take $\beta_n = \gamma_n = \frac{1}{n^3 + 1}$. For $\gamma_n = 0$, it reduces to the well-known Mann iteration scheme.
3.
Thianwan [23] defined the following two-step iteration scheme as
$x n + 1 = ( 1 − β n ) y n + β n ϕ ( y n ) , y n = ( 1 − γ n ) x n + γ n ϕ ( x n ) ,$
where ${\beta_n}$ and ${\gamma_n}$ are sequences of positive numbers in $(0, 1]$. We denote this method as $(TS)$ for the computational work and choose $\beta_n = \gamma_n = \frac{1}{n^3 + 1}$. This scheme is also known as a modification of Mann's method.
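As a hedged sketch (our Python illustration; the paper's computations use Mathematica, and starting the index at $n = 1$ is our assumption), the Ishikawa-type two-step scheme with the choice $\beta_n = \gamma_n = \frac{1}{n^3+1}$ reads:

```python
import math

def ishikawa(phi, x0, k=12):
    """Two-step Ishikawa iteration (IS) with beta_n = gamma_n = 1/(n^3 + 1)."""
    x = x0
    for n in range(1, k + 1):
        g = 1.0 / (n**3 + 1)
        y = (1 - g) * x + g * phi(x)      # predictor step
        x = (1 - g) * x + g * phi(y)      # corrector step
    return x

# phi(x) = e^{-x}, x0 = 0.6: twelve iterations move the iterate
# toward the fixed point 0.56714...
x12 = ishikawa(lambda x: math.exp(-x), 0.6)
print(x12)
```

Because the step sizes $1/(n^3+1)$ shrink quickly, only the first few iterations contribute significantly.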

#### Modified Schemes

These elementary schemes allow us to propose iterative schemes with any of the proposed methods as the first (predictor) step and these existing methods as the second (corrector) step. For the sake of simplicity, we consider some of the special cases as the predictor part. The resulting modified schemes are depicted in Table 1.

## 4. Numerical Examples

The theoretical results developed in the previous sections are tested in this section. We choose our methods by substituting $m = 1 2$, $m = 1 4$, $m = 1 10$ and $m = − ϕ ′ ( x n )$ in the proposed scheme (7), denoted by $O M 1 , O M 2$, $O M 3$ and $O M 4$, respectively. In addition, we select methods $G M$ and $H M$ from special cases $( 5 )$ and $( 6 )$, respectively.
In order to check the effectiveness of our results, we consider five different types of nonlinear problems, illustrated in Examples (1)–(5). In Table 2, we compare them with the classical fixed point method. In addition, we contrast our methods with the existing Ishikawa and Agarwal methods, and the results are reported in Table 3 and Table 4, respectively. Finally, we compare them with the classical Mann and Thianwan methods, and the computational results are depicted in Table 5. In all the tables, we report the results after twelve iterations (i.e., $k = 12$) with $\gamma_n = \beta_n = \frac{1}{n^3 + 1}$.
Additionally, we obtain the computational order of convergence (COC) by adopting the following technique:
$\rho \approx \frac{\ln | (x_{n+1} - \alpha)/(x_n - \alpha) |}{\ln | (x_n - \alpha)/(x_{n-1} - \alpha) |},$
or the approximate computational order of convergence (ACOC) [26]:
$\rho \approx \frac{\ln | (x_{n+1} - x_n)/(x_n - x_{n-1}) |}{\ln | (x_n - x_{n-1})/(x_{n-1} - x_{n-2}) |}.$
Computations are performed with the package Mathematica 9 with multiple precision arithmetic. The notation $a(\pm b)$ stands for $a \times 10^{\pm b}$.
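The ACOC can be evaluated directly from the iterates; the helper below is our illustrative Python version (not the paper's Mathematica code), sanity-checked on Newton's method, whose order is known to be two:

```python
import math

def acoc(xs):
    """Approximate computational order of convergence (ACOC)
    from the last four iterates in xs."""
    e2 = abs(xs[-1] - xs[-2])
    e1 = abs(xs[-2] - xs[-3])
    e0 = abs(xs[-3] - xs[-4])
    return math.log(e2 / e1) / math.log(e1 / e0)

# Newton's method for x^2 = 2 (quadratic convergence expected):
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
rho = acoc(xs)
print(rho)  # close to 2
```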
Example 1.
Let us consider the following standard test problem
$f ( x ) = cos x − x e x .$
The corresponding fixed point iterative method is given as follows:
$\phi(x_n) = e^{-x_n} \cos x_n.$
The required zero of expression (16) and fixed point for (17) is $α = 0.517757363682459$ with initial guess $x 0 = 0.52$.
Example 2.
We choose the following expression for the comparison with other different fixed point methods
$f ( x ) = e − x − x .$
We can easily obtain the following fixed point iterative method based on expression (18):
$ϕ ( x n ) = e − x n .$
The required zero of expression (18) and fixed point for (19) is $α = 0.567143289740340$ with initial guess $x 0 = 0.6$.
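The effect of the slope $m$ on this example can be sketched numerically (our illustration; exact table entries in the paper may differ due to precision settings):

```python
import math

# phi(x) = e^{-x}, x0 = 0.6, k = 12 iterations of scheme (7)
# for several slopes m; m = 0 is the classical iteration.
phi = lambda x: math.exp(-x)
alpha = 0.567143290409784          # fixed point of x = e^{-x}

errors = {}
for m in (0.0, 0.1, 0.25, 0.5):
    x = 0.6
    for _ in range(12):
        x = (m * x + phi(x)) / (m + 1)
    errors[m] = abs(x - alpha)
    print(f"m = {m}: |x_12 - alpha| = {errors[m]:.2e}")
```

Since $\phi'(\alpha) = -\alpha \approx -0.567$, the choice $m = 0.5$ nearly cancels the linear error term and gives a dramatically smaller error than $m = 0$ after the same twelve iterations.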
Example 3.
Here, we assume the following expression
$f ( x ) = sin x − 10 ( x − 1 ) .$
Based on the expression (20), we have the following fixed point iterative method:
$\phi(x_n) = 1 + \frac{\sin x_n}{10}.$
The required zero of expression (20) and fixed point for (21) is $α = 1.08859775239789$. We select $x 0 = 1.1$ as the initial guess for comparison.
Example 4.
Assume another test problem as follows
$f ( x ) = x 3 + x 2 − 1 .$
Corresponding to expression (22), we have
$\phi(x_n) = \frac{1}{\sqrt{x_n + 1}},$
as the fixed point iterative method. The required zero of expression (22) and fixed point for (23) is $\alpha = 0.754877666347995$. We take the starting point $x_0 = 0.8$ for comparison.
Example 5.
Here, we assume another expression
$f ( x ) = cos x − sin x .$
We have the following expression for the fixed point method
$\phi(x_n) = x_n + \cos x_n - \sin x_n.$
The required zero of expression (24) and fixed point for (25) is $\alpha = \pi/4 \approx 0.785398163397448$. We consider $x_0 = 0.8$ as the initial guess for comparison.

## 5. Role of the Parameter ‘m’

The presence of the arbitrary slope $‘ m ’$ in the proposed family has the following characteristics:
1.
Since $a \le x \le b$ implies that $a \le \phi(x) \le b$, the parameter $m \ge 0$ ensures that the next iterate divides the interval between $x_0$ and $\phi(x_0)$ internally in the ratio $m : 1$ or $1 : m$; otherwise, there would be an external division and hence $h(x) \notin [a, b]$.
2.
Since $h(x) = \frac{m x + \phi(x)}{m+1}$ and $|h'(x_n)| < 1$ is the sufficient condition for the convergence of the modified fixed point method, we have
$\left| \frac{m + \phi'(x_n)}{m + 1} \right| < 1.$
This further implies that
$-(2m + 1) < \phi'(x_n) < 1.$
This is the interval of convergence of our proposed scheme (7). Since $m \ge 0$, (26) represents a wider domain of convergence than that of the classical fixed point method $x = \phi(x)$. In particular, for $m = 1$ (arithmetic mean), (26) gives the following interval of convergence:
$− 3 < ϕ ′ ( x n ) < 1 .$
Therefore, the arithmetic mean formula has a wider interval of convergence than the simple fixed point method.
Remark 2.
For $x = \phi(x)$, there are different ways to choose $\phi(x)$; however, we must select $\phi(x)$ in such a way that the fixed point iteration converges to its fixed point. We illustrate this with the following two examples:
Example 6.
We choose the following expression for the comparison of simple fixed point method with the modified fixed point method, namely, arithmetic mean formula
$f ( x ) = x 2 − x − 1 .$
Corresponding to expression (27), one has
$ϕ ( x n ) = x n 2 − 1 .$
One of the required zeros of expression (27), and a fixed point of (28), is $\alpha = -0.6180339887498948$, with the initial guess $x_0 = -0.6$. For $\phi(x) = x^2 - 1$, the fixed point method diverges on the interval $[-0.65, -0.55]$. Here, $\phi'(x) = 2x$, and $-0.65 \le x \le -0.55$ implies $-1.3 \le 2x \le -1.1$. This further implies $-1.3 \le \phi'(x) \le -1.1$, which violates the condition $|\phi'(x)| < 1$ for all $x \in [-0.65, -0.55]$. On the other hand, the interval of convergence (26) for the arithmetic mean formula is $(-3, 1)$, and $\phi'(x)$ clearly lies within this interval. For $m = 1$, Formula (7) becomes $x_{n+1} = \frac{x_n + \phi(x_n)}{2}$. This further gives
$x_{n+1} = \frac{x_n^2 + x_n - 1}{2}.$
The modified arithmetic mean fixed point method converges to $α = − 0.6180339887498948$ for the initial guess $x 0 = − 0.6$.
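A short sketch (ours, not the paper's code) contrasting the divergent plain iteration with the convergent $m = 1$ variant on this example:

```python
# Example 6: phi(x) = x^2 - 1, x0 = -0.6, target alpha = -0.618033988...
phi = lambda x: x * x - 1

x = -0.6
for _ in range(20):
    x = phi(x)                 # plain iteration: |phi'| > 1 near alpha
y = -0.6
for _ in range(20):
    y = (y + phi(y)) / 2       # arithmetic mean (m = 1): converges
print(x, y)
```

After twenty steps the plain iterate has wandered far from $\alpha$, while the averaged iterate agrees with $\alpha$ to machine precision.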
Example 7.
Let us consider the general square root finding problem via fixed point methods. We wish to compute $x = \sqrt{a}$ for $a > 0$, which is equivalent to finding the positive root of $x^2 = a$. For example, let $a = 4$. Therefore, the corresponding function becomes
$f ( x ) = x 2 − 4 .$
Consider the following two possible rearrangements of $f ( x )$ as
1.
$x_{n+1} = \phi(x_n) = \frac{4}{x_n}, \quad n = 0, 1, 2, \cdots,$
2.
$x_{n+1} = \phi(x_n) = \frac{x_n}{2} + \frac{2}{x_n}, \quad n = 0, 1, 2, \cdots.$
One of the required zeros of expression (29), and a fixed point of the above two sequences, is $\alpha = 2$, with initial guess $x_0 = 1.99$. The first sequence diverges since $|\phi'(x_n)| > 1$, while the second converges for the simple fixed point method since $|\phi'(x_n)| < 1$ for all $x_n \in [1.95, 2.05]$. Let us discuss the first sequence further. We have
$\phi(x_n) = \frac{4}{x_n}.$
The interval of convergence (26) for the arithmetic mean formula is $(-3, 1)$, and $\phi'(x)$ clearly lies within this interval. For $m = 1$, Formula (7) becomes $x_{n+1} = \frac{x_n + \phi(x_n)}{2}$.
This further implies
$x_{n+1} = \frac{x_n}{2} + \frac{2}{x_n}.$
The modified arithmetic mean fixed point method converges to $α = 2$ for the initial guess $x 0 = 1.99$, since $| ϕ ′ ( x n ) | < 1$ for all $x n ∈ S : = [ 1.95 , 2.05 ]$.
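The two rearrangements of this example can be checked numerically. Note that $\phi(x) = 4/x$ satisfies $\phi(\phi(x)) = x$, so the plain iteration merely cycles between $x_0$ and $4/x_0$, while the $m = 1$ variant is the classical Babylonian (Heron) square root iteration (our Python sketch):

```python
phi = lambda x: 4 / x

# Plain iteration: phi(phi(x)) = x, so the sequence cycles, never converging.
x = 1.99
seq = [x]
for _ in range(4):
    x = phi(x)
    seq.append(x)

# m = 1 variant: x_{n+1} = (x_n + 4/x_n)/2, the Babylonian iteration.
y = 1.99
for _ in range(6):
    y = (y + phi(y)) / 2
print(seq[:3], y)
```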
Remark 3.
We can sometimes do better in the selection of initial points, or in the convergence rate or order, if we consider iteration functions using similar information. As an example, consider Newton's method, defined for all $n = 0, 1, 2, \cdots$ by
$x n + 1 = x n − f ′ ( x n ) − 1 f ( x n ) .$
Then, the celebrated Newton–Kantorovich semi-local convergence criterion [13,14] is given by
$h = l\, d \le \frac{1}{2},$
where
$d = | f ′ ( x 0 ) − 1 f ( x 0 ) | ,$
and l is the Lipschitz constant in the condition
$| f'(x_0)^{-1} \left( f'(x) - f'(y) \right) | \le l\, | x - y |$
for all $x, y \in D \subset \mathbb{R}$ for some D. In the case of Example 7 (ii), Newton's method (30) coincides with the modified arithmetic mean fixed point method. Since $f(x) = x^2 - 4$, we get $d = \frac{|x_0^2 - 4|}{2 |x_0|}$ and $l = \frac{1}{|x_0|}$, so condition (31) is satisfied for $x_0 \in S_1 = (-\infty, -\sqrt{2}\,] \cup [\sqrt{2}, \infty)$, which includes S. Moreover, if $x_0 \in S_2 = (-\infty, -\sqrt{2}) \cup (\sqrt{2}, \infty)$, then $h < \frac{1}{2}$, so the convergence is quadratic, faster than the (only linear) convergence guaranteed for the modified arithmetic mean method.

## 6. Conclusions

Motivated by geometrical considerations, we developed a one-parameter class of fixed point iteration methods for generating sequences approximating fixed points of nonlinear equations. These methods generalize a number of earlier popular methods. Sufficient convergence criteria have been provided, as well as the convergence order. Numerical examples further demonstrate the efficiency and superiority of the new methods over earlier ones using similar convergence information. The convergence order of Theorem 2 is confirmed in Table 2 by using the COC or ACOC. These schemes can also be extended to finding the fixed points of nonlinear systems.

## Author Contributions

V.K., P.S., I.K.A. and R.B.: conceptualization; methodology; validation; writing—original draft preparation; writing—review and editing. C.A., A.A. and M.S.: review and editing. All authors have read and agreed to the published version of the manuscript.

## Funding

This research received no external funding.


## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Agarwal, R.P.; O'Regan, D.; Sahu, D.R. Iterative constructions of the fixed point of nearly asymptotically nonexpansive mapping. J. Nonlinear Convex Anal. 2012, 27, 145–156. [Google Scholar]
2. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
3. Behl, R.; Salimi, M.; Ferrara, M.; Sharifi, S.; Samaher, K.A. Some real life applications of a newly constructed derivative free iterative scheme. Symmetry 2019, 11, 239. [Google Scholar] [CrossRef][Green Version]
4. Salimi, M.; Nik Long, N.M.A.; Sharifi, S.; Pansera, B.A. A multi-point iterative method for solving nonlinear equations with optimal order of convergence. Jpn. J. Ind. Appl. Math. 2018, 35, 497–509. [Google Scholar] [CrossRef]
5. Sharifi, S.; Salimi, M.; Siegmund, S.; Lotfi, T. A new class of optimal four-point methods with convergence order 16 for solving nonlinear equations. Math. Comput. Simul. 2016, 119, 69–90. [Google Scholar] [CrossRef][Green Version]
6. Salimi, M.; Lotfi, T.; Sharifi, S.; Siegmund, S. Optimal Newton-Secant like methods without memory for solving nonlinear equations with its dynamics. Int. J. Comput. Math. 2017, 94, 1759–1777. [Google Scholar] [CrossRef][Green Version]
7. Matthies, G.; Salimi, M.; Sharifi, S.; Varona, J.L. An optimal eighth-order iterative method with its dynamics. Jpn. J. Ind. Appl. Math. 2016, 33, 751–766. [Google Scholar] [CrossRef][Green Version]
8. Sharifi, S.; Ferrara, M.; Salimi, M.; Siegmund, S. New modification of Maheshwari method with optimal eighth order of convergence for solving nonlinear equations. Open Math. (Former. Cent. Eur. J. Math.) 2016, 14, 443–451. [Google Scholar]
9. Lotfi, T.; Sharifi, S.; Salimi, M.; Siegmund, S. A new class of three point methods with optimal convergence order eight and its dynamics. Numer. Algor. 2016, 68, 261–288. [Google Scholar] [CrossRef]
10. Jamaludin, N.A.A.; Nik Long, N.M.A.; Salimi, M.; Sharifi, S. Review of some iterative methods for solving nonlinear equations with multiple zeros. Afr. Mat. 2019, 30, 355–369. [Google Scholar] [CrossRef]
11. Nik Long, N.M.A.; Salimi, M.; Sharifi, S.; Ferrara, M. Developing a new family of Newton–Secant method with memory based on a weight function. SeMA J. 2017, 74, 503–512. [Google Scholar] [CrossRef]
12. Ferrara, M.; Sharifi, S.; Salimi, M. Computing multiple zeros by using a parameter in Newton-Secant method. SeMA J. 2017, 74, 361–369. [Google Scholar] [CrossRef][Green Version]
13. Magreñán, A.A.; Argyros, I.K. A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications; Academic Press: Cambridge, MA, USA; Elsevier: Amsterdam, The Netherlands, 2019. [Google Scholar]
14. Argyros, I.K.; Magreñán, A.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA; Taylor & Francis: Abingdon, UK, 2021. [Google Scholar]
15. Burden, R.L.; Faires, J.D. Numerical Analysis; PWS Publishing Company: Boston, MA, USA, 2001. [Google Scholar]
16. Ostrowski, A.M. Solution of Equations and Systems of Equation; Pure and Applied Mathematics; Academic Press: New York, NY, USA; London, UK, 1960; Volume IX. [Google Scholar]
17. Petkovic, M.S.; Neta, B.; Petkovic, L.; Džunič, J. Multipoint Methods for Solving Nonlinear Equation; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
18. Schaefer, H. Über die Methode sukzessiver Approximationen. Jahresber. Deutsch. Math.-Verein. 1957, 59, 131–140. [Google Scholar]
19. Mann, W.R. Mean Value Methods in Iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
20. Kranselski, M.A. Two remarks on the method of successive approximation (Russian). Uspei Nauk. 1955, 10, 123–127. [Google Scholar]
21. Berinde, V. Iterative Approximation of Fixed Points; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 2002. [Google Scholar] [CrossRef]
22. Ishikawa, S. Fixed Point by a New Iteration Method. Proc. Am. Math. Soc. 1974, 44, 147–150. [Google Scholar] [CrossRef]
23. Thianwan, S. Common fixed Points of new iterations for two asymptotically nonexpansive nonself mappings in Banach spaces. J. Comput. Appl. Math. 2009, 224, 688–695. [Google Scholar] [CrossRef][Green Version]
24. Noor, M.A. New approximation schemes for general variational inequalities. J. Math. Anal. Appl. 2000, 251, 217–229. [Google Scholar] [CrossRef][Green Version]
25. Phuengrattana, W.; Suantai, S. On the rate of convergence of Mann Ishikawa, Noor and SP iterations for continuous functions on an arbitrary interval. J. Comput. Appl. Math. 2011, 235, 3006–3014. [Google Scholar] [CrossRef][Green Version]
26. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar]
Figure 1. The nonlinear function $y = \phi(x)$ and its linear approximation.
Table 1. Some modified schemes based on Ishikawa’s, Agarwal and Thianwan as corrector, respectively.
PredictorIshikawa’sAgarwalThianwan
CorrectorCorrectorCorrector
$y n = x n ϕ ( x n )$$x n + 1 = ( 1 − β n ) x n + β n ϕ ( y n ) ,$$x n + 1 = ( 1 − γ n ) ϕ ( x n ) + γ n ϕ ( y n ) ,$$x n + 1 = ( 1 − γ n ) y n + γ n ϕ ( y n ) ,$
called by$( I G M )$$( A G M )$$( T G M )$
$y n = 2 x n ϕ ( x n ) x n + ϕ ( x n )$$x n + 1 = ( 1 − β n ) x n + β n ϕ ( y n ) ,$$x n + 1 = ( 1 − γ n ) ϕ ( x n ) + γ n ϕ ( y n ) ,$$x n + 1 = ( 1 − γ n ) y n + γ n ϕ ( y n ) ,$
known by$( I H M )$$( A H M )$$( T H M )$
$y n = x n + 2 ϕ ( x n ) 3$$x n + 1 = ( 1 − β n ) x n + β n ϕ ( y n ) ,$$x n + 1 = ( 1 − γ n ) ϕ ( x n ) + γ n ϕ ( y n ) ,$$x n + 1 = ( 1 − γ n ) y n + γ n ϕ ( y n ) ,$
denoted by$( I O M 1 )$$( A O M 1 )$$( T O M 1 )$
$y n = x n + 4 ϕ ( x n ) 5$$x n + 1 = ( 1 − β n ) x n + β n ϕ ( y n ) ,$$x n + 1 = ( 1 − γ n ) ϕ ( x n ) + γ n ϕ ( y n ) ,$$x n + 1 = ( 1 − γ n ) y n + γ n ϕ ( y n ) ,$
called by$( I O M 2 )$$( A O M 2 )$$( T O M 2 )$
$y n = x n + 10 ϕ ( x n ) 11$$x n + 1 = ( 1 − β n ) x n + β n ϕ ( y n ) ,$$x n + 1 = ( 1 − γ n ) ϕ ( x n ) + γ n ϕ ( y n ) ,$$x n + 1 = ( 1 − γ n ) y n + γ n ϕ ( y n ) ,$
known by$( I O M 3 )$$( A O M 3 )$$( T O M 3 )$
Table 2. Comparison of different fixed point methods on Examples (1)–(5) with $k = 12$.
ExamplesE.C.FIMKMGMHMOM1OM2OM3OM4
R.E.
$ρ$
(1)$| x k + 1 − x k |$$3.4 ( − 4 )$$9.3 ( − 16 )$$9.1 ( − 16 )$$8.9 ( − 16 )$$8.1 ( − 11 )$$2.2 ( − 7 )$$2.0 ( − 5 )$$8.2 ( − 14 , 080 )$
$f ( x k )$$5.7 ( − 4 )$$3.1 ( − 15 )$$3.1 ( − 15 )$$3.0 ( − 15 )$$4.6 ( − 11 )$$4.7 ( − 7 )$$3.7 ( − 5 )$$2.5 ( − 14 , 079 )$
$ρ$$0.9998$$1.000$$1.000$$1.000$$1.000$$1.000$$1.000$$2.000$
(2)$| x k + 1 − x k |$$5.6 ( − 5 )$$2.8 ( − 10 )$$2.5 ( − 10 )$$2.3 ( − 10 )$$1.9 ( − 18 )$$2.9 ( − 9 )$$1.6 ( − 6 )$$6.5 ( − 9126 )$
$f ( x k )$$5.6 ( − 5 )$$5.6 ( − 10 )$$5.0 ( − 10 )$$4.5 ( − 10 )$$2.9 ( − 18 )$$3.6 ( − 9 )$$1.7 ( − 6 )$$1.0 ( − 9125 )$
$ρ$$1.000$$1.000$$1.000$$1.000$$1.000$$1.000$$1.000$$2.000$
(3)$| x k + 1 − x k |$$1.1 ( − 18 )$$2.3 ( − 6 )$$2.3 ( − 6 )$$2.3 ( − 6 )$$3.9 ( − 8 )$$2.7 ( − 10 )$$3.0 ( − 13 )$$2.6 ( − 13 , 415 )$
$f ( x k )$$1.1 ( − 17 )$$4.6 ( − 5 )$$4.5 ( − 5 )$$4.5 ( − 5 )$$5.9 ( − 7 )$$3.4 ( − 9 )$$3.3 ( − 12 )$$2.5 ( − 13 , 414 )$
$ρ$$1.000$$1.000$$1.000$$1.000$$1.000$$1.000$$1.000$$2.000$
(4)$| x k + 1 − x k |$$5.3 ( − 10 )$$3.7 ( − 7 )$$3.5 ( − 7 )$$3.4 ( − 7 )$$8.2 ( − 11 )$$1.1 ( − 20 )$$8.3 ( − 14 )$$2.6 ( − 10 , 135 )$
$f ( x k )$$1.4 ( − 9 )$$2.0 ( − 6 )$$1.9 ( − 6 )$$1.8 ( − 6 )$$3.3 ( − 10 )$$3.7 ( − 20 )$$2.4 ( − 13 )$$8.3 ( − 10 , 135 )$
$ρ$$1.000$$1.000$$1.000$$1.000$$1.000$$1.000$$1.000$$2.000$
(5)$| x k + 1 − x k |$$5.3 ( − 7 )$$4.1 ( − 9 )$$4.0 ( − 9 )$$3.9 ( − 9 )$$1.7 ( − 17 )$$4.4 ( − 13 )$$5.5 ( − 9 )$$6.0 ( − 1 , 102 , 284 )$
$f ( x k )$$5.3 ( − 7 )$$8.2 ( − 9 )$$8.0 ( − 9 )$$7.9 ( − 9 )$$2.5 ( − 17 )$$5.5 ( − 13 )$$6.1 ( − 9 )$$8.5 ( − 1 , 102 , 284 )$
$ρ$$1.000$$1.000$$1.000$$1.000$$1.000$$1.000$$1.000$$2.000$
FIM, E.C. and R.E. stand for classical fixed point method, errors between two consecutive iterations and residual errors in the corresponding function by using the obtained fixed point, respectively.
Table 3. Comparison of our methods with existing Ishikawa method based on $k = 12$ number of iterations.
ExamplesE.C.$I S$IGMIHMIOM1IOM2IOM3
R.E.
(1)$| x k + 1 − x k |$$5.1 ( − 7 )$$3.9 ( − 8 )$$3.9 ( − 18 )$$9.1 ( − 8 )$$1.8 ( − 7 )$$2.3 ( − 7 )$
$f ( x k )$$1.5 ( − 3 )$$1.9 ( − 4 )$$1.9 ( − 4 )$$5.8 ( − 4 )$$1.5 ( − 3 )$$2.5 ( − 3 )$
(2)$| x k + 1 − x k |$$3.1 ( − 6 )$$8.7 ( − 7 )$$7.9 ( − 7 )$$1.7 ( − 7 )$$1.1 ( − 6 )$$1.8 ( − 6 )$
$f ( x k )$$5.4 ( − 3 )$$2.1 ( − 3 )$$1.9 ( − 3 )$$4.8 ( − 4 )$$3.5 ( − 3 )$$6.6 ( − 3 )$
(1)$| x k + 1 − x k |$$5.7 ( − 9 )$$6.6 ( − 8 )$$6.6 ( − 8 )$$4.6 ( − 8 )$$3.0 ( − 8 )$$1.7 ( − 8 )$
$f ( x k )$$9.9 ( − 5 )$$1.1 ( − 3 )$$1.1 ( − 3 )$$7.7 ( − 4 )$$5.0 ( − 4 )$$2.8 ( − 4 )$
(4)$| x k + 1 − x k |$$5.3 ( − 7 )$$8.6 ( − 7 )$$8.4 ( − 7 )$$4.4 ( − 7 )$$7.2 ( − 8 )$$2.4 ( − 7 )$
$f ( x k )$$2.4 ( − 3 )$$4.4 ( − 3 )$$4.3 ( − 3 )$$2.4 ( − 3 )$$4.0 ( − 4 )$$1.3 ( − 3 )$
(5)$| x k + 1 − x k |$$6.9 ( − 7 )$$4.0 ( − 7 )$$5.1 ( − 7 )$$8.2 ( − 8 )$$1.9 ( − 7 )$$4.2 ( − 7 )$
$f ( x k )$$1.2 ( − 3 )$$8.7 ( − 4 )$$8.6 ( − 4 )$$2.0 ( − 4 )$$4.9 ( − 4 )$$1.2 ( − 3 )$
IS stands for Ishikawa's scheme. From the above numerical results, we conclude that our methods $IGM$, $IHM$ and $IOM1$ have smaller absolute residual errors and smaller differences between consecutive iterations than the original Ishikawa method in all the examples. On the other hand, our methods $IOM2$ and $IOM3$ have computational results similar to the Ishikawa method.
Table 4. Comparison of our methods with standard Agarwal scheme after $k = 12$ number of iterations.
ExamplesE.C.ASAGMAHMAOM1AOM2AOM3
R.E.
(1)$| x k + 1 − x k |$$1.5 ( − 4 )$$1.4 ( − 5 )$$1.4 ( − 5 )$$2.2 ( − 5 )$$3.1 ( − 5 )$$3.4 ( − 5 )$
$f ( x k )$$2.5 ( − 4 )$$2.4 ( − 5 )$$2.4 ( − 5 )$$3.7 ( − 5 )$$5.3 ( − 5 )$$5.8 ( − 5 )$
(2)$| x k + 1 − x k |$$1.9 ( − 5 )$$6.1 ( − 6 )$$5.6 ( − 6 )$$8.6 ( − 7 )$$4.1 ( − 6 )$$5.2 ( − 6 )$
$f ( x k )$$1.9 ( − 5 )$$6.1 ( − 6 )$$5.6 ( − 6 )$$8.6 ( − 7 )$$4.1 ( − 6 )$$5.2 ( − 6 )$
(3)$| x k + 1 − x k |$$3.7 ( − 20 )$$3.9 ( − 15 )$$3.9 ( − 19 )$$5.1 ( − 18 )$$1.4 ( − 19 )$$6.8 ( − 20 )$
$f ( x k )$$3.7 ( − 19 )$$3.9 ( − 14 )$$3.9 ( − 18 )$$5.1 ( − 17 )$$1.4 ( − 18 )$$6.8 ( − 19 )$
(4)$| x k + 1 − x k |$$7.4 ( − 11 )$$1.3 ( − 10 )$$1.2 ( − 10 )$$5.3 ( − 11 )$$7.2 ( − 12 )$$2.0 ( − 11 )$
$f ( x k )$$2.0 ( − 10 )$$3.4 ( − 10 )$$3.3 ( − 10 )$$1.4 ( − 10 )$$1.9 ( − 11 )$$5.2 ( − 11 )$
(5)$| x k + 1 − x k |$$1.4 ( − 7 )$$8.6 ( − 8 )$$8.5 ( − 8 )$$1.3 ( − 8 )$$2.4 ( − 8 )$$4.2 ( − 8 )$
$f ( x k )$$1.4 ( − 7 )$$3.6 ( − 8 )$$8.5 ( − 8 )$$1.3 ( − 8 )$$2.4 ( − 8 )$$4.2 ( − 8 )$
AS stands for Agarwal's scheme. We deduce from the obtained numerical results that our methods, such as $AOM2$ and $AOM3$, have better numerical results than the classical Agarwal scheme in Examples 1, 2 and 5. In addition, our methods have numerical results similar to the Agarwal method in the case of Examples 3 and 4.
Table 5. Comparison of our methods with classical Mann’s and Thianwan method after $k = 12$ number of iterations.
ExamplesE.C.MSTSTGMTHMTOM1TOM2TOM3
R.E.
(1)$| x k + 1 − x k |$$1.2 ( − 7 )$$1.3 ( − 8 )$$4.9 ( − 17 )$$4.8 ( − 17 )$$9.7 ( − 13 )$$1.2 ( − 8 )$$1.1 ( − 6 )$
$f ( x k )$$3.6 ( − 4 )$$1.9 ( − 5 )$$1.6 ( − 16 )$$1.6 ( − 16 )$$2.4 ( − 12 )$$2.5 ( − 8 )$$2.0 ( − 6 )$
(2)$| x k + 1 − x k |$$2.6 ( − 6 )$$4.9 ( − 7 )$$2.3 ( − 11 )$$2.1 ( − 11 )$$1.7 ( − 19 )$$2.6 ( − 10 )$$3.4 ( − 7 )$
$f ( x k )$$4.5 ( − 3 )$$4.2 ( − 4 )$$4.6 ( − 11 )$$4.3 ( − 11 )$$2.6 ( − 19 )$$3.2 ( − 10 )$$3.7 ( − 7 )$
(3)$| x k + 1 − x k |$$1.3 ( − 7 )$$5.1 ( − 9 )$$4.6 ( − 8 )$$4.6 ( − 8 )$$8.0 ( − 10 )$$5.5 ( − 12 )$$6.8 ( − 20 )$
$f ( x k )$$2.2 ( − 3 )$$4.4 ( − 5 )$$4.8 ( − 7 )$$9.2 ( − 7 )$$1.2 ( − 8 )$$6.9 ( − 11 )$$6.8 ( − 19 )$
(4)$| x k + 1 − x k |$$2.7 ( − 6 )$$2.8 ( − 7 )$$2.4 ( − 8 )$$2.3 ( − 8 )$$5.4 ( − 12 )$$7.4 ( − 22 )$$5.5 ( − 15 )$
$f ( x k )$$9.5 ( − 3 )$$6.4 ( − 4 )$$1.3 ( − 7 )$$1.2 ( − 7 )$$2.2 ( − 11 )$$2.4 ( − 21 )$$1.6 ( − 14 )$
(5)$| x k + 1 − x k |$$1.1 ( − 6 )$$2.0 ( − 7 )$$3.7 ( − 10 )$$3.7 ( − 10 )$$1.6 ( − 18 )$$4.0 ( − 14 )$$5.1 ( − 10 )$
$f ( x k )$$1.9 ( − 3 )$$1.8 ( − 4 )$$7.5 ( − 10 )$$7.4 ( − 10 )$$2.3 ( − 8 )$$5.0 ( − 14 )$$5.6 ( − 10 )$
MS and TS stand for Mann's and Thianwan's schemes, respectively. On the basis of the computational results, we infer that our methods, such as $TOM3$, perform better in terms of absolute residual error and smaller differences between consecutive iterations than the classical Mann and Thianwan schemes.
Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Share and Cite

MDPI and ACS Style

Kanwar, V.; Sharma, P.; Argyros, I.K.; Behl, R.; Argyros, C.; Ahmadian, A.; Salimi, M. Geometrically Constructed Family of the Simple Fixed Point Iteration Method. Mathematics 2021, 9, 694. https://doi.org/10.3390/math9060694
