Geometrically Constructed Family of the Simple Fixed Point Iteration Method

Abstract: This study presents a new one-parameter family of the well-known fixed point iteration method for solving nonlinear equations numerically. The proposed family is derived by implementing approximation through a straight line. The presence of an arbitrary parameter in the proposed family improves the convergence characteristics of the simple fixed point iteration, since the family has a wider domain of convergence. Furthermore, we propose several two-step predictor–corrector iterative schemes for finding fixed points, which inherit the advantages of the proposed fixed point iterative schemes. Finally, several examples are given to further illustrate their efficiency.


Introduction
The fixed point iteration is probably the simplest and most important root-finding algorithm in numerical analysis [1,2]. Fixed point methods and fixed point theorems have many applications in mathematics and engineering. One way to study numerical ordinary differential equation solvers and Runge–Kutta methods is to recast them as fixed point iterations. The well-known Newton's method [2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17] is also a special case of a fixed point iteration. Fixed point theory has been used extensively as a tool to find solutions of functional-differential equations. Furthermore, fixed point problems are equivalent to root-finding problems and are sometimes easier to analyze, while posing some curious problems of their own.
Suppose that we wish to find the approximate solution of the nonlinear equation

f(x) = 0, (1)

where f : [a, b] ⊂ R → R is a sufficiently differentiable function with simple zeros. This can be rewritten to obtain an equation of the form

x = φ(x), (2)

in such a way that any solution of (2), which is a fixed point of φ, is a root of the original Equation (1). Root-finding problems and fixed-point problems are equivalent classes in the following sense: f(x) has a zero at x = α ⇔ φ(x) = x − f(x) has a fixed point at x = α.
Geometrically, the fixed point occurs where the graph of y = φ(x) intersects the graph of the straight line y = x. Starting from a suitable approximation x_0, the recursive process

x_{n+1} = φ(x_n), n = 0, 1, 2, . . . , (3)

is called the fixed point iteration method. This method is locally linearly convergent if |φ'(x)| < 1 for all x ∈ [a, b].
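The simple iteration above can be sketched in a few lines of Python; the stopping rule, the tolerance, and the test map φ(x) = e^{−x} (the map used in Example 2 below) are illustrative choices, not prescriptions from the text.

```python
import math

# Minimal sketch of the simple fixed point iteration x_{n+1} = phi(x_n).
# The tolerance and the iteration cap are illustrative choices.
def fixed_point(phi, x0, tol=1e-12, max_iter=200):
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# phi(x) = e^{-x} satisfies |phi'(x)| < 1 near its fixed point, so the
# iteration converges to alpha ≈ 0.5671432904 (the Omega constant W(1)).
alpha = fixed_point(lambda x: math.exp(-x), 0.6)
```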

Geometric Derivation of the Family
Assume that Equation (2) has a fixed point at x = α. Let

y = φ(x) (4)

represent the graph of the function φ(x). Let x_0 be an initial guess to the required fixed point and φ(x_0) be the corresponding value on the graph of the function y = φ(x). The idea is to approximate the nonlinear function y = φ(x) by a linear approximation. Therefore, we take

y = m(x_0 − x) + φ(x_0), m ≥ 0, (5)

as a linear approximation to the curve y = φ(x); the straight line (5) has slope −m and passes through the points (x_0, φ(x_0)) and (x_0 + φ(x_0)/m, 0). More details can be found in Figure 1. The point of intersection of (5) with the straight line y = x is taken as the next approximation to the required fixed point; let x = x_1 be this point of intersection. Therefore, setting y = x in the expression (5) yields

x_1 = (m x_0 + φ(x_0)) / (m + 1). (6)

Without loss of generality, the general form of the above expression (6) can be written as follows:

x_{n+1} = (m x_n + φ(x_n)) / (m + 1), n = 0, 1, 2, . . . . (7)

Next, we want to establish the convergence order of the proposed iterative scheme (7). Therefore, we rewrite the expression (7) as the fixed point iteration x_{n+1} = h(x_n) with

h(x) = (m x + φ(x)) / (m + 1).

So, we conclude that α is a solution of x = h(x).
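Scheme (7) translates directly into code. In the sketch below, the map φ(x) = e^{−x} and the parameter value m = 1/2 are illustrative choices; m = 0 recovers the simple fixed point iteration.

```python
import math

# Sketch of the one-parameter family (7):
#   x_{n+1} = (m * x_n + phi(x_n)) / (m + 1)
# m = 0 recovers the simple iteration x_{n+1} = phi(x_n).
def family_iteration(phi, x0, m, tol=1e-12, max_iter=500):
    x = x0
    for _ in range(max_iter):
        x_new = (m * x + phi(x)) / (m + 1.0)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# With phi(x) = e^{-x} and m = 1/2, the iterates settle on the fixed point
# alpha ≈ 0.5671432904.
root = family_iteration(lambda x: math.exp(-x), 0.6, 0.5)
```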
Theorem 1. Let h : [a, b] → [a, b] satisfy |h(x) − h(y)| ≤ λ|x − y| for all x, y ∈ [a, b], with 0 ≤ λ < 1. Then x = h(x) has a unique solution α ∈ [a, b], and for any x_0 ∈ [a, b] the sequence generated by x_{n+1} = h(x_n) converges to α.

Proof. First of all, we will prove the first part.
Suppose α and β are both solutions of x = h(x) in [a, b]. If α ≠ β, then |α − β| = |h(α) − h(β)| ≤ λ|α − β| implies λ ≥ 1, which contradicts the fact that λ < 1. Therefore, we have α = β. Hence, x = h(x) has a unique solution in [a, b]. Next, we move to the second part.

Since |x_{n+1} − α| = |h(x_n) − h(α)| ≤ λ|x_n − α|, continuing inductively we obtain |x_n − α| ≤ λ^n |x_0 − α| → 0 as n → ∞. Hence, the sequence {x_n} generated by x_{n+1} = h(x_n) converges to α.
Theorem 2. Let φ : R → R be an analytic function in the region containing the fixed point x = α.
In addition, we assume that the initial guess x = x_0 is sufficiently close to the required fixed point to guarantee convergence. Then, the proposed scheme (7) has at least linear convergence.
Proof. Expanding φ(x_n) by Taylor's expansion in the neighborhood of the fixed point α, and writing e_n = x_n − α with φ(α) = α, one gets

φ(x_n) = α + φ'(α) e_n + (φ''(α)/2!) e_n^2 + O(e_n^3).

Substituting this value into the scheme (7), one can have

e_{n+1} = ((m + φ'(α)) / (m + 1)) e_n + (φ''(α) / (2(m + 1))) e_n^2 + O(e_n^3).

This implies that scheme (7) has at least linear convergence whenever |(m + φ'(α))/(m + 1)| < 1. Moreover, for the choice m = −φ'(α), the coefficient of e_n vanishes, and the scheme (7) reaches at least the second order of convergence.

Special Cases
Here, we shall consider the role of the parameter m ≥ 0 and derive the following particular formulas:

4.
By inserting m = 1 in scheme (7), one obtains the well-known Krasnoselskii iteration [20]

x_{n+1} = (x_n + φ(x_n)) / 2,

denoted by (KM) in the computational results (see also more recent work on this iteration in the book [21]). Similarly, we can derive several other formulas by taking different specific values of m. Furthermore, we propose the following new schemes based on some standard means of the two quantities x_n and φ(x_n) of the same sign:

5.
Geometric mean-based fixed point formula is given by

x_{n+1} = √(x_n φ(x_n)).

6.
Harmonic mean-based fixed point formula is defined by

x_{n+1} = 2 x_n φ(x_n) / (x_n + φ(x_n)).

7.
Centroidal mean-based fixed point formula is mentioned as follows:

x_{n+1} = 2 (x_n^2 + x_n φ(x_n) + φ(x_n)^2) / (3 (x_n + φ(x_n))).

8.
The following fixed point formula based on the Heronian mean is defined as

x_{n+1} = (x_n + √(x_n φ(x_n)) + φ(x_n)) / 3.

9.
The fixed point formula based on the Contra-harmonic mean is depicted as follows:

x_{n+1} = (x_n^2 + φ(x_n)^2) / (x_n + φ(x_n)).

Remark 1.
The geometric mean-based and Heronian mean-based fixed point formulas are applicable for finding positive fixed points only.
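The mean-based special cases above can be sketched uniformly: each step replaces x_n by the chosen mean of x_n and φ(x_n). The test map φ(x) = e^{−x} is an illustrative choice with a positive fixed point, so all of the means are well defined; the short labels are ours.

```python
import math

# One step of each mean-based variant applied to the pair (x, phi(x)).
# Geometric and Heronian means require both quantities to be positive.
MEANS = {
    "KM":  lambda a, b: (a + b) / 2.0,                                    # arithmetic, m = 1
    "GM":  lambda a, b: math.sqrt(a * b),                                 # geometric
    "HM":  lambda a, b: 2.0 * a * b / (a + b),                            # harmonic
    "CeM": lambda a, b: 2.0 * (a * a + a * b + b * b) / (3.0 * (a + b)),  # centroidal
    "HeM": lambda a, b: (a + math.sqrt(a * b) + b) / 3.0,                 # Heronian
    "CHM": lambda a, b: (a * a + b * b) / (a + b),                        # contra-harmonic
}

def mean_fixed_point(phi, x0, mean, tol=1e-12, max_iter=500):
    x = x0
    for _ in range(max_iter):
        x_new = MEANS[mean](x, phi(x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# All variants drive phi(x) = e^{-x} from x0 = 0.6 to the same fixed point.
roots = {name: mean_fixed_point(lambda x: math.exp(-x), 0.6, name) for name in MEANS}
```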

Two-Step Iterative Schemes
In this section, we present new two-step predictor–corrector iterative schemes using the modified fixed point methods as the predictor. There are several two-step [1,22,23] and multi-point [24,25] iterative schemes in the literature for finding fixed points. Here, we mention some of them as follows: 1.
Ishikawa [22] proposed the following iterative scheme as a generalization of the Mann [19] iteration scheme, where {β_n} and {γ_n} are sequences of positive numbers in (0, 1]. We denote this method by (IS) in the computational work and choose β_n = γ_n = 1/(n^3 + 1).

2.
Agarwal et al. [1] proposed the following iteration scheme, where {β_n} and {γ_n} are sequences of positive numbers in (0, 1]. We call this scheme (AS) in the computational work and consider β_n = γ_n = 1/(n^3 + 1). For γ_n = 0, it reduces to the well-known Mann iteration scheme. 3.
Thianwan [23] defined the following two-step iteration scheme, where {β_n} and {γ_n} are sequences of positive numbers in (0, 1]. We denote this method by (TS) in the computational work and choose β_n = γ_n = 1/(n^3 + 1). This scheme is also known as a modification of Mann's method.
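A two-step scheme of this type can be sketched as follows. The precise update formulas of the schemes above are not reproduced here, so the sketch uses the standard Ishikawa form from the literature; the assignment of the sequences {β_n} and {γ_n} to the two steps is an assumption, and the test map φ(x) = e^{−x} is illustrative.

```python
import math

# Hedged sketch of an Ishikawa-type two-step scheme, using the standard form
#   y_n     = (1 - beta_n)  * x_n + beta_n  * phi(x_n)   (predictor)
#   x_{n+1} = (1 - gamma_n) * x_n + gamma_n * phi(y_n)   (corrector)
# with beta_n = gamma_n = 1/(n^3 + 1) as in the text. The mapping of the two
# sequences onto the two steps is an assumption.
def ishikawa_type(phi, x0, num_iter=12):
    x = x0
    for n in range(num_iter):
        bn = gn = 1.0 / (n ** 3 + 1)
        y = (1.0 - bn) * x + bn * phi(x)
        x = (1.0 - gn) * x + gn * phi(y)
    return x

# Twelve iterations for phi(x) = e^{-x} starting from x0 = 0.6; since the
# step sizes shrink rapidly, the iterates approach (but stall short of)
# the fixed point.
x12 = ishikawa_type(lambda x: math.exp(-x), 0.6)
```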

Modified Schemes
These elementary schemes allow us to propose iterative schemes with any of the proposed methods as the first (predictor) step and these existing methods as the second (corrector) step. For the sake of simplicity, we consider some of the special cases as the predictor part. The resulting modified schemes are depicted in Table 1.

Numerical Examples
The theoretical results developed in the previous sections are tested in this section. We obtain our methods by substituting m = 1/2, m = 1/4, m = 1/10 and m = −φ'(x_n) in the proposed scheme (7), denoted by OM1, OM2, OM3 and OM4, respectively. In addition, we select the methods GM and HM from the special cases 5 and 6, respectively.
In order to check the effectiveness of our results, we consider five different types of nonlinear problems, which are illustrated in Examples 1–5. In Table 2, we compare them with the classical fixed point method. In addition, we contrast our methods with the existing Ishikawa and Agarwal methods; the results are reported in Tables 3 and 4, respectively. Finally, we compare them with the classical Mann and Thianwan methods, and the computational results are depicted in Table 5. In all the tables, we report the results after twelve iterations (i.e., k = 12) with γ_n = β_n = 1/(n^3 + 1). Additionally, we estimate the computational order of convergence (COC) by

ρ ≈ ln(|x_{k+1} − α| / |x_k − α|) / ln(|x_k − α| / |x_{k−1} − α|), for each k = 1, 2, . . . , (14)

or the approximate computational order of convergence (ACOC) [26]

ρ̂ ≈ ln(|x_{k+1} − x_k| / |x_k − x_{k−1}|) / ln(|x_k − x_{k−1}| / |x_{k−1} − x_{k−2}|), for each k = 2, 3, . . . . (15)

Computations are performed with the package Mathematica 9 using multiple precision arithmetic. The notation a(±b) stands for a × 10^{±b}.
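The ACOC estimate (15) can be sketched as follows. The iterates come from the second-order member m = −φ'(x_n) applied to φ(x) = e^{−x} (an illustrative choice), for which the estimate should be close to 2.

```python
import math

# ACOC from the last four iterates, following (15):
#   rho ≈ ln(|x_{k+1}-x_k| / |x_k-x_{k-1}|) / ln(|x_k-x_{k-1}| / |x_{k-1}-x_{k-2}|)
def acoc(xs):
    e = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])

# Three steps of the second-order member m = -phi'(x_n) for phi(x) = e^{-x}.
phi, dphi = (lambda x: math.exp(-x)), (lambda x: -math.exp(-x))
xs = [0.6]
for _ in range(3):
    x = xs[-1]
    m = -dphi(x)
    xs.append((m * x + phi(x)) / (m + 1.0))

order = acoc(xs)  # close to 2 for a second-order scheme
```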

Example 1. Let us consider the following standard test problem
The corresponding fixed point iterative method is given as follows: The required zero of expression (16) and fixed point of (17) is α = 0.517757363682459, with initial guess x_0 = 0.52.

Example 2.
We choose the following expression for comparison with other fixed point methods: We can easily obtain the following fixed point iterative method based on expression (18): φ(x_n) = e^{−x_n}.
The required zero of expression (18) and fixed point of (19) is α = 0.567143290409784, with initial guess x_0 = 0.6.
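For this example, the classical iteration and the family member m = 1/2 (OM1) can be compared directly; the tolerance and the iteration-count comparison below are illustrative, not the twelve-iteration protocol used in the tables.

```python
import math

# Example 2: phi(x) = e^{-x}, x0 = 0.6; the fixed point is the Omega
# constant W(1) = 0.5671432904097838...
phi = lambda x: math.exp(-x)
ALPHA = 0.5671432904097838

def iterations_to_converge(update, x0, tol=1e-8, cap=1000):
    x, k = x0, 0
    while abs(x - ALPHA) > tol and k < cap:
        x, k = update(x), k + 1
    return x, k

x_cls, k_cls = iterations_to_converge(phi, 0.6)                        # m = 0
x_om1, k_om1 = iterations_to_converge(lambda x: (0.5 * x + phi(x)) / 1.5, 0.6)

# OM1's asymptotic error constant |(m + phi'(alpha))/(m + 1)| ≈ 0.045 is far
# smaller than |phi'(alpha)| ≈ 0.567, so OM1 needs fewer iterations.
```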

Example 3.
Here, we assume the following expression: Based on expression (20), we have the following fixed point iterative method: The required zero of expression (20) and fixed point of (21) is α = 1.08859775239789. We select x_0 = 1.1 as the initial guess for comparison.

Example 4. Assume another test problem as follows
Corresponding to expression (22), we have the following fixed point iterative method: The required zero of expression (22) and fixed point of (23) is α = 0.754877666347995. We take the starting point x_0 = 0.8 for comparison.

Example 5.
Here, we assume another expression: We have the following expression for the fixed point method: The required zero of expression (24) and fixed point of (25) is α = 0.785396509573049. We consider x_0 = 0.8 as the initial guess for comparison.

Role of the Parameter 'm'
The presence of the arbitrary slope 'm' in the proposed family has the following characteristics: 1.
Since a ≤ x ≤ b implies that a ≤ φ(x) ≤ b, the parameter m ≥ 0 ensures that the new iterate divides the interval between x_0 and φ(x_0) internally in the ratio m : 1 or 1 : m; otherwise, there would be an external division and hence h(x) ∉ [a, b].

2.
Since h(x) = (m x + φ(x)) / (m + 1), and |h'(x_n)| < 1 is the sufficient condition for the convergence of the modified fixed point method, we have

| (m + φ'(x_n)) / (m + 1) | < 1.

This further implies that

−(2m + 1) < φ'(x_n) < 1. (26)

This is the interval of convergence of our proposed scheme (7). As m ≥ 0, (26) represents a wider domain of convergence than that of the classical fixed point method x = φ(x), which requires |φ'(x_n)| < 1. In particular, for m = 1 (arithmetic mean), (26) gives the following interval of convergence:

−3 < φ'(x_n) < 1.

Therefore, the arithmetic mean formula has a larger interval of convergence than the simple fixed point method.
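The wider interval can be seen on a toy linear map (an illustrative choice, not from the text): φ(x) = 3 − 2x has the fixed point α = 1 with φ'(x) = −2 everywhere, which lies outside (−1, 1) but inside (−3, 1).

```python
# phi'(x) = -2: the classical iteration doubles the error at every step,
# while the arithmetic mean member (m = 1) gives
#   h(x) = (x + phi(x)) / 2 = (3 - x) / 2,  |h'(x)| = 1/2,
# which halves the error at every step.
phi = lambda x: 3.0 - 2.0 * x

x = y = 2.0
for _ in range(10):
    x = phi(x)                 # classical: diverges
    y = (y + phi(y)) / 2.0     # m = 1: converges
classical_err, family_err = abs(x - 1.0), abs(y - 1.0)
```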
Remark 2. For x = φ(x), we have different ways to choose φ(x); however, we have to select φ(x) in such a way that the fixed point iteration method converges to its fixed point. We shall illustrate this by taking the following two examples: Then, the celebrated Newton–Kantorovich semi-local convergence criterion [13,14] is given by where and l is the Lipschitz constant in the condition for all x ∈ D ⊂ R for some D. Then, as an example, in the case of Example (7), (ii) Newton's method (30) coincides with the modified arithmetic mean fixed point method; but since f(x) = x^2 − 4, d = |x_0^2 − 4| and l = 1/(2|x_0|), condition (31) is satisfied for x_0 ∈ S_1 = (−∞, 2] ∪ [ which includes S, and if x_0 ∈ S_2 = (−∞, 2) ∪ (√2, ∞), then h < 1/2, so the convergence is quadratic, faster than the (only linear) convergence of the modified arithmetic mean method.

Conclusions
Motivated by geometrical considerations, we developed a one-parameter class of fixed point iteration methods for generating sequences that approximate fixed points of nonlinear equations. These methods generalize a number of earlier popular methods. Sufficient convergence criteria have been provided, as well as the convergence order. Numerical examples further demonstrate the efficiency as well as the superiority of the new methods over earlier ones using similar convergence information. The convergence order of Theorem 2 is confirmed in Table 2 by using the COC or ACOC. These schemes can also be extended for finding fixed points of systems of nonlinear equations.