1. Introduction
The theory of fixed point iterative methods has progressively become an invaluable area of study, as many problems in mathematics, engineering, physics, economics, etc., can be transformed into fixed point problems [1,2]. In fact, variational inequalities and equilibrium problems in Hilbert and Banach spaces are solved by using fixed point iterative schemes [3,4,5,6]. These techniques have been applied in fluid mechanics and fluid–structure interaction [7,8], the design and analysis of fractals, etc. Over the years, researchers have developed several iterative methods for the approximation of fixed points [9]. Fixed point iterative schemes are prominent because every root finding problem can be converted into a fixed point problem and vice versa. Suppose we want to find the solution of a nonlinear equation
f(x) = 0, (1)
where f is a sufficiently differentiable function with simple zeros. This equation can also be written in the form
x = g(x), (2)
such that any solution of Equation (2), which is a fixed point of g, is a root of Equation (1).
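The root-to-fixed-point conversion can be sketched numerically. The concrete rearrangement below (f(x) = cos x - x rewritten as x = cos x) is an illustrative assumption, not an example taken from the paper.

```python
import math

def picard(g, x0, tol=1e-10, max_iter=200):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to tol."""
    x = x0
    for n in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter

# f(x) = cos(x) - x = 0 rearranged as the fixed point problem x = cos(x)
root, iters = picard(math.cos, 1.0)
print(root, iters)   # root ~ 0.739085, the fixed point of cos
```

Any fixed point returned this way is, by construction, a root of the original equation f(x) = 0.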
Let E be a real Banach space and T : D → D be a mapping, where D is a non-empty closed and convex subset of E. Let F(T) represent the set of all fixed points of T. One of the oldest, simplest and best-known fixed point iterative methods for approximating fixed points was given by Picard [10] in 1890, which is given by
x_{n+1} = T x_n, n = 0, 1, 2, …. (3)
Picard iteration approximates fixed points of Equation (2), where T is a contraction mapping. If T is non-expansive, then the Picard iterative process fails to approximate the fixed points of Equation (2) even when the existence of fixed points is guaranteed. To overcome this limitation, researchers developed different iterative processes to approximate fixed points of non-expansive mappings and of other mappings that are more general than non-expansive mappings. We list some of the methods available in the literature. The Mann iterative process [11] is defined as follows:
x_{n+1} = (1 - α_n) x_n + α_n T x_n, (4)
where (α_n) is a real sequence in (0, 1). If T is continuous and the Mann iterative scheme converges, then it converges to a fixed point of T. However, if T is not continuous, then there is no assurance that the Mann process will converge to the fixed point of T. Krasnoselskii [12] proposed a one-step fixed point iterative scheme, denoted by (KM), as
x_{n+1} = (1 - λ) x_n + λ T x_n, λ ∈ (0, 1). (5)
Kanwar et al. [13] proposed another one-parameter family of one-step fixed point iterative schemes, given by (6). The Picard, Mann and Krasnoselskii iteration schemes are obtained from (6) by using different values of the parameter. There are also two-step fixed point iterative schemes available in the literature; some of them are listed here. Ishikawa [14] proposed the following two-step iteration algorithm:
14] proposed the following two-step iteration algorithm as
where
and
are real sequences in
. The Ishikawa iteration procedure is a generalization of Mann iteration, but there is no dependence between the convergence results of Mann and Ishikawa iterative procedures. Moreover, Ishikawa iteration is a two-step Mann iteration with two different parameter sequences. For
, the Ishikawa iterative scheme reduces to the Mann iterative scheme. Agarwal et al. [
15] defined the S-iteration process as follows:
where
and
are real sequences in
. Hussain et al. [
16] provided an example to show that iterative scheme (
8) is faster than Mann and Ishikawa iterative schemes for Zamfirescu operators. Thianwan [
17] proposed another two-step iteration scheme as follows:
where
and
are real sequences in
. Yildirim et al. [
18] proved the convergence of the Thianwan iterative scheme and its equivalence with Picard, Mann and Ishikawa iterative schemes for Zamfirescu operators. Khan [
19] developed the Picard–Mann hybrid iterative process, which was faster than almost all two-step iterative schemes at that time. This is given as follows:
where
is a real sequence in
.
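To make the one- and two-step schemes above concrete, here is a minimal sketch comparing the Mann and Picard–Mann hybrid iterations on a sample mapping. The mapping T(x) = cos x and the constant parameter sequence α_n = 0.5 are illustrative assumptions, not choices made in the paper.

```python
import math

def mann(T, x0, alpha=0.5, tol=1e-10, max_iter=1000):
    """Mann iteration: x_{n+1} = (1 - a) x_n + a T(x_n)."""
    x = x0
    for n in range(max_iter):
        x_new = (1 - alpha) * x + alpha * T(x)
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter

def picard_mann(T, x0, alpha=0.5, tol=1e-10, max_iter=1000):
    """Picard-Mann hybrid: y_n = (1 - a) x_n + a T(x_n), x_{n+1} = T(y_n)."""
    x = x0
    for n in range(max_iter):
        y = (1 - alpha) * x + alpha * T(x)
        x_new = T(y)
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter

T = math.cos                      # illustrative mapping; fixed point ~ 0.739085
r_mann, n_mann = mann(T, 1.0)
r_pm, n_pm = picard_mann(T, 1.0)
print(n_mann, n_pm)               # the hybrid typically needs fewer iterations
```

Counting iterations to a common tolerance is the same comparison the paper's numerical tables perform.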
In the attempt to find faster iterative schemes, researchers moved towards three-step iterative processes. These iterative schemes are mostly compositions of the Picard and Mann iterative schemes. Karakaya et al. [20] proposed the following algorithm: for each x_0 ∈ D, the sequence (x_n) is defined by (11). Ullah and Arshad [21] introduced the iteration process (12). Abbas et al. [22] proposed the following iteration algorithm: for each x_0 ∈ D, the sequence (x_n) is defined by (13), where (α_n) is a sequence of real numbers in (0, 1). They proved that this iterative scheme converges faster than most of the existing schemes in the literature. They also proved that this method is equivalent to the iterative schemes given by (11) and (12).
Akutsah and Narain [23] proposed the iterative scheme (14). They also discussed results about the strong convergence and stability of the proposed scheme. This iterative scheme is faster than the iterative schemes (11), (12) and (13). Gürsoy et al. [24] defined the iteration scheme (15). Ullah and Arshad [25] developed the iterative process (16). Nawab et al. [26] defined the K-iteration process (17).
In general, two qualities, namely speed and stability, play important roles in one iterative scheme being preferred over others. In this paper, we propose a new one-parameter class of one-step fixed point iterative methods, which is faster than many existing one-step methods. We also extend this method to two-step and three-step iterative schemes.
2. Preliminaries
In this section, we give some important results and definitions, which are used to prove the main results.
Berinde [27] introduced a class of operators on an arbitrary Banach space E satisfying the following condition:
‖Tx - Ty‖ ≤ δ‖x - y‖ + L‖x - Tx‖, (18)
where δ ∈ (0, 1), L ≥ 0 and x, y ∈ E.
Osilike [28] considered a more general class than that of Berinde, which is given as follows: there exist δ ∈ [0, 1) and L ≥ 0 such that for each x, y ∈ E,
‖Tx - Ty‖ ≤ δ‖x - y‖ + L‖x - Tx‖. (19)
Imoru and Olatinwo [29] further extended the classes of mappings of Berinde and Osilike using the following contractive condition: there exist δ ∈ [0, 1) and a monotonically increasing and continuous function φ : [0, ∞) → [0, ∞) such that φ(0) = 0 and, for all x, y ∈ E, T satisfies the condition
‖Tx - Ty‖ ≤ δ‖x - y‖ + φ(‖x - Tx‖). (20)
Lemma 1 ([9]). If δ is a real number such that 0 ≤ δ < 1 and (ε_n) is a sequence of positive numbers such that lim_{n→∞} ε_n = 0, then for any sequence of positive numbers (u_n) satisfying
u_{n+1} ≤ δ u_n + ε_n, n = 0, 1, 2, …,
we have lim_{n→∞} u_n = 0.
Definition 1. Let (t_n) be any arbitrary sequence in D. Then, an iteration procedure x_{n+1} = f(T, x_n), converging to a fixed point r, is said to be T-stable if, for ε_n = ‖t_{n+1} - f(T, t_n)‖, we have lim_{n→∞} ε_n = 0 if and only if lim_{n→∞} t_n = r.
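The rate comparison of Definition 2 can be checked numerically. The two sequences below are synthetic examples chosen purely for illustration.

```python
# Synthetic illustration of the "converges faster" definition:
# a_n = a + 3^-n converges to a faster than b_n = b + 2^-n converges to b,
# because the ratio |a_n - a| / |b_n - b| = (2/3)^n tends to 0.
a, b = 1.0, 5.0
ratios = [(3.0 ** -n) / (2.0 ** -n) for n in range(1, 21)]
print(ratios[-1])   # (2/3)^20, already of order 1e-4
```

Monitoring this ratio along two concrete iteration histories is exactly how the speed claims in later sections can be validated experimentally.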
Definition 2 ([30]). Suppose that (a_n) and (b_n) are two real sequences with limits a and b, respectively. Then, (a_n) is said to converge faster than (b_n) if
lim_{n→∞} |a_n - a| / |b_n - b| = 0.
3. Development of Method
Assume that Equation (2) has a fixed point r. Let y = g(x) represent the graph of the function g. Let x_0 be an initial guess of the required fixed point and (x_0, g(x_0)) be the corresponding point on the graph of the function g (Figure 1).
Here, we approximate the nonlinear function g by a linear approximation in the double-intercept form of a straight line. Let (x_0, g(x_0)) be the midpoint of the line segment between the axes. Then, the equation of the line is given by
x / (2 x_0) + y / (2 g(x_0)) = 1, (23)
where x_0 ≠ 0 and g(x_0) ≠ 0.
The point of intersection of line (23) and the line y = x will give the required fixed point. Let (x_1, x_1) be the point of intersection. Thus,
x_1 = 2 x_0 g(x_0) / (x_0 + g(x_0)). (24)
This can be generalized as
x_{j+1} = 2 x_j g(x_j) / (x_j + g(x_j)), j = 0, 1, 2, …. (25)
We also propose the generalization of this iterative scheme as follows:
or
3.1. Special Cases
Here, we consider some special cases of expression (
26).
For
, formula (
26) reduces to the simple Picard’s fixed point method
.
For
, formula (
26) gives rise to the harmonic mean formula
.
For
, where
formula (
26) reduces to the following iterative scheme:
For
, where
is a real sequence
formula (
26) corresponds to the following iterative scheme:
This is another variant of the Mann iterative scheme.
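The harmonic mean special case can be sketched as follows. Consistent with the double-intercept construction of Section 3, it is assumed here that the harmonic mean scheme takes the form x_{j+1} = 2 x_j g(x_j) / (x_j + g(x_j)); the test function g(x) = e^{-x} is an illustrative choice, not one of the paper's test problems.

```python
import math

def harmonic_mean_iter(g, x0, tol=1e-12, max_iter=200):
    """Fixed point iteration using the harmonic mean of x_j and g(x_j):
    x_{j+1} = 2 x_j g(x_j) / (x_j + g(x_j))  (assumed form of the HM scheme)."""
    x = x0
    for j in range(max_iter):
        x_new = 2 * x * g(x) / (x + g(x))
        if abs(x_new - x) < tol:
            return x_new, j + 1
        x = x_new
    return x, max_iter

g = lambda x: math.exp(-x)        # x = e^-x has fixed point r ~ 0.567143
r_hm, n_hm = harmonic_mean_iter(g, 1.0)
print(r_hm, n_hm)
```

Note that the harmonic mean of x_j and g(x_j) has the same fixed points as g itself, since x = 2xg(x)/(x + g(x)) reduces to x = g(x) for x ≠ 0.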
3.2. Role of Parameter k
In this section, we discuss the characteristics of parameter k.
Let
and
Since
r is a fixed point of
T, one gets
For large values of
j,
, one obtains
Since
is a sufficient condition for the convergence of the proposed fixed point iterative scheme (
26), one thus gets
This further implies that
This is the interval of convergence for proposed fixed point iteration given by (
26), which is wider than the interval of convergence of Picard’s iteration. In particular, for
(harmonic mean), the interval of convergence becomes
.
Therefore, the harmonic mean fixed point scheme (HM) has a wider interval of convergence than simple Picard’s fixed point iteration.
Remark 1. There can be several ways to choose g, but the choice of g should be such that the fixed point iteration scheme converges to its fixed point. We present some examples to discuss this in detail.
Example 1. We have considered the following example to compare the proposed fixed point method, named the harmonic mean formula with Picard’s iterative scheme. Consider the following two possible rearrangements of as
- (1)
- (2)
As is the required root of Equation (29) and the fixed point for the above two sequences with an initial guess , in the case of , the Picard method diverges on the interval []. Here, and imply that . This further implies that , which violates the condition . As , the harmonic mean formula converges to the required fixed point. In the case of , we have , which clearly implies that ; hence, Picard's iterative method converges.
Example 2. Let us consider a square root finding problem solved by fixed point methods. Consider the function
Here, we consider two possible rearrangements of as
- (1)
- (2)
Here, is the required root of Equation (30) and the fixed point for the above two sequences, with an initial guess . In the case of the first sequence , Picard's method diverges, while the second one converges, as . For the first sequence, the corresponding harmonic mean iteration converges; in the case of the harmonic mean formula, the interval of convergence is . In the case of the second sequence, the harmonic mean fixed point iterative scheme also converges, as
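The behaviour described in Example 2 can be illustrated under an assumed concrete instance: for computing √c, the rearrangement g1(x) = c/x makes Picard iteration oscillate (each step flips between x0 and c/x0), while combining x and c/x through the harmonic mean converges. The value c = 2 and the harmonic mean form used are assumptions for illustration.

```python
def g1(x, c=2.0):
    return c / x                   # Picard on g1 oscillates: x -> c/x -> x -> ...

def hm_step(x, c=2.0):
    gx = g1(x, c)
    return 2 * x * gx / (x + gx)   # harmonic mean of x and c/x

# Picard iteration on g1 just flips between two values and never settles
x = 1.5
picard_orbit = [x]
for _ in range(4):
    x = g1(x)
    picard_orbit.append(x)
print(picard_orbit)                # period-2 oscillation: 1.5, 4/3, 1.5, ...

# Harmonic mean iteration converges rapidly to sqrt(2)
x = 1.5
for _ in range(6):
    x = hm_step(x)
print(x)                           # ~ 1.4142135...
```

Here the harmonic mean step simplifies to 2cx/(x² + c), whose derivative vanishes at √c, which is why the convergence is so fast in this particular instance.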
3.3. Two-Step and Three-Step Iterative Schemes
In this section, we propose new two-step and three-step iterative schemes defined as follows.
3.3.1. Two-Step Iterative Schemes
For
given in
D, the sequence
in
D is given by
and
3.3.2. Three-Step Iterative Scheme
For
given in
D, the sequence
in
D is given by
4. Convergence and Stability Analysis
In this section, we shall prove the strong convergence and stability results and obtain error equations for iterative schemes given by (
26), (
31) and (
33), respectively.
Theorem 1. Let be a function satisfying the contractive condition (20) with fixed point r, where D is a non-empty closed and convex subset of a real Banach space E. Let be defined by the iteration process (26) and , where is a real number. Then, converges strongly to a unique fixed point r of T. Proof. We shall establish that
. Using (
26), one has
Let
. Since
and
have the same sign and
,
,
and
also have the same sign. Thus,
. Therefore,
So, using (
35) in (
34), one gets
Repeating the above process, one gets
Continuing like this, one can write
Let
. As each
, therefore
. Thus, (
37) becomes
Since , it follows from Lemma 1 that . Therefore, converges strongly to r.
We now prove that
r is unique. Let
such that
. Suppose that
.
This is a contradiction. Therefore . □
Theorem 2. Let be a function satisfying contractive condition (20) with fixed point r, where D is a non-empty closed and convex subset of real Banach space E. Let be defined by the iteration process (26) and , where is a real number. Then, the iterative scheme is T-stable. Proof. Let
be an arbitrary sequence in
D and suppose that the sequence generated by (
26) is
, which converges to the unique fixed point
r and that
. To show that the iterative scheme is
T-stable, we have to show that
if and only if
. Suppose that
. We find that
Since from (
35),
and
, by using Lemma 1, one obtains
.
Conversely, suppose that
. Proceeding similarly to before, we have
Using the hypothesis , one gets .
Hence, iteration process (
26) is
T-stable. □
Theorem 3. Let be a function satisfying (20) with fixed point r, where D is a non-empty closed and convex subset of a real Banach space E. Let be defined by the iteration process (26) and , where is a real number. Then, the iterative scheme has a linear order of convergence, and for , the order of convergence of the iterative scheme is two. Proof. Suppose . Expanding about r by Taylor series expansion, one gets
As and .
Substituting these values in (
41), one obtains
Therefore, scheme (
26) has a linear order of convergence.
Furthermore, if
, then
This implies that scheme (
26) has at least a second order of convergence. □
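The linear order asserted in Theorem 3 can be checked empirically with the usual computational order of convergence, COC ≈ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}). The test mapping g(x) = cos x and its known fixed point are illustrative assumptions, not the paper's test problems.

```python
import math

# Estimate the computational order of convergence of plain Picard iteration
# on g(x) = cos(x), whose fixed point is r ~ 0.7390851332 (illustrative choice).
r = 0.7390851332151607
x = 1.0
errors = []
for _ in range(25):
    x = math.cos(x)
    errors.append(abs(x - r))

# COC ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}); it tends to 1 for linear convergence
coc = math.log(errors[-1] / errors[-2]) / math.log(errors[-2] / errors[-3])
print(coc)   # close to 1, consistent with a linear order of convergence
```

The same estimator applied to a quadratically convergent scheme would approach 2, which is how the second-order claim for special parameter choices can be verified numerically.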
Theorem 4. Let be a function satisfying contractive condition (20) with fixed point r, where D is a non-empty closed and convex subset of real Banach space E. Let be defined by the iteration process (31) and , where is a real number. Then, converges strongly to a unique fixed point r of T. Proof. We shall establish that
. Using (
31), one has
Further, let
. Since
and
have the same sign and
,
, so
and
also have the same sign. Thus,
Repeating the above process, one gets
Let
. Since each
, so
. Then, (
46) becomes
Since
and
, one gets
Hence, it follows from (
47) that
Therefore,
converges strongly to
r.
Similarly, as before, it can be easily checked that r is unique. □
Theorem 5. Let be a function satisfying contractive condition (20) with fixed point r, where D is a non-empty closed and convex subset of real Banach space E. Let be defined by the iteration process (31) and , where is a real number. Then, the iterative scheme is T-stable. Proof. Let
be an arbitrary sequence in
D, and suppose that the sequence generated by (
31) is
converging to the unique fixed point
r and that
. To show that the iterative scheme is
T-stable, we have to show that
if and only if
. Suppose that
. We have
Since from (
35)
and
, by using Lemma 1, one obtains
.
Conversely, suppose that
. Proceeding similarly to before, one can have
Using the hypothesis , one gets .
Hence, iteration process (
31) is
T-stable. □
Theorem 6. Let be a function satisfying contractive condition (20) with fixed point r, where D is a nonempty closed and convex subset of real Banach space E. Let be defined by the iteration process (31) and , where is a real number. Then, this iterative scheme has a linear order of convergence. Proof. For a given sequence
, we denote
. By Taylor’s series expansion about
r, one gets
Using this expansion, one can write
Let
. Thus,
. This implies that
Using the value of
in the above equation, one can get
The above equation represents the error equation for the iterative scheme given by (
31). □
Theorem 7. Let be a function satisfying contractive condition (20) with fixed point r, where D is a nonempty closed and convex subset of real Banach space E. Let be a sequence in and . Given , consider the sequence and obtained through the iteration processes (31) and (10), respectively. Then, converges to r faster than . Proof. From (
47) in Theorem 4, we get the following inequality:
Using a similar argument to that in Theorem 4, the iteration process (
10) takes the form
Using
in (52), we obtain
This can be also written as
Since , we get , which implies that
Therefore, . Using Definition 2, we conclude that converges to r faster than . □
Remark 2. Proceeding in a similar way, it can be shown that the new iteration scheme (31) converges faster than other two-step iterative schemes.
Remark 3. By using a similar argument, we can prove the strong convergence, stability and speed results for the two-step iterative scheme (32) and the three-step fixed point iteration (33).
Remark 4. The error equation for the three-step fixed point iteration (33) is given as
Furthermore, if , then , which implies that the iterative scheme (33) has second order convergence.
5. Numerical Experiments
The theoretical results proposed in previous sections are tested in this section. We take some particular cases of our method (
26) by substituting
,
,
and
.
To check the effectiveness of the proposed method (
26), we consider some scalar nonlinear equations, which are mentioned in
Table 1. In addition, in
Table 1, their corresponding fixed points and initial guesses are mentioned. For better comparison, in each example, we find the number of iterations required and the computational time of convergence to reach the stopping criteria given as follows:
It is well known that CPU time is not unique and depends on the specifications of the computer. The mean CPU time is calculated by averaging 12 runs of the program and is reported in seconds for each test problem. The comparison results of different cases of (
26) with Picard, Mann and KM are shown in
Table 2. These results show that when
k tends to zero, the results are almost the same as for Picard iteration, and our proposed method performs much better than Mann and KM for each test problem in terms of the number of iterations required to attain the stopping criteria.
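The mean-CPU-time measurement described above can be sketched as follows. The timed routine (Picard iteration on cos) and the tolerance are illustrative stand-ins for the paper's actual test problems and stopping criteria.

```python
import math
import time

def picard_cos(x0=1.0, tol=1e-12, max_iter=500):
    """Stand-in workload: Picard iteration on g(x) = cos(x)."""
    x = x0
    for _ in range(max_iter):
        x_new = math.cos(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Mean CPU time over 12 runs, mirroring the averaging used for the tables
times = []
for _ in range(12):
    start = time.perf_counter()
    picard_cos()
    times.append(time.perf_counter() - start)
mean_time = sum(times) / len(times)
print(f"mean CPU time over 12 runs: {mean_time:.6e} s")
```

Averaging over repeated runs smooths out scheduler noise, which is why a single timing is not meaningful for comparisons of this kind.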
For the comparison of iterative schemes given by (
31) and (
33), we consider different scalar test problems in
Table 3. In
Table 4, we have compared different two-step iterative schemes given by Ishikawa (ISH), Agarwal (AGR), Thianwan (THI) and Khan (KHAN) with the proposed two-step scheme given by (
31) called
with
and
. Results show that our proposed two-step method performed much better than well-known similar existing two-step methods. In
Table 5 and
Table 6, we have compared different three-step fixed point iterative schemes given by Karakaya, Ullah, Abbas, Akutash, Gürsoy and Nawab with the proposed three-step iterative scheme (
33) and when
, named
,
, respectively, with
and
.
Further, in
Table 7, we consider the test problems of linear and non-linear systems of equations, and their results are shown in
Table 8 and
Table 9. All these calculations have been performed in
with multiple-precision arithmetic, using an Apple MacBook Air M1 laptop containing an Apple M1 chip (8-core CPU) with 8 GB of RAM and the macOS Ventura operating system.
It can be observed that, in most cases, the new iterative schemes perform better than most similar existing methods, both in terms of the number of iterations and the computational time of convergence.