Modified Inertial Hybrid and Shrinking Projection Algorithms for Solving Fixed Point Problems

In this paper, we introduce two modified inertial hybrid and shrinking projection algorithms for solving fixed point problems by combining the modified inertial Mann algorithm with the projection algorithm. We establish strong convergence theorems under certain suitable conditions. Finally, our algorithms are applied to the convex feasibility problem, the variational inequality problem, and location theory. The algorithms and results presented in this paper summarize and unify corresponding results previously known in this field.


Introduction
Throughout this paper, let C denote a nonempty closed convex subset of a real Hilbert space H with inner product ⟨·, ·⟩ and induced norm ‖·‖. A mapping T : C → C is said to be nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖ for all x, y ∈ C. We use Fix(T) := {x ∈ C : Tx = x} to denote the set of fixed points of a mapping T : C → C. The main purpose of this paper is to consider the following fixed point problem: find x* ∈ C such that T(x*) = x*, where T : C → C is nonexpansive with Fix(T) ≠ ∅.
There are various applications of approximating fixed points of nonexpansive mappings, such as monotone variational inequalities, convex optimization problems, convex feasibility problems, and image restoration problems; see, e.g., [1-6]. It is well known that the Picard iteration method may fail to converge, and an effective way to overcome this difficulty is the Mann iterative method, which generates a sequence {x_n} recursively by

x_{n+1} = α_n x_n + (1 − α_n) T x_n, (1)

where {α_n} ⊂ (0, 1). The iterative sequence {x_n} defined by (1) converges weakly to a fixed point of T when the condition ∑_{n=1}^∞ α_n(1 − α_n) = +∞ is satisfied. Many practical applications, for instance in quantum physics and image reconstruction, are set in infinite-dimensional spaces. To investigate these problems, norm convergence is usually preferable to weak convergence. Therefore, modifying the Mann iteration method to obtain strong convergence is an important research topic; for recent works, see [7-12] and the references therein. On the other hand, the Ishikawa iterative method can converge strongly to fixed points of nonlinear mappings.
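The contrast between Picard and Mann iteration can be seen in a minimal sketch (ours, not from the paper; the paper's experiments use MATLAB, but Python is used here for illustration). Take T to be a plane rotation by a fixed angle: T is nonexpansive with Fix(T) = {(0, 0)}, the Picard iterates merely circle the origin, while the Mann iterates (1) with α_n = 1/2 (so that ∑ α_n(1 − α_n) = +∞) contract by cos(θ/2) per step and converge to the fixed point.

```python
import math

THETA = 0.5  # rotation angle; any fixed angle in (0, pi) works

def T(p):
    """Nonexpansive map: rotation of the plane by THETA about the origin."""
    c, s = math.cos(THETA), math.sin(THETA)
    x, y = p
    return (c * x - s * y, s * x + c * y)

def picard(p, n):
    # x_{k+1} = T x_k : stays on the circle of radius ||p||, no convergence
    for _ in range(n):
        p = T(p)
    return p

def mann(p, n, alpha=0.5):
    # x_{k+1} = alpha*x_k + (1 - alpha)*T x_k, scheme (1) with alpha_n = 1/2
    for _ in range(n):
        q = T(p)
        p = (alpha * p[0] + (1 - alpha) * q[0],
             alpha * p[1] + (1 - alpha) * q[1])
    return p

def norm(p):
    return math.hypot(p[0], p[1])

p0 = (1.0, 0.0)
print(norm(picard(p0, 300)))  # stays at ~1: Picard does not converge to 0
print(norm(mann(p0, 300)))    # tends to 0: Mann converges to the fixed point
```

Here the Mann map is (I + R)/2 for the rotation matrix R, whose spectral norm is cos(θ/2) < 1, which is what drives the convergence.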
For more discussion, see [13-16]. In 2003, Nakajo and Takahashi [17] established strong convergence of the Mann iteration with the aid of projections. Indeed, they considered the following algorithm:

y_n = α_n x_n + (1 − α_n) T x_n,
C_n = {z ∈ C : ‖y_n − z‖ ≤ ‖x_n − z‖},
Q_n = {z ∈ C : ⟨x_n − z, x_0 − x_n⟩ ≥ 0},
x_{n+1} = P_{C_n ∩ Q_n} x_0, (2)

where {α_n} ⊂ [0, 1), T is a nonexpansive mapping on C, and P_{C_n ∩ Q_n} is the metric projection from C onto C_n ∩ Q_n. This method is now referred to as the hybrid projection method. Inspired by Nakajo and Takahashi [17], Takahashi, Takeuchi, and Kubota [18] proposed a projection-based method and obtained strong convergence results; their method is now called the shrinking projection method. In recent years, many authors have obtained new algorithms based on the projection method; see [10,18-23].
Generally, the Mann algorithm has a slow convergence rate. In recent years, there has been tremendous interest in developing fast convergent algorithms, especially inertial-type extrapolation methods, which were first proposed by Polyak in [24]. Recently, researchers have constructed various fast iterative algorithms by means of inertial extrapolation techniques, for example, the inertial Mann algorithm [25], inertial forward-backward splitting algorithms [26,27], inertial extragradient algorithms [28,29], inertial projection algorithms [30,31], and the fast iterative shrinkage-thresholding algorithm (FISTA) [32]. The results for these algorithms and other related ones not only theoretically analyze the convergence properties of inertial-type extrapolation algorithms, but also numerically demonstrate their computational performance on data analysis and image processing problems.
In 2008, Maingé [25] proposed the following inertial Mann algorithm based on the idea of the Mann algorithm and inertial extrapolation:

w_n = x_n + δ_n(x_n − x_{n−1}),
x_{n+1} = α_n w_n + (1 − α_n) T w_n. (3)

It should be pointed out that the iterative sequence {x_n} defined by (3) attains only weak convergence under the following assumptions:

(C1) {δ_n} ⊂ [0, δ] with δ ∈ [0, 1);
(C2) ∑_{n=1}^∞ δ_n ‖x_n − x_{n−1}‖² < +∞;
(C3) 0 < inf_n α_n ≤ sup_n α_n < 1.

It should be noted that condition (C2) is very strong, since it involves the iterates themselves and therefore impedes the implementation of related algorithms. Recently, Boţ and Csetnek [33] removed condition (C2); for more details, see Theorem 5 in [33].
In 2014, Sakurai and Iiduka [34] introduced an algorithm to accelerate the Halpern fixed point algorithm in Hilbert spaces by means of conjugate gradient methods, which can accelerate the convergence rate of the steepest descent method. Very recently, inspired by the work of Sakurai and Iiduka [34], Dong et al. [35] proposed a modified inertial Mann algorithm by combining the inertial method, the Picard algorithm, and the conjugate gradient method. Their numerical results showed that the proposed algorithm has some advantages over other algorithms. Indeed, they obtained the following result:

Theorem 1. Let T : C → C be a nonexpansive mapping with Fix(T) ≠ ∅. Set µ ∈ (0, 1], η > 0, choose x_0, x_1 ∈ H arbitrarily, and set d_0 := (Tx_0 − x_0)/η. Define a sequence {x_n} by the following algorithm: The iterative sequence {x_n} defined by (4) converges weakly to a point in Fix(T) under the following conditions: for the sequence {w_n} defined in (4), {‖Tw_n − w_n‖} is bounded and {‖Tw_n − y‖} is bounded for any y ∈ Fix(T).
Inspired and motivated by the above works, in this paper, based on the modified inertial Mann algorithm (4) and the projection algorithm (2), we propose two new modified inertial hybrid and shrinking projection algorithms. We obtain strong convergence results under mild conditions. Finally, our algorithms are applied to a convex feasibility problem, a variational inequality problem, and location theory.
The structure of the paper is as follows. Section 2 gives the mathematical preliminaries. Section 3 presents the modified inertial hybrid and shrinking projection algorithms for nonexpansive mappings in Hilbert spaces and analyzes their convergence. Section 4 gives numerical experiments comparing the convergence behavior of our proposed algorithms with previously known algorithms. Section 5 concludes the paper with a brief summary.

Preliminaries
We use the notation x_n → x and x_n ⇀ x to denote the strong and weak convergence of a sequence {x_n} to a point x ∈ H, respectively. Let ω_w{x_n} := {x : ∃ x_{n_j} ⇀ x} denote the weak limit set of {x_n}. For any x, y ∈ H and t ∈ R, we have

‖tx + (1 − t)y‖² = t‖x‖² + (1 − t)‖y‖² − t(1 − t)‖x − y‖².

For any x ∈ H, there is a unique nearest point P_C x in C, namely P_C(x) := argmin_{y∈C} ‖x − y‖. P_C is called the metric projection of H onto C. P_C x is characterized by

⟨x − P_C x, y − P_C x⟩ ≤ 0, ∀y ∈ C. (5)

From this characterization, the following inequality can be obtained:

‖y − P_C x‖² + ‖x − P_C x‖² ≤ ‖x − y‖², ∀y ∈ C. (6)

We give some special cases with simple analytical solutions:
(1) The Euclidean projection of x_0 onto a Euclidean ball B[c, r] = {x : ‖x − c‖ ≤ r} is given by P_{B[c,r]}(x_0) = c + (r/max{‖x_0 − c‖, r})(x_0 − c).
(2) The Euclidean projection of x_0 onto a box Box[ℓ, u] = {x : ℓ ≤ x ≤ u} is given componentwise by (P_{Box[ℓ,u]}(x_0))_i = min{max{(x_0)_i, ℓ_i}, u_i}.
(3) The Euclidean projection of x_0 onto a halfspace H⁻_{a,b} = {x : ⟨a, x⟩ ≤ b}, a ≠ 0, is given by P_{H⁻_{a,b}}(x_0) = x_0 − (max{⟨a, x_0⟩ − b, 0}/‖a‖²) a.
Next we give some results that will be used in our main proof.
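The three closed-form projections above can be written directly as short functions; this is an illustrative sketch of ours in Python (the paper's experiments use MATLAB), for vectors in R^N.

```python
import numpy as np

def proj_ball(x, c, r):
    """Projection onto the ball B[c, r] = {x : ||x - c|| <= r}."""
    return c + (r / max(np.linalg.norm(x - c), r)) * (x - c)

def proj_box(x, l, u):
    """Componentwise projection onto Box[l, u] = {x : l <= x <= u}."""
    return np.minimum(np.maximum(x, l), u)

def proj_halfspace(x, a, b):
    """Projection onto the halfspace {x : <a, x> <= b}, a != 0."""
    return x - (max(a @ x - b, 0.0) / (a @ a)) * a

print(proj_ball(np.array([3.0, 0.0]), np.array([0.0, 0.0]), 1.0))       # [1. 0.]
print(proj_box(np.array([2.0, -3.0]), np.zeros(2), np.ones(2)))          # [1. 0.]
print(proj_halfspace(np.array([3.0, 4.0]), np.array([1.0, 0.0]), 1.0))   # [1. 4.]
```

Note that each function returns the input unchanged whenever it already lies in the set, as the max in the ball and halfspace formulas then vanishes.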
Lemma 1 ([36]). Let C be a nonempty closed convex subset of a real Hilbert space H and let T : C → H be a nonexpansive mapping with Fix(T) ≠ ∅. Assume that {x_n} is a sequence in C and x ∈ H such that x_n ⇀ x and Tx_n − x_n → 0 as n → ∞. Then x ∈ Fix(T).

Lemma 2 ([37]). Let C be a nonempty closed convex subset of a real Hilbert space H. For any x, y, z ∈ H and a ∈ R, the set {v ∈ C : ‖y − v‖² ≤ ‖x − v‖² + ⟨z, v⟩ + a} is convex and closed.

Modified Inertial Hybrid and Shrinking Projection Algorithms
In this section, we introduce two modified inertial hybrid and shrinking projection algorithms for nonexpansive mappings in Hilbert spaces using the ideas of the inertial method, the Picard algorithm, the conjugate gradient method, and the projection method. We prove the strong convergence of the algorithms under suitable conditions.

Theorem 2. Let C be a bounded closed convex subset of a real Hilbert space H and let T : C → C be a nonexpansive mapping with Fix(T) ≠ ∅. Assume that the following conditions are satisfied: Set x_{−1}, x_0 ∈ H arbitrarily and set d_0 := (Tx_0 − x_0)/η. Define a sequence {x_n} by the following: where the sequence {ξ_n} is defined by where n_0 satisfies ψ_n ≤ 1/2 for all n ≥ n_0. Then the iterative sequence {x_n} defined by (7) converges to P_{Fix(T)} x_0 in norm.
Proof. We divide our proof into three steps.
Step 1. To begin with, we show that Fix(T) ⊂ C_n ∩ Q_n. It is easy to check that C_n is convex by Lemma 2. Next we prove Fix(T) ⊂ C_n for all n ≥ 0. Assume that ‖d_n‖ ≤ M_1 for some n ≥ n_0. The triangle inequality ensures that which implies that ‖d_n‖ ≤ M_2 for all n ≥ 0, that is, {d_n} is bounded. Since w_n ∈ C, we get that ‖w_n − u‖ ≤ M_1 for all u ∈ Fix(T). From the definition of {y_n} and the nonexpansivity of T, we obtain Therefore, where . Thus, u ∈ C_n for all n ≥ 0 and hence Fix(T) ⊂ C_n for all n ≥ 0. On the other hand, it is easy to see that Fix(T) ⊂ C = Q_0 when n = 0. Suppose that Fix(T) ⊂ Q_{n−1}. By combining the fact that x_n = P_{C_{n−1} ∩ Q_{n−1}} x_0 with (5), we obtain According to the induction assumption, it follows from the definition of Q_n that Fix(T) ⊂ Q_n. Therefore, Fix(T) ⊂ C_n ∩ Q_n for all n ≥ 0.
Step 2. We prove that x_{n+1} − x_n → 0 as n → ∞. Combining the definition of Q_n and Fix(T) ⊂ Q_n, we obtain We note that {x_n} is bounded and Since x_{n+1} ∈ Q_n, we have ‖x_n − x_0‖ ≤ ‖x_{n+1} − x_0‖, which means that lim_{n→∞} ‖x_n − x_0‖ exists. Using (6), one sees that which implies that x_{n+1} − x_n → 0 as n → ∞. Next, by the definition of w_n, we have which further yields that

Step 3. It remains to show that x_n → x*, where x* = P_{Fix(T)} x_0. From x_{n+1} ∈ C_n we get Therefore, On the other hand, since Consequently, In view of (9) and Lemma 1, every weak limit point of {x_n} is a fixed point of T, i.e., ω_w{x_n} ⊂ Fix(T). By means of Lemma 3 and inequality (8), we conclude that {x_n} converges to P_{Fix(T)} x_0 in norm. The proof is complete.

Theorem 3. Let C be a bounded closed convex subset of a real Hilbert space H and let T : C → C be a nonexpansive mapping with Fix(T) ≠ ∅. Assume that the following conditions are satisfied: Set x_{−1}, x_0 ∈ H arbitrarily and set d_0 := (Tx_0 − x_0)/η. Define a sequence {x_n} by the following: where the sequence {ξ_n} is defined by η M_1, where n_0 satisfies ψ_n ≤ 1/2 for all n ≥ n_0. Then the iterative sequence {x_n} defined by (10) converges to P_{Fix(T)} x_0 in norm.
Proof. We divide our proof into three steps.
Step 1. Our first goal is to show that Fix(T) ⊂ C_{n+1} for all n ≥ 0. According to Step 1 in the proof of Theorem 2, for all u ∈ Fix(T), we obtain Therefore, u ∈ C_{n+1} for each n ≥ 0 and hence Fix(T) ⊂ C_{n+1} ⊂ C_n.
Step 2. As mentioned above, the next step of the proof is to show that x_{n+1} − x_n → 0 as n → ∞.
Using the fact that x_n = P_{C_n} x_0 and Fix(T) ⊂ C_n, we have It follows that {x_n} is bounded. In addition, we note that On the other hand, since x_{n+1} ∈ C_n, we obtain ‖x_n − x_0‖ ≤ ‖x_{n+1} − x_0‖, which implies that lim_{n→∞} ‖x_n − x_0‖ exists. In view of (6), we have which further implies that lim_{n→∞} ‖x_{n+1} − x_n‖ = 0. Also, we have lim_{n→∞} ‖w_n − x_n‖ = 0 and lim_{n→∞} ‖w_n − x_{n+1}‖ = 0.
Step 3. Finally, we show that x_n → x*, where x* = P_{Fix(T)} x_0. The remainder of the argument is analogous to that in the proof of Theorem 2 and is therefore omitted.

Remark 1.
We remark here that the modified inertial hybrid projection algorithm (7) (in short, MIHPA) and the modified inertial shrinking projection algorithm (10) (in short, MISPA) contain some previously known results as special cases.
When δ_n = 0 and ψ_n = 0, the MIHPA becomes the hybrid projection algorithm (in short, HPA) proposed by Nakajo and Takahashi [17], and the MISPA becomes the shrinking projection algorithm (in short, SPA) proposed by Takahashi, Takeuchi, and Kubota [18]. When δ_n = 0 and ψ_n ≠ 0, the MIHPA becomes the modified hybrid projection algorithm (in short, MHPA) proposed by Dong et al. [35], and the MISPA becomes the modified shrinking projection algorithm (in short, MSPA).

Numerical Experiments
In this section, we provide three numerical applications to demonstrate the computational performance of our proposed algorithms and compare them with some existing ones. All the programs are implemented in MATLAB R2018a on a personal computer with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz and 8.00 GB of RAM.

Example 1.
As an example, we consider the convex feasibility problem: given nonempty closed convex sets C_i ⊂ R^N (i = 0, 1, . . . , m), find x* ∈ C := ∩_{i=0}^m C_i, where one supposes that C ≠ ∅. A mapping T : R^N → R^N is defined by T := P_0 ((1/m) ∑_{i=1}^m P_i), where P_i = P_{C_i} stands for the metric projection onto C_i. Since each P_i is nonexpansive, the mapping T is also nonexpansive. Furthermore, we note that Fix(T) = C. In this experiment, we set C_i as a closed ball with center c_i ∈ R^N and radius r_i > 0. Thus P_i can be computed by

P_i(x) = c_i + (r_i/max{‖x − c_i‖, r_i})(x − c_i).

Choose r_i = 1 (i = 0, 1, . . . , m), c_0 = [0, 0, . . . , 0], c_1 = [1, 0, . . . , 0], and We have Fix(T) = {0} from the special choice of c_1, c_2 and r_1, r_2. In Algorithms (7) and (10), once the stopping criterion is satisfied, the iteration stops. We test our algorithms under different inertial parameters and initial values. Results are shown in Table 1, where "Iter." represents the number of iterations.
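A small sketch of this feasibility setup (ours, in Python rather than the paper's MATLAB): the center c_2 is not legible in our copy, so we assume c_2 = [−1, 0] in R², which makes the unit balls around c_1 and c_2 tangent at the origin and gives Fix(T) = {0}. Plain Picard iteration of the averaged map T is used for illustration; the paper instead accelerates this with the projection algorithms (7) and (10). Because the balls are tangent, Picard converges only sublinearly here.

```python
import numpy as np

def proj_ball(x, c, r):
    """Metric projection onto the closed ball B[c, r]."""
    d = np.linalg.norm(x - c)
    return x if d <= r else c + (r / d) * (x - c)

balls = [(np.array([0.0, 0.0]), 1.0),   # C_0
         (np.array([1.0, 0.0]), 1.0),   # C_1
         (np.array([-1.0, 0.0]), 1.0)]  # C_2 (our assumption)

def T(x):
    # T = P_0 ∘ (1/m) * sum_{i=1}^m P_i with m = 2
    avg = sum(proj_ball(x, c, r) for c, r in balls[1:]) / (len(balls) - 1)
    return proj_ball(avg, *balls[0])

x = np.array([2.0, 2.0])
for _ in range(2000):
    x = T(x)
print(np.linalg.norm(x))  # slowly approaches 0, the unique point of C
```

The slow (roughly 1/sqrt(n)) decay is exactly the behavior that motivates the inertial and projection-based acceleration studied in the paper.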
Example 2. Our next example is the following variational inequality problem (in short, VI): for a nonempty closed convex set C ⊂ R^N, find x* ∈ C such that

⟨f(x*), x − x*⟩ ≥ 0, ∀x ∈ C, (12)

where f : R^N → R^N is a mapping. Let VI(C, f) denote the solution set of VI (12). T : R^N → R^N is defined by T := P_C(I − γf), where 0 < γ < 2/L and L is the Lipschitz constant of the mapping f. In [39], Xu showed that T is an averaged mapping, i.e., T can be seen as an average of the identity mapping I and a nonexpansive mapping. Since Fix(T) = VI(C, f), we can solve VI (12) by finding a fixed point of T. Take f : R² → R² as follows: f(x, y) = (2x + 2y + sin(x), −2x + 2y + sin(y)), ∀(x, y) ∈ R².
The feasible set C is given by C , where e = (1, 1)^T. It is not hard to check that f is Lipschitz continuous with constant L = √26 and 1-strongly monotone [40]. Therefore, VI (12) has a unique solution x* = (0, 0)^T.
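The fixed-point iteration x_{k+1} = P_C((I − γf)(x_k)) for this example can be sketched as follows (ours, in Python). The precise definition of C is garbled in our copy, so for illustration we assume C is the box [−e, e] with e = (1, 1)^T. Since f is 1-strongly monotone and √26-Lipschitz, I − γf is a contraction for small γ, and the iterates converge to the unique solution x* = (0, 0)^T.

```python
import numpy as np

def f(p):
    """The mapping of Example 2: f(x, y) = (2x+2y+sin x, -2x+2y+sin y)."""
    x, y = p
    return np.array([2*x + 2*y + np.sin(x), -2*x + 2*y + np.sin(y)])

def proj_box(x, l, u):
    """Componentwise projection onto the box {x : l <= x <= u}."""
    return np.minimum(np.maximum(x, l), u)

e = np.array([1.0, 1.0])   # assumed feasible set: C = [-e, e]
gamma = 0.05               # step size, well below 2/L with L = sqrt(26)
x = np.array([1.0, 1.0])
for _ in range(1000):
    x = proj_box(x - gamma * f(x), -e, e)
print(x)  # tends to the unique solution (0, 0)
```

With this γ the contraction factor per step is sqrt(1 − 2γ + 26γ²) ≈ 0.982, so 1000 iterations already drive the error far below the paper's 10⁻⁴ tolerance.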
where ω_i > 0 are given weights and a_i ∈ R^n are anchor points. It is easy to check that the objective function f in (13) is convex and coercive; therefore, the problem has a nonempty solution set. It should be noted that f is not differentiable at the anchor points. The most famous method for solving problem (13) is the Weiszfeld algorithm; see [41] for more discussion. Weiszfeld proposed the following fixed point algorithm:

x_{n+1} = T(x_n), where T(x) := (∑_{i=1}^m ω_i a_i/‖x − a_i‖) / (∑_{i=1}^m ω_i/‖x − a_i‖), x ∉ A,

where A = {a_1, a_2, . . . , a_m}.
We consider a small example with n = 2, m = 4 anchor points, and ω_i = 1 for all i. It follows from the special selection of the anchor points a_i (i = 1, 2, 3, 4) that the optimal solution of (13) is x* = (5, 5)^T. We use the same algorithms as in Example 2, and our parameter settings are as follows, setting η = 1,
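Weiszfeld's fixed point iteration for this setting can be sketched as follows (ours, in Python). The paper's four anchor points are not shown in our copy; we assume the symmetric configuration below, whose geometric median is (5, 5) by symmetry, matching the stated optimal solution.

```python
import numpy as np

# Assumed anchor points: a symmetric cross centered at (5, 5)
anchors = np.array([[0.0, 5.0], [10.0, 5.0], [5.0, 0.0], [5.0, 10.0]])

def weiszfeld_step(x, pts):
    """One Weiszfeld update with unit weights omega_i = 1 (x not an anchor)."""
    d = np.linalg.norm(pts - x, axis=1)          # distances ||x - a_i||
    w = 1.0 / d                                  # weights omega_i / ||x - a_i||
    return (w[:, None] * pts).sum(axis=0) / w.sum()

x = np.array([1.0, 2.0])                         # start away from the anchors
for _ in range(5000):
    x_new = weiszfeld_step(x, anchors)
    if np.linalg.norm(x_new - x) < 1e-12:        # stop when the step is tiny
        x = x_new
        break
    x = x_new
print(x)  # approaches the geometric median (5, 5)
```

Each step is a convex combination of the anchors with weights inversely proportional to the current distances, which is why the iteration must avoid the anchor points themselves, where f is not differentiable.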

Remark 2.
From Examples 1-3, we know that our proposed algorithms are effective and easy to implement. Moreover, the initial values do not affect the computational performance of our algorithms. However, it should be mentioned that the MIHPA, MISPA, MHPA, and MSPA algorithms are slower and less accurate than the HPA and SPA algorithms. The acceleration may be cancelled out by the projections onto the sets C_n ∩ Q_n and C_{n+1}.
Lemma 3 ([38]). Let C be a nonempty closed convex subset of a real Hilbert space H. Let {x_n} ⊂ H, u ∈ H, and m = P_C u. If ω_w{x_n} ⊂ C and ‖x_n − u‖ ≤ ‖u − m‖ for all n ∈ N, then x_n → m.

Figure 1 .
Figure 1. Convergence process at different initial values for Example 3.

Figure 2 .
Figure 2. Convergence behavior of iteration error {E n } for Example 3.

Table 1 .
Computational results for Example 1.

Table 2 .
Computational results for Example 2.
Example 3. The Fermat-Weber problem is a famous model in location theory. It can be formulated mathematically as the problem of finding x ∈ R^n that solves

min_{x ∈ R^n} f(x) := ∑_{i=1}^m ω_i ‖x − a_i‖, (13)

We use E_n = ‖x_n − x*‖² < 10⁻⁴ or a maximum of 300 iterations as the stopping criterion. The initial values are randomly generated by the MATLAB function 10*rand(2,1). Figures 1 and 2 show the convergence behavior of the iterative sequence {x_n} and the iteration error E_n, respectively.