Abstract
In this paper, we introduce two modified inertial hybrid and shrinking projection algorithms for solving fixed point problems by combining the modified inertial Mann algorithm with the projection algorithm. We establish strong convergence theorems under certain suitable conditions. Finally, our algorithms are applied to the convex feasibility problem, the variational inequality problem, and location theory. The algorithms and results presented in this paper summarize and unify corresponding results previously known in this field.
Keywords:
conjugate gradient method; steepest descent method; hybrid projection; shrinking projection; inertial Mann; strong convergence; nonexpansive mapping
MSC:
49J40; 47H05; 90C52
1. Introduction
Throughout this paper, let C denote a nonempty closed convex subset of a real Hilbert space H with inner product $\langle\cdot,\cdot\rangle$ and induced norm $\|\cdot\|$. A mapping $T\colon C\to C$ is said to be nonexpansive if $\|Tx-Ty\|\le\|x-y\|$ for all $x,y\in C$. We use $\mathrm{Fix}(T)=\{x\in C: Tx=x\}$ to represent the set of fixed points of a mapping T. The main purpose of this paper is to consider the following fixed point problem: find $x^{*}\in C$ such that $Tx^{*}=x^{*}$, where $T\colon C\to C$ is nonexpansive with $\mathrm{Fix}(T)\ne\emptyset$.
There are various specific applications of approximating fixed points of nonexpansive mappings, such as monotone variational inequalities, convex optimization problems, convex feasibility problems, and image restoration problems; see, e.g., [,,,,,]. It is well known that the Picard iteration method may fail to converge, and an effective way to overcome this difficulty is to use the Mann iterative method, which generates a sequence $\{x_n\}$ recursively by
$$x_{n+1}=\alpha_n x_n+(1-\alpha_n)Tx_n,\quad n\ge 0,\tag{1}$$
where $\{\alpha_n\}\subset[0,1]$. The iterative sequence $\{x_n\}$ defined by (1) converges weakly to a fixed point of T provided that $\sum_{n=0}^{\infty}\alpha_n(1-\alpha_n)=\infty$.
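To make the Mann scheme concrete, the following minimal Python sketch runs iteration (1) for a simple nonexpansive operator; the test operator (the projection onto the closed unit ball), the constant choice $\alpha_n\equiv 1/2$, and the stopping rule are illustrative assumptions rather than part of the original presentation.

```python
import numpy as np

def mann(T, x0, alpha, max_iter=1000, tol=1e-8):
    """Mann iteration x_{n+1} = a_n x_n + (1 - a_n) T x_n for a nonexpansive T."""
    x = np.asarray(x0, dtype=float)
    for n in range(max_iter):
        x_next = alpha(n) * x + (1.0 - alpha(n)) * T(x)
        if np.linalg.norm(x_next - x) <= tol:  # stop once successive iterates stall
            return x_next
        x = x_next
    return x

# Illustrative test: T is the projection onto the closed unit ball (nonexpansive),
# and a_n = 1/2, so that sum a_n (1 - a_n) = infinity.
T = lambda x: x / max(np.linalg.norm(x), 1.0)
print(mann(T, np.array([3.0, 4.0]), alpha=lambda n: 0.5))  # approaches [0.6, 0.8]
```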
Many practical applications, for instance, in quantum physics and image reconstruction, are set in infinite-dimensional spaces. To investigate such problems, norm convergence is usually preferable to weak convergence. Therefore, modifying the Mann iteration method to obtain strong convergence is an important research topic. For recent works, see [,,,,,] and the references therein. On the other hand, the Ishikawa iterative method can converge strongly to fixed points of nonlinear mappings; for more discussion, see [,,,]. In 2003, Nakajo and Takahashi [] established strong convergence of the Mann iteration with the aid of projections. Indeed, they considered the following algorithm:
$$\begin{cases}x_0\in C\ \text{chosen arbitrarily},\\ y_n=\alpha_n x_n+(1-\alpha_n)Tx_n,\\ C_n=\{z\in C:\|y_n-z\|\le\|x_n-z\|\},\\ Q_n=\{z\in C:\langle x_n-z,\ x_0-x_n\rangle\ge 0\},\\ x_{n+1}=P_{C_n\cap Q_n}x_0,\end{cases}\tag{2}$$
where $\alpha_n\in[0,a]$ for some $a\in[0,1)$, T is a nonexpansive mapping on C, and $P_{C_n\cap Q_n}$ is the metric projection from C onto $C_n\cap Q_n$. This method is now referred to as the hybrid projection method. Inspired by Nakajo and Takahashi [], Takahashi, Takeuchi, and Kubota [] also proposed a projection-based method and obtained strong convergence results; their method is now called the shrinking projection method. In recent years, many authors have obtained new algorithms based on projection methods; see [,,,,,,].
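For readers who wish to experiment with updates of type (2), note that (ignoring the additional intersection with C, i.e., taking C = H for simplicity) both $C_n$ and $Q_n$ are halfspaces, so the projection of $x_0$ onto $C_n\cap Q_n$ can be computed by enumerating the possible active constraints. The sketch below is one illustrative implementation choice, not the procedure used in the works cited above.

```python
import numpy as np

def proj_halfspace(u, a, b):
    """Projection of u onto {z : <a, z> <= b}."""
    viol = a @ u - b
    return u if viol <= 0 else u - viol / (a @ a) * a

def hybrid_step(x0, xn, yn):
    """One update x_{n+1} = P_{C_n ∩ Q_n}(x0) with
    C_n = {z : ||yn - z|| <= ||xn - z||} and Q_n = {z : <xn - z, x0 - xn> >= 0}.
    Both sets are halfspaces; the two-active case assumes their normals are not parallel."""
    a1, b1 = 2.0 * (xn - yn), xn @ xn - yn @ yn   # C_n written as <a1, z> <= b1
    a2, b2 = x0 - xn, xn @ (x0 - xn)              # Q_n written as <a2, z> <= b2
    feasible = lambda z: a1 @ z <= b1 + 1e-12 and a2 @ z <= b2 + 1e-12
    if feasible(x0):
        return x0
    for a, b in ((a1, b1), (a2, b2)):             # try one active constraint at a time
        z = proj_halfspace(x0, a, b)
        if feasible(z):
            return z
    # both constraints active: solve the 2x2 KKT system for the multipliers
    G = np.array([[a1 @ a1, a1 @ a2], [a2 @ a1, a2 @ a2]])
    lam = np.linalg.solve(G, np.array([a1 @ x0 - b1, a2 @ x0 - b2]))
    return x0 - lam[0] * a1 - lam[1] * a2
```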
Generally, the Mann algorithm has a slow convergence rate. In recent years, there has been tremendous interest in accelerating the convergence of such algorithms, especially by means of inertial extrapolation, which was first proposed by Polyak in []. Recently, researchers have constructed various fast iterative algorithms using inertial extrapolation techniques, for example, the inertial Mann algorithm [], inertial forward–backward splitting algorithms [,], inertial extragradient algorithms [,], inertial projection algorithms [,], and the fast iterative shrinkage–thresholding algorithm (FISTA) []. These works not only analyze the convergence properties of inertial-type extrapolation algorithms theoretically, but also demonstrate their computational performance numerically on data analysis and image processing problems.
In 2008, Maingé [] proposed the following inertial Mann algorithm based on the idea of the Mann algorithm and inertial extrapolation:
$$\begin{cases}w_n=x_n+\alpha_n(x_n-x_{n-1}),\\ x_{n+1}=(1-\lambda_n)w_n+\lambda_n Tw_n.\end{cases}\tag{3}$$
It should be pointed out that the iterative sequence $\{x_n\}$ defined by (3) is only guaranteed to converge weakly under the following assumptions:
- (C1) $\{\alpha_n\}\subset[0,\alpha]$ with $\alpha\in[0,1)$ and $0<\inf_{n}\lambda_n\le\sup_{n}\lambda_n<1$;
- (C2) $\sum_{n=1}^{\infty}\alpha_n\|x_n-x_{n-1}\|^{2}<\infty$.
It should be noted that condition (C2) is very strong: it is imposed on the iterates themselves, and therefore cannot be verified before the algorithm is executed. Recently, Boţ and Csetnek [] removed condition (C2); for more details, see Theorem 5 in [].
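For illustration, the inertial Mann step in the form given in (3) can be sketched as follows; the constant parameter values and the test operator are assumptions made only for this example and are not claimed to satisfy (C1)–(C2) in any optimal way.

```python
import numpy as np

def inertial_mann(T, x0, theta=0.3, lam=0.5, max_iter=1000, tol=1e-8):
    """Inertial Mann sketch: w_n = x_n + theta (x_n - x_{n-1}),
    x_{n+1} = (1 - lam) w_n + lam T(w_n), with constant parameters for illustration."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for n in range(max_iter):
        w = x + theta * (x - x_prev)            # inertial extrapolation
        x_next = (1.0 - lam) * w + lam * T(w)   # relaxed (Mann) step at the extrapolated point
        if np.linalg.norm(x_next - x) <= tol:
            return x_next
        x_prev, x = x, x_next
    return x

T = lambda x: x / max(np.linalg.norm(x), 1.0)   # projection onto the unit ball (nonexpansive)
print(inertial_mann(T, np.array([3.0, 4.0])))
```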
In 2014, Sakurai and Iiduka [] introduced an algorithm to accelerate the Halpern fixed point algorithm in Hilbert spaces by means of conjugate gradient methods that can accelerate the convergence rate of the steepest descent method. Very recently, inspired by the work of Sakurai and Iiduka [], Dong et al. [] proposed a modified inertial Mann algorithm by combining the inertial method, the Picard algorithm and the conjugate gradient method. Their numerical results showed that the proposed algorithm has some advantages over other algorithms. Indeed, they obtained the following result:
Theorem 1.
Let T be a nonexpansive mapping with $\mathrm{Fix}(T)\ne\emptyset$. Choose the initial points arbitrarily and define a sequence $\{x_n\}$ by the modified inertial Mann algorithm (4).
The iterative sequence $\{x_n\}$ defined by (4) converges weakly to a point in $\mathrm{Fix}(T)$ under the following conditions:
- (D1) $\{\alpha_n\}$ is nondecreasing with $\alpha_1=0$ and $0\le\alpha_n\le\alpha<1$ for all $n\ge 1$;
- (D2) there exist constants $\lambda,\sigma,\delta>0$ such that $\delta$ and the relaxation parameters $\{\lambda_n\}$ satisfy the bounds (depending on $\alpha$, $\sigma$, and $\delta$) stated in [];
- (D3) the direction sequence $\{d_n\}$ defined in (4) is bounded, and so is $\{\|x_n-p\|\}$ for any $p\in\mathrm{Fix}(T)$.
Inspired and motivated by the above works, in this paper, based on the modified inertial Mann algorithm (4) and the projection algorithm (2), we propose two new modified inertial hybrid and shrinking projection algorithms, respectively. We obtain strong convergence results under some mild conditions. Finally, our algorithms are applied to a convex feasibility problem, a variational inequality problem, and location theory.
The structure of the paper is as follows. Section 2 gives the mathematical preliminaries. Section 3 presents the modified inertial hybrid and shrinking projection algorithms for nonexpansive mappings in Hilbert spaces and analyzes their convergence. Section 4 gives some numerical experiments comparing the convergence behavior of our proposed algorithms with previously known algorithms. Section 5 concludes the paper with a brief summary.
2. Preliminaries
We use the notation $x_n\to x$ and $x_n\rightharpoonup x$ to denote the strong and weak convergence of a sequence $\{x_n\}$ to a point $x\in H$, respectively. Let $\omega_w(x_n):=\{x\in H:\ x_{n_j}\rightharpoonup x\ \text{for some subsequence}\ \{x_{n_j}\}\ \text{of}\ \{x_n\}\}$ denote the weak $\omega$-limit set of $\{x_n\}$. For any $x,y\in H$ and $\lambda\in[0,1]$, we have
$$\|\lambda x+(1-\lambda)y\|^{2}=\lambda\|x\|^{2}+(1-\lambda)\|y\|^{2}-\lambda(1-\lambda)\|x-y\|^{2}.$$
For any $x\in H$, there is a unique nearest point in C, denoted by $P_Cx$, such that $\|x-P_Cx\|\le\|x-y\|$ for all $y\in C$. The operator $P_C$ is called the metric projection of H onto C. The projection $P_C$ has the following characterization:
$$\langle x-P_Cx,\ y-P_Cx\rangle\le 0,\quad\forall x\in H,\ y\in C.\tag{5}$$
From this characterization, the following inequality can be obtained:
$$\|x-P_Cx\|^{2}+\|y-P_Cx\|^{2}\le\|x-y\|^{2},\quad\forall x\in H,\ y\in C.\tag{6}$$
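Indeed, (6) can be verified in one line from (5): for $x\in H$ and $y\in C$,
$$\|x-y\|^{2}=\|x-P_Cx\|^{2}+2\langle x-P_Cx,\ P_Cx-y\rangle+\|P_Cx-y\|^{2}\ \ge\ \|x-P_Cx\|^{2}+\|P_Cx-y\|^{2},$$
since (5) gives $\langle x-P_Cx,\ y-P_Cx\rangle\le 0$.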
We give some special cases with simple analytical solutions (a short code sketch implementing them follows this list):
- (1) The Euclidean projection of $x\in\mathbb{R}^{n}$ onto the Euclidean ball $B[c,r]=\{z\in\mathbb{R}^{n}:\|z-c\|\le r\}$ is given by
$$P_{B[c,r]}(x)=c+\frac{r}{\max\{\|x-c\|,\,r\}}\,(x-c).$$
- (2) The Euclidean projection of $x\in\mathbb{R}^{n}$ onto the box $\mathrm{Box}[\ell,u]=\{z\in\mathbb{R}^{n}:\ell\le z\le u\}$ is given componentwise by
$$\big[P_{\mathrm{Box}[\ell,u]}(x)\big]_{i}=\min\{\max\{x_{i},\ell_{i}\},u_{i}\},\quad i=1,\dots,n.$$
- (3) The Euclidean projection of $x\in\mathbb{R}^{n}$ onto the halfspace $H^{-}_{a,b}=\{z\in\mathbb{R}^{n}:\langle a,z\rangle\le b\}$ (with $a\ne 0$) is given by
$$P_{H^{-}_{a,b}}(x)=x-\frac{\max\{\langle a,x\rangle-b,\,0\}}{\|a\|^{2}}\,a.$$
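As a quick reference, the three projection formulas above translate directly into code; the test points and sets in this sketch are chosen arbitrarily for illustration.

```python
import numpy as np

def proj_ball(x, c, r):
    """Projection onto the Euclidean ball B[c, r] = {z : ||z - c|| <= r}."""
    d = np.linalg.norm(x - c)
    return x if d <= r else c + (r / d) * (x - c)

def proj_box(x, lo, hi):
    """Projection onto the box {z : lo <= z <= hi} (componentwise clipping)."""
    return np.minimum(np.maximum(x, lo), hi)

def proj_halfspace(x, a, b):
    """Projection onto the halfspace {z : <a, z> <= b} (a != 0)."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

x = np.array([3.0, 4.0])
print(proj_ball(x, np.zeros(2), 1.0))                           # -> [0.6, 0.8]
print(proj_box(x, np.array([0.0, 0.0]), np.array([1.0, 2.0])))  # -> [1.0, 2.0]
print(proj_halfspace(x, np.array([1.0, 0.0]), 1.0))             # -> [1.0, 4.0]
```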
Next we give some results that will be used in our main proof.
Lemma 1.
[] Let C be a nonempty closed convex subset of a real Hilbert space H and let $T\colon C\to C$ be a nonexpansive mapping with $\mathrm{Fix}(T)\ne\emptyset$. Assume that $\{x_n\}$ is a sequence in C and $x\in H$ are such that $x_n\rightharpoonup x$ and $x_n-Tx_n\to 0$ as $n\to\infty$. Then $x\in\mathrm{Fix}(T)$.
Lemma 2.
[] Let C be a nonempty closed convex subset of a real Hilbert space H. For any $x,y,a\in H$ and $b\in\mathbb{R}$, the set $\{z\in C:\|y-z\|^{2}\le\|x-z\|^{2}+\langle z,a\rangle+b\}$ is convex and closed.
Lemma 3.
[] Let C be a nonempty closed convex subset of a real Hilbert space H. Let $u\in H$, $q=P_{C}u$, and let $\{x_n\}$ be a sequence in H. If $\omega_w(x_n)\subset C$ and $\{x_n\}$ satisfies the condition $\|x_n-u\|\le\|u-q\|$ for all n, then $x_n\to q$.
3. Modified Inertial Hybrid and Shrinking Projection Algorithms
In this section, we introduce two modified inertial hybrid and shrinking projection algorithms for nonexpansive mappings in Hilbert spaces using the ideas of the inertial method, the Picard algorithm, the conjugate gradient method, and the projection method. We prove the strong convergence of the algorithms under suitable conditions.
Theorem 2.
Let C be a bounded closed convex subset of a real Hilbert space H and let $T\colon C\to C$ be a nonexpansive mapping with $\mathrm{Fix}(T)\ne\emptyset$. Assume that the following conditions are satisfied:
Choose the initial points arbitrarily and define a sequence $\{x_n\}$ by the following algorithm (7), in which the parameter sequences are chosen so as to satisfy the conditions above for all n. Then the iterative sequence $\{x_n\}$ defined by (7) converges in norm to $P_{\mathrm{Fix}(T)}(x_0)$.
Proof.
We divide the proof into three steps.
Step 1. To begin with, we need to show that . It is easy to check that is convex by Lemma 2. Next we prove for all . Assume that for some . The triangle inequality ensures that
which implies that for all , that is, is bounded. Due to , we get that for all . From the definition of and nonexpansivity of T we obtain
Therefore,
where . Thus, we have for all and hence for all . On the other hand, it is easy to see that when . Suppose that , by combining the fact that and (5) we obtain for any . According to the induction assumption we have , and it follows from the definition of that . Therefore, we get for all .
Step 2. We prove that as . Combining the definition of and , we obtain
We note that is bounded and
Since $x_{n+1}\in Q_n$ and $x_n=P_{Q_n}x_0$, we have $\|x_n-x_0\|\le\|x_{n+1}-x_0\|$, which means that $\lim_{n\to\infty}\|x_n-x_0\|$ exists. Using (6), one sees that
which implies that as . Next, by the definition of , we have
which further yields that
Step 3. It remains to show that $x_n\to x^{\dagger}$, where $x^{\dagger}=P_{\mathrm{Fix}(T)}(x_0)$. From the construction of the algorithm we get
Therefore,
On the other hand, since and , we obtain
Consequently, $x_n\to x^{\dagger}$ as $n\to\infty$, which completes the proof. □
Theorem 3.
Let C be a bounded closed convex subset of a real Hilbert space H and let $T\colon C\to C$ be a nonexpansive mapping with $\mathrm{Fix}(T)\ne\emptyset$. Assume that the following conditions are satisfied:
Choose the initial points arbitrarily and define a sequence $\{x_n\}$ by the following algorithm (10), in which the parameter sequences are chosen so as to satisfy the conditions above for all n. Then the iterative sequence $\{x_n\}$ defined by (10) converges in norm to $P_{\mathrm{Fix}(T)}(x_0)$.
Proof.
We divide the proof into three steps.
Step 1. Our first goal is to show that for all . According to Step 1 in Theorem 2, for all , we obtain
Therefore, for each and hence .
Step 2. As mentioned above, the next thing to do in the proof is to show that as . Using the fact that and , we have
It follows that $\{x_n\}$ is bounded. In addition, we note that
On the other hand, since , we obtain , which implies that exists. In view of (6), we have
which further implies that . Also, we have and .
Step 3. Finally, we have to show that $x_n\to x^{\dagger}$, where $x^{\dagger}=P_{\mathrm{Fix}(T)}(x_0)$. The remainder of the argument is analogous to that in the proof of Theorem 2 and is left to the reader. □
Remark 1.
We remark here that the modified inertial hybrid projection algorithm (7) (in short, MIHPA) and the modified inertial shrinking projection algorithm (10) (in short, MISPA) contain some previously known results. When and , the MIHPA becomes the hybrid projection algorithm (in short, HPA) proposed by Nakajo and Takahashi [] and the MISPA becomes the shrinking projection algorithm (in short, SPA) proposed by Takahashi, Takeuchi, and Kubota []. When and , the MIHPA becomes the modified hybrid projection algorithm (in short, MHPA) proposed by Dong et al. [], the MISPA becomes the modified shrinking projection algorithm (in short, MSPA).
4. Numerical Experiments
In this section, we provide three numerical applications to demonstrate the computational performance of our proposed algorithms and to compare them with some existing ones. All programs were run in MATLAB R2018a on a personal computer with an Intel(R) Core(TM) i5-8250U CPU (1.60 GHz, up to 1.80 GHz) and 8.00 GB of RAM.
Example 1.
As a first example, we consider the convex feasibility problem: given nonempty closed convex sets $C_i\subset H$ ($i=1,\dots,N$), find a point $x^{*}\in\bigcap_{i=1}^{N}C_i$, where one supposes that $\bigcap_{i=1}^{N}C_i\ne\emptyset$. A mapping T is defined in terms of the metric projections $P_{C_i}$ onto the sets $C_i$. Since each $P_{C_i}$ is nonexpansive, the mapping T is also nonexpansive. Furthermore, we note that $\mathrm{Fix}(T)=\bigcap_{i=1}^{N}C_i$. In this experiment, we set each $C_i$ to be a closed ball with center $c_i$ and radius $r_i$. Thus $P_{C_i}$ can be computed explicitly with the ball projection formula given in Section 2.
The centers and radii are selected randomly, and the balls are chosen so that their intersection is nonempty. In Algorithms (7) and (10), we set the parameter sequences accordingly, and the iteration stops once the prescribed error tolerance is reached. We test our algorithms under different inertial parameters and initial values. The results are shown in Table 1, where “Iter.” denotes the number of iterations.
Table 1.
Computational results for Example 1.
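As a rough, self-contained illustration of this convex feasibility setting (not a reproduction of the experiment reported in Table 1), the following sketch takes T to be the average of the ball projections (one common choice, for which $\mathrm{Fix}(T)=\bigcap_{i}C_{i}$ when the intersection is nonempty) and runs a plain Mann iteration on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dim = 5, 10
centers = 0.1 * rng.standard_normal((N, dim))   # synthetic centers near the origin
radii = np.full(N, 2.0)                         # synthetic radii; all balls contain the origin

def proj_ball(x, c, r):
    d = np.linalg.norm(x - c)
    return x if d <= r else c + (r / d) * (x - c)

def T(x):
    # average of the ball projections: nonexpansive, Fix(T) = intersection of the balls
    return np.mean([proj_ball(x, c, r) for c, r in zip(centers, radii)], axis=0)

x = 10.0 * rng.standard_normal(dim)             # synthetic starting point
for n in range(500):                            # plain Mann iteration with a_n = 1/2
    x = 0.5 * x + 0.5 * T(x)

residual = max(np.linalg.norm(x - proj_ball(x, c, r)) for c, r in zip(centers, radii))
print(residual)  # should be close to zero, i.e. x lies (approximately) in every ball
```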
Example 2.
Our second example is the following variational inequality problem (VI for short): for a nonempty closed convex set $C\subset H$, find $x^{*}\in C$ such that
$$\langle f(x^{*}),\,x-x^{*}\rangle\ge 0,\quad\forall x\in C,\tag{12}$$
where $f\colon H\to H$ is a mapping. Let $\Omega$ denote the solution set of VI (12). A mapping T is defined by $T:=P_{C}(I-\lambda f)$, where $\lambda\in(0,2/L)$ and L is the Lipschitz constant of the mapping f. In [], Xu showed that T is an averaged mapping, i.e., T can be seen as an average of the identity mapping I and a nonexpansive mapping. Since $\mathrm{Fix}(T)=\Omega$, we can solve VI (12) by finding a fixed point of T. We take f as follows:
The feasible set C is given by , where . It is not hard to check that f is Lipschitz continuous with constant and 1-strongly monotone []. Therefore, VI (12) has a unique solution .
We use Algorithm (7) (MIHPA), Algorithm (10) (MISPA), the modified hybrid projection algorithm (MHPA), the modified shrinking projection algorithm (MSPA), the hybrid projection algorithm (HPA), and the shrinking projection algorithm (SPA) to solve Example 2. The algorithm parameters are chosen so that T is an averaged mapping. The initial values are randomly generated by the MATLAB function rand(2,1). The iteration error of the algorithms, together with a maximum of 300 iterations, is used as the stopping criterion. The results are reported in Table 2, where “Iter.” denotes the number of iterations.
Table 2.
Computational results for Example 2.
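As a minimal, self-contained sketch of the fixed point reformulation $T=P_{C}(I-\lambda f)$ used in this example (with a synthetic affine operator $f(x)=Ax+b$ and a box constraint set, not the f and C of the experiment above), even the plain Picard iteration recovers the unique VI solution; the projection algorithms compared in Table 2 operate on the same mapping T.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])           # synthetic symmetric positive definite matrix
b = np.array([-1.0, 2.0])
f = lambda x: A @ x + b                          # Lipschitz (L = ||A||_2) and strongly monotone
L = np.linalg.norm(A, 2)
lam = 1.0 / L                                    # any lam in (0, 2/L) makes T averaged

lo, hi = np.zeros(2), np.ones(2)                 # synthetic box C = [0, 1]^2
proj_C = lambda x: np.minimum(np.maximum(x, lo), hi)
T = lambda x: proj_C(x - lam * f(x))             # fixed points of T solve VI (12)

x = np.array([5.0, -5.0])
for n in range(2000):                            # plain Picard iteration on the averaged mapping
    x = T(x)
print(x)                                         # approximate VI solution, here about [1/3, 0]
```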
Example 3.
The Fermat–Weber problem is a famous model in location theory. It can be formulated mathematically as the problem of finding $x\in\mathbb{R}^{n}$ that solves
$$\min_{x\in\mathbb{R}^{n}}\ f(x):=\sum_{i=1}^{m}\omega_{i}\|x-a_{i}\|,\tag{13}$$
where $\omega_{i}>0$ are given weights and $a_{1},\dots,a_{m}\in\mathbb{R}^{n}$ are anchor points. It is easy to check that the objective function f in (13) is convex and coercive; therefore, the problem has a nonempty solution set. It should be noted that f is not differentiable at the anchor points. The most famous method for solving problem (13) is the Weiszfeld algorithm; see [] for more discussion. Weiszfeld proposed the fixed point algorithm $x_{k+1}=T(x_{k})$, where the mapping T is defined by
$$T(x):=\frac{1}{\sum_{i=1}^{m}\frac{\omega_{i}}{\|x-a_{i}\|}}\sum_{i=1}^{m}\frac{\omega_{i}\,a_{i}}{\|x-a_{i}\|},\qquad x\notin\{a_{1},\dots,a_{m}\}.$$
We consider a small example with anchor points,
and for all i. It follows from the special selection of anchor points that the optimal value of (13) is .
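For reference, the Weiszfeld operator T and its Picard iteration can be sketched as follows; the anchor points, weights, and starting point below are synthetic and are not the data used in the experiment reported in Figures 1 and 2.

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # synthetic anchors
w = np.ones(len(anchors))                                                 # unit weights

def weiszfeld_T(x, eps=1e-12):
    """Weiszfeld operator: weighted average of the anchors with weights w_i / ||x - a_i||."""
    d = np.maximum(np.linalg.norm(anchors - x, axis=1), eps)  # guard against x hitting an anchor
    coeff = w / d
    return (coeff @ anchors) / coeff.sum()

x = np.array([7.0, 1.0])
for k in range(200):                      # Picard iteration x_{k+1} = T(x_k)
    x = weiszfeld_T(x)

f_val = np.sum(w * np.linalg.norm(anchors - x, axis=1))
print(x, f_val)   # by symmetry, the minimizer for these synthetic data is the center [5, 5]
```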
We use the same algorithms as in Example 2; the parameter settings are analogous to those above. We use the stated error tolerance or a maximum of 300 iterations as the stopping criterion. The initial values are randomly generated by the MATLAB function 10*rand(2,1). Figure 1 and Figure 2 show the convergence behavior of the iterative sequence and of the iteration error, respectively.
Figure 1.
Convergence process at different initial values for Example 3.
Figure 2.
Convergence behavior of iteration error for Example 3.
Remark 2.
From Examples 1–3, we see that our proposed algorithms are effective and easy to implement. Moreover, the initial values do not affect the computational performance of our algorithms. However, it should be mentioned that the MIHPA, MISPA, MHPA, and MSPA algorithms are slower and less accurate than the HPA and SPA algorithms in these tests. The acceleration gained from the inertial and conjugate gradient terms may be canceled out by the projections onto the constructed sets $C_n$, $Q_n$, and $C_{n+1}$.
5. Conclusions
In this paper, we proposed two modified inertial hybrid and shrinking projection algorithms based on the inertial method, the Picard algorithm, the conjugate gradient method, and the projection method. We established strong convergence theorems under suitable conditions. However, the numerical experiments showed that our algorithms do not accelerate the previously known algorithms considered here.
Author Contributions
Supervision, S.L.; Writing—original draft, B.T. and S.X. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Acknowledgments
We greatly appreciate the reviewers for their helpful comments and suggestions.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Qin, X.; Yao, J.C. A viscosity iterative method for a split feasibility problem. J. Nonlinear Convex Anal. 2019, 20, 1497–1506. [Google Scholar]
- Cho, S.Y. Generalized mixed equilibrium and fixed point problems in a Banach space. J. Nonlinear Sci. Appl. 2016, 9, 1083–1092. [Google Scholar] [CrossRef]
- Nguyen, L.V.; Ansari, Q.H.; Qin, X. Linear conditioning, weak sharpness and finite convergence for equilibrium problems. J. Glob. Optim. 2020. [Google Scholar] [CrossRef]
- Dehaish, B.A.B. A regularization projection algorithm for various problems with nonlinear mappings in Hilbert spaces. J. Inequal. Appl. 2015, 2015, 1–14. [Google Scholar] [CrossRef]
- Dehaish, B.A.B. Weak and strong convergence of algorithms for the sum of two accretive operators with applications. J. Nonlinear Convex Anal. 2015, 16, 1321–1336. [Google Scholar]
- Qin, X.; An, N.T. Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets. Comput. Optim. Appl. 2019, 74, 821–850. [Google Scholar] [CrossRef]
- Takahashi, W.; Yao, J.C. The split common fixed point problem for two finite families of nonlinear mappings in Hilbert spaces. J. Nonlinear Convex Anal. 2019, 20, 173–195. [Google Scholar]
- An, N.T.; Qin, X. Solving k-center problems involving sets based on optimization techniques. J. Glob. Optim. 2020, 76, 189–209. [Google Scholar] [CrossRef]
- Cho, S.Y.; Kang, S.M. Approximation of common solutions of variational inequalities via strict pseudocontractions. Acta Math. Sci. 2012, 32, 1607–1618. [Google Scholar] [CrossRef]
- Takahashi, W. The shrinking projection method for a finite family of demimetric mappings with variational inequality problems in a Hilbert space. Fixed Point Theory 2018, 19, 407–419. [Google Scholar] [CrossRef]
- Qin, X.; Cho, S.Y.; Wang, L. A regularization method for treating zero points of the sum of two monotone operators. Fixed Point Theory Appl. 2014, 2014, 75. [Google Scholar] [CrossRef]
- Chang, S.S.; Wen, C.F.; Yao, J.C. Zero point problem of accretive operators in Banach spaces. Bull. Malays. Math. Sci. Soc. 2019, 42, 105–118. [Google Scholar] [CrossRef]
- Tan, K.K.; Xu, H.K. Approximating fixed points of non-expansive mappings by the Ishikawa iteration process. J. Math. Anal. Appl. 1993, 178, 301. [Google Scholar] [CrossRef]
- Sharma, S.; Deshpande, B. Approximation of fixed points and convergence of generalized Ishikawa iteration. Indian J. Pure Appl. Math. 2002, 33, 185–191. [Google Scholar]
- Singh, A.; Dimri, R.C. On the convergence of Ishikawa iterates to a common fixed point for a pair of nonexpansive mappings in Banach spaces. Math. Morav. 2010, 14, 113–119. [Google Scholar] [CrossRef]
- De la Sen, M.; Abbas, M. On best proximity results for a generalized modified Ishikawa’s iterative scheme driven by perturbed 2-cyclic like-contractive self-maps in uniformly convex Banach spaces. J. Math. 2019, 2019, 1356918. [Google Scholar] [CrossRef]
- Nakajo, K.; Takahashi, W. Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 2003, 279, 372–379. [Google Scholar] [CrossRef]
- Takahashi, W.; Takeuchi, Y.; Kubota, R. Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2008, 341, 276–286. [Google Scholar] [CrossRef]
- Cho, S.Y. Strong convergence analysis of a hybrid algorithm for nonlinear operators in a Banach space. J. Appl. Anal. Comput. 2018, 8, 19–31. [Google Scholar]
- Qin, X.; Cho, S.Y.; Wang, L. Iterative algorithms with errors for zero points of m-accretive operators. Fixed Point Theory Appl. 2013, 2013, 148. [Google Scholar] [CrossRef]
- Chang, S.S.; Wen, C.F.; Yao, J.C. Common zero point for a finite family of inclusion problems of accretive mappings in Banach spaces. Optimization 2018, 67, 1183–1196. [Google Scholar] [CrossRef]
- Qin, X.; Cho, S.Y. Convergence analysis of a monotone projection algorithm in reflexive Banach spaces. Acta Math. Sci. 2017, 37, 488–502. [Google Scholar] [CrossRef]
- He, S.; Dong, Q.-L. The combination projection method for solving convex feasibility problems. Mathematics 2018, 6, 249. [Google Scholar] [CrossRef]
- Polyak, B.T. Some methods of speeding up the convergence of iteration methods. Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
- Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236. [Google Scholar] [CrossRef]
- Lorenz, D.; Pock, T. An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325. [Google Scholar] [CrossRef]
- Qin, X.; Wang, L.; Yao, J.C. Inertial splitting method for maximal monotone mappings. J. Nonlinear Convex Anal. 2020, in press. [Google Scholar]
- Thong, D.V.; Hieu, D.V. Inertial extragradient algorithms for strongly pseudomonotone variational inequalities. J. Comput. Appl. Math. 2018, 341, 80–98. [Google Scholar] [CrossRef]
- Luo, Y.L.; Tan, B. A self-adaptive inertial extragradient algorithm for solving pseudo-monotone variational inequality in Hilbert spaces. J. Nonlinear Convex Anal. 2020, in press. [Google Scholar]
- Liu, L.; Qin, X. On the strong convergence of a projection-based algorithm in Hilbert spaces. J. Appl. Anal. Comput. 2020, 10, 104–117. [Google Scholar]
- Tan, B.; Xu, S.S.; Li, S. Inertial shrinking projection algorithms for solving hierarchical variational inequality problems. J. Nonlinear Convex Anal. 2020, in press. [Google Scholar]
- Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
- Boţ, R.I.; Csetnek, E.R.; Hendrich, C. Inertial Douglas–Rachford splitting for monotone inclusion problems. Appl. Math. Comput. 2015, 256, 472–487. [Google Scholar] [CrossRef]
- Sakurai, K.; Iiduka, H. Acceleration of the Halpern algorithm to search for a fixed point of a nonexpansive mapping. Fixed Point Theory Appl. 2014, 2014, 202. [Google Scholar] [CrossRef]
- Dong, Q.-L.; Yuan, H.B.; Cho, Y.J.; Rassias, T.M. Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings. Optim. Lett. 2018, 12, 87–102. [Google Scholar] [CrossRef]
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011; Volume 48. [Google Scholar]
- Kim, T.H.; Xu, H.K. Strong convergence of modified Mann iterations for asymptotically nonexpansive mappings and semigroups. Nonlinear Anal. 2006, 64, 1140–1152. [Google Scholar] [CrossRef]
- Martinez-Yanes, C.; Xu, H.K. Strong convergence of the CQ method for fixed point iteration processes. Nonlinear Anal. 2006, 64, 2400–2411. [Google Scholar] [CrossRef]
- Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378. [Google Scholar] [CrossRef]
- Dong, Q.-L.; Cho, Y.J.; Zhong, L.L.; Rassias, T.M. Inertial projection and contraction algorithms for variational inequalities. J. Glob. Optim. 2018, 70, 687–704. [Google Scholar] [CrossRef]
- Beck, A.; Sabach, S. Weiszfeld’s method: Old and new results. J. Optim. Theory Appl. 2015, 164, 1–40. [Google Scholar] [CrossRef]
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).