Abstract
We propose two new iterative algorithms with Meir–Keeler contractions, based on Tseng's method, the multi-step inertial method, the hybrid projection method, and the shrinking projection method, for solving a monotone variational inclusion problem in Hilbert spaces. The strong convergence of the proposed iterative algorithms is proven. As an application, our results can be used to solve convex minimization problems.
Keywords:
Meir–Keeler contractions; multi-step inertial method; hybrid projection method; shrinking projection method; variational inclusion problem
MSC:
47H09; 47H10; 47H04
1. Introduction
1.1. Variational Inclusion Problem
In a real Hilbert space H with inner product ⟨·, ·⟩ and induced norm ∥·∥, we assume that G : H → 2^H is a set-valued mapping while B : H → H is a single-valued mapping.
We consider the following variational inclusion problem: find an element x* ∈ H such that
0 ∈ Bx* + Gx*. (1)
This problem has been studied by many scholars [1,2,3,4,5,6,7,8,9].
A classical algorithm for solving problem (1) is the forward–backward splitting algorithm put forward by Passty [2] and by Lions and Mercier [3]. In 2000, Tseng [4] proposed a modified forward–backward splitting algorithm (Algorithm 1) for finding null points of maximal monotone mappings. This algorithm is weakly convergent under some conditions.
Algorithm 1: Modified forward–backward splitting algorithm.
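To make the structure of Tseng's scheme concrete, the following is a minimal numerical sketch (our own illustration, not the paper's algorithm statement): a forward–backward step followed by a correction step. The test problem, the operator names `B` and `G`, and the step size are assumptions chosen for the example.

```python
import numpy as np

# A minimal sketch of Tseng's modified forward-backward splitting
# (illustrative only; the problem data below are our own assumptions).
# We solve 0 in B(x) + G(x), where B(x) = A @ x - b is monotone and
# Lipschitz (A is positive semidefinite), and G is the normal cone of
# the nonnegative orthant, whose resolvent is the projection onto it.

def resolvent_G(x):
    """Resolvent J_{lam*G}: projection onto the nonnegative orthant."""
    return np.maximum(x, 0.0)

def B(x, A, b):
    """Single-valued monotone operator B(x) = A x - b."""
    return A @ x - b

def tseng(A, b, lam=0.1, iters=2000):
    x = np.zeros_like(b)
    for _ in range(iters):
        Bx = B(x, A, b)
        y = resolvent_G(x - lam * Bx)       # forward-backward step
        x = y - lam * (B(y, A, b) - Bx)     # Tseng's correction step
    return x

A = np.array([[2.0, 0.0], [0.0, 1.0]])      # L = ||A|| = 2, take lam < 1/L
b = np.array([1.0, -1.0])
x_star = tseng(A, b)                        # converges to (0.5, 0)
```

In this toy inclusion, the first component satisfies 2x₁ − 1 = 0, while the second component is held at the boundary of the orthant, so the zero of B + G is (0.5, 0).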
In 2015, an inertial forward–backward algorithm (Algorithm 2) was proposed by Lorenz and Pock [5]. We note that this algorithm is also only weakly convergent.
Algorithm 2: Inertial forward–backward algorithm.
In 2020, Tan et al. [6] combined the two algorithms above (Algorithms 1 and 2) with two classes of projection methods and introduced the inertial hybrid projection algorithm (Algorithm 3) and the inertial shrinking projection algorithm (Algorithm 4) for solving the variational inclusion problem in Hilbert spaces, as follows:
Algorithm 3: Inertial hybrid projection algorithm.
Algorithm 4: Inertial shrinking projection algorithm.
They proved that these two algorithms are strongly convergent under certain conditions.
1.2. Fixed Point Problem
Assume that D is a nonempty closed convex subset of H and that T : D → D is a mapping. Recall that the fixed point problem is to find a point x* ∈ D such that Tx* = x*. We denote the set of fixed points of T by Fix(T).
In the field of fixed point problems, many fruitful achievements have been introduced by scholars [10,11,12,13,14,15,16,17,18,19,20,21,22]. One of the classic algorithms is the Krasnosel’skiǐ–Mann algorithm [10,11], which is defined as follows: x_{n+1} = (1 − α_n)x_n + α_n T x_n, where {α_n} ⊂ (0, 1).
Under certain conditions, the sequence {x_n} converges weakly to a fixed point of T. In 2019, Dong et al. [20] presented a multi-step inertial Krasnosel’skiǐ–Mann algorithm, which is defined as Algorithm 5.
Algorithm 5: Multi-step inertial Krasnosel’skiǐ–Mann algorithm.
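To make the multi-step inertial idea concrete, here is a toy sketch under our own assumptions (not the exact statement of Algorithm 5): the iteration first extrapolates over several previous iterates and then applies a Krasnosel’skiǐ–Mann step. The mapping `T`, the weights `thetas`, and the starting point are illustrative choices.

```python
import numpy as np

# Toy sketch of the Krasnosel'skii-Mann iteration and a two-step
# inertial variant (our own construction for illustration).
# T = P_C o P_D, a composition of projections onto two lines through
# the origin, is nonexpansive with unique fixed point 0.

def T(x):
    d = np.array([1.0, 1.0]) / np.sqrt(2.0)
    y = (x @ d) * d                 # P_D: projection onto span{(1, 1)}
    return np.array([y[0], 0.0])    # P_C: projection onto the x-axis

def km(x0, alpha=0.5, iters=200):
    """Plain Krasnosel'skii-Mann: x_{n+1} = (1 - a) x_n + a T x_n."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = (1 - alpha) * x + alpha * T(x)
    return x

def mikm(x0, alpha=0.5, thetas=(0.3, 0.1), iters=200):
    """Multi-step inertial KM: extrapolate over len(thetas) past steps."""
    hist = [np.asarray(x0, dtype=float)] * (len(thetas) + 1)
    for _ in range(iters):
        w = hist[0].copy()
        for i, th in enumerate(thetas):     # multi-step inertial term
            w += th * (hist[i] - hist[i + 1])
        x_new = (1 - alpha) * w + alpha * T(w)
        hist = [x_new] + hist[:-1]
    return hist[0]

x0 = np.array([2.0, 1.0])
# Both variants drive the iterates to the fixed point 0.
```

With small inertial weights, the extrapolation leaves the iteration stable while reusing momentum from the last two steps; this is the design choice the multi-step inertial method formalizes.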
Under suitable conditions, the sequence converges weakly to a point in Fix(T). In addition, Yao et al. [21] proposed a projected fixed point algorithm in real Hilbert spaces in 2017, which is defined as Algorithm 6. The sequence converges strongly to the unique fixed point of under some conditions.
Algorithm 6: Projected fixed point algorithm.
Motivated by the results of [6,20,21], we construct two new algorithms for solving variational inclusion problems and obtain two strong convergence theorems. As applications, our results can be used to solve convex minimization problems in Hilbert spaces.
2. Preliminaries
Now, we present some necessary definitions and lemmas in the following for our convergence analysis.
Definition 1
([23,24,25,26,27]). Let S : H → H be a nonlinear mapping.
- (i)
- S is nonexpansive if ∥Sx − Sy∥ ≤ ∥x − y∥ for all x, y ∈ H.
- (ii)
- S is firmly nonexpansive if ∥Sx − Sy∥² ≤ ⟨Sx − Sy, x − y⟩ for all x, y ∈ H. It is obvious that a firmly nonexpansive mapping is nonexpansive.
- (iii)
- S is contractive if ∥Sx − Sy∥ ≤ k∥x − y∥ for all x, y ∈ H, where k ∈ [0, 1) is a real number.
- (iv)
- S is Meir–Keeler contractive if, for any ε > 0, there exists δ > 0 such that, for all x, y ∈ H, ε ≤ ∥x − y∥ < ε + δ implies ∥Sx − Sy∥ < ε. It is obvious that a contractive mapping is Meir–Keeler contractive.
- (v)
- S is L-Lipschitz continuous (L > 0) if ∥Sx − Sy∥ ≤ L∥x − y∥ for all x, y ∈ H.
- (vi)
- S is monotone if ⟨Sx − Sy, x − y⟩ ≥ 0 for all x, y ∈ H.
Lemma 1
([23,28,29,30]). A Meir–Keeler contractive mapping on a complete metric space has a unique fixed point.
Lemma 2
([31]). Let D be a convex subset of a Banach space E and S be a Meir–Keeler contractive mapping on D. Then, for each ε > 0, there exists r ∈ (0, 1) such that, for all x, y ∈ D, ∥x − y∥ ≥ ε implies ∥Sx − Sy∥ ≤ r∥x − y∥.
Recall the metric projection operator P_D : H → D, defined as follows: P_D x = argmin_{y ∈ D} ∥x − y∥ for each x ∈ H.
Lemma 3
([32,33]). Given x ∈ H and z ∈ D, we have
- (i)
- z = P_D x if and only if ⟨x − z, z − y⟩ ≥ 0 for all y ∈ D;
- (ii)
- P_D is firmly nonexpansive, i.e., ∥P_D x − P_D y∥² ≤ ⟨P_D x − P_D y, x − y⟩ for all x, y ∈ H;
- (iii)
- ∥x − P_D x∥² + ∥y − P_D x∥² ≤ ∥x − y∥² for all y ∈ D.
Definition 2
([34]). Let A : H → 2^H be a set-valued mapping. dom(A) = {x ∈ H : Ax ≠ ∅} is the effective domain of A. The graph of A is denoted by gra(A), i.e., gra(A) = {(x, u) ∈ H × H : u ∈ Ax}. A set-valued mapping A is called monotone if ⟨x − y, u − v⟩ ≥ 0 for all (x, u), (y, v) ∈ gra(A).
A monotone set-valued mapping A is called maximal monotone if, for each (x, u) ∈ H × H, (x, u) ∈ gra(A) if and only if ⟨x − y, u − v⟩ ≥ 0 for all (y, v) ∈ gra(A).
For a maximal monotone set-valued mapping A and r > 0, we can define a mapping J_r : H → H as J_r = (I + rA)^{−1}.
It is worth noticing that J_r is single-valued and firmly nonexpansive. The mapping J_r is called the resolvent of A for r.
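As a one-dimensional illustration (our own example, not part of the paper), the resolvent of the subdifferential of the absolute value is the soft-thresholding map, and its firm nonexpansiveness can be checked directly:

```python
import numpy as np

# Sketch: for the maximal monotone mapping A = subdifferential of |.|
# on the real line, the resolvent J_r = (I + rA)^{-1} is the
# soft-thresholding map (a standard fact; the data are our own example).

def resolvent_abs(x, r):
    """J_r for A = d|.|: soft-threshold each entry at level r."""
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

r = 0.5
xs = np.array([-2.0, -0.2, 0.0, 0.3, 1.5])
js = resolvent_abs(xs, r)             # -> [-1.5, 0, 0, 0, 1]

# Firm nonexpansiveness: |J_r x - J_r y|^2 <= (J_r x - J_r y)(x - y)
x, y = 2.0, -1.0
jx = resolvent_abs(np.array([x]), r)[0]
jy = resolvent_abs(np.array([y]), r)[0]
assert (jx - jy) ** 2 <= (jx - jy) * (x - y)
```

The single-valuedness of the resolvent is visible here: even though the subdifferential is set-valued at 0, (I + rA)^{−1} assigns exactly one point to each input.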
Lemma 4
([35]). Let A be a maximal monotone mapping on H into 2^H and B : H → H be a mapping. Then, for any r > 0, Fix(J_r(I − rB)) = (A + B)^{−1}(0), where J_r is the resolvent of A for r.
Lemma 5
([35,36]). Let A : H → 2^H be a maximal monotone mapping. For r, s > 0 and x ∈ H,
∥J_r x − J_s x∥ ≤ (|r − s| / r) ∥x − J_r x∥,
where J_r is the resolvent of A for r and J_s is the resolvent of A for s.
Let {x_n} be a sequence in H. We use x_n → x and x_n ⇀ x to indicate that {x_n} converges strongly and weakly to x, respectively.
Definition 3
([21,37]). Let {C_n} be a sequence of nonempty closed convex subsets of H. We define s-Li_n C_n and w-Ls_n C_n as follows: x ∈ s-Li_n C_n if and only if there exists a sequence {x_n} such that x_n → x and x_n ∈ C_n for each n ∈ ℕ; y ∈ w-Ls_n C_n if and only if there exist a subsequence {C_{n_i}} of {C_n} and a sequence {y_i} such that y_i ⇀ y and y_i ∈ C_{n_i} for each i ∈ ℕ.
If there exists a set C_0 ⊂ H such that C_0 = s-Li_n C_n = w-Ls_n C_n, we say that {C_n} converges to C_0 in the sense of Mosco and denote it by C_0 = M-lim_{n→∞} C_n. It is easy to prove that, if {C_n} is non-increasing with respect to inclusion, then {C_n} converges to ⋂_{n=1}^{∞} C_n in the sense of Mosco.
Lemma 6
([21,38]). Let {C_n} be a sequence of nonempty closed convex subsets of H. If C_0 = M-lim_{n→∞} C_n exists and is nonempty, then, for each x ∈ H, {P_{C_n} x} converges strongly to P_{C_0} x.
3. Algorithms
In this section, we present two algorithms to find the solutions to variational inclusion problems in Hilbert spaces.
The following conditions are assumed to be true.
(A1) B : H → H is L-Lipschitz continuous (L > 0) and monotone.
(A2) G : H → 2^H is maximal monotone.
(A3) f : H → H is a Meir–Keeler contraction.
(A4) Ω := (B + G)^{−1}(0) ≠ ∅.
We need the following lemma.
Lemma 7
([6]). The sequence {λ_n} of step sizes generated by the algorithm is non-increasing and lim_{n→∞} λ_n = λ > 0.
4. Main Results
In this section, we analyze the strong convergence of Algorithms 7 and 8.
Algorithm 7: Multi-step inertial hybrid Tseng’s algorithm.
Algorithm 8: Multi-step inertial shrinking Tseng’s algorithm.
Theorem 1.
Assume that the conditions (A1)–(A4) are satisfied. Then, the sequence generated by Algorithm 7 converges strongly to , where is the unique fixed point of .
Proof.
The proof is divided into four steps.
Step 1. We show that C_n and Q_n are closed and convex for each n ∈ ℕ.
Obviously, for each n ∈ ℕ, Q_n is a half-space, so Q_n is closed and convex.
For n = 1, C_1 is closed and convex. Suppose that k ∈ ℕ is given and that C_k is closed and convex. The set appended at step k + 1 is a half-space, so it is closed and convex, and hence C_{k+1} is closed and convex. By induction, C_n is closed and convex for each n ∈ ℕ.
Step 2. We prove that for each .
Let . We see that
Since , we have
Hence,
On the other hand, since , we have
Hence
From the maximal monotonicity of G, we deduce
which means
This means that . Hence, for each .
For , , which yields .
Assume that is given and that for some . From Lemma 3, we obtain
Since , we have
From the expression of , we obtain . Hence, .
Therefore, , by induction.
Step 3. We prove that converges strongly to z, where z is the unique fixed point of .
From the expression of , we know that . Set . It follows from Lemma 6 that
Suppose the contrary, i.e., . Choose a real number such that , and then choose a real number such that . Since f is a Meir–Keeler contraction, there exists δ > 0 such that implies . Taking , we have
and
Since , there exists such that
The following two cases are considered now.
Case 1. There exists such that .
From the expression of and Lemma 3, we can obviously see that . Thus, from (10) and (11), we conclude
By induction, we obtain
which implies that
This contradicts (9).
Case 2. for all .
From Lemma 2, there exists such that
Thus, for , we obtain
which means that is a finite number. Therefore,
This is a contradiction.
Hence, we obtain that converges strongly to z.
Step 4. We prove that converges strongly to .
From Step 3, it is sufficient to prove that . Since , we have
Hence, . Therefore is bounded, and so are , , and , where appears in Lemma 7. Combining (12) and (14), we conclude
Since , we know that . Hence,
By Lemma 7, we know that . Therefore, . Combining this with , we obtain
Since is nonexpansive, we conclude
Hence,
From Lemma 5, we have
Hence,
By and the continuity of , we conclude . It follows from Lemma 4 that . Since , we see that
Taking the limit in (22), we obtain
It follows from Lemma 3 that . Since has the unique fixed point , we obtain . □
Theorem 2.
Assume that the conditions (A1)–(A4) are satisfied. Let the sequence be generated by Algorithm 8. Then, converges strongly to , where is the unique fixed point of .
Proof.
By induction, it is obvious that is a closed convex subset of H for each . Using the same proof as in (2)–(6), we obtain that for each . Denote the unique fixed point of by z. From the expression of , we know that . Set . It follows from Lemma 6 that
Using arguments similar to those in the proof of Theorem 1, we obtain . □
Remark 1.
If , , , and , then Algorithm 8 reduces to Algorithm 4.
5. Applications
In this section, we apply our results to nonsmooth composite convex minimization problems in Hilbert spaces.
Denote by Γ₀(H) the set of proper, lower semicontinuous, convex functions from H into (−∞, +∞].
Consider the following problem:
find x* ∈ H such that g(x*) + h(x*) = min_{x ∈ H} {g(x) + h(x)}, (24)
where g, h ∈ Γ₀(H) satisfy the following conditions:
- g is Gâteaux differentiable, and its gradient ∇g is Lipschitz continuous; h may not be Gâteaux differentiable.
- The set of minimizers of g + h is nonempty.
We need the following definitions and lemmas.
Definition 4
([34]). Let h ∈ Γ₀(H) and λ > 0. The proximal operator of h of order λ is defined by prox_{λh}(x) = argmin_{y ∈ H} { h(y) + (1/(2λ))∥x − y∥² }, x ∈ H.
Lemma 8
([34]). Let h ∈ Γ₀(H) and λ > 0. Then, ∂h is maximal monotone and prox_{λh} = (I + λ∂h)^{−1}.
Lemma 9
([34]). Let h ∈ Γ₀(H). Then, x* ∈ H minimizes h if and only if 0 ∈ ∂h(x*).
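Putting Definition 4 and Lemmas 8 and 9 together: with the smooth part handled by a gradient step and the nonsmooth part by a proximal step, one obtains the forward–backward iteration. The following sketch is our own toy illustration of that mechanism (not the paper's Algorithms 9 and 10), applied to g(x) = ½∥x − b∥² and h = λ∥·∥₁:

```python
import numpy as np

# Sketch of forward-backward splitting for min g(x) + h(x), with
# g(x) = 0.5 ||x - b||^2 (smooth, grad g(x) = x - b, Lipschitz with
# L = 1) and h = lam * l1-norm; prox of h is soft-thresholding.
# The data below are our own illustration, not from the paper.

def prox_l1(x, t):
    """Proximal operator of t * ||.||_1 (cf. Definition 4)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(b, lam=0.1, step=1.0, iters=100):
    x = np.zeros_like(b)
    for _ in range(iters):
        grad = x - b                               # forward (gradient) step on g
        x = prox_l1(x - step * grad, step * lam)   # backward (prox) step on h
    return x

b = np.array([1.0, 0.05])
x_star = forward_backward(b)
# The minimizer is the soft-threshold of b at level lam: (0.9, 0).
```

This is exactly the substitution used in Theorems 3 and 4 below: the smooth gradient plays the role of the single-valued monotone operator and the subdifferential of the nonsmooth term plays the role of the set-valued one.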
Next, we apply our main results to solve problem (24).
Theorem 3.
Assume that the condition (A3) is satisfied. Let the sequence be generated by Algorithm 9. Then, the sequence converges strongly to , where is the unique fixed point of .
Proof.
∇g is monotone because g is convex. Let B = ∇g and G = ∂h in Theorem 1. We can obtain the desired result using Lemmas 8 and 9. □
Theorem 4.
Assume that the condition (A3) is satisfied. Let the sequence be generated by Algorithm 10. Then, converges strongly to , where is the unique fixed point of .
Proof.
∇g is monotone because g is convex. Let B = ∇g and G = ∂h in Theorem 2. We can obtain the desired result by Lemmas 8 and 9. □
Algorithm 9:
Algorithm 10:
6. Conclusions
Variational inclusion problems have long been a topic of interest for a large number of scholars. They not only play an increasingly important role in modern mathematics but are also widely used in many other fields, such as mechanics, optimization theory, and nonlinear programming. Tan et al. combined Tseng's algorithm with hybrid projection algorithms to obtain new strongly convergent algorithms. Our work in this paper builds on the work of Tan et al., combined with the multi-step inertial method and the Krasnosel’skiǐ–Mann algorithm, for solving variational inclusion problems in a real Hilbert space, and new strong convergence theorems are obtained. Using our results, we can solve related problems in a Hilbert space. Our results extend and improve many recent results of other authors [1,2,3,4,5,6,20,21]. For example, our Algorithm 8 extends and improves Algorithm 4 in [6] in the following ways:
- (i)
- One-step inertia is generalized to multi-step inertia.
- (ii)
- There is an in the definition of .
- (iii)
- The anchor value is replaced with the value of a Meir–Keeler contraction f in the last step of the iteration. This greatly expands the scope of application of the iterative algorithm.
Author Contributions
Conceptualization, M.Y. and B.J.; Data curation, B.J.; Formal analysis, Y.W. and M.Y.; Funding acquisition, M.Y.; Methodology, Y.W. and B.J.; Project administration, Y.W.; Resources, B.J.; Supervision, Y.W.; Writing—original draft, M.Y. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
The authors thank the referees for their helpful comments, which notably improved the presentation of this paper. This work was supported by the National Natural Science Foundation of China (grant no. 11671365).
Conflicts of Interest
The authors declare that they have no competing interests.
References
- Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898. [Google Scholar] [CrossRef] [Green Version]
- Passty, G.B. Ergodic convergence to a zero of the sum of monotone operators in Hilbert spaces. J. Math. Anal. Appl. 1979, 72, 383–390. [Google Scholar] [CrossRef] [Green Version]
- Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979. [Google Scholar] [CrossRef]
- Tseng, P. A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 2000, 38, 431–446. [Google Scholar] [CrossRef]
- Lorenz, D.; Pock, T. An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 2015, 51, 311–325. [Google Scholar] [CrossRef] [Green Version]
- Tan, B.; Zhou, Z.; Qin, X. Accelerated projection-based forward-backward splitting algorithms for monotone inclusion problems. J. Appl. Anal. Comput. 2020, 10, 2184–2197. [Google Scholar]
- Dadashi, V.; Postolache, M. Forward-backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. 2020, 9, 89–99. [Google Scholar] [CrossRef] [Green Version]
- Bauschke, H.H.; Combettes, P.L.; Reich, S. The asymptotic behavior of the composition of two resolvents. Nonlinear Anal. 2005, 60, 283–301. [Google Scholar] [CrossRef]
- Zhao, X.P.; Yao, J.C.; Yao, Y. A proximal algorithm for solving split monotone variational inclusions. UPB Sci. Bull. Ser. A 2020, 82, 43–52. [Google Scholar]
- Mann, W.R. Mean value methods in iteration. Proc. Am. Math. Soc. 1953, 4, 506–510. [Google Scholar] [CrossRef]
- Krasnosel’skiǐ, M.A. Two remarks on the method of successive approximations. Usp. Mat. Nauk. 1955, 10, 123–127. [Google Scholar]
- Halpern, B. Fixed points of nonexpanding maps. Bull. Amer. Math. Soc. 1967, 73, 957–961. [Google Scholar] [CrossRef] [Green Version]
- Reich, S. Weak convergence theorems for nonexpansive mappings in Banach spaces. J. Math. Anal. Appl. 1979, 67, 274–276. [Google Scholar] [CrossRef] [Green Version]
- Moudafi, A. Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000, 241, 46–55. [Google Scholar] [CrossRef] [Green Version]
- Nakajo, K.; Takahashi, W. Strong convergence theorems for nonexpansive mappings and nonexpansive semigroups. J. Math. Anal. Appl. 2003, 279, 372–379. [Google Scholar] [CrossRef] [Green Version]
- Takahashi, W.; Takeuchi, Y.; Kubota, R. Strong convergence theorems by hybrid methods for families of nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2008, 341, 276–286. [Google Scholar] [CrossRef]
- Marino, G.; Xu, H.K. A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 2006, 318, 43–52. [Google Scholar] [CrossRef] [Green Version]
- Tian, M. A general iterative algorithm for nonexpansive mappings in Hilbert spaces. Nonlinear Anal. 2010, 73, 689–694. [Google Scholar] [CrossRef]
- Thakur, B.S.; Thakur, D.; Postolache, M. A new iterative scheme for numerical reckoning fixed points of Suzuki’s generalized nonexpansive mappings. Appl. Math. Comput. 2016, 275, 147–155. [Google Scholar] [CrossRef]
- Dong, Q.L.; Huang, J.Z.; Li, X.H.; Cho, Y.J.; Rassias, T.M. MiKM: Multi-step inertial Krasnosel’skiǐ-Mann algorithm and its applications. J. Glob. Optim. 2019, 73, 801–824. [Google Scholar] [CrossRef]
- Yao, Y.; Shahzad, N.; Liou, Y.C.; Zhu, L.J. A projected fixed point algorithm with Meir-Keeler contraction for pseudocontractive mappings. J. Nonlinear Sci. Appl. 2017, 10, 483–491. [Google Scholar] [CrossRef] [Green Version]
- Yao, Y.; Postolache, M.; Yao, J.C. Strong convergence of an extragradient algorithm for variational inequality and fixed point problems. UPB Sci. Bull. Ser. A 2020, 82, 3–12. [Google Scholar]
- Meir, A.; Keeler, E. A theorem on contraction mappings. J. Math. Anal. Appl. 1969, 28, 326–329. [Google Scholar] [CrossRef] [Green Version]
- Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings; Marcel Dekker: New York, NY, USA, 1984. [Google Scholar]
- Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018. [Google Scholar] [CrossRef]
- Yao, Y.; Liou, Y.C.; Yao, J.C. Iterative algorithms for the split variational inequality and fixed point problems under nonlinear transformations. J. Nonlinear Sci. Appl. 2017, 10, 843–854. [Google Scholar] [CrossRef] [Green Version]
- Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 2019, 20, 113–133. [Google Scholar] [CrossRef] [Green Version]
- Reich, S. Fixed points of contractive functions. Boll. Un. Mat. Ital. 1972, 5, 26–42. [Google Scholar]
- Karapınar, E.; Samet, B.; Zhang, D. Meir-Keeler type contractions on JS-metric spaces and related fixed point theorems. J. Fixed Point Theory Appl. 2018, 20, 60. [Google Scholar] [CrossRef]
- Li, C.Y.; Karapınar, E.; Chen, C.M. A discussion on random Meir-Keeler contractions. Mathematics 2020, 8, 245. [Google Scholar] [CrossRef] [Green Version]
- Suzuki, T. Moudafi’s viscosity approximations with Meir-Keeler contractions. J. Math. Anal. Appl. 2007, 325, 342–352. [Google Scholar] [CrossRef] [Green Version]
- Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378. [Google Scholar] [CrossRef]
- Ceng, L.C.; Ansari, Q.H.; Yao, J.C. Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 2011, 74, 5286–5302. [Google Scholar] [CrossRef]
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin, Germany, 2011. [Google Scholar]
- Lin, L.J.; Takahashi, W. A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications. Positivity 2012, 16, 429–453. [Google Scholar] [CrossRef]
- Takahashi, W.; Xu, H.K.; Yao, J.C. Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 2015, 23, 205–221. [Google Scholar] [CrossRef]
- Mosco, U. Convergence of convex sets and of solutions of variational inequalities. Adv. Math. 1969, 3, 510–585. [Google Scholar] [CrossRef] [Green Version]
- Tsukada, M. Convergence of best approximations in a smooth Banach space. J. Approx. Theory 1984, 40, 301–309. [Google Scholar] [CrossRef] [Green Version]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. |
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).