Abstract
Let E be a Banach space which is both uniformly convex and q-uniformly smooth. We use VIP to denote a variational inclusion problem involving two accretive mappings and CFPP to denote the common fixed-point problem of an infinite family of strict pseudocontractions of order q. In this paper, we introduce a composite extragradient implicit method for solving a general symmetric system of variational inclusions (GSVI) with VIP and CFPP constraints. We then carry out a convergence analysis under mild conditions. Finally, we consider the celebrated LASSO problem in Hilbert spaces.
1. Introduction-Preliminaries
Throughout this article, we always suppose that H is a real infinite-dimensional Hilbert space endowed with an inner product and the induced norm. Let the nonempty subset C of H be convex and closed, and let $P_C$ be the nearest-point (metric) projection of H onto C. Let $A : C \to H$ be a nonself mapping. The celebrated variational inequality problem (VIP) is to find an element $x^* \in C$ such that $\langle Ax^*, x - x^* \rangle \ge 0$ for all $x \in C$. We denote by $\mathrm{VI}(C, A)$ the set of all solutions of the VIP. The VIP provides a unified framework for many industrial and applied problems, such as machine learning, transportation, image processing, and economics; see, e.g., [1,2,3,4,5,6,7]. Korpelevich's extragradient algorithm [8] is now one of the most popular algorithms for solving the VIP numerically. It reads as follows:
$$y_n = P_C(x_n - \tau A x_n), \qquad x_{n+1} = P_C(x_n - \tau A y_n),$$
with $\tau \in (0, 1/L)$, where L is the Lipschitz constant of the mapping A. If the solution set is nonempty, the generated sequence $\{x_n\}$ converges weakly to a solution. The price for this weak convergence is that A must be a Lipschitz continuous and monotone mapping. To date, the extragradient method and its relaxed variants have received much attention and have been studied extensively; see, e.g., [9,10,11,12,13]. Next, we assume that B is a set-valued monotone mapping on H and A is a single-valued monotone mapping on H. The so-called variational inclusion problem is to find an element $x^* \in H$ such that $0 \in (A + B)x^*$; it has recently been studied by many authors via splitting-based approximation methods; see, e.g., [14,15,16,17]. This model provides a unified framework for a number of theoretical and practical problems and has been investigated via different methods [18,19,20,21,22]. To the best of the author's knowledge, however, only a few related results have been obtained in Banach spaces. We now turn our attention to Banach spaces.
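To make the scheme above concrete, the following is a minimal numerical sketch of Korpelevich's extragradient iteration in a finite-dimensional Hilbert space; the affine monotone operator, the box constraint standing in for C, and the step size are illustrative assumptions rather than data from the paper.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    # Metric projection onto the box [lo, hi]^n, playing the role of P_C.
    return np.clip(x, lo, hi)

def extragradient(A, x0, tau, iters=200):
    # Korpelevich's extragradient method: a predictor and a corrector
    # projection step per iteration, with step size tau in (0, 1/L).
    x = x0
    for _ in range(iters):
        y = project_box(x - tau * A(x))   # predictor step
        x = project_box(x - tau * A(y))   # corrector step
    return x

# Illustrative monotone affine operator A(x) = M x + q (M + M^T is PSD).
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = np.array([1.0, -1.0])
A = lambda x: M @ x + q
L = np.linalg.norm(M, 2)                  # Lipschitz constant of A

print(extragradient(A, x0=np.zeros(2), tau=0.5 / L))   # approx. [-0.6, 0.2]
```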
In what follows, $E^*$ denotes the dual space of a Banach space E. Let $C \subset E$ be a convex and closed set. Given a nonlinear mapping T on C, we use the symbol $\mathrm{Fix}(T)$ to denote the set of all fixed points of T. Recall that T is said to be a Lipschitz mapping if and only if there exists $L > 0$ such that $\|Tx - Ty\| \le L\|x - y\|$ for all $x, y \in C$. If the Lipschitz constant L is exactly one, one says that T is a nonexpansive mapping; its complement $I - T$ is then an accretive (monotone) mapping.
Recall that the generalized duality mapping $J_q : E \to 2^{E^*}$ ($q > 1$) is defined by
$$J_q(x) = \{ f \in E^* : \langle x, f \rangle = \|x\|^q, \ \|f\| = \|x\|^{q-1} \}.$$
For the particular case $q = 2$, one writes J for $J_2$, which is commonly called the normalized duality mapping. Recall that a mapping T defined on C is said to be a ς-strict pseudocontraction of order q if, for all $x, y \in C$, there is $j_q(x - y) \in J_q(x - y)$ such that the following inequality holds for some $\varsigma > 0$:
$$\langle Tx - Ty, j_q(x - y) \rangle \le \|x - y\|^q - \varsigma \|(I - T)x - (I - T)y\|^q.$$
The convexity modulus of the space E, $\delta_E$, which maps the interval $[0, 2]$ into the interval $[0, 1]$, is defined as follows:
$$\delta_E(\epsilon) = \inf\Big\{ 1 - \frac{\|x + y\|}{2} : \|x\| \le 1, \ \|y\| \le 1, \ \|x - y\| \ge \epsilon \Big\}.$$
The smoothness modulus of the space E, $\rho_E$, which maps the interval $[0, \infty)$ into the interval $[0, \infty)$, is defined as follows:
$$\rho_E(\tau) = \sup\Big\{ \frac{\|x + \tau y\| + \|x - \tau y\|}{2} - 1 : \|x\| = \|y\| = 1 \Big\}.$$
Recall that a space E is said to be uniformly convex if $\delta_E(\epsilon) > 0$ for all $\epsilon \in (0, 2]$. Recall that a space E is said to be uniformly smooth if $\lim_{\tau \to 0^+} \rho_E(\tau)/\tau = 0$. Further, it is said to be q-uniformly smooth ($q > 1$) if there exists a constant $c > 0$ such that $\rho_E(\tau) \le c\,\tau^q$ for all $\tau > 0$. In q-uniformly smooth spaces, one has the following celebrated inequality: for some constant $\kappa_q > 0$,
$$\|x + y\|^q \le \|x\|^q + q\langle y, J_q(x) \rangle + \kappa_q \|y\|^q, \quad \forall x, y \in E.$$
An operator $\Pi : C \to D$, where D is a convex and closed subset of C, is said to be a sunny mapping if, for $x \in C$ and $t \ge 0$,
$$\Pi\bigl(\Pi x + t(x - \Pi x)\bigr) = \Pi x \quad \text{whenever } \Pi x + t(x - \Pi x) \in C.$$
If $\Pi$ is both nonexpansive and sunny, then
$$\langle x - \Pi x, J(y - \Pi x) \rangle \le 0, \quad \forall x \in C, \ y \in D.$$
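In a Hilbert space, the metric projection $P_C$ is precisely the sunny nonexpansive retraction onto C, and the inequality above reduces to $\langle x - P_C x, y - P_C x \rangle \le 0$ for all $y \in C$. The following quick numerical check of this characterization, on a box in the plane with random test points, is only an illustration under these assumptions.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    # Metric projection onto the box [lo, hi]^2; in a Hilbert space this is
    # the sunny nonexpansive retraction onto that set.
    return np.clip(x, lo, hi)

rng = np.random.default_rng(1)
for _ in range(5):
    x = rng.uniform(-2.0, 3.0, size=2)    # arbitrary point of the space
    y = rng.uniform(0.0, 1.0, size=2)     # arbitrary point of the box
    px = project_box(x)
    # Characterization of the projection: <x - Px, y - Px> <= 0 for all y in C.
    print(np.dot(x - px, y - px) <= 1e-12)
```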
In the setting of q-uniformly smooth spaces, we recall that an operator B is said to be accretive if, for each x, y in its domain, there exists $j_q(x - y) \in J_q(x - y)$ such that $\langle u - v, j_q(x - y) \rangle \ge 0$ for all $u \in Bx$ and $v \in By$. An accretive operator B is said to be inverse-strongly accretive of order q if there exists $\alpha > 0$ such that, for each x, y in its domain, there is $j_q(x - y) \in J_q(x - y)$ with $\langle u - v, j_q(x - y) \rangle \ge \alpha \|u - v\|^q$ for all $u \in Bx$ and $v \in By$. An accretive operator B is said to be m-accretive if $R(I + \lambda B) = E$ for all $\lambda > 0$. Furthermore, one can define a mapping $J^B_\lambda := (I + \lambda B)^{-1}$ for each $\lambda > 0$; such a $J^B_\lambda$ is called the resolvent of B for $\lambda > 0$, and this is the notation used in the sequel. From [9], we have that $J^B_\lambda$ is single-valued and nonexpansive and that $\mathrm{Fix}(J^B_\lambda) = B^{-1}(0)$ for each $\lambda > 0$. From [23], one has the resolvent identity $J^B_\lambda x = J^B_\mu\bigl(\frac{\mu}{\lambda} x + (1 - \frac{\mu}{\lambda}) J^B_\lambda x\bigr)$ for all $x \in E$ and $\lambda, \mu > 0$.
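As an illustration of how the resolvent enters splitting-based methods for the inclusion $0 \in (A + B)x$, here is a minimal sketch of the standard forward-backward iteration $x_{n+1} = J^B_\lambda(x_n - \lambda A x_n)$; choosing B as the normal cone of the nonnegative orthant (whose resolvent is simply the projection onto it) and an affine inverse-strongly monotone A are assumptions made for the example only, not ingredients of the algorithm studied in this paper.

```python
import numpy as np

def resolvent_nonneg_cone(x, lam):
    # Resolvent of B = normal cone of the nonnegative orthant:
    # (I + lam*B)^{-1} reduces to the metric projection, independent of lam.
    return np.maximum(x, 0.0)

def forward_backward(A, resolvent, x0, lam, iters=500):
    # Forward-backward splitting for 0 in (A + B)x:
    # an explicit (forward) step on A followed by a resolvent (backward) step on B.
    x = x0
    for _ in range(iters):
        x = resolvent(x - lam * A(x), lam)
    return x

# Illustrative inverse-strongly monotone operator A(x) = M x + q, M symmetric PSD.
M = np.array([[3.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -2.0])
A = lambda x: M @ x + q

print(forward_backward(A, resolvent_nonneg_cone, x0=np.zeros(2), lam=0.2))
# approximates the zero of A + B, here approx. [0.0, 1.0]
```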
Let $A_1, A_2$ and $B_1, B_2$ be nonlinear mappings with the properties specified below. Consider the symmetric system of variational inclusions, which consists of finding a pair $(x^*, y^*)$ in $C \times C$ such that
where $\mu_i$ is a positive constant for $i = 1, 2$. Ceng, Postolache, and Yao [24] obtained the fact that problem (2) is equivalent to a fixed-point problem.
Based on this equivalence, Ceng et al. [24] suggested a composite viscosity implicit rule for solving the GSVI (2) as follows:
where the parameters and the involved sequences satisfy conditions (i)–(iv) specified in [24]. They proved that the generated sequence converges strongly to a point of the solution set, which solves a certain VIP.
In this article, we introduce and investigate a composite extragradient implicit method for solving the GSVI (2) with VIP and CFPP constraints. We then analyze the convergence of the suggested method in the setting of uniformly convex and q-uniformly smooth Banach spaces under some mild conditions. An application is also considered.
From now on, we always use $\kappa_q$ to denote the q-uniform smoothness coefficient appearing in the inequality above; see [25,26]. We also list some essential lemmas needed for the strong convergence theorem.
Lemma 1
([27]). Let E be a q-uniformly smooth Banach space with $q > 1$ and let $C \subseteq E$ be a closed convex set. Let $T : C \to C$ be a ς-strict pseudocontraction of order q. Given $\alpha \in (0, 1]$, define the single-valued nonlinear mapping $T_\alpha := (1 - \alpha)I + \alpha T$. Then $T_\alpha$ is nonexpansive with $\mathrm{Fix}(T_\alpha) = \mathrm{Fix}(T)$ provided $0 < \alpha \le \min\bigl\{1, (q\varsigma/\kappa_q)^{1/(q-1)}\bigr\}$.
Lemma 2
([28]). Let E be a q-uniformly smooth Banach space with $q > 1$. Suppose that $A : C \to E$ is an α-inverse-strongly accretive mapping of order q. Then, for any given $\lambda > 0$,
$$\|(I - \lambda A)x - (I - \lambda A)y\|^q \le \|x - y\|^q - \lambda\bigl(q\alpha - \kappa_q \lambda^{q-1}\bigr)\|Ax - Ay\|^q, \quad \forall x, y \in C.$$
In particular, $I - \lambda A$ is nonexpansive whenever $0 < \lambda \le (q\alpha/\kappa_q)^{1/(q-1)}$.
Lemma 3
([26,28]). Let q be a fixed real number with $q > 1$ and let E be a q-uniformly smooth Banach space. Let $B_1, B_2$ be two m-accretive operators and let $A_i$ be a $\sigma_i$-inverse-strongly accretive mapping of order q for each $i = 1, 2$. Define an operator G by
$$G := J^{B_1}_{\mu_1}(I - \mu_1 A_1)\,J^{B_2}_{\mu_2}(I - \mu_2 A_2), \qquad \mu_1, \mu_2 > 0.$$
If $0 < \mu_i \le (q\sigma_i/\kappa_q)^{1/(q-1)}$ for $i = 1, 2$, then G is a nonexpansive mapping.
The following lemma was proved in [29]; it is a simple extension of an inequality established in [26].
Lemma 4.
In a uniformly convex real Banach space, there is a convex, strictly increasing, continuous function g, which maps $[0, \infty)$ into $[0, \infty)$ with $g(0) = 0$, such that
where the scalar weight is any admissible real number, the real function is associated with μ and ν, and the inequality holds on every bounded ball of radius r (r is some real number).
Lemma 5
([26]). If E is a uniformly convex Banach space, then there exists a convex, strictly increasing, continuous function h, which maps $[0, \infty)$ into $[0, \infty)$ with $h(0) = 0$, such that
where x and y are in some bounded subset of E and the scalar weight lies in $[0, 1]$.
Lemma 6
([30]). Let $\{S_n\}$ be a sequence of self-mappings on C such that
$$\sum_{n=1}^{\infty} \sup\{\|S_{n+1}x - S_n x\| : x \in D\} < \infty$$
for each bounded subset D of C.
Then, for each $y \in C$, $\{S_n y\}$ converges strongly to some point of C. Moreover, let S be the self-mapping on C defined by $Sy = \lim_{n \to \infty} S_n y$ for all $y \in C$. Then $\lim_{n \to \infty} \sup\{\|S_n x - S x\| : x \in D\} = 0$ for each bounded subset D of C.
Lemma 7
([31]). Let E be a strictly convex Banach space. Let $T_i$ be a nonexpansive mapping defined on a convex and closed subset C of E for each $i \ge 1$. Let $\bigcap_{i=1}^{\infty} \mathrm{Fix}(T_i)$ be nonempty. Let $\{\lambda_i\}$ be a positive sequence such that $\sum_{i=1}^{\infty} \lambda_i = 1$. Then the mapping S on C defined by $Sx := \sum_{i=1}^{\infty} \lambda_i T_i x$ is well defined and nonexpansive, and $\mathrm{Fix}(S) = \bigcap_{i=1}^{\infty} \mathrm{Fix}(T_i)$ holds.
Lemma 8
([32]). Let E be a smooth Banach space and let $T : C \to C$, where C is a convex and closed set in E, be a nonexpansive self-mapping with $\mathrm{Fix}(T) \ne \emptyset$. Suppose that λ is a constant in the interval $(0, 1)$. Then the net $\{x_\lambda\}$, where $x_\lambda = \lambda u + (1 - \lambda) T x_\lambda$ for a fixed $u \in C$, converges strongly as $\lambda \to 0^+$ to a fixed point $x^* \in \mathrm{Fix}(T)$, which solves $\langle x^* - u, J(x^* - x) \rangle \le 0$ for all $x \in \mathrm{Fix}(T)$.
Lemma 9
([33]). Let $\{a_n\}$ be a sequence in $[0, \infty)$ such that $a_{n+1} \le (1 - t_n)a_n + t_n b_n$, where $\{t_n\}$ and $\{b_n\}$ satisfy the conditions: (i) $\{t_n\} \subset [0, 1]$, $\sum_{n=1}^{\infty} t_n = \infty$; (ii) $\limsup_{n \to \infty} b_n \le 0$ or $\sum_{n=1}^{\infty} |t_n b_n| < \infty$. Then $\lim_{n \to \infty} a_n = 0$.
Lemma 10
([34]). Let $\{a_n\}$ be a real sequence such that there exists a subsequence $\{a_{n_j}\}$ of $\{a_n\}$ with $a_{n_j} < a_{n_j+1}$ for each integer $j \ge 0$. Let $n_0$ be an integer such that $\{k \le n_0 : a_k < a_{k+1}\} \ne \emptyset$. Define the integer sequence $\{\tau(n)\}_{n \ge n_0}$ by $\tau(n) := \max\{k \le n : a_k < a_{k+1}\}$, where the integer $\tau(n)$ is chosen as the largest index with this property. Then: (i) $\tau(n_0) \le \tau(n_0 + 1) \le \cdots$ and $\tau(n) \to \infty$ as $n \to \infty$; (ii) $a_{\tau(n)} \le a_{\tau(n)+1}$ and $a_n \le a_{\tau(n)+1}$ for each $n \ge n_0$.
2. Results
Throughout this section, suppose that C is a convex, closed set in a Banach space E, which is both uniformly convex and q-uniformly smooth with $q > 1$. Let both $B_1$ and $B_2$ be m-accretive operators. Let $A_i$ be a single-valued $\sigma_i$-inverse-strongly accretive mapping of order q for each $i = 1, 2$. Further, one assumes that G is the self-mapping defined as in Lemma 3 with constants $\mu_1, \mu_2 > 0$. Let each member of the given countable family of mappings be a uniformly strict pseudocontraction. Let the remaining two operators be a σ-inverse-strongly accretive mapping of order q and an m-accretive operator, respectively; together they define the VIP (variational inclusion) constraint. Assume that the feasibility set, i.e., the common solution set of the GSVI (2), the VIP, and the CFPP, is nonempty.
| Algorithm 1: Composite extragradient implicit method for the GSVI (2) with VIP and CFPP constraints. |
| Initial Step. Given the data above, let the initial point be an arbitrary element of C. Iteration Steps. Compute the next iterate from the current one as follows: Step 1. Calculate the first auxiliary point; Step 2. Calculate the second auxiliary point; Step 3. Calculate the third auxiliary point; Step 4. Calculate the new iterate, where u is a fixed element in C and the parameter sequences satisfy the conditions of Theorem 1 below. Increase the iteration index by one and go to Step 1. |
Lemma 11.
Let the vector sequence be constructed by Algorithm 1. Then this sequence is bounded.
Proof.
Putting the parameters as indicated, we know from Lemma 1 that each averaged mapping is nonexpansive with the corresponding fixed-point set. Fix an element of the feasibility set. Then we observe the basic estimate. By use of Lemmas 2 and 3, we deduce that the relevant composite operators and G are nonexpansive mappings. It is clear that there is only one element satisfying the implicit relation in the first step.
Since G and are both nonexpansive mappings, we get
and hence the corresponding estimate follows. Using the nonexpansivity of G again, we deduce the analogous bound. By use of Lemmas 2 and 4, we have
which leads to . On the other hand,
This ensures the required bound. So it follows from (3) that
which leads to . □
Theorem 1.
Suppose that the vector sequence is generated by Algorithm 1. Assume that the parameter sequences satisfy the stated conditions. Suppose that the summability condition of Lemma 6 holds, where D is a bounded set in C. Define the mapping S pointwise as the limit of the family, as in Lemma 6. Then the sequence converges strongly to a point of the feasibility set, and this solution also uniquely solves the associated variational inequality.
Proof.
We distinguish two possible cases.
Case 1. We assume that there is an integer beyond which the relevant scalar sequence is non-increasing. From (4), we obtain the first estimate, from which the next relation follows. It is easy to see from Lemma 4 that
and
From the fact that g is a strictly increasing, continuous and convex function with , one has
By use of Lemma 5, we get
where h is the convex, strictly increasing, continuous function from Lemma 5. This entails
In a similar way, one concludes
which hence entails
Note that
which leads us to
From (8), one has
which immediately yields that
Since the relevant coefficients behave as assumed and the functions g and h are strictly increasing, continuous, and convex, one concludes that the corresponding terms tend to zero. This immediately implies that
Furthermore, using the auxiliary point defined in the first step of the iteration procedure, we obtain that
which yields
This further implies that
In a similar way, one further concludes
Using (10) leads us to
Hence,
Note that the relevant terms vanish and that g and h are strictly increasing, continuous, and convex functions. From (6) and (9), we have
By using (9) and (11), one concludes the next estimate. It follows that
Thanks to (11), we get an estimate which, together with the preceding one, leads to
From the boundedness of the iterates and the definition of the limit mapping, Lemma 6 yields the uniform convergence on bounded sets. So, further, from (13), we have
We deduce from Lemma 1 that the corresponding averaged mapping is nonexpansive. It is easy to see from (14) that the associated difference vanishes. For each index, set the partial composition accordingly. It follows that
In light of for all , we obtain
We define a mapping as a countable convex combination of the family with positive constants summing to one. Lemma 7 guarantees that this mapping is nonexpansive and
Taking into account that
we deduce from (12) and (15) that
Lemma 8 guarantees that the implicit net converges in norm to a point which, further, solves the associated VIP. From (1), we have
Further, from (16), one has
where M is a constant bounding the relevant terms for all indices. From the properties of the parameter sequences and the fact that the corresponding terms vanish in the limit, one obtains the required estimate. A simple calculation indicates that
and then
Using (17), we obtain the final recursive inequality. An application of Lemma 9 then yields the desired convergence to zero. Thus, the sequence converges in norm to the stated limit.
Case 2. We assume that there is an index in N, the set of all positive integers, beyond which the monotonicity of Case 1 fails; that is, there is a subsequence along which the relevant scalar sequence increases. We now define a new integer-valued mapping as in Lemma 10. Using Lemma 10, one concludes
Proceeding with the same reasoning as in Case 1, we can obtain
In view of and , we conclude that
Consequently, using Lemma 3, we have that
Thanks to , we get
It is easy to see from (18) that as . This completes the proof.
It is well known that every Hilbert space is uniformly convex and 2-uniformly smooth and that, in Hilbert spaces, (m-)accretive operators coincide with (maximal) monotone operators. From Theorem 1, we derive the following conclusion.
Corollary 1.
Let C be a closed convex set in a real Hilbert space H. Let the given countable family consist of ς-uniformly strict pseudocontraction mappings defined on C. Suppose that $B_1$ and $B_2$ are both maximal monotone operators and $A_i$ is a $\sigma_i$-inverse-strongly monotone mapping for $i = 1, 2$. Define the mapping G as in Lemma 3 with suitable constants $\mu_1, \mu_2 > 0$. Let the remaining pair consist of a σ-inverse-strongly monotone mapping and a maximal monotone operator, respectively. For any given initial data, let the iterative sequence be generated by
where the parameter sequences lie in the appropriate intervals and satisfy restrictions analogous to those of Theorem 1. Assume that the summability condition of Lemma 6 holds, where D is a bounded subset of C. Define a self-mapping S as the pointwise limit of the family, and further assume that its fixed-point set coincides with the common fixed-point set of the family. Then the generated sequence converges strongly to a point of the feasibility set, which uniquely solves the associated variational inequality.
Next, we recall the least absolute shrinkage and selection operator (LASSO) [35], which can be formulated as a convex constrained optimization problem:
where T is a bounded linear operator on H, b is a fixed vector in H, and the remaining parameter is a positive real number. In this section, the set of solutions of LASSO (20) will be referred to as the solution set. LASSO, which acts as a unified model for a number of real problems, has been investigated in different settings. One knows that a solution to (20) is a minimizer of an associated penalized minimization problem whose smooth part is $f(x) = \frac{1}{2}\|Tx - b\|^2$. It is known that $\nabla f = T^*(T\cdot{} - b)$ is inverse-strongly monotone. Hence, we have that z solves the LASSO if and only if z solves the problem which consists of finding $z \in H$ such that
where $\lambda > 0$ is real and $\mathrm{prox}_{\lambda\|\cdot\|_1}$ is the proximal mapping of $\lambda\|\cdot\|_1$, defined as follows:
This proximal mapping is separable across indices. So, componentwise, $\bigl(\mathrm{prox}_{\lambda\|\cdot\|_1}(x)\bigr)_k = \mathrm{sgn}(x_k)\max\{|x_k| - \lambda, 0\}$ for $k = 1, 2, \ldots$, with the convention $\mathrm{sgn}(0) = 0$.
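For illustration, here is a minimal numerical sketch of the componentwise soft-thresholding formula above, i.e., the proximal mapping of $\lambda\|\cdot\|_1$; the test vector and threshold are arbitrary values chosen for the example.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal mapping of lam*||.||_1, applied componentwise:
    # sgn(x_k) * max(|x_k| - lam, 0).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

print(soft_threshold(np.array([1.5, -0.3, 0.0, 2.0]), lam=0.5))
# components: 1.0, 0.0, 0.0, 1.5 (up to a signed zero)
```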
By making the corresponding identifications in Corollary 1, we obtain the following result immediately.
Corollary 2.
Let the mappings and parameters be the same as in Corollary 1, with the identifications made above. Assume that the solution set of LASSO (20) is nonempty. For any given initial data, let the iterative sequence be generated by
where the parameter sequences are such that the conditions presented in Corollary 1 hold. Then the generated sequence converges in norm to a solution of LASSO (20).
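To connect the corollary with a concrete computation, the sketch below implements the classical proximal-gradient (ISTA) iteration $x_{n+1} = \mathrm{prox}_{\gamma\lambda\|\cdot\|_1}\bigl(x_n - \gamma T^*(Tx_n - b)\bigr)$ for LASSO. This is the plain, non-implicit forward-backward baseline, not the composite implicit scheme of Corollary 2; the random data, the step size $\gamma = 1/\|T\|^2$, and the iteration count are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal mapping of t*||.||_1 (componentwise soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(T, b, lam, iters=500):
    # Proximal-gradient (forward-backward) iteration for
    #   min_x 0.5*||T x - b||^2 + lam*||x||_1.
    gamma = 1.0 / np.linalg.norm(T, 2) ** 2      # step size 1/||T||^2
    x = np.zeros(T.shape[1])
    for _ in range(iters):
        grad = T.T @ (T @ x - b)                 # gradient of the smooth part
        x = soft_threshold(x - gamma * grad, gamma * lam)
    return x

rng = np.random.default_rng(0)
T = rng.standard_normal((30, 60))                # illustrative sensing matrix
x_true = np.zeros(60)
x_true[:4] = [2.0, -1.5, 1.0, 0.5]               # sparse ground truth
b = T @ x_true

print(ista(T, b, lam=0.1)[:6])                   # approximately recovers the sparse signal
```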
Funding
This work was supported by a Gyeongnam National University of Science and Technology Grant (1 March 2020–28 February 2022).
Acknowledgments
The author thanks the referees for their useful suggestions, which improved the presentation of this manuscript.
Conflicts of Interest
The author declares no conflict of interest.
References
- Qin, X.; An, N.T. Smoothing algorithms for computing the projection onto a Minkowski sum of convex sets. Comput. Optim. Appl. 2019, 74, 821–850. [Google Scholar] [CrossRef]
- Sahu, D.R.; Yao, J.C.; Verma, M.; Shukla, K.K. Convergence rate analysis of proximal gradient methods with applications to composite minimization problems. Optimization 2020. [Google Scholar] [CrossRef]
- Nguyen, L.V.; Qin, X. The Minimal time function associated with a collection of sets. ESAIM Control Optim. Calc. Var. 2020. [Google Scholar] [CrossRef]
- Ansari, Q.H.; Islam, M.; Yao, J.C. Nonsmooth variational inequalities on Hadamard manifolds. Appl. Anal. 2020, 99, 340–358. [Google Scholar] [CrossRef]
- An, N.T. Solving k-center problems involving sets based on optimization techniques. J. Global Optim. 2020, 76, 189–209. [Google Scholar] [CrossRef]
- Nguyen, L.V.; Ansari, Q.H.; Qin, X. Weak sharpness and finite convergence for solutions of nonsmooth variational inequalities in Hilbert spaces. Appl. Math. Optim. 2020. [Google Scholar] [CrossRef]
- Nguyen, L.V.; Ansari, Q.H.; Qin, X. Linear conditioning, weak sharpness and finite convergence for equilibrium problems. J. Global Optim. 2020, 77, 405–424. [Google Scholar] [CrossRef]
- Korpelevich, G.M. The extragradient method for finding saddle points and other problems. Ekon. i Mat. Metod. 1976, 12, 747–756. [Google Scholar]
- Dehaish, B.A.B. Weak and strong convergence of algorithms for the sum of two accretive operators with applications. J. Nonlinear Convex Anal. 2015, 16, 1321–1336. [Google Scholar]
- Takahashi, W.; Yao, J.C. The split common fixed point problem for two finite families of nonlinear mappings in Hilbert spaces. J. Nonlinear Convex Anal. 2019, 20, 173–195. [Google Scholar]
- Qin, X.; Wang, L.; Yao, J.C. Inertial splitting method for maximal monotone mappings. J. Nonlinear Convex Anal. 2020, in press. [Google Scholar]
- Takahashi, W.; Wen, C.F.; Yao, J.C. The shrinking projection method for a finite family of demimetric mappings with variational inequality problems in a Hilbert space. Fixed Point Theory 2018, 19, 407–419. [Google Scholar] [CrossRef]
- Qin, X.; Yao, J.C. A viscosity iterative method for a split feasibility problem. J. Nonlinear Convex Anal. 2019, 20, 1497–1506. [Google Scholar]
- Liu, L. A hybrid steepest descent method for solving split feasibility problems involving nonexpansive mappings. J. Nonlinear Convex Anal. 2019, 20, 471–488. [Google Scholar]
- Liu, L.; Qin, X.; Agarwal, R.P. Iterative methods for fixed points and zero points of nonlinear mappings with applications. Optimization 2019. [Google Scholar] [CrossRef]
- Chang, S.S. Common zero point for a finite family of inclusion problems of accretive mappings in Banach spaces. Optimization 2018, 67, 1183–1196. [Google Scholar] [CrossRef]
- Qin, X.; Yao, J.C. Weak convergence of a Mann-like algorithm for nonexpansive and accretive operators. J. Inequal. Appl. 2016, 2016, 232. [Google Scholar] [CrossRef]
- Abbas, H.A.; Aremu, K.O.; Jolaoso, L.O.; Mewomo, O.T. An inertial forward-backward splitting method for approximating solutions of certain optimization problem. J. Nonlinear Funct. Anal. 2020, 2020, 6. [Google Scholar]
- Kimura, Y.; Nakajo, K. Strong convergence for a modified forward-backward splitting method in Banach spaces. J. Nonlinear Var. Anal. 2019, 3, 5–18. [Google Scholar]
- Ceng, L.C. Asymptotic inertial subgradient extragradient approach for pseudomonotone variational inequalities with fixed point constraints of asymptotically nonexpansive mappings. Commun. Optim. Theory 2020, 2020, 2. [Google Scholar]
- Gibali, A. Polyak’s gradient method for solving the split convex feasibility problem and its applications. J. Appl. Numer. Optim. 2019, 1, 145–156. [Google Scholar]
- Tong, M.Y.; Tian, M. Strong convergence of the Tseng extragradient method for solving variational inequalities. Appl. Set-Valued Anal. Optim. 2020, 2, 19–33. [Google Scholar]
- Barbu, V. Nonlinear Semigroups and Differential Equations in Banach Spaces; Noordhoff: Amsterdam, The Netherlands, 1976. [Google Scholar]
- Ceng, L.C.; Postolache, M.; Yao, Y. Iterative algorithms for a system of variational inclusions in Banach spaces. Symmetry 2019, 11, 811. [Google Scholar] [CrossRef]
- Aoyama, K.; Iiduka, H.; Takahashi, W. Weak convergence of an iterative sequence for accretive operators in Banach spaces. Fixed Point Theory Appl. 2006, 2006, 35390. [Google Scholar] [CrossRef]
- Xu, H.K. Inequalities in Banach spaces with applications. Nonlinear Anal. 1991, 16, 1127–1138. [Google Scholar] [CrossRef]
- Zhang, H.; Su, Y. Convergence theorems for strict pseudocontractions in q-uniformly smooth Banach spaces. Nonlinear Anal. 2009, 71, 4572–4580. [Google Scholar] [CrossRef]
- Lopez, G.; Martin-Marquez, V.; Wang, F.; Xu, H.K. Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, 2012, 109236. [Google Scholar] [CrossRef]
- Qin, X.; Cho, S.Y.; Yao, J.C. Weak and strong convergence of splitting algorithms in Banach spaces. Optimization 2020, 69, 243–267. [Google Scholar] [CrossRef]
- Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal. 2007, 67, 2350–2360. [Google Scholar] [CrossRef]
- Bruck, R.E. Properties of fixed-point sets of nonexpansive mappings in Banach spaces. Trans. Am. Math. Soc. 1973, 179, 251–262. [Google Scholar] [CrossRef]
- Reich, S. Strong convergence theorems for resolvents of accretive operators in Banach spaces. J. Math. Anal. Appl. 1980, 75, 287–292. [Google Scholar] [CrossRef]
- Xue, Z.; Zhou, H.; Cho, Y.J. Iterative solutions of nonlinear equations for m-accretive operators in Banach spaces. J. Nonlinear Convex Anal. 2000, 1, 313–320. [Google Scholar]
- Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912. [Google Scholar] [CrossRef]
- Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288. [Google Scholar] [CrossRef]
© 2020 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).