Abstract
This paper presents two inertial iterative algorithms: one for estimating a solution of the split variational inclusion problem, and an extended version for estimating a common solution of the split variational inclusion problem and the fixed point problem of a nonexpansive mapping in the setting of real Hilbert spaces. We establish the weak convergence of the first algorithm and the strong convergence of the extended version without using a pre-estimated norm of the bounded linear operator. We also exhibit the reliability and behavior of the proposed algorithms, under appropriate assumptions, in a numerical example.
Keywords:
split variational inclusion; fixed point problem; inertial algorithms; weak convergence; strong convergence
MSC: 47H05; 47H06; 49J53
1. Introduction
The split feasibility problem, due to Censor and Elfving [1], has ample applications in medical science and has therefore been widely used over the past twenty years in the design of intensity-modulated radiation therapy treatments and in other areas of the applied sciences; see, e.g., [2,3,4,5]. Censor et al. [6,7] merged the variational inequality problem with the split feasibility problem, giving rise to a new kind of problem known as the split variational inequality problem, defined as:
where and are subsets of Hilbert spaces and , respectively, is a bounded linear operator, and are two operators, and .
Moudafi [8] extended into a split monotone variational inclusion problem defined as:
where and are set-valued mappings on Hilbert spaces and , respectively, and . Moudafi [8] proposed the following iterative scheme for . Let , choose any starting point and compute
where is an adjoint operator of B, with R being the spectral radius of the operator , and
If , then turns into the split inclusion problem suggested and discussed by Byrne et al. [9]:
where and , are the same as in (2). Moreover, Byrne et al. [9] suggested the following iterative scheme for . Let and select a starting point ; then, compute
where is the adjoint operator of B, , and , are the resolvents of the monotone mappings , respectively. It is easy to see that solves if and only if . Kazmi and Rizvi [10] studied the following iterative scheme for computing common solutions of and of a nonexpansive mapping S. For , compute
where f is a contraction and . By extending the work of Kazmi and Rizvi [10], Dilshad et al. [11] discussed the common solution of and the fixed point of a finite collection of nonexpansive mappings. Sitthithakerngkiet et al. [12] investigated the common solutions of and a fixed point of a countably infinite collection of nonexpansive mappings and proposed and discussed the following method. For , compute
where is arbitrary, and is a W-mapping generated by an infinite collection of nonexpansive mappings. Furthermore, Akram et al. [13] modified the method discussed in [10] and investigated the common solution of and :
where , satisfying and . Some results related to and can be found in [14,15,16,17,18,19] and the references therein.
It is worth noting that a step size depending on the norm is commonly used in the above-mentioned iterative schemes. To avoid this restriction, a new type of iterative method with a self-adaptive step size has been developed. López et al. [20] proposed a relaxed iterative method for with a self-adaptive step size. Dilshad et al. [21] studied the problem without using a pre-calculated norm . Some useful related work can be found in [22,23,24,25,26] and the references therein.
In recent years, great efforts have been made to speed up various algorithms. The inertial term, as one such acceleration technique, has been studied by many researchers because of its simple form and good acceleration effect. Recall that, using an implicit discretization of the derivatives, Alvarez and Attouch [27] developed the inertial proximal point method, which can be expressed as
where A is a monotone mapping, is the resolvent of A, and . Schemes of this type have a better convergence rate, and hence this scheme has been modified and applied to solve numerous nonlinear problems; see [28,29,30,31,32,33,34] and the references therein.
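To make the role of the inertial term concrete, the following minimal sketch runs the Alvarez–Attouch iteration, x_{n+1} = J_{λA}(x_n + θ(x_n − x_{n−1})), for the illustrative monotone mapping A(x) = x on the real line, whose resolvent is J_{λA}(x) = x/(1 + λ). The operator, parameters, and starting points are assumptions chosen purely for illustration, not data from the present paper.

```python
# Inertial proximal point sketch: x_{n+1} = J_{λA}(x_n + θ(x_n − x_{n−1}))
# for the assumed monotone operator A(x) = x, whose unique zero is 0.

def resolvent(x, lam):
    """Resolvent J_{λA} of A(x) = x: solve y + λ·A(y) = x for y."""
    return x / (1.0 + lam)

def inertial_proximal_point(x0, x1, lam=1.0, theta=0.5, n_iters=100):
    """Run the inertial proximal point method from two starting points."""
    x_prev, x = x0, x1
    for _ in range(n_iters):
        w = x + theta * (x - x_prev)      # inertial extrapolation step
        x_prev, x = x, resolvent(w, lam)  # proximal (resolvent) step
    return x

x_star = inertial_proximal_point(x0=5.0, x1=4.0)
print(abs(x_star))  # a value very close to 0, the zero of A
```

The extrapolation `w` uses the two most recent iterates, which is what distinguishes the inertial scheme from the classical proximal point method.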
Following the above-mentioned inertial method, we consider two inertial iterative algorithms for approximating the solution of and common solutions of and of a nonexpansive mapping.
The next section contains some background theory and auxiliary results that are helpful in the proofs of the main results. In Section 3, we describe two self-adaptive inertial iterative methods. Section 4 is devoted to the proofs of the main results, discussing the solution of and a common solution of and . Finally, we illustrate the behavior and reliability of the proposed iterative algorithms in a numerical example.
2. Preliminaries
Assume that is a real Hilbert space with inner product . The strong convergence of a real sequence to z is indicated by , and weak convergence is indicated by . If is a sequence in X, denotes the weak -limit set of , that is
We know that for each , there exists a unique nearest point in Q, denoted by , such that
is called the projection of z onto , which satisfies
Moreover, is characterized by the fact
For all in Hilbert space X, such that ; then, we have the following equality and inequality
and
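As a concrete instance of the metric projection introduced above, the following sketch computes the projection onto an assumed set Q, the closed ball of radius r centered at the origin in the plane: points inside Q are fixed, and points outside are scaled radially back onto the boundary. The set and the sample point are illustrative assumptions only.

```python
# Metric projection onto the closed ball Q = {x : |x| <= r} in R^2:
# the nearest point of Q to z is z itself when z lies in Q, and the
# radial rescaling of z onto the boundary otherwise.
import math

def project_onto_ball(z, r):
    """Nearest point in the closed ball of radius r (centered at 0) to z."""
    norm = math.hypot(*z)
    if norm <= r:
        return z             # z already lies in Q, so P_Q(z) = z
    scale = r / norm         # otherwise scale z radially onto the boundary
    return (z[0] * scale, z[1] * scale)

p = project_onto_ball((3.0, 4.0), 1.0)
print(p)  # approximately (0.6, 0.8), the point of the unit ball nearest (3, 4)
```

One can check numerically that the characterizing inequality of the projection holds: the vector from p to the input forms an obtuse angle with every direction into Q.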
Definition 1.
A mapping is called
- (i)
- Contraction, if
- (ii)
- Nonexpansive, if
- (iii)
- Firmly nonexpansive, if
- (iv)
- τ-inverse strongly monotone, if there exists such that
Definition 2.
Let be a set-valued mapping. Then
- (i)
- The mapping A is called monotone if ;
- (ii)
- ;
- (iii)
- The mapping A is called maximal monotone if is not properly contained in the graph of any other monotone operator.
Lemma 1
([35]). If is a sequence of non-negative real numbers such that
where is a sequence in and is a sequence of real numbers such that
- (i)
- (ii)
- or
Then
Lemma 2
([36]). In a Hilbert space X,
- (i)
- a mapping is τ-inverse strongly monotone if and only if is firmly nonexpansive for .
- (ii)
- If is monotone and is the resolvent of A, then and are firmly nonexpansive for .
- (iii)
- If is nonexpansive, then is demiclosed at zero and if A is firmly nonexpansive, then is firmly nonexpansive.
Lemma 3
([37]). Let be a bounded sequence in Hilbert space X. Assume there exists a subset and satisfying the properties
- (i)
- exists, ,
- (ii)
Then, there exists such that .
Lemma 4
([38]). Let be a sequence of real numbers that does not decrease at infinity in the sense that there exists a subsequence of such that for all . In addition, consider the sequence of integers defined by
Then, is a nondecreasing sequence verifying and ,
Lemma 5
([38]). Assume that is a non-negative sequence of real numbers satisfying
- (i)
- ;
- (ii)
- ;
- (iii)
- , where .
Then, is convergent and , where for any .
3. Inertial Iterative Methods
Suppose that and are real Hilbert spaces and are monotone mappings; , are the resolvents of and , respectively. We assume that , where denotes the solution set of and denotes the fixed point set of . First, we suggest the following iterative algorithm for .
Algorithm 1.
Choose such that and let be a positive sequence satisfying .
- Iterative Step: Given arbitrary , and , for , choose , where
Compute
where and are defined as
and
where , and
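Since the displayed formulas of Algorithm 1 are not reproduced above, the following sketch only illustrates the general shape of a self-adaptive inertial scheme of this kind, not the algorithm itself. All problem data are assumptions chosen for illustration: a one-dimensional split inclusion with A1(x) = x, A2(y) = 2y, and Bx = 3x (whose common solution is x* = 0), an inertial extrapolation step, a residual-based step size that requires no pre-estimated norm of B, and a resolvent step.

```python
# Sketch of a self-adaptive inertial scheme for a toy 1-D split inclusion.
# Assumed data: A1(x) = x, A2(y) = 2y, B(x) = 3x; the solution is x* = 0.

LAM = 1.0  # resolvent parameter λ

def J_A1(x):              # resolvent of A1(x) = x: solve y + λ·y = x
    return x / (1.0 + LAM)

def J_A2(y):              # resolvent of A2(y) = 2y: solve z + 2λ·z = y
    return y / (1.0 + 2.0 * LAM)

def B(x):                 # bounded linear operator on R, and its adjoint
    return 3.0 * x

def B_adj(y):
    return 3.0 * y

def inertial_svip(x0, x1, alpha=0.3, n_iters=50):
    x_prev, x = x0, x1
    for _ in range(n_iters):
        w = x + alpha * (x - x_prev)            # inertial extrapolation
        residual = B(w) - J_A2(B(w))            # (I - J^{A2})Bw
        grad = B_adj(residual)                  # B*(I - J^{A2})Bw
        if grad == 0.0:                         # w already solves the problem
            return w
        gamma = 0.5 * residual**2 / grad**2     # self-adaptive step size
        x_prev, x = x, J_A1(w - gamma * grad)   # forward step, then resolvent
    return x

print(abs(inertial_svip(2.0, 1.0)))  # essentially 0, the toy solution
```

The step size `gamma` is computed from the current residual alone, which is the point of self-adaptive schemes: no knowledge of the operator norm of B is needed.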
Algorithm 2.
Choose such that and let be a positive sequence satisfying .
- Iterative Step: Given arbitrary , and , for , choose , where
Compute
where and are defined as
and
where , , , , and .
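The formulas of Algorithm 2 are likewise not reproduced above, so the sketch below only illustrates the shape of a strongly convergent variant: the same assumed toy data as before (A1(x) = x, A2(y) = 2y, Bx = 3x), now combined with an assumed nonexpansive mapping S(x) = x/2 (whose fixed point set is {0}) through a Halpern-type anchor term with vanishing weights. This is a hedged illustration, not the paper's method.

```python
# Sketch of an inertial scheme with a Halpern-type anchor term for a toy
# common-solution problem. Assumed data: A1(x) = x, A2(y) = 2y, B(x) = 3x,
# nonexpansive S(x) = x/2; the common solution is x* = 0.

LAM = 1.0  # resolvent parameter λ

def J_A1(x):
    return x / (1.0 + LAM)

def J_A2(y):
    return y / (1.0 + 2.0 * LAM)

def B(x):
    return 3.0 * x

def B_adj(y):
    return 3.0 * y

def S(x):                  # nonexpansive mapping with Fix(S) = {0}
    return 0.5 * x

def inertial_halpern(x0, x1, alpha=0.3, anchor=0.0, n_iters=60):
    x_prev, x = x0, x1
    for n in range(n_iters):
        w = x + alpha * (x - x_prev)            # inertial extrapolation
        residual = B(w) - J_A2(B(w))
        grad = B_adj(residual)
        if grad == 0.0:
            t = w
        else:
            gamma = 0.5 * residual**2 / grad**2  # self-adaptive step size
            t = J_A1(w - gamma * grad)
        beta = 1.0 / (n + 2)                     # Halpern weights, beta_n -> 0
        x_prev, x = x, beta * anchor + (1.0 - beta) * S(t)
    return x

print(abs(inertial_halpern(2.0, 1.0)))  # essentially 0, the common solution
```

The vanishing anchor weights `beta` are what typically upgrade weak convergence to strong convergence in Halpern-type schemes.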
Remark 1.
It is not difficult to show that if or for some , then . In this case, the iteration process terminates after a finite number of iterations. We therefore suppose that the proposed algorithms generate infinite sequences that do not terminate after finitely many steps.
Remark 2.
From the selection of , such that , we can conclude that
Remark 3.
By using the definitions of resolvent of monotone mappings and , we can easily obtain that if and only if and .
4. Main Results
Theorem 1.
Let , be real Hilbert spaces; , be maximal monotone mappings and be a bounded linear operator. If and such that
Then, the sequence generated by Algorithm 1 converges weakly to a point .
Proof.
Let ; then . Since the resolvent operator is firmly nonexpansive, so is for . Then, by Algorithm 1 and (10), we have
Now, using (12), we estimate that
From (18) and (19), we obtain
Since is firmly nonexpansive, using and (10), we estimate
By (13), it turns out that
It follows from (21) and (22) that
Combining (20) and (23), we obtain
By using the Cauchy–Schwartz inequality, we observe that
Since , we get
From (25) and (24), we obtain
Since , that is, , we obtain
Applying Lemma 5, we deduce that the limit exists, which guarantees the boundedness of the sequence and hence of and . From (26), it follows that and
which implies
hence
which implies that
It remains to show that . Let and let be a subsequence of such that as . Applying (27) and Remark 2 in Algorithm 1, it follows that
and
Hence, there exist subsequences and of and , respectively, which converge to . From (27), we obtain
which implies that and . □
Theorem 2.
Let , be real Hilbert spaces; and let , be set-valued maximal monotone mappings. If , are real sequences in , and
Then, the sequence obtained from Algorithm 2 converges strongly to .
Proof.
Let . From Algorithm 2, we have
and
which shows that is bounded and hence , , and are bounded. From (20) and (23) of the proof of Theorem 1, we have
Now,
Combining (30)–(32), we obtain
Two possible cases occur.
Case I. Suppose the sequence is nonincreasing; then, there exists such that for each . Then, exists and hence . Since , , and , it follows from (33) that
From (34) and (35), we obtain
From Algorithm 2, using Remark 2, we obtain
From Algorithm 2, using (34) and (35), we obtain as
By using (38)–(40), we obtain
Thus, since , and using (36), (40) and (41), we obtain
and
Hence, there exists a subsequence of which converges weakly to l. By using Lemma 3, we conclude that . By Theorem 1, we have that . So, we obtain . Setting and rewriting , we have
From (45) and Algorithm 2, we obtain
Since and , then using (35), we obtain
Thus, by Lemma 1 in (45), we obtain .
Case II. If the sequence is increasing, we can construct a subsequence of such that for all . In this case, we define a subsequence of positive integers
then and and , it follows from (33) that
that is
Since and and , then for subsequences , and , we obtain
Similarly, we can show that as and . It remains to show that .
By using and the boundedness of , we have
Since , we obtain
due to , and using and Lemma 1 in (46), we obtain that , and
that is, . Hence, the theorem is proved. □
For , we obtain the following corollary of Theorem 2.
Corollary 1.
Let , , , B, and be identical as in Theorem 2. Let , be sequences in such that
hold. Then, the sequence obtained by Algorithm 2 (with ), converges strongly to .
For , we obtain the following corollary of Theorem 2.
Corollary 2.
For and , we obtain the following corollary of Theorem 2.
5. Numerical Experiments
Suppose . Let us consider the monotone mappings and defined as . The nonexpansive mapping is defined as , and the bounded linear operator is defined as . It is not difficult to show that and are monotone mappings, that B is a nonexpansive mapping, and that . The resolvents of and with parameter are
We choose , , and satisfying condition (28) in Algorithm 2. We fixed a maximum of 50 iterations as the stopping criterion. The parameter is randomly generated in , where is calculated using (14). The behavior of the sequences and is plotted in Figure 1 for the three distinct choices of parameters listed below:

Figure 1.
Numerical behavior of and choosing three cases of parameters.
- Case (I): , , , , .
- Case (II): , , , , .
- Case (III): , , , , .
Observations:
- In Figure 1a–d, we observe that the behavior of and is uniform irrespective of the selection of parameters.
- From Figure 1e–f, we notice that the sequence obtained from Algorithm 2 converges to the same limit with a suitable selection of parameters.
- It is worth mentioning that an estimate of is not required to implement the algorithm; such an estimate is generally not easy to obtain.
6. Conclusions
We have suggested and analyzed inertial methods for estimating a solution of and a common solution of and . We proved the weak and strong convergence of the algorithms under suitable assumptions, in such a way that the step size does not require a pre-estimated norm . Finally, we presented a numerical example exhibiting the behavior of the proposed algorithms for different choices of parameters.
Author Contributions
Conceptualization, M.D.; methodology, D.F.; validation, M.A.; formal analysis, D.F.; investigation, L.S.M.A.; writing, original draft preparation, M.D.; funding acquisition, D.F.; writing, review and editing, M.A. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
The authors thank the referees for their invaluable suggestions and comments, which brought the manuscript to its current form. The researchers wish to extend their sincere gratitude to the Deanship of Scientific Research at the Islamic University of Madinah for the support provided through the Post-Publishing Program 2.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algor. 1994, 8, 221–239.
- Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
- Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084.
- Censor, Y.; Motova, X.A.; Segal, A. Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327, 1244–1256.
- Masad, E.; Reich, S. A note on the multiple-set split convex feasibility problem in Hilbert space. J. Nonlinear Convex Anal. 2007, 8, 367–371.
- Censor, Y.; Gibali, A.; Reich, S. The split variational inequality problem, The Technion—Israel Institute of Technology, Haifa. arXiv 2010, arXiv:1009.3780.
- Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algor. 2012, 59, 301–323.
- Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283.
- Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775.
- Kazmi, K.R.; Rizvi, S.H. An iterative method for split variational inclusion problem and fixed point problem for a nonexpansive mapping. Optim. Lett. 2014, 8, 1113–1124.
- Dilshad, M.; Aljohani, A.F.; Akram, M. Iterative scheme for split variational inclusion and a fixed-point problem of a finite collection of nonexpansive mappings. J. Funct. Spaces 2020, 2020, 3567648.
- Sitthithakerngkiet, K.; Deepho, J.; Kumam, P. A hybrid viscosity algorithm via modify the hybrid steepest descent method for solving the split variational inclusion in image reconstruction and fixed point problems. Appl. Math. Comput. 2015, 250, 986–1001.
- Akram, M.; Dilshad, M.; Rajpoot, A.K.; Babu, F.; Ahmad, R.; Yao, J.-C. Modified iterative schemes for a fixed point problem and a split variational inclusion problem. Mathematics 2022, 10, 2098.
- Alansari, M.; Farid, M.; Ali, R. An iterative scheme for split monotone variational inclusion, variational inequality and fixed point problems. Adv. Differ. Equ. 2020, 485, 1–21.
- Abubakar, J.; Kumam, P.; Deepho, J. Multistep hybrid viscosity method for split monotone variational inclusion and fixed point problems in Hilbert spaces. AIMS Math. 2020, 5, 5969–5992.
- Alansari, M.; Dilshad, M.; Akram, M. Remark on the Yosida approximation iterative technique for split monotone Yosida variational inclusions. Comput. Appl. Math. 2020, 39, 203.
- Dilshad, M.; Siddiqi, A.H.; Ahmad, R.; Khan, F.A. An Iterative Algorithm for a Common Solution of a Split Variational Inclusion Problem and Fixed Point Problem for Non-expansive Semigroup Mappings. In Industrial Mathematics and Complex Systems; Manchanda, P., Lozi, R., Siddiqi, A., Eds.; Industrial and Applied Mathematics; Springer: Singapore, 2017.
- Taiwo, A.; Alakoya, T.O.; Mewomo, O.T. Halpern-type iterative process for solving split common fixed point and monotone variational inclusion problem between Banach spaces. Numer. Algor. 2021, 86, 1359–1389.
- Zhu, L.-J.; Yao, Y. Algorithms for approximating solutions of split variational inclusion and fixed-point problems. Mathematics 2023, 11, 641.
- López, G.; Martín-Márquez, V.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004.
- Dilshad, M.; Akram, M.; Ahmad, I. Algorithms for split common null point problem without pre-existing estimation of operator norm. J. Math. Inequal. 2020, 14, 1151–1163.
- Gibali, A.; Mai, D.T.; Nguyen, T.V. A new relaxed CQ algorithm for solving split feasibility problems in Hilbert spaces and its applications. J. Ind. Manag. Optim. 2018, 2018, 1–25.
- Moudafi, A.; Gibali, A. l1–l2 regularization of split feasibility problems. Numer. Algorithms 2017, 1–19.
- Moudafi, A.; Thakur, B.S. Solving proximal split feasibility problems without prior knowledge of matrix norms. Optim. Lett. 2014, 8, 2099–2110.
- Shehu, Y.; Iyiola, O.S. Convergence analysis for the proximal split feasibility problem using an inertial extrapolation term method. J. Fixed Point Theory Appl. 2017, 19, 2483–2510.
- Tang, Y. New algorithms for split common null point problems. Optimization 2020, 1141–1160.
- Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11.
- Tang, Y.; Zhang, Y.; Gibali, A. New self-adaptive inertial-like proximal point methods for the split common null point problem. Symmetry 2021, 13, 2316.
- Alansari, M.; Ali, R.; Farid, M. Strong convergence of an inertial iterative algorithm for variational inequality problem, generalized equilibrium problem, and fixed point problem in a Banach space. J. Inequal. Appl. 2020, 42, 1–22.
- Abbas, H.A.; Aremu, K.O.; Jolaoso, L.O.; Mewomo, O.T. An inertial forward-backward splitting method for approximating solutions of certain optimization problems. J. Nonlinear Funct. Anal. 2020, 2020, 6.
- Dilshad, M.; Akram, M.; Nasiruzzaman, M.; Filali, D.; Khidir, A.A. Adaptive inertial Yosida approximation iterative algorithms for split variational inclusion and fixed point problems. AIMS Math. 2023, 8, 12922–12942.
- Liu, L.; Cho, S.Y.; Yao, J.C. Convergence analysis of an inertial Tseng's extragradient algorithm for solving pseudomonotone variational inequalities and applications. J. Nonlinear Var. Anal. 2021, 5, 627–644.
- Tang, Y.; Lin, H.; Gibali, A.; Cho, Y.-J. Convergence analysis and applications of the inertial algorithm solving inclusion problems. Appl. Numer. Math. 2022, 175, 1–17.
- Tang, Y.; Gibali, A. New self-adaptive step size algorithms for solving split variational inclusion problems and its applications. Numer. Algor. 2019, 83, 305–331.
- Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: Berlin/Heidelberg, Germany, 2011.
- Opial, Z. Weak convergence of the sequence of successive approximations of nonexpansive mappings. Bull. Amer. Math. Soc. 1967, 73, 591–597.
- Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.