Abstract
This work focuses on split feasibility problems in Hilbert spaces. To accelerate the convergence rate of gradient-CQ algorithms, we introduce an inertial term. Additionally, non-monotone stepsizes are employed to adjust the relaxation parameter applied to the original stepsizes, ensuring that these original stepsizes maintain a positive lower bound and thereby improving the efficiency of the algorithms. Moreover, the weak and strong convergence of the proposed algorithms are established through proofs that exhibit a similar symmetry structure and do not require the assumption of Lipschitz continuity for the gradient mappings. Finally, the LASSO problem is presented to illustrate and compare the performance of the algorithms.
1. Introduction
This work focuses on the split feasibility problem (), as follows
where and are both real Hilbert spaces, the closed convex sets and are nonempty, and is a bounded linear operator with adjoint . If a point exists, then the solution set of is given by
Censor and Elfving [] were the first to study the original formulation of for modeling inverse problems. This model has since been widely applied to practical problems such as signal processing [] and medical image reconstruction []. Because of its wide range of applications, numerous numerical algorithms have been proposed to solve the ; see [,,,,,,,,,,,,,,,,] and the references therein. Among these, a classical method for solving the is Byrne’s CQ algorithm [,], which is defined by the following scheme:
where , while and are projections onto C and Q, respectively. In numerical experiments, the sets C and Q are defined as follows
where and are two proper convex functions. Since the projections and do not have closed-form expressions, algorithm (1) is not applicable. To address this, Yang [] proposed the following weakly convergent algorithm:
where the stepsize is and
with and
with , and the function with gradient is defined in Lemma 6. Evidently, and , and the projections onto the half-spaces and have closed-form expressions. Therefore, algorithm (3) performs well. However, the fixed step size is quite conservative, which negatively impacts the numerical performance of Byrne’s CQ algorithm. To overcome this limitation, many self-updated step size algorithms have been proposed to solve the ; see, e.g., [,,,,,,,,,,,,,,]. Among them, Kesornprom et al. [] proposed the following gradient-CQ algorithm for solving the
where and , , . They proved the weak convergence of algorithm (4) under the conditions and . To obtain strong convergence, they presented the following gradient-CQ algorithm based on Halpern’s iteration process [,,]. Specifically, a fixed vector is provided and the initial guess is chosen arbitrarily. The sequence is computed as follows:
where and are given in (4). The resulting sequence converges strongly to under the conditions , , , .
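For concreteness, Byrne's CQ iteration can be sketched numerically. The sketch below assumes the textbook form x_{n+1} = P_C(x_n − γ A^T(I − P_Q)(A x_n)) with γ ∈ (0, 2/‖A‖²), and uses simple box sets for C and Q (an assumption made for illustration only) so that both projections reduce to componentwise clipping:

```python
import numpy as np

def cq_iterate(x0, A, proj_C, proj_Q, gamma, n_iter=500):
    """Classical CQ iteration: x_{n+1} = P_C(x_n - gamma * A^T (I - P_Q)(A x_n))."""
    x = x0.astype(float)
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x - gamma * (A.T @ (Ax - proj_Q(Ax))))
    return x

# Toy instance with assumed box sets C = [0, 1]^2 and Q = [0, 1]^2.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
proj_C = lambda z: np.clip(z, 0.0, 1.0)
proj_Q = lambda z: np.clip(z, 0.0, 1.0)
gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # gamma in (0, 2/||A||^2)
x = cq_iterate(np.array([5.0, -3.0]), A, proj_C, proj_Q, gamma)
```

On this toy instance, the iterate reaches a point with x ∈ C and Ax ∈ Q, i.e., a solution of the split feasibility problem.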
To accelerate the convergence rate of relaxed CQ algorithms, the inertial extrapolation technique [] is adopted, and inertial relaxed CQ algorithms have been introduced; see, e.g., [,,,,,]. Recently, Ma and Liu [] proved the strong convergence of the relaxed CQ approach [] using the following scheme.
where is the inertial term, and are found through Lemma 3.
Note that under the assumptions of Lipschitz continuity for the gradient operator and firm nonexpansiveness of , the described algorithms exhibit certain features. In algorithms (4) and (5), the relaxation parameters and are applied directly to the step sizes. As a result, the step sizes lack positive lower bounds, possibly leading to conservative choices and, consequently, slow convergence, as observed in []. The convergence of these algorithms is established under the aforementioned conditions. However, the study of gradient-CQ algorithms with inertial effects, such as in (6), has not yet been carried out. This raises a natural question:
Question: Can we develop new modifications of algorithms (4) and (5) that not only improve numerical performance but also ensure convergence by relaxing the assumptions of Lipschitz continuity for the gradient operator and firm nonexpansiveness for the mapping ?
In response to the proposed question, we answer in the affirmative. The main contributions of this work are as follows:
- We adopt double non-Lipschitz step sizes [], which remove the restrictions imposed by the condition and the requirement . The structures of our proposed step sizes exhibit a similar symmetry, and they have positive lower bounds and grow with the iteration count, leading to accelerated convergence;
- We propose inertial gradient-CQ algorithms with double non-Lipschitz step sizes for solving the , and we establish their weak and strong convergence without requiring Lipschitz continuity of or firm nonexpansiveness of , respectively;
- We apply our methods to the LASSO problem to demonstrate and validate the theoretical results.
2. Preliminaries
The symbol ⇀ stands for weak convergence and → represents strong convergence. Let and be real Hilbert spaces. For any sequence , denotes the weak limit set of ; namely, .
Definition 1
([]). Let be a real Hilbert space and let be a convex function. An element is called the subgradient of V at if
The collection of all subgradients of V at is called the subdifferential of V at this point, which is denoted by , i.e.,
Lemma 1.
Let be a real Hilbert space. Then for all , the following hold:
(i)
(ii)
(iii)
Let C be a nonempty closed and convex subset of . Then the orthogonal projection of onto C is defined by
By Lemma 2 (iii) below, this is a firmly nonexpansive mapping.
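To illustrate, the sketch below uses an assumed box set (not one from the paper), for which the metric projection is componentwise clipping, and numerically checks the nonexpansiveness ‖P_C x − P_C y‖ ≤ ‖x − y‖ implied by firm nonexpansiveness:

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    # Metric projection onto the box [lo, hi]^n: componentwise clipping.
    return np.clip(x, lo, hi)

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)
px, py = project_box(x), project_box(y)
# Nonexpansiveness: the projected points are no farther apart than the originals.
gap_after, gap_before = np.linalg.norm(px - py), np.linalg.norm(x - y)
```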
Lemma 2
([]). Let C be a nonempty closed convex subset of a real Hilbert space and let be the metric projection from onto C. Then the following statements hold:
(i)
(ii)
(iii)
(iv)
A function is called weakly lower semi-continuous (w-lsc) at if yields
Lemma 3
([]). Let be a sequence of nonnegative numbers fulfilling the following:
where and are sequences of nonnegative numbers, such that and . Then, exists.
Lemma 4
([]). Let be a real Hilbert space and let be a sequence in , such that there exists a nonempty closed and convex subset C of fulfilling the following conditions:
(i) for all , exists;
(ii) any weak cluster point of belongs to C.
- Then, there exists , such that converges weakly to
Lemma 5
((Lemma 2 [])). Let and be sequences such that
(i)
(ii)
(iii) where
Then, is a convergent sequence and , where (for any ).
Lemma 6
([]). Let C and Q be closed and convex subsets of real Hilbert spaces and , respectively, and be a bounded linear operator. Let be a function defined by
The following properties hold:
(i) V is convex and differentiable;
(ii) V is lsc on ;
(iii) , ;
(iv) is Lipschitz continuous.
Lemma 7
([]). Suppose that is a sequence of nonnegative real numbers such that
where is a sequence in , is a sequence of nonnegative real numbers, and and are two sequences in , such that
(i)
(ii) ;
(iii) yields for any subsequence of
Then, .
Lemma 8
([]). Consider the with the function V as in Lemma 6, and let and . Then the following statements are equivalent:
(i) The point solves the ;
(ii) The point solves the fixed-point equation:
(iii) The point solves the variational inequality problem with respect to the gradient of V, namely, find a point , such that
3. Weak Convergence
In this section, we introduce an accelerated gradient-CQ algorithm for solving the SFP in real Hilbert spaces, and analyze its weak convergence under conditions weaker than those typically assumed. Before presenting our algorithm, we list several preliminary conditions below.
(A1) The solution set of is nonempty, i.e.,
(A2) Sets C and Q are defined in (2), sets and are defined in (3), and the function with its gradient is defined in Lemma 6.
Now, our proposed algorithm is as follows.
Remark 1.
As shown in Algorithm 1, if , then it can be seen from (7) that , that is, . From Lemma 8 (i–ii), we also show . Combining sets and , we further see that and . Thus, it can be concluded from (2) that and . Thereby, is a solution of the .
Algorithm 1: A weakly convergent algorithm for SFP.
Step 0. Take , , , and . The sequence satisfies .
Step n. Compute
where the stepsizes and are updated via
and
Remark 2
([]). In Lemma 9, the condition is easy to compute. Specifically, given two consecutive iterates and , one can compute using (7), where the inertial parameter is chosen to satisfy , with
where fulfills
Lemma 9.
Let the sequence be generated by Algorithm 1, and suppose , . Then, is a bounded sequence.
Now, we illustrate that our proposed step sizes are well-defined without assuming Lipschitz continuity of .
Lemma 10.
Let the sequences and be generated by Algorithm 1. Then, we obtain , , and , where and are initial step sizes.
Proof.
In the case , we obtain
which yields that
This further shows that
where . Through induction, we see that the sequence has a lower bound . Using (8), we find that
Using Lemma 3, it can be seen that exists; we denote it by . Since its lower bound is positive, . Similarly, we obtain and . □
We analyze the following theorem without requiring firm-nonexpansiveness of .
Theorem 1.
Assume that Conditions (A1)–(A2) hold and , . Then, the sequence generated by Algorithm 1 converges weakly to a point in the solution set Γ of .
Proof.
Let . Since and , we can see that and . Using Lemma 1 (iii) and Lemma 2 (ii), we show that
Similarly, we arrive at
From (7), (8), and (10), we have
Combining (7), (8), (11), (12), and Lemma 2 (iv), we obtain
From the definition of , it follows that
Thanks to Lemma 1 (iii), one has
Substituting this into (14), we arrive at
Using (13) and (15), we can verify that
Since
where . Then, , , and . So, ,
Now, applying Lemma 5 with
and
Since , and using Lemma 5, we can see that exists and
where As a result,
Further, it follows from (17) that ,
which implies that
which, along with and , deduces that
Moreover, from (17), we have
Equation (18) shows that
Additionally, using (7), (18), and (19), we arrive at
which means that
and
which yields that
where the second inequality in (22) comes from the basic inequality . In view of (21) and (22), we have
Using (7) and (9), we see that ,
This shows that
Combining (23) and (24), we observe that
Now, we need to verify that . Let be an arbitrary element. Since is bounded by Lemma 9, there exists a subsequence of , such that . It follows from (24) that . Noting that , one has
where . From the boundedness of , we conclude that is bounded. From (20) and (25), we obtain
Using w-lsc of q again, it implies that
This concludes that
Below, we establish that . Using the definitions of and , we can verify that
where From the boundedness of , is bounded. From (23) and (27), we see that
From the w-lsc of c, and (28), it follows that
Hence, . So, we know that . Since the choice of is arbitrary, we arrive at . The results then follow from Lemma 4. □
4. Strong Convergence
This section presents a strongly convergent algorithm that integrates inertial effects, the relaxed gradient-CQ method [], and the Halpern-type method [], combined with newly designed step sizes. The following assumptions are given to prove the convergence of the method.
(A3) Let be a positive sequence in , such that , that is, , where and fulfill
The proposed approach has the following form.
Remark 3.
In Algorithm 2, the inertial parameter is chosen as
Algorithm 2: A strongly convergent algorithm for SFP.
Step 0. Take , , , and ; u is a fixed vector. The sequence satisfies .
Step n. Compute
where the step sizes and are updated via
and
In the following, parts of the strong convergence proof for Algorithm 2 parallel steps of the weak convergence proof of Algorithm 1 and are thus omitted.
Theorem 2.
Assume that Conditions (A1)–(A3) hold, then the sequence generated by Algorithm 2 converges strongly to a point in the solution set Γ of .
Proof. We set and . Using a similar argument to that in Theorem 1, we have that , ,
and
From the definition of , it follows that
By (29), we have for all , which, along with , implies that
So, there is a constant , such that
Combining (32)–(36), we derive that ,
Therefore, ,
This means that the sequence is bounded. Hence, is also bounded.
For , one has
Now, we compute the following estimation:
Let . Substituting (39) into (38), we find that ,
After arrangement, ,
In order to apply Lemma 7, let
So, (40) is reduced to the following inequalities
Let be a sequence and assume that
This means
By , and , we deduce
which implies that
Similarly to Theorem 1, we can prove that . Hence, there exists a subsequence of , such that .
From Lemma 2 (i), we obtain
5. Numerical Experiments
In the first numerical example, we compare our Algorithm 1 with some weakly convergent algorithms, including Gibali et al.’s Algorithm 3.1 (shortly, GAlg.3.1) [] and Sahu et al.’s Algorithm 3.1 (SAlg.3.1) []. In the second example, we compare Algorithm 2 with Ma and Liu’s Algorithm 3.1 (shortly, MLAlg.3.1) [] and Ma et al.’s Algorithm 3 (shortly, Alg.3) []. These experiments were conducted in MATLAB R2017a on a desktop PC with an Intel(R) Core(TM) i7-6700 CPU @ 3.40 GHz and 8.00 GB of RAM.
Example 1.
LASSO problem [,,,]
Here, the following LASSO problem is recast as the and used to recover a sparse signal.
where and
Now, we need to find a sparse solution of the . Here, the matrix A is generated from a standard normal distribution with zero mean and unit variance. The true sparse signal is generated from a uniform distribution on an interval of , with k randomly placed nonzero entries, while the remainder are kept at zero. The sample data are .
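The random instance described above can be generated as follows. The interval [−1, 1] for the nonzero entries and the helper name `make_instance` are assumptions of this sketch, since the exact interval is elided in the text:

```python
import numpy as np

def make_instance(m, n, k, seed=None):
    """Generate a random LASSO instance: A with i.i.d. standard-normal
    entries, and a k-sparse true signal whose nonzeros are drawn
    uniformly from [-1, 1] (an assumed interval)."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(m, n))          # zero mean, unit variance
    x = np.zeros(n)
    idx = rng.choice(n, size=k, replace=False)  # k random spike positions
    x[idx] = rng.uniform(-1.0, 1.0, size=k)
    b = A @ x                            # noiseless sample data
    return A, x, b

A, x_true, b = make_instance(20, 50, 5, seed=0)
```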
Certain assumptions are imposed on the matrix A under which the solution to (49) equals the norm solution of the underdetermined linear system. Further, is reformulated with closed convex sets and . Since the projection onto C has no closed-form solution, the subgradient projection is used. We further introduce a convex function and
where . Meanwhile, the orthogonal projection of a point onto is the following
The subdifferential at is
To ensure all algorithms run efficiently, they were initialized as follows
We used this setup to test how the algorithms behave under different , with the same number of iterations (3000). Meanwhile, is reported in Table 1.
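The subgradient projection onto C described above can be sketched as follows. Taking ξ = sign(x) as the subgradient of the ℓ1 norm is one standard choice (an assumption of this sketch; any element of the subdifferential works), and the step projects an infeasible point onto the half-space supporting the sublevel set:

```python
import numpy as np

def subgrad_project_l1(x, t):
    """One subgradient-projection step for C = {z : ||z||_1 <= t}.

    If c(x) = ||x||_1 - t > 0, project x onto the half-space
    {z : c(x) + <xi, z - x> <= 0}, where xi = sign(x) is a
    subgradient of the l1 norm at x (assumed choice)."""
    c = np.linalg.norm(x, 1) - t
    if c <= 0:
        return x.copy()          # already feasible
    xi = np.sign(x)
    return x - (c / np.dot(xi, xi)) * xi
```

For example, the infeasible point (2, 0) with t = 1 is mapped to (1, 0), which lies on the boundary of the ℓ1 ball.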

Table 1.
Results for Example 1.
In the following, the problem is to recover the signal ; thus, we take , and the matrix A is randomly generated with independent samples from a standard Gaussian distribution. In detail, the original signal contains 30 randomly placed spikes. All algorithms start their iterations with , and the following mean square error (MSE) is defined to measure the accuracy of the recovery:
For SAlg.3.1, we take and ; for GAlg.3.1, we choose , and ; and for Alg.1, we set , , and .
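The MSE above is commonly computed as the averaged squared recovery error; the per-entry mean below is an assumption of this sketch, since the exact normalization is elided in the text:

```python
import numpy as np

def mse(x_rec, x_true):
    # Mean square error between recovered and true signals: ||x_rec - x_true||^2 / N.
    return float(np.mean((x_rec - x_true) ** 2))
```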
Remark 4.
From Table 1, we see that our proposed algorithm (Alg.1) takes less CPU time to obtain a smaller compared with algorithms SAlg.3.1 and GAlg.3.1 in different situations.
Figure 1 shows the recovery results of all algorithms, including the original signal, the MSE, and the iteration time.

Figure 1.
Comparison of signal processing.
Remark 5.
From Figure 1, we see that our proposed Algorithm 1 achieves a smaller MSE in less CPU time than SAlg.3.1 and GAlg.3.1 over the same number of iterations (3000).
Example 2
([,]). Let with norm and inner product . Let and . Set . We observe that , and so . For MLAlg. 3.1, we take , , , , , , and . For Alg. 2, we choose , , , , , and . For all of the algorithms, the stopping criterion is . Now, the starting points can be one of the following.
Case 1: ,
Case 2: .
The obtained results are recorded in Table 2.

Table 2.
Results for Example 2.
In this experiment, the projections onto the sets C and Q have the following closed-form expressions:
Remark 6.
We see from Table 2 that our proposed Algorithm 2 fulfilled the stopping criterion in less time and with fewer iterations than MLAlg. 3.1.
Example 3.
The following split feasibility problem is proposed in [,,,]:
where is a matrix. The set where
and where
Notice that C is the set above the function and Q is the set below the function . Every element of A is randomly selected in , fulfilling
For all of the algorithms, we adopted the same randomly generated starting points ; every element of these two starting points lies in .
For Alg.3, we used , , , and which was suggested in [].
For Alg.2, we suggest , , and .
Remark 7.
From Table 3, we found that Alg.2 required fewer iterations than Alg.3; however, its running time was much longer than that of Alg.3.

Table 3.
Results for Example 3.
6. Conclusions
Our work establishes the convergence of the proposed algorithms under conditions weaker than the usual assumptions of Lipschitz continuity for and firm nonexpansiveness for , which are typically associated with the SFP. However, the addition of double inertia to Algorithm 1 makes it challenging to establish a new weak convergence result. Therefore, we leave the investigation of how double inertia affects the effectiveness and convergence analysis of the proposed algorithms for future work.
Author Contributions
Writing—original draft, Y.Z.; Writing—review & editing, X.M. All authors have read and agreed to the published version of the manuscript.
Funding
The first author was supported by the Natural Science Foundation of Hubei Province (No. 2025AFC080), the Scientific Research Project of Hubei Provincial Department of Education (No. 23Q166), the Scientific Research Foundation of Hubei University of Education for Talent Introduction (No. ESRC20220008), and the Foundation for Innovative Research Team of Hubei Provincial Department of Education (No. T2022034). The second author was supported by the National Natural Science Foundation of China (No. 12172266) and the Fundamental Research Program of Shanxi Province, China (No. 202303021222208).
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest
There are no conflicts of interest in this work.
References
- Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projection in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
- Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef]
- Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
- Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084. [Google Scholar] [CrossRef]
- Dang, Y.; Sun, J.; Xu, H. Inertial accelerated algorithm for solving a split feasibility problem. J. Ind. Manag. Optim. 2017, 13, 1383–1394. [Google Scholar] [CrossRef]
- Gibali, A.; Mai, D.T.; Vinh, N.T. A new relaxed CQ algorithm for solving split feasibility problems in Hilbert spaces and its applications. J. Ind. Manag. Optim. 2019, 15, 963–984. [Google Scholar] [CrossRef]
- López, G.; Martin, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004. [Google Scholar]
- Shehu, Y.; Gibali, A. New inertial relaxed method for solving split feasibilities. Optim. Lett. 2020, 15, 2109–2126. [Google Scholar] [CrossRef]
- Sahu, D.R.; Cho, Y.J.; Dong, Q.L.; Kashyap, M.R.; Li, X.H. Inertial relaxed CQ algorithms for solving a split feasibility problem in Hilbert spaces. Numer. Algorithms 2020, 87, 1075–1095. [Google Scholar] [CrossRef]
- Suantai, S.; Pholasa, N.; Cholamjiak, P. The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 2018, 23, 1595–1615. [Google Scholar] [CrossRef]
- Ma, X.; Liu, H. An inertial Halpern-type CQ algorithm for solving split feasibility problems in Hilbert spaces. J. Appl. Math. Comput. 2021, 68, 1699–1717. [Google Scholar] [CrossRef]
- Reich, S.; Tuyen, T.M.; Ha, M.T.N. An optimization approach to solving the split feasibility problem in Hilbert spaces. J. Glob. Optim. 2021, 79, 837–852. [Google Scholar] [CrossRef]
- Dong, Q.L.; He, S.; Rassias, M.T. General splitting methods with linearization for the split feasibility problem. J. Glob. Optim. 2021, 79, 813–836. [Google Scholar] [CrossRef]
- Yen, L.H.; Huyen, N.T.T.; Muu, L.D. A subgradient algorithm for a class of nonlinear split feasibility problems: Application to jointly constrained Nash equilibrium models. J. Glob. Optim. 2019, 73, 849–868. [Google Scholar] [CrossRef]
- Chen, C.; Pong, T.K.; Tan, L.; Zeng, L. A difference-of-convex approach for split feasibility with applications to matrix factorizations and outlier detection. J. Glob. Optim. 2020, 78, 107–136. [Google Scholar] [CrossRef]
- Wang, J.; Hu, Y.; Yu, C.K.W.; Zhuang, X. A Family of Projection Gradient Methods for Solving the Multiple-Sets Split Feasibility Problem. J. Optim. Theory Appl. 2019, 183, 520–534. [Google Scholar] [CrossRef]
- Qu, B.; Wang, C.; Xiu, N. Analysis on Newton projection method for the split feasibility problem. Comput. Optim. Appl. 2017, 67, 175–199. [Google Scholar] [CrossRef]
- Qin, X.; Wang, L. A fixed point method for solving a split feasibility problem in Hilbert spaces. RACSAM 2019, 113, 315–325. [Google Scholar] [CrossRef]
- Yang, Q. On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302, 166–179. [Google Scholar] [CrossRef]
- Dong, Q.L.; Tang, Y.C.; Cho, Y.J.; Rassias, T.M. “Optimal” choice of the step length of the projection and contraction methods for solving the split feasibility problem. J. Glob. Optim. 2018, 71, 341–360. [Google Scholar] [CrossRef]
- Gibali, A.; Liu, L.W.; Tang, Y.C. Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim. Lett. 2018, 12, 817–830. [Google Scholar] [CrossRef]
- Kesornprom, S.; Pholasa, N.; Cholamjiak, P. On the convergence analysis of the gradient-CQ algorithms for the split feasibility problem. Numer. Algorithms 2020, 84, 997–1017. [Google Scholar] [CrossRef]
- Qu, B.; Xiu, N. A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21, 1655–1665. [Google Scholar] [CrossRef]
- Xu, J.; Chi, E.C.; Yang, M.; Lange, K. A majorization–minimization algorithm for split feasibility problems. Comput. Optim. Appl. 2018, 71, 795–828. [Google Scholar] [CrossRef]
- Halpern, B. Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 1967, 73, 957–961. [Google Scholar] [CrossRef]
- Polyak, B.T. Some methods of speeding up the convergence of iteration methods. Ussr Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
- Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 2012, 75, 742–750. [Google Scholar] [CrossRef]
- Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11. [Google Scholar] [CrossRef]
- Maingé, P.E.; Gobinddass, M.L. Convergence of one-step projected gradient methods for variational inequalities. J. Optim. Theory Appl. 2016, 171, 146–168. [Google Scholar]
- Alvarez, F. Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 2004, 14, 773–782. [Google Scholar] [CrossRef]
- Nesterov, Y. A method for solving the convex programming problem with convergence rate O(1/k2). Dokl. Akad. Nauk. SSSR 1983, 269, 543–547. [Google Scholar]
- Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011. [Google Scholar]
- Agarwal, R.P.; Regan, D.O.; Sahu, D.R. Fixed Point Theory for Lipschitzian-Type Mappings with Applications, Topological Fixed Point Theory and Its Applications; Springer: New York, NY, USA, 2009. [Google Scholar]
- Osilike, M.O.; Aniagbosor, S.C. Weak and strong convergence theorems for fixed points of asymptotically nonexpansive mappings. Math. Comput. Model. 2000, 32, 1181–1191. [Google Scholar] [CrossRef]
- Vinh, N.; Cholamjiak, P.; Suantai, S. A new CQ algorithm for solving split feasibility problems in Hilbert spaces. Bull. Malays. Math. Sci. Soc. 2018, 42, 2517–2534. [Google Scholar] [CrossRef]
- Bauschke, H.H.; Combettes, P.L. A weak-to-strong convergence principle for fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 2001, 26, 248–264. [Google Scholar] [CrossRef]
- Xu, H.K. Iterative methods for solving the split feasibility in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018. [Google Scholar] [CrossRef]
- Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2018, 219, 223–236. [Google Scholar] [CrossRef]
- Ma, X.; Jia, Z.; Li, Q. On inertial non-lipschitz stepsize algorithms for split feasibility problems. Comp. Appl. Math. 2024, 43, 431. [Google Scholar] [CrossRef]
- Tibshirani, R. Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288. [Google Scholar] [CrossRef]
- Tan, B.; Qin, X.; Wang, X. Alternated inertial algorithms for split feasibility problems. Numer. Algorithms 2024, 95, 773–812. [Google Scholar] [CrossRef]
- Okeke, C.C.; Okorie, K.O.; Nwakpa, C.E.; Mewomo, O.T. Two-step inertial accelerated algorithms for solving split feasibility problem with multiple output sets. Commun. Nonlinear Sci. Numer. Simul. 2025, 141, 108461. [Google Scholar] [CrossRef]
- van Thang, T. Projection algorithms with adaptive step sizes for multiple output split mixed variational inequality problems. Comp. Appl. Math. 2024, 43, 387. [Google Scholar] [CrossRef]
- Kesornprom, S.; Cholamjiak, P. Proximal type algorithms involving linesearch and inertial technique for split variational inclusion problem in hilbert spaces with applications. Optimization 2019, 68, 2369–2395. [Google Scholar] [CrossRef]
- He, H.; Ling, C.; Xu, H.K. An Implementable Splitting Algorithm for the ℓ1-norm Regularized Split Feasibility Problem. J. Sci. Comput. 2016, 67, 281–298. [Google Scholar] [CrossRef]
- Ma, X.; Liu, H.; Li, X. The iterative method for solving the proximal split feasibility problem with an application to LASSO problem. Comp. Appl. Math. 2022, 41, 5. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).