Abstract
The split feasibility problem (SFP) has many practical applications and has therefore attracted the attention of many authors. In this paper, we propose a new method to solve the SFP together with a fixed-point problem involving quasi-nonexpansive mappings. We relax the conditions imposed on the operator, incorporate an inertial iteration, and use an adaptive step size. Numerical experiments indicate that the sequence generated by our new method converges faster than those of several existing algorithms.
Keywords:
iterative method; inertial method; quasi-nonexpansive mapping; fixed point; split feasibility problem
MSC:
47H09; 47H10; 47H04
1. Introduction
Since Censor et al. [1] introduced the SFP, it has received considerable attention owing to its wide range of applications to practical problems.
Throughout this paper, we suppose that $H_1$, $H_2$ are real Hilbert spaces and that C, Q are nonempty closed convex subsets of $H_1$, $H_2$, respectively. We consider a bounded linear operator $A: H_1 \to H_2$ with adjoint $A^*$. The SFP can be stated in the following form [2,3,4,5,6,7,8,9]:
Find a point $x^* \in C$, such that $Ax^* \in Q$. (1)
The solution set of (1) is denoted by $\Omega$:
$\Omega = \{x \in C : Ax \in Q\}$. (2)
We note that the CQ algorithm of Byrne [2] is a very successful approach to (1), in which the sequence $\{x_n\}$ is generated by the following process:
For any initial estimation $x_0 \in H_1$,
$x_{n+1} = P_C\left(x_n - \tau_n A^*(I - P_Q)Ax_n\right), \quad n \ge 0. \quad (3)$
The metric projections onto C, Q are $P_C$, $P_Q$, and the adjoint operator of A is $A^*$. We select the step size with $\tau_n \in (0, 2/\|A\|^2)$. The selection of $\tau_n$ is dependent on the operator norm $\|A\|$, but the calculation of the operator norm is not easy.
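To make the iteration concrete, the following is a minimal sketch of the CQ iteration with the fixed step size above, on an illustrative finite-dimensional instance in which C and Q are Euclidean balls (so both projections have closed forms); the matrix and ball data are our own assumptions, not taken from the paper:

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection onto the closed ball B(center, radius)."""
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= radius else center + radius * d / dist

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=500):
    """Byrne's CQ iteration x_{n+1} = P_C(x_n - tau * A^T (I - P_Q) A x_n)
    with the classical fixed step tau in (0, 2 / ||A||^2)."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2      # safely inside (0, 2/||A||^2)
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x - tau * A.T @ (Ax - proj_Q(Ax)))
    return x

# Illustrative data (not from the paper): find x in C with Ax in Q.
A = np.array([[2.0, 1.0], [1.0, 2.0]])
proj_C = lambda x: project_ball(x, np.zeros(2), 1.0)           # C: unit ball
proj_Q = lambda y: project_ball(y, np.array([0.9, 1.8]), 0.3)  # Q: small ball
x = cq_algorithm(A, proj_C, proj_Q, np.array([5.0, -5.0]))
print(np.linalg.norm(A @ x - proj_Q(A @ x)))   # distance of Ax to Q, near 0
```

Replacing `proj_C` and `proj_Q` with other closed-form projections (boxes, halfspaces) requires no change to the iteration itself.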
Consider the function
$f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2. \quad (4)$
According to the above function, we can get the following equation:
$\nabla f(x) = A^*(I - P_Q)Ax. \quad (5)$
Therefore, (3) is also included as a particular case of a gradient projection algorithm. In order to conquer the difficulty of numerical calculation, many authors have come up with variable step sizes, which do not need the calculation of the norm $\|A\|$. Later on, based on these predecessors, López et al. [4] thought deeply and finally put forward a new variable step size sequence $\{\tau_n\}$, expressed in the following form:
$\tau_n = \frac{\rho_n f(x_n)}{\|\nabla f(x_n)\|^2}, \quad (6)$
where $\{\rho_n\}$ is a sequence of positive real numbers satisfying $0 < \rho_n < 4$. If we select the step size (6), we do not need any prior knowledge of the operator norm $\|A\|$.
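A minimal sketch of the CQ iteration using this variable step size, on an illustrative instance with ball constraints (all data below are our assumptions, not the paper's); note that $\|A\|$ is never computed:

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection onto the closed ball B(center, radius)."""
    d = x - center
    dist = np.linalg.norm(d)
    return x if dist <= radius else center + radius * d / dist

def cq_adaptive(A, proj_C, proj_Q, x0, rho=2.0, n_iter=500, tol=1e-30):
    """CQ iteration with the variable step size of Lopez et al.:
    tau_n = rho_n * f(x_n) / ||grad f(x_n)||^2 with 0 < rho_n < 4,
    where f(x) = 0.5 * ||(I - P_Q) A x||^2. No norm of A is needed."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        r = Ax - proj_Q(Ax)            # (I - P_Q) A x
        g = A.T @ r                    # gradient of f at x
        gn = g @ g
        if gn < tol:                   # feasible up to machine precision
            break
        x = proj_C(x - rho * (0.5 * (r @ r)) / gn * g)
    return x

# An illustrative instance: C, Q Euclidean balls (closed-form projections).
A = np.array([[2.0, 1.0], [1.0, 2.0]])
proj_C = lambda x: project_ball(x, np.zeros(2), 1.0)
proj_Q = lambda y: project_ball(y, np.array([0.9, 1.8]), 0.3)
x = cq_adaptive(A, proj_C, proj_Q, np.array([5.0, -5.0]))
print(np.linalg.norm(A @ x - proj_Q(A @ x)))   # near zero
```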
In 2019, Qin et al. [5] introduced and studied a fixed point method to solve the SFP (1). Given an initial point, the iteration is calculated as follows:
(7)
where $g$ is a contraction, $S$ is a nonexpansive mapping, and $\mathrm{Fix}(S)$ denotes the set of fixed points of S. The parameter sequences are real sequences in $(0,1)$, satisfying the following:
- ( ) ;
- ( ) , ;
- ( ) , ;
- ( ) , ;
- ( ) .
Then, $\{x_n\}$ converges strongly to a point $x^* \in \mathrm{Fix}(S) \cap \Omega$, and $x^*$ is the unique solution of the following variational inequality:
$\langle (I - g)x^*, x - x^* \rangle \ge 0, \quad \forall x \in \mathrm{Fix}(S) \cap \Omega. \quad (8)$
In 2020, Kraikaew and Saejung [6] further weakened the conditions and simplified the proof. They showed that the sequence $\{x_n\}$ produced by (7) converges strongly to $x^*$ when the following conditions are satisfied:
- ( ) ;
- ( ) ;
- ( ) , ;
- ( ) ;
- ( ) .
Based on these previous works, in this paper we further weaken the conditions and add an inertial term, and we choose the step size so that no calculation of the operator norm is required.
2. Preliminaries
Throughout this paper, we suppose that H is a real Hilbert space and D is a nonempty closed convex subset of H. For a sequence $\{x_n\}$ and a point q in H, we use $x_n \to q$ to represent strong convergence and $x_n \rightharpoonup q$ to represent weak convergence. $\mathrm{Fix}(T)$ denotes the set of fixed points of a mapping T.
The mapping $T: D \to D$ is called:
- (i) A nonexpansive mapping if $\|Tx - Ty\| \le \|x - y\|$ for any $x, y \in D$;
- (ii) A quasi-nonexpansive mapping if $\mathrm{Fix}(T) \neq \emptyset$ and $\|Tx - p\| \le \|x - p\|$ for every $x \in D$, $p \in \mathrm{Fix}(T)$;
- (iii) A firmly nonexpansive mapping if $\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle$ for any $x, y \in D$;
- (iv) A Lipschitz continuous mapping if there is $L > 0$ such that $\|Tx - Ty\| \le L\|x - y\|$ for any $x, y \in D$;
- (v) A contraction mapping if there exists $\kappa \in [0, 1)$ such that $\|Tx - Ty\| \le \kappa\|x - y\|$ for any $x, y \in D$.
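To see why class (ii) is strictly larger than class (i), one can check a classical one-dimensional example numerically (this mapping is a textbook illustration, not one used in the paper): $T(x) = (x/2)\sin(1/x)$ with $T(0) = 0$ has $\mathrm{Fix}(T) = \{0\}$ and is quasi-nonexpansive but not nonexpansive.

```python
import numpy as np

def T(x):
    """T(x) = (x/2) * sin(1/x) for x != 0, T(0) = 0: a classical example
    that is quasi-nonexpansive (Fix(T) = {0}) but not nonexpansive."""
    return 0.0 if x == 0 else 0.5 * x * np.sin(1.0 / x)

# Quasi-nonexpansive: |T(x) - 0| <= |x - 0| for the fixed point p = 0,
# since |T(x)| <= |x| / 2.
xs = np.linspace(-2.0, 2.0, 4001)
assert all(abs(T(x)) <= abs(x) for x in xs)

# Not nonexpansive: a concrete pair with |T(x) - T(y)| > |x - y|,
# exploiting the unbounded slope of sin(1/x) near the origin.
x, y = 1.0 / (2 * np.pi), 1.0 / (2 * np.pi + 0.1)
print(abs(T(x) - T(y)), abs(x - y))   # left value exceeds right value
```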
Lemma 1
([10,11]). For any $x, y \in H$, then
- (1) $\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle$;
- (2) $\|x + y\|^2 = \|x\|^2 + 2\langle x, y \rangle + \|y\|^2$.
Recall that $P_D: H \to D$ is the metric projection operator, that is:
$P_D x = \arg\min_{y \in D} \|x - y\|, \quad x \in H.$
Lemma 2
([12,13,14]). Given $x \in H$ and $z \in D$,
- (1) $z = P_D x$ is equivalent to $\langle x - z, y - z \rangle \le 0$ for all $y \in D$;
- (2) $\|P_D x - P_D y\|^2 \le \langle P_D x - P_D y, x - y \rangle$ for all $x, y \in H$.
From Lemma 2, we can easily prove that $P_D$ is firmly nonexpansive.
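Both parts of Lemma 2 can be checked numerically for a projection with a closed form; a small sketch using the box $D = [-1, 1]^4$, whose metric projection is a componentwise clip (the set and sample points are illustrative):

```python
import numpy as np

# Projection onto the box D = [lo, hi]^4 is a componentwise clip.
# We verify Lemma 2(1): z = P_D(x) iff <x - z, y - z> <= 0 for all y in D,
# and Lemma 2(2): firm nonexpansiveness of P_D.
lo, hi = -1.0, 1.0
P_D = lambda x: np.clip(x, lo, hi)

rng = np.random.default_rng(0)
x = rng.normal(scale=3.0, size=4)        # a point typically outside D
z = P_D(x)

# Sample many y in D and check the variational characterization (1).
ys = rng.uniform(lo, hi, size=(1000, 4))
vals = (ys - z) @ (x - z)
print(vals.max())                        # <= 0: the inequality holds

# Firm nonexpansiveness (2): ||P x - P y||^2 <= <P x - P y, x - y>.
u, v = rng.normal(size=4), rng.normal(size=4)
lhs = np.linalg.norm(P_D(u) - P_D(v)) ** 2
rhs = (P_D(u) - P_D(v)) @ (u - v)
print(lhs <= rhs + 1e-12)                # firmly nonexpansive
```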
Lemma 3
([15]). Let $\{a_n\}$ be a non-negative number sequence, which satisfies:
$a_{n+1} \le (1 - t_n)a_n + t_n b_n + c_n, \quad n \ge 0,$
where $\{t_n\}$ is a sequence in the open interval $(0, 1)$, $\{c_n\}$ is a non-negative real sequence, and $\{b_n\}$ is a real sequence, satisfying the following:
- (1) $\sum_{n=0}^{\infty} t_n = \infty$;
- (2) $\sum_{n=0}^{\infty} c_n < \infty$;
- (3) $\liminf_{k \to \infty}(a_{n_k+1} - a_{n_k}) \ge 0$ implies $\limsup_{k \to \infty} b_{n_k} \le 0$, where $\{a_{n_k}\}$ is a subsequence of $\{a_n\}$.
Then, $\lim_{n \to \infty} a_n = 0$.
Lemma 4
([16]). Let $f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2$. Then $\nabla f = A^*(I - P_Q)A$ is Lipschitz continuous with constant $\|A\|^2$.
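The Lipschitz constant can be made explicit. Assuming $f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2$, the standard choice in the SFP literature, and using that $I - P_Q$ is nonexpansive (a consequence of Lemma 2):

```latex
\begin{aligned}
\|\nabla f(x) - \nabla f(y)\|
  &= \|A^{*}(I - P_{Q})Ax - A^{*}(I - P_{Q})Ay\| \\
  &\le \|A\| \, \|(I - P_{Q})Ax - (I - P_{Q})Ay\| \\
  &\le \|A\| \, \|Ax - Ay\| \\
  &\le \|A\|^{2} \, \|x - y\|,
\end{aligned}
```

so $\nabla f$ is Lipschitz continuous with constant $\|A\|^2$.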
Definition 1
([17]). Let be a nonlinear operator with , I be the identity operator. If the following implication holds for :
then we say that is demiclosed at zero.
It is easy to see that this implication holds for Lipschitz continuous quasi-nonexpansive mappings (see [18]).
3. Main Results
Theorem 1.
Let $S: H_1 \to H_1$ be a quasi-nonexpansive mapping. Suppose that $I - S$ is demiclosed at zero, and $g: H_1 \to H_1$ is a κ-contraction. In addition, let , , , be sequences in , satisfying the following:
- ;
- ;
- , ;
- ;
- .
For each , we can define the following constant:
so that
If is defined by: are arbitrarily chosen, and we have the following equation:
where , , , and is a sequence of positive real numbers. If , then stop; otherwise, set $n := n + 1$ and compute the next iteration. Assuming that , then $\{x_n\}$ converges strongly to $x^*$, and $x^*$ is the unique solution of the following variational inequality:
Proof.
From Lemma 2, we know that is a solution of the following variational inequality:
if and only if . Since g is contractive and the projection is nonexpansive, we know that their composition is contractive. Hence, by the Banach contraction principle, such a point exists and is unique.
First, let . Because , according to Lemma 1 and Lemma 2, we find that:
and
Note that , is a sequence in . We thus derive the following equation:
Putting , by Lemma 1, we can derive that:
From the conditions imposed on , , and (12), we have the following:
From the conditions imposed on , and , we have the following equation:
Since , we can get the equations below:
Since , we therefore have , where M is a suitable positive constant. Hence, we have the following:
We can thus deduce that:
Therefore, the sequence is bounded.
From Lemma 1, we can get the following:
We derive that:
As p is chosen arbitrarily and g is a -contraction, we have the following equations:
On the other hand, by Lemma 1, we can derive that:
Thus,
Set the following:
It is easy to see that , , and . Therefore, by Lemma 3, the proof is complete if we show that whenever for any subsequence .
Suppose that
By the conditions of , , , and , we have the following equations:
Equation (24) implies that:
From Lemma 4, since is bounded, we derive that as , so . By using (27) and the conditions on , we get the following:
Moreover, according to (25), we can get the equation below:
From (26), by expanding the formula, since , we can get:
By expanding (30), we can get the following equation:
Hence, we arrive at the following:
From the definition of , we can see the following:
By using (34), we can get the following:
Combining (29) and the fact that is demiclosed at zero, we know . We select a subsequence of to satisfy the following equation:
Without loss of generality, we can assume that . According to , we can derive that , so , . This means that by combining with (34). Therefore, . By using (35), we have the following:
This means that:
The proof is finished. □
4. Numerical Experiments
Now, we give three numerical examples. We wrote the programs in Matlab 9.0 and ran them on a desktop PC with an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00 GHz (1.19 GHz) and 16.0 GB of RAM.
Example 1.
Solving the system of linear equations $Ax = b$. We assume that . In the following, we take:
and . Consider , where
We give the parameters and initial values as follows: For (7) and (9), we choose , , , , ; for (7), we choose ; for (9), we choose , , , . Denote by the solution of . Then we have . We can see that . We can see the numerical results of the main algorithms in Table 1 and Figure 1.
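The spirit of this example can be reproduced in a few lines of code; the sketch below solves a small system $Ax = b$ by the projection iteration with the variable step size, taking $C = \mathbb{R}^3$ (so $P_C = I$) and $Q = \{b\}$ (so $P_Q(y) = b$). The matrix, right-hand side, and parameter $\rho = 2$ are our own illustrative choices, not the paper's data:

```python
import numpy as np

# Solve A x = b viewed as an SFP with C = R^3 and Q = {b}.
A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
b = np.array([1.0, 0.0, 1.0])

x = np.zeros(3)
for _ in range(2000):
    r = A @ x - b                    # residual (I - P_Q) A x, since Q = {b}
    g = A.T @ r                      # gradient of f(x) = 0.5 ||A x - b||^2
    gn = g @ g
    if gn < 1e-30:                   # solved up to machine precision
        break
    x = x - 2.0 * (0.5 * (r @ r)) / gn * g   # variable step with rho = 2

print(np.linalg.norm(A @ x - b))     # final residual, close to zero
```

No operator norm is computed anywhere, which is the practical point of the variable step size.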
Table 1.
Numerical results of scheme (9) as regards Example 1.
Figure 1.
Comparison of scheme (9) and scheme (7) in Example 1.
From Table 1, we can see that as the number of iterations increases, the iterates get closer to the exact solution and the errors approach zero. Hence, we can conclude that our algorithm is reliable. From Figure 1, we can see that our method requires fewer iterations than (7), and is therefore more advantageous.
Example 2.
Seeking the solution to the following problem:
where , $A$ is the bounded linear operator, and . A is a sparse matrix generated by a standard normal distribution. A real sparse signal $x^*$ is generated from the uniform distribution on the interval (-2,2): p randomly chosen positions are nonzero, and the rest remain zero. We then obtain the sample data $b = Ax^*$.
The key is to seek the sparse solution of this linear system, so we can use method (9) to solve the problem.
We define $C = \{x : \|x\|_1 \le t\}$, $Q = \{b\}$. Because the projection onto C has no closed-form solution, we consider the subgradient projection to compute it. Assume that the convex function $c$ and the level set $C$ are defined by the following:
$c(x) = \|x\|_1 - t, \qquad C = \{x : c(x) \le 0\};$
then $C \subseteq C_n := \{x : c(x_n) + \langle \xi_n, x - x_n \rangle \le 0\}$ for any $\xi_n \in \partial c(x_n)$. Next, we can calculate the orthogonal projection onto $C_n$ according to the following formula:
$P_{C_n}(x) = x - \frac{\max\{c(x_n) + \langle \xi_n, x - x_n \rangle, 0\}}{\|\xi_n\|^2}\,\xi_n.$
Note that the subdifferential of $c$ at $x_n$ is the following:
$\partial c(x_n) = \{\xi : \xi_i = \operatorname{sign}((x_n)_i) \ \text{if } (x_n)_i \neq 0, \ \xi_i \in [-1, 1] \ \text{if } (x_n)_i = 0\}.$
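A minimal sketch of this subgradient projection (the value of t and the test point below are illustrative assumptions, not the paper's data):

```python
import numpy as np

# Subgradient projection for C = {x : c(x) <= 0} with c(x) = ||x||_1 - t.
# Since P_C has no closed form, project onto the halfspace
# C_x = {y : c(x) + <xi, y - x> <= 0}, xi a subgradient of c at x:
#     P(x) = x - max{c(x), 0} / ||xi||^2 * xi.
t = 1.0

def c(x):
    return np.linalg.norm(x, 1) - t

def subgrad(x):
    # componentwise sign is a valid element of the subdifferential of ||.||_1
    return np.sign(x)

def subgrad_projection(x):
    cx = c(x)
    if cx <= 0:                      # x already lies in C
        return x
    xi = subgrad(x)
    return x - cx / (xi @ xi) * xi

x = np.array([2.0, -1.0, 0.5])       # ||x||_1 = 3.5 > t, so x is outside C
y = subgrad_projection(x)
print(c(x) + subgrad(x) @ (y - x))   # ~0 (rounding): y is on the halfspace boundary
```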
Let , . Take as the stopping criterion. We give the parameters and initial values as follows: For (7) and (9), we choose , , , , ; for (7), we choose ; for (9), we choose , , , . We can see the numerical results of the main algorithms in Table 2. Figure 2 shows that when , we can obtain the relationship between the target function and the iterations.
Table 2.
Numerical results of scheme (9) and scheme (7) as regards Example 2.
Figure 2.
Comparison of scheme (9) and scheme (7) in Example 2, with .
Example 3.
Let , with the inner product given by the following:
Let , and . Let , . Take as the stopping criterion.
We then give the parameters and initial values as follows: For (7) and (9), we choose , , , ; for (7), we choose ; for (9), we choose , , , . The numerical results for each choice of are shown in Table 3. Figure 3 shows the error plot for .
Table 3.
Numerical results of scheme (9) and scheme (7) as regards Example 3.
Figure 3.
Comparison of scheme (9) and scheme (7) in Example 3, with .
5. Conclusions
In this paper, we proposed a new method to solve the SFP and the fixed-point problem involving quasi-nonexpansive mappings. Compared with the work in (7), the conditions were relaxed, and the nonexpansive mapping was extended to a quasi-nonexpansive mapping. An inertial term was also added to further accelerate the convergence. In addition, the selection of the step size no longer depends on the operator norm.
By solving several examples, we have illustrated the effectiveness and practicability of the method. We compared all numerical implementations of this method with (7). As shown in Figure 1 and Figure 2, scheme (9) requires fewer iterations; for these reasons, (9) is more effective than (7).
Author Contributions
Conceptualization, T.X. and J.-C.Y.; Data curation, T.X. and B.J.; Formal analysis, Y.W.; Funding acquisition, Y.W.; Investigation, T.X. and J.-C.Y.; Methodology, Y.W.; Project administration, Y.W. and J.-C.Y.; Resources, T.X., J.-C.Y. and B.J.; Software, B.J.; Supervision, Y.W.; Visualization, B.J.; Writing—original draft, T.X. All authors have read and agreed to the published version of the manuscript.
Funding
This research was funded by the National Natural Science Foundation of China (No. 12171435).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
The data used to support the findings of this study are included within the article.
Acknowledgments
The authors thank the referees for their helpful comments, which notably improved the presentation of this paper.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algor. 1994, 8, 221–239. [Google Scholar] [CrossRef]
- Byrne, C. Iterative oblique projection onto convex set and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
- Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018. [Google Scholar] [CrossRef]
- López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004. [Google Scholar] [CrossRef]
- Qin, X.; Wang, L. A fixed point method for solving a split feasibility problem in Hilbert spaces. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 2019, 13, 215–325. [Google Scholar] [CrossRef]
- Kraikaew, R.; Saejung, S. A simple look at the method for solving split feasibility problems in Hilbert spaces. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 2020, 114, 117. [Google Scholar] [CrossRef]
- Kesornprom, S.; Pholasa, N.; Cholamjiak, P. On the convergence analysis of the gradient-CQ algorithms for the split feasibility problem. Numer. Algor. 2020, 84, 997–1017. [Google Scholar] [CrossRef]
- Dong, Q.L.; He, S.; Rassias, T.M. General splitting methods with linearization for the split feasibility problem. J. Global Optim. 2021, 79, 813–836. [Google Scholar] [CrossRef]
- Shehu, Y.; Dong, Q.L.; Liu, L.L. Global and linear convergence of alternated inertial methods for split feasibility problems. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 2021, 115, 53. [Google Scholar] [CrossRef]
- Yang, J.; Liu, H. Strong convergence result for solving monotone variational inequalities in Hilbert space. Numer. Algor. 2019, 80, 741–752. [Google Scholar] [CrossRef]
- Kraikaew, R.; Saejung, S. Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2014, 163, 399–412. [Google Scholar] [CrossRef]
- Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378. [Google Scholar] [CrossRef]
- Wang, Y.; Yuan, M.; Jiang, B. Multi-step inertial hybrid and shrinking Tseng’s algorithm with Meir-Keeler contractions for variational inclusion problems. Mathematics 2021, 9, 1548. [Google Scholar] [CrossRef]
- Jiang, B.; Wang, Y.; Yao, J.C. Multi-step inertial regularized methods for hierarchical variational inequality problems involving generalized Lipschitzian mappings. Mathematics 2021, 9, 2103. [Google Scholar] [CrossRef]
- He, S.; Yang, C. Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, 2013, 942315. [Google Scholar] [CrossRef]
- Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef] [Green Version]
- Tian, M.; Xu, G. Inertial modified Tseng’s extragradient algorithms for solving monotone variational inequalities and fixed point problems. J. Nonlinear Funct. Anal. 2020, 2020, 35. [Google Scholar]
- Wang, Y.H.; Xia, Y.H. Strong convergence for asymptotically pseudocontractions with the demiclosedness principle in Banach spaces. Fixed Point Theory Appl. 2012, 2012, 45. [Google Scholar] [CrossRef] [Green Version]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).