Abstract
Abstract
The inverse problem is one of the four major problems in computational mathematics. An inverse problem arising in medical image reconstruction and radiotherapy is called the multiple-sets split equality problem, which is a unified form of the split feasibility problem, the split equality problem, and the split common fixed point problem. In this paper, we present two iterative algorithms for solving it. The suggested algorithms are based on the gradient method with a selection technique, by which only one projection needs to be calculated in each iteration.
1. Introduction
The inverse problem is one of the four major problems in computational mathematics. Inverse problems have developed rapidly in recent decades; they can be found in computer vision, machine learning, statistics, geography, medical imaging, remote sensing, ocean acoustics, tomography, aviation, physics, and other fields. There is an inverse problem in medical image reconstruction and radiotherapy that can be expressed as a split feasibility problem [1,2,3,4,5,6,7,8,9], split equality problem [10,11,12,13], or split common fixed point problem [14,15,16,17,18,19].
In this paper, we focus on a unified form of the split feasibility problem, split equality problem, and split common fixed point problem that is called the multiple-sets split equality problem.
Let $H_1$, $H_2$, $H_3$ be three real Hilbert spaces, $t$, $r$ be positive integers, and $\{C_i\}_{i=1}^{t}$, $\{Q_j\}_{j=1}^{r}$ be two families of closed and convex subsets of $H_1$ and $H_2$, respectively. $A: H_1 \to H_3$, $B: H_2 \to H_3$ are two bounded and linear operators. Then the multiple-sets split equality problem (MSSEP for short) can be formulated as
$$\text{find } x \in \bigcap_{i=1}^{t} C_i,\ y \in \bigcap_{j=1}^{r} Q_j \ \text{ such that } Ax = By. \tag{1}$$
It reduces to the split equality problem if $t = r = 1$; moreover, it is the split feasibility problem if $t = r = 1$ and $B$ is the identity operator on $H_2 = H_3$. In addition, it is the split common fixed point problem if we take $C_i$ to be $\operatorname{Fix}(P_{C_i})$ and $Q_j$ to be $\operatorname{Fix}(P_{Q_j})$, where $P_{C_i}$, $P_{Q_j}$ are the metric projections onto $C_i$, $Q_j$.
In the problem (1), without loss of generality, we may assume that $t = r$ and let $t$ denote this common number. Then the problem (1) can be described equivalently as:
$$\text{find } x \in C_i,\ y \in Q_i,\ i = 1, \dots, t, \ \text{ such that } Ax = By. \tag{2}$$
Let $H = H_1 \times H_2$, $S_i = C_i \times Q_i$, $G = [A, -B]$, and $G^*$ be the adjoint operator of $G$, where $H$ is the Cartesian product of $H_1$ and $H_2$. Then the original problem (1) can be modified as
$$\text{find } w = (x, y) \in \bigcap_{i=1}^{t} S_i \ \text{ such that } Gw = 0. \tag{3}$$
Assume the problem (3) is consistent and let $\Gamma$ denote its solution set, that is, $\Gamma$ is not empty. We consider the proximity function
$$f(w) = \frac{1}{2} \sum_{i=1}^{t} \alpha_i \|w - P_{S_i} w\|^2 + \frac{1}{2} \|Gw\|^2,$$
where $\alpha_i$ $(i = 1, \dots, t)$ are positive real numbers and $P_{S_i}$ are metric projections from $H$ onto $S_i$. Since $C_i$ and $Q_i$ are closed convex, so are $S_i$, and then $P_{S_i}$ are well defined. Then problem (3) can be transformed into the minimization problem
$$\min_{w \in H} f(w). \tag{4}$$
Note that the proximity function $f$ is convex and differentiable with gradient
$$\nabla f(w) = \sum_{i=1}^{t} \alpha_i (I - P_{S_i}) w + G^* G w,$$
where $I$ is the identity operator on $H$. The gradient $\nabla f$ is $L$-Lipschitz continuous with Lipschitz constant [20]
$$L = \sum_{i=1}^{t} \alpha_i + \|G\|^2.$$
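In finite dimensions, the gradient of the proximity function can be sanity-checked numerically. The sketch below uses hypothetical data (not the paper's): $H = \mathbb{R}^4$, $G = [A, -B]$ with $A = B$ the $2 \times 2$ identity, and the sets $S_i$ taken as Euclidean balls. Assuming the gradient has the standard form $\sum_i \alpha_i (I - P_{S_i}) w + G^* G w$, it is compared against central finite differences of $f$.

```python
import numpy as np

def proj_ball(w, center, radius):
    """Metric projection of w onto the closed ball B(center, radius)."""
    d = w - center
    n = np.linalg.norm(d)
    return w.copy() if n <= radius else center + radius * d / n

def f(w, G, sets, alphas):
    """Proximity function 0.5*sum_i alpha_i*||w - P_i w||^2 + 0.5*||G w||^2."""
    val = 0.5 * np.dot(G @ w, G @ w)
    for a, (c, r) in zip(alphas, sets):
        val += 0.5 * a * np.linalg.norm(w - proj_ball(w, c, r)) ** 2
    return val

def grad_f(w, G, sets, alphas):
    """Assumed closed form: sum_i alpha_i*(I - P_i) w + G^T G w."""
    g = G.T @ (G @ w)
    for a, (c, r) in zip(alphas, sets):
        g += a * (w - proj_ball(w, c, r))
    return g

# Hypothetical data (not from the paper): G = [A, -B] with A = B = I_2.
G = np.array([[1., 0., -1., 0.],
              [0., 1., 0., -1.]])
sets = [(np.zeros(4), 0.5), (np.array([0.1, 0., 0.1, 0.]), 1.0)]
alphas = [1.0, 2.0]

w = np.array([1.0, 2.0, 3.0, 4.0])      # a point away from both ball boundaries
h, num = 1e-6, np.zeros(4)
for k in range(4):                       # central finite differences of f
    e = np.zeros(4)
    e[k] = h
    num[k] = (f(w + e, G, sets, alphas) - f(w - e, G, sets, alphas)) / (2 * h)
print(np.allclose(num, grad_f(w, G, sets, alphas), atol=1e-4))  # True
```

The check is local: away from the ball boundaries the squared distance terms are smooth, so the two gradients agree to finite-difference accuracy.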
To solve the minimization problem (4), a classical method is the gradient algorithm, which takes the iterative scheme
$$w_{n+1} = w_n - \gamma_n \nabla f(w_n) = w_n - \gamma_n \left[ \sum_{i=1}^{t} \alpha_i (I - P_{S_i}) w_n + G^* G w_n \right], \tag{5}$$
where $\gamma_n$ is the iterative step size in step $n$.
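In finite dimensions, scheme (5) is only a few lines of code. The sketch below runs it on hypothetical data (not the paper's experiment): $H = \mathbb{R}^4$, $G = [A, -B]$ with $A = B$ the $2 \times 2$ identity, both sets $S_i$ Euclidean balls centered at the origin, and a constant step $\gamma = 1/L$.

```python
import numpy as np

def proj_ball(w, center, radius):
    """Metric projection onto the closed ball B(center, radius)."""
    d = w - center
    n = np.linalg.norm(d)
    return w.copy() if n <= radius else center + radius * d / n

def gradient_method(w, G, sets, alphas, n_iter=500):
    """Scheme (5): w <- w - gamma*[sum_i alpha_i (I - P_i) w + G^T G w];
    every projection P_i is evaluated at every step."""
    L = sum(alphas) + np.linalg.norm(G, 2) ** 2  # Lipschitz constant of grad f
    gamma = 1.0 / L                               # constant step (assumed choice)
    for _ in range(n_iter):
        g = G.T @ (G @ w)
        for a, (c, r) in zip(alphas, sets):
            g += a * (w - proj_ball(w, c, r))
        w = w - gamma * g
    return w

# Hypothetical data: solutions are w = (x, y) with x = y inside both balls.
G = np.array([[1., 0., -1., 0.],
              [0., 1., 0., -1.]])          # G = [A, -B], A = B = I_2, so Gw = x - y
sets = [(np.zeros(4), 1.0), (np.zeros(4), 2.0)]
alphas = [1.0, 1.0]

w = gradient_method(np.array([0.3, -0.2, 0.1, 0.4]), G, sets, alphas)
print(np.round(w, 6))   # approx [0.2, 0.1, 0.2, 0.1]: x = y, inside both balls
```

Here the iterate converges to the point of the solution set nearest the initial guess: the component of $w_0$ in the null space of $G$ is preserved while the range component is contracted at each step.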
Note that in iteration (5), we need to calculate projections $t$ times in each step. On the other hand, notice that $w \in \bigcap_{i=1}^{t} S_i$ if and only if $\|w - P_{S_{i(w)}} w\| = 0$, where
$$i(w) \in \arg\max_{1 \le i \le t} \|w - P_{S_i} w\|,$$
in which the maximum is attained since $t$ is finite.
Then we consider the iterative scheme
$$w_{n+1} = w_n - \gamma_n \left[ (I - P_{S_{i(w_n)}}) w_n + G^* G w_n \right]. \tag{6}$$
In iteration (6), we need to implement only one projection in each step. Motivated by this point, we present Algorithms 1 and 2 in Section 3 to solve problem (3).
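The selection idea behind (6) can be sketched in the same toy setting (hypothetical data, not the paper's: $H = \mathbb{R}^4$, two Euclidean balls, $G = [A, -B]$ with $A = B$ the $2 \times 2$ identity): at each step, only the projection onto the set with the largest residual $\|w - P_{S_i} w\|$ enters the update.

```python
import numpy as np

def proj_ball(w, center, radius):
    """Metric projection onto the closed ball B(center, radius)."""
    d = w - center
    n = np.linalg.norm(d)
    return w.copy() if n <= radius else center + radius * d / n

def selection_method(w, G, sets, n_iter=200):
    """Scheme (6): the index i(w) selects the most-violated set, and only
    that set's projection enters the update formula."""
    L = 1.0 + np.linalg.norm(G, 2) ** 2   # assumed constant step gamma = 1/L
    gamma = 1.0 / L
    for _ in range(n_iter):
        residuals = [w - proj_ball(w, c, r) for c, r in sets]
        i = max(range(len(sets)), key=lambda k: np.linalg.norm(residuals[k]))
        w = w - gamma * (residuals[i] + G.T @ (G @ w))
    return w

# Hypothetical data: both balls contain the null-space direction (1, 1, 1, 1).
G = np.array([[1., 0., -1., 0.],
              [0., 1., 0., -1.]])
sets = [(np.zeros(4), 0.5), (np.array([0.1, 0., 0.1, 0.]), 1.0)]

w = selection_method(np.ones(4), G, sets)
print(np.round(w, 6))   # approx [0.25, 0.25, 0.25, 0.25]
```

Starting from $w_0 = (1, 1, 1, 1)$, the smaller ball always has the larger residual, so the iterate shrinks along the null space of $G$ until it reaches the boundary point $0.25 \cdot (1, 1, 1, 1)$, which lies in both sets and satisfies $Gw = 0$.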
The general structure of this paper is as follows. In the next section, we review some preliminaries. In Section 3, we present the main algorithms and their convergence. In Section 4, several numerical results are shown to confirm the effectiveness of the suggested algorithms. The last section contains some conclusions.
2. Preliminaries
For convenience, in what follows, $H$ is a real Hilbert space and $I$ denotes the identity operator on $H$. We denote by $x_n \to x$ and $x_n \rightharpoonup x$ the strong and weak convergence of a sequence $\{x_n\}$ to a point $x$, respectively, and $\omega_w(\{x_n\})$ denotes the set of weak cluster points of the sequence $\{x_n\}$. $P_S$ is the projection from $H$ onto its closed and convex subset $S$.
Lemma 1
([21]). Let $S$ be a closed, convex, and nonempty subset of $H$; then for any $x, y \in H$ and $z \in S$,
(i) $\langle x - P_S x, z - P_S x \rangle \le 0$;
(ii) $\|P_S x - P_S y\|^2 \le \langle P_S x - P_S y, x - y \rangle$;
(iii) $\|P_S x - z\|^2 \le \|x - z\|^2 - \|x - P_S x\|^2$.
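Projection inequalities of this kind (the variational characterization and firm nonexpansiveness, assuming these are among the listed items, as is standard for such lemmas) can be verified numerically for a Euclidean ball:

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_ball(w, center, radius):
    """Metric projection onto the closed ball S = B(center, radius)."""
    d = w - center
    n = np.linalg.norm(d)
    return w.copy() if n <= radius else center + radius * d / n

center, radius = np.zeros(3), 1.0        # hypothetical set S, a unit ball
for _ in range(1000):
    x, y = rng.normal(size=3) * 3, rng.normal(size=3) * 3
    z = proj_ball(rng.normal(size=3) * 3, center, radius)  # arbitrary z in S
    px, py = proj_ball(x, center, radius), proj_ball(y, center, radius)
    # variational characterization: <x - P_S x, z - P_S x> <= 0 for all z in S
    assert np.dot(x - px, z - px) <= 1e-12
    # firm nonexpansiveness: ||P_S x - P_S y||^2 <= <P_S x - P_S y, x - y>
    assert np.dot(px - py, px - py) <= np.dot(px - py, x - y) + 1e-12
print("all checks passed")
```

The small tolerances only absorb floating-point roundoff; both inequalities hold exactly for metric projections onto closed convex sets.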
Lemma 2
([22]). Let $\{s_n\}$, $\{\beta_n\}$ be sequences of non-negative real numbers with
$$s_{n+1} \le (1 - \lambda_n) s_n + \lambda_n \gamma_n + \beta_n, \quad n \ge 0, \qquad \sum_{n=0}^{\infty} \beta_n < \infty.$$
Let $\{\gamma_n\}$ be a sequence of real numbers with $\limsup_{n \to \infty} \gamma_n \le 0$ and $\{\lambda_n\} \subset [0, 1]$ with $\sum_{n=0}^{\infty} \lambda_n = \infty$.
Then $\lim_{n \to \infty} s_n = 0$.
Lemma 3
([23]). Let $S$ be a closed and convex subset of $H$, let $T: S \to S$ be non-expansive, and let $\{x_n\} \subset S$. If $x_n \rightharpoonup x$ and $(I - T) x_n \to 0$, then $Tx = x$.
Lemma 4
([24]). Let $S$ be a closed, convex, and nonempty subset of $H$ and $\{x_n\}$ be a sequence in $H$. If
(i) $\lim_{n \to \infty} \|x_n - z\|$ exists for each $z \in S$;
(ii) $\omega_w(\{x_n\}) \subset S$;
then $\{x_n\}$ converges weakly to a point in $S$.
3. Main Results
Assume that the problem (3) is consistent and let $\Gamma$ denote its solution set. That is, $\Gamma$ is not empty and $\Gamma = \{ w \in H : w \in \bigcap_{i=1}^{t} S_i,\ Gw = 0 \}$.
Remark 1.
A point $w \in H$ is a solution of the problem (3) if and only if the equality (8) holds.
On the one hand, if , then take . We have
The first equality is from , the second one is from the definition of and , and the last inequality is from Lemma 1 and . Then
which implies that
Hence and . Namely, .
Conversely, if is a solution of the problem (3), that is and , then and , so . That is, .
Next we discuss the convergence of the iterative sequence generated by Algorithm 1 if it does not terminate in finite steps.
| Algorithm 1: Gradient method 1 |
| Take an initial point $w_0 \in H$ arbitrarily and compute $w_{n+1}$ according to the scheme (6). |
Theorem 1.
If , taking initial point arbitrarily, then the sequence generated by Algorithm 1 converges weakly to a solution of the problem (3).
Proof.
First, we show the boundedness of . Take . Based on the inequality in the process of Remark 1, we get
This implies that exists. Thus the sequence is bounded and so are the sequences and , .
Next we show that .
Since exists and
together with the boundedness of the sequence and the definition of , it follows that
which implies that
Hence,
Since is bounded, let be a weak cluster point of with subsequence weakly convergent to it.
By Lemma 3, we get , and by the arbitrariness of , we deduce that . Moreover, the conditions in Lemma 4 are also satisfied, so the sequence generated by Algorithm 1 converges weakly to some solution of the problem (3). The proof is completed. □
Theorem 1 provides only weak convergence. Next, we show a strong convergence theorem for solving the problem (3).
Next, we discuss the convergence of the iterative sequence generated by Algorithm 2 if it does not terminate in finite steps.
| Algorithm 2: Gradient method 2 |
| Take and initial point . Compute |
Theorem 2.
Suppose that , , and . Taking and an initial point arbitrarily, the sequence generated by Algorithm 2 converges strongly to .
Proof.
Let , for . From the process (10) in Theorem 1, we get
by the definition of , that is, . Thus
By induction, we derive
which means that the sequence is bounded and so are the sequences and , . By a simple derivation,
Then by (13),
Let
Then the inequality (14) equals
and also
It follows that
So
Next, we show that . Otherwise, if , then by the definition of the supremum, there exists m such that for all . It follows that for all ,
Thus
Hence, taking lim sup as in the above inequality, we obtain
which is a contradiction. Therefore, , and it is finite. By the boundedness of , we can take a subsequence of such that
Since the sequence is bounded, there exists a subsequence of . Without loss of generality, we may assume it is the sequence itself, such that exists. Consequently, the following limit exists:
Together with the definitions of and , it shows that
which yields
Following the proof procedure of Theorem 1, we conclude that . Since
assume that . Then
due to the fact that and Lemma 1. Finally, applying Lemma 2 to (15), we conclude that . The proof is completed. □
4. Numerical Experiments
In this section, we provide several numerical results for the MSSEP (2) to confirm the effectiveness of the suggested Algorithm 1. The whole program was written in Wolfram Mathematica (version 9.0). All of the numerical results were obtained on a personal Lenovo computer with an Intel(R) Core(TM) i5-6600 CPU at 3.30 GHz and 8.00 GB of RAM.
Consider the MSSEP with , , , , , , , , , . We choose two initial values and , take the iterative step n as the horizontal axis and as the vertical axis in the figures below (Figure 1 and Figure 2), and use Algorithm 1 to solve this MSSEP.
Figure 1. Results of Algorithm 1 for the first initial value.
Figure 2. Results of Algorithm 1 for the second initial value.
The figures above confirm the effectiveness of the proposed Algorithm 1 and also show an approximately linear downward trend after finitely many steps, which suggests that the convergence of the proposed Algorithm 1 is reasonably fast.
Author Contributions
The main idea of this paper was proposed by D.T.; L.J. and L.S. reviewed all the steps of the initial manuscript. All authors approved the final manuscript.
Funding
This research was supported by NSFC Grants No. 11301379 and No. 11226125.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
- Yao, Y.H.; Postolache, M.; Qin, X.L.; Yao, J.C. Iterative algorithms for the proximal split feasibility problem. UPB Sci. Bull. Ser. A Appl. Math. Phys. 2018, 80, 37–44. [Google Scholar]
- Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426. [Google Scholar] [CrossRef]
- Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
- Takahashi, W. The split feasibility problem in Banach spaces. J. Nonlinear Convex Anal. 2014, 15, 1349–1355. [Google Scholar]
- Wang, F.; Xu, H.K. Cyclic algorithms for split feasibility problems in Hilbert spaces. Nonlinear Anal. 2011, 74, 4105–4111. [Google Scholar] [CrossRef]
- Xu, H.K.; Alghamdi, M.A.; Shahzad, N. An unconstrained optimization approach to the split feasibility problem. J. Nonlinear Convex Anal. 2017, 18, 1891–1899. [Google Scholar]
- Ceng, L.C.; Wong, N.C.; Yao, J.C. Hybrid extragradient methods for finding minimum-norm solutions of split feasibility problems. J. Nonlinear Convex Anal. 2015, 16, 1965–1983. [Google Scholar]
- Yao, Y.H.; Postolache, M.; Zhu, Z.C. Gradient methods with selection technique for the multiple-sets split feasibility problem. Optimization 2019. [Google Scholar] [CrossRef]
- Moudafi, A. Alternating CQ algorithm for convex feasibility and split fixed point problems. J. Nonlinear Convex Anal. 2013, 15, 809–818. [Google Scholar]
- Shi, L.Y.; Chen, R.D.; Wu, Y.J. Strong convergence of iterative algorithms for the split equality problem. J. Inequal. Appl. 2014, 2014, 478. [Google Scholar] [CrossRef]
- Dong, Q.L.; He, S.N.; Zhao, J. Solving the split equality problem without prior knowledge of operator norms. Optimization 2015, 64, 1887–1906. [Google Scholar] [CrossRef]
- Tian, D.L.; Shi, L.Y.; Chen, R.D. Strong convergence theorems for split inclusion problems in Hilbert spaces. J. Fixed Point Theory Appl. 2017, 19, 1501–1514. [Google Scholar] [CrossRef]
- Cegielski, A. General method for solving the split common fixed point problem. J. Optim. Theory Appl. 2015, 165, 385–404. [Google Scholar] [CrossRef]
- Kraikaew, P.; Saejung, S. On split common fixed point problems. J. Math. Anal. Appl. 2014, 415, 513–524. [Google Scholar] [CrossRef]
- Moudafi, A. The split common fixed-point problem for demicontractive mappings. Inverse Probl. 2010, 26, 055007. [Google Scholar] [CrossRef] [PubMed]
- Takahashi, W. The split common fixed point problem and strong convergence theorems by hybrid methods in two Banach spaces. J. Nonlinear Convex Anal. 2016, 17, 1051–1067. [Google Scholar]
- Takahashi, W.; Wen, C.F.; Yao, J.C. An implicit algorithm for the split common fixed point problem in Hilbert spaces and applications. Appl. Anal. Optim. 2017, 1, 423–439. [Google Scholar]
- Yao, Y.H.; Qin, X.L.; Yao, J.C. Self-adaptive step-sizes choice for split common fixed point problems. J. Nonlinear Convex Anal. 2018, 11, 1959–1969. [Google Scholar]
- Xu, H.K. A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021–2034. [Google Scholar] [CrossRef]
- Goebel, K.; Reich, S. Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. In Monographs and Textbooks in Pure and Applied Mathematics; Marcel Dekker: New York, NY, USA, 1984; pp. 1–170. [Google Scholar]
- Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. Approximation of common fixed points of a countable family of nonexpansive mappings in a Banach space. Nonlinear Anal. Theory Methods Appl. 2007, 67, 2350–2360. [Google Scholar] [CrossRef]
- Browder, F.E. Fixed point theorems for noncompact mappings in Hilbert spaces. Proc. Natl. Acad. Sci. USA 1965, 53, 1272–1276. [Google Scholar] [CrossRef] [PubMed]
- Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 595–597. [Google Scholar] [CrossRef]