
# Linear Convergence of an Iterative Algorithm for Solving the Multiple-Sets Split Feasibility Problem

by Tingting Tian, Luoyi Shi * and Rudong Chen
Department of Mathematical Science, Tianjin Polytechnic University, Tianjin 300387, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(7), 644; https://doi.org/10.3390/math7070644
Submission received: 4 June 2019 / Revised: 13 July 2019 / Accepted: 15 July 2019 / Published: 18 July 2019

## Abstract

In this paper, we propose a simultaneous sub-gradient projection algorithm with dynamic step size (SSPA for short) for solving the multiple-sets split feasibility problem (MSSFP for short) and investigate its linear convergence. We introduce a notion of bounded linear regularity for the MSSFP and establish several sufficient conditions under which the SSPA converges linearly. In particular, the SSPA is easy to compute because it uses only orthogonal projections onto half-spaces. Furthermore, some numerical results are provided to verify the effectiveness of the proposed algorithm.

## 1. Introduction

Let $H_1$ and $H_2$ be two Hilbert spaces, and let $C \subseteq H_1$ and $Q \subseteq H_2$ be two nonempty closed convex sets. Let the operator $A : H_1 \to H_2$ be bounded and linear. The split feasibility problem (SFP for short) was proposed by Censor and Elfving [1] to solve phase retrieval problems, and is formulated as:
finding $x \in C$ and $y \in Q$ such that $Ax = y$. (1)
Let $\bar{S} = C \times Q \subseteq H = H_1 \times H_2$, $G = [A, -I] : H \to H_2$, and let $G^*$ be the adjoint operator of G; then SFP (1) can be reformulated as:
finding $w = (x, y) \in \bar{S}$ such that $Gw = 0$. (2)
This class of problem has received plenty of attention due to its wide applications, such as intensity-modulated radiation therapy [2], signal processing [3], image reconstruction [4], etc.
Many algorithms have been developed to solve the SFP. One of the most popular and practical algorithms is the CQ algorithm, which was proposed by Byrne [5]:
$x n + 1 = P C ( x n − γ A * ( I − P Q ) A x n ) ,$
where $A *$ is the adjoint operator of A, $γ > 0$ is the step size, while $P C$ and $P Q$ denote the orthogonal projection onto C and Q, respectively.
As an important generalization of the CQ algorithm, López et al. [6] introduced the following dynamic step size CQ algorithm and obtained a weak convergence result:
$x n + 1 = P C ( x n − γ n A * ( I − P Q ) A x n ) ,$
where the step size $\gamma_n$ is computed from the current iterate; in [6] it is chosen as $\gamma_n = \rho_n f(x_n)/\|\nabla f(x_n)\|^2$ with $f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2$ and $\rho_n \in (0, 4)$. The highlight of the dynamic step size CQ algorithm is that it does not require any prior knowledge of the norm of the operator A.
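As a concrete illustration, the CQ iteration with this dynamic step size can be sketched in a few lines. The sets and operator below (boxes in $\mathbb{R}^2$ and $A = 2I$) are a hypothetical toy instance, not taken from the paper:

```python
import numpy as np

def cq_dynamic(A, proj_C, proj_Q, x0, rho=1.0, n_iter=200):
    """CQ iteration with the norm-free dynamic step size of [6]:
    gamma_n = rho * f(x_n) / ||grad f(x_n)||^2, where
    f(x) = 0.5 * ||(I - P_Q) A x||^2 and rho lies in (0, 4)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = A @ x - proj_Q(A @ x)        # residual (I - P_Q) A x
        g = A.T @ r                      # gradient of f at x
        gn2 = float(np.dot(g, g))
        if gn2 == 0.0:                   # A x already lies in Q
            break
        gamma = rho * 0.5 * np.dot(r, r) / gn2
        x = proj_C(x - gamma * g)
    return x

# hypothetical toy instance: C = Q = [0, 1]^2 and A = 2I
A = 2.0 * np.eye(2)
proj_box = lambda v: np.clip(v, 0.0, 1.0)
x = cq_dynamic(A, proj_box, proj_box, np.array([5.0, -3.0]))
# x approaches (0.5, 0): a point of C whose image A x lies in Q
```

Note that no estimate of $\|A\|$ enters the step-size computation, which is the point of the dynamic rule.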
Under some additional assumptions, the strong convergence property of the CQ algorithm was developed in [7] as a special case of some generalized CQ-type algorithms. More papers on this topic are given in [8,9] and the references therein. However, there are few results concerning the rate of convergence.
In this paper, we investigate the multiple-sets split feasibility problem (MSSFP), which is to find a point such that:
$x \in C = \bigcap_{i=1}^{t} C_i, \quad Ax \in Q = \bigcap_{j=1}^{r} Q_j,$ (3)
where r and t are positive integers; ${ C i } i = 1 t$ and ${ Q j } j = 1 r$ are closed, convex and nonempty subsets of Hilbert spaces $H 1$ and $H 2$, respectively; $A : H 1 → H 2$ is a bounded linear operator. Without loss of generality, suppose that $t > r$, and choose $Q r + 1 = Q r + 2 = ⋯ = Q t = H 2$. Let $S i = C i × Q i ⊆ H ˜ = H 1 × H 2$, $i = 1 , 2 , ⋯ , t$, $S ^ = ⋂ i = 1 t S i$, $G = [ A , − I ] : H ˜ → H 2$, $G *$ be the adjoint operator of G. Then MSSFP (3) can be reformulated as:
finding $w = (x, y) \in \hat{S}$ such that $Gw = 0$. (4)
Censor et al. [10] proposed the following iterative formula by using the projection gradient method for solving the MSSFP:
$x n + 1 = P Ω [ x n − s ( ∑ i = 1 t α i ( x n − P C i x n ) + ∑ j = 1 r β j A * ( A x n − P Q j A x n ) ] ,$
where $\Omega \subset \mathbb{R}^N$ is an auxiliary simple set, $s \in (0, 2/L)$, $L = \sum_{i=1}^{t}\alpha_i + \varrho(A^*A)\sum_{j=1}^{r}\beta_j$, $\varrho(A^*A)$ is the spectral radius of $A^*A$, and $\sum_{i=1}^{t}\alpha_i + \sum_{j=1}^{r}\beta_j = 1$ with $\alpha_i > 0$, $\beta_j > 0$. However, the projections in this algorithm are usually difficult to compute. Censor et al. [11] therefore developed the following simultaneous sub-gradient projection algorithm, which is easy to compute because it uses orthogonal projections onto half-spaces, to solve the MSSFP:
$x n + 1 = x n − s L [ ∑ i = 1 t α i ( x n − P C i , n x n ) + ∑ j = 1 r β j A * ( A x n − P Q j , n A x n ) ] .$
Here, $s ∈ ( 0 , 2 ) , L = ∑ i = 1 t α i + ϱ ( A * A ) ∑ j = 1 r β j$, $ϱ ( A * A )$ is the spectral radius of $A * A$, $∑ i = 1 t α i + ∑ j = 1 r β j = 1$ with $α i > 0$, $β j > 0$ and:
$C i , n : = { x ∈ H 1 : c i ( x n ) + 〈 ξ i , n , x − x n 〉 ≤ 0 } ,$
where $ξ i , n ∈ ∂ c i ( x n )$, $i = 1 , 2 , ⋯ , t$, and:
$Q j , n : = { y ∈ H 2 : q j ( y n ) + 〈 η j , n , y − y n 〉 ≤ 0 } ,$
where $\eta_{j,n} \in \partial q_j(y_n)$, $j = 1, 2, \cdots, r$. However, the above projection method with a fixed step size may be very slow. Motivated by the extrapolated method for solving convex feasibility problems in [12], Dang et al. [13] proposed a simultaneous sub-gradient projection algorithm for the MSSFP that utilizes two extrapolated factors in one iterative step. We remark that the above algorithms only converge weakly to a solution of the MSSFP, and their rate of convergence has not been explicitly estimated. Motivated by these drawbacks, we propose a simultaneous sub-gradient projection algorithm with dynamic step size (SSPA for short) for solving the MSSFP, in which projections onto half-spaces replace the projections onto the original convex sets, and we investigate its linear convergence. Furthermore, we derive the linear convergence rate of the SSPA.
The rest of this paper is organized as follows. Section 2 introduces the concept of bounded linear regularity for the MSSFP and presents some relevant definitions and lemmas which will be very useful for our convergence analysis. Section 3 gives the SSPA, the proof of its linear convergence and its linear convergence rate. Section 4 presents some numerical results to clarify the validity of our proposed algorithm.

## 2. Preliminaries

For convenience, we always suppose that H is a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$. I denotes the identity operator on H. For a set $C \subseteq H$, $\mathrm{int}\, C$ denotes the interior of C. We denote by $B$ and $\bar{B}$ the open and closed unit balls centered at the origin, respectively, that is:
$B : = { x ∈ H : ∥ x ∥ < 1 } and B ¯ : = { x ∈ H : ∥ x ∥ ≤ 1 } .$
For a point $x ∈ H$ and a set $C ⊆ H$, the orthogonal projection of x onto C and the distance of x from C, denoted by $P C ( x )$ and $d C ( x )$, are respectively defined by:
$P C ( x ) : = arg min { ∥ x − y ∥ : y ∈ C } and d C ( x ) : = inf { ∥ x − y ∥ : y ∈ C } .$
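For a simple set such as the closed unit ball $\bar{B}$, both quantities have closed forms; a minimal sketch, assuming the Euclidean setting $H = \mathbb{R}^n$:

```python
import numpy as np

def proj_unit_ball(x):
    """Orthogonal projection onto the closed unit ball: scale x back if needed."""
    nx = np.linalg.norm(x)
    return x if nx <= 1.0 else x / nx

def dist_unit_ball(x):
    """Distance from the closed unit ball: max(||x|| - 1, 0)."""
    return max(np.linalg.norm(x) - 1.0, 0.0)

x = np.array([3.0, 4.0])   # ||x|| = 5
p = proj_unit_ball(x)      # (0.6, 0.8)
d = dist_unit_ball(x)      # 4.0, and d equals ||x - p||
```

The relation $d_C(x) = \|x - P_C(x)\|$ visible here holds for any nonempty closed convex C.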
The following proposition is about some well-known properties of the projection operator.
Proposition 1
([14]). Let $C ⊆ H$ be a closed, convex and nonempty set; then, for any $x ˘ , y ˘ ∈ H$ and $z ˘ ∈ C$,
(i) $〈 x ˘ − P C x ˘ , z ˘ − P C x ˘ 〉 ≤ 0$;
(ii) $∥ P C x ˘ − P C y ˘ ∥ 2 ≤ 〈 P C x ˘ − P C y ˘ , x ˘ − y ˘ 〉$;
(iii) $∥ P C x ˘ − z ˘ ∥ 2 ≤ ∥ x ˘ − z ˘ ∥ 2 − ∥ P C x ˘ − x ˘ ∥ 2$;
(iv) $〈 ( I − P C ) x ˘ − ( I − P C ) y ˘ , x ˘ − y ˘ 〉 ≥ ∥ ( I − P C ) x ˘ − ( I − P C ) y ˘ ∥ 2$.
Throughout this paper, we denote the solution set of MSSFP (3) by S, which is defined by:
$S : = C ∩ A − 1 Q = { x ∈ C : A x ∈ Q } ,$
and assume that the MSSFP is consistent; thus, S is also a closed, convex and nonempty set. Then, the following equivalence holds for any $x ¯ ∈ C$:
$\bar{x} \in S \iff (I - P_Q)A\bar{x} = 0.$ (5)
The aim of this section is to construct several sufficient conditions to ensure the linear convergence of the SSPA for MSSFP (3). Recall that a sequence $\{x_n\}$ in H is said to converge linearly to its limit $x^*$ (with rate $\sigma \in [0, 1)$) if there exist $\omega > 0$ and a positive integer N such that:
$∥ x n − x * ∥ ≤ ω σ n for all n ≥ N .$
Next, we will introduce the concept of bounded linear regularity.
Definition 1
([15]). Let ${ Q i } i ∈ I$ be a collection of closed convex subsets in a real Hilbert space H and $Q = ⋂ i ∈ I Q i ≠ ∅$. The collection ${ Q i } i ∈ I$ is said to be bounded linearly regular if for each $r > 0$ there exists a constant $γ r > 0$ such that:
$d_Q(y) \le \gamma_r \sup\{d_{Q_i}(y) : i \in I\}$ for all $y \in rB$.
Lemma 1
([16]). Let ${ Q i } i ∈ I$ be a collection of closed convex subsets in a real Hilbert space H. If $Q i ⋂ int ( ⋂ j ∈ I \ { i } Q j ) ≠ ∅$, then the collection ${ Q i } i ∈ I$ is bounded linearly regular.
Definition 2.
The MSSFP is said to satisfy the bounded linear regularity property if for each $r > 0$ there exists a constant $τ r > 0$ such that:
$\tau_r d_S(x) \le d_Q(Ax)$ for all $x \in C \cap rB$.
Let operator $G : H → H 2$ be bounded and linear. We use $ker G = { y ∈ H : G y = 0 }$ to denote the kernel of G. The orthogonal complement of $ker G$ is represented by $( ker G ) ⊥ = { x ∈ H : 〈 y , x 〉 = 0 , ∀ y ∈ ker G }$. As is well known, both $ker G$ and $( ker G ) ⊥$ are closed subspaces of H.
Lemma 2
([17]). Let operator $G : H → H 2$ be bounded and linear. Then G is injective and has a closed range if and only if G is bounded below, namely, there exists a positive constant v such that $∥ G w ∥ ≥ v ∥ w ∥$ for all $w ∈ H$.
Lemma 3.
Let ${ S ^ , ker G }$ be bounded linearly regular and the range of G be closed; then, MSSFP (4) satisfies the bounded linear regularity property.
Proof.
${ S ^ , ker G }$ is bounded linearly regular, so for any $r > 0$ there exists $τ r > 0$ such that:
$d S ( w ) = d S ^ ∩ ker G ( w ) ≤ τ r max { d S ^ ( w ) , d ker G ( w ) } for all w ∈ r B .$
Hence:
$d_S(w) \le \tau_r d_{\ker G}(w) \quad \text{for all } w \in \hat{S} \cap rB.$ (8)
Since G restricted to $( ker G ) ⊥$ is injective and its range is closed, by Lemma 2, we know that there exists $v > 0$ such that:
$∥ G ( w 1 ) ∥ ≥ v ∥ w 1 ∥ for all w 1 ∈ ( ker G ) ⊥ .$
Hence:
$d_{G^{-1}(0)}(w) \le \frac{1}{v}\|Gw\| \quad \text{for all } w \in H.$ (9)
Combining Inequations (8) and (9), we obtain:
$d_S(w) \le \frac{\tau_r}{v}\|Gw\| = \frac{\tau_r}{v}\|Ax - y\| \quad \text{for all } w = (x, y) \in \hat{S} \cap rB.$ (10)
From:
$d Q ( A x ) : = inf { ∥ A x − y ∥ : y ∈ Q } ,$
it follows that:
$∃ ε > 0 , d Q ( A x ) ≥ ε ∥ A x − y ∥ .$
This, together with Inequation (10), implies that:
$d S ( w ) ≤ τ r v ε d Q ( A x ) for all w = ( x , y ) ∈ S ^ ∩ r B .$
The proof is complete. □
Now we recall the concept of the sub-differential, which is needed to construct the iterative algorithm later.
Definition 3
([16]). Let $f : H → R$ be a convex function. The sub-differential of f at x is defined as:
$\partial f(x) := \{\xi \in H : f(y) \ge f(x) + \langle \xi, y - x \rangle \text{ for all } y \in H\}.$
An element of $∂ f ( x )$ is said to be a sub-gradient.
Lemma 4
([16]). Suppose that $C_i = \{x \in H : f_i(x) \le 0\}$ is nonempty. For any $\xi_i^k \in \partial f_i(x^k)$, define the half-space $C_i^k$ by:
$C i k : = C ( f i , x k , ξ i k ) : = { x ∈ H : f i ( x k ) + 〈 ξ i k , x − x k 〉 ≤ 0 } .$
Then:
$( i )$$C i ⊆ C i k$;
$( i i )$ If $ξ i k ≠ 0$, then $C i k$ is a half-space; otherwise, $C i k = H$;
$(iii)$ $P_{C_i^k}(x^k) = x^k - \dfrac{\max\{f_i(x^k), 0\}}{\|\xi_i^k\|^2}\, \xi_i^k$ (when $\xi_i^k \ne 0$);
$(iv)$ $d_{C_i^k}(x^k) = \dfrac{\max\{f_i(x^k), 0\}}{\|\xi_i^k\|}$.
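Parts (iii) and (iv) give closed-form expressions that are cheap to evaluate; a small sketch, using the hypothetical level set $f(x) = \|x\|^2 - 1$ with subgradient $2x$ (this example function is ours, not from the paper):

```python
import numpy as np

def halfspace_proj_and_dist(f, subgrad, xk):
    """P_{C^k}(x_k) and d_{C^k}(x_k) for the half-space
    C^k = {x : f(x_k) + <xi, x - x_k> <= 0}, per Lemma 4 (iii)-(iv)."""
    fx, xi = f(xk), subgrad(xk)
    pos = max(fx, 0.0)
    if pos == 0.0:                 # x_k already belongs to C^k
        return xk.copy(), 0.0
    n2 = np.dot(xi, xi)            # xi != 0 whenever f(x_k) > 0 and C is nonempty
    return xk - (pos / n2) * xi, pos / np.sqrt(n2)

# hypothetical level set: f(x) = ||x||^2 - 1, subgradient 2x
f = lambda v: np.dot(v, v) - 1.0
g = lambda v: 2.0 * v
xk = np.array([2.0, 0.0])
p, d = halfspace_proj_and_dist(f, g, xk)   # p = (1.25, 0), d = 0.75
```

The projected point p lands exactly on the bounding hyperplane $f(x^k) + \langle \xi, x - x^k \rangle = 0$, as the lemma asserts.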
Finally, the following equality and concept of the Fejér monotone sequence are also important for the convergence analysis.
Lemma 5
([14]). Let ${ x n } n ∈ I$ be a finite family in H, and ${ λ n } n ∈ I$ be a finite family in R with $∑ n ∈ I λ n = 1$, then the following equality holds:
$\Big\|\sum_{n \in I} \lambda_n x_n\Big\|^2 = \sum_{n \in I} \lambda_n \|x_n\|^2 - \frac{1}{2} \sum_{n \in I} \sum_{m \in I} \lambda_n \lambda_m \|x_n - x_m\|^2.$
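The identity in Lemma 5 is easy to sanity-check numerically; a small sketch with three random vectors and weights summing to 1 (the data here are illustrative):

```python
import numpy as np

# numerical check of the identity in Lemma 5 for a random finite family
rng = np.random.default_rng(0)
xs = rng.normal(size=(3, 4))          # three vectors x_n in R^4
lam = np.array([0.2, 0.3, 0.5])       # weights lambda_n with sum 1

lhs = np.linalg.norm((lam[:, None] * xs).sum(axis=0)) ** 2
rhs = (lam * (xs ** 2).sum(axis=1)).sum() - 0.5 * sum(
    lam[n] * lam[m] * np.linalg.norm(xs[n] - xs[m]) ** 2
    for n in range(3) for m in range(3))
# lhs and rhs agree up to floating-point error
```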
Definition 4
([14]). Let C be a nonempty subset of H and ${ x n }$ be a sequence in H. ${ x n }$ is called Fejér monotone with respect to C if:
$∥ x n + 1 − z * ∥ ≤ ∥ x n − z * ∥ , ∀ z * ∈ C .$
Clearly, a Fejér monotone sequence ${ x n }$ is bounded and $lim n → ∞ ∥ x n − z * ∥$ exists.

## 3. Main Results

In this section, we will propose the SSPA and show that the algorithm converges linearly to a solution of MSSFP (3). Without loss of generality, the sets $C i$ and $Q j$ can be represented as:
$C_i := \{x \in H_1 : c_i(x) \le 0\},$ (11)
and
$Q_j := \{y \in H_2 : q_j(y) \le 0\},$ (12)
where $c_i : H_1 \to \mathbb{R}$ and $q_j : H_2 \to \mathbb{R}$ are convex functions for all $i, j = 1, 2, \cdots, t$ (t is a positive integer). Suppose that $c_i$ and $q_j$ are sub-differentiable on $H_1$ and $H_2$, respectively, and that $\partial c_i$ and $\partial q_j$ are bounded operators (namely, bounded on bounded sets). Define:
$C i , n : = { x ∈ H 1 : c i ( x n ) + 〈 ξ i , n , x − x n 〉 ≤ 0 } ,$
where $ξ i , n ∈ ∂ c i ( x n )$, $i = 1 , 2 , ⋯ , t$, and:
$Q j , n : = { y ∈ H 2 : q j ( y n ) + 〈 η j , n , y − y n 〉 ≤ 0 } ,$
where $η j , n ∈ ∂ q j ( y n )$, $j = 1 , 2 , ⋯ , t$.
By the definition of the sub-gradient, it is clear that the half-space $C i , n$ contains $C i$ and the half-space $Q j , n$ contains $Q j$. Then:
$C = ⋂ i = 1 t C i ⊆ ⋂ i = 1 t C i , n and Q = ⋂ j = 1 t Q j ⊆ ⋂ j = 1 t Q j , n .$
Hence, by Equation (5), one has that, for any $\bar{x} \in S$:
$( I − P Q j , n ) A x ¯ = 0 .$
Due to the specific form of $C_{i,n}$ and $Q_{j,n}$, from Lemma 4 we know that the orthogonal projections onto $C_{i,n}$ and $Q_{j,n}$ can be computed directly.
Censor et al. [11] defined the proximity function $p ( x , y )$ of the MSSFP as follows:
$p ( x , y ) : = 1 2 ∑ i = 1 t α i ∥ P C i x − x ∥ 2 + 1 2 ∑ j = 1 r β j ∥ P Q j ( A x ) − A x ∥ 2 ,$
where $α i > 0$, $β j > 0$ for all i and j, $∑ i = 1 t α i + ∑ j = 1 r β j = 1$. $C i$ and $Q j$ are defined by Equations (11) and (12), respectively. Hence, the function $p ( x , y )$ is convex and differentiable with gradient:
$∇ p ( x , y ) = ∑ i = 1 t α i ( x − P C i x ) + ∑ j = 1 r β j A * ( A x − P Q j A x ) ,$
and they constructed the following iterative algorithm for the MSSFP:
$x_{n+1} = x_n - \frac{s}{L}\Big(\sum_{i=1}^{t}\alpha_i(x_n - P_{C_{i,n}}x_n) + \sum_{j=1}^{r}\beta_j A^*(Ax_n - P_{Q_{j,n}}Ax_n)\Big).$ (13)
Here $0 < s < 2$, L is the Lipschitz constant of $∇ p ( x )$ with $L = ∑ i = 1 t α i + ϱ ( A * A ) ∑ j = 1 r β j$, $ϱ ( A * A )$ is the spectral radius of $A * A$, $α i > 0$, $β j > 0$ with $∑ i = 1 t α i + ∑ j = 1 r β j = 1$.
Now, we modify Equation (13) to obtain our simultaneous sub-gradient projection algorithm with dynamic step size for the MSSFP.
Theorem 1.
Suppose that MSSFP (3) satisfies the bounded linear regularity property. Let the sequence ${ x n }$ be defined by Algorithm 1. If the following conditions are met:
$( a )$${ A x n }$ is linearly focusing, that is, there exists $β > 0$ such that:
$\beta d_{Q_i}(Ax_n) \le d_{Q_{i,n}}(Ax_n)$ for any $i \in \{1, 2, \cdots, t\}$;
$(b)$ $Q_i \cap \mathrm{int}(\bigcap_{r \in I \setminus \{i\}} Q_r) \ne \emptyset$ ($I = \{1, 2, \cdots, t\}$),
then $\{x_n\}$ converges linearly to a solution of the MSSFP.
Proof.
Without loss of generality, we suppose that $x n$ is not in S for all $n ≥ 0$. Otherwise, Algorithm 1 terminates in a finite number of iterations, and the conclusions are clearly true. Then, in view of Algorithm 1, one sees that $A x n$ is not in Q for all $n ≥ 0$.
Algorithm 1: SSPA. For an arbitrary initial point $x_0 \in C$, the sequence $\{x_n\}$ is generated by:
$x_{n+1} = x_n - \gamma_n \sum_{i=1}^{t} \alpha_i [(x_n - P_{C_{i,n}} x_n) + A^*(I - P_{Q_{i,n}})Ax_n],$
where at each iteration n:
$(i)$ $0 < \liminf_{n \to \infty} \gamma_n \le \limsup_{n \to \infty} \gamma_n < \min\{1, 1/\|A\|^2\}$;
$(ii)$ $\{\alpha_i\}_{i=1}^{t} \subset (0, +\infty)$ and $\sum_{i=1}^{t} \alpha_i = 1$.
Take a point $x ¯ ∈ S$ and $n ∈ N$. For simplicity, we write:
$Φ x n : = A * ( I − P Q i , n ) A x n .$
Then, one can know that:
$∥ Φ x n ∥ ≤ ∥ A ∥ d Q i , n ( A x n ) and 〈 x n − x ¯ , Φ x n 〉 ≥ d Q i , n 2 ( A x n ) .$
In fact, the first inequality is trivial, while the second one holds because, by Proposition 1 (iv) and $( I − P Q j , n ) A x ¯ = 0$:
$〈 x n − x ¯ , Φ x n 〉 = 〈 A ( x n − x ¯ ) , ( I − P Q i , n ) A x n 〉 ≥ ∥ ( I − P Q i , n ) A x n − ( I − P Q i , n ) A x ¯ ∥ 2 = d Q i , n 2 ( A x n ) .$
We will firstly prove that the sequence ${ x n }$ is Fejér monotone with respect to S. From Algorithm 1, we have:
$\|x_{n+1} - \bar{x}\|^2 = \|x_n - \gamma_n[\sum_{i=1}^{t}\alpha_i(x_n - P_{C_{i,n}}x_n) + \sum_{i=1}^{t}\alpha_i\Phi x_n] - \bar{x}\|^2 = \|x_n - \bar{x}\|^2 - 2\gamma_n\langle x_n - \bar{x}, \sum_{i=1}^{t}\alpha_i(x_n - P_{C_{i,n}}x_n) + \sum_{i=1}^{t}\alpha_i\Phi x_n\rangle + \gamma_n^2\|\sum_{i=1}^{t}\alpha_i(x_n - P_{C_{i,n}}x_n) + \sum_{i=1}^{t}\alpha_i\Phi x_n\|^2 \le \|x_n - \bar{x}\|^2 + 2\gamma_n^2\|\sum_{i=1}^{t}\alpha_i(x_n - P_{C_{i,n}}x_n)\|^2 + 2\gamma_n^2\|\sum_{i=1}^{t}\alpha_i\Phi x_n\|^2 - 2\gamma_n\langle x_n - \bar{x}, \sum_{i=1}^{t}\alpha_i(x_n - P_{C_{i,n}}x_n)\rangle - 2\gamma_n\langle x_n - \bar{x}, \sum_{i=1}^{t}\alpha_i\Phi x_n\rangle.$ (14)
By Lemma 5, we have:
$∥ ∑ i = 1 t α i ( x n − P C i , n x n ) ∥ 2 = ∑ i = 1 t α i ∥ x n − P C i , n x n ∥ 2 − 1 2 ∑ i = 1 t ∑ j = 1 t α i α j ∥ ( x n − P C i , n x n ) − ( x n − P C j , n x n ) ∥ 2 ≤ ∑ i = 1 t α i ∥ x n − P C i , n x n ∥ 2 .$
Hence:
$∥ x n + 1 − x ¯ ∥ 2 ≤ ∥ x n − x ¯ ∥ 2 + 2 γ n 2 ∑ i = 1 t α i ∥ x n − P C i , n x n ∥ 2 + 2 γ n 2 ∥ ∑ i = 1 t α i Φ x n ∥ 2 − 2 γ n 〈 x n − x ¯ , ∑ i = 1 t α i ( x n − P C i , n x n ) 〉 − 2 γ n 〈 x n − x ¯ , ∑ i = 1 t α i Φ x n 〉 .$
Based on the properties of the projection operator (i.e., Proposition 1) and $〈 x n − x ¯ , Φ x n 〉 ≥ d Q i , n 2 ( A x n )$, we get the following estimations:
$\langle x_n - \bar{x}, \sum_{i=1}^{t}\alpha_i(x_n - P_{C_{i,n}}x_n)\rangle = \sum_{i=1}^{t}\alpha_i\langle x_n - \bar{x}, x_n - P_{C_{i,n}}x_n\rangle = \sum_{i=1}^{t}\alpha_i(\langle x_n - P_{C_{i,n}}x_n, x_n - P_{C_{i,n}}x_n\rangle + \langle P_{C_{i,n}}x_n - \bar{x}, x_n - P_{C_{i,n}}x_n\rangle) = \sum_{i=1}^{t}\alpha_i(\|x_n - P_{C_{i,n}}x_n\|^2 + \langle P_{C_{i,n}}x_n - \bar{x}, x_n - P_{C_{i,n}}x_n\rangle) \ge \sum_{i=1}^{t}\alpha_i\|x_n - P_{C_{i,n}}x_n\|^2,$ (15)
and:
$\langle x_n - \bar{x}, \sum_{i=1}^{t}\alpha_i\Phi x_n\rangle = \sum_{i=1}^{t}\alpha_i\langle x_n - \bar{x}, \Phi x_n\rangle \ge \sum_{i=1}^{t}\alpha_i d_{Q_{i,n}}^2(Ax_n).$ (16)
Substituting Inequations (15) and (16) into Inequation (14), we obtain:
$\|x_{n+1} - \bar{x}\|^2 \le \|x_n - \bar{x}\|^2 + 2\gamma_n^2\sum_{i=1}^{t}\alpha_i\|x_n - P_{C_{i,n}}x_n\|^2 + 2\gamma_n^2\|\sum_{i=1}^{t}\alpha_i\Phi x_n\|^2 - 2\gamma_n\sum_{i=1}^{t}\alpha_i\|x_n - P_{C_{i,n}}x_n\|^2 - 2\gamma_n\sum_{i=1}^{t}\alpha_i d_{Q_{i,n}}^2(Ax_n) = \|x_n - \bar{x}\|^2 - 2\gamma_n(1 - \gamma_n)\sum_{i=1}^{t}\alpha_i\|x_n - P_{C_{i,n}}x_n\|^2 - 2\gamma_n\sum_{i=1}^{t}\alpha_i\Big(1 - \frac{\gamma_n\sum_{i=1}^{t}\alpha_i\|\Phi x_n\|^2}{d_{Q_{i,n}}^2(Ax_n)}\Big)d_{Q_{i,n}}^2(Ax_n) \le \|x_n - \bar{x}\|^2 - 2\gamma_n(1 - \gamma_n)\sum_{i=1}^{t}\alpha_i\|x_n - P_{C_{i,n}}x_n\|^2 - 2\gamma_n\sum_{i=1}^{t}\alpha_i\Big(1 - \frac{\gamma_n\|\Phi x_n\|^2}{d_{Q_{i,n}}^2(Ax_n)}\Big)d_{Q_{i,n}}^2(Ax_n).$ (17)
According to (i) in Algorithm 1, it follows from Inequation (17) that:
$∥ x n + 1 − x ¯ ∥ ≤ ∥ x n − x ¯ ∥ .$
That is, the sequence ${ x n }$ is Fejér monotone with respect to S. Hence, ${ x n }$ is bounded and $lim n → ∞ ∥ x n − x ¯ ∥$ exists.
Then, we will show that ${ x n }$ converges linearly to a solution of MSSFP (3).
Since $x ¯$ is taken arbitrarily in S, by Inequation (17), we have:
$d_S^2(x_{n+1}) \le d_S^2(x_n) - 2\gamma_n(1 - \gamma_n)\sum_{i=1}^{t}\alpha_i d_{C_{i,n}}^2(x_n) - 2\gamma_n\sum_{i=1}^{t}\alpha_i\Big(1 - \frac{\gamma_n\|\Phi x_n\|^2}{d_{Q_{i,n}}^2(Ax_n)}\Big)d_{Q_{i,n}}^2(Ax_n) \le d_S^2(x_n) - 2\gamma_n\sum_{i=1}^{t}\alpha_i\Big(1 - \frac{\gamma_n\|\Phi x_n\|^2}{d_{Q_{i,n}}^2(Ax_n)}\Big)d_{Q_{i,n}}^2(Ax_n).$ (18)
From (i) in Algorithm 1, one deduces that:
$lim n → ∞ inf [ 1 − γ n ∥ Φ x n ∥ 2 d Q i , n 2 ( A x n ) ] > 0 .$
Thus, there exists N such that:
$a : = inf n ≥ N [ 1 − γ n ∥ Φ x n ∥ 2 d Q i , n 2 ( A x n ) ] > 0 .$
Then Inequation (18) reduces to:
$d_S^2(x_{n+1}) \le d_S^2(x_n) - 2\gamma_n\sum_{i=1}^{t}\alpha_i a\, d_{Q_{i,n}}^2(Ax_n) \quad \text{for all } n \ge N.$ (19)
Note that ${ A x n }$ is linearly focusing; there exists $β > 0$ such that:
$\beta d_{Q_i}(Ax_n) \le d_{Q_{i,n}}(Ax_n) \quad \text{for all } i \in \{1, 2, \cdots, t\}.$ (20)
We can know from condition (b) that $Q i ⋂ int ( ⋂ r ∈ I \ { i } Q r ) ≠ ∅$. By Lemma 1, we obtain that ${ Q i } i = 1 t$ is bounded linearly regular. In view of Definition 1, there exists $τ > 0$ such that:
$d Q ( A x n ) ≤ τ max { d Q i ( A x n ) , i = 1 , 2 , ⋯ , t } ,$
that is:
$\frac{1}{\tau} d_Q(Ax_n) \le \max\{d_{Q_i}(Ax_n) : i = 1, 2, \cdots, t\}.$ (21)
Substituting Inequations (20) and (21) into Inequation (19), we obtain:
$d_S^2(x_{n+1}) \le d_S^2(x_n) - 2\gamma_n\sum_{i=1}^{t}\alpha_i a\beta^2 d_{Q_i}^2(Ax_n) \le d_S^2(x_n) - 2\gamma_n\alpha a\beta^2\max\{d_{Q_i}^2(Ax_n) : i \in I\} \le d_S^2(x_n) - 2\gamma_n\frac{a\alpha\beta^2}{\tau^2} d_Q^2(Ax_n) = d_S^2(x_n) - 2\gamma_n b\, d_Q^2(Ax_n),$ (22)
where $\alpha = \min\{\alpha_i : i \in I\}$, $I = \{1, 2, \cdots, t\}$, and $b = a\alpha\beta^2/\tau^2$.
Since the MSSFP satisfies the bounded linear regularity property, there exists $ν > 0$ such that:
$\nu d_S(x_n) \le d_Q(Ax_n).$ (23)
Substituting Inequation (23) into Inequation (22), we get:
$d S 2 ( x n + 1 ) ≤ d S 2 ( x n ) − 2 γ n b ν 2 d S 2 ( x n ) = ( 1 − 2 γ n b ν 2 ) d S 2 ( x n ) for all n ≥ N .$
Let $c : = b ν 2$, then:
$d S 2 ( x n + 1 ) ≤ ( 1 − 2 c γ n ) d S 2 ( x n ) ≤ d S 2 ( x N ) ∏ i = N + 1 n ( 1 − 2 c γ i ) for all n ≥ N .$
Obviously, for each $x ¯ ∈ S$, $∥ x n + 1 − x ¯ ∥$ is monotone decreasing for n, hence:
$∥ x l − x n ∥ ≤ ∥ x l − P S ( x n ) ∥ + ∥ x n − P S ( x n ) ∥ ≤ 2 ∥ x n − P S ( x n ) ∥ = 2 d S ( x n ) for all l > n .$
It follows that:
$\|x_l - x_{n+1}\| \le 2 d_S(x_N)\prod_{i=N+1}^{n}\sqrt{1 - 2c\gamma_i} \quad \text{for all } l \ge n+1.$
Let $q : = e − c ∈ ( 0 , 1 )$, then:
$\prod_{i=N+1}^{n}\sqrt{1 - 2c\gamma_i} = \exp\Big\{\frac{1}{2}\sum_{i=N+1}^{n}\ln(1 - 2c\gamma_i)\Big\} \le q^{\sum_{i=N+1}^{n}\gamma_i}.$
Therefore:
$∥ x l − x n + 1 ∥ ≤ 2 d S ( x N ) q ∑ i = N + 1 n γ i for all l ≥ n + 1 .$
Since $0 < lim n → ∞ inf γ n ≤ lim n → ∞ sup γ n < min { 1 , 1 ∥ A ∥ 2 }$, it follows that ${ x n }$ is a Cauchy sequence and converges to a solution $x *$ of MSSFP (3), satisfying:
$∥ x n + 1 − x * ∥ ≤ 2 d S ( x N ) q ∑ i = N + 1 n γ i for all n ≥ N .$
Let:
$\delta := \max\Big\{2 d_S(x_N)\, q^{-\sum_{i=1}^{N}\gamma_i},\ \max\big\{\|x_i - x^*\|\, q^{-\sum_{j=1}^{i}\gamma_j} : i = 1, 2, \dots, N\big\}\Big\} > 0;$
then:
$∥ x n − x * ∥ ≤ δ q ∑ i = 1 n γ i .$
Moreover, from (i) in Algorithm 1, one knows that:
$0 < lim n → ∞ inf γ n .$
Let $γ = lim n → ∞ inf γ n$, then there exists $N 1$ such that $γ n > γ$ for $n ≥ N 1$. It follows that:
$\|x_n - x^*\| \le \delta\, q^{\sum_{i=1}^{N_1}\gamma_i}\, q^{(n-N_1)\gamma} = \omega\sigma^n, \quad \forall n \ge \max\{N, N_1\},$
where $ω = δ q ∑ i = 1 N 1 ( γ i − γ ) , σ = q γ ∈ ( 0 , 1 )$. Hence, ${ x n }$ converges linearly to $x *$. The proof is complete. □
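Algorithm 1 above can be sketched in code. The sets, functions, and the constant step size below are hypothetical simplifications of ours (the theorem only requires $0 < \liminf \gamma_n \le \limsup \gamma_n < \min\{1, 1/\|A\|^2\}$, which a constant $\gamma$ in that range satisfies):

```python
import numpy as np

def hs_proj(fx, xi, x):
    """Project x onto the half-space {y : fx + <xi, y - x> <= 0} (Lemma 4)."""
    pos = max(fx, 0.0)
    n2 = np.dot(xi, xi)
    return x if pos == 0.0 or n2 == 0.0 else x - (pos / n2) * xi

def sspa(A, cs, dcs, qs, dqs, alphas, x0, gamma=0.4, n_iter=300):
    """Sketch of Algorithm 1 (SSPA) with a constant step size
    gamma < min(1, 1/||A||^2); cs, qs are the convex functions c_i, q_i
    and dcs, dqs return a subgradient at the given point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        step = np.zeros_like(x)
        for a, c, dc, q, dq in zip(alphas, cs, dcs, qs, dqs):
            step += a * (x - hs_proj(c(x), dc(x), x))               # x - P_{C_{i,n}} x
            step += a * (A.T @ (Ax - hs_proj(q(Ax), dq(Ax), Ax)))   # A*(I - P_{Q_{i,n}}) A x
        x = x - gamma * step
    return x

# toy instance (t = 1, A = I, x_0 in C): C = {x : x_1 <= 1}, Q = {y : y_1 >= 0};
# from x_0 = (-5, 3) the iterates approach (0, 3), a solution of this SFP
x_sol = sspa(np.eye(2),
             [lambda v: v[0] - 1.0], [lambda v: np.array([1.0, 0.0])],
             [lambda v: -v[0]],      [lambda v: np.array([-1.0, 0.0])],
             [1.0], np.array([-5.0, 3.0]))
```

In this toy run the distance to the solution set shrinks by a fixed factor per iteration, matching the linear convergence asserted by Theorem 1.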
When $t = 1$, Algorithm 1 reduces to an iterative algorithm for solving SFP (2).
Definition 5.
SFP (2) is said to satisfy the bounded linear regularity property if for each $r > 0$ there exists a constant $τ r > 0$ such that:
$\tau_r d_\Gamma(w) \le \|Gw\|$ for all $w \in rB \cap \bar{S}$, (24)
where $S ¯ = C × Q$, $G = [ A , − I ]$ and $w = ( x , y ) ∈ C × Q$.
Corollary 1.
Let SFP (2) satisfy the bounded linear regularity property (i.e., Inequation (24) holds). For an arbitrary initial point $w 0 = ( x 0 , y 0 ) ∈ H$, the sequence ${ w n }$ is defined by:
$w_{n+1} = w_n - \gamma_n[(w_n - P_{S_n}w_n) + G^*Gw_n],$ (25)
where $0 < lim n → ∞ inf γ n ≤ lim n → ∞ sup γ n < min { 1 , 1 ∥ G ∥ 2 }$. Then, ${ w n }$ converges linearly to a solution of SFP (2).

## 4. Numerical Experiments

Let $H 1 = R$, $H 2 = R 2$, $c : H 1 → R$ and $q : H 2 → R$ be defined by:
$c(x) = -x^2 \quad \text{and} \quad q(y) = -(y_1^2 + y_2^2)$ for all $x \in H_1$, $y = (y_1, y_2) \in H_2$,
then $C = \{x \in \mathbb{R} : c(x) \le 0\} = \mathbb{R}$ and $Q = \{y \in \mathbb{R}^2 : q(y) \le 0\} = \mathbb{R}^2$. Since $C \subseteq C_n$ and $Q \subseteq Q_n$, we have $C_n = \mathbb{R}$ and $Q_n = \mathbb{R}^2$. The operators $A : H_1 \to H_2$ and $I : H_2 \to H_2$ are defined by:
$A ( x ) = ( x , 0 ) and I ( y , z ) = ( y , z ) for all ( x , y , z ) ∈ R 3 ,$
respectively. Let $S = C × Q ⊆ H = H 1 × H 2 , G = [ A , − I ] : H → H 2$ be defined by:
$G ( x , y , z ) = ( x − y , − z ) for all ( x , y , z ) ∈ R 3 .$
Then $\ker G = \{(x, x, 0) : x \in \mathbb{R}\} \ne \emptyset$, the range of G is closed, and the solution set of the SFP is $S = (C \times Q) \cap \ker G = \{(x, x, 0) : x \in \mathbb{R}\}$. By Lemma 3, it is easy to verify that this SFP satisfies the bounded linear regularity property.
Let $w 0 = ( x 0 , y 0 , z 0 ) ∈ C × Q$. In view of Equation (25), we have:
$x n + 1 = ( 1 − γ n ) x n + γ n y n , y n + 1 = ( 1 − γ n ) y n + γ n x n , z n + 1 = ( 1 − γ n ) z n .$
In algorithm (25), we take $\gamma_n = 0.6\, n/(n+1)$. Moreover, we choose the error to be $10^{-10}$ and $10^{-20}$ and the initial value to be $w_0 = (5, 8, 3)$ and $w_0 = (100, 300, 50)$, respectively. In addition, under the same conditions, we also compare with Dang's Algorithm 3.1 in [13] to confirm the effectiveness of our proposed algorithm. For convenience, we choose $s = \min\{\rho(G^*G)/(1 + \rho(G^*G)),\ 1/(1 + \rho(G^*G))\}$ in Dang's Algorithm 3.1. The resulting numerical results are displayed in Figure 1, Figure 2, Figure 3 and Figure 4, where the x-coordinate denotes the number of iterations and the y-coordinate the logarithm of the error. We wrote all the codes in Wolfram Mathematica (version 10.3). All the numerical results were run on a personal Asus computer with an AMD A9-9420 RADEON R5 (5 compute cores, 2C+3G) at 3.00 GHz and 8.00 GB of RAM.
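With a constant step size $\gamma = 0.6$ (a simplification of the varying schedule used in the experiments), the componentwise recursion above can be iterated directly; from $w_0 = (5, 8, 3)$ it reproduces the limit $w^* = (6.5, 6.5, 0)$ reported in Figures 1 and 2:

```python
import numpy as np

def sfp_iterates(w0, gamma=0.6, n_iter=120):
    """Iterate the componentwise recursion: x and y are averaged toward
    each other while z is damped to 0 (constant gamma for simplicity)."""
    x, y, z = map(float, w0)
    for _ in range(n_iter):
        # tuple assignment evaluates the right-hand side first,
        # so the three components are updated simultaneously
        x, y, z = ((1 - gamma) * x + gamma * y,
                   (1 - gamma) * y + gamma * x,
                   (1 - gamma) * z)
    return np.array([x, y, z])

w = sfp_iterates((5.0, 8.0, 3.0))   # tends to w* = (6.5, 6.5, 0)
```

Here $x + y$ is invariant while $x - y$ contracts by the factor $1 - 2\gamma$ and z by $1 - \gamma$ per step, so the convergence to $w^*$ is visibly linear.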

## Author Contributions

The main idea of this paper was proposed by T.T.; L.S. and R.C. prepared the manuscript initially and performed all the steps of the proofs in this research. All authors read and approved the final manuscript.

## Funding

This research received no external funding.

## Acknowledgments

This research was supported by NSFC Grants No: 11301379; No: 11226125; No: 11671167.

## Conflicts of Interest

The authors declare no conflicts of interest.

## References

1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
2. Censor, Y.; Bortfeld, T.; Martin, B. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 5, 2353–2365. [Google Scholar] [CrossRef] [PubMed]
3. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef]
4. He, H.; Ling, C.; Xu, H.K. An implementable splitting algorithm for the 1-norm regularized split feasibility problem. J. Sci. Comput. 2015, 67, 1–18. [Google Scholar]
5. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
6. López, G.; Martin-Marquez, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004. [Google Scholar]
7. Cegielski, A. Strong convergence of a hybrid steepest descent method for the split common fixed point problem. Optimization 2016, 6, 1463–1476. [Google Scholar] [CrossRef]
8. Yang, Q. The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20, 1261–1266. [Google Scholar] [CrossRef]
9. Dang, Y.; Gao, Y. The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27, 015007. [Google Scholar] [CrossRef]
10. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084. [Google Scholar] [CrossRef] [Green Version]
11. Censor, Y.; Motova, A.; Segal, A. Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327, 1244–1256. [Google Scholar] [CrossRef] [Green Version]
12. Combettes, P.L. Convex set theoretic image recovery by extrapolated iterations of parallel subgradient projections. IEEE Trans. Image Process. 1997, 6, 493–506. [Google Scholar] [CrossRef] [PubMed]
13. Dang, Y.Z.; Gao, Y. A new simultaneous subgradient projection algorithm for solving a multiple-sets split feasibility problem. Appl. Math. 2014, 59, 37–51. [Google Scholar] [CrossRef]
14. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: London, UK, 2011. [Google Scholar]
15. Zhao, X.P.; Ng, K.F.; Li, C.; Yao, J.C. Linear regularity and linear convergence of projection-based methods for solving convex feasibility problems. Appl. Math. Optim. 2018, 78, 613–641. [Google Scholar] [CrossRef]
16. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426. [Google Scholar] [CrossRef]
17. Conway, J.B. A Course in Functional Analysis, 2nd ed.; GTM 96; Springer: Berlin/Heidelberg, Germany, 1989. [Google Scholar]
Figure 1. Initial conditions: $x 1 = 5 , y 1 = 8 , z 1 = 3 . w * = ( 6.5 , 6.5 , 0 ) ,$ error = $10 − 10 .$
Figure 2. Initial conditions: $x 1 = 5 , y 1 = 8 , z 1 = 3 . w * = ( 6.5 , 6.5 , 0 ) ,$ error = $10 − 20 .$
Figure 3. Initial conditions: $x 1 = 100 , y 1 = 300 , z 1 = 50 . w * = ( 200 , 200 , 0 ) ,$ error = $10 − 10 .$
Figure 4. Initial conditions: $x 1 = 100 , y 1 = 300 , z 1 = 50 . w * = ( 200 , 200 , 0 ) ,$ error = $10 − 20 .$
