Article

A Convex Optimization Algorithm for Compressed Sensing in a Complex Domain: The Complex-Valued Split Bregman Method

School of Artificial Intelligence, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(20), 4540; https://doi.org/10.3390/s19204540
Submission received: 12 September 2019 / Revised: 11 October 2019 / Accepted: 15 October 2019 / Published: 18 October 2019
(This article belongs to the Section Intelligent Sensors)

Abstract

The Split Bregman method (SBM), a popular and universal CS reconstruction algorithm for inverse problems with both l1-norm and TV-norm regularization, has been extensively applied in complex domains through a complex-to-real transforming technique, e.g., in MRI and radar imaging. However, SBM still has untapped potential in complex applications for two reasons: Bregman Iteration (BI), on which SBM relies, may not make good use of the phase information of complex variables, and the converting technique consumes extra time. To address this, this paper presents the complex-valued Split Bregman method (CV-SBM), which theoretically generalizes the original SBM into the complex domain. The complex-valued Bregman distance (CV-BD) is first defined by replacing the corresponding regularization term in the inverse problem. Then, we propose the complex-valued Bregman Iteration (CV-BI) to solve this new problem. The well-definedness and convergence of CV-BI are analyzed in detail according to complex-valued calculation rules and optimization theory. These properties prove that CV-BI is able to solve inverse problems whenever the regularization is convex. Nevertheless, CV-BI needs the help of other algorithms for various kinds of regularization. To avoid the dependence on extra algorithms and simplify the iteration process simultaneously, we adopt the variable separation technique and propose CV-SBM for solving convex inverse problems. Simulation results on complex-valued l1-norm problems illustrate the effectiveness of the proposed CV-SBM. CV-SBM exhibits remarkable superiority over SBM with the complex-to-real transforming technique. Specifically, for the large signal scale n = 512, CV-SBM yields 18.2%, 17.6%, and 26.7% lower mean square error (MSE) and takes 28.8%, 25.6%, and 23.6% less time than the original SBM in the 10 dB, 15 dB, and 20 dB SNR situations, respectively.

1. Introduction

Compressed sensing (CS) theory has been thoroughly analyzed and extensively applied in the signal processing [1,2] and image processing communities [3,4,5] during the past decades. CS theory indicates that a sparse signal can be reconstructed from far fewer measurements than the Nyquist rate requires [6,7]. Specifically, an unknown vector $x \in \mathbb{R}^n$ can be recovered by solving an inverse problem with a sparsity-promoting regularization term, such as the l1-norm or total variation (TV) norm, as follows:
$$\min_x \frac{\lambda}{2}\|y - Ax\|_2^2 + \|x\|_1 \tag{1}$$
$$\min_x \frac{\lambda}{2}\|y - Ax\|_2^2 + \|x\|_{TV} \tag{2}$$
where the measurement vector $y \in \mathbb{R}^m$ is generated by $y = Ax + \varepsilon$, $A \in \mathbb{R}^{m \times n}$ with $m < n$ denotes a sensing matrix, and $\varepsilon \in \mathbb{R}^m$ represents a noise vector. There are various convex optimization methods [8,9,10,11] and sparse Bayesian learning methods [12,13,14] dealing with related inverse problems. However, the vast majority of the above-mentioned algorithms consider real-valued situations. In many applications, as diverse as wireless communication [15,16,17], biomedical imaging [18,19], and radar [20,21,22], the complex domain provides signals and images with a more convenient and appropriate representation for preserving their sparsity and phase information than the real domain. Motivated by this, we investigate CS reconstruction methods for complex-valued cases.
In recent years, many papers have focused on this problem, such as Complex Approximate Message Passing (CAMP) [23] and M-lasso [24]. CAMP is the extension of Approximate Message Passing (AMP) [25] to the complex domain. However, the reconstruction performance of CAMP for unequal-amplitude sparse signals is poor [26]. M-lasso combines the zero-subgradient equations with M-estimation to solve Equation (1) in the complex domain. However, the updating strategy in M-lasso is cyclic coordinate descent (CCD) [27], which updates one element at a time while keeping the others fixed at the current iteration and is therefore computationally expensive. Furthermore, these schemes are designed for the specific l1-norm regularization problem. Thus, a more general algorithm for CS recovery with complex variables is needed.
The Split Bregman method (SBM) proposed in [28] is a universal convex optimization algorithm for both l1-norm and TV-norm regularization problems. Built on the idea of decomposing the original problem into several subproblems solved by Bregman Iteration (BI) [29,30], SBM has been widely utilized in the complex domain through the complex-to-real converting technique [31,32], e.g., in MRI imaging [33], SAR imaging [34], forward-looking scanning radar imaging [35], SAR image super-resolution [36], and massive MIMO channel estimation [37]. However, SBM still has untapped potential in terms of both reconstruction performance and time cost for two reasons: first, the original BI, defined in the real domain, may not make good use of the phase information of complex variables, which degrades the recovery accuracy; second, the converting technique expands the sensing matrix A to a real 2m × 2n matrix, quadrupling its number of elements, which consumes more memory and time within the iteration process.
To tackle the aforementioned problems, this paper theoretically generalizes the original SBM into the complex domain, yielding the complex-valued Split Bregman method (CV-SBM). We first define the complex-valued Bregman distance (CV-BD) and replace the associated regularization term with the CV-BD in the inverse problem. Then, complex-valued Bregman Iteration (CV-BI) is proposed to solve this new problem. In addition, according to the calculation rules, Wirtinger's calculus, and optimization theory for complex variables, the well-definedness and convergence of CV-BI are analyzed in detail. The proof of these two properties reveals that CV-BI can solve inverse problems whenever the regularization term is convex. Since CV-BI requires the help of additional algorithms to solve the specific regularization subproblem, as BI does, its solution is still complicated and computationally expensive. Inspired by SBM, we adopt the variable separation technique to avoid the requirement of other optimization algorithms and then present CV-SBM to solve convex inverse problems with a simplified solution. Simulation results on complex-valued l1-norm problems reveal the effectiveness of CV-SBM compared with existing methods. In particular, the proposed CV-SBM exhibits 18.2%, 17.6%, and 26.7% lower mean square error (MSE) and takes 28.8%, 25.6%, and 23.6% less time than SBM with the complex-to-real transforming technique in the 10 dB, 15 dB, and 20 dB SNR cases with large signal scale n = 512.
The rest of this paper is organized as follows. In Section 2, we briefly review the original BI and SBM techniques. Section 3 proposes and analyzes CV-BI and CV-SBM in detail. Section 4 conducts numerical experiments and compares the results with some existing CS reconstruction algorithms in the complex domain. Conclusions and future work are discussed in Section 5.

2. Review of Bregman Iteration and Split Bregman Method

SBM, whose main idea is to decompose the original unconstrained problem into several equivalent subproblems solved by BI [29,30], has shown its efficiency and effectiveness for inverse problems with both l1-norm and TV-norm regularization [28]. For the convenience of the following illustration, we will first present a brief review of BI and SBM.

2.1. Bregman Iteration

BI focuses on the optimization problem
$$\min_x J(x) + H(x) \tag{3}$$
where J(x) is a real convex function, not necessarily differentiable, and H(x) is a real convex and differentiable function. By replacing J(x) with the corresponding Bregman distance (BD) $D_J^p(u, v)$ [29,38], BI tackles Equation (3) as follows:
$$D_J^p(u, v) = J(u) - J(v) - \langle p, u - v \rangle \tag{4}$$
$$x^k = \arg\min_x D_J^{p^{k-1}}(x, x^{k-1}) + H(x) \tag{5}$$
$$p^k = p^{k-1} - \nabla H(x^k) \tag{6}$$
where $p \in \partial J(v)$ is a subgradient of J(x) at the point v, $\partial J(v)$ is the subdifferential of J(x) at v, $\langle f, g \rangle = f^T g^*$ denotes the inner product for all $f, g \in \mathbb{C}^n$, and $(\cdot)^*$ denotes the conjugate. To make this clear, we give the definitions of the subgradient and subdifferential.
Definition 1.
Let $T: \mathbb{R}^m \to \mathbb{R}$ be a convex function defined on the real domain. A vector $\nabla_x^s T(z_0) \in \mathbb{R}^m$ is said to be a subgradient of T at $z_0$ if $T(z) \ge T(z_0) + \langle z - z_0, \nabla_x^s T(z_0) \rangle$. The set of all subgradients of T at $z_0$ is called the subdifferential of T at $z_0$ and is denoted by $\partial_z T(z_0)$. More details about the subgradient and subdifferential can be found in [39].
A key property of the BD is that it has the same convexity as J(x), so that (5) is still a convex problem. Furthermore, it can be used as a measure of the closeness between two points in J(x) [29,30].
As for the l1-norm and TV-norm problems, Equation (6) is easily calculated, whereas dealing with (5) is more complicated and needs the help of another algorithm [30], such as GPSR [8] or FPC [9]. To avoid the dependence on extra algorithms and to simplify the iteration process, SBM was proposed.
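To make the structure of Equations (5) and (6) concrete, the following minimal Python sketch (added here for illustration; it is not from the original paper) runs BI for $H(x) = \frac{\lambda}{2}\|y - Ax\|_2^2$, delegating subproblem (5) to a placeholder inner solver such as GPSR or FPC:

```python
import numpy as np

def bregman_iteration(y, A, lam, solve_subproblem, n_iter=50):
    """Sketch of BI, Eqs. (5)-(6), for H(x) = (lam/2)*||y - A x||_2^2.

    `solve_subproblem(p, x_prev)` is a placeholder for an external solver
    (e.g., GPSR or FPC) returning argmin_x D_J^p(x, x_prev) + H(x).
    """
    _, n = A.shape
    x = np.zeros(n)
    p = np.zeros(n)  # subgradient of J at x^0 = 0
    for _ in range(n_iter):
        x = solve_subproblem(p, x)        # Eq. (5)
        p = p - lam * A.T @ (A @ x - y)   # Eq. (6): p^k = p^{k-1} - grad H(x^k)
    return x
```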

2.2. Split Bregman Method

SBM aims to find a solution to the unconstrained problem
$$\min_x \|\Phi x\|_1 + H(x) \tag{7}$$
where Φ is a linear operator. SBM introduces an auxiliary variable d and considers a constrained problem equivalent to Equation (7):
$$\min_{x, d} \|d\|_1 + H(x) \quad \text{s.t.} \quad d = \Phi x \tag{8}$$
The corresponding unconstrained version of (8) can be formulated as follows:
$$\min_{x, d} E(x, d) + \frac{\mu}{2}\|d - \Phi x\|_2^2 \tag{9}$$
where $E(x, d) = \|d\|_1 + H(x)$ and μ is a positive constant balancing the two terms in Equation (9) within the iterations. Since the problem now contains two unknown variables, which fully fits the form BI requires, BI can be performed for each of them:
$$(x^k, d^k) = \arg\min_{x, d} D_E^{p^{k-1}}(x, x^{k-1}, d, d^{k-1}) + \frac{\mu}{2}\|d - \Phi x\|_2^2 = \arg\min_{x, d}\, E(x, d) - E(x^{k-1}, d^{k-1}) - \langle p_x^{k-1}, x - x^{k-1} \rangle - \langle p_d^{k-1}, d - d^{k-1} \rangle + \frac{\mu}{2}\|d - \Phi x\|_2^2 \tag{10}$$
$$p_x^k = p_x^{k-1} - \mu\Phi^T(\Phi x^k - d^k) \tag{11}$$
$$p_d^k = p_d^{k-1} - \mu(d^k - \Phi x^k) \tag{12}$$
Let $b^k = p_d^k/\mu$; then the iterations become:
$$(x^k, d^k) = \arg\min_{x, d} \|d\|_1 + H(x) + \frac{\mu}{2}\|d - \Phi x - b^{k-1}\|_2^2 \tag{13}$$
$$b^k = b^{k-1} - (d^k - \Phi x^k) \tag{14}$$
Equation (13) can be decomposed into two subproblems solved alternately, i.e., solving for $x^k$ with $d^{k-1}$ fixed and then solving for $d^k$ with $x^k$ fixed:
$$x^k = \arg\min_x H(x) + \frac{\mu}{2}\|d^{k-1} - \Phi x - b^{k-1}\|_2^2 \tag{15}$$
$$d^k = \arg\min_d \|d\|_1 + \frac{\mu}{2}\|d - \Phi x^k - b^{k-1}\|_2^2 \tag{16}$$
Both subproblems can be computed conveniently by solving their zero-subgradient equations, as the sketch below illustrates.
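As a concrete illustration (ours, not the paper's), the following Python sketch instantiates Equations (13)–(16) for the simplest case Φ = I and $H(x) = \frac{\lambda}{2}\|y - x\|_2^2$ (sparse denoising), where both subproblems admit closed-form solutions; the parameter values are placeholders:

```python
import numpy as np

def shrink(v, eta):
    """Soft thresholding: closed-form solution of Eq. (16)."""
    return np.sign(v) * np.maximum(np.abs(v) - eta, 0.0)

def sbm_denoise(y, lam=1.0, mu=1.0, n_iter=100):
    """Real-valued SBM sketch for min_x (lam/2)*||y - x||_2^2 + ||x||_1."""
    x = np.zeros_like(y)
    d = np.zeros_like(y)
    b = np.zeros_like(y)
    for _ in range(n_iter):
        # Eq. (15): zero-gradient solution of the quadratic x-subproblem
        x = (lam * y + mu * (d - b)) / (lam + mu)
        d = shrink(x + b, 1.0 / mu)   # Eq. (16)
        b = b - (d - x)               # Eq. (14) with Phi = I
    return x
```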
Evidently, the combination of BI and SBM can solve plenty of convex optimization problems in real systems [40,41]. However, the BD and BI are established in the real domain and consequently do not take complex variables and phase information into account. Specifically, once the variables are complex, the BD becomes complex-valued and consequently cannot be employed as a measure of closeness; thus, we can no longer use the BD as the objective function. In the following section, we theoretically generalize the original BI and SBM into the complex domain.

3. Complex-Valued Split Bregman Method

3.1. Wirtinger Calculus and Wirtinger’s Subgradients

As is well known, convex optimization theory requires the differentiability of the objective function. For $T(c) = T_R(c) + jT_I(c)$ in complex variables $c = c_R + jc_I$, complex differentiability is equivalent to satisfying the Cauchy–Riemann conditions:
$$\frac{\partial T_R(c)}{\partial c_R} = \frac{\partial T_I(c)}{\partial c_I}, \qquad \frac{\partial T_R(c)}{\partial c_I} = -\frac{\partial T_I(c)}{\partial c_R} \tag{17}$$
For a complex-valued l1-norm regularization problem:
$$\min_{x \in \mathbb{C}^n} F(x), \quad F(x) = H(x) + J(x) \tag{18}$$
where $H(x) = \lambda\|y - Ax\|_2^2$, $J(x) = \|x\|_1$, $y \in \mathbb{C}^m$, and $A \in \mathbb{C}^{m \times n}$. Apparently, F(x) does not obey (17), so its complex gradient cannot be calculated directly.
To overcome this problem, an alternative tool for computing the complex gradient was recently brought to light: Wirtinger's calculus [39]. It relaxes the strict requirement of complex differentiability and allows the computation of the complex gradient via simple rules and principles. A key point of Wirtinger's calculus is the definition of the Wirtinger gradient (W-gradient) and the conjugate Wirtinger gradient (CW-gradient):
$$\nabla_c T(c) = \frac{1}{2}\left(\nabla_{c_R} T(c) - j\nabla_{c_I} T(c)\right) \tag{19}$$
$$\nabla_{c^*} T(c) = \frac{1}{2}\left(\nabla_{c_R} T(c) + j\nabla_{c_I} T(c)\right) \tag{20}$$
where $\nabla_{c_R} T(c)$ and $\nabla_{c_I} T(c)$ represent the gradients of T with respect to $c_R$ and $c_I$, which can be obtained in the traditional way. According to Equations (19) and (20), one can calculate the W-gradient of $c^*$ and the CW-gradient of c:
$$\nabla_c c^* = \frac{1}{2}\left(\nabla_{c_R} c^* - j\nabla_{c_I} c^*\right) = \frac{1}{2}\left[\nabla_{c_R}(c_R - jc_I) - j\nabla_{c_I}(c_R - jc_I)\right] = \frac{1}{2}\left[1 - j(-j)\right] = \frac{1}{2}[1 - 1] = 0 \tag{21}$$
$$\nabla_{c^*} c = \frac{1}{2}\left(\nabla_{c_R} c + j\nabla_{c_I} c\right) = \frac{1}{2}\left[\nabla_{c_R}(c_R + jc_I) + j\nabla_{c_I}(c_R + jc_I)\right] = \frac{1}{2}\left[1 + j(j)\right] = \frac{1}{2}[1 - 1] = 0 \tag{22}$$
Considering that both the W-gradient of $c^*$ and the CW-gradient of c equal zero, in Wirtinger's calculus we can treat c and $c^*$ as two independent variables; this is the key device that lets us exploit the elegance of Wirtinger's calculus. As an example, if $T(c) = c(c^*)^2$, then $\nabla_c T(c) = (c^*)^2$ and $\nabla_{c^*} T(c) = 2cc^*$. More details and examples can be found in [42].
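These rules are easy to check numerically. The short Python sketch below (our illustration, not part of the paper) approximates Equations (19) and (20) by central differences in the real and imaginary directions and verifies the gradients of the example $T(c) = c(c^*)^2$:

```python
import numpy as np

def wirtinger_grads(T, c, h=1e-6):
    """Numerical W- and CW-gradients of T at c via Eqs. (19)-(20)."""
    dR = (T(c + h) - T(c - h)) / (2 * h)            # partial w.r.t. c_R
    dI = (T(c + 1j * h) - T(c - 1j * h)) / (2 * h)  # partial w.r.t. c_I
    return 0.5 * (dR - 1j * dI), 0.5 * (dR + 1j * dI)

T = lambda c: c * np.conj(c) ** 2   # T(c) = c (c*)^2
c = 1.3 - 0.7j
g_w, g_cw = wirtinger_grads(T, c)
print(np.allclose(g_w, np.conj(c) ** 2))      # True: W-gradient  = (c*)^2
print(np.allclose(g_cw, 2 * c * np.conj(c)))  # True: CW-gradient = 2 c c*
```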
In general, for a convex function of complex variables, the optimality condition is that the CW-gradient equals the zero vector. Nevertheless, in practice some functions may not be differentiable everywhere, e.g., the l1-norm in F(x) at zero. In this case, conjugate Wirtinger subgradients (CW-subgradients) [39] can be adopted to construct the gradient path towards the optimal point. For a real convex function of complex variables $T: \mathbb{C}^n \to \mathbb{R}$, we define a CW-subgradient $\nabla_{c^*}^s T(c)$ of T at c as
$$\nabla_{c^*}^s T(c) = \frac{1}{2}\left[\nabla_{c_R}^s T(c) + j\nabla_{c_I}^s T(c)\right] \tag{23}$$
which satisfies, for all $c_0 \in \mathbb{C}^n$,
$$T(c + c_0) \ge T(c) + 2\Re\left(\langle c_0, \nabla_{c^*}^s T(c) \rangle\right) = T(c) + \langle c_{0R}, \nabla_{c_R}^s T(c) \rangle + \langle c_{0I}, \nabla_{c_I}^s T(c) \rangle \tag{24}$$
where $\nabla_{c_R}^s T(c)$ and $\nabla_{c_I}^s T(c)$ denote the subgradients of T with respect to $c_R$ and $c_I$. The set of all CW-subgradients of T at c is called the Wirtinger differential of T at c and is represented by $\partial_{c^*} T(c)$. It should be noted that at a differentiable point of T, the Wirtinger differential contains only one element, i.e., its CW-gradient. The Wirtinger differentials of the modulus $|x_i|$ and of H(x) are as follows [43]:
$$\partial_{x^*}|x_i| = \begin{cases} \frac{1}{2}\operatorname{sign}(x_i), & \text{for } x_i \neq 0 \\ \frac{1}{2}s, & \text{for } x_i = 0 \end{cases} \tag{25}$$
$$\partial_{x^*} H(x) = \nabla_{x^*} H(x) = \lambda A^H(Ax - y) \tag{26}$$
where i indexes the elements of the vector x, $\operatorname{sign}(x_i) = x_i/|x_i|$, and s is some complex number satisfying $|s| \le 1$. Then, a necessary and sufficient condition for the optimal solution of Equation (18) is $0 \in \partial_{x^*} F(x)$ [43]. With the definition of the CW-subgradient in hand, in the following subsection we generalize the BD into the complex domain.
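For reference, here is a minimal Python sketch of Equations (25) and (26) (an illustration under our own naming, not the paper's code); `csign` takes s = 0 at the origin, one valid choice of subgradient:

```python
import numpy as np

def csign(x):
    """Elementwise complex sign x/|x|, choosing s = 0 at x = 0 (Eq. (25))."""
    mag = np.abs(x)
    return np.where(mag > 0, x / np.where(mag > 0, mag, 1.0), 0.0)

def grad_H(x, A, y, lam):
    """CW-gradient of H(x) = lam * ||y - A x||_2^2, Eq. (26)."""
    return lam * A.conj().T @ (A @ x - y)
```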

3.2. CV Bregman Distance

To prevent the BD from becoming complex-valued, we first generalize the BD into the complex domain and theoretically introduce the CV Bregman distance (CV-BD).
Definition 2.
For $p = \nabla_{v^*}^s J(v) \in \partial_{v^*} J(v)$, we define the quantity
$$D_J^p(u, v) = J(u) - J(v) - 2\Re(\langle u - v, p \rangle) = J(u) - J(v) - \langle u_R - v_R, \nabla_{v_R}^s J(v) \rangle - \langle u_I - v_I, \nabla_{v_I}^s J(v) \rangle \tag{27}$$
as the CV-BD associated with a real convex function J of complex variables. Clearly, no matter whether the variables u and v are real or complex, $D_J^p(u, v)$ is always a real-valued scalar. Moreover, according to (24), a CV-BD is non-negative.
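The following Python sketch (our illustration) evaluates the CV-BD of Equation (27) for J(x) = ||x||_1 and checks its non-negativity at random complex points, using the CW-subgradient from Equation (25):

```python
import numpy as np

def cv_bregman_distance(u, v, p):
    """CV-BD of Eq. (27) for J(x) = ||x||_1; p must be a CW-subgradient
    of J at v. 2*Re(vdot) matches the paper's inner product <u-v, p>."""
    J = lambda x: np.sum(np.abs(x))
    return J(u) - J(v) - 2.0 * np.real(np.vdot(p, u - v))

rng = np.random.default_rng(0)
u = rng.standard_normal(8) + 1j * rng.standard_normal(8)
v = rng.standard_normal(8) + 1j * rng.standard_normal(8)
p = 0.5 * v / np.abs(v)                   # Eq. (25) at nonzero v
print(cv_bregman_distance(u, v, p) >= 0)  # True: the CV-BD is non-negative
```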
To ensure that the CV-BD can serve as an objective function just as the BD does, Lemma 1 and Lemma 2 below prove that the CV-BD has the same convexity as J(x) and can measure the closeness of two points in J.
Lemma 1.
Let $D_J^p(u, v)$ be a CV-BD associated with a real convex or strictly convex function J, where $u, v \in \mathbb{C}^n$. Then $D_J^p(u, v)$ has the same convexity property as J in the variable u for every v.
Proof. 
Assume J is a real convex function and let $\theta \in [0, 1]$, $x, y, v \in \mathbb{C}^n$, and $p \in \partial_{v^*} J(v)$. Then we get
$$D_J^p(\theta x + (1-\theta)y, v) = J(\theta x + (1-\theta)y) - J(v) - 2\Re(\langle \theta x + (1-\theta)y - v, p \rangle) = J(\theta x + (1-\theta)y) - J(v) + C_0 \tag{28}$$
$$\begin{aligned} \theta D_J^p(x, v) + (1-\theta) D_J^p(y, v) &= \theta\left[J(x) - J(v) - 2\Re(\langle x - v, p \rangle)\right] + (1-\theta)\left[J(y) - J(v) - 2\Re(\langle y - v, p \rangle)\right] \\ &= \theta J(x) + (1-\theta)J(y) - J(v) - 2\Re(\langle \theta x + (1-\theta)y - v, p \rangle) \\ &= \theta J(x) + (1-\theta)J(y) - J(v) + C_0 \end{aligned} \tag{29}$$
where $C_0 = -2\Re(\langle \theta x + (1-\theta)y - v, p \rangle)$.
Considering that J is a real convex function, J satisfies
$$J(\theta x + (1-\theta)y) \le \theta J(x) + (1-\theta)J(y) \tag{30}$$
Then we have
$$D_J^p(\theta x + (1-\theta)y, v) \le \theta D_J^p(x, v) + (1-\theta) D_J^p(y, v) \tag{31}$$
This completes the proof that $D_J^p(u, v)$ is a convex function of u when J is convex.
If J is a real strictly convex function, we assume $\theta \in (0, 1)$, $x, y, v \in \mathbb{C}^n$, and $x \neq y$. Then J satisfies
$$J(\theta x + (1-\theta)y) < \theta J(x) + (1-\theta)J(y) \tag{32}$$
Then, according to Equations (28), (29), and (32), we obtain
$$D_J^p(\theta x + (1-\theta)y, v) < \theta D_J^p(x, v) + (1-\theta) D_J^p(y, v) \tag{33}$$
which proves that $D_J^p(u, v)$ is a strictly convex function of u when J is strictly convex. It can thus be concluded that $D_J^p(u, v)$ has the same convexity property as J in u for every v. □
Lemma 2. 
Let $D_J^p(u, v)$ be a CV-BD associated with a real strictly convex function J, and assume a point $w = \theta u + (1-\theta)v$ lies on the line segment connecting u and v, where $u, v \in \mathbb{C}^n$ and $\theta \in (0, 1)$. Then $D_J^p(u, v) \ge D_J^p(w, v)$, and equality holds if and only if u = v.
Proof. 
Assume $u \neq v$; then, according to Lemma 1, we derive
$$D_J^p(w, v) = D_J^p(\theta u + (1-\theta)v, v) < \theta D_J^p(u, v) + (1-\theta) D_J^p(v, v) = \theta D_J^p(u, v) < D_J^p(u, v) \tag{34}$$
When u = v, we have
$$D_J^p(w, v) = D_J^p(\theta u + (1-\theta)v, v) = D_J^p(v, v) = D_J^p(u, v) = 0 \tag{35}$$
This completes the proof of Lemma 2. Hence, the CV-BD between two points of a convex function J decreases as the points get closer and equals zero if and only if the two points coincide. This property makes the CV-BD a measure of the closeness of two points. □
Thus, inspired by the original BI, we use the CV-BD between the variable to be solved and the current solution to replace the real convex function J(x) in the objective function:
$$x^k = \arg\min_{x \in \mathbb{C}^n} Q_k(x), \quad Q_k(x) = H(x) + D_J^{p^{k-1}}(x, x^{k-1}) \tag{36}$$
Within the iterations, the CV-BD is nonincreasing, which will be proved in Section 3.3.3.
Obviously, $Q_k(x)$ is convex because both H(x) and the CV-BD are. However, the CV-BD $D_J^{p^{k-1}}(x, x^{k-1})$ may be multivalued at a nondifferentiable $x^{k-1}$, which inevitably interferes with solving for $x^k$. As we shall prove below, this issue is not vital, since CV-BI, introduced in the following subsection, automatically chooses a suitable CW-subgradient when dealing with Equation (36).

3.3. CV Bregman Iterations

CV-BI for Equation (36) is proposed directly; its well-definedness and convergence are proved in the following.

3.3.1. CV-BI Algorithm

Algorithm 1. Let $x^0 = 0$ and $p^0 = 0$; for k = 1, 2, …
1. Compute $x^k$ as a minimizer of the convex function $Q_k(x)$:
$$x^k = \arg\min_{x \in \mathbb{C}^n} Q_k(x), \quad Q_k(x) = \lambda\|y - Ax\|_2^2 + J(x) - J(x^{k-1}) - 2\Re(\langle x - x^{k-1}, p^{k-1} \rangle) \tag{37}$$
2. Compute $p^k = p^{k-1} - \lambda A^H(Ax^k - y) \in \partial_{x^*} J(x^k)$.
In general, we can initialize $x^0$ and $p^0$ with any pair satisfying $p^0 \in \partial_{x^*} J(x^0)$. Nevertheless, for $x^0 \neq 0$, the CW-subgradient requires additional computation, which is undesirable in practice.

3.3.2. Definition of the Iteration

In this subsection, we show that the iterative procedure in Algorithm 1 is well defined. Specifically, a minimizer $x^k$ of $Q_k(x)$ exists, and the iteration finds an appropriate CW-subgradient $p^k$ automatically.
Proposition 1. 
Assume that $H(x) = \lambda\|y - Ax\|_2^2$ and J(x) is convex and bounded, and let $x^0 = 0$ and $p^0 = 0 \in \partial_{x^*} J(x^0)$. Then, for each $k \in \mathbb{N}$, there exists a minimizer $x^k$ of $Q_k(x)$, and there exist an appropriate CW-subgradient $p^k \in \partial_{x^*} J(x^k)$ and $q^k = \nabla_{x^*} H(x^k) = \lambda A^H(Ax^k - y)$ such that
$$p^k + q^k = p^{k-1} \tag{38}$$
Moreover, if A has no null space, the minimizer $x^k$ is unique.
Proof. 
We prove the result by induction. In the case k = 1, $Q_1(x)$ reduces to the original function F(x), for which the existence of minimizers and the optimality condition $p^1 + q^1 = p^0 = 0$ are well known [44]. In addition, setting $r^k = \lambda(y - Ax^k)$, we have $p^1 = A^H r^1$.
We then proceed from k−1 to k and assume $p^{k-1} = A^H r^{k-1}$ exists. To prove that a minimizer exists, we first discuss the boundedness of $Q_k(x)$. Recalling that the l2-norm is greater than or equal to zero, $Q_k(x)$ can be estimated as
$$\begin{aligned} Q_k(x) &= J(x) - J(x^{k-1}) - 2\Re(\langle x - x^{k-1}, p^{k-1} \rangle) + \lambda\|y - Ax\|_2^2 \\ &= J(x) - J(x^{k-1}) - 2\Re(\langle x - x^{k-1}, A^H r^{k-1} \rangle) + \lambda\|y - Ax\|_2^2 \\ &= J(x) - J(x^{k-1}) - 2\Re(\langle Ax - Ax^{k-1}, r^{k-1} \rangle) + \lambda\|y - Ax\|_2^2 \\ &= J(x) - J(x^{k-1}) - 2\Re(\langle y - Ax^{k-1}, r^{k-1} \rangle - \langle y - Ax, r^{k-1} \rangle) + \lambda\|y - Ax\|_2^2 \\ &= J(x) - J(x^{k-1}) - 2\Re\!\left(\left\langle \tfrac{1}{\lambda} r^{k-1}, r^{k-1} \right\rangle\right) + \langle y - Ax, r^{k-1} \rangle + \langle y - Ax, r^{k-1} \rangle^* + \lambda\|y - Ax\|_2^2 \\ &= J(x) - J(x^{k-1}) - \frac{2\|r^{k-1}\|_2^2}{\lambda} + \lambda\left\|y - Ax + \frac{r^{k-1}}{\lambda}\right\|_2^2 - \frac{\|r^{k-1}\|_2^2}{\lambda} \\ &= J(x) - J(x^{k-1}) - \frac{3\|r^{k-1}\|_2^2}{\lambda} + \lambda\left\|y - Ax + \frac{r^{k-1}}{\lambda}\right\|_2^2 \\ &\ge J(x) - J(x^{k-1}) - \frac{3\|r^{k-1}\|_2^2}{\lambda} \end{aligned} \tag{39}$$
Since J(x) is the only non-constant term in the lower bound, the boundedness of $Q_k(x)$ follows from the boundedness of J(x). This shows that the level sets of $Q_k$ are weak-* compact [29]; hence, a minimizer of $Q_k$ exists by optimization theory. Besides, if A has no null space, H(x), and consequently $Q_k(x)$, is strictly convex, and therefore the minimizer is unique. This completes the proof of the existence of minimizers for all k.
The following proves that $p^k$ and $q^k$ exist for all k > 1. According to the optimality condition for $Q_k(x)$,
$$0 \in \partial_{x^*} Q_k(x^k) = \nabla_{x^*} H(x^k) + \partial_{x^*} J(x^k) - \nabla_{x^*}\langle x - x^{k-1}, p^{k-1} \rangle - \nabla_{x^*}\langle x - x^{k-1}, p^{k-1} \rangle^* = \nabla_{x^*} H(x^k) + \partial_{x^*} J(x^k) - p^{k-1} \tag{40}$$
we derive that
$$p^{k-1} \in \nabla_{x^*} H(x^k) + \partial_{x^*} J(x^k) \tag{41}$$
Recalling the assumption that $p^{k-1}$ exists, one gets that $\partial_{x^*} J(x^k)$ and $\nabla_{x^*} H(x^k)$ are nonempty, which yields the existence of $p^k \in \partial_{x^*} J(x^k)$ and $q^k = \nabla_{x^*} H(x^k) = \lambda A^H(Ax^k - y)$ satisfying Equation (38).
Recalling Equation (38) and $p^0 = 0$, we obtain
$$p^k = -\sum_{i=1}^{k} q^i = \lambda \sum_{i=1}^{k} A^H(y - Ax^i) \tag{42}$$
The well-definedness of CV-BI has thus been proved. The whole CV-BI can be summarized as follows:
$$x^k = \arg\min_x H(x) + D_J^{p^{k-1}}(x, x^{k-1}) \tag{43}$$
$$p^k = p^{k-1} - \nabla_{x^*} H(x^k) \tag{44}$$
 □
Algorithm 1: CV-BI
Initialization: $x^0 = 0$, $p^0 = 0$, k = 1, λ
  While "stopping criterion is not met" do
    $x^k = \arg\min_x H(x) + D_J^{p^{k-1}}(x, x^{k-1})$;
    $p^k = p^{k-1} - \nabla_{x^*} H(x^k)$;
    k = k + 1;
  End while
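As a sketch of how Algorithm 1 looks in code (our illustration; the inner minimization of Equation (43) is delegated to a placeholder, since CV-BI itself only prescribes the subgradient update):

```python
import numpy as np

def cv_bregman_iteration(y, A, lam, solve_Qk, n_iter=50):
    """CV-BI sketch for H(x) = lam * ||y - A x||_2^2 with complex data.
    `solve_Qk(p, x_prev)` stands in for an external solver of Eq. (43)."""
    _, n = A.shape
    x = np.zeros(n, dtype=complex)
    p = np.zeros(n, dtype=complex)
    for _ in range(n_iter):
        x = solve_Qk(p, x)                       # Eq. (43)
        p = p - lam * A.conj().T @ (A @ x - y)   # Eq. (44): CW-gradient update
    return x
```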
Reviewing the proof, one finds that CV-BI can handle any regularization term J(x) in Equation (18), provided J(x) is a real convex function of complex variables. Furthermore, since each step of Algorithm 1 follows the optimization rules of the complex domain rather than converting the objective $Q_k(x)$ and the variable x into the real domain, CV-BI preserves the phase information of complex variables.

3.3.3. Convergence Analysis

In this subsection, the convergence property of CV-BI is analyzed. To be specific, two monotonicity properties are proved with the help of the CV-BD.
Proposition 2.
Under the above assumptions, the sequence $H(x^k)$ obtained from CV-BI is monotonically nonincreasing; that is,
$$H(x^k) \le H(x^k) + D_J^{p^{k-1}}(x^k, x^{k-1}) \le H(x^{k-1}) \tag{45}$$
Moreover, let x be such that J(x) < ∞; then we even have
$$D_J^{p^k}(x, x^k) + D_J^{p^{k-1}}(x^k, x^{k-1}) + H(x^k) \le H(x) + D_J^{p^{k-1}}(x, x^{k-1}) \tag{46}$$
Proof. 
Recalling the non-negativity of the CV-BD and that $x^k$ is the minimizer of the convex function $Q_k(x)$, we obtain
$$H(x^k) \le H(x^k) + J(x^k) - J(x^{k-1}) - 2\Re(\langle x^k - x^{k-1}, p^{k-1} \rangle) = Q_k(x^k) \le Q_k(x^{k-1}) = H(x^{k-1}) \tag{47}$$
which implies Equation (45).
We can derive a formula motivated by the identity of the original BD [45]:
$$\begin{aligned} &D_J^{p^k}(x, x^k) - D_J^{p^{k-1}}(x, x^{k-1}) + D_J^{p^{k-1}}(x^k, x^{k-1}) \\ &= J(x) - J(x^k) - 2\Re(\langle x - x^k, p^k \rangle) - J(x) + J(x^{k-1}) + 2\Re(\langle x - x^{k-1}, p^{k-1} \rangle) + J(x^k) - J(x^{k-1}) - 2\Re(\langle x^k - x^{k-1}, p^{k-1} \rangle) \\ &= 2\Re(\langle x - x^k, p^{k-1} - p^k \rangle) = 2\Re(\langle x - x^k, q^k \rangle) \end{aligned} \tag{48}$$
Considering the definition of the CW-subgradient and $q^k = \nabla_{x^*} H(x^k)$, we obtain
$$D_J^{p^k}(x, x^k) - D_J^{p^{k-1}}(x, x^{k-1}) + D_J^{p^{k-1}}(x^k, x^{k-1}) = 2\Re(\langle x - x^k, q^k \rangle) \le H(x) - H(x^k) \tag{49}$$
which is equivalent to Equation (46). □
Proposition 3. 
Under the same assumptions as Proposition 2, let $\tilde{x}$ be a minimizer of H(x) with $J(\tilde{x}) < \infty$; then we have
$$D_J^{p^k}(\tilde{x}, x^k) \le D_J^{p^{k-1}}(\tilde{x}, x^{k-1}) \tag{50}$$
Proof. 
Recalling the non-negativity of the CV-BD and that $\tilde{x}$ is a minimizer of H(x), we get the inequality
$$D_J^{p^k}(\tilde{x}, x^k) \le D_J^{p^k}(\tilde{x}, x^k) + D_J^{p^{k-1}}(x^k, x^{k-1}) + H(x^k) - H(\tilde{x}) \tag{51}$$
According to Equation (49), (51) can be extended to
$$D_J^{p^k}(\tilde{x}, x^k) \le D_J^{p^k}(\tilde{x}, x^k) + D_J^{p^{k-1}}(x^k, x^{k-1}) + H(x^k) - H(\tilde{x}) \le D_J^{p^{k-1}}(\tilde{x}, x^{k-1}) \tag{52}$$
which proves Equation (50). The results of Equations (45) and (50) yield a general convergence conclusion for CV-BI; more details on convergence can be found in [29]. □

3.4. CV-SBM

For the various kinds of regularization terms appearing in Equation (43), CV-BI still has to employ other algorithms, as BI does, which makes the solution process complicated and computationally expensive. Inspired by SBM, we separate the original variable and present CV-SBM to solve convex inverse problems with simplified subproblems.
A constrained optimization problem in complex variables
$$\min_{x, d \in \mathbb{C}^n} J(d) + H(x) \quad \text{s.t.} \quad d = \Phi x \tag{53}$$
can be transformed into an unconstrained one
$$\min_{x, d \in \mathbb{C}^n} F(x, d) + \mu\|d - \Phi x\|_2^2, \quad F(x, d) = J(d) + H(x) \tag{54}$$
Evidently, F(x, d) is convex in x and d. Thus, applying CV-BI to Equation (54) in each variable, we derive
$$(x^k, d^k) = \arg\min_{x, d} D_F^{p^{k-1}}(x, x^{k-1}, d, d^{k-1}) + \mu\|d - \Phi x\|_2^2 = \arg\min_{x, d}\, F(x, d) - F(x^{k-1}, d^{k-1}) - 2\Re(\langle x - x^{k-1}, p_x^{k-1} \rangle) - 2\Re(\langle d - d^{k-1}, p_d^{k-1} \rangle) + \mu\|d - \Phi x\|_2^2 \tag{55}$$
$$p_x^k = p_x^{k-1} - \mu\Phi^H(\Phi x^k - d^k) \tag{56}$$
$$p_d^k = p_d^{k-1} - \mu(d^k - \Phi x^k) \tag{57}$$
To simplify the iteration step of Equation (55), we let $b^k = p_d^k/\mu$ (so that, by induction from $p_x^0 = p_d^0 = 0$ and Equations (56) and (57), $p_x^k = -\mu\Phi^H b^k$) and get
$$\begin{aligned} &-2\Re(\langle x - x^{k-1}, p_x^{k-1} \rangle) - 2\Re(\langle d - d^{k-1}, p_d^{k-1} \rangle) + \mu\|d - \Phi x\|_2^2 \\ &= 2\mu\Re(\langle x - x^{k-1}, \Phi^H b^{k-1} \rangle) - 2\mu\Re(\langle d - d^{k-1}, b^{k-1} \rangle) + \mu\|d - \Phi x\|_2^2 \\ &= 2\mu\Re(\langle \Phi x - \Phi x^{k-1}, b^{k-1} \rangle) - 2\mu\Re(\langle d - d^{k-1}, b^{k-1} \rangle) + \mu\|d - \Phi x\|_2^2 \\ &= \mu\left[2\Re(\langle \Phi x - d, b^{k-1} \rangle) - 2\Re(\langle \Phi x^{k-1} - d^{k-1}, b^{k-1} \rangle) + \|d - \Phi x\|_2^2\right] \\ &= \mu\left[2\Re(\langle \Phi x - d, b^{k-1} \rangle) + \|d - \Phi x\|_2^2\right] + C_1 \\ &= \mu\left[\langle \Phi x - d, b^{k-1} \rangle + \langle \Phi x - d, b^{k-1} \rangle^* + \|d - \Phi x\|_2^2\right] + C_1 \\ &= \mu\left[(b^{k-1})^H(\Phi x - d) + (\Phi x - d)^H b^{k-1} + \|d - \Phi x\|_2^2\right] + C_1 \\ &= \mu\|d - \Phi x - b^{k-1}\|_2^2 + C_1 \end{aligned} \tag{58}$$
where $C_1$ collects the terms that are constant with respect to x and d. Substituting Equation (58) into Equations (55)–(57) yields
$$(x^k, d^k) = \arg\min_{x, d} F(x, d) + \mu\|d - \Phi x - b^{k-1}\|_2^2 \tag{59}$$
$$b^k = b^{k-1} - (d^k - \Phi x^k) \tag{60}$$
One can solve Equation (59) by an alternating minimization scheme with respect to x and d:
$$x^k = \arg\min_x H(x) + \frac{\mu}{2}\|d^{k-1} - \Phi x - b^{k-1}\|_2^2 \tag{61}$$
$$d^k = \arg\min_d J(d) + \frac{\mu}{2}\|d - \Phi x^k - b^{k-1}\|_2^2 \tag{62}$$
The above two subproblems can be worked out easily. Considering the properties of CV-BI, it can be inferred that CV-SBM is likewise universal for any convex J(x) in a convex optimization task.
The overall CV-SBM is shown as Algorithm 2.
Algorithm 2: CV-SBM
Initialization: $x^0 = 0$, $d^0 = 0$, $b^0 = 0$, λ, μ, k = 1
  While "stopping criterion is not met" do
    $x^k = \arg\min_x H(x) + \frac{\mu}{2}\|d^{k-1} - \Phi x - b^{k-1}\|_2^2$;
    $d^k = \arg\min_d J(d) + \frac{\mu}{2}\|d - \Phi x^k - b^{k-1}\|_2^2$;
    $b^k = b^{k-1} - (d^k - \Phi x^k)$;
    k = k + 1;
  End while
Assuming Φ = I, we can then solve Equation (18) through CV-SBM in three steps [35]:
Step 1: Solve the x-subproblem
$$x^k = \arg\min_{x \in \mathbb{C}^n} \lambda\|y - Ax\|_2^2 + \mu\|d^{k-1} - x - b^{k-1}\|_2^2 \tag{63}$$
Since the l2-norm is differentiable, Equation (63) can be solved by setting its CW-gradient with respect to x to zero, which yields
$$x^k = (\lambda A^H A + \mu I)^{-1}(\lambda A^H y + \mu d^{k-1} - \mu b^{k-1}) \tag{64}$$
Step 2: Solve the d-subproblem
$$d^k = \arg\min_{d \in \mathbb{C}^n} \|d\|_1 + \frac{\mu}{2}\|d - x^k - b^{k-1}\|_2^2 \tag{65}$$
This subproblem can be dealt with by a shrinkage operator:
$$d^k = \operatorname{shrink}(x^k + b^{k-1}, 1/\mu) \tag{66}$$
$$\operatorname{shrink}(\gamma, \eta) = \operatorname{sign}(\gamma)\max(|\gamma| - \eta, 0) \tag{67}$$
where the operator acts elementwise and, for complex γ, $\operatorname{sign}(\gamma) = \gamma/|\gamma|$ as in Equation (25).
Step 3: Update b
$$b^k = b^{k-1} - (d^k - x^k) \tag{68}$$
CV-SBM for the l1-norm problem is presented as Algorithm 3.
Algorithm 3: CV-SBM for the l1-norm problem
Initialization: $x^0 = 0$, $d^0 = 0$, $b^0 = 0$, λ, μ, k = 1
  While "stopping criterion is not met" do
    $x^k = (\lambda A^H A + \mu I)^{-1}(\lambda A^H y + \mu d^{k-1} - \mu b^{k-1})$;
    $d^k = \operatorname{shrink}(x^k + b^{k-1}, 1/\mu)$;
    $b^k = b^{k-1} - (d^k - x^k)$;
    k = k + 1;
  End while
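The three steps above translate directly into code. The following Python sketch of Algorithm 3 is our illustration (the paper's experiments use MATLAB); the inverse in Equation (64) is precomputed once, since it does not change across iterations, and the stopping test anticipates Equation (69) of Section 4:

```python
import numpy as np

def cshrink(v, eta):
    """Complex shrinkage, Eqs. (66)-(67), with sign(v) = v/|v| elementwise."""
    mag = np.abs(v)
    return v * (np.maximum(mag - eta, 0.0) / np.where(mag > 0, mag, 1.0))

def cv_sbm_l1(y, A, lam, mu, tol=2e-4, k_max=2000):
    """CV-SBM for the l1-norm problem (Algorithm 3)."""
    n = A.shape[1]
    x = np.zeros(n, dtype=complex)
    d = np.zeros(n, dtype=complex)
    b = np.zeros(n, dtype=complex)
    AH = A.conj().T
    M = np.linalg.inv(lam * AH @ A + mu * np.eye(n))  # n x n complex inverse, Eq. (64)
    AHy = lam * AH @ y
    for _ in range(k_max):
        x_prev = x
        x = M @ (AHy + mu * (d - b))   # Eq. (64)
        d = cshrink(x + b, 1.0 / mu)   # Eqs. (65)-(67)
        b = b - (d - x)                # Eq. (68)
        # Stopping criterion of Eq. (69)
        if np.linalg.norm(x - x_prev) ** 2 <= tol * np.linalg.norm(x_prev) ** 2:
            break
    return x
```

Note that only an n × n complex system is inverted here, whereas the complex-to-real conversion in RV-SBM requires a 2n × 2n real one; this is the memory and time advantage quantified in Section 4.4.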

4. Numerical Experiments

This section evaluates the performance of the proposed CV-SBM through a wide range of experiments solving l1-norm problems in the complex domain. We apply the proposed method to recover a complex-valued random sparse signal x from noisy measurements y generated by $y = Ax + \varepsilon$, where $x \in \mathbb{C}^n$, $y \in \mathbb{C}^m$, $A \in \mathbb{C}^{m \times n}$, and $\varepsilon \in \mathbb{C}^m$. The sparse signal x consists of L nonzero elements, and the real and imaginary parts of both x and A obey the Gaussian distribution N(0, 1). The noise vector ε is assumed to be i.i.d. zero-mean complex Gaussian noise. The baseline methods compared in the following subsections are classical OMP [46], CAMP, M-lasso, and the original SBM with the complex-to-real converting technique [47]; in the following, the original SBM is called RV-SBM. In addition, Section 4.1.2 presents the performance of the proposed method in ISAR imaging.
The stopping criterion for all algorithms is given as follows:
$$\frac{\|x^k - x^{k-1}\|_2^2}{\|x^{k-1}\|_2^2} \le tol \tag{69}$$
or
$$k = k_{\max} \tag{70}$$
where $tol = 2 \times 10^{-4}$ denotes the tolerance and $k_{\max} = 2000$ is the maximum number of iterations. All experiments are carried out in MATLAB 2016b on a PC with an Intel i7-7700K @ 4.2 GHz and 32 GB of memory.
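For reproducibility, the following Python sketch (our illustration of the setup described above; the paper's experiments were run in MATLAB) generates one random trial and evaluates the MSE, reusing `cv_sbm_l1` from the sketch in Section 3.4; the values λ = 0.005 and μ = 120 are taken from Section 4.5:

```python
import numpy as np

def make_test_problem(n=256, m=128, L=32, snr_db=15, seed=0):
    """Complex sparse test problem of Section 4: y = A x + eps."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n, dtype=complex)
    support = rng.choice(n, L, replace=False)
    x[support] = rng.standard_normal(L) + 1j * rng.standard_normal(L)
    A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
    y_clean = A @ x
    # Scale i.i.d. complex Gaussian noise to the target SNR
    noise = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    noise *= np.linalg.norm(y_clean) / np.linalg.norm(noise) * 10 ** (-snr_db / 20)
    return A, y_clean + noise, x

A, y, x_true = make_test_problem()
x_hat = cv_sbm_l1(y, A, lam=0.005, mu=120)
print(np.mean(np.abs(x_hat - x_true) ** 2))  # MSE
```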

4.1. An Illustrative Example

4.1.1. Complex-Valued Random Sparse Signal Recovery

In this subsection, an illustrative example is devised to demonstrate the effectiveness of the proposed method in comparison with OMP, CAMP, M-lasso, and RV-SBM. We consider signal and measurement dimensions of n = 256 and m = 128, respectively. Moreover, the sparsity level of x is fixed at L = 32, and the signal-to-noise ratio (SNR) is set to 15 dB.
Figure 1 compares the real and imaginary parts of the reconstructed signal for the baseline methods and the proposed CV-SBM. The blue circles represent the recovered signal, and the black stars denote the ground truth. Note that the zero-valued points of x are hidden in Figure 1 to emphasize the nonzero ones. As shown in Figure 1a,b, OMP accurately reconstructs five points in the real part and nine in the imaginary part (circle and star coincide). Unsurprisingly, many points fall far from their true positions, especially the 150th point in the imaginary part. Figure 1c,d exhibit the reconstruction result of CAMP, which yields nine well-recovered points in both the real and imaginary parts; outliers still exist, but fewer than OMP's. In Figure 1e,f, eight and nine points are accurately recovered in the real and imaginary parts by M-lasso, respectively. It can be seen that CAMP and M-lasso behave almost the same and better than OMP. Figure 1g,h give the recovery results of the original RV-SBM, with eight well-reconstructed points in the real part and 11 in the imaginary part. The proposed technique yields 10 and 15 accurately recovered points, shown in Figure 1i,j, the most among all the algorithms. A comparison of the numbers of accurately reconstructed points is presented in Table 1. In addition, the farthest outlier given by CV-SBM is of the same magnitude as RV-SBM's but far smaller than the others'. This demonstrates the effectiveness of CV-SBM for complex sparse signal recovery.

4.1.2. ISAR Imaging with Real Data

In this subsection, CV-SBM is applied to ISAR imaging with real data of a Yak-42 plane to demonstrate its superiority in comparison with the range-Doppler (RD) algorithm, the CS recovery method of [48], and RV-SBM. Detailed descriptions of the targets and data are provided in [49]. The main radar parameters are as follows: the signal bandwidth is 400 MHz with a carrier frequency of 10 GHz, corresponding to a range resolution of 0.375 m; the pulse repetition frequency is 100 Hz, i.e., 64 pulses within the dwell time [−0.32, 0.32] s are used in this experiment. Motion-compensated data are processed by the four aforementioned algorithms, with results shown in Figure 2a–d.
Figure 2a exhibits the result of the RD algorithm, which suffers from poor focusing and high side-lobes. In Figure 2b, many strong scatterers are extracted by [48]; however, several strong outliers, marked in red boxes, remain. Figure 2c shows that RV-SBM captures the target's geometry, and the number of outliers it recovers is smaller than that of [48]. In Figure 2d, the target's geometry is clear, and the scatterers marked in the red box, extracted by CV-SBM, are stronger than those in the same area recovered by [48] and RV-SBM. Furthermore, most of the outliers visible in Figure 2b,c are greatly suppressed by the proposed CV-SBM. This demonstrates the effectiveness of CV-SBM in processing real ISAR imaging data.

4.2. Robustness Against Measurement Noise

In this subsection, we test the robustness of the proposed technique against measurement noise. The experimental parameters are set as follows: the SNR varies from 5 dB to 20 dB, and the other parameters are fixed as in the previous subsection. For each SNR, we average the MSE over 100 independent trials, as shown in Figure 3.
As the SNR increases, the MSE of the proposed scheme declines, which implies that CV-SBM is robust to noise. Up to an SNR of 7 dB, the MSEs of CAMP and CV-SBM are almost the same, but beyond 7 dB, the MSE of CV-SBM drops below CAMP's and becomes the lowest among all the algorithms. Both OMP's and RV-SBM's MSEs exceed CV-SBM's throughout. In addition, CV-SBM behaves better than M-lasso except at SNR = 7 dB, where they are approximately the same. This demonstrates that the proposed algorithm is the most robust against measurement noise among the tested methods.

4.3. Influence of the Measurement Dimension

In this subsection, we examine how the dimension of the measurements influences the recovery result. We set n = 256, SNR = 15 dB, and L = 32, and vary m from 29 to 128. As in the previous subsection, we average the MSE over 100 independent trials, as shown in Figure 4. It can be seen that as the dimension of the measurements rises, the MSE of CV-SBM decreases: the larger the measurement dimension, the better the recovery performance of the proposed method.
In Figure 4, the MSE of CV-SBM is lower than RV-SBM's and M-lasso's. Except when the measurement dimension equals 52 or 59, the performance of the proposed method is better than CAMP's. Before the dimension reaches 75, the MSE of OMP is far worse than CV-SBM's. When the dimension exceeds 75, however, the MSE of OMP suddenly drops to 0.01 and then decreases slowly as the dimension grows, matching CV-SBM's at 90. When the dimension is larger than 90, the MSE of CV-SBM continues to decline and remains the lowest among all the algorithms.

4.4. Time Cost Assessment

In this subsection, the computational cost of the proposed method is measured as the signal dimension increases. To this end, we vary n from 128 to 1024 and fix SNR = 20 dB, m = 0.5n, and L = 0.125n. For each n, we conduct 20 independent trials and report the average CPU time, as shown in Figure 5.
The result shows that CV-SBM takes less CPU time than OMP and M-lasso at all tested dimensions. Before the dimension reaches 512, RV-SBM requires the least time; however, beyond 512, CV-SBM requires less CPU time than RV-SBM. This is because the complex-to-real transformation in RV-SBM expands the sensing matrix A to 2m × 2n, which leads to inverting a 2n × 2n matrix and takes more memory and time within the solving process, while CV-SBM only needs to invert an n × n complex matrix. Nevertheless, CV-SBM takes slightly more time than CAMP at large signal scales, owing to CAMP's design specifically for l1-norm problems, whereas CV-SBM involves a matrix inversion. However, CV-SBM still has great potential to surpass CAMP, considering that the gap between them is not large.

4.5. Performance Comparison with RV-SBM

The tests above have shown that the proposed CV-SBM presents remarkable performance compared with RV-SBM under the same experimental conditions. Thus, in this subsection, we focus on the convergence, time cost, and accuracy of CV-SBM and RV-SBM through experiments with various parameters. In the following experiments, the two main parameters of CV-SBM and RV-SBM are set to λ = 0.005 and μ = 120, while the stopping criterion (the tolerance tol and kmax) varies. The other experimental parameters are L = 0.125n and m = 0.5n, and the SNR takes the values 10 dB, 15 dB, and 20 dB. For each stopping criterion and SNR, 20 independent trials are carried out, and the average MSE, CPU time, and iteration count are reported. The relative improvement ("Promotion") achieved by CV-SBM is also listed where it performs better.
In the first test, we examine CV-SBM and RV-SBM at the small scale n = 256 and fix tol = 2e−4 and kmax = 2000, as shown in Table 2. It can be seen that at each SNR, RV-SBM reaches the stopping criterion of Equation (69) in the vast majority of trials and requires less CPU time and fewer iterations, while the MSE of CV-SBM is always superior. This implies that RV-SBM converges more rapidly, but this property also causes a severe performance loss. Besides, the convergence of CV-SBM is about five times slower than that of RV-SBM, but this gives CV-SBM more iterations to attain better accuracy. It should be pointed out that RV-SBM's average CPU time is not five times smaller than CV-SBM's because a minority of RV-SBM's trials still consume 2000 iterations.
To inspect the performance regardless of the convergence condition, we reduce the tolerance to tol = 2e−5 and keep the other parameters consistent with the previous experiment (Table 3). In every case in Table 3, both RV-SBM and CV-SBM stop by reaching the maximum number of iterations. As we can see, the MSE gap between CV-SBM and RV-SBM becomes extremely narrow, but it still exists, illustrating that CV-SBM achieves better performance by exploiting the phase information. Furthermore, CV-SBM still requires slightly more time than RV-SBM, as in the first experiment.
Finally, the large scale n = 512 is considered (Table 4). The tolerance tol and the maximum number of iterations kmax are set as in Table 3. CV-SBM achieves superior performance in terms of both MSE and time cost in comparison with RV-SBM. Specifically, CV-SBM yields 18.20%, 17.58%, and 26.67% lower MSE and requires 28.75%, 25.59%, and 23.64% less CPU time than RV-SBM in the 10 dB, 15 dB, and 20 dB SNR cases, respectively. This reveals that the proposed CV-SBM is well suited to large-scale complex-valued sparse signal recovery.

5. Conclusions

In this paper, a new CS recovery algorithm named CV-SBM is presented, which generalizes the widely employed SBM into the complex domain. CV-SBM provides theoretical support for directly reconstructing sparse signals with complex-valued variables instead of converting them into real ones. We apply the proposed CV-SBM to an l1-norm problem to recover a complex-valued sparse signal. Experimental results demonstrate the superiority of CV-SBM over other existing CS reconstruction methods, especially RV-SBM, in both recovery accuracy and time cost for large-scale cases.
A significant goal for future work lies in applying CV-SBM to more complicated regularization problems, since CV-BI and CV-SBM are theoretically able to deal with any convex optimization problem.

Author Contributions

Conceptualization, K.X. and G.Z.; Formal analysis, K.X.; Funding acquisition, K.X.; Investigation, K.X.; Methodology, K.X.; Project administration, G.Z. and G.S.; Resources, G.S.; Software, K.X.; Writing—original draft, K.X.; Writing—review & editing, K.X., G.Z. and Y.W.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Yu, Y.; Petropulu, A.P.; Poor, H.V. Compressive sensing for MIMO radar. In Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, Taiwan, 19–24 April 2009; pp. 3017–3020.
2. Bilik, I. Spatial Compressive Sensing for Direction-of-Arrival Estimation of Multiple Sources using Dynamic Sensor Arrays. IEEE Trans. Aerosp. Electron. Syst. 2011, 47, 1754–1769.
3. Zhu, L.; Wu, X.; Sun, Z.; Jin, Z.; Weiland, E.; Raithel, E.; Qian, T.; Xue, H. Compressed sensing accelerated 3-dimensional magnetic resonance cholangiopancreatography: Application in suspected pancreatic diseases. Investig. Radiol. 2018, 53, 150–157.
4. Hitomi, Y.; Gu, J.; Gupta, M.; Mitsunaga, T.; Nayar, S.K. Video from a single coded exposure photograph using a learned over-complete dictionary. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 287–294.
5. Duarte, M.F.; Davenport, M.A.; Takhar, D.; Laska, J.N.; Sun, T.; Kelly, K.F.; Baraniuk, R.G. Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag. 2008, 25, 83–91.
6. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
7. Candes, E.J.; Wakin, M.B. An Introduction to Compressive Sampling. IEEE Signal Process. Mag. 2008, 25, 21–30.
8. Figueiredo, M.A.; Nowak, R.D.; Wright, S.J. Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597.
9. Hale, E.; Yin, W.; Zhang, Y. A Fixed-Point Continuation Method for l1-Regularized Minimization with Applications to Compressed Sensing. Available online: https://www.caam.rice.edu/~yzhang/reports/tr0707.pdf (accessed on 17 October 2019).
10. Li, C.; Yin, W.; Jiang, H.; Zhang, Y. An efficient augmented Lagrangian method with applications to total variation minimization. Comput. Optim. Appl. 2013, 56, 507–530.
11. Bioucas-Dias, J.M.; Figueiredo, M.A. A New TwIST: Two-Step Iterative Shrinkage/Thresholding Algorithms for Image Restoration. IEEE Trans. Image Process. 2007, 16, 2992–3004.
12. Tipping, M.E. Sparse Bayesian learning and the relevance vector machine. J. Mach. Learn. Res. 2001, 1, 211–244.
13. Liu, S.; Jia, J.; Zhang, Y.D.; Yang, Y. Image Reconstruction in Electrical Impedance Tomography Based on Structure-Aware Sparse Bayesian Learning. IEEE Trans. Med. Imaging 2018, 37, 2090–2102.
14. Liu, S.; Wu, H.; Huang, Y.; Yang, Y.; Jia, J. Accelerated Structure-Aware Sparse Bayesian Learning for Three-Dimensional Electrical Impedance Tomography. IEEE Trans. Ind. Inform. 2019, 15, 5033–5041.
15. Haque, T.; Yazicigil, R.T.; Pan, K.J.; Wright, J.; Kinget, P.R. Theory and Design of a Quadrature Analog-to-Information Converter for Energy-Efficient Wideband Spectrum Sensing. IEEE Trans. Circuits Syst. I Regul. Pap. 2015, 62, 527–535.
16. Bellasi, D.E.; Rovatti, R.; Benini, L.; Setti, G. A Low-Power Architecture for Punctured Compressed Sensing and Estimation in Wireless Sensor-Nodes. IEEE Trans. Circuits Syst. I Regul. Pap. 2015, 62, 1296–1305.
17. Chen, F.; Chandrakasan, A.P.; Stojanovic, V. Design and Analysis of a Hardware-Efficient Compressed Sensing Architecture for Data Compression in Wireless Sensors. IEEE J. Solid-State Circuits 2012, 47, 744–756.
18. Lustig, M.; Donoho, D.L.; Pauly, J.M. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med. 2007, 58, 1182–1195.
19. Feng, L.; Grimm, R.; Block, K.T.; Chandarana, H.; Kim, S.; Xu, J.; Otazo, R. Golden-angle radial sparse parallel MRI: Combination of compressed sensing, parallel imaging, and golden-angle radial sampling for fast and flexible dynamic volumetric MRI. Magn. Reson. Med. 2014, 72, 707–717.
20. Yu, Y.; Petropulu, A.P.; Poor, H.V. MIMO Radar Using Compressive Sampling. IEEE J. Sel. Top. Signal Process. 2010, 4, 146–163.
21. Gogineni, S.; Nehorai, A. Target Estimation Using Sparse Modeling for Distributed MIMO Radar. IEEE Trans. Signal Process. 2011, 59, 5315–5325.
22. Tan, Z.; Yang, P.; Nehorai, A. Joint Sparse Recovery Method for Compressed Sensing with Structured Dictionary Mismatches. IEEE Trans. Signal Process. 2014, 62, 4997–5008.
23. Maleki, A.; Anitori, L.; Yang, Z.; Baraniuk, R.G. Asymptotic Analysis of Complex LASSO via Complex Approximate Message Passing (CAMP). IEEE Trans. Inf. Theory 2013, 59, 4290–4308.
24. Ollila, E. Direction of arrival estimation using robust complex Lasso. In Proceedings of the 2016 10th European Conference on Antennas and Propagation, Davos, Switzerland, 10–15 April 2016; pp. 1–5.
25. Donoho, D.L.; Maleki, A.; Montanari, A. Message passing algorithms for compressed sensing. Proc. Natl. Acad. Sci. USA 2009, 106, 18914–18919.
26. Zheng, L.; Maleki, A.; Liu, Q.; Wang, X.; Yang, X. An lp-based reconstruction algorithm for compressed sensing radar imaging. In Proceedings of the 2016 IEEE Radar Conference, Philadelphia, PA, USA, 2–6 May 2016; pp. 1–5.
27. Friedman, J.H.; Hastie, T.; Hofling, H.; Tibshirani, R. Pathwise coordinate optimization. Ann. Appl. Stat. 2007, 1, 302–332.
28. Goldstein, T.; Osher, S. The split Bregman method for L1-regularized problems. SIAM J. Imaging Sci. 2009, 2, 323–343.
29. Osher, S.; Burger, M.; Goldfarb, D.; Xu, J.; Yin, W. An Iterative Regularization Method for Total Variation-Based Image Restoration. Multiscale Model. Simul. 2005, 4, 460–489.
30. Yin, W.; Osher, S.; Goldfarb, D.; Darbon, J. Bregman iterative algorithms for l1-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 2008, 1, 143–168.
31. Shalaby, W.A.; Saad, W.; Shokair, M.; Dessouky, M.I. An efficient recovery algorithms using complex to real transformation of compressed sensing. In Proceedings of the 2016 33rd National Radio Science Conference, Aswan, Egypt, 22–25 February 2016; pp. 122–131.
32. Sharifnassab, A.; Kharratzadeh, M.; Babaie-Zadeh, M.; Jutten, C. How to use real-valued sparse recovery algorithms for complex-valued sparse recovery? In Proceedings of the 2012 20th European Signal Processing Conference, Bucharest, Romania, 28–31 August 2012; pp. 849–853.
33. Qin, J.; Guo, W. An efficient compressive sensing MR image reconstruction scheme. In Proceedings of the 2013 IEEE 10th International Symposium on Biomedical Imaging, San Francisco, CA, USA, 7–11 April 2013; pp. 306–309.
34. Bi, D.; Xie, Y.; Ma, L.; Li, X.; Yang, X.; Zheng, Y.R. Multifrequency Compressed Sensing for 2-D Near-Field Synthetic Aperture Radar Image Reconstruction. IEEE Trans. Instrum. Meas. 2017, 66, 777–791.
35. Zhang, Q.; Zhang, Y.; Mao, D.; Zhang, Y.; Huang, Y.; Yang, J. A Bayesian Super-Resolution Method for Forward-Looking Scanning Radar Imaging Based on Split Bregman. In Proceedings of the 2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; pp. 5135–5138.
36. Liu, L.; Huang, W.; Wang, C.; Zhang, X.; Liu, B. SAR image super-resolution based on TV-regularization using gradient profile prior. In Proceedings of the 2016 CIE International Conference on Radar, Guangzhou, China, 10–13 October 2016; pp. 1–4.
37. Nasser, A.; Elsabrouty, M. Adaptive Split Bregman for sparse and low rank massive MIMO channel estimation. In Proceedings of the 2016 23rd International Conference on Telecommunications, Thessaloniki, Greece, 16–18 May 2016; pp. 1–5.
38. Bregman, L.M. The relaxation method of finding the common points of convex sets and its application to the solution of problems in convex programming. Comput. Math. Math. Phys. 1967, 7, 200–217.
39. Bouboulis, P.; Slavakis, K.; Theodoridis, S. Adaptive Learning in Complex Reproducing Kernel Hilbert Spaces Employing Wirtinger's Subgradients. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 425–438.
40. Cai, J.; Osher, S.; Shen, Z. Split Bregman methods and frame based image restoration. Multiscale Model. Simul. 2010, 8, 337–369.
41. Li, W.; Li, Q.; Gong, W.; Tang, S. Total variation blind deconvolution employing split Bregman iteration. J. Vis. Commun. Image Represent. 2012, 23, 409–417.
42. Kreutz-Delgado, K. The Complex Gradient Operator and the CR-Calculus. arXiv 2009, arXiv:0906.4835.
43. Ollila, E. Adaptive LASSO based on joint M-estimation of regression and scale. In Proceedings of the 2016 24th European Signal Processing Conference, Budapest, Hungary, 29 August–2 September 2016; pp. 2191–2195.
44. Acar, R.; Vogel, C.R. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Probl. 1994, 10, 1217–1229.
45. Chen, G.; Teboulle, M. Convergence Analysis of a Proximal-Like Minimization Algorithm Using Bregman Functions. SIAM J. Optim. 1993, 3, 538–543.
46. Tropp, J.A.; Gilbert, A.C. Signal Recovery from Random Measurements via Orthogonal Matching Pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666.
47. Li, S.; Chen, W.; Liu, W.; Yang, J.; Ma, X. Fast 2D super resolution ISAR imaging method under low signal-to-noise ratio. IET Radar Sonar Navig. 2017, 11, 1495–1504.
48. Zhang, L.; Xing, M.; Qiu, C.; Li, J.; Bao, Z. Achieving Higher Resolution ISAR Imaging with Limited Pulses via Compressed Sampling. IEEE Geosci. Remote Sens. Lett. 2009, 6, 567–571.
49. Wang, Y.; Jiang, Y. A Novel Algorithm for Estimating the Rotation Angle in ISAR Imaging. IEEE Geosci. Remote Sens. Lett. 2008, 5, 608–609.
Figure 1. Comparison of the real and imaginary parts of the reconstruction results by OMP, CAMP, M-lasso, RV-SBM, and the proposed CV-SBM: (a) recovery performance for the real part of x by OMP; (b) recovery performance for the imaginary part of x by OMP; (c) recovery performance for the real part of x by CAMP; (d) recovery performance for the imaginary part of x by CAMP; (e) recovery performance for the real part of x by M-lasso; (f) recovery performance for the imaginary part of x by M-lasso; (g) recovery performance for the real part of x by RV-SBM; (h) recovery performance for the imaginary part of x by RV-SBM; (i) recovery performance for the real part of x by CV-SBM; (j) recovery performance for the imaginary part of x by CV-SBM.
Figure 2. Comparison of ISAR imaging by RD [48], RV-SBM, and the proposed CV-SBM: (a) imaging result by RD; (b) imaging result by [48]; (c) imaging result by RV-SBM; (d) imaging result by CV-SBM.
Figure 3. Average MSE in different measurement noise levels.
Figure 4. Average MSE in different measurements dimensions.
Figure 5. Average CPU time cost in different signal dimensions.
Table 1. Comparison of recovery performance by OMP, CAMP, M-lasso, RV-SBM, and the proposed CV-SBM.

Method  | Well-Recovered Points (Real Part of x) | Well-Recovered Points (Imaginary Part of x)
OMP     | 5  | 9
CAMP    | 9  | 9
M-lasso | 8  | 9
RV-SBM  | 8  | 11
CV-SBM  | 10 | 15
Table 2. Comparison of CV-SBM and RV-SBM when tol = 2e−4, kmax = 2000, and n = 256.

SNR (dB) | Average MSE (CV-SBM / RV-SBM / Promotion) | Average CPU Time (s) (CV-SBM / RV-SBM / Promotion) | Average Iterations (CV-SBM / RV-SBM)
10 dB | 0.0426 / 0.0592 / 28.04% | 0.0892 / 0.0258 / N/A | 2000.0 / 728.4
15 dB | 0.0125 / 0.0523 / 76.10% | 0.0885 / 0.0167 / N/A | 1991.3 / 387.3
20 dB | 0.0031 / 0.0484 / 93.60% | 0.0766 / 0.0167 / N/A | 1740.9 / 375.3
Table 3. Comparison of CV-SBM and RV-SBM when tol = 2e−5, kmax = 2000, and n = 256.

SNR (dB) | Average MSE (CV-SBM / RV-SBM / Promotion) | Average CPU Time (s) (CV-SBM / RV-SBM / Promotion) | Average Iterations (CV-SBM / RV-SBM)
10 dB | 0.0426 / 0.0489 / 12.88% | 0.0892 / 0.0537 / N/A | 2000 / 2000
15 dB | 0.0125 / 0.0146 / 14.38% | 0.0874 / 0.0519 / N/A | 2000 / 2000
20 dB | 0.0029 / 0.0042 / 30.95% | 0.0865 / 0.0517 / N/A | 2000 / 2000
Table 4. Comparison of CV-SBM and RV-SBM when tol = 2e−4, kmax = 2000, and n = 512.

SNR (dB) | Average MSE (CV-SBM / RV-SBM / Promotion) | Average CPU Time (s) (CV-SBM / RV-SBM / Promotion) | Average Iterations (CV-SBM / RV-SBM)
10 dB | 0.0445 / 0.0544 / 18.20% | 0.2706 / 0.3798 / 28.75% | 2000 / 2000
15 dB | 0.0136 / 0.0165 / 17.58% | 0.2608 / 0.3505 / 25.59% | 2000 / 2000
20 dB | 0.0033 / 0.0045 / 26.67% | 0.2633 / 0.3448 / 23.64% | 2000 / 2000
