A Convex Optimization Algorithm for Compressed Sensing in a Complex Domain: The Complex-Valued Split Bregman Method

The Split Bregman method (SBM), a popular and versatile CS reconstruction algorithm for inverse problems with both l1-norm and TV-norm regularization, has been extensively applied in complex domains through the complex-to-real transforming technique, e.g., in MRI imaging and radar. However, SBM still has untapped potential in complex applications for two reasons: the Bregman Iteration (BI) employed in SBM may not make good use of the phase information of complex variables, and the transforming technique incurs extra computation time. To address these issues, this paper presents the complex-valued Split Bregman method (CV-SBM), which theoretically generalizes the original SBM to the complex domain. The complex-valued Bregman distance (CV-BD) is first defined and substituted for the corresponding regularization term in the inverse problem. Then, we propose the complex-valued Bregman Iteration (CV-BI) to solve this new problem. The well-definedness and convergence of CV-BI are analyzed in detail according to complex-valued calculation rules and optimization theory. These properties show that CV-BI can solve inverse problems provided that the regularization is convex. Nevertheless, CV-BI needs the help of other algorithms for various kinds of regularization. To avoid the dependence on extra algorithms and simplify the iteration process simultaneously, we adopt the variable separation technique and propose CV-SBM for solving convex inverse problems. Simulation results on complex-valued l1-norm problems illustrate the effectiveness of the proposed CV-SBM. CV-SBM exhibits remarkable superiority over SBM combined with the complex-to-real transforming technique. Specifically, for a large signal scale of n = 512, CV-SBM yields 18.2%, 17.6%, and 26.7% lower mean square error (MSE) and takes 28.8%, 25.6%, and 23.6% less time than the original SBM at 10 dB, 15 dB, and 20 dB SNR, respectively.


Introduction
Compressed sensing (CS) theory has been thoroughly analyzed and extensively applied in the signal processing [1,2] and image processing communities [3][4][5] during the past decades. CS theory indicates that a sparse signal can be reconstructed from far fewer measurements than the Nyquist rate requires [6,7]. Specifically, an unknown vector x ∈ R^n can be recovered by solving an inverse problem with a sparsity-promoting regularization term, such as the l1-norm or total variation (TV) norm, as follows:

Review of Bregman Iteration and Split Bregman Method
SBM, whose main idea is to decompose the original unconstrained problem into several equivalent subproblems solved by BI [29,30], has shown its efficiency and effectiveness for inverse problems with both l 1 -norm and TV-norm regularization [28]. For the convenience of the following illustration, we will first present a brief review of BI and SBM.

Bregman Iteration
BI addresses the optimization problem in which J(x) is a real convex function, not necessarily differentiable, and H is a real convex and differentiable function. By replacing J(x) with the corresponding Bregman distance (BD) D_J^p(u, v) [29,38], BI tackles Equation (3) as follows: where p ∈ ∂J(v) is a subgradient of J at the point v, ∂J(v) is the subdifferential of J at v, ⟨f, g⟩ = f^T g* denotes the inner product for all f, g ∈ C^n, and (·)* denotes the conjugate. To make this clear, we give the definitions of the subgradient and subdifferential. Definition 1. Let T : R^m → R be a convex function defined on the real domain. A vector ∇^s_x T(z_0) ∈ R^m is said to be a subgradient of T at z_0 if T(z) ≥ T(z_0) + ⟨z − z_0, ∇^s_x T(z_0)⟩. The set of all subgradients of T at z_0 is called the subdifferential of T at z_0 and is denoted by ∂_z T(z_0). More details about the subgradient and subdifferential can be found in [39].
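A minimal numerical illustration of the real-domain BD for the l1-norm; `subgrad_l1` is a hypothetical helper that picks one valid subgradient (sign(v_i), with 0 chosen at v_i = 0):

```python
import numpy as np

def l1(x):
    return np.abs(x).sum()

def subgrad_l1(v):
    # One valid subgradient of the l1-norm: sign(v_i) where v_i != 0,
    # and 0 (a valid choice from [-1, 1]) where v_i == 0.
    return np.sign(v)

def bregman_distance(u, v):
    # D_J^p(u, v) = J(u) - J(v) - <u - v, p>, with p in the subdifferential at v.
    p = subgrad_l1(v)
    return l1(u) - l1(v) - np.dot(u - v, p)

u = np.array([1.0, -2.0, 0.5])
v = np.array([0.5, -1.0, 0.0])
d = bregman_distance(u, v)
assert d >= 0                        # non-negativity, from the subgradient inequality
assert bregman_distance(v, v) == 0   # the distance of a point to itself vanishes
```

Non-negativity is exactly the subgradient inequality of Definition 1 rearranged.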
A key property of the BD is that it retains the convexity of J(x), so that (5) remains a convex problem. Furthermore, it can be used to measure the closeness of two points with respect to J(x) [29,30].
As for the l 1 -norm and TV-norm problems, Equation (6) is easily calculated, whereas dealing with (5) is more complicated and needs the help of another algorithm [30], such as GPSR [8] or FPC [9].
To avoid the dependence on extra algorithms and simplify the iteration process simultaneously, SBM is presented.

Split Bregman Method
SBM aims to find a solution to the unconstrained problem where Φ is a linear operator. SBM introduces an auxiliary variable d and considers a constrained problem equivalent to Equation (7): min x,d The corresponding unconstrained version of (8) can be formulated as follows: where E(x, d) = ||d||_1 + H(x) and µ is a positive constant that balances the two terms in Equation (9) within the iterations. Since the two unknown variables completely fulfill the requirements of BI for optimization problems, BI can be performed on each of them: Let b^k = p_d^k/µ; then the iterations become: For (13), one can decompose it into two subproblems solved alternately, i.e., working out x^k by fixing d^{k−1} and then obtaining d^k by fixing x^k. The problems above can be computed conveniently by solving zero-subgradient equations. Evidently, the combination of BI and SBM can be adopted to settle plenty of convex optimization problems in real systems [40,41]. However, the BD and BI are established in the real domain and consequently do not take complex variables and phase information into account. Specifically, once the variables are complex, the BD becomes complex-valued and can no longer be employed as a measure of closeness; thus, we can no longer use the BD as the objective function. In the following section, we generalize the original BI and SBM to the complex domain theoretically.

Wirtinger Calculus and Wirtinger's Subgradients
As is well known, convex optimization theory requires the differentiability of the objective function. For T(c) = T_R(c) + jT_I(c) in complex variables c = c_R + jc_I, complex differentiability is equivalent to satisfying the Cauchy-Riemann conditions: Consider a complex-valued l1-norm regularization problem: where H(x) = λ||y − Ax||_2^2, J(x) = ||x||_1, y ∈ C^m, A ∈ C^{m×n}. Apparently, F(x) does not obey (17), so calculating the complex gradient directly is unavailable.
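A quick numerical illustration of this failure, using T(c) = |c|^2 as a stand-in for the real-valued objectives above (the helper names are hypothetical):

```python
import numpy as np

def numgrad(f, x, y, h=1e-6):
    # Central-difference partial derivatives of a function f(x, y).
    dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return dfdx, dfdy

# T(c) = |c|^2 = c_R^2 + c_I^2, so u(c_R, c_I) = c_R^2 + c_I^2 and v = 0.
u = lambda x, y: x ** 2 + y ** 2
v = lambda x, y: 0.0

cR, cI = 1.0, 0.5
du_dx, du_dy = numgrad(u, cR, cI)
dv_dx, dv_dy = numgrad(v, cR, cI)

# Cauchy-Riemann would require du/dx == dv/dy and du/dy == -dv/dx;
# here du/dx = 2*cR = 2 while dv/dy = 0, so the conditions fail.
assert abs(du_dx - dv_dy) > 1.0
assert abs(du_dy + dv_dx) > 0.5
```

Any non-constant real-valued function of a complex variable fails the conditions in the same way, which is why Wirtinger calculus is needed below.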
To overcome this problem, an alternative tool for computing the complex gradient was recently brought to light: Wirtinger's calculus [39]. It relaxes the strict requirement of complex differentiability and allows the complex gradient to be computed by simple rules and principles.
A key point in Wirtinger's calculus is the definition of Wirtinger's gradient (W-gradient) and the conjugate Wirtinger's gradient (CW-gradient), where ∇_{c_R}T(c) and ∇_{c_I}T(c) represent the gradients of T with respect to c_R and c_I, which can be obtained in the traditional way. According to Equations (19) and (20), one can calculate the W-gradient of c* and the CW-gradient of c: Since both the W-gradient of c* and the CW-gradient of c are equal to zero, in Wirtinger's calculus we can treat c and c* as two independent variables, which is the main device that allows us to exploit the elegance of Wirtinger's calculus. Here is an example: if T(c) = c(c*)^2, then we have ∇_c T(c) = (c*)^2 and ∇_{c*}T(c) = 2cc*. More details and examples can be found in [42].
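The example T(c) = c(c*)^2 can be checked numerically; the finite-difference construction below assumes the usual operator convention ∇_c = (∂/∂c_R − j∂/∂c_I)/2 and ∇_{c*} = (∂/∂c_R + j∂/∂c_I)/2, and is a sketch rather than part of the paper:

```python
import numpy as np

def wirtinger_grads(T, c, h=1e-6):
    # W-gradient:  (1/2) (dT/dc_R - j dT/dc_I)
    # CW-gradient: (1/2) (dT/dc_R + j dT/dc_I)
    dT_dx = (T(c + h) - T(c - h)) / (2 * h)           # partial in the real part
    dT_dy = (T(c + 1j * h) - T(c - 1j * h)) / (2 * h)  # partial in the imaginary part
    return 0.5 * (dT_dx - 1j * dT_dy), 0.5 * (dT_dx + 1j * dT_dy)

T = lambda c: c * np.conj(c) ** 2       # the example T(c) = c (c*)^2
c0 = 0.7 + 0.3j
g_c, g_cstar = wirtinger_grads(T, c0)

assert np.isclose(g_c, np.conj(c0) ** 2, atol=1e-5)          # grad_c T = (c*)^2
assert np.isclose(g_cstar, 2 * c0 * np.conj(c0), atol=1e-5)  # grad_c* T = 2 c c*
```

Treating c and c* as independent variables gives the same answers as these finite differences, which is the practical content of the rule above.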
In general, for a convex function of complex variables, the optimality condition is that the CW-gradient equals the zero vector. Nevertheless, in practice, some functions are not differentiable everywhere, e.g., the l1-norm in F(x) at zero. In this case, conjugate Wirtinger's subgradients (CW-subgradients) [39] can be adopted to construct the gradient path towards the optimal point. For a real convex function of complex variables T : C^n → R, we define a CW-subgradient ∇^s_{c*}T(c) = (∇^s_{c_R}T(c) + j∇^s_{c_I}T(c))/2, and it satisfies T(z) ≥ T(c) + 2 Re⟨z − c, ∇^s_{c*}T(c)⟩, where ∇^s_{c_R}T(c) and ∇^s_{c_I}T(c) denote the subgradients of T at c_R and c_I. The set of all CW-subgradients of T at c is called the Wirtinger differential of T at c and is denoted by ∂_{c*}T(c). Note that at a differentiable point of T, the Wirtinger differential contains only one element, namely the CW-gradient. The Wirtinger differentials of the modulus |x_i| and of H(x) are presented as follows [43].
where i is the index of the element of vector x and s is some complex number satisfying |s| ≤ 1. Then, a necessary and sufficient condition for the optimal solution of Equation (18) is that 0 ∈ ∂_{x*}F(x) [43]. With the definition of the CW-subgradient in hand, we can generalize the BD into the complex domain in the following subsection.

CV Bregman Distance
To prevent the BD from becoming complex-valued, we first generalize the BD into the complex domain and introduce the CV Bregman distance (CV-BD).

Definition 2. Let J be a real convex function of complex variables and p ∈ ∂_{v*}J(v) a CW-subgradient of J at v. We define D_J^p(u, v) = J(u) − J(v) − 2 Re⟨u − v, p⟩ as a CV-BD associated with J. Clearly, no matter whether the variables u and v lie in the real or the complex domain, D_J^p(u, v) is always a real-valued scalar. According to (24), a CV-BD is non-negative.
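These two properties (real-valuedness and non-negativity) can be checked numerically; the sketch below assumes the CW-subgradient convention ∇_{x*}|x_i| = x_i/(2|x_i|) and is not the paper's code:

```python
import numpy as np

def l1(x):
    return np.abs(x).sum()

def cw_subgrad_l1(v):
    # CW-subgradient of the complex l1-norm under the convention
    # grad_{x*} |x_i| = x_i / (2|x_i|); 0 is a valid choice at x_i = 0.
    g = np.zeros_like(v)
    nz = v != 0
    g[nz] = v[nz] / (2 * np.abs(v[nz]))
    return g

def cv_bregman_distance(u, v):
    # D_J^p(u, v) = J(u) - J(v) - 2 Re<u - v, p>, with p in the Wirtinger
    # differential of J at v; real-valued even for complex u and v.
    p = cw_subgrad_l1(v)
    return l1(u) - l1(v) - 2 * np.real(np.vdot(p, u - v))

rng = np.random.default_rng(0)
u = rng.normal(size=5) + 1j * rng.normal(size=5)
v = rng.normal(size=5) + 1j * rng.normal(size=5)
d = cv_bregman_distance(u, v)
assert np.isreal(d) and d >= 0                 # real-valued and non-negative
assert abs(cv_bregman_distance(v, v)) < 1e-12  # zero at coinciding points
```

Non-negativity here is the CW-subgradient inequality applied elementwise to the moduli |u_i| and |v_i|.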
To ensure that the CV-BD can be utilized as the objective function in place of the BD, Lemma 1 and Lemma 2 below prove that the CV-BD has the same convexity as J(x) and can measure the closeness of two points with respect to J. Proof. Assume J is a real convex function and let θ ∈ [0, 1], x, y, v ∈ C^n, and p ∈ ∂_{v*}J(v). Then we get: Considering that J is a real convex function, J satisfies: This completes the proof that D_J^p(u, v) is convex in the variable u whenever J is convex. If J is a real strictly convex function, assume θ ∈ (0, 1), x, y, v ∈ C^n, and x ≠ y. Then J satisfies: and according to Equations (28), (29), and (32) we obtain: which proves that D_J^p(u, v) is strictly convex in u whenever J is strictly convex. It can thus be concluded that D_J^p(u, v) has the same convexity property as J in the variable u for each v. Lemma 2. Let D_J^p(u, v) be a CV-BD associated with a real strictly convex function J, and assume that a point w = θu + (1 − θ)v lies on the line segment connecting u and v, where u, v ∈ C^n and θ ∈ (0, 1).
and equality holds if and only if u = v.
Proof. Assume u ≠ v; then, according to Lemma 1, we derive: This completes the proof of Lemma 2. Hence, the CV-BD between two points of a convex function J decreases as the points get closer and equals zero if and only if the two points coincide. This property makes the CV-BD a measure of the closeness of two points. Thus, inspired by the original BI, we replace the real convex function J(x) in the objective function with the CV-BD between the variable to be solved and the current solution: Within the iterations, the CV-BD is nonincreasing; this will be proved in the next subsection.
Obviously, Q^k(x) is convex because both H(x) and the CV-BD are convex. However, the CV-BD D_J^{p^{k−1}}(x, x^{k−1}) may be multivalued at a nondifferentiable x^{k−1}, which inevitably interferes with the solution of x^k. As we shall prove below, this issue is not vital, since the CV-BI introduced in the following subsection automatically chooses a suitable CW-subgradient when dealing with Equation (36). CV-BI for Equation (36) is proposed directly, and its well-definedness and convergence are proved in the following.

CV-BI Algorithm
Generally, we can initialize x^0 and p^0 arbitrarily, provided that p^0 ∈ ∂_{x*}J(x^0). Nevertheless, for any x^0 ≠ 0, the CW-subgradient requires additional calculation, which is not desirable in practice; hence we take x^0 = 0 and p^0 = 0.

Definition of the Iteration
In this subsection, we reveal that the iterative procedure in Algorithm 1 is well defined. Specifically, a minimizer x k exists in Q k (x) and the iteration can find an appropriate CW-subgradient p k automatically.

Proposition 1.
Assume that H(x) = λ||y − Ax||_2^2, J(x) is convex and bounded, and let x^0 = 0, p^0 = 0 ∈ ∂_{x*}J(x^0). Then, for each k ∈ N, there exists a minimizer x^k of Q^k(x), together with an appropriate CW-subgradient p^k ∈ ∂_{x*}J(x^k). Moreover, if A has a trivial null space, the minimizer x^k is unique.
Proof. We prove the result by induction. For k = 1, Q^1(x) reduces to the original function F(x), for which the existence of minimizers and the optimality condition p^1 + q^1 = p^0 = 0 are well known [44].
In addition, letting r^k = λ(y − Ax^k), we have p^1 = A^H r^1. We now proceed from k − 1 to k and assume that p^{k−1} = A^H r^{k−1} exists. To prove that minimizers exist, we first discuss the boundedness of Q^k(x). Recalling that the l2-norm is nonnegative, Q^k(x) can be estimated as: Since only J(x) is nonconstant, the boundedness of Q^k(x) implies the boundedness of J(x). This shows that the level sets of Q^k are weak-* compact [29]. Hence, a minimizer of Q^k exists by optimization theory. Besides, if A has a trivial null space, then H(x), and hence Q^k(x), is strictly convex, so the minimizer is unique. This completes the proof of the existence of minimizers for all k > 1.
We next prove that p^k and q^k exist for all k > 1. From the optimality conditions for Q^k(x) we derive that: Recalling the assumption that p^{k−1} exists, one gets that ∂_{x*}J(x^k) and ∂_{x*}H(x^k) are nonempty, which yields the existence of p^k and q^k, which also satisfy Equation (38).
Recalling Equation (38) and p^0 = 0, we obtain that: The well-definedness of CV-BI has thus been established. The whole CV-BI can be summarized as follows: x^k = arg min

Algorithm 1: CV-BI
Initialization: x^0 = 0, p^0 = 0, λ, k = 1 While "stopping criterion is not met" do x^k = arg min_x Q^k(x); p^k = p^{k−1} − ∇_{x*}H(x^k); k = k + 1
Reviewing the entire proof, one finds that CV-BI is able to handle any regularization term J(x) in Equation (18), provided that J(x) is a real convex function of complex variables. Furthermore, since each step of Algorithm 1 obeys the optimization rules in the complex domain instead of converting the objective function Q^k(x) and the variable x into the real domain, one can conclude that CV-BI preserves the phase information of complex variables.

Convergence Analysis
In this subsection, the convergence property of CV-BI is analyzed. To be specific, two monotonicity properties are proved with the help of the CV-BD.

Proposition 2.
Under the above assumptions, the sequence H(x^k) obtained from CV-BI is monotonically nonincreasing: Moreover, let x be such that J(x) < ∞; then we even have: Proof. Recalling the nonnegativity of the CV-BD and that x^k is the minimizer of the convex function Q^k(x), we obtain: which implies Equation (45). Motivated by the identity for the original BD [45], we can derive: Considering the definition of the CW-subgradient and q^k ∈ ∂_{x*}H(x^k), we obtain: which is equivalent to Equation (46).

Proposition 3.
Under the same assumptions as Proposition 2, let x̃ be a minimizer of H(x) with J(x̃) < ∞. Then we have: which proves Equation (50). The results of Equations (45) and (50) yield a general convergence conclusion for CV-BI. More details about the convergence can be found in [29].

CV-SBM
For various kinds of regularization terms in Equation (43), CV-BI still has to employ other algorithms, as BI does, which makes the solution process complicated and computationally expensive. Inspired by SBM, we separate the original variable and present CV-SBM to settle convex inverse problems with simplified solutions.
A constrained optimization problem in complex variables can be transformed into an unconstrained one: Evidently, F(x, d) is convex in both x and d. Thus, applying CV-BI to Equation (54) in each variable, we derive: To simplify the iteration step of Equation (55), we let b^k = p_d^k/µ and get: where C_1 is a constant. Substituting Equation (58) into Equations (55)-(57) yields: One can solve Equation (59) by an alternating minimization scheme with respect to x and d: The above two subproblems can be worked out easily. Considering the properties of CV-BI, it can be inferred that CV-SBM is likewise universal for any convex regularization J(x).
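Collecting the updates, the alternating scheme can be written compactly. The constant factors below reflect one common splitting convention (how µ is absorbed varies between presentations), so this is a sketch rather than a reproduction of the paper's numbered equations:

```latex
\begin{aligned}
x^{k} &= \arg\min_{x}\; \lambda \,\lVert y - A x \rVert_2^2
        + \frac{\mu}{2}\,\lVert d^{k-1} - \Phi x - b^{k-1} \rVert_2^2,\\
d^{k} &= \arg\min_{d}\; \lVert d \rVert_1
        + \frac{\mu}{2}\,\lVert d - \Phi x^{k} - b^{k-1} \rVert_2^2
       = \operatorname{shrink}\!\left(\Phi x^{k} + b^{k-1},\, \tfrac{1}{\mu}\right),\\
b^{k} &= b^{k-1} + \Phi x^{k} - d^{k}.
\end{aligned}
```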
The overall CV-SBM is shown as Algorithm 2.

Algorithm 2: CV-SBM
Initialization: x^0 = 0, d^0 = 0, p^0 = 0, λ, µ, k = 1 While "stopping criterion is not met" do
Assuming Φ = I, we can work out Equation (18) through CV-SBM in three steps [35]:
Step 1: Solve the x subproblem. Since the l2-norm is differentiable, Equation (63) can be tackled by setting the CW-gradient with respect to x equal to zero, which yields:
Step 2: Solve the d subproblem. This subproblem can be dealt with by a shrinkage operator:
Step 3: CV-SBM for the l1-norm problem can be presented as Algorithm 3.

Algorithm 3: CV-SBM for l1-norm problem
Initialization: x^0 = 0, d^0 = 0, b^0 = 0, λ, µ, k = 1 While "stopping criterion is not met" do
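As a concrete sketch of the l1-norm CV-SBM, the following Python snippet implements the three steps for Φ = I under the µ/2 splitting convention. The parameter values and the random test problem are illustrative assumptions, not the paper's settings or reference code:

```python
import numpy as np

def shrink(z, t):
    # Complex soft-thresholding: shrinks the modulus, preserves the phase.
    mag = np.abs(z)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-300)) * z, 0)

def cv_sbm_l1(A, y, lam=10.0, mu=10.0, n_iter=500):
    """Sketch of CV-SBM for min ||x||_1 + lam * ||y - A x||_2^2 with Phi = I."""
    m, n = A.shape
    x = np.zeros(n, dtype=complex)
    d = np.zeros(n, dtype=complex)
    b = np.zeros(n, dtype=complex)
    # Step 1 normal equations: (lam A^H A + (mu/2) I) x = lam A^H y + (mu/2)(d - b)
    M = lam * A.conj().T @ A + (mu / 2) * np.eye(n)
    Ay = lam * A.conj().T @ y
    for _ in range(n_iter):
        x = np.linalg.solve(M, Ay + (mu / 2) * (d - b))  # Step 1: x subproblem
        d = shrink(x + b, 1.0 / mu)                      # Step 2: d subproblem
        b = b + x - d                                    # Bregman variable update
    return x

# Small noiseless demo with a hypothetical random complex sensing matrix.
rng = np.random.default_rng(1)
n, m, L = 40, 20, 4
x_true = np.zeros(n, dtype=complex)
idx = rng.choice(n, L, replace=False)
x_true[idx] = rng.normal(size=L) + 1j * rng.normal(size=L)
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2 * m)
y = A @ x_true
x_hat = cv_sbm_l1(A, y)
assert np.linalg.norm(y - A @ x_hat) / np.linalg.norm(y) < 0.3
```

The shrinkage acts on the modulus only, which is exactly how phase information is preserved relative to thresholding real and imaginary parts separately.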

Numerical Experiments
This section presents the performance of the proposed CV-SBM through a wide range of experiments solving l1-norm problems in the complex domain. We apply the proposed method to recover a complex-valued random sparse signal x from the noisy measurements y generated by y = Ax + ε, where x ∈ C^n, y ∈ C^m, A ∈ C^{m×n}, ε ∈ C^m. The sparse signal x consists of L nonzero elements, and the amplitudes of the real and imaginary parts of both x and A obey the Gaussian distribution N(0, 1). The noise vector ε is assumed to be i.i.d. zero-mean complex Gaussian noise. The baseline methods compared against the proposed scheme in the following subsections are the classical OMP [46], CAMP, M-lasso, and the original SBM with the complex-to-real transforming technique [47]. Note that in the following, the original SBM is called RV-SBM. In addition, Section 4.1.2 presents the performance of the proposed method in ISAR imaging.
The stopping criterion for all algorithms is that the relative change of the iterate falls below the tolerance tol = 2 × 10^{-4} or that the maximum number of iterations k_max = 2000 is reached. All experiments are carried out in MATLAB 2016b on a PC with an Intel i7-7700K @ 4.2 GHz and 32 GB of memory.
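The measurement model above can be sketched in a few lines; the SNR-scaled noise generation below is a standard construction and an assumption about how the paper sets ε (function and parameter names are hypothetical):

```python
import numpy as np

def gen_problem(n=256, m=128, L=32, snr_db=15.0, rng=None):
    # Synthetic complex-valued CS instance y = A x + eps: L-sparse x with
    # Gaussian real/imaginary amplitudes, complex Gaussian A, and i.i.d.
    # complex Gaussian noise scaled to the target SNR.
    rng = rng or np.random.default_rng(0)
    x = np.zeros(n, dtype=complex)
    support = rng.choice(n, L, replace=False)
    x[support] = rng.normal(size=L) + 1j * rng.normal(size=L)
    A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
    clean = A @ x
    sig_power = np.mean(np.abs(clean) ** 2)
    noise_power = sig_power / 10 ** (snr_db / 10)
    eps = np.sqrt(noise_power / 2) * (rng.normal(size=m) + 1j * rng.normal(size=m))
    return A, x, clean + eps

A, x, y = gen_problem()
snr_est = 10 * np.log10(np.mean(np.abs(A @ x) ** 2) /
                        np.mean(np.abs(y - A @ x) ** 2))
assert abs(snr_est - 15.0) < 1.0   # empirical SNR close to the target
```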

Complex-Valued Random Sparse Signal Recovery
In this subsection, an illustrative example demonstrates the effectiveness of the proposed method in comparison with OMP, CAMP, M-lasso, and RV-SBM. The signal and measurement dimensions are n = 256 and m = 128, respectively. Moreover, the sparsity level of x is fixed at L = 32 and the signal-to-noise ratio (SNR) is set to 15 dB. Figure 1 contrasts the real and imaginary parts of the signals reconstructed by the baseline methods and the proposed CV-SBM. The blue circles represent the recovered signal and the black stars denote the ground truth. Note that zero-valued points of x are hidden to emphasize the nonzero ones in Figure 1. As shown in Figure 1a,b, OMP accurately reconstructs five and nine points (circle and star coincide) in the real and imaginary parts, respectively. Unsurprisingly, plenty of points land far from their true positions, especially the 150th point in the imaginary part. Figure 1c,d exhibit the reconstruction result of CAMP, which yields nine well-recovered points in both the real and imaginary parts; outliers also exist, but fewer than OMP's. In Figure 1e,f, eight and nine points are accurately recovered in the real and imaginary parts by M-lasso, respectively. CAMP and M-lasso thus behave almost the same, both better than OMP. Figure 1g,h give the recovery results of the original RV-SBM, whose real part shows eight well-reconstructed points and whose imaginary part shows 11. The proposed technique yields 10 and 15 accurately recovered points, shown in Figure 1i,j, the most among the algorithms. A comparison of the numbers of accurately reconstructed points is presented in Table 1. In addition, the furthest outlier given by CV-SBM is of the same magnitude as RV-SBM's but far smaller than the others'. This proves the effectiveness of CV-SBM for complex sparse signal recovery.

ISAR Imaging with Real Data
In this subsection, CV-SBM is applied to ISAR imaging with real data of a Yak-42 plane to demonstrate its superiority in comparison with RV-SBM, the range-Doppler (RD) algorithm, and the CS recovery method of [48]. Detailed descriptions of the targets and data are provided in [49].

In Figure 2b, many strong scatterers are extracted by the method of [48]; however, several strong outliers remain, marked in red boxes. Figure 2c indicates the target's geometry as recovered by RV-SBM; besides, the number of outliers recovered by RV-SBM is smaller than that of [48]. In Figure 2d, the target's geometry is clear, and the scatterers extracted by CV-SBM, marked in the red box, are stronger than those in the same area obtained by [48] and RV-SBM. Furthermore, most of the outliers shown in Figure 2b,c are greatly suppressed by the proposed CV-SBM. This proves the effectiveness of CV-SBM in processing real ISAR imaging data.

Robustness Against Measurement Noise

In this subsection, we test the robustness of the proposed technique against measurement noise. The experimental parameters are set as follows: the SNR varies from 5 dB to 20 dB, and the other parameters are fixed as in the previous subsection. For each SNR, we average the MSE over 100 independent trials, as shown in Figure 3.

As the SNR increases, the MSE of the proposed scheme declines, which implies that CV-SBM is robust to noise. Before the SNR reaches 7 dB, the MSEs of CAMP and CV-SBM are almost the same; beyond 7 dB, the MSE of CV-SBM drops below CAMP's and becomes the lowest among all the algorithms. Both OMP's and RV-SBM's MSEs exceed CV-SBM's. In addition, CV-SBM behaves better than M-lasso, except at 7 dB SNR, where they are approximately the same. This demonstrates that the proposed algorithm has the best robustness against measurement noise among the methods compared.


Influence of the Measurement Dimension

In this subsection, we examine how the dimension of the measurements influences the recovery result. We set n = 256, SNR = 15 dB, L = 32, and vary m from 29 to 128. As in the previous subsection, we average the MSE over 100 independent trials, as shown in Figure 4. As the dimension of the measurements rises, the MSE of CV-SBM decreases: the larger the measurement dimension, the better the recovery performance of the proposed method.

In Figure 4, the MSE of CV-SBM is lower than RV-SBM's and M-lasso's. Except when the measurement dimension equals 52 and 59, the proposed method also outperforms CAMP. Below a dimension of 75, the MSE of OMP is far worse than CV-SBM's; when the dimension exceeds 75, OMP's MSE suddenly drops to 0.01 but then decreases slowly, becoming equal to CV-SBM's at 90. For dimensions larger than 90, the MSE of CV-SBM continues to decline and remains the lowest among all the algorithms.


Time Cost Assessment
In this subsection, the computational cost of the proposed method is measured with increasing dimension of the signal. To this end, we vary n from 128 to 1024 and fix SNR = 20 dB, m = 0.5n, L = 0.125n. For each n, we conduct 20 independent trials and average the CPU time cost as the result, as shown in Figure 5.
The result shows that CV-SBM takes less CPU time than OMP and M-lasso at all tested dimensions. Below a dimension of 512, RV-SBM requires the least time; beyond 512, however, CV-SBM requires less CPU time than RV-SBM. This is because the complex-to-real transformation utilized in RV-SBM expands the sensing matrix A to 2m × 2n, which leads to a 2n × 2n matrix inverse and consumes more memory and time in the solving process, while CV-SBM only needs an n × n complex matrix inverse. Nevertheless, CV-SBM takes slightly more time than CAMP at large signal scales, owing to CAMP's specific design for l1-norm problems, whereas CV-SBM involves a matrix inversion. However, the gap between CAMP and CV-SBM is small, so CV-SBM still has the potential to overtake CAMP.
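The dimension expansion responsible for RV-SBM's extra cost can be verified in a few lines; this block assumes the standard real isomorphism (the ordering in the transform of [47] may differ):

```python
import numpy as np

def complex_to_real(A):
    # The kind of transformation used by RV-SBM: a complex m x n operator
    # becomes the real 2m x 2n block matrix [[Re(A), -Im(A)], [Im(A), Re(A)]].
    return np.block([[A.real, -A.imag], [A.imag, A.real]])

rng = np.random.default_rng(2)
m, n = 8, 16
A = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
x = rng.normal(size=n) + 1j * rng.normal(size=n)

Ar = complex_to_real(A)                 # shape (2m, 2n): four times the entries
xr = np.concatenate([x.real, x.imag])
y = A @ x
assert Ar.shape == (2 * m, 2 * n)
assert np.allclose(Ar @ xr, np.concatenate([y.real, y.imag]))
```

The stacked system reproduces A x exactly, but any linear solve now runs on a 2n × 2n real matrix instead of an n × n complex one, which is the source of the memory and time overhead discussed above.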


Performance Comparison with RV-SBM
The tests above have shown that the proposed CV-SBM delivers remarkable performance compared with RV-SBM in the same experimental environment. Thus, in this subsection, we focus on the convergence, time cost, and performance of CV-SBM and RV-SBM through experiments with various parameters. In the following experiments, the two main parameters of CV-SBM and RV-SBM are set to λ = 0.005 and µ = 120, and the stopping criterion (the tolerance tol and k_max) varies. Furthermore, the other experimental parameters are as follows: L = 0.125n, m = 0.5n, and the SNR takes the values 10 dB, 15 dB, and 20 dB. For each stopping criterion and SNR, 20 independent trials were carried out, and the average MSE, CPU time, and iteration count are reported. The relative improvements of the proposed method in average MSE and CPU time are also reported where CV-SBM performs better.
In the first test, we examine CV-SBM and RV-SBM at the small scale n = 256 with tol = 2 × 10^{-4} and k_max = 2000, as shown in Table 2. In each SNR situation, RV-SBM reaches the stopping criterion of Equation (69) in the vast majority of trials and requires less CPU time and fewer iterations, while the MSE of CV-SBM is always superior. This implies that RV-SBM converges more rapidly, but this property also incurs a severe performance loss. Besides, the convergence of CV-SBM is about five times slower than that of RV-SBM, which provides CV-SBM more iterations to attain better performance. It should be pointed out that the average CPU time of RV-SBM is not one-fifth of CV-SBM's, because a minority of RV-SBM trials still consume 2000 iterations. To inspect the performance regardless of the convergence condition, we reduce the tolerance to tol = 2 × 10^{-5} and keep the other parameters as in the previous experiment (Table 3). In every situation in Table 3, both RV-SBM and CV-SBM stop by reaching the maximum number of iterations. As we can see, the MSE gap between CV-SBM and RV-SBM is extremely narrow but still exists. This illustrates that CV-SBM achieves better performance by employing the phase information. Furthermore, CV-SBM still requires a little more time than RV-SBM, as in the first experiment. Finally, the large scale n = 512 is considered (Table 4). The tolerance tol and the maximum number of iterations k_max are set as in Table 3. CV-SBM achieves superior performance in terms of both MSE and time cost in comparison with RV-SBM. Specifically, CV-SBM yields 18.20%, 17.58%, and 26.67% lower MSE and requires 28.75%, 25.59%, and 23.64% less CPU time than RV-SBM in the three SNR cases, respectively. This reveals that the proposed CV-SBM is highly applicable to large-scale complex-valued sparse signal recovery.

Conclusions
In this paper, a new CS recovery algorithm named CV-SBM is presented, which generalizes the widely employed SBM to the complex domain. CV-SBM provides theoretical support for directly reconstructing sparse signals with complex-valued variables, instead of converting them into real ones. We apply the proposed CV-SBM to an l1-norm problem to recover a complex-valued sparse signal. Experimental results demonstrate the superiority of CV-SBM over other existing CS reconstruction methods, especially RV-SBM, in both recovery accuracy and time cost for large-scale cases.
A significant goal for future work lies in applying CV-SBM to more complicated regularization problems, since CV-BI and CV-SBM are theoretically able to deal with any convex optimization problem.