Article

A Regularized Weighted Smoothed L0 Norm Minimization Method for Underdetermined Blind Source Separation

College of Information and Communication Engineering, Harbin Engineering University, Harbin 150001, China
* Author to whom correspondence should be addressed.
Sensors 2018, 18(12), 4260; https://doi.org/10.3390/s18124260
Submission received: 22 October 2018 / Revised: 28 November 2018 / Accepted: 30 November 2018 / Published: 4 December 2018
(This article belongs to the Special Issue Sensor Signal and Information Processing II)

Abstract

Compressed sensing (CS) theory has attracted widespread attention in recent years and has been widely applied in signal and image processing, e.g., underdetermined blind source separation (UBSS) and magnetic resonance imaging (MRI). As the core step of CS, sparse signal reconstruction aims to recover the original signal accurately and efficiently from an underdetermined linear system of equations (ULSE). For this problem, we propose a new algorithm called the weighted regularized smoothed L0-norm minimization algorithm (WReSL0). Under the framework of this algorithm, we make three contributions: (1) a new smoothed function called the compound inverse proportional function (CIPF) is proposed; (2) a new weighted function is proposed; and (3) a new regularization form is derived and constructed. In this algorithm, the weighted function and the new smoothed function are combined as the sparsity-promoting objective, and the regularization term enhances the de-noising performance. Simulation experiments on both real signals and real images show that the proposed WReSL0 algorithm outperforms other popular approaches, such as SL0, BPDN, NSL0, and Lp-RLS, and achieves better performance when applied to UBSS.

1. Introduction

The problem that UBSS [1,2] needs to address is how to separate multiple signals from a small number of sensors. The essence of this problem is to find the optimal solution of the underdetermined linear system of equations (ULSE). Fortunately, as a new undersampling technique, compressed sensing (CS) [3,4,5] is an effective way to solve the ULSE, which makes it possible to apply CS to UBSS.
The model of CS is shown in Figure 1. From this figure, it can be seen that CS boils down to the form,
y = Φx + b,
where Φ = [ϕ₁, ϕ₂, …, ϕ_n] ∈ ℝ^{m×n} is a sensing matrix with m < n and ϕ_i ∈ ℝ^m, i = 1, 2, …, n. It can be further represented as Φ = ψφ, where ψ is a random matrix and φ is the sparse basis matrix. y ∈ ℝ^m is the vector of measurements, and b ∈ ℝ^m denotes the additive noise.
To solve the ULSE in Equation (1), we try to recover the sparse signal x from the given {y, Φ} by CS. According to CS, this problem is transformed into solving the L0-norm minimization problem:
(P₀)  arg min_{x ∈ ℝ^n} ‖x‖₀,  s.t.  ‖Φx − y‖₂² ≤ ϵ,
where ϵ denotes the error tolerance. This rather bold approach is supported by a brilliant theory [6]: in the noiseless case, it is proven that the sparsest solution is indeed the real signal when x is sufficiently sparse and Φ satisfies the restricted isometry property (RIP) [7]:
1 − δ_K ≤ ‖Φx‖₂² / ‖x‖₂² ≤ 1 + δ_K,
where K is the sparsity of signal x and δ_K ∈ (0, 1) is a constant. In Equation (2), the L0-norm is nonsmooth, which leads to an NP-hard problem. In practice, two alternative approaches are usually employed to solve the problem [8]:
  • Greedy search by using the known sparsity as a constraint;
  • The relaxation method for the P 0 .
For greedy search, the main methods are based on greedy matching pursuit (GMP) algorithms, such as orthogonal matching pursuit (OMP) [9,10], stage-wise orthogonal matching pursuit (StOMP) [11], regularized orthogonal matching pursuit (ROMP) [12], compressive sampling matching pursuit (CoSaMP) [13], generalized orthogonal matching pursuit (GOMP) [14,15], and subspace pursuit (SP) [16,17] algorithms. The objective function of these algorithms is given by:
arg min_{x ∈ ℝ^n} (1/2)‖Φx − y‖₂²,  s.t.  ‖x‖₀ ≤ K.
As shown in the above equation, the features of GMP algorithms can be concluded as:
  • Using sparsity as prior information;
  • Using the least squares error as the iterative criterion.
The advantage of GMP algorithms is their low computational complexity, but their reconstruction accuracy is not high in the noisy case.
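To make the greedy-search idea concrete, the following is a minimal sketch of orthogonal matching pursuit (OMP), one of the GMP algorithms cited above; this is our own illustrative implementation, not code from any of the referenced papers:

```python
import numpy as np

def omp(Phi, y, K):
    """Minimal orthogonal matching pursuit: greedy search with known sparsity K."""
    n = Phi.shape[1]
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(K):
        # pick the atom (column) most correlated with the current residual
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit on the enlarged support
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(n)
    x[support] = coef
    return x
```

Note how the sketch uses exactly the two features listed above: the known sparsity K bounds the loop, and a least-squares residual drives each greedy selection.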
At present, the relaxation method for P₀ is widely used. The relaxation method is mainly divided into two categories: the constraint-type algorithm and the regularization method. The constraint-type algorithm can further be divided into L1-norm minimization methods and smoothed L0-norm minimization methods. The representative algorithm of the former is the BP algorithm [18], and of the latter, the smoothed L0-norm minimization (SL0) algorithm. For the SL0 algorithm, the objective function can be expressed as:
(P_F)  arg min_{x ∈ ℝ^n} F_σ(x),  s.t.  ‖Φx − y‖₂² ≤ ϵ,  with  lim_{σ→0} F_σ(x) = lim_{σ→0} Σ_{i=1}^n f_σ(x_i) = ‖x‖₀,
where F_σ(x) is a smoothed function that approximates the L0-norm as σ → 0. Compared with the L1- or Lp-norm, a small σ can be selected to make the function close to the L0-norm [8]; therefore, the minimizer of F_σ(x) is closer to the optimal solution.
Based on the idea of approximation, Mohimani used a Gauss function to approximate the L0-norm [19], which is described as:
f_σ(x_i) = 1 − exp(−x_i²/(2σ²)).
From this equation, we have:
f_σ(x_i) ≈ 1 if |x_i| ≫ σ,  and  f_σ(x_i) ≈ 0 if |x_i| ≪ σ.
When σ is a small enough positive value, the Gauss function is almost equal to the L0-norm. Furthermore, the Gauss function is differentiable and smooth; hence, it can be optimized by methods such as gradient descent (GD). Zhao proposed another smoothed function, the hyperbolic tangent (tanh) [20]:
f_σ(x_i) = [exp(x_i²/(2σ²)) − exp(−x_i²/(2σ²))] / [exp(x_i²/(2σ²)) + exp(−x_i²/(2σ²))] = tanh(x_i²/(2σ²)).
This smoothed function makes a closer approximation to the L0-norm than the Gauss function of [19] for the same σ; hence, it performs better in sparse signal recovery. Indeed, a large number of simulation experiments confirm this view.
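This ordering of the two surrogates can be checked numerically; the following is our own small sketch (σ = 0.5 is an arbitrary illustrative value), which evaluates both functions on a grid and confirms that the tanh curve dominates the Gauss curve pointwise:

```python
import numpy as np

sigma = 0.5
r = np.linspace(-3.0, 3.0, 601)
t = r**2 / (2 * sigma**2)
gauss = 1.0 - np.exp(-t)   # Gauss surrogate of [19]
tanh_f = np.tanh(t)        # tanh surrogate of [20] (the exponential ratio above)
# Both vanish at r = 0 and tend to 1 for |r| >> sigma, but the tanh curve
# lies above the Gauss curve for every r != 0, i.e., closer to the L0 indicator.
```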
Another relaxation method is the regularization method. For CS, sparse signal recovery in the noise case is a very practical and unavoidable problem. Fortunately, the regularization method makes the solution of this problem possible [21,22]. The regularization method can be described as a “relaxation” approach that tries to solve the following unconstrained recovery problem:
(P_υ)  arg min_{x ∈ ℝ^n} (1/2)‖Φx − y‖₂² + λυ(x),
where λ > 0 is the parameter that balances the trade-off between the deviation term ‖Φx − y‖₂² and the sparsity regularizer υ(x). The sparse prior information is enforced via the regularizer υ(x), and a proper υ(x) is crucial to the success of the sparse signal recovery task: it should favor sparse solutions while ensuring that problem P_υ can be solved efficiently.
For regularization, various sparsity regularizers have been proposed as relaxations of the L0-norm. The most popular are the convex L1-norm [22,23] and the nonconvex Lp-norm to the pth power [24,25]. In the noiseless case, the L1-norm is equivalent to the L0-norm, and the L1-norm is the only norm that is both sparsity-promoting and convex; hence, it can be optimized by convex optimization methods. However, according to [8], in the noisy case, the L1-norm is not exactly equivalent to the L0-norm, so its sparsity-promoting effect is limited. Compared to the L1-norm, the nonconvex Lp-norm to the pth power makes a closer approximation to the L0-norm; therefore, Lp-norm minimization has better sparse recovery performance [8].
In view of the above explanation, in this paper, a compound inverse proportional function (CIPF) function is proposed as a new smoothed function, and a new weighted function is proposed to promote sparsity. For the noise case, a new regularization form is derived and constructed to enhance de-noising performance. The experimental simulation verifies the superior performance of this algorithm in signal and image recovery, and it has achieved good results when applied to UBSS.
This paper is organized as follows: Section 2 introduces the main work of this paper. The steps of the WReSL0 algorithm and the selection of related parameters are described in Section 3. Experimental results are presented in Section 4 to evaluate the performance of our approach. Section 5 verifies the effect of the proposed weighted regularized smoothed L0-norm minimization (WReSL0) algorithm in UBSS. Section 6 concludes this paper.

2. Main Work of This Paper

In this paper, based on problem P_F in Equation (9), we propose a new objective function, which is given by:
arg min_{x ∈ ℝ^n} WᵀH_σ(x),  s.t.  ‖Φx − y‖₂² ≤ ϵ.
In this formulation, we not only propose a smoothed function approximating the L0-norm, but also a weighted function to promote sparsity. This section focuses on W = [w₁, w₂, …, w_n]ᵀ and H_σ(x).

2.1. New Smoothed Function: CIPF

According to [26], some properties of smoothed functions are summarized in the following:
Property: Let f : ℝ → [−∞, +∞] and define f_σ(r) ≜ f(r/σ) for any σ > 0. The function f has the property if:
(a) f is real analytic on (r₀, ∞) for some r₀;
(b) ∀ r ≥ 0, f(r) ≤ ϵ₀, where ϵ₀ > 0 is some constant;
(c) f is convex on ℝ;
(d) f(r) = 0 ⟺ r = 0;
(e) lim_{r→+∞} f(r) = 1.
It follows immediately from the Property that {f_σ(r)} converges to the L0-norm as σ → 0⁺, i.e.,
lim_{σ→0⁺} f_σ(r) = 0 if r = 0, and 1 otherwise.
Based on this Property, this paper proposes a new smoothed function model called CIPF, which satisfies the Property and better approximates the L0-norm. The smoothed function model is given as:
f_σ(r) = 1 − σ²/(αr² + σ²).
In Equation (12), α denotes a regularization factor, which is a large constant; by experiment, α = 10 was found to give good simulation results. σ represents a smoothing factor: the smaller it is, the closer the proposed model is to the L0-norm. Obviously, lim_{σ→0} f_σ(r) = 0 for r = 0 and 1 for r ≠ 0; approximately, f_σ(r) ≈ 0 for |r| ≪ σ and f_σ(r) ≈ 1 for |r| ≫ σ. Let:
H_σ(x) = Σ_{i=1}^n f_σ(x_i) = n − Σ_{i=1}^n σ²/(αx_i² + σ²),
where H_σ(x) ≈ ‖x‖₀ for small values of σ, and the approximation tends to equality as σ → 0.
Figure 2 shows the effect of the CIPF model approximating the L0-norm. Obviously, the CIPF model makes a better approximation.
In conclusion, the merits of the CIPF model can be summarized as follows:
  • It closely approximates the L 0 -norm;
  • It is simpler in form than the Gauss and tanh function models.
These merits make it possible to reduce the computational complexity on the premise of ensuring the accuracy of sparse signal reconstruction, which is of practical significance for sparse signal reconstruction.
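The convergence of H_σ(x) to ‖x‖₀ can be illustrated with a few lines of code; this is our own sketch (the test vector and σ values are arbitrary), using α = 10 as chosen above:

```python
import numpy as np

def cipf(x, sigma, alpha=10.0):
    """CIPF surrogate of Eq. (12): f_sigma(r) = 1 - sigma^2 / (alpha*r^2 + sigma^2)."""
    return 1.0 - sigma**2 / (alpha * x**2 + sigma**2)

x = np.array([0.0, 0.0, 0.7, 0.0, -1.2, 0.3, 0.0, 0.0])   # ||x||_0 = 3
for sigma in (1.0, 0.1, 0.01):
    print(sigma, cipf(x, sigma).sum())   # H_sigma(x) approaches 3 as sigma shrinks
```

Zero entries contribute exactly 0 for every σ, so the whole error comes from the nonzero entries, and it vanishes as σ → 0.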

2.2. New Weighted Function

Candès et al. [27] proposed the weighted L 1 -norm minimization method, which employs the weighted norm to enhance the sparsity of the solution. They provided an analytical result of the improvement in the sparsity recovery by incorporating the weighted function with the objective function. Pant et al. [28] applied another weighted smoothed L 0 -norm minimization method, which uses a similar weighted function to promote sparsity. The weighted function can be summarized as follows:
  • Candès et al.: w_i = 1/|x_i| if x_i ≠ 0, and w_i = ∞ if x_i = 0;
  • Pant et al.: w_i = 1/(|x_i| + ζ), where ζ is a small enough positive constant.
From the two weighted functions, we can observe a common pattern: a large signal entry x_i is weighted with a small w_i; conversely, a small signal entry x_i is weighted with a large w_i. The large w_i forces the solution x to concentrate on the indices where w_i is small, and by construction, these correspond precisely to the indices where x is nonzero.
Combined with the above idea, we propose a new weighted function, which is given by:
w_i = e^{−|x_i|/σ},  i = 1, 2, …, n.
For the weighted function of Candès et al., when a signal entry is zero or close to zero, the value of w_i becomes very large, which is unsuitable for computation. Although Pant et al. noticed this problem and modified the weighted function to avoid it, the constant ζ depends on experience. The proposed weighted function avoids both problems. Moreover, it follows the pattern above: a small signal entry x_i is weighted with a large w_i, and a large signal entry x_i is weighted with a small w_i, which brings the contributions of large and small entries closer together. In this way, the direction of optimization is kept as consistent as possible, and the optimization process tends toward a better optimum. Therefore, the proposed weighted function has a better effect.
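The boundedness argument above is easy to verify numerically; the following is our own sketch (the test entries and the values σ = 0.1, ζ = 10⁻⁶ are illustrative assumptions):

```python
import numpy as np

sigma, zeta = 0.1, 1e-6
x = np.array([0.0, 1e-8, 0.05, 1.0])

# Candes et al.: w_i = 1/|x_i| is infinite at zero entries.
# Pant et al.:   w_i = 1/(|x_i| + zeta) is finite, but zeta is chosen by experience.
w_pant = 1.0 / (np.abs(x) + zeta)
# Proposed:      w_i = exp(-|x_i|/sigma) stays in (0, 1] with no extra constant,
# and is still monotonically decreasing in |x_i|, as the pattern above requires.
w_prop = np.exp(-np.abs(x) / sigma)
```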

3. New Algorithm for CS: WReSL0

3.1. WReSL0 Algorithm and Its Steps

Here, in order to analyze the problem more clearly, we rewrite Equation (10) as follows:
arg min_{x ∈ ℝ^n} WᵀH_σ(x),  s.t.  ‖Φx − y‖₂² ≤ ϵ,
where H_σ(x) = I − σ²/(αx² + σ²) (operations taken elementwise; I ∈ ℝⁿ is the all-ones vector) is a differentiable smoothed accumulated function, and the weighted vector is W = e^{−|x|/σ}. Therefore, we can obtain the gradient of CIPF, which is written as:
G = ∂H_σ(x)/∂x = 2ασ²x / (αx² + σ²)².
According to Equation (15), as in [28], we can obtain:
W ⊙ G = e^{−|x|/σ} ⊙ [2ασ²x / (αx² + σ²)²],
where ⊙ denotes the elementwise product.
Solving the ULSE amounts to solving the optimization problem in Equation (10). For this problem, there are many methods, such as split Bregman methods [29,30,31], FISTA [32], alternating direction methods [33], gradient descent (GD) [34], etc. In order to reduce the computational complexity, this paper adopts the GD method to optimize the proposed objective function.
Given a small target value σ_min and a sufficiently large initial value σ_max, and referring to the annealing mechanism in simulated annealing [35], this paper uses a monotonically decreasing sequence {σ_t | t = 1, 2, …, T}, which is generated as:
σ_t = σ_max · θ^{−γ(t−1)},  t = 1, 2, …, T,
where γ = log_θ(σ_max/σ_min)/(T − 1), θ is a constant larger than one, and T is the maximum number of iterations. Using such a monotonically decreasing sequence avoids a too-small σ leading to a local optimum.
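A short sketch of this annealing schedule (our own code; note that after substituting γ, the sequence is actually independent of θ, which is why any θ > 1 is acceptable):

```python
import numpy as np

def sigma_sequence(sigma_max, sigma_min, T, theta=2.0):
    """sigma_t = sigma_max * theta**(-gamma*(t-1)), gamma = log_theta(smax/smin)/(T-1)."""
    gamma = np.log(sigma_max / sigma_min) / (np.log(theta) * (T - 1))
    t = np.arange(1, T + 1)
    return sigma_max * theta ** (-gamma * (t - 1))
```

By construction, the first term equals σ_max, the last equals σ_min, and the sequence decreases monotonically in between.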
Similar to SL0, WReSL0 also consists of two nested iterations: the external loop, which begins with a sufficiently large value of σ, i.e., σ_max, and is responsible for the gradually decreasing strategy in Equation (17); and the internal loop, which, for each value of σ, finds the minimizer of the weighted objective on {x | ‖Φx − y‖₂² ≤ ϵ}.
According to the GD algorithm, the internal loop consists of the gradient descent step, which is given by:
x̂ = x + μd,
where d = −W ⊙ G is the descent direction and μ denotes a step size factor. This part is similar to SL0, followed by solving the problem:
arg min_{x ∈ ℝ^n} ‖x − x̂‖₂²,  s.t.  ‖Φx − y‖₂² ≤ ϵ,
where the minimizer x* denotes the optimal solution. By regularization, this form can be converted to the following:
arg min_{x ∈ ℝ^n} ‖x − x̂‖₂² + λ‖Φx − y‖₂²,
where λ is the regularization parameter, which balances the fit of the solution to the data y against the closeness of the solution to x̂. Weighted least squares (WLS) can be used to solve this problem, and the solution is:
x* = ( [I_n; Φ]ᴴ diag(I_n, λI_m) [I_n; Φ] )⁻¹ [I_n; Φ]ᴴ diag(I_n, λI_m) [x̂; y].
By calculation, Equation (21) is equivalent to:
x* = (I_n + λΦᴴΦ)⁻¹ (x̂ + λΦᴴy),
where I_n and I_m are identity matrices of size n × n and m × m, respectively. Therefore, we can obtain:
x* − x̂ = (I_n + λΦᴴΦ)⁻¹ (x̂ + λΦᴴy) − x̂
= (I_n + λΦᴴΦ)⁻¹ [ (x̂ + λΦᴴy) − (I_n + λΦᴴΦ)x̂ ]
= (I_n + λΦᴴΦ)⁻¹ (λΦᴴy − λΦᴴΦx̂)
= −(λ⁻¹I_n + ΦᴴΦ)⁻¹ Φᴴ(Φx̂ − y).
According to the above analysis and derivation, we get:
x* = x̂ − (λ⁻¹I_n + ΦᴴΦ)⁻¹ Φᴴ(Φx̂ − y).
The initial value of each internal loop is the solution obtained for the previous, larger value of σ. To increase the speed, the internal loop is repeated a fixed and small number of times (L); in other words, we do not wait for the GD method to converge in the internal loop.
According to the explanation above, we can summarize the steps of the proposed WReSL0 algorithm, which are given in Table 1. As for σ, it can be shown that H_σ(x) remains convex in the region where the largest magnitude component of x is less than σ. As the algorithm starts at the initial value x⁽⁰⁾ = Φᴴ(ΦΦᴴ)⁻¹y, the above choice of σ₁ ensures that the optimization starts in a convex region, which greatly facilitates the convergence of the WReSL0 algorithm.
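The two nested loops can be sketched as follows. This is our own minimal Python rendering of the steps described above (the function name, the parameter defaults, and the assumption of real-valued data are ours, not the authors' reference code); it combines the weighted gradient step with the regularized correction of Equation (23):

```python
import numpy as np

def wresl0(Phi, y, sigma_min=0.01, alpha=10.0, lam=0.1, T=30, L=5):
    """Minimal sketch of the WReSL0 iteration (not the authors' reference code)."""
    m, n = Phi.shape
    # initial point: minimum-L2-norm solution x(0) = Phi^H (Phi Phi^H)^{-1} y
    x = Phi.T @ np.linalg.solve(Phi @ Phi.T, y)
    sigma_max = np.sqrt(alpha) * np.max(np.abs(x))
    # decreasing sigma sequence (simplified form of Eq. (17))
    sigmas = sigma_max * (sigma_min / sigma_max) ** (np.arange(T) / (T - 1))
    # fixed matrix of the correction x* = x - (I/lam + Phi^T Phi)^{-1} Phi^T (Phi x - y)
    C = np.linalg.inv(np.eye(n) / lam + Phi.T @ Phi)
    for sigma in sigmas:                              # external loop
        mu = sigma**2 / (2 * alpha)                   # step size from Section 3.2.1
        for _ in range(L):                            # internal loop, L fixed steps
            w = np.exp(-np.abs(x) / sigma)            # proposed weights
            g = 2 * alpha * sigma**2 * x / (alpha * x**2 + sigma**2) ** 2
            x = x - mu * w * g                        # weighted gradient descent step
            x = x - C @ (Phi.T @ (Phi @ x - y))       # regularized correction, Eq. (23)
    return x
```

On a small noiseless test problem, this sketch recovers a sparse vector from half as many random measurements, mirroring the behavior reported in Section 4.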

3.2. Selection of Parameters

The selection of the parameters μ and σ affects the performance of the WReSL0 algorithm; thus, this section discusses the selection of these two parameters.

3.2.1. Selection of Parameter μ

According to the algorithm, each iteration consists of a descent step x_i ← x_i − μ e^{−|x_i|/σ} · 2ασ²x_i/(αx_i² + σ²)², 1 ≤ i ≤ n, followed by a projection step. If for some values of i we have |x_i| ≫ σ, the descent step hardly changes the value of x_i; however, it might be changed in the projection step. Looking for a suitably large μ, a suitable choice is one that forces all those values of x satisfying |x_i| ≪ σ toward zero. Therefore, we require:
x_i − μ e^{−|x_i|/σ} · 2ασ²x_i/(αx_i² + σ²)² ≈ 0,
and, since:
e^{−|x_i|/σ} → 1 as x_i → 0,
combining Equations (24) and (25), we can further obtain:
x_i − μ · 2ασ²x_i/(αx_i² + σ²)² ≈ 0.
By calculation, we obtain:
μ ≈ (αx_i² + σ²)²/(2ασ²) |_{x_i→0} = σ²/(2α).
According to the above derivation, we have come to the conclusion that μ ≈ σ²/(2α); therefore, we set μ = σ²/(2α).
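As a quick numerical check of this choice (our own illustration; α = 10 as chosen earlier, and σ = 0.1 and the entry x_i = 10⁻⁴ are assumed test values), a single descent step with μ = σ²/(2α) drives a small entry essentially to zero:

```python
import numpy as np

alpha, sigma = 10.0, 0.1
mu = sigma**2 / (2 * alpha)              # derived step size

x_i = 1e-4                               # an entry with |x_i| << sigma
w = np.exp(-abs(x_i) / sigma)            # weight ~= 1 near zero
g = 2 * alpha * sigma**2 * x_i / (alpha * x_i**2 + sigma**2) ** 2
x_new = x_i - mu * w * g                 # a single descent step
# x_new is reduced by about three orders of magnitude, as the derivation predicts
```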

3.2.2. Selection of Parameter σ

According to Equation (17), the descending sequence of σ can equivalently be generated by σ_t = σ_max (σ_min/σ_max)^{(t−1)/(T−1)} (obtained by simplifying Equation (17)). The parameters σ_min and σ_max should be appropriately selected, as discussed below.
For the initial value of σ, i.e., σ_max, let x̃ = max{|x|} and suppose there is a constant b; in order to make the algorithm converge quickly, let σ_max satisfy:
H_{σ_max}(x̃) = 1 − σ_max²/(αx̃² + σ_max²) ≤ b  ⟹  σ_max ≥ √((1 − b)α/b) · x̃.
From the inequality, the constant b must satisfy (1 − b)/b ≥ 0; thus 0 < b ≤ 1, and here we define b as 0.5. Hence, σ_max = √α · max{|x|}.
For the final value σ_min: when σ_min → 0, H_{σ_min}(x) → ‖x‖₀. That is, the smaller σ_min is, the better H_{σ_min}(x) reflects the sparsity of signal x, but at the same time, it becomes more sensitive to noise; therefore, σ_min should not be too small. Following [19], we choose σ_min = 0.01.

4. Performance Simulation and Analysis

The numerical simulation platform is MATLAB 2017b, which is installed on a computer with a Windows 10, 64-bit operating system. The CPU of the simulation computer is the Intel (R) Core (TM) i5-3230M, and the frequency is 2.6 GHz. In this section, the performance of the WReSL0 algorithm is verified by signal and image recovery in the noise case.
Here, some state-of-the-art algorithms are selected for comparison. The parameters are selected to obtain the best performance for each algorithm: for the BPDN algorithm [36], the regularization parameter λ = σ_N√(2 log n); for the SL0 algorithm [19], the initial smoothing factor δ_max = 2 max{|x|}, the final smoothing factor δ_min = 0.01, the number of inner iterations L = 5, and the attenuation factor ρ = 0.8; for the NSL0 algorithm [20], δ_max = 4 max{|x|}, δ_min = 0.01, L = 10, and ρ = 0.8; for the Lp-RLS algorithm [24], the number of iterations T = 80, the initial norm value p₁ = 1, the final norm value p_T = 0.1, the initial regularization factor ϵ₁ = 1, the final regularization factor ϵ_T = 0.01, and the algorithm termination threshold E_t = 10⁻²⁵; for the WReSL0 algorithm, the initial smoothing factor σ_max = c·max{|x|}, the final smoothing factor σ_min = 0.01, the number of iterations T = 30, the number of inner iterations L = 5, and the regularization parameter λ = 0.1. All experiments are based on 100 trials.

4.1. Signal Recovery Performance in the Noise Case

In this part, we discuss signal recovery performance in the noisy case. We add noise b to the measurement vector y; moreover, b = δ_N·Ω, where Ω is randomly generated following the Gaussian distribution N(0, 1). For signal recovery under noise, we evaluate the performance of the algorithms by the normalized mean squared error (NMSE) and the CPU running time (CRT). NMSE is defined as ‖x − x̂‖₂/‖x‖₂. CRT is measured with tic and toc. In order to analyze the de-noising performance of the WReSL0 algorithm in a context closer to the real situation, we constructed the following signal as the experimental object:
x₁ = α₁ sin(2πf₁T_s t),  x₂ = β₁ cos(2πf₂T_s t),  x₃ = α₂ sin(2πf₃T_s t),  x₄ = β₂ cos(2πf₄T_s t),  X = x₁ + x₂ + x₃ + x₄,
where α₁ = 0.2, α₂ = 0.1, β₁ = 0.3, and β₂ = 0.4; f₁ = 50 Hz, f₂ = 100 Hz, f₃ = 200 Hz, and f₄ = 300 Hz. Here, t is the sequence t = [1, 2, 3, …, n], and T_s is the sampling interval with value 1/f_s, where f_s is the sampling frequency of 800 Hz. The object that needs to be reconstructed can be expressed as:
y = Φx + δ_N·Ω,
where x ∈ ℝⁿ is a sparse signal in the frequency domain (the Fourier transform of X), and y ∈ ℝᵐ. Here, let n = 128 and m = 64. Moreover, Φ can be represented as Φ = ψφ, where ψ is a random matrix generated from a Gaussian distribution and φ is the sparse basis matrix generated by the Fourier transform; φ can be given by Fourier(I_{n×n}), where I_{n×n} is an identity matrix. The target signal X is sparse in Fourier space; hence, the signal X can be recovered from the given {y, Φ} by CS recovery methods.
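The construction above can be sketched as follows (our own illustration of Equations (28)–(29); variable names and the random seed are assumptions). With f_s = 800 Hz and n = 128, all four tones fall exactly on DFT bins, so x has exactly eight nonzero entries (a conjugate pair per tone):

```python
import numpy as np

n, m, fs = 128, 64, 800.0
t = np.arange(1, n + 1) / fs                      # sampling instants, T_s = 1/fs
X = (0.2 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.cos(2 * np.pi * 100 * t)
     + 0.1 * np.sin(2 * np.pi * 200 * t) + 0.4 * np.cos(2 * np.pi * 300 * t))

F = np.fft.fft(np.eye(n)) / np.sqrt(n)            # orthonormal DFT (sparse basis)
x = F @ X                                         # frequency-domain signal: 8 nonzeros

rng = np.random.default_rng(0)
psi = rng.standard_normal((m, n))                 # Gaussian random matrix psi
Phi = psi @ F.conj().T                            # sensing matrix Phi = psi * phi
y = Phi @ x + 0.01 * rng.standard_normal(m)       # noisy measurements
```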
Figure 3 shows the signal recovery effect. Obviously, BPDN and SL0 do not perform well, while NSL0, L p -RLS and the proposed WReSL0 perform quite well. This verifies that the regularization mechanism has a good de-noising effect. Figure 4 shows the frequency spectrum of the recovered signal by the selected algorithms. The spectrum of the signal recovered by our proposed WReSL0 algorithm is almost the same as the original signal, while other algorithms fail to achieve this effect.
Table 2 shows the CRT of all algorithms, with n varied over the sequence [170, 220, 270, 320, 370, 420, 470, 520]. From the table, for any n, SL0 has the shortest computation time, followed by WReSL0, NSL0, and Lp-RLS; BPDN has the longest computation time. The BPDN algorithm is generally implemented by quadratic programming, whose computational complexity is very high, resulting in a large increase in the overall computation time. Furthermore, Lp-RLS adopts the conjugate gradient method, which has high complexity in its iterative process, while NSL0 and WReSL0 do not. Compared with NSL0, WReSL0 achieves a more prominent decrease in computation time.
The performance of each algorithm under different noise intensities is shown in Figure 5. When δ_N = 0, SL0 outperforms the other algorithms, but as δ_N increases, the performance of SL0 becomes worse and worse. This result further illustrates that the traditional constrained sparse recovery algorithm is not robust to noise. BPDN, NSL0, Lp-RLS, and WReSL0 all apply the regularization mechanism, and they are indeed superior to SL0 in the noisy case. Among them, the proposed WReSL0 has the best de-noising performance.

4.2. Image Recovery Performance in the Noise Case

Real images are considered approximately sparse under some proper basis, such as the DCT or DWT basis. Here, we choose the DWT basis to recover the images. We compare the recovery performance on the four real images in Figure 6: boat, Barbara, peppers, and Lena. The size of these images is 256 × 256, the compression ratio (CR, defined as m/n) is 0.5, and the noise level δ_N equals 0.01. We again choose SL0, BPDN, NSL0, and Lp-RLS for comparison. For image recovery, the object of image processing is given by:
Y = Φ X + B
Here, Y, X, B are matrices, with Y, B ∈ ℝ^{m×n} and X ∈ ℝ^{n×n}. In order to meet the basic requirements of CS, we perform the following processing:
Y_i = ΦX_i + B_i,  i = 1, 2, …, n,
where Y_i, X_i, B_i are the column vectors of Y, X, B, respectively, and B_i = δ_N·Ω with Ω obeying the Gaussian distribution N(0, 1).
To assess image recovery, we evaluate it by the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). PSNR is defined as:
PSNR = 10 log₁₀(255²/MSE),
where MSE = ‖x − x̂‖₂², and SSIM is defined as:
SSIM(p, q) = (2μ_p μ_q + c₁)(2σ_pq + c₂) / [(μ_p² + μ_q² + c₁)(σ_p² + σ_q² + c₂)].
Here, μ_p and μ_q are the means of images p and q, σ_p² and σ_q² are their variances, and σ_pq is the covariance between image p and image q. The parameters are c₁ = (z₁L)² and c₂ = (z₂L)², with z₁ = 0.01, z₂ = 0.03, and L the dynamic range of the pixel values. The range of SSIM is [−1, 1], and when the two images are identical, SSIM equals one.
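A minimal sketch of these two metrics (our own code; note it computes a single global SSIM window as in Equation (33), whereas practical SSIM implementations average over local windows):

```python
import numpy as np

def psnr(img, ref):
    """PSNR (dB) for 8-bit images, following Eq. (32)."""
    mse = np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2)
    return 10.0 * np.log10(255.0**2 / mse)

def ssim_global(p, q, L=255.0, z1=0.01, z2=0.03):
    """Single-window (global) SSIM of Eq. (33)."""
    c1, c2 = (z1 * L) ** 2, (z2 * L) ** 2
    mp, mq = p.mean(), q.mean()
    vp, vq = p.var(), q.var()
    cov = ((p - mp) * (q - mq)).mean()
    return ((2 * mp * mq + c1) * (2 * cov + c2)) / ((mp**2 + mq**2 + c1) * (vp + vq + c2))
```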
Figure 7 shows the recovery of boat and Barbara with noise intensity δ_N = 0.01. For boat and Barbara, the images recovered by SL0 and BPDN show obvious ripple artifacts, while those recovered by the other algorithms do not. Similarly, for peppers and Lena, the images recovered by SL0 and BPDN are blurred compared with those recovered by the other algorithms. The NSL0, Lp-RLS, and WReSL0 algorithms are all effective at noisy image recovery, and their recovery effects are very similar. To further compare the algorithms, we analyze the PSNR and SSIM of the recovered images; the results are shown in Table 3 and Table 4. By observation and analysis, Lp-RLS performs better than NSL0, and WReSL0 in turn outperforms Lp-RLS. Hence, the WReSL0 algorithm proposed in this paper is superior to the other selected algorithms in image processing.

5. Application in Underdetermined Blind Source Separation

The problem of UBSS stems from the cocktail party problem, which is shown in Figure 8. Suppose the source signal matrix is S(t) = [s₁(t), s₂(t), …, s_n(t)]ᵀ, the mixing matrix (sensors) A is an m × n (m < n) matrix, the Gaussian noise is G(t) = [g₁(t), g₂(t), …, g_m(t)]ᵀ, and the observed mixed signal matrix is X(t) = [x₁(t), x₂(t), …, x_m(t)]ᵀ; therefore, the general mathematical model of UBSS can be summarized as:
X ( t ) = A S ( t ) + G ( t )
In fact, each signal has L data samples; therefore, X ∈ ℝ^{m×L}, A ∈ ℝ^{m×n} (m < n), S(t) ∈ ℝ^{n×L}, and G ∈ ℝ^{m×L}, where G can be represented as δ_N·W (W obeys N(0, 1)). The purpose of UBSS is to use the mixed signal matrix X(t) to estimate the source signal matrix S(t). In fact, this is the process of solving the underdetermined linear system of equations (ULSE). For this problem, we can use the two-step method, which is shown in Figure 9.
From Figure 9: first, we estimate the mixing matrix by a clustering method, and then use CS technology to separate the signals, so as to restore the original sources.

5.1. Process Analysis of CS Applied to UBSS

5.1.1. Solving the Mixed Matrix by the Potential Function Method

In this section, we choose the potential function method to estimate the mixing matrix A. To better verify the performance of the proposed WReSL0 algorithm, we use four simulated signals and four real images in the experiments of this section.
Suppose there are four source signals, which are:
s₁(t) = 5 sin(2πf₁t),  s₂(t) = 5 sin(2πf₂t),  s₃(t) = 5 sin(2πf₃t),  s₄(t) = 5 sin(2πf₄t),  S = [s₁(t), s₂(t), s₃(t), s₄(t)]ᵀ,
where f 1 = 310 Hz, f 2 = 210 Hz, f 3 = 110 Hz, and f 4 = 10 Hz. The length of each source signal s i ( i = 1 , 2 , 3 , 4 ) is 1024, and the sample frequency is 1024 Hz. These four signals are shown in Figure 10.
The four source images are the classic standard test images: boat, Barbara, peppers, and Lena, which are in Figure 6.
Suppose there are two sensors that receive the signals and another two sensors that receive the images. The mixing matrices A and B are set as:
A = [A₁; A₂] = [0.9930  0.9941  0.1092  0.9304; 0.2116  0.0757  0.9647  0.3837],
B = [B₁; B₂] = [0.9354  0.9877  0.6730  0.1097; 0.3535  0.07846  0.7396  0.9940].
Using these mixing matrices and added Gaussian noise (δ_N = 0.1), we obtain the two mixed signals shown in Figure 11 and the two mixed images shown in Figure 12. Then, we obtain the estimated mixing matrices Â and B̂ by clustering with the potential function method [37]. As shown in Figure 13, the potential function method clusters well. By clustering, we get the estimates of A and B as follows:
Â = [Â₁; Â₂] = [0.9792  0.9969  0.1097  0.9239; 0.2028  0.0785  0.9940  0.3827],
B̂ = [B̂₁; B̂₂] = [0.9478  0.9431  0.6483  0.1130; 0.3476  0.0765  0.7075  0.9979].
By calculation, the errors of the estimated mixing matrices are ‖A − Â‖_F/‖A‖_F × 100% = 1.763% and ‖B − B̂‖_F/‖B‖_F × 100% = 3.64%. This error range is much smaller than that of the classical k-means and fuzzy c-means methods, thus laying a foundation for the compressed sensing reconstruction.

5.1.2. Using CS to Separate Source Signals

The next problem is to obtain S(t) from the known Â and X(t). Here, we solve this problem by CS. The solution process is similar to the image reconstruction process; the difference is that the sparse basis used here is the Fourier basis. We then apply the proposed WReSL0 algorithm to this process. First, we transform the obtained X(t) into a single stacked column vector:
x(t) = [x₁(t), x₂(t)]ᵀ  ⟶  x̃(t) = [x₁(t); x₂(t)].
Then, we use the Fourier basis (for the sparse signals) or the DWT basis (for the images) for sparse representation, and extend the basis and the estimated mixing matrix to obtain the sensing matrix:
Ã = Â ⊗ I_{L×L}  (or B̃ = B̂ ⊗ I_{L×L}),
Ψ = Fourier(I_{L×L})/√L  (or Ψ = DWT(I_{L×L})/√L),
Ψ̃ = diag(Ψ, Ψ, …, Ψ),
Φ = ÃΨ̃  (or Φ = B̃Ψ̃).
In this equation, ⊗ denotes the Kronecker product, Fourier(·) represents the Fourier transform, and DWT(·) represents the discrete wavelet transform. Therefore, the CS-UBSS model can be described as:
X̂(t) = ÃS(t) + G(t) = ÃΨ̃Θ(t) + G(t) = ΦΘ(t) + G(t),  or
X̂(t) = B̃S(t) + G(t) = B̃Ψ̃Θ(t) + G(t) = ΦΘ(t) + G(t),
where Θ is the Fourier transform or DWT of S(t), so Θ is sparse. As for UBSS of images, each image matrix is first reshaped into a row vector, and the four row vectors then form the matrix S(t); at the same time, the sparse basis in Equation (40) is replaced by the DWT.
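The Kronecker construction of the sensing matrix can be sketched as follows (our own illustration; the tiny block length L = 8, the variable names, and the use of the estimated Â from above are assumptions for demonstration):

```python
import numpy as np

L_len = 8                                    # samples per source (tiny, for illustration)
A_hat = np.array([[0.9792, 0.9969, 0.1097, 0.9239],
                  [0.2028, 0.0785, 0.9940, 0.3827]])     # estimated 2 x 4 mixing matrix

A_tilde = np.kron(A_hat, np.eye(L_len))      # extended mixing matrix, (2L) x (4L)
Psi = np.fft.fft(np.eye(L_len)) / np.sqrt(L_len)   # orthonormal Fourier basis
Psi_tilde = np.kron(np.eye(4), Psi)          # block diagonal: one basis block per source
Phi = A_tilde @ Psi_tilde                    # sensing matrix handed to WReSL0
```

Each L × L block of Ã is a scaled identity, so the mixture acts sample-by-sample while the block-diagonal Ψ̃ sparsifies each source independently.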
Then, we can recover the source signals by CS. The whole procedure is summarized in the flowchart in Figure 14.

5.2. Performance Analysis of the WReSL0 Algorithm Applied to UBSS

5.2.1. The Effect of the WReSL0 Algorithm Applied to UBSS

In this section, we evaluate the effect of the WReSL0 algorithm applied to UBSS through the separated signals and their spectrum analysis.
The separation results are shown in Figure 15: the source signals are well separated, and the separated signals closely resemble the originals. Figure 16 displays the error between the original and recovered source signals; the error is fairly small, indicating that the WReSL0 algorithm handles the UBSS problem well. In addition, we obtain the time-frequency diagram of the recovered signals by the short-time Fourier transform, shown in Figure 17. Each recovered signal has the same frequency content as its original, which further validates the proposed algorithm for UBSS.

5.2.2. Performance Comparisons of the Selected Algorithms

Here, we use the SL0, NSL0, and L p -RLS algorithms and the classical shortest path method (SPM) [38] for comparison under different noise levels. To analyze signal recovery clearly, we use the average SNR (ASNR) for signals and the average peak SNR (APSNR) for images as evaluation metrics. Let the original source signal be $s_i$ and the recovered source signal be $\hat{s}_i$; ASNR is then defined as:
$$\mathrm{ASNR} = \frac{1}{n}\sum_{i=1}^{n}\mathrm{SNR}_i, \qquad \mathrm{SNR}_i = 20\log\frac{\|s_i\|_2}{\|\hat{s}_i - s_i\|_2},$$
and APSNR is defined as:
$$\mathrm{APSNR} = \frac{1}{n}\sum_{i=1}^{n}\mathrm{PSNR}_i, \qquad \mathrm{PSNR}_i = 10\log\frac{255^2 \times M \times N}{\|\hat{s}_i - s_i\|_2^2}$$
where M and N are the width and height of the image.
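Both metrics can be computed directly from their definitions. The following Python sketch (function names are ours) treats each source signal, or each 8-bit image flattened to length M·N, as one row:

```python
import numpy as np

def asnr(S, S_hat):
    """Average SNR in dB over the rows (sources) of S."""
    return float(np.mean([20 * np.log10(np.linalg.norm(s) /
                                        np.linalg.norm(s_hat - s))
                          for s, s_hat in zip(S, S_hat)]))

def apsnr(S, S_hat, M, N):
    """Average peak SNR in dB; rows are 8-bit images flattened to length M*N."""
    return float(np.mean([10 * np.log10(255.0**2 * M * N /
                                        np.linalg.norm(s_hat - s)**2)
                          for s, s_hat in zip(S, S_hat)]))
```

For example, a single source s = [3, 4] (norm 5) recovered with an error of norm 0.5 gives SNR = 20 log10(5/0.5) = 20 dB.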
The ASNR comparisons are shown in Table 5. From the table, we can see that the ASNR attenuates sharply when $\delta_N$ increases from 0.15 to 0.2. The reason is that the error of the estimated mixing matrix $\hat{A}$ increases markedly, which causes the CS recovery algorithms to perform poorly. The proposed WReSL0 algorithm performs best when $\delta_N$ is no greater than 0.15; when $\delta_N$ exceeds 0.15, the L p -RLS algorithm performs best, followed by the proposed WReSL0 algorithm.
The APSNR comparisons are shown in Table 6. It is clear that the APSNR is not high, and it drops greatly when $\delta_N$ increases from 0.15 to 0.2. From Figure 18, we can see that the separated images appear to be enveloped in mist, which leads to a low APSNR; we aim to improve this in future work.
In summary, the CS technique can be used in UBSS and performs well, especially for signal recovery. The proposed WReSL0 algorithm performs well in UBSS for signal recovery when the noise is small; improving its performance for image recovery is left for future work.

6. Conclusions

In this paper, we propose the WReSL0 algorithm to recover a sparse signal from given { y , Φ } in the noisy case. The WReSL0 algorithm is constructed under the GD method, in which the update of x in the inner loop adopts a regularization mechanism to enhance de-noising performance. As a key part of the WReSL0 algorithm, a weighted smoothed function $W^T H_\sigma(x)$ is proposed to promote sparsity and guarantee robust, accurate signal recovery. Furthermore, we deduced the value of $\mu$ and the initial value $\sigma_{\max}$ to ensure the optimization performance of the algorithm. Performance simulation experiments on both real signals and real images show that the proposed WReSL0 algorithm performs better than the $L_1$ and $L_p$ regularization methods and the classical $L_0$ regularization methods. Finally, we applied the proposed WReSL0 algorithm to the problem of UBSS and compared it with the classical SPM, SL0, NSL0, and $L_p$-RLS algorithms; experiments show that the algorithm achieves competitive performance. In addition, we would like to apply the proposed algorithm to other CS applications, such as RPCA [39], SAR imaging [40], and other de-noising methods [41].

Author Contributions

All authors have made great contributions to the work. L.W., X.Y., H.Y., and J.X. conceived of and designed the experiments; X.Y. and H.Y. performed the experiments and analyzed the data; X.Y. gave insightful suggestions for the work; X.Y. and H.Y. wrote the paper.

Funding

This research received no external funding.

Acknowledgments

This paper is supported by the National Key Laboratory of Communication Anti-jamming Technology.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, C.Z.; Wang, Y.; Jing, F.L. Underdetermined Blind Source Separation of Synchronous Orthogonal Frequency Hopping Signals Based on Single Source Points Detection. Sensors 2017, 17, 2074. [Google Scholar] [CrossRef]
  2. Zhen, L.; Peng, D.; Zhang, Y.; Xiang, Y.; Chen, P. Underdetermined blind source separation using sparse coding. IEEE Trans. Neural Netw. Learn. Syst. 2017, 99, 1–7. [Google Scholar] [CrossRef] [PubMed]
  3. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  4. Candès, E.J.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 2, 21–30. [Google Scholar] [CrossRef]
  5. Badeńska, A.; Błaszczyk, Ł. Compressed sensing for real measurements of quaternion signals. J. Frankl. Inst. 2017, 354, 5753–5769. [Google Scholar] [CrossRef] [Green Version]
  6. Candès, E.J. The restricted isometry property and its implications for compressed sensing. C. R. Math. 2008, 910, 589–592. [Google Scholar] [CrossRef]
  7. Cahill, J.; Chen, X.; Wang, R. The gap between the null space property and the restricted isometry property. Linear Algebra Its Appl. 2016, 501, 363–375. [Google Scholar] [CrossRef] [Green Version]
  8. Huang, S.; Tran, T.D. Sparse Signal Recovery via Generalized Entropy Functions Minimization. arXiv, 2017; arXiv:1703.10556. [Google Scholar]
  9. Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 12, 4655–4666. [Google Scholar] [CrossRef]
  10. Determe, J.F.; Louveaux, J.; Jacques, L.; Horlin, F. On the noise robustness of simultaneous orthogonal matching pursuit. IEEE Trans. Signal Process. 2016, 65, 864–875. [Google Scholar] [CrossRef]
  11. Donoho, D.L.; Tsaig, Y.; Starck, J.L. Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inf. Theory 2012, 2, 1094–1121. [Google Scholar] [CrossRef]
  12. Needell, D.; Vershynin, R. Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. IEEE J. Sel. Top. Signal Process. 2010, 2, 310–316. [Google Scholar] [CrossRef]
  13. Needell, D.; Tropp, J.A. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Commun. ACM 2010, 12, 93–100. [Google Scholar] [CrossRef]
  14. Jian, W.; Seokbeop, K.; Byonghyo, S. Generalized orthogonal matching pursuit. IEEE Trans. Signal Process. 2012, 12, 6202–6216. [Google Scholar] [CrossRef]
  15. Wang, J.; Kwon, S.; Li, P.; Shim, B. Recovery of sparse signals via generalized orthogonal matching pursuit: A new analysis. IEEE Trans. Signal Process. 2016, 64, 1076–1089. [Google Scholar] [CrossRef]
  16. Dai, W.; Milenkovic, O. Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 2009, 5, 2230–2249. [Google Scholar] [CrossRef]
  17. Goyal, P.; Singh, B. Subspace pursuit for sparse signal reconstruction in wireless sensor networks. Procedia Comput. Sci. 2018, 125, 228–233. [Google Scholar] [CrossRef]
  18. Liu, X.J.; Xia, S.T.; Fu, F.W. Reconstruction guarantee analysis of basis pursuit for binary measurement matrices in compressed sensing. IEEE Trans. Inf. Theory 2017, 63, 2922–2932. [Google Scholar] [CrossRef]
  19. Mohimani, H.; Babaie-Zadeh, M.; Jutten, C. A Fast Approach for Overcomplete Sparse Decomposition Based on Smoothed L0 Norm. IEEE Trans. Signal Process. 2009, 57, 289–301. [Google Scholar] [CrossRef]
  20. Zhao, R.; Lin, W.; Li, H.; Hu, S. Reconstruction algorithm for compressive sensing based on smoothed L0 norm and revised newton method. J. Comput.-Aided Des. Comput. Graph. 2012, 24, 478–484. [Google Scholar]
  21. Ye, X.; Zhu, W.P. Sparse channel estimation of pulse-shaping multiple-input–multiple-output orthogonal frequency division multiplexing systems with an approximate gradient L2-SL0 reconstruction algorithm. IET Commun. 2014, 8, 1124–1131. [Google Scholar] [CrossRef]
  22. Figueiredo, M.A.T.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597. [Google Scholar]
  23. Long, T.; Jiao, W.; He, G. RPC estimation via 1-norm-regularized least squares (L1LS). IEEE Trans. Geosci. Remote Sens. 2015, 8, 4554–4567. [Google Scholar] [CrossRef]
  24. Pant, J.K.; Lu, W.S.; Antoniou, A. New improved algorithms for compressive sensing based on the Lp norm. IEEE Trans. Circuits Syst. II Express Br. 2014, 3, 198–202. [Google Scholar] [CrossRef]
  25. Wipf, D.; Nagarajan, S. Iterative Reweighted L1 and L2 Methods for Finding Sparse Solutions. IEEE J. Sel. Top. Signal Process. 2016, 2, 317–329. [Google Scholar]
  26. Zhang, C.; Hao, D.; Hou, C.; Yin, X. A New Approach for Sparse Signal Recovery in Compressed Sensing Based on Minimizing Composite Trigonometric Function. IEEE Access 2018, 6, 44894–44904. [Google Scholar] [CrossRef]
  27. Candès, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by weighted L1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905. [Google Scholar] [CrossRef]
  28. Pant, J.K.; Lu, W.S.; Antoniou, A. Reconstruction of sparse signals by minimizing a re-weighted approximate L0-norm in the null space of the measurement matrix. In Proceedings of the IEEE International Midwest Symposium on Circuits and Systems, Seattle, WA, USA, 1–4 August 2010; pp. 430–433. [Google Scholar]
  29. Aggarwal, P.; Gupta, A. Accelerated fmri reconstruction using matrix completion with sparse recovery via split bregman. Neurocomputing 2016, 216, 319–330. [Google Scholar] [CrossRef]
  30. Chu, Y.J.; Mak, C.M. A new qr decomposition-based rls algorithm using the split bregman method for L1-regularized problems. Signal Process. 2016, 128, 303–308. [Google Scholar] [CrossRef]
  31. Hu, Y.; Liu, J.; Leng, C.; An, Y.; Zhang, S.; Wang, K. Lp regularization for bioluminescence tomography based on the split bregman method. Mol. Imaging Biol. 2016, 18, 1–8. [Google Scholar] [CrossRef]
  32. Liu, Y.; Zhan, Z.; Cai, J.F.; Guo, D.; Chen, Z.; Qu, X. Projected iterative soft-thresholding algorithm for tight frames in compressed sensing magnetic resonance imaging. IEEE Trans. Med. Imaging 2016, 35, 2130–2140. [Google Scholar] [CrossRef]
  33. Yang, L.; Pong, T.K.; Chen, X. Alternating direction method of multipliers for a class of nonconvex and nonsmooth problems with applications to background/foreground extraction. Mathematics 2016, 10, 74–110. [Google Scholar] [CrossRef]
  34. Antoniou, A.; Lu, W.S. Practical Optimization: Algorithms and Engineering Applications; Springer: New York, NY, USA, 2007. [Google Scholar]
  35. Samora, I.; Franca, M.J.; Schleiss, A.J.; Ramos, H.M. Simulated annealing in optimization of energy production in a water supply network. Water Resour. Manag. 2016, 30, 1533–1547. [Google Scholar] [CrossRef]
  36. Goldstein, T.; Studer, C. Phasemax: Convex phase retrieval via basis pursuit. IEEE Trans. Inf. Theory 2018, 64, 2675–2689. [Google Scholar] [CrossRef]
  37. Fu, W.H.; Li, A.L.; Ma, L.F.; Huang, K.; Yan, X. Underdetermined blind separation based on potential function with estimated parameter’s decreasing sequence. Syst. Eng. Electron. 2014, 36, 619–623. [Google Scholar]
  38. Bofill, P.; Zibulevsky, M. Underdetermined blind source separation using sparse representations. Signal Process. 2001, 81, 2353–2362. [Google Scholar] [CrossRef] [Green Version]
  39. Su, J.; Tao, H.; Tao, M.; Wang, L.; Xie, J. Narrow-band interference suppression via rpca-based signal separation in time–frequency domain. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 99, 1–10. [Google Scholar] [CrossRef]
  40. Ni, J.C.; Zhang, Q.; Luo, Y.; Sun, L. Compressed sensing sar imaging based on centralized sparse representation. IEEE Sens. J. 2018, 18, 4920–4932. [Google Scholar] [CrossRef]
  41. Li, G.; Xiao, X.; Tang, J.T.; Li, J.; Zhu, H.J.; Zhou, C.; Yan, F.B. Near-source noise suppression of AMT by compressive sensing and mathematical morphology filtering. Appl. Geophys. 2017, 4, 581–589. [Google Scholar] [CrossRef]
Figure 1. Frame of compressed sensing (CS).
Figure 2. Different functions used in the literature to approximate the L 0 -norm; some of them are plotted in this figure, and the L 0.5 -norm is displayed for comparison. CIPF, compound inverse proportional function.
Figure 3. Signal recovery effect by BPDN, SL0, NSL0, L p -RLS, and weighted regularized smoothed L 0 -norm minimization (WReSL0) when noise intensity δ N = 0.2. (a) signal recovery by the BPDN algorithm; (b) signal recovery by the SL0 algorithm; (c) signal recovery by NSL0 algorithm; (d) signal recovery by the L p -RLS algorithm; (e) signal recovery by the WReSL0 algorithm.
Figure 4. Frequency spectrum analysis of the original signal and the signal recovered by BPDN, SL0, NSL0, L p -RLS, and WReSL0 when noise intensity δ N = 0.2. (a) original signal; (b) signal recovery by the BPDN algorithm; (c) signal recovery by the SL0 algorithm; (d) signal recovery by the NSL0 algorithm; (e) signal recovery by the L p -RLS algorithm; (f) signal recovery by the WReSL0 algorithm.
Figure 5. NMSE analysis by BPDN, SL0, NSL0, L p -RLS, and WReSL0 when noise intensity δ N changes according to the sequence [0, 0.1, 0.2, 0.3, 0.4, 0.5].
Figure 6. Original images: (a) boat; (b) Barbara; (c) peppers; (d) Lena.
Figure 7. Image recovery effect by the BPDN, SL0, NSL0, L p -RLS, and WReSL0 algorithms with noise intensity δ N = 0.01. In (ad), from left to right, are: image recovered by the BPDN, SL0, NSL0, L p -RLS, and WReSL0 algorithms.
Figure 8. Schematic diagram of cocktail reception signal mixing.
Figure 9. Schematic diagram of two-step method for UBSS.
Figure 10. Source signal.
Figure 11. Mixed signal by sensors.
Figure 12. Mixed image by sensors.
Figure 13. Clustering analysis.
Figure 14. Flowchart of UBSS by CS.
Figure 15. Separation signal.
Figure 16. Separation signal error analysis.
Figure 17. Separation signals’ frequency spectra. Subfigures (a–d) show the frequency spectra of the separated signals s ^ 1 , s ^ 2 , s ^ 3 , and s ^ 4 .
Figure 18. Separated images: (a) boat; (b) Barbara; (c) peppers; (d) Lena.
Table 1. Weighted regularized smoothed L 0 -norm minimization (WReSL0) algorithm using the GD method.
● Initialization:
 (1) Set $L$, $\mu = \sigma/(2\alpha)$, and $\hat{x}(0) = \Phi^H(\Phi\Phi^H)^{-1}y$.
 (2) Set $\sigma_{\max} = \alpha \max|\hat{x}(0)|$, $\sigma_{\min} = 0.01$, and $\sigma_t = \sigma_{\max}\theta^{\gamma(t-1)}$, where $\gamma = \log_\theta(\sigma_{\max}/\sigma_{\min})/(T-1)$ and $T$ is the maximum number of iterations.
● For $t = 1, 2, \ldots, T$, do:
 (1) Let $\sigma = \sigma_t$.
 (2) Let $x = \hat{x}(t-1)$.
  for $l = 1, 2, \ldots, L$:
   (a) $x \leftarrow x - \mu\, e^{-|x|/\sigma}\, \dfrac{2\alpha\sigma^2 x}{(\alpha x^2 + \sigma^2)^2}$
   (b) $x \leftarrow x - (\lambda_1 I_n + \Phi^H\Phi)^{-1}\Phi^H(\Phi x - y)$
 (3) Set $\hat{x}(t) = x$.
● Output: the estimated value $\hat{x} = \hat{x}(T)$.
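The iteration in Table 1 can be sketched in Python as follows. This is a minimal reading of the table, not the authors' reference implementation: the weight term $e^{-|x|/\sigma}$, the smoothed-function gradient, and all parameter defaults (T, L, alpha, lam) are our assumptions:

```python
import numpy as np

def wresl0(y, Phi, T=30, L=3, alpha=4.0, lam=1e-3):
    """Sketch of a WReSL0-style gradient-descent iteration.

    Inner step (a) descends the weighted smoothed-L0 surrogate;
    step (b) is a regularized least-squares correction toward Phi x = y.
    """
    # minimum-norm initialization x(0) = Phi^H (Phi Phi^H)^{-1} y
    x = Phi.conj().T @ np.linalg.solve(Phi @ Phi.conj().T, y)
    sigma_max = alpha * np.max(np.abs(x))
    sigma_min = 0.01
    # geometric decrease of sigma from sigma_max to sigma_min over T steps
    sigmas = sigma_max * (sigma_min / sigma_max) ** (np.arange(T) / (T - 1))
    n = Phi.shape[1]
    # precompute the regularized normal-equations matrix for step (b)
    M = np.linalg.inv(lam * np.eye(n) + Phi.conj().T @ Phi)
    for sigma in sigmas:
        mu = sigma / (2 * alpha)       # step size tied to sigma, as in Table 1
        for _ in range(L):
            w = np.exp(-np.abs(x) / sigma)                      # weight term
            grad = 2 * alpha * sigma**2 * x / (alpha * x**2 + sigma**2) ** 2
            x = x - mu * w * grad                               # (a) sparsity step
            x = x - M @ (Phi.conj().T @ (Phi @ x - y))          # (b) data-fit step
    return x
```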
Table 2. Signal CPU running time (CRT) analysis for BPDN, SL0, NSL0, L p -RLS, and the proposed WReSL0 with signal length changes according to the sequence [170, 220, 270, 320, 370, 420, 470, 520] when δ N = 0.2 .
CPU running time (seconds):

Signal Length (n) | BPDN  | SL0   | NSL0  | Lp-RLS | WReSL0
170               | 0.195 | 0.057 | 0.091 | 0.194  | 0.063
220               | 0.289 | 0.139 | 0.230 | 0.350  | 0.142
270               | 0.495 | 0.229 | 0.426 | 0.505  | 0.291
320               | 0.767 | 0.320 | 0.639 | 0.712  | 0.509
370               | 1.059 | 0.456 | 0.926 | 0.982  | 0.892
420               | 1.477 | 0.613 | 1.133 | 1.491  | 1.017
470               | 1.941 | 0.796 | 1.478 | 2.118  | 1.344
520               | 2.619 | 1.038 | 2.089 | 2.910  | 1.882
Table 3. PSNR and SSIM analysis of recovered images (boat and Barbara) by SL0, BPDN, NSL0, L p -RLS, and WReSL0.
Items  | Barbara PSNR (dB) | Barbara SSIM | Boat PSNR (dB) | Boat SSIM
SL0    | 27.983 | 0.981 | 26.959 | 0.969
BPDN   | 28.834 | 0.984 | 27.376 | 0.971
NSL0   | 31.296 | 0.991 | 31.247 | 0.988
Lp-RLS | 31.786 | 0.992 | 31.797 | 0.989
WReSL0 | 32.244 | 0.993 | 32.369 | 0.991
Table 4. PSNR and SSIM analysis of recovered images (peppers and Lena) by SL0, BPDN, NSL0, L p -RLS, and WReSL0.
Items  | Peppers PSNR (dB) | Peppers SSIM | Lena PSNR (dB) | Lena SSIM
SL0    | 28.677 | 0.982 | 30.334 | 0.987
BPDN   | 29.542 | 0.985 | 29.875 | 0.983
NSL0   | 31.373 | 0.991 | 32.639 | 0.993
Lp-RLS | 33.757 | 0.994 | 34.051 | 0.995
WReSL0 | 34.231 | 0.996 | 34.653 | 0.997
Table 5. Average SNR (ASNR) analysis for separated signals by SPM, SL0, NSL0, L p -RLS, and the proposed WReSL0 with δ N changing according to sequence [0,0.1,0.15,0.18,0.2] with 100 runs.
ASNR values in dB:

Noise Intensity (δN) | Error of Â (%) | SPM    | SL0    | NSL0   | Lp-RLS | WReSL0
0                    | 1.763          | 45.443 | 41.576 | 42.324 | 38.412 | 39.993
0.1                  | 1.763          | 36.788 | 35.278 | 36.034 | 37.091 | 39.295
0.15                 | 1.763          | 31.407 | 30.754 | 32.930 | 35.332 | 38.975
0.18                 | 112.6          | 26.355 | 24.063 | 25.437 | 28.305 | 26.650
0.2                  | 126.3          | 11.201 | 9.974  | 12.358 | 17.549 | 15.581
Table 6. APSNR analysis for separated images by SPM, SL0, NSL0, L p -RLS, and the proposed WReSL0 with δ N changing according to the sequence [0,0.1,0.15,0.18,0.2] with 100 runs.
APSNR values in dB:

Noise Intensity (δN) | Error of B̂ (%) | SPM    | SL0    | NSL0   | Lp-RLS | WReSL0
0                    | 3.64           | 16.447 | 19.211 | 20.035 | 16.372 | 18.483
0.1                  | 3.64           | 15.639 | 16.305 | 17.327 | 15.407 | 17.849
0.15                 | 3.64           | 13.407 | 14.754 | 14.930 | 14.932 | 17.351
0.18                 | 133.2          | 9.355  | 11.063 | 11.437 | 10.305 | 11.650
0.2                  | 142.4          | 5.201  | 5.974  | 6.358  | 3.549  | 5.581
