Review

Comparison of Common Algorithms for Single-Pixel Imaging via Compressed Sensing

1 College of Physics and Optoelectronics, Taiyuan University of Technology, No. 79 West Main Street, Taiyuan 030024, China
2 Key Laboratory of Advanced Transducers and Intelligent Control System, Ministry of Education and Shanxi Province, Taiyuan University of Technology, No. 79 West Main Street, Taiyuan 030024, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(10), 4678; https://doi.org/10.3390/s23104678
Submission received: 14 March 2023 / Revised: 19 April 2023 / Accepted: 9 May 2023 / Published: 11 May 2023
(This article belongs to the Section Sensing and Imaging)

Abstract
Single-pixel imaging (SPI) uses a single-pixel detector, instead of the many-pixel detector arrays of traditional imaging techniques, to realize two-dimensional or even multi-dimensional imaging. In SPI using compressed sensing, the target to be imaged is illuminated by a series of patterns with spatial resolution, and the reflected or transmitted intensity is then compressively sampled by the single-pixel detector to reconstruct the target image, breaking the limitation of the Nyquist sampling theorem. Recently, in signal processing using compressed sensing, many measurement matrices as well as reconstruction algorithms have been proposed, and it is necessary to explore their application in SPI. Therefore, this paper reviews the concept of compressed sensing SPI and summarizes the main measurement matrices and reconstruction algorithms in compressed sensing. Further, their performance in SPI is explored in detail through simulations and experiments, and their advantages and disadvantages are summarized. Finally, the prospects of compressed sensing SPI are discussed.

1. Introduction

In traditional imaging using silicon-based charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) array detectors, the imaging resolution is proportional to the number of pixels on the detector. Thus, improving the imaging resolution requires greater detector integration, which raises manufacturing requirements. Single-pixel imaging (SPI) [1,2] uses a single-pixel detector, instead of the many-pixel detector array of traditional imaging, to realize two-dimensional or even multi-dimensional imaging. This offers cost-efficient broadband imaging and resolution enhancement, and SPI has recently attracted considerable attention for its potential in various imaging areas such as visible range imaging [3], multispectral imaging [4,5,6,7], hyperspectral imaging [8], ultrafast imaging [9], infrared imaging [10], terahertz imaging [11,12,13], gas imaging [14], microscopic imaging [15,16,17], imaging through scattering media [18,19], photoacoustic imaging [20], long-range imaging [21,22], X-ray imaging [23,24], 3D imaging [25], and holography [26].
In SPI, the target to be imaged is modulated by a series of patterns with spatial resolution, and the reflected or transmitted intensity is collected by the single-pixel detector to reconstruct the target image. The modulation patterns can be realized by a diffuser, a spinning mask, or a spatial light modulator. The single-pixel detector can be a photodiode, a photomultiplier, or a conventional image detector used as a bucket detector. SPI can be traced back to quantum ghost imaging and thermal ghost imaging. According to the position of the modulation device, SPI can be divided into computational ghost imaging (CGI, which uses active illumination) and the single-pixel camera (SPC, which uses passive detection) [27,28]. Although commonly treated as separate research fields, CGI and SPC are, from an optical perspective, the same; therefore, in this paper, we uniformly call both SPI.
SPI can be divided into orthogonal SPI [29,30,31,32,33] and compressed sensing SPI (CSSPI) [1,34,35,36,37]. Orthogonal SPI typically solves an inverse problem or performs a reconstruction from an ensemble average, whereas CSSPI seeks a sparse estimate via an optimization problem. CSSPI was first realized by Duarte et al. [1]: the target to be imaged is modulated by a series of random patterns; the reflected or transmitted intensity is then compressively sampled by the single-pixel detector, which reduces the number of measurements; and finally, the target image is reconstructed using compressed sensing (CS) theory.
CS theory, proposed in 2006 [38,39], holds that it is possible to recover the original signal from under-sampled data when the target signal is sparse, either directly or in a transform domain. This breaks the limitation of the Nyquist sampling theorem in data acquisition and thus reduces the sampling rate.
CS mainly addresses two questions: how to capture the whole information of the target signal with far fewer samples than the Nyquist criterion requires, and how to recover the target signal from the down-sampled data. On these two key issues, researchers have developed a variety of sampling frameworks, i.e., different measurement matrices, and a variety of signal reconstruction algorithms. It is necessary to explore the application of these methods in SPI.
Refs. [40,41] review the development of CS; refs. [42,43] review the main measurement matrices in CS; and refs. [44,45,46] review the reconstruction algorithms in CS. In addition, refs. [2,47] review the development of SPI, and refs. [48,49] review SPI algorithms; however, of the CS algorithms, only TVAL3 is covered. There is no review of compressed sensing SPI and, in particular, no detailed investigation of SPI performance with these different CS methods. This paper therefore reviews the concept of CSSPI and summarizes the main measurement matrices and reconstruction algorithms. Further, their performance is explored in detail through simulations and experiments, and their advantages and disadvantages are summarized. Finally, the prospects of CSSPI are discussed. Table 1 compares the work of this paper with existing works.
The paper is structured as follows: In Section 2, we review the principles of CSSPI and point out that the measurement matrix and the reconstruction algorithm are the two important factors affecting its performance. In Section 3, we classify the main measurement matrices and briefly introduce how to generate them. In Section 4, we classify the existing CS reconstruction algorithms according to whether they exploit signal sparsity or image-gradient sparsity and review their reconstruction principles. Section 5 and Section 6 compare the performance of these measurement matrices and reconstruction algorithms through simulations and experiments, respectively. Finally, Section 7 summarizes the work of this paper and discusses the prospects of CSSPI.

2. Compressed Sensing SPI

As shown in Figure 1, assuming that a target object with a spatial resolution of $u \times v$ pixels is to be captured by CSSPI, the imaging process consists of two procedures. The first is to modulate the object with a set of patterns of the same spatial resolution and measure the corresponding reflected light $y \in \mathbb{R}^{M \times 1}$, which can be mathematically described as
$$ y = \Phi x, \tag{1} $$
where each row of the measurement matrix $\Phi \in \mathbb{R}^{M \times N}$ is a 2-D modulation pattern in its 1-D representation, $x \in \mathbb{R}^{N \times 1}$ is the 2-D target object in its 1-D representation, $N = u \times v$ is the length of the 1-D representations, and $M$ is the number of measurements, which also equals the number of modulation patterns.
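To make the acquisition model concrete, the following minimal Python sketch (our illustration, not code from the cited works; the sizes and the ±1 pattern choice are assumptions) simulates Equation (1) by flattening the object and stacking one flattened pattern per row of $\Phi$:

```python
import numpy as np

u, v = 64, 64                # spatial resolution of the object and patterns
N = u * v                    # length of the 1-D representation
M = N // 4                   # number of measurements (25% sampling rate)

rng = np.random.default_rng(0)
x = rng.random(N)                            # stand-in for the vectorized object
Phi = rng.choice([-1.0, 1.0], size=(M, N))   # each row: one 2-D pattern, flattened

y = Phi @ x                                  # single-pixel measurements, shape (M,)
```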
The second procedure is to reconstruct the image of the target object using CS [50,51]. In CSSPI, the number of measurements is required to be less than the length of the signal, i.e., $M < N$, which makes solving for the signal $x$ from Equation (1) an underdetermined problem. Fortunately, CS states that when the signal $x$ is sparse, or has a sparse representation in some sparse dictionary, e.g., the discrete cosine transform (DCT) or wavelet transform (WT) [52,53,54,55,56], and the joint matrix composed of the measurement matrix and the sparse dictionary satisfies the restricted isometry property (RIP) [38,57], it is possible to reconstruct the original signal $x$ [58]. As shown in Figure 1, using $\Psi s$ to replace $x$ in Equation (1) as the object vector, the equivalent form of Equation (1) can be written as
$$ y = \Phi x = \Phi \Psi s = A s, \tag{2} $$
where $\Psi \in \mathbb{R}^{N \times N}$ is the sparse dictionary, $A \in \mathbb{R}^{M \times N}$ is the joint matrix, and $s \in \mathbb{R}^{N \times 1}$ is the sparse vector. In this case, solving for the signal $x$ from Equations (1) and (2) can be transformed into an $l_0$-norm minimization problem [39,59]:
$$ \min_{\hat{s} \in \mathbb{R}^{N}} \|\hat{s}\|_{l_0} \quad \text{subject to} \quad A\hat{s} = y. \tag{3} $$
Solving the optimization problem of Equation (3) imposes two requirements. First, a matrix $A$ that satisfies the RIP of the following inequality is required:
$$ (1-\delta)\|s\|_2^2 \le \|As\|_2^2 \le (1+\delta)\|s\|_2^2, \tag{4} $$
where $\delta \in (0,1)$ is the restricted isometry constant (RIC) of the matrix $A$. In practice, random matrices are one way to obtain joint matrices satisfying the RIP; however, it is difficult to verify whether a given matrix satisfies the RIP with a low RIC. Most of the time, it suffices to ensure that the coherence between the measurement matrix and the sparse dictionary [60] is small. The coherence between the measurement matrix $\Phi$ and the sparse dictionary $\Psi$ is defined as $\mu(\Phi, \Psi) = \sqrt{N} \max_{1 \le k, j \le N} |\langle \varphi_k, \psi_j \rangle|$; put simply, it is the largest correlation between any row $\varphi_k$ of $\Phi$ and any column $\psi_j$ of $\Psi$. If $\Phi$ and $\Psi$ contain correlated elements, the coherence is large; otherwise, it is small. There are three types of measurement matrices for CSSPI [46]: random measurement matrices, partial orthogonal measurement matrices, and semi-deterministic random measurement matrices. Second, a suitable reconstruction algorithm is needed to solve the optimization problem of Equation (3). Over the past years, several typical sparse recovery algorithms have been proposed [50], which can be classified into five main categories: convex optimization algorithms, greedy algorithms, non-convex optimization algorithms, Bregman distance minimization algorithms, and total variation minimization algorithms. The measurement matrices and the reconstruction algorithms are discussed in Section 3 and Section 4, respectively.
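As a concrete illustration of the coherence test (a sketch under our own choices: a Gaussian $\Phi$ with unit-norm rows and an orthonormal DCT dictionary; sizes are illustrative), the quantity $\mu(\Phi, \Psi)$ can be evaluated directly:

```python
import numpy as np
from scipy.fft import dct

N, M = 256, 64
rng = np.random.default_rng(1)

Phi = rng.standard_normal((M, N))
Phi /= np.linalg.norm(Phi, axis=1, keepdims=True)  # unit-norm rows phi_k

Psi = dct(np.eye(N), norm='ortho', axis=0)         # orthonormal DCT dictionary

G = np.abs(Phi @ Psi)                              # all inner products |<phi_k, psi_j>|
mu = np.sqrt(N) * G.max()                          # coherence mu(Phi, Psi)
print(f"mu = {mu:.2f}, sqrt(N) = {np.sqrt(N):.1f}")
```

For two orthonormal bases, $\mu$ lies in $[1, \sqrt{N}]$; random measurement rows typically give values near the lower end, which is what makes them good universal encoders.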

3. Selection of Measurement Matrix

As mentioned in Section 2, achieving CSSPI requires selecting an appropriate measurement matrix to encode the object, i.e., a matrix that satisfies the RIP or whose coherence with the sparse dictionary is small. There are three types of measurement matrices for CSSPI [46]: random measurement matrices [47,61,62,63], partial orthogonal measurement matrices [47,64,65,66], and semi-deterministic random measurement matrices [64,67,68,69,70,71,72,73,74,75,76]. In addition, some optimized measurement matrices have been proposed that improve the RIP characteristics of the measurement matrix [77,78,79,80,81,82,83,84,85] or use machine learning [86,87,88,89].

3.1. Random Measurement Matrix

Each element of a random measurement matrix is independent and obeys the same distribution, e.g., a Gaussian or Bernoulli distribution. It has been proven in [61] that such matrices are incoherent with most sparse signals and sparse dictionaries; that is, only a small number of measurements are needed to accurately reconstruct the target object. However, truly random matrices hardly exist in reality and can only be approximated in the laboratory. At the same time, the drawbacks of high computational complexity and large storage requirements for reconstruction limit their use in practice. Common random measurement matrices include the Gaussian random measurement matrix [62] and the Bernoulli random measurement matrix [63].

3.1.1. Gaussian Random Measurement Matrix

The Gaussian random measurement matrix is the most widely used measurement matrix in compressed sensing. Each of its elements independently obeys a Gaussian distribution with mean 0 and variance $1/M$, that is:
$$ \phi_{i,j} \sim N\!\left(0, \frac{1}{M}\right). \tag{5} $$
Each element of the matrix is independently distributed and strongly random, so the matrix is incoherent with most sparse signals and sparse bases. When a Gaussian random matrix is used to measure a $k$-sparse signal, if the number of measurements satisfies $M \ge c\,k\log(N/k)$, where $c$ is a constant, the RIP condition is satisfied with high probability and the signal can be reconstructed accurately.
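A Gaussian random measurement matrix is a one-liner to generate; the sketch below (illustrative sizes) draws i.i.d. entries from $N(0, 1/M)$ per Equation (5):

```python
import numpy as np

M, N = 128, 512
rng = np.random.default_rng(0)
# i.i.d. entries with mean 0 and variance 1/M, per Equation (5)
Phi_gauss = rng.normal(loc=0.0, scale=1.0 / np.sqrt(M), size=(M, N))
```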

3.1.2. Bernoulli Random Measurement Matrix

The Bernoulli random measurement matrix differs from the Gaussian random measurement matrix in that each element is independent and identically distributed, taking the values $\pm 1$ with probability 1/2 each, that is:
$$ \phi_{i,j} = \begin{cases} +1, & P = \frac{1}{2} \\[2pt] -1, & P = \frac{1}{2}. \end{cases} \tag{6} $$
Similar to the Gaussian random matrix, each element is independently distributed, so the matrix is strongly random and incoherent with most sparse signals and sparse bases. Additionally, since the matrix elements are only 1 and −1, it is easy to implement on hardware devices; hence, it is more widely used than the Gaussian random matrix in practical applications.
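A corresponding sketch for the Bernoulli matrix of Equation (6) (sizes are again illustrative); the binary ±1 entries map directly onto on/off states of a DMD or projector pixel:

```python
import numpy as np

M, N = 128, 512
rng = np.random.default_rng(0)
# i.i.d. entries, +1 or -1 each with probability 1/2, per Equation (6)
Phi_bern = rng.choice([1.0, -1.0], size=(M, N))
```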

3.2. Partial Orthogonal Measurement Matrix

Given the drawbacks of high computational complexity, large storage space, and high uncertainty of random measurement matrices, it is particularly important to find or design deterministic measurement matrices that reduce computational complexity and storage. Partial orthogonal matrices are derived from existing orthogonal matrices with special properties in the field of signal processing.

3.2.1. Partial Hadamard Matrix

In refs. [65,66], the partial Hadamard matrix is proposed as the measurement matrix for CS. It is composed of $M$ row vectors selected from the $N \times N$ Hadamard matrix. The entries of the Hadamard matrix are 1 and −1, and its columns are orthogonal, so the matrix satisfies the following property:
$$ H H^{T} = N I_N, \tag{7} $$
where $I_N$ is the $N \times N$ identity matrix and $H^{T}$ is the transpose of $H$. This measurement matrix is incoherent with most sparse signals and sparse dictionaries. However, since the order of the Hadamard matrix must be a power of two, $2^n$ with $n = 1, 2, 3, \dots$, there are strict requirements on the dimension of the target signal, which limits its use in practice.
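A partial Hadamard matrix can be built by generating the full $N \times N$ Hadamard matrix and keeping $M$ of its rows. The row-selection rule varies across works; the uniform random choice below (keeping row 0, the all-ones row, for the image mean) is only one plausible option:

```python
import numpy as np
from scipy.linalg import hadamard

N, M = 1024, 256                       # N must be a power of two
H = hadamard(N)                        # entries +1/-1; H @ H.T == N * I_N (Eq. (7))
rng = np.random.default_rng(0)
rows = np.concatenate(([0], rng.choice(np.arange(1, N), size=M - 1, replace=False)))
Phi_hadamard = H[rows, :]
```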

3.2.2. Partial Fourier Matrix

In addition, the partial Fourier matrix is proposed as a measurement matrix for CS in ref. [47]. It can reduce the complexity of the algorithm by using the fast Fourier transform. However, it is usually only incoherent with time-domain sparse signals, and most natural images do not meet this condition, so this kind of matrix is rarely used in CSSPI.

3.3. Semi-Deterministic Random Measurement Matrix

The semi-deterministic random measurement matrices are designed to follow a deterministic construction to satisfy the RIP or to have low mutual coherence, which can be regarded as a combination of random measurement matrices and deterministic orthogonal measurement matrices. At present, this kind of matrix mainly includes Toeplitz and circulant matrices [67,68], structured random matrices [68,69,72], sparse random matrices [70,71,72,73], binary random matrices [74], block-diagonal matrices [75,76], etc.

3.3.1. Toeplitz and Circulant Matrix

The Toeplitz and circulant measurement matrices are generated based on random measurement matrices and have the following forms:
$$ T = \begin{pmatrix} t_n & t_{n-1} & \cdots & t_1 \\ t_{n+1} & t_n & \cdots & t_2 \\ \vdots & \vdots & \ddots & \vdots \\ t_{2n-1} & t_{2n-2} & \cdots & t_n \end{pmatrix} \quad \text{and} \quad C = \begin{pmatrix} t_n & t_{n-1} & \cdots & t_1 \\ t_1 & t_n & \cdots & t_2 \\ \vdots & \vdots & \ddots & \vdots \\ t_{n-1} & t_{n-2} & \cdots & t_n \end{pmatrix}, \tag{8} $$
where the elements along each diagonal are the same ($T_{i,j} = T_{i+1,j+1}$). The circulant matrix is a special form of the Toeplitz matrix. The elements of the first row obey the same random distribution as those of a random measurement matrix.
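Both matrices can be generated from a single random $\pm 1$ sequence and truncated to $M$ rows, as in this sketch (sizes illustrative; SciPy's helpers supply the structure):

```python
import numpy as np
from scipy.linalg import toeplitz, circulant

N, M = 512, 128
rng = np.random.default_rng(0)

t = rng.choice([1.0, -1.0], size=2 * N - 1)            # 2N-1 i.i.d. random entries
T = toeplitz(c=t[N - 1:], r=t[N - 1::-1])[:M, :]       # constant along each diagonal
C = circulant(rng.choice([1.0, -1.0], size=N))[:M, :]  # rows are cyclic shifts
```

A practical advantage of these structures is that products with $\Phi$ can be applied via the FFT instead of a dense matrix multiplication.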

3.3.2. Sparse Random Matrix

The structure of the sparse random matrix is simple, and it is easy to generate and store in experiments. In each column, the elements at $d$ random positions are 1 and the rest are 0, where $d \in \{4, 8, 10, 16\}$ [72].
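A sketch of its construction (illustrative sizes), placing exactly $d$ ones at random positions in each column:

```python
import numpy as np

M, N, d = 128, 512, 8
rng = np.random.default_rng(0)

Phi_sparse = np.zeros((M, N))
for j in range(N):
    # d ones at random row positions in column j, all other entries zero
    Phi_sparse[rng.choice(M, size=d, replace=False), j] = 1.0
```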
In summary, the three types of measurement matrices have their own advantages and disadvantages. Random measurement matrices come close to satisfying the RIP; however, they are not easy to generate in reality and require high computational complexity and storage capacity. Partial orthogonal measurement matrices have very low computational complexity thanks to their fast orthogonal transforms, but they impose restrictions on the signal dimensions. Semi-deterministic random measurement matrices combine, to some extent, the properties of random and partial orthogonal measurement matrices. Table 2 shows the advantages and disadvantages of each measurement matrix in detail.
Measurement matrices constructed by various methods have been proposed [77,78,79,80,81,82,83], in which researchers try to improve the RIP property of existing measurement matrices. David L. Donoho proposed in ref. [61] that the minimum singular value of any submatrix composed of column vectors of the measurement matrix must be greater than a positive constant, i.e., the column vectors of the matrix must satisfy a certain independence. The QR decomposition of a matrix can increase its singular values without changing its properties. In addition, matrix optimization methods have been proposed, such as an optimal projection matrix that optimizes the measurement matrix using the sparse dictionary of the signals [84] and an optimal measurement matrix based on effective projection [85]. Researchers are also trying to use machine learning to train and optimize the sampling framework of CS [86,87,88,89].

4. Selection of the Reconstruction Algorithm

The previous section gave the requirements on the measurement matrix $\Phi$ so that $x$ can be recovered from the measurements $y = \Phi x = \Phi \Psi s = A s$. Since knowledge of $s$ is equivalent to knowledge of $x$, we only need to discuss the reconstruction algorithms for solving the equation $y = A s$.
As mentioned in Section 2, the unique solution of Equation (2) can be obtained by posing the reconstruction problem as the $l_0$-minimization problem of Equation (3). However, $l_0$-minimization is NP-complete and therefore intractable in practice. The crux of CS is to propose faster algorithms that solve Equation (3) with high probability. Over the past years, several typical sparse recovery algorithms have been proposed [50,90,91,92,93], which can be classified into five main categories: convex optimization algorithms, greedy algorithms, non-convex optimization algorithms, Bregman distance minimization algorithms, and total variation minimization algorithms.

4.1. Convex Optimization Algorithms

These algorithms relax the CS reconstruction problem to the convex optimization problem of Equation (9) [94,95,96,97,98,99,100], which can be solved using linear programming methods.
$$ \min_{\hat{s} \in \mathbb{R}^{N}} \|\hat{s}\|_{l_1} \quad \text{subject to} \quad A\hat{s} = y. \tag{9} $$

4.1.1. Basis Pursuit

Basis pursuit (BP) is a signal processing technique [101,102] that decomposes a signal into the superposition of dictionary elements whose coefficients have the smallest $l_1$-norm, subject to the equality constraint of Equation (9). Since BP is based on global optimization, it can be solved stably in many ways; indeed, it is a principle of global optimization rather than a specific algorithm and is closely connected with linear programming. Equation (9) can therefore be expressed as:
$$ \min \; c^{T} \hat{s} \quad \text{subject to} \quad O\hat{s} = y, \; \hat{s} \ge 0, \tag{10} $$
where $c^{T}\hat{s}$ is the objective function, $\hat{s} \ge 0$ is a set of bounds, $O = (A, -A)$, and $c = (1, 1)$ is the all-ones vector partitioned conformally with $O$; here $\hat{s}$ stacks the positive and negative parts of the original variable, whose difference gives the BP solution. Over the past few decades, many algorithms have been proposed for solving linear programming problems, such as the simplex method and the interior point method [103].
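A minimal sketch of BP in this LP form, assuming SciPy's generic LP solver (not the specialized solvers of refs. [101,103]): the variable is split into non-negative parts $u, v$ with $s = u - v$, so the $l_1$-norm becomes a linear objective:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||s||_1 s.t. A s = y via the LP of Equation (10)."""
    M, N = A.shape
    c = np.ones(2 * N)                 # objective: sum(u) + sum(v) = ||s||_1
    O = np.hstack([A, -A])             # equality constraints on z = [u; v]
    res = linprog(c, A_eq=O, b_eq=y, bounds=(0, None), method='highs')
    z = res.x
    return z[:N] - z[N:]               # recombine s = u - v

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
s_true = np.zeros(100)
s_true[[3, 17, 42]] = [1.0, -2.0, 0.5]   # 3-sparse test signal
s_hat = basis_pursuit(A, A @ s_true)     # recovers s_true to solver tolerance
```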

4.1.2. Basis Pursuit Denoising/Least Absolute Shrinkage and Selection Operator

If the measurements in CSSPI are corrupted by noise, strategies for noise suppression must be sought. Based on the global optimization principle of BP, widely used algorithms for robust recovery from noisy measurements include basis pursuit denoising (BPDN) [101] and the least absolute shrinkage and selection operator (LASSO) [104,105,106]. The BPDN and LASSO algorithms consider the sparse estimation problem of Equation (11):
$$ \min \|\hat{s}\|_1 \quad \text{s.t.} \quad y = A s + e, \tag{11} $$
where $e$ denotes noise and $s$ is the pure signal without noise contamination. This can be translated into the following unconstrained optimization problem:
$$ \min_{\hat{s}} \; \frac{1}{2}\|y - A\hat{s}\|_2^2 + \lambda \|\hat{s}\|_1, \tag{12} $$
where $\lambda$ is a scalar parameter that determines the magnitude of the signal estimation residual and has a great impact on the performance of the BPDN and LASSO algorithms. As $\lambda \to 0$, the residual goes to zero and the problem reduces to BP; as $\lambda \to \infty$, the residual grows and the measurements are completely drowned out by noise. In ref. [101], the authors suggest setting $\lambda_p = \sigma\sqrt{2\log(p)}$, where $\sigma$ is the noise level and $p$ is the number of dictionary bases.
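Equation (12) admits many solvers; one of the simplest is iterative soft thresholding (ISTA, related to the IST algorithm mentioned in Section 4.6). The sketch below is our illustration, not the BPDN/LASSO implementations of refs. [101,104]:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def bpdn_ista(A, y, lam, n_iter=500):
    """Minimize 0.5*||y - A s||_2^2 + lam*||s||_1 (Equation (12)) by ISTA."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the quadratic term
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ s - y)       # gradient of the data-fidelity term
        s = soft_threshold(s - grad / L, lam / L)
    return s
```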

4.1.3. Decoding by Linear Programming

Ref. [107] proposed a faster method for finding sparse solutions of underdetermined equations: decoding by linear programming (DLP). Like BPDN and LASSO, it accounts for noise in CSSPI. To recover $s$ from the corrupted data $y = A s + e$, DLP solves the following $l_1$-minimization problem:
$$ \min_{g} \; \|y - A g\|_{l_1}, \tag{13} $$
where $g$ is the estimate and $s$ is the unique solution of Equation (13). Equation (13) is a linear program with inequality constraints and can be solved efficiently using standard optimization algorithms; see ref. [108].

4.1.4. Dantzig Selector

The Dantzig selector (DS) is another solver for CS [109], which estimates the target signal $s$ from measurements contaminated by noise. DS solves the convex optimization problem:
$$ \min \|\hat{s}\|_1 \quad \text{s.t.} \quad \|A^{T} r\|_{\infty} \le \sqrt{1+\delta_1}\,\lambda_N \cdot \sigma, \tag{14} $$
where $r = y - A\hat{s}$ is the residual vector, $\sigma$ is the standard deviation of the additive white Gaussian noise, $\lambda_N > 0$, and $\sqrt{1+\delta_1}$ bounds the maximum Euclidean norm of the columns of $A$. The DS program can also easily be recast as a linear program.
The toolbox proposed in ref. [108] uses a primal-dual algorithm to solve the various linear programs derived from these convex optimization problems.

4.2. Greedy Algorithms

Greedy algorithms [110,111,112] are the second class of CS reconstruction algorithms. Unlike convex optimization algorithms, which seek the global optimum, greedy algorithms seek the best local choice in the immediate neighborhood at each iteration of the optimization.

4.2.1. Orthogonal Matching Pursuit

Orthogonal matching pursuit (OMP), proposed by Y. C. Pati et al. [113,114,115,116,117], computes the best nonlinear approximation to the sparse solution of Equation (3). At each iteration, it locates the column of the matrix $A$ with the largest correlation to the residual $r = y - A\hat{s}$ by taking the largest absolute inner product between each column and the residual. OMP then fits the signal to all the already-selected dictionary elements via least squares, i.e., it projects the signal orthogonally onto the span of the selected dictionary atoms, so it never selects the same atom twice.
OMP has become one of the most widely used CS algorithms in recent years. Several improved variants have been proposed, such as stagewise orthogonal matching pursuit (StOMP) and regularized orthogonal matching pursuit (ROMP) [118,119].
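The core OMP loop is compact; the following sketch (our illustration, with a fixed iteration count $k$) implements the correlate-select-refit cycle described above:

```python
import numpy as np

def omp(A, y, k):
    """Greedy recovery of a k-sparse s from y = A s."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # atom most correlated with residual
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)  # least-squares refit
        residual = y - A[:, support] @ coef         # orthogonal-projection residual
    s = np.zeros(A.shape[1])
    s[support] = coef
    return s
```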

4.2.2. Compressive Sampling Matching Pursuit/Subspace Pursuit

Because each iteration of OMP selects only one atom, it is also called a serial greedy algorithm. To overcome the instability of serial greedy algorithms, researchers have proposed parallel greedy algorithms, namely compressive sampling matching pursuit (CoSaMP) [120] and subspace pursuit (SP) [121]. These algorithms have stricter guarantees on convergence and performance, select multiple atoms in each iteration, and allow previously selected wrong atoms to be discarded. CoSaMP selects $2k$ atoms and SP selects $k$ atoms in each iteration.

4.2.3. Iterative Hard Thresholding

Iterative hard thresholding (IHT) is yet another greedy algorithm, proposed by Blumensath and Davies [122,123,124]. In each iteration, it applies a thresholding function that keeps the $k$ largest non-zero entries of the estimated signal and sets the remaining entries to zero.
The critical update step of IHT is,
$$ s^{(n+1)} = H_k\!\left( s^{(n)} + \lambda A^{T}\!\left( y - A s^{(n)} \right) \right), \tag{15} $$
where $H_k$ is the hard thresholding function and $\lambda$ denotes the step size.
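A sketch of the iteration of Equation (15) (our illustration; for guaranteed convergence the matrix should be scaled so that $\|A\|_2 \le 1$, or a normalized step size used):

```python
import numpy as np

def iht(A, y, k, step=1.0, n_iter=200):
    """Iterative hard thresholding for k-sparse recovery, per Equation (15)."""
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        s = s + step * (A.T @ (y - A @ s))   # gradient step toward the data
        keep = np.argsort(np.abs(s))[-k:]    # indices of the k largest magnitudes
        mask = np.zeros_like(s)
        mask[keep] = 1.0
        s *= mask                            # hard thresholding H_k
    return s
```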

4.3. Non-Convex Optimization Algorithms

All CS reconstruction algorithms based on signal sparsity try to find the approximate solution with minimum norm. Non-convex optimization algorithms recover signals from fewer measurements by replacing the $l_1$-norm with the $l_p$-norm, where $0 < p < 1$ [125,126,127].

4.3.1. Iterative Reweighted Least Square Algorithm

Non-convex optimization algorithms consider the following equivalent variant of Equation (9):
$$ \min_{\hat{s} \in \mathbb{R}^{N}} \|\hat{s}\|_{l_p} \quad \text{subject to} \quad A\hat{s} = y, \tag{16} $$
where $p < 1$. The iterative reweighted least squares (IRLS) algorithm [128,129] replaces the $l_p$ objective function in Equation (16) with a weighted $l_2$-norm:
$$ \min_{s} \sum_{i=1}^{N} \omega_i s_i^2 \quad \text{subject to} \quad A s = y, \tag{17} $$
where the weights $\omega_i = |s_i^{(n-1)}|^{p-2}$ are computed from the previous iterate $s^{(n-1)}$ so that the objective in Equation (17) is a first-order approximation of the $l_p$ objective. The solution of Equation (17) can be given explicitly, yielding the next iterate $s^{(n)}$:
$$ s^{(n)} = Q_n A^{T}\!\left( A Q_n A^{T} \right)^{-1} y, \tag{18} $$
where $Q_n$ is the diagonal matrix with entries $1/\omega_i = |s_i^{(n-1)}|^{2-p}$.
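A direct transcription of Equations (17) and (18) into Python (our sketch; the small epsilon, a common regularization, keeps the weights finite as entries of $s$ approach zero):

```python
import numpy as np

def irls(A, y, p=0.8, n_iter=30, eps=1e-8):
    """IRLS for the l_p problem of Equation (16), via Equations (17)-(18)."""
    s = np.linalg.pinv(A) @ y                          # least-squares initialization
    for _ in range(n_iter):
        Q = np.diag(np.abs(s) ** (2 - p) + eps)        # entries 1/w_i = |s_i|^(2-p)
        s = Q @ A.T @ np.linalg.solve(A @ Q @ A.T, y)  # closed form of Equation (18)
    return s
```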

4.3.2. Bayesian Compressed Sensing Algorithm

The Bayesian approach applies to input signals that follow known probability distributions. The Bayesian compressed sensing (BCS) algorithm [130,131,132,133] provides an estimate of the posterior density function given the additive noise encountered when performing compressive measurements, i.e., $y = A s + n$. Let $\sigma^2$ be the noise variance; the Gaussian likelihood model is:
$$ p\!\left( y \mid s, \sigma^2 \right) = \left( 2\pi\sigma^2 \right)^{-k/2} \exp\!\left( -\frac{1}{2\sigma^2}\|y - A s\|^2 \right). \tag{19} $$
In this way, the traditional sparse-weight inversion problem of CS is transformed into a linear regression problem with sparsity constraints, and BCS seeks the full posterior density function.

4.4. Bregman Distance Minimization Algorithms

For large-scale, completely dense matrices $A$, the products $A s$ and $A^{T} s$ can be calculated by fast transforms, which makes it possible to solve the unconstrained problem:
$$ \min_{s} \; \mu\|s\|_1 + \frac{1}{2}\|y - A s\|_2^2. \tag{20} $$
By iteratively solving a series of unconstrained subproblems of the form of Equation (20), generated by the Bregman iterative regularization scheme, the exact solution of the constrained problem is obtained [134,135,136].
In [137], the split Bregman (SB) algorithm was proposed, which can be used in CS. Due to its parallelizable nature, it can be implemented efficiently for faster computation.

4.5. Total Variation Minimization Algorithms

For two-dimensional image signals, Rudin, Osher, and Fatemi [138] first introduced the concept of total variation (TV) for image denoising in 1992, proposing a restoration model based on image-gradient sparsity. Research has confirmed that using TV minimization in CS makes the recovered image sharper by preserving edges and boundaries more accurately, which is essential for characterizing images. Unlike the other algorithms, TV minimization algorithms do not need a specific sparse dictionary to represent image signals. A detailed discussion of TV minimization algorithms can be found in [139], and different versions of the TV minimization algorithm for image restoration are proposed in [140,141,142].
Definition: Let $x_{ij}$ denote the pixel in the $i$-th row and $j$-th column of an $n \times n$ image, and define the operators
$$ D_{h;ij}x = \begin{cases} x_{i+1,j} - x_{ij}, & i < n \\ 0, & i = n \end{cases} \qquad D_{v;ij}x = \begin{cases} x_{i,j+1} - x_{ij}, & j < n \\ 0, & j = n, \end{cases} \tag{21} $$
and
$$ D_{ij}x = \begin{pmatrix} D_{h;ij}x \\ D_{v;ij}x \end{pmatrix}, \tag{22} $$
where the 2-vector $D_{ij}x$ can be interpreted as a kind of discrete gradient of the digital image $x$. The total variation of $x$ is simply the sum of the magnitudes of this discrete gradient at every point:
$$ TV(x) = \sum_{ij} \sqrt{ (D_{h;ij}x)^2 + (D_{v;ij}x)^2 } = \sum_{ij} \|D_{ij}x\|_2. \tag{23} $$
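A direct sketch of Equation (23) for an $n \times n$ image, with the zero boundary convention of Equation (21):

```python
import numpy as np

def total_variation(x):
    """Isotropic discrete TV of a 2-D image, per Equations (21)-(23)."""
    dh = np.zeros_like(x, dtype=float)
    dh[:-1, :] = x[1:, :] - x[:-1, :]     # D_h x: forward difference down rows
    dv = np.zeros_like(x, dtype=float)
    dv[:, :-1] = x[:, 1:] - x[:, :-1]     # D_v x: forward difference across columns
    return float(np.sum(np.sqrt(dh ** 2 + dv ** 2)))
```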

4.5.1. Min-TV with Equality Constraints

If $D_{ij}x$ is nonzero for only a small number of indices $ij$, the image signal can be restored by solving the following equality-constrained problem, called min-TV with equality constraints (TV-EQ):
$$ \min \; TV(x) \quad \text{subject to} \quad \Phi x = y. \tag{24} $$

4.5.2. Min-TV with Quadratic Constraints

Additionally, for an image signal with the gradient-sparsity property (i.e., only a small number of the $D_{ij}x$ are non-zero) whose single-pixel measurements are polluted by noise, the equality-constrained problem of Equation (24) can be transformed into the following min-TV with quadratic constraints (TV-QC) problem:
$$ \min \; TV(x) \quad \text{subject to} \quad \|\Phi x - b\|_2 \le \varepsilon. \tag{25} $$

4.5.3. TV Dantzig Selector

The DS solver proposed in [109] can also solve the TV minimization form of the CSSPI problem when noise pollution is considered. The TV Dantzig selector (TV-DS) considers the following TV minimization problem:
$$ \min \; TV(x) \quad \text{subject to} \quad \|\Phi^{*}(\Phi x - b)\|_{\infty} \le \gamma. \tag{26} $$

4.5.4. Total Variation Augmented Lagrangian Alternating Direction Algorithm

Compared with traditional convex optimization algorithms based on signal sparsity, the above three kinds of TV minimization algorithms are still much slower (e.g., TV-DS) or give poorer image reconstruction quality (e.g., TV-QC). The total variation augmented Lagrangian alternating direction algorithm (TVAL3) successfully overcomes this difficulty and accepts a vast range of measurement matrices [142].
The TV minimization model is very difficult to solve directly due to the non-differentiability and non-linearity of the TV term. TVAL3 minimizes augmented Lagrangian functions through an alternating minimization scheme and updates the multipliers after each sweep. Instead of employing the augmented Lagrangian method to minimize Equation (24) directly, it considers an equivalent variant of Equation (24):
$$ \min_{\omega_i, x} \sum_i \|\omega_i\|, \quad \text{subject to} \quad \Phi x = y \ \text{and} \ D_i x = \omega_i \ \text{for all} \ i. \tag{27} $$

4.6. Other Algorithms

The CSSPI reconstruction algorithms summarized in this paper are those most widely used according to the relevant literature. However, there are a large number of other CS-based recovery algorithms, which makes verifying all of their performance in SPI a huge project. Their provenance [143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161] is as follows: the iterative soft thresholding algorithm (IST) [162], fast iterative soft thresholding algorithm (FIST) [163], approximate message passing algorithm (AMP) [164,165], least angle regression (LARS) [166], the FOCUSS solver [168], gradient projection for sparse reconstruction (GPSR) [167], sparsity adaptive matching pursuit (StaMP) [169], and gradient pursuit (GP) [170].
Table 3 is a comparative summary of the several commonly used reconstruction algorithms in simulation, which may help to select reconstruction algorithms that meet different SPI systems.

5. Simulation

To show their pros and cons in terms of imaging quality, running time, and robustness to noise, this section and the next compare the performance of the above CSSPI measurement matrices and reconstruction algorithms on simulated and experimental data, respectively. Without loss of generality, we used the “cameraman” image with a size of 64 × 64 pixels as the test image in the simulations. “Cameraman” is a continuous-tone natural image that is widely employed in the MATLAB image databases and the digital image processing field.
Several experimental settings need to be clarified. (1) For the quantitative comparison of image quality, we employ the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as metrics (a minimal PSNR sketch is given after this list). (2) When comparing measurement matrices using the OMP algorithm, the number of iterations is set to 1/4 of the number of sampling patterns. (3) When comparing the performance of the reconstruction algorithms, the iteration number of all greedy algorithms is set to 200.
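The PSNR metric is standard; the sketch below shows the form we assume (SSIM is available, e.g., as structural_similarity in scikit-image):

```python
import numpy as np

def psnr(reference, reconstruction, data_range=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(reconstruction, float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```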

5.1. Comparison of Measurement Matrix Performance

To test the performance of the CSSPI measurement matrices mentioned above, we first selected the DCT as the sparse dictionary and adopted the OMP algorithm to reconstruct the image. Because of the inherent randomness of the random measurement matrices, each test was repeated 10 times and the average recorded.
The simulation results are exhibited in Figure 2 and Figure 3. Table 2 summarizes the six measurement matrices universally used in CSSPI. Figure 2a depicts the PSNR, SSIM, and running time at sampling rates of 20–100%. In the case of full sampling, the reconstruction quality of the partial Fourier matrix is better than that of the other measurement matrices. Under sub-sampling, the reconstruction quality of the partial Fourier and Toeplitz matrices is worse than that of the other measurement matrices. Apart from that, the Gaussian random matrix, Bernoulli random matrix, sparse random matrix, and partial Hadamard matrix show the same reconstruction quality. As conspicuously revealed in Figure 2a, the running time of the partial Fourier matrix is the longest. Figure 2b illustrates the images reconstructed with each measurement matrix at several different sampling rates.
Furthermore, different levels of Gaussian noise were used to pollute the measurements, and the image was reconstructed at a sampling rate of 80%. For signal-to-noise ratios (SNR) from 35 dB down to −5 dB, the performance of the different measurement matrices is shown in Figure 3. The degradation rates of the PSNR and SSIM values of the Gaussian random, Bernoulli random, sparse random, and Hadamard matrices are almost the same at all noise levels. The image quality degradation rate of the Toeplitz and Fourier matrices is the lowest at low noise levels, and at high noise levels the PSNR and SSIM values of the Toeplitz matrix match those of all the other matrices. Therefore, the Toeplitz and Fourier matrices show a better ability to suppress noise, while the Gaussian random, Bernoulli random, sparse random, and Hadamard matrices have the same noise-suppression ability. From the running time curve in Figure 3a, it can be seen that the reconstruction efficiency of all measurement matrices is unaffected by noise. These results are further confirmed in Figure 3b.

5.2. Comparison of Reconstruction Algorithm Performance

In this section, we test the CSSPI algorithms mentioned above at different sampling rates and noise levels. It should be noted that not all algorithms are directly comparable, because many make different assumptions about the measurement matrix in their design. Therefore, for a universal comparison, the measurement matrix is the most widely used Gaussian random matrix and the sparse dictionary is the DCT dictionary. To reduce the influence of randomness on the results, the simulation of each algorithm is repeated 10 times and the results averaged.
In Appendix A, we make a detailed intra-class quantitative comparison of the mentioned algorithms. From the results, several algorithms with better comprehensive performance emerge: OMP, IHT, BP, BPDN, TVAL3, BCS, and SB. Therefore, in the following comparison, we take these algorithms as representatives and make a detailed inter-class comparison of all the categories of algorithms.
Figure 4 shows the inter-class comparison of the representative algorithms that perform best within each category at different sampling rates. Figure 5 shows the images some of the algorithms reconstruct at 20%, 40%, 60%, and 80% sampling rates. As can be seen from Figure 4, the reconstruction quality of the TV minimization algorithms is the best. This is because image signals cannot be perfectly sparse even on a specific sparse basis, and the TV minimization algorithms were specially designed for the gradient sparsity of image signals, so they are more suitable for image reconstruction. TVAL3 takes the least time to reconstruct the image. The reconstruction quality of the convex optimization algorithms is better than that of the greedy algorithms, because the convex optimization algorithms look for the global optimum in each iteration while the greedy algorithms look for a local optimum. In addition, the running time of the greedy algorithms is less than that of the convex optimization algorithms, since a greedy algorithm only seeks the best matching atom rather than an atomic set in each iteration. The reconstruction quality and time of the Bayesian algorithm and the Bregman minimization algorithm lie between those of the convex optimization and greedy algorithms.
Next, we further study the performance of the various reconstruction algorithms at different noise levels. For SNRs from 35 dB down to −5 dB, the performance at a sampling rate of 80% is shown in Figure 6 and Figure 7. Figure 6 shows the results of the algorithms with the best anti-noise performance in each group. It can be seen that the deterioration rates of DLP, TV-QC, and the greedy algorithms are the lowest, and the reconstruction result of TVAL3 is the best. The reconstruction quality of BPDN and SB is better than that of BCS and the greedy algorithms. It should be emphasized that the greedy algorithms, DLP, and TV-QC, despite their poor reconstruction quality in the noiseless case, all deteriorate slowly under noise; at low SNR (SNR < 10 dB), their reconstruction quality equals that of the other algorithms, and the PSNR of OMP and IHT exceeds that of all the other algorithms.
The same conclusion as the quantitative comparison can be drawn from the reconstructed images shown in Figure 7. The reconstruction quality of the greedy algorithms is only weakly affected by noise, and it is difficult to observe differences between the various SNRs. At an SNR of 0 dB, the rough contour of the figure can still be observed in the reconstructions of the four greedy algorithms, whereas it is difficult to distinguish in the images of the other algorithms (such as TVAL3).
Summarizing the simulation results, the TV minimization algorithms based on image-gradient sparsity show the best comprehensive performance in all cases and are the most suitable for image signal reconstruction. Table 3 summarizes the CSSPI reconstruction algorithms tested in the simulations of this paper.

6. Experiment

To further compare the performance of the different CSSPI measurement matrices and reconstruction algorithms, we designed a laboratory experiment to test their performance on real experimental data; the setup is shown in Figure 8. A projector (ACER V36X) illuminates the target with the patterns, and the light reflected by the target is incident on a single-pixel detector (KG-PR-200K-A-FS) and transferred to the computer by a DAQ (NI DAQ USB-6216) for image reconstruction. In our experiment, we set the resolution of the projector to 1024 × 1280 pixels and projected a pattern every 0.3 s. This speed is very slow compared with a digital micromirror device (DMD), but the long acquisition time also means a high SNR in the measurements. A simple picture of “four bars” and a complicated picture of a “ladybug” are used for the demonstration; the results are shown in Figure 9 and Figure 10. All measurement matrices are projected by differential projection to further reduce the impact of noise, and the imaging resolution is 64 × 64.
Figure 9 shows the reconstruction results of a random, a partial orthogonal, and a semi-deterministic random measurement matrix (namely Bernoulli, partial Hadamard, and sparse random) at different sampling rates. We select the DCT as the sparse dictionary, and the reconstruction algorithm is OMP. The reconstruction quality of the sparse random matrix is slightly lower than that of the other two measurement matrices. At a sampling rate of 20%, the approximate outline of the target can be distinguished from the reconstructed results; for the “ladybug” model with more details, the quality of the reconstructed image is slightly worse. The reconstruction quality of the partial Hadamard matrix is slightly better than that of the other two kinds of matrices. The experimental results for the measurement matrices are consistent with the conclusions of the simulation tests above.
Figure 10 shows the image reconstruction results of the convex optimization, greedy, TV minimization, non-convex optimization, and Bregman minimization algorithms at different sampling rates. We use the partial Hadamard matrix as the measurement matrix and the DCT as the sparse dictionary. For the sparse “four bars”, the SSIM of the image reconstructed by the OMP, BP, and BCS algorithms reaches 0.7 at a 40% sampling rate. For the “ladybug” with more details, the reconstruction performance of the TVAL3 algorithm is the best. The performance of the SB algorithm based on Bregman minimization is not satisfactory in the actual imaging system. The experimental results support the conclusions drawn in Section 5.

7. Discussion and Conclusions

The different CS measurement matrices and reconstruction algorithms used in SPI involve certain trade-offs in hardware implementation, acquisition efficiency, recovery efficiency, imaging quality, and noise robustness, as shown in Table 2 and Table 3.
In addition, with regard to CSSPI, the trade-off between imaging quality and imaging efficiency is the predominant factor limiting its application. To deal with this problem, the authors believe it is essential to further improve the performance of CSSPI in the following aspects: (1) In signal acquisition, most current methods directly adopt the measurement matrix to conduct a linear measurement. If the possible noise in the actual environment is taken into consideration first and some local nonlinear operations are introduced into the measurement, more robust measurements can be expected. (2) In image reconstruction, combining the sparsity of the signal when handling the optimization problem can be expected to yield a better reconstruction. (3) Machine learning can be adopted to optimize the measurement matrix and the reconstruction algorithm of CSSPI simultaneously to achieve the best match, which is expected to strike a balance between measurement efficiency and imaging quality. There have already been efforts to use machine learning algorithms to improve the performance of CS, which show the strong impact of machine learning on CS [87,171,172,173,174].
In summary, we introduced the principle of SPI technology based on CS. Under different parameter settings, including sampling ratio and noise, we then tested and compared the mainstream measurement matrices and reconstruction algorithms of CSSPI on both simulated data and real captured data. Afterward, we investigated the problems currently existing in CSSPI and set forth the corresponding future research directions and development trends. Our work provides a comprehensive summary of conventional CSSPI and offers practical guidance for its development and application.

Author Contributions

Conceptualization, D.W. and L.G.; methodology, W.Z.; software, L.G.; validation, W.Z., L.G. and D.W.; formal analysis, W.Z.; investigation, L.G.; resources, D.W.; data curation, L.G.; writing—original draft preparation, W.Z. and L.G.; writing—review and editing, D.W. and A.Z.; visualization, W.Z.; supervision, D.W.; project administration, W.Z., A.Z. and D.W.; funding acquisition, W.Z., A.Z. and D.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China, grant number 62275188, in part by the International Scientific and Technological Cooperative Project in Shanxi Province under Grant 202104041101009, and in part by the Natural Science Foundation of Shanxi Province of China through the Research Project under Grants 202103021223091 and 202103021223047.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found here: [https://github.com/tyut-207/compressed-sensing, accessed on 8 May 2023].

Acknowledgments

We thank the support of the projects supported by the International Scientific and Technological Cooperative Project in Shanxi Province, the Shanxi Scholarship Council of China, and the Natural Science Foundation of Shanxi Province of China.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Here, we make a detailed intra-class quantitative comparison of the algorithms mentioned above. We use “cameraman” with a size of 64 × 64 pixels as the test image, and the iteration number of all greedy algorithms is set to 200. Figure A1 shows the PSNR, SSIM, and running time curves at different sampling rates, and Figure A2 shows the reconstructed images at selected sampling rates. Figure A3 shows the PSNR and SSIM curves under different levels of Gaussian noise, and Figure A4 shows the corresponding reconstructed images.
Figure A1. Quantitative comparison of various reconstruction algorithms at different sampling rates. (a–d) are the quantitative comparison results of the convex optimization, greedy, TV minimization, and non-convex optimization algorithms.
Figure A2. The “cameraman” images reconstructed by various reconstruction algorithms at different sampling rates.
Figure A3. Quantitative comparison of various reconstruction algorithms under different noise levels. (a–d) are the quantitative comparison results of the convex optimization, greedy, TV minimization, and non-convex optimization algorithms.
Figure A4. The “cameraman” image reconstructed by various reconstruction algorithms under Gaussian noise.
As can be observed in Figure A1a, the four convex optimization algorithms show similar performance, and all achieve perfect reconstruction at the full sampling rate. The reconstruction quality of DLP at low sampling rates is weaker than that of the other three. The running times of the BPDN and DLP algorithms are almost unaffected by the sampling rate, while those of BP and DS are greatly affected, especially that of DS, which increases sharply with the sampling rate.
As can be seen from Figure A1b, the reconstruction quality of the four greedy algorithms is almost the same. The PSNR and SSIM of OMP are slightly higher than those of the other algorithms at low sampling rates, while SP is highest at high sampling rates. The running times of OMP and IHT are almost unaffected by the sampling rate, while those of CoSaMP and SP increase with it.
Figure A1c presents a quantitative analysis of three commonly used algorithms based on TV minimization; the reconstruction quality of each improves continuously with the sampling rate. It should be noted that when the sampling rate exceeds 70%, the SSIM of TVAL3 and TV-DS levels off, while the corresponding PSNR even decreases slightly. TV-DS takes much more time to reconstruct than the other two algorithms; hence, to show the time spent by TVAL3 and TV-QC clearly, the green box area in which they lie is locally enlarged. We find that the running times of TVAL3 and TV-QC are almost unaffected by the sampling rate, increasing by less than 1 s. Therefore, among the TV minimization algorithms, solving the TV minimization problem with the Dantzig solver gives the best reconstruction quality; however, it takes almost 100 times longer than the other algorithms. Although the reconstruction quality of TVAL3 is slightly lower than that of TV-DS, its SSIM exceeds 80% at a 70% sampling rate and it takes little time.
From Figure A1d, we can see that the reconstruction quality of IRLS is better than that of BCS and SB at all sampling rates; however, it takes a lot of time, which seriously reduces its practical value, especially in real-time imaging scenarios. The reconstruction quality of BCS is better than that of SB at low sampling rates, and its running time is the least of the three algorithms; at high sampling rates, however, the comprehensive performance of SB is better. Therefore, BCS is more suitable for low-sampling-rate scenarios, while SB suits high-sampling-rate scenarios.
For SNRs from 35 dB down to −5 dB, the performance of the various reconstruction algorithms at a sampling rate of 80% is shown in Figure A3 and Figure A4. From Figure A3a, we can observe that as the SNR of the single-pixel measurements decreases, the image reconstruction quality of the four convex optimization algorithms declines; the decline is slowest for DLP, while the degradation rates of DS, BP, and BPDN are almost the same. From Figure A3b, we can see that the anti-noise performance of the four greedy algorithms is nearly the same, with only slight differences. From Figure A3c, we can see that among the TV minimization algorithms, the deterioration rate of TV-QC is the lowest. The reconstruction quality of TVAL3 is slightly worse than that of TV-DS at high SNR, but its reconstruction efficiency is higher, so the comprehensive performance of TVAL3 is still the best. Figure A3d gives the anti-noise performance of the non-convex optimization and Bregman minimization algorithms. The two types of algorithms have the same deterioration rate under Gaussian noise, with the non-convex IRLS giving the best reconstruction results.

References

1. Duarte, M.F.; Davenport, M.A.; Takhar, D.; Laska, J.N.; Sun, T.; Kelly, K.F.; Baraniuk, R.G. Single-pixel imaging via compressive sampling. IEEE Signal Process. Mag. 2008, 25, 83–91.
2. Edgar, M.P.; Gibson, G.M.; Padgett, M.J. Principles and prospects for single-pixel imaging. Nat. Photonics 2019, 13, 13–20.
3. Sen, P.; Chen, B.; Garg, G.; Marschner, S.R.; Horowitz, M.; Levoy, M.; Lensch, H.P. Dual photography. In ACM SIGGRAPH 2005 Papers; Association for Computing Machinery: New York, NY, USA, 2005; pp. 745–755.
4. Bian, L.; Suo, J.; Situ, G.; Li, Z.; Fan, J.; Chen, F.; Dai, Q. Multispectral imaging using a single bucket detector. Sci. Rep. 2016, 6, 24752.
5. Rousset, F.; Ducros, N.; Peyrin, F.; Valentini, G.; D’andrea, C.; Farina, A. Time-resolved multispectral imaging based on an adaptive single-pixel camera. Opt. Express 2018, 26, 10550–10558.
6. Zhang, Z.; Liu, S.; Peng, J.; Yao, M.; Zheng, G.; Zhong, J. Simultaneous spatial, spectral, and 3D compressive imaging via efficient Fourier single-pixel measurements. Optica 2018, 5, 315–319.
7. Qi, H.; Zhang, S.; Zhao, Z.; Han, J.; Bai, L. A super-resolution fusion video imaging spectrometer based on single-pixel camera. Opt. Commun. 2022, 520, 128464.
8. Tao, C.; Zhu, H.; Wang, X.; Zheng, S.; Xie, Q.; Wang, C.; Wu, R.; Zheng, Z. Compressive single-pixel hyperspectral imaging using RGB sensors. Opt. Express 2021, 29, 11207–11220.
9. Liu, A.; Gao, L.; Zou, W.; Huang, J.; Wu, Q.; Cao, Y.; Chang, Z.; Peng, C.; Zhu, T. High speed surface defects detection of mirrors based on ultrafast single-pixel imaging. Opt. Express 2022, 30, 15037–15048.
10. Wang, Y.; Huang, K.; Fang, J.; Yan, M.; Wu, E.; Zeng, H. Mid-infrared single-pixel imaging at the single-photon level. Nat. Commun. 2023, 14, 1073.
11. Watts, C.M.; Shrekenhamer, D.; Montoya, J.; Lipworth, G.; Hunt, J.; Sleasman, T.; Krishna, S.; Smith, D.R.; Padilla, W.J. Terahertz compressive imaging with metamaterial spatial light modulators. Nat. Photonics 2014, 8, 605–609.
12. Lu, Y.; Wang, X.K.; Sun, W.F.; Feng, S.F.; Ye, J.S.; Han, P.; Zhang, Y. Reflective single-pixel terahertz imaging based on compressed sensing. IEEE Trans. Terahertz Sci. Technol. 2020, 10, 495–501.
13. Li, W.; Hu, X.; Wu, J.; Fan, K.; Chen, B.; Zhang, C.; Hu, W.; Cao, X.; Jin, B.; Lu, Y. Dual-color terahertz spatial light modulator for single-pixel imaging. Light Sci. Appl. 2022, 11, 191.
14. Gibson, G.M.; Sun, B.; Edgar, M.P.; Phillips, D.B.; Hempler, N.; Maker, G.T.; Malcolm, G.P.A.; Padgett, M.J. Real-time imaging of methane gas leaks using a single-pixel camera. Opt. Express 2017, 25, 2998–3005.
15. Studer, V.; Bobin, J.; Chahid, M.; Mousavi, H.S.; Candes, E.; Dahan, M. Compressive fluorescence microscopy for biological and hyperspectral imaging. Proc. Natl. Acad. Sci. USA 2012, 109, E1679–E1687.
16. Radwell, N.; Mitchell, K.J.; Gibson, G.M.; Edgar, M.P.; Bowman, R.; Padgett, M.J. Single-pixel infrared and visible microscope. Optica 2014, 1, 285–289.
17. Mostafavi, S.M.; Amjadian, M.; Kavehvash, Z.; Shabany, M. Fourier photoacoustic microscope improved resolution on single-pixel imaging. Appl. Opt. 2022, 61, 1219–1228.
18. Durán, V.; Soldevila, F.; Irles, E.; Clemente, P.; Tajahuerce, E.; Andrés, P.; Lancis, J. Compressive imaging in scattering media. Opt. Express 2015, 23, 14424–14433.
19. Deng, H.; Wang, G.; Li, Q.; Sun, Q.; Ma, M.; Zhong, X. Transmissive single-pixel microscopic imaging through scattering media. Sensors 2021, 21, 2721.
20. Guo, Y.; Li, B.; Yin, X. Dual-compressed photoacoustic single-pixel imaging. Natl. Sci. Rev. 2023, 10, nwac058.
21. Radwell, N.; Johnson, S.D.; Edgar, M.P.; Higham, C.F.; Murray-Smith, R.; Padgett, M.J. Deep learning optimized single-pixel LiDAR. Appl. Phys. Lett. 2019, 115, 231101.
22. Huang, J.; Li, Z.; Shi, D.; Chen, Y.; Yuan, K.; Hu, S.; Wang, Y. Scanning single-pixel imaging lidar. Opt. Express 2022, 30, 37484–37492.
23. Sefi, O.; Klein, Y.; Strizhevsky, E.; Dolbnya, I.P.; Shwartz, S. X-ray imaging of fast dynamics with single-pixel detector. Opt. Express 2020, 28, 24568–24576.
24. He, Y.H.; Zhang, A.X.; Li, M.F.; Huang, Y.Y.; Quan, B.G.; Li, D.Z.; Wu, L.A.; Chen, L.M. High-resolution sub-sampling incoherent x-ray imaging with a single-pixel detector. APL Photonics 2020, 5, 056102.
25. Salvador-Balaguer, E.; Latorre-Carmona, P.; Chabert, C.; Pla, F.; Lancis, J.; Tajahuerce, E. Low-cost single-pixel 3D imaging by using an LED array. Opt. Express 2018, 26, 15623–15631.
26. Gao, L.; Zhao, W.; Zhai, A.; Wang, D. OAM-basis wavefront single-pixel imaging via compressed sensing. J. Light. Technol. 2023, 41, 2131–2137.
27. Gong, W. Performance comparison of computational ghost imaging versus single-pixel camera in light disturbance environment. Opt. Laser Technol. 2022, 152, 108140.
28. Padgett, M.J.; Boyd, R.W. An introduction to ghost imaging: Quantum and classical. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2017, 375, 20160233.
29. Zhang, Z.; Ma, X.; Zhong, J. Single-pixel imaging by means of Fourier spectrum acquisition. Nat. Commun. 2015, 6, 6225.
30. Chen, Y.; Yao, X.R.; Zhao, Q.; Liu, S.; Liu, X.F.; Wang, C.; Zhai, G.J. Single-pixel compressive imaging based on the transformation of discrete orthogonal Krawtchouk moments. Opt. Express 2019, 27, 29838–29853.
31. Su, J.; Zhai, A.; Zhao, W.; Han, Q.; Wang, D. Hadamard Single-pixel Imaging Using Adaptive Oblique Zigzag Sampling. Acta Photonica Sin. 2021, 50, 311003.
32. Wang, Z.; Zhao, W.; Zhai, A.; He, P.; Wang, D. DQN based single-pixel imaging. Opt. Express 2021, 29, 15463–15477.
33. Xu, C.; Zhai, A.; Zhao, W.; He, P.; Wang, D. Orthogonal single-pixel imaging using an adaptive under-Nyquist sampling method. Opt. Commun. 2021, 500, 127326.
34. Kallepalli, A.; Innes, J.; Padgett, M.J. Compressed sensing in the far-field of the spatial light modulator in high noise conditions. Sci. Rep. 2021, 11, 17460.
35. Shin, Z.; Chai, T.Y.; Pua, C.H.; Wang, X.; Chua, S.Y. Efficient spatially-variant single-pixel imaging using block-based compressed sensing. J. Signal Process. Syst. 2021, 93, 1323–1337.
36. Sun, M.J.; Meng, L.T.; Edgar, M.P.; Padgett, M.J.; Radwell, N. A Russian Dolls ordering of the Hadamard basis for compressive single-pixel imaging. Sci. Rep. 2017, 7, 3464.
37. Shin, J.; Bosworth, B.T.; Foster, M.A. Single-pixel imaging using compressed sensing and wavelength-dependent scattering. Opt. Lett. 2016, 41, 886–889.
38. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
39. Candès, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509.
40. Wang, L.H.; Zhang, W.; Guan, M.H.; Jiang, S.Y.; Fan, M.H.; Abu, P.A.R.; Chen, C.A.; Chen, S.L. A Low-Power High-Data-Transmission Multi-Lead ECG Acquisition Sensor System. Sensors 2019, 19, 4996.
41. Gibson, G.M.; Johnson, S.D.; Padgett, M.J. Single-pixel imaging 12 years on: A review. Opt. Express 2020, 28, 28190. [Google Scholar] [CrossRef]
  42. Qaisar, S.; Bilal, R.M.; Iqbal, W.; Naureen, M.; Lee, S. Compressive sensing: From theory to applications, a survey. J. Commun. Netw. 2013, 15, 443–456. [Google Scholar] [CrossRef]
  43. Rani, M.; Dhok, S.B.; Deshmukh, R.B. A systematic review of compressive sensing: Concepts, implementations and applications. IEEE Access 2018, 6, 4875–4894. [Google Scholar] [CrossRef]
  44. Arjoune, Y.; Kaabouch, N.; El Ghazi, H.; Tamtaoui, A. A performance comparison of measurement matrices in compressive sensing. Int. J. Commun. Syst. 2018, 31, e3576. [Google Scholar] [CrossRef]
  45. Abo-Zahhad, M.M.; Hussein, A.I.; Mohamed, A.M. Compressive sensing algorithms for signal processing applications: A survey. Int. J. Commun. Netw. Syst. Sci. 2015, 8, 197–216. [Google Scholar]
  46. Gunasheela, S.K.; Prasantha, H.S. Compressed Sensing for Image Compression: Survey of Algorithms. In Emerging Research in Computing, Information, Communication and Applications; Springer: Singapore, 2019; pp. 507–517. [Google Scholar]
  47. Marques, E.C.; Maciel, N.; Naviner, L.; Cai, H.; Yang, J. A review of sparse recovery algorithms. IEEE Access 2018, 7, 1300–1322. [Google Scholar] [CrossRef]
  48. Bian, L.; Suo, J.; Dai, Q.; Chen, F. Experimental comparison of single-pixel imaging algorithms. J. Opt. Soc. Am. A 2018, 35, 78–87. [Google Scholar] [CrossRef]
  49. Qiu, Z.; Zhang, Z.; Zhong, J. Comprehensive comparison of single-pixel imaging methods. Opt. Lasers Eng. 2020, 134, 106301. [Google Scholar]
  50. Arjoune, Y.; Kaabouch, N.; El Ghazi, H.; Tamtaoui, A. Compressive sensing: Performance comparison of sparse recovery algorithms. In Proceedings of the 2017 IEEE 7th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 9–11 January 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–7. [Google Scholar]
  51. Candès, E.J. Compressive sampling. In Proceedings of the International Congress of Mathematicians, Madrid, Spain, 22–30 August 2006; Volume 3, pp. 1433–1452. [Google Scholar]
  52. Baraniuk, R.; Davenport, M.A.; Duarte, M.F.; Hegde, C. An introduction to compressive sensing. Connex. e-Textb. 2011. [Google Scholar]
  53. Donoho, D.; Tanner, J. Counting faces of randomly projected polytopes when the projection radically lowers dimension. J. Am. Math. Soc. 2009, 22, 1–53. [Google Scholar] [CrossRef]
  54. Mallat, S. A Wavelet Tour of Signal Processing; Elsevier: Amsterdam, The Netherlands, 1999. [Google Scholar]
  55. Kovacevic, J.; Chebira, A. Life beyond bases: The advent of frames (Part I). IEEE Signal Process. Mag. 2007, 24, 86–104. [Google Scholar] [CrossRef]
  56. Kovacevic, J.; Chebira, A. Life beyond bases: The advent of frames (Part II). IEEE Signal Process. Mag. 2007, 24, 115–125. [Google Scholar] [CrossRef]
  57. Rubinstein, R.; Bruckstein, A.M.; Elad, M. Dictionaries for sparse representation modeling. Proc. IEEE 2010, 98, 1045–1057. [Google Scholar] [CrossRef]
  58. Gribonval, R.; Nielsen, M. Sparse representations in unions of bases. IEEE Trans. Inf. Theory 2003, 49, 3320–3325. [Google Scholar] [CrossRef]
  59. Candes, E.J. The restricted isometry property and its implications for compressed sensing. Comptes Rendus Math. 2008, 346, 589–592. [Google Scholar] [CrossRef]
  60. Donoho, D.L.; Tsaig, Y. Fast Solution of ℓ0-Norm Minimization Problems When the Solution May Be Sparse. IEEE Trans. Inf. Theory 2008, 54, 4789–4812. [Google Scholar] [CrossRef]
  61. Candes, E.; Romberg, J. Sparsity and incoherence in compressive sampling. Inverse Probl. 2007, 23, 969–985. [Google Scholar] [CrossRef]
  62. Chen, Z.; Dongarra, J.J. Condition numbers of Gaussian random matrices. SIAM J. Matrix Anal. Appl. 2005, 27, 603–620. [Google Scholar] [CrossRef]
63. Zhang, G.; Jiao, S.; Xu, X.; Wang, L. Compressed sensing and reconstruction with Bernoulli matrices. In Proceedings of the 2010 IEEE International Conference on Information and Automation, Harbin, China, 20–23 June 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 455–460. [Google Scholar]
  64. Duarte, M.F.; Eldar, Y.C. Structured compressed sensing: From theory to applications. IEEE Trans. Signal Process. 2011, 59, 4053–4085. [Google Scholar] [CrossRef]
  65. Tsaig, Y.; Donoho, D.L. Extensions of compressed sensing. Signal Process. 2006, 86, 549–571. [Google Scholar] [CrossRef]
66. Zhang, G.; Jiao, S.; Xu, X. Compressed sensing and reconstruction with semi-Hadamard matrices. In Proceedings of the 2010 2nd International Conference on Signal Processing Systems, Dalian, China, 5–7 July 2010; IEEE: Piscataway, NJ, USA, 2010; Volume 1, pp. 194–197. [Google Scholar]
  67. Yin, W.; Morgan, S.; Yang, J.; Zhang, Y. Practical compressive sensing with Toeplitz and circulant matrices. In Proceedings of the Visual Communications and Image Processing, Huangshan, China, 14 July 2010; Volume 7744, p. 77440K. [Google Scholar]
68. Do, T.T.; Tran, T.D.; Gan, L. Fast compressive sampling with structurally random matrices. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 30 March–4 April 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 3369–3372. [Google Scholar]
  69. Do, T.T.; Gan, L.; Nguyen, N.H.; Tran, T.D. Fast and efficient compressive sensing using structurally random matrices. IEEE Trans. Signal Process. 2011, 60, 139–154. [Google Scholar] [CrossRef]
  70. Sarvotham, S.; Baron, D.; Baraniuk, R.G. Compressed sensing reconstruction via belief propagation. Preprint 2006, 14. [Google Scholar]
  71. Akçakaya, M.; Park, J.; Tarokh, V. Compressive sensing using low density frames. arXiv 2009, arXiv:0903.0650. [Google Scholar]
  72. Gilbert, A.; Indyk, P. Sparse recovery using sparse matrices. Proc. IEEE 2010, 98, 937–947. [Google Scholar] [CrossRef]
  73. Baron, D.; Sarvotham, S.; Baraniuk, R.G. Bayesian compressive sensing via belief propagation. IEEE Trans. Signal Process. 2009, 58, 269–280. [Google Scholar] [CrossRef]
  74. Akçakaya, M.; Park, J.; Tarokh, V. A coding theory approach to noisy compressive sensing using low density frames. IEEE Trans. Signal Process. 2011, 59, 5369–5379. [Google Scholar] [CrossRef]
  75. Baron, D.; Duarte, M.F.; Wakin, M.B.; Sarvotham, S.; Baraniuk, R.G. Distributed compressive sensing. arXiv 2009, arXiv:0901.3403. [Google Scholar]
  76. Park, J.Y.; Yap, H.L.; Rozell, C.J.; Wakin, M.B. Concentration of measure for block diagonal matrices with applications to compressive signal processing. IEEE Trans. Signal Process. 2011, 59, 5859–5875. [Google Scholar] [CrossRef]
  77. Li, S.; Gao, F.; Ge, G.; Zhang, S. Deterministic construction of compressed sensing matrices via algebraic curves. IEEE Trans. Inf. Theory 2012, 58, 5035–5041. [Google Scholar] [CrossRef]
  78. Berinde, R.; Gilbert, A.C.; Indyk, P.; Karloff, H.; Strauss, M.J. Combining geometry and combinatorics: A unified approach to sparse signal recovery. In Proceedings of the 2008 46th Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, USA, 23–26 September 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 798–805. [Google Scholar]
  79. Calderbank, R.; Howard, S.; Jafarpour, S. Construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property. IEEE J. Sel. Top. Signal Process. 2010, 4, 358–374. [Google Scholar] [CrossRef]
  80. DeVore, R.A. Deterministic constructions of compressed sensing matrices. J. Complex. 2007, 23, 918–925. [Google Scholar] [CrossRef]
  81. Nguyen, T.L.N.; Shin, Y. Deterministic sensing matrices in compressive sensing: A survey. Sci. World J. 2013, 2013, 192795. [Google Scholar] [CrossRef]
  82. Amini, A.; Montazerhodjat, V.; Marvasti, F. Matrices with small coherence using p-ary block codes. IEEE Trans. Signal Process. 2011, 60, 172–181. [Google Scholar] [CrossRef]
  83. Khajehnejad, M.A.; Dimakis, A.G.; Xu, W.; Hassibi, B. Sparse recovery of nonnegative signals with minimal expansion. IEEE Trans. Signal Process. 2010, 59, 196–208. [Google Scholar] [CrossRef]
  84. Elad, M. Optimized projections for compressed sensing. IEEE Trans. Signal Process. 2007, 55, 5695–5702. [Google Scholar] [CrossRef]
85. Nhat, V.D.M.; Vo, D.; Challa, S.; Lee, S. Efficient projection for compressed sensing. In Proceedings of the Seventh IEEE/ACIS International Conference on Computer and Information Science (ICIS 2008), Portland, OR, USA, 14–16 May 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 322–327. [Google Scholar]
  86. Wu, S.; Dimakis, A.; Sanghavi, S.; Yu, F.; Holtmann-Rice, D.; Storcheus, D.; Rostamizadeh, A.; Kumar, S. Learning a compressed sensing measurement matrix via gradient unrolling. In International Conference on Machine Learning; PMLR: New York, NY, USA, 2019; pp. 6828–6839. [Google Scholar]
  87. Wu, Y.; Rosca, M.; Lillicrap, T. Deep compressed sensing. In International Conference on Machine Learning; PMLR: New York, NY, USA, 2019; pp. 6850–6860. [Google Scholar]
  88. Islam, S.R.; Maity, S.P.; Ray, A.K.; Mandal, M. Deep learning on compressed sensing measurements in pneumonia detection. Int. J. Imaging Syst. Technol. 2022, 32, 41–54. [Google Scholar] [CrossRef]
  89. Ahmed, I.; Khan, A. Genetic algorithm based framework for optimized sensing matrix design in compressed sensing. Multimed. Tools Appl. 2022, 81, 39077–39102. [Google Scholar] [CrossRef]
  90. Pope, G. Compressive Sensing: A Summary of Reconstruction Algorithms. Master’s Thesis, ETH, Swiss Federal Institute of Technology Zurich, Department of Computer Science, Zürich, Switzerland, 2009. [Google Scholar]
  91. Siddamal, K.V.; Bhat, S.P.; Saroja, V.S. A survey on compressive sensing. In Proceedings of the 2015 2nd International Conference on Electronics and Communication Systems (ICECS), Coimbatore, India, 26–27 February 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 639–643. [Google Scholar]
  92. Carmi, A.Y.; Mihaylova, L.; Godsill, S.J. Compressed Sensing & Sparse Filtering; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  93. Hameed, M.A. Comparative Analysis of Orthogonal Matching Pursuit and Least Angle Regression; Michigan State University, Electrical Engineering: East Lansing, MI, USA, 2012. [Google Scholar]
  94. Santosa, F.; Symes, W.W. Linear inversion of band-limited reflection seismograms. SIAM J. Sci. Stat. Comput. 1986, 7, 1307–1330. [Google Scholar] [CrossRef]
  95. Donoho, D.L.; Stark, P.B. Uncertainty principles and signal recovery. SIAM J. Appl. Math. 1989, 49, 906–931. [Google Scholar] [CrossRef]
  96. Donoho, D.L.; Logan, B.F. Signal recovery and the large sieve. SIAM J. Appl. Math. 1992, 52, 577–591. [Google Scholar] [CrossRef]
  97. Donoho, D.L.; Elad, M. Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization. Proc. Natl. Acad. Sci. USA 2003, 100, 2197–2202. [Google Scholar] [CrossRef] [PubMed]
  98. Elad, M.; Bruckstein, A.M. A generalized uncertainty principle and sparse representation in pairs of bases. IEEE Trans. Inf. Theory 2002, 48, 2558–2567. [Google Scholar] [CrossRef]
  99. Zhang, Y. Theory of compressive sensing via ℓ1-minimization: A non-rip analysis and extensions. J. Oper. Res. Soc. China 2013, 1, 79–105. [Google Scholar] [CrossRef]
  100. Candes, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted ℓ1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905. [Google Scholar] [CrossRef]
  101. Chen, S.S.; Donoho, D.L.; Saunders, M.A. Atomic decomposition by basis pursuit. SIAM Rev. 2001, 43, 129–159. [Google Scholar] [CrossRef]
  102. Huggins, P.S.; Zucker, S.W. Greedy basis pursuit. IEEE Trans. Signal Process. 2007, 55, 3760–3772. [Google Scholar] [CrossRef]
  103. Biegler, L.T. Nonlinear Programming: Concepts, Algorithms, and Applications to Chemical Processes; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2010. [Google Scholar]
  104. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288. [Google Scholar] [CrossRef]
  105. Fu, W.J. Penalized regressions: The bridge versus the lasso. J. Comput. Graph. Stat. 1998, 7, 397–416. [Google Scholar]
  106. Maleki, A.; Anitori, L.; Yang, Z.; Baraniuk, R.G. Asymptotic analysis of complex LASSO via complex approximate message passing (CAMP). IEEE Trans. Inf. Theory 2013, 59, 4290–4308. [Google Scholar] [CrossRef]
  107. Candes, E.J.; Tao, T. Decoding by linear programming. IEEE Trans. Inf. Theory 2005, 51, 4203–4215. [Google Scholar] [CrossRef]
  108. Candes, E.; Romberg, J. l1-Magic: Recovery of Sparse Signals Via Convex Programming. Available online: www.acm.caltech.edu/l1magic/downloads/l1magic.pdf (accessed on 14 April 2005).
  109. Candes, E.; Tao, T. The Dantzig selector: Statistical estimation when p is much larger than n. Ann. Stat. 2007, 35, 2313–2351. [Google Scholar]
  110. Meenakshi, S.B. A survey of compressive sensing based greedy pursuit reconstruction algorithms. Int. J. Image Graph. Signal Process. 2015, 7, 1–10. [Google Scholar] [CrossRef]
  111. Akhila, T.; Divya, R. A survey on greedy reconstruction algorithms in compressive sensing. Int. J. Res. Comput. Commun. Technol. 2016, 5, 126–129. [Google Scholar]
  112. Tropp, J.A. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 2004, 50, 2231–2242. [Google Scholar] [CrossRef]
  113. Mallat, S.G.; Zhang, Z. Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 1993, 41, 3397–3415. [Google Scholar] [CrossRef]
  114. Pati, Y.C.; Rezaiifar, R.; Krishnaprasad, P.S. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–3 November 1993; IEEE: Piscataway, NJ, USA, 1993; pp. 40–44. [Google Scholar]
  115. DeVore, R.A.; Temlyakov, V.N. Some remarks on greedy algorithms. Adv. Comput. Math. 1996, 5, 173–187. [Google Scholar] [CrossRef]
  116. Wen, J.; Zhou, Z.; Wang, J.; Tang, X.; Mo, Q. A sharp condition for exact support recovery with orthogonal matching pursuit. IEEE Trans. Signal Process. 2016, 65, 1370–1382. [Google Scholar] [CrossRef]
  117. Wang, J. Support recovery with orthogonal matching pursuit in the presence of noise: A new analysis. arXiv 2015, arXiv:1501.04817. [Google Scholar]
  118. Needell, D.; Vershynin, R. Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit. Found. Comput. Math. 2009, 9, 317–334. [Google Scholar] [CrossRef]
  119. Needell, D.; Vershynin, R. Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. IEEE J. Sel. Top. Signal Process. 2010, 4, 310–316. [Google Scholar] [CrossRef]
  120. Needell, D.; Tropp, J.A. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 2009, 26, 301–321. [Google Scholar] [CrossRef]
  121. Dai, W.; Milenkovic, O. Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 2009, 55, 2230–2249. [Google Scholar] [CrossRef]
  122. Blumensath, T.; Davies, M.E. Iterative thresholding for sparse approximations. J. Fourier Anal. Appl. 2008, 14, 629–654. [Google Scholar] [CrossRef]
  123. Blumensath, T.; Davies, M.E. Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 2009, 27, 265–274. [Google Scholar] [CrossRef]
  124. Blumensath, T.; Davies, M.E. Normalized iterative hard thresholding: Guaranteed stability and performance. IEEE J. Sel. Top. Signal Process. 2010, 4, 298–309. [Google Scholar] [CrossRef]
  125. Chartrand, R. Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Process. Lett. 2007, 14, 707–710. [Google Scholar] [CrossRef]
  126. Wen, J.; Li, D.; Zhu, F. Stable recovery of sparse signals via lp-minimization. Appl. Comput. Harmon. Anal. 2015, 38, 161–176. [Google Scholar] [CrossRef]
  127. Kanevsky, D.; Carmi, A.; Horesh, L.; Gurfil, P.; Ramabhadran, B.; Sainath, T.N. Kalman filtering for compressed sensing. In Proceedings of the 2010 13th International Conference on Information Fusion, Edinburgh, UK, 26–29 July 2010; IEEE: Piscataway, NJ, USA, 2010; pp. 1–8. [Google Scholar]
  128. Chartrand, R.; Staneva, V. Restricted isometry properties and nonconvex compressive sensing. Inverse Probl. 2008, 24, 035020. [Google Scholar] [CrossRef]
129. Chartrand, R.; Yin, W. Iteratively reweighted algorithms for compressive sensing. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 30 March–4 April 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 3869–3872. [Google Scholar]
  130. Wipf, D.P.; Rao, B.D. Sparse Bayesian learning for basis selection. IEEE Trans. Signal Process. 2004, 52, 2153–2164. [Google Scholar] [CrossRef]
  131. Ji, S.; Xue, Y.; Carin, L. Bayesian compressive sensing. IEEE Trans. Signal Process. 2008, 56, 2346–2356. [Google Scholar] [CrossRef]
  132. Ji, S.; Carin, L. Bayesian compressive sensing and projection optimization. In Proceedings of the 24th International Conference on Machine Learning, Corvallis, OR, USA, 20–24 June 2007; pp. 377–384. [Google Scholar]
  133. Bernardo, J.M.; Smith, A.F.M. Bayesian Theory; Wiley: New York, NY, USA, 1994. [Google Scholar]
  134. Yin, W.; Osher, S.; Goldfarb, D.; Darbon, J. Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing. SIAM J. Imaging Sci. 2008, 1, 143–168. [Google Scholar] [CrossRef]
  135. Osher, S.; Burger, M.; Goldfarb, D.; Xu, J.; Yin, W. An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 2005, 4, 460–489. [Google Scholar] [CrossRef]
  136. Cai, J.F.; Osher, S.; Shen, Z. Linearized Bregman iterations for compressed sensing. Math. Comput. 2009, 78, 1515–1536. [Google Scholar] [CrossRef]
  137. Goldstein, T.; Osher, S. The split Bregman method for ℓ1-regularized problems. SIAM J. Imaging Sci. 2009, 2, 323–343. [Google Scholar] [CrossRef]
  138. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268. [Google Scholar] [CrossRef]
  139. Chambolle, A. An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 2004, 20, 89–97. [Google Scholar]
  140. Yang, J.; Zhang, Y.; Yin, W. An efficient TVL1 algorithm for deblurring multichannel images corrupted by impulsive noise. SIAM J. Sci. Comput. 2009, 31, 2842–2865. [Google Scholar] [CrossRef]
  141. Geman, D.; Yang, C. Nonlinear image recovery with half-quadratic regularization. IEEE Trans. Image Process. 1995, 4, 932–946. [Google Scholar] [CrossRef]
  142. Li, C. An Efficient Algorithm for Total Variation Regularization with Applications to the Single Pixel Camera and Compressive Sensing; Rice University: Houston, TX, USA, 2010. [Google Scholar]
  143. Wang, J.; Kwon, S.; Shim, B. Generalized orthogonal matching pursuit. IEEE Trans. Signal Process. 2012, 60, 6202–6216. [Google Scholar] [CrossRef]
  144. Rangan, S. Generalized approximate message passing for estimation with random linear mixing. In Proceedings of the 2011 IEEE International Symposium on Information Theory Proceedings, St. Petersburg, Russia, 31 July–5 August 2011; IEEE: Piscataway, NJ, USA, 2011; pp. 2168–2172. [Google Scholar]
  145. Khajehnejad, M.A.; Xu, W.; Avestimehr, A.S.; Hassibi, B. Weighted ℓ1 minimization for sparse recovery with prior information. In Proceedings of the 2009 IEEE International Symposium on Information Theory, Seoul, Republic of Korea, 28 June–3 July 2009; IEEE: Piscataway, NJ, USA, 2009; pp. 483–487. [Google Scholar]
  146. De Paiva, N.M.; Marques, E.C.; de Barros Naviner, L.A. Sparsity analysis using a mixed approach with greedy and LS algorithms on channel estimation. In Proceedings of the 2017 3rd International Conference on Frontiers of Signal Processing (ICFSP), Paris, France, 6–8 September 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 91–95. [Google Scholar]
  147. Kwon, S.; Wang, J.; Shim, B. Multipath matching pursuit. IEEE Trans. Inf. Theory 2014, 60, 2986–3001. [Google Scholar] [CrossRef]
  148. Wen, J.; Zhou, Z.; Li, D.; Tang, X. A novel sufficient condition for generalized orthogonal matching pursuit. IEEE Commun. Lett. 2016, 21, 805–808. [Google Scholar] [CrossRef]
  149. Sun, H.; Ni, L. Compressed sensing data reconstruction using adaptive generalized orthogonal matching pursuit algorithm. In Proceedings of the 2013 3rd International Conference on Computer Science and Network Technology, Dalian, China, 12–13 October 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1102–1106. [Google Scholar]
  150. Huang, H.; Makur, A. Backtracking-based matching pursuit method for sparse signal reconstruction. IEEE Signal Process. Lett. 2011, 18, 391–394. [Google Scholar] [CrossRef]
151. Gilbert, A.C.; Strauss, M.J.; Tropp, J.A.; Vershynin, R. Algorithmic linear dimension reduction in the ℓ1 norm for sparse vectors. arXiv 2006, arXiv:cs/0608079. [Google Scholar]
  152. Blanchard, J.D.; Tanner, J.; Wei, K. CGIHT: Conjugate gradient iterative hard thresholding for compressed sensing and matrix completion. Inf. Inference A J. IMA 2015, 4, 289–327. [Google Scholar] [CrossRef]
  153. Zhu, X.; Dai, L.; Dai, W.; Wang, Z.; Moonen, M. Tracking a dynamic sparse channel via differential orthogonal matching pursuit. In Proceedings of the MILCOM 2015–2015 IEEE Military Communications Conference, Tampa, FL, USA, 26–28 October 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 792–797. [Google Scholar]
  154. Karahanoglu, N.B.; Erdogan, H. Compressed sensing signal recovery via forward–backward pursuit. Digit. Signal Process. 2013, 23, 1539–1548. [Google Scholar] [CrossRef]
  155. Gilbert, A.C.; Muthukrishnan, S.; Strauss, M. Improved time bounds for near-optimal sparse Fourier representations. In Wavelets XI; International Society for Optics and Photonics: Washington, DC, USA, 2005; Volume 5914, p. 59141A. [Google Scholar]
  156. Foucart, S. Hard thresholding pursuit: An algorithm for compressive sensing. SIAM J. Numer. Anal. 2011, 49, 2543–2563. [Google Scholar] [CrossRef]
  157. Gilbert, A.C.; Strauss, M.J.; Tropp, J.A.; Vershynin, R. One sketch for all: Fast algorithms for compressed sensing. In Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing, San Diego, CA, USA, 11–13 June 2007; pp. 237–246. [Google Scholar]
  158. Tanner, J.; Wei, K. Normalized iterative hard thresholding for matrix completion. SIAM J. Sci. Comput. 2013, 35, S104–S125. [Google Scholar] [CrossRef]
  159. Mileounis, G.; Babadi, B.; Kalouptsidis, N.; Tarokh, V. An adaptive greedy algorithm with application to nonlinear communications. IEEE Trans. Signal Process. 2010, 58, 2998–3007. [Google Scholar] [CrossRef]
  160. Lee, J.; Choi, J.W.; Shim, B. Sparse signal recovery via tree search matching pursuit. J. Commun. Netw. 2016, 18, 699–712. [Google Scholar] [CrossRef]
  161. Rangan, S.; Schniter, P.; Fletcher, A.K. Vector approximate message passing. IEEE Trans. Inf. Theory 2019, 65, 6664–6684. [Google Scholar] [CrossRef]
  162. Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. J. Issued Courant Inst. Math. Sci. 2004, 57, 1413–1457. [Google Scholar] [CrossRef]
  163. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
  164. Donoho, D.L.; Maleki, A.; Montanari, A. Message-passing algorithms for compressed sensing. Proc. Natl. Acad. Sci. USA 2009, 106, 18914–18919. [Google Scholar] [CrossRef] [PubMed]
  165. Montanari, A.; Eldar, Y.C.; Kutyniok, G. Graphical models concepts in compressed sensing. Compress. Sens. 2012, 394–438. [Google Scholar]
  166. Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least angle regression. Ann. Stat. 2004, 32, 407–499. [Google Scholar] [CrossRef]
  167. Figueiredo, M.A.T.; Nowak, R.D.; Wright, S.J. Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 2007, 1, 586–597. [Google Scholar] [CrossRef]
  168. Gorodnitsky, I.F.; Rao, B.D. Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm. IEEE Trans. Signal Process. 1997, 45, 600–616. [Google Scholar] [CrossRef]
169. Do, T.T.; Gan, L.; Nguyen, N.; Tran, T.D. Sparsity adaptive matching pursuit algorithm for practical compressed sensing. In Proceedings of the 2008 42nd Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 26–29 October 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 581–587. [Google Scholar]
  170. Blumensath, T.; Davies, M.E. Gradient pursuits. IEEE Trans. Signal Process. 2008, 56, 2370–2382. [Google Scholar] [CrossRef]
  171. Gu, H.; Yaman, B.; Moeller, S.; Ellermann, J.; Ugurbil, K.; Akçakaya, M. Revisiting ℓ1-wavelet compressed-sensing MRI in the era of deep learning. Proc. Natl. Acad. Sci. USA 2022, 119, e2201062119. [Google Scholar]
  172. Adler, A.; Boublil, D.; Elad, M.; Zibulevsky, M. A deep learning approach to block-based compressed sensing of images. arXiv 2016, arXiv:1606.01519. [Google Scholar]
  173. Xie, Y.; Li, Q. A review of deep learning methods for compressed sensing image reconstruction and its medical applications. Electronics 2022, 11, 586. [Google Scholar] [CrossRef]
  174. Zonzini, F.; Carbone, A.; Romano, F.; Zauli, M.; De Marchi, L. Machine learning meets compressed sensing in vibration-based monitoring. Sensors 2022, 22, 2229. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Principle of CSSPI.
Figure 2. CSSPI results of different measurement matrices at different sampling ratios. (a) The three curve graphs show reconstruction PSNR, SSIM, and running time. (b) The reconstructed images at sampling ratios of 20%, 40%, 60%, and 80%.
Figure 3. CSSPI results of different measurement matrices under different SNRs. (a) The three curve graphs show reconstruction PSNR, SSIM, and running time. (b) The reconstructed images at SNR = 30 dB, 20 dB, 10 dB, and 0 dB.
Figure 4. Quantitative comparison of various reconstruction algorithms at different sampling ratios.
Figure 5. The "Cameraman" images reconstructed by a subset of the reconstruction algorithms at different sampling ratios.
Figure 6. Quantitative comparison of various reconstruction algorithms under noisy conditions.
Figure 7. Comparison of the "Cameraman" images reconstructed by different reconstruction algorithms from single-pixel measurements contaminated with Gaussian noise.
Figure 8. Schematic of the experimental setup.
Figure 9. Comparison of images reconstructed using different measurement matrices at different sampling ratios. (a) Imaging results of "four bars". (b) Imaging results of "ladybug".
Figure 10. Comparison of images reconstructed by different reconstruction algorithms at different sampling ratios. (a) Imaging results of "four bars". (b) Imaging results of "ladybug".
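The quantitative comparisons in Figures 2–7 score each reconstruction by PSNR and SSIM against the ground-truth image. As a point of reference, the sketch below shows one way such per-image scores could be computed; the use of scikit-image here is our assumption for illustration, since the paper does not name its metric implementation.

```python
# A minimal sketch of how per-image PSNR/SSIM scores like those plotted in
# Figures 2-7 could be computed. scikit-image is an assumption here; the
# paper does not state which implementation it uses.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score(reference, reconstruction):
    """Both images as float arrays scaled to [0, 1]."""
    psnr = peak_signal_noise_ratio(reference, reconstruction, data_range=1.0)
    ssim = structural_similarity(reference, reconstruction, data_range=1.0)
    return psnr, ssim

# Example: score a noisy copy of a random 64x64 "image".
rng = np.random.default_rng(0)
img = rng.random((64, 64))
noisy = np.clip(img + 0.05 * rng.normal(size=img.shape), 0.0, 1.0)
print(score(img, noisy))
```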
Table 1. A comparison between the work of this paper and the existing work.

This work:
- Reviews the concept of CSSPI.
- Summarizes the main measurement matrices in CSSPI.
- Summarizes the main reconstruction algorithms in CSSPI.
- Discusses in detail, through simulations and experiments, the performance of measurement matrices and reconstruction algorithms in CSSPI.
- Summarizes the advantages and disadvantages of mainstream measurement matrices and reconstruction algorithms in CSSPI.

Existing works:
- Refs. [40,41]: Review the development of CS.
- Refs. [42,43]: Review the main measurement matrices in CS.
- Refs. [44,45,46]: Review the reconstruction algorithms in CS.
- Refs. [2,47]: Review the development of SPI.
- Refs. [48,49]: Review the algorithms of SPI; as regards CS, only the TVAL3 algorithm is covered.
Table 2. Analysis and summary of various measurement matrices in CSSPI. A construction sketch for each family follows the table.

Random matrix (Gaussian [62]; Bernoulli [63])
- Definition: each coefficient independently obeys a random distribution.
- Advantages: satisfies the RIP with high probability; fewer measurements; noise robustness.
- Disadvantages: large storage space; difficult to implement in hardware; no explicit constructions.

Semi-deterministic random matrix (Toeplitz and circulant [67,68]; sparse random [70,71,72,73])
- Definition: each coefficient is generated in a particular way.
- Advantages: easy hardware implementation and robustness; the sparse random matrix retains the advantages of an unstructured random matrix.
- Disadvantages: high uncertainty; more measurements; suited only to particular types of signals.

Partial orthogonal matrix (partial Fourier [47]; partial Hadamard [65,66])
- Definition: some rows are randomly selected from an orthogonal matrix.
- Advantages: fast to generate and easy to store; easy hardware implementation and robustness.
- Disadvantages: the partial Fourier matrix needs more recovery time and more measurements; the Hadamard matrix exists only for certain dimensions.
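To make the constructions in Table 2 concrete, the following sketch builds one representative matrix from each family. It is a minimal illustration assuming NumPy/SciPy; the function names and normalizations are ours, not those of any particular SPI system.

```python
# Minimal sketches of the three measurement-matrix families in Table 2.
import numpy as np
from scipy.linalg import hadamard

def gaussian_matrix(m, n, rng=np.random.default_rng(0)):
    # Random matrix: i.i.d. N(0, 1/m) entries satisfy the RIP with high probability.
    return rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

def bernoulli_matrix(m, n, rng=np.random.default_rng(0)):
    # Random matrix: entries are +/- 1/sqrt(m) with equal probability.
    return rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

def circulant_matrix(m, n, rng=np.random.default_rng(0)):
    # Semi-deterministic: one random row generates the whole matrix by
    # cyclic shifts, and only the first m rows are kept.
    c = rng.normal(size=n)
    full = np.stack([np.roll(c, k) for k in range(n)])
    return full[:m] / np.sqrt(m)

def partial_hadamard_matrix(m, n, rng=np.random.default_rng(0)):
    # Partial orthogonal: m rows drawn at random from an n x n Hadamard
    # matrix (n must be a power of two, the dimension limit noted in Table 2).
    rows = rng.choice(n, size=m, replace=False)
    return hadamard(n)[rows] / np.sqrt(m)

# m = 256 measurements of a 32x32 scene, i.e., a 25% sampling ratio.
n, m = 32 * 32, 256
A = partial_hadamard_matrix(m, n)
print(A.shape)  # (256, 1024)
```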
Table 3. Summary of various reconstruction algorithms in CSSPI. A sketch of two representative greedy algorithms follows the table.

Convex (Dantzig selector [109]; BPDN [101,104,105,106]; BP [101,102]; DLP [107,108])
- Advantages: fewer measurements; noise robustness.
- Disadvantages: high computational complexity; slower and not suitable for large-scale problems.

Greedy (OMP [113,114,115,116,117]; CoSaMP [120]; IHT [122,123,124]; SP [121])
- Advantages: easier to implement and faster; IHT and CoSaMP can add/discard entries per iteration; noise robustness.
- Disadvantages: a priori knowledge of the signal sparsity is required; the sparsity of the solution cannot be guaranteed; more measurements.

TV (TVAL3 [142]; TV-DS [109]; TV-QC [108])
- Advantages: preserves sharp edges and prevents blurring; noise robustness; TVAL3 and TV-QC are faster.
- Disadvantages: neither linear nor differentiable; the Dantzig-selector variant (TV-DS) is slower.

Non-convex (BCS [130,131,132,133])
- Advantages: sparser solutions; faster and suitable for large-scale problems.
- Disadvantages: relies more on prior knowledge of the signal; high computational complexity.

Non-convex (IRLS [128,129])
- Advantages: fewer measurements than convex algorithms; can be implemented under a weaker RIP.
- Disadvantages: high computational complexity; slower and not suitable for large-scale problems.

Bregman (SB [137])
- Advantages: faster; noise robustness; small memory footprint.
- Disadvantages: more measurements than convex algorithms.
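To make the iteration structure of the greedy algorithms in Table 3 concrete, the sketch below gives textbook formulations of OMP and IHT: OMP alternates a greedy column selection with a least-squares fit, while IHT alternates a gradient step with hard thresholding. These are generic illustrations under our own naming, not the exact implementations benchmarked in this paper.

```python
# Textbook sketches of two greedy recovery algorithms from Table 3.
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x.
    The sparsity k must be known a priori (the disadvantage noted in Table 3)."""
    n = A.shape[1]
    residual, support = y.copy(), []
    for _ in range(k):
        # Greedy step: pick the column most correlated with the residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Projection step: least-squares fit of y on the selected columns.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(n)
    x[support] = x_s
    return x

def iht(A, y, k, iters=300):
    """Iterative hard thresholding: gradient step, then keep the k largest
    entries; the step size is set from the spectral norm for stability."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))
        x[np.argsort(np.abs(x))[:-k]] = 0.0  # hard threshold to k entries
    return x

# Tiny self-check on a synthetic 3-sparse signal (errors should be near zero).
rng = np.random.default_rng(1)
A = rng.normal(size=(64, 128)) / np.sqrt(64)
x_true = np.zeros(128)
x_true[[5, 40, 99]] = [1.0, -2.0, 0.5]
y = A @ x_true
print(np.linalg.norm(omp(A, y, 3) - x_true))
print(np.linalg.norm(iht(A, y, 3) - x_true))
```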