Article

Single-Pixel Three-Dimensional Compressive Imaging System Using Volume Structured Illumination

1 School of Media Engineering, Communication University of Zhejiang, Hangzhou 310018, China
2 Key Lab of Film and TV Media Technology of Zhejiang Province, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(17), 3463; https://doi.org/10.3390/electronics14173463
Submission received: 6 August 2025 / Revised: 26 August 2025 / Accepted: 26 August 2025 / Published: 29 August 2025

Abstract

Single-pixel imaging enables two-dimensional image capture through a single-pixel detector, yet extending this to three-dimensional or higher-dimensional information capture in single-pixel optical imaging systems has remained a challenging problem. In this study, we present a single-pixel camera system for three-dimensional (3D) imaging based on compressed sensing (CS) with continuous wave (CW) pseudo-random volume structured illumination. An estimated image, which incorporates both spatial and depth information of a 3D scene, is reconstructed using an L1-norm minimization reconstruction algorithm. This algorithm employs prior knowledge of non-overlapping objects as a constraint in the target space, resulting in improved noise performance in both numerical simulations and physical experiments. Our simulations and experiments demonstrate the feasibility of the proposed 3D CS framework. This approach achieves compressive sensing in a 3D information capture system with a measurement ratio of 19.53%. Additionally, we show that our CS 3D capturing system can accurately reconstruct the color of a target using color filter modulation.

1. Introduction

Single-pixel imaging is a vital technique for locating target objects in both spatial and depth dimensions, emerging alongside the development of compressed sensing theory. As an innovative imaging method, it not only facilitates high-quality image reconstruction but also significantly reduces the cost and complexity of imaging systems. Additionally, it showcases notable characteristics such as wide-band response, high sensitivity, and robust stability. As a result, it holds considerable potential for applications across various fields, including medical imaging and aerospace remote sensing [1,2,3].
However, optical quantum noise imposes limitations on the measurement accuracy of traditional optical 3D imaging. Particularly under specific energy constraints and practical device conditions, the resolution and accuracy of 3D imaging are further restricted by the efficiency of light energy utilization and the performance of light intensity detectors. To address the limitations of conventional active 3D optical imaging systems and enhance their performance, compressed sensing has been widely adopted in single-pixel 3D imaging. Recent efforts have focused on developing innovative measurement frameworks, such as compressive imaging and feature-specific imaging, to improve the performance of conventional imaging systems [4,5]. Liu [6] proposed an innovative photon-limited imaging method that investigates the correlation between photon detection probability within a single pulse and light intensity distribution in a single-pixel correlated imaging system. Lai [7] introduced a new single-pixel imaging approach based on Zernike patterns. Its reconstruction algorithm utilizes the inverse Zernike moment transform, enabling rapid reconstruction with minimal computation time.
This paper proposes a method to capture 3D information using a single-pixel sensor, which can reduce measurement time and enhance the utilization of light energy. Unlike traditional single-pixel compressive sensing of two-dimensional signals, three-dimensional compressive imaging simultaneously involves two key challenges: second-order dimensionality reduction in sampling and second-order dimensionality enhancement in recovery. The method offers the potential for reduced system complexity compared to conventional 3D imaging systems and to some of the CS-based 3D imaging systems referenced in this paper. In our system, the intensity contributions from axial and lateral locations are treated equally. Thus, neither temporal modulation nor a pulsed laser is needed to reconstruct the depth information, which relaxes the demands on the light source and simplifies the hardware. To improve the reconstructed 3D image and avoid artificial overlapping objects across depth slice planes, we present a reconstruction algorithm with a disjoint constraint. Although this algorithm was purposely designed for our 3D compressive imaging system, we note that it can be extended to other reconstruction tasks in multidimensional CS systems. The main contributions of this paper are as follows:
  • We derive and prove the mathematical feasibility of three-dimensional and multi-dimensional simultaneous compressive sensing.
  • We propose a 3D reconstruction algorithm based on Bregman’s iterative method, which takes advantage of prior knowledge of the object space via a disjoint constraint.
  • Based on compressed sensing, we present a framework of a 3D compressive imaging system using a single-pixel sensor and volume structure illumination. Additionally, a simulation and experiment demonstrate the feasibility of this algorithm.

2. Related Work

Some researchers have adapted CS to 3D imaging systems by employing single-pixel imaging, which requires fewer measurements to capture spatial information and results in faster acquisition than traditional 3D imaging systems [8,9]. In this approach, a depth map is obtained using the conventional Time-of-Flight (TOF) camera method [10,11,12] to generate a 3D image. The data post-processing involves CS reconstruction at each peak and the identification of peak locations. Snapshot compressive imaging is grounded in the principles of CS and utilizes low-resolution detectors to achieve optical compression of high-resolution images. A significant challenge in this field is the construction of an optical projection system that satisfies the requirements for CS reconstruction. Vera [13] proposed a single-lens compressive imaging architecture that optimizes optical projection by introducing aberrations in the pupil plane of the low-resolution imaging system. To accelerate image reconstruction in terahertz compressed sensing single-pixel imaging, Wang [14] introduced a novel tensor-based compressed sensing model, which reduced the computational complexity of various CS algorithms by several orders of magnitude. Ndagijimana [15] presented a method for extending a two-dimensional single-pixel terahertz imaging system to three dimensions using a single frequency, achieving 3D resolution while avoiding mechanical scanning and eliminating the need for wide bandwidth. To reduce artifacts and enhance the image quality of reconstructed scenes, Huang [16] proposed an enhanced compressive single-pixel imaging technique utilizing zig-zag-ordered Walsh–Hadamard light modulation.
In recent years, advancements in computing power have propelled the rapid development of machine learning, particularly deep learning (DL) and other artificial intelligence algorithms, and researchers have begun to investigate its application to single-pixel imaging [17,18]. Rizvi [19] developed a deep convolutional autoencoder network (DCAN) that models and analyzes artifacts through context learning; it employs convolutional neural networks for denoising, effectively eliminating ringing effects and enhancing the quality of low-resolution images produced by Fourier single-pixel imaging. To tackle image quality degradation at low sampling rates, Yang [20] introduced the GAN-based FSI-GAN method, which integrates perceptual loss, pixel-domain loss, and frequency-domain loss into the GAN model, thereby effectively preserving image details. Jiang [21] designed an SR-FSI generative adversarial network that combines U-Net with an attention mechanism, overcoming the trade-off between efficiency and quality in Fourier single-pixel imaging and achieving high-resolution reconstructions from low-sampling-rate measurements. Lim [22] conducted a comparative study of Fourier and Hadamard single-pixel imaging in the context of deep learning-enhanced image reconstruction, notable as the first to compare conventional Hadamard single-pixel imaging (SPI), Fourier SPI, and their DL-enhanced variants using a state-of-the-art nonlinear activation-free network. Song [23] developed a Masked Attention Network to eliminate interference between the optical sections of samples, addressing the overlapping-section problem that arises from the limited axial resolution of photon-level single-pixel tomography. Although deep learning methods offer advantages in high-quality reconstruction and real-time performance, their generalization degrades markedly in sample-limited scenarios: they require substantial labeled data for training, and reconstruction quality may be poor for scenes absent from or under-represented in the training data.

3. Three-Dimensional Extension of Compressive Imaging

Given the sparse nature of most objects, CS is capable of measuring signals with significantly fewer measurements than the native dimensionality of the signals, constituting one of the greatest advantages of measurement systems based on this technique. A linear measurement system is typically modeled as follows:
$$ s = Px + n = PT\theta + n \qquad (1) $$
where the vector $s \in \mathbb{R}^{M}$ represents the measurement signal, the matrix $P \in \mathbb{R}^{M \times N}$ denotes the measurement projection matrix determined by the measurement method and system parameters, $x \in \mathbb{R}^{N}$ is the information of interest, and $n \in \mathbb{R}^{M}$ denotes random noise. Some advanced linear measurement systems transform $x$ to another domain by using a transform projection $x = T\theta$, where $T \in \mathbb{R}^{N \times N}$ represents the transform matrix and $\theta$ remains a vector in $\mathbb{R}^{N}$. To obtain the underlying $x$ or $\theta$, $M = N$ is a necessary condition for a traditional measurement system to remain nonsingular. The CS paradigm instead exploits the inherent sparsity of the information of interest to solve for $x$ or $\theta$ from an under-determined system ($M < N$). A sparse vector has far fewer non-zero elements than its native dimensionality, e.g., $K$ ($K < N$, or $K \ll N$) non-zero entries in $\theta$. Solving a noise-free under-determined problem with a sparsity regularizer amounts to an L0-norm problem. While the L0 problem is combinatorially expensive, it has been proven that the sparse signal $\theta$ can be recovered by solving an L1-norm constrained minimization known as the basis pursuit problem:
$$ \ell_1:\quad \min_{\theta}\,\{\, \|\theta\|_1 \;:\; \Psi\theta = s \,\} \qquad (2) $$
where $\theta$ is used herein for generality (if $T$ is the trivial basis transform $I \in \mathbb{R}^{N \times N}$, $x$ equals $\theta$), $\|\cdot\|_1$ denotes the L1 norm, and $\Psi = PT$ is the measurement matrix. Importantly, when the measurement matrix $\Psi$ satisfies the restricted isometry property, perfect reconstruction of the sparse signal is guaranteed using random projections with $M = O(K \log(N/K))$ measurements.
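To make the basis pursuit formulation concrete, the following minimal numpy sketch recovers a sparse vector from under-determined random measurements using iterative soft thresholding (ISTA), one standard solver for a Lagrangian relaxation of Equation (2). The function names, parameter values, and toy dimensions are illustrative assumptions, not the implementation used in this paper.

import numpy as np

def soft_threshold(y, alpha):
    # Component-wise shrinkage: sgn(y) * max(|y| - alpha, 0).
    return np.sign(y) * np.maximum(np.abs(y) - alpha, 0.0)

def ista_basis_pursuit(Psi, s, mu=1e-3, n_iter=500):
    # Minimize mu*||theta||_1 + 0.5*||Psi @ theta - s||^2, a relaxed
    # form of Equation (2), by iterative soft thresholding.
    theta = np.zeros(Psi.shape[1])
    step = 1.0 / np.linalg.norm(Psi, 2) ** 2  # 1/L, L = Lipschitz constant
    for _ in range(n_iter):
        grad = Psi.T @ (Psi @ theta - s)
        theta = soft_threshold(theta - step * grad, mu * step)
    return theta

# Toy example: K-sparse signal, Gaussian random measurement matrix,
# M chosen on the order of K*log(N/K).
rng = np.random.default_rng(0)
N, K = 256, 8
M = int(4 * K * np.log(N / K))
theta_true = np.zeros(N)
theta_true[rng.choice(N, size=K, replace=False)] = 1.0
Psi = rng.standard_normal((M, N)) / np.sqrt(M)
theta_hat = ista_basis_pursuit(Psi, Psi @ theta_true)
print("recovery error:", np.linalg.norm(theta_hat - theta_true))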
Before describing the framework of our three-dimensional compressive imaging (3DCI), we briefly extend the CS formulation to three-dimensional signals. We represent a three-dimensional signal in Cartesian coordinates by a matrix $X \in \mathbb{R}^{n_1 n_2 \times d}$, where $n_1$ and $n_2$ denote the spatial resolution of the lateral planes that slice the scene, and $d$ is the resolution along the axial axis (see Figure 1). In each measurement, a linear measurement projection $P_r \in \mathbb{R}^{n_1 \times n_2 \times d}$ is projected onto the object scene and the scalar measurement $s_r \in \mathbb{R}$ is captured. Following Equation (1), $x$ and $P_r$ are reformed, respectively, into a column vector $x \in \mathbb{R}^{n_1 n_2 d}$ and a row vector $p_r \in \mathbb{R}^{1 \times n_1 n_2 d}$ by a vectorization operation.
In a noise-free measurement system, $s_r$ equals the inner product of $x$ and $p_r$. Finding a feasible linear measurement projection is the principal challenge of constructing a 3D CS imaging system.
It is important to note that the signal can also retain a matrix form, which makes 2D transformations convenient and brings structural benefits, as discussed in Section 4 and Section 5. Let us reform the signal matrix as $X \in \mathbb{R}^{n_1 n_2 \times d}$, and likewise $p_r$ and $\theta$. The vectorization operator, denoted ‘vec’, stacks the columns of a matrix on top of one another. Given $m$ measurements, Equation (1) can be reformed as follows:
$$ s = \begin{bmatrix} \mathrm{vec}(T^{t}p_1)^{t} \\ \vdots \\ \mathrm{vec}(T^{t}p_m)^{t} \end{bmatrix} \mathrm{vec}(\theta) + n \qquad (3) $$
where $T$ is a 2D transform matrix and $T^{t}$ is its transpose. By the compatibility property of the Kronecker product, the derivation from Equation (1) to Equation (3) is straightforward and hence omitted. Equation (3) not only preserves the matrix form of the signal but also makes the construction of the compressive matrix explicit. Moreover, any existing solver for Equation (2) applies equally to Equation (3).
The intrinsic essence of compressed sensing theory lies in the sparsity of the measured signal and the linearity of the detection system, which together guarantee the applicability and scalability of the L1-norm-based solution method. Under this premise, the dimensionality of the detected information in a practical detection scenario is reflected merely in the mathematical form of the sensing matrix and the sensing space.
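As a concrete illustration of Equation (3), the sketch below (a hypothetical helper, not the authors’ code) builds the measurement matrix by stacking $\mathrm{vec}(T^{t}p_i)^{t}$ as rows and verifies that, under a trivial basis, each measurement equals the Frobenius inner product $\mathrm{tr}(\theta^{t}p_i)$:

import numpy as np

def build_measurement_matrix(patterns, T):
    # Stack vec(T^t p_i)^t as the rows of Psi, per Equation (3).
    # Each p_i has shape (n1*n2, d); T is an (n1*n2, n1*n2) transform.
    # flatten(order="F") stacks columns, matching the 'vec' operator.
    return np.stack([(T.T @ p).flatten(order="F") for p in patterns])

rng = np.random.default_rng(1)
n1n2, d, m = 16, 5, 8
T = np.eye(n1n2)                         # trivial basis for simplicity
theta = rng.standard_normal((n1n2, d))
patterns = [rng.random((n1n2, d)) for _ in range(m)]
Psi = build_measurement_matrix(patterns, T)
s = Psi @ theta.flatten(order="F")
# With T = I, each s_i equals tr(theta^t p_i), the Frobenius inner product.
assert np.allclose(s[0], np.trace(theta.T @ patterns[0]))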

4. Framework of Single-Pixel 3D Compressive Imaging System

Before we describe the single-pixel 3D compressive imaging system (SP3DCI), we define the model of the 3D scene and the objects used in this paper, as illustrated in Figure 1. The measurement-scope pyramid in the 3D space of interest is discretized into multiple parallel plane slices, each containing an equal number of pixels. Specifically, the pixel size varies along the depth axis so that each lateral plane keeps a consistent pixel count.
The objects are labeled as a 3D matrix $X \in \mathbb{R}^{n_1 \times n_2 \times d}$, whose entries hold the reflectivity at the pixel grid of the corresponding location. Recalling the linear measurement requirement on the projection matrix $P_i$ for 3D compressive imaging, random projections of the 3D scene of interest yield compressive measurements of the 3D signal. We therefore introduce the concept of random volume structured illumination, adapted to the pyramid of the 3D object scene, as illustrated in Figure 2. The intensity distribution of the lateral plane at the $k$th depth is the sum of the distribution from the focal plane and the defocused patterns from the other planes. These patterns follow from a 2D convolution of the defocus blurring function with the original random pattern:
$$ I_k = \sum_{i=1}^{d} \frac{C}{r_k^{2}} \iint h_{ik}(u,v)\,\tilde{p}_i(u-x,\,v-y)\,du\,dv \qquad (4) $$
where $\tilde{p}_i$ and $h_{ik}$ denote, respectively, the ideal all-in-focus random projection pattern and the spatial defocus kernel from the $i$th depth projected to the $k$th depth along the axis. In the simulation, $h_{ik}$ can be a disk function or a 2D Gaussian function parameterized by the relative positions of $i$ and $k$. One example of the 3D volume structured illumination is shown in Figure 3.
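To illustrate how such a volume pattern could be simulated, the following sketch sums, at each depth, the in-focus pattern and defocused copies of the patterns from the other depths; here $h_{ik}$ is approximated by a 2D Gaussian whose width grows with the depth separation $|i-k|$, one of the two kernel choices mentioned above, and all parameter values are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter

def volume_illumination(patterns, sigma):
    # Discrete analogue of Equation (4): the intensity at depth k is the
    # sum over source depths i of the pattern p_i blurred by a kernel
    # h_ik that widens with |i - k| (sigma = 0 leaves p_k in focus).
    d = len(patterns)
    planes = []
    for k in range(d):
        plane = np.zeros_like(patterns[0], dtype=float)
        for i in range(d):
            plane += gaussian_filter(patterns[i].astype(float),
                                     sigma=sigma * abs(i - k))
        planes.append(plane)
    return planes

rng = np.random.default_rng(2)
patterns = [rng.integers(0, 2, (64, 64)) for _ in range(5)]  # binary DMD patterns
I = volume_illumination(patterns, sigma=1.5)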
In each measurement, the single-pixel sensor receives the reflected optical flux from the 3D objects as
$$ S_\delta(x,y,z) = C\,\frac{E_\delta(x,y,z)}{z^{2}}\,\rho_\delta(x,y,z) \qquad (5) $$
where $E_\delta(x,y,z) = \sum_j \iint \mathrm{PSF}_{j,\,z-z_j}(x-x_j,\,y-y_j)\,\delta(x,y)\,dx\,dy$ is the cumulative illumination of the point spread functions projected by the random matrix onto the target.
We observe that, for non-transparent objects, the imaging system can register at most one depth per lateral pixel: an object positioned closer to the imaging system occludes objects behind it wherever they overlap in the lateral plane. Such a prior condition can facilitate the estimation process [7]. Consequently, we were motivated to incorporate a disjoint constraint into the traditional CS formulation.
Next, we express this non-overlapping feature of 3D imaging mathematically within the compressed sensing framework. Let us keep the matrix form of the 3D object signal, $x \in \mathbb{R}^{n_1 n_2 \times d}$, where each row of the matrix collects the corresponding pixel at all $d$ depths and contains at most one non-zero entry. The mathematical expression of this constraint is as follows:
$$ \|x_{r_k}\|_0 \le 1, \qquad \forall\,k \in \{1,\ldots,n_1 n_2\} \qquad (6) $$
where $x_{r_k}$ denotes the $k$th row of $x$ and $\|\cdot\|_0$ represents the L0-norm; this is a stricter constraint than the usual sparsity constraint in CS and is likewise a combinatorially expensive problem. Notably, one direct inference from Equation (6) is that every column of $x$ is orthogonal to the others, that is,
$$ x_{c_i}^{t} x_{c_j} = \begin{cases} 0, & i \ne j \\ \lambda_i, & i = j \end{cases} \qquad (7) $$
where $x_{c}$ denotes a column vector of $x$, and $\lambda_i \ge 0$ is a constant that differs with $i$. An equivalent operational expression is
$$ x^{t}x = \Lambda \qquad (8) $$
where $\Lambda \in \mathbb{R}^{d \times d}$ is a diagonal matrix. We can now formulate our 3D compressive imaging measurement design as the following constrained optimization problem:
$$ \arg\min_{\theta} \|\mathrm{vec}(\theta)\|_1 \quad \text{subject to} \quad \Psi\,\mathrm{vec}(\theta) = s,\quad \theta^{t}Q\theta = \Lambda,\quad T\theta \ge 0 \qquad (9) $$
where $\Psi = \left[\mathrm{vec}(T^{t}p_1)^{t};\ \ldots;\ \mathrm{vec}(T^{t}p_m)^{t}\right]$, $\theta \in \mathbb{R}^{n^2 \times d}$ is the coefficient matrix from the basis transform projection $x = T\theta$ (we take the lateral plane to be square with spatial resolution $n^2$), and $Q = T^{t}T$ is a symmetric positive definite matrix. The L1 minimization problem thus carries a data-fidelity constraint and a non-overlapping constraint, while $T\theta \ge 0$ ensures that the reconstruction is non-negative.
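A quick numerical check makes the disjoint constraint of Equations (6)-(8) concrete: each lateral pixel may be non-zero at no more than one depth, so the Gram matrix $x^{t}x$ must be diagonal. The helper below is a hypothetical sketch with illustrative shapes and tolerances.

import numpy as np

def satisfies_disjoint_constraint(x, tol=1e-9):
    # x has shape (n1*n2, d): one row per lateral pixel, one column per depth.
    # Equation (6): every row has L0-norm at most 1.
    row_l0 = np.count_nonzero(np.abs(x) > tol, axis=1)
    # Equations (7)-(8): the columns are mutually orthogonal.
    gram = x.T @ x
    off_diag = gram - np.diag(np.diag(gram))
    return bool(np.all(row_l0 <= 1)) and np.allclose(off_diag, 0.0, atol=tol)

x = np.zeros((9, 3))
x[0, 1] = 0.8   # pixel 0 visible at depth 1
x[4, 2] = 0.5   # pixel 4 visible at depth 2
print(satisfies_disjoint_constraint(x))   # True
x[0, 2] = 0.3   # the same pixel now also "visible" at depth 2
print(satisfies_disjoint_constraint(x))   # False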

5. Proposed Algorithm of 3D Compressive Imaging

When the single-detector three-dimensional synchronized compressed sensing of Section 3 is treated purely theoretically, the standard solution approach for Equation (3) remains applicable, requiring only reshaping of the measurement matrix and the target according to our framework. However, a practical experimental system built on the actual 3D volume structured illumination of Section 4 introduces a spatial defocus kernel that correlates each depth with the other depth vectors in the measurement matrix. Moreover, the original compressed sensing reconstruction methods do not account for the optical non-overlapping constraint, under which targets at different depths cannot overlap. We therefore ran noise-free simulation reconstruction experiments with classic compressed sensing methods, namely TVAL3, Bregman, Shrinkage, and L1-eq, to verify their feasibility. We set n = 64, with two targets ‘R’ located at depths of 1500 mm and 3300 mm, respectively, occupying the upper and lower halves of the image plane.
To our knowledge, there is currently no direct solver for Equation (9). The results of the algorithm comparison for noise-free signal recovery are shown in Figure 4. TVAL3 performed worst without constraints; both the colors and shapes in its pseudo-color images were difficult to distinguish, whereas L1-eq performed very well. However, when we later attempted recovery from data actually captured by our hardware experiment, none of these existing reconstruction algorithms could successfully recover a target. Consequently, we had to incorporate constraints into the recovery algorithm to enhance its reconstruction performance, using L1-eq as the reference algorithm for our proposed algorithm for single-pixel three-dimensional compressive imaging systems.
The introduction of a non-overlapping constraint was intended to ensure that, within the depth dimension of 3D imaging, only the surface of the closest opaque objects to the imaging system are assigned a value, while all other elements in this depth dimension are set to zero. This constraint is commonly adhered to in optical 3D surface modeling and target depth detection. The mathematical derivation of this specific constraint is detailed in Equations (6)–(8), as presented in Section 4.
Therefore, we developed an algorithm for the aforementioned model using L1-norm minimization, which is applied in both our numerical simulations and the reconstruction of our experimental data. The proposed algorithm is primarily based on the framework of Bregman iterative regularization for L1 minimization, an efficient method for addressing the fundamental pursuit problem. Each iteration consists of three steps.
As introduced in Section 4, $Q = T^{t}T$ is a symmetric positive definite matrix in Equation (9). Utilizing $x = T\theta$, we first split off the orthogonality constraint and consider the objective in which both $\mu\|\mathrm{vec}(\theta)\|_1$ and $\frac{1}{2}\|\Psi\,\mathrm{vec}(\theta) - s\|_F^2$ are convex; their sum is therefore also convex. The first step is thus to solve this unconstrained convex problem, for which we adopt the shrinkage method, also known as soft thresholding, at each iteration. The algorithmic framework is depicted in Algorithm 1.
Firstly, a shrinkage operation was performed to obtain the iterative update of θ , also known as a shrinkage-thresholding operation, which will solve the L1 minimization problem with a data-fidelity constraint as follows:
$$ \theta^{k+1} = \arg\min_{\theta}\; \mu\|\mathrm{vec}(\theta)\|_1 + H(\theta) \qquad (10) $$
where $H(\theta) = \|\Psi\,\mathrm{vec}(\theta) - s^{k+1}\|_F^2$. Note that $s^{k+1}$ is an “adding noise back” iterative signal, which not only makes $H(\theta)$ a data-fidelity term but also gives it the character of a Bregman distance. The shrinkage-thresholding operation is based on the gradient algorithm, which converts Equation (10) into the iterative scheme:
$$ \theta^{k+1} \leftarrow \arg\min_{\theta}\; \mu\|\mathrm{vec}(\theta)\|_1 + \frac{1}{2\delta^{k}}\left\|\theta - \left(\theta^{k} - \delta^{k}\,\nabla H(\theta^{k})\right)\right\|^{2} \qquad (11) $$
where $\delta^{k}$ is a positive step size at iteration $k$, and $\nabla H(\theta)$ is the matrix gradient of $H(\theta)$, which can be derived from Equation (10) as follows:
$$ \nabla H(\theta) = \sum_{i=1}^{m} 2\left(\mathrm{tr}(\theta^{t}p_i) - s_i^{k+1}\right)p_i \qquad (12) $$
where $p_i \in \mathbb{R}^{n^2 \times d}$ represents the random projection matrix of the $i$th measurement, and $\mathrm{tr}(\cdot)$ sums the elements on the main diagonal of a matrix. The parameter $\theta^{k+1}$ can then be obtained by a component-wise shrinkage operation:
$$ \theta^{k+1} = \mathrm{shrink}\!\left(\theta^{k} - \delta^{k}\,\nabla H(\theta^{k}),\; \mu\delta^{k}\right) \qquad (13) $$
where $\mathrm{shrink}(y, \alpha) := \mathrm{sgn}(y)\max\{|y| - \alpha,\, 0\}$. Subsequently, we convert $\theta$ to the object space, $\chi^{k+1} = T\theta^{k+1}$, and apply the disjoint constraint $\theta^{t}Q\theta = \Lambda$ and the positivity constraint sequentially.
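A direct numpy transcription of Equations (12) and (13) might read as follows; the function names are ours, and the list of patterns stands in for the projection matrices $p_i$.

import numpy as np

def grad_H(theta, patterns, s_iter):
    # Equation (12): sum_i 2 * (tr(theta^t p_i) - s_i) * p_i.
    g = np.zeros_like(theta)
    for p_i, s_i in zip(patterns, s_iter):
        g += 2.0 * (np.trace(theta.T @ p_i) - s_i) * p_i
    return g

def shrink(y, alpha):
    # shrink(y, alpha) = sgn(y) * max(|y| - alpha, 0), element-wise.
    return np.sign(y) * np.maximum(np.abs(y) - alpha, 0.0)

def shrinkage_step(theta, patterns, s_iter, delta, mu):
    # Equation (13): gradient step on H followed by soft thresholding.
    return shrink(theta - delta * grad_H(theta, patterns, s_iter), mu * delta)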
To enforce the $\theta^{t}Q\theta = \Lambda$ constraint, the proposed algorithm proceeds in two steps. First, we employ a diagonal matrix $\Omega \in \mathbb{R}^{d \times d}$ to extract the per-depth energy proportions of the current object estimate $\chi^{k+1}$ from the shrinkage operation. Second, an Orthogonal Procrustes operation rebuilds $\hat{\chi}^{k+1}$ as a matrix whose depth images are mutually orthogonal: we compute the singular value decomposition $U\Sigma V^{*} = \chi^{k+1}\Omega$ and set $\hat{\chi}^{k+1} = U I V^{*}\Omega$, where $I$ is the identity-like matrix in $\mathbb{R}^{n^2 \times d}$.
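The Procrustes step could be sketched as follows, assuming the diagonal energy matrix $\Omega$ has already been formed; replacing the singular values of $\chi^{k+1}\Omega$ with ones yields the nearest matrix whose depth columns are mutually orthogonal.

import numpy as np

def procrustes_orthogonalize(chi, omega):
    # SVD of chi @ Omega, then U I V* Omega as described above.
    U, _, Vt = np.linalg.svd(chi @ omega, full_matrices=False)
    # U @ Vt has orthonormal columns (the singular values are set to 1);
    # Omega restores the per-depth energy proportions.
    return (U @ Vt) @ omega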
Finally, $\hat{\chi}^{k+1}$ is projected back onto the coefficient space via $\theta^{k+1} = T^{-1}\hat{\chi}^{k+1}$ and fed into the equivalent Bregman-distance update:
$$ s^{k+1} = s + \left(s^{k} - \Psi\,\mathrm{vec}(\theta^{k+1})\right) \qquad (14) $$
Note that Equation (14) adds the noise introduced by the $\theta^{t}Q\theta = \Lambda$ constraint back into the L1 minimization procedure, and $\theta^{k+1}$ is minimized alternately until convergence.
Algorithm 1 Bregman’s iterative method as the solution to Equation (9)
Input: $s$, $P$, $\mu$, $\delta^{0}$, $c$
While “stop criteria $c$” are not satisfied, repeat
      1: Shrinkage operation to find $\theta^{k+1}$ by Equation (13),
      2: Orthogonal Procrustes operation to obtain $\hat{\chi}^{k+1}$ and apply the positivity constraint in the object space,
      3: Equivalent Bregman-distance update of $s^{k+1}$ by Equation (14).
End
Output: $\chi$
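Putting the three steps together, a compact sketch of Algorithm 1 under the simplifying assumption of a trivial basis $T = I$ (so $\theta = \chi$ and $Q = I$) might read as follows; it reuses shrinkage_step and procrustes_orthogonalize from the sketches above, and the per-depth energy matrix $\Omega$ shown here is one plausible realization of the step described in the text.

import numpy as np

def sp3dci_reconstruct(s, patterns, mu, delta, n_iter=200):
    # Bregman iteration for Equation (9), trivial-basis sketch.
    theta = np.zeros_like(patterns[0])
    s_iter = s.astype(float).copy()
    for _ in range(n_iter):
        # Step 1: shrinkage update of theta (Equation (13)).
        theta = shrinkage_step(theta, patterns, s_iter, delta, mu)
        # Step 2: disjoint and positivity constraints in object space.
        energy = np.sqrt(np.sum(theta ** 2, axis=0)) + 1e-12
        omega = np.diag(energy)          # per-depth energy proportions
        theta = np.maximum(procrustes_orthogonalize(theta, omega), 0.0)
        # Step 3: Bregman "add the noise back" update (Equation (14)).
        forward = np.array([np.trace(theta.T @ p) for p in patterns])
        s_iter = s + (s_iter - forward)
    return theta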

6. Numerical Simulation and Reconstruction Performance Analysis

In this section, we theoretically demonstrate the feasibility and potential of the proposed method by presenting simulation results of CS reconstruction of 3D objects using pseudo-random volume structured illumination. The volume structured illumination was simulated for an optical system built around digital micromirror device (DMD) projectors with a pixel size of 7.6 μm and a projection lens with an effective focal length of 14.95 mm and an F-number of 2. By adjusting the imaging distances, we obtained volume pseudo-random intensity distributions across five lateral planes spaced equally from 450 mm to 750 mm, with example patterns illustrated in Figure 2. In the remainder of this paper, we assess reconstruction error with the normalized root mean square error (RMSE), defined as $\mathrm{RMSE} = \sqrt{E\left(\|\chi_r - \chi_o\|_F^2\right)}/DR$, where $DR$ denotes the object dynamic range, and $\chi_r$ and $\chi_o$ are, respectively, the three-dimensional reconstructed signal and the ground-truth signal. Note that the RMSE here jointly evaluates both the locations and the reflectance ratios of the objects.
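For reference, a direct implementation of this normalized RMSE might look as follows; taking $DR$ as the maximum-minus-minimum of the ground truth is our assumption.

import numpy as np

def normalized_rmse(chi_r, chi_o):
    # Frobenius-type RMSE between reconstruction and ground truth,
    # normalized by the object dynamic range DR.
    dr = chi_o.max() - chi_o.min()
    return np.sqrt(np.mean((chi_r - chi_o) ** 2)) / dr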
Firstly, the noiseless 3D CS simulations use the three objects shown in Figure 5. Note that the objects have a sparsity of K = 125 with N = 5120, so M = O(K log(N/K)) = O(464.07) (e.g., a measurement ratio of at least 9.09%).
As illustrated in Figure 6a, we ran the L1 magic implementation in the reconstruction simulation within the framework of Equation (9) without the disjoint constraint. In Figure 6b, the simulation operates under the framework of Equation (9) with the constraint, utilizing the proposed algorithm with the user-defined parameters $\mu = 3 \times 10^{-2}$ and $\delta^{0} = 1.58 \times 10^{-7}$.
In the noiseless situation, our simulations demonstrated an exact reconstruction of the 3D objects at a 17.58% measurement ratio, using just M = 900 measurements, both with the L1 magic implementation without constraints and with our algorithm under the disjoint constraint. As illustrated in Figure 6c, the image reconstructed by the L1 magic algorithm exhibits overlapping pixels in the lateral planes at depth indices 3, 4, and 5 of the pseudo-color, pseudo-depth images (i.e., 600 mm, 675 mm, and 750 mm), resulting in poor RMSE performance. The proposed algorithm effectively eliminates these overlapping pixels and delivers significantly better RMSE. Further simulations yield an acceptable reconstruction with an RMSE of 0.0556 using only 600 measurements (a measurement ratio of 11.71%), as shown in Figure 7.
In physical systems, measurements are inevitably subject to noise. Beyond the noise sources specific to a given imaging system, noise from the detector and readout circuit is a universal and significant component that is difficult to compensate for. This noise is effectively modeled as Additive White Gaussian Noise (AWGN) with zero mean, whose standard deviation we define, in our simulation of a noisy compressive system, as a percentage of the average received intensity from the structured illumination volume.
To simulate noise, we modified the optical intensity signals obtained from the noise-free simulation. We averaged the received optical intensity signals, multiplied the average by a noise factor such as 1% or 2%, and scaled the result by a unit-variance random draw (zero mean, consistent with the AWGN model above); the resulting values were added back as AWGN. Since increasing the number of measurements improves reconstruction in both noiseless and noisy scenarios, we also swept the measurement ratio in the noisy-measurement simulations.
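A sketch of this noise model, with the AWGN standard deviation taken as the stated percentage of the mean received intensity (our reading of the procedure above):

import numpy as np

def add_measurement_noise(s_clean, noise_level, rng=None):
    # noise_level = 0.01 or 0.02 reproduces the 1% and 2% settings;
    # a 2% level corresponds to SNR = 10*log10(1/0.02) ≈ 17 dB.
    if rng is None:
        rng = np.random.default_rng()
    sigma = noise_level * np.mean(s_clean)
    return s_clean + sigma * rng.standard_normal(s_clean.shape)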
From Figure 8, we observe that the reconstruction performance is influenced by both the measurement ratio (the number of measurements) and the noise level. For a fixed number of measurements, the reconstruction RMSE increases as the noise level rises. The simulation results clearly show superior RMSE performance, an improvement of roughly 30% over the L1 magic method without constraints at a 2% noise level. Note that a 2% noise level means noise equal to 2% of the total energy received from the objects under a single projection pattern, which corresponds to a signal-to-noise ratio (SNR) of 17 dB, while a 1% noise level corresponds to an SNR of 20 dB.
Therefore, we can conclude that employing our constrained algorithm enhances RMSE performance under the same measurement ratio. The simulation indicates that our algorithm is more effective at managing noise, thereby making the proposed single-pixel three-dimensional compressive imaging system more viable. An example of the reconstruction is illustrated in Figure 9, where the measurement ratio is 23.44% with a noise level of 2%. It is noteworthy that the RMSE does not improve when the noise level exceeds 10% (e.g., SNR = 10 dB). However, we can observe that the same RMSE level of reconstruction can be achieved by increasing the measurement ratio for a given noise level.

7. Prototype of Single-Pixel Three-Dimensional Compressive Imaging System

In this section, we experimentally demonstrate the feasibility of the proposed single-pixel three-dimensional compressive imaging system. As seen in the schematic presented in Figure 10, the experimental setup primarily consisted of two adjustable projection lenses that generate projection patterns at different distances. These patterns were then merged along the main projection optical axis by a 50:50 beam splitter to form a volume structured illumination. We set the same working distance, D0, between the two projection lenses and the beam splitter, ensuring the coincidence of the illumination starting points for our volume structured illumination, which facilitated subsequent optical path adjustment and calibration. Two digital micromirror devices (DMDs) were synchronized to modulate and generate a measurement matrix, while a single-pixel detector, connected to a beam collection lens, collected reflected light signals from the detection area. Additionally, our system’s optical path diagram was scalable. By utilizing the same optical path diagram for both the beam splitter and projection system, complex hierarchical three-dimensional light illumination was achieved through the continuous expansion of beam splitters and projection system components.
The hardware experimental setup photograph is illustrated in Figure 11. Projectors A and B were positioned on the same horizontal plane with their optical axes perpendicular to each other, converging through beam splitter C, yet projecting at varying focal distances to create a multi-depth volume illumination. The projection matrices within both projector A and projector B were random 64 × 64 black-and-white matrices. D was a single-pixel detector, which was fitted with lens E for the collection of reflected light energy. The data from sensor D was captured through a data acquisition board and processed using a low-pass filter. The collection optics E parameters were D = 11.5 mm and EFL = 17.5 mm, coated with MgF2. The projectors we selected were portable DLP LED projectors using a white light source, which featured a 0.45-inch diagonal DMD with a 7.6 µm micromirror pitch. For the sensor, we utilized the PDA55 and acquired its data using an NI-DAQ device.
Although the operating principle does not constrain the spatial resolution of the proposed system, the resolution of the projection matrices was set to 64 × 64 because the measurement time cost grows proportionally with the total resolution in compressed sensing experiments. Furthermore, to work within hardware limitations, pixel binning configurations such as 4 × 4 or 8 × 8 were utilized to enhance the signal-to-noise ratio of the projection matrix elements. Higher-resolution configurations could be reached by upgrading hardware such as the DMD systems in future implementations.
To construct a high-quality coaxial volume structured illumination, during the light adjustment of the hardware experimental system, we designed a five-fixed-light-point coaxial alignment method, where five light spots are located at the four vertices and the geometric center of the square projection area; see (a) and (b) in Figure 12. Pattern images obtained from the camera enabled visual feedback to manually regulate the platform carrying the projector and the projector’s working distance, ensuring the coincidence of the projection optical axes between the central point and the edge boundary points.
Figure 12 also depicts a photograph taken during our adjustment process. According to Equation (3), calibration is necessary between the ideal measurement matrix sent to the projection system and the actual measurement matrix modulated by the detection target. In our experiment, we conducted a ‘pixel-by-pixel’ scan to balance the non-uniform inter-pixel intensity of the projection system while obtaining the relationship between the intensities of the two planes; one of the scan pattern arrays is shown in (c) of Figure 12. In the calibration data shown in Figure 13, the SNR from the plane at a depth of 450 mm is 21.5 dB, while SNR drops to 15.3 dB at the plane at a depth of 750 mm.
The actual defocus patterns were captured by photographing the real light spots and were processed to serve as a defocus spread function replacing the point spread function in Equation (3). Panels (d) and (e) of Figure 12 display two of the defocus patterns in our experiment, at 750 mm and 450 mm, respectively.
In the prototype experiment, the single-pixel detector and receiving optics operated continuously under continuous white light illumination without the use of a narrowband filter to block ambient light. Consequently, to account for the system’s baseline noise and the influence of ambient light and stray light, a measurement without any detection target was conducted before each detection to obtain the background noise of the prototype, followed by the detection target measurement.
After system adjustment and calibration, the experimental results in Figure 13 and Figure 14 demonstrated the feasibility of the proposed three-dimensional synchronous compressed sensing architecture and the effectiveness of our hardware prototype. Additionally, the superiority of our proposed algorithm is well demonstrated by the reconstruction results at a 25% measurement rate.
We also conducted reconstruction experiments on objects with varying measurement ratios and colors, as illustrated in Figure 15 and Figure 16. Both figures show the imaging and reconstruction results at two different depths, each with a resolution of 64 × 64. The reconstructed images do not use pseudo-color depth rendering, so as not to interfere with the color-reconstruction results; instead, the two 64 × 64 reconstructed images at different depths are stitched side by side. The results indicate that increasing the measurement ratio enhances reconstruction performance, and that the reconstruction of colored objects also yields improved results.
The experiments demonstrate that three-dimensional perception of target colors can be achieved through three monochromatic 3D perception measurements followed by linear color superposition of the images. In the monochromatic measurement experiment, we sequentially changed a white (255, 255, 255) pattern to a red (255, 0, 0), green (0, 255, 0), or blue (0, 0, 255) pattern, and then remixed the respective reconstructed results. This color-target experiment demonstrates that our single-pixel three-dimensional compressive imaging system scales well to higher-dimensional information, such as color, through linear superposition. However, during each single-color-channel 3D perception measurement, the emitted light intensity is only 1/3 of the original white-light illumination. The experimental data showed that, in the color reconstruction experiment, the SNR at the 450 mm plane was 16.1 dB, dropping to 12.5 dB at the 750 mm plane. As illustrated in Figure 14a, the light intensity at near targets was markedly greater than at far targets, producing a lower signal-to-noise ratio at the far targets. Consequently, the distant ‘A’ region failed to achieve an effective 3D reconstruction due to its weaker light intensity and higher noise level.
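The linear color superposition described above amounts to stacking the three monochromatic reconstructions as RGB channels; a minimal sketch, with per-channel normalization as an illustrative choice:

import numpy as np

def compose_color_volume(recon_r, recon_g, recon_b):
    # Stack the red, green, and blue 3D reconstructions into an RGB
    # volume by linear superposition, normalizing each channel to [0, 1].
    channels = []
    for c in (recon_r, recon_g, recon_b):
        c = np.asarray(c, dtype=float)
        channels.append(c / (c.max() + 1e-12))
    return np.stack(channels, axis=-1)   # shape (..., 3)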

8. Discussion and Conclusions

In this paper, we present a framework for a 3D compressive imaging system that utilizes a single sensor and continuous wave (CW) volume structured illumination. The feasibility of this approach was demonstrated through both simulation and experimental results. The proof-of-principle experiment yielded an acceptable reconstruction with a measurement ratio of 19.53% and exhibited color reconstruction capability when the signal-to-noise ratio of the monochromatic compressed sensing measurements was guaranteed. Systems with more depth planes will become realizable with advances in volume lithography modulation technology, which is one of the authors’ ongoing objectives for future work. To enhance reconstruction performance, we propose a 3D compressive reconstruction algorithm based on Bregman’s iterative method, which leverages prior knowledge of the object space. Both the simulation and the reconstruction experiments indicate that the proposed algorithm outperforms the L1 magic algorithm without constraints. For the first time, we introduce a non-overlapping constraint in an optical imaging system as an additional practical constraint, providing valuable insight for continued compressed sensing in optical imaging. The system architecture and algorithms discussed in this paper, which use single-pixel sensor signals to perform multi-dimensional recovery, are applicable not only to optical imaging but can also be extended to compressed sensing imaging and detection of multi-dimensional information.

Author Contributions

All authors contributed to the study and wrote the article. Y.J. is responsible for validation, analysis, writing—original draft preparation, methodology and implementation. S.M. is responsible for review, editing, and conceptualization. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the “Pioneer” and “Leading Goose” R&D Program of Zhejiang under Grants No. 2023C01212, No. 2023C01222, and No. 2025C02014, and by the National Natural Science Foundation of China (61601404). This work was conducted under the guidance of Professor Mark A. Neifeld at the Computational Imaging Laboratory of the University of Arizona, with the support of the China Scholarship Council. Sincere gratitude is hereby expressed.

Data Availability Statement

The data that support the findings of this study can be accessed upon reasonable request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Wang, G.; Shao, L.; Xiao, D.; Zhao, F.; Shum, P.; Wang, C. A compressive sensing single pixel imaging system using in-fiber grating. In Proceedings of the International Conference on Optical Communications and Networks (ICOCN), Qufu, China, 23–27 August 2021; pp. 1–3. [Google Scholar] [CrossRef]
  2. Cao, M.; Wang, L.; Zhu, M.; Yuan, X. Hybrid CNN-Transformer Architecture for Efficient Large-Scale Video Snapshot Compressive Imaging. Int. J. Comput. Vis. 2024, 132, 4521–4540. [Google Scholar] [CrossRef]
  3. Guo, Q.; Wang, Y.X.; Chen, H.W.; Chen, M.H.; Yang, S.G.; Xie, S.Z. Principles and applications of high-speed single-pixel imaging technology. Front. Inf. Technol. Electron. Eng. 2017, 18, 1261–1267. [Google Scholar] [CrossRef]
  4. Zhang, Y.K.; Chou, C.Y.; Yang, S.H.; Huang, Y.H. Two-stage adaptive compressive sensing and reconstruction for terahertz single-pixel imaging. In Proceedings of the IEEE International Symposium on Circuits and Systems, Singapore, 19–22 May 2024; pp. 1–5. [Google Scholar]
  5. Sun, Y.; Chen, J.; Liu, Q.; Liu, B.; Guo, G. Dual-path attention network for compressed sensing image reconstruction. IEEE Trans. Image Process. 2020, 29, 9482–9495. [Google Scholar] [CrossRef]
  6. Liu, X.; Shi, J.; Sun, L.; Li, Y.; Fan, J.; Zeng, G. Photon-limited single-pixel imaging. Opt. Express 2020, 28, 8132–8144. [Google Scholar] [CrossRef] [PubMed]
  7. Lai, W.; Lei, G.; Meng, Q.; Shi, D.; Cui, W.; Ma, P.; Wang, Y.; Han, K. Single-pixel imaging using discrete Zernike moments. Opt. Express 2022, 30, 47761–47775. [Google Scholar] [CrossRef] [PubMed]
  8. Marcos, D.; Lasser, T.; Lopez, A. Compressed imaging by sparse random convolution. Opt. Express 2016, 24, 1269–1290. [Google Scholar] [CrossRef] [PubMed]
  9. Zhang, Z.; Liu, S.; Peng, J.; Yao, M.; Zheng, G.; Zhong, J. Simultaneous spatial, spectral, and 3D compressive imaging via efficient Fourier single-pixel measurements. Optica 2018, 5, 315–319. [Google Scholar] [CrossRef]
  10. Zhai, X.L.; Wu, X.Y.; Sun, Y.W.; Shi, J.H.; Zeng, G.H. Theory and approach of single-pixel imaging. Infrared Laser Eng. 2021, 50, 1–14. [Google Scholar]
  11. Shen, S.; Gu, G.; Mao, T.; Chen, Q.; He, W.; Shi, J. Pseudo-Random Spread Spectrum Technique Based Single-Pixel Imaging Method. IEEE Photonics J. 2022, 14, 1–9. [Google Scholar] [CrossRef]
  12. Piron, F.; Morrison, D.; Yuce, M.R.; Redouté, J.M. A Review of Single-Photon Avalanche Diode Time-of-Flight Imaging Sensor Arrays. IEEE Sens. J. 2021, 21, 12654–12666. [Google Scholar] [CrossRef]
  13. Vera, E.; Meza, P. Snapshot compressive imaging using aberrations. Opt. Express 2018, 26, 1206–1218. [Google Scholar] [CrossRef] [PubMed]
  14. Wang, W.C.; Hung, Y.C.; Du, Y.H.; Yang, S.H.; Huang, Y.H. FPGA-Based Tensor Compressive Sensing Reconstruction Processor for Terahertz Single-Pixel Imaging Systems. IEEE Open J. Circuits Syst. 2022, 3, 336–350. [Google Scholar] [CrossRef]
  15. Ndagijimana, A.; Ederra, I.; Conde, M.H. Single-Pixel Compressive Terahertz 3D Imaging. IEEE Trans. Comput. Imaging 2025, 11, 570–585. [Google Scholar] [CrossRef]
  16. Huang, S.X.; Chen, B.H.; Chen, B.J.; Chan, C.H. Enhancing Compressive Single-Pixel Imaging with Zig-Zag-Ordered Walsh-Hadamard Light Modulation. IEEE Photonics Technol. Lett. 2024, 36, 803–806. [Google Scholar] [CrossRef]
  17. Zhu, Y.L.; She, R.B.; Liu, W.Q.; Lu, Y.F.; Li, G.Y. Deep Learning Optimized Terahertz Single-Pixel Imaging. IEEE Trans. Terahertz Sci. Technol. 2022, 12, 165–172. [Google Scholar] [CrossRef]
  18. Güven, B.; Güngör, A.; Bahçeci, M.U.; Çukur, T. Deep Learning Reconstruction for Single Pixel Imaging with Generative Adversarial Networks. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Kuala Lumpur, Malaysia, 8–11 October 2023; pp. 2060–2064. [Google Scholar]
  19. Rizvi, S.; Cao, J.; Zhang, K.; Hao, Q. Deringing and denoising in extremely under-sampled Fourier single pixel imaging. Opt. Express 2020, 28, 7360–7374. [Google Scholar] [CrossRef] [PubMed]
  20. Yang, X.; Jiang, P.; Jiang, M.; Xu, L.; Wu, L.; Yang, C.; Zhang, W.; Zhang, J.; Zhang, Y. High imaging quality of Fourier single pixel imaging based on generative adversarial networks at low sampling rate. Opt. Lasers Eng. 2021, 140, 106533. [Google Scholar] [CrossRef]
  21. Jiang, P.; Liu, J.; Wu, L.; Xu, L.; Hu, J.; Zhang, J.; Zhang, Y.; Yang, X. Fourier single pixel imaging reconstruction method based on the U-net and attention mechanism at a low sampling rate. Opt. Express 2022, 30, 18638–18654. [Google Scholar] [CrossRef] [PubMed]
  22. Lim, J.Y.; Roslan, M.R.; Lim, J.Y.; Baskaran, V.M.; Chiew, Y.S.; Phan, R.C.W.; Wang, X. A Comparison Between Fourier and Hadamard Single-Pixel Imaging in Deep Learning-Enhanced Image Reconstruction. IEEE Sens. Lett. 2023, 7, 3502204. [Google Scholar] [CrossRef]
  23. Song, K.; Bian, Y.; Zeng, F.; Liu, Z.; Han, S.; Li, J.; Tian, J.; Li, K.; Shi, X.; Xiao, L. Photon-level single-pixel 3D tomography with masked attention network. Opt. Express 2024, 32, 4387–4399. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Three-dimensional scene space model represented by 2D pseudo color images.
Figure 2. Principle of 3D volume structure illumination.
Figure 3. An example of the intensity distribution of lateral planes at five different depths between 450 mm and 750 mm with equal intervals (450 mm, 525 mm, 600 mm, 675 mm, 750 mm), simulated with DMD projectors with a pixel size of 7.6 μm and a projection lens with an effective focal length of 14.95 mm and an F-number of 2 (note that the size of the image increases along the depth axis).
Figure 4. The intuitive comparison results of classic reconstruction algorithms in noiseless simulation experiments without constraints. The spatial resolution is 64 × 64, the measurement ratio is 18.31%, and the targets are located at 1500 mm and 3300 mm.
Figure 5. The objects “U”, “of”, and “A” located, respectively, at 450 mm, 600 mm, and 750 mm along the axial axis, shown in the lateral image with 32 × 32 relative resolution, represent the 3D model defined in Section 2. We set a reflective ratio equal to 1 as white and a transparent ratio equal to 0 as black, as shown in the binary images.
Figure 6. (a,c) show, respectively, the reconstructed gray image (left) and the pseudo-depth image (right) using the L1_eq implementation, while (b,d) show, respectively, the reconstructed gray image (left) and the pseudo-depth image (right) using the proposed algorithm.
Figure 7. The upper part of the figure shows the noiseless reconstructed gray image (left) and the pseudo-depth image (right) using the L1_eq implementation, RMSE = 0.11077, while the lower part shows the noiseless reconstructed gray image (left) and the pseudo-depth image (right) using the proposed algorithm, RMSE = 0.0556.
Figure 8. Reconstruction performance of the L1_eq implementation and the proposed algorithm with constraints as a function of measurement ratio and measurement noise level.
Figure 9. (a) shows the noisy reconstructed gray image (left) and the pseudo depth image (right) using L1_eq implementation, while (b) shows the noisy reconstructed gray image (left) and the pseudo depth image (right) using the proposed algorithm, using a 23.44% measurement ratio with a noise level of 2% (e.g., SNR = 17 dB).
Figure 10. Schematic of the proposed single-pixel three-dimensional compressive imaging system (SP3DCI).
Figure 11. The hardware prototype of our single-pixel three-dimensional compressive imaging system.
Figure 12. (a) is a photograph of the light adjustment, showing the projection pattern with a reflective plane. (b) depicts the image captured in a dark room. (d,e) show the defocus patterns used to replace the defocus PSF located at 750 mm and 450 mm, respectively. (c) shows the 25 patterns used to perform the scan calibration.
Figure 13. Image (a) depicts an experimental photograph of our SP3DCI prototype in operation. The paper-cut letter ‘A’ is positioned in the distance, while the paper-cut letters ‘U CS’ are placed closer to the foreground. We deliberately included a whiteboard further back to showcase the projected volume illumination pattern, as well as the spatial relationship between the target ‘A’ and the target ‘U CS’. Images (b,c) show the reconstructed images of ‘A’ at 750 mm and ‘U CS’ at 450 mm at their respective depths, as well as their pseudo-color 2D depth maps.
Figure 14. The experiment scene of ‘U of A’ with our SP3DCI system and the reconstruction images. The left part shows the experiment scene of ‘U of A’, and the right part shows the reconstruction result of ‘U of A’ with a measurement ratio of 25%.
Figure 15. The experiment scene of ‘UA CS’, with ‘UA’ at a depth of 450 mm and ‘CS’ at a depth of 750 mm, and reconstructions of the objects at different measurement ratios. (a) is the original image. (b–d) were obtained by stitching reconstructed images of different depths, with the 450 mm depth 2D reconstruction on the left half and the 750 mm depth 2D reconstruction on the right half.
Figure 16. (a) depicts a three-dimensional color experimental scene, where we utilized a pink paper-cut ‘S’ and a blue paper-cut ‘C’, along with a white paper-cut ‘A’ positioned farther away. The letter ‘U’ was placed at the same depth as ‘S’ and ‘C’. (b) illustrates the experiment in which we conducted tests using red, green, and blue light, obtaining the target’s response for each color channel. These responses were then superimposed using RGB to reconstruct the three-dimensional information along with the color target.


