Article

Research on Photon-Integrated Interferometric Remote Sensing Image Reconstruction Based on Compressed Sensing

1 School of Optoelectronic Engineering, Xidian University, 2 South Taibai Road, Xi'an 710071, China
2 China Siwei Surveying and Mapping Technology Co., Ltd., 5 Fengxian East Road, Haidian District, Beijing 100094, China
* Author to whom correspondence should be addressed.
Remote Sens. 2023, 15(9), 2478; https://doi.org/10.3390/rs15092478
Submission received: 7 April 2023 / Revised: 29 April 2023 / Accepted: 3 May 2023 / Published: 8 May 2023
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

Achieving high-resolution remote sensing images is an important goal in the field of space exploration. However, when traditional compressed sensing with the orthogonal matching pursuit (OMP) algorithm is used to reconstruct the sparse signals collected by photon-integrated interferometric imaging detectors, the quality of the reconstructed remote sensing images is low, which limits the development of detection and imaging technology for photon-integrated interferometric remote sensing. We improved the OMP algorithm and propose a threshold limited-generalized orthogonal matching pursuit (TL-GOMP) algorithm. In a comparison simulation between the TL-GOMP algorithm and the traditional OMP algorithm of the same series, the peak signal-to-noise ratio (PSNR) of the reconstructed image increased by up to 18.02%, while the mean square error (MSE) decreased by up to 53.62%. The TL-GOMP algorithm can achieve high-quality image reconstruction and has great application potential in photonic integrated interferometric remote sensing detection and imaging.

1. Introduction

With the increasing maturity of photonic integrated device manufacturing and interference detection technology, the segmented planar imaging detector for electro-optical reconnaissance (SPIDER), which has photonic integrated interference imaging as its core technology, has attracted considerable attention from researchers in the fields of astronomical observation and remote sensing detection. It is intended to replace traditional optical telescopes, which have large volume, weight, and energy consumption [1], in the detection of targets. For example, the Hubble Telescope is 13.3 m long and weighs 27,000 pounds [2].
Interferometry is an important technology used in photonic integrated interferometric imaging systems. It uses electromagnetic wave superposition to extract wave source information and provides technical support for the reconstruction of high-resolution images. Optical interferometry combines the light from many lens pairs on a photonic integrated chip (PIC) and then reconstructs the remote sensing image from the optical signal obtained by interferometry. Optical interferometer arrays are the preferred instruments for high-resolution imaging; such arrays include the CHARA array [3,4], the Very Large Telescope Interferometer [5], and the Navy Precision Optical Interferometer [6]. These systems use far-field spatial coherence measurements to form intensity images of light source targets [7]. In our previous publications [8,9], we discussed the definition of a small-scale interferometric imager, the SPIDER. The SPIDER imager [8] comprises one-dimensional interferometric arms arranged along azimuth angles in multiple directions, and each interference arm has the same design structure. Any two lenses on an interference arm form an interference baseline; the collected optical signals are coupled into the PIC and interfere in multi-mode interferometers (MMIs), while the fringe data are read out by a two-dimensional detector array. Because the interference arms in the PIC are distributed along the azimuth angle [0, 2π], and because interference baselines of any length on the interference arms correspond to spatial frequency information in the two-dimensional Fourier Transform domain, the PIC can obtain optical frequency information through sparse sampling in all directions. We can use compressed sensing (CS) reconstruction algorithms to reconstruct the sparse optical signal data and thereby obtain the content information of the detected targets. CS theory can thus be applied in the field of photonic integrated interference imaging to meet needs in life, production, and scientific exploration.
In recent years, CS has attracted increasing attention in signal processing. Donoho et al. proposed this theory in 2006. The traditional Nyquist sampling theorem [10] requires that the sampling frequency of a signal be greater than or equal to twice the highest frequency of the signal. Compressed sensing theory overcomes the limitations of the traditional sampling theorem: if the collected signals are sufficiently sparse, the original signals can be reconstructed from projections onto random vectors; more specifically, the original signals can be reconstructed from far fewer samples than the Nyquist rate requires. Therefore, this innovative theory for improving sampling efficiency has been of great interest in the fields of digital signal processing [11], optical imaging [12], medical imaging [13], radio communication [14], radar imaging [15], and pattern recognition [16]. Research on compressed sensing comprises three main areas: (1) the sparse representation of original signals, with commonly used sparse transform methods such as the Fourier Transform (FT) [17], Discrete Cosine Transform (DCT) [18], and Discrete Wavelet Transform (DWT) [19]; (2) the design of the measurement matrix, including random measurement matrices [20,21] and deterministic measurement matrices [22,23]; and (3) reconstruction algorithms, such as the basis pursuit (BP) algorithm [24,25], matching pursuit (MP) algorithm [26], and orthogonal matching pursuit algorithm [27,28,29,30].
The compressed sensing OMP algorithm is one of the most representative greedy algorithms; it is simple, stable, has low computational complexity, and has been widely studied by researchers. However, the traditional OMP algorithm produces Gaussian noise when reconstructing an image, which significantly affects the quality of the reconstructed image. Consequently, the traditional OMP algorithm has continuously been improved over time, and enhanced algorithms such as stagewise orthogonal matching pursuit (STOMP), generalized orthogonal matching pursuit (GOMP), and stagewise weak orthogonal matching pursuit (SWOMP) have been developed to improve the quality of the reconstructed image. To further address the above-mentioned problems, we improved the traditional OMP algorithm and developed the threshold limited-generalized orthogonal matching pursuit (TL-GOMP) algorithm.
The main contributions of this paper can be summarized as follows:
(1) We improved the traditional OMP algorithm and proposed the TL-GOMP algorithm, which is used to reconstruct the sparse spatial frequency information collected by the PIC and recover the content information of the detected target. In the simulations, we compared the TL-GOMP algorithm with other improved OMP image reconstruction algorithms from the same series and with non-OMP image reconstruction algorithms, and thereby verified its superiority in image reconstruction.
(2) Simultaneously, we used this algorithm to reconstruct and simulate the sparse signals collected by photonic integrated chips at different distances. The simulation results showed that the TL-GOMP algorithm can be applied in the field of photon-integrated interferometric remote sensing detection and imaging to recover the content information of unknown targets.

2. Related Work

Image reconstruction starts from sparse original signals of the target or image, and the content and feature information of the target or image are restored and reproduced by designed reconstruction algorithms. At present, compressed sensing reconstruction algorithms have become mainstream in the field of image reconstruction, mainly because image signals have two characteristics: they are high-dimensional and they can be sparsely represented. Research on compressed sensing theory mainly includes three aspects: sparse signal representation, measurement matrix design, and reconstruction algorithm design.

2.1. Sparse Signal Representation

The sparse representation of signals is an important premise and foundation of compressed sensing theory. When a signal becomes approximately sparse under a transform domain, it is said to have sparsity or compressibility, which reduces signal storage space and enables effective compressed sampling. If the length of a signal is N and the number of non-zero elements after representation by the sparse basis matrix is no more than k, we can define it as a k-sparse signal. The sparsity k of the sparse signal directly affects the accuracy of the reconstructed signal; that is, the sparser the signal, the more accurately it can be reconstructed. For these reasons, the reasonable selection of the sparse basis matrix is very important. Commonly used transform bases are as follows: the Fourier Transform basis [17], Discrete Cosine Transform basis [18], Discrete Wavelet Transform basis [19], Contourlet Transform basis [31], and the K-singular value decomposition method based on matrix decomposition [32].
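As a concrete illustration of sparse representation under a DCT basis, the following Python sketch (our own minimal example, not code from the paper; the signal and the value of k are arbitrary) keeps only the k largest-magnitude DCT coefficients of a signal and measures how well the signal is recovered:

```python
# Minimal sketch of k-sparse representation in a DCT basis (illustrative only).
import numpy as np
from scipy.fftpack import dct, idct

def k_sparse_dct(x, k):
    """Keep the k largest-magnitude DCT coefficients of x and zero the rest."""
    theta = dct(x, norm='ortho')             # transform to the sparse (DCT) domain
    small = np.argsort(np.abs(theta))[:-k]   # indices of the N - k smallest coefficients
    theta[small] = 0.0                       # enforce k-sparsity
    return theta, idct(theta, norm='ortho')  # coefficients and the re-synthesised signal

# A smooth signal is highly compressible in the DCT basis.
x = np.cos(2 * np.pi * 3 * np.linspace(0, 1, 256))
theta, x_hat = k_sparse_dct(x, k=10)
print(np.linalg.norm(x - x_hat))             # small approximation error
```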

2.2. Design of Measurement Matrix

In compressed sensing theory, the measurement matrix samples the original signal, and its selection is very important. It projects the signal from a high-dimensional space onto a low-dimensional space to obtain the corresponding measurement values. In order to obtain an accurate sparse representation from the measurement values, an uncorrelated relationship between the measurement matrix and the sparse basis matrix is required to satisfy the Restricted Isometry Property (RIP), which guarantees that the original space and the sparse space can be mapped one-to-one. At the same time, any matrix formed by extracting a number of column vectors equal to the number of measurements must be non-singular. Commonly used measurement matrices are as follows: the Gaussian random matrix [33], the measurement matrix constructed from the balanced Gold sequence [34], the partial Fourier matrix [35], and the partial Hadamard matrix [36]. Wang Xia proposed a deterministic random sequence measurement matrix [37] and verified its effectiveness through experiments.
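To make the sampling role of the measurement matrix concrete, the short sketch below (a hypothetical example with our own sizes, not taken from the paper) draws a Gaussian random matrix and projects a sparse N-dimensional signal onto M < N measurements:

```python
# Minimal sketch: Gaussian random measurement matrix projecting a sparse signal
# from dimension N down to M measurements (y = Phi @ x).
import numpy as np

N, M = 256, 64
rng = np.random.default_rng(0)
Phi = rng.standard_normal((M, N))       # i.i.d. standard-normal entries
x = np.zeros(N)
x[[10, 40, 200]] = [1.5, -2.0, 0.7]     # a 3-sparse test signal
y = Phi @ x                             # low-dimensional measurement vector
print(y.shape)                          # (64,)
```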

2.3. Design of Reconstruction Algorithm

In recent years, remarkable achievements have been made in research on compressed sensing reconstruction algorithms, which can be divided into traditional iterative compressed sensing reconstruction algorithms and reconstruction algorithms based on deep compressed sensing networks.

2.3.1. Traditional Iterative Compressed Sensing Reconstruction Algorithm

The purpose of compressed sensing is to find the sparsest original signal that satisfies the measurements, which can be understood as the inverse problem of minimizing the l0 norm. Specific methods for achieving this are as follows: (1) convex relaxation methods, which convert the minimum l0-norm problem into a minimum l1-norm problem under certain conditions, that is, the non-convex problem is converted into a convex one, such as the basis pursuit algorithm [38] and Gradient Projection for Sparse Reconstruction (GPSR) [39]; (2) greedy matching pursuit algorithms, such as the matching pursuit algorithm [26] and the orthogonal matching pursuit algorithm [27,28]; (3) non-convex optimization methods, including the Bayesian Compressed Sensing (BCS) algorithm [40]; and (4) model-based optimization algorithms, such as the improved Total Variation-based (TV) algorithm [41]. The first three classes rely on the sparsity of the original signals, which may not hold for ordinary signals.

2.3.2. Reconstruction Algorithm Based on Deep Compressed Sensing Network

As the use of deep learning in various research fields has increased, it has gradually been introduced into research on compressed sensing image reconstruction algorithms. Ali Mousavi et al. proposed a Stacked Denoising Autoencoder (SDA) algorithm, which realizes end-to-end mapping between measured values and reconstructed images and adopts an unsupervised learning method. Kulkarni et al. proposed ReconNet [42], a non-iterative framework based on convolutional neural networks, and applied convolutional neural networks to compressed sensing reconstruction for the first time; the network structure consists of a fully connected layer and six convolutional layers. Yao and Dai et al. combined the idea of residual learning with ReconNet and proposed a cascaded Deep Residual Reconstruction Network (DR2-Net) for compressed image perception reconstruction [43]. Kulkarni, Lohit et al. [44] further deepened the network structure of ReconNet and used a fully connected layer to replace the original Gaussian matrix in order to realize image sampling; this kind of network is called adaptive-sampling ReconNet. Xuemei Xie et al. [45] improved the sampling process of compressed sensing and also used fully connected and deconvolution layers to optimize the compressed sensing network. Nie and Fu et al. not only used a convolutional neural network for image reconstruction but also added image denoising to the network; the ResConv network they proposed [46] has these two characteristics. The CSNet network proposed by Shi et al. [47] redesigned the sampling process: it not only realizes image reconstruction but also puts forward a novel sampling mechanism matched to the reconstruction network.

3. Methods

The theoretical framework of compressed sensing consists of three main aspects: the sparse representation of the original signal vector $x$; the measurement matrix designed to change the high-dimensional original signal into a low-dimensional measurement vector $y$; and the algorithm designed to obtain the approximate sparse representation $\hat{\theta}$ in order to recover the original signal.

3.1. The Reconstruction Principle of the OMP Algorithm Based on Compressed Sensing

Figure 1 shows a schematic for solving sparse representations in compressed sensing. Here, we consider the compressed sensing theory as a linear model:
$$ [y_1, y_2, \ldots, y_n] = B_{m \times n} \times [x_1, x_2, \ldots, x_n] \qquad (1) $$
where $y \in \mathbb{R}^m$ and $x \in \mathbb{R}^n$ represent the column vectors of the observation data and the unknown image, respectively, and the measurement matrix $B \in \mathbb{R}^{m \times n}$, arranged in the order of the column vectors, is known. We chose the unknown image $D \in \mathbb{R}^{n \times n}$, which can be represented by $D = [x_1, x_2, \ldots, x_n]$.
Because the original signal $x$ is not absolutely sparse, to transform it into a compressible signal, a sparse basis matrix $\Psi \in \mathbb{R}^{n \times n}$ is adopted, which transforms the original signal into a sparse domain and forms a sparse representation vector $\theta \in \mathbb{R}^{n \times 1}$. The number of non-zero values in the sparse representation vector $\theta = [k_1, k_2, \ldots, k_{n_1}, 0, \ldots, 0]$ is $k \ll n$, and thus the vector $\theta$ is called a k-sparse representation. The measured data of the target can be written as follows:
$$ [y_1, y_2, \ldots, y_n] = B \times \Psi \times [\theta_1, \theta_2, \ldots, \theta_n] = \begin{bmatrix} b_{11} & \cdots & b_{1n} \\ \vdots & \ddots & \vdots \\ b_{m1} & \cdots & b_{mn} \end{bmatrix} \times \begin{bmatrix} \psi_{11} & \cdots & \psi_{1n} \\ \vdots & \ddots & \vdots \\ \psi_{m1} & \cdots & \psi_{mn} \end{bmatrix} \times [\theta_1, \theta_2, \ldots, \theta_n] \qquad (2) $$
Here, we define the sensor matrix A , whose function is to establish the linear relationship between the sparse representation θ and the measured value y .
The measurement data of the target can then be expressed as
$$ [y_1, y_2, \ldots, y_n] = A \times [\theta_1, \theta_2, \ldots, \theta_n] \qquad (3) $$
Here, we use the most commonly used OMP algorithm [30,31] to illustrate how the approximate solution $\hat{\theta}$ of the sparse representation is obtained. A column vector $y \in [y_1, y_2, \ldots, y_n]$ of the measurement data is selected, and $r^{(k)}$ is used to represent the residual value after the k-th iteration. The initial value of the residual is set as $r^{(0)} = y$. $\Lambda_k$ represents the matrix used to store the column vectors $a_k$ of the sensor matrix after the k-th iteration; its initial value is represented by $\Lambda_0$. The sensor matrix is defined as follows:
$$ A = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}_{m \times n} = [a_1, a_2, \ldots, a_n] \qquad (4) $$
After multiplying the transposed form $[a_1, a_2, \ldots, a_n]^T$ of the sensor matrix $A$ with the initial residual value $r^{(0)}$, the vector $b$ can be expressed as
$$ b = \begin{bmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nm} \end{bmatrix}_{n \times m} \times y = \begin{bmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nm} \end{bmatrix}_{n \times m} \times \begin{bmatrix} y_1 \\ \vdots \\ y_m \end{bmatrix}_{m \times 1} = \begin{bmatrix} b_1 \\ \vdots \\ b_m \end{bmatrix}_{m \times 1} \qquad (5) $$
Here, each element in the vector $b^T = [b_1, b_2, \ldots, b_m]$ represents the inner product of one row vector of $[a_1, a_2, \ldots, a_n]^T$ with $r^{(0)}$, that is, $b_i = a_j^T \times r^{(0)}$ ($i = 1, 2, \ldots, m$; $j = 1, 2, \ldots, n$). The corresponding column vector $a_j$ in the sensor matrix is selected according to the maximum inner product value $b_i$, and $a_j$ is stored in the $\Lambda$ matrix. The least-squares method is used to obtain the minimum residual value $c^{(k)} = (a_j^T \times a_j)^{-1} \times a_j^T \times y^{(k-1)}$. The residual value $r^{(k)}$ after the k-th iteration is
$$ r^{(k)} = y^{(k-1)} - a_j^{(k)} \times c^{(k)} \qquad (6) $$
where $a_j^{(k)}$ represents the column vector selected from the sensor matrix during the k-th iteration. Finally, after k iterations, we obtain the k-sparse representation (approximate solution $\hat{\theta}$), which comprises the k non-zero values $c^{(1)}, c^{(2)}, \ldots, c^{(k)}$. This is an optimization problem for the smallest $l_1$ norm, which can be mathematically expressed as follows:
$$ \min_{\theta} \| \theta \|_{l_1} \quad \text{s.t.} \quad y = B \Psi \theta \qquad (7) $$
Algorithm 1 presents the execution steps of the OMP algorithm. As shown in Figure 2, we multiply the approximate solution $\hat{\theta}$ by the sparse basis matrix $\Psi$; the recovered original signal is then $\hat{x} = \Psi \times \hat{\theta}$. The final reconstructed image is obtained as follows:
$$ [\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_n] = \Psi \times [\hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_n] \qquad (8) $$
Algorithm 1: Orthogonal Matching Pursuit
Input: Sensor matrix B, sparsity k
Output: Sparse representation θ
Initialize: Residual r_0 = y, index set Λ_0 = ∅, t = 1
Loop performs the following five steps:
(1) Find q_t = arg max_{j=1,…,N} |⟨r_{t−1}, α_j⟩|;
(2) Update the index set: Λ_t = Λ_{t−1} ∪ {q_t}; reconstruct the atom collection: B_t = [B_{t−1}, α_{q_t}];
(3) Least-squares method: θ_t = arg min_x ‖y − B_t x‖_2;
(4) Update the residual: r_t = y − B_t θ_t; t = t + 1;
(5) Judgment: if t > k, stop the iteration; otherwise, go to step (1).
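A compact NumPy sketch of Algorithm 1 is given below; it is our own rendering (the variable names and the lstsq-based least-squares step are ours), not the authors' code:

```python
# Compact sketch of Algorithm 1 (OMP): recover a k-sparse theta from y = A @ theta.
import numpy as np

def omp(A, y, k):
    m, n = A.shape
    r = y.astype(float).copy()         # residual r_0 = y
    support = []                        # index set Lambda
    theta_s = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))                 # atom most correlated with the residual
        if j not in support:
            support.append(j)
        A_s = A[:, support]
        theta_s, *_ = np.linalg.lstsq(A_s, y, rcond=None)   # least squares on the current support
        r = y - A_s @ theta_s                               # residual update
    theta = np.zeros(n)
    theta[support] = theta_s
    return theta
```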

3.2. The Reconstruction Principle of the TL-GOMP Algorithm Based on Compressed Sensing

In this section, we introduce the improved TL-GOMP algorithm, which is based on the traditional OMP algorithm. To illustrate its principle, we take an unknown target image $D \in \mathbb{R}^{N \times N}$ and convert it into the form of column vectors $D = [x_1, x_2, \ldots, x_n]$, as expressed by the equation below:
$$ [y_1, y_2, \ldots, y_n] = \begin{bmatrix} b_{11} & \cdots & b_{1n} \\ \vdots & \ddots & \vdots \\ b_{m1} & \cdots & b_{mn} \end{bmatrix}_{m \times n} \times [x_1, x_2, \ldots, x_n] \qquad (9) $$
Here, $\Psi = \begin{bmatrix} b_{11} & \cdots & b_{1n} \\ \vdots & \ddots & \vdots \\ b_{m1} & \cdots & b_{mn} \end{bmatrix}_{m \times n}$ is the measurement matrix. Assuming that the residual value after k iterations is $r^{(k)}$, we arbitrarily extract a column $y \in [y_1, y_2, \ldots, y_n]$ and assign it to the initial residual value $r^{(0)} = y$. By multiplying the transposed form $[a_1, a_2, \ldots, a_n]^T$ of the sensor matrix $B = \begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix}_{m \times n} = [a_1, a_2, \ldots, a_n]$ with the residual value $r^{(k-1)} \in \mathbb{R}^{m \times 1}$, we obtain the vector $d \in \mathbb{R}^{n \times 1}$ as follows:
$$ d = \begin{bmatrix} d_1 \\ \vdots \\ d_n \end{bmatrix} = \begin{bmatrix} a_{11} & \cdots & a_{1m} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nm} \end{bmatrix} \times r^{(k-1)} \qquad (10) $$
Here, we define one parameter $q_s = \left\| r^{(k-1)} \right\|_2 / \sqrt{M}$ and another parameter $m_s = 1$; the parameter $m_s$ can be understood as a variable that controls or adjusts the threshold over a range of values. The principle of its selection is to constantly change the threshold value and form the column vectors of the sensor matrix corresponding to the first S inner product values into a matrix, with the purpose of solving for the optimal S least-squares solutions to form the sparse representation. After k cycles, the sparsity of the sparse representation is kS. The parameter M represents the number of rows of the sensing matrix and measurement matrix in compressed sensing theory, that is, the number of measurements of the observation matrix. The threshold $T_h$ is then denoted as
$$ T_h = m_s q_s = m_s \left\| r^{(k-1)} \right\|_2 / \sqrt{M} \qquad (11) $$
Subsequently, we take the absolute value of each element of the vector $d$ and place them in descending order to obtain the vector $d_T = [d_{11}, d_{22}, \ldots, d_{nn}]$. We store the sequence numbers of the elements satisfying the inequality relation in Equation (12):
$$ d_T = [d_{11}, d_{22}, \ldots, d_{nn}] \geq m_s \left\| r^{(k-1)} \right\|_2 / \sqrt{M} \qquad (12) $$
The algorithm cycles k times in total, where k refers to the number of non-zero-valued elements in the sparse representation. Each cycle stores at most S elements that satisfy the threshold condition, so after k cycles there are kS values in total. The column vectors of the sensor matrix corresponding to these kS values are stored in the matrix $A_t$, where $A_t \in \mathbb{R}^{M \times kS}$. We then use the least-squares method to obtain the approximate solution $\hat{\theta}$ for the sparse representation, as expressed by the equation below:
$$ \hat{\theta} = ( A_t^T \times A_t )^{-1} \times A_t^T \times r^{(k-1)} \qquad (13) $$
After each iteration, the updated residual value $r^{(k)}$ is expressed as:
$$ r^{(k)} = r^{(k-1)} - A_t \times \hat{\theta} \qquad (14) $$
Finally, the reconstructed image $\hat{D} = [\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_n]$ is obtained as follows:
$$ \hat{D} = [\hat{x}_1, \hat{x}_2, \ldots, \hat{x}_n] = \begin{bmatrix} \psi_{11} & \cdots & \psi_{1n} \\ \vdots & \ddots & \vdots \\ \psi_{m1} & \cdots & \psi_{mn} \end{bmatrix} \times [\hat{\theta}_1, \hat{\theta}_2, \ldots, \hat{\theta}_n] \qquad (15) $$
As listed in Algorithm 2, the core function of the algorithm is to effectively select the maximum number S of atoms per iteration using the additional threshold value. After the algorithm iterates k times, the sparse representation vector has a sparsity of kS. The threshold value used is $T_h = m_s q_s$. In the subsequent simulations, we selected $m_s = 1$ and $q_s = \left\| r^{(k-1)} \right\|_2 / \sqrt{M}$.
Algorithm 2: Threshold Limited-Generalized Orthogonal Matching Pursuit
Input: Sensor matrix B, sparsity k
Output: Sparse representation θ, residual r_k
Initialize: Residual r_0 = y, index set Λ_0 = ∅, A_0 = ∅, t = 1
Loop performs the following five steps:
(1) Find q_t = arg max_{j=1,…,N} |⟨r_{t−1}, α_j⟩|, selecting the maximum number S of values that are greater than the threshold T_h = m_s q_s;
(2) Update the index set: Λ_t = Λ_{t−1} ∪ {q_t}; reconstruct the atom collection: B_t = [B_{t−1}, α_q];
(3) Least-squares method: θ_t = arg min_x ‖y − B_t x‖_2;
(4) Update the residual: r_t = y − B_t θ_t; t = t + 1;
(5) Judgment: if t > k, stop the iteration; otherwise, go to step (1).
The advantage of this algorithm is that the inner product values meeting the threshold condition can be quickly screened out by setting the limiting threshold $T_h = m_s q_s$, and the corresponding column vectors can be found directly in the sensor matrix according to the serial numbers of the first S inner product values. These inner product values are represented by a logical value of 1 in the code, while the other inner product values are represented by a logical value of 0. In addition, by setting the threshold coefficient $m_s$ to adjust the limiting threshold, we repeatedly combine the column vectors of the sensor matrix corresponding to the serial numbers of the first S inner product values into a matrix, with the aim of solving for the optimal S least-squares solutions, which gives the method good universality and flexibility. After k iterations, we can reconstruct the image information of the target through the sparse representation $\theta$ with a sparsity of kS.
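A compact NumPy sketch of Algorithm 2 is given below. It is our own reading of the procedure (the names, the lstsq call, and the interpretation of the threshold in Equation (11) as m_s·‖r‖₂/√M are ours), not the authors' implementation:

```python
# Sketch of Algorithm 2 (TL-GOMP): per iteration, admit at most S atoms whose
# correlation with the residual exceeds the threshold Th = ms * ||r||_2 / sqrt(M).
import numpy as np

def tl_gomp(A, y, k, S=2, ms=1.0):
    M, N = A.shape
    r = y.astype(float).copy()
    support = []
    theta_s = np.zeros(0)
    for _ in range(k):
        corr = np.abs(A.T @ r)                          # inner products with the residual
        Th = ms * np.linalg.norm(r) / np.sqrt(M)        # limiting threshold
        order = np.argsort(corr)[::-1]                  # atoms sorted by correlation
        picked = [int(j) for j in order if corr[j] >= Th][:S]
        if not picked:                                  # nothing passes the threshold
            break
        support.extend(j for j in picked if j not in support)
        A_s = A[:, support]
        theta_s, *_ = np.linalg.lstsq(A_s, y, rcond=None)   # least squares on the support
        r = y - A_s @ theta_s                           # residual update
    theta = np.zeros(N)
    if support:
        theta[support] = theta_s
    return theta
```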

4. Experiments

To demonstrate the performance of the TL-GOMP algorithm (refer to Algorithm 2) in reconstructing the target images, we present some simulation results in this section. First, we used the TL-GOMP algorithm and the same series of OMP, STOMP, and GOMP algorithms to conduct a comparative simulation of the target test images, as shown in Figure 3. In this case, the simulation results show that the image quality reconstructed using the TL-GOMP algorithm is better than that reconstructed using the same series of OMP algorithms. Subsequently, in order to rigorously prove the advantages of the TL-GOMP algorithm, we selected algorithms other than the OMP series to conduct a comparative simulation of the targets shown in Figure 3, and the simulation results once again showed that the image reconstructed by the TL-GOMP algorithm was better than that reconstructed by the other algorithms. We then applied the TL-GOMP algorithm in the field of photonic integrated interference image reconstruction and used this algorithm to reconstruct sparse spatial frequency information collected by the PIC at different distances. The simulation image results show that this algorithm can reconstruct the content information of the detected target well. Finally, we explored the measurement number M and sparsity k in the TL-GOMP algorithm and their influence on the quality of the reconstructed images.
In all the experiments below, we used the Gaussian random matrix as the measurement matrix; it is generated by the randn function in the code, and the value of each element in this matrix follows the standard normal distribution. Meanwhile, we used the discrete cosine transform matrix as the sparse matrix, whose function is the sparse representation, or compression, of the original signal. In the experiments, the measurement matrix was regenerated on every run of the code, which reflects its randomness. Therefore, we conducted several simulation experiments in each part of the study and verified the reliability of the conclusions through the resulting data.
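The sketch below reproduces this setup in Python (the sizes, names, and stand-in image are our own assumptions; it relies on the tl_gomp helper sketched in Section 3.2):

```python
# Minimal sketch of the simulation setup: randn-style Gaussian measurement matrix,
# orthonormal DCT sparse basis, and column-wise measurement of a test image.
import numpy as np
from scipy.fftpack import idct

def dct_basis(N):
    """Orthonormal DCT synthesis matrix Psi, so that x = Psi @ theta."""
    return idct(np.eye(N), norm='ortho', axis=0)

N, M, k = 256, 64, 18                   # assumed signal length, measurements, sparsity
Phi = np.random.randn(M, N)             # measurement matrix, standard-normal entries
Psi = dct_basis(N)                      # sparse (DCT) basis
A = Phi @ Psi                           # sensing matrix handed to OMP / TL-GOMP

img = np.random.rand(N, N)              # stand-in for a 256 x 256 test image
Y = Phi @ img                           # measure every image column at once
# Each column Y[:, i] is then reconstructed, e.g. theta = tl_gomp(A, Y[:, i], k),
# and mapped back to the image domain via Psi @ theta.
```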

4.1. Comparison of Simulation Results of the TL-GOMP and OMP Series Algorithms

For this section, we selected test images with resolutions of 350 × 350, 500 × 500, 650 × 650, and 800 × 800 pixels as the target scenes, used four image reconstruction algorithms (OMP, STOMP, GOMP, and TL-GOMP) to perform the image reconstruction simulations, and evaluated image quality using the peak signal-to-noise ratio and mean square error. Figure 4a–d show the simulation results of the 800 × 800 image reconstruction. From an intuitive point of view, the improved TL-GOMP algorithm further improves the quality of the reconstructed images. Table 1 presents the quality evaluation data and the code runtime for the 350 × 350 reconstructed images. From the perspective of quantitative data, we can also see that the PSNR values of the images obtained by the TL-GOMP algorithm increased by 15.82% (compared with the results of the GOMP algorithm), 14.60% (compared with the STOMP algorithm), and 16.63% (compared with the OMP algorithm). The MSE values of the images decreased by 48.64%, 46.30%, and 50.12%, respectively, and the code running time was relatively fast. Therefore, from the above simulation data, we conclude that the TL-GOMP algorithm based on compressed sensing can rely on sparse data collected by the PIC to restore the content information of the detected target.
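For reference, the two evaluation metrics reported in the tables can be computed as in the short sketch below (our own helper functions; an 8-bit peak value of 255 is assumed, which the paper does not state explicitly):

```python
# Image-quality metrics used in the tables: mean square error and peak
# signal-to-noise ratio (peak value assumed to be 255 for 8-bit images).
import numpy as np

def mse(img, ref):
    return float(np.mean((np.asarray(img, float) - np.asarray(ref, float)) ** 2))

def psnr(img, ref, peak=255.0):
    return float(10.0 * np.log10(peak ** 2 / mse(img, ref)))
```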
Table 2 shows simulation results, with the test image at a resolution of 500 × 500 pixels selected as the target. The data results show that the PSNR values of the reconstructed images obtained by the TL-GOMP algorithm increased by 18.05% (compared with the GOMP algorithm), 17.82% (compared with the STOMP algorithm), and 18.02% (compared with the OMP algorithm). The MSE values of the images decreased by 53.68%, 53.28%, and 53.62%, respectively.
Table 3 shows the simulation results, with the test image at a resolution of 650 × 650 pixels selected as the target. The data results show that the PSNR values of the reconstructed images obtained by the TL-GOMP algorithm increased by 17.58% (compared with the GOMP algorithm), 15.40% (compared with the STOMP algorithm), and 15.45% (compared with the OMP algorithm). The MSE values of the images decreased by 52.95%, 48.97%, and 49.08%, respectively.
Table 4 shows the simulation results for a test image with a resolution of 800 × 800 pixels selected as the target. The resulting data show that the PSNR values of the reconstructed image obtained by the TL-GOMP algorithm increased by 14.40% (compared with the result of the GOMP algorithm), 11.21% (compared with the result of the STOMP algorithm), and 12.29% (compared with the result of the OMP algorithm). The MSE values of the images decreased by 46.36%, 39.28%, and 41.82%, respectively. In the image quality evaluation, the higher the peak signal-to-noise ratio, the better the image quality; conversely, the smaller the mean square error, the better the image quality. The red curve in Figure 5 shows the simulation results of image reconstruction at different resolutions using the TL-GOMP algorithm. We can observe that image quality improves with an increase in resolution, and thus this algorithm has the advantage of improving the quality of the reconstructed image. Table 5, Table 6, Table 7 and Table 8 show the PSNR and MSE values obtained when images of different resolutions were repeatedly reconstructed by the TL-GOMP, OMP, STOMP, and GOMP algorithms, respectively. These data once again demonstrate the advantages of the TL-GOMP image reconstruction algorithm.

4.2. Comparison of Simulation Results of the TL-GOMP and Other Algorithms

In this section, to verify the performance of the TL-GOMP algorithm in image reconstruction, we simulate the same targets and compare the results with those of other algorithms. Specifically, we simulate test images with resolutions of 350 × 350, 500 × 500, 650 × 650, and 800 × 800 pixels. Figure 6 shows the simulation results for the 800 × 800 test image using different types of algorithms. The figure shows that the image reconstructed by the TL-GOMP algorithm better reflects the details and content information of the target. We present the results of the simulation and conduct a quantitative analysis below. In conclusion, the TL-GOMP algorithm has great potential for application in the field of photonic integrated interference imaging.
In this section, we adopted a test image with a resolution of 350 × 350 as the scene target and simulated it using a series of six different image reconstruction algorithms: Compressive Sampling Matching Pursuit (CoSaMP), Generalized Back Propagation (GBP), Iterative Hard Thresholding (IHT), Iteration Reweighted Least Square (IRLS), Subspace Pursuit (SP), and TL-GOMP. The image quality was evaluated using the peak signal-to-noise ratio and mean square error. Table 9 lists the quality evaluation data and code running times of the reconstructed images. From the quantitative data, it can be seen that the PSNR values of the images obtained by the TL-GOMP algorithm are increased by 25.76% (compared with CoSaMP), 10.32% (compared with GBP), 38.50% (compared with IHT), 5.15% (compared with IRLS), and 17.18% (compared with SP). The MSE values of the images decreased by 63.19%, 36.64%, 74.23%, 21.27%, and 51.09%, respectively, and the code running time was also relatively fast.
Table 10 shows the simulation results for the use of a test image with a resolution of 500 × 500 as the target. The results show that the PSNR values of the reconstructed images obtained by the TL-GOMP algorithm improved by 27.74% (compared with CoSaMP), 10.69% (compared with GBP), 42.08% (compared with IHT), 4.01% (compared with IRLS), and 19.75% (compared with SP). The MSE values of the images decreased by 66.47%, 38.51%, 77.47%, 17.64%, and 56.39%, respectively.
Table 11 shows the simulation results for the use of a test image with a resolution of 650 × 650 as the target. The data in the table show that the PSNR value of the reconstructed image obtained by the TL-GOMP algorithm improved by 26.29% (compared with CoSaMP), 8.87% (compared with GBP), 42.31% (compared with IHT), 2.83% (compared with IRLS), and 17.26% (compared with SP). The MSE values of the images decreased by 64.99%, 33.70%, 77.67%, 12.97%, and 52.39%, respectively.
Table 12 lists the simulation results for the use of the test image with a resolution of 800 × 800 as the target. The data in the table show that the PSNR value of the image reconstructed by the TL-GOMP algorithm increased by 22.92% (compared with CoSaMP), 5.35% (compared with GBP), 37.98% (compared with IHT), 0.40% (compared with IRLS), and 14.74% (compared with SP). The MSE values of the images decreased by 60.26%, 22.23%, 74.40%, 1.91%, and 47.05%, respectively. In Figure 7, the red curve represents the image data reconstructed by the TL-GOMP algorithm. It can be observed that the peak signal-to-noise ratio of the image reconstructed by this algorithm is higher than that of the images reconstructed by the other algorithms, while its mean square error value is much lower; it is also faster in terms of code running time. In summary, the TL-GOMP algorithm has better image reconstruction performance.
Table 13, Table 14, Table 15, Table 16 and Table 17 show the PSNR and MSE values obtained when images of different resolutions were repeatedly reconstructed by the CoSaMP, GBP, IHT, IRLS, and SP algorithms, respectively. These data once again demonstrate the advantages of the TL-GOMP image reconstruction algorithm.

4.3. Simulation Results of Single-Column Signal Reconstruction by the CS TL-GOMP Algorithm

Figure 8 shows the 256 × 256 target test image. Figure 9 shows the simulation results obtained when the TL-GOMP algorithm was used to reconstruct a single column of the original signal with different numbers of reconstruction runs. We first selected an image with a resolution of 256 × 256 for testing, arbitrarily selected a 256 × 1 column vector as the original signal, and then performed signal reconstruction 1, 50, 100, and 200 times. The simulation results of the signal reconstruction are shown in Figure 9. It can be observed that the reconstructed signal swings around the original signal and gradually approaches it as the number of reconstruction runs increases. Table 18 shows that the residual values of the TL-GOMP algorithm after the different numbers of runs are 168.5664, 161.6117, 150.3473, and 136.5506 when the reconstructed signal is compared with the original signal. The residual value is an important index for measuring the size of the error or the degree of deviation. The simulation results show that, with an increase in the number of reconstruction runs, the residual value demonstrates a decreasing trend; that is, the reconstructed signal gradually approaches the original signal. The TL-GOMP algorithm thus exhibits good stability in the reconstruction of the original signal. Table 19 shows data from multiple simulations with different numbers of reconstruction runs.

4.4. Simulation Results of the CS TL-GOMP Algorithm in Image Reconstruction at Different Distances

We used the photonic integrated circuit (PIC) to collect the spatial frequency information emitted by the target and form the restored image. Figure 10 shows the imaging results for the frequency information collected by the microlens array on the PIC at different distances d; the resolution of the restored images is 256 × 256. The "Original image" in Figure 10 represents the image restored by the microlens array on the PIC, which serves as the target test image. Because signal acquisition by the PIC is an under-sampling process, it is necessary to use a sparse-signal image reconstruction algorithm in order to recover the content information of the detected target.
In the experiment, the Gaussian random matrix was selected as the measurement matrix, and the discrete cosine transform matrix was used as the sparse matrix. In this part, we conducted several simulation experiments and displayed the experimental data and reconstructed images of one of them, as shown in Figure 11 and Table 20.
Figure 11 shows the simulation results after the reconstruction of the restored images at d = 75, 125, 175, and 225 m using the compressed sensing TL-GOMP algorithm. We used two image quality evaluation indices, the peak signal-to-noise ratio and the mean square error, to measure the image quality: the higher the peak signal-to-noise ratio, the better the image quality; conversely, the lower the mean square error, the better the image quality. The simulation data in Table 20 show that the compressed sensing TL-GOMP image reconstruction algorithm is suitable for the content restoration of detected targets at different distances. "1d rec img" in Figure 11 represents the result of target reconstruction by the TL-GOMP algorithm. We evaluated the reconstructed images with the image quality evaluation functions; the values 13.5770 dB, 10.4228 dB, 11.2921 dB, and 12.2664 dB are the peak signal-to-noise ratios of the reconstructed images at the four distances.

4.5. Influence of Measurement Number M in the CS TL-GOMP Algorithm

Figure 8 shows the 256 × 256 target test image. The observation matrix $\Phi \in \mathbb{R}^{M \times N}$ is an important component of the CS TL-GOMP algorithm: it samples the original signal $x$, the sparse representation $\theta$ is then obtained in combination with the algorithm, and the desired signal is finally reconstructed from the approximate solution $\hat{\theta}$. Figure 12a shows how different measurement numbers M affect the quality of the reconstructed image for a given sparsity k. The value of M ranged from 56 to 256, with a step size of 5. The simulation data show that the sparsity k (the number of non-zero values) in the sparse representation $\theta$ is 18. Meanwhile, with an increase in the number of measurements M in the observation vector $y$, the PSNR value of the image increases and the MSE value exhibits a decreasing trend. Therefore, we can conclude that the quality of the reconstructed image improves as M increases. In this experiment, the Gaussian random matrix was selected as the measurement matrix and the discrete cosine transform matrix was used as the sparse matrix; Figure 12 shows the results of multiple simulations.
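The sweep described above can be sketched as follows (a single-column toy experiment using the dct_basis, tl_gomp, psnr, and mse helpers sketched earlier; the synthetic test column and the peak value used for PSNR are our own assumptions):

```python
# Sketch of the measurement-number sweep: M runs from 56 to 256 in steps of 5,
# a fresh Gaussian measurement matrix is drawn for each M, and the PSNR/MSE of
# the reconstructed column are recorded.
import numpy as np

N, k = 256, 18
Psi = dct_basis(N)                                    # DCT basis from the earlier sketch
theta_true = np.zeros(N)
idx = np.random.choice(N, k, replace=False)
theta_true[idx] = np.random.randn(k)                  # a k-sparse coefficient vector
x = Psi @ theta_true                                  # the test column

for M in range(56, 257, 5):
    Phi = np.random.randn(M, N)                       # new measurement matrix per M
    theta_hat = tl_gomp(Phi @ Psi, Phi @ x, k)        # reconstruct from M measurements
    x_hat = Psi @ theta_hat
    print(M, psnr(x_hat, x, peak=float(np.max(np.abs(x)))), mse(x_hat, x))
```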

4.6. Influence of Measurement Matrix M × N and Sparsity k in the CS TL-GOMP Algorithm on the Quality of the Reconstructed Image

Figure 8 shows the 256 × 256 target test image. Figure 13 shows the simulation results for the relationship between different sparsity k values and the quality of the reconstructed image. For the sparsity k, we selected the values of 9, 10, 11, 12, and 13 for the simulation of the test image with a resolution of 256 × 256. The results show that the PSNR increases with an increase in k, whereas the MSE decreases with an increase in k. Figure 14 shows the simulation results for the relationship between different measurement matrix sizes and the quality of the reconstructed image. We selected measurement matrices with dimensions of 36 × 256, 42 × 256, 64 × 256, and 85 × 256 to simulate the same test image. The simulation results show that the PSNR increases and the MSE decreases with an increase in the size of the measurement matrix. Table 21 and Table 22 list one of the multiple simulation results. Therefore, we can conclude that the quality of the reconstructed image improves with an increase in k and in the size of the measurement matrix.
We studied the influence of different sparsity values and different measurement matrix sizes on reconstruction quality. The measurement matrix we selected was a Gaussian random matrix, and the sparse matrix was the discrete cosine transform matrix. For the sparsity k, we selected the values of 9, 10, 11, 12, and 13; for each sparsity value, we conducted several experiments, and the results are shown in Table 23. Similarly, we selected measurement matrices with dimensions of 36 × 256, 42 × 256, 64 × 256, and 85 × 256 to simulate the same test image; for each measurement matrix size, we used the same measurement matrix and the same research method to carry out multiple experiments. The experimental data are shown in Table 24. According to the results of these experiments, we conclude that the quality of the reconstructed image improves with an increase in k and in the size of the measurement matrix. However, the measurement and sparse matrices of compressed sensing theory are still being actively studied in order to achieve high-quality image reconstruction, and in future research we will use updated measurement matrices to verify the above conclusions.

5. Conclusions

In this study, we improved the traditional image reconstruction algorithm and proposed the TL-GOMP algorithm for compressed sensing. In the simulations, we used the TL-GOMP algorithm and the same series of traditional OMP, STOMP, and GOMP algorithms to perform simulations on the same test targets. The results show that the quality of the image reconstructed by the TL-GOMP algorithm was better than that reconstructed by the other traditional algorithms in the same series. To illustrate the advantages of this algorithm more rigorously, we also conducted a comparison simulation between the TL-GOMP algorithm and other image reconstruction algorithms. The results again showed that the quality of the image reconstructed by the TL-GOMP algorithm was better than that reconstructed by the other algorithms, which indicates its potential application value. To verify the stability of the algorithm, we arbitrarily extracted a column of the original signal and performed signal reconstruction several times; the simulation results showed that the reconstructed signal gradually approached the original signal with an increase in the number of reconstruction runs. The TL-GOMP algorithm was also used to reconstruct the restored images at different detection distances, and the simulation results showed that the algorithm could reproduce the content information of the target. Therefore, the TL-GOMP algorithm is advantageous for applications in photonic integrated interference imaging: it can reconstruct the sparse spatial frequency information collected by the PIC and recover the content information of the detected target. In summary, the TL-GOMP algorithm can reconstruct sparse collected information and recover the content information of unknown targets. This could benefit scientific and technological exploration and production, and it also has good potential for application in the field of photonic integrated interference detection technology.

Author Contributions

Conceptualization, C.C., Z.F. and J.Y.; methodology, J.Y.; software, C.C.; validation, Z.W., R.W. and K.L.; formal analysis, S.Y. and B.S.; investigation, J.Y.; resources, C.C.; data curation, J.Y. and Z.F.; writing—original draft preparation, J.Y.; writing—review and editing, Z.F. and C.C.; visualization, C.C.; supervision, Z.W.; project administration, J.Y.; funding acquisition, J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data used to support the findings of this study are available from the corresponding author upon request.

Acknowledgments

The authors thank the optical sensing and measurement team of Xidian University for their help. This research was supported by the National Natural Science Foundation of Shaanxi Province (Grant No. 2020JM-206), the National Defense Basic Research Foundation (Grant No. 61428060201), and 111 Project (B17035).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ogden, C.; Wilm, J.; Stubbs, D.M.; Thurman, S.T.; Su, T.; Scott, R.P.; Yoo, S.J.B. Flat panel space based space surveillance sensor. In Proceedings of the Advanced Maui Optical and Space Surveillance Technologies (AMOS) Conference, Maui, HI, USA, 10–13 September 2013. [Google Scholar]
  2. Su, T.H.; Liu, G.Y.; Badham, K.E.; Thurman, S.T.; Kendrick, R.L.; Duncan, A.; Wuchenich, D.; Ogden, C.; Chriqui, G.; Feng, S.; et al. Interferometric imaging using Si3N4 photonic integrated circuits for a SPIDER imager. Opt. Express 2018, 26, 12801–12812. [Google Scholar] [CrossRef] [PubMed]
  3. Aufdenberg, J.; Mérand, A.; Foresto, V.C.D.; Absil, O.; Di Folco, E.; Kervella, P.; Ridgway, S.; Berger, D.; Brummelaar, T.T.; McAlister, H. First results from the CHARA Array. VII. Long-baseline interferometric measurements of Vega consistent with a pole-on, rapidly rotating star. Astrophys. J. 2006, 645, 664–675. [Google Scholar] [CrossRef]
  4. Brummelaar, T.A.; McAlister, H.A.; Ridgway, S.T.; Bagnuolo, W.G., Jr.; Turner, N.H.; Sturmann, L.; Sturmann, J.; Berger, D.H.; Ogden, C.E.; Cadman, R.; et al. First Results from the CHARA Array. II. A Description of the Instrument. Astrophys. J. 2005, 628, 453–465. [Google Scholar] [CrossRef]
  5. Petrov, R.G.; Malbet, F.; Weigelt, G.; Antonelli, P.; Beckmann, U.; Bresson, Y.; Chelli, A.; Dugué, M.; Duvert, G.; Gennari, S.; et al. AMBER, the near-infrared spectro-interferometric three-telescope VLTI instrument. Astron. Astrophys. 2007, 464, 1–12. [Google Scholar] [CrossRef]
  6. Armstrong, J.T.; Mozurkewich, D.; Rickard, L.J.; Hutter, D.J.; Benson, J.A.; Bowers, P.; Elias, N., II; Hummel, C.; Johnston, K.; Buscher, D.; et al. The navy prototype optical interferometer. Astrophys. J. 1998, 496, 550–571. [Google Scholar] [CrossRef]
  7. Pearson, T.; Readhead, A. Image formation by self-calibration in radio astronomy. Annu. Rev. Astron. Astrophys. 1984, 22, 97–130. [Google Scholar] [CrossRef]
  8. Badham, K.; Kendrick, R.L.; Wuchenich, D.; Ogden, C.; Chriqui, G.; Duncan, A.; Thurman, S.T.; Su, T.; Lai, W.; Chun, J.; et al. Photonic integrated circuit-based imaging system for SPIDER. In Proceedings of the 2017 Conference on Lasers and Electro-Optics Pacific Rim (CLEO-PR), Singapore, 31 July–4 August 2017. [Google Scholar]
  9. Scott, R.P.; Su, T.; Ogden, C.; Thurman, S.T.; Kendrick, R.L.; Duncan, A.; Yu, R.; Yoo, S. Demonstration of a photonic integrated circuit for multi-baseline interferometric imaging. In Proceedings of the 2014 IEEE Photonics Conference (IPC), San Diego, CA, USA, 12–16 October 2014; pp. 1–2. [Google Scholar]
  10. Mishali, M.; Eldar, Y.C. From theory to practice: Sub-Nyquist sampling of sparse wideband analog signals. IEEE J. Sel. Top. Signal Process. 2010, 4, 375–391. [Google Scholar] [CrossRef]
  11. Hariri, A.; Babaie-Zadeh, M. Compressive detection of sparse signals in additive white Gaussian noise without signal reconstruction. Signal Process. 2017, 131, 376–385. [Google Scholar] [CrossRef]
  12. Usala, J.D.; Maag, A.; Nelis, T.; Gamez, G. Compressed sensing spectral imaging for plasma optical emission spectroscopy. J. Anal. At. Spectrom. 2016, 31, 2198–2206. [Google Scholar] [CrossRef]
  13. Chen, Y.; Ye, X.; Huang, F. A novel method and fast algorithm for MR image reconstruction with significantly under-sampled data. Inverse Probl. Imaging 2017, 4, 223–240. [Google Scholar] [CrossRef]
  14. Lv, S.T.; Liu, J. A novel signal separation algorithm based on compressed sensing for wideband spectrum sensing in cognitive radio networks. Int. J. Commun. Syst. 2014, 27, 2628–2641. [Google Scholar] [CrossRef]
  15. Bu, H.; Tao, R.; Bai, X.; Zhao, J. A novel SAR imaging algorithm based on compressed sensing. IEEE Geosci. Remote Sens. Lett. 2017, 12, 1003–1007. [Google Scholar] [CrossRef]
  16. He, Z.; Zhao, X.; Zhang, S.; Ogawa, T.; Haseyama, M. Random combination for information extraction in compressed sensing and sparse representation-based pattern recognition. Neurocomputing 2014, 145, 160–173. [Google Scholar] [CrossRef]
  17. Kajbaf, H.; Case, J.T.; Yang, Z.; Zheng, Y.R. Compressed sensing for SAR-based wideband three-dimensional microwave imaging system using non-uniform fast Fourier transform. IET Radar Sonar Navig. 2013, 7, 658–670. [Google Scholar] [CrossRef]
  18. Li, Q.; Han, Y.H.; Dang, J.W. Image decomposing for inpainting using compressed sensing in DCT domain. Front. Comput. Sci. 2014, 8, 905–915. [Google Scholar] [CrossRef]
  19. Zhang, J.; Xia, L.; Huang, M.; Li, G. Image reconstruction in compressed sensing based on single-level DWT. In Proceedings of the IEEE Workshop on Electronics, Computer & Applications, Ottawa, ON, Canada, 8–9 May 2014. [Google Scholar]
  20. Monajemi, H.; Jafarpour, S.; Gavish, M.; Stat 330/CME 362 Collaboration Donoho; Donoho, D.L.; Ambikasaran, S.; Bacallado, S.; Bharadia, D.; Chen, Y.; Choi, Y.; et al. Deterministic matrices matching the compressed sensing phase transitions of Gaussian random matrices. Proc. Natl. Acad. Sci. USA 2013, 110, 1181–1186. [Google Scholar] [CrossRef]
  21. Lu, W.; Li, W.; Kpalma, K.; Ronsin, J. Compressed sensing performance of random Bernoulli matrices with high compression ratio. IEEE Signal Process. Lett. 2015, 22, 1074–1078. [Google Scholar]
  22. Li, X.; Zhao, R.; Hu, S. Blocked polynomial deterministic matrix for compressed sensing. In Proceedings of the International Conference on Wireless Communications Networking & Mobile Computing, Chengdu, China, 23–25 September 2010. [Google Scholar]
  23. Yan, T.; Lv, G.; Yin, K. Deterministic sensing matrices based on multidimensional pseudo-random sequences. Circ. Syst. Signal Process. 2014, 33, 1597–1610. [Google Scholar]
  24. Boyd, S.; Vandenberghe, L. Convex optimization. IEEE Trans. Autom. Control 2006, 51, 1859. [Google Scholar]
  25. Mota, J.F.; Xavier, J.M.; Aguiar, P.M.; Puschel, M. Distributed basis pursuit. IEEE Trans. Signal Process. 2012, 60, 1942–1956. [Google Scholar] [CrossRef]
  26. Lee, J.; Choi, J.W.; Shim, B. Sparse signal recovery via tree search matching pursuit. J. Commun. Netw. 2016, 18, 699–712. [Google Scholar] [CrossRef]
  27. Wang, R.; Zhang, J.; Ren, S.; Li, Q. A reducing iteration orthogonal matching pursuit algorithm for compressive sensing. Tsinghua Sci. Technol. 2016, 21, 71–79. [Google Scholar] [CrossRef]
  28. Sahoo, S.K.; Makur, A. Signal recovery from random measurements via extended orthogonal matching pursuit. IEEE Trans. Signal Process 2015, 63, 2572–2581. [Google Scholar] [CrossRef]
  29. Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666. [Google Scholar] [CrossRef]
  30. Hu, Z.; Liang, D.; Xia, D.; Zheng, H. Compressing sampling in computed tomography: Method and application. Nucl. Instrum. Methods Phys. Res. 2014, 748, 26–32. [Google Scholar] [CrossRef]
  31. Shang, F.F.; Li, K.Y. The Application of Wavelet Transform to Breast-Infrared Images. Cogn. Inform. 2006, 2, 939–943. [Google Scholar]
  32. Rubinstein, R.; Brucktein, A.M.; Elad, M. Dictionaries for sparse representation modeling. Proc. IEEE 2010, 98, 1045–1057. [Google Scholar] [CrossRef]
  33. He, J.; Wang, T.; Wang, C.; Chen, Y. Improved Measurement Matrix Construction with Random Sequence in Compressed Sensing. Wirel. Pers. Commun. 2022, 123, 3003–3024. [Google Scholar] [CrossRef]
  34. Wang, X.; Cui, G.; Wang, L.; Jia, X.L.; Nie, W. Construction of measurement matrix in compressed sensing based on balanced Gold sequence. Chin. J. Sci. Instrum. 2014, 35, 97–102. [Google Scholar]
  35. Xu, G.; Xu, Z. Compressed sensing matrices from Fourier matrices. IEEE Trans. Inf. Theory 2015, 61, 469–478. [Google Scholar] [CrossRef]
  36. Lum, D.J.; Knarr, S.H.; Howell, J.C. Fast Hadamard transforms for compressive sensing of joint systems: Measurement of a 3.2 million-dimensional bi-photon probability distribution. Opt. Express 2015, 23, 27636–27649. [Google Scholar] [CrossRef] [PubMed]
  37. Wang, X.; Wang, K.; Wang, Q.Y.; Liang, R.; Zuo, J.; Zhao, L.; Zou, C. Deterministic Random Measurement Matrices Construction for Compressed Sensing. J. Signal Process. 2014, 30, 436–442. [Google Scholar]
  38. Narayanan, S.; Sahoo, S.; Makur, A. Greedy pursuits assisted basis pursuit for reconstruction of joint-sparse signals. Signal Process. 2018, 142, 485–491. [Google Scholar] [CrossRef]
  39. Figueiredo, M.A.T.; Nowak, R.D.; Wright, S.J. Gradient Projection for Sparse Reconstruction: Application to Compressed Sensing and Other Inverse Problems. IEEE J. Sel. Top. Signal Process. 2008, 1, 586–597. [Google Scholar] [CrossRef]
  40. Ji, S.H.; Xue, Y.; Carin, L. Bayesian compressive sensing. IEEE Trans. Signal Process. 2008, 56, 2346–2356. [Google Scholar] [CrossRef]
  41. Zhang, J.; Liu, S.; Xiong, R.; Ma, S.; Zhao, D. Improved total variation based image compressive sensing recovery by nonlocal regularization. In Proceedings of the 2013 IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, China, 19–23 May 2013; pp. 2836–2839. [Google Scholar]
  42. Kulkarni, K.; Lohit, S.; Turaga, P.; Kerviche, R.; Ashok, A. Reconnet: Non-iterative reconstruction of images from compressively sensed measurements. In Proceedings of the Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2016; pp. 449–458. [Google Scholar]
  43. Yao, H.; Dai, F.; Zhang, D.; Ma, Y.; Zhang, S.; Zhang, Y.; Tian, Q. DR2-Net: Deep Residual Reconstruction Network for Image Compressive Sensing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  44. Lohit, S.; Kulkarni, K.; Kerviche, R.; Turaga, P.; Ashok, A. Convolutional neural networks for noniterative reconstruction of compressively sensed images. IEEE Trans. Comput. Imaging 2018, 4, 326–340. [Google Scholar] [CrossRef]
  45. Du, J.; Xie, X.; Wang, C.; Shi, G.; Xu, X.; Wang, Y. Fully convolutional measurement network for compressive sensing image reconstruction. Neurocomputing 2019, 328, 105–112. [Google Scholar] [CrossRef]
  46. Nie, G.; Fu, Y.; Zheng, Y.; Huang, H. Image Restoration from Patch-based Compressed Sensing Measurement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  47. Shi, W.; Jiang, F.; Zhang, S.; Zhao, D. Deep networks for compressed image sensing. In Proceedings of the 2017 IEEE International Conference on Multimedia and Expo (ICME), Hong Kong, China, 10–14 July 2017; pp. 877–882. [Google Scholar]
Figure 1. Schematic of the sparse representation of compressed sensing.
Figure 2. Schematic of original image reconstruction using sparse representation θ ^ .
Figure 3. Original image.
Figure 4. Reconstruction results of four different algorithms: (a) OMP; (b) STOMP; (c) GOMP; and (d) TL-GOMP.
Figure 5. (a) Relationship between the image size and PSNR of reconstructed images; (b) relationship between the image size and MSE of reconstructed images; (c) relationship between the image size and running times.
Figure 6. Reconstructed image using different algorithms: (a) CoSaMP; (b) IHT; (c) IRLS; (d) GBP; (e) SP; and (f) TL-GOMP.
Figure 7. (a) Relationship between the image size and PSNR of the reconstructed image; (b) relationship between the image size and MSE of the reconstructed image; (c) relationship between image size and running time.
Figure 8. Original version of the 256 × 256 image.
Figure 9. Reconstruction results of randomly selected single-column signals: (a) 1 time; (b) 50 times; (c) 100 times; (d) 200 times.
Figure 10. Restoration image results of the PIC at different distances: (a) d = 75 m; (b) d = 125 m; (c) d = 175 m; and (d) d = 225 m.
Figure 11. Simulation results of the TL-GOMP algorithm reconstruction of restored images at different distances: (a) d = 75 m; (b) d = 125 m; (c) d = 175 m; and (d) d = 225 m.
Figure 12. Influence of measurement number M on sparsity k and reconstructed image quality. (a) The results of the first experiment; (b) The results of the second experiment; (c) The results of the third experiment; (d) The results of the fourth experiment.
Figure 13. (a) Variation curves for the PSNR of the reconstructed images with sparsity k; (b) variation curves for the MSE of the reconstructed images with sparsity k.
Figure 14. (a) Variation curves for the MSE of reconstructed images with matrix size M × N; (b) variation curves for the PSNR of reconstructed images with matrix size M × N.
Table 1. PSNR, MSE, and time of the 350 × 350 image reconstructed by OMP series algorithms.
Metric | OMP | STOMP | GOMP | TL-GOMP
PSNR (dB) | 18.1671 | 18.4884 | 18.2944 | 21.1882
MSE | 991.6670 | 920.9654 | 963.0243 | 494.6024
Running time (s) | 3.3577 | 1.7663 | 2.7278 | 2.5031
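The PSNR and MSE values reported in Tables 1–17 are mutually consistent with the standard definitions for 8-bit images, PSNR = 10·log10(255²/MSE); for example, the MSE of 991.6670 listed for OMP above corresponds to 18.1671 dB. The following minimal Python/NumPy sketch of these two metrics is included only as an illustration of how such values can be computed and is not part of the original simulation code:

import numpy as np

def mse(original, reconstructed):
    # Mean squared error between two images of identical shape.
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(original, reconstructed, peak=255.0):
    # Peak signal-to-noise ratio in dB, assuming 8-bit data (peak value 255).
    err = mse(original, reconstructed)
    return float("inf") if err == 0 else 10.0 * np.log10(peak ** 2 / err)

# Sanity check against Table 1: an MSE of 991.6670 gives about 18.1671 dB.
print(10.0 * np.log10(255.0 ** 2 / 991.6670))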
Table 2. PSNR, MSE, and time of the 500 × 500 image reconstructed by OMP series algorithms.
Metric | OMP | STOMP | GOMP | TL-GOMP
PSNR (dB) | 18.5193 | 18.5506 | 18.5136 | 21.856
MSE | 914.4386 | 907.8610 | 915.6401 | 424.1134
Running time (s) | 6.7739 | 3.2574 | 5.5594 | 5.6994
Table 3. PSNR, MSE, and time of the 650 × 650 image reconstructed by OMP series algorithms.
Metric | OMP | STOMP | GOMP | TL-GOMP
PSNR (dB) | 18.9683 | 18.9774 | 18.6246 | 21.8993
MSE | 824.6088 | 822.8859 | 892.5204 | 419.9057
Running time (s) | 12.1603 | 5.5557 | 12.0085 | 11.5920
Table 4. PSNR, MSE, and time of the 800 × 800 image reconstructed by OMP series algorithms.
Metric | OMP | STOMP | GOMP | TL-GOMP
PSNR (dB) | 19.1433 | 19.3284 | 18.7903 | 21.4954
MSE | 792.0467 | 759.0033 | 859.1182 | 460.8311
Running time (s) | 18.8039 | 8.7040 | 22.8444 | 23.6628
Table 5. PSNR, MSE, and time of images with different pixel sizes reconstructed by the TL-GOMP algorithm.
350 × 350: PSNR / MSE / Time (s) | 500 × 500: PSNR / MSE / Time (s) | 650 × 650: PSNR / MSE / Time (s) | 800 × 800: PSNR / MSE / Time (s)
21.3302 / 478.6986 / 3.0450 | 21.9785 / 412.3144 / 5.7769 | 21.7634 / 433.2510 / 12.1419 | 21.4110 / 469.8728 / 24.8091
21.3607 / 475.3485 / 2.4762 | 21.8199 / 427.6476 / 5.7062 | 21.8976 / 420.0683 / 12.2621 | 21.1822 / 495.2878 / 24.8688
21.2495 / 487.6785 / 2.4810 | 21.6895 / 440.6868 / 5.7678 | 21.7275 / 436.8474 / 16.2619 | 21.3847 / 472.7259 / 24.6147
21.3407 / 477.5455 / 2.4567 | 21.9302 / 416.9279 / 5.6955 | 21.9430 / 415.7013 / 12.2062 | 21.2746 / 484.8612 / 24.4856
21.1693 / 496.7621 / 2.4414 | 21.6719 / 442.4760 / 5.7021 | 21.9532 / 414.7298 / 12.2592 | 21.2543 / 487.1404 / 24.2110
21.2787 / 484.4030 / 2.4960 | 21.7925 / 430.3612 / 5.8279 | 21.8741 / 422.3505 / 15.7850 | 21.2940 / 482.7085 / 24.5258
21.2071 / 492.4605 / 2.4652 | 21.8286 / 426.7995 / 5.6639 | 21.7166 / 437.9442 / 15.4912 | 21.3495 / 476.5711 / 24.5294
21.3974 / 471.3456 / 2.4356 | 21.8625 / 423.4784 / 5.6723 | 21.6169 / 448.1164 / 15.5665 | 21.1921 / 494.1608 / 24.6656
21.2362 / 489.1702 / 2.4730 | 21.8421 / 425.4695 / 5.6400 | 21.7768 / 431.9201 / 14.7710 | 21.2277 / 490.1291 / 23.7997
21.1882 / 494.6024 / 2.5031 | 21.8560 / 424.1134 / 5.6994 | 21.8993 / 419.9057 / 11.5920 | 21.4954 / 460.8311 / 23.6628
Mean (PSNR / MSE): 21.2758 / 484.8015 | 21.8272 / 427.0275 | 21.8168 / 428.0835 | 21.3066 / 481.4289
Table 6. PSNR, MSE, and time of images with different pixel sizes reconstructed by the OMP algorithm.
350 × 350: PSNR / MSE / Time (s) | 500 × 500: PSNR / MSE / Time (s) | 650 × 650: PSNR / MSE / Time (s) | 800 × 800: PSNR / MSE / Time (s)
18.1671 / 991.6670 / 3.9888 | 18.4080 / 938.1606 / 7.5859 | 18.9861 / 821.2361 / 12.3038 | 19.0968 / 800.5658 / 20.3478
18.2682 / 968.8629 / 3.2198 | 18.5454 / 908.9571 / 7.7925 | 18.9347 / 831.0171 / 12.5650 | 19.1620 / 788.6510 / 21.0002
18.3101 / 959.5554 / 3.4737 | 18.4466 / 929.8707 / 7.2411 | 18.9399 / 830.0296 / 12.5509 | 19.2571 / 771.5568 / 20.4160
18.3829 / 943.6068 / 3.6291 | 18.5152 / 915.3030 / 7.1395 | 19.1091 / 798.3158 / 12.4553 | 19.0280 / 813.3486 / 20.9783
18.0393 / 1021.3 / 3.6883 | 18.4265 / 934.1821 / 7.7281 | 18.9553 / 827.0812 / 12.3023 | 19.2065 / 780.6037 / 22.2521
18.2515 / 972.5983 / 3.2354 | 18.5735 / 903.0985 / 7.4901 | 18.9398 / 830.0348 / 12.4164 | 19.1368 / 793.2272 / 20.6466
18.3869 / 942.7279 / 3.2961 | 18.6313 / 891.1402 / 7.5999 | 19.0351 / 812.0306 / 12.4421 | 19.0966 / 800.6090 / 20.4755
18.2823 / 965.7220 / 3.2445 | 18.6419 / 888.9868 / 6.7575 | 18.8986 / 837.9640 / 12.5608 | 19.1766 / 785.9948 / 20.5140
18.2712 / 968.1929 / 3.2312 | 18.6113 / 895.2566 / 6.7414 | 19.0885 / 802.0964 / 12.4971 | 19.1135 / 797.4937 / 19.9576
18.1671 / 991.6670 / 3.3577 | 18.5193 / 914.4386 / 6.7739 | 18.9683 / 824.6088 / 12.1603 | 19.1433 / 792.0467 / 18.8039
Mean (PSNR / MSE): 18.2523 / 972.5902 | 18.5319 / 911.9394 | 18.9855 / 821.4414 | 19.1417 / 792.4097
Table 7. PSNR, MSE, and time of images with different pixel sizes reconstructed by the STOMP algorithm.
350 × 350: PSNR / MSE / Time (s) | 500 × 500: PSNR / MSE / Time (s) | 650 × 650: PSNR / MSE / Time (s) | 800 × 800: PSNR / MSE / Time (s)
18.4565 / 927.7439 / 1.9871 | 18.8762 / 842.2853 / 3.2992 | 19.1958 / 782.5286 / 5.3546 | 19.4015 / 746.3251 / 8.9414
18.6146 / 894.5757 / 1.7530 | 18.7574 / 865.6460 / 3.0897 | 19.0214 / 814.5855 / 5.5177 | 19.1294 / 794.5858 / 8.4978
18.3691 / 946.6053 / 1.7672 | 18.9131 / 835.1674 / 3.2591 | 19.1162 / 797.0074 / 5.3404 | 19.3628 / 753.0119 / 8.3268
18.7747 / 862.2063 / 1.6509 | 18.6830 / 880.5934 / 3.1428 | 19.0212 / 814.6351 / 5.4728 | 19.3334 / 758.1162 / 8.3945
18.5523 / 907.5075 / 1.6583 | 18.7680 / 863.5269 / 3.1527 | 19.1821 / 784.9930 / 5.8004 | 19.2737 / 768.6233 / 8.8009
18.5249 / 913.2449 / 1.7565 | 18.9101 / 835.7386 / 3.1413 | 19.1222 / 795.9122 / 5.2623 | 19.2195 / 778.2766 / 9.1070
18.4711 / 924.6457 / 1.8162 | 18.9665 / 824.9626 / 3.0769 | 19.1388 / 792.8615 / 5.3666 | 19.1329 / 793.9458 / 8.9904
18.5027 / 917.9382 / 1.7327 | 18.8543 / 846.5476 / 3.2085 | 19.0365 / 811.7637 / 5.2851 | 19.2444 / 773.8263 / 9.3969
18.5617 / 905.5508 / 1.6634 | 18.6145 / 894.5940 / 3.1701 | 19.1874 / 784.0488 / 6.3562 | 19.0907 / 801.6980 / 9.0372
18.4884 / 920.9654 / 1.7663 | 18.5506 / 907.8610 / 3.2574 | 18.9774 / 822.8859 / 5.5557 | 19.3284 / 759.0033 / 8.7040
Mean (PSNR / MSE): 18.5316 / 912.0984 | 18.7894 / 859.6923 | 19.0999 / 800.1222 | 19.2517 / 772.7412
Table 8. PSNR, MSE, and time of images with different pixel sizes reconstructed by the GOMP algorithm.
350 × 350: PSNR / MSE / Time (s) | 500 × 500: PSNR / MSE / Time (s) | 650 × 650: PSNR / MSE / Time (s) | 800 × 800: PSNR / MSE / Time (s)
18.5342 / 911.3028 / 2.5342 | 18.5828 / 901.1544 / 5.5202 | 18.6002 / 897.5575 / 12.2352 | 18.6504 / 887.2426 / 25.3709
18.5119 / 915.9863 / 2.3974 | 18.5438 / 909.2880 / 5.2957 | 18.7124 / 874.6603 / 12.7123 | 18.7940 / 858.3842 / 24.2956
18.4565 / 927.7495 / 2.4619 | 18.5660 / 904.6536 / 5.3261 | 18.7070 / 875.7537 / 11.9305 | 18.6203 / 893.4049 / 25.9419
18.4604 / 926.9053 / 2.3857 | 18.5033 / 917.7947 / 5.3010 | 18.6821 / 880.7937 / 11.6473 | 18.7787 / 861.4088 / 24.9687
18.4557 / 927.9302 / 2.4236 | 18.5870 / 900.2903 / 5.2821 | 18.8511 / 847.1601 / 11.7459 | 18.6419 / 888.9819 / 25.1073
18.5121 / 915.9399 / 2.4229 | 18.4860 / 921.4620 / 5.4175 | 18.6688 / 883.4962 / 11.5865 | 18.7756 / 862.0250 / 25.7629
18.6042 / 896.7213 / 2.4183 | 18.4826 / 922.1833 / 5.3205 | 18.6093 / 895.6809 / 11.4574 | 18.7594 / 865.2522 / 28.2525
18.4933 / 919.9185 / 2.4489 | 18.5674 / 904.3666 / 5.2700 | 18.5587 / 906.1624 / 11.5960 | 18.5693 / 903.9659 / 25.3977
18.3236 / 956.5777 / 2.4210 | 18.5627 / 905.3299 / 5.3125 | 18.6640 / 884.4572 / 11.6323 | 18.6851 / 880.1824 / 24.9104
18.2944 / 963.0243 / 2.7278 | 18.5136 / 915.6401 / 5.5594 | 18.6246 / 892.5204 / 12.0085 | 18.7903 / 859.1182 / 22.8444
Mean (PSNR / MSE): 18.4646 / 926.2056 | 18.5395 / 910.2163 | 18.6680 / 883.8242 | 18.7063 / 875.9966
Table 9. PSNR, MSE, and running time of the 350 × 350 image reconstructed by different algorithms.
Metric | CoSaMP | GBP | IHT | IRLS | SP | TL-GOMP
PSNR (dB) | 16.8484 | 19.2067 | 15.2984 | 20.1498 | 18.0819 | 21.1882
MSE | 1343.5 | 780.5690 | 1919.8 | 628.2036 | 1011.3 | 494.6024
Running time (s) | 8.2561 | 15.1944 | 0.9567 | 10.8665 | 6.9127 | 2.5031
Table 10. PSNR, MSE, and running time of the 500 × 500 image reconstructed by different algorithms.
Metric | CoSaMP | GBP | IHT | IRLS | SP | TL-GOMP
PSNR (dB) | 17.1099 | 19.7444 | 15.3828 | 21.0134 | 18.2516 | 21.8560
MSE | 1265 | 689.6748 | 1882.8 | 514.9214 | 972.5734 | 424.1134
Running time (s) | 20.4365 | 60.4386 | 2.5600 | 64.1737 | 16.4263 | 5.6994
Table 11. PSNR, MSE, and running time of the 650 × 650 image reconstructed by different algorithms.
Metric | CoSaMP | GBP | IHT | IRLS | SP | TL-GOMP
PSNR (dB) | 17.3410 | 20.1147 | 15.3888 | 21.2962 | 18.6761 | 21.8993
MSE | 1199.4 | 633.3042 | 1880.2 | 482.4597 | 881.9959 | 419.9057
Running time (s) | 47.6477 | 151.2679 | 5.5886 | 260.7229 | 34.7940 | 11.5920
Table 12. PSNR, MSE, and running time of the 800 × 800 image reconstructed by different algorithms.
Metric | CoSaMP | GBP | IHT | IRLS | SP | TL-GOMP
PSNR (dB) | 17.4872 | 20.4032 | 15.5785 | 21.4114 | 18.7344 | 21.4954
MSE | 1159.7 | 592.5921 | 1799.8 | 469.8253 | 870.2398 | 460.8311
Running time (s) | 96.3305 | 308.2801 | 10.4591 | 650.7766 | 66.1669 | 23.6628
Table 13. PSNR, MSE, and time of images with different pixel sizes reconstructed by the CoSaMP algorithm.
350 × 350: PSNR / MSE / Time (s) | 500 × 500: PSNR / MSE / Time (s) | 650 × 650: PSNR / MSE / Time (s) | 800 × 800: PSNR / MSE / Time (s)
16.8092 / 1355.7 / 8.8949 | 16.9569 / 1310.4 / 21.0424 | 17.4505 / 1169.6 / 53.1953 | 17.4279 / 1175.7 / 104.1132
16.4940 / 1457.7 / 8.5638 | 17.1272 / 1260 / 20.5154 | 17.3648 / 1192.9 / 51.8239 | 17.5628 / 1139.7 / 100.1575
16.4971 / 1456.7 / 8.6117 | 17.0410 / 1285.2 / 20.8961 | 17.4114 / 1180.2 / 47.8983 | 17.4618 / 1166.6 / 99.2829
16.8151 / 1353.8 / 8.6880 | 17.0706 / 1276.5 / 20.2516 | 17.3835 / 1187.8 / 48.2978 | 17.6096 / 1127.5 / 102.0728
16.7148 / 1385.5 / 8.6742 | 17.1166 / 1263 / 20.0926 | 17.4745 / 1163.1 / 46.9857 | 17.4945 / 1157.8 / 102.7467
16.5159 / 1450.4 / 8.2333 | 16.9466 / 1313.5 / 20.1809 | 17.3226 / 1204.5 / 48.5044 | 17.6168 / 1125.6 / 106.1725
16.7907 / 1361.5 / 8.0242 | 17.1552 / 1251.9 / 20.0922 | 17.1436 / 1255.2 / 47.4073 | 17.5932 / 1131.8 / 103.6256
16.6924 / 1392.7 / 8.1107 | 17.1143 / 1263.7 / 20.1702 | 17.3318 / 1202 / 47.1307 | 17.5843 / 1134.1 / 102.4037
16.6452 / 1407.9 / 8.0281 | 16.9100 / 1324.6 / 20.1803 | 17.3889 / 1186.3 / 46.7126 | 17.6081 / 1127.9 / 102.3405
16.8484 / 1343.5 / 8.2561 | 17.1099 / 1265 / 20.4365 | 17.3410 / 1199.4 / 47.6477 | 17.4872 / 1159.7 / 96.3305
Mean (PSNR / MSE): 16.6823 / 1396.54 | 17.0548 / 1281.38 | 17.3613 / 1194.1 | 17.5446 / 1144.64
Table 14. PSNR, MSE, and time of images with different pixel sizes reconstructed by the GBP algorithm.
350 × 350: PSNR / MSE / Time (s) | 500 × 500: PSNR / MSE / Time (s) | 650 × 650: PSNR / MSE / Time (s) | 800 × 800: PSNR / MSE / Time (s)
19.4978 / 729.9666 / 16.7171 | 19.9097 / 663.9063 / 60.8671 | 20.2651 / 611.7477 / 152.6819 | 20.4798 / 582.2355 / 313.1546
19.1973 / 782.2584 / 15.4194 | 19.8177 / 678.1193 / 58.8448 | 20.2567 / 612.9301 / 151.4318 | 20.4731 / 583.1371 / 307.3742
19.4500 / 738.0428 / 15.7188 | 19.9173 / 662.7476 / 60.3878 | 20.0630 / 640.8796 / 152.0517 | 20.3483 / 600.1356 / 306.4753
19.3991 / 746.7401 / 15.4124 | 19.8520 / 672.7897 / 60.0065 | 20.1227 / 632.1369 / 152.8503 | 20.4549 / 585.5920 / 315.1839
19.3752 / 750.8673 / 15.4889 | 19.7395 / 690.4477 / 59.8597 | 20.2543 / 613.2665 / 151.6609 | 20.4367 / 588.0450 / 307.7829
19.2693 / 769.3932 / 15.3593 | 19.9330 / 660.3565 / 62.0048 | 20.3432 / 600.8463 / 151.6051 | 20.4022 / 592.7377 / 311.2913
19.4267 / 742.0074 / 15.5287 | 19.7074 / 695.5632 / 60.5518 | 20.2234 / 617.6454 / 150.1376 | 20.5054 / 578.8216 / 306.9459
19.3039 / 763.2860 / 15.4373 | 19.7071 / 695.6140 / 60.3061 | 20.2587 / 612.6538 / 150.4258 | 20.3715 / 596.9458 / 308.9359
19.4260 / 742.1325 / 15.4490 | 19.8390 / 674.8012 / 60.0366 | 20.2990 / 606.9950 / 149.5369 | 20.4816 / 581.9978 / 308.8351
19.2067 / 780.5690 / 15.1944 | 19.7444 / 689.6748 / 60.4386 | 20.1147 / 633.3042 / 151.2679 | 20.4032 / 592.5921 / 308.2801
Mean (PSNR / MSE): 19.3552 / 754.5263 | 19.8167 / 678.4020 | 20.2201 / 618.2406 | 20.4357 / 588.2240
Table 15. PSNR, MSE, and time of images with different pixel sizes reconstructed by the IHT algorithm.
350 × 350: PSNR / MSE / Time (s) | 500 × 500: PSNR / MSE / Time (s) | 650 × 650: PSNR / MSE / Time (s) | 800 × 800: PSNR / MSE / Time (s)
15.3230 / 1908.9 / 1.0292 | 15.5289 / 1820.5 / 2.7308 | 15.5357 / 1817.6 / 5.8068 | 15.1495 / 1986.7 / 10.6558
15.2696 / 1932.5 / 0.9657 | 15.3572 / 1893.9 / 2.5608 | 15.5149 / 1826.4 / 5.7885 | 15.6001 / 1790.9 / 10.7756
15.4349 / 1860.3 / 0.9223 | 15.3438 / 1899.8 / 2.5887 | 15.3108 / 1914.3 / 5.5971 | 15.4884 / 1837.6 / 10.4584
15.6466 / 1771.8 / 0.9237 | 15.5348 / 1818 / 2.5473 | 15.2720 / 1931.4 / 5.6787 | 15.3543 / 1895.2 / 10.7111
15.6298 / 1778.7 / 0.9336 | 15.3886 / 1880.3 / 2.5745 | 15.3944 / 1877.8 / 5.5852 | 15.1589 / 1982.4 / 10.4426
15.4950 / 1834.7 / 0.9343 | 15.6286 / 1779.2 / 2.5784 | 15.4219 / 1865.9 / 5.6056 | 15.2874 / 1924.6 / 10.5495
15.3671 / 1889.6 / 0.9265 | 15.3710 / 1887.9 / 2.5645 | 15.4774 / 1842.2 / 5.6063 | 15.4666 / 1846.8 / 10.5276
15.4187 / 1867.3 / 0.9262 | 15.3187 / 1910.8 / 2.5596 | 15.2540 / 1939.5 / 5.6121 | 15.4023 / 1874.4 / 10.4417
15.6738 / 1760.8 / 0.9315 | 15.6133 / 1785.5 / 2.5755 | 15.3534 / 1895.6 / 5.5838 | 15.4003 / 1875.2 / 10.4518
15.2984 / 1919.8 / 0.9567 | 15.3828 / 1882.8 / 2.5600 | 15.3888 / 1880.2 / 5.5886 | 15.5785 / 1799.8 / 10.4591
Mean (PSNR / MSE): 15.4557 / 1852.44 | 15.4468 / 1855.87 | 15.3923 / 1879.09 | 15.3886 / 1881.36
Table 16. PSNR, MSE, and time of images with different pixel sizes reconstructed by the IRLS algorithm.
350 × 350: PSNR / MSE / Time (s) | 500 × 500: PSNR / MSE / Time (s) | 650 × 650: PSNR / MSE / Time (s) | 800 × 800: PSNR / MSE / Time (s)
20.6086 / 565.2282 / 11.7302 | 21.2215 / 490.8249 / 65.6006 | 21.3258 / 479.1809 / 270.9207 | 21.3758 / 473.7010 / 664.0105
20.6022 / 566.0515 / 10.5527 | 21.0093 / 515.4109 / 74.9643 | 21.4131 / 469.6444 / 262.2276 | 21.5207 / 458.1548 / 693.0186
20.6176 / 564.0528 / 11.4481 | 20.8764 / 531.4202 / 66.3486 | 21.4715 / 463.3728 / 260.0932 | 21.3714 / 474.1812 / 668.7674
20.4171 / 590.7103 / 10.6213 | 21.0181 / 514.3672 / 64.8940 | 21.6391 / 445.8292 / 261.3721 | 21.5438 / 455.7273 / 662.6378
20.7530 / 546.7416 / 11.0238 | 21.2248 / 490.4610 / 65.6442 | 21.2211 / 490.8747 / 258.2043 | 21.6174 / 448.0666 / 666.4153
21.0103 / 515.2937 / 11.2650 | 20.8733 / 531.7981 / 64.7945 | 21.2544 / 487.1234 / 261.8442 | 21.5099 / 459.2951 / 668.0591
21.1637 / 497.4021 / 11.4127 | 20.9998 / 516.5319 / 64.5561 | 21.2597 / 486.5327 / 255.4833 | 21.3478 / 476.7588 / 677.1501
20.4757 / 582.7821 / 10.9141 | 21.1686 / 496.8444 / 64.5493 | 21.2293 / 489.9456 / 255.1096 | 21.3538 / 476.1018 / 666.8833
20.6876 / 555.0313 / 10.6173 | 21.1205 / 502.3765 / 64.5562 | 21.3770 / 473.5656 / 264.4215 | 21.3173 / 480.1209 / 663.1420
20.1498 / 628.2036 / 10.8665 | 21.0134 / 514.9214 / 64.1737 | 21.2962 / 482.4597 / 260.7229 | 21.4114 / 469.8253 / 650.7766
Mean (PSNR / MSE): 18.6069 / 561.1497 | 21.0526 / 510.4957 | 21.3487 / 476.8529 | 21.4369 / 467.1933
Table 17. PSNR, MSE, and time of images with different pixel sizes reconstructed by the SP algorithm.
350 × 350: PSNR / MSE / Time (s) | 500 × 500: PSNR / MSE / Time (s) | 650 × 650: PSNR / MSE / Time (s) | 800 × 800: PSNR / MSE / Time (s)
17.9046 / 1053.5 / 7.1466 | 18.4409 / 931.0924 / 17.3122 | 18.5753 / 902.7211 / 36.5388 | 18.8447 / 848.4259 / 70.6439
17.8879 / 1057.5 / 6.9380 | 18.1981 / 984.6217 / 16.6595 | 18.6085 / 895.8334 / 35.2757 | 18.8404 / 849.2576 / 68.9989
17.8405 / 1069.1 / 7.0076 | 18.1930 / 985.7762 / 16.7420 | 18.5203 / 914.2157 / 36.1913 | 18.8246 / 852.3637 / 74.357
17.8168 / 1075 / 6.8259 | 18.1758 / 989.6957 / 16.6609 | 18.4910 / 920.4035 / 34.9478 | 18.5796 / 901.8120 / 70.4628
17.7943 / 1080.6 / 6.7711 | 18.2247 / 978.6191 / 16.6698 | 18.5779 / 902.1786 / 35.1908 | 18.7513 / 866.8650 / 74.8542
17.7769 / 1084.9 / 6.739 | 18.2543 / 971.9713 / 16.7688 | 18.7444 / 868.2313 / 35.4837 | 18.7457 / 867.9828 / 70.8790
17.7737 / 1085.7 / 6.6753 | 18.3440 / 952.0871 / 17.0695 | 18.2689 / 968.6950 / 35.2054 | 18.7504 / 867.0365 / 71.0472
18.0686 / 1014.4 / 6.8211 | 18.1780 / 989.1917 / 17.3466 | 18.7020 / 876.7578 / 35.1299 | 18.7856 / 860.0449 / 69.8509
17.8676 / 1062.5 / 6.8499 | 18.2797 / 966.3036 / 16.6775 | 18.5849 / 900.7179 / 35.1921 | 18.7113 / 874.8790 / 71.3209
18.0819 / 1011.3 / 6.9127 | 18.2516 / 972.5734 / 16.4263 | 18.6761 / 881.9959 / 34.7940 | 18.7344 / 870.2398 / 66.1669
Mean (PSNR / MSE): 17.8813 / 1059.45 | 18.2540 / 972.1932 | 18.5800 / 903.1750 | 18.7568 / 865.8907
Table 18. Signal reconstruction times and residual values.
Times of signal reconstruction | 1 | 50 | 100 | 200
Value of residual | 168.5664 | 161.6117 | 150.3473 | 136.5506
Table 19. Multiple simulation data with different signal reconstruction times.
Residual values:
1 time | 168.5664 | 168.6020 | 162.6642 | 165.0314 | 169.6857 | 165.8414 | 158.1089 | 155.4569 | 155.4202 | 149.4011
50 times | 161.6117 | 159.0616 | 160.1910 | 155.4877 | 155.4719 | 150.6663 | 142.2037 | 149.5014 | 148.7272 | 155.5769
100 times | 150.3473 | 150.5078 | 150.1486 | 147.8456 | 148.4673 | 153.6050 | 153.2938 | 147.4182 | 155.6388 | 149.4355
200 times | 136.5506 | 143.0227 | 139.9187 | 143.3128 | 146.7156 | 147.7237 | 146.9315 | 141.7615 | 144.6260 | 143.4051
Table 20. PSNR and MSE of reconstructed images at different distances.
d (m) | 75 | 125 | 175 | 225
PSNR (dB) | 13.5770 | 10.4228 | 11.2921 | 12.2664
MSE | 2.8525 × 10³ | 5.8992 × 10³ | 4.8292 × 10³ | 3.8587 × 10³
Table 21. PSNR and MSE of the reconstructed images with different sparsity k.
k | 9 | 10 | 11 | 12 | 13
PSNR | 24.4658 | 25.9451 | 26.2747 | 26.4402 | 26.5216
MSE | 232.5423 | 165.4131 | 153.3254 | 147.5921 | 144.8496
Table 22. PSNR and MSE of the reconstructed images with different matrix sizes M × N.
M | 85 | 64 | 51 | 42 | 36
PSNR | 24.5055 | 23.9166 | 23.3133 | 22.6527 | 21.3493
MSE | 230.4277 | 263.8865 | 303.2150 | 353.0308 | 476.5985
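Tables 21–24 sweep the sparsity k and the number of measurements M (matrix size M × N), and the reconstruction quality degrades as M shrinks, as compressed-sensing theory predicts. The sketch below illustrates the same trade-off with a textbook orthogonal matching pursuit recovery of a synthetic k-sparse signal measured by a random Gaussian M × N matrix; it is a generic illustration under these assumptions, not the authors' TL-GOMP algorithm or their interferometric measurement model:

import numpy as np

def omp(A, y, k):
    # Textbook orthogonal matching pursuit: recover a k-sparse x from y = A @ x.
    residual = y.copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(k):
        # Select the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the current support, then update the residual.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(0)
N, k = 256, 10
x = np.zeros(N)
x[rng.choice(N, size=k, replace=False)] = rng.normal(size=k)

for M in (36, 42, 51, 64, 85):             # measurement counts mirroring Table 22
    A = rng.normal(size=(M, N)) / np.sqrt(M)
    x_hat = omp(A, A @ x, k)
    print(M, np.linalg.norm(x - x_hat))    # the error generally falls as M grows

In this toy setting, M = 85 measurements usually recover the support exactly, while M = 36 often leaves a noticeable residual, which mirrors the monotone PSNR trend reported in Table 22.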
Table 23. PSNR and MSE of the multiple simulation data with different sparsity k.
k = 9, PSNR | 24.4658 | 24.4578 | 24.5642 | 24.6083 | 24.5429 | 24.3835 | 24.4639 | 24.4566 | 24.5734 | 24.5788
k = 9, MSE | 232.5423 | 232.9702 | 227.3323 | 225.0338 | 228.4496 | 236.9926 | 232.6450 | 233.0364 | 226.8511 | 226.5680
k = 10, PSNR | 25.9451 | 25.9281 | 25.9644 | 25.9163 | 26.1031 | 26.0670 | 26.1165 | 25.9435 | 26.2481 | 25.9240
k = 10, MSE | 165.4131 | 166.0630 | 164.6806 | 166.5131 | 159.5034 | 160.8350 | 159.0111 | 165.4743 | 154.2653 | 166.2188
k = 11, PSNR | 26.2747 | 26.2508 | 26.3373 | 26.1912 | 26.3492 | 26.2861 | 26.2062 | 26.2744 | 26.2622 | 26.1795
k = 11, MSE | 153.3254 | 154.1708 | 151.1292 | 156.3016 | 150.7153 | 152.9223 | 155.7619 | 153.3364 | 153.7671 | 156.7223
k = 12, PSNR | 26.4402 | 26.4753 | 26.3413 | 26.2574 | 26.5338 | 26.4707 | 26.3457 | 26.4020 | 26.3657 | 26.5155
k = 12, MSE | 147.5921 | 146.4039 | 150.9896 | 153.9345 | 144.4438 | 146.5585 | 150.8391 | 148.8964 | 150.1456 | 145.0545
k = 13, PSNR | 26.5216 | 26.4978 | 26.5798 | 26.6370 | 26.6094 | 26.4142 | 26.6716 | 26.4771 | 26.4774 | 26.6881
k = 13, MSE | 144.8496 | 145.6461 | 142.9226 | 141.0527 | 141.9519 | 148.4760 | 139.9344 | 146.3412 | 146.3327 | 139.4024
Table 24. PSNR and MSE of the multiple simulation data with different matrix size M × N.
M = 85, PSNR | 24.5055 | 24.6425 | 24.5528 | 24.5056 | 24.5522 | 24.5659 | 24.5421 | 24.5434 | 24.4919 | 24.4003
M = 85, MSE | 230.4277 | 223.2689 | 227.9270 | 230.4216 | 227.9595 | 227.2443 | 228.4929 | 228.4230 | 231.1488 | 236.0733
M = 64, PSNR | 23.9166 | 24.1006 | 23.9725 | 24.0803 | 24.1310 | 24.0046 | 24.0576 | 24.1294 | 23.9754 | 24.0435
M = 64, MSE | 263.8865 | 252.9398 | 260.5121 | 254.1288 | 251.1774 | 258.5935 | 255.4592 | 251.2689 | 260.3421 | 256.2885
M = 51, PSNR | 23.3133 | 23.5388 | 23.5970 | 23.4893 | 23.4474 | 23.2988 | 23.2958 | 23.3413 | 23.4279 | 23.3476
M = 51, MSE | 303.2150 | 287.8705 | 284.0408 | 291.1752 | 293.9946 | 304.2291 | 304.4373 | 301.2679 | 295.3188 | 300.8316
M = 42, PSNR | 22.6527 | 22.7587 | 23.1023 | 22.7050 | 22.5485 | 22.8871 | 23.2799 | 22.9354 | 22.9467 | 22.6748
M = 42, MSE | 353.0308 | 344.5174 | 318.3073 | 348.8021 | 361.6032 | 334.4826 | 305.5548 | 330.7809 | 329.9219 | 351.2371
M = 36, PSNR | 21.3493 | 21.9878 | 21.9421 | 21.8499 | 22.1946 | 21.9828 | 21.8089 | 21.8007 | 22.1391 | 21.0789
M = 36, MSE | 476.5985 | 411.4304 | 415.7896 | 424.7125 | 392.2991 | 411.9064 | 428.7317 | 429.5465 | 397.3452 | 507.2072
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
