Article

Dual-Channel Reconstruction Network for Image Compressive Sensing

School of Artificial Intelligence, Xidian University, Xi’an 710071, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(11), 2549; https://doi.org/10.3390/s19112549
Submission received: 17 April 2019 / Revised: 1 June 2019 / Accepted: 2 June 2019 / Published: 4 June 2019
(This article belongs to the Section Intelligent Sensors)

Abstract

Existing compressive sensing (CS) reconstruction algorithms require enormous computation, and their reconstruction quality is not satisfying. In this paper, we propose a novel Dual-Channel Reconstruction Network (DC-Net) module and use it to build two CS reconstruction networks: the first recovers an image from its traditional random under-sampling measurements (RDC-Net); the second recovers an image from CS measurements acquired by a fully connected measurement matrix (FDC-Net). Notably, the fully connected under-sampling method makes the CS measurements represent the original images more effectively. In both proposed networks, a fully connected layer recovers a preliminary reconstructed image via a linear mapping from the CS measurements. The DC-Net module is then used to further improve the quality of the preliminary reconstructed image. In the DC-Net module, a residual block channel improves reconstruction quality and a dense block channel expedites calculation; their fusion improves reconstruction performance and reduces runtime simultaneously. Extensive experiments show that the two proposed networks outperform state-of-the-art CS reconstruction methods in PSNR and have excellent visual reconstruction effects.

1. Introduction

In the past decade, compressive sensing (CS) [1] theory has achieved great success as a signal sampling paradigm because it can obtain high-quality recovery from CS measurements. Based on CS, several new imaging systems have been developed, such as the single-pixel camera [2], compressive spectral imaging [3], hyperspectral imaging [4], high-speed video cameras [5] and fast Magnetic Resonance Imaging (MRI) [6].
For a given image $x \in \mathbb{R}^N$, the CS linear measurements are $y = \Phi x \in \mathbb{R}^M$, where $\Phi$ is an $M \times N$ measurement matrix and $M \ll N$. The original image has a sparse representation $x = \Psi s$, where $\Psi$ is an $N \times N$ basis matrix. Compressive sensing is mainly concerned with recovering the original image $x$ from the CS measurements $y$, for which there are two kinds of methods: conventional iterative optimization strategies [7,8,9,10,11,12,13,14] and deep learning-based methods [15,16,17,18].
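As a concrete illustration of this sampling model, the following sketch (ours, not taken from the paper) senses one vectorized 33 × 33 image block, the block size used later in this work, with a random Gaussian measurement matrix at MR = 0.25; the 1/√M scaling of the Gaussian entries is a common convention and an assumption here.

```python
import numpy as np

# Minimal sketch of the CS acquisition model y = Phi @ x with M << N.
N = 33 * 33                # vectorized 33x33 image block, x in R^N (N = 1089)
MR = 0.25                  # measurement rate M / N
M = int(MR * N)            # number of CS measurements (M = 272)

rng = np.random.default_rng(0)
x = rng.random(N)                                # stand-in for a vectorized image block
Phi = rng.normal(0.0, 1.0 / np.sqrt(M), (M, N))  # random Gaussian measurement matrix
y = Phi @ x                                      # CS measurements, y in R^M

print(x.shape, y.shape)                          # (1089,) (272,)
```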
Early researchers proposed iterative algorithms such as matching pursuit [7], orthogonal matching pursuit (OMP) [8,9], iterative hard thresholding [11], iterative soft thresholding [12] and approximate message passing (AMP) [13,19]. However, these iterative algorithms usually converge very slowly. To alleviate this difficulty, block-based CS methods have been proposed [20,21], although they still require expensive computation. Inspired by the great success of deep neural networks in computer vision tasks [22,23], learning-based CS reconstruction methods have been developed [15,16,17,18,24]. Compared with traditional CS methods, deep learning-based methods require an additional training process, which brings the need for a training set; in return, they reconstruct much faster via a single forward pass. Still, the reconstruction performance and time complexity of existing learning-based methods are not satisfying and can be further improved. Among learning-based methods such as SDA [15], DR2-Net [17], ConvCSNet [24] and ASRNet [25], SDA and ConvCSNet obtain the CS measurements directly from the whole image, so their computational complexity grows as the input image size increases. DR2-Net and ASRNet extract image patches from the data set, but they need to build deeper models (enormous computation) to recover the original image. We have analysed this phenomenon: in DR2-Net, which uses a single channel of four residual blocks [17], the authors compare the performance of fc1089-Res1, fc1089-Res2, fc1089-Res3 and fc1089-Res4. We can easily find that fc1089-Res2 and fc1089-Res4 perform similarly at MRs = 0.01 and 0.10; that is, fc1089-Res4 brings little improvement over fc1089-Res2. Meanwhile, dense blocks [22] can strengthen feature propagation, encourage feature reuse and substantially reduce the number of parameters. Therefore, we use one dense block to replace two of the residual blocks of fc1089-Res4. Moreover, the connection mode is not a cascade but parallel, which can alleviate the vanishing-gradient problem to some extent. We thus use a shallower dual-channel network (less computation) of two residual blocks and one dense block to build the dual-channel reconstruction module, which further improves the image reconstruction quality. The residual block channel captures rich image features and improves reconstruction performance, while the dense block channel expedites calculation because of its fewer parameters.
In this paper, we propose a novel dual-channel reconstruction module (DC-Net module) based on two residual blocks and one dense block, and we use this module to build two CS reconstruction networks: RDC-Net and FDC-Net. The first layer of RDC-Net senses an input image with a traditional Gaussian random matrix, while the first layer of FDC-Net senses an input image with a fully connected measurement matrix. The second layer of both networks is a fully connected layer that recovers a preliminary reconstructed image. The DC-Net module is then used to further improve the quality of the preliminary reconstructed image.
Extensive experiments show that the proposed networks can obtain better performance than the state-of-the-art CS reconstruction algorithms in terms of PSNR and visual effects. Our contributions can be summarized as follows:
  • Unlike deep-learning networks with a very deep single channel, we propose a novel shallow dual-channel reconstruction module for image compressive sensing reconstruction, in which each channel extracts features at a different level. This brings better reconstruction quality.
  • The proposed DC-Net module has two residual blocks and one dense block. Because a dense block has fewer parameters than a residual block, the time complexity of the proposed method is lower than that of DR2-Net with four residual blocks.
  • In our method, the two residual blocks in one channel obtain high-level features and the one dense block in the other channel obtains low-level features. Experimental results show that both RDC-Net and FDC-Net are more robust than DR2-Net.

2. Related Work

Many traditional optimization algorithms [7,8,9,10,20,26,27] have been used to solve the CS reconstruction problem. Amit Satish Unde et al. proposed a reconstruction algorithm based on iterative re-weighted $l_1$-norm minimization [20]. C. A. Metzler et al. proposed a denoising-based AMP framework (D-AMP), which integrates a wide class of denoisers within its iterations [14], and also developed a novel neural network architecture that mimics the behavior of the denoising-based approximate message passing algorithm (LDAMP) [19]. Jin Tan et al. employed an adaptive Wiener filter as the image denoiser within the AMP framework, called "AMP-Wiener", and extended it to three dimensions ("AMP-3D-Wiener") for the compressive hyperspectral imaging reconstruction problem [28]. Philip Schniter et al. integrated D-AMP into an auto-tuning method to form D-VAMP [29]. M. E. Tipping et al. presented an accelerated training algorithm for sparse Bayesian models, exploiting a recent result concerning the properties of the marginal likelihood function to derive a 'constructive' method for its maximisation [30]. Jiao Wu et al. proposed a stage-wise fast $l_p$-sparse Bayesian learning algorithm for CS reconstruction by integrating a fast sequential learning scheme with a stage-wise strategy [31]. Thomas Blumensath et al. proposed iterative hard thresholding for compressed sensing [11]. Xiangming Meng et al. presented a unified Bayesian inference framework for generalized linear models (GLM), which iteratively reduces the GLM problem to a sequence of standard linear model (SLM) problems [32]. Jiang Zhu et al. proposed an approximate message passing-based generalized sparse Bayesian learning (AMP-Gr-SBL) algorithm to reduce the computational complexity of the Gr-SBL algorithm [33]. Jun Fang et al. proposed a 2D pattern-coupled hierarchical Gaussian prior model to exploit the underlying block-sparse structure; the model imposes a soft coupling mechanism among neighboring coefficients through their shared hyperparameters [34]. Mohammad Shekaramiz et al. proposed a new sparse Bayesian learning (SBL) method that incorporates a total variation-like prior as a measure of the overall clustering pattern in the solution [35]. Saman Khoramian presented an iterative thresholding algorithm for linear inverse problems with multi-constraints and its applications [26]. Bin Kang et al. proposed an efficient image fusion framework for multi-focus images based on compressed sensing; the framework consists of three parts (image sampling, measurement fusion and image reconstruction), saves computational resources, enhances the fusion result and is easy to implement [36]. Kezhi Li et al. proposed a new class of orthogonal circulant matrices built from deterministic sequences for convolution-based compressed sensing [37]. Nam Yul Yu et al. proposed constructing a filter with real-valued coefficients by taking the discrete Fourier transform of a decimated binary Sidelnikov sequence [38]. Weisheng Dong, Guangming Shi et al. presented a learning method for compressive image recovery, in which PAR models are first learned from a training set and then used to regularize the compressive image recovery process [39]. However, the above algorithms suffer from serious time consumption, which has become the bottleneck for the application of image compressive sensing.
In recent years, deep learning-based methods have shown promising performance in compressive image recovery [15,16,17,18,40]. Simiao Yu et al. proposed a conditional Generative Adversarial Network-based deep learning framework for de-aliasing and reconstructing MRI images from highly undersampled data, with great promise to accelerate the data acquisition process [41]. Guang Yang et al. provided a deep learning-based strategy for CS-MRI reconstruction, bridging a substantial gap between conventional non-learning methods, which work only on data from a single image, and prior knowledge from large training data sets [40]. Maximilian Seitzer et al. proposed a hybrid method in which a visual refinement component is learnt on top of an MSE loss-based reconstruction network [42]. Jo Schlemper et al. proposed a novel cascaded convolutional neural network based on the compressive sensing technique and explored its applicability to improving DT-CMR acquisitions [43]. The stacked denoising autoencoder (SDA) [15] treats the mapping from the original signal to its measurement vector as one layer of the SDA. This measurement method lets SDA adapt its structure to the training set; however, its computational complexity grows as the input image size increases. Kulkarni et al. [16] proposed a block-based network for non-iterative image recovery, which takes the CS measurements of an image block as input and outputs the corresponding reconstructed block. DR2-Net [17] contains a linear mapping that recovers a preliminary reconstructed image, in which residual blocks [23] further improve the reconstruction quality. Xiaotong Lu, Weisheng Dong et al. [24] proposed a novel convolutional compressive sensing framework (ConvCSNet) based on a deep convolutional neural network, which captures the image measurements by a convolutional operation.
Deep Residual Network: Recently, the deep residual network (ResNet) [23] has achieved promising performance on many computer vision tasks such as image recognition [23] and image denoising [44]. Compared with traditional convolutional networks, ResNet introduces identity shortcut connections that directly pass the data flow to later layers. ResNet consists of many residual blocks, and we use them to avoid the loss attenuation caused by multiple non-linear transformations.
Densely Connected Network: Recently, the densely connected network (DenseNet) [22] has also achieved enormous success in image detection, classification and semantic segmentation. Compared with the deep residual network [23], DenseNet introduces identity shortcuts to all layers, which makes better use of the information in all features. Especially in reconstruction tasks, the DenseNet architecture can make comprehensive use of shallow detailed features to recover the original image. DenseNet consists of many dense blocks.
To further improve reconstruction quality and reduce runtime in CS reconstruction, in this paper we use two residual blocks and one dense block to build a dual-channel reconstruction network module. This module improves image reconstruction quality and reduces time complexity simultaneously, and it is used to build two CS reconstruction networks: the first recovers the original image from CS measurements acquired by random Gaussian under-sampling (RDC-Net), and the second recovers the original image from CS measurements acquired by a fully connected measurement matrix (FDC-Net).
The remainder of this paper is organized as follows: In Section 3, we introduce the dual-channel reconstruction network module and the two reconstruction networks. In Section 4, we design extensive experiments to evaluate the proposed reconstruction networks. Finally, we conclude the paper in Section 5.

3. Network Architecture

As shown in Figure 1, we propose two kinds of reconstruction networks: RDC-Net and FDC-Net. We first introduce the traditional random under-sampling and fully connected under-sampling approaches and the preliminary reconstruction module, and then discuss the dual-channel reconstruction network module.

3.1. Under-Sampling and Preliminary Reconstruction

In compressive sensing theory [1], several under-sampling approaches are available, such as random Gaussian measurement [45], random Fourier measurement [46] and random Bernoulli measurement [47]. The random Gaussian measurement matrix is the most widely used in CS theory, and we also use it in RDC-Net:
$$
y = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_M \end{bmatrix}
= \begin{bmatrix}
w_{11} & w_{12} & w_{13} & w_{14} & \cdots & w_{1N} \\
w_{21} & w_{22} & w_{23} & w_{24} & \cdots & w_{2N} \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
w_{M1} & w_{M2} & w_{M3} & w_{M4} & \cdots & w_{MN}
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ \vdots \\ x_N \end{bmatrix}
\qquad (1)
$$
In FDC-Net, we use a fully connected layer (Figure 2b) as the measurement matrix to imitate the traditional under-sampling method in Figure 2a. This fully connected layer has no bias and no activation function, so it learns a linear transformation from the original image to the CS measurements. Both the random Gaussian measurement matrix and the learned measurement matrix share the mathematical form of Equation (1). We expect the learned measurement matrix (Equation (4)) to adapt well to the distribution of the original images and denote this layer as $y_f = W_1 \cdot x$; the traditional Gaussian measurement can likewise be expressed as $y_r = W_2 \cdot x$, where $W_1, W_2 \in \mathbb{R}^{M \times N}$ ($M \ll N$) and $W_2$ follows a Gaussian distribution. Here $x$ is the original image, and $y_r$, $y_f$ are the corresponding CS measurements of RDC-Net and FDC-Net, respectively.
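The two sampling operators can be sketched as follows. This is our own minimal PyTorch illustration, not the authors' Caffe implementation: a bias-free linear layer plays the role of the learned matrix $W_1$, and a fixed random tensor plays the role of the Gaussian matrix $W_2$ (its 1/√M scaling is our assumption).

```python
import torch
import torch.nn as nn

N, M = 33 * 33, 272     # block dimension and measurement count (MR = 0.25)

# FDC-Net style: a learned, bias-free linear layer acts as the measurement
# matrix W1 (y_f = W1 x). With no bias and no activation it stays linear.
learned_sampler = nn.Linear(N, M, bias=False)

# RDC-Net style: a fixed random Gaussian matrix W2 (y_r = W2 x).
W2 = torch.randn(M, N) / M ** 0.5   # scaling assumed, not given in the paper

x = torch.rand(16, N)               # a batch of 16 vectorized 33x33 blocks
y_f = learned_sampler(x)            # (16, M); W1 is updated by backpropagation
y_r = x @ W2.t()                    # (16, M); W2 stays fixed during training
```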
Afterwards, we use a fully connected layer to recover a preliminary reconstructed image $x_c^*$. We denote the preliminary reconstruction module and its parameters as $f$ and $\Omega_c^p$ respectively, where $c \in \{\text{RDC-Net}, \text{FDC-Net}\}$ and $p$ indicates the preliminary reconstruction module. The preliminary reconstructed image can be expressed by:
$$ x_c^* = f(y_c, \Omega_c^p) \qquad (2) $$
and the mean squared error (MSE) is used as the loss function on the training set:
$$ L\{\Omega_c^p\} = \frac{1}{M} \sum_{i=1}^{M} \left\| f(y_i^c, \Omega_c^p) - x_i \right\|_2^2 \qquad (3) $$
$$ W_1 = \arg\min_{W_1} \frac{1}{M} \sum_{i=1}^{M} \left\| f(W_1 x_i, \Omega_f^p) - x_i \right\|_2^2 \qquad (4) $$
where $M$ and $W_1$ denote the number of training samples and the learned measurement matrix, respectively. The back propagation [48] algorithm is used to minimize the loss function defined in Equation (3).
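A minimal sketch of this preliminary reconstruction step and its MSE loss (Equations (2) and (3)) might look as follows in PyTorch; the batch size and the stand-in measurement matrix are our assumptions.

```python
import torch
import torch.nn as nn

N, M = 33 * 33, 272

# Preliminary reconstruction: one fully connected layer maps the CS
# measurements y in R^M to a coarse estimate x* in R^N (Equation (2)).
prelim = nn.Linear(M, N)

x = torch.rand(16, N)                    # ground-truth blocks (stand-in batch)
y = x @ torch.randn(N, M) / M ** 0.5     # stand-in CS measurements
x_star = prelim(y)                       # linear mapping y -> x*

# MSE loss over the batch (Equation (3)), minimized by back propagation.
loss = nn.functional.mse_loss(x_star, x)
loss.backward()
```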

3.2. Dual-Channel Network Module

In Section 3.1, we only obtain a preliminary reconstructed image, because an exact solution is hard to obtain in the preliminary reconstruction module. The dual-channel network module is therefore used to further improve the reconstruction quality. In this paper, two residual blocks form one channel and one dense block forms the other, and the two channels are fused to build the dual-channel network module. We first give a brief introduction to the residual block and the dense block.
Compared with a traditional convolutional network, the main difference of the residual network is that it introduces identity connections that directly pass the data flow to later layers. Given an input $\chi$, we expect a few stacked layers of the network to output $T(\chi)$. However, optimizing $T$ directly in a traditional convolutional network is expensive. In [23], K. He et al. proposed to approximate the residual between $T(\chi)$ and $\chi$ with the stacked layers. The residual block (Figure 1c) can be expressed by
$$ F(\chi) = T(\chi) - \chi \qquad (5) $$
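A residual block of this kind can be sketched as below; the two 3 × 3 convolutions and the 64-channel width are our assumptions, since the paper does not list these hyperparameters.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Residual block in the spirit of Equation (5): the stacked conv layers
    learn F(x) = T(x) - x, and the identity shortcut adds x back."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)   # T(x) = F(x) + x

out = ResidualBlock()(torch.rand(1, 64, 33, 33))   # spatial shape preserved
```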
In [22], Gao Huang et al. proposed the Densely Connected Convolutional Network (DenseNet) for many computer vision tasks. A traditional convolutional network with $L$ layers has $L$ connections, while DenseNet has $L(L+1)/2$ direct connections, which strengthens feature propagation, encourages feature reuse and greatly reduces the number of parameters. This kind of network is very useful in the compressive sensing field. In a dense block (Figure 1d), each layer takes all preceding feature maps as inputs, and its own feature maps are used as inputs to all subsequent layers. In other words, the $m$th layer takes the feature maps of all preceding layers $\chi_0, \chi_1, \ldots, \chi_{m-1}$ as inputs:
$$ \chi_m = \Gamma_m([\chi_0, \chi_1, \ldots, \chi_{m-1}]) \qquad (6) $$
where $[\chi_0, \chi_1, \ldots, \chi_{m-1}]$ denotes the concatenation of the feature maps produced in layers $0, 1, \ldots, m-1$. $\Gamma_m$ can be regarded as a composite function of four consecutive operations: batch normalization (BN), a scale layer, a rectified linear unit (ReLU) and a convolution (Conv).
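The concatenation pattern of Equation (6) can be sketched like this; the growth rate and depth are our assumptions, and Caffe's separate scale layer is folded into PyTorch's BatchNorm2d.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Dense block following Equation (6): layer m takes the concatenation of
    all preceding feature maps as input and contributes its own maps to all
    subsequent layers. Composite function: BN -> ReLU -> Conv."""
    def __init__(self, in_channels: int = 64, growth: int = 16, depth: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_channels
        for _ in range(depth):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, 3, padding=1),
            ))
            ch += growth

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)   # every feature map flows forward

out = DenseBlock()(torch.rand(1, 64, 33, 33))   # (1, 64 + 4 * 16, 33, 33)
```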
We denote the dual-channel network module as $H(\chi)$; it contains two residual blocks and one dense block and can be expressed by
$$ H(\chi) = \otimes^2 F(\chi) \oplus \Gamma(\chi) \qquad (7) $$
where the symbol $\otimes$ represents the cascade operation ($\otimes^2 F$ denotes two cascaded residual blocks) and $\oplus$ represents the parallel combination of the one dense block and the two residual blocks.
In this paper, $H(\chi)$ takes $x_c^*$ as input and outputs the final reconstruction result, which can be represented as:
$$ \hat{x}_i^c = H_c(x_c^*, \Omega_c^d) \qquad (8) $$
where $d$ indicates the dual-channel network module and $\Omega_c^d$ represents its parameters. The loss function of the proposed networks can be expressed by
$$ L\{\Omega_c^p, \Omega_c^d\} = \frac{1}{M} \sum_{i=1}^{M} \left\| \hat{x}_i^c - x_i \right\|_2^2 = \frac{1}{M} \sum_{i=1}^{M} \left\| H_c(x_c^*, \Omega_c^d) - x_i \right\|_2^2 = \frac{1}{M} \sum_{i=1}^{M} \left\| H_c\big(f(y_i^c, \Omega_c^p), \Omega_c^d\big) - x_i \right\|_2^2 \qquad (9) $$
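Putting the pieces together, the dual-channel module of Equation (7) can be sketched as below. This is our reading of the structure, not the released implementation: the fusion by concatenation plus a 3 × 3 convolution, the channel widths, and the assumption that the one-channel preliminary image has already been lifted to 64 feature maps by a front-end convolution are all ours.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, c: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(c, c, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c, c, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)

class DenseBlock(nn.Module):
    def __init__(self, c: int = 64, growth: int = 16, depth: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = c
        for _ in range(depth):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, 3, padding=1)))
            ch += growth
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class DCNetModule(nn.Module):
    """H(x): two cascaded residual blocks in one channel, one dense block in
    the other; the parallel outputs are concatenated and fused back to a
    single image plane."""
    def __init__(self, c: int = 64):
        super().__init__()
        self.res_channel = nn.Sequential(ResBlock(c), ResBlock(c))  # cascade
        self.dense_channel = DenseBlock(c)
        fused = c + self.dense_channel.out_channels
        self.fuse = nn.Conv2d(fused, 1, 3, padding=1)  # fusion choice is ours

    def forward(self, x):
        out = torch.cat([self.res_channel(x), self.dense_channel(x)], dim=1)
        return self.fuse(out)

# 64-channel input stands in for features extracted from the preliminary
# reconstructed image (the lifting conv is assumed, not shown).
y = DCNetModule()(torch.rand(1, 64, 33, 33))   # (1, 1, 33, 33)
```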

3.3. Architecture

The architectures of the proposed networks are shown in Figure 1. In RDC-Net (Figure 1a), we take a 33 × 33 image block as input and acquire CS measurements with the traditional random measurement matrix. In FDC-Net (Figure 1b), we take the same sized image block as input and acquire CS measurements with the fully connected measurement matrix. From the CS measurements, the preliminary reconstruction is realized via a fully connected layer. Then, the dual-channel reconstruction network module $H(\chi)$ takes the preliminary reconstructed image as input and outputs a higher-quality image. Finally, BM3D [49] is used to remove the artifacts caused by block-wise processing.

4. Experiments

In this section, we perform extensive experiments to test the performance of the proposed networks on the Caffe [50] platform. Our computer is equipped with an Intel Core i7-6700 CPU at 3.4 GHz and an Nvidia GeForce GTX 1080Ti GPU, and the network framework runs on Ubuntu.

4.1. Training Data

For a fair comparison, the same dataset [16] is used to generate the training and test data. We use the luminance component of the images and extract 33 × 33 image patches with stride 14 from the 91-image set [16] as the training set, and extract the same sized patches from 5 images [16] as test images. Both RDC-Net and FDC-Net use the same dataset and are trained at MRs = 0.01, 0.04, 0.10 and 0.25. Training the proposed networks takes about 8 h.
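The patch extraction described above can be sketched as follows; only the 33 × 33 patch size and the stride of 14 come from the text, while the image size and the helper name are illustrative.

```python
import numpy as np

def extract_patches(luma: np.ndarray, size: int = 33, stride: int = 14) -> np.ndarray:
    """Slide a size x size window with the given stride over a luminance
    channel and stack the patches, mirroring the training-set preparation."""
    h, w = luma.shape
    patches = [luma[r:r + size, c:c + size]
               for r in range(0, h - size + 1, stride)
               for c in range(0, w - size + 1, stride)]
    return np.stack(patches)

img = np.random.rand(256, 256)      # stand-in for a luminance image
print(extract_patches(img).shape)   # (256, 33, 33) for a 256x256 input
```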

4.2. Training Strategy

The training procedure of RDC-Net and FDC-Net consists of two steps. In the first step, we train the preliminary reconstruction module with a relatively large learning rate to obtain the preliminary reconstructed image and the parameters $\Omega_c^p$. The maximum number of iterations, the learning rate, the step size, the batch size and the gamma are set to 800,000, 0.001, 200,000, 128 and 0.5, respectively. The second step optimizes the preliminary reconstruction module and the DC-Net module jointly with a gradually declining learning rate, updating the parameters $\Omega_c^p$ and $\Omega_c^d$. Here, the maximum number of iterations, the learning rate, the decay rate, the decay steps and the batch size are set to 200,000, 0.0001, 0.98, 1000 and 64.
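Read as learning-rate schedules, the two stages correspond to the following sketch: stage one uses Caffe's 'step' policy with the stated base rate, step size and gamma, while for stage two we read the 'gradually declining' rate as an exponential decay with the stated decay rate and decay steps. The exact policy form of stage two is our assumption.

```python
def stage1_lr(iteration: int) -> float:
    """Caffe 'step' policy: base_lr * gamma ** (iteration // step_size),
    with base_lr = 0.001, step_size = 200,000 and gamma = 0.5 from the text."""
    return 0.001 * 0.5 ** (iteration // 200_000)

def stage2_lr(iteration: int) -> float:
    """Assumed exponential decay: base_lr * decay_rate ** (iteration / decay_steps),
    with base_lr = 0.0001, decay_rate = 0.98 and decay_steps = 1000 from the text."""
    return 0.0001 * 0.98 ** (iteration / 1_000)

print(stage1_lr(400_000))             # 0.00025 after two step drops
print(round(stage2_lr(100_000), 8))   # ~1.33e-05 after 100,000 iterations
```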

4.3. Comparison with Other Methods

In this part, we compare the two proposed networks with existing methods such as NLR-CS [51], D-AMP [14], TVAL3 [10], ReconNet [16], SDA [15], DR2-Net [17] and ConvCSNet [24], as well as CSRNet and ASRNet [25]. In particular, NLR-CS, TVAL3, D-AMP, ReconNet, DR2-Net, CSRNet and RDC-Net obtain the CS measurements with a traditional random measurement matrix, whereas SDA [15], ConvCSNet [24], ASRNet and FDC-Net obtain CS measurements with learning-based approaches. The results of TVAL3, NLR-CS, D-AMP, ReconNet and DR2-Net are produced with the code released by the respective authors on their websites; the results of SDA are from our own reproduction, and the results of CSRNet and ASRNet are taken from [25]. In the training stage, we use the default parameters to train these networks several times to obtain multiple test models, which we then use to produce the reconstruction results. We choose PSNR and SSIM as the evaluation criteria. The experimental results are summarized in Table 1 and Table 2, where the best results are highlighted in bold.
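For reference, the PSNR criterion reported in Tables 1 and 2 can be computed as in the following sketch (ours); it assumes an 8-bit intensity range, so pass peak=1.0 for images scaled to [0, 1].

```python
import numpy as np

def psnr(reference: np.ndarray, reconstruction: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((reference.astype(np.float64) - reconstruction.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.random.randint(0, 256, (512, 512)).astype(np.float64)
b = np.clip(a + np.random.normal(0, 5, a.shape), 0, 255)  # mildly degraded copy
print(f"{psnr(a, b):.2f} dB")
```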
As shown in Table 1, RDC-Net obtains higher mean PSNR values than the other methods at MRs = 0.10 and 0.25, although on some test images (e.g., Barbara, Fingerprint, Flinstones) other reconstruction methods (NLR-CS or DR2-Net) obtain slightly higher reconstruction quality. We also compare the reconstruction performance of FDC-Net, SDA, ConvCSNet and ASRNet in Table 2. FDC-Net clearly outperforms the other methods at measurement rates 0.01, 0.04, 0.10 and 0.25; at MR = 0.25 in particular, FDC-Net obtains a 2.3 dB improvement over the second highest value. In Figure 3, Figure 4 and Figure 5, we compare the visual reconstruction results of FDC-Net, RDC-Net and DR2-Net. Our reconstructions clearly have better visual quality. For example, Figure 4 shows a fingerprint image: in the enlarged patches at all four MRs, our reconstructions have clearer textures, cleaner areas and sharper edges than DR2-Net, whose reconstructions show blurred textures and confused areas.

4.4. Evaluation on Different Network Architectures

To evaluate the effectiveness of our main model, FDC-Net, we design several alternative network architectures: single-channel networks (one-densblock and two-resblocks) and dual-channel networks (one-resblock + one-densblock, two-resblocks + two-densblocks, three-resblocks + one-densblock). "One-densblock" means that we use only the dense block channel (Figure 1d) to recover the image from its CS measurements, and "two-resblocks" means that we use only the residual block channel (Figure 1c). "One-resblock + one-densblock", "two-resblocks + two-densblocks" and "three-resblocks + one-densblock" mean that we use one residual block and one dense block, two residual blocks and two dense blocks, and three residual blocks and one dense block, respectively, to improve the preliminary reconstructed image quality. The results are summarized in Table 3, where the best results are highlighted in bold.
As shown in Table 3, FDC-Net clearly outperforms the other networks at MRs = 0.04, 0.10 and 0.25. When we use only one channel module (one-densblock or two-resblocks) to recover the original image from its CS measurements, the reconstruction results are already good, but when we combine the two channel modules, FDC-Net obtains clearly superior performance, probably because the residual block channel improves reconstruction quality while the dense block channel expedites calculation. One-resblock + one-densblock, three-resblocks + one-densblock and two-resblocks + two-densblocks also perform well. Although three-resblocks + one-densblock obtains a higher PSNR than FDC-Net at MR = 0.01, it increases the time complexity and yields a lower PSNR than FDC-Net at MRs = 0.04, 0.10 and 0.25. Therefore, we use two residual blocks and one dense block to build the dual-channel reconstruction module.

4.5. Robustness to Noise

To show the robustness of the proposed networks to noise, we perform reconstruction experiments in the presence of measurement noise. Standard Gaussian noise is added to the CS measurements of the test set at five levels, δ = 0.01, 0.05, 0.10, 0.25 and 0.5, where δ is the standard deviation of the Gaussian noise. The two proposed networks, trained on noiseless CS measurements, then take the noisy CS measurements as input and output the reconstructed images. We mainly compare three algorithms: DR2-Net, RDC-Net and FDC-Net. The results are summarized in Figure 6.
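The noise injection can be sketched in a few lines; the measurement length is illustrative and the networks themselves are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.random(272)   # stand-in CS measurement vector (M = 272 at MR = 0.25)

# Noisy measurements for each level: y_noisy = y + n, with n ~ N(0, delta^2).
noisy = {delta: y + rng.normal(0.0, delta, y.shape)
         for delta in (0.01, 0.05, 0.10, 0.25, 0.5)}
print({d: float(np.abs(noisy[d] - y).mean()) for d in noisy})
```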
From Figure 6, the two proposed networks mostly outperform DR2-Net for δ = 0.01, 0.05, 0.10, 0.25 and 0.5 at all four MRs. In particular, the performance of FDC-Net decays more slowly than that of DR2-Net at MRs = 0.01 and 0.04, which indicates that FDC-Net has outstanding robustness at low measurement rates.

4.6. Evaluation on ImageNet Val Dataset

To test the scalability of the proposed networks, we also compare them with DR2-Net on the large-scale ImageNet val dataset [52], which includes 50,000 images from 1000 classes. The experimental results are shown in Table 4, where the best results are highlighted in bold.
As shown in Table 4, the two proposed networks outperform DR2-Net at all four MRs. At MR = 0.25 in particular, RDC-Net and FDC-Net achieve nearly 3 dB and 5 dB improvement over DR2-Net, respectively, which indicates that our proposed networks have better generalization ability than DR2-Net.

4.7. Time Complexity and Network Convergence

We also compare the time complexity of the two proposed networks with that of DR2-Net. The results are shown in Table 5, where the best results are highlighted in bold.
From Table 5, we observe that the two proposed networks have slightly lower runtime than DR2-Net, with FDC-Net achieving the best results, which is helpful for real-time CS applications.
To further demonstrate that our proposed networks converge better than DR2-Net, we perform a convergence experiment between FDC-Net and DR2-Net at MR = 0.04. Figure 7 shows that the training and test errors of FDC-Net are smaller than those of DR2-Net, which demonstrates that our network converges more easily than DR2-Net.

5. Conclusions

Inspired by the fact that deep learning-based methods can improve reconstruction performance and greatly reduce computation compared with traditional iterative reconstruction algorithms, we proposed a novel dual-channel reconstruction network module (DC-Net module) and used it to build two CS reconstruction networks: the first recovers an image from its traditional random under-sampling measurements (RDC-Net), and the second recovers an image from CS measurements acquired by a fully connected measurement matrix (FDC-Net). The DC-Net module consists of one dense block and two residual blocks. We use a fully connected layer to obtain a preliminary reconstructed image, and the DC-Net module further improves its quality. Extensive experiments show that our networks outperform state-of-the-art CS algorithms in both PSNR and visual quality. Moreover, our networks also exhibit outstanding robustness and lower time complexity.

Author Contributions

All authors contributed equally to this work.

Funding

This work is supported by Natural Science Foundation (NSF) of China (Nos. 61632019, 61836008, 61871304, 61875157, 61572387, 61672404), the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (No. 61621005), and the Fundamental Research Funds for the Central Universities (No. JB191908, JC1904).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  2. Huang, G.; Jiang, H.; Matthews, K.; Wilford, P. Lensless imaging by compressive sensing. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, Australia, 15–18 September 2013; pp. 2101–2105. [Google Scholar] [CrossRef]
  3. Gehm, M.E.; John, R.; Brady, D.J.; Willett, R.M.; Schulz, T.J. Single-shot compressive spectral imaging with a dual-disperser architecture. Opt. Express 2007, 15, 14013–14027. [Google Scholar] [CrossRef] [PubMed]
  4. Rajwade, A.; Kittle, D.; Tsai, T.H.; Brady, D.; Carin, L. Coded Hyperspectral Imaging and Blind Compressive Sensing. SIAM J. Imaging Sci. 2013, 6, 782–812. [Google Scholar] [CrossRef] [Green Version]
  5. Hitomi, Y.; Gu, J.; Gupta, M.; Mitsunaga, T.; Nayar, S.K. Video from a single coded exposure photograph using a learned over-complete dictionary. In Proceedings of the International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 287–294. [Google Scholar]
  6. Lustig, M.; Donoho, D.; Santos, J.; Pauly, J. Compressed Sensing MRI. IEEE Signal Process. Mag. 2008, 25, 72–82. [Google Scholar] [CrossRef]
  7. Mallat, S.G.; Zhang, Z. Matching pursuits with time-frequency dictionaries. IEEE Trans. Signal Process. 1993, 41, 3397–3415. [Google Scholar] [CrossRef] [Green Version]
  8. Tropp, J.A.; Gilbert, A.C. Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666. [Google Scholar] [CrossRef] [Green Version]
  9. Tropp, J.A. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 2004, 50, 2231–2242. [Google Scholar] [CrossRef]
  10. Li, C.; Yin, W.; Jiang, H.; Zhang, Y. An efficient augmented Lagrangian method with applications to total variation minimization. Comput. Optim. Appl. 2013, 56, 507–530. [Google Scholar] [CrossRef] [Green Version]
  11. Blumensath, T.; Davies, M.E. Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 2009, 27, 265–274. [Google Scholar] [CrossRef] [Green Version]
  12. Daubechies, I.; Defrise, M.; De Mol, C. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2003, 57, 1413–1457. [Google Scholar] [CrossRef]
  13. Donoho, D.L.; Maleki, A.; Montanari, A. Message-passing algorithms for compressed sensing. Proc. Natl. Acad. Sci. USA 2009, 106, 18914–18919. [Google Scholar] [CrossRef] [Green Version]
  14. Metzler, C.A.; Maleki, A.; Baraniuk, R.G. From Denoising to Compressed Sensing. IEEE Trans. Inf. Theory 2016, 62, 5117–5144. [Google Scholar] [CrossRef]
  15. Mousavi, A.; Patel, A.B.; Baraniuk, R.G. A deep learning approach to structured signal recovery. In Proceedings of the 2015 53rd Annual Allerton Conference on Communication, Control, and Computing (Allerton), Monticello, IL, USA, 29 September–2 October 2015; pp. 1336–1343. [Google Scholar] [CrossRef]
  16. Kulkarni, K.; Lohit, S.; Turaga, P.K.; Kerviche, R.; Ashok, A. ReconNet: Non-Iterative Reconstruction of Images from Compressively Sensed Random Measurements. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016. [Google Scholar]
  17. Yao, H.; Dai, F.; Zhang, D.; Ma, Y.; Zhang, S.; Zhang, Y. DR2-Net: Deep Residual Reconstruction Network for Image Compressive Sensing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  18. Mousavi, A.; Baraniuk, R.G. Learning to Invert: Signal Recovery via Deep Convolutional Networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 2272–2276. [Google Scholar]
  19. Metzler, C.; Mousavi, A.; Baraniuk, R. Learned D-AMP: Principled Neural Network based Compressive Image Recovery. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; pp. 1772–1783. [Google Scholar]
  20. Unde, A.S.; Deepthi, P. Block compressive sensing: Individual and joint reconstruction of correlated images. J. Vis. Commun. Image Represent. 2017, 44, 187–197. [Google Scholar] [CrossRef]
  21. Mun, S.; Fowler, J.E. Block Compressed Sensing of Images Using Directional Transforms. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3021–3024. [Google Scholar]
  22. Huang, G.; Liu, Z.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016. [Google Scholar]
  23. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778. [Google Scholar]
  24. Lu, X.; Dong, W.; Wang, P.; Shi, G.; Xie, X. ConvCSNet: A Convolutional Compressive Sensing Framework Based on Deep Learning. arXiv 2018, arXiv:1801.10342. [Google Scholar]
  25. Wang, Y.; Bai, H.; Zhao, L.; Zhao, Y. Cascaded reconstruction network for compressive image sensing. EURASIP J. Image Video Process. 2018, 2018, 77. [Google Scholar] [CrossRef]
  26. Khoramian, S. An iterative thresholding algorithm for linear inverse problems with multi-constraints and its applications. Appl. Comput. Harmon. Anal. 2012, 32, 109–130. [Google Scholar] [CrossRef] [Green Version]
  27. Jin, T.; Ma, Y.; Baron, D. Compressive Imaging via Approximate Message Passing with Image Denoising. IEEE Trans. Signal Process. 2015, 63, 2085–2092. [Google Scholar]
  28. Tan, J.; Ma, Y.; Rueda, H.; Baron, D.; Arce, G.R. Compressive Hyperspectral Imaging via Approximate Message Passing. IEEE J. Sel. Top. Signal Process. 2016, 10, 389–401. [Google Scholar] [CrossRef]
  29. Schniter, P.; Rangan, S.; Fletcher, A. Denoising based Vector Approximate Message Passing. arXiv 2016, arXiv:1611.01376. [Google Scholar]
  30. Tipping, M.E.; Faul, A. Fast Marginal Likelihood Maximisation for Sparse Bayesian Models. In Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, Key West, FL, USA, 3–6 January 2003; pp. 3–6. [Google Scholar]
  31. Wu, J.; Liu, F.; Jiao, L. Fast lp-sparse Bayesian learning for compressive sensing reconstruction. In Proceedings of the 2011 4th International Congress on Image and Signal Processing, Shanghai, China, 15–17 October 2011; Volume 4, pp. 1894–1898. [Google Scholar] [CrossRef]
  32. Meng, X.; Wu, S.; Zhu, J. A Unified Bayesian Inference Framework for Generalized Linear Models. IEEE Signal Process. Lett. 2018, 25, 398–402. [Google Scholar] [CrossRef]
  33. Zhu, J.; Han, L.; Meng, X. An AMP-Based Low Complexity Generalized Sparse Bayesian Learning Algorithm. IEEE Access 2019, 7, 7965–7976. [Google Scholar] [CrossRef]
  34. Fang, J.; Zhang, L.; Li, H. Two-Dimensional Pattern-Coupled Sparse Bayesian Learning via Generalized Approximate Message Passing. IEEE Trans. Image Process. 2016, 25, 2920–2930. [Google Scholar] [CrossRef]
  35. Shekaramiz, M.; Moon, T.K.; Gunther, J.H. Bayesian Compressive Sensing of Sparse Signals with Unknown Clustering Patterns. Entropy 2019, 21, 247. [Google Scholar] [CrossRef]
  36. Kang, B.; Zhu, W.; Yan, J. Fusion framework for multi-focus images based on compressed sensing. IET Image Process. 2013, 7, 290–299. [Google Scholar] [CrossRef]
  37. Li, K.; Gan, L.; Ling, C. Convolutional Compressed Sensing Using Deterministic Sequences. IEEE Trans. Signal Process. 2013, 61, 740–752. [Google Scholar] [CrossRef]
  38. Yu, N.Y.; Gan, L. Convolutional Compressed Sensing Using Decimated Sidelnikov Sequences. IEEE Signal Process. Lett. 2014, 21, 591–594. [Google Scholar] [CrossRef]
  39. Dong, W.; Shi, G.; Wu, X.; Zhang, L. A learning-based method for compressive image recovery. J. Vis. Commun. Image Represent. 2013, 24, 1055–1063. [Google Scholar] [CrossRef]
  40. Yang, G.; Yu, S.; Dong, H.; Slabaugh, G.; Firmin, D. DAGAN: Deep De-Aliasing Generative Adversarial Networks for Fast Compressed Sensing MRI Reconstruction. IEEE Trans. Med. Imaging 2018, 37, 1310–1321. [Google Scholar] [CrossRef]
  41. Yu, S.; Dong, H.; Yang, G.; Slabaugh, G.G.; Dragotti, P.L.; Ye, X.; Liu, F.; Arridge, S.R.; Keegan, J.; Firmin, D.N.; et al. Deep De-Aliasing for Fast Compressive Sensing MRI. arXiv 2017, arXiv:1705.07137. [Google Scholar]
  42. Seitzer, M.; Yang, G.; Schlemper, J.; Oktay, O.; Würfl, T.; Christlein, V.; Wong, T.; Mohiaddin, R.; Firmin, D.; Keegan, J.; et al. Adversarial and Perceptual Refinement for Compressed Sensing MRI Reconstruction. In Proceedings of the International Conference on Medical Image Computing and Computer Assisted Intervention—MICCAI, Granada, Spain, 16–20 September 2018. [Google Scholar]
  43. Schlemper, J.; Yang, G.; Ferreira, P.; Scott, A.; McGill, L.A.; Khalique, Z.; Gorodezky, M.; Roehl, M.; Keegan, J.; Pennell, D.; et al. Stochastic Deep Compressive Sensing for the Reconstruction of Diffusion Tensor Cardiac MRI. In Proceedings of the Medical Image Computing and Computer Assisted Intervention—MICCAI, Granada, Spain, 16–20 September 2018. [Google Scholar]
  44. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef]
  45. Wang, B.; Ma, S.X. Improvement of Gaussian Random Measurement Matrices in Compressed Sensing. Adv. Mater. Res. 2011, 301–303, 245–250. [Google Scholar] [CrossRef]
  46. Adcock, B.; Hansen, A.C.; Roman, B. A Note on Compressed Sensing of Structured Sparse Wavelet Coefficients From Subsampled Fourier Measurements. IEEE Signal Process. Lett. 2016, 23, 732–736. [Google Scholar] [CrossRef] [Green Version]
  47. Huang, T.; Fan, Y.Z.; Hu, M. Compressed sensing based on random symmetric Bernoulli matrix. In Proceedings of the 2017 32nd Youth Academic Annual Conference of Chinese Association of Automation (YAC), Hefei, China, 19–21 May 2017; pp. 191–196. [Google Scholar]
  48. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning Representations by Back-Propagating Errors; MIT Press: Cambridge, MA, USA, 1988; pp. 533–536. [Google Scholar]
  49. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef]
  50. Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe: Convolutional Architecture for Fast Feature Embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, Orlando, FL, USA, 3–7 November 2014; ACM: New York, NY, USA, 2014; pp. 675–678. [Google Scholar] [CrossRef]
  51. Dong, W.; Shi, G.; Li, X.; Ma, Y.; Huang, F. Compressive Sensing via Nonlocal Low-Rank Regularization. IEEE Trans. Image Process. 2014, 23, 3618–3632. [Google Scholar] [CrossRef]
  52. Deng, J.; Dong, W.; Socher, R.; Li, L.J.; Li, K.; Li, F.-F. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar] [CrossRef]
Figure 1. (a) The architecture of dual-channel reconstruction network with random measurement matrix (RDC-Net). (b) The architecture of dual-channel reconstruction network with fully connected measurement matrix (FDC-Net). (c) The structure of residual block. (d) The structure of dense block.
Figure 2. (a) A random Gaussian matrix is used as the measurement matrix Φ. (b) A fully connected matrix is used as the measurement matrix, with parameters learned from the training set.
Figure 3. Barbara image reconstruction results from different networks. The two proposed networks obtain excellent reconstruction performance, and FDC-Net has better visual effects than RDC-Net and DR2-Net.
Figure 4. Fingerprint image reconstruction results from different networks. The two proposed networks obtain excellent reconstruction performance, and FDC-Net has better visual effects than RDC-Net and DR2-Net.
Figure 5. Monarch image reconstruction results from different networks. The two proposed networks obtain excellent reconstruction performance, and FDC-Net has better visual effects than RDC-Net and DR2-Net.
Figure 6. Robustness comparison of the different networks under different levels of Gaussian noise.
Figure 7. Convergence comparison between FDC-Net and DR2-Net. FDC-Net has smaller training and test errors than DR2-Net.
Table 1. Reconstruction results for test images through different algorithms at different measurement rates. "Mean" is the mean value among all test images. Values are PSNR in dB (without BM3D / with BM3D); the best result in each column is in bold.

| Image | Method | MR = 0.01 | MR = 0.04 | MR = 0.10 | MR = 0.25 |
|---|---|---|---|---|---|
| Barbara | TVAL3 | 11.94 / 11.96 | 18.97 / 18.99 | 21.85 / 22.23 | 24.21 / 24.26 |
| | NLR-CS | 5.50 / 5.86 | 11.08 / 11.56 | 14.80 / 14.84 | **28.01** / **28.00** |
| | D-AMP | 5.48 / 5.51 | 16.37 / 16.37 | 21.23 / 21.24 | 25.08 / 25.96 |
| | ReconNet | 18.61 / 19.07 | 20.38 / 21.20 | 21.90 / 22.51 | 23.20 / 23.55 |
| | DR2-Net | 18.65 / 19.10 | 20.69 / 21.31 | 22.69 / 22.84 | 25.77 / 25.99 |
| | CSRNet | 19.10 / 19.21 | **21.27** / **21.49** | 22.94 / 22.95 | 26.17 / 26.34 |
| | RDC-Net | **19.13** / **19.22** | 21.07 / 21.18 | **23.17** / **23.21** | 25.80 / 25.91 |
| Fingerprint | TVAL3 | 10.35 / 10.37 | 16.03 / 16.07 | 18.68 / 18.71 | 22.71 / 22.68 |
| | NLR-CS | 4.85 / 5.19 | 9.67 / 10.10 | 12.80 / 12.84 | 23.51 / 23.52 |
| | D-AMP | 4.66 / 4.74 | 13.83 / 14.00 | 17.13 / 17.14 | 25.18 / 24.15 |
| | ReconNet | 14.82 / 14.88 | 16.91 / 16.96 | 20.75 / 20.96 | 25.57 / 25.14 |
| | DR2-Net | 14.73 / 14.95 | 17.38 / 17.47 | **22.02** / **22.44** | **27.65** / 27.76 |
| | CSRNet | **15.11** / **15.18** | **17.59** / **17.68** | 21.64 / 21.91 | 27.22 / 27.49 |
| | RDC-Net | 15.03 / 15.05 | 17.43 / 17.45 | 21.87 / 21.89 | 27.38 / **27.91** |
| Flinstones | TVAL3 | 9.75 / 9.78 | 14.87 / 14.91 | 18.89 / 18.93 | 24.06 / 24.08 |
| | NLR-CS | 4.45 / 4.76 | 8.98 / 9.26 | 12.15 / 12.24 | 22.41 / 22.66 |
| | D-AMP | 4.33 / 4.35 | 12.94 / 13.07 | 16.94 / 16.86 | 25.02 / 24.46 |
| | ReconNet | 13.96 / 14.07 | 16.31 / 16.56 | 18.92 / 19.20 | 22.46 / 22.60 |
| | DR2-Net | 14.00 / 14.18 | 16.94 / 17.06 | **21.08** / 21.45 | **26.19** / **26.79** |
| | CSRNet | **14.32** / 14.39 | **17.29** / **17.41** | 20.52 / 20.82 | 25.46 / 25.47 |
| | RDC-Net | 14.29 / **14.51** | 17.15 / 17.38 | 21.00 / **21.88** | 25.94 / 26.08 |
| Lena | TVAL3 | 11.87 / 11.91 | 19.47 / 19.53 | 24.17 / 24.21 | 28.68 / 28.72 |
| | NLR-CS | 5.96 / 6.26 | 11.62 / 11.98 | 15.31 / 15.34 | 29.39 / 29.67 |
| | D-AMP | 5.73 / 5.96 | 16.53 / 16.87 | 22.53 / 22.54 | 28.00 / 27.46 |
| | ReconNet | 17.87 / 18.07 | 21.28 / 21.83 | 23.83 / 24.51 | 26.52 / 26.55 |
| | DR2-Net | 17.97 / 18.43 | 22.13 / 22.73 | 25.38 / 25.77 | 29.42 / 29.64 |
| | CSRNet | **19.11** / **19.20** | 22.89 / 23.17 | 25.72 / 25.97 | 29.55 / 29.70 |
| | RDC-Net | 18.69 / 18.96 | **23.17** / **23.37** | **26.19** / **26.57** | **29.78** / **29.97** |
| Monarch | TVAL3 | 11.09 / 11.12 | 16.74 / 16.75 | 21.16 / 21.16 | 27.75 / 27.77 |
| | NLR-CS | 6.38 / 6.76 | 11.62 / 11.98 | 14.60 / 14.67 | 25.91 / 26.10 |
| | D-AMP | 6.21 / 6.21 | 14.57 / 14.57 | 19.00 / 19.00 | 26.39 / 26.56 |
| | ReconNet | 15.39 / 15.47 | 18.18 / 18.33 | 21.10 / 22.51 | 24.32 / 25.05 |
| | DR2-Net | 15.33 / 15.50 | 18.93 / 19.23 | 23.10 / 23.54 | 27.94 / 28.30 |
| | CSRNet | 15.42 / 15.46 | **19.41** / **19.60** | 22.99 / 23.25 | 27.98 / 28.37 |
| | RDC-Net | **15.73** / **15.97** | 19.03 / 19.31 | **23.47** / **23.61** | **28.10** / **29.52** |
| Parrot | TVAL3 | 11.44 / 11.46 | 18.88 / 18.91 | 23.13 / 23.15 | 27.18 / 27.24 |
| | NLR-CS | 5.12 / 5.44 | 10.60 / 10.92 | 14.14 / 14.18 | 26.53 / 26.72 |
| | D-AMP | 5.08 / 5.08 | 15.78 / 15.78 | 21.63 / 21.63 | 26.88 / 26.99 |
| | ReconNet | 17.61 / 18.31 | 20.27 / 21.06 | 22.63 / 23.25 | 25.59 / 26.22 |
| | DR2-Net | 18.01 / 18.41 | 21.16 / 21.86 | 23.95 / 24.32 | 28.72 / **29.10** |
| | CSRNet | **19.50** / 19.61 | **22.16** / **22.31** | **24.79** / **25.01** | 28.86 / 29.05 |
| | RDC-Net | 19.27 / **19.71** | 21.86 / 21.98 | 24.45 / 24.98 | **28.94** / 29.01 |
| Boats | TVAL3 | 11.86 / 11.87 | 19.21 / 19.21 | 23.85 / 23.86 | 28.81 / 28.81 |
| | NLR-CS | 5.38 / 5.73 | 10.77 / 11.22 | 14.83 / 14.86 | 29.11 / 29.25 |
| | D-AMP | 5.34 / 5.35 | 16.01 / 16.01 | 21.95 / 21.95 | 29.26 / 29.26 |
| | ReconNet | 18.49 / 18.87 | 21.38 / 21.62 | 24.15 / 24.21 | 27.30 / 27.35 |
| | DR2-Net | 18.67 / 18.96 | 22.11 / 22.50 | 25.58 / 25.91 | 30.09 / 30.30 |
| | CSRNet | 18.99 / 19.09 | **22.38** / **22.55** | 25.65 / 25.80 | 30.14 / **30.36** |
| | RDC-Net | **19.16** / **19.32** | 22.20 / 22.35 | **25.91** / **26.08** | **30.18** / 30.35 |
| Cameraman | TVAL3 | 11.97 / 11.98 | 18.30 / 18.33 | 21.91 / 21.92 | 25.69 / 25.70 |
| | NLR-CS | 5.98 / 6.36 | 11.04 / 11.46 | 14.18 / 14.22 | 24.88 / 24.97 |
| | D-AMP | 5.64 / 5.65 | 15.12 / 15.12 | 20.35 / 20.35 | 24.42 / 24.56 |
| | ReconNet | 17.11 / 17.49 | 19.28 / 19.73 | 21.29 / 21.67 | 23.16 / 23.61 |
| | DR2-Net | 17.08 / 17.34 | 19.84 / 20.31 | 22.46 / 22.76 | 25.61 / 25.91 |
| | CSRNet | 17.75 / 17.90 | 20.23 / 20.38 | 22.29 / 22.53 | 25.85 / 26.15 |
| | RDC-Net | **17.95** / **18.21** | **20.38** / **20.61** | **22.93** / **23.11** | **25.96** / **26.17** |
| Foreman | TVAL3 | 10.98 / 11.02 | 20.64 / 20.65 | 28.69 / 28.74 | 35.41 / 35.55 |
| | NLR-CS | 3.92 / 4.26 | 9.08 / 9.46 | 13.53 / 13.54 | **35.73** / **35.91** |
| | D-AMP | 3.84 / 3.84 | 16.27 / 16.31 | 25.50 / 25.53 | 35.45 / 35.06 |
| | ReconNet | 20.04 / 20.33 | 23.72 / 24.61 | 27.10 / 28.58 | 29.47 / 30.79 |
| | DR2-Net | 20.59 / 21.08 | 25.34 / 26.32 | 29.20 / 30.18 | 33.53 / 34.28 |
| | CSRNet | **23.12** / **23.32** | **27.78** / **28.18** | 30.96 / 31.35 | 34.89 / 35.10 |
| | RDC-Net | 22.98 / 23.07 | 27.27 / 27.29 | **31.29** / **31.61** | 35.11 / 35.31 |
| House | TVAL3 | 11.86 / 11.90 | 20.94 / 20.96 | 26.29 / 26.33 | 32.09 / 32.14 |
| | NLR-CS | 4.96 / 5.26 | 10.66 / 11.08 | 14.77 / 14.80 | **34.20** / **34.21** |
| | D-AMP | 5.00 / 5.01 | 16.91 / 16.37 | 24.83 / 24.73 | 33.64 / 32.96 |
| | ReconNet | 19.31 / 19.52 | 22.57 / 23.20 | 26.69 / 26.70 | 28.47 / 29.20 |
| | DR2-Net | 19.61 / 19.99 | 23.91 / 24.70 | 27.52 / 28.42 | 31.82 / 32.52 |
| | CSRNet | 20.67 / 20.79 | 24.55 / 24.85 | 28.24 / 28.68 | 32.46 / 33.05 |
| | RDC-Net | **20.68** / **20.87** | **24.89** / **24.92** | **28.57** / **28.81** | 32.87 / 33.07 |
| Peppers | TVAL3 | 11.35 / 11.37 | 18.21 / 18.23 | 22.64 / 22.65 | 29.61 / 29.65 |
| | NLR-CS | 5.76 / 6.11 | 11.38 / 11.81 | 14.94 / 14.99 | 28.89 / 29.24 |
| | D-AMP | 5.79 / 5.84 | 16.17 / 16.46 | 21.33 / 21.38 | **29.88** / 28.96 |
| | ReconNet | 16.83 / 16.98 | 19.57 / 20.00 | 22.15 / 22.68 | 24.77 / 25.15 |
| | DR2-Net | 16.90 / 17.11 | 20.32 / 20.75 | 23.72 / 24.26 | 28.48 / 29.11 |
| | CSRNet | 17.61 / 17.67 | **21.18** / **21.51** | 24.35 / **24.65** | 28.58 / 29.19 |
| | RDC-Net | **17.69** / **17.71** | 21.03 / 21.21 | **24.39** / 24.64 | 29.27 / **29.97** |
| Mean | TVAL3 | 11.31 / 11.34 | 18.39 / 18.41 | 22.84 / 22.90 | 27.84 / 27.87 |
| | NLR-CS | 5.30 / 5.64 | 10.59 / 10.98 | 14.19 / 14.23 | 28.05 / 28.20 |
| | D-AMP | 5.19 / 5.23 | 15.50 / 15.54 | 21.13 / 21.12 | 28.11 / 27.85 |
| | ReconNet | 17.28 / 17.55 | 19.99 / 20.46 | 22.77 / 23.34 | 25.53 / 25.93 |
| | DR2-Net | 17.41 / 17.73 | 20.80 / 21.29 | 24.25 / 24.72 | 28.66 / 29.06 |
| | CSRNet | **18.25** / 18.35 | **21.52** / **21.74** | 24.55 / 24.81 | 28.83 / 29.11 |
| | RDC-Net | 18.24 / **18.42** | 21.41 / 21.55 | **24.84** / **25.13** | **29.03** / **29.39** |
Table 2. Reconstruction results for test images through different algorithms at different measurement rates. "Mean" is the mean value among all test images. Values are PSNR in dB (without BM3D / with BM3D); the best result in each column is in bold.

| Image | Method | MR = 0.01 | MR = 0.04 | MR = 0.10 | MR = 0.25 |
|---|---|---|---|---|---|
| Barbara | SDA | 18.59 / 18.76 | 20.49 / 20.86 | 22.17 / 22.39 | 23.19 / 23.21 |
| | ConvCSNet | 18.14 / 18.35 | 20.85 / 21.00 | 22.95 / 23.01 | 25.85 / 25.98 |
| | ASRNet | 21.40 / 21.52 | 23.48 / 23.54 | **24.34** / 24.35 | 26.30 / 26.43 |
| | FDC-Net | **21.65** / **21.71** | **23.56** / **23.71** | 24.25 / **24.52** | **27.91** / **28.00** |
| Fingerprint | SDA | 14.81 / 14.82 | 16.85 / 16.87 | 20.29 / 20.32 | 24.29 / 24.21 |
| | ConvCSNet | 14.54 / 14.82 | 18.44 / 18.71 | 19.76 / 20.11 | 28.00 / 28.11 |
| | ASRNet | 16.20 / 16.21 | 20.98 / **21.45** | **26.25** / **26.83** | 28.82 / 29.23 |
| | FDC-Net | **16.47** / **16.52** | **21.07** / 21.11 | 25.93 / 26.11 | **30.91** / **31.08** |
| Flinstones | SDA | 13.91 / 13.96 | 16.21 / 16.10 | 18.40 / 18.21 | 20.88 / 20.21 |
| | ConvCSNet | 15.04 / 15.32 | 17.22 / 17.58 | 19.49 / 19.82 | 26.42 / 26.53 |
| | ASRNet | 16.30 / 16.39 | 19.78 / 20.08 | 24.01 / **24.56** | 26.93 / 27.40 |
| | FDC-Net | **16.43** / **16.49** | **20.29** / **20.54** | **24.42** / 24.55 | **28.81** / **28.91** |
| Lena | SDA | 17.84 / 17.95 | 21.17 / 21.56 | 23.81 / 24.16 | 25.87 / 25.70 |
| | ConvCSNet | 17.97 / 18.16 | 21.78 / 22.08 | 25.27 / 25.61 | 27.11 / 27.32 |
| | ASRNet | **21.74** / **21.93** | 25.74 / 25.93 | 28.54 / 28.78 | 30.65 / 30.89 |
| | FDC-Net | 21.67 / 21.71 | **26.25** / **26.38** | **28.85** / **28.93** | **32.69** / **32.91** |
| Monarch | SDA | 15.31 / 15.38 | 18.11 / 18.19 | 20.95 / 21.04 | 23.54 / 23.32 |
| | ConvCSNet | 16.31 / 16.81 | 18.92 / 19.18 | 21.76 / 22.01 | 26.59 / 26.71 |
| | ASRNet | 17.74 / 17.85 | 23.23 / 23.49 | 27.17 / 27.50 | 29.29 / 29.60 |
| | FDC-Net | **18.03** / **18.46** | **23.53** / **23.52** | **27.51** / **27.83** | **31.79** / **31.97** |
| Parrot | SDA | 17.71 / 17.89 | 20.37 / 20.67 | 22.14 / 22.35 | 24.48 / 24.37 |
| | ConvCSNet | 17.86 / 18.15 | 20.55 / 21.18 | 24.41 / 24.85 | 26.26 / 26.38 |
| | ASRNet | 21.87 / 22.01 | **24.52** / 24.67 | **27.68** / **27.85** | 29.61 / 29.80 |
| | FDC-Net | **22.09** / **22.25** | 24.50 / **24.74** | **27.68** / 27.84 | **31.76** / **31.94** |
| Boats | SDA | 18.55 / 18.68 | 21.29 / 21.54 | 24.01 / 24.18 | 26.56 / 26.24 |
| | ConvCSNet | 18.11 / 18.39 | 21.81 / 22.08 | 24.82 / 25.31 | 27.86 / 27.98 |
| | ASRNet | **21.53** / **21.69** | 25.52 / 25.72 | 28.86 / 29.17 | 31.28 / 31.64 |
| | FDC-Net | 21.39 / 21.50 | **25.77** / **26.00** | **29.08** / **29.18** | **33.95** / **33.97** |
| Cameraman | SDA | 17.06 / 17.19 | 19.31 / 19.56 | 21.15 / 21.30 | 22.77 / 22.64 |
| | ConvCSNet | 17.61 / 17.92 | 19.40 / 20.01 | 22.31 / 22.69 | 25.15 / 25.26 |
| | ASRNet | 19.77 / 19.89 | 22.74 / 22.88 | 25.00 / 25.13 | 26.46 / 26.66 |
| | FDC-Net | **20.13** / **20.21** | **22.94** / **23.08** | **25.28** / **25.31** | **28.97** / **29.04** |
| Foreman | SDA | 20.08 / 20.24 | 23.62 / 24.09 | 26.43 / 27.16 | 28.40 / 28.91 |
| | ConvCSNet | 19.09 / 19.54 | 22.46 / 22.81 | 25.97 / 26.11 | 30.39 / 30.81 |
| | ASRNet | 25.77 / **26.14** | 30.56 / 30.78 | 33.79 / 34.09 | 35.85 / 36.19 |
| | FDC-Net | **25.87** / 25.92 | **31.08** / **31.18** | **34.51** / **34.71** | **38.25** / **38.39** |
| House | SDA | 19.45 / 19.59 | 22.51 / 22.94 | 25.41 / 26.07 | 27.65 / 27.86 |
| | ConvCSNet | 18.40 / 18.82 | 22.22 / 22.71 | 26.46 / 26.51 | 26.76 / 26.98 |
| | ASRNet | **23.13** / **23.31** | 27.82 / **28.21** | 31.47 / **31.87** | 33.44 / 33.84 |
| | FDC-Net | 23.08 / 23.14 | **28.03** / **28.21** | **31.67** / 31.79 | **35.75** / **35.98** |
| Peppers | SDA | 16.93 / 17.04 | 19.63 / 19.89 | 22.10 / 22.35 | 24.31 / 24.15 |
| | ConvCSNet | 17.69 / 18.01 | 20.76 / 21.08 | 23.12 / 23.66 | 26.26 / 26.51 |
| | ASRNet | 20.17 / 20.33 | 24.03 / 24.32 | 27.03 / **27.37** | 29.72 / 30.18 |
| | FDC-Net | **20.21** / **20.56** | **24.41** / **24.72** | **27.10** / 27.21 | **32.81** / **32.92** |
| Mean | SDA | 17.29 / 17.41 | 19.96 / 20.21 | 22.43 / 22.68 | 24.72 / 24.55 |
| | ConvCSNet | 17.34 / 17.66 | 20.40 / 20.77 | 23.30 / 23.61 | 26.97 / 27.14 |
| | ASRNet | 20.51 / 20.66 | 24.40 / 24.65 | 27.65 / 27.96 | 29.85 / 30.17 |
| | FDC-Net | **20.64** / **20.77** | **24.68** / **24.83** | **27.84** / **28.00** | **32.15** / **32.28** |
Table 3. Performance comparison among different network architectures. Values are PSNR in dB (without BM3D / with BM3D); the best result in each column is in bold.

| Model | MR = 0.01 | MR = 0.04 | MR = 0.10 | MR = 0.25 |
|---|---|---|---|---|
| One-densblock | 20.41 / 20.52 | 24.17 / 24.82 | 27.29 / 27.55 | 30.29 / 30.85 |
| Two-resblocks | 20.37 / 20.50 | 24.41 / 24.72 | 27.52 / 27.83 | 30.57 / 30.99 |
| One-resblock + one-densblock | 20.43 / 20.48 | 24.43 / 24.75 | 27.50 / 27.78 | 30.47 / 30.98 |
| Two-resblocks + two-densblocks | 20.60 / 20.78 | 24.57 / 24.80 | 27.54 / 27.76 | 31.64 / 32.06 |
| Three-resblocks + one-densblock | **20.66** / **20.83** | 24.60 / 24.89 | 27.49 / 27.68 | 31.55 / 31.81 |
| FDC-Net | 20.64 / 20.82 | **24.68** / **24.97** | **27.84** / **28.27** | **32.15** / **32.42** |
Table 4. The PSNR value (dB) of different networks on the ImageNet val dataset; the best result in each column is in bold.

| Model | MR = 0.01 | MR = 0.04 | MR = 0.10 | MR = 0.25 |
|---|---|---|---|---|
| DR2-Net | 23.27 | 25.90 | 27.78 | 29.10 |
| RDC-Net | 23.87 | 26.92 | 29.76 | 32.07 |
| FDC-Net | **25.76** | **29.11** | **31.32** | **34.05** |
Table 5. Time (s) for reconstructing a single 512 × 512 image; the best result in each column is in bold.

| Model | MR = 0.01 | MR = 0.04 | MR = 0.10 | MR = 0.25 |
|---|---|---|---|---|
| DR2-Net | 0.0686 | 0.0676 | 0.0680 | 0.0678 |
| RDC-Net | 0.0590 | 0.0591 | 0.0595 | 0.0591 |
| FDC-Net | **0.0570** | **0.0566** | **0.0579** | **0.0571** |
