
Feasibility of Laser Communication Beacon Light Compressed Sensing

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(24), 7257; https://doi.org/10.3390/s20247257
Submission received: 24 September 2020 / Revised: 11 December 2020 / Accepted: 12 December 2020 / Published: 18 December 2020

Abstract

The Compressed Sensing (CS) camera can compress images in real time without consuming computing resources. Applying CS theory in a Laser Communication (LC) system can minimize the required transmission bandwidth (normally from a satellite to a ground station) and the storage costs of beacon light-spot images; this can save more than ten times the typical bandwidth or storage space. However, the CS compressive process affects light-spot tracking and key parameters in the images. In this study, we quantitatively explored the feasibility of the CS technique for capturing light-spots in LC systems. We redesigned the measurement matrix to adapt to the requirements of light-tracking. We established a succinctly structured deep network, the Compressed Sensing Denoising Center Net (CSD-Center Net), for denoising tracking computation from compressed image information. A series of simulations was made to test the performance of information preservation in beacon light-spot image storage. With consideration of the CS ratio and application scenarios, CS coupled with CSD-Center Net and the standard centroid can achieve the tracking function well. The information preserved in the compressed data correlates with the CS ratio; a higher CS ratio preserves more details. In fact, when the data rate is above 10%, the accuracy meets the requirements of most application scenarios.

1. Introduction

The use of laser beams to carry information through free space is a popular technique due to its directivity, low power dissipation, high bandwidth and high data rate [1,2,3,4,5]. The effects of atmospheric turbulence are highly complex [6,7,8,9]; optical signals and real-time beacon light-spot images reflect such effects in communication applications. Transmitting and storing light-spot images is challenging because of the high resolution and high frame frequency of the image sensor, which generates several gigabytes of data per second. It is difficult to accomplish real-time compression over such a magnitude of data while preserving computing resources. Likewise, it is extremely challenging to transmit (especially from satellite to ground station) and store light-spot images due to the high data volume. To the best of our knowledge, however, there has been no previous research focused specifically on these issues. Typically, in Laser Communication (LC) experiments, either all the spot image information in the satellite is discarded, or vast quantities of storage space are occupied to save the real-time images.
Compressed Sensing (CS) is a technique to acquire compressed images in real-time without consuming computing resources. This approach totally differs from traditional image compression. CS receives a compressed image directly from a special optical system and reconstructs the compressed information as required. Mathematically, the compressive projecting process can be abstracted as follows [10,11,12,13]:
$$y = Ax$$
where $x \in \mathbb{R}^n$ is the simple image in the traditional image sensor, $y \in \mathbb{R}^m$ is the compressed data received from the CS camera and $A \in \mathbb{R}^{m \times n}$ is the "measurement matrix" or "projection matrix". Matrix A must satisfy the restricted isometry property (RIP), which guarantees the fewest possible measurements with a high probability of recovering the original signal [14]. CS reconstruction is the problem of reconstructing the unknown vector x after observing the $m < n$ linear measurements, y, of its entries. We assume that the signal x here is k-sparse in some known basis, which is common for a photograph, and that Ψ is the matrix that transforms x to this domain. Thus, $x = \Psi\alpha$ and α is the transformed k-sparse signal. Next, x can be calculated as follows:
$$\arg\min \|\phi x\|_0 \quad \text{s.t.} \quad y = Ax = A\Psi\alpha = \Theta\alpha$$
where $\|\cdot\|_0$ denotes the $l_0$ norm and $\alpha = \phi x$. Unfortunately, the $l_0$ norm problem is NP-hard. There is an extended version:
$$\arg\min \|\phi x\|_p \quad \text{s.t.} \quad y = Ax = \Theta\alpha$$
where $\|\cdot\|_p$ denotes the $l_p$ norm. As per the Lagrange method:
$$\arg\min \|\phi x\|_p + \lambda\|y - \Theta\alpha\|_2$$
Practically, solving this problem is mature technology using traditional optimization or deep-learning methods [15,16,17,18,19,20,21,22,23,24,25,26].
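As a concrete illustration, the relaxed ($p = 1$) problem above can be solved with the classic iterative shrinkage-thresholding algorithm (ISTA) [13,16]. The sketch below assumes a signal that is sparse in the identity basis (so $\Theta = A$) and uses an illustrative Gaussian measurement matrix; the dimensions, sparsity level and regularization weight are arbitrary demonstration choices, not the parameters of the system described in this paper.

```python
import numpy as np

def ista(A, y, lam=0.01, steps=2000):
    """Iterative shrinkage-thresholding: minimize 0.5*||Ax - y||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth term
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = x - A.T @ (A @ x - y) / L          # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-thresholding
    return x

rng = np.random.default_rng(1)
n, m, k = 128, 48, 5                           # ambient dim, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)       # Gaussian matrix: satisfies RIP w.h.p.
y = A @ x_true                                 # compressed measurements, m < n
x_hat = ista(A, y)
```

With m = 48 measurements of a 5-sparse length-128 signal, the reconstruction error is small; the learned solvers cited above trade this simple iteration for far faster inference.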
Many CS imaging modules have been proposed in recent years. These modules project Equation (1) onto optics systems. In one such study, a digital mirror array device was used to randomly project the image on a single sensor [27]. Successive random exposures were taken by randomly changing the digital mirror array. Other researchers [28] placed a random phase mask on a lens to randomly project the object onto an array of sensors with fewer pixels. This captured compressed images in a single shot. Stern [29] proposed a linear sensor to capture compressed images and allow faster acquisition of each frame compared to traditional scanning imaging systems. Ye et al. [30] proposed a code pattern design for multi-shot measurement. Other researchers built an optical model for the modeling and design of specifically tailored phase masks ensuring satisfactory contrast-to-noise ratios [31]. An image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated in another study [32]. In this system, because signals are modulated pixel-by-pixel during the capturing process, the maximum frame rate is defined only by the charge transfer speed and can thus exceed that of conventional ultra-high-speed cameras. Esteban et al. [33] proposed the use of aberrations to achieve effective single-shot compressive imaging. A CMOS approach to obtain compressed-domain images was established in another study [34], where the compressive module is an intelligent readout integrated circuit. Indeed, many previous researchers have established compressed imaging systems; replacing conventional image sensors is practicable with workable support from these modules.
CS techniques are popular in several special areas, for example, remote sensing [35,36,37], radar [37,38,39] and Magnetic Resonance Imaging (MRI) [40,41,42]. These scenarios are always preoccupied with the contradiction between a huge volume of raw data and the computing consumption of data compression, or with unacceptable sensing times. The compressing process of CS is succinct and inexpensive in power and time, which conforms to the demands of the scenarios described above. Beacon light imaging belongs to remote sensing, but we also need to acquire the centroid for spot tracking in real time, which makes the problem even more complicated.
A system utilizing CS imaging in place of image sensors is illustrated in Figure 1. The CS performs two necessary functions: coarse light tracking and light-spot image storage. This requires obtaining the spot center directly from compressed information and maintaining sufficient atmospheric details for subsequent analysis. However, obtaining the spot center from the compressed information is challenging because image reconstruction requires burdensome calculations. We approached this problem by redesigning the measurement matrix and building a succinctly structured deep network, the CSD-Center Net, which can compute light-spot centers directly and swiftly while requiring relatively little computation, as discussed in greater detail below. As for image storage, compressed images stored in real time must preserve sufficient details and crucial information, for example, the refractive index structure constant ($C_n^2$) and the angle-of-arrival fluctuation [9,43]. It is critical to quantify the effects that compression processes with different ratios exert upon various parameters. The next two sections discuss beacon light tracking and compression performance, respectively.

2. Beacon Light Tracking and CSD-Center Net

For an ordinary image X, it is easy to access the centroid $(C_h, C_v)$:
$$C_h = \frac{\sum_{i,j=0}^{n} x_{i,j} w_j}{\sum_{i,j=0}^{n} x_{i,j}} = \frac{\mathrm{Sum}(Xw^T)}{\mathrm{Sum}(X)} = \frac{\mathrm{Sum}(x_1 w^T, x_2 w^T, \ldots, x_n w^T)}{\mathrm{Sum}[\mathrm{Sum}(x_1), \mathrm{Sum}(x_2), \ldots, \mathrm{Sum}(x_n)]}$$
$$C_v = \frac{\sum_{i,j=0}^{n} w_i x_{i,j}}{\sum_{i,j=0}^{n} x_{i,j}} = \frac{\mathrm{Sum}(wX)}{\mathrm{Sum}(X)} = \frac{\mathrm{Sum}(w x_1, w x_2, \ldots, w x_n)}{\mathrm{Sum}[\mathrm{Sum}(x_1), \mathrm{Sum}(x_2), \ldots, \mathrm{Sum}(x_n)]}$$
where w is the row vector of position weights, $x_k$ denotes the k-th row (for $C_h$; the k-th column for $C_v$) of X and Sum(·) sums all elements of a vector or matrix. To get the centroid, we have to obtain $\mathrm{Sum}(x_k w)$ and $\mathrm{Sum}(x_k)$ from the measurements y; the matrix A in Equation (1) has a great influence on this issue, and a proper A simplifies the problem. As mentioned before, a traditional measurement matrix A must satisfy the RIP:
$$(1 - \sigma)\|c\|_2 \le \|Ac\|_2 \le (1 + \sigma)\|c\|_2$$
where $\|\cdot\|_2$ is the $l_2$ norm, c is a sparse vector and σ ∈ (0, 1). Gaussian random matrices, Toeplitz measurement matrices, cyclic matrices, Bernoulli matrices and so forth are typical matrices that meet this requirement. However, there are many inspiring choices; for example, for a Gaussian random matrix A(u, δ), we have $y = Ax$, which can be written as:
$$\begin{aligned}
y_1 &= a_{11}x_{11} + a_{12}x_{12} + \cdots + a_{1n}x_{1n} \\
y_2 &= a_{21}x_{11} + a_{22}x_{12} + \cdots + a_{2n}x_{1n} \\
&\;\;\vdots \\
y_m &= a_{m1}x_{11} + a_{m2}x_{12} + \cdots + a_{mn}x_{1n}.
\end{aligned}$$
In fact, for $\mathrm{Sum}(x_k w)$ and $\mathrm{Sum}(x_k)$ in Equations (5) and (6), we have:
$$\begin{aligned}
\mathrm{Sum}(x_k w) &= w_1 x_{k1} + w_2 x_{k2} + \cdots + w_n x_{kn} \\
\mathrm{Sum}(x_k) &= x_{k1} + x_{k2} + x_{k3} + \cdots + x_{kn}
\end{aligned}$$
Then $\mathrm{Sum}(y_1, y_2, \ldots, y_m) \approx m\,u\,\mathrm{Sum}(x_k)$ if m is large enough, which requires a high-resolution image sensor and a high CS ratio to preserve $\mathrm{Sum}(a_{k,i} \mid k = 1, 2, \ldots, m;\ i < n) \approx mE(A) = m\,u$. Unfortunately, we cannot acquire $\mathrm{Sum}(x_k w)$ directly with these measurement matrices. If we add two extra row vectors to the measurement matrix A, then we have:
$$\begin{aligned}
y_1 &= a_{11}x_{11} + a_{12}x_{12} + \cdots + a_{1n}x_{1n} \\
y_2 &= a_{21}x_{11} + a_{22}x_{12} + \cdots + a_{2n}x_{1n} \\
&\;\;\vdots \\
y_m &= a_{m1}x_{11} + a_{m2}x_{12} + \cdots + a_{mn}x_{1n} \\
y_{m+1} &= \beta w_1 x_{11} + \beta w_2 x_{12} + \cdots + \beta w_n x_{1n} \\
y_{m+2} &= \alpha x_{11} + \alpha x_{12} + \alpha x_{13} + \cdots + \alpha x_{1n}
\end{aligned}$$
then:
$$C_v = \frac{\alpha}{\beta} \cdot \frac{\mathrm{Sum}(y_{m+1,1}, y_{m+1,2}, \ldots, y_{m+1,n})}{\mathrm{Sum}(y_{m+2,1}, y_{m+2,2}, \ldots, y_{m+2,n})}, \qquad C_h = \frac{\mathrm{Sum}(w_1 y_{m+2,1}, w_2 y_{m+2,2}, \ldots, w_n y_{m+2,n})}{\mathrm{Sum}(y_{m+2,1}, y_{m+2,2}, \ldots, y_{m+2,n})}.$$
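A small numerical sketch of this construction (the grid size, m, α, β, the synthetic Gaussian spot and the random part of the matrix are all illustrative choices, not trained or system values): appending the two fixed rows βw and α·1 to the measurement matrix lets the centroid be read off from the compressed measurements alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 16                   # row length and compressed measurements per row
alpha, beta = 1.0, 1.0          # scalings of the two fixed rows
w = np.arange(n, dtype=float)   # position-weight row vector

# m random measurement rows plus the two fixed centroid rows
A = np.vstack([rng.normal(0.5, 0.1, size=(m, n)),  # illustrative random part
               beta * w,                           # row m+1: weighted sums
               alpha * np.ones(n)])                # row m+2: plain sums

# Synthetic light spot on an n-by-n focal plane
rows, cols = np.mgrid[0:n, 0:n]
X = np.exp(-((cols - 40.3) ** 2 + (rows - 22.7) ** 2) / (2 * 3.0 ** 2))

Y = A @ X.T                     # column k of Y is the compressed k-th image row

# Centroid read off from the two extra rows only (labels follow Eq. (8))
C_v = (alpha / beta) * Y[m].sum() / Y[m + 1].sum()
C_h = (w * Y[m + 1]).sum() / Y[m + 1].sum()

# Reference centroid computed from the uncompressed image
C_v_ref = (X * w).sum() / X.sum()               # column-weighted
C_h_ref = (w * X.sum(axis=1)).sum() / X.sum()   # row-weighted
```

Both coordinates agree with the uncompressed centroid to floating-point precision, since the two extra rows compute the required sums exactly rather than approximately.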
However, when applying a cyclic matrix as the measurement matrix [44]:
$$A = \begin{pmatrix} a_n & a_{n-1} & \cdots & a_1 \\ a_1 & a_n & \cdots & a_{n-1} \\ \vdots & & \ddots & \vdots \\ a_{n-1} & a_{n-2} & \cdots & a_n \end{pmatrix}$$
we have $\mathrm{Sum}(y) = m\,\mathrm{Sum}(t)\,\mathrm{Sum}(x_k)$ and $y_{m+2}$ is not required.
To make full use of the measurement matrix, we redesigned an m-row measurement matrix using a deep learning network. In this network, the rows of the measurement matrix can be considered as m non-overlapping filters, and the sampling process can be considered as a convolutional layer. This means the extra rows used for the centroid are also utilized. Note that the last two rows are fixed and m − 2 rows need training, which means:
$$\begin{aligned}
y_{m-1} &= \beta w_1 x_{11} + \beta w_2 x_{12} + \cdots + \beta w_n x_{1n} \\
y_m &= \alpha x_{11} + \alpha x_{12} + \alpha x_{13} + \cdots + \alpha x_{1n}
\end{aligned}$$
Combined with the RIP constraint on A, the loss function can be defined as:
$$\mathrm{loss} = \big\| \|S_{out}\|_2 - \|S_{in}\|_2 \big\|_2$$
where $S_{in} \in \mathbb{R}^n$ and $S_{out} \in \mathbb{R}^m$ are the input and output of the network, with $S_{out} = Y_{m-2} \cup (y_{m-1}, y_m)$. The network can be trained with sparse vectors. Unless otherwise specified, the measurement matrices used below are those trained by this network.
In fact, light-spot images in experiments always contain noise, and the approach mentioned above acquires the centroid without considering interference factors. A signal $x \in \mathbb{R}^n$ with noise can be expressed as:
$$x = s + \varepsilon$$
where s is the useful part and ε is the noise. The most common way to eliminate noise is to transfer x to a specific domain where s and ε can be separated; it is natural to consider the similarity between CS and denoising. In fact, traditional reconstruction methods and denoising methods often use the same transform matrices, for example, wavelet and DCT transform matrices. In such a domain, s is reflected as strong signals on a few non-zero elements, while ε is reflected as small fluctuations on all elements. In CS, the fluctuation can approximately be dropped, while removing the fluctuation is the means (or purpose) of denoising. Practically, to some extent, this fluctuation limits the precision of CS reconstruction.
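A minimal sketch of this separation, using the Fourier domain as the sparsifying transform (the paper's pipeline uses a trained ISTA-Net phase instead; the test signal, noise level and threshold here are arbitrary illustrations): the useful part s concentrates on a few strong coefficients, while the noise ε spreads as small fluctuations over all of them, so thresholding removes most of the noise.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 512
t = np.arange(n)
# s is sparse in the Fourier domain: two pure tones
s = np.sin(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)
x = s + 0.3 * rng.normal(size=n)            # noisy observation x = s + eps

S = np.fft.rfft(x)                          # transform to the sparse domain
S[np.abs(S) < 0.2 * np.abs(S).max()] = 0.0  # drop the small fluctuations (noise)
s_hat = np.fft.irfft(S, n)                  # back to the signal domain

err_noisy = np.linalg.norm(x - s) / np.linalg.norm(s)
err_denoised = np.linalg.norm(s_hat - s) / np.linalg.norm(s)
```

Hard thresholding like this is the crudest instance of the idea; a learned sparsity controller performs the same separation adaptively.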
We constructed a CNN, dubbed CSD-Center Net, to calculate the centroid with an adjustable denoising element. The network diagram is presented in Figure 2. It is an improvement on the original LeNet [45], from which we removed the pooling layers and reduced the convolution layers to lower the algorithmic complexity and guarantee real-time performance. The bottom branch in Figure 2 is the standard centroid acquired from the measurement matrix. In the upper branch, we use a sparsity controller to preserve sparsity. This branch can be trained to acquire the influence of useless signals on the standard centroid. The sparsity controller can be a denoising model or a sparsity model, provided that it preserves real-time performance and signal sparsity. In this paper, we use a fixed, well-trained ISTA-Net phase there. The input of the network can be the whole image or row by row, which influences the training process and the output, $(C_{sh}, C_{sv})$ or $(\mathrm{Sum}(w \cdot x_i), \mathrm{Sum}(x_i))$, respectively. However, the whole-image pattern is more stable than row by row. The overall loss function is trivial:
$$\mathrm{Loss} = \|C_h - C_{sh}\|_2 + \|C_v - C_{sv}\|_2.$$
To train the CSD-Center Net and explore the performance of tracking and storage, we generated a targeted dataset containing distorted light-spot images with floating positions, since the existing open-access datasets are not suitable for this issue. The distorted images are generated by power spectrum inversion, which simulates the whole light propagation process and splits it into two parts: one irrelevant to the refractive index, in a vacuum, and another covering the phase modulation. There are two approximations at work here: that the refractive index fluctuations are relatively small and that the two steps are independent. We applied a multi-layer phase screen to simulate the influence of the atmosphere on optical fields, with phases generated under Kolmogorov theory and a von Karman power spectrum [46,47]. Fresnel diffraction theory was used to model the light propagation process in a vacuum, where the refractive index structure constant $C_n^2 = 1 \times 10^{-14}\ \mathrm{m}^{-2/3}$ and the transmitting distance is 5 km. As described above, the compressed images received at the receiver were simulated from the distorted images via Equation (1). The noise is adjustable according to the application environment.
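Such phase screens can be sketched with the standard FFT-based method: filter complex white noise by the square root of the von Karman phase power spectral density and inverse-transform. The grid size, Fried parameter r0 and outer scale L0 below are illustrative values, and the normalization follows the common textbook convention rather than the authors' exact implementation.

```python
import numpy as np

def von_karman_screen(N=256, delta=0.01, r0=0.1, L0=50.0, seed=0):
    """One random phase screen [rad] via FFT filtering of white noise."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftfreq(N, d=delta)                 # spatial frequencies [1/m]
    fx, fy = np.meshgrid(f, f)
    f2 = fx ** 2 + fy ** 2 + 1.0 / L0 ** 2         # outer scale regularizes f = 0
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f2 ** (-11.0 / 6.0)  # von Karman phase PSD
    df = 1.0 / (N * delta)
    cn = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2)
    screen = np.fft.ifft2(cn * np.sqrt(psd) * df) * N * N
    return screen.real

phase = von_karman_screen()
```

A multi-layer simulation then alternates such screens with Fresnel free-space propagation steps, as described above.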
To train this network, we divided 10,000 distorted images into 25 groups; a different level of noise was added to each group. A total of 250 images, 10 per group, were used to test the effects of CSD-Center Net. In fact, the training dataset can be made more targeted than this sample according to the application scenario. The speed of the network was checked on a laptop with an Intel Core i5-3230 CPU. After training, the CSD-Center Net can complete the necessary computation for a 1-megapixel image in less than 1 millisecond; thus, it can track a light spot across more than one thousand frames per second. Before beginning, certain operational steps are necessary for proper initialization.
Input Initialization: Compressed signals correlate to different sampling rates; each group of signals must be trained individually based on its sampling rate because of the dimension mismatch of the input vectors y. To normalize the network input, we established an initialization for the different rates with a linear mapping matrix, denoted by $A_{init}$, which can be computed by solving a least squares problem: $A_{init} = \arg\min \|AY - X\|_2^2 = XY^T(YY^T)^{-1}$. The linear mapping process is $S_{in} = A_{init}Y$, and the input dimension of the vectors is mapped from $\mathbb{R}^m$ to $\mathbb{R}^n$. Any input CS measurement is then suitable for the network [22].
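This initialization is an ordinary least squares fit and has a closed form. In the sketch below (with arbitrary dimensions and random stand-in training data), the columns of X are training signals and Y holds their CS measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, N = 256, 64, 1000                   # signal dim, measurement dim, sample count
X = rng.normal(size=(n, N))               # training signals, one per column
A = rng.normal(size=(m, n)) / np.sqrt(m)  # measurement matrix
Y = A @ X                                 # their CS measurements

# A_init = argmin ||A_init Y - X||_2^2 = X Y^T (Y Y^T)^{-1}
A_init = X @ Y.T @ np.linalg.inv(Y @ Y.T)

S_in = A_init @ Y                         # normalized input, mapped from R^m to R^n
```

One such A_init per sampling rate suffices, so measurements taken at any supported rate can be fed into the same network.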
Figure 3 shows the mean effects of the network under different CS ratios. As we can see, the standard centroid error Δ acquired through Equation (8) is extremely good in comparatively ideal situations, that is, δ ≤ 2. With increasing δ, the standard centroid suffers a rapid deterioration. In this case, CSD-Center Net effectively lowers the influence of noise, which keeps the error below 1 pixel even at higher levels of noise. As expected, the CS ratio affects the centroid error; a higher CS ratio is generally more effective than a lower one. In an LC system, if the tracking accuracy is e ≤ 2 μrad and each pixel corresponds to 1 μrad, we can derive the overall tracking accuracy. When δ ≤ 2, the standard centroid performs almost perfectly and CSD-Center Net has the opposite effect. When δ > 2, we need to use the CSD-Center Net because of the deterioration of the standard centroid. Choosing the ratio according to the requirements is important in this situation. We evaluated the performance of CSD-Center Net with Δ ≤ 0.5 pixels, that is, e ≤ 0.25 μrad, which encompasses 12.5% of the overall system precision. As we can see, the Δ of the different data rates starts to deteriorate (e > 0.25 μrad) when δ exceeds 7.5, 10.75 and 10.75 for the 4%, 10% and 25% rates, respectively. The 50% data rate is robust across the supplied scale. Practically, the deterioration of CSD-Center Net is relatively slow. Significantly, the precision of tracking using CSD-Center Net is not severely impacted by the measurement data rate. We can obtain relatively good results using an extremely low data rate, for example, 4% of the data. This is a good feature for LC light tracking and real-time image storage; it means we can achieve the tracking function with less information. As a practical matter, coordinating the standard centroid and CSD-Center Net makes light-tracking more efficient.

3. Image Storage

CS imaging can track the spot in real time with CSD-Center Net, which is necessary for an LC system. However, saving the details of the light-spot image is the real object of applying CS. In an LC system, complex atmospheric processes normally lie along the long distance between the two ends of the communication link. After transmission, the communication laser is distorted by atmospheric turbulence. Light-spot images display the light intensity distribution of the communication laser; practically, to a certain extent, the real-time images reflect the influence of atmospheric turbulence. To test the effectiveness of image saving, we determined the parameters in images reconstructed by four different algorithms for comparison: IRLS [17], ISTA-Net [22], OLS [20,21] and FCSR [23]. Images without noise but still distorted by atmospheric turbulence were used for this controlled experiment. The parameter reconstruction performance of the four algorithms is summarized in Table 1. It is worth mentioning that these effects can improve with the development of CS reconstruction algorithms.
The reconstructed image directly represents the reconstruction results. Figure 4 shows images reconstructed by the different algorithms at different rates. The ISTA-Net and FCSR reconstruction algorithms perform well and show no differences visible to the naked eye between the ground truth and the image reconstructed at a 10% CS rate. We also determined the average peak signal-to-noise ratio (PSNR) to quantitatively report reconstruction deviations over the test images (Table 1). Even at a 4% CS rate, the ISTA-Net algorithm achieves a PSNR of 47.5 dB. While producing a reconstructed image that is indistinguishable from the original to the naked eye, operating CS at this rate saves over ten times the storage space. The reconstruction effects observed here indicate that the CS technique meets the requirements of LC systems.
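For reference, PSNR over images scaled to a peak value of 1 is computed as follows (the generic definition, not the authors' exact evaluation script):

```python
import numpy as np

def psnr(reference, reconstruction, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and a reconstruction."""
    ref = np.asarray(reference, dtype=float)
    rec = np.asarray(reconstruction, dtype=float)
    mse = np.mean((ref - rec) ** 2)           # mean squared error
    return 10.0 * np.log10(peak ** 2 / mse)
```

A uniform error of 0.01 on a unit-peak image gives 40 dB; the 47.5 dB reported at a 4% CS rate therefore corresponds to an RMS error well below 1% of peak intensity.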
The refractive index structure constant ($C_n^2$) is one of the main parameters in atmospheric optics [43,48,49]. $C_n^2$ describes the random variation of atmospheric structure and its physical parameters on various time and space scales. With the development of modern nonlinear dynamics [50,51], atmospheric turbulence at various scales and in various regions of space-time has been shown to play an important role in the saltation and predictability of atmospheric processes. Therefore, it is central to atmospheric turbulence and related problems. Fluctuations in the refractive index destroy the coherence of the light wave, leading to scintillation, laser beam drift and beam spread. In an LC system, long-term $C_n^2$ changes recorded in real time are significant for communication quality. The $C_n^2$ losses shown in Table 1 are negligible compared to its natural fluctuations (normally more than one or two orders of magnitude in one day). Even at the extremely low compression rate of 4%, the loss is below 7%. At a CS rate of 10%, ISTA-Net has a loss below 1%.
Random jitter of optical images occurs in the focal plane of the receiving terminal [43,52]. This so-called angle-of-arrival fluctuation affects communication performance and is the primary cause of tracking error. Practically, the angle-of-arrival fluctuation affects LC tracking results significantly; a high frame frequency is generally used in the image sensor to track light spots with random fluctuation. Wavefront aberration, which is caused by atmospheric influence, produces a wavefront inclination α and a laser spot deviation on the sensor's focal plane. Figure 5 shows how the angle-of-arrival fluctuation forms. We can obtain α by measuring the variation of the displacement Δx between the spot center and the focal-plane center, combined with the pixel size p and focal length f:
$$\alpha = (\Delta x \cdot p) / f$$
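Numerically this is a one-line conversion; the pixel pitch and focal length below are hypothetical values, not the parameters of the authors' system:

```python
p = 5.5e-6   # pixel pitch [m] (hypothetical)
f = 2.0      # focal length [m] (hypothetical)
dx = 0.8     # measured spot displacement [pixels]

alpha_rad = dx * p / f           # angle of arrival [rad]
alpha_urad = alpha_rad * 1e6     # = 2.2 microradians here
```

Tracking the variance of α over a sequence of frames then yields the angle-of-arrival fluctuation statistic.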
Although we could obtain Δx approximately by using the redesigned measurement matrices, we used the standard method here in order to test the preservation performance of CS. We assumed that the CS system and the traditional image sensor have the same pixel size p and focal length f. We can then measure the angle-of-arrival fluctuation by measuring the variation of the laser spot center in the reconstructed images.
Figure 6 shows a 30-image line chart of Δx for the different reconstruction algorithms. The overall system precision is represented by E(er) in Table 1. Apparently, the CS rate influences overall precision; a higher CS rate preserves more details. The average Δx deviations at 4%, 10%, 25% and 50% CS ratios are about 0.24, 0.13, 0.05 and 0.02 pixels, respectively. From the statistics, an extremely low CS rate, for example 4% of the data, results in inconvenient deviation. When the data rate reaches 10%, however, the loss can generally be ignored in most application scenarios. This perception is confirmed by the PSNR, $C_n^2$ and random jitter (or Δx) results. Of course, special decisions, such as the choice of measurement matrices, can help decrease or remove the influence on some parameters.

4. Conclusions

In this study, we explored the feasibility of the CS technique for capturing light-spots in LC systems in order to minimize the storage and bandwidth costs of beacon light-spot images. To meet the requirements of light tracking, we redesigned the measurement matrices. The redesigned matrices can acquire the standard centroid directly, and the standard centroid performs extremely well in comparatively ideal situations. To achieve denoising tracking with compressed information, we built a succinct deep learning network, dubbed CSD-Center Net. The CSD-Center Net has low computational complexity, high precision and fast calculation speed. As image quality deteriorates, CSD-Center Net effectively lowers the influence of noise, keeping the error below 1 pixel even at higher levels of noise. The standard centroid and CSD-Center Net are functionally complementary in different noise environments; we can achieve high-accuracy light-tracking by appropriate selection between them.
We measured the effects of CS on LC with a special focus on quantitative parameters by simulating the entire light propagation and detection process. CS was able to compress a beacon light-spot image in real time without using computing resources and to reconstruct an image even at extremely low compressive rates (e.g., 4%) with PSNR values higher than 47 dB. The influence of CS on $C_n^2$ was found to be negligible compared to its own floating range. By contrast, the angle of arrival was found to be more sensitive to the CS rate if the redesigned matrices are not used. A sufficiently small CS rate can help preserve more information in limited storage space when light-spot image information is needed over extended periods of time; a larger CS rate can be set to preserve more precise light-spot information when necessary without exhausting available bandwidth and storage resources. The interactions and mutual restrictions among CS rate, data volume and parameter precision can be flexibly adjusted to suit different needs.
However, the analysis in this paper is based on fairly idealized conditions. There will be many practical difficulties in experiments, such as optical manufacture, installation and adjustment, sampling noise and so forth. Parameter selection for different LC systems is critical; communication distance, optical antenna and pixel size all have a great impact on the CS rate. In our opinion, how to reduce the impact of optical and electronic devices and how to choose proper parameters and CS rates are the emphases and difficulties of follow-up research.

Author Contributions

Conceptualization, Z.W., S.G. and L.S.; methodology, Z.W., S.G. and L.S.; software, Z.W.; validation, Z.W., S.G. and L.S.; formal analysis, Z.W., S.G. and L.S.; investigation, Z.W.; resources, S.G. and L.S.; data curation, Z.W.; writing—original draft preparation, Z.W.; writing—review and editing, Z.W., S.G. and L.S.; visualization, Z.W.; supervision, S.G. and L.S.; project administration, Z.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 11603024, No. 11973041).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ricklin, J.C.; Davidson, F.M. Atmospheric turbulence effects on a partially coherent Gaussian beam: Implications for free-space laser communication. J. Opt. Soc. Am. A Opt. Image Sci. Vis. 2002, 19, 1794–1802. [Google Scholar] [CrossRef]
  2. Hemmati, H. Interplanetary Laser Communications. Opt. Photonics News 2007, 18, 22–27. [Google Scholar] [CrossRef]
  3. Smutny, B.; Kaempfner, H.; Muehlnikel, G.; Sterr, U.; Wandernoth, B.; Heine, F.; Hildebrand, U.; Dallmann, D.; Reinhardt, M.; Freier, A.; et al. 5.6 Gbps Optical Intersatellite Communication Link; SPIE: Bellingham, WA, USA, 2009; Volume 7199. [Google Scholar]
  4. Sun, X.; Skillman, D.R.; Hoffman, E.D.; Mao, D.; McGarry, J.F.; Zellar, R.S.; Fong, W.H.; Krainak, M.A.; Neumann, G.A.; Smith, D.E. Free Space Laser Communication Experiments from Earth to the Lunar Reconnaissance Orbiter in Lunar Orbit. Opt. Express 2013, 21, 1865–1871. [Google Scholar] [CrossRef] [PubMed]
  5. Toyoshima, M.; Takayama, Y. Space-Based Laser Communication Systems and Future Trends. In Proceedings of the Conference on Lasers and Electro-Optics 2012, San Jose, CA, USA, 6 May 2012; p. JW1C.2. [Google Scholar]
  6. Wood, R.M. Optical Detection Theory for Laser Applications; Osche, G.R., Ed.; Wiley: New York, NY, USA, 2002; 412p, ISBN 0-471-22411-1. [Google Scholar]
  7. Yura, H.T.; Fields, R.A. Level crossing statistics for optical beam wander in a turbulent atmosphere with applications to ground-to-space laser communications. Appl. Opt. 2011, 50, 2875–2885. [Google Scholar] [CrossRef] [PubMed]
  8. Toyoshima, M.; Takahashi, N.; Jono, T.; Yamawaki, T.; Yamamoto, A. Mutual alignment errors due to the variation of wave-front aberrations in a free-space laser communication link. Opt. Express 2001, 9, 592–602. [Google Scholar] [CrossRef] [PubMed]
  9. Strasburg, J.D.; Harper, W.W. Impact of atmospheric turbulence on beam propagation. Proc. SPIE 2004, 5413, 93–102. [Google Scholar]
  10. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  11. Candès, E.J. An Introduction to Compressive Sampling. IEEE Signal Process. Mag. 2008, 25, 21–30. [Google Scholar] [CrossRef]
  12. Tropp, J.A.; Gilbert, A.C. Signal Recovery from Random Measurements via Orthogonal Matching Pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666. [Google Scholar] [CrossRef] [Green Version]
  13. Daubechies, I.; Defrise, M.; Mol, C.D. An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. Commun. Pure Appl. Math. 2004, 57, 1413–1457. [Google Scholar] [CrossRef] [Green Version]
  14. Baraniuk, R.; Davenport, M.; Devore, R.; Wakin, M. A Simple Proof of the Restricted Isometry Property for Random Matrices. Constr. Approx. 2008, 28, 253–263. [Google Scholar] [CrossRef] [Green Version]
  15. Figueiredo, M.A.T.; Nowak, R.D. An EM algorithm for wavelet-based image restoration. IEEE Trans. Image Process. A Publ. IEEE Signal Process. Soc. 2003, 12, 906–916. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Beck, A.; Teboulle, M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef] [Green Version]
  17. Chartrand, R.; Yin, W. Iteratively reweighted algorithms for compressive sensing. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing 2008, ICASSP 2008, Las Vegas, NV, USA, 30 March–4 April 2008. [Google Scholar]
  18. Needell, D.; Tropp, J.A. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 2009, 26, 301–321. [Google Scholar] [CrossRef] [Green Version]
  19. Carrillo, R.E.; Polania, L.F.; Barner, K.E. Iterative hard thresholding for compressed sensing with partially known support. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing 2011, Prague, Czech Republic, 22–27 May 2011. [Google Scholar]
  20. Blumensath, T.; Davies, M.E. On the Difference between Orthogonal Matching Pursuit and Orthogonal Least Squares; Technical Report; University of Edinburgh: Edinburgh, UK, 2007. [Google Scholar]
  21. Hashemi, A.; Vikalo, H. Sparse Linear Regression via Generalized Orthogonal Least-Squares. In Proceedings of the 2016 IEEE Global Conference on Signal and Information Processing, Washington, DC, USA, 7–9 December 2016. [Google Scholar]
  22. Zhang, J.; Ghanem, B. ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition 2018, Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
  23. Xu, S.; Zeng, S.; Romberg, J. Fast Compressive Sensing Recovery Using Generative Models with Structured Latent Variables. In Proceedings of the ICASSP 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019. [Google Scholar]
  24. Liu, R.; Zhang, Y.; Cheng, S.; Fan, X.; Luo, Z. A Theoretically Guaranteed Deep Optimization Framework for Robust Compressive Sensing MRI. In Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA, 27 January–1 February 2019. [Google Scholar]
  25. Shi, W.; Jiang, F.; Liu, S.; Zhao, D. Image Compressed Sensing using Convolutional Neural Network. IEEE Trans. Image Process. 2019, 29, 375–388. [Google Scholar] [CrossRef]
  26. Veen, D.V.; Jalal, A.; Price, E.; Vishwanath, S.; Dimakis, A.G. Compressed Sensing with Deep Image Prior and Learned Regularization. arXiv 2018, arXiv:abs/1806.06438. [Google Scholar]
  27. Takhar, D.; Laska, J.; Wakin, M.; Duarte, M.; Baron, D.; Sarvotham, S.; Kelly, K.; Baraniuk, R. A new Compressive Imaging camera architecture using optical-domain compression. Proc. IS&T/SPIE Symp. Electron. Imaging 2006. [Google Scholar] [CrossRef]
  28. Stern, A.; Javidi, B. Random Projections Imaging With Extended Space-Bandwidth Product. J. Disp. Technol. 2007, 3, 315–320. [Google Scholar] [CrossRef]
  29. Stern, A. Compressed imaging system with linear sensors. Opt. Lett. 2007, 32, 3077–3079. [Google Scholar] [CrossRef] [Green Version]
  30. Arguello, H.; Ye, P.; Arce, G.R. Spectral Aperture Code Design for Multi-Shot Compressive Spectral Imaging. In Proceedings of the International Congress of Digital Holography & Three-Dimensional Imaging, Miami, FL, USA, 12–14 April 2010. [Google Scholar]
  31. Marcos, D.; Lasser, T.; López, A.; Bourquard, A. Compressed imaging by sparse random convolution. Opt. Express 2016, 24, 1269–1290. [Google Scholar] [CrossRef] [Green Version]
  32. Mochizuki, F.; Kagawa, K.; Okihara, S.I.; Seo, M.W.; Zhang, B.; Takasawa, T.; Yasutomi, K.; Kawahito, S. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor. Opt. Express 2016, 24, 4155–4176. [Google Scholar] [CrossRef] [PubMed]
  33. Esteban, V.; Pablo, M. Snapshot compressive imaging using aberrations. Opt. Express 2018, 26, 1206–1218. [Google Scholar]
  34. Javad, G.; Manish, B.; Fiorante, G.R.C.; Payman, Z.H.; Sanjay, K.; Hayat, M.M. CMOS approach to compressed-domain image acquisition. Opt. Express 2017, 25, 4076–4096. [Google Scholar]
  35. Zhi-Li, F.; Shu-Yan, X.U.; Jun, H.U. Design of multispectral remote sensing image compression system. Electron. Des. Eng. 2010, 1, V1–V254. [Google Scholar]
  36. Wang, L.; Lu, K.; Liu, P. Compressed Sensing of a Remote Sensing Image Based on the Priors of the Reference Image. IEEE Geosci. Remote Sens. Lett. 2014, 12, 736–740. [Google Scholar] [CrossRef]
  37. Fan, C.; Liu, P.; Wang, L. Spatiotemporal resolution enhancement via compressed sensing. In Proceedings of the Geoscience & Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 3061–3064. [Google Scholar]
  38. You, Y.; Li, C.; Yu, Z. Parallel frequency radar via compressive sensing. In Proceedings of the 2011 IEEE International Geoscience and Remote Sensing Symposium, Vancouver, BC, Canada, 24–29 July 2011. [Google Scholar]
  39. Liechen, L.I.; Daojing, L.I.; Pan, Z. Compressed sensing application in interferometric synthetic aperture radar. Sci. China Inf. Sci. 2017, 60, 102305. [Google Scholar]
  40. Lustig, M.; Donoho, D.L.; Santos, J.M.; Pauly, J.M. Compressed Sensing MRI. IEEE Signal Process. Mag. 2008, 25, 72–82. [Google Scholar] [CrossRef]
  41. Gamper, U.; Boesiger, P.; Kozerke, S. Compressed sensing in dynamic MRI. Magn. Reson. Med. 2010, 59, 365–373. [Google Scholar] [CrossRef]
  42. Mun, S.; Fowler, J.E. Motion-compensated compressed-sensing reconstruction for dynamic MRI. In Proceedings of the 2013 20th IEEE International Conference on Image Processing (ICIP), Melbourne, Australia, 15–18 September 2013. [Google Scholar]
  43. Jiang, D.; Zhang, P.; Deng, K.; Zhu, B. The atmospheric refraction and beam wander influence on the acquisition of LEO-Ground optical communication link. J. Light Electronoptic 2014, 125, 3986–3990. [Google Scholar] [CrossRef]
  44. Rauhut, H. Circulant and Toeplitz matrices in compressed sensing. arXiv 2009, arXiv:abs/0902.4394. [Google Scholar]
  45. Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef] [Green Version]
  46. Lane, R.G.; Glindemann, A.; Dainty, J.C. Simulation of a Kolmogorov phase screen. Waves Random Media 1992, 2, 209–224. [Google Scholar] [CrossRef]
  47. Frehlich, R. Simulation of laser propagation in a turbulent atmosphere. Appl. Opt. 2000, 39, 393–397. [Google Scholar] [CrossRef]
  48. Li, Y.; Zhu, W.; Wu, X.; Rao, R. Equivalent refractive-index structure constant of non-Kolmogorov turbulence. Opt. Express 2015, 23, 23004–23012. [Google Scholar] [CrossRef]
  49. Ben-Yosef, N.; Tirosh, E.; Weitz, A.; Pinsky, E. Refractive-index structure constant dependence on height. J. Opt. Soc. Am. 1979, 69, 1616–1618. [Google Scholar] [CrossRef]
  50. Majda, A.J.; Chen, N. Model Error, Information Barriers, State Estimation and Prediction in Complex Multiscale Systems. Entropy 2018, 20, 644. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  51. Alessandri, A.; Bagnerini, P.; Cianci, R. State Observation for Lipschitz Nonlinear Dynamical Systems Based on Lyapunov Functions and Functionals. Mathematics 2020, 8, 1424. [Google Scholar] [CrossRef]
  52. Toyoshima, M.; Jono, T.; Nakagawa, K.; Yamamoto, A. Optimum divergence angle of a Gaussian beam wave in the presence of random jitter in free-space laser communication systems. J. Opt. Soc. Am. A 2002, 19, 567–571. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Laser Communication (LC) tracking system utilizing Compressed Sensing (CS): the two algorithms, CSD-Center Net and image reconstruction, are deployed for different functions.
Figure 2. Proposed CSD-Center Net framework with the CS process: the convolutional layer and the fully connected layer are abbreviated as 'conv layer' and 'fc layer', respectively.
Figure 3. Tracking error of the standard centroid and CSD-Center Net under noisy conditions.
Figure 4. Reconstructed images for different CS rates and algorithms: (a–d) the Irls, ISTA-Net, Ols, and FCSR algorithms; (1–4) 4%, 10%, 25%, and 50% CS rates, respectively.
Figure 5. Formation of the center error of the light beam in the focal plane.
Figure 6. Line charts of the center error over 30 images for the different algorithms: (a) Irls; (b) ISTA-Net; (c) Ols; (d) FCSR.
Table 1. Average error of important parameters recovered from different algorithms.

| Algorithm | CS Rate (%) | PSNR (dB) | Cn² (%) | Δx (pix) | E(er) (%) |
|-----------|-------------|-----------|---------|----------|-----------|
| Irls      | 4           | 41.3      | 16.76   | 0.1957   | 0.0979    |
|           | 10          | 43.7      | 14.33   | 0.2161   | 0.1081    |
|           | 25          | 50.4      | 5.87    | 0.1453   | 0.0727    |
|           | 50          | 52.4      | 5.86    | 0.0588   | 0.0294    |
| ISTA-Net  | 4           | 47.5      | 6.36    | 0.2372   | 0.1186    |
|           | 10          | 50.6      | 0.96    | 0.1297   | 0.0649    |
|           | 25          | 58.1      | 0.40    | 0.0535   | 0.0268    |
|           | 50          | 60.8      | 0.00    | 0.0213   | 0.0107    |
| Ols       | 4           | 41.1      | 12.74   | 0.2252   | 0.1126    |
|           | 10          | 41.5      | 16.89   | 0.2188   | 0.1094    |
|           | 25          | 42.9      | 19.34   | 0.1240   | 0.0620    |
|           | 50          | 50.8      | 7.22    | 0.1220   | 0.0610    |
| FCSR      | 4           | 46.3      | 8.41    | 0.2187   | 0.1094    |
|           | 10          | 49.9      | 1.88    | 0.1337   | 0.0669    |
|           | 25          | 55.7      | 0.57    | 0.0528   | 0.0264    |
|           | 50          | 59.6      | 0.03    | 0.0364   | 0.0182    |
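The center-error column Δx in Table 1 compares the intensity-weighted centroid of the reconstructed spot image against that of the original. A minimal sketch of that metric is shown below; it is an illustration only, not the authors' implementation, and the function names (`centroid`, `center_error`) and the toy 3×3 spot images are hypothetical. The normalization used for E(er) in the table (presumably Δx relative to the detector size) is not reproduced here.

```python
import math

def centroid(img):
    """Intensity-weighted centroid (x, y) of a 2-D image given as a list of rows."""
    total = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            total += v
            sx += x * v
            sy += y * v
    return sx / total, sy / total

def center_error(img_ref, img_rec):
    """Euclidean distance in pixels between the centroids of two spot images."""
    xr, yr = centroid(img_ref)
    xc, yc = centroid(img_rec)
    return math.hypot(xr - xc, yr - yc)

# Toy example: the "reconstruction" shifts some spot intensity to the right,
# pulling the centroid off the true center.
ref = [[0, 1, 0],
       [1, 4, 1],
       [0, 1, 0]]
rec = [[0, 1, 0],
       [0, 4, 2],
       [0, 1, 0]]
print(round(center_error(ref, rec), 4))  # -> 0.25
```

Averaging `center_error` over a set of test frames (30 images per algorithm in Figure 6) yields the Δx values reported in Table 1.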
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Wang, Z.; Gao, S.; Sheng, L. Feasibility of Laser Communication Beacon Light Compressed Sensing. Sensors 2020, 20, 7257. https://doi.org/10.3390/s20247257
