Article

Compressive Sensing Based Space Flight Instrument Constellation for Measuring Gravitational Microlensing Parallax

by Asmita Korde-Patel 1,2,*, Richard K. Barry 1 and Tinoosh Mohsenin 2
1 NASA Goddard Space Flight Center, Greenbelt, MD 20771, USA
2 Computer Science and Electrical Engineering Department, University of Maryland, Baltimore County, Baltimore, MD 21250, USA
* Author to whom correspondence should be addressed.
Signals 2022, 3(3), 559-576; https://doi.org/10.3390/signals3030034
Submission received: 1 April 2022 / Revised: 7 June 2022 / Accepted: 28 June 2022 / Published: 15 August 2022
(This article belongs to the Special Issue Compressive Sensing and Its Applications)

Abstract

In this work, we provide a compressive sensing architecture for implementation on a space-based observatory to detect the transient photometric parallax caused by gravitational microlensing events. Compressive sensing (CS) is a simultaneous data acquisition and compression technique, which can greatly reduce the on-board resources required for space flight data storage and ground transmission. We simulate microlensing parallax observations from a space observatory constellation based on CS detectors. Our results show that the average CS error is less than 0.5% using 25% of the Nyquist-rate samples. The error at peak magnification time is significantly lower than the error for distinguishing any two microlensing parallax curves at their peak magnification. Thus, CS is an enabling technology for detecting microlensing parallax without causing any loss in detection accuracy.

1. Introduction

Gravitational microlensing is an astronomical phenomenon. When a lensing body comes into precise alignment with a source star observed through an optical system, the light rays bend due to the gravitational effects of the lensing system, causing a magnification in the observed light curve. By obtaining these photometric curves and analyzing their characteristics, we can determine the science parameters of the lensing system. Here, we present a compressive sensing (CS) based architecture for a space flight constellation performing photometric measurements. Typically, the galactic bulge is surveyed in order to increase the chances of capturing a microlensing event. A space-based microlensing parallax can be obtained using a CS architecture. A CS system will enable capturing and collecting data on SmallSat-type instruments by reducing the need for on-board data storage and data downlink bandwidth.
Compressive sensing is a mathematical theory for sampling at a rate much lower than the Nyquist rate while still reconstructing the signal with little or no loss of information. The signal is reconstructed by solving an underdetermined system. In a CS architecture, to acquire a signal of size n, we collect m measurements, where m ≪ n. One measurement sample consists of a collective sum. We solve Equation (1) to determine x from the observation y [1,2,3,4].
y_{m \times 1} = A_{m \times n}\, x_{n \times 1}        (1)
Using the acquired measurement vector y and the known measurement matrix A, we can solve for a sparse x by applying various techniques, including greedy algorithms and optimization algorithms.
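As a minimal sketch of Equation (1), the example below recovers a sparse vector from m ≪ n random measurements using orthogonal matching pursuit (OMP), the greedy algorithm we adopt for reconstruction later in this work. The dimensions, sparsity level, and Bernoulli (±1) measurement matrix here are illustrative choices, not instrument parameters.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    x_hat = np.zeros(n)
    for _ in range(k):
        # Select the column most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # Least-squares fit on the selected support, then update the residual.
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat[:] = 0.0
        x_hat[support] = coeffs
        residual = y - A @ x_hat
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                       # signal length, measurements (m << n), sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(50, 5000, k)   # sparse "star" fluxes
A = rng.choice([-1.0, 1.0], size=(m, n))   # Bernoulli measurement matrix
y = A @ x                                  # compressive measurements, Equation (1)
x_rec = omp(A, y, k)
print("relative reconstruction error: %.2e" % (np.linalg.norm(x_rec - x) / np.linalg.norm(x)))
```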

1.1. Motivation

We target our research at space flight instrumentation. For gravitational microlensing measurements, obtaining the microlensing parallax is of critical importance in order to acquire the properties of the lensing system. Our research enables the development of a space flight instrument telescope constellation to acquire microlensing parallax measurements. These measurements would not be feasible on SmallSat-type instruments in a telescope constellation, due to the limited on-board resources available on these instruments, unless an intelligent and efficient way of sampling and compressing data is available, such as the one we demonstrate here using compressive sensing techniques.

1.2. Goals

The goal of this paper is to demonstrate the application of CS to acquire gravitational microlensing measurements. We aim to show that CS does not cause any significant errors in detection of gravitational microlensing parallax measurements through our theory-based simulation models.

1.3. Organization

This paper is organized in the following manner:
  • In Section 2, we provide background on the theory of gravitational microlensing parallax measurements.
  • In Section 3, we provide background on CS theory and its application for gravitational microlensing measurements.
  • In Section 4, we show a potential CS detector implementation architecture for a telescope constellation.
  • In Section 5, we describe our simulation and modelling setup.
  • In Section 6, we provide our simulation results and analysis.
  • In Section 7, we discuss data volume and on-board resources required for space instrument implementation.
  • Finally, in Section 8, we provide conclusions.

2. Microlensing Parallax

In gravitational lensing, the surface brightness, which is the flux per area, is conserved. The total flux increases or decreases as the area increases or decreases. In microlensing, the distinct images produced by the gravitational effects of the lensing system are not seen; instead, a magnification or demagnification of the source star is observed, because the images are not resolved. Since the Jacobian matrix gives the amount of change in the source star flux in each direction, the transformation of the original source to the stretched source can be mapped by the Jacobian. The absolute value of the inverse of its determinant gives the magnification.
The Einstein ring forms when there is an exact alignment of the source, lens, and observer, and it is an important parameter underlying the gravitational microlensing equations. The Einstein ring radius, θ_E, is defined by Equation (2).
\theta_E = \sqrt{\frac{4 G M}{c^2}\,\frac{D_{LS}}{D_L D_S}}        (2)
where M is the total mass of the lensing system, D_LS is the distance from the lens to the source, D_L is the distance from the observer to the lensing system, and D_S is the distance from the observer to the source [5].
Following the formalism of [6], rewriting this in terms of the relative lens-source parallax, π_rel, where π_rel = AU · D_LS/(D_S D_L), we obtain
\theta_E = \sqrt{k M \pi_{rel}}        (3)
Here, k = 4G/(c² AU), and AU is 1 Astronomical Unit, or 1.5 × 10^8 km.
If we define the microlensing parallax in terms of the relative lens-source parallax as π_E = π_rel/θ_E [7,8], then
M = \frac{\theta_E^2}{k\, \pi_{rel}}        (4)
  = \frac{\theta_E}{k\, \pi_E}        (5)
The amplification of a single lensed microlensing event light curve with time dependency is given by Equation (6) [5].
A(t) = \frac{u_0^2 + \left(\frac{t - t_0}{t_E}\right)^2 + 2}{\left[u_0^2 + \left(\frac{t - t_0}{t_E}\right)^2\right]^{1/2} \left[u_0^2 + \left(\frac{t - t_0}{t_E}\right)^2 + 4\right]^{1/2}}        (6)
The flux at each time sample, t, is given by Equation (7) [5].
F(t) = F_s\, A(t) + F_b        (7)
Thus, from a single-event microlensing photometric curve, we can obtain the parameters t_0, t_E, u_0, F_s, and F_b. All of these parameters are defined in Table 1.
In Table 1, μ_rel is the relative lens-source proper motion. Here, obtaining the lens mass remains unresolved, as we have two unknown parameters: θ_E and π_rel. Measuring the parallax, π_E, offers one such solution for breaking the degeneracy and obtaining the specific microlensing parameters [9,10]. If we obtain θ_E, we can solve for M, given π_E. According to [8], microlensing parallax can be measured in three ways:
  • Motion of the Earth around the sun causing an annual parallax;
  • Two or more space based observatories, separated by a significant baseline;
  • Terrestrial parallax measured using a ground and space based observatory.
In our work, we focus on a constellation of space-based observatories to create simultaneous parallax measurements.
We can define the parallel and perpendicular shifts due to a microlensing parallax as in [6,8]. Let o be the vector describing the motion of the observatory. In a telescope constellation, from [8], o = (o_1, o_2), where:
o_1 = \epsilon \cos\Omega        (8)
and
o_2 = \epsilon \sin\lambda\, \sin\Omega        (9)
Here, ε = R/AU, and we use λ = π/6 in our simulations.
The parallax vector is π_E = (π_E cos θ, π_E sin θ), where θ is the lens-source trajectory angle.
To obtain the shifts due to parallax, we compute:
\delta\tau = \pi_E \cdot o        (10)
           = \pi_E \cos\theta\, \epsilon \cos\Omega + \pi_E \sin\theta\, \epsilon \sin\lambda \sin\Omega        (11)
\delta\beta = \pi_E \times o        (12)
            = \pi_E \cos\theta\, \epsilon \sin\lambda \sin\Omega - \pi_E \sin\theta\, \epsilon \cos\Omega        (13)
where Ω = (2π/P)(t − t_0) + Φ, P is the orbital period, and Φ is the orbital phase relative to t_0 [8]. We use θ = π/4 in our modelling. From Equation (6), we can write
u(t) = \left[u_0^2 + \tau^2\right]^{1/2}        (14)
where τ = (t − t_0)/t_E.
We can define ũ(t), the impact parameter modified by parallax, following [8]:
\tilde{u}(t) = \sqrt{(u_0 + \delta\beta)^2 + (\tau + \delta\tau)^2}        (15)
In this manner, we can define the new amplification equation as
\tilde{A}(t) = \frac{\tilde{u}(t)^2 + 2}{\tilde{u}(t)\,\sqrt{\tilde{u}(t)^2 + 4}}        (16)
Thus, from a photometric curve with a microlensing parallax, we can obtain u_0 + δβ and τ + δτ.
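As a worked illustration of Equations (8)-(16), the short sketch below evaluates the parallax-shifted amplification Ã(t) for one observatory. The parameter values (u_0, t_E, π_E, R, the orbital period P, and the angles) are illustrative placeholders, not the values used in our simulations.

```python
import numpy as np

AU_KM = 1.5e8  # 1 Astronomical Unit in km

def parallax_amplification(t, t0, tE, u0, pi_E, R_km, P, Phi,
                           theta=np.pi / 4, lam=np.pi / 6):
    """Parallax-shifted amplification A~(t), following Equations (8)-(16)."""
    eps = R_km / AU_KM                        # epsilon = R / AU
    Omega = 2.0 * np.pi / P * (t - t0) + Phi  # orbital angle
    tau = (t - t0) / tE                       # normalized time of Equation (14)
    # Observatory motion components, Equations (8)-(9)
    o1 = eps * np.cos(Omega)
    o2 = eps * np.sin(lam) * np.sin(Omega)
    # Parallax-induced shifts, Equations (10)-(13)
    d_tau = pi_E * np.cos(theta) * o1 + pi_E * np.sin(theta) * o2
    d_beta = pi_E * np.cos(theta) * o2 - pi_E * np.sin(theta) * o1
    # Shifted impact parameter and amplification, Equations (15)-(16)
    u = np.sqrt((u0 + d_beta) ** 2 + (tau + d_tau) ** 2)
    return (u ** 2 + 2.0) / (u * np.sqrt(u ** 2 + 4.0))

# Illustrative example: a 1-day event sampled every 48 min from a geosynchronous-like orbit
t = np.arange(0.0, 1.0, 48.0 / (60.0 * 24.0))     # time in days
A_tilde = parallax_amplification(t, t0=0.5, tE=1.0, u0=0.1, pi_E=1.0,
                                 R_km=42000.0, P=1.0, Phi=np.pi / 8)
print("peak magnification of this parallax-shifted curve:", A_tilde.max())
```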

3. CS Application for Gravitational Microlensing

3.1. CS Theory

Compressive sensing is a mathematical theory for sampling at a rate much lower than the Nyquist rate while still reconstructing the signal with little or no loss of information. The signal is reconstructed by solving an underdetermined system. Sparsity in the data set is a key requirement for accurate reconstruction using CS methods. If the data are not sparse in the sampling domain, we can transform them to a sparse domain, perform the reconstruction, and then transform them back to the original domain [3,4]. In a CS architecture, to acquire a signal of size n, we collect m measurements, where m ≪ n. One measurement sample consists of a collective sum. We solve Equation (1) to determine x from the observation y [1,2,3,4]. We can then reconstruct x using the acquired measurement vector, y, and the known measurement matrix, A.

3.2. CS Application

Transient events such as gravitational microlensing events can be detected using differenced imaging. In differenced imaging, we take the difference between a good-seeing reference image and an observed image. Figure 1 shows a CS architecture for microlensing events that obtains the differenced images, which contain the source star magnification flux.
In this architecture (Figure 1), differencing is applied directly to the CS measurements, resulting in a reconstructed differenced image. A photometric curve can then be generated using the flux of the source star pixels of the differenced image over time.
The architecture is implemented in the following manner:
1. Obtain CS-based measurements, y_o, for a spatial image. CS can be applied by projecting a matrix, A, onto the region of interest, x_o. This can be done on a column-by-column basis for an n × n spatial region, x_o. Thus, for 2D images, y_o and A are of size m × n, where m ≪ n.
2. Given A and a clean reference image, x_r, construct the measurement matrix y_r, where y_r = A x_r.
3. Apply a 2D differencing algorithm to y_o and y_r to obtain the differenced measurements, y_diff, and the corresponding convolution kernel, M, which is used to match the observed and reference CS measurement vectors, y_o and y_r [11]. In our modelling, we use y_diff = y_o − y_r, under the assumption that the PSF of the reference and observed images is the same. This assumption is valid for space flight instruments when both the observed and reference images are obtained on-board the instrument.
4. Reconstruct the differenced image, x_diff, using CS reconstruction algorithms, given A and y_diff.
In our modelling, we assume the same PSF for the reference and observed images; hence, in an ideal scenario, we would obtain a differenced image containing only the microlensing event. A sample reference image, observed image, and differenced image are shown in Figure 2, Figure 3, and Figure 4, respectively.
CS reconstruction, using the architecture in Figure 1, reconstructs the differenced image.
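As a minimal numerical sketch of steps 1-3 above, under the same-PSF assumption (so that the kernel M reduces to the identity), the code below measures a reference and an observed image column-by-column with a shared matrix A and verifies that differencing the measurement vectors is equivalent to measuring the differenced image directly. The image size, flux levels, and the injected magnification are arbitrary illustrative values; step 4 would then recover x_diff column-by-column from y_diff with a greedy solver such as the OMP sketch in Section 1.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64                  # n x n spatial region
m = int(0.25 * n)       # 25% of Nyquist-rate samples per column

# Reference image x_r and observed image x_o = x_r plus a magnified source star
x_r = rng.uniform(50, 5000, size=(n, n))
x_o = x_r.copy()
x_o[30:33, 40:43] += 2000.0                 # illustrative microlensing flux bump

# Steps 1-2: column-by-column CS projection with a shared Bernoulli matrix A
A = rng.choice([-1.0, 1.0], size=(m, n))
y_o = A @ x_o                               # m x n measurements of the observed image
y_r = A @ x_r                               # m x n measurements of the reference image

# Step 3: differencing in the measurement domain (same PSF, so kernel M = identity)
y_diff = y_o - y_r

# Linearity check: y_diff equals the CS measurements of the differenced image itself,
# so step 4 can reconstruct x_diff = x_o - x_r column-by-column from y_diff alone.
assert np.allclose(y_diff, A @ (x_o - x_r))
print("measurement-domain differencing verified; y_diff shape:", y_diff.shape)
```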

4. CS Detector Architecture

There are numerous options for implementing a CS-based detector system, as discussed in [12,13,14]. Spatial light modulators are used to implement the measurement matrices. Previous work in [15] shows a CS implementation using coded aperture masks. In [16,17,18], single-pixel cameras were implemented using liquid crystal displays (LCDs); this work shows that lensless single-pixel cameras can be implemented using LCDs as coded apertures.
In our analysis, we focus on using a DMD array for the CS projection. A CS architecture using a DMD array can have a frame rate of 32 kHz with 2048 × 1080 pixels [13]. In our simulations, we show the effectiveness with 25% of n. Let us assume n = 2048 × 1040 pixels. We would need 550,800 measurements. With this frame rate, it takes 17.2 s to obtain one CS image, in addition to the needed exposure time. A typical microlensing event can last for 30 days. In [19], the shortest-timescale microlensing event measured was reported, with t_E ≈ 41.5 min. Using the Nyquist rate for this short time scale, we would expect to sample at least every t_E/2 ≈ 20.75 min. Detection efficiency will depend not only on the cadence of the system but also on the flux magnification and field-of-view (FOV). The larger the FOV, the greater the chances of detecting a microlensing event.
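A back-of-the-envelope sketch of this cadence budget, using the frame rate and measurement count quoted above as nominal figures (not a detector specification), is given below.

```python
# Rough cadence budget for a DMD-based CS detector (nominal figures from the text).
frame_rate_hz = 32_000         # DMD pattern rate [13]
m_measurements = 550_800       # CS measurements per image (25% of n)

t_cs_image_s = m_measurements / frame_rate_hz      # time to project all CS patterns
print(f"time per CS image: {t_cs_image_s:.1f} s")  # ~17.2 s, plus exposure time

# Shortest measured microlensing event [19]: t_E ~ 41.5 min, so a Nyquist-style
# cadence of t_E / 2 ~ 20.75 min per sample still leaves ample time per CS image.
t_E_min = 41.5
print(f"required cadence: {t_E_min / 2:.2f} min per sample")
```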
A CS detector system would be beneficial for use on Small Satellites, where data storage and downlink can be limiting factors. Using a CS detector system on a constellation of satellites, we can detect a microlensing parallax, as shown in Figure 5. In our simulations, we assume the number of satellites in the constellation to be in [0, n_sat], where the maximum number is n_sat = 8.
A detector concept for placing a telescope on a Small Satellite can be based on ASTERIA (Arcsecond Space Telescope Enabling Research in Astrophysics) [20]. ASTERIA is a 6U CubeSat with a 6.7 cm telescope aperture and a 2592 × 2192 pixel CMOS detector. For a CS system, our optics would include a telescope, micro-mirror arrays or other spatial light modulators, and a photodiode to acquire the sum total of the reflected light from the micro-mirrors, as shown in Figure 6.

State-of-the-Art Space Flight Instrumentation

In this section, we briefly discuss the current state-of-the-art and the advantages of using a CS detector as compared to traditional detectors.
With CS techniques, the front-end electronics for acquiring photometric measurements are transformed into a novel data acquisition approach that collects fewer data samples and shifts the image reconstruction load onto computational imaging. Key differences in the optical setup and read-out of traditional detectors and CS-based detectors are shown in Table 2.
The current state-of-the-art for microlensing parallax consists of a large space observatory, such as the Nancy Grace Roman Space Telescope, detecting and alerting on a microlensing event, which is then followed up by a ground observatory. The use of large observatories is very costly. A CS-based instrument can detect a microlensing event and capture the complete data set. If a parallax measurement is needed, a ground-based observatory may be alerted. However, replacing a large observatory with a smaller instrument constellation for parallax detection, one that acquires the same-resolution science, would be a game-changer, causing a significant reduction in costs and resources.

5. Simulation Setup

In this section, we discuss the microlensing parameters and the CS parameters used for our simulation modelling.

5.1. Parallax Measurement Setup

In this section, we show the effectiveness of CS over a range of δτ and δβ values, as described by the equations above. We vary Φ, the space flight instrument orbital phase, to span a range of δτ and δβ values. Our microlensing parallax, π_E, is given by Equation (20). From [8], we make the same assumptions of the source being located in the galactic bulge at 8 kpc and the lens at 4 kpc, with a relative lens-source speed of 200 km/s, to obtain the value of π_E.
Hence, in a simple case, with the origin at the center of the satellite trajectories, we can write
\delta\tau = \pi_E\, \epsilon \cos\Omega        (17)
\delta\beta = \pi_E\, \epsilon \sin\Omega        (18)
To generate our parallax measurements, we make the assumptions as in [8]:
  • Source is in the galactic bulge: D_S = 8 kpc
  • D_L = 4 kpc
  • μ_rel = 200 km/s
We can write t_E μ_rel = θ_E [21,22].
\pi_E = \frac{\mathrm{AU}\, D_{LS}}{t_E\, \mu_{rel}\, D_L\, D_S}        (19)
      = \frac{\mathrm{AU}\,(0.000624\ \mathrm{s})}{t_E}        (20)
We can use Equations (11) and (13) with the given value for π_E in Equation (20). Our simulations vary the values of R and Φ to determine the effect of CS reconstruction on photometric curves with a microlensing parallax. We use the R values shown in Table 3.
The different R values approximately correspond to the types of orbit the constellation could be in: low Earth orbit, geosynchronous orbit, and solar orbit, respectively [8,23]. We use eight equally spaced Φ values. Hence, our results could show the effect for any constellation spaced at the given Φ values: π/8, π/4, 3π/8, π/2, 5π/8, 3π/4, 7π/8, π.
A t_E value of 1 day depicts photometric curves due to free-floating planets.
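As a small sketch of the simulation grid described above, the code below evaluates the simple-case shifts of Equations (17)-(18) for the Table 3 orbital radii and the eight orbital phases; π_E, t_0, and the orbital period P are left as illustrative inputs, since they depend on the assumed lens-source geometry and orbit.

```python
import numpy as np

AU_KM = 1.5e8  # 1 Astronomical Unit in km

def parallax_shifts(t, t0, R_km, P_days, Phi, pi_E):
    """Simple-case shifts delta_tau and delta_beta of Equations (17)-(18)."""
    eps = R_km / AU_KM
    Omega = 2.0 * np.pi / P_days * (t - t0) + Phi
    return pi_E * eps * np.cos(Omega), pi_E * eps * np.sin(Omega)

radii_km = [7000.0, 42000.0, AU_KM]              # Table 3 orbital radii
phases = [k * np.pi / 8 for k in range(1, 9)]    # eight equally spaced orbital phases

t = np.arange(0.0, 1.0, 48.0 / (60.0 * 24.0))    # 1-day window at a 48 min cadence
for R in radii_km:
    # Looping over all phases generates the family of parallax curves for each R value.
    d_tau, d_beta = parallax_shifts(t, t0=0.5, R_km=R, P_days=1.0,
                                    Phi=phases[0], pi_E=1.0)   # placeholder values
    print(f"R = {R:.0f} km: max |delta_tau| = {np.abs(d_tau).max():.2e}")
```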

5.2. Compressive Sensing Setup

For the CS application, we generate a realistic crowded star field with Airy-shaped PSFs, with PSF radii ranging from 1 to 5 pixel units. For an n × n image, we generate 0.75 × n × n star sources. The flux of the star sources ranges from 50 to 5000 units. In our simulations, we use m = 0.25 × n, where m is the number of CS measurements obtained. The number of measurements, m, is a trade-off with the amount of error tolerance desired. We chose the value of m such that we would keep the average error tolerance under 1%. A Bernoulli measurement matrix is used, which is varied during each Monte Carlo simulation. For a given R and Ω, we run 100 Monte Carlo simulations at each time sample, t. The number of Monte Carlo simulations chosen is based on two factors: (1) the available computational resources and (2) the minimum number needed such that the average obtained over all Monte Carlo simulations is as close as possible to the “true” average. In order to find the “true” average, we look at the standard deviation across the Monte Carlo simulations to verify that it is as close to 0 as possible. Using 100 Monte Carlo simulations, for R = 7000 km, we obtain the average standard deviation over all time samples and Φ values to be 1.2 × 10^{-11}. Similarly, for R = 42,000 km and R = 1 AU, we obtain average standard deviations of 1.1 × 10^{-11} and 0.0, respectively. Hence, we use 100 Monte Carlo simulations, because the standard deviation decreases to a minimal amount across that many simulations in our setup. For CS reconstruction, we use the greedy algorithm orthogonal matching pursuit, as its computational time is lower than that of optimization algorithms.
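The snippet below is a rough sketch of how such a synthetic crowded star field can be generated; the Airy-profile scaling, source placement, and single PSF radius per field are our own illustrative simplifications of the setup described above.

```python
import numpy as np
from scipy.special import j1   # first-order Bessel function, used for the Airy pattern

def airy_psf(half_size, radius):
    """2D Airy intensity pattern whose first dark ring falls near `radius` pixels."""
    y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
    r = np.hypot(x, y)
    arg = 3.8317 * r / radius          # place the first Airy zero at r = radius
    arg[r == 0] = 1e-12                # avoid 0/0 at the center
    psf = (2.0 * j1(arg) / arg) ** 2
    psf[r == 0] = 1.0                  # analytic limit at the center
    return psf / psf.sum()

def crowded_field(n, rng):
    """Synthetic crowded field: 0.75*n*n point sources, fluxes 50-5000, Airy PSF."""
    stars = np.zeros((n, n))
    n_src = int(0.75 * n * n)
    xs = rng.integers(0, n, n_src)
    ys = rng.integers(0, n, n_src)
    np.add.at(stars, (ys, xs), rng.uniform(50, 5000, n_src))   # deposit source fluxes
    psf = airy_psf(half_size=15, radius=rng.uniform(1, 5))
    # FFT convolution with the Airy PSF (wrap-around edges; adequate for a sketch)
    return np.real(np.fft.ifft2(np.fft.fft2(stars) * np.fft.fft2(psf, s=stars.shape)))

field = crowded_field(64, np.random.default_rng(2))
print(field.shape, field.max())
```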

6. Results

In our first set of simulations, for R = 7000 km, we obtain the different parallax light curves for each varying Φ, as shown in Figure 7, with the figure legend shown in Figure 8. The photometric curve without parallax, labeled as Original, is also shown for comparison. We perform 100 Monte Carlo simulations for each time sample. For each R value, we show the % error in peak magnification value between each pair of parallax photometric curves, as well as the difference in time shift of the peak magnification. The average CS reconstruction error for R = 7000 km is shown in Table 4. Table 5 and Table 6 show the error at peak magnification and the time difference in hours at peak magnification for R = 7000 km, respectively. Microlensing parallax provides a Δu_0, which shows the change in magnification amplitude, as well as a Δt_0, which shows a change in the t_0 location. The higher the R value, the greater the Δ.
In the next set of simulations, we use R = 42,000 km. We show the microlensing parallax curves and the photometric curve without a microlensing parallax, shown as Original, in Figure 9, with the figure legend shown in Figure 10. The corresponding errors due to CS reconstruction are shown in Table 7. Table 8 and Table 9 show % error at peak magnification and time difference in hours at peak magnification, respectively.
Figure 11 shows the microlensing parallax curves for R = 1 AU. In this figure, we do not show the microlensing curve without parallax (the Original curve in Figure 7 and Figure 9) for comparison, as the Δ in u_0 and t_0 is significantly large and would not be clearly readable with the given sampling cadence and observation window. Figure 12 is the corresponding legend. In Table 10, we show the % error due to CS reconstruction in the photometric curves. Table 11 and Table 12 show the % error at peak magnification and the time difference in days at peak magnification, respectively.
For all three categories, as shown in Figure 7, Figure 9, and Figure 11, the CS reconstructed curves and original curves overlap, as CS reconstruction nearly perfectly reconstructs the original curve. The average CS reconstruction % error over all samples for the photometric curve with no microlensing parallax was 0.175%, and the average % error at t_0 was 0.100%.
Our results show that there is no significant error for a microlensing parallax event using CS techniques. For all the photometric curves, the average error is less than 0.5 % for all CS reconstructed curves and less than 1.1 % at peak magnification time.
From our simulations, we note that the parallax curves become more distinguishable for higher R values; that is, the time separation and amplitude separation between the photometric curves are larger. We also show that the error due to CS reconstruction is less than the error between any two microlensing parallax curves. Furthermore, Table 5, Table 8 and Table 11 can be used as a basis to determine the optimal orbital phases to choose for satellite placement. Although the CS reconstruction results show average errors within the % error between any two curves, a better placement, if using fewer than eight satellites, would be to choose orbital phases that have a greater microlensing detectability.

7. Data Volume and Resources

In this section, we perform a comparative analysis of traditional detectors versus CS-based detectors.
1. Data volume storage
Using an n × n image with a 14-bit ADC resolution, we would expect the total data volume to be:
Traditional detector: 14 bits × n × n
CS detector: 14 bits × m × n
We make the assumption that the photodetector is not saturated at the ADC bit resolution needed to sample. Without data compression, we would need to transfer 14n² bits/FOV using a traditional detector. Using the CS approach with 25% measurements, we would need to transmit 14 × 0.25 × n × n = 3.5n² bits/FOV (see the sketch at the end of this section).
2. Computational resources
On-board computation will consist of programming the spatial modulator and storing the m × n acquired data for each n × n spatial image. In comparison, a traditional detector system would require computational resources for compressing data on-board so that it can be accommodated within the data downlink bandwidth.
In terms of on-board Field Programmable Gate Array (FPGA) resources for each of the modules listed in Table 13, we would expect a similar amount of logic gates, except for item 3. There are different methods for implementing data compression, including compression algorithms and pixel averaging [24,25]. For CS detectors, spatial modulation implementation will depend on the spatial modulator used. In addition, we would require either storage or generation and transmission of the spatial modulation matrix (CS measurement matrix) on-board. The on-board storage needed for traditional detectors would be significantly higher than storage needed for CS architecture modules.
3. Optics
A traditional detector consists of a telescope and a detector, typically a CCD camera. In the case of CS, we would need a telescope, as well as lenses to focus the light onto a spatial modulator device, such as a DMD array, followed by a photodetector. However, lensless cameras for CS applications have been implemented [16,17,18] and would need to be studied for a SmallSat-type instrument. The optical path required to implement the detector system will be examined in future work.
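The following small sketch works through the data-volume comparison of item 1 for an illustrative detector size (the pixel count is a placeholder, not a specific instrument):

```python
# Data volume per FOV for a traditional detector vs. a CS detector (item 1 above).
n = 2048                      # illustrative n x n detector
adc_bits = 14
cs_fraction = 0.25            # m = 0.25 * n measurements per column

traditional_bits = adc_bits * n * n                   # 14 n^2 bits/FOV
cs_bits = adc_bits * int(cs_fraction * n) * n         # 14 m n = 3.5 n^2 bits/FOV
print(f"traditional detector: {traditional_bits / 8e6:.1f} MB per FOV")
print(f"CS detector (25%):    {cs_bits / 8e6:.1f} MB per FOV")
```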

8. Conclusions

We simulated microlensing parallax curves with different orbital phases using CS techniques. Our CS simulation results show an error of less than 0.5% over all time samples and an average error of less than 1.1% at t_0, while using 25% of the traditional detector measurements, for microlensing parallax light curves with Φ ranging over [0, 7π/8]. Despite the different microlensing parallaxes at the three orbital radii generating different photometric curves with significant differences in flux magnitude, CS worked well for all the cases. The CS error at peak magnification for R = 7000 km, 42,000 km, and 1 AU, at each Φ value, is less than the error between the parallax curve generated with that particular Φ value and any other parallax curve generated over the range of Φ values. This shows that CS reconstruction should not cause any significant errors in the detection of a microlensing parallax curve for any given Φ value. Using CS, we can significantly reduce the data storage volume, as well as the data downlink bandwidth, both of which can be limitations for SmallSat-type instruments. CS shows potential for implementation in a SmallSat constellation for detecting microlensing parallax events.

Author Contributions

Conceptualization, A.K.-P. and R.K.B.; methodology, A.K.-P.; writing—original draft preparation, A.K.-P.; writing—review and editing, R.K.B.; supervision, T.M. and R.K.B.; All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

We have used simulated data using simulation parameters described in this work. This study has not used any publicly available datasets.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CS        Compressive Sensing
SmallSat  Small Satellite
FOV       Field-of-View
ASTERIA   Arcsecond Space Telescope Enabling Research in Astrophysics
FPGA      Field Programmable Gate Array
DMD       Digital Micro-mirror Device
ADC       Analog-to-Digital Converter

References

  1. Eldar, Y.C.; Kutyniok, G. (Eds.) Compressed Sensing: Theory and Applications; Cambridge University Press: Cambridge, UK, 2012. [Google Scholar]
  2. Candès, E.J.; Wakin, M.B. An introduction to compressive sampling [a sensing/sampling paradigm that goes against the common knowledge in data acquisition]. IEEE Signal Process. Mag. 2008, 25, 21–30. [Google Scholar]
  3. Candes, E.; Romberg, J. Sparsity and incoherence in compressive sampling. Inverse Probl. 2007, 23, 969. [Google Scholar] [CrossRef]
  4. Bobin, J.; Starck, J.L.; Ottensamer, R. Compressed sensing in astronomy. IEEE J. Sel. Top. Signal Process. 2008, 2, 718–726. [Google Scholar] [CrossRef]
  5. Seager, S. Exoplanets; University of Arizona Press: Tucson, AZ, USA, 2010. [Google Scholar]
  6. Gould, A. “Rigorous” Rich Argument in Microlensing Parallax. arXiv 2020, arXiv:2002.00947. [Google Scholar]
  7. Yee, J.C. Lens Masses and Distances from Microlens Parallax and Flux. Astrophys. J. Lett. 2015, 814, L11. [Google Scholar] [CrossRef]
  8. Bachelet, E.; Hinse, T.C.; Street, R. Measuring the Microlensing Parallax from Various Space Observatories. Astron. J. 2018, 155, 191. [Google Scholar] [CrossRef]
  9. Lee, C.-H. Microlensing and Its Degeneracy Breakers: Parallax, Finite Source, High-Resolution Imaging, and Astrometry. Universe 2017, 3, 53. [Google Scholar] [CrossRef]
  10. Smith, M.C.; Mao, S.; Paczyński, B. Acceleration and parallax effects in gravitational microlensing. Mon. Not. R. Astron. Soc. 2003, 339, 925–936. [Google Scholar] [CrossRef]
  11. Bramich, D.M. A new algorithm for difference image analysis. Mon. Not. R. Astron. Soc. Lett. 2008, 386, L77–L81. [Google Scholar] [CrossRef]
  12. Wakin, M.B.; Laska, J.N.; Duarte, M.F.; Baron, D.; Sarvotham, S.; Takhar, D.; Kelly, K.F.; Baraniuk, R.G. An architecture for compressive imaging. In Proceedings of the 2006 International Conference on Image Processing, Atlanta, GA, USA, 8–11 October 2006; IEEE: Piscataway, NJ, USA, 2006. [Google Scholar]
  13. Guzzi, D.; Coluccia, G.; Labate, D.; Lastri, C.; Magli, E.; Nardino, V.; Palombi, L.; Pippi, I.; Coltuc, D.; Marchi, A.Z.; et al. Optical compressive sensing technologies for space applications: Instrumental concepts and performance analysis. In Proceedings of the International Conference on Space Optics—ICSO 2018, Chania, Greece, 9–12 October 2018; p. 111806B. [Google Scholar]
  14. Frederic, Z.; Lanzoni, P.; Tangen, K. Micromirror arrays for multi-object spectroscopy in space. In Proceedings of the International Conference on Space Optics—ICSO 2010, Rhodes Island, Greece, 4–8 October 2010; International Society for Optics and Photonics: Bellingham, WA, USA, 2019; Volume 10565. [Google Scholar]
  15. Xiao, L.L.; Liu, K.; Han, D.P.; Liu, J.Y. A compressed sensing approach for enhancing infrared imaging resolution. Opt. Laser Technol. 2012, 44, 2354–2360. [Google Scholar] [CrossRef]
  16. Zhang, Z.; Su, Z.; Deng, Q.; Ye, J.; Peng, J.; Zhong, J. Lensless single-pixel imaging by using LCD: Application to small-size and multi-functional scanner. Opt. Express 2019, 27, 3731–3745. [Google Scholar] [CrossRef] [PubMed]
  17. Kuusela, T.A. Single-pixel camera. Am. J. Phys. 2019, 87, 846–850. [Google Scholar] [CrossRef]
  18. Gang, H.; Hong, J.; Matthews, K.; Wilford, P. Lensless imaging by compressive sensing. In Proceedings of the 2013 IEEE International Conference on Image Processing, Melbourne, VIC, Australia, 15–18 September 2013; IEEE: Piscataway, NJ, USA, 2013. [Google Scholar]
  19. Mróz, P.; Poleski, R.; Gould, A.; Udalski, A.; Sumi, T.; Szymański, M.K.; Soszyński, I.; Pietrukowicz, P.; Kozłowski, S.; Skowron, J.; et al. A terrestrial-mass rogue planet candidate detected in the shortest-timescale microlensing event. Astrophys. J. Lett. 2020, 903, L11. [Google Scholar] [CrossRef]
  20. Krishnamurthy, A.; Knapp, M.; Günther, M.N.; Daylan, T.; Demory, B.; Seager, S.; Bailey, V.P.; Smith, M.W.; Pong, C.M.; Hughes, K.; et al. Transit Search for Exoplanets around Alpha Centauri A and B with ASTERIA. Astron. J. 2021, 161, 275. [Google Scholar] [CrossRef]
  21. Gould, A. A natural formalism for microlensing. Astrophys. J. 2000, 542, 785. [Google Scholar] [CrossRef]
  22. Yan, S.; Zhu, W. Measuring microlensing parallax via simultaneous observations from Chinese Space Station Telescope and Roman Telescope. Res. Astron. Astrophys. 2021, 22, 025006. [Google Scholar] [CrossRef]
  23. Mogavero, F.; Beaulieu, J.P. Microlensing planet detection via geosynchronous and low Earth orbit satellites. Astron. Astrophys. 2016, 585, A62. [Google Scholar] [CrossRef]
  24. Anuradha, D.; Bhuvaneswari, S. A detailed review on the prominent compression methods used for reducing the data volume of big data. Ann. Data Sci. 2016, 3, 47–62. [Google Scholar] [CrossRef]
  25. Yaman, D.; Kumar, V.; Singh, R.S. Comprehensive review of hyperspectral image compression algorithms. Opt. Eng. 2020, 59, 090902. [Google Scholar]
Figure 1. CS Architecture. The blue block represents CS data acquisition which can be performed on-board a spaceflight instrument, while the orange blocks represent computations which can be performed on the ground. Image differencing can also be performed on-board to further reduce data volume.
Figure 2. Sample reference image.
Figure 3. Sample observed image.
Figure 4. Sample differenced image.
Figure 5. A diagram of satellite constellations observing the same spatial region in order to capture a microlensing parallax of any microlensing event occurring the given field-of-view. X represents a satellite with a CS detector system.
Figure 6. A potential CS implementation of the detector system using a telescope to acquire the light from the spatial region, a set of micro-mirror arrays to reflect light using CS projection methods, and a photodiode to capture a single measurement of the total reflected light.
Figure 7. Photometric curves generated by different parallax values, shown with its corresponding CS reconstructed curve for R = 7000 km.
Figure 8. Legend for Figure 7.
Figure 9. Photometric curves generated by different parallax values, shown with its corresponding CS reconstructed curve for R = 42,000 km. The original photometric curve without any microlensing effects is shown in red for comparison.
Figure 10. Legend for Figure 9.
Figure 11. Photometric curves generated by different parallax values, shown with its corresponding CS reconstructed curve for R = 1 AU. The magnification is significantly lower because the differenced image is reconstructed using our CS technique, and the Δ in both u 0 and t 0 is significantly high.
Figure 12. Legend for Figure 11.
Table 1. Microlensing parameter definitions obtained from a photometric curve.
Parameter | Definition
t_0 | Time of peak magnification
t_E | Einstein ring crossing time: θ_E/μ_rel
u_0 | Impact parameter in units of θ_E
F_s | Microlensing source star flux
F_b | Microlensing source star blended flux
Table 2. Comparison between traditional detectors and CS detectors.
Traditional Detectors | CS Detectors
CCD detectors | Typically designed with spatial light modulators and a photodiode
Pixel-by-pixel readout of the image | Total power reflected from the matrix projected onto the image is measured
Digitization of each pixel readout | Digitization of the total power read
Table 3. Simulation setup parameters.
R | t_E | Cadence | Observation Time
7000 km | 1 day | 48 min | 1 day
42,000 km | 1 day | 48 min | 1 day
1 AU | 1 day | 5.02 days | 150.5 days
Table 4. Percent Error for CS reconstruction for each Φ for R = 7000 km. The second row shows average % error over all time samples, the third row shows average % error at peak magnification, and the last row shows the standard deviation of the % error at peak magnification.
Φ | 0 | π/8 | π/4 | 3π/8 | π/2 | 5π/8 | 3π/4 | 7π/8
Avg % Err | 0.175 | 0.230 | 0.088 | 0.109 | 0.163 | 0.161 | 0.240 | 0.098
Avg % Err at peak | 0.075 | 0.06 | 1.07 | 0.068 | 1.09 | 0.076 | 0.081 | 0.073
Std dev. % Err at peak | 0.057 | 0.064 | 9.94 | 0.056 | 9.94 | 0.086 | 0.070 | 0.068
Table 5. Percent error at peak magnification over 100 Monte Carlo simulations, between a microlensing photometric curve with Φ shown in the first row, compared to the photometric curve with Φ in the first column. Error values for R = 7000 km. Values in bold underline show where % error between the two curves is less than 10%.
Φ | 0 | π/8 | π/4 | 3π/8 | π/2 | 5π/8 | 3π/4 | 7π/8
0 | - | 12.0 | 25.3 | 40.0 | 50.2 | 56.5 | 59.8 | 60.1
π/8 | 13.6 | - | 15.2 | 31.8 | 43.4 | 50.6 | 54.4 | 54.7
π/4 | 33.9 | 17.9 | - | 19.6 | 33.3 | 41.7 | 46.2 | 46.6
3π/8 | 66.7 | 46.7 | 24.5 | - | 16.9 | 27.5 | 33.1 | 33.5
π/2 | 101 | 76.6 | 49.8 | 20.4 | - | 12.7 | 19.4 | 20.0
5π/8 | 130 | 102 | 71.6 | 37.9 | 14.5 | - | 7.72 | 8.34
3π/4 | 149 | 119 | 85.9 | 49.4 | 24.1 | 8.36 | - | 0.674
7π/8 | 151 | 121 | 87.2 | 50.4 | 24.9 | 9.10 | 0.679 | -
Table 6. Time difference in Hours at peak magnification between a microlensing photometric curve with Φ shown in the first row, compared to the photometric curve with Φ in the first column. R = 7000 km.
Φ | 0 | π/8 | π/4 | 3π/8 | π/2 | 5π/8 | 3π/4 | 7π/8
0 | - | 0 | 0.828 | 0.828 | 1.66 | 1.66 | 0.828 | 0.828
π/8 | 0 | - | 0.828 | 0.828 | 1.66 | 1.66 | 0.828 | 0.828
π/4 | 0.828 | 0.828 | - | 0 | 0.828 | 0.828 | 0 | 0
3π/8 | 0.828 | 0.828 | 0 | - | 0.828 | 0.828 | 0 | 0
π/2 | 1.66 | 1.66 | 0.828 | 0.828 | - | 0 | 0.828 | 0.828
5π/8 | 1.66 | 1.66 | 0.828 | 0.828 | 0 | - | 0.828 | 0.828
3π/4 | 0.828 | 0.828 | 0 | 0 | 0.828 | 0.828 | - | 0
7π/8 | 0.828 | 0.828 | 0 | 0 | 0.828 | 0.828 | 0 | -
Table 7. Percent Error for CS reconstruction for each Φ for R = 42,000 km. The second row shows average % error over all time samples, the third row shows average % error at t_0, and the last row shows the standard deviation of the % error at t_0.
Φ | 0 | π/8 | π/4 | 3π/8 | π/2 | 5π/8 | 3π/4 | 7π/8
Avg % Err | 0.108 | 0.110 | 0.182 | 0.110 | 0.191 | 0.208 | 0.173 | 0.201
Avg % Err at peak | 1.07 | 0.059 | 0.091 | 1.07 | 0.086 | 0.063 | 0.062 | 0.071
Std dev. % Err at peak | 9.94 | 0.041 | 0.205 | 9.94 | 0.094 | 0.049 | 0.062 | 0.057
Table 8. Percent error at peak magnification between a microlensing photometric curve with Φ shown in the first row, compared to the photometric curve with Φ in the first column. Error values for R = 42,000 km. Values in bold underline show where % error between the two curves is less than 10%.
Φ | 0 | π/8 | π/4 | 3π/8 | π/2 | 5π/8 | 3π/4 | 7π/8
0 | - | 22.1 | 7.50 | 109 | 215 | 25.1 | 59.7 | 70.9
π/8 | 28.3 | - | 18.7 | 168 | 305 | 3.85 | 48.3 | 62.6
π/4 | 8.11 | 15.8 | - | 126 | 241 | 19.0 | 56.5 | 68.5
3π/8 | 52.1 | 62.7 | 55.7 | - | 51.0 | 64.1 | 80.7 | 86.1
π/2 | 68.3 | 75.3 | 70.7 | 33.8 | - | 76.2 | 87.2 | 90.8
5π/8 | 33.5 | 4.00 | 23.5 | 179 | 321 | - | 46.2 | 61.1
3π/4 | 148 | 93.4 | 130 | 418 | 682 | 86.0 | - | 27.7
7π/8 | 243 | 168 | 218 | 617 | 982 | 157 | 38.3 | -
Table 9. Time difference in Hours at peak magnification between microlensing photometric curve with Φ shown in the first row, compared to the photometric curve with Φ in the first column. Error values for R = 42,000 km.
Φ | 0 | π/8 | π/4 | 3π/8 | π/2 | 5π/8 | 3π/4 | 7π/8
0 | - | 1.66 | 4.14 | 6.62 | 9.10 | 10.8 | 10.8 | 10.8
π/8 | 1.66 | - | 2.48 | 4.97 | 7.45 | 9.10 | 9.10 | 9.10
π/4 | 4.14 | 2.48 | - | 2.48 | 4.97 | 6.62 | 6.62 | 6.62
3π/8 | 6.62 | 4.97 | 2.48 | - | 2.48 | 4.14 | 4.14 | 4.14
π/2 | 9.10 | 7.45 | 4.97 | 2.48 | - | 1.66 | 1.66 | 1.66
5π/8 | 10.8 | 9.10 | 6.62 | 4.14 | 1.66 | - | 0 | 0
3π/4 | 10.8 | 9.10 | 6.62 | 4.14 | 1.66 | 0 | - | 0
7π/8 | 10.8 | 9.10 | 6.62 | 4.14 | 1.66 | 0 | 0 | -
Table 10. Percent Error for CS reconstruction for each Φ for R = 1 AU. The second row shows average % error over all time samples, the third row shows average % error at the peak of each curve, and the last row shows the standard deviation of the % error at the peak.
Φ | 0 | π/8 | π/4 | 3π/8 | π/2 | 5π/8 | 3π/4 | 7π/8
Avg % Err | 0.437 | 0.441 | 0.633 | 0.743 | 0.621 | 0.348 | 0.616 | 0.582
Avg % Err at peak | 0.186 | 0.192 | 0.192 | 0.194 | 0.183 | 0.119 | 0.106 | 0.117
Std dev. % Err at peak | 0.146 | 0.190 | 0.178 | 0.183 | 0.137 | 0.287 | 0.092 | 0.083
Table 11. Percent error at peak between a microlensing photometric curve with Φ shown in the first row, compared to the photometric curve with Φ in the first column. Error values for R = 1 AU. Values in bold underline show where % error between the two curves is less than 5%.
Φ | 0 | π/8 | π/4 | 3π/8 | π/2 | 5π/8 | 3π/4 | 7π/8
0 | - | 12.7 | 27.2 | 43.7 | 18.7 | 276 | 217 | 167
π/8 | 11.3 | - | 12.8 | 27.5 | 5.34 | 234 | 181 | 137
π/4 | 21.4 | 11.4 | - | 13.0 | 6.65 | 196 | 149 | 110
3π/8 | 30.4 | 21.5 | 11.5 | - | 17.4 | 162 | 121 | 85.6
π/2 | 15.8 | 5.07 | 7.12 | 21.0 | - | 217 | 167 | 125
5π/8 | 73.4 | 70.0 | 66.2 | 61.8 | 68.4 | - | 15.7 | 29.1
3π/4 | 68.5 | 64.4 | 59.9 | 54.7 | 62.5 | 18.6 | - | 15.9
7π/8 | 62.5 | 57.7 | 52.3 | 46.1 | 55.5 | 41.0 | 18.9 | -
Table 12. Time difference in Days at peak between a microlensing photometric curve with Φ shown in the first row, compared to the photometric curve with Φ in the first column. Error values for R = 1 AU. Values in bold underline show where % error between the two curves is less than 5%.
Φ | 0 | π/8 | π/4 | 3π/8 | π/2 | 5π/8 | 3π/4 | 7π/8
0 | - | 25.9 | 51.9 | 77.8 | 88.2 | 62.3 | 36.3 | 10.4
π/8 | 25.9 | - | 25.9 | 51.9 | 62.3 | 88.2 | 62.3 | 36.3
π/4 | 51.9 | 25.9 | - | 25.9 | 36.3 | 114 | 88.2 | 62.3
3π/8 | 77.8 | 51.9 | 25.9 | - | 10.4 | 140 | 114 | 88.2
π/2 | 88.2 | 62.3 | 36.3 | 10.4 | - | 151 | 125 | 98.6
5π/8 | 62.3 | 88.2 | 114 | 140 | 151 | - | 25.9 | 51.9
3π/4 | 36.3 | 62.3 | 88.2 | 114 | 125 | 25.9 | - | 25.9
7π/8 | 10.4 | 36.3 | 62.3 | 88.2 | 98.6 | 51.9 | 25.9 | -
Table 13. FPGA modules comparison for a traditional detector and CS based detector.
  | Traditional Detector | CS Detector
1 | Data acquisition (ADC) interface | Data acquisition (ADC) interface
2 | Data storage module | Data storage module
3 | Data compression | Spatial modulation implementation
4 | Data packetization and transmission | Data packetization and transmission
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
