Review

The Rise of the Brown–Twiss Effect

by
David Charles Hyland
Independent Researcher, 1913 Sherrill Court, College Station, TX 77845, USA
Photonics 2025, 12(4), 301; https://doi.org/10.3390/photonics12040301
Submission received: 26 January 2025 / Revised: 11 March 2025 / Accepted: 21 March 2025 / Published: 25 March 2025
(This article belongs to the Special Issue Optical Imaging and Measurements: 2nd Edition)

Abstract

Despite the simplicity of flux collecting hardware, robustness to misalignments, and immunity to seeing conditions, Intensity Correlation Imaging arrays using the Brown–Twiss effect to determine two-dimensional images have been burdened with very long integration times. The root cause is that the essential phase retrieval algorithms must use image domain constraints, and the traditional signal-to-noise calculations do not account for these. Thus, the conventional formulations are not efficient estimators. Recently, the long integration times have been decisively removed by a sequence of papers. This paper is a review of the previous theoretical work that removes the long integration times, making Intensity Correlation Imaging a practical and inexpensive method for high-resolution astronomy.

1. Introduction

The first experiments on imaging techniques based upon the correlation of intensity fluctuations measured at two or more spatial–temporal points for thermal sources of light were carried out by R. Hanbury Brown and R. Q. Twiss (HBT) [1,2,3,4]. From the outset, it was clear that this first exploration in quantum optics had many advantages over conventional telescopes and amplitude interferometers. Among these advantages are the following:
HBT does not require the propagation of collected light beams to combiner units as in existing amplitude interferometers. Routing optics and beam combiners are eliminated.
HBT does not require nanometer-level control of, nor nanometer-level knowledge of, the relative positions of the optical components.
The signal-to-noise ratio (SNR) does not suffer from beam splitting and the throughput losses that plague amplitude interferometry.
HBT does not require expensive diffraction-limited optics. Photodetection hardware consists of simple, flux collecting apertures.
HBT is insensitive to atmospheric turbulence and therefore can be used by ground-based as well as space-based facilities.
Above all, each aperture transmits its own intensity fluctuation signal. All pairs of these signals are correlated via a separate computational facility. Thus, each pair of apertures is electronically coupled, so that the relative distance vectors between apertures (the baseline vectors) can be arbitrarily large, allowing for the possibility of indefinitely fine image resolution.
Unfortunately, for HBT correlators, the measurement signal is proportional to the square of the average light intensity. Most astronomical light sources are not bright enough to determine stellar diameters with reasonable integration times (hours or minutes rather than millions of hours). Until very recently, integration time has been the major obstacle for measurements of stellar diameters. A second difficulty is that HBT has been concerned with the calculation of stellar diameters modeled as radially symmetric intensity distributions that require only the measurement of coherence magnitudes. For much broader astronomical investigations, an algorithm capable of two-dimensional imaging is needed. This requires both coherence magnitude and coherence phase retrieval. Finally, in the interests of astronomy, we concentrate on imaging an illuminated body against a black sky where the positions of the black pixels are unknown but are determined by ICI.
The present discussion is a survey of a sequence of four papers [5,6,7,8] that developed an algorithm for two-dimensional imaging that enormously reduces the integration times for the HBT effect while retaining the advantages listed above. First of all, Reference [5] developed the noise reducing phase retrieval (NRPR) algorithm. This led to three papers [6,7,8] produced by the author as an independent researcher, and these three most recent papers were the most instrumental in solving the curse of the weak signal. The four articles were arranged in both topical and chronological order with rigorous technical detail. While the salient features of the mathematical development are presented herein, this paper encourages the reader to appreciate the rather simple process by which high-resolution images are determined from the extremely noisy database. We emphasize that the HBT technique has been called “Intensity Correlation Interferometry”. However, this significant step forward merits the name “Intensity Correlation Imaging” (which we designate as ICI hereafter).
The plan of the present development is as follows. In the next section, we explain the coherence magnitude measurement process and a reformulation of the HBT SNR (signal-to-noise ratio) that reflects the fact that the conditional probabilities arising from the image domain constraints (needed in the Cramér–Rao bound) must be accounted for in any estimate of measurement accuracy as a function of integration time. In other words, the conventional SNR treatments [1,2,3,4] are inappropriate in the context of two-dimensional image reconstruction. Evidence of this appeared in earlier studies, which showed that conventional estimates of integration time for many imaging examples were grossly conservative. This section explains why the ICI data collection process must utilize coherence magnitudes that are strictly nonnegative in order to correct the HBT SNR.
Next, in Section 3, we offer an overview of the noise reducing phase retrieval (NRPR) algorithm [5]. This section postulates a complete knowledge of the positions of the black pixels, wherein an illuminated, square foreground of a specified size is centered amid the constrained black pixels of the background. This is termed the “Box”. Using NRPR to compute a multitude of “Box” sizes, we present computational studies that examine the correct integration time under the assumption that integration times are the same for all aperture pairs (i.e., all baselines in the u-v plane). The results show an enormously more precise estimate for integration time as a function of image size and accuracy.
Section 4 describes the technique whereby the black pixels in the background, given a specified “Box” size, are able to determine the zero-noise image. The process embeds the NRPR algorithm (equipped with the correct integration times) within a stochastic search algorithm [8]. By varying the box sizes on each NRPR run, the occurrence of two completely correlated images (aside from 180-degree rotations and translations) indicates the correct zero-noise image. The correct identification is attained via very few iterations.
Section 5 discusses theorems that lead to an even faster identification of the correct, zero-noise image. Then, in Section 7, we discuss further possibilities for both the advancement of the ICI algorithm within the photonic community and its application to advanced image resolution.

2. ICI Data Collection Process

For simplicity, we postulate a set of flux collecting apertures forming a square grid on the u-v plane. Furthermore, the parameters of all apertures, including the integration times (denoted by $\Delta T$), are identical. For a given pair of apertures, the fundamental data collected by Brown and Twiss to obtain an estimate of the coherence magnitude are a running time average of the intensity fluctuation cross-correlation as measured by each pair of intensity sensors. We designate $\mathbf{u}$ as the baseline vector in the u-v plane associated with the two apertures. Here, we reconsider the statistics of intensity fluctuation cross-correlation that prevail when the object of study has very low spectral radiance. For the present discussion, we assume that (1) the averaging time, $\Delta T$, is much larger than the detector response time, $T_d$; (2) the field is quasi-monochromatic, i.e., it is confined to a narrow spectral band, $\Delta\nu_c$, centered at frequency $\bar{\nu}$ such that $\Delta\nu_c \ll \bar{\nu}$; (3) the light is approximately cross-spectrally pure, i.e., the normalized mutual coherence takes the form $\gamma(\mathbf{r}_1, t_1; \mathbf{r}_2, t_2) = \gamma(\mathbf{r}_1, \mathbf{r}_2, 0)\,\gamma(t_2 - t_1)$, which implies that the coherence time of the light satisfies $T_c = \int |\gamma(\tau)|^2 \, d\tau = 1/\Delta\nu_c$ ($\Delta\nu_c$ denoting the optical bandwidth); (4) the detector is slow, i.e., $T_d \gg T_c$; and (5) the thermal light radiated by the targets to be imaged yields very few photodetection events per second per Hertz (a restriction almost universally valid). For simplicity of exposition, we also assume that the intensity and mutual coherence magnitudes are approximately constant over the spatial extent of each aperture pair. In other words, the partial coherence factor calculated in Reference [2] is approximately unity.
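The assumptions above fix the relevant time scales. As a quick numeric sketch (the bandwidth, center frequency, and detector response time below are illustrative assumptions, not values from the source), one can check the quasi-monochromatic and slow-detector conditions:

```python
# Sanity check of assumptions (2)-(4): quasi-monochromatic light with optical
# bandwidth dnu_c has coherence time T_c = 1/dnu_c, and the slow-detector
# regime requires T_d >> T_c.  All values here are illustrative assumptions.
dnu_c = 1e9          # optical bandwidth, Hz (assumed 1 GHz filter)
nu_bar = 5.45e14     # center frequency, Hz (~550 nm visible light)
T_c = 1.0 / dnu_c    # coherence time of the light, s
T_d = 1e-6           # detector response time, s (assumed)

assert dnu_c / nu_bar < 1e-3   # quasi-monochromatic: dnu_c << nu_bar
assert T_d / T_c > 100         # slow detector: T_d >> T_c
print(f"T_c = {T_c:.1e} s, T_d/T_c = {T_d / T_c:.0f}")
```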
Let us introduce some further notation. The symbol $\langle \cdot \rangle$ denotes the ensemble average, and $\langle \cdot \rangle_{\Delta T}$ denotes the time average, defined as follows:

$$\langle \cdot \rangle_{\Delta T} = \frac{1}{\Delta T} \int_{t - \Delta T}^{t} (\cdot)\, d\xi \qquad (1)$$

The prefixes “$\Delta$” and “$\Delta_{\Delta T}$” indicate fluctuations about the ensemble average and about the time average, respectively. Finally, let the electronic outputs of the two detectors be denoted by $J_1(t, \mathbf{u})$ and $J_2(t, \mathbf{u})$. Then, the time-averaged intensity fluctuation cross-correlation (the principal datum of the measurement) is $\langle \Delta_{\Delta T} J_1(t, \mathbf{u})\, \Delta_{\Delta T} J_2(t, \mathbf{u}) \rangle_{\Delta T}$.
The statistics of the time-averaged intensity fluctuation cross-correlation (the quantity $\langle \Delta_{\Delta T} J_1(t, \mathbf{u})\, \Delta_{\Delta T} J_2(t, \mathbf{u}) \rangle_{\Delta T}$) have been treated in great detail in [1,2,3,4]. Accordingly, assumption (5), which holds for all but the brightest objects, leads to the conclusion that in the presence of conditions (1)–(4), and at any particular integration time, $\langle \Delta_{\Delta T} J_1(t, \mathbf{u})\, \Delta_{\Delta T} J_2(t, \mathbf{u}) \rangle_{\Delta T}$ is a Gaussian random variable whose mean and standard deviation are given by the following:

$$\langle \Delta_{\Delta T} J_1(t, \mathbf{u})\, \Delta_{\Delta T} J_2(t, \mathbf{u}) \rangle_{\Delta T} = \kappa^2 \eta^2 \frac{T_d}{T_c}\, \bar{\beta}^2 \gamma(\mathbf{u})^2 + \kappa^2 \eta \frac{T_d}{T_c}\, \bar{\beta} \sqrt{\frac{T_d}{\Delta T}}\; N_{\mathbf{u}}(0,1) \qquad (2)$$

where
$\bar{\beta}$ = number density per mode $= A \bar{I} T_c$;
$A$ = aperture area;
$\bar{I}$ = number of photon arrivals per square meter, per second;
$T_c$ = correlation time of the light;
$T_d$ = response time of the detector;
$\kappa$ = magnitude of detector response over $T_d$;
$\eta$ = photon detection efficiency;
$\gamma(\mathbf{u}) \equiv |\gamma(\mathbf{r}_1, \mathbf{r}_2, 0)|$ = the normalized magnitude of optical coherence.
$N_{\mathbf{u}}(0,1)$ is a zero mean, unit variance Gaussian random variable, where $N_{\mathbf{u}}(0,1)$ and $N_{\mathbf{u}'}(0,1)$ are statistically independent if $\mathbf{u} \neq \mathbf{u}'$. Note that as the integration time approaches infinity, $\langle \Delta_{\Delta T} J_1(t, \mathbf{u})\, \Delta_{\Delta T} J_2(t, \mathbf{u}) \rangle_{\Delta T}$ converges to a constant multiplier times the magnitude squared of the normalized degree of optical coherence, $\gamma(\mathbf{u})^2$. This is the primary discovery of Brown and Twiss for the case of thermal light.
At this point, we observe that the coefficient of the noise term can be determined at the outset by calibrating the detectors, using the aperture parameters, and measuring the intensity observed by the individual aperture sensors. In other words, the quantity $\kappa^2 \eta \bar{\beta}\, (T_d/T_c) \sqrt{T_d/\Delta T}$ can be obtained before any intensity fluctuation cross-correlation measurements are attempted. Hence, it is convenient to divide $\langle \Delta_{\Delta T} J_1(t)\, \Delta_{\Delta T} J_2(t) \rangle_{\Delta T}$ by this factor, thereby normalizing the noise component to unity standard deviation. Therefore, we can write the equation as follows:
$$S(\mathbf{u}) = \frac{\langle \Delta_{\Delta T} J_1(t, \mathbf{u})\, \Delta_{\Delta T} J_2(t, \mathbf{u}) \rangle_{\Delta T}}{\kappa^2 \eta \bar{\beta}\, (T_d/T_c) \sqrt{T_d/\Delta T}} = S_0 \sqrt{\Delta T}\, \gamma(\mathbf{u})^2 + N_{\mathbf{u}}(0,1), \qquad S_0 = \eta A \bar{I} T_c \sqrt{1/T_d} \qquad (3)$$
$S_0$ may be recognized as the zero-baseline SNR for a unit integration time, which can be determined at the outset via measurements from the individual apertures and equipment calibrations. It is important to note that the statistical properties of $S(\mathbf{u})$ and $S_0$ are completely determined by the parameters $\kappa$, $\eta$, $T_d$, $T_c$, $A$, $\bar{I}$, and $\bar{\beta} = A \bar{I} T_c$, aside from $\gamma(\mathbf{u})^2$ and $\Delta T$. Any statistical data regarding supposedly specific radio astronomy, adaptive optics, or other noise distributions are carried within these parameters.
Clearly, the meaning of $S(\mathbf{u})$ is the same as the time-averaged cross-correlation, and they are interchangeable. However, $S(\mathbf{u})$ is more easily interpreted, as it is the “empirical SNR” that one can use to compare the signal component of the measurement to the noise component, whose standard deviation is always one. Hence, as the integration time increases, one can compare $S_0 \sqrt{\Delta T}\, \gamma(\mathbf{u})^2$ to unity to ascertain if the desired SNR is reached with a 50% probability. $S(\mathbf{u})$ can be used to illustrate the process of experiment design and data gathering and demonstrates the essential differences between the measurement of stellar diameters versus the capture of two-dimensional images.
Let us discuss the initial experiment design as practiced by Brown and Twiss. Obviously, one cannot prolong the running time average to an impractical degree. At the outset, one must estimate the length of the final time average, denoted here by $\Delta T$, so that the final empirical SNR is roughly of the same magnitude as the desired SNR, $SNR_d$, allowing for an approximately unit standard deviation fluctuation. Based upon the above description of the estimation process, Equation (3) leads to the following specification of the limiting integration time, denoted by $\Delta T$:
$$\Delta T = \left( \frac{SNR_d}{S_0} \right)^2 \frac{1}{\gamma_{est}^4} \qquad (4)$$
where $S(\mathbf{u})$ has been replaced by $SNR_d$ in (3) to arrive at this expression, and $\gamma_{est}$ is some estimate of the coherence magnitude. As expected, any reduction of the factor $S_0$ (traditionally characterized by the inefficiency factor) lengthens the necessary integration time, so that accurate equipment calibration is essential. We also see that the design of in-the-field measurements needs an a priori estimate $\gamma_{est}$.
The process adopted by Brown and Twiss to measure stellar diameters involves measuring only the coherence magnitudes and curve fitting the results to a simple, radially symmetric model of the stellar intensity pattern. Because of the simplicity of the technique, one finds that reasonable accuracy for the Brown–Twiss calculation may be obtained using a $\gamma_{est}$ close to unity (note that the normalized coherence has the properties $\gamma(\mathbf{u}) > 0$ and $\max_{\mathbf{u}} \gamma(\mathbf{u}) = \gamma(\mathbf{u} = 0) = 1$). In fact, some equalities in Refs. [1,2,3,4] actually set the factor $\gamma(\mathbf{u})$ to unity, although a more precise estimate is likely to be $\gamma_{est} \approx 0.2$–$0.3$. This estimate of integration time is roughly correct for stellar interferometry and provides a good starting value for the needed integration time. Also, the quantity $S_0 \sqrt{\Delta T}$ may be considered an efficient estimator of the coherence magnitude squared in that it is inherently positive and does not depend on the value of the quantity being estimated (namely the normalized magnitude of the optical coherence). In contrast, a fully two-dimensional image of a complex object, based on interferometric experience, requires at least $\gamma_{est} \approx 0.01$–$0.001$. Thus, in this case, the method used by Brown and Twiss to determine a two-dimensional image is clearly inappropriate.
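The practical consequence of Equation (4) is the fourth-power dependence on $\gamma_{est}$. A small sketch (with assumed, purely illustrative values for $SNR_d$ and $S_0$) compares the stellar-diameter regime with the two-dimensional imaging regime:

```python
# Hedged sketch of the Brown-Twiss integration-time estimate, Eq. (4):
# dT = (SNR_d / S0)**2 / gamma_est**4.  The values of snr_d and s0 are
# assumptions chosen only to expose the gamma_est**-4 scaling.
def integration_time(snr_d, s0, gamma_est):
    """Limiting integration time for a desired SNR (Brown-Twiss estimate)."""
    return (snr_d / s0) ** 2 / gamma_est ** 4

snr_d, s0 = 3.0, 1.0
dt_stellar = integration_time(snr_d, s0, 0.25)   # stellar-diameter regime
dt_imaging = integration_time(snr_d, s0, 0.01)   # two-dimensional imaging
print(dt_imaging / dt_stellar)  # ratio scales as (0.25/0.01)**4 ~ 3.9e5
```

The nearly six-order-of-magnitude blow-up in this toy comparison is why the conventional estimate renders two-dimensional imaging impractical.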
The inconvenient aspect of the raw, noisy data delivered by the empirical SNR is that it cannot be guaranteed to be nonnegative, and therefore, the noisy data cannot be considered an efficient estimator for the purpose of two-dimensional imaging. Further, the Cramér–Rao bound entails that the Fisher information include conditional probabilities that are necessary for phase retrieval algorithms that demand positivity. Indeed, $S(\mathbf{u})$ can assume negative values with high probability when $\Delta T$ is sufficiently small. Before proceeding further, we remedy this by formulating a nonnegative function of the measurements, without somehow ignoring negative-valued data. We take some pairs of independent measurements, say, the following:
$$S_1(\mathbf{u}) = S_0 \sqrt{\Delta T}\, \gamma(\mathbf{u})^2 + N_{\mathbf{u}1}(0,1), \qquad S_2(\mathbf{u}) = S_0 \sqrt{\Delta T}\, \gamma(\mathbf{u})^2 + N_{\mathbf{u}2}(0,1) \qquad (5)$$
then multiply one of them by the unit imaginary, add the results, evaluate the square root of the magnitude, and divide by $S_0 \sqrt{2 \Delta T}$, which is known in advance, to obtain the following:
$$\hat{G}(\mathbf{u}, \Delta T_{\mathbf{u}}) = \left[ \frac{1}{S_0 \sqrt{2 \Delta T_{\mathbf{u}}}} \left| S_1(\mathbf{u}) + i\, S_2(\mathbf{u}) \right| \right]^{1/2} \qquad (6)$$
where the nonnegative quantity, $\hat{G}(\mathbf{u}, \Delta T)$, now takes the place of the coherence magnitude measurement. Substituting the expressions in (5) for $S_1(\mathbf{u})$ and $S_2(\mathbf{u})$, we have the following:
$$\hat{G}(\mathbf{u}, \Delta T) = \gamma(\mathbf{u}) \left| 1 + \frac{1}{\gamma(\mathbf{u})^2 S_0 \sqrt{\Delta T}} \cdot \frac{1}{1+i} \left[ N_{\mathbf{u}1}(0,1) + i\, N_{\mathbf{u}2}(0,1) \right] \right|^{1/2} \qquad (7)$$
However, $\frac{1}{1+i} \left[ N_{\mathbf{u}1}(0,1) + i\, N_{\mathbf{u}2}(0,1) \right]$ is equivalent to $\left[ N_{1\mathbf{u}}(0,1) + i\, N_{2\mathbf{u}}(0,1) \right] / \sqrt{2}$, where $N_{1\mathbf{u}}(0,1)$ and $N_{2\mathbf{u}}(0,1)$ are independent Gaussian random variables with zero mean and unit variance. Therefore, the following nonnegative quantity may be taken as a single measurement:
$$\hat{G}(\mathbf{u}, \Delta T_{\mathbf{u}}) = \gamma(\mathbf{u}) \left| 1 + \frac{N_{1\mathbf{u}}(0,1) + i\, N_{2\mathbf{u}}(0,1)}{S_0 \sqrt{2 \Delta T}\, \gamma(\mathbf{u})^2} \right|^{1/2} \qquad (8)$$
The quantity $S_0 \sqrt{2 \Delta T}\, \gamma(\mathbf{u})^2$ is the SNR of the combined measurement at position $\mathbf{u}$ and observation time $\Delta T$. It is possible to control the relative SNR values of the various baseline measurements used as image reconstruction data. This can be calculated by taking an a priori estimate of $\gamma(\mathbf{u})^2$, calling it $\gamma_{est}^2$, and choosing to stop observation at duration $\Delta T$ such that the signal-to-noise ratio is $1/\sigma$:
$$\Delta T = \frac{1}{2 \sigma^2 S_0^2 \gamma_{est}^4}, \qquad \sigma \equiv 1/SNR \qquad (9)$$
where $\sigma$ is the same positive constant for all $\mathbf{u}$. Consequently, $\hat{G}(\mathbf{u}, \Delta T_{\mathbf{u}})$ becomes
$$\hat{G}(\mathbf{u}, \Delta T) = \left| \gamma(\mathbf{u})^2 + \sigma \gamma_{est}^2 \left[ N_{1\mathbf{u}}(0,1) + i\, N_{2\mathbf{u}}(0,1) \right] \right|^{1/2} \qquad (10)$$
We remark that the formulation in terms of $\gamma_{est}$ permits the introduction of whatever a priori information on the image might be available (overall dimensions, foreground configuration, etc.).
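The construction in Equations (5)–(8) is easy to verify numerically. The following sketch (all parameter values are illustrative assumptions) draws many independent pairs $(S_1, S_2)$, forms $\hat{G}$ as above, and confirms that the estimator is nonnegative by construction and that $\hat{G}^2$ concentrates on $\gamma(\mathbf{u})^2$:

```python
# Monte Carlo sketch of the nonnegative measurement, Eqs. (5)-(6): two
# independent noisy correlations are combined as sqrt(|S1 + i*S2| / norm),
# so the result can never be negative, while its square concentrates on
# gamma**2.  All parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
gamma, s0, dT, n = 0.3, 5.0, 400.0, 200_000

s1 = s0 * np.sqrt(dT) * gamma**2 + rng.standard_normal(n)   # Eq. (5)
s2 = s0 * np.sqrt(dT) * gamma**2 + rng.standard_normal(n)
g_hat = np.sqrt(np.abs(s1 + 1j * s2) / (s0 * np.sqrt(2.0 * dT)))  # Eq. (6)

assert np.all(g_hat >= 0.0)            # nonnegative by construction
print(np.mean(g_hat**2), gamma**2)     # sample mean approaches gamma**2
```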
Before leaving this subject and for later use, we establish the formulation, based upon (10), that is closest to the traditional practice, namely the situation wherein there is no a priori information on $\gamma(\mathbf{u})$ for any baseline. Under this circumstance, it is obvious that the desired SNR for all baselines cannot be obtained unless
$$\gamma_{est}(\mathbf{u}) = \min_{\mathbf{u}} \gamma(\mathbf{u}) \equiv \gamma_{\min} \qquad (11)$$
in which case the noise model becomes
$$\Delta T = \frac{1}{2 \sigma^2 S_0^2 \gamma_{\min}^4}, \quad \forall \mathbf{u}: \quad \hat{G}(\mathbf{u}, \Delta T) = \left| \gamma(\mathbf{u})^2 + \sigma \gamma_{\min}^2 \left[ N_{1\mathbf{u}}(0,1) + i\, N_{2\mathbf{u}}(0,1) \right] \right|^{1/2} \qquad (12)$$
To close this section, we recognize the most important difference between stellar diameter determination via ICI and two-dimensional imaging by ICI. In the former, only measurements of coherence magnitude are made, while by virtue of the Van Cittert–Zernike theorem, the latter requires both magnitude and phase. Hence, phase retrieval algorithms are needed that use additional image domain information to supplement the lack of available phase information. This process must operate in the presence of very significant noise. The next section reviews the noise reducing phase retrieval (NRPR) algorithm, which has been shown [5] to be effective in handling very large amounts of noise.
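A tiny numerical illustration of why phase retrieval is unavoidable: by the Van Cittert–Zernike theorem, the coherence is the (normalized) Fourier transform of the intensity distribution, and by the Fourier shift theorem, coherence magnitudes alone cannot even distinguish translated sources. The grid size and source placement below are arbitrary assumptions:

```python
# Van Cittert-Zernike in miniature: coherence = normalized Fourier transform
# of the intensity pattern.  Magnitude-only data discard the phase that
# two-dimensional imaging needs; a translated source has identical magnitudes.
import numpy as np

img = np.zeros((16, 16))
img[4:7, 9:12] = 1.0                             # off-center source (assumed)
coh = np.fft.fft2(img) / img.sum()               # normalized coherence
assert np.isclose(np.abs(coh[0, 0]), 1.0)        # zero baseline -> unity

shifted = np.roll(img, (3, -2), axis=(0, 1))     # translated source
coh2 = np.fft.fft2(shifted) / shifted.sum()
print(np.allclose(np.abs(coh), np.abs(coh2)))    # True: magnitudes blind to shift
```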

3. Noise Reducing Phase Retrieval Algorithm and Reduction of Integration Times

In References [5,6], we described a new phase retrieval algorithm and its application to imaging in the context of ICI. The algorithm recognizes that much of the noise in the averaged data is inconsistent with the image domain constraints and can be rendered harmless if the Fourier domain and image domain constraints can be made to intersect. The algorithm accepts the noisy coherence magnitude data and uses a relaxation technique to project these data onto a subspace wherein the image domain constraints can be satisfied. When it is run to completion for a single set of coherence magnitude data, we have shown by computational examples that the impact of much of the noise is eliminated, even for extremely large amounts of noise. For this section, it is assumed that the positions of the black pixels are known a priori.
As mentioned previously, we assume that measurements of the coherence magnitude are accumulated at points on a square grid, N pixels on the side. The image is likewise pixelated into a square grid. Thus, both the image and the coherence magnitudes compose nonnegative matrices of the same dimensions. For this discussion, it is supposed that the image consists of a bright object against a black (zero intensity) sky and that we correctly identify a boundary enclosing the bright object, such that the object cannot be translated or rotated by 180° without being clipped. The foreground and background pixels are defined as the pixel sets that are contained within and outside of, respectively, the boundary. It is assumed that the size of the background is roughly three times the size of the foreground.
It is convenient to consider both image and coherence as $N^2$-dimensional vectors. To be specific, if $g_{k,j}$ denotes the two-dimensional matrix representing the image, then $g \in \mathbb{C}^{N^2} = \mathrm{vec}(g_{k,j})$, where $\mathrm{vec}(M)$ is the Kronecker algebra operator that stacks the columns of the two-dimensional matrix into a single-column vector. We further define the following notation:
$g \in \mathbb{C}^{N^2}$ = current value of the estimated image (pixelated);
$\bar{g} \in \mathbb{R}^{N^2}$ = the true image;
$\tau \in \mathbb{R}^{N^2 \times N^2}$ = projection onto the image pixels constrained to have zero intensity ($\tau_{kk} = 1$), while all other pixels are unconstrained ($\tau_{jj} = 0$). $\qquad$ (13)

$\mathcal{F} \in \mathbb{C}^{N^2 \times N^2}$ = discrete Fourier transform (unitary matrix, $\mathcal{F}^{-1} = \mathcal{F}^{H}$);
$\bar{G} = \mathcal{F} \bar{g}$ = the true coherence;
$\hat{G} \in \mathbb{R}^{N^2}$ = measured coherence magnitude. $\qquad$ (14)
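The $\mathrm{vec}(\cdot)$ column-stacking operator used in the notation above corresponds to NumPy's Fortran-order reshape; a minimal check:

```python
# vec(M) stacks the columns of a matrix into one vector, which NumPy
# expresses as a reshape in Fortran (column-major) order.
import numpy as np

M = np.array([[1, 2],
              [3, 4]])
vec_M = M.reshape(-1, order="F")   # stack columns into a single vector
print(vec_M)                       # [1 3 2 4]
```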
In this discussion, the measured coherence magnitudes are assumed to have the form (12). Thus, the integration time is presumed to be the same for all baselines. Occasionally, we may wish to work with the two-dimensional matrix forms of $g$, $\bar{g}$, $\bar{G}$, etc. These are distinguished by an under-bar, as in $\underline{g}$, etc., and the Fourier transform of $\underline{g}$ is denoted by $\underline{\mathcal{F}} \circ \underline{g}$, where $\circ$ indicates the Hadamard product.
The following phase retrieval algorithm [5] is considered:
Generate the noisy data batch $\hat{G}(\mathbf{u}, \Delta T)$ using observation times $\Delta T$:
$$\Delta T = \frac{1}{2 \sigma^2 S_0^2 \gamma_{\min}^4}, \quad \forall \mathbf{u}: \quad \hat{G}(\mathbf{u}, \Delta T) = \left| \gamma(\mathbf{u})^2 + \sigma \gamma_{\min}^2 \left[ N_{1\mathbf{u}}(0,1) + i\, N_{2\mathbf{u}}(0,1) \right] \right|^{1/2}$$
Initialize $g$: $g = \delta\, \mathrm{rand}(N^2)$, $\delta = 10^{-6}$, with each element of $\mathrm{rand}(N^2)$ independently and uniformly distributed in $[0,1]$. $\qquad$ (15)
[Equation (16), the six-step NRPR recursion (steps A–F), appears as an image in the original; the steps are described in the text below.]
where $\tau_{\perp} = I - \tau$ and $\circ$ denotes the Hadamard product. $\| \cdot \|_F$ denotes the Frobenius norm. If $A$ is a complex-valued vector or matrix, $|A|$ denotes the matrix wherein each element of $A$ is replaced by its absolute magnitude. Likewise, $\mathrm{Re}(A)$ is the result of replacing each element of $A$ by its real part. If $A$ and $B$ are real matrices of equal dimension, then each element of $\max(A, B)$ is the larger of the corresponding elements of $A$ and $B$. Element-by-element inequalities are denoted by double inequality symbols, e.g., $A \gg B$, $A \ll B$, etc. Finally, $\mathrm{rand}(N^2)$ is a vector of statistically independent, uniform random variables in $[0,1]$.
A brief explanation of the algorithm is in order. Steps B, C, D, and F are taken from the Hybrid Input/Output (HIO) algorithm of Fienup [9]. For small or zero-noise data, HIO converges rapidly, whereas other methods are prone to very slow convergence. On the other hand, if the noise exceeds unity, HIO fails to converge and in fact sustains rapid oscillatory motions. Thus, steps B and E are necessary.
The fundamental input to the algorithm is a noisy batch of coherence magnitude measurements as formulated in (12). The algorithm begins with a random guess at the image such that each pixel intensity is uniformly distributed within a small interval, $[0, \delta]$. There follows the core of NRPR, consisting of a recursive algorithm of six steps. Step A merely subjects the previous value of the image to Fourier transformation to produce the corresponding coherence (both amplitude and phase). Step B is crucial to noise reduction. It modifies the coherence magnitude data by discounting slightly (through the relaxation parameter $\varepsilon$) the existing data values while adding $\varepsilon$ times the magnitude of the latest coherence estimate. Useful values of $\varepsilon$ are typically $10^{-5}$–$10^{-6}$. This step permits the joint convergence of both the image domain constraints and the modified coherence magnitude data, which would otherwise be impossible in the presence of significant noise. An analysis of the algorithm [5] shows that from iteration to iteration, the noise in the image is inexorably reduced relative to the signal component. The remaining steps, except for E, are very similar to the HIO algorithm of Fienup [9]. Step C takes $G = \mathcal{F} g$ and preserves its phase but replaces its magnitude with the modified coherence magnitude data to obtain the preliminary coherence estimate $G_p$. This is essential for the gradual reduction of the constrained pixel intensities. Step D calculates the corresponding preliminary image, $g_p$. Step E speeds convergence by ensuring that the constrained pixel intensities are real-valued and the unconstrained intensities are real and nonnegative. Finally, step F reduces the intensities of the constrained pixels in accordance with the HIO algorithm. The parameter $\beta$ equals 0.7. After each iterate of the recursive algorithm, the 1-norm of the constrained pixel intensities, $E_g = \| \tau g \|_1$, is computed. In this work, convergence to an image having errors undetectable by visual inspection is obtained with $E_g < 10^{-3}$.
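For concreteness, the recursion described above can be sketched as follows. This is a hedged reconstruction from the textual description of steps A–F (the authoritative statement is Equations (15)–(16) of Ref. [5]); the field size, the noise-free data, and the function name `nrpr` are illustrative assumptions:

```python
# Hedged Python sketch of the NRPR recursion, reconstructed from the textual
# description of steps A-F above (not the authors' code).  eps and beta follow
# the text; the toy field and noise-free data are illustrative assumptions.
import numpy as np

def nrpr(g_meas_mag, tau, n_iter=500, eps=1e-5, beta=0.7, seed=0):
    """g_meas_mag: measured coherence magnitudes (N x N, nonnegative).
    tau: boolean mask of pixels constrained to zero intensity."""
    rng = np.random.default_rng(seed)
    g = 1e-6 * rng.random(g_meas_mag.shape)          # random initial image
    G_mag = g_meas_mag.astype(float).copy()
    for _ in range(n_iter):
        G = np.fft.fft2(g)                            # step A: to coherence
        G_mag = (1 - eps) * G_mag + eps * np.abs(G)   # step B: relax the data
        Gp = G_mag * np.exp(1j * np.angle(G))         # step C: keep phase only
        gp = np.fft.ifft2(Gp)                         # step D: back to image
        gp = np.real(gp)                              # step E: real-valued
        g = np.where(tau,
                     g - beta * gp,                   # step F: HIO update
                     np.maximum(gp, 0.0))             # E: nonnegative elsewhere
    return g

# Toy run: 16x16 field, bright 4x4 foreground, known black background.
truth = np.zeros((16, 16))
truth[6:10, 6:10] = np.random.default_rng(3).random((4, 4))
tau = truth == 0.0                                    # known constrained pixels
mags = np.abs(np.fft.fft2(truth))                     # noise-free data batch
g_rec = nrpr(mags, tau)
print(np.abs(g_rec[tau]).sum())                       # E_g, constrained 1-norm
```

With noisy data of the form (12), the same loop applies; the relaxation in step B is what allows the Fourier and image domain constraints to intersect.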
In Reference [5], various properties of the algorithm were explored. In particular, when run to convergence, the algorithm is equivalent to an operator that projects the coherence magnitude measurement data into the subspace that is consistent with the image domain constraints, thereby eliminating a large portion of the coherence magnitude measurement noise. These conclusions do not depend upon the noise model.
Before leaving this section, we determine the value of the reciprocal SNR, $\sigma$. Ref. [8] gives an overview of imaging examples using the integration times as calculated by Brown and Twiss. These results reveal that the small features of the image converge first. Hirschman entropic uncertainty analysis [10,11,12] shows that the Brown–Twiss integration time is much larger than needed. To correct this, $\sigma$ must include terms involving the constrained pixels. Also, to treat all possible factors, the number of constrained pixels must appear in all of the terms $N_{1\mathbf{u}}(0,1) + i\, N_{2\mathbf{u}}(0,1)$ in $\hat{G}(\mathbf{u}, \Delta T)$. Finally, since $\Delta T$ is a constant for all coherence magnitude measurements, we conclude the following:
$$\sigma = \frac{N_C}{C}, \qquad N_C = \text{the number of constrained pixels}, \quad C = \text{a constant of } O(1) \qquad (17)$$
Now, we proceed to determine the value of $C$. The validity of (17) is explored using randomly generated image examples capable of encompassing a multitude of image sizes. As mentioned previously, we assume that there is an illuminated foreground that contains the object to be imaged and that it is centered within the field of view, while the background contains the constrained black pixels ($\tau_{kk} = 1$). The boundary of the foreground/background can be almost any convex contour, but as it turns out, the most convenient shape is a square with $B_x$ pixels on the side. This is the contour that separates the unconstrained pixels in the foreground from the black pixels in the background. Reference [7] designates this contour as the box.
Now, we consider the constant $C$. With the above foreground/background prescription, we run the NRPR algorithm (15)–(16) for many box sizes, some of which are depicted in Figure 1. To prevent 180-degree rotation of the image during NRPR convergence, a dark square “tab” is placed in the upper left corner of the foreground, within the box.
The intensity of each pixel of the foreground is uniformly distributed in $[0,1]$, with all pixels statistically independent. Model (12) is implemented through the precise calculation of $\gamma_{\min} = \min_{\mathbf{u}} \gamma(\mathbf{u})$ in every case. For each $B_x$, the algorithm is run 20 times for a selected $\sigma$ value, and the convergence is checked to see if correct convergence is achieved in at least 18 out of 20 cases. We note that NRPR always converges so that the background constraints are satisfied, while “correct” convergence is deemed to occur if the foreground image is in agreement with the actual image to the extent that the 1-norm of the error is less than 0.001. Failure to correctly converge is accompanied not only by significant foreground error but also by a very noticeable decline in the average intensity of the foreground.
Using this definition of convergence failure, we start with $\sigma = 1$, then increase $\sigma$ until correct convergence fails in more than 10% of cases (when $\sigma = 1$, there was 100% convergence in all cases). Then, the next lowest value of $\sigma$ is recorded. This process is undertaken for each value of $B_x$. We then summarize the results in Figure 2, which shows the largest values of $\sigma$ for which convergence succeeds at least 90% of the time as a function of $B_x$. The resulting variation of $\sigma$ with $B_x$ is approximately as follows:
$$\sigma \approx \left[ (1.2 \pm 0.011)\, B_x \right]^2 \qquad (18)$$
Of course, by the reasoning of (17), the increase in $\sigma$ is due to the number of image domain constraints (the number of constrained pixels), denoted here by $N_C$. By virtue of the foreground/background proportions, $N_C = 8 B_x^2$. Consequently, we have the following:
$$\sigma \equiv \frac{1}{SNR_C} = \frac{1}{8} (1.2 \pm 0.011)^2 N_C \approx \frac{N_C}{4\sqrt{2}}, \qquad \Delta T \approx \frac{1}{S_0^2 \gamma_{\min}^4} \left( \frac{4}{N_C} \right)^2 \qquad (19)$$
where $SNR_C$ denotes the minimum SNR required for NRPR convergence. This result substantially validates [6]. Note that in comparison with the conventional value of $\Delta T$ corresponding to a desired SNR of unity, the integration time is diminished by the factor $N_C^2/32$. For a relatively small image that is only 100 pixels on a side, this factor amounts to slightly more than two hundred thousand. Thus, the $\sigma$ value is an asymptotic result.
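The $N_C^{-2}$ scaling of Equation (19) is easy to tabulate ($S_0$ and $\gamma_{\min}$ below are illustrative assumptions):

```python
# Hedged check of the integration-time scaling in Eq. (19): with the
# convergence threshold sigma ~ N_C/(4*sqrt(2)), the required time
# dT = (1/(S0**2 * gamma_min**4)) * (4/N_C)**2 falls off as N_C**-2.
# The values of s0 and gamma_min are illustrative assumptions.
import math

def dt_nrpr(s0, gamma_min, n_c):
    return (1.0 / (s0**2 * gamma_min**4)) * (4.0 / n_c) ** 2

s0, gamma_min = 1.0, 0.01
for b_x in (10, 50, 100):             # box sizes; N_C = 8 * B_x**2 (Section 3)
    n_c = 8 * b_x**2
    print(b_x, n_c, dt_nrpr(s0, gamma_min, n_c))

# quadrupling the number of constraints cuts the integration time 16-fold
assert math.isclose(dt_nrpr(s0, gamma_min, 800) / dt_nrpr(s0, gamma_min, 3200), 16.0)
```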
It is also evident that the above relations lead to a simple criterion for NRPR convergence, namely that the product of $SNR_C \equiv 1/\sigma$ and the number of constraints has a nonzero, positive lower bound:
$$N_C \cdot SNR_C \geq 4\sqrt{2} \qquad (20)$$
To test whether or not the above integration time results are sensitive to image content, we examined several non-random images using the same foreground/background arrangement and computational procedure as is described following Equation (19). Figure 3 depicts the noise-free images being tested, where only the foreground in each case is shown. Corresponding to each recovered image, the stars indicate the largest values of $1/\sigma$ for which convergence succeeds at least 90% of the time as a function of $B_x$. The straight line shows the previously derived estimate for $1/\sigma$. At least for the cases tested, there appears to be little sensitivity to image content.
The conclusion to this section is that the formula for the computation of the coherence magnitude values is as follows:
$$\Delta T \approx \frac{1}{S_0^2 \gamma_{\min}^4} \left( \frac{4}{N_C} \right)^2, \quad \forall \mathbf{u}: \quad \hat{G}(\mathbf{u}, \Delta T) = \left| \gamma(\mathbf{u})^2 + \frac{N_C}{4\sqrt{2}}\, \gamma_{\min}^2 \left[ N_{1\mathbf{u}}(0,1) + i\, N_{2\mathbf{u}}(0,1) \right] \right|^{1/2} \qquad (21)$$
Note that the ratio of the above integration time to the Brown–Twiss calculation is as follows:
$$\frac{\Delta T_{NRPR}}{\Delta T_{Brown\text{–}Twiss}} \approx \left( \frac{4}{N_C \cdot SNR_d} \right)^2 \qquad (22)$$
where $SNR_d$ is typically 10 to 100. Hence, this ratio is many orders of magnitude smaller than unity.
A limitation of the present section is that the set of constrained pixels is determined a priori. To apply ICI to image space objects of unknown configuration, we must be able to discover the constrained set without prior knowledge (except for the fact that we are dealing with a bright object against a black sky). This is treated in the following section.

4. NRPR Characteristics and Stochastic Search

Having equipped the NRPR algorithm with a box size and its appropriate integration time, we can run NRPR to completion for multiple box sizes. The outcome of each run is a hypothesis concerning the correct, zero-noise image. In this section, we show how to embed NRPR within a stochastic search mechanism that quickly discovers the true image. We begin with the stochastic search algorithm approach introduced in [7] and shown in Figure 4.
The first step is to collect the intensity fluctuation data and compute all the cross-correlations, using the correct integration time. The stochastic search algorithm uses only these data. The first stage of the search algorithm (shown in green) sets up a “Box” of size B x . The second loop (shown in red) runs NRPR up to n > 1 times. Each run uses an initial guess of the image such that the intensities of all pixels are statistically independent and uniformly distributed in 0 , δ , where δ is a small real and nonnegative number. The second part of the loop ends by updating the image correlation matrix. Then (shown in black), we test the correlation matrix to see if there is a pair of fully correlated images among the n NRPR trials (allowing for 180-degree rotations and translations in the field of view). If this is not the case, then we set up a new box size and try again. If there are two correlated images, then we stop the stochastic search because as we will show, both correlated images are the zero-noise image. Generally, excellent image quality is attained when the image domain constraint error, E τ , is less than 10 3   g .
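The two-loop structure just described can be sketched as follows. This is an illustrative Python skeleton, not the published code: `run_nrpr` is a hypothetical stand-in for a complete NRPR run, and the simple correlation test below ignores the 180-degree rotation and translation ambiguities for brevity.

```python
import numpy as np

def stochastic_search(coherence_data, box_sizes, n_trials, run_nrpr,
                      corr_tol=1e-3, delta=1e-6, rng=None):
    """Skeleton of the stochastic search loop described in the text.

    run_nrpr(coherence_data, box_size, g0) is an assumed callable returning
    a converged NRPR image; it is not part of the published interface.
    """
    rng = rng or np.random.default_rng()
    n = coherence_data.shape[0]
    for bx in box_sizes:                      # green loop: set up a box of size bx
        images = []
        for _ in range(n_trials):             # red loop: repeated NRPR runs
            # initial guess: independent pixels, uniform in [0, delta)
            g0 = delta * rng.random((n, n))
            images.append(run_nrpr(coherence_data, bx, g0))
            # black step: test for a pair of fully correlated images
            for earlier in images[:-1]:
                c = np.corrcoef(earlier.ravel(), images[-1].ravel())[0, 1]
                if c > 1.0 - corr_tol:
                    return images[-1], bx     # two matching images: accept
    return None, None                         # no repetition found
```

With a `run_nrpr` that reproduces the same image on every trial, the search terminates on the second trial, mirroring the behavior described below.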
Both the analysis [6,7] and thousands of computational runs show that the NRPR algorithm always converges to some $g$. Thus, we may consider NRPR as a mapping from the initial values $\tau$, $g_0$, and $\hat{G}_0$ to the converged result, denoted by $g_c$:
$$g_c = N_g\left(\tau, \hat{G}_0, g_0\right)$$
as well as a mapping from τ ,   g 0 ,   and   G ^ 0 to G c :
$$G_c = N_G\left(\tau, \hat{G}_0, g_0\right), \qquad N_G\left(\tau, \hat{G}_0, g_0\right) \equiv \mathcal{F}\!\left[N_g\left(\tau, \hat{G}_0, g_0\right)\right]$$
One expects that the randomness of $g_0$ will result in the randomness of $g_c$, so that $g_c$ may not be the true, noise-free image $\bar{g}$. However, there is one instance where $g_c = \bar{g}$ independently of $g_0$, and that is when $\tau$ equals the true $\tau$; we call it $\bar{\tau}$. This is illustrated in Figure 5. For all cases the coherence magnitude data are fixed, and the results are all obtained after 500 iterates. The upper left shows $\tau$ equal to the true constraint matrix. The top sequence of results shows a sample of independent trials of $g_c = N_g(\tau, \hat{G}_0, g_0)$ (taken from 10,000 samples), obviously duplicating the true image repeatedly. In contrast, the lower left shows an erroneous specification of $\tau$. Repeated trials of $g_c = N_g(\tau, \hat{G}_0, g_0)$, using different random samples of $g_0$ each time, result in randomly varying images. This basic characteristic of NRPR suggests that, at the very least, the correct background region can be discovered by trial and error using NRPR as a test. Of course, a purely trial-and-error approach would entail the evaluation of an enormous number of possible trials (perhaps as many as $2^{N^2}$), especially for any reasonably large image. A quicker technique rests upon another characteristic feature of NRPR convergence, namely the following: Given random trial values of $\tau$ (that could include $\tau = 0$), there is a nonzero probability that the repeated trials will produce two or more identical repetitions of the correct image. Once such a repetition occurs, the identical images produced can be identified as the correct image. The probability of such image repetition grows with the number of correctly assigned elements of $\tau$.
Therefore, there is the possibility of identifying the correct image domain constraints with relatively few random trials.
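The test for two trial images being "identical up to the trivial ambiguities" can be sketched with an FFT-based circular cross-correlation. This is an illustrative implementation, not the published code; it handles circular translations and 180-degree rotations only.

```python
import numpy as np

def match_up_to_ambiguities(a, b, tol=1e-3):
    """True if images a and b agree up to translation and 180-degree rotation.

    Uses the cross-correlation theorem: the peak of the normalized circular
    cross-correlation reaches ~1 only for a (possibly shifted) copy.
    """
    def peak_corr(x, y):
        x = x - x.mean()
        y = y - y.mean()
        denom = np.linalg.norm(x) * np.linalg.norm(y)
        if denom == 0:
            return 1.0 if np.allclose(x, y) else 0.0
        cc = np.fft.ifft2(np.fft.fft2(x) * np.conj(np.fft.fft2(y))).real
        return cc.max() / denom

    rot = np.rot90(b, 2)  # the 180-degree rotation ambiguity
    return max(peak_corr(a, b), peak_corr(a, rot)) >= 1.0 - tol
```

For unrelated random images the normalized correlation peak is far below unity, so the test cleanly separates repetitions of the correct image from the random outcomes.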
We illustrate the algorithm’s characteristics using the simple spacecraft image first introduced in [5] and reproduced here as Figure 6. This image is set in a total field of view of 90 by 90 pixels. The NRPR parameters are specified as follows:
$$\varepsilon = 2\times10^{-5}, \qquad \beta = 0.7, \qquad \text{number of iterations} = 1000$$
The choice of these algorithm parameters warrants some explanation. First, consider $\varepsilon$. If $\varepsilon$ is less than or equal to zero, NRPR devolves into HIO and generally does not converge for large values of noise. Thus, $\varepsilon$ must be positive. An enormous number of NRPR trials on a great many different images have shown excellent performance for $\varepsilon$ in the range $1.0\times10^{-6}$ to $2.0\times10^{-5}$.
The first step in developing this example is to calculate the raw coherence magnitude data batch that will be used repeatedly in every sample run. This is given by $\hat{G}^2(u_k, \Delta T)$. The value of $\hat{G}^2(u_k, \Delta T)$ is the initial value of $\hat{G}$, which is subsequently adjusted via Step B of NRPR. For this example, we estimate $\gamma_{\min} = 1.0\times10^{-3}$ (the actual minimum turns out to be $\gamma_{\min} = 0.79\times10^{-3}$). The number of constraints, $N_C$, is given as a function of the box size $B_X$ by $N_C = 90^2 - B_X^2$.
With the raw coherence magnitude data in hand and the box size specified, the procedure is to compute a sequence of NRPR cases (each run for 1000 iterations). Each member of this sequence is an instance of g c = N g τ , G ^ 0 , g 0 . This sequence terminates when there are two images that are identical (aside from the trivial ambiguities and within 0.001 relative error), whereupon these are deemed the true, noise-free image.
We begin by showing a typical example of the image sequence for $B_X = 44$, which is a reasonable starting value. The results shown in Figure 7 are quite surprising. There are two distinct categories of image: a set of random and non-repeatable images (images 1, 2, 4, and 5) and two identical images (numbers 3 and 6) that perfectly match (within the above-mentioned error criterion) the correct noise-free image. In the latter, not only are the constrained pixels outside the box correctly prescribed, but all the truly black pixels within the box are also rendered correctly, as are the unconstrained pixels of the noise-free image.
The above phenomena can be explained by the structure of NRPR considered as a discrete-time dynamical system with state $g_k \in \mathbb{R}^{N\times N}$, $k = 1, 2, \ldots$, where $k$ is the iteration number. Here, we distinguish between the assumed value of $\tau$ and the value of $\tau$ that includes all of the black pixels belonging to the true, noise-free image, denoted by $\tau^*$. There are two primary noise reduction processes operating in NRPR. The first is Step F, which reduces the noise in the constrained region of the state space. Assuming that the specified box is not smaller than the bright object, $\tau g_k \to 0$. The second mechanism for noise reduction is Step B. The application of Step B distinguishes two subsets of the initial pixel values $g_0$. One subset evolves via the action of Step B into the noise reduction only of pixels $(k,j)$ outside the set $\{(k,j): (\tau\tau^*)_{kj} = 1\}$. This set converges to an image that retains noise in the foreground of the image, as illustrated by the random configurations in images 1, 2, 4, and 5 of Figure 7. The second set, however, is the one under which Step B reduces the noise for the entirety of the unconstrained set $\{(k,j): \tau^{\perp}_{kj} = 1\}$. In this case, the whole image converges to the noise-free image (as in images 3 and 6), and the final result is independent of the initial noise in the coherence magnitude measurements. A few more deductions can be made on the basis of this discussion. For example, the probability of obtaining two identical images from a sequence of runs should increase as the box size decreases, as long as the bright object can be contained in the box. On the other hand, if the box is too small to contain the bright object, we have an irreconcilable conflict between the image-domain and Fourier-domain constraints, so that only random images can result (as we illustrate below).
Clearly, a trained human operator can use the above stochastic search method to rapidly determine the correct constraints and the image. For example, the results for B X = 70 already indicate the approximate size of the object, so a skilled technician would go immediately to B X = 30 without passing through the intermediate steps and then secure the correct image.
To illustrate this example further, we proceed to examine the statistical characteristics of the number of trials (for a given box size) needed to secure the noise-free image. The main element determining the computational labor is the number of NRPR trials needed to identify the noise-free image for a given box size. Here, we show typical statistical estimates using histograms calculated in [7]. For each box size, we obtain 500 statistical samples. Each sample, consisting of repeated NRPR runs with statistically independent initial conditions, provides the number of independent NRPR trials, $R$, needed to obtain two images that have a normalized correlation coefficient within $10^{-3}$ of unity.
Figure 8 shows the resulting histograms of the computed number of NRPR trials, $R$, for box sizes of 70, 60, 50, 40, and 30. These histograms, denoted by $\Pr(R)$, display for each $R$ the probability of obtaining the noise-free image at repetition $R$ of the stochastic search, noting that each repetition ends with the second noise-free image. In practice, a reasonable starting value of the box size would be roughly half the size of the field of view, corresponding in this case to $B_X = 50$. As we reduce the box size down to the size of the true foreground, the lowest histogram in the figure shows that all samples arrive at the noise-free image in just two NRPR trials. When we consider that a purely random search would take on the order of $2^{N^2}$ NRPR runs, it is clear that the stochastic search algorithm is highly efficient and that the data processing time for the combined stochastic search/NRPR algorithm is quite modest.
Another observation is that for $B_X$ below 30 and greater than 16, correct convergence to the noise-free image again requires only two NRPR trials. Note that as long as the box is large enough to contain the foreground, the imaged object will be located within the box. However, for $B_X = 16$ and below, the box is too small to contain the bright object, and therefore, NRPR cannot converge to a pattern that is entirely distributed within the box. In fact, repeated trials produce completely different image patterns reminiscent of a disordered fragmentation widely distributed beyond the confines of the box. Figure 9 illustrates this behavior. The main conclusion is that the algorithm informs the user when the specified box size is too small.

5. Analysis and Refinement

Having explored the features of the stochastic search method via computational experimentation in Section 4, we review the supporting theorems discussed in Reference [8] to validate the previous results. This will also yield an even faster method of image identification. To illustrate and validate the analytical derivations presented in [8], we adopt a more complex test image, shown in Figure 10. This is a rendering of a GOES-16-type weather satellite in geostationary orbit, imaged with 20 cm resolution. For the present investigation, the algorithm parameters are fixed at
$$\beta = 0.7, \qquad \varepsilon = 2\times10^{-5}, \qquad \delta = 10^{-6}$$
and it is customary practice to choose the field of view size, N , so that it is more than three times the size of the illuminated object.
The first step is to recast Equations (15) and (16) as a discrete-time dynamic system in which all the principal quantities are indexed by the sequence of integers $k = 0, 1, \ldots$. Consider the equations pertaining to $k = 0$ and $1$:
$$\begin{aligned}
\hat{G}_{kj}(0) &\equiv \left[\gamma_{kj}^2 + \frac{N_C}{4\sqrt{2}}\,\gamma_{est}^2\, N_{kj}(0,1)\right]^{1/2}\\
g_0 &= \delta\,\mathrm{rand}(N,N), \quad \delta \approx 10^{-6}, \qquad G_1 = \mathcal{F}\!\left[\delta\,\mathrm{rand}(N,N)\right]\\
\hat{G}_1 &= (1-\varepsilon)\,\hat{G}_0 + \varepsilon\,|G_1| = \hat{G}_0 + O(\varepsilon + \varepsilon\delta)\\
\tau g_1 &= \tau\!\left[g_0 - \beta\,\mathrm{Re}\,\mathcal{F}^H\!\left(\hat{G}_0\, e^{i\pi\left(2\,\mathrm{rand}(N,N)-1\right)}\right)\right] + O(\varepsilon + \varepsilon\delta)\\
\tau^{\perp} g_1 &= \tau^{\perp}\,\mathrm{Re}_{+}\,\mathcal{F}^H\!\left(\hat{G}_0\, e^{i\pi\left(2\,\mathrm{rand}(N,N)-1\right)}\right) + O(\varepsilon + \varepsilon\delta)
\end{aligned}$$
where $\mathrm{rand}(N,N)$ is a Kronecker vector having statistically independent elements that are uniformly distributed in $(0,1)$, as defined previously. Because $\varepsilon$ and $\delta$ are very small, the residual terms are discarded for all $k$. This leads to the discrete dynamic system:
$$\begin{aligned}
\hat{G}_k(1) \approx \hat{G}_k(0) &\equiv \left[\gamma_{\min}^2 + \frac{N_C}{4\sqrt{2}}\,\gamma_{\min}^2\, N_k(0,1)\right]^{1/2}\\
\tau g_1 &= -\beta\,\tau\,\mathrm{Re}\,\mathcal{F}^H\!\left(\hat{G}_0\, e^{i\pi\left(2\,\mathrm{rand}(N,N)-1\right)}\right)\\
\tau^{\perp} g_1 &= \tau^{\perp}\,\mathrm{Re}_{+}\,\mathcal{F}^H\!\left(\hat{G}_0\, e^{i\pi\left(2\,\mathrm{rand}(N,N)-1\right)}\right)\\
k = 1, \ldots:\qquad \hat{G}_{k+1} &= (1-\varepsilon)\,\hat{G}_k + \varepsilon\left|\mathcal{F} g_k\right|\\
\tau g_{k+1} &= \tau\!\left[g_k - \beta\,\mathrm{Re}\,\mathcal{F}^H\!\left(\hat{G}_{k+1}\, e^{i\arg \mathcal{F} g_k}\right)\right]\\
\tau^{\perp} g_{k+1} &= \tau^{\perp}\,\mathrm{Re}_{+}\,\mathcal{F}^H\!\left(\hat{G}_{k+1}\, e^{i\arg \mathcal{F} g_k}\right)
\end{aligned}$$
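A minimal numpy sketch of one iterate of this system, Step B (the coherence-magnitude update) followed by Step F (the image-domain update), might read as follows. The function name and interface are assumptions; the magnitude substitution is implemented in the standard phase-retrieval fashion of imposing the stored magnitudes while retaining the current Fourier phase.

```python
import numpy as np

def nrpr_iterate(g, G_hat, tau, beta=0.7, eps=2e-5):
    """One NRPR iterate: Step B (coherence update) then Step F (image update).

    tau is 1 on constrained (background) pixels and 0 elsewhere; G_hat holds
    the current coherence-magnitude estimates. This mirrors the discrete
    dynamic system in the text; it is a sketch, not the reference code.
    """
    G = np.fft.fft2(g)
    # Step B: discount the old coherence magnitudes, blend in the latest estimate.
    G_hat = (1.0 - eps) * G_hat + eps * np.abs(G)
    # Impose the updated magnitudes while retaining the current phase.
    gp = np.fft.ifft2(G_hat * np.exp(1j * np.angle(G))).real
    # Step F: drive the constrained pixels toward zero (HIO-style feedback),
    # and apply nonnegativity on the unconstrained pixels.
    g_next = tau * (g - beta * gp) + (1.0 - tau) * np.maximum(gp, 0.0)
    return g_next, G_hat
```

When the stored magnitudes already equal $|\mathcal{F}g|$, the image-domain projection reproduces $g$ on the unconstrained pixels and scales the constrained pixels by $1-\beta$, as the system above predicts.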
Now, retaining the Kronecker vector notation, g k is represented by the column vector:
$$g_k = \begin{bmatrix} g_b^k \\ g_f^k \end{bmatrix}, \qquad g_b^k \in \mathbb{R}^{N^2 - B_x^2}, \quad g_f^k \in \mathbb{R}^{B_x^2}$$
In other words, g b k is a vector composed of the matrix elements of the background pixels, and g f k is the vector composed of the matrix elements of the foreground pixels. Corresponding to this is the following:
$$G^k \equiv \mathcal{F} g_k = \begin{bmatrix} F_b & F_{bf} \\ F_{bf}^H & F_f \end{bmatrix} \begin{bmatrix} g_b^k \\ g_f^k \end{bmatrix}, \qquad \hat{G}^k = \begin{bmatrix} \hat{G}_b^k \\ \hat{G}_f^k \end{bmatrix}$$
Here, $\begin{bmatrix} F_b & F_{bf} \\ F_{bf}^H & F_f \end{bmatrix}$ is the unitary matrix corresponding to the Fourier transform operator in the Kronecker notation. Finally, $\tau$ is represented by
$$\tau = \begin{bmatrix} I_{N^2 - B_x^2} & 0 \\ 0 & 0 \end{bmatrix}$$
Reference [8] commenced the computation of an asymptotic approximation of the background, introducing the averages of $g_b^k \in \mathbb{R}^{N^2-B_x^2}$ and $g_f^k \in \mathbb{R}^{B_x^2}$ as defined below:
$$\left\langle g_b^k \right\rangle \equiv \frac{1}{N^2 - B_x^2}\sum_{m=1}^{N^2-B_x^2} g_b^k(m), \qquad \left\langle g_f^k \right\rangle \equiv \frac{1}{B_x^2}\sum_{m=1}^{B_x^2} g_f^k(m)$$
To first order, the maximum estimate for the bound on $G_f^k$ can be found to be
$$G_f^k \leq \left(1 + \alpha\varepsilon k\right)\hat{G}_f^k, \qquad \alpha \approx 10$$
The resulting asymptotic approximation is
$$\left\langle g_b^{k+1} \right\rangle = \left(1 - \tfrac{1}{4}\beta\right)\left\langle g_b^k \right\rangle + \tfrac{1}{2}\beta\,\frac{\sqrt{A}}{N^2}\left(1 + \alpha\varepsilon k\right), \qquad \left\langle g_b^k \right\rangle\Big|_{k=0} \approx 0.05, \quad A = 30^2, \quad N = 140, \quad \alpha = 10$$
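Assuming the recursion has the form $\langle g_b^{k+1}\rangle = (1-\beta/4)\langle g_b^k\rangle + (\beta/2)(\sqrt{A}/N^2)(1+\alpha\varepsilon k)$ with the stated parameter values (a reconstruction; the reader should check it against [8]), a few lines of Python reproduce the qualitative behavior discussed next, namely a rapid initial decay toward a small floor:

```python
def background_average(k_max, beta=0.7, alpha=10.0, eps=2e-5,
                       A=30**2, N=140, g0=0.05):
    """Iterate the reconstructed recursion for the averaged background <g_b^k>."""
    g = g0
    history = [g]
    for k in range(k_max):
        g = (1.0 - 0.25 * beta) * g \
            + 0.5 * beta * (A ** 0.5 / N**2) * (1.0 + alpha * eps * k)
        history.append(g)
    return history

h = background_average(40)
print(h[0], h[-1])   # drops from 0.05 to a floor of a few times 1e-3
```

The iterate decays geometrically at rate $1-\beta/4$ and settles near the floor $2\sqrt{A}/N^2$, which then drifts only slowly through the $(1+\alpha\varepsilon k)$ factor.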
The computations for the first 40 iterations are shown in Figure 11; Figure 12 gives the complete 7000 iterations.
A comparison of the predicted and computed results in Figure 11 and Figure 12 establishes that the averaged background intensity is reduced by an order of magnitude at the outset and then levels off beyond iterate $k = k_T \approx 25$ to form a floor that does not vary appreciably beyond $k = k_T$ and within several hundred iterations thereafter. These features match the computed results rather well, especially the immediate drop-off for $k < k_T$, where it is established that $\langle g_b^k\rangle$ is very weakly dependent on $B_x$. Figure 12 shows the complete averaged background from $k = 1$ to $k = 7000$. The conclusion of this derivation is that after $k = k_T$, $\langle g_b^k\rangle$ is so suppressed relative to $\langle g_f^k\rangle$ that its influence on the foreground may be neglected. Thus, we set $g_b^k \approx 0$ and
$$\begin{bmatrix} G_b^k \\ G_f^k \end{bmatrix} \approx \begin{bmatrix} 0 & 0 \\ 0 & F_f \end{bmatrix}\begin{bmatrix} g_b^k \\ g_f^k \end{bmatrix}.$$
In addition, for the first few iterates, the initial value of $e^{i\arg F_f g_f^k}$ is hardly affected, so that we are left with
$$\begin{aligned}
\arg F_f g_f^k \Big|_{k=0} &\approx \pi\left(2\,\mathrm{rand}(B_x,B_x) - 1\right)\\
\hat{G}_f^{k+1} &= \hat{G}_f^k + \varepsilon\left(\left|F_f g_f^k\right| - \hat{G}_f^k\right)\\
g_f^{k+1} &= \mathrm{Re}_{+}\, F_f^H\!\left(\hat{G}_f^{k+1}\, e^{i\arg F_f g_f^k}\right)
\end{aligned}$$
At this point, References [8,13,14] executed a complex derivation to transform the above system into equations for two quantities, $\Psi^k_{mn} \equiv \left[f^H \hat{G}_f^k\right]_{mn}$ and $\hat{\Lambda}^k_{mn} \equiv \left[f^H e^{i\arg F_f g_f^k}\right]_{mn}$, defined on the finite support set $\{-B_x, -B_x+1, \ldots, B_x-1, B_x\} \times \{-B_x, -B_x+1, \ldots, B_x-1, B_x\}$. The transformed equations are
$$\begin{aligned}
\hat{\Lambda}^k\Big|_{k=0} &= \mathrm{rand}(B_x,B_x)\\
\Psi^{k+1} - \Psi^k &= \varepsilon\left(g_f^k - \Psi^k\right)\\
\hat{\Lambda}^k &= f^H e^{i\arg F_f g_f^k}\\
g_{f,mn}^{k+1} &= \mathrm{Re}_{+}\left[\Psi^k \star \hat{\Lambda}^k\right]_{mn}
\end{aligned}$$
where $\star$ is the correlation operator. Next, we have several proofs of the nonnegativity of $\Psi^k$, $\hat{\Lambda}^k$, and $g_f$:
Theorem 1.
$\Psi^k \equiv f^H \hat{G}_f^k$ is real and nonnegative, being the autocorrelation of $f^H \sqrt{\hat{G}_f^k}$.
Proof. 
Since $\hat{G}_f^k$ is nonnegative, we may write $\hat{G}_f^k = \sqrt{\hat{G}_f^k}\left(\sqrt{\hat{G}_f^k}\right)^H$; thus, the result follows from Parseval’s theorem. □
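Theorem 1 invokes the standard fact that the inverse transform of a nonnegative spectrum is a circular autocorrelation, with the zero-lag value fixed by Parseval’s theorem. A small numpy illustration (a sketch, not the paper’s formalism):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random((16, 16))              # a real, nonnegative trial image
F = np.fft.fft2(f)
spectrum = np.abs(F) ** 2             # a nonnegative spectrum
acorr = np.fft.ifft2(spectrum).real   # inverse transform of the spectrum

def circ_autocorr(f, m, n):
    """Circular autocorrelation of f at lag (m, n), computed directly."""
    N = f.shape[0]
    return sum(f[i, j] * f[(i + m) % N, (j + n) % N]
               for i in range(N) for j in range(N))

print(acorr[0, 0], np.sum(f ** 2))    # zero lag equals the energy (Parseval)
print(acorr[3, 5], circ_autocorr(f, 3, 5))
```

The inverse transform of the nonnegative spectrum matches the directly computed circular autocorrelation at every lag, and the zero-lag value equals the total energy.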
Corollary 1.
Since both g f k   and   Ψ k are real and nonnegative, so is Λ ^ k .
Proof. 
$$g_f^{k+1} = \Psi^k \star \hat{\Lambda}^k = \mathrm{Re}_{+}\left[\Psi^k \star \hat{\Lambda}^k\right] = \Psi^k \star \mathrm{Re}_{+}\hat{\Lambda}^k \qquad \square$$
Thus, $\Psi^k$, $\hat{\Lambda}^k$, and $g_f$ are nonnegative and real. The following shows a necessary condition for convergence to the noise-free image.
Theorem 2.
If the box contains the entirety of the region of support of $\Psi^k$, then (a) the correlation $\Psi^k \star \hat{\Lambda}^k$ is maximized; (b) this is a necessary condition for $g$ to converge to the noise-free image.
Proof. 
From the earlier analysis (see Equation (33)), it follows that $\Psi^k$ converges to a maximum value. Given this, $\Psi^k$ is bijective to $\hat{G}_f^k$ and $e^{i\arg F_f g_f^k}$. Then, the main conclusion follows from the work of Klibanov [15,16]. □
Theorem 3.
If the box contains the entirety of the region of support of the noise-free image, $\bar{g}$, and no less, then this is a necessary and sufficient condition for convergence to the noise-free image.
Proof. 
The region of support of $\Psi^k$ encompasses that of $\bar{g}$, implying the loss of constraints on the elements of $\hat{\Lambda}^k$. Since a maximum cannot be reduced via the elimination of constraints, the stated condition remains necessary. Then, in view of the analysis in [8], the stated condition is necessary and sufficient. □
Comment. 
A multitude of computations indicates that if the regions of support of $\Psi$ and $\hat{\Lambda}$ overlap by at least 50% in area, then $g$ almost invariably converges to the noise-free image. Thus, with probability nearly one, we may relax Theorems 2 and 3.
The following will be of use:
Theorem 4.
If the box is too small to completely contain the noise-free image, then NRPR cannot converge to the noise-free image. Furthermore, the converged NRPR run exhibits illuminated pixels outside of the box.
Proof. 
The fundamental operation of NRPR is that it simultaneously attempts to satisfy the constraints imposed by the coherence magnitude data and the image domain background constraints. These are in direct conflict so that correct convergence is impossible. □
Now, in Figure 13, we illustrate the fact that $\Psi^k$ and $\hat{\Lambda}^k$ occupy statistically independent positions within the foreground box, and the brighter portions of $\Psi^k$ and $\hat{\Lambda}^k$ encompass the region of support of the true image (or the region of support of the possible convolutions of the true image). One notices from Figure 13 that there is almost no overlap between $\Psi^k$ and $\hat{\Lambda}^k$. In conformity with Theorems 2 and 3 and the comment above, Figure 13 shows convergence to a poor image, where it is apparent that the magnetometer boom is on the wrong side of the satellite. These results show that high correlation and good image quality require significant overlap between $\Psi^k$ and $\hat{\Lambda}^k$.
The foregoing observations suggest a rather simple model of the probability that the stochastic algorithm attains high correlation and convergence to the noise-free image. This is summarized in Figure 14 as follows:
We can now construct a complete model for the probability of zero-noise image convergence as a function of the number of repetitions, $R$, of the stochastic search algorithm for a given box size. The first step is a model of the probability of a correct convergence on any single trial as a function of the box size. This function, denoted here by $\mu$, involves both $A/N$ and $B_x/N$. For the situation on the right side of Figure 14, since the relative positions within the box are completely random, the probability of an overlap between $\Psi^k$ and $\hat{\Lambda}^k$ can be approximated simply as the ratio of the areas $A$ and $B_x^2$, i.e., $A/B_x^2$. Note that this satisfies Theorem 3 when the situation on the right side of Figure 14 holds. Finally, when $B_x^2 < A$, Theorem 4 applies. In summary, we derive the following:
$$\mu(B_x) = \begin{cases} 0, & B_x^2 < A\\[4pt] \dfrac{A}{B_x^2}, & B_x^2 \geq A \end{cases}$$
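The piecewise model for $\mu$ can be coded directly; the area value below is the one assumed for the test image in the text.

```python
def mu(bx: int, A: float) -> float:
    """Single-trial probability of correct convergence versus box size.

    A is the area (in pixels) of the object's region of support.
    """
    return 0.0 if bx * bx < A else A / (bx * bx)

A = 30**2            # assumed object area, as for the test image
print(mu(20, A))     # box too small: Theorem 4, probability 0
print(mu(30, A))     # box just fits: probability 1
print(mu(60, A))     # larger box: 900/3600 = 0.25
```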
Figure 15 shows the above formula rendered as the blue line. The red stars represent the percentage of correct images out of one hundred trials as calculated by NRPR. The agreement is close. Also, there are a few correct images even for the case wherein the box size coincides with the whole field of view (in other words, the number of constraints is equal to zero).
It must be emphasized that this expression is the probability of NRPR attaining the correct image in a single trial, but a successful determination of the true image via the stochastic search method requires the occurrence of two identical and correct images within $R$ independent trials, where the last trial produces the second correct image in the sequence of $R$ trials. Let us replace $\Pr(R)$ of Section 4 with $p(n,\mu)$, the probability that the correct image is discovered in $n$ trials given that $\mu$ is the probability of obtaining the correct image in one trial. The following theorem displays the required result:
Theorem 5.
The probability of obtaining the correct image via the occurrence of two correlated images in n independent trials is
$$p(n,\mu) = (n-1)\,\mu^2\,(1-\mu)^{n-2}$$
Proof. 
In the stochastic search algorithm, the $n$th trial is defined as the one producing the second correct image; thus,
$$p(n,\mu) = \mu \times \Pr\left\{\text{exactly one successful trial among the first } n-1 \text{ trials}\right\}$$
The above probability has the binomial form, since the trials are independent Bernoulli trials. Therefore, we derive the following:
$$p(n,\mu) = \mu \times \frac{(n-1)!}{1!\,(n-2)!}\,\mu\,(1-\mu)^{n-2} = (n-1)\,\mu^2\,(1-\mu)^{n-2} \qquad \square$$
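Theorem 5 is easy to check numerically: the distribution $p(n,\mu)$ sums to one over $n = 2, 3, \ldots$, and its mode gives the most likely number of trials. A sketch, with an illustrative value of $\mu$:

```python
def p_two_hits(n: int, mu: float) -> float:
    """Probability that trial n delivers the second correct image (Theorem 5)."""
    return (n - 1) * mu**2 * (1.0 - mu)**(n - 2)

mu_trial = 0.25   # assumed single-trial success probability, for illustration
total = sum(p_two_hits(n, mu_trial) for n in range(2, 2000))
mode = max(range(2, 200), key=lambda n: p_two_hits(n, mu_trial))
print(total)      # the distribution sums to 1 over n = 2, 3, ...
print(mode)
```

This is the negative-binomial distribution for the second success, which is why the probability mass concentrates at small $n$ whenever $\mu$ is not tiny.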
Figure 16 shows the probability distributions pertaining to the test image for a range of box sizes. These results verify that the probabilities of identifying the correct image via the stochastic search process are quite high. Figure 16 is also in line with the computational results shown in [8]. Furthermore, the progression of distributions from large to small box sizes suggests the following process. Having made a rough estimate of the object size, one starts with a box size that is twice the estimate. As seen in the third distribution from the top in Figure 16, 90% of the probability mass is contained in the first 20 trials, and the most probable outcome is only 5 trials. Thus, one would set the maximum number of trials at approximately 10. In the event of no success, one should reduce the box size by approximately a quarter until two identical images occur. If one reduces the box size to less than the object size, then by Theorem 4, NRPR will not converge properly. According to [8], such an event can be recognized by the fact that the distribution of intensity in the image plane spreads to several times the box size and the object size, in which case one must enlarge the box size slightly and continue to completion. In the next section, a refinement of the above process is introduced that reverses the procedure: it starts with a box that is too small and proceeds to enlarge it.

6. A Refinement of the Stochastic Search Algorithm

Theorems 3, 4, and 5 motivate the reversal of the search process. In this refinement, one starts with a box size significantly smaller than the smallest box that can contain the illuminated object and then progressively enlarges the box until success is obtained. To understand the implications of this procedure, we illustrate with the test image.
Figure 17 shows the sequence of images produced by NRPR for $B_x = 20, 22, 24, 26,$ and $28$. These first five images show what happens when the box size is smaller than the object. The large halo of intensity surrounding the object is due to the fact that in each cycle of calculation, NRPR attempts to impose the Fourier domain constraints and then undoes those constraints by attempting to impose the image domain constraints. The lower row of images illustrates the behavior when the box just contains the true image. The process for the identification of the noise-free image should end with the two images on the lower left of the figure, but we show additional images, all with $B_x = 30$, to emphasize the stark contrast in behavior once the threshold of the smallest box allowed by Theorem 3 is crossed. This behavior represents a significant advance for ICI because by starting with boxes that are too small and increasing the box size, rather than vice versa as in [8], very few box sizes must be tested, and once the box size just matches that of the object, it takes only two trials to secure the true image.
This modified stochastic search algorithm also has a beneficial effect on the necessary integration time. Note from Equation (27) that the initially measured coherence magnitudes, $\hat{G}(u_k,\Delta T) = \left[\gamma(u_k)^2 + \frac{N_C}{4\sqrt{2}}\gamma_{est}^2 N(u_k;0,1)\right]^{1/2}$, have noise terms proportional to the number of image domain constraints, $N_C$. This quantity decreases as the box size grows, and by Equation (35) of [11], the ratio of the integration time needed by NRPR to the integration time as conventionally estimated by Brown and Twiss is $\Delta T_{NRPR}/\Delta T_{conventional} \approx \left(4/(N_C\,SNR_d)\right)^2$, where the desired signal-to-noise ratio, $SNR_d$, is typically 10 to 100. If we take $SNR_d = 100$ and estimate $B_x \approx 30$ and $N = 140$ as for our test image, then this ratio is $\left(0.04/(140^2 - 30^2)\right)^2 = 4.57\times10^{-12}$. This amounts to 0.4 h versus ten million years!
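The arithmetic of this comparison can be verified in a few lines; the 0.4 h figure is the NRPR integration time quoted above.

```python
# Ratio of NRPR to conventional Brown-Twiss integration time for the test image.
ratio = (0.04 / (140**2 - 30**2)) ** 2
print(ratio)                      # about 4.57e-12

hours_nrpr = 0.4                  # the NRPR integration time quoted in the text
hours_conventional = hours_nrpr / ratio
years_conventional = hours_conventional / (24 * 365.25)
print(years_conventional)         # on the order of ten million years
```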

7. Conclusions

This paper is a survey of a sequence of papers [5,6,7,8] in which the author developed an algorithm capable of two-dimensional imaging that enormously reduces the integration times for the HBT effect while retaining the advantages of simple, inexpensive flux collecting hardware, robustness to misalignments, immunity to seeing conditions, and unlimited baselines and image resolution. This significant step forward merits the name “Intensity Correlation Imaging” (ICI).
Now, how might readers accept that the mathematics is rigorous and yet maintain that it lacks validation? Reams of empirical data would not suffice, for how would one know that the data are real? This manuscript establishes an algorithm that produces a noise-free image, and the best way to validate an algorithm (indeed, the only way) is to try it out. I expect readers to copy the algorithm as it is stated in the manuscript (perhaps as a simple MATLAB script) and validate it by running it. To that end, Appendix A provides the complete algorithm as a MATLAB script, with a space for a nonzero-noise image that the user can select at will to run the simulation, validate the algorithm, and perhaps find improvements. I believe this is very helpful for the validation of the theory and for the continued growth of the theory and practice of the photonics community.
Subsequent to the present paper, the practical benefits of ICI to astronomical science must be pursued, emphasizing its low cost and very high resolution. Already, Reference [17] has initiated a design for a ground-based ICI observatory for geostationary satellite inspection with (1) a resolution of 10 cm (at 36,000 km), (2) a range of apparent magnitudes of 10 through 14, (3) integration times from 0.043 min to 5.18 min, and (4) an integration time reduction of $10^{30}$ below the Brown–Twiss calculation. No conventional visible range multipixel imager with these parameters can claim a cost below USD 140 million. Furthermore, designs for high-resolution low-Earth-orbit space-debris tracking as well as deep-space astronomical research facilities are being planned.
These efforts could open the door to the construction of advanced ICI observatories that are well within the financial resources of colleges and universities around the world yet able to compete with very large telescopes, such as those discussed in the National Academy of Sciences’ most recent decadal survey in astronomy and astrophysics [18]. We look forward to significant advances in astronomical technology as well as a broadening participation in scientific education.

Funding

Aside from the author’s efforts, there are no outside sources of funding for this review.

Conflicts of Interest

There are no conflicts of interest.

Appendix A

% ICI Algorithm
% To run: set fix=0 below, then type   start=0;ICIAlgorithm
% When the run is completed, set fix=1 and type   start=0;ICIAlgorithm
% to rerun from a new random initial image with the same data batch.

% fix=0: introduce a new coherence data batch
fix=0;
% fix=1: keep the previous data batch
%fix=1;

Niter=3000;
Epsilon=1.0e-6;
beta=0.7;
delta=1.0e-6;

if start==0
    load noisefreepicture
    gg=noisefreepicture;
    MM=size(gg,1);            % the field of view is assumed square (MM x MM)
    % Specify Bx:
    Bx=?
    Marg=(MM-Bx)/2;

    % tau = 1 on the constrained (background) frame surrounding the box
    tau=zeros(MM,MM);
    tau(1:Marg,:)=1;
    tau(MM-Marg+1:MM,:)=1;
    tau(:,1:Marg)=1;
    tau(:,MM-Marg+1:MM)=1;
    tauperp=ones(MM,MM)-tau;

    NC=sum(sum(tau));

    % Compute the coherence magnitude data with noise
    Gtrue0=fft2(gg);
    Gtruenorm=abs(Gtrue0/max(max(abs(Gtrue0))));

    if fix==0
        Sigma=NC/(4*sqrt(2));
        Gmin=min(min(Gtruenorm));
        GGG=(abs(Gtruenorm).^2)+Sigma*(Gmin^2)*(randn(MM,MM)+1i*randn(MM,MM));
    end
    Gtruemag=sqrt(abs(GGG));

    % Initialize g
    g=delta*rand(MM,MM);
    start=1;
end

for m=1:Niter

    G=fft2(g);

    % Step B: discount the stored coherence magnitudes slightly and
    % blend in the magnitude of the latest Fourier estimate
    Gtruemag=(1-Epsilon)*Gtruemag+Epsilon*abs(G);

    % Impose the coherence magnitudes while retaining the current phase
    G=Gtruemag.*G./(abs(G)+eps);
    gp=ifft2(G);

    % Positivity constraint on the unconstrained (foreground) pixels
    gp=tau.*real(gp)+max(zeros(MM,MM),0.5*(gp+conj(gp))).*tauperp;

    % Step F: hybrid input-output feedback on the constrained pixels
    g=tauperp.*gp+tau.*(g-beta*gp);
end

gM=abs(g)/max(max(abs(g)));
image(gM*65+3*tauperp);axis('square')

References

  1. Brown, R.H.; Twiss, R.Q. Interferometry of the Intensity Fluctuations in Light. I. Basic Theory: The Correlation between Photons in Coherent Beams of Radiation. Proc. R. Soc. 1957, 242, 300–324. [Google Scholar]
  2. Brown, R.H.; Twiss, R.Q. Interferometry of the Intensity Fluctuations in Light. II. An Experimental Test of the Theory for Partially Coherent Light. Proc. R. Soc. 1958, 243, 291–319. [Google Scholar]
  3. Brown, R.H.; Twiss, R.Q. Interferometry of the Intensity Fluctuations in Light. III. Applications to Astronomy. Proc. R. Soc. 1958, 248, 199–221. [Google Scholar]
  4. Brown, R.H.; Twiss, R.Q. Interferometry of the Intensity Fluctuations in Light. IV. A Test of an Intensity Interferometer on Sirius A. Proc. R. Soc. 1958, 248, 222–237. [Google Scholar]
  5. Hyland, D.C. Analysis of Noise Reducing Phase Retrieval. Appl. Opt. 2016, 54, 9728–9735. [Google Scholar] [CrossRef] [PubMed]
  6. Hyland, D.C. Improved Integration Time Estimates for Intensity Correlation Imaging. Appl. Opt. 2022, 61, 10002–10011. [Google Scholar] [CrossRef] [PubMed]
  7. Hyland, D.C. Algorithm for Determination of Image Domain Constraints for Intensity Correlation Imaging. Appl. Opt. 2022, 61, 10425–10432. [Google Scholar] [CrossRef] [PubMed]
  8. Hyland, D.C. Analysis and Refinement of Intensity Correlation Imaging. Appl. Opt. 2023, 62, 5683–5695. [Google Scholar] [PubMed]
  9. Fienup, R. Phase retrieval algorithms: A comparison. Appl. Opt. 1982, 21, 2758–2769. [Google Scholar] [CrossRef] [PubMed]
  10. Folland, G.B.; Sitaram, A. The uncertainty principle: A mathematical survey. J. Fourier Anal. Appl. 1997, 3, 207–238. [Google Scholar] [CrossRef]
  11. DeBrunner, V.; Havlicek, J.P.; Przebinda, T.; Özaydın, M. Entropy-Based Uncertainty Measures for L 2 R n , l 2 Z , and l 2 ( Z / N Z ) with a Hirschman Optimal Transform for l 2 ( Z / N Z ) . IEEE Trans. Signal Process. 2005, 53, 2690. [Google Scholar] [CrossRef]
  12. Billingsley, P. Probability and Measure, 3rd ed.; Wiley Interscience: Hoboken, NJ, USA, 1995; p. 362. [Google Scholar]
  13. Dembo, A.; Cover, T.M.; Thomas, J.A. Information Theoretic Inequalities. IEEE Trans. Inf. Theory 1991, 37, 1501–1517, see Theorem 23 on page 1513. [Google Scholar] [CrossRef]
  14. Press, W.H.; Flannery, B.P.; Teukolsky, S.A.; Vetterling, W.T. Numerical Recipes in Pascal; Chapter 9; Cambridge University Press: Cambridge, UK; p. 450. ISBN 0-521-37516-9.
  15. Klibanov, M.V.; Sacks, P.E. Use of partial knowledge of the potential in the phase problem of inverse scattering. J. Comput. Phys. 1989, 112, 273–281. [Google Scholar] [CrossRef]
  16. Klibanov, M.V. On the recovery of a 2-D function from the modulus of its Fourier transform. J. Math. Anal. Appl. 2006, 323, 818. [Google Scholar] [CrossRef]
  17. Hyland, D.C. Intensity Correlation Imaging Design for Geostationary Satellite Inspection. Appl. Opt. 2024, 63, 4095–4108. [Google Scholar] [PubMed]
  18. National Academies of Sciences, Engineering, and Medicine. Pathways to Discovery in Astronomy and Astrophysics for the 2020s; National Academies of Sciences, Engineering, and Medicine: Washington, DC, USA, 2021. [Google Scholar]
Figure 1. Realizations of the randomly generated image examples corresponding to various image sizes.
Figure 2. The variation of σ with N_fg = B_x for values of σ such that correct convergence is achieved in 90% of the trials. As described in the text on the NRPR algorithm, the parameter ε appearing in step B is crucial to noise reduction: it controls the speed with which step B modifies the coherence magnitude data, slightly discounting the existing data values while adding ε times the magnitude of the latest coherence estimate. The parameter β controls the speed with which step F reduces the intensity of the constrained pixels. N_fg is the size of the square foreground region. Finally, σ is defined in Equation (9) as the reciprocal of the SNR needed to attain correct convergence with a probability of 90%.
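The roles of ε and β described in the Figure 2 caption can be sketched in code. This is a minimal illustration only: the function names and the exact relaxation forms are assumptions for clarity, not the implementation from the NRPR papers.

```python
import numpy as np

def step_b_update(data_mag, latest_coh_mag, eps):
    # Step B (sketch): slightly discount the stored coherence-magnitude
    # data while blending in eps times the magnitude of the latest
    # coherence estimate. Larger eps means faster modification.
    return (1.0 - eps) * data_mag + eps * np.abs(latest_coh_mag)

def step_f_attenuate(image, constrained_mask, beta):
    # Step F (sketch): reduce the intensity of the constrained
    # (dark background) pixels at a rate controlled by beta.
    out = image.copy()
    out[constrained_mask] *= (1.0 - beta)
    return out
```

Small ε and β make the iteration conservative, changing the data and the constrained pixels only slightly per iterate; values near one enforce the data and constraints almost immediately.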
Figure 3. The vastly improved integration time estimate has been verified with numerous cases and has little sensitivity to the image content. Each of the images in this figure is actually a foreground region centered in a dark background, such that the total field of view is three times larger than the displayed image. Thus, the complete image actually features a set of constrained pixels surrounding the images shown. This rendering allows the foreground details to be reasonably discerned. The algorithm parameters ε and β are the same as those used for Figure 2 in all cases.
Figure 4. Stochastic search algorithm to find the true noise-free image.
Figure 5. An example showing that with the correct background specification, every trial of NRPR converges to the same correct image. We present results for a simple image having pixel intensities that are either zero or one. The assigned constrained pixels are shown by the two pictures on the left-hand side (showing yellow for the constrained set and dark blue for the unconstrained). The top-left picture shows the correct constraint set, and the three resulting images to the right of the arrow are identical despite the randomness of the initial image provided to NRPR. On the other hand, the bottom-left constraint specification is incorrect. The resulting trio of images in the bottom are random, reflecting the randomness of the initial guess provided to NRPR.
Figure 6. The true image of the example in [9]. The image to the left shows the entire field-of-view, and an enlarged image of the target object is shown at right.
Figure 7. The typical sequence of convergent images, terminating when two of the images are perfectly correlated. The discussion surrounding the figure traces the steps in the NRPR run that result in the discovery of the zero-noise image.
Figure 8. Histograms of the computed number of NRPR trials, R, needed to identify the true image for box sizes of 70, 60, 50, 40, and 30, each based upon 500 statistical samples. (R is the abscissa.)
Figure 9. The two panels to the left show that box size 18 is the smallest that can contain the target object, and every trial of NRPR converges to the correct noise-free image. On the right, box size 16 leads to nonrepeating intensity patterns that extend throughout the field-of-view. Thus, the algorithm can identify the minimum box size.
Figure 10. The test image used to illustrate and validate the derivations. (a) shows the total field-of-view, and (b) gives a close-up.
Figure 11. The results for g_b(k) in the regime 0 &lt; k ≪ 1/ε. The prediction is on the left and the NRPR computation on the right.
Figure 12. The computed (blue line) and predicted (red stars) results for g_b(k) for k of order O(1/ε).
Figure 13. Three images are presented under the conditions k = 10 and a box size of 90 pixels. The left-hand image shows a concentrated intensity region approximately coterminous with Ψ_k. The center image shows the distribution of Λ̂_k, which has no overlap with the concentrated intensity region on the left. Consequently, the final convergence (3000 iterates), shown on the right, is a poor image. Note that the magnetometer is on the wrong side of the satellite.
Figure 14. A schematic of the convergence toward the zero-noise image.
Figure 15. The probability of producing the noise-free image in a single trial as a function of the box size (blue line), as given by Equation (32). As a rough validation, the red stars show the percentage of correct images in 100 trials as computed by NRPR.
Figure 16. Probability distributions for test image identification using box sizes from 80 to 30.
Figure 17. A sequence of reconstructions of the test image, proceeding from left to right and top to bottom. The top row progresses from B_x = 20 to B_x = 28, illustrating the contradictions inherent when the box size is smaller than the illuminated object. Once the box size can contain the object, the lower row shows five correct images, all using B_x = 30.