Time-Dependent Image Restoration of Low-SNR Live-Cell Ca2+ Fluorescence Microscopy Data

Live-cell Ca2+ fluorescence microscopy is a cornerstone of cellular signaling analysis and imaging. The demand for high spatial and temporal imaging resolution is, however, intrinsically linked to a low signal-to-noise ratio (SNR) of the acquired spatio-temporal image data, which impedes the subsequent image analysis. Advanced deconvolution and image restoration algorithms can partly mitigate the corresponding problems but are usually defined only for static images. Frame-by-frame application to spatio-temporal image data neglects inter-frame contextual relationships and the temporal consistency of the imaged biological processes. Here, we propose a variational approach to time-dependent image restoration built on entropy-based regularization that is specifically suited to process low- and lowest-SNR fluorescence microscopy data. The advantage of the presented approach is demonstrated by means of four datasets: synthetic data for in-depth evaluation of the algorithm behavior; two datasets acquired for the analysis of initial Ca2+ microdomains in T-cells; and, to illustrate the transferability of the methodical concept to different applications, one dataset depicting spontaneous Ca2+ signaling in jGCaMP7b-expressing astrocytes. To foster re-use and reproducibility, the source code is made publicly available.


Introduction
T-cell activation represents the on-switch of the adaptive immune system [1]. Within tens of milliseconds after activation, highly dynamic, spatio-temporally restricted Ca2+ signals, termed Ca2+ microdomains, start occurring [1,2], but the molecular machinery underlying this process still remains elusive. To better understand the principles of the formation and the temporal propagation of these signals as well as the contributions and roles of different components, high-resolution live-cell fluorescence microscopy is required, ideally implemented with both the spatial and the temporal resolution as high as possible. However, high-resolution Ca2+ imaging has severe limitations: Low photon doses due to phototoxicity and photobleaching as well as the fugitive nature of Ca2+ signals in combination with out-of-focus light lead to an intrinsically low signal-to-noise ratio (SNR) [3]. This, in turn, significantly impedes the identification and detailed analysis of Ca2+ microdomains and their spatio-temporal architecture.
The analysis of initial Ca2+ microdomains in T-cells and the corresponding need to reliably identify related signaling events in live-cell imaging data with poor SNR form the basis and motivation of the present work, but they are only one example application; a related prominent Ca2+ imaging application is, e.g., capturing Ca2+ waves with a high imaging speed (often >80 Hz) and correspondingly short exposure times [4]. The general challenge of extracting meaningful information from low-SNR image time series arises, however, in many applications in the context of spatio-temporal fluorescence microscopy. Techniques to increase the quality of captured images are typically referred to as image restoration or deconvolution.
In recent years, microscopy image restoration has been increasingly tackled using deep learning methods [5,6], but a systematic problem with these approaches remains the risk of hallucinations, i.e., the generation of structures not present in the acquired imaging data [7]. In addition, extensive amounts of suitable training data are usually required, limiting their applicability.
Conventional approaches, in contrast, work purely on the image data to be processed. They include classic, straightforward methods, such as nearest neighbor deconvolution or naive inverse filtering, which are computationally inexpensive but have drawbacks such as poor noise reduction and the introduction of ringing artifacts. More sophisticated methods are often formulated as iterative algorithms and variational models, with a variety of data fidelity and regularization terms having been proposed in the literature. The most common approaches are (regularized) inverse filtering, including, e.g., Wiener filtering [8,9], and (regularized) Lucy-Richardson (LR) deconvolution [10,11]. For an overview, please refer to, e.g., [12,13].
Most functional formulations are, however, rather general, and the resulting algorithms perform poorly in low-SNR scenarios [14]. In 2013, Arigovindan et al. introduced a functional formulation tailored to the specific characteristics of fluorescence microscopy [14]. In particular, they proposed an entropy-like formalism in combination with a regularization functional based on second-order derivatives that suppresses noise but still preserves object details. A central rationale behind their formulation was, e.g., that, in contrast to general imaging data, in fluorescence microscopy data, "high intensity points are more sparsely distributed and are co-localized with high-magnitude derivative points" [14]. The presented results were especially impressive for low-SNR conditions. However, although motivated by the demands of spatio-temporal imaging, the proposed formulation addressed only frame-by-frame deconvolution, i.e., the resulting algorithm was to be applied independently to each frame. While this is common to most image restoration methods (both deep learning and conventional approaches), recent work illustrated the benefits of taking the spatio-temporal nature of the acquired data into account [15,16].
In the present work, we extend the principle of entropy-like deconvolution proposed by [14] and suggest a novel variational approach tailored to image restoration of spatio-temporal fluorescence microscopy. The proposed approach utilizes the temporal information available in the imaging data to further improve image quality and SNR at low exposure times, thus enabling imaging with a higher temporal resolution. To foster re-use of the developed methods, the source code is freely available at https://github.com/IPMI-ICNS-UKE/TDEntropyDeconvolution, accessed on 26 October 2021. The repository also covers an implementation of the approach described in [14] to be applied to static microscopy data (no publicly available source code was provided with the original publication). The corresponding practical notes are given in Appendix A.2; further documentation and example scripts are provided as part of the repository.
The advantage of the proposed approach is illustrated for four datasets. The first three datasets are related to the analysis of Ca2+ microdomains in T-cells: (1) a synthetic dataset with simulated Ca2+ signals and noise patterns to systematically evaluate the algorithm performance; two super-resolution spinning disc microscope datasets, one acquired with a genetically encoded Ca2+ indicator tagged to a lysosomal channel (2) and the other one to study the free cytosolic Ca2+ concentration ([Ca2+]_i) (3) in Jurkat T-cells. The fourth dataset, confocal data of spontaneous Ca2+ signals in branches of an astrocyte in a mouse brain slice, illustrates the use in a different application context.

Synthetic Data
A comparison of the effects of the different deconvolution approaches for an example of the generated synthetic data with different noise levels is given in Figure 1. The simulated ground truth data are shown in Figure 1a. The region of interest (ROI) indicated by the yellow box is focused on in panels (b-d), which are all structured in the same way: The upper left image represents the noisy ROI of the image that is input into the deconvolution algorithms. The other images are the corresponding restored image ROIs for LR deconvolution (upper right image), static entropy-based deconvolution (ER, lower left image) and the proposed temporal entropy-based deconvolution (TD ER, lower right image). The input noise levels are as follows: (b) low noise, (c) medium noise and (d) high noise.
For all noise levels, the proposed time-dependent algorithm presents the smallest amount of background noise and the highest SNR after image restoration, with the discrepancy between time-dependent and static deconvolution becoming most evident for the high noise level (i.e., low-SNR) input data as in panel (d). In contrast to the entropy-based approaches, LR deconvolution tends to introduce ringing artifacts to the result, which amplify the background noise for low-SNR input data. Only for the lowest noise level shown in panel (b) does the LR algorithm perform best, as it sharpens the signal, whereas the entropy algorithms tend to blur it instead.
The corresponding quantitative analysis is presented in Figure 2, showing the mean normalized SSIM as well as the estimated background noise for the image restoration approaches as a function of the input Gaussian noise level and averaged over different Poisson noise levels. The SSIM is calculated according to Equation (2) between the individual frames of the restored images and the underlying original (ground truth) data and averaged over all time frames. In the diagram, the respective SSIM values are normalized by dividing the SSIM obtained for an image restoration approach by the SSIM for comparison of the noisy input data and the underlying original data. Thus, SSIM values larger than one indicate an image improvement in terms of SSIM.
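The normalization just described can be sketched in a few lines. The following is a minimal, hedged example using scikit-image's `structural_similarity`; the function name and the (T, H, W) stack layout are our illustrative choices, not the paper's published code:

```python
import numpy as np
from skimage.metrics import structural_similarity

def normalized_ssim(restored, noisy, ground_truth, data_range=None):
    """Frame-wise SSIM of a restored (T, H, W) stack against ground truth,
    normalized by the SSIM of the noisy input. Values > 1 indicate an
    improvement in terms of SSIM, as in Figure 2a."""
    if data_range is None:
        data_range = ground_truth.max() - ground_truth.min()
    ssim_restored = np.mean([
        structural_similarity(gt, r, data_range=data_range)
        for gt, r in zip(ground_truth, restored)])
    ssim_noisy = np.mean([
        structural_similarity(gt, n, data_range=data_range)
        for gt, n in zip(ground_truth, noisy)])
    return ssim_restored / ssim_noisy
```

A restoration that brings the frames closer to the ground truth than the noisy input yields a ratio above one, matching the interpretation of the normalized SSIM curves.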
The quantitative data support the visual impression. The TD ER algorithm performs best, except for very low noise values, where the LR deconvolution yields higher SSIM values. With increasing noise, the image quality improvement by LR deconvolution drastically decreases in terms of SSIM (normalized SSIM values ≈ 1; a value of 1 indicates similar SSIM of the noisy input image and the restored image). Better results are obtained using the entropy approaches (ER: normalized SSIM of approximately 1.7; TD ER: normalized SSIM of approximately 2.4), which are optimized for processing low-SNR fluorescence microscopy data. The amount of Poisson noise has a comparatively small influence on the result, as can be seen from the error bars in Figure 2a, which show the standard deviation for processing similar image series with different Poisson noise levels.
The estimated Gaussian noise variance of the image background depicted in Figure 2b is in line with the SSIM results and the visual impression. The grey line represents a consistency check of the automated background noise estimation method, as it shows the estimated Gaussian noise variance of the noisy input image as a function of the input Gaussian noise variance. The linear relationship indicates the reliability of the respective results.
Both the ER and the TD ER algorithms considerably decrease the measured background noise, while the LR algorithm appears to magnify it with an increasing input noise level, in agreement with the visual impression of Figure 1b,c. The LR results are also the only ones that depend on the input Poisson noise level, with the standard deviation indicating this influence as explained and visualized in Figure 2a. The remaining background noise of the images obtained by both entropy algorithms as well as the background noise of the original noisy images differ only a little for different Poisson levels, and the respective error bars are too small to be visible in Figure 2a. Overall, the smallest amount of background noise is present in the images generated using the TD ER algorithm.
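The background-noise figures discussed above require no ground truth. As a rough stand-in for the patch-based estimator of [17] (whose exact implementation is not reproduced here), a blind, wavelet-based noise estimator such as scikit-image's `estimate_sigma` yields comparable per-frame variance estimates:

```python
import numpy as np
from skimage.restoration import estimate_sigma

def background_noise_variance(stack):
    """Blind per-frame Gaussian noise variance estimate for a (T, H, W)
    stack, averaged over time. Note: estimate_sigma is a wavelet-based
    estimator used here only as a stand-in for the patch-based method
    of [17]; it likewise needs no ground-truth reference."""
    sigmas = [estimate_sigma(frame) for frame in stack]
    return float(np.mean(np.square(sigmas)))
```

Applied to the noisy input series, such an estimator should reproduce the linear relationship used as a consistency check in Figure 2b (grey line).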

Live-Cell Fluorescence Microscopy
In Figure 3, representative frames of the acquired image sequences of dataset 2 and the corresponding outputs of the deconvolution approaches are shown.  Movies corresponding to the results at 100 ms exposure are provided as Supplementary Videos S1-S4, see Supplementary Materials.
The data represent Ca2+ released through the two pore channel 2 (TPC2) on lysosomes in Jurkat T-cells using TPC2-R.GECO.1.2. With this low-affinity genetically encoded Ca2+ indicator, only Ca2+ at the mouth of the TPC pore can be visualized.
The first column shows raw images captured with exposure times of 100 ms (a), 150 ms (e), 200 ms (i) and 400 ms (m). The second column depicts the corresponding deconvolution results obtained with the MATLAB implementation of the LR algorithm for these exposure times. Images restored by the ER deconvolution (λ = 2.0) are shown in the third column, and the corresponding results for the proposed TD ER approach (λ = 2.0, λ T = 2.0) are given in the fourth column. In both the static and time-dependent entropy algorithms, ε = 0.001.
It can be clearly seen that the noise level in the raw images increases significantly when reducing the exposure time from 400 ms to 100 ms. The Lucy-Richardson deconvolution increases the noise level even further for lower exposure times, while the entropy-based algorithms perform much better (third and fourth column). The time-dependent algorithm (rightmost column) shows the least amount of noise while recovering much of the original signal. This effect is especially pronounced for the lower exposure times, where in the raw image, hardly any signal can be discerned, while our algorithm manages to recover a relatively clear signal. For very high exposure times, such as 400 ms shown in the last row of Figure 3, the improvement is, however, minimal at best. While, even here, background noise is reduced, the signal also appears slightly blurred.
A more detailed look at the signal recovery is given in Figure 4. Here, the results for the different deconvolution methods are shown for an ROI of a time series captured with 100 ms exposure time. The first column, Figure 4a, shows the full raw image of a single time frame (first row) with a zoomed-in ROI below. Panel (e) represents an intensity profile along the blue line in the zoom plot. The same structure applies to the other columns, with (b-d) showing the deconvolution results of the LR, ER and TD ER algorithms, respectively, and (f-h) showing the corresponding intensity profiles along the pictured blue lines. The intensity plots illustrate that the LR deconvolution seems to sharpen the signal but also amplifies the noise; based on the intensity profile, it is difficult to distinguish the signal from noise. The ER algorithm clearly recovers the signal while reducing the noise, and the TD ER further enhances the SNR.
To further illustrate the potential of the proposed approach, two additional live-cell imaging datasets were processed and analyzed.
The results obtained on dataset 3 are illustrated in Figures 5 and 6. Shown in Figure 5a is an image frame captured using Fluo-4 (upper row) and Fura Red (lower row) as indicator dyes. The frame corresponds to a time point shortly after T-cell activation. The panels show, from left to right, the original raw data and the deconvolution results obtained with the LR, the ER and the TD ER algorithms. Similar to the above experiments and datasets, the least amount of background noise is present in the output images of the TD ER algorithm, both visually in (a) and, in terms of estimated Gaussian background noise, quantitatively in (b) and (c). In fact, entropy deconvolution eliminates the background noise almost entirely. The numbers given in (b) and (c) are shown as a ratio, i.e., the noise level after image restoration divided by the estimated noise level of the raw data. Thus, values < 1 represent a decrease of background noise. After performing the postprocessing steps described in [2] (rigid registration of the two-channel time series data, bleaching correction, cell segmentation), the two-channel image data were combined into ratio images representing the free cytosolic Ca2+ concentration, [Ca2+]_i. One exemplary cell is shown in Figure 6, where (a-d) show the ratio images computed based on the aligned and processed raw images, the images after LR deconvolution, and the images after ER and TD ER image restoration. Panels (e-h) show the intensity profile along the blue line in the images above. While the ratio of the raw channels appears very grainy, the Ca2+ microdomains in the deconvolved images, in particular for ER and TD ER, can be much more easily distinguished from noise.

The results for dataset 4, a jGCaMP7b-expressing astrocyte in a mouse brain slice, are shown in Figure 7, illustrating the transferability of the developed approach to application contexts other than Ca2+ microdomain analysis.
While the original SNR of the input data appears to be already quite good for the large and brightly labeled cell body, the fine cell branches barely stand out against the background. Here, the entropy algorithms both considerably decrease the amount of background noise, making it easier to separate the delicate structures of the astrocyte branches from the background. For a visual impression, see Figure 7a-d and Supplementary Videos S5-S8. Quantitatively, the background noise reduction is shown in Figure 7i as the measured background noise variance of the deconvolved images, again normalized to the original noise level. The intensity profiles plotted in panels (e) and (f) further confirm that the SNR, while already acceptable in the raw image, is further improved by the entropy algorithms, as shown in (g) and (h).
The computation time for a 500 × 500 pixel time series with 100 frames using the two entropy algorithms ranges between a few seconds and a few minutes on a standard desktop PC depending on the convergence of the algorithm, which, in turn, depends on the input data.

Discussion
Motivated by the intrinsically low SNR of live-cell Ca2+ image sequences acquired by fluorescence microscopy at both high spatial and high temporal resolution, we proposed the integration of the temporal dimension of the respective image data into variational image restoration. Method development built on an image restoration approach specifically tailored to the particularities of fluorescence microscopy [14]. Here, (1) we extended the underlying entropy-based regularization and the dedicated numerical solving scheme to spatio-temporal image sequences, (2) demonstrated the superiority of the proposed time-dependent image restoration approach compared to static entropy-based image restoration and common LR deconvolution and (3) made the source code publicly available.
Demonstration of the advantages of the proposed image restoration approach was based on synthetic as well as real live-cell Ca2+ imaging data, with two of the latter being acquired in the context of Ca2+ microdomain formation analysis after T-cell activation and one additional dataset showing a jGCaMP7b-expressing astrocyte in a mouse brain slice. For all datasets, the observed effects were consistent: The time-dependent deconvolution considerably reduced the level of noise, in particular for low-SNR input image sequences. Therefore, we expect the approach to be promising for live-cell imaging data acquired in different application contexts.
For high(er)-SNR input image sequences, the quantitative evaluation has, however, shown that the performance of the common LR deconvolution is on par with both entropy-based image restoration approaches. Moreover, visually, the entropy approaches tend to blur spots of high Ca2+ concentration (particularly visible for high-SNR input data). We hypothesize that this is due to the present data fidelity term of the functional in Equation (1) and will, in the future, adjust the functional by changing the term from least squares to a Poisson noise-specific term, as the low photon rates typical for fluorescence microscopy tend to obey Poisson statistics. This, however, requires a different numerical scheme and algorithm for the minimization of the overall functional and is beyond the scope of the present paper.
Moreover, at the moment, it is also not clear whether there exists, theoretically and/or practically, a lower SNR threshold below which the entropy-based image restoration will fail. Similarly, we tested the TD ER algorithm on imaging data acquired with frame rates of up to 40 Hz. It remains to be shown that it also performs as expected for faster image acquisition as well as for Ca2+ imaging data acquired at different magnifications. We expect that faster and higher spatial resolution imaging leads to a more continuous representation of the biological processes and structures as well as smoother between-frame sample movement and motion of intracellular structures. Thus, in principle, the TD ER model should benefit from it when compared to static image restoration approaches. However, this is currently a hypothesis to be tested in future work, including an in-depth comparison to the performance of methodical approaches other than LR deconvolution when applied to corresponding data. We therefore encourage other researchers to test the proposed algorithm on their data and to contact us, both in the case of problems and to share their experience and positive examples, to further optimize the proposed image restoration.

Mathematical Formulation
Following the concept of variational image deconvolution, the proposed approach builds on a common quadratic data fidelity term [13,14] but extended to spatio-temporal image data, i.e., ||Hv_t − w_t||², where w_t is the measured image at time or frame t, H is the distortion matrix, and v_t the sought solution. The direct minimization of the data fidelity term alone amounts to a naive inversion of the image formation process, which amplifies high-frequency noise in the solution and poses, mathematically, an ill-posed problem.
Thus, additional regularizing terms need to be included in the functional to be minimized. As described in the introduction, in [14], an algorithm specifically tailored to fluorescence microscopy was introduced. The main innovation was using second derivatives for a spatial smoothness enforcing regularization functional in an entropy-like term. Here, we expanded this approach to also include a functional term that enforces smoothness in the time domain. Similar to the motivation formulated for the spatial domain in [14]-sparsity of high-intensity signals and high-magnitude derivatives-which we consider also applicable to the temporal characteristics of, e.g., Ca 2+ microdomains, we also chose an entropy-like structure for the temporal regularization functional.
The overall minimization problem is given by

v_opt = argmin_v [ Σ_{t=1}^{T} ||H v_t − w_t||² + λ Σ_{t=1}^{T} Σ_{m=1}^{M} log(ε + [v_t ∘ v_t]_m + Σ_i [(L_i v_t) ∘ (L_i v_t)]_m) + λ_T Σ_{σ=1}^{K} log(ε + [v ∘ v]_σ + Σ_i [(D_i v) ∘ (D_i v)]_σ) + λ_N Σ_{σ=1}^{K} n_σ ],   (1)

where v ∈ R^K is the vectorized processed time series comprised of T frames, each an N_x × N_y-dimensional image, M = N_x · N_y and K = M · T. Each individual frame is denoted by the subscript t ∈ {1, ..., T}, i.e., v_t ∈ R^M. The measured image time series is vectorized in the variable w ∈ R^K, with w_t ∈ R^M being the vectorized time frame at time t, and v_opt ∈ R^K refers to the sought solution in terms of optimality with regard to the defined functional. The operator and distortion matrix H ∈ R^{M×M} is, in our case, the point spread function (PSF) in Toeplitz form. The operators L_i ∈ R^{M×M} in the spatial regularization term represent the discretized second derivative filters in the spatial directions, where the sum over i runs over ∂²/∂x², ∂²/∂y² and ∂²/∂x∂y. The operator · ∘ · refers to the Hadamard, or element-wise, product. In the temporal regularization term, the D_i ∈ R^{K×K} refer to the discretized second derivative filters containing the derivatives with respect to time; in this case, the sum over i runs over ∂²/∂t², ∂²/∂x∂t and ∂²/∂y∂t.
The sum over m runs over all pixels within one time frame, i.e., m ∈ {1, ..., M ≡ N_x · N_y}, while σ is a composite index referring to a pixel within a specific time frame and running over all pixels in all time frames, i.e., σ ∈ {1, ..., K ≡ N_x · N_y · T}. In our notation, the value of a pixel in a specific time frame can thus be addressed by either v_σ or [v_t]_m.
The vector n ∈ R^K ensures positivity of the result and contains the entries n_σ, where n_σ = 0 if v_σ ≥ 0 and n_σ = v_σ² if v_σ < 0. The parameters λ, λ_T and λ_N are Lagrange parameters that weight the regularization terms. They are to be determined empirically. ε is a small positive constant to avoid the occurrence of log(0) in the regularization terms.
The first term in the cost function, i.e., the data fidelity term in Equation (1), ensures the agreement with the forward model, i.e., the image distortion w t = Hv t . The second term, controlled with Lagrange parameter λ and denoted as spatial regularization in Equation (1), is known from [14] and denotes the regularization functional enforcing smoothness within the spatial dimensions of the image. New in our proposed method is the third term proportional to λ T . This regularization functional enforces smoothness over time. The last term proportional to λ N is a standard term to avoid negative pixel values in the resulting image.
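To make the roles of the four terms concrete, the following sketch evaluates a functional of this form for a small (T, H, W) series with NumPy/SciPy. It is an illustrative re-implementation under simplifying assumptions (periodic boundaries, a per-frame 2D PSF, simple finite-difference stencils), not the paper's released code:

```python
import numpy as np
from scipy.ndimage import convolve

D2 = np.array([1.0, -2.0, 1.0])   # second-derivative stencil
D1 = np.array([0.5, 0.0, -0.5])   # central first difference (for mixed terms)

def second_derivative(v, axes):
    """Discretized second derivative of a (T, H, W) stack along an axis
    pair, e.g. (1, 1) for d2/dx2 or (0, 1) for d2/dt dx; mixed derivatives
    are built from successive first differences."""
    a, b = axes
    if a == b:
        shape = [1, 1, 1]
        shape[a] = 3
        return convolve(v, D2.reshape(shape), mode='wrap')
    out = v
    for ax in (a, b):
        shape = [1, 1, 1]
        shape[ax] = 3
        out = convolve(out, D1.reshape(shape), mode='wrap')
    return out

def cost(v, w, psf, lam, lam_t, lam_n, eps=1e-3):
    """Sketch of the functional in Equation (1): data fidelity, spatial
    and temporal entropy-like regularization, positivity penalty."""
    Hv = np.stack([convolve(f, psf, mode='wrap') for f in v])
    fidelity = np.sum((Hv - w) ** 2)
    spatial = np.sum(np.log(eps + v**2 + sum(
        second_derivative(v, ax)**2 for ax in [(1, 1), (2, 2), (1, 2)])))
    temporal = np.sum(np.log(eps + v**2 + sum(
        second_derivative(v, ax)**2 for ax in [(0, 0), (0, 1), (0, 2)])))
    positivity = np.sum(np.where(v < 0, v**2, 0.0))
    return fidelity + lam * spatial + lam_t * temporal + lam_n * positivity
```

Setting λ_T = 0 removes the temporal term and recovers a frame-wise functional of the kind used in [14], mirroring the limit discussed below.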
The optimal solution of the problem in Equation (1) is found using an iterative minimization algorithm detailed in Appendix A.
Note that the static entropy-based image restoration algorithm described in [14] is also included as a limit for λ T → 0 in the above description. Whenever this algorithm is referred to in the following for comparison purposes, this means our implementation with λ T set to zero. We choose the abbreviations ER for the static entropy deconvolution and TD ER for the time-dependent entropy deconvolution.

Experiments: Imaging Data and Evaluation
The performance of the proposed spatio-temporal deconvolution was tested and compared to static entropy-based deconvolution and standard LR deconvolution (implementation of the MATLAB Image Processing Toolbox 2019a) by means of four different datasets: a synthetic image dataset, two fluorescence microscopy image datasets acquired in the context of Ca2+ microdomain analysis in T-cells and a last dataset acquired by confocal fluorescence Ca2+ imaging of an astrocyte in an acute mouse brain slice to illustrate the transferability of the proposed methodical developments to a different application context.
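For readers working in Python, the LR baseline can be approximated with scikit-image's Richardson-Lucy implementation applied frame by frame; this is a hedged stand-in for the MATLAB routine used in the paper, not the exact same implementation:

```python
import numpy as np
from skimage.restoration import richardson_lucy

def lr_deconvolve_stack(stack, psf, num_iter=10):
    """Frame-by-frame Richardson-Lucy deconvolution of a (T, H, W) stack.
    Each frame is processed independently, i.e., without any temporal
    coupling, exactly as in the static baselines compared in this work."""
    return np.stack([
        richardson_lucy(frame, psf, num_iter, clip=False)
        for frame in stack])
```

Because each frame is deconvolved in isolation, this baseline cannot exploit inter-frame consistency, which is the gap the proposed TD ER approach addresses.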

Dataset 1: Synthetic Image Data
The simulation of Ca2+ fluorescence microscopy time series data started on a black canvas. To generate a texture, Perlin noise was added [5]. The texture was used to place "glowing" spots (small Gaussian intensity peaks) in a randomly clustered manner. Afterwards, the Perlin noise was removed, and the spots were moved over time according to Brownian motion.
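The spot simulation described above can be sketched as follows; the clustered placement via a Perlin-noise texture is omitted for brevity (uniform random positions are used instead), and all parameter values are illustrative assumptions:

```python
import numpy as np

def simulate_spots(n_frames=20, size=64, n_spots=5, sigma=1.5,
                   step_std=0.8, seed=0):
    """Generate a (n_frames, size, size) stack of 'glowing' spots:
    Gaussian intensity peaks at random positions that drift over time
    by Brownian motion (i.i.d. Gaussian steps per frame)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(10, size - 10, (n_spots, 2))  # (y, x) spot centers
    yy, xx = np.mgrid[0:size, 0:size]
    frames = np.zeros((n_frames, size, size))
    for t in range(n_frames):
        for y, x in pos:
            frames[t] += np.exp(-((yy - y)**2 + (xx - x)**2)
                                / (2 * sigma**2))
        pos += rng.normal(0.0, step_std, pos.shape)  # Brownian step
    return frames
```

The resulting clean stack plays the role of the ground truth against which the restored images are compared.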
To degrade the images for deconvolution evaluation purposes, they were first convolved with a PSF. The applied PSF was the same as used for the real microscopic data of dataset 3 (see below). Then, Poisson and Gaussian noise were added. The noise levels were varied to analyze the performance of the image restoration algorithms as a function of input image data SNR. This can also be interpreted to simulate different exposure times.
Poisson noise was varied by dividing the signal by a parameter before calculating the Poisson distribution. The result was re-scaled by this same parameter to preserve the original dynamic range. The Gaussian noise was varied by adding Gaussian noise with different variances. Since the exact values of these parameters are rather arbitrary in synthetic images, we scaled them to dimensionless noise levels to better illustrate the amount of noise present in the images. The exact parameters and generation methods can be seen in the published source code.
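The degradation pipeline (PSF blur, scaled Poisson noise, additive Gaussian noise) can be sketched as follows; a Gaussian kernel stands in for the measured PSF of dataset 3, and all parameter values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(frames, psf_sigma=1.0, poisson_scale=0.1, gauss_var=0.01,
            seed=0):
    """Degrade a clean (T, H, W) stack: blur each frame with a (Gaussian
    stand-in) PSF, apply Poisson noise whose strength is tuned by dividing
    and re-scaling with `poisson_scale` (preserving the dynamic range),
    then add Gaussian noise with variance `gauss_var`."""
    rng = np.random.default_rng(seed)
    blurred = np.stack([gaussian_filter(f, psf_sigma) for f in frames])
    # larger poisson_scale -> fewer effective counts -> stronger Poisson noise
    noisy = rng.poisson(blurred / poisson_scale) * poisson_scale
    return noisy + rng.normal(0.0, np.sqrt(gauss_var), noisy.shape)
```

Sweeping `poisson_scale` and `gauss_var` reproduces the grid of noise levels over which the restoration algorithms are evaluated, which can also be read as simulating different exposure times.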
In contrast to the real live-cell microscopy imaging data considered subsequently, the synthetic imaging data allow for a quantitative comparison with the sought optimal images, i.e., the original input images before degradation by the PSF and noise application. For evaluation purposes, we calculated the structural similarity index (SSIM) between patches of the original images v_orig and the restored images v_dec, given by

SSIM(x, y) = ((2 µ_x µ_y + c_1)(2 σ_xy + c_2)) / ((µ_x² + µ_y² + c_1)(σ_x² + σ_y² + c_2)),   (2)

where µ_x denotes the average and σ_x² the variance of the intensity values of an image patch x, while σ_xy denotes the covariance between two image patches x and y. c_1 and c_2 are small constants.
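The SSIM formula can be implemented directly for a pair of patches; the constants c_1 and c_2 below are illustrative values for intensities in [0, 1]:

```python
import numpy as np

def ssim_patch(x, y, c1=1e-4, c2=9e-4):
    """Direct implementation of the SSIM formula in Equation (2) for two
    image patches x and y; c1 and c2 are small stabilizing constants."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2) /
            ((mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)))
```

Identical patches yield an SSIM of exactly 1; any mismatch in mean, variance or covariance pulls the value below 1.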
Moreover, as an approach that requires no ground truth reference images for a quantitative assessment of image restoration success, the Gaussian noise variance of the image background was estimated according to the patch-based approach presented in [17].

Dataset 2: Genetically Encoded Ca2+ Indicator for Optical Imaging (GECO) Tagged to Lysosomal TPC2 in Jurkat T-Cells
The second dataset was acquired in the context of the analysis of the role of Ca2+ release processes during the formation of initial Ca2+ microdomains in T-cells. Jurkat T-cells were transiently transfected with two pore channel 2 (TPC2) fused to a low-affinity genetically encoded Ca2+ indicator for optical imaging (GECO-1.2). Previously, this GECO was tagged to Orai1 channels in the plasma membrane, and only Ca2+ entry through Orai1 was visualized [18]. Here, only Ca2+ released from the lysosomes through TPC2 should be detected.

Dataset 4: Confocal Ca 2+ Imaging in Astrocytes In Situ
The fourth dataset was acquired in an astrocyte in an acute mouse brain slice. The genetically encoded Ca2+ indicator jGCaMP7b (Addgene #171118) was subcloned into an AAV-PhP.eB vector under the control of the GFAP promoter, and viruses were systemically applied by retrobulbar injection [19]. After 14 days, jGCaMP7b-expressing astrocytes were visualized with a confocal fluorescence microscope (eC1, Nikon, Düsseldorf, Germany; equipped with a 16× objective, NA 0.8) in acute brain slices of the olfactory bulb using a 488 nm laser for excitation (emission filter 515/15) at a frame rate of 1 Hz.

Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations
The following abbreviations are used in this manuscript: SNR: signal-to-noise ratio; SSIM: structural similarity index; ROI: region of interest; PSF: point spread function; LR: Lucy-Richardson (deconvolution); ER: static entropy deconvolution; TD ER: time-dependent entropy deconvolution; TPC2: two pore channel 2; GECO: genetically encoded Ca2+ indicator for optical imaging.

Appendix A.1. Minimization Algorithm
As described in the main text, our algorithm is conceptually an extension of the entropy deconvolution proposed by Arigovindan et al. [14]. There was, however, no corresponding source code available. We therefore developed the program completely from scratch, with the ideas in [14] as a starting point.

Introduced in Section 4, the cost functional to minimize is given by Equation (1). Taking its derivative and setting it to zero leads to the minimality condition for the t-th time frame, Equation (A2). The elements of the vector c are given by Equation (A3), where (t, m) denotes a composite index referencing both the time frame t and the pixel location m; the elements of the vector k are given by Equation (A4).

Since Equation (A2) is solved iteratively, a starting condition is needed. It is useful to define an approximation that can be easily inverted. Extending the ansatz in [14], we choose the initial condition for the t-th time frame given in Equation (A5). Note that the matrices H, L_i and D_i are circulant to represent the convolution; therefore, they are diagonalized by the discrete Fourier transform, which leads to the solution v̂ of Equation (A5). Here, ·̂ denotes the Fourier transform and diag(x) the diagonal matrix with the entries of x along the diagonal. For circulant matrices, the eigenvalues are given by the Fourier transform of one of the rows (all other rows are permutations thereof).

Going back to the full problem, the left-hand side of Equation (A2) can be re-written as a single operator A^(k) acting on v, with b_t = H^T w_t, where v again denotes the entire time series. The solution v can be found iteratively following the ansatz of [14],

v^(k+1) = v^(k) + ζ_k (Ã^(k))^{-1} (b − A^(k) v^(k)),

where k is the iteration index, ζ_k a damping factor and Ã^(k) an approximation of A^(k) that can be inverted easily. Here, Ã^(k) = P_I diag(m^(k)) P_I, where P_I is the inverse Fourier transform of (√P̂)^{-1}, with P̂ given by Equation (A7), and diag(m^(k)) is the diagonal approximation of A^(k), with m^(k) defined in terms of the vectors c and k from Equations (A3) and (A4) and the constant vector h̃ with elements given by Equation (A12).

To facilitate notation in the algorithm, some abbreviations are introduced, with c and k given by Equations (A3) and (A4) and λ_N = 100λ. To evaluate the "goodness" of the iteration result, a corresponding convergence measure is introduced. The resulting deconvolution algorithm is given in Algorithm A1.
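The Fourier-domain initialization described in the appendix can be sketched as follows. Since the exact form of P̂ in Equation (A7) is not reproduced here, the code shows the generic Tikhonov-like inverse filter that such an easily invertible, FFT-diagonalized approximation typically takes; all kernel choices and parameters are illustrative assumptions:

```python
import numpy as np

def initial_estimate(w, psf_kernel, l_kernels, d_kernels, lam, lam_t):
    """Sketch of a Fourier-domain initial condition: the circulant
    operators H, L_i, D_i are diagonalized by the FFT, so the regularized
    inverse reduces to element-wise division in frequency space.
    `w` is the (T, H, W) measured series; kernels are assumed centered
    at the origin under periodic (wrap-around) boundary conditions."""
    shape = w.shape
    H = np.fft.fftn(psf_kernel, s=shape)
    denom = np.abs(H)**2
    denom = denom + lam * sum(np.abs(np.fft.fftn(k, s=shape))**2
                              for k in l_kernels)
    denom = denom + lam_t * sum(np.abs(np.fft.fftn(k, s=shape))**2
                                for k in d_kernels)
    v0 = np.fft.ifftn(np.conj(H) * np.fft.fftn(w) / denom)
    return np.real(v0)
```

With a trivial PSF and no regularization, this reduces to the identity, which is a useful sanity check; with second-derivative kernels in `l_kernels` and `d_kernels`, it yields a smoothed starting point for the iteration.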
Going back to the full problem, the left-hand side of Equation (A2) can be re-written as a single operator as where b t = H T w t , and v again denotes the entire time series. The solution, i.e., v, can be found iteratively following the ansatz of [14] with where k is the iteration index, ζ k a damping factor andÃ (k) an approximation of A (k) that can be inverted easily. HereÃ (k) = P I diag(m (k) )P I , where P I is the inverse Fourier transform of ( √P ) −1 , whereP is given by Equation (A7) and diag(m (k) ) is the diagonal approximation of A (k) with m (k) where the vectors c and k are given by Equations (A3) and (A4), and the vectorh is constant with elementsh (A12) To facilitate notation in the algorithm, some abbreviations are introduced as follows with c and k given by Equations (A3) and (A4) and λ N = 100λ. Another definition is To evaluate the "goodness" of the iteration result, the following measure is introduced The resulting deconvolution algorithm is given in Algorithm A1.