Article

Fast Analysis of Time-Domain Fluorescence Lifetime Imaging via Extreme Learning Machine

1 Department of Biomedical Engineering, University of Strathclyde, Glasgow G4 0RE, UK
2 Department of Physics, University of Strathclyde, Glasgow G4 0NG, UK
* Author to whom correspondence should be addressed.
Sensors 2022, 22(10), 3758; https://doi.org/10.3390/s22103758
Submission received: 22 March 2022 / Revised: 11 May 2022 / Accepted: 13 May 2022 / Published: 15 May 2022
(This article belongs to the Collection Biomedical Imaging and Sensing)

Abstract:
We present a fast and accurate analytical method for fluorescence lifetime imaging microscopy (FLIM) using the extreme learning machine (ELM). We used extensive metrics to evaluate ELM and existing algorithms. First, we compared these algorithms using synthetic datasets. The results indicate that ELM can obtain higher fidelity, even in low-photon conditions. Afterwards, we used ELM to retrieve lifetime components from human prostate cancer cells loaded with gold nanosensors, showing that ELM also outperforms the iterative fitting and non-fitting algorithms. Compared with a computationally efficient neural network, ELM achieves comparable accuracy with less training and inference time. As ELM requires no back-propagation during the training phase, its training speed is much higher than that of existing neural network approaches. The proposed strategy is promising for edge computing with online training.

1. Introduction

Fluorescence lifetime imaging microscopy (FLIM) has attracted growing interest in biomedical applications, such as surgical procedures [1], tumor detection [2,3], cancer diagnosis [4], and the study of protein interaction networks using Förster resonance energy transfer (FRET) techniques [5]. It can quantitatively investigate the local microenvironments of fluorophores by measuring fluorophores’ lifetimes. For example, FLIM can observe dynamic metabolic changes in living cells by measuring the autofluorescence lifetimes of NADH and NADPH, which mediate cell fate and are therefore relevant to diabetes and neurodegeneration research [6]. Fluorescence lifetime is the average time a fluorophore stays excited before releasing fluorescence. The process can be analyzed in the time or frequency domain. Time-correlated single-photon counting (TCSPC) techniques [7] are more widely used [8,9,10] due to their superior signal-to-noise ratio (SNR) and precise temporal resolution (in picoseconds) compared with frequency-domain approaches. During data acquisition, emitted photons are detected by a single-photon detector, wherein a high-precision stopwatch circuit records the timestamps of detected photons. The stopwatch circuit generates an exponential histogram, from which the fluorescence lifetime is extracted.
Estimating lifetime parameters is an ill-posed problem with high computational complexity. Numerous algorithms have been developed to quantify lifetimes and relevant parameters. Iterative fitting and optimization approaches were reported to deduce fluorescence lifetimes. A convex optimization method [11] was utilized for high-resolution FLIM, where the accuracy is related to fine-tuned hyperparameters in the cost function. An F-value-based optimization algorithm [12] was used to minimize signal distortion introduced by pile-up effects and the dead time of single-photon detectors. A Laguerre expansion method [13,14,15] was reported to speed up least-squares deconvolutions.
On the other hand, non-iterative fitting methods were introduced to reduce computing complexity whilst maintaining high accuracy. A nonparametric empirical Bayesian framework [16] was adopted for lifetime analysis based on a statistical model, where the expectation–maximization algorithm was employed to solve the optimization problem. A hardware-friendly, fitting-free center-of-mass method (CMM) [17,18,19] was proposed to deliver fast analysis and has been applied to a flow-cytometry system [20,21]. Integral equation methods (IEM) [22] were also implemented in FPGA devices to provide real-time analysis. Direction-of-arrival estimation [23] was adopted to deliver a non-iterative, model-free lifetime reconstruction strategy requiring only a few time bins. A histogram cluster method [24] divides histograms into clusters instead of processing histograms pixel by pixel, enhancing the analysis speed. However, challenges remain. Firstly, most of these algorithms need a long acquisition time to guarantee reconstruction fidelity, likely causing photobleaching. A fast algorithm suitable for low-photon-count conditions is, therefore, desirable. Secondly, iterative or probabilistic methods are not easily portable to hardware, impeding on-chip computing in TCSPC systems.
Artificial neural networks (ANNs) have proved promising for FLIM analysis. FLI-NET [25] used a 3-D convolutional neural network (CNN) to analyze bi-exponential decays via a branched architecture. Its compressed-sensing [26] version used a single-pixel detector and a digital micromirror device to reconstruct intensity and lifetime images. A 1-D CNN architecture [27] was introduced to reduce the computational load for multi-exponential analysis, using a similar branched structure. A multi-layer perceptron (MLP) method [28] was proposed for mono-exponential analysis with high-spatial-resolution SPAD arrays. Another MLP [29] was reported that combines maximum likelihood estimation algorithms and fully connected layers to resolve bi-exponential decays. Moreover, another ANN technique [30] was introduced to fuse high-resolution fluorescence intensity and low-resolution lifetime images for wide-field FLIM systems. However, the training and inference of these ANNs are slow. Even with powerful GPUs, training a network usually takes hours. It is also time-consuming to retrain a model when the lifetime range is altered.
Pixel-wise lifetime recovery has been widely used, since it is consistent with the sensor readout and more computationally economical than 3-D algorithms. The extreme learning machine (ELM) [31] is an efficient algorithm for processing 1-D signals in biological applications, such as electrocardiogram (ECG) and electroencephalogram (EEG) signals [32]. Inspired by this literature, we used ELM to reconstruct lifetimes from 1-D histograms using multi-variable regression. The contributions of the ELM-based lifetime inference approach are as follows:
(1) It is data-driven without a back-propagation learning strategy. It requires less training time than existing ANN methods, paving the way for fast online training on embedded hardware for FLIM.
(2) It can resolve the mono- and bi-exponential models widely employed in practical experiments, wherein the amplitude- and intensity-weighted average lifetimes were investigated.
(3) Reconstructed lifetime parameters from ELM are more accurate than those from fitting and non-fitting algorithms on synthetic and experimental data under different photon-counting conditions, whilst maintaining a fast computing speed.
This paper presents the theory of applying ELM to FLIM (Section 2), comparisons of algorithms on synthetic data in low-photon-count scenarios (Section 3), and comparisons of algorithms on an incubated living cell at different photon-count levels (Section 4).

2. Applying ELM to FLIM

Due to ELM’s superior capability of processing 1-D signals, we trained ELM on synthetic 1-D histograms and applied it to measured histograms at inference. We also describe the probabilistic model of photon arrivals in FLIM data and the artificial IRF based on TCSPC.

2.1. ELM Theory

Conventionally, back-propagation is the gold standard to minimize objective functions in most ANN architectures. ELM is theoretically a single hidden layer feed-forward neural network (SLFN) that uses the Moore–Penrose matrix inversion and the minimum-norm least-squares solution to train models. Training can thus be accelerated significantly compared with iterative back-propagation procedures, whilst avoiding the slow convergence and over-fitting that back-propagation can suffer from. Assume H training samples, where $\mathbf{x}_i = [x_{i1}, x_{i2}, \ldots, x_{im}]^T \in \mathbb{R}^m$ and $\mathbf{y}_i = [y_{i1}, y_{i2}, \ldots, y_{in}] \in \mathbb{R}^n$ are the ith input and target vectors, respectively, and suppose there are L nodes in the single hidden layer; the output matrix of the hidden layer can be defined as:
$$\mathbf{A} = \begin{bmatrix} \varphi(\mathbf{w}_1 \cdot \mathbf{x}_1 + b_1) & \cdots & \varphi(\mathbf{w}_L \cdot \mathbf{x}_1 + b_L) \\ \vdots & \ddots & \vdots \\ \varphi(\mathbf{w}_1 \cdot \mathbf{x}_H + b_1) & \cdots & \varphi(\mathbf{w}_L \cdot \mathbf{x}_H + b_L) \end{bmatrix}_{H \times L},$$
where φ(·) is the activation function (a sigmoid usually achieves relatively good results), and $\mathbf{w}_l = [w_{l1}, w_{l2}, \ldots, w_{lm}]^T$ and $b_l$, $l = 1, \ldots, L$, are the weights and biases between the input nodes and the hidden layer, randomly assigned and fixed before training. Let $\boldsymbol{\beta}_l$ be the weight vector connecting the lth hidden node and the output nodes, defined as:
$$\boldsymbol{\beta} = \begin{bmatrix} \boldsymbol{\beta}_1^T \\ \vdots \\ \boldsymbol{\beta}_L^T \end{bmatrix} = \begin{bmatrix} \beta_{11} & \cdots & \beta_{1n} \\ \vdots & \ddots & \vdots \\ \beta_{L1} & \cdots & \beta_{Ln} \end{bmatrix}_{L \times n}.$$
To learn the parameter matrix of β with a dimension of L × n, the ridge loss function is widely adopted as:
$$\underset{\boldsymbol{\beta} \in \mathbb{R}^{L \times n}}{\arg\min} \left\| \mathbf{A}\boldsymbol{\beta} - \mathbf{Y} \right\|^2 + \lambda \left\| \boldsymbol{\beta} \right\|^2,$$
where A is the matrix composed of the activation functions with dimensions H × L; Y is a matrix with dimensions H × n containing ground truth (GT) data:
$$\mathbf{Y} = \begin{bmatrix} \mathbf{y}_1^T \\ \vdots \\ \mathbf{y}_H^T \end{bmatrix} = \begin{bmatrix} y_{11} & \cdots & y_{1n} \\ \vdots & \ddots & \vdots \\ y_{H1} & \cdots & y_{Hn} \end{bmatrix}_{H \times n}.$$
Through solving the loss function, we can obtain the matrix β by:
$$\hat{\boldsymbol{\beta}} = \left( \mathbf{A}^T \mathbf{A} + \lambda \mathbf{I} \right)^{-1} \mathbf{A}^T \mathbf{Y},$$
where I is an identity matrix with dimensions L × L, and the hyperparameter λ helps obtain a reliable result when the matrix $\mathbf{A}^T \mathbf{A}$ is not full rank.
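The training and inference steps above can be sketched in a few lines of NumPy. This is a minimal illustration of Equations (1)–(5) under the stated definitions, not the authors’ implementation; the helper names (`elm_train`, `elm_predict`) are hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def elm_train(X, Y, L=500, lam=1e-3, seed=0):
    """Train an ELM. X is (H, m) inputs, Y is (H, n) targets.
    Hidden weights/biases are random and fixed; only beta is learned."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], L))   # input-to-hidden weights w_l
    b = rng.standard_normal(L)                 # hidden biases b_l
    A = sigmoid(X @ W + b)                     # (H, L) hidden-layer output matrix
    # Ridge solution: beta = (A^T A + lam I)^-1 A^T Y, solved without explicit inversion
    beta = np.linalg.solve(A.T @ A + lam * np.eye(L), A.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass: one matrix product per layer, no iteration."""
    return sigmoid(X @ W + b) @ beta
```

Because the only learned quantity is the closed-form β, training reduces to a single linear solve, which is why no back-propagation epochs are needed.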

2.2. TCSPC Model for FLIM

Fluorescence emission can be modeled with mono- or multi-exponential decay functions, and a bi-exponential model can approximate a signal following a multi-exponential decay. Therefore, we focus on lifetime analysis with mono- and bi-exponential models in this work. These decay functions can be used to formulate measured histograms containing multiple lifetime components and their corresponding amplitude fractions. For each pixel, the measured decay consisting of K lifetime components is formulated as:
$$h(t) = IRF(t) * P \sum_{k=1}^{K} \alpha_k e^{-t/\tau_k} + n(t),$$
where IRF(·) is the system’s instrument response function, P is proportional to the fluorescence intensity, τk is the kth lifetime component, αk is the kth amplitude fraction, and n(t) includes Poisson noise [33] and the dark count rate of the sensor; t = 1, 2, …, T is the time-bin index of the TCSPC module. As photon arrivals follow the Poisson distribution, with C cycles of laser excitation, the ultimate distribution in one pixel can be derived as:
$$D \sim \mathrm{Poisson}\left( C \int_{0}^{T} h(t)\, dt \right).$$
Based on this theoretical TCSPC model, we can generate training datasets for ELM, where synthetic curves correspond to vectors in the input matrix x. Apart from the individual lifetime components, we define the amplitude-weighted average lifetime τA
$$\tau_A = \sum_{k=1}^{K} \alpha_k \tau_k$$
and intensity-weighted average lifetime τI
$$\tau_I = \frac{\sum_{k=1}^{K} \alpha_k \tau_k^2}{\sum_{k=1}^{K} \alpha_k \tau_k}$$
to evaluate ELM.
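A minimal sketch of this forward model (Equations (6)–(9)) is shown below. The function names are illustrative, the defaults are taken from the simulation settings described later in Section 2.3, the intensity factor P is absorbed into the photon-count normalization, and amplitudes are assumed to sum to one:

```python
import numpy as np

def synth_histogram(tau=(0.3, 3.0), alpha=(0.5, 0.5), n_bins=256, bin_w=0.039,
                    t0_bin=14, fwhm=0.1673, photons=1000, seed=0):
    """One noisy TCSPC histogram: a multi-exponential decay convolved with a
    Gaussian IRF, scaled to a target photon count, then Poisson-sampled."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_bins) * bin_w                       # bin times in ns
    decay = sum(a * np.exp(-t / tk) for a, tk in zip(alpha, tau))
    irf = np.exp(-4 * np.log(2) * (t - t0_bin * bin_w) ** 2 / fwhm ** 2)
    h = np.convolve(irf, decay)[:n_bins]                # IRF * decay
    h = h / h.sum() * photons                           # absorbs P and C
    return rng.poisson(h)                               # Eq. (7) sampling

def tau_avg(tau, alpha):
    """Amplitude-weighted (tau_A, Eq. (8)) and intensity-weighted
    (tau_I, Eq. (9)) average lifetimes, assuming sum(alpha) = 1."""
    tau, alpha = np.asarray(tau), np.asarray(alpha)
    return np.sum(alpha * tau), np.sum(alpha * tau ** 2) / np.sum(alpha * tau)
```

For example, `tau_avg((0.3, 3.0), (0.5, 0.5))` gives τA = 1.65 ns and τI ≈ 2.75 ns, illustrating how τI is pulled towards the longer component.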

2.3. Training Data Preparation

The training datasets contain 20,000 synthetic histograms, with ground-truth (GT) lifetime parameters generated to train the ELM network. Synthetic decays comply with Equation (6), and the IRF curve is modelled via a Gaussian:
$$IRF(t) = e^{-4 \ln 2\, (t - t_0)^2 h^2 / FWHM^2},$$
where the FWHM (0.1673 ns) is compatible with the two-photon FLIM system used for the measurements, t0 (the 14th bin) is the index of the peak, and h (0.039 ns) is the bin width of the TCSPC system. Both mono- and bi-exponential decay models were generated for performance evaluation. Lifetime constants τ were set in [0.1, 5] ns for the mono-exponential decay model, and τ1, τ2 were set in [0.1, 1] and [1, 3] ns for bi-exponential models. The structure of ELM is depicted in Figure 1. In the inference phase, the input vector is a pixel-wise histogram measured by a TCSPC system containing 256 time bins. The number of output nodes depends on the number of lifetime components defined in the synthetic datasets. For instance, if the measured data follow a bi-exponential decay model, the output layer is configured with three nodes, namely τ1, τ2, and α. Average lifetimes can then be obtained from Equations (8) and (9). All the histograms from the sensor are fed into the network sequentially, and lifetime parameters are obtained from the output nodes pixel by pixel. The number of nodes in the hidden layer can be flexibly adjusted to trade off accuracy against computing time.

3. Synthetic Data Analysis

τA and τI are used to estimate energy transfer for FRET or to indicate fluorescence quenching behaviours [34]. This section compares NLSF, BCMM, and ELM in retrieving τA from bi-exponential decays. Likewise, we compared NLSF, CMM, and ELM in reconstructing τI. In addition, ELM was compared with existing ANNs for FLIM in terms of (1) the network scale and (2) the training time. Multiple widely used metrics (F-value, SSIM, R2, and MSE) were adopted for performance evaluations.

3.1. Comparisons of Individual Lifetime Components

As NLSF was usually adopted in previous studies [25,27,35], we compared the inference performance of ELM and deconvolution-based NLSF (implemented with the lsqcurvefit(·) function in MATLAB using the iterative Levenberg–Marquardt algorithm) in Figure 2. For this, 2000 simulated testing datasets were generated for recovering single and double lifetimes. Here, we define the absolute error Δg = |g − gest|, where g = τ1, τ2, α, τA and gest is the estimated g. ΔgELM and ΔgNLSF are the absolute errors for ELM and NLSF. Figure 2a,b show the Δg of ELM and NLSF for mono-exponential decays, respectively. Δg decreases as the peak intensity increases, and ΔgELM is smaller than ΔgNLSF. Likewise, Figure 2c,d show the Δg plots for g = τ1, τ2, and α, where ΔgELM is smaller than ΔgNLSF. Similarly, Figure 2e,f indicate that ELM obtained a much more accurate τA than NLSF. Therefore, ELM performs better than NLSF for mono- and bi-exponential decays. Additionally, as shown in Figure 3, we visually inspected the estimated τ1, τ2, and α based on pre-defined variables in synthetic 2-D images. We used the SSIM to evaluate the reconstructed images in Figure 3a,b. The 2-D lifetime images were reconstructed from a 3-D synthetic data cube composed of either mono- or bi-exponential decays (256 × 256 × 256, representing spatial and temporal dimensions). All the GT lifetime parameters (τ and α) are pre-defined in Equation (6). The 2-D lifetime images are recovered pixel by pixel from the noisy synthetic 3-D data cubes. Figure 3a shows reconstructed 2-D images from mono-exponential decays with GT τ varying from 0.1 to 5 ns. Likewise, Figure 3b shows the estimated τ1, τ2, and α for bi-exponential decays. Results obtained from ELM are more accurate than those from NLSF. Figure 3c shows the phasor plots of the GT distributions of the mono- (Figure 3a) and bi-exponential (Figure 3b) decays. From the phasor theory [36], cluster points of mono-exponential decays should lie on the semi-circle.
For bi-exponential decays, two-lifetime components are indicated by the intersections of a fitted line and the semi-circle. We utilized R2 defined as:
$$R^2 = 1 - \frac{\sum_{i=1}^{P} \left( \tau_A^i - \tau_{A\_GT}^i \right)^2}{\sum_{i=1}^{P} \left( \tau_A^i - \tau_{A\_Ave} \right)^2},$$
to evaluate the estimation consistency, where $\tau_A^i$ is the predicted parameter, $\tau_{A\_GT}^i$ is the GT parameter, τA_Ave is the average of the GT parameters, and P is the number of simulated decay curves. As shown in Figure 3d, the scatter plots show that ELM is closer to the GT, while NLSF shows more outliers. We further evaluated ELM and NLSF using the F-value, defined in Equation (12) [37], with synthetic mono- and bi-exponential decays:
$$F = \frac{\sqrt{I}\, \delta_x}{x}.$$
F ≥ 1, and a lower F means higher precision, where I is the detected photon count, δx is the standard deviation of the estimated lifetime parameter, and x is the GT parameter. We generated 200 synthetic decays for the given ranges of lifetimes and peak intensities in Figure 4. Figure 4a shows the F-value of mono-exponential decays versus the lifetime in [0.1, 5] ns. Figure 4b shows the F-value of bi-exponential decays versus τ1, τ2, and α in [0.1, 1] ns, [1, 3] ns, and [0, 1], respectively. We assigned 200 decays with a total photon count below 2000 per synthetic histogram for both scenarios. Both figures show that ELM obtained a smaller F than NLSF, meaning that ELM can achieve better precision. Furthermore, we defined the bias Δτ/τ to evaluate ELM and NLSF versus the photon count. τ was set to 3.0 ns for mono-exponential decays; τ1, τ2, and α were set to 0.3 ns, 3.0 ns, and 0.5 for bi-exponential decays. Figure 4c shows that the bias of NLSF increases as the photon count increases, which is worse than ELM. Figure 4d shows that the bias of ELM is smaller than that of NLSF and that ELM is more robust to varying photon counts. Moreover, NLSF is also sensitive to the initial conditions of the lifetime parameters [34]. The bias decreases when the initial conditions are close to the GT values, meaning that users need prior knowledge about the parameters to be extracted.
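The two evaluation metrics can be computed directly from repeated estimates. The snippet below follows the paper’s definitions (Equations (11) and (12)); the function names are hypothetical, and note that the R2 denominator follows the text, i.e., deviations of the predictions from the GT mean:

```python
import numpy as np

def f_value(estimates, gt, photons):
    """Photon-economy figure of merit (Eq. (12)): F = sqrt(I) * sigma_x / x.
    F = 1 is the shot-noise limit; lower F means higher precision."""
    return np.sqrt(photons) * np.std(estimates) / gt

def r_squared(pred, gt):
    """R^2 as written in Eq. (11): residuals against GT in the numerator,
    deviations of the predictions from the GT mean in the denominator."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    ss_res = np.sum((pred - gt) ** 2)
    ss_tot = np.sum((pred - np.mean(gt)) ** 2)
    return 1.0 - ss_res / ss_tot
```

Both metrics are dimensionless, so they allow algorithms to be compared across different lifetime ranges and photon-count levels.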

3.2. Comparisons of τA

We evaluated ELM in estimating τA under various count conditions. As shown in Figure 5a, we set three regions at three count levels, changing τA from top to bottom. We refer to these regions as low, middle, and high counts hereafter. Figure 5b depicts the GT τA. From Figure 5c,d, ELM shows a more accurate τA image than NLSF, with ELM producing a smaller MSE than NLSF in each region. We also included the non-fitting BCMM [18] in the comparison due to its fast speed and capacity to resolve bi-exponential decays. From Figure 5e, BCMM is not robust at low counts, although it outperforms NLSF in the middle- and high-count regions. Furthermore, ELM obtained better results than BCMM. BCMM is less photon-efficient and is sensitive to the measurement window T (T should be larger than 5 × τ2; otherwise, bias correction is needed [18]).
Table 1 compares ELM with NLSF regarding the time consumed by the inference (forward-propagation) tasks in Figure 3a,b. NLSF consumes more time resolving mono-exponential decays than bi-exponential decay models. In contrast, the analysis time of ELM is not affected by the number of lifetime components and is substantially less than that of NLSF.

3.3. Comparisons of τI

CMM [17] achieves the fastest speed for intensity-weighted average lifetime analysis. We further compared CMM with ELM for τI reconstruction. As shown in Figure 6, the result from ELM is better than NLSF but slightly worse than CMM. However, CMM is sensitive to, and biased by, the measurement window if bias correction is not included. Although CMM obtained a smaller overall MSE, the bias grows as τI becomes longer. This agrees with the conclusion of previous work [34], indicating that CMM can cause misleading inferences when there are multiple lifetime species in the field of view. Further, τI sometimes yields a shorter dynamic lifetime range than τA, as τI cannot correctly distinguish clusters with different lifetimes, especially for strong FRET phenomena [5]. ELM and CMM achieve shorter processing times than NLSF and BCMM, as shown in Table 1. Although ELM is slightly slower than CMM in this case, the consumed time varies with the number of nodes in the hidden layer. Figure 7a shows the training errors, indicated by mean absolute errors (MAE), versus different numbers of nodes in the hidden layer. Here, the number of hidden nodes is set to 500 for both mono- and bi-exponential models, as there was no apparent MAE decrease beyond this point, and a moderate processing time was achieved, as shown in Figure 7b. Moreover, we compared ELM with relevant ANNs for FLIM. Since ELM uses the Moore–Penrose matrix inversion strategy to learn parameters instead of back-propagation, it is much faster. As shown in Table 2, although ELM has more parameters than the 1-D CNN [27], its training time is much shorter than those of the other existing studies [25,27,28,29]. Many CNN hyperparameters must be fine-tuned, and batch normalization must be implemented to avoid gradient vanishing [38]. In contrast, ELM’s architecture is much simpler, and we only need to adjust the number of nodes in the hidden layer.
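For reference, the core of the fitting-free CMM idea [17] is a photon-weighted mean arrival time. The sketch below is a simplified illustration under idealized assumptions (no IRF, decay starting at the first bin) and omits the bias correction discussed above; the function name is hypothetical:

```python
import numpy as np

def cmm_lifetime(hist, bin_w, t0_bin=0):
    """Centre-of-mass (CMM) lifetime estimate: the mean photon arrival time
    relative to the decay start. For a mono-exponential decay this approaches
    tau only when the measurement window T is much longer than the lifetime
    (cf. the T > 5*tau2 condition in the text); otherwise it is biased."""
    t = (np.arange(len(hist)) - t0_bin) * bin_w
    return np.sum(t * hist) / np.sum(hist)
```

Its cost is one weighted sum per pixel, which is why CMM maps so readily onto hardware and achieves the shortest processing time in Table 1.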
Furthermore, the efficient training process enables online training and is suitable for embedded hardware implementations [39]. ELM is highly reconfigurable, providing a flexible solution to balance the trade-off between computing complexity and accuracy. The evaluations of ELM and NLSF were conducted in MATLAB R2016a on a 64-bit CPU (Intel Core i5-4200H @ 2.80 GHz) with 8 GB memory. Notably, the other studies in Table 2 used much more powerful GPUs to train their models. Despite this, ELM still delivers the shortest training time.
Based on the analysis of synthetic datasets, ELM is more robust for analyzing mono- and bi-exponential decays than traditional NLSF methods. We will evaluate ELM using realistic experiment data in the next section.

4. Experimental FLIM Data Analysis

To investigate the feasibility of ELM for experimental FLIM data, we utilized living prostate cancer cells incubated with functionalized gold nanorods (GNRs). A commercial two-photon FLIM system was used to acquire raw 3-D data cubes. This section compares ELM with 1D-CNN, NLSF, and BCMM.

4.1. Experimental Setup and Sample Preparation

We used the proposed ELM to analyze a living cellular sample acquired by a two-photon FLIM system. To achieve efficient imaging contrast, prostate cancer cells were treated with GNRs functionalized with Cy5-labeled ssDNA [40]. GNRs have a tunable longitudinal surface plasmon resonance and enable interactions between the strong electromagnetic field and activated fluorophores in biological samples [41,42]. Functionalizing GNRs with fluorophore-labelled DNA has been adopted to probe endocellular components [43,44], including microRNA detection for human breast cancer and monitoring intracellular levels of metal ions in human serum. Here, prostate cancer cells were incubated with the nanoprobe for 6 h and washed three times with phosphate-buffered saline (PBS). Cells were fixed with 4% paraformaldehyde for 15 min. After removing the paraformaldehyde, cells were washed with distilled water three times. The two-photon FLIM platform consists of a confocal microscope (LSM 510, Carl Zeiss, Oberkochen, Germany) with 256 × 256 spatial resolution, where the scan module includes four individual PMTs. A TCSPC module (SPC-830, Becker & Hickl GmbH, Berlin, Germany) with 256 time bins and 39-picosecond timing resolution was mounted on the microscope. A tunable femtosecond Ti:sapphire laser (Chameleon, Coherent, Santa Clara, CA, USA) was configured with a repetition frequency of 80 MHz and a wavelength of 850 nm to excite the sample. The emission light was collected using a 60× water-immersion objective lens (numerical aperture = 1.0) and a 500–550 nm bandpass filter. One hundred scanning cycles, each taking three seconds, were used to prevent GNR heating while obtaining sufficient photons.

4.2. Algorithm Evaluation

Due to the strong two-photon photoluminescence of GNRs, high optical discernibility can be observed between the GNRs and cell tissues [45]. Figure 8a shows the grey-scale intensity image of the sample, where the bright spots are GNRs. As background pixels with fewer photon counts carry little useful information, they can be neglected during the analysis. In this case, a threshold (100 photon counts) was applied to exclude these pixels. As conventional data readout from TCSPC systems is pixel by pixel, accumulated histograms can be fed directly into the ELM without data conversion. The biological sample would need to be illuminated with a long acquisition time to achieve a high enough SNR to obtain a reliable reference; however, a long acquisition time can easily lead to photobleaching. A previous study [27] reported that a phasor projection image can alternatively serve as a reference image to identify autofluorescence and gold nanoprobes. Two clusters, representing the autofluorescence of the cell and the gold nanoprobes, can be observed in the phasor plot shown in Figure 8b after pixel filtering. Cluster 2 contains the majority of pixels, with shorter lifetimes depicting gold nanoprobes. A fitted line was obtained by a linear regression algorithm:
$$\underset{a,b}{\arg\min} \sum_{n=1}^{N} \left\| s_n - (a g_n + b) \right\|_2^2,$$
where a and b are the slope and intercept of the fitted line, and gn and sn are the locations of pixels in the phasor domain. The intersection points A(ga, sa) and B(gb, sb) can be obtained accordingly. As shown in Figure 8c, we employed the pixel-wise phasor score ρ to generate a phasor projection image by computing:
$$\rho_n = \left[ (g_n - g_2)(g_1 - g_2) + (s_n - s_2)(s_1 - s_2) \right] / D,$$
where D is the Euclidean distance between A and B, and n indexes the filtered pixels.
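A sketch of the phasor coordinates and the projection score of Equation (14) is given below, assuming points 1 and 2 are the intersection points A and B; the function names are hypothetical. Dividing by D as in Equation (14) yields the projected distance along AB (dividing by D squared instead would normalize the score to [0, 1]):

```python
import numpy as np

def phasor(hist, harmonic=1):
    """First-harmonic phasor coordinates (g, s) of one pixel histogram."""
    n = len(hist)
    w = 2 * np.pi * harmonic * np.arange(n) / n
    total = np.sum(hist)
    return np.sum(hist * np.cos(w)) / total, np.sum(hist * np.sin(w)) / total

def phasor_score(g, s, p1, p2):
    """Project (g, s) onto the line through p1 and p2 (Eq. (14));
    0 at p2, D (the |p1 - p2| distance) at p1."""
    D = np.hypot(p1[0] - p2[0], p1[1] - p2[1])
    return ((g - p2[0]) * (p1[0] - p2[0]) + (s - p2[1]) * (p1[1] - p2[1])) / D
```

Mapping every filtered pixel through `phasor` and then `phasor_score` produces the phasor projection image used as a reference in Figure 8c.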
Comparing the τA images obtained from ELM (Figure 8d), 1D-CNN (Figure 8e), NLSF (Figure 8f), and BCMM (Figure 8g), the image from NLSF shows obvious bias because, as mentioned, NLSF is sensitive to initial values and sometimes fails to converge. Given that the 1-D CNN [27] achieved high speed and accuracy, we compared ELM and the 1-D CNN in terms of τA using the same training datasets. From Figure 8d,e, ELM is in good agreement with the 1D-CNN, and they show similar distributions of pixel counts, as shown in Figure 8h. However, the NLSF result in Figure 8f is significantly more biased than those of the other three algorithms. This is because NLSF involves deconvolution, which causes non-convergent results when dealing with the ultra-short decays of the gold nanoprobes. As mentioned, BCMM is not robust across varying ranges of photon counts; many pixels fall outside the defined range (0 to 2 ns), shown as white pixels in Figure 8g. Nevertheless, BCMM is a fast algorithm that took only 6.53 s to reconstruct the image. The inference time of the 1-D CNN on a GPU (NVIDIA GTX 850M) was 116.43 s, whereas ELM consumed only 1.73 s during inference on the CPU.

4.3. Low Counts Scenarios

Fragile tissues, such as retinas, cannot be excited by a laser for a long time. To avoid tissue damage and photobleaching caused by a long acquisition time, we investigated ELM’s performance on data acquired in low-photon scenarios. We kept the experimental setup identical to Section 4.1. To acquire fewer emitted photons, we chose a field of view with fewer nanoprobes and set different numbers of scanning cycles in the software. As the number of cycles increased, we adjusted the intensity threshold to guarantee that sufficient pixels were retained. The value of the intensity threshold should be fine-tuned according to the bio-sample (5% of total counts in our experiments). Figure 9a,b depict the intensity and reconstructed τA images, respectively. The lifetimes of cells and nanoprobes can be consistently reconstructed, even when the number of cycles decreases to 10. Notably, nanoprobes and cell boundaries cannot be identified in the intensity images with 10 and 40 cycles, yet the lifetime images restore the lifetimes and reveal the cell boundaries. Below each lifetime image in Figure 9b, histograms of pixel occurrence are shown with their means μ and standard deviations σ. There was no distinct shift in μ and σ at different collection cycles, indicating that ELM is robust, even at low counts.

5. Conclusions

In summary, we presented an ELM architecture to accurately retrieve fluorescence lifetime parameters from mono- and bi-exponential decays. Both synthetic and realistic experimental FLIM datasets were employed to evaluate the proposed network. Our results show that ELM outperforms fitting and non-fitting methods on synthetic datasets at different photon counts. Further, ELM can better identify GNRs and cells and yields results comparable to the 1-D CNN method. Since ELM does not need back-propagation to train the network, it is more flexible to reconfigure the network topology. Due to its online training potential, it is promising to implement ELM on embedded hardware in the future, coupled with sensors and readout circuits to achieve fast on-chip training and inference. More FLIM applications relying on gold nanoparticles, such as cellular cancer diagnosis, will benefit from this study.

Author Contributions

Conceptualization and methodology Z.Z.; software, Z.Z. and D.X.; validation, Z.Z.; formal analysis, Z.Z.; investigation, Z.Z.; Bio-sample preparation, Z.L. and Q.W.; writing—original draft preparation, Z.Z.; writing—review and editing, Z.Z., Y.C. and D.D.U.L.; visualization, Z.Z. and W.X.; supervision, D.D.U.L.; project administration, D.D.U.L.; funding acquisition, D.D.U.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported, in part, by Medical Research Scotland (MRS-1179-2017), and BBSRC (BB/V019643/1 and BB/K013416/1). We would like to acknowledge Photon Force, Ltd. and Datalab for supporting this project.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gorpas, D.; Ma, D.; Bec, J.; Yankelevich, D.R.; Marcu, L. Real-Time Visualization of Tissue Surface Biochemical Features Derived from Fluorescence Lifetime Measurements. IEEE Trans. Med. Imaging 2016, 35, 1802–1811.
  2. Harbater, O.; Ben-David, M.; Gannot, I. Fluorescence Lifetime and Depth Estimation of a Tumor Site for Functional Imaging Purposes. IEEE J. Sel. Top. Quantum Electron. 2010, 16, 981–988.
  3. Eruv, T.; Ben-David, M.; Gannot, I. An Alternative Approach to Analyze Fluorescence Lifetime Images as a Base for a Tumor Early Diagnosis System. IEEE J. Sel. Top. Quantum Electron. 2008, 14, 98–104.
  4. Marsden, M.; Weyers, B.W.; Bec, J.; Sun, T.; Gandour-Edwards, R.F.; Birkeland, A.C.; Abouyared, M.; Bewley, A.F.; Farwell, D.G.; Marcu, L. Intraoperative Margin Assessment in Oral and Oropharyngeal Cancer Using Label-Free Fluorescence Lifetime Imaging and Machine Learning. IEEE Trans. Biomed. Eng. 2021, 68, 857–868.
  5. Heger, Z.; Kominkova, M.; Cernei, N.; Krejcova, L.; Kopel, P.; Zitka, O.; Adam, V.; Kizek, R. Fluorescence resonance energy transfer between green fluorescent protein and doxorubicin enabled by DNA nanotechnology. Electrophoresis 2014, 35, 3290–3301.
  6. Blacker, T.S.; Mann, Z.F.; Gale, J.E.; Ziegler, M.; Bain, A.J.; Szabadkai, G.; Duchen, M.R. Separating NADH and NADPH fluorescence in live cells and tissues using FLIM. Nat. Commun. 2014, 5, 1–9.
  7. Becker, W. Advanced Time-Correlated Single Photon Counting Techniques, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2005.
  8. Shin, D.; Xu, F.; Venkatraman, D.; Lussana, R.; Villa, F.; Zappa, F.; Goyal, V.K.; Wong, F.N.C.; Shapiro, J.H. Photon-efficient imaging with a single-photon camera. Nat. Commun. 2016, 7, 1–8.
  9. Rapp, J.; Goyal, V.K. A Few Photons Among Many: Unmixing Signal and Noise for Photon-Efficient Active Imaging. IEEE Trans. Comput. Imaging 2017, 3, 445–459.
  10. Zang, Z.; Xiao, D.; Li, D.D.U. Non-fusion time-resolved depth image reconstruction using a highly efficient neural network architecture. Opt. Express 2021, 29, 19278–19291.
  11. Callenberg, C.; Lyons, A.; Brok, D.; Fatima, A.; Turpin, A.; Zickus, V.; Machesky, L.; Whitelaw, J.; Faccio, D.; Hullin, M.B. Super-resolution time-resolved imaging using computational sensor fusion. Sci. Rep. 2021, 11, 1–8.
  12. Turgeman, L.; Fixler, D. Photon Efficiency Optimization in Time-Correlated Single Photon Counting Technique for Fluorescence Lifetime Imaging Systems. IEEE Trans. Biomed. Eng. 2013, 60, 1571–1579.
  13. Zhang, Y.; Chen, Y.; Li, D.D.U. Optimizing Laguerre expansion-based deconvolution methods for analyzing bi-exponential fluorescence lifetime images. Opt. Express 2016, 24, 13894–13905.
  14. Jo, J.A.; Fang, Q.; Marcu, L. Ultrafast method for the analysis of fluorescence lifetime imaging microscopy data based on the Laguerre expansion technique. IEEE J. Sel. Top. Quantum Electron. 2005, 11, 835–845.
  15. Pande, P.; Jo, J.A. Automated Analysis of Fluorescence Lifetime Imaging Microscopy (FLIM) Data Based on the Laguerre Deconvolution Method. IEEE Trans. Biomed. Eng. 2011, 58, 172–181.
  16. Wang, S.; Chacko, J.V.; Sagar, A.K.; Eliceiri, K.W.; Yuan, M. Nonparametric empirical Bayesian framework for fluorescence-lifetime imaging microscopy. Biomed. Opt. Express 2019, 10, 5497–5517.
  17. Li, D.D.U.; Arlt, J.; Tyndall, D.; Walker, R.; Richardson, J.; Stoppa, D.; Charbon, E.; Henderson, R.K. Video-rate fluorescence lifetime imaging camera with CMOS single-photon avalanche diode arrays and high-speed imaging algorithm. J. Biomed. Opt. 2011, 16, 096012.
  17. Li, D.D.U.; Arlt, J.; Tyndall, D.; Walker, R.; Richardson, J.; Stoppa, D.; Charbon, E.; Henderson, R.K. Video-rate fluorescence lifetime imaging camera with CMOS single-photon avalanche diode arrays and high-speed imaging algorithm. J. Biomed. Opt. 2011, 16, 096012. [Google Scholar] [CrossRef] [Green Version]
  18. Li, D.D.U.; Yu, H.; Chen, Y. Fast bi-exponential fluorescence lifetime imaging analysis methods. Opt. Lett. 2015, 40, 336–339. [Google Scholar] [CrossRef] [Green Version]
  19. Tyndall, D.; Rae, B.R.; Li, D.D.U.; Arlt, J.; Johnston, A.; Richardson, J.A.; Henderson, R.K. A high-throughput time-resolved mini-silicon photomultiplier with embedded fluorescence lifetime estimation in 0.13 μm CMOS. IEEE Trans. Biomed. Circuits Syst. 2012, 6, 562–570. [Google Scholar] [CrossRef]
  20. Mai, H.; Poland, S.P.; Rocca, F.M.D.; Treacy, C.; Aluko, J.; Nedbal, J.; Erdogan, A.T.; Gyongy, I.; Walker, R.; Ameer-Beg, S.M.; et al. Flow cytometry visualization and real-time processing with a CMOS SPAD array and high-speed hardware implementation algorithm. Proc. SPIE 2020, 11243, 112430S. [Google Scholar]
  21. Xiao, D.; Zang, Z.; Sapermsap, N.; Wang, Q.; Xie, W.; Chen, Y.; Li, D.D.U. Dynamic fluorescence lifetime sensing with CMOS single-photon avalanche diode arrays and deep learning processors. Biomed. Opt. Express 2021, 12, 3450–3462. [Google Scholar] [CrossRef]
  22. Li, D.D.U.; Arlt, L.; Richardson, J.; Walker, R.; Buts, A.; Stoppa, D.; Charbon, E.; Henderson, R. Real-time fluorescence lifetime imaging system with a 32 × 32 0.13μm CMOS low dark-count single-photon avalanche diode array. Opt. Express 2010, 18, 10257–10269. [Google Scholar] [CrossRef] [PubMed]
  23. Yu, H.; Saleeb, R.; Dalgarno, P.; Li, D.D.U. Estimation of Fluorescence Lifetimes Via Rotational Invariance Techniques. IEEE. Trans. Biomed. Eng. 2016, 63, 1292–1300. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Li, Y.; Sapermsap, N.; Yu, J.; Tian, J.; Chen, Y.; Li, D.D.U. Histogram clustering for rapid time-domain fluorescence lifetime image analysis. Biomed. Opt. Express 2021, 12, 4293–4307. [Google Scholar] [CrossRef] [PubMed]
  25. Smith, J.T.; Yao, R.; Sinsuebphon, N.; Rudkouskaya, A.; Un, N.; Mazurkiewicz, J.; Barroso, M.; Yan, P.; Intes, X. Fast fit-free analysis of fluorescence lifetime imaging via deep learning. Proc. Natl. Acad. Sci. USA 2019, 116, 24019–24030. [Google Scholar] [CrossRef]
  26. Yao, R.; Ochoa, M.; Yan, P.; Intes, X. Net-FLICS: Fast quantitative wide-field fluorescence lifetime imaging with compressed sensing—A deep learning approach. Light Sci. 2019, 8, 1–7. [Google Scholar] [CrossRef] [Green Version]
  27. Xiao, D.; Chen, Y.; Li, D.D.U. One-Dimensional Deep Learning Architecture for Fast Fluorescence Lifetime Imaging. IEEE J. Sel. Top. Quantum Electron. 2021, 27, 1–10. [Google Scholar] [CrossRef]
  28. Zickus, V.; Wu, M.; Morimoto, K.; Kapitany, V.; Fatima, A.; Turpin, A.; Insall, R.; Whitelaw, J.; Machesky, L.; Bruschini, C.; et al. Fluorescence lifetime imaging with a megapixel SPAD camera and neural network lifetime estimation. Sci. Rep. 2020, 10, 1–10. [Google Scholar] [CrossRef]
  29. Wu, G.; Nowotny, T.; Zhang, Y.; Yu, H.; Li, D.D.U. Artificial neural network approaches for fluorescence lifetime imaging techniques. Opt. Lett. 2016, 41, 2561–2564. [Google Scholar] [CrossRef] [Green Version]
  30. Kapitany, V.; Turpin, A.; Whitelaw, L.; McGhee, E.; Insall, R.; Machesky, L.; Faccio, D. Data fusion for high resolution fluorescence lifetime imaging using deep learning. In Proceedings of Computational Optical Sensing and Imaging; Optical Society of America: 2020; paper CW1B.4.
  31. Huang, G.; Zhou, H.; Ding, X.; Zhang, R. Extreme Learning Machine for Regression and Multiclass Classification. IEEE Trans. Syst. Man Cybern. B 2012, 42, 513–529.
  32. Li, H.; Chou, C.; Chen, Y.; Wang, S.; Wu, A. Robust and Lightweight Ensemble Extreme Learning Machine Engine Based on Eigenspace Domain for Compressed Learning. IEEE Trans. Circuits Syst. I Regul. Pap. 2019, 66, 4699–4712.
  33. Fereidouni, F.; Gorpas, D.; Ma, D.; Fatakdawala, H.; Marcu, L. Rapid fluorescence lifetime estimation with modified phasor approach and Laguerre deconvolution: A comparative study. Methods Appl. Fluoresc. 2017, 5, 35003.
  34. Li, Y.; Natakorn, S.; Chen, Y.; Safar, M.; Cunningham, M.; Tian, J.; Li, D.D.U. Investigations on average fluorescence lifetimes for visualizing multi-exponential decays. Front. Phys. 2020, 8, 576862.
  35. Chen, Y.; Chang, Y.; Liao, S.; Nguyen, T.D.; Yang, J.; Kuo, Y.A.; Hong, S.; Liu, Y.L.; Rylander, H.G., III; Santacruz, S.R.; et al. Deep learning enables rapid and robust analysis of fluorescence lifetime imaging in photon-starved conditions. Commun. Biol. 2022, 5, 18.
  36. Jameson, D.M.; Gratton, E.; Hall, R.D. The Measurement and Analysis of Heterogeneous Emissions by Multifrequency Phase and Modulation Fluorometry. Appl. Spectrosc. Rev. 1984, 20, 55–106.
  37. Gerritsen, H.C.; Asselbergs, M.A.H.; Agronskaia, A.V.; Van Sark, W.G.J.H.M. Fluorescence lifetime imaging in scanning microscopes: Acquisition speed, photon economy and lifetime resolution. J. Microsc. 2002, 206, 218–224.
  38. Bishop, C.M. Pattern Recognition and Machine Learning, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2006.
  39. Tsukada, M.; Kondo, M.; Matsutani, H. A Neural Network-Based On-Device Learning Anomaly Detector for Edge Devices. IEEE Trans. Comput. 2020, 69, 1027–1044.
  40. Wei, G.; Yu, J.; Wang, J.; Gu, P.; Birch, D.J.S.; Chen, Y. Hairpin DNA-functionalized gold nanorods for mRNA detection in homogenous solution. J. Biomed. Opt. 2016, 21, 97001.
  41. Kang, K.A.; Wang, J.; Jasinski, J.B.; Achilefu, S. Fluorescence manipulation by gold nanoparticles: From complete quenching to extensive enhancement. J. Nanobiotechnol. 2011, 9, 1–13.
  42. Racknor, C.; Singh, M.R.; Zhang, Y.; Birch, D.J.; Chen, Y. Energy transfer between a biological labelling dye and gold nanorods. Methods Appl. Fluoresc. 2013, 2, 15002.
  43. Jungemann, A.H.; Harimech, P.K.; Brown, T.; Kanaras, A.G. Gold nanoparticles and fluorescently-labelled DNA as a platform for biological sensing. Nanoscale 2013, 5, 9503–9510.
  44. Zhang, Y.; Wei, G.; Yu, J.; Birch, D.J.S.; Chen, Y. Surface plasmon enhanced energy transfer between gold nanorods and fluorophores: Application to endocytosis study and RNA detection. Faraday Discuss. 2015, 178, 383–394.
  45. Zhang, Y.; Yu, J.; Birch, D.J.S.; Chen, Y. Gold nanorods for fluorescence lifetime imaging in biology. J. Biomed. Opt. 2010, 15, 20504.
Figure 1. ELM is used for lifetime analysis. The input is a 1-D pixel-wise histogram, containing 256 time bins, extracted from the raw point cloud. The histogram is fed into a single-hidden-layer ELM, and the lifetime parameters (τ1, τ2, and α) are obtained from the output nodes.
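The pipeline in Figure 1 can be sketched in a few lines of NumPy. This is a minimal illustration of the ELM idea (random, fixed hidden weights and a closed-form least-squares readout), not the paper's exact configuration: the hidden-layer width, the synthetic bi-exponential decay model, and the training-set size below are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BINS, N_HIDDEN = 256, 512  # illustrative sizes (256 time bins as in Figure 1)

def bi_exp_histogram(tau1, tau2, alpha, n_photons=500, T=10.0):
    """Poisson-noised bi-exponential decay histogram (synthetic pixel)."""
    t = np.linspace(0.0, T, N_BINS, endpoint=False)
    decay = alpha * np.exp(-t / tau1) + (1.0 - alpha) * np.exp(-t / tau2)
    return rng.poisson(n_photons * decay / decay.sum()).astype(float)

# Hidden layer: random, fixed weights -- an ELM never updates these.
W = rng.standard_normal((N_BINS, N_HIDDEN))
b = rng.standard_normal(N_HIDDEN)

def hidden(X):
    Xn = X / X.sum(axis=1, keepdims=True)        # normalize photon counts
    return 1.0 / (1.0 + np.exp(-(Xn @ W + b)))   # sigmoid activations

# Synthetic training set: targets are (tau1, tau2, alpha) per histogram.
params = rng.uniform([0.1, 1.0, 0.0], [1.0, 3.0, 1.0], size=(2000, 3))
X = np.array([bi_exp_histogram(*p) for p in params])

# "Training" is a single ridge-regularized least-squares solve for the
# output weights beta -- no back-propagation, hence ELM's fast training.
H = hidden(X)
beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(N_HIDDEN), H.T @ params)

# Inference is one matrix product per batch of histograms.
pred = hidden(X) @ beta
```

The closed-form solve for `beta` is what distinguishes ELM from the back-propagation-trained networks compared in Table 2.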
Figure 2. Box plots of absolute error versus different peak intensity levels regarding testing datasets. (a,b) Single lifetime estimations of mono-exponential decays from ELM and NLSF, respectively. (c,d) Double lifetime estimations of bi-exponential decays from ELM and NLSF, respectively. (e,f) τA estimated by ELM and NLSF, respectively.
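Figures 2, 5, and 6 report errors on the average lifetimes τA and τI derived from the bi-exponential parameters. For reference, a minimal sketch of the standard amplitude-weighted (τA) and intensity-weighted (τI) definitions; the parameter values used in the example are illustrative, not taken from the paper's datasets:

```python
def average_lifetimes(tau1, tau2, alpha):
    """Standard average lifetimes of a bi-exponential decay
    I(t) = alpha*exp(-t/tau1) + (1 - alpha)*exp(-t/tau2)."""
    a1, a2 = alpha, 1.0 - alpha
    tau_a = a1 * tau1 + a2 * tau2                  # amplitude-weighted
    tau_i = (a1 * tau1**2 + a2 * tau2**2) / tau_a  # intensity-weighted
    return tau_a, tau_i

# Example with tau1 = 0.3 ns, tau2 = 2.5 ns, alpha = 0.6 (illustrative)
tau_a, tau_i = average_lifetimes(0.3, 2.5, 0.6)
```

Note that τI is always at least as large as τA, since the longer component contributes more photons per molecule and is therefore weighted more heavily.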
Figure 3. Lifetime parameter estimation results; photon counts for each pixel were randomly chosen between 25 and 500. (a) The estimated single lifetime using a mono-exponential decay model, where τ ∊ [0.1, 5] ns from top to bottom in the image. (b) The two estimated lifetimes using a bi-exponential decay model, where τ1 = 0.3 ns, τ2 = 3 ns, and α ∊ [0, 1] from top to bottom. (c) Two phasor plots of the ground-truth distributions of (a,b). (d) Prediction accuracy and R² of τA from ELM and NLSF, respectively, with τ1 = 0.3 ns and τ2 = 2.5 ns.
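The phasor plots in Figure 3c map each decay histogram to a point (g, s) given by the first-harmonic cosine and sine transforms of the normalized decay. A short sketch of that calculation; the time range, bin count, and test lifetime below are illustrative values, not the paper's acquisition settings:

```python
import numpy as np

def phasor(hist, t, period):
    """First-harmonic phasor coordinates (g, s) of a decay histogram."""
    w = 2.0 * np.pi / period          # first-harmonic angular frequency
    total = hist.sum()
    g = (hist * np.cos(w * t)).sum() / total
    s = (hist * np.sin(w * t)).sum() / total
    return g, s

# Ideal mono-exponential decay with tau = 2 ns over a 25 ns window.
T, tau = 25.0, 2.0
t = np.linspace(0.0, T, 256, endpoint=False)
g, s = phasor(np.exp(-t / tau), t, T)
```

An ideal mono-exponential decay lands on the universal semicircle g² + s² = g, at g = 1/(1 + (ωτ)²); mixtures of lifetimes fall inside the semicircle, which is what makes the phasor plot a useful fit-free visualization.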
Figure 4. (a) F-value for mono-exponential decays with a range [0.1, 5] ns. (b) F-values for bi-exponential decays with τ1, τ2, and α in the ranges [0.1, 1] ns, [1, 3] ns, and [0, 1], respectively. (c,d) Bias per histogram for mono- and bi-exponential decays, respectively.
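The F-value in Figure 4 is the photon-economy figure of merit F = √N · σ_τ/τ, where σ_τ is the standard deviation of the lifetime estimates over repeated measurements with N photons each; F = 1 corresponds to shot-noise-limited precision. A Monte Carlo sketch of how F can be estimated is shown below; the mean-arrival-time estimator is chosen only to keep the example self-contained, and all numerical settings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def f_value(tau, n_photons, n_trials=1000, T=25.0, n_bins=256):
    """Monte Carlo estimate of F = sqrt(N) * sigma_tau / tau for a
    mono-exponential decay, using a mean-arrival-time estimator."""
    edges = np.linspace(0.0, T, n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    estimates = []
    for _ in range(n_trials):
        # Draw photon arrival times from the decay and bin them.
        arrivals = rng.exponential(tau, size=n_photons)
        arrivals = arrivals[arrivals < T]            # truncate to window
        hist, _ = np.histogram(arrivals, bins=edges)
        estimates.append((hist * centers).sum() / hist.sum())  # tau_hat
    return np.sqrt(n_photons) * np.std(estimates) / tau

F = f_value(tau=2.0, n_photons=500)
```

For an untruncated mono-exponential decay the mean arrival time is the maximum-likelihood estimator, so F approaches 1; biased or inefficient estimators and truncated windows push F above 1.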
Figure 5. (a) Intensity image of the GT τA in exact ranges. Ipc denotes the total photon count in one pixel; the range from 40 to 400 is regarded as low photon counts. (b) The GT τA lifetime image with the range ~[0.3, 2.5] ns. (c–e) Reconstructed τA images from ELM, NLSF, and BCMM, respectively.
Figure 6. (a) GT τI image in exact ranges. (b–d) Reconstructed τI images from ELM, NLSF, and CMM for bi-exponential decays.
Figure 7. (a,b) Loss curves and time consumption vs. different numbers of nodes in the hidden layer.
Figure 8. Lifetime analysis of prostatic cells loaded with gold nanoprobes. (a) The intensity image, (b) phasor plot, and (c) phasor projection image. (d–g) τA restored by ELM, 1-D CNN, NLSF, and BCMM, respectively. (h) Lifetime histograms of ELM, 1-D CNN, NLSF, and BCMM.
Figure 9. (a) Intensity images for scanning cycles of 10, 40, 60, and 80, shown with a unified colorbar. (b) τA images and pixel occurrences reconstructed by ELM for the different scanning cycles.
Table 1. Time Consumption (Seconds) of NLSF and ELM for Inferring Lifetime Parameters.
Algorithm | Mono-Exponential Decay Model | Bi-Exponential Decay Model
NLSF | 371.9 s | 670.9 s
ELM | 6.2 s | 6.5 s
CMM [17] | 1.9 s | 1.9 s (τI)
BCMM [18] | – | 16.1 s (τA)
Table 2. Comparison of Existing NN Architectures for Lifetime Estimation.
Algorithm | Training Parameters | Hidden Layers | Resolves Multi-Exp. Decays | Training Time
ELM | 205,600 | 1 | ✓ | 10.85 s
FLI-NET [25] | 1,084,045 | 7 | ✓ | 4 h
1-D CNN [27] | 48,675 | 7 | ✓ | 23 min
MLP [28] | 3,750,205 | 3 | ✕ | 38 min
MLP [29] | 149,252 | 2 | ✓ | 4 h
✓ indicates that the algorithm can resolve multi-exponential decays; ✕ indicates that it cannot.

Zang, Z.; Xiao, D.; Wang, Q.; Li, Z.; Xie, W.; Chen, Y.; Li, D.D.U. Fast Analysis of Time-Domain Fluorescence Lifetime Imaging via Extreme Learning Machine. Sensors 2022, 22, 3758. https://doi.org/10.3390/s22103758

