Search Results (2)

Search Parameters:
Keywords = multispectral autofluorescence lifetime imaging

19 pages, 3900 KiB  
Article
Contrastive Clustering-Based Patient Normalization to Improve Automated In Vivo Oral Cancer Diagnosis from Multispectral Autofluorescence Lifetime Images
by Kayla Caughlin, Elvis Duran-Sierra, Shuna Cheng, Rodrigo Cuenca, Beena Ahmed, Jim Ji, Mathias Martinez, Moustafa Al-Khalil, Hussain Al-Enazi, Javier A. Jo and Carlos Busso
Cancers 2024, 16(23), 4120; https://doi.org/10.3390/cancers16234120 - 9 Dec 2024
Cited by 1 | Viewed by 1159
Abstract
Background: Multispectral autofluorescence lifetime imaging systems have recently been developed to quickly and non-invasively assess tissue properties for applications in oral cancer diagnosis. As a non-traditional imaging modality, the autofluorescence signal collected from the system cannot be directly visually assessed by a clinician and a model is needed to generate a diagnosis for each image. However, training a deep learning model from scratch on small multispectral autofluorescence datasets can fail due to inter-patient variability, poor initialization, and overfitting. Methods: We propose a contrastive-based pre-training approach that teaches the network to perform patient normalization without requiring a direct comparison to a reference sample. We then use the contrastive pre-trained encoder as a favorable initialization for classification. To train the classifiers, we efficiently use available data and reduce overfitting through a multitask framework with margin delineation and cancer diagnosis tasks. We evaluate the model over 67 patients using 10-fold cross-validation and evaluate significance using paired, one-tailed t-tests. Results: The proposed approach achieves a sensitivity of 82.08% and specificity of 75.92% on the cancer diagnosis task with a sensitivity of 91.83% and specificity of 79.31% for margin delineation as an auxiliary task. In comparison to existing approaches, our method significantly outperforms a support vector machine (SVM) implemented with either sequential feature selection (SFS) (p = 0.0261) or L1 loss (p = 0.0452) when considering the average of sensitivity and specificity. Specifically, the proposed approach increases performance by 2.75% compared to the L1 model and 4.87% compared to the SFS model. In addition, there is a significant increase in specificity of 8.34% compared to the baseline autoencoder model (p = 0.0070). 
Conclusions: Our method effectively trains deep learning models for small data applications when existing, large pre-trained models are not suitable for fine-tuning. While we designed the network for a specific imaging modality, we report the development process so that the insights gained can be applied to address similar challenges in other non-traditional imaging modalities. A key contribution of this paper is a neural network framework for multi-spectral fluorescence lifetime-based tissue discrimination that performs patient normalization without requiring a reference (healthy) sample from each patient at test time.
(This article belongs to the Section Cancer Causes, Screening and Diagnosis)
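The first abstract reports performance as sensitivity, specificity, and their average (the criterion used for the paired t-test comparisons against the SVM baselines). A minimal sketch of how those quantities are computed from binary predictions, assuming the usual convention of 1 = cancer (positive) and 0 = healthy; the label encoding and example data here are illustrative, not taken from the paper:

```python
# Hedged sketch: sensitivity, specificity, and their average for binary
# predictions. Label convention (1 = positive/cancer) is an assumption.

def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels, 1 = positive."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sens = tp / (tp + fn) if (tp + fn) else 0.0  # true positive rate
    spec = tn / (tn + fp) if (tn + fp) else 0.0  # true negative rate
    return sens, spec

# Toy example (not the paper's data):
y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
balanced = (sens + spec) / 2  # the "average of sensitivity and specificity"
```

Averaging sensitivity and specificity (balanced accuracy) is a common choice when classes are imbalanced, as is typical for per-patient cancer datasets.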

17 pages, 2322 KiB  
Article
Machine-Learning Assisted Discrimination of Precancerous and Cancerous from Healthy Oral Tissue Based on Multispectral Autofluorescence Lifetime Imaging Endoscopy
by Elvis Duran-Sierra, Shuna Cheng, Rodrigo Cuenca, Beena Ahmed, Jim Ji, Vladislav V. Yakovlev, Mathias Martinez, Moustafa Al-Khalil, Hussain Al-Enazi, Yi-Shing Lisa Cheng, John Wright, Carlos Busso and Javier A. Jo
Cancers 2021, 13(19), 4751; https://doi.org/10.3390/cancers13194751 - 23 Sep 2021
Cited by 32 | Viewed by 4700
Abstract
Multispectral autofluorescence lifetime imaging (maFLIM) can be used to clinically image a plurality of metabolic and biochemical autofluorescence biomarkers of oral epithelial dysplasia and cancer. This study tested the hypothesis that maFLIM-derived autofluorescence biomarkers can be used in machine-learning (ML) models to discriminate dysplastic and cancerous from healthy oral tissue. Clinical widefield maFLIM endoscopy imaging of cancerous and dysplastic oral lesions was performed at two clinical centers. Endoscopic maFLIM images from 34 patients acquired at one of the clinical centers were used to optimize ML models for automated discrimination of dysplastic and cancerous from healthy oral tissue. A computer-aided detection system was developed and applied to a set of endoscopic maFLIM images from 23 patients acquired at the other clinical center, and its performance was quantified in terms of the area under the receiver operating characteristic curve (ROC-AUC). Discrimination of dysplastic and cancerous from healthy oral tissue was achieved with an ROC-AUC of 0.81. This study demonstrates the capabilities of widefield maFLIM endoscopy to clinically image autofluorescence biomarkers that can be used in ML models to discriminate dysplastic and cancerous from healthy oral tissue. Widefield maFLIM endoscopy thus holds potential for automated in situ detection of oral dysplasia and cancer.
(This article belongs to the Collection Artificial Intelligence in Oncology)
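The second abstract quantifies cross-center performance with ROC-AUC. A minimal sketch of the rank-based (Mann-Whitney) identity for AUC, which equals the probability that a randomly chosen positive scores higher than a randomly chosen negative; the function name and toy scores below are illustrative, not from the paper:

```python
# Hedged sketch: ROC-AUC via the Mann-Whitney identity. Ties between a
# positive and a negative score count as half a "win".

def roc_auc(scores, labels):
    """AUC for binary labels (1 = positive) given real-valued scores."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0   # positive ranked above negative
            elif p == n:
                wins += 0.5   # tie: half credit
    return wins / (len(pos) * len(neg))
```

An AUC of 0.81, as reported above, means that about 81% of random (dysplastic/cancerous, healthy) pixel or image pairs are ranked correctly by the model's score.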
