Review

Application of Deep Learning for Prediction of Alzheimer’s Disease in PET/MR Imaging

1 Department of Information Center, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
2 Department of Nuclear Medicine, Beijing Cancer Hospital, Beijing 100142, China
3 Department of Radiology, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
4 Department of Nuclear Medicine, The First Affiliated Hospital, Dalian Medical University, Dalian 116011, China
5 Beijing United Imaging Research Institute of Intelligent Imaging, Beijing 100094, China
* Authors to whom correspondence should be addressed.
Bioengineering 2023, 10(10), 1120; https://doi.org/10.3390/bioengineering10101120
Submission received: 21 August 2023 / Revised: 19 September 2023 / Accepted: 22 September 2023 / Published: 24 September 2023
(This article belongs to the Special Issue Recent Progress in Biomedical Image Processing)

Abstract

Alzheimer’s disease (AD) is a progressive neurodegenerative disorder that affects millions of people worldwide. Positron emission tomography/magnetic resonance (PET/MR) imaging is a promising technique that combines the advantages of PET and MR to provide both functional and structural information of the brain. Deep learning (DL) is a subfield of machine learning (ML) and artificial intelligence (AI) that focuses on developing algorithms and models inspired by the structure and function of the human brain’s neural networks. DL has been applied to various aspects of PET/MR imaging in AD, such as image segmentation, image reconstruction, diagnosis and prediction, and visualization of pathological features. In this review, we introduce the basic concepts and types of DL algorithms, such as feed forward neural networks, convolutional neural networks, recurrent neural networks, and autoencoders. We then summarize the current applications and challenges of DL in PET/MR imaging in AD, and discuss the future directions and opportunities for automated diagnosis, predictions of models, and personalized medicine. We conclude that DL has great potential to improve the quality and efficiency of PET/MR imaging in AD, and to provide new insights into the pathophysiology and treatment of this devastating disease.

1. Introduction

Alzheimer’s disease (AD) is a progressive neurodegenerative disorder that detrimentally impacts cognitive function, memory, and behavior. This devastating condition is primarily characterized by the accumulation of amyloid-beta (Aβ) plaques and tau protein tangles within the brain [1,2]. It stands as the predominant cause of dementia, responsible for 60–80% of all cases [3]. Alarmingly, its global prevalence is projected to triple by 2050 [4]. Given the absence of a cure and the limited efficacy of available treatments in halting symptom progression, early detection and intervention play a pivotal role in managing AD [5].
In the study of AD and mild cognitive impairment (MCI), non-invasive imaging techniques such as positron emission tomography (PET) and magnetic resonance imaging (MRI) have been instrumental in unraveling insights into brain structure and function [6,7,8]. PET imaging, for instance, facilitates the identification of Aβ protein accumulation [9,10], while MRI provides high-resolution structural brain images capable of detecting AD-related atrophy in specific brain regions [11]. Nonetheless, the analysis of PET and MRI images in AD and MCI poses challenges due to subtle cerebral changes and discrepancies among observers. Consequently, there exists a pressing need for objective and quantitative techniques that can enhance the early detection of AD and MCI.
Recent years have witnessed a burgeoning interest in the utilization of deep learning (DL) algorithms for the analysis of medical images [12,13,14]. The number of publications in the Web of Science related to Alzheimer’s disease, deep learning, and PET/MR imaging is depicted in Figure 1. DL offers the advantage of automatically learning features from extensive datasets, obviating the need for manual feature extraction [15,16,17,18]. This not only reduces the time and effort required for image analysis but also improves accuracy [19,20,21]. Moreover, DL algorithms can be employed for image segmentation, a critical task in identifying specific regions of interest during medical imaging analysis. Nevertheless, further research is imperative to validate the efficacy of these algorithms in real-world clinical settings.
In this paper, we introduce several innovative characteristics in the application of DL for the prediction of AD in PET/MR imaging. We introduce novel DL models that leverage the power of artificial intelligence to analyze complex imaging data and accurately predict the presence and progression of AD. Additionally, we highlight the importance of multimodal imaging, combining PET and MR scans, to enhance the accuracy and reliability of the predictions. We also discuss the potential of automated diagnosis and personalized medicine in optimizing the performance of DL models for Alzheimer’s disease prediction. Overall, we present cutting-edge advancements and novel approaches that contribute to the field of DL in AD research.
The paper begins by discussing the significance of PET/MR imaging in AD research. It then introduces various DL algorithms. The paper further explores the application of DL in AD PET/MR imaging, focusing on image segmentation, image reconstruction, diagnosis and prediction, and visualization of pathological features. Finally, the paper suggests future directions for research in this area.

2. PET/MR Imaging in AD

The early detection of AD is of paramount importance for effective patient management and prognostication. PET tracers for glucose metabolism [8,22,23,24], amyloid [25,26], tau [27,28,29,30,31,32], and neuroinflammation imaging [33,34,35,36,37,38,39], as well as MRI techniques such as arterial spin labeling (ASL) [40,41,42], resting-state fMRI and task-related fMRI [43,44,45,46,47,48,49,50,51,52], multi-nuclear MRI [53,54], and chemical exchange saturation transfer (CEST) [55,56], have provided valuable insights into the pathological mechanisms of AD in patients (Table 1).
Synapse loss is a major pathological change in AD, but its relationship to functional and structural connectivity dysfunction remains unclear. In recently published research [57], 18F-SynVesT-1 PET/MR was used to measure synaptic vesicle glycoprotein 2A (SV2A) binding and evaluate synaptic alterations in participants with AD, MCI, and controls. The PET and MRI data were acquired simultaneously on the United Imaging uPMR790 system. Compared to controls, lower synaptic density was found in the cortex and hippocampus of the AD group. Cognitive decline was correlated with synaptic density changes in the right insular cortex and bilateral caudal middle frontal gyrus (MFG). Specifically, the synaptic density in the right MFG was positively associated with functional connectivity between the right MFG and bilateral superior frontal gyrus (SFG). The AD group also had a lower probability of tract (POT) between the right MFG and SFG, which was significantly associated with global cognition. These findings suggest that synapse loss contributes to the functional and structural connectivity alterations underlying cognitive impairment in AD.
The utilization of PET/MR imaging technology offers several notable advantages in the early diagnosis of AD. Firstly, the synchronous acquisition of PET and MR images enhances the accuracy and efficiency of diagnosis and treatment [58,59]. Secondly, the non-invasive nature of this technology eliminates the need for surgical procedures, thereby minimizing the risk to patients and increasing overall safety. Thirdly, the high-resolution imaging capabilities of PET/MR provide detailed biological biomarkers and anatomical images, facilitating a more precise evaluation of AD (Figure 2 and Figure 3) [60,61]. Lastly, the ability to simultaneously evaluate multiple biological biomarkers enables a comprehensive assessment of the disease. Consequently, the integration of hybrid PET/MR imaging is anticipated to play a pivotal role in improving the early diagnostic accuracy and clinical outcomes for patients with AD.
The stages of AD comprise the preclinical stage, MCI, and mild, moderate, and severe AD. When analyzing PET/MR images, the important information to extract includes detecting and localizing lesions, measuring volume, assessing lesion heterogeneity, segmenting organs and tissues, quantifying lesions, analyzing function and metabolism, and fusing PET and MR data. Overall, the analysis of PET/MR images plays a critical role in detecting and characterizing AD, planning treatments, and evaluating organ function and anatomical structures. Compared to traditional methods and machine learning (ML), DL techniques have demonstrated better performance in various medical imaging tasks. DL models can effectively learn from large datasets and automatically extract meaningful features from raw PET/MR images, which is particularly important for AD analysis given the large volume of imaging data available. DL models have the potential to capture subtle imaging biomarkers and complex patterns associated with the progression of AD. However, the application of DL methodologies in AD analysis is limited by challenges such as the availability and quality of data, the interpretability of results, the risk of overfitting and poor generalization, and computational requirements.

3. Deep Learning Algorithms

Deep learning, an artificial neural network (ANN) technique, replicates the learning mechanism of the human brain through the utilization of multi-layer neural networks. This methodology facilitates the effective processing and analysis of intricate data [62]. Various deep learning algorithms exist (Table 2), encompassing feed forward neural networks (FFNN), convolutional neural networks (CNN), recurrent neural networks (RNN), generative adversarial networks (GAN), autoencoders, and deep reinforcement learning (DRL).

3.1. Feed Forward Neural Networks

The feed forward neural network (FFNN) excels in addressing classification and regression problems [63]. Comprising an input layer, hidden layers, and an output layer, the FFNN receives raw data in the input layer, extracts features through the hidden layers, and produces the final prediction in the output layer. Each layer consists of multiple neurons with their own weights and biases, which are refined through training.
The FFNN’s advantage lies in its ability to handle high-dimensional data and nonlinear relationships, making it suitable for various data types. However, substantial amounts of data and computational resources are required for training, and overfitting is a potential concern. In the medical field, the FFNN has been extensively applied in medical image processing [64], as well as disease prediction and classification, enabling healthcare professionals to provide personalized treatment and preventive measures [65].
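A minimal sketch of the input/hidden/output structure described above, mapping a vector of imaging-derived features to three diagnostic classes (CN/MCI/AD); the feature dimension and layer sizes are illustrative assumptions, not taken from any cited study.

```python
import torch
import torch.nn as nn

class FFNN(nn.Module):
    def __init__(self, n_features: int = 256, n_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128),  # input layer -> first hidden layer
            nn.ReLU(),
            nn.Linear(128, 64),          # second hidden layer
            nn.ReLU(),
            nn.Linear(64, n_classes),    # output layer (class logits)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = FFNN()
dummy = torch.randn(8, 256)   # a batch of 8 feature vectors
print(model(dummy).shape)     # torch.Size([8, 3])
```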

3.2. Convolutional Neural Networks

The core principle underlying Convolutional Neural Networks (CNN) lies in their ability to extract salient features from input data through convolutional operations [66,67,68,69]. These extracted features are then further refined through pooling operations, resulting in a reduction in the feature map size. Finally, these refined features are mapped onto the output layer through fully connected layers, enabling robust classification and recognition. The widespread adoption and acclaim of CNNs across various domains attest to their remarkable efficacy [70,71].
CNNs have emerged as an indispensable deep learning algorithm, distinguished by their unparalleled ability to automatically unveil intrinsic features and structural patterns from input data. This unique capability empowers CNNs to excel in classification and recognition tasks with exceptional precision [72,73].
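A minimal sketch of the convolution, pooling, and fully connected pipeline described above, applied to a single-channel 2D slice; the 96 × 96 input size and channel counts are illustrative assumptions rather than a configuration from any cited study.

```python
import torch
import torch.nn as nn

class SliceCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # convolution extracts local features
            nn.ReLU(),
            nn.MaxPool2d(2),                             # pooling shrinks the feature map
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 24 * 24, n_classes),          # fully connected mapping to class logits
        )

    def forward(self, x):
        return self.classifier(self.features(x))

print(SliceCNN()(torch.randn(4, 1, 96, 96)).shape)  # torch.Size([4, 2])
```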

3.3. Recurrent Neural Networks

Recurrent neural networks (RNN) have gained widespread recognition for their exceptional prowess in processing sequential data. Their defining characteristic lies in the incorporation of recurrent connections, which facilitate the seamless transfer of information across different time steps, thus enabling the modeling and prediction of sequential data [74].
The underlying principle that governs RNNs revolves around the utilization of recurrent neurons to establish intricate relationships among sequential data, thereby capturing the essence of the data through the exchange of information across various temporal instances. In essence, RNNs represent a neural network model that possesses the capability to predict sequential data, and they have found extensive utilization within the domain of medical imaging. With the relentless advancements in artificial intelligence technology, RNNs are poised to assume an increasingly pivotal role in the field of medical imaging in the foreseeable future.
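A minimal sketch of sequential modeling with an LSTM (a common RNN variant), under the assumption that each patient visit is summarized as a fixed-length feature vector; the feature size, sequence length, and hidden size are illustrative placeholders.

```python
import torch
import torch.nn as nn

class LongitudinalRNN(nn.Module):
    def __init__(self, n_features: int = 64, hidden: int = 32, n_classes: int = 2):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):               # x: (batch, time steps, features)
        _, (h_n, _) = self.rnn(x)       # h_n: hidden state after the last time step
        return self.head(h_n[-1])       # classify from the final hidden state

# A batch of 4 subjects, each with 3 visits of 64-dimensional features
print(LongitudinalRNN()(torch.randn(4, 3, 64)).shape)  # torch.Size([4, 2])
```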

3.4. Autoencoder

The autoencoder, a widely used unsupervised learning algorithm, possesses the ability to compress and decompress input data while preserving its essential characteristics [75]. It consists of two essential components: an encoder and a decoder.
The autoencoder holds significant importance in the field of medical imaging, where data often exhibit high dimensionality and complexity, as seen in PET and MRI images. By compressing high-dimensional data into a lower-dimensional representation, it simultaneously reduces computational and storage costs, while improving image quality by reducing the impact of noise and artifacts [76]. The autoencoder serves as a valuable unsupervised learning algorithm that facilitates feature extraction, dimensionality reduction, and image reconstruction [77].
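A minimal sketch of a fully connected autoencoder that compresses a flattened image patch into a low-dimensional code and reconstructs it; the 4096-to-32 sizes are illustrative assumptions, not taken from the cited work.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_in: int = 4096, n_code: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 256), nn.ReLU(),
                                     nn.Linear(256, n_code))
        self.decoder = nn.Sequential(nn.Linear(n_code, 256), nn.ReLU(),
                                     nn.Linear(256, n_in))

    def forward(self, x):
        code = self.encoder(x)     # compressed, low-dimensional representation
        return self.decoder(code)  # reconstruction of the input

x = torch.randn(8, 4096)
recon = Autoencoder()(x)
print(nn.functional.mse_loss(recon, x).item())  # reconstruction error to be minimized
```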

3.5. Generative Adversarial Network

The generative adversarial network (GAN) is a deep learning model that produces realistic images and data [78]. It consists of a generator and a discriminator. The generator creates realistic outputs from random noise, while the discriminator determines whether an input is real or generated.
During training, the generator and discriminator compete to improve their abilities. GAN has advantages such as generating high-quality outputs without explicit rules and stronger generation ability than other models [79,80]. It can be used for tasks such as image transformation. However, GAN has drawbacks. Its training process is complex and requires adjusting multiple hyperparameters. It can also suffer from mode collapse, producing limited types of samples. Controlling the generated results can be challenging. In conclusion, GAN is a powerful generative model that produces high-quality outputs. Despite its limitations, advancements in deep learning technology are expected to enhance GAN’s capabilities.
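A minimal sketch of a single GAN training step illustrating the generator/discriminator competition described above; the two small fully connected networks and the random stand-in for a "real" batch are placeholders, not models or data from the cited studies.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())        # generator
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))             # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(16, 784)     # placeholder for a batch of real (flattened) images
noise = torch.randn(16, 100)

# Discriminator step: push real samples toward label 1, generated samples toward label 0
fake = G(noise).detach()
loss_d = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: try to make the discriminator label generated samples as real
loss_g = bce(D(G(noise)), torch.ones(16, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(loss_d.item(), loss_g.item())
```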

3.6. Deep Reinforcement Learning

Deep reinforcement learning (DRL) is a combination of deep learning and reinforcement learning that allows machines to complete tasks without human intervention [81]. It guides behavior through rewards and punishments, selecting actions based on the current state and adjusting strategies to maximize future rewards. DRL uses deep learning models to learn optimal strategies and behaviors.
DRL has advantages in autonomous learning and adaptation, improving performance through trial and error. It has applications in the medical field, autonomous driving, robotics, and gaming. However, DRL requires significant time and computing resources, and faces challenges such as data and resource requirements, uncertain results, and problems such as overfitting and sample bias. In conclusion, DRL is a promising algorithm for autonomous task completion, improving efficiency and safety. As technology advances, DRL will be applied to more fields, bringing convenience and innovation.
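The review does not tie DRL to one specific algorithm, so the following is a minimal sketch of a deep Q-learning update (one common DRL formulation) on a single toy transition; the environment, state size, and action set are placeholders rather than any medical application discussed here.

```python
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))  # 4-dim state, 2 actions
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99  # discount factor weighting future rewards

# One (state, action, reward, next_state) transition from a toy environment
state = torch.randn(1, 4)
action = torch.tensor([1])
reward = torch.tensor([1.0])
next_state = torch.randn(1, 4)

# Temporal-difference target: reward plus discounted best future Q-value
with torch.no_grad():
    target = reward + gamma * q_net(next_state).max(dim=1).values
pred = q_net(state).gather(1, action.unsqueeze(1)).squeeze(1)  # Q-value of the action taken
loss = nn.functional.mse_loss(pred, target)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print(loss.item())
```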

4. Application of DL in AD PET/MR Imaging

The challenge of early diagnosis and prediction of AD is still significant. Despite using clinical symptoms and imaging examinations, the current methods for prediction lack accuracy. Additionally, the rate and pattern of AD development can differ greatly among individuals, making prediction even more difficult. However, the use of deep learning technology in AD PET/MR imaging is helping to automate early diagnosis and prediction of disease progression trends, as well as enhance medical image analysis and processing. Several studies have utilized DL in PET and MR evaluations of AD (refer to Table 3). The following sections provide a summary of the application of DL in image analysis and diagnosis [82].

4.1. Image Segmentation

Image segmentation is the process of dividing an image into smaller sub-regions. In the context of PET/MR imaging, precise segmentation of the human brain is of utmost importance for accurate diagnosis, especially when dealing with AD patient data. Conventional image segmentation methods require manual selection of features and parameters, which can be time-consuming and require significant expertise. However, deep learning has emerged as a highly adaptable approach that can automatically learn features and parameters, leading to more precise classification, segmentation, and prediction tasks for brain imaging data [83,84,85]. Overall, DL provides a promising avenue for automated brain image processing in neuroscience research.
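As an illustration of the approach rather than a model from the cited studies, the following is a minimal sketch of an encoder–decoder network that outputs a per-pixel probability mask for a 2D slice; the depth, channel counts, and 64 × 64 input are placeholders, far simpler than networks used in practice.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                               # downsample
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                           # single-channel mask logits
        )

    def forward(self, x):
        return torch.sigmoid(self.decoder(self.encoder(x)))

mask = TinySegNet()(torch.randn(2, 1, 64, 64))
print(mask.shape)  # torch.Size([2, 1, 64, 64]) -- per-pixel probabilities
```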

4.2. Image Reconstruction

DL is widely used in medical imaging and, in particular, in PET/MR image reconstruction. It improves both accuracy and speed, providing precise and rapid support for imaging-based diagnosis.
Deep learning automatically discovers patterns and features in images by learning from a large amount of data, improving the accuracy of PET/MR image reconstruction. Traditional methods are time-consuming and susceptible to noise and interference. By identifying features of different tissues and organs, deep learning can reconstruct images more accurately, enabling early diagnosis of AD. Deep learning optimizes the model structure and algorithm parameters to reduce computational complexity and time consumption, enhancing the speed of PET/MR image reconstruction [16,86,87,88]. This improves efficiency and reduces examination time for patients who cannot cooperate.
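A minimal sketch of one common learning-based reconstruction pattern, a residual denoising CNN trained to recover a clean image from a noisy input by predicting and subtracting the noise; the architecture and the synthetic data are illustrative assumptions, not a method from the cited works.

```python
import torch
import torch.nn as nn

class DenoiseCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return x - self.body(x)   # subtract the predicted noise component

model = DenoiseCNN()
clean = torch.rand(4, 1, 64, 64)
noisy = clean + 0.1 * torch.randn_like(clean)          # synthetic noisy input
loss = nn.functional.mse_loss(model(noisy), clean)     # train to recover the clean image
print(loss.item())
```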

4.3. Diagnosis and Prediction

The identification of subtle changes in the brain that differentiate AD from normal aging or other neurological conditions is a challenging task in PET and MRI imaging. Traditional methods of disease diagnosis in PET/MR imaging require manual selection of features and parameters, whereas deep learning can automatically learn features and parameters, uncovering complex latent patterns in MRI and PET. Studies have demonstrated the efficacy of this approach [82,89,90]. Similarly, Zhou et al.’s [91] study demonstrated the effectiveness of amyloid PET/MRI using deep learning techniques (AUC = 0.87 for separating AD and NC groups; AUC = 0.79 for separating MCI and NC groups; AUC = 0.71 for separating AD and MCI groups). This approach also showed better early diagnosis and prediction of AD, providing valuable guidance for clinical practice.
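To make the reported metrics concrete, the following is a minimal sketch of how an AUC such as those above is typically computed from model probabilities and ground-truth labels; the numbers are synthetic and are not results from the cited studies.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([0, 0, 1, 1, 1, 0, 1, 0])                   # 0 = NC, 1 = AD (ground truth)
probs = np.array([0.2, 0.4, 0.9, 0.7, 0.6, 0.3, 0.8, 0.5])    # model-predicted P(AD)
print(roc_auc_score(labels, probs))                           # area under the ROC curve
```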

4.4. Visualization of Pathological Features

Early diagnosis of AD is crucial for effective treatment to slow down further deterioration. Visualizing the morphological features of early-stage AD, including neurofibrillary tangles and amyloid plaques, is of great clinical value. However, traditional imaging diagnosis methods often require specialized skills and experience, making large-scale data analysis inefficient. A recent study proposed a novel approach, the multidirectional perception GAN (MP-GAN), for visualizing the severity of AD at different disease stages [92]. By introducing a new multi-directional mapping mechanism into the model, the MP-GAN can efficiently capture significant global features. Compared to traditional manual feature extraction methods, deep learning has the advantages of efficiency, accuracy, and automation in capturing pathological features, improving the accuracy and efficiency of diagnosis. This approach can also be applied to morphological feature analysis of other diseases.
Table 3. Summary of the findings of deep learning in the PET and MRI of Alzheimer’s Disease.
| Sr. No | Author | Network | Samples | Features | Dataset | Optimal Result | Clinical Implication | Reference |
|---|---|---|---|---|---|---|---|---|
| 1 | Jo et al., 2020 | CNN | 300 | tau PET | ADNI | Accuracy = 90.8% | Potentially aiding in the early detection of AD during its prodromal stages. | [93] |
| 2 | Hamghalam et al., 2020 | GAN | - | MRI | BraTS’18 | Enhances DSCs by approximately 1% | Accurately segments brain tissue; the source code for synthesizing high-tissue-contrast images is publicly available. | [88] |
| 3 | Kim et al., 2021 | CNN | 1433 | FDG and amyloid PET, MRI | ADNI, KBASE | Accuracy = 75.0%, AUC = 0.86 | Potential to accurately identify amyloid PET positivity in a clinical setting. | [82] |
| 4 | Peng et al., 2021 | CNN, GAN | 25 | Amyloid PET | - | 100% classification accuracy | PET imaging workflow can be enhanced by utilizing deep learning-based techniques. | [86] |
| 5 | W. Zhang et al., 2021 | CNN | 2386 | FDG PET, MRI and neuropsychological tests | ADNI | Accuracy = 95.6% | Valid diagnoses explained uncertain cases based on neurodegeneration and depression. | [89] |
| 6 | Zhou et al., 2021 | CNN | 355 | FDG PET | ADNI | Accuracy = 90.6% | Promising approach for diagnosis of conversion from MCI to AD. | [94] |
| 7 | Zou et al., 2021 | CNN | 766 | tau PET | ADNI | Accuracy > 80% | Improves tau PET’s role in early disease and extends the utility of tau PET across generations of radioligands. | [95] |
| 8 | Etminani et al., 2022 | CNN | 757 | FDG PET | ADNI and EDLB | AUC = 0.96 | DL model predicted common neurodegenerative disorders with performance comparable to human readers and consensus. | [96] |
| 9 | Thakur and Snekhalatha, 2022 | CNN | 1130 | FDG PET | ADNI | Accuracy = 98.4%, AUC = 0.95 | Helps classify MCI subtypes (EMCI, LMCI) and AD/CN groups from PET brain images. | [90] |
| 10 | Q. Zhang et al., 2021 | DRL | 1349 | MRI | ADNI, AIBL and NACC | AUC = 0.99 | The model serves as a link between clinical practice and AI diagnosis, offering insight into the interpretability of AI technology. | [97] |
| 11 | Hui et al., 2023 | DRL | - | - | - | - | DRL holds great potential in the detection and prediction of AD progression. | [98] |
| 12 | Marti-Juan et al., 2023 | Autoencoder | 897 | PET and MRI | ADNI, synthetic data | Reduces error by 5% | Produces authentic synthetic trajectories of imaging biomarkers from cognitive assessments. | [87] |
| 13 | Choi et al., 2020 | CNN | 636 | FDG PET | ADNI | AUC = 0.94 | Distinguishes individuals with PD who also had dementia. | [99] |
| 14 | Cui et al., 2019 | RNN, CNN | 830 | MRI | ADNI | Accuracy = 91.3% | Great potential in analyzing longitudinal MR images. | [100] |
| 15 | Rajasekhar, 2023 | FFNN | - | MRI | ADNI | Accuracy = 98.4% | Great importance for early-stage AD prediction. | [17] |
| 16 | Yu et al., 2022 | GAN | 5316 | MRI | ADNI | - | Demonstrates the performance of GAN in visualizing subtle lesions in AD diagnosis. | [92] |
| 17 | Zhang et al., 2021 | CNN | 2386 | PET and MRI | ADNI | Accuracy = 95.65% | Exhibited clinical validity and potential for clinical application. | [89] |
Abbreviations: CNN convolutional neural networks, GAN generative adversarial network, DRL deep reinforcement learning, RNN recurrent neural networks, FFNN feed forward neural networks, PET positron emission tomography, MRI magnetic resonance imaging, ADNI Alzheimer’s Disease neuroimaging initiative, BraTS’18 multimodal brain tumor segmentation challenge 2018, KBASE Korean Brain Aging Study for the Early diagnosis and prediction of Alzheimer’s disease, EDLB the European Dementia with Lewy Bodies (EDLB) Consortium, NACC national Alzheimer’s coordinating center, AIBL Australian imaging biomarker and lifestyle flagship study of aging, DSC the Dice Similarity Coefficients, AUC area under the curve, AD Alzheimer’s Disease, MCI mild cognitive impairment.
The following is a case of DL technologies in real clinical practice [89]. Neuropsychological testing is an important basis for the diagnosis of memory impairment in AD; however, multiple memory tests may yield conflicting results for the same subject and lead to uncertain diagnoses in some cases. This study proposed a DL framework for diagnosing uncertain cases of memory impairment in AD. A total of 2386 samples from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), including individuals with AD, MCI, and cognitively normal (CN) participants, were recruited. All raw PET and MRI data were obtained using the standardized ADNI protocols and underwent the same processing pipeline, with PET images registered to the corresponding MRI. Grouping was based on three neuropsychological tests: the Mini-Mental State Examination (MMSE), the Alzheimer’s Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and the Clinical Dementia Rating (CDR) (Figure 4A).
The trained DL framework was utilized to classify all images in the uncertain group as either memory-impaired or healthy; the entire process is illustrated in Figure 4B. A CNN model specifically designed for this purpose categorized each case as impaired or healthy based on its images, with the model structure shown in Figure 4C. A 3D convolution layer with a kernel size of 3 × 3 × 3 and stride 1 was employed, followed by a batch normalization layer and a rectified linear unit (ReLU) activation layer to introduce non-linearity. Downsampling was achieved using 3D max-pooling with a size of 2 × 2 × 2, and the number of filters increased progressively from 16 to 128 during downsampling. Finally, a 1 × 1 × 1 3D convolution summarized the high-dimensional features, and a dense layer with sigmoid activation produced the classification output. For the certain cases in the testing set, the proposed DL framework outperformed other methods with 95.65% accuracy. Through longitudinal tracking of its diagnoses, the framework demonstrated clinical validity and potential for clinical application.
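The block structure described above can be sketched as follows. The number of convolution–pooling blocks, the input volume size, and the classification head dimensions are assumptions made for illustration rather than the authors’ exact published configuration.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    # 3x3x3 conv (stride 1) -> batch norm -> ReLU -> 2x2x2 max pooling, as described in the text
    return nn.Sequential(
        nn.Conv3d(c_in, c_out, kernel_size=3, stride=1, padding=1),
        nn.BatchNorm3d(c_out),
        nn.ReLU(inplace=True),
        nn.MaxPool3d(2),
    )

class Simple3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            block(1, 16), block(16, 32), block(32, 64), block(64, 128),  # filters grow 16 -> 128
            nn.Conv3d(128, 1, kernel_size=1),                            # 1x1x1 conv summarizes features
        )
        self.head = nn.Sequential(nn.Flatten(), nn.LazyLinear(1), nn.Sigmoid())  # dense sigmoid output

    def forward(self, x):
        return self.head(self.features(x))

# A 64^3 single-channel volume as a placeholder input
print(Simple3DCNN()(torch.randn(2, 1, 64, 64, 64)).shape)  # torch.Size([2, 1])
```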

5. Future Directions

With the extension of human lifespan and the increasing trend of social aging, AD has become a pressing public health concern. Early diagnosis and intervention have been proven to effectively slow down the progression and symptoms of AD, highlighting the importance of imaging diagnosis and assessment in clinical medicine. The following is a summary of the future direction of the application of deep learning in PET/MR for AD.

5.1. Automated Diagnosis

AD diagnosis currently relies on subjective clinical experience, leading to a high risk of misdiagnosis. Automated analysis of PET/MR image data using deep learning technology can provide higher accuracy and faster speed, reducing the workload of doctors. It has broad application prospects and will become an essential tool in medical image diagnosis, supporting early diagnosis and treatment of neurodegenerative diseases.

5.2. Predictions of Models

Multi-modal images may be incomplete, leading to a reduction in sample size. Deep learning frameworks such as TPA-GAN and PT-DCN can impute missing modalities and classify multi-modal brain images. Reversible GAN models can reconstruct missing data, and 3D CNN classification models with multi-modal inputs can aid in AD diagnosis. These methods perform well in terms of prediction accuracy and biological interpretability.

5.3. Personalized Medicine

Personalized medicine seeks to offer specialized medical services by taking into account a patient’s distinct genetics, environment, and lifestyle. In the treatment of Alzheimer’s disease, deep learning technology can examine and diagnose PET/MR imaging data, leading to enhanced accuracy in evaluating the patient’s condition and predicting disease advancement. Moreover, it can aid in creating customized treatment strategies by considering variations in patients’ genetic information, lifestyle choices, and exercise routines. Ultimately, personalized medicine contributes to the early detection and diagnosis of diseases.

6. Conclusions

In conclusion, the application of PET/MR imaging and deep learning algorithms has shown great potential in the early diagnosis and prediction of AD. Deep learning algorithms, such as FFNN, CNN, RNN, autoencoder, GAN, and DRL, have greatly improved the accuracy and speed of image reconstruction, efficient image segmentation, diagnosis, and visualization of AD pathological features.
In the future, the integration of multimodal imaging and automated diagnosis of medical images can further enhance the accuracy and efficiency of AD diagnosis. Predictive models based on PET/MR imaging data can also provide valuable insights into disease progression and response to treatment. Personalized medicine can also be achieved through the development of individualized treatment plans based on patient-specific imaging data.
Overall, the application of deep learning in PET/MR imaging has revolutionized the field of AD diagnosis and treatment. Further research and development are needed to address existing challenges and realize the full potential of these technologies in clinical practice.

Author Contributions

S.Z. and H.F. conceived this study. Y.Z. (Yan Zhao), Q.G., Y.Z. (Yukun Zhang) and J.Z. participated in the design, bibliographical search and writing of this manuscript. S.Z., Y.Y., and X.D. gave critical comments on the draft of the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were generated or analyzed in support of this research.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. García-Lorenzo, D.; Lavisse, S.; Leroy, C.; Wimberley, C.; Bodini, B.; Remy, P.; Veronese, M.; Turkheimer, F.; Stankoff, B.; Bottlaender, M. Validation of an automatic reference region extraction for the quantification of [18F]DPA-714 in dynamic brain PET studies. J. Cereb. Blood Flow Metab. 2018, 38, 33–346. [Google Scholar] [CrossRef]
  2. Carlson, M.L.; Toueg, T.N.; Khalighi, M.M.; Castillo, J.; Shen, B.; Azevedo, E.C.; DiGiacomo, P.; Mouchawar, N.; Chau, G.; Zaharchuk, G.; et al. Hippocampal subfield imaging fractional anisotropy show parallel changes in Alzheimer’s disease tau progression using simultaneous tau-PET/MRI at 3T. Alzheimer’s Dement. 2021, 13, e12218. [Google Scholar] [CrossRef]
  3. 2021 Alzheimer’s disease facts and figures. Alzheimers Dement 2021, 17, 327–406. [CrossRef]
  4. Scheltens, P.; De Strooper, B.; Kivipelto, M.; Holstege, H.; Chételat, G.; Teunissen, C.E.; Cummings, J.; van der Flier, W.M. Alzheimer’s disease. Lancet 2021, 397, 1577–1590. [Google Scholar] [CrossRef] [PubMed]
  5. Petersen, R.C.; Doody, R.; Kurz, A.; Mohs, R.C.; Morris, J.C.; Rabins, P.V.; Ritchie, K.; Rossor, M.; Thal, L.; Winblad, B. Current concepts in mild cognitive impairment. Arch. Neurol. 2001, 58, 985–1992. [Google Scholar] [CrossRef] [PubMed]
  6. Schneider, J.S.; Kortagere, S. Current concepts in treating mild cognitive impairment in Parkinson’s disease. Neuropharmacology 2022, 203, 08880. [Google Scholar] [CrossRef] [PubMed]
  7. Frost, G.R.; Longo, V.; Li, T.; Jonas, L.A.; Judenhofer, M.; Cherry, S.; Koutcher, J.; Lekaye, C.; Zanzonico, P.; Li, Y.M. Hybrid PET/MRI enables high-spatial resolution, quantitative imaging of amyloid plaques in an Alzheimer’s disease mouse model. Sci. Rep. 2020, 10, 10379. [Google Scholar] [CrossRef] [PubMed]
  8. Franke, T.N.; Irwin, C.; Bayer, T.A.; Brenner, W.; Beindorff, N.; Bouter, C.; Bouter, Y. In vivo Imaging With 18F-FDG- and 18F-Florbetaben-PET/MRI Detects Pathological Changes in the Brain of the Commonly Used 5XFAD Mouse Model of Alzheimer’s Disease. Front. Med. 2020, 7, 29. [Google Scholar] [CrossRef]
  9. Klunk, W.E.; Engler, H.; Nordberg, A.; Wang, Y.; Blomqvist, G.; Holt, D.P.; Bergström, M.; Savitcheva, I.; Huang, G.F.; Estrada, S.; et al. Imaging brain amyloid in Alzheimer’s disease with Pittsburgh Compound-B. Ann. Neurol. 2004, 55, 306–319. [Google Scholar] [CrossRef]
  10. Jack, C.R., Jr.; Knopman, D.S.; Jagust, W.J.; Shaw, L.M.; Aisen, P.S.; Weiner, M.W.; Petersen, R.C.; Trojanowski, J.Q. Hypothetical model of dynamic biomarkers of the Alzheimer’s pathological cascade. Lancet Neurol. 2010, 9, 19–128. [Google Scholar] [CrossRef]
  11. Schwarz, C.G.; Gunter, J.L.; Wiste, H.J.; Przybelski, S.A.; Weigand, S.D.; Ward, C.P.; Senjem, M.L.; Vemuri, P.; Murray, M.E.; Dickson, D.W.; et al. A large-scale comparison of cortical thickness and volume methods for measuring Alzheimer’s disease severity. Neuroimage Clin. 2016, 11, 802–812. [Google Scholar] [CrossRef]
  12. Tsuneki, M. Deep learning models in medical image analysis. J. Oral Biosci. 2022, 64, 12–320. [Google Scholar] [CrossRef]
  13. Zhao, Y.; Zeng, K.; Zhao, Y.; Bhatia, P.; Ranganath, M.; Kozhikkavil, M.L.; Li, C.; Hermosillo, G. Deep learning solution for medical image localization and orientation detection. Med. Image Anal. 2022, 81, 102529. [Google Scholar] [CrossRef] [PubMed]
  14. Subbanna, N.; Wilms, M.; Tuladhar, A.; Forkert, N.D. An Analysis of the Vulnerability of Two Common Deep Learning-Based Medical Image Segmentation Techniques to Model Inversion Attacks. Sensors 2021, 21, 3874. [Google Scholar] [CrossRef]
  15. Zhang, Z.C.; Zhao, X.; Dong, G.; Zhao, X.M. Improving Alzheimer’s Disease Diagnosis with Multi-Modal PET Embedding Features by a 3D Multi-task MLP-Mixer Neural Network. IEEE J. Biomed. Health Inform. 2023, 27, 4040–4051. [Google Scholar] [CrossRef] [PubMed]
  16. Jiang, B.; Li, N.; Shi, X.; Zhang, S.; Li, J.; de Bock, G.H.; Vliegenthart, R.; Xie, X. Deep Learning Reconstruction Shows Better Lung Nodule Detection for Ultra-Low-Dose Chest CT. Radiology 2022, 303, 202–212. [Google Scholar] [CrossRef]
  17. Maggiora, G.D.; Castillo-Passi, C.; Qiu, W.; Liu, S.; Milovic, C.; Sekino, M.; Tejos, C.; Uribe, S.; Irarrazaval, P. DeepSPIO: Super Paramagnetic Iron Oxide Particle Quantification Using Deep Learning in Magnetic Resonance Imaging. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 143–153. [Google Scholar] [CrossRef] [PubMed]
  18. Minaee, S.; Boykov, Y.; Porikli, F.; Plaza, A.; Kehtarnavaz, N.; Terzopoulos, D. Image Segmentation Using Deep Learning: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 3523–3542. [Google Scholar] [CrossRef] [PubMed]
  19. Liu, T.; Siegel, E.; Shen, D. Deep Learning and Medical Image Analysis for COVID-19 Diagnosis and Prediction. Annu. Rev. Biomed. Eng. 2022, 24, 179–201. [Google Scholar] [CrossRef]
  20. Chen, X.; Wang, X.; Zhang, K.; Fung, K.M.; Thai, T.C.; Moore, K.; Mannel, R.S.; Liu, H.; Zheng, B.; Qiu, Y. Recent advances and clinical applications of deep learning in medical image analysis. Med. Image Anal. 2022, 79, 102444. [Google Scholar] [CrossRef]
  21. Apostolidis, K.D.; Papakostas, G.A. Digital Watermarking as an Adversarial Attack on Medical Image Analysis with Deep Learning. J. Imaging 2022, 8, 155. [Google Scholar] [CrossRef]
  22. Beheshti, I.; Geddert, N.; Perron, J.; Gupta, V.; Albensi, B.C.; Ko, J.H. Monitoring Alzheimer’s Disease Progression in Mild Cognitive Impairment Stage Using Machine Learning-Based FDG-PET Classification Methods. J. Alzheimer’s Dis. 2022, 89, 1493–1502. [Google Scholar] [CrossRef]
  23. Minoshima, S.; Mosci, K.; Cross, D.; Thientunyakit, T. Brain [F-18]FDG PET for Clinical Dementia Workup: Differential Diagnosis of Alzheimer’s Disease and Other Types of Dementing Disorders. Semin. Nucl. Med. 2021, 51, 230–240. [Google Scholar] [CrossRef]
  24. Camedda, R.; Bonomi, C.G.; Di Donna, M.G.; Chiaravalloti, A. Functional Correlates of Striatal Dopamine Transporter Cerebrospinal Fluid Levels in Alzheimer’s Disease: A Preliminary 18F-FDG PET/CT Study. Int. J. Mol. Sci. 2023, 24, 751. [Google Scholar] [CrossRef]
  25. Doré, V.; Doecke, J.D.; Saad, Z.S.; Triana-Baltzer, G.; Slemmon, R.; Krishnadas, N.; Bourgeat, P.; Huang, K.; Burnham, S.; Fowler, C.; et al. Plasma p217+tau versus NAV4694 amyloid and MK6240 tau PET across the Alzheimer’s continuum. Alzheimer’s Dement 2022, 14, 12307. [Google Scholar] [CrossRef]
  26. Therriault, J.; Benedet, A.L.; Pascoal, T.A.; Savard, M.; Ashton, N.J.; Chamoun, M.; Tissot, C.; Lussier, F.; Kang, M.S.; Bezgin, G.; et al. Determining Amyloid-beta Positivity Using 18F-AZD4694 PET Imaging. J. Nucl. Med. 2021, 62, 247–252. [Google Scholar] [CrossRef]
  27. Im, S.; Hanaoka, K.; Yamada, T.; Ishii, K. Regional cerebral THK5351 accumulations correlate with neuropsychological test scores in Alzheimer continuum. Asia Ocean. J. Nucl. Med. Biol. 2023, 11, 37–43. [Google Scholar]
  28. Völter, F.; Beyer, L.; Eckenweber, F.; Scheifele, M.; Bui, N.; Patt, M.; Barthel, H.; Katzdobler, S.; Palleis, C.; Franzmeier, N.; et al. Assessment of perfusion deficit with early phases of [18F]PI-2620 tau-PET versus [18F]flutemetamol-amyloid-PET recordings. Eur. J. Nucl. Med. Mol. Imaging 2023, 50, 1384–1394. [Google Scholar] [CrossRef]
  29. Schönecker, S.; Palleis, C.; Franzmeier, N.; Katzdobler, S.; Ferschmann, C.; Schuster, S.; Finze, A.; Scheifele, M.; Prix, C.; Fietzek, U.; et al. Symptomatology in 4-repeat tauopathies is associated with data-driven topology of 18F-PI-2620 tau-PET signal. Neuroimage Clin. 2023, 38, 103402. [Google Scholar] [CrossRef]
  30. Malarte, M.L.; Gillberg, P.G.; Kumar, A.; Bogdanovic, N.; Lemoine, L.; Nordberg, A. Discriminative binding of tau PET tracers PI2620, MK6240 and RO948 in Alzheimer’s disease, corticobasal degeneration and progressive supranuclear palsy brains. Mol. Psychiatry 2023, 28, 1272–1283. [Google Scholar] [CrossRef]
  31. Katzdobler, S.; Nitschmann, A.; Barthel, H.; Bischof, G.; Beyer, L.; Marek, K.; Song, M.; Wagemann, O.; Palleis, C.; Weidinger, E.; et al. German Imaging Initiative for Tauopathies. Additive value of [18F]PI-2620 perfusion imaging in progressive supranuclear palsy and corticobasal syndrome. Eur. J. Nucl. Med. Mol. Imaging 2023, 50, 423–434. [Google Scholar] [CrossRef]
  32. Kunze, G.; Kumpfel, R.; Rullmann, M.; Barthel, H.; Brendel, M.; Patt, M.; Sabri, O. Molecular Simulations Reveal Distinct Energetic and Kinetic Binding Properties of [18F]PI-2620 on Tau Filaments from 3R/4R and 4R Tauopathies. ACS Chem. Neurosci. 2022, 13, 2222–2234. [Google Scholar] [CrossRef]
  33. Laurell, G.L.; Plavén-Sigray, P.; Jucaite, A.; Varrone, A.; Cosgrove, K.P.; Svarer, C.; Knudsen, G.M.; Karolinska Schizophrenia Project Consortium; Ogden, R.T.; Cervenka, S.; et al. Nondisplaceable Binding Is a Potential Confounding Factor in 11C-PBR28 Translocator Protein PET Studies. J. Nucl. Med. 2021, 62, 412–417. [Google Scholar] [CrossRef]
  34. Chandra, A.; Valkimadi, P.E.; Pagano, G.; Cousins, O.; Dervenoulas, G.; Politis, M. Applications of amyloid, tau, and neuroinflammation PET imaging to Alzheimer’s disease and mild cognitive impairment. Hum. Brain Mapp. 2019, 40, 5424–5442. [Google Scholar] [CrossRef] [PubMed]
  35. Wang, J.; Jin, L.; Zhang, X.; Yu, H.; Ge, J.; Deng, B.; Li, M.; Zuo, C.; Chen, X. Activated microglia by 18F-DPA714 PET in a case of anti-LGI1 autoimmune encephalitis. J. Neuroimmunol. 2022, 368, 577879. [Google Scholar] [CrossRef]
  36. Shen, R.; Shen, D.; Zhou, Q.; Zhang, M.; Chen, S. Antibody-mediated autoimmune encephalitis evaluated by 18F-DPA714 PET/MRI. Brain Behav. Immun. Health 2022, 26, 100535. [Google Scholar] [CrossRef]
  37. Kaneko, K.I.; Irie, S.; Mawatari, A.; Igesaka, A.; Hu, D.; Nakaoka, T.; Hayashinaka, E.; Wada, Y.; Doi, H.; Watanabe, Y.; et al. [18F]DPA-714 PET imaging for the quantitative evaluation of early spatiotemporal changes of neuroinflammation in rat brain following status epilepticus. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 2265–2275. [Google Scholar] [CrossRef]
  38. Ni, R.; Rojdner, J.; Voytenko, L.; Dyrks, T.; Thiele, A.; Marutle, A.; Nordberg, A. In vitro Characterization of the Regional Binding Distribution of Amyloid PET Tracer Florbetaben and the Glia Tracers Deprenyl and PK11195 in Autopsy Alzheimer’s Brain Tissue. J. Alzheimer’s Dis. 2021, 80, 1723–1737. [Google Scholar] [CrossRef] [PubMed]
  39. Tondo, G.; Iaccarino, L.; Cerami, C.; Vanoli, G.E.; Presotto, L.; Masiello, V.; Coliva, A.; Salvi, F.; Bartolomei, I.; Mosca, L.; et al. 11C-PK11195 PET-based molecular study of microglia activation in SOD1 amyotrophic lateral sclerosis. Ann. Clin. Transl. Neurol. 2020, 7, 1513–1523. [Google Scholar] [CrossRef] [PubMed]
  40. Zhang, L.; Xie, D.; Li, Y.; Camargo, A.; Song, D.; Lu, T.; Jeudy, J.; Dreizin, D.; Melhem, E.R.; Wang, Z. Improving Sensitivity of Arterial Spin Labeling Perfusion MRI in Alzheimer’s Disease Using Transfer Learning of Deep Learning-Based ASL Denoising. J. Magn. Reason. Imaging 2022, 55, 1710–1722. [Google Scholar] [CrossRef] [PubMed]
  41. Soman, S.; Raghavan, S.; Rajesh, P.G.; Varma, R.P.; Mohanan, N.; Ramachandran, S.S.; Thomas, B.; Kesavadas, C.; Menon, R.N. Relationship between Cerebral Perfusion on Arterial Spin Labeling (ASL) MRI with Brain Volumetry and Cognitive Performance in Mild Cognitive Impairment and Dementia due to Alzheimer’s Disease. Ann. Indian Acad. Neurol. 2021, 24, 559–565. [Google Scholar] [PubMed]
  42. Chen, H.; Xu, Y.; Chen, L.; Shang, S.; Luo, X.; Wang, X.; Zhang, H. The convergent and divergent patterns in brain perfusion between Alzheimer’s disease and Parkinson’s disease with dementia: An ASL MRI study. Front. Neurosci. 2022, 16, 892374. [Google Scholar] [CrossRef] [PubMed]
  43. Kennedy, J.T.; Harms, M.P.; Korucuoglu, O.; Astafiev, S.V.; Barch, D.M.; Thompson, W.K.; Bjork, J.M.; Anokhin, A.P. Reliability and stability challenges in ABCD task fMRI data. Neuroimage 2022, 252, 119046. [Google Scholar] [CrossRef]
  44. Zhang, T.; Liao, Q.; Zhang, D.; Zhang, C.; Yan, J.; Ngetich, R.; Zhang, J.; Jin, Z.; Li, L. Predicting MCI to AD Conversation Using Integrated sMRI and rs-fMRI: Machine Learning and Graph Theory Approach. Front. Aging Neurosci. 2021, 13, 688926. [Google Scholar] [CrossRef]
  45. Hojjati, S.H.; Ebrahimzadeh, A.; Khazaee, A.; Babajani-Feremi, A. Predicting conversion from MCI to AD by integrating rs-fMRI and structural MRI. Comput. Biol. Med. 2018, 102, 30–39. [Google Scholar] [CrossRef]
  46. Buckner, R.L.; Andrews-Hanna, J.R.; Schacter, D.L. The brain’s default network: Anatomy, function, and relevance to disease. Ann. N. Y. Acad. Sci. 2008, 1124, 1–38. [Google Scholar] [CrossRef]
  47. Canário, N.; Jorge, L.; Martins, R.; Santana, I.; Castelo-Branco, M. Dual PET-fMRI reveals a link between neuroinflammation, amyloid binding and compensatory task-related brain activity in Alzheimer’s disease. Commun. Biol. 2022, 5, 804. [Google Scholar] [CrossRef] [PubMed]
  48. Tomasi, D.; Volkow, N.D. Brain motion networks predict head motion during rest- and task-fMRI. Front. Neurosci. 2023, 17, 1096232. [Google Scholar] [CrossRef]
  49. Liu, T.; Vickers, B.D.; Seidler, R.D.; Preston, S.D. Neural correlates of overvaluation and the effort to save possessions in a novel decision task: An exploratory fMRI study. Front. Psychol. 2023, 14, 1059051. [Google Scholar] [CrossRef]
  50. Farahani, F.V.; Karwowski, W.; D’Esposito, M.; Betzel, R.F.; Douglas, P.K.; Sobczak, A.M.; Bohaterewicz, B.; Marek, T.; Fafrowicz, M. Diurnal variations of resting-state fMRI data: A graph-based analysis. Neuroimage 2022, 256, 119246. [Google Scholar] [CrossRef]
  51. Fazal, Z.; Gomez, D.E.P.; Llera, A.; Marques, J.; Beck, T.; Poser, B.A.; Norris, D.G. A comparison of multiband and multiband multiecho gradient-echo EPI for task fMRI at 3 T. Hum. Brain Mapp. 2023, 4, 82–93. [Google Scholar] [CrossRef] [PubMed]
  52. Saeidi, M.; Karwowski, W.; Farahani, F.V.; Fiok, K.; Hancock, P.A.; Sawyer, B.D.; Christov-Moore, L.; Douglas, P.K. Decoding Task-Based fMRI Data with Graph Neural Networks, Considering Individual Differences. Brain Sci. 2022, 12, 1094. [Google Scholar] [CrossRef] [PubMed]
  53. Wei, Y.; Yang, C.; Jiang, H.; Li, Q.; Che, F.; Wan, S.; Yao, S.; Gao, F.; Zhang, T.; Wang, J.; et al. Multi-nuclear magnetic resonance spectroscopy: State of the art and future directions. Insights Imaging 2022, 13, 135. [Google Scholar] [CrossRef] [PubMed]
  54. Zhu, Y.; Sappo, C.R.; Grissom, W.A.; Gore, J.C.; Yan, X. Dual-Tuned Lattice Balun for Multi-Nuclear MRI and MRS. IEEE Trans. Med. Imaging 2022, 41, 1420–1430. [Google Scholar] [CrossRef]
  55. Heo, H.Y.; Zhang, Y.; Jiang, S.; Zhou, J. Influences of experimental parameters on chemical exchange saturation transfer (CEST) metrics of brain tumors using animal models at 4.7T. Magn. Reason. Med. 2019, 81, 316–330. [Google Scholar] [CrossRef] [PubMed]
  56. Yuan, Y.; Wang, C.; Kuddannaya, S.; Zhang, J.; Arifin, D.R.; Han, Z.; Walczak, P.; Li, G.; Bulte, J.W.M. In vivo tracking of unlabelled mesenchymal stromal cells by mannose-weighted chemical exchange saturation transfer MRI. Nat. Biomed. Eng. 2022, 6, 658–666. [Google Scholar] [CrossRef]
  57. Zhang, J.; Wang, J.; Xu, X.; You, Z.; Huang, Q.; Huang, Y.; Guo, Q.; Guan, Y.; Zhao, J.; Liu, J. In vivo synaptic density loss correlates with impaired functional and related structural connectivity in Alzheimer’s disease. J. Cereb. Blood Flow Metab. 2023, 43, 977–988. [Google Scholar] [CrossRef] [PubMed]
  58. Werner, P.; Barthel, H.; Drzezga, A.; Sabri, O. Current status and future role of brain PET/MRI in clinical and research settings. Eur. J. Nucl. Med. Mol. Imaging 2015, 42, 512–526. [Google Scholar] [CrossRef]
  59. Chang, Y.; Liu, J.; Wang, L.; Li, X.; Wang, Z.; Lin, M.; Jin, W.; Zhu, M.; Xu, B. Diagnostic Utility of Integrated11C-Pittsburgh Compound B Positron Emission Tomography/Magnetic Resonance for Cerebral Amyloid Angiopathy: A Pilot Study. Front. Aging Neurosci. 2021, 13, 721780. [Google Scholar] [CrossRef] [PubMed]
  60. Barthel, H.; Schroeter, M.L.; Hoffmann, K.T.; Sabri, O. PET/MR in dementia and other neurodegenerative diseases. Semin. Nucl. Med. 2015, 45, 224–233. [Google Scholar] [CrossRef]
  61. Zhang, M.; Guan, Z.; Zhang, Y.; Sun, W.; Li, W.; Hu, J.; Li, B.; Ye, G.; Meng, H.; Huang, X.; et al. Disrupted coupling between salience network segregation and glucose metabolism is associated with cognitive decline in Alzheimer’s disease—A simultaneous resting-state FDG-PET/fMRI study. NeuroImage Clin. 2022, 34, 102977. [Google Scholar] [CrossRef] [PubMed]
  62. Schmidhuber, J. Deep learning in neural networks: An overview. Neural Netw. 2015, 61, 85–117. [Google Scholar] [CrossRef]
  63. Fekri-Ershad, S.; Ramakrishnan, S. Cervical cancer diagnosis based on modified uniform local ternary patterns and feed forward multilayer network optimized by genetic algorithm. Comput. Biol. Med. 2022, 144, 05392. [Google Scholar] [CrossRef] [PubMed]
  64. Angelis, G.I.; Fuller, O.K.; Gillam, J.E.; Meikle, S.R. Denoising non-steady state dynamic PET data using a feed-forward neural network. Phys. Med. Biol. 2021, 66, 034001. [Google Scholar] [CrossRef]
  65. Vasireddi, H.K.; K, S.D.; GNV, R.R. Deep feed forward neural network-based screening system for diabetic retinopathy severity classification using the lion optimization algorithm. Graefe’s Arch. Clin. Exp. Ophthalmol. 2022, 260, 1245–1263. [Google Scholar] [CrossRef] [PubMed]
  66. Zhou, Q.; Huang, Z.; Ding, M.; Zhang, X. Medical Image Classification Using Light-weight CNN with Spiking Cortical Model Based Attention Module. IEEE J. Biomed. Health Inform. 2023, 7, 1991–2002. [Google Scholar]
  67. Jiang, X.; Yan, J.; Zhao, Y.; Jiang, M.; Chen, Y.; Zhou, J.; Xiao, Z.; Wang, Z.; Zhang, R.; Becker, B.; et al. Characterizing functional brain networks via Spatio-Temporal Attention 4D Convolutional Neural Networks (STA-4DCNNs). Neural Netw. 2023, 158, 99–110. [Google Scholar] [CrossRef]
  68. Yang, Y.; Wu, H. Deformable medical image registration based on CNN. J. X-ray Sci. Technol. 2023, 31, 85–94. [Google Scholar]
  69. Xiao, Z.; Su, Y.; Deng, Z.; Zhang, W. Efficient Combination of CNN and Transformer for Dual-Teacher Uncertainty-guided Semi-supervised Medical Image Segmentation. Comput. Methods Programs Biomed. 2022, 226, 107099. [Google Scholar] [CrossRef] [PubMed]
  70. Li, Y.; Zhao, J.; Lv, Z.; Pan, Z. Multimodal Medical Supervised Image Fusion Method by CNN. Front. Neurosci. 2021, 15, 638976. [Google Scholar] [PubMed]
  71. Nirmala, K.; Saruladha, K.; Dekeba, K. Investigations of CNN for Medical Image Analysis for Illness Prediction. Comput. Intell. Neurosci. 2022, 2022, 968200. [Google Scholar] [CrossRef] [PubMed]
  72. Rashid, T.; Zia, M.S.; Najam Ur, R.; Meraj, T.; Rauf, H.T.; Kadry, S. A Minority Class Balanced Approach Using the DCNN-LSTM Method to Detect Human Wrist Fracture. Life 2023, 13, 133. [Google Scholar] [CrossRef] [PubMed]
  73. Samee, N.A.; Ahmad, T.; Mahmoud, N.F.; Atteia, G.; Abdallah, H.A.; Rizwan, A. Clinical Decision Support Framework for Segmentation and Classification of Brain Tumor MRIs Using a U-Net and DCNN Cascaded Learning Algorithm. Healthcare 2022, 10, 2340. [Google Scholar] [CrossRef] [PubMed]
  74. Yan, J.; Jin, L.; Luo, X.; Li, S. Modified RNN for Solving Comprehensive Sylvester Equation With TDOA Application. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–11. [Google Scholar] [CrossRef]
  75. Hinton, G.E.; Salakhutdinov, R.R. Reducing the dimensionality of data with neural networks. Science 2006, 313, 504–507. [Google Scholar] [CrossRef]
  76. Lee, H.; Hyun, S.H.; Cho, Y.S.; Moon, S.H.; Choi, J.Y.; Kim, K.; Lee, K.H. Cluster analysis of autoencoder-extracted FDG PET/CT features identifies multiple myeloma patients with poor prognosis. Sci. Rep. 2023, 13, 7881. [Google Scholar] [CrossRef]
  77. Hong, J.; Kang, S.K.; Alberts, I.; Lu, J.; Sznitman, R.; Lee, J.S.; Rominger, A.; Choi, H.; Shi, K. Image-level trajectory inference of tau pathology using variational autoencoder for Flortaucipir PET. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 3061–3072. [Google Scholar] [CrossRef]
  78. Zhang, L.; Dai, H.; Sang, Y. Med-SRNet: GAN-Based Medical Image Super-Resolution via High-Resolution Representation Learning. Comput. Intell. Neurosci. 2022, 2022, 744969. [Google Scholar] [CrossRef]
  79. Sun, L.; Chen, J.; Xu, Y.; Gong, M.; Yu, K.; Batmanghelich, K. Hierarchical Amortized GAN for 3D High Resolution Medical Image Synthesis. IEEE J. Biomed. Health Inform. 2022, 26, 3966–3975. [Google Scholar] [CrossRef]
  80. Vaccari, I.; Orani, V.; Paglialonga, A.; Cambiaso, E.; Mongelli, M. A Generative Adversarial Network (GAN) Technique for Internet of Medical Things Data. Sensors 2021, 21, 3726. [Google Scholar] [CrossRef]
  81. Li, D.; Xie, L.; Wang, Z.; Yang, H. Brain Emotion Perception Inspired EEG Emotion Recognition With Deep Reinforcement Learning. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–14. [Google Scholar] [CrossRef]
  82. Kim, S.; Lee, P.; Oh, K.T.; Byun, M.S.; Yi, D.; Lee, J.H.; Kim, Y.K.; Ye, B.S.; Yun, M.J.; Lee, D.Y.; et al. Deep learning-based amyloid PET positivity classification model in the Alzheimer’s disease continuum by using 2-[18F]FDG PET. EJNMMI Res. 2021, 11, 56. [Google Scholar] [CrossRef]
  83. Hong, J.S.; You, W.C.; Sun, M.H.; Pan, H.C.; Lin, Y.H.; Lu, Y.F.; Chen, K.-M.; Huang, T.-H.; Lee, W.-K.; Wu, Y.-T. Deep Learning Detection and Segmentation of Brain Arteriovenous Malformation on Magnetic Resonance Angiography. J. Magn. Reason. Imaging 2023. [Google Scholar] [CrossRef]
  84. Richter, L.; Fetit, A.E. Accurate segmentation of neonatal brain MRI with deep learning. Front. Neuroinform. 2022, 16, 1006532. [Google Scholar] [CrossRef]
  85. Ramprasad, M.V.S.; Rahman, M.Z.U.; Bayleyegn, M.D. A Deep Probabilistic Sensing and Learning Model for Brain Tumor Classification With Fusion-Net and HFCMIK Segmentation. IEEE Open J. Eng. Med. Biol. 2022, 3, 178–188. [Google Scholar] [CrossRef]
  86. Peng, Z.; Ni, M.; Shan, H.; Lu, Y.; Li, Y.; Zhang, Y.; Pei, X.; Chen, Z.; Xie, Q.; Wang, S.; et al. Feasibility evaluation of PET scan-time reduction for diagnosing amyloid-beta levels in Alzheimer’s disease patients using a deep-learning-based denoising algorithm. Comput. Biol. Med. 2021, 138, 104919. [Google Scholar] [CrossRef]
  87. Martí-Juan, L.M.; Piella, G. MC-RVAE: Multi-channel recurrent variational autoencoder for multimodal Alzheimer’s disease progression modelling. Neuroimage 2023, 268, 119892. [Google Scholar] [CrossRef]
  88. Hamghalam, M.; Wang, T.; Lei, B. High tissue contrast image synthesis via multistage attention-GAN: Application to segmenting brain MR scans. Neural Netw. 2020, 132, 43–52. [Google Scholar] [CrossRef]
  89. Zhang, W.; Zhang, T.; Pan, T.; Zhao, S.; Nie, B.; Liu, H.; Shan, B. Deep Learning With 18F-Fluorodeoxyglucose-PET Gives Valid Diagnoses for the Uncertain Cases in Memory Impairment of Alzheimer’s Disease. Front. Aging Neurosci. 2021, 13, 764272. [Google Scholar] [CrossRef]
  90. Thakur, M.; Snekhalatha, U. Multi-stage classification of Alzheimer’s disease from 18F-FDG-PET images using deep learning techniques. Phys. Eng. Sci. Med. 2022, 45, 1301–1315. [Google Scholar] [CrossRef]
  91. Zhou, P.; Jiang, S.; Yu, L.; Feng, Y.; Chen, C.; Li, F.; Liu, Y.; Huang, Z. Use of a Sparse-Response Deep Belief Network and Extreme Learning Machine to Discriminate Alzheimer’s Disease, Mild Cognitive Impairment, and Normal Controls Based on Amyloid PET/MRI Images. Front. Med. 2020, 7, 621204. [Google Scholar] [CrossRef]
  92. Yu, W.; Lei, B.; Wang, S.; Liu, Y.; Feng, Z.; Hu, Y.; Shen, Y.; Ng, M.K. Morphological Feature Visualization of Alzheimer’s Disease via Multidirectional Perception GAN. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 4401–4415. [Google Scholar] [CrossRef]
  93. Jo, T.; Nho, K.; Risacher, S.L.; Saykin, A.J. Deep learning detection of informative features in tau PET for Alzheimer’s disease classification. BMC Bioinform. 2020, 21, 496. [Google Scholar] [CrossRef]
  94. Zhou, P.; Zeng, R.; Yu, L.; Feng, Y.; Chen, C.; Li, F.; Liu, Y.; Huang, Y.; Huang, Z. Deep-Learning Radiomics for Discrimination Conversion of Alzheimer’s Disease in Patients With Mild Cognitive Impairment: A Study Based on 18F-FDG PET Imaging. Front. Aging Neurosci. 2021, 13, 764872. [Google Scholar] [CrossRef]
  95. Zou, J.; Park, D.; Johnson, A.; Feng, X.; Pardo, M.; France, J.; Tomljanovic, Z.; Brickman, A.M.; Devanand, D.P.; Luchsinger, J.A.; et al. Deep learning improves utility of tau PET in the study of Alzheimer’s disease. Alzheimer’s Dement 2021, 13, e12264. [Google Scholar] [CrossRef] [PubMed]
  96. Etminani, K.; Soliman, A.; Davidsson, A.; Chang, J.R.; Martinez-Sanchis, B.; Byttner, S.; Camacho, V.; Bauckneht, M.; Stegeran, R.; Ressner, M.; et al. A 3D deep learning model to predict the diagnosis of dementia with Lewy bodies, Alzheimer’s disease, and mild cognitive impairment using brain 18F-FDG PET. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 563–584. [Google Scholar] [CrossRef]
  97. Zhang, Q.; Du, Q.; Liu, G. A whole-process interpretable and multi-modal deep reinforcement learning for diagnosis and analysis of Alzheimer’s disease. J. Neural Eng. 2021, 18, 1741–2552. [Google Scholar] [CrossRef] [PubMed]
  98. Hui, H.Y.H.; Ran, A.R.; Dai, J.J.; Cheung, C.Y. Deep Reinforcement Learning-Based Retinal Imaging in Alzheimer’s Disease: Potential and Perspectives. J. Alzheimer’s Dis. 2023, 94, 39–50. [Google Scholar] [CrossRef]
  99. Choi, H.; Kim, Y.K.; Yoon, E.J.; Lee, J.Y.; Lee, D.S. Neuroimaging, Cognitive signature of brain FDG PET based on deep learning: Domain transfer from Alzheimer’s disease to Parkinson’s disease. Eur. J. Nucl. Med. Mol. Imaging 2020, 47, 403–412. [Google Scholar] [CrossRef] [PubMed]
  100. Cui, R.; Liu, M.; Alzheimer’s Disease Neuroimaging Initiative. RNN-based longitudinal analysis for diagnosis of Alzheimer’s disease. Comput. Med. Imaging Graph. 2019, 73, 1–10. [Google Scholar] [CrossRef]
Figure 1. The number of publications in the Web of Science related to Alzheimer’s disease, deep learning, and PET/MR imaging, analyzed by year.
Figure 2. Typical amyloid PET/MRI findings in rare dementia diseases. The two left columns display the overlay of 18F-florbetaben amyloid PET/MR, the middle column shows the corresponding anatomical T1-MPRAGE MR slice, and the right column displays the corresponding slices of other imaging modalities. Abbreviations: bvFTD, behavioral-variant frontotemporal dementia; CAA, cerebral amyloid angiopathy; LPA, logopenic aphasia; NPH, normal pressure hydrocephalus; PCA, posterior cortical atrophy; PNFA, progressive nonfluent aphasia; SWI, susceptibility-weighted MRI [60].
Figure 3. Results of the triple-network parcellation and maps of average functional connectivity strength (FCS) and FDG uptake in the CN, MCI, and AD groups. (a) Parcellation of the DMN (red), CEN (blue), and SN (green) derived from fMRI data. (b) Average FCS maps and (c) average FDG-SUVR maps of the CN, MCI, and AD groups, masked by the triple-network parcellation in (a). All maps are overlaid on the MNI T1-weighted template; lower FCS and FDG uptake in MCI and AD can be observed by visual inspection. Abbreviations: DMN, default-mode network; CEN, central executive network; SN, salience network; FCS, functional connectivity strength; FDG, fluorodeoxyglucose; SUVR, standardized uptake value ratio; CN, cognitively normal; MCI, mild cognitive impairment; AD, Alzheimer’s disease [61].
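As a rough illustration of how FCS maps such as those in Figure 3b can be derived, the sketch below computes a voxel-wise connectivity strength from preprocessed fMRI time series, assuming FCS is defined as the average suprathreshold Pearson correlation between each voxel and all other voxels. The array shapes, the 0.25 correlation threshold, and the function name are illustrative assumptions, not the exact pipeline of the cited study [61].

```python
# Minimal sketch of voxel-wise functional connectivity strength (FCS),
# assuming FCS = mean suprathreshold Pearson correlation between each voxel's
# fMRI time series and all other voxels. Shapes, threshold, and names are
# illustrative assumptions, not the pipeline of the cited work.
import numpy as np

def functional_connectivity_strength(timeseries: np.ndarray, r_thresh: float = 0.25) -> np.ndarray:
    """timeseries: (n_voxels, n_timepoints) preprocessed BOLD signals.
    Returns an (n_voxels,) vector of FCS values."""
    corr = np.corrcoef(timeseries)       # Pearson correlation between all voxel pairs
    np.fill_diagonal(corr, 0.0)          # exclude self-correlation
    corr[corr < r_thresh] = 0.0          # keep only suprathreshold positive edges
    # Sum of retained correlations per voxel, averaged over the other voxels
    return corr.sum(axis=1) / (timeseries.shape[0] - 1)

# Example with synthetic data: 500 voxels, 200 time points
fcs = functional_connectivity_strength(np.random.randn(500, 200))
print(fcs.shape)  # (500,)
```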
Figure 4. (A) Flowchart of the criteria for grouping data based on memory scores from the Mini-Mental State Examination (MMSE), Alzheimer’s Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), and Clinical Dementia Rating (CDR). (B) Flowchart of the study stages, including grouping, training, and applying the DL model; the final step uses this framework to diagnose uncertain samples. (C) Architecture of the designed 3D CNN model [89].
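To make the kind of pipeline shown in Figure 4C more concrete, the following is a minimal sketch of a 3D CNN classifier operating on co-registered PET and MR volumes. The input size (2 × 96 × 96 × 96), channel widths, and three-way CN/MCI/AD output are assumptions for illustration; they are not the specific architecture designed in the cited work [89].

```python
# Minimal sketch of a 3D CNN classifier for PET/MR volumes. Input size,
# channel widths, and the three output classes (CN / MCI / AD) are assumptions,
# not the architecture of the cited study.
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    def __init__(self, in_channels: int = 2, n_classes: int = 3):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv3d(cin, cout, kernel_size=3, padding=1),
                nn.BatchNorm3d(cout),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(2),
            )
        self.features = nn.Sequential(block(in_channels, 16), block(16, 32), block(32, 64))
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),  # global average pooling keeps the head size-agnostic
            nn.Flatten(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

model = Simple3DCNN()
logits = model(torch.randn(1, 2, 96, 96, 96))  # one subject, PET + MR channels
print(logits.shape)  # torch.Size([1, 3])
```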
Table 1. Categorization of PET/MR Imaging in AD (including deep learning approaches).

Modality/Approach | Advantages | Drawbacks
Structural MRI | Provides detailed anatomical information; can detect changes in brain structure, such as atrophy or cortical thinning. | Limited sensitivity in detecting early stages of AD; difficulty in distinguishing AD-specific changes from normal age-related changes.
Functional MRI (fMRI) | Measures brain activity and connectivity; can identify functional abnormalities in AD, such as changes in resting-state networks. | Limited specificity in distinguishing AD from other neurodegenerative disorders; relatively low spatial resolution compared to other imaging modalities.
Positron Emission Tomography (PET) | Can detect specific biomarkers associated with AD, such as beta-amyloid plaques and tau tangles; provides quantitative measurements of biomarker distribution. | Expensive and time-consuming procedure; requires radiotracers, which may have limited availability; ionizing radiation exposure.
PET/MRI Fusion | Combines the strengths of both PET and MRI modalities; provides complementary information on both functional and structural aspects. | Limited availability and high cost of PET/MRI scanners; increased complexity in data acquisition and processing.
Deep Learning (DL) | Can extract complex patterns and features from large imaging datasets; enables automated analysis and classification of AD-related imaging biomarkers; potential for improving diagnostic accuracy and early detection. | Requires large amounts of labeled training data; vulnerable to overfitting if the dataset is not representative; lack of interpretability, making it challenging to understand the underlying biological mechanisms.
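The PET/MRI fusion row of Table 1 notes that the two modalities carry complementary functional and structural information. A common way deep learning models exploit this is a two-branch (late-fusion) network with one encoder per modality, sketched below under assumed input sizes, layer widths, and concatenation-based fusion; it illustrates the general pattern rather than any specific published model.

```python
# Illustrative two-branch late-fusion network for combined PET + MR inputs,
# assuming each modality is a single-channel 3D volume of the same size.
# Branch widths and concatenation-based fusion are assumptions for illustration.
import torch
import torch.nn as nn

def encoder3d(out_dim: int = 32) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool3d(2),
        nn.Conv3d(8, out_dim, 3, padding=1), nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    )

class LateFusionNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.pet_branch = encoder3d()   # encodes the PET volume
        self.mri_branch = encoder3d()   # encodes the structural MR volume
        self.classifier = nn.Linear(32 + 32, n_classes)

    def forward(self, pet: torch.Tensor, mri: torch.Tensor) -> torch.Tensor:
        # Concatenate the per-modality features before classification
        fused = torch.cat([self.pet_branch(pet), self.mri_branch(mri)], dim=1)
        return self.classifier(fused)

net = LateFusionNet()
out = net(torch.randn(1, 1, 64, 64, 64), torch.randn(1, 1, 64, 64, 64))
print(out.shape)  # torch.Size([1, 2])
```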
Table 2. Typical deep learning algorithm approaches in AD.

Approach | Advantages | Drawbacks
Feedforward Neural Networks (FFNN) | A simple and straightforward approach for AD classification; can handle high-dimensional data and has good generalization capability. | May struggle to capture temporal dependencies in AD progression; may be prone to overfitting if the dataset is small.
Convolutional Neural Networks (CNN) | Effective in extracting spatial features from images or volumetric data; can automatically learn relevant features and hierarchies, making them well suited for image-based AD analysis. | May not effectively capture temporal information, which is crucial for understanding AD progression over time; may require a large amount of labeled training data.
Recurrent Neural Networks (RNN) | Designed to handle sequential data and can capture temporal dependencies effectively; can model the dynamics of AD progression over time and handle variable-length input sequences. | May suffer from the vanishing-gradient problem, making it difficult to capture long-term dependencies; can be computationally expensive and require significant resources for training.
Autoencoder | Learns compact latent representations of imaging data in an unsupervised manner; useful for dimensionality reduction, denoising, and feature extraction from PET/MR data. | Primarily an unsupervised learning model and may not directly handle AD classification tasks; may struggle to capture complex relationships between features.
Generative Adversarial Networks (GAN) | Can generate synthetic data samples that resemble real AD data; can be used for data augmentation, increasing the size of the training dataset, and for anomaly detection in AD diagnosis. | Can be challenging to train and may suffer from mode collapse or instability; may generate unrealistic samples that do not accurately represent AD disease characteristics.
Deep Reinforcement Learning (DRL) | Can be used to optimize treatment strategies for AD patients by learning from trial and error; can adapt and improve treatment decisions based on patient feedback, leading to personalized and adaptive therapies. | Requires a substantial amount of training data and can be computationally expensive; defining a suitable reward function for AD treatment is challenging, and learned policies may not generalize well to new patients.
Note: The categorization and pros/cons provided above are general observations and may vary depending on specific implementations and variations in the mentioned DL approaches in AD research.
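To ground the autoencoder row of Table 2, the sketch below shows a small 3D convolutional autoencoder trained with a reconstruction loss on unlabeled PET or MR volumes; the layer widths, latent size, and 64-voxel input are illustrative assumptions rather than a specific published design.

```python
# Minimal sketch of a 3D convolutional autoencoder for unsupervised feature
# learning from PET or MR volumes. Layer widths and the 64-voxel input size
# are illustrative assumptions.
import torch
import torch.nn as nn

class ConvAutoencoder3D(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder compresses the volume into a low-dimensional feature map
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(inplace=True),   # 64 -> 32
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),  # 32 -> 16
        )
        # Decoder reconstructs the input from the latent features
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(16, 8, 2, stride=2), nn.ReLU(inplace=True),    # 16 -> 32
            nn.ConvTranspose3d(8, 1, 2, stride=2),                            # 32 -> 64
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = ConvAutoencoder3D()
x = torch.randn(2, 1, 64, 64, 64)               # two unlabeled volumes
loss = nn.functional.mse_loss(model(x), x)      # reconstruction objective; no labels needed
loss.backward()
```

In practice, the learned encoder features would typically be passed to a separate classifier (for example, a small FFNN or support vector machine), consistent with the table's caveat that autoencoders do not directly perform AD classification.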