Review

Artificial Intelligence in Alzheimer’s Disease Diagnosis and Prognosis Using PET-MRI: A Narrative Review of High-Impact Literature Post-Tauvid Approval

by Rafail C. Christodoulou 1,*, Amanda Woodward 1, Rafael Pitsillos 2, Reina Ibrahim 3 and Michalis F. Georgiou 4

1 Department of Radiology, Stanford University School of Medicine, Stanford, CA 94305, USA
2 Department of Neurophysiology, The Cyprus Institute of Neurology and Genetics, 2371 Nicosia, Cyprus
3 Faculty of Medicine, University of Balamand, Balamand 2807, Lebanon
4 Department of Radiology, Division of Nuclear Medicine, University of Miami, Miami, FL 33136, USA
* Author to whom correspondence should be addressed.
J. Clin. Med. 2025, 14(16), 5913; https://doi.org/10.3390/jcm14165913
Submission received: 16 June 2025 / Revised: 25 July 2025 / Accepted: 19 August 2025 / Published: 21 August 2025

Abstract

Background: Artificial intelligence (AI) is reshaping neuroimaging workflows for Alzheimer’s disease (AD) diagnosis, particularly through advances in PET and MRI analysis. Since the FDA approval of Tauvid, a PET tracer targeting tau pathology, there has been a notable increase in studies applying AI to neuroimaging data. This narrative review synthesizes recent, high-impact literature to highlight clinically relevant AI applications in AD imaging. Methods: This review examined peer-reviewed studies published between January 2020 and January 2025, focusing on the use of AI, including machine learning, deep learning, and hybrid models, for diagnostic and prognostic tasks in AD using PET and/or MRI. Studies were identified through targeted PubMed, Scopus, and Embase searches, emphasizing methodological diversity and clinical relevance. Results: A total of 111 studies were categorized into five thematic areas: image preprocessing and segmentation, diagnostic classification, prognosis and disease staging, multimodal data fusion, and emerging innovations. Deep learning models such as convolutional neural networks (CNNs), generative adversarial networks (GANs), and transformer-based architectures were widely employed by the AD research community. While several models reported strong diagnostic performance, methodological challenges such as limited reproducibility, small sample sizes, and lack of external validation constrain clinical translation. Trends in explainable AI, synthetic imaging, and integration of clinical biomarkers are also discussed. Conclusions: AI is rapidly advancing the field of AD imaging, offering tools for enhanced segmentation, staging, and early diagnosis. Multimodal approaches and biomarker-guided models show particular promise. However, future research must focus on reproducibility, interpretability, and standardized validation to bridge the gap between research and clinical practice.

1. Introduction

AD is the most prevalent neurodegenerative cause of dementia, with more than 60% of patients in dementia outpatient clinics diagnosed with AD [1]. The steady growth of the elderly population driven by advances in medicine, combined with the rising incidence of Alzheimer’s disease in individuals over 65, makes AD a significant challenge for the healthcare community [2]. By the age of 85, the annual incidence of the disease rises to 7.6%, compared with 0.4% among people aged 65 to 74 [3]. Alzheimer’s Disease International reported a global prevalence of 50 million people in 2018, a number expected to triple by 2050 [1]. Despite the high incidence of AD, no effective treatment currently exists, and clinicians instead focus on improving patients’ quality of life and slowing the progression of dementia.
The pathophysiology underlying AD provides insights into the complexity of the brain networks that govern higher cognitive functions. The first studies on the pathology of the disease, carried out by Alois Alzheimer in the early 1900s, identified its two pathological hallmarks: extracellular amyloid-beta (Aβ) senile plaques and intracellular neurofibrillary tangles (NFTs) [4]. The abnormal, insoluble form of Aβ protein is the product of amyloid precursor protein (APP) cleavage by beta and gamma secretases. This yields the highly amyloidogenic form of Aβ, which assembles into oligomers and accumulates extracellularly as plaques. These aggregates cause synaptic loss and eventually loss of neurons [5]. It is now widely accepted that amyloid plaque deposition begins in the cerebrum years before the onset of cognitive decline and initiates the disease pathogenesis. Neurofibrillary tangles form through hyperphosphorylation of tau, the protein product of the MAPT gene, and contribute to neuronal degeneration by disrupting essential cellular processes [6]. Braak and Braak in 1991 proposed a neuropathological staging of AD that is consistent with the symptomatology of Alzheimer’s dementia [7]. Episodic memory impairment is most commonly the leading feature of the dementia, reflecting damage to the hippocampus. Executive dysfunction, attention deficits, and apathy develop as the disease progresses, indicating more widespread cortical pathology [8].
Neuroimaging leverages these widespread pathological hallmarks of AD to enhance the diagnostic process and even predict patterns that appear in the late stages of the disease. Magnetic resonance imaging (MRI) and positron emission tomography (PET) are the most significant neuroimaging tools for AD diagnosis. MRI has historically been a powerful tool for detecting large lesions and cerebrovascular events that contribute to cognitive decline, thereby helping to rule out neurodegeneration [9]. More recently, MRI has evolved into a precise and sensitive method for detecting atrophy patterns and volume loss caused by tissue degeneration [10]. PET, by contrast, exploits the metabolic dysfunction occurring in the diseased brain to detect tissue hypometabolism and to identify pathogenic accumulations of Aβ and tau [11]. The 2020 FDA approval of Tauvid, a tau PET tracer, marked a groundbreaking advancement in Alzheimer’s diagnosis. Tau tracers offer a novel diagnostic instrument for the clinical setting and have catalyzed a new wave of studies focusing on tau-specific detection [12].
The high volume of imaging data generated by advanced and precise imaging techniques has presented significant challenges to the scientific community. The developing field of artificial intelligence (AI) represents an advancement of computing technology that “imitates” human intelligence by training a program to perform tasks that would otherwise require it. Concurrently, artificial intelligence has emerged as a powerful tool for enhancing image interpretation, disease classification, and longitudinal prediction. In Alzheimer’s research, machine learning (ML) has been employed to detect MRI patterns by providing the system with labeled data (supervised learning) or unlabeled data (unsupervised learning) so that it can eventually decide whether a given pattern is present in previously unseen inputs [13]. Deep learning (DL), in contrast to classical ML, identifies hidden patterns in complex imaging data using a hierarchy of neural networks, amplifying relevant features and discarding irrelevant information [14]. The fundamental distinction between the two AI subfields is that DL processes raw data without requiring manual preprocessing, directly applying learned patterns to unknown inputs [15]. A multilayered network of nodes mimicking the human brain composes an artificial neural network (ANN). ANNs underpin ML and DL analyses of imaging input, processing the data to support image classification and disease prediction/prognosis [16].
As numerous novel technologies emerge in the rapidly evolving field of AI applied to AD imaging, there is a growing need for guidance on their use and application to ensure standardized implementation and optimized diagnostic accuracy. This review aims to narratively synthesize the highest-impact contributions in this area, offering clinicians and researchers a focused overview of AI-driven innovations in AD diagnosis using PET/MRI. Several novel technologies introduced in the last few years are highlighted in our study, with particular emphasis on their performance metrics and their implications for both research and clinical applications. Although various reviews attempt to summarize the role of AI tools in neuroimaging or to provide insight into the use of AI in Alzheimer’s disease diagnosis [15,17], our review is among the first to comprehensively consolidate and critically assess high-impact AI applications in both PET and MRI for AD, thus offering an up-to-date and practice-oriented perspective that is currently lacking in the literature.

2. Materials and Methods

This narrative review highlights significant advancements in applying AI to diagnose and predict AD using PET, MRI, or a combination of both. The review focuses on studies published after the FDA approval of Tauvid, a PET tracer targeting tau pathology, which marked a pivotal milestone in AD neuroimaging.
A clinical research librarian (A.W.) conducted a comprehensive literature search using PubMed, Embase, and Scopus to identify peer-reviewed studies published between January 2020 and January 2025. The core search strategy focused on four key concepts: Alzheimer’s Disease, PET, MRI, and artificial intelligence, incorporating subterms such as “machine learning”, “deep learning”, “neural networks”, and related variants. Full database-specific search strategies are provided in Appendix A.1, Appendix A.2 and Appendix A.3.
To ensure a thorough and inclusive review, we also conducted manual citation screening. During this supplementary process, we applied additional keywords such as “radiomics”, “transformer”, “autoencoder”, “tau PET”, “fusion imaging”, “explainable AI”, and “multimodal deep learning” to identify high-impact studies not retrieved by the structured database queries.
We included only peer-reviewed original research articles and review papers that specifically focused on Alzheimer’s Disease and applied AI techniques (machine learning, deep learning, or radiomics) to PET, MRI, or hybrid PET/MRI data. Studies were limited to those involving human participants and published in English. We excluded case reports, conference abstracts, editorials, non-peer-reviewed literature, studies on non-AD dementias, and those that used only traditional statistical methods without AI integration.
After deduplication and full-text screening, 111 studies met the inclusion criteria and were included in the final review. These studies were grouped into five thematic areas: (1) image preprocessing and segmentation, (2) diagnosis and classification, (3) prediction and prognosis, (4) multimodal data fusion, and (5) emerging trends.
A PRISMA-style flow diagram that summarizes the literature search and selection process is shown in Figure 1.

3. Results

We identified and reviewed 111 peer-reviewed studies to support this study’s background and analytical framework. Our core analysis focused on studies published between January 2020 and January 2025 on artificial intelligence (AI) applications in PET and MRI neuroimaging for Alzheimer’s disease (AD). We categorized the studies into five core diagnostic domains:
  • Image preprocessing and segmentation,
  • Diagnosis and classification,
  • Prediction and prognosis,
  • Multimodal data fusion,
  • Emerging trends in AI modeling.
Some representative studies from each category are presented in the following table (Table 1), along with a brief explanation of the primary objectives for each domain and common AI tools applied to address them.
Deep learning approaches, particularly convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs), were the most frequently employed models across all domains. To illustrate these trends, we constructed a bar chart showing the distribution of AI models used in Alzheimer’s disease imaging studies included in our review article, from 2020 to 2025 (Figure 2). Several studies reported classification accuracies exceeding 95% for distinguishing AD from mild cognitive impairment (MCI) or cognitively normal (CN) individuals. These results must be interpreted cautiously due to potential risks of overfitting, lack of external validation, small sample sizes, and inconsistencies in imaging protocols.
Multimodal integration strategies, which combined MRI and PET imaging with clinical or neuropsychological data, generally demonstrated superior performance over unimodal models, particularly for early-stage diagnosis and longitudinal disease progression assessment (Figure 3). Despite this, few studies conducted head-to-head model comparisons or reported confidence intervals, limiting direct comparability.
In parallel, methodological variability in image preprocessing pipelines further complicates cross-study comparisons, as differences in preprocessing steps such as skull stripping, noise reduction, and registration can significantly influence downstream model performance and generalizability. Some of these processes are necessary to make input modalities compatible with AI tools, thereby enabling the generation of interpretable results (Figure 4). Of the 111 studies included in this review, 46 (41.4%) were primary research articles that explicitly reported at least one image preprocessing step. Among these, 31 studies conducted skull stripping, 21 applied noise reduction (e.g., Gaussian filtering or non-local means denoising), and 24 performed image registration to standard anatomical templates such as MNI152. Bias field correction and intensity normalization were documented in 18 and 17 studies, respectively. These steps were typically implemented using standard neuroimaging software (e.g., FSL, SPM12, FreeSurfer) or deep learning-based preprocessing tools. The remaining studies were either reviews or papers that did not specify preprocessing details, often due to the use of preprocessed datasets like ADNI. Inconsistencies in imaging preprocessing pipelines emphasize the need for standardized and transparent reporting of preprocessing protocols to ensure reproducibility and facilitate model deployment.
In response to these challenges, recent studies have focused on enhancing model interpretability, efficiency, and clinical applicability. Approaches such as explainable AI (XAI), contrast-agent-free imaging pipelines, and generative techniques for synthetic data augmentation are gaining traction. These innovations represent promising steps toward developing interpretable, scalable, and clinically viable AI tools for AD neuroimaging.

4. Discussion

4.1. Image Preprocessing and Segmentation

Image preprocessing and segmentation are foundational to AI-driven Alzheimer’s disease (AD) imaging pipelines (Figure 2). These steps are critical for reducing inter-scan variability and optimizing model performance across diverse imaging sources.
Preprocessing typically includes bias field correction, intensity normalization, and resampling to standardized voxel dimensions. These procedures help mitigate scanner-related artifacts and patient-specific noise while ensuring anatomical consistency across datasets [28,29,30]. Skull stripping, performed using tools such as the Brain Surface Extractor or FSL’s Brain Extraction Tool, removes non-brain tissues (e.g., scalp and orbits) and improves focus on cerebral structures, enhancing segmentation accuracy [28].
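To make the normalization step concrete, the following minimal Python/NumPy sketch applies z-score intensity normalization within a brain mask, the same idea implemented by standard neuroimaging toolkits; the function name and toy volume are illustrative, not taken from any cited pipeline:

```python
import numpy as np

def zscore_normalize(volume, mask):
    """Z-score intensity normalization restricted to a brain mask:
    voxels inside the mask get zero mean and unit variance,
    voxels outside are zeroed (hypothetical helper for illustration)."""
    vals = volume[mask]
    mu, sigma = vals.mean(), vals.std()
    out = np.zeros_like(volume, dtype=float)
    out[mask] = (volume[mask] - mu) / sigma
    return out

# Toy 8x8x8 "scan" with a cubic brain mask
rng = np.random.default_rng(0)
vol = rng.normal(100.0, 15.0, size=(8, 8, 8))
mask = np.zeros(vol.shape, dtype=bool)
mask[2:6, 2:6, 2:6] = True
norm = zscore_normalize(vol, mask)
```

After normalization, intensities inside the mask have zero mean and unit variance, which is what makes scans acquired on different scanners comparable for downstream models.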
Noise reduction, significant for PET-MRI fusion, is commonly achieved with Gaussian filters or advanced denoising algorithms like CONN-NLM, improving structural integrity for multimodal image alignment [31]. Image registration to standard anatomical templates (e.g., MNI152) via software such as SPM12, FSL, or DARTEL facilitates cross-subject comparisons and region-of-interest (ROI) analyses, particularly in voxel-based morphometry or atlas-guided studies [30].
Traditional atlas-based segmentation methods (e.g., FreeSurfer, FSL-FIRST) have shown value in delineating regions like the hippocampus, amygdala, and ventricles. However, their limited adaptability to inter-individual anatomical variability has led to a shift toward deep learning-based segmentation, which offers superior spatial precision and robustness [32].
Recent studies employing architectures such as U-Net, V-Net, and nnU-Net have demonstrated high segmentation fidelity, with Dice similarity coefficients exceeding 0.90 for hippocampal and whole-brain tissue segmentation. Patch-wise CNNs (e.g., M-Net, hybrid multiscale models) enhance boundary detection by operating on subregions, reducing overfitting while preserving local context, especially practical in hippocampal segmentation from structural or multimodal inputs [18,33].
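The Dice similarity coefficient used as the yardstick above has a simple definition: twice the overlap of two binary masks divided by their combined size. A minimal NumPy sketch on toy masks (purely illustrative):

```python
import numpy as np

def dice(seg, gt):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    denom = seg.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

# Two overlapping toy masks: 36 and 25 voxels, 25 of them in common
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True
b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True
print(dice(a, b))  # 2*25 / (36+25) ≈ 0.8197
```

A coefficient above 0.90, as reported for hippocampal segmentation, thus means the predicted and reference masks agree on the large majority of voxels.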
For PET segmentation, which faces challenges due to low anatomical resolution, deep conditional generative adversarial networks (cGANs) have shown promise. One model, FSPET, successfully segmented frontal lobe structures from FDG-PET by integrating anatomical priors and convolutional autoencoders, outperforming traditional methods in robustness and spatial accuracy [19,34].
Radiomics-based segmentation remains influential, particularly in hippocampal analysis. Studies have demonstrated that radiomic features extracted from T1-weighted MRI correlate with clinical markers (e.g., MMSE, amyloid-β, pTau), enabling high diagnostic accuracy. One investigation achieved an AUC of 0.961 using a random forest classifier to distinguish cognitively normal individuals from those with MCI or AD [18,34].
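The AUC figures reported throughout this review summarize ranking quality: the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative one. A small illustrative computation (toy scores, not data from the cited study):

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the fraction of positive/negative
    pairs in which the positive case is scored higher (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Toy classifier scores: positives mostly ranked above negatives
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4]
labels = [1, 1, 0, 1, 0, 0]
print(roc_auc(scores, labels))  # 8 of 9 pairs correctly ordered ≈ 0.889
```

An AUC of 0.961, as in the random forest study above, therefore means the model ranks an affected individual above an unaffected one in about 96% of such pairs.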
Functional imaging pipelines—especially those involving resting-state fMRI—employ preprocessing steps such as motion correction, slice timing adjustment, and spatial smoothing using SPM12 [35]. Subsequent segmentation with hybrid models, including 3D CNNs and LSTMs, enabled the discrimination of early MCI versus advanced AD, achieving classification accuracies above 96% by leveraging both spatial and temporal patterns [36].
Finally, innovative reinforcement learning models, such as Q-learning agents, have been proposed to automate hippocampal localization without manual intervention or atlas dependency. These models achieved comparable performance to fully supervised CNNs, improving generalizability and reducing memory overhead [30,37].

4.2. Diagnosis and Classification

Once preprocessing and segmentation are complete, the image is ready for analysis and interpretation. Artificial intelligence has provided a variety of tools for AD research, enabling a more efficient and accurate diagnostic process and enhancing classification. Machine learning models have been widely used to improve diagnostic accuracy in AD, with varying efficacy and reproducibility. The more recent application of AI in medical imaging is deep learning, built on artificial neural networks (ANNs) that emulate human intelligence when processing complex input information and analyzing data. Advanced techniques leveraging DL have been developed to identify atrophy patterns on MRI and hypometabolic networks, or to detect specific biomarkers on PET. AI has been further implemented in fMRI scanning and PET-MRI fusion, enabling more complex processing of functional and anatomical input information and uncovering patterns that would otherwise remain obscure.
This section reviews and presents the most frequently used AI models for Alzheimer’s diagnosis and the classification of AD, mild cognitive impairment (MCI), and healthy controls. Table 1 summarizes the main diagnostic categories of AI use in AD imaging. The imaging modalities employed include MRI (structural MRI or functional MRI), PET (using FDG, amyloid-β, or tau tracers), and PET/MRI fusion models.

4.2.1. Machine Learning in AD Diagnosis and Classification

Support vector machine (SVM) is a supervised machine learning tool that solves complicated tasks by setting variables and detecting patterns according to the expected output (or labeled imaging, in the case of neuroimaging) provided for the algorithm’s training [38]. SVM has been widely applied to classify AD and MCI. Zhang et al. in 2021 [39] developed a modular-LASSO feature selection (MLFS) approach for AD/MCI classification, which incorporates the modularity of Fuzzy Bayesian Networks to detect discrete AD/MCI features. These features were then selected by the LASSO tool, classified via SVM, and applied to resting-state fMRI to detect AD/MCI [39]. SVM has also been employed in radiomics-based feature classification studies, yielding promising results [40,41]. For instance, Jiao et al. proposed a computational model based on SVM to identify radiomics signatures of tau tracer PET images, demonstrating higher accuracy than SUVR (84.8 ± 4.5% vs. 73.1 ± 3.6%) [40]. In a 2022 study by Nuvoli et al., differential diagnosis of AD versus MCI based on hypometabolism of the temporal lobe in FDG PET imaging was conducted, with the linear SVM model showing 76.23% accuracy [42]. A different approach was proposed by Akramifard et al. in 2020 [43], who improved the classification performance between AD/MCI and normal controls processed by an SVM classifier. Instead of expanding the sample size, the analysis focused on a smaller dataset by repeating key features within the input vectors (a method known as emphasis learning), achieving a classification accuracy of 98.81% between AD and NC [43]. Combining SVM with graph measures in functional MRI analysis, Wang et al. in 2023 showed improved detection of AD, with a maximum accuracy of 96.80% for AD vs. healthy controls [44]. These high performances are likely attributable to the more precise separation of the most distinct diagnostic groups (AD vs. HC).
In contrast, slightly lower accuracies were observed in identifying more intermediate stages, such as early versus late MCI. Another graph-based fMRI study aimed at classifying AD, MCI, and healthy controls combined SVM with a LASSO feature selection technique and showed high classification accuracy [45]. All the models in this section demonstrate impressive accuracy rates for either AD classification or AD/NC differentiation, outperforming more traditional technologies [40].
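As an illustration of the supervised SVM workflow described above, the following scikit-learn sketch trains a linear SVM on synthetic feature vectors; the data, feature dimensions, and class means are hypothetical stand-ins for regional imaging features, not from any cited study:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Hypothetical stand-in for labeled imaging features (e.g., regional
# volumes or SUVRs); NOT data from any study cited in this review.
rng = np.random.default_rng(42)
n = 100
X_ad = rng.normal(loc=-1.0, size=(n, 5))  # "AD-like" feature vectors
X_hc = rng.normal(loc=+1.0, size=(n, 5))  # "control-like" feature vectors
X = np.vstack([X_ad, X_hc])
y = np.array([1] * n + [0] * n)           # 1 = AD, 0 = healthy control

# Linear SVM with 5-fold cross-validated accuracy
clf = SVC(kernel="linear", C=1.0)
acc = cross_val_score(clf, X, y, cv=5).mean()
```

Well-separated toy classes yield near-perfect cross-validated accuracy, which mirrors why the AD-vs-HC contrasts above score so much higher than the subtler MCI stage distinctions.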
Logistic regression (LR) is an ML tool that uses a sigmoid-shaped curve to relate the input to the probability of a specific output [46]. This property was utilized by Van Loon et al. in 2022, who introduced the StaPLR (stacked penalized logistic regression) method for automatic selection of the most significant views of sMRI, diffusion-weighted imaging (DWI) MRI, and fMRI for AD classification, yielding a mean AUC of 0.942 with the 3-level (hierarchical) StaPLR compared with a mean AUC of 0.848 with elastic net regression [47].
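The sigmoid curve at the heart of logistic regression maps any linear combination of features to a probability between 0 and 1; a brief illustration (the feature interpretation in the comment is hypothetical):

```python
import numpy as np

def sigmoid(z):
    """Logistic function: maps a linear score to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# A linear combination of (hypothetical) imaging features, w.x + b, is
# squashed into P(class = AD): a score of 0 gives 0.5, and large positive
# scores approach 1.
print(sigmoid(0.0))            # 0.5
print(round(sigmoid(4.0), 3))  # 0.982
```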
Decision tree (DT) and its extended form, random forest (RF), are supervised ML tools that classify data by recursively splitting it into categories, starting from a root node. RF accomplishes this by producing multiple DTs whose outputs are combined to predict a class [45]. Various studies have developed RF models for AD classification [34,48,49,50]. A fusion study of Aβ PET and sMRI by Bao YW et al. in 2024 showed improved AD classification when both modalities were combined to train the random forest model (AUC 0.89 vs. 0.71) [49]. RF has also been applied in the feature selection procedure, combined with SVM, for AD classification based on statistics and volumetry. Keles et al. employed RF as the classifier, with which all feature selection tools applied (BABC, BPSO, BGWO, and BDE) achieved their highest accuracies: 0.863, 0.892, 0.905, and 0.893, respectively [51]. An interesting investigation by Song et al., 2021 [50] compared RF, SVM, multi-layer perceptron (MLP), and convolutional neural networks (CNNs) for AD classification and biomarker detection using 63, 29, and 22 features. All models demonstrated high accuracy with 63 features. RF, however, showed the smallest reduction in accuracy when only 22 features were used (−3.8%, compared with −4.0% and −7.0% for MLP and CNN, respectively). These findings highlight random forest’s consistent robustness across feature sets and modalities, making it a competitive and reliable classifier for Alzheimer’s disease, especially in comparison with models such as SVM, MLP, and CNN [50].
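The feature-set robustness reported for random forests can be sketched with scikit-learn on synthetic tabular features; the data and the informative-feature construction are assumptions made purely for illustration, not a reproduction of any cited experiment:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Toy tabular features: only the first 5 of 20 columns carry signal.
# Entirely synthetic; not data from the cited studies.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
acc_full = cross_val_score(rf, X, y, cv=5).mean()            # all 20 features
acc_reduced = cross_val_score(rf, X[:, :5], y, cv=5).mean()  # informative 5 only
```

Because each tree samples features at every split, the ensemble degrades gracefully when noisy or redundant columns are added or pruned, which is the robustness Song et al. observed across their 63-, 29-, and 22-feature settings.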

4.2.2. Deep Learning in AD Diagnosis and Classification

Regarding deep learning tools, the most frequently referenced techniques in the articles reviewed were convolutional neural networks (CNNs), recurrent neural networks (RNNs), autoencoders, and generative adversarial networks (GANs). Most of the studies aimed at AD diagnosis and classification employed CNN-based tools.
A CNN analyzes input data structured as a series of arrays (for instance, medical images), forming a complex network of interconnected layers (input, hidden, and output layers). These layers apply convolutional filters, small matrices that slide across the input to detect meaningful patterns such as edges or textures, enabling the model to extract and learn relevant features for classification [52]. Multiple studies have applied CNN models to detect AD from multimodal imaging, achieving high accuracy. Table 2 presents a selection of top-performing AI models reported in high-impact studies. For conciseness, the most frequently used CNN tools for diagnosis and classification are subdivided and reviewed separately below.
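The sliding-filter operation underlying every CNN layer can be written in a few lines. This NumPy sketch (toy image and kernel, purely illustrative) shows how a hand-crafted edge filter responds to an intensity boundary, the kind of low-level feature a trained CNN learns automatically:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution as CNN layers implement it (cross-correlation):
    slide the filter over the image and take the elementwise
    product-and-sum at every position (stride 1, no padding)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal-gradient filter fires exactly at the dark-to-bright boundary.
img = np.zeros((5, 5))
img[:, 3:] = 1.0                 # dark left half, bright right half
edge = np.array([[-1.0, 1.0]])   # 1x2 difference kernel
resp = conv2d(img, edge)         # nonzero only at the boundary column
```

Stacking many such learned filters, interleaved with nonlinearities and pooling, is what lets the CNN architectures below build up from edges to anatomically meaningful patterns.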
Visual Geometry Group Network (VGGNet) is a deep CNN widely applied in AD research [20,58,59], as it reduces the error rate by using small convolutional kernels and increased network depth [60]. Kim et al., 2023 [20] proposed a highly accurate model by combining VGGNet with a 1D convolutional neural network that extracts information about the brain’s contour, particularly the boundaries and shape patterns of cortical and subcortical regions. Incorporating VGGNet enhanced the existing model’s performance (accuracy of 0.986), allowing more precise AD classification by measuring the shape of the patient’s brain and outperforming traditional tools such as VGG-16, VGG-19, and AlexNet in precision and accuracy. The highest accuracy and precision values were achieved with a 256 × 256 input size [20]. An additional study by Mujahid et al. in 2023 developed a highly accurate ensemble model by combining VGG-16 with EfficientNet-B2, achieving significant improvements in early AD diagnosis [58].
ResNet is a distinct CNN model with multiple layers that excels at classifying inputs by introducing residual connections, reducing computational complexity [60,61]. ResNet has increasingly gained attention in AD classification and early disease detection, with a variety of ResNet models being used [21,26,53,62,63,64,65]. An illustrative case is the study of Odusami et al. in 2021, where a ResNet18 model was employed for AD classification in fMRI, achieving strong performance and high accuracy levels (99.99% accuracy) in differentiating early MCI from AD [21].
DenseNet is a CNN model introduced to maximize information flow between layers by employing dense feed-forward connections: each layer receives the concatenated feature maps of all preceding layers [60]. The DenseNet model has been utilized for feature extraction, automating the procedure of AD diagnosis [53,66]. A comparative study of deep CNN models, in particular DenseNet, ResNet, and EfficientNet, was conducted by Carcagnì et al., demonstrating better performance of very deep ResNet and DenseNet variants than the shallow versions of VGG and ResNet, with a 7% increase in accuracy for detecting AD on MRI [67]. A valuable study by Sharma et al. [54] investigated the performance of a hybrid AI model for AD diagnosis. They applied transfer learning, employing DenseNet-121 and DenseNet-201 for feature extraction, combined with machine learning classifiers, achieving an accuracy of 91.75% and a specificity of 96.5% [54].
Beyond VGG, DenseNet, and ResNet, various other CNN models have been developed and have contributed to research on the diagnosis and classification of AD [14,29,41,57,68,69,70,71,72]. For instance, a Dementia Network (DemNet) tool was applied to AD staging from MRI, with an accuracy of 95.23% and an AUC of 0.97 [57]. AlzheimerNet has also been utilized as a fine-tuned classifier, achieving high accuracy and outperforming other traditional AD classification tools [68]. LeNet, one of the first CNN architectures, incorporates max-pooling layers to reduce data complexity by discarding low-value elements [73]. Hazarika et al. modified and applied this model in 2021 for AD classification, yielding an accuracy of 96.64% [69].
Unlike CNNs, recurrent neural networks (RNNs) are specifically designed to capture temporal dependencies in data, enabling practical sequential analysis and prediction of time-dependent variables [73]. In 2024, Mahim et al. integrated gated RNNs with a vision transformer (ViT), leveraging the capability of gated RNNs to enhance current image processing by incorporating information from previously analyzed data [74,75]. The study demonstrated a high performance of the merged technique (99.69% for binary classification) for detecting and classifying AD from MRI.
Autoencoders are unsupervised learning tools that efficiently condense the information in an image and then recover the input data while maintaining its core attributes [76]. Al-Otaibi et al. in 2024 presented a dual-attention convolutional autoencoder technique, which demonstrated high accuracy (99.02%) in real-time AD recognition using MRI features [77]. A study based on fMRI employed a DL tool to efficiently discriminate normal aging from AD progression, demonstrating a specialized autoencoder network with excellent performance [78].
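The compress-then-reconstruct principle of an autoencoder can be illustrated with its simplest linear analogue, a truncated SVD projection; the toy data below are an assumption for illustration, and deep autoencoders replace these linear maps with stacked nonlinear layers:

```python
import numpy as np

# Toy data with 2 true underlying factors embedded in 10 dimensions.
rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing              # rank-2 data matrix

# "Encoder"/"decoder" as truncated SVD projections onto k components.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
code = X @ Vt[:k].T              # encode: 10-D sample -> 2-D bottleneck
X_hat = code @ Vt[:k]            # decode: 2-D code -> 10-D reconstruction

rel_err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

When the bottleneck matches the data's intrinsic dimensionality, reconstruction is near-perfect; training a deep autoencoder minimizes exactly this reconstruction error, and the learned bottleneck codes serve as compact features for downstream AD classification.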
GANs have contributed to medical image processing by generating new images with two neural networks trained in tandem: one network generates the image, while the other discriminates its features. This design allows the generation of new data and adaptation to shifting domains [79]. A study published in 2023 introduced a Loop-Based GAN for Brain Network (BNLoop-GAN) model, which aimed to uncover the distribution of the underlying brain networks using a set of different tools, including conditional generation. Successful discrimination between healthy controls and AD patients was achieved using resting-state fMRI and structural MRI (sMRI), with a sensitivity of 81.8% and a specificity of 84.9% on multimodal brain networks, outperforming other tested models [55]. An alternative application of generative adversarial networks in MRI was demonstrated by Chui et al., where CNN and transfer learning (TL) were introduced to improve classification accuracy and incorporate data from various datasets. GANs were subsequently used to augment less frequently occurring data, enabling the model to achieve higher accuracy in detecting AD [80].

4.3. Prediction/Prognosis

AI’s contribution to Alzheimer’s research is evolving beyond disease diagnosis, playing a growing role in early detection and accurate prognosis of disease progression. Various AI models and novel techniques have been developed to precisely detect MRI/PET biomarkers for accurate disease tracking. Table 3 summarizes the pathological features and associated biomarkers commonly analyzed in AI-driven Alzheimer’s disease studies. Longitudinal studies have enriched this area by enabling models to track volumetric and metabolic changes in key regions using MRI and PET, respectively, while integrating these modalities with molecular and clinical data. AI-driven techniques, such as linear mixed-effects analysis, facilitate this integration and analysis [81]. Temporal observations of disease progression through fusion of imaging alterations with clinical cognitive scales allow a more comprehensive view of AD.
Forecasting Alzheimer’s disease progression has been a focus of recent years, as detecting early signs of progression could be life-changing for patients by enabling earlier intervention. A recurrent neural network (RNN) model using long short-term memory (LSTM) was employed by Aqeel et al. in 2022 to predict neuropsychological and MRI biomarkers over time, aiming to distinguish AD from MCI based on the predictions [22]. Khalid et al. designed a feed-forward neural network that combines aspects of GoogLeNet and DenseNet-121 to detect Alzheimer’s disease and model its progression trajectory, achieving an accuracy of 99.7% with an AUC of 0.99 [23]. Two other studies proposed deep learning tools for classifying MRI images (sMRI and fMRI) to identify dementia and stage it by severity [36,84]. Both revealed comparable results, achieving more than 80% accuracy, with Noh et al. [36] noting the need for more generalized testing of their proposed tool.
Several AI tools have been developed and extensively explored for predicting the development of AD from mild cognitive impairment (MCI) [85,86,87,88]. A radiomics-based feature study on PET was conducted by Peng et al. in 2023 [85], exploring the role of white matter as a predictive component of MCI-to-AD progression. The study integrated PET-derived radiomics features with clinical assessment scales such as the Clinical Dementia Rating (CDR) and the Alzheimer’s Disease Assessment Scale (ADAS); applying multivariate logistic regression (an ML tool), it achieved high sensitivity (87%) and specificity (78%) in predicting progression from MCI to AD, and a hazard ratio with a 95% confidence interval was computed in the clinical evaluation of the model [85]. A further investigation of MCI-to-AD progression was presented by Lin et al., who used an extreme learning machine (ELM) to grade five different modalities, yielding excellent performance in predicting disease progression [86]. A different approach to detecting MCI-to-AD conversion was proposed by Fakoya et al. in 2024 [87], aiming to overcome two main barriers: the complexity of processing 3D MRI and PET data, and combining both modalities in a way that retains their individual visual information. This study demonstrated a CNN model with high accuracy (94.0%) that integrates slices from both MRI and PET scans, preserving the unique features of each modality while reducing processing time thanks to the model’s streamlined architecture [87].
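For readers less familiar with how such performance figures are derived, sensitivity and specificity follow directly from a model's confusion matrix on a held-out set (a generic sketch with toy labels, not the code of any reviewed study):

```python
def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity (true-positive rate) and specificity
    (true-negative rate) from binary labels:
    1 = progressed from MCI to AD, 0 = stable MCI."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# toy example: 4 converters and 4 stable patients
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)  # 0.75, 0.75
```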
Various studies have investigated AD progression based on imaging biomarkers. Pan et al. in 2023 [56] developed a DL technique called Ensemble 3DCNN, which provides insights into widespread structural alterations in the brain during AD progression; the model generates a score based on the alterations detected in different regions of MRI scans [56]. The trajectory of lesion progression in Alzheimer’s patients was investigated by Kim et al. in 2021 [89], who developed a network that uses an autoencoder to condense MR images from various AD stages into latent vectors and then predicts the latent vector of an image at a target time point [89]. Crystal et al. [90] built on existing brain-age prediction models to develop an ML technique that predicts the age of healthy individuals from FLAIR imaging features. They then expressed the predicted-versus-actual age difference as an estimate called BrainAGE, which was applied as a marker in the longitudinal analysis [90].
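The BrainAGE marker itself is arithmetically simple: it is the signed gap between model-predicted brain age and chronological age, tracked over visits (a hedged sketch with invented numbers; variable names are illustrative and not taken from [90]):

```python
def brain_age_gap(predicted_ages, chronological_ages):
    """BrainAGE: predicted minus actual age; positive values suggest
    the brain appears older than the subject's chronological age."""
    return [p - a for p, a in zip(predicted_ages, chronological_ages)]

# toy longitudinal series for one subject (three visits)
predicted = [72.5, 74.8, 77.9]   # brain age estimated from imaging
actual    = [70.0, 71.0, 72.0]   # chronological age at each visit
gaps = brain_age_gap(predicted, actual)  # gap widens across visits
```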
Amyloid-beta plaques and tau tangles are key hallmarks of Alzheimer’s disease and are often used as biomarkers in AI models. These pathological changes are widely targeted in medical imaging, especially in tools developed to predict Alzheimer’s disease progression. Wang et al. in 2024 developed a multivariable linear support vector regression that uses tau tracer-based PET images to predict the brain age of healthy individuals [83]. A study by Alongi et al. demonstrated a radiomics analysis of features extracted from 18F-FDG PET images with an ML tool to predict AD occurrence; clinical assessments and amyloid-based PET scans were used to compare the results [82]. Alternatively, Chattopadhyay et al. in their 2024 study [91] aimed to predict the presence of Aβ plaques using T1-weighted MR images. Multiple DL models were developed and examined, demonstrating a promising approach for predicting Alzheimer’s pathology in MCI patients [91].
In recent years, the application of AI to predict cognitive scores from MRI data has been increasingly explored and refined. Habuza et al., 2022 [92] proposed a regression-based convolutional neural network model to predict the cognition level of normal controls from MR images. The approach was subsequently applied to MCI patients, revealing a significant divergence between the normative aging model and cognitively impaired patients; the model discriminated between normal individuals and MCI patients with an AUC of 99.57% [92]. Liang et al. conducted a related study, utilizing a multi-task learning (MTL) tool to predict cognitive status based on structural associations [93].
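For context on the AUC figures quoted throughout this section, the area under the ROC curve can be computed non-parametrically as the probability that a randomly chosen patient is scored higher than a randomly chosen control (a generic sketch with toy scores, unrelated to the reviewed implementations):

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of the area under the ROC curve:
    the fraction of (patient, control) pairs ranked correctly,
    counting ties as half a correct ranking."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# toy model scores: MCI patients vs. normal controls
mci_scores     = [0.9, 0.8, 0.75, 0.6]
control_scores = [0.4, 0.3, 0.55, 0.2]
print(auc(mci_scores, control_scores))  # 1.0 (perfect separation)
```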

4.4. Multimodal Fusion and Clinical Integration

Multimodal fusion in AD diagnosis combines data from multiple sources such as MRI, PET, cerebrospinal fluid (CSF) biomarkers, and cognitive scores to improve diagnostic accuracy, staging, and prognosis [24] (Figure 5). Each modality captures different biological substrates of AD, with MRI providing detailed anatomical information and FDG-PET reflecting glucose metabolism [25]. In addition, amyloid and tau PET visualize pathological deposition patterns. The complementary nature of these modalities frequently allows multimodal models to outperform unimodal ones, especially in detecting early stages of dementia and predicting disease progression [94,95].
However, such early fusion approaches may introduce redundancy and imbalance, particularly when the combined modalities differ in resolution, scale, or feature count. These disparities can lead to the dominance of one modality or the dilution of critical information. Zhang et al. proposed a discrete cosine transform (DCT)-based convolutional sparse representation framework to address these limitations, extracting compact and informative spatial-frequency features from MRI and PET before fusion [96].
By contrast, middle fusion techniques integrate information at deeper network stages, after each modality undergoes separate feature extraction. This design helps preserve richer intermodal interactions and minimizes interference during early processing. Kim et al. [24] demonstrated this approach using a dual-path CNN with shared-weight convolutions and depth-wise separable blocks to process FDG-PET, amyloid PET, tau PET, and structural MRI. Their architecture merged features in a shared latent space and achieved high balanced accuracies of 100% for AD vs. CN and 76% for MCI vs. CN, with robust performance for stable vs. progressive MCI [24].
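The contrast between early and middle fusion can be illustrated schematically (a minimal sketch using random stand-in feature vectors; the real pipelines in [24,96] learn these projections from full images rather than using fixed random ones):

```python
import numpy as np

rng = np.random.default_rng(0)

# made-up per-modality feature vectors for one subject
mri_feat = rng.normal(size=64)   # e.g., structural MRI descriptors
pet_feat = rng.normal(size=16)   # e.g., FDG-PET descriptors

# Early fusion: concatenate raw features before any modelling.
# The larger MRI vector can dominate unless features are rescaled.
early = np.concatenate([mri_feat, pet_feat])       # shape (80,)

# Middle fusion: project each modality into a shared latent space
# first (fixed random projections here stand in for learned branch
# networks), then merge the balanced representations.
w_mri = rng.normal(size=(8, 64))
w_pet = rng.normal(size=(8, 16))
latent_mri = np.tanh(w_mri @ mri_feat)
latent_pet = np.tanh(w_pet @ pet_feat)
middle = np.concatenate([latent_mri, latent_pet])  # shape (16,)
```

The design point is that middle fusion gives both modalities an equal-sized footprint in the merged representation regardless of their raw dimensionality.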
Some studies applied multi-kernel learning and ensemble classifiers to combine imaging with clinical variables such as MMSE, CDR, and CSF biomarkers. Chiu et al. employed support vector machines on composite features from imaging, cognitive assessments, and demographics to differentiate SCD, MCI, and AD, achieving accurate classification [25]. These classification accuracy findings underscore the clinical utility of integrating structured patient data with imaging features for early-stage detection.
Attention-based fusion models have also emerged to emphasize intermodal relationships. For instance, Huang et al. designed a voxel-wise correlation matrix between MRI and FDG-PET to guide an attention module that fused metabolic and structural signals, maximizing AD classification performance [97]. Similarly, multi-branch neural networks with shared and modality-specific pathways have integrated imaging and cognitive data to improve stage-specific prediction across the AD spectrum [95].
Region-of-interest (ROI)-based fusion also remains popular, targeting AD-susceptible areas such as the hippocampus, entorhinal cortex, and cingulate cortex. This strategy improves model interpretability and reduces computational burden. Several studies have used CNN-based multi-atlas segmentation to restrict analysis to these critical ROIs [24,97,98].
From a clinical standpoint, several proposed models have been validated using large-scale public datasets such as ADNI and OASIS. Some studies have even incorporated real-world data from memory clinics [59,94,96], demonstrating generalizability and clinical relevance. Lightweight architectures like extreme learning machines (ELMs) and attention-guided multi-branch CNNs have shown promise for integration into fast-paced clinical workflows [96,99].
Despite these advancements, widespread clinical adoption is hindered by cross-center heterogeneity, inconsistent acquisition protocols, and a lack of standardized model pipelines. Recent studies have introduced domain adaptation layers, modality-specific normalization, and attention-based calibration blocks to overcome these challenges, improve reproducibility, and reduce bias [59,95,100].
Explainability remains another primary focus. Visual tools such as saliency maps and Grad-CAM are increasingly embedded within multimodal frameworks to provide transparency and foster clinician trust in AI-based decision support systems [1,99,101].

4.5. Emerging Trends and Future Directions

Emerging trends in AI for AD neuroimaging are focusing on making AI models more applicable, interpretable, and clinically useful. As more neuroimaging data is gathered, AI’s role in simplifying analysis and drawing meaningful conclusions is becoming more critical (Figure 6).
One of the main challenges facing AI in healthcare settings is the “black box” nature of most deep learning algorithms. Explainable AI (XAI) is, therefore, a fundamental research area. Amoroso et al., in 2023, emphasized the importance of XAI, showing that Shapley values, which fairly distribute credit for a model’s output among its features by evaluating each feature’s impact across all possible feature combinations, can help us understand how AI models make their predictions [27]. This added transparency is crucial for building trust with clinicians and researchers, as it allows them to identify which brain regions and imaging features are most significant in characterizing AD. XAI helps not only to validate AI’s findings but also to uncover new insights into the underlying mechanisms of the disease. Among the 46 primary studies analyzed in this review, 8 explicitly integrated explainability mechanisms into their model architectures [21,50,53,69,71,75,84,102]. These additions improve model transparency and accountability, which are critical for clinical translation, regulatory approval, and clinician trust in AI-assisted decision-making.
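The Shapley principle can be made concrete with a minimal sketch (a toy additive value function with invented imaging-feature names, not the imaging models of [27]): each feature's contribution is its marginal effect averaged over all orders in which features could be added.

```python
from itertools import permutations

def shapley_values(features, value):
    """Exact Shapley values: average marginal contribution of each
    feature over all orderings. Exhaustive enumeration is feasible
    only for small feature sets; SHAP-style tools approximate this."""
    phi = {f: 0.0 for f in features}
    orders = list(permutations(features))
    for order in orders:
        included = set()
        for f in order:
            before = value(frozenset(included))
            included.add(f)
            phi[f] += value(frozenset(included)) - before
    return {f: total / len(orders) for f, total in phi.items()}

# toy additive "model output" over imaging features (illustrative only)
contrib = {"hippocampal_volume": 0.5, "temporal_SUVR": 0.3, "ventricle_size": 0.2}
def value(subset):
    return sum(contrib[f] for f in subset)

phi = shapley_values(list(contrib), value)
# for an additive model, each Shapley value equals its own contribution,
# and the values sum to the full model output (efficiency property)
```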
Another important initiative is reducing dependence on contrast agents in MRI. One study showed that a deep learning model could produce contrast-equivalent information from non-contrast MRI [26]. This development addresses safety issues linked to contrast agents, particularly gadolinium retention in the brain after repeated MRI use, as noted by regulatory agencies, as well as the rising costs and complexity of imaging procedures. By enabling informative non-contrast MRI, AI enhances the accessibility and potential clinical benefit of advanced neuroimaging analysis.
Higher efficiency and standardization in Alzheimer’s disease quantification are also needed. Yamao et al., in 2024, developed a deep learning method to automate the computation of the Centiloid scale from amyloid PET images [102]. The Centiloid scale is key in standardizing amyloid quantification across studies, imaging radiotracers, and equipment. By reducing manual effort and variability, the automated approach enhances the reproducibility and practicality of Centiloid-based analysis in clinical and research settings.
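In its standard formulation, the Centiloid scale linearly rescales a tracer- and pipeline-specific amyloid SUVR so that 0 anchors to the mean of young healthy controls and 100 to the mean of typical AD patients; deriving and applying those anchors is what automated pipelines such as [102] streamline. A sketch of the anchoring arithmetic (the anchor values below are illustrative, not published calibration constants):

```python
def centiloid(suvr, suvr_yc_mean, suvr_ad_mean):
    """Linear Centiloid transform: 0 = young-control mean SUVR,
    100 = typical-AD mean SUVR, for a given tracer/pipeline."""
    return 100.0 * (suvr - suvr_yc_mean) / (suvr_ad_mean - suvr_yc_mean)

# illustrative anchors for one hypothetical tracer/pipeline
YC_MEAN, AD_MEAN = 1.05, 2.05

print(centiloid(1.05, YC_MEAN, AD_MEAN))  # 0.0
print(centiloid(1.55, YC_MEAN, AD_MEAN))  # ~50
```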
Generative models are also proving valuable in AD neuroimaging. A recent study explored various generative models for synthetic MRI data, highlighting their potential to augment limited datasets [103]. Similarly, another study showed how super-resolved structural MRI can improve the detection of mild cognitive impairment [104]. The ability to improve image resolution or generate synthetic data mitigates the problem of limited data in medical imaging, which ultimately enhances the reliability and applicability of AI models.
In parallel, generative adversarial networks (GANs) have emerged as powerful tools in positron emission tomography (PET) imaging. GAN-based frameworks have been developed to perform super-resolution reconstruction, denoising of low-dose PET scans, and even cross-modality synthesis, generating pseudo-PET images from MRI or CT inputs. Such applications are particularly valuable in neurodegenerative diseases, where concerns about frequent imaging and radiation exposure make non-invasive, low-dose, or synthetic imaging alternatives especially important. For instance, GANs can simulate standard-dose PET scans from ultra-low-dose acquisitions, reducing patient exposure without compromising diagnostic utility. Additionally, cross-domain synthesis methods leveraging GANs offer new opportunities in multimodal integration, improving lesion detectability and disease staging in Alzheimer’s disease and related dementias [105]. These innovations support the goal of making advanced imaging more accessible, cost-effective, and clinically applicable across resource-limited settings.
Another critical factor is addressing the variability in data collection across different locations and timeframes. A study in 2024 showed that deep learning techniques could reduce the time of harmonizing PET/MR data from various scanners [106]. This is vital for longitudinal studies, multi-center collaborations, and early-phase clinical trials evaluating diagnostic or therapeutic agents. Harmonization ensures that imaging biomarkers remain consistent and comparable across different platforms, enhancing the reliability of AI-based analyses and enabling more accurate interpretation of treatment effects and disease progression in heterogeneous datasets.
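As a simplified illustration of the harmonization idea (a per-site location-scale sketch in the spirit of ComBat-style methods, not the deep learning approach of [106]; the readings are invented), measurements from each scanner can be shifted and rescaled toward a common reference distribution:

```python
import statistics

def harmonize(values_by_site, ref_mean=0.0, ref_sd=1.0):
    """Align each site's measurements to a reference mean/SD via a
    per-site location-scale transform (z-score, then rescale)."""
    out = {}
    for site, vals in values_by_site.items():
        m = statistics.mean(vals)
        s = statistics.stdev(vals)
        out[site] = [ref_mean + ref_sd * (v - m) / s for v in vals]
    return out

# toy SUVR readings from two scanners with different offsets/scales
raw = {"scanner_A": [1.2, 1.4, 1.6], "scanner_B": [2.2, 2.6, 3.0]}
harmonized = harmonize(raw)
# after harmonization, both sites share the reference mean and SD
```

Note that this naive transform would also erase genuine biological differences between sites; practical harmonization methods preserve covariate effects (age, diagnosis) while removing scanner effects, which is where learned approaches add value.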
Finally, Liu et al. (2022) made a significant contribution by developing a recognition network that combines a spatial transformation attention mechanism with multiple phantom convolutions [26]. Such architectural advances enhance AI’s capacity to detect the small structural changes in MRI that are linked to AD.

4.5.1. Clinical Importance

The role of artificial intelligence in diagnosing, classifying, and predicting AD has been extensively examined, highlighting the robustness and high accuracy of these techniques. From a clinician’s perspective, these technological advancements, particularly machine learning (ML) and deep learning (DL) models of the Alzheimer’s disease trajectory, will enable radiologists to interpret extensive, complex imaging data with greater precision and efficiency. As a result, diagnostic accuracy should improve, reducing misdiagnosed dementia cases and streamlining daily clinical workflow. The integration of multimodal data is also of high clinical importance in Alzheimer’s disease classification: MRI and PET scans, fluid biomarkers, and clinical assessment scales each provide valuable information on disease progression, and CNN models [24] and other ML/DL tools [25,97] facilitate this data fusion, supporting a more holistic approach to clinical decision-making. An alternative application of AI tools in clinical and research settings is the training of medical residents and research interns in neuroimaging of neurodegenerative diseases. This foundational education ensures a new generation of scientists and clinicians is equipped to contribute effectively to the evolving field of neuroimaging.
However, some key issues must be resolved before AI models can be incorporated into clinical practice. A significant consideration involves ethical concerns, which are fundamental to the responsible deployment of AI in Alzheimer’s disease diagnosis. These include ensuring data privacy, maintaining patient autonomy, implementing robust data security, and obtaining informed consent for using personal imaging data in algorithmic analyses. Patients must understand how their data will be processed and retain control over its use in clinical and research contexts.
In parallel, regulatory oversight is evolving to address the unique challenges AI tools pose in healthcare. Agencies such as the U.S. Food and Drug Administration (FDA) and the European Medicines Agency (EMA) increasingly require transparency, algorithmic explainability, and rigorous clinical validation before these tools can be approved.
Finally, interpretability remains one of the most critical factors determining whether a model is suitable for integration into clinical workflows. Clinicians must be able to understand and trust the rationale behind AI-driven decisions. This need for transparency has driven the rise in explainable AI (XAI) as a central trend in developing diagnostic support tools.

4.5.2. Limitations

This review has some limitations that are important to consider when interpreting its findings. First, differences in imaging protocols, sample sizes, and AI model architectures across the included studies make direct comparisons difficult. Many studies used small, single-center datasets and did not validate externally, which limits how well their results apply broadly. In some cases, performance metrics like AUC, sensitivity, and specificity were reported without confidence intervals or statistical comparisons, making these outcomes less clear and reproducible.
While some studies reported diagnostic accuracy over 95% (Table 4), these results might be affected by overfitting, data leakage, or the lack of standardized validation procedures. The variety of cross-validation methods (e.g., k-fold, leave-one-out) and the absence of benchmarking against established clinical tools further hinder translating these models into clinical practice.
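For reference, the cross-validation schemes mentioned above differ only in how the data are partitioned; leave-one-out is k-fold with one sample per fold (a generic sketch using contiguous, unshuffled folds, whereas real studies typically shuffle or stratify):

```python
def k_fold_indices(n, k):
    """Partition indices 0..n-1 into k contiguous validation folds;
    leave-one-out cross-validation is the special case k == n."""
    folds = []
    base, extra = divmod(n, k)
    start = 0
    for i in range(k):
        size = base + (1 if i < extra else 0)  # spread the remainder
        folds.append(list(range(start, start + size)))
        start += size
    return folds

print(k_fold_indices(6, 3))       # [[0, 1], [2, 3], [4, 5]]
print(len(k_fold_indices(5, 5)))  # 5 folds of one sample each (LOOCV)
```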
This review mainly focuses on PET and MRI modalities. It does not cover imaging techniques such as optical imaging, digital pathology, or non-imaging data such as genomics and fluid biomarkers, which could enhance imaging models in future applications.
Although key studies were thematically grouped and critically summarized, this review does not include a formal meta-analysis or systematic quality assessment. Though outside this review’s scope, these methods could provide more detailed comparative insights. For example, a recent meta-analysis of 18 studies using deep learning on MRI for AD/MCI reported a pooled sensitivity of 0.84, a specificity of 0.86, and an AUROC of 0.92 [107]. Another systematic review of 101 structural MRI studies showed significant variation based on dataset, model architecture, and validation strategy, highlighting the field’s diversity and the need for standardization [108]. A relevant meta-analysis conducted by Sun Y. et al. in 2025 [109] highlighted the strong performance of DL tools for AD diagnosis using PET, including 36 studies and reporting a pooled AUC of 98%. The same study noted the need for more standardized external validation and larger evaluation samples to improve the robustness and reproducibility of these models [109].
Finally, the fast pace of AI development may make some findings quickly outdated. Many studies in this review predate the widespread use of advanced techniques like transformer models, federated learning, and self-supervised pretraining, which are likely to influence future AI models for AD imaging.

4.5.3. Future Directions

Future research should tackle limitations in AI-driven Alzheimer’s disease neuroimaging studies by emphasizing clinical robustness, broader applicability, and strict methodological standards. Efforts should prioritize creating large, multicenter datasets with standardized imaging protocols to address the shortcomings of small-center cohorts. External validation using prospective or multi-institutional data ensures real-world relevance and minimizes overfitting. Consistent evaluation metrics, confidence intervals, and transparent reporting are essential for enhancing reproducibility. Moreover, AI techniques like transformer-based architectures, self-supervised learning, and federated learning offer potential for developing more scalable and privacy-conscious models. Incorporating non-imaging biomarkers, such as genomic and fluid-based data, could improve diagnostic precision and prognostic insights. Lastly, future work should focus on integrating AI tools into clinical practice, assessing their effects on workflow efficiency, clinical decision-making, and patient outcomes.

5. Conclusions

Artificial intelligence has significantly shaped the modern perspective on neuroimaging, with various applications of machine learning and deep learning integrated into radiographic analysis and interpretation of medical images. Initially, machine learning was introduced in neuroimaging as a tool to support the diagnostic process, offering precise and consistent aid in decision-making. As the term suggests, machine learning in imaging is a technology “trained” by humans to identify structures in MRI/PET, quantify the volume of regions of interest, and determine whether a pattern of atrophy or hypometabolism exists. ML can be supervised or unsupervised, depending on the model design. The demand for a technology capable of generating meaningful output from raw input alone highlighted the potential of deep learning, a subfield of machine learning inspired by the human brain that processes data using artificial neural networks. These networks allow the system to autonomously analyze pre-processed data and generate outputs by forming hierarchical representations that support pattern recognition. Feed-forward learning allows DL models to enhance discriminative features while suppressing irrelevant information, which are key processes in their functioning.
Both ML and DL have been leveraged in neuroimaging across all image processing steps. They assist in processing and segmenting key brain areas, provide tools for precisely detecting disease-related patterns in the CNS, and enable classification into stages, contributing to the construction of more reliable predictive models. These technologies have been extensively utilized in Alzheimer’s disease research, aiming to pave the way for novel diagnostic tools that allow more rapid and accurate disease detection during the early stages, or even before the onset of clinical symptoms, thus facilitating prompt intervention.
Numerous models have been developed and presented by research teams, particularly following the FDA’s approval of Tauvid, that aim to disrupt the conventional trajectory of the disease. These techniques have demonstrated high accuracy and excellent performance, each targeting a slightly different aspect of the disease. Although the future of neuroimaging with integrated AI tools appears promising, several key concerns have emerged. External validation is a significant aspect that defines actual performance, as a model must be tested on a different dataset to prove its reproducibility and accuracy. Respecting and ensuring data privacy is another primary concern, as these tools require vast amounts of data to be trained and to achieve consistent results.
The cutting-edge contributions in the field offer considerable potential in Alzheimer’s disease, not only for disease detection and management, but also for outlining future directions in research and clinical application. Despite the numerous considerations, a new “era” in neuroimaging has been introduced, moving toward more personalized medicine, where diseases are managed according to the features detected, classified into distinct stages by integrating imaging modalities with clinical scales, and supported by accurate prognostic markers and predictive models. With continued interdisciplinary innovation, this progress brings us closer to earlier diagnoses, more effective interventions, and improved patient lives.

Author Contributions

Conceptualization, R.C.C. and M.F.G.; methodology, R.C.C.; formal analysis, R.C.C., R.P. and R.I.; investigation, A.W.; resources, A.W.; data curation, R.C.C.; writing—original draft preparation, R.C.C., R.I. and R.P.; writing—review and editing, R.C.C. and M.F.G.; visualization, R.I.; supervision, M.F.G.; project administration, M.F.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable. This narrative review was based solely on analysis of previously published studies and did not involve any human or animal subjects; therefore, ethical approval was not required.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable. All data discussed are derived from publicly available, peer-reviewed publications cited within the manuscript.

Acknowledgments

The authors used BioRender to generate and format figures during the preparation of this manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

AD: Alzheimer’s Disease
AI: Artificial Intelligence
AUC: Area Under the Curve
ANN: Artificial Neural Network
CNN: Convolutional Neural Network
CSF: Cerebrospinal Fluid
DL: Deep Learning
DT: Decision Tree
FDA: Food and Drug Administration
FDG: Fluorodeoxyglucose
GAN: Generative Adversarial Network
LLM: Large Language Model
LSTM: Long Short-Term Memory
ML: Machine Learning
MCI: Mild Cognitive Impairment
MRI: Magnetic Resonance Imaging
MMSE: Mini-Mental State Examination
PET: Positron Emission Tomography
RF: Random Forest
RNN: Recurrent Neural Network
ROI: Region of Interest
SVM: Support Vector Machine
SUVR: Standardized Uptake Value Ratio
XAI: Explainable Artificial Intelligence

Appendix A

Appendix A.1. Pubmed Search Strategy

Date Searched: 2 March 2025
Set # | Concept | Syntax | Results
1 | AI | “artificial intelligence” [MeSH Terms] OR “Neural Networks, Computer” [Mesh] OR “Image Processing, Computer-Assisted” [Mesh] OR “Deep Learning” [Mesh] OR “Machine Learning” [Mesh] OR “Artificial Intelligence” [tw] OR “Artificial Neural Network” [tw] OR “Convolutional Neural Network” [tw] OR “Deep Learning” [tw] OR “Machine Learning” [tw] | 606,889
2 | PET/MRI | “Positron-Emission Tomography” [Mesh] OR “Magnetic Resonance Imaging” [Mesh] OR “Positron Emission Tomography” [tw] OR “PET” [tw] OR “Magnetic Resonance Imaging” [tw] OR MRI [tw] | 921,043
3 | Alzheimer | “Alzheimer Disease” [Mesh] OR Alzheimer [tw] OR Alzheimers [tw] OR Alzheimer’s [tw] OR “Senile Dementia” [tw] OR “Presenile Dementia” [tw] | 227,267
4 | Diagnosis or detection | “Diagnosis” [Mesh] OR “diagnosis” [Subheading] OR diagnoses [tw] OR diagnose [tw] OR diagnosis [tw] OR detect* [tw] | 13,530,301
5 | Combining | #1 AND #2 AND #3 | 4124
6 | Filters | #4 NOT (“animals” [MeSH Terms] NOT “humans” [MeSH Terms]) NOT (“Case Reports” [Publication Type] OR “Editorial” [Publication Type] OR “Review” [Publication Type]) AND “English” [Language] AND (2020:2024 [pdat]) | 1334

Appendix A.2. Embase Search Strategy

Date Searched: 2 April 2025
Set # | Concept | Syntax | Results
1 | AI | ‘artificial intelligence’/exp OR ‘artificial neural network’/exp OR ‘image processing’/exp OR ‘deep learning’/exp OR ‘machine learning’/exp OR ‘Artificial Intelligence’:ti,ab,kw OR ‘Artificial Neural Network’:ti,ab,kw OR ‘Convolutional Neural Network’:ti,ab,kw OR ‘Deep Learning’:ti,ab,kw OR ‘Machine Learning’:ti,ab,kw | 760,426
2 | PET/MRI | | 1,696,145
3 | Alzheimer | ‘Alzheimer disease’/exp OR Alzheimer:ti,ab,kw OR Alzheimers:ti,ab,kw OR ‘Senile Dementia’:ti,ab,kw OR ‘Presenile Dementia’:ti,ab,kw | 337,471
4 | Diagnosis or detection | ‘diagnosis’/exp OR diagnoses:ti,ab,kw OR diagnose:ti,ab,kw OR diagnosis:ti,ab,kw OR detect*:ti,ab,kw | 13,097,987
5 | Combining | #1 AND #2 AND #3 | 5670
6 | Filters | #4 NOT (‘case report’/de OR [editorial]/lim OR [review]/lim) AND [2020–2025]/py AND [english]/lim AND [humans]/lim | 2843

Appendix A.3. Scopus Search Strategy

Date Searched: 2 April 2025
Set # | Concept | Syntax | Results
1 | AI | TITLE-ABS-KEY (“Artificial Intelligence” OR “Artificial Neural Network” OR “Convolutional Neural Network” OR “Deep Learning” OR “Machine Learning”) | 1,881,748
2 | PET/MRI | TITLE-ABS-KEY (“Positron Emission Tomography” OR “PET” OR “Magnetic Resonance Imaging” OR MRI) | 1,506,097
3 | Alzheimer | TITLE-ABS-KEY (alzheimer OR alzheimers OR “Senile Dementia” OR “Presenile Dementia”) | 316,171
4 | Diagnosis or detection | TITLE-ABS-KEY (diagnoses OR diagnose OR diagnosis OR detect*) | 13,530,301
5 | Combining | #1 AND #2 AND #3 | 3612
6 | Filters | #4, date limit 2020–2025, English language | 2848

References

  1. Scheltens, P.; De Strooper, B.; Kivipelto, M.; Holstege, H.; Chételat, G.; Teunissen, C.E.; Cummings, J.; van der Flier, W.M. Alzheimer’s disease. Lancet 2021, 397, 1577–1590. [Google Scholar] [CrossRef]
  2. Monteiro, A.R.; Barbosa, D.J.; Remião, F.; Silva, R. Alzheimer’s disease: Insights and new prospects in disease pathophysiology, biomarkers and disease-modifying drugs. Biochem. Pharmacol. 2023, 211, 115522. [Google Scholar] [CrossRef] [PubMed]
  3. McDade, E.M. Alzheimer Disease. Contin. Lifelong Learn. Neurol. 2022, 28, 648–675. [Google Scholar] [CrossRef]
  4. Lacosta, A.M.; Insua, D.; Badi, H.; Pesini, P.; Sarasa, M. Neurofibrillary Tangles of Aβx-40 in Alzheimer’s Disease Brains. J. Alzheimers Dis. 2017, 58, 661–667. [Google Scholar] [CrossRef]
  5. Ma, C.; Hong, F.; Yang, S. Amyloidosis in Alzheimer’s Disease: Pathogeny, Etiology, and Related Therapeutic Directions. Molecules 2022, 27, 1210. [Google Scholar] [CrossRef]
  6. Otero-Garcia, M.; Mahajani, S.U.; Wakhloo, D.; Tang, W.; Xue, Y.-Q.; Morabito, S.; Pan, J.; Oberhauser, J.; Madira, A.E.; Shakouri, T.; et al. Molecular signatures underlying neurofibrillary tangle susceptibility in Alzheimer’s disease. Neuron 2022, 110, 2929–2948.e8. [Google Scholar] [CrossRef]
  7. Braak, H.; Braak, E. Neuropathological staging of Alzheimer-related changes. Acta Neuropathol. 1991, 82, 239–259. [Google Scholar] [CrossRef]
  8. Bondi, M.W.; Edmonds, E.C.; Salmon, D.P. Alzheimer’s Disease: Past, Present, and Future. J. Int. Neuropsychol. Soc. 2017, 23, 818–831. [Google Scholar] [CrossRef]
  9. Raji, C.A.; Benzinger, T.L.S. The Value of Neuroimaging in Dementia Diagnosis. Contin. Lifelong Learn. Neurol. 2022, 28, 800–821. [Google Scholar] [CrossRef]
  10. Del Sole, A.; Malaspina, S.; Magenta Biasina, A. Magnetic resonance imaging and positron emission tomography in the diagnosis of neurodegenerative dementias. Funct. Neurol. 2016, 31, 205–215. [Google Scholar] [CrossRef]
  11. Masdeu, J.C. Neuroimaging of Diseases Causing Dementia. Neurol. Clin. 2020, 38, 65–94. [Google Scholar] [CrossRef]
  12. Jie, C.V.M.L.; Treyer, V.; Schibli, R.; Mu, L. TauvidTM: The First FDA-Approved PET Tracer for Imaging Tau Pathology in Alzheimer’s Disease. Pharmaceuticals 2021, 14, 110. [Google Scholar] [CrossRef]
  13. Jiang, H.; Cao, P.; Xu, M.; Yang, J.; Zaiane, O. Hi-GCN: A hierarchical graph convolution network for graph embedding learning of brain network and brain disorders prediction. Comput. Biol. Med. 2020, 127, 104096. [Google Scholar] [CrossRef] [PubMed]
  14. Chen, Z.; Mo, X.; Chen, R.; Feng, P.; Li, H. A Reparametrized CNN Model to Distinguish Alzheimer’s Disease Applying Multiple Morphological Metrics and Deep Semantic Features From Structural MRI. Front. Aging Neurosci. 2022, 14, 856391. [Google Scholar] [CrossRef] [PubMed]
  15. Mirkin, S.; Albensi, B.C. Should artificial intelligence be used in conjunction with Neuroimaging in the diagnosis of Alzheimer’s disease? Front. Aging Neurosci. 2023, 15, 1094233. [Google Scholar] [CrossRef]
  16. Institut Curie. Hydrosorb® Versus Control (Water Based Spray) in the Management of Radio-Induced Skin Toxicity: Multicentre Controlled Phase III Randomized Trial. 2016. Available online: https://clinicaltrials.gov/ct2/show/NCT02839473 (accessed on 3 November 2022).
  17. Nenning, K.H.; Langs, G. Machine learning in neuroimaging: From research to clinical practice. Radiologie 2022, 62 (Suppl. 1), S1–S10. [Google Scholar] [CrossRef]
  18. Lyu, J.; Bartlett, P.F.; Nasrallah, F.A.; Tang, X. Toward hippocampal volume measures on ultra-high field magnetic resonance imaging: A comprehensive comparison study between deep learning and conventional approaches. Front. Neurosci. 2023, 17, 1238646. [Google Scholar] [CrossRef]
  19. Bazangani, F.; Richard, F.J.P.; Ghattas, B.; Guedj, E. FDG-PET to T1 Weighted MRI Translation with 3D Elicit Generative Adversarial Network (E-GAN). Sensors 2022, 22, 4640. [Google Scholar] [CrossRef]
  20. Kim, C.M.; Lee, W. Classification of Alzheimer’s Disease Using Ensemble Convolutional Neural Network with LFA Algorithm. IEEE Access 2023, 11, 143004–143015. [Google Scholar] [CrossRef]
  21. Odusami, M.; Maskeliūnas, R.; Damaševičius, R.; Krilavičius, T. Analysis of Features of Alzheimer’s Disease: Detection of Early Stage from Functional Brain Changes in Magnetic Resonance Images Using a Finetuned ResNet18 Network. Diagnostics 2021, 11, 1071. [Google Scholar] [CrossRef]
  22. Aqeel, A.; Hassan, A.; Khan, M.A.; Rehman, S.; Tariq, U.; Kadry, S.; Majumdar, A.; Thinnukool, O. A Long Short-Term Memory Biomarker-Based Prediction Framework for Alzheimer’s Disease. Sensors 2022, 22, 1475. [Google Scholar] [CrossRef]
  23. Khalid, A.; Senan, E.M.; Al-Wagih, K.; Al-Azzam, M.M.A.; Alkhraisha, Z.M. Automatic Analysis of MRI Images for Early Prediction of Alzheimer’s Disease Stages Based on Hybrid Features of CNN and Handcrafted Features. Diagnostics 2023, 13, 1654. [Google Scholar] [CrossRef] [PubMed]
  24. Kim, S.K.; Duong, Q.A.; Gahm, J.K. Multimodal 3D Deep Learning for Early Diagnosis of Alzheimer’s Disease. IEEE Access 2024, 12, 46278–46289. [Google Scholar] [CrossRef]
  25. Chiu, S.I.; Fan, L.Y.; Lin, C.H.; Chen, T.-F.; Lim, W.S.; Jang, J.-S.R.; Chiu, M.-J. Machine Learning-Based Classification of Subjective Cognitive Decline, Mild Cognitive Impairment, and Alzheimer’s Dementia Using Neuroimage and Plasma Biomarkers. ACS Chem. Neurosci. 2022, 13, 3263–3270. [Google Scholar] [CrossRef]
  26. Liu, Y.; Tang, K.; Cai, W.; Chen, A.; Zhou, G.; Li, L.; Liu, R. MPC-STANet: Alzheimer’s Disease Recognition Method Based on Multiple Phantom Convolution and Spatial Transformation Attention Mechanism. Front. Aging Neurosci. 2022, 14, 918462. [Google Scholar] [CrossRef]
  27. Amoroso, N.; Quarto, S.; La Rocca, M.; Tangaro, S.; Monaco, A.; Bellotti, R. An eXplainability Artificial Intelligence approach to brain connectivity in Alzheimer’s disease. Front. Aging Neurosci. 2023, 15, 1238065. [Google Scholar] [CrossRef]
  28. Rao, B.S.; Aparna, M. A Review on Alzheimer’s Disease Through Analysis of MRI Images Using Deep Learning Techniques. IEEE Access 2023, 11, 71542–71556. [Google Scholar] [CrossRef]
  29. Khagi, B.; Kwon, G.R. 3D CNN Design for the Classification of Alzheimer’s Disease Using Brain MRI and PET. IEEE Access 2020, 8, 217830–217847. [Google Scholar] [CrossRef]
  30. Nobakht, S.; Schaeffer, M.; Forkert, N.D.; Nestor, S.; Black, S.E.; Barber, P.; Initiative, T.A.D.N. Combined Atlas and Convolutional Neural Network-Based Segmentation of the Hippocampus from MRI According to the ADNI Harmonized Protocol. Sensors 2021, 21, 2427. [Google Scholar] [CrossRef] [PubMed]
  31. Sun, Z.; Meikle, S.; Calamante, F. CONN-NLM: A Novel CONNectome-Based Non-local Means Filter for PET-MRI Denoising. Front. Neurosci. 2022, 16, 824431. [Google Scholar] [CrossRef]
  32. Tang, Y.; Du, Q.; Wang, J.; Wu, Z.; Li, Y.; Li, M.; Yang, X.; Zheng, J. CCN-CL: A content-noise complementary network with contrastive learning for low-dose computed tomography denoising. Comput. Biol. Med. 2022, 147, 105759. [Google Scholar] [CrossRef]
  33. Yamanakkanavar, N.; Lee, B. Using a Patch-Wise M-Net Convolutional Neural Network for Tissue Segmentation in Brain MRI Images. IEEE Access 2020, 8, 120946–120958. [Google Scholar] [CrossRef]
  34. Yin, T.T.; Cao, M.H.; Yu, J.C.; Shi, T.Y.; Mao, X.H.; Wei, X.Y.; Jia, Z.Z. T1-Weighted Imaging-Based Hippocampal Radiomics in the Diagnosis of Alzheimer’s Disease. Acad. Radiol. 2024, 31, 5183–5192. [Google Scholar] [CrossRef]
  35. Liu, Q.; Zhang, Y.; Guo, L.; Wang, Z. Spatial-temporal data-augmentation-based functional brain network analysis for brain disorders identification. Front. Neurosci. 2023, 17, 1194190. [Google Scholar] [CrossRef]
  36. Noh, J.H.; Kim, J.H.; Yang, H.D. Classification of Alzheimer’s Progression Using fMRI Data. Sensors 2023, 23, 6330. [Google Scholar] [CrossRef]
  37. Brusini, I.; Lindberg, O.; Muehlboeck, J.S.; Smedby, Ö.; Westman, E.; Wang, C. Shape Information Improves the Cross-Cohort Performance of Deep Learning-Based Segmentation of the Hippocampus. Front. Neurosci. 2020, 14, 15. [Google Scholar] [CrossRef] [PubMed]
  38. Cortez, J.; Torres, C.G.; Parraguez, V.H.; De los Reyes, M.; Peralta, O.A. Bovine adipose tissue-derived mesenchymal stem cells self-assemble with testicular cells and integrates and modifies the structure of a testicular organoids. Theriogenology 2024, 215, 259–271. [Google Scholar] [CrossRef]
  39. Zhang, Y.; Jiang, X.; Qiao, L.; Liu, M. Modularity-Guided Functional Brain Network Analysis for Early-Stage Dementia Identification. Front. Neurosci. 2021, 15, 720909. [Google Scholar] [CrossRef]
  40. Jiao, F.; Wang, M.; Sun, X.; Ju, Z.; Lu, J.; Wang, L.; Jiang, J.; Zuo, C. Based on Tau PET Radiomics Analysis for the Classification of Alzheimer’s Disease and Mild Cognitive Impairment. Brain Sci. 2023, 13, 367. [Google Scholar] [CrossRef]
  41. Kaya, M.; Cetin-Kaya, Y. A Novel Deep Learning Architecture Optimization for Multiclass Classification of Alzheimer’s Disease Level. IEEE Access 2024, 12, 46562–46581. [Google Scholar] [CrossRef]
  42. Nuvoli, S.; Bianconi, F.; Rondini, M.; Lazzarato, A.; Marongiu, A.; Fravolini, M.L.; Cascianelli, S.; Amici, S.; Filippi, L.; Spanu, A.; et al. Differential Diagnosis of Alzheimer Disease vs. Mild Cognitive Impairment Based on Left Temporal Lateral Lobe Hypometabolism on 18F-FDG PET/CT and Automated Classifiers. Diagnostics 2022, 12, 2425. [Google Scholar] [CrossRef]
  43. Akramifard, H.; Balafar, M.; Razavi, S.; Ramli, A.R. Emphasis Learning, Features Repetition in Width Instead of Length to Improve Classification Performance: Case Study-Alzheimer’s Disease Diagnosis. Sensors 2020, 20, 941. [Google Scholar] [CrossRef]
  44. Wang, L.; Sheng, J.; Zhang, Q.; Zhou, R.; Li, Z.; Xin, Y. Functional Brain Network Measures for Alzheimer’s Disease Classification. IEEE Access 2023, 11, 111832–111845. [Google Scholar] [CrossRef]
  45. Lama, R.K.; Kwon, G.R. Diagnosis of Alzheimer’s Disease Using Brain Network. Front. Neurosci. 2021, 15, 605115. [Google Scholar] [CrossRef]
  46. Choi, R.Y.; Coyner, A.S.; Kalpathy-Cramer, J.; Chiang, M.F.; Campbell, J.P. Introduction to Machine Learning, Neural Networks, and Deep Learning. Transl. Vis. Sci. Technol. 2020, 9, 14. [Google Scholar] [CrossRef]
  47. van Loon, W.; de Vos, F.; Fokkema, M.; Szabo, B.; Koini, M.; Schmidt, R.; de Rooij, M. Analyzing Hierarchical Multi-View MRI Data with StaPLR: An Application to Alzheimer’s Disease Classification. Front. Neurosci. 2022, 16, 830630. [Google Scholar] [CrossRef]
  48. Khan, Y.F.; Kaushik, B.; Chowdhary, C.L.; Srivastava, G. Ensemble Model for Diagnostic Classification of Alzheimer’s Disease Based on Brain Anatomical Magnetic Resonance Imaging. Diagnostics 2022, 12, 3193. [Google Scholar] [CrossRef]
  49. Bao, Y.W.; Wang, Z.J.; Shea, Y.F.; Chiu, P.K.-C.; Kwan, J.S.; Chan, F.H.-W.; Mak, H.K.-F. Combined Quantitative amyloid-β PET and Structural MRI Features Improve Alzheimer’s Disease Classification in Random Forest Model—A Multicenter Study. Acad. Radiol. 2024, 31, 5154–5163. [Google Scholar] [CrossRef]
  50. Song, M.; Jung, H.; Lee, S.; Kim, D.; Ahn, M. Diagnostic classification and biomarker identification of Alzheimer’s disease with random forest algorithm. Brain Sci. 2021, 11, 453. [Google Scholar] [CrossRef]
  51. Keles, M.K.; Kilic, U. Classification of Brain Volumetric Data to Determine Alzheimer’s Disease Using Artificial Bee Colony Algorithm as Feature Selector. IEEE Access 2022, 10, 82989–83001. [Google Scholar] [CrossRef]
  52. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
  53. Odusami, M.; Maskeliūnas, R.; Damaševičius, R. An Intelligent System for Early Recognition of Alzheimer’s Disease Using Neuroimaging. Sensors 2022, 22, 740. [Google Scholar] [CrossRef]
  54. Sharma, S.; Gupta, S.; Gupta, D.; Altameem, A.; Saudagar, A.K.J.; Poonia, R.C.; Nayak, S.R. HTLML: Hybrid AI Based Model for Detection of Alzheimer’s Disease. Diagnostics 2022, 12, 1833. [Google Scholar] [CrossRef]
  55. Cao, Y.; Kuai, H.; Liang, P.; Pan, J.S.; Yan, J.; Zhong, N. BNLoop-GAN: A multi-loop generative adversarial model on brain network learning to classify Alzheimer’s disease. Front. Neurosci. 2023, 17, 1202382. [Google Scholar] [CrossRef]
  56. Pan, D.; Zeng, A.; Yang, B.; Lai, G.; Hu, B.; Song, X.; Jiang, T.; Alzheimer’s Disease Neuroimaging Initiative (ADNI). Deep Learning for Brain MRI Confirms Patterned Pathological Progression in Alzheimer’s Disease. Adv. Sci. 2023, 10, e2204717. [Google Scholar] [CrossRef] [PubMed]
  57. Murugan, S.; Venkatesan, C.; Sumithra, M.G.; Gao, X.-Z.; Elakkiya, B.; Akila, M.; Manoharan, S. DEMNET: A Deep Learning Model for Early Diagnosis of Alzheimer Diseases and Dementia from MR Images. IEEE Access 2021, 9, 90319–90329. [Google Scholar] [CrossRef]
  58. Mujahid, M.; Rehman, A.; Alam, T.; Alamri, F.S.; Fati, S.M.; Saba, T. An Efficient Ensemble Approach for Alzheimer’s Disease Detection Using an Adaptive Synthetic Technique and Deep Learning. Diagnostics 2023, 13, 2489. [Google Scholar] [CrossRef] [PubMed]
  59. Khan, R.; Akbar, S.; Mehmood, A.; Shahid, F.; Munir, K.; Ilyas, N.; Asif, M.; Zheng, Z. A transfer learning approach for multiclass classification of Alzheimer’s disease using MRI images. Front. Neurosci. 2023, 16, 1050777. [Google Scholar] [CrossRef]
  60. Dhillon, A.; Verma, G.K. Convolutional neural network: A review of models, methodologies and applications to object detection. Prog. Artif. Intell. 2020, 9, 85–112. [Google Scholar] [CrossRef]
  61. Chen, D.; Hu, F.; Nian, G.; Yang, T. Deep Residual Learning for Nonlinear Regression. Entropy 2020, 22, 193. [Google Scholar] [CrossRef]
  62. Li, C.; Wang, Q.; Liu, X.; Hu, B. An Attention-Based CoT-ResNet with Channel Shuffle Mechanism for Classification of Alzheimer’s Disease Levels. Front. Aging Neurosci. 2022, 14, 930584. [Google Scholar] [CrossRef]
  63. Pusparani, Y.; Lin, C.Y.; Jan, Y.K.; Lin, F.-Y.; Liau, B.-Y.; Ardhianto, P.; Farady, I.; Alex, J.S.R.; Aparajeeta, J.; Chao, W.-H.; et al. Diagnosis of Alzheimer’s Disease Using Convolutional Neural Network with Select Slices by Landmark on Hippocampus in MRI Images. IEEE Access 2023, 11, 61688–61697. [Google Scholar] [CrossRef]
  64. Sun, H.; Wang, A.; Wang, W.; Liu, C. An Improved Deep Residual Network Prediction Model for the Early Diagnosis of Alzheimer’s Disease. Sensors 2021, 21, 4182. [Google Scholar] [CrossRef] [PubMed]
  65. AlSaeed, D.; Omar, S.F. Brain MRI Analysis for Alzheimer’s Disease Diagnosis Using CNN-Based Feature Extraction and Machine Learning. Sensors 2022, 22, 2911. [Google Scholar] [CrossRef]
  66. Syed Jamalullah, R.; Mary Gladence, L.; Ahmed, M.A.; Lydia, E.L.; Ishak, M.K.; Hadjouni, M.; Mostafa, S.M. Leveraging Brain MRI for Biomedical Alzheimer’s Disease Diagnosis Using Enhanced Manta Ray Foraging Optimization Based Deep Learning. IEEE Access 2023, 11, 81921–81929. [Google Scholar] [CrossRef]
  67. Carcagnì, P.; Leo, M.; Del Coco, M.; Distante, C.; De Salve, A. Convolution Neural Networks and Self-Attention Learners for Alzheimer Dementia Diagnosis from Brain MRI. Sensors 2023, 23, 1694. [Google Scholar] [CrossRef]
  68. Shamrat, F.M.J.M.; Akter, S.; Azam, S.; Karim, A.; Ghosh, P.; Tasnim, Z.; Hasib, K.M.; De Boer, F.; Ahmed, K. AlzheimerNet: An Effective Deep Learning Based Proposition for Alzheimer’s Disease Stages Classification From Functional Brain Changes in Magnetic Resonance Images. IEEE Access 2023, 11, 16376–16395. [Google Scholar] [CrossRef]
  69. Hazarika, R.A.; Abraham, A.; Kandar, D.; Maji, A.K. An Improved LeNet-Deep Neural Network Model for Alzheimer’s Disease Classification Using Brain Magnetic Resonance Images. IEEE Access 2021, 9, 161194–161207. [Google Scholar] [CrossRef]
  70. Fareed, M.M.S.; Zikria, S.; Ahmed, G.; Din, M.Z.; Mahmood, S.; Aslam, M.; Jillani, S.F.; Moustafa, A. ADD-Net: An Effective Deep Learning Model for Early Detection of Alzheimer Disease in MRI Scans. IEEE Access 2022, 10, 96930–96951. [Google Scholar] [CrossRef]
  71. Sait, A.R.W.; Nagaraj, R. A Feature-Fusion Technique-Based Alzheimer’s Disease Classification Using Magnetic Resonance Imaging. Diagnostics 2024, 14, 2363. [Google Scholar] [CrossRef]
  72. Chabib, C.M.; Hadjileontiadis, L.J.; Shehhi, A.A. DeepCurvMRI: Deep Convolutional Curvelet Transform-Based MRI Approach for Early Detection of Alzheimer’s Disease. IEEE Access 2023, 11, 44650–44659. [Google Scholar] [CrossRef]
  73. Ganokratanaa, T.; Ketcham, M.; Pramkeaw, P. Advancements in Cataract Detection: The Systematic Development of LeNet-Convolutional Neural Network Models. J. Imaging 2023, 9, 197. [Google Scholar] [CrossRef]
  74. Dey, R.; Salem, F.M. Gate-variants of Gated Recurrent Unit (GRU) neural networks. In Proceedings of the 2017 IEEE 60th International Midwest Symposium on Circuits and Systems (MWSCAS), Boston, MA, USA, 6–9 August 2017; pp. 1597–1600. [Google Scholar] [CrossRef]
  75. Mahim, S.M.; Ali, M.S.; Hasan, M.O.; Nafi, A.A.N.; Sadat, A.; Al Hasan, S.; Shareef, B.; Ahsan, M.; Islam, K.; Miah, S.; et al. Unlocking the Potential of XAI for Improved Alzheimer’s Disease Detection and Classification Using a ViT-GRU Model. IEEE Access 2024, 12, 8390–8412. [Google Scholar] [CrossRef]
  76. Zhao, Y.; Guo, Q.; Zhang, Y.; Zheng, J.; Yang, Y.; Du, X.; Feng, H.; Zhang, S. Application of Deep Learning for Prediction of Alzheimer’s Disease in PET/MR Imaging. Bioengineering 2023, 10, 1120. [Google Scholar] [CrossRef]
  77. Al-Otaibi, S.; Mujahid, M.; Khan, A.R.; Nobanee, H.; Alyami, J.; Saba, T. Dual Attention Convolutional AutoEncoder for Diagnosis of Alzheimer’s Disorder in Patients Using Neuroimaging and MRI Features. IEEE Access 2024, 12, 58722–58739. [Google Scholar] [CrossRef]
  78. Guo, H.; Zhang, Y. Resting State fMRI and Improved Deep Learning Algorithm for Earlier Detection of Alzheimer’s Disease. IEEE Access 2020, 8, 115383–115392. [Google Scholar] [CrossRef]
  79. Yi, X.; Walia, E.; Babyn, P. Generative adversarial network in medical imaging: A review. Med. Image Anal. 2019, 58, 101552. [Google Scholar] [CrossRef]
  80. Chui, K.T.; Gupta, B.B.; Alhalabi, W.; Alzahrani, F.S. An MRI Scans-Based Alzheimer’s Disease Detection via Convolutional Neural Network and Transfer Learning. Diagnostics 2022, 12, 1531. [Google Scholar] [CrossRef] [PubMed]
  81. Kale, M.; Wankhede, N.; Pawar, R.; Ballal, S.; Kumawat, R.; Goswami, M.; Khalid, M.; Taksande, B.; Upaganlawar, A.; Umekar, M.; et al. AI-driven innovations in Alzheimer’s disease: Integrating early diagnosis, personalized treatment, and prognostic modelling. Ageing Res. Rev. 2024, 101, 102497. [Google Scholar] [CrossRef]
  82. Alongi, P.; Laudicella, R.; Panasiti, F.; Stefano, A.; Comelli, A.; Giaccone, P.; Arnone, A.; Minutoli, F.; Quartuccio, N.; Cupidi, C.; et al. Radiomics Analysis of Brain [18F]FDG PET/CT to Predict Alzheimer’s Disease in Patients with Amyloid PET Positivity: A Preliminary Report on the Application of SPM Cortical Segmentation, Pyradiomics and Machine-Learning Analysis. Diagnostics 2022, 12, 933. [Google Scholar] [CrossRef] [PubMed]
  83. Wang, M.; Wei, M.; Wang, L.; Song, J.; Rominger, A.; Shi, K.; Jiang, J. Tau Protein Accumulation Trajectory-Based Brain Age Prediction in the Alzheimer’s Disease Continuum. Brain Sci. 2024, 14, 575. [Google Scholar] [CrossRef]
  84. Jain, V.; Nankar, O.; Jerrish, D.J.; Gite, S.; Patil, S.; Kotecha, K. A Novel AI-Based System for Detection and Severity Prediction of Dementia Using MRI. IEEE Access 2021, 9, 154324–154346. [Google Scholar] [CrossRef]
  85. Peng, J.; Wang, W.; Song, Q.; Hou, J.; Jin, H.; Qin, X.; Yuan, Z.; Wei, Y.; Shu, Z. 18F-FDG-PET Radiomics Based on White Matter Predicts The Progression of Mild Cognitive Impairment to Alzheimer Disease: A Machine Learning Study. Acad. Radiol. 2023, 30, 1874–1884. [Google Scholar] [CrossRef]
  86. Lin, W.; Gao, Q.; Yuan, J.; Chen, Z.; Feng, C.; Chen, W.; Du, M.; Tong, T. Predicting Alzheimer’s Disease Conversion From Mild Cognitive Impairment Using an Extreme Learning Machine-Based Grading Method with Multimodal Data. Front. Aging Neurosci. 2020, 12, 77. [Google Scholar] [CrossRef]
  87. Fakoya, A.A.; Parkinson, S. A Novel Image Casting and Fusion for Identifying Individuals at Risk of Alzheimer’s Disease Using MRI and PET Imaging. IEEE Access 2024, 12, 134101–134114. [Google Scholar] [CrossRef]
  88. Li, H.T.; Yuan, S.X.; Wu, J.S.; Gu, Y.; Sun, X. Predicting conversion from MCI to AD combining multi-modality data and based on molecular subtype. Brain Sci. 2021, 11, 674. [Google Scholar] [CrossRef]
  89. Kim, S.T.; Kucukaslan, U.; Navab, N. Longitudinal Brain MR Image Modeling Using Personalized Memory for Alzheimer’s Disease. IEEE Access 2021, 9, 143212–143221. [Google Scholar] [CrossRef]
  90. Crystal, O.; Maralani, P.J.; Black, S.; Fischer, C.; Moody, A.R.; Khademi, A. Brain Age Estimation on a Dementia Cohort Using FLAIR MRI Biomarkers. Am. J. Neuroradiol. 2023, 44, 1384–1390. [Google Scholar] [CrossRef] [PubMed]
  91. Chattopadhyay, T.; Ozarkar, S.S.; Buwa, K.; Joshy, N.A.; Komandur, D.; Naik, J.; Thomopoulos, S.I.; Steeg, G.V.; Ambite, J.L.; Thompson, P.M. Comparison of deep learning architectures for predicting amyloid positivity in Alzheimer’s disease, mild cognitive impairment, and healthy aging, from T1-weighted brain structural MRI. Front. Neurosci. 2024, 18, 1387196. [Google Scholar] [CrossRef] [PubMed]
  92. Habuza, T.; Zaki, N.; Mohamed, E.A.; Statsenko, Y. Deviation from Model of Normal Aging in Alzheimer’s Disease: Application of Deep Learning to Structural MRI Data and Cognitive Tests. IEEE Access 2022, 10, 53234–53249. [Google Scholar] [CrossRef]
  93. Liang, W.; Zhang, K.; Cao, P.; Liu, X.; Yang, J.; Zaiane, O.R. Exploiting task relationships for Alzheimer’s disease cognitive score prediction via multi-task learning. Comput. Biol. Med. 2023, 152, 106367. [Google Scholar] [CrossRef]
  94. Qin, Y.; Cui, J.; Ge, X.; Tian, Y.; Han, H.; Fan, Z.; Liu, L.; Luo, Y.; Yu, H. Hierarchical multi-class Alzheimer’s disease diagnostic framework using imaging and clinical features. Front. Aging Neurosci. 2022, 14, 935055. [Google Scholar] [CrossRef]
  95. Dyrba, M.; Mohammadi, R.; Grothe, M.J.; Kirste, T.; Teipel, S.J. Gaussian Graphical Models Reveal Inter-Modal and Inter-Regional Conditional Dependencies of Brain Alterations in Alzheimer’s Disease. Front. Aging Neurosci. 2020, 12, 99. [Google Scholar] [CrossRef]
  96. Zhang, G.; Nie, X.; Liu, B.; Yuan, H.; Li, J.; Sun, W.; Huang, S. A multimodal fusion method for Alzheimer’s disease based on DCT convolutional sparse representation. Front. Neurosci. 2023, 16, 1100812. [Google Scholar] [CrossRef]
  97. Hong, X.; Huang, K.; Lin, J.; Ye, X.; Wu, G.; Chen, L.; Chen, E.; Zhao, S. Combined Multi-Atlas and Multi-Layer Perception for Alzheimer’s Disease Classification. Front. Aging Neurosci. 2022, 14, 891433. [Google Scholar] [CrossRef] [PubMed]
  98. Lin, W.; Gao, Q.; Du, M.; Chen, W.; Tong, T. Multiclass diagnosis of stages of Alzheimer’s disease using linear discriminant analysis scoring for multimodal data. Comput. Biol. Med. 2021, 134, 104478. [Google Scholar] [CrossRef] [PubMed]
  99. Gupta, Y.; Kim, J.I.; Kim, B.C.; Kwon, G.R. Classification and Graphical Analysis of Alzheimer’s Disease and Its Prodromal Stage Using Multimodal Features From Structural, Diffusion, and Functional Neuroimaging Data and the APOE Genotype. Front. Aging Neurosci. 2020, 12, 238. [Google Scholar] [CrossRef] [PubMed]
  100. Lau, A.; Beheshti, I.; Modirrousta, M.; Kolesar, T.A.; Goertzen, A.L.; Ko, J.H. Alzheimer’s Disease-Related Metabolic Pattern in Diverse Forms of Neurodegenerative Diseases. Diagnostics 2021, 11, 2023. [Google Scholar] [CrossRef]
  101. Dong, A.; Li, Z.; Wang, M.; Shen, D.; Liu, M. High-Order Laplacian Regularized Low-Rank Representation for Multimodal Dementia Diagnosis. Front. Neurosci. 2021, 15, 634124. [Google Scholar] [CrossRef]
  102. Yamao, T.; Miwa, K.; Kaneko, Y.; Takahashi, N.; Miyaji, N.; Hasegawa, K.; Wagatsuma, K.; Kamitaka, Y.; Ito, H.; Matsuda, H. Deep Learning-Driven Estimation of Centiloid Scales from Amyloid PET Images with 11C-PiB and 18F-Labeled Tracers in Alzheimer’s Disease. Brain Sci. 2024, 14, 406. [Google Scholar] [CrossRef]
  103. Gajjar, P.; Garg, M.; Desai, S.; Chhinkaniwala, H.; Sanghvi, H.A.; Patel, R.H.; Gupta, S.; Pandya, A.S. An Empirical Analysis of Diffusion, Autoencoders, and Adversarial Deep Learning Models for Predicting Dementia Using High-Fidelity MRI. IEEE Access 2024, 12, 131231–131243. [Google Scholar] [CrossRef]
  104. Ying, C.; Chen, Y.; Yan, Y.; Flores, S.; Laforest, R.; Benzinger, T.L.S.; An, H. Accuracy and longitudinal consistency of PET/MR attenuation correction in amyloid PET imaging amid software and hardware upgrades. Am. J. Neuroradiol. 2024, 46, 635–642. [Google Scholar] [CrossRef] [PubMed]
  105. Apostolopoulos, I.D.; Papathanasiou, N.D.; Apostolopoulos, D.J.; Panayiotakis, G.S. Applications of Generative Adversarial Networks (GANs) in Positron Emission Tomography (PET) imaging: A review. Eur. J. Nucl. Med. Mol. Imaging 2022, 49, 3717–3739. [Google Scholar] [CrossRef] [PubMed]
  106. Grigas, O.; Damaševičius, R.; Maskeliūnas, R. Positive Effect of Super-Resolved Structural Magnetic Resonance Imaging for Mild Cognitive Impairment Detection. Brain Sci. 2024, 14, 381. [Google Scholar] [CrossRef] [PubMed]
  107. Wu, J.; Zhao, K.; Li, Z.; Wang, D.; Ding, Y.; Wei, Y.; Zhang, H.; Liu, Y. A systematic analysis of diagnostic performance for Alzheimer’s disease using structural MRI. Psychoradiology 2022, 2, 287–295. [Google Scholar] [CrossRef]
  108. Wang, L.X.; Wang, Y.-Z.; Han, C.-G.; Zhao, L.; He, L.; Li, J. Revolutionizing early Alzheimer’s disease and mild cognitive impairment diagnosis: A deep learning MRI meta-analysis. Arq. Neuro-Psiquiatr. 2024, 82, s00441788657. [Google Scholar] [CrossRef]
  109. Sun, Y.; Chen, Y.; Dong, L.; Hu, D.; Zhang, X.; Jin, C.; Zhou, R.; Zhang, J.; Dou, X.; Wang, J.; et al. Diagnostic performance of deep learning-assisted [18F]FDG PET imaging for Alzheimer’s disease: A systematic review and meta-analysis. Eur. J. Nucl. Med. Mol. Imaging 2025, 52, 3600–3612. [Google Scholar] [CrossRef]
Figure 1. Workflow of the literature search and selection process. The search was conducted in February 2025.
Figure 2. Bar chart illustrating the annual distribution of ML and DL model types used in the Alzheimer’s disease PET/MRI studies included in this review, from 2020 to 2025. In 2020, generative adversarial networks (GANs) and hybrid models were most prominent. By 2023–2025, the relatively even distribution across model types suggests that modeling strategies have stabilized.
Figure 3. Modality trends in AI-based AD imaging studies from 2020 to 2025. This bar chart shows the proportion of studies using MRI-only, PET-only, combined PET + MRI, and broader multimodal approaches (e.g., PET + MRI + clinical or CSF biomarkers). A progressive shift toward multimodal integration is evident, particularly after 2022, reflecting increased emphasis on combining structural, metabolic, and clinical information in predictive modeling.
Figure 4. The AI pipeline in AD neuroimaging. Preprocessing, including segmentation and feature extraction, is critical for transforming raw MRI/PET data into meaningful inputs for downstream ML and DL analysis. Accurate classification supports clinical outcomes by predicting cognitive decline (MMSE) and estimating conversion risk.
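The staged pipeline in Figure 4 (preprocessing, feature extraction, classification, outcome prediction) can be sketched in a few lines of code. The snippet below is an illustrative toy on synthetic "volumes" with a minimal nearest-centroid classifier standing in for the CNN/SVM/RF models surveyed here; none of the functions or data correspond to a specific reviewed study.

```python
import numpy as np

rng = np.random.default_rng(0)

def preprocess(volume):
    """Toy preprocessing: per-volume intensity normalization (stand-in
    for skull stripping, registration, and noise correction)."""
    v = volume.astype(float)
    return (v - v.mean()) / (v.std() + 1e-8)

def extract_features(volume):
    """Toy feature extraction: mean intensity per axial slice (stand-in
    for segmentation-derived volumetric or radiomic features)."""
    return volume.mean(axis=(1, 2))

def fit_centroids(X, y):
    """Minimal nearest-centroid classifier as a stand-in for the
    surveyed ML/DL classifiers."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

def make_scan(ad_like):
    """Synthetic 8x8x8 scan; the 'AD-like' class has reduced signal in
    the first slices, loosely mimicking regional atrophy."""
    v = rng.normal(1.0, 1.0, size=(8, 8, 8))
    if ad_like:
        v[:4] -= 1.0
    return v

scans = [make_scan(i % 2 == 1) for i in range(20)]
labels = np.array([i % 2 for i in range(20)])

X = np.array([extract_features(preprocess(s)) for s in scans])
centroids = fit_centroids(X, labels)
preds = np.array([predict(centroids, x) for x in X])
accuracy = float(np.mean(preds == labels))  # high on this separable toy data
print(f"toy training accuracy: {accuracy:.2f}")
```

In a real pipeline, each stage would be replaced by the corresponding tool (e.g., a U-Net for segmentation and a CNN for classification), but the data flow is the same.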
Figure 5. This scheme illustrates the multimodal fusion architecture. Multimodal inputs (MRI, amyloid, tau, and FDG PET) are processed through CNNs to extract features, which are integrated into a shared latent space. A fusion layer with an attention module supports accurate classification (e.g., AD vs. cognitively normal, CN).
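The fusion step in Figure 5 can be made concrete with a small sketch: each modality is projected into a shared latent space, and a softmax attention module weights the modalities before summation. All dimensions, encoder maps, and the attention vector below are hypothetical stand-ins (random linear maps in place of trained CNN encoders), chosen only to show the shapes involved.

```python
import numpy as np

rng = np.random.default_rng(1)
LATENT = 16  # size of the shared latent space

# Hypothetical per-modality feature dimensions and linear "encoders"
# (stand-ins for the CNN feature extractors in Figure 5).
modalities = {"MRI": 32, "amyloid_PET": 24, "tau_PET": 24, "FDG_PET": 24}
encoders = {m: rng.normal(size=(d, LATENT)) for m, d in modalities.items()}

# Attention module: a single score vector over the latent space.
attn_vec = rng.normal(size=LATENT)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def fuse(inputs):
    """Project each modality into the shared latent space, compute
    softmax attention weights, and return the weighted sum."""
    z = np.stack([inputs[m] @ encoders[m] for m in modalities])  # (4, LATENT)
    weights = softmax(z @ attn_vec)                              # (4,)
    return weights @ z, weights

inputs = {m: rng.normal(size=d) for m, d in modalities.items()}
fused, weights = fuse(inputs)
print(fused.shape, weights.sum())  # fused vector feeds the classifier head
```

In a trained model the encoders and attention vector are learned jointly with the classification head; the attention weights then indicate how much each modality contributes per subject.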
Figure 6. Emerging directions in AI for Alzheimer’s disease include explainable AI (XAI), contrast-free imaging, federated learning, harmonization, cognitive prediction, and the use of synthetic data via GANs, all of which aim to enhance clinical applicability, generalizability, and interpretability.
Table 1. Diagnostic categories of AI use in AD imaging.
| Domain | Description | Modalities | Common AI Methods | Example Studies |
| --- | --- | --- | --- | --- |
| Preprocessing and Segmentation | Prepares data for modeling: skull stripping, noise correction, registration | MRI, PET | U-Net, nnU-Net, GANs, Radiomics | [18,19] |
| Diagnosis and Classification | Identifies disease stage or detects AD during early stages | MRI, PET | CNNs (ResNet, DenseNet, VGG), SVM, RF | [20,21] |
| Prediction and Prognosis | Forecasts disease progression, enables longitudinal analyses, and provides risk assessment | MRI, PET, fMRI | RNN, LSTM, Logistic Regression | [22,23] |
| Multimodal Fusion | Combines imaging output, blood/CSF biomarkers, and clinical data into a comprehensive, cumulative result | MRI + PET + MMSE + CSF | Ensemble CNNs, Dual-Path CNNs, SVM | [24,25] |
| Emerging Trends | New techniques such as XAI (enhancing explainability), synthetic data (generating training input for AI models), and harmonization (enabling better standardization) | All | XAI, Generative Models, Transformers | [26,27] |
Table 2. Top-performing AI models in high-impact studies.
| AI Model | Task | Input Modality | Accuracy/AUC | Study |
| --- | --- | --- | --- | --- |
| ResNet18 | AD vs. MCI classification | fMRI | 99.99% | [53] |
| DenseNet121 + SVM | AD classification | T1 MRI | 91.75% | [54] |
| BNLoop-GAN | Brain network generation | fMRI + sMRI | 98% | [55] |
| Ensemble 3D CNN | Progression tracking | Longitudinal MRI | Not reported | [56] |
| DemNet | AD staging | MRI | 95.23% | [57] |
Table 3. Pathological features and imaging biomarkers in AI-driven AD studies.
| Pathological Feature | Modality | AI Applications | Typical Output | References |
| --- | --- | --- | --- | --- |
| Amyloid-β (Aβ) | PET (AV45, PiB) | Diagnosis, prognosis | SUVR, Centiloid scaling | [24,49,82] |
| Tau Tangles | PET (Tauvid) | Staging, prognosis | SUVR, cortical distribution | [24,40,83] |
| CSF Aβ42/tau ratio | Biochemical (CSF) | Risk prediction, multimodal fusion | Ratio “drop” correlates with AD conversion | [50] |
| Medial Temporal Atrophy | T1 MRI | Segmentation, classification, brain age | Volume loss in the hippocampus/entorhinal cortex | [34,51,63] |
| FDG Hypometabolism | FDG PET | Multimodal fusion, deep learning | Reduced glucose metabolism in parietal/temporal regions | [24,42] |
| White Matter Integrity | DTI/Structural MRI | Prediction of progression, subtype analysis | FA and MD abnormalities in frontal-parietal tracts | [35,36] |
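Two of the quantitative outputs in Table 3, SUVR and the Centiloid scale, are simple arithmetic once regional uptake values are available: SUVR is the ratio of mean target-region uptake to mean reference-region uptake, and the Centiloid transform linearly rescales a tracer-specific SUVR so that young controls average 0 CL and typical AD averages 100 CL. The sketch below uses made-up uptake values and illustrative (not tracer-calibrated) anchor points.

```python
import numpy as np

def suvr(target_uptake, reference_uptake):
    """SUVR: mean uptake in a target cortical region divided by mean
    uptake in a reference region (e.g., whole cerebellum)."""
    return float(np.mean(target_uptake) / np.mean(reference_uptake))

def centiloid(suvr_value, suvr_yc, suvr_ad100):
    """Linear Centiloid transform. suvr_yc and suvr_ad100 are the
    tracer-specific anchors for young controls (0 CL) and typical AD
    (100 CL); the values passed below are illustrative only."""
    return 100.0 * (suvr_value - suvr_yc) / (suvr_ad100 - suvr_yc)

target = np.array([1.8, 1.9, 2.0])       # hypothetical cortical uptake
reference = np.array([1.0, 1.05, 0.95])  # hypothetical cerebellar uptake

s = suvr(target, reference)
cl = centiloid(s, suvr_yc=1.0, suvr_ad100=2.0)
print(f"SUVR = {s:.2f}, Centiloid = {cl:.1f}")
```

Deep learning approaches such as reference [102] aim to estimate these quantities directly from images, bypassing explicit region delineation.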
Table 4. Description of notable AI models applied to AD imaging during 2020–2025.
| Title | Author | Year | AI Model/Architecture | Modality | Dataset | Performance Metrics | Model Insights | Limitations | Reference |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3D CNN Design for the Classification of Alzheimer’s Disease Using Brain MRI and PET | Khagi B. et al. | 2020 | Encoder-based 3D CNN | MRI and/or PET | ADNI Baseline (BL) projects | Accuracy: 94.56% | Diverges receptive fields to optimize feature extraction efficiency | Dataset size and class imbalances (AD, MCI) | [29] |
| Diagnosis of Alzheimer’s Disease Using Convolutional Neural Network with Select Slices by Landmark on Hippocampus in MRI Images | Pusparani Y. et al. | 2023 | ResNet50 and LeNet | MRI | ADNI | Accuracy: 98% | Attention model to improve accuracy | No external validation | [63] |
| 3D CNN for AD detection using MRI | Haijing et al. | 2021 | ResNet | MRI | ADNI | Accuracy: 97.1% | Captures more information from MRI | Limited interpretability; no external validation | [64] |
| Using a Patch-Wise M-Net Convolutional Neural Network for Tissue Segmentation in Brain MRI Images | Yamanakkanavar N. et al. | 2020 | CNN M-Net | MRI | OASIS | Accuracy: 94.81–96.33% | Automatic segmentation of brain MRI scans | M-Net is prone to missing details in certain regions | [33] |
| An Intelligent System for Early Recognition of Alzheimer’s Disease Using Neuroimaging | Odusami et al. | 2022 | ResNet, DenseNet | MRI | ADNI | Accuracy: 98.86% | Grad class activation map | No external validation | [53] |
| Analysis of Features of Alzheimer’s Disease: Detection of Early Stage from Functional Brain Changes in Magnetic Resonance Images Using a Finetuned ResNet18 Network | Odusami et al. | 2021 | ResNet18 | fMRI | ADNI | Accuracy: 99.99% | Integrates structural and metabolic info | Overfitting | [21] |
| FDG-PET to T1 Weighted MRI Translation with 3D Elicit Generative Adversarial Network (E-GAN) | Bazangani F. et al. | 2022 | Elicit GAN | FDG PET | ADNI | Structural similarity (SSIM): 75% | FDG-PET to 3D T1-WI generation | Long training time; model tested only on healthy subjects | [19] |
| Brain MRI Analysis for Alzheimer’s Disease Diagnosis Using CNN-Based Feature Extraction and Machine Learning | Duaa AlSaeed | 2022 | ResNet50 | MRI | ADNI and MIRIAD | Accuracy: 85.87–99% | Efficient feature extraction | No external validation | [65] |
| Based on Tau PET Radiomics Analysis for the Classification of Alzheimer’s Disease and Mild Cognitive Impairment | Jiao F. et al. | 2023 | Radiomics analysis | Tau PET | ADNI | Accuracy: 84.8% | Prediction of tau-positive MCI or ApoE ε4 presence | AD diagnosis not biopsy-confirmed; external cohorts had small subject numbers | [40] |
| MPC-STANet: Alzheimer’s Disease Recognition Method Based on Multiple Phantom Convolution and Spatial Transformation Attention Mechanism | Yujian et al. | 2022 | ResNet50 | MRI | Multi-institutional | Accuracy: 96.25% | Space conversion attention | No external validation | [26] |
| Functional Brain Network Measures for Alzheimer’s Disease Classification | Wang L. et al. | 2023 | SVM (linear) | fMRI | ADNI | Accuracy: 96.80% (HC vs. AD) | Identification of significantly altered networks between HC, MCI, and AD | Only 36 of 360 regions considered using J-HCPMMP parcellation | [44] |
| Leveraging Brain MRI for Biomedical Alzheimer’s Disease Diagnosis Using Enhanced Manta Ray Foraging Optimization Based Deep Learning | R. Syed Jamalullah | 2023 | DenseNet121 | MRI | ADNI | Accuracy: 98.29% | Enhanced Manta Ray Foraging Optimization | No external validation | [66] |
| Convolution Neural Networks and Self-Attention Learners for Alzheimer Dementia Diagnosis from Brain MRI | Pierluigi Carcagnì et al. | 2023 | CNN | MRI | ADNI and OASIS | Accuracy: 77% | Self-attention learners | No external validation; information may be lost during feature extraction | [67] |
| Analyzing Hierarchical Multi-View MRI Data With StaPLR: An Application to Alzheimer’s Disease Classification | Van Loon et al. | 2022 | Stacked penalized LR (StaPLR) | MRI (DWI, sMRI, fMRI) | PRODEM (Medical University of Graz) | Accuracy: 88.8% | MRI view selection most significant for disease prediction | Binary selection process: each view is either selected or not, which may differ from subject to subject | [47] |
| HTLML: Hybrid AI Based Model for Detection of Alzheimer’s Disease | Sarang et al. | 2022 | DenseNet121, DenseNet201 | MRI | Kaggle | Accuracy: 91.75% | Hybrid architecture improves stability | Complex to reproduce; no external validation | [54] |
| DEMNET: A Deep Learning Model for Early Diagnosis of Alzheimer Diseases and Dementia From MR Images | Suriya et al. | 2021 | CNN | MRI | Kaggle; ADNI (external validation) | Accuracy: 95.23% | Multilayer architecture | Complex to reproduce | [57] |
| Combined Quantitative amyloid-β PET and Structural MRI Features Improve Alzheimer’s Disease Classification in Random Forest Model—A Multicenter Study | Bao Y. et al. | 2024 | RF | Aβ-PET, sMRI | AIBL database | | | | |
GAIN dataset
Accuracy: 81% (HC vs. AD)Aβ PET features for AD detection using ML modelsDemographical information missing from subjects, limited sample size[49]
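To make the feature-level (early) fusion strategy used by several of the tabulated studies concrete, the following is a minimal sketch in the spirit of the random forest approach of Bao et al. [49]: regional Aβ-PET uptake values and structural MRI volumes are concatenated per subject and passed to a random forest classifier. The feature counts, effect sizes, and data here are entirely synthetic placeholders, not the published pipeline.

```python
# Minimal sketch of multimodal (PET + MRI) feature fusion with a random
# forest, assuming synthetic stand-in features; not the pipeline of [49].
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects = 200

# Hypothetical features: 10 regional amyloid-PET SUVRs + 10 MRI volumes.
pet_suvr = rng.normal(1.2, 0.2, size=(n_subjects, 10))
mri_vols = rng.normal(3.0, 0.5, size=(n_subjects, 10))
labels = rng.integers(0, 2, size=n_subjects)  # 0 = HC, 1 = AD (synthetic)

# Simulate higher amyloid load and atrophy in the synthetic AD group.
pet_suvr[labels == 1] += 0.3
mri_vols[labels == 1] -= 0.4

# Early fusion: concatenate modality features into one vector per subject.
X = np.hstack([pet_suvr, mri_vols])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"5-fold CV accuracy: {scores.mean():.2f}")
```

Cross-validated accuracy on a single cohort, as reported by most studies above, is exactly what external validation is meant to stress-test; high internal scores on synthetic or single-site data do not guarantee generalization.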