Author Contributions
Conceptualization, F.S., A.A.M., J.G.R. and R.A.; methodology, F.S., A.A.M., J.G.R. and R.A.; software, F.S., A.A.M., J.G.R. and R.A.; validation, F.S., A.A.M., J.G.R. and R.A.; formal analysis, F.S., A.A.M., J.G.R. and R.A.; investigation, F.S. and A.A.M.; resources, F.S. and A.A.M.; data curation, F.S., A.A.M., J.G.R. and R.A.; writing—original draft preparation, F.S., A.A.M., J.G.R. and R.A.; writing—review and editing, F.S., A.A.M., J.G.R. and R.A.; visualization, F.S. and A.A.M.; supervision, A.A.M., R.A. and J.G.R.; project administration, R.A. and J.G.R. All authors have read and agreed to the published version of the manuscript.
Figure 1. Alzheimer’s brain tissue with amyloid plaques (pink) and neurofibrillary tangles (black) [7].
Figure 2. Representative coronal MRI slices illustrating the progression of neurodegeneration [25].
Figure 3. The end-to-end data preparation pipeline.
Figure 4. MobileNetV2 architecture diagram showing the frozen ImageNet pre-trained base model (2.42 M parameters) with a custom trainable classification head. The input 224 × 224 × 3 images pass through the frozen MobileNetV2 base, producing 7 × 7 × 1280 feature maps. These are processed by Global Average Pooling to create a 1 × 1280 feature vector, followed by a 128-unit Dense layer with ReLU activation (164K parameters) and a final three-unit Softmax layer (387 parameters) for tri-class prediction.
Figure 5. EfficientNetV2B0 architecture diagram illustrating the frozen ImageNet pre-trained base model (5.92 M parameters) with a custom classification head identical to that of MobileNetV2. The compound scaling approach in EfficientNetV2B0 results in a higher parameter count in the base model while maintaining the same output dimensions (7 × 7 × 1280) and identical downstream processing through Global Average Pooling, a 128-unit Dense layer, and a 3-unit Softmax output for tri-class classification.
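The shared classification head described in Figures 4 and 5 (frozen ImageNet base → Global Average Pooling → Dense(128, ReLU) → Dense(3, Softmax)) can be sketched in Keras. This is a minimal sketch, not the authors' released code; variable names are illustrative.

```python
import tensorflow as tf

# Frozen ImageNet-pretrained base; only the custom head trains.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False

model = tf.keras.Sequential([
    base,                                      # outputs 7 x 7 x 1280 feature maps
    tf.keras.layers.GlobalAveragePooling2D(),  # -> 1280-dim feature vector
    tf.keras.layers.Dense(128, activation="relu"),   # 1280*128 + 128 = 163,968 params
    tf.keras.layers.Dense(3, activation="softmax"),  # 128*3 + 3 = 387 params
])
```

Swapping `MobileNetV2` for `tf.keras.applications.EfficientNetV2B0` yields the Figure 5 variant, since both bases emit 7 × 7 × 1280 feature maps at this input size.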
Figure 6. Confusion matrix showing strong diagonal classification for MobileNetV2 original configuration.
Figure 7. ROC curves with AUC values above 0.95 for the MobileNetV2 base configuration.
Figure 8. Training curves for MobileNetV2 original configuration showing stable convergence.
Figure 9. Reliability diagrams for MobileNetV2 original configuration across all three diagnostic classes.
Figure 10. Confusion matrix with improved class balance for MobileNetV2 augmented configuration.
Figure 11. ROC curves showing enhanced performance for the MobileNetV2 augmented configuration.
Figure 12. Training curves for MobileNetV2 augmented configuration with smoother validation.
Figure 13. Reliability diagrams for MobileNetV2 augmented configuration across all three diagnostic classes.
Figure 14. Confusion matrix with high CN specificity for EfficientNetV2B0 original configuration.
Figure 15. ROC curves with AUC values exceeding 0.95 for the EfficientNetV2B0 original configuration.
Figure 16. Training curves for EfficientNetV2B0’s original configuration with rapid initial convergence.
Figure 17. Reliability diagrams for EfficientNetV2B0’s original configuration showing baseline calibration performance.
Figure 18. Confusion matrix showing optimal class balance for the EfficientNetV2B0 augmented configuration.
Figure 19. ROC curves with superior discriminative performance for EfficientNetV2B0’s augmented configuration.
Figure 20. Training curves for EfficientNetV2B0’s augmented configuration showing stable convergence.
Figure 21. Reliability diagrams for EfficientNetV2B0’s augmented configuration achieving optimal calibration.
Figure 22. Confusion matrix for the DenseNet121 augmented configuration.
Figure 23. ROC curves for DenseNet121 augmented configuration.
Figure 24. Training curves for DenseNet121 augmented configuration.
Figure 25. Reliability diagrams for DenseNet121 augmented configuration showing calibration limitations.
Figure 26. Multi-scale explainability pipeline combining coarse- and fine-grained attribution methods.
Figure 27. Explainability results for CN classification showing focus on preserved brain structures.
Figure 28. Explainability results for EMCI classification highlighting early pathological changes.
Figure 29. Explainability results for LMCI classification showing advanced neurodegenerative patterns.
Figure 30. Neuroimaging slice viewer interface demonstrating MRI visualization capabilities.
Figure 31. Analysis tool interface demonstrating AI explainability features.
Table 1. Comparison of diagnostic modalities for Alzheimer’s Disease.
| Modality | Principle of Measurement | Key Findings in AD | Primary Advantages | Key Limitations |
|---|---|---|---|---|
| Structural MRI | Measures brain anatomy and volume using magnetic fields. | Atrophy in specific regions (hippocampus, temporal/parietal lobes). | Non-invasive, no radiation, widely available, high spatial resolution. | Traditionally measures late-stage neurodegeneration; less specific for underlying pathology. |
| Amyloid-PET | Visualizes extracellular Aβ plaques using a radiotracer. | Increased tracer uptake in neocortical regions. | High specificity for Aβ pathology; provides diagnostic clarity. | Expensive, limited availability, radiation exposure. |
| Tau-PET | Visualizes intracellular neurofibrillary tangles using a radiotracer. | Increased tracer uptake corresponding to Braak staging patterns. | High specificity for tau pathology; correlates well with cognitive decline. | Expensive, even more limited availability than amyloid-PET, radiation exposure. |
| FDG-PET | Measures regional brain glucose metabolism. | Hypometabolism in posterior cingulate/precuneus and temporoparietal lobes. | Sensitive indicator of neurodegeneration, often preceding structural atrophy. | Expensive, limited availability, radiation exposure, not specific to AD pathology. |
| CSF Analysis | Measures biomarker concentrations in cerebrospinal fluid via lumbar puncture. | Decreased Aβ42, increased t-tau and p-tau levels. | High diagnostic accuracy for Aβ and tau pathologies; direct measure of biochemistry. | Invasive procedure, risk of side effects, pre-analytical variability, lack of standardized cut-offs. |
| Cognitive Tests (MMSE/MoCA) | Brief, standardized tests of cognitive function (memory, attention, etc.). | Low scores indicating impairment in multiple cognitive domains. | Quick, inexpensive, non-invasive, accessible. | Low sensitivity for early/mild stages, influenced by education/culture, non-specific to cause. |
Table 2. Summary of related works in Alzheimer’s disease classification using deep learning.
| Study | Dataset | Model Architecture | Task | Explainability | Key Findings/Performance |
|---|---|---|---|---|---|
| Deep Learning for AD Classification | | | | | |
| Jo et al. (2019) [30] | ADNI | Various DL models | Binary (AD vs. CN) | Not Reported | Up to 98.8% accuracy on binary classification |
| Hechkel & Helali (2025) [34] | ADNI (sMRI + DTI) | YOLOv11 | Multi-class detection | Not Reported | 93.6% precision, 91.6% recall with multimodal data |
| Marcisz & Polanska (2023) [32] | ADNI | Various models | MCI vs. Early AD | Not Specified | Demonstrated sMRI-only feasibility |
| Lightweight Architectures | | | | | |
| Borah et al. (2024) [41] | Multiple medical imaging | MobileNet, VGG, ResNet | Multi-disease | Not Reported | Traditional models outperformed lightweight in some cases |
| Alruily et al. (2025) [42] | Not specified | Ensemble (VGG16, MobileNet, InceptionResNetV2) | AD classification | Not Reported | 97.93% accuracy, 98.04% specificity via feature fusion |
| Cueto & Kelleher (2024) [40] | Various | Multiple architectures | Training efficiency | N/A | Efficiency vs. performance trade-off framework |
| Explainable AI Methods | | | | | |
| Huff et al. (2021) [44] | Review | Various CNNs | Medical imaging | Grad-CAM, saliency | Comprehensive XAI review |
| van de Leur et al. (2021) [47] | ECG data | Deep CNNs | ECG detection | Grad-CAM | Anatomical validation emphasis |
| Ennab & Mcheick (2025) [48] | Medical imaging | Various CNNs | Medical classification | Pixel-level, Grad-CAM | Comparative XAI analysis |
| Bhati et al. (2024) [49] | Survey | Various | Medical imaging | Multiple XAI | XAI visualization survey |
| Multimodal Approaches | | | | | |
| Kitamura & Topol (2023) [35] | Various | Multimodal AI | Medical imaging | Not focus | Multimodal AI in radiology |
| Schouten et al. (2024) [36] | Review | Multimodal | Medical AI | Not focus | Technical challenges review |
| Current Study | | | | | |
| This Work | ADNI (102 subjects) | MobileNetV2, EfficientNetV2B0 | Tri-class (CN, EMCI, LMCI) | Grad-CAM++, Guided Backprop | 88% accuracy; lightweight interpretable early detection |
Table 3. Search criteria for ADNI MRI image selection.
| Search Section | Criteria |
|---|---|
| Projects/Phase | ADNI, ADNI 1, ADNI GO, ADNI 2, ADNI 3, ADNI 4 |
| Image Types | Pre-processed |
| Subject | Research Group - CN, EMCI, LMCI |
| Study/Visit | ADNI Baseline, ADNIGO Month 3 MRI, ADNI2 Baseline-New Pt, ADNI2 Month 6-New Pt, ADNI3 Initial Visit-Cont Pt |
| Image | Image Description - MPRAGE; Modality - MRI |
| Imaging Protocol | Acquisition Plane - 3D |
Table 4. Complete preprocessing pipeline technical specifications for reproducibility.
| Parameter | Specification |
|---|---|
| Skull Stripping | |
| Tool | SynthStrip v1.2 (FreeSurfer 7.4.1) |
| Success rate | 100% (102/102 subjects) |
| Spatial Preprocessing | |
| Orientation | RAS canonical space (nibabel 5.1.0) |
| Native voxel spacing | mm3 (MPRAGE typical) |
| Resampling | None (native resolution preserved) |
| Template registration | None (subject-specific geometry preserved) |
| Slice Extraction | |
| Plane | Coronal |
| Target per subject | 30 slices |
| Actual yield | 29.4 ± 0.8 slices/subject |
| Bounding box method | 5th percentile intensity threshold |
| Morphological ops | Binary closing (disk radius = 5 pixels) |
| Intensity Processing | |
| Initial scaling | Native range → [0, 255] (8-bit) |
| MobileNetV2 normalization | |
| EfficientNetV2B0 normalization | |
| DenseNet121 normalization | |
| Application timing | On-the-fly during training |
| Quality Control | |
| Volume-level checks | Brain volume > 800 cm3, skull residual < 5% |
| Slice-level checks | Brain coverage > 30%, intensity 10–200 |
| Total slices processed | 3060 (102 subjects × 30 slices) |
| Slices excluded | 60 (2.0%): 42 coverage, 18 artifacts |
| Subjects excluded | 0 (0%) |
| Final dataset | 3000 slices (1000 per class) |
Table 5. On-the-fly data augmentation parameters applied during training.
| Transformation | Range/Mode |
|---|---|
| Rotation | ±10 degrees |
| Width Shift | ±10% of image width |
| Height Shift | ±10% of image height |
| Shear | ±10% |
| Zoom | ±10% |
| Horizontal Flip | Random |
| Brightness Adjustment | 0.9–1.1 |
| Fill Mode | Nearest |
Table 6. Complete hyperparameter configuration for model training.
| Parameter | Value |
|---|---|
| Model Architecture | |
| Input Image Size | 224 × 224 × 3 |
| Base Models | MobileNetV2, EfficientNetV2B0 |
| Transfer Learning | ImageNet pre-trained weights |
| Custom Head | GAP + Dense (128, ReLU) + Dense (3, Softmax) |
| Training Configuration | |
| Batch Size | 16 |
| Learning Rate | |
| Optimizer | Adam (β₁ = 0.9, β₂ = 0.999) |
| Loss Function | Categorical Cross-Entropy |
| Early Stopping Patience | 7 epochs |
| Maximum Epochs | 100 |
| Hardware | |
| GPU | NVIDIA A100 |
| Training Time (MobileNetV2) | 70–240 ms/step |
| Training Time (EfficientNetV2B0) | 240–300 ms/step |
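The optimizer and early-stopping rows of Table 6 can be sketched in Keras. The learning-rate value is elided in the source table, so Keras' default is left in place rather than guessed; the monitored quantity for early stopping is likewise an assumption.

```python
import tensorflow as tf

# Adam with the moment coefficients from Table 6; the paper's learning
# rate is not reproduced here (elided in the source), so the Keras
# default stands in as a placeholder.
optimizer = tf.keras.optimizers.Adam(beta_1=0.9, beta_2=0.999)

# Early stopping with the Table 6 patience; monitoring val_loss and
# restoring best weights are assumptions, not stated in the table.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=7, restore_best_weights=True)

# Typical usage (model and datasets omitted):
# model.compile(optimizer=optimizer, loss="categorical_crossentropy",
#               metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=100,
#           callbacks=[early_stop])
```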
Table 7. Inference performance metrics demonstrating clinical deployment readiness.
| Model | Size (MB) | Params (M) | CPU (ms) | GPU (ms) | Studies/h |
|---|---|---|---|---|---|
| MobileNetV2 (Aug) | 11.1 | 2.42 | 174.6 | 144.3 | 687 |
| MobileNetV2 (Orig) | 11.1 | 2.42 | 175.6 | 144.6 | 683 |
| EfficientNetV2B0 (Aug) | 25.4 | 6.08 | 345.5 | 314.1 | 347 |
| EfficientNetV2B0 (Orig) | 25.4 | 6.08 | 349.1 | 317.6 | 344 |
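The Studies/h column of Table 7 is consistent with sequential CPU inference over 30 slices per study (the per-subject slice target from Table 4). A sketch of that arithmetic, under that assumption:

```python
def studies_per_hour(ms_per_slice: float, slices_per_study: int = 30) -> int:
    """Hourly study throughput implied by a per-slice inference latency."""
    ms_per_study = slices_per_study * ms_per_slice
    return round(3_600_000 / ms_per_study)
```

For example, `studies_per_hour(174.6)` gives 687, matching the MobileNetV2 (Aug) row, and `studies_per_hour(505.6)` gives 237 for DenseNet121 (Aug).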
| DenseNet121 (Aug) | 29.8 | 7.17 | 505.6 | 452.0 | 237 |
Table 8. Per-class performance metrics for all model configurations.
| Model | Class | Precision | Recall | F1-Score | Accuracy |
|---|---|---|---|---|---|
| EfficientNetV2B0 (Aug) | CN | 0.93 | 0.83 | 0.88 | 0.8800 |
| | EMCI | 0.83 | 0.93 | 0.88 | |
| | LMCI | 0.89 | 0.88 | 0.88 | |
| EfficientNetV2B0 (Orig) | CN | 0.90 | 0.88 | 0.89 | 0.8750 |
| | EMCI | 0.85 | 0.90 | 0.87 | |
| | LMCI | 0.88 | 0.85 | 0.86 | |
| MobileNetV2 (Aug) | CN | 0.87 | 0.88 | 0.87 | 0.8650 |
| | EMCI | 0.85 | 0.86 | 0.86 | |
| | LMCI | 0.87 | 0.85 | 0.86 | |
| MobileNetV2 (Orig) | CN | 0.87 | 0.84 | 0.86 | 0.8600 |
| | EMCI | 0.84 | 0.88 | 0.86 | |
| | LMCI | 0.87 | 0.86 | 0.86 | |
| DenseNet121 (Aug) | CN | 0.88 | 0.78 | 0.82 | 0.8167 |
| | EMCI | 0.86 | 0.78 | 0.82 | |
| | LMCI | 0.74 | 0.90 | 0.81 | |
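The F1 values in Table 8 are the harmonic mean of the listed precision and recall; a one-line spot check (function name illustrative):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```

For instance, `f1(0.93, 0.83)` is about 0.877, which rounds to the 0.88 reported for the EfficientNetV2B0 (Aug) CN row, and `f1(0.74, 0.90)` rounds to the 0.81 reported for DenseNet121 (Aug) LMCI.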
Table 9. Comprehensive discrimination and calibration metrics for all model configurations showing the impact of data augmentation on lightweight architectures.
| Architecture | Configuration | Accuracy | Macro AUC [95% CI] | Micro AUC | Brier Score |
|---|---|---|---|---|---|
| EfficientNetV2B0 | Original | 0.8750 | 0.971 [0.961, 0.980] | 0.971 | 0.0643 |
| Augmented | 0.8800 | 0.973 [0.963, 0.982] | 0.973 | 0.0588 |
| MobileNetV2 | Original | 0.8600 | 0.962 [0.951, 0.972] | 0.962 | 0.0706 |
| Augmented | 0.8650 | 0.970 [0.961, 0.979] | 0.970 | 0.0628 |
| DenseNet121 † | Augmented | 0.8167 | 0.947 [0.933, 0.960] | 0.942 | 0.0891 |
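The Brier scores in Table 9 presumably follow the standard multi-class definition; a sketch under that assumption (conventions differ on whether the per-sample sum over classes is additionally divided by the number of classes, so absolute values may be scaled differently from the table's):

```python
import numpy as np

def multiclass_brier(probs: np.ndarray, labels: np.ndarray) -> float:
    """Mean squared distance between predicted class distributions and
    one-hot targets, summed over classes per sample, then averaged."""
    onehot = np.eye(probs.shape[1])[labels]
    return float(np.mean(np.sum((probs - onehot) ** 2, axis=1)))
```

A perfectly confident correct prediction scores 0; a uniform three-class prediction against any label scores 2/3, so lower is better.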
Table 10. Class-wise sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) for all augmented model configurations.
| Model | Class | Sensitivity | Specificity | PPV | NPV |
|---|---|---|---|---|---|
| EfficientNetV2B0 (Aug) | CN | 0.835 | 0.970 | 0.933 | 0.922 |
| | EMCI | 0.930 | 0.905 | 0.830 | 0.963 |
| | LMCI | 0.875 | 0.945 | 0.888 | 0.938 |
| | Macro Avg | 0.880 | 0.940 | 0.884 | 0.941 |
| MobileNetV2 (Aug) | CN | 0.875 | 0.935 | 0.871 | 0.937 |
| | EMCI | 0.865 | 0.925 | 0.852 | 0.932 |
| | LMCI | 0.855 | 0.938 | 0.872 | 0.928 |
| | Macro Avg | 0.865 | 0.933 | 0.865 | 0.932 |
| DenseNet121 (Aug) | CN | 0.775 | 0.948 | 0.881 | 0.894 |
| | EMCI | 0.775 | 0.938 | 0.861 | 0.893 |
| | LMCI | 0.900 | 0.840 | 0.738 | 0.944 |
| | Macro Avg | 0.817 | 0.909 | 0.827 | 0.910 |
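The class-wise metrics in Table 10 follow the standard one-vs-rest definitions; a sketch against an illustrative confusion matrix (not the paper's):

```python
import numpy as np

def ovr_metrics(cm: np.ndarray, k: int):
    """One-vs-rest sensitivity, specificity, PPV, and NPV for class k.
    `cm` is a confusion matrix with true labels as rows, predictions as columns."""
    tp = cm[k, k]
    fn = cm[k].sum() - tp
    fp = cm[:, k].sum() - tp
    tn = cm.sum() - tp - fn - fp
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp), tn / (tn + fn)

# Illustrative 3-class matrix (rows/cols = CN, EMCI, LMCI), not the paper's:
cm = np.array([[8, 1, 1],
               [2, 7, 1],
               [0, 2, 8]])
sens, spec, ppv, npv = ovr_metrics(cm, 0)   # CN vs. rest
```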
Table 11. Training convergence details for all model configurations.
| Model Configuration | Convergence Epoch | Best Val. Accuracy | Final Test Accuracy |
|---|---|---|---|
| MobileNetV2 (Original) | 27 | 0.8729 | 0.8600 |
| MobileNetV2 (Augmented) | 49 | 0.8708 | 0.8650 |
| EfficientNetV2B0 (Original) | 30 | 0.8521 | 0.8750 |
| EfficientNetV2B0 (Augmented) | 68 | 0.8792 | 0.8800 |
| DenseNet121 (Augmented) | 78 | 0.8354 | 0.8167 |
Table 12. Five-fold cross-validation performance summary showing mean accuracy and stability metrics.
| Model Configuration | Mean Accuracy | Std Dev. | Min Accuracy | Max Accuracy |
|---|---|---|---|---|
| EfficientNetV2B0 (Augmented) | 0.8800 | 0.0100 | 0.8650 | 0.8950 |
| EfficientNetV2B0 (Original) | 0.8750 | 0.0120 | 0.8580 | 0.8900 |
| MobileNetV2 (Augmented) | 0.8650 | 0.0110 | 0.8500 | 0.8800 |
| MobileNetV2 (Original) | 0.8600 | 0.0115 | 0.8450 | 0.8750 |
| DenseNet121 (Augmented) | 0.8100 | 0.0252 | 0.7800 | 0.8400 |
Table 13. Per-class performance metrics across 5-fold cross-validation (mean ± std).
| Model | Class | Precision | Recall | F1-Score |
|---|---|---|---|---|
| EfficientNetV2B0 (Aug) | CN | 0.93 ± 0.02 | 0.83 ± 0.03 | 0.88 ± 0.02 |
| | EMCI | 0.83 ± 0.04 | 0.93 ± 0.03 | 0.88 ± 0.02 |
| | LMCI | 0.89 ± 0.02 | 0.88 ± 0.04 | 0.88 ± 0.02 |
| EfficientNetV2B0 (Orig) | CN | 0.90 ± 0.02 | 0.88 ± 0.02 | 0.89 ± 0.01 |
| | EMCI | 0.85 ± 0.03 | 0.90 ± 0.03 | 0.87 ± 0.02 |
| | LMCI | 0.88 ± 0.02 | 0.85 ± 0.03 | 0.86 ± 0.01 |
| MobileNetV2 (Aug) | CN | 0.87 ± 0.02 | 0.88 ± 0.02 | 0.87 ± 0.01 |
| | EMCI | 0.85 ± 0.02 | 0.86 ± 0.03 | 0.86 ± 0.02 |
| | LMCI | 0.87 ± 0.02 | 0.85 ± 0.03 | 0.86 ± 0.01 |
| MobileNetV2 (Orig) | CN | 0.87 ± 0.02 | 0.84 ± 0.03 | 0.86 ± 0.02 |
| | EMCI | 0.84 ± 0.03 | 0.88 ± 0.02 | 0.86 ± 0.01 |
| | LMCI | 0.87 ± 0.02 | 0.86 ± 0.02 | 0.86 ± 0.01 |
| DenseNet121 (Aug) | CN | 0.83 ± 0.02 | 0.79 ± 0.03 | 0.81 ± 0.03 |
| | EMCI | 0.80 ± 0.04 | 0.80 ± 0.08 | 0.80 ± 0.05 |
| | LMCI | 0.80 ± 0.04 | 0.85 ± 0.04 | 0.82 ± 0.02 |
Table 14. Individual fold performance for EfficientNetV2B0 (augmented configuration) demonstrating cross-validation stability.
| Fold | Accuracy | Balanced Accuracy | Convergence Epoch |
|---|---|---|---|
| 1 | 0.8850 | 0.8845 | 62 |
| 2 | 0.8900 | 0.8897 | 75 |
| 3 | 0.8650 | 0.8651 | 58 |
| 4 | 0.8950 | 0.8943 | 71 |
| 5 | 0.8650 | 0.8655 | 74 |
| Mean ± Std | 0.88 ± 0.01 | 0.88 ± 0.01 | 68 ± 12 |
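The summary row of Table 14 follows directly from the five fold accuracies (whether the reported spread is a sample or population standard deviation is ambiguous at this precision; both round to the stated 0.01):

```python
import statistics

fold_acc = [0.8850, 0.8900, 0.8650, 0.8950, 0.8650]  # Table 14, folds 1-5

mean_acc = statistics.fmean(fold_acc)  # 0.8800, matching Tables 12 and 14
std_acc = statistics.stdev(fold_acc)   # sample std; rounds to the reported 0.01
```

The fold minimum (0.8650) and maximum (0.8950) likewise match the Min/Max Accuracy columns of Table 12 for this configuration.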
Table 15. Chi-square test results for pairwise model agreement analysis.
| Model Pair | Statistic | p-Value | DoF | Agreement | Interpretation |
|---|---|---|---|---|---|
| MobileNetV2 vs. EfficientNetV2B0 | 537.66 | < | 4 | 78.00% | Significant agreement |
| MobileNetV2 vs. DenseNet121 | 451.92 | < | 4 | 73.83% | Significant agreement |
| EfficientNetV2B0 vs. DenseNet121 | 537.76 | < | 4 | 77.33% | Significant agreement |
Table 16. Contingency table for MobileNetV2 vs. EfficientNetV2B0 (augmented configurations).
| EfficientNetV2B0∖MobileNetV2 | CN | EMCI | LMCI | Total |
|---|---|---|---|---|
| CN | 138 | 22 | 19 | 179 |
| EMCI | 26 | 173 | 25 | 224 |
| LMCI | 18 | 22 | 157 | 197 |
| Total | 182 | 217 | 201 | 600 |
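The first row of Table 15 can be reproduced from these Table 16 counts with SciPy's `chi2_contingency` (no continuity correction applies to a 3 × 3 table); the agreement percentage is simply the diagonal fraction:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Table 16: rows = EfficientNetV2B0 predictions, columns = MobileNetV2.
table = np.array([[138,  22,  19],
                  [ 26, 173,  25],
                  [ 18,  22, 157]])

chi2, p, dof, _ = chi2_contingency(table)
agreement = np.trace(table) / table.sum()   # 468 / 600 = 78.00%
```

This recovers the reported statistic of 537.66 with 4 degrees of freedom and 78.00% agreement; the same computation on Tables 17 and 18 yields the remaining Table 15 rows.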
Table 17. Contingency table for MobileNetV2 vs. DenseNet121 (augmented configurations).
| DenseNet121∖MobileNetV2 | CN | EMCI | LMCI | Total |
|---|---|---|---|---|
| CN | 130 | 26 | 20 | 176 |
| EMCI | 14 | 149 | 17 | 180 |
| LMCI | 38 | 42 | 164 | 244 |
| Total | 182 | 217 | 201 | 600 |
Table 18. Contingency table for EfficientNetV2B0 vs. DenseNet121 (augmented configurations).
| DenseNet121∖EfficientNetV2B0 | CN | EMCI | LMCI | Total |
|---|---|---|---|---|
| CN | 139 | 21 | 16 | 176 |
| EMCI | 16 | 154 | 10 | 180 |
| LMCI | 24 | 49 | 171 | 244 |
| Total | 179 | 224 | 197 | 600 |