Electronics
  • Article
  • Open Access

16 January 2023

A Novel Framework for Classification of Different Alzheimer’s Disease Stages Using CNN Model

1 School of Computer Applications, Lovely Professional University, Phagwara 144411, India
2 Kwintech-R Labs, Jammu & Kashmir, Srinagar 193501, India
3 Department of Computer Science, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
4 Abu Dhabi Polytechnic, Abu Dhabi 111499, United Arab Emirates
This article belongs to the Special Issue Advances in Fuzzy and Intelligent Systems

Abstract

Background: Alzheimer’s, the predominant form of dementia, is a neurodegenerative brain disorder with no known cure. With the lack of innovative findings to diagnose and treat Alzheimer’s, the number of middle-aged people with dementia is estimated to rise to nearly 13 million by the end of 2050. The estimated cost of Alzheimer’s and other related ailments is USD 321 billion in 2022 and can rise above USD 1 trillion by the end of 2050. Therefore, the early prediction of such diseases using computer-aided systems is a topic of considerable interest and substantial study among scholars. The major objective is to develop a comprehensive framework for detecting the earliest onset and categorizing the different phases of Alzheimer’s. Methods: The experimental work of this novel approach is performed by implementing convolutional neural networks (CNNs) on MRI image datasets. Five classes of Alzheimer’s disease subjects are multi-classified. We used transfer learning to reap the benefits of pre-trained health data classification models such as MobileNet. Results: Various performance metrics are used for the evaluation and comparison of the proposed model. The test results reveal that the CNN architecture has the following characteristics: an appropriately simple structure that mitigates computational burden, memory usage, and overfitting, while offering manageable training time. The MobileNet pre-trained model has been fine-tuned and has achieved 96.6 percent accuracy for multi-class AD stage classification. Other models, such as VGG16 and ResNet50, were applied to the same dataset while conducting this research, and it was revealed that the proposed model yields better results. Conclusion: The study develops a novel framework for the identification of different AD stages. The main advantage of this novel approach is the creation of lightweight neural networks.
The MobileNet model is mostly used for mobile applications and has rarely been used for medical image analysis; hence, we implemented this model for disease detection and yielded better results than existing models.

1. Introduction

Alzheimer’s is a condition of the brain’s central part that causes gradual memory loss, cognitive impairment, and emotional distress. Around 46.8 million individuals worldwide have dementia, with Alzheimer’s disease accounting for 60–70% of cases and costing more than USD 818 billion globally []. Given that growing old is the foremost basis of dementia, the number of persons affected is expected to rise to 151 million by 2050 []. Extensive neuronal impairments are caused by the formation of extracellular amyloid plaques and the intracellular accretion of neurofibrillary tangles composed of hyperphosphorylated tau []. Because Alzheimer’s disease is presently incurable [,,,], disease-modifying medications are being explored that address not only amyloid beta (Aβ) build-up and the tau pathway [,] but also neuroinflammation [,,] and nutritional pathways [,]. Recent medications’ inability to cure could be partly attributable to late-stage delivery and insensitive techniques of identifying cognitive changes, underscoring the clinical necessity for rigorous diagnostics []. The National Institute of Neurological and Communicative Disorders and Stroke (NINCDS) and the Alzheimer’s Disease and Related Disorders Association (ADRDA) currently set the standard for classifying a person’s mental well-being. They discovered that the biochemical alterations that occur in Alzheimer’s disease last for decades. Differentiating between disease stages will demand the establishment of innovative biomarkers:
  • Early-symptomatic ailment;
  • Prevenient ailments, i.e., those with apparent early dementia of the AD type;
  • AD or common variants of AD [].
Cognitive assessments and imaging examinations, such as neuro-imaging, rule out alternative causes of memory impairment, such as tumors []. When integrated with these approaches and professional clinical expertise, medical criteria achieve an 80 percent good prognosis and a 60 percent diagnostic accuracy for clinical diagnosis. Recent visualization advances, such as neuro-imaging scans (MRI) [], PET scans [,], and single-photon emission computed tomography (SPECT) [], have empowered the tracking of neurodegeneration, malformations in neuronal development, and edema.
The number of AD patients is expected to rise substantially, necessitating the use of a computer-aided diagnosis (CAD) system for early and precise AD diagnosis []. Furthermore, mild cognitive impairment (MCI) is an interim phase between sound perception and dementia. As per a previous study [], MCI participants advance to clinical AD at a rate of 10–15 percent yearly. In recent years, research in detecting MCI patients who will develop clinical dementia has received much attention. Conversion of one stage into another is vital to identifying the different stages of Alzheimer’s disease.
The primary emphasis of this study is on identifying different stages of AD based on an image dataset. Deep learning techniques are frequently employed for time series classification, image identification, and multidimensional data processing []. These methods are widely applied to neuroimaging data to identify Alzheimer’s disease (AD) []. Potential genetic indicators of Alzheimer’s disease have also been explored using these methods [].
In order to diagnose Alzheimer’s, Zhang et al. [] combined neuroimaging data with clinical and neuropsychological evaluations using a multimodal deep learning model. The idea put forth by Spasov et al. [] emphasizes the significance of deep learning designs in patients at risk of AD in stopping the progression of mild cognitive impairment. Deep learning-based models also use extensive genomic and DNA methylation data to anticipate AD, mitigating the symptoms of usual neurodegeneration (falls, memory loss, etc.).
Our work progresses these approaches (neuroimaging and clinical analysis) by applying a technique that uses an ensemble of CNN models to identify the different stages of AD. Different CNN models were applied to the same dataset and their results compared, and it was found that the MobileNet model is the most efficient for medical image analysis. The rest of the work is presented in the following sections. Section 2 discusses previous findings for diagnosing Alzheimer’s disease. Our CNN-based method for determining the stage of Alzheimer’s disease based on MobileNet and ImageNet weights is described in Section 3. The experimental findings of our model are shown in Section 4. The research’s findings are discussed in Section 5, and Section 6 concludes the paper.

3. Problem Description and Solution Strategy

As highlighted in Section 2, numerous paradigms encompassing AD prognosis and clinical image assessment have recently been presented in the literature. However, most do not use transfer learning algorithms, multi-class clinical object detection, or an Alzheimer’s disease monitoring cloud service to assess distinct AD phases and provide remote guidance. These issues have received insufficient attention in the literature. Thus, following the other cutting-edge techniques discussed in Section 2, the novelties of this research can be organized as follows:
A novel framework is devised to identify the various phases of the Alzheimer’s ailment and classify medical images. The suggested method relies on CNN architectures for structural MRI images of the brain. Transfer learning is utilized to leverage the efficiency of already trained architectures, such as VGG19, ResNet50, and DenseNet121.
Unbalanced datasets and limited size are the most problematic aspects of medical image analysis. Resampling techniques are employed to balance the datasets, while data-expansion methods are utilized to enlarge the dataset and overcome over-fitting issues. According to the performance indicators, the experimental results indicate a positive outcome.

4. Methods and Materials

The early diagnosis of Alzheimer’s ailment is critical for precluding and managing its progression. The data used in this research are taken from ADNI (Alzheimer’s Disease Neuroimaging Initiative). The Alzheimer’s Disease Neuroimaging Initiative (ADNI) is a longitudinal study designed to develop clinical, imaging, genetic, and biochemical biomarkers for the early detection and tracking of Alzheimer’s disease (AD). This initiative started in 2004 and is supported by multiple companies. Complete 1-year data were taken from ADNI to design this novel research. The main purpose of this research is to design a novel approach for the initial identification and tracking of Alzheimer’s stages. The proposed framework workflow, data preparation algorithms, and medical image classification techniques are thoroughly explained below.

5. The Proposed Framework

The proposed approach consists of the following four steps:
Step 1. Data Attainment
This approach uses the ADNI dataset in the T2w MRI format. It offers medical images in JPEG format in the axial, coronal, and sagittal planes. The dataset comprises data from 300 subjects divided into five classes: Alzheimer’s disease (AD), mild cognitive impairment (MCI), early MCI (EMCI), late MCI (LMCI), and normal control (NC). A patient with LMCI symptoms is more likely to progress to AD than an EMCI subject because the patient has undergone severe neuron damage in this stage. The total number of images available is 1101: the AD class comprises 145 images, EMCI 204, LMCI 61, MCI 198, and NC 493. We re-sized all the images to 224 × 224 pixels, and three channels (RGB) were used. A batch size of 32 images was transferred at each iteration during training to reduce the computational load (height = 224, width = 224, channels = 3, batch size = 32).
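The class distribution and input configuration described above can be restated (and sanity-checked) in a few lines; the figures below are copied directly from the text:

```python
# Class distribution of the ADNI image set as reported above.
class_counts = {"AD": 145, "EMCI": 204, "LMCI": 61, "MCI": 198, "NC": 493}

total_images = sum(class_counts.values())  # should equal the stated 1101
input_shape = (224, 224, 3)                # resized height, width, RGB channels
batch_size = 32                            # images transferred per iteration
```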
Step 2. Pre-processing
The data are unbalanced, and training on an unbalanced dataset leads to underfitting or overfitting issues; in the end, the model would not be able to classify the images correctly. The solution is to balance the data, for which we use an upsampling technique, in which the labels with a smaller number of images are upsampled. After resampling, every class contains 580 MRI images, and thus the entire dataset size is 2900. The data are refined, standardized, scaled, denoised, and formatted appropriately. In the future, other techniques, such as downsampling, will also be used. Figure 1 illustrates the resampling technique on MRI images.
Figure 1. Example of resampling technique applied on MRI image [,].
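A minimal sketch of duplication-based upsampling: classes with fewer images are topped up with random copies until all classes match. In the paper every class is raised to 580 images; the tiny file-name dataset and target of 5 below are purely illustrative.

```python
import random

def upsample_to(labels_to_images, target):
    """Randomly duplicate images in under-represented classes until
    every class holds `target` samples (duplication-based upsampling)."""
    balanced = {}
    for label, images in labels_to_images.items():
        extra = [random.choice(images) for _ in range(target - len(images))]
        balanced[label] = images + extra
    return balanced

# Toy example: file names stand in for MRI images.
dataset = {"LMCI": ["a.jpg", "b.jpg"], "NC": ["c.jpg", "d.jpg", "e.jpg"]}
balanced = upsample_to(dataset, 5)
```

Random duplication is the simplest balancing scheme; the augmentation step that follows then keeps the duplicated copies from remaining pixel-identical.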
Step 3. Data Augmentation
The primary objective of employing data augmentation methods is to (1) expand the data size and (2) solve the issue of overfitting. Data augmentation approaches are used in the following way.
The input images are pre-processed by using the pre-processing function of the pre-trained model, horizontal flipping of the images, rotation of the images by 5 degrees, and width and height shifts of the images.
We used the Keras ImageDataGenerator API to apply the data augmentation. We can observe that some images are rotated by 5 degrees, and some are flipped, as shown in Figure 2.
Figure 2. The different augmentation techniques are rotation, flipping, shift, etc. In our approach, some images are 5 degrees rotated, some are flipped, etc. [].
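The paper applies these transforms through Keras’s ImageDataGenerator; purely to make the operations concrete, the two simplest ones (horizontal flip and width shift) can be written out by hand on a toy “image”, with nested lists standing in for pixel arrays:

```python
def hflip(image):
    """Horizontal flip: reverse each pixel row."""
    return [row[::-1] for row in image]

def width_shift(image, pixels, fill=0):
    """Shift every row `pixels` columns to the right, padding with `fill`."""
    return [[fill] * pixels + row[:-pixels] for row in image]

img = [[1, 2, 3],
       [4, 5, 6]]
flipped = hflip(img)           # rows reversed left-to-right
shifted = width_shift(img, 1)  # everything pushed one column right
```

A 5-degree rotation additionally requires pixel interpolation, which is why a library generator is used in practice rather than hand-rolled code.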
As a result, the dataset expands to 2900 images, partitioned into 580 images in each class. After that, the balanced dataset of 2900 MRI scans is reconfigured and randomly fragmented into training, validation, and test groups, with an 80:10:10 split ratio for each class. The division of data testing, training, and validation groups for 5-way classification is summarized in Table 1 (CN vs. MCI vs. EMCI vs. LMCI vs. AD).
Table 1. Images input to the model from the different AD classes. Two thousand images (400 from each class) are used for training, 450 for validation (90 from each class), and 450 for testing (90 from each class) [].
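A sketch of the stratified split described above, applied to one class at a time so that every class contributes proportionally to the training, validation, and test groups (the 100 placeholder items below are illustrative; any per-class image list works):

```python
import random

def split_class(images, ratios=(0.8, 0.1, 0.1), seed=0):
    """Shuffle one class's images and cut them into train/val/test."""
    images = list(images)
    random.Random(seed).shuffle(images)
    n_train = int(len(images) * ratios[0])
    n_val = int(len(images) * ratios[1])
    return (images[:n_train],
            images[n_train:n_train + n_val],
            images[n_train + n_val:])

train, val, test = split_class(range(100))  # 80 / 10 / 10 items
```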
Step 4. Pre-processing Techniques
  • Data normalization: Data normalization is beneficial for removing different redundancies from the datasets, such as varied contrasts and varied subject poses, to simplify the detection of subtle differences. It rescales the attributes to a mean value of 0 and a standard deviation of 1. Different normalization techniques exist, such as Z normalization (also called standardization), min–max normalization, and unit vector normalization. We applied unit vector normalization to our dataset.
  • Unit vector normalization: It shrinks/stretches a vector and scales it to unit length. We applied it to the whole dataset, and the transformed data can be viewed as a cluster of vectors with distinct trajectories on the d-dimensional unit sphere. The general formula for unit vector normalization is Û = U / |U|, where Û is the normalized vector, U a non-zero vector, and |U| the length of U.
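The unit vector normalization described above follows directly from the formula Û = U / |U| and can be sketched in a few lines:

```python
import math

def unit_normalize(u):
    """Scale a non-zero vector to unit length: û = u / |u|."""
    length = math.sqrt(sum(x * x for x in u))
    return [x / length for x in u]

u_hat = unit_normalize([3.0, 4.0])  # |[3, 4]| = 5, so û = [0.6, 0.8]
```

Every non-zero vector normalized this way lands on the unit sphere, which is exactly the “cluster of vectors on the d-dimensional unit sphere” view mentioned above.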

6. Proposed Classification Methods and Techniques

The three critical components of machine learning algorithms are feature extraction, feature reduction, and classification. All three steps are performed manually or separately when implementing classical machine learning algorithms. The beauty of deep learning algorithms such as CNNs is that there is no need for manual feature extraction; these three stages are performed in combination within the CNN architecture. CNN architectures have higher classification performance than traditional models. The three layers of a CNN architecture are the convolution layer, the pooling layer, and the fully connected layer []. Feature extraction is the responsibility of the convolution layer, dimension reduction of the pooling layer, and classification of the fully connected layers. The conversion of two-dimensional matrices into one-dimensional vectors is also performed by the fully connected layer [].

6.1. Convolution Layer

It acts as the base of the CNN architecture. It comprises a set of filters, also called kernels, which are learned through the training process. The filter dimensions are smaller than those of the real image. The filters convolve with the image and create activation maps; in this way, the convolution layer extracts the features. For a three-dimensional image with dimensions H × W × C, H denotes the height, W the width, and C the total count of channels. A 3D filter of size FH × FW × FC is applied, where FH denotes the filter height, FW the filter width, and FC the number of filter channels. The output activation map then has size AH × AW, where AH stands for the activation height and AW for the activation width. The following equations are used to calculate the activation height and width values.
AH = (H − FH + 2P)/S + 1    (1)
AW = (W − FW + 2P)/S + 1    (2)
P signifies padding, S represents stride, and there are n filters, so the activation map dimensions must turn out to be AH × AW × n. Figure 3 illustrates the complete convolution.
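Equations (1) and (2) translate directly into code (the integer division assumes the sizes divide evenly, as is usual for padded convolutions):

```python
def conv_output_size(h, w, fh, fw, padding, stride):
    """Spatial size of the activation map produced by a convolution."""
    ah = (h - fh + 2 * padding) // stride + 1
    aw = (w - fw + 2 * padding) // stride + 1
    return ah, aw

# A 3x3 filter with padding 1 and stride 1 preserves a 224x224 input.
out = conv_output_size(224, 224, 3, 3, 1, 1)
```

With “same” padding (P = 1 for a 3 × 3 filter) and stride 1, the spatial size is preserved, which is why 224 × 224 inputs stay 224 × 224 through such layers.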
Figure 3. The complete convolution process is shown in the figure above, where different filters are applied to images before obtaining the final output [].

6.2. Pooling Layer

The pooling layer’s primary purpose is to lower the size of the feature maps, so there are fewer parameters to learn and fewer computations for the network to make. The different pooling layers are max pooling, average pooling, and global pooling. By applying a non-linear transformation to the given inputs, the activation function introduces non-linearity into the network. Our proposed multi-classifier uses the SoftMax activation function in the output layer. The main function of SoftMax is to calculate relative probabilities. The general equation of the SoftMax function is given below in Equation (3).
Softmax(zi) = exp(zi) / Σj exp(zj)    (3)
In this case, z stands for the values of the output-layer neurons, with the exponential being a non-linear function. These values are then normalized and converted into probabilities by dividing them by the sum of the exponential values. For all hidden layers, we applied the ReLU activation function, the most familiar activation function in CNNs. There are different variants of the ReLU activation function, such as parametric ReLU, leaky ReLU, exponential linear units (ELU, SELU), and concatenated ReLU (CReLU). We applied leaky ReLU since it has some benefits over the other variants: it fixes the problem of “dying ReLU” because it has no zero-slope parts, and it speeds up the training process because it is more balanced and therefore learns faster. However, it should be kept in mind that leaky ReLU is not always superior to simple ReLU and should be considered an alternative. The general equations for the ReLU and leaky ReLU activation functions are given below in Equation (4) and Equation (5), respectively.
F(x) = max(0, x)    (4)
f(z) = { z, z > 0; αz, z ≤ 0 }    (5)
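As a quick numerical check, the SoftMax of Equation (3) and both ReLU variants can be evaluated with plain Python:

```python
import math

def relu(x):
    """Equation (4): pass positives through, clamp negatives to zero."""
    return max(0.0, x)

def leaky_relu(z, alpha=0.01):
    """Equation (5): a small constant slope alpha for z <= 0."""
    return z if z > 0 else alpha * z

def softmax(zs):
    """Equation (3): exponentiate and normalize so outputs sum to 1."""
    exps = [math.exp(z) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax([2.0, 1.0, 0.1])  # probabilities over three classes
```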
When z is less than 0, leaky ReLU allows a small, non-zero, constant gradient α; generally, α = 0.01. In our study, for medical image analysis, we combine MobileNet with ImageNet weights and transfer learning for image classification. These architectures can easily handle two- and three-dimensional brain neuroimages built on 2D and 3D convolutions. The general flow of our novel framework is shown in Table 2.
Table 2. Shows the general architecture of the MobileNet model with total trainable and non-trainable parameters [].
We applied the MobileNet model with ImageNet weights to categorize the distinct phases of Alzheimer’s. It uses depth-wise separable convolutions. The main advantage of this model is that it reduces the number of parameters compared to other networks and generates lightweight deep neural networks [,]. It is a class of CNN and gives us an optimal starting point for training a classifier that is remarkably small and fast. MobileNets are built on depthwise separable convolution layers; each such layer consists of a depthwise convolution and a pointwise convolution [,,]. There are almost 4.2 million parameters in a MobileNet architecture. The size of the input image is 224 × 224 × 3. The convolution kernel shape is 3 × 3 × 3 × 32, with an average-pool size of 7 × 7 × 1024. Dropout layers are followed by a flatten layer and fully connected layers. The final fully connected layer with SoftMax as the activation function is implemented to manage the five classes of Alzheimer’s, while ReLU is the activation function for the hidden layers.
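The parameter savings of depthwise separable convolution are easy to verify: for a k × k kernel, a standard convolution needs k·k·Cin·Cout weights, while the depthwise-plus-pointwise pair needs only k·k·Cin + Cin·Cout. The 32→64-channel block below is an illustrative configuration, not taken from the paper’s layer table:

```python
def standard_conv_params(k, c_in, c_out):
    """Weights in a regular k x k convolution (bias terms ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Weights in a k x k depthwise convolution plus a 1 x 1 pointwise one."""
    return k * k * c_in + c_in * c_out

# MobileNet-style block: 3x3 kernels, 32 input and 64 output channels.
regular = standard_conv_params(3, 32, 64)          # 18432 weights
separable = depthwise_separable_params(3, 32, 64)  # 2336 weights
```

For 3 × 3 kernels the reduction is roughly 8×, which is the source of MobileNet’s lightweight design.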
The general architecture of the MobileNet model we applied is shown below in Figure 4. The total trainable parameters are 25,958,917, and the non-trainable parameters are 3,231,936. We used RMSProp as our optimizer with a learning rate of 0.00001, categorical cross-entropy as the loss for multi-class classification, and accuracy as the metric, which reports the training and validation loss and accuracy values during training. The comparison of the novel approach with other developed approaches is shown in the tables below.
Figure 4. Shows the basic design of the proposed model [].

7. Experimental Findings and Model Evaluation

The novel model considers various scenarios. We examined the empirical results in terms of many performance benchmarks, including the confusion matrix, accuracy, loss, F1 score, precision, recall, ROC, sensitivity, and AUC. Table 3 below provides a summary of the novel model.
Table 3. Evaluation of the performance metrics of the devised model [].

Model Evaluation

For the multi-classification, we used the MobileNet architecture, a version of CNN networks. The efficacy of the proposed model is compared with prevailing models; as depicted in Table 3, the proposed model shows better accuracy results than existing models. Our model achieves an accuracy of 96.22%, while Juan Ruiz et al. [] achieved 66.67%, Spasov et al. [] achieved 88%, and Sahumbaiev et al. [] achieved 89.47%. The training and validation accuracy and loss of the suggested approach are shown in Figure 5.
Figure 5. Shows the accuracy and loss values during training over 100 Epochs.
Figure 6. Depicts the normalized confusion matrix generated by the proposed model.
The number of patients diagnosed with each type of AD stage (NC/MCI/AD/LMCI/EMCI) is shown in the confusion matrix. The normalized confusion matrix for the suggested framework is shown in Figure 6.
The comparative analysis of the proposed method with existing approaches is shown graphically in Figure 7. Our approach shows better results than the existing ones.
Figure 7. Assessment of the devised architecture with the existing frameworks. Our model shows better performance results than existing architectures [,].

8. Conclusions

This study presents a system for medical image categorization and Alzheimer’s ailment recognition. The proposed approach is supported by deep-learning CNN architectures. Five stages of Alzheimer’s disease are considered. We employ the MobileNet model with ImageNet weights. The beauty of the MobileNet architecture is that it uses depthwise separable convolutions, which reduce the number of parameters compared to models with regular convolutions and result in lightweight neural networks. The other important characteristic of the MobileNet architecture is that, instead of the single 3 × 3 convolution layer of traditional networks, it splits the convolution into a 3 × 3 depth-wise convolution and a 1 × 1 pointwise convolution, each followed by batch norm and ReLU. MobileNet models are essential for detection, embedding, segmentation, and classification. ImageNet is a standard benchmark for image classification; it provides a standard measure of how efficient a model is at classification. Different performance metrics are implemented for the assessment of the model. Our model achieves an accuracy of 96.22%. In the future, it is planned to implement other pre-trained models for classification purposes and to check whether a patient can convert from one AD stage into another. The dataset’s size will also be increased to improve accuracy, and different resampling approaches, such as downsampling, will be used. The authors are currently working on multi-modal data fusion techniques to detect and diagnose AD.

Author Contributions

Conceptualization A.B. and G.M.u.d.d.; methodology, A.B.; software, G.M.u.d.d.; validation, S.I.A., Y.H. and I.U.; formal analysis, Y.H., I.U. and M.T.B.O.; investigation, Y.H. and H.K.A.; resources, I.U.; data curation, G.M.u.d.d. and H.K.A.; writing—original draft preparation, G.M.u.d.d.; writing—review and editing, A.B. and I.U.; visualization, I.U. and H.H.; supervision, A.B.; project administration, M.T.B.O. and H.H.; funding acquisition, M.T.B.O. and H.H. All authors have read and agreed to the published version of the manuscript.

Funding

The researchers would like to thank the Deanship of Scientific Research, Qassim University for funding the publication of this project.

Data Availability Statement

Not applicable.

Acknowledgments

The researchers would like to thank the Deanship of Scientific Research, Qassim University for funding the publication of this project.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Prince, M.J.; Comas-Herrera, A.; Knapp, M.; Guerchet, M.M.; Karagiannidou, M. World Alzheimer Report 2016—Improving Healthcare for People Living with Dementia: Coverage, Quality and Costs Now and in the Future; Alzheimer’s Disease International (ADI): London, UK, 2016. [Google Scholar]
  2. Prince, M.; Wimo, A.; Guerchet, M.; Ali, G.; Wu, Y.; Prina, M. World Alzheimer Report 2015; Alzheimer’s Disease International (ADI): London, UK, 2015; pp. 1–92. Available online: https://www.alz.co.uk/research/WorldAlzheimerReport2015.pdf (accessed on 14 March 2022).
  3. Armstrong, R.A. The molecular biology of senile plaques and neurofibrillary tangles in Alzheimer’s disease. Folia Neuropathol. 2009, 47, 289–299. [Google Scholar] [PubMed]
  4. Bin Tufail, A.; Ullah, K.; Khan, R.A.; Shakir, M.; Khan, M.A.; Ullah, I.; Ma, Y.-K.; Ali, S. On Improved 3D-CNN-Based Binary and Multiclass Classification of Alzheimer’s Disease Using Neuroimaging Modalities and Data Augmentation Methods. J. Healthc. Eng. 2022, 2022, 1302170. [Google Scholar] [CrossRef] [PubMed]
  5. Ahmad, S.; Ullah, T.; Ahmad, I.; Al-Sharabi, A.; Ullah, K.; Khan, R.A.; Rasheed, S.; Ullah, I.; Uddin, N.; Ali, S. A Novel Hybrid Deep Learning Model for Metastatic Cancer Detection. Comput. Intell. Neurosci. 2022, 2022, 8141530. [Google Scholar] [CrossRef] [PubMed]
  6. Bin Tufail, A.; Ullah, I.; Khan, W.U.; Asif, M.; Ahmad, I.; Ma, Y.-K.; Khan, R.; Kalimullah; Ali, S. Diagnosis of Diabetic Retinopathy through Retinal Fundus Images and 3D Convolutional Neural Networks with Limited Number of Samples. Wirel. Commun. Mob. Comput. 2021, 2021, 6013448. [Google Scholar] [CrossRef]
  7. Ahmad, I.; Ullah, I.; Khan, W.U.; Rehman, A.U.; Adrees, M.S.; Saleem, M.Q.; Cheikhrouhou, O.; Hamam, H.; Shafiq, M. Efficient algorithms for E-healthcare to solve multiobject fuse detection problem. J. Healthc. Eng. 2021, 2021, 9500304. [Google Scholar] [CrossRef]
  8. Porter, J.H.; Prus, A.J. The Discriminative Stimulus Properties of Drugs Used to Treat Depression and Anxiety. Brain Imag. Behav. Neurosci. 2012, 5, 289–320. [Google Scholar]
  9. Noble, W.; Europe PMC Funders Group. Advances in tau-based drug discovery. Expert. Opin. Drug. Discov. 2011, 6, 797–810. [Google Scholar] [CrossRef] [PubMed]
  10. Ferrera, P.; Arias, C. Differential effects of COX inhibitors against b -amyloid-induced neurotoxicity in human neuroblastoma cells. Neurochem. Int. 2005, 47, 589–596. [Google Scholar] [CrossRef]
  11. Gasparini, L.; Ongoing, E.; Wenk, G. Non-steroidal anti-inflammatory drugs (NSAIDs) in Alzheimer’s disease: Old and new mechanisms of action. J. Neurochem. 2004, 91, 521–536. [Google Scholar] [CrossRef]
  12. Reitz, C. Alzheimer’s disease and the amyloid cascade hypothesis: A critical review. Int. J. Alzheimer’s Dis. 2012, 2012, 369808. [Google Scholar] [CrossRef]
  13. Gustafson, D.R.; Morris, M.C.; Scarmeas, N.; Shah, R.C.; Sijben, J.; Yaffe, K.; Zhu, X. New Perspectives on Alzheimer’s Disease and Nutrition. J. Alzheimer’s Dis. 2015, 46, 1111–1127. [Google Scholar] [CrossRef] [PubMed]
  14. Shah, R.C. Medical foods for Alzheimer’s disease. Drugs Aging 2011, 28, 421–428. [Google Scholar] [CrossRef] [PubMed]
  15. Cummings, J. Alzheimer’s disease diagnostic criteria: Practical applications. Alzheimer’s Res. Ther. 2012, 4, 35. [Google Scholar] [CrossRef] [PubMed]
  16. Dubois, B.; Feldman, H.; Jacova, C.; Dekosky, S.; Barberger-Gateau, P.; Cummings, J.; Delacourte, A.; Galasko, D.; Gauthier, S.; Jicha, G.; et al. Research criteria for the diagnosis of Alzheimer’s disease: Revising the NINCDS-ADRDA criteria. Lancet Neurol. 2007, 6, 734–746. [Google Scholar] [CrossRef]
  17. Viola, K.L.; Sbarboro, J.; Sureka, R.; De, M.; Bicca, M.A.; Wang, J.; Vasavada, S.; Satpathy, S.; Wu, S.; Joshi, H.; et al. Towards non-invasive diagnostic imaging of early-stage Alzheimer’s disease. Nat. Nanotechnol. 2015, 10, 91–98. [Google Scholar] [CrossRef] [PubMed]
  18. Grundman, M.; Petersen, R.; Ferris, S.; Thomas, R.; Aisen, P.; Bennett, D.; Foster, N.; Galasko, D.; Doody, R.; Kaye, J.; et al. Mild Cognitive Impairment Can Be Distinguished from Alzheimer Disease and Normal Aging for Clinical Trials. Arch. Neurol. 2004, 61, 59–66. [Google Scholar] [CrossRef] [PubMed]
  19. Ito, H.; Shimada, H.; Shinotoh, H.; Takano, H.; Sasaki, T.; Nogami, T.; Suzuki, M.; Nagashima, T.; Takahata, K.; Seki, C.; et al. Quantitative analysis of amyloid deposition in Alzheimer’s disease using PET and the radiotracer 11 C-AZD2184. J. Nucl. Med. 2014, 55, 932–938. [Google Scholar] [CrossRef]
  20. Bron, E.E.; Smits, M.; van der Flier, W.M.; Vrenken, H.; Barkhof, F.; Scheltens, P.; Papma, J.M.; Steketee, R.M.; Orellana, C.M.; Meijboom, R.; et al. Standardized evaluation of algorithms for computer-aided diagnosis of dementia based on structural MRI: The CADDementia challenge. NeuroImage 2015, 111, 562–579. [Google Scholar] [CrossRef]
  21. Janousova, E.; Vounou, M.; Wolz, R.; Gray, K.R.; Rueckert, D.; Montana, G.; the Alzheimer’s Disease Neuroimaging Initiative. Biomarker discovery for sparse classification of brain images in Alzheimer’s disease. Ann. BMVA 2012, 2012, 1–11. [Google Scholar]
  22. Payan, A.; Montana, G. Predicting Alzheimer’s disease a neuroimaging study with 3D convolutional neural networks. In Proceedings of the ICPRAM 2015—4th International Conference on Pattern Recognition Applications and Methods, Lisbon, Portugal, 10–12 January 2015; Volume 2, pp. 355–362. [Google Scholar]
  23. Bengio, Y.; Courville, A.; Vincent, P. Representation learning: A review and new perspectives. IEEE Trans. Pattern Anal. Mach. Intell. 2013, 35, 1798–1828. [Google Scholar] [CrossRef]
  24. Ebrahimighahnavieh, A.; Luo, S.; Chiong, R. Deep learning to detect Alzheimer’s disease from neuroimaging: A systematic literature review. Comput. Methods Programs Biomed. 2020, 187, 105242. [Google Scholar] [CrossRef] [PubMed]
  25. Pan, D.; Huang, Y.; Zeng, A.; Jia, L.; Song, X. Early Diagnosis of Alzheimer’s Disease Based on Deep Learning and Was. In Human Brain and Artificial Intelligence; Zeng, A., Pan, D., Hao, T., Zhang, D., Shi, Y., Song, X., Eds.; Springer: Singapore, 2019; pp. 52–68. [Google Scholar]
  26. Zhang, F.; Li, Z.; Zhang, B.; Du, H.; Wang, B.; Zhang, X. Multimodal deep learning model for auxiliary diagnosis of Alzheimer’s disease. Neurocomputing 2019, 361, 185–195. Available online: http://www.sciencedirect.com/science/article/pii/S0169260719310946 (accessed on 6 January 2023). [CrossRef]
  27. Spasov, S.; Passamonti, L.; Duggento, A.; Liò, P.; Toschi, N. A parameter-efficient deep learning approach to predict conversion from mild cognitive impairment to Alzheimer’s disease. Neuroimage 2019, 189, 276–287. [Google Scholar] [CrossRef] [PubMed]
  28. Park, C.; Ha, J.; Park, S. Prediction of Alzheimer’s disease based on the deep neural network by integrating gene expression and DNA methylation dataset. Expert Syst. Appl. 2020, 140, 112873. [Google Scholar] [CrossRef]
  29. Sarraf, S.; Tofghi, G. Classification of Alzheimer’s disease structural MRI data by deep learning convolutional neural networks. arXiv 2016, arXiv:1607.06583. [Google Scholar]
  30. Hosseini-asl, E.; Kenton, R.; El-baz, A. Alzheimer’s Disease Diagnostics by Adaptation of 3d Convolutional Network. Electrical and Computer Engineering Department. University of Louisville: Louisville. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; Volume 502. [Google Scholar]
  31. Gupta, A.; Ayhan, M.; Maida, A. Natural Image Bases to Represent Neuroimaging Data. In Proceedings of the 30th International Conference on International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 987–994. [Google Scholar]
  32. Brosch, T.; Tam, R. Manifold learning of brain MRIs by deep learning. In Lecture Notes in Computer Science, Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Nagoya, Japan, 22–26 September 2013; Springer Nature Switzerland AG.: Cham, Switzerlands, 2013; pp. 633–640. [Google Scholar]
  33. Tufail, A.B.; Ma, Y.-K.; Zhang, Q.-N.; Khan, A.; Zhao, L.; Yang, Q.; Adeel, M.; Khan, R.; Ullah, I. 3D convolutional neural networks-based multiclass classification of Alzheimer’s and Parkinson’s diseases using PET and SPECT neuroimaging modalities. Brain Inform. 2021, 8, 23. [Google Scholar] [CrossRef]
  34. Bilal, A.; Shafiq, M.; Fang, F.; Waqar, M.; Ullah, I.; Ghadi, Y.Y.; Long, H.; Zeng, R. IGWO-IVNet3: DL-Based Automatic Diagnosis of Lung Nodules Using an Improved Gray Wolf Optimization and InceptionNet-V3. Sensors 2022, 22, 9603. [Google Scholar] [CrossRef]
  35. Mazhar, T.; Nasir, Q.; Haq, I.; Kamal, M.M.; Ullah, I.; Kim, T.; Mohamed, H.G.; Alwadai, N. A Novel Expert System for the Diagnosis and Treatment of Heart Disease. Electronics 2022, 11, 3989. [Google Scholar]
  36. Tufail, A.B.; Ullah, I.; Rehman, A.U.; Khan, R.A.; Khan, M.A.; Ma, Y.K.; Khokhar, N.H.; Sadiq, M.T.; Khan, R.; Shafiq, M.; et al. On Disharmony in Batch Normalization and Dropout Methods for Early Categorization of Alzheimer’s Disease. Sustainability 2022, 14, 14695. [Google Scholar] [CrossRef]
  37. Liu, F.; Shen, C. Learning deep convolutional features for MRI based Alzheimer’s disease classification. arXiv 2014, arXiv:1404.3366. [Google Scholar]
  38. Korolev, S.; Safiullin, A.; Belyaev, M.; Dodonova, Y. Residual and plain convolutional neural networks for 3D brain MRI classification. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, VIC, Australia, 18–21 April 2017; pp. 835–838. [Google Scholar]
  39. Sarraf, S.; Tofighi, G. Classification of Alzheimer’s disease using fMRI data and deep learning convolutional neural networks. arXiv 2016, arXiv:1603.08631. [Google Scholar]
  40. Suk, H.I.; Lee, S.-W.; Shen, D.; the Alzheimer’s Disease Neuroimaging Initiative. Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis. Neuroimage 2014, 101, 569–582. [Google Scholar] [CrossRef] [PubMed]
  41. Suk, H.I.; Shen, D. Deep learning-based feature representation for AD/MCI classification. Med. Image Comput. Comput. Assist. Interv. 2013, 16, 583–590. [Google Scholar] [PubMed]
  42. Suk, H.I.; Lee, S.-W.; Shen, D.; the Alzheimer’s Disease Neuroimaging Initiative. Latent feature representation with stacked auto-encoder for AD/MCI diagnosis. Brain Struct. Funct. 2015, 220, 841–859. [Google Scholar] [CrossRef]
  43. Suk, H.I.; Shen, D.; the Alzheimer’s Disease Neuroimaging Initiative. Deep Learning in the Diagnosis of Brain Disorders. In Recent Progress in Brain and Cognitive Engineering; Springer: Cham, Switzerland, 2015; pp. 203–213. [Google Scholar]
  44. Wang, Y.; Yang, Y.; Guo, X.; Ye, C.; Gao, N.; Fang, Y.; Ma, H.T. A novel multimodal MRI analysis for Alzheimer’s disease based on convolutional neural network. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 754–757. [Google Scholar]
  45. Song, T.-A.; Chowdhury, S.R.; Yang, F.; Jacobs, H.; El Fakhri, G.; Li, Q.; Johnson, K.; Dutta, J. Graph convolutional neural networks for Alzheimer’s disease. In Proceedings of the 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019), Venice, Italy, 8–11 April 2019; pp. 414–417. [Google Scholar]
  46. Jain, R.; Jain, N.; Aggarwal, A.; Hemanth, D.J. Convolutional neural network-based Alzheimer’s disease classification from magnetic resonance brain images. Cogn. Syst. Res. 2019, 57, 147–159. [Google Scholar] [CrossRef]
  47. Spasov, S.E.; Passamonti, L.; Duggento, A.; Lio, P.; Toschi, N. A Multi-modal Convolutional Neural Network Framework for the Prediction of Alzheimer’s Disease. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 1271–1274. [Google Scholar] [CrossRef]
  48. Bin Tufail, A.; Anwar, N.; Ben Othman, M.T.; Ullah, I.; Khan, R.A.; Ma, Y.-K.; Adhikari, D.; Rehman, A.U.; Shafiq, M.; Hamam, H. Early-Stage Alzheimer’s Disease Categorization Using PET Neuroimaging Modality and Convolutional Neural Networks in the 2D and 3D Domains. Sensors 2022, 22, 4609. [Google Scholar] [CrossRef]
  49. Haq, I.; Mazhar, T.; Malik, M.A.; Kamal, M.M.; Ullah, I.; Kim, T.; Hamdi, M.; Hamam, H. Lung Nodules Localization and Report Analysis from Computerized Tomography (CT) Scan Using a Novel Machine Learning Approach. Appl. Sci. 2022, 12, 12614. [Google Scholar] [CrossRef]
  50. Bin Tufail, A.; Ullah, I.; Khan, R.; Ali, L.; Yousaf, A.; Rehman, A.U.; Alhakami, W.; Hamam, H.; Cheikhrouhou, O.; Ma, Y.-K. Recognition of Ziziphus lotus through Aerial Imaging and Deep Transfer Learning Approach. Mob. Inf. Syst. 2021, 2021, 4310321. [Google Scholar] [CrossRef]
  51. Khan, R.; Yang, Q.; Ullah, I.; Rehman, A.U.; Bin Tufail, A.; Noor, A.; Rehman, A.; Cengiz, K. 3D convolutional neural networks based automatic modulation classification in the presence of channel noise. IET Commun. 2021, 16, 497–509. [Google Scholar] [CrossRef]
  52. Bin Tufail, A.; Ma, Y.-K.; Kaabar, M.K.A.; Martínez, F.; Junejo, A.R.; Ullah, I.; Khan, R. Deep Learning in Cancer Diagnosis and Prognosis Prediction: A Minireview on Challenges, Recent Trends, and Future Directions. Comput. Math. Methods Med. 2021, 2021, 9025470. [Google Scholar] [CrossRef]
  53. Sahumbaiev, I.; Popov, A.; Ramirez, J.; Gorriz, J.M.; Ortiz, A. 3D-CNN HadNet classification of MRI for Alzheimer’s Disease diagnosis. In Proceedings of the 2018 IEEE Nuclear Science Symposium and Medical Imaging Conference Proceedings (NSS/MIC), Sydney, NSW, Australia, 10–17 November 2018; pp. 3–6. [Google Scholar]
  54. Ansarullah, S.I.; Saif, S.M.; Andrabi, S.A.B.; Kumhar, S.H.; Kirmani, M.M.; Kumar, P. An Intelligent and Reliable Hyperparameter Optimization Machine Learning Model for Early Heart Disease Assessment Using Imperative Risk Attributes. J. Healthc. Eng. 2022, 2022, 9882288. [Google Scholar] [CrossRef] [PubMed]
  55. Ansarullah, S.I.; Saif, S.M.; Kumar, P.; Kirmani, M.M. Significance of Visible Non-Invasive Risk Attributes for the Initial Prediction of Heart Disease Using Different Machine Learning Techniques. Comput. Intell. Neurosci. 2022, 2022, 9580896. [Google Scholar] [CrossRef] [PubMed]
  56. Ansarullah, S.I.; Kumar, P. A systematic literature review on cardiovascular disorder identification using knowledge mining and machine learning method. Int. J. Recent Technol. Eng. 2019, 7, 1009–1015. [Google Scholar]
  57. Saif, S.M.; Ansarullah, S.I.; Ben Othman, M.T.; Alshmrany, S.; Shafiq, M.; Hamam, H. Impact of ICT in Modernizing the Global Education Industry to Yield Better Academic Outreach. Sustainability 2022, 14, 6884. [Google Scholar] [CrossRef]
  58. Li, Y.; Wang, Z.; Yin, L.; Zhu, Z.; Qi, G.; Liu, Y. X-Net: A dual encoding–decoding method in medical image segmentation. Vis. Comput. 2021, 1–11. [Google Scholar] [CrossRef]
  59. Xu, Y.; He, X.; Xu, G.; Qi, G.; Yu, K.; Yin, L.; Yang, P.; Yin, Y.; Chen, H. A medical image segmentation method based on multi-dimensional statistical features. Front. Neurosci. 2022, 16, 1009581. [Google Scholar] [CrossRef]
  60. Sharma, A.; Singh, P.; Dar, G. Artificial Intelligence and Machine Learning for Healthcare Solutions. In Data Analytics in Bioinformatics: A Machine Learning Perspective; Scrivener Publishing LLC: Beverly, MA, USA, 2021; pp. 281–291. [Google Scholar]
  61. Mohiuddin, G.; Sharma, A.; Singh, P. Deep Learning Models for Detection and Diagnosis of Alzheimer’s Disease. In Machine Learning and Data Analytics for Predicting, Managing, and Monitoring Disease; IGI Global: Hershey, PA, USA, 2021; pp. 140–149. [Google Scholar]
  62. Zhu, Z.; He, X.; Qi, G.; Li, Y.; Cong, B.; Liu, Y. Brain tumor segmentation based on the fusion of deep semantics and edge information in multimodal MRI. Inf. Fusion 2023, 91, 376–387. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
