Detection of Subarachnoid Hemorrhage Using CNN with Dynamic Factor and Wandering Strategy-Based Feature Selection
Abstract
1. Introduction
- Proposed a modified region-growing method for segmenting the regions affected by SAH. Region-growing is a simple and efficient segmentation method, but in its conventional form the seed points are not positioned precisely when the background has a color and texture similar to the object of interest. This is overcome in the modified region-growing method by selecting the seed points based on orientation and intensity threshold values.
- Integrated three pre-trained models (GoogleNet, VGG-16, and ResNet50) for extracting features from the segmented images. GoogleNet extracts a wide range of patterns and irregularities, VGG-16 extracts fine-grained textures and details, and ResNet50 extracts complex and abstract features, together enhancing SAH grade classification.
- The use of high-level (semantic content and visual patterns) and low-level (color, texture, contrast, and edges) features narrowed the semantic gap between the feature vectors, which increased the success rate of SAH grade classification.
- DWSCSO was developed to reduce the dimensionality of the extracted features, which significantly decreased the training time and complexity of the classifier. The water waves dynamic factor (WWDF) and the wandering strategy were incorporated into DWSCSO for effective adaptation to complex optimization tasks and for increased exploration capacity.
- The Parametric Rectified Linear Unit (PReLU) Stacked Convolutional Neural Network (PRSCNN) model was used for stable SAH grade classification. PReLU was chosen because it offers a smoother gradient flow during backpropagation, which supports stable training and avoids the vanishing gradient issue (a brief illustration follows this list).
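As a brief illustration of the last contribution, the following is a minimal sketch of a PReLU-activated convolutional block in PyTorch. The block layout, channel count, and slope initialization are illustrative assumptions rather than the exact PRSCNN architecture.

```python
import torch
import torch.nn as nn

class PReLUConvBlock(nn.Module):
    """Minimal PReLU-activated convolutional block (illustrative, not the exact PRSCNN)."""

    def __init__(self, in_channels: int = 1, out_channels: int = 128):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        # PReLU: f(x) = x for x > 0, a * x otherwise, with the slope a learned per channel.
        # Unlike ReLU (a = 0), the non-zero negative slope keeps gradients flowing during
        # backpropagation, which is what helps avoid the vanishing gradient issue.
        self.act = nn.PReLU(num_parameters=out_channels, init=0.25)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.conv(x))

if __name__ == "__main__":
    block = PReLUConvBlock()
    dummy = torch.randn(1, 1, 64, 64)   # a dummy single-channel patch
    print(block(dummy).shape)           # torch.Size([1, 128, 64, 64])
```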
2. Literature Survey
3. Methods
3.1. Data Description
3.1.1. Collected Dataset
3.1.2. CT Dataset
3.2. Region Segmentation
- Initially, the image gradients were computed using the Sobel operator, where the gradient denotes the rate of change in pixel values in the NCCT images. In the context of region growing, the Sobel operator applies two convolution kernels to detect variations in the horizontal and vertical directions, and from these it computes the gradient magnitude and direction for every pixel in an NCCT image. The region-growing criteria were defined according to the gradient magnitude, while the gradient direction was used to guide the direction of region expansion (a minimal sketch follows this list).
- Then, the NCCT images were partitioned into grids based on the orientation threshold and the intensity threshold, which were identified using the Sobel operator and histogram analysis, respectively. The Sobel operator detects edges by evaluating each pixel's gradient magnitude and orientation; in general, it is used to discover variations in image intensity, computing both the level of intensity variation (magnitude) and the direction of orientation. Equations (1) and (2) are used to compute the gradient with the Sobel operator.
- The process is repeated grid by grid until the number of processed grids equals the total number of grids in the NCCT image. This is conducted as follows:
- In each grid, compute the histogram value of every pixel;
- Select the most frequent histogram value of the grid;
- Select a pixel according to this histogram value and assign it as the seed point, recording its orientation and intensity values;
- Then, consider the adjacent pixels with their orientation and intensity values;
- Finally, determine the differences in orientation and intensity between the seed pixel and its adjacent pixels using Equations (3) and (4). The sample segmented images are graphically presented in Figure 3, where the red area denotes the segmented portions.
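Below is a minimal Python sketch of the steps above, assuming OpenCV and NumPy. The function names (sobel_gradients, select_seed, grow_region), the histogram bin count, and the numeric thresholds are illustrative placeholders; they do not reproduce the exact formulation of Equations (1)–(4).

```python
import cv2
import numpy as np

def sobel_gradients(image: np.ndarray):
    """Gradient magnitude and orientation (in degrees) via the Sobel operator."""
    gx = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=3)   # horizontal variation
    gy = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=3)   # vertical variation
    return np.hypot(gx, gy), np.degrees(np.arctan2(gy, gx))

def select_seed(grid: np.ndarray):
    """Pick the pixel whose intensity lies in the most frequent histogram bin."""
    hist, edges = np.histogram(grid, bins=32)
    peak = edges[np.argmax(hist)]
    return np.unravel_index(np.argmin(np.abs(grid - peak)), grid.shape)

def grow_region(image, seed, magnitude, orientation,
                intensity_thr=15.0, orientation_thr=30.0, edge_thr=100.0):
    """Grow a region from the seed while intensity/orientation differences stay small."""
    h, w = image.shape
    visited = np.zeros((h, w), dtype=bool)
    region = np.zeros((h, w), dtype=bool)
    seed_int, seed_ori = float(image[seed]), float(orientation[seed])
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if not (0 <= y < h and 0 <= x < w) or visited[y, x]:
            continue
        visited[y, x] = True
        if (abs(float(image[y, x]) - seed_int) <= intensity_thr
                and abs(float(orientation[y, x]) - seed_ori) <= orientation_thr
                and magnitude[y, x] < edge_thr):           # stop at strong edges
            region[y, x] = True
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return region

# Usage (hypothetical file name); in the paper the seeds are chosen per grid,
# while the whole slice is used here for brevity:
# img = cv2.imread("ncct_slice.png", cv2.IMREAD_GRAYSCALE).astype(float)
# mag, ori = sobel_gradients(img)
# mask = grow_region(img, select_seed(img), mag, ori)
```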
3.3. Feature Extraction
3.4. Feature Optimization
3.4.1. Exploration Phase (Searching for Prey)
3.4.2. Exploitation Phase (Attacking Prey)
Algorithm 1. Pseudocode of the DWSCSO algorithm
//Step 1: Initialization
Input: Maximum iterations T, population size N, fitness function F, WWDF, α, r, rG
//Initialize population P of N search agents (feature vectors)
For each search agent i in P do
    Search the solution space //The solution space states the range of probable feature values for every agent.
End
//Step 2: Estimate the Initial Fitness
For every search agent i in P do
    Calculate fitness Fi for agent i using the fitness function F
    Fi = Accuracy(knn_classifier(X_train, y_train))
End
//Step 3: Main Loop—Iterate over Maximum Iterations T
For iteration t = 1 to T do
    //Step 3.1: For every search agent, accomplish exploration or exploitation
    For every search agent i in P do
        //Step 3.1.1: Select a random angle α for the direction of movement (0° ≤ α ≤ 360°)
        α = random_angle() //Randomly selected using RWS.
        //Step 3.1.2: Identify whether exploration or exploitation is to be executed
        If |rG| > 1 then //Exploration phase
            //Step 3.1.2.1: Exploration—Move to a new position using the Triangle Walk Scheme
            new_position_Xi = Xi + WWDF ∗ r ∗ cos(α) ∗ random_step()
        Else //Exploitation phase
            //Step 3.1.2.2: Exploitation—Move using LF
            new_position_Xi = Xi + levy_flight() ∗ (best_position − Xi) //LF generates a random step based on the Lévy distribution.
        End
        //Step 3.1.3: Verify if the new position is valid
        If new_position_Xi is invalid (e.g., NaN, Inf, out-of-bounds) then
            //Reset the agent’s position to a valid random location in the search space
            new_position_Xi = random_valid_position()
        End
        //Step 3.1.4: Evaluate the fitness of the new position
        new_fitness_Fi = evaluate_fitness(new_position_Xi) //The fitness function evaluates the new feature vector’s classification accuracy.
        //Step 3.1.5: Update the agent’s position and fitness if the new position is better
        If new_fitness_Fi > current_fitness_Fi then
            Xi = new_position_Xi //Update the agent’s position
            Fi = new_fitness_Fi //Update the agent’s fitness
        Else
            Retain the current position Xi and fitness Fi
        End
    End
    //Step 3.2: Convergence Check—Monitor improvement
    If no substantial improvement in fitness after X consecutive iterations then
        //Enhance the step size in LF to escape local optima and encourage exploration
        WWDF = WWDF ∗ 1.5 //Enhance the exploration factor to cover more of the search space
    End
    //Step 3.3: Adjust parameters dynamically
    Adjust r and rG based on the iteration number t
End
//Step 4: Return the best feature vector
Determine the agent with the greatest fitness score
Return best_position (best feature vector) and the corresponding fitness
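The listing above is pseudocode. As a complement, the following is a minimal runnable Python sketch of the same exploration/exploitation loop under simplifying assumptions: each agent is a continuous position vector in [0, 1] thresholded at 0.5 to obtain a binary feature mask, the fitness is KNN cross-validation accuracy (as in Step 2), and the Lévy step uses Mantegna's algorithm. The helper names (dwscso_select, knn_fitness, levy_step), the rG decay schedule, and the step scaling are illustrative and are not the authors' exact implementation.

```python
import numpy as np
from math import gamma, pi, sin
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def knn_fitness(mask, X, y):
    """Fitness = KNN cross-validation accuracy on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(KNeighborsClassifier(n_neighbors=5), X[:, mask], y, cv=3).mean()

def levy_step(size, rng, beta=1.5):
    """Lévy-distributed step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, size) / np.abs(rng.normal(0, 1, size)) ** (1 / beta)

def dwscso_select(X, y, n_agents=10, max_iter=30, wwdf=1.0, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    pos = rng.random((n_agents, dim))                      # continuous positions in [0, 1]
    fit = np.array([knn_fitness(p > 0.5, X, y) for p in pos])
    best, best_fit = pos[fit.argmax()].copy(), fit.max()
    for t in range(max_iter):
        rg = 2.0 * (1 - t / max_iter)                      # sensitivity range, decays 2 -> 0
        for i in range(n_agents):
            alpha = rng.uniform(0, 2 * np.pi)              # random movement angle
            r = rg * rng.random()
            if abs(rg) > 1:                                # exploration (WWDF-scaled walk)
                new = pos[i] + wwdf * r * np.cos(alpha) * rng.random(dim)
            else:                                          # exploitation via Lévy flight
                new = pos[i] + 0.01 * levy_step(dim, rng) * (best - pos[i])
            new = np.clip(new, 0.0, 1.0)                   # keep the position valid
            new_fit = knn_fitness(new > 0.5, X, y)
            if new_fit > fit[i]:                           # greedy update
                pos[i], fit[i] = new, new_fit
                if new_fit > best_fit:
                    best, best_fit = new.copy(), new_fit
    return best > 0.5, best_fit

# Usage example on a synthetic dataset:
if __name__ == "__main__":
    from sklearn.datasets import make_classification
    X_demo, y_demo = make_classification(n_samples=120, n_features=30,
                                         n_informative=8, random_state=0)
    mask, acc = dwscso_select(X_demo, y_demo, n_agents=8, max_iter=10)
    print(mask.sum(), "features selected, CV accuracy ~", round(acc, 3))
```

Thresholding continuous positions at 0.5 is one common way of mapping swarm positions to binary feature masks; the WWDF enlargement on stagnation (Step 3.2) is omitted here for brevity.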
3.4.3. Error Handling and Edge Case Considerations
Convergence Issues
Local Minima Traps
Invalid Inputs Handling
Outlier Detection and Robustness
3.5. SAH Grade Classification
4. Results and Discussion
4.1. Evaluation Measures
4.2. Quantitative Analysis Related to Segmentation
4.3. Quantitative Analysis Related to Feature Extraction
4.4. Quantitative Analysis Related to Classification
- Decision tree (criterion is Gini, splitter is best, and maximum depth is 30);
- GCN (layer is 3, hidden size is 64 and dropout rate is 0.2);
- ANN (learning number is 13, learning rate is 0.001, and target error is 0.001);
- Autoencoder (dropout rate is 0.5, epoch is 100, batch size is 128, and learning rate is 0.001);
- CNN and PRSCNN (layer is 7, kernel size is 3 × 3, filters/channels per layer is 128, pooling size is 2 × 2, learning rate is 0.001, batch size is 32, number of epochs is 10, and regularization weight is 0.0001). A hedged configuration sketch follows this list.
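To make the listed CNN/PRSCNN hyperparameters concrete, here is a hypothetical PyTorch configuration sketch. The input channel count, pooling placement, and number of output grades are assumptions not stated in the list; only the values named above (7 layers, 128 filters, 3 × 3 kernels, 2 × 2 pooling, learning rate 0.001, batch size 32, 10 epochs, regularization weight 0.0001) come from it.

```python
import torch
import torch.nn as nn

def build_prscnn(in_channels: int = 1, n_classes: int = 4) -> nn.Module:
    """Hypothetical PRSCNN-style stack: 7 conv layers, 128 filters, 3x3 kernels, PReLU."""
    layers, channels = [], in_channels
    for i in range(7):
        layers += [nn.Conv2d(channels, 128, kernel_size=3, padding=1), nn.PReLU(128)]
        if i % 2 == 1:                      # 2x2 pooling after every second block (assumption)
            layers.append(nn.MaxPool2d(kernel_size=2))
        channels = 128
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, n_classes)]
    return nn.Sequential(*layers)

model = build_prscnn()
# Training hyperparameters taken from the list above; weight_decay acts as the
# L2 regularization weight.
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.0001)
criterion = nn.CrossEntropyLoss()
batch_size, num_epochs = 32, 10
```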
4.5. Quantitative Analysis Related to Noisy Images and Handling of Missing Values
4.6. Comparative Analysis
4.7. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Segmentation Methods | JI | DSC | PA | MPA |
---|---|---|---|---|
FCM clustering | 0.72 | 0.78 | 0.74 | 0.69 |
Otsu thresholding | 0.73 | 0.79 | 0.78 | 0.73 |
K-means clustering | 0.81 | 0.88 | 0.84 | 0.77 |
Superpixel clustering | 0.92 | 0.93 | 0.89 | 0.84 |
Region-growing | 0.89 | 0.92 | 0.92 | 0.87 |
Modified region-growing | 0.94 | 0.95 | 0.93 | 0.90 |
Segmentation Methods | JI | DSC | PA | MPA |
---|---|---|---|---|
FCM clustering | 0.75 | 0.80 | 0.77 | 0.72 |
Otsu thresholding | 0.79 | 0.82 | 0.83 | 0.77 |
K-means clustering | 0.83 | 0.92 | 0.88 | 0.82 |
Superpixel clustering | 0.94 | 0.95 | 0.92 | 0.88 |
Region-growing | 0.91 | 0.94 | 0.95 | 0.92 |
Modified region-growing | 0.96 | 0.96 | 0.95 | 0.93 |
Feature Extraction Methods | Accuracy (%) | MCC (%) | F1 Score (%) | Sensitivity (%) | Specificity (%) | AUC |
---|---|---|---|---|---|---|
GoogleNet | 95.27 | 94.12 | 94.67 | 93.64 | 96.12 | 0.952 |
VGG-16 | 96.35 | 95.18 | 95.84 | 94.21 | 97.06 | 0.961 |
VGG-19 | 96.81 | 95.67 | 96.33 | 94.76 | 97.53 | 0.963 |
SqueezeNet | 94.00 | 92.85 | 93.40 | 91.71 | 95.19 | 0.940 |
ResNet50 | 97.42 | 96.25 | 96.81 | 96.31 | 98.22 | 0.975 |
GoogleNet + VGG-16 + ResNet50 | 99.48 | 99.53 | 99.48 | 98.56 | 99.47 | 0.996 |
Feature Extraction Methods | Accuracy (%) | MCC (%) | F1 Score (%) | Sensitivity (%) | Specificity (%) | AUC |
---|---|---|---|---|---|---|
GoogleNet | 95.67 | 95.69 | 94.91 | 93.50 | 96.85 | 0.957 |
VGG-16 | 96.92 | 96.96 | 96.02 | 94.75 | 97.10 | 0.963 |
VGG-19 | 97.30 | 97.35 | 97.15 | 95.00 | 97.80 | 0.965 |
SqueezeNet | 95.42 | 94.18 | 94.50 | 92.70 | 96.30 | 0.946 |
ResNet50 | 98.44 | 97.26 | 97.71 | 96.75 | 98.80 | 0.980 |
GoogleNet + VGG-16 + ResNet50 | 99.62 | 99.73 | 99.51 | 98.80 | 99.70 | 0.997 |
Optimization Algorithms | Accuracy (%) | MCC (%) | F1 Score (%) | Sensitivity (%) | Specificity (%) | AUC |
---|---|---|---|---|---|---|
GA | 93.44 | 95.33 | 95.43 | 93.58 | 95.45 | 0.933 |
BOA | 95.58 | 96.44 | 96.48 | 95.03 | 96.63 | 0.950 |
ABC | 96.92 | 97.24 | 97.92 | 96.08 | 97.15 | 0.959 |
WOA | 97.46 | 97.45 | 97.94 | 96.75 | 97.99 | 0.977 |
SCSO | 98.68 | 98.10 | 98.63 | 97.82 | 98.15 | 0.981 |
DWSCSO | 99.48 | 99.53 | 99.48 | 98.56 | 99.47 | 0.996 |
Optimization Algorithms | Accuracy (%) | MCC (%) | F1 Score (%) | Sensitivity (%) | Specificity (%) | AUC |
---|---|---|---|---|---|---|
GA | 95.40 | 96.01 | 95.35 | 95.12 | 95.07 | 0.939 |
BOA | 96.09 | 96.79 | 96.84 | 95.28 | 95.10 | 0.944
ABC | 96.73 | 95.98 | 96.31 | 96.05 | 96.11 | 0.949 |
WOA | 97.03 | 97.80 | 96.88 | 97.03 | 96.87 | 0.981 |
SCSO | 98.76 | 98.27 | 98.85 | 98.12 | 98.35 | 0.986 |
DWSCSO | 99.62 | 99.73 | 99.51 | 98.80 | 99.70 | 0.997 |
Optimization Algorithms | Computational Time (s) |
---|---|
GA | 110.30 |
BOA | 101.08 |
ABC | 81.66 |
WOA | 83.90 |
SCSO | 78.02 |
DWSCSO | 70.12 |
Optimization Algorithms | Friedman Rank |
---|---|
GA | 2.43 |
BOA | 3.98 |
ABC | 4.37 |
WOA | 5.78 |
SCSO | 7.04 |
DWSCSO | 7.63 |
p-value | |
Actual Feature Vectors
Classifiers | Accuracy (%) | MCC (%) | F1 Score (%) | Sensitivity (%) | Specificity (%) | AUC |
---|---|---|---|---|---|---|
Decision tree | 88.24 | 87.94 | 88.98 | 87.67 | 88.45 | 0.873 |
GCN | 89.26 | 90.90 | 90.40 | 89.32 | 90.20 | 0.907 |
ANN | 92.36 | 93.50 | 92.34 | 92.89 | 93.10 | 0.915 |
Autoencoder | 94.27 | 93.88 | 93.54 | 93.48 | 92.12 | 0.939 |
CNN | 94.81 | 94.01 | 93.99 | 94.77 | 94.89 | 0.944 |
PRSCNN | 95.22 | 95.90 | 94.90 | 94.09 | 95.40 | 0.955 |
Optimized Feature Vectors
Classifiers | Accuracy (%) | MCC (%) | F1 Score (%) | Sensitivity (%) | Specificity (%) | AUC |
---|---|---|---|---|---|---|
Decision tree | 90.30 | 93.28 | 92.28 | 93.45 | 93.81 | 0.902 |
GCN | 93.90 | 95.02 | 93.82 | 94.91 | 94.80 | 0.930 |
ANN | 95.38 | 97.38 | 98.33 | 96.84 | 96.99 | 0.966 |
Autoencoder | 97.30 | 98.76 | 98.70 | 97.60 | 97.73 | 0.979 |
CNN | 98.09 | 99.00 | 99.01 | 98.22 | 98.84 | 0.982 |
PRSCNN | 99.48 | 99.53 | 99.48 | 98.56 | 99.47 | 0.996 |
Actual Feature Vectors
Classifiers | Accuracy (%) | MCC (%) | F1 Score (%) | Sensitivity (%) | Specificity (%) | AUC |
---|---|---|---|---|---|---|
Decision tree | 89.22 | 89.15 | 89.04 | 89.32 | 89.14 | 0.881 |
GCN | 90.98 | 90.70 | 91.60 | 90.64 | 91.22 | 0.918 |
ANN | 93.68 | 93.86 | 92.12 | 93.75 | 93.15 | 0.935 |
Autoencoder | 94.66 | 94.53 | 95.06 | 95.14 | 94.57 | 0.947 |
CNN | 95.77 | 94.47 | 95.15 | 95.83 | 95.50 | 0.954 |
PRSCNN | 96.54 | 96.16 | 95.72 | 96.43 | 96.82 | 0.962 |
Optimized Feature Vectors
Classifiers | Accuracy (%) | MCC (%) | F1 Score (%) | Sensitivity (%) | Specificity (%) | AUC |
---|---|---|---|---|---|---|
Decision tree | 91.27 | 91.62 | 91.87 | 91.26 | 91.20 | 0.929 |
GCN | 92.81 | 93.22 | 92.85 | 93.40 | 92.91 | 0.940 |
ANN | 94.78 | 95.02 | 95.47 | 95.71 | 94.82 | 0.956 |
Autoencoder | 97.72 | 97.17 | 97.91 | 96.93 | 97.17 | 0.978 |
CNN | 98.42 | 98.65 | 98.34 | 97.70 | 97.74 | 0.983 |
PRSCNN | 99.62 | 99.73 | 99.51 | 98.80 | 99.70 | 0.997 |
Activation Functions | Accuracy (%) | MCC (%) | F1 Score (%) | Sensitivity (%) | Specificity (%) | AUC |
---|---|---|---|---|---|---|
ReLU | 92.74 | 91.47 | 92.88 | 92.60 | 92.03 | 0.937 |
Leaky ReLU | 95.84 | 94.25 | 94.29 | 94.72 | 94.23 | 0.941 |
ELU | 96.94 | 97.73 | 96.22 | 97.16 | 96.83 | 0.970 |
PReLU | 99.48 | 99.53 | 99.48 | 98.56 | 99.47 | 0.996 |
Activation Functions | Accuracy (%) | MCC (%) | F1 Score (%) | Sensitivity (%) | Specificity (%) | AUC |
---|---|---|---|---|---|---|
ReLU | 93.45 | 93.54 | 93.56 | 93.51 | 93.91 | 0.942 |
Leaky ReLU | 96.95 | 95.90 | 96.04 | 96.48 | 96.90 | 0.964 |
ELU | 97.36 | 98.83 | 96.73 | 98.20 | 98.02 | 0.987 |
PReLU | 99.62 | 99.73 | 99.51 | 98.80 | 99.70 | 0.997 |
Measures | K = 2 | K = 4 | K = 5 | K = 8 |
---|---|---|---|---|
MCC (%) | 94.94 | 95.82 | 99.53 | 97.21 |
F1-score (%) | 93.63 | 94.04 | 99.48 | 93.69 |
Accuracy (%) | 97.24 | 98.19 | 99.48 | 96.17 |
Sensitivity (%) | 96.34 | 97.33 | 98.56 | 94.67 |
Specificity (%) | 95.11 | 97.67 | 99.47 | 94.62 |
AUC | 0.959 | 0.961 | 0.996 | 0.950 |
Measures | K = 2 | K = 4 | K = 5 | K = 8 |
---|---|---|---|---|
MCC (%) | 95.78 | 97.62 | 99.73 | 95.81 |
F1-score (%) | 96.10 | 98.39 | 99.51 | 95.29 |
Accuracy (%) | 96.66 | 98.58 | 99.62 | 96.28 |
Sensitivity (%) | 95.74 | 97.05 | 98.80 | 94.25 |
Specificity (%) | 95.53 | 96.33 | 99.70 | 94.72 |
AUC | 0.965 | 0.971 | 0.997 | 0.945 |
Classifiers | Accuracy (%), Noiseless Images | Accuracy (%), Noisy Images |
---|---|---|
Decision tree | 91.27 | 86.53 |
GCN | 92.81 | 86.74 |
ANN | 94.78 | 88.21 |
Autoencoder | 97.72 | 92.56 |
CNN | 98.42 | 93.91 |
PRSCNN | 99.62 | 97.32 |
Classifiers | Sensitivity (%), Without Missing Data | Sensitivity (%), With 10% Missing Data |
---|---|---|
Decision tree | 91.26 | 84.56 |
GCN | 93.40 | 87.35 |
ANN | 95.71 | 91.42 |
Autoencoder | 96.93 | 92.58 |
CNN | 97.70 | 93.51 |
PRSCNN | 98.80 | 95.40 |