Enhanced Skin Lesion Segmentation and Classification Through Ensemble Models
Abstract
1. Introduction
- A novel ensemble architecture for segmentation: the proposed method integrates U-Net, SegNet, and DeepLabV3 with data augmentation, each contributing a distinct strength (boundary detection, semantic segmentation, and multi-scale context capture, respectively) for precise and efficient skin lesion segmentation.
- An ensemble of deep models for classification: by combining VGG16, ResNet-50, and Inception-V3, the model captures diverse feature representations, improving classification accuracy for melanoma, basal cell carcinoma (BCC), and squamous cell carcinoma (SCC) lesions.
- Balanced dataset and data augmentation: employing oversampling and augmentation techniques, the model effectively addresses class imbalance, enhancing reliability across lesion categories.
- Enhanced performance metrics: the ensemble method achieves superior precision, recall, and F1-scores compared to single-model approaches, demonstrating its robustness for clinical applications.
- Real-world clinical relevance: the method’s improved diagnostic accuracy and robustness make it a promising tool for real-world skin lesion analysis and potential integration into clinical workflows.
2. Materials and Methods
Algorithm 1: Proposed Method for Skin Lesion Segmentation and Classification
Input: Dermoscopic images of skin lesions; corresponding labels for each image (melanoma, BCC, or SCC); hyperparameters (e.g., learning rate, batch size, epochs).
Output: Segmentation masks and classification labels.
Step 1: Data Preparation
- Load the dataset of dermoscopic images and their corresponding labels.
- Split the dataset into training (80%) and validation (20%) sets for segmentation, and training (75%) and validation (25%) sets for classification.
- Resize images to the appropriate dimensions: 256 × 256 pixels for the segmentation models; 224 × 224 pixels for VGG16 and ResNet-50; 299 × 299 pixels for Inception-V3.
- Normalize pixel values to the range [0, 1].
- Apply data augmentation to the training set: random rotations, horizontal and vertical flips, and painting.
Step 2: Class Balancing
- Identify the class distribution within the training dataset.
- Use random oversampling to duplicate instances of the minority classes (BCC, SCC) until they are balanced with the majority class (melanoma).
Step 3: Model Training
- Segmentation models: initialize U-Net, DeepLabV3, and SegNet; compile each model with an appropriate optimizer (e.g., Adam) and loss function (e.g., binary cross-entropy, unet3p_hybrid_loss); train each model on the augmented and balanced training set for the specified number of epochs, monitoring training and validation loss/accuracy.
- Classification models: initialize VGG16, ResNet-50, and Inception-V3; compile each model with an appropriate optimizer and loss function (e.g., categorical cross-entropy); train each model on the augmented and balanced training set for the specified number of epochs, monitoring training and validation metrics (accuracy, precision, recall).
Step 4: Ensemble Prediction
- Generate predictions from each trained segmentation model on the validation set and combine the outputs by voting or averaging to obtain the final segmentation mask.
- Generate predictions from each trained classification model on the validation set and combine the outputs with an ensemble technique (e.g., weighted average or majority voting) to obtain the final class label.
Step 5: Model Evaluation
- Segmentation metrics: Intersection over Union (IoU), Dice coefficient, accuracy.
- Classification metrics: precision, recall, F1-score, AUC, accuracy.
End Algorithm
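For concreteness, the sketch below illustrates Steps 1–2 (resizing, normalization, and random oversampling) in Keras/TensorFlow. It is a minimal sketch rather than the authors' implementation: it assumes images and labels are already loaded as NumPy arrays, and the helper names, fixed random seed, and augmentation pipeline (rotation and flips only; the "painting" augmentation is omitted) are assumptions.

```python
import numpy as np
import tensorflow as tf

def balance_by_oversampling(images, labels, seed=42):
    """Randomly duplicate minority-class samples (BCC, SCC) until every class
    matches the size of the largest class (Algorithm 1, Step 2)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(labels, return_counts=True)
    target = counts.max()
    keep = []
    for c in classes:
        c_idx = np.flatnonzero(labels == c)
        extra = rng.choice(c_idx, size=target - c_idx.size, replace=True)
        keep.append(np.concatenate([c_idx, extra]))
    keep = rng.permutation(np.concatenate(keep))
    return images[keep], labels[keep]

def preprocess(image, size):
    """Resize to the model's input size and scale pixel values to [0, 1]."""
    image = tf.image.resize(image, size)
    return tf.cast(image, tf.float32) / 255.0

# Illustrative augmentation pipeline (random rotation and flips).
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomRotation(0.1),
])
```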
2.1. Proposed Skin Lesion Segmentation
Each of the three segmentation models produces a per-pixel probability map; these maps are combined (e.g., by averaging, as in Algorithm 1, Step 4) into a combined mask, which is then binarized at a threshold of 0.5:

$$
\text{Final Mask}(i, j) =
\begin{cases}
1, & \text{if Combined Mask}(i, j) > 0.5 \\
0, & \text{if Combined Mask}(i, j) \le 0.5
\end{cases}
$$
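A minimal sketch of this combination step follows; it assumes each model outputs a probability map of identical spatial size, and the unweighted average used here is one of the combination options named in Algorithm 1 (Step 4), not necessarily the authors' exact choice.

```python
import numpy as np

def ensemble_mask(prob_maps, threshold=0.5):
    """Average per-pixel probabilities from U-Net, SegNet, and DeepLabV3,
    then binarize with the 0.5 rule above.

    prob_maps: array-like of shape (n_models, H, W) with values in [0, 1].
    """
    combined = np.mean(np.asarray(prob_maps, dtype=np.float32), axis=0)  # Combined Mask(i, j)
    return (combined > threshold).astype(np.uint8)                       # Final Mask(i, j)
```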
2.2. Proposed Skin Lesion Classification
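The classification stage combines the softmax outputs of VGG16, ResNet-50, and Inception-V3 as described in Algorithm 1 (Step 4). The sketch below shows one such weighted soft-voting ensemble; it assumes three already-trained Keras models, input batches resized to each model's expected size, an assumed class ordering, and equal (illustrative) weights.

```python
import numpy as np

def ensemble_predict(models, batches, weights=(1/3, 1/3, 1/3)):
    """Weighted average of class probabilities from VGG16, ResNet-50, and
    Inception-V3; each model receives a batch resized to its own input size."""
    probs = [w * m.predict(x, verbose=0) for m, x, w in zip(models, batches, weights)]
    avg = np.sum(probs, axis=0)          # (n_samples, n_classes) averaged scores
    return np.argmax(avg, axis=1)        # index of the predicted class per sample
```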
3. Results
3.1. Performance Evaluation for Segmentation
3.2. Performance Evaluation for Classification
4. Discussion
Comparison with Existing Literature
5. Conclusions
Limitations
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Alkarakatly, T.; Eidhah, S.; Sarawani, M.A.; Sobhi, A.A.; Bilal, M. Skin Lesions Identification Using Deep Convolutional Neural Network. In Proceedings of the 2019 International Conference on Advances in the Emerging Computing Technologies (AECT), Al Madinah Al Munawwarah, Saudi Arabia, 10 February 2020; IEEE: Piscataway, NJ, USA; pp. 209–213.
- Murugan, A.; Nair, S.A.H.; Preethi, A.A.P.; Kumar, K.S. Diagnosis of skin cancer using machine learning techniques. Microprocess. Microsyst. 2021, 81, 103727.
- Salian, A.C.; Vaze, S.; Singh, P. Skin Lesion Classification using Deep Learning Architectures. In Proceedings of the 2020 3rd International Conference on Communication System, Computing and IT Applications (CSCITA), Mumbai, India, 3–4 April 2020; IEEE: Piscataway, NJ, USA; pp. 168–173.
- Ali, M.S.; Miah, M.S.; Haque, J.; Rahman, M.M.; Islam, M.K. An enhanced technique of skin cancer classification using deep convolutional neural network with transfer learning models. Mach. Learn. Appl. 2021, 5, 100036.
- Filali, Y.; Khoukhi, H.E.; Sabri, M.A. Texture Classification of skin lesion using convolutional neural network. In Proceedings of the 2019 International Conference on Wireless Technologies, Embedded and Intelligent Systems (WITS), Fez, Morocco, 3–4 April 2019.
- Gouda, W.; Sama, N.U.; Waakid, G.A. Detection of Skin Cancer Based on Skin Lesion Images Using Deep Learning. Healthcare 2022, 10, 1183.
- Araujo, R.L.; Rabelo, R.A.; Rodrigues, J.P.C.; Silva, R.V. Automatic Segmentation of Melanoma Skin Cancer Using Deep Learning. In Proceedings of the 2021 IEEE International Conference on E-Health Networking, Application & Services (HEALTHCOM), Shenzhen, China, 1–2 March 2021; IEEE: Piscataway, NJ, USA; pp. 1–6.
- Singh, V.K.; Abdel-Nasser, M.; Rashwan, H.A.; Akram, F.; Pandey, N.; Lalande, A.; Presles, B.; Romani, S.; Puig, D. FCA-Net: Adversarial learning for skin lesion segmentation based on multi-scale features and factorized channel attention. IEEE Access 2019, 7, 130552–130565.
- Yang, X.; Zeng, Z.; Yeo, S.Y.; Tan, C.; Tey, H.L.; Su, Y. A novel multi-task deep learning model for skin lesion segmentation and classification. arXiv 2017, arXiv:1703.01025.
- Liu, L.; Tsui, Y.Y.; Mandal, M. Skin Lesion Segmentation Using Deep Learning with Auxiliary Task. J. Imaging 2021, 7, 67.
- Mirikharaji, Z.; Abhishek, K.; Bissoto, A.; Barata, C.; Avila, S.; Valle, E.; Celebi, M.E.; Hamarneh, G. A survey on deep learning for skin lesion segmentation. Med. Image Anal. 2023, 88, 102863.
- Jimi, A.; Abouche, H.; Zrira, N.; Benmiloud, I. Skin Lesion Segmentation Using Attention-Based DenseUNet. In Proceedings of the 16th International Joint Conference on Biomedical Engineering Systems and Technologies (BIOSTEC 2023), Lisbon, Portugal, 16–18 February 2023; BIOINFORMATICS. Volume 3, pp. 91–100.
- Bibi, A.; Khan, M.A.; Javed, M.Y.; Tariq, U.; Kang, B.G.; Nam, Y.; Mostafa, R.R.; Sakr, R.H. Skin Lesion Segmentation and Classification Using Conventional and Deep Learning Based Framework. Comput. Mater. Contin. 2022, 71, 2477–2495.
- Ashraf, H.; Waris, A.; Ghafoor, M.F.; Gilani, S.O.; Niazi, I.K. Melanoma segmentation using deep learning with test-time augmentations and conditional random fields. Sci. Rep. 2022, 12, 3948.
- Jafari, M.H.; Karimi, N.; Nasr-Esfahani, E.; Samavi, S.; Soroushmehr, S.M.R.; Ward, K.; Najarian, K. Skin Lesion Segmentation in Clinical Images Using Deep Learning. In Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancún Center, Cancún, México, 4–8 December 2016.
- Chandra, R.; Hajiarbabi, M. Skin Lesion Detection Using Deep Learning. J. Autom. Mob. Robot. Intell. Syst. 2022, 16, 56–64.
- Gessert, N.; Nielsen, M.; Shaikh, M.; Werner, R.; Schlaefer, A. Skin lesion classification using ensembles of multi-resolution EfficientNets with meta data. MethodsX 2020, 7, 100864.
- Ding, S.; Wu, Z.; Zheng, Y.; Liu, Z.; Yang, X.X.; Yuan, G.; Xie, J. Deep attention branch networks for skin lesion classification. Comput. Methods Programs Biomed. 2021, 212, 106447.
- Alhudhaif, A.; Almaslukh, B.; Aseeri, A.O.; Guler, O.; Polat, K. A novel nonlinear automated multi-class skin lesion detection system using soft-attention based convolutional neural networks. Chaos Solitons Fractals 2023, 170, 113409.
- Alsahafi, Y.S.; Kassem, M.A.; Hosny, K.M. Skin-Net: A novel deep residual network for skin lesions classification using multilevel feature extraction and cross-channel correlation with detection of outlier. J. Big Data 2023, 10, 105.
- Hosny, K.M.; Elshoura, D.; Mohamed, E.R.; Vrochidou, E.; Papakostas, G.A. Deep Learning and Optimization-Based Methods for Skin Lesions Segmentation: A Review. IEEE Access 2023, 11, 85467–85488.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015.
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
- Chen, L.; Zhu, Y.; Papandreou, G.; Schroff, F.; Adam, H. DeepLabv3: Rethinking Atrous Convolution for Semantic Image Segmentation. arXiv 2017, arXiv:1706.05587.
- MNOWAK061. Skin Lesion Dataset. ISIC2018 Kaggle Repository. 2021. Available online: https://www.kaggle.com/datasets/mnowak061/isic2018-and-ph2-384x384-jpg (accessed on 10 April 2022).
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016.
- Sayyad, J.; Patil, P.; Gurav, S. Skin Disease Detection Using VGG16 and InceptionV3. Int. J. Intell. Syst. Appl. Eng. 2024, 12, 148–155.
- Barua, S.; Islam, M.M.; Murase, K. A novel synthetic minority oversampling technique for imbalanced data set learning. Lect. Notes Comput. Sci. 2011, 7063, 735–744.
- Han, J.; Kamber, M.; Pei, J. Data Mining: Concepts and Techniques (The Morgan Kaufmann Series in Data Management Systems), 3rd ed.; Elsevier Science Ltd.: Amsterdam, The Netherlands, 2011; pp. 1–703.
- Harangi, B.; Baran, A.; Hajdu, A. Assisted deep learning framework for multi-class skin lesion classification considering a binary classification support. Biomed. Signal Process. Control 2020, 62, 102041.
- Mariani, G.; Scheidegger, F.; Istrate, R.; Bekas, C.; Malossi, C. BAGAN: Data Augmentation with Balancing GAN. arXiv 2018, arXiv:1803.09655.
Parameter | U-Net | DeepLab-V3 | SegNet |
---|---|---|---|
Input Image Size | 256 × 256 | 256 × 256 | 256 × 256 |
Batch Size | 16 | 16 | 16 |
Learning Rate | 0.001 | 0.001 | 0.001 |
Optimizer | Adam | Adam | Adam |
Epochs | 100 | 100 | 100 |
Loss Function | unet3p_hybrid_loss | unet3p_hybrid_loss | Binary Crossentropy Loss |
Dropout Rate | 0.5 | 0.5 | 0.5 |
Data Augmentation | Random Rotation, Flip, Painting | Random Rotation, Flip, Painting | Random Rotation, Flip, Painting |
Training/Validation Split | 80%/20% | 80%/20% | 80%/20% |
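As a worked example of these settings, the sketch below compiles a segmentation model with the Adam optimizer at a learning rate of 0.001. The paper's unet3p_hybrid_loss is a custom loss; here it is approximated as binary cross-entropy plus a Dice term, which is an assumption rather than the authors' exact definition, and build_unet is a hypothetical placeholder for any of the three architectures.

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def dice_loss(y_true, y_pred, smooth=1.0):
    """1 minus the Dice coefficient on flattened masks."""
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    inter = K.sum(y_true_f * y_pred_f)
    return 1.0 - (2.0 * inter + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def hybrid_loss(y_true, y_pred):
    """Stand-in for unet3p_hybrid_loss: binary cross-entropy + Dice loss (assumption)."""
    bce = K.mean(tf.keras.losses.binary_crossentropy(y_true, y_pred))
    return bce + dice_loss(y_true, y_pred)

# model = build_unet(input_shape=(256, 256, 3))   # architecture builder not shown
# model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
#               loss=hybrid_loss, metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=100)
```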
Model | Dice Coefficient | IoU | Accuracy |
---|---|---|---|
Model 1 (SegNet) | 0.82 | 0.75 | 0.88 |
Model 2 (DeepLabV3) | 0.86 | 0.79 | 0.90 |
Model 3 (U-Net) | 0.84 | 0.81 | 0.91 |
Ensemble Model | 0.93 | 0.90 | 0.95 |
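The Dice coefficient and IoU reported above can be computed from binary masks as in the following sketch (a generic implementation, not the authors' evaluation code).

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice coefficient and IoU for binary masks (values 0/1)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return float(dice), float(iou)
```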
Parameter | VGG16 | ResNet-50 | Inception-V3 |
---|---|---|---|
Input Image Size | 224 × 224 | 224 × 224 | 299 × 299 |
Batch Size | 32 | 32 | 32 |
Learning Rate | 0.001 | 0.001 | 0.001 |
Optimizer | Adam | Adam | Adam |
Epochs | 150 | 150 | 150 |
Loss Function | Categorical Crossentropy | Categorical Crossentropy | Categorical Crossentropy |
Dropout Rate | 0.5 | 0.5 | 0.5 |
Data Augmentation | Yes | Yes | Yes |
Training/Validation Split | 75%/25% | 75%/25% | 75%/25% |
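A minimal transfer-learning setup matching these settings is sketched below for the VGG16 branch (ResNet-50 and Inception-V3 follow the same pattern with their own input sizes). The frozen ImageNet backbone and the pooling/dropout head are assumptions about the head layout, not the authors' exact configuration.

```python
import tensorflow as tf

def build_classifier(n_classes=3, input_size=224):
    """ImageNet-pretrained VGG16 backbone with a small softmax head."""
    base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       input_shape=(input_size, input_size, 3))
    base.trainable = False                              # freeze the backbone
    x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
    x = tf.keras.layers.Dropout(0.5)(x)                 # dropout rate from the table above
    out = tf.keras.layers.Dense(n_classes, activation="softmax")(x)
    model = tf.keras.Model(base.input, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="categorical_crossentropy",
                  metrics=["accuracy",
                           tf.keras.metrics.Precision(),
                           tf.keras.metrics.Recall()])
    return model
```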
Dataset Type | Class | Precision | Recall | F1-Score | AUC | Accuracy |
---|---|---|---|---|---|---|
Unbalanced (Without Augmentation) | Melanoma | 0.75 | 0.70 | 0.72 | 0.80 | 0.71 |
BCC | 0.80 | 0.74 | 0.77 | 0.82 | ||
SCC | 0.70 | 0.65 | 0.67 | 0.78 | ||
Unbalanced (With Augmentation) | Melanoma | 0.78 | 0.74 | 0.76 | 0.83 | 0.80 |
BCC | 0.84 | 0.78 | 0.81 | 0.85 | ||
SCC | 0.73 | 0.69 | 0.71 | 0.80 | ||
Balanced (Without Augmentation) | Melanoma | 0.85 | 0.82 | 0.83 | 0.88 | 0.86 |
BCC | 0.87 | 0.85 | 0.86 | 0.90 | ||
SCC | 0.80 | 0.78 | 0.79 | 0.86 | ||
Balanced (With Augmentation) | Melanoma | 0.90 | 0.87 | 0.88 | 0.92 | 0.94 |
BCC | 0.91 | 0.89 | 0.90 | 0.93 | ||
SCC | 0.85 | 0.83 | 0.84 | 0.89 |
Dataset Type | Class | Precision | Recall | F1-Score | AUC | Accuracy |
---|---|---|---|---|---|---|
Unbalanced (Without Augmentation) | Melanoma | 0.77 | 0.72 | 0.74 | 0.78 | 0.74 |
BCC | 0.81 | 0.76 | 0.78 | 0.79 | ||
SCC | 0.71 | 0.67 | 0.69 | 0.71 | ||
Unbalanced (With Augmentation) | Melanoma | 0.80 | 0.76 | 0.78 | 0.81 | 0.85 |
BCC | 0.84 | 0.79 | 0.81 | 0.83 | ||
SCC | 0.74 | 0.71 | 0.72 | 0.74 | ||
Balanced (Without Augmentation) | Melanoma | 0.86 | 0.83 | 0.84 | 0.88 | 0.88 |
BCC | 0.88 | 0.86 | 0.87 | 0.89 | ||
SCC | 0.81 | 0.80 | 0.80 | 0.82 | ||
Balanced (With Augmentation) | Melanoma | 0.98 | 0.87 | 0.88 | 0.91 | 0.98 |
BCC | 0.90 | 0.88 | 0.89 | 0.92 | ||
SCC | 0.83 | 0.81 | 0.82 | 0.84 |
Dataset Type | Class | Precision | Recall | F1-Score | AUC | Accuracy |
---|---|---|---|---|---|---|
Unbalanced (Without Augmentation) | Melanoma | 0.75 | 0.71 | 0.73 | 0.76 | 0.70 |
BCC | 0.79 | 0.74 | 0.76 | 0.77 | ||
SCC | 0.69 | 0.65 | 0.67 | 0.69 | ||
Unbalanced (With Augmentation) | Melanoma | 0.78 | 0.75 | 0.76 | 0.79 | 0.87 |
BCC | 0.81 | 0.77 | 0.79 | 0.80 | ||
SCC | 0.72 | 0.68 | 0.70 | 0.72 | ||
Balanced (Without Augmentation) | Melanoma | 0.84 | 0.80 | 0.82 | 0.85 | 0.86 |
BCC | 0.87 | 0.83 | 0.85 | 0.86 | ||
SCC | 0.78 | 0.76 | 0.77 | 0.79 | ||
Balanced (With Augmentation) | Melanoma | 0.86 | 0.83 | 0.84 | 0.87 | 0.98 |
BCC | 0.89 | 0.85 | 0.87 | 0.88 | ||
SCC | 0.80 | 0.78 | 0.79 | 0.81 |
Dataset Type | Class | Precision | Recall | F1-Score | AUC | Accuracy |
---|---|---|---|---|---|---|
Unbalanced (Without Augmentation) | Melanoma | 0.80 | 0.75 | 0.77 | 0.82 | 0.78 |
BCC | 0.83 | 0.78 | 0.80 | 0.83 | ||
SCC | 0.71 | 0.67 | 0.69 | 0.71 | ||
Unbalanced (With Augmentation) | Melanoma | 0.84 | 0.80 | 0.82 | 0.85 | 0.90 |
BCC | 0.86 | 0.82 | 0.84 | 0.87 | ||
SCC | 0.74 | 0.70 | 0.72 | 0.74 | ||
Balanced (Without Augmentation) | Melanoma | 0.87 | 0.83 | 0.85 | 0.88 | 0.92 |
BCC | 0.89 | 0.85 | 0.87 | 0.89 | ||
SCC | 0.81 | 0.78 | 0.79 | 0.82 | ||
Balanced (With Augmentation) | Melanoma | 0.89 | 0.86 | 0.87 | 0.90 | 0.99 |
BCC | 0.91 | 0.88 | 0.89 | 0.91 | ||
SCC | 0.83 | 0.81 | 0.82 | 0.84 |
 | Predicted Melanoma | Predicted BCC | Predicted SCC |
---|---|---|---|
True Melanoma | 300 | 30 | 8 |
True BCC | 10 | 280 | 12 |
True SCC | 5 | 15 | 150 |
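Per-class precision and recall follow directly from a confusion matrix like the one above; the sketch below shows the generic computation (rows are true classes, columns are predicted classes).

```python
import numpy as np

def per_class_precision_recall(cm):
    """cm[i, j] = number of samples of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # column sums: all predictions of each class
    recall = tp / cm.sum(axis=1)      # row sums: all true samples of each class
    return precision, recall
```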
Dataset Type | Class | Precision | Recall | F1-Score | AUC | Accuracy |
---|---|---|---|---|---|---|
Unbalanced (Without Augmentation) | Melanoma | 0.70 | 0.65 | 0.67 | 0.76 | 0.80 |
BCC | 0.75 | 0.80 | 0.77 | 0.79 | ||
SCC | 0.68 | 0.60 | 0.64 | 0.70 | ||
Unbalanced (With Augmentation) | Melanoma | 0.73 | 0.70 | 0.71 | 0.78 | 0.92 |
BCC | 0.78 | 0.82 | 0.80 | 0.81 | ||
SCC | 0.70 | 0.65 | 0.67 | 0.72 | ||
Balanced (Without Augmentation) | Melanoma | 0.85 | 0.80 | 0.82 | 0.90 | 0.94 |
BCC | 0.88 | 0.85 | 0.86 | 0.92 | ||
SCC | 0.82 | 0.78 | 0.80 | 0.88 | ||
Balanced (With Augmentation) | Melanoma | 0.87 | 0.85 | 0.86 | 0.91 | 0.99 |
BCC | 0.90 | 0.88 | 0.89 | 0.93 | ||
SCC | 0.84 | 0.82 | 0.83 | 0.89 |
Task | Existing Works | Methods | Dataset | Dice Coefficient (%) | IoU (%) | Accuracy (%) |
---|---|---|---|---|---|---|
Segmentation | [7] | U-Net | ISIC 2018 | 89.3 | ||
[8] | FCA-Net | ISIC 2018 | 77.2 | |||
[12] | DenseUNet | ISIC 2016 | 85.64 | |||
ISIC 2017 | 86.61 | |||||
ISIC 2018 | 92.23 | |||||
Proposed System | Ensemble Model | ISIC 2018 | 93 | 90 | ||
Classification | [6] | CNN | ISIC 2018 | 83.1 | ||
ResNet-50 | 83.6 |
ResNet50-Inception | 84.1 |
Inception V3 | 85.7 | |||||
[10] | CNN | ISBI2017 | 94.32 | |||
[12] | DenseUNet | ISIC 2016 | 98.03 | |||
ISIC 2017 | 96.19 | |||||
ISIC 2018 | 97.88 | |||||
[13] | Cubic SVM | ISIC 2017 | 96.7 | |||
[15] | CNN | Dermquest database | 98.5 | |||
[16] | DenseNet | ISIC 2018 | 81 | |||
[19] | CNN | HAM10000 | 95.94 | |||
Proposed System | Ensemble Model | ISIC 2018 | 99 |