Article | Open Access | 19 February 2025

Early Detection of Skin Diseases Across Diverse Skin Tones Using Hybrid Machine Learning and Deep Learning Models

1 College of Computing, Birmingham City University, Birmingham B4 7XG, UK
2 Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
3 Department of Computer Science, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
4 Department of Computer Science, Faculty of Computers and Artificial Intelligence, Cairo University, Giza 12613, Egypt
This article belongs to the Special Issue AI-Based Image Processing and Computer Vision

Abstract

Skin diseases in melanin-rich skin often present diagnostic challenges due to the unique characteristics of darker skin tones, which can lead to misdiagnosis or delayed treatment. This disparity affects millions of people in diverse communities, highlighting the need for accurate, AI-based diagnostic tools. In this paper, we investigated the performance of three machine learning methods, Support Vector Machines (SVMs), Random Forest (RF), and Decision Trees (DTs), combined with the state-of-the-art (SOTA) deep learning models EfficientNet, MobileNetV2, and DenseNet121 for predicting skin conditions using dermoscopic images from the HAM10000 dataset. Features were extracted using the deep learning models, and the labels were encoded numerically. To address the data imbalance, SMOTE and resampling techniques were applied. Additionally, Principal Component Analysis (PCA) was used for feature reduction, and fine-tuning was performed to optimize the models. The results demonstrated that RF with DenseNet121 achieved a superior accuracy of 98.32%, followed by SVM with MobileNetV2 at 98.08%, and Decision Tree with MobileNetV2 at 85.39%. The proposed methods outperform the SVM paired with the SOTA EfficientNet model, validating the robustness of the proposed approaches. Evaluation metrics such as accuracy, precision, recall, and F1-score were used to benchmark performance, showcasing the potential of these methods in advancing skin disease diagnostics for diverse populations.

1. Introduction

Severe skin conditions, such as rashes or critical diseases that can lead to skin cancer, often present differently on darker skin than on lighter skin. Detecting these conditions early on darker skin is particularly challenging due to a lack of familiarity with how they manifest. This challenge is exacerbated by the underrepresentation of darker skin tones in medical educational resources, a gap that disproportionately affects individuals with darker skin and contributes to lower survival rates for conditions like skin cancer.
As Artificial Intelligence (AI) continues to advance, particularly through machine learning and deep learning, it offers promising solutions to these disparities. AI models trained on images of diverse skin tones can facilitate the development of early diagnosis systems, potentially improving outcomes []. Accurate diagnostic tools are crucial for dermatologists to reduce the number of biopsies required and improve treatment efficacy. The application of AI techniques in diagnosing skin conditions can significantly assist physicians in making accurate diagnoses [].
The increasing success in clinical treatment and diagnosis is largely the result of innovative advancements in medicine and data-driven methodologies. For instance, AI algorithms, particularly deep learning, simulate human reasoning in the domain of disease diagnosis. Image processing and feature extraction frameworks rely on these algorithms to predict and identify different types of diseases. Various classification algorithms, such as Random Forest (RF), Support Vector Machine (SVM), and convolutional neural networks (CNNs), have been used for disease classification.
Despite the potential of these techniques, many existing algorithms have not been applied extensively to image data representing a wide variety of skin conditions on darker skin tones. For instance, the authors in [] proposed a system for recognizing malignant melanoma through 2D wavelet transformation, followed by a neural network classifier, which achieved 84% accuracy. While promising, the limitations of such methods highlight the need for further research and optimization, particularly in diagnosing diseases in individuals with darker skin tones.

3. Materials and Methods

The CRISP-DM (Cross-Industry Standard Process for Data Mining) methodology is employed in this research to evaluate the accuracy of skin condition classification using deep learning and machine learning techniques. Figure 1 illustrates the phases of this methodology.
Figure 1. Phases of CRISP-DM.

3.1. Data Collection

Publicly available datasets for the detection of skin diseases that cover black skin and multiple diseases within one dataset are largely unavailable. The authors in [] describe the HAM10000 dataset for diverse skin tones, which is used in this study for the detection of seven types of skin diseases. The HAM10000 dataset includes 10,015 dermoscopic images of critical skin conditions. Each image is a high-quality color photograph of 450 × 600 pixels, giving a consistent size across the dataset, and each is labeled with one of seven skin disease categories. From the HAM10000 dataset, we selected images of melanocytic nevi (NV), melanoma (MEL), benign keratosis-like lesions (BKLs), basal cell carcinoma (BCC), actinic keratoses (AKIEC), vascular lesions (VASC), and dermatofibroma (DF) for this study.
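These seven categories correspond to short diagnosis codes in the HAM10000 metadata. A small lookup table such as the following (the dictionary name is our own) keeps the mapping explicit:

```python
# Diagnosis codes in the HAM10000 metadata and their full names.
LESION_TYPES = {
    "nv": "Melanocytic nevi",
    "mel": "Melanoma",
    "bkl": "Benign keratosis-like lesions",
    "bcc": "Basal cell carcinoma",
    "akiec": "Actinic keratoses",
    "vasc": "Vascular lesions",
    "df": "Dermatofibroma",
}
```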

3.2. Exploratory Data Analysis

An Exploratory Data Analysis (EDA) was conducted on the attributes and images of skin diseases from the metadata in the CSV file to uncover valuable insights. This step is critical in the data analysis process, as it allows for feature analysis and a comprehensive understanding of the dataset. Visualization was performed using libraries like Matplotlib and Seaborn, generating six types of graphs.
Figure 2 visualizes the frequency of skin lesion types, revealing that melanocytic nevi (NV) are the most common, while dermatofibroma (DF) is the least frequent. This provides a clear picture of the lesion type distribution in the dataset. Figure 3 illustrates the age distribution of individuals with skin conditions: those aged 45 account for over 1200 instances, followed by those aged 50 with nearly 1200. The counts for the age groups of 5, 10, 15, 20, and 51 years fall below 200; those for ages 25, 80, and 85 range between 200 and 400; those for ages 30, 35, 65, 70, and 75 fall between 400 and 800; and those for ages 40, 50, 55, and 60 range from 800 to 1200. Figure 4 reflects the sex distribution, with 4500–5000 females and over 5000 males represented. Figure 5 shows the distribution of lesion localization, that is, which body parts are most affected by skin conditions: the back has over 200 cases, the chest and face show an average number of cases, and fewer cases occur on the hands, genitals, ears, and other body parts. Figure 6 shows the age distribution for each skin lesion type: BKL occurs between 57 and 75 years, NV between 35 and 55 years, DF between 45 and 50 years, MEL between 55 and 75 years, VASC between 40 and 50 years, and both BCC and AKIEC between 60 and 75 years. Figure 7 represents the distribution of skin conditions by sex. NV has the highest frequency (3000–3500) for both males and females, while BKL and MEL have similar distributions (0–600). DF and VASC also show similar distributions for both sexes, with BCC occurring in 350 males and 250 females, and AKIEC appearing in 250 males and 100 females.
Figure 2. Frequency of skin lesion types.
Figure 3. Distribution of age.
Figure 4. Sex distribution.
Figure 5. Distribution of lesion localization.
Figure 6. Age distribution across lesion types.
Figure 7. Distribution of lesion types by sex.
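The plots above can be reproduced with a few lines of Matplotlib/Seaborn. The minimal sketch below assumes the standard HAM10000 metadata file with dx, age, and related columns; the file path is an assumption:

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

meta = pd.read_csv("HAM10000_metadata.csv")  # path is an assumption

# Frequency of skin lesion types (cf. Figure 2).
sns.countplot(data=meta, x="dx", order=meta["dx"].value_counts().index)
plt.title("Frequency of skin lesion types")
plt.show()

# Age distribution in 5-year bins (cf. Figure 3).
sns.histplot(data=meta, x="age", binwidth=5)
plt.title("Distribution of age")
plt.show()

# Age distribution for each lesion type (cf. Figure 6).
sns.boxplot(data=meta, x="dx", y="age")
plt.title("Age distribution across lesion types")
plt.show()
```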

3.3. Data Preparation

The first step in data preprocessing involves loading the metadata CSV file, which contains information about the images, into Google Colab Pro, followed by extracting the zip file containing all 10,015 skin disease images. Next, data filtration is performed for cleaning, including handling missing or invalid data, ensuring that all files are available, and verifying that the file paths are correctly mapped. Upon checking the metadata CSV file, 57 missing values were found in the age column, which were then filled with the median age. Afterward, the dataset was checked to confirm that there are no remaining missing data, and no missing image paths were found.
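A minimal sketch of these cleaning steps, assuming the images have been extracted to a local directory (the directory name and the .jpg extension are assumptions):

```python
import os
import pandas as pd

meta = pd.read_csv("HAM10000_metadata.csv")
IMAGE_DIR = "ham10000_images"  # extracted contents of the zip file (assumed)

# Fill the 57 missing values in the age column with the median age.
meta["age"] = meta["age"].fillna(meta["age"].median())
assert meta["age"].isna().sum() == 0

# Map each image_id to its file and verify that no image path is missing.
meta["image_path"] = meta["image_id"].apply(
    lambda i: os.path.join(IMAGE_DIR, f"{i}.jpg"))
missing = [p for p in meta["image_path"] if not os.path.exists(p)]
print(f"{len(missing)} missing image paths")  # expected: 0
```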

3.4. Feature Selection

For this skin disease dataset, feature selection is crucial to focus on the most relevant attributes for analysis and modeling. The following features are selected:
  • image_id: important for tracking and referencing individual images.
  • dx: represents the diagnosis of skin conditions and is essential for analysis and prediction.
  • sex: selected because some skin diseases vary in frequency or manifestation by sex.
  • localization: refers to the body part where the skin condition appears, providing critical information for diagnosis and disease occurrence.
  • image_path: necessary for loading and preprocessing image data during analysis and modeling. This column in the metadata file provides the file path or filename corresponding to each image, mapping each image’s metadata (e.g., diagnosis, age, sex, lesion location) to its associated image file.
Other features were excluded due to irrelevance, redundancy (high correlation without adding new information), or noise (variability without contributing to prediction).
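Continuing the sketch above, the selection and the numeric label encoding can be expressed as follows (the label column name is our own):

```python
from sklearn.preprocessing import LabelEncoder

# Keep only the selected features; drop irrelevant or redundant columns.
selected = meta[["image_id", "dx", "sex", "localization", "image_path"]].copy()

# Encode the diagnosis labels numerically for the downstream classifiers.
encoder = LabelEncoder()
selected["label"] = encoder.fit_transform(selected["dx"])
print(dict(zip(encoder.classes_, range(len(encoder.classes_)))))
```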

3.5. Modeling

For predicting skin conditions in the HAM10000 dataset, three models were applied using transfer learning: Support Vector Machine (SVM-MobileNetV2), Random Forest (RF-DenseNet121), and Decision Tree (DT-MobileNetV2). Additionally, the same models were applied with EfficientNetB0 for feature extraction. The performance of these combinations was then compared to evaluate the effectiveness of different deep learning feature extractors in conjunction with the machine learning models. The system was implemented on Google Colab Pro, a cloud-based Jupyter notebook environment offering access to computational resources, including GPUs (Graphics Processing Units) and TPUs (Tensor Processing Units). Colab also facilitates easy collaboration, making it useful for research projects.
According to [], SVM is a pattern classification technique introduced by Vapnik. Unlike traditional methods that focus on minimizing empirical errors, SVM separates data using a hyperplane while maximizing the margin between classes. This reduces generalization error and makes SVM a distribution-free algorithm that addresses issues related to poor statistical estimation. SVM has demonstrated better generalization and performance compared to standard classifiers, especially when dealing with high-dimensional data and small training samples.
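A minimal end-to-end sketch of the SVM-MobileNetV2 pipeline described above: pre-trained MobileNetV2 features, PCA, SMOTE, a train/test split, and a grid-searched SVM. The PCA variance threshold, the hyperparameter grid, and the split ratio are assumptions, not values reported here:

```python
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input
from tensorflow.keras.utils import load_img, img_to_array
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE

# Frozen MobileNetV2 backbone with global average pooling as the extractor.
extractor = MobileNetV2(weights="imagenet", include_top=False,
                        pooling="avg", input_shape=(224, 224, 3))

def extract_features(paths):
    """Resize, normalize, and embed a batch of images with MobileNetV2."""
    batch = np.stack([img_to_array(load_img(p, target_size=(224, 224)))
                      for p in paths])
    return extractor.predict(preprocess_input(batch), verbose=0)

X = extract_features(list(selected["image_path"]))
y = selected["label"].values

# Dimensionality reduction, then SMOTE to balance the seven classes
# (applied before the split, following the order described in Section 4).
X = PCA(n_components=0.95).fit_transform(X)
X, y = SMOTE(random_state=42).fit_resample(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=42)

# Grid search over SVM hyperparameters (grid values are illustrative).
grid = GridSearchCV(SVC(), {"C": [1, 10, 100], "gamma": ["scale", 0.01]}, cv=3)
grid.fit(X_tr, y_tr)
print("SVM test accuracy:", grid.score(X_te, y_te))
```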

3.6. Evaluation

Evaluating the model is a critical phase in AI-based learning, focusing on assessing how well the trained models perform. This step ensures that the model generalizes effectively to new data and guides decisions on deployment and further improvements.
The following metrics and techniques contribute to a comprehensive evaluation:
Accuracy measures the overall performance of the model by showing how often it correctly classifies or predicts outcomes.
Accuracy = (True Positive + True Negative)/(True Positive + True Negative + False Positive + False Negative),
Precision measures the accuracy of positive predictions, with a higher precision indicating more correct positive classifications.
Precision = (True Positive)/(True Positive + False Positive),
Recall assesses the model’s ability to detect true positive cases. A higher recall means that the model effectively identifies actual positive instances.
Recall = (True Positive)/(True Positive + False Negative),
The F1-score combines precision and recall into a single metric by calculating their harmonic mean, offering a balanced evaluation of both.
F1-Score = (2 × Precision × Recall)/(Precision + Recall),
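With scikit-learn, these four metrics and the per-class report follow directly from the fitted classifier in the sketch above; macro averaging is our assumption, chosen because per-class scores are also reported:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, classification_report)

y_pred = grid.predict(X_te)  # predictions from the fitted SVM above

print("Accuracy :", accuracy_score(y_te, y_pred))
print("Precision:", precision_score(y_te, y_pred, average="macro"))
print("Recall   :", recall_score(y_te, y_pred, average="macro"))
print("F1-score :", f1_score(y_te, y_pred, average="macro"))
print(classification_report(y_te, y_pred, target_names=encoder.classes_))
```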

4. Results and Discussion

The SVM classifier was applied to classify skin lesions after preprocessing images by resizing and normalizing pixel values. The features were extracted using both EfficientNetB0 and MobileNetV2 architectures, with labels encoded into numerical values. A PCA was performed to reduce dimensionality while retaining significant variance, and SMOTE was utilized to address class imbalance. The dataset was split into training and testing sets, and grid search was employed to optimize hyperparameters. Using EfficientNetB0, the SVM model achieved an accuracy of 97.51%. When MobileNetV2 was used, the SVM model showed improved performance, achieving an accuracy of 98.20%, demonstrating its effectiveness in distinguishing between different skin conditions and surpassing the results obtained with EfficientNetB0.
The Random Forest (RF) classifier was employed to classify skin lesions after training on the processed dataset. The model was configured with 100 Decision Trees and trained using the training set. Using EfficientNetB0 for feature extraction, RF achieved an accuracy of 96.68%. However, when DenseNet121 was used as the feature extractor, RF achieved superior performance with an accuracy of 98.32%. A classification report highlighted the precision, recall, and F1-scores for each class, with excellent results observed, particularly for classes such as AKIEC, BCC, and DF.
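A sketch of the RF-DenseNet121 configuration: the text specifies 100 Decision Trees; everything else here (random seed, default hyperparameters, reuse of the earlier pipeline with DenseNet121 features) is an assumption:

```python
from tensorflow.keras.applications import DenseNet121
from sklearn.ensemble import RandomForestClassifier

# Swap the backbone for DenseNet121; embedding should then use the matching
# tensorflow.keras.applications.densenet.preprocess_input.
extractor = DenseNet121(weights="imagenet", include_top=False,
                        pooling="avg", input_shape=(224, 224, 3))

# Random Forest with 100 Decision Trees, as configured above.
rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_tr, y_tr)  # X_tr here would hold the DenseNet121 features
print("RF test accuracy:", rf.score(X_te, y_te))
```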
For the Decision Tree (DT) model, images were loaded in batches to manage memory efficiently, where each image was resized and preprocessed for use with the EfficientNetB0 and MobileNetV2 architectures. The features were extracted using the pre-trained models, normalized, and reduced in dimensionality using PCA to enhance computational efficiency. SMOTE was applied to address the class imbalance, resulting in a balanced dataset. The dataset was then split into training and testing subsets, followed by the application of a Decision Tree Classifier with hyperparameter tuning via Grid Search. Using EfficientNetB0 for feature extraction, the Decision Tree achieved an accuracy of 84.96%. When MobileNetV2 was used, the Decision Tree showed a slightly improved performance, achieving an accuracy of 85.39%.
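The batched loading can be sketched as a thin wrapper around the extractor above, followed by the grid-searched Decision Tree; the batch size and grid values are assumptions:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

def extract_features_batched(paths, batch_size=64):
    """Embed images in fixed-size batches to keep memory usage bounded."""
    return np.concatenate([extract_features(paths[i:i + batch_size])
                           for i in range(0, len(paths), batch_size)])

# Hyperparameter tuning for the Decision Tree via grid search.
dt_grid = GridSearchCV(DecisionTreeClassifier(random_state=42),
                       {"max_depth": [10, 20, None],
                        "min_samples_leaf": [1, 5, 10]}, cv=3)
dt_grid.fit(X_tr, y_tr)
print("DT test accuracy:", dt_grid.score(X_te, y_te))
```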
The results of these experiments indicate that the combination of SVM with MobileNetV2 and RF with DenseNet121 outperformed other configurations, consistently surpassing the state-of-the-art (SOTA) EfficientNetB0 across multiple metrics. Detailed evaluation metrics, including precision, recall, and F1-scores, further underscored the models’ robustness in identifying skin lesion types.
The accuracy achieved by the SVM classifier using features extracted from MobileNetV2 is 98.08%. The grid search process identified the best parameters for the model, optimizing its performance. The evaluation of the trained model revealed not only high accuracy but also solid classification performance across the various skin lesion types, as demonstrated by the classification report. Figure 8 illustrates the training and validation accuracy of the SVM classifier over the training epochs, showcasing a gradual increase in accuracy for both sets. This convergence of training and validation accuracy indicates effective learning with minimal overfitting, underscoring the model’s robustness in accurately classifying skin lesions.
Figure 8. Model accuracy graph for SVM-MobileNetV2.
The proposed methodology achieved a high accuracy of 98.32% for skin lesion classification using the Random Forest classifier with DenseNet121 as the feature extractor. Figure 9 illustrates the progression of training and validation accuracy during the feature extraction and classification process. The training accuracy demonstrates a steady increase, reaching a peak value of approximately 98.5%, indicative of the model’s ability to effectively capture patterns from the training dataset. Similarly, the validation accuracy shows consistent growth, attaining a final value of approximately 98.1%, reflecting the model’s robust generalization to unseen data. The close alignment between training and validation accuracy curves indicates minimal overfitting, affirming the model’s capability to accurately distinguish between various skin lesion classes. This exceptional performance highlights the effectiveness of the proposed preprocessing pipeline, including the fine-tuning of DenseNet121 for feature extraction, PCA for dimensionality reduction, and SMOTE for addressing class imbalance. The combination of these techniques enhances the Random Forest classifier’s predictive power, making it a highly reliable tool for dermoscopic skin lesion classification.
Figure 9. Model accuracy graph for RF-DenseNet121.
The accuracy achieved by the Decision Tree model is 85.39%. Figure 10 illustrates the training and validation accuracy of the model over five epochs. The training accuracy demonstrates a steady increase. Conversely, the validation accuracy initially rises but peaks at around 90% by the fourth epoch, subsequently declining to 85.39% in the fifth epoch. This divergence suggests potential overfitting, as the model excels on the training data while struggling to generalize to unseen validation data. The growing gap between the training and validation accuracies in later epochs underscores the importance of employing strategies like pruning or early stopping to mitigate overfitting and enhance the model’s ability to generalize to new data.
Figure 10. Training and validation accuracy of the Decision Tree (DT-MobileNetV2).
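Pruning, one of the mitigation strategies mentioned above, can be explored with scikit-learn’s cost-complexity pruning path; thinning out the alpha grid is our assumption, made only to keep the search small:

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

# Candidate pruning strengths derived from the training data itself.
path = DecisionTreeClassifier(random_state=42).cost_complexity_pruning_path(
    X_tr, y_tr)

# Cross-validate over a thinned-out alpha grid; larger alphas prune harder.
pruned = GridSearchCV(DecisionTreeClassifier(random_state=42),
                      {"ccp_alpha": list(path.ccp_alphas[::10])}, cv=3)
pruned.fit(X_tr, y_tr)
print("Pruned DT test accuracy:", pruned.score(X_te, y_te))
```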
Table 2 presents a comparative analysis on the HAM10000 dataset, comparing the performance of the machine learning classifiers paired with the different transfer learning feature extractors.
Table 2. Comparative analysis on HAM10000 dataset.
The results presented in the table compare the performance of various machine learning models (Random Forest, Support Vector Machine, and Decision Tree) integrated with three different deep learning feature extraction architectures (MobileNetV2, DenseNet121, and EfficientNetB0) and the previous studies [,] on the HAM10000 dataset. Overall, Random Forest combined with DenseNet121 achieves the highest accuracy (98.32%) compared to all the other methods along with excellent precision, recall, and F1-score (98%), demonstrating its robust ability to classify dermatological images. SVM with MobileNetV2 closely follows, achieving 98.20% accuracy and maintaining high precision, recall, and F1-score values (98%), showcasing the effectiveness of MobileNetV2 for feature extraction. Interestingly, Random Forest with MobileNetV2 also performs competitively (97.86% accuracy). However, Decision Tree models exhibit a noticeably lower performance across all feature extractors, with the highest accuracy being 85.39% when paired with MobileNetV2. This highlights the limitations of simpler models like Decision Trees for complex datasets, even with advanced feature extraction. EfficientNetB0, although effective, slightly underperforms compared to MobileNetV2 and DenseNet121 in most scenarios, with its best accuracy being 97.51% when paired with SVM. These results underscore the importance of choosing the right combination of feature extractor and classifier, with DenseNet121 and MobileNetV2 proving to be particularly effective for dermatological image analysis.

5. Conclusions and Future Scope

This research reviewed existing models to understand their strengths and limitations, applied advanced preprocessing techniques to enhance image quality, and developed models to detect skin diseases across diverse skin tones. Due to the unavailability of a black skin dataset, the HAM10000 dataset was utilized in this study. Effective preprocessing methods, including resizing, normalization, and dimensionality reduction using PCA, along with model fine-tuning, such as dropout layers and hyperparameter optimization, helped mitigate overfitting. The SMOTE technique was applied to address the data imbalance, improving the models’ generalization capabilities. The results demonstrated that Random Forest, when paired with DenseNet121, achieved the highest accuracy of 98.32%, followed closely by SVM with MobileNetV2 at 98.20%. The Decision Tree model with MobileNetV2 achieved an accuracy of 85.39%. In comparison, models utilizing EfficientNetB0 for feature extraction performed slightly lower, with SVM, Random Forest, and Decision Tree achieving accuracies of 97.51%, 96.68%, and 84.96%, respectively. These findings highlight the superior performance of the proposed models over existing approaches on the HAM10000 dataset, emphasizing their potential for dermoscopic applications.
Future research will explore advanced architectures, such as transformers, to further enhance model performance. Techniques like Grad-CAM will be investigated to improve the interpretability and explainability of model predictions. Additionally, integrating multimodal data, such as patient demographics, medical history, and genetic information, has the potential to enhance model accuracy and enable personalized treatment recommendations. Further preprocessing methods, including histogram equalization, gray-scale conversion, color space transformation, and edge detection, could also improve the detection of skin diseases, particularly in melanin-rich skin, where conditions are often harder to detect due to lower contrast. Expanding datasets to include images of black and diverse skin tones will remain a key focus for improving generalization across populations.

Author Contributions

Conceptualization, A.A. and F.S.; methodology, A.A., F.S. and N.S.E.; software, A.A.; validation, S.B., A.M.A.; formal analysis, A.A., F.S., N.S.E., S.B. and A.M.A.; investigation, S.B. and A.M.A.; resources, S.B. and A.M.A.; data curation, A.A.; writing—original draft preparation, A.A. and F.S.; writing—review and editing, A.A., F.S., N.S.E., S.B. and A.M.A.; visualization, A.A.; supervision, F.S. and N.S.E.; project administration, F.S. and S.B.; funding acquisition, F.S. and S.B. All authors have read and agreed to the published version of the manuscript.

Funding

This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under grant no. (GPIP: 1907-612-2024). The authors therefore gratefully acknowledge the DSR for its technical and financial support.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The datasets are available upon request.

Acknowledgments

This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, under grant no. (GPIP: 1907-612-2024). The authors therefore gratefully acknowledge the DSR for its technical and financial support.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Rezk, E.; Eltorki, M.; El-Dakhakhni, W. Leveraging artificial intelligence to improve the diversity of dermatological skin color pathology: Protocol for an algorithm development and validation study. JMIR Res. Protoc. 2022, 11, e34896. [Google Scholar] [CrossRef]
  2. Aljohani, K.; Turki, T. Automatic classification of melanoma skin cancer with deep convolutional neural networks. AI 2022, 3, 512–525. [Google Scholar] [CrossRef]
  3. Aswin, R.B.; Jaleel, J.A.; Salim, S. Implementation of ANN classifier using MATLAB for skin cancer detection. Int. J. Comp. Sci. Mob. Comput. 2013, 1002, 87–94. [Google Scholar]
  4. Stafford, H.; Buell, J.; Chiang, E.; Ramesh, U.; Migden, M.; Nagarajan, P.; Amit, M.; Yaniv, D. Non-melanoma skin cancer detection in the age of advanced technology: A review. Cancers 2023, 15, 3094. [Google Scholar] [CrossRef]
  5. Kumari, S.; Umrao, S.; Kushwaha, D. Decoding the Skin with AI: A Review of Cutting-Edge Technologies and Applications. In Proceedings of the 2024 2nd International Conference on Disruptive Technologies (ICDT), Greater Noida, India, 15–16 March 2024; pp. 1276–1281. [Google Scholar]
  6. Xu, L.; Jackowski, M.; Goshtasby, A.; Roseman, D.; Bines, S.; Yu, C.; Dhawan, A.; Huntley, A. Segmentation of skin cancer images. Image Vis. Comput. 1999, 17, 65–74. [Google Scholar] [CrossRef]
  7. Amelard, R.; Glaister, J.; Wong, A.; Clausi, D.A. Melanoma decision support using lighting-corrected intuitive feature models. In Computer Vision Techniques for the Diagnosis of Skin Cancer; Springer: Berlin/Heidelberg, Germany, 2014; pp. 193–219. [Google Scholar]
  8. Senthil, V.; Shreyaa, V.; Kothandapany, V. Deep learning and rules-based hybrid approach to improve the accuracy of early detection of skin cancer. Authorea Prepr. 2022. [Google Scholar] [CrossRef]
  9. Kassani, S.H.; Kassani, P.H. A comparative study of deep learning architectures on melanoma detection. Tissue Cell 2019, 58, 76–83. [Google Scholar] [CrossRef] [PubMed]
  10. Hekler, A.; Utikal, J.S.; Enk, A.H.; Solass, W.; Schmitt, M.; Klode, J.; Schadendorf, D.; Sondermann, W.; Franklin, C.; Bestvater, F.; et al. Deep learning outperformed 11 pathologists in the classification of histopathological melanoma images. Eur. J. Cancer 2019, 118, 91–96. [Google Scholar] [CrossRef] [PubMed]
  11. Chattopadhyay, S.; Dey, A.; Singh, P.K.; Sarkar, R. DRDA-Net: Dense residual dual-shuffle attention network for breast cancer classification using histopathological images. Comput. Biol. Med. 2022, 145, 105437. [Google Scholar] [CrossRef] [PubMed]
  12. Jahanbani, S.; Hansen, P.S.; Blum, L.K.; Bastounis, E.E.; Ramadoss, N.S.; Pandrala, M.; Kirschmann, J.M.; Blacker, G.S.; Love, Z.Z.; Weissman, I.L.; et al. Increased macrophage phagocytic activity with TLR9 agonist conjugation of an anti-Borrelia burgdorferi monoclonal antibody. Clin. Immunol. 2023, 246, 109180. [Google Scholar] [CrossRef]
  13. Lallas, A.; Apalla, Z.; Argenziano, G.; Longo, C.; Moscarella, E.; Specchio, F.; Raucci, M.; Zalaudek, I. The dermatoscopic universe of basal cell carcinoma. Dermatol. Pract. Concept. 2014, 4, 11. [Google Scholar] [CrossRef]
  14. Akay, B.N.; Kocyigit, P.; Heper, A.O.; Erdem, C. Dermatoscopy of flat pigmented facial lesions: Diagnostic challenge between pigmented actinic keratosis and lentigo maligna. Br. J. Dermatol. 2010, 163, 1212–1217. [Google Scholar] [CrossRef] [PubMed]
  15. Zaballos, P.; Salsench, E.; Serrano, P.; Cuellar, F.; Puig, S.; Malvehy, J. Studying regression of seborrheic keratosis in lichenoid keratosis with sequential dermoscopy imaging. Dermatology 2010, 220, 103–109. [Google Scholar] [CrossRef] [PubMed]
  16. Moscarella, E.; Zalaudek, I.; Pellacani, G.; Eibenschutz, L.; Catricalà, C.; Amantea, A.; Panetta, C.; Argenziano, G. Lichenoid keratosis-like melanomas. J. Am. Acad. Dermatol. 2011, 65, e85–e87. [Google Scholar] [CrossRef]
  17. Zaballos, P.; Puig, S.; Llambrich, A.; Malvehy, J. Dermoscopy of dermatofibromas: A prospective morphological study of 412 cases. Arch. Dermatol. 2008, 144, 75–83. [Google Scholar] [CrossRef] [PubMed]
  18. Rosendahl, C.; Cameron, A.; McColl, I.; Wilkinson, D. Dermatoscopy in routine practice: ‘Chaos and clues’. Aust. Fam. Physician 2012, 41, 482–487. [Google Scholar]
  19. Zaballos, P.; Daufí, C.; Puig, S.; Argenziano, G.; Moreno-Ramírez, D.; Cabo, H.; Marghoob, A.A.; Llambrich, A.; Zalaudek, I.; Malvehy, J. Dermoscopy of solitary angiokeratomas: A morphological study. Arch. Dermatol. 2007, 143, 318–325. [Google Scholar] [CrossRef] [PubMed]
  20. Zaballos, P.; Carulla, M.; Ozdemir, F.; Zalaudek, I.; Bañuls, J.; Llambrich, A.; Puig, S.; Argenziano, G.; Malvehy, J. Dermoscopy of pyogenic granuloma: A morphological study. Br. J. Dermatol. 2010, 163, 1229–1237. [Google Scholar] [CrossRef]
  21. Vasconcelos, C.N.; Vasconcelos, B.N. Experiments using deep learning for dermoscopy image analysis. Pattern Recognit. Lett. 2020, 139, 95–103. [Google Scholar] [CrossRef]
  22. Mahmoud, N.M.; Soliman, A.M. Early automated detection system for skin cancer diagnosis using artificial intelligent techniques. Sci. Rep. 2024, 14, 9749. [Google Scholar] [CrossRef]
  23. Ashwath, V.A.; Sikha, O.K.; Benitez, R. TS-CNN: A three-tier self-interpretable CNN for multi-region medical image classification. IEEE Access 2023, 11, 78402–78418. [Google Scholar] [CrossRef]
  24. Singh, J.; Sandhu, J.K.; Kumar, Y. An analysis of detection and diagnosis of different classes of skin diseases using artificial intelligence-based learning approaches with hyper parameters. Arch. Comput. Methods Eng. 2024, 31, 1051–1078. [Google Scholar] [CrossRef]
  25. Lopez, A.R.; Giro-I-Nieto, X.; Burdick, J.; Marques, O. Skin lesion classification from dermoscopic images using deep learning techniques. In Proceedings of the 2017 13th IASTED International Conference on Biomedical Engineering (BioMed), Innsbruck, Austria, 20–21 February 2017. [Google Scholar]
  26. Bhadula, S.; Sharma, S.; Juyal, P.; Kulshrestha, C. Machine learning algorithms based skin disease detection. Int. J. Innov. Technol. Explor. Eng. 2019, 9, 4044–4049. [Google Scholar] [CrossRef]
  27. Allugunti, V.R. A machine learning model for skin disease classification using convolution neural network. Int. J. Comput. Program. Database Manag. 2022, 3, 141–147. [Google Scholar] [CrossRef]
  28. Hameed, N.; Shabut, A.M.; Hossain, M.A. Multi-class skin diseases classification using deep convolutional neural network and support vector machine. In Proceedings of the 2018 12th International Conference on Software, Knowledge, Information Management & Applications (SKIMA 2018), Phnom Penh, Cambodia, 3–5 December 2018. [Google Scholar]
  29. ALEnezi, N.S.A. A method of skin disease detection using image processing and machine learning. Procedia Comput. Sci. 2019, 163, 85–92. [Google Scholar] [CrossRef]
  30. Cheong, K.H.; Tang, K.J.W.; Zhao, X.; Koh, J.E.W.; Faust, O.; Gururajan, R.; Ciaccio, E.J.; Rajinikanth, V.; Acharya, U.R. An automated skin melanoma detection system with melanoma-index based on entropy features. Biocybern. Biomed. Eng. 2021, 41, 997–1012. [Google Scholar] [CrossRef]
  31. Tan, T.Y.; Zhang, L.; Jiang, M. An intelligent decision support system for skin cancer detection from dermoscopic images. In Proceedings of the 2016 12th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD 2016), Changsha, China, 13–15 August 2016. [Google Scholar]
  32. Maia, L.B.; Lima, A.; Pereira, R.M.; Junior, G.B.; de Almeida, J.D.; de Paiva, A.C. Evaluation of melanoma diagnosis using deep features. In Proceedings of the 2018 25th International Conference on Systems, Signals and Image Processing (IWSSIP), Maribor, Slovenia, 20–22 June 2018. [Google Scholar]
  33. Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161. [Google Scholar] [CrossRef]
  34. Kuo, B.C.; Ho, H.H.; Li, C.H.; Hung, C.C.; Taur, J.S. A kernel-based feature selection method for SVM with RBF kernel for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2013, 7, 317–326. [Google Scholar] [CrossRef]
  35. Velaga, N.; Vardineni, V.R.; Tupakula, P.; Pamidimukkala, J.S. Skin cancer detection using the HAM10000 dataset: A comparative study of machine learning models. In Proceedings of the 2023 Global Conference on Information Technologies and Communications (GCITC), Bangalore, India, 1–3 December 2023; IEEE: New York, NY, USA, 2023; pp. 1–7. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
