Article

SkinLesNet: Classification of Skin Lesions and Detection of Melanoma Cancer Using a Novel Multi-Layer Deep Convolutional Neural Network

School of Science, Engineering & Environment, University of Salford, Manchester M5 4WT, UK
*
Author to whom correspondence should be addressed.
Cancers 2024, 16(1), 108; https://doi.org/10.3390/cancers16010108
Submission received: 16 November 2023 / Revised: 20 December 2023 / Accepted: 22 December 2023 / Published: 24 December 2023
(This article belongs to the Collection Artificial Intelligence and Machine Learning in Cancer Research)

Simple Summary

While melanoma accounts for 4% of skin cancer cases, it causes 75% of skin-cancer-related deaths. The survival rate for melanoma is higher for early-identified cases, so improved access to diagnosis and screening programs is essential for addressing skin cancer deaths. Computer-aided diagnosis utilizing machine learning can be used to differentiate malignant and benign skin lesions. There is significant research into the use of convolutional neural networks to classify skin lesions from dermoscopic images. However, to provide cost-effective and accessible options for early detection of malignant melanoma, smartphone applications capable of accurately classifying skin lesions from images taken on a smartphone would be beneficial. This research investigates a previously underexplored dataset of smartphone images and develops a novel multi-layer deep convolutional neural network model, named SkinLesNet, to classify three types of skin lesions, including melanoma. Further studies to validate the model should be conducted as other image datasets become available.

Abstract

Skin cancer is a widespread disease that typically develops on the skin due to frequent exposure to sunlight. Although cancer can appear on any part of the human body, skin cancer accounts for a significant proportion of all new cancer diagnoses worldwide. There are substantial obstacles to the precise diagnosis and classification of skin lesions because of morphological variety and indistinguishable characteristics across skin malignancies. Recently, deep learning models have been used in the field of image-based skin-lesion diagnosis and have demonstrated diagnostic efficiency on par with that of dermatologists. To increase classification efficiency and accuracy for skin lesions, a cutting-edge multi-layer deep convolutional neural network termed SkinLesNet was built in this study. The dataset used in this study was extracted from the PAD-UFES-20 dataset and was augmented. The PAD-UFES-20-Modified dataset includes three common forms of skin lesions: seborrheic keratosis, nevus, and melanoma. To comprehensively assess SkinLesNet’s performance, its evaluation was expanded beyond the PAD-UFES-20-Modified dataset. Two additional datasets, HAM10000 and ISIC2017, were included, and SkinLesNet was compared to the widely used ResNet50 and VGG16 models. This broader evaluation confirmed SkinLesNet’s effectiveness, as it consistently outperformed both benchmarks across all datasets.

1. Introduction

“Cancer” is the collective term for a group of related diseases in which body cells begin to divide uncontrollably and invade nearby tissues [1]. Skin cancer is one of the most common forms of cancer [2] and commonly occurs when the skin is frequently exposed to sunlight [3]. Ultraviolet rays, which are the primary cause of skin cancer, harm the DNA in skin cells [4]. There are three main types of skin cancer: basal cell carcinoma, squamous cell carcinoma, and melanoma [5]. Non-melanoma skin cancers present a lower risk of spreading to other parts of the body and are easier to treat than melanoma [6]. It is estimated that while melanoma only accounts for 4% of skin cancer cases, it causes 75% of skin-cancer-related deaths [7]. Globally, there were an estimated 287,700 new cases of melanoma in 2018 and an estimated 60,700 deaths from melanoma in the same year [2]. The incidence of melanoma has grown significantly in recent times, partly due to an increase in sun-seeking behaviors [7]. For example, in the United States, the lifetime risk of developing malignant melanoma increased from 1 in 5000 in 1935 to 1 in 74 in 2000 [8].
Patients whose melanoma is identified early have a much better chance of recovery: the five-year survival rate for patients with early-identified malignant melanoma is 94% [8]. Early diagnosis is therefore a critical factor in reducing skin cancer mortality. Dermoscopy is a specialist technology that produces high-resolution magnified images of the skin by controlling light and removing surface skin reflectance [9], and the clinical use of dermatoscopic images has the potential to improve diagnosis rates for melanoma [8] and, ultimately, to save lives [10]. However, given the existing pressures on healthcare systems, cost-effective strategies are needed to facilitate increased screening and diagnosis [11], and there has been significant interest in computer-aided diagnosis [6]. An active area of research is the use of smartphone applications incorporating machine-learning methods to analyze images and assist in early melanoma diagnosis, while reducing pressure on healthcare systems and clinical staff [12].
Clinicians have traditionally utilized the ABCD guidelines to differentiate melanoma from non-malignant skin lesions, with A denoting asymmetry, B denoting border irregularity, C denoting color variations, and D denoting diameter greater than 6 mm [8,13]. However, due to the morphological variation and complex characteristics of skin lesions, there can be challenges with inter-observer and intra-observer concordance, further motivating the exploration of computer-aided diagnosis techniques [14]. By utilizing texture cues, geometrical aspects, color features, and combinations of these features, medical images can be used to identify and classify skin cancer conditions [15]. The use of traditional machine-learning techniques in melanoma diagnosis has typically involved feature extraction from dermatoscopic images, to build a set of relevant features that can be used to train a classification model, often utilizing the ABCD rule to define an appropriate feature set [14,16,17,18]. Due to the complexity of skin lesions, however, these handcrafted geometric properties alone are often insufficient for recognizing skin malignancies [18].
In the field of image-based melanoma diagnosis, the development of deep learning and, in particular, convolutional neural networks (CNNs) has reduced reliance on manual feature-extraction techniques. CNN-based classification methods have also demonstrated diagnostic effectiveness comparable to that of dermatologists [19]. In [20], researchers mainly concentrated on the automatic identification and categorization of skin cancer, as computer-aided screening technologies had become more prevalent. Many studies have been conducted to categorize melanoma skin lesions using CNNs on various datasets, including MNIST HAM10000 [21], the International Skin Imaging Collaboration 2018 (ISIC) [22], the PH2 public database [23], and the International Symposium on Biomedical Imaging (ISBI) [24]. Using these datasets, promising results have been achieved with a variety of pre-trained CNN models, including ResNet50 [25], ImageNet50 [26], and DenseNet201 [27], alongside additional cutting-edge models. Some datasets, such as the Dermatological and Surgical Assistance Program at the Federal University of Espirito Santo (PAD-UFES-20) dataset [28], have not been explored as extensively for the identification of skin lesions.
In order to classify and discriminate skin lesions, we created and used a modified version of the PAD-UFES-20 dataset, named PAD-UFES-20-Modified. This dataset was chosen for its unique characteristics and practicality. It consists of 1314 samples that correspond to three primary types of skin lesions: nevus, melanoma, and seborrheic keratosis. The dataset was collated from images taken using smartphones, meaning it is particularly well suited to investigating the possibility of melanoma detection and diagnosis through smartphone applications [28]. The integration of patient medical data offers insightful contextual information for each lesion, and this contextual richness, together with the number of samples and the thorough annotations, supports comprehensive model training and assessment. The dataset’s clinically significant features align with the objectives of this study, and its diverse samples and comprehensive information reflect the complexity of real-world scenarios, making it a valuable resource for developing and testing the SkinLesNet model.

Contributions

The introduction provides an overview of the challenges posed by skin cancer, the importance of early detection, and the prevalence and severity of melanoma. The following points highlight the key contributions of the paper:
  • The development and implementation of a cutting-edge multi-layer CNN model represents a significant contribution. The model was specifically designed for the classification and discrimination of skin lesions, and its superior performance—achieving a 96% accuracy rate—demonstrates its effectiveness, compared to established models like ResNet50 and VGG16.
  • This research contributes to the field by utilizing the PAD-UFES-20 dataset, which has not been as extensively explored for skin-lesion classification. This dataset contains smartphone images rather than dermatoscopic images, which is particularly relevant to the development of smartphone applications for accessible, scalable, and cost-effective melanoma diagnosis.
  • This study evaluated the proposed model on diverse datasets, including the PAD-UFES-20-Modified dataset, HAM10000, and ISIC2017. This approach enhanced the generalizability of the model, showcasing its adaptability to different datasets and real-world scenarios.
  • This study’s primary contribution lies in achieving a high accuracy rate of 96% in classifying skin lesions. This is a crucial contribution, considering the complexities and challenges associated with accurate dermatological diagnoses.
The paper has been structured as follows: Section 2 briefly highlights the related work; Section 3 explains the methodology used to clean and preprocess the dataset, and to build and train the model; Section 4 discusses the results and evaluation metrics of the models; Section 5 contains the conclusion.

2. Literature Review

Machine learning and AI have made significant advances in cancer prediction and detection in recent years [29]. Dermoscopy is a non-invasive imaging method for taking comprehensive images of skin lesions [30]. The development of computer-aided diagnosis systems has been spurred by the need for accurate and early identification of skin diseases, including melanoma and other types of skin cancer [31]. When used to automate the classification of skin lesions based on dermoscopy images, deep learning methods, in particular CNNs, have demonstrated encouraging results [32]. Earlier methods for classifying skin lesions relied on manually engineered characteristics and conventional machine-learning techniques [33]. The use of deep learning techniques, particularly CNNs, has reduced the reliance on manual feature extraction in skin-lesion classification [34]. For classification tasks involving skin lesions, well-known CNN architectures like AlexNet [35], VGGNet [36], and InceptionNet [37] have been adapted and refined.
Deep learning models were developed in [38], where CNN models were trained and evaluated on the HAM10000 dataset, delivering 90% validation accuracy when classifying various forms of skin malignancies. In [39], with the help of a data augmentation technique, a CNN classification model was proposed and trained using a public dataset of skin lesions that included 600 test and 6162 training images, achieving a classification accuracy of 89.2%. In [40], benign and malignant skin lesions were classified using a CNN with a novel regularizer; the model was trained on a dataset obtained from the International Skin Imaging Collaboration (ISIC) databank and achieved an overall accuracy of 97.49%. In [41], by using fuzzy C-means clustering and K-means clustering, the researchers classified skin lesions with a CNN model trained on the ISIC dataset, achieving an accuracy of 98.83%.
In [42], transfer-learning techniques were used with two CNN architectures, ResNet50 and DenseNet169. The models were trained and validated on the HAM10000 dataset, and the highest-performing model achieved an accuracy of 91.2%. In addition to existing methods, such as border extraction utilizing XOR with regression logic, another CNN model was proposed in [43]; it was trained on the PH2 and ISBI 2017 datasets and achieved a 97.8% accuracy rate. In [44], a transfer-learning strategy was employed to enhance the performance of the proposed CNN model, using a publicly available Kaggle dataset, which resulted in an accuracy of 79.45%. Another CNN model was designed by [45] and trained on a medical dataset acquired from Al-Kindi Hospital and Baghdad Medical City to classify skin lesions, obtaining an accuracy of 89%. In [46], seven different types of skin problems were categorized using a CNN.
In [47], a U-Net-based model was proposed for semantic segmentation of skin-lesion images, and the proposed model was validated against ISIC2018, ISIC2017, and PH2. In [48], a U-Net model was also used for skin-lesion semantic segmentation and was evaluated on ISIC2017 and ISIC2018, with accuracies of 94.9% and 95.4%, respectively. In [49], a CNN model was developed to classify blemishes as moderate skin cancer or acne, using images of diverse benign skin cancers and acne cases, and it yielded a precision of 96.4%. In [50], a dataset of skin cancer dermoscopy images was subjected to a number of data-cleaning steps to reduce noise and enhance image quality, and a CNN model was then employed for categorization, achieving an accuracy of 98.38%. In [51], the HAM10000 dataset was used to train a Siamese neural network. While its classification accuracy was lower than that of some other models, this approach was able to detect examples that did not belong to the training classes.
In [52], the PH2 dataset of dermoscopic images was used to build a CNN model, which achieved a test-set accuracy of over 95%. In [53], a six-layer CNN model was created and trained on the ISIC dataset and showed promise, with an accuracy of 89.30% in classifying skin lesions. Another state-of-the-art CNN model was designed and developed by [54]; it achieved 97.50% accuracy on the ISIC and PH2 datasets when classifying skin lesions.
In [55], along with data augmentation and image preparation procedures, a CNN model, which obtained 95.2% accuracy, was trained and tested on the HAM10000 dataset. In [56], a CNN model was designed and trained on the ISIC2019 dataset, successfully classifying eight types of skin malignancies with a 94.92% test-accuracy score. In [57], the DenseNet201 model was fine-tuned and trained on the HAM10000 dataset, to classify skin lesions in dermoscopy images, obtaining 86.91% test accuracy. In [58], a deep CNN was created and trained on the ISBI 2017 dataset, to classify melanoma skin lesions. This network achieved 87% accuracy on test data.
Table 1 presents detailed performance metrics and a comparative analysis of the implemented CNN models on each dermoscopy dataset.

3. Methodology

To develop the SkinLesNet model, the Keras and TensorFlow Python libraries were used. Google Colab, a cloud-based platform built on top of the Jupyter Notebook infrastructure, was used to execute the SkinLesNet model’s Python code.
The performance of the SkinLesNet model was compared to the ResNet50 and VGG16 models. ResNet50 [60] and VGG16 [61] were selected as benchmarks because they are popular CNN architectures known for their effectiveness in image-classification tasks, including medical-imaging applications [62,63,64,65]. ResNet50, part of the ResNet family, utilizes skip connections that aid in mitigating the vanishing gradient problem during training, enabling the network to effectively learn from a broader set of features [62]. On the other hand, VGG16 is recognized for its simple and uniform architecture with multiple convolutional layers, which makes it efficient in learning various image representations [63]. Both models have demonstrated strong performance in image-based tasks, due to their ability to extract meaningful features from medical images, thus making them suitable choices for skin cancer detection.

3.1. Dataset and Data Augmentation

The Programa de Assistencia Dermatologica e Cirurgica (PAD) is a voluntary program at the Federal University of Espirito Santo (UFES) that provides free skin-lesion treatment, mostly to individuals who cannot afford private medical services. In the 19th century, millions of European immigrants settled in Espirito Santo state, and most of those immigrants and their descendants were unprepared for Brazil’s tropical climate. Because skin cancer and skin lesions are so common in this state, PAD is crucial in helping those who are affected. Because they were taken with different devices, the images in this collection have different resolutions, sizes, and lighting conditions. To accurately detect skin cancer, this heterogeneity must be addressed.
The initial PAD-UFES-20 dataset comprised 52 melanoma, 244 nevus, and 235 seborrheic keratosis images, as well as several other categories of benign and malignant skin lesions. Due to the risks associated with melanoma [7], melanoma was the focus of this study. Misdiagnosis of melanoma as seborrheic keratosis or nevus could lead to suboptimal patient outcomes, yet these are two of the most common misdiagnoses [66,67].
The PAD-UFES-20-Modified dataset, comprising 1314 samples of seborrheic keratosis, nevus, and melanoma, stands out for its diversity and real-world relevance, as it includes both clinical images and patient medical records. During the data preprocessing stage, the images were standardized to 224 × 224 pixels and split into training and test sets, to ensure model generalizability. The choice of PAD-UFES-20-Modified, with its diverse samples of smartphone images and comprehensive information, aligns with the complexity of real-world scenarios, making it a valuable resource for developing and testing the SkinLesNet model.
In the utilization of the PAD-UFES-20-Modified dataset, a data augmentation strategy was employed, to address the challenges posed by imbalanced class distributions and limited original data. To enhance the dataset’s diversity and to mitigate the risk of model bias, a geometric-transformation [68] data-augmentation technique was implemented, introducing variations through random flips and rotations or translations. Consequently, the dataset was substantially expanded, yielding 520 melanoma, 408 nevus, and 386 seborrheic keratosis images, totalling 1314 images. Figure 1 provides examples from the image collections utilized in this investigation.
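For illustration, a minimal Keras sketch of this geometric-transformation augmentation step is shown below. The specific rotation and translation ranges are assumptions chosen for demonstration, as the exact parameter values used are not stated in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers

# A minimal sketch of the geometric augmentation pipeline (random flips,
# rotations, and translations); the factor values are illustrative assumptions.
augmenter = tf.keras.Sequential([
    layers.RandomFlip("horizontal_and_vertical"),
    layers.RandomRotation(0.1),          # rotate by up to ±10% of a full turn
    layers.RandomTranslation(0.1, 0.1),  # shift by up to 10% in each direction
])

def augment_image(image):
    """Return one augmented variant of a (height, width, 3) image tensor."""
    return augmenter(tf.expand_dims(image, 0), training=True)[0]
```

Repeatedly applying such transforms to images from the smaller classes yields new variants that can be added back to the dataset to balance the class distribution.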
The distribution of the three skin lesion classes is shown in a pie chart in Figure 2. The pie chart effectively visualizes class balance or imbalance within the dataset, making it easier to grasp the relative proportions of the three classes.
To achieve uniformity and compatibility with the deep learning model’s input size requirements, it was necessary to standardize the image size. Every image was therefore reduced in size to a square with dimensions of 224 × 224 pixels.
The dataset was split into training and test sets in an 80:20 ratio, with 80% of the data used for training and 20% held back for testing (Table 2). To ensure that classes were distributed randomly across the training and test sets, the train_test_split function randomly shuffled the data before splitting. This division was essential to assess the model’s generalizability to new data during testing and to guard against overfitting.
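A minimal sketch of this resizing and splitting step is shown below, assuming the images and integer class labels have already been loaded into NumPy arrays; the pixel rescaling to [0, 1] and the fixed random seed are illustrative assumptions rather than stated choices.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

def preprocess_and_split(images, labels):
    # Standardize every image to 224 x 224 pixels; rescaling to [0, 1] is an
    # assumed (and common) normalization step.
    resized = np.stack(
        [tf.image.resize(img, (224, 224)).numpy() for img in images]
    ) / 255.0
    # 80:20 split; train_test_split shuffles the data before splitting.
    return train_test_split(resized, np.asarray(labels),
                            test_size=0.2, random_state=42)

# X_train, X_test, y_train, y_test = preprocess_and_split(images, labels)
```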

3.2. Comparative Datasets

Apart from our primary dataset for this research, we used two more well-known and publicly available datasets, HAM10000 [69] and ISIC2017 [70]. The HAM10000 dataset, short for “Human Against Machine with 10,000 training images”, comprises 10,015 dermatoscopic skin-lesion images. The lesions in the dataset are divided into several categories, such as basal-cell carcinoma, squamous-cell carcinoma, seborrheic keratosis, melanoma, and nevus (moles) [71]. As melanoma is the deadliest type of skin cancer, it is of special interest. The ISIC2017 dataset is provided by the International Skin Imaging Collaboration (ISIC), a global initiative to advance the early detection and diagnosis of skin cancer, particularly melanoma. It likewise contains a sizable number of dermatoscopic images of skin lesions, divided into a number of classes with a particular emphasis on melanoma; basal-cell carcinoma, seborrheic keratosis, and nevus (moles) are further classes [72].

3.3. Proposed Model Architecture

We thoroughly tested numerous layer combinations in our neural network architecture before recommending a four-layer CNN model. The number of convolutional layers, the kind and size of filters, the activation functions, and the presence of pooling layers were all adjusted during these analyses. Our goal was to find the architecture that best balanced model complexity and efficiency. To make sure the model could correctly diagnose skin lesions, we evaluated several configurations using measures including accuracy, precision, recall, and F1-score. After extensive testing and analysis, we concluded that the four-layer CNN architecture, dubbed SkinLesNet, produced the most promising outcomes in terms of precision and robustness, supporting its recommendation as the final model for skin-lesion classification (Figure 3).
The decision to utilize a multi-layer CNN was grounded in its proven efficacy in learning hierarchical features from complex image data, particularly in the domain of medical image analysis. Studies in image-based melanoma diagnosis have consistently shown that deep CNNs can achieve diagnostic effectiveness comparable to dermatologists. The unique strength of the SkinLesNet model lies in its four-layer architecture, systematically optimized through extensive testing. The successive convolutional layers act as feature extractors, capturing intricate patterns in skin-lesion images. The inclusion of max-pooling layers aids in spatial-dimension reduction while retaining essential features. The ReLU activation function adds non-linearity to the model [73], allowing it to recognize complex patterns in the data. This architecture is adept at processing diverse features, including texture cues, geometrical aspects, and color features, crucial for accurate skin-lesion classification. The choice of this model, with its distinct architecture and robust performance metrics, positions SkinLesNet as a unique and effective approach compared to other methods, providing a strong foundation for feature extraction and contributing to superior predictions and classifications in the realm of skin-lesion diagnosis.
The input layer accepted RGB images with a resolution of 224 × 224 pixels. The first convolutional layer, comprising 32 filters with a 3 × 3 filter size and a ReLU activation function, captured low-level image characteristics. It was followed by a max-pooling layer that shrank the spatial dimensions. The model comprised three further convolutional layers, each with an increasing number of filters, and each followed by a max-pooling layer. To avoid overfitting, a dropout layer with a rate of 0.5 was added. The feature maps were then flattened and passed to a final fully connected hidden layer with 64 neurons and a ReLU activation function.
An additional dropout layer with a dropout rate of 0.3 preceded the output layer, which consisted of three output neurons with a SoftMax activation function to calculate probabilities for each class. The model was trained to produce precise estimations and to decrease classification errors on previously unseen images of skin lesions. The Adam optimizer was chosen, with a learning rate of 0.001 and an exponential moving-average momentum of 0.99, in order to improve the efficacy of training. Because the target labels were encoded as integers, “sparse_categorical_crossentropy” was used as the loss function. The model’s performance was assessed using the accuracy metric, i.e., the percentage of labels correctly predicted during training and evaluation.
With these settings, the CNN model was compiled for training on the skin-lesion dataset. During training, the model used the Adam optimizer to adjust its internal parameters and improve its predictions on skin-lesion images.
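Putting the pieces together, the following is a minimal Keras sketch of the architecture described above. The filter counts of the second to fourth convolutional layers and the interpretation of the stated 0.99 momentum as Adam’s beta_1 coefficient are assumptions, since the text does not specify them exactly.

```python
from tensorflow.keras import layers, models, optimizers

def build_skinlesnet():
    """A sketch of the four-layer SkinLesNet CNN described in the text."""
    model = models.Sequential([
        layers.Input(shape=(224, 224, 3)),             # 224 x 224 RGB input
        layers.Conv2D(32, (3, 3), activation="relu"),  # first layer: 32 filters
        layers.MaxPooling2D(),
        layers.Conv2D(64, (3, 3), activation="relu"),   # assumed filter count
        layers.MaxPooling2D(),
        layers.Conv2D(128, (3, 3), activation="relu"),  # assumed filter count
        layers.MaxPooling2D(),
        layers.Conv2D(256, (3, 3), activation="relu"),  # assumed filter count
        layers.MaxPooling2D(),
        layers.Dropout(0.5),                        # guard against overfitting
        layers.Flatten(),
        layers.Dense(64, activation="relu"),        # fully connected hidden layer
        layers.Dropout(0.3),
        layers.Dense(3, activation="softmax"),      # three lesion classes
    ])
    model.compile(
        # beta_1=0.99 interprets the paper's "exponential momentum of 0.99";
        # this mapping is an assumption.
        optimizer=optimizers.Adam(learning_rate=0.001, beta_1=0.99),
        loss="sparse_categorical_crossentropy",     # integer-encoded labels
        metrics=["accuracy"],
    )
    return model
```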

3.4. Model Training

Various metrics, including training and validation accuracy and loss, were used to track progress throughout the training phase. These metrics offered information about the model’s performance and guided the adjustment of its hyperparameters and architectural design. The training phase, in general, was a data-driven, iterative procedure in which the model worked to extract significant patterns and abstractions from the training data, in order to produce accurate predictions on new, unseen data. A batch size of 32, 100 training epochs, and a validation split of 0.2 were used during training. These hyperparameters are summarized in Table 3.
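Concretely, the training call with these hyperparameters might look as follows, reusing the build_skinlesnet function and the preprocessed arrays from the sketches above; the absence of callbacks is an assumption.

```python
# Train with the hyperparameters from Table 3, holding out a further 20% of
# the training data for validation; X_train/y_train come from the earlier sketch.
model = build_skinlesnet()
history = model.fit(
    X_train, y_train,
    batch_size=32,
    epochs=100,
    validation_split=0.2,
)
```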
The model converged reliably during training at a moderate learning rate of 0.001. A smaller learning rate allows the model to make incremental weight adjustments, avoiding the overshooting or divergence that can occur at higher rates. It also makes the optimization process more precise and controllable, which helps the model generalize and become less prone to overfitting [74]. Smaller batch sizes introduce noise into the gradient estimates, which can act as a form of regularization and help to avoid overfitting; this effect is particularly advantageous when working with limited data. Batch sizes of up to 32 are also memory-efficient and allow deep neural networks to be trained even on computers with limited GPU capacity [75].
The Adam optimizer [76] adapts the learning rates during training, which can accelerate convergence. For each parameter, it maintains two moving averages: the mean of the gradients (the first moment) and the uncentered variance of the gradients (the second moment). Using these moving averages, the optimizer modifies the learning rate for each parameter based on how its gradient behaves. Adam is suitable for a variety of deep learning problems and is robust to noisy gradients.
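For reference, these moving averages correspond to the standard Adam update rule, where $g_t$ is the gradient at step $t$, $\beta_1$ and $\beta_2$ are the decay rates of the two moments, $\alpha$ is the learning rate, and $\epsilon$ is a small stability constant:

$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\,g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2,$$

$$\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad \theta_t = \theta_{t-1} - \alpha\,\frac{\hat{m}_t}{\sqrt{\hat{v}_t}+\epsilon}.$$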

4. Results and Discussions

This section methodically presents the model’s performance on unseen data after training. Performance metrics, such as accuracy, precision, recall, and F1-score, help to determine the model’s strengths and weaknesses. The discussion of visuals, confusion matrices, and comparisons to the benchmark models all give a fuller understanding of the model’s possibilities and limitations. Overall, the findings and analysis provide both practitioners and scholars in the field of medical image analysis with meaningful data that bridge the gap between the model’s theoretical conceptualization and its practical implementation.
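As an illustration of how these metrics can be obtained from the trained model, the following is a minimal sketch using scikit-learn on the held-out test set; the class-name ordering is an assumption.

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# Predict class probabilities on the test set and take the argmax as the label.
probs = model.predict(X_test)
y_pred = np.argmax(probs, axis=1)

print(confusion_matrix(y_test, y_pred))
print(classification_report(
    y_test, y_pred,
    # Assumed label order; must match the integer encoding of the dataset.
    target_names=["melanoma", "nevus", "seborrheic keratosis"],
))
```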
Figure 4 provides a detailed view of SkinLesNet’s early training stages, showing the evolution of both accuracy and loss over the first 10 epochs. Training and validation accuracy rose gradually during these early epochs and ultimately reached a promising 96% after approximately 100 epochs. A closer examination of the loss curves offers valuable clues about the model’s optimization process and its potential for further improvement.
ResNet50 contains several convolutional-layer combinations with average-pooling and batch-normalization layers. The fully connected layer with 1000 out-features is the last layer in the original ResNet50 model. In this work, that layer was replaced with a stack of fully connected layers in order to fine-tune the ResNet50 model. The first fully connected layer had 2048 out-features and was followed by a dropout layer with a probability of 0.5. The second fully connected layer was identical to the first, and both used a ReLU activation function; dropout with a probability of 0.5 was applied again after the second layer. The final fully connected layer had 2048 in-features and three out-features, for three-class categorization. The VGG16 model was fine-tuned by freezing the earlier layers and unfreezing the last block, so that only its weights were updated during training.
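A minimal Keras sketch of the ResNet50 fine-tuning head described above is shown below. Whether the pretrained backbone weights were frozen, and the activation of the final layer, are not stated in the text, so both are assumptions here.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

# Pretrained ResNet50 backbone without its original 1000-way classifier;
# global average pooling yields a 2048-dimensional feature vector.
base = ResNet50(weights="imagenet", include_top=False,
                pooling="avg", input_shape=(224, 224, 3))
base.trainable = False  # assumption: backbone frozen during fine-tuning

# Replacement head: two 2048-unit ReLU layers with 0.5 dropout, then a
# three-class output, mirroring the description in the text.
finetuned_resnet50 = models.Sequential([
    base,
    layers.Dense(2048, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(2048, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(3, activation="softmax"),  # softmax is an assumption
])
```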
The proposed SkinLesNet model was first trained and tested on the PAD-UFES-20-Modified dataset, where it achieved 96% testing accuracy, with precision, recall, and F1-scores of 97%, 92%, and 92%, respectively. The testing accuracies obtained for the ResNet50 and VGG16 models were 82% and 79%, respectively (Table 4).
The SkinLesNet model was then trained and tested on the HAM10000 dataset, where it achieved 90% testing accuracy, with precision, recall, and F1-scores of 89%, 87%, and 85%, respectively. The testing accuracies obtained for the ResNet50 and VGG16 models were 80% and 75%, respectively (Table 5).
Finally, the SkinLesNet model was trained and tested on the ISIC2017 dataset, where it achieved 92% testing accuracy, with precision, recall, and F1-scores of 80%, 82%, and 75%, respectively. The testing accuracies obtained for the ResNet50 and VGG16 models were 75% and 70%, respectively (Table 6).
The SkinLesNet model significantly outperformed the ResNet50 and VGG16 models on all three datasets. The variations in accuracy across the different datasets can be attributed to inherent differences in dataset characteristics, complexity, and diversity. While SkinLesNet achieved a notable accuracy of 96% on the PAD-UFES-20-Modified dataset, exceeding its results on HAM10000 and ISIC2017, several factors contributed to these differences. The PAD-UFES-20-Modified dataset, with its focus on clinical relevance and diverse lesion representation, aligns closely with the target application, fostering robust model performance. On the other hand, the HAM10000 and ISIC2017 datasets, although widely used, may exhibit variations in lesion types, distributions, or contextual information, potentially posing challenges for accurate classification. Variations in dataset size and annotation quality can also influence model learning.
Despite these promising results, it is essential to acknowledge potential limitations and challenges in the comparison. The choice of datasets, while diverse, may not have encompassed the full spectrum of skin-lesion variations encountered in real-world clinical settings. The model’s performance may also have been influenced by the quality and quantity of data available for training. As with any deep learning model, overfitting remains a concern, although dropout layers were incorporated to mitigate this issue. Continuous efforts in data augmentation and the inclusion of larger and more diverse datasets could further enhance the model’s generalizability. Furthermore, the computational resources required for training and evaluating these models should be considered, especially as the complexity of the architecture increases. Despite these challenges, the SkinLesNet model’s consistently superior performance suggests its potential for practical implementation in dermatological applications, pending further refinement and validation.

5. Conclusions

SkinLesNet’s performance, discussed in the results section, demonstrated strong accuracy while also suggesting areas for improvement. Two benchmark CNN architectures, VGG16 and ResNet50, were analyzed and compared to the proposed SkinLesNet model in this study. The PAD-UFES-20-Modified dataset was used to train and test the SkinLesNet model, which provided an accuracy of 96%, compared to 82% for the ResNet50 model and 79% for the VGG16 model. Two other publicly available datasets, HAM10000 and ISIC2017, were also used to train the model, on which it achieved accuracies of 90% and 92%, respectively. The SkinLesNet model therefore outperformed the two benchmark models on all three datasets. With sufficient computational resources and a well-annotated dataset, significant enhancements in model performance are achievable. Expanding the dataset and employing techniques like active learning or self-supervised learning could further improve model performance. Furthermore, while the problem is currently addressed through image classification, exploring semantic-segmentation models could offer another effective approach to tackling this problem.

Author Contributions

M.A.: conceptualization, data curation, formal analysis, writing—original draft; K.K.: methodology, conceptualization, validation, supervision, project administration, resources; T.M.: methodology, conceptualization, validation; N.T.: methodology, investigation, validation, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

GitHub Repository

The proposed SkinLesNet model is publicly available and accessible on the GitHub repository at the following link: SkinLesNet_Project.

References

  1. Khan, D.; Rahman, A.U.; Kumam, P.; Watthayu, W. A Fractional Analysis of Hyperthermia Therapy on Breast Cancer in a Porous Medium along with Radiative Microwave Heating. Fractal Fract. 2022, 6, 82.
  2. Ferlay, J.; Colombet, M.; Soerjomataram, I.; Mathers, C.; Parkin, D.M.; Piñeros, M.; Znaor, A.; Bray, F. Estimating the global cancer incidence and mortality in 2018: GLOBOCAN sources and methods. Int. J. Cancer 2019, 144, 1941–1953.
  3. Feller, L.; Khammissa, R.; Kramer, B.; Altini, M.; Lemmer, J. Basal cell carcinoma, squamous cell carcinoma and melanoma of the head and face. Head Face Med. 2016, 12, 1–7.
  4. Abdulfatah, E.; Fine, S.W.; Lotan, T.L.; Mehra, R. De Novo neuroendocrine features in prostate cancer. Hum. Pathol. 2022, 127, 112–122.
  5. Linares, M.A.; Zakaria, A.; Nizran, P. Skin cancer. Prim. Care Clin. Off. Pract. 2015, 42, 645–659.
  6. Dildar, M.; Akram, S.; Irfan, M.; Khan, H.U.; Ramzan, M.; Mahmood, A.R.; Alsaiari, S.A.; Saeed, A.H.M.; Alraddadi, M.O.; Mahnashi, M.H. Skin cancer detection: A review using deep learning techniques. Int. J. Environ. Res. Public Health 2021, 18, 5479.
  7. Davis, L.E.; Shalin, S.C.; Tackett, A.J. Current state of melanoma diagnosis and treatment. Cancer Biol. Ther. 2019, 20, 1366–1379.
  8. Rigel, D.S.; Carucci, J.A. Malignant melanoma: Prevention, early detection, and treatment in the 21st century. CA Cancer J. Clin. 2000, 50, 215–236.
  9. Goceri, E. Evaluation of denoising techniques to remove speckle and Gaussian noise from dermoscopy images. Comput. Biol. Med. 2022, 152, 106474.
  10. Rajput, G.; Agrawal, S.; Raut, G.; Vishvakarma, S.K. An accurate and noninvasive skin cancer screening based on imaging technique. Int. J. Imaging Syst. Technol. 2022, 32, 354–368.
  11. Voss, R.K.; Woods, T.N.; Cromwell, K.D.; Nelson, K.C.; Cormier, J.N. Improving outcomes in patients with melanoma: Strategies to ensure an early diagnosis. Patient Relat. Outcome Meas. 2015, 229–242.
  12. Zaidan, A.; Zaidan, B.; Albahri, O.; Alsalem, M.; Albahri, A.; Yas, Q.M.; Hashim, M. A review on smartphone skin cancer diagnosis apps in evaluation and benchmarking: Coherent taxonomy, open issues and recommendation pathway solution. Health Technol. 2018, 8, 223–238.
  13. Nachbar, F.; Stolz, W.; Merkle, T.; Cognetta, A.B.; Vogt, T.; Landthaler, M.; Bilek, P.; Braun-Falco, O.; Plewig, G. The ABCD rule of dermatoscopy: High prospective value in the diagnosis of doubtful melanocytic skin lesions. J. Am. Acad. Dermatol. 1994, 30, 551–559.
  14. Burroni, M.; Corona, R.; Dell’Eva, G.; Sera, F.; Bono, R.; Puddu, P.; Perotti, R.; Nobile, F.; Andreassi, L.; Rubegni, P. Melanoma computer-aided diagnosis: Reliability and feasibility study. Clin. Cancer Res. 2004, 10, 1881–1886.
  15. Gouda, W.; Sama, N.U.; Al-Waakid, G.; Humayun, M.; Jhanjhi, N.Z. Detection of skin cancer based on skin lesion images using deep learning. Healthcare 2022, 10, 1183.
  16. Schindewolf, T.; Stolz, W.; Albert, R.; Abmayr, W.; Harms, H. Classification of melanocytic lesions with color and texture analysis using digital image processing. Anal. Quant. Cytol. Histol. 1993, 15, 1–11.
  17. Das, J.B.A.; Mishra, D.; Das, A.; Mohanty, M.N.; Sarangi, A. Skin cancer detection using machine learning techniques with ABCD features. In Proceedings of the 2022 2nd Odisha International Conference on Electrical Power Engineering, Communication and Computing Technology (ODICON), Bhubaneswar, India, 11–12 November 2022; pp. 1–6.
  18. Salma, W.; Eltrass, A.S. Automated deep learning approach for classification of malignant melanoma and benign skin lesions. Multimed. Tools Appl. 2022, 81, 32643–32660.
  19. Azeem, M.; Javaid, S.; Khalil, R.A.; Fahim, H.; Althobaiti, T.; Alsharif, N.; Saeed, N. Neural Networks for the Detection of COVID-19 and Other Diseases: Prospects and Challenges. Bioengineering 2023, 10, 850.
  20. Malibari, A.A.; Alzahrani, J.S.; Eltahir, M.M.; Malik, V.; Obayya, M.; Al Duhayyim, M.; Neto, A.V.L.; de Albuquerque, V.H.C. Optimal deep neural network-driven computer aided diagnosis model for skin cancer. Comput. Electr. Eng. 2022, 103, 108318.
  21. Sethanan, K.; Pitakaso, R.; Srichok, T.; Khonjun, S.; Thannipat, P.; Wanram, S.; Boonmee, C.; Gonwirat, S.; Enkvetchakul, P.; Kaewta, C.; et al. Double AMIS-ensemble deep learning for skin cancer classification. Expert Syst. Appl. 2023, 234, 121047.
  22. Faheem Saleem, M.; Muhammad Adnan Shah, S.; Nazir, T.; Mehmood, A.; Nawaz, M.; Attique Khan, M.; Kadry, S.; Majumdar, A.; Thinnukool, O. Signet ring cell detection from histological images using deep learning. CMC-Comput. Mater. Contin. 2022, 72, 5985–5997.
  23. Shahsavari, A.; Khatibi, T.; Ranjbari, S. Skin lesion detection using an ensemble of deep models: SLDED. Multimed. Tools Appl. 2023, 82, 10575–10594.
  24. Ahmed, M.R.; Fahim, M.A.I.; Islam, A.M.; Islam, S.; Shatabda, S. DOLG-NeXt: Convolutional neural network with deep orthogonal fusion of local and global features for biomedical image segmentation. Neurocomputing 2023, 546, 126362.
  25. Sharma, A.K.; Nandal, A.; Dhaka, A.; Koundal, D.; Bogatinoska, D.C.; Alyami, H. Enhanced watershed segmentation algorithm-based modified ResNet50 model for brain tumor detection. Biomed Res. Int. 2022, 2022, 7348344.
  26. Jin, H.; Kim, E. Helpful or Harmful: Inter-task Association in Continual Learning. In Proceedings of the European Conference on Computer Vision; Springer Nature: Cham, Switzerland, 2022; pp. 519–535.
  27. Sanghvi, H.A.; Patel, R.H.; Agarwal, A.; Gupta, S.; Sawhney, V.; Pandya, A.S. A deep learning approach for classification of COVID and pneumonia using DenseNet-201. Int. J. Imaging Syst. Technol. 2023, 33, 18–38.
  28. Pacheco, A.G.; Lima, G.R.; Salomao, A.S.; Krohling, B.; Biral, I.P.; de Angelo, G.G.; Alves, F.C., Jr.; Esgario, J.G.; Simora, A.C.; Castro, P.B.; et al. PAD-UFES-20: A skin lesion dataset composed of patient data and clinical images collected from smartphones. Data Brief 2020, 32, 106221.
  29. Mohan, S.; Bhattacharya, S.; Kaluri, R.; Feng, G.; Tariq, U. Multi-modal prediction of breast cancer using particle swarm optimization with non-dominating sorting. Int. J. Distrib. Sens. Netw. 2020, 16, 1–12.
  30. Alexandris, D.; Alevizopoulos, N.; Marinos, L.; Gakiopoulou, C. Dermoscopy and novel non invasive imaging of Cutaneous Metastases. Adv. Cancer Biol.-Metastasis 2022, 6, 100078.
  31. Adla, D.; Reddy, G.V.R.; Nayak, P.; Karuna, G. Deep learning-based computer aided diagnosis model for skin cancer detection and classification. Distrib. Parallel Databases 2022, 40, 717–736.
  32. Anand, V.; Gupta, S.; Koundal, D.; Singh, K. Fusion of U-Net and CNN model for segmentation and classification of skin lesion from dermoscopy images. Expert Syst. Appl. 2023, 213, 119230.
  33. Bibi, A.; Khan, M.A.; Javed, M.Y.; Tariq, U.; Kang, B.G.; Nam, Y.; Mostafa, R.R.; Sakr, R.H. Skin lesion segmentation and classification using conventional and deep learning based framework. Comput. Mater. Contin. 2022, 71, 2477–2495.
  34. Qian, S.; Ren, K.; Zhang, W.; Ning, H. Skin lesion classification using CNNs with grouping of multi-scale attention and class-specific loss weighting. Comput. Methods Programs Biomed. 2022, 226, 107166.
  35. Ullah, A.; Elahi, H.; Sun, Z.; Khatoon, A.; Ahmad, I. Comparative analysis of AlexNet, ResNet18 and SqueezeNet with diverse modification and arduous implementation. Arab. J. Sci. Eng. 2022, 47, 2397–2417.
  36. Goswami, A.D.; Bhavekar, G.S.; Chafle, P.V. Electrocardiogram signal classification using VGGNet: A neural network based classification model. Int. J. Inf. Technol. 2023, 15, 119–128.
  37. Qayyum, A.; Mazher, M.; Khan, T.; Razzak, I. Semi-supervised 3D-InceptionNet for segmentation and survival prediction of head and neck primary cancers. Eng. Appl. Artif. Intell. 2023, 117, 105590.
  38. Huang, Y.; Huang, C.Y.; Li, X.; Li, K. A Dataset Auditing Method for Collaboratively Trained Machine Learning Models. IEEE Trans. Med. Imaging 2022, 42, 2081–2090.
  39. Shah, A.; Shah, M.; Pandya, A.; Sushra, R.; Sushra, R.; Mehta, M.; Patel, K.; Patel, K. A Comprehensive Study on Skin Cancer Detection using Artificial Neural Network (ANN) and Convolutional Neural Network (CNN). Clin. eHealth 2023, 6, 76–84.
  40. Albahar, M.A. Skin lesion classification using convolutional neural network with novel regularizer. IEEE Access 2019, 7, 38306–38313.
  41. Rasel, M.; Obaidellah, U.H.; Kareem, S.A. Convolutional neural network-based skin lesion classification with Variable Nonlinear Activation Functions. IEEE Access 2022, 10, 83398–83414.
  42. Gururaj, H.; Manju, N.; Nagarjun, A.; Aradhya, V.N.M.; Flammini, F. DeepSkin: A Deep Learning Approach for Skin Cancer Classification. IEEE Access 2023, 11, 50205–50214.
  43. Allugunti, V.R. A machine learning model for skin disease classification using convolution neural network. Int. J. Comput. Program. Database Manag. 2022, 3, 141–147.
  44. Bhargava, M.; Vijayan, K.; Anand, O.; Raina, G. Exploration of transfer learning capability of multilingual models for text classification. In Proceedings of the 2023 5th International Conference on Pattern Recognition and Intelligent Systems, Shenyang, China, 28–30 July 2023; pp. 45–50.
  45. Ogudo, K.A.; Surendran, R.; Khalaf, O.I. Optimal Artificial Intelligence Based Automated Skin Lesion Detection and Classification Model. Comput. Syst. Sci. Eng. 2023, 44.
  46. Bala, D.; Abdullah, M.I.; Hossain, M.A.; Islam, M.A.; Rahman, M.A.; Hossain, M.S. SkinNet: An Improved Skin Cancer Classification System Using Convolutional Neural Network. In Proceedings of the 2022 4th International Conference on Sustainable Technologies for Industry 4.0 (STI), Dhaka, Bangladesh, 17–18 December 2022; pp. 1–6.
  47. Ramadan, R.; Aly, S. CU-net: A new improved multi-input color U-net model for skin lesion semantic segmentation. IEEE Access 2022, 10, 15539–15564.
  48. Kartal, M.S.; Polat, Ö. Segmentation of Skin Lesions using U-Net with EfficientNetB7 Backbone. In Proceedings of the 2022 Innovations in Intelligent Systems and Applications Conference (ASYU), Antalya, Turkey, 7–9 September 2022; pp. 1–5.
  49. Vasudeva, K.; Chandran, S. Classifying Skin Cancer and Acne using CNN. In Proceedings of the 2023 15th International Conference on Knowledge and Smart Technology (KST), Phuket, Thailand, 21–24 February 2023; pp. 1–6.
  50. Jayabharathy, K.; Vijayalakshmi, K. Detection and classification of malignant melanoma and benign skin lesion using CNN. In Proceedings of the 2022 International Conference on Smart Technologies and Systems for Next Generation Computing (ICSTSN), Villupuram, India, 25–26 March 2022; pp. 1–4.
  51. Battle, M.L.; Atapour-Abarghouei, A.; McGough, A.S. Siamese Neural Networks for Skin Cancer Classification and New Class Detection using Clinical and Dermoscopic Image Datasets. In Proceedings of the 2022 IEEE International Conference on Big Data (Big Data), Osaka, Japan, 17–20 December 2022; pp. 4346–4355.
  52. Rasheed, A.; Umar, A.I.; Shirazi, S.H.; Khan, Z.; Nawaz, S.; Shahzad, M. Automatic eczema classification in clinical images based on hybrid deep neural network. Comput. Biol. Med. 2022, 147, 105807.
  53. Mohamed, E.H.; Abubakr, A.F.; Abdu, N.; Khalil, M.; Kamal, H.; Youssef, M.; Mohamed, H.; ElSayed, M. A Hybrid Deep Learning Framework for Skin Cancer Classification Using Dermoscopy Images and Metadata. Res. Sq. 2023, preprint.
  54. Bedeir, R.H.; Mahmoud, R.O.; Zayed, H.H. Automated multi-class skin cancer classification through concatenated deep learning models. IAES Int. J. Artif. Intell. 2022, 11, 764.
  55. Ghosh, P.; Azam, S.; Quadir, R.; Karim, A.; Shamrat, F.; Bhowmik, S.K.; Jonkman, M.; Hasib, K.M.; Ahmed, K. SkinNet-16: A deep learning approach to identify benign and malignant skin lesions. Front. Oncol. 2022, 12, 931141.
  56. Nigar, N.; Umar, M.; Shahzad, M.K.; Islam, S.; Abalo, D. A deep learning approach based on explainable artificial intelligence for skin lesion classification. IEEE Access 2022, 10, 113715–113725.
  57. Agyenta, C.; Akanzawon, M. Skin Lesion Classification Based on Convolutional Neural Network. J. Appl. Sci. Technol. Trends 2022, 3, 14–19.
  58. Malo, D.C.; Rahman, M.M.; Mahbub, J.; Khan, M.M. Skin Cancer Detection using Convolutional Neural Network. In Proceedings of the 2022 IEEE 12th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 26–29 January 2022; pp. 0169–0176.
  59. Nawaz, M.; Nazir, T.; Masood, M.; Ali, F.; Khan, M.A.; Tariq, U.; Sahar, N.; Damaševičius, R. Melanoma segmentation: A framework of improved DenseNet77 and UNET convolutional neural network. Int. J. Imaging Syst. Technol. 2022, 32, 2137–2153.
  60. Panthakkan, A.; Anzar, S.; Jamal, S.; Mansoor, W. Concatenated Xception-ResNet50—A novel hybrid approach for accurate skin cancer prediction. Comput. Biol. Med. 2022, 150, 106170.
  61. Singh, A.; Pandey, A.; Rakhra, M.; Singh, D.; Singh, G.; Dahiya, O. An Iris Recognition System Using CNN & VGG16 Technique. In Proceedings of the 2022 10th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), Noida, India, 13–14 October 2022; pp. 1–6.
  62. Mascarenhas, S.; Agarwal, M. A comparison between VGG16, VGG19 and ResNet50 architecture frameworks for Image Classification. In Proceedings of the 2021 International Conference on Disruptive Technologies for Multi-Disciplinary Research and Applications (CENTCON), Bengaluru, India, 19–21 November 2021; Volume 1, pp. 96–99.
  63. Pugliesi, R.A. Deep Learning Models for Classification of Pediatric Chest X-ray Images using VGG-16 and ResNet-50. Sage Sci. Rev. Appl. Mach. Learn. 2019, 2, 37–47.
  64. Yang, D.; Martinez, C.; Visuña, L.; Khandhar, H.; Bhatt, C.; Carretero, J. Detection and analysis of COVID-19 in medical images using deep learning techniques. Sci. Rep. 2021, 11, 19638.
  65. Nijaguna, G.; Babu, J.A.; Parameshachari, B.; de Prado, R.P.; Frnda, J. Quantum Fruit Fly algorithm and ResNet50-VGG16 for medical diagnosis. Appl. Soft Comput. 2023, 136, 110055.
  66. Izikson, L.; Sober, A.J.; Mihm, M.C.; Zembowicz, A. Prevalence of melanoma clinically resembling seborrheic keratosis: Analysis of 9204 cases. Arch. Dermatol. 2002, 138, 1562–1566.
  67. Grant-Kels, J.M.; Bason, E.T.; Grin, C.M. The misdiagnosis of malignant melanoma. J. Am. Acad. Dermatol. 1999, 40, 539–548.
  68. Garcea, F.; Serra, A.; Lamberti, F.; Morra, L. Data augmentation for medical imaging: A systematic literature review. Comput. Biol. Med. 2023, 152, 106391.
  69. Zephaniah, B. Comparison of Keras Applications Prebuilt Model with Extra Densely Connected Neural Layer Accuracy And Stability Using Skin Cancer Dataset of Mnist: Ham10000. Ph.D. Thesis, Universitas Kristen Satya Wacana, Salatiga, Indonesia, 2023.
  70. Alsahafi, Y.S.; Kassem, M.A.; Hosny, K.M. Skin-Net: A novel deep residual network for skin lesions classification using multilevel feature extraction and cross-channel correlation with detection of outlier. J. Big Data 2023, 10, 105.
  71. Alam, T.M.; Shaukat, K.; Khan, W.A.; Hameed, I.A.; Almuqren, L.A.; Raza, M.A.; Aslam, M.; Luo, S. An efficient deep learning-based skin cancer classifier for an imbalanced dataset. Diagnostics 2022, 12, 2115.
  72. Wang, Z.; Lyu, J.; Luo, W.; Tang, X. Superpixel inpainting for self-supervised skin lesion segmentation from dermoscopic images. In Proceedings of the 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), Kolkata, India, 28–31 March 2022; pp. 1–4.
  73. Nayef, B.H.; Abdullah, S.N.H.S.; Sulaiman, R.; Alyasseri, Z.A.A. Optimized leaky ReLU for handwritten Arabic character recognition using convolution neural networks. Multimed. Tools Appl. 2022, 81, 2065–2094.
  74. Tirumala, K.; Markosyan, A.; Zettlemoyer, L.; Aghajanyan, A. Memorization without overfitting: Analyzing the training dynamics of large language models. Adv. Neural Inf. Process. Syst. 2022, 35, 38274–38290.
  75. Oh, S.; Moon, J.; Kum, S. Application of Deep Learning Model Inference with Batch Size Adjustment. In Proceedings of the 2022 13th International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Republic of Korea, 19–21 October 2022; pp. 2146–2148.
  76. Ogundokun, R.O.; Maskeliunas, R.; Misra, S.; Damaševičius, R. Improved CNN based on batch normalization and adam optimizer. In Proceedings of the International Conference on Computational Science and Its Applications, Malaga, Spain, 4–7 July 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 593–604.
Figure 1. Illustrative representations from the PAD-UFES-20-Modified dataset employed in this research exhibit diverse visualizations of distinct skin lesions, encompassing the three respective categories of seborrheic keratosis, nevus, and melanoma.
Figure 2. The pie chart highlights the distribution of different skin-lesion classes within the PAD-UFES-20-Modified dataset, and shows that in this dataset there is no significant class imbalance.
Figure 3. Proposed multi-layer deep CNN model architecture to classify different skin-lesion categories.
Figure 4. The graph depicts variations in training and validation accuracy and loss of the proposed SkinLesNet model over the first 10 epochs. Accuracy gradually increased and reached 96% after 100 epochs.
Table 1. Comparison of implemented CNN models for skin-lesion detection on different dermoscopy datasets.

Ref. | Model | Dataset | Accuracy | Comments
[38] | Deep convolutional neural network | HAM10000 | 90% | One of the main benchmark datasets was used in this paper, producing promising results with a CNN model; however, hyperparameter tuning is required to increase accuracy.
[59] | Convolutional neural network (CNN) | International Skin Imaging Collaboration (ISIC) | 97.49% | The CNN model showed relatively good results, but the dataset size needs to be increased.
[42] | ResNet50 | MNIST: HAM10000 | 91% | A state-of-the-art model produced reasonable results on the given dataset; however, the dataset needs thorough preprocessing before training to obtain more accurate and promising results.
[43] | Deep convolutional neural network | International Symposium on Biomedical Imaging (ISBI) | 97.8% | A CNN model was trained on an internationally recognized benchmark dataset; however, the dataset size was reduced, which showed good results but could lead to model overfitting.
[48] | U-Net | International Skin Imaging Collaboration (ISIC) | 94.9% | A state-of-the-art model showed promising results, but more data preprocessing or augmentation is needed for accurate prediction.
Table 2. Number of images per class and train-test dataset split.

Dataset | Train (80%) | Test (20%) | Total
Melanoma | 416 | 104 | 520
Nevus | 326 | 82 | 408
Seborrheic Keratosis | 309 | 77 | 386
Total | 1051 | 263 | 1314
Table 3. Hyperparameters and configurations used to train the proposed multi-layer model for this work.

Learning Rate | Batch Size | Epochs | Optimizer | Activation
0.001 | 32 | 100 | Adam | ReLU
Table 4. Performance comparison of SkinLesNet to other state-of-the-art fine-tuned models on the PAD-UFES-20-Modified test dataset.

Performance Metrics | VGG16 | ResNet50 | SkinLesNet
Accuracy | 79% | 82% | 96%
Precision | 80% | 85% | 97%
Recall | 75% | 75% | 92%
F1-Score | 72% | 75% | 92%
Table 5. Performance comparison of SkinLesNet to other state-of-the-art fine-tuned models on the HAM10000 test dataset.

Performance Metrics | VGG16 | ResNet50 | SkinLesNet
Accuracy | 75% | 80% | 90%
Precision | 75% | 80% | 89%
Recall | 70% | 72% | 87%
F1-Score | 70% | 71% | 85%
Table 6. Performance comparison of SkinLesNet to other state-of-the-art fine-tuned models on the ISIC2017 test dataset.

Performance Metrics | VGG16 | ResNet50 | SkinLesNet
Accuracy | 70% | 75% | 92%
Precision | 70% | 75% | 80%
Recall | 70% | 65% | 82%
F1-Score | 72% | 70% | 75%
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
