Article

Internet of Things-Assisted Smart Skin Cancer Detection Using Metaheuristics with Deep Learning Model

1 Department of Biomedical Engineering, College of Engineering, Princess Nourah Bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
2 Department of Computer Science, College of Science and Arts in RijalAlmaa, King Khalid University, Abha 62529, Saudi Arabia
3 Department of Special Education, College of Education, King Saud University, Riyadh 12372, Saudi Arabia
4 Department of Information Systems, College of Computing and Information System, Umm Al-Qura University, Mecca 21421, Saudi Arabia
5 Department of Computer Science, College of Science, Majmaah University, Al Majmaah 11952, Saudi Arabia
6 Research Center, Future University in Egypt, New Cairo 11835, Egypt
* Author to whom correspondence should be addressed.
Cancers 2023, 15(20), 5016; https://doi.org/10.3390/cancers15205016
Submission received: 17 September 2023 / Revised: 9 October 2023 / Accepted: 10 October 2023 / Published: 17 October 2023
(This article belongs to the Special Issue Skin Cancers as a Paradigm Shift: From Pathobiology to Treatment)

Simple Summary

The Internet of Things (IoT) uses connected devices and sensors, like high-resolution cameras and specific sensors in wearable devices, for the collection of skin images with abnormalities. Skin cancer detection is difficult because of differences in lesion size, shape, and lighting conditions. To address this, an innovative approach called “ODL-SCDC”, combining deep learning with IoT technology, is developed. The proposed model uses advanced techniques like hyperparameter selection and feature extraction to improve skin cancer classification. The results show that ODL-SCDC outperforms other methods in accurately identifying skin lesions, which could have a significant impact on early cancer detection in the medical field.

Abstract

Internet of Things (IoT)-assisted skin cancer recognition integrates several connected devices and sensors to support the primary analysis and monitoring of skin conditions. A preliminary analysis of skin cancer images is extremely difficult because of factors such as the distinct sizes and shapes of lesions, differences in color illumination, and light reflections on the skin surface. In recent times, IoT-based skin cancer recognition utilizing deep learning (DL) has been used to enhance the early analysis and monitoring of skin cancer. This article presents an optimal deep learning-based skin cancer detection and classification (ODL-SCDC) methodology in the IoT environment. The goal of the ODL-SCDC technique is to exploit metaheuristic-based hyperparameter selection approaches with a DL model for skin cancer classification. The ODL-SCDC methodology involves an arithmetic optimization algorithm (AOA) with the EfficientNet model for feature extraction. For skin cancer detection, a stacked denoising autoencoder (SDAE) classification model has been used. Lastly, the dragonfly algorithm (DFA) is utilized for the optimal hyperparameter selection of the SDAE algorithm. The simulation validation of the ODL-SCDC methodology has been tested on a benchmark ISIC skin lesion database. The extensive outcomes show better performance of the ODL-SCDC methodology compared with other models, with a maximum sensitivity of 97.74%, specificity of 99.71%, and accuracy of 99.55%. The proposed model can assist medical professionals, specifically dermatologists and potentially other healthcare practitioners, in the skin cancer diagnosis process.

1. Introduction

The Internet of Things (IoT) connects devices to the Internet using modern communication technology for sharing data [1]. Recently, IoT has been widely deployed in applications such as vehicular ad hoc networks, smart grids, body sensor networks, smart cities, and smart homes [2,3]. IoT development depends on several enabling technologies, namely wireless sensor networks (WSNs), cloud computing (CC), and information sensing [4]. The IoT is commonly exploited to enhance and develop medical systems because of its ability to integrate with the resources of the underlying infrastructure and deliver essential data to users [5]. Medical systems transmit a considerable quantity of data through WSNs when providing e-health services such as electronic health records, remote patient monitoring, and medical platforms [6]. Skin cancer is considered the sixth most common type of cancer worldwide. The skin comprises three main cell types, melanocytes, basal cells, and squamous cells, each of which can give rise to cancer [7,8]. Hence, there are different skin tumors, namely basal cell carcinoma (BCC), melanoma, and squamous cell carcinoma (SCC), which can be serious forms of cancer. Skin cancer is particularly prevalent in Australia and the United States [9]. Diagnosing skin cancer at an early stage is challenging even for dermatologists, which has stimulated researchers to develop simplified, automated detectors for identifying skin cancer at an early stage [10].
Dermoscopy improves melanoma diagnostic accuracy; however, some cancers remain difficult to analyze accurately, especially early melanomas, which have few distinctive dermoscopic features [11]. Although dermoscopy identifies skin cancers with good accuracy, it is not well suited to featureless melanoma, and higher accuracy is required to improve patient survival rates [12,13]. These limitations of dermoscopy, together with the need to improve the identification accuracy of skin tumors, have laid the foundation for computer-aided detection (CAD) techniques for analyzing skin cancers [14]. In general, computer-aided skin cancer analysis comprises five stages: image acquisition, preprocessing, segmentation, feature extraction, and classification [15]. The most important stages in the CAD of skin cancers are segmentation and classification [16]. However, identifying skin cancer using CAD is not straightforward; numerous aspects must be considered for accurate identification, for instance artefacts such as ruler marks, dark corners, ink marks, water bubbles, hairs, and marker signs, which may lead to incorrect segmentation and misclassification of skin cancers [17,18]. Among computer-aided techniques, deep learning (DL)-based algorithms provide promising results for the classification and segmentation of skin cancers because of their ability to extract complex features from skin cancer images for highly specific diagnosis [19]. In addition, DL methods learn task-specific features and are more effective than other techniques.
This article presents an optimal deep learning-based skin cancer detection and classification (ODL-SCDC) algorithm in the IoT environment. The goal of the ODL-SCDC technique is to exploit metaheuristic-based hyperparameter selection approaches with a DL model for skin cancer classification. To achieve this, the ODL-SCDC technique undergoes preprocessing using a Wiener filtering (WF) system. Moreover, the ODL-SCDC algorithm involves an arithmetic optimization algorithm (AOA) with an EfficientNet model for feature extraction. For skin cancer detection, a stacked denoising autoencoder (SDAE) classification model has been used. Lastly, the dragonfly algorithm (DFA) is utilized for the optimal hyperparameter selection of the SDAE algorithm. The simulation validation of the ODL-SCDC algorithm can be tested on a benchmark skin lesion database. The key contributions of the paper are summarized as follows.
  • Develop an automated ODL-SCDC technique comprising WF-based preprocessing, AOA with EfficientNet-based feature extraction, an SDAE classifier, and DFA-based hyperparameter tuning. To the best of our knowledge, the proposed ODL-SCDC technique has not previously been reported in the literature.
  • Propose AOA with the EfficientNet model for feature extraction, a critical aspect of skin cancer classification. The AOA-based fine-tuning process is crucial for optimizing the performance of the classification model.
  • Present an SDAE classifier for skin cancer classification, with DFA employed for optimal hyperparameter selection of the SDAE model. Hyperparameter optimization of the SDAE model using DFA with cross-validation helps to boost the predictive performance of the proposed model on unseen data.

2. Related Works

In [20], a powerful skin cancer identification model was presented for enhancing accuracy by learning and extracting significant image representations through a MobileNetV3 framework. The extracted features were then fed to an adapted Hunger Games Search based on Dynamic-Opposite Learning (DOLHGS) together with PSO. Ramya and Sathiyabhama's [21] primary aim was to create an ensemble ML method with an improved genetic algorithm (GA) to attain high accuracy in the early prognosis of skin cancers compared with other existing methods. Feature selection (FS) was implemented using an Enhanced GA (EGA), which generates improved solutions through processes such as ensembling, mutation, and crossover, combined with an ELM classifier (EGA-ELM) to classify the images as non-cancerous or cancerous. Abd Elaziz et al. [22] designed a robust technique for skin cancer diagnosis in which discriminative features supporting the diagnosis are extracted with a DL model, the MobileNetV3 framework. Further, an innovative technique named Improved Artificial Rabbits Optimization (IARO) was presented that exploits a crossover operator and Gaussian mutation to discard irrelevant features from those extracted by the MobileNetV3 framework.
Khamparia et al. [23] introduced a new DL Internet of Health and Things (IoHT)-driven model for classifying skin cancers in skin images using transfer learning (TL). In this model, features are extracted automatically from images using various pretrained frameworks, namely SqueezeNet, VGG19, Inception V3, and ResNet50, and then fed to the fully connected layer (FCL) of a CNN, with dense and max pooling operations, for the classification of malignant and benign skin cells. The authors of [24] suggested a novel skin cancer detection technique named DL with Evolutionary Algorithm Image Segmentation (DL-EAIS) for IoT- and cloud-based smart medical environments. Firstly, dermoscopic images are captured by IoT devices and transferred to cloud servers for further identification. Secondly, a shallow CNN (SCNN) framework is exploited for feature extraction. Moreover, the Deep-Kernel-ELM (D-KELM) algorithm is utilized as the classification technique for identifying the class labels of dermoscopic images. In [25], a CNN-based DL technique was utilized to develop a computer method to predict new cases of skin cancer; the resulting CNN contains four fully connected layers, three convolution layers, and three max pooling layers. Adjobo et al. [26] implemented a Gabor Convolutional Network (GCN) method to enhance the effectiveness of automatic skin tumor analysis. This algorithm integrates a CNN with Gabor filtering (GF) and comprises three operations: construction of GF banks, a CNN model, and filter injection. In [27], a DL-assisted hybrid optimizer was employed to identify skin cancer and segment lesions. Two optimization techniques were implemented for diagnosing cancers and segmenting skin lesions; a MultiScale Residual Fusion Network (MSRFNet) was exploited for skin lesion segmentation and trained with the developed Average Subtraction Student Psychology-Based Optimizer (ASSPBO) technique.

3. The Proposed Model

In this article, we have designed and developed an automated skin cancer classification and detection model using the ODL-SCDC technique in the IoT environment. The goal of the ODL-SCDC technique is to exploit metaheuristic-based hyperparameter selection approaches with a DL model for skin cancer classification. To achieve this, the ODL-SCDC technique performs a series of processes such as WF-based processing, EfficientNet-based feature extraction, AOA-based hyperparameter tuning, SDAE-based classification, and DFA-based parameter tuning. Figure 1 depicts the entire process of the ODL-SCDC approach.

3.1. Image Preprocessing

To preprocess the input images, the WF approach is used. The WF, named after Norbert Wiener, is a mathematical model for signal processing and filtering [28]. It is mainly utilized in statistics, engineering, and image processing to estimate an unknown signal or system by minimizing the mean squared error (MSE) between the estimated signal and the true signal. The WF is particularly beneficial when dealing with noisy signals or when the characteristics of the noise are known. Mathematically, the WF is designed to minimize the MSE between the estimated signal and the true signal, and it typically involves convolution, spectral analysis, and statistical estimation. The filtering is executed in either the time or the frequency domain, depending on the nature of the problem and the available data.
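As a concrete illustration, the following minimal sketch applies Wiener filtering to a dermoscopic image with SciPy; the 5 × 5 window, channel-wise filtering, and use of Pillow for image loading are illustrative assumptions rather than settings reported here.

```python
# Minimal sketch of Wiener-filter preprocessing using SciPy's built-in filter.
import numpy as np
from scipy.signal import wiener
from PIL import Image

def preprocess(path, window=5):
    """Load a dermoscopic image and suppress noise channel-wise with a Wiener filter."""
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float64)
    # Filter each colour channel independently; wiener() estimates the local mean and
    # variance in a `window` x `window` neighbourhood and attenuates pixels whose
    # local variance is close to the estimated noise variance.
    denoised = np.stack(
        [wiener(img[..., c], mysize=window) for c in range(3)], axis=-1
    )
    return np.clip(denoised, 0, 255).astype(np.uint8)
```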

3.2. Feature Extraction Using EfficientNet Model

In this work, the EfficientNet approach is applied for feature extraction. EfficientNet is built around a model scaling algorithm that improves both the accuracy and the efficiency of the network by jointly revisiting the three scaling dimensions considered in earlier work: network width, depth, and input resolution [29]. Whereas earlier research typically expanded only one of these dimensions to enhance performance, the EfficientNet authors observed through experiments that the dimensions are mutually dependent and therefore formulated scaling as a joint optimization problem. Figure 2 represents the architecture of EfficientNet. Consider the entire network as $N$, with the $i$-th layer defined as $Y_i = F_i(X_i)$, where $F_i$ represents the layer operator, $Y_i$ the output tensor, and $X_i$ the input tensor. A network with $k$ layers can then be written as $N = F_k \odot \cdots \odot F_2 \odot F_1(X_1) = \bigodot_{j=1}^{k} F_j(X_1)$. Because the layers are usually grouped into stages sharing the same architecture, $N$ can be formulated as:

$$N = \bigodot_{i=1}^{s} F_i^{L_i}\left(X_{\langle H_i, W_i, C_i\rangle}\right) \qquad (1)$$

In Equation (1), $i$ refers to the stage index, $F_i^{L_i}$ denotes the layer $F_i$ of the $i$-th stage repeated $L_i$ times, and $\langle H_i, W_i, C_i\rangle$ is the shape of the input tensor of that stage.
Figure 2. Architecture of EfficientNet.
To reduce the search space, the authors established some constraints: the baseline architecture of the network is fixed, uniform scaling is imposed on every stage, and memory and computation budgets are enforced. The scaling of the network is then obtained by multiplying the baseline components $\hat{F}_i$, $\hat{L}_i$, $\hat{H}_i$, $\hat{W}_i$, and $\hat{C}_i$ by constant factors:

$$\max_{d, w, r} \ \mathrm{Accuracy}\big(N(d, w, r)\big)$$
$$\text{s.t. } N(d, w, r) = \bigodot_{i=1}^{s} \hat{F}_i^{\,d \cdot \hat{L}_i}\left(X_{\langle r \cdot \hat{H}_i,\ r \cdot \hat{W}_i,\ w \cdot \hat{C}_i\rangle}\right)$$
$$\mathrm{Memory}(N) \le \text{target\_memory}$$
$$\mathrm{FLOPS}(N) \le \text{target\_flops} \qquad (2)$$

In Equation (2), $d$, $w$, and $r$ signify the coefficients for scaling the depth, width, and resolution of the network.
After experiments that adjusted only one dimension at a time as well as all three dimensions together, the authors presented a compound scaling method. This technique uses a single compound coefficient $\phi$ to scale the depth, width, and resolution of the network in a balanced way:

$$\text{depth: } d = \alpha^{\phi}$$
$$\text{width: } w = \beta^{\phi}$$
$$\text{resolution: } r = \gamma^{\phi}$$
$$\text{s.t. } \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2, \quad \alpha, \beta, \gamma \ge 1 \qquad (3)$$

In Equation (3), $\alpha$, $\beta$, and $\gamma$ are constants determined by a small grid search.
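For illustration, a pooled EfficientNet feature vector of the kind used here can be obtained as in the following PyTorch sketch; the EfficientNet-B0 variant, ImageNet weights, and the 224 × 224 input size are assumptions, since the specific configuration is not stated above.

```python
# Minimal sketch of EfficientNet-based feature extraction in PyTorch/torchvision.
import torch
import torchvision.models as models
import torchvision.transforms as T

weights = models.EfficientNet_B0_Weights.IMAGENET1K_V1
backbone = models.efficientnet_b0(weights=weights)
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_image):
    """Return the 1280-dimensional pooled EfficientNet-B0 feature vector."""
    x = preprocess(pil_image).unsqueeze(0)       # (1, 3, 224, 224)
    fmap = backbone.features(x)                  # (1, 1280, 7, 7) feature map
    pooled = backbone.avgpool(fmap).flatten(1)   # (1, 1280) global average pool
    return pooled.squeeze(0)
```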

3.3. Hyperparameter Tuning Using AOA

To adjust the hyperparameters of the EfficientNet model, the AOA is used. The AOA is a metaheuristic built on the behaviour of the basic arithmetic operators, addition (A), subtraction (S), multiplication (M), and division (D) [30]. It has a simple structure with low computational complexity and is closely related to the sine-cosine algorithm (SCA). The M and D operators produce large steps in each iteration and therefore drive the exploration stage:

$$X_i(t+1) = \begin{cases} X_b(t) \div (\mathrm{MOP} + \epsilon) \times \big((UB - LB) \times \mu + LB\big), & rand < 0.5 \\ X_b(t) \times \mathrm{MOP} \times \big((UB - LB) \times \mu + LB\big), & rand \ge 0.5 \end{cases} \qquad (4)$$

where $X_b(t)$ is the best solution found so far, $UB$ and $LB$ are the upper and lower bounds of the search space, $\epsilon$ is a small positive number, and $\mu$ is a control coefficient fixed to 0.499. The math optimizer probability (MOP) decreases nonlinearly from 1 to 0 as the iterations progress:
$$\mathrm{MOP}(t) = 1 - \left(\frac{t}{T}\right)^{1/\alpha} \qquad (5)$$

where $t$ is the current iteration, $T$ is the maximum number of iterations, and $\alpha$ is a sensitivity constant fixed to 5. The M and D operators in Equation (4) generate widely dispersed candidate positions around the best search agent, while the S and A operators take smaller steps and thus devote greater attention to local exploitation of the search space.
The right balance between exploitation and exploration is critical for any metaheuristic to achieve maximum efficiency. The math optimizer accelerated (MOA) parameter is used to switch between exploration and exploitation at each iteration:

$$\mathrm{MOA}(t) = Min + t \times \frac{Max - Min}{T} \qquad (6)$$

In Equation (6), $Min$ and $Max$ are constant bounds. According to Equation (6), MOA increases linearly from $Min$ to $Max$, so the search agents are more likely to explore the search range in the early iterations and to search near the current optimal position in the later iterations.
The AOA uses a fitness function (FF) to improve classification accuracy. It returns a positive value that reflects the accuracy of a candidate solution; here the classifier error rate is taken as the FF, and its minimization is the goal:

$$fitness(X_i) = \mathrm{ClassifierErrorRate}(X_i) = \frac{\text{No. of misclassified instances}}{\text{Total no. of instances}} \times 100 \qquad (7)$$
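The following compact sketch shows how the AOA update rules in Equations (4)-(7) can be implemented for hyperparameter search; the population size, iteration budget, MOA bounds, and the toy error surface at the end are illustrative assumptions, and in the full system fitness() would wrap the classifier error rate of Equation (7).

```python
# A compact sketch of the arithmetic optimization algorithm (AOA) as a minimizer.
import numpy as np

def aoa(fitness, lb, ub, n_agents=20, T=50, alpha=5, mu=0.499, eps=1e-12,
        moa_min=0.2, moa_max=1.0, seed=0):
    """Minimize `fitness` over the box [lb, ub] with the AOA update rules."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = rng.uniform(lb, ub, size=(n_agents, dim))
    scores = np.array([fitness(x) for x in X])
    best_idx = scores.argmin()
    best, best_score = X[best_idx].copy(), scores[best_idx]

    for t in range(1, T + 1):
        moa = moa_min + t * (moa_max - moa_min) / T      # Equation (6)
        mop = 1.0 - (t / T) ** (1.0 / alpha)             # Equation (5)
        for i in range(n_agents):
            for j in range(dim):
                r1, r2, r3 = rng.random(3)
                scale = (ub[j] - lb[j]) * mu + lb[j]
                if r1 > moa:                             # exploration: division / multiplication
                    X[i, j] = best[j] / (mop + eps) * scale if r2 < 0.5 else best[j] * mop * scale
                else:                                    # exploitation: subtraction / addition
                    X[i, j] = best[j] - mop * scale if r3 < 0.5 else best[j] + mop * scale
            X[i] = np.clip(X[i], lb, ub)
            scores[i] = fitness(X[i])
            if scores[i] < best_score:                   # keep the best-so-far agent
                best, best_score = X[i].copy(), scores[i]
    return best, best_score

# Toy usage: tune (learning rate, dropout) against a hypothetical error surface.
toy_error = lambda x: (x[0] - 0.01) ** 2 + (x[1] - 0.5) ** 2
print(aoa(toy_error, lb=[1e-4, 0.1], ub=[0.1, 0.9]))
```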

3.4. Skin Cancer Detection Using Optimal SDAE Model

For skin cancer classification, the SDAE model is applied. Autoencoders (AEs) compress high-dimensional input data into low-dimensional feature representations [31]. To improve robustness, the denoising autoencoder (DAE) learns to map a corrupted instance $\tilde{x}_i$ back to the real data instance $x_i$. Stacking multiple DAEs compresses the input data through successive hidden spaces, allowing deeper features to be extracted. The output layer $z_i^{(L)}$ of the SDAE is therefore expressed as Equation (8):

$$z_i^{(L)} = \phi_L\!\left(W_L \cdots \phi_1\!\left(W_1 \tilde{x}_i + b_1\right) \cdots + b_L\right) \qquad (8)$$

where $\tilde{x}_i$ denotes the corrupted version of the input $x_i$; $W_1, W_2, \ldots, W_L$ are the weight matrices; $b_1, b_2, \ldots, b_L$ are the bias vectors; and $\phi_1, \phi_2, \ldots, \phi_L$ are activation functions such as ReLU, Sigmoid, and Tanh. $L$ denotes the number of layers, counted from the input layer ($l = 1$) to the output layer ($l = L$). By minimizing the error between the original input and the reconstructed output, the objective function of the SDAE is expressed as Equation (9):

$$\zeta = \frac{1}{n} \sum_{i=1}^{n} \left\| x_i - \phi_L\!\left(W_L \cdots \phi_1\!\left(W_1 \tilde{x}_i + b_1\right) \cdots + b_L\right) \right\|^{2} \qquad (9)$$
Compared with the plain AE and the single DAE, the SDAE is a hierarchical model that learns deep feature representations from the corrupted input.
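A minimal PyTorch sketch of an SDAE classifier in the spirit of Equations (8) and (9) is given below; the layer sizes, Gaussian corruption level, ReLU activations, and the joint reconstruction-plus-classification loss are illustrative assumptions, as the exact architecture is not reported here.

```python
# Minimal PyTorch sketch of a stacked denoising autoencoder (SDAE) classifier.
import torch
import torch.nn as nn

class SDAE(nn.Module):
    def __init__(self, in_dim=1280, hidden=(512, 128), n_classes=7, noise_std=0.1):
        super().__init__()
        self.noise_std = noise_std
        # Stacked encoder and mirrored decoder (Equations (8) and (9)).
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(hidden[1], hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], in_dim),
        )
        self.head = nn.Linear(hidden[1], n_classes)   # classification on the bottleneck code

    def forward(self, x):
        x_tilde = x + self.noise_std * torch.randn_like(x)   # corrupt the clean input
        code = self.encoder(x_tilde)
        recon = self.decoder(code)
        logits = self.head(code)
        return recon, logits

def sdae_loss(recon, logits, x_clean, y, lam=1.0):
    """Reconstruction of the clean input plus cross-entropy on the class labels."""
    return nn.functional.mse_loss(recon, x_clean) + lam * nn.functional.cross_entropy(logits, y)
```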
Finally, the DFA is utilized for the hyperparameter selection of the SDAE model. The DFA is a recent metaheuristic inspired by the static and dynamic swarming strategies of dragonflies [32], which correspond to the two phases of a metaheuristic, exploration and exploitation. In static (hunting) swarms, dragonflies form small groups and fly back and forth over different areas, which matches the exploration phase; in dynamic (migratory) swarms, a large group flies in one direction, which matches the exploitation phase. The behaviour of artificial dragonflies is modelled with five fundamental rules, three of which follow Reynolds' swarming primitives and two of which are additional concepts, as discussed below.
Separation, which represents the avoidance of collisions with neighbouring individuals, is given as:

$$S_i = -\sum_{j=1}^{N} \left(X - X_j\right) \qquad (10)$$

where $X$ is the position of the current individual, $X_j$ is the position of the $j$-th neighbouring individual, and $N$ is the number of neighbours.
Alignment, which indicates how an individual matches its velocity to that of its neighbours, is expressed as:

$$A_i = \frac{\sum_{j=1}^{N} V_j}{N} \qquad (11)$$

In Equation (11), $V_j$ denotes the velocity of the $j$-th neighbouring individual and $N$ is the number of neighbours.
Cohesion, which describes the tendency of individuals to move toward the centre of mass of the neighbourhood, is expressed as:

$$C_i = \frac{\sum_{j=1}^{N} X_j}{N} - X \qquad (12)$$

In Equation (12), $X_j$ denotes the position of the $j$-th neighbouring individual and $X$ is the position of the current individual.
Attraction: because survival is the principal objective, each individual should be attracted toward the food source:

$$F_i = X^{+} - X \qquad (13)$$

In Equation (13), $X^{+}$ denotes the position of the food source and $X$ is the position of the current individual.
Distraction: each individual should be distracted outward from the enemy, as shown below:

$$E_i = X^{-} + X \qquad (14)$$

In Equation (14), $X^{-}$ denotes the position of the enemy and $X$ is the position of the current individual.
Two vectors, the position vector $X$ and the step vector $\Delta X$, are used to update the location of the artificial dragonflies and simulate their movement:

$$\Delta X_{t+1} = \left(s S_i + a A_i + c C_i + f F_i + e E_i\right) + w \Delta X_t \qquad (15)$$

In Equation (15), the step vector $\Delta X$ plays the same role as the velocity vector in PSO, on which the DFA update is modelled. $S_i$, $A_i$, $C_i$, $F_i$, and $E_i$ denote the separation, alignment, cohesion, food-attraction, and enemy-distraction values of the $i$-th individual; $s$, $a$, $c$, $f$, and $e$ are the corresponding behaviour weights; $w$ is the inertia weight; and $t$ is the iteration counter. After the step vector is evaluated, the position vector is updated using the following expression:

$$X_{t+1} = X_t + \Delta X_{t+1} \qquad (16)$$
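The following sketch implements one DFA update step following Equations (10)-(16); treating all individuals as neighbours and fixing the behaviour weights are simplifying assumptions, since in practice the weights are usually adapted over the iterations.

```python
# A compact sketch of one dragonfly algorithm (DFA) step for continuous search.
import numpy as np

def dfa_step(X, dX, food, enemy, s=0.1, a=0.1, c=0.7, f=1.0, e=1.0, w=0.9):
    """X: (n, dim) positions, dX: (n, dim) step vectors; returns updated (X, dX)."""
    n = len(X)
    for i in range(n):
        neighbours = np.delete(X, i, axis=0)
        S = -np.sum(X[i] - neighbours, axis=0)        # separation, Eq. (10)
        A = np.delete(dX, i, axis=0).mean(axis=0)     # alignment,  Eq. (11)
        C = neighbours.mean(axis=0) - X[i]            # cohesion,   Eq. (12)
        F = food - X[i]                               # attraction to food, Eq. (13)
        E = enemy + X[i]                              # distraction from enemy, Eq. (14)
        dX[i] = s * S + a * A + c * C + f * F + e * E + w * dX[i]   # Eq. (15)
        X[i] = X[i] + dX[i]                           # Eq. (16)
    return X, dX
```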

4. Results Analysis

The proposed model is simulated in Python 3.8.5 on a PC with the following specifications: Intel Core i5-8600K CPU, NVIDIA GeForce GTX 1050 Ti (4 GB) GPU, 16 GB RAM, 250 GB SSD, and 1 TB HDD. The parameter settings are as follows: learning rate 0.01, dropout 0.5, batch size 5, epoch count 50, and ReLU activation. For experimental validation, 80:20 and 70:30 training/testing splits are used.
In this study, the performance validation of the ODL-SCDC algorithm has been tested on the ISIC database including distinct classes, namely, Angioma (ANG) (21 images), Nevus (NEV) (46 images), Lentigo NOS (LNOS) (41 images), Solar Lentigo (SLG) (68 images), Melanoma (MEL) (51 images), Seborrheic Keratosis (SKT) (54 images), and Basal Cell Carcinoma (BCC) (37 images). Table 1 represents the details of the database.
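For reproducibility, the stated training configuration and train/test ratios could be set up as in the following sketch; the use of scikit-learn and a stratified split is an assumption, as the way the splits were generated is not described here.

```python
# Illustrative sketch of the experimental splits and training configuration of Section 4.
from sklearn.model_selection import train_test_split

CONFIG = {"learning_rate": 0.01, "dropout": 0.5, "batch_size": 5,
          "epochs": 50, "activation": "relu"}

def make_split(features, labels, train_ratio=0.8, seed=42):
    """Stratified split so that all seven lesion classes appear in both phases."""
    return train_test_split(features, labels, train_size=train_ratio,
                            stratify=labels, random_state=seed)
```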
Figure 3 exhibits the confusion matrices attained by the ODL-SCDC methodology at 80:20 and 70:30 of the TR phase/TS phase. The outcome inferred the effective recognition and classification of all seven classes.
The skin cancer classification results of the ODL-SCDC technique at the 80:20 TR phase/TS phase split are provided in Table 2 and Figure 4. The experimental values infer that the ODL-SCDC technique attains enhanced performance under all classes. With 80% of the data in the TR phase, the ODL-SCDC technique offers average $accu_y$, $sens_y$, $spec_y$, and $F_{measure}$ of 99.55%, 97.74%, 99.71%, and 98.33%, respectively. Additionally, with 20% of the data in the TS phase, the ODL-SCDC system attains average $accu_y$, $sens_y$, $spec_y$, and $F_{measure}$ of 98.66%, 94.05%, 99.14%, and 95.28%, respectively.
The skin cancer classification results of the ODL-SCDC technique at the 70:30 TR phase/TS phase split are provided in Table 3 and Figure 5. The simulation values imply that the ODL-SCDC method obtains high results under all classes. With 70% of the data in the TR phase, the ODL-SCDC system attains average $accu_y$, $sens_y$, $spec_y$, and $F_{measure}$ of 99.36%, 96.82%, 99.61%, and 97.41%, respectively. Furthermore, with 30% of the data in the TS phase, the ODL-SCDC algorithm attains average $accu_y$, $sens_y$, $spec_y$, and $F_{measure}$ of 98.51%, 93.73%, 99.10%, and 94.37%, respectively.
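The per-class values in Tables 2 and 3 follow directly from the confusion matrices in Figure 3; the sketch below shows one way to derive the one-vs-rest accuracy, sensitivity, specificity, and F-measure from a multi-class confusion matrix.

```python
# Sketch of per-class metrics derived from a multi-class confusion matrix.
import numpy as np

def per_class_metrics(cm):
    """cm[i, j] = number of samples of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - (tp + fp + fn)
    sens = tp / (tp + fn)                 # sensitivity (recall) per class
    spec = tn / (tn + fp)                 # specificity per class
    acc = (tp + tn) / cm.sum()            # one-vs-rest accuracy per class
    prec = tp / (tp + fp)
    f_measure = 2 * prec * sens / (prec + sens)
    return acc, sens, spec, f_measure
```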
To assess the performance of the ODL-SCDC approach at the 80:20 TR phase/TS phase split, the TR and TS $accu_y$ curves are plotted in Figure 6. The TR and TS $accu_y$ curves describe the performance of the ODL-SCDC model over several epochs and provide meaningful details regarding its learning and generalisation abilities. As the epoch count increases, the TR and TS $accu_y$ curves improve, showing that the ODL-SCDC algorithm achieves good testing accuracy and is able to recognize the patterns in the TR and TS data.
Figure 7 exhibits the overall TR and TS loss values of the ODL-SCDC algorithm at the 80:20 TR phase/TS phase split over the epochs. The loss curves demonstrate how well the model fits the training data: the loss decreases over the epochs as the model adjusts its weights to minimize the prediction error on the TR and TS data. The TR and TS loss decreases steadily, showing that the ODL-SCDC system effectively learns the patterns present in the TR and TS data and adjusts its parameters to reduce the discrepancy between the predictions and the original training labels.
The precision–recall curve of the ODL-SCDC system at 80:20 of the TR phase/TS phase is demonstrated by plotting precision against recall as defined in Figure 8. The outcome confirms that the ODL-SCDC approach reaches higher precision–recall outcomes under all classes. The figure represents that the model learns to recognize various classes. The ODL-SCDC model accomplishes improved results in the recognition of positive instances with minimal false positives.
The ROC curves produced by the ODL-SCDC model at the 80:20 TR phase/TS phase split are illustrated in Figure 9 and demonstrate its ability to discriminate among the class labels. The figure provides valuable insight into the trade-off between the TPR and FPR over distinct classification thresholds and varying numbers of epochs, and it reflects the accurate predictive performance of the ODL-SCDC system in classifying the various classes.
In Table 4, a comprehensive comparison of the ODL-SCDC technique with recent models is given [1]. Figure 10 represents the comparative results of the ODL-SCDC technique in terms of $accu_y$. The ODL-SCDC technique attains a higher $accu_y$ of 99.55%, whereas the IIoT-DLSLD, DLCAL-SLDC, DL-ANFC, SVM, CDNN, DLN, and DCCN-GC models obtain lower $accu_y$ values of 99.20%, 98.50%, 97.90%, 74.30%, 93.40%, 93.20%, and 93.40%, respectively.
Figure 11 shows the comparative outcomes of the ODL-SCDC approach in terms of $sens_y$ and $spec_y$. Based on $sens_y$, the ODL-SCDC technique attains a higher $sens_y$ of 97.74%, whereas the IIoT-DLSLD, DLCAL-SLDC, DL-ANFC, SVM, CDNN, DLN, and DCCN-GC systems obtain lower $sens_y$ values of 97.30%, 94.50%, 93.40%, 73.20%, 82.50%, 82.00%, and 90.80%, respectively. Based on $spec_y$, the ODL-SCDC methodology achieves a higher $spec_y$ of 99.71%, whereas the same models obtain lower $spec_y$ values of 99.50%, 99.10%, 98.70%, 75.40%, 97.50%, 97.80%, and 92.70%, respectively.
Lastly, the computation time (CT) of the ODL-SCDC technique is compared with recent models in Table 5 and Figure 12. The experimental outcomes show that the ODL-SCDC technique attains the lowest CT of 1.30 s, whereas the IIoT-DLSLD, DLCAL-SLDC, DL-ANFC, SVM, CDNN, DLN, and DCCN-GC models require longer computation times. Therefore, the ODL-SCDC technique exhibits effective skin cancer classification performance.

5. Conclusions

In this article, we have designed and developed an automated skin cancer classification and detection model using the ODL-SCDC technique in the IoT environment. The goal of the ODL-SCDC technique is to exploit metaheuristic-based hyperparameter selection approaches with a DL model for skin cancer classification. To achieve this, the ODL-SCDC technique performs a series of processes: WF-based preprocessing, AOA with EfficientNet-based feature extraction, SDAE-based classification, and DFA-based hyperparameter tuning of the SDAE model. The simulation validation of the ODL-SCDC algorithm has been tested on a benchmark skin lesion database. The extensive results report the enhanced performance of the ODL-SCDC technique over other approaches with respect to distinct measures.

Author Contributions

Conceptualization, M.O. and M.A.A.; Methodology, M.O., M.A.A., N.S.A. and M.A.S.; Software, S.S.A.; Validation, N.S.A., S.S.A. and A.S.; Investigation, M.O.; Resources, N.S.A.; Data curation, N.S.A.; Writing—original draft, M.O., M.A.A., M.A.S. and A.S.; Writing—review & editing, M.O., M.A.A., N.S.A., S.S.A., M.A.S. and A.S.; Visualization, N.S.A. and S.S.A.; Project administration, M.A.A.; Funding acquisition, M.O. and M.A.A. All authors have read and agreed to the published version of the manuscript.

Funding

The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through a large group Research Project under grant number (RGP2/61/44). Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2023R203), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. Research Supporting Project number (RSPD2023R521), King Saud University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Majmaah University for supporting this work under Project Number R-2023-656. This study is partially funded by the Future University in Egypt (FUE).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing does not apply to this article as no datasets were generated during the current study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Asiri, Y.; Halawani, H.T.; Algarni, A.D.; Alanazi, A.A. IoT-enabled healthcare environment using intelligent deep learning enabled skin lesion diagnosis model. Alex. Eng. J. 2023, 78, 35–44. [Google Scholar] [CrossRef]
  2. Riaz, L.; Qadir, H.M.; Ali, G.; Ali, M.; Raza, M.A.; Jurcut, A.D.; Ali, J. A Comprehensive Joint Learning System to Detect Skin Cancer. IEEE Access 2023, 11, 79434–79444. [Google Scholar] [CrossRef]
  3. Shinde, R.K.; Alam, S.; Hossain, B.; Imtiaz, S.M.; Kim, J.; Padwal, A.A.; Kim, N. Squeeze-mnet: Precise skin cancer detection model for low computing IOT devices using transfer learning. Cancers 2022, 15, 12. [Google Scholar] [CrossRef] [PubMed]
  4. Abdelhafeez, A.; Mohamed, H.K. Skin Cancer Detection using Neutrosophic c-means and Fuzzy c-means Clustering Algorithms. J. Intell. Syst. Internet Things 2023, 8, 33–42. [Google Scholar] [CrossRef]
  5. Singh, B.; Ebrahim, A.M.A.; Rajan, R.; Gupta, S.; Babu, D.V. IoT enabled Primary Skin Cancer Prediction Using Pigmented Lesions. In Proceedings of the 2022 Second International Conference on Artificial Intelligence and Smart Energy (ICAIS), Coimbatore, India, 23–25 February 2022; IEEE: New York, NY, USA, 2022; pp. 1315–1319. [Google Scholar]
  6. Singh, S.K.; Abolghasemi, V.; Anisi, M.H. Fuzzy Logic with Deep Learning for Detection of Skin Cancer. Appl. Sci. 2023, 13, 8927. [Google Scholar] [CrossRef]
  7. Alenezi, F.; Armghan, A.; Polat, K. A multi-stage melanoma recognition framework with deep residual neural network and hyperparameter optimization-based decision support in dermoscopy images. Expert Syst. Appl. 2023, 215, 119352. [Google Scholar] [CrossRef]
  8. Kaur, R.; GholamHosseini, H.; Sinha, R.; Lindén, M. Melanoma classification using a novel deep convolutional neural network with dermoscopic images. Sensors 2022, 22, 1134. [Google Scholar] [CrossRef]
  9. Tajjour, S.; Garg, S.; Chandel, S.S.; Sharma, D. A novel hybrid artificial neural network technique for the early skin cancer diagnosis using color space conversions of original images. Int. J. Imaging Syst. Technol. 2023, 33, 276–286. [Google Scholar] [CrossRef]
  10. Jojoa Acosta, M.F.; Caballero Tovar, L.Y.; Garcia-Zapirain, M.B.; Percybrooks, W.S. Melanoma diagnosis using deep learning techniques on dermatoscopic images. BMC Med. Imaging 2021, 21, 6. [Google Scholar] [CrossRef]
  11. Meenakshi, K.; Adepu, A.; Nagandla, V.V.T.; Agarwal, S. A Machine learning based melanoma skin cancer using hybrid texture features. In Proceedings of the 2023 3rd International Conference on Intelligent Technologies (CONIT), Hubballi, India, 23–25 June 2023; IEEE: New York, NY, USA, 2023; pp. 1–5. [Google Scholar]
  12. Mehr, R.A.; Ameri, A. Skin Cancer Detection Based on Deep Learning. J. Biomed. Phys. Eng. 2022, 12, 559. [Google Scholar]
  13. Khan, M.A.; Akram, T.; Zhang, Y.; Alhaisoni, M.; Al Hejaili, A.; Shaban, K.A.; Tariq, U.; Zayyan, M.H. SkinNet-ENDO: Multiclass skin lesion recognition using deep neural network and Entropy-Normal distribution optimization algorithm with ELM. Int. J. Imaging Syst. Technol. 2023, 33, 1275–1292. [Google Scholar] [CrossRef]
  14. Malibari, A.A.; Alzahrani, J.S.; Eltahir, M.M.; Malik, V.; Obayya, M.; Al Duhayyim, M.; Neto, A.V.L.; de Albuquerque, V.H.C. Optimal deep neural network-driven computer aided diagnosis model for skin cancer. Comput. Electr. Eng. 2022, 103, 108318. [Google Scholar] [CrossRef]
  15. Veeraiah, V.; Ravikaumar, G.K.; Kalpana, R.; Sreenivasulu, K.; Singh, Y.; Shukla, S.K. Medical Internet of Things using Deep Learning Techniques for Skin Cancer Detection. In Proceedings of the 2022 5th International Conference on Contemporary Computing and Informatics (IC3I), Uttar Pradesh, India, 14–16 December 2022; IEEE: New York, NY, USA, 2022; pp. 317–321. [Google Scholar]
  16. Sivakumar, N.R.; Ghorashi, S.A.; Karim, F.K.; Alabdulkreem, E.; Al-Rasheed, A. MIoT Based Skin Cancer Detection Using Bregman Recurrent Deep Learning. Comput. Mater. Contin. 2022, 73, 6253–6267. [Google Scholar] [CrossRef]
  17. Pushpa, B. An Efficient Internet of Things (IoT)-Enabled Skin Lesion Detection Model using Hybrid Feature Extraction with Extreme Machine Learning Model. In Proceedings of the International Conference on Intelligent Computing, Information and Control Systems: ICICCS 2020, Madurai, India, 13–15 May 2020; Springer: Singapore, 2021; pp. 275–282. [Google Scholar]
  18. Ananth, C.; Therese, M.J. A Survey on Melanoma: Skin Cancer through Computerized Diagnosis. 2020. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3551811 (accessed on 8 October 2023).
  19. Linsangan, N.B.; Adtoon, J.J. Skin cancer detection and classification for moles using k-nearest neighbor algorithm. In Proceedings of the 5th International Conference on Bioinformatics Research and Applications, Hong Kong, 27–29 December 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 47–51. [Google Scholar]
  20. Dahou, A.; Aseeri, A.O.; Mabrouk, A.; Ibrahim, R.A.; Al-Betar, M.A.; Elaziz, M.A. Optimal Skin Cancer Detection Model Using Transfer Learning and Dynamic-Opposite Hunger Games Search. Diagnostics 2023, 13, 1579. [Google Scholar] [CrossRef]
  21. Ramya, P.; Sathiyabhama, B. Skin Cancer Prediction using Enhanced Genetic Algorithm with Extreme Learning Machine. J. Trends Comput. Sci. Smart Technol. 2023, 5, 1–13. [Google Scholar] [CrossRef]
  22. Abd Elaziz, M.; Dahou, A.; Mabrouk, A.; El-Sappagh, S.; Aseeri, A.O. An efficient artificial rabbits optimization based on mutation strategy for skin cancer prediction. Comput. Biol. Med. 2023, 163, 107154. [Google Scholar] [CrossRef]
  23. Khamparia, A.; Singh, P.K.; Rani, P.; Samanta, D.; Khanna, A.; Bhushan, B. An internet of health things-driven deep learning framework for detection and classification of skin cancer using transfer learning. Trans. Emerg. Telecommun. Technol. 2021, 32, e3963. [Google Scholar] [CrossRef]
  24. Anupama, C.S.S.; Natrayan, L.; Lydia, E.L.; Sait, A.W.; Escorcia-Gutierrez, J.; Gamarra, M.; Mansour, R.F. Deep learning with backtracking search optimization based skin lesion diagnosis model. Comput. Mater. Contin. 2021, 70, 1297–1313. [Google Scholar] [CrossRef]
  25. Ottom, M.A. Convolutional Neural Network for diagnosing skin cancer. Int. J. Adv. Comput. Sci. Appl. 2019, 10, 333–338. [Google Scholar] [CrossRef]
  26. Adjobo, E.C.; Mahama, A.T.S.; Gouton, P.; Tossa, J. Proposition of convolutional neural network based system for skin cancer detection. In Proceedings of the 2019 15th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS), Sorrento, Italy, 26–29 November 2019; IEEE: New York, NY, USA, 2019; pp. 35–39. [Google Scholar]
  27. Baskaran, D.; Nagamani, Y.; Merugula, S.; Premnath, S.P. MSRFNet for skin lesion segmentation and deep learning with hybrid optimization for skin cancer detection. Imaging Sci. J. 2023, 71, 616–635. [Google Scholar] [CrossRef]
  28. Luo, J.; Fan, Y.; Jiang, P.; He, Z.; Xu, P.; Li, X.; Yang, W.; Zhou, W.; Ma, S. Vehicle platform attitude estimation method based on adaptive Kalman filter and sliding window least squares. Meas. Sci. Technol. 2020, 32, 035007. [Google Scholar] [CrossRef]
  29. Chen, S.W.; Chen, J.K.; Hsieh, Y.H.; Chen, W.H.; Liao, Y.H.; Lin, Y.C.; Chen, M.C.; Tsai, C.T.; Chai, J.W.; Yuan, S.M. Improving Patient Safety in the X-ray Inspection Process with EfficientNet-Based Medical Assistance System. Healthcare 2023, 11, 2068. [Google Scholar] [CrossRef]
  30. Jajam, N.; Challa, N.P.; Prasanna, K.S.; Deepthi, C.H.V.S. Arithmetic Optimization with Ensemble Deep Learning SBLSTM-RNN-IGSA model for Customer Churn Prediction. IEEE Access 2023, 11, 93111–93128. [Google Scholar] [CrossRef]
  31. Fei, Z.; Wang, J.; Liu, K.; Attahi, E.; Huang, B. Deep feature fusion-based stacked denoising autoencoder for tag recommendation systems. IET Cyber-Syst. Robot. 2023, 5, e12095. [Google Scholar] [CrossRef]
  32. Rahmati, K.; Taherinasab, S. The importance of reconfiguration of the distribution network to achieve minimization of energy losses using the dragonfly algorithm. e-Prime-Adv. Electr. Eng. Electron. Energy 2023, 5, 100270. [Google Scholar] [CrossRef]
Figure 1. Overall process of ODL-SCDC algorithm.
Figure 3. Confusion matrices at (a,b) 80:20 of TR phase/TS phase and (c,d) 70:30 of TR phase/TS phase.
Figure 4. Average outcomes of ODL-SCDC algorithm at 80:20 of TR phase/TS phase.
Figure 5. Average outcomes of ODL-SCDC algorithm at 70:30 of TR phase/TS phase.
Figure 6. $Accu_y$ curve of ODL-SCDC approach at 80:20 of TR phase/TS phase.
Figure 7. Loss curve of ODL-SCDC approach at 80:20 of TR phase/TS phase.
Figure 8. PR curve of ODL-SCDC system at 80:20 of TR phase/TS phase.
Figure 9. ROC curve of ODL-SCDC approach at 80:20 of TR phase/TS phase.
Figure 10. $Accu_y$ outcome of ODL-SCDC algorithm with other approaches.
Figure 11. $Sens_y$ and $spec_y$ outcomes of ODL-SCDC algorithm with other approaches.
Figure 12. CT outcome of ODL-SCDC algorithm with other approaches.
Table 1. Database details.

Class | No. of Images
Angioma | 21
Nevus | 46
Lentigo NOS | 41
Solar Lentigo | 68
Melanoma | 51
Seborrheic Keratosis | 54
Basal Cell Carcinoma | 37
Total Number of Images | 318
Table 2. Skin cancer classifier outcome of ODL-SCDC algorithm at 80:20 of TR phase/TS phase.

Class | $Accu_y$ | $Sens_y$ | $Spec_y$ | $F_{measure}$
TR Phase (80%)
Angioma | 99.61 | 92.31 | 100.00 | 96.00
Nevus | 100.00 | 100.00 | 100.00 | 100.00
Lentigo NOS | 100.00 | 100.00 | 100.00 | 100.00
Solar Lentigo | 98.43 | 100.00 | 98.00 | 96.43
Melanoma | 100.00 | 100.00 | 100.00 | 100.00
Seborrheic Keratosis | 99.21 | 95.12 | 100.00 | 97.50
Basal Cell Carcinoma | 99.61 | 96.77 | 100.00 | 98.36
Average | 99.55 | 97.74 | 99.71 | 98.33
TS Phase (20%)
Angioma | 96.88 | 75.00 | 100.00 | 85.71
Nevus | 100.00 | 100.00 | 100.00 | 100.00
Lentigo NOS | 100.00 | 100.00 | 100.00 | 100.00
Solar Lentigo | 95.31 | 100.00 | 94.00 | 90.32
Melanoma | 100.00 | 100.00 | 100.00 | 100.00
Seborrheic Keratosis | 100.00 | 100.00 | 100.00 | 100.00
Basal Cell Carcinoma | 98.44 | 83.33 | 100.00 | 90.91
Average | 98.66 | 94.05 | 99.14 | 95.28
Table 3. Skin cancer classifier outcome of ODL-SCDC algorithm at 70:30 of TR phase/TS phase.

Class | $Accu_y$ | $Sens_y$ | $Spec_y$ | $F_{measure}$
TR Phase (70%)
Angioma | 99.10 | 86.67 | 100.00 | 92.86
Nevus | 100.00 | 100.00 | 100.00 | 100.00
Lentigo NOS | 99.55 | 96.67 | 100.00 | 98.31
Solar Lentigo | 99.55 | 100.00 | 99.43 | 98.95
Melanoma | 99.10 | 97.30 | 99.46 | 97.30
Seborrheic Keratosis | 98.20 | 97.14 | 98.40 | 94.44
Basal Cell Carcinoma | 100.00 | 100.00 | 100.00 | 100.00
Average | 99.36 | 96.82 | 99.61 | 97.41
TS Phase (30%)
Angioma | 98.96 | 83.33 | 100.00 | 90.91
Nevus | 97.92 | 100.00 | 97.59 | 92.86
Lentigo NOS | 100.00 | 100.00 | 100.00 | 100.00
Solar Lentigo | 97.92 | 100.00 | 97.33 | 95.45
Melanoma | 100.00 | 100.00 | 100.00 | 100.00
Seborrheic Keratosis | 97.92 | 89.47 | 100.00 | 94.44
Basal Cell Carcinoma | 96.88 | 83.33 | 98.81 | 86.96
Average | 98.51 | 93.73 | 99.10 | 94.37
Table 4. Comparative outcome of ODL-SCDC algorithm with other approaches.

Methods | $Sens_y$ | $Spec_y$ | $Accu_y$
ODL-SCDC | 97.74 | 99.71 | 99.55
IIoT-DLSLD Technique | 97.30 | 99.50 | 99.20
DLCAL-SLDC | 94.50 | 99.10 | 98.50
DL-ANFC | 93.40 | 98.70 | 97.90
SVM Model | 73.20 | 75.40 | 74.30
CDNN Model | 82.50 | 97.50 | 93.40
DLN Algorithm | 82.00 | 97.80 | 93.20
DCCN-GC | 90.80 | 92.70 | 93.40
Table 5. CT outcome of ODL-SCDC algorithm with other approaches.

Methods | Computational Time (s)
ODL-SCDC | 1.30
IIoT-DLSLD Technique | 2.85
DLCAL-SLDC | 4.80
DL-ANFC | 4.43
SVM Model | 3.93
CDNN Model | 3.80
DLN Algorithm | 4.82
DCCN-GC | 3.87

