Article

Classification of Skin Lesion Images Using Artificial Intelligence Methodologies through Radial Fourier–Mellin and Hilbert Transform Signatures

by
Esperanza Guerra-Rosas
1,
Luis Felipe López-Ávila
2,
Esbanyely Garza-Flores
3,
Claudia Andrea Vidales-Basurto
2 and
Josué Álvarez-Borrego
2,*
1
Facultad de Ingeniería, Arquitectura y Diseño, Universidad Autónoma de Baja California, Km. 103 Carretera Tijuana-Ensenada, Ensenada 22860, Baja California, Mexico
2
Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE), Baja California, Carretera Ensenada-Tijuana No. 3918, Zona Playitas, Ensenada 22860, Baja California, Mexico
3
SolexVintel, Santa Margarita 117, Colonia Insurgentes San Borja, Alcaldía Benito Juárez, Cd. De Mexico C. P. 03100, Mexico
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(20), 11425; https://doi.org/10.3390/app132011425
Submission received: 15 September 2023 / Revised: 8 October 2023 / Accepted: 16 October 2023 / Published: 18 October 2023
(This article belongs to the Special Issue New Trends in Machine Learning for Biomedical Data Analysis)


Featured Application

A method for developing a potential automated skin lesion classification system.

Abstract

This manuscript proposes using concatenated signatures (instead of images) obtained from different integral transforms, such as Fourier, Mellin, and Hilbert, to classify skin lesions. Eight lesion types were analyzed using artificial intelligence algorithms: basal cell carcinoma (BCC), squamous cell carcinoma (SCC), melanoma (MEL), actinic keratosis (AK), benign keratosis (BKL), dermatofibroma (DF), melanocytic nevus (NV), and vascular lesions (VASCs). Eleven artificial intelligence models were applied to classify the eight skin lesions by analyzing the signatures of each lesion. The database was randomly divided into 80% training and 20% test images. The metrics reported are accuracy, sensitivity, specificity, and precision. Each process was repeated 30 times to avoid bias, in accordance with the central limit theorem, and the average and ± standard deviation were reported for each metric. Although all the results were very satisfactory, the highest average score for the eight lesions analyzed was obtained using the subspace k-NN model, whose test metrics were 99.98% accuracy, 99.96% sensitivity, 99.99% specificity, and 99.95% precision.

1. Introduction

The skin is the largest organ in the body, covering and protecting it externally. Its condition and appearance reflect the state of health and wellbeing of a person; however, due to exposure to the environment and other factors, specific skin injuries may occur, and these can vary in size and shape. Different types of skin disease vary in their symptoms, and even a mild skin condition can lead to severe complications and death. Skin cancer is a serious disease and one of the most common carcinomas. It is mainly classified into basal cell carcinoma (BCC), squamous cell carcinoma or epidermoid carcinoma (SCC), and melanoma (MEL) [1,2,3,4]. BCC is the most frequent skin cancer in the worldwide population; its growth is slow, but it can invade and destroy the surrounding skin. It is generally observed as a flat or raised, reddish lesion [1,5,6,7]. SCC is a malignant tumor that appears as red spots with raised, wart-like growths; it is the second most common skin cancer [1,2]. Melanoma is the most aggressive skin cancer, appearing as a pigmented lesion, usually beginning on normal skin as a new, small, pigmented growth [2]. Other common skin diseases include actinic keratosis (AK), benign keratosis (BKL), dermatofibromas (DF), melanocytic nevi (NV), and vascular lesions (VASCs).
Actinic keratosis occurs due to frequent exposure to ultraviolet rays from the sun or tanning beds; it usually appears as small rough spots with color variations. Some of these lesions become malignant, giving rise to squamous cell carcinoma [7,8,9]. Seborrheic keratoses, or benign keratoses, are benign tumors that frequently occur in the geriatric population. This type of tumor consists of a brown, black, or light brown spot or lesion, and in some cases it is confused with basal cell carcinoma or melanoma [10,11]. Dermatofibroma is a benign skin lesion; it appears as a slow-growing papule, and its color varies from light to dark brown, purple to red, or yellowish. Its clinical diagnosis is simple; however, it is sometimes difficult to differentiate from other tumors, such as malignant melanoma [12,13]. Melanocytic nevi, or moles, are the most common benign lesions and have smooth, flat, or palpable surfaces. They form as a brown spot or freckle that varies in size and thickness [14,15]. Vascular lesions are disorders of the blood vessels. They become evident on the skin's surface, forming red structures, dark spots, or scars, and present an aesthetic problem. Some VASCs develop over time due to environmental changes, temperature, poor blood circulation, or sensitive skin [16,17]. The skin can warn of a health problem, so its care is critical for both physical and emotional wellbeing. Different conditions can affect the skin; some are present at birth, whereas others are acquired throughout life. Some lesions may have similar characteristics, making it difficult to differentiate them from the most common malignant neoplasms. Skin diseases are, therefore, a significant health problem. The development of new digital tools and technologies based on artificial intelligence (AI) to detect and treat diseases has been driven by the ability to analyze large amounts of data. The latest AI-based developments help identify health problems at early stages. These developments also rely on image processing, where the recognition of visual patterns in images represents a potential remedy for the failure of the human eye to detect disease. Applying algorithms based on artificial intelligence allows the development of non-invasive tools in the health area.
In recent decades, machine learning (ML) techniques have found impressive applications in many research areas, and their use is still growing. Since many medical imaging studies focus on automating skin lesion identification from dermatological images to provide adequate diagnosis and treatment to patients, ML techniques can be a powerful tool for detecting malignant lesions automatically, quickly, and reliably.
Some ML techniques, such as convolutional neural networks (CNNs), are the most popular, as they represent the technique closest to the learning process of human vision. CNNs are based on the mathematical operation of convolution between the image and filters to extract information. Even though CNNs have provided successful results for image classification, their high computational cost, as well as the non-invariance of the convolution operation under rotation, scale, and translation, can make CNNs an ineffective tool in practice for identifying or classifying some images, especially those of skin lesions, where vital information can be lost due to the low contrast that often exists between the lesion and healthy skin. This low contrast can mean that, during the segmentation carried out by some filters, information on the lesion is not extracted correctly [18,19]. In this context, other ML techniques, such as the support vector machine (SVM), k-nearest neighbor (k-NN), or ensemble classifiers, have shown acceptable performance [20,21].
Although the SVM, k-NN, and ensemble classifiers have been successfully implemented to classify skin lesions, digital images under rotation, scaling, or translation were not considered. The application of the fractional Fourier-radial transform to pattern recognition in digital images is invariant to translation, scale, and rotation, and the results obtained when classifying phytoplankton species showed a high level of confidence (greater than 90%) [22]. In this paper, we applied the Hilbert mask to the module of the Fourier–Mellin transform to obtain a unique signature of the skin lesion, called the radial Fourier–Mellin signature, which is invariant to rotation, scale, and translation. Additionally, we included the extraction of uniform local binary pattern (LBP-U) image features as a signature. The classification task was performed using the k-NN, SVM, and ensemble classifiers. Section 2 describes the image dataset, the pre-processing procedure, and the methods used for skin lesion classification. Section 3 presents the results obtained with these supervised machine learning methods. Section 4 discusses the obtained results.

2. Materials and Methods

2.1. Image Dataset

The images used in our study were obtained from the 2019 challenge training data of the "International Skin Imaging Collaboration" (ISIC) [23]. Some examples are shown in Figure 1. This dataset consists of 25,331 digital images of 8 types of skin lesions: 3323 basal cell carcinoma (BCC) images; 628 squamous cell carcinoma (SCC) images; 4522 melanoma (MEL) images; 867 actinic keratosis (AK) images; 2624 benign keratosis (BKL) images, including solar lentigo, seborrheic keratosis, and lichen planus-like keratosis; 239 dermatofibroma (DF) images; 12,875 melanocytic nevus (NV) images; and 253 vascular skin lesion (VASC) images. However, we eliminated the skin lesion images that contained noise, such as hair, measurement artifacts, and other noise types that made it difficult to segment the lesions. This debugging of the image dataset reduced the data to 9067 dermatologic skin lesion images: 6747 for NV, 1032 for MEL, 512 for BCC, 471 for BKL, 130 for VASC, 67 for DF, 62 for SCC, and 46 for AK. This dramatic reduction motivated us to include a data augmentation procedure, resulting in 362,680 dermatologic skin lesion images, from which we randomly selected images to classify using the SVM, k-NN, and ensemble classifier machine learning methodologies. To avoid classification bias, we homogenized the classes in the database, using 1840 images for each type of skin lesion. Each of these 14,720 randomly selected images was transformed into its radial Fourier–Mellin signature and texture descriptors using the Hilbert transform.
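For readers who wish to reproduce the balancing step, a minimal sketch is given below; it assumes the signatures and labels are already stored in NumPy arrays, and the function name and fixed seed are our own illustrative choices, not part of the original pipeline.

```python
import numpy as np

def balance_classes(signatures, labels, per_class=1840, seed=0):
    """Randomly keep the same number of signatures per lesion class,
    yielding 8 x 1840 = 14,720 balanced samples as described above."""
    rng = np.random.default_rng(seed)
    keep = np.concatenate([
        rng.choice(np.flatnonzero(labels == c), per_class, replace=False)
        for c in np.unique(labels)
    ])
    return signatures[keep], labels[keep]
```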

2.2. The Signatures

We generated several signature vectors for each RGB channel and for the gray-scale image using our augmented dataset (362,680 images). The data augmentation procedure considered five image scale percentages (100%, 95%, 90%, 85%, and 80%) and eight rotation angles (45°, 90°, 135°, 180°, 225°, 270°, 315°, and 360°). These signatures, or descriptors, exploit the invariance to translation and scale of the modules of the Fourier transform and the Mellin transform, respectively. To include invariance to object rotation, we used the Hilbert transform. To calculate the unique signatures of the image, we summed the pixel values in each ring obtained after filtering with the Hilbert masks. We then incorporated the texture signatures, or descriptors, alongside the previously generated radial Fourier signatures. The process for obtaining this one-dimensional representation, or signature, of a skin lesion digital image is shown in Figure 2 and Figure 3. The original image, $Im(x,y)$, contains three matrix channels in RGB (red, green, and blue). Thus, the picture was separated into these three primary color channels to apply the radial Fourier–Mellin method and the uniform LBP feature extraction. In addition, we considered the gray-scale skin lesion digital image obtained by a weighted sum of RGB values, defined as 0.299R + 0.587G + 0.114B.
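The augmentation grid above (5 scales × 8 angles) yields 40 variants per image, and 9067 × 40 = 362,680, which matches the dataset size reported in Section 2.1. A minimal sketch of this step is shown below, assuming Pillow for the geometric operations (our choice of library, not necessarily the authors'):

```python
import numpy as np
from PIL import Image

SCALES = [1.00, 0.95, 0.90, 0.85, 0.80]           # five scale percentages
ANGLES = [45, 90, 135, 180, 225, 270, 315, 360]    # eight rotation angles (deg)

def augment(img: Image.Image):
    """Yield the 40 scale/rotation variants of one lesion image."""
    w, h = img.size
    for s in SCALES:
        scaled = img.resize((int(w * s), int(h * s)), Image.BILINEAR)
        for a in ANGLES:
            yield scaled.rotate(a)

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Weighted gray-scale conversion used in the paper: 0.299R + 0.587G + 0.114B."""
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
```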

2.3. Radial Fourier–Mellin Signatures through Hilbert Transform

To generate the radial Fourier–Mellin signatures, the image was first split into its RGB channels and gray-scale version (Figure 3a). Then, the module of the Fourier–Mellin (FM) transform of each skin digital image, denoted $Im(x,y)$, was obtained using the following equation [24,25,26] (Figure 3b):

$$|FM(s,t)| = \int_{0}^{\infty}\!\!\int_{0}^{\infty} \left| FT[Im(x,y)] \right| \, x^{s-1} y^{t-1} \, dx \, dy = \mathcal{M}\left\{ \left| FT[Im(x,y)] \right| \right\},$$

where $|FM(s,t)|$ is the module of the Mellin transform, which gives us scale invariance for an object in the image. This is necessary because the skin lesion digital images were obtained at different lesion-to-camera distances; the lesion region is smaller for longer lesion-to-camera distances, and the images contain a larger lesion region when the opposite is true. Moreover, $(s,t)$ represents the 2D coordinates on Mellin's plane of the transformed pixel coordinates $(x,y)$. Notice that these pixel coordinates $(x,y)$ correspond to the module of the Fourier transform of the image, $|FT[Im(x,y)]|$, taking advantage of its translation invariance. Therefore, at this point, the object (skin lesion) in the image is invariant to translation and scale.
Now, using the Hilbert transform, we also make the skin lesion in the image invariant to rotation. The Hilbert transform of the image is given by [22,26,27,28,29]:

$$FH_r[Im(x,y)] = e^{ip\theta}\, FT[Im(x,y)] = e^{ip\theta} F(u,v),$$

where $p$ is the order of the radial Hilbert transform and $\theta$ is the angle, in the frequency domain, of the pixel coordinates $(x,y)$ after being transformed to the Fourier plane coordinates $(u,v)$. This angle is determined by $\theta = \arccos\!\left( u / \sqrt{u^2 + v^2} \right)$. Then, using Euler's formula, we calculated the binary ring masks for the RGB channels and the gray-scale skin lesion digital image, using both the real ($H_R$) and imaginary ($H_I$) parts of the radial Hilbert transform of the image, as follows (Figure 2) [22,26,27,28,29]:

$$H_R = \mathrm{Re}[H_r(u,v)] = \begin{cases} 1, & \text{if } \sin(p\theta) > 0 \\ 0, & \text{otherwise} \end{cases}$$

$$H_I = \mathrm{Im}[H_r(u,v)] = \begin{cases} 1, & \text{if } \cos(p\theta) > 0 \\ 0, & \text{otherwise} \end{cases}$$

The binary ring masks obtained above were applied to filter the skin lesion digital images previously processed with the module of the Fourier–Mellin transform (Figure 3c). The pixel values within each ring are then summed, yielding two unique signatures for each gray-scale lesion image ($S_{gray}^{H_R}$ and $S_{gray}^{H_I}$) and six for its RGB channels: $S_{R}^{H_R}$, $S_{R}^{H_I}$, $S_{G}^{H_R}$, $S_{G}^{H_I}$, $S_{B}^{H_R}$, and $S_{B}^{H_I}$ (Figure 3d). Finally, each signature is normalized by its maximum value (Figure 3e,g).
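The following sketch illustrates the mask construction and the ring sums. It is a simplified reading of the method: the $H_R$/$H_I$ thresholding follows the definitions above, the translation invariance comes from taking the module of the FFT, and the number of rings is an assumed parameter; the Mellin (scale-invariance) step and the authors' exact ring construction are not reproduced here.

```python
import numpy as np

def hilbert_masks(n, p=1):
    """Binary masks from the real/imaginary parts of e^{ip*theta}
    (the H_R and H_I definitions above), on an n x n grid."""
    u = np.arange(n) - n // 2
    U, V = np.meshgrid(u, u)
    R = np.hypot(U, V)
    ratio = np.divide(U, R, out=np.zeros_like(R), where=R > 0)
    theta = np.arccos(ratio)
    HR = (np.sin(p * theta) > 0).astype(np.uint8)
    HI = (np.cos(p * theta) > 0).astype(np.uint8)
    return HR, HI

def radial_signature(channel, mask, n_rings=26):
    """Filter the centered |FT| of one (square) channel with a binary mask,
    sum the values inside concentric rings, and normalize by the maximum."""
    n = channel.shape[0]
    spec = np.abs(np.fft.fftshift(np.fft.fft2(channel)))  # translation-invariant module
    filtered = spec * mask
    yy, xx = np.indices((n, n))
    r = np.hypot(xx - n // 2, yy - n // 2)
    edges = np.linspace(0, n // 2, n_rings + 1)
    sig = np.array([filtered[(r >= edges[i]) & (r < edges[i + 1])].sum()
                    for i in range(n_rings)])
    return sig / sig.max()
```

Applying `radial_signature` with both masks to the gray-scale image and the three RGB channels would produce the eight signatures $S_{gray}^{H_R}$ through $S_{B}^{H_I}$.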
To include texture descriptors, we used the uniform local binary pattern (LBP) technique (Figure 3f). This is a texture analysis tool in computer vision and image processing [30].
It is a simple and efficient descriptor that characterizes textures, edges, corners, spots, and flat regions. Taking blocks of 3 × 3 pixels, the intensity of each of the eight neighboring pixels is compared with the intensity of the central pixel, which serves as the threshold. If the intensity of a neighboring pixel is greater than or equal to the intensity of the central pixel, that neighbor's position is assigned a value of 1 (and otherwise a value of 0). Once all the pixels are compared, we obtain a set of zeros and ones that represents a binary number. Each position of the binary number is multiplied by its corresponding power of two, and all the values are then added. This sum is the LBP value that labels the central pixel. Figure 4 shows an example of the LBP calculation for a pixel with P = 8 neighborhood pixels.
To calculate the LBP on a gray-scale image, the following equation is used:

$$LBP(x_c, y_c) = \sum_{p=0}^{P-1} s(I_p - I_c)\, 2^p,$$

where $(x_c, y_c)$ is the position of the central pixel, $P$ is the number of pixels in the neighborhood, $I_p$ is the intensity of the neighboring pixels, $I_c$ is the intensity of the central pixel, and $s$ is the function:

$$s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & \text{otherwise.} \end{cases}$$
The so-called uniform LBP (LBP-U) is a variant of the LBP that reduces the original LBP feature vector; this technique provides invariance to rotations. An LBP is uniform when there are at most two transitions from 1 to 0 and from 0 to 1; for example, the binary patterns 11111111 (0 transitions), 11111000 (1 transition), and 11001111 (2 transitions) are uniform, whereas the patterns 11010110 (6 transitions) and 11001001 (4 transitions) are not. In a neighborhood of eight pixels, 256 patterns can be identified, of which 58 are uniform. In this way, 58 labels are assigned to the uniform patterns, and all non-uniform patterns share the same label, the 59th. In this work, we use the LBP-U.
After calculating the uniform LBP value for each pixel in the image, we create a histogram of the uniform LBP values to represent the distribution of image features for both the RGB channels and the gray-scale image: $LBP_R$, $LBP_G$, $LBP_B$, and $LBP_{gray}$. We concatenate these signatures to obtain a one-dimensional object/signature with 444 components (Figure 3g).
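A compact sketch of the LBP-U histogram follows; it uses the standard circular-uniformity rule of Ojala et al. [30], which yields the 58 + 1 = 59 labels described above. Note that 4 histograms × 59 bins = 236 components, consistent with the 444-component signature if each of the eight radial signatures contributes 26 rings (8 × 26 + 236 = 444); this split is our inference, not stated explicitly in the text.

```python
import numpy as np

def lbp_u_histogram(gray):
    """Normalized 59-bin uniform LBP histogram of a gray-scale image
    (58 uniform labels + 1 shared label for non-uniform patterns)."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]       # 8 neighbors, circular order
    h, w = gray.shape
    center = gray[1:h - 1, 1:w - 1]
    codes = np.zeros_like(center, dtype=np.uint8)
    for i, (dy, dx) in enumerate(offs):
        neighbor = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbor >= center).astype(np.uint8) << i

    def transitions(c):                              # circular 0/1 transitions
        bits = [(c >> i) & 1 for i in range(8)]
        return sum(bits[i] != bits[(i + 1) % 8] for i in range(8))

    lut, label = np.zeros(256, dtype=np.uint8), 0
    for c in range(256):
        if transitions(c) <= 2:
            lut[c], label = label, label + 1         # 58 uniform labels (0..57)
        else:
            lut[c] = 58                              # shared non-uniform label
    hist = np.bincount(lut[codes].ravel(), minlength=59)
    return hist / hist.sum()
```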

2.4. Signature Classification

We generated radial Fourier and texture signature vectors for each RGB channel and gray-scale dermatological digital image in our dataset. These descriptors are invariant to scale, rotation, illumination, and noise on the image being analyzed. A data augmentation procedure considering scale and rotation was included. To homogenize the classes in the database, we used 1840 images for each type of skin lesion. The signature classification was performed with the support vector machine (SVM), k-nearest neighbor (k-NN), and ensemble classifiers.
Variations of these methods were implemented. In the case of the SVM, four different kernels were used [31]: Quadratic SVM works by implementing a polynomial of degree = 2 as a kernel, whereas the cubic SVM method uses a polynomial of degree = 3. The fine Gaussian SVM method uses a Gaussian kernel with a kernel scale = 5.3. The medium Gaussian SVM works with a wider Gaussian kernel, using a kernel scale = 21.
Additionally, five k-NN variations were explored [32,33]: Fine k-NN works using the Euclidean distance and k = 1 neighbor. The medium k-NN algorithm also uses the Euclidean distance, but with k = 10 neighbors. The cosine k-NN algorithm implements the cosine distance with k = 10 neighbors. The Minkowski distance, with exponent p = 3, is used for the cubic k-NN method. For the weighted k-NN, the same distance and number of neighbors were used as in the medium k-NN, but in this case, the neighbors are weighted by the squared inverse of the distance.
Finally, two ensemble classifiers were used. These classifiers aggregate the predictions of a group of predictors to obtain better forecasts than the best individual predictor [34]. The bagged trees method implements decision trees using bootstrap aggregation (bagging) [35]. The subspace k-NN method uses the random subspace method for k-NN classification [36,37].
In total, 11 algorithms were explored; these are shown in Table 1.
MATLAB 2022b was used to carry out the classification process, using the same dataset for all 11 algorithms. The dataset was balanced by selecting 1840 signatures from each skin lesion class; therefore, the dataset consists of 14,720 signatures.
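The models were trained in MATLAB, but a rough scikit-learn analogue of the 11 configurations is sketched below. The Gaussian-gamma conversion (gamma = 1/scale²), the number of ensemble members, and the subspace fraction are our assumptions about the MATLAB presets, not settings reported by the authors.

```python
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier

models = {
    "quadratic_svm": SVC(kernel="poly", degree=2),
    "cubic_svm": SVC(kernel="poly", degree=3),
    "fine_gaussian_svm": SVC(kernel="rbf", gamma=1 / 5.3**2),   # kernel scale 5.3
    "medium_gaussian_svm": SVC(kernel="rbf", gamma=1 / 21**2),  # kernel scale 21
    "fine_knn": KNeighborsClassifier(n_neighbors=1),
    "medium_knn": KNeighborsClassifier(n_neighbors=10),
    "cosine_knn": KNeighborsClassifier(n_neighbors=10, metric="cosine"),
    "cubic_knn": KNeighborsClassifier(n_neighbors=10, p=3),     # Minkowski, p = 3
    "weighted_knn": KNeighborsClassifier(
        n_neighbors=10, weights=lambda d: 1.0 / (d**2 + 1e-12)  # squared inverse
    ),
    "bagged_trees": BaggingClassifier(DecisionTreeClassifier(), n_estimators=30),
    "subspace_knn": BaggingClassifier(                           # random subspace
        KNeighborsClassifier(), n_estimators=30,
        max_features=0.5, bootstrap=False
    ),
}
```

Each model would then be fitted on the 80% training split with 5-fold cross-validation, as described in Section 3.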

3. Results

The data were divided by randomly selecting 80% of the data to train and 20% of the data to test each methodology used. In the training process, k-fold cross-validation with k = 5 was used. For the training dataset and the testing dataset, the accuracy, sensitivity, specificity, and precision [33] were calculated for each class. These parameters are given by
$$\mathrm{accuracy}_{C_i} = \frac{TP_i + TN_i}{TP_i + TN_i + FP_i + FN_i}$$

$$\mathrm{sensitivity}_{C_i} = \frac{TP_i}{TP_i + FN_i}$$

$$\mathrm{specificity}_{C_i} = \frac{TN_i}{TN_i + FP_i}$$

$$\mathrm{precision}_{C_i} = \frac{TP_i}{TP_i + FP_i}$$

where:
  • $TP_i$: true positives for class $i$.
  • $TN_i$: true negatives for class $i$.
  • $FP_i$: false positives for class $i$.
  • $FN_i$: false negatives for class $i$.
Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10, Table 11 and Table 12 show the results for each algorithm.
Because the splitting, training, and testing process described above was performed randomly, it was repeated 30 times to avoid bias, in accordance with the central limit theorem, and the mean ±1 standard deviation was calculated for each metric to determine the repeatability of each implemented method.
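A minimal sketch of this evaluation protocol is given below: compute the one-vs-rest counts per class from the confusion matrix, apply the four metric definitions above, and average over 30 random 80/20 splits. The `model` placeholder and the stratified splitting are illustrative assumptions.

```python
import numpy as np
from sklearn.base import clone
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

def per_class_metrics(y_true, y_pred, labels):
    """One-vs-rest TP/TN/FP/FN per class, then the four metric definitions."""
    cm = confusion_matrix(y_true, y_pred, labels=labels)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    tn = cm.sum() - tp - fp - fn
    return {"accuracy": (tp + tn) / (tp + tn + fp + fn),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "precision": tp / (tp + fp)}

def repeated_evaluation(model, X, y, labels, n_runs=30):
    """Repeat the random 80/20 split n_runs times; report mean and +-1 SD."""
    runs = []
    for _ in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
        m = clone(model).fit(X_tr, y_tr)
        runs.append(per_class_metrics(y_te, m.predict(X_te), labels))
    return {k: (np.mean([r[k] for r in runs], axis=0),
                np.std([r[k] for r in runs], axis=0)) for k in runs[0]}
```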
An example of the subspace k-NN method's performance in one cycle is presented. Figure 5 shows the ROC curves for the training and test sets from an example cycle of the subspace k-NN method. Both curves show excellent performance for each of the eight classes.
The confusion matrices for the training and testing of the example subspace k-NN process are shown in Figure 6.

4. Discussion

The results obtained from this new methodology for classifying skin lesions, employing eleven artificial intelligence algorithms on concatenated signatures, were promising. The methodology uses several integral transforms (Fourier, Mellin, and Hilbert), together with the uniform LBP texture features, to obtain a series of concatenated signatures for each lesion. It works quite well for skin lesion images that are free of noise. It is important to note that the signatures were concatenated across the three RGB color channels and the gray-scale image to extract as much information as possible, which allows the artificial intelligence algorithms to detect the most subtle differences between lesions. The eleven tables show excellent results for the accuracy, sensitivity, specificity, and precision metrics of the method, for both the training set and the test set. The group of lesions used to test this methodology was absent from the set of images used to train the various artificial intelligence algorithms. For both the training and test sets, the algorithms were run 30 times, yielding an average and ±1 standard deviation for each case. Among the eleven classifiers presented in this work for this class of images, the classifier with the best performance was the subspace k-NN classifier, which had the best results for the test set; the same holds when comparing the training metrics of the various classifiers. The remaining ten classifiers also performed well. The use of these classifiers with input consisting of concatenated signatures that are invariant to rotation, scale, and translation is a key contribution of this work. This work aligns with previous studies on the classification of skin lesions using artificial intelligence algorithms. For example, ref. [38] presented a dataset of 2241 histopathological images from 2008 to 2018 and employed two deep learning architectures, VGG19 and ResNet50. The results showed high accuracy for distinguishing melanoma from nevi, with an average F1 score of 0.89, a sensitivity of 0.92, a specificity of 0.94, and an AUC of 0.98. In reference [39], the authors present an automated skin lesion detection and classification technique utilizing an optimized stacked sparse autoencoder (OSSAE)-based feature extractor with a backpropagation neural network (BPNN), named the OSSAE-BPNN technique, which reached a testing accuracy of 0.947, a sensitivity of 0.824, a specificity of 0.974, and a precision of 0.830. Further studies in this area are summarized in Table 13, which compares our work with other approaches; our results, with training and test metrics around 99%, are generally better, and the signatures are invariant to each image's rotation, scale, and displacement.
These studies demonstrate the potential of artificial intelligence algorithms in the classification of skin lesions, and this work furthers our understanding of how to achieve the best results. Our methodology is a breakthrough in classifying skin lesions using artificial intelligence algorithms and concatenated signatures. Indeed, the main contribution of this manuscript is how the images have been converted into concatenated signatures using the different integral transforms mentioned above, together with the uniform LBP vectors.
This is a promising development in medical diagnosis and has the potential to revolutionize how medical professionals diagnose and treat skin lesions. It is worth mentioning that concatenated signatures can be used not only for classifying skin lesions but also for other medical imaging tasks, such as identifying tumors or categorizing brain scans. Furthermore, this technique can be applied to other areas, such as facial recognition or the categorization of satellite images.

Author Contributions

Methodology, L.F.L.-Á., E.G.-R., J.Á.-B. and E.G.-F.; Software, L.F.L.-Á., E.G.-R., J.Á.-B. and E.G.-F.; Validation, J.Á.-B., E.G.-F., L.F.L.-Á. and C.A.V.-B.; Data curation, E.G.-R. and C.A.V.-B.; Visualization, E.G.-R. and J.Á.-B.; Supervision, J.Á.-B.; Project administration, J.Á.-B.; Funding acquisition, J.Á.-B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE), Baja California, grant number F0F181.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Available online: https://challenge.isic-archive.com/data/#2019, accessed on 10 January 2019.

Acknowledgments

Luis Felipe López-Ávila holds a postdoc in Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE) supported by CONAHCYT, with postdoc application number 4553917, CVU 693156, Clave: BP-PA-20230502163027674-4553917. Claudia Andrea Vidales-Basurto holds a postdoc in Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE) supported by CONAHCYT, with postdoc application number 2340213, CVU 395914, Clave: BP-PA-20220621205655995-2340213.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lomas, A.; Leonardi-Bee, J.; Bath-Hextall, F. A systematic review of worldwide incidence of nonmelanoma skin cancer. Br. J. Dermatol. 2012, 166, 1069–1080. [Google Scholar] [CrossRef] [PubMed]
  2. Gordon, R. Skin cancer: An overview of epidemiology and risk factors. Semin. Oncol. Nurs. 2013, 29, 160–169. [Google Scholar] [CrossRef] [PubMed]
  3. Cameron, M.C.; Lee, E.; Hibler, B.P.; Barker, C.A.; Mori, S.; Cordova, M.; Nehal, K.S.; Rossi, A.M. Basal cell carcinoma: Epidemiology; pathophysiology; clinical and histological subtypes; and disease associations. J. Am. Acad. Dermatol. 2019, 80, 303–317. [Google Scholar] [CrossRef] [PubMed]
  4. Zhang, W.; Zeng, W.; Jiang, A.; He, Z.; Shen, X.; Dong, X.; Feng, J.; Lu, H. Global, regional and national incidence, mortality and disability-adjusted life-years of skin cancers and trend analysis from 1990 to 2019: An analysis of the Global Burden of Disease Study 2019. Cancer Med. 2021, 10, 4905–4922. [Google Scholar] [CrossRef]
  5. Born, L.J.; Khachemoune, A. Basal cell carcinosarcoma: A systematic review and reappraisal of its challenges and the role of Mohs surgery. Arch. Dermatol. Res. 2023, 315, 2195–2205. [Google Scholar] [CrossRef]
  6. Naik, P.P.; Desai, M.B. Basal Cell Carcinoma: A Narrative Review on Contemporary Diagnosis and Management. Oncol. Ther. 2022, 10, 317–335. [Google Scholar] [CrossRef]
  7. Reinehr, C.P.H.; Bakos, R.M. Actinic keratoses: Review of clinical, dermoscopic, and therapeutic aspects. An. Bras. De Dermatol. 2019, 94, 637–657. [Google Scholar] [CrossRef]
  8. Del Regno, L.; Catapano, S.; Di Stefani, A.; Cappilli, S.; Peris, K. A Review of Existing Therapies for Actinic Keratosis: Current Status and Future Directions. Am. J. Clin. Dermatol. 2022, 23, 339–352. [Google Scholar] [CrossRef]
  9. Casari, A.; Chester, J.; Pellacani, G. Actinic Keratosis and Non-Invasive Diagnostic Techniques: An Update. Biomedicines 2018, 6, 8. [Google Scholar] [CrossRef]
  10. Opoko, U.; Sabr, A.; Raiteb, M.; Maadane, A.; Slimani, F. Seborrheic keratosis of the cheek simulating squamous cell carcinoma. Int. J. Surg. Case Rep. 2021, 84, 106175. [Google Scholar] [CrossRef]
  11. Moscarella, E.; Brancaccio, G.; Briatico, G.; Ronchi, A.; Piana, S.; Argenziano, G. Differential Diagnosis and Management on Seborrheic Keratosis in Elderly Patients. Clin. Cosmet. Investig. Dermatol. 2021, 14, 395–406. [Google Scholar] [CrossRef] [PubMed]
  12. Jiahua, X.; Yi, C.; Liwu, Z.; Yan, S.; Yichi, X.; Lingli, G. Innovative combined therapy for multiple keloidal dermatofibromas of the chest wall: A novel case report. CJPRS 2022, 4, 182–186. [Google Scholar] [CrossRef]
  13. Endzhievskaya, S.; Hsu, C.-K.; Yang, H.-S.; Huang, H.-Y.; Lin, Y.-C.; Hong, Y.-K.; Lee, J.Y.W.; Onoufriadis, A.; Takeichi, T.; Lee, J.Y.-Y.; et al. Loss of RhoE Function in Dermatofibroma Promotes Disorganized Dermal Fibroblast Extracellular Matrix and Increased Integrin Activation. J. Investig. Dermatol. 2023, 143, 1487–1497. [Google Scholar] [CrossRef] [PubMed]
  14. Park, S.; Yun, S.J. Acral Melanocytic Neoplasms: A Comprehensive Review of Acral Nevus and Acral Melanoma in Asian Perspective. Dermatopathology 2022, 9, 292–303. [Google Scholar] [CrossRef] [PubMed]
  15. Frischhut, N.; Zelger, B.; Andre, F.; Zelger, B.G. The spectrum of melanocytic nevi and their clinical implications. J. Der Dtsch. Dermatol. Ges. 2022, 20, 483–504. [Google Scholar] [CrossRef] [PubMed]
  16. Hu, K.; Li, Y.; Ke, Z.; Yang, H.; Lu, C.; Li, Y.; Guo, Y.; Wang, W. History, progress and future challenges of artificial blood vessels: A narrative review. Biomater. Transl. 2022, 28, 81–98. [Google Scholar] [CrossRef]
  17. Liu, C.; Dai, J.; Wang, X.; Hu, X. The Influence of Textile Structure Characteristics on the Performance of Artificial Blood Vessels. Polymers 2023, 15, 3003. [Google Scholar] [CrossRef]
  18. Folland, G.B. Fourier Analysis and Its Applications; American Mathematical Society: Providence, RI, USA, 2000; pp. 314–318. [Google Scholar]
  19. Al-masni, M.A.; Al-antari, M.A.; Choi, M.T.; Han, S.M.; Kim, T.S. Skin lesion segmentation in dermoscopy images via deep full resolution convolutional networks. Comput. Meth. Prog. Biomed. 2018, 162, 221–231. [Google Scholar] [CrossRef]
  20. Afza, F.; Sharif, M.; Khan, M.A.; Tariq, U.; Yong, H.-S.; Cha, J. Multiclass Skin Lesion Classification Using Hybrid Deep Features Selection and Extreme Learning Machine. Sensors 2022, 22, 799. [Google Scholar] [CrossRef]
  21. Surówka, G.; Ogorzalek, M. On optimal wavelet bases for classification of skin lesion images through ensemble learning. In Proceedings of the 2014 International Joint Conference on Neural Networks (IJCNN), Beijing, China, 6–11 July 2014. [Google Scholar] [CrossRef]
  22. López-Ávila, L.F.; Álvarez-Borrego, J.; Solorza-Calderón, S. Fractional Fourier-Radial Transform for Digital Image Recognition. J. Signal Process. Syst. 2021, 93, 49–66. [Google Scholar] [CrossRef]
  23. ISICCHALLENGE. Available online: https://challenge.isic-archive.com/data/#2019 (accessed on 10 January 2019).
  24. Casasent, D.; Psaltis, D. Scale invariant optical correlation using Mellin transforms. Opt. Commun. 1976, 17, 59–63. [Google Scholar] [CrossRef]
  25. Derrode, S.; Ghorbel, F. Robust and efficient Fourier—Mellin transform approximations for gray-level image reconstruction and complete invariant description. Comput. Vis. Image Underst. 2001, 83, 57–78. [Google Scholar] [CrossRef]
  26. Alcaraz-Ubach, D.F. Reconocimiento de Patrones en Imágenes Digitales Usando Máscaras de Hilbert Binarias de Anillos Concéntricos. Bachelor's Thesis, Science Faculty, Universidad Autónoma de Baja California, Ensenada, México, 2015. [Google Scholar]
  27. Davis, J.A.; McNamara, D.E.; Cottrell, D.M.; Campos, J. Image processing with the radial Hilbert transform: Theory and experiments. Opt. Lett. 2000, 25, 99–101. [Google Scholar] [CrossRef] [PubMed]
  28. Pei, S.C.; Ding, J.J. The generalized radial Hilbert transform and its applications to 2D edge detection (any direction or specified directions). In Proceedings of the 2003 IEEE International Conference on Acoustics, Speech, and Signal Processing, Hong Kong, China, 6–10 April 2003. [Google Scholar] [CrossRef]
  29. King, F.W. Hilbert Transforms; Cambridge University Press: Cambridge, UK, 2009; pp. 1–858. [Google Scholar] [CrossRef]
  30. Ojala, T.; Pietikäinen, M.; Mäenpää, T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Anal. Mach. Intell. 2002, 24, 971–987. [Google Scholar] [CrossRef]
  31. Rogers, S.; Girolami, M. A First Course in Machine Learning, 2nd ed.; Chapman & Hall/CRC Press: Boca Raton, FL, USA, 2017; pp. 185–195. [Google Scholar]
  32. K-Nearest Neighbor. Available online: http://scholarpedia.org/article/K-nearest_neighbor (accessed on 16 August 2023).
  33. Mucherino, A.; Papajorgji, P.J.; Pardalos, P.M. K-nearest neighbor classification. In Data Mining in Agriculture. Springer Optimization and Its Applications, 2nd ed.; Springer: New York, NY, USA, 2009; Volume 34, pp. 83–106. [Google Scholar] [CrossRef]
  34. Géron, A. Hands-On Machine Learning with Scikit-Learn, Keras & TensorFlow, 2nd ed.; O'Reilly: Sebastopol, CA, USA, 2019; pp. 189–212. [Google Scholar]
  35. Breiman, L. Bagging Predictors. Mach. Learn. 1996, 24, 123–140. [Google Scholar] [CrossRef]
  36. Ho, T.K. The random subspace method for constructing decision forests. IEEE Trans. Pattern Anal. Mach. Intell. 1998, 20, 832–844. [Google Scholar] [CrossRef]
  37. Ma, X.; Yang, T.; Chen, J.; Liu, Z. k-Nearest Neighbor algorithm based on feature subspace. In Proceedings of the 2021 International Conference on Big Data Analysis and Computer Science (BDACS), Kunming, China, 25–27 June 2021. [Google Scholar] [CrossRef]
  38. Xie, P.; Zuo, K.; Zhang, Y.; Li, F.; Yin, M.; Lu, K. Interpretable classification from skin cancer histology slides using deep learning: A retrospective multicenter study. arXiv 2019, arXiv:1904.06156. [Google Scholar] [CrossRef]
  39. Ogudo, K.A.; Surendran, R.; Khalaf, O.I. Optimal Artificial Intelligence Based Automated Skin Lesion Detection and Classification Model. Comput. Syst. Sci. Eng. 2023, 44, 693–707. [Google Scholar] [CrossRef]
  40. Ballerini, L.; Fisher, R.B.; Aldridge, B.; Rees, J. Non-melanoma skin lesion classification using colour image data in a hierarchical k-nn classifier. In Proceedings of the 2012 9th IEEE International Symposium on Biomedical Imaging (ISBI), Barcelona, Spain, 2–5 May 2012; pp. 358–361. [Google Scholar]
  41. Ozkan, I.A.; Koklu, M. Skin Lesion Classification using Machine Learning Algorithms. Int. J. Intell. Syst. Appl. Eng. 2017, 5, 285–289. [Google Scholar] [CrossRef]
  42. Nasir, M.; Attique Khan, M.; Sharif, M.; Lali, I.U.; Saba, T.; Iqbal, T. An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach. Microsc. Res. Tech. 2018, 81, 528–543. [Google Scholar] [CrossRef]
  43. Chatterjee, S.; Dey, D.; Munshi, S. Integration of morphological preprocessing and fractal based feature extraction with recursive feature elimination for skin lesion types classification. Comput. Methods Programs Biomed. 2019, 178, 201–218. [Google Scholar] [CrossRef] [PubMed]
  44. Fisher, R.; Rees, J.; Bertrand, A. Classification of Ten Skin Lesion Classes: Hierarchical KNN versus Deep Net. In Medical Image Understanding and Analysis, Proceedings of the 23rd Conference, MIUA 2019, Liverpool, UK, 24–26 July 2019; Communications in Computer and Information Science (CCIS); Springer: Berlin/Heidelberg, Germany, 2020; Volume 1065, pp. 86–98. [Google Scholar] [CrossRef]
  45. Molina-Molina, E.O.; Solorza-Calderón, S.; Álvarez-Borrego, J. Classification of Dermoscopy Skin Lesion Color-Images Using Fractal-Deep Learning Features. Appl. Sci. 2020, 10, 5954. [Google Scholar] [CrossRef]
  46. Afza, F.; Khan, M.A.; Sharif, M.; Saba, T.; Rehman, A.; Javed, M.Y. Skin Lesion Classification: An Optimized Framework of Optimal Color Features Selection. In Proceedings of the 2020 2nd International Conference on Computer and Information Sciences (ICCIS), Sakaka, Saudi Arabia, 13–15 October 2020; pp. 1–6. [Google Scholar]
  47. Ghalejoogh, G.S.; Kordy, H.M.; Ebrahimi, F. A hierarchical structure based on Stacking approach for skin lesion classification. Expert Syst. Appl. 2020, 145, 113127. [Google Scholar] [CrossRef]
  48. Moldovanu, S.; Damian Michis, F.A.; Biswas, K.C.; Culea-Florescu, A.; Moraru, L. Skin Lesion Classification Based on Surface Fractal Dimensions and Statistical Color Cluster Features Using an Ensemble of Machine Learning Techniques. Cancers 2021, 13, 5256. [Google Scholar] [CrossRef]
  49. Shetty, B.; Fernandes, R.; Rodrigues, A.P.; Chengoden, R.; Bhattacharya, S.; Lakshmanna, K. Skin lesion classification of dermoscopic images using machine learning and convolutional neural network. Sci. Rep. 2022, 12, 18134. [Google Scholar] [CrossRef]
  50. Mohanty, N.; Pradhan, M.; Reddy, A.V.N.; Kumar, S.; Alkhayyat, A. Integrated Design of Optimized Weighted Deep Feature Fusion Strategies for Skin Lesion Image Classification. Cancers 2022, 14, 5716. [Google Scholar] [CrossRef] [PubMed]
  51. Camacho-Gutiérrez, J.A.; Solorza-Calderón, S.; Álvarez-Borrego, J. Multi-class skin lesion classification using prism- and segmentation-based fractal signatures. Expert Syst. Appl. 2022, 197, 116671. [Google Scholar] [CrossRef]
Figure 1. Some digital skin lesion images from our dataset.
Figure 2. (a) Binary disk. (b) $H_R$ mask. (c) $H_I$ mask.
Figure 3. Description of the methodology for the image signature generation.
Figure 4. LBP calculation procedure.
Figure 5. ROC curve for one example subspace k-NN process. (a) Training ROC. (b) Testing ROC.
Figure 6. Confusion matrix for one example subspace k-NN process: (a) training, (b) testing.
Table 1. Description of the 11 algorithms implemented.

Algorithm | Description
Quadratic SVM | Type: Support vector machine. Kernel type: Quadratic polynomial.
Cubic SVM | Type: Support vector machine. Kernel type: Cubic polynomial.
Fine Gaussian SVM | Type: Support vector machine. Kernel type: Gaussian. Kernel scale: 5.3.
Medium Gaussian SVM | Type: Support vector machine. Kernel type: Gaussian. Kernel scale: 21.
Fine k-NN | Type: K-nearest neighbor. Number of neighbors: 1. Distance: Euclidean.
Medium k-NN | Type: K-nearest neighbor. Number of neighbors: 10. Distance: Euclidean.
Cosine k-NN | Type: K-nearest neighbor. Number of neighbors: 10. Distance: Cosine.
Cubic k-NN | Type: K-nearest neighbor. Number of neighbors: 10. Distance: Minkowski with exponent p = 3.
Weighted k-NN | Type: K-nearest neighbor. Number of neighbors: 10. Distance: Euclidean. Distance weight: Squared inverse.
Bagged Trees | Type: Ensemble classifier. Method: Bootstrap aggregating (bagging).
Subspace k-NN | Type: Ensemble classifier. Method: Subspace-based k-NN.
Table 2. Results for quadratic SVM.

TRAINING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.997713 | 0.000435 | 0.992518 | 0.001918 | 0.998454 | 0.000441 | 0.989201 | 0.003089
SCC | 0.994121 | 0.000502 | 0.989432 | 0.002688 | 0.994789 | 0.000394 | 0.96442 | 0.002662
MEL | 0.995564 | 0.000634 | 0.978523 | 0.00349 | 0.997998 | 0.000466 | 0.985882 | 0.003276
AK | 0.994141 | 0.00064 | 0.969387 | 0.004028 | 0.997692 | 0.000405 | 0.983691 | 0.002828
BKL | 0.994707 | 0.000585 | 0.982108 | 0.003661 | 0.9965 | 0.000433 | 0.975644 | 0.002831
DF | 0.996077 | 0.00053 | 0.975942 | 0.003758 | 0.998964 | 0.0003 | 0.992662 | 0.00211
NV | 0.995768 | 0.000637 | 0.987118 | 0.003555 | 0.996996 | 0.00054 | 0.979067 | 0.003622
VASC | 0.998452 | 0.000227 | 0.991186 | 0.001829 | 0.999489 | 0.000202 | 0.996405 | 0.001409
Mean ± 1 SD | 0.995818 | 0.001585 | 0.983277 | 0.008205 | 0.99761 | 0.001502 | 0.983371 | 0.01024

TESTING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.998902 | 0.000771 | 0.995155 | 0.003451 | 0.999443 | 0.000747 | 0.996129 | 0.005223
SCC | 0.996671 | 0.001348 | 0.994984 | 0.006641 | 0.99692 | 0.001477 | 0.978896 | 0.009978
MEL | 0.997328 | 0.001484 | 0.984985 | 0.011026 | 0.999095 | 0.000836 | 0.993616 | 0.005861
AK | 0.996467 | 0.001355 | 0.981774 | 0.008303 | 0.998527 | 0.001044 | 0.989513 | 0.007158
BKL | 0.99692 | 0.001044 | 0.99105 | 0.006158 | 0.997775 | 0.001073 | 0.984501 | 0.00745
DF | 0.997656 | 0.00109 | 0.984903 | 0.008138 | 0.999431 | 0.000583 | 0.995944 | 0.004108
NV | 0.997464 | 0.001555 | 0.99428 | 0.006072 | 0.99791 | 0.00166 | 0.98595 | 0.010578
VASC | 0.998913 | 0.000538 | 0.993741 | 0.004151 | 0.999651 | 0.000327 | 0.997529 | 0.002344
Mean ± 1 SD | 0.99754 | 0.000933 | 0.990109 | 0.005392 | 0.998594 | 0.000981 | 0.99026 | 0.006679
Table 3. Results for cubic SVM.

TRAINING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.997931 | 0.000475 | 0.991772 | 0.002502 | 0.998815 | 0.0004 | 0.99175 | 0.002773
SCC | 0.998225 | 0.000515 | 0.9958 | 0.00246 | 0.998571 | 0.000366 | 0.990031 | 0.002564
MEL | 0.998585 | 0.000376 | 0.995069 | 0.002159 | 0.999085 | 0.000309 | 0.993581 | 0.002148
AK | 0.997857 | 0.000469 | 0.98977 | 0.002736 | 0.99901 | 0.000418 | 0.993046 | 0.002911
BKL | 0.997951 | 0.000604 | 0.99258 | 0.003565 | 0.998719 | 0.000419 | 0.991067 | 0.002932
DF | 0.998488 | 0.00042 | 0.993297 | 0.003042 | 0.99923 | 0.000292 | 0.994617 | 0.002015
NV | 0.998321 | 0.000412 | 0.99141 | 0.002184 | 0.999311 | 0.000323 | 0.995173 | 0.002237
VASC | 0.999768 | 0.000158 | 0.998817 | 0.001053 | 0.999903 | 0.000108 | 0.99932 | 0.000756
Mean ± 1 SD | 0.998391 | 0.000617 | 0.993565 | 0.002882 | 0.99908 | 0.000417 | 0.993573 | 0.002906

TESTING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.999241 | 0.000701 | 0.997109 | 0.004113 | 0.999535 | 0.000714 | 0.996713 | 0.005081
SCC | 0.999411 | 0.000667 | 0.998569 | 0.003459 | 0.999534 | 0.000685 | 0.996772 | 0.00483
MEL | 0.999468 | 0.000548 | 0.998491 | 0.003087 | 0.999611 | 0.000568 | 0.997317 | 0.003941
AK | 0.999207 | 0.000769 | 0.996019 | 0.004335 | 0.999665 | 0.000615 | 0.997598 | 0.004424
BKL | 0.999343 | 0.000807 | 0.997474 | 0.005446 | 0.999612 | 0.000638 | 0.997301 | 0.004364
DF | 0.999683 | 0.000527 | 0.999094 | 0.002063 | 0.999767 | 0.000507 | 0.998393 | 0.003469
NV | 0.999162 | 0.000816 | 0.995287 | 0.00487 | 0.999716 | 0.000584 | 0.997986 | 0.004132
VASC | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0
Mean ± 1 SD | 0.99944 | 0.000281 | 0.997755 | 0.001587 | 0.99968 | 0.000153 | 0.99776 | 0.001067
Table 4. Results for fine Gaussian SVM.

TRAINING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.990617 | 0.001202 | 0.938094 | 0.009789 | 0.998108 | 0.000634 | 0.986097 | 0.004562
SCC | 0.991729 | 0.000776 | 0.9374 | 0.006256 | 0.999511 | 0.000248 | 0.996384 | 0.001816
MEL | 0.990704 | 0.000929 | 0.948301 | 0.004959 | 0.99677 | 0.000679 | 0.976802 | 0.004607
AK | 0.964331 | 0.002284 | 0.992258 | 0.002383 | 0.960355 | 0.002588 | 0.781032 | 0.01118
BKL | 0.988485 | 0.000888 | 0.915523 | 0.006838 | 0.998884 | 0.000384 | 0.991531 | 0.002881
DF | 0.993657 | 0.000814 | 0.963894 | 0.006035 | 0.997907 | 0.000437 | 0.985045 | 0.003063
NV | 0.987007 | 0.000943 | 0.955562 | 0.006548 | 0.991504 | 0.000797 | 0.941491 | 0.00538
VASC | 0.993679 | 0.000848 | 0.949736 | 0.006868 | 0.999951 | 7.96E−05 | 0.999643 | 0.000587
Mean ± 1 SD | 0.987526 | 0.009651 | 0.950096 | 0.022366 | 0.992874 | 0.013403 | 0.957253 | 0.073477

TESTING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.994792 | 0.001981 | 0.963688 | 0.014052 | 0.999249 | 0.000668 | 0.994572 | 0.004833
SCC | 0.996071 | 0.000904 | 0.968772 | 0.007678 | 0.999922 | 0.000214 | 0.999446 | 0.001521
MEL | 0.996535 | 0.001641 | 0.980529 | 0.012471 | 0.998837 | 0.00096 | 0.991698 | 0.006806
AK | 0.982756 | 0.002737 | 0.994899 | 0.00495 | 0.980997 | 0.002822 | 0.883549 | 0.015957
BKL | 0.99435 | 0.001641 | 0.960839 | 0.010733 | 0.999222 | 0.000671 | 0.994451 | 0.004731
DF | 0.997226 | 0.001441 | 0.983665 | 0.008772 | 0.999148 | 0.00091 | 0.993885 | 0.006639
NV | 0.994124 | 0.001773 | 0.982219 | 0.008845 | 0.995812 | 0.001428 | 0.970819 | 0.010054
VASC | 0.997271 | 0.001331 | 0.978214 | 0.010582 | 1 | 0 | 1 | 0
Mean ± 1 SD | 0.994141 | 0.004764 | 0.976603 | 0.011407 | 0.996649 | 0.006459 | 0.978552 | 0.039459
Table 5. Results for medium Gaussian SVM.

TRAINING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.970352 | 0.001239 | 0.904254 | 0.007983 | 0.979737 | 0.001104 | 0.863793 | 0.006639
SCC | 0.973061 | 0.001097 | 0.893946 | 0.007351 | 0.984325 | 0.00116 | 0.890457 | 0.006616
MEL | 0.970867 | 0.000961 | 0.829685 | 0.006684 | 0.991048 | 0.000499 | 0.929828 | 0.003744
AK | 0.977499 | 0.000898 | 0.897616 | 0.006106 | 0.988921 | 0.00103 | 0.920629 | 0.006421
BKL | 0.972529 | 0.000865 | 0.865436 | 0.00704 | 0.987814 | 0.000886 | 0.910286 | 0.005526
DF | 0.985904 | 0.000616 | 0.921273 | 0.004687 | 0.995152 | 0.000607 | 0.96458 | 0.004144
NV | 0.959978 | 0.000896 | 0.944947 | 0.003669 | 0.96213 | 0.001128 | 0.781547 | 0.004489
VASC | 0.991525 | 0.000697 | 0.949302 | 0.005961 | 0.997563 | 0.000433 | 0.982402 | 0.002982
Mean ± 1 SD | 0.975214 | 0.009798 | 0.900808 | 0.039817 | 0.985836 | 0.011125 | 0.90544 | 0.062767

TESTING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.977502 | 0.003381 | 0.93541 | 0.015774 | 0.98365 | 0.003091 | 0.89323 | 0.018677
SCC | 0.979337 | 0.002892 | 0.921498 | 0.020366 | 0.987716 | 0.0027 | 0.915755 | 0.017457
MEL | 0.976551 | 0.003395 | 0.858984 | 0.024565 | 0.993301 | 0.00136 | 0.947996 | 0.010845
AK | 0.983243 | 0.002995 | 0.913926 | 0.013777 | 0.993147 | 0.002206 | 0.949931 | 0.015829
BKL | 0.978023 | 0.003674 | 0.893106 | 0.022957 | 0.990187 | 0.001856 | 0.928795 | 0.012963
DF | 0.991078 | 0.001802 | 0.947692 | 0.011963 | 0.997261 | 0.001098 | 0.979965 | 0.008013
NV | 0.966044 | 0.004155 | 0.952837 | 0.010116 | 0.967897 | 0.005143 | 0.807599 | 0.024077
VASC | 0.993671 | 0.001883 | 0.959445 | 0.013131 | 0.998526 | 0.000779 | 0.989305 | 0.005602
Mean ± 1 SD | 0.980681 | 0.008731 | 0.922862 | 0.033929 | 0.988961 | 0.009796 | 0.926572 | 0.057544
Table 6. Results for fine k-NN.

TRAINING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.996612 | 0.000647 | 0.988253 | 0.003377 | 0.997809 | 0.000452 | 0.984767 | 0.003143
SCC | 0.996459 | 0.000687 | 0.992086 | 0.00247 | 0.997082 | 0.000661 | 0.979832 | 0.004407
MEL | 0.991981 | 0.000865 | 0.967874 | 0.005457 | 0.995412 | 0.00063 | 0.967787 | 0.004308
AK | 0.997019 | 0.000537 | 0.98884 | 0.00308 | 0.998188 | 0.000476 | 0.98736 | 0.003258
BKL | 0.993662 | 0.000735 | 0.968304 | 0.005179 | 0.997277 | 0.000491 | 0.980674 | 0.003422
DF | 0.995847 | 0.00054 | 0.984048 | 0.003433 | 0.997534 | 0.000475 | 0.9828 | 0.003276
NV | 0.9906 | 0.000995 | 0.962863 | 0.005059 | 0.994565 | 0.000698 | 0.962032 | 0.004756
VASC | 0.997852 | 0.000516 | 0.987744 | 0.003771 | 0.999295 | 0.000296 | 0.995033 | 0.002071
Mean ± 1 SD | 0.995004 | 0.002616 | 0.980001 | 0.011626 | 0.997145 | 0.001511 | 0.980036 | 0.010578

TESTING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.998562 | 0.000776 | 0.995856 | 0.00456 | 0.998941 | 0.00065 | 0.992512 | 0.004646
SCC | 0.998709 | 0.000954 | 0.997816 | 0.002984 | 0.998835 | 0.000948 | 0.991932 | 0.006699
MEL | 0.996581 | 0.001444 | 0.98485 | 0.010675 | 0.998316 | 0.001431 | 0.988392 | 0.009868
AK | 0.998913 | 0.001019 | 0.996475 | 0.005015 | 0.999275 | 0.000801 | 0.994956 | 0.005579
BKL | 0.997101 | 0.001219 | 0.984999 | 0.007762 | 0.998836 | 0.001008 | 0.991779 | 0.007134
DF | 0.998358 | 0.000953 | 0.995091 | 0.004975 | 0.998823 | 0.000948 | 0.991805 | 0.006562
NV | 0.995811 | 0.001733 | 0.983651 | 0.008814 | 0.997541 | 0.001488 | 0.982872 | 0.010041
VASC | 0.999026 | 0.000761 | 0.993904 | 0.006181 | 0.999754 | 0.00033 | 0.998266 | 0.002323
Mean ± 1 SD | 0.997883 | 0.001215 | 0.99158 | 0.00598 | 0.99879 | 0.000652 | 0.991564 | 0.004523
Table 7. Results for medium k-NN.

TRAINING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.968962 | 0.001612 | 0.902908 | 0.008807 | 0.978419 | 0.001602 | 0.857063 | 0.008559
SCC | 0.964886 | 0.001417 | 0.886476 | 0.009336 | 0.976084 | 0.001643 | 0.841252 | 0.008924
MEL | 0.956256 | 0.001899 | 0.817315 | 0.011059 | 0.976133 | 0.001703 | 0.830625 | 0.009523
AK | 0.966205 | 0.001226 | 0.862392 | 0.009765 | 0.98104 | 0.001444 | 0.866847 | 0.008002
BKL | 0.963505 | 0.00187 | 0.84941 | 0.00938 | 0.979755 | 0.002278 | 0.856939 | 0.013198
DF | 0.969574 | 0.001417 | 0.849535 | 0.010333 | 0.986664 | 0.001288 | 0.900786 | 0.008433
NV | 0.951404 | 0.001651 | 0.834493 | 0.010815 | 0.968077 | 0.001882 | 0.788632 | 0.009132
VASC | 0.985493 | 0.001017 | 0.902211 | 0.007839 | 0.997421 | 0.000526 | 0.980456 | 0.003871
Mean ± 1 SD | 0.965786 | 0.010118 | 0.863092 | 0.031504 | 0.980449 | 0.008639 | 0.865325 | 0.056466

TESTING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.977899 | 0.002878 | 0.946323 | 0.014022 | 0.982366 | 0.003188 | 0.884054 | 0.017584
SCC | 0.975928 | 0.003423 | 0.906904 | 0.021586 | 0.985793 | 0.00251 | 0.901248 | 0.016761
MEL | 0.965308 | 0.004586 | 0.879849 | 0.023875 | 0.977466 | 0.003912 | 0.847469 | 0.023878
AK | 0.97474 | 0.003065 | 0.891863 | 0.016402 | 0.986582 | 0.002767 | 0.904245 | 0.020062
BKL | 0.971558 | 0.00339 | 0.897429 | 0.017104 | 0.982317 | 0.002985 | 0.879933 | 0.018989
DF | 0.977219 | 0.002996 | 0.877022 | 0.024321 | 0.991761 | 0.002146 | 0.939128 | 0.015253
NV | 0.960462 | 0.004176 | 0.85456 | 0.025912 | 0.975699 | 0.004218 | 0.83514 | 0.02571
VASC | 0.988055 | 0.00234 | 0.912879 | 0.018859 | 0.998694 | 0.000701 | 0.990039 | 0.005171
Mean ± 1 SD | 0.973896 | 0.008384 | 0.895854 | 0.027499 | 0.985085 | 0.0075 | 0.897657 | 0.049623
Table 8. Results for cosine k-NN.

TRAINING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.96834 | 0.00108 | 0.851159 | 0.010441 | 0.985047 | 0.001385 | 0.890453 | 0.008299
SCC | 0.963689 | 0.001617 | 0.849973 | 0.013445 | 0.979887 | 0.001611 | 0.857745 | 0.009328
MEL | 0.954031 | 0.001684 | 0.798347 | 0.012674 | 0.976334 | 0.002385 | 0.828959 | 0.012168
AK | 0.965667 | 0.001554 | 0.867501 | 0.009935 | 0.9797 | 0.002158 | 0.859622 | 0.011847
BKL | 0.959197 | 0.001785 | 0.813381 | 0.012224 | 0.979997 | 0.001902 | 0.853189 | 0.010953
DF | 0.969908 | 0.001379 | 0.833521 | 0.010051 | 0.989456 | 0.001226 | 0.918975 | 0.008551
NV | 0.941463 | 0.002009 | 0.890112 | 0.013375 | 0.94879 | 0.00315 | 0.71319 | 0.010549
VASC | 0.981986 | 0.001287 | 0.91312 | 0.008143 | 0.991796 | 0.001024 | 0.940725 | 0.006908
Mean ± 1 SD | 0.963035 | 0.01197 | 0.852139 | 0.038069 | 0.978876 | 0.013264 | 0.857857 | 0.069131

TESTING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.977434 | 0.002263 | 0.911182 | 0.014176 | 0.986988 | 0.002337 | 0.909599 | 0.016076
SCC | 0.972271 | 0.003538 | 0.871525 | 0.020069 | 0.986818 | 0.002835 | 0.904842 | 0.020734
MEL | 0.9643 | 0.00416 | 0.853384 | 0.026301 | 0.979937 | 0.0029 | 0.857077 | 0.018503
AK | 0.973607 | 0.003901 | 0.895428 | 0.016813 | 0.98474 | 0.003515 | 0.893324 | 0.023071
BKL | 0.967969 | 0.004153 | 0.871239 | 0.02185 | 0.981898 | 0.003421 | 0.873844 | 0.020752
DF | 0.974411 | 0.00296 | 0.85076 | 0.016279 | 0.99186 | 0.002333 | 0.936737 | 0.016802
NV | 0.95608 | 0.003435 | 0.912351 | 0.018579 | 0.962345 | 0.004294 | 0.7758 | 0.022732
VASC | 0.985847 | 0.00222 | 0.921673 | 0.014345 | 0.995114 | 0.001557 | 0.964687 | 0.010787
Mean ± 1 SD | 0.97149 | 0.008917 | 0.885943 | 0.027834 | 0.983713 | 0.009942 | 0.889489 | 0.057024
Table 9. Results for cubic k-NN.

TRAINING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.967544 | 0.001553 | 0.892731 | 0.011109 | 0.978245 | 0.00162 | 0.85463 | 0.008652
SCC | 0.958837 | 0.002056 | 0.87173 | 0.011594 | 0.971249 | 0.002404 | 0.812395 | 0.011305
MEL | 0.951254 | 0.001966 | 0.792743 | 0.016617 | 0.973954 | 0.002412 | 0.813743 | 0.012396
AK | 0.961603 | 0.001413 | 0.843246 | 0.008752 | 0.978504 | 0.001808 | 0.84871 | 0.010153
BKL | 0.958948 | 0.001545 | 0.83026 | 0.009622 | 0.977347 | 0.001845 | 0.839881 | 0.010704
DF | 0.965498 | 0.001278 | 0.833468 | 0.010531 | 0.984356 | 0.001613 | 0.884018 | 0.009842
NV | 0.950286 | 0.001692 | 0.834216 | 0.013508 | 0.966907 | 0.002334 | 0.783326 | 0.010768
VASC | 0.984078 | 0.000932 | 0.893935 | 0.007509 | 0.996871 | 0.000587 | 0.975965 | 0.004312
Mean ± 1 SD | 0.962256 | 0.010704 | 0.849041 | 0.034759 | 0.978429 | 0.009109 | 0.851583 | 0.058924

TESTING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.977231 | 0.002806 | 0.950159 | 0.012809 | 0.98111 | 0.003381 | 0.876928 | 0.021674
SCC | 0.970211 | 0.003376 | 0.883554 | 0.021992 | 0.982757 | 0.003779 | 0.880785 | 0.024607
MEL | 0.962183 | 0.00427 | 0.867746 | 0.025005 | 0.975543 | 0.003754 | 0.83374 | 0.023414
AK | 0.970652 | 0.003369 | 0.869158 | 0.015176 | 0.985198 | 0.003215 | 0.893751 | 0.021674
BKL | 0.966769 | 0.003754 | 0.882164 | 0.019933 | 0.978868 | 0.003254 | 0.856108 | 0.020597
DF | 0.972883 | 0.003797 | 0.859058 | 0.019839 | 0.989167 | 0.002575 | 0.919144 | 0.017968
NV | 0.960417 | 0.003748 | 0.849669 | 0.027438 | 0.976155 | 0.004667 | 0.834863 | 0.027576
VASC | 0.986458 | 0.002753 | 0.907463 | 0.018557 | 0.998054 | 0.000889 | 0.985479 | 0.006777
Mean ± 1 SD | 0.97085 | 0.008363 | 0.883622 | 0.032105 | 0.983357 | 0.00748 | 0.8851 | 0.049853
Table 10. Results for weighted k-NN.

TRAINING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.99101 | 0.000853 | 0.966399 | 0.004479 | 0.99452 | 0.000799 | 0.961846 | 0.005073
SCC | 0.990489 | 0.000854 | 0.973897 | 0.005359 | 0.992855 | 0.000691 | 0.951129 | 0.004412
MEL | 0.98292 | 0.001179 | 0.929858 | 0.006006 | 0.990501 | 0.000993 | 0.933329 | 0.006406
AK | 0.990648 | 0.000956 | 0.961649 | 0.00594 | 0.994779 | 0.000829 | 0.963376 | 0.005514
BKL | 0.987704 | 0.001124 | 0.94218 | 0.007959 | 0.994208 | 0.000832 | 0.958831 | 0.005584
DF | 0.991358 | 0.000866 | 0.961126 | 0.005819 | 0.995669 | 0.000647 | 0.969407 | 0.004409
NV | 0.978637 | 0.001198 | 0.930944 | 0.008927 | 0.985455 | 0.001282 | 0.901657 | 0.006904
VASC | 0.994882 | 0.000687 | 0.964408 | 0.004778 | 0.999236 | 0.000312 | 0.994494 | 0.002249
Mean ± 1 SD | 0.988456 | 0.005248 | 0.953808 | 0.016992 | 0.993403 | 0.00405 | 0.954259 | 0.02732

TESTING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.995143 | 0.001416 | 0.980286 | 0.006849 | 0.997283 | 0.001247 | 0.980993 | 0.008724
SCC | 0.995709 | 0.001617 | 0.989698 | 0.008021 | 0.996571 | 0.001178 | 0.976347 | 0.008376
MEL | 0.989232 | 0.002362 | 0.953552 | 0.012095 | 0.994358 | 0.001925 | 0.960311 | 0.013101
AK | 0.995822 | 0.001604 | 0.982016 | 0.008189 | 0.997812 | 0.001377 | 0.984743 | 0.009674
BKL | 0.993795 | 0.002086 | 0.967452 | 0.010927 | 0.997569 | 0.001363 | 0.982643 | 0.009614
DF | 0.995958 | 0.001499 | 0.984228 | 0.007814 | 0.997658 | 0.001094 | 0.983658 | 0.007641
NV | 0.985892 | 0.002686 | 0.957146 | 0.013847 | 0.989976 | 0.00227 | 0.931492 | 0.013995
VASC | 0.997543 | 0.001159 | 0.982462 | 0.007462 | 0.99969 | 0.000459 | 0.997759 | 0.003378
Mean ± 1 SD | 0.993637 | 0.003989 | 0.974605 | 0.013462 | 0.996365 | 0.002976 | 0.974743 | 0.020327
Table 11. Results for bagged trees.

TRAINING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.998053 | 0.000557 | 0.991325 | 0.003305 | 0.999013 | 0.000478 | 0.993097 | 0.003319
SCC | 0.996912 | 0.000734 | 0.985676 | 0.004545 | 0.998515 | 0.000531 | 0.989579 | 0.003716
MEL | 0.997727 | 0.000607 | 0.990838 | 0.002973 | 0.998709 | 0.000528 | 0.990981 | 0.003639
AK | 0.996932 | 0.000557 | 0.991146 | 0.00306 | 0.997758 | 0.00052 | 0.984448 | 0.003544
BKL | 0.995689 | 0.00073 | 0.98531 | 0.004189 | 0.997167 | 0.00062 | 0.980236 | 0.004258
DF | 0.997554 | 0.000714 | 0.987148 | 0.003966 | 0.999039 | 0.000378 | 0.993235 | 0.002618
NV | 0.997815 | 0.000605 | 0.991293 | 0.003389 | 0.998745 | 0.000487 | 0.991221 | 0.003352
VASC | 0.99912 | 0.000359 | 0.996388 | 0.001829 | 0.999511 | 0.000323 | 0.996589 | 0.00225
Mean ± 1 SD | 0.997475 | 0.001002 | 0.98989 | 0.003684 | 0.998557 | 0.000754 | 0.989923 | 0.00524

TESTING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.999241 | 0.000712 | 0.996774 | 0.003976 | 0.999586 | 0.000548 | 0.997078 | 0.00388
SCC | 0.998992 | 0.001031 | 0.994303 | 0.007762 | 0.999665 | 0.000483 | 0.997583 | 0.003562
MEL | 0.999479 | 0.000826 | 0.998582 | 0.002601 | 0.999611 | 0.000737 | 0.997366 | 0.004978
AK | 0.999253 | 0.000693 | 0.997617 | 0.003347 | 0.999483 | 0.00057 | 0.996359 | 0.003996
BKL | 0.998471 | 0.001062 | 0.996698 | 0.003667 | 0.998732 | 0.001071 | 0.991193 | 0.00753
DF | 0.999377 | 0.000606 | 0.99615 | 0.003906 | 0.999844 | 0.000283 | 0.998946 | 0.001909
NV | 0.999298 | 0.000719 | 0.997054 | 0.004332 | 0.999625 | 0.000591 | 0.997368 | 0.004074
VASC | 0.999841 | 0.000365 | 0.998721 | 0.002971 | 1 | 0 | 1 | 0
Mean ± 1 SD | 0.999244 | 0.000395 | 0.996987 | 0.001413 | 0.999568 | 0.000375 | 0.996987 | 0.002606
Table 12. Results for subspace k-NN.

TRAINING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.999462 | 0.000296 | 0.997958 | 0.001851 | 0.999677 | 0.000245 | 0.997737 | 0.0017
SCC | 0.999038 | 0.000615 | 0.995829 | 0.002757 | 0.999496 | 0.000445 | 0.99646 | 0.003122
MEL | 0.998709 | 0.000469 | 0.995324 | 0.003062 | 0.999194 | 0.000339 | 0.994398 | 0.002311
AK | 0.998958 | 0.000546 | 0.995273 | 0.003405 | 0.999485 | 0.000285 | 0.996408 | 0.00199
BKL | 0.999128 | 0.000355 | 0.997912 | 0.001905 | 0.999301 | 0.00039 | 0.995139 | 0.0027
DF | 0.999176 | 0.000554 | 0.99656 | 0.002521 | 0.99955 | 0.000403 | 0.996853 | 0.002832
NV | 0.998814 | 0.000494 | 0.99408 | 0.003025 | 0.999486 | 0.000315 | 0.996381 | 0.002206
VASC | 0.999853 | 0.00013 | 0.999591 | 0.000708 | 0.99989 | 0.000134 | 0.999234 | 0.000934
Mean ± 1 SD | 0.999142 | 0.000368 | 0.996566 | 0.001806 | 0.99951 | 0.000213 | 0.996576 | 0.001482

TESTING SET
Lesion | Accuracy mean | ±1 SD | Sensitivity mean | ±1 SD | Specificity mean | ±1 SD | Precision mean | ±1 SD
BCC | 0.999955 | 0.000248 | 1 | 0 | 0.999948 | 0.000284 | 0.999642 | 0.001963
SCC | 0.999774 | 0.000515 | 0.998957 | 0.003181 | 0.999896 | 0.000394 | 0.999284 | 0.002726
MEL | 0.999909 | 0.000345 | 1 | 0 | 0.999897 | 0.000393 | 0.999268 | 0.002791
AK | 0.999909 | 0.000345 | 0.999643 | 0.001958 | 0.999948 | 0.000283 | 0.999637 | 0.00199
BKL | 0.999864 | 0.000415 | 1 | 0 | 0.999845 | 0.000473 | 0.998915 | 0.003311
DF | 0.999864 | 0.000415 | 0.998952 | 0.0032 | 1 | 0 | 1 | 0
NV | 0.999909 | 0.000345 | 0.999282 | 0.002732 | 1 | 0 | 1 | 0
VASC | 1 | 0 | 1 | 0 | 1 | 0 | 1 | 0
Mean ± 1 SD | 0.999898 | 6.74 × 10⁻⁵ | 0.999604 | 0.000475 | 0.999942 | 5.82 × 10⁻⁵ | 0.999593 | 0.000407
Table 13. Literature review. Works that implement machine learning techniques for skin lesion classification.

Reference | Classification Method | Dataset | Number of Images | Classes | Balance Strategy | Classified Type | Train–Test (%) | Training Results Reported | Test Accuracy (%) | Test Sensitivity (%) | Test Specificity (%) | Test Precision (%)
Ballerini (2012) [40] | k-Nearest Neighbor | Not reported | 960 | 5 | No | Multiclass | Not reported | Yes | 74.30 | Not reported | Not reported | Not reported
Ozkan and Koklu (2017) [41] | Support Vector Machine; k-Nearest Neighbor; Artificial Neural Network; Decision Tree | PH2 | 200 | 3 | No | Multiclass | Not reported | No | 92.50 | 90.86 | 96.11 | 90.45
Nasir et al. (2018) [42] | Support Vector Machine | PH2 | 200 | 2 | No | Binary | 50–50 | No | 97.5 | 97.7 | 96.7 | Not reported
Chatterjee et al. (2019) [43] | Support Vector Machine | Different sources | 6579 | 3 | Yes | Binary | 80–20 | No | By class: MEL = 98.99; NV = 97.54; BCC = 99.65 | By class: MEL = 98.28; NV = 91.07; BCC = 100 | By class: MEL = 98.48; NV = 99.39; BCC = 99.63 | Not reported
Fisher et al. (2019) [44] | k-Nearest Neighbor; Artificial Neural Network; Decision Tree | Edinburgh DERMOFIT | 1300 | 10 | No | Binary and Multiclass | Not reported | No | Binary = 91.30; 8-class = 87.10; 10-class = 80.10 | Not reported | Not reported | Not reported
Molina-Molina et al. (2020) [45] | Ensemble Classifiers | ISIC2019 | 25,331 | 8 | No | Multiclass | 70–30 | Yes | By class: AK = 98.64; BCC = 96.55; SCC = 98.54; BKL = 97.24; MEL = 95.76; NEV = 93.44; DF = 99.24; VASC = 99.37; mean ± SD = 97.35 ± 2.04 | By class: AK = 82.54; BCC = 80.97; SCC = 93.56; BKL = 92.12; MEL = 96.02; NEV = 89.75; DF = 97.92; VASC = 100; mean ± SD = 91.61 ± 6.89 | By class: AK = 76.36; BCC = 96.39; SCC = 43.95; BKL = 80.22; MEL = 79.54; NEV = 98.32; DF = 19.67; VASC = 37.15; mean ± SD = 66.45 ± 29.09 | By class: AK = 99.43; BCC = 96.58; SCC = 99.92; BKL = 99.21; MEL = 99.28; NEV = 88.39; DF = 100; VASC = 100; mean ± SD = 97.85 ± 3.98
Afza, F. (2020) [46] | Support Vector Machine; k-Nearest Neighbor; Ensemble Classifiers; Linear Discriminant | ISBI 2017 | 2750 | 2 | No | Binary | 70–30 | No | 96.20 | 96.00 | 100 | 96.00
Ghalejoogh, G.S. (2020) [47] | Hybrid Approach | PH2; Ganster | 470 | 3 | Yes | Binary and Multiclass | Not reported | No | Binary: PH2 = 98.50, Ganster = 97.78; Multiclass: PH2 = 96.00, Ganster = 93.33 | Binary: PH2 = 97.5, Ganster = 94.29; Multiclass: PH2 = 94.00, Ganster = 90.00 | Binary: PH2 = 98.75, Ganster = 97.00; Multiclass: PH2 = 97.00, Ganster = 95.00 | Not reported
Moldovanu et al. (2021) [48] | k-Nearest Neighbor; Radial Basis Function Neural Network | 7-Point; Med-Node; Pedro Hispano Hospital (PH2) | 655 | 2 | No | Binary | Not reported | No | 7-Point = 80.77; Med-Node = 94.71; PH2 = 94.88 | 7-Point = 98.01; Med-Node = 96.42; PH2 = 1.00 | Not reported | 7-Point = 94.44; Med-Node = 88.61; PH2 = 85.62
Afza et al. (2022) [20] | Support Vector Machine; k-Nearest Neighbor; Ensemble Classifiers; Decision Tree; Naïve Bayes; Single Hidden Layer Extreme Learning Machine | HAM10000; ISIC2018 | 10,015; 12,500 | 7 | Yes | Multiclass | 50–50 | No | HAM10000 = 93.40; ISIC2018 = 94.36 | Not reported | Not reported | HAM10000 = 93.10; ISIC2018 = 94.80
Shetty et al. (2022) [49] | Support Vector Machine; k-Nearest Neighbor; Decision Tree; Logistic Regression; Naïve Bayes; Linear Discriminant; Convolutional Neural Network | HAM10000 | 10,015 | 7 | Yes | Multiclass | 80–20 | No | 94.00 | 85.00 | Not reported | 88.00
Mohanty et al. (2022) [50] | Support Vector Machine; Decision Tree; Multi-Layer Perceptron; Naïve Bayes | HAM10000; BCN20000 | 10,000; 19,424 | 7 | No | Multiclass | Not reported | No | HAM10000 = 97.79; BCN20000 = 97.79 | HAM10000 = 94.99; BCN20000 = 95.39 | Not reported | HAM10000 = 95.24; BCN20000 = 96.24
Camacho-Gutiérrez et al. (2022) [51] | Support Vector Machine; k-Nearest Neighbor; Linear Discriminant; Ensemble Classifiers | ISIC2019 | 25,331 | 8 | No | Multiclass | 80–20 | Yes | 87.00 | 64.00 | 90.00 | 1.00
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


