Classification of Dermoscopy Skin Lesion Color-Images Using Fractal-Deep Learning Features

Featured Application: Detection of skin diseases is one of today's priority tasks worldwide. Computer-aided diagnosis is a promising tool for prevention (diagnosis).

Abstract: The detection of skin diseases is becoming one of the priority tasks worldwide due to the increasing incidence of skin cancer. Computer-aided diagnosis is a helpful tool to assist dermatologists in the detection of these kinds of illnesses. This work proposes a computer-aided diagnosis based on 1D fractal signatures of texture-based features combined with deep-learning features obtained via transfer learning with DenseNet-201. The proposal builds three 1D fractal signatures per color image. The energy, variance, and entropy of the fractal signatures are combined with 100 features extracted from DenseNet-201 to construct the feature vector. Because the classes in skin lesion image datasets are commonly imbalanced, we use an ensemble of classifiers: K-nearest neighbors and two types of support vector machines. The computer-aided diagnosis output is determined by a linear plurality vote. In this work, we obtained an average accuracy of 97.35%, an average precision of 91.61%, an average sensitivity of 66.45%, and an average specificity of 97.85% on the eight-class classification task of the International Skin Imaging Collaboration (ISIC) archive-2019.


Introduction
Due to the changes in the environmental conditions in which we currently live, more people suffer from some form of cancer [1]. For example, ultraviolet (UV) radiation is one of the primary risk factors for skin cancer. In 1975, Fitzpatrick proposed a scale from I to VI describing how each skin type reacts to UV [1]. Type I comprises very fair skins (more susceptible to developing some form of skin cancer), and Type VI comprises intensely pigmented dark brown skins (less affected). Hence, this kind of illness is most frequent in countries with a predominantly fair-skinned population. Nowadays, in Ireland, skin cancer diagnoses exceed 11,000 cases per year. Most people living in Ireland have fair skin, that is, skin type I or II on the Fitzpatrick scale. Ireland's government, concerned about the rising incidence of skin cancer, prioritizes in its National Cancer Strategy (2019-2022) the need to implement a national skin cancer prevention plan [2]. In the USA, skin cancer diagnoses surpass five million cases annually [3]. Globally, in 2010, skin diseases were the fourth leading cause of nonfatal illness, causing economic losses due to disability [4].
Abnormal growth of skin cells can develop into skin cancer. One of the first warning signs is that skin lesions, such as new moles, bumps, sores, scales, or dark spots, grow and do not go away.
Skin cancer is mainly divided into two categories: melanoma and non-melanoma (basal cell carcinoma and squamous cell carcinoma) [5]. Dermatologists employ the ABCDE criteria in the diagnosis of skin lesions: asymmetry (A), border (B), color (C), differential structure or diameter (D), and evolving (E) [6]. For the asymmetry feature, the dermatologist checks whether the lesion is uneven. The border characteristic measures the raggedness of the edges. The color analysis determines whether the spot has an unusual coloration. The diameter parameter measures whether the spot is larger than one-quarter inch. The evolving feature indicates whether the lesion is changing in size, color, or shape. Therefore, the dermatologist analyzes the form, appearance, and color of the mole or spot. Detection of skin diseases is one of today's priority tasks worldwide. Characterizing the role that textures play in skin diseases will help us develop more robust computer-aided diagnosis methods.
In recent years, neural networks have once again boomed in a wide variety of applications. By obtaining satisfactory results, they have become a promising tool for the classification of skin lesions. CNNs are based on the convolution of the image with different filters, offering extensive and varied information. In their first layers, CNNs capture line and edge features, while in the deeper layers they find more specific details in the images. It is well known that the convolution operation is not invariant to geometric transformations such as translation, rotation, and scale [7]. That is why neural networks require a large number of training images to cover all the variations an image can present, hence their high computational cost. In addition, images of skin lesions can have low contrast between the lesion and healthy skin, which can interfere with the correct segmentation of the lesions [8]. That can lead to the loss of valuable information that could be used for proper classification of the injury. Due to these aspects, this work proposes to complement the DenseNet-201 network's information with the compact information obtained from fractal signatures, which extract global and local information from the images considering the three color bands, R, G, and B.
Texture-based feature extraction techniques have been applied to myriad problems, achieving excellent results in multi-class classification [31,32]. However, the skin lesion classification task is so complex that most proposals address the two-class classification problem [33–39]. Because melanoma is a deadly cancer, most of those works focus on identifying it. Few works present systems that classify more than two types of skin lesions [40–45], like Wu et al. [42], who used five convolutional neural networks to classify face skin lesions in the Xiangya-Derm database. They selected 2656 face images of seborrheic keratosis, actinic keratosis, rosacea, lupus erythematosus, basal cell carcinoma, and squamous cell carcinoma. The Inception-ResNet-v2 network yielded the best results: a recall (or sensitivity) of 67.2% and a precision of 63.7%.

Background
The Fractal Signature of Texture Images

The fractal term arose from the works of Benoit Mandelbrot when he studied irregular geometric structures repeated at different scales [46]. Fractal objects have irregular shapes whose dimension cannot be determined by standard measures [47]. The fractal dimension pinpoints how a fractal object fills space as the measurement unit is refined. Generally, the fractal dimension is a fractional number instead of an integer, whereas the point, the straight line, the square, and the cube have dimensions zero, one, two, and three. Skin lesions present irregular borders, so it is natural to measure their size with the fractal dimension. Moreover, Wahba et al. [41] recommended including the fractal dimension in the ABCDE rule.
Recently, Backes et al. [48] proposed a texture signature based on the volumetric fractal dimension (Bouligand-Minkowski descriptor) to identify plants. On the other hand, Florindo et al. [49] used the Bouligand-Minkowski descriptors to identify histological images of odontogenic keratocyst, a type of jaw cyst. In this work, we propose to use fractal signatures obtained from triangular prisms. Let I(x, y) represent a gray-scale image of size M × N. The fractal signature of I(x, y) is built as

F(δ) = α Σ_(x,y) A(x, y, ε), δ = 1, 2, …, ⌊log₂(min(M, N))⌋, (1)

where A(x, y, ε) is the addition of the four triangles' area-values of the prism in Figure 1, and α is a weight term for the sum of the areas of the faces. The triangular prism, with a square base of length ε + 1, is obtained from the vertices a₁ = (x, y, I(x, y)), a₂ = (x + ε, y, I(x + ε, y)), a₃ = (x, y + ε, I(x, y + ε)), a₄ = (x + ε, y + ε, I(x + ε, y + ε)), and a₅ = (x + ε/2, y + ε/2, z), with z = [I(x, y) + I(x + ε, y) + I(x, y + ε) + I(x + ε, y + ε)]/4.
The addition of the areas of the triangular faces of all the prisms built with ε = 2, for all (x, y) of the image, corresponds to the first entry of the fractal signature F, that is, F(1). The next square-base size is ε = 4, which gives F(2), and so on. The scalar ε is an even value of the form ε = 2^δ.
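As a concrete sketch, the triangular-prism construction above can be implemented as follows. This is an illustrative NumPy version, not the authors' MatLab code; in particular, how the weight α enters the sum (here, as a plain multiplier on each scale's total area) and the non-overlapping tiling of the prisms are assumptions based on the description above.

```python
import numpy as np

def triangle_area(a, b, c):
    """Area of the 3D triangle with vertices a, b, c (cross-product formula)."""
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

def fractal_signature(channel, alpha=1.0):
    """1D fractal signature of one image channel by the triangular-prism
    method: for each base size eps = 2**delta, sum the areas of the four
    triangular faces of every prism. `alpha` weights each scale's total
    (an assumption about how the paper applies alpha)."""
    I = channel.astype(float)
    M, N = I.shape
    n = int(np.floor(np.log2(min(M, N))))
    sig = np.zeros(n)
    for delta in range(1, n + 1):
        eps = 2 ** delta
        total = 0.0
        for x in range(0, M - eps, eps):        # tile the image with prisms
            for y in range(0, N - eps, eps):
                a1 = np.array([x, y, I[x, y]])
                a2 = np.array([x + eps, y, I[x + eps, y]])
                a3 = np.array([x, y + eps, I[x, y + eps]])
                a4 = np.array([x + eps, y + eps, I[x + eps, y + eps]])
                z = (a1[2] + a2[2] + a3[2] + a4[2]) / 4.0
                a5 = np.array([x + eps / 2, y + eps / 2, z])
                # the four triangular faces share the apex a5
                total += (triangle_area(a1, a2, a5) + triangle_area(a2, a4, a5)
                          + triangle_area(a4, a3, a5) + triangle_area(a1, a3, a5))
        sig[delta - 1] = alpha * total
    return sig
```

For a flat (constant) image every prism degenerates to its square base, so each signature entry is simply the number of prisms at that scale times ε²; rougher textures raise the face areas and hence the signature values.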
Most computer-aided diagnoses try to reproduce what a dermatologist would do. The dermatologist gives a score to each of the segmented skin lesion's ABCDE features [6]. The final result is the addition of the scores given. Based on a threshold value applied to the final result, the lesion is determined to be a disease or not.
Commonly, computer-aided methodologies comprise four steps: segmentation, feature extraction, feature scoring, and classification [34]. An adequate segmentation must completely separate the healthy region from the diseased skin; however, in most of these images, it is not possible to obtain a proper segmentation. These kinds of images often lack well-defined edges. In some cases, they present low contrast between the healthy and diseased skin regions. That complicates the segmentation process and causes wrong segmentation of the lesion. In References [50,51], different state-of-the-art segmentation algorithms were tested, which in many cases could not correctly discriminate healthy skin from diseased skin, mainly in the images of actinic keratosis and basal cell carcinoma. That caused significant sections of the lesions not to be considered. With fractal signatures, we work with the whole image.

The Fractal Signature Features
Since we are working with images of different sizes, the fractal signatures are of different lengths. To avoid uneven signature lengths, we work with the energy E_k, the variance σ²_k, and the entropy H_k of the signatures [52], computed as

E_k = Σ_{i=1}^{n} F_k(i)², σ²_k = (1/n) Σ_{i=1}^{n} (F_k(i) − F̄_k)², H_k = −Σ_{i=1}^{n} p(i) log₂ p(i),

where p(i) = F_k(i)/Σ_j F_k(j) is the normalized signature, F̄_k is the mean of F_k, k = R, G, B, and n is the length of signature F_k, Equation (1).
Because the color of the lesion is one of the features in the ABCDE rule, it is imperative to preserve the color information of the images. Hence, the energy, variance, and entropy of the three signatures of the image, one signature per RGB color channel, compose the image's fractal feature vector S_F = [E_R, σ²_R, H_R, E_G, σ²_G, H_G, E_B, σ²_B, H_B].
As the fractal signatures have different lengths, to obtain feature vectors of the same length we decided to use the following statistical features: energy, variance, and entropy. The energy measures the signature's strength. The variance measures the spread or dispersion of the signal around its mean value, and the entropy is the expected information content of the signal [52]. These three features, obtained from each of the three 1D fractal signatures, were added to the 100 deep features extracted from the DenseNet-201. If the signal is viewed as a histogram, more statistical features could be explored, such as skewness, kurtosis, mode, interquartile range, and percentiles [21]. In addition, the statistical features obtained from grey level co-occurrence matrices (GLCMs, 2D signals) could be adapted to 1D signals [9,11,21]. That will be studied in the next stage of this work.
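A minimal sketch of these three statistics, applied per channel signature and concatenated into the nine-element fractal feature vector; the entropy here is computed on the signature normalized to sum to one, which is an assumption (the exact form in [52] may differ):

```python
import numpy as np

def signature_features(F):
    """Energy, variance, and entropy of a 1D fractal signature F."""
    F = np.asarray(F, dtype=float)
    energy = np.sum(F ** 2)                 # signal strength
    variance = np.var(F)                    # spread around the mean of F
    p = np.abs(F) / np.sum(np.abs(F))       # normalize to a distribution (assumption)
    p = p[p > 0]                            # ignore zero bins for log2
    entropy = -np.sum(p * np.log2(p))
    return energy, variance, entropy

def fractal_feature_vector(signatures):
    """Concatenate the statistics of the R, G, B signatures into S_F (length 9)."""
    return np.concatenate([signature_features(s) for s in signatures])
```

Regardless of the signature lengths, the output is always a nine-element vector, which is what makes images of different sizes comparable.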

The Densenet Features
This proposal consists of a hybrid methodology that combines the features from the fractal signatures with those obtained from the DenseNet-201 convolutional neural network. Like other neural networks, DenseNets capture line and edge features in their first layers, while in the deeper layers they find more specific details in the images [53]. DenseNet-201 was selected because it demonstrated better handling of smooth boundaries, such as those in skin lesion images [53]. DenseNets connect all layers in a feed-forward fashion, Figure 3a. Hence, each layer gets additional input from all previous layers and passes its feature maps to subsequent layers. Instead of summing the features before they move into a layer, the DenseNet concatenates them. That results in a denser connectivity pattern in each of the layers, Figure 3b. The number of additional channels in each layer is called the growth rate; we use 32.
For each composition layer, pre-activation Batch Norm, ReLU (rectified linear units), and a 3 × 3 convolution (with stride 1) are applied in each channel; these operations generate a new output feature map, Figure 3c. The Batch Norm layer normalizes the activations by subtracting the mini-batch mean and dividing by the mini-batch standard deviation, which stabilizes the distribution of the values passed to the next layer. The ReLU activation function gives the CNN its non-linearity. It is widely used because it handles the vanishing gradient problem properly. In addition, it allows the network to be trained with higher computational efficiency [8]. The Rectified Linear Unit (ReLU) layer is defined as ReLU(x) = max(0, x). The DenseNet uses multiple dense blocks with transition layers, Figure 4. First, it uses a convolution layer of size 7 × 7 with stride 2, followed by a max-pooling layer of 3 × 3 with stride 2. Between two contiguous dense blocks, it employs a convolution layer of size 1 × 1 with stride 1 followed by an average-pooling layer of size 2 × 2 with stride 2. This proposal uses four dense blocks. The DenseNet-201 requires resizing the input images to 224 × 224 pixels per channel. We use the MatLab 2019b function augmentedImageDatastore, which utilizes the scale affine transformation [54]. We selected the bilinear interpolation operation because, among the nearest-neighbor, bilinear, and bicubic methods, it is the one that provides satisfactory results without spending as much time as the bicubic method [54]. The output of the DenseNet-201 is a vector of length 100, called the 1D CNN feature vector and denoted by S_CNN.
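The bilinear resizing step can be sketched as follows. This is a plain NumPy illustration of the interpolation scheme, not MatLab's augmentedImageDatastore itself, and the align-corners style sampling grid is an assumption:

```python
import numpy as np

def bilinear_resize(img, out_h=224, out_w=224):
    """Resize a 2D channel to out_h x out_w with bilinear interpolation."""
    in_h, in_w = img.shape
    # sample positions in the source grid (align-corners style mapping)
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]                  # fractional row weights
    wx = (xs - x0)[None, :]                  # fractional column weights
    img = img.astype(float)
    # interpolate along x on the two bracketing rows, then along y
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```

Each RGB channel would be resized independently before being fed to the network.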

K-Nearest Neighbor
This is a non-probabilistic classifier [56]. It is popular due to its simplicity and excellent performance. For this type of classifier, we have N training vectors (x_j, C_i), where x_j is the concatenation of the fractal feature vector S_F and the CNN feature vector S_CNN, j = 1, 2, …, N, and {C_1, C_2, C_3, C_4, C_5, C_6, C_7, C_8} is the finite set of skin lesion classes. To determine the class of an unknown feature vector x, we find the K points that are closest to x. The majority class amongst these neighbors is the class assigned to x. In this work, we use K = 5, and we call it KNN-5. The Euclidean distance is used to compute the distance between the training data x_j and the unknown point x, given by d(x, x_j) = ‖x − x_j‖, where j = 1, 2, …, N, Figure 5.
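A compact sketch of this neighbor-voting rule (illustrative, not the authors' implementation):

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=5):
    """Classify feature vector x by the majority class among its k nearest
    training vectors under the Euclidean distance (KNN-5 uses k = 5)."""
    d = np.linalg.norm(np.asarray(X_train, dtype=float) - np.asarray(x, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]              # indices of the k closest points
    votes = Counter(np.asarray(y_train)[nearest])
    return votes.most_common(1)[0][0]        # majority class
```

Here `x` would be the concatenation [S_F, S_CNN] of a test image's fractal and CNN features.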

Support Vector Machines
This is a non-probabilistic classifier. Support vector machines (SVM) are based on linear training machines with margins [56,57]. The linear discriminant function has the general form g(x) = wᵀx + w₀, (10) where w is a weight vector and w₀ is the bias or threshold weight.
In the SVM methodology, each pattern x_j = [S_F, S_CNN] is transformed by y_j = ϕ(x_j), (11) where j = 1, 2, …, N. With an appropriate nonlinear mapping ϕ, the data can always be separated by a hyperplane, Figure 6. For each of these N patterns, let z_j = ±1 denote its class label. A linear discriminant in the augmented y space is given by g(y) = aᵀy, (12) where both the weight vector and the transformed pattern vector are augmented (compare it with Equation (10)). The first step in training an SVM is to choose the nonlinear function ϕ. The goal is to minimize the magnitude of the weight vector. By the Kuhn-Tucker construction, the problem is rewritten as maximizing the margin of the hyperplane. Let K be an M × M matrix with elements K(i, j) = ϕ(x_i)ᵀϕ(x_j), (13) using Equation (11). Equation (13) does not require knowing the mapping ϕ(x_i); it is enough to know the associated kernel K(i, j). Two of the most common kernels are the linear kernel (SVM-L), K(i, j) = x_iᵀx_j, and the Gaussian kernel (SVM-G), K(i, j) = exp(−‖x_i − x_j‖²/(2σ²)), where σ is the standard deviation.
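The two kernel matrices can be computed directly from the patterns, without ever forming ϕ(x); the 1/(2σ²) scaling in the Gaussian kernel follows the usual convention and is an assumption here:

```python
import numpy as np

def linear_kernel(X):
    """SVM-L: K(i, j) = x_i . x_j for all pattern pairs."""
    X = np.asarray(X, dtype=float)
    return X @ X.T

def gaussian_kernel(X, sigma=1.0):
    """SVM-G: K(i, j) = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    X = np.asarray(X, dtype=float)
    sq = np.sum(X ** 2, axis=1)
    # squared pairwise distances via the expansion ||a-b||^2 = ||a||^2+||b||^2-2ab
    d2 = sq[:, None] + sq[None, :] - 2 * (X @ X.T)
    return np.exp(-np.maximum(d2, 0.0) / (2 * sigma ** 2))
```

The diagonal of the Gaussian kernel matrix is always one, since the distance of a pattern to itself is zero.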

The Ensemble Classifier
Pattern recognition methodologies perform efficiently on databases with balanced classes. Today, many applications, particularly medical ones, have an imbalanced class distribution. That causes the classifier to skew towards the majority class. Working with databases with imbalanced classes is one of the significant challenges in developing computer-aided diagnosis tools. One of the commonly used techniques to rebalance the class distribution is resampling, such as subsampling, oversampling, or a hybrid of the two. However, subsampling may delete relevant information, while oversampling adds information by using synthetic data. Recently, ensembles of multiple classifiers have been utilized for working with imbalanced classes [58–64]. In this proposal, we use an ensemble of three classifiers with a linear plurality vote. A skin lesion image is assigned to one of the eight classes if two or three of the classifiers predict that it belongs to that class. Figure 7 shows a block diagram of the proposed computer-aided diagnosis methodology.
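A minimal sketch of the linear plurality vote over the three classifier outputs (KNN-5, SVM-L, SVM-G). How a three-way tie is resolved is not stated in the text, so the fallback to the last classifier's prediction here is an assumption:

```python
from collections import Counter

def plurality_vote(predictions):
    """Combine the class predictions of the three classifiers.
    A class wins when at least two classifiers agree; if all three
    disagree, fall back to the last prediction (assumed tie-break)."""
    counts = Counter(predictions)
    label, votes = counts.most_common(1)[0]
    if votes >= 2:
        return label
    return predictions[-1]
```

For example, `plurality_vote(["MEL", "MEL", "NEV"])` yields the majority class even though the classifiers disagree.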

The Database
The proposed computer-aided diagnosis methodology was tested with the International Skin Imaging Collaboration (ISIC) 2019 dataset [65,66]. This dataset consists of high-quality skin lesion color images of different sizes. The dataset comprises 25,331 images in eight different classes, including identification labels of the lesions and metadata associated with these lesions, such as the age, anatomical site, and sex of the patient. The lesion classes are actinic keratosis (AK), basal cell carcinoma (BCC), squamous cell carcinoma (SCC), benign keratosis (BKL), melanoma (MEL), melanocytic nevus (NEV), dermatofibroma (DER), and vascular lesion (VASC). Table 1 shows the distribution of images per class in the dataset.

Evaluation Metrics
The computer-aided diagnosis results had four possibilities: true positive classification, indicated by TP; false positive classification, indicated by FP; false negative classification, indicated by FN; and true negative classification, indicated by TN. Based on these values, the four measures used to test the robustness of the methodology were

accuracy = (TP + TN)/(TP + TN + FP + FN), (18)
precision = TP/(TP + FP), (19)
sensitivity = TP/(TP + FN), (20)
specificity = TN/(TN + FP). (21)

The accuracy was the proportion of the total cases classified correctly as positive and negative, TP + TN, among all possibilities presented (TP + TN + FP + FN); this was a measure of bias. The precision was the conditional probability that measures the correct classifications TP among the classifications indicated as positive, TP + FP; this was a measure of the spread. The sensitivity, or recall, was the conditional probability that measures the proportion of correct classifications (TP) among all cases that should have been classified as positive (TP + FN). The specificity was the conditional probability that measures the proportion of true negative classifications (TN) among the total cases that were truly negative (TN + FP) [52].
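These four measures follow directly from the per-class counts; a small helper, guarding against empty denominators, might look like:

```python
def performance_metrics(tp, fp, fn, tn):
    """Per-class accuracy, precision, sensitivity (recall), and specificity
    from the confusion-matrix counts. Denominators of zero return 0.0."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return accuracy, precision, sensitivity, specificity
```

Averaging these four values across the eight classes gives the mean ± standard deviation figures reported in the experiments.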

Results
The proposed methodology was implemented in MatLab 2019b on an HP computer with an Intel Core i5 and 12 GB of RAM. A total of four experiments were done. The ISIC archive-2019 has a significant imbalance between the majority and minority classes. To handle the imbalanced classes, we used the ensemble method with three classifiers and a linear plurality vote. In Exp-1 and Exp-2, the performance metrics reported are for the dataset without splitting. In Exp-3 and Exp-4, the dataset was randomly sampled five times with replacement, with 70% for the training part and 30% for the test. Since the original set was sampled with replacement, some elements were repeated in the training sets and others were not present. With the five sets, we trained the classifiers and thus obtained different predictions that helped us reach a fair classification of the images. In the first experiment (Exp-1), the complete dataset was used: the 25,331 dermoscopy color images of the ISIC archive-2019. The α parameter in the fractal signature was varied from α = −2 to 2 with stride 0.1. The best performance was obtained with α = −2.0. Table 2 displays the confusion matrix output for Exp-1 with α = −2.0. The ISIC 2019 library has 867 AK, 3323 BCC, 628 SCC, 2624 BKL, 4522 MEL, 12,875 NEV, 239 DER, and 253 VASC images. The computer-aided diagnosis response was 802 AK, 3956 BCC, 295 SCC, 2285 BKL, 3746 MEL, 14,105 NEV, 48 DER, and 94 VASC. As expected, the classes with an insufficient number of images were misclassified, like SCC, DER, and VASC. We must always keep in mind the need for a representative dataset D_C for class C that allows us to reproduce or generate the space for that class. For D_C, all the images in C should be expressible as a linear combination of the elements in D_C, which was not happening with the classes SCC, DER, and VASC. Based on the confusion matrix in Table 2, the true positive TP, false positive FP, false negative FN, and true negative TN values were computed, as shown in Table 3. These values were used in Equations (18) to (21) to compute the performance of the computer-aided diagnosis of Exp-1. Table 4 shows the percentage values per class of accuracy, precision, sensitivity, and specificity. As expected, the sensitivity was low due to the poor representation of the SCC, DER, and VASC images. The Exp-1 computer-aided diagnosis yielded a mean ± standard deviation of accuracy = 97.35 ± 2.04%, precision = 91.61 ± 6.89%, sensitivity = 66.45 ± 29.09%, and specificity = 97.85 ± 3.98%.
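The resampling protocol of Exp-3 and Exp-4 can be sketched as below. The description leaves some details open (e.g., whether the test set is exactly 30% or simply the complement of the bootstrap sample), so this version, which takes the never-drawn indices as the test pool, is only one plausible reading:

```python
import numpy as np

def bootstrap_splits(n_images, train_frac=0.7, n_realizations=5, seed=0):
    """Draw five train/test realizations: each training set samples
    train_frac of the image indices WITH replacement (so some images
    repeat and others are absent); the test pool is the set of indices
    never drawn. The exact protocol is an assumption from the text."""
    rng = np.random.default_rng(seed)
    splits = []
    for _ in range(n_realizations):
        train = rng.choice(n_images, size=int(train_frac * n_images), replace=True)
        test = np.setdiff1d(np.arange(n_images), train)   # never-drawn indices
        splits.append((train, test))
    return splits
```

Training the classifiers once per realization and aggregating the five predictions is what yields the mean ± standard deviation figures reported for Exp-3 and Exp-4.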
Experiment 2 (Exp-2) used a dataset of 24,839 images with 867 AK, 3323 BCC, 628 SCC, 2624 BKL, 4522 MEL, and 12,875 NEV. The 239 DER and 253 VASC images were not considered. Again, the α parameter in the fractal signature was varied from α = −2 to 2 with stride 0.1. The best performance was obtained with α = −1.9. For Exp-2, the mean ± standard deviation of the computer-aided diagnosis was accuracy = 96.85 ± 1.71%, precision = 90.12 ± 6.07%, sensitivity = 79.25 ± 19.06%, and specificity = 97.42 ± 3.93%, as shown in Table 5. As observed, when the classes with very few images were not considered, the sensitivity increased and the standard deviation decreased. This confirmed the importance of having a good sampling of each of the classes.
For experiment 3 (Exp-3), the ISIC dataset of 25,331 dermoscopy color images was randomly divided into 70% for the training dataset (17,732 images) and 30% for the test dataset (7599 images). The α parameter in the fractal signature was varied from α = −2 to 2 with stride 0.1. The best performance was obtained with α = 0.4. The mean of five realizations is reported in Table 6. For the training dataset, Exp-3 gave accuracy = 97.31 ± 2.14%, precision = 91.90 ± 6.28%, sensitivity = 64.39 ± 32.93%, and specificity = 97.80 ± 4.15%. For the test dataset, the computer-aided diagnosis obtained accuracy = 92.20 ± 6.74%, precision = 66.30 ± 23.46%, sensitivity = 38.43 ± 31.12%, and specificity = 93.60 ± 12.22%, Table 7. Finally, experiment 4 (Exp-4) did not consider the DER and VASC classes, and the dataset of 24,839 images was randomly divided into 70% for the training dataset (17,387 images) and 30% for the test dataset (7452 images). The best performance was obtained with α = 1.6. Five realizations were executed; for the training dataset, they yielded accuracy = 96.82 ± 1.85%, precision = 90.57 ± 5.19%, sensitivity = 79.35 ± 20.24%, and specificity = 97.39 ± 4.06%, Table 8. For the test dataset, the result was accuracy = 89.95 ± 6.05%, precision = 58.62 ± 17.54%, sensitivity = 47.89 ± 29.30%, and specificity = 91.60 ± 13.62%, Table 9. As observed in the training metrics of Exp-4, when the classes with fewer images were eliminated, the accuracy, precision, and specificity remained in the same range as in Exp-3, but the sensitivity increased. Remember that the training methodologies in Exp-3 and Exp-4 did not see the images in the test dataset; the accuracy and specificity in both experiments showed excellent performance. However, the precision and sensitivity will need to be improved in the next stage of this work. There are several factors to consider that could affect the reliability of computer-aided diagnosis methodologies. The first thing to analyze is the feature extraction stage, to determine whether more elements must be added to the vector or whether there are redundant elements. Afterwards, we must always keep in mind that we are working with imbalanced classes, which implies selecting how to handle them, using subsampling, oversampling, a hybrid of the two, or ensemble techniques, and the type of classifiers used for the ensemble.

Comparison with Other Methodologies
Garnavi et al. [33] developed a computer-aided diagnosis for melanoma detection. They used wavelet-decomposition and geometrical features of the lesion, and the classifiers SVM, random forest, logistic model tree, and naive Bayes. The dataset had 289 dermoscopy images: 114 malignant (M) and 175 benign (B), split into a training set with 40 M and 59 B images, a validation set with 30 M and 57 B images, and a test set with 44 M and 59 B images. They obtained an accuracy of 91.26% and an AUC of 0.937. Barata et al. [34] built two computer-aided diagnoses of dermoscopy images for melanoma detection: one using global features and the other local features. They employed the classifiers KNN, AdaBoost, SVM-RBF, and bag of features (BoF). The dataset had 176 dermoscopy images: 25 MEL and 151 NEV. To handle the class imbalance, they repeated the melanoma features. The global method showed a sensitivity of 96% and a specificity of 80%. The local method presented a sensitivity of 100% and a specificity of 75%.
Shimizu et al. [40] used linear classifiers and a binary strategy. The dataset employed had 968 dermoscopy images. The detection rates reached were 90.48% for MEL, 82.51% for NEV, 82.61% for BCC, and 80.61% for seborrheic keratosis (SK). They employed 10-fold cross-validation to report the performance.
Khan et al. [37] used the grey level co-occurrence matrix (GLCM), local binary patterns (LBP), and color features to produce a binary classifier. The dataset had 146 MEL and 251 NEV digital images. The classifiers used were SVM, KNN, naive Bayes, and decision trees. They reported accuracy = 96%, sensitivity = 97%, specificity = 96%, and precision = 97%. Albahar [38] built a CNN to classify dermoscopy images of melanoma, nevus, seborrheic keratosis, squamous cell carcinoma, basal cell carcinoma, and lentigo. He divided the dataset into three equal parts of 8000 images of benign and malignant categories. A total of 5600 images pertained to the training set and 2400 to the validation set. He obtained AUC = 0.98, accuracy = 97.49%, specificity = 93.6%, and sensitivity = 94.3%.
Marka et al. [39] gave a review of the state-of-the-art in the automated detection of nonmelanoma skin cancer. They reported the computer-aided diagnoses for dermoscopy and non-dermoscopy images. Gessert et al. [44] proposed an ensemble of deep learning models to classify the dermoscopy color images in the ISIC archive-2019 challenge. The methodology worked with cropped and binarized images. In addition, to counter the class imbalance, they performed data augmentation. They reported a specificity of 72.5%, and when the metadata were also used, the specificity increased to 74.2%.
Bajwa et al. [45] presented a computer-assisted diagnosis using a set of four classifiers that reports the average of the predictions. They used DermNet and the ISIC archive-2018. For the training step, the images were cropped, horizontally flipped, and resized (as required by the neural networks; for example, the size of the images must be 224 × 224 pixels or 331 × 331 pixels). They used stratified k-fold cross-validation with k = 5. For the ISIC archive-2018, consisting of seven classes with a total of 23,665 images, they obtained a weighted average precision = 85.02 ± 9.10%, sensitivity = 80.46 ± 9.38%, specificity = 96.57 ± 7.15%, and F1-score = 82.45 ± 8.38% when classifying the 23,665 images.

Discussion
The diagnosis of a skin lesion by a dermatologist remains subjective. The diagnostic accuracy ranges from 64% to 80% [67–70], measured in specialized dermatology centers. Nowadays, most diagnoses are made by physician assistants, whose accuracy is lower than the dermatologist's [71].
In this work, we proposed a computer-aided diagnosis to identify eight classes of skin lesions in dermoscopy color images. The classes are actinic keratosis (AK), basal cell carcinoma (BCC), squamous cell carcinoma (SCC), benign keratosis (BKL), melanoma (MEL), melanocytic nevus (NEV), dermatofibroma (DER), and vascular lesion (VASC). The methodology uses three fractal signatures, one per color in the RGB color space. To handle the difference in the signatures' lengths, we used the energy, variance, and entropy of the fractal signatures. These nine features are concatenated with the 100 features obtained from the DenseNet-201. We use three classifier spaces to construct an ensemble classifier based on the majority vote: K-nearest neighbors (KNN) and support vector machines (SVM) with linear and Gaussian kernels. The computer-aided diagnosis was tested using 25,331 dermoscopy color images from the ISIC archive-2019. Working with a hybrid methodology of three 1D fractal signatures and DenseNet-201 features allows us to strengthen the computer-aided diagnosis. We have more image information when working with the fractal signatures, since we do not have to reduce the image in size as required when working with a neural network. The computer-aided diagnosis presented excellent results, like those obtained with the CNN ensembles by Gessert et al. [44] and Bajwa et al. [45]. Unlike the Gessert et al. and Bajwa et al. methodologies, this proposal does not use artificial image generation or manipulation. Because there are various open-source alternatives for CNNs, the proposed methodology does not require high-performance computer equipment. That makes it viable to reproduce the same results with different programming languages on various platforms. However, we need to note that CNNs are based on the convolution of the image with different filters, which requires a large number of training images to cover all the possible ways an image can be presented. That yields a high computational cost. Furthermore, to properly segment images with low-contrast edges and hair artifacts, such as those of skin lesions, neural networks specialized in the topic are required [8]. Due to these aspects, a viable option to reduce the computational cost is to develop hybrid methodologies of CNNs with handcrafted feature extraction approaches, like this proposal.

Conclusions
We proposed a computer-aided diagnosis methodology for eight classes of dermoscopy color digital images in the ISIC archive-2019. The classes are actinic keratosis, basal cell carcinoma, squamous cell carcinoma, benign keratosis, melanoma, melanocytic nevus, dermatofibroma, and vascular lesion. The methodology utilizes 1D fractal features and DenseNet-201 CNN features. In this work, a hybrid proposal combining a convolutional neural network and a handcrafted feature extraction methodology was chosen to take advantage of both techniques. The aim is to reduce the operational computational cost of using several neural networks. It is well known that a neural network requires a vast training database to achieve excellent performance, since it is based on image convolution with several filters. The reason for this operational computational cost is that the convolution operation is not invariant to geometric transformations such as scale, rotation, and translation. The computer-aided diagnosis presented an average accuracy of 97.35%, an average precision of 91.61%, an average sensitivity of 66.45%, and an average specificity of 97.85%. As the diagnosis depends on the clinical experience of human vision, the computer-aided diagnosis aims to be a tool that helps clinicians eliminate subjectivity as much as possible.

Figure 2b shows the graph of F_R, F_G, F_B, the fractal signatures of the RGB dermoscopy color image of the actinic keratosis of 450 × 600 pixels in Figure 2a. Because min{450, 600} = 450, δ = 1, 2, …, ⌊log₂(450)⌋ = 1, 2, …, ⌊8.8138⌋ = 1, 2, …, 8. Thus, the three fractal signatures have length 8. Figure 2b exhibits three signatures with the same behavior. Due to the scale range of the graphs in Figure 2b, the three signatures look similar, but they differ in value, Figure 2c. The magnitude of the maximum values of the difference

Figure 2 .
Figure 2. Fractal signature examples. (a) Color image of actinic keratosis. (b) F_R, F_G, F_B fractal signatures from the red, green, and blue channels, respectively. (c) Amplified region of the signatures' graphs to see the difference in the values of the three signatures.

Figure 3 .
Figure 3. DenseNet architecture. (a) Block diagram of the concatenation operation, indicated by a C within a yellow circle. (b) Block diagram of the forward propagation. (c) Block diagram of the layer composition.

Figure 4 .
Figure 4. Block diagram of a deep DenseNet. The C within a yellow circle represents the concatenation operation.

Figure 5 .
Figure 5. Sketch of the K-NN classifier space using five neighbors. The red star represents the test point. The black stars, squares, and diamonds are the training points.

Figure 6 .
Figure 6. Sketch of the linear machine with margins. The black star and black circles represent the support vectors.

Figure 7 .
Figure 7. Block diagram of the proposed methodology. The C within a yellow circle represents the concatenation operation.

Table 1 .
International Skin Imaging Collaboration (ISIC) archive-2019, number of images per skin lesion classes.

Table 3 .
Performance of the Exp-1 based on the confusion matrix in Table 2.

Table 4 .
The performance metrics obtained from the values in Table 3 for the Exp-1 with α = −2.0.

Table 7 .
Mean ± SD of five performance metrics for the test dataset of the Exp-3 with α = 0.4. The test dataset has 7599 images.

Table 8 .
Mean ± SD of five performance metrics for the training dataset of the Exp-4 with α = 1.6. The training dataset has 17,387 images.

Table 9 .
Mean ± SD of five performance metrics for the test dataset of the Exp-4 with α = 1.6. The test dataset has 7452 images.