Skin Lesion Extraction Using Multiscale Morphological Local Variance Reconstruction Based Watershed Transform and Fast Fuzzy C-Means Clustering

Abstract: Early identification of melanocytic skin lesions increases the survival rate of skin cancer patients. Automated melanocytic skin lesion extraction from dermoscopic images using computer vision is a challenging task: the lesions present in an image can be of different colors, the contrast near the lesion boundaries may vary, and lesions may have different sizes and shapes. Lesion extraction from dermoscopic images is therefore a fundamental step for automated melanoma identification. In this article, a watershed transform based on the fast fuzzy c-means (FCM) clustering algorithm is proposed for the extraction of melanocytic skin lesions from dermoscopic images. Initially, the proposed method removes the artifacts from the dermoscopic images and enhances the texture regions. The image is then filtered using a Gaussian filter and a local variance filter to enhance the lesion boundary regions. Next, the watershed transform based on MMLVR (multiscale morphological local variance reconstruction) is introduced to acquire the superpixels of the image with accurate boundary regions. Finally, the fast FCM clustering technique is applied to the superpixels of the image to attain the final lesion extraction result. The proposed method is tested on three publicly available skin lesion image datasets, i.e., ISIC 2016, ISIC 2017 and ISIC 2018. Experimental evaluation shows that the proposed method achieves good results.


Introduction
Skin cancer is quite prevalent throughout the world. It affects both males and females of all ages. According to the Skin Cancer Foundation, an estimated 207,390 cases were expected in the U.S. alone in 2021. Melanoma is the most dangerous of all skin cancers because it spreads quickly to other organs of the body. Early detection is a key factor for effective melanoma care. Melanomas have distinct features such as asymmetry, uneven borders, different colors, large size and frequently changing shape and size. These features, known as the ABCDE (asymmetry, border, color, diameter and evolving) rule, help experts identify melanoma during visual inspection. However, it is still challenging for experts to identify melanoma with the naked eye. Therefore, computer-vision-assisted diagnosis systems [1] are used to help experts detect melanocytic lesions accurately and in a timely manner, providing a better path toward diagnosis. Such a system uses dermoscopic images, and the diagnosis process comprises different stages, such as preprocessing, lesion extraction and classification of lesions; it detects melanoma by ignoring the artifacts present in the affected region and segregating the skin lesion accurately from healthy skin.
Lesion extraction is a fundamental step that helps the experts to detect and classify the lesions from the acquired images. Various lesion extraction approaches have previously been developed to assist experts in efficiently identifying and classifying lesions using computer-vision-assisted diagnostic systems. However, because of the location of lesions in the human body; variations in colors, shapes and sizes; and contrast in lesion boundary regions, extracting lesions from dermoscopic images remains a hard task, as it increases computational time and results in inaccurate lesion extraction. Many supervised, unsupervised and deep learning methods have been developed to overcome the aforementioned challenges for the extraction of lesions with better accuracy.
The literature shows that there are several challenges to effective lesion extraction; to overcome them, the proposed method uses the local variance method instead of gradient-based boundary detection. Here, lesion extraction is done by hybridizing superpixel generation and FCM clustering. The proposed method extracts the lesions by removing the undesired artifacts and enhancing the lesion regions relative to the healthy skin regions.
The proposed approach, being an unsupervised one, extracts the skin lesion effectively through the following process: (1) The preprocessing techniques comprise hair removal and texture enhancement. To remove the hairs from the dermoscopic images, one of the popular hair removal approaches, DullRazor [2], is used. It removes the hairs from the input images and helps in further processing of the images. (2) Due to the large intensity variations in dermoscopic images, it is very difficult to segregate the lesion regions from the healthy skin regions. Therefore, to enhance the lesion regions, the hair-removed images are processed through a contrast enhancement technique known as dominant orientation-based texture histogram equalization (DOTHE), which enhances the lesion regions of the dermoscopic images based on histogram equalization. (3) Further, the preprocessed images are passed through the MMLVR-WT for the generation of superpixels, and the histogram of the superpixel image is computed to enable fast fuzzy c-means (FCM) clustering. The proposed method uses the local variance method for accurate detection of the boundary regions, which helps to separate the lesions from the healthy skin regions effectively. (4) Later, a postprocessing technique is used to remove the undesired pixel regions from the lesion regions.
The rest of the paper is organized as follows: Section 2 presents the related works. Section 3 provides an idea of the datasets used for experimentation work. The proposed method is discussed elaborately in Section 4. Sections 5 and 6 discuss the proposed method's performance analysis and results. Finally, Section 7 concludes the paper.

Related Work
Skin lesion extraction is carried out noninvasively using dermoscopic images. Here, the lesion regions are segregated from the healthy skin regions. The lesion extraction methods available in the literature are broadly categorized into supervised and unsupervised approaches. The supervised approaches [3][4][5][6][7][8] use prior knowledge of lesions and non-lesions in dermoscopic images for accurate identification of melanocytic skin lesions. This process requires large image datasets with ground truths annotated by experts in order to create an accurate detection model. Most supervised approaches presently use deep convolutional neural networks for segmentation [9]. Some of the popular CNN architectures used for dermoscopic images are U-Net [10] proposed by Ronneberger et al., SegNet [11] by Badrinarayanan et al., DeepLab [12] by Bagheri et al. and the deep FCN combining shallow and deep networks proposed by Zhang et al. [13]. Furthermore, to improve lesion extraction accuracy, hybrid combinations of supervised and unsupervised methods were developed for skin lesion extraction. Ünver and Ayan [14] combined the deep neural network YOLO with the unsupervised approach GrabCut, followed by morphological operations, to extract melanocytic lesions. Nida et al. [15] used an RCNN for lesion localization, followed by fuzzy c-means clustering for segmentation. Banerjee et al. [16] extracted the lesions using a combination of the deep network YOLO and L-type fuzzy number based approximations. Methods such as [14,16] thus use a hybrid combination of supervised and unsupervised techniques for lesion extraction.
These hybrid schemes indicate that, although good detection accuracy is attained, there is still scope for improvement when a supervised method is combined with an unsupervised approach. It can be remarked that hybrid schemes require an enormous amount of data for the supervised part, whereas the unsupervised part relies on human visual attention models to detect lesions.
Among the various unsupervised approaches, such as thresholding, region-based methods [17], edge detection [18] and clustering [19], clustering is one of the most widely used and applies to both grayscale and color images. Because of its overall success in feature analysis and clustering, the unsupervised fuzzy c-means (FCM) algorithm is typically used in image segmentation. It divides the n feature vectors into c fuzzy groups, evaluates the clustering center for each group and minimizes a non-similarity index function. Kumar et al. [20] developed a novel method for segmentation and classification based on fuzzy c-means, to differentiate homogeneous image regions for image segmentation, and DE-ANN for classification of skin cancers.
To achieve better segmentation accuracy, researchers have combined fuzzy c-means clustering with existing state-of-the-art methods. Fuzzy c-means uses a membership function that divides the image into various regions. Lee and Chen [21] developed a classical FCM clustering segmentation method for various skin cancers. Zhang et al. [22] integrated the local spatial features of the membership with the fuzzy c-means objective function, achieving satisfactory outcomes for segmented images. An updated FCM algorithm was introduced by Liu et al. [23], in which the distance among the various regions obtained by mean-shift and the distance among pixels were integrated into its objective function. To address FCM's inability to handle ambiguous data, the NS and FCM frameworks were combined by Guo et al. [24].
The unsupervised clustering approach, particularly FCM, is used in combination with supervised approaches to yield better lesion extraction results. More recently, the superpixel approach in combination with the deep learning framework has been used to achieve a high accuracy rate in skin lesion extraction. Several superpixel generation approaches are available in literature.
To generate a superpixel image with accurate boundaries, Lei et al. [25] proposed an algorithm based on multiscale morphological gradient reconstruction (MMGR), and a fast FCM method was then implemented for color image segmentation. The gradient-based method is observed to be suitable for detecting the boundary region accurately in natural images. For dermoscopic images, however, it is challenging to detect the boundary region accurately using the gradient-based method because of their uneven boundary regions.
In this context, Ali et al. [26] proposed an automated approach to detect and measure border irregularity, training a network that combines a CNN and Gaussian naïve Bayes to detect border irregularity automatically, which helps to determine whether a lesion's border region is regular or irregular. Afza et al. [27] provided a three-step superpixel approach for lesion extraction from dermoscopic images. A boundary detection method proposed by Liu et al. [28] combined a CNN with edge prediction in dermoscopic images for better lesion extraction. Ali et al. [29] used Feret's diameter method for the prediction of asymmetry parameters, along with an improved Otsu thresholding method, for the extraction of skin lesions from dermoscopic images. A stochastic region-merging and pixel-based Markov random field approach was proposed [30] that decomposes the likelihood function by multiplying the stochastic region-merging likelihood function and the pixel likelihood function for skin lesion extraction. The lesion extraction results obtained from existing boundary detection methods are shown in Figure 1: (a) original dermoscopic image; (b) extracted lesion by AD method [29]; (c) extracted lesion by ADR method [26]; (d) extracted lesion by AT method [28]; (e) extracted lesion by HTSDL method [27]; (f) extracted lesion by MRF method [30]; (g) extracted lesion by the proposed method; (h) ground truth; (i) lesion mask of AD method [29]; (j) lesion mask of ADR method [26]; (k) lesion mask of AT method [28]; (l) lesion mask of HTSDL method [27]; (m) lesion mask of MRF method [30]; (n) lesion mask of the proposed method.
In Figure 1a, the original dermoscopic image was considered and Figure 1h depicts the corresponding ground truth (GT) for it. The lesion region had irregular intensity variations near the lesion border regions. The lesion extracted using the AD method [29] is shown in Figure 1b, and the corresponding lesion mask is shown in Figure 1i. Although Figure 1b shows that the lesions are effectively extracted, the comparison of the lesion mask of Figure 1i with the GT (Figure 1h) shows that some emergence of the background region is occurring.
Similarly, for the ADR method [26], the extracted lesion is shown in Figure 1c, and the corresponding mask is given in Figure 1j. From Figure 1j, it can be observed that there is some loss of lesion regions and emergence of background healthy skin region. Figure 1d,k demonstrates the extracted lesion and lesion mask of the AT approach [28]. Although it uses CNN, it can still be observed from Figure 1d that the lesion extraction is not accurate, and it extracts lesion along with the healthy skin regions. Figure 1e,l shows the lesion and mask for the HTSDL method [27].
From Figure 1e, it can be observed that some of the lesion regions were missed, which is not desirable. Similarly, Figure 1f,m represents the lesion extraction result and lesion mask of the MRF method [30]. From Figure 1m, it can be observed that although it resembles the GT given in Figure 1h, it has extensive emergence of healthy skin regions, as shown in Figure 1f. However, it can be seen from Figure 1g,n, which represents the lesion extraction result and lesion mask of the proposed method, that the proposed approach has a minimal loss of lesion region and fewer emergences of healthy skin regions as compared to the other state-of-art recent approaches.

Datasets
For experimentation purposes, three publicly available datasets, i.e., ISIC 2016 [31], ISIC 2017 [32] and ISIC 2018 [33], were used. The dermoscopic images available in the datasets were of varying intensities, resolutions and sizes, and ground truth images were provided. The proposed method used the dermoscopic images as they were, with no modifications in size or resolution. The RGB image size ranged from 542 × 718 to 2848 × 4288 pixels for the ISIC 2016 dataset. For the ISIC 2017 and ISIC 2018 datasets, the RGB image size varied from 576 × 768 to 6748 × 4499 pixels.

Proposed Method
The proposed method segregates the lesions from dermoscopic images. The entire lesion extraction process comprises four basic stages: preprocessing, filtering-based watershed transform, fast fuzzy c-means (FCM) clustering and morphological postprocessing. The proposed method's architecture is illustrated in Figure 2. The subsequent subsections explain the proposed system in more detail.

Input Image
For experimentation, the proposed method used the three publicly available skin lesion image datasets, i.e., ISIC 2016, ISIC 2017 and ISIC 2018. The input images considered were of RGB-type dermoscopic images. For example, the input image is shown in Figure 3a.

Preprocessing Techniques
Preprocessing plays a vital role in the automated melanocytic skin lesion extraction process. It is the first step that prepares the dermoscopic images for further analysis by enhancing certain features and removing undesired artifacts. It comprises two basic steps, i.e., hair removal and texture enhancement.

Hair Removal
Undesired artifacts, such as hairs present in the captured dermoscopic images, increase the computational cost and lead to inaccurate skin lesion extraction. Several hair-removal methods [2,34] exist that detect the hair pixels and remove them, thereby reducing the computational time. The proposed method used one of the popular hair-removal approaches, DullRazor [2], to remove hairs from the dermoscopic images. This technique removes the hairs present in the input RGB images. Figure 3 gives the process of hair removal from the input RGB images. Initially, from the original RGB image given in Figure 3a, the hair mask shown in Figure 3b was generated using the DullRazor algorithm. Further, using the image inpainting method, the hair regions in the binary mask generated in Figure 3b were replaced with the nearest non-hair pixel intensity values, as shown in Figure 3c. Later, the result was smoothed to get a clear, hair-removed image. Figure 3d shows the hair-removed image obtained using DullRazor from the original RGB image shown in Figure 3a.
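The detection half of this step can be sketched as follows. This is an illustrative Python approximation, not the authors' MATLAB pipeline or the original DullRazor code: a grayscale morphological closing brightens thin dark structures such as hairs, and pixels whose closed value differs strongly from the original are flagged as hair. The window size `se` and threshold `thresh` are assumed values chosen for illustration.

```python
import numpy as np

def hair_mask(gray, se=5, thresh=15):
    """Crude DullRazor-style hair detection on a grayscale image.

    A grayscale closing (dilation then erosion with a square window)
    fills in thin dark structures; pixels where the closing raises the
    intensity by more than `thresh` are flagged as hair.
    """
    pad = se // 2
    p = np.pad(gray.astype(float), pad, mode="edge")
    h, w = gray.shape
    dil = np.empty((h, w))
    for y in range(h):                      # grayscale dilation
        for x in range(w):
            dil[y, x] = p[y:y + se, x:x + se].max()
    p2 = np.pad(dil, pad, mode="edge")
    closed = np.empty((h, w))
    for y in range(h):                      # grayscale erosion -> closing
        for x in range(w):
            closed[y, x] = p2[y:y + se, x:x + se].min()
    return (closed - gray) > thresh
```

The flagged pixels would then be replaced by nearby non-hair intensities (inpainting), as described above.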

Texture Enhancement
The input RGB images have large variations in intensity, due to which it is difficult to segregate the lesion regions from healthy skin. This leads to improper extraction of skin lesions and also affects [35] the diagnosis process. Therefore, it is highly essential to enhance the texture region accurately for further processing. One contrast-enhancement technique that enhances the texture region based on histogram equalization is dominant orientation-based texture histogram equalization (DOTHE) [36]. The DOTHE algorithm comprises the following six steps: (i) Initially, the entire image that is to be enhanced is divided into a number of blocks.
(ii) A variance threshold is applied to each block to classify it as smooth or rough.
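Steps (i)–(ii) can be sketched as follows; this is a minimal Python illustration, assuming a grayscale image, and the block size and variance threshold are placeholder values (DOTHE's actual parameters are specified in [36]).

```python
import numpy as np

def classify_blocks(image, block_size=8, var_threshold=100.0):
    """Divide a grayscale image into non-overlapping blocks and label
    each block 'smooth' or 'rough' by comparing its intensity variance
    to a threshold (steps i-ii of DOTHE)."""
    h, w = image.shape
    labels = {}
    for r in range(0, h - block_size + 1, block_size):
        for c in range(0, w - block_size + 1, block_size):
            block = image[r:r + block_size, c:c + block_size]
            labels[(r, c)] = "rough" if block.var() > var_threshold else "smooth"
    return labels
```

The rough (textured) blocks would then drive the subsequent histogram equalization steps of DOTHE.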

Filtering-Based Watershed Transform
This stage comprises four operational steps, namely, Gaussian filtering, local variance computation, MMLVR (multiscale morphological local variance reconstruction) and the watershed transform (WT). The preprocessed output obtained from the former subsection was further processed using the aforementioned sequential steps for lesion extraction. A detailed explanation of the operations is given in the following subsections.

Gaussian Filter
The presence of irregular texture patches in the texture-enhanced dermoscopic images is one of the limiting factors for the accurate extraction of the skin lesion. Therefore, smoothing was performed using a 2-D Gaussian filter [37,38]. The 2-D Gaussian kernel is given as G(x, y) = (1/(2πσ²)) exp(−(x² + y²)/(2σ²)), which is convolved with the enhanced image I_E(x, y) obtained from Section 4.2.2 to obtain the smoothed image I_G(x, y) = G(x, y) ∗ I_E(x, y). A Gaussian filter of size 3 × 3 was used in the proposed method. It reduced the effect of irregular texture regions and provided smoothed intensity values; the output of this step is thus a Gaussian-blurred image. The appropriate parameter selection for the Gaussian kernel is discussed in detail in Section 5.
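The kernel construction can be sketched as follows (an illustrative Python snippet, not the authors' MATLAB code); the defaults match the 3 × 3, σ = 1 setting used in the paper.

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel G(x, y).

    The kernel is sampled on a size x size grid centered at the origin
    and normalized so its entries sum to 1, preserving mean intensity
    when convolved with an image."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()
```

Convolving the enhanced image with this kernel (e.g., via any 2-D convolution routine) yields the Gaussian-blurred image I_G.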

Local Variance for Boundary Region Extraction
The output obtained from the former step was a Gaussian-blurred image (I_G) with the irregular texture patches removed, which gave a better smoothing effect to the corresponding output image. For lesion extraction, boundary identification is one of the important steps that segregates the lesions from the healthy skin regions. The boundary occurs at the transition points of the intensity image obtained after the smoothing operation. However, the smoothing operation applied in the former step also smoothed the boundaries (or edges), which made it difficult to apply conventional gradient-based techniques such as Canny, Prewitt and Sobel. Therefore, we used the local variance for boundary region identification. The local variance technique depends on the image's local statistical intensity distribution rather than on the intensity gradient. Compared to flat regions of the image, the value of the local variance changes across edges, varying from minimum to maximum and vice versa. The local variance at a pixel can be computed as σ²(x, y) = (1/|N|) Σ_{(p,q)∈N} (I_G(p, q) − m)², where N is the neighborhood centered at (x, y) and m is the mean intensity of that neighborhood. To determine the local variance feature of the image, this operation was performed throughout the image by sliding the window horizontally and vertically over the whole P × Q image, where P × Q is the size of the original image. The mean of the local variances of the pixels was used to determine the image's boundary in the proposed method; the variance yields a high value near the boundary regions. The boundary region extraction using the local variance method is shown in Figure 4a.
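The sliding-window variance computation can be sketched as follows; a minimal Python illustration assuming a 3 × 3 window (as used in the paper) and edge-replicated borders, which is one common border-handling choice.

```python
import numpy as np

def local_variance(image, window=3):
    """Compute the variance of each pixel's window x window neighborhood.

    Borders are handled by edge replication, so the output has the same
    shape as the input. High values indicate boundary regions."""
    pad = window // 2
    padded = np.pad(np.asarray(image, dtype=float), pad, mode="edge")
    h, w = image.shape
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + window, x:x + window]
            out[y, x] = patch.var()
    return out
```

On a flat region the variance is zero; near a step edge it peaks, which is exactly the property exploited for boundary extraction here.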

Multiscale Morphological Local Variance Reconstruction (MMLVR) Based on Watershed Transform
The local variance image (I_LV) obtained in the former step segregates the boundary region from the healthy skin. The I_LV image was further processed using the multiscale morphological local variance reconstruction (MMLVR) operation. It smooths the lesion region of the image so that the boundary of the lesion region is preserved [39], which overcomes oversegmentation [25] while removing useless gradient details. Thus, a binary image was generated, denoted I_B. The basic operations of morphological reconstruction are dilation and erosion [39], both performed with respect to a structuring element (SE). The dilation of the image I_B expands the image according to the four- or eight-connected SE, whereas erosion performs the reverse operation and shrinks I_B according to the SE. The opening and closing of I_B by SE are denoted I_B ∘ SE and I_B • SE, defined as erosion followed by dilation and dilation followed by erosion, respectively. The lesion region was smoothed using the morphological closing-by-partial-reconstruction operator Φ_k applied to the dilated image δ(I_B), with the reference image φ_k(I_B) obtained by closing the preprocessed image k times; here, n defines the size of the SE. Further, the proposed method used the watershed transform (WT) to generate superpixels of the enhanced image based on MMLVR. The superpixel technique oversegments the enhanced image into a number of confined regions, which helps to improve the efficiency of lesion extraction. The WT operates on the regional minima of the filtered local variance image to achieve the pre-lesion extraction. The output obtained by the WT based on MMLVR is shown in Figure 5b.
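The multiscale smoothing idea can be sketched numerically as follows. This is a simplified stand-in, not the paper's Φ_k operator: plain morphological closings at several SE sizes are combined by a pointwise maximum, mirroring the multiscale combination in MMLVR/MMGR [25] but without the geodesic reconstruction step. The scale set `(3, 5, 7)` is an assumed example.

```python
import numpy as np

def grey_dilate(img, se):
    """Grayscale dilation with an se x se square structuring element."""
    pad = se // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    h, w = img.shape
    return np.array([[p[y:y + se, x:x + se].max() for x in range(w)]
                     for y in range(h)])

def grey_erode(img, se):
    """Grayscale erosion with an se x se square structuring element."""
    pad = se // 2
    p = np.pad(np.asarray(img, dtype=float), pad, mode="edge")
    h, w = img.shape
    return np.array([[p[y:y + se, x:x + se].min() for x in range(w)]
                     for y in range(h)])

def multiscale_closing(img, scales=(3, 5, 7)):
    """Pointwise maximum of closings (dilation then erosion) at several
    SE sizes -- a simplified multiscale smoothing in the spirit of MMLVR."""
    closed = [grey_erode(grey_dilate(img, s), s) for s in scales]
    return np.maximum.reduce(closed)
```

A closing fills dark details smaller than the SE, so small variance minima that would cause watershed oversegmentation are suppressed while larger basin structure is kept.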

Fast Fuzzy C-Means Clustering
The output obtained from the former step is a pre-lesion extraction result that depends on the MMLVR-WT. For the final lesion extraction, fast fuzzy c-means (FCM) [22] was used by computing the histogram of the superpixel image. This histogram of the superpixel image is the key factor that makes fast color lesion extraction possible. The proposed method uses both the MMLVR-WT and the fast FCM method for accurate skin lesion extraction: the MMLVR-WT operates on the local features of an image, whereas FCM needs the global features of an image for its operation. Thus, by combining MMLVR-WT and FCM, a better lesion extraction result can be obtained. The lesion extracted by fast FCM is displayed in Figure 5c.
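The essence of the speed-up is that FCM clusters the distinct values of the superpixel image, weighted by how many pixels each value represents, instead of every pixel. A minimal Python sketch of this histogram-weighted FCM (illustrative, with assumed defaults for the fuzzifier m and iteration count) is:

```python
import numpy as np

def fast_fcm_histogram(levels, counts, c=2, m=2.0, iters=50, seed=0):
    """FCM on distinct intensity levels (or superpixel means), each
    weighted by its pixel count, instead of on every pixel.

    Returns the cluster centers and the membership matrix u (c x n)."""
    levels = np.asarray(levels, dtype=float)
    counts = np.asarray(counts, dtype=float)
    rng = np.random.default_rng(seed)
    u = rng.random((c, len(levels)))
    u /= u.sum(axis=0)                     # memberships sum to 1 per level
    for _ in range(iters):
        w = (u ** m) * counts              # fuzzified, frequency-weighted
        centers = (w @ levels) / w.sum(axis=1)
        d = np.abs(levels[None, :] - centers[:, None]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))  # standard FCM membership update
        u /= u.sum(axis=0)
    return centers, u
```

Because the number of distinct levels (or superpixels) is far smaller than the number of pixels, each iteration is correspondingly cheaper than pixel-level FCM.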

Postprocessing
The lesion extraction result obtained from the former step was further binarized and then postprocessed using morphological operation, followed by extraction of the biggest blob. More details about the postprocessing techniques are described in subsequent subsections.

Morphological Operation
Morphological operation is an essential step for the extraction of lesions. The clustered image obtained from the previous step contained undesired tiny pixel regions, shown as encircled regions 'A' and 'B', that affected the shape and texture regions of the image. Therefore, to remove the undesired pixel regions, thinning and region-filling operations were applied to the binary image.

Extraction of the Biggest Blob
The binary image obtained from Section 4.5.1 still contained undesired pixels that had to be removed to extract the skin lesion accurately. To achieve this, the biggest blob was extracted by discarding all the undesired pixel components from the binary image. Finally, the skin lesion mask was obtained by keeping the largest connected component and ignoring all smaller connected components. The lesion mask obtained using the biggest blob is shown in Figure 6b.
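Biggest-blob extraction can be sketched as follows; an illustrative Python implementation using breadth-first connected-component labeling (4-connectivity is assumed here).

```python
import numpy as np
from collections import deque

def biggest_blob(mask):
    """Keep only the largest 4-connected foreground component of a
    binary mask; all smaller components are discarded."""
    mask = np.asarray(mask, dtype=bool)
    labels = np.zeros(mask.shape, dtype=int)
    sizes, current = {}, 0
    h, w = mask.shape
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and labels[sy, sx] == 0:
                current += 1                      # new component found
                q = deque([(sy, sx)])
                labels[sy, sx] = current
                n = 0
                while q:                          # BFS flood fill
                    y, x = q.popleft()
                    n += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            q.append((ny, nx))
                sizes[current] = n
    if not sizes:
        return np.zeros_like(mask)
    keep = max(sizes, key=sizes.get)
    return labels == keep
```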

Performance Analysis
For performance analysis, the proposed method used different metrics: accuracy (Acc), dice coefficient (DC), Jaccard index (JI), sensitivity (SN) and specificity (SP). To compute these metrics, the binary lesion mask extracted by the proposed method was compared with the ground truth binary images provided in the datasets. From the two images, a confusion matrix was built, where TP, TN, FP and FN denote the true positive, true negative, false positive and false negative counts. The metrics are defined as follows: Acc = (TP + TN)/(TP + TN + FP + FN); DC = 2TP/(2TP + FP + FN); JI = TP/(TP + FP + FN); SN = TP/(TP + FN); SP = TN/(TN + FP).
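These metrics can be computed directly from the two binary masks, as in the following Python sketch (division-by-zero handling is omitted for brevity).

```python
import numpy as np

def lesion_metrics(pred, gt):
    """Acc, DC, JI, SN and SP from a predicted and a ground-truth
    binary lesion mask of the same shape."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.sum(pred & gt)      # lesion pixels correctly detected
    tn = np.sum(~pred & ~gt)    # healthy skin correctly rejected
    fp = np.sum(pred & ~gt)     # healthy skin marked as lesion
    fn = np.sum(~pred & gt)     # lesion pixels missed
    return {
        "Acc": (tp + tn) / (tp + tn + fp + fn),
        "DC": 2 * tp / (2 * tp + fp + fn),
        "JI": tp / (tp + fp + fn),
        "SN": tp / (tp + fn),
        "SP": tn / (tn + fp),
    }
```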

Results and Discussion
Three publicly available dermoscopic image datasets, i.e., ISIC 2016 [31], ISIC 2017 [32] and ISIC 2018 [33], were used to test and validate the proposed method. The entire experiment was done on a PC with a Core i3 processor and 8 GB RAM in MATLAB R2018b.
The proposed method used a Gaussian filter of size 3 × 3 with σ = 1 and a local variance window of size 3 × 3. Table 1 presents the performance measurements obtained by the proposed method when the structuring element (SE) size was 3, the local variance window size was 3 × 3 and different kernel sizes and sigma values were used. The results obtained by changing the SE size to 2, with the same local variance window size and different kernel sizes and sigma values, are shown in Table 2.
From Tables 1 and 2, it can be observed that the best results were obtained when the SE size was 3, a Gaussian filter kernel with sigma value 1 (σ = 1) was used and the local variance window size was 3 × 3. The best values are shown in bold in Table 1.
The proposed method comprises three main stages, i.e., preprocessing, filtering-based watershed transform and postprocessing. The fast FCM used for lesion extraction is one of the major factors in its computational cost. The time complexity of FCM is O(ndc²i), where n is the number of data points, d the number of dimensions, c the number of clusters and i the number of iterations. Keeping n = 50, d = 3 and i = 50 constant, we varied the number of clusters. We did not compare the complexity of the proposed method with that of other approaches, as this information was not available in the relevant literature.
After the validation of different parameters, the proposed method was further evaluated by considering the variety of images from the three publicly available datasets, i.e., ISIC 2016, ISIC 2017 and ISIC 2018, and the performance measurement metrics were obtained.
For the ISIC 2016 dataset, the performance measurements obtained for the different metrics in the proposed method are presented in Table 3, which was compared with the different supervised and unsupervised approaches. Although the proposed method is an unsupervised technique, it gave better accuracy, as shown in Table 3. The best values of each metric for different methods are marked in bold. It provided an accuracy of 95.4%, dice coefficient of 94.5% and Jaccard index of 93.2%, which were greater than those of existing approaches. The sensitivity and specificity of 94.7% and 98.5% were achieved in the proposed method, which were the second highest.
Figure 7 shows the lesions extracted from the ISIC 2016 dataset using the proposed method. The proposed method was capable of accurately extracting skin lesions from a wide range of dermoscopic images, as shown in Figure 7b. Figure 7c,d shows the ground truths available in the dataset and the lesion masks obtained by the proposed method. By comparing Figure 7c,d, it can be seen that the proposed method segregated the lesion regions more accurately. A bar plot is shown in Figure 8 for better analysis of the proposed method using the different metrics, such as accuracy, dice coefficient and Jaccard index; it shows the effective performance of the proposed method on the ISIC 2016 dataset. Further, the proposed method was tested with the ISIC 2017 dataset by considering a variety of images, such as those with hairs, ruler marks, low illumination in texture regions and irregularity in shape and structure. The different metrics obtained by the proposed method on the ISIC 2017 dataset were compared with those of current supervised, unsupervised and deep learning approaches, as demonstrated in Table 4. The bold values indicate the best results for a particular performance parameter. It was observed that the proposed method gave a better accuracy of 97.8%, dice coefficient of 93.2%, Jaccard index of 87.1% and specificity of 99.8%, which were the highest compared to recent approaches. As demonstrated in Table 4, the proposed method had a sensitivity of 96.8%, which was the third highest.
The lesion masks obtained by the proposed method using the ISIC 2017 dataset are illustrated in Figure 9. The extracted lesions obtained from ISIC 2017 are shown in Figure 9b. Figure 9c,d represents the ground truths from the dataset and the lesion masks obtained by the proposed method. Figure 10 presents the values of accuracy, dice coefficient and Jaccard index from Table 4 as a bar plot, which demonstrates the superiority of the proposed method over the existing methods on the ISIC 2017 dataset. Finally, the proposed method was tested on the ISIC 2018 dataset by considering a variety of images. The performance measures evaluated were compared with those of the existing supervised, unsupervised and deep learning approaches and are shown in Table 5, with the best value for each performance metric marked in bold.
An accuracy of 96.9%, dice coefficient of 93.0%, Jaccard index of 87.0% and specificity of 98.6% were obtained, which were the highest compared to current state-of-the-art methods; the sensitivity of the proposed method was the second highest at 95.8%.
The skin lesions extracted by the proposed method from the ISIC 2018 dataset are shown in Figure 11, which demonstrates how well the proposed method works in the presence of undesired artifacts. Some of the dermoscopic images in the datasets had low illumination in the lesion regions, making it very difficult to segregate the lesion regions from the healthy skin. The proposed method used a texture enhancement technique that enhanced the lesion regions in the dermoscopic images, which helped to segregate the lesion regions accurately from the healthy regions. Figure 11b shows the lesions extracted from the ISIC 2018 dataset using the proposed method. The corresponding lesion masks are shown in Figure 11d. The performance of the proposed method on the three metrics of accuracy, dice coefficient and Jaccard index for the ISIC 2018 dataset is shown in Figure 12. From the bar plot given in Figure 12, it can be said that the proposed method performed better than the other methods on the ISIC 2018 dataset images.

Conclusions
This paper presents an unsupervised method for the extraction of lesions from dermoscopic images using fast fuzzy c-means (FCM) clustering based on MMLVR-WT. The proposed method uses MMLVR-WT to generate superpixels of the images and computes the histogram of the superpixel images to enable fast FCM. The method was tested on three publicly available datasets, i.e., ISIC 2016, ISIC 2017 and ISIC 2018, using a wide variety of images. Although the proposed method is unsupervised, it can still extract the lesions accurately. It gives an overall accuracy of 96.7%, dice coefficient of 93.56%, Jaccard index of 89.1%, sensitivity of 95.76% and specificity of 98.96%. Analysis of these performance measures shows that the method performs better overall in accuracy, sensitivity and specificity than in overall dice coefficient and Jaccard index. This is due to the inclusion of the most challenging images from the different datasets, which have very low resolution, hairs, gels, ruler marks, etc. Nevertheless, the dice coefficient and Jaccard index obtained by the proposed method are still better than those of individual state-of-the-art approaches, including deep learning methods. The proposed method, although unsupervised, is thus comparable with supervised approaches such as deep learning, and there is scope for further improvement in lesion detection accuracy through integration with deep neural networks.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.

Data Availability Statement: The study did not report any data.