
A Survey on Computer-Aided Intelligent Methods to Identify and Classify Skin Cancer

Department of ECE, Karunya Institute of Technology and Sciences, Coimbatore 641114, India
Author to whom correspondence should be addressed.
Informatics 2022, 9(4), 99;
Submission received: 14 November 2022 / Revised: 3 December 2022 / Accepted: 9 December 2022 / Published: 11 December 2022
(This article belongs to the Special Issue Feature Papers in Medical and Clinical Informatics)


Melanoma is one of the most dangerous types of skin cancer. It spreads easily to other parts of the human body, so an early diagnosis is necessary for a higher survival rate. Computer-aided diagnosis (CAD) is suitable for providing precise findings before the critical stage. The computer-aided diagnostic process includes preprocessing, segmentation, feature extraction, and classification. This study discusses the advantages and disadvantages of various computer-aided algorithms. It also discusses current approaches, open problems, and the various types of skin image datasets. Possible directions for future work are also highlighted in this paper. The inferences derived from this survey will be useful for researchers working on skin cancer image analysis.

1. Introduction

Abnormalities in the skin layers cause various skin diseases [1,2]. Some skin diseases can be identified and cured by clinicians, while others present without symptoms and cannot be diagnosed by doctors with the naked eye. One such skin cancer, caused by UV radiation damaging the genetic material of the skin layers, is melanoma [3,4]. The types of skin diseases are shown in Figure 1.
Melanoma is the 17th most common cancer in the world, out of more than 200 different types [5]. According to the American Cancer Society, nearly 7650 people will die from melanoma in 2022 (about 5080 men and 2570 women) [6]. Skin cancer rates have been rising rapidly in recent times, and skin cancer affects human beings of all ages [7]. A skin lesion is a disorder in the skin cells, and skin cancer (carcinoma) can take many different forms [8]. Most melanomas begin in the top layers of the skin, but some types can become invasive by penetrating into the deeper layers [9]. The non-melanoma types of cancer are usually diagnosed and cured, but in some rare cases they can be fatal [10,11,12]. Any skin cancer that develops in the basal, squamous, or Merkel cells of the skin is referred to as non-melanoma skin cancer, whereas melanoma arises in the melanocytes of the skin. Basal cell carcinoma can appear anywhere on the skin, but it typically occurs on the head and neck; it primarily results from sun exposure, or manifests in patients who received radiation therapy in their youth. Sun exposure is also the primary cause of squamous cell carcinoma, which can occur in various skin types and can also develop on skin that has been burned, damaged by chemicals, or exposed to X-rays; it frequently affects the lips and sites of old scars [10,11].
Superficially spreading melanoma typically grows radially across the skin surface, but it may also begin to penetrate the skin (called vertical growth). It frequently has a flat surface and an irregular border, and it can show various shades such as red, blue, brown, black, grey, and white. A mole on the skin may occasionally serve as the origin of a superficially spreading melanoma [13,14]. The various types of melanoma and non-melanoma are given in Table 1.
Skin cancer is an abnormality that affects the tissues of the human skin and includes both melanoma and non-melanoma types. Skin lesions are captured by an imaging device, and AI techniques work well with such images: they give accurate results and support clinicians in making accurate decisions for treatment planning.
In recent research, classifying the different types of melanoma and non-melanoma remains challenging [15,16]. Dermoscopy is an instrument that helps identify a wide range of skin problems, including malignant and benign tumors. Although dermoscopy improves melanoma diagnosis, it cannot replace histopathologic evaluation [17,18]. The lack of methods for detecting melanoma at an early stage has led researchers toward computer-aided diagnosis methods [19,20]. The availability of advanced image processing methods, artificial intelligence methods, and decision-making mechanisms to construct computer-aided diagnostic systems can provide a comprehensive solution to aid in the early detection of melanoma. The advanced intelligent methods are based on deep learning and machine learning algorithms used for skin cancer image classification and segmentation [21,22].
The structure of the paper is as follows: Section 2 focuses on image data collection; the collected datasets are tabulated with their websites and the total number of skin images, sorted into melanoma images and other skin disease images. Section 3 deals with image pre-processing techniques. Section 4 explains the image segmentation methods. Section 5 describes various feature extraction techniques. Section 6 explains the classification methods for melanoma and non-melanoma. Section 7 presents the inferences drawn from the survey, and Section 8 provides the conclusions and future scope.

2. Skin Cancer Image Database

Images already available in numerous public datasets have been used in this work; researchers also actively carry out diagnosis for skin treatment with real-time images. These image datasets have been developed for research and benchmarking, enabling comparative studies of image-based methods and of machine learning and deep learning algorithms on dermoscopic images. The quality of a dataset is affected by clinical, dermatoscope, and pathological parameters. The skin diagnosis is validated using the clinical and synthetic data presently available. The datasets include patients from the United States of America, Portugal, Scotland, Denmark, and Australia. Melanoma is more than 20 times more common in white people than in dark-skinned populations; the incidence of melanoma is lower among dark-skinned populations due to the protective effect of melanin, although the risk of non-melanoma types is higher. According to statistics, the average age at melanoma diagnosis is 50 and above. These datasets have been used by many researchers to develop AI-based diagnostic tools. A brief description of the datasets is given in Table 2.

3. Skin Cancer Image Preprocessing

Image pre-processing improves the quality of the original image. It is a required step after the acquisition of dermoscopy images because the captured image may lack clarity: hair, scars, and differences in skin tone appear on the surface of the human skin. As a result, the images should be preprocessed to accurately assess the affected skin lesion [46]. An image can be preprocessed in a variety of ways, including: 1. image enhancement, 2. image restoration, and 3. hair removal. An illustration of the various pre-processing techniques is given in Figure 2.

3.1. Image Enhancement

Image enhancement is the technique of improving digital pictures to make them more suitable for display [47]. Under low lighting conditions, object detection and identification can be improved using image enhancement techniques, which can also be used to change the contrast of an image, making it lighter or darker. Researchers have created several techniques for enhancing image quality [48].

3.1.1. Image Resolution Enhancement

Data augmentation can be used for image resolution enhancement and includes rotation, shifting, brightness adjustment, reflection, and resizing of images. Because some of the images in a dataset have small pixel dimensions and different image acquisition factors apply, the luminance and size of the images can vary significantly. A lesion image dataset is likely to contain a variety of images because each acquisition tool has a different set of criteria. The pixel intensity of all images is therefore standardized to ensure that the data are consistent and noise-free [47]. When resizing the input images, a lower resolution may be used in order to preserve the features of the skin lesion and prevent shape distortion [49].
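As an illustration, the augmentation operations mentioned above (rotation, reflection, resizing) can be sketched in pure Python on a grayscale image stored as a list of rows; the helper names are hypothetical and not taken from any surveyed paper.

```python
# Illustrative augmentation sketch (hypothetical helpers, not from the surveyed works).

def rotate90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def hflip(img):
    """Reflect the image horizontally."""
    return [row[::-1] for row in img]

def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize; a simple way to standardize image sizes."""
    h, w = len(img), len(img[0])
    return [[img[r * h // new_h][c * w // new_w] for c in range(new_w)]
            for r in range(new_h)]

img = [[0, 50], [100, 200]]
augmented = [img, rotate90(img), hflip(img), resize_nearest(img, 4, 4)]
```

In practice these operations are applied randomly during training to enlarge the effective dataset.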

3.1.2. Color Space Transformation

Increasing the contrast of an image means transferring it to a new space where the image intensity is directly proportional to its main components. To accomplish this, the image is first converted from the RGB to the LAB color space, and the remaining processing is performed on the L sublayer: regulating L affects the intensity of the pixels while preserving the image’s original color [50]. Luminance components are best suited for distinguishing hair and dark pigments, so the LUV color space can also be used for color space transformation [46]. Since the shadow effect is more obvious in the value channel than in other channels and color spaces, the original RGB image is converted to the hue, saturation, and value (HSV) space in order to reduce the effect of non-uniform illumination or shadow [51]. Images that contain only shades of gray and no other colors are known as “grayscale” images; grayscale refers to a range of monochromatic tones from black to white. Images are converted to grayscale using the luminance value of each pixel, also known as its brightness or intensity, measured on a scale from black to white. Each pixel in an RGB digital image has three distinct luminance values: red, green, and blue [52].
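A minimal sketch of two of these transformations: grayscale conversion using the standard Rec. 601 luminance weights (an assumption; the surveyed papers do not state which weights they use), and RGB-to-HSV conversion via the standard-library `colorsys` module.

```python
import colorsys

def rgb_to_gray(img):
    """img: list of rows of (R, G, B) tuples -> list of rows of ints.
    Uses Rec. 601 luminance weights (an assumed convention)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in img]

# HSV conversion for a single pixel (channels in [0, 1]); the V channel is
# the one used to suppress non-uniform illumination, as described above.
h, s, v = colorsys.rgb_to_hsv(0.2, 0.4, 0.6)
```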

3.1.3. Contrast Enhancement

The primary function of a histogram equalization-based method is to improve the contrast of an input image. The image histogram will be narrow if the difference in brightness between the lowest and highest values in the image is small; histogram equalization spreads the histogram and boosts contrast to make the subsequent stages of image processing easier [53]. Contrast Limited Adaptive Histogram Equalization (CLAHE) is an improved version of Adaptive Histogram Equalization (AHE) that was created specifically to preprocess medical image data. The CLAHE technique processes the image in tiles and enhances the contrast of each tile, so the output is more accurate than simply boosting the contrast of the whole image [54]. Adaptive histogram equalization (AHE) has proven to be an effective and widely applicable contrast-enhancing technique. However, it has two issues: slow speed and excessive noise amplification. Algorithms such as interpolated AHE have been presented to address these issues and to run more quickly on general-purpose computers [55].
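The plain histogram-equalization step can be sketched as follows for an 8-bit grayscale image; CLAHE adds tiling and clip-limiting on top of this idea. This is an illustrative implementation, not code from the surveyed works.

```python
# Histogram equalization sketch for an 8-bit grayscale image (list of rows).

def equalize(img, levels=256):
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function of the histogram.
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Map each gray level through the normalized CDF to spread the histogram.
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in img]
```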

3.2. Image Restoration

The goal of image restoration techniques is to recreate the original image from a degraded observation. The degradation may be caused by a variety of factors, including motion blur, noise, or an out-of-focus camera [56]. Table 3 lists some of the filtering methods used for image restoration.
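As an example of the filtering methods referred to in Table 3, a 3×3 median filter (a common choice for removing impulse noise) can be sketched as follows; border pixels are left unchanged for brevity.

```python
# 3x3 median filter sketch: replaces each interior pixel with the median of
# its neighbourhood, removing isolated noise spikes while preserving edges.

def median3x3(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = sorted(img[r + dr][c + dc]
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            out[r][c] = window[4]  # middle of the 9 sorted values
    return out
```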

3.3. Hair Removal

Hair removal techniques are used to filter out thick hairs and thin blood vessels. Dark hair is removed from the image using the DullRazor algorithm, which uses interpolation to replace the detected hair pixels with nearby non-hair values and smooth the output. However, this procedure frequently results in undesirable blurring and color bleeding [57]. Hair structures produce a strong derivative response in the direction normal to the hair orientation, so long, hair-like structures can be removed using oriented derivative filters: the maximum magnitude of these filters is used to detect hair, and a graphical model is then used to reconstruct the skin image [58].
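A much-simplified DullRazor-style sketch: thin dark hairs are flagged where a grayscale morphological closing along a line differs strongly from the original image; in the full algorithm the flagged pixels would then be replaced by interpolated skin values. The structuring-element size and threshold here are illustrative choices.

```python
# Simplified hair-detection sketch (DullRazor-inspired, not the original code).

def close_horizontal(img, k=3):
    """Grayscale closing (dilation then erosion) with a 1 x k line element."""
    def dilate(row):
        return [max(row[max(i - k // 2, 0):i + k // 2 + 1]) for i in range(len(row))]
    def erode(row):
        return [min(row[max(i - k // 2, 0):i + k // 2 + 1]) for i in range(len(row))]
    return [erode(dilate(row)) for row in img]

def hair_mask(img, thresh=50):
    """Flag pixels where closing brightened the image strongly (thin dark hair)."""
    closed = close_horizontal(img)
    return [[int(c - p > thresh) for p, c in zip(prow, crow)]
            for prow, crow in zip(img, closed)]
```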

4. Skin Lesion Segmentation

The first step in image analysis and data extraction is image segmentation. Image segmentation has a direct impact on how well people are able to understand the image as a whole. The location of the lesion border must be determined using a segmentation algorithm. Then, the features of the skin lesion must be extracted to determine the lesion’s malignant or benign status. The segmentation of the skin lesion must be precise. Algorithms for feature extraction and classification must be properly selected [59]. The early detection and diagnosis of melanoma are improved by the skin lesion image segmentation technique. Some of the segmentation methods used for skin lesions are explained in the subsequent sections.

4.1. Threshold Based Segmentation

With a threshold-based segmentation algorithm, pixels with values below the threshold are ignored because they are assumed to be free of skin cancer; segmentation thus reveals which areas are affected [59]. The threshold method turns a grayscale image into a binary image by applying a threshold value. This value can be chosen using Otsu’s method, which computes a global threshold by maximizing the between-class variance and uses it to convert an intensity image to a binary image. Morphological operations are then used to remove small pixel groups, suppress light structures connected to the image border, and fill the image region and holes. The iterative steps used to select the threshold are given below:
  • Select an initial estimate of the threshold T.
  • Compute the means of the two regions determined by T.
  • Set the new T as the average of the two means.
  • Repeat steps 2 and 3 until the difference in T between successive iterations is smaller than a predefined parameter.
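The iterative threshold-selection steps above translate directly into code; this sketch operates on a flat list of grayscale values, with an assumed initial estimate and stopping tolerance.

```python
# Iterative threshold selection, following the steps listed above.

def iterative_threshold(pixels, t0=128, eps=0.5):
    t = t0
    while True:
        low = [p for p in pixels if p <= t]    # region below T
        high = [p for p in pixels if p > t]    # region above T
        if not low or not high:
            return t
        new_t = (sum(low) / len(low) + sum(high) / len(high)) / 2
        if abs(new_t - t) < eps:               # converged
            return new_t
        t = new_t
```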
With adaptive thresholding, each pixel’s threshold value is determined by the values of the surrounding pixels. This adaptive method offers a better conversion from grayscale to binary and can help overcome varying lighting conditions in the input image [60]. The general location and shape of a lesion are determined using an initial segmentation, and double thresholding is then used to focus on the area of the image where the ideal lesion boundary lies. The goal of double thresholding is to choose a range of threshold values that contains the optimal threshold at each boundary point, because the optimal threshold at one boundary point may differ from that at another. Double thresholding also reduces the number of noisy regions produced by intensity thresholding [61].
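A minimal local-mean version of adaptive thresholding can be sketched as follows: each pixel is compared against the mean of its 3×3 neighbourhood minus a constant C (the window size and C are illustrative choices, not values from the cited work).

```python
# Local-mean adaptive thresholding sketch (illustrative parameters).

def adaptive_threshold(img, C=0):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # Clip the 3x3 neighbourhood at the image borders.
            nb = [img[rr][cc]
                  for rr in range(max(r - 1, 0), min(r + 2, h))
                  for cc in range(max(c - 1, 0), min(c + 2, w))]
            out[r][c] = int(img[r][c] > sum(nb) / len(nb) - C)
    return out
```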

4.2. Edge Based Segmentation

Edge-based segmentation methods segment an image based on the edges between regions by searching for edge pixels and connecting them to form image contours. Two approaches are established for applying such methods: manually, by using the mouse to draw lines that represent the boundaries between regions, and automatically, by applying edge detection filters. The watershed segmentation algorithm and the Laplacian of Gaussian filter are two examples [62]. The Laplacian is a derivative filter used to locate areas of abrupt intensity change in an image in order to identify edges; it is typically applied to images that have already been smoothed with other filters in order to reduce its sensitivity to noise [63]. The watershed segmentation algorithm combines edge-based and region-based segmentation: its goal is to find the watershed lines in the input image and segment the prominent regions [62]. The Canny edge detector can locate pixels close to an edge, but it struggles to locate precise edges [64]. One segmentation strategy has two phases: first, a nonlinear diffusion model detects edges by selectively removing low-contrast information, which is typically related to noise and hairs; second, the Canny edge detector is applied to the smoothed image to determine the lesion edges [65].
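The 4-neighbour Laplacian described above can be sketched as follows; it responds strongly wherever the intensity changes abruptly (border pixels are left at zero for brevity).

```python
# 4-neighbour Laplacian sketch: sum of neighbours minus 4x the centre pixel.

def laplacian(img):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r][c] = (img[r - 1][c] + img[r + 1][c] +
                         img[r][c - 1] + img[r][c + 1] - 4 * img[r][c])
    return out
```

A uniform region gives a zero response, while an isolated bright pixel gives a large negative one, which is why the filter is usually applied after smoothing.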

4.3. Region Based Segmentation

In region-growing segmentation, the region is grown iteratively by comparing it with all of its unallocated neighboring pixels. The measure of similarity is the difference between a pixel’s intensity value and the mean intensity of the region: the pixel with the lowest dissimilarity is added to the region. The process stops when the intensity difference between the region mean and the candidate pixel exceeds a predetermined threshold [66]. Skin lesion images lack clearly defined edges, and the region’s shape is highly irregular, making segmentation difficult. Most segmentation algorithms use either edge or region information to segment the objects in an image; the GrabCut segmentation algorithm, however, makes use of both boundary and region information to segment the foreground [67].
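The region-growing procedure described above can be sketched as follows; for simplicity this version absorbs any 4-connected neighbour within a fixed tolerance T of the running region mean, rather than always choosing the single least-dissimilar pixel first.

```python
# Simplified region-growing sketch (BFS variant of the procedure above).
from collections import deque

def region_grow(img, seed, T=20):
    h, w = len(img), len(img[0])
    region = {seed}
    total = img[seed[0]][seed[1]]
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= rr < h and 0 <= cc < w and (rr, cc) not in region:
                mean = total / len(region)
                if abs(img[rr][cc] - mean) <= T:   # similar to the region mean
                    region.add((rr, cc))
                    total += img[rr][cc]
                    frontier.append((rr, cc))
    return region
```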

4.4. Soft Computing Based Segmentation

Soft computing has grown in popularity and significance over time. It is used in many different research fields, and in medical image analysis it provides various segmentation techniques. To determine the validity of an implementation with respect to the model and the datasets, it is essential to evaluate the algorithms; the evaluation of soft computing techniques considers both segmentation and classification. A variety of metrics can be used to gauge an algorithm’s effectiveness, among them accuracy (AC), Dice score (D), Jaccard coefficient (J), true detection rate (TDR), sensitivity, and specificity. Table 4 lists different soft computing-based segmentation methods along with their performance.
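The overlap metrics listed above can be computed from binary ground-truth and predicted masks as follows (a minimal sketch; accuracy and TDR follow the same counting pattern):

```python
# Segmentation metrics from binary masks (lists of rows of 0/1 values).

def seg_metrics(gt, pred):
    g = [p for row in gt for p in row]
    q = [p for row in pred for p in row]
    tp = sum(1 for a, b in zip(g, q) if a == 1 and b == 1)
    tn = sum(1 for a, b in zip(g, q) if a == 0 and b == 0)
    fp = sum(1 for a, b in zip(g, q) if a == 0 and b == 1)
    fn = sum(1 for a, b in zip(g, q) if a == 1 and b == 0)
    dice = 2 * tp / (2 * tp + fp + fn)        # D
    jaccard = tp / (tp + fp + fn)             # J
    sensitivity = tp / (tp + fn)              # true positive rate
    specificity = tn / (tn + fp)              # true negative rate
    return dice, jaccard, sensitivity, specificity
```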

5. Feature Extraction

Feature extraction is the process of computing parameters that reflect the characteristics of the input image. ABCD features comprise asymmetry, border, color, and diameter [82]. Geometrical features include area, perimeter, thinness ratio, bounding length and width, major axis length, minor axis length, aspect ratio, rectangular aspect ratio, area ratio, maximum radius, minimum radius, radius ratio, standard deviation, mean of all radii, and the Haralick ratio. Texture features include first-order statistics (FOS) such as mean, median, mode, and range, the second-order Gray Level Co-occurrence Matrix (GLCM), and the higher-order Gray Level Run Length Matrix (GLRLM) [83]. Some feature extraction methods and their performance are listed in Table 5.
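Two of the listed features can be sketched directly: the thinness (circularity) ratio 4πA/P², and a single-offset GLCM for texture analysis. These are illustrative implementations, not the exact formulations used in the cited works.

```python
import math

def thinness_ratio(area, perimeter):
    """4*pi*A/P^2: equals 1 for a perfect circle, smaller for irregular borders."""
    return 4 * math.pi * area / perimeter ** 2

def glcm(img, levels, dr=0, dc=1):
    """Gray-level co-occurrence matrix for a single offset (dr, dc):
    counts how often gray level i occurs adjacent to gray level j."""
    m = [[0] * levels for _ in range(levels)]
    h, w = len(img), len(img[0])
    for r in range(h):
        for c in range(w):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                m[img[r][c]][img[rr][cc]] += 1
    return m
```

Haralick texture features (contrast, correlation, energy, homogeneity) are then computed from the normalized GLCM.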

6. Skin Lesion Classification

Computer-aided diagnosis systems have been developed for identifying skin disease [86,87,88]. Proper detection and classification can lead to earlier diagnosis, reducing subsequent risks to the patient. Skin cancer can be classified as melanoma or non-melanoma depending on the extracted features. Various classification approaches, along with their performance, are listed in Table 6.

7. Inferences from the Survey

A detailed explanation of image datasets, image pre-processing, lesion segmentation, feature extraction, classification, and performance metrics has been provided. The advantages and disadvantages of the techniques have been determined, and the following conclusions are drawn from the survey:
  • To improve diagnostic accuracy, deep learning algorithms typically require a large amount of diverse, balanced, and high-quality training data that represents each class of skin lesions.
  • The features extracted from the images must be accurate to ensure high classification accuracy.
  • Most of the algorithms are computationally complex and, hence, difficult to use in practical situations.

8. Conclusions and Future Scope

The development of computer-aided diagnosis to detect melanoma, a global health problem, is a major factor in the early diagnosis of skin cancer. The process of detecting skin cancer involves several steps: preprocessing, image segmentation, feature extraction, classification, and performance analysis. Performance can be improved by increasing the number of features and by modifying the existing techniques. Theoretically, several AI-based techniques yield good detection results; however, their practical feasibility remains a problem due to their computational complexity. Several researchers are currently trying to tackle this problem, which could make these systems practically feasible.

Author Contributions

J.P.J., Conceptualization; A.J., Methodology; A.G.P., Investigation; J.H., Supervision. All authors have read and agreed to the published version of the manuscript.


Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.


References
  1. Kolarsick, P.A.J.; Kolarsick, M.A.; Goodwin, C. Anatomy and Physiology of the Skin. J. Dermatol. Nurses’ Assoc. 2011, 3, 203–213. [Google Scholar] [CrossRef] [Green Version]
  2. Bai, H.; Graham, C. Focus: Introduction: Skin. Yale J. Biol. Med. 2020, 93, 1–2. [Google Scholar]
  3. D’Orazio, J.; Jarrett, S.; Amaro-Ortiz, A.; Scott, T. UV Radiation and the Skin. Int. J. Mol. Sci. 2013, 14, 12222–12248. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. De Gruijl, F.R. Skin cancer and solar UV radiation. Eur. J. Cancer 1999, 35, 2003–2009. [Google Scholar] [CrossRef] [PubMed]
  5. Available online: (accessed on 15 November 2022).
  6. Available online: (accessed on 15 November 2022).
  7. Sinikumpu, S.P.; Jokelainen, J.; Keinänen-Kiukaanniemi, S.; Huilaja, L. Skin cancers and their risk factors in older persons: A population-based study. BMC Geriatr. 2022, 22, 269. [Google Scholar] [CrossRef]
  8. Bhattacharya, A.; Young, A.; Wong, A.; Stalling, S.; Wei, M.; Hadley, D. Precision Diagnosis of Melanoma and Other Skin Lesions from Digital Images. AMIA Summits Transl. Sci. Proc. 2017, 2017, 220–226. [Google Scholar]
  9. Heistein, J.B.; Acharya, U.; Mukkamalla, S.K.R. Malignant Melanoma; StatPearls: Tampa, FL, USA, 2022. [Google Scholar]
  10. Griffin, L.L.; Ali, F.R.; Lear, J.T. Non-Melanoma Skin Cancer. Clin. Med. 2016, 16, 62–65. [Google Scholar] [CrossRef] [PubMed]
  11. Lomas, A.; Leonardi-Bee, J.; Bath-Hextall, F. A systematic review of worldwide incidence of non-melanoma skin cancer. Br. J. Dermatol. 2012, 166, 1069–1080. [Google Scholar] [CrossRef] [PubMed]
  12. Didona, D.; Paolino, G.; Bottoni, U.; Cantisani, C. Non-Melanoma Skin Cancer Pathogenesis Overview. Biomedicines 2018, 6, 6. [Google Scholar] [CrossRef] [Green Version]
  13. Liu, Y.; Sheikh, M.S. Melanoma: Molecular Pathogenesis and Therapeutic Management. Mol. Cell. Pharmacol. 2014, 6, 228. [Google Scholar]
  14. Rogers, H.W.; Weinstock, M.A.; Feldman, S.R.; Coldiron, B.M. Incidence estimate of nonmelanoma skin cancer (keratinocyte carcinomas) in the US population, 2012. JAMA Dermatol. 2015, 151, 1081–1086. [Google Scholar] [CrossRef] [PubMed]
  15. Merlino, G.; Herlyn, M.; Fisher, D.E.; Bastian, B.; Flaherty, K.T.; Davies, M.A.; Wargo, J.A.; Curiel-Lewandrowski, C.; Weber, M.J.; Leachman, S.A.; et al. The state of melanoma: Challenges and opportunities. Pigment Cell Melanoma Res. 2016, 29, 404–416. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Masood, A.; Al-Jumaily, A.; Anam, K. Self-Supervised Learning Model for Skin Cancer Diagnosis. In Proceedings of the 7th International IEEE/EMBS Conference on Neural Engineering (NER), Manhattan, NY, USA, 22–24 April 2015. [Google Scholar]
  17. Marghoob, N.G.; Liopyris, K.; Jaimes, N. Dermoscopy: A Review of the Structures That Facilitate Melanoma Detection. J. Osteopath. Med. 2019, 119, 380–390. [Google Scholar] [CrossRef] [PubMed]
  18. Kato, J.; Horimoto, K.; Sato, S.; Minowa, T.; Uhara, H. Dermoscopy of Melanoma and Non-melanoma Skin Cancers. Front. Med. 2019, 6, 180. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Dick, V.; Sinz, C.; Mittlböck, M.; Kittler, H.; Tschandl, P. Accuracy of Computer-Aided Diagnosis of Melanoma: A Meta-analysis. JAMA Derm. 2019, 155, 1291–1299. [Google Scholar] [CrossRef]
  20. Malti, A.; Chatterjee, B.; Ashor, S.A.; Dey, N. Computer-aided Diagnosis of Melanoma: A Review of Existing Knowledge and Strategies. Curr. Med. Imaging 2020, 16, 835–854. [Google Scholar] [CrossRef]
  21. Xu, Z.; Sheykhahmad, F.R.; Ghadimi, N.; Razmjooy, N. Computer-aided diagnosis of skin cancer based on soft computing techniques. Open Med. 2020, 15, 860–871. [Google Scholar] [CrossRef]
  22. Bakheet, S.; Al-Hamadi, A. Computer-Aided Diagnosis of Malignant Melanoma Using Gabor-Based Entropic Features and Multilevel Neural Networks. Diagnostics 2020, 10, 822. [Google Scholar] [CrossRef]
  23. Gutman, D.; Codella, N.C.; Celebi, E.; Helba, B.; Marchetti, M.; Mishra, N.; Halpern, A. Skin Lesion Analysis toward Melanoma Detection: A Challenge at the International Symposium on Biomedical Imaging (ISBI) 2016, hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2016, arXiv:1605.01397. [Google Scholar]
  24. Codella, N.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H.; et al. Skin Lesion Analysis Toward Melanoma Detection: A Challenge at the 2017 International Symposium on Biomedical Imaging (ISBI), Hosted by the International Skin Imaging Collaboration (ISIC). In Proceedings of the 15th International Symposium on Biomedical Imaging, Washington, DC, USA, 4–7 April 2018. [Google Scholar]
  25. Codella, N.; Rotemberg, V.; Tschandl, P.; Celebi, M.E.; Dusza, S.; Gutman, D.; Helba, B.; Kalloo, A.; Liopyris, K.; Marchetti, M.; et al. Skin Lesion Analysis Toward Melanoma Detection 2018: A Challenge Hosted by the International Skin Imaging Collaboration (ISIC). arXiv 2019, arXiv:1902.03368. [Google Scholar] [CrossRef]
  26. Rehman, M.U.; Khan, S.H.; Rizvi, S.M.D.; Abbas, Z.; Zafar, A. Classification of skin lesion by interference of segmentation and convolotion neural network. In Proceedings of the 2nd International Conference on Engineering Innovation (ICEI), Bangkok, Thailand, 5–6 July 2018. [Google Scholar]
  27. Combalia, M.; Codella, N.C.F.; Rotemberg, V.; Helba, B.; Vilaplana, V.; Reiter, O.; Carrera, C.; Barreiro, A.; Halpern, A.C.; Puig, S.; et al. BCN20000: Dermoscopic Lesions in the Wild. arXiv 2019, arXiv:1908.02288. [Google Scholar]
  28. Rotemberg, V.; Kurtansky, N.; Betz-Stablein, B.; Caffery, L.; Chousakos, E.; Codella, N.; Combalia, M.; Dusza, S.; Guitera, P.; Gutman, D.; et al. A patient-centric dataset of images and metadata for identifying melanomas using clinical context. Sci. Data 2021, 8, 34. [Google Scholar] [CrossRef] [PubMed]
  29. Lei, B.; Jinman, K.; Euijoon, A.; Ashnil, K.; Michael, F.; Dagan, F. Dermoscopic Image Segmentation via Multistage Fully Convolutional Networks. IEEE Trans. Biomed. Eng. 2017, 64, 2065–2074. [Google Scholar] [CrossRef] [Green Version]
  30. Satheesha, T.Y.; Satyanarayana, D.; Prasad, M.G.; Dhruve, K.D. Melanoma is Skin Deep: A 3D reconstruction technique for computerized dermoscopic skin lesion classification. IEEE J. Transl. Eng. Health Med. 2017, 5, 1–17. [Google Scholar] [CrossRef] [PubMed]
  31. Abuzaghleh, O.; Barkana, B.D.; Faezipour, M. Noninvasive Real-Time Automated Skin Lesion Analysis System for Melanoma Early Detection and Prevention. IEEE J. Transl. Eng. Health Med. 2015, 3, 1–12. [Google Scholar] [CrossRef]
  32. Barata, C.; Celebi, M.E.; Marques, J.S. Improving dermoscopy image classification using color constancy. IEEE J. Biomed. Health Inform. 2015, 19, 1146–1152. [Google Scholar] [CrossRef]
  33. Argenziano, G.; Soyer, P.; Giorgio, V.; Piccolo, D.; Carli, P.; Delfino, M.; Ferrari, A.; Hofmann-Wellenhof, R.; Massi, D.; Mazzocchetti, G.; et al. Interactive Atlas of Dermoscopy; Edra Medical Publishing & New Media: Milan, Italy, 2000. [Google Scholar]
  34. Aurora, S.; Carmen, S.; Begona, A. Model-Based Classification Methods of Global Patterns in Dermoscopic Images. IEEE Trans. Med. Imaging 2014, 33, 1137–1147. [Google Scholar] [CrossRef]
  35. Pacheco, A.G.C.; Lima, G.R.; Salomão, A.S.; Krohling, B.; Biral, I.P.; de Angelo, G.G.; Alves, F.C., Jr.; Esgario, J.G.; Simora, A.C.; Castro, P.B.; et al. PAD-UFES-20: A skin lesion dataset composed of patient data and clinical images collected from smartphones. Data Brief 2020, 32, 106221. [Google Scholar] [CrossRef]
  36. Ballerini, L.; Fisher, R.B.; Aldridge, B.; Rees, J. A Color and Texture Based Hierarchical K-NN Approach to the Classification of Non-melanoma Skin Lesions. In Color Medical Image Analysis; Springer: Dordrecht, The Netherlands, 2013; pp. 63–86. [Google Scholar] [CrossRef]
  37. DermNet is supported by and contributed to by New Zealand Dermatologists on behalf of the New Zealand Dermatological Society Incorporated. Available online: (accessed on 15 November 2022).
  38. Jeremy, K.; Sara, D.; Giuseppe, A.; Ghassan, H. 7-Point Checklist and Skin Lesion Classification using Multi-Task Multi-Modal Neural Nets. IEEE J. Biomed. Health Inform. 2018, 23, 538–546. [Google Scholar] [CrossRef]
  39. Diniz, J.B.; Cordeiro, F.R. Automatic Segmentation of Melanoma in Dermoscopy Images Using Fuzzy Numbers. In Proceedings of the IEEE 30th International Symposium on Computer-Based Medical Systems, Thessaloniki, Greece, 22–24 June 2017; pp. 150–155. [Google Scholar] [CrossRef]
  40. Svetlana, S.; Svetislav, D.S.; Zorana, B.; Milana, I.S.; José, R.V.; Dragan, S. Deep Convolutional Neural Networks on Automatic Classification for Skin Tumour Images. Log. J. IGPL 2022, 30, 649–663. [Google Scholar]
  41. Giotis, I.; Molders, N.; Land, S.; Biehl, M.; Jonkman, M.F.; Petkov, N. MED-NODE: A computer-assisted melanoma diagnosis system using non-dermoscopic images. Expert Syst. Appl. 2015, 42, 6578–6585. [Google Scholar] [CrossRef]
  42. Andersen, L.B.; Fountain, J.W.; Gutmann, D.H.; Tarlé, S.A.; Glover, T.W.; Dracopoli, N.C.; Housman, D.E.; Collins, F.S. Mutations in the neurofibromatosis 1 gene in sporadic malignant melanoma cell lines. Nat. Genet. 1993, 3, 118–121. [Google Scholar] [CrossRef] [PubMed]
  43. Dermatology Information System. 2012. Available online: (accessed on 2 August 2018).
  44. DermQuest. 2012. Available online: (accessed on 2 August 2018).
  45. Hosny, K.M.; Kassem, M.A.; Foaud, M.M. Classification of skin lesions using transfer learning and augmentation with Alex-net. PLoS ONE 2019, 14, e0217293. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  46. Premaladha, J.; Sujitha, S.; Lakshmi Priya, M.; Ravichandran, K.S. A Survey on Melanoma Diagnosis using Image Processing and Soft Computing Techniques. Res. J. Inf. Technol. 2014, 6, 65–80. [Google Scholar] [CrossRef]
  47. Gouda, W.; Sama, N.U.; Al-Waakid, G.; Humayun, M.; Jhanjhi, N.Z. Detection of Skin Cancer Based on Skin Lesion Images Using Deep Learning. Healthcare 2022, 10, 1183. [Google Scholar] [CrossRef]
  48. Sudhamony, S.; Binu, P.J.; Satheesh, G.; IssacNiwas, S.; Sudalaimani, C.; Nandakumar, K.; Muralidharan, V.; Baljit, S.B. Nationwide Tele-Oncology network in India—A framework for implementation. In Proceedings of the HealthCom 2008—10th International Conference on e-health Networking, Applications and Services, Singapore, 7–9 July 2008. [Google Scholar]
  49. Abbas, Q.; Ramzan, F.; Ghani, M.U. Acral melanoma detection using dermoscopic images and convolutional neural networks. Vis. Comput. Ind. Biomed. 2021, 4, 25. [Google Scholar] [CrossRef]
  50. Amoabedini, A.; Farsani, M.S.; Saberkari, H.; Aminian, E. Employing the Local Radon Transform for Melanoma Segmentation in Dermoscopic Images. J. Med. Signals Sens. 2018, 8, 184–194. [Google Scholar] [CrossRef]
  51. Ramezani, M.; Karimian, A.; Moallem, P. Automatic Detection of Malignant Melanoma using Macroscopic Images. J. Med. Signals Sens. 2014, 4, 281–290. [Google Scholar]
  52. Ghosh, P.; Azam, S.; Quadir, R.; Karim, A.; Shamrat, F.M.; Bhowmik, S.K.; Jonkman, M.; Hasib, K.M.; Ahmed, K. SkinNet-16: A deep learning approach to identify benign and malignant skin lesions. Front. Oncol. 2022, 12, 931141. [Google Scholar] [CrossRef]
  53. Haohai, Z.; Zhijun, W.; Liping, L.; Fatima, R.S. A robust method for skin cancer diagnosis based on interval analysis. Automatika 2021, 62, 43–53. [Google Scholar] [CrossRef]
  54. Premaladha, J.; Ravichandran, K.S. Novel Approaches for Diagnosing Melanoma Skin Lesions Through Supervised and Deep Learning Algorithms. J. Med. Syst. 2016, 40, 96. [Google Scholar] [CrossRef]
  55. Pizer, S.M.; Amburn, E.P.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Greer, T.; ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K. Adaptive histogram equalization and its variations. Comput. Vis. Graph. Image Process. 1987, 39, 355–368. [Google Scholar] [CrossRef]
  56. Martínez, L.T.; Bibiloni, P.; González, H. Hair Segmentation and Removal in Dermoscopic Images Using Deep Learning. IEEE Access 2021, 9, 2694–2704. [Google Scholar] [CrossRef]
  57. Lee, T.; Ng, V.; Gallagher, R.; Coldman, A.; McLean, D. DullRazor: A software approach to hair removal from images. Comput. Biol. Med. 1997, 27, 533–543. [Google Scholar] [CrossRef] [PubMed]
  58. Salido, J.A.; Ruiz, C.R. Using morphological operators and inpainting for hair removal in dermoscopic images. In Proceedings of the Computer Graphics International Conference, Yokohama, Japan, 27–30 June 2017. [Google Scholar]
  59. Sivaraj, S.; Malmathanraj, R.; Palanisamy, P. Detecting anomalous growth of skin lesion using threshold-based segmentation algorithm and Fuzzy K-Nearest Neighbor classifier. J. Cancer Res. Ther. 2020, 16, 40–52. [Google Scholar] [CrossRef]
  60. Khan, A.H.; Latif, G.; Awang Iskandar, D.N.F.; Alghazo, J.; Butt, M. Segmentation of Melanoma Skin Lesions Using Anisotropic Diffusion and Adaptive Thresholding. In Proceedings of the 2018 8th International Conference on Biomedical Engineering and Technology (ICBET ‘18), Bali, Indonesia, 23–25 April 2018; Association for Computing Machinery: New York, NY, USA, 2018; pp. 39–45. [Google Scholar] [CrossRef]
  61. Xu, L.; Jackowski, M.; Goshtasby, A.; Roseman, D.; Bines, S.; Yu, C.; Dhawan, A.; Huntley, A. Segmentation of skin cancer images. Image Vis. Comput. 1999, 7, 65–74. [Google Scholar] [CrossRef]
  62. Wang, Y.-H. Tutorial: Image Segmentation; Graduate Institute of Communication Engineering National Taiwan University: Taipei, Taiwan, 2018. [Google Scholar]
  63. Khan, R.Z.; Ibraheem, N.A. Survey on Gesture Recognition for Hand Image Postures. Can. Cent. Comput. Inf. Sci. 2012, 5, 110–121. [Google Scholar] [CrossRef] [Green Version]
  64. Kaganami, H.G.; Beiji, Z. Region-Based Segmentation versus Edge Detection. In Proceedings of the IEEE Fifth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, Kyoto, Japan, 12–14 September 2009; pp. 1217–1221. [Google Scholar] [CrossRef]
  65. Barcelos, C.A.Z.; Pires, V.B. An automatic based nonlinear diffusion equations scheme for skin lesion segmentation. Appl. Math. Comput. 2009, 251–261. [Google Scholar] [CrossRef]
  66. Gurajala, R. Skin Cancer Detection Using Region Based Segmentation. Int. J. Innov. Sci. Technol. 2019, 6, 42–46. [Google Scholar]
  67. Jaisakthi, S.M.; Mirunalini, P.; Aravindan, C. Automated skin lesion segmentation of dermoscopic images using GrabCut and k-means algorithms. IET Comput. Vis. 2018, 12, 1088–1095. [Google Scholar] [CrossRef]
  68. Albahli, S.; Nida, N.; Irtaza, A.; Yousaf, M.H.; Mahmood, M.T. Melanoma Lesion Detection and Segmentation Using YOLOv4-DarkNet and Active Contour. IEEE Access 2020, 8, 198403–198414. [Google Scholar] [CrossRef]
  69. Park, H.; Schoepflin, T.; Kim, Y. Active contour model with gradient directional information: Directional snake. IEEE Trans. Circuits Syst. Video Technol. 2001, 11, 252–256. [Google Scholar] [CrossRef]
  70. Yuan, Y.; Situ, N.; Zouridakis, G. A narrow band graph partitioning method for skin lesion segmentation. Pattern Recognit. 2009, 42, 1017–1028. [Google Scholar] [CrossRef]
  71. Goyal, M.; Oakley, A.; Bansal, P.; Dancey, D.; Yap, M.H. Skin Lesion Segmentation in Dermoscopic Images With Ensemble Deep Learning Methods. IEEE Access 2020, 8, 4171–4181. [Google Scholar] [CrossRef]
  72. Ramadan, R.; Aly, S. CU-Net: A New Improved Multi-Input Color U-Net Model for Skin Lesion Semantic Segmentation. IEEE Access 2022, 10, 15539–15564. [Google Scholar] [CrossRef]
  73. Zhang, G.; Shen, X.; Chen, S.; Liang, L.; Luo, Y.; Yu, J.; Lu, J. DSM: A Deep Supervised Multi-Scale Network Learning for Skin Cancer Segmentation. IEEE Access. 2019, 7, 140936–140945. [Google Scholar] [CrossRef]
  74. Xie, Y.; Zhang, J.; Xia, Y.; Shen, C. A Mutual Bootstrapping Model for Automated Skin Lesion Segmentation and Classification. IEEE Trans. Med. Imaging 2020, 39, 2482–2493. [Google Scholar] [CrossRef] [Green Version]
  75. Chen, P.; Huang, S.; Yue, Q. Skin Lesion Segmentation Using Recurrent Attentional Convolutional Networks. IEEE Access 2022, 10, 94007–94018. [Google Scholar] [CrossRef]
  76. Wong, A.; Scharcanski, J.; Fieguth, P. Automatic Skin Lesion Segmentation via Iterative Stochastic Region Merging. IEEE Trans. Inf. Technol. Biomed. 2011, 15, 929–936. [Google Scholar] [CrossRef]
  77. Yuan, Y.; Lo, Y.C. Improving Dermoscopic Image Segmentation With Enhanced Convolutional-Deconvolutional Networks. IEEE J. Biomed. Health Inform. 2019, 23, 519–526. [Google Scholar] [CrossRef] [Green Version]
  78. Cavalcanti, P.G.; Scharcanski, J.; Lopes, C.B.O. Shading attenuation in human skin color images. Adv. Vis. Comput. 2010, 6453, 190–198. [Google Scholar]
  79. Cavalcanti, P.G.; Scharcanski, J. Automated prescreening of pigmented skin lesions using standard cameras. Comput. Med. Imaging Graph. 2011, 35, 481–491. [Google Scholar] [CrossRef] [PubMed]
  80. Yuan, Y.; Chao, M.; Lo, Y.C. Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance. IEEE Trans. Med. Imaging 2017, 36, 1876–1886. [Google Scholar] [CrossRef]
  81. Bagheri, F.; Tarokh, M.J.; Ziaratban, M. Skin Lesion Segmentation from Dermoscopic Images by using Mask R-CNN, Retina-Deeplab, and Graph-based Methods. Biomed. Signal Process. Control 2021, 67, 102533. [Google Scholar] [CrossRef]
  82. Poornima, J.J.; Anitha, J.; Priya, H.A.G. Clustering-Based Melanoma Detection in Dermoscopy Images Using ABCD Parameters. Adv. Intell. Syst. Comput. 2019, 766, 267–274. [Google Scholar] [CrossRef]
  83. Murugan, A.; Nair, S.A.H.; Preethi, A.A.P.; Kumar, K.P.S. Diagnosis of skin cancer using machine learning techniques. Microprocess. Microsyst. 2021, 81, 103727. [Google Scholar] [CrossRef]
  84. Annaby, M.H.; Elwer, A.M.; Rushdi, M.A. Melanoma Detection Using Spatial and Spectral Analysis on Superpixel Graphs. J. Digit. Imaging 2021, 34, 162–181. [Google Scholar] [CrossRef]
  85. Rehman, A.; Khan, M.A.; Mehmood, Z.; Saba, T.; Sardaraz, M.; Rashid, M. Microscopic melanoma detection and classification: A framework of pixel-based fusion and multilevel features reduction. Microsc. Res. Tech. 2020, 83, 410–423. [Google Scholar] [CrossRef] [PubMed]
  86. Hoshyar, A.N.; Al-Jumaily, A.; Hoshyar, A.N. Comparing the performance of various filters on skin cancer images. Procedia Comput. Sci. 2014, 42, 32–37. [Google Scholar] [CrossRef] [Green Version]
  87. Victor, A.; Ghalib, M.R. Detection of skin cancer cells—A review. Res. J. Pharm. Technol. 2017, 10, 4093–4098. [Google Scholar] [CrossRef]
  88. Guerra-Rosas, E.; Álvarez-Borrego, J. Methodology for diagnosing of skin cancer on images of dermatologic spots by spectral analysis. Biomed. Opt. Express 2015, 6, 3876–3891. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  89. Dorj, U.O.; Lee, K.K.; Choi, J.Y. The skin cancer classification using deep convolutional neural network. Multimed. Tools Appl. 2018, 77, 9909–9924. [Google Scholar] [CrossRef]
  90. Zhao, C.; Shuai, R.; Ma, L.; Liu, W.; Hu, D.; Wu, M. Dermoscopy Image Classification Based on StyleGAN and DenseNet201. IEEE Access 2021, 9, 8659–8679. [Google Scholar] [CrossRef]
  91. Zhang, J.; Xie, Y.; Xia, Y.; Shen, C. Attention Residual Learning for Skin Lesion Classification. IEEE Trans. Med. Imaging 2019, 38, 2092–2103. [Google Scholar] [CrossRef] [PubMed]
  92. Tang, P.; Liang, Q.; Yan, X.; Xiang, S.; Zhang, D. GP-CNN-DTEL: Global-Part CNN Model with Data-Transformed Ensemble Learning for Skin Lesion Classification. IEEE J. Biomed. Health Inform. 2020, 24, 2870–2882. [Google Scholar] [CrossRef] [PubMed]
  93. Carcagnì, P.; Ricci, E.; Rota Bulò, S.; Snoek, C.; Lanz, O.; Messelodi, S.; Sebe, N. Classification of Skin Lesions by Combining Multilevel Learnings in a DenseNet Architecture; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; Volume 11751. [Google Scholar] [CrossRef]
  94. Li, Y.; Shen, L. Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network. Sensors 2018, 18, 556. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Types of Skin Diseases.
Figure 2. Types of Image Pre-Processing.
Table 1. Different types of melanoma and non-melanoma [10,11,12,13,14].

Melanoma types:
  • Superficial Spreading Melanoma
  • Nodular Melanoma
  • Lentigo Maligna Melanoma
  • Amelanotic Melanoma
  • Rare melanoma types: Cutaneous Melanoma, Metastatic Melanoma, Mucosal Melanoma, Ocular Melanoma

Non-melanoma types:
  • Basal Cell Carcinoma
  • Squamous Cell Carcinoma
  • Merkel Cell Carcinoma
  • Cutaneous T-Cell Lymphoma
  • Kaposi Sarcoma
Table 2. Skin Image Dataset.

Reference | Dataset | No. of Skin Images | Melanoma Images | Other Skin Disease Images
Gutman, David, et al. (2016) [23] | ISIC-2016 | 1279 | 248 | 1031
Codella N., et al. (2017) [24] | ISIC-2017 | 2750 | 521 | 2229
Noel Codella, et al. (2018) [25]; Rehman M. (2018) [26] | ISIC-2018 (HAM10000, MSK) | — | — | —
Rehman M. (2018) [26]; Noel C. F. Codella, et al. (2018) [25]; Marc Combalia, et al. (2019) [27] | ISIC-2019 (BCN_20000, HAM10000, MSK) | — | — | —
Rotemberg, V., et al. (2021) [28] | ISIC-2020 | 33,126 | 6927 | 26,199
Lei Bi, et al. (2017) [29]; T. Y. Satheesha, et al. (2017) [30]; Omar Abuzaghleh, et al. (2015) [31]; Catarina Barata (2014) [32] | PH2 | 200 | 40 | 160
Argenziano, et al. (2000) [33]; Aurora Saez, et al. (2014) [34] | Interactive Atlas of Dermoscopy | 1000 | 270 | 730
Pacheco, Andre, et al. (2020) [35] | PAD-UFES-20 | 2298 | 52 | 2246
Rees, Aldridge, et al. (2013) [36] | Dermofit Image Library | 1300 | 76 | 1224
New Zealand Dermatological Society (2021) [37] | DermNet NZ | >21,000 | — | —
Jeremy Kawahara, et al. (2018) [38] | 7-Point Criteria Evaluation Database | 1011 | 252 | 759
Jessica B. Diniz, et al. (2017) [39] | ISDI | 571 | 125 | 446
Svetlana S., et al. (2017) [40] | Asan and Hallym Dataset | 17,250 | 599 | 16,651
I. Giotis, et al. (2015) [41] | MED-NODE Dataset | 170 | 70 | 100
Andersen, L. B. [42] | NCI GDC Portal | 2883 | 2333 | 550
Hosny K.M., et al. (2019) [43,44,45] | DIS and DermQuest | 206 | 119 | 87

(All dataset web links accessed on 15 November 2022.)
Table 3. Restoration of images by various filtering methods.

Author | Filter | Advantage | Disadvantage
Ghosh P. (2022) [52] | Mean Filter | Reduces noise while maintaining edges. | Suppresses the finer details in an image.
H. Zhang (2021) [53] | Median Filter | Eliminates noise while preserving edges. | Blurs the image in the process.
Pizer (1987) [55] | Adaptive Median Filter | Removes noise and enhances the image. | Replaces potentially noisy pixels but not regional features, such as edges.
Martínez, L.T. (2021) [56] | Gaussian Smoothing Filter | Sharpens and smooths images. | High-frequency image elements are distorted and removed.
Pizer (1987) [55] | Inverse Filter | Enhances the image from a blurred input. | Spectral indices with a clearly defined fringe.
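The trade-off summarised in Table 3 between removing impulse noise and preserving lesion borders is easiest to see with the median filter. The sketch below is illustrative only (pure Python on a toy 5×5 patch, with made-up values and function names); it is not taken from any of the surveyed implementations:

```python
from statistics import median

def median_filter_3x3(img):
    """Apply a 3x3 median filter to a 2-D grayscale image (list of lists).

    The neighbourhood is clamped at the borders, so every output pixel is
    the median of the valid pixels around it."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [img[j][i]
                     for j in range(max(0, y - 1), min(h, y + 2))
                     for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = median(neigh)
    return out

# A 5x5 patch of uniform intensity 10 with one salt-noise pixel (255):
patch = [[10] * 5 for _ in range(5)]
patch[2][2] = 255
clean = median_filter_3x3(patch)
print(clean[2][2])  # -> 10: the outlier is replaced by the local median
```

On this toy patch the single salt-noise pixel is replaced while the uniform background is untouched, which is the edge-preserving behaviour the table credits to median filtering.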
Table 4. Different soft computing-based segmentation methods.

Author | Dataset | Method | Accuracy | Specificity | Sensitivity | Other PA
S. Albahli, et al. [68,69,70] | ISIC 2016 | Active Contour | 93.9% | 95.2% | 94.2% | D-1
M. Goyal, et al. [71] | ISIC 2017 | End-to-End Ensemble Segmentation Method | 94.1% | 97.9% | 89.9% | -
R. Ramadan, et al. [72] | ISIC 2017 | Color U-Net Semantic Segmentation Deep Model | 93.13% | 96.21% | 83.64% | D-85.63%
R. Ramadan, et al. [72] | ISIC 2018 | Color U-Net Semantic Segmentation Deep Model | 94.58% | 95.85% | 91.57% | D-90.96%
G. Zhang, et al. [73] | ISIC 2017 | DSM Network | 94.3% | - | 85.9% | J-78.5%
Y. Xie, et al. [74] | ISIC 2017 | MB-DCNN Model | 93.8% | 87.4% | 96.8% | J-80.4%
P. Chen, et al. [75] | ISIC 2017 | Recurrent Attentional Convolutional Network (O-Net) | 94.71% | 96.3% | 89.70% | J-80.36%
A. Wong, et al. [76] | 60 real images | Iterative Stochastic Region Merging Method | - | - | 9.16% | TDR-93%
Y. Yuan, et al. [77] | ISBI 2016 | CDNN | 95.7% | 96.5% | 92.4% | J-76.5%
P.G. Cavalcanti, et al. [78,79] | Skin Images | Otsu-RGB | 85.0% | 85.5% | 92.2% | -
Y. Yuan, et al. [80] | ISBI 2016 | FCN ensemble | 95.5% | 96.6% | 91.8% | -
Bagheri, et al. [81] | Dermquest | Mask R-CNN | 99.25% | 99.64% | 94.92% | J-76.5%
Bagheri, et al. [81] | ISBI 2017 | Retina-Deeplab | 94.18% | 96.51% | 88.37% | J-80.04%

(D = Dice coefficient; J = Jaccard index; TDR = true detection rate.)
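Several threshold-based pipelines surveyed here, including the Otsu-RGB prescreening of Cavalcanti et al. in Table 4, rely on Otsu's method to separate the darker lesion from surrounding skin. A minimal, self-contained sketch of Otsu thresholding on a toy bimodal intensity list (the values and names are illustrative, not from the cited implementations):

```python
def otsu_threshold(pixels, levels=256):
    """Return the intensity threshold that maximises between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(levels))
    sum_bg = 0.0   # running weighted sum of the background class
    w_bg = 0       # running pixel count of the background class
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mu_bg = sum_bg / w_bg
        mu_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mu_bg - mu_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy bimodal "image": dark lesion pixels near 40, brighter skin near 200.
pixels = [40] * 50 + [45] * 30 + [200] * 60 + [210] * 40
t = otsu_threshold(pixels)
mask = [1 if p <= t else 0 for p in pixels]  # 1 = lesion (dark) pixel
```

The returned threshold falls in the valley between the two intensity modes, so the binary mask isolates the dark lesion pixels; the Otsu-RGB variant applies the same idea per colour channel.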
Table 5. The different types of skin cancer image feature extraction.

Author | Dataset | Feature | Method | Performance Analysis
Jacinth, et al. (2020) [82] | Med-Node | ABCD | Total Dermoscopy Score | Accuracy: 88%
Murugan A., et al. (2021) [83] | ISIC | GLCM | Support Vector Machine | Accuracy: 89.31%
Annaby M. H., et al. (2021) [84] | ISIC | Color, Geometry, Texture | Support Vector Machine | Accuracy: 97.40%
Rehman A., et al. (2020) [85] | PH2 | ABCD | Total Dermatoscopy Score (TDS) | Accuracy: 93.5%
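Among the descriptors in Table 5, the GLCM texture features used by Murugan et al. summarise how often pairs of gray levels co-occur at a fixed offset. A minimal pure-Python sketch of a co-occurrence matrix and the Haralick contrast feature; the tiny two-level patches are illustrative stand-ins for quantised dermoscopy images, not data from the cited work:

```python
def glcm(img, levels, dx=1, dy=0):
    """Normalised gray-level co-occurrence matrix for offset (dx, dy)."""
    h, w = len(img), len(img[0])
    m = [[0.0] * levels for _ in range(levels)]
    count = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y][x]][img[y2][x2]] += 1
                count += 1
    for i in range(levels):
        for j in range(levels):
            m[i][j] /= count
    return m

def glcm_contrast(m):
    """Haralick contrast: sum over all (i, j) of (i - j)^2 * P(i, j)."""
    return sum((i - j) ** 2 * p
               for i, row in enumerate(m)
               for j, p in enumerate(row))

smooth = [[0, 0, 0], [0, 0, 0], [1, 1, 1]]   # large uniform regions
checker = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]  # rapidly alternating texture
c_smooth = glcm_contrast(glcm(smooth, levels=2))
c_checker = glcm_contrast(glcm(checker, levels=2))
# the checkerboard texture yields a higher contrast than the smooth patch
```

Feature vectors built from such GLCM statistics (contrast, energy, homogeneity, correlation) are what the table's SVM classifiers consume.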
Table 6. The different types of skin cancer image classification.

Author | Dataset | Method | Sensitivity (%) | Specificity (%) | Accuracy (%)
Dorj U.O., et al. [89] | PH2 | SVM | 96 | 97 | -
Dorj U.O., et al. [89] | PH2 | Bag of Features (BoF) | 93 | 96 | -
Dorj U.O., et al. [89] | PH2 | Bag of Features (BoF) | 77 | 96 | -
Zhao C., et al. [90] | ISIC 2019 | SLA-StyleGAN | 85.6 | 96.1 | 96.4
Zhao C., et al. [90] | ISIC 2019 | DenseNet 201 | 68.2 | 95.6 | 98.84
Zhang J., et al. [91] | ISIC 2017 | ARL-CNN 50 | 65.8 | 89.6 | 85
Tang P., et al. [92] | ISIC 2016 | GP-CNN-DTEL | 32 | 99.7 | 86.3
Carcagnì, et al. [93] | ISBI 2017 | ResNet 50 + RA Pooling + Rank Opt | 60.7 | 88.4 | 83
Li Y., et al. [94] | ISIC 2017 | Lesion Indexing Network (LIN) | 50.4 | 93 | 85.2

Jeyakumar, J.P.; Jude, A.; Priya, A.G.; Hemanth, J. A Survey on Computer-Aided Intelligent Methods to Identify and Classify Skin Cancer. Informatics 2022, 9, 99.
