Article

Hairless Image Preprocessing for Accurate Skin Lesion Detection and Segmentation

by
Muhammet Pasaoglu
1,* and
Irem Demirkan
2,3
1
Department of Computer Engineering, Bahcesehir University, Yildiz, Ciragan Cd., 34349 Istanbul, Türkiye
2
Department of Biomedical Engineering, Bahcesehir University, Yildiz, Ciragan Cd., 34349 Istanbul, Türkiye
3
Laboratoire de Physique des Deux Infinis de Bordeaux, Centre National de la Recherche Scientifique (CNRS), 33170 Gradignan, France
*
Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(4), 1819; https://doi.org/10.3390/app16041819
Submission received: 21 December 2025 / Revised: 29 January 2026 / Accepted: 30 January 2026 / Published: 12 February 2026
(This article belongs to the Section Biomedical Engineering)

Abstract

Skin cancer is a widespread and fatal disease in which early and accurate detection is essential for effective treatment. Automated analysis of dermatoscopic images is hindered by artifacts such as hair, low contrast, and irregular lesion borders, which interfere with segmentation and classification. This study proposes an automated image preprocessing pipeline designed to remove artifacts while preserving lesion texture and boundaries. The method combines various computer vision techniques to produce a hairless dermatoscopic image, and lesion segmentation is subsequently performed using the HSV color space and binary masking. The effectiveness of the proposed preprocessing approach is evaluated using five state-of-the-art models: VGG16, ResNet50, InceptionV3, EfficientNet-B4, and DenseNet121.

1. Introduction

Skin cancer, a prevalent health issue worldwide, has been the subject of extensive research due to its severity, high incidence, and mortality rate. Consequently, early detection and accurate classification of skin cancer are the cornerstones of effective treatment and improved outcomes [1]. Excessive exposure to ultraviolet (UV) radiation from the sun is the most important cause of skin cancer. Moreover, genetic factors, immune system deficiencies, and some treatment modalities, including radiation therapy, can contribute to susceptibility and disease development. Skin cancer is commonly divided into two major categories: melanoma and non-melanoma skin cancer [2]. Melanoma, which arises in the outer layer of the skin, is the most dangerous and deadliest form, although it accounts for only about 1% of skin cancers [3], because it spreads rapidly to other parts of the body if left undiagnosed and untreated [4]. Melanoma develops mainly as a result of UV damage to melanocytes, the pigment-producing cells. It can appear as a new mole or as a change in an existing one, showing irregular borders, colors, and dimensions, and it can also develop in body parts that are not commonly exposed to sunlight. The majority of skin cancer cases, however, fall under non-melanoma types, such as basal cell carcinoma (BCC) and squamous cell carcinoma (SCC). Non-melanoma skin cancers originate in the epidermis, the outermost layer of the skin, in affected areas commonly referred to as lesions. They have a lower likelihood of spreading to other parts of the body and, compared with melanoma, are easier to cure; they also tend to behave differently and are often treated with different methods. Approximately 2 to 3 million cases of non-melanoma skin cancer are reported annually worldwide [5].
According to recent statistics, melanoma was projected to be the fifth most commonly diagnosed cancer in both men and women in 2022. Without timely diagnosis, both types of skin tumor can be fatal, which is why it is essential to recognize potentially dangerous malignant skin cancers. In clinical settings, the traditional diagnostic procedure used by expert dermatologists starts with visual examination of suspicious lesions with the naked eye, followed by dermoscopy under magnification and, if required, a biopsy to determine whether the lesion is benign or malignant and to classify the type of skin cancer [5]. Nevertheless, these approaches face several challenges, including the difficulty of analyzing lesions obscured by hair and the similarity in microscopic appearance between cancerous and noncancerous lesions; in other words, biopsy interpretation is subjective and prone to error. In addition, many communities have limited access to adequate healthcare services, including shortages of medical resources and specialized dermatological equipment. Existing diagnostic procedures can be invasive, time-consuming, and subject to erroneous outcomes due to human error, and these limitations contribute to the steady increase in the incidence of skin cancer. Therefore, more accurate, efficient, and automated diagnostic tools for skin cancer remain a goal for healthcare systems. In this context, instead of relying on manual interpretation, computer-based technologies such as image processing and Artificial Intelligence (AI), particularly Deep Learning (DL) models such as VGG16 [6], ResNet50 [6], InceptionV3 [7], EfficientNet-B4 [8], and DenseNet121 [6], have demonstrated significant potential to overcome this subjectivity.
Because clinical characteristics such as border irregularity (the B parameter in the ABCD rule) are measured from the lesion border, computerized analysis of skin cancer starts with the detection of lesion borders: features such as border irregularity or fuzziness are evaluated with respect to the detected borders. For visual examination, the American Center for the Study of Dermatology proposed the ABCD guide, focusing on asymmetry (A), border (B), color (C), and diameter (D), to aid the early diagnosis of skin cancers [9,10]. The ABCD rule assists physicians, dermatologists, and even non-experts in identifying features of melanoma for early diagnosis by offering a straightforward framework for flagging skin lesions that may require further investigation by biopsy. However, detecting such borders automatically without affecting the lesion texture is challenging because of the low contrast between the foreground (skin lesion) and background, diffused lesion borders, hair artifacts, and uneven color distribution within the lesion [11,12]. Recent advances in deep learning have shown significant potential for overcoming these challenges by enabling robust lesion segmentation and classification through architectures designed to handle complex patterns [12].
Hair artifacts in dermoscopic images are a prevalent hindrance to skin cancer diagnosis, as they make delineation of the lesion boundary difficult and degrade the performance of automated lesion detection systems. Several studies have addressed this challenge. Ref. [13] proposed a hair removal technique based on a median filter followed by morphological operations for feature preservation and hair artifact removal; while effective, this method may struggle with dense hair regions. Ref. [14], based on interpolation and morphological operations, prioritizes simplicity and compromises performance in cases of complex hair patterns. Ref. [15] proposed a deep learning approach using GANs that showed promising results but requires extensive computational resources and large training datasets.
Unlike these methods, our approach combines Canny edge detection with inpainting operations to robustly detect and remove hair while preserving lesion texture. Occluded regions are seamlessly reconstructed, and the clearer lesion boundary improves visualization for subsequent segmentation and classification. In this study, we introduce an automated skin cancer detection and classification method that integrates artifact removal, such as hair removal, to improve lesion border detection and classification accuracy. The findings address key shortcomings of traditional methods and advance computer-aided skin cancer diagnostics.

2. Related Works

For this study, we employed a variety of advanced techniques to improve the accuracy and efficiency of skin lesion detection and classification. These methods were chosen based on state-of-the-art computer vision and medical image analysis research addressing issues such as hair occlusion, noise, and complex feature extraction.
The Canny edge detection algorithm [16] is the most widely used edge detector; it can successfully detect hair edges in dermatoscopic images because it reduces noise while identifying important image boundaries. Inpainting for image reconstruction, such as the technique in [17], is extensively used to fill missing regions in images by interpolating surrounding pixel values, which is crucial for removing hair occlusions in this research. VGG16 [18] is a deep convolutional neural network for image classification composed of 16 weight layers that progressively process image information, and it has been applied to many medical imaging feature extraction tasks. ResNet50 [19] is a deep neural network architecture with 50 weight layers that uses residual connections to enable deeper architectures, improving feature extraction and classification accuracy in complex applications. DenseNet121 [20] has a total of 121 layers: one initial convolutional layer, four dense blocks, three transition layers, and one final classification layer; its dense connectivity between layers enhances feature reuse and improves the efficiency of image classification and segmentation. EfficientNet-B4, proposed by Tan and Le (2019) [21], is a Convolutional Neural Network (CNN) architecture that uses a compound scaling method to uniformly scale depth, width, and resolution, providing high accuracy with computational efficiency, which makes it highly applicable to dermatoscopic image analysis. InceptionV3, developed by Szegedy et al. (2016) [22], is a 48-layer convolutional neural network that combines multi-scale feature extraction with low computational cost and is widely used in image recognition tasks. Gaussian blur [23] is an image smoothing technique widely used to reduce noise and small variations in pixel intensity, improving edge detection results during preprocessing. Morphological operations [24], including dilation, erosion, and closing, are important in image processing for refining binary masks, filling small gaps in detected edges, and making segmentation more accurate. Lastly, binary masking for image segmentation, a fundamental step in this study, is widely used in medical image analysis to isolate regions of interest, as highlighted in the U-Net architecture [25].
Recent studies such as [26,27,28] further highlight the relevance of lightweight and few-shot learning models for medical image classification tasks.
Individually and in combination, these techniques form a robust and effective pipeline for processing dermatoscopic images. Each technique addresses a specific lesion detection or segmentation challenge, and together they improved the performance of the deep learning models used in this research, supporting the use of well-established basic methods despite the complexity of medical image analysis.

3. Methodology

3.1. Dataset

The dataset used in this research is the Melanoma Skin Cancer Dataset [29], which is publicly available on Kaggle and comprises 10,000 dermatoscopic images. The dataset was created to support research on melanoma detection and skin lesion classification and contains labeled images of many different skin conditions; we used it to develop and evaluate our hair removal preprocessing pipeline and to test deep learning models for skin lesion detection and segmentation. The dataset is organized into two main categories. The benign category contains images of benign skin conditions or lesions, which are required for distinguishing healthy skin and benign anomalies from cancerous ones. The malignant category contains skin samples with melanoma, a very serious and potentially deadly type of skin cancer, which are used to identify and categorize cancerous lesions. Each image is stored in JPEG format and accompanied by a label identifying the lesion type (benign or malignant).
The dataset reflects real difficulties encountered in dermatoscopic image analysis, specifically varying illumination and color and the presence of artifacts such as hair. With these challenges, shown in Figure 1, the dataset is well suited for testing our preprocessing methodology and evaluating the robustness of state-of-the-art neural networks in detecting and classifying skin lesions, thereby contributing to ongoing efforts to improve automated melanoma detection and the reliability of computer-aided diagnostic systems.
In our study, we applied a systematic approach to detect skin lesions in medical images: a two-step preprocessing stage to eliminate hair from the images and enhance lesion visibility, as presented in Figure 2, followed by lesion detection using image processing techniques. The methodology was applied to both benign and malignant skin lesions, with a comprehensive evaluation carried out using state-of-the-art neural networks.
Dermatoscopic images often contain hair that can cover parts of the lesion area and prevent accurate detection and segmentation. To deal with this issue, we applied a hair removal process consisting of several steps. The image is first converted to grayscale, since hair is usually strongly contrasted against skin tones, and then Gaussian blurred to reduce noise and smooth out small irregularities. Canny edge detection, with its two thresholds set to 30 and 20, is then used to isolate hair strand edges. Morphological closing with a 3 × 3 elliptical structuring element closes small gaps in the detected edges, producing more continuous hair contours. Finally, the detected hair is removed with the Telea inpainting algorithm from OpenCV, which uses surrounding pixel information to reconstruct the areas previously occluded by hair while preserving local texture, especially in densely haired lesions. This process produces hairless images for further lesion analysis.
With the hair removed, the lesions are fully exposed in the hairless images. The images were then converted to the HSV (hue, saturation, value) color space [30], and the saturation channel, which highlights color intensity disparities between lesions and the surrounding skin, was used; this improves the algorithm's ability to build accurate binary masks, especially under difficult lighting conditions. The saturation channel was Gaussian blurred with a 5 × 5 kernel to reduce noise, and Otsu's thresholding [31] was applied to generate a binary mask separating the lesion from the surrounding skin. Finally, the lesion contours obtained from this mask were detected and drawn on the original image to visually demarcate the affected areas.
We automated the hair removal and lesion detection steps with a custom Python 3.10 script and processed the entire dataset, split into benign and malignant lesion categories, which enabled us to process large volumes of dermatoscopic images efficiently. Outputs were stored in separate directories for further analysis.
To evaluate the proposed method, we used five widely adopted neural networks: VGG16, ResNet50, InceptionV3, EfficientNet-B4, and DenseNet121, all well known for their image classification performance. Each model was individually fine-tuned for the skin lesion detection task. The dataset was split into 70% training, 15% validation, and 15% testing. Each model was trained with a batch size of 32, a learning rate of 0.0001, and 20 epochs using the Adam optimizer, with early stopping based on validation loss. We then evaluated the models on both the original dataset and the hairless dataset produced by our preprocessing pipeline, which allowed us to assess the impact of hair removal on model performance under a fair comparison. Since this study primarily focuses on enhancing classification accuracy through preprocessing, we did not quantitatively evaluate the segmentation step with metrics such as IoU or Dice, as the dataset lacks labeled masks.
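The split and the stated hyperparameters can be captured in a small sketch; the constants come from the text, while the helper name and the fixed seed are illustrative assumptions (any deterministic shuffle would serve the same purpose).

```python
import random

# Training hyperparameters stated in the text.
BATCH_SIZE = 32
LEARNING_RATE = 1e-4
EPOCHS = 20

def split_dataset(items, train=0.70, val=0.15, seed=42):
    """Deterministic 70/15/15 train/validation/test split.
    The seed is an illustrative choice for reproducibility."""
    items = list(items)
    rng = random.Random(seed)
    rng.shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_val = int(n * val)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])
```

Splitting once with a fixed seed before any fine-tuning ensures that every model sees identical train, validation, and test partitions, which is what makes the later cross-model comparison fair.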
As indicated above, the evaluation was conducted on both the original dataset, which contains hair, and the hairless dataset created by our preprocessing pipeline. This dual-dataset approach allowed a rigorous comparison of how hair removal affects model performance: by comparing results on the original and hairless images, we quantified the degree to which hair impacted lesion detection accuracy and the benefit of our preprocessing method. The comparison was deliberately fair, since the models were tested on the same images with hair presence as the only difference, giving a thorough picture of how preprocessing affects deep learning models in medical image analysis and supporting the robustness of our conclusions.
The results were very promising: removing hair significantly improved skin lesion detection accuracy for several of the models, indicating that the networks handle image noise better after preprocessing and can identify lesion features more easily. The results consistently demonstrated this improvement as a consequence of hair removal in dermatoscopic image analysis and its impact on the accuracy of deep learning models in skin lesion detection. Integrating image preprocessing and hair removal with advanced neural networks thus yields better, more accurate, and more reliable diagnostic tools in dermatology.

3.2. Importance of Removing Skin Hair for Improved Skin Lesion Segmentation

Removing skin hair from lesion images is a critical preprocessing step that significantly enhances the accuracy and efficiency of lesion segmentation. Hair artifacts can obscure lesion boundaries, leading to inaccurate lesion and edge detection and poor segmentation results. By effectively isolating and removing these hair artifacts, we obtain cleaner, hairless images. This preprocessing step ensures that subsequent detection and segmentation processes focus solely on the lesion, improving the precision and reliability of the segmentation outcomes. As shown in Figure 3, hair removal leads to more accurate and effective image analysis, ultimately contributing to better segmentation results and network decisions.

3.3. Enhancing Skin Lesion and Hair Detection: Combining Mask and Original Images Using Various Methods

Combining extracted hair masks with the original images is a crucial step in enhancing the detection of skin lesions and hair occlusions so that the latter can be removed. Various views, namely Original, BRG (Blue-Red-Green) Mask, BW (black and white) Original + Mask, Original + Mask, Original + Mask Alt (alternative mask), Original + BW (black and white) Mask, and BW (black and white) Mask, as shown in Figure 4, offer distinct ways of integrating segmented lesions with the original images.
  • Original: The original dermatoscopic skin lesion image is presented; it contains hair occlusions. Hair strands cover the lesion boundaries, which hampers accurate detection and segmentation of the lesion;
  • BRG Mask: Here, a color mask (shown in blue-red-green colors) is applied to show the edges and contours inside the image. This mask highlights both hair and lesion boundaries as a first feature-identification step. The BRG mask allows us to visualize the extent of hair interference as well as to outline the lesion region;
  • BW (Black and White) Original + Mask: At this stage, the image is overlaid with a black and white edge detection mask. Edges inside the image are highlighted in grayscale, with emphasis on the prominent contours. Some hair detail is still visible in the edges, meaning further processing is needed to fully isolate the lesion;
  • Original + Mask: The lesion is marked by a contour line around its boundary on the original image. The mask helps separate the lesion from surrounding artifacts such as hair, but the hair remains in the image; this is an intermediate stage focused on the lesion rather than the other interference;
  • Original + Mask Alt: Here, a stronger contrast around the lesion boundary is created using an alternative version of the mask. This separates the lesion area more clearly from the hair, while still showing that hair is present;
  • Original + BW Mask: This view applies a binary (black or white) mask of the lesion and fades the surrounding areas. Some hair edges are still visible, but the lesion is now more prominent. This version, which creates a strong contrast between the lesion and the background, prepares the image for the final segmentation;
  • BW Mask: This is the final black and white mask, with the lesion represented as a white region and everything else as black background. At this stage, the hair and other extraneous features have been removed, leaving a clean segmentation of the lesion that can be used for further analysis or fed into machine learning models.

4. Results

To evaluate how well the proposed methodology works, experiments were conducted with five well-known neural network models: VGG16, ResNet50, InceptionV3, EfficientNet-B4, and DenseNet121. Each model was evaluated both on the original dataset of dermatoscopic images with hair occlusions and on the hairless dataset created by our preprocessing pipeline. Accuracy was used as the main metric to evaluate and compare the performance of each model in detecting skin lesions.

Impact of Preprocessing on Model Accuracy

Our experiments indicate that the hair removal preprocessing step improves model performance relative to the original dataset, as shown in Figure 5 and Table 1. Hair occlusions in dermatoscopic images often mimic lesion boundaries or introduce extraneous edges, leading to false positives or confusion in feature extraction. With hair removed from the images, only the relevant lesion features remain, without distraction from noisy, misleading textures, thereby improving the models' classification ability.
Specifically, Figure 5 shows that the training accuracy of VGG16 on the original dataset was 88.0%, increasing to 88.4% on the hairless dataset after applying our preprocessing pipeline. This small improvement indicates that VGG16 is relatively resistant to hair occlusions even with minimal preprocessing of the dataset before training. The model maintains robust performance, and removing the distracting boundary features and extraneous edges that commonly cause false positives or interfere with feature extraction allowed it to focus on the relevant lesion features.
Similarly, as shown in Figure 5 and Table 1, ResNet50 gains from preprocessing, improving from 90.2% on the original dataset to 91% on the hairless dataset. On the clearer images, its deep architecture with residual connections captured lesion features better, as hair removal eliminated the spurious edges introduced by hair occlusions that otherwise hamper feature extraction. This boost in accuracy underscores the benefit of the preprocessing pipeline for ResNet50.
EfficientNet-B4 displayed good performance, with 91.2% on the original dataset and 91.8% on the hairless dataset. Figure 5 and Table 1 indicate that EfficientNet-B4, with its compound scaling of depth, width, and resolution, handled both datasets well, although it performed slightly better on the hairless images. This improvement reflects EfficientNet-B4's ability to exploit clean input data, achieving marginally faster convergence and stronger detection precision in the absence of hair occlusions.
InceptionV3 showed a similar trend: high accuracy on both datasets, at 91.5% on the original dataset and 91.6% on the hairless dataset, thanks to its multi-scale architecture, which processes features at different levels of granularity. Nevertheless, as shown in Figure 5 and Table 1, the slight improvement on the hairless dataset produced by our preprocessing pipeline shows that the model benefited from the noise reduction, as the absence of hair made it easier to accurately delineate lesion boundaries without extraneous edges.
Finally, DenseNet121, with its densely connected architecture, also improved with the hair removal preprocessing, increasing accuracy from 90.4% on the original dataset to 91.5% on the hairless dataset, as shown in Figure 5 and Table 1. DenseNet121 performed reasonably well on the original images, but our preprocessing pipeline helped it focus on the relevant lesion features without distraction; this result suggests that further development could optimize DenseNet121's processing efficiency, potentially yielding even greater accuracy improvements.

5. Discussion

The comparison between the original and hairless datasets highlights the advantages of hair removal. On average, hair removal led to an approximate 0.5–1% increase in accuracy across models, with ResNet50 and EfficientNet-B4 showing the clearest improvements. This emphasizes the crucial role of preprocessing in improving detection performance, especially for small or subtle lesions in which hair would otherwise obscure the most critical features and cause classification errors. By reducing visual obstructions, the preprocessing step lets models focus on the important lesion characteristics rather than optical clutter, improving the reliability of diagnostic predictions.
The impact of hair removal was especially evident in models with deeper architectures, such as ResNet50 and DenseNet121, where clearer input data allowed the networks to better capture lesion features. Hair removal enabled consistent performance without the additional noise from hair occlusion, although the gain was only marginal in some cases, such as VGG16, where models must be further adjusted to capitalize on the clearer data. A limitation of this study is the absence of ground-truth masks in the dataset, preventing direct validation of lesion border detection performance.
The preprocessing step, which uses inpainting to eliminate hair occlusions, enhanced the accuracy of all deep learning models employed for skin lesion detection. The experimental results strongly support the effectiveness of our hair removal pipeline, with DenseNet121, EfficientNet-B4, and ResNet50 achieving the highest performance gains on the hairless dataset. These results indicate that the models benefited greatly from cleaner input images, which allowed them to focus solely on lesion-related features.
Our results provide well-grounded evidence that the hair removal preprocessing pipeline improves the accuracy of neural models in skin lesion detection and classification. By increasing lesion visibility and detection quality, it offers a promising path toward faster, more accurate, and automated skin lesion detection in clinical dermatology.

Author Contributions

Conceptualization, methodology, validation, formal analysis, investigation, data curation, writing, and visualization, M.P. and I.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Acknowledgments

The authors thank Erkut Arican for his helpful discussions and continuous support related to this study.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
UV	Ultraviolet
HSV	Hue–Saturation–Value
ABCD	Asymmetry, Border, Color, Diameter
AI	Artificial Intelligence
DL	Deep Learning
CNN	Convolutional Neural Network
VGG16	Visual Geometry Group Network with 16 layers
ResNet50	Residual Network with 50 layers
InceptionV3	Inception Network Version 3
EfficientNet-B4	Efficient Network, Variant B4
DenseNet121	Densely Connected Network with 121 layers
GAN	Generative Adversarial Network
BCC	Basal Cell Carcinoma
SCC	Squamous Cell Carcinoma
BRG	Blue-Red-Green
BW	Black and White
Mask Alt	Alternative Mask

Figure 1. Sample images from the melanoma skin cancer dataset.
Figure 2. Flowchart demonstrating the hair removal and skin lesion detection methodology.
Figure 3. (a) Original image vs. hairless image 5%. (b) Original image vs. hairless image 20%. (c) Original image vs. hairless image 35%. (d) Original image vs. hairless image 45%. Comparison of original images with hair and processed images with hair removed at different intensities for improved lesion visibility. Intensity percentages refer to the proportion of synthetic hair overlays applied to the lesion images, illustrating varying levels of severity before removal.
Figure 4. Combining mask and original images using various methods.
Figure 5. Accuracy comparison between original and hairless datasets across models.
Table 1. Comparison of accuracy results for different models on original and hairless datasets.
Network            Accuracy (Original)    Accuracy (Hairless)
VGG16              88.00%                 88.40%
ResNet50           90.20%                 91.00%
EfficientNet-B4    91.20%                 91.80%
InceptionV3        91.50%                 91.60%
DenseNet121        90.40%                 91.50%
