Hairless Image Preprocessing for Accurate Skin Lesion Detection and Segmentation
Abstract
1. Introduction
2. Related Works
3. Methodology
3.1. Dataset
3.2. Importance of Removing Skin Hair for Improved Skin Lesion Segmentation
3.3. Enhancing Skin Lesion and Hair Detection: Combining Mask and Original Images Using Various Methods
- Original: The first panel shows the original dermatoscopic image of the skin lesion, which is occluded by hair. Hair strands cross the lesion boundary, hindering accurate detection and segmentation of the lesion;
- BRG Mask: A color mask (rendered in blue-red-green) is applied to expose the edges and contours in the image. It highlights both hair and lesion boundaries as a first feature-identification step, visualizing the extent of hair interference while outlining the lesion region;
- BW (Black and White) Original + Mask: The image is overlaid with a black-and-white edge-detection mask. Edges are rendered in grayscale, with emphasis on the dominant contours. Some hair detail remains visible among the edges, so further processing is needed to fully isolate the lesion;
- Original + Mask: A contour line is drawn around the lesion boundary on the original image. This mask helps separate the lesion from surrounding artifacts such as hair; the hair itself is still present, making this an intermediate stage that focuses attention on the lesion rather than on the interference;
- Original + Mask Alt: An alternative version of the mask produces stronger contrast around the lesion boundary. This separates the lesion area from the hair more clearly, while still showing that hair is present;
- Original + BW Mask: A binary (black-and-white) mask of the lesion is applied and the surrounding areas are faded. Some hair edges are still visible, but the lesion is now far more prominent. This version, with its strong contrast between lesion and background, prepares the image for the final segmentation;
- BW Mask: The final black-and-white mask represents the lesion as a white region on a black background. At this stage the hair and other extraneous features have been removed, leaving a clean segmentation of the lesion that can be used for further analysis or fed directly into machine learning models.
4. Results
Impact of Preprocessing on Model Accuracy
5. Discussion
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| UV | Ultraviolet |
| HSV | Hue–Saturation–Value |
| ABCD | Asymmetry, Border, Color, Diameter |
| AI | Artificial Intelligence |
| DL | Deep Learning |
| CNN | Convolutional Neural Network |
| VGG16 | Visual Geometry Group Network with 16 layers |
| ResNet50 | Residual Network with 50 layers |
| InceptionV3 | Inception Network Version 3 |
| EfficientNet-B4 | Efficient Network, Variant B4 |
| DenseNet121 | Densely Connected Network with 121 layers |
| GAN | Generative Adversarial Network |
| BCC | Basal Cell Carcinoma |
| SCC | Squamous Cell Carcinoma |
| BRG | Blue-Red-Green |
| BW | Black and White |
| Mask Alt | Alternative Mask |
References
| Network | Accuracy (Original) | Accuracy (Hairless) |
|---|---|---|
| VGG16 | 88.00% | 88.40% |
| ResNet50 | 90.20% | 91.00% |
| EfficientNet-B4 | 91.20% | 91.80% |
| InceptionV3 | 91.50% | 91.60% |
| DenseNet121 | 90.40% | 91.50% |
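As a quick arithmetic check on the table above (values transcribed directly from it), the per-network accuracy gain from hairless preprocessing can be computed:

```python
# Accuracy values (%) transcribed from the table above.
original = {"VGG16": 88.00, "ResNet50": 90.20, "EfficientNet-B4": 91.20,
            "InceptionV3": 91.50, "DenseNet121": 90.40}
hairless = {"VGG16": 88.40, "ResNet50": 91.00, "EfficientNet-B4": 91.80,
            "InceptionV3": 91.60, "DenseNet121": 91.50}

# Gain in percentage points per network.
gains = {name: round(hairless[name] - original[name], 2) for name in original}
# Every network improves; DenseNet121 shows the largest gain (+1.10 points).
```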
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Pasaoglu, M.; Demirkan, I. Hairless Image Preprocessing for Accurate Skin Lesion Detection and Segmentation. Appl. Sci. 2026, 16, 1819. https://doi.org/10.3390/app16041819

