Proceeding Paper

NevusCheck: A Dysplastic Nevi Detection Model Using Convolutional Neural Networks †

by Andreluis Ingaroca-Torres *, Lucía Heredia-Moscoso * and Alvaro Aures-García
Faculty of Engineering, Peruvian University of Applied Sciences, Lima 15023, Peru
* Authors to whom correspondence should be addressed.
Presented at the III International Congress on Technology and Innovation in Engineering and Computing, Lima, Peru, 20–24 November 2023.
Eng. Proc. 2025, 83(1), 11; https://doi.org/10.3390/engproc2025083011
Published: 13 January 2025

Abstract

Dysplastic nevi are skin lesions with distinctive clinical features that are considered risk markers for the development of melanoma, the deadliest type of skin cancer. Convolutional neural networks (CNNs) are a deep learning technique well suited to disease identification because of their great capacity to extract features and classify objects. This research therefore aims to develop a model to diagnose dysplastic nevi using a deep learning network whose classification is based on the pre-trained EfficientNet-B7 architecture, selected for its high classification accuracy and low computational complexity. The classification model achieved an accuracy of 78.33%, and the degree of similarity between the detections of a dermatology expert and the proposed model reached 79.69%.

1. Introduction

The Skin Cancer Foundation [1] indicates that dysplastic nevi, also called atypical nevi, present certain distinctive characteristics, grouped under the acronym ABCDE: asymmetry, irregular borders, color variation, diameter greater than 6 mm, and changes in their evolution. According to Wang et al. [2], the presence of these nevi is associated with melanoma, one of the deadliest types of skin cancer in humans. This is corroborated by the study of Karaarslan et al. [3], which showed that having one or more dysplastic nevi carries a 2.4-fold increased risk of developing melanoma. However, Bandy et al. [4] mention that the survival rate of melanoma treatment reaches 92% if the disease is detected at an early stage. It is therefore important to implement systems for the detection of dysplastic nevi that allow people to take precautions against the possible development of melanoma. Nevertheless, there are several causes of late detection of this kind of lesion, such as a lack of effective skin self-examination practices, refusal of medical assistance, and misdiagnosis by a specialist.
Firstly, the lack of effective skin self-examination practices, as pointed out by Coups et al. [5], is mainly due to insufficient guidance from health professionals on the importance and proper technique of self-examination; poor use of tools and methods that could favor a thorough evaluation of the skin, such as mirrors or the help of another person; and lack of knowledge to identify pre-malignant lesions.
Secondly, in a study concerning the reasons for delays in seeking medical attention performed on 148 patients, it was found that 38.5% of patients refused to seek medical assistance because they did not wish to have a body examination, and 9.5% of patients mentioned that they were too busy to approach a physician [6].
Thirdly, misdiagnosis by a specialist may be due to two main reasons. On the one hand, fatigue or the mental state of the specialist may affect accuracy, resulting in an error rate of up to 20%, according to Sevli [7]. On the other hand, dermoscopy, a noninvasive method to detect abnormalities such as dysplastic nevi [8], is sometimes influenced by the dermatologist’s level of experience, which can negatively impact detection accuracy [9].
Currently, there are solutions for the detection of skin lesions using clinical images collected with a smartphone, although there is still room for improvement, since the accuracies of their models fail to exceed 77% [10,11]. This is due to the presence of many unnecessary artifacts and noise, as smartphone images are often of low quality and resolution compared to dermoscopic images [11]. In this regard, this research proposes a model for dysplastic nevi detection using a deep learning network, designed to be implemented mainly on mobile devices. Integrating the model into a mobile application will make it easier for people to use and access, although it can also be adapted for web platforms. The presented model treats preprocessing as an important phase prior to classification, aimed at improving image quality so that performance during medical image classification is improved [12]. In this context, the research uses convolutional neural networks for classification, since they have been shown to be effective in the classification of skin lesions, sometimes even surpassing the ability of expert dermatologists [3].
For the proposed model to perform efficient detection, the pre-trained convolutional neural network EfficientNet-B7 is trained only on clinical images of dysplastic nevi and common nevi drawn from the PAD-UFES-20 and PH2 datasets. PAD-UFES-20, which comes from the Dermatological Assistance Program (PAD) of the Federal University of Espírito Santo (UFES) in Brazil, includes images of basal cell carcinoma, melanoma, squamous cell carcinoma, nevi, actinic keratosis, and seborrheic keratosis [13]. PH2, which comes from the Dermatological Service of the Pedro Hispano Hospital in Portugal, contains images of common nevi, atypical nevi, and melanoma [14].
Accordingly, the deep learning model is built on the EfficientNet-B7 architecture to detect dysplastic nevi. This artificial intelligence approach stands out for its high efficiency in analyzing medical images, automatically extracting increasingly complex features as the image traverses the convolutional layers, from overall shape to variation at the nevus edges [7,15]. It is also suitable because of its low computational complexity: the architecture achieved a top-1 accuracy of 84.3% using only 66 million parameters and 37 billion FLOPS, making it significantly more efficient than other CNNs with similar accuracies. In particular, EfficientNet-B7 is 8.4 times smaller than the best model trained with GPipe, a library for training neural networks through distributed parallelism, which underlines its efficiency in terms of parameters and computational operations and makes it well suited to integration into resource-constrained mobile devices [16].
Mahbod et al. [17] proposed the MSM-CNN approach, which employs multiple multi-scale networks, such as EfficientNet-B0, EfficientNet-B1, and SeResNeXt-50, for skin lesion classification with an accuracy of 86.2%. In contrast, Baig et al. [15] introduced Light-Dermo, an architecture based on a lightweight convolutional neural network, using ShuffleNet-Light and preprocessing techniques to achieve a reported 99.14% accuracy in skin lesion classification. These approaches show significant advances in skin lesion detection, highlighting the diversity of strategies and models implemented to improve accuracy.
Ou et al. [10] developed a deep neural network with intra-modality self-attention and cross-modality cross-attention to fuse image data and metadata in skin lesion classification, achieving 76.8% accuracy. In contrast, Wong et al. [11] proposed a two-stage approach using Fast R-CNN and ResNet-50 for region-of-interest identification and binary classification, achieving a maximum accuracy of 69% by combining images and clinical data collected using a smartphone.
These approaches highlight the need to further explore other neural network architectures for better results in dysplastic nevi detection.

2. Materials and Methods

In this section, the fundamental stages of the proposed approach for the detection of dysplastic nevi are described in detail. These stages are data preprocessing and classification using the deep learning model. In addition, each of these aspects is explored in depth, detailing the techniques, tools, and approaches used to achieve an efficient and accurate system.

2.1. Deep Learning Model

The proposed model is based on the use of the EfficientNet-B7 architecture for the classification of dysplastic nevi. With the goal of improving the performance of the deep learning model, several techniques are applied in the preprocessing phase.
As shown in Figure 1, the proposed model is structured in two phases: each image first goes through the preprocessing phase and is then classified by the trained EfficientNet-B7 model into one of two possible labels, dysplastic nevus or non-dysplastic nevus.
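To make this two-phase structure concrete, the following minimal sketch shows how the phases could be composed at inference time; the helper names (`preprocess_image`, `classifier`) and the label ordering are illustrative placeholders rather than the authors' actual code.

```python
import torch

# Illustrative label order; the trained model's class indices are an assumption.
CLASSES = ["non-dysplastic nevus", "dysplastic nevus"]

def detect_nevus(image_path, classifier, preprocess_image):
    """Two-phase inference: preprocess the clinical image, then classify it.

    `preprocess_image` is assumed to apply the steps of Section 2.1.1 and
    return a normalized 3x224x224 tensor; `classifier` is the trained
    EfficientNet-B7 model of Section 2.1.2.
    """
    x = preprocess_image(image_path)          # phase 1: preprocessing
    classifier.eval()
    with torch.no_grad():
        logits = classifier(x.unsqueeze(0))   # phase 2: classification
    return CLASSES[int(logits.argmax(dim=1))]
```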

2.1.1. Preprocessing

This phase consists of seven techniques that serve different purposes, such as reducing unnecessary noise and improving the quality of each image. For example, Figure 2 shows the result of applying each of these techniques to an image of a dysplastic nevus.
  • Data augmentation: A total of 314 additional images of common nevi and 380 additional images of dysplastic nevi were generated by applying a sequence of transformations to the original images. These transformations involved horizontal and vertical flipping, brightness and contrast adjustment, Gaussian blur, Gaussian noise, and random rotation.
  • Resizing: Each image was resized to 224 × 224 pixels.
  • Hair removal: The algorithm employed combined morphological operations and inpainting techniques to remove or reduce the appearance of unwanted hairs in the image (this step, together with noise filtering and enhancement, is sketched in code after this list).
  • Noise filtering: The algorithm used was “Non-Local Means Denoising”, which works by taking patches of the image and looking for similar areas in the whole image, instead of relying only on a small neighborhood.
  • Super resolution: An interpolation technique was used to improve the visual appearance of the images.
  • Enhancement: The CLAHE (Contrast-Limited Adaptive Histogram Equalization) method was implemented to adaptively enhance the contrast of each image.
  • Segmentation: For segmentation, the UNet model was chosen, which is based on convolutional neural networks and captures both context and location features, since it uses an encoder and decoder [18].
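As a complement to the description above, the following is a minimal sketch of the hair removal, noise filtering, and enhancement steps using standard OpenCV functions; the kernel sizes and filter strengths are illustrative assumptions, since the paper does not report the exact values used.

```python
import cv2

def clean_nevus_image(img_bgr):
    """Hair removal, Non-Local Means denoising, and CLAHE enhancement for a
    clinical nevus image (BGR, uint8). Parameter values are illustrative."""
    # Hair removal: black-hat morphology highlights dark hair strands,
    # which are then filled in by inpainting.
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (17, 17))
    blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, kernel)
    _, hair_mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)
    no_hair = cv2.inpaint(img_bgr, hair_mask, 3, cv2.INPAINT_TELEA)

    # Noise filtering with Non-Local Means Denoising.
    denoised = cv2.fastNlMeansDenoisingColored(no_hair, None, 10, 10, 7, 21)

    # Enhancement: CLAHE applied to the lightness channel in LAB space.
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)
```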
In order to achieve good performance during segmentation, 1000 images were chosen from the training dataset for classification, and their masks were created manually. Subsequently, the UNet model was trained, receiving as input two types of images: the preprocessed image (after the resizing through enhancement steps) and its respective mask, as shown in Figure 3.
The UNet model reached its best loss value of 0.19 at epoch 48 of the 50 epochs for which it was trained and evaluated.
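A minimal training sketch for this segmentation step is given below. It assumes the U-Net implementation from the segmentation_models_pytorch library and a binary cross-entropy loss, neither of which is specified in the paper, and `seg_loader` stands for a hypothetical DataLoader yielding the (preprocessed image, mask) pairs of Figure 3.

```python
import torch
import segmentation_models_pytorch as smp

# Assumed implementation choices: a U-Net with a lightweight pretrained
# encoder and a single output channel (the binary nevus mask).
unet = smp.Unet(encoder_name="resnet18", encoder_weights="imagenet",
                in_channels=3, classes=1)
criterion = torch.nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(unet.parameters(), lr=1e-3)

def train_unet(seg_loader, epochs=50):
    """Train on (image, mask) batches; the paper reports a best loss of
    0.19 around epoch 48 of 50."""
    unet.train()
    for epoch in range(epochs):
        running = 0.0
        for images, masks in seg_loader:   # images: Nx3x224x224, masks: Nx1x224x224
            optimizer.zero_grad()
            loss = criterion(unet(images), masks.float())
            loss.backward()
            optimizer.step()
            running += loss.item()
        print(f"epoch {epoch + 1}: mean loss {running / len(seg_loader):.3f}")
```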

2.1.2. Classification

For binary classification, the EfficientNet-B7 architecture was used because of its high effectiveness in computer vision tasks such as image categorization [4]. The network was built using the PyTorch deep learning library.
For the purposes of training and validating the proposed model, the PAD-UFES-20 and PH2 datasets were used. The first dataset contains 44 images of dysplastic nevi and 236 images of common nevi [13], while the second contains 109 dysplastic nevus images and 78 common nevus images [14]. In addition, 37 images of dysplastic nevi were collected manually. In total, 190 images of dysplastic nevi and 314 images of common nevi were available. After data augmentation, the total number of dysplastic nevus images increased to 570 and the total number of common nevus images increased to 628.
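As an illustration of the data augmentation described in Section 2.1.1, the sketch below builds the listed transforms with the albumentations library; the library choice, probabilities, and limits are assumptions, since the paper does not report them.

```python
import albumentations as A
import cv2

# Illustrative augmentation pipeline covering the transforms listed in
# Section 2.1.1; probabilities and limits are not taken from the paper.
augment = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.5),
    A.GaussianBlur(blur_limit=(3, 7), p=0.3),
    A.GaussNoise(p=0.3),
    A.Rotate(limit=45, border_mode=cv2.BORDER_REFLECT, p=0.5),
])

def augment_image(img_rgb):
    """Return one randomly augmented copy of an RGB uint8 image."""
    return augment(image=img_rgb)["image"]
```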
In addition, the hyperparameters of the model were tuned exhaustively to optimize its performance. After experimenting with various combinations, the maximum accuracy was achieved at epoch 33 of the 50 training epochs with a learning rate of 0.001. The Adam optimizer and the cross-entropy loss function were employed.
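A minimal sketch of this classification setup in PyTorch is shown below. It assumes the torchvision implementation of EfficientNet-B7 with an ImageNet-pretrained backbone and a replaced two-class head; `train_loader` is a placeholder for a DataLoader over the preprocessed training images.

```python
import torch
import torch.nn as nn
from torchvision import models

# EfficientNet-B7 pretrained on ImageNet, with the final linear layer
# replaced for binary classification (dysplastic vs. non-dysplastic).
model = models.efficientnet_b7(weights=models.EfficientNet_B7_Weights.IMAGENET1K_V1)
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, 2)

criterion = nn.CrossEntropyLoss()                            # cross-entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)   # Adam, lr = 0.001

def train(train_loader, epochs=50):
    """Standard training loop; the paper reports peak accuracy at epoch 33 of 50."""
    model.train()
    for epoch in range(epochs):
        for images, labels in train_loader:                  # images: Nx3x224x224
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```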

3. Results

In this section, the performance of the proposed approach for dysplastic nevi detection is described in detail, in order to measure its accuracy and its potential to contribute to the early diagnosis of dysplastic nevi. First, the results achieved by the convolutional neural network model trained specifically for dysplastic nevi detection are presented. Subsequently, the degree of similarity between the detections made by the proposed model and the evaluation of a dermatology expert is analyzed.

3.1. Convolutional Neural Network Model Results

Table 1 presents the results obtained using the deep learning model alongside the proposals of Ou et al. [10] and Wong et al. [11]. The proposed method shows superior performance in terms of accuracy, reaching a maximum of 78.33% and thus surpassing the models of Ou et al. [10] and Wong et al. [11], which achieved lower accuracies.
As can be seen, the proposal by Ou et al. [10] focuses on the classification of six different types of skin lesions using images and meta-information. For this classification, they used the PAD-UFES-20 dataset, which provides a large collection of 2298 images of different types of skin lesions, including examples of melanoma, squamous cell carcinoma, seborrheic keratosis, nevus, actinic keratosis, and basal cell carcinoma, thus providing a diverse and representative basis for developing and evaluating their classification model. It is also worth noting that each image was accompanied by up to 21 clinical features, such as age, gender, and cancer history, among others. This approach achieved an accuracy of 76.8% and a balanced accuracy of 77.5%; in both cases, the uncertainty was 2.2%, and the metrics were calculated using the macro average.
The approach of Wong et al. [11] focuses on the binary classification of a lesion as malignant or benign. To implement an efficient model, they used three main datasets: (a) the Discovery Dataset, which includes 6819 images from 3853 patients, collected retrospectively at the Duke University Medical Center, with 57% malignant lesions and a variety of skin tones; (b) the Clinical Dataset, comprising 4130 images from 2270 patients, complemented with demographic data, lesion characteristics, and comorbidities, with 2537 images of malignant lesions and 1593 of benign lesions; and (c) ISIC 2018, with 10,015 images, of which 1954 correspond to malignant lesions and 8061 to benign lesions. Their model correctly identified malignant lesions with an accuracy of 71.3% at a classification threshold of 0.5, compared with accuracies of 77.9%, 69.9%, and 71.9% obtained by three board-certified dermatologists. The accuracy reported by these authors focuses on the correct classification of malignant lesions and is therefore centered on the positive class.
In this regard, the superior performance of the proposed model is largely due to three key factors. First, the implementation of various techniques to improve image quality and reduce unnecessary artifacts, such as noise reduction and enhancement filters, in contrast to the approaches of Ou et al. [10] and Wong et al. [11], which do not incorporate such techniques. Second, the use of a convolutional neural network model, namely UNet, to segment the image and localize the area of interest, which differs from the strategy of Ou et al. [10], who do not employ any image segmentation technique, and from the approach of Wong et al. [11], which is based on the Fast R-CNN architecture. Third, the choice of the EfficientNet-B7 architecture for the classification task, in contrast to Wong et al. [11] and Ou et al. [10], who used the ResNet-50 model and a fully connected layer-based classifier, respectively.

3.2. Degree of Similarity Between Expert Detection and the Proposed Model

A study was conducted to evaluate and compare two methods of detecting dysplastic nevi. One of these methods was carried out by a dermatology expert who performed a visual and clinical analysis of the nevi in the sampled patients; her diagnoses were based on her medical expertise and were recorded in detail. To ensure that the dermatologist had appropriate conditions for accurate diagnoses, a specific protocol was followed:
Evaluation Conditions: Assessments were conducted in a suitable clinical environment, with controlled lighting, ensuring that the dermatologist had access to the necessary tools to make a detailed and accurate diagnosis.
Information Provided: During the evaluations, the dermatologist obtained information on the possible family history of skin disease through direct interviews with patients, which allowed her to contextualize each case within her clinical evaluation. Informed consent was obtained from the patients for the use of their clinical data and images in the study.
Adequate Time: The dermatologist was guaranteed the time necessary to perform an exhaustive analysis of each nevus, without restrictions that could have affected the quality of the diagnosis.
Simultaneously, the deep learning model proposed in the study was used to detect dysplastic nevi from digital images of the same nevi evaluated by the expert. Each nevus was photographed under controlled conditions to ensure consistent image quality, and the images were used exclusively to compare the diagnostic effectiveness of the model with the dermatologist's direct assessment.
The investigation was carried out on a sample of 50 individuals who together had a total of 64 nevi. In some cases, more than one nevus was evaluated in the same patient. The distribution of the samples can be seen in Figure 4.
From the set of 64 nevi evaluated, the dermatologist diagnosed 2 as dysplastic and 62 as non-dysplastic, while NevusCheck identified 13 dysplastic nevi and 51 non-dysplastic nevi. Figure 5 illustrates that NevusCheck correctly identified one of the two dysplastic nevi diagnosed by the expert. Moreover, the application correctly classified 50 of the 62 nevi that the expert diagnosed as non-dysplastic.
Given these results, the degree of similarity between the detection by the dermatology expert and the "NevusCheck" model was 79.69%, since 51 of the 64 nevi were classified in the same way. This level of agreement is remarkably close to the 78.33% accuracy obtained during validation of the deep learning model.
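The agreement figure follows directly from the counts above; the short check below reproduces the arithmetic (the counts are taken from the text, and the snippet is purely illustrative).

```python
# Counts reported in Section 3.2: the model agreed with the expert on
# 1 of the 2 dysplastic nevi and on 50 of the 62 non-dysplastic nevi.
agreed_dysplastic = 1
agreed_non_dysplastic = 50
total_nevi = 64

agreement = (agreed_dysplastic + agreed_non_dysplastic) / total_nevi
print(f"Degree of similarity: {agreement:.2%}")  # 79.69%
```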

4. Discussion

This section discusses the success of the proposed deep learning approach for the detection of dysplastic nevi using convolutional neural networks.
The effectiveness of the deep learning model, which reached an accuracy of 78.33%, rests on established techniques such as denoising to reduce unnecessary noise and the UNet model for segmentation, which support the choice of the EfficientNet-B7 architecture for classification.
Likewise, the model's performance also benefits from the strategic use of the PAD-UFES-20 and PH2 datasets, expanded through data augmentation to a final set of 1198 images (570 dysplastic nevi and 628 common nevi), which contributes to its robust performance.
In the comparison study between the evaluation of the dermatology expert and the “NevusCheck” model, a degree of similarity of 79.69% was achieved. Although the results of the deep learning model and the dermatologist differed in some cases, the proximity in the degree of similarity supports the clinical usefulness of the model as a complementary tool, considering the speed and accessibility it could offer in its implementation in mobile applications. Furthermore, it is essential to highlight that such a tool should not be considered as a replacement for consultation with a healthcare professional; rather, it acts as a complementary resource that can provide a preliminary assessment.
However, limitations in the study are recognized, especially in relation to the number of images collected with smartphones during model training. It is emphasized that the number of images of dysplastic and common nevi collected with smartphones is limited, specifically 44 dysplastic nevi images and 236 common nevi images, totaling 280 images. This highlights the need for greater focus on mobile data collection to ensure optimal application efficiency in daily use.
Future work includes extending the dataset with images acquired using mobile devices, with the purpose of increasing the accuracy achieved by the model in this study. Likewise, it is recommended to explore and evaluate different convolutional neural network architectures with a view to discovering possible advances in the classification of dysplastic nevi. All of these directions strengthen the ability of this research to contribute to improved medical care and greater skin health awareness through innovative approaches.

Author Contributions

Conceptualization, A.A.-G., A.I.-T. and L.H.-M.; methodology, A.A.-G., A.I.-T. and L.H.-M.; software, A.I.-T. and L.H.-M.; validation, A.A.-G., A.I.-T. and L.H.-M.; formal analysis, A.A.-G., A.I.-T. and L.H.-M.; investigation, A.A.-G., A.I.-T. and L.H.-M.; resources, A.I.-T. and L.H.-M.; data curation, A.I.-T. and L.H.-M.; writing—original draft preparation, A.A.-G., A.I.-T. and L.H.-M.; writing—review and editing, A.A.-G., A.I.-T. and L.H.-M.; visualization, A.A.-G., A.I.-T. and L.H.-M.; supervision, A.A.-G.; project administration, A.A.-G., A.I.-T. and L.H.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The PAD-UFES-20 dataset is available online (https://data.mendeley.com/datasets/zr7vgbcyr2/1 (accessed on 10 September 2023)). Also, the PH2 dataset is available online (https://www.kaggle.com/datasets/athina123/ph2dataset (accessed on 15 September 2023)). However, the evaluation study dataset cannot be provided, due to the sensitivity of the data.

Acknowledgments

The authors extend their appreciation to the Peruvian University of Applied Sciences. Special thanks to Gloria Velazco Mogrovejo for her invaluable contributions as a dermatologist to the case study, enriching the research with her expertise.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. The Skin Cancer Foundation. Lunares Atípicos (Nevos Displásicos). The Skin Cancer Foundation. 2021. Available online: https://cancerdepiel.org/cancer-de-piel/nevos-displasicos (accessed on 15 May 2023).
  2. Wang, G.; Yan, P.; Tang, Q.; Yang, L.; Chen, J. Multiscale Feature Fusion for Skin Lesion Classification. BioMed Res. Int. 2023, 2023, 5146543. [Google Scholar] [CrossRef] [PubMed]
  3. Karaarslan, I.K.; Yagci, A.; Acar, A.; Sahin, A.; Ozkapu, T.; Palamar, M.; Ozdemir, F. Is it necessary to perform eye examination for patients with cutaneous atypical nevi? Dermatol. Ther. 2020, 33, e14503. [Google Scholar] [CrossRef]
  4. Bandy, A.D.; Spyridis, Y.; Villarini, B.; Argyriou, V. Intraclass Clustering-Based CNN Approach for Detection of Malignant Melanoma. Sensors 2023, 23, 926. [Google Scholar] [CrossRef]
  5. Coups, E.J.; Manne, S.L.; Stapleton, J.L.; Tatum, K.L.; Goydos, J.S. Skin self-examination behaviors among individuals diagnosed with melanoma. Melanoma Res. 2016, 26, 71–76. [Google Scholar] [CrossRef] [PubMed]
  6. Xavier, M.H.; Drummond-Lage, A.P.; Baeta, C.; Rocha, L.; Almeida, A.M.; Wainstein, A.J. Delay in cutaneous melanoma diagnosis. Medicine 2016, 95, e4396. [Google Scholar] [CrossRef]
  7. Sevli, O. A deep convolutional neural network-based pigmented skin lesion classification application and experts evaluation. Neural Comput. Appl. 2021, 33, 12039–12050. [Google Scholar] [CrossRef]
  8. Usama, M.; Naeem, M.; Mirza, F. Multi-Class Skin Lesions Classification Using Deep Features. Sensors 2022, 22, 8311. [Google Scholar] [CrossRef] [PubMed]
  9. Lee, J.R.H.; Pavlova, M.; Famouri, M.; Wong, A. Cancer-Net SCa: Tailored deep neural network designs for detection of skin cancer from dermoscopy images. BMC Med. Imaging 2022, 22, 143. [Google Scholar] [CrossRef] [PubMed]
  10. Ou, C.; Zhou, S.; Yang, R.; Jiang, W.; He, H.; Gan, W.; Chen, W.; Qin, X.; Luo, W.; Pi, X.; et al. A deep learning based multimodal fusion model for skin lesion diagnosis using smartphone collected clinical images and metadata. Front. Surg. 2022, 9, 1029991. [Google Scholar] [CrossRef] [PubMed]
  11. Wong, S.C.; Ratliff, W.; Xia, M.; Park, C.; Sendak, M.; Balu, S.; Henao, R.; Carin, L.; Kheterpal, M.K. Use of convolutional neural networks in skin lesion analysis using real world image and non-image data. Front. Med. 2022, 9, 946937. [Google Scholar] [CrossRef] [PubMed]
  12. Barin, S.; Güraksin, G.E. An automatic skin lesion segmentation system with hybrid FCN-ResAlexNet. Eng. Sci. Technol. Int. J. 2022, 34, 101174. [Google Scholar] [CrossRef]
  13. Pacheco, A.G.C.; Lima, G.R.; Salomão, A.S.; Krohling, B.A.; Biral, I.P.; De Angelo, G.G.; Alves, F.C.R.; Esgario, J.G.M.; Simora, A.C.; Castro, P.B.C.; et al. PAD-UFES-20: A skin lesion dataset composed of patient data and clinical images collected from smartphones. Data Brief 2020, 32, 106221. [Google Scholar] [CrossRef]
  14. Mendonça, T.; Ferreira, P.L.; Marques, J.S.; Marçal, A.R.S.; Rozeira, J. PH 2—A Dermoscopic Image Database for Research and Benchmarking. In Proceedings of the 2013 35th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Osaka, Japan, 3–7 July 2013. [Google Scholar] [CrossRef]
  15. Baig, A.R.; Abbas, Q.; Almakki, R.; Ibrahim, M.E.A.; AlSuwaidan, L.; Ahmed, A.E.S. Light-Dermo: A Lightweight Pretrained Convolution Neural Network for the Diagnosis of Multiclass Skin Lesions. Diagnostics 2023, 13, 385. [Google Scholar] [CrossRef] [PubMed]
  16. Tan, M.; Le, Q. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. In Proceedings of the 36th International Conference on Machine Learning, Long Beach, CA, USA, 9–15 June 2019. [Google Scholar]
  17. Mahbod, A.; Schaefer, G.; Wang, C.; Dorffner, G.; Ecker, R.; Ellinger, I. Transfer learning using a multi-scale and multi-network ensemble for skin lesion classification. Comput. Methods Programs Biomed. 2020, 193, 105475. [Google Scholar] [CrossRef]
  18. Anand, V.; Gupta, S.; Koundal, D.; Singh, K. Fusion of U-Net and CNN model for segmentation and classification of skin lesion from dermoscopy images. Expert Syst. Appl. 2023, 213, 119230. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the proposed model for the detection of dysplastic nevi.
Figure 2. Preprocessing techniques applied to a dysplastic nevus image.
Figure 3. Flowchart of the UNet model for image segmentation.
Figure 4. Distribution of the samples of 50 people.
Figure 5. Comparison of dysplastic nevi detections by a dermatology expert and the proposed model.
Table 1. Comparative table of performance metrics.
Metric | Proposed Method | Ou et al. (2022) [10] | Wong et al. (2022) [11]
Accuracy | 78.33% | 76.80% | 71.31%
Precision | 68.63% | - | 68.21%
AUC | 71.13% | 94.70% | 78.83%