An Ingeniously Designed Skin Lesion Classification Model Across Clinical and Dermatoscopic Datasets
Abstract
1. Introduction
- Single-Modality Dependency: Existing models are exclusively trained and tested on datasets containing only one type of image data (either dermatoscopic lesion images or clinical high-definition images). This restricts their applicability to a single imaging modality, limiting real-world usage scenarios and reducing diagnostic confidence. For instance, a model trained solely on dermatoscopic images may fail to generalize to clinical photographs with different lighting and magnification.
- Severe Class Imbalance: All datasets exhibit significant imbalance in the number of samples across lesion categories. During training, models tend to overfit to majority classes and forget the features of minority classes. This imbalance introduces bias, causing models to favor dominant categories and underperform in rare lesion detection. For example, melanocytic nevi (with abundant samples) may overshadow vascular lesions (with scarce samples), leading to misdiagnosis of rare conditions.
- Lightweight Model Design for Resource-Constrained Systems: Developing a practical lightweight model that balances local feature extraction (e.g., texture details) and global context awareness (e.g., lesion architecture), while being deployable on medical IoT chips with limited computational resources, remains unsolved. Current models either excel in accuracy but are too heavy for edge devices or are lightweight but sacrifice multi-scale feature learning.
Major Contributions
- Hybrid Dataset Construction: We constructed a hybrid dataset comprising high-definition local clinical lesion images and dermatoscopic images using multi-center public datasets. This dataset is designed to train a lesion disease recognition and screening model capable of handling both clinical images and dermatoscopic photography, addressing the limitation of single-modality models and expanding applicability across diverse clinical scenarios.
- HybridSkinFormer Model Architecture: To address the challenge of classifying multi-type lesion images, we propose a deep recognition model called HybridSkinFormer. This model employs a two-stage feature extraction strategy: (a) Local feature extraction via a multi-layer ConvNet, capturing fine-grained details such as texture and color variations in lesions. (b) Global feature fusion using a residual-learnable multi-head attention module, enabling the model to integrate contextual information across the entire image. Additionally, we introduce a novel activation function, StarPRelu, to mitigate the “dying ReLU” problem by preserving negative gradient flow. To tackle class imbalance in the training data, we enhance the Focal Loss with an adaptive scaling mechanism, resulting in the Enhanced Focal Loss (EFLoss). EFLoss dynamically adjusts loss weights based on class sample ratios and current loss values, improving minority class representation during training.
- Adequate Experimental Validation: The trained model was evaluated on a test set disjoint from the training data and compared against state-of-the-art lightweight deep image classification models. Results demonstrate that HybridSkinFormer achieves optimal or comparable performance across all metrics, highlighting its effectiveness in multi-modality lesion classification and robustness to class imbalance.
2. Related Works
3. Materials and Methods
3.1. Datasets
3.2. Data Preprocessing and Augmentation
- When the input pixel value $r$ is below the lower breakpoint $r_1$, the output pixel value is $s = \frac{s_1}{r_1} \cdot r$, which enhances the contrast of darker regions.
- When $r_1 \le r \le r_2$, the output pixel value is calculated as $s = s_1 + \frac{s_2 - s_1}{r_2 - r_1}(r - r_1)$. This linear transformation adjusts the contrast of the middle-gray-level region.
- When $r > r_2$, the output pixel value is $s = s_2 + \frac{L - 1 - s_2}{L - 1 - r_2}(r - r_2)$, where $L$ is the number of gray levels; this adjusts the contrast of brighter regions.
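The three-segment transform above can be sketched as follows. The breakpoint names `(r1, s1)` and `(r2, s2)` and the gray-level ceiling `l_max` are assumptions standing in for the paper's notation:

```python
import numpy as np

def piecewise_linear_stretch(img, r1, s1, r2, s2, l_max=255.0):
    """Piecewise linear contrast stretch with breakpoints (r1, s1), (r2, s2).

    Dark region   [0, r1):  out = (s1 / r1) * r
    Mid region   [r1, r2]:  out = s1 + (s2 - s1) / (r2 - r1) * (r - r1)
    Bright region (r2, L]:  out = s2 + (L - s2) / (L - r2) * (r - r2)
    """
    r = img.astype(np.float64)
    out = np.empty_like(r)
    dark = r < r1
    mid = (r >= r1) & (r <= r2)
    bright = r > r2
    out[dark] = (s1 / r1) * r[dark]
    out[mid] = s1 + (s2 - s1) / (r2 - r1) * (r[mid] - r1)
    out[bright] = s2 + (l_max - s2) / (l_max - r2) * (r[bright] - r2)
    return np.clip(out, 0, l_max).astype(np.uint8)
```

Choosing `s1 < r1` and `s2 > r2` stretches the mid-gray range while compressing the extremes, which is the usual way this transform boosts lesion contrast.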
3.3. HybridSkinFormer Model
3.3.1. Global Framework
Algorithm 1: Forward Framework
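As a rough illustration of the two-stage forward framework, the sketch below chains a ConvNet stem into stacked pre-normalized attention layers. The stem topology and class count are assumptions; the embedding width, head count, and depth follow the setup table reported in Section 4:

```python
import torch
import torch.nn as nn

class HybridForward(nn.Module):
    """Hypothetical sketch of Algorithm 1: a small ConvNet stem extracts
    local features, whose spatial positions are then treated as tokens and
    fused by a pre-norm transformer encoder. Not the exact architecture."""

    def __init__(self, num_classes=8, embed_dim=180, num_heads=30, depth=6):
        super().__init__()
        # Stage (a): multi-layer ConvNet for local texture/color features.
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, embed_dim, 3, stride=2, padding=1),
            nn.BatchNorm2d(embed_dim), nn.ReLU(),
        )
        # Stage (b): pre-normalized attention layers for global context.
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        f = self.stem(x)                       # (B, C, H', W')
        tokens = f.flatten(2).transpose(1, 2)  # (B, H'*W', C)
        tokens = self.encoder(tokens)
        return self.head(tokens.mean(dim=1))   # pooled tokens -> logits
```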
3.3.2. Global Feature Fusion Module
Algorithm 2: Multi-head attention module with learnable residual connection and pre-normalization
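A minimal sketch of such a block, assuming the learnable residual weight is a single scalar initialized to 1 (its exact shape and initialization in Algorithm 2 may differ):

```python
import torch
import torch.nn as nn

class ResidualLearnableMHA(nn.Module):
    """Pre-normalized multi-head attention whose residual branch is scaled
    by a learnable scalar, letting the network modulate how much attended
    global context is mixed back into the input stream."""

    def __init__(self, embed_dim=180, num_heads=30):
        super().__init__()
        self.norm = nn.LayerNorm(embed_dim)      # pre-normalization
        self.attn = nn.MultiheadAttention(embed_dim, num_heads,
                                          batch_first=True)
        self.alpha = nn.Parameter(torch.ones(1)) # learnable residual weight

    def forward(self, x):
        h = self.norm(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        return x + self.alpha * attn_out         # learnable residual connection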
3.4. Adaptive Activation Function
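The exact StarPRelu formula is defined in this section. As a reference point for the “dying ReLU” discussion, the PReLU-style sketch below shows how a learnable negative slope preserves gradient flow for negative inputs; it is not StarPRelu itself:

```python
import torch
import torch.nn as nn

class PReluLike(nn.Module):
    """Reference sketch only: f(x) = x for x >= 0 and a*x for x < 0, with a
    learnable slope a. A nonzero negative slope keeps gradients flowing for
    negative pre-activations, avoiding the 'dying ReLU' problem that
    StarPRelu also targets."""

    def __init__(self, init_slope=0.25):
        super().__init__()
        self.slope = nn.Parameter(torch.tensor(init_slope))

    def forward(self, x):
        return torch.where(x >= 0, x, self.slope * x)
```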
3.5. Enhanced Focal Loss
- $\mathbf{y}_i$ is the ground-truth probability vector of sample $i$; $y_{ij}$ is the ground-truth probability of class $j$.
- $\mathbf{p}_i$ is the predicted probability vector of sample $i$; $p_{ij}$ is the predicted probability of class $j$.
- $\gamma$ and $\alpha$ are the hyperparameters, with $\gamma \ge 0$ and $\alpha \in (0, 1)$.
- The per-class scaling factor weights each class $j$: the larger the proportion of samples in class $j$, the smaller the factor. It is computed from each class's sample proportion in the training set.
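The ingredients above can be sketched as a focal loss modulated by per-class weights that shrink as a class's sample proportion grows. The inverse-frequency weighting and its normalization below are illustrative assumptions, not EFLoss's exact adaptive rule (which also adapts to current loss values):

```python
import torch
import torch.nn.functional as F

def weighted_focal_loss(logits, targets, class_counts, gamma=3.0, alpha=0.3):
    """Focal loss with per-class weights derived from class frequencies.
    Rare classes get larger weights, counteracting majority-class bias."""
    # Per-class weight: inverse of the class's sample proportion,
    # rescaled so the weights average to 1 across classes.
    props = class_counts / class_counts.sum()
    w = 1.0 / props
    w = w / w.sum() * len(class_counts)
    logp = F.log_softmax(logits, dim=1)
    p_t = logp.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
    # Standard focal term: down-weight easy (high-confidence) samples.
    focal = alpha * (1 - p_t) ** gamma * (-torch.log(p_t + 1e-12))
    return (w[targets] * focal).mean()
```

With a 9:1 class split, a misclassified minority-class sample contributes roughly nine times the loss of an equally confident majority-class sample, which is the mechanism EFLoss leverages to improve minority-class representation.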
4. Experiment Evaluation and Discussion
4.1. Experimental Setup and Main Results
- Confusion between Melanocytic Nevus (NV) and Melanoma (MEL): In clinical or dermatoscopic images, some melanocytic nevi (NV) may exhibit characteristics indistinguishable from melanoma (MEL). In terms of color, early-stage or certain special types of MEL may show pigment distribution patterns similar to those of NV. Morphologically, atypical NV with local inflammatory cell infiltration may also present irregular shapes, consistent with the morphology of early-stage low-grade MEL.
- Confusion between Dermatofibroma (DF) and Melanocytic Nevus (NV): For dermatofibroma (DF) and melanocytic nevus (NV), pigmented DFs with tan or dark brown hues are nearly identical in color to NV, making accurate differentiation by color alone challenging. Morphologically, DF typically appears as a firm, elevated, oblate, or button-shaped nodule with a smooth surface; NV can also present as an elevated nodule with a smooth or slightly rough surface, resulting in morphological similarity.
- Confusion between Dermatofibroma (DF) and Basal Cell Carcinoma (BCC): Early-stage basal cell carcinoma (BCC) manifests as a slightly elevated, light yellow or pinkish small nodule with a firm texture on the local skin; DF may occasionally exhibit similar light yellow or pink coloration and firmness. Especially for smaller DFs, differentiation from early-stage BCC based solely on color and texture is difficult.
- Confusion between Benign Keratosis (BKL) and Melanocytic Nevus (NV): In terms of color, benign keratosis (BKL) encompasses a wide spectrum—skin-colored, light brown, dark brown, or black—overlapping considerably with NV’s color range. For example, seborrheic keratosis, a common type of BKL, often appears as brown or black flat papules or plaques, resembling pigmented NV and making color-based discrimination challenging. Morphologically, BKL typically presents as flat or slightly elevated lesions with clear borders and verrucous or papillomatous surfaces; NV can also be flat or elevated with well-defined margins. Congenital or atypical NV further blur morphological distinctions from BKL. Under dermatoscopy, both may exhibit non-specific features such as mottled or reticular pigmentation patterns, leading to diagnostic confusion in images.
- Confusion between Benign Keratosis (BKL) and Basal Cell Carcinoma (BCC): Basal cell carcinoma (BCC) often presents as a pearly or translucent papule/nodule with dilated surface capillaries. Some BKLs, such as actinic keratosis, may develop similar features during progression: mild elevation, rough texture, and vascular dilation. When occurring on the head and face—common sites for BCC—actinic keratosis closely mimics early BCC in appearance, increasing the risk of misdiagnosis. While dermatoscopic features like blue-gray globules or ulcers are typical of BCC, certain BKLs (e.g., seborrheic keratosis with comedo-like openings or milia cysts) may exhibit overlapping non-specific structures. Vascular dilation in both entities further complicates dermatoscopic differentiation.
4.2. Model Interpretability Evaluation
4.3. Ablation Study
4.4. Comparison
- Overall Accuracy: HybridSkinFormer achieved the highest accuracy of 94.2% on the test dataset, outperforming the second-ranked model (MobileViTv3-xxs) by 2%. This demonstrates its superior generalization capability across multi-modality skin lesion images.
- Macro Mean Precision: The model attained a top Macro Mean Precision of 91.1%, exceeding MobileViTv3-xxs (the second-place model) by over 7%. This indicates its enhanced ability to minimize false positive predictions across all lesion classes.
- Macro Mean Recall: HybridSkinFormer achieved a Macro Mean Recall of 91.0%, comparable to MobileViTv3-xxs, highlighting its robustness in detecting positive cases without significant compromise in sensitivity relative to state-of-the-art alternatives.
- Macro Mean F1-Score: The model recorded the highest Macro Mean F1-Score of 0.911, outperforming MobileViTv3-xxs (0.862) by 0.049, nearly 5 percentage points. This balanced metric reflects its optimal trade-off between precision and recall across imbalanced classes.
- Macro Mean MCC (Matthews Correlation Coefficient): HybridSkinFormer achieved a leading Macro Mean MCC of 0.901, 0.048 higher than MobileViTv3-xxs (0.853). This signifies stronger overall predictive performance, particularly in distinguishing between difficult-to-classify lesion types.
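The macro-averaged metrics above can be computed from a confusion matrix as follows (a minimal NumPy sketch; MCC is omitted for brevity, and classes absent from predictions contribute 0 to the macro mean, a common convention):

```python
import numpy as np

def macro_metrics(y_true, y_pred, num_classes):
    """Return overall accuracy and macro-averaged precision/recall/F1
    from integer label sequences."""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                       # rows: truth, cols: prediction
    tp = np.diag(cm).astype(float)
    col = cm.sum(axis=0)                    # predicted-per-class counts
    row = cm.sum(axis=1)                    # true-per-class counts
    prec = np.divide(tp, col, out=np.zeros_like(tp), where=col > 0)
    rec = np.divide(tp, row, out=np.zeros_like(tp), where=row > 0)
    denom = prec + rec
    f1 = np.divide(2 * prec * rec, denom, out=np.zeros_like(tp),
                   where=denom > 0)
    acc = tp.sum() / cm.sum()
    return acc, prec.mean(), rec.mean(), f1.mean()
```

Macro averaging gives every class equal weight regardless of its sample count, which is why it is the appropriate summary for imbalanced lesion datasets.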
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Sanghvi, A.R. Skin Cancer: Prevention and Early Detection. In Handbook of Cancer and Immunology; Springer International Publishing: Cham, Switzerland, 2022; pp. 1–31.
- de Vries, E.; Coebergh, J.W. Cutaneous malignant melanoma in Europe. Eur. J. Cancer 2004, 40, 2355–2366.
- Garbe, C.; Leiter, U. Melanoma epidemiology and trends. Clin. Dermatol. 2009, 27, 3–9.
- Van der Leest, R.J.; De Vries, E.; Bulliard, J.L.; Paoli, J.; Peris, K.; Stratigos, A.J.; Trakatelli, M.; Maselis, T.; Šitum, M.; Pallouras, A.; et al. The Euromelanoma skin cancer prevention campaign in Europe: Characteristics and results of 2009 and 2010. J. Eur. Acad. Dermatol. Venereol. 2011, 25, 1455–1465.
- Siegel, R.L.; Giaquinto, A.N.; Jemal, A. Cancer statistics, 2024. CA A Cancer J. Clin. 2024, 74, 12–49.
- Melarkode, N.; Srinivasan, K.; Qaisar, S.M.; Plawiak, P. AI-powered diagnosis of skin cancer: A contemporary review, open challenges and future research directions. Cancers 2023, 15, 1183.
- Esteva, A.; Kuprel, B.; Novoa, R.A.; Ko, J.; Swetter, S.M.; Blau, H.M.; Thrun, S. Dermatologist-level classification of skin cancer with deep neural networks. Nature 2017, 542, 115–118.
- Codella, N.C.; Gutman, D.; Celebi, M.E.; Helba, B.; Marchetti, M.A.; Dusza, S.W.; Kalloo, A.; Liopyris, K.; Mishra, N.; Kittler, H.; et al. Skin lesion analysis toward melanoma detection: A challenge. In Proceedings of the 2018 IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), Washington, DC, USA, 4–7 April 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 168–172.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv 2021, arXiv:2010.11929.
- Khan, S.; Ali, H.; Shah, Z. Identifying the role of vision transformer for skin cancer—A scoping review. Front. Artif. Intell. 2023, 6, 1202990.
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2023, arXiv:1706.03762.
- Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jégou, H. Training data-efficient image transformers & distillation through attention. arXiv 2021, arXiv:2012.12877.
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. arXiv 2021, arXiv:2103.14030.
- Ring, C.; Cox, N.; Lee, J.B. Dermatoscopy. Clin. Dermatol. 2021, 39, 635–642.
- Chang, C.H.; En Wang, W.; Hsu, F.Y.; Jhen Chen, R.; Chang, H.C. AI HAM 10000 Database to Assist Residents in Learning Differential Diagnosis of Skin Cancer. In Proceedings of the 2022 IEEE 5th Eurasian Conference on Educational Innovation (ECEI), Taipei, Taiwan, 10–12 February 2022; pp. 1–3.
- Khattar, S.; Kaur, R.; Gupta, G. A Review on Preprocessing, Segmentation and Classification Techniques for Detection of Skin Cancer. In Proceedings of the 2023 2nd Edition of IEEE Delhi Section Flagship Conference (DELCON), Rajpura, India, 24–26 February 2023; pp. 1–6.
- Shete, A.S.; Rane, A.S.; Gaikwad, P.S.; Patil, M.H. Detection of skin cancer using CNN algorithm. Int. J. 2021, 6.
- Garg, R.; Maheshwari, S.; Shukla, A. Decision support system for detection and classification of skin cancer using CNN. In Proceedings of the Innovations in Computational Intelligence and Computer Vision (ICICV 2020), Rajasthan, India, 17–19 January 2020; Springer: Singapore, 2021; pp. 578–586.
- Thwin, S.M.; Park, H.S. Skin Lesion Classification Using a Deep Ensemble Model. Appl. Sci. 2024, 14, 5599.
- Ahmad, B.; Usama, M.; Huang, C.M.; Hwang, K.; Hossain, M.S.; Muhammad, G. Discriminative Feature Learning for Skin Disease Classification Using Deep Convolutional Neural Network. IEEE Access 2020, 8, 39025–39033.
- Satapathy, S.C.; Cruz, M.; Namburu, A.; Chakkaravarthy, S.; Pittendreigh, M.; Satapathy, S.C. Skin cancer classification using convolutional capsule network (CapsNet). J. Sci. Ind. Res. 2020, 79, 994–1001.
- Xie, B.; He, X.; Zhao, S.; Li, Y.; Su, J.; Zhao, X.; Kuang, Y.; Wang, Y.; Chen, X. XiangyaDerm: A Clinical Image Dataset of Asian Race for Skin Disease Aided Diagnosis. In Proceedings of the Large-Scale Annotation of Biomedical Data and Expert Label Synthesis and Hardware Aware Learning for Medical Imaging and Computer Assisted Intervention, Shenzhen, China, 13 and 17 October 2019; Zhou, L., Heller, N., Shi, Y., Xiao, Y., Sznitman, R., Cheplygina, V., Mateus, D., Trucco, E., Hu, X.S., Chen, D., et al., Eds.; Springer: Cham, Switzerland, 2019; pp. 22–31.
- Anjum, M.A.; Amin, J.; Sharif, M.; Khan, H.U.; Malik, M.S.A.; Kadry, S. Deep Semantic Segmentation and Multi-Class Skin Lesion Classification Based on Convolutional Neural Network. IEEE Access 2020, 8, 129668–129678.
- Goyal, M.; Oakley, A.; Bansal, P.; Dancey, D.; Yap, M.H. Skin Lesion Segmentation in Dermoscopic Images with Ensemble Deep Learning Methods. IEEE Access 2020, 8, 4171–4181.
- Nigar, N.; Umar, M.; Shahzad, M.K.; Islam, S.; Abalo, D. A Deep Learning Approach Based on Explainable Artificial Intelligence for Skin Lesion Classification. IEEE Access 2022, 10, 113715–113725.
- Bian, J.; Zhang, S.; Wang, S.; Zhang, J.; Guo, J. Skin Lesion Classification by Multi-View Filtered Transfer Learning. IEEE Access 2021, 9, 66052–66061.
- Hosny, K.M.; Said, W.; Elmezain, M.; Kassem, M.A. Explainable deep inherent learning for multi-classes skin lesion classification. Appl. Soft Comput. 2024, 159, 111624.
- Naeem, A.; Farooq, M.S.; Khelifi, A.; Abid, A. Malignant Melanoma Classification Using Deep Learning: Datasets, Performance Measurements, Challenges and Opportunities. IEEE Access 2020, 8, 110575–110597.
- Thurnhofer-Hemsi, K.; López-Rubio, E.; Domínguez, E.; Elizondo, D.A. Skin Lesion Classification by Ensembles of Deep Convolutional Networks and Regularly Spaced Shifting. IEEE Access 2021, 9, 112193–112205.
- Liu, H.; Dou, Y.; Wang, K.; Zou, Y.; Sen, G.; Liu, X.; Li, H. A skin disease classification model based on multi scale combined efficient channel attention module. Sci. Rep. 2025, 15, 6116.
- Ozdemir, B.; Pacal, I. A robust deep learning framework for multiclass skin cancer classification. Sci. Rep. 2025, 15, 4938.
- Tschandl, P.; Rosendahl, C.; Kittler, H. The HAM10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions. Sci. Data 2018, 5, 180161.
- Hernández-Pérez, C.; Combalia, M.; Podlipnik, S.; Codella, N.C.F.; Rotemberg, V.; Halpern, A.C.; Reiter, O.; Carrera, C.; Barreiro, A.; Helba, B.; et al. BCN20000: Dermoscopic lesions in the wild. Sci. Data 2024, 11, 641.
- Ricci Lara, M.A.; Rodríguez Kowalczuk, M.V.; Lisa Eliceche, M.; Ferraresso, M.G.; Luna, D.R.; Benitez, S.E.; Mazzuoccolo, L.D. A dataset of skin lesion images collected in Argentina for the evaluation of AI tools in this population. Sci. Data 2023, 10, 712.
- Pacheco, A.G.; Lima, G.R.; Salomão, A.S.; Krohling, B.; Biral, I.P.; De Angelo, G.G.; Alves, F.C., Jr.; Esgario, J.G.; Simora, A.C.; Castro, P.B.; et al. PAD-UFES-20: A skin lesion dataset composed of patient data and clinical images collected from smartphones. Data Brief 2020, 32, 106221.
- Radosavovic, I.; Kosaraju, R.P.; Girshick, R.; He, K.; Dollár, P. Designing Network Design Spaces. arXiv 2020, arXiv:2003.13678.
- Xie, S.; Girshick, R.; Dollár, P.; Tu, Z.; He, K. Aggregated Residual Transformations for Deep Neural Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5987–5995.
- Howard, A.; Sandler, M.; Chu, G.; Chen, L.C.; Chen, B.; Tan, M.; Wang, W.; Zhu, Y.; Pang, R.; Vasudevan, V.; et al. Searching for MobileNetV3. arXiv 2019, arXiv:1905.02244.
- Wang, W.; Xie, E.; Li, X.; Fan, D.P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; Shao, L. PVT v2: Improved baselines with pyramid vision transformer. Comput. Vis. Media 2022, 8, 415–424.
- Wadekar, S.N.; Chaurasia, A. MobileViTv3: Mobile-Friendly Vision Transformer with Simple and Effective Fusion of Local, Global and Input Features. arXiv 2022, arXiv:2209.15159.
- Xu, W.; Xu, Y.; Chang, T.; Tu, Z. Co-Scale Conv-Attentional Image Transformers. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 9961–9970.
- Chollet, F. Xception: Deep Learning with Depthwise Separable Convolutions. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1800–1807.
Category | Setting
---|---
Hardware | CPU: Xeon(R) Platinum 8255C
Hardware | RAM: 40 GB
Hardware | GPU: RTX 2080Ti/11 GB with Driver 550.90.07
Software and Library | OS: Ubuntu 22.04
Software and Library | CUDA: 12.4
Software and Library | PyTorch: 2.5.1
Training Setting | batch size: 3, 10, 20
Training Setting | epoch: 50
Training Setting | Optimizer: Adam
Model Parameters | hidden_dim: 30
Model Parameters | embed_dim: 180
Model Parameters | head_num: 30
Model Parameters | attention_layer_num: 6
Model Parameters | linear bias: True
EFLoss Parameters | γ = 3
EFLoss Parameters | α = 0.3
Class | Overall Accuracy (%) | Precision (%) | Recall (%) | F1-Score | MCC |
---|---|---|---|---|---|
Melanocytic nevus (NV) | - | 97.8 | 95.6 | 0.967 | 0.940 |
Melanoma (MEL) | - | 94.4 | 95.5 | 0.950 | 0.938 |
Benign keratosis (BKL) | - | 86.6 | 85.8 | 0.862 | 0.847 |
Basal cell carcinoma (BCC) | - | 89.0 | 96.3 | 0.925 | 0.912 |
Actinic keratosis (AK) | - | 97.4 | 90.0 | 0.936 | 0.933 |
Dermatofibroma (DF) | - | 78.1 | 75.8 | 0.769 | 0.767 |
Vascular lesion (VASC) | - | 95.1 | 92.9 | 0.940 | 0.939 |
Squamous cell carcinoma (SCC) | - | 91.3 | 96.0 | 0.936 | 0.934 |
Overall (macro mean) | 94.2 | 91.1 | 91.0 | 0.911 | 0.901 |
Model | Overall Accuracy (%) | Precision (%) (Macro Mean) | Recall(%) (Macro Mean) | F1-Score (Macro Mean) | MCC (Macro Mean) |
---|---|---|---|---|---|
ResNet-20 [9] | 79.6 | 66.9 | 69.1 | 0.676 | 0.645 |
RegNetY-Small [37] | 82.5 | 70.7 | 74.5 | 0.721 | 0.695 |
ResNeXt50-32x4d [38] | 81.9 | 70.4 | 71.8 | 0.709 | 0.681 |
MobileNetV3-Small-1.0 [39] | 82.7 | 71.1 | 72.4 | 0.714 | 0.688 |
Xception [43] | 85.2 | 74.9 | 79.4 | 0.767 | 0.745 |
DeiT-Tiny/16 [13] | 87.8 | 78.0 | 85.6 | 0.811 | 0.795 |
PVTv2-B1 [40] | 90.6 | 81.4 | 90.8 | 0.851 | 0.841 |
CoaT-Tiny [42] | 89.6 | 80.1 | 86.9 | 0.828 | 0.815 |
MobileViTv3-xxs [41] | 92.2 | 83.2 | 90.5 | 0.862 | 0.853 |
HybridSkinFormer (our) | 94.2 | 91.1 | 91.0 | 0.911 | 0.901 |
Share and Cite
Huang, Y.; Zhang, Z.; Ran, X.; Zhuang, K.; Ran, Y. An Ingeniously Designed Skin Lesion Classification Model Across Clinical and Dermatoscopic Datasets. Diagnostics 2025, 15, 2011. https://doi.org/10.3390/diagnostics15162011