Search Results (239)

Search Parameters:
Keywords = dermatological images

18 pages, 1253 KiB  
Article
Leveraging Synthetic Degradation for Effective Training of Super-Resolution Models in Dermatological Images
by Francesco Branciforti, Kristen M. Meiburger, Elisa Zavattaro, Paola Savoia and Massimo Salvi
Electronics 2025, 14(15), 3138; https://doi.org/10.3390/electronics14153138 - 6 Aug 2025
Abstract
Teledermatology relies on digital transfer of dermatological images, but compression and resolution differences compromise diagnostic quality. Image enhancement techniques are crucial to compensate for these differences and improve quality for both clinical assessment and AI-based analysis. We developed a customized image degradation pipeline simulating common artifacts in dermatological images, including blur, noise, downsampling, and compression. This synthetic degradation approach enabled effective training of DermaSR-GAN, a super-resolution generative adversarial network tailored for dermoscopic images. The model was trained on 30,000 high-quality ISIC images and evaluated on three independent datasets (ISIC Test, Novara Dermoscopic, PH2) using structural similarity and no-reference quality metrics. DermaSR-GAN achieved statistically significant improvements in quality scores across all datasets, with up to 23% enhancement in perceptual quality metrics (MANIQA). The model preserved diagnostic details while doubling resolution and surpassed existing approaches, including traditional interpolation methods and state-of-the-art deep learning techniques. Integration with downstream classification systems demonstrated up to 14.6% improvement in class-specific accuracy for keratosis-like lesions compared to original images. Synthetic degradation represents a promising approach for training effective super-resolution models in medical imaging, with significant potential for enhancing teledermatology applications and computer-aided diagnosis systems. Full article
(This article belongs to the Section Computer Science & Engineering)
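The abstract above names blur, noise, downsampling, and compression as the simulated artifacts. As a rough illustration only (not the authors' published pipeline; the OpenCV-based implementation and all parameter values are assumptions), a synthetic degradation step for building (degraded, original) training pairs could look like this:

```python
import cv2
import numpy as np

def degrade(img: np.ndarray, scale: int = 2, jpeg_quality: int = 40) -> np.ndarray:
    """Apply blur -> noise -> downsampling -> JPEG compression to a BGR uint8 image.

    Parameter values are illustrative only; the published pipeline's settings differ.
    """
    # Gaussian blur simulating defocus / lens softness
    out = cv2.GaussianBlur(img, (5, 5), 1.2)

    # Additive Gaussian noise simulating sensor noise
    noise = np.random.normal(0.0, 6.0, out.shape)
    out = np.clip(out.astype(np.float32) + noise, 0, 255).astype(np.uint8)

    # Downsample to simulate low-resolution capture
    h, w = out.shape[:2]
    out = cv2.resize(out, (w // scale, h // scale), interpolation=cv2.INTER_AREA)

    # JPEG round-trip to simulate transmission compression artifacts
    ok, buf = cv2.imencode(".jpg", out, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)
```

Degraded crops produced this way would serve as network inputs, with the untouched originals as super-resolution targets.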
7 pages, 8022 KiB  
Interesting Images
Multimodal Imaging Detection of Difficult Mammary Paget Disease: Dermoscopy, Reflectance Confocal Microscopy, and Line-Field Confocal–Optical Coherence Tomography
by Carmen Cantisani, Gianluca Caruso, Alberto Taliano, Caterina Longo, Giuseppe Rizzuto, Vito D'Andrea, Pawel Pietkiewicz, Giulio Bortone, Luca Gargano, Mariano Suppa and Giovanni Pellacani
Diagnostics 2025, 15(15), 1898; https://doi.org/10.3390/diagnostics15151898 - 29 Jul 2025
Viewed by 177
Abstract
Mammary Paget disease (MPD) is a rare cutaneous malignancy associated with underlying ductal carcinoma in situ (DCIS) or invasive ductal carcinoma (IDC). Clinically, it appears as eczematous changes in the nipple and areola complex (NAC), which may include itching, redness, crusting, and ulceration; these symptoms can sometimes mimic benign dermatologic conditions such as nipple eczema, making early diagnosis challenging. A 56-year-old woman presented with persistent erythema and scaling of the left nipple, which did not respond to conventional dermatologic treatments: a high degree of suspicion prompted further investigation. Reflectance confocal microscopy (RCM) revealed atypical, enlarged epidermal cells with irregular boundaries, while line-field confocal–optical coherence tomography (LC-OCT) demonstrated thickening of the epidermis, hypo-reflective vacuous spaces and abnormally large round cells (Paget cells). These non-invasive imaging findings were consistent with an aggressive case of Paget disease despite the absence of clear mammographic evidence of underlying carcinoma: in fact, several biopsies were needed, and at the end, massive surgery was necessary. Non-invasive imaging techniques, such as dermoscopy, RCM, and LC-OCT, offer a valuable diagnostic tool in detecting Paget disease, especially in early stages and atypical forms. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
21 pages, 5527 KiB  
Article
SGNet: A Structure-Guided Network with Dual-Domain Boundary Enhancement and Semantic Fusion for Skin Lesion Segmentation
by Haijiao Yun, Qingyu Du, Ziqing Han, Mingjing Li, Le Yang, Xinyang Liu, Chao Wang and Weitian Ma
Sensors 2025, 25(15), 4652; https://doi.org/10.3390/s25154652 - 27 Jul 2025
Viewed by 317
Abstract
Segmentation of skin lesions in dermoscopic images is critical for the accurate diagnosis of skin cancers, particularly malignant melanoma, yet it is hindered by irregular lesion shapes, blurred boundaries, low contrast, and artifacts, such as hair interference. Conventional deep learning methods, typically based on UNet or Transformer architectures, often face limitations in regard to fully exploiting lesion features and incur high computational costs, compromising precise lesion delineation. To overcome these challenges, we propose SGNet, a structure-guided network, integrating a hybrid CNN–Mamba framework for robust skin lesion segmentation. The SGNet employs the Visual Mamba (VMamba) encoder to efficiently extract multi-scale features, followed by the Dual-Domain Boundary Enhancer (DDBE), which refines boundary representations and suppresses noise through spatial and frequency-domain processing. The Semantic-Texture Fusion Unit (STFU) adaptively integrates low-level texture with high-level semantic features, while the Structure-Aware Guidance Module (SAGM) generates coarse segmentation maps to provide global structural guidance. The Guided Multi-Scale Refiner (GMSR) further optimizes boundary details through a multi-scale semantic attention mechanism. Comprehensive experiments based on the ISIC2017, ISIC2018, and PH2 datasets demonstrate SGNet’s superior performance, with average improvements of 3.30% in terms of the mean Intersection over Union (mIoU) value and 1.77% in regard to the Dice Similarity Coefficient (DSC) compared to state-of-the-art methods. Ablation studies confirm the effectiveness of each component, highlighting SGNet’s exceptional accuracy and robust generalization for computer-aided dermatological diagnosis. Full article
(This article belongs to the Section Biomedical Sensors)
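For reference, the two segmentation metrics reported above (DSC, and IoU averaged to give mIoU) follow standard definitions; a minimal NumPy sketch, independent of the paper's code:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Dice similarity coefficient and IoU for a pair of binary lesion masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    iou = (intersection + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou
```

mIoU is then the mean of the per-image (or per-class) IoU values over the test set.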
17 pages, 6870 KiB  
Article
Edge- and Color–Texture-Aware Bag-of-Local-Features Model for Accurate and Interpretable Skin Lesion Diagnosis
by Dichao Liu and Kenji Suzuki
Diagnostics 2025, 15(15), 1883; https://doi.org/10.3390/diagnostics15151883 - 27 Jul 2025
Viewed by 380
Abstract
Background/Objectives: Deep models have achieved remarkable progress in the diagnosis of skin lesions but face two significant drawbacks. First, they cannot effectively explain the basis of their predictions. Although attention visualization tools like Grad-CAM can create heatmaps using deep features, these features often have large receptive fields, resulting in poor spatial alignment with the input image. Second, the design of most deep models neglects interpretable traditional visual features inspired by clinical experience, such as color–texture and edge features. This study aims to propose a novel approach integrating deep learning with traditional visual features to handle these limitations. Methods: We introduce the edge- and color–texture-aware bag-of-local-features model (ECT-BoFM), which limits the receptive field of deep features to a small size and incorporates edge and color–texture information from traditional features. A non-rigid reconstruction strategy ensures that traditional features enhance rather than constrain the model’s performance. Results: Experiments on the ISIC 2018 and 2019 datasets demonstrated that ECT-BoFM yields precise heatmaps and achieves high diagnostic performance, outperforming state-of-the-art methods. Furthermore, training models using only a small number of the most predictive patches identified by ECT-BoFM achieved diagnostic performance comparable to that obtained using full images, demonstrating its efficiency in exploring key clues. Conclusions: ECT-BoFM successfully combines deep learning and traditional visual features, addressing the interpretability and diagnostic accuracy challenges of existing methods. ECT-BoFM provides an interpretable and accurate framework for skin lesion diagnosis, advancing the integration of AI in dermatological research and clinical applications. Full article
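The core idea described above, restricting deep features to small receptive fields so that class evidence stays spatially aligned with the input, can be illustrated with a patch-wise classifier whose logits are averaged over the image. The sketch below is a generic bag-of-local-features illustration under that assumption; it is not the published ECT-BoFM architecture, and the layer sizes, patch size, and class count are placeholders:

```python
import torch
import torch.nn as nn

class BagOfLocalFeatures(nn.Module):
    """Minimal bag-of-local-features classifier: each small patch is scored
    independently and the patch logits are averaged, so evidence maps remain
    spatially aligned with the image (illustrative, not ECT-BoFM itself)."""

    def __init__(self, num_classes: int = 7, patch: int = 33, stride: int = 8):
        super().__init__()
        self.patch, self.stride = patch, stride
        self.local_net = nn.Sequential(          # small receptive field by construction
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x: torch.Tensor):
        b = x.size(0)
        # unfold -> one column per local patch: (B, 3*patch*patch, L)
        patches = nn.functional.unfold(x, self.patch, stride=self.stride)
        l = patches.size(-1)
        patches = patches.transpose(1, 2).reshape(b * l, 3, self.patch, self.patch)
        logits = self.local_net(patches).reshape(b, l, -1)  # per-patch class evidence
        return logits.mean(dim=1), logits                   # image logits, local heatmap
```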
7 pages, 540 KiB  
Case Report
Simultaneous Central Nervous System and Cutaneous Relapse in Acute Myeloid Leukemia
by Eros Cerantola, Laura Forlani, Marco Pizzi, Renzo Manara, Mauro Alaibac, Federica Lessi, Angelo Paolo Dei Tos, Chiara Briani and Carmela Gurrieri
Hemato 2025, 6(3), 25; https://doi.org/10.3390/hemato6030025 - 23 Jul 2025
Viewed by 169
Abstract
Introduction: Acute Myeloid Leukemia (AML) is a hematologic malignancy characterized by the clonal expansion of myeloid progenitors. While it primarily affects the bone marrow, extramedullary relapse occurs in 3–5% of cases, and it is linked to poor prognosis. Central nervous system (CNS) involvement presents diagnostic challenges due to nonspecific symptoms. CNS manifestations include leptomeningeal dissemination, nerve infiltration, parenchymal lesions, and myeloid sarcoma, occurring at any disease stage and frequently asymptomatic. Methods: A 62-year-old man with a recent history of AML in remission presented with diplopia and aching paresthesias in the left periorbital region spreading to the left frontal area. The diagnostic workup included neurological and hematological evaluation, lumbar puncture, brain CT, brain magnetic resonance imaging (MRI) with contrast, and dermatological evaluation with skin biopsy due to the appearance of nodular skin lesions on the abdomen and thorax. Results: Neurological evaluation showed hypoesthesia in the left mandibular region, consistent with left trigeminal nerve involvement, extending to the periorbital and frontal areas, and impaired adduction of the left eye with divergent strabismus in the primary position due to left oculomotor nerve palsy. Brain MRI showed an equivocal thickening of the left oculomotor nerve without enhancement. Cerebrospinal fluid (CSF) analysis initially showed elevated protein (47 mg/dL) with negative cytology; a repeat lumbar puncture one week later detected leukemic cells. Skin biopsy revealed cutaneous AML localization. A diagnosis of AML relapse with CNS and cutaneous localization was made. Salvage therapy with FLAG-IDA-VEN (fludarabine, cytarabine, idarubicin, venetoclax) and intrathecal methotrexate, cytarabine, and dexamethasone was started. Subsequent lumbar punctures were negative for leukemic cells. Due to high-risk status and extramedullary disease, the patient underwent allogeneic hematopoietic stem cell transplantation. Post-transplant aplasia was complicated by septic shock; the patient succumbed to an invasive fungal infection. Conclusions: This case illustrates the diagnostic complexity and poor prognosis of extramedullary AML relapse involving the CNS. Early recognition of neurological signs, including cranial nerve dysfunction, is crucial for timely diagnosis and management. Although initial investigations were negative, further analyses—including repeated CSF examinations and skin biopsy—led to the identification of leukemic involvement. Although neuroleukemiosis cannot be confirmed without nerve biopsy, the combination of clinical presentation, neuroimaging, and CSF data strongly supports the diagnosis of extramedullary relapse of AML. Multidisciplinary evaluation remains essential for detecting extramedullary relapse. Despite treatment achieving CSF clearance, the prognosis remains unfavorable, underscoring the need for vigilant clinical suspicion in hematologic patients presenting with neurological symptoms. Full article
17 pages, 2307 KiB  
Article
DeepBiteNet: A Lightweight Ensemble Framework for Multiclass Bug Bite Classification Using Image-Based Deep Learning
by Doston Khasanov, Halimjon Khujamatov, Muksimova Shakhnoza, Mirjamol Abdullaev, Temur Toshtemirov, Shahzoda Anarova, Cheolwon Lee and Heung-Seok Jeon
Diagnostics 2025, 15(15), 1841; https://doi.org/10.3390/diagnostics15151841 - 22 Jul 2025
Viewed by 340
Abstract
Background/Objectives: The accurate identification of insect bites from images of skin is daunting due to the fine gradations among diverse bite types, variability in human skin response, and inconsistencies in image quality. Methods: For this work, we introduce DeepBiteNet, a new ensemble-based deep learning model designed to perform robust multiclass classification of insect bites from RGB images. Our model aggregates three semantically diverse convolutional neural networks—DenseNet121, EfficientNet-B0, and MobileNetV3-Small—using a stacked meta-classifier designed to aggregate their predicted outcomes into an integrated, discriminatively strong output. Our technique balances heterogeneous feature representation with suppression of individual model biases. Our model was trained and evaluated on a hand-collected set of 1932 labeled images representing eight classes, consisting of common bites such as mosquito, flea, and tick bites, and unaffected skin. Our domain-specific augmentation pipeline imputed practical variability in lighting, occlusion, and skin tone, thereby boosting generalizability. Results: Our model, DeepBiteNet, achieved a training accuracy of 89.7%, validation accuracy of 85.1%, and test accuracy of 84.6%, and surpassed fifteen benchmark CNN architectures on all key indicators, viz., precision (0.880), recall (0.870), and F1-score (0.875). Our model, optimized for mobile deployment with quantization and TensorFlow Lite, enables rapid on-client computation and eliminates reliance on cloud-based processing. Conclusions: Our work shows how ensemble learning, when carefully designed and combined with realistic data augmentation, can boost the reliability and usability of automatic insect bite diagnosis. Our model, DeepBiteNet, forms a promising foundation for future integration with mobile health (mHealth) solutions and may complement early diagnosis and triage in dermatologically underserved regions. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Diagnostics and Analysis 2024)
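The stacking scheme described above (three base CNNs feeding a meta-classifier) can be sketched as follows. The abstract does not specify the meta-classifier, so logistic regression over concatenated class probabilities is used here purely as an illustrative stand-in; the eight-class output is taken from the abstract:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_stacked_meta_classifier(probs_densenet, probs_efficientnet, probs_mobilenet, y):
    """Train a simple stacking meta-classifier over base-model class probabilities.

    probs_*: (N, 8) softmax outputs of the three base CNNs on a held-out split.
    y: (N,) integer labels for the same images.
    """
    meta_features = np.concatenate(
        [probs_densenet, probs_efficientnet, probs_mobilenet], axis=1
    )  # (N, 24) stacked probability features
    meta_clf = LogisticRegression(max_iter=1000)
    meta_clf.fit(meta_features, y)
    return meta_clf
```

At inference time, the three networks score an image, their probabilities are concatenated the same way, and the meta-classifier produces the final bite-class prediction.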
15 pages, 1669 KiB  
Article
Prospective Evaluation of a Thermogenic Topical Cream-Gel Containing Caffeine, Genistein, and Botanical Extracts for the Treatment of Moderate to Severe Cellulite
by Vittoria Giulia Bianchi, Matteo Riccardo Di Nicola, Anna Cerullo, Giovanni Paolino and Santo Raffaele Mercuri
Cosmetics 2025, 12(4), 155; https://doi.org/10.3390/cosmetics12040155 - 21 Jul 2025
Viewed by 804
Abstract
Cellulite, characterised by cutaneous dimpling, surface irregularities, and dermal atrophy skin texture, affects up to 90% of post-pubertal females. It is a multifactorial condition involving anatomical, hormonal, and metabolic components, primarily affecting the thighs and buttocks. Despite numerous available therapies, there remains a high demand for effective, non-invasive, and well-tolerated treatment options. This single-centre, in vivo, prospective study evaluated the efficacy of a non-pharmacological, thermogenic topical cream-gel combined with manual massage in women with symmetrical grade II or III cellulite (Nürnberger–Müller scale). A total of 56 female participants (aged 18–55 years) were enrolled and instructed to apply the product twice daily for eight weeks to the thighs and buttocks. Efficacy was assessed using instrumental skin profilometry (ANTERA® 3D CS imaging system), dermatological clinical grading, and patient self-assessment questionnaires. Quantitative analysis showed a mean reduction of 23.5% in skin indentation volume (p < 0.01) and a mean decrease of 1.1 points on the cellulite severity scale by week 8. Patient-reported outcomes revealed 85.7% satisfaction with visible results and 91% satisfaction with product texture and ease of application. Dermatological evaluation confirmed no clinically significant adverse reactions, and only 3.5% of participants reported mild and transient skin sensitivity. These findings suggest that this topical cream-gel formulation, when used in conjunction with manual massage, represents a well-tolerated and non-invasive option for the cosmetic improvement of moderate to severe cellulite. Full article
(This article belongs to the Section Cosmetic Dermatology)
13 pages, 4206 KiB  
Case Report
Comparison of Symptoms and Disease Progression in a Mother and Son with Gorlin–Goltz Syndrome: A Case Report
by Agnieszka Adamska, Dominik Woźniak, Piotr Regulski and Paweł Zawadzki
J. Clin. Med. 2025, 14(14), 5151; https://doi.org/10.3390/jcm14145151 - 20 Jul 2025
Viewed by 468
Abstract
Background: Gorlin–Goltz syndrome (GGS), also known as basal cell nevus syndrome or nevoid basal cell carcinoma syndrome, is a rare genetic disorder caused by mutations in the PTCH1, PTCH2, or SUFU genes, leading to an increased risk of neoplasms. Craniofacial anomalies are among the most common features of GGS. This paper aimed to highlight the similarities and differences in clinical presentation across different ages and to emphasize the importance of including all family members in the diagnostic process. The diagnosis can often be initiated by a dentist through routine radiographic imaging. Case Presentation: We present a 17-year longitudinal follow-up of a male patient with recurrent multiple odontogenic keratocysts and other manifestations consistent with GGS. Nearly 20 years later, the patient’s mother presented with similar clinical features suggestive of GGS. Diagnostic imaging, including contrast-enhanced computed tomography (CT), cone-beam CT, magnetic resonance imaging, and orthopantomography, was performed, and the diagnosis was confirmed through genetic testing. Interdisciplinary management included age-appropriate surgical and dermatological treatments tailored to lesion severity. Conclusions: Given the frequent involvement of the stomatognathic system in GGS, dentists play a critical role in early detection and referral. Comprehensive family-based screening is essential for timely diagnosis, improved monitoring, and effective management of this multisystem disorder. Full article
(This article belongs to the Section Dentistry, Oral Surgery and Oral Medicine)
44 pages, 2807 KiB  
Review
Artificial Intelligence in Dermatology: A Review of Methods, Clinical Applications, and Perspectives
by Agnieszka M. Zbrzezny and Tomasz Krzywicki
Appl. Sci. 2025, 15(14), 7856; https://doi.org/10.3390/app15147856 - 14 Jul 2025
Viewed by 1101
Abstract
The use of artificial intelligence (AI) in dermatology is skyrocketing, but a comprehensive overview integrating regulatory, ethical, validation, and clinical issues is lacking. This work aims to review current research, map applicable legal regulations, identify ethical challenges and methods of verifying AI models in dermatology, assess publication trends, compare the most popular neural network architectures and datasets, and identify good practices in creating AI-based applications for dermatological use. A systematic literature review is conducted in accordance with the PRISMA guidelines, utilising Google Scholar, PubMed, Scopus, and Web of Science and employing bibliometric analysis. Since 2016, there has been exponential growth in deep learning research in dermatology, revealing gaps in EU and US regulations and significant differences in model performance across different datasets. The decision-making process in clinical dermatology is analysed, focusing on how AI is augmenting skin imaging techniques such as dermatoscopy and histology. Further demonstration is provided regarding how AI is a valuable tool that supports dermatologists by automatically analysing skin images, enabling faster diagnosis and the more accurate identification of skin lesions. These advances enhance the precision and efficiency of dermatological care, showcasing the potential of AI to revolutionise the speed of diagnosis in modern dermatology, sparking excitement and curiosity. Then, we discuss the regulatory framework for AI in medicine, as well as the ethical issues that may arise. Additionally, this article addresses the critical challenge of ensuring the safety and trustworthiness of AI in dermatology, presenting classic examples of safety issues that can arise during its implementation. The review provides recommendations for regulatory harmonisation, the standardisation of validation metrics, and further research on data explainability and representativeness, which can accelerate the safe implementation of AI in dermatological practice. Full article
(This article belongs to the Special Issue Machine Learning in Biomedical Sciences)
20 pages, 3941 KiB  
Article
AΚtransU-Net: Transformer-Equipped U-Net Model for Improved Actinic Keratosis Detection in Clinical Photography
by Panagiotis Derekas, Charalampos Theodoridis, Aristidis Likas, Ioannis Bassukas, Georgios Gaitanis, Athanasia Zampeta, Despina Exadaktylou and Panagiota Spyridonos
Diagnostics 2025, 15(14), 1752; https://doi.org/10.3390/diagnostics15141752 - 10 Jul 2025
Viewed by 441
Abstract
Background: Integrating artificial intelligence into clinical photography offers great potential for monitoring skin conditions such as actinic keratosis (AK) and skin field cancerization. Identifying the extent of AK lesions often requires more than analyzing lesion morphology—it also depends on contextual cues, such as surrounding photodamage. This highlights the need for models that can combine fine-grained local features with a comprehensive global view. Methods: To address this challenge, we propose AKTransU-net, a hybrid U-net-based architecture. The model incorporates Transformer blocks to enrich feature representations, which are passed through ConvLSTM modules within the skip connections. This configuration allows the network to maintain semantic coherence and spatial continuity in AK detection. This global awareness is critical when applying the model to whole-image detection via tile-based processing, where continuity across tile boundaries is essential for accurate and reliable lesion segmentation. Results: The effectiveness of AKTransU-net was demonstrated through comparative evaluations with state-of-the-art segmentation models. A proprietary annotated dataset of 569 clinical photographs from 115 patients with actinic keratosis was used to train and evaluate the models. From each photograph, crops of 512 × 512 pixels were extracted using translation lesion boxes that encompassed lesions in different positions and captured different contexts. AKtransU-net exhibited a more robust context awareness and achieved a median Dice score of 65.13%, demonstrating significant progress in whole-image assessments. Conclusions: Transformer-driven context modeling offers a promising approach for robust AK lesion monitoring, supporting its application in real-world clinical settings where accurate, context-aware analysis is crucial for managing skin field cancerization. Full article
(This article belongs to the Special Issue Artificial Intelligence in Dermatology)
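The whole-image, tile-based processing mentioned above can be sketched as a sliding window of 512 × 512 crops whose predictions are averaged where tiles overlap. The overlap-averaging strategy and the predict_tile callable (a hypothetical wrapper around the trained network) are assumptions for illustration, not details from the paper; the image is assumed to be at least one tile in each dimension:

```python
import numpy as np

def _positions(length: int, tile: int, stride: int):
    pos = list(range(0, length - tile + 1, stride))
    if pos[-1] != length - tile:
        pos.append(length - tile)   # make sure the border is covered
    return pos

def tiled_segmentation(image: np.ndarray, predict_tile, tile: int = 512, stride: int = 256):
    """Run a tile-level segmentation model over a whole clinical photograph.

    predict_tile maps a (tile, tile, 3) crop to a (tile, tile) probability map;
    overlapping tiles are averaged to smooth seams at tile boundaries.
    """
    h, w = image.shape[:2]
    prob = np.zeros((h, w), dtype=np.float32)
    count = np.zeros((h, w), dtype=np.float32)
    for y in _positions(h, tile, stride):
        for x in _positions(w, tile, stride):
            prob[y:y + tile, x:x + tile] += predict_tile(image[y:y + tile, x:x + tile])
            count[y:y + tile, x:x + tile] += 1.0
    return prob / np.maximum(count, 1.0)
```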
21 pages, 1611 KiB  
Article
Novel Snapshot-Based Hyperspectral Conversion for Dermatological Lesion Detection via YOLO Object Detection Models
by Nan-Chieh Huang, Arvind Mukundan, Riya Karmakar, Syna Syna, Wen-Yen Chang and Hsiang-Chen Wang
Bioengineering 2025, 12(7), 714; https://doi.org/10.3390/bioengineering12070714 - 30 Jun 2025
Viewed by 415
Abstract
Objective: Skin lesions, including dermatofibroma, lichenoid lesions, and acrochordons, are increasingly prevalent worldwide and often require timely identification for effective clinical management. However, conventional RGB-based imaging can overlook subtle vascular characteristics, potentially delaying diagnosis. Methods: A novel spectrum-aided vision enhancer (SAVE) that transforms standard RGB images into simulated narrowband imaging representations in a single step was proposed. The performances of five cutting-edge object detectors, based on You Only Look Once (YOLOv11, YOLOv10, YOLOv9, YOLOv8, and YOLOv5) models, were assessed across three lesion categories using white-light imaging (WLI) and SAVE modalities. Each YOLO model was trained separately on SAVE and WLI images, and performance was measured using precision, recall, and F1 score. Results: Among all tested configurations, YOLOv10 attained the highest overall performance, particularly under the SAVE modality, demonstrating superior precision and recall across the majority of lesion types. YOLOv9 exhibited robust performance, especially for dermatofibroma detection under SAVE, albeit slightly lagging behind YOLOv10. Conversely, YOLOv11 underperformed on acrochordon detection (cumulative F1 = 65.73%), and YOLOv8 and YOLOv5 displayed lower accuracy and higher false-positive rates, especially in WLI mode. Although SAVE improved the performance of YOLOv8 and YOLOv5, their results remained below those of YOLOv10 and YOLOv9. Conclusions: Combining the SAVE modality with advanced YOLO-based object detectors, specifically YOLOv10 and YOLOv9, markedly enhances the accuracy of lesion detection compared to conventional WLI, facilitating expedited real-time dermatological screening. These findings indicate that integrating snapshot-based narrowband imaging with deep learning object detection models can improve early diagnosis and has potential applications in broader clinical contexts. Full article
(This article belongs to the Special Issue Medical Artificial Intelligence and Data Analysis)
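A minimal sketch of the per-modality training and evaluation protocol described above, assuming the Ultralytics YOLO package and hypothetical dataset configuration files (wli_lesions.yaml, save_lesions.yaml); the paper's actual framework, starting weights, and hyperparameters are not stated here:

```python
from ultralytics import YOLO  # assumption: Ultralytics implementations of the YOLO family

def train_and_eval(data_yaml: str, weights: str = "yolov8n.pt"):
    """Train one detector on a single modality (WLI or SAVE) and return validation metrics."""
    model = YOLO(weights)
    model.train(data=data_yaml, epochs=100, imgsz=640)
    metrics = model.val()          # precision, recall, mAP on the validation split
    return metrics.results_dict

wli_metrics = train_and_eval("wli_lesions.yaml")    # hypothetical dataset config
save_metrics = train_and_eval("save_lesions.yaml")  # hypothetical dataset config
```

Comparing the two dictionaries per lesion class mirrors the WLI-versus-SAVE comparison reported in the abstract.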
21 pages, 444 KiB  
Review
The Role of ChatGPT in Dermatology Diagnostics
by Ziad Khamaysi, Mahdi Awwad, Badea Jiryis, Naji Bathish and Jonathan Shapiro
Diagnostics 2025, 15(12), 1529; https://doi.org/10.3390/diagnostics15121529 - 16 Jun 2025
Viewed by 943
Abstract
Artificial intelligence (AI), especially large language models (LLMs) like ChatGPT, has disrupted different medical disciplines, including dermatology. This review explores the application of ChatGPT in dermatological diagnosis, emphasizing its role in natural language processing (NLP) for clinical data interpretation, differential diagnosis assistance, and patient communication enhancement. ChatGPT can enhance a diagnostic workflow when paired with image analysis tools, such as convolutional neural networks (CNNs), by merging text and image data. While it boasts great capabilities, it still faces some issues, such as its inability to perform any direct image analyses and the risk of inaccurate suggestions. Ethical considerations, including patient data privacy and the responsibilities of the clinician, are discussed. Future perspectives include an integrated multimodal model and AI-assisted framework for diagnosis, which shall improve dermatology practice. Full article
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)
13 pages, 1228 KiB  
Article
Medical Photography in Dermatology: Quality and Safety in the Referral Process to Secondary Healthcare
by Eduarda Castro Almeida, João Rocha-Neves, Ana Filipa Pedrosa and José Paulo Andrade
Diagnostics 2025, 15(12), 1518; https://doi.org/10.3390/diagnostics15121518 - 14 Jun 2025
Viewed by 458
Abstract
Background: Medical photography is widely used in dermatology referrals to secondary healthcare, yet concerns exist regarding image quality and data security. This study aimed to evaluate the quality of clinical photographs used in dermatology referrals, to identify discrepancies between specialties’ perceptions, and to determine the general awareness of proper storage and security of clinical photographs. Methods: A 43-question survey, based on previously validated questionnaires, was administered to general and family medicine (GFM) doctors and to dermatologists at an academic referral hospital in Porto, Portugal. The survey assessed demographics, photo-taking habits, perceived photo quality, adequacy of clinical information, and opinions on the role of photography in the referral process. Quantitative statistical methods were used to analyze questionnaire responses. Results: A total of 65 physicians participated (18 dermatologists and 47 GFM doctors). Significant differences were observed between the two groups. While 36.2% of GFMs rated their submitted photos as high- or very-high-quality, none of the dermatologists rated the received photos as high-quality, with 83.3% rating them as average (p = 0.012). Regarding clinical information, 46.8% of GFMs reported consistently sending enough information, while no dermatologists reported always receiving sufficient information (p < 0.001). Most respondents (76.9%) agreed that the quality of photographs is important in diagnosis and treatment. Conclusions: The findings reveal a discrepancy between GFM doctors’ and dermatologists’ perceptions of photograph quality and information sufficiency in dermatology referrals. Standardized guidelines and educational interventions are necessary to improve the quality and consistency of clinical photographs, thereby enhancing communication between healthcare providers and ensuring patient data privacy and security. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
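The group comparisons reported above (e.g., p = 0.012 for photo-quality ratings) are contingency-table comparisons of categorical survey responses. A generic sketch of such a comparison with purely illustrative counts, not the study's data or necessarily its exact test:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: respondent group; columns: rating category. Counts are illustrative only.
table = np.array([
    [17, 25, 5],   # GFM doctors: high/very high, average, low
    [ 0, 15, 3],   # dermatologists
])
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.3f}")
```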
15 pages, 18796 KiB  
Article
Study of the Repair Action and Mechanisms of a Moisturizing Cream on an SLS-Damaged Skin Model Using Two-Photon Microscopy
by Yixin Shen, Ying Ye, Lina Wang, Huiping Hu, Caixia Wang, Yuxuan Wu, Dingqiao Lin, Jiaqi Shen, Hong Zhang, Yanan Li and Peiwen Sun
Cosmetics 2025, 12(3), 119; https://doi.org/10.3390/cosmetics12030119 - 10 Jun 2025
Viewed by 1000
Abstract
This study evaluates the efficacy of a novel moisturizing cream using a sodium lauryl sulfate (SLS)-induced skin damage model, supported by advanced imaging with two-photon microscopy (TPM). TPM’s capabilities allow for in-depth, non-invasive visualization of skin repair processes, surpassing traditional imaging methods. The innovative formulation of the cream includes ceramide NP, ceramide NS, ceramide AP, lactobacillus/soybean ferment extract, and bacillus ferment, targeting the enhancement of skin hydration, barrier function, and structural integrity. In SLS-stimulated 3D skin models and clinical settings, the cream significantly improved the expression of key barrier proteins such as filaggrin (FLG), loricrin (LOR), and transglutaminase 1 (TGM1), while reducing inflammatory markers like IL-1α, TNF-α, and PGE2. Notably, the cream facilitated a significant increase in epidermal thickness and improved the dermal–epidermal junction index (DEJI), as observed through TPM, indicating profound skin repair and enhanced barrier functionality. Clinical trials further demonstrated the cream’s reparative effects, significantly reducing symptoms in participants with sensitive skin and post-intense pulsed light (IPL) treatment scenarios. This study highlights the utility of TPM as a groundbreaking tool in cosmetic dermatology, offering real-time analysis of the effects of skincare products on skin structure and function. Full article
(This article belongs to the Section Cosmetic Dermatology)
24 pages, 985 KiB  
Article
Attention-Based Deep Feature Aggregation Network for Skin Lesion Classification
by Siddiqui Muhammad Yasir and Hyun Kim
Electronics 2025, 14(12), 2364; https://doi.org/10.3390/electronics14122364 - 9 Jun 2025
Viewed by 672
Abstract
Early and accurate detection of dermatological conditions, particularly melanoma, is critical for effective treatment and improved patient outcomes. Misclassifications may lead to delayed diagnosis, disease progression, and severe complications in medical image processing. Hence, robust and reliable classification techniques are essential to enhance diagnostic precision in clinical practice. This study presents a deep learning-based framework designed to improve feature representation while maintaining computational efficiency. The proposed architecture integrates multi-level feature aggregation with a squeeze-and-excitation attention mechanism to effectively extract salient patterns from dermoscopic medical images. The model is rigorously evaluated on five publicly available benchmark datasets—ISIC-2019, ISIC-2020, SKINL2, MED-NODE, and HAM10000—covering a diverse spectrum of dermatological medical disorders. Experimental results demonstrate that the proposed method consistently outperforms existing approaches in classification performance, achieving accuracy rates of 94.41% and 97.45% on the MED-NODE and HAM10000 datasets, respectively. These results underscore the method’s potential for real-world deployment in automated skin lesion analysis and clinical decision support. Full article
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, 4th Edition)
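The squeeze-and-excitation attention mechanism named above is a standard channel-attention block; a minimal PyTorch version of that general mechanism (not the authors' exact implementation) looks like this:

```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Standard squeeze-and-excitation channel attention block (Hu et al.)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))           # squeeze: global average pool -> (B, C)
        w = self.fc(w).view(b, c, 1, 1)  # excitation: per-channel gates in (0, 1)
        return x * w                     # reweight feature maps channel-wise
```

In an aggregation network of the kind described, such a block would rescale the fused multi-level feature maps before classification.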