Search Results (43)

Search Parameters:
Keywords = multimodal skin imaging

33 pages, 15612 KiB  
Article
A Personalized Multimodal Federated Learning Framework for Skin Cancer Diagnosis
by Shuhuan Fan, Awais Ahmed, Xiaoyang Zeng, Rui Xi and Mengshu Hou
Electronics 2025, 14(14), 2880; https://doi.org/10.3390/electronics14142880 - 18 Jul 2025
Viewed by 339
Abstract
Skin cancer is one of the most prevalent forms of cancer worldwide, and early and accurate diagnosis critically impacts patient outcomes. Given the sensitive nature of medical data and its fragmented distribution across institutions (data silos), privacy-preserving collaborative learning is essential to enable knowledge-sharing without compromising patient confidentiality. While federated learning (FL) offers a promising solution, existing methods struggle with heterogeneous and missing modalities across institutions, which reduce diagnostic accuracy. To address these challenges, we propose an effective and flexible Personalized Multimodal Federated Learning framework (PMM-FL), which enables efficient cross-client knowledge transfer while maintaining personalized performance under heterogeneous and incomplete modality conditions. Our study contains three key contributions: (1) A hierarchical aggregation strategy that decouples multi-module aggregation from local deployment via global modular-separated aggregation and local client fine-tuning. Unlike conventional FL (which synchronizes all parameters in each round), our method adopts a frequency-adaptive synchronization mechanism, updating parameters based on their stability and functional roles. (2) A multimodal fusion approach based on multitask learning, integrating learnable modality imputation and attention-based feature fusion to handle missing modalities. (3) A custom dataset combining multi-year International Skin Imaging Collaboration (ISIC) challenge data (2018–2024) to ensure comprehensive coverage of diverse skin cancer types. We evaluate PMM-FL through diverse experiment settings, demonstrating its effectiveness in heterogeneous and incomplete modality federated learning settings, achieving 92.32% diagnostic accuracy with only a 2% accuracy drop under 30% modality missingness and a 32.9% reduction in communication overhead compared with baseline FL methods.
(This article belongs to the Special Issue Multimodal Learning and Transfer Learning)
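The frequency-adaptive synchronization mechanism described in contribution (1) can be sketched as follows; the parameter group names and sync intervals here are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of frequency-adaptive parameter synchronization:
# stable parameter groups are aggregated less often than volatile ones.
# Group names and interval values are illustrative only.

ROUND_INTERVALS = {
    "shared_encoder": 1,  # volatile: aggregate every round
    "fusion_module": 2,   # moderately stable: every 2 rounds
    "personal_head": 5,   # stable/personalized: every 5 rounds
}

def sync_due(group, round_idx):
    """True if this parameter group should be aggregated in this round."""
    return round_idx % ROUND_INTERVALS[group] == 0

def groups_to_sync(round_idx):
    """Parameter groups the server aggregates in a given round."""
    return [g for g in ROUND_INTERVALS if sync_due(g, round_idx)]
```

Compared with synchronizing every parameter in every round, skipping the stable groups is the kind of scheduling that reduces communication overhead.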

18 pages, 15953 KiB  
Review
Development of Objective Measurements of Scratching as a Proxy of Atopic Dermatitis—A Review
by Cheuk-Yan Au, Neha Manazir, Huzhaorui Kang and Ali Asgar Saleem Bhagat
Sensors 2025, 25(14), 4316; https://doi.org/10.3390/s25144316 - 10 Jul 2025
Viewed by 466
Abstract
Eczema, or atopic dermatitis (AD), is a chronic inflammatory skin condition characterized by persistent itching and scratching, significantly impacting patients’ quality of life. Effective monitoring of scratching behaviour is crucial for assessing disease severity, treatment efficacy, and understanding the relationship between itch and sleep disturbances. This review explores current technological approaches for detecting and monitoring scratching and itching in AD patients, categorising them into contact-based and non-contact-based methods. Contact-based methods primarily involve wearable sensors, such as accelerometers, electromyography (EMG), and piezoelectric sensors, which track limb movements and muscle activity associated with scratching. Non-contact methods include video-based motion tracking, thermal imaging, and acoustic analysis, commonly employed in sleep clinics and controlled environments to assess nocturnal scratching. Furthermore, emerging artificial intelligence (AI)-driven approaches leveraging machine learning for automated scratch detection are discussed. The advantages, limitations, and validation challenges of these technologies, including accuracy, user comfort, data privacy, and real-world applicability, are critically analysed. Finally, we outline future research directions, emphasizing the integration of multimodal monitoring, real-time data analysis, and patient-centric wearable solutions to improve disease management. This review serves as a comprehensive resource for clinicians, researchers, and technology developers seeking to advance objective itch and scratch monitoring in AD patients.

19 pages, 1840 KiB  
Article
Facial Analysis for Plastic Surgery in the Era of Artificial Intelligence: A Comparative Evaluation of Multimodal Large Language Models
by Syed Ali Haider, Srinivasagam Prabha, Cesar A. Gomez-Cabello, Sahar Borna, Ariana Genovese, Maissa Trabilsy, Adekunle Elegbede, Jenny Fei Yang, Andrea Galvao, Cui Tao and Antonio Jorge Forte
J. Clin. Med. 2025, 14(10), 3484; https://doi.org/10.3390/jcm14103484 - 16 May 2025
Viewed by 913
Abstract
Background/Objectives: Facial analysis is critical for preoperative planning in facial plastic surgery, but traditional methods can be time consuming and subjective. This study investigated the potential of Artificial Intelligence (AI) for objective and efficient facial analysis in plastic surgery, with a specific focus on Multimodal Large Language Models (MLLMs). We evaluated their ability to analyze facial skin quality, volume, symmetry, and adherence to aesthetic standards such as neoclassical facial canons and the golden ratio. Methods: We evaluated four MLLMs—ChatGPT-4o, ChatGPT-4, Gemini 1.5 Pro, and Claude 3.5 Sonnet—using two evaluation forms and 15 diverse facial images generated by a Generative Adversarial Network (GAN). The general analysis form evaluated qualitative skin features (texture, type, thickness, wrinkling, photoaging, and overall symmetry). The facial ratios form assessed quantitative structural proportions, including division into equal fifths, adherence to the rule of thirds, and compatibility with the golden ratio. MLLM assessments were compared with evaluations from a plastic surgeon and manual measurements of facial ratios. Results: The MLLMs showed promise in analyzing qualitative features, but they struggled with precise quantitative measurements of facial ratios. Mean accuracies for general analysis were ChatGPT-4o (0.61 ± 0.49), Gemini 1.5 Pro (0.60 ± 0.49), ChatGPT-4 (0.57 ± 0.50), and Claude 3.5 Sonnet (0.52 ± 0.50). In facial ratio assessments, scores were lower, with Gemini 1.5 Pro achieving the highest mean accuracy (0.39 ± 0.49). Inter-rater reliability, based on Cohen’s Kappa values, ranged from poor to high for qualitative assessments (κ > 0.7 for some questions) but was generally poor (near or below zero) for quantitative assessments. Conclusions: Current general-purpose MLLMs are not yet ready to replace manual clinical assessments but may assist in general facial feature analysis. These findings are based on testing models not specifically trained for facial analysis and serve to raise awareness among clinicians regarding the current capabilities and inherent limitations of readily available MLLMs in this specialized domain. This limitation may stem from challenges with spatial reasoning and fine-grained detail extraction, which are inherent limitations of current MLLMs. Future research should focus on enhancing the numerical accuracy and reliability of MLLMs for broader application in plastic surgery, potentially through improved training methods and integration with other AI technologies such as specialized computer vision algorithms for precise landmark detection and measurement.
(This article belongs to the Special Issue Innovation in Hand Surgery)

24 pages, 2586 KiB  
Article
Deep Multi-Modal Skin-Imaging-Based Information-Switching Network for Skin Lesion Recognition
by Yingzhe Yu, Huiqiong Jia, Li Zhang, Suling Xu, Xiaoxia Zhu, Jiucun Wang, Fangfang Wang, Lianyi Han, Haoqiang Jiang, Qiongyan Zhou and Chao Xin
Bioengineering 2025, 12(3), 282; https://doi.org/10.3390/bioengineering12030282 - 12 Mar 2025
Cited by 1 | Viewed by 1591
Abstract
The rising prevalence of skin lesions places a heavy burden on global health resources and necessitates an early and precise diagnosis for successful treatment. The diagnostic potential of recent multi-modal skin lesion detection algorithms is limited because they ignore dynamic interactions and information sharing across modalities at various feature scales. To address this, we propose a deep learning framework, Multi-Modal Skin-Imaging-based Information-Switching Network (MDSIS-Net), for end-to-end skin lesion recognition. MDSIS-Net extracts intra-modality features using transfer learning in a multi-scale fully shared convolutional neural network and introduces an innovative information-switching module. A cross-attention mechanism dynamically calibrates and integrates features across modalities to improve inter-modality associations and feature representation in this module. MDSIS-Net is tested on clinical disfiguring dermatosis data and the public Derm7pt melanoma dataset. A Visually Intelligent System for Image Analysis (VISIA) captures five modalities: spots, red marks, ultraviolet (UV) spots, porphyrins, and brown spots for disfiguring dermatosis. The model performs better than existing approaches with an mAP of 0.967, accuracy of 0.960, precision of 0.935, recall of 0.960, and f1-score of 0.947. Using clinical and dermoscopic pictures from the Derm7pt dataset, MDSIS-Net outperforms current benchmarks for melanoma, with an mAP of 0.877, accuracy of 0.907, precision of 0.911, recall of 0.815, and f1-score of 0.851. The model’s interpretability is proven by Grad-CAM heatmaps correlating with clinical diagnostic focus areas. In conclusion, our deep multi-modal information-switching model enhances skin lesion identification by capturing relationship features and fine-grained details across multi-modal images, improving both accuracy and interpretability. 
This work advances clinical decision making and lays a foundation for future developments in skin lesion diagnosis and treatment.
(This article belongs to the Special Issue Artificial Intelligence for Skin Diseases Classification)
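The attention-based calibration in the information-switching module can be illustrated with a minimal single-query cross-attention sketch; the dimensions and values are made up, and this is not the authors' implementation.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def cross_attend(query, keys, values):
    """One modality's feature vector (query) attends over another
    modality's feature vectors (keys/values) and returns the
    attention-weighted combination (scaled dot-product attention)."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
```

A query aligned with the first key draws most of its output from the first value vector, which is how one modality can selectively pull in features from another.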

21 pages, 6186 KiB  
Article
Automatic Measurement of Comprehensive Skin Types Based on Image Processing and Deep Learning
by Jianghong Ran, Guolong Dong, Fan Yi, Li Li and Yue Wu
Electronics 2025, 14(1), 49; https://doi.org/10.3390/electronics14010049 - 26 Dec 2024
Viewed by 2376
Abstract
The skin serves as a physical and chemical barrier, effectively protecting us against the external environment. The Baumann Skin Type Indicator (BSTI) classifies skin into 16 types based on traits such as dry/oily (DO), sensitive/resistant (SR), pigmented/nonpigmented (PN), and wrinkle-prone/tight (WT). Traditional assessments are time-consuming and challenging as they require the involvement of experts. While deep learning has been widely used in skin disease classification, its application in skin type classification, particularly using multimodal data, remains largely unexplored. To address this, we propose an improved Inception-v3 model incorporating transfer learning, based on the four-dimensional BSTI classification, which demonstrates outstanding accuracy. The dataset used in this study includes non-invasive physiological indicators, BSTI questionnaires, and skin images captured under various light sources. By comparing performance across different light sources, regions of interest (ROI), and baseline models, the improved Inception-v3 model achieved the best results, with accuracy reaching 91.11% in DO, 81.13% in SR, 91.72% in PN, and 74.9% in WT, demonstrating its effectiveness in skin type classification. This study surpasses traditional classification methods and previous similar research, offering a new, objective approach to measuring comprehensive skin types using multimodal and multi-light-source data.
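The four binary axes combine into the 16 BSTI types; a minimal sketch of that mapping (the one-letter-per-axis label encoding follows the common BSTI naming convention, but the exact encoding used in the study is an assumption):

```python
# Each BSTI axis has two outcomes; one letter per axis yields 2**4 = 16 types.
AXES = [("D", "O"),  # dry / oily
        ("S", "R"),  # sensitive / resistant
        ("P", "N"),  # pigmented / nonpigmented
        ("W", "T")]  # wrinkle-prone / tight

def baumann_type(do, sr, pn, wt):
    """Combine four binary axis predictions (0 = first letter, 1 = second)
    into one of the 16 BSTI labels, e.g. (1, 1, 0, 1) -> 'ORPT'."""
    picks = (do, sr, pn, wt)
    return "".join(AXES[i][p] for i, p in enumerate(picks))
```

In this framing, the classifier solves four binary problems (the DO, SR, PN, and WT accuracies quoted above) rather than one 16-way problem.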

13 pages, 1722 KiB  
Systematic Review
Exploring the Role of Large Language Models in Melanoma: A Systematic Review
by Mor Zarfati, Girish N. Nadkarni, Benjamin S. Glicksberg, Moti Harats, Shoshana Greenberger, Eyal Klang and Shelly Soffer
J. Clin. Med. 2024, 13(23), 7480; https://doi.org/10.3390/jcm13237480 - 9 Dec 2024
Cited by 2 | Viewed by 2380
Abstract
Objective: This systematic review evaluates the current applications, advantages, and challenges of large language models (LLMs) in melanoma care. Methods: A systematic search was conducted in PubMed and Scopus databases for studies published up to 23 July 2024, focusing on the application of LLMs in melanoma. The review adhered to PRISMA guidelines, and the risk of bias was assessed using the modified QUADAS-2 tool. Results: Nine studies were included, categorized into subgroups: patient education, diagnosis, and clinical management. In patient education, LLMs demonstrated high accuracy, though readability often exceeded recommended levels. For diagnosis, multimodal LLMs like GPT-4V showed capabilities in distinguishing melanoma from benign lesions, but accuracy varied, influenced by factors such as image quality and integration of clinical context. Regarding management advice, ChatGPT provided more reliable recommendations compared to other LLMs, but all models lacked depth for individualized decision-making. Conclusions: LLMs, particularly multimodal models, show potential in improving melanoma care. However, current applications require further refinement and validation. Future studies should explore fine-tuning these models on large, diverse dermatological databases and incorporate expert knowledge to address limitations such as generalizability across different populations and skin types.
(This article belongs to the Section Dermatology)

17 pages, 2260 KiB  
Article
From Phantoms to Patients: Improved Fusion and Voxel-Wise Analysis of Diffusion-Weighted Imaging and FDG-Positron Emission Tomography in Positron Emission Tomography/Magnetic Resonance Imaging for Combined Metabolic–Diffusivity Index (cDMI)
by Katharina Deininger, Patrick Korf, Leonard Lauber, Robert Grimm, Ralph Strecker, Jochen Steinacker, Catharina S. Lisson, Bernd M. Mühling, Gerlinde Schmidtke-Schrezenmeier, Volker Rasche, Tobias Speidel, Gerhard Glatting, Meinrad Beer, Ambros J. Beer and Wolfgang Thaiss
Diagnostics 2024, 14(16), 1787; https://doi.org/10.3390/diagnostics14161787 - 16 Aug 2024
Viewed by 1488
Abstract
Hybrid positron emission tomography/magnetic resonance imaging (PET/MR) opens new possibilities in multimodal multiparametric (m2p) image analyses. But even the simultaneous acquisition of positron emission tomography (PET) and magnetic resonance imaging (MRI) does not guarantee perfect voxel-by-voxel co-registration due to organ motion and distortions, especially in diffusion-weighted imaging (DWI), which would, however, be crucial to derive biologically meaningful information. Thus, our aim was to optimize fusion and voxel-wise analyses of DWI and standardized uptake values (SUVs) using a novel software for m2p analyses. Using research software, we evaluated the precision of image co-registration and voxel-wise analyses including the rigid and elastic 3D registration of DWI and [18F]-Fluorodeoxyglucose (FDG)-PET from an integrated PET/MR system. We analyzed DWI distortions with a volume-preserving constraint in three different 3D-printed phantom models. A total of 12 PET/MR-DWI clinical datasets (bronchial carcinoma patients) were referenced to the T1-weighted DIXON sequence. Back mapping of scatterplots and voxel-wise registration was performed and compared to the non-optimized datasets. Fusion was rated using a 5-point Likert scale. Using the 3D-elastic co-registration algorithm, geometric shapes were restored in phantom measurements; the measured ADC values did not change significantly (F = 1.12, p = 0.34). Reader assessment showed a significant improvement in fusion precision for DWI and morphological landmarks in the 3D-registered datasets (4.3 ± 0.2 vs. 4.6 ± 0.2, p = 0.009). Most pronounced differences were noted for the chest wall (p = 0.006), tumor (p = 0.007), and skin contour (p = 0.014). Co-registration increased the number of plausible ADC and SUV combinations by 25%. The volume-preserving elastic 3D registration of DWI significantly improved the precision of fusion with anatomical sequences in phantom and clinical datasets. The research software allowed for a voxel-wise analysis and visualization of [18F]FDG-PET/MR data as a “combined diffusivity–metabolic index” (cDMI). The clinical value of the optimized PET/MR biomarker can thus be tested in future PET/MR studies.
(This article belongs to the Special Issue New Trends and Advances of MRI and PET Hybrid Imaging in Diagnostics)

16 pages, 1755 KiB  
Article
Predicting Sleep Quality through Biofeedback: A Machine Learning Approach Using Heart Rate Variability and Skin Temperature
by Andrea Di Credico, David Perpetuini, Pascal Izzicupo, Giulia Gaggi, Nicola Mammarella, Alberto Di Domenico, Rocco Palumbo, Pasquale La Malva, Daniela Cardone, Arcangelo Merla, Barbara Ghinassi and Angela Di Baldassarre
Clocks & Sleep 2024, 6(3), 322-337; https://doi.org/10.3390/clockssleep6030023 - 23 Jul 2024
Cited by 1 | Viewed by 3664
Abstract
Sleep quality (SQ) is a crucial aspect of overall health. Poor sleep quality may cause cognitive impairment, mood disturbances, and an increased risk of chronic diseases. Therefore, assessing sleep quality helps identify individuals at risk and develop effective interventions. SQ has been demonstrated to affect heart rate variability (HRV) and skin temperature even during wakefulness. In this perspective, using wearables and contactless technologies to continuously monitor HR and skin temperature is highly suited for assessing objective SQ. However, studies modeling the relationship linking HRV and skin temperature metrics evaluated during wakefulness to predict SQ are lacking. This study aims to develop machine learning models based on HRV and skin temperature that estimate SQ as assessed by the Pittsburgh Sleep Quality Index (PSQI). HRV was measured with a wearable sensor, and facial skin temperature was measured by infrared thermal imaging. Classification models based on unimodal and multimodal HRV and skin temperature were developed. A Support Vector Machine applied to multimodal HRV and skin temperature delivered the best classification accuracy, 83.4%. This study can pave the way for the employment of wearable and contactless technologies to monitor SQ for ergonomic applications. The proposed method significantly advances the field by achieving a higher classification accuracy than existing state-of-the-art methods. Our multimodal approach leverages the synergistic effects of HRV and skin temperature metrics, thus providing a more comprehensive assessment of SQ. Quantitative performance indicators, such as the 83.4% classification accuracy, underscore the robustness and potential of our method in accurately predicting sleep quality using non-intrusive measurements taken during wakefulness.
(This article belongs to the Section Computational Models)
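The multimodal setup can be sketched as early fusion of the unimodal feature vectors followed by a classifier. The paper uses a Support Vector Machine; this stdlib-only sketch substitutes a toy nearest-centroid classifier, and all feature values and labels are made up.

```python
def fuse(hrv_feats, temp_feats):
    """Early fusion: concatenate unimodal HRV and skin-temperature
    feature vectors into one multimodal feature vector."""
    return list(hrv_feats) + list(temp_feats)

def predict(x, centroids):
    """Toy nearest-centroid stand-in for the paper's SVM classifier:
    assign x to the class whose centroid is closest."""
    def sqdist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda label: sqdist(x, centroids[label]))
```

The fusion step is what lets a single decision boundary exploit interactions between the two modalities, which unimodal models cannot see.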

12 pages, 1818 KiB  
Article
Precise Serial Microregistration Enables Quantitative Microscopy Imaging Tracking of Human Skin Cells In Vivo
by Yunxian Tian, Zhenguo Wu, Harvey Lui, Jianhua Zhao, Sunil Kalia, InSeok Seo, Hao Ou-Yang and Haishan Zeng
Cells 2024, 13(13), 1158; https://doi.org/10.3390/cells13131158 - 7 Jul 2024
Viewed by 1542
Abstract
We developed an automated microregistration method that enables repeated in vivo skin microscopy imaging of the same tissue microlocation and specific cells over a long period of days and weeks with unprecedented precision. Applying this method in conjunction with an in vivo multimodality multiphoton microscope, the behavior of human skin cells such as cell proliferation, melanin upward migration, blood flow dynamics, and epidermal thickness adaptation can be recorded over time, facilitating quantitative cellular dynamics analysis. We demonstrated the usefulness of this method in a skin biology study by successfully monitoring skin cellular responses for a period of two weeks following an acute exposure to ultraviolet light.
(This article belongs to the Special Issue Advanced Technology for Cellular Imaging)

19 pages, 2064 KiB  
Review
Skin Imaging Using Optical Coherence Tomography and Photoacoustic Imaging: A Mini-Review
by Mohsin Zafar, Amanda P. Siegel, Kamran Avanaki and Rayyan Manwar
Optics 2024, 5(2), 248-266; https://doi.org/10.3390/opt5020018 - 30 Apr 2024
Cited by 8 | Viewed by 4777
Abstract
This article provides an overview of the progress made in skin imaging using two emerging imaging modalities, optical coherence tomography (OCT) and photoacoustic imaging (PAI). Over recent years, these technologies have significantly advanced our understanding of skin structure and function, offering non-invasive and high-resolution insights previously unattainable. The review begins by briefly describing the fundamental principles of how OCT and PAI capture images. It then explores the evolving applications of OCT in dermatology, ranging from diagnosing skin disorders to monitoring treatment responses. This article continues by briefly describing the capabilities of PAI, and how PAI has been used for melanoma and non-melanoma skin cancer detection and characterization, vascular imaging, and more. The third section describes the development of multimodal skin imaging systems that include OCT, PAI, or both modes. A comparative analysis between OCT and PAI is presented, elucidating their respective strengths, limitations, and synergies in the context of skin imaging.

20 pages, 14378 KiB  
Article
Multimodal Method for Differentiating Various Clinical Forms of Basal Cell Carcinoma and Benign Neoplasms In Vivo
by Yuriy I. Surkov, Isabella A. Serebryakova, Yana K. Kuzinova, Olga M. Konopatskova, Dmitriy V. Safronov, Sergey V. Kapralov, Elina A. Genina and Valery V. Tuchin
Diagnostics 2024, 14(2), 202; https://doi.org/10.3390/diagnostics14020202 - 17 Jan 2024
Cited by 4 | Viewed by 2520
Abstract
Correct classification of skin lesions is a key step in skin cancer screening, which requires high accuracy and interpretability. This paper proposes a multimodal method for differentiating various clinical forms of basal cell carcinoma and benign neoplasms that includes machine learning. This study was conducted on 37 neoplasms, including benign neoplasms and five different clinical forms of basal cell carcinoma. The proposed multimodal screening method combines diffuse reflectance spectroscopy, optical coherence tomography and high-frequency ultrasound. Using diffuse reflectance spectroscopy, the coefficients of melanin pigmentation, erythema, hemoglobin content, and the slope coefficient of diffuse reflectance spectroscopy in the wavelength range 650–800 nm were determined. Statistical texture analysis of optical coherence tomography images was used to calculate first- and second-order statistical parameters. The analysis of ultrasound images assessed the shape of the tumor according to parameters such as area, perimeter, roundness and other characteristics. Based on the calculated parameters, a machine learning algorithm was developed to differentiate the various clinical forms of basal cell carcinoma. The proposed algorithm for classifying various forms of basal cell carcinoma and benign neoplasms provided a sensitivity of 70.6 ± 17.3%, specificity of 95.9 ± 2.5%, precision of 72.6 ± 14.2%, F1 score of 71.5 ± 15.6% and mean intersection over union of 57.6 ± 20.1%. Moreover, for differentiating basal cell carcinoma and benign neoplasms without taking into account the clinical form, the method achieved a sensitivity of 89.1 ± 8.0%, specificity of 95.1 ± 0.7%, F1 score of 89.3 ± 3.4% and mean intersection over union of 82.6 ± 10.8%.
(This article belongs to the Special Issue Advanced Role of Optical Coherence Tomography in Clinical Medicine)
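First-order statistical texture parameters of the kind computed from the OCT images can be sketched as follows; the abstract does not list which parameters were used, so treat this selection (mean, variance, skewness) as illustrative.

```python
import math

def first_order_stats(pixels):
    """First-order statistical texture parameters of an image region,
    computed from the gray-level values alone (no spatial relations)."""
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    std = math.sqrt(variance)
    skewness = (sum((p - mean) ** 3 for p in pixels) / n) / std ** 3 if std else 0.0
    return {"mean": mean, "variance": variance, "skewness": skewness}
```

Second-order statistics, by contrast, are computed from constructs such as a gray-level co-occurrence matrix and additionally capture spatial relations between pixel pairs.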

20 pages, 2441 KiB  
Article
Soft Epidermal Paperfluidics for Sweat Analysis by Ratiometric Raman Spectroscopy
by Ata Golparvar, Lucie Thenot, Assim Boukhayma and Sandro Carrara
Biosensors 2024, 14(1), 12; https://doi.org/10.3390/bios14010012 - 25 Dec 2023
Cited by 6 | Viewed by 5192
Abstract
The expanding interest in digital biomarker analysis focused on non-invasive human bodily fluids, such as sweat, highlights the pressing need for easily manufactured and highly efficient soft lab-on-skin solutions. Here, we report, for the first time, the integration of microfluidic paper-based devices (μPAD) and non-enhanced Raman-scattering-enabled optical biochemical sensing (Raman biosensing). Their integration merges the enormous benefits of μPAD, with high potential for commercialization and use in resource-limited settings, with biorecognition-element-free (but highly selective) optical Raman biosensing. The introduced thin (0.36 mm), ultra-lightweight (0.19 g), and compact footprint (3 cm2) opto-paperfluidic sweat patch is flexible, stretchable, and conforms, irritation-free, to hairless or minimally haired body regions to enable swift sweat collection. As a great advantage, this new bio-chemical sensory system excels through its absence of onboard biorecognition elements (bioreceptor-free) and omission of plasmonic nanomaterials. The proposed easy fabrication process is adaptable to mass production by following a fully sustainable and cost-effective process utilizing only basic tools by avoiding typically employed printing or laser patterning. Furthermore, efficient collection and transportation of precise sweat volumes, driven exclusively by the wicking properties of porous materials, shows high efficiency in liquid transportation and reduces biosensing latency by a factor of 5 compared to state-of-the-art epidermal microfluidics. The proposed unit enables electronic chip-free and imaging-less visual sweat loss quantification as well as optical biochemical analysis when coupled with Raman spectroscopy. 
We investigated the multimodal quantification of sweat urea and lactate levels ex vivo (with synthetic sweat including 30+ sweat analytes on porcine skin) and achieved a linear dynamic range from 0 to 100 mmol/L during fully dynamic continuous flow characterization.
(This article belongs to the Special Issue SERS-Based Biosensors: Design and Biomedical Applications)

15 pages, 12218 KiB  
Article
Presumed Onchocerciasis Chorioretinitis Spilling over into North America, Europe and Middle East
by Ahmad Mansour, Linnet Rodriguez, Hana Mansour, Madeleine Yehia and Maurizio Battaglia Parodi
Diagnostics 2023, 13(24), 3626; https://doi.org/10.3390/diagnostics13243626 - 8 Dec 2023
Cited by 4 | Viewed by 1685
Abstract
Background: Newer generation ophthalmologists practicing in the developed world are not very familiar with some tropical ocular diseases due to the absence of reports in the ophthalmic literature over the past thirty years. Because of world globalization and the influx of immigrants from sub-Saharan Africa, exotic retinal diseases are being encountered more often in ophthalmology clinics. Methods: A multicenter case series of chorioretinitis or optic neuritis with obscure etiology that used serial multimodal imaging. Results: Four cases qualified with the diagnosis of presumed ocular onchocerciasis based on their residence near fast rivers in endemic areas, multimodal imaging, long-term follow-up showing progressive disease and negative workup for other diseases. Characteristic findings include peripapillary choroiditis with optic neuritis or atrophy, subretinal tracts of the microfilaria, progressive RPE atrophy around heavily pigmented multifocal chorioretinal lesions of varying shapes, subretinal white or crystalline dots, and response to ivermectin. Typical skin findings are often absent in such patients with chorioretinitis, rendering the diagnosis more challenging. Conclusions: Familiarity with the myriad ocular findings of onchocerciasis and a high degree of suspicion in subjects residing in endemic areas can help in the correct diagnosis and implementation of appropriate therapy. Onchocercal chorioretinitis is a slow, insidious, progressive, and prolonged polymorphous disease.
(This article belongs to the Section Medical Imaging and Theranostics)
12 pages, 1703 KiB  
Article
Multispectral Imaging Analysis of Skin Lesions in Patients with Neurofibromatosis Type 1
by Emilija V. Plorina, Kristine Saulus, Ainars Rudzitis, Norbert Kiss, Márta Medvecz, Tatjana Linova, Dmitrijs Bliznuks, Alexey Lihachev and Ilze Lihacova
J. Clin. Med. 2023, 12(21), 6746; https://doi.org/10.3390/jcm12216746 - 25 Oct 2023
Cited by 1 | Viewed by 1917
Abstract
Neurofibromatosis type 1 (NF1) is a rare disease, affecting around 1 in 3500 individuals in the general population. The rarity of the disease contributes to the scarcity of available diagnostic and therapeutic approaches. Multispectral imaging is a non-invasive imaging method that shows promise in the diagnosis of various skin diseases. The device used in the present study consisted of four sets of narrow-band LEDs: 526 nm, 663 nm, and 964 nm for diffuse reflectance imaging, and 405 nm LEDs, filtered through a 515 nm long-pass filter, for autofluorescence imaging. RGB images were captured using a CMOS camera inside the device. This paper presents the results of this multispectral skin imaging approach to distinguishing the lesions of patients with NF1 from other, more common benign skin lesions. The results show that the method offers a potential novel approach to distinguishing NF1 lesions from other benign skin lesions. Full article
(This article belongs to the Section Dermatology)

14 pages, 1224 KiB  
Article
Treatment with the Topical Antimicrobial Peptide Omiganan in Mild-to-Moderate Facial Seborrheic Dermatitis versus Ketoconazole and Placebo: Results of a Randomized Controlled Proof-of-Concept Trial
by Jannik Rousel, Mahdi Saghari, Lisa Pagan, Andreea Nădăban, Tom Gambrah, Bart Theelen, Marieke L. de Kam, Jorine Haakman, Hein E. C. van der Wall, Gary L. Feiss, Tessa Niemeyer-van der Kolk, Jacobus Burggraaf, Joke A. Bouwstra, Robert Rissmann and Martijn B. A. van Doorn
Int. J. Mol. Sci. 2023, 24(18), 14315; https://doi.org/10.3390/ijms241814315 - 20 Sep 2023
Cited by 6 | Viewed by 3519
Abstract
Facial seborrheic dermatitis (SD) is an inflammatory skin disease characterized by erythematous and scaly lesions on skin with high sebaceous gland activity. The yeast Malassezia is regarded as a key pathogenic driver in this disease, but increased Staphylococcus abundances and barrier dysfunction are implicated as well. Here, we evaluated the antimicrobial peptide omiganan as a treatment for SD, since it has shown both antifungal and antibacterial activity. A randomized, patient- and evaluator-blinded trial was performed comparing four-week, twice-daily topical administration of omiganan 1.75%, the comparator ketoconazole 2.00%, and placebo in patients with mild-to-moderate facial SD. Safety was monitored, and efficacy was determined by clinical scoring complemented with imaging. Microbial profiling was performed, and barrier integrity was assessed by trans-epidermal water loss and ceramide lipidomics. Omiganan was safe and well tolerated but did not result in a significant clinical improvement of SD, nor did it affect other biomarkers, compared to the placebo. Ketoconazole significantly reduced disease severity compared to the placebo, with reduced Malassezia abundances, increased microbial diversity, restored skin barrier function, and decreased short-chain ceramide Cer[NSc34]. No significant decreases in Staphylococcus abundances were observed compared to the placebo. Omiganan is well tolerated but not efficacious in the treatment of facial SD. The previously established antimicrobial and antifungal properties of omiganan could not be demonstrated. Our multimodal characterization of the response to ketoconazole has reaffirmed previous insights into its mechanism of action. Full article
(This article belongs to the Special Issue Skin Diseases: Molecular Targets for New Therapeutics)
