Search Results (84)

Search Parameters:
Keywords = the mobile skin image

17 pages, 2307 KiB  
Article
DeepBiteNet: A Lightweight Ensemble Framework for Multiclass Bug Bite Classification Using Image-Based Deep Learning
by Doston Khasanov, Halimjon Khujamatov, Muksimova Shakhnoza, Mirjamol Abdullaev, Temur Toshtemirov, Shahzoda Anarova, Cheolwon Lee and Heung-Seok Jeon
Diagnostics 2025, 15(15), 1841; https://doi.org/10.3390/diagnostics15151841 - 22 Jul 2025
Viewed by 340
Abstract
Background/Objectives: The accurate identification of insect bites from images of skin is challenging due to the fine gradations among diverse bite types, variability in human skin response, and inconsistencies in image quality. Methods: In this work, we introduce DeepBiteNet, a new ensemble-based deep learning model designed to perform robust multiclass classification of insect bites from RGB images. Our model combines three semantically diverse convolutional neural networks—DenseNet121, EfficientNet-B0, and MobileNetV3-Small—using a stacked meta-classifier that aggregates their predicted outcomes into an integrated, discriminatively strong output. Our technique balances heterogeneous feature representation with suppression of individual model biases. The model was trained and evaluated on a hand-collected set of 1932 labeled images representing eight classes, consisting of common bites such as mosquito, flea, and tick bites, as well as unaffected skin. A domain-specific augmentation pipeline introduced realistic variability in lighting, occlusion, and skin tone, thereby boosting generalizability. Results: DeepBiteNet achieved a training accuracy of 89.7%, validation accuracy of 85.1%, and test accuracy of 84.6%, and surpassed fifteen benchmark CNN architectures on all key indicators, viz., precision (0.880), recall (0.870), and F1-score (0.875). Optimized for mobile deployment with quantization and TensorFlow Lite, the model enables rapid on-device computation and eliminates reliance on cloud-based processing. Conclusions: Our work shows how ensemble learning, when carefully designed and combined with realistic data augmentation, can boost the reliability and usability of automatic insect bite diagnosis. DeepBiteNet forms a promising foundation for future integration with mobile health (mHealth) solutions and may complement early diagnosis and triage in dermatologically underserved regions. Full article
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Diagnostics and Analysis 2024)
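As a concrete illustration of the stacked-ensemble idea described in this abstract, the sketch below wires DenseNet121, EfficientNet-B0, and MobileNetV3-Small backbones into a single Keras model whose meta-classifier is a dense layer over the concatenated class probabilities, followed by a post-training TensorFlow Lite export. This is a minimal sketch under assumed settings (224×224 inputs, eight classes, frozen ImageNet weights), not the authors' actual DeepBiteNet implementation; in a true stacking setup the base heads would be trained first and the meta layer fit on their held-out predictions.

```python
# Minimal stacking-ensemble sketch (assumptions: 8 classes, 224x224 RGB inputs,
# frozen ImageNet backbones). Illustrative only, not the published DeepBiteNet code.
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_CLASSES = 8
INPUT_SHAPE = (224, 224, 3)

def base_classifier(backbone_fn, name):
    """Wrap an ImageNet backbone with a small softmax head."""
    backbone = backbone_fn(include_top=False, weights="imagenet",
                           input_shape=INPUT_SHAPE, pooling="avg")
    backbone.trainable = False  # keep pre-trained features fixed in this sketch
    inputs = layers.Input(INPUT_SHAPE)
    outputs = layers.Dense(NUM_CLASSES, activation="softmax")(backbone(inputs))
    return Model(inputs, outputs, name=name)

bases = [
    base_classifier(tf.keras.applications.DenseNet121, "densenet121"),
    base_classifier(tf.keras.applications.EfficientNetB0, "efficientnet_b0"),
    base_classifier(tf.keras.applications.MobileNetV3Small, "mobilenet_v3_small"),
]

# Meta-classifier: a dense layer over the concatenated probability vectors.
inputs = layers.Input(INPUT_SHAPE)
stacked = layers.Concatenate()([m(inputs) for m in bases])
meta = layers.Dense(NUM_CLASSES, activation="softmax", name="meta")(stacked)
ensemble = Model(inputs, meta, name="stacked_ensemble")
ensemble.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                 metrics=["accuracy"])

# Post-training quantized TensorFlow Lite export for on-device inference.
converter = tf.lite.TFLiteConverter.from_keras_model(ensemble)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
```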

19 pages, 1442 KiB  
Article
Hyperspectral Imaging for Enhanced Skin Cancer Classification Using Machine Learning
by Teng-Li Lin, Arvind Mukundan, Riya Karmakar, Praveen Avala, Wen-Yen Chang and Hsiang-Chen Wang
Bioengineering 2025, 12(7), 755; https://doi.org/10.3390/bioengineering12070755 - 11 Jul 2025
Viewed by 463
Abstract
Objective: The classification of skin cancer is very helpful for its early diagnosis and treatment, given the complexity involved in differentiating actinic keratosis (AK) from basal cell carcinoma (BCC) and seborrheic keratosis (SK). These conditions are generally not easily distinguished because of their comparable clinical presentations. Method: This paper presents a new hyperspectral imaging approach for enhancing the visualization of skin lesions, the Spectrum-Aided Vision Enhancer (SAVE), which converts any RGB image into a narrow-band image (NBI) by incorporating hyperspectral imaging (HSI), increasing the contrast between cancerous lesions and normal tissue and thereby improving classification accuracy. The study evaluates ten machine learning algorithms for classifying AK, BCC, and SK—convolutional neural network (CNN), random forest (RF), you only look once (YOLO) version 8, support vector machine (SVM), ResNet50, MobileNetV2, logistic regression, SVM with a stochastic gradient descent (SGD) classifier, SVM with a logarithmic (LOG) classifier, and an SVM polynomial classifier—to assess the system's ability to differentiate AK from BCC and SK with heightened accuracy. Results: SAVE enhanced classification performance, increasing accuracy, sensitivity, and specificity compared with a traditional RGB imaging approach. Conclusions: This advanced method offers dermatologists a tool for early and accurate diagnosis, reducing the likelihood of misclassification and improving patient outcomes. Full article
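The SAVE band-conversion step is specific to the cited work, but the classifier-comparison part of the pipeline can be sketched generically. The snippet below cross-validates a few of the named classical models on a feature matrix X that would be derived from SAVE-enhanced images; all names and settings are illustrative assumptions.

```python
# Generic classifier-comparison sketch; X and y stand in for features extracted
# from SAVE-enhanced lesion images and their AK/BCC/SK labels.
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

classifiers = {
    "svm_rbf": SVC(kernel="rbf"),
    "svm_poly": SVC(kernel="poly", degree=3),
    "sgd_hinge": SGDClassifier(loss="hinge"),        # linear SVM trained with SGD
    "sgd_log": SGDClassifier(loss="log_loss"),       # logistic-loss SGD variant
    "logistic": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200),
}

def rank_classifiers(X, y):
    """Return mean 5-fold cross-validated accuracy per classifier."""
    return {name: cross_val_score(clf, X, y, cv=5).mean()
            for name, clf in classifiers.items()}
```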

15 pages, 1454 KiB  
Article
A Thermal Imaging Camera as a Diagnostic Tool to Study the Effects of Occlusal Splints on the Elimination of Masticatory Muscle Tension
by Danuta Lietz-Kijak, Adam Andrzej Garstka, Lidia Szczucka, Roman Ardan, Monika Brzózka-Garstka, Piotr Skomro and Camillo D’Arcangelo
Dent. J. 2025, 13(7), 313; https://doi.org/10.3390/dj13070313 - 11 Jul 2025
Viewed by 405
Abstract
Medical Infrared Thermography (MIT) is a safe, non-invasive technique for assessing temperature changes on the skin’s surface that may reflect pathological processes in the underlying tissues. In temporomandibular joint disorders (TMDs), which are often associated with reduced mobility and muscle overactivity, tissue metabolism and blood flow may be diminished, resulting in localized hypothermia. Aim: The purpose of this study was to evaluate muscle tone in the masseter, suprahyoid, and sternocleidomastoid muscles following the application of two types of occlusal splints, a Michigan splint and a double repositioning splint, based on temperature changes recorded using a Fluke Ti401 PRO thermal imaging camera. Materials and Methods: Sixty dental students diagnosed with TMDs were enrolled in this study. After applying the inclusion and exclusion criteria, participants were randomly assigned to one of two groups. Group M received a Michigan splint, while group D was treated with a double repositioning splint. Results: The type of occlusal splint influenced both temperature distribution and muscle tone. In the double repositioning splint group, temperature decreased by approximately 0.8 °C between T1 and T3, whereas in the Michigan splint group, temperature increased by approximately 0.7 °C over the same period. Conclusions: Occlusal splint design has a measurable impact on temperature distribution and muscle activity. The double repositioning splint appears to be more effective in promoting short-term muscle relaxation and may provide relief for patients experiencing muscular or myofascial TMD symptoms. Full article
(This article belongs to the Special Issue Management of Temporomandibular Disorders)

18 pages, 1667 KiB  
Article
Multi-Task Deep Learning for Simultaneous Classification and Segmentation of Cancer Pathologies in Diverse Medical Imaging Modalities
by Maryem Rhanoui, Khaoula Alaoui Belghiti and Mounia Mikram
Onco 2025, 5(3), 34; https://doi.org/10.3390/onco5030034 - 11 Jul 2025
Viewed by 407
Abstract
Background: Clinical imaging is an important part of health care, providing physicians with great assistance in patient treatment. In fact, segmentation and grading of tumors can help doctors assess the severity of a cancer at an early stage and increase the chances of cure. Although deep learning for cancer diagnosis has achieved clinically acceptable accuracy, challenging tasks remain, especially in the context of insufficient labeled data and the resulting need for expensive computational resources. Objective: This paper presents a lightweight classification and segmentation deep learning model to assist in the identification of cancerous tumors with high accuracy despite the scarcity of medical data. Methods: We propose a multi-task architecture for classification and segmentation of cancerous tumors in the brain, skin, prostate, and lungs. The model is based on the UNet architecture with different pre-trained deep learning models (VGG16 and MobileNetV2) as a backbone. The multi-task model is validated on relatively small datasets (slightly exceeding 1200 images) that are diverse in terms of modalities (MRI, X-ray, dermoscopic, and digital histopathology), number of classes, shapes, and sizes of cancer pathologies, using accuracy and the Dice coefficient as statistical metrics. Results: Experiments show that the multi-task approach improves learning efficiency and prediction accuracy for the segmentation and classification tasks compared to training the individual models separately. The multi-task architecture reached a classification accuracy of 86%, 90%, 88%, and 87% for skin lesion, brain tumor, prostate cancer, and pneumothorax, respectively. For the segmentation tasks, precisions of 95% and 98% were achieved for skin lesion and brain tumor segmentation, respectively, and 99% for both prostate cancer and pneumothorax, showing that the multi-task solution is more efficient than single-task networks. Full article
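A minimal sketch of the multi-task idea follows, assuming a shared MobileNetV2 encoder with one classification head and one segmentation decoder; the paper's exact UNet decoder, skip connections, and loss weighting are not reproduced here.

```python
# Multi-task sketch: one shared encoder, two heads (classification + segmentation).
# Layer choices and sizes are illustrative, not the published architecture.
import tensorflow as tf
from tensorflow.keras import layers, Model

def multitask_model(num_classes, input_shape=(224, 224, 3)):
    encoder = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", input_shape=input_shape)
    x = encoder.output                                  # 7x7 shared feature map

    # Classification branch shares the encoder features.
    cls = layers.GlobalAveragePooling2D()(x)
    cls = layers.Dense(num_classes, activation="softmax", name="cls")(cls)

    # Lightweight upsampling decoder for the segmentation branch
    # (no UNet skip connections here, to keep the sketch short).
    seg = x
    for filters in (256, 128, 64, 32, 16):
        seg = layers.Conv2DTranspose(filters, 3, strides=2, padding="same",
                                     activation="relu")(seg)   # 7 -> 224
    seg = layers.Conv2D(1, 1, activation="sigmoid", name="seg")(seg)

    model = Model(encoder.input, [cls, seg])
    model.compile(optimizer="adam",
                  loss={"cls": "sparse_categorical_crossentropy",
                        "seg": "binary_crossentropy"})
    return model
```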

24 pages, 9593 KiB  
Article
Deep Learning Approaches for Skin Lesion Detection
by Jonathan Vieira, Fábio Mendonça and Fernando Morgado-Dias
Electronics 2025, 14(14), 2785; https://doi.org/10.3390/electronics14142785 - 10 Jul 2025
Viewed by 355
Abstract
Recently, there has been a rise in skin cancer cases, for which early detection is highly relevant, as it increases the likelihood of a cure. In this context, this work presents a benchmarking study of standard Convolutional Neural Network (CNN) architectures for automated skin lesion classification. A total of 38 CNN architectures from ten families (ConvNeXt, DenseNet, EfficientNet, Inception, InceptionResNet, MobileNet, NASNet, ResNet, VGG, and Xception) were evaluated using transfer learning on the HAM10000 dataset for seven-class skin lesion classification, namely, actinic keratoses, basal cell carcinoma, benign keratosis-like lesions, dermatofibroma, melanoma, melanocytic nevi, and vascular lesions. The comparative analysis used standardized training conditions, with all models utilizing frozen pre-trained weights. Cross-database validation was then conducted using the ISIC 2019 dataset to assess generalizability across different data distributions. The ConvNeXtXLarge architecture achieved the best performance, despite having one of the lowest performance-to-number-of-parameters ratios, with 87.62% overall accuracy and 76.15% F1 score on the test set, demonstrating competitive results within the established performance range of existing HAM10000-based studies. A proof-of-concept multiplatform mobile application was also implemented using a client–server architecture with encrypted image transmission, demonstrating the viability of integrating high-performing models into healthcare screening tools. Full article
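The frozen-backbone transfer-learning setup reported for the best model can be sketched as follows, assuming 224×224 inputs and the seven HAM10000 classes; the optimizer, learning rate, and input pipeline are illustrative rather than those used in the study.

```python
# Frozen-backbone transfer-learning sketch for seven-class skin lesion
# classification; hyperparameters are assumptions, not the paper's settings.
import tensorflow as tf
from tensorflow.keras import layers, Model

backbone = tf.keras.applications.ConvNeXtXLarge(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(224, 224, 3))
backbone.trainable = False            # frozen pre-trained weights, as in the study

inputs = layers.Input((224, 224, 3))
x = backbone(inputs, training=False)
outputs = layers.Dense(7, activation="softmax")(x)   # seven HAM10000 classes
model = Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```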

4 pages, 2945 KiB  
Interesting Images
Dynamic Digital Radiography in Ehlers–Danlos Syndrome: Visualizing Diaphragm Motility Impairment and Its Influence on Clinical Management
by Elisa Calabrò, Maurizio Cè, Francesca Lucrezia Rabaiotti, Laura Macrì and Michaela Cellina
Diagnostics 2025, 15(11), 1343; https://doi.org/10.3390/diagnostics15111343 - 27 May 2025
Viewed by 1320
Abstract
A 40-year-old woman with a known diagnosis of Ehlers–Danlos syndrome (EDS) began experiencing progressive shortness of breath and reduced exercise tolerance following her second pregnancy. The patient underwent an unenhanced computed tomography (CT) scan that showed a marked elevation of the left diaphragm. Suspecting diaphragm dysfunction, the patient underwent Dynamic Digital Radiography (DDR) that confirmed a reduction in left diaphragm motility, indicative of impaired diaphragm function. Based on the DDR findings, which demonstrated reduced but preserved diaphragmatic motion without paradoxical movement or complete immobility, the thoracic surgeon decided that surgical intervention, such as plication, was not necessary. Instead, rehabilitation exercises, including breathing techniques and diaphragm strengthening, were recommended. EDS includes connective tissue disorders that vary in severity but are typically characterized by hypermobility of the joints, skin hyper-elasticity, and a predisposition to vascular fragility. One of the complications of EDS is weakened connective tissues, which can affect the diaphragm, impairing the contractility of the muscle and leading to impaired mobility and respiratory symptoms such as shortness of breath. Diaphragm dysfunction can manifest as reduced movement, potentially related to the underlying connective tissue weakness. This case highlights the clinical value of DDR as a non-invasive, low-dose, and dynamic imaging modality in the diagnosis of diaphragmatic dysfunction in EDS patients, enabling individualized treatment planning and potentially avoiding unnecessary surgical interventions. Full article
(This article belongs to the Special Issue Advances in the Diagnosis and Management of Respiratory Illnesses)

28 pages, 813 KiB  
Systematic Review
Neuroscientific Insights into the Built Environment: A Systematic Review of Empirical Research on Indoor Environmental Quality, Physiological Dynamics, and Psychological Well-Being in Real-Life Contexts
by Aitana Grasso-Cladera, Maritza Arenas-Perez, Paulina Wegertseder-Martinez, Erich Vilina, Josefina Mattoli-Sanchez and Francisco J. Parada
Int. J. Environ. Res. Public Health 2025, 22(6), 824; https://doi.org/10.3390/ijerph22060824 - 23 May 2025
Viewed by 891
Abstract
The research aims to systematize the current scientific evidence on methodologies used to investigate the impact of the indoor built environment on well-being, focusing on indoor environmental quality (IEQ) variables such as thermal comfort, air quality, noise, and lighting. This systematic review adheres to the Joanna Briggs Institute framework and PRISMA guidelines to assess empirical studies that incorporate physiological measurements like heart rate, skin temperature, and brain activity, which are captured through various techniques in real-life contexts. The principal results reveal a significant interest in the relationship between the built environment and physiological as well as psychological states. For instance, thermal comfort was found to be the most commonly studied IEQ variable, affecting heart activity and skin temperature. The research also identifies the need for a shift towards using advanced technologies like Mobile Brain/Body Imaging (MoBI) for capturing real-time physiological data in natural settings. Major conclusions include the need for a multi-level, evidence-based approach that considers the dynamic interaction between the brain, body, and environment. This study advocates for the incorporation of multiple physiological signals to gain a comprehensive understanding of well-being in relation to the built environment. It also highlights gaps in current research, such as the absence of noise as a studied variable of IEQ and the need for standardized well-being assessment tools. By synthesizing these insights, the research aims to pave the way for future studies that can inform better design and policy decisions for indoor environments. Full article

24 pages, 10867 KiB  
Article
Machine Learning-Based Smartphone Grip Posture Image Recognition and Classification
by Dohoon Kwon, Xin Cui, Yejin Lee, Younggeun Choi, Aditya Subramani Murugan, Eunsik Kim and Heecheon You
Appl. Sci. 2025, 15(9), 5020; https://doi.org/10.3390/app15095020 - 30 Apr 2025
Viewed by 655
Abstract
Uncomfortable smartphone grip postures resulting from inappropriate user interface design can degrade smartphone usability. This study aims to develop a classification model for smartphone grip postures by detecting the positions of the hand and fingers on smartphones using machine learning techniques. Seventy participants (35 males and 35 females; average age 38.5 ± 12.2 years) with varying hand sizes took part in the grip posture experiment. The participants performed four tasks (making calls, listening to music, sending text messages, and web browsing) using nine smartphone mock-ups of different sizes, while cameras positioned above and below their hands recorded their usage. A total of 3278 grip posture images were extracted from the recorded videos and were preprocessed using a skin color and hand contour detection model. The grip postures were categorized into seven types, and three models (MobileNetV2, Inception V3, and ResNet-50), along with an ensemble model, were used for classification. The ensemble-based classification model achieved an accuracy of 95.9%, demonstrating higher accuracy than the individual models: MobileNetV2 (90.6%), ResNet-50 (94.2%), and Inception V3 (85.9%). The classification model developed in this study can efficiently analyze grip postures, thereby improving usability in the development of smartphones and other electronic devices. Full article
(This article belongs to the Special Issue Novel Approaches and Applications in Ergonomic Design III)
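The skin-color and hand-contour pre-processing step can be sketched with OpenCV as below; the YCrCb thresholds and morphology are commonly used illustrative values, not the exact detection model from the study.

```python
# Skin-color masking and hand-contour extraction sketch with OpenCV.
# Threshold values are illustrative, not taken from the paper.
import cv2
import numpy as np

def extract_hand_region(bgr_image):
    """Return a binary skin mask and the largest contour (assumed to be the hand)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # common skin range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None
    hand = max(contours, key=cv2.contourArea)   # largest blob assumed to be the hand
    return mask, hand
```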

36 pages, 12865 KiB  
Article
Enhancing Recognition and Categorization of Skin Lesions with Tailored Deep Convolutional Networks and Robust Data Augmentation Techniques
by Syed Ibrar Hussain and Elena Toscano
Mathematics 2025, 13(9), 1480; https://doi.org/10.3390/math13091480 - 30 Apr 2025
Viewed by 822
Abstract
This study introduces deep convolutional neural network-based methods for the detection and classification of skin lesions, enhancing system accuracy through a combination of architectures, pre-processing techniques and data augmentation. Multiple networks, including XceptionNet, DenseNet, MobileNet, NASNet Mobile, and EfficientNet, were evaluated to test deep learning’s potential in complex, multi-class classification tasks. Training these models on pre-processed datasets with optimized hyper-parameters (e.g., batch size, learning rate, and dropout) improved classification precision for early-stage skin cancers. Evaluation measures such as accuracy and loss confirmed high classification efficiency with minimal overfitting, as the validation results aligned closely with training. DenseNet-201 and MobileNet-V3 Large demonstrated strong generalization abilities, whereas EfficientNetV2-B3 and NASNet Mobile achieved the best balance between accuracy and efficiency. The application of different augmentation rates per class also enhanced the handling of imbalanced data, resulting in more accurate large-scale detection. Comprehensive pre-processing ensured balanced class representation, and EfficientNetV2 models achieved exceptional classification accuracy, attributed to their optimized architecture balancing depth, width, and resolution. These models showed high convergence rates and generalization, supporting their suitability for medical imaging tasks using transfer learning. Full article
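The class-dependent augmentation idea mentioned in this abstract, where under-represented classes receive more augmented copies per original image, might look like the following sketch; the class names and rates are hypothetical.

```python
# Per-class augmentation-rate sketch for imbalanced skin lesion data.
# AUG_RATE values and class names are hypothetical examples.
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])

AUG_RATE = {"melanoma": 4, "nevus": 1, "dermatofibroma": 8}

def oversample(images, label):
    """Yield each original image plus AUG_RATE[label] augmented copies."""
    for img in images:
        yield img, label
        for _ in range(AUG_RATE.get(label, 1)):
            yield augment(tf.expand_dims(img, 0), training=True)[0], label
```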

26 pages, 12422 KiB  
Article
Deep Learning-Based Web Application for Automated Skin Lesion Classification and Analysis
by Serra Aksoy, Pinar Demircioglu and Ismail Bogrekci
Dermato 2025, 5(2), 7; https://doi.org/10.3390/dermato5020007 - 24 Apr 2025
Viewed by 1244
Abstract
Background/Objectives: Skin lesions, ranging from benign to malignant diseases, pose a difficult dermatological challenge due to their great diversity and variable severity. Their detection at an early stage and proper classification, particularly between benign Nevus (NV), precancerous Actinic Keratosis (AK), and Squamous Cell Carcinoma (SCC), are crucial for improving the effectiveness of treatment and patient prognosis. The goal of this study was to test deep learning (DL) models to determine the best architecture for classifying lesions and to create a web-based platform for improved diagnostic and educational availability. Methods: Various DL models, such as Xception, DenseNet169, ResNet152V2, InceptionV3, MobileNetV2, EfficientNetV2 Small, and NASNetMobile, were compared for classification accuracy. The top model was incorporated into a web application, allowing users to upload images for automatic classification and providing confidence scores as a measure of the reliability of predictions. The tool also offers visualization capabilities that allow users to investigate feature maps derived from convolutional layers, improving interpretability. Web scraping and summarization techniques were also employed to offer concise, evidence-based dermatological information from established sources. Results: Of the models evaluated, DenseNet169 achieved the best classification accuracy of 85% and was, therefore, chosen as the base architecture for the web application. The application enhances diagnostic clarity by visualizing features and promotes access to trustworthy medical information on dermatological disorders. Conclusions: The developed web application serves as both a diagnostic support system for dermatologists and an educational resource for the general public. By using DL-based classification, interpretability techniques, and automatic medical information extraction, it facilitates early intervention and increases awareness regarding skin health. Full article
(This article belongs to the Collection Artificial Intelligence in Dermatology)
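Feature-map inspection of the kind the application exposes can be sketched by probing an intermediate layer of a DenseNet169 backbone; the chosen layer name, input, and preprocessing are illustrative assumptions.

```python
# Feature-map probing sketch for a DenseNet169 backbone; layer choice and the
# random placeholder input are illustrative only.
import numpy as np
import tensorflow as tf

base = tf.keras.applications.DenseNet169(weights="imagenet", include_top=False,
                                          input_shape=(224, 224, 3))
# Secondary model whose output is an intermediate convolutional activation.
probe = tf.keras.Model(base.input, base.get_layer("conv3_block12_concat").output)

image = np.random.rand(1, 224, 224, 3).astype("float32")   # placeholder input
feature_maps = probe(image)            # shape (1, 28, 28, channels)
print(feature_maps.shape)
```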

18 pages, 5347 KiB  
Article
An Image Analysis for the Development of a Skin Change-Based AI Screening Model as an Alternative to the Bite Pressure Test
by Yoshihiro Takeda, Kanetaka Yamaguchi, Naoto Takahashi, Yasuhiro Nakanishi and Morio Ochi
Healthcare 2025, 13(8), 936; https://doi.org/10.3390/healthcare13080936 - 18 Apr 2025
Viewed by 651
Abstract
Background/Objectives: Oral function assessments in hospitals and nursing facilities are mainly performed by nurses and caregivers, but oral function is sometimes not properly assessed. As a result, elderly people are not provided with meals appropriate for their masticatory function, increasing the risk of aspiration and other complications. In the present study, we aimed to examine image analysis conditions in order to create an AI model that can easily and objectively screen masticatory function based on occlusal pressure. Methods: Sampling was conducted at the Hokkaido University of Health Sciences (Hokkaido, Japan) and the university's affiliated dental clinic in Hokkaido. Results: We collected 241 waveform images of changes in skin shape during chewing over a 20 s test period from 110 participants. Our study used two approaches for image analysis: convolutional neural networks (CNNs) and transfer learning. In the transfer learning analysis, MobileNetV2 and Xception achieved the highest classification accuracy (validation accuracy: 0.673). Conclusions: Analyses of waveform images of changes in skin shape may therefore contribute to the development of a skin change-based screening model as an alternative to the bite pressure test. Full article
(This article belongs to the Special Issue Novel Therapeutic and Diagnostic Strategies for Oral Diseases)

22 pages, 5756 KiB  
Article
Optimizing Digital Image Quality for Improved Skin Cancer Detection
by Bogdan Dugonik, Marjan Golob, Marko Marhl and Aleksandra Dugonik
J. Imaging 2025, 11(4), 107; https://doi.org/10.3390/jimaging11040107 - 31 Mar 2025
Viewed by 903
Abstract
The rising incidence of skin cancer, particularly melanoma, underscores the need for improved diagnostic tools in dermatology. Accurate imaging plays a crucial role in early detection, yet challenges related to color accuracy, image distortion, and resolution persist, leading to diagnostic errors. This study addresses these issues by evaluating color reproduction accuracy across various imaging devices and lighting conditions. Using a ColorChecker test chart, color deviations were measured through Euclidean distances (ΔE*, ΔC*), and nonlinear color differences (ΔE00, ΔC00), while the color rendering index (CRI) and television lighting consistency index (TLCI) were used to evaluate the influence of light sources on image accuracy. Significant color discrepancies were identified among mobile phones, DSLRs, and mirrorless cameras, with inadequate dermatoscope lighting systems contributing to further inaccuracies. We demonstrate practical applications, including manual camera adjustments, grayscale reference cards, post-processing techniques, and optimized lighting conditions, to improve color accuracy. This study provides applicable solutions for enhancing color accuracy in dermatological imaging, emphasizing the need for standardized calibration techniques and imaging protocols to improve diagnostic reliability, support AI-assisted skin cancer detection, and contribute to high-quality image databases for clinical and automated analysis. Full article
(This article belongs to the Special Issue Novel Approaches to Image Quality Assessment)
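The CIE76 and CIEDE2000 colour differences referenced in this abstract can be computed with scikit-image as in the sketch below; the two uniform patches are placeholders standing in for a ColorChecker reference patch and its captured counterpart.

```python
# Colour-difference sketch: CIE76 (Euclidean in Lab) and CIEDE2000 between a
# reference patch and a captured patch. Patch values are placeholders.
import numpy as np
from skimage import color

def color_errors(reference_rgb, captured_rgb):
    """Return mean (deltaE_76, deltaE_2000) between two sRGB patches in [0, 1]."""
    ref_lab = color.rgb2lab(reference_rgb)
    cap_lab = color.rgb2lab(captured_rgb)
    de76 = color.deltaE_cie76(ref_lab, cap_lab).mean()
    de00 = color.deltaE_ciede2000(ref_lab, cap_lab).mean()
    return de76, de00

# Example with two uniform 10x10 patches (placeholder "dark skin" patch values).
ref = np.full((10, 10, 3), [0.40, 0.30, 0.25])
cap = np.full((10, 10, 3), [0.42, 0.29, 0.24])
print(color_errors(ref, cap))
```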

21 pages, 2382 KiB  
Article
Melanoma Skin Cancer Recognition with a Convolutional Neural Network and Feature Dimensions Reduction with Aquila Optimizer
by Jalaleddin Mohamed, Necmi Serkan Tezel, Javad Rahebi and Raheleh Ghadami
Diagnostics 2025, 15(6), 761; https://doi.org/10.3390/diagnostics15060761 - 18 Mar 2025
Viewed by 674
Abstract
Background: Melanoma is a highly aggressive form of skin cancer, necessitating early and accurate detection for effective treatment. This study aims to develop a novel classification system for melanoma detection that integrates Convolutional Neural Networks (CNNs) for feature extraction and the Aquila Optimizer (AO) for feature dimension reduction, improving both computational efficiency and classification accuracy. Methods: The proposed method utilized CNNs to extract features from melanoma images, while the AO was employed to reduce feature dimensionality, enhancing the performance of the model. The effectiveness of this hybrid approach was evaluated on three publicly available datasets: ISIC 2019, ISBI 2016, and ISBI 2017. Results: For the ISIC 2019 dataset, the model achieved 97.46% sensitivity, 98.89% specificity, 98.42% accuracy, 97.91% precision, 97.68% F1-score, and 99.12% AUC-ROC. On the ISBI 2016 dataset, it reached 98.45% sensitivity, 98.24% specificity, 97.22% accuracy, 97.84% precision, 97.62% F1-score, and 98.97% AUC-ROC. For ISBI 2017, the results were 98.44% sensitivity, 98.86% specificity, 97.96% accuracy, 98.12% precision, 97.88% F1-score, and 99.03% AUC-ROC. The proposed method outperforms existing advanced techniques, with a 4.2% higher accuracy, a 6.2% improvement in sensitivity, and a 5.8% increase in specificity. Additionally, the AO reduced computational complexity by up to 37.5%. Conclusions: The deep learning-Aquila Optimizer (DL-AO) framework offers a highly efficient and accurate approach for melanoma detection, making it suitable for deployment in resource-constrained environments such as mobile and edge computing platforms. The integration of DL with metaheuristic optimization significantly enhances accuracy, robustness, and computational efficiency in melanoma detection. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
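The Aquila Optimizer's update rules are not reproduced here, but the wrapper-style feature-dimension reduction it performs over CNN feature columns can be sketched generically as a search over binary feature masks scored by a downstream classifier; everything in the snippet is illustrative and stands in for the metaheuristic used in the paper.

```python
# Generic wrapper-style feature-selection sketch (random-mask search), standing
# in for the Aquila Optimizer; X holds CNN-extracted features, y the labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def select_features(X, y, n_iter=50, keep_ratio=0.5, seed=0):
    """Search random binary masks and keep the one with the best CV accuracy."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    best_mask, best_score = None, -np.inf
    for _ in range(n_iter):
        mask = rng.random(n_features) < keep_ratio
        if not mask.any():
            continue
        score = cross_val_score(LogisticRegression(max_iter=500),
                                X[:, mask], y, cv=3).mean()
        if score > best_score:
            best_mask, best_score = mask, score
    return best_mask, best_score
```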

30 pages, 1914 KiB  
Article
Deep Learning Approaches for the Classification of Keloid Images in the Context of Malignant and Benign Skin Disorders
by Olusegun Ekundayo Adebayo, Brice Chatelain, Dumitru Trucu and Raluca Eftimie
Diagnostics 2025, 15(6), 710; https://doi.org/10.3390/diagnostics15060710 - 12 Mar 2025
Viewed by 1380
Abstract
Background/Objectives: Misdiagnosing skin disorders leads to the administration of wrong treatments, sometimes with life-impacting consequences. Deep learning algorithms are increasingly used for diagnosis. While many skin cancer/lesion image classification studies focus on datasets containing dermatoscopic images and do not include keloid images, in this study, we focus on diagnosing keloid disorders amongst other skin lesions and combine two publicly available datasets containing non-dermatoscopic images: one dataset with keloid images and one with images of other various benign and malignant skin lesions (melanoma, basal cell carcinoma, squamous cell carcinoma, actinic keratosis, seborrheic keratosis, and nevus). Methods: Different Convolutional Neural Network (CNN) models are used to classify these disorders as either malignant or benign, to differentiate keloids amongst different benign skin disorders, and furthermore to differentiate keloids among other similar-looking malignant lesions. To this end, we use the transfer learning technique applied to nine different base models: VGG16, MobileNet, InceptionV3, DenseNet121, EfficientNetB0, Xception, InceptionResNetV2, EfficientNetV2L, and NASNetLarge. We explore and compare the results of these models using performance metrics such as accuracy, precision, recall, F1 score, and AUC-ROC. Results: We show that the VGG16 model (after fine-tuning) performs the best in classifying keloid images among other benign and malignant skin lesion images, with the following keloid class performance: an accuracy of 0.985, precision of 1.0, recall of 0.857, F1 score of 0.922, and AUC-ROC value of 0.996. VGG16 also has the best overall average performance (over all classes) in terms of the AUC-ROC and the other performance metrics. Using this model, we further attempt to classify three new non-dermatoscopic anonymised clinical images as either malignant, benign, or keloid, and in the process, we identify some issues related to the collection and processing of such images. Finally, we also show that the DenseNet121 model has the best performance when differentiating keloids from other malignant disorders that have similar clinical presentations. Conclusions: The study emphasised the potential use of deep learning algorithms (and their drawbacks) to identify and classify benign skin disorders such as keloids, which are not usually investigated via these approaches (as opposed to cancers), mainly due to the lack of available data. Full article
(This article belongs to the Special Issue AI in Dermatology)

22 pages, 1334 KiB  
Article
A Robust YOLOv8-Based Framework for Real-Time Melanoma Detection and Segmentation with Multi-Dataset Training
by Saleh Albahli
Diagnostics 2025, 15(6), 691; https://doi.org/10.3390/diagnostics15060691 - 11 Mar 2025
Cited by 1 | Viewed by 1926
Abstract
Background: Melanoma, the deadliest form of skin cancer, demands accurate and timely diagnosis to improve patient survival rates. However, traditional diagnostic approaches rely heavily on subjective clinical interpretations, leading to inconsistencies and diagnostic errors. Methods: This study proposes a robust YOLOv8-based deep learning framework for real-time melanoma detection and segmentation. A multi-dataset training strategy integrating the ISIC 2020, HAM10000, and PH2 datasets was employed to enhance generalizability across diverse clinical conditions. Preprocessing techniques, including adaptive contrast enhancement and artifact removal, were utilized, while advanced augmentation strategies such as CutMix and Mosaic were applied to enhance lesion diversity. The YOLOv8 architecture unified lesion detection and segmentation tasks into a single inference pass, significantly enhancing computational efficiency. Results: Experimental evaluation demonstrated state-of-the-art performance, achieving a mean Average Precision (mAP@0.5) of 98.6%, a Dice Coefficient of 0.92, and an Intersection over Union (IoU) score of 0.88. These results surpass conventional segmentation models including U-Net, DeepLabV3+, Mask R-CNN, SwinUNet, and Segment Anything Model (SAM). Moreover, the proposed framework demonstrated real-time inference speeds of 12.5 ms per image, making it highly suitable for clinical deployment and mobile health applications. Conclusions: The YOLOv8-based framework effectively addresses the limitations of existing diagnostic methods by integrating detection and segmentation tasks, achieving high accuracy and computational efficiency. This study highlights the importance of multi-dataset training for robust generalization and recommends the integration of explainable AI techniques to enhance clinical trust and interpretability. Full article
(This article belongs to the Special Issue Deep Learning Techniques for Medical Image Analysis)
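The single-pass detection-plus-segmentation workflow maps naturally onto the Ultralytics YOLOv8 API, sketched below with placeholder dataset and image paths; the multi-dataset training configuration, augmentations, and model size used in the paper are not reproduced here.

```python
# YOLOv8 segmentation sketch with the Ultralytics API; "melanoma.yaml" and
# "lesion_001.jpg" are placeholder paths, and the training settings are illustrative.
from ultralytics import YOLO

model = YOLO("yolov8n-seg.pt")            # small pretrained segmentation variant

# Optional fine-tuning on a YOLO-format dataset description (hypothetical path).
model.train(data="melanoma.yaml", epochs=100, imgsz=640)

# Single forward pass returns boxes and masks together.
results = model("lesion_001.jpg")
for r in results:
    print(r.boxes.xyxy)                                       # lesion bounding boxes
    print(None if r.masks is None else r.masks.data.shape)    # segmentation masks
```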