Search Results (3,285)

Search Parameters:
Keywords = medical dataset

21 pages, 1633 KiB  
Article
Efficient Deep Learning-Based Arrhythmia Detection Using Smartwatch ECG Electrocardiograms
by Herwin Alayn Huillcen Baca and Flor de Luz Palomino Valdivia
Sensors 2025, 25(17), 5244; https://doi.org/10.3390/s25175244 - 23 Aug 2025
Abstract
According to the World Health Organization, cardiovascular diseases, including cardiac arrhythmias, are the leading cause of death worldwide, in part because of their silent, asymptomatic nature. Early and accurate diagnosis is therefore crucial. Although this task is typically performed by a cardiologist, diagnosing arrhythmias can be imprecise owing to the subjectivity of reading and interpreting electrocardiograms (ECGs), which are often subject to noise and interference. Deep learning-based approaches can detect arrhythmias automatically and are positioned as an alternative to support cardiologists' diagnoses. However, existing methods are trained and tested only on open datasets of ECGs from Holter devices, and they aim to improve state-of-the-art accuracy while neglecting model efficiency and applicability in a practical clinical context. In this work, we propose an efficient model based on a 1D CNN architecture to detect arrhythmias from smartwatch ECGs, for subsequent deployment in a practical scenario for the monitoring and early detection of arrhythmias. Two datasets were used: UMass Medical School Simband, for a binary arrhythmia detection model used to evaluate efficiency and effectiveness, and the MIT-BIH arrhythmia database, to validate the multiclass model and compare it with state-of-the-art models. The binary model achieved an accuracy of 64.81%, a sensitivity of 89.47%, and a specificity of 6.25%, demonstrating the model's reliability, especially in sensitivity. Its computational complexity was 1.2 million parameters and 68.48 MFlops, demonstrating the model's efficiency. Finally, the multiclass model achieved an accuracy of 99.57%, a sensitivity of 99.57%, and a specificity of 99.47%, making it one of the best state-of-the-art proposals and reconfirming the model's reliability.
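The binary-screening figures above follow directly from confusion-matrix counts. A minimal sketch in plain Python; the counts are hypothetical, chosen only because they reproduce the reported percentages:

```python
# Sensitivity, specificity, and accuracy from binary confusion counts.
def binary_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)          # true-positive rate (recall)
    specificity = tn / (tn + fp)          # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts, consistent with the reported 89.47% / 6.25% / 64.81%:
sens, spec, acc = binary_metrics(tp=34, fp=15, tn=1, fn=4)
```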
(This article belongs to the Special Issue Advances in Wearable Sensors for Continuous Health Monitoring)

26 pages, 5260 KiB  
Article
Blurred Lesion Image Segmentation via an Adaptive Scale Thresholding Network
by Qi Chen, Wenmin Wang, Zhibing Wang, Haomei Jia and Minglu Zhao
Appl. Sci. 2025, 15(17), 9259; https://doi.org/10.3390/app15179259 - 22 Aug 2025
Abstract
Medical image segmentation is crucial for disease diagnosis, as precise results help clinicians locate lesion regions. However, lesions often have blurred boundaries and complex shapes, which challenge traditional methods in capturing clear edges and impact accurate localization and complete excision. Small lesions are also clinically important but prone to detail loss during downsampling, reducing segmentation accuracy. To address these issues, we propose a novel adaptive scale thresholding network (AdSTNet) that acts as a lightweight post-processing network, enhancing sensitivity to lesion edges and cores through a dual-threshold adaptive mechanism. This mechanism is the key architectural component and comprises a main threshold map for core localization and an edge threshold map for more precise boundary detection. AdSTNet is compatible with any segmentation network and introduces only a small computational and parameter cost. Additionally, Spatial Attention and Channel Attention (SACA), the Laplacian operator, and a Fusion Enhancement module are introduced to improve feature processing. SACA enhances spatial and channel attention for core localization; the Laplacian operator retains edge details without added complexity; and the Fusion Enhancement module adapts the concatenation operation and a Convolutional Gated Linear Unit (ConvGLU) to strengthen feature intensities, improving edge and small-lesion segmentation. Experiments show that AdSTNet achieves notable performance gains on the ISIC 2018, BUSI, and Kvasir-SEG datasets. Compared with the original U-Net, our method attains mIoU/mDice of 83.40%/90.24% on ISIC 2018, 71.66%/80.32% on BUSI, and 73.08%/81.91% on Kvasir-SEG. Similar improvements are observed for the other networks.
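A fixed-threshold caricature of the dual-threshold idea, in NumPy: a strict main threshold keeps lesion cores, and a looser edge threshold keeps boundary pixels only where they touch a core. The learned, spatially adaptive threshold maps of AdSTNet are replaced here by two hypothetical scalar thresholds:

```python
import numpy as np

def dual_threshold(prob, t_main=0.7, t_edge=0.4):
    core = prob >= t_main          # main threshold: lesion cores
    candidate = prob >= t_edge     # edge threshold: looser candidates
    # Keep candidates only where they touch a core pixel (4-neighbourhood;
    # np.roll wraps at borders, which is acceptable for this sketch).
    grown = core.copy()
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        grown |= np.roll(core, shift, axis=axis)
    return core | (candidate & grown)

prob = np.array([[0.9, 0.5, 0.1],
                 [0.6, 0.8, 0.3],
                 [0.2, 0.5, 0.1]])
mask = dual_threshold(prob)
```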
33 pages, 8496 KiB  
Article
Enhanced Multi-Class Brain Tumor Classification in MRI Using Pre-Trained CNNs and Transformer Architectures
by Marco Antonio Gómez-Guzmán, Laura Jiménez-Beristain, Enrique Efren García-Guerrero, Oscar Adrian Aguirre-Castro, José Jaime Esqueda-Elizondo, Edgar Rene Ramos-Acosta, Gilberto Manuel Galindo-Aldana, Cynthia Torres-Gonzalez and Everardo Inzunza-Gonzalez
Technologies 2025, 13(9), 379; https://doi.org/10.3390/technologies13090379 - 22 Aug 2025
Abstract
Early and accurate identification of brain tumors is essential for determining effective treatment strategies and improving patient outcomes. Artificial intelligence (AI) and deep learning (DL) techniques have shown promise in automating diagnostic tasks based on magnetic resonance imaging (MRI). This study evaluates the performance of four pre-trained deep convolutional neural network (CNN) architectures for the automatic multi-class classification of brain tumors into four categories: Glioma, Meningioma, Pituitary, and No Tumor. The proposed approach uses the publicly accessible Brain Tumor MRI Msoud dataset, consisting of 7023 images, with 5712 provided for training and 1311 for testing. To assess the impact of data availability, subsets containing 25%, 50%, 75%, and 100% of the training data were used, with stratified five-fold cross-validation. The architectures evaluated include DeiT3_base_patch16_224, Xception41, Inception_v4, and Swin_Tiny_Patch4_Window7_224, all fine-tuned using transfer learning. The training pipeline incorporated advanced preprocessing and image data augmentation to enhance robustness and mitigate overfitting. Among the models tested, Swin_Tiny_Patch4_Window7_224 achieved the highest classification accuracy of 99.24% on the test set using 75% of the training data. This model demonstrated superior generalization across all tumor classes and effectively addressed class imbalance. Furthermore, we deployed and benchmarked the best-performing DL model on embedded AI platforms (Jetson AGX Xavier and Orin Nano), demonstrating real-time inference and feasibility for edge-based clinical deployment. The results highlight the strong potential of pre-trained deep CNN and transformer-based architectures in medical image analysis. The proposed approach provides a scalable and energy-efficient solution for automated brain tumor diagnosis, facilitating the integration of AI into clinical workflows.
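The stratified five-fold protocol can be sketched in a few lines of plain Python: indices of each class are dealt round-robin across folds, so every fold keeps the class balance. This is a simplified stand-in for library implementations such as scikit-learn's StratifiedKFold:

```python
from collections import defaultdict

def stratified_kfold(labels, k=5):
    """Deal each class's indices round-robin across k folds."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_class.values():
        for i, idx in enumerate(indices):
            folds[i % k].append(idx)
    return folds

# Toy label list mirroring the four tumor categories above:
labels = ["glioma"] * 10 + ["meningioma"] * 10 + ["pituitary"] * 10 + ["no_tumor"] * 10
folds = stratified_kfold(labels, k=5)
```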
26 pages, 2295 KiB  
Article
Retrospective Urine Metabolomics of Clinical Toxicology Samples Reveals Features Associated with Cocaine Exposure
by Rachel K. Vanderschelden, Reya Kundu, Delaney Morrow, Simmi Patel and Kenichi Tamama
Metabolites 2025, 15(9), 563; https://doi.org/10.3390/metabo15090563 - 22 Aug 2025
Abstract
Background/Objectives: Cocaine is a widely used illicit stimulant with significant toxicity. Despite its clinical relevance, the broader metabolic alterations associated with cocaine use remain incompletely characterized. This study aims to identify novel biomarkers of cocaine exposure by applying untargeted metabolomics to retrospective urine drug screening data. Methods: We conducted a retrospective analysis of a raw mass spectrometry (MS) dataset from urine comprehensive drug screening (UCDS) of 363 patients at the University of Pittsburgh Medical Center Clinical Toxicology Laboratory. The liquid chromatography–quadrupole time-of-flight mass spectrometry (LC-qToF-MS) data were preprocessed with MS-DIAL and subjected to multiple statistical analyses to identify features significantly associated with cocaine enzyme immunoassay (EIA) results. Significant features were further evaluated using MS-FINDER for annotation. Results: Among 14,883 features, 262 were significantly associated with cocaine-EIA results. A subset of 37 more significant features, including known cocaine metabolites and impurities, nicotine metabolites, norfentanyl, and a tryptophan-related metabolite (3-hydroxy-tryptophan), was annotated. Cluster analysis revealed co-varying features, including parent compounds, metabolites, and related ion species. Conclusions: Features associated with cocaine exposure were identified, including previously underrecognized cocaine metabolites and impurities, co-exposure markers, and alterations in an endogenous metabolic pathway. Notably, norfentanyl was significantly associated with cocaine-EIA results, reflecting current trends in illicit drug use. This study highlights the potential of repurposing real-world clinical toxicology data for biomarker discovery, providing a valuable approach to identifying exposure biomarkers and expanding our understanding of drug-induced metabolic disturbances in clinical toxicology. Further validation and exploration using complementary analytical platforms are warranted.
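Screening 14,883 features for association with an assay result calls for a multiple-testing correction. As an illustration (the paper's exact statistical procedure may differ), a Benjamini-Hochberg step in plain Python:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Return indices of features significant at false-discovery rate alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    cutoff = 0
    for rank, i in enumerate(order, start=1):
        # Keep the largest rank whose p-value meets its rank-scaled threshold.
        if pvalues[i] <= alpha * rank / m:
            cutoff = rank
    return {order[r] for r in range(cutoff)}

# Toy p-values for six hypothetical features:
pvals = [0.001, 0.008, 0.039, 0.041, 0.2, 0.6]
sig = benjamini_hochberg(pvals)
```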

15 pages, 622 KiB  
Review
Artificial Intelligence in the Diagnosis and Imaging-Based Assessment of Pelvic Organ Prolapse: A Scoping Review
by Marian Botoncea, Călin Molnar, Vlad Olimpiu Butiurca, Cosmin Lucian Nicolescu and Claudiu Molnar-Varlam
Medicina 2025, 61(8), 1497; https://doi.org/10.3390/medicina61081497 - 21 Aug 2025
Abstract
Background and Objectives: Pelvic organ prolapse (POP) is a complex condition affecting the pelvic floor, often requiring imaging for accurate diagnosis and treatment planning. Artificial intelligence (AI), particularly deep learning (DL), is emerging as a powerful tool in medical imaging. This scoping review aims to synthesize current evidence on the use of AI in the imaging-based diagnosis and anatomical evaluation of POP. Materials and Methods: Following the PRISMA-ScR guidelines, a comprehensive search was conducted in PubMed, Scopus, and Web of Science for studies published between January 2020 and April 2025. Studies were included if they applied AI methodologies, such as convolutional neural networks (CNNs), vision transformers (ViTs), or hybrid models, to diagnostic imaging modalities, such as ultrasound and magnetic resonance imaging (MRI), in women with POP. Results: Eight studies met the inclusion criteria. In these studies, AI technologies were applied to 2D/3D ultrasound and static or stress MRI for segmentation, anatomical landmark localization, and prolapse classification. CNNs were the most commonly used models, often combined with transfer learning; some studies used hybrid ViT models, demonstrating high diagnostic accuracy. However, all studies relied on internal datasets, with limited model interpretability and no external validation, and clinical deployment and outcome assessment remain underexplored. Conclusions: AI shows promise in enhancing POP diagnosis through improved image analysis, but current applications are largely exploratory. Future work should prioritize external validation, standardization, explainable AI, and real-world implementation to bridge the gap between experimental models and clinical utility.
(This article belongs to the Section Obstetrics and Gynecology)

31 pages, 5221 KiB  
Article
Dynamic–Attentive Pooling Networks: A Hybrid Lightweight Deep Model for Lung Cancer Classification
by Williams Ayivi, Xiaoling Zhang, Wisdom Xornam Ativi, Francis Sam and Franck A. P. Kouassi
J. Imaging 2025, 11(8), 283; https://doi.org/10.3390/jimaging11080283 - 21 Aug 2025
Abstract
Lung cancer is one of the leading causes of cancer-related mortality worldwide. Its diagnosis remains challenging due to the subtle and ambiguous nature of early-stage symptoms and imaging findings. Deep learning approaches, specifically Convolutional Neural Networks (CNNs), have significantly advanced medical image analysis, but conventional architectures such as ResNet50 that rely on first-order pooling often fall short. This study aims to overcome the limitations of CNNs in lung cancer classification by proposing a novel dynamic model named LungSE-SOP. The model combines Second-Order Pooling (SOP) and Squeeze-and-Excitation Networks (SENet) within a ResNet50 backbone to improve feature representation and class separation. A novel Dynamic Feature Enhancement (DFE) module is also introduced, which dynamically adjusts the flow of information through SOP and SENet blocks based on learned importance scores. The model was trained on the publicly available IQ-OTH/NCCD lung cancer dataset and assessed using accuracy, precision, recall, F1-score, ROC curves, and confidence intervals. For multiclass tumor classification, our model achieved 98.6% accuracy for benign, 98.7% for malignant, and 99.9% for normal cases, with corresponding F1-scores of 99.2%, 99.8%, and 99.9%, respectively, reflecting high precision and recall across all tumor types and strong potential for clinical deployment.
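The Squeeze-and-Excitation mechanism named above can be sketched in NumPy: pool each channel to a scalar, pass the vector through a small bottleneck, and rescale channels by sigmoid gates. The weights are random here; in SENet they are learned end-to-end:

```python
import numpy as np

def se_gate(x, w1, w2):
    """Squeeze-and-Excitation gating on a (channels, height, width) tensor."""
    s = x.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
    z = np.maximum(w1 @ s, 0.0)          # excite, reduce + ReLU -> (C//r,)
    g = 1.0 / (1.0 + np.exp(-(w2 @ z)))  # restore + sigmoid gates -> (C,)
    return x * g[:, None, None]          # channel-wise rescaling

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = rng.standard_normal((2, 8))         # reduction ratio r = 4
w2 = rng.standard_normal((8, 2))
y = se_gate(x, w1, w2)
```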
(This article belongs to the Section Medical Imaging)

24 pages, 2959 KiB  
Article
From Detection to Diagnosis: An Advanced Transfer Learning Pipeline Using YOLO11 with Morphological Post-Processing for Brain Tumor Analysis for MRI Images
by Ikram Chourib
J. Imaging 2025, 11(8), 282; https://doi.org/10.3390/jimaging11080282 - 21 Aug 2025
Abstract
Accurate and timely detection of brain tumors from magnetic resonance imaging (MRI) scans is critical for improving patient outcomes and informing therapeutic decision-making. However, the complex heterogeneity of tumor morphology, the scarcity of annotated medical data, and the computational demands of deep learning models present substantial challenges for developing reliable automated diagnostic systems. In this study, we propose a robust and scalable deep learning framework for brain tumor detection and classification, built upon an enhanced YOLO-v11 architecture combined with a two-stage transfer learning strategy. The first stage trains a base model on a large, diverse MRI dataset; upon achieving a mean Average Precision (mAP) exceeding 90%, this model is designated the Brain Tumor Detection Model (BTDM). In the second stage, the BTDM is fine-tuned on a structurally similar but smaller dataset to form the Brain Tumor Detection and Segmentation (BTDS) model, leveraging domain transfer to maintain performance despite limited data. The model is further optimized through domain-specific data augmentation, including geometric transformations, to improve generalization and robustness. Experimental evaluations on publicly available datasets show that the framework achieves high mAP@0.5 scores (up to 93.5% for the BTDM and 91% for BTDS) and consistently outperforms existing state-of-the-art methods across multiple tumor types, including glioma, meningioma, and pituitary tumors. In addition, a post-processing module enhances interpretability by generating segmentation masks and extracting clinically relevant metrics such as tumor size and severity level. These results underscore the potential of our approach as a high-performance, interpretable, and deployable clinical decision-support tool, contributing to the advancement of intelligent real-time neuro-oncological diagnostics.
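The mAP@0.5 scores above hinge on box IoU: a detection counts as correct when its overlap with the ground-truth box reaches 0.5. A self-contained IoU for axis-aligned (x1, y1, x2, y2) boxes:

```python
def iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two half-overlapping 10x10 boxes share 50 of 150 union pixels -> IoU 1/3:
score = iou((0, 0, 10, 10), (5, 0, 15, 10))
```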
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)

27 pages, 33283 KiB  
Article
A Structure-Aware and Condition-Constrained Algorithm for Text Recognition in Power Cabinets
by Yang Liu, Shilun Li and Liang Zhang
Electronics 2025, 14(16), 3315; https://doi.org/10.3390/electronics14163315 - 20 Aug 2025
Abstract
Power cabinet OCR enables real-time grid monitoring but faces challenges absent in generic text recognition: 7.5:1 scale variation between labels and readings, tabular layouts with semantic dependencies, and electrical constraints (220 V ± 10%). We propose SACC (Structure-Aware and Condition-Constrained), an end-to-end framework integrating structural perception with domain constraints. SACC comprises (1) MAF-Detector, with adaptive dilated convolutions (r ∈ {1, 3, 5}) for multi-scale text; (2) SA-ViT, combining a Vision Transformer with a GCN for tabular structure modeling; and (3) DCDecoder, enforcing real-time electrical constraints during decoding. Extensive experiments demonstrate SACC's effectiveness, achieving 86.5%, 88.3%, and 83.4% character accuracy on the PCSTD, YUVA EB, and ICDAR 2015 datasets, respectively, with consistent improvements over leading methods. Ablation studies confirm synergistic improvements, with MAF-Detector increasing recall by 12.3%. SACC provides a field-deployable solution, achieving 30.3 ms inference on an RTX 3090. The co-design of structural analysis with differentiable constraints establishes a framework for domain-specific OCR in industrial and medical applications.
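The DCDecoder's electrical constraints can be illustrated with a toy validity check: a decoded voltage string is accepted only inside the 220 V ± 10% band. This is a hypothetical simplification; the actual decoder enforces constraints differentiably during decoding rather than by post hoc filtering:

```python
def within_voltage_band(text, nominal=220.0, tolerance=0.10):
    """Accept a decoded reading only if it parses and lies in the nominal band."""
    try:
        value = float(text.rstrip("Vv").strip())
    except ValueError:
        return False  # e.g. an OCR confusion like "2I9" fails to parse
    return nominal * (1 - tolerance) <= value <= nominal * (1 + tolerance)

# Hypothetical decoder candidates; 198-242 V is the admissible band:
readings = ["219.8 V", "242 V", "198 V", "250 V", "2I9 V"]
valid = [r for r in readings if within_voltage_band(r)]
```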
(This article belongs to the Section Artificial Intelligence)

10 pages, 511 KiB  
Article
Improving Benign and Malignant Classifications in Mammography with ROI-Stratified Deep Learning
by Kenji Yoshitsugu, Kazumasa Kishimoto and Tadamasa Takemura
Bioengineering 2025, 12(8), 885; https://doi.org/10.3390/bioengineering12080885 - 20 Aug 2025
Abstract
Deep learning has achieved widespread adoption for medical image diagnosis, with extensive research dedicated to mammographic image analysis for breast cancer screening. This study investigates the hypothesis that incorporating region-of-interest (ROI) mask information for individual mammographic images during deep learning can improve the accuracy of benign/malignant diagnoses. Swin Transformer and ConvNeXtV2 deep learning models were evaluated on the public VinDr and CDD-CESM datasets. Our approach stratifies mammographic images by the presence or absence of ROI masks, performs independent training and prediction for each subgroup, and subsequently merges the results. Baseline prediction metrics (sensitivity, specificity, F-score, and accuracy) without ROI-stratified separation were as follows: VinDr/Swin Transformer (0.00, 1.00, 0.00, 0.85), VinDr/ConvNeXtV2 (0.00, 1.00, 0.00, 0.85), CDD-CESM/Swin Transformer (0.29, 0.68, 0.41, 0.48), and CDD-CESM/ConvNeXtV2 (0.65, 0.65, 0.65, 0.65). Analysis with ROI-stratified separation demonstrated marked improvements: VinDr/Swin Transformer (0.93, 0.87, 0.90, 0.87), VinDr/ConvNeXtV2 (0.90, 0.86, 0.88, 0.87), CDD-CESM/Swin Transformer (0.65, 0.65, 0.65, 0.65), and CDD-CESM/ConvNeXtV2 (0.74, 0.61, 0.67, 0.68). These findings provide compelling evidence that validates our hypothesis and affirms the utility of ROI mask information for enhanced diagnostic accuracy in mammography.
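The ROI-stratified scheme trains and predicts per subgroup (images with an ROI mask vs. without) and then merges results before scoring. A toy sketch with hypothetical predictions, not the paper's data:

```python
def merged_accuracy(groups):
    """Pool per-subgroup (y_true, y_pred) pairs and score them jointly."""
    correct = total = 0
    for y_true, y_pred in groups:
        correct += sum(t == p for t, p in zip(y_true, y_pred))
        total += len(y_true)
    return correct / total

with_roi = ([1, 1, 0, 0], [1, 1, 0, 1])   # subgroup with ROI masks: 3/4 correct
without_roi = ([0, 0, 1], [0, 0, 1])      # subgroup without masks: 3/3 correct
acc = merged_accuracy([with_roi, without_roi])
```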

25 pages, 9913 KiB  
Article
Video-Based CSwin Transformer Using Selective Filtering Technique for Interstitial Syndrome Detection
by Khalid Moafa, Maria Antico, Christopher Edwards, Marian Steffens, Jason Dowling, David Canty and Davide Fontanarosa
Appl. Sci. 2025, 15(16), 9126; https://doi.org/10.3390/app15169126 - 19 Aug 2025
Abstract
Interstitial lung diseases (ILD) significantly impact health and mortality, affecting millions of individuals worldwide. During the COVID-19 pandemic, lung ultrasonography (LUS) became an indispensable diagnostic and management tool for lung disorders; however, using LUS to diagnose ILD requires significant expertise. This research aims to develop an automated and efficient approach for diagnosing ILD from LUS videos using AI to support clinicians in their diagnostic procedures. We developed a binary classifier based on a state-of-the-art CSwin Transformer to discriminate between LUS videos from healthy and non-healthy patients. We used a multi-centric dataset from the Royal Melbourne Hospital (Australia) and the ULTRa Lab at the University of Trento (Italy) comprising 60 LUS videos, each from a single patient: 30 healthy individuals and 30 patients with ILD, with frame counts ranging from 96 to 300 per video. Each video was annotated using the corresponding medical report as ground truth. The training data underwent selective frame filtering, including a reduction in frame numbers to eliminate potentially misleading frames in non-healthy videos. This step was crucial because some ILD videos included segments of normal frames, which could be mixed with the pathological features and mislead the model. To address this, we eliminated frames with a healthy appearance, such as frames without B-lines, ensuring that training focused on diagnostically relevant features. The trained model was assessed on an unseen, separate dataset of 12 videos (3 healthy and 9 ILD) with frame counts ranging from 96 to 300 per video. The model achieved an average classification accuracy of 91%, calculated as the mean of three testing methods: Random Sampling (RS, 92%), Key Featuring (KF, 92%), and Chunk Averaging (CA, 89%). In RS, 32 frames were randomly selected from each of the 12 videos, yielding 92% accuracy, with specificity, precision, recall, and F1-score of 100%, 100%, 90%, and 95%, respectively. Similarly, KF, which involved manually selecting 32 representative key frames from each of the 12 videos, achieved 92% accuracy with the same specificity, precision, recall, and F1-score. In contrast, the CA method, in which the 12 videos were divided into 82 segments of 32 consecutive frames each, achieved 89% classification accuracy (73 of 82 segments). Among the 9 misclassified segments, 6 were false positives and 3 were false negatives, an 11% misclassification rate. The accuracy differences between the three scenarios were statistically significant: a one-way ANOVA on the 10-fold cross-validation accuracies yielded an F-statistic of 2135.67 and a p-value of 6.7 × 10⁻²⁶. The proposed approach is a valid solution for fully automating LUS disease detection, aligning with clinical diagnostic practices that integrate dynamic LUS videos. In conclusion, the selective frame filtering technique used to refine the training dataset reduced the effort required for labelling.
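The Chunk Averaging evaluation splits each video into 32-frame segments and classifies each segment; a video-level label can then be taken by majority vote. A plain-Python sketch in which the per-segment predictions are hypothetical:

```python
def chunk(frames, size=32):
    """Split a frame list into consecutive, non-overlapping segments;
    a trailing remainder shorter than `size` is dropped."""
    return [frames[i:i + size] for i in range(0, len(frames) - size + 1, size)]

def majority_vote(segment_preds):
    """Video-level label = most common per-segment prediction."""
    return max(set(segment_preds), key=segment_preds.count)

frames = list(range(96))              # a 96-frame video, the dataset minimum
segments = chunk(frames)
preds = ["ILD", "ILD", "healthy"]     # hypothetical per-segment outputs
label = majority_vote(preds)
```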

35 pages, 11854 KiB  
Article
ODDM: Integration of SMOTE Tomek with Deep Learning on Imbalanced Color Fundus Images for Classification of Several Ocular Diseases
by Afraz Danish Ali Qureshi, Hassaan Malik, Ahmad Naeem, Syeda Nida Hassan, Daesik Jeong and Rizwan Ali Naqvi
J. Imaging 2025, 11(8), 278; https://doi.org/10.3390/jimaging11080278 - 18 Aug 2025
Abstract
Ocular disease (OD) represents a complex medical condition affecting humans. OD diagnosis is a challenging process in the current medical system, and blindness may occur if the disease is not detected at its initial phase. Recent studies have shown significant outcomes in the identification of OD using deep learning (DL) models. This work therefore develops a multi-classification DL-based model for seven ODs, including normal (NOR), age-related macular degeneration (AMD), diabetic retinopathy (DR), glaucoma (GLU), maculopathy (MAC), non-proliferative diabetic retinopathy (NPDR), and proliferative diabetic retinopathy (PDR), using color fundus images (CFIs). We propose a custom CNN-based model named the ocular disease detection model (ODDM), trained and tested on a publicly available ocular disease dataset (ODD). The SMOTE Tomek (SM-TOM) approach is used to handle the imbalanced distribution of OD images in the ODD. The performance of the ODDM is compared with seven baseline models: DenseNet-201 (R1), EfficientNet-B0 (R2), Inception-V3 (R3), MobileNet (R4), Vgg-16 (R5), Vgg-19 (R6), and ResNet-50 (R7). The proposed ODDM obtained a 98.94% AUC, along with 97.19% accuracy, a recall of 88.74%, a precision of 95.23%, and an F1-score of 88.31% in classifying the seven types of OD. ANOVA and Tukey HSD (Honestly Significant Difference) post hoc tests were applied to assess the statistical significance of the proposed ODDM. This study concludes that the results of the proposed ODDM are superior to those of the baseline and state-of-the-art models.
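The SMOTE half of SMOTE Tomek generates synthetic minority samples by interpolating between a minority point and one of its nearest minority neighbours. A NumPy sketch; the Tomek-link cleaning step of the paper's SM-TOM pipeline is omitted:

```python
import numpy as np

def smote_like(minority, n_new, k=2, seed=0):
    """Create n_new synthetic points on segments between minority neighbours."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        lam = rng.random()                    # interpolation factor in [0, 1)
        out.append(minority[i] + lam * (minority[j] - minority[i]))
    return np.array(out)

# Toy 2-D minority class standing in for an under-represented OD category:
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
synthetic = smote_like(minority, n_new=5)
```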
(This article belongs to the Special Issue Advances in Machine Learning for Medical Imaging Applications)

25 pages, 2448 KiB  
Article
Marketing a Banned Remedy: A Topic Model Analysis of Health Misinformation in Thai E-Commerce
by Kanitsorn Suriyapaiboonwattana, Yuttana Jaroenruen, Saiphit Satjawisate, Kate Hone, Panupong Puttarak, Nattapong Kaewboonma, Puriwat Lertkrai and Siwanath Nantapichai
Informatics 2025, 12(3), 84; https://doi.org/10.3390/informatics12030084 - 18 Aug 2025
Abstract
Unregulated herbal products marketed via digital platforms present escalating risks to consumer safety and regulatory effectiveness worldwide. This study positions the case of Jindamanee herbal powder—a banned substance under Thai law—as a lens through which to examine broader challenges in digital health governance. Drawing on a dataset of 1546 product listings across major platforms (Facebook, TikTok, Shopee, and Lazada), we applied Latent Dirichlet Allocation (LDA) to identify prevailing promotional themes and compliance gaps. Despite explicit platform policies, 87.6% of listings appeared on Facebook. Medical claims, particularly for pain relief, featured in 77.6% of posts, while only 18.4% included any risk disclosure. These findings suggest a systematic exploitation of regulatory blind spots and consumer health anxieties, facilitated by templated cross-platform messaging. Anchored in Information Manipulation Theory and the Health Belief Model, the analysis offers theoretical insight into how misinformation is structured and sustained within digital commerce ecosystems. The Thai case highlights urgent implications for platform accountability, policy harmonization, and the design of algorithmic surveillance systems in global health product regulation.
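The LDA workflow can be reproduced at toy scale with scikit-learn; the snippets below are invented listing-like text, not the Thai dataset:

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical product-listing snippets standing in for the scraped corpus:
docs = [
    "instant pain relief herbal powder cure joint pain",
    "herbal powder relieves pain fast shipping",
    "natural supplement boosts energy and immunity",
    "energy supplement natural herbal immunity booster",
]

# Bag-of-words counts, then a 2-topic LDA fit:
X = CountVectorizer().fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
doc_topics = lda.transform(X)   # per-document topic distributions (rows sum to 1)
```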
(This article belongs to the Section Health Informatics)
22 pages, 2087 KiB  
Article
Explainable AI-Based Feature Selection Approaches for Raman Spectroscopy
by Nicola Rossberg, Rekha Gautam, Katarzyna Komolibus, Barry O’Sullivan and Andrea Visentin
Diagnostics 2025, 15(16), 2063; https://doi.org/10.3390/diagnostics15162063 - 18 Aug 2025
Abstract
Background: Raman Spectroscopy is a non-invasive technique capable of characterising tissue constituents and detecting conditions such as cancer with high accuracy. Machine learning techniques can automate this task and discover relevant data patterns. However, the high-dimensional, multicollinear nature of Raman data makes [...] Read more.
Background: Raman Spectroscopy is a non-invasive technique capable of characterising tissue constituents and detecting conditions such as cancer with high accuracy. Machine learning techniques can automate this task and discover relevant data patterns. However, the high-dimensional, multicollinear nature of Raman data makes their deployment and explainability challenging. A model’s transparency and ability to explain decision pathways have become crucial for medical integration. Consequently, an effective feature-reduction method that minimises information loss is sought. Methods: Two new feature selection methods for Raman spectroscopy are introduced. These methods are based on explainable deep learning approaches, considering Convolutional Neural Networks and Transformers; their features are extracted using GradCam and attention scores, respectively. The performance of the extracted features is compared to that of established feature selection approaches across four classifiers and three datasets. Results: We compared the proposed methods against established feature selection approaches over three real-world datasets and different compression levels. Comparable accuracy levels were obtained using only 10% of features. Model-based approaches are the most accurate. Convolutional Neural Network- and Random Forest-assigned feature importance performs best when retaining between 5% and 20% of features, while LinearSVC with L1 penalisation leads to higher accuracy when selecting only 1% of them. The proposed Convolutional Neural Network-based GradCam approach has the highest average accuracy. Conclusions: No approach is found to perform best in all scenarios, suggesting that multiple alternatives should be assessed in each application. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
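Two of the model-based selectors this abstract compares can be sketched with scikit-learn. The synthetic spectra, dimensions, and hyperparameters below are illustrative assumptions, not the paper's datasets or tuned settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.feature_selection import SelectFromModel

# Synthetic stand-in for high-dimensional Raman spectra
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1000))             # 200 spectra, 1000 wavenumber bins
y = (X[:, 100] + X[:, 500] > 0).astype(int)  # labels driven by two assumed bands

# Random Forest-assigned importance: keep the top 10% of features
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top_10pct = np.argsort(rf.feature_importances_)[-100:]

# L1-penalised LinearSVC: the sparsity penalty drives coefficients to zero,
# so only features with nonzero weight survive selection
svc = LinearSVC(penalty="l1", dual=False, C=0.1, max_iter=5000).fit(X, y)
selector = SelectFromModel(svc, prefit=True)
X_sparse = selector.transform(X)

print("RF kept:", len(top_10pct), "features")
print("L1-SVC kept:", X_sparse.shape[1], "features")
```

A downstream classifier would then be retrained on the reduced feature set and compared against the full-spectrum baseline, as the paper does across four classifiers.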
26 pages, 3103 KiB  
Article
An Interpretable Model for Cardiac Arrhythmia Classification Using 1D CNN-GRU with Attention Mechanism
by Waleed Ali, Talal A. A. Abdullah, Mohd Soperi Zahid, Adel A. Ahmed and Hakim Abdulrab
Processes 2025, 13(8), 2600; https://doi.org/10.3390/pr13082600 - 17 Aug 2025
Abstract
Accurate classification of cardiac arrhythmias remains a crucial task in biomedical signal processing. This study proposes a hybrid deep learning approach called 1D CNN-eGRU that integrates one-dimensional convolutional neural network models (1D CNN) and a gated recurrent unit (GRU) architecture with an attention [...] Read more.
Accurate classification of cardiac arrhythmias remains a crucial task in biomedical signal processing. This study proposes a hybrid deep learning approach, 1D CNN-eGRU, that integrates a one-dimensional convolutional neural network (1D CNN) and a gated recurrent unit (GRU) architecture with an attention mechanism for the precise classification of cardiac arrhythmias based on ECG Lead II signals. To enhance classification, we also address data imbalance in the MIT-BIH arrhythmia dataset by introducing a hybrid data balancing method that blends resampling and class-weight learning. Additionally, we apply Sig-LIME, a refined variant of LIME tailored to signal datasets, to provide comprehensive insights into model decisions. The hybrid 1D CNN-eGRU approach is designed to capture intricate temporal and spatial dependencies in ECG signals and to distinguish between four arrhythmia classes in the MIT-BIH dataset, addressing a significant challenge in medical diagnostics. Demonstrating strong performance, the proposed model achieves an overall accuracy of 0.99, sensitivity of 0.93, and specificity of 0.99. Per-class evaluation shows precision ranging from 0.80 to 1.00, sensitivity from 0.83 to 0.99, and F1-scores between 0.82 and 0.99 across the four arrhythmia types (normal, supraventricular, ventricular, and fusion). The model also attains an average AUC of 1.00, with a final test loss of 0.07. These results demonstrate the model’s effectiveness in arrhythmia classification and underscore the added value of the interpretability enabled by the Sig-LIME technique. Full article
(This article belongs to the Special Issue Design, Fabrication, Modeling, and Control in Biomedical Systems)
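A minimal PyTorch sketch of the kind of 1D CNN + GRU + attention classifier this abstract describes. The layer sizes, kernel widths, and input length are assumptions chosen for illustration, not the paper's exact 1D CNN-eGRU architecture.

```python
import torch
import torch.nn as nn

class CNNGRUAttention(nn.Module):
    """Hypothetical 1D CNN + GRU + attention classifier; all layer
    sizes are illustrative, not the paper's architecture."""

    def __init__(self, n_classes=4):
        super().__init__()
        # 1D convolutions capture local (spatial) ECG morphology
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # GRU models temporal dependencies across the beat
        self.gru = nn.GRU(32, 64, batch_first=True)
        # Attention scores weight the GRU hidden states
        self.attn = nn.Linear(64, 1)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                          # x: (batch, 1, length)
        h = self.conv(x).transpose(1, 2)           # (batch, steps, 32)
        out, _ = self.gru(h)                       # (batch, steps, 64)
        w = torch.softmax(self.attn(out), dim=1)   # attention weights over steps
        ctx = (w * out).sum(dim=1)                 # weighted context vector
        return self.fc(ctx)                        # class logits

model = CNNGRUAttention()
logits = model(torch.randn(8, 1, 360))  # e.g., 8 one-second beats at 360 Hz
print(logits.shape)
```

The attention weights `w` are also what a Sig-LIME-style explanation would complement: they indicate which time steps drove each prediction.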
21 pages, 2065 KiB  
Article
FED-EHR: A Privacy-Preserving Federated Learning Framework for Decentralized Healthcare Analytics
by Rızwan Uz Zaman Wani and Ozgu Can
Electronics 2025, 14(16), 3261; https://doi.org/10.3390/electronics14163261 - 17 Aug 2025
Abstract
The Internet of Medical Things (IoMT) is revolutionizing healthcare by enabling continuous monitoring and real-time data collection through interconnected medical devices such as wearable sensors and smart health monitors. These devices generate sensitive physiological data, including cardiac signals, glucose levels, and vital signs, [...] Read more.
The Internet of Medical Things (IoMT) is revolutionizing healthcare by enabling continuous monitoring and real-time data collection through interconnected medical devices such as wearable sensors and smart health monitors. These devices generate sensitive physiological data, including cardiac signals, glucose levels, and vital signs, that are integrated into electronic health records (EHRs). Machine Learning (ML) and Deep Learning (DL) techniques have shown significant potential for predictive diagnostics and decision support based on such data. However, traditional centralized ML approaches raise significant privacy concerns due to the transmission and aggregation of sensitive health information. Additionally, compliance with data protection regulations, such as the Health Insurance Portability and Accountability Act (HIPAA) and General Data Protection Regulation (GDPR), restricts centralized data sharing and analytics. To address these challenges, this study introduces FED-EHR, a privacy-preserving Federated Learning (FL) framework that enables collaborative model training on distributed EHR datasets without transferring raw data from its source. The framework is implemented using Logistic Regression (LR) and Multi-Layer Perceptron (MLP) models and was evaluated using two publicly available clinical datasets: the UCI Breast Cancer Wisconsin (Diagnostic) dataset and the Pima Indians Diabetes dataset. The experimental results demonstrate that FED-EHR achieves a classification performance comparable to centralized learning, with ROC-AUC scores of 0.83 for the Diabetes dataset and 0.98 for the Breast Cancer dataset using MLP while preserving data privacy by ensuring data locality. These findings highlight the practical feasibility and effectiveness of applying the proposed FL approach in real-world IoMT scenarios, offering a secure, scalable, and regulation-compliant solution for intelligent healthcare analytics. Full article
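The federated training loop this abstract describes can be sketched as a minimal FedAvg round in NumPy. The two "hospital" datasets and the logistic-regression learner below are invented placeholders, not the FED-EHR implementation or its clinical data; the key property shown is that only weights, never raw records, leave each site.

```python
import numpy as np

rng = np.random.default_rng(42)

def local_sgd(w, X, y, lr=0.1, epochs=20):
    """A few epochs of logistic-regression gradient descent on one
    site's private data; only the resulting weights are shared."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))     # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)     # gradient step
    return w

# Two hypothetical hospitals with private, never-transmitted datasets
sites = []
for _ in range(2):
    X = rng.normal(size=(100, 5))
    y = (X[:, 0] > 0).astype(float)
    sites.append((X, y))

w_global = np.zeros(5)
for _ in range(10):                           # communication rounds
    local_weights = [local_sgd(w_global.copy(), X, y) for X, y in sites]
    # Server-side FedAvg: average the local weight vectors
    w_global = np.mean(local_weights, axis=0)

acc = np.mean([((1 / (1 + np.exp(-X @ w_global)) > 0.5) == y).mean()
               for X, y in sites])
print(f"federated accuracy: {acc:.2f}")
```

A production system would weight the average by each site's sample count and add secure aggregation or differential privacy on top of this loop.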