Search Results (300)

Search Parameters:
Keywords = healthcare image generation

28 pages, 12461 KB  
Article
HCSS-GB and IBESS: Secret Image Sharing Schemes with Enhanced Shadow Management and Visual-Gradient Access Control
by Huanrong Pan, Wei Yan, Rui Wang and Yongqiang Yu
Entropy 2025, 27(9), 893; https://doi.org/10.3390/e27090893 - 23 Aug 2025
Viewed by 163
Abstract
Image protection in privacy-sensitive domains, such as healthcare and military, exposes critical limitations in existing secret image sharing (SIS) schemes, including cumbersome shadow management, coarse-grained access control, and an inefficient storage-speed trade-off, which limit SIS in practical scenarios. Thus, this paper proposes two SIS schemes to address the above issues: the hierarchical control sharing scheme with Gaussian blur (HCSS-GB) and the image bit expansion-based sharing scheme (IBESS). For scenarios with limited storage space, HCSS-GB employs Gaussian blur to generate gradient-blurred cover images and integrates a controllable sharing model to produce meaningful shadow images without pixel expansion based on Shamir’s secret sharing. Furthermore, to accommodate real-time application scenarios, IBESS employs bit expansion to combine the high bits of generated shadow images with those of blurred carrier images, enhancing operational efficiency at the cost of increased storage overhead. Experimental results demonstrate that both schemes achieve lossless recovery (with PSNR of ∞, MSE of 0, and SSIM of 1), validating their reliability. Specifically, HCSS-GB maintains a 1:1 storage ratio with the original image, making it highly suitable for storage-constrained environments; IBESS exhibits exceptional efficiency, with sharing time as low as 2.1 s under the (7,8) threshold, ideal for real-time tasks. Comparative analyses further show that using carrier images with high standard deviation contrast (Cσ) and Laplacian-based sharpness (SL) significantly enhances shadow distinguishability, strengthening the effectiveness of hierarchical access control. Both schemes provide valuable solutions for secure image sharing and efficient shadow management, with their validity and practicality confirmed by experimental data. Full article
(This article belongs to the Special Issue Information-Theoretic Security and Privacy)
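As a rough illustration of the Shamir-style polynomial sharing this abstract builds on — a minimal single-pixel sketch over the prime 257, not the paper’s HCSS-GB or IBESS scheme; the (k, n) = (3, 5) threshold is an arbitrary choice:

```python
import random

P = 257  # prime just above the 8-bit pixel range

def make_shares(secret, k, n):
    """Split one pixel value into n shares; any k of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

pixel = 200
shares = make_shares(pixel, k=3, n=5)
assert recover(shares[:3]) == pixel   # any 3 shares suffice
assert recover(shares[1:4]) == pixel  # fewer than 3 reveal nothing
```

In a full scheme each pixel (or block) of the secret image would be shared this way, with the shadow values embedded into cover images.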

14 pages, 623 KB  
Review
AI-Driven Multimodal Brain-State Decoding for Personalized Closed-Loop TENS: A Comprehensive Review
by Jiahao Du, Shengli Luo and Ping Shi
Brain Sci. 2025, 15(9), 903; https://doi.org/10.3390/brainsci15090903 - 23 Aug 2025
Viewed by 263
Abstract
Chronic pain is a dynamic, brain-wide condition that eludes effective management by conventional, static treatment approaches. Transcutaneous Electrical Nerve Stimulation (TENS), traditionally perceived as a simple and generic modality, is on the verge of a significant transformation. Guided by advances in brain-state decoding and adaptive algorithms, TENS can evolve into a precision neuromodulation system tailored to individual needs. By integrating multimodal neuroimaging—including the spatial resolution of functional magnetic resonance imaging (fMRI), the temporal sensitivity of an Electroencephalogram (EEG), and the ecological validity of functional near-infrared spectroscopy (fNIRS)—with real-time machine learning, we envision a paradigm shift from fixed stimulation protocols to personalized, closed-loop modulation. This comprehensive review outlines a translational framework to reengineer TENS from an open-loop device into a responsive, intelligent therapeutic platform. We examine the underlying neurophysiological mechanisms, artificial intelligence (AI)-driven infrastructures, and ethical considerations essential for implementing this vision in clinical practice—not only for chronic pain management but also for broader neuroadaptive healthcare applications. Full article

12 pages, 922 KB  
Proceeding Paper
FairCXRnet: A Multi-Task Learning Model for Domain Adaptation in Chest X-Ray Classification for Low Resource Settings
by Aminu Musa, Rajesh Prasad, Mohammed Hassan, Mohamed Hamada and Saratu Yusuf Ilu
Eng. Proc. 2025, 107(1), 16; https://doi.org/10.3390/engproc2025107016 - 22 Aug 2025
Abstract
Medical imaging analysis plays a pivotal role in modern healthcare, with physicians relying heavily on radiologists for disease diagnosis. However, many hospitals face a shortage of radiologists, leading to long queues at radiology centers and delays in diagnosis. Advances in artificial intelligence (AI) have made it possible for AI models to analyze medical images and provide insights similar to those of radiologists. Despite their successes, these models face significant challenges that hinder widespread adoption. One major issue is the inability of AI models to generalize to data from new populations, as performance tends to degrade when evaluated on datasets with different or shifted distributions, a problem known as domain shift. Additionally, the large size of these models requires substantial computational resources for training and deployment. In this study, we address these challenges by investigating domain shifts using ChestXray-14 and a Nigerian chest X-ray dataset. We propose a multi-task learning (MTL) approach that jointly trains the model on both datasets for two tasks, classification and segmentation, to minimize the domain gap. Furthermore, we replace traditional convolutional layers in the backbone model (DenseNet-201) architecture with depthwise separable convolutions, reducing the model’s number of parameters and computational requirements. Our proposed model demonstrated remarkable improvements in both accuracy and AUC, achieving 93% accuracy and 96% AUC when tested across both datasets, significantly outperforming traditional transfer learning methods. Full article
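The parameter savings from swapping standard convolutions for depthwise separable ones, as this abstract describes for the DenseNet-201 backbone, can be sanity-checked by simple counting — a sketch; the 3 × 3 kernel and 256-channel layer are illustrative assumptions, not figures from the paper:

```python
def conv_params(k, c_in, c_out):
    # standard 2D convolution: one k×k×c_in kernel per output channel
    return k * k * c_in * c_out

def ds_conv_params(k, c_in, c_out):
    # depthwise (one k×k kernel per input channel) + pointwise (1×1, c_in→c_out)
    return k * k * c_in + c_in * c_out

std = conv_params(3, 256, 256)     # 589,824 parameters
dsc = ds_conv_params(3, 256, 256)  # 67,840 parameters
print(f"reduction: {std / dsc:.1f}x")  # → reduction: 8.7x
```

For 3 × 3 kernels the reduction approaches 9× as channel counts grow, which is why the swap cuts both parameters and compute substantially.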

21 pages, 243 KB  
Article
The Impact of Multiple Sclerosis on Work Productivity: A Preliminary Look at the North American Registry for Care and Research in Multiple Sclerosis
by Ahya Ali, Kottil Rammohan, June Halper, Terrie Livingston, Sara McCurdy Murphy, Lisa Patton, Jesse Wilkerson, Yang Mao-Draayer and on behalf of the NARCRMS Healthcare Economics Outcomes Research Advisory Group
NeuroSci 2025, 6(3), 82; https://doi.org/10.3390/neurosci6030082 - 22 Aug 2025
Viewed by 402
Abstract
Objective: We aimed to quantify multiple sclerosis (MS)-related work productivity and to illustrate the longitudinal trends for relapses, disease progression, and utilization of health care resources in a nationally representative cohort of working North Americans living with MS. Background: The North American Registry for Care and Research in Multiple Sclerosis (NARCRMS) is a multicentered physician-reported registry which prospectively collects clinical information including imaging data over a long period of time from people with MS from sites across the U.S. and Canada. The Health Economics Outcomes Research (HEOR) Advisory Group has also incorporated Health-Related Productivity and Health Resource Utilization questionnaires, which collect information about health care economics of people with MS and its effects on daily life. Design/Methods: This is a prospective observational study utilizing data from NARCRMS. Socio-demographic, clinical, and health economic outcome data were collected through previously validated and structured questionnaires. Logistic regression was used to calculate the relative odds of symptom impact, with a generalized logit link for number of relapses. Cox proportional hazards regression was used to calculate hazard ratios for time to first relapse. Results: Six hundred and eighty-two (682) people with MS were enrolled in NARCRMS and had completed the HEOR questionnaires at the time of the analysis. Among the participants, 61% were employed full-time and 11% were employed part-time. Fatigue was the leading symptom reported to impact both work and household chores. Among the employed participants, 13% reported having missed work with a median of 6.8 (IQR: 3.0–9.0) missed hours due to MS symptoms (absenteeism), while 35% reported MS having impacted their work output (presenteeism). The odds of higher disease severity (EDSS 2.0–6.5 vs. 0.0–1.5) were 2.29 (95% CI = 1.08, 4.88; p = 0.011) times higher for participants who identified reduction of work output. Fatigue was the most identified symptom attributed to work output reduction. Among all participants, 33% reported having missed planned household work with a median of 3.0 (IQR: 2.0–5.0) hours. The odds of higher disease severity were 2.49 (95% CI = 1.37, 4.53; p = 0.006) times higher for participants who identified reduction in household work output, and 1.70 (CI = 1.27, 2.49; p = 0.006) times higher for those whose fatigue affected housework output as compared to other symptoms. Conclusions: A preliminary review of the first 682 patients showed that people with MS had reduced work and housework productivity even at an early disease state. Multiple sclerosis (MS) can significantly impair individuals’ ability to function fully at work and at home, with fatigue overwhelmingly identified as the primary contributing factor. The economic value of finding an effective treatment for MS-related fatigue is substantial, underscoring the importance of these findings for policy development, priority setting, and the strategic allocation of healthcare resources for this chronic and disabling condition. Full article
20 pages, 4041 KB  
Article
Enhancing Cardiovascular Disease Detection Through Exploratory Predictive Modeling Using DenseNet-Based Deep Learning
by Wael Hadi, Tushar Jaware, Tarek Khalifa, Faisal Aburub, Nawaf Ali and Rashmi Saini
Computers 2025, 14(8), 330; https://doi.org/10.3390/computers14080330 - 15 Aug 2025
Viewed by 369
Abstract
Cardiovascular Disease (CVD) remains the leading cause of morbidity and mortality, accounting for 17.9 million deaths every year. Precise and early diagnosis is therefore critical to improving patient outcomes and easing the burden on healthcare systems. This work presents an innovative approach using the DenseNet architecture for the automatic recognition of CVD from clinical data. A heterogeneous dataset of cardiovascular-related images, including angiograms, echocardiograms, and magnetic resonance images, is preprocessed and augmented. Robust model performance is obtained by fine-tuning a custom DenseNet architecture with rigorous hyperparameter tuning and sophisticated strategies to handle class imbalance. After training, the DenseNet model shows high accuracy, sensitivity, and specificity in identifying CVD compared to baseline approaches. Beyond the quantitative measures, detailed visualizations show that the model can localize and classify pathological areas within an image. The model achieved an accuracy of 0.92, precision of 0.91, and recall of 0.95 for class 1, with an overall weighted average F1-score of 0.93, establishing its efficacy. This research has strong clinical applicability: accurate detection of CVD enables timely, personalized interventions. The DenseNet-based approach advances CVD diagnosis with state-of-the-art technology for use by radiologists and clinicians. Future work will likely focus on improving the model’s interpretability and its generalization to a broader patient population, with the potential to transform the diagnosis and management of CVD. Full article
(This article belongs to the Special Issue Machine Learning and Statistical Learning with Applications 2025)
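The reported class-1 precision (0.91), recall (0.95), and F1-score (≈0.93) are internally consistent, as can be checked from confusion-matrix counts — a sketch; the counts tp=95, fp=9, fn=5 are hypothetical values chosen to match the rounded figures, not the paper’s data:

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from confusion-matrix counts."""
    p = tp / (tp + fp)          # fraction of positive predictions that are correct
    r = tp / (tp + fn)          # fraction of actual positives that are found
    return p, r, 2 * p * r / (p + r)  # F1 = harmonic mean of p and r

# hypothetical counts that roughly reproduce the abstract's class-1 figures
p, r, f = prf(tp=95, fp=9, fn=5)
print(round(p, 2), round(r, 2), round(f, 2))  # → 0.91 0.95 0.93
```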

26 pages, 4766 KB  
Article
RetinoDeep: Leveraging Deep Learning Models for Advanced Retinopathy Diagnostics
by Sachin Kansal, Bajrangi Kumar Mishra, Saniya Sethi, Kanika Vinayak, Priya Kansal and Jyotindra Narayan
Sensors 2025, 25(16), 5019; https://doi.org/10.3390/s25165019 - 13 Aug 2025
Viewed by 453
Abstract
Diabetic retinopathy (DR), a leading cause of vision loss worldwide, poses a critical challenge to healthcare systems due to its silent progression and the reliance on labor-intensive, subjective manual screening by ophthalmologists, especially amid a global shortage of eye care specialists. Addressing the pressing need for scalable, objective, and interpretable diagnostic tools, this work introduces RetinoDeep—deep learning frameworks integrating hybrid architectures and explainable AI to enhance the automated detection and classification of DR across seven severity levels. Specifically, we propose four novel models: an EfficientNetB0 combined with an SPCL transformer for robust global feature extraction; a ResNet50 ensembled with Bi-LSTM to synergize spatial and sequential learning; a Bi-LSTM optimized through genetic algorithms for hyperparameter tuning; and a Bi-LSTM with SHAP explainability to enhance model transparency and clinical trustworthiness. The models were trained and evaluated on a curated dataset of 757 retinal fundus images, augmented to improve generalization, and benchmarked against state-of-the-art baselines (including EfficientNetB0, Hybrid Bi-LSTM with EfficientNetB0, Hybrid Bi-GRU with EfficientNetB0, ResNet with filter enhancements, Bi-LSTM optimized using Random Search Algorithm (RSA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and a standard Convolutional Neural Network (CNN)), using metrics such as accuracy, F1-score, and precision. Notably, the Bi-LSTM with Particle Swarm Optimization (PSO) outperformed other configurations, achieving superior stability and generalization, while SHAP visualizations confirmed alignment between learned features and key retinal biomarkers, reinforcing the system’s interpretability. 
By combining cutting-edge neural architectures, advanced optimization, and explainable AI, this work sets a new standard for DR screening systems, promising not only improved diagnostic performance but also potential integration into real-world clinical workflows. Full article

17 pages, 3827 KB  
Article
A Deep Learning Approach to Teeth Segmentation and Orientation from Panoramic X-Rays
by Mou Deb, Madhab Deb and Mrinal Kanti Dhar
Signals 2025, 6(3), 40; https://doi.org/10.3390/signals6030040 - 8 Aug 2025
Viewed by 427
Abstract
Accurate teeth segmentation and orientation are fundamental in modern oral healthcare, enabling precise diagnosis, treatment planning, and dental implant design. In this study, we present a comprehensive approach to teeth segmentation and orientation from panoramic X-ray images, leveraging deep-learning techniques. We built an end-to-end instance segmentation network that uses an encoder–decoder architecture reinforced with grid-aware attention gates along the skip connections. We introduce oriented bounding box (OBB) generation through principal component analysis (PCA) for precise tooth orientation estimation. Evaluating our approach on the publicly available DNS dataset, comprising 543 panoramic X-ray images, we achieve the highest Intersection-over-Union (IoU) score of 82.43% and a Dice Similarity Coefficient (DSC) score of 90.37% among compared models in teeth instance segmentation. In OBB analysis, we obtain the Rotated IoU (RIoU) score of 82.82%. We also conduct detailed analyses of individual tooth labels and categorical performance, shedding light on strengths and weaknesses. The proposed model’s accuracy and versatility offer promising prospects for improving dental diagnoses, treatment planning, and personalized healthcare in the oral domain. Full article
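The PCA-based oriented bounding box (OBB) step the abstract describes can be sketched in 2D: compute the covariance of the mask’s pixel coordinates, take the principal axis as the box orientation, and project points onto the axes for the extents — a minimal illustration under assumed toy data, not the paper’s implementation:

```python
import math

def oriented_bbox(points):
    """Fit an oriented bounding box to 2D points via PCA on the 2x2 covariance."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # principal-axis angle of a 2x2 symmetric covariance matrix
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    c, s = math.cos(theta), math.sin(theta)
    # project points onto the principal axes and take the extents
    u = [(x - mx) * c + (y - my) * s for x, y in points]
    v = [-(x - mx) * s + (y - my) * c for x, y in points]
    return (mx, my), theta, (max(u) - min(u), max(v) - min(v))

# a rectangle of points tilted at 45 degrees
pts = [(i + j, i - j) for i in range(11) for j in range(3)]
center, angle, (w, h) = oriented_bbox(pts)
print(round(math.degrees(angle)))  # → 45
```

For a real tooth mask, `points` would be the foreground pixel coordinates of one predicted instance.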

28 pages, 6199 KB  
Article
Dual Chaotic Diffusion Framework for Multimodal Biometric Security Using Qi Hyperchaotic System
by Tresor Lisungu Oteko and Kingsley A. Ogudo
Symmetry 2025, 17(8), 1231; https://doi.org/10.3390/sym17081231 - 4 Aug 2025
Viewed by 293
Abstract
The proliferation of biometric technology across various domains including user identification, financial services, healthcare, security, law enforcement, and border control introduces convenience in user identity verification while necessitating robust protection mechanisms for sensitive biometric data. While chaos-based encryption systems offer promising solutions, many existing chaos-based encryption schemes exhibit inherent shortcomings including deterministic randomness and constrained key spaces, often failing to balance security robustness with computational efficiency. To address this, we propose a novel dual-layer cryptographic framework leveraging a four-dimensional (4D) Qi hyperchaotic system for protecting biometric templates and facilitating secure feature matching operations. The framework implements a two-tier encryption mechanism where each layer independently utilizes a Qi hyperchaotic system to generate unique encryption parameters, ensuring template-specific encryption patterns that enhance resistance against chosen-plaintext attacks. The framework performs dimensional normalization of input biometric templates, followed by image pixel shuffling to permutate pixel positions before applying dual-key encryption using the Qi hyperchaotic system and XOR diffusion operations. Templates remain encrypted in storage, with decryption occurring only during authentication processes, ensuring continuous security while enabling biometric verification. The proposed system’s framework demonstrates exceptional randomness properties, validated through comprehensive NIST Statistical Test Suite analysis, achieving statistical significance across all 15 tests with p-values consistently above the 0.01 threshold. Comprehensive security analysis reveals outstanding metrics: entropy values exceeding 7.99 bits, a key space of 10^320, negligible correlation coefficients (<10^-2), and robust differential attack resistance with an NPCR of 99.60% and a UACI of 33.45%. Empirical evaluation on standard CASIA Face and Iris databases demonstrates practical computational efficiency, achieving average encryption times of 0.50913 s per user template for 256 × 256 images. Comparative analysis against other state-of-the-art encryption schemes verifies the effectiveness and reliability of the proposed scheme and demonstrates our framework’s superior performance in both security metrics and computational efficiency. Our findings contribute to the advancement of biometric template protection methodologies, offering a balanced performance between security robustness and operational efficiency required in real-world deployment scenarios. Full article
(This article belongs to the Special Issue New Advances in Symmetric Cryptography)
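The XOR-diffusion step this abstract describes can be sketched with a byte keystream from a 1D logistic map standing in for the paper’s 4D Qi hyperchaotic system — all parameters here are illustrative assumptions, not values from the paper:

```python
def keystream(x0, r, n):
    """Byte keystream from a logistic map — a 1D stand-in for the
    4D Qi hyperchaotic system; x0 and r act as the secret key."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)          # chaotic iteration
        out.append(int(x * 256) % 256)  # quantize state to a byte
    return out

def xor_diffuse(data, key):
    """XOR each data byte with the keystream; applying it twice decrypts."""
    return bytes(b ^ k for b, k in zip(data, key))

template = bytes(range(16))  # toy "biometric template"
ks = keystream(x0=0.654321, r=3.99, n=len(template))
cipher = xor_diffuse(template, ks)
assert xor_diffuse(cipher, ks) == template  # XOR diffusion is its own inverse
```

In the actual scheme this diffusion follows a pixel-position shuffling pass, so both the positions and the values of the template are scrambled.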

21 pages, 9010 KB  
Article
Dual-Branch Deep Learning with Dynamic Stage Detection for CT Tube Life Prediction
by Zhu Chen, Yuedan Liu, Zhibin Qin, Haojie Li, Siyuan Xie, Litian Fan, Qilin Liu and Jin Huang
Sensors 2025, 25(15), 4790; https://doi.org/10.3390/s25154790 - 4 Aug 2025
Viewed by 379
Abstract
CT scanners are essential tools in modern medical imaging. Sudden failures of their X-ray tubes can lead to equipment downtime, affecting healthcare services and patient diagnosis. However, existing prediction methods based on a single model struggle to adapt to the multi-stage variation characteristics of tube lifespan and have limited modeling capabilities for temporal features. To address these issues, this paper proposes an intelligent prediction architecture for CT tubes’ remaining useful life based on a dual-branch neural network. This architecture consists of two specialized branches: a residual self-attention BiLSTM (RSA-BiLSTM) and a multi-layer dilation temporal convolutional network (D-TCN). The RSA-BiLSTM branch extracts multi-scale features and also enhances the long-term dependency modeling capability for temporal data. The D-TCN branch captures multi-scale temporal features through multi-layer dilated convolutions, effectively handling non-linear changes in the degradation phase. Furthermore, a dynamic phase detector is applied to integrate the prediction results from both branches. In terms of optimization strategy, a dynamically weighted triplet mixed loss function is designed to adjust the weight ratios of different prediction tasks, effectively solving the problems of sample imbalance and uneven prediction accuracy. Experimental results using leave-one-out cross-validation (LOOCV) on six different CT tube datasets show that the proposed method achieved significant advantages over five comparison models, with an average MSE of 2.92, MAE of 0.46, and R² of 0.77. The LOOCV strategy ensures robust evaluation by testing each tube dataset independently while training on the remaining five, providing reliable generalization assessment across different CT equipment. Ablation experiments further confirmed that the collaborative design of multiple components is significant for improving the accuracy of X-ray tube remaining-life prediction. Full article
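The leave-one-out cross-validation protocol described here — hold one tube dataset out per fold, train on the rest — can be sketched as follows; the mean-lifetime "model" and the data are hypothetical stand-ins for the dual-branch network:

```python
def loocv(datasets, fit, score):
    """Leave-one-out over whole datasets: hold one out, train on the rest."""
    results = []
    for i, held_out in enumerate(datasets):
        train = [d for j, d in enumerate(datasets) if j != i]
        model = fit([x for d in train for x in d])  # pool the training folds
        results.append(score(model, held_out))
    return results

# toy stand-ins: "fit" learns a mean lifetime, "score" is mean squared error
fit = lambda xs: sum(xs) / len(xs)
score = lambda m, d: sum((x - m) ** 2 for x in d) / len(d)

tubes = [[9, 11], [10, 12], [8, 10], [11, 13], [9, 10], [10, 11]]
errors = loocv(tubes, fit, score)
print(len(errors))  # → 6: one held-out evaluation per tube dataset
```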

20 pages, 3729 KB  
Article
Can AIGC Aid Intelligent Robot Design? A Tentative Research of Apple-Harvesting Robot
by Qichun Jin, Jiayu Zhao, Wei Bao, Ji Zhao, Yujuan Zhang and Fuwen Hu
Processes 2025, 13(8), 2422; https://doi.org/10.3390/pr13082422 - 30 Jul 2025
Viewed by 528
Abstract
Artificial intelligence (AI)-generated content (AIGC) is fundamentally transforming multiple sectors, including materials discovery, healthcare, education, scientific research, and industrial manufacturing. Given the complexities and challenges of intelligent robot design, AIGC has the potential to offer a new paradigm, assisting in conceptual and technical design, functional module design, and the training of perception abilities to accelerate prototyping. Taking the design of an apple-harvesting robot as an example, we first demonstrate a basic framework for an AIGC-assisted robot design methodology, leveraging the generation capabilities of available multimodal large language models as well as human intervention to alleviate AI hallucination and hidden risks. Second, we study the enhancement effect on the robot perception system of using apple images generated by large vision-language models to expand the dataset of actual apple images. Further, an apple-harvesting robot prototype based on the AIGC-aided design is demonstrated, and a pick-up experiment in a simulated scene indicates that it achieves a harvesting success rate of 92.2% and good terrain traversability, with a maximum climbing angle of 32°. This tentative research suggests that, although not an autonomous design agent, the AIGC-driven design workflow can alleviate the significant complexities and challenges of intelligent robot design, especially for beginners and young engineers. Full article
(This article belongs to the Special Issue Design and Control of Complex and Intelligent Systems)

26 pages, 14606 KB  
Review
Attribution-Based Explainability in Medical Imaging: A Critical Review on Explainable Computer Vision (X-CV) Techniques and Their Applications in Medical AI
by Kazi Nabiul Alam, Pooneh Bagheri Zadeh and Akbar Sheikh-Akbari
Electronics 2025, 14(15), 3024; https://doi.org/10.3390/electronics14153024 - 29 Jul 2025
Viewed by 880
Abstract
One of the largest future applications of computer vision is in the healthcare industry. Computer vision tasks are generally implemented in diverse medical imaging scenarios, including detecting or classifying diseases, predicting potential disease progression, analyzing cancer data to advance future research, and conducting genetic analysis for personalized medicine. However, a critical drawback of Computer Vision (CV) approaches is their limited reliability and transparency. Clinicians and patients must comprehend the rationale behind predictions or results to ensure trust and ethical deployment in clinical settings. This motivates the adoption of Explainable Computer Vision (X-CV), which enhances the interpretability of vision models. Among various methodologies, attribution-based approaches are widely employed by researchers to explain medical imaging outputs by identifying influential features. This article aims solely to explore how attribution-based X-CV methods work in medical imaging, what they are good for in real-world use, and what their main limitations are. This study evaluates X-CV techniques by conducting a thorough review of relevant reports, peer-reviewed journals, and methodological approaches to obtain an adequate understanding of attribution-based approaches. It explores how these techniques tackle computational complexity issues, improve diagnostic accuracy, and aid clinical decision-making processes. This article intends to present a path that generalizes the concept of trustworthiness in AI-based healthcare solutions. Full article
(This article belongs to the Special Issue Artificial Intelligence-Driven Emerging Applications)
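One of the simplest attribution-based explanations of the kind this review surveys is occlusion sensitivity: mask each input region in turn and record how much the model’s score drops — a toy 1D sketch with a hypothetical scoring function, not a method taken from the article:

```python
def occlusion_attribution(score, x, baseline=0.0):
    """Attribution map: the score drop when each input element is occluded."""
    base = score(x)
    attrib = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline      # replace one "region" with the baseline value
        attrib.append(base - score(occluded))
    return attrib

# toy "model": responds only to the central pixels of a 1D image
score = lambda img: img[2] + 2 * img[3]
attrib = occlusion_attribution(score, [0.5, 0.1, 0.9, 0.7, 0.2, 0.3])
print([round(a, 2) for a in attrib])  # → [0.0, 0.0, 0.9, 1.4, 0.0, 0.0]
```

The map correctly assigns all influence to the two positions the model actually uses; on real images the same idea is applied with sliding patches rather than single pixels.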

22 pages, 1359 KB  
Article
Fall Detection Using Federated Lightweight CNN Models: A Comparison of Decentralized vs. Centralized Learning
by Qasim Mahdi Haref, Jun Long and Zhan Yang
Appl. Sci. 2025, 15(15), 8315; https://doi.org/10.3390/app15158315 - 25 Jul 2025
Viewed by 431
Abstract
Fall detection is a critical task in healthcare monitoring systems, especially for elderly populations, for whom timely intervention can significantly reduce morbidity and mortality. This study proposes a privacy-preserving and scalable fall-detection framework that integrates federated learning (FL) with transfer learning (TL) to train deep learning models across decentralized data sources without compromising user privacy. The pipeline begins with data acquisition, in which annotated video-based fall-detection datasets formatted in YOLO are used to extract image crops of human subjects. These images are then preprocessed, resized, normalized, and relabeled into binary classes (fall vs. non-fall). A stratified 80/10/10 split ensures balanced training, validation, and testing. To simulate real-world federated environments, the training data is partitioned across multiple clients, each performing local training using pretrained CNN models including MobileNetV2, VGG16, EfficientNetB0, and ResNet50. Two FL topologies are implemented: a centralized server-coordinated scheme and a ring-based decentralized topology. During each round, only model weights are shared, and federated averaging (FedAvg) is applied for global aggregation. The models were trained using three random seeds to ensure result robustness and stability across varying data partitions. Among all configurations, decentralized MobileNetV2 achieved the best results, with a mean test accuracy of 0.9927, F1-score of 0.9917, and average training time of 111.17 s per round. These findings highlight the model’s strong generalization, low computational burden, and suitability for edge deployment. Future work will extend evaluation to external datasets and address issues such as client drift and adversarial robustness in federated environments. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

23 pages, 3506 KB  
Article
Evaluation of Vision Transformers for Multi-Organ Tumor Classification Using MRI and CT Imaging
by Óscar A. Martín and Javier Sánchez
Electronics 2025, 14(15), 2976; https://doi.org/10.3390/electronics14152976 - 25 Jul 2025
Viewed by 470
Abstract
Neural networks have become the standard technique in medical diagnostics, especially in cancer detection and classification. This work evaluates the performance of Vision Transformer architectures, including the Swin Transformer and MaxViT, on several datasets of magnetic resonance imaging (MRI) and computed tomography (CT) scans. We used three training sets of images with brain, lung, and kidney tumors. Each dataset included different classification labels, from brain gliomas and meningiomas to benign and malignant lung conditions and kidney anomalies such as cysts and cancers. This work aims to analyze the behavior of the neural networks on each dataset and the benefits of combining different image modalities and tumor classes. We designed several experiments by fine-tuning the models on combined and individual datasets. The results revealed that the Swin Transformer achieved the highest accuracy, with an average of 99.0% on single datasets, reaching 99.43% on the combined dataset. This research highlights the adaptability of Transformer-based models to various human organs and image modalities. The main contribution lies in evaluating multiple ViT architectures across multi-organ tumor datasets, demonstrating their generalization to multi-organ classification. Integrating these models across diverse datasets could mark a significant advance in precision medicine, paving the way for more efficient healthcare solutions.
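Combining the per-organ datasets as described requires merging class labels that differ between datasets into one global label space for a single classifier head. One way this could be set up is sketched below; the dataset names and class lists are illustrative, not taken from the paper.

```python
# Sketch: merge per-organ datasets with different class labels into one
# global label space so a single model can be fine-tuned on all of them.

def build_global_labels(datasets):
    """datasets: dict mapping dataset name -> ordered list of class names.
    Returns (global class list, per-dataset local->global index maps)."""
    global_classes = []
    index_maps = {}
    for name, classes in datasets.items():
        index_maps[name] = {}
        for local_idx, cls in enumerate(classes):
            tagged = f"{name}/{cls}"  # keep organ context to avoid collisions
            global_classes.append(tagged)
            index_maps[name][local_idx] = len(global_classes) - 1
    return global_classes, index_maps

datasets = {
    "brain": ["glioma", "meningioma", "no_tumor"],
    "lung": ["benign", "malignant"],
}
classes, maps = build_global_labels(datasets)
print(len(classes))     # 5
print(maps["lung"][1])  # 4
```

Tagging each class with its source dataset avoids accidental label collisions (e.g. two datasets both defining a "benign" class that means different things).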
(This article belongs to the Special Issue Convolutional Neural Networks and Vision Applications, 4th Edition)

21 pages, 4388 KB  
Article
An Omni-Dimensional Dynamic Convolutional Network for Single-Image Super-Resolution Tasks
by Xi Chen, Ziang Wu, Weiping Zhang, Tingting Bi and Chunwei Tian
Mathematics 2025, 13(15), 2388; https://doi.org/10.3390/math13152388 - 25 Jul 2025
Viewed by 415
Abstract
The goal of single-image super-resolution (SISR) tasks is to generate high-definition images from low-quality inputs, with practical uses spanning healthcare diagnostics, aerial imaging, and surveillance systems. Although CNNs have considerably improved image reconstruction quality, existing methods still face limitations, including inadequate restoration of high-frequency details, high computational complexity, and insufficient adaptability to complex scenes. To address these challenges, we propose an Omni-dimensional Dynamic Convolutional Network (ODConvNet) tailored for SISR tasks. Specifically, ODConvNet comprises four key components: a Feature Extraction Block (FEB) that captures low-level spatial features; an Omni-dimensional Dynamic Convolution Block (DCB), which utilizes a multidimensional attention mechanism to dynamically reweight convolution kernels across spatial, channel, and kernel dimensions, thereby enhancing feature expressiveness and context modeling; a Deep Feature Extraction Block (DFEB) that stacks multiple convolutional layers with residual connections to progressively extract and fuse high-level features; and a Reconstruction Block (RB) that employs subpixel convolution to upscale features and refine the final HR output. Together, these components significantly enhance feature extraction and capture rich contextual information. Additionally, we employ an improved residual network structure combined with a refined Charbonnier loss function to alleviate vanishing and exploding gradients and enhance the robustness of model training. Extensive experiments conducted on widely used benchmark datasets, including DIV2K, Set5, Set14, B100, and Urban100, demonstrate that, compared with existing deep learning-based SR methods, our ODConvNet improves Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) as well as the visual quality of SR images. Ablation studies further validate the effectiveness and contribution of each component in our network. The proposed ODConvNet offers an effective, flexible, and efficient solution for the SISR task and provides promising directions for future research.
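The Charbonnier loss the abstract mentions is, in its standard form, a smooth approximation of the L1 loss that stays differentiable at zero error. A minimal sketch follows; the epsilon value is a common default, not necessarily the one used in the paper.

```python
import math

# Charbonnier loss: sqrt((x - y)^2 + eps^2), a differentiable surrogate
# for |x - y| that avoids the kink of L1 at zero.

def charbonnier_loss(pred, target, eps=1e-3):
    """Mean Charbonnier loss over two equal-length sequences of pixel values."""
    assert len(pred) == len(target)
    total = sum(math.sqrt((p - t) ** 2 + eps ** 2) for p, t in zip(pred, target))
    return total / len(pred)

# For errors much larger than eps, the loss approaches the mean absolute error.
loss = charbonnier_loss([0.5, 0.2], [0.1, 0.2])
print(round(loss, 4))  # 0.2005 (mean absolute error is 0.2)
```

Compared with plain L2, this loss penalizes outliers less aggressively, which is one reason it is popular for super-resolution training.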

16 pages, 707 KB  
Review
The Role of Landiolol in Coronary Artery Disease: Insights into Acute Coronary Syndromes, Stable Coronary Artery Disease and Computed Tomography Coronary Angiography
by Athina Nasoufidou, Marios G. Bantidos, Panagiotis Stachteas, Dimitrios V. Moysidis, Andreas Mitsis, Barbara Fyntanidou, Konstantinos Kouskouras, Efstratios Karagiannidis, Theodoros Karamitsos, George Kassimis and Nikolaos Fragakis
J. Clin. Med. 2025, 14(15), 5216; https://doi.org/10.3390/jcm14155216 - 23 Jul 2025
Viewed by 478
Abstract
Coronary artery disease (CAD) constitutes a major contributor to morbidity, mortality and healthcare burden worldwide. Recent innovations in imaging modalities, pharmaceuticals and interventional techniques have revolutionized diagnostic and treatment options, necessitating the reevaluation of established drug protocols or the consideration of newer alternatives. The utilization of beta blockers (BBs) in the setting of acute myocardial infarction (AMI), shifting from the pre-reperfusion to the thrombolytic and finally the primary percutaneous coronary intervention (pPCI) era, has become increasingly more selective and contentious. Nonetheless, the extent of myocardial necrosis remains a key predictor of outcomes in this patient population, with large trials establishing the beneficial use of beta blockers. Computed tomography coronary angiography (CTCA) has emerged as a highly effective diagnostic tool for delineating the coronary anatomy and atheromatous plaque characteristics, with the added capability of MESH-3D model generation. Induction and preservation of a low heart rate (HR), regardless of the underlying sequence, is of critical importance for high-quality results. Landiolol is an intravenous beta blocker with an ultra-short duration of action (t1/2 = 4 min), remarkable β1-receptor specificity (β1/β2 = 255), and pharmacokinetics that support its potential for systematic integration into clinical practice. It has been increasingly recognized for its importance in both acute (primarily studied in STEMI and, to a lesser extent, NSTEMI pPCI) and chronic (mainly studied in elective PCI) CAD settings. Given the limited literature focusing specifically on landiolol, the aim of this narrative review is to examine its pharmacological properties and evaluate its current and future role in enhancing both diagnostic imaging quality and therapeutic outcomes in patients with CAD.
