Search Results (764)

Search Parameters:
Keywords = intelligent medical diagnosis

29 pages, 10437 KiB  
Review
Neuromorphic Photonic On-Chip Computing
by Sujal Gupta and Jolly Xavier
Chips 2025, 4(3), 34; https://doi.org/10.3390/chips4030034 (registering DOI) - 7 Aug 2025
Abstract
Drawing inspiration from biological brains’ energy-efficient information-processing mechanisms, photonic integrated circuits (PICs) have facilitated the development of ultrafast artificial neural networks. This in turn is envisaged to offer potential solutions to the growing demand for artificial intelligence employing machine learning in various domains, from nonlinear optimization and telecommunication to medical diagnosis. In the meantime, silicon photonics has emerged as a mainstream technology for integrated chip-based applications. However, challenges still need to be addressed in scaling it further for broader applications due to the requirement of co-integration of electronic circuitry for control and calibration. Leveraging physics in algorithms and nanoscale materials holds promise for achieving low-power miniaturized chips capable of real-time inference and learning. Against this backdrop, we present the State of the Art in neuromorphic photonic computing, focusing primarily on architecture, weighting mechanisms, photonic neurons, and training, while giving an overall view of recent advancements, challenges, and prospects. We also emphasize and highlight the need for revolutionary hardware innovations to scale up neuromorphic systems while enhancing energy efficiency and performance. Full article
(This article belongs to the Special Issue Silicon Photonic Integrated Circuits: Advancements and Challenges)
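As an illustrative aside rather than material from the review: the weighted-addition-plus-nonlinearity operation that photonic neurons implement can be modeled numerically with a simple intensity-based weight-bank picture. Everything in the sketch below (signal sizes, weight ranges, the tanh stand-in for the activation) is an assumption made for illustration.

```python
import numpy as np

def photonic_neuron(inputs, weights, bias=0.0):
    """Toy model of one neuron in a broadcast-and-weight style photonic layer.

    `inputs` represent optical signal powers, `weights` the transmissions set by
    tunable elements (e.g., microring weight banks); photodetection sums the
    weighted intensities, and tanh stands in for the nonlinear activation.
    """
    summed = np.dot(weights, inputs) + bias   # weighted addition of intensities
    return np.tanh(summed)                    # stand-in nonlinearity

def photonic_layer(inputs, weight_matrix, biases):
    """A layer is a matrix-vector product realized optically, one neuron per row."""
    return np.array([photonic_neuron(inputs, w, b)
                     for w, b in zip(weight_matrix, biases)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, size=4)             # normalized input powers
    W = rng.uniform(-1, 1, size=(3, 4))       # trained weights mapped to transmissions
    print(photonic_layer(x, W, np.zeros(3)))
```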

24 pages, 2572 KiB  
Article
DIALOGUE: A Generative AI-Based Pre–Post Simulation Study to Enhance Diagnostic Communication in Medical Students Through Virtual Type 2 Diabetes Scenarios
by Ricardo Xopan Suárez-García, Quetzal Chavez-Castañeda, Rodrigo Orrico-Pérez, Sebastián Valencia-Marin, Ari Evelyn Castañeda-Ramírez, Efrén Quiñones-Lara, Claudio Adrián Ramos-Cortés, Areli Marlene Gaytán-Gómez, Jonathan Cortés-Rodríguez, Jazel Jarquín-Ramírez, Nallely Guadalupe Aguilar-Marchand, Graciela Valdés-Hernández, Tomás Eduardo Campos-Martínez, Alonso Vilches-Flores, Sonia Leon-Cabrera, Adolfo René Méndez-Cruz, Brenda Ofelia Jay-Jímenez and Héctor Iván Saldívar-Cerón
Eur. J. Investig. Health Psychol. Educ. 2025, 15(8), 152; https://doi.org/10.3390/ejihpe15080152 (registering DOI) - 7 Aug 2025
Abstract
DIALOGUE (DIagnostic AI Learning through Objective Guided User Experience) is a generative artificial intelligence (GenAI)-based training program designed to enhance diagnostic communication skills in medical students. In this single-arm pre–post study, we evaluated whether DIALOGUE could improve students’ ability to disclose a type 2 diabetes mellitus (T2DM) diagnosis with clarity, structure, and empathy. Thirty clinical-phase students completed two pre-test virtual encounters with an AI-simulated patient (ChatGPT, GPT-4o), scored by blinded raters using an eight-domain rubric. Participants then engaged in ten asynchronous GenAI scenarios with automated natural-language feedback. Seven days later, they completed two post-test consultations with human standardized patients, again evaluated with the same rubric. Mean total performance increased by 36.7 points (95% CI: 31.4–42.1; p < 0.001), and the proportion of high-performing students rose from 0% to 70%. Gains were significant across all domains, most notably in opening the encounter, closure, and diabetes specific explanation. Multiple regression showed that lower baseline empathy (β = −0.41, p = 0.005) and higher digital self-efficacy (β = 0.35, p = 0.016) independently predicted greater improvement; gender had only a marginal effect. Cluster analysis revealed three learner profiles, with the highest-gain group characterized by low empathy and high digital self-efficacy. Inter-rater reliability was excellent (ICC ≈ 0.90). These findings provide empirical evidence that GenAI-mediated training can meaningfully enhance diagnostic communication and may serve as a scalable, individualized adjunct to conventional medical education. Full article
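The study's headline result is a paired pre-post gain reported with a 95% confidence interval. As a generic illustration of how such a paired comparison is summarized (this is not the authors' analysis script, and the scores below are synthetic):

```python
import numpy as np
from scipy import stats

# Synthetic pre/post rubric totals for 30 students; illustrative only.
rng = np.random.default_rng(42)
pre = rng.normal(50, 8, size=30)
post = pre + rng.normal(36, 10, size=30)   # simulate a large average gain

gain = post - pre
mean_gain = gain.mean()
sem = stats.sem(gain)
ci_low, ci_high = stats.t.interval(0.95, len(gain) - 1, loc=mean_gain, scale=sem)
t_stat, p_value = stats.ttest_rel(post, pre)

print(f"mean gain = {mean_gain:.1f}, 95% CI [{ci_low:.1f}, {ci_high:.1f}], "
      f"t = {t_stat:.2f}, p = {p_value:.3g}")
```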

35 pages, 3289 KiB  
Review
Applications of Machine Learning Algorithms in Geriatrics
by Adrian Stancu, Cosmina-Mihaela Rosca and Emilian Marian Iovanovici
Appl. Sci. 2025, 15(15), 8699; https://doi.org/10.3390/app15158699 - 6 Aug 2025
Abstract
The increase in the elderly population globally reflects a change in the population’s mindset regarding preventive health measures and necessitates a rethinking of healthcare strategies. The integration of machine learning (ML)-type algorithms in geriatrics represents a direction for optimizing prevention, diagnosis, prediction, monitoring, and treatment. This paper presents a systematic review of the scientific literature published between 1 January 2020 and 31 May 2025. The paper is based on the applicability of ML techniques in the field of geriatrics. The study is conducted using the Web of Science database for a detailed discussion. The most studied algorithms in research articles are Random Forest, Extreme Gradient Boosting, and support vector machines. They are preferred due to their performance in processing incomplete clinical data. The performance metrics reported in the analyzed papers include the accuracy, sensitivity, F1-score, and Area under the Receiver Operating Characteristic Curve. Nine search categories are investigated through four databases: WOS, PubMed, Scopus, and IEEE. A comparative analysis shows that the field of geriatrics, through an ML approach in the context of elderly nutrition, is insufficiently explored, as evidenced by the 61 articles analyzed from the four databases. The analysis highlights gaps regarding the explainability of the models used, the transparency of cross-sectional datasets, and the validity of the data in real clinical contexts. The paper highlights the potential of ML models in transforming geriatrics within the context of personalized predictive care and outlines a series of future research directions, recommending the development of standardized databases, the integration of algorithmic explanations, the promotion of interdisciplinary collaborations, and the implementation of ethical norms of artificial intelligence in geriatric medical practice. Full article
(This article belongs to the Special Issue Diet, Nutrition and Human Health)

16 pages, 3834 KiB  
Article
Deep Learning Tongue Cancer Detection Method Based on Mueller Matrix Microscopy Imaging
by Hanyue Wei, Yingying Luo, Feiya Ma and Liyong Ren
Optics 2025, 6(3), 35; https://doi.org/10.3390/opt6030035 - 4 Aug 2025
Abstract
Tongue cancer, the most aggressive subtype of oral cancer, presents critical challenges due to the limited number of specialists available and the time-consuming nature of conventional histopathological diagnosis. To address these issues, we developed an intelligent diagnostic system integrating Mueller matrix microscopy with deep learning to enhance diagnostic accuracy and efficiency. Through Mueller matrix polar decomposition and transformation, micro-polarization feature parameter images were extracted from tongue cancer tissues, and purity parameter images were generated by calculating the purity of the Mueller matrices. A multi-stage feature dataset of Mueller matrix parameter images was constructed using histopathological samples of tongue cancer tissues with varying stages. Based on this dataset, the clinical potential of Mueller matrix microscopy was preliminarily validated for histopathological diagnosis of tongue cancer. Four mainstream medical image classification networks—AlexNet, ResNet50, DenseNet121 and VGGNet16—were employed to quantitatively evaluate the classification performance for tongue cancer stages. DenseNet121 achieved the highest classification accuracy of 98.48%, demonstrating its potential as a robust framework for rapid and accurate intelligent diagnosis of tongue cancer. Full article
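For readers unfamiliar with the classification setup, the sketch below shows in broad strokes how a DenseNet121 stage classifier is typically configured with torchvision. It is not the authors' code; the number of stages, the input format, and the optimizer settings are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_STAGES = 3  # hypothetical number of tongue-cancer stages in the dataset

# DenseNet121 backbone with its classifier head replaced so it predicts cancer
# stages from Mueller-matrix-derived parameter images (here treated as 3-channel).
model = models.densenet121(weights=None)   # pass weights="DEFAULT" to start from ImageNet weights
model.classifier = nn.Linear(model.classifier.in_features, NUM_STAGES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One supervised step on a batch of parameter images of shape (N, 3, 224, 224)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    dummy_images = torch.randn(4, 3, 224, 224)        # stand-in for polarization feature images
    dummy_labels = torch.randint(0, NUM_STAGES, (4,))
    print(train_step(dummy_images, dummy_labels))
```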

21 pages, 9010 KiB  
Article
Dual-Branch Deep Learning with Dynamic Stage Detection for CT Tube Life Prediction
by Zhu Chen, Yuedan Liu, Zhibin Qin, Haojie Li, Siyuan Xie, Litian Fan, Qilin Liu and Jin Huang
Sensors 2025, 25(15), 4790; https://doi.org/10.3390/s25154790 - 4 Aug 2025
Abstract
CT scanners are essential tools in modern medical imaging. Sudden failures of their X-ray tubes can lead to equipment downtime, affecting healthcare services and patient diagnosis. However, existing prediction methods based on a single model struggle to adapt to the multi-stage variation characteristics of tube lifespan and have limited modeling capabilities for temporal features. To address these issues, this paper proposes an intelligent prediction architecture for CT tubes’ remaining useful life based on a dual-branch neural network. This architecture consists of two specialized branches: a residual self-attention BiLSTM (RSA-BiLSTM) and a multi-layer dilation temporal convolutional network (D-TCN). The RSA-BiLSTM branch extracts multi-scale features and also enhances the long-term dependency modeling capability for temporal data. The D-TCN branch captures multi-scale temporal features through multi-layer dilated convolutions, effectively handling non-linear changes in the degradation phase. Furthermore, a dynamic phase detector is applied to integrate the prediction results from both branches. In terms of optimization strategy, a dynamically weighted triplet mixed loss function is designed to adjust the weight ratios of different prediction tasks, effectively solving the problems of sample imbalance and uneven prediction accuracy. Experimental results using leave-one-out cross-validation (LOOCV) on six different CT tube datasets show that the proposed method achieved significant advantages over five comparison models, with an average MSE of 2.92, MAE of 0.46, and R2 of 0.77. The LOOCV strategy ensures robust evaluation by testing each tube dataset independently while training on the remaining five, providing reliable generalization assessment across different CT equipment. Ablation experiments further confirmed that the collaborative design of multiple components is significant for improving the accuracy of X-ray tubes remaining life prediction. Full article
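The LOOCV protocol described above (train on five tubes, test on the held-out sixth, average MSE, MAE, and R² across folds) can be sketched as follows. A generic regressor stands in for the paper's dual-branch RSA-BiLSTM/D-TCN model, and the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor   # placeholder for the dual-branch network
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

def loocv_over_tubes(tube_datasets):
    """Leave-one-tube-out evaluation over a list of (X, y) pairs, one per CT tube."""
    scores = []
    for i, (X_test, y_test) in enumerate(tube_datasets):
        X_train = np.vstack([X for j, (X, _) in enumerate(tube_datasets) if j != i])
        y_train = np.concatenate([y for j, (_, y) in enumerate(tube_datasets) if j != i])
        model = GradientBoostingRegressor().fit(X_train, y_train)
        pred = model.predict(X_test)
        scores.append((mean_squared_error(y_test, pred),
                       mean_absolute_error(y_test, pred),
                       r2_score(y_test, pred)))
    return np.mean(scores, axis=0)   # average (MSE, MAE, R2) across the folds

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_tubes = [(rng.normal(size=(50, 8)), rng.normal(size=50)) for _ in range(6)]
    print(loocv_over_tubes(fake_tubes))
```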

24 pages, 3553 KiB  
Article
A Hybrid Artificial Intelligence Framework for Melanoma Diagnosis Using Histopathological Images
by Alberto Nogales, María C. Garrido, Alfredo Guitian, Jose-Luis Rodriguez-Peralto, Carlos Prados Villanueva, Delia Díaz-Prieto and Álvaro J. García-Tejedor
Technologies 2025, 13(8), 330; https://doi.org/10.3390/technologies13080330 - 1 Aug 2025
Abstract
Cancer remains one of the most significant global health challenges due to its high mortality rates and the limited understanding of its progression. Early diagnosis is critical to improving patient outcomes, especially in skin cancer, where timely detection can significantly enhance recovery rates. Histopathological analysis is a widely used diagnostic method, but it is a time-consuming process that heavily depends on the expertise of highly trained specialists. Recent advances in Artificial Intelligence have shown promising results in image classification, highlighting its potential as a supportive tool for medical diagnosis. In this study, we explore the application of hybrid Artificial Intelligence models for melanoma diagnosis using histopathological images. The dataset used consisted of 506 histopathological images, from which 313 curated images were selected after quality control and preprocessing. We propose a two-step framework that employs an Autoencoder for dimensionality reduction and feature extraction of the images, followed by a classification algorithm to distinguish between melanoma and nevus, trained on the extracted feature vectors from the bottleneck of the Autoencoder. We evaluated Support Vector Machines, Random Forest, Multilayer Perceptron, and K-Nearest Neighbours as classifiers. Among these, the combinations of Autoencoder with K-Nearest Neighbours achieved the best performance and inference time, reaching an average accuracy of approximately 97.95% on the test set and requiring 3.44 min per diagnosis. The baseline comparison results were consistent, demonstrating strong generalisation and outperforming the other models by 2 to 13 percentage points. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Medical Image Analysis)
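A minimal sketch of the two-step pipeline (an autoencoder for feature extraction, then a classical classifier trained on the bottleneck vectors) is given below. It is not the authors' architecture: the image size, latent dimension, training schedule, and k value are assumptions, and the data are synthetic.

```python
import torch
import torch.nn as nn
from sklearn.neighbors import KNeighborsClassifier

class ConvAutoencoder(nn.Module):
    """Toy convolutional autoencoder; its bottleneck vector feeds a classical classifier."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

if __name__ == "__main__":
    # Synthetic stand-ins for preprocessed 64x64 histopathology crops and labels.
    images = torch.rand(32, 3, 64, 64)
    labels = torch.randint(0, 2, (32,))            # 0 = nevus, 1 = melanoma

    model = ConvAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(5):                             # brief reconstruction-only training
        recon, _ = model(images)
        loss = nn.functional.mse_loss(recon, images)
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        _, features = model(images)                # bottleneck feature vectors
    knn = KNeighborsClassifier(n_neighbors=3).fit(features.numpy(), labels.numpy())
    print(knn.score(features.numpy(), labels.numpy()))
```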

40 pages, 3463 KiB  
Review
Machine Learning-Powered Smart Healthcare Systems in the Era of Big Data: Applications, Diagnostic Insights, Challenges, and Ethical Implications
by Sita Rani, Raman Kumar, B. S. Panda, Rajender Kumar, Nafaa Farhan Muften, Mayada Ahmed Abass and Jasmina Lozanović
Diagnostics 2025, 15(15), 1914; https://doi.org/10.3390/diagnostics15151914 - 30 Jul 2025
Abstract
Healthcare data rapidly increases, and patients seek customized, effective healthcare services. Big data and machine learning (ML) enabled smart healthcare systems hold revolutionary potential. Unlike previous reviews that separately address AI or big data, this work synthesizes their convergence through real-world case studies, cross-domain ML applications, and a critical discussion on ethical integration in smart diagnostics. The review focuses on the role of big data analysis and ML towards better diagnosis, improved efficiency of operations, and individualized care for patients. It explores the principal challenges of data heterogeneity, privacy, computational complexity, and advanced methods such as federated learning (FL) and edge computing. Applications in real-world settings, such as disease prediction, medical imaging, drug discovery, and remote monitoring, illustrate how ML methods, such as deep learning (DL) and natural language processing (NLP), enhance clinical decision-making. A comparison of ML models highlights their value in dealing with large and heterogeneous healthcare datasets. In addition, the use of nascent technologies such as wearables and Internet of Medical Things (IoMT) is examined for their role in supporting real-time data-driven delivery of healthcare. The paper emphasizes the pragmatic application of intelligent systems by highlighting case studies that reflect up to 95% diagnostic accuracy and cost savings. The review ends with future directions that seek to develop scalable, ethical, and interpretable AI-powered healthcare systems. It bridges the gap between ML algorithms and smart diagnostics, offering critical perspectives for clinicians, data scientists, and policymakers. Full article
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)

13 pages, 532 KiB  
Article
Medical and Biomedical Students’ Perspective on Digital Health and Its Integration in Medical Curricula: Recent and Future Views
by Srijit Das, Nazik Ahmed, Issa Al Rahbi, Yamamh Al-Jubori, Rawan Al Busaidi, Aya Al Harbi, Mohammed Al Tobi and Halima Albalushi
Int. J. Environ. Res. Public Health 2025, 22(8), 1193; https://doi.org/10.3390/ijerph22081193 - 30 Jul 2025
Abstract
The incorporation of digital health into the medical curricula is becoming more important to better prepare doctors in the future. Digital health comprises a wide range of tools such as electronic health records, health information technology, telemedicine, telehealth, mobile health applications, wearable devices, artificial intelligence, and virtual reality. The present study aimed to explore the medical and biomedical students’ perspectives on the integration of digital health in medical curricula. A cross-sectional study was conducted on the medical and biomedical undergraduate students at the College of Medicine and Health Sciences at Sultan Qaboos University. Data was collected using a self-administered questionnaire. The response rate was 37%. The majority of respondents were in the MD (Doctor of Medicine) program (84.4%), while 29 students (15.6%) were from the BMS (Biomedical Sciences) program. A total of 55.38% agreed that they were familiar with the term ‘e-Health’. Additionally, 143 individuals (76.88%) reported being aware of the definition of e-Health. Specifically, 69 individuals (37.10%) utilize e-Health technologies every other week, 20 individuals (10.75%) reported using them daily, while 44 individuals (23.66%) indicated that they never used such technologies. Despite having several benefits, challenges exist in integrating digital health into the medical curriculum. There is a need to overcome the lack of infrastructure, existing educational materials, and digital health topics. In conclusion, embedding digital health into medical curricula is certainly beneficial for creating a digitally competent healthcare workforce that could help in better data storage, help in diagnosis, aid in patient consultation from a distance, and advise on medications, thereby leading to improved patient care which is a key public health priority. Full article

13 pages, 311 KiB  
Article
Diagnostic Performance of ChatGPT-4o in Analyzing Oral Mucosal Lesions: A Comparative Study with Experts
by Luigi Angelo Vaira, Jerome R. Lechien, Antonino Maniaci, Andrea De Vito, Miguel Mayo-Yáñez, Stefania Troise, Giuseppe Consorti, Carlos M. Chiesa-Estomba, Giovanni Cammaroto, Thomas Radulesco, Arianna di Stadio, Alessandro Tel, Andrea Frosolini, Guido Gabriele, Giannicola Iannella, Alberto Maria Saibene, Paolo Boscolo-Rizzo, Giovanni Maria Soro, Giovanni Salzano and Giacomo De Riu
Medicina 2025, 61(8), 1379; https://doi.org/10.3390/medicina61081379 - 30 Jul 2025
Abstract
Background and Objectives: this pilot study aimed to evaluate the diagnostic accuracy of ChatGPT-4o in analyzing oral mucosal lesions from clinical images. Materials and Methods: a total of 110 clinical images, including 100 pathological lesions and 10 healthy mucosal images, were retrieved from Google Images and analyzed by ChatGPT-4o using a standardized prompt. An expert panel of five clinicians established a reference diagnosis, categorizing lesions as benign or malignant. The AI-generated diagnoses were classified as correct or incorrect and further categorized as plausible or not plausible. The accuracy, sensitivity, specificity, and agreement with the expert panel were analyzed. The Artificial Intelligence Performance Instrument (AIPI) was used to assess the quality of AI-generated recommendations. Results: ChatGPT-4o correctly diagnosed 85% of cases. Among the 15 incorrect diagnoses, 10 were deemed plausible by the expert panel. The AI misclassified three malignant lesions as benign but did not categorize any benign lesions as malignant. Sensitivity and specificity were 91.7% and 100%, respectively. The AIPI score averaged 17.6 ± 1.73, indicating strong diagnostic reasoning. The McNemar test showed no significant differences between AI and expert diagnoses (p = 0.084). Conclusions: In this proof-of-concept pilot study, ChatGPT-4o demonstrated high diagnostic accuracy and strong descriptive capabilities in oral mucosal lesion analysis. A residual 8.3% false-negative rate for malignant lesions underscores the need for specialist oversight; however, the model shows promise as an AI-powered triage aid in settings with limited access to specialized care. Full article
(This article belongs to the Section Dentistry and Oral Health)
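The reported performance reduces to a 2x2 confusion matrix plus a McNemar comparison of paired ratings. The sketch below shows those computations; the confusion-matrix counts are chosen only to be consistent with the reported 91.7% sensitivity and 100% specificity, and the McNemar disagreement counts are arbitrary placeholders.

```python
from scipy.stats import chi2

def binary_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    return sensitivity, specificity, accuracy

def mcnemar_test(b, c):
    """McNemar chi-square with continuity correction for paired disagreements.

    b = cases one rater got right and the other wrong; c = the reverse.
    """
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

if __name__ == "__main__":
    # Malignant lesions are the positive class; counts are illustrative.
    sens, spec, acc = binary_metrics(tp=33, fn=3, fp=0, tn=74)
    print(f"sensitivity={sens:.3f} specificity={spec:.3f} accuracy={acc:.3f}")
    print("McNemar stat=%.3f, p=%.3f" % mcnemar_test(b=5, c=1))
```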
35 pages, 4940 KiB  
Article
A Novel Lightweight Facial Expression Recognition Network Based on Deep Shallow Network Fusion and Attention Mechanism
by Qiaohe Yang, Yueshun He, Hongmao Chen, Youyong Wu and Zhihua Rao
Algorithms 2025, 18(8), 473; https://doi.org/10.3390/a18080473 - 30 Jul 2025
Abstract
Facial expression recognition (FER) is a critical research direction in artificial intelligence, widely used in intelligent interaction, medical diagnosis, security monitoring, and other domains; these applications highlight its considerable practical value and social significance. Because FER models often need to run efficiently on mobile or edge devices, research on lightweight facial expression recognition is particularly important. However, the feature extraction and classification methods of the lightweight convolutional neural network algorithms in common use are not specifically optimized for the characteristics of facial expression images and fail to make full use of the feature information they contain. To address the lack of models that are both lightweight and optimized for expression-specific feature extraction, this study proposes a novel network design tailored to the characteristics of facial expressions. Drawing on the backbone architecture of MobileNet V2, we design LightExNet, a lightweight convolutional neural network based on deep-shallow feature fusion, an attention mechanism, and a joint loss function. In the LightExNet architecture, deep and shallow features are first fused to fully exploit the shallow features of the original image, reduce information loss, alleviate gradient vanishing as the number of convolutional layers increases, and achieve multi-scale feature fusion; the MobileNet V2 architecture is also streamlined to integrate the deep and shallow branches seamlessly. Second, a new channel and spatial attention mechanism, designed around the intrinsic characteristics of facial expression features, encodes as much feature information from the different expression regions as possible, effectively improving recognition accuracy. Finally, an improved center loss function is added to further improve classification accuracy, with measures taken to significantly reduce the computational cost of the joint loss function. LightExNet is evaluated on three mainstream facial expression datasets: Fer2013, CK+, and RAF-DB. It has 3.27 M parameters and 298.27 M FLOPs, and achieves accuracies of 69.17%, 97.37%, and 85.97% on the three datasets, respectively. Its overall performance exceeds that of current mainstream lightweight expression recognition algorithms such as MobileNet V2, IE-DBN, Self-Cure Net, Improved MobileViT, MFN, Ada-CM, and Parallel CNN (Convolutional Neural Network). The experimental results confirm that LightExNet improves recognition accuracy and computational efficiency while reducing energy consumption and enhancing deployment flexibility, underscoring its strong potential for real-world lightweight facial expression recognition applications. Full article
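As a reference point for the joint loss mentioned above, a plain center-loss term combined with cross-entropy looks like this in PyTorch. This is the generic formulation, not LightExNet's improved variant; the class count, feature dimension, and weighting factor are assumptions.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Plain center loss: pull each feature vector toward its learnable class center."""
    def __init__(self, num_classes=7, feat_dim=128):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        return 0.5 * ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

if __name__ == "__main__":
    feats = torch.randn(16, 128, requires_grad=True)     # embeddings from the backbone
    logits = torch.randn(16, 7, requires_grad=True)      # classifier outputs
    labels = torch.randint(0, 7, (16,))                  # 7 basic expression classes
    joint = nn.CrossEntropyLoss()(logits, labels) + 0.01 * CenterLoss()(feats, labels)
    joint.backward()                                     # joint loss with a small center-loss weight
    print(float(joint))
```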

51 pages, 1874 KiB  
Review
Parkinson’s Disease: Bridging Gaps, Building Biomarkers, and Reimagining Clinical Translation
by Masaru Tanaka
Cells 2025, 14(15), 1161; https://doi.org/10.3390/cells14151161 - 28 Jul 2025
Abstract
Parkinson’s disease (PD), a progressive neurodegenerative disorder, imposes growing clinical and socioeconomic burdens worldwide. Despite landmark discoveries in dopamine biology and α-synuclein pathology, translating mechanistic insights into effective, personalized interventions remains elusive. Recent advances in molecular profiling, neuroimaging, and computational modeling have broadened the understanding of PD as a multifactorial systems disorder rather than a purely dopaminergic condition. However, critical gaps persist in diagnostic precision, biomarker standardization, and the translation of bench side findings into clinically meaningful therapies. This review critically examines the current landscape of PD research, identifying conceptual blind spots and methodological shortfalls across pathophysiology, clinical evaluation, trial design, and translational readiness. By synthesizing evidence from molecular neuroscience, data science, and global health, the review proposes strategic directions to recalibrate the research agenda toward precision neurology. Here I highlight the urgent need for interdisciplinary, globally inclusive, and biomarker-driven frameworks to overcome the fragmented progression of PD research. Grounded in the Accelerating Medicines Partnership-Parkinson’s Disease (AMP-PD) and the Parkinson’s Progression Markers Initiative (PPMI), this review maps shared biomarkers, open data, and patient-driven tools to faster personalized treatment. In doing so, it offers actionable insights for researchers, clinicians, and policymakers working at the intersection of biology, technology, and healthcare delivery. As the field pivots from symptomatic relief to disease modification, the road forward must be cohesive, collaborative, and rigorously translational, ensuring that laboratory discoveries systematically progress to clinical application. Full article
(This article belongs to the Special Issue Exclusive Review Papers in Parkinson's Research)

25 pages, 2887 KiB  
Article
Federated Learning Based on an Internet of Medical Things Framework for a Secure Brain Tumor Diagnostic System: A Capsule Networks Application
by Roman Rodriguez-Aguilar, Jose-Antonio Marmolejo-Saucedo and Utku Köse
Mathematics 2025, 13(15), 2393; https://doi.org/10.3390/math13152393 - 25 Jul 2025
Abstract
Artificial intelligence (AI) has already played a significant role in the healthcare sector, particularly in image-based medical diagnosis. Deep learning models have produced satisfactory and useful results for accurate decision-making. Among the various types of medical images, magnetic resonance imaging (MRI) is frequently utilized in deep learning applications to analyze detailed structures and organs in the body, using advanced intelligent software. However, challenges related to performance and data privacy often arise when using medical data from patients and healthcare institutions. To address these issues, new approaches have emerged, such as federated learning. This technique ensures the secure exchange of sensitive patient and institutional data. It enables machine learning or deep learning algorithms to establish a client–server relationship, whereby specific parameters are securely shared between models while maintaining the integrity of the learning tasks being executed. Federated learning has been successfully applied in medical settings, including diagnostic applications involving medical images such as MRI data. This research introduces an analytical intelligence system based on an Internet of Medical Things (IoMT) framework that employs federated learning to provide a safe and effective diagnostic solution for brain tumor identification. By utilizing specific brain MRI datasets, the model enables multiple local capsule networks (CapsNet) to achieve improved classification results. The average accuracy rate of the CapsNet model exceeds 97%. The precision rate indicates that the CapsNet model performs well in accurately predicting true classes. Additionally, the recall findings suggest that this model is effective in detecting the target classes of meningiomas, pituitary tumors, and gliomas. The integration of these components into an analytical intelligence system that supports the work of healthcare personnel is the main contribution of this work. Evaluations have shown that this approach is effective for diagnosing brain tumors while ensuring data privacy and security. Moreover, it represents a valuable tool for enhancing the efficiency of the medical diagnostic process. Full article
(This article belongs to the Special Issue Innovations in Optimization and Operations Research)
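Federated learning here means that each site trains locally and only model parameters are exchanged with the server. A minimal FedAvg-style round, with a trivial placeholder classifier instead of the paper's capsule network and simulated clients, might look like this:

```python
import copy
import torch
import torch.nn as nn

def local_update(model, data_loader, epochs=1, lr=1e-3):
    """Train a copy of the global model on one hospital's private MRI batches."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    local.train()
    for _ in range(epochs):
        for x, y in data_loader:
            opt.zero_grad()
            loss_fn(local(x), y).backward()
            opt.step()
    return local.state_dict()

def federated_average(state_dicts):
    """FedAvg-style aggregation: element-wise mean of the clients' parameters."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

if __name__ == "__main__":
    # Placeholder linear classifier standing in for a CapsNet; three tumor classes assumed.
    global_model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 3))
    clients = [[(torch.randn(8, 1, 64, 64), torch.randint(0, 3, (8,))) for _ in range(4)]
               for _ in range(3)]                       # three simulated hospitals
    for _round in range(2):                             # two communication rounds
        updates = [local_update(global_model, loader) for loader in clients]
        global_model.load_state_dict(federated_average(updates))
    print("federated rounds complete")
```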

15 pages, 1758 KiB  
Article
Eye-Guided Multimodal Fusion: Toward an Adaptive Learning Framework Using Explainable Artificial Intelligence
by Sahar Moradizeyveh, Ambreen Hanif, Sidong Liu, Yuankai Qi, Amin Beheshti and Antonio Di Ieva
Sensors 2025, 25(15), 4575; https://doi.org/10.3390/s25154575 - 24 Jul 2025
Abstract
Interpreting diagnostic imaging and identifying clinically relevant features remain challenging tasks, particularly for novice radiologists who often lack structured guidance and expert feedback. To bridge this gap, we propose an Eye-Gaze Guided Multimodal Fusion framework that leverages expert eye-tracking data to enhance learning and decision-making in medical image interpretation. By integrating chest X-ray (CXR) images with expert fixation maps, our approach captures radiologists’ visual attention patterns and highlights regions of interest (ROIs) critical for accurate diagnosis. The fusion model utilizes a shared backbone architecture to jointly process image and gaze modalities, thereby minimizing the impact of noise in fixation data. We validate the system’s interpretability using Gradient-weighted Class Activation Mapping (Grad-CAM) and assess both classification performance and explanation alignment with expert annotations. Comprehensive evaluations, including robustness under gaze noise and expert clinical review, demonstrate the framework’s effectiveness in improving model reliability and interpretability. This work offers a promising pathway toward intelligent, human-centered AI systems that support both diagnostic accuracy and medical training. Full article
(This article belongs to the Section Sensing and Imaging)
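The interpretability check relies on Grad-CAM. A generic Grad-CAM routine, shown here on a stock ResNet-18 backbone rather than the paper's gaze-image fusion model, works roughly as follows:

```python
import torch
import torch.nn.functional as F
from torchvision import models

def grad_cam(model, target_layer, image, class_idx):
    """Gradient-weighted Class Activation Mapping for one image of shape (1, 3, H, W)."""
    activations, gradients = {}, {}

    def fwd_hook(_m, _inp, out):
        activations["value"] = out

    def bwd_hook(_m, _grad_in, grad_out):
        gradients["value"] = grad_out[0]

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)

    score = model(image)[0, class_idx]   # class score for the chosen label
    model.zero_grad()
    score.backward()
    h1.remove()
    h2.remove()

    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)     # global-average-pooled gradients
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()                     # normalized heatmap

if __name__ == "__main__":
    net = models.resnet18(weights=None).eval()   # stand-in for the fusion model's image backbone
    img = torch.randn(1, 3, 224, 224)            # stand-in for a preprocessed chest X-ray
    print(grad_cam(net, net.layer4, img, class_idx=0).shape)   # torch.Size([224, 224])
```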

16 pages, 1139 KiB  
Review
Student-Centered Curriculum: The Innovative, Integrative, and Comprehensive Model of “George Emil Palade” University of Medicine, Pharmacy, Sciences, and Technology of Targu Mures
by Leonard Azamfirei, Lorena Elena Meliț, Cristina Oana Mărginean, Anca-Meda Văsieșiu, Ovidiu Simion Cotoi, Cristina Bică, Daniela Lucia Muntean, Simona Gurzu, Klara Brînzaniuc, Claudia Bănescu, Mark Slevin, Andreea Varga and Simona Muresan
Educ. Sci. 2025, 15(8), 943; https://doi.org/10.3390/educsci15080943 - 23 Jul 2025
Abstract
Medical education is the paradigm of 21st century education and the current changes involve the adoption of integrative and comprehensive patient-centered teaching and learning approaches. Thus, curricular developers from George Emil Palade University of Medicine, Pharmacy, Sciences, and Technology of Targu Mures (G.E. Palade UMPhST of Targu Mures) have recently designed and implemented an innovative medical curriculum, as well as two valuable assessment tools for both theoretical knowledge and practical skills. Thus, during the first three preclinical years, the students will benefit from an organ- and system-centered block teaching approach, while the clinical years will focus on enabling students to achieve the most important practical skills in clinical practice, based on a patient bedside teaching system. In terms of theoretical knowledge assessment, the UNiX center at G.E. Palade UMPhST of Targu Mures, a recently designed center endowed with the latest next-generation technology, enables individualized, secured multiple-choice question-based assessments of the student’s learning outcomes. Moreover, an intelligent assessment tool for practical skills was also recently implemented in our branch in Hamburg, the Objective Structured Clinical Examination (O.S.C.E). This system uses direct observations for testing the student’s practical skills regarding anamnesis, clinical exams, procedures/maneuvers, the interpretation of laboratory tests and paraclinical investigations, differential diagnosis, management plans, communication, and medical counselling. The integrative, comprehensive, patient-centered curriculum and the intelligent assessment system, implemented in G.E Palade UMPhST of Targu Mures, help define innovation in education and enable the students to benefit from a high-quality medical education. Full article

22 pages, 4406 KiB  
Article
Colorectal Cancer Detection Tool Developed with Neural Networks
by Alex Ede Danku, Eva Henrietta Dulf, Alexandru George Berciu, Noemi Lorenzovici and Teodora Mocan
Appl. Sci. 2025, 15(15), 8144; https://doi.org/10.3390/app15158144 - 22 Jul 2025
Abstract
In the last two decades, there has been a considerable surge in the development of artificial intelligence. Imaging is most frequently employed for the diagnostic evaluation of patients, as it is regarded as one of the most precise methods for identifying the presence of a disease. However, a study indicates that approximately 800,000 individuals in the USA die or incur permanent disability because of misdiagnosis. The present study is based on the use of computer-aided diagnosis of colorectal cancer. The objective of this study is to develop a practical, low-cost, AI-based decision-support tool that integrates clinical test data (blood/stool) and, if needed, colonoscopy images to help reduce misdiagnosis and improve early detection of colorectal cancer for clinicians. Convolutional neural networks (CNNs) and artificial neural networks (ANNs) are utilized in conjunction with a graphical user interface (GUI), which caters to individuals lacking programming expertise. The performance of the artificial neural network (ANN) is measured using the mean squared error (MSE) metric, and the obtained performance is 7.38. For CNN, two distinct cases are under consideration: one with two outputs and one with three outputs. The precision of the models is 97.2% for RGB and 96.7% for grayscale, respectively, in the first instance, and 83% for RGB and 82% for grayscale in the second instance. However, using a pretrained network yielded superior performance with 99.5% for 2-output models and 93% for 3-output models. The GUI is composed of two panels, with the best ANN model and the best CNN model being utilized in each. The primary function of the tool is to assist medical personnel in reducing the time required to make decisions and the probability of misdiagnosis. Full article
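The ANN branch of the tool maps tabular blood and stool test values to a diagnostic output and is scored with MSE. A generic tabular model of that kind is sketched below on synthetic data with assumed layer sizes; the colonoscopy-image CNN branch would follow the usual pretrained-CNN transfer-learning pattern.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

# Synthetic stand-in for tabular blood/stool results and a diagnostic target score;
# the real tool's features, labels, and network sizes are not specified here.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 10))
y = 2 * X[:, 0] + X[:, 3] - 0.5 * X[:, 7] + rng.normal(scale=0.5, size=400)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=1))
ann.fit(X_train, y_train)
print("test MSE:", mean_squared_error(y_test, ann.predict(X_test)))
```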
