Search Results (753)

Search Parameters:
Keywords = transformative learning outcomes

29 pages, 1029 KB  
Protocol
Secondary Prevention of AFAIS: Deploying Traditional Regression, Machine Learning, and Deep Learning Models to Validate and Update CHA2DS2-VASc for 90-Day Recurrence
by Jenny Simon, Łukasz Kraiński, Michał Karliński and Maciej Niewada, on behalf of the VISTA-Acute Collaboration
J. Clin. Med. 2025, 14(20), 7327; https://doi.org/10.3390/jcm14207327 (registering DOI) - 16 Oct 2025
Abstract
Background/Objectives: Atrial fibrillation (AF) confers a fivefold greater risk of acute ischaemic stroke (AIS) relative to normal sinus rhythm. Among patients with AF-related AIS (AFAIS), recurrence is common: the rate of AFAIS is sixfold higher in secondary than in primary prevention patients. Guidelines recommend oral anticoagulation for primary and secondary prevention on the basis of CHA2DS2-VASc. However, guideline adherence is poor for secondary prevention. This is, in part, because the predictive value of CHA2DS2-VASc has not been ascertained with respect to recurrence: patients with and without previous stroke were not routinely differentiated in validation studies. We put forth a protocol to (1) validate and (2) update CHA2DS2-VASc for secondary prevention, aiming to deliver a clinical prediction rule (CPR) that better captures 90-day recurrence risk for a given AFAIS patient. Overwhelmingly poor quality of reporting has been deplored among published CPRs. Given that machine learning (ML) and deep learning (DL) methods are rife with challenges of their own, registered protocols are needed to make the CPR literature more validation-oriented, transparent, and systematic. This protocol aims to lead by example for prior planning of primary and secondary analyses to obtain incremental predictive value for existing CPRs. Methods: The Virtual International Stroke Trials Archive (VISTA), which has compiled data from 38 randomised controlled trials (RCTs) in AIS, was screened for patients who (1) had an AF diagnosis and (2) were treated with vitamin K antagonists (VKAs) or without any antithrombotic medication. This yielded 2763 AFAIS patients. Patients without an AF diagnosis were also retained under the condition that they were treated with VKAs or without any antithrombotic medication, which yielded 7809 non-AF AIS patients. We will validate CHA2DS2-VASc for 90-day recurrence and secondary outcomes (7-day recurrence, 7- and 90-day haemorrhagic transformation, 90-day decline in functional status, and 90-day all-cause mortality) by examining discrimination, calibration, and clinical utility. To update CHA2DS2-VASc, logistic regression (LR), extreme gradient boosting (XGBoost), and multilayer perceptron (MLP) models will be trained using nested cross-validation. The MLP model will employ transfer learning to leverage information from the non-AF AIS patient cohort. Results: Models will be assessed on a hold-out test set (25%) using area under the receiver operating characteristic curve (AUC), calibration curves, and F1 score. Shapley additive explanations (SHAP) will be used to interpret the models and construct the updated CPRs. Conclusions: The CPRs will be compared by means of discrimination, calibration, and clinical utility. In so doing, the CPRs will be evaluated against each other, CHA2DS2-VASc, and default strategies, with test tradeoff analysis performed to balance ease-of-use with clinical utility. Full article
(This article belongs to the Special Issue Application of Anticoagulation and Antiplatelet Therapy)
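
The model-development plan above (nested cross-validation to tune candidate models, a 25% hold-out test set, and AUC/F1 evaluation) can be illustrated with a minimal sketch. It is not the VISTA analysis: the synthetic data, feature set, and hyperparameter grids below are placeholder assumptions.

```python
# Minimal sketch of the planned workflow: nested cross-validation for model
# selection, then evaluation on a 25% hold-out set with AUC and F1.
# Synthetic data and hyperparameter grids are placeholders, not the VISTA setup.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score
from xgboost import XGBClassifier  # assumed available

X, y = make_classification(n_samples=2763, n_features=12, weights=[0.9], random_state=0)
X_dev, X_test, y_dev, y_test = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

candidates = {
    "logistic_regression": GridSearchCV(
        LogisticRegression(max_iter=1000), {"C": [0.01, 0.1, 1.0, 10.0]},
        cv=5, scoring="roc_auc"),
    "xgboost": GridSearchCV(
        XGBClassifier(eval_metric="logloss"),
        {"max_depth": [2, 3, 4], "n_estimators": [100, 300]},
        cv=5, scoring="roc_auc"),
}

for name, search in candidates.items():
    # The outer CV scores the whole tuning procedure, giving a nested-CV estimate.
    outer_auc = cross_val_score(search, X_dev, y_dev, cv=5, scoring="roc_auc")
    print(f"{name}: nested-CV AUC {outer_auc.mean():.3f} +/- {outer_auc.std():.3f}")

# Refit the chosen model on all development data and evaluate on the hold-out set.
best = candidates["xgboost"].fit(X_dev, y_dev)
proba = best.predict_proba(X_test)[:, 1]
print("hold-out AUC:", roc_auc_score(y_test, proba))
print("hold-out F1 :", f1_score(y_test, (proba >= 0.5).astype(int)))
```

In the protocol itself, the tuned models would additionally be examined with calibration curves and SHAP values before the updated CPRs are constructed.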

20 pages, 1129 KB  
Article
Sustained Learning as a Dynamic Capability for Digital Transformation: A Multilevel Quantitative Study on Workforce Readiness and Digital Services in Healthcare
by Sandra Starke and Iveta Ludviga
Sustainability 2025, 17(20), 9184; https://doi.org/10.3390/su17209184 (registering DOI) - 16 Oct 2025
Abstract
In the context of the digital transformation of healthcare organisations, this study investigates the critical role of sustained learning, employee readiness, and supportive learning conditions in enabling digital service offerings. Drawing on dynamic capabilities theory, we conceptualise and empirically test a multilevel model, exploring how sustained learning behaviour and mindset shape the Ability–Motivation–Opportunity (AMO) framework at the individual level. Furthermore, we analyse how workplace learning mediates the relationship between AMO and service outcomes at the organisational level, with sector affiliation as a moderating factor. Data were collected from 856 participants via online surveys and analysed with PLS-SEM. The results confirmed that sustained learning significantly enhances individual readiness (ability, motivation, and opportunity), which in turn positively influences digital services. Workplace learning was found to be a potent mediator, and sector affiliation significantly moderated the relationship between workforce enhancement and digital service outcomes. These findings underline the importance of embedding a sustained learning mindset and behaviour among employees as an organisational capability, beyond technical implementation. The results suggest that successful digital transformation hinges on cognitive and behavioural learning engagement, underpinned by supportive learning structures and context-specific strategies. Full article
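
As a rough illustration of the mediation and moderation structure described above, the sketch below fits the corresponding paths with ordinary least-squares regressions on simulated data. It is a stand-in for exposition only, not the PLS-SEM estimation reported in the study, and all variable names, codings, and coefficients are hypothetical.

```python
# Illustrative stand-in for the path structure (AMO -> workplace learning -> digital
# services, with the learning -> services link moderated by sector), fitted with
# plain OLS on simulated data. NOT the study's PLS-SEM; names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 856
amo = rng.normal(size=n)                      # individual readiness (ability/motivation/opportunity)
sector = rng.integers(0, 2, size=n)           # 0 / 1 sector indicator (assumed coding)
learning = 0.5 * amo + rng.normal(size=n)     # workplace learning (mediator)
services = 0.3 * amo + 0.4 * learning + 0.2 * sector * learning + rng.normal(size=n)
df = pd.DataFrame({"amo": amo, "sector": sector, "learning": learning, "services": services})

# Path a: AMO -> workplace learning
path_a = smf.ols("learning ~ amo", data=df).fit()
# Paths b and c', plus sector moderation of the learning -> services link
path_b = smf.ols("services ~ amo + learning * sector", data=df).fit()

print("a:", path_a.params["amo"])
print("b:", path_b.params["learning"])
print("moderation:", path_b.params["learning:sector"])
# A non-zero a*b product is consistent with mediation; the interaction term
# indicates moderation by sector (bootstrap CIs would be used in practice).
```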

15 pages, 2232 KB  
Article
Image-Based Deep Learning for Brain Tumour Transcriptomics: A Benchmark of DeepInsight, Fotomics, and Saliency-Guided CNNs
by Ali Alyatimi, Vera Chung, Muhammad Atif Iqbal and Ali Anaissi
Mach. Learn. Knowl. Extr. 2025, 7(4), 119; https://doi.org/10.3390/make7040119 - 15 Oct 2025
Abstract
Classifying brain tumour transcriptomic data is crucial for precision medicine but remains challenging due to high dimensionality and the limited interpretability of conventional models. This study benchmarks three image-based deep learning approaches, DeepInsight, Fotomics, and a novel saliency-guided convolutional neural network (CNN), for transcriptomic classification. DeepInsight utilises dimensionality reduction to spatially arrange gene features, while Fotomics applies Fourier transforms to encode expression patterns into structured images. The proposed method transforms each single-cell gene expression profile into an RGB image using PCA, UMAP, or t-SNE, enabling CNNs such as ResNet to learn spatially organised molecular features. Gradient-based saliency maps are employed to highlight the gene regions most influential in model predictions. Evaluation is conducted on two biologically and technologically different datasets: single-cell RNA-seq from glioblastoma (GSM3828672) and bulk microarray data from medulloblastoma (GSE85217). The results demonstrate that image-based deep learning methods, particularly those incorporating saliency guidance, provide a robust and interpretable framework for uncovering biologically meaningful patterns in complex high-dimensional omics data. For instance, ResNet-18 achieved the highest accuracies, 97.25% on the GSE85217 dataset and 91.02% on GSM3828672, outperforming other baseline models across multiple metrics. Full article
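
The core idea of such a pipeline, placing genes at 2-D coordinates learned by dimensionality reduction, rasterising each expression profile into an image, and reading gradient-based saliency from a CNN, can be sketched as follows. This is a simplified stand-in, not the paper's implementation: the grid size, the tiny network, and the random data are assumptions.

```python
# Sketch: place genes on a 2-D grid via PCA of the gene-by-sample matrix, rasterise
# each profile into an image, and read a gradient saliency map from a small CNN.
# Grid size, network, and random data are placeholders, not the paper's pipeline.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

n_samples, n_genes, grid = 64, 500, 32
expr = np.random.rand(n_samples, n_genes).astype(np.float32)

# 1) Learn a 2-D location for every gene from its expression pattern across samples.
coords = PCA(n_components=2).fit_transform(expr.T)           # (n_genes, 2)
coords -= coords.min(axis=0)
coords /= coords.max(axis=0)
pix = np.clip((coords * (grid - 1)).round().astype(int), 0, grid - 1)

# 2) Rasterise each sample: pixel value = mean expression of the genes mapped there.
images = np.zeros((n_samples, 1, grid, grid), dtype=np.float32)
counts = np.zeros((grid, grid), dtype=np.float32)
np.add.at(counts, (pix[:, 0], pix[:, 1]), 1.0)
for i in range(n_samples):
    np.add.at(images[i, 0], (pix[:, 0], pix[:, 1]), expr[i])
images /= np.maximum(counts, 1.0)

# 3) Tiny CNN plus gradient saliency (|d logit / d pixel|), the same idea used to
#    highlight influential gene regions.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
x = torch.tensor(images[:1], requires_grad=True)
model(x)[0, 1].backward()
saliency = x.grad.abs().squeeze().numpy()                    # (grid, grid) importance map
print(saliency.shape, saliency.max())
```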

32 pages, 5306 KB  
Review
Neuroimaging and Machine Learning in OCD: Advances in Diagnostic and Therapeutic Insights
by Norah A. Alturaiqi, Wijdan S. Aljebreen, Wedad Alawad, Shuaa S. Alharbi and Haifa F. Alhasson
Brain Sci. 2025, 15(10), 1106; https://doi.org/10.3390/brainsci15101106 - 14 Oct 2025
Abstract
Background/Objectives: Obsessive–Compulsive Disorder (OCD) is a chronic mental health condition characterized by intrusive thoughts and repetitive behaviors. Traditional diagnostic methods rely on subjective clinical assessments, delaying effective intervention. This review examines how advanced neuroimaging techniques, such as Magnetic Resonance Imaging (MRI) and Diffusion Tensor Imaging (DTI), integrated with machine learning (ML), can improve OCD diagnostics by identifying structural and functional brain abnormalities, particularly in the cortico-striato-thalamo-cortical (CSTC) circuit. Methods: Findings from studies using MRI and DTI to identify OCD-related neurobiological markers are synthesized. Machine learning algorithms like Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs) are evaluated for their ability to analyze neuroimaging data. The role of transfer learning in overcoming dataset limitations and heterogeneity is also explored. Results: ML algorithms have achieved diagnostic accuracies exceeding 80%, revealing subtle neurobiological markers linked to OCD. Abnormalities in the CSTC circuit are consistently identified. Transfer learning shows promise in enhancing predictive modeling and enabling personalized treatment strategies, especially in resource-constrained settings. Conclusions: The integration of neuroimaging and ML represents a transformative approach to OCD diagnostics, offering improved accuracy and biologically informed insights. Future research should focus on optimizing multimodal imaging techniques, increasing data generalizability, and addressing interpretability challenges to enhance clinical applicability. These innovations have the potential to advance precision diagnostics and support more targeted therapeutic interventions, ultimately improving outcomes for individuals with OCD. Full article

14 pages, 1477 KB  
Article
Transformer-Based Deep Learning for Preoperative Prediction of Microvascular Invasion in Hepatocellular Carcinoma
by Ruilin He, Huilin Chen, Wenjie Zou, Mengting Gu, Xingyu Zhao, Ningyang Jia and Wanmin Liu
Cancers 2025, 17(20), 3314; https://doi.org/10.3390/cancers17203314 - 14 Oct 2025
Abstract
Background: Microvascular invasion (MVI) is a critical prognostic factor in hepatocellular carcinoma (HCC), but preoperative three-class prediction remains challenging. Radiomics and clinical biomarkers may enable more accurate and individualized assessment. Aim: The aim of this study was to develop and validate a Transformer-based deep learning framework that integrates radiomic and clinical features for direct three-class MVI classification in HCC patients. Methods: This retrospective study included 437 patients with pathologically confirmed HCC and known MVI status from two campuses of a single institution. Patients from Hospital A (n = 305) were randomly divided into training and internal test cohorts, while patients from Hospital B (n = 132) were used as an independent external validation cohort. Radiomic features were extracted from preoperative Gd-BOPTA-enhanced MRI, and clinical laboratory data were collected. A two-stage feature selection strategy, combining univariate statistical testing and recursive feature elimination, was applied. A Transformer-based model was built to classify three MVI categories (M0, M1, M2), and its performance was evaluated in both the internal test cohort and the external validation cohort. Results were compared with those from traditional machine learning models, including Random Forest, Logistic Regression, XGBoost, and LightGBM. Results: On the internal test set (n = 76, Hospital A), the model achieved an accuracy of 0.733 (95% CI: 0.64–0.83), a weighted F1-score of 0.733, and a macro-average AUC of 0.880 (95% CI: 0.807–0.953). The sensitivity and specificity for M1 were 0.56 (95% CI: 0.31–0.78) and 0.86 (95% CI: 0.74–0.94), respectively; for high-risk M2 cases, the sensitivity was 0.73 (95% CI: 0.64–0.81) and the specificity was 0.91 (95% CI: 0.85–0.96). On the external validation set (n = 132, Hospital B), performance remained stable with an accuracy of 0.758, a weighted F1-score of 0.768, and a macro-average AUC of 0.886 (95% CI: 0.833–0.940). Conclusions: This Transformer-based model enables accurate and objective three-class MVI prediction using multi-modal features, supporting individualized surgical planning and improved clinical outcomes. In particular, the ability to preoperatively identify high-risk M2 patients may inform surgical margin design, guide adjuvant therapy strategies, and influence liver transplantation eligibility. Full article
(This article belongs to the Section Methods and Technologies Development)
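
One plausible way to realise the kind of architecture described above, a Transformer encoder over a fused radiomic-plus-clinical feature vector with a three-class (M0/M1/M2) output head, is sketched below. The tokenisation scheme, layer sizes, and input dimensionality are illustrative assumptions rather than the authors' model.

```python
# Sketch: treat each scalar feature as a token, embed it, run a Transformer encoder,
# and classify into three MVI classes (M0/M1/M2). Sizes and data are placeholders.
import torch
import torch.nn as nn

class TabularTransformer(nn.Module):
    def __init__(self, n_features: int, d_model: int = 32, n_classes: int = 3):
        super().__init__()
        self.embed = nn.Linear(1, d_model)                      # per-feature value embedding
        self.feature_pos = nn.Parameter(torch.zeros(n_features, d_model))
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                                       # x: (batch, n_features)
        tokens = self.embed(x.unsqueeze(-1)) + self.feature_pos # (batch, n_features, d_model)
        encoded = self.encoder(tokens).mean(dim=1)              # pool over feature tokens
        return self.head(encoded)                               # (batch, 3) class logits

model = TabularTransformer(n_features=24)
x = torch.randn(8, 24)            # e.g. selected radiomic + clinical features
logits = model(x)
print(logits.shape)               # torch.Size([8, 3])
```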

31 pages, 1305 KB  
Review
Artificial Intelligence in Cardiac Electrophysiology: A Clinically Oriented Review with Engineering Primers
by Giovanni Canino, Assunta Di Costanzo, Nadia Salerno, Isabella Leo, Mario Cannataro, Pietro Hiram Guzzi, Pierangelo Veltri, Sabato Sorrentino, Salvatore De Rosa and Daniele Torella
Bioengineering 2025, 12(10), 1102; https://doi.org/10.3390/bioengineering12101102 - 13 Oct 2025
Viewed by 198
Abstract
Artificial intelligence (AI) is transforming cardiac electrophysiology across the entire care pathway, from arrhythmia detection on 12-lead electrocardiograms (ECGs) and wearables to the guidance of catheter ablation procedures, through to outcome prediction and therapeutic personalization. End-to-end deep learning (DL) models have achieved cardiologist-level performance in rhythm classification and prognostic estimation on standard ECGs, with a reported arrhythmia classification accuracy of ≥95% and an atrial fibrillation detection sensitivity/specificity of ≥96%. The application of AI to wearable devices enables population-scale screening and digital triage pathways. In the electrophysiology (EP) laboratory, AI standardizes the interpretation of intracardiac electrograms (EGMs) and supports target selection, and machine learning (ML)-guided strategies have improved ablation outcomes. In patients with cardiac implantable electronic devices (CIEDs), remote monitoring feeds multiparametric models capable of anticipating heart-failure decompensation and arrhythmic risk. This review outlines the principal modeling paradigms of supervised learning (regression models, support vector machines, neural networks, and random forests) and unsupervised learning (clustering, dimensionality reduction, association rule learning) and examines emerging technologies in electrophysiology (digital twins, physics-informed neural networks, DL for imaging, graph neural networks, and on-device AI). However, major challenges remain for clinical translation, including an external validation rate below 30% and workflow integration below 20%, which represent core obstacles to real-world adoption. A joint clinical and engineering roadmap is essential to translate prototypes into reliable bedside tools. Full article
(This article belongs to the Special Issue Mathematical Models for Medical Diagnosis and Testing)

36 pages, 4151 KB  
Review
Integration of Artificial Intelligence in Biosensors for Enhanced Detection of Foodborne Pathogens
by Riza Jane S. Banicod, Nazia Tabassum, Du-Min Jo, Aqib Javaid, Young-Mog Kim and Fazlurrahman Khan
Biosensors 2025, 15(10), 690; https://doi.org/10.3390/bios15100690 - 12 Oct 2025
Viewed by 193
Abstract
Foodborne pathogens remain a significant public health concern, necessitating the development of rapid, sensitive, and reliable detection methods for various food matrices. Traditional biosensors, while effective in many contexts, often face limitations related to complex sample environments, signal interpretation, and on-site usability. The integration of artificial intelligence (AI) into biosensing platforms offers a transformative approach to address these challenges. This review critically examines recent advancements in AI-assisted biosensors for detecting foodborne pathogens in various food samples, including meat, dairy products, fresh produce, and ready-to-eat foods. Emphasis is placed on the application of machine learning and deep learning to improve biosensor accuracy, reduce detection time, and automate data interpretation. AI models have demonstrated capabilities in enhancing sensitivity, minimizing false results, and enabling real-time, on-site analysis through innovative interfaces. Additionally, the review highlights the types of biosensing mechanisms employed, such as electrochemical, optical, and piezoelectric, and how AI optimizes their performance. While these developments show promising outcomes, challenges remain in terms of data quality, algorithm transparency, and regulatory acceptance. The future integration of standardized datasets, explainable AI models, and robust validation protocols will be essential to fully harness the potential of AI-enhanced biosensors for next-generation food safety monitoring. Full article
(This article belongs to the Special Issue Biosensors for Environmental Monitoring and Food Safety)

8 pages, 628 KB  
Proceeding Paper
An Early Hair Loss Detection and Prediction Method Based on Machine Learning
by Muhammad Ahmad, Azka Mir and Anton Permana
Eng. Proc. 2025, 107(1), 126; https://doi.org/10.3390/engproc2025107126 - 11 Oct 2025
Viewed by 54
Abstract
Hair loss is a common issue that affects many people around the world and can lead to mental and social challenges, undermining self-esteem and social relationships. To address these challenges, this study investigates the promising role of machine learning (ML) in the early detection and prediction of hair loss, paving the way for personalized treatment. The study applies several techniques, including Random Forest, Support Vector Machines (SVMs), and K-nearest neighbor (KNN), together with feature engineering, preprocessing, and hyperparameter tuning. The resulting models outperform traditional approaches, with clear gains in accuracy and precision. This study shows the potential of automated diagnostics to transform the treatment of hair loss, to the benefit of the many affected by it. Full article
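
A minimal sketch of the kind of comparison described above, evaluating Random Forest, SVM, and KNN classifiers with shared preprocessing and cross-validated accuracy and precision, is shown below; the synthetic data and hyperparameter choices are placeholders.

```python
# Sketch: compare Random Forest, SVM, and KNN on the same (synthetic) data using
# a shared preprocessing pipeline and cross-validated accuracy/precision.
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_validate
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=15, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": make_pipeline(StandardScaler(), SVC(C=1.0, kernel="rbf")),
    "knn": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}

for name, model in models.items():
    scores = cross_validate(model, X, y, cv=5, scoring=("accuracy", "precision"))
    print(f"{name}: accuracy={scores['test_accuracy'].mean():.3f} "
          f"precision={scores['test_precision'].mean():.3f}")
```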

18 pages, 1542 KB  
Article
DiabCompSepsAI: Integrated AI Model for Early Detection and Prediction of Postoperative Complications in Diabetic Patients—Using a Random Forest Classifier
by Sri Harsha Boppana, Sachin Sravan Kumar Komati, Raja Hamsa Chitturi, Ritwik Raj and C. David Mintz
J. Clin. Med. 2025, 14(20), 7173; https://doi.org/10.3390/jcm14207173 - 11 Oct 2025
Viewed by 229
Abstract
Background/Objectives: Postoperative complications such as wound infections and sepsis are common in diabetic patients, often resulting in longer hospital stays and higher morbidity. This study hypothesizes that a Random Forest Classifier can accurately predict these complications, enabling early clinical interventions. The model utilizes ensemble learning to integrate diverse patient data and improve predictive accuracy beyond traditional risk assessments. Methods: A comprehensive retrospective analysis was performed using data extracted from the National Surgical Quality Improvement Program (NSQIP) database. The dataset encompassed a wide array of variables, including demographic factors, clinical markers, and detailed surgical data (specialty, type of anesthesia, duration of surgery). Each variable was encoded into a numerical format, with categorical variables transformed through one-hot encoding and continuous variables normalized. The dataset was partitioned into training (80%) and testing (20%) subsets, ensuring a balanced representation of the target outcomes. The Random Forest Classifier was selected due to its robustness in handling high-dimensional data and its ability to model complex interactions between variables. Results: The Random Forest model showed accuracy rates of 94.38% for wound infection and 94.94% for sepsis. Precision and recall metrics also exceeded 94%, highlighting the model’s accuracy in identifying true positives and reducing false positives. ROC curve analysis yielded AUC values of 0.92 for wound infection and 0.95 for sepsis, indicating strong discriminative capability. Feature importance analysis further identified key predictors, including surgical duration, specific laboratory markers, and patient comorbidities. Conclusions: This study demonstrates the Random Forest Classifier’s strong predictive ability for postoperative wound infections and sepsis in diabetic patients. The model’s high-performance metrics indicate its potential for real-time risk stratification in clinical workflows. Future research should validate these findings in diverse populations and surgical settings. Incorporating this predictive model into clinical practice has the potential to significantly improve patient outcomes and reduce healthcare costs. Full article
(This article belongs to the Section Endocrinology & Metabolism)
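
The described preprocessing and modelling steps (one-hot encoding of categorical variables, normalisation of continuous ones, an 80/20 split, and a Random Forest evaluated with accuracy and ROC AUC) can be sketched as below. The column names and synthetic data are hypothetical, not the NSQIP schema.

```python
# Sketch: one-hot encode categorical variables, normalise continuous ones, train a
# Random Forest on an 80/20 split, and report accuracy and AUC.
# Column names and the synthetic frame are hypothetical, not the NSQIP schema.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "surgical_specialty": rng.choice(["general", "vascular", "orthopedic"], n),
    "anesthesia_type": rng.choice(["general", "regional"], n),
    "operative_minutes": rng.normal(120, 40, n),
    "preop_wbc": rng.normal(8, 2, n),
    "sepsis": rng.integers(0, 2, n),          # outcome label (placeholder)
})

categorical = ["surgical_specialty", "anesthesia_type"]
continuous = ["operative_minutes", "preop_wbc"]
prep = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
    ("num", StandardScaler(), continuous),
])
clf = Pipeline([("prep", prep),
                ("rf", RandomForestClassifier(n_estimators=300, random_state=0))])

X_train, X_test, y_train, y_test = train_test_split(
    df[categorical + continuous], df["sepsis"],
    test_size=0.2, stratify=df["sepsis"], random_state=0)
clf.fit(X_train, y_train)
proba = clf.predict_proba(X_test)[:, 1]
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
print("ROC AUC :", roc_auc_score(y_test, proba))
```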

20 pages, 1358 KB  
Review
Artificial Intelligence in the Diagnosis and Management of Atrial Fibrillation
by Otilia Țica, Asgher Champsi, Jinming Duan and Ovidiu Țica
Diagnostics 2025, 15(20), 2561; https://doi.org/10.3390/diagnostics15202561 - 11 Oct 2025
Viewed by 388
Abstract
Artificial intelligence (AI) has increasingly become a transformative tool in cardiology, particularly in diagnosing and managing atrial fibrillation (AF), the most prevalent cardiac arrhythmia. This review aims to critically assess and synthesize current AI methodologies and their clinical relevance in AF diagnosis, risk prediction, and therapeutic guidance. It systematically evaluates recent advancements in AI methodologies, including machine learning, deep learning, and natural language processing, for AF detection, risk stratification, and therapeutic decision-making. AI-driven tools have demonstrated superior accuracy and efficiency in interpreting electrocardiograms (ECGs), continuous monitoring via wearable devices, and predicting AF onset and progression compared to traditional clinical approaches. Deep learning algorithms, notably convolutional neural networks (CNNs) and recurrent neural networks (RNNs), have revolutionized ECG analysis, identifying subtle waveform features predictive of AF development. Additionally, AI models significantly enhance clinical decision-making by personalizing anticoagulation therapy, optimizing rhythm versus rate-control strategies, and predicting procedural outcomes for catheter ablation. Despite considerable potential, practical adoption of AI in clinical practice is constrained by challenges including data privacy, explainability, and integration into clinical workflows. Addressing these challenges through robust validation studies, transparent algorithm development, and interdisciplinary collaborations will be crucial. In conclusion, AI represents a paradigm shift in AF management, promising improvements in diagnostic precision, personalized care, and patient outcomes. This review highlights the growing clinical importance of AI in AF care and provides a consolidated perspective on current applications, limitations, and future directions. Full article

18 pages, 6555 KB  
Article
Bioinformatics Analysis of Tumor-Associated Macrophages in Hepatocellular Carcinoma and Establishment of a Survival Model Based on Transformer
by Zhuo Zeng, Shenghua Rao and Jiemeng Zhang
Int. J. Mol. Sci. 2025, 26(19), 9825; https://doi.org/10.3390/ijms26199825 - 9 Oct 2025
Viewed by 283
Abstract
Hepatocellular carcinoma (HCC) ranks among the most prevalent malignancies globally. Although treatment strategies have improved, the prognosis for patients with advanced HCC remains unfavorable. Tumor-associated macrophages (TAMs) play a dual role, exhibiting both anti-tumor and pro-tumor functions. In this study, we analyzed single-cell RNA sequencing data from 10 HCC tumor cores and 8 adjacent non-tumor liver tissues available in the dataset GSE149614. Using dimensionality reduction and clustering approaches, we identified six major cell types and nine distinct TAM subtypes. We employed Monocle2 for cell trajectory analysis, hdWGCNA for co-expression network analysis, and CellChat to investigate functional communication between TAMs and other components of the tumor microenvironment. Furthermore, we estimated TAM abundance in TCGA-LIHC samples using CIBERSORT and observed that the relative proportions of specific TAM subtypes were significantly correlated with patient survival. To identify TAM-related genes influencing patient outcomes, we developed a high-dimensional, gene-based transformer survival model. This model achieved superior concordance index (C-index) values across multiple datasets, including TCGA-LIHC, OEP000321, and GSE14520, outperforming other methods. Our results emphasize the heterogeneity of tumor-associated macrophages in hepatocellular carcinoma and highlight the practicality of our deep learning framework in survival analysis. Full article
(This article belongs to the Section Molecular Informatics)
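
For reference, the concordance index (C-index) used to compare the survival models can be computed as in the sketch below; the risk scores, survival times, and censoring indicators are simulated placeholders, and the lifelines package is assumed to be available.

```python
# Sketch: evaluate a survival model's ranking performance with the concordance
# index (C-index), the metric reported for the transformer survival model.
# Times, events, and risk scores below are simulated placeholders.
import numpy as np
from lifelines.utils import concordance_index  # assumed available

rng = np.random.default_rng(0)
n = 300
risk = rng.normal(size=n)                          # model output: higher = higher risk
times = rng.exponential(scale=np.exp(-risk) * 24)  # survival times shaped by risk
events = rng.random(n) < 0.7                       # ~30% right-censored

# lifelines expects survival-like scores (higher = longer survival), so pass -risk:
# a pair is concordant when the higher-risk patient has the shorter observed time.
cindex = concordance_index(times, -risk, event_observed=events)
print(f"C-index: {cindex:.3f}")
```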

26 pages, 1116 KB  
Review
Optimizing Anti-PD1 Immunotherapy: An Overview of Pharmacokinetics, Biomarkers, and Therapeutic Drug Monitoring
by Joaquim Faria Monteiro, Alexandrina Fernandes, Diogo Gavina Tato, Elias Moreira, Ricardo Ribeiro, Henrique Reguengo, Jorge Gonçalves and Paula Fresco
Cancers 2025, 17(19), 3262; https://doi.org/10.3390/cancers17193262 - 8 Oct 2025
Viewed by 448
Abstract
Anti-PD-1 therapies have transformed cancer treatment by restoring antitumor T cell activity. Despite their broad clinical use, variability in treatment response and immune-related adverse events underscore the need for therapeutic optimization. This article provides an integrative overview of the pharmacokinetics (PKs) of anti-PD-1 antibodies—such as nivolumab, pembrolizumab, and cemiplimab—and examines pharmacokinetic–pharmacodynamic (PK-PD) relationships, highlighting the impact of clearance variability on drug exposure, efficacy, and safety. Baseline clearance and its reduction during therapy, together with interindividual variability, emerge as important dynamic biomarkers with potential applicability across different cancer types for guiding individualized dosing strategies. The review also discusses established biomarkers for anti-PD-1 therapies, including tumor PD-L1 expression and immune cell signatures, and their relevance for patient stratification. The evidence supports a shift from traditional weight-based dosing toward adaptive dosing and therapeutic drug monitoring (TDM), especially in long-term responders and cost-containment contexts. Notably, the inclusion of clearance-based biomarkers—such as baseline clearance and its reduction—into therapeutic models represents a key step toward individualized, dynamic immunotherapy. In conclusion, optimizing anti-PD-1 therapy through PK-PD insights and biomarker integration holds promise for improving outcomes and reducing toxicity. Future research should focus on validating PK-based approaches and developing robust algorithms (machine learning models incorporating clearance, tumor burden, and other validated biomarkers) for tailored cancer treatment. Full article
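
A single pharmacokinetic relationship underlies much of the clearance-based reasoning above: at steady state, average exposure equals the dose rate divided by clearance, so a fall in clearance during therapy raises exposure at a fixed dose. The sketch below works through that arithmetic with purely illustrative numbers, not drug-specific dosing guidance.

```python
# Illustrative only: how clearance variability changes steady-state exposure.
# C_ss,avg = dose / (CL * tau); a drop in clearance during therapy raises exposure.
dose_mg = 240.0          # hypothetical fixed dose per interval
tau_h = 336.0            # hypothetical 2-week dosing interval, in hours
for cl_l_per_h in (0.010, 0.008, 0.006):        # baseline vs. progressively reduced clearance
    c_ss_avg = dose_mg / (cl_l_per_h * tau_h)   # mg/L, average steady-state concentration
    print(f"CL = {cl_l_per_h * 1000:.0f} mL/h -> average C_ss ~ {c_ss_avg:.1f} mg/L")
```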

22 pages, 1014 KB  
Review
Advances in IoT, AI, and Sensor-Based Technologies for Disease Treatment, Health Promotion, Successful Ageing, and Ageing Well
by Yuzhou Qian and Keng Leng Siau
Sensors 2025, 25(19), 6207; https://doi.org/10.3390/s25196207 - 7 Oct 2025
Viewed by 583
Abstract
Recent advancements in the Internet of Things (IoT) and artificial intelligence (AI) are unlocking transformative opportunities across society. One of the most critical challenges addressed by these technologies is the ageing population, which presents mounting concerns for healthcare systems and quality of life worldwide. By supporting continuous monitoring, personal care, and data-driven decision-making, IoT and AI are shifting healthcare delivery from a reactive approach to a proactive one. This paper presents a comprehensive overview of IoT-based systems with a particular focus on the Internet of Healthcare Things (IoHT) and their integration with AI, referred to as the Artificial Intelligence of Things (AIoT). We illustrate the operating procedures of IoHT systems in detail. We highlight their applications in disease management, health promotion, and active ageing. Key enabling technologies, including cloud computing, edge computing architectures, machine learning, and smart sensors, are examined in relation to continuous health monitoring, personalized interventions, and predictive decision support. This paper also indicates potential challenges that IoHT systems face, including data privacy, ethical concerns, and technology transition and aversion, and it reviews corresponding defense mechanisms from perception, policy, and technology levels. Future research directions are discussed, including explainable AI, digital twins, metaverse applications, and multimodal sensor fusion. By integrating IoT and AI, these systems offer the potential to support more adaptive and human-centered healthcare delivery, ultimately improving treatment outcomes and supporting healthy ageing. Full article
(This article belongs to the Section Internet of Things)

26 pages, 1191 KB  
Systematic Review
The Use of Multimedia in the Teaching and Learning Process of Higher Education: A Systematic Review
by Evelina Staneviciene and Gintarė Žekienė
Sustainability 2025, 17(19), 8859; https://doi.org/10.3390/su17198859 - 3 Oct 2025
Viewed by 824
Abstract
The integration of multimedia technologies is transforming teaching and learning in higher education, offering innovative ways to improve student engagement and learning outcomes. Although numerous studies investigate the impact of multimedia, there is still a clear need for a synthesis that brings together the latest evidence from a variety of disciplines and contexts. To address this need, this systematic review aims to summarize the empirical evidence, provide a clearer understanding of how multimedia is applied in higher education, outline how educators can design multimedia-supported instruction effectively, and draw out the implications for curriculum design. This article focuses on three key research questions: (1) How does the integration of multimedia in higher education classrooms influence student engagement and learning outcomes? (2) How does the use of multimedia affect the development of specific skills? (3) What are the challenges and opportunities of integrating multimedia technologies into higher education? Relevant studies were systematically retrieved and screened from major academic databases, including ScienceDirect, Web of Science, IEEE Xplore, Wiley Online Library, Springer, Taylor & Francis, and Google Scholar. In total, 48 studies were selected from these sources for detailed analysis. The findings showed that multimedia tools enhance student engagement, motivation, and performance when integrated with clear pedagogical strategies. In addition, multimedia helps to develop skills such as creativity, digital literacy, and independent learning. However, challenges such as technical limitations, uneven infrastructure, and the need for ongoing teacher training remain significant obstacles to fully exploiting these benefits in higher education. Addressing these challenges requires coordinated institutional support, investment in professional development, and careful alignment of multimedia tools with pedagogical goals. Full article
(This article belongs to the Special Issue Digital Teaching and Development in Sustainable Higher Education)

20 pages, 27829 KB  
Article
Deep Learning Strategies for Semantic Segmentation in Robot-Assisted Radical Prostatectomy
by Elena Sibilano, Claudia Delprete, Pietro Maria Marvulli, Antonio Brunetti, Francescomaria Marino, Giuseppe Lucarelli, Michele Battaglia and Vitoantonio Bevilacqua
Appl. Sci. 2025, 15(19), 10665; https://doi.org/10.3390/app151910665 - 2 Oct 2025
Viewed by 360
Abstract
Robot-assisted radical prostatectomy (RARP) has become the most prevalent treatment for patients with organ-confined prostate cancer. Despite superior outcomes, suboptimal vesicourethral anastomosis (VUA) may lead to serious complications, including urinary leakage, prolonged catheterization, and extended hospitalization. Fine-grained assessment of this task requires precise localization of both the surgical needle and the surrounding vesical and urethral tissues to be coadapted. Nonetheless, identifying anatomical structures in endoscopic videos is difficult due to tissue distortions, changes in brightness, and instrument interference. In this paper, we propose and compare two Deep Learning (DL) pipelines for the automatic segmentation of the mucosal layers and the suturing needle in real RARP videos by exploiting different architectures and training strategies. To train the models, we introduce a novel, annotated dataset collected from four VUA procedures. Experimental results show that the nnU-Net 2D model achieved the highest class-specific metrics, with a Dice Score of 0.663 for the mucosa class and 0.866 for the needle class, outperforming both transformer-based and baseline convolutional approaches on external validation video sequences. This work paves the way for computer-assisted tools that can objectively evaluate surgical performance during the critical phase of suturing tasks. Full article
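
For context, the Dice Score reported for the mucosa and needle classes is a per-class overlap measure that is simple to compute; the sketch below uses random masks as placeholders for model predictions and annotations.

```python
# Sketch: per-class Dice score, the metric reported for the mucosa and needle classes.
# Masks here are random placeholders; real masks come from the model and annotations.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, class_id: int) -> float:
    p = (pred == class_id)
    t = (truth == class_id)
    denom = p.sum() + t.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, t).sum() / denom

rng = np.random.default_rng(0)
pred  = rng.integers(0, 3, size=(512, 512))   # 0 = background, 1 = mucosa, 2 = needle
truth = rng.integers(0, 3, size=(512, 512))
print("mucosa Dice:", round(dice_score(pred, truth, 1), 3))
print("needle Dice:", round(dice_score(pred, truth, 2), 3))
```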
