Search Results (2,577)

Search Parameters:
Keywords = training volume

22 pages, 9667 KB  
Article
A Transfer Learning System for Skin Disease Classification Using EfficientNet-B5 with Grad-CAM Explainability
by Daniel Turuta, Raul Robu and Ioan Filip
Appl. Sci. 2026, 16(6), 3083; https://doi.org/10.3390/app16063083 - 23 Mar 2026
Abstract
Accurate medical diagnostics for skin affections such as skin cancer, psoriasis, vascular tumors, or exanthems have become increasingly difficult due to the growing volume and visual variability of dermatological cases, as well as limited specialist availability. To address this, the present work introduces a complete and deployable deep-learning-based system capable of detecting ten distinct skin disease categories, trained using transfer learning with EfficientNet-B5 and enhanced with explainable AI through Grad-CAM. The proposed system achieves a top-3 accuracy of 95.96%, a weighted F1-score of 0.87, and class-specific F1-scores reaching 0.96 for acne and 0.95 for nail fungus. These results demonstrate strong predictive performance for the deep learning model trained, validated, and evaluated on a ten-class subset of the Dermnet dataset. The research conducted covers the visual explainability of the AI model classification process, including integration into a fully functional web application, usable as an expert system for image uploading, data processing and visualization of results. The AI visualizing technology based on Grad-CAM provides clear, class-specific heatmaps that highlight the most influential regions in each prediction, improving transparency and supporting clinical interpretability. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
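The Grad-CAM step this abstract highlights has a compact core: each feature-map channel is weighted by the spatial average of the class-score gradient, the weighted maps are summed, and a ReLU keeps only positive class evidence. A minimal pure-Python sketch with toy inputs (not the paper's EfficientNet-B5 pipeline, whose maps and gradients would come from the last convolutional block):

```python
def grad_cam(feature_maps, gradients):
    """Toy Grad-CAM: feature_maps and gradients are lists of 2-D lists,
    one per channel, with matching shapes."""
    cam = None
    for A, dA in zip(feature_maps, gradients):
        # alpha_k: global-average-pooled gradient for channel k
        alpha = sum(sum(row) for row in dA) / (len(dA) * len(dA[0]))
        weighted = [[alpha * v for v in row] for row in A]
        if cam is None:
            cam = weighted
        else:
            cam = [[c + w for c, w in zip(cr, wr)]
                   for cr, wr in zip(cam, weighted)]
    # ReLU: keep only regions that positively support the predicted class
    return [[max(0.0, v) for v in row] for row in cam]
```

Upsampling the resulting map to the input resolution and overlaying it as a heatmap gives the class-specific visualizations the paper describes.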

16 pages, 1714 KB  
Systematic Review
Strategies to Address Difficult Venous Access in Blood Sampling: A Comprehensive Meta-Analysis
by Baudolino Mussa, Gloria Passarella, Mara Marchese and Barbara Defrancisco
Medicina 2026, 62(3), 604; https://doi.org/10.3390/medicina62030604 - 23 Mar 2026
Abstract
Background and Objectives: Difficult venous access (DVA) affects 10–26% of hospitalized patients and up to 60% in high-risk populations, leading to increased patient discomfort, delayed diagnosis, and substantial healthcare costs estimated at $4.7 billion annually in the United States. This meta-analysis aimed to systematically evaluate the effectiveness, safety, and implementation considerations of traditional and emerging strategies for obtaining blood samples in patients with DVA. Materials and Methods: We conducted a comprehensive systematic review and meta-analysis following PRISMA guidelines. We searched MEDLINE, Embase, CINAHL, and Cochrane databases from January 2016 to December 2023. Inclusion criteria encompassed randomized controlled trials, systematic reviews, and observational studies examining DVA interventions in adult and pediatric populations. Primary outcomes included first-attempt success rates, overall success rates, and complication rates. Statistical analysis used random-effects models with risk ratios and 95% confidence intervals. Results: Forty-seven studies involving 12,847 patients met the inclusion criteria. Technology-assisted approaches demonstrated superior outcomes compared to traditional techniques. Ultrasound guidance showed the highest effectiveness with a first-attempt success increase of 42% (RR 1.42, 95% CI 1.26–1.58, p < 0.001), followed by near-infrared visualization with a 28% increase (RR 1.28, 95% CI 1.14–1.42, p < 0.001). Population-specific approaches yielded significant benefits, including the use of scalp veins for infants and external jugular approaches for extreme DVA cases. Cost-effectiveness analysis revealed that ultrasound guidance achieved break-even within 8–14 months in high-volume centers. Conclusions: A systematic, stepwise approach integrating appropriate technology and techniques significantly improves success rates while reducing patient discomfort and healthcare costs. 
Healthcare institutions should implement comprehensive DVA protocols with adequate training, equipment access, and quality monitoring. The proposed algorithm achieved a 93% overall success rate in validation studies, representing a substantial improvement over traditional approaches. Full article
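The random-effects pooling named in the Methods (risk ratios with 95% CIs) is commonly the DerSimonian–Laird estimator; the review does not state its software, so this is a hedged self-contained sketch assuming study-level log risk ratios and their variances are already in hand:

```python
import math

def dersimonian_laird(log_rr, var):
    """Pool study-level log risk ratios with a DerSimonian-Laird
    random-effects model; returns (pooled RR, 95% CI low, 95% CI high)."""
    w = [1.0 / v for v in var]                      # fixed-effect weights
    fe = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
    q = sum(wi * (y - fe) ** 2 for wi, y in zip(w, log_rr))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)    # between-study variance
    w_re = [1.0 / (v + tau2) for v in var]          # random-effects weights
    est = sum(wi * y for wi, y in zip(w_re, log_rr)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return (math.exp(est), math.exp(est - 1.96 * se), math.exp(est + 1.96 * se))
```

With homogeneous studies tau² collapses to zero and the estimate reduces to the fixed-effect pooled RR, which is a useful sanity check.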

14 pages, 3023 KB  
Article
Lightweight Stereo Vision for Obstacle Detection and Range Estimation in Micro-Mobility Vehicles
by Jiansheng Ruan, Hui Weng, Zhaojun Yuan, Guangyuan Jin and Liang Zhou
Sensors 2026, 26(6), 1988; https://doi.org/10.3390/s26061988 - 23 Mar 2026
Abstract
Micro-mobility vehicles operating in closed, low-speed environments (e.g., parks) require reliable obstacle detection and accurate range estimation under strict constraints on cost, power, and onboard computation. This paper proposes HAGVNet, a lightweight stereo matching network for embedded ranging and validates its practical deployability in a target-level ranging pipeline with YOLO11n as the front-end detector. HAGVNet builds a hierarchical attention-guided cost volume (HAGV) that uses coarse-scale geometric priors to modulate fine-scale cost modeling and adopts ConvNeXtV2-style 2D cost aggregation blocks to improve stability and boundary consistency with controlled complexity. For ranging, depth statistics within detected regions are used to estimate target distance and 3D position. The model is pre-trained on SceneFlow and evaluated on KITTI. On SceneFlow, HAGVNet reaches 0.73 px EPE with 20.08 G FLOPs, indicating a favorable accuracy–complexity trade-off under low computation budgets. On an embedded Jetson Orin Nano Super platform, HAGVNet achieves 46.3 FPS under TensorRT FP16, and field tests indicate relative ranging errors of 0.5–8.6% within 2–10 m, demonstrating its practical feasibility for low-speed target-level ranging. Full article
(This article belongs to the Section Sensing and Imaging)
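The ranging step rests on standard stereo geometry, Z = f·B/d for focal length f (pixels), baseline B (meters), and disparity d (pixels). A sketch of target-level ranging from "depth statistics within detected regions", using the median disparity inside a detected box as a hypothetical robust statistic (the paper's exact statistic is not specified):

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Pinhole stereo geometry: depth Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def target_distance(disparities, focal_px, baseline_m):
    """Range a detected target from the disparities inside its bounding box,
    using the median as an outlier-resistant summary (assumed, not from the paper)."""
    d = sorted(disparities)
    n = len(d)
    mid = d[n // 2] if n % 2 else 0.5 * (d[n // 2 - 1] + d[n // 2])
    return disparity_to_depth(mid, focal_px, baseline_m)
```

Note the inverse relationship: ranging error grows roughly quadratically with distance for a fixed disparity error, consistent with the larger relative errors the paper reports toward 10 m.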

22 pages, 323 KB  
Perspective
Carnivore and Animal-Based Diets in Sport: A Critical Evaluation of Current Evidence and Future Perspectives for Precision Nutrition
by Zbigniew Waśkiewicz
Nutrients 2026, 18(6), 998; https://doi.org/10.3390/nu18060998 - 21 Mar 2026
Abstract
The increasing popularity of carnivore and animal-based diets among athletes has generated substantial interest, despite limited direct scientific evidence supporting their efficacy and safety in sport-specific contexts. This narrative review critically evaluates the current evidence and examines the physiological, performance, and health-related implications of these dietary models in athletic populations. These dietary models, characterized by the partial or complete exclusion of plant-derived foods, are often promoted on the basis of mechanistic arguments, anecdotal reports, and extrapolations from research on ketogenic and very low-carbohydrate diets. However, their physiological relevance, long-term health implications, and compatibility with the demands of athletic training remain poorly defined. This narrative review provides a critical perspective on the current evidence related to carnivore and animal-based diets in sport, integrating findings from studies on low-carbohydrate, ketogenic, high-protein, and elimination-based dietary patterns. The analysis focuses on metabolic adaptations, body composition, exercise performance, gastrointestinal function, micronutrient adequacy, hormonal responses, and potential long-term health risks. Particular attention is given to the distinction between metabolic adaptations and functional performance outcomes, as well as to the high interindividual variability in dietary responses. The available evidence suggests that while carbohydrate restriction may induce specific metabolic adaptations, such as increased fat oxidation, these changes do not consistently translate into improved performance, particularly in high-intensity or high-volume training contexts. 
Moreover, the highly restrictive nature of carnivore and animal-based diets raises concerns about micronutrient deficiencies, alterations in the gut microbiota, changes in the lipid profile, and potential effects on eating behaviours, particularly in competitive athletic populations. Given the absence of well-controlled, long-term intervention studies in athletes, carnivore and animal-based diets cannot currently be recommended as safe or optimal nutritional strategies for sports performance. Rather than representing viable alternatives to established sports nutrition guidelines, these dietary models may be better understood as experimental or short-term tools within highly controlled research or diagnostic frameworks. Future research should prioritize rigorous, sport-specific study designs, long-term safety outcomes, and personalized approaches that account for individual metabolic and physiological variability. Full article
(This article belongs to the Section Sports Nutrition)
27 pages, 22784 KB  
Article
Terrain-Aware Self-Supervised Representation Learning for Tree Species Mapping in Mountainous Regions Under Limited Field Samples
by Li He, Leiguang Wang, Liang Hong, Qinling Dai, Wei Gu, Xingyue Du, Mingqi Yang, Juanjuan Liu and Yaoming Feng
Remote Sens. 2026, 18(6), 951; https://doi.org/10.3390/rs18060951 - 21 Mar 2026
Abstract
Accurate tree species mapping is critical for forest inventory, biodiversity assessment, and ecosystem management. In mountainous regions, terrain-induced radiometric non-stationarity and limited field access often produce scarce, clustered, and environmentally biased samples, limiting model generalization. To address this issue, this study proposes a terrain-aware self-supervised representation learning framework for tree species classification under small-sample conditions. The framework integrates terrain information into representation learning and adopts a hybrid contrastive–generative self-supervised strategy to learn discriminative and terrain-robust features from large volumes of unlabeled multi-source remote sensing data. These learned representations are subsequently combined with limited field samples to produce regional-scale tree species maps. Experiments conducted across Yunnan Province, China, using Sentinel-1, Sentinel-2 and Landsat time-series data show that the proposed framework substantially improves class separability and classification robustness in complex mountainous environments. The framework achieves an overall accuracy of 75.8%, significantly outperforming conventional feature engineering (38.3–40.6%) and supervised deep learning models (37.3–47.8%). Species with relatively homogeneous structure and strong ecological niche dependence can be accurately mapped with limited training samples, whereas structurally complex forest communities require broader environmental sample coverage. Overall, the results highlight the potential of terrain-aware self-supervised representation learning as a scalable and data-efficient paradigm for forest mapping in mountainous and environmentally heterogeneous regions. Full article

25 pages, 6493 KB  
Article
A Dynamic Prompt-Based Logic-Aided Compliance Checker
by Wenxi Sheng, Chi Wei, Yinuo Zhang, Bowen Zhang and Jingyun Sun
Big Data Cogn. Comput. 2026, 10(3), 95; https://doi.org/10.3390/bdcc10030095 - 21 Mar 2026
Abstract
Text-based automatic compliance checking (ACC) employs natural language processing technologies to scrutinize a corporation’s business documents, ensuring adherence to related normative texts. The current methods fall into two primary categories: symbol-based and embedding-based approaches. Symbol-based methods, noted for their accuracy and transparent processing, suffer from limited versatility. Conversely, embedding-based methods operate independently of expert knowledge yet often yield challenging-to-interpret results and require substantial volumes of annotated data. While both types of methods exhibit advantages in different aspects, the current research fails to combine these advantages effectively. Therefore, the existing methods fail to balance interpretability, generalization ability, and accuracy, which are key requirements for practical compliance systems. To address this problem, we introduce a novel approach termed the Dynamic Prompt-based Logic-Aided Compliance Checker (DPLACC), which is grounded in the prompt learning framework. This method initially parses target texts, transforming the results into first-order logical expressions. It subsequently retrieves pertinent knowledge from a knowledge graph, converting the knowledge into analogous first-order logical expressions. These expressions are then encoded into a global semantic vector via a pre-trained first-order logic encoder. Ultimately, the semantics of expressions and initial texts are amalgamated within the prompt template, facilitating the logical knowledge enhancement of model reasoning. Experiments on Chinese and English datasets demonstrate that DPLACC comprehensively outperforms existing methods based solely on symbols or embeddings in terms of accuracy, precision, recall, and F1 score and significantly surpasses current mainstream large language models. 
Furthermore, DPLACC exhibits enhanced interpretability and reduced data dependence, maintaining 70% checking accuracy with as few as ten training samples. This capability allows DPLACC to be rapidly deployed in data-scarce real-world scenarios with minimal annotation overhead, thus offering a practical pathway toward the scalable implementation of compliance inspection systems. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) and Natural Language Processing (NLP))

15 pages, 1115 KB  
Article
Alzheimer’s Disease Classification Using Population-Referenced Brain Volumetric Percentiles
by Jae Hyuk Shim and Hyeon-Man Baek
Brain Sci. 2026, 16(3), 334; https://doi.org/10.3390/brainsci16030334 - 20 Mar 2026
Abstract
Background/Objectives: Translating brain volumetric biomarkers to individual-level Alzheimer’s disease (AD) diagnosis remains challenging due to difficulty interpreting raw volumes without longitudinal monitoring or matched controls. We tested a classification model using population-referenced volumetric percentiles to distinguish AD from cognitively normal (CN) subjects and evaluated its generalization across independent cohorts. Methods: Brain volumes from 95 regions were extracted using an automated segmentation pipeline and converted to age and sex adjusted percentiles using a reference population (N = 1833). A logistic regression classifier was trained on ADNI subjects (N = 873; AD = 183, CN = 690) split into training (60%), validation (20%), and test (20%) sets. The model was evaluated on two independent validation datasets: the held-out ADNI validation set and an external Korean cohort (N = 72; AD = 36, CN = 36) acquired with different scanner protocols and demographic characteristics. Results: The model achieved excellent discrimination across all evaluation sets: ADNI validation (AUC = 0.963, accuracy = 90.3%), ADNI test (AUC = 0.960, accuracy = 89.7%), and Korean external validation (AUC = 0.981, accuracy = 87.5%). The minimal validation gap (0.018) demonstrated robust generalization. Positive coefficients for ventricular regions reflected AD-associated atrophy patterns, while negative coefficients for medial temporal structures indicated their contribution within multivariate patterns distinguishing AD from normal aging. Conclusions: Population-referenced brain volumetric percentiles enable accurate AD classification with robust generalization across populations and scanner protocols. By contextualizing individual brain structure relative to normative populations while accounting for age and sex, this approach demonstrates potential for clinical translation as an accessible neuroimaging-based diagnostic tool. Full article
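The percentile-referencing idea is straightforward to sketch: each regional volume is ranked against a matched reference sample, and the resulting percentiles feed a logistic-regression classifier. A minimal illustration with hypothetical coefficients; the study's exact age/sex adjustment of the reference distribution is not reproduced here:

```python
import math

def percentile_rank(value, reference):
    """Percentile of a regional volume within a matched reference sample
    (midpoint convention for ties)."""
    below = sum(1 for r in reference if r < value)
    ties = sum(1 for r in reference if r == value)
    return 100.0 * (below + 0.5 * ties) / len(reference)

def logistic_score(percentiles, coefs, intercept):
    """Logistic-regression probability of AD from regional percentiles
    (coefficients here are placeholders, not the fitted model)."""
    z = intercept + sum(c * p for c, p in zip(coefs, percentiles))
    return 1.0 / (1.0 + math.exp(-z))
```

Because percentiles are already normalized to the reference population, a model trained this way can transfer across scanners more readily than one trained on raw volumes, which is the generalization argument the abstract makes.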

14 pages, 18688 KB  
Article
Outdoor Motion Capture at Scale
by Michael Zwölfer, Martin Mössner, Helge Rhodin and Werner Nachbauer
Sensors 2026, 26(6), 1951; https://doi.org/10.3390/s26061951 - 20 Mar 2026
Abstract
Capturing kinematic data in outdoor sports is challenging, as motions span large capture volumes and occur under difficult environmental conditions. Video-based approaches, particularly with pan–tilt–zoom cameras, offer a practical solution, but the extensive manual post-processing required limits their use to short sequences and few athletes. This study presents a motion capture pipeline that automates the detection of both reference points and sport-specific keypoints to overcome this limitation. The field test employed eight cameras covering a 250×80×30 m capture volume with nearly 300 reference points. Ten state-certified ski instructors performed eight standardized maneuvers. Reference points were localized through a hybrid approach combining YOLO object detection and ArUco marker identification. AlphaPose was fine-tuned on a new manually annotated dataset to detect skier-specific keypoints (e.g., skis, poles) alongside anatomical landmarks. Continuous frame-wise calibration and 3D reconstruction were performed using Direct Linear Transformation. Evaluation compared automated detections with manual annotations. Automated reference point detection achieved a mean localization error of 4.1 pixels (0.1% of 4K width) and reduced 3D segment-length variation by 23%. The skier-specific keypoint model reached 98% PCK, mAP of 0.97, and an MPJPE of 10.3 pixels while lowering 3D segment-length variation by 0.5 cm compared to manual digitization and 0.6 cm relative to a pretrained model. Replacing manual digitization with automated detection improves accuracy and facilitates kinematic data collection in large outdoor fields with many athletes and trials. The approach also enables the creation of sport-specific datasets valuable for biomechanical research and training next-generation 3D pose estimation models. Full article
(This article belongs to the Special Issue Advanced Sensors in Biomechanics and Rehabilitation—2nd Edition)

56 pages, 4081 KB  
Article
A Systematic Ablation Study of GAN-Based Minority Augmentation for Intrusion Detection on UWF-ZeekData22
by Asfaw Debelie, Sikha S. Bagui, Subhash C. Bagui and Dustin Mink
Electronics 2026, 15(6), 1291; https://doi.org/10.3390/electronics15061291 - 19 Mar 2026
Abstract
Generative adversarial networks (GANs) are increasingly applied to mitigate extreme class imbalance in intrusion detection systems, yet reported improvements often obscure the role of augmentation intensity and adversarial stability. This paper presents a controlled ablation study that isolates the impact of adversarial objective choice, augmentation ratio, and training duration on GAN-based minority data augmentation for highly imbalanced tabular cybersecurity data. Using the UWF-ZeekData22 dataset, nine MITRE ATT&CK tactic-versus-benign classification tasks are evaluated under augmentation ratios of 0.25 and 0.50 and training durations of 400 and 800 epochs. Four GAN variants—Vanilla GAN, Conditional GAN (cGAN), WGAN, and WGAN-GP—are assessed using stratified cross-validation and five classical classifiers representing diverse inductive biases. The results reveal consistent structural patterns. Moderate augmentation (r = 0.25) with controlled training (400 epochs) yields the most stable and reliable improvement in minority recall. Wasserstein-based objectives demonstrate superior stability under aggressive augmentation and prolonged training, while conditional GANs frequently exhibit recall collapse in ultra-sparse regimes. Increasing augmentation volume does not uniformly improve performance and may introduce distributional overlaps that degrade linear and margin-based classifiers. Tree-based classifiers remain largely invariant once sufficient minority density is achieved. These findings demonstrate that adversarial calibration is more important than architectural complexity for improving the detection of rare attacks. The study provides practical guidance for designing robust GAN-based augmentation pipelines under extreme cybersecurity class imbalance. Full article
(This article belongs to the Special Issue Intelligent Solutions for Network and Cyber Security)

11 pages, 2109 KB  
Article
In-Depth Cost Analysis on the Purification of Bioethanol by Extractive Distillation
by Héctor Hernández-Escoto, Oscar Daniel Lara-Montaño, Fabricio Omar Barroso-Muñoz, Salvador Hernández and María Dolores López-Ramírez
Processes 2026, 14(6), 975; https://doi.org/10.3390/pr14060975 - 18 Mar 2026
Abstract
This work performed a sensitivity analysis based on a conventional extractive distillation system to thoroughly evaluate the cost of separating bioethanol from water. The analysis considers the compositions and production volumes that are likely to result from the fermentation process of various biorefineries, regardless of their specific generation. It also outlines how the cost of bioethanol purification decreases as the ethanol concentration in the fermentation broth increases. For each composition-flow point in a gridded workspace, a distillation train was designed using the Aspen Plus® simulation framework, focusing on minimizing the total annual cost. The results are discussed graphically, illustrating total annual costs and specific column costs in relation to feed stream composition and inflow. The findings quantitatively demonstrate that the cost of separation per mass unit of anhydrous ethanol decreases with higher inflow and increased input ethanol concentration. Additionally, it is evident that the primary cost is associated with the preconcentrator column. Full article
(This article belongs to the Section Biological Processes and Systems)

50 pages, 2911 KB  
Article
From LQ to AI-BED-Fx: A Unified Multi-Fraction Radiobiological and Machine-Learning Framework for Gamma Knife Radiosurgery Across Intracranial Pathologies
by Răzvan Buga, Călin Gheorghe Buzea, Valentin Nedeff, Florin Nedeff, Diana Mirilă, Maricel Agop, Letiția Doina Duceac and Lucian Eva
Cancers 2026, 18(6), 985; https://doi.org/10.3390/cancers18060985 - 18 Mar 2026
Abstract
Background: Gamma Knife radiosurgery (GKS) delivers highly conformal intracranial irradiation, yet clinical decision-making still relies predominantly on physical dose metrics that do not account for fractionation, dose rate, treatment time, or DNA repair. Classical radiobiological models—including the linear–quadratic (LQ) formula and the Jones–Hopewell single-session repair model—do not extend naturally to 3- and 5-fraction GKS. Meanwhile, growing evidence suggests that biologically effective dose (BED) may better capture radiosurgical response in selected pathologies. A unified, biologically grounded, multi-fraction GKS framework has been lacking. Methods: We developed AI-BED-Fx, the first multi-fraction extension of the Jones–Hopewell radiobiological model capable of computing fraction-resolved BED for 1-, 3-, and 5-fraction GKS. The framework incorporates α/β ratio, dual-component repair kinetics, isocentre geometry, beam-on–time structure, and lesion-specific biological parameters. Four synthetic pathology-specific cohorts—arteriovenous malformation (AVM), meningioma (MEN), vestibular schwannoma (VS), and brain metastasis (BM)—were generated using distinct radiobiological signatures. Machine-learning models were trained to quantify the predictive value of physical dose versus BED for local control or obliteration. Additional experiments included Bayesian estimation of α/β and a neural-network surrogate for fast BED prediction. An exploratory comparison with a 60-lesion clinical brain–metastasis dataset was performed to assess whether key trends observed in the synthetic BM cohort were consistent with real radiosurgical outcomes. Results: AI-BED-Fx produced realistic pathology-specific BED distributions (AVM 60–210 Gy2.47; MEN 41–85 Gy3.5; VS 46–68 Gy3; BM 37–75 Gy10) and biologically coherent dose–response relationships. Predictive modeling demonstrated strong pathology dependence. 
In AVM, the three models achieved AUCs of 0.921 (Model A), 0.922 (Model B), and 0.924 (Model C), with corresponding Brier scores of 0.054, 0.051, and 0.051, with BED-based models performing best. In meningioma, BED was the dominant predictor, with AUCs of 0.642 (Model A), 0.660 (Model B), and 0.661 (Model C) and Brier scores of 0.181, 0.177, and 0.179, respectively. In vestibular schwannoma, the narrow BED range resulted in minimal BED contribution, with AUCs of 0.812, 0.827, and 0.830 and Brier scores of 0.165, 0.160, and 0.162, with physical dose and tumor volume determining performance. In brain metastases, outcomes were driven primarily by volume and physical dose, with AUCs of 0.614, 0.630, and 0.629 and Brier scores of 0.254, 0.250, and 0.253, showing negligible improvement from BED. AI-BED-Fx also accurately recovered the true α/β from synthetic outcomes (posterior mean 2.54 vs. true 2.47), and a neural-network surrogate reproduced full radiobiological BED calculations with near-perfect fidelity (R2 = 0.9991). Conclusions: AI-BED-Fx provides the first unified, biologically explicit framework for modeling single- and multi-fraction Gamma Knife radiosurgery. The findings show that the predictive usefulness of BED is pathology-specific rather than universal, and that radiobiological dose provides additional predictive value only when repair kinetics and dose–response biology support it. By integrating mechanistic radiobiology with machine learning, AI-BED-Fx establishes the conceptual and computational foundations for biologically adaptive, AI-guided radiosurgery, and cross-pathology comparison of treatment response. This work uses large radiobiologically grounded synthetic cohorts for methodological validation; limited real-patient data are included only for exploratory consistency checks, and full clinical validation is planned. Full article
(This article belongs to the Special Issue Novel Insights into Glioblastoma and Brain Metastases (2nd Edition))
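For orientation, the classical linear–quadratic biologically effective dose that AI-BED-Fx extends is BED = n·d·(1 + d/(α/β)); the paper's model additionally folds in dual-component repair kinetics, dose rate, isocentre geometry, and beam-on-time structure, all of which this sketch deliberately omits:

```python
def lq_bed(n_fractions, dose_per_fraction, alpha_beta):
    """Classical LQ biologically effective dose, BED = n*d*(1 + d/(alpha/beta)),
    in Gy_{alpha/beta}. Repair-kinetics and treatment-time corrections used by
    AI-BED-Fx are not modeled here."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)
```

The subscripted units in the abstract (e.g., Gy2.47, Gy10) record the α/β assumed in exactly this kind of calculation, which is why BED values are only comparable within one pathology's α/β.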

11 pages, 930 KB  
Article
Quantitative Comparative Analysis of Annual Training Volume and Intensity Distribution of Male Biathlon National Team and University Athletes Using Global Positioning Systems and Wearable Devices
by Guanmin Zhang, Qiuju Hu, Yonghwan Kim and Yongchul Choi
Sensors 2026, 26(6), 1910; https://doi.org/10.3390/s26061910 - 18 Mar 2026
Abstract
Background: Wearable sensors and global positioning systems (GPS) can enable objective monitoring of training loads in outdoor endurance sports. In biathlon, comparing training characteristics across developmental stages can help identify structural gaps and support evidence-informed progression within long-term athlete development (LTAD). This study aimed to quantitatively compare the annual training characteristics of Korean male biathlon national team (NT) and university (UNV) athletes. Methods: Annual physical training data (2022–2024) from NT (n = 6) and UNV (n = 6) athletes were collected using Catapult Vector S7 GPS devices and Polar H10 heart rate monitors. Training volume, intensity distribution (zones 1–3 based on %HRmax), modality (skiing vs. running), and periodization were compared using Mann–Whitney U tests with rank-biserial correlation (r_rb). Results: NT athletes accumulated a higher annual training time and distance than UNV athletes (812 vs. 606 h; 6359 vs. 4130 km; p = 0.002, r_rb = 1.000 for both). The NT athletes spent a lower proportion of time on low-intensity training and a higher proportion on mid and high intensities than UNV athletes (p ≤ 0.015). During high-intensity training, NT athletes maintained a higher proportion of ski-specific training, whereas UNV athletes relied more on running (skiing: 78.5% vs. 46.4%; running: 21.5% vs. 53.6%; both p < 0.001, r_rb = 1.000). The UNV group also showed a more concentrated training structure during competition periods than NT athletes (COMP: 28.3% vs. 14.6%; p < 0.05). The absolute annual strength training time did not differ, but UNV athletes showed a higher strength ratio (23.3% vs. 16.8%; p < 0.001, r_rb = 1.000). Conclusion: UNV athletes exhibited a lower total volume, a more low-intensity-skewed distribution, and reduced ski-specific exposure during high-intensity training compared with NT athletes. 
These observed structural gaps can provide empirical benchmarks that may help coaches plan stage-appropriate progression, and they illustrate the practical value of GPS- and wearable-based monitoring for identifying training divergences across developmental stages. Full article
(This article belongs to the Section Wearables)
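The rank-biserial correlation reported alongside the Mann–Whitney U tests above follows directly from the U statistic. A minimal sketch, with hypothetical annual-hour values (r_rb = 1.0 corresponds to the complete group separation reported, e.g., for annual training time and distance):

```python
import numpy as np

def rank_biserial(x, y):
    """Rank-biserial correlation from the Mann-Whitney U statistic:
    r_rb = 2 * U1 / (n1 * n2) - 1, where U1 counts pairs with x > y
    (ties counted as 0.5). |r_rb| = 1 means complete separation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    greater = (x[:, None] > y[None, :]).sum()
    ties = (x[:, None] == y[None, :]).sum()
    u1 = greater + 0.5 * ties
    return 2.0 * u1 / (len(x) * len(y)) - 1.0

# With n = 6 per group (as in the NT vs. UNV comparison), complete
# separation of annual training hours yields r_rb = 1.0.
nt  = [850, 812, 790, 830, 805, 795]   # hypothetical hours
unv = [640, 606, 590, 620, 610, 600]   # hypothetical hours
print(rank_biserial(nt, unv))  # 1.0
```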

13 pages, 1640 KB  
Article
An AI-Driven Clinical Decision Support Model Based on Anemia and Fibroid Parameters to Guide Surgical Decision-Making
by İnci Öz, Ecem Esma Yegin, Ali Utku Öz and Engin Ulukaya
Medicina 2026, 62(3), 555; https://doi.org/10.3390/medicina62030555 - 17 Mar 2026
Abstract
Background and Objectives: This study aimed to identify the clinical factors associated with the need for surgical intervention in women with uterine fibroids (UFs) and develop a data-driven clinical decision helper algorithm. By comparing hematologic and fibroid characteristics and prospectively assessing clinical concordance with the model predictions, we sought to create an objective tool for surgical decision-making. Materials and Methods: This retrospective study enrolled 618 women with UFs who were evaluated at three participating hospitals. Of these, 238 (38.5%) underwent surgery. Comparative statistical analyses were conducted between patients who underwent myomectomy and those who did not. Machine learning (ML) models were trained to predict myomectomy necessity. A clinical concordance assessment was conducted using 50 cases that were evaluated in real time by a gynecologist blinded to both the clinical outcomes and the model outputs. Agreement between clinical assessment and algorithm-based predictions was subsequently evaluated. Results: Hemoglobin and ferritin concentrations were significantly reduced in the surgery group compared with the non-surgery group (p < 0.001). ML analyses integrating fibroid characteristics with anemia-related markers identified support vector ML models as the most accurate classifiers. Ferritin-based models achieved accuracies of 98–99% and near-perfect ROC–AUC values. ML models combining UF number or volume with ferritin demonstrated the highest precision, sensitivity, and F1-scores. Clinical concordance analysis showed 98% agreement with the blinded gynecologist, with only one borderline discordant case. Conclusions: This decision helper algorithm provides a highly accurate and objective tool for predicting surgical necessity in patients with UFs. Anemia status and fibroid characteristics were the strongest predictors. 
By reducing subjective variability and closely reflecting expert reasoning, the model offers a practical framework for integration into routine gynecologic decision-making. Full article
(This article belongs to the Special Issue Gynecological Surgery: Bridging Research and Clinical Practice)
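The precision, sensitivity, and F1-scores cited for the ML models, and the clinical concordance rate, follow from simple counts; a minimal sketch with hypothetical labels (not study data):

```python
def classification_metrics(y_true, y_pred):
    """Precision, sensitivity (recall), and F1 for binary
    surgery (1) / no-surgery (0) predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return precision, sensitivity, f1

# Hypothetical labels; in the study, the model agreed with the blinded
# gynecologist in 49 of 50 cases.
print(classification_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
print(49 / 50)  # 0.98 concordance
```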

22 pages, 4393 KB  
Article
An Adaptive Attention 3D U-Net for High-Fidelity MRI-to-CT Synthesis: Bridging the Anatomical Gap with CBAM
by Chaima Bensebihi, Nacer Eddine Benzebouchi, Nawel Zemmal, Abdallah Namoun, Aida Chefrour and Siham Amrouch
Diagnostics 2026, 16(6), 875; https://doi.org/10.3390/diagnostics16060875 - 16 Mar 2026
Abstract
Background: The generation of synthetic CT images from MRI scans represents a crucial step toward enabling MRI-only clinical workflows and supporting multi-modal integration in medical imaging, particularly in radiotherapy planning. Despite significant advancements in deep learning models, many current methods still struggle to reconstruct high-density structures, especially bone, and exhibit limited accuracy in density values. This shortcoming is largely attributed to the passage of excessive or noisy features through skip connections in the traditional U-Net architecture, which degrade the quality of information transmitted to the decoder, negatively impacting the clarity of anatomical boundaries and the pixel-wise accuracy of the resulting synthetic image. Methods: In this work, we propose an enhanced 3D U-Net architecture in which the Convolutional Block Attention Module (CBAM) is systematically integrated within each skip connection. The CBAM sequentially applies channel and spatial attention to adaptively reweight encoder feature maps before fusion with the decoder, thereby emphasizing anatomically relevant structures while suppressing irrelevant feature propagation. The model was trained and evaluated on the SynthRAD2023 (Task 1—Brain) MRI–CT dataset. To rigorously assess the contribution of the attention mechanism, a dedicated ablation study was conducted comparing three variants: 3D U-Net with Squeeze-and-Excitation (SE), Coordinate Attention (CA), and the proposed CBAM module. Performance was evaluated using Mean Absolute Error (MAE), Root Mean Square Error (RMSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Normalized Cross-Correlation (NCC). Results: The ablation study demonstrated that the CBAM-enhanced model consistently outperformed both SE- and CA-based variants across all quantitative metrics. 
Specifically, the proposed method achieved an MAE of 38.2 ± 5.4 HU and an RMSE of 51.0 ± 12.0 HU, the lowest reconstruction errors among the evaluated models. In addition, it obtained a PSNR of 29.45 ± 2.10 dB, an SSIM of 0.940 ± 0.031, and an NCC of 0.967 ± 0.015, indicating superior structural preservation and strong voxel-wise correspondence between synthesized and reference CT volumes. These results confirm that the sequential integration of channel and spatial attention provides a statistically and practically meaningful improvement for high-fidelity MRI-to-CT synthesis. Conclusions: Generating high-resolution brain CT images from brain MRI scans using a 3D U-Net enhanced with a CBAM module can support the clinical workflow by providing additional diagnostic data without extra radiological examinations, enhancing diagnostic efficiency, reducing patient radiation exposure, and improving accessibility in resource-limited settings. Furthermore, this method is valuable for retrospective studies, surgical planning, and image-guided therapy, where complete multi-modal data may not always be available. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
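A CBAM block applies channel attention followed by spatial attention to a feature map before it crosses the skip connection. The following is a minimal NumPy sketch of that sequential reweighting, not the paper's module: the learned 3D convolution in the spatial branch is replaced by parameter-free pooling for brevity, and all weights are random placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbam(feat, w1, w2):
    """Sequential channel then spatial attention on a 3D feature map
    feat of shape (C, D, H, W). Channel branch: shared 2-layer MLP
    (w1, w2) over avg- and max-pooled descriptors; spatial branch:
    sigmoid over channel-wise avg + max (a learned conv in the paper)."""
    C = feat.shape[0]
    flat = feat.reshape(C, -1)
    avg_desc, max_desc = flat.mean(axis=1), flat.max(axis=1)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)       # shared MLP, ReLU hidden
    ch_att = sigmoid(mlp(avg_desc) + mlp(max_desc))    # (C,) channel weights
    feat = feat * ch_att[:, None, None, None]
    sp_att = sigmoid(feat.mean(axis=0) + feat.max(axis=0))  # (D, H, W)
    return feat * sp_att[None]

rng = np.random.default_rng(0)
C, r = 8, 2                                  # channels, reduction ratio
f = rng.standard_normal((C, 4, 4, 4))
w1 = rng.standard_normal((C // r, C)) * 0.1  # placeholder MLP weights
w2 = rng.standard_normal((C, C // r)) * 0.1
out = cbam(f, w1, w2)
print(out.shape)  # (8, 4, 4, 4) -- shape preserved for skip-connection fusion
```

Because both attention maps lie in (0, 1), the block can only attenuate encoder features, which is the mechanism claimed above for suppressing noisy skip-connection activations.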

17 pages, 1067 KB  
Article
Real-World Multimodal Machine Learning for Risk Enrichment Across the Alzheimer’s Disease Spectrum
by Nazlı Gamze Bülbül, İnci Meliha Baytaş, Efekan Kavalcı, Elvan Karasu, Başak Ceren Okcu Korkmaz, Buse Gül Belen, İsmail Serhat Musaoğlu, Ayşe Rana Övüt, Nefise Eda Arslanoğlu, Muammer Urhan, Hakan Mutlu and Mehmet Fatih Özdağ
J. Clin. Med. 2026, 15(6), 2250; https://doi.org/10.3390/jcm15062250 - 16 Mar 2026
Abstract
Background and Objectives: Mild cognitive impairment (MCI) is heterogeneous within the Alzheimer’s disease (AD) continuum, and categorical labels may not reflect biological variability. We evaluated whether multimodal machine learning using routine clinical data and neuroimaging could support biologically informed enrichment across MCI and AD in a real-world memory clinic cohort. Methods: We analyzed 474 patients (1547 visits) with clinical and cognitive measures, laboratory parameters, MRI regional volumes, and FDG-PET regional uptake. Elastic Net and gradient boosting models were trained using nested cross-validation with strict patient-level separation. Results: Model discrimination improved as additional data modalities were added, and FDG-PET contributed the largest performance improvement. Hypometabolism in posterior default mode network regions consistently emerged as the most influential predictor. In the MCI subgroup, AD-like scores showed a continuous distribution consistent with biological enrichment. Conclusions: Multimodal models may provide an interpretable enrichment framework in heterogeneous memory clinic populations. Full article
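Strict patient-level separation, as used in the nested cross-validation above, means every visit from a given patient lands in the same fold, so no patient contributes to both training and evaluation. A minimal sketch (patient IDs and fold count are illustrative):

```python
import numpy as np

def patient_level_folds(patient_ids, n_splits=5, seed=0):
    """Assign each patient (not each visit) to one CV fold, so that all
    visits from a patient stay on the same side of every split."""
    rng = np.random.default_rng(seed)
    patients = np.array(sorted(set(patient_ids)))
    rng.shuffle(patients)
    fold_of = {p: i % n_splits for i, p in enumerate(patients)}
    return np.array([fold_of[p] for p in patient_ids])

# The cohort has 474 patients across 1547 visits; a toy example with
# repeat visits per patient:
visits = ["p1", "p1", "p2", "p3", "p3", "p3", "p4"]
folds = patient_level_folds(visits, n_splits=2)
# All of p3's visits share a single fold:
assert len(set(folds[np.array(visits) == "p3"])) == 1
```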
