Search Results (162)

Search Parameters:
Keywords = absolute instability

18 pages, 963 KB  
Article
An Improved Dung Beetle Optimizer with Kernel Extreme Learning Machine for High-Accuracy Prediction of External Corrosion Rates in Buried Pipelines
by Yiqiong Gao, Zhengshan Luo, Bo Wang and Dengrui Mu
Symmetry 2026, 18(1), 167; https://doi.org/10.3390/sym18010167 - 16 Jan 2026
Abstract
Accurately predicting the external corrosion rate is crucial for the integrity management and risk assessment of buried pipelines. However, existing prediction models often suffer from limitations such as low accuracy, instability, and overfitting. To address these challenges, this study proposes a novel hybrid model, FA-IDBO-KELM. Firstly, Factor Analysis (FA) was employed to reduce the dimensionality of ten original corrosion-influencing factors, extracting seven principal components to mitigate multicollinearity. Subsequently, the hyperparameters (penalty coefficient C and kernel parameter γ) of the Kernel Extreme Learning Machine (KELM) were optimized using an Improved Dung Beetle Optimizer (IDBO). The IDBO included four key enhancements compared to the standard DBO: spatial pyramid mapping (SPM) for population initialization, a spiral search strategy, Lévy flight, and an adaptive t-distribution mutation strategy to prevent premature convergence. The model was validated using a dataset from the West–East Gas Pipeline, with 90% of the data used for training and 10% for testing. The results demonstrate the superior performance of FA-IDBO-KELM, which achieved a root mean square error (RMSE) of 0.0028, a mean absolute error (MAE) of 0.0021, and a coefficient of determination (R2) of 0.9954 on the test set. Compared to benchmark models (FA-KELM, FA-SSA-KELM, FA-DBO-KELM), the proposed model reduced the RMSE by 93.0%, 89.1%, and 85.3%, and improved the R2 by 85.7%, 10.6%, and 7.4%, respectively. The FA-IDBO-KELM model provides a highly accurate and reliable tool for predicting the external corrosion rate, which can significantly support pipeline maintenance decision-making. Full article
(This article belongs to the Section Engineering and Materials)
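The headline metrics in this abstract (RMSE, MAE, R2) have standard definitions worth keeping at hand when comparing the reported reductions. A minimal Python sketch, illustrative only and not the authors' FA-IDBO-KELM implementation:

```python
import math

def rmse(y_true, y_pred):
    # Root mean square error: square root of the mean squared residual
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    # Mean absolute error: mean of the absolute residuals
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

Note that percentage reductions in RMSE and percentage improvements in R2 are on different scales, since R2 is already bounded above by 1.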

27 pages, 1930 KB  
Article
SteadyEval: Robust LLM Exam Graders via Adversarial Training and Distillation
by Catalin Anghel, Marian Viorel Craciun, Adina Cocu, Andreea Alexandra Anghel and Adrian Istrate
Computers 2026, 15(1), 55; https://doi.org/10.3390/computers15010055 - 14 Jan 2026
Abstract
Large language models (LLMs) are increasingly used as rubric-guided graders for short-answer exams, but their decisions can be unstable across prompts and vulnerable to answer-side prompt injection. In this paper, we study SteadyEval, a guardrailed exam-grading pipeline in which an adversarially trained LoRA filter (SteadyEval-7B-deep) preprocesses student answers to remove answer-side prompt injection, after which the original Mistral-7B-Instruct rubric-guided grader assigns the final score. We build two exam-grading pipelines on top of Mistral-7B-Instruct: a baseline pipeline that scores student answers directly, and a guardrailed pipeline in which a LoRA-based filter (SteadyEval-7B-deep) first removes injection content from the answer and a downstream grader then assigns the final score. Using two rubric-guided short-answer datasets in machine learning and computer networking, we generate grouped families of clean answers and four classes of answer-side attacks, and we evaluate the impact of these attacks on score shifts, attack success rates, stability across prompt variants, and alignment with human graders. On the pooled dataset, answer-side attacks inflate grades in the unguarded baseline by an average of about +1.2 points on a 1–10 scale, and substantially increase score dispersion across prompt variants. The guardrailed pipeline largely removes this systematic grade inflation and reduces instability for many items, especially in the machine-learning exam, while keeping mean absolute error with respect to human reference scores in a similar range to the unguarded baseline on clean answers, with a conservative shift in networking that motivates per-course calibration. Chief-panel comparisons further show that the guardrailed pipeline tracks human grading more closely on machine-learning items, but tends to under-score networking answers. 
These findings are best interpreted as a proof-of-concept guardrail and require per-course validation and calibration before operational use. Full article
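The reported +1.2-point grade inflation and the attack success rates are simple aggregates over paired clean/attacked answers. A hedged sketch; the function names `mean_score_shift` and `attack_success_rate` are hypothetical illustrations, not from the SteadyEval code:

```python
def mean_score_shift(clean_scores, attacked_scores):
    # Average change in assigned grade after an answer-side attack,
    # over the same items scored clean and attacked
    shifts = [a - c for c, a in zip(clean_scores, attacked_scores)]
    return sum(shifts) / len(shifts)

def attack_success_rate(clean_scores, attacked_scores, threshold=1.0):
    # Fraction of items whose grade rises by at least `threshold` points
    # (threshold is a placeholder choice, not the paper's criterion)
    hits = sum(1 for c, a in zip(clean_scores, attacked_scores) if a - c >= threshold)
    return hits / len(clean_scores)
```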

13 pages, 2066 KB  
Article
A Weighted NBTI/HCD Coupling Model in Full VG/VD Bias Space with Applications to SRAM Aging Simulation
by Zhen Chai and Zhenyu Wu
Micromachines 2026, 17(1), 101; https://doi.org/10.3390/mi17010101 - 12 Jan 2026
Abstract
In this paper, a coupled negative bias temperature instability (NBTI)/hot carrier degradation (HCD) failure model is proposed on the 2-D voltage plane for aging simulation of SRAM circuits. According to the physical mechanism of failure, based on the reaction–diffusion and hot carrier energy-driven theory, revised degradation models of threshold voltage shift (∆Vth) for the NBTI and HCD are established, respectively, with explicit expressions for gate voltage (VG)/drain voltage (VD). An NBTI/HCD coupling model is built on the 2-D {VG, VD} voltage plane with a weighting factor in the form of VG and VD power law. The model also takes into account the AC effect and long-term saturation behavior. The predicted ∆Vth under various stress conditions shows an average relative error of 11.6% with experimental data across the entire bias space. SRAM circuit simulation shows that the read static noise margin (RSNM) and write static noise margin (WSNM) have a maximum absolute error of 4.2% and 3.1%, respectively. This research provides a valuable reference for the reliability simulation of nanoscale integrated circuits. Full article
(This article belongs to the Section D1: Semiconductor Devices)
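The abstract describes coupling an NBTI term and an HCD term on the {VG, VD} plane through a weighting factor in VG/VD power-law form. The general shape of such a model can be sketched as follows; every coefficient and exponent here is a placeholder for illustration, not a fitted value from the paper:

```python
def delta_vth(vg, vd, t, a_nbti=1e-3, a_hcd=5e-4, m=2.0, n=1.5, k=0.25):
    # Illustrative power-law degradation terms (placeholder parameters):
    nbti = a_nbti * (vg ** m) * (t ** k)               # NBTI: gate-voltage driven
    hcd = a_hcd * vg * (vd ** n) * (t ** k)            # HCD: drain-voltage driven
    # Weighting factor blending the mechanisms across the bias plane;
    # a simple VD-share is used here in place of the paper's VG/VD power law
    w = (vd / (vg + vd)) if (vg + vd) > 0 else 0.0
    return (1 - w) * nbti + w * hcd
```

With vd = 0 only the NBTI term survives, and as vd grows the HCD term dominates, which is the qualitative behavior a full-bias-space model has to reproduce.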

29 pages, 4367 KB  
Article
SARIMA vs. Prophet: Comparative Efficacy in Forecasting Traffic Accidents Across Ecuadorian Provinces
by Wilson Chango, Ana Salguero, Tatiana Landivar, Roberto Vásconez, Geovanny Silva, Pedro Peñafiel-Arcos, Lucía Núñez and Homero Velasteguí-Izurieta
Computation 2026, 14(1), 5; https://doi.org/10.3390/computation14010005 - 31 Dec 2025
Abstract
This study aimed to evaluate the comparative predictive efficacy of the SARIMA statistical model and the Prophet machine learning model for forecasting monthly traffic accidents across the 24 provinces of Ecuador, addressing a critical research gap in model selection for geographically and socioeconomically heterogeneous regions. By integrating classical time series modeling with algorithmic decomposition techniques, the research sought to determine whether a universally superior model exists or if predictive performance is inherently context-dependent. Monthly accident data from January 2013 to June 2025 were analyzed using a rolling-window evaluation framework. Model accuracy was assessed through Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE) metrics to ensure consistency and comparability across provinces. The results revealed a global tie, with 12 provinces favoring SARIMA and 12 favoring Prophet, indicating the absence of a single dominant model. However, regional patterns of superiority emerged: Prophet achieved exceptional precision in coastal and urban provinces with stationary and high-volume time series—such as Guayas, which recorded the lowest MAPE (4.91%)—while SARIMA outperformed Prophet in the Andean highlands, particularly in non-stationary, medium-to-high-volume provinces such as Tungurahua (MAPE 6.07%) and Pichincha (MAPE 13.38%). Computational instability in MAPE was noted for provinces with extremely low accident counts (e.g., Galápagos, Carchi), though RMSE values remained low, indicating a metric rather than model limitation. Overall, the findings invalidate the notion of a universally optimal model and underscore the necessity of adopting adaptive, region-specific modeling frameworks that account for local geographic, demographic, and structural factors in predictive road safety analytics. Full article
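The observation that MAPE becomes computationally unstable for low-count provinces while RMSE stays low follows directly from MAPE's definition, which divides by the actual value. A quick sketch contrasting the two:

```python
def mape(actual, forecast):
    # Mean absolute percentage error (%); dividing by the actual value makes
    # it unstable when monthly counts are near zero
    return 100.0 * sum(abs(a - f) / abs(a) for a, f in zip(actual, forecast)) / len(actual)

def rmse(actual, forecast):
    # Root mean square error, on the original scale of the data
    return (sum((a - f) ** 2 for a, f in zip(actual, forecast)) / len(actual)) ** 0.5
```

A one-accident miss in a province averaging one accident per month is a 100% MAPE contribution but an RMSE contribution of only 1, matching the Galápagos/Carchi pattern described above.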

12 pages, 2336 KB  
Article
Estimation of Soil Water Flux Using the Heat Pulse Technique and Vector Addition in Saturated Soils of Different Textures
by Fuyun Lu, Zhi Zhao, Qinghua Pan, Yuping Zhang, Dongye Lu and Yang Wu
Water 2026, 18(1), 67; https://doi.org/10.3390/w18010067 - 25 Dec 2025
Abstract
Soil water flux is a key parameter for understanding water and heat transport processes in the vadose zone. The heat pulse technique (HPT) has shown considerable potential for predicting soil water flux. Traditional three-needle probe methods, the maximum dimensionless temperature difference (MDTD) method and the ratio of downstream to upstream temperature increases (Ratio) method, can only measure water flux along the probe alignment. To enhance the applicability of the HPT method, the five-needle probe with vector addition allows for the measurement of soil water flux in any direction within the plane perpendicular to the needles. However, its applicability across different soil textures remains unclear. The objective of this study was to evaluate the applicability of the MDTD and Ratio methods when combined with vector addition across different soil textures. Experimental results show that the vector MDTD and Ratio methods improve water flux measurement accuracy compared with traditional three-needle methods, confirming the reliability of the vector HPT approach. Specifically, the mean absolute percentage error (MAPE) of the vector MDTD method decreased by 1.69%, 1.04%, and 1.80% in sand, sandy loam, and silt loam, respectively, compared with the traditional MDTD method. In contrast, the MAPE of the vector Ratio method varied by +8.83%, −6.73%, and −18.20% in the same soils, relative to the traditional Ratio method. Examining the root mean square error (RMSE) of each method yields a similar conclusion. Similarly to traditional HPT methods, the measurement accuracy of the vector HPT approach is influenced by soil texture, water flux range, and probe spacing. Notably, because the vector HPT method involves four probe spacings, namely the distances between the heating needle and the temperature-sensing needles, it can exacerbate the instability of the resultant water flux measurements. These findings may facilitate the broader application of the HPT method. 
Full article
(This article belongs to the Section Soil and Water)
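Vector addition of two orthogonal in-plane flux components, the core idea behind the five-needle probe, reduces to computing a magnitude and a direction. A minimal sketch; the component names are generic, not the paper's notation:

```python
import math

def resultant_flux(j1, j2):
    # Combine two orthogonal in-plane water-flux components (measured along
    # the two needle alignments) into a magnitude and a direction in radians
    # from the first axis: plain vector addition
    magnitude = math.hypot(j1, j2)
    direction = math.atan2(j2, j1)
    return magnitude, direction
```

This is why the five-needle geometry can resolve flux in any direction within the plane perpendicular to the needles, at the cost of combining the errors of four probe spacings.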

26 pages, 23681 KB  
Article
Semantic-Guided Spatial and Temporal Fusion Framework for Enhancing Monocular Video Depth Estimation
by Hyunsu Kim, Yeongseop Lee, Hyunseong Ko, Junho Jeong and Yunsik Son
Appl. Sci. 2026, 16(1), 212; https://doi.org/10.3390/app16010212 - 24 Dec 2025
Abstract
Despite advancements in deep learning-based Monocular Depth Estimation (MDE), applying these models to video sequences remains challenging due to geometric ambiguities in texture-less regions and temporal instability caused by independent per-frame inference. To address these limitations, we propose STF-Depth, a novel post-processing framework that enhances depth quality by logically fusing heterogeneous information—geometric, semantic, and panoptic—without requiring additional retraining. Our approach introduces a robust RANSAC-based Vanishing Point Estimation to guide Dynamic Depth Gradient Correction for background separation, alongside Adaptive Instance Re-ordering to clarify occlusion relationships. Experimental results on the KITTI, NYU Depth V2, and TartanAir datasets demonstrate that STF-Depth functions as a universal plug-and-play module. Notably, it achieved a 25.7% reduction in Absolute Relative error (AbsRel) and significantly enhanced temporal consistency compared to state-of-the-art backbone models. These findings confirm the framework’s practicality for real-world applications requiring geometric precision and video stability, such as autonomous driving, robotics, and augmented reality (AR). Full article
(This article belongs to the Special Issue Advances in Computer Vision and Digital Image Processing)
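The AbsRel metric behind the reported 25.7% reduction is the mean of |predicted − ground truth| / ground truth over valid depth values. A minimal sketch:

```python
def abs_rel(gt_depths, pred_depths):
    # Absolute relative error: mean of |pred - gt| / gt, restricted to
    # valid (positive) ground-truth depths
    pairs = [(g, p) for g, p in zip(gt_depths, pred_depths) if g > 0]
    return sum(abs(p - g) / g for g, p in pairs) / len(pairs)
```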

19 pages, 1381 KB  
Review
Sprayer Boom Balance Control Technologies: A Survey
by Songchao Zhang, Tianhong Liu, Chen Cai, Chun Chang, Zhiming Wei, Longfei Cui, Suming Ding and Xinyu Xue
Agronomy 2026, 16(1), 33; https://doi.org/10.3390/agronomy16010033 - 22 Dec 2025
Abstract
The operational efficiency and precision of boom sprayers, as critical equipment for protecting field crops, are vital to global food security and agricultural sustainability. In precision agriculture systems, achieving uniform pesticide application fundamentally depends on maintaining stable boom posture during operation. However, severe boom vibration not only directly causes issues like missed spraying, double spraying, and pesticide drift but also represents a critical bottleneck constraining its functional realization in cutting-edge applications. Despite its importance, achieving absolute boom stability is a complex task. Its suspension system design faces a fundamental technical contradiction: effectively isolating high-frequency vehicle vibrations caused by ground surfaces while precisely following large-scale, low-frequency slope variations in the field. This paper systematically traces the evolutionary path of self-balancing boom technology in addressing this core contradiction. First, the paper conducts a dynamic analysis of the root causes of boom instability and the mechanism of its detrimental physical effects on spray quality. This serves as a foundation for the subsequent discussion on technical approaches for boom support and balancing systems. The paper also delves into the evolution of sensing technology, from “single-point height measurement” to “point cloud morphology perception,” and provides a detailed analysis of control strategies from classical PID to modern robust control and artificial intelligence methods. Furthermore, this paper explores the deep integration of this technology with precision agriculture applications, such as variable rate application and autonomous navigation. In conclusion, the paper summarizes the main challenges facing current technology and outlines future development trends, aiming to provide a comprehensive reference for research and development in this field. Full article
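The survey's discussion of control strategies starts from classical PID, so a minimal discrete PID loop is useful context. The gains, time step, and the toy integrator plant in the usage note are placeholders for illustration, not parameters of any sprayer in the review:

```python
class PID:
    # Minimal discrete PID controller; gains and time step are placeholders.
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        # Standard positional form: P on the error, I on its accumulated sum,
        # D on its backward difference
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Driving a toy plant `angle += u * dt` with this controller steers an initial 5-degree boom tilt back toward level; real boom suspensions add the high-frequency isolation and slope-following dynamics discussed above.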

12 pages, 686 KB  
Article
Sex Differences in Outcomes of Critically Ill Adults with Respiratory Syncytial Virus Pneumonia: A Retrospective Exploratory Cohort Study
by Josef Yayan and Kurt Rasche
Infect. Dis. Rep. 2025, 17(6), 151; https://doi.org/10.3390/idr17060151 - 18 Dec 2025
Abstract
Background: Respiratory syncytial virus (RSV) pneumonia is an underrecognized cause of critical illness in adults. However, the influence of biological sex on intensive care unit (ICU) outcomes in this population remains unclear. Due to limited case numbers and incomplete covariate data, this study was designed as exploratory and hypothesis-generating. Methods: We conducted a retrospective exploratory cohort study using the MIMIC-IV database and identified 105 adult ICU patients with laboratory-confirmed RSV pneumonia. Clinical variables included sex, age, ICU length of stay, use of mechanical ventilation, and weaning status. Exploratory multivariable logistic regression was performed to assess associations with in-hospital mortality and weaning success, acknowledging substantial missingness of comorbidity data, severity scores, and treatment variables. This limited adjustment for confounding and statistical power. Results: Overall, in-hospital mortality was 33.3%. Mortality was significantly higher among women than men (51.6% vs. 7.0%; p < 0.001), although the absolute number of deaths in men was very small. In adjusted models, female sex (OR 14.6, 95% CI 1.58–135.3, p = 0.018; the wide interval reflects model instability due to sparse events) and longer ICU stay (OR 1.22 per day, p = 0.001) were independently associated with higher mortality. Female sex was also associated with lower odds of successful weaning (OR 0.07, 95% CI 0.01–0.63, p = 0.018). These effect estimates must be interpreted cautiously due to the very small number of deaths in men and the resulting wide confidence intervals. Age and ventilation duration were not significant predictors. Conclusions: In this preliminary ICU cohort, female sex and prolonged ICU stay were linked to higher mortality and lower weaning success in adults with RSV pneumonia. 
However, given the very small number of events—particularly among male patients—together with the modest sample size, limited covariate availability, and unstable effect estimates, the findings should be viewed as exploratory rather than confirmatory. Larger, well-powered, prospective multicenter studies are needed to validate and further characterize potential sex-related differences in outcomes of RSV-associated critical illness. Full article
(This article belongs to the Section Viral Infections)

11 pages, 762 KB  
Article
Sufficient Standardization? Evaluating the Reliability of an Inertial Sensor (BeyondTM) for Ankle Dorsiflexion After a Brief Familiarization Period
by Giacomo Belmonte, Alberto Canzone, Marco Gervasi, Eneko Fernández-Peña, Angelo Iovane, Antonino Bianco and Antonino Patti
Sports 2025, 13(12), 447; https://doi.org/10.3390/sports13120447 - 11 Dec 2025
Abstract
(1) Background: Ankle joint range of motion is recognized as abnormal in individuals with ankle sprains and chronic ankle instability (CAI), especially in the dorsiflexion movement. This research investigated the test–retest and inter-rater reliability of the Motustech Beyond IMU for dorsiflexion movement following only one hour of rater training and familiarization. (2) Methods: In total, 62 subjects were evaluated for inter-rater reliability and test–retest reliability with a one-week interval. The intraclass correlation coefficient (ICC), along with the Concordance Correlation Coefficient (CCC), was determined for each test of reliability. Standard error of measurement, coefficients of variation, limits of agreement (LoA) and minimal detectable change (MDC) were used for the measurement error analysis. (3) Results: Test–retest reliability was ranked excellent (ICC = 0.949) and very high (CCC = 0.897) for both ankle dorsiflexion measurements. In contrast, inter-rater reliability was evaluated as good (ICC = 0.881–0.906) and very high (CCC = 0.783–0.811). However, the measurement error analysis showed poor absolute agreement (LoA), indicating that the resulting measurement variability is considered clinically unacceptable for high-precision applications. (4) Conclusions: The Beyond IMU demonstrated excellent test–retest reliability for ankle dorsiflexion movements, although measurement error analysis showed considerable absolute error. Consequently, it may be considered a reliable tool for single-rater monitoring of ankle dorsiflexion ROM in non-clinical settings such as general physical activity and amateur sports. Future research should investigate its potential role in injury prevention contexts. Full article
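The limits of agreement (LoA) used in the measurement error analysis are conventionally the Bland-Altman 95% limits: mean difference between raters or sessions, plus or minus 1.96 standard deviations of the differences. A minimal sketch of that conventional computation:

```python
def limits_of_agreement(rater_a, rater_b):
    # Bland-Altman 95% limits of agreement: bias +/- 1.96 * sample SD of
    # the paired differences
    diffs = [a - b for a, b in zip(rater_a, rater_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias - 1.96 * sd, bias + 1.96 * sd
```

Wide limits, even alongside a high ICC, are exactly the pattern the abstract reports: good relative (rank-order) agreement but clinically unacceptable absolute agreement.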

25 pages, 360 KB  
Review
Challenges in Biometry and Intraocular Lens Power Calculations in Keratoconus: A Review
by Mayank A. Nanavaty
Diagnostics 2025, 15(24), 3121; https://doi.org/10.3390/diagnostics15243121 - 8 Dec 2025
Abstract
Purpose: The purpose of this work was to conduct a comprehensive literature review of the challenges encountered in ocular biometry and intraocular lens (IOL) power calculations in patients with keratoconus undergoing cataract surgery and to evaluate the performance of various biometric techniques and IOL power calculation formulas in this population. Methods: A comprehensive literature search was conducted in PubMed for studies published until October 2025. Keywords included “keratoconus”, “biometry”, “IOL power calculation”, “cataract surgery”, “keratometry”, and related terms. Studies evaluating the repeatability of biometric measurement, the accuracy of IOL formulas, and surgical outcomes in keratoconus patients were included. Study quality was assessed using standardized criteria, including study design, measurement standardization, and statistical appropriateness. Results: Twenty studies comprising 1596 eyes with keratoconus were analyzed. Biometric challenges include reduced keratometry repeatability (especially with K > 55 D), altered anterior-to-posterior corneal curvature ratios, anterior chamber depth, unreliable corneal power measurements, and tear film instability affecting measurement consistency. Keratoconus-specific formulas (Barrett’s True-K for keratoconus and Kane’s formula for keratoconus) demonstrated superior accuracy compared to standard formulas. The Barrett True-K formula with predicted posterior corneal astigmatism showed median absolute errors of 0.10–0.35 D across all severity stages, with 39–72% of eyes within ±0.50 D of target refraction. Traditional formulas (excluding SRK/T) produced hyperopic prediction errors that increased with disease severity. Swept-source optical coherence tomography biometry with total keratometry measurements improved prediction accuracy, particularly in severe keratoconus. Conclusions: IOL power calculation in keratoconus remains challenging due to multiple biometric measurement errors. 
Keratoconus-specific formulas significantly improve refractive outcomes compared to standard formulas. The use of total keratometry and swept-source OCT biometry, as well as the incorporation of posterior corneal power measurements, enhances accuracy. A multimodal approach combining advanced biometry devices with keratoconus-specific formulas is recommended for optimal outcomes. Full article
(This article belongs to the Special Issue Latest Advances in Ophthalmic Imaging)
30 pages, 2574 KB  
Article
EvalCouncil: A Committee-Based LLM Framework for Reliable and Unbiased Automated Grading
by Catalin Anghel, Marian Viorel Craciun, Andreea Alexandra Anghel, Adina Cocu, Antonio Stefan Balau, Constantin Adrian Andrei, Calina Maier, Serban Dragosloveanu, Dana-Georgiana Nedelea and Cristian Scheau
Computers 2025, 14(12), 530; https://doi.org/10.3390/computers14120530 - 3 Dec 2025
Abstract
Large Language Models (LLMs) are increasingly used for rubric-based assessment, yet reliability is limited by instability, bias, and weak diagnostics. We present EvalCouncil, a committee-and-chief framework for rubric-guided grading with auditable traces and a human adjudication baseline. Our objectives are to (i) characterize domain structure in Human–LLM alignment, (ii) assess robustness to concordance tolerance and panel composition, and (iii) derive a domain-adaptive audit policy grounded in dispersion and chief–panel differences. Authentic student responses from two domains–Computer Networks (CNs) and Machine Learning (ML)–are graded by multiple heterogeneous LLM evaluators using identical rubric prompts. A designated chief arbitrator operates within a tolerance band and issues the final grade. We quantify within-panel dispersion via MPAD (mean pairwise absolute deviation), measure chief–panel concordance (e.g., absolute error and bias), and compute Human–LLM deviation. Robustness is examined by sweeping the tolerance and performing leave-one-out perturbations of panel composition. All outputs and reasoning traces are stored in a graph database for full provenance. Human–LLM alignment exhibits systematic domain dependence: ML shows tighter central tendency and shorter upper tails, whereas CN displays broader dispersion with heavier upper tails and larger extreme spreads. Disagreement increases with item difficulty as captured by MPAD, concentrating misalignment on a relatively small subset of items. These patterns are stable to tolerance variation and single-grader removals. The signals support a practical triage policy: accept low-dispersion, small-gap items; apply a brief check to borderline cases; and adjudicate high-dispersion or large-gap items with targeted rubric clarification. 
EvalCouncil instantiates a committee-and-chief, rubric-guided grading workflow with committee arbitration, a human adjudication baseline, and graph-based auditability in a real classroom deployment. By linking domain-aware dispersion (MPAD), a policy tolerance dial, and chief–panel discrepancy, the study shows how these elements can be combined into a replicable, auditable, and capacity-aware approach for organizing LLM-assisted grading and identifying instability and systematic misalignment, while maintaining pedagogical interpretability. Full article
(This article belongs to the Section AI-Driven Innovations)
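MPAD (mean pairwise absolute deviation) as described here is the average absolute difference over all unordered pairs of panel scores for one item. A minimal sketch:

```python
from itertools import combinations

def mpad(panel_scores):
    # Mean pairwise absolute deviation: average |difference| over all
    # unordered pairs of graders' scores for a single item
    pairs = list(combinations(panel_scores, 2))
    return sum(abs(a - b) for a, b in pairs) / len(pairs)
```

A zero MPAD means the panel is unanimous; rising MPAD flags the high-disagreement items that the triage policy routes to adjudication.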

22 pages, 2030 KB  
Article
Synergistic Genotoxic Effects of Gamma Rays and UVB Radiation on Human Blood
by Angeliki Gkikoudi, Athanasia Adamopoulou, Despoina Diamadaki, Panagiotis Matsades, Ioannis Tzakakos, Sotiria Triantopoulou, Spyridon N. Vasilopoulos, Gina Manda, Georgia I. Terzoudi and Alexandros G. Georgakilas
Antioxidants 2025, 14(12), 1451; https://doi.org/10.3390/antiox14121451 - 2 Dec 2025
Abstract
Exposure to ionizing and non-ionizing radiation from environmental and clinical settings can significantly threaten genomic stability, especially when combined. This ex vivo study investigates the potential combined effects of gamma radiation and ultraviolet B (UVB) exposure on human peripheral blood mononuclear cells (PBMCs) from healthy donors by exposing whole blood and isolated PBMCs to 1 Gy of gamma rays, to an absolute dose of approximately 100 J/m2 of UVB, or to their combination. Combined exposure resulted in significantly elevated γH2AX foci formation and chromosomal aberrations relative to individual stressors, with the most pronounced effects observed in isolated PBMCs. Notably, lymphocytes from some donors failed to proliferate after UVB or co-exposure. Based on our results, a predictive biophysical model derived from dicentric yield was developed to estimate the gamma-ray equivalent dose from co-exposure, indicating up to ~9% increase in lifetime cancer risk. Although this proof-of-concept study included only a small number of donors and focused on two endpoints (γH2AX and dicentric assays), it provides a controlled framework for investigating mechanisms of radiation-induced genomic instability. The results emphasize the importance of accounting for mixed radiation exposures in genotoxic risk assessment and radiation protection. Full article
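The abstract does not give the form of its predictive biophysical model, but conventional cytogenetic biodosimetry fits a linear-quadratic dose response Y = c + αD + βD² to dicentric yields and inverts it to recover an equivalent dose. A sketch of that conventional approach, with placeholder coefficients rather than the authors' fit:

```python
def dicentric_yield(dose, c=0.001, alpha=0.02, beta=0.06):
    # Linear-quadratic dose response for dicentric yield per cell
    # (coefficients are illustrative placeholders, not fitted values)
    return c + alpha * dose + beta * dose ** 2

def equivalent_dose(y_obs, c=0.001, alpha=0.02, beta=0.06):
    # Invert Y = c + alpha*D + beta*D^2 for the positive root
    disc = alpha ** 2 + 4.0 * beta * (y_obs - c)
    return (-alpha + disc ** 0.5) / (2.0 * beta)
```

Mapping an observed co-exposure yield through this inverse gives a gamma-ray equivalent dose, the quantity the ~9% lifetime-risk estimate above is built on.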

18 pages, 2235 KB  
Article
3D Latent Diffusion Model for MR-Only Radiotherapy: Accurate and Consistent Synthetic CT Generation
by Mohammed A. Mahdi, Mohammed Al-Shalabi, Ehab T. Alnfrawy, Reda Elbarougy, Muhammad Usman Hadi and Rao Faizan Ali
Diagnostics 2025, 15(23), 3010; https://doi.org/10.3390/diagnostics15233010 - 26 Nov 2025
Abstract
Background: The clinical imperative to reduce patient ionizing radiation exposure during diagnosis and treatment planning necessitates robust, high-fidelity synthetic imaging solutions. Current cross-modal synthesis techniques, primarily based on GANs and deterministic CNNs, exhibit instability and critical errors in modeling high-contrast tissues, thereby hindering their reliability for safety-critical applications such as radiotherapy. Objectives: Our primary objective was to develop a stable, high-accuracy framework for 3D Magnetic Resonance Imaging (MRI)-to-Computed Tomography (CT) synthesis capable of generating clinically equivalent synthetic CTs (sCTs) across multiple anatomical sites. Methods: We introduce a novel 3D Latent Diffusion Model (3D-LDM) that operates in a compressed latent space, mitigating the computational burden of 3D diffusion while leveraging the stability of the denoising objective. Results: Across the Head & Neck, Thorax, and Abdomen sites, the 3D-LDM achieved a Mean Absolute Error (MAE) of 56.44 Hounsfield Units (HU). This result demonstrates a significant 3.63% reduction in overall error compared to the strongest adversarial baseline, CycleGAN (MAE = 60.07 HU, p < 0.05), a 10.76% reduction compared to NNUNet (MAE = 67.20 HU, p < 0.01), and a 20.79% reduction compared to the transformer-based SwinUNeTr (MAE = 77.23 HU, p < 0.0001). The model also achieved the highest structural similarity (SSIM = 0.885 ± 0.031), significantly exceeding SwinUNeTr (p < 0.0001), NNUNet (p < 0.01), and Pix2Pix (p < 0.0001). Likewise, the 3D-LDM achieved the highest peak signal-to-noise ratio (PSNR = 29.73 ± 1.60 dB), with statistically significant gains over CycleGAN (p < 0.01), NNUNet (p < 0.001), and SwinUNeTr (p < 0.0001). Conclusions: This work validates a scalable, accurate approach for volumetric synthesis, positioning the 3D-LDM to enable MR-only radiotherapy planning and accelerate radiation-free multi-modal imaging in the clinic. Full article
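The denoising objective credited here for stability, in contrast to adversarial training, reduces to predicting the injected noise in the compressed latent space. A minimal sketch (the latent shape and noise-schedule value are arbitrary assumptions, and a trained 3D U-Net would replace the hand-supplied noise predictor):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(z0, eps, alpha_bar_t):
    # Forward diffusion step in latent space:
    # z_t = sqrt(alpha_bar_t) * z0 + sqrt(1 - alpha_bar_t) * eps
    return np.sqrt(alpha_bar_t) * z0 + np.sqrt(1.0 - alpha_bar_t) * eps

def denoising_loss(pred_eps, eps):
    # Epsilon-prediction MSE: the denoising objective whose stability
    # the abstract contrasts with adversarial (GAN) training.
    return float(np.mean((pred_eps - eps) ** 2))

# Toy 4-channel 8x8x8 latent standing in for the encoder's compressed
# 3D MR latent volume.
z0 = rng.standard_normal((4, 8, 8, 8))
eps = rng.standard_normal(z0.shape)
z_t = add_noise(z0, eps, alpha_bar_t=0.5)
```

A perfect noise predictor drives this loss to zero; at inference the model iteratively denoises a random latent and a decoder maps the result back to CT intensities.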
(This article belongs to the Special Issue Medical Image Analysis and Machine Learning)

20 pages, 1908 KB  
Article
Triple-Flow Dynamic Graph Convolutional Network for Wind Power Forecasting
by Bin Li, Bo Ding, Wei Pang and Hongyin Ni
Symmetry 2025, 17(12), 2026; https://doi.org/10.3390/sym17122026 - 26 Nov 2025
Viewed by 488
Abstract
Wind energy is a clean but intermittent and volatile energy source, and its large-scale integration poses great challenges to the safe and stable operation, scheduling optimization, and effective energy planning of power systems. Accurate wind power forecasting is an effective way to mitigate the impact of wind power instability on power systems. However, wind power data are often multivariate time series, and existing forecasting research tends to model the temporal and spatial characteristics of the coupled data jointly, ignoring the heterogeneity between time and space and thereby limiting the model's expressive power. To address these problems, we propose a triple-flow dynamic graph convolutional network (TFDGCN) for short-term wind power forecasting. The proposed TFDGCN is a symmetric dynamic graph neural network with three branches. It decouples and learns features along three different dimensions: within a wind power variable sequence, between sequences, and between wind turbines. TFDGCN constructs dynamic sparse graphs based on cosine similarities within variable sequences, between variable sequences, and between wind turbine nodes, and feeds them into their respective dynamic graph convolution modules. Afterwards, TFDGCN uses linear attention encoders that fuse local position encoding (LePE) and rotary position embedding (RoPE) to learn global dependencies within variable sequences, between sequences, and between wind turbines, and produces the prediction results. Extensive experimental results on two real-world datasets demonstrate that the proposed TFDGCN outperforms other state-of-the-art methods. On the SDWPF and SD23 datasets, TFDGCN achieved mean absolute error values of 37.16 and 14.63, respectively, and root mean square error values of 44.84 and 17.56, respectively. Full article
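The dynamic sparse graphs built from cosine similarities can be illustrated with a minimal top-k construction; the feature matrix, the value of k, and the binary adjacency are simplifying assumptions rather than the paper's exact module:

```python
import numpy as np

def dynamic_sparse_graph(x, k=2):
    """Build a top-k sparse adjacency from pairwise cosine similarity.

    x: (n, d) array of node features (e.g. per-turbine sequence
    embeddings). Returns an (n, n) 0/1 adjacency keeping, for each
    node, its k most cosine-similar neighbours (self excluded).
    A simplified stand-in for TFDGCN's dynamic graph construction."""
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    xn = x / np.clip(norms, 1e-12, None)   # row-normalize features
    sim = xn @ xn.T                        # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)         # exclude self-loops
    adj = np.zeros_like(sim)
    idx = np.argsort(sim, axis=1)[:, -k:]  # k most similar per node
    rows = np.arange(x.shape[0])[:, None]
    adj[rows, idx] = 1.0
    return adj
```

Because the features change with each input window, the resulting adjacency is recomputed per batch, which is what makes the graph "dynamic" rather than fixed by turbine geography.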

36 pages, 6756 KB  
Article
Enhancing Sustainable Supply Chain Performance Prediction Using an Augmented Algorithm-Optimized XGBOOST in Industry 4.0 Contexts
by Noreddin Nsir, Ahmad Bassam Alzubi and Oluwatayomi Rereloluwa Adegboye
Sustainability 2025, 17(22), 10344; https://doi.org/10.3390/su172210344 - 19 Nov 2025
Viewed by 668
Abstract
Accurate prediction of supply chain performance, particularly profitability as a key indicator of economic sustainability, is essential for data-driven decision-making in Industry 4.0-enabled sustainable supply chains. Traditional machine learning models often underperform due to suboptimal hyperparameter configurations, especially when dealing with high-dimensional, nonlinear operational data. To address the limitations of conventional models, which often exhibit instability and weak generalization on high-dimensional data, this study introduces a novel Salp Swarm Algorithm with a Local Escaping Operator (SSALEO) to optimize XGBOOST for sustainable supply chain profit prediction. The theoretical innovation lies in the integration of the LEO, which dynamically perturbs stagnant solutions to enhance convergence reliability, robustness, and interpretability compared with conventional metaheuristic-ML hybrids. This enhanced metaheuristic optimizer fine-tunes XGBOOST to deliver highly accurate predictions of supply chain profit, a critical dimension of economic sustainability. Evaluated on real-world supply chain datasets, SSALEO-XGBOOST achieves a coefficient of determination (R2) of 0.985 and significantly outperforms benchmark models across error metrics, including Root Mean Squared Error (RMSE), Mean Squared Error (MSE), Maximum Error (ME), and Relative Absolute Error (RAE). By leveraging this enhanced optimizer, the proposed SSALEO-XGBOOST framework achieves superior predictive accuracy and stability, enabling more consistent profit estimation and performance forecasting. For decision-makers in industrial environments, the framework offers a practical tool to support data-driven sustainability assessment and digital transformation strategies, fostering intelligent and resilient industrial ecosystems. Full article
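The SSALEO idea, a salp-swarm search whose stagnant members are perturbed by a Local Escaping Operator, can be sketched on a toy objective. Everything below is a simplified stand-in: the sphere function replaces XGBOOST's cross-validated error, and the update rules are schematic rather than the paper's exact formulation:

```python
import math
import random

def sphere(v):
    # Toy objective; in the paper's setting this would be the
    # cross-validated error of an XGBOOST model under hyperparameters v.
    return sum(x * x for x in v)

def ssa_leo(f, dim=2, pop=10, iters=50, lo=-5.0, hi=5.0, seed=1):
    """Schematic Salp Swarm Algorithm with a Local Escaping Operator:
    the leader explores around the best-so-far solution with a decaying
    radius, followers average with their predecessor, and clearly
    stagnant members are relocated at random (a crude LEO stand-in)."""
    rnd = random.Random(seed)
    swarm = [[rnd.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    best = min(swarm, key=f)[:]
    for t in range(1, iters + 1):
        c1 = 2.0 * math.exp(-((4.0 * t / iters) ** 2))  # exploration decay
        for i in range(pop):
            if i == 0:  # leader: sample around the best solution
                swarm[i] = [min(hi, max(lo, best[j] + c1 * rnd.uniform(lo, hi)))
                            for j in range(dim)]
            else:       # followers: drift toward their predecessor
                swarm[i] = [(swarm[i][j] + swarm[i - 1][j]) / 2.0
                            for j in range(dim)]
            if f(swarm[i]) > 10.0 * f(best) + 1.0 and rnd.random() < 0.3:
                # Local Escaping Operator: kick a stagnant salp elsewhere
                swarm[i] = [rnd.uniform(lo, hi) for _ in range(dim)]
            if f(swarm[i]) < f(best):
                best = swarm[i][:]
    return best, f(best)

best_params, best_err = ssa_leo(sphere)
```

In the hybrid framework, each candidate vector would encode XGBOOST hyperparameters (e.g. learning rate, tree depth), and f would train and score the model, making the search far more expensive but structurally identical.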
(This article belongs to the Special Issue Sustainable Supply Chain Management in Industry 4.0)
