Search Results (220)

Search Parameters:
Keywords = sequential calibration

22 pages, 6969 KB  
Article
Self-Supervised 3D Cloud Motion Inversion from Ground-Based Binocular All-Sky Images
by Shan Jiang, Chen Zhang, Xu Fu, Lei Lin, Zhikuan Wang, Xingtong Li, Tianying Liu and Jifeng Song
Atmosphere 2026, 17(3), 236; https://doi.org/10.3390/atmos17030236 - 25 Feb 2026
Viewed by 145
Abstract
Addressing the challenge of stable cloud velocity field estimation under complex sky conditions in ground-based cloud imaging, this paper proposes a comprehensive 3D cloud velocity calculation framework. The methodology integrates binocular stereo vision geometry, self-supervised deep feature learning, and graph attention-based matching. First, a self-supervised feature detection and description model tailored to the radiometric characteristics of cloud images is developed. By incorporating a homography adaptation strategy constrained by physical priors, the model acquires robust feature representations for weakly textured and highly deformable cloud masses without requiring labeled datasets. Subsequently, a Transformer-based graph neural network matcher is employed to establish global feature correspondences across both cross-view and cross-temporal dimensions, thereby substantially augmenting matching robustness. On this basis, the framework establishes a rigorous calibration model for fisheye cameras to derive cloud base height (CBH) via binocular geometry. These geometric constraints are then coupled with sequential feature tracking results to construct 3D velocity inversion equations, enabling an end-to-end mapping from 2D pixel coordinates to 3D physical space and providing direct estimation of physical cloud motion velocity in meters per second (m/s). The experimental results show that the proposed method extracts 4.5 times more feature points than the traditional SIFT method. Furthermore, the Pearson correlation coefficient for cloud motion trends in continuous sequences reaches 0.662 relative to baseline models, indicating good relative consistency in motion estimation. The framework achieves high-precision and stable velocity estimation across diverse cloud types, including cirrus, cumulus, stratus, and mixed clouds.
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
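In its simplest pinhole approximation, the binocular-geometry step above reduces to depth from disparity. The sketch below illustrates only that relation; the paper's fisheye calibration model is considerably more involved, and all numbers here are hypothetical.

```python
# Minimal stereo-triangulation sketch (pinhole model, not the paper's
# fisheye calibration). All parameter values are illustrative.

def cloud_base_height(disparity_px, baseline_m, focal_px):
    """Depth from binocular disparity: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. a 25 px disparity with a 50 m baseline and a 1000 px focal
# length places the cloud base at 2000 m.
cbh = cloud_base_height(25.0, 50.0, 1000.0)
```

High clouds produce tiny disparities, which is why sub-pixel matching and robust feature correspondences matter in practice.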
16 pages, 2274 KB  
Article
Mine Ventilation Network Calibration Based on Slack Variables and Sequential Quadratic Programming
by Fengliang Wu, Ruitun Wang, Jun Cao and Jianan Gao
Processes 2026, 14(4), 715; https://doi.org/10.3390/pr14040715 - 21 Feb 2026
Viewed by 189
Abstract
In mine ventilation network calibration, sparse and inconsistent airflow measurements often lead to infeasibility in traditional optimization models. To overcome this challenge, this paper proposes a nonlinear programming calibration model incorporating slack variables. The model treats aerodynamic resistance corrections, airflow adjustments, unknown airflows, and resistance lower-bound slack variables as decision variables. The objective function is formulated to minimize the weighted sum of squares of resistance corrections, while penalty terms account for airflow adjustments and slack variables. Constraints integrate Kirchhoff’s laws with relaxed inequality constraints for resistance lower bounds. A calibration tool integrated via the ObjectARX interface was developed using C++, utilizing the Sequential Quadratic Programming (SQP) algorithm for the solution. The method was validated via a case study of a network comprising 39 branches and 16 measured airflows, optimized under five distinct initial conditions. Results demonstrate that the inclusion of slack variables mathematically guarantees the existence of feasible solutions. With a resistance correction weight of 10⁻² and a penalty coefficient of 10⁵, the model applies only minimal necessary corrections to handle overly tight constraints or data conflicts. The SQP algorithm exhibits superior global convergence, consistently iterating to optimal solutions that satisfy network balance laws regardless of initial values. This approach effectively resolves the infeasibility and data conflict issues inherent in traditional methods, demonstrating significant robustness and practical engineering utility.
(This article belongs to the Section Petroleum and Low-Carbon Energy Process Engineering)
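The slack-variable idea can be shown on a two-variable toy problem solved with SciPy's SLSQP routine. The weights echo the 10⁻² and 10⁵ settings quoted in the abstract, but the single constraint and data values are invented for illustration, not taken from the 39-branch network.

```python
import numpy as np
from scipy.optimize import minimize

# Toy slack-variable calibration: a measured resistance sits below its
# physical lower bound, and a heavily penalized slack keeps the problem
# feasible while the (cheaper) correction absorbs the conflict.
W_CORR, W_SLACK = 1e-2, 1e5        # weights echoing the paper's settings
R_MEAS, R_MIN = 0.5, 1.0           # conflicting datum vs. lower bound

def objective(x):
    dr, s = x                      # resistance correction, slack variable
    return W_CORR * dr**2 + W_SLACK * s**2

cons = ({"type": "ineq", "fun": lambda x: R_MEAS + x[0] + x[1] - R_MIN},
        {"type": "ineq", "fun": lambda x: x[1]})       # slack s >= 0

res = minimize(objective, x0=np.zeros(2), method="SLSQP", constraints=cons)
dr_opt, s_opt = res.x
# The 10**5 penalty drives the slack toward zero, so feasibility is
# guaranteed yet the relaxation is used as sparingly as possible.
```

The optimizer shifts essentially the whole 0.5 conflict into the lightly weighted correction and leaves the slack near zero, which is the mechanism the paper relies on to guarantee feasible solutions.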
37 pages, 20040 KB  
Article
Towards LLM-Driven Cybersecurity in Autonomous Vehicles: A Big Data-Empowered Framework with Emerging Technologies
by Aristeidis Karras, Leonidas Theodorakopoulos, Christos Karras and Alexandra Theodoropoulou
Mach. Learn. Knowl. Extr. 2026, 8(2), 43; https://doi.org/10.3390/make8020043 - 11 Feb 2026
Viewed by 318
Abstract
Modern Autonomous Vehicles generate large volumes of heterogeneous in-vehicle data, making cybersecurity a critical challenge as adversarial attacks become increasingly adaptive, stealthy, and multi-protocol. Traditional intrusion detection systems often fail under these conditions because of their limited contextual understanding, poor robustness to distribution shifts, and insufficient regulatory transparency. This study introduces LLM-Guardian, a hierarchical intrusion detection framework with decision-making mechanisms that integrates Large Language Models (LLMs) with classical statistical detection theory, optimal transport drift analysis, graph neural networks, and formal uncertainty quantification. LLM-Guardian uses semantic anomaly scoring, conformal prediction for distribution-free confidence calibration, adaptive cumulative sum (CUSUM) sequential testing for low-latency detection, and topology-aware GNN reasoning designed to identify coordinated attacks across CAN, Ethernet, and V2X interfaces. In this work, the framework is empirically evaluated on four heterogeneous CAN-bus datasets, while the Ethernet and V2X components are instantiated at the architectural level and left as directions for future multi-protocol experimentation.
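The adaptive CUSUM component named in the abstract is, at its core, a one-sided cumulative-sum test over a stream of anomaly scores. A minimal sketch follows; the drift and threshold values are illustrative, not the paper's tuned settings.

```python
def cusum_detector(scores, drift=0.5, threshold=5.0):
    """One-sided CUSUM over a stream of anomaly scores: accumulate
    evidence above `drift` (the reference value k) and alarm once the
    statistic exceeds `threshold` (h). Both values are illustrative."""
    s = 0.0
    for t, x in enumerate(scores):
        s = max(0.0, s + x - drift)   # stays at 0 while traffic is benign
        if s > threshold:
            return t                  # first alarm index (low latency)
    return None

# Benign scores near 0 keep the statistic pinned at 0; a sustained shift
# to scores near 2 raises an alarm a few samples after onset (index 7 here).
stream = [0.1, 0.0, 0.2, 0.1] + [2.0] * 10
alarm = cusum_detector(stream)
```

The detection delay is governed by the gap between the post-change score level and `drift`, which is the usual latency/false-alarm trade-off a deployed IDS would tune.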
17 pages, 510 KB  
Article
Does Eyewitness Confidence Calibration Vary by Target Race?
by Dilhan Töredi, Jamal K. Mansour, Sian E. Jones, Faye Skelton and Alex McIntyre
Behav. Sci. 2026, 16(2), 257; https://doi.org/10.3390/bs16020257 - 10 Feb 2026
Viewed by 258
Abstract
After making a lineup decision, eyewitnesses may be asked to indicate their confidence in their decision. Eyewitness confidence is considered an important indicator of accuracy. Previous studies have considered the confidence-accuracy (CA) relationship—that is, the relationship between participants’ confidence in their lineup decision and the accuracy of that decision. However, the literature is limited and mixed concerning the CA relationship in cross-race scenarios. We considered the CA relationship for White and Asian participants and targets (fully crossed) using sequential lineups. Participants completed four trials (two White targets and two Asian targets). For each trial, they watched a mock-crime video, performed a distractor task, made a sequential lineup decision (target-present or target-absent), and indicated confidence in their lineup decision. White participants had higher identification accuracy with White than Asian targets, while Asian participants were similarly accurate with White and Asian targets. White participants’ confidence was better calibrated for White than Asian targets, except for when they had medium-high confidence (no difference). This finding is not only theoretically relevant—showing support for the optimality hypothesis—but also practically relevant—suggesting that the CA relationship may differ for target races at some levels of confidence.
(This article belongs to the Special Issue Forensic and Legal Cognition)
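Confidence–accuracy calibration of the kind analyzed here is usually computed by binning decisions on confidence and comparing mean confidence against the proportion correct in each bin. A minimal sketch with invented bin edges and data:

```python
import numpy as np

def calibration_by_bin(confidence, correct, edges=(0, 40, 70, 101)):
    """Confidence-accuracy calibration: bin decisions on a 0-100
    confidence scale, then compare mean confidence with the proportion
    correct per bin. Bin edges and the data below are invented."""
    confidence = np.asarray(confidence, dtype=float)
    correct = np.asarray(correct, dtype=float)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidence >= lo) & (confidence < hi)
        if mask.any():
            mean_conf = confidence[mask].mean() / 100.0
            accuracy = correct[mask].mean()
            # positive gap = overconfidence, negative = underconfidence
            rows.append((mean_conf, accuracy, mean_conf - accuracy))
    return rows

conf = [90, 95, 85, 30, 35, 60]     # lineup-decision confidence ratings
corr = [1, 1, 0, 0, 1, 1]           # was the identification correct?
curve = calibration_by_bin(conf, corr)
```

Comparing the per-bin gap across target races is, in spirit, how a "better calibrated for White than Asian targets, except at medium-high confidence" result is read off such a curve.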
17 pages, 4432 KB  
Article
Multi-Material Extrusion-Based 3D Printing of Hybrid Scaffolds for Tissue Engineering Application
by Andrey Abramov, Yan Sulkhanov and Natalia Menshutina
Gels 2026, 12(2), 123; https://doi.org/10.3390/gels12020123 - 29 Jan 2026
Viewed by 334
Abstract
Additive manufacturing of hydrogel-based scaffolds requires concurrent control of material rheology and extrusion dynamics, especially in multi-material architectures. In this work, we develop a modular multi-material extrusion-based 3D-printing platform that combines a filament-fed extruder for thermoplastic polymers with a piston-driven extruder for viscous gel inks, together with an empirical calibration procedure for gel dosing. The calibration algorithm optimizes the pre-extrusion and retraction displacement (EPr/R) based on stepwise extrusion experiments and reduces the discrepancy between theoretical and measured deposited mass for shear-thinning alginate gels to below the prescribed tolerance. The calibrated system is then used to fabricate two representative hybrid constructs: partially crosslinked sodium alginate scaffolds with an internal hollow channel supported by a removable polycaprolactone framework, and self-supporting structures based on a sodium alginate–chitosan polyelectrolyte complex obtained by sequential co-extrusion. The resulting constructs remain mechanically stable after ionic crosslinking and solvent treatment and can subsequently be converted into highly porous scaffolds by freeze- or supercritical drying. The proposed combination of hardware architecture and extrusion calibration enables reproducible multi-material 3D printing of hydrogel–thermoplastic hybrid scaffolds and can be readily adapted to other gel-based inks for tissue engineering applications.
(This article belongs to the Special Issue 3D Printing of Gel-Based Materials (2nd Edition))
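The mass-discrepancy calibration described above can be caricatured as an iterative proportional correction: measure the deposited mass, rescale the dosing parameter, repeat until within tolerance. Everything below is hypothetical — the power-law response merely stands in for a stepwise extrusion experiment, and the paper's actual EPr/R procedure may differ.

```python
def calibrate_extrusion(target_mass, measure, e0=1.0, tol=0.02, max_iter=20):
    """Iteratively rescale a dosing parameter (standing in for the
    pre-extrusion/retraction displacement, EPr/R) until the measured
    deposited mass is within relative tolerance `tol` of the target.
    `measure` plays the role of one stepwise extrusion experiment."""
    e = e0
    for _ in range(max_iter):
        m = measure(e)
        if abs(m - target_mass) / target_mass <= tol:
            return e, m
        e *= target_mass / m           # proportional correction step
    return e, measure(e)

# Hypothetical shear-thinning response: deposited mass grows sublinearly
# with the displacement parameter.
response = lambda e: 0.8 * e**0.9
param, mass = calibrate_extrusion(1.0, response)
```

For a monotone response like this, the loop converges in a few "experiments", which matches the spirit of an empirical calibration driven to below a prescribed tolerance.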
18 pages, 2686 KB  
Article
MRI-Based Bladder Cancer Staging via YOLOv11 Segmentation and Deep Learning Classification
by Phisit Katongtung, Kanokwatt Shiangjen, Watcharaporn Cholamjiak and Krittin Naravejsakul
Diseases 2026, 14(2), 45; https://doi.org/10.3390/diseases14020045 - 28 Jan 2026
Viewed by 306
Abstract
Background: Accurate staging of bladder cancer is critical for guiding clinical management, particularly the distinction between non–muscle-invasive (T1) and muscle-invasive (T2–T4) disease. Although MRI offers superior soft-tissue contrast, image interpretation remains operator-dependent and subject to inter-observer variability. This study proposes an automated deep learning framework for MRI-based bladder cancer staging to support standardized radiological interpretation. Methods: A sequential AI-based pipeline was developed, integrating hybrid tumor segmentation using YOLOv11 for lesion detection and DeepLabV3 for boundary refinement, followed by three deep learning classifiers (VGG19, ResNet50, and Vision Transformer) for MRI-based stage prediction. A total of 416 T2-weighted MRI images with radiology-derived stage labels (T1–T4) were included, with data augmentation applied during training. Model performance was evaluated using accuracy, precision, recall, F1-score, and multi-class AUC. Performance uncertainty was characterized using patient-level bootstrap confidence intervals under a fixed training and evaluation pipeline. Results: All evaluated models demonstrated high and broadly comparable discriminative performance for MRI-based bladder cancer staging within the present dataset, with high point estimates of accuracy and AUC, particularly for differentiating non–muscle-invasive from muscle-invasive disease. Calibration analysis characterized the probabilistic behavior of predicted stage probabilities under the current experimental setting. Conclusions: The proposed framework demonstrates the feasibility of automated MRI-based bladder cancer staging derived from radiological reference labels and supports the potential of deep learning for standardizing and reproducing MRI-based staging procedures. Rather than serving as an independent clinical decision-support system, the framework is intended as a methodological and workflow-oriented tool for automated staging consistency. Further validation using multi-center datasets, patient-level data splitting prior to augmentation, pathology-confirmed reference standards, and explainable AI techniques is required to establish generalizability and clinical relevance.
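Patient-level bootstrap confidence intervals, as used for performance uncertainty here, resample whole patients rather than individual images so that correlated slices from one patient travel together. A minimal sketch with toy data (the grouping structure and values are invented):

```python
import random

def bootstrap_accuracy_ci(per_patient, n_boot=2000, alpha=0.05, seed=0):
    """Patient-level bootstrap CI for accuracy: resample patients (not
    images) with replacement, so correlated images from the same patient
    stay together. Input: {patient_id: [1/0 correctness flags]}."""
    rng = random.Random(seed)
    groups = list(per_patient.values())
    stats = []
    for _ in range(n_boot):
        sample = [groups[rng.randrange(len(groups))] for _ in groups]
        flat = [c for grp in sample for c in grp]
        stats.append(sum(flat) / len(flat))    # accuracy of this resample
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

# Toy data: per-image staging correctness flags for three patients.
data = {"p1": [1, 1, 0], "p2": [1, 1], "p3": [0, 1, 1, 1]}
lo, hi = bootstrap_accuracy_ci(data)
```

Resampling at the image level instead would typically understate the interval width, which is the failure mode patient-level bootstrapping guards against.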
41 pages, 2673 KB  
Article
Multi-Phase Demand Modeling and Simulation of Mission-Oriented Supply Chains Using Digital Twin and Adaptive PSO
by Jianbo Zhao, Ruikang Wang, Yijia Jing, Yalin Wang, Chenghao Pan and Yifei Tong
Processes 2026, 14(3), 468; https://doi.org/10.3390/pr14030468 - 28 Jan 2026
Viewed by 252
Abstract
Mission-oriented supply chains involve multi-phase tasks, strong resource interdependencies, and stringent reliability requirements, which make demand planning complex and uncertain. This study develops a structured demand modeling framework to support multi-phase mission-oriented supply chains under budget and reliability constraints by integrating digital twin technology with an adaptive inertia weight particle swarm optimization (AIW-PSO) algorithm. The supply support process is decomposed into four sequential phases—storage, transportation, preparation, and execution—and phase-specific demand models are constructed based on system reliability theory, explicitly incorporating redundancy, maintainability, and repairability. In this work, digital twin technology functions as a data acquisition and virtual experimentation layer that supports parameter calibration, state-aware scenario simulation, and event-triggered re-optimization rather than continuous real-time control. Physical-state updates are mapped to model parameters such as phase durations, failure rates, repair rates, and instantaneous availability, after which the integrated optimization model is re-solved using a warm-start strategy to generate updated demand plans. The resulting multi-phase demand optimization problem is solved using AIW-PSO to enhance global search performance and mitigate premature convergence. The proposed method is validated using a representative mission-oriented supply support scenario with operational and simulated data. Simulation results demonstrate that, under identical budget constraints, the proposed approach achieves higher mission completion capability than conventional PSO-based methods, providing effective and practical decision support for multi-phase mission-oriented supply chain planning.
(This article belongs to the Section Manufacturing Processes and Systems)
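Adaptive-inertia PSO can be sketched with a linearly decaying inertia weight — one common adaptive-inertia scheme. The paper's exact AIW rule and coefficients may differ; this is an illustrative sketch on a toy objective.

```python
import random

def aiw_pso(f, dim, bounds, n_particles=25, iters=150, seed=1):
    """Particle swarm with an adaptive (linearly decaying) inertia
    weight, 0.9 -> 0.4 over the run. The paper's AIW-PSO adaptation
    rule and coefficients may differ; this is an illustrative sketch."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                       # personal-best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=pbest.__getitem__)
    G, gbest = P[g][:], pbest[g]
    c1 = c2 = 1.494                             # standard acceleration terms
    for t in range(iters):
        w = 0.9 - 0.5 * t / (iters - 1)         # adaptive inertia weight
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:                   # update personal best
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:                  # ...and the global best
                    gbest, G = fx, X[i][:]
    return G, gbest

# Sphere function: global minimum 0 at the origin.
best_x, best_f = aiw_pso(lambda x: sum(v * v for v in x),
                         dim=3, bounds=(-5.0, 5.0))
```

The high early inertia favors exploration (mitigating premature convergence), while the low late inertia contracts the swarm around the incumbent best — the trade-off the AIW mechanism is meant to manage.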
20 pages, 5627 KB  
Article
A Practical Framework for Parameter Selection and Calibration of the Barcelona Basic Model for the Mechanical Behaviour of Unsaturated Collapsible Soils
by Soha Emad Said, Yasser Moghazy El-Mossallamy, Hossam El-Din Abdallah Ali and Ashraf Ahmed El-Shamy
Appl. Sci. 2026, 16(2), 1072; https://doi.org/10.3390/app16021072 - 21 Jan 2026
Viewed by 292
Abstract
The Barcelona Basic Model (BBM) is a well-established constitutive framework for describing the mechanical behaviour of unsaturated collapsible soils within the context of critical state soil mechanics. Despite its robustness, its application in engineering practice remains limited due to the complexity of its formulation and challenges associated with reliable parameter determination. This study presents a practical framework for the selection and calibration of BBM parameters for Jossigny silt, using laboratory test data reported in the literature, employing a sequential approach supported by engineering judgement and a clear understanding of the original model formulation. The calibrated parameters are implemented in PLAXIS to simulate laboratory tests with different stress paths, allowing for the evaluation of the model’s ability to reproduce observed soil behaviour compared with those reported in the literature through a benchmark exercise conducted using the same reference tests. The calibrated parameter set successfully reproduces soil response under different stress paths, capturing the mechanical behaviour by achieving average values of R² = 0.98, MAE = 0.01, and RMSE = 0.013. The proposed framework is intended to bridge the gap between advanced constitutive modelling and routine engineering analysis by providing a transparent, step-by-step calibration procedure readily implementable in commercial finite element software.
(This article belongs to the Special Issue Mechanical Behaviour of Unsaturated Soil)
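The fit statistics quoted above (R², MAE, RMSE) are standard goodness-of-fit measures; for reference, a self-contained sketch with invented observed/predicted pairs (not the Jossigny-silt data):

```python
import math

def fit_metrics(observed, predicted):
    """Goodness-of-fit metrics used to judge a calibrated model:
    coefficient of determination (R^2), mean absolute error, and
    root-mean-square error."""
    n = len(observed)
    mean_obs = sum(observed) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    r2 = 1.0 - ss_res / ss_tot          # 1.0 means a perfect fit
    mae = sum(abs(o - p) for o, p in zip(observed, predicted)) / n
    rmse = math.sqrt(ss_res / n)
    return r2, mae, rmse

# Toy stress-path response pairs (illustrative values only).
obs = [0.10, 0.20, 0.30, 0.40]
pred = [0.11, 0.19, 0.31, 0.40]
r2, mae, rmse = fit_metrics(obs, pred)
```

Note that R² compares residuals against the spread of the observations, so a near-constant test curve can yield a misleadingly low R² even when MAE and RMSE are small.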
17 pages, 1590 KB  
Article
Integrating Contextual Causal Deep Networks and LLM-Guided Policies for Sequential Decision-Making
by Jong-Min Kim
Mathematics 2026, 14(2), 269; https://doi.org/10.3390/math14020269 - 10 Jan 2026
Cited by 1 | Viewed by 288
Abstract
Sequential decision-making is critical for applications ranging from personalized recommendations to resource allocation. This study evaluates three decision policies—Greedy, Thompson Sampling (via Monte Carlo Dropout), and a zero-shot Large Language Model (LLM)-guided policy (Gemini-1.5-Pro)—within a contextual bandit framework. To address covariate shift and assess subpopulation performance, we utilize a Collective Conditional Diffusion Network (CCDN) where covariates are partitioned into B = 10 homogeneous blocks. Evaluating these policies across a high-dimensional treatment space (K = 5, resulting in 2⁵ = 32 actions), we tested performance in a simulated environment and three benchmark datasets: Boston Housing, Wine Quality, and Adult Income. Our results demonstrate that the Greedy strategy achieves the highest Model-Relative Optimal (MRO) coverage, reaching 1.00 in the Wine Quality and Adult Income datasets, though performance drops significantly to 0.05 in the Boston Housing environment. Thompson Sampling maintains competitive regret and, in the Boston Housing dataset, marginally outperforms Greedy in action selection precision. Conversely, the zero-shot LLM-guided policy consistently underperforms in numerical tabular settings, exhibiting the highest median regret and near-zero MRO coverage across most tasks. Furthermore, Wilcoxon tests reveal that differences in empirical outcomes between policies are often not statistically significant (ns), suggesting an optimization ceiling in zero-shot tabular settings. These findings indicate that while traditional model-driven policies are robust, LLM-guided approaches currently lack the numerical precision required for high-dimensional sequential decision-making without further calibration or hybrid integration.
(This article belongs to the Special Issue Computational Methods and Machine Learning for Causal Inference)
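Thompson Sampling of the kind benchmarked here can be illustrated with conjugate Beta–Bernoulli posteriors; the paper instead approximates the posterior with Monte Carlo Dropout, so treat this as a sketch of the sampling idea, with invented arm probabilities.

```python
import random

def thompson_bandit(true_probs, horizon=2000, seed=7):
    """Beta-Bernoulli Thompson Sampling: draw a plausible win-rate from
    each arm's posterior and pull the argmax. (The paper approximates
    the posterior with MC Dropout rather than conjugate Betas.)"""
    rng = random.Random(seed)
    k = len(true_probs)
    wins, losses = [1] * k, [1] * k            # Beta(1, 1) uniform priors
    pulls = [0] * k
    for _ in range(horizon):
        draws = [rng.betavariate(wins[a], losses[a]) for a in range(k)]
        a = max(range(k), key=draws.__getitem__)
        reward = 1 if rng.random() < true_probs[a] else 0
        wins[a] += reward                      # posterior update
        losses[a] += 1 - reward
        pulls[a] += 1
    return pulls

# Three arms with win probabilities 0.3 / 0.5 / 0.7: the sampler should
# concentrate its pulls on the best arm as the posteriors sharpen.
pulls = thompson_bandit([0.3, 0.5, 0.7])
```

Posterior sampling gives exploration "for free": uncertain arms occasionally produce optimistic draws and get tried, which is the behavior a Greedy policy lacks.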
19 pages, 3915 KB  
Article
Discrete Element Modelling Method and Parameter Calibration of Mussel Based on Bonding V2 Model
by Zhenhua Li, Xinyang Li, Chen Li and Hongbao Ye
Machines 2026, 14(1), 86; https://doi.org/10.3390/machines14010086 - 10 Jan 2026
Viewed by 316
Abstract
To address the inefficiency and high labor intensity associated with traditional manual mussel seedling unloading, this study proposes an automated traction-rope mussel unloading machine. This study focuses on the thick-shelled mussel (Mytilus coruscus) as the research subject. Furthermore, the key mussel unloading processes were simulated using the EDEM software to analyze mechanical interactions during detachment. A breakable mussel discrete element model was developed, and its Bonding V2 model parameters were systematically calibrated. Using the ultimate crushing displacement (2.25 mm) and ultimate crushing load (552 N) as response variables, the model was optimized through a sequential experimental design comprising Plackett–Burman screening, the steepest ascent method, and the Box–Behnken response surface methodology. The results demonstrate that the optimal parameter combination consists of unit area normal stiffness (2.48 × 10¹¹ N/m³), unit area tangential stiffness (3.80 × 10⁸ N/m³), critical normal stress (3.15 × 10⁶ Pa), critical tangential stress (2.90 × 10⁷ Pa), and the contact radius (1.60 mm). The model’s accuracy was validated through integrated discrete element simulations and prototype testing. The equipment achieves an exceptionally low mussel damage rate of only 1.2%, effectively meeting the operational requirements for mussel unloading. This study provides both theoretical foundations and practical insights for the design of mechanized mussel unloading systems in China. 
(This article belongs to the Section Machine Design and Theory)
24 pages, 1788 KB  
Article
Uncertainty-Aware Machine Learning for NBA Forecasting in Digital Betting Markets
by Matteo Montrucchio, Enrico Barbierato and Alice Gatti
Information 2026, 17(1), 56; https://doi.org/10.3390/info17010056 - 8 Jan 2026
Viewed by 805
Abstract
This study introduces a fully uncertainty-aware forecasting framework for NBA games that integrates team-level performance metrics, rolling-form indicators, and spatial shot-chart embeddings. The predictive backbone is a recurrent neural network equipped with Monte Carlo dropout, yielding calibrated sequential probabilities. The model is evaluated against strong baselines including logistic regression, XGBoost, convolutional models, a GRU sequence model, and both market-only and non-market-only benchmarks. All experiments rely on strict chronological partitioning (train ≤ 2022, validation 2023, test 2024), ablation tests designed to eliminate any circularity with bookmaker odds, and cross-season robustness checks spanning 2012–2024. Predictive performance is assessed through accuracy, Brier score, log-loss, AUC, and calibration metrics (ECE/MCE), complemented by SHAP-based interpretability to verify that only pre-game information influences predictions. To quantify economic value, calibrated probabilities are fed into a frictionless betting simulator using fractional-Kelly staking, an expected-value threshold, and bootstrap-based uncertainty estimation. Empirically, the uncertainty-aware model delivers systematically better calibration than non-Bayesian baselines and benefits materially from the combination of shot-chart embeddings and recent-form features. Economic value emerges primarily in less-efficient segments of the market: the fused predictor outperforms both market-only and non-market-only variants on moneylines, while spreads and totals show limited exploitable edge, consistent with higher pricing efficiency. Sensitivity studies across Kelly multipliers, EV thresholds, odds caps, and sequence lengths confirm that the findings are robust to modelling and decision-layer perturbations. The paper contributes a reproducible, decision-focused framework linking uncertainty-aware prediction to economic outcomes, clarifying when predictive lift can be monetized in NBA markets, and outlining methodological pathways for improving robustness, calibration, and execution realism in sports forecasting.
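The decision layer described above combines fractional-Kelly staking with an expected-value threshold; a minimal sketch (the Kelly fraction and EV threshold here are illustrative, not the paper's settings):

```python
def kelly_stake(p_win, decimal_odds, bankroll, fraction=0.25, ev_min=0.02):
    """Fractional-Kelly stake gated by an expected-value threshold: bet
    only when the calibrated edge clears `ev_min`, and scale the
    full-Kelly fraction down to damp probability-estimation error."""
    b = decimal_odds - 1.0                   # net odds received on a win
    ev = p_win * decimal_odds - 1.0          # expected value per unit staked
    if b <= 0 or ev < ev_min:
        return 0.0                           # no bet: edge below threshold
    kelly = (p_win * b - (1.0 - p_win)) / b  # full-Kelly bankroll fraction
    return max(0.0, fraction * kelly) * bankroll

# A calibrated 55% win probability at decimal odds of 2.0 (EV = +0.10)
# stakes 2.5% of the bankroll at quarter-Kelly.
stake = kelly_stake(0.55, 2.0, bankroll=1000.0)
```

Because the stake scales with the estimated edge, miscalibrated probabilities are punished directly in bankroll terms — which is why the paper ties calibration quality to economic outcomes.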
17 pages, 3005 KB  
Article
Methodological Advancement in Resistive-Based, Real-Time Spray Deposition Assessment with Multiplexed Acquisition
by Ayesha Ali, Lorenzo Becce, Andreas Gronauer and Fabrizio Mazzetto
AgriEngineering 2026, 8(1), 3; https://doi.org/10.3390/agriengineering8010003 - 1 Jan 2026
Viewed by 407
Abstract
The use of agrochemicals remains indispensable for ensuring fruit production; however, their excessive or inefficient application poses significant environmental and health concerns. Rapid detection of spray deposition is crucial for assessing sprayer performance, improving precision application, and reducing drift and chemical waste. In this context, real-time monitoring technologies represent a promising tool to promote sustainable and efficient crop protection practices. This study refines previous experiences with an array of resistive sensors to quickly measure spray deposition. First, a multi-point calibration curve is introduced to improve the sensors’ accuracy. Furthermore, a multiplexed acquisition system (Sciospec ISX-5) is employed to enable time-resolved measurements of the whole sensor array. The method is validated by spectrophotometry and weight measurements. Wind tunnel trials with fluorescein (FLU) and fluorescein + potassium chloride (FLU + KCl) tracing solutions were conducted. The conductivity of the latter was higher than the former, without biasing the measurement. Both tracers showed good correlation between deposition and conductivity (R² = 0.997 for FLU and 0.995 for FLU + KCl), and the maximum deviation from the spectrophotometric estimates was <10%. Time-resolved measurement showed the build-up of deposition over time, potentially indicating the dimensional composition of the sprayed cloud. The improved workflow provides array-wide, sequential deposition measurements, enabling faster on-site acquisition and efficient analysis. The results demonstrate strong potential for scaling the method to field applications, supporting its further development into real-time deposition mapping tools that could guide precision spraying, optimize agrochemical use, and reduce environmental drift.
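A multi-point calibration curve of the kind introduced here amounts to a least-squares fit from raw sensor readings to reference deposition values. A minimal straight-line sketch with invented calibration points:

```python
def fit_linear_calibration(readings, references):
    """Ordinary least-squares straight-line calibration from raw sensor
    conductivity readings to reference deposition values — a two-point
    calibration generalized to many points. Data values are invented."""
    n = len(readings)
    sx, sy = sum(readings), sum(references)
    sxx = sum(x * x for x in readings)
    sxy = sum(x * y for x, y in zip(readings, references))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

# Five calibration points: conductivity (a.u.) vs. known deposition (uL).
cond = [0.0, 1.0, 2.0, 3.0, 4.0]
depo = [0.1, 2.1, 4.0, 6.1, 8.0]
m, c = fit_linear_calibration(cond, depo)
deposition_est = m * 2.5 + c     # deposition predicted for a new reading
```

Using several spanning points rather than two makes the fitted slope robust to noise in any single reference measurement, which is the accuracy gain the abstract attributes to the multi-point curve.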
22 pages, 2575 KB  
Article
Sustained Release of Azoxystrobin from Clay Carriers for the Management of Maize Late Wilt Disease
by Ofir Degani, Adar Abramovici, Achinoam Levi-Lion, Daniel Demenchuk, Ariel Hadad and Elhanan Dimant
J. Fungi 2026, 12(1), 21; https://doi.org/10.3390/jof12010021 - 27 Dec 2025
Viewed by 498
Abstract
Controlled-release technologies based on natural clays offer a sustainable approach to enhance the efficacy and environmental compatibility of agrochemicals. This study reports the development and evaluation of clay-based azoxystrobin (Az) formulations for controlling Magnaporthiopsis maydis, the causal agent of maize late wilt disease. Among six carriers tested, raw bentonite and sepiolite were selected for their comparable adsorption capacity (9.5% Az loading efficiency) and ease of preparation. A novel mycelial plug-immersion bioassay was established and calibrated (R² = 0.92–0.95) to assess release kinetics and antifungal efficacy, showing approximately tenfold higher sensitivity than conventional disk-diffusion or mycelial-growth inhibition assays. Sequential wash and extended incubation experiments demonstrated sustained Az release equivalent to ≥1 mg L⁻¹ over 144 h, resulting in approximately 50% (p < 0.05) fungal growth suppression. A comparative analysis of particle suspensions and supernatants revealed formulation-specific release behaviors, which differed among clay carriers. Overall, bentonite and sepiolite acted as efficient carriers that prolonged fungicide bioavailability, minimized leaching losses, and preserved biological activity. These findings provide proof of concept for clay–Az formulations as eco-friendly and cost-effective tools for late wilt management and advance understanding of clay–fungicide interactions that support sustainable, integrated disease-control strategies.
(This article belongs to the Special Issue Plant Fungal Diseases and Crop Protection, 2nd Edition)
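The sustained-release behavior reported in the abstract (≥1 mg L⁻¹ equivalent over 144 h from a carrier loaded at 9.5%) can be sketched with a simple first-order release model. This is a generic illustration, not the paper's actual kinetics: the rate constant `k` and the 100 mg carrier mass below are assumed values chosen for the example.

```python
import math

def first_order_release(total_load_mg, k_per_h, t_h):
    """Cumulative mass released (mg) after t_h hours, assuming
    first-order release kinetics: M(t) = M_total * (1 - e^(-k*t))."""
    return total_load_mg * (1.0 - math.exp(-k_per_h * t_h))

# Illustrative values only: 9.5% Az loading on a hypothetical 100 mg
# of clay carrier, with an assumed release rate constant.
total_load = 100.0 * 0.095   # mg of azoxystrobin on the carrier
k = 0.02                     # h^-1, assumed rate constant

released_144h = first_order_release(total_load, k, 144.0)
```

Under these assumptions most of the loaded fungicide is released within the 144 h window, while release early in the experiment remains gradual, which is the qualitative profile a sequential-wash experiment would probe.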
32 pages, 1486 KB  
Article
Optimal Carbon Emission Reduction Strategies Considering the Carbon Market
by Wenlin Huang and Daming Shan
Mathematics 2026, 14(1), 68; https://doi.org/10.3390/math14010068 - 24 Dec 2025
Abstract
In this study, we develop a stochastic optimal control model for corporate carbon management that synergistically combines emission reduction initiatives with carbon trading mechanisms. The model incorporates two control variables, the autonomous emission reduction rate and initial carbon allowance purchases, while accounting for both deterministic and stochastic carbon pricing scenarios. The solution is obtained through a two-step optimization procedure that addresses each control variable sequentially. In the first step, the problem is transformed into a Hamilton–Jacobi–Bellman (HJB) equation in the viscosity-solution sense. A key aspect of the methodology is deriving the corresponding analytical solution based on this equation’s structure. The second-step optimization results are shown to depend on the relationship between the risk-free interest rate and carbon price dynamics. Furthermore, we employ daily closing prices from 16 July 2021 to 31 December 2024 as the sample dataset to calibrate the parameters governing carbon allowance price evolution. The marginal abatement cost (MAC) curve is calibrated using data derived from the Emissions Prediction and Policy Analysis (EPPA) model, enabling the estimation of the emission reduction efficiency parameter. Additional policy-related parameters are obtained from relevant regulatory documents. The numerical results demonstrate how enterprises can implement the model’s outputs to inform carbon emission reduction decisions in practice and offer enterprises a decision-support tool that integrates theoretical rigor and practical applicability for achieving emission targets in the carbon market. Full article
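The calibration step described in the abstract, estimating the parameters governing carbon allowance price evolution from daily closing prices, can be sketched for a geometric Brownian motion price model. This is a generic illustration rather than the paper's actual specification, and the sample prices below are made up.

```python
import math

def calibrate_gbm(prices, dt=1.0 / 252.0):
    """Estimate annualized GBM drift (mu) and volatility (sigma)
    from daily closing prices via log-returns.

    Under dS = mu*S dt + sigma*S dW, log-returns are i.i.d. normal
    with mean (mu - sigma^2/2)*dt and variance sigma^2*dt."""
    rets = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    n = len(rets)
    mean = sum(rets) / n
    var = sum((r - mean) ** 2 for r in rets) / (n - 1)
    sigma = math.sqrt(var / dt)        # annualized volatility
    mu = mean / dt + 0.5 * sigma ** 2  # annualized drift
    return mu, sigma

# Hypothetical daily closing prices (e.g., allowance price per tonne).
sample = [50.0, 50.5, 50.2, 51.0, 51.3, 50.9, 51.6]
mu_hat, sigma_hat = calibrate_gbm(sample)
```

A constant price series yields zero estimated drift and volatility, which is a quick sanity check on the estimator.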
26 pages, 1143 KB  
Article
Debiasing Session-Based Recommendation for the Digital Economy: Propensity-Aware Training and Temporal Contrast on Graph Transformers
by Yongjian Wang, Junru Si, Xuhua Qiu and Kunjie Zhu
Electronics 2026, 15(1), 84; https://doi.org/10.3390/electronics15010084 - 24 Dec 2025
Abstract
Session-based recommender systems (SBRs) are critically impaired by exposure bias in observational training logs, causing models to overfit to logging policies rather than true user preferences. This bias distorts offline evaluation and harms generalization, particularly for long-tail items. To address this, we propose the Propensity- and Temporal-consistency Enhanced Graph Transformer (PTE-GT), a principled framework that enhances a recent interval-aware graph transformer backbone with two synergistic training-time modules. This Graph Neural Network-based architecture is adept at modeling the complex, graph-structured nature of session data, capturing intricate item transitions that sequential models might miss. First, we introduce a propensity-aware (PA) optimization objective based on the self-normalized inverse propensity scoring (SNIPS) estimator. This module leverages logs containing randomized exposure or logged behavior-policy propensities to learn an unbiased risk estimate, correcting for the biased data distribution. Second, we design a lightweight, view-free temporal consistency (TC) contrastive regularizer that enforces alignment between session prefixes and suffixes, improving representation robustness without computationally expensive graph augmentations, which are often a bottleneck for graph-based contrastive methods. We conduct comprehensive evaluations on three public session-based benchmarks—KuaiRand, the OTTO e-commerce challenge dataset (OTTO), and the YOOCHOOSE-1/64 split (YOOCHOOSE)—and additionally on the publicly available Open Bandit Dataset (OBD) containing logged bandit propensities. Our results demonstrate that PTE-GT significantly outperforms strong baselines. Critically, on datasets with randomized exposure or logged propensities, our unbiased evaluation protocol, using SNIPS-weighted metrics, reveals a substantial performance leap that is masked by standard, biased metrics. Our method also shows marked improvements in model calibration and long-tail item recommendation. Full article
(This article belongs to the Special Issue Advances in Deep Learning for Graph Neural Networks)
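The SNIPS estimator underlying the propensity-aware objective described above can be sketched as follows. This is a minimal, self-contained illustration of self-normalized inverse propensity scoring, not the paper's implementation; the logged losses and propensities are synthetic.

```python
def snips_estimate(losses, propensities):
    """Self-normalized inverse propensity scoring (SNIPS) risk estimate.

    Each observed loss is weighted by 1/propensity (how likely the
    logging policy was to expose that item), and the weighted sum is
    normalized by the sum of the weights rather than the sample size,
    which reduces variance relative to plain IPS."""
    weights = [1.0 / p for p in propensities]
    weighted = sum(w * loss for w, loss in zip(weights, losses))
    return weighted / sum(weights)

# Synthetic logged data: per-interaction losses under the logging
# policy, and the propensity with which each item was exposed.
losses = [0.2, 0.8, 0.5, 0.1]
props = [0.5, 0.1, 0.25, 0.4]
risk = snips_estimate(losses, props)
```

With uniform propensities the estimator reduces to the plain sample mean; with non-uniform propensities, rarely exposed items (small propensity) are up-weighted, which is what corrects for the logging policy's exposure bias.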