Search Results (186)

Search Parameters:
Keywords = drift diffusion model

39 pages, 6278 KB  
Article
Towards Generative Interest-Rate Modeling: Neural Perturbations Within the Libor Market Model
by Anna Knezevic
J. Risk Financial Manag. 2026, 19(1), 82; https://doi.org/10.3390/jrfm19010082 - 21 Jan 2026
Abstract
This study proposes a neural-augmented Libor Market Model (LMM) for swaption surface calibration that enhances expressive power while maintaining the interpretability, arbitrage-free structure, and numerical stability of the classical framework. Classical LMM parametrizations, based on exponential decay volatility functions and static correlation kernels, are known to perform poorly in sparsely quoted and long-tenor regions of swaption volatility cubes. Machine learning–based diffusion models offer flexibility but often lack transparency, stability, and measure-consistent dynamics. To reconcile these requirements, the present approach embeds a compact neural network within the volatility and correlation layers of the LMM, constrained by structural diagnostics, low-rank correlation construction, and HJM-consistent drift. Empirical tests across major currencies (EUR, GBP, USD) and multiple quarterly datasets from 2024 to 2025 show that the neural-augmented LMM consistently outperforms the classical model. Improvements of approximately 7–10% in implied volatility RMSE and 10–15% in PV RMSE are observed across all datasets, with no deterioration in any region of the surface. These results reflect the model’s ability to represent cross-tenor dependencies and surface curvature beyond the reach of classical parametrizations, while remaining economically interpretable and numerically tractable. The findings support hybrid model designs in quantitative finance, where small neural components complement robust analytical structures. The approach aligns with ongoing industry efforts to integrate machine learning into regulatory-compliant pricing models and provides a pathway for future generative LMM variants that retain an arbitrage-free diffusion structure while learning data-driven volatility geometry.
(This article belongs to the Special Issue Quantitative Finance in the Era of Big Data and AI)
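The “exponential decay volatility functions” the abstract contrasts against are typically of Rebonato’s abcd form, where instantaneous volatility depends on time to expiry. A minimal sketch of that classical baseline, with illustrative parameter values (not calibrated to any market or to this paper’s data):

```python
import math

def abcd_vol(a, b, c, d, tau):
    """Rebonato 'abcd' instantaneous volatility as a function of
    time to expiry tau = T_i - t (a classical LMM parametrization)."""
    return (a + b * tau) * math.exp(-c * tau) + d

# Illustrative parameters (hypothetical, not fitted to data).
a, b, c, d = 0.02, 0.3, 1.5, 0.10

# The characteristic hump: volatility rises with tau, peaks, then
# decays toward the long-run level d.
vols = [abcd_vol(a, b, c, d, tau) for tau in (0.25, 1.0, 5.0, 20.0)]
```

The paper’s neural augmentation perturbs this kind of parametric layer; the sketch only shows the classical shape it starts from.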

15 pages, 1339 KB  
Article
Accounting the Role of Prosociality in the Disjunction Effect with a Drift Diffusion Model
by Xiaoyang Xin, Bo Liu, Bihua Yan and Ying Li
Behav. Sci. 2026, 16(1), 132; https://doi.org/10.3390/bs16010132 - 16 Jan 2026
Abstract
The disjunction effect in the prisoner’s dilemma game shows that humans tend to cooperate more under the uncertain condition (U) than under the two complementary known conditions—one competitive (D) and the other cooperative (C)—a well-known violation of the classical decision principle. Our study explores the potential role of prosociality in the disjunction effect. We measured prosocial trait via the SVO Slider Measure and prosocial bias via the starting-point parameter of the drift diffusion model (DDM). Using these two measures, we found that the variation in prosocial bias between uncertain and certain conditions substantially contributes to the disjunction effect. At the aggregate level, prosocial bias significantly decreased from U to D (competitive) but did not differ between U and C (cooperative). At the individual level, participants showed heterogeneous bias changes across prosocial-trait groups: intermediate participants had the largest bias shifts. This heterogeneity underlies the observed inverted U-shaped relationship between prosocial trait and the effect size of the disjunction effect. Our study fills a critical gap by clarifying how prosocial inclination influences the disjunction effect.
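The starting-point bias the authors estimate can be illustrated with a minimal DDM simulation; the function, parameter values, and trial counts below are hypothetical, not the paper’s fitted values. With zero drift, any asymmetry in choice proportions comes from the starting point alone:

```python
import random

def simulate_ddm_trial(v, a, z, dt=0.005, sigma=1.0, rng=random):
    """One drift-diffusion trial: evidence starts at z*a (z in (0,1) is
    the starting-point bias), drifts at rate v, and diffuses until it
    hits 0 or a. Returns (choice, reaction_time)."""
    x, t = z * a, 0.0
    sd = sigma * dt ** 0.5
    while 0.0 < x < a:
        x += v * dt + rng.gauss(0.0, sd)
        t += dt
    return (1 if x >= a else 0), t

rng = random.Random(0)
n = 2000
# Zero drift: choice proportions track the starting-point bias z.
p_biased = sum(simulate_ddm_trial(0.0, 2.0, 0.7, rng=rng)[0] for _ in range(n)) / n
p_neutral = sum(simulate_ddm_trial(0.0, 2.0, 0.5, rng=rng)[0] for _ in range(n)) / n
```

With z = 0.7 the upper boundary is chosen in roughly 70% of trials even though the drift carries no information, which is exactly the kind of a priori bias the DDM separates from evidence accumulation.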

33 pages, 1141 KB  
Review
The Protonic Brain: Nanoscale pH Dynamics, Proton Wires, and Acid–Base Information Coding in Neural Tissue
by Valentin Titus Grigorean, Catalina-Ioana Tataru, Cosmin Pantu, Felix-Mircea Brehar, Octavian Munteanu and George Pariza
Int. J. Mol. Sci. 2026, 27(2), 560; https://doi.org/10.3390/ijms27020560 - 6 Jan 2026
Abstract
Emerging research indicates that neuronal activity is maintained by an architectural system of protons in a multi-scale fashion. Proton architecture is formed when organelles (such as mitochondria, endoplasmic reticulum, lysosomes, and synaptic vesicles) are coupled together to produce dynamic energy domains. Techniques have been developed to visualize protons in neurons; recent advances include near-atomic structural imaging of organelle interfaces using cryo-tomography and nanoscale proton tracking using ultra-fast spectroscopy. Results of these studies indicate that protons in neurons do not diffuse randomly throughout the neuron but instead exist in organized geometric configurations. Mitochondrial cristae create oscillating proton micro-domains that are influenced by the curvature of the cristae, hydrogen bonding between molecules, and localized changes in dielectric properties, resulting in time-patterned proton signals that can be used to determine the metabolic load of the cell and the redox state of its mitochondria. These proton patterns also communicate to the rest of the cell via hydrated, aligned proton-conductive pathways at the mitochondria–endoplasmic reticulum junctions, through acidic lipid regions, and through nano-tethered contact sites between mitochondria and other organelles, which are typically spaced approximately 10–25 nm apart. Other proton architectures exist in lysosomes, endosomes, and synaptic vesicles. In each of these organelles, the V-ATPase generates steep concentration gradients across the membrane, controlling the rate of cargo removal from the organelle lumen, the recycling of receptors from the membrane surface, and the loading of neurotransmitters into vesicles.
Recent super-resolution pH mapping has indicated that populations of synaptic vesicles are markedly heterogeneous in proton content, thereby influencing the amount of neurotransmitter released per vesicle, the probability of vesicle release, and the degree of post-synaptic receptor protonation. Additionally, proton gradients in each organelle interact with the cytoskeleton: the protonation status of actin and microtubules influences filament stiffness, protein–protein interactions, and organelle movement, resulting in localized spatial structures that may possess computational significance. At multiple scales, it appears that neurons integrate proton micro-domains with mechanical tension fields, dielectric nanodomains, and phase-state transitions to form distributed computing elements whose behavior is determined by the integration of energy flow, organelle geometry, and the organization of soft materials. Alterations to the proton landscape in neurons (e.g., due to changes in cristae structure, drift in luminal pH, disruption of the cell’s hydration structure, or imbalance in the protonation of cytoskeletal components) could disrupt the intracellular signaling network well before the onset of measurable electrical or biochemical pathologies. This article summarizes evidence indicating that proton–organelle interaction provides a previously unknown energetic substrate for neural computation. Using an integrated approach combining nanoscale proton energetics, organelle interface geometry, cytoskeletal mechanics, and AI-based multiscale models, it outlines current principles and unresolved questions in this area, as well as possible new approaches to early detection of, and precise intervention in, pathological conditions related to altered intracellular energy flow.
(This article belongs to the Special Issue Molecular Synapse: Diversity, Function and Signaling)

21 pages, 1184 KB  
Perspective
Death as Rising Entropy: A Theory of Everything for Postmortem Interval Estimation
by Matteo Nioi and Ernesto d’Aloja
Forensic Sci. 2025, 5(4), 76; https://doi.org/10.3390/forensicsci5040076 - 11 Dec 2025
Abstract
Determining the postmortem interval remains one of the most persistent and fragmented challenges in forensic science. Conventional approaches—thermal, biochemical, molecular, or entomological—capture only isolated fragments of a single physical reality: the irreversible drift of a once-living system toward equilibrium. This Perspective proposes a unifying paradigm in which death is understood as a progressive rise in entropy, encompassing the loss of biological order across thermal, chemical, structural, and ecological domains. Each measurable postmortem variable—temperature decay, metabolite diffusion, macromolecular breakdown, tissue disorganization, and microbial succession—represents a distinct expression of the same universal law. Within this framework, entropy becomes a dimensionless index of disorder that can be normalized and compared across scales, transforming scattered empirical data into a coherent continuum. A Bayesian formulation further integrates these entropic signals according to their temporal reliability, yielding a probabilistic, multidomain equation for PMI estimation. By merging thermodynamics, information theory, and biology, the concept of death as rising entropy offers a comprehensive physical description of the postmortem process and a theoretical foundation for future computational, imaging, and metabolomic models in forensic time analysis.

28 pages, 5083 KB  
Article
Optimizing Assessment Thresholds of a Computer Gaming Intervention for Students with or at Risk for Mathematics Learning Disabilities: Accuracy and Response Time Trade-Offs
by Sam Choo, Jechun An, Nancy Nelson and Derek Kosty
Educ. Sci. 2025, 15(12), 1660; https://doi.org/10.3390/educsci15121660 - 9 Dec 2025
Abstract
Students with mathematics learning disabilities often have difficulties in adding whole numbers. Such difficulties are evident in both response time and accuracy, but the relationship between accuracy and response time requires further consideration, especially in the context of technology-based interventions and assessments. In this article, we apply a novel approach using the drift-diffusion model to examine potential trade-offs and find balanced performance points that account for both accuracy and response time, using data from an efficacy trial of a mathematics technology gaming intervention for first-grade students with or at risk for learning disabilities. Results indicate that accuracy tends to increase as response time decreases, but only to a certain point. A practical implication is that educators should consider both accuracy and response time when intensifying and individualizing instruction, taking student background (i.e., gender, special education status, and English language status) into account. We suggest that developing technology-based mathematics interventions and assessments requires careful design and configuration to balance accuracy and response time, thereby enabling adaptive performance thresholds for better understanding and supporting student learning in early mathematical fluency.

14 pages, 1754 KB  
Article
Computational Modeling of Uncertainty and Volatility Beliefs in Escape-Avoidance Learning: Comparing Individuals with and Without Suicidal Ideation
by Miguel Blacutt, Caitlin M. O’Loughlin and Brooke A. Ammerman
J. Pers. Med. 2025, 15(12), 604; https://doi.org/10.3390/jpm15120604 - 5 Dec 2025
Abstract
Background/Objectives: Computational studies using drift diffusion models on go/no-go escape tasks consistently show that individuals with suicidal ideation (SI) preferentially engage in active escape from negative emotional states. This study extends these findings by examining how individuals with SI update beliefs about action–outcome contingencies and uncertainty when trying to escape an aversive state. Methods: Undergraduate students with (n = 58) and without (n = 62) a lifetime history of SI made active (go) or passive (no-go) choices in response to stimuli to escape or avoid an unpleasant state in a laboratory-based negative reinforcement task. A Hierarchical Gaussian Filter (HGF) was used to estimate trial-by-trial trajectories of contingency and volatility beliefs, along with their uncertainties, prediction errors (precision-weighted), and dynamic learning rates, as well as fixed parameters at the person level. Bayesian mixed-effects models were used to examine the relationship between trial number, SI history, trial type, and all two-way interactions on HGF parameters. Results: We did not find an effect of SI history, trial type, or their interactions on perceived volatility of reward contingencies. At the trial level, however, participants with a history of SI developed progressively stronger contingency beliefs while simultaneously perceiving the environment as increasingly stable compared to those without SI experiences. Despite this rigidity, they maintained higher uncertainty during escape trials. Participants with an SI history had higher dynamic learning rates during escape trials compared to those without SI experiences. Conclusions: Individuals with an SI history showed a combination of cognitive inflexibility and hyper-reactivity to prediction errors in escape-related contexts. This combination may help explain difficulties in adapting to changing environments and in regulating responses to stress, both of which are relevant for suicide risk.
(This article belongs to the Special Issue Computational Behavioral Modeling in Precision Psychiatry)

10 pages, 213 KB  
Perspective
Implicit Measures of Risky Behaviors in Adolescence
by Silvia Cimino and Luca Cerniglia
Adolescents 2025, 5(4), 77; https://doi.org/10.3390/adolescents5040077 - 1 Dec 2025
Abstract
Background: Adolescence is marked by heightened reward sensitivity and incomplete maturation of cognitive control, creating conditions that favor engagement in risky behaviors. Traditional self-report methods often overlook the fast, automatic processes—such as attentional biases, approach–avoidance tendencies, and associative schemas—that shape adolescent decision-making in real time. Aims: This Perspective aims to synthesize recent (2018–2025) advances in the study of implicit measures relevant to adolescent risk behaviors, evaluate their predictive value beyond explicit measures, and identify translational pathways for prevention and early intervention. Methods: A narrative synthesis was conducted, integrating evidence from eye-tracking, drift-diffusion modeling, approach–avoidance tasks, single-category implicit association tests, ecological momentary assessment (EMA), and passive digital phenotyping. Emphasis was placed on multi-method phenotyping pipelines and on studies validating these tools in adolescent populations. Results: Implicit indices demonstrated incremental predictive validity for risky behaviors such as substance use, hazardous driving, and problematic digital engagement, outperforming self-reports in detecting context-dependent and state-specific risk patterns. Integrative protocols combining laboratory-based measures with EMA and passive sensing captured the influence of peer presence, affective state, and opportunity structures on decision-making. Mobile-based interventions, including approach bias modification and attention bias training, proved feasible, scalable, and sensitive to change in implicit outcomes. Acoustic biomarkers further enhanced low-burden state monitoring. Conclusions: Implicit measures provide a mechanistic, intervention-sensitive complement to explicit screening, enabling targeted, context-aware prevention strategies in adolescents. Future priorities include multi-site validations, school-based implementation trials, and the use of implicit parameter change as a primary endpoint in prevention research.

29 pages, 6701 KB  
Article
IFADiff: Training-Free Hyperspectral Image Generation via Integer–Fractional Alternating Diffusion Sampling
by Yang Yang, Xixi Jia, Wenyang Wei, Wenhang Song, Hailong Zhu and Zhe Jiao
Remote Sens. 2025, 17(23), 3867; https://doi.org/10.3390/rs17233867 - 28 Nov 2025
Abstract
Hyperspectral images (HSIs) provide rich spectral–spatial information and support applications in remote sensing, agriculture, and medicine, yet their development is hindered by data scarcity and costly acquisition. Diffusion models have enabled synthetic HSI generation, but conventional integer-order solvers such as Denoising Diffusion Implicit Models (DDIM) and Pseudo Linear Multi-Step method (PLMS) require many steps and rely mainly on local information, causing error accumulation, spectral distortion, and inefficiency. To address these challenges, we propose Integer–Fractional Alternating Diffusion Sampling (IFADiff), a training-free inference-stage enhancement method based on an integer–fractional alternating time-stepping strategy. IFADiff combines integer-order prediction, which provides stable progression, with fractional-order correction that incorporates historical states through decaying weights to capture long-range dependencies and enhance spatial detail. This design suppresses noise accumulation, reduces spectral drift, and preserves texture fidelity. Experiments on hyperspectral synthesis datasets show that IFADiff consistently improves both reference-based and no-reference metrics across solvers without retraining. Ablation studies further demonstrate that the fractional order α acts as a controllable parameter: larger values enhance fine-grained textures, whereas smaller values yield smoother results. Overall, IFADiff provides an efficient, generalizable, and controllable framework for high-quality HSI generation, with strong potential for large-scale and real-time applications.
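For background, the DDIM baseline named above advances sampling with a deterministic update that reconstructs the clean signal from the predicted noise, then re-noises it to the previous timestep’s level. A scalar sketch of that single step (IFADiff’s fractional-order correction is not reproduced here; all values are illustrative):

```python
import math

def ddim_step(x_t, eps_pred, abar_t, abar_prev):
    """One deterministic DDIM update (eta = 0): recover the predicted
    clean sample x0 from the noise estimate, then re-noise it to the
    previous timestep's cumulative signal level abar_prev."""
    x0 = (x_t - math.sqrt(1.0 - abar_t) * eps_pred) / math.sqrt(abar_t)
    return math.sqrt(abar_prev) * x0 + math.sqrt(1.0 - abar_prev) * eps_pred

# Sanity check: if eps_pred is the exact noise that produced x_t,
# stepping to abar_prev = 1 recovers the clean sample exactly.
x0_true, eps, abar_t = 1.3, 0.5, 0.4
x_t = math.sqrt(abar_t) * x0_true + math.sqrt(1.0 - abar_t) * eps
x_prev = ddim_step(x_t, eps, abar_t, 1.0)
```

Fractional-order schemes like IFADiff replace the purely local `eps_pred` term with a weighted memory of earlier states; this sketch shows only the integer-order building block.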

17 pages, 1054 KB  
Article
Reliability Modeling Method for Constant Stress Accelerated Degradation Based on the Generalized Wiener Process
by Shanshan Li, Zaizai Yan and Junmei Jia
Entropy 2025, 27(12), 1197; https://doi.org/10.3390/e27121197 - 26 Nov 2025
Abstract
This paper aims to improve the accuracy of reliability estimates and failure time prediction for products exhibiting nonlinear degradation behavior under constant-stress accelerated degradation tests (CSADTs). To achieve this, a novel degradation model and a life prediction method are proposed, based on a generalized Wiener process. Some models assume that only the drift coefficient is related to the accelerated stress; however, in certain applications, the diffusion coefficient is also affected by it. The relationship between the drift parameter and the accelerated stress variables can be derived from the acceleration model, as can the relationship between the diffusion parameter and the stress variables, based on the principle of invariance of the acceleration factor. To account for individual variability among products, random effects are introduced. Model parameters are estimated using a combination of maximum likelihood estimation (MLE) and the expectation-maximization (EM) algorithm. Furthermore, the probability density function (PDF) of the remaining useful life under normal stress conditions is derived using the law of total probability. The effectiveness and applicability of the proposed approach are validated using simulated constant-stress accelerated degradation data and stress relaxation data. The results demonstrate that the model not only fits the degradation process well but also modestly improves the accuracy of failure time prediction, providing valuable guidance for engineering maintenance and reliability management.
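A generalized Wiener degradation process is commonly written X(t) = mu·Λ(t) + sigma·B(Λ(t)) for a nonlinear time transform Λ. The sketch below assumes a power-law Λ(t) = t^b purely for illustration; the paper’s stress-dependent parametrization, random effects, and MLE/EM estimation are not reproduced:

```python
import random

def simulate_path(mu, sigma, b, T, n_steps, rng):
    """Generalized Wiener degradation path X(t) = mu*t**b + sigma*B(t**b),
    simulated on a grid via independent Gaussian increments of the
    transformed clock Lambda(t) = t**b (an assumed nonlinear form)."""
    xs, lam_prev, x = [0.0], 0.0, 0.0
    for k in range(1, n_steps + 1):
        lam = (k * T / n_steps) ** b
        dlam = lam - lam_prev
        x += mu * dlam + sigma * rng.gauss(0.0, dlam ** 0.5)
        xs.append(x)
        lam_prev = lam
    return xs

rng = random.Random(1)
threshold = 5.0  # hypothetical failure threshold
# Monte Carlo estimate of the fraction of units whose degradation
# first crosses the threshold within the test horizon T.
failed = 0
for _ in range(500):
    path = simulate_path(mu=1.0, sigma=0.5, b=1.2, T=4.0, n_steps=200, rng=rng)
    failed += any(x >= threshold for x in path)
frac = failed / 500
```

First-passage of such a path past a fixed threshold is the failure-time quantity whose PDF the paper derives analytically via the law of total probability.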

22 pages, 853 KB  
Article
Diffusion-Based Parameters for Stock Clustering: Sector Separation and Out-of-Sample Evidence
by Piyarat Promsuwan, Paisit Khanarsa and Kittisak Chumpong
J. Risk Financial Manag. 2025, 18(11), 637; https://doi.org/10.3390/jrfm18110637 - 12 Nov 2025
Abstract
Clustering techniques are widely applied to equity markets to uncover sectoral structures and regime shifts, yet most studies rely solely on empirical returns. This paper introduces a novel perspective by using diffusion-based parameters from the Black–Scholes model, namely monthly drift and diffusion, as clustering features. Using SET100 stocks in 2020, we applied k-means clustering and evaluated performances with silhouette scores, the Adjusted Rand Index, Wilcoxon tests, and an out-of-sample portfolio exercise. The results showed that diffusion-based features achieved higher silhouette scores in turbulent months, where they revealed sectoral divergence that log-returns failed to capture. The partition for November 2020 provided clearer sector separation and smaller portfolio losses, demonstrating predictive value beyond in-sample fit. Practically, the findings indicate that diffusion-based parameters can signal early signs of market stress, guide sector rotation decisions during volatile regimes, and enhance portfolio risk management by isolating persistent volatility structures across sectors. Theoretically, this model-based framework bridges equity clustering with stochastic diffusion representations used in derivatives valuation, offering a unified and interpretable tool for data-driven market monitoring.
(This article belongs to the Special Issue Machine Learning-Based Risk Management in Finance and Insurance)
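Under the Black–Scholes model, per-period log returns are Gaussian with mean (mu − sigma²/2)·dt and variance sigma²·dt, so the drift and diffusion features used for clustering can be backed out from observed returns. A sketch on synthetic data with known parameters (values illustrative, not from SET100):

```python
import math, random

def gbm_params_from_logreturns(log_returns, dt):
    """Estimate Black-Scholes drift mu and diffusion sigma from log
    returns r_k = (mu - sigma**2/2)*dt + sigma*sqrt(dt)*Z_k."""
    n = len(log_returns)
    mean = sum(log_returns) / n
    var = sum((r - mean) ** 2 for r in log_returns) / (n - 1)
    sigma = math.sqrt(var / dt)
    mu = mean / dt + 0.5 * sigma ** 2
    return mu, sigma

# Sanity check: recover known parameters from simulated daily returns.
rng = random.Random(7)
mu_true, sigma_true, dt = 0.08, 0.25, 1 / 252
rets = [(mu_true - 0.5 * sigma_true ** 2) * dt
        + sigma_true * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        for _ in range(252 * 40)]
mu_hat, sigma_hat = gbm_params_from_logreturns(rets, dt)
```

The resulting (mu, sigma) pairs per stock per month are then the feature vectors fed to k-means in the paper’s pipeline.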

18 pages, 1681 KB  
Article
Modeling Dynamic Regime Shifts in Diffusion Processes: Approximate Maximum Likelihood Estimation for Two-Threshold Ornstein–Uhlenbeck Models
by Svajone Bekesiene, Anatolii Nikitin and Serhii Nechyporuk
Mathematics 2025, 13(21), 3450; https://doi.org/10.3390/math13213450 - 29 Oct 2025
Abstract
This study addresses the problem of estimating parameters in a two-threshold Ornstein–Uhlenbeck diffusion process, a model suitable for describing systems that exhibit changes in dynamics when crossing specific boundaries. Such behavior is often observed in real economic and physical processes. The main objective is to develop and evaluate a method for accurately identifying key parameters, including the threshold levels, drift changes, and diffusion coefficient, within this stochastic framework. The paper proposes an iterative algorithm based on approximate maximum likelihood estimation, which recalculates parameter values step by step until convergence is achieved. This procedure simultaneously estimates both the threshold positions and the associated process parameters, allowing it to adapt effectively to structural changes in the data. Unlike previously studied single-threshold systems, two-threshold models are more natural and offer improved applicability. The method is implemented through custom programming and tested using synthetically generated data to assess its precision and reliability. The novelty of this study lies in extending the approximate maximum likelihood framework to a two-threshold Ornstein–Uhlenbeck process and in developing an iterative estimation procedure capable of jointly recovering both threshold locations and regime-specific parameters with proven convergence properties. Results show that the algorithm successfully captures changes in the process dynamics and provides consistent parameter estimates across different scenarios. The proposed approach offers a practical tool for analyzing systems influenced by shifting regimes and contributes to a better understanding of dynamic processes in various applied fields.
(This article belongs to the Special Issue Stochastic Differential Equations and Applications)
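The synthetic data such a study estimates from can be generated with an Euler–Maruyama scheme in which the drift specification switches at the two thresholds. The sketch below assumes a regime-dependent mean-reversion level; thresholds and parameters are illustrative, not the paper’s estimates, and no estimation step is shown:

```python
import random

def simulate_two_threshold_ou(theta, mus, r1, r2, sigma, x0, T, dt, rng):
    """Euler-Maruyama path of a two-threshold OU process: the
    mean-reversion level switches across the three regimes
    x < r1, r1 <= x < r2, and x >= r2."""
    xs, x = [x0], x0
    n = int(T / dt)
    sd = sigma * dt ** 0.5
    for _ in range(n):
        mu = mus[0] if x < r1 else mus[1] if x < r2 else mus[2]
        x += theta * (mu - x) * dt + rng.gauss(0.0, sd)
        xs.append(x)
    return xs

rng = random.Random(3)
path = simulate_two_threshold_ou(theta=2.0, mus=(-1.0, 0.0, 1.0),
                                 r1=-0.5, r2=0.5, sigma=0.3,
                                 x0=0.0, T=50.0, dt=0.01, rng=rng)
```

Joint recovery of (r1, r2) and the regime drifts from such a discretely observed path is exactly the estimation problem the approximate maximum likelihood procedure tackles.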

20 pages, 963 KB  
Article
Dynamic Governance of China’s Copper Supply Chain: A Stochastic Differential Game Approach
by Yu Wang and Jingjing Yan
Systems 2025, 13(11), 947; https://doi.org/10.3390/systems13110947 - 24 Oct 2025
Abstract
As global copper demand continues to grow, China, being the largest copper consumer, faces increasingly complex challenges in ensuring the security of its supply chain. However, a substantive gap remains: prevailing assessments rely on static index systems and discrete scenario analyses that seldom model uncertainty-driven, continuous-time strategic interactions, leaving the conditions for self-enforcing cooperation and the attendant policy trade-offs insufficiently identified. This study models the interaction between Chinese copper importers and foreign suppliers as a continuous-time stochastic differential game, with feedback Nash equilibria derived from a Hamilton–Jacobi–Bellman system. The supply security utility is specified as a diffusion process perturbed by Brownian shocks, while regulatory intensity and profit-sharing are treated as structural parameters shaping its drift and volatility—thereby delineating the parameter region for self-enforcing cooperation and clarifying how sudden disturbances reconfigure equilibrium security.
The research findings reveal the following: (i) the mean and variance of supply security utility progressively strengthen over time under the influence of both parties’ maintenance efforts, while stochastic disturbances causing actual fluctuations remain controllable within the contract period; (ii) spontaneous cooperation can be achieved under scenarios featuring strong regulation of domestic importers, weak regulation of foreign suppliers, and a profit distribution ratio slightly favoring foreign suppliers, thereby reducing regulatory costs; this asymmetry is beneficial because stricter oversight of domestic importers curbs the primary deviation risk, lighter oversight of foreign suppliers avoids cross-border enforcement frictions, and a modest supplier-favored profit-sharing ratio sustains participation—together expanding the self-enforcing cooperation set; (iii) sudden events exert only short-term impacts on supply security with controllable long-term effects; however, an excessively stringent regulatory environment can paradoxically reduce long-term supply security. Security effort levels demonstrate positive correlation with supply security, while regulatory intensity must be maintained within a moderate range to balance incentives and constraints.
(This article belongs to the Special Issue Operation and Supply Chain Risk Management)

22 pages, 2578 KB  
Article
Controlling Spiral Wave Solutions in the Barkley System Using a Proportional Feedback Control
by Saad M. Almuaddi and H. Y. Alfifi
Symmetry 2025, 17(10), 1721; https://doi.org/10.3390/sym17101721 - 13 Oct 2025
Abstract
An important goal in cardiology and other fields is to identify and control dynamic spiral wave patterns in reaction–diffusion partial differential equations. This research focuses on the Barkley model. The spiral wave motion is controlled and suppressed within the Euclidean group rather than [...] Read more.
An important goal in cardiology and other fields is to identify and control dynamic spiral wave patterns in reaction–diffusion partial differential equations. This research focuses on the Barkley model. The spiral wave motion is controlled and suppressed within the Euclidean group rather than through Euclidean symmetry by applying a controller equation. The eigenfunctions associated with the left eigenspace of the adjoint linear equation can be used to characterize the drift or movement of the spiral wave tip trajectory when the system is perturbed. These eigenfunctions provide details regarding how the spiral wave reacts to disruptions. Perturbations to the Barkley system are examined by applying control functions and calculating the principal eigenvalue numerically. The left eigenfunctions of the Barkley equation are determined by solving the adjoint (left) eigenvalue problem associated with the 2D Barkley equation together with a 1D dynamical controller. In addition, the control function can be used to suppress the periodic and meandering regimes of the system. In this work, the focus is on the periodic regime. Full article
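The Barkley system referenced above has a standard two-variable form that is easy to integrate explicitly. The following sketch uses commonly quoted parameter values with a 5-point Laplacian and periodic boundaries; the grid resolution, time step, and initial condition are illustrative choices, not taken from the article, and no controller is included.

```python
import numpy as np

def barkley_step(u, v, a=0.75, b=0.06, eps=0.02, dt=1e-3, dx=0.25):
    """One explicit Euler step of the Barkley reaction-diffusion model
        u_t = u(1 - u)(u - (v + b)/a)/eps + Laplacian(u),   v_t = u - v,
    discretized with a 5-point Laplacian and periodic boundaries.
    Parameter values are typical for the model; step sizes are illustrative."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx ** 2
    u_new = u + dt * (u * (1.0 - u) * (u - (v + b) / a) / eps + lap)
    v_new = v + dt * (u - v)
    return u_new, v_new

# Cross-field initial condition commonly used to seed a spiral wave:
# an excited half-plane of u crossed with a refractory half-plane of v.
n = 64
u = np.zeros((n, n)); u[:, : n // 2] = 1.0
v = np.zeros((n, n)); v[: n // 2, :] = 0.5
for _ in range(200):
    u, v = barkley_step(u, v)
```

With the chosen dt and dx the diffusion stability bound (dt < dx**2 / 4) is respected, so the fields remain bounded; longer runs from this initial condition develop the rotating spiral whose tip dynamics the controller in the article targets.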

30 pages, 4943 KB  
Article
Multivariate Decoding and Drift-Diffusion Modeling Reveal Adaptive Control in Trilingual Comprehension
by Yuanbo Wang, Yingfang Meng, Qiuyue Yang and Ruiming Wang
Brain Sci. 2025, 15(10), 1046; https://doi.org/10.3390/brainsci15101046 - 26 Sep 2025
Viewed by 825
Abstract
Background/Objectives: The Adaptive Control Hypothesis posits varying control demands across language contexts in production, but its role in comprehension is underexplored. We investigated if trilinguals, who manage three dual-language contexts (L1–L2, L2–L3, L1–L3), exhibit differential proactive and reactive control demands during comprehension across [...] Read more.
Background/Objectives: The Adaptive Control Hypothesis posits varying control demands across language contexts in production, but its role in comprehension is underexplored. We investigated whether trilinguals, who manage three dual-language contexts (L1–L2, L2–L3, L1–L3), exhibit differential proactive and reactive control demands during comprehension across these contexts. Methods: Thirty-six Uyghur–Chinese–English trilinguals completed an auditory word-picture matching task across three dual-language contexts during EEG recording. We employed behavioral analysis, drift-diffusion modeling, event-related potential (ERP) analysis, and multivariate pattern analysis (MVPA) to examine comprehension efficiency, evidence accumulation, and neural mechanisms. The design crossed context (L1–L2, L2–L3, L1–L3) with trial type (switch vs. repetition) and switching direction (to dominant vs. non-dominant language). Results: Despite comparable behavioral performance, drift-diffusion modeling revealed distinct processing profiles across contexts, with the L1–L2 context showing the lowest comprehension efficiency due to slower evidence accumulation. In the L1–L3 context, comprehension-specific proactive control was indexed by a larger P300 and smaller N400 for L1-to-L3 switches. Notably, no reactive control (switch costs) was observed across any dual-language context. MVPA successfully classified contexts and switching directions, revealing distinct spatiotemporal neural patterns. Conclusions: Trilingual comprehension switching mechanisms differ from production. Reactive control is not essential, while proactive control is context-dependent, emerging only in the high-conflict L1–L3 context. This proactive strategy involves allocating more bottom-up attention to the weaker L3, which, unlike in production, enhances rather than hinders overall efficiency. Full article
(This article belongs to the Section Neurolinguistics)
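The drift-diffusion account of comprehension efficiency can be made concrete with a toy simulation: at a matched decision boundary, a lower drift rate (slower evidence accumulation, as reported for the L1–L2 context) yields longer response times. All parameter values below are illustrative, not the fitted parameters from the study.

```python
import random
import statistics

def ddm_trial(drift, boundary=1.0, noise=1.0, dt=1e-3, ndt=0.3,
              max_t=5.0, rng=None):
    """One drift-diffusion trial: evidence starts at 0 and accumulates at
    rate `drift` plus Gaussian noise until it crosses +boundary (correct)
    or -boundary (error). Returns (correct, reaction_time), where the
    reaction time includes a fixed non-decision time `ndt`.
    All parameter values are illustrative."""
    rng = rng or random.Random()
    x, t = 0.0, 0.0
    while abs(x) < boundary and t < max_t:
        x += drift * dt + noise * rng.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
    return x >= boundary, ndt + t

rng = random.Random(1)
fast = [ddm_trial(1.5, rng=rng) for _ in range(500)]  # efficient accumulation
slow = [ddm_trial(0.8, rng=rng) for _ in range(500)]  # slower accumulation
rt_fast = statistics.mean(rt for _, rt in fast)
rt_slow = statistics.mean(rt for _, rt in slow)
```

Comparing the two conditions shows how two contexts can differ in latent efficiency (drift rate) even when accuracy stays high in both, which is why model-based decomposition can reveal differences that raw behavioral comparisons miss.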

16 pages, 2814 KB  
Article
LF-Net: A Lightweight Architecture for State-of-Charge Estimation of Lithium-Ion Batteries by Decomposing Global Trend and Local Fluctuations
by Ruidi Zhou, Xilin Dai, Jinhao Zhang, Keyi He, Fanfan Lin and Hao Ma
Electronics 2025, 14(18), 3643; https://doi.org/10.3390/electronics14183643 - 15 Sep 2025
Viewed by 699
Abstract
Accurate estimation of the State of Charge (SOC) of lithium-ion batteries under complex operating conditions remains challenging, as the SOC signal combines a global linear (quasi-linear) trend with localized dynamic fluctuations driven by polarization, ion diffusion, temperature gradients, and load transients. In practice, [...] Read more.
Accurate estimation of the State of Charge (SOC) of lithium-ion batteries under complex operating conditions remains challenging, as the SOC signal combines a global linear (quasi-linear) trend with localized dynamic fluctuations driven by polarization, ion diffusion, temperature gradients, and load transients. In practice, open-circuit-voltage (OCV) approaches are affected by hysteresis and parameter drift, while high-fidelity electrochemical models require extensive parameterization and significant computational resources that hinder their real-time deployment in battery management systems (BMS). Purely data-driven methods capture temporal patterns but may under-represent abrupt local fluctuations and blur the distinction between trend and fluctuation, leading to biased SOC tracking when operating conditions change. To address these issues, LF-Net is proposed. The architecture decomposes battery time series into long-term trend and local fluctuation components. A linear branch models the quasi-linear SOC evolution. Multi-scale convolutional and differential branches enhance sensitivity to transient dynamics. An adaptive Fusion Module aggregates the representations, improving interpretability and stability, and keeps the parameter budget small for embedded hardware. Our experimental results demonstrate that the proposed model achieves a mean absolute error (MAE) of 0.0085 and a root-mean-square error (RMSE) of 0.0099 at 40 °C, surpassing mainstream models and confirming the method’s efficacy. Full article
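The trend/fluctuation split at the heart of the architecture can be sketched with a fixed moving-average filter standing in for the learned branches. This is purely illustrative: LF-Net's decomposition is trained, whereas here the window length and the synthetic SOC-like trace are arbitrary assumptions.

```python
import numpy as np

def decompose(series, window=25):
    """Split a 1-D signal into a smooth global trend (moving average over
    `window` samples, edge-padded so the output matches the input length)
    and a local-fluctuation residual. The window length is an arbitrary
    illustrative choice, not a parameter from the article."""
    pad = window // 2
    padded = np.pad(series, pad, mode="edge")
    trend = np.convolve(padded, np.ones(window) / window, mode="valid")
    return trend, series - trend

# Synthetic SOC-like trace: quasi-linear discharge plus small transients,
# mimicking the global trend + local fluctuation structure described above.
t = np.linspace(0.0, 1.0, 400)
soc = 1.0 - 0.8 * t + 0.02 * np.sin(40.0 * t)
trend, fluct = decompose(soc)
```

By construction the two components sum back to the original signal, and the fluctuation branch carries only the small transient part, so a simple (or linear) model suffices for the trend while a more sensitive branch can focus on the residual, which is the design rationale the abstract describes.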
