Search Results (73)

Search Parameters:
Keywords = bayesian interface

16 pages, 1728 KB  
Article
Phylogeographic and Host Interface Analyses Reveal the Evolutionary Dynamics of SAT3 Foot-And-Mouth Disease Virus
by Shuang Zhang, Jianing Lv, Yao Lin, Rong Chai, Jiaxi Liang, Yan Su, Zhuo Tian, Hanyu Guo, Fuyun Chen, Guanying Ni, Gang Wang, Chunmei Song, Baoping Li, Qiqi Wang, Sen Zhao, Qixin Huang, Xuejun Ji, Jieji Duo, Fengjun Bai, Jin Li, Shuo Chen, Xueying Pan, Qin La, Zhong Hong and Xiaolong Wang
Viruses 2025, 17(12), 1641; https://doi.org/10.3390/v17121641 - 18 Dec 2025
Abstract
Foot-and-mouth disease virus (FMDV) serotype SAT3 is a rarely studied serotype primarily circulating in southern Africa, with African buffalo (Syncerus caffer) serving as its key reservoir. In this study, we performed a comprehensive phylogenetic and phylodynamic analysis of SAT3 based on 81 full-length VP1 gene sequences collected between 1934 and 2018. Maximum likelihood and Bayesian analyses revealed five distinct topotypes, each with clear geographic and host associations. Notably, topotypes I, II and III were observed in both African buffalo and cattle (Bos taurus), while topotype IV appeared restricted to African buffalo. Likelihood mapping indicated moderate to strong phylogenetic signal, and the mean substitution rate was estimated at 3.709 × 10⁻³ substitutions/site/year under a relaxed molecular clock. The time to the most recent common ancestor (TMRCA) was traced back to 1875. Discrete phylogeographic reconstruction identified Zimbabwe as a major center, with multiple supported cross-border transmission routes. Host transition analysis further confirmed strong directional flow from buffalo to cattle (BF = 1631.09, pp = 1.0), highlighting the wildlife–livestock interface as a key driver of SAT3 persistence. Together, these results underscore the evolutionary complexity of SAT3 and the importance of integrating molecular epidemiology, spatial modeling, and host ecology to inform FMD control strategies in endemic regions. Full article
(This article belongs to the Special Issue Foot-and-Mouth Disease Virus)

29 pages, 549 KB  
Article
Catch Me If You Can: Rogue AI Detection and Correction at Scale
by Fatemeh Stodt, Jan Stodt, Mohammed Alshawki, Javad Salimi Sratakhti and Christoph Reich
Electronics 2025, 14(20), 4122; https://doi.org/10.3390/electronics14204122 - 21 Oct 2025
Abstract
Modern AI systems can strategically misreport information when incentives diverge from truthfulness, posing risks for oversight and deployment. Prior studies often examine this behavior within a single paradigm; systematic, cross-architecture evidence under a unified protocol has been limited. We introduce the Strategy Elicitation Battery (SEB), a standardized probe suite for measuring deceptive reporting across large language models (LLMs), reinforcement-learning agents, vision-only classifiers, multimodal encoders, state-space models, and diffusion models. SEB uses Bayesian inference tasks with persona-controlled instructions, schema-constrained outputs, deterministic decoding where supported, and a probe mix (near-threshold, repeats, neutralized, cross-checks). Estimates use clustered bootstrap intervals, and significance is assessed with a logistic regression by architecture; a mixed-effects analysis is planned once the per-round agent/episode traces are exported. On the latest pre-correction runs, SEB shows a consistent cross-architecture pattern in deception rates: ViT 80.0%, CLIP 15.0%, Mamba 10.0%, RL agents 10.0%, Stable Diffusion 10.0%, and LLMs 5.0% (20 scenarios/architecture). A logistic regression on per-scenario flags finds a significant overall architecture effect (likelihood-ratio test vs. intercept-only: χ²(5) = 41.56, p = 7.22 × 10⁻⁸). Holm-adjusted contrasts indicate ViT is significantly higher than all other architectures in this snapshot; the remaining pairs are not significant. Post-correction acceptance decisions are evaluated separately using residual deception and override rates under SEB-Correct. Latency varies by architecture (sub-second to minutes), enabling pre-deployment screening broadly and real-time auditing for low-latency classes.
Results indicate that SEB-Detect deception flags are not confined to any one paradigm, that distinct architectures can converge to similar levels under a common interface, and that reporting interfaces and incentive framing are central levers for mitigation. We operationalize “deception” as reward-sensitive misreport flags, and we separate detection from intervention via a correction wrapper (SEB-Correct), supporting principled acceptance decisions for deployment. Full article
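The reported significance can be reproduced from the abstract's own numbers: a likelihood-ratio statistic of χ²(5) = 41.56 maps to its p-value through the chi-squared survival function. A minimal sketch (the SciPy call is my assumption, not the authors' code):

```python
from scipy.stats import chi2

# Likelihood-ratio test of the architecture model against the
# intercept-only model: 6 architectures -> 5 degrees of freedom.
lr_stat = 41.56
df = 5

# Survival function (1 - CDF) gives the upper-tail p-value.
p_value = chi2.sf(lr_stat, df)
print(f"p = {p_value:.2e}")  # ≈ 7.2e-08, consistent with the reported value
```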

19 pages, 483 KB  
Article
Probabilistic Models for Military Kill Chains
by Stephen Adams, Alex Kyer, Brian Lee, Dan Sobien, Laura Freeman and Jeremy Werner
Systems 2025, 13(10), 924; https://doi.org/10.3390/systems13100924 - 20 Oct 2025
Cited by 1
Abstract
Military kill chains are the sequence of events, tasks, or functions that must occur to successfully accomplish a mission. As the Department of Defense moves towards Combined Joint All-Domain Command and Control, which will require the coordination of multiple networked assets with the ability to share data and information, kill chains must evolve into kill webs with multiple paths to achieve a successful mission outcome. Mathematical frameworks for kill webs provide the basis for addressing the complexity of this system-of-systems analysis. A mathematical framework for kill chains and kill webs would provide a military decision maker with a structure for assessing several key aspects of mission planning, including the probability of success, alternative chains, and parts of the chain that are likely to fail. However, to the best of our knowledge, a generalized and flexible mathematical formulation for kill chains in military operations does not exist. This study proposes four probabilistic models for kill chains that can later be adapted to kill webs. For each of the proposed models, events in the kill chain are modeled as Bernoulli random variables. This extensible modeling scaffold allows flexibility in constructing the probability of success for each event and is compatible with Monte Carlo simulations and hierarchical Bayesian formulations. The probabilistic models can be used to calculate the probability of a successful kill chain and to perform uncertainty quantification. The models are demonstrated on the Find–Fix–Track–Target–Engage–Assess kill chain. In addition to the mathematical framework, the MIMIK (Mission Illustration and Modeling Interface for Kill webs) software package has been developed and publicly released to support the design and analysis of kill webs. Full article
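For a purely serial chain with independent Bernoulli stages, the mission success probability is the product of the stage probabilities, and a Monte Carlo estimate should agree with the closed form. A sketch with hypothetical per-stage values (not numbers from the paper):

```python
import random

def chain_success_probability(stage_probs):
    """Analytic success probability of a serial kill chain whose stages
    are independent Bernoulli events: the product of stage probabilities."""
    p = 1.0
    for q in stage_probs:
        p *= q
    return p

def monte_carlo_chain(stage_probs, n_trials=100_000, seed=42):
    """Monte Carlo estimate of the same quantity."""
    rng = random.Random(seed)
    hits = sum(
        all(rng.random() < q for q in stage_probs) for _ in range(n_trials)
    )
    return hits / n_trials

# Find-Fix-Track-Target-Engage-Assess chain with illustrative probabilities.
f2t2ea = [0.95, 0.90, 0.85, 0.92, 0.88, 0.97]
exact = chain_success_probability(f2t2ea)
approx = monte_carlo_chain(f2t2ea)
```

Swapping the fixed stage probabilities for draws from prior distributions turns the same scaffold into the Monte Carlo and hierarchical Bayesian setting the abstract mentions.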

27 pages, 6859 KB  
Article
An Explainable Machine Learning Framework for the Hierarchical Management of Hot Pepper Damping-Off in Intensive Seedling Production
by Zhaoyuan Wang, Kaige Liu, Longwei Liang, Changhong Li, Tao Ji, Jing Xu, Huiying Liu and Ming Diao
Horticulturae 2025, 11(10), 1258; https://doi.org/10.3390/horticulturae11101258 - 17 Oct 2025
Abstract
Facility agriculture cultivation is the main production form of the vegetable industry worldwide. As an important vegetable crop, hot peppers are easily threatened by many diseases in a facility microclimate environment. Traditional disease detection methods are time-consuming and allow the disease to proliferate, so timely detection and inhibition of disease development have become a focus of global agricultural practice. This article proposes a generalizable and explainable machine learning model for hot pepper damping-off in intensive seedling production that maintains high predictive accuracy. Through Kalman filter smoothing, SMOTE-ENN unbalanced-sample processing, feature selection, and other data preprocessing methods, 19 baseline models were developed for prediction. After statistical testing of the results, a Bayesian Optimization algorithm was used to tune the hyperparameters of the five best-performing models, and the Extreme Random Trees (ET) model was determined to be the most suitable for this research scenario. For predicting the severity of hot pepper damping-off, the model achieves an F1-score of 0.9734 and an AUC of 0.9969, and explainable analysis is carried out with SHAP (SHapley Additive exPlanations). Based on these results, hierarchical management strategies for different severity levels are interpreted. Combined with the model’s front-end visualization interface, this helps farmers anticipate the development trend of the disease and accurately regulate the environmental factors of seedling raising, which is of great significance for disease prevention and control and for reducing the impact of diseases on hot pepper growth and development. Full article
(This article belongs to the Special Issue New Trends in Smart Horticulture)

24 pages, 1690 KB  
Article
Bayesian-Optimized Ensemble Models for Geopolymer Concrete Compressive Strength Prediction with Interpretability Analysis
by Mehmet Timur Cihan and Pınar Cihan
Buildings 2025, 15(20), 3667; https://doi.org/10.3390/buildings15203667 - 11 Oct 2025
Cited by 3
Abstract
Accurate prediction of geopolymer concrete compressive strength is vital for sustainable construction. Traditional experiments are time-consuming and costly; therefore, computer-aided systems enable rapid and accurate estimation. This study evaluates three ensemble learning algorithms (Extreme Gradient Boosting (XGB), Random Forest (RF), and Light Gradient Boosting Machine (LightGBM)), as well as two baseline models (Support Vector Regression (SVR) and Artificial Neural Network (ANN)), for this task. To improve performance, hyperparameter tuning was conducted using Bayesian Optimization (BO). Model accuracy was measured using R², RMSE, MAE, and MAPE. The results demonstrate that the XGB model outperforms the others under both default and optimized settings. In particular, the XGB-BO model achieved high accuracy, with an RMSE of 0.3100 ± 0.0616 and an R² of 0.9997 ± 0.0001. Furthermore, Shapley Additive Explanations (SHAP) analysis was used to interpret the decision-making of the XGB model. SHAP results revealed that the most influential features for the compressive strength of geopolymer concrete were, in order, coarse aggregate, curing time, and NaOH molar concentration. The graphical user interface (GUI) developed for compressive strength prediction demonstrates the practical potential of this research and contributes to integrating the approach into construction practices. This study highlights the effectiveness of explainable machine learning in understanding complex material behaviors and emphasizes the importance of model optimization for making sustainable and accurate engineering predictions. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)
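The four accuracy metrics used in the study are standard and can be computed directly from predictions; a self-contained sketch (the strength values below are invented for illustration, not the study's data):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R², RMSE, MAE, and MAPE (%) for a set of predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residuals = y_true - y_pred
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "RMSE": float(np.sqrt(np.mean(residuals ** 2))),
        "MAE": float(np.mean(np.abs(residuals))),
        "MAPE": float(100.0 * np.mean(np.abs(residuals / y_true))),
    }

# Toy compressive-strength values in MPa (hypothetical).
y_true = [32.0, 45.5, 28.3, 51.2, 39.8]
y_pred = [31.5, 46.0, 29.0, 50.1, 40.2]
metrics = regression_metrics(y_true, y_pred)
```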

30 pages, 8552 KB  
Article
Analytical–Computational Integration of Equivalent Circuit Modeling, Hybrid Optimization, and Statistical Validation for Electrochemical Impedance Spectroscopy
by Francisco Augusto Nuñez Perez
Electrochem 2025, 6(4), 35; https://doi.org/10.3390/electrochem6040035 - 8 Oct 2025
Abstract
Background: Electrochemical impedance spectroscopy (EIS) is indispensable for disentangling charge-transfer, capacitive, and diffusive phenomena, yet reproducible parameter estimation and objective model selection remain unsettled. Methods: We derive closed-form impedances and analytical Jacobians for seven equivalent-circuit models (Randles, constant-phase element (CPE), and Warburg impedance (Z_W) variants), enforce physical bounds, and fit synthetic spectra with 2.5% and 5.0% Gaussian noise using hybrid optimization (Differential Evolution (DE) → Levenberg–Marquardt (LM)). Uncertainty is quantified via non-parametric bootstrap; parsimony is assessed with root-mean-square error (RMSE), Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC); physical consistency is checked by Kramers–Kronig (KK) diagnostics. Results: Solution resistance (R_s) and charge-transfer resistance (R_ct) are consistently identifiable across noise levels. CPE parameters (Q, n) and diffusion amplitude (σ) exhibit expected collinearity unless the frequency window excites both processes. Randles suffices for ideal interfaces; Randles+CPE lowers AIC when non-ideality and/or higher noise dominate; adding Warburg reproduces the 45° tail and improves likelihood when diffusion is present. The (R_ct+Z_W)∥CPE architecture offers the best trade-off when heterogeneity and diffusion coexist. Conclusions: The framework unifies analytical derivations, hybrid optimization, and rigorous statistics to deliver traceable, reproducible EIS analysis and clear applicability domains, reducing subjective model choice. All code, data, and settings are released to enable exact reproduction. Full article
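As an illustration of the closed-form impedances involved, a Randles-type cell with a CPE in parallel with a charge-transfer branch carrying a Warburg element can be written in a few lines (parameter values are illustrative, not fitted results from the paper):

```python
import numpy as np

def z_randles_cpe_warburg(omega, r_s, r_ct, q, n, sigma):
    """Impedance of Rs in series with [CPE parallel to (Rct + Zw)]:
        Z(w) = Rs + 1 / ( Q*(jw)^n + 1/(Rct + Zw) ),
        Zw   = sigma*(1 - j)/sqrt(w)   (semi-infinite Warburg).
    The Warburg term has equal real and imaginary parts, which is what
    produces the characteristic 45° tail at low frequency."""
    jw = 1j * omega
    zw = sigma * (1 - 1j) / np.sqrt(omega)
    return r_s + 1.0 / (q * jw ** n + 1.0 / (r_ct + zw))

omega = np.logspace(-2, 6, 200)  # angular frequency, rad/s
Z = z_randles_cpe_warburg(omega, r_s=10.0, r_ct=100.0, q=1e-5, n=0.9, sigma=5.0)
# Sanity checks: Z -> Rs at high frequency; |Z| grows toward low frequency.
```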

16 pages, 4919 KB  
Article
SCRATCH-AI: A Tool to Predict Honey Wound Healing Properties
by Simona Martinotti, Stefania Montani, Elia Ranzato and Manuel Striani
Information 2025, 16(10), 827; https://doi.org/10.3390/info16100827 - 24 Sep 2025
Abstract
In this work, we propose SCRATCH-AI, a tool which relies on interpretable machine learning (ML) methods (namely, Bayesian networks and decision trees) to classify honey samples into wound healing categories. Classification explores the impact of botanical origins (i.e., honey type) and key chemical–biological characteristics such as antioxidant activity on healing, assessed through wound recovery metrics. The obtained classification performance results are very encouraging. Moreover, the models provide non-trivial insights about the causal dependencies of some specific honey features on wound healing properties and show the effect of different honey types (other than the well known Manuka) on cicatrization. The tool is inherently interpretable (due to the chosen ML techniques) and made user-friendly by a carefully designed graphical interface. We believe that the information provided by our tool will allow biologists and clinicians to better utilize honey, with the ultimate goal of leveraging honey capability to accelerate healing and reduce infection risks in clinical practice. Full article
(This article belongs to the Special Issue Artificial Intelligence and Decision Support Systems)

19 pages, 1880 KB  
Article
Development and Piloting of Co.Ge.: A Web-Based Digital Platform for Generative and Clinical Cognitive Assessment
by Angela Muscettola, Martino Belvederi Murri, Michele Specchia, Giovanni Antonio De Bellis, Chiara Montemitro, Federica Sancassiani, Alessandra Perra, Barbara Zaccagnino, Anna Francesca Olivetti, Guido Sciavicco, Rosangela Caruso, Luigi Grassi and Maria Giulia Nanni
J. Pers. Med. 2025, 15(9), 423; https://doi.org/10.3390/jpm15090423 - 3 Sep 2025
Cited by 1
Abstract
Background/Objectives: This study presents Co.Ge., a Cognitive Generative digital platform for cognitive testing. We describe its architecture and report a pilot study. Methods: Co.Ge. is modular and web-based (Laravel-PHP, MySQL). It can be used to administer a variety of validated cognitive tests, facilitating administration and scoring while capturing Reaction Times (RTs), trial-level responses, audio, and other data. Co.Ge. includes a study-management dashboard, Application Programming Interfaces (APIs) for external integration, encryption, and customizable options. In this demonstrative pilot study, clinical and non-clinical participants completed an Auditory Verbal Learning Test (AVLT), which we analyzed using accuracy, number of recalled words, and reaction times as outcomes. We collected ratings of user experience with a standardized rating scale. Analyses included Frequentist and Bayesian Generalized Linear Mixed Models (GLMMs). Results: Mean ratings of user experience were all above 4/5, indicating high acceptability (n = 30). Pilot data from the AVLT (n = 123, 60% clinical, 40% healthy) showed that Co.Ge. seamlessly provides standardized clinical ratings, accuracy, and RTs. Analyzing RTs with Bayesian GLMMs and a Gamma distribution provided the best fit to the data (Leave-One-Out Cross-Validation) and allowed us to detect additional associations (e.g., education) otherwise unrecognized by simpler analyses. Conclusions: The prototype of Co.Ge. is technically robust and clinically precise, enabling the extraction of high-resolution behavioral data. Co.Ge. provides traditional clinically oriented cognitive outcomes but also promotes complex generative models to explore individualized mechanisms of cognition. Thus, it will promote personalized profiling and digital phenotyping for precision psychiatry and rehabilitation. Full article
(This article belongs to the Special Issue Trends and Future Development in Precision Medicine)

19 pages, 4759 KB  
Article
Research on User Experience and Continuous Usage Mechanism of Digital Interactive Installations in Museums from the Perspective of Distributed Cognition
by Aili Zhang, Yanling Sun, Shaowen Wang and Mengjuan Zhang
Appl. Sci. 2025, 15(15), 8558; https://doi.org/10.3390/app15158558 - 1 Aug 2025
Cited by 3
Abstract
With the increasing application of digital interactive installations in museums, their role in enhancing audience engagement and cultural dissemination effectiveness has become prominent. However, ensuring the sustained use of these technologies remains challenging. Based on distributed cognition and perceived value theories, this study investigates key factors influencing users’ continuous usage of digital interactive installations using the Capital Museum in Beijing as a case study. A theoretical model was constructed and empirically validated through Bayesian Structural Equation Modeling (Bayesian-SEM) with 352 valid samples. The findings reveal that perceived ease of use plays a critical direct predictive role in continuous usage intention. Environmental factors and peer interaction indirectly influence user behavior through learner engagement, while user satisfaction serves as a core mediator between perceived ease of use and continuous usage intention. Notably, perceived usefulness and entertainment showed no direct effects, indicating that convenience and social experience outweigh functional benefits in this context. These findings emphasize the importance of optimizing interface design, fostering collaborative environments, and enhancing user satisfaction to promote sustained participation. This study provides practical insights for aligning digital innovation with audience needs in museums, thereby supporting the sustainable integration of technology in cultural heritage education and preservation. Full article

18 pages, 774 KB  
Article
Bayesian Inertia Estimation via Parallel MCMC Hammer in Power Systems
by Weidong Zhong, Chun Li, Minghua Chu, Yuanhong Che, Shuyang Zhou, Zhi Wu and Kai Liu
Energies 2025, 18(15), 3905; https://doi.org/10.3390/en18153905 - 22 Jul 2025
Abstract
The stability of modern power systems has become critically dependent on precise inertia estimation of synchronous generators, particularly as renewable energy integration fundamentally transforms grid dynamics. Increasing penetration of converter-interfaced renewable resources reduces system inertia, heightening the grid’s susceptibility to transient disturbances and creating significant technical challenges in maintaining operational reliability. This paper addresses these challenges through a novel Bayesian inference framework that synergistically integrates PMU data with an advanced MCMC sampling technique, specifically employing the Affine-Invariant Ensemble Sampler. The proposed methodology establishes a probabilistic estimation paradigm that systematically combines prior engineering knowledge with real-time measurements, while the Affine-Invariant Ensemble Sampler mechanism overcomes high-dimensional computational barriers through its unique ensemble-based exploration strategy featuring stretch moves and parallel walker coordination. The framework’s ability to provide full posterior distributions of inertia parameters, rather than single-point estimates, supports stability assessment in renewable-dominated grids. Simulation results on the IEEE 39-bus and 68-bus benchmark systems validate the effectiveness and scalability of the proposed method, with inertia estimation errors consistently maintained below 1% across all generators. Moreover, the parallelized implementation of the algorithm significantly outperforms the conventional M-H method in computational efficiency. Specifically, the proposed approach reduces execution time by approximately 52% in the 39-bus system and by 57% in the 68-bus system, demonstrating its suitability for real-time and large-scale power system applications. Full article
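The stretch move at the heart of the Affine-Invariant Ensemble Sampler is compact enough to sketch; the following minimal implementation (after Goodman and Weare, in the spirit of the emcee sampler rather than the paper's code) updates each walker by stretching toward a randomly chosen member of the rest of the ensemble:

```python
import numpy as np

def stretch_move_sampler(log_prob, init_walkers, n_steps, a=2.0, seed=0):
    """Minimal affine-invariant ensemble sampler (Goodman-Weare stretch move).
    Returns the chain with shape (n_steps, n_walkers, dim)."""
    rng = np.random.default_rng(seed)
    walkers = np.array(init_walkers, dtype=float)
    n_walkers, dim = walkers.shape
    log_p = np.array([log_prob(w) for w in walkers])
    chain = []
    for _ in range(n_steps):
        for k in range(n_walkers):
            # Pick a complementary walker j != k.
            j = rng.integers(n_walkers - 1)
            j = j if j < k else j + 1
            # Stretch factor z ~ g(z) proportional to 1/sqrt(z) on [1/a, a].
            z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a
            proposal = walkers[j] + z * (walkers[k] - walkers[j])
            log_p_new = log_prob(proposal)
            # Acceptance ratio includes the z^(dim-1) Jacobian factor.
            if np.log(rng.random()) < (dim - 1) * np.log(z) + log_p_new - log_p[k]:
                walkers[k], log_p[k] = proposal, log_p_new
        chain.append(walkers.copy())
    return np.array(chain)

# Toy target: standard 1-D Gaussian posterior.
log_gauss = lambda x: -0.5 * float(x[0] ** 2)
init = np.random.default_rng(1).normal(size=(20, 1))
samples = stretch_move_sampler(log_gauss, init, n_steps=500)
```

Because each walker is updated using only the complementary set, the ensemble can be split into halves that are advanced in parallel, which is the property the parallelized implementation above exploits.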

22 pages, 260894 KB  
Article
Effects of Aging on Mode I Fatigue Crack Growth Characterization of Double Cantilever Beam Specimens with Thick Adhesive Bondline for Marine Applications
by Rahul Iyer Kumar and Wim De Waele
Materials 2025, 18(14), 3286; https://doi.org/10.3390/ma18143286 - 11 Jul 2025
Abstract
The use of adhesive joints in naval applications requires a thorough understanding of their fatigue performance. This paper reports on the fatigue experiments performed on double cantilever beam specimens with thick adhesive bondline manufactured under shipyard conditions. The specimens have an initial crack at the steel–adhesive interface and are tested in unaged, salt-spray-aged and immersion-aged conditions to determine the interface mode I fatigue properties. The strain energy release rate is calculated using the Kanninen–Penado model, and the fatigue crack growth curve is determined using a power law model. The crack growth rate slope for salt-spray-aged specimens is 16.5% lower than for unaged specimens, while that for immersion-aged specimens is 66.1% lower and is shown to be significantly different. The fracture surfaces are analyzed to identify the failure mechanisms and the influence of the aging process on the interface properties. Since the specimens are manufactured under shipyard conditions, the presence of voids and discontinuities in the adhesive bondline is observed and as a result leads to scatter. Hence, Bayesian linear regression is performed in addition to the ordinary least squares regression to account for the scatter and provide a distribution of plausible values for the power law coefficients. The results highlight the impact of aging on the fatigue property, underscoring the importance of considering environmental effects in the qualification of such joints for marine applications. Full article
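The power-law crack growth model referred to above is linear in log-log space, so both the ordinary least squares fit and its Bayesian counterpart operate on log-transformed data. A minimal OLS sketch on synthetic data (coefficients and scatter are invented for illustration; the Bayesian regression would additionally place priors on the coefficients and return posterior distributions):

```python
import numpy as np

def fit_power_law(delta_g, da_dn):
    """Fit da/dN = C * (dG)^m by ordinary least squares in log10 space.
    Returns (C, m)."""
    x = np.log10(np.asarray(delta_g, dtype=float))
    y = np.log10(np.asarray(da_dn, dtype=float))
    m, log_c = np.polyfit(x, y, 1)  # slope = m, intercept = log10(C)
    return 10.0 ** log_c, m

# Synthetic crack growth data from known coefficients plus log-normal scatter.
rng = np.random.default_rng(0)
true_c, true_m = 1e-8, 3.0
dg = np.logspace(2, 3, 30)                       # strain energy release rate
growth = true_c * dg ** true_m * 10 ** rng.normal(0.0, 0.05, dg.size)
c_hat, m_hat = fit_power_law(dg, growth)
```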

21 pages, 3026 KB  
Article
Adaptive Multi-Timescale Particle Filter for Nonlinear State Estimation in Wastewater Treatment: A Bayesian Fusion Approach with Entropy-Driven Feature Extraction
by Xiaolong Chen, Hongfeng Zhang, Cora Un In Wong and Zhengchun Song
Processes 2025, 13(7), 2005; https://doi.org/10.3390/pr13072005 - 25 Jun 2025
Cited by 6
Abstract
We propose an adaptive multi-timescale particle filter (AMTS-PF) for nonlinear state estimation in wastewater treatment plants (WWTPs) to address multi-scale temporal dynamics. The AMTS-PF decouples the problem into minute-level state updates and hour-level parameter refinements, integrating adaptive noise tuning, multi-scale entropy-driven feature extraction, and dual-timescale particle weighting. It dynamically adjusts noise covariances via Bayesian fusion and uses wavelet-based entropy analysis for adaptive resampling. The method interfaces seamlessly with existing WWTP control systems, providing real-time state estimates and refined parameters. Implemented on a heterogeneous computing architecture, it combines edge-level parallelism and cloud-based inference. Experimental validation shows superior performance over extended Kalman filters and single-timescale particle filters in handling nonlinearities and time-varying dynamics. The proposed AMTS-PF significantly enhances the accuracy of state estimation in WWTPs compared to traditional methods. Specifically, during the 14-day evaluation period using the Benchmark Simulation Model No. 1 (BSM1), the AMTS-PF achieved a root mean square error (RMSE) of 54.3 mg/L for heterotroph biomass (X_H) estimation, which is a 37% reduction compared to the standard particle filter (PF) with an RMSE of 68.9 mg/L. For readily biodegradable substrate (S_S) and particulate products (X_P), the AMTS-PF also demonstrated superior performance with RMSE values of 7.2 mg/L and 9.8 mg/L, respectively, representing improvements of 24% and 21% over the PF. In terms of slow parameters, the AMTS-PF showed a 37% reduction in RMSE for the maximum heterotrophic growth rate (μ_H) estimation compared to the PF.
This work advances the state-of-the-art in WWTP monitoring by unifying multi-scale temporal modeling with adaptive Bayesian estimation, offering a practical solution for improving operational efficiency and process reliability. Full article
(This article belongs to the Special Issue Processes Development for Wastewater Treatment)
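The particle-filter cycle the AMTS-PF builds on (propagate, weight, resample) can be sketched for a toy one-dimensional system; this is a generic bootstrap filter under assumed Gaussian noise models, not the paper's multi-timescale scheme:

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500, q_std=0.5,
                              r_std=1.0, seed=0):
    """Bootstrap particle filter for a 1-D random-walk state observed in
    Gaussian noise. Returns the posterior-mean state estimate per step."""
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # 1. Propagate particles through the (assumed) process model.
        particles = particles + rng.normal(0.0, q_std, n_particles)
        # 2. Weight by the Gaussian measurement likelihood.
        w = np.exp(-0.5 * ((y - particles) / r_std) ** 2)
        w /= w.sum()
        estimates.append(float(np.sum(w * particles)))
        # 3. Systematic resampling to combat weight degeneracy.
        u = (rng.random() + np.arange(n_particles)) / n_particles
        idx = np.minimum(np.searchsorted(np.cumsum(w), u), n_particles - 1)
        particles = particles[idx]
    return np.array(estimates)

# Track a slowly drifting state from noisy observations.
rng = np.random.default_rng(1)
truth = np.cumsum(rng.normal(0.0, 0.3, 100))
obs = truth + rng.normal(0.0, 1.0, 100)
est = bootstrap_particle_filter(obs)
```

The filtered estimate should track the truth more closely than the raw observations; the AMTS-PF extends this cycle with adaptive noise covariances and a second, slower timescale for parameter refinement.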

32 pages, 4701 KB  
Review
Machine-Learning-Guided Design of Nanostructured Metal Oxide Photoanodes for Photoelectrochemical Water Splitting: From Material Discovery to Performance Optimization
by Xiongwei Liang, Shaopeng Yu, Bo Meng, Yongfu Ju, Shuai Wang and Yingning Wang
Nanomaterials 2025, 15(12), 948; https://doi.org/10.3390/nano15120948 - 18 Jun 2025
Cited by 7
Abstract
The rational design of photoanode materials is pivotal for advancing photoelectrochemical (PEC) water splitting toward sustainable hydrogen production. This review highlights recent progress in the machine learning (ML)-assisted development of nanostructured metal oxide photoanodes, focusing on bridging materials discovery and device-level performance optimization. We first delineate the fundamental physicochemical criteria for efficient photoanodes, including suitable band alignment, visible-light absorption, charge carrier mobility, and electrochemical stability. Conventional strategies such as nanostructuring, elemental doping, and surface/interface engineering are critically evaluated. We then discuss the integration of ML techniques—ranging from high-throughput density functional theory (DFT)-based screening to experimental data-driven modeling—for accelerating the identification of promising oxides (e.g., BiVO4, Fe2O3, WO3) and optimizing key parameters such as dopant selection, morphology, and catalyst interfaces. Particular attention is given to surrogate modeling, Bayesian optimization, convolutional neural networks, and explainable AI approaches that enable closed-loop synthesis-experiment-ML frameworks. ML-assisted performance prediction and tandem device design are also addressed. Finally, current challenges in data standardization, model generalizability, and experimental validation are outlined, and future perspectives are proposed for integrating ML with automated platforms and physics-informed modeling to facilitate scalable PEC material development for clean energy applications. Full article
(This article belongs to the Special Issue Nanomaterials for Novel Photoelectrochemical Devices)
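As a loose illustration of the closed-loop surrogate/Bayesian-optimization idea this abstract describes, the sketch below tunes a single hypothetical dopant fraction against a synthetic "photocurrent" objective. The objective function, kernel length scale, and UCB acquisition rule are all assumptions made for the sketch, not details taken from the review:

```python
# Minimal Bayesian-optimization loop (illustrative sketch only).
# The "photocurrent" objective f is synthetic; in a real closed loop it
# would be replaced by a synthesis + PEC measurement step.
import numpy as np

def f(x):
    # hypothetical photocurrent response peaking at dopant fraction 0.3
    return 3.0 * np.exp(-((x - 0.3) ** 2) / 0.02)

def rbf(a, b, ls=0.15):
    # squared-exponential kernel on scalar inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # exact GP regression: posterior mean and std dev on the grid Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.einsum("ij,ij->j", Ks, np.linalg.solve(K, Ks))
    return mu, np.sqrt(np.clip(var, 1e-12, None))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 3)        # three initial "experiments"
y = f(X)
grid = np.linspace(0.0, 1.0, 201)

for _ in range(10):                 # model -> acquire -> "measure" loop
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(mu + 2.0 * sd)]   # UCB acquisition
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))

x_best = X[np.argmax(y)]            # best dopant fraction found so far
```

In a laboratory setting, the call to `f` would be the expensive step (synthesis plus PEC characterization), which is why the surrogate model, rather than the objective itself, drives where to measure next.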
19 pages, 2588 KB  
Article
Optimizing a Bayesian Method for Estimating the Hurst Exponent in Behavioral Sciences
by Madhur Mangalam, Taylor J. Wilson, Joel H. Sommerfeld and Aaron D. Likens
Axioms 2025, 14(6), 421; https://doi.org/10.3390/axioms14060421 - 29 May 2025
Cited by 3 | Viewed by 1638
Abstract
The Bayesian Hurst–Kolmogorov (HK) method estimates the Hurst exponent of a time series more accurately than the age-old Detrended Fluctuation Analysis (DFA), especially when the time series is short. However, this advantage comes at the cost of computation time, which increases exponentially with the time series length N, easily exceeding several hours for N=1024 and limiting the utility of the HK method in real-time paradigms such as biofeedback and brain–computer interfaces. To address this issue, we have provided data on the estimation accuracy of the Hurst exponent H for synthetic time series as a function of the a priori known value of H, the time series length, and the number of samples n simulated from the posterior distribution, a critical step in the Bayesian estimation method. A posterior sample as small as n=25 suffices to estimate H with reasonable accuracy for a time series as short as N=256. Using a larger posterior sample, that is, n>50, provides only a marginal gain in accuracy, which might not be worth the loss in computational efficiency. Results from empirical time series on stride-to-stride intervals in humans walking and running on a treadmill and overground corroborate these findings, specifically allowing reproduction of the rank order of Ĥ for time series containing as few as 32 data points. We recommend balancing the posterior sample size for H against the user's computational resources, advocating for a minimum of n=50; larger sample sizes can be considered based on time and resource constraints when employing the HK process to estimate the Hurst exponent. The present results allow the reader to make informed judgments when optimizing the Bayesian estimation of the Hurst exponent for real-time applications. Full article
(This article belongs to the Special Issue New Perspectives in Mathematical Statistics)
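The overall shape of such a Bayesian estimate of H can be sketched for fractional Gaussian noise (fGn): a flat prior over H, an exact Gaussian likelihood built from the fGn autocorrelation, and n draws from the resulting posterior. This grid-based version is a simplified stand-in written for illustration, not the authors' HK implementation; the grid resolution, series length, and seed are arbitrary choices:

```python
# Grid-based Bayesian estimate of the Hurst exponent H for fractional
# Gaussian noise (illustrative stand-in for the HK method): flat prior on
# H, exact Gaussian likelihood, n draws from the discretized posterior.
import numpy as np

def fgn_cov(N, H):
    # fGn autocovariance: rho(k) = 0.5(|k+1|^2H - 2|k|^2H + |k-1|^2H)
    k = np.arange(N, dtype=float)
    rho = 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H)
                 + np.abs(k - 1) ** (2 * H))
    lags = np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
    return rho[lags]

def log_lik(x, H):
    # Gaussian log-likelihood of x under fGn(H), variance profiled out
    C = fgn_cov(len(x), H)
    L = np.linalg.cholesky(C)
    z = np.linalg.solve(L, x)
    logdet = 2.0 * np.log(np.diag(L)).sum()
    s2 = (z @ z) / len(x)
    return -0.5 * (logdet + len(x) * np.log(s2))

rng = np.random.default_rng(1)
N, H_true = 128, 0.8
x = np.linalg.cholesky(fgn_cov(N, H_true)) @ rng.standard_normal(N)

H_grid = np.linspace(0.05, 0.95, 91)
log_post = np.array([log_lik(x, h) for h in H_grid])   # flat prior
post = np.exp(log_post - log_post.max())
post /= post.sum()

n_draws = 50                       # the "posterior sample size" n
draws = rng.choice(H_grid, size=n_draws, p=post)
H_hat = float(np.median(draws))    # point estimate of H
```

The cost driver the abstract discusses is visible here: each candidate H requires an O(N³) Cholesky factorization of an N×N covariance, so the likelihood sweep, not the n posterior draws, dominates as the series grows.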

20 pages, 525 KB  
Article
Forecasting Robust Gaussian Process State Space Models for Assessing Intervention Impact in Internet of Things Time Series
by Patrick Toman, Nalini Ravishanker, Nathan Lally and Sanguthevar Rajasekaran
Forecasting 2025, 7(2), 22; https://doi.org/10.3390/forecast7020022 - 26 May 2025
Viewed by 2051
Abstract
This article describes a robust Gaussian process state space modeling (GPSSM) approach to assess the impact of an intervention in a time series. Numerous applications can benefit from this approach. Examples include: (1) the time series could be the stock price of a company, and the intervention could be the acquisition of another company; (2) the time series could be the noise coming out of an engine, and the intervention could be a corrective step taken to reduce the noise; (3) the time series could be the number of visits to a web service, and the intervention could be changes made to the user interface. The approach applies to any time series and intervention combination. It is well known that Gaussian process (GP) prior models provide flexibility by placing a non-parametric prior on the functional form of the model. While GPSSMs model a time series in a state space framework by placing a GP prior over the state transition function, probabilistic recurrent state space models (PRSSMs) employ variational approximations to handle the complicated posterior distributions that arise in GPSSMs. The robust PRSSMs (R-PRSSMs) discussed in this article assume a scale mixture of normal distributions instead of the usual normal distribution, which accommodates heavy-tailed behavior and anomalous observations in the time series. Given an exogenous intervention, we use the R-PRSSM for Bayesian fitting and forecasting of the time series; by comparing forecasts with subsequent observations, we can assess the impact of the intervention with a high level of confidence. To illustrate these techniques concretely, we employ a running example: an Internet of Things (IoT) stream of internal temperatures measured by an insurance firm to address the risk of pipe-freeze hazard in a building. We treat the pipe-freeze hazard alert as an exogenous intervention. A comparison of forecasts against the subsequently observed temperatures is used to assess whether an alerted customer took preventive action to avoid a pipe-freeze loss. Full article
(This article belongs to the Section AI Forecasting)
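The forecast-versus-observation logic can be caricatured with synthetic data: fit a model on pre-alert temperatures only, forecast forward with an uncertainty band, and flag the intervention as effective when post-alert observations sit above the band. A linear trend with Gaussian noise is a deliberately crude stand-in for the paper's R-PRSSM, and every number below is invented:

```python
# Did the alerted customer act? Compare post-alert observations against a
# forecast fit on pre-alert data only. Linear trend + Gaussian noise is a
# crude stand-in for the paper's R-PRSSM; all data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
t_pre = np.arange(100)

# pre-alert: internal temperature (deg C) drifting down toward freezing
pre = 10.0 - 0.05 * t_pre + rng.normal(0.0, 0.3, 100)
# post-alert: the customer turns the heat on, so temperatures recover
post = pre[-1] + 0.3 * np.arange(1, 21) + rng.normal(0.0, 0.3, 20)

# fit the forecasting model on pre-alert data only
slope, intercept = np.polyfit(t_pre, pre, 1)
resid_sd = np.std(pre - (slope * t_pre + intercept))

t_post = np.arange(100, 120)
forecast = slope * t_post + intercept
upper = forecast + 3.0 * resid_sd       # crude ~99% upper forecast band

# intervention judged effective if most observations exceed the band
acted = np.mean(post > upper) > 0.5
```

Had the customer ignored the alert, the temperatures would have continued along the fitted downward trend and stayed inside the band, and `acted` would come out `False`; the R-PRSSM in the article plays the role of the trend-plus-band forecaster here, with heavy-tailed noise making the band robust to anomalous readings.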