Search Results (143)

Search Parameters:
Keywords = two-term Gaussian distribution

21 pages, 843 KB  
Article
Assessing Hierarchical Temporal Memory Against an LSTM Baseline for Short-Term Smart-Meter Load Forecasting
by Antón Román-Portabales and Martín López-Nores
Future Internet 2026, 18(4), 222; https://doi.org/10.3390/fi18040222 - 21 Apr 2026
Viewed by 93
Abstract
Short-term load forecasting is a key capability for smart-grid operation, but real smart-meter streams are affected by missing values, communication noise, and non-stationary consumption patterns. This paper studies forecasting using raw smart-meter data collected from domestic consumers in a medium-sized city in southern Spain. In particular, we assess Hierarchical Temporal Memory (HTM), a biologically inspired online sequence learner, against a family of Long Short-Term Memory (LSTM)-based recurrent baselines. HTM offers continual adaptation and avoids a separate training phase, whereas LSTM relies on offline supervised training and may require retraining or fine-tuning under distribution shift. For five-step-ahead forecasting, HTM achieved a test RMSE of 251 kWh (about 15% of average consumption). After hyperparameter optimization, the best tested LSTM configuration achieved a test RMSE of approximately 250 kWh under clean conditions, indicating nearly identical point accuracy between the two approaches. Under synthetic Gaussian-noise injection, however, HTM remained comparatively stable, whereas the optimized LSTM configuration degraded markedly under the tested perturbation protocol. In addition, HTM exhibited a lower runtime in the tested CPU-based implementation. These findings suggest that HTM is a viable online alternative for aggregated smart-meter forecasting, offering competitive accuracy together with a favorable operational profile under the specific evaluation setup considered here.
(This article belongs to the Special Issue Artificial Intelligence in Smart Grids)
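
The noise-robustness comparison lends itself to a simple harness. Below is a minimal sketch of the kind of Gaussian-noise injection test the abstract describes, assuming a fitted forecaster exposing a `predict` method; the function names, relative noise levels, and protocol are illustrative, not the paper's exact setup.

```python
# Hypothetical robustness check: RMSE of a fitted forecaster on clean test
# inputs vs. the same inputs with additive Gaussian noise injected.
# `model.predict`, the noise levels, and the seed are assumptions.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def noise_sweep(model, X_test, y_test, rel_sigmas=(0.0, 0.05, 0.10, 0.20), seed=0):
    rng = np.random.default_rng(seed)
    scale = float(np.std(X_test))
    results = {}
    for s in rel_sigmas:
        X_noisy = X_test + rng.normal(0.0, s * scale, size=X_test.shape)
        results[s] = rmse(y_test, model.predict(X_noisy))
    return results  # RMSE (kWh) per noise level, e.g. {0.0: 250.3, ...}
```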

30 pages, 4429 KB  
Article
Reliability Assessment of Harmonic Reducers Based on the Two-Phase Hybrid Stochastic Degradation Process
by Lai Wei, Peng Liu, Hailong Tian, Haoyuan Li and Yunshenghao Qiu
Sensors 2026, 26(8), 2437; https://doi.org/10.3390/s26082437 - 15 Apr 2026
Viewed by 297
Abstract
Harmonic reducers exhibit non-stationary and phase-dependent degradation behavior during long-term service, challenging the ability of classical stochastic degradation models to accurately assess reliability. To address phase-dependent differences in degradation behavior, this paper proposes a reliability assessment model based on a two-phase hybrid stochastic degradation process. In the proposed framework, the Wiener process is employed to characterize early-phase gradual degradation dominated by stochastic fluctuations, while the Inverse Gaussian process is used to describe later-phase monotonically accelerated degradation driven by cumulative damage. The framework allows for sample-level variability in transition times to more realistically capture individual degradation behavior. The Schwarz Information Criterion is also adopted to detect change points. Maximum likelihood estimation is performed for model parameter inference, and analytical expressions for the reliability function, cumulative distribution function, and probability density function are derived. Numerical results indicate that a change point exists for each tested product and that the proposed model achieves the best goodness of fit among the considered candidates, demonstrating its superiority in capturing phase-dependent characteristics of harmonic reducer degradation. In terms of reliability assessment bias, the proposed model (0.06%) significantly outperforms the Wiener degradation model (32%) and the IG degradation model (9.9%). These results further confirm that, under an identical failure threshold, the proposed approach yields more accurate and realistic reliability assessment outcomes.
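
For readers unfamiliar with the two-phase construction, a generic form consistent with the abstract reads as follows; the paper's exact parameterization may differ.

```latex
% Sketch of a two-phase hybrid degradation path: a Wiener phase up to a
% sample-specific change point \tau, then an inverse Gaussian (IG) phase.
X(t) =
\begin{cases}
  \mu_1 t + \sigma_B B(t),   & 0 \le t \le \tau, \\
  X(\tau) + Y(t - \tau),     & t > \tau,
\end{cases}
\qquad
Y(t) - Y(s) \sim \mathrm{IG}\!\left(\mu_2 (t - s),\ \lambda (t - s)^2\right)
```

Here B(t) is standard Brownian motion, the IG increments are independent and monotone (capturing the accelerated later phase), and the reliability function follows from the first-passage time of X(t) over the failure threshold.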

25 pages, 5309 KB  
Article
DTTE-Net: Prediction of SCR-Inlet NOx Concentration in Coal-Fired Boilers Based on Time–Frequency Feature Fusion
by Cheng Huang, Yi An, Mengting Li, Haiyang Zhang and Jiwei Wang
Appl. Sci. 2026, 16(7), 3495; https://doi.org/10.3390/app16073495 - 3 Apr 2026
Viewed by 312
Abstract
Against the backdrop of large-scale integration of renewables into the power grid, frequent load-following operation of thermal power units substantially increases the difficulty of controlling boiler NOx emissions. Accurate forecasting of boiler NOx emissions is crucial for guiding efficient and clean operation under such flexible operating conditions. However, under frequent load-following conditions, NOx dynamics are highly nonlinear and non-stationary, making it challenging to achieve accurate prediction using only time-domain information. To address these issues, we propose DTTE-Net, a time–frequency feature fusion framework for predicting SCR-inlet NOx concentration in coal-fired boilers. DTTE-Net consists of three components: a time-domain branch, a frequency-domain branch, and a gated feature fusion module. The time-domain branch captures short-term fluctuations and long-range temporal dependencies, while the frequency-domain branch extracts complementary spectral representations to enhance the characterization of non-stationary fluctuations. The gated feature fusion module then adaptively integrates features from the two domains via a gating mechanism and produces the NOx concentration forecast. In addition, a Gaussian kernel-based loss is introduced to improve robustness to nonlinear error structures. Experiments on real distributed control system data from a 660 MW ultra-supercritical coal-fired unit show that DTTE-Net outperforms existing baseline models, achieving lower forecasting errors and higher R².
(This article belongs to the Section Energy Science and Technology)
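
The abstract does not spell out the Gaussian kernel-based loss; one standard choice in this role is the correntropy-induced loss, shown here as a plausible form rather than the paper's exact definition.

```latex
% Correntropy-style Gaussian kernel loss on the prediction error e = y - \hat{y}:
\mathcal{L}_\sigma(e) = 1 - \exp\!\left(-\frac{e^2}{2\sigma^2}\right)
```

For small errors this behaves like the quadratic loss e²/(2σ²), while for large errors it saturates at 1, which downweights outliers and heavy-tailed error structures relative to plain MSE.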

16 pages, 2311 KB  
Article
The Novel Models for Identifying the Vertical Structure of Urban Vegetation from UAV LiDAR Data
by Hang Yang, Rongxin Deng, Xinmeng Jing, Zhen Dong, Xiaoyu Yang, Jingyi Li and Zhiwen Mei
Remote Sens. 2026, 18(5), 692; https://doi.org/10.3390/rs18050692 - 26 Feb 2026
Viewed by 493
Abstract
Accurate quantification of vegetation vertical structure is crucial for analyzing the ecological functions of urban green spaces. However, constrained by the complexity of vegetation structure and spatial heterogeneity, current approaches for extracting vegetation vertical structure from airborne LiDAR have limitations in terms of layer boundary identification stability, threshold dependency, and ecological plausibility. This study developed two integrated UAV LiDAR-based stratification frameworks for identifying urban riparian vegetation vertical structure by combining established statistical modeling and signal processing techniques: (1) a Gaussian Mixture Model with Bayesian Information Criterion (GMM-BIC)-based probabilistic stratification framework; (2) a Savitzky–Golay filtering and Pruned Exact Linear Time (SG-PELT)-based change-point detection framework. Furthermore, an ecological height constraint was incorporated into the models to achieve biologically plausible adjustments. The two models were applied in the study area and compared using reference data. The results showed that the GMM-BIC method achieved an overall classification accuracy of 91.06%, with a macro-averaged F1-score of 87.77%, while the SG-PELT method attained an overall accuracy of 84.57%, with a macro-averaged F1-score of 79.20%. These results demonstrate that both models can effectively identify the vertical structure of urban vegetation. In particular, the two models exhibited distinct characteristics across different scenarios. The GMM-BIC model showed superior stratification accuracy in regions where the vegetation height distribution displayed pronounced multi-peak characteristics and distinct differences among height segments. In comparison, the SG-PELT model demonstrated greater sensitivity in areas with significant height variation and clearly defined abrupt transitions between layers. These models could provide new methodologies for monitoring vegetation vertical structure and offer data support for biodiversity monitoring and ecological function assessment within urban ecosystems.
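
The GMM-BIC step maps directly onto standard tooling. A minimal sketch with scikit-learn, assuming a 1-D array of LiDAR return heights for one analysis cell; the variable names and component cap are illustrative, and the paper's ecological height constraint is not shown.

```python
# Fit GMMs with 1..k_max components to point heights and keep the
# BIC-optimal model; its sorted component means act as candidate
# vegetation-layer centers for the cell.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_bic_layers(heights, k_max=5, seed=0):
    X = np.asarray(heights, dtype=float).reshape(-1, 1)
    models = [GaussianMixture(n_components=k, random_state=seed).fit(X)
              for k in range(1, k_max + 1)]
    best = min(models, key=lambda gm: gm.bic(X))
    return np.sort(best.means_.ravel())  # candidate layer heights, ascending
```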

36 pages, 3000 KB  
Article
Bivariate Generalized Split-BREAK Process with Application in Modeling Crime Dynamics
by Snežana Stojičić, Vladica S. Stojanović, Mihailo Jovanović, Dušan Joksimović and Radovan Radovanović
Mathematics 2026, 14(5), 754; https://doi.org/10.3390/math14050754 - 24 Feb 2026
Viewed by 289
Abstract
The manuscript proposes a new non-linear and non-stationary bivariate stochastic model, termed the two-dimensional Gaussian (generalized) Split-BREAK (2D-GSB) process, as a multivariate extension of the univariate GSB framework. The generalization consists in introducing a common threshold mechanism based on the norm of a bivariate innovation vector and a single synchronized Bernoulli indicator which jointly governs regime activation in both components. This structure induces cross-dependent regime shifts and yields a binomial–Gaussian mixture representation of the joint distribution, explicitly linking contemporaneous dependence with a common latent regime mechanism. The fundamental properties of the proposed model are established, with particular emphasis on its asymptotic behavior. A parameter estimation procedure is developed using both the method of moments (MoM) and the empirical characteristic function (ECF) approach, and the performance of both estimators is evaluated through Monte Carlo simulations. An empirical application to daily crime data illustrates how the proposed framework captures synchronized structural shocks and heavy-tailed features in related crime categories. In comparison with a standard VAR(1) benchmark, the 2D-GSB specification provides a parsimonious yet substantially improved likelihood-based fit, thus offering a theoretically sound framework for analyzing multivariate time series characterized by synchronized regime shifts and heavy-tailed behavior.
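
A schematic of the shared-threshold mechanism, written in univariate-GSB style extended to two components, may help; this is a reading of the abstract, not the paper's exact equations.

```latex
% Bivariate split-break sketch: y_t, m_t, \varepsilon_t \in \mathbb{R}^2, with one
% Bernoulli indicator q_t driven by the norm of the innovation vector.
y_t = m_t + \varepsilon_t, \qquad
m_t = m_{t-1} + q_{t-1}\,\varepsilon_{t-1}, \qquad
q_t = \mathbf{1}\{\lVert \varepsilon_t \rVert > c\}
```

Because the single indicator q_t gates both components at once, the two series switch regimes simultaneously, which is what produces the synchronized structural shocks and the binomial–Gaussian mixture form of the joint distribution.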

24 pages, 4769 KB  
Article
A QGIS-Based Gaussian Plume Dispersion Model for Point Sources: Development and Intercomparison of Reflective and Non-Reflective Formulations
by Marius Daniel Bontos, Georgiana-Claudia Vasiliu, Elena-Laura Barbu, Corina Boncescu and Diana Mariana Cocârță
Appl. Sci. 2026, 16(4), 1833; https://doi.org/10.3390/app16041833 - 12 Feb 2026
Viewed by 628
Abstract
Air pollution from industrial point sources remains a major concern in urban environments, highlighting the need for accessible tools that support both education and preliminary environmental assessment. This study presents the development and intercomparison of an open-source, QGIS-based geospatial model for simulating atmospheric pollutant dispersion from fixed point sources using the Gaussian plume formulation. The model integrates emission parameters, meteorological conditions, and terrain data within a fully spatial workflow implemented through the QGIS graphical modeler, enabling the generation of ground-level concentration fields without advanced programming expertise. Dispersion is simulated with and without inclusion of a ground reflection term, allowing comparative analysis of boundary condition effects. The model was applied to a representative urban industrial source at the National University of Science and Technology POLITEHNICA Bucharest, using CO₂ emissions treated as a passive tracer. Model outputs were evaluated through descriptive statistics and quantitative comparison with two established open-source Gaussian plume implementations developed in Python. Ground reflection leads to an increase of approximately 60% in modeled near-surface concentrations, particularly in the upper tail of the distribution, underscoring its importance for screening-level exposure assessment. The proposed model provides a transparent, reproducible, and user-friendly framework suitable for teaching activities, rapid screening analyses, and exploratory air quality assessments.
(This article belongs to the Section Environmental Sciences)
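
For orientation, the reflective and non-reflective formulations differ in a single term of the classical Gaussian plume equation for a continuous point source of strength Q at effective stack height H:

```latex
C(x, y, z) = \frac{Q}{2\pi u\,\sigma_y \sigma_z}
\exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)
\left[
\exp\!\left(-\frac{(z - H)^2}{2\sigma_z^2}\right)
+ \exp\!\left(-\frac{(z + H)^2}{2\sigma_z^2}\right)
\right]
```

where u is the wind speed and σ_y(x), σ_z(x) are stability-dependent dispersion coefficients. The second exponential in the brackets is the ground-reflection (image-source) term; dropping it yields the non-reflective variant compared in the study.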

19 pages, 2954 KB  
Article
An Adaptive Hybrid Short-Term Load Forecasting Framework Based on Improved Rime Optimization Variational Mode Decomposition and Cross-Dimensional Attention
by Aodi Zhang, Daobing Liu and Jianquan Liao
Energies 2026, 19(2), 497; https://doi.org/10.3390/en19020497 - 19 Jan 2026
Viewed by 301
Abstract
Accurate Short-Term Load Forecasting (STLF) is paramount for the stable and economical operation of power systems, particularly in the context of high renewable energy penetration, which exacerbates load volatility and non-stationarity. The prevailing advanced “decomposition–ensemble” paradigm, however, faces two significant challenges when processing non-stationary signals: (1) The performance of Variational Mode Decomposition (VMD) is highly dependent on its hyperparameters (K, α), and traditional meta-heuristic algorithms (e.g., GA, GWO, PSO) are prone to converging to local optima during the optimization process; (2) Deep learning predictors struggle to dynamically weigh the importance of multi-dimensional, heterogeneous features (such as the decomposed Intrinsic Mode Functions (IMFs) and external climatic factors). To address these issues, this paper proposes a novel, adaptive hybrid forecasting framework, namely IRIME-VMD-CDA-LSTNet. Firstly, an Improved Rime Optimization Algorithm (IRIME) integrated with a Gaussian Mutation strategy is proposed. This algorithm adaptively optimizes the VMD hyperparameters by targeting the minimization of average sample entropy, enabling it to effectively escape local optima. Secondly, the optimally decomposed IMFs are combined with climatic features to construct a multi-dimensional information matrix. Finally, this matrix is fed into an innovative Cross-Dimensional Attention (CDA) LSTNet model, which dynamically allocates weights to each feature dimension. Ablation experiments conducted on a real-world dataset from a distribution substation demonstrate that, compared to GA-VMD, GWO-VMD, and PSO-VMD, the proposed IRIME-VMD method achieves a reduction in Root Mean Square Error (RMSE) of up to 18.9%. More importantly, the proposed model effectively mitigates the “prediction lag” phenomenon commonly observed in baseline models, especially during peak load periods. This framework provides a robust and high-accuracy solution for non-stationary load forecasting, holding significant practical value for the operation of modern power systems.
(This article belongs to the Section F: Electrical Engineering)
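
The Gaussian Mutation ingredient is simple to illustrate. Below is a generic sketch of how such a step typically perturbs the incumbent best solution inside a metaheuristic loop; it is not the IRIME update rule itself, and the bounds and annealing schedule are assumptions.

```python
# Generic Gaussian-mutation step for escaping local optima: perturb the
# current best solution with zero-mean Gaussian noise whose scale shrinks
# over iterations, then clip back into the search box (e.g., VMD's (K, alpha)).
import numpy as np

def gaussian_mutation(best, lower, upper, iteration, max_iter, rng):
    sigma = 0.1 * (upper - lower) * (1.0 - iteration / max_iter)  # annealed scale
    candidate = best + rng.normal(0.0, sigma, size=best.shape)
    return np.clip(candidate, lower, upper)
```

In the paper's setting, the fitness being minimized is the average sample entropy of the decomposed modes; a mutated candidate would then be accepted according to the optimizer's selection rule.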

52 pages, 782 KB  
Article
Single-Stage Causal Incentive Design via Optimal Interventions
by Sebastián Bejos, Eduardo F. Morales, Luis Enrique Sucar and Enrique Munoz de Cote
Entropy 2026, 28(1), 4; https://doi.org/10.3390/e28010004 - 19 Dec 2025
Cited by 1 | Viewed by 682
Abstract
We introduce Causal Incentive Design (CID), a framework that applies causal inference to canonical single-stage principal–agent problems (PAPs) characterized by bilateral private information. Within CID, the operating rules of PAPs are formalized using an additive-noise causal graphical model (CGM). Incentives are modeled as interventions on a function space variable, Γ, which correspond to policy interventions in the principal–follower causal relation. The causal inference target estimand V(Γ) is defined as the expected value of the principal’s utility variable under a specified policy intervention in the post-intervention distribution. In the context of additive-Gaussian independent noise, the estimand V(Γ) decomposes into a two-layer expectation: (i) an inner Gaussian smoothing of the principal’s utility regression; and (ii) an outer averaging over the conditional probability of the follower’s action given the incentive policy. A Gauss–Hermite quadrature method is employed to efficiently estimate the first layer, while a policy-local kernel reweighting approach is used for the second. For offline selection of a single incentive policy, a Functional Causal Bayesian Optimization (FCBO) algorithm is introduced. This algorithm models the objective functional γ ↦ V(γ) using a functional Gaussian process surrogate defined on a Reproducing Kernel Hilbert Space (RKHS) domain and utilizes an Upper Confidence Bound (UCB) acquisition functional. Consequently, the policy value V(γ) becomes an interventional query that can be answered using offline observational data under standard identifiability assumptions. High-probability cumulative-regret bounds are established in terms of differential information gain for the proposed FCBO algorithm. Collectively, these elements constitute the central contributions of the CID framework, which integrates causal inference through identification and estimation with policy search in principal–agent problems under private information. This approach establishes a causal decision-making pipeline that enables commitment to a high-performing incentive in a single-shot game, supported by regret guarantees. Provided that the data used for estimation is sufficient, the resulting offline pipeline is appropriate for scenarios where adaptive deployment is impractical or costly. Beyond the methodological contribution, this work introduces a novel application of causal graphical models and causal reasoning to incentive design and principal–agent problems, which are central to economics and multi-agent systems.
(This article belongs to the Special Issue Causal Graphical Models and Their Applications)
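
The inner "Gaussian smoothing" layer is a textbook use of Gauss–Hermite quadrature. A small sketch, assuming a scalar N(μ, σ²) noise variable and a placeholder utility regression g:

```python
# Gauss-Hermite estimate of E[g(X)] for X ~ N(mu, sigma^2). With physicists'
# nodes x_i and weights w_i (quadrature against exp(-x^2)), the substitution
# x = mu + sqrt(2)*sigma*t gives E[g(X)] ≈ (1/sqrt(pi)) * sum_i w_i g(mu + sqrt(2)*sigma*x_i).
import numpy as np

def gauss_hermite_mean(g, mu, sigma, n_nodes=32):
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    return float(weights @ g(mu + np.sqrt(2.0) * sigma * nodes) / np.sqrt(np.pi))

# Sanity check: E[X^2] = mu^2 + sigma^2 = 5 for X ~ N(1, 2^2).
assert abs(gauss_hermite_mean(lambda x: x ** 2, 1.0, 2.0) - 5.0) < 1e-8
```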

41 pages, 7185 KB  
Article
Two-Stage Dam Displacement Analysis Framework Based on Improved Isolation Forest and Metaheuristic-Optimized Random Forest
by Zhihang Deng, Qiang Wu and Minshui Huang
Buildings 2025, 15(24), 4467; https://doi.org/10.3390/buildings15244467 - 10 Dec 2025
Cited by 1 | Viewed by 544
Abstract
Dam displacement monitoring is crucial for assessing structural safety; however, conventional models often prioritize single-task prediction, leading to an inherent difficulty in balancing monitoring data quality with model performance. To bridge this gap, this study proposes a novel two-stage analytical framework that synergistically integrates an improved isolation forest (iForest) with a metaheuristic-optimized random forest (RF). The first stage focuses on data cleaning, where Kalman filtering is applied for denoising, and a newly developed Dynamic Threshold Isolation Forest (DTIF) algorithm is introduced to effectively isolate noise and outliers amidst complex environmental loads. In the second stage, the model’s predictive capability is enhanced by first employing the LASSO algorithm for feature importance analysis and optimal subset selection, followed by an Improved Reptile Search Algorithm (IRSA) for fine-tuning RF hyperparameters, thereby significantly boosting the model’s robustness. The IRSA incorporates several key improvements: Tent chaotic mapping during initialization to ensure population diversity, an adaptive parameter adjustment mechanism combined with a Lévy flight strategy in the encircling phase to dynamically balance global exploration and convergence, and the integration of elite opposition-based learning with Gaussian perturbation in the hunting phase to refine local exploitation. Validated against field data from a concrete hyperbolic arch dam, the proposed DTIF algorithm demonstrates superior anomaly detection accuracy across nine distinct outlier distribution scenarios. Moreover, for long-term displacement prediction tasks, the IRSA-RF model substantially outperforms traditional benchmark models in both predictive accuracy and generalization capability, providing a reliable early risk warning and decision-support tool for engineering practice.
(This article belongs to the Special Issue Structural Health Monitoring Through Advanced Artificial Intelligence)
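
The DTIF algorithm is not reproduced here, but the baseline it refines is easy to stand up. A sketch using scikit-learn's IsolationForest with a rolling-quantile cutoff standing in for the paper's dynamic thresholding; the window length and quantile are illustrative.

```python
# Outlier screening for a displacement series: score points with
# IsolationForest, then flag scores below a rolling quantile so the cutoff
# adapts to slowly varying environmental loads. A stand-in for DTIF only.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

def flag_outliers(displacement, window=200, q=0.05, seed=0):
    X = np.asarray(displacement, dtype=float).reshape(-1, 1)
    scores = IsolationForest(random_state=seed).fit(X).score_samples(X)
    s = pd.Series(scores)
    cutoff = s.rolling(window, min_periods=1).quantile(q)
    return (s < cutoff).to_numpy()  # True where a point looks anomalous
```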

22 pages, 2792 KB  
Article
Compression of High-Component Gaussian Mixture Model (GMM) Based on Multi-Scale Mixture Compression Model
by Linwei Zhang, Jin Zhang, Mingye Tan and Shi Liang
Electronics 2025, 14(24), 4858; https://doi.org/10.3390/electronics14244858 - 10 Dec 2025
Viewed by 606
Abstract
This study addresses the redundancy problem caused by an excessive number of components in Gaussian mixture models (GMMs) in practical applications, as well as derivative issues such as overfitting and exponential growth of computational complexity, and proposes a component reduction method based on the GMM multi-scale mixture compression model (GMMultiMixer). Traditional GMM compression methods are limited by local optima, which can lead to model distortion and difficulty in handling complex multi-peak distributions. This paper draws on the multi-scale hybrid architecture and dynamic feature extraction capabilities of the TimeMixer++ model to propose the GMMultiMixer model for reconstructing the weights, means, and covariance parameters of a GMM, thereby achieving an optimal approximation of the original model. Experimental results demonstrate that this method significantly outperforms traditional strategies in terms of KL divergence metrics, particularly when fitting multi-modal, high-dimensional complex distributions, and it can also handle the compression of two-dimensional GMMs. Additionally, when combined with Kalman filtering for unmanned aerial vehicle (UAV) state estimation, this compression strategy effectively improves the system’s computational efficiency and state estimation accuracy.
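
The KL-divergence metric used to judge compression quality can be estimated by Monte Carlo, since KL between two GMMs has no closed form. A short sketch, assuming `gmm_full` and `gmm_small` are fitted scikit-learn `GaussianMixture` objects for the original and compressed models:

```python
# Monte Carlo estimate of KL(p || q) between two Gaussian mixtures:
# KL(p || q) ≈ mean over x ~ p of [log p(x) - log q(x)].
import numpy as np

def mc_kl_divergence(gmm_full, gmm_small, n_samples=100_000):
    X, _ = gmm_full.sample(n_samples)        # draws from the original mixture
    return float(np.mean(gmm_full.score_samples(X)       # log p(x)
                         - gmm_small.score_samples(X)))  # log q(x)
```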

17 pages, 1877 KB  
Article
Does Score Bias Correction Improve the Fusion of Classifiers?
by Luis Vergara and Addisson Salazar
Mach. Learn. Knowl. Extr. 2025, 7(4), 151; https://doi.org/10.3390/make7040151 - 24 Nov 2025
Viewed by 558
Abstract
We demonstrate that the potential bias in the scores generated by individual classifiers negatively affects their fusion. Consequently, we present an algorithm to improve the effectiveness of score fusion in classification. The algorithm corrects the class-conditional score bias before fusion. The value of the procedure is demonstrated theoretically, first in general terms and then for exponential models of the score class-conditional distributions. The case of beta distributions is also addressed using Monte Carlo simulations. Finally, a real-life application fusing two modalities (EEG, ECG) and two classifiers (Gaussian Bayes and Logistic Regression) is included, showing significant improvement with respect to conventional fusion without bias correction.
(This article belongs to the Section Learning)
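
One way to absorb class-conditional score bias before fusion is to calibrate each classifier's scores per class on validation data and fuse via summed log-likelihoods, so any constant bias is soaked up by the estimated means. The sketch below is illustrative and is not the authors' exact correction algorithm.

```python
# Per-classifier, per-class Gaussian calibration of scores, followed by
# likelihood-based fusion for a binary problem. Bias in either classifier's
# class-conditional scores is absorbed into the fitted means.
import numpy as np

def fit_score_models(val_scores, val_labels):
    # val_scores: (n_samples, n_classifiers); val_labels in {0, 1}
    return {k: (val_scores[val_labels == k].mean(axis=0),
                val_scores[val_labels == k].std(axis=0) + 1e-9)
            for k in (0, 1)}

def fuse(scores, models):
    ll = {k: (-0.5 * ((scores - mu) / sd) ** 2 - np.log(sd)).sum(axis=1)
          for k, (mu, sd) in models.items()}
    return (ll[1] > ll[0]).astype(int)  # fused class decision per sample
```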

28 pages, 623 KB  
Article
Representative Points of the Inverse Gaussian Distribution and Their Applications
by Wen-Wen Hu, Kai-Tai Fang and Xiao-Ling Peng
Entropy 2025, 27(12), 1190; https://doi.org/10.3390/e27121190 - 24 Nov 2025
Viewed by 720
Abstract
The inverse Gaussian (IG) distribution, as an important class of skewed continuous distributions, is widely applied in fields such as lifetime testing, financial modeling, and volatility analysis. This paper makes two primary contributions to the statistical inference of the IG distribution. First, a systematic investigation is presented, for the first time, into three types of representative points (RPs)—Monte Carlo (MC-RPs), quasi-Monte Carlo (QMC-RPs), and mean square error RPs (MSE-RPs)—as a tool for the efficient discrete approximation of the IG distribution, thereby addressing the common scenario where practical data is discrete or requires discretization. The performance of these RPs is thoroughly examined in applications such as low-order moment estimation, density function approximation, and resampling. Simulation results demonstrate that the MSE-RPs consistently outperform the other two types in terms of approximation accuracy and robustness. Second, the Harrell–Davis (HD) and three Sfakianakis–Verginis (SV1, SV2, SV3) quantile estimators are introduced to enhance the representativeness of samples from the IG distribution, thereby significantly improving the accuracy of parameter estimation. Moreover, case studies based on real-world data confirm the effectiveness and practical utility of this quantile estimator methodology.
(This article belongs to the Section Information Theory, Probability and Statistics)
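
For reference, the MSE-RPs of a distribution F (elsewhere called principal points) are defined as the support points minimizing the expected squared distance to the nearest point:

```latex
\{x_1, \dots, x_n\} \;=\; \arg\min_{x_1 < \cdots < x_n}\;
\mathbb{E}_{X \sim F}\!\left[\min_{1 \le i \le n} (X - x_i)^2\right]
```

They satisfy the self-consistency condition that each x_i is the conditional mean of X over its own Voronoi cell, so for a given IG(μ, λ) distribution they can be computed by a Lloyd-type fixed-point iteration.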

10 pages, 5564 KB  
Proceeding Paper
Bayesian Regularization for Dynamical System Identification: Additive Noise Models
by Robert K. Niven, Laurent Cordier, Ali Mohammad-Djafari, Markus Abel and Markus Quade
Phys. Sci. Forum 2025, 12(1), 17; https://doi.org/10.3390/psf2025012017 - 14 Nov 2025
Viewed by 707
Abstract
Consider the dynamical system ẋ = f(x), where x ∈ ℝⁿ is the state vector, ẋ is the time or spatial derivative, and f is the system model. We wish to identify the unknown f from its time-series or spatial data. For this, we propose a Bayesian framework based on the maximum a posteriori (MAP) point estimate, to give a generalized Tikhonov regularization method with the residual and regularization terms identified, respectively, with the negative logarithms of the likelihood and prior distributions. As well as estimates of the model coefficients, the Bayesian interpretation provides access to the full Bayesian apparatus, including the ranking of models, the quantification of model uncertainties, and the estimation of unknown (nuisance) hyperparameters. For multivariate Gaussian likelihood and prior distributions, the Bayesian formulation gives a Gaussian posterior distribution, in which the numerator contains a Mahalanobis distance or “Gaussian norm”. In this study, two Bayesian algorithms for the estimation of hyperparameters—the joint maximum a posteriori (JMAP) and variational Bayesian approximation (VBA)—are compared to the popular SINDy, LASSO, and ridge regression algorithms for the analysis of several dynamical systems with additive noise. We consider two dynamical systems, the Lorenz convection system and the Shil’nikov cubic system, with four choices of noise model: symmetric Gaussian or Laplace noise and skewed Rayleigh or Erlang noise, with different magnitudes. The posterior Gaussian norm is found to provide a robust metric for quantitative model selection—with quantification of the model uncertainties—across all dynamical systems and noise models examined.
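
Concretely, for a model that is linear in its coefficients ξ (say ẋ ≈ Aξ, with A a library of candidate functions evaluated on the data, as in SINDy), Gaussian likelihood and prior give the generalized Tikhonov objective described in the abstract:

```latex
\hat{\xi}_{\mathrm{MAP}}
= \arg\min_{\xi}\;
\tfrac{1}{2}\,(\dot{X} - A\xi)^{\top}\Sigma^{-1}(\dot{X} - A\xi)
\;+\;
\tfrac{1}{2}\,(\xi - \mu)^{\top}\Gamma^{-1}(\xi - \mu)
```

The first quadratic is the negative log-likelihood (residual term) and the second is the negative log-prior (regularization term); each is a squared Mahalanobis distance, i.e., the “Gaussian norm” used for model selection.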

17 pages, 591 KB  
Article
Extending Approximate Bayesian Computation to Non-Linear Regression Models: The Case of Composite Distributions
by Mostafa S. Aminzadeh and Min Deng
Risks 2025, 13(11), 220; https://doi.org/10.3390/risks13110220 - 5 Nov 2025
Viewed by 699
Abstract
Modeling loss data is a crucial aspect of actuarial science. In the insurance industry, small claims occur frequently, while large claims are rare. Traditional heavy-tail distributions, such as the Weibull, Log-Normal, and Inverse Gaussian distributions, are not suitable for describing insurance data, which often exhibit skewness and fat tails. The literature has explored classical and Bayesian inference methods for the parameters of composite distributions, such as the Exponential–Pareto, Weibull–Pareto, and Inverse Gamma–Pareto distributions. These models effectively separate small to moderate losses from significant losses using a threshold parameter. This research introduces a new composite distribution, the Gamma–Pareto distribution with two parameters, and employs a numerical computational approach to find the maximum likelihood estimates (MLEs) of its parameters. A novel computational approach is proposed for a nonlinear regression model in which the loss variable follows the Gamma–Pareto distribution and depends on multiple covariates. The maximum likelihood (ML) and Approximate Bayesian Computation (ABC) methods are used to estimate the regression parameters. The Fisher information matrix, along with a multivariate normal distribution as the prior distribution, is utilized through the ABC method. Simulation studies indicate that the ABC method outperforms the ML method in terms of accuracy.
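
The ABC ingredient is generic rejection sampling. A minimal sketch, in which the prior sampler, simulator, summary statistics, and tolerance are all placeholders to be matched to the Gamma–Pareto regression setting rather than the paper's exact scheme:

```python
# Rejection ABC: draw parameters from the prior, simulate data, and keep
# draws whose summary statistics land within eps of the observed ones.
import numpy as np

def abc_rejection(observed, prior_sample, simulate, summary, eps,
                  n_draws=100_000, seed=0):
    rng = np.random.default_rng(seed)
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample(rng)
        s_sim = summary(simulate(theta, rng))
        if np.linalg.norm(np.asarray(s_sim) - np.asarray(s_obs)) < eps:
            accepted.append(theta)
    return np.asarray(accepted)  # approximate posterior draws
```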

19 pages, 1620 KB  
Article
Secure Quantum Teleportation of Squeezed Thermal States
by Alexei Zubarev, Marina Cuzminschi and Aurelian Isar
Symmetry 2025, 17(11), 1804; https://doi.org/10.3390/sym17111804 - 26 Oct 2025
Viewed by 1129
Abstract
Quantum teleportation is a fundamental protocol in quantum information science. It represents a critical resource for quantum communication and distributed quantum computing. We derive an analytical expression for the fidelity of teleportation of an input squeezed thermal state, using a bipartite Gaussian resource state shared between Alice and Bob. Each mode of the resource state is susceptible to the influence of the environment. We employ the characteristic function approach in conjunction with the covariance matrix formalism. The fidelity of teleportation is expressed in terms of the input and resource state covariance matrices. We investigate, as an example, the feasibility of secure quantum teleportation of a squeezed thermal state using a two-mode resource state whose modes are placed in separate thermal baths. Successful quantum teleportation requires meeting two criteria: the presence of two-way quantum steering and a teleportation fidelity exceeding the classical threshold. Quantum steering is asymmetric by nature and has found applications in quantum cryptography and secure quantum teleportation. Weak squeezing and a high number of average thermal photons in the input states lead to an increase in the fidelity of teleportation. Generally, steering disappears much faster than the fidelity of teleportation decreases below its classical limit.
