Proceeding Paper

Bayesian Regularization for Dynamical System Identification: Additive Noise Models †

by Robert K. Niven 1,2,*, Laurent Cordier 3, Ali Mohammad-Djafari 4, Markus Abel 5 and Markus Quade 5

1 School of Engineering and Technology, The University of New South Wales, Canberra, ACT 2600, Australia
2 Department of Mechanical Engineering, Auckland University of Technology, Auckland 1142, New Zealand
3 Institut Pprime, CNRS, Université de Poitiers, ISAE-ENSMA, 86360 Chasseneuil-du-Poitou, France
4 Laboratoire des Signaux et Systèmes (L2S), CentraleSupélec, 91190 Gif-sur-Yvette, France
5 Ambrosys GmbH, 14482 Potsdam, Germany
* Author to whom correspondence should be addressed.
† Presented at the 43rd International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, Ghent, Belgium, 1–5 July 2024.
Phys. Sci. Forum 2025, 12(1), 17; https://doi.org/10.3390/psf2025012017
Published: 14 November 2025

Abstract

Consider the dynamical system $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})$, where $\mathbf{x} \in \mathbb{R}^n$ is the state vector, $\dot{\mathbf{x}}$ is its time or spatial derivative, and $\mathbf{f}$ is the system model. We wish to identify the unknown $\mathbf{f}$ from its time-series or spatial data. For this, we propose a Bayesian framework based on the maximum a posteriori (MAP) point estimate, giving a generalized Tikhonov regularization method in which the residual and regularization terms are identified, respectively, with the negative logarithms of the likelihood and prior distributions. As well as estimates of the model coefficients, the Bayesian interpretation provides access to the full Bayesian apparatus, including the ranking of models, the quantification of model uncertainties, and the estimation of unknown (nuisance) hyperparameters. For multivariate Gaussian likelihood and prior distributions, the Bayesian formulation gives a Gaussian posterior distribution, in which the numerator contains a Mahalanobis distance or "Gaussian norm". In this study, two Bayesian algorithms for the estimation of hyperparameters, the joint maximum a posteriori (JMAP) and variational Bayesian approximation (VBA), are compared to the popular SINDy, LASSO, and ridge regression algorithms for the analysis of several dynamical systems with additive noise. We consider two dynamical systems, the Lorenz convection system and the Shil'nikov cubic system, with four choices of noise model: symmetric Gaussian or Laplace noise and skewed Rayleigh or Erlang noise, of different magnitudes. The posterior Gaussian norm is found to provide a robust metric for quantitative model selection, with quantification of the model uncertainties, across all dynamical systems and noise models examined.

1. Introduction

Consider a dynamical system, commonly represented by
$$\dot{\mathbf{x}}(t) = \mathbf{f}(\mathbf{x}(t)), \tag{1}$$
where $\mathbf{x} \in \mathbb{R}^n$ is the observable state vector and $\dot{\mathbf{x}} \in \mathbb{R}^n$ its derivative, both functions of time $t$ (or some other parameter), and $\mathbf{f} \in \mathbb{R}^n$ is the system model. Given a set of discrete time series data, how should a user identify the model $\mathbf{f}$? In dynamical systems theory, this is referred to as system identification. Bayesian practitioners will recognize this as an inverse problem, for which the Bayesian inferential framework is eminently suited.
Recently, a number of researchers in dynamical systems have applied regularization methods for system identification from time-series or spatial data (e.g., [1,2,3]). These methods determine a matrix of coefficients which, when multiplied by a matrix library of functional operations, reproduces the data. Such methods generally involve a sparsification technique to remove unnecessary coefficients. However, both the regularization term and its coefficient are usually implemented in a heuristic or ad hoc manner, with little guidance on how they should be selected for any particular dynamical system. Traditional regularization methods are also unable to provide any further information, for example on the model uncertainty.
In this study, we present a Bayesian framework for dynamical system identification based on the Bayesian maximum a posteriori (MAP) point estimate. This is shown to provide a generalized form of Tikhonov regularization, with the residual and regularization terms corresponding, respectively, to the negative logarithms of the likelihood and prior distributions. The Bayesian interpretation also enables other features of Bayesian inference, including model uncertainty quantification and the estimation of unknown (nuisance) hyperparameters. The present study employs two Bayesian algorithms for hyperparameter estimation, the joint maximum a posteriori (JMAP) and variational Bayesian approximation (VBA), which are compared to the popular SINDy, LASSO, and ridge regression algorithms. We consider two dynamical systems (Lorenz and Shil’nikov) with four choices of noise model (Gaussian, Laplace, Rayleigh, and Erlang), to examine the robustness of the Bayesian algorithms to different additive noise.

2. Theoretical Foundations

Traditional regularization methods for system identification (e.g., [1,2,3]) start from a recorded time (or spatial) series $\mathbf{x}(t)$ and its derivatives $\dot{\mathbf{x}}(t)$, written, respectively, as the $m \times n$ matrices
$$\mathbf{X} = \begin{bmatrix} \mathbf{x}^\top(t_1) \\ \vdots \\ \mathbf{x}^\top(t_m) \end{bmatrix} = \begin{bmatrix} x_1(t_1) & \cdots & x_n(t_1) \\ \vdots & \ddots & \vdots \\ x_1(t_m) & \cdots & x_n(t_m) \end{bmatrix}, \qquad \dot{\mathbf{X}} = \begin{bmatrix} \dot{\mathbf{x}}^\top(t_1) \\ \vdots \\ \dot{\mathbf{x}}^\top(t_m) \end{bmatrix} = \begin{bmatrix} \dot{x}_1(t_1) & \cdots & \dot{x}_n(t_1) \\ \vdots & \ddots & \vdots \\ \dot{x}_1(t_m) & \cdots & \dot{x}_n(t_m) \end{bmatrix}. \tag{2}$$
The user then chooses an alphabet of $c$ functions based on $\mathbf{X}$ to populate an $m \times c$ library matrix; for example,
$$\boldsymbol{\Theta}(\mathbf{X}) = \begin{bmatrix} \mathbf{1} & \mathbf{X} & \mathbf{X}^2 & \mathbf{X}^3 \end{bmatrix}, \tag{3}$$
using polynomial functions in this case. The dynamical system (1) is then represented by the matrix product:
$$\dot{\mathbf{X}} = \boldsymbol{\Theta}(\mathbf{X})\, \boldsymbol{\Xi}, \tag{4}$$
where $\boldsymbol{\Xi}$ is a $c \times n$ matrix of model coefficients $\xi_{ij} \in \mathbb{R}$. The computation of each column $j$ of the coefficient matrix $\boldsymbol{\Xi}$ requires the inversion of (4).
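To make the construction concrete, the following Python sketch (an illustration only, not the authors' BDSI MATLAB code) builds a polynomial library of the form (3) from a data matrix:

```python
# Illustrative sketch in Python (the paper's analyses used MATLAB):
# build a polynomial library Theta(X) for the linear system Xdot = Theta(X) @ Xi.
from itertools import combinations_with_replacement
import numpy as np

def polynomial_library(X: np.ndarray, degree: int = 3) -> np.ndarray:
    """Columns: 1, then all monomials of the state variables up to `degree`."""
    m, n = X.shape
    cols = [np.ones(m)]
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(n), d):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)                 # shape (m, c)

# For an n = 2 state and degree 2, the columns are
# [1, x1, x2, x1^2, x1*x2, x2^2], so c = 6.
X = np.random.default_rng(0).normal(size=(100, 2))
Theta = polynomial_library(X, degree=2)
assert Theta.shape == (100, 6)
```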
This inversion is commonly performed using a regularization method of the form
$$\hat{\boldsymbol{\Xi}}_j = \arg\min_{\boldsymbol{\Xi}_j} J(\boldsymbol{\Xi}_j), \qquad J(\boldsymbol{\Xi}_j) = \|\dot{\mathbf{X}}_j - \boldsymbol{\Theta}(\mathbf{X})\,\boldsymbol{\Xi}_j\|_\beta^\alpha + \kappa_j \|\boldsymbol{\Xi}_j\|_\delta^\gamma, \tag{5}$$
where $\|\mathbf{y}\|_p = \left(\sum_i |y_i|^p\right)^{1/p}$ for $p \in \mathbb{R}$ is the $L_p$ norm, $\kappa_j \in \mathbb{R}$ is the regularization coefficient, and $\alpha, \beta, \gamma, \delta \in \mathbb{R}$ are constants. Several methods have been implemented for (5), including ridge regression [4] for $\alpha, \beta, \gamma, \delta = 2$; the least absolute shrinkage and selection operator (LASSO) [5] for $\alpha, \beta = 2$ and $\gamma, \delta = 1$; and strong sparsity [3] for $\alpha, \beta = 2$, $\gamma = 1$, and $\delta = 0$. Instead of (5), many authors use least squares regression with iterative thresholding, known as the sparse identification of nonlinear dynamics (SINDy) method [1]:
$$J(\boldsymbol{\Xi}_j) = \|\dot{\mathbf{X}}_j - \boldsymbol{\Theta}(\mathbf{X})\,\boldsymbol{\Xi}_j\|_2^2 \quad \text{with} \quad |\xi_{ij}| \geq \lambda,\ \forall\, \xi_{ij} \in \boldsymbol{\Xi}_j, \tag{6}$$
where $\lambda$ is the threshold.
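For reference, a minimal Python sketch of sequentially thresholded least squares in the spirit of SINDy [1] follows; published implementations differ in details such as the number of sweeps and the stopping rule:

```python
# Sequentially thresholded least squares (the core of SINDy [1]):
# repeatedly fit by least squares, then zero coefficients below lambda.
import numpy as np

def stls(Theta: np.ndarray, Xdot_j: np.ndarray,
         lam: float, n_sweeps: int = 10) -> np.ndarray:
    xi = np.linalg.lstsq(Theta, Xdot_j, rcond=None)[0]
    for _ in range(n_sweeps):
        small = np.abs(xi) < lam           # threshold |xi_ij| < lambda
        xi[small] = 0.0
        keep = ~small
        if not keep.any():
            break                          # everything thresholded out
        xi[keep] = np.linalg.lstsq(Theta[:, keep], Xdot_j, rcond=None)[0]
    return xi
```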
In the Bayesian approach to this problem (e.g., [6]), it is recognized that the time series decomposition for each column $j$ should be written as
$$\dot{\mathbf{X}}_j = \boldsymbol{\Theta}(\mathbf{X})\, \boldsymbol{\Xi}_j + \boldsymbol{\epsilon}_j, \tag{7}$$
where $\boldsymbol{\epsilon}_j$ is a noise or error term. All variables are considered probabilistic, leading to the posterior probability of $\boldsymbol{\Xi}_j$ conditioned on the data, given by Bayes' rule:
$$p(\boldsymbol{\Xi}_j | \dot{\mathbf{X}}_j) = \frac{p(\dot{\mathbf{X}}_j | \boldsymbol{\Xi}_j)\, p(\boldsymbol{\Xi}_j)}{p(\dot{\mathbf{X}}_j)} \propto p(\dot{\mathbf{X}}_j | \boldsymbol{\Xi}_j)\, p(\boldsymbol{\Xi}_j). \tag{8}$$
The simplest Bayesian method is to calculate the maximum a posteriori (MAP) point estimate of $\boldsymbol{\Xi}_j$ by maximizing (8). It is convenient to consider the logarithmic maximum, giving
$$\hat{\boldsymbol{\Xi}}_j = \arg\max_{\boldsymbol{\Xi}_j} \ln p(\boldsymbol{\Xi}_j | \dot{\mathbf{X}}_j) = \arg\max_{\boldsymbol{\Xi}_j} \left[ \ln p(\dot{\mathbf{X}}_j | \boldsymbol{\Xi}_j) + \ln p(\boldsymbol{\Xi}_j) \right]. \tag{9}$$
To reduce (9), we make two assumptions. First, we assume unbiased multivariate Gaussian noise with covariance matrix $\mathbf{V}_{\epsilon j}$:
$$p(\boldsymbol{\epsilon}_j | \boldsymbol{\Xi}_j) = \mathcal{N}(\boldsymbol{\epsilon}_j | \mathbf{0}, \mathbf{V}_{\epsilon j}) = \frac{\exp\left(-\frac{1}{2}\, \boldsymbol{\epsilon}_j^\top \mathbf{V}_{\epsilon j}^{-1} \boldsymbol{\epsilon}_j\right)}{\sqrt{(2\pi)^m \det \mathbf{V}_{\epsilon j}}} \propto \exp\left(-\tfrac{1}{2} \|\boldsymbol{\epsilon}_j\|^2_{\mathbf{V}_{\epsilon j}^{-1}}\right), \tag{10}$$
where $\det$ is the determinant, and the second form is written in terms of the Mahalanobis distance $\|\mathbf{y}\|_{\mathbf{A}} := \sqrt{\mathbf{y}^\top \mathbf{A}\, \mathbf{y}}$ with respect to a symmetric positive semi-definite matrix $\mathbf{A}$, referred to here as a "Gaussian norm". From (7), this gives the likelihood
$$p(\dot{\mathbf{X}}_j | \boldsymbol{\Xi}_j) \propto \exp\left(-\tfrac{1}{2} \|\dot{\mathbf{X}}_j - \boldsymbol{\Theta}(\mathbf{X})\,\boldsymbol{\Xi}_j\|^2_{\mathbf{V}_{\epsilon j}^{-1}}\right). \tag{11}$$
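As an aside, the squared Gaussian norm appearing in (10) and (11) is a one-line computation; the helper below is a hypothetical convenience function, reused in the later sketches:

```python
# Squared "Gaussian norm" (squared Mahalanobis distance) ||y||^2_A = y^T A y.
import numpy as np

def gaussian_norm_sq(y: np.ndarray, A: np.ndarray) -> float:
    return float(y @ A @ y)

# e.g. the exponent of the likelihood (11), up to its sign and factor 1/2:
# gaussian_norm_sq(Xdot_j - Theta @ xi_j, np.linalg.inv(V_eps))
```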
Second, we assume an unbiased multivariate Gaussian prior with covariance matrix $\mathbf{V}_{\Xi j}$:
$$p(\boldsymbol{\Xi}_j) = \mathcal{N}(\boldsymbol{\Xi}_j | \mathbf{0}, \mathbf{V}_{\Xi j}) = \frac{\exp\left(-\tfrac{1}{2} \|\boldsymbol{\Xi}_j\|^2_{\mathbf{V}_{\Xi j}^{-1}}\right)}{\sqrt{(2\pi)^c \det \mathbf{V}_{\Xi j}}} \propto \exp\left(-\tfrac{1}{2} \|\boldsymbol{\Xi}_j\|^2_{\mathbf{V}_{\Xi j}^{-1}}\right). \tag{12}$$
The MAP estimator (9), which for Gaussian distributions coincides with the posterior mean, reduces to [6]:
$$\hat{\boldsymbol{\Xi}}_j = \arg\min_{\boldsymbol{\Xi}_j} \left[ \|\dot{\mathbf{X}}_j - \boldsymbol{\Theta}(\mathbf{X})\,\boldsymbol{\Xi}_j\|^2_{\mathbf{V}_{\epsilon j}^{-1}} + \|\boldsymbol{\Xi}_j\|^2_{\mathbf{V}_{\Xi j}^{-1}} \right]. \tag{13}$$
Equation (13) therefore reduces to an objective function very similar to that used for regularization (5), with the likelihood identified with the residual term and the prior identified with the regularization term. For Gaussian likelihood (11) and prior (12) distributions, the posterior is also Gaussian, with the analytical solution [7]
$$p(\boldsymbol{\Xi}_j | \dot{\mathbf{X}}_j) = \mathcal{N}(\boldsymbol{\Xi}_j | \hat{\boldsymbol{\Xi}}_j, \hat{\boldsymbol{\Delta}}_j) \propto \exp\left(-\tfrac{1}{2} \|\boldsymbol{\Xi}_j - \hat{\boldsymbol{\Xi}}_j\|^2_{\hat{\boldsymbol{\Delta}}_j^{-1}}\right), \quad \hat{\boldsymbol{\Xi}}_j = \hat{\boldsymbol{\Delta}}_j\, \boldsymbol{\Theta}^\top \mathbf{V}_{\epsilon j}^{-1} \dot{\mathbf{X}}_j, \quad \hat{\boldsymbol{\Delta}}_j = \left(\boldsymbol{\Theta}^\top \mathbf{V}_{\epsilon j}^{-1} \boldsymbol{\Theta} + \mathbf{V}_{\Xi j}^{-1}\right)^{-1}. \tag{14}$$
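The closed form (14) translates directly into code. The sketch below assumes the covariance matrices are known and simply transcribes the two formulas:

```python
# Posterior mean and covariance from the closed form (14),
# for given (assumed known) covariance matrices V_eps and V_Xi.
import numpy as np

def gaussian_posterior(Theta, Xdot_j, V_eps, V_Xi):
    V_eps_inv = np.linalg.inv(V_eps)
    Delta_hat = np.linalg.inv(Theta.T @ V_eps_inv @ Theta + np.linalg.inv(V_Xi))
    Xi_hat = Delta_hat @ Theta.T @ V_eps_inv @ Xdot_j
    return Xi_hat, Delta_hat

# The coefficient standard deviations reported as error bars in
# Tables 2-4 correspond to np.sqrt(np.diag(Delta_hat)).
```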
The covariance matrices $\mathbf{V}_{\epsilon j}$ and $\mathbf{V}_{\Xi j}$ are unknown and must be determined. In the Bayesian framework, these are incorporated into a joint posterior, which can be simplified to
$$p(\boldsymbol{\Xi}_j, \mathbf{V}_{\epsilon j}, \mathbf{V}_{\Xi j} | \dot{\mathbf{X}}_j) \propto p(\dot{\mathbf{X}}_j | \boldsymbol{\Xi}_j, \mathbf{V}_{\epsilon j})\, p(\boldsymbol{\Xi}_j | \mathbf{V}_{\Xi j})\, p(\mathbf{V}_{\epsilon j})\, p(\mathbf{V}_{\Xi j}). \tag{15}$$
The covariance priors can be represented by products of inverse gamma distributions:
$$p(\mathbf{V}_{\epsilon j}) = \prod_{i=1}^{m} p(v_{\epsilon ij}) = \prod_{i=1}^{m} \mathrm{IG}(v_{\epsilon ij} | \alpha_{\epsilon ij}, \beta_{\epsilon ij}), \qquad p(\mathbf{V}_{\Xi j}) = \prod_{\ell=1}^{c} p(v_{\Xi \ell j}) = \prod_{\ell=1}^{c} \mathrm{IG}(v_{\Xi \ell j} | \alpha_{\Xi \ell j}, \beta_{\Xi \ell j}), \tag{16}$$
with hyperparameters $\{\alpha_{\epsilon ij}, \beta_{\epsilon ij}\}$ and $\{\alpha_{\Xi \ell j}, \beta_{\Xi \ell j}\}$. Two Bayesian algorithms are used in this study. In the joint maximum a posteriori (JMAP) algorithm, (15) is maximized iteratively with respect to $\{\boldsymbol{\Xi}_j, \mathbf{V}_{\epsilon j}, \mathbf{V}_{\Xi j}\}$, to give the estimated parameters $\hat{\boldsymbol{\Xi}}_j$, $\hat{\mathbf{V}}_{\epsilon j}$, and $\hat{\mathbf{V}}_{\Xi j}$ [8]. In the variational Bayesian approximation (VBA), the posterior in (15) is approximated by the factorization $q(\boldsymbol{\Xi}_j, \mathbf{V}_{\epsilon j}, \mathbf{V}_{\Xi j}) = q_1(\boldsymbol{\Xi}_j)\, q_2(\mathbf{V}_{\epsilon j})\, q_3(\mathbf{V}_{\Xi j})$. The individual MAP estimates are then calculated iteratively, using the Kullback–Leibler divergence between $p(\boldsymbol{\Xi}_j, \mathbf{V}_{\epsilon j}, \mathbf{V}_{\Xi j} | \dot{\mathbf{X}}_j)$ and $q$ as the convergence criterion [8].
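A compressed sketch of a JMAP-style alternation for (15) is given below. The variance updates are the conditional modes of the inverse-gamma posteriors implied by (15) and (16), reconstructed here from first principles; the published algorithm [8] and the BDSI code may differ in details:

```python
# JMAP-style alternation for (15): alternately maximize over Xi_j and the
# variances, using the conditional inverse-gamma modes
# v = (beta + r^2/2) / (alpha + 3/2). A reconstruction, not the authors' code.
import numpy as np

def jmap(Theta, Xdot_j, a_eps=2.0, b_eps=1.0, a_xi=2.0, b_xi=1.0, n_iter=20):
    m, c = Theta.shape
    v_eps = np.full(m, b_eps / (a_eps - 1.0))    # start at prior expectations
    v_xi = np.full(c, b_xi / (a_xi - 1.0))
    for _ in range(n_iter):
        # Xi-step: Gaussian posterior mode (14) with the current variances
        Vi = np.diag(1.0 / v_eps)
        Delta = np.linalg.inv(Theta.T @ Vi @ Theta + np.diag(1.0 / v_xi))
        xi = Delta @ Theta.T @ Vi @ Xdot_j
        # V-steps: conditional modes of the inverse-gamma posteriors
        r = Xdot_j - Theta @ xi
        v_eps = (b_eps + 0.5 * r**2) / (a_eps + 1.5)
        v_xi = (b_xi + 0.5 * xi**2) / (a_xi + 1.5)
    return xi, Delta
```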

3. Application

In this study, two dynamical systems are considered. The first is the well-known Lorenz system, a quadratic system given by [9]
$$\dot{\mathbf{x}} = \left[\sigma(y - x),\; x(\rho - z) - y,\; xy - \beta z\right]^\top, \tag{17}$$
which is chaotic for $[\sigma, \beta, \rho] = [10, \tfrac{8}{3}, 28]$. The second is the Shil'nikov system, a cubic system given by [10]
$$\dot{\mathbf{x}} = \left[y,\; x(1 - z) - Bx^3 - \lambda y,\; -\alpha(z - x^2)\right]^\top, \tag{18}$$
which is chaotic for $[\alpha, \lambda, B] = \left[\tfrac{4\sqrt{30}}{135}, \tfrac{11\sqrt{30}}{90}, \tfrac{2}{13}\right]$. Phase plots of these systems with added Gaussian noise (see below) are shown in Figure 1a,b.
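For reproducibility, the two systems can be written and integrated as follows. The paper's analyses used MATLAB's ode45; this sketch uses SciPy's equivalent RK45 integrator, and the initial condition and the Shil'nikov parameter values (as reconstructed above) are assumptions:

```python
# The Lorenz (17) and Shil'nikov (18) systems, integrated over the spans
# used in Tables 2-4; the initial condition [1, 1, 1] is an assumption.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def shilnikov(t, s, alpha=4 * np.sqrt(30) / 135,
              lam=11 * np.sqrt(30) / 90, B=2.0 / 13.0):
    x, y, z = s
    return [y, x * (1 - z) - B * x**3 - lam * y, -alpha * (z - x**2)]

# Lorenz span (t = 10, t_step = 0.01); the Shil'nikov analyses used
# t = 50 with t_step = 0.02.
t_eval = np.arange(0.0, 10.0, 0.01)
sol = solve_ivp(lorenz, (0.0, 10.0), [1.0, 1.0, 1.0],
                t_eval=t_eval, rtol=1e-8, atol=1e-8)
X = sol.y.T                                   # m x n data matrix
```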
The analyses were conducted in MATLAB 2021b on a MacBook Pro with a 2.3 GHz Intel Core i9 processor, with numerical integration by the ode45 function. For each system, the data $\mathbf{X}$ were calculated and augmented by additive random noise according to $\mathbf{X}_{\mathrm{noisy}} = \mathbf{X} + \varepsilon\, \mathbf{X}_{\mathrm{distrib}}$, where $\mathbf{X}_{\mathrm{distrib}}$ is sampled from a univariate noise distribution and $\varepsilon$ is a scaling parameter. Four noise models with random variable $y$ were considered, as listed in Table 1. These include two symmetric noise models (Gaussian and Laplace with $\mu = 0$) and two non-negative skewed noise models (Rayleigh and Erlang). Only the Gaussian noise model conforms to the assumption of Gaussian error (10), so the other models provide a test of the robustness of the algorithms used. The noisy derivatives $\dot{\mathbf{X}}_{\mathrm{noisy}}$ were then calculated from the noisy data $\mathbf{X}_{\mathrm{noisy}}$ by a dynamical system function call. A sketch of the noise-addition step is given below.
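All four noise models of Table 1 are available in NumPy's random generator; the following sketch applies them with $\sigma = s = \lambda = 1$, as in Tables 3 and 4:

```python
# Additive noise X_noisy = X + eps * X_distrib for the four models of
# Table 1 (Erlang = gamma with integer shape; NumPy uses scale = 1/rate).
import numpy as np

rng = np.random.default_rng(42)

def sample_noise(model, shape, sigma=1.0, s=1, lam=1.0):
    if model == "gaussian":
        return rng.normal(0.0, sigma, shape)
    if model == "laplace":
        return rng.laplace(0.0, sigma / np.sqrt(2.0), shape)  # scale sigma/sqrt(2)
    if model == "rayleigh":
        return rng.rayleigh(sigma, shape)
    if model == "erlang":
        return rng.gamma(s, 1.0 / lam, shape)
    raise ValueError(f"unknown noise model: {model}")

X = np.zeros((1000, 3))        # stand-in for the integrated trajectory
eps = 0.2
X_noisy = X + eps * sample_noise("gaussian", X.shape)
```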
The data were then analyzed by three traditional regularization algorithms (SINDy, LASSO, and ridge regression) and two Bayesian algorithms (JMAP and VBA), using inbuilt MATLAB functions for LASSO and ridge regression, and modified forms of published codes for SINDy [2], JMAP, and VBA [6]. All analyses were conducted using an inner iteration for fixed hyperparameter(s) and an outer iteration over a sequence of hyperparameter(s). For SINDy, the iteration was conducted over $\lambda$, proceeding from a high value, while for LASSO and ridge regression, the iteration was conducted over each $\kappa_j$. For JMAP and VBA, a broad Gaussian model prior and covariance prior with fixed hyperparameters $\{\alpha_{\Xi \ell j}, \beta_{\Xi \ell j}\}$ were selected, to represent the a priori belief that the model coefficients should be symmetric about zero, penalizing coefficients approaching $\pm\infty$. The outer iteration was then conducted over the error hyperparameters $\{\alpha_{\epsilon ij}, \beta_{\epsilon ij}\}$, combined into the inverse-gamma expectation $\beta_{\epsilon ij}/(\alpha_{\epsilon ij} - 1)$. In each method, the 2-norms and (where available) Gaussian norms were examined; a sketch of this outer iteration follows.
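The outer iteration for JMAP or VBA can be sketched as a grid scan over the inverse-gamma expectation $\beta_{\epsilon ij}/(\alpha_{\epsilon ij} - 1)$, keeping the fit that minimizes the posterior Gaussian norm. The grid, the fixed shape parameter, and the point at which the norm is evaluated (taken here as the prior mean $\boldsymbol{\Xi}_j = \mathbf{0}$) are assumptions, since those details are not spelled out above:

```python
# Hypothetical outer iteration: scan the error-hyperparameter expectation
# e = b_eps / (a_eps - 1) and keep the minimum posterior Gaussian norm.
import numpy as np

def scan_hyperparameters(Theta, Xdot_j, fit, expectations, a_eps=2.0):
    """`fit` is an inner solver such as jmap() from the earlier sketch."""
    best = None
    for e in expectations:
        xi, Delta = fit(Theta, Xdot_j, a_eps=a_eps, b_eps=e * (a_eps - 1.0))
        # posterior Gaussian norm ||Xi_j - Xi_hat||^2_{Delta^-1}, at Xi_j = 0
        norm = float(xi @ np.linalg.solve(Delta, xi))
        if best is None or norm < best[0]:
            best = (norm, e, xi, Delta)
    return best

# e.g. scan_hyperparameters(Theta, Xdot[:, 0], jmap, np.logspace(-4, 2, 30))
```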

4. Results

The outputs from several sets of analyses are shown below, including for the Lorenz system with Gaussian noise analyzed by different methods (Table 2), and the Lorenz and Shil'nikov systems with different noise models analyzed by JMAP (Tables 3 and 4, respectively). From these and other analyses, we make the following observations:
  • For JMAP and VBA, the optimum in $\{\alpha_{\epsilon ij}, \beta_{\epsilon ij}\}$ is identified by a minimum in the posterior Gaussian norm $\|\boldsymbol{\Xi}_j - \hat{\boldsymbol{\Xi}}_j\|^2_{\hat{\boldsymbol{\Delta}}_j^{-1}}$ (green curves, column 2 of Table 2; column 2 of Tables 3 and 4). This optimum corresponds to the maximum in the posterior (14), and is distinct in all examples considered. Both algorithms are fairly efficient in finding the optimum (∼5–10 iterations).
  • For SINDy and ridge regression, the optimum in the threshold $\lambda$ or regularization parameter $\kappa_j$ is identified by a turning point in the residual $\|\dot{\mathbf{X}}_j - \boldsymbol{\Theta}\boldsymbol{\Xi}_j\|_2^2$ (blue curves, column 1 of Table 2). In contrast, for LASSO, the residuals follow irregular curves; the optimum is instead identified by a turning point in the objective function (brown curves, column 1 of Table 2). The SINDy method is very efficient (∼4–5 iterations) in finding the optimum, while LASSO and ridge regression require many more iterations.
  • The Bayesian methods provide standard deviations on the predicted coefficients, extracted from the posterior covariance $\hat{\boldsymbol{\Delta}}_j$ in (14) (error bars, column 3 of Tables 2–4). These are of order ∼$10^{-8}$–$10^{-6}$ in all examples. In contrast, SINDy and ridge regression report a coefficient precision of ∼$10^{-15}$–$10^{-14}$. The inconsistency with JMAP and VBA suggests that this high precision is artificial. LASSO was unsuccessful in all examples examined, with a precision of ∼$10^{-3}$ and poor recovery of the dynamical system.
  • Contrary to our expectations, JMAP and VBA based on Gaussian noise (13) were highly robust to different choices of additive noise, for all dynamical systems, noise models, and parameter choices examined. Little difference was observed between the symmetric and non-negative noise models, even for magnitudes of added noise $\varepsilon$ so high that the dynamical system becomes overwhelmed by noise (column 1 of Tables 3 and 4).

5. Conclusions

In this study, we consider the problem of system identification of a dynamical system, represented by a nonlinear equation system $\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x})$, from discrete time-series or spatial data. We propose a Bayesian framework based on the maximum a posteriori (MAP) point estimate, giving a generalized form of Tikhonov regularization in which the residual and regularization terms are identified, respectively, with the negative logarithms of the likelihood and prior distributions. As well as estimates of the model coefficients, the Bayesian interpretation provides access to the full Bayesian apparatus, including the ranking of models, the quantification of model uncertainties, and the estimation of unknown (nuisance) hyperparameters. The present study employs two Bayesian algorithms for hyperparameter estimation, the joint maximum a posteriori (JMAP) and variational Bayesian approximation (VBA), which are compared to the popular SINDy, LASSO, and ridge regression algorithms. For multivariate Gaussian likelihood and prior distributions, the Bayesian formulation gives a Gaussian posterior distribution, in which the numerator contains a Mahalanobis distance or "Gaussian norm". We consider two dynamical systems (Lorenz and Shil'nikov) with four choices of noise model (Gaussian, Laplace, Rayleigh, and Erlang). Both the JMAP and VBA algorithms were successful for model identification and uncertainty quantification, with predicted standard deviations in the model coefficients of ∼$10^{-8}$–$10^{-6}$ in all examples. This compares to the calculated precision of ∼$10^{-15}$–$10^{-14}$ by SINDy and ridge regression, which we consider to be artificially low. LASSO was unsuccessful in all examples examined. For the Bayesian algorithms, the posterior Gaussian norm is found to provide a robust metric for quantitative model selection for all combinations of dynamical system and noise model examined.

Author Contributions

Conceptualization and methodology, R.K.N., L.C., M.A. and A.M.-D.; software and analysis, R.K.N., L.C., M.A. and M.Q.; writing—original draft preparation, R.K.N., L.C. and M.A.; writing—review and editing, all authors; funding, R.K.N., L.C. and A.M.-D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Institut Pprime, CNRS, Poitiers, France, and CentraleSupélec, Gif-sur-Yvette, France.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The BDSI MATLAB code is available on GitHub (https://github.com/markusabel/BDSI).

Conflicts of Interest

The authors declare no conflicts of interest. M.A. and M.Q. are employed by Ambrosys GmbH; this research is unrelated to the company's primary business activities.

References

  1. Brunton, S.L.; Proctor, J.L.; Kutz, J.N. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl. Acad. Sci. USA 2016, 113, 3932–3937.
  2. Mangan, N.M.; Kutz, J.N.; Brunton, S.L.; Proctor, J.L. Model selection for dynamical systems via sparse regression and information criteria. Proc. R. Soc. A 2017, 473, 20170009.
  3. Rudy, S.H.; Brunton, S.L.; Proctor, J.L.; Kutz, J.N. Data-driven discovery of partial differential equations. Sci. Adv. 2017, 3, e1602614.
  4. Hoerl, A.E.; Kennard, R.W. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics 1970, 12, 55–67.
  5. Santosa, F.; Symes, W.W. Linear inversion of band-limited reflection seismograms. SIAM J. Sci. Stat. Comput. 1986, 7, 1307–1330.
  6. Mohammad-Djafari, A. Inverse problems in signal and image processing and Bayesian inference framework: From basic to advanced Bayesian computation. In Proceedings of the Scube Seminar, L2S, CentraleSupélec, Gif-sur-Yvette, France, 27 March 2015.
  7. Tarantola, A. Inverse Problem Theory and Methods for Model Parameter Estimation; SIAM: Philadelphia, PA, USA, 2005.
  8. Mohammad-Djafari, A.; Dumitru, M. Bayesian sparse solutions to linear inverse problems with non-stationary noise with Student-t priors. Digit. Signal Process. 2015, 47, 128–156.
  9. Lorenz, E.N. Deterministic nonperiodic flow. J. Atmos. Sci. 1963, 20, 130–141.
  10. Shil'nikov, A.L.; Shil'nikov, L.P.; Turaev, D.V. Normal forms and Lorenz attractors. Int. J. Bifurc. Chaos 1993, 3, 1123–1139.
Figure 1. Calculated noisy data for added Gaussian noise with $\mu = 0$, $\sigma = 1$, and $\varepsilon = 0.2$: (a) Lorenz system and (b) Shil'nikov system. [Image not reproduced.]
Table 1. Noise models adopted in this study (distribution plots not reproduced):
- Gaussian noise: $\eta = \frac{1}{\sigma\sqrt{2\pi}} \exp\left[-\frac{(y - \mu)^2}{2\sigma^2}\right]$; mean $\mu$, standard deviation $\sigma$.
- Laplace noise: $\eta = \frac{1}{\sigma\sqrt{2}} \exp\left[-\frac{\sqrt{2}\,|y - \mu|}{\sigma}\right]$; mean $\mu$, scale $\sigma/\sqrt{2}$.
- Rayleigh noise: $\eta = \frac{y}{\sigma^2} \exp\left[-\frac{y^2}{2\sigma^2}\right]$ for $y \geq 0$; scale $\sigma$.
- Erlang noise: $\eta = \frac{\lambda^s y^{s-1} e^{-\lambda y}}{(s-1)!}$ for $y \geq 0$; shape $s$, rate $\lambda$.
Table 2. Analyses of the Lorenz system ($t = 10$, $t_{\mathrm{step}} = 0.01$) with Gaussian noise ($\mu = 0$, $\sigma = 1$, $\varepsilon = 0.2$) by SINDy, LASSO, ridge regression, JMAP, and VBA, showing (column 1) plots of the residual, regularization term, and objective function; (column 2) plots of the prior, likelihood, and posterior Gaussian norms (where available); and (column 3) differences in predicted parameters $\xi_{ij} - \hat{\xi}_{ij}$, with predicted standard deviations shown as error bars for JMAP and VBA.
[Columns: 2-norm plots | Gaussian norm plots | coefficient plots; one row per method. Image panels not reproduced; the Gaussian norm plots are not applicable for two of the traditional methods.]
Table 3. Analyses of the Lorenz system ($t = 10$, $t_{\mathrm{step}} = 0.01$) by JMAP for various noise models ($\mu = 0$, $\sigma = s = \lambda = 1$) at different scales $\varepsilon$, showing (column 1) the phase plot of the noisy data; (column 2) plots of the prior, likelihood, and posterior Gaussian norms; and (column 3) differences in predicted parameters $\xi_{ij} - \hat{\xi}_{ij}$, with predicted standard deviations shown as error bars. The legend for column 2 is as given in Table 2.
[Columns: phase plots | Gaussian norm plots | coefficient plots. Image panels not reproduced.]
Table 4. Analyses of the Shil'nikov system ($t = 50$, $t_{\mathrm{step}} = 0.02$) by JMAP for various noise models ($\mu = 0$, $\sigma = s = \lambda = 1$) at different scales $\varepsilon$, showing (column 1) the phase plot of the noisy data; (column 2) plots of the prior, likelihood, and posterior Gaussian norms; and (column 3) differences in predicted parameters $\xi_{ij} - \hat{\xi}_{ij}$, with predicted standard deviations shown as error bars. The legend for column 2 is as given in Table 2.
[Columns: phase plots | Gaussian norm plots | coefficient plots. Image panels not reproduced.]