Message Passing-Based Inference for Time-Varying Autoregressive Models

Time-varying autoregressive (TVAR) models are widely used for modeling non-stationary signals. Unfortunately, online joint adaptation of both states and parameters in these models remains a challenge. In this paper, we represent the TVAR model by a factor graph and solve the inference problem by automated message passing for states and parameters. We derive structured variational update rules for a composite "AR node" with probabilistic observations that can be used as a plug-in module in hierarchical models, for example, to model the time-varying behavior of the hyper-parameters of a time-varying AR model. Our method includes tracking of variational free energy (FE) as a Bayesian measure of TVAR model performance. The proposed methods are verified on a synthetic data set and validated on real-world data from temperature modeling and speech enhancement tasks.


Introduction
Autoregressive (AR) models are capable of describing a wide range of time series patterns [1,2]. The extension to Time-Varying AR (TVAR) models, where the AR coefficients are allowed to vary over time, supports tracking of non-stationary signals. TVAR models have been successfully applied to a wide range of applications, including speech signal processing [3][4][5], signature verification [6], cardiovascular response modeling [7], acoustic signature recognition of vehicles [8], radar signal processing [9], and EEG analysis [10,11].
The realization of TVAR models in practice often poses some computational issues. For many applications, such as speech processing in a hearing aid, both a low computational load and high modeling accuracy are essential.
The problem of parameter tracking in TVAR models has been extensively explored in a non-Bayesian setting. For example, ref. [12] employs over-determined modified Yule-Walker equations and [13] applies the covariance method to track the parameters in a TVAR model. In [14], expressions for the mean vector and covariance matrix of TVAR model coefficients are derived and [15] uses wavelets for TVAR model identification. Essentially, all these approaches provide maximum likelihood estimates of coefficients for TVAR models without measurement noise. In [16], autoregressive parameters were estimated from noisy observations by using a recursive least-squares adaptive filter.
We take a Bayesian approach since we are also interested in tracking Bayesian evidence (or an approximation thereof) as a model performance measure. Bayesian evidence can be used to track the optimal AR model order or more generally, to compare the performance of a TVAR model to an alternative model. To date, Bayesian parameter tracking in AR models has mostly been achieved by Monte Carlo sampling methods [17][18][19][20][21]. Sampling-based inference is highly accurate, but it is very often computationally too expensive for real-time processing on wearable devices such as hearing aids, smartwatches, etc.
In this paper, we develop a low-complexity variational message passing (VMP) realization for tracking of states, parameters and free energy (an upper bound on the negative log-evidence) in TVAR models. All update formulas are closed-form and the complete inference process can easily be realized.
VMP is a low-complexity distributed message passing-based realization of variational Bayesian inference on a factor graph representation of the model [22,23]. Previous work on message passing-based inference for AR models includes [24], but that work describes maximum likelihood estimation and therefore does not track proper posteriors and free energy. In [25], variational inference is employed to estimate the parameters of a multivariate AR model, but their work does not take advantage of the factor graph representation.
The factor graph representation that we employ in this paper provides some distinct advantages over other approaches to inference in TVAR models. First, a factor graph formulation is by definition completely modular and supports re-using the derived TVAR inference equations as a plug-in module in other factor graph-based models. In particular, since we allow for measurement noise in the TVAR model specification, the proposed TVAR factor can easily be used as a latent module at any level in hierarchical dynamical models. Moreover, due to the modularity, VMP update rules can easily be mixed with different update schemes such as belief propagation and expectation propagation [26,27] in other modules, leading to hybrid message passing schemes for efficient inference in complex models. We have implemented the TVAR model in the open source and freely available factor graph toolbox ForneyLab [28].
The rest of this paper is organized as follows. In Section 2, we specify the TVAR model as a probabilistic state space model and define the inference tasks that relate to tracking of states, parameters, and Bayesian evidence. After a short discussion on the merits of using Bayesian evidence as a model performance criterion (Section 3.1), we formulate Bayesian inference in the TVAR model as a set of sequential prediction-correction processes (Section 3.2). We will realize these processes as VMP update rules and proceed with a short review of Forney-style factor graphs and message passing in Section 4. Then, in Section 5, the VMP equations are worked out for the TVAR model and summarized in Table 1. Section 6 discusses a verification experiment on a synthetic data set and applications of the proposed TVAR model to temperature prediction and speech enhancement problems. Full derivations of the closed-form VMP update rules are presented in Appendix A.

Table 1. Variational message update rules for the autoregressive (AR) node (dashed box) of Equation (30).

Model Specification and Problem Definition
In this section, we first specify TVAR as a state-space model. This is followed by an inference problem formulation.

Model Specification
A TVAR model is specified as

$$
\begin{aligned}
x_t &= \sum_{m=1}^{M} \theta_{m,t}\, x_{t-m} + e_t\,, \qquad e_t \sim \mathcal{N}(0, \gamma^{-1})\,, \\
y_t &= x_t + n_t\,, \qquad n_t \sim \mathcal{N}(0, \tau^{-1})\,,
\end{aligned} \tag{1}
$$

where $y_t \in \mathbb{R}$, $x_t \in \mathbb{R}$ and $\theta_{m,t} \in \mathbb{R}$ represent the observation, state and parameters at time $t$, respectively. $M$ denotes the order of the AR model. As a notational convention, we use $\mathcal{N}(\mu, \Sigma)$ to denote a Gaussian distribution with mean $\mu$ and covariance matrix $\Sigma$. We can re-write (1) in state-space form as

$$
\begin{aligned}
\theta_t &= \theta_{t-1} + w_t\,, \qquad w_t \sim \mathcal{N}(0, \omega^{-1} I_M)\,, &\text{(2a)} \\
\mathbf{x}_t &= A(\theta_t)\, \mathbf{x}_{t-1} + \mathbf{c}\, e_t\,, &\text{(2b)} \\
y_t &= \mathbf{c}^\top \mathbf{x}_t + n_t\,, \qquad n_t \sim \mathcal{N}(0, \tau^{-1})\,, &\text{(2c)}
\end{aligned}
$$

where $\mathbf{x}_t = (x_t, \ldots, x_{t-M+1})^\top$ is the state vector and $\mathbf{c} = (1, 0, \ldots, 0)^\top$. Technically, a TVAR model usually assumes $\tau^{-1} = 0$, indicating that there is no measurement noise. Note that the presence of measurement noise in (2c) "hides" the states $x_t$ in the generative model (2) from the observation sequence $y_t$, yielding a latent TVAR. We add measurement noise explicitly, so the model is able to accept information from likelihood functions that are not constrained to be delta functions with hard observations. As a result, the AR model that we define here can be used at any level in deep hierarchical structures such as [29] as a plug-in module.
In a time-invariant AR model, the coefficients θ are part of the parameters of the system. In a time-varying AR model, we consider θ_t and x_t together as the set of time-varying states. The parameters of the TVAR model are {θ_0, x_0, ω, γ, τ}.
At the heart of the TVAR model is the transition model (2b), where A(θ_t) is a companion matrix holding the AR coefficients. The multiplication A(θ_t)x_{t−1} performs two operations: a dot product θ_t^⊤ x_{t−1} and a shift of x_{t−1} by one time step. The latter operation can be interpreted as bookkeeping, as it shifts each entry of x_{t−1} one position down and discards x_{t−M}.
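To make the bookkeeping concrete, the following NumPy sketch builds a companion matrix and applies it to a state vector. This is our illustrative fragment (the names `companion` and `x_prev` are ours), not the paper's ForneyLab implementation.

```python
import numpy as np

def companion(theta):
    """Build the companion matrix A(theta) for AR coefficients theta.
    The first row holds the coefficients; the shifted identity block
    below it moves every entry of the state vector one position down
    and discards the oldest entry x_{t-M}. (Illustrative helper, not
    the paper's ForneyLab implementation.)"""
    M = len(theta)
    A = np.zeros((M, M))
    A[0, :] = theta              # first row: dot product theta^T x_{t-1}
    A[1:, :-1] = np.eye(M - 1)   # subdiagonal block: the bookkeeping shift
    return A

theta = np.array([0.5, -0.2, 0.1])
x_prev = np.array([1.0, 2.0, 3.0])
x_pred = companion(theta) @ x_prev   # -> [theta @ x_prev, 1.0, 2.0]
```

Note how the first entry of `x_pred` is the AR prediction and the remaining entries are a shifted copy of the previous state, with the oldest sample dropped.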

Problem Definition
For a given time series y = (y 1 , y 2 , . . . , y T ), we are firstly interested in recursively updating posteriors for the states p(x t |y 1:l ) and p(θ t |y 1:l ). In this context, prediction, filtering and smoothing are recovered for l < t, l = t and l > t, respectively. Furthermore, we are interested in computing posteriors for the parameters p(θ 0 |y), p(x 0 |y), p(ω|y), p(γ|y) and p(τ|y).
Finally, we are interested in scoring the performance of a proposed TVAR model m with specified priors for the parameters. In this paper, we take a full Bayesian approach and select Bayesian evidence p(y|m) as the performance criterion. Section 3.1 discusses the merits of Bayesian evidence as a model performance criterion.

Inference in TVAR Models
In this section, we first discuss some of the merits of using Bayesian evidence as a model performance criterion. This is followed by an exposition of how to compute Bayesian evidence and the desired posteriors in the TVAR model.

Bayesian Evidence as a Model Performance Criterion
Consider a model m with parameters θ and observations y. Bayesian evidence p(y|m) is considered an excellent model performance criterion. Note the following decomposition:

$$
\log p(y|m) = \underbrace{\mathbb{E}_{p(\theta|y,m)}\big[\log p(y|\theta, m)\big]}_{\text{data fit}} \; - \; \underbrace{\mathrm{KL}\big[\,p(\theta|y,m)\,\|\,p(\theta|m)\,\big]}_{\text{complexity}}\,. \tag{4}
$$

The first term (data fit, sometimes called accuracy) measures how well the model predicts the data y, after having learned from the data. We want this term to be large (although focusing on this term alone could lead to over-fitting). The second term (complexity) quantifies the amount of information that the model absorbed through learning by moving parameter beliefs from p(θ|m) to p(θ|y, m). To see this, note that the mutual information between two variables θ and y, which is defined as

$$
I(\theta; y) = \mathbb{E}_{p(y|m)}\Big[\mathrm{KL}\big[\,p(\theta|y,m)\,\|\,p(\theta|m)\,\big]\Big]\,,
$$

can be interpreted as an expected complexity. The complexity term regularizes the Bayesian learning process automatically. Preference for models with high Bayesian evidence implies a preference for models that obtain a good data fit without the need to learn much from the data set. These types of models are said to generalize well, since they can be applied to different data sets without specific adaptations for each data set. Therefore, Bayesian learning automatically leads to models that tend to generalize well. Note that Bayesian evidence for a model m, given a full time series y = (y_1, y_2, . . . , y_T), can be computed by multiplication of the sample-based evidences:

$$
p(y|m) = \prod_{t=1}^{T} p(y_t | y_{1:t-1}, m)\,. \tag{5}
$$
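The data fit/complexity decomposition can be checked numerically in a toy conjugate model where every term has a closed form. The sketch below is our illustration (not from the paper): for a Beta-Bernoulli model, the exact log-evidence equals the posterior-expected log-likelihood minus the KL-divergence from the prior to the posterior.

```python
import numpy as np
from scipy.special import betaln, digamma

# Toy conjugate model: theta ~ Beta(a, b), y_1..y_n ~ Bernoulli(theta).
# All quantities below are closed-form, so the decomposition of the
# log-evidence into "data fit" minus "complexity" can be verified exactly.
a, b = 2.0, 3.0          # prior Beta(a, b)
n, k = 20, 14            # n coin flips, k heads
a_post, b_post = a + k, b + (n - k)

# Exact log-evidence of the (ordered) observation sequence
log_evidence = betaln(a_post, b_post) - betaln(a, b)

# Data fit: E_posterior[log p(y | theta)]
E_log_th = digamma(a_post) - digamma(a_post + b_post)
E_log_1mth = digamma(b_post) - digamma(a_post + b_post)
data_fit = k * E_log_th + (n - k) * E_log_1mth

# Complexity: KL[posterior || prior] between Beta distributions
complexity = (betaln(a, b) - betaln(a_post, b_post)
              + (a_post - a) * digamma(a_post)
              + (b_post - b) * digamma(b_post)
              + (a - a_post + b - b_post) * digamma(a_post + b_post))

# log_evidence == data_fit - complexity (up to float precision)
```

The same decomposition holds for the TVAR model, except that there none of the three terms is available in closed form, which motivates the variational treatment later in the paper.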

Inference as a Prediction-Correction Process
To illustrate the type of calculations that are needed for computing Bayesian model evidence and the posteriors for states and parameters, we now proceed to write out the needed calculations for the TVAR model in a filtering context.
Assume that at the beginning of time step t, we are given the state posteriors q(x_{t−1}|y_{1:t−1}) and q(θ_{t−1}|y_{1:t−1}). We denote inferred probabilities by q(·), in contrast to factors of the generative model, which are written as p(·). We start the procedure by setting the state priors of the generative model at step t to the posteriors of the previous time step:

$$
p(x_{t-1}|y_{1:t-1}) := q(x_{t-1}|y_{1:t-1})\,, \tag{6}
$$

$$
p(\theta_{t-1}|y_{1:t-1}) := q(\theta_{t-1}|y_{1:t-1})\,. \tag{7}
$$

Given a new observation y_t, we are now interested in inferring the evidence q(y_t|y_{1:t−1}) and the posteriors q(x_t|y_{1:t}) and q(θ_t|y_{1:t}).
This involves a prediction (forward) pass through the system that leads to the evidence update, followed by a correction (backward) pass that updates the states. We work this out in detail below. For clarity of exposition, in this section we call x_t states and θ_t parameters. Starting with the forward pass (from latent variables toward the observation), we first compute a parameter prior predictive:

$$
q(\theta_t|y_{1:t-1}) = \int p(\theta_t|\theta_{t-1})\, q(\theta_{t-1}|y_{1:t-1})\, \mathrm{d}\theta_{t-1}\,. \tag{8}
$$

Then the prior predictive for the state transition becomes:

$$
q(x_t|x_{t-1}, y_{1:t-1}) = \int p(x_t|x_{t-1}, \theta_t)\, q(\theta_t|y_{1:t-1})\, \mathrm{d}\theta_t\,. \tag{9}
$$

Note that the state transition prior predictive, due to its dependency on the time-varying θ_t, is a function of the observed data sequence. The state transition prior predictive can be combined with the state prior to infer the state prior predictive:

$$
q(x_t|y_{1:t-1}) = \int q(x_t|x_{t-1}, y_{1:t-1})\, q(x_{t-1}|y_{1:t-1})\, \mathrm{d}x_{t-1}\,. \tag{10}
$$

The evidence for model m that is provided by observation y_t is then given by

$$
q(y_t|y_{1:t-1}) = \int p(y_t|x_t)\, q(x_t|y_{1:t-1})\, \mathrm{d}x_t\,. \tag{11}
$$

When y_t has not yet been observed, q(y_t|y_{1:t−1}) is a prediction for y_t. After plugging in the observed value for y_t, the evidence is a scalar that scores how well the model performed in predicting y_t. As discussed in (5), the results q(y_t|y_{1:t−1}) for t = 1, 2, . . . , T in (11) can be used to score the model performance for a given time series y = (y_1, y_2, . . . , y_T). Note that to update the evidence, we need to integrate over all latent variables θ_{t−1}, θ_t, x_{t−1} and x_t (by (8)-(11)). In principle, this scheme needs to be extended with integration over the parameters ω, γ and τ.
Once we have inferred the evidence, we proceed with a backward corrective pass through the model to update the posteriors over the latent variables, given the new observation y_t. The state posterior can be updated by Bayes' rule:

$$
\underbrace{q(x_t|y_{1:t})}_{\text{state posterior}} = \frac{\overbrace{p(y_t|x_t)}^{\text{state likelihood}}\; \overbrace{q(x_t|y_{1:t-1})}^{\text{state prior predictive}}}{\underbrace{q(y_t|y_{1:t-1})}_{\text{evidence}}}\,. \tag{12}
$$

Next, we need to compute a likelihood function for the parameters. Fortunately, we can re-use some intermediate results from the forward pass. The likelihood for the parameters is given by

$$
q(y_t|\theta_t, y_{1:t-1}) = \iint p(y_t|x_t)\, p(x_t|x_{t-1}, \theta_t)\, q(x_{t-1}|y_{1:t-1})\, \mathrm{d}x_{t-1}\, \mathrm{d}x_t\,. \tag{13}
$$

The parameter posterior then follows from Bayes' rule:

$$
\underbrace{q(\theta_t|y_{1:t})}_{\text{parameter posterior}} = \frac{\overbrace{q(y_t|\theta_t, y_{1:t-1})}^{\text{parameter likelihood}}\; \overbrace{q(\theta_t|y_{1:t-1})}^{\text{parameter prior predictive}}}{\underbrace{q(y_t|y_{1:t-1})}_{\text{evidence}}}\,. \tag{14}
$$

Equations (11), (12) and (14) contain the solutions to our inference task. Note that the evidence q(y_t|y_{1:t−1}) is needed to normalize the latent variable posteriors in (12) and (14). Moreover, while we integrate over the states in (11) to compute the evidence, (14) reveals that the evidence can alternatively be computed by integrating over the parameters through

$$
\underbrace{q(y_t|y_{1:t-1})}_{\text{evidence}} = \int \underbrace{q(y_t|\theta_t, y_{1:t-1})}_{\text{parameter likelihood}}\; \underbrace{q(\theta_t|y_{1:t-1})}_{\text{parameter prior predictive}}\, \mathrm{d}\theta_t\,. \tag{15}
$$

This latter method of evidence computation may be useful if re-using (11) in (14) leads to numerical rounding issues.
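To make the prediction-correction structure concrete, the sketch below implements one such step for the simplified case where the coefficients θ and the noise parameters are known, in which case the recursions collapse to a standard Kalman filter with closed-form evidence. This is our simplification for illustration only; the paper's scheme infers θ jointly, which breaks this closed form.

```python
import numpy as np

def filter_step(m, P, y, A, Q, c, r):
    """One prediction-correction step for a latent AR model with *known*
    coefficients, i.e., a plain Kalman filter. (Our simplification: in
    the paper the coefficients are inferred jointly, which breaks this
    closed form.) Returns the corrected state belief (m, P) and the
    log-evidence contribution log q(y_t | y_{1:t-1})."""
    # Prediction pass:
    m_pred = A @ m
    P_pred = A @ P @ A.T + Q
    # Evidence: y_t ~ N(c^T m_pred, c^T P_pred c + r)
    s = c @ P_pred @ c + r
    log_ev = -0.5 * (np.log(2 * np.pi * s) + (y - c @ m_pred) ** 2 / s)
    # Correction pass (Kalman gain update):
    K = P_pred @ c / s
    m_new = m_pred + K * (y - c @ m_pred)
    P_new = P_pred - np.outer(K, c @ P_pred)
    return m_new, P_new, log_ev

# One step with model order M = 1 for concreteness:
m, P, log_ev = filter_step(np.array([0.0]), np.array([[1.0]]), 1.0,
                           np.array([[0.9]]), np.array([[0.1]]),
                           np.array([1.0]), 0.5)
```

Summing `log_ev` over all time steps yields the log-evidence of a whole series, mirroring the product of per-sample evidences in (5).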
Unfortunately, many of Equations (8) through (14) are not analytically tractable for the TVAR model. This happens due to (1) integration over large state spaces, (2) non-conjugate prior-posterior pairing, and (3) the absence of a closed-form solution for the evidence factor.
To overcome this challenge, we will perform inference by a hybrid message passing scheme in a factor graph. In the next section, we give a short review of Forney-Style Factor Graphs (FFG) and Message-Passing (MP) based inference techniques.

Factor Graphs and Message Passing-Based Inference
In this section, we give a brief introduction to Forney-style Factor Graphs (FFGs) and the sum-product (SP) algorithm. After that, we review the minimization of variational free energy and the Variational Message Passing (VMP) algorithm.

Forney-Style Factor Graphs
A Forney-style factor graph is a representation of a factorized function in which the factors and variables are represented by nodes and edges, respectively. An edge is connected to a node if and only if the (edge) variable is an argument of the node function. In our work, we use FFGs to represent factorized probability distributions. FFGs provide both an attractive visualization of the model and a highly efficient and modular inference method based on message passing. An important component of the FFG representation is the equality node. If a variable x is shared by more than two nodes, then we introduce two auxiliary variables x′ and x″ and use an equality node to constrain the marginal beliefs over x, x′ and x″ to be equal. With this mechanism, any factorized function can be represented as an FFG. An FFG visualization of the TVAR model is depicted in Figure 3, but for illustrative purposes, we first consider the example factorized distribution

$$
f(x_1, \ldots, x_5) = f_a(x_1)\, f_b(x_1, x_2)\, f_c(x_2, x_3, x_4)\, f_d(x_4)\, f_e(x_3, x_5)\,. \tag{17}
$$

This distribution can be visualized by the FFG shown in Figure 1. An FFG is in principle an undirected graph, but we often draw arrows on the edges in the "generative" direction, which is the direction that describes how the observed data are generated. Assume that we are interested in computing the marginal for x_2, given by

$$
\bar{f}(x_2) = \iiiint f(x_1, \ldots, x_5)\, \mathrm{d}x_1\, \mathrm{d}x_3\, \mathrm{d}x_4\, \mathrm{d}x_5\,. \tag{18}
$$

We can reduce the complexity of computing this integral by rearranging the factors over the integration signs as

$$
\bar{f}(x_2) = \underbrace{\int f_a(x_1)\, f_b(x_1, x_2)\, \mathrm{d}x_1}_{\vec{\mu}(x_2)} \cdot \underbrace{\iint f_c(x_2, x_3, x_4)\, f_d(x_4)\, \underbrace{\int f_e(x_3, x_5)\, \mathrm{d}x_5}_{\overleftarrow{\mu}(x_3)}\, \mathrm{d}x_3\, \mathrm{d}x_4}_{\overleftarrow{\mu}(x_2)}\,. \tag{19}
$$

These rearranged integrals can be interpreted as messages that are passed over the edges; see Figure 1. It is a notational convention to call a message $\vec{\mu}(\cdot)$ that aligns with the direction of the edge arrow a forward message; similarly, a message $\overleftarrow{\mu}(\cdot)$ that opposes the direction of the edge is called a backward message.
Figure 1. An FFG corresponding to model (17), including the messages of (19).

This message passing-based algorithm for computing marginals is called belief propagation (BP) or the sum-product algorithm. As can be verified in (19), for a node with factor f(y, x_1, . . . , x_n), the outgoing BP message toward variable y can be expressed as

$$
\vec{\mu}(y) = \int \cdots \int f(y, x_1, \ldots, x_n)\, \prod_{i=1}^{n} \vec{\mu}_i(x_i)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n\,, \tag{20}
$$

where $\vec{\mu}_i(x_i)$ is the incoming message over edge x_i. If the factor graph is a tree, meaning that the graph contains no cycles, then BP performs exact Bayesian inference. A more detailed explanation of belief propagation message passing in FFGs can be found in [26].
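The sum-product rule can be illustrated numerically on a small discrete chain (our toy example with arbitrary factor values, not from the paper): the marginal obtained by multiplying a forward and a backward message matches brute-force enumeration of the joint.

```python
import numpy as np

# Sum-product on a discrete chain x1 -- x2 -- x3 (toy example, ours;
# each variable takes two states, factor values are arbitrary).
f_a = np.array([0.6, 0.4])                    # f_a(x1)
f_b = np.array([[0.9, 0.1], [0.2, 0.8]])      # f_b(x1, x2)
f_c = np.array([[0.5, 0.3], [0.4, 0.8]])      # f_c(x2, x3)

# Forward message toward x2: sum_{x1} f_a(x1) f_b(x1, x2)
mu_fwd = f_a @ f_b
# Backward message toward x2: sum_{x3} f_c(x2, x3)
mu_bwd = f_c.sum(axis=1)
# Marginal: normalized product of forward and backward messages
marg = mu_fwd * mu_bwd
marg /= marg.sum()

# Brute-force check by enumerating the full joint f_a f_b f_c
joint = f_a[:, None, None] * f_b[:, :, None] * f_c[None, :, :]
brute = joint.sum(axis=(0, 2))
brute /= brute.sum()
```

The message-based computation never materializes the full joint, which is the point of the rearrangement in (19).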

Free Energy and Variational Message Passing
Technically, BP belongs to a family of message passing algorithms that minimize a constrained variational free energy functional [31]. Unfortunately, the sum-product rule (20) only has a closed-form solution for Gaussian incoming messages μ_i(x_i) and linear variable relations in f(y, x_1, . . . , x_n). Another important member of this family of free energy-minimizing algorithms is the Variational Message Passing (VMP) algorithm [22]. VMP enjoys a wider range of analytically computable message update rules.
We briefly review variational Bayesian inference and VMP next. Consider a model p(y, x) with observations y and unobserved (latent) variables x. We are interested in inferring the posterior distribution p(x|y). In variational inference, we introduce an approximate posterior q(x) and define a variational free energy functional as

$$
F[q] = \int q(x)\, \log \frac{q(x)}{p(y, x)}\, \mathrm{d}x = \underbrace{\mathrm{KL}\big[\,q(x)\,\|\,p(x|y)\,\big]}_{\geq\, 0} \; - \; \underbrace{\log p(y)}_{\text{log-evidence}}\,. \tag{21}
$$

The second term in (21) (the log-evidence) is not a function of the argument of F. The first term is a KL-divergence, which is by definition non-negative and equals zero only for q(x) = p(x|y). As a result, variational inference by minimization of F[q] provides

$$
q^*(x) = \arg\min_{q} F[q]\,, \tag{22}
$$

which is an approximation to the Bayesian posterior p(x|y). Moreover, the minimized free energy F[q^*] is an upper bound on the negative log-evidence and in practice is used as a model performance criterion. Similarly to (4), the free energy can be decomposed as

$$
F[q] = \underbrace{\mathrm{KL}\big[\,q(x)\,\|\,p(x)\,\big]}_{\text{complexity}} \; - \; \underbrace{\mathbb{E}_{q(x)}\big[\log p(y|x)\big]}_{\text{accuracy}}\,, \tag{23}
$$

which supports its use as a performance criterion for model m, given observations y.
In an FFG context, the model p(y, x) is represented by a set of connected nodes. Consider a generic node of the FFG, given by f (y, x 1 , . . ., x n ) where in the case of VMP, the incoming messages are approximations to the marginals q i (x i ), i = 1, . . . , n, see Figure 2.
A generic node f (y, x 1 , . . ., x n ) with incoming variational messages q i (x i ) and outgoing variational message − → ν (y) , see Equation (24). Note that the marginals q(·) propagate in the graph as messages.
It can be shown [32] that the outgoing VMP message of f toward edge y is given by

$$
\vec{\nu}(y) \propto \exp\Big( \mathbb{E}_{q_1(x_1) \cdots q_n(x_n)}\big[\log f(y, x_1, \ldots, x_n)\big] \Big)\,. \tag{24}
$$

In this paper, we adopt the notational convention of denoting belief propagation messages (computed by (20)) by μ and VMP messages (computed by (24)) by ν. The approximate marginal q(y) can be obtained by multiplying the incoming and outgoing messages on the edge for y:

$$
q(y) \propto \vec{\nu}(y)\, \overleftarrow{\nu}(y)\,. \tag{25}
$$

This process (compute forward and backward messages for an edge and update the marginal) is executed sequentially and repeatedly for all edges in the graph until convergence. In contrast to BP-based inference, the VMP and marginal update rules (24) and (25) lead to closed-form expressions for a large set of conjugate node pairs from the exponential family of distributions. For instance, updating the variance parameter of a Gaussian node with a connected inverse-gamma prior node results in closed-form VMP updates.
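The coordinate-wise character of (24)-(25) can be illustrated on the simplest mean-field example (our toy sketch, not the AR node rules of Table 1): a Gaussian with unknown mean μ and precision γ, with q(μ, γ) = q(μ) q(γ). Both conditional updates are closed-form and are iterated until convergence.

```python
import numpy as np

# Toy mean-field VMP (our illustration, not the AR node of Table 1):
# y_i ~ N(mu, 1/gamma) with unknown mean mu and precision gamma,
# q(mu, gamma) = q(mu) q(gamma). Each update is the exponentiated
# expected log-factor of Equation (24), which stays conjugate here.
y = np.array([1.0, 2.0, 3.0])
n = len(y)
m0, l0 = 0.0, 1e-3            # prior N(m0, 1/l0) on mu (broad)
a0, b0 = 1e-3, 1e-3           # prior Gamma(a0, b0) on gamma (broad)

E_gamma = a0 / b0
for _ in range(100):
    # Update q(mu) = N(m, v) given the current E[gamma]
    lam = l0 + n * E_gamma
    m = (l0 * m0 + E_gamma * y.sum()) / lam
    v = 1.0 / lam
    # Update q(gamma) = Gamma(a, b) given the current q(mu)
    a = a0 + 0.5 * n
    b = b0 + 0.5 * (np.sum((y - m) ** 2) + n * v)
    E_gamma = a / b
# With broad priors, q(mu) concentrates near the sample mean.
```

The two updates play the role of the forward and backward variational messages on one edge; iterating them is exactly the "update each edge until convergence" schedule described above.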
In short, both BP- and VMP-based message passing can be interpreted as minimizing variational free energy, albeit under different sets of local constraints [31]. Typical constraints include factorization and form constraints on the posterior, such as q(x) = ∏_i q_i(x_i) and q(x) = N (x|µ, Σ), respectively. Since the constraints are local, BP and VMP can be combined in a factor graph to create hybrid message passing-based variational inference algorithms. For a more detailed explanation of VMP in FFGs, we refer to [32]. Note that hybrid message passing does not, in general, guarantee minimization of variational free energy [33]. However, in our experiments in Section 6 we show that iterating our stationary solutions by message passing does lead to free energy minimization.

Variational Message Passing for TVAR Models
In this section, we focus on deriving message passing-based inference in the TVAR model. We develop a TVAR composite factor for the FFG framework and specify the intractable BP messages around the TVAR node. Then we present a message passing-based inference solution.

Message Passing-Based Inference in the TVAR Model
The TVAR model at time step t can be represented by an FFG as shown in Figure 3. We are interested in providing a message passing solution to the inference tasks as specified by Equations (8)- (14). At the left-hand side of Figure 3, the incoming messages are the priors p(θ t−1 |y 1:t−1 ) and p(x t−1 |y 1:t−1 ). At the bottom of the graph, there is a new observation y t . The goal is to pass messages in the graph to compute posteriors q(θ t |y 1:t ) (message 16 ) and q(x t |y 1:t ) (message 11 ). In order to support smoothing algorithms, we also want to be able to pass incoming prior messages from the right-hand side to outgoing messages 13 and 18 at the left-hand side. Forward and backward messages are drawn as open and closed circles respectively. Figure 3. One time segment of an FFG corresponding to the TVAR model. We use small black nodes to denote observations and fixed given parameter values. The observation node for y t sends a message δ(y t −ŷ t ) into the graph to indicate that y t =ŷ t has been observed. Dashed undirected edges denote time-invariant variables. Circled numbers indicate a selected computation schedule. Backward messages are marked by black circles. The intractable messages are labeled with red. The dashed box represents a composite AR node as specified by (30).
Technically, the generative model (2) at time step t of the TVAR model can be written compactly as p(y_t|z_t) p(z_t|z_{t−1}), where z_t = {x_t, θ_t, ω, γ, τ} collects the latent variables. On this view, we can write the free energy functional for the TVAR model at time step t as

$$
F[q] = \iint q(z_{t-1}, z_t)\, \log \frac{q(z_{t-1}, z_t)}{p(z_{t-1}|y_{1:t-1})\, p(y_t|z_t)\, p(z_t|z_{t-1})}\, \mathrm{d}z_{t-1}\, \mathrm{d}z_t \tag{26}
$$

and minimize F[q] by message passing. In a smoothing context, we would include a prior from the future, p(z_t|y_{t+1:t+T}) := q(z_t|y_{t+1:t+T}), yielding the free energy functional

$$
F[q] = \iint q(z_{t-1}, z_t)\, \log \frac{q(z_{t-1}, z_t)}{p(z_{t-1}|y_{1:t-1})\, p(y_t|z_t)\, p(z_t|z_{t-1})\, q(z_t|y_{t+1:t+T})}\, \mathrm{d}z_{t-1}\, \mathrm{d}z_t\,. \tag{27}
$$

In a filtering context, q(z_t|y_{t+1:t+T}) ∝ 1 and the functional (27) simplifies to (26).

Intractable Messages and the Composite AR Node
The modularity of message passing in FFGs allows us to focus only on the intractable message and marginal updates. For instance, while there is no problem with the analytical computation of backward message 12, the corresponding forward message 4 cannot be computed analytically [34]. Similarly, messages 13, 14 and 15 do not have closed-form solutions in the constrained free energy minimization framework. For ease of identification, the intractable messages in Figure 3 are marked in red.
In an FFG framework, we can isolate the problematic part of the TVAR model (Figure 3) by introducing a "composite" AR node. Composite nodes conceal their internal operations from the rest of the graph. As a result, inference can proceed as long as each composite node follows proper message passing communication rules at its interfaces to the rest of the graph. The composite AR node is indicated in Figure 3 by a dashed box. Note that the internal shuffling of the parameters θ t and γ, respectively by means of A(θ t ) and V(γ), is hidden from the network outside the composite AR node.

VMP Update Rules for the Composite AR Node
We isolate the composite AR node by the specification

$$
f_{AR}(y, x, \theta, \gamma) \triangleq \mathcal{N}\big(y \,|\, A(\theta)\, x,\; V(\gamma)\big)\,, \tag{30}
$$

where, relative to (29), we used the substitutions y = x_t, x = x_{t−1} and θ = θ_t. Under the structural factorization constraint (see Appendix A.1 for more on structural VMP)

$$
q(y, x, \theta, \gamma) = q(y, x)\, q(\theta)\, q(\gamma)\,, \tag{31}
$$

and the consistency constraints

$$
q(y) = \int q(y, x)\, \mathrm{d}x\,, \qquad q(x) = \int q(y, x)\, \mathrm{d}y\,, \tag{32}
$$

the marginals q(θ), q(x), q(y) and q(γ) can be obtained by minimization of the composite-AR free energy functional

$$
F_{AR}[q] = \mathbb{E}_{q(y,x)\, q(\theta)\, q(\gamma)}\left[ \log \frac{q(y, x)\, q(\theta)\, q(\gamma)}{f_{AR}(y, x, \theta, \gamma)} \right]\,. \tag{33}
$$

Recalling (25), we can write the minimizer of the FE functional (33) with respect to θ as

$$
q(\theta) \propto \mu(\theta)\, \vec{\nu}(\theta)\,, \tag{34}
$$

where μ(θ) is the incoming message to the AR node and ν(θ) is the variational outgoing message. Hence, the outgoing message from the AR node toward θ can be written as

$$
\vec{\nu}(\theta) \propto \exp\Big( \mathbb{E}_{q(y,x)\, q(\gamma)}\big[ \log f_{AR}(y, x, \theta, \gamma) \big] \Big)\,. \tag{35}
$$

In Appendix A we work out closed-form solutions for this and all other update rules, plus an evaluation of the free energy for the composite AR node. The results are reported in Table 1. With these rules in hand, the composite AR node can be plugged into any factor graph and take part in a message passing-based free energy minimization process.

Experiments
In this section, we first verify the proposed methodology by a simulation of the proposed TVAR model on synthetic data, followed by validation experiments on two real-world problems. We implemented all derived message passing rules in the open source Julia package ForneyLab.jl [28]. The code for the experiments and for the AR node can be found in public GitHub repositories. (https://github.com/biaslab/TVAR_FFG, accessed on 27 May 2021, https://github.com/biaslab/LAR, accessed on 27 May 2021) We used the following computer configuration to run the experiments. Operating system: macOS Big Sur; Processor: 2.7 GHz Quad-Core Intel Core i7; RAM: 16 GB.

Verification on a Synthetic Data Set
To verify the proposed TVAR inference methods, we synthesized data from two generative models m_1 and m_2, specified by Equation (36), with priors as given in (37), where M is the number of AR coefficients. Although these models differ only with respect to the dynamics of the AR coefficients θ, this difference has an important influence on the data generating process. The first model m_1 specifies a stationary AR process, since the factor δ(θ_t − θ_{t−1}) in (36a) indicates that θ is not time-varying in m_1. The second model m_2 represents a proper TVAR process, as the AR coefficients evolve according to a random walk. One-time-segment FFGs corresponding to Equation (36) are depicted in Figure 4. For each model, we generated a data set of 100 different time series, each of length 100 (so we have 2 × 100 × 100 data points). For each time series, as indicated by (37a), the AR order M of the generative process was randomly drawn from the set {1, 2, . . . , 10}. We used rather non-informative (broad) priors for the states and parameters of both models; see (37). This was done to ensure that the effect of the prior distributions is negligible relative to the information in the data set.
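A data-generation sketch for a random-walk-coefficient model in the style of m_2 is shown below. This is our illustrative fragment (the paper's data sets were generated with ForneyLab, and the parameter values here are arbitrary); we treat ω, γ and τ as precisions, consistent with the Gamma priors used elsewhere in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_tvar(T, theta0, omega, gamma, tau, rng):
    """Simulate a TVAR process in the style of model m2: the AR
    coefficients follow a random walk (precision omega), the state
    transition has process precision gamma, and the observations have
    measurement precision tau. Illustrative only; the paper's data
    were generated with ForneyLab."""
    M = len(theta0)
    theta = np.array(theta0, dtype=float)
    x = np.zeros(M)                                         # buffer (x_t, ..., x_{t-M+1})
    ys = []
    for _ in range(T):
        theta = theta + rng.normal(0.0, omega ** -0.5, M)   # coefficient random walk
        x_new = theta @ x + rng.normal(0.0, gamma ** -0.5)  # AR transition
        x = np.concatenate(([x_new], x[:-1]))               # bookkeeping shift
        ys.append(x_new + rng.normal(0.0, tau ** -0.5))     # noisy observation
    return np.array(ys)

y_sim = simulate_tvar(100, [0.4, -0.1], omega=1e4, gamma=4.0, tau=2.0, rng=rng)
```

Setting the random-walk precision ω very high recovers the stationary model m_1 in the limit, since the coefficients then barely move.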
These time series were used in the following experiments. We selected two recognition models m 1 and m 2 with the same specifications as were used for generating the data set. The recognition models were trained on time series that were generated by models with the same AR order.
We proceeded by computing the quantities q(x 1:T |y 1:T ), q(θ 1:T |y 1:T ), q(γ|y 1:T ) and F[q(z t−1 , z t |y 1:T )] (where z comprises all latent states and parameters) for both models, following the proposed rules from Table 1.
As a verification check, we first want to ensure that inference recovers the hidden states x_t for each t ∈ {1, 2, . . . , 100}. Secondly, we want to verify the convergence of the FE. As we have not used any approximations in the derivations of the variational messages, we expect a smoothly decreasing FE curve until convergence. The results of the verification stage are highlighted for a typical case in Figure 5. The figure confirms that the states x_t are accurately tracked and that a sliding average of the AR coefficients θ_t is also nicely tracked. Figure 5 also indicates that the FE uniformly decreases towards lower values as we spend more computational power.
We note that the FE score by itself does not explain whether the model is good or not, but it serves as a good measure for model comparison. In the following subsection, we demonstrate how FE scores can be used for model selection.

Temperature Modeling
AR models are well-known for predicting different weather conditions such as wind, temperature, precipitation, etc. Here, we revisit the problem of modeling daily temperature. We used a data set of daily minimum temperatures (in °C) in Melbourne, Australia, 1981-1990 (3287 days) (https://www.kaggle.com/paulbrabban/daily-minimum-temperatures-in-melbourne, accessed on 27 May 2021). We then corrupted the data set by adding random noise sampled from N (0, 10.0) to the actual temperatures. A fragment of the time series is depicted in Figure 6. To estimate the actual temperature from past noisy observations by computing q(x_t|y_{1:t}), we use a TVAR model with measurement noise that captures the uncertainty about the corrupted observations. The model is specified by the following equation set, with accompanying priors. Since the temperature data are not centered around 0 °C, we added a bias term η to the state x_t. The corresponding FFG is depicted in Figure 7.
Note that we put a Gamma prior on the measurement noise precision τ, meaning that we are uncertain about the size of the error in the thermometer readings. The inference task for the model is computing q(x_t|y_{1:t}); in other words, we track the states based only on past data. Of course, after training, we could use the model for temperature prediction by tracking q(x_{t+k}|y_{1:t}) for k ≥ 1. We compare the performance of four TVAR models with AR orders M = {1, 2, 3, 4}. To choose the best model, we computed the average FE score for each TVAR(M) model. Figure 8 shows that, on average, TVAR(3) outperforms its competitors. The complexity vs. accuracy decomposition (23) of the FE explains why a lower-order model may outperform higher-order models. TVAR(4) may be as accurate as, or more accurate than, TVAR(3), but the increase in accuracy is more than offset by the increase in complexity. For the lower-order models, it is the other way around: they are less complex and involve fewer computations than TVAR(3), but the reduction in complexity comes at too great a loss in data modeling accuracy. Overall, TVAR(3) is the best model for this data set. Practically, we always favor the model with the lowest FE score. In the next subsection, we use this technique (scoring FE) for online model selection.

Single-Channel Speech Enhancement
Single-channel speech enhancement (SCSE) is a well-known challenging task that aims to enhance noisy speech signals recorded by a single microphone. In single microphone recordings, we cannot use any spatial information of the kind commonly exploited in beamforming applications. Much work has been done on the SCSE task, ranging from Wiener filter-inspired signal processing techniques [35,36] to deep learning neural networks [37]. In this paper, we use data from the NOIZEUS speech corpus (https://ecs.utdallas.edu/loizou/speech/noizeus/, accessed on 27 May 2021) [38] and corrupted clean speech signals with white Gaussian noise, leading to a signal-to-noise ratio (SNR)

$$
\mathrm{SNR}(s_{1:T}, y_{1:T}) = 10 \log_{10} \frac{\sum_{t=1}^{T} s_t^2}{\sum_{t=1}^{T} (s_t - y_t)^2}\,,
$$

where s_{1:T} = (s_1, . . . , s_T) and y_{1:T} = (y_1, . . . , y_T) are the clean and corrupted speech signals, s_t is the speech sample at time t, and T is the length of the signal. Historically, AR models have been shown to perform well for modeling speech signals in the time (waveform) domain [39,40]. Despite the fact that speech is a highly non-stationary signal, we may assume it to be stationary within short time intervals (frames) of about 10 [ms] each [41]. Since voiced, unvoiced and silence frames have very different characteristics, we used 5 different models (a random walk model (RW), AR(1), AR(2), TVAR(1) and TVAR(2)) for each frame of 10 [ms] with 2.5 [ms] overlap. Given a sampling frequency of 8 [kHz], each frame comprises 80 samples with 20 samples of overlap. The AR and TVAR models were specified by Equation (36). For each frame, we evaluated the model performance by the minimized FE and selected the model with the lowest FE score. We used identical prior parameters for all models where possible. To recover the speech signal, we computed the mean values of q(x_t|y_{1:T}) of the selected model for each frame. The SNR gain of this SCSE system was SNR(s_{1:T}, x_{1:T}) − SNR(s_{1:T}, y_{1:T}) ≈ 3.7 dB.
(41) Figure 9 show the spectrograms of the clean, noisy and filtered signal respectively.  Figure 9. Spectrogram of recovered speech signal in the experiment of Section 6.3.
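For concreteness, the SNR definition and the framing scheme above can be sketched as follows; the signal here is synthetic and the helper names are our own, not part of any toolbox:

```python
import numpy as np

def snr_db(s, y):
    """SNR(s, y) = 10 log10( sum s_t^2 / sum (y_t - s_t)^2 ), in dB."""
    return 10.0 * np.log10(np.sum(s**2) / np.sum((y - s)**2))

def frame_signal(y, frame_len=80, overlap=20):
    """Split a signal into frames of frame_len samples, with `overlap`
    samples shared between consecutive frames (hop = frame_len - overlap)."""
    hop = frame_len - overlap
    starts = range(0, len(y) - frame_len + 1, hop)
    return np.stack([y[i:i + frame_len] for i in starts])

# Synthetic example: 1 s of a "clean" tone at 8 kHz plus white Gaussian noise.
rng = np.random.default_rng(1)
fs = 8000
t = np.arange(fs) / fs
s = np.sin(2 * np.pi * 200 * t)
y = s + rng.normal(0, 0.3, size=s.shape)

frames = frame_signal(y)        # 10 ms frames with 2.5 ms overlap
print(frames.shape)             # (num_frames, 80)
print(round(snr_db(s, y), 1))   # input SNR in dB
```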
Next, we analyze the inference results in more detail. Table 2 shows the percentage of winning models per frame, based on the free energy score. As we can see, for more than 30% of all frames, the random walk model performs best. This happens mostly because, for a silent frame, the AR model is penalized by its complexity term. We recognize that in about 90% of the frames the best models are AR(1) and RW. On the other hand, for frames where the speech signal transitions from silent or unvoiced to voiced, these fixed models start to fail and the time-varying AR models perform better. This effect can be seen in Figure 10. Finally, Figure 12 shows how the TVAR(2) model compares to the RW model on one of the unvoiced/silence frames. While the estimates of TVAR(2) appear to be more accurate, it pays a larger "price" in the complexity term of the FE score, and the RW model wins the frame.
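The per-frame competition summarized in Table 2 amounts to an argmin over FE scores followed by a tally. A minimal sketch with hypothetical scores:

```python
from collections import Counter

def tally_winners(frame_scores):
    """frame_scores: list of dicts mapping model name -> FE score for one frame.
    Returns the fraction of frames won by each model (lowest FE wins)."""
    wins = Counter(min(scores, key=scores.get) for scores in frame_scores)
    n = len(frame_scores)
    return {m: wins[m] / n for m in wins}

# Hypothetical FE scores for four frames (silence, voiced onset, voiced, silence).
frame_scores = [
    {"RW": 1.0, "AR(1)": 1.2, "AR(2)": 1.5, "TVAR(1)": 1.9, "TVAR(2)": 2.1},
    {"RW": 3.0, "AR(1)": 2.6, "AR(2)": 2.5, "TVAR(1)": 2.2, "TVAR(2)": 2.4},
    {"RW": 2.8, "AR(1)": 2.0, "AR(2)": 2.3, "TVAR(1)": 2.5, "TVAR(2)": 2.7},
    {"RW": 0.9, "AR(1)": 1.1, "AR(2)": 1.4, "TVAR(1)": 1.8, "TVAR(2)": 2.0},
]
shares = tally_winners(frame_scores)
print(shares)  # RW wins the two silence frames, AR(1) and TVAR(1) one each
```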

Discussion
We have introduced a TVAR model that includes efficient joint variational Bayesian tracking of states, parameters and free energy. The system can be used as a plug-in module in factor graph-based representations of other models. At several points in this paper, we have made design decisions that we briefly review here.
While FE computation for the AR node provides a convenient performance criterion for model selection, we noticed in the speech enhancement simulation that separate FE tracking for each candidate model leads to a large computational overhead. There are ways to improve this model selection process. One way is to consider a mixture of the candidate models and track the posterior over the mixture coefficients [42]. Alternatively, a very cheap method for online Bayesian model selection may be the recently developed Bayesian Model Reduction (BMR) method [43]. The BMR method is based on a generalization of the Savage-Dickey density ratio and supports tracking the free energies of multiple nested models with almost no computational overhead. Both methods seem to integrate well with a factor graph representation, and we plan to study this issue in future work.
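Since minimized FE serves as a proxy for negative log model evidence, per-model FE scores can in principle be mapped to approximate posterior model probabilities, which is the quantity a mixture over candidate models [42] would track. A hypothetical sketch (function name and numbers are our own):

```python
import numpy as np

def model_posterior(fe_scores, log_prior=None):
    """Approximate p(m | data) ∝ p(m) exp(-F_m), using minimized free
    energies F_m as stand-ins for negative log model evidence."""
    names = list(fe_scores)
    f = np.array([fe_scores[m] for m in names])
    lp = (np.zeros(len(names)) if log_prior is None
          else np.array([log_prior[m] for m in names]))
    logits = lp - f
    logits -= logits.max()      # numerical stability before exponentiating
    w = np.exp(logits)
    w /= w.sum()
    return dict(zip(names, w))

# Hypothetical minimized FE per candidate (lower FE = more evidence).
post = model_posterior({"RW": 10.2, "AR(1)": 9.1, "TVAR(1)": 9.8})
print(max(post, key=post.get))  # AR(1) receives the most posterior mass
```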
In this paper, the posterior factorization (31) supports the modeling of temporal dependencies between the input and output of the AR node in the posterior. Technically, (31) corresponds to a structured VMP assumption, in contrast to the more constrained mean-field VMP algorithm, which would be based on q(z) = ∏_i q_i(z_i), where z is the set of all latent variables [44]. We could also have worked out alternative update rules for the assumption of a joint factorization of the precision γ and the AR coefficients θ. In that case, the prior (the incoming message ν(θ, γ) to the AR node) would take the form of a Normal-Gamma distribution. While any of these assumptions is technically valid, each choice accepts a different trade-off in the accuracy-vs-complexity space. We review structured VMP in Appendix A.1.
In the temperature modeling task, we added some additional random variables (a bias and the measurement noise precision). To avoid identifiability issues, in (38a) we fixed the covariance matrix of the time-varying AR coefficients to the identity matrix. In principle, this constraint can be relaxed; for example, an Inverse-Wishart prior distribution can be placed on the covariance matrix.
In our speech enhancement experiments in Section 6.3, we assume that the measurement noise variance is known. In a real-world scenario, this information is usually not accessible. However, online tracking of measurement noise or other (hyper-)parameters is usually not a difficult extension when the process is simulated in a factor graph toolbox such as ForneyLab [28]. If so desired, we could add a prior on the measurement noise variance and track the posterior. The online free energy criterion (23) can be used to determine if the additional computational load (complexity) of Bayesian tracking of the variance parameter has been compensated by the increase in modeling accuracy.
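As an illustration of what placing a prior on the measurement noise and tracking its posterior involves, here is a minimal conjugate Gamma update for an observation precision given zero-mean residuals. This toy update is far simpler than the full message-passing treatment in ForneyLab; the function name and numbers are hypothetical:

```python
import numpy as np

def update_noise_precision(residuals, a0=1.0, b0=1.0):
    """Conjugate update for precision τ ~ Gamma(a0, b0) (shape/rate) given
    residuals r_t ~ N(0, 1/τ); returns the posterior shape and rate."""
    n = len(residuals)
    a = a0 + 0.5 * n
    b = b0 + 0.5 * float(np.sum(np.square(residuals)))
    return a, b

rng = np.random.default_rng(2)
true_precision = 4.0                       # i.e., noise std = 0.5
residuals = rng.normal(0, true_precision**-0.5, size=2000)
a, b = update_noise_precision(residuals)
print(round(a / b, 2))                     # posterior mean of τ, near 4.0
```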
The realization of the TVAR model in ForneyLab comes with some limitations. For large smoothing problems (say, more than 1000 data points), the computational load of message passing in ForneyLab becomes too heavy for a standard laptop (as used in this paper). Consequently, in the current implementation it is difficult to employ the AR node for processing large time series on a standard laptop. To circumvent this issue, when using ForneyLab, one can combine filtering and smoothing solutions into a batch learning procedure. In future work, we plan to remedy this issue through refactoring of ForneyLab. Additionally, the implemented AR node does not provide closed-form update rules for the marginal distribution when the probability distribution types of the incoming messages (priors) differ from the ones used in our work. Fortunately, ForneyLab supports falling back to (slower) sampling-based update rules when closed-form update rules are not available.

Conclusions
We presented a variational message passing approach to tracking states and parameters in latent TVAR models. The required update rules have been summarized and implemented in the factor graph package ForneyLab.jl, thus making transparent usage of TVAR factors available in freely definable stochastic dynamical systems. Aside from VMP update rules, we derived a closed-form expression for the variational free energy (FE) of an AR factor. Free energy can be used as a proxy for Bayesian model evidence and as such allows for model performance comparisons between the TVAR models and alternative structures. Owing to the locality and modularity of the FFG framework, we demonstrated how AR nodes can be applied as plug-in modules in various dynamic models. We verified the correctness of the rules on a synthetic data set and applied the proposed TVAR model to a few relatively simple but different real-world problems. In future work, we plan to extend the current factor graph-based framework to efficient and transparent tracking of AR model order and to online model comparison and selection with alternative models.

Acknowledgments: The authors would like to thank the BIASlab team members, in particular Thijs van de Laar, Ismail Senoz and Semih Akbayrak, for insightful discussions on various topics related to this work.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Derivations

Appendix A.1. Structured Variational Message Passing

Figure A1 represents the composite AR node, with corresponding node function f(y | x, θ, γ). The message update rule (24) implies a mean-field factorization, meaning that all variables represented by the edges around the factor node f are treated as mutually independent in the posterior. In this paper, we instead impose a structural dependence between states. To illustrate how structured VMP works, let us consider the example depicted in Figure A2. Figure A2. A node f(x, y, z) representing an arbitrary joint distribution. Arrows above the messages ν(·) indicate the direction (incoming or outgoing).
Suppose that we constrain the joint posterior as

q(x, y, z) = q(x, y) q(z). (A1)

The message passing algorithm for updating the marginal posteriors q*(x, y) and q*(z) can then be executed as follows: (1) compute the incoming messages ν(x), ν(y) and ν(z); (2) update the joint posterior (Equation (A3)):

q*(x, y) ∝ ν(x) ν(y) exp( E_{q(z)} [ log f(x, y, z) ] );

(3) compute the message toward z; (4) update the posterior (Equation (A5)):

q*(z) ∝ ν(z) exp( E_{q(x,y)} [ log f(x, y, z) ] ).

Every marginal update rule (Equations (25), (A3) and (A5)) corresponds to a coordinate descent step on the variational free energy, and therefore the free energy is guaranteed to converge to a local minimum.
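The convergence claim can be checked numerically on a toy model. The sketch below runs variational updates for a Gaussian observation model with unknown mean and precision (a fully mean-field factorization, simpler than the structured case above; priors and data are hypothetical) and records the free energy after every sweep:

```python
import numpy as np
from scipy.special import digamma, gammaln

# Toy model: y_i ~ N(mu, 1/tau), with independent priors
# mu ~ N(m0, s0sq) and tau ~ Gamma(a0, b0) (shape/rate).
# Each exact marginal update of q(mu) or q(tau) is a coordinate
# descent step on the free energy, so FE never increases.
rng = np.random.default_rng(3)
y = rng.normal(1.5, 0.5, size=200)
N, m0, s0sq, a0, b0 = len(y), 0.0, 10.0, 1.0, 1.0

def free_energy(m, ssq, a, b):
    """FE = -ELBO for q(mu) = N(m, ssq), q(tau) = Gamma(a, b)."""
    E_tau, E_logtau = a / b, digamma(a) - np.log(b)
    sq = np.sum((y - m) ** 2) + N * ssq
    elbo = (0.5 * N * (E_logtau - np.log(2 * np.pi)) - 0.5 * E_tau * sq
            - 0.5 * np.log(2 * np.pi * s0sq) - ((m - m0) ** 2 + ssq) / (2 * s0sq)
            + a0 * np.log(b0) - gammaln(a0) + (a0 - 1) * E_logtau - b0 * E_tau
            + 0.5 * np.log(2 * np.pi * np.e * ssq)            # entropy of q(mu)
            + a - np.log(b) + gammaln(a) + (1 - a) * digamma(a))  # entropy of q(tau)
    return -elbo

m, ssq, a, b = 0.0, 1.0, 1.0, 1.0
fes = []
for _ in range(10):
    # update q(mu) given current q(tau)
    prec = 1 / s0sq + (a / b) * N
    m, ssq = (m0 / s0sq + (a / b) * np.sum(y)) / prec, 1 / prec
    # update q(tau) given current q(mu)
    a = a0 + 0.5 * N
    b = b0 + 0.5 * (np.sum((y - m) ** 2) + N * ssq)
    fes.append(free_energy(m, ssq, a, b))

print(np.all(np.diff(fes) <= 1e-9))  # True: FE is monotonically non-increasing
```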

Appendix A.2. Auxiliary Node Function
Before obtaining the update messages for the TVAR node, we need to evaluate the auxiliary node function

f̃(x, y) ∝ exp( E_{q(γ)q(θ)} [ log f(y | x, θ, γ) ] ).

We also need to address the invertibility of the covariance matrix V. To tackle this problem, we assume ε > 0 with ε^2 ≈ 0, which allows us to introduce an invertible substitute for V. We work out the expectation term inside the trace separately. To do so, we notice that the product A x can be separated into the shifting operation S x and the inner vector product x^T θ:

A x = S x + c (x^T θ),

where S denotes the shifting operator and c a unit selection vector. In this way, we can write the auxiliary node function as the Gaussian N(y | m_A x, m_W^{-1}), with m_A = E_{q(θ)}[A] and m_W = E_{q(γ)}[W], and the outgoing message ν(y) follows from the log of N(y | m_A x, m_W^{-1}).

The outgoing variational message to θ is obtained by working with the log message log ν(θ) instead of ν(θ) itself. To proceed, we recall the useful property

E_{q(x)} [ x^T Σ x ] = m_x^T Σ m_x + tr( Σ V_x ),

where Σ is an arbitrary diagonal matrix. Next, we work out the term

tr( m_W (S x + c x^T θ)(S x + c x^T θ)^T ) = tr( m_W ( S x x^T S^T + S x θ^T x c^T + c x^T θ x^T S^T + c x^T θ θ^T x c^T ) )

and move the terms that do not depend on θ into the constant.

For the message to γ, we split the expression under the expectation into four terms:

I: W E_{q(x,y)q(θ)}[ y y^T ], II: W E_{q(x,y)q(θ)}[ A x y^T ], III: W E_{q(x,y)q(θ)}[ y x^T A^T ], IV: W E_{q(x,y)q(θ)}[ A x x^T A^T ].

The first three terms are evaluated by recalling Equation (A6); term IV expands as

W E_{q(x,y)q(θ)}[ A x x^T A^T ] = W E_{q(x,y)q(θ)}[ (S + c θ^T) x x^T (S + c θ^T)^T ] = W E_{q(x,y)q(θ)}[ S x x^T S^T + c θ^T x x^T S^T + S x x^T θ c^T + c θ^T x x^T θ c^T ],

after which the expectation over q(θ) is taken term by term, with m_θ = E_{q(θ)}[θ]. As the resulting message should depend solely on γ, we need to eliminate all terms that incorporate the matrix W; for an arbitrary matrix M of the same dimensionality as W, only the first element M_{11} of such products carries the dependence on γ. After exponentiating log ν(γ), this yields a Gamma distribution.

Appendix A.7. Derivation of q(x, y)

The joint recognition distribution q(x, y) follows by rearranging the terms in the Gaussian N(y | m_A x, m_W^{-1}). The resulting precision matrix W_q is, to put it mildly, quite far from a nice shape, as it contains the "unpleasant" matrix m_W with ε^{-1} on its diagonal. We therefore work out the covariance matrix V_q = W_q^{-1}. To do this, we recall two important matrix identities and denote the block elements of W_q accordingly. Working out the auxiliary terms D^{-1}, D^{-1} C and B D^{-1}, we obtain expressions that do not have a nice form, but that no longer contain the "unpleasant" matrix m_W.
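The two matrix identities invoked in the derivation of V_q are presumably the block-matrix inversion lemma and the Woodbury identity, reproduced here for reference:

```latex
\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1}
=
\begin{pmatrix}
(A - B D^{-1} C)^{-1} & -(A - B D^{-1} C)^{-1} B D^{-1} \\
-D^{-1} C (A - B D^{-1} C)^{-1} & D^{-1} + D^{-1} C (A - B D^{-1} C)^{-1} B D^{-1}
\end{pmatrix},
\qquad
(A + U C V)^{-1} = A^{-1} - A^{-1} U \left( C^{-1} + V A^{-1} U \right)^{-1} V A^{-1}.
```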