Article

On Filtering and Smoothing Algorithms for Linear State-Space Models Having Quantized Output Data

by Angel L. Cedeño 1,2,*, Rodrigo A. González 3, Boris I. Godoy 4, Rodrigo Carvajal 5 and Juan C. Agüero 1,2
1 Electronics Engineering Department, Universidad Técnica Federico Santa María, Av. España 1680, Valparaíso 2390123, Chile
2 Advanced Center for Electrical and Electronic Engineering, AC3E, Gral. Bari 699, Valparaíso 2390136, Chile
3 Department of Mechanical Engineering, Eindhoven University of Technology, 5612 AZ Eindhoven, The Netherlands
4 Department of Automatic Control, Lund University, 221 00 Lund, Sweden
5 School of Electrical Engineering, Pontificia Universidad Católica de Valparaíso, Av. Brasil 2147, Valparaíso 2374631, Chile
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(6), 1327; https://doi.org/10.3390/math11061327
Submission received: 30 January 2023 / Revised: 6 March 2023 / Accepted: 7 March 2023 / Published: 9 March 2023
(This article belongs to the Section Engineering Mathematics)

Abstract:
The problem of state estimation of a linear, dynamical state-space system where the output is subject to quantization is challenging and important in different areas of research, such as control systems, communications, and power systems. There are a number of methods and algorithms to deal with this state estimation problem. However, there is no consensus in the control and estimation community on (1) which methods are more suitable for a particular application and why, and (2) how these methods compare in terms of accuracy, computational cost, and user friendliness. In this paper, we provide a comprehensive overview of the state-of-the-art algorithms to deal with state estimation subject to quantized measurements, and an exhaustive comparison among them. The comparison analysis is performed in terms of the accuracy of the state estimation, dimensionality issues, hyperparameter selection, user friendliness, and computational cost. We consider classical approaches and a new development in the literature to obtain the filtering and smoothing distributions of the state conditioned to quantized data. The classical approaches include the extended Kalman filter/smoother, the quantized Kalman filter/smoother, the unscented Kalman filter/smoother, and the sequential Monte Carlo sampling method, also called particle filter/smoother, with its most relevant variants. We also consider a new approach based on the Gaussian sum filter/smoother. Extensive numerical simulations—including a practical application—are presented in order to analyze the accuracy of the state estimation and the computational cost.

1. Introduction

In the last two decades, there has been a growing number of applications for sensors, networks, and sensor networks, where common problems include dealing with the loss of information in signals measured with low-resolution sensors, or storing and/or transmitting a reduced representation of such signals in order to minimize the resource consumption in a communication channel [1]. This kind of problem involves a nonlinear process called quantization, which divides the input signal space into a finite (or infinite but countable) number of intervals, with each interval represented by a single output value [2]. Applications using quantized data include networked control [3,4], fault detection [3,5,6,7], cyber-physical systems [8,9], multi-target tracking [10], and system identification [11,12,13,14], just to mention a few. In these applications, a key element is the state estimation of a dynamical system conditioned upon the available quantized observations. For instance, ref. [15] deals with the problem of state estimation and control of a microgrid incorporating multiple distributed energy resources, where the state estimation and control are based on uniformly quantized observations that are transmitted through a wireless channel.
For linear systems subject to additive Gaussian noise, expressions for the filtering and smoothing probability density functions (filtering and smoothing PDFs) that give estimators with optimal mean-square-error properties can be obtained from the Kalman filter (KF) and the Rauch–Tung–Striebel smoother (KS), respectively [16]. However, for general nonlinear systems subject to non-Gaussian additive noise, it is difficult (or even impossible in most cases) to obtain closed-form expressions for these PDFs and, therefore, expressions for a state estimator given input/output data. Many sub-optimal methods have been developed in order to obtain an approximation of the desired PDFs and an estimate of the state vector; see, e.g., [17,18,19]. For example, the extended Kalman filter (EKF) was studied in [20] to deal with quantized data. This approach is difficult to implement since the quantizer is a non-differentiable nonlinear function and the EKF requires the computation of a Jacobian matrix. The authors in [20] proposed to approximate the quantizer by a smooth function in order to compute an approximation of the Jacobian matrix. However, since the quantization function is highly nonlinear, the EKF and the extended Kalman smoother (EKS) typically produce inaccurate estimates of the system state. The Kalman filter has also been modified to include the quantization effect in the computation of the filtering state, which has been referred to as the quantized Kalman filter (QKF) [21,22]. The unscented Kalman filter (UKF) was applied in [23] to deal with quantized innovation systems in a wireless sensor network environment. The UKF is based on the unscented transformation [18], which represents the mean and covariance of a Gaussian random variable through a reduced number of points. These points are then propagated through the nonlinear function to accurately capture the mean and covariance of the propagated Gaussian random variable.
An advantage of the UKF lies in its high estimation accuracy and convergence rate, together with its simplicity of implementation compared with the EKF, since it avoids the computation of the Jacobian matrix required in the EKF method.
One of the most used methods in nonlinear filtering is the sequential Monte Carlo sampling approach called particle filtering (PF) [24]. The PF uses a set of weights and samples to create an approximation of the filtering and smoothing PDFs. The main advantages of the combined PF and particle smoothing (PS) are the simplicity of implementation and the ability to deal with nonlinear and non-Gaussian systems. Nevertheless, the PF also has drawbacks. One of them is the degeneracy problem, where most of the weights calculated at one iteration of the PF approach go to zero [25]. To get around this issue, [24] proposed an approach called resampling, in which the heavily weighted particles are replicated (possibly several times) and the remaining particles are discarded. A number of resampling techniques have been developed in the literature, such as the systematic (SYS), multinomial (ML), Metropolis (MT), and local selection (LS) resampling methods [26]. These resampling techniques have advantages such as unbiasedness and the capacity for parallelism. Naturally, the performance of the particle filter depends upon the implemented resampling method [27]. Unfortunately, the resampling process produces a loss of diversity in the particle set, since the particles with high weights are replicated. This problem is called sample impoverishment [25]; to mitigate it, a Markov chain Monte Carlo (MCMC) move is usually introduced after the resampling step to provide diversity to the samples, so that the new particle set is still distributed according to the posterior PDF [28]. There are mainly two MCMC methods that we can use to deal with the impoverishment problem: Gibbs and Metropolis–Hastings (MH) sampling. Here, we discuss the MH algorithm and a special MH algorithm called random walk Metropolis (RWM) [29,30] in conjunction with the aforementioned resampling methods.
In addition to the methods detailed above, a new algorithm to deal with quantized output data was proposed in [31,32]. In these works, the authors defined the probability mass function of the quantized data conditioned upon the system state as an integral whose limits depend on the quantizer regions. This integral is approximated by using Gauss–Legendre quadrature [33], yielding a model with a Gaussian sum structure. Such a model is later used to develop a Gaussian sum filter (GSF) and smoother (GSS) that provide closed-form filtering and smoothing distributions for systems with quantized output data.
Despite the wide availability of the commonly used and also novel state-estimation methods described above, there is no consensus among the control and estimation community on (1) which methods are more suitable for a particular application and why, and (2) how these methods compare in terms of accuracy, computational cost, and user friendliness. This paper provides a comprehensive overview of the state-of-the-art algorithms to deal with the problem of state estimation of linear dynamical systems using quantized observations. This work aims to serve both as an introduction to the EKF/EKS, QKF/QKS, UKF/UKS, GSF/GSS, and PF/PS algorithms for estimation using quantized output data, and to provide clear guidelines on the advantages and shortcomings of each algorithm based on the accuracy of the state estimation, dimensionality issues, hyperparameter selection, user friendliness, and computational cost.
The organization of this paper is as follows: In Section 2, we define the problem of state estimation with quantized output data. In Section 3, we present a comprehensive review of the most effective filtering and smoothing methods that are available in the literature. In Section 4, a numerical example to show the traits of each method is presented, and in Section 5 a practical application is used for testing the algorithms. In Section 6, general user guidelines are provided. Finally, concluding remarks are given in Section 7.

2. Statement of the Problem

This paper considers the filtering and smoothing problem for the following discrete-time, LTI state-space system with quantized output (see Figure 1):
$$x_{t+1} = A x_t + B u_t + w_t, \quad (1)$$
$$z_t = C x_t + D u_t + v_t, \quad (2)$$
$$y_t = q\{z_t\}, \quad (3)$$
where $x_t \in \mathbb{R}^n$ is the state vector, $u_t \in \mathbb{R}^m$ is the input of the system, $z_t \in \mathbb{R}$ is the non-quantized output, and $y_t \in \mathbb{R}$ is the quantized output. The matrices satisfy $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{1 \times n}$, and $D \in \mathbb{R}^{1 \times m}$. The nonlinear map $q\{\cdot\}$ is the quantizer. The state noise $w_t \in \mathbb{R}^n$ and the output noise $v_t \in \mathbb{R}$ are zero-mean white Gaussian noises with covariance matrices $Q$ and $R$, respectively. Due to the random components (i.e., the noises $w_t$ and $v_t$) in (1) and (2), the state-space model can be described using the state transition PDF $p(x_{t+1}|x_t) \sim \mathcal{N}(x_{t+1}; A x_t + B u_t, Q)$ and the non-quantized output PDF $p(z_t|x_t) \sim \mathcal{N}(z_t; C x_t + D u_t, R)$, with $x_1 \sim \mathcal{N}(x_1; \mu_1, P_1)$, where $\mathcal{N}(x; \mu, P)$ denotes the PDF of a Gaussian distribution with mean $\mu$ and covariance matrix $P$ of the random variable $x$. The initial condition $x_1$, the model noise $w_t$, and the measurement noise $v_t$ are statistically independent random variables.
On the other hand, the nonlinear map $q\{\cdot\}: \mathbb{R} \to \Psi$ is the quantizer, where $\Psi \subset \mathbb{R}$ is the output set. More precisely, $q\{\cdot\}$ is given by [1]:
$$y_t = q\{z_t\} = \psi_k \quad \text{if } z_t \in \mathcal{R}_k, \; k \in \mathcal{K}, \quad (4)$$
where $\psi_k$ is the $k$th output value in the output set $\Psi$, $\mathcal{R}_k$ is the $k$th interval mapped to the value $\psi_k$, and the index set $\mathcal{K}$ defines the number of quantization levels of the output set $\Psi$. Here we consider two types of quantizers: (i) an infinite-level quantizer (ILQ), in which the output has infinitely many (but countable) levels of quantization with
$$\mathcal{K} = \{\ldots, 1, 2, \ldots, L, \ldots\}. \quad (5)$$
Here, $\mathcal{R}_k = \{z_t : q_{k-1} \le z_t < q_k\}$ are disjoint intervals and each $\psi_k$ is the value that the quantizer takes in the region $\mathcal{R}_k$; and (ii) a finite-level quantizer (FLQ), in which the output of the quantizer is limited to minimum and maximum values (saturated quantizer), similar to (4) but with
$$\mathcal{K} = \{1, 2, \ldots, L-1, L\}. \quad (6)$$
Notice that the FLQ is comprised of finite and semi-infinite intervals, given by $\mathcal{R}_1 = \{z_t : z_t < q_1\}$, $\mathcal{R}_L = \{z_t : q_{L-1} \le z_t\}$, and $\mathcal{R}_k = \{z_t : q_{k-1} \le z_t < q_k\}$ with $k = 2, \ldots, L-1$. Usually, the sets $\mathcal{R}_k$ and the output values $\psi_k$ are defined in terms of the quantization step $\Delta$; see, e.g., [12,34].
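As a concrete illustration, a saturated uniform quantizer of this kind can be implemented in a few lines; the step `delta` and the level count `n_levels` are illustrative parameters, not values from the paper:

```python
import numpy as np

def quantize_flq(z, delta=1.0, n_levels=11):
    """Finite-level (saturated) uniform quantizer: a sketch of q{z_t} with
    regions R_k = [k*delta, (k+1)*delta) and outputs clipped to the range
    [0, (n_levels - 1)*delta]."""
    k = np.floor(np.asarray(z, dtype=float) / delta)  # region index of z
    k = np.clip(k, 0, n_levels - 1)                   # saturate at the extremes
    return k * delta                                  # output value psi_k
```

With the defaults, `quantize_flq(3.7)` returns `3.0`, and any input above the top region saturates to `10.0`.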
Thus, the problem of interest can be defined as follows: given the available data $u_{1:N} = \{u_1, u_2, \ldots, u_N\}$ and $y_{1:N} = \{y_1, y_2, \ldots, y_N\}$, where $N$ is the data length, obtain the filtering and smoothing PDFs of the state given the quantized measurements, $p(x_t|y_{1:t})$ and $p(x_t|y_{1:N})$, respectively; the state estimators
$$\hat{x}_{t|t} = \mathrm{E}\{x_t | y_{1:t}\} = \int x_t \, p(x_t|y_{1:t}) \, dx_t, \quad (7)$$
$$\hat{x}_{t|N} = \mathrm{E}\{x_t | y_{1:N}\} = \int x_t \, p(x_t|y_{1:N}) \, dx_t, \quad (8)$$
and the corresponding covariance matrices of the estimation error:
$$\Sigma_{t|t} = \mathrm{E}\{(x_t - \hat{x}_{t|t})(x_t - \hat{x}_{t|t})^\top | y_{1:t}\} = \int (x_t - \hat{x}_{t|t})(x_t - \hat{x}_{t|t})^\top p(x_t|y_{1:t}) \, dx_t, \quad (9)$$
$$\Sigma_{t|N} = \mathrm{E}\{(x_t - \hat{x}_{t|N})(x_t - \hat{x}_{t|N})^\top | y_{1:N}\} = \int (x_t - \hat{x}_{t|N})(x_t - \hat{x}_{t|N})^\top p(x_t|y_{1:N}) \, dx_t, \quad (10)$$
where $t \le N$ and $\mathrm{E}\{x|y\}$ denotes the conditional expectation of $x$ given $y$.

2.1. Practical Application: Liquid-Level System

To give context to the problem stated above, we included a practical application. Consider the liquid-level system shown in Figure 2; see, e.g., [35]. The goal is to estimate the liquid level in a tank using the measurements obtained by a low-cost sensor based on a variable resistor that is attached to an arm with a floater. This sensor varies its resistance in discrete steps, providing the quantized measurements $y_t \in \{0, 1, \ldots, 9, 10\}$. $F_1$ and $F_2$ are the total inflow and outflow rates, respectively, and $f_1$ and $f_2$ are small deviations of the inflow and outflow rates from the steady-state value $F_0$. Additionally, $h$ is a small deviation of the liquid level of the tank from the steady-state value $H_0$. The total liquid level is $H = H_0 + h$.
The linearized model (around the operating points $F_0$ and $H_0$) that relates the input $f_1$ to the output $h$ is given by the differential equation $a_1 \frac{dh}{dt} + h = a_2 f_1$. Setting $a_1 = 0.1$, $a_2 = 1$, and a sampling period $T = 0.1$ leads to the following state-space model:
$$x_{t+1} = 0.3678\, x_t + u_t + w_t, \quad (11)$$
$$z_t = 0.6321\, x_t + v_t, \quad (12)$$
where $x_t$ is the level of the liquid present in the tank and $z_t$ is a nonavailable signal which is transformed into quantized measurements by the sensor with the following model:
$$q\{z_t\} = \begin{cases} 0 & \text{if } 0 \le z_t < 1, \\ 1 & \text{if } 1 \le z_t < 2, \\ \;\vdots & \\ 9 & \text{if } 9 \le z_t < 10, \\ 10 & \text{if } z_t \ge 10. \end{cases} \quad (13)$$
Using only input data and the quantized sensor measurements, the goal is to estimate the liquid level at every time instant $t = nT$, where $n \in \mathbb{Z}$.
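To make the setup concrete, the following sketch simulates the model (11)-(13) and produces quantized level readings; the noise variances and the input signal are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
Q, R = 0.01, 0.01                          # illustrative noise variances
u = 2.0 + np.sin(0.05 * np.arange(N))      # hypothetical inflow excitation
x = np.zeros(N + 1)                        # liquid-level state
y = np.zeros(N)                            # quantized sensor readings
for t in range(N):
    z = 0.6321 * x[t] + rng.normal(0.0, np.sqrt(R))                # output (12)
    y[t] = min(max(np.floor(z), 0.0), 10.0)                        # sensor (13)
    x[t + 1] = 0.3678 * x[t] + u[t] + rng.normal(0.0, np.sqrt(Q))  # state (11)
```

The vector `y` then plays the role of the only output information available to the filtering algorithms of Section 3.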

3. Recursive Filtering and Smoothing Methods for Quantized Output Data

3.1. Bayesian Filtering and Smoothing

Under a Bayesian framework, the filtering distributions admit the following recursions; see, e.g., [16]:
$$p(x_t|y_{1:t-1}) = \int p(x_t|x_{t-1})\, p(x_{t-1}|y_{1:t-1})\, dx_{t-1}, \quad (14)$$
$$p(x_t|y_{1:t}) = \frac{p(y_t|x_t)\, p(x_t|y_{1:t-1})}{p(y_t|y_{1:t-1})}, \quad (15)$$
where (14) and (15) are the time- and measurement-update equations, respectively. The PDF $p(x_t|x_{t-1})$ is directly obtained from the model in (1), and $p(y_t|y_{1:t-1})$ is a normalization constant. On the other hand, the Bayesian smoothing equation, see, e.g., [16], is defined by the following:
$$p(x_t|y_{1:N}) = p(x_t|y_{1:t}) \int \frac{p(x_{t+1}|y_{1:N})\, p(x_{t+1}|x_t)}{p(x_{t+1}|y_{1:t})}\, dx_{t+1}. \quad (16)$$
Notice that to obtain $p(x_t|y_{1:t})$ in (15), we need the probability density function (PDF) $p(y_t|x_t)$. Since $y_t$ is a discrete random variable, the probabilistic model of $p(y_t|x_t)$ is a probability mass function (PMF). Then, the measurement-update equation in (15) combines both a PDF and a PMF. Here, we use generalized probability density functions, see, e.g., [36], which combine both discrete and absolutely continuous distributions. In [32], an integral equation for $p(y_t|x_t)$ is defined in order to solve the filtering recursion as follows:
$$p(y_t|x_t) = \int_{a_t}^{b_t} \mathcal{N}(v_t; 0, R)\, dv_t. \quad (17)$$
Here, $a_t$ and $b_t$ are functions of the boundary values of each region of the quantizers, defined in Table 1.
Notice that $y_t|x_t$ in (17) is a non-Gaussian random variable, which leads to non-Gaussian measurement- and time-update distributions. However, the EKF, QKF, and UKF filters are developed under the assumption that the measurement- and time-update distributions are Gaussian, which yields a loss of accuracy in the estimation. On the other hand, (17) is used in the GSF/GSS and PF/PS, where the Gaussian assumption is not needed.
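Since the integrand in (17) is Gaussian, $p(y_t|x_t)$ can also be evaluated exactly as a difference of Gaussian CDFs. The sketch below does this for the saturated sensor of Section 2.1; the cell edges and helper names are our own illustrative choices:

```python
from math import erf, sqrt

def gauss_cdf(x, var):
    """CDF of N(0, var) evaluated at x."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0 * var)))

def quantized_likelihood(y, x, C, D, u, R, delta=1.0, y_min=0.0, y_max=10.0):
    """p(y_t | x_t) as in (17): the probability mass of N(v_t; 0, R) over the
    interval [a_t, b_t] that maps z_t = C x_t + D u_t + v_t onto the level y."""
    mean = C * x + D * u
    a = -float('inf') if y <= y_min else y - mean          # lower edge a_t
    b = float('inf') if y >= y_max else y + delta - mean   # upper edge b_t
    lo = 0.0 if a == -float('inf') else gauss_cdf(a, R)
    hi = 1.0 if b == float('inf') else gauss_cdf(b, R)
    return hi - lo
```

Because the quantizer cells partition the real line, these probabilities sum to one over all output levels for any state value.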

3.2. Extended State-Space System

To implement filtering and smoothing algorithms such as the EKF/EKS and the UKF/UKS, the state-space model in (1)–(3) is rewritten in an extended form as follows:
$$x_{t+1}^e = \mathcal{A} x_t^e + \mathcal{B} u_t^e + w_t^e, \quad (18)$$
$$y_t = q\{\mathcal{C} x_t^e\} + \xi_t. \quad (19)$$
The extended system matrices are given by
$$\mathcal{A} = \begin{bmatrix} A & 0 \\ CA & 0 \end{bmatrix}, \quad \mathcal{B} = \begin{bmatrix} B & 0 \\ CB & D \end{bmatrix}, \quad \mathcal{C} = [\,0 \;\; 1\,], \quad (20)$$
where $x_t^e = [x_t^\top \;\; z_t]^\top$ is the extended state and $u_t^e = [u_t^\top \;\; u_{t+1}^\top]^\top$ is the extended input, with $u_{N+1} = 0$. Notice that the extended system transforms the algebraic equation of the linear output $z_t$ into a recursive equation. It is then necessary to define an initial condition for $z_t$ at $t = 1$. Considering $z_1 \sim \mathcal{N}(z_1; 0, \sigma_z)$, the initial condition of the extended vector becomes $x_1^e \sim \mathcal{N}(x_1^e; \mu_1^e, P_1^e)$, where $\mu_1^e = [\mu_1^\top \;\; 0]^\top$ and $P_1^e = \mathrm{diag}\{P_1, \sigma_z\}$. The noise $w_t^e = [w_t^\top \;\; (C w_t + v_{t+1})]^\top$ satisfies $w_t^e \sim \mathcal{N}(w_t^e; 0, \mathcal{Q})$ with
$$\mathcal{Q} = \begin{bmatrix} Q & Q C^\top \\ C Q & C Q C^\top + R \end{bmatrix}. \quad (21)$$
The noise $\xi_t$ is added in order to obtain the adequate structure of the system to implement the EKF/EKS and the UKF/UKS. However, this does not imply that the measurements are corrupted by the noise $\xi_t$. The idea of including an extra noise in the model was proposed to ensure a full-rank covariance matrix in the EM algorithm; see, e.g., [37]. We assume that $\xi_t \sim \mathcal{N}(\xi_t; 0, \varepsilon)$, where the variance $\varepsilon$ is small.
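The block construction of (20) and (21) is mechanical; a minimal sketch (shapes follow (1)-(2), and the function name is ours):

```python
import numpy as np

def build_extended_system(A, B, C, D, Q, R):
    """Assemble the extended matrices of (18)-(21), appending the scalar
    output z_t to the state so that EKF/UKF machinery can be applied."""
    A, B, C, D, Q, R = map(np.atleast_2d, (A, B, C, D, Q, R))
    n, m = A.shape[0], B.shape[1]
    Ae = np.block([[A, np.zeros((n, 1))], [C @ A, np.zeros((1, 1))]])
    Be = np.block([[B, np.zeros((n, m))], [C @ B, D]])
    Ce = np.hstack([np.zeros((1, n)), np.ones((1, 1))])
    Qe = np.block([[Q, Q @ C.T], [C @ Q, C @ Q @ C.T + R]])
    return Ae, Be, Ce, Qe
```

For the scalar liquid-level model of Section 2.1, `build_extended_system(0.3678, 1.0, 0.6321, 0.0, Q, R)` yields 2-by-2 extended matrices.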

3.3. Extended Kalman Filtering and Smoothing

The idea of the EKF [38,39] is to form a linear approximation around a state estimate using a Taylor series expansion. The EKF is not directly applicable to the problem of interest in this paper because the quantizer is a non-differentiable nonlinear function. In [20], it was suggested that it is possible to compute the Kalman gain using a smooth arctan-based approximation of the quantizer. Thus, following the idea in [20] and the representation of the arctan function found in [40], the following approximation of the quantizer is proposed:
$$q\{z_t\} \approx h(z_t) = \begin{cases} (\Delta/\pi)\arctan\!\big((z_t - 1.5\Delta)/\rho\big) + 1.5\Delta & \text{if } \Delta \le z_t < 2\Delta, \\ (\Delta/\pi)\arctan\!\big((z_t - 0.5\Delta)/\rho\big) + 0.5\Delta & \text{if } 0 \le z_t < \Delta, \\ (\Delta/\pi)\arctan\!\big((z_t + 0.5\Delta)/\rho\big) - 0.5\Delta & \text{if } -\Delta \le z_t < 0, \\ (\Delta/\pi)\arctan\!\big((z_t + 1.5\Delta)/\rho\big) - 1.5\Delta & \text{if } -2\Delta \le z_t < -\Delta. \end{cases} \quad (22)$$
Here, $\rho$ is a user parameter that defines how well the approximation fits the quantizer function at the switch points, as shown in Figure 3.
On the other hand, by using the smooth approximation of the quantizer, it is possible to approximate the nonlinear system as a linear time-varying system as follows:
$$x_{t+1}^e = \mathcal{A} x_t^e + \mathcal{B} u_t^e + w_t^e, \quad (23)$$
$$y_t = H_t x_t^e + F_t + \xi_t, \quad (24)$$
where $H_t$ is the Jacobian matrix of $h(\mathcal{C} x_t^e)$ with respect to $x_t^e$, evaluated at $\hat{x}_{t|t-1}^e$, and $F_t = h(\mathcal{C} \hat{x}_{t|t-1}^e) - H_t \hat{x}_{t|t-1}^e$. Then, the equations of the EKF and EKS are summarized in Algorithms 1 and 2, respectively. One of the difficulties in applying the EKF to deal with quantized data is the computation of the Jacobian matrix $H_t$. Despite the approximation of the quantizer, the Jacobian is nearly zero for all values of $\hat{x}_{t|t-1}^e$, except at the exact switch points, as shown in Figure 4 (left), where $\rho = 0.001\Delta$. Additionally, Figure 4 (center and right) shows that the quantizer and Jacobian approximations worsen as $\rho$ increases, which reduces the accuracy of the estimation.
Algorithm 1: Extended Kalman filter algorithm for quantized output data
Algorithm 2: Extended Kalman smoother algorithm for quantized output data
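The smooth approximation (22) and its derivative are straightforward to code. The sketch below centres each arctan piece the way (22) does; `delta` and `rho` are the user parameters, and the function names are ours:

```python
import numpy as np

def smooth_quantizer(z, delta=1.0, rho=0.001):
    """Arctan-based smooth approximation of the uniform quantizer, as in (22):
    each piece is (delta/pi)*arctan((z - c)/rho) + c around a step centre c."""
    z = np.asarray(z, dtype=float)
    c = (np.floor(z / delta) + 0.5) * delta        # centre of the active piece
    return (delta / np.pi) * np.arctan((z - c) / rho) + c

def smooth_quantizer_jac(z, delta=1.0, rho=0.001):
    """Derivative of smooth_quantizer w.r.t. z: nearly zero everywhere except
    close to the switch points, the EKF difficulty illustrated in Figure 4."""
    z = np.asarray(z, dtype=float)
    c = (np.floor(z / delta) + 0.5) * delta
    return (delta / np.pi) * rho / (rho**2 + (z - c)**2)
```

At the centre of a piece the approximation reproduces the quantizer value exactly, while the derivative collapses toward zero away from the switch points, which is why the EKF innovation carries little information for most state values.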

3.4. Unscented Kalman Filtering and Smoothing

The unscented Kalman filter [18] is a deterministic sampling-based approach that uses samples called sigma points to propagate the mean and covariance of the system state (assumed to be a Gaussian random variable) through the nonlinear functions of the system. These propagated points capture the mean and covariance of the posterior state accurately to the third-order Taylor series expansion for any nonlinear function [41]. The key idea of the UKF is to directly approximate the mean and covariance of the posterior distribution instead of approximating nonlinear functions [18]. The unscented Kalman filter is based on the unscented transformation of the random variable $x \in \mathbb{R}^n$ into the random variable $y = g(x) + v$, where $g(\cdot)$ is a nonlinear function, $x \sim \mathcal{N}(x; m, \Gamma)$, and $v \sim \mathcal{N}(v; 0, P)$. Thus, the sigma points are defined by
$$\mathcal{X}_0 = m, \quad (25)$$
$$\mathcal{X}_\tau = m + \sqrt{n + \lambda}\, \big[\Gamma^{1/2}\big]_\tau, \quad (26)$$
$$\mathcal{X}_{\tau+n} = m - \sqrt{n + \lambda}\, \big[\Gamma^{1/2}\big]_\tau. \quad (27)$$
Here, $\tau = 1, \ldots, n$, the scaling parameter is $\lambda = \alpha^2 (n + \kappa) - n$, the parameters $\alpha$ and $\kappa$ determine the spread of the sigma points around the mean, and the notation $[P]_\tau$ refers to the $\tau$th column of the matrix $P$. The weights associated with the unscented transformation are the sets
$$\{\varphi_0, \varphi_1, \ldots, \varphi_{2n}\} = \{\lambda \zeta, \; 0.5\zeta, \; \ldots, \; 0.5\zeta\}, \quad (28)$$
$$\{\sigma_0, \sigma_1, \ldots, \sigma_{2n}\} = \{\lambda \zeta + \varrho, \; 0.5\zeta, \; \ldots, \; 0.5\zeta\}, \quad (29)$$
where $\zeta = (n + \lambda)^{-1}$ and $\varrho = 1 - \alpha^2 + \beta$. In this set of equations, $\beta$ is an additional parameter that can be used to incorporate prior information on the distribution of $x$. Then, the statistics of the transformed random variable are the mean $\mu = \sum_{\tau=0}^{2n} \varphi_\tau\, g(\mathcal{X}_\tau)$ and the covariance matrix $\Phi = \sum_{\tau=0}^{2n} \sigma_\tau \big(g(\mathcal{X}_\tau) - \mu\big)\big(g(\mathcal{X}_\tau) - \mu\big)^\top + P$. Additionally, the cross-covariance matrix between $x$ and $y$ is given by $\Psi = \sum_{\tau=0}^{2n} \sigma_\tau \big(\mathcal{X}_\tau - m\big)\big(g(\mathcal{X}_\tau) - \mu\big)^\top$. The steps to implement the UKF are summarized in Algorithm 3. Notice that, for the problem of interest in this paper, the process equation is a linear function. Thus, the UKS algorithm is similar to the EKS but uses the filtering and predictive distributions obtained from the UKF algorithm.
Algorithm 3: Unscented Kalman filter algorithm for quantized output data
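The unscented transform of (25)-(29) can be sketched compactly; the `alpha`, `beta`, `kappa` defaults are common choices, and the additive noise covariance $P$ is omitted here for brevity:

```python
import numpy as np

def unscented_transform(m, Gamma, g, alpha=1e-3, beta=2.0, kappa=0.0):
    """Unscented transform of x ~ N(m, Gamma) through y = g(x), following
    (25)-(29): 2n+1 sigma points, weight set {phi} for the mean and {sigma}
    for the covariance."""
    m = np.atleast_1d(np.asarray(m, dtype=float))
    Gamma = np.atleast_2d(np.asarray(Gamma, dtype=float))
    n = m.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * Gamma)       # sqrt(n+lambda) * Gamma^{1/2}
    X = np.column_stack([m, m[:, None] + S, m[:, None] - S])   # sigma points
    zeta = 1.0 / (n + lam)
    phi = np.full(2 * n + 1, 0.5 * zeta); phi[0] = lam * zeta
    sig = phi.copy(); sig[0] = lam * zeta + (1.0 - alpha**2 + beta)
    Y = np.array([np.atleast_1d(g(X[:, i])) for i in range(2 * n + 1)])
    mu = phi @ Y                                    # transformed mean
    Phi = (sig[:, None] * (Y - mu)).T @ (Y - mu)    # transformed covariance
    return mu, Phi
```

For a linear `g` the transform is exact: propagating N(1, 0.25) through g(x) = 2x recovers mean 2 and variance 1.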

3.5. Quantized Kalman Filtering and Smoothing

The quantized Kalman filter is an alternative version of the Kalman filter that modifies the measurement update equation to include the quantization effect in the computation of the filtering distributions. This modification can be performed in different ways [21,22]. In this work, we use the following modification of the KF:
$$\hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t \big( y_t - q\{ C \hat{x}_{t|t-1} + D u_t \} \big), \quad (30)$$
where $K_t$ is the Kalman gain. Notice that, with the modification in (30), the QKS algorithm is similar to the standard Kalman smoother (KS).
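One QKF measurement update in the spirit of (30) can be sketched as follows; the gain used here is the standard Kalman gain, which is one of the variants discussed in [21,22], and the function names are ours:

```python
import numpy as np

def qkf_update(x_pred, P_pred, y, C, D, u, R, quantizer):
    """QKF measurement update as in (30): the innovation is formed against the
    quantized predicted output q{C x + D u}."""
    C = np.atleast_2d(C)
    S = C @ P_pred @ C.T + R                       # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)            # standard Kalman gain
    y_pred = quantizer((C @ x_pred + D * u).item())  # quantized output prediction
    x_filt = x_pred + (K * (y - y_pred)).ravel()   # state update per (30)
    P_filt = (np.eye(P_pred.shape[0]) - K @ C) @ P_pred
    return x_filt, P_filt
```

With the liquid-level sensor as the quantizer, an observed level above the quantized prediction pulls the estimate upward while the covariance contracts, as in the standard KF.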

3.6. Gaussian Sum Filtering and Smoothing

The Gaussian sum filter [31,32] is a novel approach to deal with quantized output data. The key idea of the GSF is to approximate the integral of $p(y_t|x_t)$ given in (17) using the Gauss–Legendre quadrature rule. This approximation produces a model with a Gaussian sum structure as follows:
$$p(y_t|x_t) \approx \sum_{\tau=1}^{K} \varsigma_t^\tau\, \mathcal{N}\big(\eta_t^\tau;\; C x_t + D u_t + \mu_t^\tau,\; R\big). \quad (31)$$
Here, $K$ is the number of points from the Gauss–Legendre quadrature rule; $\varsigma_t^\tau$, $\eta_t^\tau$, and $\mu_t^\tau$ are defined in Table 2; and $\omega_\tau$ and $\psi_\tau$ are the weights and points defined by the quadrature rule, given in, e.g., [33].
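The quadrature step behind (31) is standard Gauss–Legendre integration of the Gaussian density over a quantizer cell. A sketch for a bounded cell $[a_t, b_t]$ (unbounded cells require the additional handling of [32], which we skip here; the interval and variance below are illustrative):

```python
import numpy as np
from math import erf, sqrt

def p_y_given_x_quad(a, b, R, K=10):
    """Gauss-Legendre approximation of (17): the integral of N(v; 0, R)
    over [a, b], using K quadrature nodes mapped from [-1, 1]."""
    psi, omega = np.polynomial.legendre.leggauss(K)
    v = 0.5 * (b - a) * psi + 0.5 * (a + b)        # nodes mapped to [a, b]
    dens = np.exp(-0.5 * v**2 / R) / np.sqrt(2.0 * np.pi * R)
    return 0.5 * (b - a) * np.sum(omega * dens)

# Compare against the exact Gaussian integral written with erf:
exact = 0.5 * (erf(0.8 / sqrt(2 * 0.25)) - erf(-0.3 / sqrt(2 * 0.25)))
approx = p_y_given_x_quad(-0.3, 0.8, 0.25, K=10)
```

For a smooth Gaussian integrand, ten nodes already match the exact value to well below the accuracy needed in practice.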
Using the approximation of $p(y_t|x_t)$ given in (31), the Gaussian sum filter iterates between the following two steps:
$$p(x_t|y_{1:t}) = \sum_{k=1}^{M_{t|t}} \gamma_{t|t}^k\, \mathcal{N}(x_t; \hat{x}_{t|t}^k, \Sigma_{t|t}^k), \quad (32)$$
$$p(x_{t+1}|y_{1:t}) = \sum_{k=1}^{M_{t+1|t}} \gamma_{t+1|t}^k\, \mathcal{N}(x_{t+1}; \hat{x}_{t+1|t}^k, \Sigma_{t+1|t}^k). \quad (33)$$
All quantities in this recursion can be computed following Algorithm 4. On the other hand, to compute the smoothing distribution, (16) is separated into two formulas to avoid the division by a non-Gaussian distribution [42]. The first formula is the backward recursion, defined as follows (obtained by using the approximation of $p(y_t|x_t)$ given in (31)):
$$p(y_{t+1:N}|x_t) = \sum_{k=1}^{S_{t|t+1}} \epsilon_{t|t+1}^k\, \lambda_{t|t+1}^k \exp\!\Big(\!-\tfrac{1}{2}\big[ x_t^\top F_{t|t+1}^k x_t - 2 (G_{t|t+1}^k)^\top x_t + H_{t|t+1}^k \big]\Big), \quad (34)$$
$$p(y_{t:N}|x_t) = \sum_{k=1}^{S_{t|t}} \epsilon_{t|t}^k\, \lambda_{t|t}^k \exp\!\Big(\!-\tfrac{1}{2}\big[ x_t^\top F_{t|t}^k x_t - 2 (G_{t|t}^k)^\top x_t + H_{t|t}^k \big]\Big). \quad (35)$$
All quantities in this recursion can be computed following Algorithm 5. The second formula computes the smoothing distribution as follows:
$$p(x_t|y_{1:N}) = \sum_{k=1}^{S_{t|N}} \epsilon_{t|N}^k\, \mathcal{N}(x_t; \hat{x}_{t|N}^k, \Sigma_{t|N}^k). \quad (36)$$
In this equation, all quantities can be computed following Algorithm 6.
Algorithm 4: Gaussian sum filter algorithm for quantized output data.

Input: The PDF of the initial state $p(x_1)$, e.g., $M_{1|0} = 1$, $\gamma_{1|0} = 1$, $\hat{x}_{1|0} = \mu_1$, $\Sigma_{1|0} = P_1$. The points of the Gauss–Legendre quadrature $\{\omega_\tau, \psi_\tau\}_{\tau=1}^K$.
for $t = 1$ to $N$ do
  Compute and store $\varsigma_t^\tau$, $\eta_t^\tau$, and $\mu_t^\tau$ according to Table 2.
  Measurement update:
  Set $M_{t|t} = K M_{t|t-1}$.
  for $\ell = 1$ to $M_{t|t-1}$ do
    for $\tau = 1$ to $K$ do
      Set the index $k = (\ell - 1)K + \tau$.
      Compute the weights, means, and covariance matrices as follows:
      $\bar{\gamma}_{t|t}^k = \varsigma_t^\tau\, \gamma_{t|t-1}^\ell\, \mathcal{N}\big(\eta_t^\tau;\; C \hat{x}_{t|t-1}^\ell + D u_t + \mu_t^\tau,\; R + C \Sigma_{t|t-1}^\ell C^\top\big)$,
      $\gamma_{t|t}^k = \bar{\gamma}_{t|t}^k \big( \textstyle\sum_{s=1}^{M_{t|t}} \bar{\gamma}_{t|t}^s \big)^{-1}$,
      $K_t = \Sigma_{t|t-1}^\ell C^\top \big( R + C \Sigma_{t|t-1}^\ell C^\top \big)^{-1}$,
      $\hat{x}_{t|t}^k = \hat{x}_{t|t-1}^\ell + K_t \big( \eta_t^\tau - C \hat{x}_{t|t-1}^\ell - D u_t - \mu_t^\tau \big)$,
      $\Sigma_{t|t}^k = (I - K_t C)\, \Sigma_{t|t-1}^\ell$.
    end
  end
  Perform the Gaussian-sum-reduction algorithm according to [43] to obtain the reduced GMM of $p(x_t|y_{1:t})$.
  Time update:
  Set $M_{t+1|t} = M_{t|t}$.
  for $k = 1$ to $M_{t+1|t}$ do
    Compute and store the weights, means, and covariance matrices as follows:
    $\gamma_{t+1|t}^k = \gamma_{t|t}^k$, $\hat{x}_{t+1|t}^k = A \hat{x}_{t|t}^k + B u_t$, $\Sigma_{t+1|t}^k = Q + A \Sigma_{t|t}^k A^\top$.
  end
end
Output: The filtering PDFs $p(x_t|y_{1:t})$, the predictive PDFs $p(x_{t+1}|y_{1:t})$, and the set $\{\varsigma_t^\tau, \eta_t^\tau, \mu_t^\tau\}$, for $t = 1, \ldots, N$.
Algorithm 5: Backward-filtering algorithm for quantized output data.

Input: The initial backward measurement $p(y_N|x_N)$ at $t = N$ with parameters $S_{N|N} = K$ and
$\epsilon_{N|N}^k = \varsigma_N^k$, $\lambda_{N|N}^k = \det\{2\pi R\}^{-1/2}$, $\theta_N^k = \eta_N^k - D u_N - \mu_N^k$,
$F_{N|N}^k = C^\top R^{-1} C$, $(G_{N|N}^k)^\top = (\theta_N^k)^\top R^{-1} C$, $H_{N|N}^k = (\theta_N^k)^\top R^{-1} \theta_N^k$,
with the set $\{\varsigma_t^\tau, \eta_t^\tau, \mu_t^\tau\}$ for $t = 1, \ldots, N$ computed from Algorithm 4.
for $t = N-1$ to $1$ do
  Backward prediction:
  Set $S_{t|t+1} = S_{t+1|t+1}$.
  for $k = 1$ to $S_{t|t+1}$ do
    Compute and store the backward prediction quantities as follows:
    $\epsilon_{t|t+1}^k = \epsilon_{t+1|t+1}^k$,
    $\lambda_{t|t+1}^k = \big( \det\{Q\} \det\{F_q^k\} \big)^{-1/2}\, \lambda_{t+1|t+1}^k$,
    $F_{t|t+1}^k = A^\top M_q^k A$,
    $(G_{t|t+1}^k)^\top = \big( (G_{t+1|t+1}^k)^\top (F_q^k)^{-1} Q^{-1} - u_t^\top B^\top M_q^k \big) A$,
    $H_{t|t+1}^k = H_{t+1|t+1}^k - (G_{t+1|t+1}^k)^\top (F_q^k)^{-1} G_{t+1|t+1}^k + u_t^\top B^\top M_q^k B u_t - 2 u_t^\top B^\top Q^{-1} (F_q^k)^{-1} G_{t+1|t+1}^k$,
    where $F_q^k = F_{t+1|t+1}^k + Q^{-1}$ and $M_q^k = Q^{-1} - Q^{-1} (F_q^k)^{-1} Q^{-1}$.
  end
  Backward measurement update:
  Set $S_{t|t} = K S_{t|t+1}$.
  for $\ell = 1$ to $S_{t|t+1}$ do
    for $\tau = 1$ to $K$ do
      Set the index $k = (\ell - 1)K + \tau$.
      Compute the backward measurement update quantities as follows:
      $\epsilon_{t|t}^k = \varsigma_t^\tau\, \epsilon_{t|t+1}^\ell$, $\theta_t^\tau = \eta_t^\tau - D u_t - \mu_t^\tau$, $\lambda_{t|t}^k = \det\{2\pi R\}^{-1/2}\, \lambda_{t|t+1}^\ell$,
      $F_{t|t}^k = F_{t|t+1}^\ell + C^\top R^{-1} C$, $(G_{t|t}^k)^\top = (G_{t|t+1}^\ell)^\top + (\theta_t^\tau)^\top R^{-1} C$, $H_{t|t}^k = H_{t|t+1}^\ell + (\theta_t^\tau)^\top R^{-1} \theta_t^\tau$.
    end
  end
  Compute the GMM structure of $p(y_{t:N}|x_t)$ using Lemma A.3 in [32].
  Perform the Gaussian sum reduction algorithm according to [43] to obtain the reduced GMM structure of $p(y_{t:N}|x_t)$; see Equation (54) in [32]:
  $$p(y_{t:N}|x_t) = \sum_{k=1}^{S_{\mathrm{red}}} \delta_{t|t}^k\, \mathcal{N}\big(x_t;\; z_{t|t}^k,\; U_{t|t}^k\big),$$
  where $S_{\mathrm{red}}$, $\delta_{t|t}^k$, $z_{t|t}^k$, and $U_{t|t}^k$ are the number of Gaussian components kept after the Gaussian reduction procedure, the weight, the mean, and the covariance matrix, respectively.
  Compute and store the backward filter form of the reduced version of $p(y_{t:N}|x_t)$ using Lemma A.3 in [32].
end
Output: The backward prediction $p(y_{t+1:N}|x_t)$ and the backward measurement update $p(y_{t:N}|x_t)$ for $t = N, \ldots, 1$.
Algorithm 6: Gaussian sum smoothing algorithm for quantized output data.

Input: The PDFs $p(x_t|y_{1:t-1})$ and $p(x_N|y_{1:N})$ obtained from Algorithm 4 and the reduced version of $p(y_{t:N}|x_t)$ obtained from Algorithm 5; see (54) in [32].
Save the PDF $p(x_N|y_{1:N})$.
for $t = N-1$ to $1$ do
  Set $S_{t|N} = M_{t|t-1} S_{\mathrm{red}}$.
  for $\ell = 1$ to $S_{\mathrm{red}}$ do
    for $\tau = 1$ to $M_{t|t-1}$ do
      Set the index $k = (\ell - 1) M_{t|t-1} + \tau$.
      Compute the weights, means, and covariance matrices as follows:
      $\epsilon_{t|N}^k = \bar{\epsilon}_{t|N}^k \big( \textstyle\sum_{s=1}^{S_{t|N}} \bar{\epsilon}_{t|N}^s \big)^{-1}$,
      $\hat{x}_{t|N}^k = \Sigma_{t|t-1}^\tau \big( U_{t|t}^\ell + \Sigma_{t|t-1}^\tau \big)^{-1} z_{t|t}^\ell + U_{t|t}^\ell \big( U_{t|t}^\ell + \Sigma_{t|t-1}^\tau \big)^{-1} \hat{x}_{t|t-1}^\tau$,
      $\Sigma_{t|N}^k = \Sigma_{t|t-1}^\tau \big( U_{t|t}^\ell + \Sigma_{t|t-1}^\tau \big)^{-1} U_{t|t}^\ell$,
      where
      $$\bar{\epsilon}_{t|N}^k = \gamma_{t|t-1}^\tau\, \delta_{t|t}^\ell\, \frac{\exp\!\big(\!-\tfrac{1}{2} (\hat{x}_{t|t-1}^\tau - z_{t|t}^\ell)^\top \big( U_{t|t}^\ell + \Sigma_{t|t-1}^\tau \big)^{-1} (\hat{x}_{t|t-1}^\tau - z_{t|t}^\ell)\big)}{(2\pi)^{n/2} \sqrt{\det\{U_{t|t}^\ell + \Sigma_{t|t-1}^\tau\}}}.$$
    end
  end
end
Output: The smoothing PDFs $p(x_t|y_{1:N})$, for $t = 1, \ldots, N$.

3.7. Particle Filtering and Smoothing

Particle filtering [24,25] is a Monte Carlo method that approximately represents the filtering distributions $p(x_t|y_{1:t})$ of the state conditioned on the observations $y_{1:t}$ by using a set of weighted random samples, called particles, so that
$$p(x_t|y_{1:t}) \approx \sum_{i=1}^{M} w_t^{(i)}\, \delta\big(x_t - x_t^{(i)}\big), \quad (37)$$
where $\delta(\cdot)$ is the Dirac delta function, $w_t^{(i)}$ denotes the $i$th weight, $x_t^{(i)}$ denotes the $i$th particle sampled from the filtering distribution $p(x_t|y_{1:t})$, and $M$ is the number of particles. Since the filtering distribution is unknown at the current iteration, it is difficult or impossible to sample directly from it. In this case, the particles are usually generated from a known density that is chosen (by the user) to facilitate the sampling process. This is called importance sampling, and the PDF is called the importance density. Then, the importance weight computation can be carried out in a recursive fashion (sequential importance sampling, SIS) as follows:
$$w_t^{(i)} \propto w_{t-1}^{(i)}\, \frac{p(y_t|x_t^{(i)})\, p(x_t^{(i)}|x_{t-1}^{(i)})}{h(x_t^{(i)}|x_{t-1}^{(i)}, y_t)}, \quad (38)$$
where $h(x_t|x_{t-1}^{(i)}, y_t)$ is the importance density and $w_{t-1}^{(i)}$ are the importance weights of the previous iteration. The choice of importance distribution is critical for performing particle filtering and smoothing. The particle filter literature shows that the importance density $p(x_t|x_{t-1}^{(i)}, y_t)$ is optimal in the sense that it minimizes the variance of the importance weights $w_t^{(i)}$ [16,25]. However, in most cases, it is difficult or impossible to draw samples from this optimal importance density, except in particular cases such as a state-space model with a nonlinear process and a linear output equation [25]. Many sub-optimal methods have been developed to approximate the importance density, such as Markov chain Monte Carlo [44], the ensemble Kalman filter [28], local linearization of the state-space model, and local linearization of the optimal importance distribution [25], among others. One of the most commonly used importance densities in the literature is the state transition prior $p(x_t|x_{t-1})$; see, e.g., [25,31,45]. This choice yields an intuitive and simple-to-implement algorithm with $w_t^{(i)} \propto w_{t-1}^{(i)}\, p(y_t|x_t^{(i)})$. This algorithm is called the bootstrap filter [24].
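A bootstrap filter for the quantized liquid-level model of Section 2.1 can be sketched as follows; the transition prior (11) is the importance density, the weights use the quantizer-cell probability of (17), and the noise variances, initialization, and input are illustrative choices:

```python
import numpy as np
from math import erf

def bootstrap_pf_quantized(y, u, M=500, Q=0.01, R=0.01, seed=1):
    """Bootstrap PF: propagate particles through (11), weight each one by the
    probability mass of its predicted output over the observed quantizer cell,
    and resample (multinomial) at every step."""
    rng = np.random.default_rng(seed)
    ncdf = np.vectorize(lambda s: 0.5 * (1.0 + erf(s / np.sqrt(2.0 * R))))
    x = rng.normal(0.0, 1.0, M)                     # initial particle cloud
    means = np.zeros(len(y))
    for t in range(len(y)):
        x = 0.3678 * x + u[t] + rng.normal(0.0, np.sqrt(Q), M)  # propagate (11)
        zm = 0.6321 * x                             # predicted unquantized output
        lo = np.full(M, -np.inf) if y[t] <= 0 else y[t] - zm    # cell edges (13)
        hi = np.full(M, np.inf) if y[t] >= 10 else y[t] + 1.0 - zm
        w = np.maximum(ncdf(hi) - ncdf(lo), 1e-300) # weights via (17)
        w /= w.sum()
        means[t] = np.sum(w * x)                    # filtered mean estimate
        x = rng.choice(x, size=M, p=w)              # multinomial resampling
    return means

# Hypothetical usage: filter a simulated run of the model.
sim = np.random.default_rng(7)
u = np.full(100, 2.0)
x_true, ys = 0.0, []
for t in range(100):
    x_true = 0.3678 * x_true + u[t] + sim.normal(0.0, 0.1)
    z = 0.6321 * x_true + sim.normal(0.0, 0.1)
    ys.append(min(max(np.floor(z), 0.0), 10.0))
est = bootstrap_pf_quantized(np.array(ys), u, M=300)
```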
The particle filter suffers from a problem called the degeneracy phenomenon. As shown in [25], the variance in the importance weights can only increase over time. This implies that after a few iterations, most particles have negligible weights; see also [46]. A consequence of the degeneracy problem is that a large computational effort is devoted to updating particles whose contribution to the final estimate is nearly zero. To solve the degeneracy problem, the resampling approach was proposed in [24]. The resampling method eliminates the particles that have small weights, and the particles with large weights are replicated, generating a new set (with replacement) of equally weighted particles.
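As a concrete example, the widely used systematic resampling scheme (one of the variants compared later) can be sketched as follows; this is a generic textbook version under the stated mechanism, not necessarily the exact implementation used in the experiments.

```python
import numpy as np

def systematic_resample(weights, rng):
    # Draw one uniform offset and sweep M evenly spaced points through
    # the cumulative sum of the weights: heavily weighted particles are
    # replicated, negligible ones are dropped (sampling with replacement).
    M = len(weights)
    positions = (rng.random() + np.arange(M)) / M
    cdf = np.cumsum(weights)
    cdf[-1] = 1.0  # guard against floating-point round-off
    return np.searchsorted(cdf, positions)
```

After resampling, the particles $x_t^{(i)}$ are replaced by the selected copies and the weights are reset to $1/M$.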
Additionally, the resampling step used to reduce the degeneracy effect produces another unwanted issue called particle impoverishment. This effect entails a loss of diversity in the sample set, since the resampled set contains many repeated points generated from heavily weighted particles. In the worst-case scenario, all particles can originate from a single particle with a large weight [29]. To alleviate the impoverishment problem, methods such as roughening and regularization have been suggested in the literature [26]. Markov chain Monte Carlo (MCMC) is another method used after the resampling step to add variability to the resampled particles [47]. The basic idea is to apply an MCMC algorithm to each resampled particle with $p(x_t \mid y_{1:t})$ as the target distribution. That is, we build a Markov chain by sampling a proposal particle $x_t^{*}$ from a proposal density. Then, $x_t^{*}$ is accepted only if $u \leq \varpi(x_t^{*}, x_t)$, with $u \sim \mathcal{U}[0,1]$, where $\mathcal{U}[a_1, a_2]$ denotes the uniform distribution over the interval $[a_1, a_2]$, and $\varpi(x_t^{*}, x_t)$ is the acceptance ratio given by
$$\varpi(x_t^{*}, x_t) = \min\left\{1,\; \frac{p(y_t \mid x_t^{*})}{p(y_t \mid x_t)}\right\}.$$
With this procedure, the diversity of the new particles is greater than that of the resampled ones, reducing the risk of particle impoverishment. Additionally, the new particles are distributed according to $p(x_t \mid y_{1:t})$. In this paper, we use the MH and RWM algorithms to build the MCMC step. In Algorithm 7, we summarize the steps to implement the particle filter with the MCMC move.
Algorithm 7: MCMC-based particle filter algorithm for quantized output data
1  Input: $p(x_1)$ and the number of particles $M$.
2  Draw the samples $x_1^{(i)} \sim p(x_1)$ and set $w_1^{(i)} = 1/M$ for $i = 1, \dots, M$.
3  for $t = 1$ to $N$ do
4     Draw the samples $x_t^{(i)} \sim h(x_t \mid x_{t-1}^{(i)}, y_t)$ from the importance distribution for $i = 1, \dots, M$.
5     Calculate the weights $w_t^{(i)}$ according to (38), using $p(y_t \mid x_t)$ given in (17), for $i = 1, \dots, M$.
6    Normalize the weights $w_t^{(i)}$ to sum to one.
7    Perform resampling to generate a new set of weights $w_t^{(i)}$ and particles $x_t^{(i)}$ for $i = 1, \dots, M$. Notice that the ML, SYS, and MT resampling algorithms yield $w_t^{(i)} = 1/M$ for $i = 1, \dots, M$; the LS algorithm produces a new set of non-uniform weights; see, e.g., [26].
8    Implement the MCMC move: for $\ell = 1, \dots, M$:
9      Pick the sample $x_t^{(\ell)}$ from the set of resampled particles.
10        MH: Sample a proposal particle $x_t^{*}$ from the proposal PDF.
11        RWM: Generate $x_t^{+} \sim \mathcal{N}(0, \Lambda^2)$ ($\Lambda$ is defined by the user) and compute $x_t^{*} = x_t^{(\ell)} + x_t^{+}$.
12       Evaluate $\varpi(x_t^{*}, x_t^{(\ell)})$ given in (39). If $u \leq \varpi(x_t^{*}, x_t^{(\ell)})$, then accept the move ($x_t^{(\ell)} = x_t^{*}$); else reject the move (keep $x_t^{(\ell)}$).
13  end
14  Output: $w_t^{(i)}$ and $x_t^{(i)} \sim p(x_t \mid y_{1:t})$, $i = 1, \dots, M$.
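The RWM branch of the MCMC move (steps 8 to 12) can be sketched in vectorized form as follows. Here `loglik`, an assumed user-supplied function for $\log p(y_t \mid x_t)$, defines the target, and the acceptance test is the log-domain version of (39); the function name and scale parameter are illustrative.

```python
import numpy as np

def rwm_move(particles, loglik, rng, Lam=10.0):
    # One random walk Metropolis step per resampled particle, with
    # p(y_t | x_t) as the (unnormalized) target as in (39).
    proposals = particles + rng.normal(0.0, Lam, particles.shape)
    log_ratio = loglik(proposals) - loglik(particles)
    # u <= min(1, ratio)  <=>  log(u) <= log_ratio, since log(u) <= 0.
    accept = np.log(rng.random(particles.shape)) <= log_ratio
    return np.where(accept, proposals, particles)
```

Accepted particles take the proposed value, rejected ones keep the resampled value, which adds diversity to the particle set without changing its target distribution.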
Similar to particle filtering, particle smoothing is a Monte Carlo method that approximately represents the smoothing distribution $p(x_t \mid y_{1:N})$ of the state conditioned on the observations $y_{1:N}$, using random samples as follows:
$$p(x_t \mid y_{1:N}) \approx \sum_{i=1}^{M} w_{t|N}^{(i)}\, \delta\!\left(x_t - \tilde{x}_t^{(i)}\right),$$
where $w_{t|N}^{(i)}$ denotes the $i$th weight, $\tilde{x}_t^{(i)}$ denotes the $i$th particle sampled from the smoothing distribution $p(x_t \mid y_{1:N})$, and $M$ is the number of particles. Some smoothing algorithms are based on the particles $x_t^{(i)}$ provided by the particle filter, such as the backward-simulation particle smoother [48] and the marginal particle smoother [25]. In particular, in the marginal particle smoother, the weights $w_{t|N}^{(i)}$ are updated in reverse time as follows:
$$w_{t|N}^{(i)} = w_t^{(i)} \sum_{j=1}^{M} w_{t+1|N}^{(j)}\, \frac{p(x_{t+1}^{(j)} \mid x_t^{(i)})}{\sum_{k=1}^{M} w_t^{(k)}\, p(x_{t+1}^{(j)} \mid x_t^{(k)})},$$
where $w_{N|N}^{(i)} = w_N^{(i)}$ for $i = 1, \dots, M$, and the approximation in (40) is obtained with $\tilde{x}_t^{(i)} = x_t^{(i)}$ for $t = N, \dots, 1$. In this paper, we use the smoothing method developed in [49]; the problem of interest in this work admits further simplifications, see also [50]. This smoothing method [49] requires the evaluation of the function $f(x_{t+1}^{(i)}, x_t^{(\tau)})$ given by
$$f(x_{t+1}^{(i)}, x_t^{(\tau)}) = \exp\left(-\tfrac{1}{2}\, \eta_t^{\top} Q^{-1} \eta_t\right),$$
where $\eta_t = x_{t+1}^{(i)} - A x_t^{(\tau)} - B u_t$. In Algorithm 8, we summarize the steps to implement the particle smoother.
Algorithm 8: Rejection-based particle smoother algorithm for quantized output data
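For comparison with the rejection-based smoother summarized in Algorithm 8, the marginal smoother recursion in (41) can be sketched as follows for the scalar linear-Gaussian model, where $p(x_{t+1} \mid x_t) = \mathcal{N}(x_{t+1}; A x_t + B u_t, Q)$. This is an $O(M^2 N)$ illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def marginal_smoother_weights(w_filt, particles, A, B, u, Q):
    # Reverse-time reweighting of the filter particles as in (41) for a
    # scalar state. w_filt, particles: (N, M) arrays; u: length-N input.
    N, M = w_filt.shape
    w_smooth = w_filt.copy()  # initialization: w_{N|N}^{(i)} = w_N^{(i)}
    for t in range(N - 2, -1, -1):
        # trans[j, i] ∝ p(x_{t+1}^{(j)} | x_t^{(i)}); the Gaussian
        # normalization constant cancels between numerator and denominator.
        eta = particles[t + 1][:, None] - A * particles[t][None, :] - B * u[t]
        trans = np.exp(-0.5 * eta**2 / Q)
        denom = trans @ w_filt[t]   # Σ_k w_t^{(k)} p(x_{t+1}^{(j)} | x_t^{(k)})
        w_smooth[t] = w_filt[t] * (trans.T @ (w_smooth[t + 1] / denom))
    return w_smooth
```

The cancellation of the normalization constant is the reason only the exponential factor of the transition density needs to be evaluated.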
On the other hand, given the weights $w^{(i)}$ and particles $x^{(i)}$ (from the particle filter or smoother), the state estimators in (7) and (8) and the covariance matrices of the estimation error in (9) and (10) can be computed from
$$\mathbb{E}\{g(x_t) \mid s\} \approx \sum_{i=1}^{M} w^{(i)}\, g(x^{(i)}),$$
where $g(x_t)$ represents a function of $x_t$. For example, the mean and covariance matrix of the filtering and smoothing distributions can be computed with $g(x_t) = x_t$ and $g(x_t) = (x_t - \mathbb{E}\{x_t\})(x_t - \mathbb{E}\{x_t\})^{\top}$, respectively. The variable $s$ represents the observation set that is used: $s = y_{1:t}$ for filtering and $s = y_{1:N}$ for smoothing.
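For a scalar state, these sample approximations can be evaluated as follows; the function name is illustrative.

```python
import numpy as np

def weighted_moments(weights, particles):
    # Particle approximations of the mean E{x_t | s} and of the error
    # variance, using g(x) = x and g(x) = (x - E{x})^2, respectively.
    mean = np.sum(weights * particles)
    var = np.sum(weights * (particles - mean) ** 2)
    return mean, var
```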

4. Numerical Experiment

In this section, we present a numerical example to analyze the performance of KF/KS, EKF/EKS, QKF/QKS, UKF/UKS, GSF/GSS, and MCMC-based PF/PS having quantized observations. We use the discrete-time system in the state-space form given in (1) and (2) with
$$y_t = \Delta\, \mathrm{round}(z_t / \Delta).$$
In (44), $\Delta$ is the quantization step, and $\mathrm{round}(\cdot)$ rounds to the nearest integer (implemented with the Matlab function round). The sets $\mathcal{R}_k$ are computed using $q_{k-1} = y_t - 0.5\Delta$ and $q_k = y_t + 0.5\Delta$. We compare the performance of all filtering and smoothing algorithms considering eight variations of the PF, where we use the Markov chain Monte Carlo methods MH and RWM with the following resampling methods: SYS, ML, MT, and LS. For clarity of presentation, we use the bootstrap filter, and we solve the integral in (17) using the cumulative distribution function computed with the Matlab function mvncdf. We consider the state-space system given by (1) and (2) with $A = 0.9$, $B = 1.2$, $C = 2.2$, and $D = 0.75$. We also consider $w_t \sim \mathcal{N}(w_t; 0, 1)$, $v_t \sim \mathcal{N}(v_t; 0, 0.5)$, an input signal drawn from $\mathcal{N}(0, 1)$, an initial state $x_1 \sim \mathcal{N}(x_1; 1, 0.01)$, and a quantization step $\Delta = 8$. To implement the EKF/EKS we set $\rho = 0.1$; for the UKF/UKS, $\alpha = 0.001$, $\kappa = 0.001$, and $\beta = 1$; for the GSF/GSS, $K = 10$; and for the PF/PS, $\Lambda^2 = 100$ and $M = 100, 500, 1000$.
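The quantizer (44) and the associated cell bounds can be reproduced, for instance, as follows. Note that np.round rounds halves to the nearest even integer, unlike Matlab's round, which is immaterial away from the cell edges.

```python
import numpy as np

def quantize(z, Delta=8.0):
    # Mid-tread uniform quantizer of (44): y_t = Δ round(z_t / Δ).
    return Delta * np.round(z / Delta)

def cell_bounds(y, Delta=8.0):
    # Quantization region R_k of an observed level y_t:
    # q_{k-1} = y_t - 0.5Δ and q_k = y_t + 0.5Δ.
    return y - 0.5 * Delta, y + 0.5 * Delta
```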
In Figure 5 and Figure 6, we show the filtering and smoothing distributions, i.e., $p(x_t \mid y_{1:t})$ and $p(x_t \mid y_{1:N})$, for a given time instant. We freeze the results of KF, QKF, EKF, UKF, and GSF to observe the behavior of the PF when varying the number of samples and when different MCMC methods are combined with different resampling algorithms. These figures show that the PDFs obtained using GSF/GSS are the ones that best fit the ground truth, followed by PF/PS. Furthermore, in Figure 7 and Figure 8, we show the boxplot of the mean square error (MSE) between the estimated and the true state after running 1000 Monte Carlo experiments. These figures show a loss of accuracy in the state estimates obtained with KF/KS, EKF/EKS, QKF/QKS, and UKF/UKS, and better performance for GSF/GSS and PF/PS (except for the PF version that uses LS resampling). Additionally, we observe that, in terms of accuracy, the PF/PS implementations that yield the lowest MSE are the ones that use the RWM move with the SYS, ML, and MT resampling methods. In terms of computational load, the PS in almost all its versions exhibits the highest execution time, followed by the GSS, EKS, UKS, and KS (see Figure 9). In Table 3, we rank all the algorithms studied in the present manuscript in terms of the mean of the MSE and the execution time. This table suggests that there is a trade-off between the accuracy of the estimates and the execution time in the case of the PF/PS. The GSF/GSS, on the other hand, exhibits high estimation accuracy compared with all its analogs, with a relatively short execution time compared with (i) the PF/PS using 500 and 1000 particles, (ii) the PF/PS using the MT resampling method with 100 particles, and (iii) the EKF/EKS algorithms.

5. Practical Application Revisited: Liquid-Level System

We now consider the liquid-level system detailed in Section 2.1. To simulate the system in (11) and (12), we consider $w_t \sim \mathcal{N}(w_t; 0, 0.1)$ and $v_t \sim \mathcal{N}(v_t; 0, 0.05)$. The input is drawn from $u_t \sim \mathcal{N}(u_t; 8, 25)$, and the initial condition satisfies $x_1 \sim \mathcal{N}(x_1; 1, 0.01)$. In this example, we implement the following filtering and smoothing algorithms: EKF/EKS, UKF/UKS, QKF/QKS, GSF/GSS, and PF/PS. The PF/PS was implemented using the systematic resampling and MH methods with 500 particles. Additionally, to implement the Gaussian sum algorithms, we consider $K = 10$ points from the Gauss–Legendre quadrature rule. We run 100 Monte Carlo experiments for both the filtering and smoothing algorithms with $N = 100$.
Figure 10a shows one realization of the non-quantized signal $z_t$ and the quantized output $y_t$. Figure 10b shows the execution time of all smoothing algorithms, where we see that the PS has the highest computational cost by a considerable margin, followed by the GSS. In terms of estimation accuracy, the MSE between the real and estimated tank liquid level corresponding to the filtered and smoothed states is documented in Figure 10c,d, respectively. These boxplots show that the GSF and GSS exhibit the lowest MSE, followed by the UKF/UKS, QKF/QKS, PF/PS, and EKF/EKS. Taking these results into consideration, this practical setup also illustrates that the Gaussian sum filter and smoother provide the best trade-off between estimation accuracy and computational cost.

6. User Guidelines and Comments

All the filtering and smoothing algorithms studied in this paper have a number of hyperparameters that must be chosen by the user. Hence, some guidelines are provided, based both on the numerical analysis and on the authors’ practical experience.
  • To implement the EKF/EKS, the user parameter ρ defines the accuracy of the arctan approximation of the quantizer, which impacts the accuracy of the approximation of H t in (24). Choose a small value of ρ to obtain an accurate approximation of the quantizer. The disadvantage of these algorithms is that, despite an accurate approximation of the quantizer, the estimation of the system state is not accurate under a coarse quantization scheme;
  • To implement the UKF/UKS, the parameter α is usually set to a small positive value, for instance, α = 0.01, 0.001, 0.0001. The parameter κ is typically set to zero or a very small positive value, for instance, κ = 0, 0.001, 1 × 10⁻¹⁰. For the extra parameter β, if the random variable to transform is Gaussian distributed, it is known that β = 2 is optimal [41]. In the problem of interest in this work, the random variables after the unscented transformation are non-Gaussian, and β can be chosen heuristically so that the estimation error is acceptable;
  • Based on the authors’ practical experience, the QKF/QKS produces an accurate estimation of the system states (under the assumption that filtering and smoothing distributions are Gaussian) if the quantization step is small compared to the amplitude of the output signal. However, the accuracy of the estimates decreases as the quantization step increases. The advantage of this algorithm is that it is easy to implement, and it is faster compared to more sophisticated implementations such as the PF/PS and the GSF/GSS;
  • To implement the GSF/GSS, choose the number of Gauss–Legendre quadrature points as K = 4, 6, 10. These values of K produce highly accurate estimates of the system states and of the filtering/smoothing PDFs with a low computational cost. Additionally, these algorithms directly produce an explicit model of the filtering and smoothing PDFs without extra algorithms. The disadvantage of the GSF/GSS algorithms is that they are more difficult to implement, since they require the backward filter recursion and Gaussian sum reduction algorithms;
  • The PF/PS produces accurate estimates of the system state with a relatively low number of particles. For instance, M = 100, 200, 500 are good choices for low-order models. These algorithms are easy to implement, and there are many resampling methods available. The disadvantage of the PF/PS algorithms is that the computational cost increases rapidly as the number of particles and the system order increase. Additionally, the PF/PS does not directly produce the filtering and smoothing PDFs unless a PDF-fitting algorithm is implemented, which introduces an extra computational cost when these PDFs are required;
  • In some situations, as shown in Figure 7 and Figure 8, the QKF/QKS performs worse than the standard KF/KS in terms of estimation accuracy. This suggests that, in cases with fine quantization where estimation accuracy is not critical but execution time is, the user can neglect the quantization block and use the standard KF/KS algorithms for state estimation.
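As a quick illustration of the GSF/GSS guideline above, the K Gauss–Legendre nodes and weights can be obtained with a standard quadrature routine; the snippet below is a NumPy sketch (the paper's implementation is in Matlab).

```python
import numpy as np

# K-point Gauss-Legendre rule on [-1, 1], as suggested for the GSF/GSS
# (K = 4, 6, 10). The nodes and weights define the quadrature used to
# build the Gaussian sum approximation.
K = 10
nodes, weights = np.polynomial.legendre.leggauss(K)

# Sanity check: the rule integrates x^2 over [-1, 1] exactly (= 2/3).
integral = np.sum(weights * nodes**2)
```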

7. Conclusions

In this paper, we investigated the performance of the extended Kalman filter/smoother, quantized Kalman filter/smoother, unscented Kalman filter/smoother, Gaussian sum filter/smoother, and particle filter/smoother for state-space models with quantized observations. The analysis was carried out using the MSE of the estimates and the computational cost as performance indexes. Simulations show that the PDFs of the Gaussian sum filter/smoother and of the particle filter/smoother with a high number of particles are the ones that best fit the ground-truth PDFs. However, contrary to the particle filter/smoother, the Gaussian sum filter/smoother does not require a high computational load to achieve accurate results. The extended Kalman filter/smoother, quantized Kalman filter/smoother, and unscented Kalman filter/smoother produce results with low accuracy, although their execution times are short. From simulations, we observed that the performance of the particle filter is closely related to the number of samples, the choice of the resampling method, and the MCMC algorithms, which address the degeneracy problem and mitigate sample impoverishment. We used four different resampling schemes combined with two MCMC algorithms. We found that the MCMC-based particle filter and smoother implementation that produces the lowest MSE is the one using random walk Metropolis combined with the systematic resampling technique.

Author Contributions

Conceptualization, A.L.C., R.A.G. and J.C.A.; methodology, A.L.C. and J.C.A.; software, A.L.C.; validation, R.C. and B.I.G.; formal analysis, A.L.C., R.A.G., R.C., B.I.G. and J.C.A.; investigation, A.L.C., R.A.G. and J.C.A.; resources, J.C.A.; writing—original draft preparation, A.L.C., R.A.G., R.C., B.I.G. and J.C.A.; writing—review and editing, A.L.C., R.C., B.I.G. and J.C.A.; visualization, R.C. and B.I.G.; supervision, J.C.A. All authors have read and agreed to the published version of the manuscript.

Funding

Grants ANID-Fondecyt 1211630 and 11201187, and ANID-Basal Project FB0008 (AC3E). Chilean National Agency for Research and Development (ANID) Scholarship Program/Doctorado Nacional/2020-21202410. VIDI Grant 15698, which is (partly) financed by the Netherlands Organization for Scientific Research (NWO). Excellence Center at Linköping, Lund, in Information Technology, ELLIIT.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
KF/KS  Kalman filter/Rauch–Tung–Striebel smoother
EKF/EKS  Extended Kalman filter/smoother
UKF/UKS  Unscented Kalman filter/smoother
QKF/QKS  Quantized Kalman filter/smoother
PF/PS  Particle filter/smoother
GSF/GSS  Gaussian sum filter/smoother
MCMC  Markov chain Monte Carlo
MH  Metropolis–Hastings
RWM  Random walk Metropolis
PDF  Probability density function
PMF  Probability mass function
FLQ  Finite level quantizer
ILQ  Infinite level quantizer
SYS  Systematic (resampling)
ML  Multinomial (resampling)
MT  Metropolis (resampling)
LS  Local selection (resampling)
GT  Ground truth
MSE  Mean square error

References

  1. Gersho, A.; Gray, R.M. Vector Quantization and Signal Compression; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012; Volume 159.
  2. Widrow, B.; Kollár, I. Quantization Noise: Roundoff Error in Digital Computation, Signal Processing, Control, and Communications; Cambridge University Press: Cambridge, UK, 2008.
  3. Li, S.; Sauter, D.; Xu, B. Fault isolation filter for networked control system with event-triggered sampling scheme. Sensors 2011, 11, 557–572.
  4. Zhang, X.; Han, Q.; Ge, X.; Ding, D.; Ding, L.; Yue, D.; Peng, C. Networked control systems: A survey of trends and techniques. IEEE CAA J. Autom. Sin. 2020, 7, 1–17.
  5. Zhang, L.; Liang, H.; Sun, Y.; Ahn, C.K. Adaptive Event-Triggered Fault Detection Scheme for Semi-Markovian Jump Systems with Output Quantization. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 2370–2381.
  6. Noshad, Z.; Javaid, N.; Saba, T.; Wadud, Z.; Saleem, M.Q.; Alzahrani, M.E.; Sheta, O.E. Fault Detection in Wireless Sensor Networks through the Random Forest Classifier. Sensors 2019, 19, 1568.
  7. Huang, C.; Shen, B.; Zou, L.; Shen, Y. Event-Triggering State and Fault Estimation for a Class of Nonlinear Systems Subject to Sensor Saturations. Sensors 2021, 21, 1242.
  8. Liu, S.; Wang, Z.; Hu, J.; Wei, G. Protocol-based extended Kalman filtering with quantization effects: The Round-Robin case. Int. J. Robust Nonlinear Control 2020, 30, 7927–7946.
  9. Ding, D.; Han, Q.L.; Ge, X.; Wang, J. Secure State Estimation and Control of Cyber-Physical Systems: A Survey. IEEE Trans. Syst. Man Cybern. Syst. 2021, 51, 176–190.
  10. Wang, X.; Li, T.; Sun, S.; Corchado, J.M. A Survey of Recent Advances in Particle Filters and Remaining Challenges for Multitarget Tracking. Sensors 2017, 17, 2707.
  11. Curry, R.E. Estimation and Control with Quantized Measurements; MIT Press: Cambridge, MA, USA, 1970.
  12. Gustafsson, F.; Karlsson, R. Statistical results for system identification based on quantized observations. Automatica 2009, 45, 2794–2801.
  13. Wang, L.Y.; Yin, G.G.; Zhang, J.; Zhao, Y. System Identification with Quantized Observations; Springer: Berlin/Heidelberg, Germany, 2010.
  14. Marelli, D.E.; Godoy, B.I.; Goodwin, G.C. A scenario-based approach to parameter estimation in state-space models having quantized output data. In Proceedings of the 49th IEEE Conference on Decision and Control (CDC), Atlanta, GA, USA, 15–17 December 2010; pp. 2011–2016.
  15. Rana, M.M.; Li, L. An Overview of Distributed Microgrid State Estimation and Control for Smart Grids. Sensors 2015, 15, 4302–4325.
  16. Särkkä, S. Bayesian Filtering and Smoothing; Cambridge University Press: Cambridge, UK, 2013; Volume 3.
  17. Anderson, B.D.O.; Moore, J.B. Optimal Control: Linear Quadratic Methods; Courier Corporation: Mineola, NY, USA, 2007.
  18. Julier, S.J.; Uhlmann, J.K. New extension of the Kalman filter to nonlinear systems. In Proceedings of Signal Processing, Sensor Fusion, and Target Recognition VI, Orlando, FL, USA, 21–24 April 1997; International Society for Optics and Photonics: Bellingham, WA, USA, 1997; Volume 3068, pp. 182–193.
  19. Arasaratnam, I.; Haykin, S.; Elliott, R.J. Discrete-time nonlinear filtering algorithms using Gauss–Hermite quadrature. Proc. IEEE 2007, 95, 953–977.
  20. Sviestins, E.; Wigren, T. Nonlinear techniques for Mode C climb/descent rate estimation in ATC systems. IEEE Trans. Control Syst. Technol. 2001, 9, 163–174.
  21. Gómez, J.C.; Sad, G.D. A State Observer from Multilevel Quantized Outputs. In Proceedings of the 2020 Argentine Conference on Automatic Control (AADECA), Buenos Aires, Argentina, 28–30 October 2020; pp. 1–6.
  22. Leong, A.S.; Dey, S.; Nair, G.N. Quantized Filtering Schemes for Multi-Sensor Linear State Estimation: Stability and Performance under High Rate Quantization. IEEE Trans. Signal Process. 2013, 61, 3852–3865.
  23. Zhou, Y.; Li, J.; Wang, D. Unscented Kalman Filtering based quantized innovation fusion for target tracking in WSN with feedback. In Proceedings of the 2009 International Conference on Machine Learning and Cybernetics, Baoding, China, 12–15 July 2009; Volume 3, pp. 1457–1463.
  24. Gordon, N.J.; Salmond, D.J.; Smith, A.F.M. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proc. F Radar Signal Process. 1993, 140, 107–113.
  25. Doucet, A.; Godsill, S.; Andrieu, C. On sequential Monte Carlo sampling methods for Bayesian filtering. Stat. Comput. 2000, 10, 197–208.
  26. Li, T.; Bolic, M.; Djuric, P.M. Resampling Methods for Particle Filtering: Classification, implementation, and strategies. IEEE Signal Process. Mag. 2015, 32, 70–86.
  27. Douc, R.; Cappe, O. Comparison of resampling schemes for particle filtering. In Proceedings of the 4th International Symposium on Image and Signal Processing and Analysis (ISPA 2005), Zagreb, Croatia, 15–17 September 2005; pp. 64–69.
  28. Bi, H.; Ma, J.; Wang, F. An Improved Particle Filter Algorithm Based on Ensemble Kalman Filter and Markov Chain Monte Carlo Method. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 447–459.
  29. Zhai, Y.; Yeary, M. Implementing particle filters with Metropolis–Hastings algorithms. In Proceedings of the Region 5 Conference: Annual Technical and Leadership Workshop, Norman, OK, USA, 24 May 2004; pp. 149–152.
  30. Sherlock, C.; Thiery, A.H.; Roberts, G.O.; Rosenthal, J.S. On the efficiency of pseudo-marginal random walk Metropolis algorithms. Ann. Stat. 2015, 43, 238–275.
  31. Cedeño, A.L.; Albornoz, R.; Carvajal, R.; Godoy, B.I.; Agüero, J.C. On Filtering Methods for State-Space Systems having Binary Output Measurements. IFAC-PapersOnLine 2021, 54, 815–820.
  32. Cedeño, A.L.; Albornoz, R.; Carvajal, R.; Godoy, B.I.; Agüero, J.C. A Two-Filter Approach for State Estimation Utilizing Quantized Output Data. Sensors 2021, 21, 7675.
  33. Cohen, H. Numerical Approximation Methods; Springer: Berlin/Heidelberg, Germany, 2011.
  34. Agüero, J.C.; González, K.; Carvajal, R. EM-based identification of ARX systems having quantized output data. IFAC-PapersOnLine 2017, 50, 8367–8372.
  35. Ogata, K. Modern Control Engineering; Prentice Hall: Upper Saddle River, NJ, USA, 2010; Volume 5.
  36. DeGroot, M.H. Optimal Statistical Decisions; Wiley Classics Library, Wiley: Hoboken, NJ, USA, 2005.
  37. Solo, V. An EM algorithm for singular state space models. In Proceedings of the 42nd IEEE International Conference on Decision and Control, Maui, HI, USA, 9–12 December 2003; Volume 4, pp. 3457–3460.
  38. Gelb, A.; Kasper, J.; Nash, R.; Price, C.; Sutherland, A. Applied Optimal Estimation; MIT Press: Cambridge, MA, USA, 1974.
  39. Grewal, M.S.; Andrews, A.P. Kalman Filtering: Theory and Practice with MATLAB; John Wiley & Sons: Hoboken, NJ, USA, 2014.
  40. Traoré, N.; Le Pourhiet, L.; Frelat, J.; Rolandone, F.; Meyer, B. Does interseismic strain localization near strike-slip faults result from boundary conditions or rheological structure? Geophys. J. Int. 2014, 197, 50–62.
  41. Wan, E.A.; Merwe, R.V.D. The unscented Kalman filter for nonlinear estimation. In Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium, Lake Louise, AB, Canada, 4 October 2000; pp. 153–158.
  42. Kitagawa, G. The two-filter formula for smoothing and an implementation of the Gaussian-sum smoother. Ann. Inst. Stat. Math. 1994, 46, 605–623.
  43. Balenzuela, M.P.; Dahlin, J.; Bartlett, N.; Wills, A.G.; Renton, C.; Ninness, B. Accurate Gaussian Mixture Model Smoothing using a Two-Filter Approach. In Proceedings of the 2018 IEEE Conference on Decision and Control (CDC), Miami Beach, FL, USA, 17–19 December 2018; pp. 694–699.
  44. Liu, J.S.; Chen, R. Sequential Monte Carlo Methods for Dynamic Systems. J. Am. Stat. Assoc. 1998, 93, 1032–1044.
  45. Hostettler, R. A two filter particle smoother for Wiener state-space systems. In Proceedings of the 2015 IEEE Conference on Control Applications (CCA), Dubrovnik, Croatia, 3–5 October 2015; pp. 412–417.
  46. Arulampalam, M.S.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Process. 2002, 50, 174–188.
  47. Doucet, A.; de Freitas, N.; Gordon, N. Sequential Monte Carlo Methods in Practice; Springer: Berlin/Heidelberg, Germany, 2001.
  48. Godsill, S.J.; Doucet, A.; West, M. Monte Carlo Smoothing for Nonlinear Time Series. J. Am. Stat. Assoc. 2004, 99, 156–168.
  49. Douc, R.; Garivier, A.; Moulines, E.; Olsson, J. Sequential Monte Carlo smoothing for general state space hidden Markov models. Ann. Appl. Probab. 2011, 21, 2109–2145.
  50. Wills, A.; Schön, T.B.; Ljung, L.; Ninness, B. Identification of Hammerstein–Wiener models. Automatica 2013, 49, 70–81.
Figure 1. State-space model with quantized output.
Figure 2. Liquid-level system.
Figure 3. Quantizer approximation by using the arctan function.
Figure 4. Jacobian matrix of the quantizer approximation.
Figure 5. Filtering PDFs for a time instant. GT stands for the ground truth. KF, EKF, QKF, UKF, GSF, and PF stand for Kalman filter, extended Kalman filter, quantized Kalman filter, unscented Kalman filter, Gaussian sum filter, and particle filter, respectively. The PDFs given by the KF, EKF, QKF, and UKF were frozen in all plots to observe the behavior of the PF (with RWM moves) as the number of particles increased. SYS, ML, MT, and LS stand for systematic, multinomial, Metropolis, and local selection resampling algorithms, respectively.
Figure 6. Smoothing PDFs for a time instant. GT stands for the ground truth. KS, EKS, QKS, UKS, GSS, and PS stand for Kalman smoother, extended Kalman smoother, quantized Kalman smoother, unscented Kalman smoother, Gaussian sum smoother, and particle smoother, respectively. The PDFs given by the KS, EKS, QKS, and UKS were frozen in all plots to observe the behavior of the PS (with RWM moves) as the number of particles increased. SYS, ML, MT, and LS stand for systematic, multinomial, Metropolis, and local selection resampling algorithms, respectively.
Figure 7. Boxplot of the MSE between the estimated and true state for 1000 Monte Carlo experiments. KF, EKF, QKF, UKF, GSF, and PF stand for Kalman filter, extended Kalman filter, quantized Kalman filter, unscented Kalman filter, Gaussian sum filter, and particle filter, respectively. Additionally, SYS, ML, MT, and LS stand for systematic, multinomial, Metropolis, and local selection resampling algorithms, respectively. RWM and MH denote random walk Metropolis and Metropolis–Hastings moves.
Figure 8. Boxplot of the MSE between the estimated and true state for 1000 Monte Carlo experiments. KS, EKS, QKS, UKS, GSS, and PS stand for Kalman smoother, extended Kalman smoother, quantized Kalman smoother, unscented Kalman smoother, Gaussian sum smoother, and particle smoother, respectively. Additionally, SYS, ML, MT, and LS stand for systematic, multinomial, Metropolis, and local selection resampling algorithms, respectively. RWM and MH denote random walk Metropolis and Metropolis–Hastings moves.
Figure 9. Boxplot of the execution time for 1000 Monte Carlo experiments. KS, EKS, QKS, UKS, GSS, and PS stand for Kalman smoother, extended Kalman smoother, quantized Kalman smoother, unscented Kalman smoother, Gaussian sum smoother, and particle smoother, respectively. Additionally, SYS, ML, MT, and LS stand for the systematic, multinomial, Metropolis, and local-selection resampling algorithms, respectively. RWM and MH denote random walk Metropolis and Metropolis–Hastings moves.
Figure 10. Practical application: (a) A realization of the non-quantized signal z t and the quantized output y t . (b) Boxplot of the execution time of the smoothing algorithms. (c) Boxplot of the MSE between the real and estimated (filtered) tank liquid level. (d) Boxplot of the MSE between the real and estimated (smoothed) tank liquid level. EKF/EKS, QKF/QKS, UKF/UKS, GSF/GSS, and PF/PS stand for extended Kalman filter/smoother, quantized Kalman filter/smoother, unscented Kalman filter/smoother, Gaussian sum filter/smoother, and particle filter/smoother, respectively.
Table 1. Integral limits of Equation (17).
| Quantizer | y_t | a_t | b_t |
|---|---|---|---|
| FLQ | ψ_1 | −∞ | q_1 − C x_t − D u_t |
| FLQ | ψ_k, k = 2, …, L − 1 | q_{k−1} − C x_t − D u_t | q_k − C x_t − D u_t |
| FLQ | ψ_L | q_{L−1} − C x_t − D u_t | +∞ |
| ILQ | ψ_k, k = …, 1, …, L, … | q_{k−1} − C x_t − D u_t | q_k − C x_t − D u_t |
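As an illustration of how the limits in Table 1 enter Equation (17), the following sketch evaluates P(y_t = ψ_k | x_t) for a finite-level quantizer (FLQ) in the scalar case, assuming zero-mean Gaussian measurement noise with variance R so that the integral reduces to a difference of Gaussian CDFs. The function names and the scalar setting are ours, not the paper's.

```python
import math

def integral_limits(k, q, cx_du, L):
    """Integral limits (a_t, b_t) from Table 1 for an FLQ with
    thresholds q[0..L-2] = q_1, ..., q_{L-1}.
    k:     index of the quantizer output psi_k, k = 1, ..., L.
    cx_du: the term C x_t + D u_t for the current state and input.
    """
    if k == 1:                       # first level: (-inf, q_1 - C x_t - D u_t]
        return -math.inf, q[0] - cx_du
    if k == L:                       # last level: (q_{L-1} - C x_t - D u_t, +inf)
        return q[L - 2] - cx_du, math.inf
    return q[k - 2] - cx_du, q[k - 1] - cx_du

def gaussian_cdf(x, var):
    """CDF of a zero-mean Gaussian with variance var, with infinite limits."""
    if math.isinf(x):
        return 1.0 if x > 0 else 0.0
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0 * var)))

def p_y_given_x(k, q, cx_du, L, R):
    """P(y_t = psi_k | x_t): Gaussian noise density integrated over (a_t, b_t)."""
    a, b = integral_limits(k, q, cx_du, L)
    return gaussian_cdf(b, R) - gaussian_cdf(a, R)
```

Because the limits of consecutive levels share their endpoints, the probabilities over all L levels telescope and sum to one for any state and input.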
Table 2. Parameters of the p ( y t | x t ) approximation using the Gauss–Legendre quadrature.
| Quantizer | y_t | ς_t^τ | η_t^τ | μ_t^τ |
|---|---|---|---|---|
| FLQ | ψ_1 | 2 ω_τ / (1 + ψ_τ)² | −(1 − ψ_τ) / (1 + ψ_τ) | q_1 |
| FLQ | ψ_k, k = 2, …, L − 1 | ω_τ (q_k − q_{k−1}) / 2 | ψ_τ (q_k − q_{k−1}) / 2 | (q_k + q_{k−1}) / 2 |
| FLQ | ψ_L | 2 ω_τ / (1 + ψ_τ)² | (1 − ψ_τ) / (1 + ψ_τ) | q_{L−1} |
| ILQ | ψ_k, k = …, 1, …, L, … | ω_τ (q_k − q_{k−1}) / 2 | ψ_τ (q_k − q_{k−1}) / 2 | (q_k + q_{k−1}) / 2 |
Table 3. Ranking of the filtering and smoothing recursive algorithms for quantized data. References: KF/KS, EKF/EKS [16], QKF/QKS [21,22], PF/PS [24], UKF/UKS [16,18], GSF/GSS [31,32]. The notation XX-YY-ZZ(M) denotes the following: XX stands for the filtering or smoothing algorithm (PF or PS), YY for the MCMC algorithm (RWM or MH), ZZ for the resampling method (SYS, ML, MT, or LS), and (M) for the number of particles used (100, 500, or 1000).
| Rank | Filtering MSE | Filtering algorithm | Smoothing MSE | Smoothing algorithm | Execution time | Smoothing algorithm (time) |
|---|---|---|---|---|---|---|
| 1 | 0.6724 | GSF | 0.5207 | GSS | 0.0026 | KS |
| 2 | 0.6740 | PF-RWM-SYS (1000) | 0.5212 | PS-RWM-SYS (1000) | 0.0031 | QKS |
| 3 | 0.6744 | PF-RWM-ML (1000) | 0.5220 | PS-RWM-ML (1000) | 0.0111 | UKS |
| 4 | 0.6754 | PF-RWM-SYS (500) | 0.5231 | PS-RWM-SYS (500) | 0.1453 | PS-RWM-SYS (100) |
| 5 | 0.6765 | PF-RWM-ML (500) | 0.5247 | PS-RWM-ML (500) | 0.1644 | PS-RWM-LS (100) |
| 6 | 0.6880 | PF-RWM-SYS (100) | 0.5393 | PS-RWM-MT (1000) | 0.1718 | PS-RWM-ML (100) |
| 7 | 0.6948 | PF-RWM-ML (100) | 0.5415 | PS-RWM-SYS (100) | 0.2077 | PS-MH-LS (100) |
| 8 | 0.7588 | PF-RWM-MT (1000) | 0.5420 | PS-RWM-MT (500) | 0.2109 | PS-MH-SYS (100) |
| 9 | 0.7830 | PF-RWM-MT (500) | 0.5470 | PS-RWM-ML (100) | 0.2354 | PS-MH-ML (100) |
| 10 | 0.9590 | PF-MH-SYS (1000) | 0.5689 | PS-RWM-MT (100) | 0.3931 | GSS |
| 11 | 0.9593 | PF-MH-MT (1000) | 0.6708 | PS-MH-SYS (1000) | 0.3984 | PS-RWM-MT (100) |
| 12 | 0.9595 | PF-MH-ML (1000) | 0.6711 | PS-MH-ML (1000) | 0.4579 | PS-MH-MT (100) |
| 13 | 0.9608 | PF-MH-SYS (500) | 0.6737 | PS-MH-SYS (500) | 0.4676 | EKS |
| 14 | 0.9612 | PF-MH-MT (500) | 0.6746 | PS-MH-ML (500) | 0.6048 | PS-RWM-SYS (500) |
| 15 | 0.9612 | PF-MH-ML (500) | 0.6752 | PS-MH-MT (1000) | 0.6348 | PS-RWM-LS (500) |
| 16 | 0.9686 | PF-MH-SYS (100) | 0.6781 | PS-MH-MT (500) | 0.7469 | PS-RWM-ML (500) |
| 17 | 0.9697 | PF-MH-ML (100) | 0.6927 | PS-MH-SYS (100) | 0.9772 | PS-MH-LS (500) |
| 18 | 0.9715 | PF-MH-MT (100) | 0.6927 | PS-MH-ML (100) | 1.2054 | PS-RWM-SYS (1000) |
| 19 | 1.0138 | KF | 0.6974 | PS-MH-MT (100) | 1.2274 | PS-RWM-LS (1000) |
| 20 | 1.6731 | PF-RWM-MT (100) | 0.7469 | PS-MH-LS (1000) | 1.2709 | PS-MH-SYS (500) |
| 21 | 1.8616 | QKF | 0.7497 | PS-MH-LS (500) | 1.4192 | PS-MH-ML (500) |
| 22 | 5.0549 | UKF | 0.7667 | PS-MH-LS (100) | 1.5197 | PS-RWM-ML (1000) |
| 23 | 7.3381 | PF-RWM-LS (1000) | 0.9100 | KS | 1.8362 | PS-RWM-MT (500) |
| 24 | 7.3602 | PF-RWM-LS (500) | 0.9393 | PS-RWM-LS (1000) | 2.0277 | PS-MH-LS (1000) |
| 25 | 7.6912 | PF-RWM-LS (100) | 1.2900 | PS-RWM-LS (500) | 2.3945 | PS-MH-MT (500) |
| 26 | 8.3846 | PF-MH-LS (1000) | 1.6693 | QKS | 3.0254 | PS-MH-SYS (1000) |
| 27 | 8.4079 | PF-MH-LS (500) | 5.0545 | UKS | 3.3505 | PS-MH-ML (1000) |
| 28 | 8.6717 | PF-MH-LS (100) | 6.4904 | PS-RWM-LS (100) | 3.6364 | PS-RWM-MT (1000) |
| 29 | 47.7827 | EKF | 33.8842 | EKS | 5.0651 | PS-MH-MT (1000) |
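A ranking in the style of the MSE columns of Table 3 can be reproduced by aggregating the per-run MSE of each algorithm over the Monte Carlo experiments and sorting. The sketch below assumes a mean-MSE aggregation (the paper's exact aggregation statistic is not restated here) and illustrative data; it is not the paper's implementation.

```python
def rank_algorithms(mse_runs):
    """Rank algorithms by mean MSE over Monte Carlo runs (lowest first).
    mse_runs: dict mapping algorithm name -> list of per-run MSE values.
    Returns a list of (name, mean_mse) pairs sorted by mean MSE.
    """
    means = {name: sum(v) / len(v) for name, v in mse_runs.items()}
    return sorted(means.items(), key=lambda kv: kv[1])

# Illustrative (made-up) per-run MSEs for three of the filters
ranked = rank_algorithms({
    "GSF": [0.65, 0.70],
    "EKF": [40.0, 55.0],
    "KF":  [1.00, 1.05],
})
```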