Open Access

*Sensors* **2010**, *10*(7), 6406-6420; doi:10.3390/s100706406

Article

Optimization of the Sampling Periods and the Quantization Bit Lengths for Networked Estimation

Department of Electrical Engineering, University of Ulsan, Namgu, Ulsan 680-749, Korea

⋆ Author to whom correspondence should be addressed; Tel: +82-52-259-2196; Fax: +82-52-259-1686.

Received: 1 February 2010; in revised form: 17 April 2010 / Accepted: 12 May 2010 / Published: 29 June 2010

## Abstract

This paper is concerned with networked estimation, where sensor data are transmitted over a network with a limited transmission rate. The transmission rate depends on the sampling periods and the quantization bit lengths. To investigate how the sampling periods and the quantization bit lengths affect the estimation performance, an equation to compute the estimation performance is provided. An algorithm is then proposed to find a combination of sampling periods and quantization bit lengths that gives good estimation performance while satisfying the transmission rate constraint. The proposed algorithm is verified through a numerical example.

**Keywords:** networked estimation; sampling periods; quantization; Kalman filter

## 1. Introduction

Recently, networked monitoring systems, in which sensor data are transmitted to a monitoring station through wired or wireless networks [1, 2], have become increasingly popular. In the monitoring station, estimation algorithms (such as a Kalman filter) are used to estimate the system states. A network between sensor nodes and a monitoring station can induce many problems, such as time delays, packet dropouts, and limited bandwidth; these depend on the network type and the scheduling method. We note that the network issue itself (for example, what kind of scheduling method should be used?) is a large research area [3]. Time delays and packet dropouts [4–7] are also among the most important problems in networked estimation.

In this paper we focus on the case where there are many sensors and the network bandwidth is limited. For example, suppose there are three sensors (A, B, C) and 90 bytes/s can be transmitted over the network. How does one assign the 90 bytes/s to the sensors? One method is to assign 30 bytes/s to each sensor. However, if sensor A monitors a fast-changing value and sensor B a slowly changing one, this might not be the best strategy: a higher data rate should be assigned to sensor A and the data rate of sensor B should be reduced. These issues are discussed quantitatively in this paper.

Note that the data rate of each sensor depends on the sampling frequency and the quantization bit length. For example, the (100 Hz, 8 bit) case and the (50 Hz, 16 bit) case have the same data rate (100 bytes/s). Thus, given the same data rate, there are many possible combinations of sampling frequencies and quantization bit lengths. We first investigate how the sampling frequency and the quantization bit length affect the estimation performance, and then propose a method to choose the sampling frequency and the quantization bit length of each sensor.
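The data-rate equivalence above can be checked with a short calculation (a sketch; the helper name is ours, not from the paper):

```python
# Data rate (bytes/s) of one sensor: sampling frequency (Hz) times
# quantization bit length (bits/sample), divided by 8 bits per byte.
def data_rate_bytes(freq_hz, bits):
    return freq_hz * bits / 8.0

# Different (frequency, bit length) pairs can yield the same data rate.
print(data_rate_bytes(100, 8))   # 100 Hz, 8 bit  -> 100.0 bytes/s
print(data_rate_bytes(50, 16))   # 50 Hz, 16 bit  -> 100.0 bytes/s
```

Both calls return 100.0 bytes/s, so a given rate budget admits many (sampling frequency, bit length) combinations.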

Using different sampling frequencies for different sensors is discussed in [10], where the sampling frequencies are chosen by minimizing the Kalman filter error covariance. In [11], a sampling frequency assignment algorithm is given, where each sampling frequency is chosen from a finite discrete set. In [12], a similar sampling frequency assignment is considered, where the locations of the sensors and the cost of measurement are also included in the optimization problem. We note that there are other approaches, where an event-based transmission method [8, 9] is used instead of periodic transmission. In this paper, we assume periodic sampling of the sensor data.

Quantization is an extensively studied area [13]. In relation to the estimation problem, a logarithmic quantizer is proposed in [14]. Although theoretically appealing, that quantizer is applied to the innovation of a filter rather than directly to an output. The effect of quantization can be reduced by treating the quantization error as measurement noise, as in [15]. In [16] and [17], quantization bit length assignment algorithms are proposed, where the bit length is computed by minimizing a performance index; however, the performance index is not directly related to the estimation performance (e.g., the filter error covariance).

Simultaneous optimization of the sampling frequency and the quantization bit length has not been reported yet. In this paper, both parameters are selected so that the estimation performance is optimized given the transmission rate constraint.

The paper is organized as follows. In Section 2, the estimation performance P is defined, which depends on the sampling periods and the quantization bit lengths. In Section 3, a suboptimal algorithm to compute a sampling period and quantization bit length combination is proposed. The proposed method is verified through numerical examples in Section 4, and conclusions are given in Section 5.

## 2. Problem Formulation

In this section, estimation performance is defined when the sampling frequency and the quantization bit length of each sensor are given. How to optimize the sampling frequency and the quantization bit length is discussed in Section 3.

Consider a linear time-invariant system given by

$$\begin{array}{lll}\dot{x}(t) & = & Ax(t) + w(t) \\ y(t) & = & Cx(t) + v(t)\end{array}$$

where x ∈ R^{n} is the state we want to estimate and y ∈ R^{p} is the measurement. The process noise w(t) and the measurement noise v(t) are uncorrelated, zero-mean white Gaussian random processes satisfying

$$\begin{array}{lll}\mathrm{E}\{w(t)w(s)'\} & = & Q\delta(t-s) \\ \mathrm{E}\{v(t)v(s)'\} & = & R\delta(t-s) \\ \mathrm{E}\{w(t)v(s)'\} & = & 0\end{array}$$

where E{·} denotes the expectation and Q and R are the process noise covariance and the measurement noise covariance, respectively.

Let T_{i} be the sampling period of the i-th output, so the corresponding sampling frequency is 1/T_{i}. We assume that T_{i} is an integer multiple of a constant T; that is, T_{i} satisfies

$${T}_{i} = {M}_{i}T$$

where M_{i} is an integer and T is the base sampling period.

Let l_{i} be the quantization bit length of the i-th output, and let y_{max,i} be the absolute maximum value of the i-th output; that is,

$$\left|{y}_{k,i}\right| \le {y}_{max,i}$$

where y_{k,i} is the i-th element of y_{k} and the index k denotes the discrete time index. We assume that a uniform quantizer is used. Let δ_{i} be the quantization level of the i-th output, which is given by

$${\delta}_{i} = \frac{{y}_{max,i}}{{2}^{{l}_{i}-1}}$$

Let q_{k} be the quantization error in y_{k}; then the following is satisfied:

$$\left|{q}_{k,i}\right| \le \frac{{\delta}_{i}}{2}$$

where q_{k,i} is the i-th element of q_{k}.

Now we model (1) in discrete time, taking into account the sampling period T_{i} and the quantization bit length l_{i}. Assume M_{i} = 1 (1 ≤ i ≤ p) temporarily:

$$\begin{array}{lll}{x}_{k+1} & = & \Phi {x}_{k} + {w}_{k} \\ {y}_{k} & = & C{x}_{k} + {v}_{k} + {q}_{k}\end{array}$$

where x_{k} ≜ x(kT), Φ ≜ exp(AT), and y_{k} is the quantized output of y(kT). The discrete process noise w_{k} and measurement noise v_{k} are uncorrelated and satisfy

$$\mathrm{E}\{{w}_{k}{w}_{k}^{\prime}\} = {Q}_{d},\quad \mathrm{E}\{{v}_{k}{v}_{k}^{\prime}\} = R$$

where

$${Q}_{d} \triangleq {\int}_{0}^{T}\mathrm{exp}(Ar)\,Q\,\mathrm{exp}(Ar)'\,dr$$
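As a sketch of this discretization step, the following computes Φ = exp(AT) and Q_{d} for the triple-integrator model used later in Section 4. Since that A is nilpotent, exp(AT) is an exact polynomial, and the integral defining Q_{d} is approximated with a simple trapezoidal rule; variable names are ours.

```python
import numpy as np

# Triple-integrator model from Section 4 (A^3 = 0, so expm is a polynomial).
A = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])
Q = np.diag([0., 0., 0.2])
T = 1.0

def expA(t):
    # exact for this nilpotent A: exp(At) = I + At + (At)^2 / 2
    return np.eye(3) + A * t + (A @ A) * (t ** 2) / 2.0

Phi = expA(T)  # state transition matrix over one base period

# Q_d = integral_0^T exp(Ar) Q exp(Ar)' dr, via the trapezoidal rule
rs = np.linspace(0.0, T, 201)
vals = [expA(r) @ Q @ expA(r).T for r in rs]
dr = T / (len(rs) - 1)
Qd = np.zeros((3, 3))
for j in range(len(rs) - 1):
    Qd += 0.5 * dr * (vals[j] + vals[j + 1])
```

For this A, Q_{d}(3, 3) = 0.2 T exactly, which gives a quick sanity check on the integration.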

We treat the quantization error as an additional measurement noise in (6), as also done in [15]. The quantization error q_{k} is assumed to be uncorrelated with w_{k} and v_{k}. If a uniform distribution is assumed, its covariance is given by

$$\mathrm{E}\{{q}_{k}{q}_{k}^{\prime}\} = \Delta = \mathrm{Diag}\left(\frac{{\delta}_{1}^{2}}{12}, \cdots, \frac{{\delta}_{p}^{2}}{12}\right)$$
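A minimal uniform-quantizer sketch illustrating the bound |q_{k,i}| ≤ δ_{i}/2 and the variance δ_{i}²/12 assumed above (the function name is ours; y_{max} matches the inclinometer range used in Section 4):

```python
import numpy as np

def quantize(y, y_max, bits):
    # uniform quantizer with level delta = y_max / 2^(bits - 1)
    delta = y_max / 2.0 ** (bits - 1)
    return delta * np.round(y / delta), delta

rng = np.random.default_rng(0)
y = rng.uniform(-3.1416, 3.1416, 100_000)   # y_max,1 from Section 4
yq, delta = quantize(y, 3.1416, 10)
q = yq - y                                   # quantization error

print(np.abs(q).max() <= delta / 2)          # bound |q| <= delta/2 holds
print(np.var(q) * 12 / delta ** 2)           # close to 1 under uniform model
```

The empirical error variance approaches δ²/12, matching the covariance model Δ above.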

Now the temporary assumption (M_{i} = 1) is removed. The second equation of (6) no longer holds: y_{k,i} is available only if k is an integer multiple of M_{i}. Let ỹ_{k} be the collection of all available y_{k,i} at time k. To define ỹ_{k} more formally, let {r_{k,1}, r_{k,2}, . . ., r_{k,p_k}} be the set of all row numbers of the available elements of y_{k}. Then ỹ_{k} is given by

$${\tilde{y}}_{k} \triangleq \left[\begin{array}{c}{y}_{k,{r}_{k,1}}\\ {y}_{k,{r}_{k,2}}\\ \vdots \\ {y}_{k,{r}_{k,{p}_{k}}}\end{array}\right]$$

Similarly ṽ_{k} and q̃_{k} can be defined, and C̃_{k} is defined as follows:

$${\tilde{C}}_{k} = \left[\begin{array}{c}{C}_{{r}_{k,1}}\\ {C}_{{r}_{k,2}}\\ \vdots \\ {C}_{{r}_{k,{p}_{k}}}\end{array}\right]$$

where C_{i} is the i-th row of C. Thus the measurement equation at time k is given by

$${\tilde{y}}_{k} = {\tilde{C}}_{k}{x}_{k} + {\tilde{v}}_{k} + {\tilde{q}}_{k}$$

where

$$\begin{array}{lll}\mathrm{E}\{{\tilde{v}}_{k}{\tilde{v}}_{k}^{\prime}\} & = & {\tilde{R}}_{k} \\ \mathrm{E}\{{\tilde{q}}_{k}{\tilde{q}}_{k}^{\prime}\} & = & {\tilde{\Delta}}_{k}\end{array}$$

Here R̃_{k} ∈ R^{p_k × p_k} is the matrix extracted from R so that R̃_{k}(i, j) = R(r_{k,i}, r_{k,j}); Δ̃_{k} is defined in the same way.

For example, if M_{1} = 1 and M_{2} = 2, then ỹ_{k} and C̃_{k} are given in Table 1. We can see that C̃_{k} is periodic with period 2, which is the least common multiple of M_{1} = 1 and M_{2} = 2. Generally, C̃_{k} is periodic with period M, where M is the least common multiple of {M_{1}, M_{2}, . . ., M_{p}}.

Using the first equation of (6) repeatedly b times, we have

$${x}_{b+a} = {\Phi}^{b}{x}_{a} + \sum_{i=0}^{b-1}{\Phi}^{b-1-i}{w}_{a+i}$$
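The repeated-propagation formula above can be checked numerically; the sketch below compares the step-by-step recursion with the closed form (all values randomly generated purely for illustration):

```python
import numpy as np

# Check: applying x_{k+1} = Phi x_k + w_k b times reproduces
# x_{b+a} = Phi^b x_a + sum_{i=0}^{b-1} Phi^{b-1-i} w_{a+i}.
rng = np.random.default_rng(1)
Phi = rng.normal(size=(3, 3))
x = rng.normal(size=3)
b = 4
ws = rng.normal(size=(b, 3))

x_step = x.copy()
for w in ws:                      # step-by-step recursion
    x_step = Phi @ x_step + w

x_closed = np.linalg.matrix_power(Phi, b) @ x
for i in range(b):                # closed-form sum over the noises
    x_closed = x_closed + np.linalg.matrix_power(Phi, b - 1 - i) @ ws[i]

print(np.allclose(x_step, x_closed))  # True
```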

It is known that a periodic system can be transformed into a time-invariant system [18, 19]. From (12) with a = kM and b = M, we have

$${x}_{(k+1)M} = {\Phi}^{M}{x}_{kM} + \sum_{i=0}^{M-1}{\Phi}^{M-1-i}{w}_{kM+i}$$

Also, from (12) with a = kM − j and b = j, we have

$${x}_{kM} = {\Phi}^{j}{x}_{kM-j} + \sum_{i=0}^{j-1}{\Phi}^{j-1-i}{w}_{kM-j+i}$$

Multiplying by Φ^{−j}, we obtain a backward equation:

$${x}_{kM-j} = {\Phi}^{-j}{x}_{kM} - \sum_{i=0}^{j-1}{\Phi}^{-1-i}{w}_{kM-j+i}$$

Let x̄_{k}, w̄_{k}, ȳ_{k}, v̄_{k} and q̄_{k} be defined by

$${\overline{x}}_{k} \triangleq {x}_{kM}$$

$$\begin{array}{cc}{\overline{w}}_{k} \triangleq \left[\begin{array}{c}{w}_{kM}\\ {w}_{kM+1}\\ \vdots \\ {w}_{kM+M-1}\end{array}\right],& {\overline{y}}_{k} \triangleq \left[\begin{array}{c}{\tilde{y}}_{(k-1)M+1}\\ {\tilde{y}}_{(k-1)M+2}\\ \vdots \\ {\tilde{y}}_{kM}\end{array}\right]\\ {\overline{v}}_{k} \triangleq \left[\begin{array}{c}{\tilde{v}}_{(k-1)M+1}\\ {\tilde{v}}_{(k-1)M+2}\\ \vdots \\ {\tilde{v}}_{kM}\end{array}\right],& {\overline{q}}_{k} \triangleq \left[\begin{array}{c}{\tilde{q}}_{(k-1)M+1}\\ {\tilde{q}}_{(k-1)M+2}\\ \vdots \\ {\tilde{q}}_{kM}\end{array}\right]\end{array}$$

Combining (13), (15) and (10), we have the following time-invariant system:

$$\begin{array}{lll}{\overline{x}}_{k+1} & = & \overline{A}{\overline{x}}_{k} + \overline{B}{\overline{w}}_{k} \\ {\overline{y}}_{k} & = & \overline{C}{\overline{x}}_{k} + {\overline{v}}_{k} + {\overline{q}}_{k} + \overline{D}{\overline{w}}_{k-1}\end{array}$$

where

$$\overline{A} = {\Phi}^{M}$$

$$\overline{B} \triangleq \left[{\Phi}^{M-1}\ {\Phi}^{M-2}\ \cdots \ I\right]$$

$$\overline{C} \triangleq \left[\begin{array}{c}{\tilde{C}}_{1}{\Phi}^{-M+1}\\ {\tilde{C}}_{2}{\Phi}^{-M+2}\\ \vdots \\ {\tilde{C}}_{M-1}{\Phi}^{-1}\\ {\tilde{C}}_{M}\end{array}\right]$$

$$\overline{D} \triangleq \left[\begin{array}{ccccc}0& {\tilde{C}}_{1}{\Phi}^{-1}& {\tilde{C}}_{1}{\Phi}^{-2}& \cdots & {\tilde{C}}_{1}{\Phi}^{-M+1}\\ 0& 0& {\tilde{C}}_{2}{\Phi}^{-1}& \cdots & {\tilde{C}}_{2}{\Phi}^{-M+2}\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0& 0& 0& \cdots & {\tilde{C}}_{M-1}{\Phi}^{-1}\\ 0& 0& 0& \cdots & 0\end{array}\right]$$

We will apply a Kalman filter to (16). Note that

$$\mathrm{E}\{{\overline{w}}_{k}{\overline{w}}_{k}^{\prime}\} = \overline{Q}$$

where

$$\overline{Q} = \mathrm{Diag}({Q}_{d}, {Q}_{d}, \cdots, {Q}_{d})$$

and

$$\mathrm{E}\{({\overline{v}}_{k} + {\overline{q}}_{k} + \overline{D}{\overline{w}}_{k-1})({\overline{v}}_{k} + {\overline{q}}_{k} + \overline{D}{\overline{w}}_{k-1})'\} = \overline{R}$$

where

$$\begin{array}{lll}\overline{R} & = & \mathrm{Diag}({\tilde{R}}_{1}, \cdots, {\tilde{R}}_{M}) + \overline{\Delta} + \overline{D}\,\overline{Q}\,\overline{D}' \\ \overline{\Delta} & = & \mathrm{Diag}({\tilde{\Delta}}_{1}, \cdots, {\tilde{\Delta}}_{M})\end{array}$$

The cross-covariance between the process noise and the lumped measurement noise is

$$\mathrm{E}\{\overline{B}{\overline{w}}_{k-1}\,({\overline{v}}_{k}^{\prime} + {\overline{q}}_{k}^{\prime} + (\overline{D}{\overline{w}}_{k-1})')\} = \overline{M} = \overline{B}\,\overline{Q}\,\overline{D}'$$

It is standard to apply a Kalman filter to (16) using (17), (18) and (19); the measurement update and the time update equations are given as follows [20]:

- measurement update
$${K}_{k} = ({P}_{k}^{-}\overline{C}' + \overline{M})\,(\overline{C}{P}_{k}^{-}\overline{C}' + \overline{R} + \overline{C}\,\overline{M} + \overline{M}'\overline{C}')^{-1}$$
$${P}_{k} = {P}_{k}^{-} - {K}_{k}\,(\overline{C}{P}_{k}^{-}\overline{C}' + \overline{R} + \overline{C}\,\overline{M} + \overline{M}'\overline{C}')\,{K}_{k}^{\prime}$$
- time update
$${P}_{k+1}^{-} = \overline{A}{P}_{k}\overline{A}' + \overline{B}\,\overline{Q}\,\overline{B}'$$

We will use P as the estimation performance measure, where P is the steady-state value of ${P}_{k}^{-}$:

$$P \triangleq \underset{k\to \infty}{\mathrm{lim}}\,{P}_{k}^{-}$$

In the steady state, ${P}_{k+1}^{-} = {P}_{k}^{-} = P$. Inserting this into the Kalman filter equations, we obtain the following Riccati equation:

$$P = \overline{A}P\overline{A}' - (\overline{A}P\overline{C}' + \overline{A}\,\overline{M})\,{(\overline{C}P\overline{C}' + \overline{R} + \overline{C}\,\overline{M} + \overline{M}'\overline{C}')}^{-1}\,(\overline{A}P\overline{C}' + \overline{A}\,\overline{M})' + \overline{B}\,\overline{Q}\,\overline{B}'$$
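One simple (if not the fastest) way to solve this Riccati equation is to iterate the prediction-covariance recursion until it converges. The sketch below assumes the lifted matrices are already available and passes B̄Q̄B̄′ as a single matrix; names and tolerances are ours:

```python
import numpy as np

def steady_state_P(Abar, Cbar, BQB, Rbar, Mbar, tol=1e-11, max_iter=100_000):
    # Iterate P <- A P A' - (A P C' + A M)(C P C' + R + C M + M'C')^{-1}
    #              (A P C' + A M)' + B Q B'   until it reaches a fixed point.
    P = np.eye(Abar.shape[0])
    for _ in range(max_iter):
        S = Cbar @ P @ Cbar.T + Rbar + Cbar @ Mbar + Mbar.T @ Cbar.T
        G = Abar @ P @ Cbar.T + Abar @ Mbar
        P_next = Abar @ P @ Abar.T - G @ np.linalg.solve(S, G.T) + BQB
        if np.max(np.abs(P_next - P)) < tol:
            return P_next
        P = P_next
    return P

# Scalar sanity check: a = 0.9, c = 1, q = 1, r = 1, no cross term,
# whose steady state satisfies P = a^2 P - a^2 P^2 / (P + r) + q.
P = steady_state_P(np.array([[0.9]]), np.array([[1.0]]),
                   np.array([[1.0]]), np.array([[1.0]]), np.array([[0.0]]))
```

For production use, a dedicated discrete algebraic Riccati solver would be preferable; the iteration above is only meant to make the fixed-point structure of (20) concrete.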

If the sampling period T_{i} and the quantization bit length l_{i} are given, the corresponding estimation performance can be computed from (20).

## 3. T_{i} and l_{i} Optimization

In this section, a method to select the sampling period T_{i} and the quantization bit length l_{i} is proposed. The main trade-off is between the transmission rate and the estimation performance.

The optimization problem can be formulated as follows:

$$\begin{array}{c}{\mathrm{min}}_{{T}_{i},{l}_{i}}\ \lambda'\,\mathrm{Diag}\,P\\ \text{subject to}\ \sum_{i=1}^{p}\frac{{l}_{i}}{{T}_{i}} \le {S}_{max}\end{array}$$

where λ ∈ R^{n} is a weighting vector and S_{max} is the transmission rate constraint. The transmission rate S is given by

$$S \triangleq \sum_{i=1}^{p}\frac{{l}_{i}}{{T}_{i}}$$

S is defined as the sum of the data rates of the individual sensors, without considering packet overhead. When the algorithm is applied to a specific network, S should be modified to take the packet overhead into account.

Note that T_{i} = M_{i}T and that P depends on M, the least common multiple of M_{1}, . . ., M_{p}. To make M constant, M_{i} is assumed to satisfy

$$\begin{array}{c}{M}_{i} = {2}^{{m}_{i}}\ ({m}_{i}\ \text{is an integer})\\ {m}_{i,min} \le {m}_{i} \le {m}_{i,max}\end{array}$$

With this assumption, the least common multiple M of all possible combinations of M_{i} = 2^{m_i} is given by

$$M = {2}^{{\mathrm{max}}_{i}\{{m}_{i,max}\}}$$

We assume that l_{i} satisfies

$${l}_{i,min} \le {l}_{i} \le {l}_{i,max}$$
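Under assumptions of this form, the search space and the transmission rate of each combination can be enumerated directly. A sketch for p = 2 sensors follows; the bounds and the rate threshold are illustrative values, not prescriptions from the paper:

```python
from itertools import product

T = 1.0                       # base sampling period (illustrative)
m_range = range(0, 5)         # m_i in [m_min, m_max] = [0, 4]
l_range = range(9, 17)        # l_i in [l_min, l_max] = [9, 16]

def rate(ms, ls, T=1.0):
    # S = sum_i l_i / T_i  with  T_i = 2^{m_i} T
    return sum(l / (2 ** m * T) for m, l in zip(ms, ls))

# all (m1, l1, m2, l2) combinations for p = 2 sensors
combos = list(product(m_range, l_range, m_range, l_range))
print(len(combos))            # 5 * 8 * 5 * 8 = 1600

s_max = 8.0                   # illustrative rate constraint
feasible = [c for c in combos
            if rate((c[0], c[2]), (c[1], c[3])) <= s_max]
```

Brute-force enumeration like this is exactly what becomes expensive as p grows, which motivates the suboptimal algorithm below.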

If the number of combinations is small, P can be computed for all of them. When the number is too large, we propose a suboptimal algorithm, which is based on the following lemma.

**Lemma 1** Let P(m_{1}, l_{1}, . . ., m_{i}, l_{i}, . . ., m_{p}, l_{p}) be the solution to (20). With the assumption (23), the following inequalities are satisfied:

$$P({m}_{1},\hspace{0.17em}{l}_{1},\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{i}\hspace{0.17em}-\hspace{0.17em}1,\hspace{0.17em}{l}_{i},\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{p},\hspace{0.17em}{l}_{p})\hspace{0.17em}\le \hspace{0.17em}P({m}_{1},\hspace{0.17em}{l}_{1},\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{i},\hspace{0.17em}{l}_{i},\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{p},\hspace{0.17em}{l}_{p})$$

$$P({m}_{1},\hspace{0.17em}{l}_{1},\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{i},\hspace{0.17em}{l}_{i}\hspace{0.17em}+\hspace{0.17em}1,\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{p},\hspace{0.17em}{l}_{p})\hspace{0.17em}\le \hspace{0.17em}P({m}_{1},\hspace{0.17em}{l}_{1},\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{i},\hspace{0.17em}{l}_{i},\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{p},\hspace{0.17em}{l}_{p})$$

$$P({m}_{1},\hspace{0.17em}{l}_{1},\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{i}\hspace{0.17em}-\hspace{0.17em}1,\hspace{0.17em}{l}_{i}\hspace{0.17em}+\hspace{0.17em}1,\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{p},\hspace{0.17em}{l}_{p})\hspace{0.17em}\le \hspace{0.17em}P({m}_{1},\hspace{0.17em}{l}_{1},\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{i},\hspace{0.17em}{l}_{i},\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{p},\hspace{0.17em}{l}_{p})$$

Proof: We prove the first inequality for a simple case with m_{1} = 0, m_{2} = 0, T = 1, M = 2, and p = 2; note that T_{1} = 2^{m_1}T = T and T_{2} = 2^{m_2}T = T. C̄ for m_{1} = 0 and m_{2} = 0 is given by

$${\overline{C}}_{{m}_{1}=0,{m}_{2}=0} = \left[\begin{array}{c}{C}_{1}\\ {C}_{2}\\ {C}_{1}\\ {C}_{2}\end{array}\right]$$

The subscript (m_{1} = 0, m_{2} = 0) is used to emphasize that m_{1} = 0 and m_{2} = 0. Also let R̄_{m_1=0,m_2=0} be the R̄ defined in (18), and let P(0, l_{1}, 0, l_{2}) be the solution to (20) when m_{1} = 0 and m_{2} = 0.

Now consider the case with m_{1} = 0 and m_{2} = 1. Instead of computing P(0, l_{1}, 1, l_{2}) using (20) with C̄_{m_1=0,m_2=1}, we can compute P(0, l_{1}, 1, l_{2}) using (20) with m_{1} = 0 and m_{2} = 0, except that R̄_{m_1=0,m_2=0} is replaced by R̄_{modified}, defined by

$${\overline{R}}_{modified} = {\overline{R}}_{({m}_{1}=0,{m}_{2}=0)} + \text{Diag}(0,\infty,0,0)$$

Note that adding ∞ to the (2, 2) element of R̄_{m_1=0,m_2=0} is equivalent to ignoring the second output of y_{k} when k is not an integer multiple of 2. Thus P(0, l_{1}, 1, l_{2}) computed in this way is the estimation error covariance when m_{1} = 0 and m_{2} = 1. Since R̄_{modified} ≥ R̄_{m_1=0,m_2=0}, we have from the monotonicity of the Riccati equation (see Corollary 5.2 in [21])

$$P(0, {l}_{1}, 0, {l}_{2}) \le P(0, {l}_{1}, 1, {l}_{2})$$

The general case of (25) can be proved similarly.

Proving the second inequality (26) is more straightforward; it follows from the monotonicity of the Riccati equation [21] and the fact that

$${\overline{R}}_{({m}_{1},{l}_{1},\cdots ,{m}_{i},{l}_{i}+1,\cdots ,{m}_{p},{l}_{p}})\hspace{0.17em}\le \hspace{0.17em}{\overline{R}}_{({m}_{1},{l}_{1},\cdots ,{m}_{i},{l}_{i},\cdots ,{m}_{p},{l}_{p})}$$

The third inequality (27) is just a combination of (25) and (26):

$$\begin{array}{c}P({m}_{1},\hspace{0.17em}{l}_{1},\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{i}\hspace{0.17em}-1,\hspace{0.17em}{l}_{i}\hspace{0.17em}+\hspace{0.17em}1,\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{p},\hspace{0.17em}{l}_{p})\hspace{0.17em}\le \hspace{0.17em}P({m}_{1},\hspace{0.17em}{l}_{1},\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{i},\hspace{0.17em}{l}_{i}\hspace{0.17em}+\hspace{0.17em}1,\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{p},\hspace{0.17em}{l}_{p})\\ \le \hspace{0.17em}P({m}_{1},\hspace{0.17em}{l}_{1},\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{i},\hspace{0.17em}{l}_{i},\hspace{0.17em}\cdots ,\hspace{0.17em}{m}_{p},\hspace{0.17em}{l}_{p})\end{array}$$

To explain Lemma 1, we consider a simple example with the following parameters:

$$\begin{array}{c}p=2\\ {m}_{1,\mathit{min}}\hspace{0.17em}=\hspace{0.17em}{m}_{2,\mathit{min}}\hspace{0.17em}=\hspace{0.17em}0,\hspace{0.17em}{m}_{1,\mathit{max}}\hspace{0.17em}=\hspace{0.17em}{m}_{2,\mathit{max}}\hspace{0.17em}=\hspace{0.17em}4\\ {l}_{1,\mathit{min}}\hspace{0.17em}=\hspace{0.17em}{l}_{2,\mathit{min}}\hspace{0.17em}=\hspace{0.17em}9,\hspace{0.17em}{l}_{1,\mathit{max}}\hspace{0.17em}=\hspace{0.17em}{l}_{2,\mathit{max}}\hspace{0.17em}=\hspace{0.17em}16\end{array}$$

There are 5 × 8 × 5 × 8 = 1,600 possible combinations (see Figure 1). By Lemma 1, P is the smallest when (m_{1}, l_{1}, m_{2}, l_{2}) = (0, 16, 0, 16) and the largest when (m_{1}, l_{1}, m_{2}, l_{2}) = (4, 9, 4, 9). On the other hand, the transmission rate S is the largest when (m_{1}, l_{1}, m_{2}, l_{2}) = (0, 16, 0, 16) and the smallest when (m_{1}, l_{1}, m_{2}, l_{2}) = (4, 9, 4, 9). Note that m_{i} − 1 and l_{i} + 1 in Lemma 1 correspond to the upper and right neighbors of a combination (m_{i}, l_{i}), respectively. Thus, as we move a combination from the bottom-left corner toward the top-right corner, λ′P becomes smaller while the transmission rate increases.

In the proposed algorithm, we start from the bottom-left corner combination and move toward the top-right corner while the transmission rate constraint is satisfied. The proposed algorithm is stated in pseudo-code:

```
(m_i, l_i) = (m_{i,max}, l_{i,min}),  i = 1, ..., p
compute λ′P and S
while (S < S_max)
    L = { }
    for i = 1 : p
        if (m_i > m_{i,min})
            L = L ∪ {(m_1, l_1, ..., m_i − 1, l_i, ..., m_p, l_p)}
        if (l_i < l_{i,max})
            L = L ∪ {(m_1, l_1, ..., m_i, l_i + 1, ..., m_p, l_p)}
    end
    for every element j of L, compute
        G̃_j = (λ′P − λ′P̃_j) / (S̃_j − S)
        where P̃_j and S̃_j are the values of P and S for the j-th element of L
    (m_{i,old}, l_{i,old}) = (m_i, l_i),  i = 1, ..., p
    find the maximum of G̃_j and choose the corresponding combination as (m_i, l_i)
    compute λ′P and S
end
choose the combination (m_{i,old}, l_{i,old})
```
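The greedy search above can be sketched in Python as follows. The `cost` function stands in for solving the Riccati equation (20) for λ′P; here it is a made-up surrogate (decreasing in each l_{i}, increasing in each m_{i}, mimicking Lemma 1) used purely for illustration:

```python
def cost(ms, ls):
    # surrogate for lambda'P: improves with faster sampling (smaller m_i)
    # and finer quantization (larger l_i); NOT the paper's actual cost
    return sum(2.0 ** m / l for m, l in zip(ms, ls))

def rate(ms, ls, T=1.0):
    # S = sum_i l_i / T_i  with  T_i = 2^{m_i} T
    return sum(l / (2 ** m * T) for m, l in zip(ms, ls))

def greedy(m_min, m_max, l_min, l_max, s_max, T=1.0):
    p = len(m_min)
    ms, ls = list(m_max), list(l_min)          # "bottom-left" start
    while rate(ms, ls, T) < s_max:
        best, best_gain = None, float("-inf")
        for i in range(p):                     # at most 2p candidates
            for dm, dl in ((-1, 0), (0, 1)):
                m2, l2 = ms[i] + dm, ls[i] + dl
                if not (m_min[i] <= m2 <= m_max[i]
                        and l_min[i] <= l2 <= l_max[i]):
                    continue
                cms, cls = list(ms), list(ls)
                cms[i], cls[i] = m2, l2
                gain = ((cost(ms, ls) - cost(cms, cls))
                        / (rate(cms, cls, T) - rate(ms, ls, T)))
                if gain > best_gain:
                    best, best_gain = (cms, cls), gain
        if best is None:                       # no moves left in the grid
            break
        prev = (list(ms), list(ls))
        ms, ls = best
        if rate(ms, ls, T) > s_max:            # overshoot: keep last feasible
            return prev
    return ms, ls

ms, ls = greedy([0, 0], [4, 4], [9, 9], [16, 16], s_max=5.0)
```

Each accepted step strictly increases the rate, so the loop terminates; in a real design `cost` would solve (20) for the candidate combination.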

Note that G̃_{j} represents the estimation performance improvement per unit of transmission rate increase. For each given (m_{i}, l_{i}), we choose (m_{i} − 1, l_{i}) and (m_{i}, l_{i} + 1) as the next candidate combinations, so there are at most 2p candidates. Among these, we select the combination whose G̃_{j} is the largest. This process is continued until the current transmission rate exceeds S_{max}.

The number of combinations tested in the proposed algorithm is small compared with a brute force search. For example, the proposed algorithm starts with the bottom-left combination (m_{1}, l_{1}, m_{2}, l_{2}) = (4, 9, 4, 9) and moves toward the top-right combination (0, 16, 0, 16) one step at a time, until S ≤ S_{max} is no longer satisfied. Unfortunately, there is no guarantee that the solution found by the proposed method is near the optimal solution. Through a numerical example in Section 4, however, it is shown that the gap between the suboptimal and the optimal solution is not large.

We note that the optimization algorithm is applied only once, when the networked system is designed. Once the sampling period T_{i} and the quantization bit length l_{i} are determined, they are programmed into each sensor node; no additional computation is needed in the sensor nodes.

## 4. Numerical Example

In this section, the proposed method is verified on a one-dimensional attitude estimation problem. The state is defined by

$$x(t) = \left[\begin{array}{c}\theta \\ \dot{\theta}\\ \ddot{\theta}\end{array}\right]$$

where θ is the attitude we want to estimate. An accelerometer-based inclinometer and a gyroscope are used as sensors. The system model is given by

$$\begin{array}{l}A = \left[\begin{array}{ccc}0& 1& 0\\ 0& 0& 1\\ 0& 0& 0\end{array}\right],\quad C = \left[\begin{array}{ccc}1& 0& 0\\ 0& 1& 0\end{array}\right]\\ Q = \left[\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& 0.2\end{array}\right],\quad R = \left[\begin{array}{cc}0.0056& 0\\ 0& 0.003\end{array}\right]\end{array}$$

The values given in (28), y_{max,1} = 3.1416, y_{max,2} = 2.6180, and T = 1 are used.

The optimization problem (21) with S_{max} = 500 and λ = [1 0 0]′ is considered; this λ is a natural choice since we want to estimate θ.

First, the optimization problem is solved by a brute force search in which all possible combinations are examined. The optimal solution is

$$({m}_{1}, {l}_{1}, {m}_{2}, {l}_{2}) = (2, 10, 2, 10)$$

and S and λP at this combination are

$$S = 500,\quad \lambda P = 0.00259$$

Secondly, the proposed suboptimal algorithm is used; the solution it finds is

$$({m}_{1}, {l}_{1}, {m}_{2}, {l}_{2}) = (2, 9, 2, 9)$$

and S and λP at this combination are

$$S = 450,\quad \lambda P = 0.00260$$

The proposed method was able to find a nearly optimal solution with less computation: 479 combinations are tested in the brute force search, while only 21 combinations are tested in the proposed algorithm.

To test whether the proposed λP is a good indicator of the estimation performance, data are generated with Matlab and processed with a Kalman filter. The estimation performance is evaluated with

$${P}_{experiment} = \frac{1}{N}\sum_{k=1}^{N}{\theta}_{error,k}^{2}$$

where N is the number of data points and θ_{error,k} = θ − θ̂; θ̂ is computed as [1 0 0] x̂_{k}. We computed P_{experiment} for all possible combinations; the minimizing combination is

$$({m}_{1}, {l}_{1}, {m}_{2}, {l}_{2}) = (2, 9, 2, 10)$$

and S and P_{experiment} at this combination are

$$S = 475,\quad {P}_{experiment} = 0.00159$$

It can be seen that the optimal solution predicted by λP nearly coincides with the real optimal solution. To see how similar λP and P_{experiment} are, they are plotted for different (m_{1}, l_{1}, m_{2}, l_{2}) combinations. Since the parameter space is four-dimensional, the result is not easy to visualize; thus we fix (m_{1}, l_{1}) = (2, 9) and plot λP and P_{experiment} over the (m_{2}, l_{2}) combinations in Figures 2 and 3. The data marked with "o" satisfy S ≤ S_{max} and the data marked with "*" do not.

It can be seen that the trend of λP closely follows that of P_{experiment}. Thus λP can be used to predict the estimation performance given the sampling periods and the quantization bit lengths.

The transmission rate S is given in Figure 4. To see the trade-off between S and the estimation performance, S and λP are given for three (m_{1}, l_{1}, m_{2}, l_{2}) combinations in Table 2. We can see that when S decreases (that is, when less data are transmitted), λP tends to increase (that is, the estimation performance degrades).

Finally, to test the efficiency of the proposed algorithm, we applied it to 100 random models, where A is randomly generated and the same C as in the previous simulation is used. In the brute force method, 479 combinations are tested, as in the previous simulation, since the same setting is used. In the proposed algorithm, the number of combinations tested is between 13 and 21; that is, at most 21 in the worst case, so the convergence is relatively fast. To see the accuracy of the proposed algorithm, the following value is computed:

$$\text{Accuracy} = \frac{100 \times (\lambda {P}_{proposed} - \lambda {P}_{optimal})}{\lambda {P}_{optimal}}$$

where P_{optimal} is computed using the brute force method. Over the 100 trials, the worst-case accuracy was 7.26% and the average was 1.73%. Thus we believe the proposed method can find a near-optimal value while avoiding a large computation.

## 5. Conclusions

In this paper, attitude estimation over a network with a transmission rate constraint is considered. The transmission rate depends on the sampling periods and the quantization bit lengths. Basically, the problem is a trade-off between the estimation performance and the transmission rate, where the design parameters are the sampling periods and the quantization bit lengths.

First, we investigated how the sampling period and the quantization bit length affect the estimation performance. To do this, we introduced an augmented system and defined the estimation performance P. Second, the trade-off problem was formulated as an optimization problem and a suboptimal algorithm was provided. Through numerical examples, we showed that the defined estimation performance matches the real estimation performance in the sense that the graphs of P and the real estimation performance are similar. We also showed that the proposed algorithm can find a reasonably good solution.

While deriving P, we made assumption (23), which simplifies the derivation of P but is not essential. Removing this assumption and obtaining a more general result is left for future work, as is testing the algorithm over a real network.
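For illustration, the brute-force baseline against which the proposed algorithm is compared can be sketched as follows. The `rate` and `performance` functions below are hypothetical placeholders: the paper's actual S and λP computations require the full augmented-system model, and the search ranges assumed here do not reproduce the 479-combination setting of the simulation.

```python
import itertools


def rate(m1, l1, m2, l2, base_period=0.01):
    # Hypothetical rate model: sensor i sends l_i bits every 2**m_i base
    # sampling periods. This is an assumption for illustration only.
    return (l1 / 2 ** m1 + l2 / 2 ** m2) / base_period


def performance(m1, l1, m2, l2):
    # Hypothetical monotone surrogate for lambda*P: coarser sampling
    # (larger m_i) and fewer bits (smaller l_i) degrade the index.
    return (2 ** m1 + 2 ** m2) * (2.0 ** -l1 + 2.0 ** -l2)


def brute_force(s_max, m_range=range(0, 5), l_range=range(1, 17)):
    """Enumerate all (m1, l1, m2, l2), keep those meeting S <= s_max,
    and return the feasible combination with the smallest index."""
    best, best_p, tested = None, float("inf"), 0
    for m1, l1, m2, l2 in itertools.product(m_range, l_range, m_range, l_range):
        tested += 1
        if rate(m1, l1, m2, l2) > s_max:
            continue  # violates the transmission-rate constraint
        p = performance(m1, l1, m2, l2)
        if p < best_p:
            best, best_p = (m1, l1, m2, l2), p
    return best, best_p, tested


best, best_p, tested = brute_force(s_max=600.0)
print(best, best_p, tested)
```

The proposed algorithm in the paper replaces this exhaustive enumeration (6,400 combinations under the assumed ranges above) with a search that tested only 13 to 21 combinations in the reported experiments.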

## Acknowledgments

This work was supported by National Research Foundation of Korea Grant funded by the Korean Government (No. 2009-0067447).

## References

1. Zhao, F.; Guibas, L. Wireless Sensor Networks; Elsevier: San Francisco, CA, USA, 2004.
2. Choi, D.H.; Kim, D.S. Wireless fieldbus for networked control systems using LR-WPAN. Int. J. Control Autom. Syst. **2008**, 6, 119–125.
3. Wu, W.; Arapostathis, A. Optimal sensor querying: General Markovian and LQG models with controlled observations. IEEE Trans. Automat. Contr. **2008**, 53, 1392–1405.
4. He, X.; Wang, Z.; Zhou, D.H. Robust fault detection for networked systems with communication delay and data missing. Automatica **2009**, 45, 2634–2639.
5. Wei, G.; Wang, Z.; Shu, H. Robust filtering with stochastic nonlinearities and multiple missing measurements. Automatica **2009**, 45, 836–841.
6. Dong, H.; Wang, Z.; Gao, J. Robust H_{∞} filtering for a class of nonlinear networked systems with multiple stochastic communication delays and packet dropouts. IEEE Trans. Signal Process. **2010**, 58, 1957–1966.
7. Lee, I.; Choi, S. Discrimination of visual and haptic rendering delays in networked environments. Int. J. Control Autom. Syst. **2009**, 7, 25–31.
8. Miskowicz, M. Send-on-delta concept: An event-based data reporting strategy. Sensors **2006**, 6, 49–63.
9. Suh, Y.S.; Nguyen, V.H.; Ro, Y.S. Modified Kalman filter for networked monitoring systems employing a send-on-delta method. Automatica **2007**, 43, 332–338.
10. Mehra, R.K. Optimization of measurement schedules and sensor designs for linear dynamic systems. IEEE Trans. Automat. Contr. **1976**, 21, 55–64.
11. Do, L.M.K.; Suh, Y.S.; Nguyen, V.H. Networked Kalman filter with sensor transmission interval optimization. In Proceedings of the SICE-ICASE International Joint Conference, Busan, Korea, 18–21 October 2006; pp. 1047–1052.
12. Kadu, S.C.; Bhushan, M.; Gudi, R. Optimal sensor network design for multirate systems. J. Process Control **2008**, 18, 594–609.
13. Gersho, A.; Gray, R.M. Vector Quantization and Signal Compression; Kluwer Academic Publishers: Norwell, MA, USA, 1992.
14. Elia, N.; Mitter, S.K. Stabilization of linear systems with limited information. IEEE Trans. Automat. Contr. **2001**, 46, 1384–1400.
15. Luong-Van, D.; Tordon, M.J.; Katupitiya, J. Covariance profiling for an adaptive Kalman filter to suppress sensor quantization effects. In Proceedings of the 43rd IEEE Conference on Decision and Control, Paradise Island, Bahamas, 14–17 December 2004; pp. 2680–2685.
16. Sun, S.; Lin, J.; Xie, L.; Xiao, W. Quantized Kalman filtering. In Proceedings of the 22nd IEEE International Symposium on Intelligent Control, Singapore, 26–28 September 2007; pp. 7–12.
17. Wen, C.; Tang, X.; Ge, Q. Decentralized quantized Kalman filter with limited bandwidth. In Proceedings of the Second International Symposium on Intelligent Information Technology Application, Shanghai, China, 21–22 December 2008; pp. 291–295.
18. Lee, D.J.; Tomizuka, M. Multirate optimal state estimation with sensor fusion. In Proceedings of the American Control Conference, Denver, CO, USA, 4–6 June 2003; pp. 2887–2892.
19. Chen, T.; Francis, B. Optimal Sampled-Data Control Systems; Springer-Verlag: Tokyo, Japan, 1995.
20. Brown, R.G.; Hwang, P.Y.C. Introduction to Random Signals and Applied Kalman Filtering; John Wiley & Sons: New York, NY, USA, 1997.
21. Clements, D.J.; Wimmer, H.K. Monotonicity of the optimal cost in the discrete-time regulator problem and Schur complements. Automatica **2001**, 37, 1779–1786.

**Figure 1.** Relationship between P (estimation error covariance) and S (transmission rate) for different parameter combinations.

**Table 1.** ỹ_{k} and C̃_{k} according to k.

k | 1 | 2 | 3 | 4 |
---|---|---|---|---|
ỹ_{k} | [y_{k,1}] | $\left[\begin{array}{c}{y}_{k,1}\\ {y}_{k,2}\end{array}\right]$ | [y_{k,1}] | $\left[\begin{array}{c}{y}_{k,1}\\ {y}_{k,2}\end{array}\right]$ |
C̃_{k} | [C_{1}] | $\left[\begin{array}{c}{C}_{1}\\ {C}_{2}\end{array}\right]$ | [C_{1}] | $\left[\begin{array}{c}{C}_{1}\\ {C}_{2}\end{array}\right]$ |

**Table 2.** S and λP for three (m_{1}, l_{1}, m_{2}, l_{2}) combinations.

m_{1} | l_{1} | m_{2} | l_{2} | S | λP |
---|---|---|---|---|---|
0 | 16 | 0 | 16 | 3200 | 0.000080 |
2 | 12 | 2 | 12 | 600 | 0.000165 |
4 | 9 | 4 | 9 | 112.5 | 0.001282 |
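The trade-off shown in Table 2 can be quantified with the same relative measure as the Accuracy metric of Section 4. Below is a minimal sketch using the table's values; taking the highest-rate row as the percentage baseline is our choice for illustration, not a convention from the paper.

```python
# Rows of Table 2: (m1, l1, m2, l2, S, lambda_P)
table2 = [
    (0, 16, 0, 16, 3200.0, 0.000080),
    (2, 12, 2, 12, 600.0, 0.000165),
    (4, 9, 4, 9, 112.5, 0.001282),
]

# Percent degradation relative to the highest-rate row, in the same
# form as the paper's Accuracy metric: 100 * (P - P_ref) / P_ref.
p_ref = table2[0][5]
for *_, s, lam_p in table2:
    degradation = 100.0 * (lam_p - p_ref) / p_ref
    print(f"S = {s:7.1f}  lambda*P = {lam_p:.6f}  degradation = {degradation:8.2f}%")
```

The output makes the trade-off explicit: cutting S from 3200 to 112.5 inflates λP by more than an order of magnitude.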

© 2010 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).