
*Sensors* **2009**, *9*(8), 5919-5932; doi:10.3390/s90805919

Article

Inertial and Magnetic Sensor Data Compression Considering the Estimation Error

Department of Electrical Engineering, University of Ulsan, Namgu, Ulsan, Korea; Tel. +82-52-259-2196; Fax: +82-52-259-1686

Received: 22 June 2009; in revised form: 15 July 2009 / Accepted: 17 July 2009 / Published: 24 July 2009

## Abstract


This paper presents a compression method for inertial and magnetic sensor data, where the compressed data are used to estimate some states. When the sensor data are bounded, the proposed compression method guarantees that the compression error is smaller than a prescribed bound. The manner in which this error bound affects the bit rate and the estimation error is investigated. Simulations over a test set of 12 cases show that the proposed method reduces the estimation error by 18.81% compared with a filter that does not use the compression error bound.

**Keywords:** compression; estimation; inertial sensor; magnetic sensor

## 1. Introduction

Largely because of MEMS technology, inertial sensors (accelerometers and gyroscopes) are becoming smaller and cheaper [1], which makes it possible to use inertial sensors in many applications. Inertial sensors are used in motion trackers [2], personal navigation systems [3], and remote control systems [4].

In some applications such as body motion trackers (for example, the product 'Moven' by XSENS), inertial sensors are used to track body movement. As the number of inertial sensors increases, the size of the sensor data increases accordingly. The sensor data are transmitted to the microprocessor board through wired or wireless communication channels. In a wireless communication channel, the transmission speed is relatively low compared with a wired channel, so the size of the sensor data needs to be reduced if it exceeds the capacity of the communication channel. For example, the transmission rate of raw sensor data for one MTx (a commercial inertial and magnetic sensor) can be as high as 72 kbps: 9 sensor outputs (3 accelerometers, 3 gyroscopes, and 3 magnetic sensors) × 500 Hz (maximum sampling rate) × 16 bits (16-bit A/D conversion for each sensor). If four MTx units are used, the size of the sensor data exceeds the capacity of Zigbee (whose maximum rate is 250 kbps [5]). More applications are expected as networked control and monitoring systems become more popular [6].
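The bit-rate arithmetic above can be checked directly; a minimal sketch, using the figures quoted in the text:

```python
# Back-of-the-envelope check of the bit rates quoted above.
channels = 9          # 3 accelerometers + 3 gyroscopes + 3 magnetic sensors
rate_hz = 500         # maximum sampling rate of one MTx
bits_per_sample = 16  # 16-bit A/D conversion per sensor channel

one_mtx_bps = channels * rate_hz * bits_per_sample
print(one_mtx_bps)      # 72000 bps = 72 kbps per MTx
print(4 * one_mtx_bps)  # 288000 bps, above Zigbee's 250 kbps maximum
```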

One way to reduce the size of the sensor data is to compress the data before transmission and decompress the received data on the microprocessor board. Data compression has been extensively studied in many areas [7]. In applications such as body motion trackers, real-time compression is preferable in order to avoid delayed sensor data transmission and, consequently, delayed motion estimation. One of the most popular real-time compression methods is ADPCM (Adaptive Differential Pulse Coded Modulation) [8], which is optimized for voice data. In [9], a simplified ADPCM method is used for inertial sensor data compression, where the maximum error (the difference between the original data and the compressed-and-then-decompressed data) is only relatively bounded (e.g., to 1% of the sensor data).

The performance indices of data compression are the bit rate and the quality of compressed data. In voice data compression, the quality of compressed data is evaluated by listening to the compressed-and-then-decompressed voice data. This rather subjective evaluation makes sense since the final destination of compressed data is the human ear. On the other hand, the final destination of compressed inertial sensor data is usually a filter (such as a Kalman filter), where orientation is estimated. Thus the quality of compression should be judged by how the compression affects the estimation error.

In this paper, a modified ADPCM method is proposed, where the absolute maximum error bound is explicitly given. Also, we investigate how this error affects the estimation error. A part of this paper was presented in [10].

## 2. Inertial and Magnetic Sensor Data Compression and Estimation

The overall process of compression and estimation is given in Figure 1, where k is a discrete time index. The objective is to estimate some states x(k) (attitude, heading, position, etc.) using inertial sensor data y(k) at a limited data transmission rate. The inertial sensor data y(k) is compressed into d̃(k) and transmitted to the microprocessor board. The compressed data is decompressed into ŷ(k) and the state x(k) is estimated using a filter.

Since the objective is to find a good estimator of x(k), the quality of compression is considered good if the estimation error x(k) − x̂(k) is small. The quality of the compression algorithm is evaluated using the following estimation error covariance:

$${P}_{\mathit{error}}=\mathrm{E}\{(x-\widehat{x}(\widehat{y}))^{\prime}(x-\widehat{x}(\widehat{y}))\}$$

where x̂(ŷ) is the estimator when ŷ is used as the output.

Note that P_error depends on the filter algorithm used to compute x̂(ŷ) in addition to the compression algorithm. The ideal compression algorithm minimizes both P_error and the bit rate; however, a small P_error usually requires a large bit rate. In Section 3, we propose a compression algorithm where the maximum compression error is bounded. The maximum compression error bound serves as a design parameter that trades off P_error against the bit rate.

## 3. Modified ADPCM Algorithm

The ADPCM block scheme is given in Figure 2. We assume that y(k) is the output of an n_y-bit uniform quantizer, where y(k) satisfies

$$|y(k)|\le {y}_{\mathit{max}}$$

Let the quantization size δ of y(k) be defined by

$$\delta =\frac{{y}_{\mathit{max}}}{{2}^{{n}_{y}-1}}$$

If there is more than one sensor, one encoder is needed for each sensor.

The sensor data y(k) is compared with the predictor output ỹ(k). The difference d(k) is coded into d̃(k), which is transmitted to the estimator board. In the standard ADPCM, d̃(k) is a quantization index i(k). In this paper, d̃(k) consists of one mode bit m(k) and a quantization index i(k):

$$\tilde{d}(k)=\left[\begin{array}{c}m\left(k\right)\\ i(k)\end{array}\right]$$

In the decoder, the decompressed data is ŷ(k) = ỹ(k) + d̂(k). The predictor output ỹ(k) can be computed from d̂(k) and thus does not need to be transmitted.

The adaptive predictor uses the same pole-zero configuration as that in CCITT G.726 ADPCM, a speech compression/decompression protocol proposed in 1990 [11]:

$$\tilde{y}(k)=\sum _{i=1}^{2}{a}_{i}(k-1)\widehat{y}(k-i)+\sum _{i=1}^{6}{b}_{i}(k-1)\widehat{d}(k-i)$$

From assumption (2), if ỹ(k) computed from (5) satisfies ỹ(k) > y_max, we set ỹ(k) = y_max; similarly, if ỹ(k) < −y_max, we set ỹ(k) = −y_max. The adaptive algorithm in the G.726 protocol is used to adjust a_i and b_i; the details are given in [11]. The tone and transition detector part is omitted since it is only for voice data.

The compression error e_c(k) is the difference between the original signal y(k) and the decompressed signal ŷ(k):

$$\begin{array}{lll}{e}_{c}(k)\hfill & =\hfill & y(k)-\widehat{y}(k)\hfill \\ \hfill & =\hfill & (\tilde{y}(k)+d(k))-(\tilde{y}(k)+\widehat{d}(k))\hfill \\ \hfill & =\hfill & d(k)-\widehat{d}(k)\hfill \end{array}$$

Standard ADPCM algorithms [8] are modified so that the maximum error is bounded as follows:

$$|{e}_{c}(k)|\le {e}_{\mathit{max}}$$

Now the coding of d̃(k) is explained. The mode bit m(k) in d̃(k) is used to ensure (7). If the compression error e_c(k) of the standard ADPCM method satisfies (7), the mode is 0 (i.e., m(k) = 0); as will be seen in Section 3.1, this is the case if |d(k)| ≤ 2^{y_s(k)} δ, where y_s(k) is an adaptive scaling factor. On the other hand, if the compression error e_c(k) of the standard ADPCM method does not satisfy (7), the mode is 1 (i.e., m(k) = 1) and a special uniform quantizer is used, as in Section 3.2. Thus the mode m(k) is given by

$$m(k)=\{\begin{array}{ll}0,\hfill & \text{if}\hspace{0.17em}\frac{|d(k)|}{{2}^{{y}_{s}(k)}\delta}\le 1\hfill \\ 1,\hfill & \text{otherwise}\hfill \end{array}$$

The quantization index i(k) is defined differently for m(k) = 0 and m(k) = 1.

#### 3.1. Quantization index when m(k) = 0

If m(k) = 0, the signal d(k) is quantized with n_d bits by a logarithmic quantizer with an adaptive scaling factor y_s(k), where the quantization index i(k) (1 ≤ |i| ≤ 2^{n_d−1}) satisfies

$${f}_{i-1}<\frac{|d(k)|}{{2}^{{y}_{s}(k)}\delta}\le {f}_{i}$$

The sign of the index i(k) is the same as that of d(k); if d(k) = 0, then i = 1. The coefficients f_i in (9) are computed from the μ-law [8] so that f_i satisfies the following:

$$\frac{{\text{log}}_{e}\left(1+\mu \left|{f}_{i}\right|\right)}{{\text{log}}_{e}(1+\mu )}=\frac{i}{{2}^{{n}_{d}-1}}$$
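The breakpoints f_i can be reproduced by inverting the μ-law relation (with exponent i/2^{n_d−1}, so that the top breakpoint is 1). In this sketch, μ = 5 and n_d = 5 are assumptions: the text does not state μ, but these values reproduce the f_i entries of Table 1 to within rounding.

```python
# Reconstruct the mu-law quantizer breakpoints f_i by inverting the mu-law
# relation. mu = 5 is an assumption (not stated in the text) chosen because
# it matches Table 1: f_1 = 0.0237, f_8 = 0.2899, f_16 = 1.0000.
mu = 5.0
n_d = 5                  # bits for the quantization index in mode 0
levels = 2 ** (n_d - 1)  # 16 magnitude levels, indices i = 1..16

f = [((1.0 + mu) ** (i / levels) - 1.0) / mu for i in range(levels + 1)]
print([round(v, 4) for v in f])
```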

The scaling adaptation factor y_s(k) is computed as in the standard ADPCM algorithm except that y_s(k) is bounded as follows:

$$3\le {y}_{s}(k)\le {\overline{y}}_{s}$$

We note that ȳ_s is chosen so that (7) is satisfied. First, we derive the upper bound of e_c(k) when ȳ_s is given. The decompressed signal d̂(k) is computed as follows:

$$\widehat{d}(k)=\text{sgn}(i(k)){2}^{{y}_{s}(k)}\delta \frac{{f}_{i-1}+{f}_{i}}{2}$$

where

$$\text{sgn}(\alpha )=\{\begin{array}{ll}1,\hfill & \alpha >0\hfill \\ 0,\hfill & \alpha =0\hfill \\ -1,\hfill & \alpha <0\hfill \end{array}$$

The error e_c(k) is then

$$\begin{array}{lll}\left|{e}_{c}(k)\right|\hfill & =\hfill & |d(k)-\widehat{d}(k)|\hfill \\ \hfill & \le \hfill & {2}^{{y}_{s}(k)}\delta \frac{{f}_{i}-{f}_{i-1}}{2}\hfill \end{array}$$

From the fact that f_i is monotonically increasing and (10), we have

$$\begin{array}{lll}\left|{e}_{c}(k)\right|\hfill & \le \hfill & {2}^{{y}_{s}(k)}\delta \frac{{f}_{{2}^{{n}_{d}-1}}-{f}_{{2}^{{n}_{d}-1}-1}}{2}\hfill \\ \hfill & \le \hfill & {2}^{{\overline{y}}_{s}}\delta \frac{{f}_{{2}^{{n}_{d}-1}}-{f}_{{2}^{{n}_{d}-1}-1}}{2}\hfill \end{array}$$

Given e_max, to satisfy (7), ȳ_s should satisfy the following:

$${2}^{{\overline{y}}_{s}}\delta \frac{{f}_{{2}^{{n}_{d}-1}}-{f}_{{2}^{{n}_{d}-1}-1}}{2}\le {e}_{\mathit{max}}$$

Thus, if ȳ_s is chosen to satisfy (12), the quantization error is always smaller than e_max when m(k) = 0. We also note that, in addition to the global bound e_max, if the index i(k) is known, we have the less conservative bound from (11):

$${\overline{e}}_{c}(k)={2}^{{y}_{s}(k)}\delta \frac{{f}_{i}-{f}_{i-1}}{2}$$

This bound will be used later in the estimation problem.
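A minimal sketch of the mode-0 logarithmic quantizer (9), its reconstruction, and the index-dependent bound (11). The μ and n_d values are assumptions chosen to match Table 1, and the δ and y_s(k) test values are illustrative.

```python
# Mode-0 logarithmic quantizer: quantize d(k), reconstruct d_hat(k), and
# check that |d - d_hat| never exceeds the index-dependent bound (11).
# mu = 5 and n_d = 5 are assumptions chosen to match Table 1.
mu, n_d = 5.0, 5
levels = 2 ** (n_d - 1)
f = [((1.0 + mu) ** (i / levels) - 1.0) / mu for i in range(levels + 1)]

def quantize_mode0(d, y_s, delta):
    """Return (index, reconstructed d_hat, local error bound e_bar).

    Valid only in mode 0, i.e. when |d| <= 2**y_s * delta."""
    scale = (2.0 ** y_s) * delta
    r = abs(d) / scale                       # normalized magnitude, <= 1 in mode 0
    i = next(j for j in range(1, levels + 1) if r <= f[j])  # f_{i-1} < r <= f_i
    sgn = 1.0 if d >= 0 else -1.0            # the text assigns i = 1 when d = 0
    d_hat = sgn * scale * (f[i - 1] + f[i]) / 2.0
    e_bar = scale * (f[i] - f[i - 1]) / 2.0
    return i, d_hat, e_bar

# Round-trip check: the compression error never exceeds the bound (11).
delta, y_s = 1.0 / 2 ** 15, 10
for d in [0.001, 0.005, -0.01, 0.03]:
    i, d_hat, e_bar = quantize_mode0(d, y_s, delta)
    assert abs(d - d_hat) <= e_bar + 1e-12
```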

#### 3.2. Quantization index when m(k) = 1

From (8), m(k) = 1 if |d(k)| > 2^{y_s(k)} δ. In this case, the logarithmic quantizer cannot guarantee the maximum error bound (7). Noting that d(k) = y(k) − ỹ(k), the mode is 1 when the difference between the output y(k) and the predicted value ỹ(k) is large, which happens when the signal change is abrupt rather than smooth.

To ensure the maximum error condition (7), we introduce a uniform quantizer when the mode is 1. An example is given in Figure 3, where y(k) is outside the interval [ỹ(k) − δ2^{y_s(k)}, ỹ(k) + δ2^{y_s(k)}] and the mode is 1. The upper and lower intervals of the mode 1 region (the mode 0 interval is the shaded area) are quantized with a uniform quantizer of quantization size 2e_max. The formal definition of the index in mode 1 is as follows. Let u_length and l_length be defined by

$$\begin{array}{lll}{u}_{\mathit{length}}\hfill & =\hfill & {y}_{\mathit{max}}-(\tilde{y}(k)+\delta {2}^{{y}_{s}(k)})\hfill \\ {l}_{\mathit{length}}\hfill & =\hfill & (\tilde{y}(k)-\delta {2}^{{y}_{s}(k)})+{y}_{\mathit{max}}\hfill \end{array}$$

Let u_level (the number of quantization levels for the upper interval u_length) be defined by

$${u}_{\mathit{level}}=\{\begin{array}{ll}0,\hfill & \text{if}\hspace{0.17em}{u}_{\mathit{length}}\le 0\hfill \\ \text{ceil}\left(\frac{{u}_{\mathit{length}}}{2{e}_{\mathit{max}}}\right),\hfill & \text{otherwise}\hfill \end{array}$$

where ceil(α) is the smallest integer no smaller than α. The index i(k) is given by

$$i(k)=\{\begin{array}{ll}\text{floor}\left(\frac{y(k)-\tilde{y}(k)-\delta {2}^{{y}_{s}(k)}}{2{e}_{\mathit{max}}}\right),\hfill & \text{if}\hspace{0.17em}y(k)>\tilde{y}(k)+\delta {2}^{{y}_{s}(k)}\hfill \\ {u}_{\mathit{level}}+\text{floor}\left(\frac{\tilde{y}(k)-\delta {2}^{{y}_{s}(k)}-y(k)}{2{e}_{\mathit{max}}}\right),\hfill & \text{if}\hspace{0.17em}y(k)<\tilde{y}(k)-\delta {2}^{{y}_{s}(k)}\hfill \end{array}$$

where floor(α) is the largest integer no larger than α.

The decompressed signal ŷ(k) = ỹ(k) + d̂(k) is then reconstructed directly, clamped to [−y_max, y_max]:

$$\widehat{y}(k)=\{\begin{array}{ll}\text{min}\left\{{y}_{\mathit{max}},\tilde{y}(k)+\delta {2}^{{y}_{s}(k)}+{e}_{\mathit{max}}+2i(k){e}_{\mathit{max}}\right\},\hfill & \text{if}\hspace{0.17em}i(k)<{u}_{\mathit{level}}\hfill \\ \text{max}\left\{-{y}_{\mathit{max}},\tilde{y}(k)-\delta {2}^{{y}_{s}(k)}-{e}_{\mathit{max}}-2\left(i(k)-{u}_{\mathit{level}}\right){e}_{\mathit{max}}\right\},\hfill & \text{otherwise}\hfill \end{array}$$

Since a uniform quantizer is used, the compression error e_c(k) in mode 1 is bounded by

$$\left|{e}_{c}(k)\right|\le {\overline{e}}_{c}(k)={e}_{\mathit{max}}$$
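The mode-1 uniform quantizer can be sketched end to end. The variable name `big` (standing in for δ2^{y_s(k)}) and all numeric test values below are illustrative assumptions; the round trip checks the bound |e_c(k)| ≤ e_max of (15).

```python
# Mode-1 uniform quantizer (Section 3.2): index computation, reconstruction
# clamped to [-y_max, y_max], and a round-trip check of |e_c(k)| <= e_max.
import math

def encode_mode1(y, y_tilde, big, e_max, y_max):
    """Return (index i(k), u_level); big stands for delta * 2**y_s(k)."""
    u_length = y_max - (y_tilde + big)
    u_level = 0 if u_length <= 0 else math.ceil(u_length / (2 * e_max))
    if y > y_tilde + big:                     # upper interval
        return math.floor((y - y_tilde - big) / (2 * e_max)), u_level
    return u_level + math.floor((y_tilde - big - y) / (2 * e_max)), u_level

def decode_mode1(i, u_level, y_tilde, big, e_max, y_max):
    """Reconstruct the decompressed value (interval midpoint, clamped)."""
    if i < u_level:
        return min(y_max, y_tilde + big + e_max + 2 * i * e_max)
    return max(-y_max, y_tilde - big - e_max - 2 * (i - u_level) * e_max)

# Illustrative values: all test points lie outside the mode-0 band.
y_max, e_max, y_tilde, big = 1.0, 0.01, 0.1, 0.05
for y in [0.2, 0.6, 0.99, -0.3, -0.95]:
    i, u_level = encode_mode1(y, y_tilde, big, e_max, y_max)
    y_hat = decode_mode1(i, u_level, y_tilde, big, e_max, y_max)
    assert abs(y - y_hat) <= e_max + 1e-12
```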

Note that the number of bits for the index i(k) is given by

$${n}_{i}(k)=\text{ceil}\left({\text{log}}_{2}\left(\text{ceil}\left(\frac{{u}_{\mathit{length}}}{2{e}_{\mathit{max}}}\right)+\text{ceil}\left(\frac{{l}_{\mathit{length}}}{2{e}_{\mathit{max}}}\right)\right)\right)$$

When m(k) = 1, n_i(k) changes depending on ỹ(k) and y(k); in contrast, when m(k) = 0, the number of bits for i(k) is the constant n_d. Also note that n_i can be computed on the estimation board and thus does not need to be transmitted.

## 4. Kalman Filter Compensating the Compression Error

In Section 3, a compression method was proposed where the maximum compression error is e_max. Moreover, if m(k) and i(k) are known, bounds on the compression error are given by (13) and (15). In this section, we use this information in a Kalman filter. We assume that y(k) is generated by a linear system

$$\begin{array}{rrr}\hfill x\left(k+1\right)& \hfill =\hfill & \hfill Ax(k)+w(k)\hfill \\ \hfill y(k)& \hfill =\hfill & \hfill Cx(k)+v(k)\hfill \end{array}$$

where x ∈ R^n is the state, y ∈ R^p is the output, and w(k) and v(k) are uncorrelated, zero-mean, white Gaussian noises that satisfy

$$\begin{array}{ll}\mathrm{E}\{w(k){w}^{\prime}(k)\}=Q,\hfill & \mathrm{E}\{v(k){v}^{\prime}(k)\}=R\hfill \end{array}$$

In the standard Kalman filter, x(k) is estimated using y(k). If y(k) is compressed, ŷ(k) = y(k) − e_c(k) is used instead. From (13) and (15), ē_c(k) can be computed and used to reduce the estimation error. If we assume that e_c,i(k) (the i-th element of e_c(k)) has a uniform distribution, then $\mathrm{E}\left\{{e}_{c,i}^{2}(k)\right\}=\frac{1}{3}{\overline{e}}_{c,i}^{2}(k)$. Treating the compression error e_c(k) as measurement noise in y(k), the following model can be used for an estimator:

$$\begin{array}{rrr}\hfill x\left(k+1\right)& \hfill =\hfill & \hfill Ax(k)+w(k)\hfill \\ \hfill \widehat{y}(k)& \hfill =\hfill & \hfill Cx(k)+\overline{v}(k)\hfill \end{array}$$

where

$$\overline{R}(k)=\mathrm{E}\{\overline{v}(k){\overline{v}}^{\prime}(k)\}=R+\frac{1}{3}\left[\begin{array}{lll}{\overline{e}}_{c,1}^{2}(k)\hfill & \hfill & \hfill \\ \hfill & \ddots \hfill & \hfill \\ \hfill & \hfill & {\overline{e}}_{c,p}^{2}(k)\hfill \end{array}\right]$$
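A minimal scalar sketch of the noise-inflation idea in (18). The system matrices, the noise levels, and the uniformly distributed stand-in for the compression error are all assumptions for illustration, not values from the paper.

```python
# Scalar Kalman filter that treats the compression error as extra measurement
# noise with variance e_bar**2 / 3 (uniform-distribution assumption of (18)).
import random

random.seed(0)
# Assumed toy system: x(k+1) = x(k) + w, y = x + v.
A, C, Q, R = 1.0, 1.0, 1e-4, 1e-2
e_max = 0.3                               # prescribed compression error bound

x, x_hat, P = 0.0, 0.0, 1.0
for _ in range(200):
    x = A * x + random.gauss(0.0, Q ** 0.5)
    y = C * x + random.gauss(0.0, R ** 0.5)
    e_c = random.uniform(-e_max, e_max)   # stand-in for the compression error
    y_hat = y - e_c                       # decompressed measurement
    R_bar = R + e_max ** 2 / 3.0          # inflated covariance, as in (18)
    P = A * P * A + Q                     # time update
    K = P * C / (C * P * C + R_bar)       # gain computed with the inflated R
    x_hat = A * x_hat + K * (y_hat - C * A * x_hat)
    P = (1.0 - K * C) * P                 # measurement update

print(round(P, 5))                        # settled error covariance
```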

Since the compression error is compensated in the estimation algorithm, we can expect a smaller estimation error, which is verified through the simulations in Section 5. When a Kalman filter is used for (17), the estimation error covariance P(k) = E{(x(k) − x̂(k))(x(k) − x̂(k))′} can be computed from a Riccati equation [12]:

$$P(k+1)=AP(k){A}^{\prime}+Q-AP(k){C}^{\prime}{(CP(k){C}^{\prime}+\overline{R}(k))}^{-1}CP(k){A}^{\prime}$$

Since R̄(k) depends on ŷ(k), P(k) cannot be computed before simulation. To evaluate the estimation error covariance without simulation, we use an upper bound of R̄(k):

$$\overline{R}(k)\le R+\frac{1}{3}\left[\begin{array}{lll}{e}_{\mathit{max},1}^{2}\hfill & \hfill & \hfill \\ \hfill & \ddots \hfill & \hfill \\ \hfill & \hfill & {e}_{\mathit{max},p}^{2}\hfill \end{array}\right]={\overline{R}}_{\mathit{max}}$$

Using this R̄_max in place of R̄(k), we can find a steady-state solution to (19):

$$\overline{P}=A\overline{P}{A}^{\prime}+Q-A\overline{P}{C}^{\prime}{\left(C\overline{P}{C}^{\prime}+{\overline{R}}_{\mathit{max}}\right)}^{-1}C\overline{P}{A}^{\prime}$$

P̄ can be considered an upper bound of P(k) in (19); from P̄, we can see how e_max,i affects the estimation error. A similar idea is used in [13], where a networked estimation problem is considered.
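The steady-state bound P̄ can be obtained by simply iterating the Riccati recursion with R̄_max until it settles. The scalar system below is an assumed toy example; the point is the monotone effect of e_max on P̄.

```python
# Iterate the Riccati recursion (19) with R_bar_max to its fixed point (20).
# Assumed scalar system matrices for illustration.
A, C, Q, R = 1.0, 1.0, 1e-4, 1e-2

def p_bar(e_max, iters=2000):
    """Steady-state error covariance bound for a given e_max."""
    r_bar_max = R + e_max ** 2 / 3.0      # upper bound of R_bar(k)
    P = 1.0
    for _ in range(iters):
        P = A * P * A + Q - (A * P * C) ** 2 / (C * P * C + r_bar_max)
    return P

# A larger compression error bound e_max yields a larger steady-state P_bar.
print([round(p_bar(e), 6) for e in (0.0, 0.1, 0.3)])
```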

## 5. Simulation

We compressed three data sets using the proposed compression algorithm. The original data rate is 1600 bits/s for each sensor: 16-bit A/D converted data (i.e., n_y = 16) at a sampling rate of 100 Hz. We used n_d = 5; that is, the number of bits for the quantization index when m(k) = 0 is 5. The bound e_max for each sensor is chosen so that e_max = 300δ, where δ differs for each type of sensor. ȳ_s = 12 is found to satisfy (12).

Bit rates for the three compressed data sets are given in Table 2. All three data sets were obtained using the XSENS MTx (3 accelerometers, 3 gyroscopes, and 3 magnetic sensors). Holding the MTx in a hand, we moved it slowly (data set 1) and fast (data set 2). Data set 3 was obtained from a personal navigation system, where the MTx is attached to the shoe of a pedestrian [3].

The bit rates of data set 3 are the largest because the changes in the data are the most abrupt. In particular, when the shoe contacts the floor, there is a large change in the accelerometer and gyroscope outputs, and consequently the compression algorithm enters m(k) = 1 more often.

To see how the compression error affects the estimation error, a simple one-dimensional attitude estimation problem is considered. An attitude θ is estimated using two outputs, an inclinometer output y_i and a gyroscope output y_g, where

$$\begin{array}{lll}{y}_{i}\hfill & =\hfill & \theta +{v}_{i}\hfill \\ {y}_{g}\hfill & =\hfill & \dot{\theta}+{v}_{g}\hfill \end{array}$$

and v_i and v_g are measurement noises with ${q}_{i}=\mathrm{E}\{{v}_{i}^{2}\}=0.13\times {10}^{-1}$ and ${q}_{g}=\mathrm{E}\{{v}_{g}^{2}\}=0.78\times {10}^{-5}$.

An indirect Kalman filter [12] is used to estimate θ from y_i and y_g. In the indirect Kalman filter, the gyroscope output is integrated to compute θ_i, i.e.,

$${\dot{\theta}}_{i}={y}_{g}$$

Defining θ_δ as

$${\theta}_{\delta}={\theta}_{i}-\theta $$

we obtain

$${\dot{\theta}}_{\delta}={v}_{g}$$

In the indirect Kalman filter, θ_δ is estimated first, and θ̂ is obtained indirectly from θ̂ = θ_i − θ̂_δ. By discretizing (22) with the sampling period T (T = 0.01 s), we have

$$\begin{array}{lll}{\theta}_{\delta}(k+1)\hfill & =\hfill & {\theta}_{\delta}(k)+\sqrt{T}{v}_{g}\hfill \\ {\theta}_{i}(k)-{y}_{i}(k)\hfill & =\hfill & {\theta}_{\delta}(k)-{v}_{i}(k)\hfill \end{array}$$
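The indirect filtering loop can be sketched compactly: integrate the gyro to get θ_i, run a scalar Kalman filter on the error state θ_δ, and recover θ̂ = θ_i − θ̂_δ. T, q_i, and q_g are the values quoted above; the test motion, the random seed, and the continuous-time interpretation of the gyro noise (sampled with standard deviation sqrt(q_g/T) so that integration matches the √T v_g term) are assumptions.

```python
# Indirect (error-state) Kalman filter for the one-dimensional attitude
# problem of (21)-(23), on synthetic data.
import math, random

random.seed(1)
T = 0.01                       # sampling period from the text
q_i, q_g = 0.13e-1, 0.78e-5    # noise levels quoted in the text

theta, theta_i = 0.0, 0.0      # true attitude and integrated-gyro attitude
td_hat, P = 0.0, 1.0           # error-state estimate and its covariance
err = []
for k in range(1000):
    theta_dot = math.sin(2 * math.pi * 0.5 * k * T)  # assumed test motion
    theta += T * theta_dot
    # gyro noise sampled so that integration reproduces the sqrt(T) v_g term
    y_g = theta_dot + random.gauss(0.0, (q_g / T) ** 0.5)
    y_i = theta + random.gauss(0.0, q_i ** 0.5)
    theta_i += T * y_g                               # gyro integration (21)
    # scalar Kalman filter on theta_delta, following the model (23)
    P = P + T * q_g                                  # predict
    z = theta_i - y_i                                # measurement of theta_delta
    K = P / (P + q_i)
    td_hat = td_hat + K * (z - td_hat)
    P = (1.0 - K) * P
    err.append(abs(theta - (theta_i - td_hat)))      # theta_hat = theta_i - td_hat

print(round(sum(err[-100:]) / 100, 4))               # mean |error|, last second
```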

Now the proposed compression algorithm is used to compress y_i and y_g. The maximum compression errors e_max,i (e_max for y_i) and e_max,g (e_max for y_g) affect both the bit rate and the estimation error. If e_max is small, the chance that the mode becomes 1 increases, and thus the bit rate becomes large. How the bit rate changes with e_max is given in Figure 4. The data y_i and y_g are generated using Matlab.

From Figure 4, we can see that the bit rate of the inclinometer outputs increases rapidly as e_max,i is decreased. On the other hand, the bit rate of the gyroscope outputs does not change much as e_max,g is decreased. The bit rate depends on how often the mode becomes 1: note that n_i (the number of bits needed for the quantized data when the mode is 1) is generally larger than n_d (the number of bits needed when the mode is 0). If the original signal is sufficiently smooth, d(k) is small since the predicted value ỹ(k) is very close to y(k); thus, even if we decrease e_max, d(k) still satisfies the condition |d(k)| ≤ 2^{y_s(k)} δ in (8).

The y_i and y_g signals and the compressed signals are given in Figures 5 and 6, where e_max,i = 0.2876 and e_max,g = 0.0122. We can see that the gyroscope output is relatively smooth compared with the inclinometer output; thus, the bit rate of the gyroscope output is relatively insensitive to changes in e_max,g.

The effects of changes in e_max on the estimation error are given in Figure 7, where the estimation error is predicted using (20). The actual estimation error from simulation is given in Figure 8, where $\sqrt{\sum {\left(\theta (k)-\widehat{\theta}(k)\right)}^{2}}$ is computed.

The relationship between bit rates and estimation error is presented in Figure 9, where the data are from Figures 4, 7 and 8. In the left graph, the points 1–6 have similar bit rates but different P_error. Thus, in the following simulation, we chose point 1, which corresponds to e_max,i = 0.2876 and e_max,g = 0.0122. We compared three different filters: (a) a standard Kalman filter using y_i and y_g; (b) a Kalman filter using ŷ_i and ŷ_g with compression error compensation (proposed in Section 4); (c) a Kalman filter using ŷ_i and ŷ_g without compression error compensation. We randomly generated 12 data sets; the results are given in Table 3.

It is not surprising that the P_error of the standard Kalman filter is the smallest, because the original data y_i and y_g are used as measurements. We can also see that the P_error of the proposed filter is smaller than that of filter (c); on average, the estimation error of the proposed filter is smaller by 18.81%. Note that the proposed filter (b) and filter (c) use the same decompressed data ŷ_i and ŷ_g as measurements; however, the proposed filter explicitly uses the compression error information (13) and (15) in (18), whereas filter (c) ignores it. In summary, the estimation error reduction is possible because of two facts: (1) the proposed compression method provides the compression error bounds (13) and (15), and (2) the proposed filter algorithm explicitly uses these bounds.

## 6. Conclusion

In this paper, we have proposed a compression method for inertial and magnetic sensor data. The proposed method guarantees that the compression error is bounded by a prescribed value e_max. A smaller e_max usually increases the bit rate and reduces the estimation error of the filter that uses the decompressed data; thus, e_max plays the role of a trade-off parameter between the bit rate and the estimation error. We have also seen that, by using a bound on the compression error, the estimation error can be reduced.

## Acknowledgments

This work was supported by the Korea Research Foundation (KRF) grant funded by the Korea government (MEST) (No. 2009-0074773).

## References and Notes

1. Barbour, N.; Schmidt, G. Inertial sensor technology trends. *IEEE Sens. J.* **2001**, *1*, 332–339.
2. Welch, G.; Foxlin, E. Motion tracking: no silver bullet, but a respectable arsenal. *IEEE Comput. Graph. Appl.* **2002**, *22*, 24–38.
3. Foxlin, E. Pedestrian tracking with shoe-mounted inertial sensors. *IEEE Comput. Graph. Appl.* **2005**, *25*, 38–46.
4. Suh, Y.S.; Park, S.K.; Kim, D.; Jo, K. Remote control of a moving robot using the virtual link. In Proceedings of the IEEE International Conference on Robotics and Automation, Roma, Italy, 2007; pp. 2343–2348.
5. Zigbee Alliance. Network Specification, Version 1; 2004.
6. Choi, D.H.; Kim, D.S. Wireless fieldbus for networked control systems using LR-WPAN. *Int. J. Contr. Autom. Syst.* **2008**, *6*, 119–125.
7. Gersho, A.; Gray, R.M. Vector Quantization and Signal Compression; Kluwer Academic Publishers: Norwell, MA, USA, 1992.
8. Jayant, N.; Noll, P. Digital Coding of Waveforms: Principles and Applications to Speech and Video; Prentice-Hall: New Jersey, USA, 1984.
9. Cheng, L.; Hailes, S.; Cheng, Z.; Fan, F.Y.; Hang, D.; Yang, Y. Compressing inertial motion data in wireless sensing systems - an initial experiment. In Proceedings of the 5th International Workshop on Wearable and Implantable Body Sensor Networks, Hong Kong, China, 2008.
10. Suh, Y.S. Compression of inertial and magnetic sensor data for network transmission. In Proceedings of the 8th IFAC International Conference on Fieldbuses & Networks in Industrial & Embedded Systems, Ansan, Korea, 2009; pp. 226–229.
11. ITU. Recommendation G.726: 40, 32, 24, 16 kbit/s Adaptive Differential Pulse Code Modulation; ITU: Geneva, Switzerland, 1990.
12. Brown, R.G.; Hwang, P.Y.C. Introduction to Random Signals and Applied Kalman Filtering; John Wiley & Sons: New York, NY, USA, 1997.
13. Suh, Y.S.; Nguyen, V.H.; Ro, Y.S. Modified Kalman filter for networked monitoring systems employing a send-on-delta method. *Automatica* **2007**, *43*, 332–338.

**Figure 8.**Estimation error $(\sqrt{\sum {\left(\theta (k)-\widehat{\theta}(k)\right)}^{2}})$ of the proposed filter.

**Table 1.** Coefficients f_{i} of the logarithmic quantizer.

| f_{0} | f_{1} | f_{2} | f_{3} | f_{4} | f_{5} | f_{6} | f_{7} | f_{8} |
|---|---|---|---|---|---|---|---|---|
| 0 | 0.0237 | 0.0503 | 0.0799 | 0.1131 | 0.1501 | 0.1915 | 0.2379 | 0.2899 |

| f_{9} | f_{10} | f_{11} | f_{12} | f_{13} | f_{14} | f_{15} | f_{16} |
|---|---|---|---|---|---|---|---|
| 0.3479 | 0.4129 | 0.4855 | 0.5667 | 0.6575 | 0.7593 | 0.8729 | 1.0000 |

**Table 2.** Bit rates (bits/s) for the three compressed data sets.

| | accelerometers | gyroscopes | magnetic sensors |
|---|---|---|---|
| data set 1 | 643.2 | 641.6 | 617.6 |
| | 648.0 | 632.0 | 622.4 |
| | 656.0 | 628.8 | 614.4 |
| data set 2 | 659.2 | 660.8 | 617.6 |
| | 662.4 | 640.0 | 622.4 |
| | 668.8 | 638.4 | 619.2 |
| data set 3 | 771.2 | 715.2 | 638.4 |
| | 798.4 | 696.0 | 633.6 |
| | 694.4 | 686.4 | 627.2 |

**Table 3.** Bit rates and estimation error of 3 filters: (a) a standard Kalman filter with uncompressed data, (b) the proposed method with ŷ_i and ŷ_g, and (c) a Kalman filter with ŷ_i and ŷ_g.

| experiment | Bit rate ŷ_g | Bit rate ŷ_i | P_error (a) | P_error (b) | P_error (c) | % improvement ((c) − (b))/(c) |
|---|---|---|---|---|---|---|
| 1 | 684.8 | 620.3 | 0.318 | 0.437 | 0.452 | 3.32 |
| 2 | 654.4 | 623.3 | 0.173 | 0.361 | 0.492 | 26.62 |
| 3 | 640.0 | 621.3 | 0.138 | 0.345 | 0.402 | 14.19 |
| 4 | 645.7 | 622.2 | 0.139 | 0.312 | 0.455 | 31.39 |
| 5 | 631.2 | 625.7 | 0.161 | 0.382 | 0.447 | 14.50 |
| 6 | 633.6 | 621.6 | 0.121 | 0.310 | 0.422 | 26.55 |
| 7 | 692.7 | 692.7 | 0.308 | 0.427 | 0.445 | 3.96 |
| 8 | 640.8 | 622.2 | 0.120 | 0.280 | 0.317 | 11.77 |
| 9 | 634.4 | 618.6 | 0.134 | 0.295 | 0.537 | 45.15 |
| 10 | 696.1 | 618.8 | 0.423 | 0.487 | 0.503 | 3.14 |
| 11 | 638.4 | 632.9 | 0.191 | 0.394 | 0.464 | 14.99 |
| 12 | 638.4 | 621.5 | 0.136 | 0.292 | 0.417 | 30.13 |
| average | 652.5 | 628.4 | 0.197 | 0.360 | 0.446 | 18.81 |

© 2009 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).