Improving Estimation Performance in Networked Control Systems Applying the Send-on-Delta Transmission Method

This paper is concerned with improving the performance of state estimation over a network in which a send-on-delta (SOD) transmission method is used. The SOD method requires that a sensor node transmit data to the estimator node only if its measurement value changes by more than a specified value δ. This method has been explored and applied by researchers because it uses network bandwidth efficiently. However, when this method is used, the estimator node is not guaranteed to receive data from the sensor nodes at every estimation period. Therefore, we propose a method to reduce the estimation error when no sensor data are received. When the estimator node does not receive data from the i-th sensor node, the sensor value is known to lie within the interval (−δ_i, +δ_i) around the last transmitted value. This implicit information has been used to improve estimation performance in previous studies. The main contribution of this paper is an algorithm in which the sensor value interval is reduced to (−δ_i/2, +δ_i/2) in certain situations. The proposed algorithm thus improves the overall estimation performance without any changes to the send-on-delta algorithms of the sensor nodes. Through numerical simulations, we demonstrate the feasibility and usefulness of the proposed method.


Introduction
Recently, interest in the study of networked control systems (NCSs) and sensor networks has increased widely due to their low cost, high flexibility, and simple installation and maintenance [1]. In such systems, a center station has the task of receiving, processing, and sending data to slave nodes such as sensor nodes and actuator nodes over a serial network. Slave nodes can be added to or removed from the network without changing the system structure much. Aside from the advantages mentioned above, however, several problems affect such systems, including limited bandwidth, network-induced delay, and packet loss.
One of the most interesting problems is how to reduce network bandwidth usage when there are many nodes on the network. This can be achieved by reducing either the data packet size or the data transmission rate. In [2], an adjustable deadband was defined at each node to reduce network traffic: a node does not broadcast a new message if its signal is within the deadband. In [3], estimators were used at each sensor node instead; when the estimated value deviates from the actual output by more than a prespecified tolerance, the actual sensor data are transmitted. To overcome the limited network bandwidth, transmission data size reduction using a special encoder-decoder was considered in [4]. Another method for reducing the data transmission rate, called send-on-delta (SOD) transmission, was explored in [5][6][7][8]. This method requires that a sensor node transmit data to the estimator node only if its measurement value changes by more than a specified value δ. By adjusting the δ value at each sensor node, the data transmission rate is reduced, so that network bandwidth is freed and can be used for other traffic.
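The SOD transmission rule itself is simple; the following is a minimal Python sketch (the function name and list-based interface are illustrative, not from any of the cited works):

```python
# A minimal sketch of send-on-delta (SOD) transmission at a sensor node.
# The node samples every period but transmits only when the measurement
# has moved by more than delta away from the last transmitted value.

def sod_transmit(samples, delta):
    """Return the subset of (index, value) pairs the node would transmit."""
    sent = []
    y_last = None
    for k, y in enumerate(samples):
        if y_last is None or abs(y - y_last) > delta:
            sent.append((k, y))
            y_last = y
    return sent

# Example: a slow ramp sampled 11 times; with delta = 0.25 only the
# samples that cross the threshold are actually transmitted.
ramp = [0.1 * k for k in range(11)]
print(sod_transmit(ramp, 0.25))
```

Only a few of the 11 samples are put on the network, which is exactly the bandwidth saving the method trades against estimator-side uncertainty.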
The purpose of this paper is to extend our work on the modified Kalman filter employing the SOD transmission method [5], where the states are estimated periodically by the estimator node regardless of whether the sensor nodes transmit data or not. A challenging issue is how the estimator node determines the measurement value at a sensor node when that node does not send data. If this problem is solved well, the estimation performance can be significantly improved. In this paper, we examine and evaluate the measurement output value, as well as the measurement noise arising when a sensor node does not transmit data. A compensated output, with a new measurement value and a new noise covariance, is then supplied to the filter in order to reduce the estimation error. Through simulations, we show that the proposed method gives better estimation performance than that of [5].

Sensor Output Evaluation
Consider a networked control system illustrated in Figure 1, where the linear continuous-time system is described as:

    ẋ(t) = A x(t) + B u(t) + w(t)
    y(t) = C x(t) + v(t)

where x ∈ R^n is the state of the plant, u is the deterministic input signal, and y ∈ R^p is the measurement output, which is sent to the estimator node by the sensor nodes. w(t) is the process noise with covariance Q, and v(t) is the measurement noise with covariance R. w(t) and v(t) are uncorrelated, zero-mean white Gaussian random processes.
The following assumptions are made on the data transmission over the network: 1. Sensor outputs y_i(t) are sampled at period T, but their data are transmitted to the estimator node only when the difference between the current value and the previously transmitted one is greater than δ_i.
2. For simplicity in problem formulation, any transmission delay from the sensor nodes to the estimator node is ignored.
The estimator node estimates the states of the plant regularly at period T, regardless of whether or not any sensor data arrive. If the i-th sensor data do not arrive, the estimator node knows that the current value of the i-th sensor output has not changed by more than (−δ_i, +δ_i) compared with the last received one. This implicit information is used to estimate the current states when the i-th sensor data do not arrive.
Let the last received value of the i-th sensor output be y_{i,last}. When no new data arrive, the estimator node uses y_{i,last} as the measurement value, but the measurement noise is increased from v_i(t) to v_{n,i}(t) = v_i(t) + Δ_i(t, t_last), where Δ_i(t, t_last) = y_i(t) − y_{i,last} is the unknown change since the last transmission. In [5], it was assumed that Δ_i(t, t_last) had a uniform distribution with zero mean and a variance δ_i²/3. However, this assumption is incorrect in some cases; only when v_{n,i}(t) is dominated by the zero-mean measurement noise v(t) is the zero-mean assumption valid.
To demonstrate this argument, consider an example of the output response of a second-order system with a step input, shown in Figure 2. The top graph shows the noise-free output signal Cx(t) and the measurement output y(t). In this example, y(t) is sampled by the send-on-delta scheme with δ = 0.1. The new measurement noise v_n(t) is shown in the middle and bottom graphs for the two cases R > δ² and R < δ², respectively. Obviously, the waveform of v_n(t) is rather different in the two cases: it follows a uniform distribution with zero mean in the first case, but has a non-zero mean in the second case. For example, v_n(t) has a positive mean value in the time range (0, 10 s) but a negative mean value in (10, 20 s). Therefore, the assumption in [5] is only correct in the case R > δ². Furthermore, in realistic systems, the measurement noise is normally small, while δ is set to a larger value to reduce the data transmission rate. For some types of digital sensors, such as encoders, the measurement noise is even zero. This means that the second (non-zero mean) case occurs more often than the first in real applications. We propose an estimation algorithm which exploits this non-zero-mean case in order to improve the estimation performance. We now compute the mean value and variance of v_{n,i}(t) for the two cases, given in (3) and (4). We see that the variance of v_{n,i}(t) in the second case (4) is smaller than in (3), which was used in [5].
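The difference between the two noise models can be checked numerically. The sketch below assumes, consistent with the discussion above, that the unknown increment is uniform on (−δ, +δ) when the direction of change is unknown, and uniform on (0, +δ) (an interval of half the width) when the output is known to be increasing; the variances δ²/3 and δ²/12 correspond to models (3) and (4):

```python
import random

# Empirical mean and variance of the two assumed increment distributions:
# two-sided uniform on (-delta, +delta) versus one-sided uniform on
# (0, +delta), i.e. the interval reduced to half its width.

def sample_stats(samples):
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var

random.seed(0)
delta = 0.5
two_sided = [random.uniform(-delta, delta) for _ in range(200000)]
one_sided = [random.uniform(0.0, delta) for _ in range(200000)]

m2, v2 = sample_stats(two_sided)  # expect mean ~ 0,         var ~ delta**2 / 3
m1, v1 = sample_stats(one_sided)  # expect mean ~ delta / 2, var ~ delta**2 / 12
print(m2, v2)
print(m1, v1)
```

The one-sided model has a quarter of the variance of the two-sided one, which is why using it (with its non-zero mean compensated) can only tighten the filter's measurement update.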

State estimation with the SOD transmission method
In this section, we propose a method to adaptively use (3) or (4) in the modified Kalman filter for the state estimation problem. The main issues here are when (4) should be used instead of (3) and, if (4) is used, how to determine the mean value E[v_{n,i}(t)]. Once this mean value is obtained in (6), to satisfy the zero-mean noise condition of the Kalman filter, we simply add it to the measurement value y_{i,last}. We now investigate the system response to determine when (3) or (4) is chosen for the filter. The basic principle is that if it is certain that y_i(t) is increasing or decreasing, we should use (4); if it is not certain, we should use (3). This decision is made based on the absolute value of the output derivative y_i′(t) as follows, where ε_i > 0 is a sufficiently small threshold. If |y_i′(t_{i,last})| is small, it is difficult to draw a meaningful conclusion about whether y_i(t) is increasing or decreasing, so in that case we use (3). Otherwise, we can be fairly certain whether y_i(t) is increasing or decreasing, and in that case we use (4). Some remarks about the parameter ε_i are warranted. If ε_i = ∞, (3) is always used in the filter, as in [5]. Therefore, in the case R_i > δ_i², the proposed filter reduces to the filter of [5] by setting ε_i = ∞. This makes it easy to switch between the filter of [5] and the proposed filter, so that estimation performance is improved in both cases R > δ² and R < δ².
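The switching rule can be sketched as follows (the function name and the returned (mean, variance) pairs are illustrative; the variances follow the uniform-distribution models discussed above):

```python
# A sketch of the switching rule: use the reduced-interval model (4) only
# when the last computed derivative of the sensor output is clearly
# nonzero; otherwise fall back to the zero-mean model (3).

def noise_model(y_prime_last, delta, eps):
    """Return (mean, variance) of the assumed increment Delta."""
    if abs(y_prime_last) <= eps:
        # Direction uncertain: zero-mean uniform on (-delta, +delta), as in (3).
        return 0.0, delta ** 2 / 3.0
    # Direction known: uniform on (0, delta), signed by the trend, as in (4).
    sign = 1.0 if y_prime_last > 0 else -1.0
    return sign * delta / 2.0, delta ** 2 / 12.0
```

With eps set very large, the second branch is never taken and the rule degenerates to the model of [5], matching the remark about ε_i = ∞ above.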
A modified Kalman filter for state estimation at step k, where there is a change in the measurement update part of the discrete Kalman filter algorithm, is given in Fig. 3 for the case R < δ², in which both (3) and (4) are used. The basic principles of Kalman filters can be found in [10,11]. In the modified filter of Fig. 3, the vector m represents the mean value of v_n(t) for all p measurement outputs.
When the filter uses (4) for estimation, m becomes non-zero. In order to satisfy the zero-mean noise condition, we add m to y_last and treat (y_last + m) as the measurement vector. The ε_i thresholds determine when the filter uses (4) instead of (3): the smaller ε_i is, the more the filter relies on (4). In practical systems, the ε_i thresholds can be determined by monitoring the derivative of the sensor output y_i(t). It is difficult to derive an explicit expression for the performance improvement. However, we can say that the error covariance P_k becomes smaller when (4) is used more often, because a smaller R value in the Kalman filter results in a smaller P_k. We note that (4) is used more often if y(t) is either monotonically increasing or monotonically decreasing.
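A scalar sketch of this measurement update follows, assuming the structure described above; the names are illustrative, and the actual filter in Fig. 3 operates on vectors and matrices:

```python
# A scalar sketch of the modified measurement update: when no data
# arrive, the filter uses the last received value plus the noise mean m,
# with the inflated variance R_n, in place of the unavailable measurement.

def measurement_update(x_prior, P_prior, y_last, m, R_n, C=1.0):
    """One Kalman measurement update with the compensated output y_last + m."""
    y_comp = y_last + m                # shift so the residual noise is zero mean
    S = C * P_prior * C + R_n          # innovation variance
    K = P_prior * C / S                # Kalman gain
    x_post = x_prior + K * (y_comp - C * x_prior)
    P_post = (1.0 - K * C) * P_prior
    return x_post, P_post

# With the prior variance fixed, a smaller noise variance R_n yields a
# smaller posterior variance, matching the claim about P_k above
# (here delta**2/3 versus delta**2/12 with delta = 1).
_, P_a = measurement_update(0.0, 1.0, 0.0, 0.0, 1.0 / 3.0)
_, P_b = measurement_update(0.0, 1.0, 0.0, 0.0, 1.0 / 12.0)
print(P_a, P_b)
```

The design choice mirrors the text: the compensation changes only what the estimator feeds into a standard update, so the sensor-side SOD logic is untouched.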

Simulation
To verify the proposed filter, we consider an example of the step response of a second-order system whose output is sampled by the SOD method. The system parameters are given in three cases for performance evaluation. The simulation runs for 50 seconds, and ε = 0.001 in all cases. In each case, we use three filters for performance comparison: the filter using (3) only, the filter using (4) only, and the proposed filter. The estimation error is evaluated by the criterion below, where x_1 is the reference state and N = 5000.
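The error criterion itself is not reproduced above; a common choice consistent with the description (a reference state x_1 compared over N = 5000 samples) is the root-mean-square error, sketched here as an assumption rather than the paper's exact formula:

```python
import math

# Root-mean-square estimation error between the reference state trajectory
# and the estimated one; this specific form is an assumption.

def rms_error(x_ref, x_est):
    """RMS error between reference and estimated state sequences."""
    n = len(x_ref)
    return math.sqrt(sum((r - e) ** 2 for r, e in zip(x_ref, x_est)) / n)

print(rms_error([1.0, 2.0, 3.0], [1.0, 2.0, 4.0]))
```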
Estimation errors for different δ values in the three cases, where the condition R < δ² is satisfied, are given in Tables 1, 2, and 3. We see that the estimation performance of the proposed filter is the best, significantly improved in comparison with the filter using (3). For example, in the case δ = 0.2, the estimation error of the proposed filter is reduced by 36.75% compared with the filter using (3) in case 1, by 0.88% in case 2, and by 7.49% in case 3.
The results also show that the filter using (4) is better than the filter using (3) in case 1, but worse in cases 2 and 3. In theory, the filter using (4) should always be better than the filter using (3). The discrepancy arises because there is a delay between the sensor output derivative y′(t) and the computed value y′_last(t) in (5). Therefore, if y′(t) changes sign while the condition |y(t) − y_last| < δ still holds, the estimator keeps using the old value y′_last(t), whose sign is incorrect compared with y′(t). This makes (6) incorrect and increases the estimation error. As seen in Fig. 5 and Fig. 6, the estimation error of the filter using (4) becomes large at the times when y′(t) changes sign.

Conclusion
In this paper, the state estimation problem with the SOD transmission method over a network has been considered. The main objective of this paper is to reduce the estimation performance degradation incurred when the SOD transmission method is used. For a given δ value and measurement noise covariance R, a suitable new noise model is computed and compensated in the estimator so that the estimation error is as small as possible. Accordingly, not only can the sensor data transmission rate be significantly reduced, but the estimation performance degradation is also kept relatively small. Through simulations, it has been shown that the estimation performance of the proposed method is better than that of [5], while the total number of sensor data transmissions is identical.