Article

Online Denoising Based on the Second-Order Adaptive Statistics Model

1 School of Computer Information and Engineering, Beijing Technology and Business University, Beijing 100048, China
2 Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing 100048, China
3 The Key Laboratory of Urban Security and Disaster Engineering, Ministry of Education, Beijing University of Technology, Beijing 100124, China
* Author to whom correspondence should be addressed.
Sensors 2017, 17(7), 1668; https://doi.org/10.3390/s17071668
Submission received: 24 April 2017 / Revised: 7 July 2017 / Accepted: 10 July 2017 / Published: 20 July 2017

Abstract

Online denoising is motivated by real-time applications in industrial processes, where the data must be usable soon after it is collected. Since the noise in practical processes is usually colored, it poses quite a challenge for denoising techniques. In this paper, a novel online denoising method was proposed to process practical measurement data with colored noise, and the characteristics of the colored noise were considered in the dynamic model via an adaptive parameter. The proposed method consists of two parts within a closed loop: the first is to estimate the system state based on the second-order adaptive statistics model, and the other is to update the adaptive parameter in the model using the Yule–Walker algorithm. Specifically, the state estimation was implemented via the Kalman filter in a recursive way, so the online purpose was attained. Experimental data from a reinforced concrete structure test were used to verify the effectiveness of the proposed method. Results show that the proposed method not only dealt with signals corrupted by colored noise, but also achieved a tradeoff between efficiency and accuracy.

1. Introduction

With recent improvements in sensor technologies, information networks, and telemetry, an enormous amount of data is collected every day. At the same time, with the help of data processing techniques, policy-makers and scientists are now able to deploy these sampled data in significant applications such as target location [1], disease case count prediction [2], structural health monitoring [3], and financial forecasting [4]. However, signals may be subject to random noise in practical processes, due to reasons such as incorrect measurements, faulty sensors, or imperfect data collection. Any noise or instability can be considered a source of error that results in signal distortion.
How to eliminate the influence of noise in measured data and extract the useful information has been a focus of information science research. Currently, existing algorithms primarily focus on the offline denoising problem, which requires the full data set to accomplish the denoising process. Common solutions can be divided into two categories, i.e., offline denoising in the time domain [5,6,7] and in the frequency domain [8]. Specifically, in the time domain, Weissman et al. [5] proposed the discrete universal denoiser (DUDE) algorithm for offline denoising. DUDE assumes statistical knowledge of the noise mechanism, but makes no assumptions on the distribution of the underlying data. Furthermore, Motta et al. [6] presented an extension of DUDE specialized for the denoising of grayscale images. Moon et al. [7] introduced S-DUDE, a new algorithm for denoising discrete memoryless channel (DMC)-corrupted data. Aside from DUDE, the wavelet transform is another classical offline method. Various offline denoising techniques based on modified wavelet transforms can be found in different denoising-related applications [9,10,11]. The methods above possess strong capabilities for removing noise and preserving data details. However, they all require the full data set as a necessary condition.
In the frequency domain, Bhati et al. [12] designed a time–frequency localized three-band biorthogonal linear phase wavelet filter bank for epileptic seizure electroencephalograph (EEG) signal classification. For nonlinear and nonstationary signals, Gao et al. [13] proposed a novel framework amalgamating empirical mode decomposition (EMD) with variable regularized two-dimensional sparse non-negative matrix factorization (v−SNMF2D) for single-channel source separation. The method can avoid certain strong constraints in separating blind source signals and provide a robust sparse decomposition. However, it also needs all the data for the denoising process and is not suitable for online purposes. Yin et al. [8] presented a novel approach to denoising mechanical vibration signals using a partial differential equation (PDE) and its numerical solution. This method is not only easy to implement but also obtains smooth filtering results. However, it has some limitations. First, the cut-off frequency in practical systems is difficult to determine. Second, as the amount of data increases, the computation slows down, and the matrix inversion may run out of memory.
In real-world applications, such as condition monitoring [14], critical event forecasting [2] or health monitoring [3], data is susceptible to the noise and instability of the measurement process. Serious noise in real-time data may cause not only inaccurate outcomes but also the failure of the entire system. Meanwhile, in many physical systems, we need to collect and process data in real time. Therefore, online denoising is very necessary in data processing.
On one hand, smoothing and trend filtering methods are typically used in data processing to remove useless information. The moving average method is a well-known and popular technique due to its simplicity, used for online denoising [15] and long-term forecasting applications [16,17]. However, when the experimental process contains strong random impulse disturbances, the measured data exhibit strong maneuvering features, and smoothing filters cannot handle them well. In recent years, further study has improved these canonical methods. Exponential smoothing has been used in prediction applications such as tourism demand [18], composite index data [19] or the inflation rate [20]. Jere et al. [20] described Holt's exponential smoothing algorithm based on the assumption of a model consisting of a trend. Recent observations are expected to have significant influence on the future values in a series; therefore, the prediction of a future value can be implemented through some previous points. Online denoising can also be achieved in a similar way. For instance, Goh et al. [21] proposed a sequential myriad smoothing approach for tracking a time-varying location parameter corrupted by impulsive symmetric α-stable noise.
On the other hand, the Kalman filter, as a popular estimation method, has been widely used in various online applications, such as navigation [22], target tracking [23], and signal processing [24]. Its most important advantage is that it can not only retain the useful information but also obtain the optimal estimate online. For example, the Kalman filter is used to minimize the error in stereo vision-based distance measurement data (3D position of pedestrians) in [25]. Huang et al. [26] developed a noise reduction method using a hybrid Kalman filter with an autoregressive moving average (ARMA) model. The coefficients of the AR model for the Kalman filter are calculated by solving for the minimum square error solutions. Rosa et al. [27] presented a Kalman filter-based approach for track reconstruction in a neutrino telescope, which can effectively remove the errors caused by noise and improve the accuracy of the data.
It needs to be pointed out that, when using the Kalman filter, an accurate system dynamic model would offer great help to achieve the optimal estimation. Miao et al. [28] used the Kalman filter with several different kinds of system models to remove the noise of the storage volume data of the internet center. Due to the difficulty in obtaining the density characteristic of the practical data, the adaptive model was proposed to capture the characteristics of the moving targets in [29], and estimate the acceleration based on the adaptive parameter.
In this paper, a denoising method for real-time data with unstable fluctuation and colored noise was investigated. In view of the data features and the online requirement, a Kalman filtering method based on a second-order adaptive statistics model was proposed, and its performance was verified with real test data. Moreover, the test data was processed via another two representative methods, first-order exponential smoothing [18] and Holt's exponential smoothing [20], and the results demonstrated that the proposed method gives a better effect.
Compared to previous works, the contribution of this work is that we used a second-order adaptive model for online denoising, which can obtain a better denoising performance for the measurements in the reinforced concrete structure test experiment. The comparison between our model and the third-order model [29] is given in Section 3, and the results show that the developed second-order adaptive model here can obtain a smaller error and consume less time.
The structure of this paper is as follows. Section 2 presents the specific method of the second-order adaptive statistics model. The overview of the experiment is provided in Section 3. Section 4 discusses the robustness and the real-time performance. Some conclusions are given in Section 5.

2. Online Denoising Algorithm Based on Kalman Filtering and the Adaptive Statistics Model

For the purpose of removing the unexpected noise in an online mode, Kalman filtering is a competitive solution, where only the estimate derived in the previous step and the measurement in the current step are required to compute the new estimated values. However, this is not enough to obtain the desired results. A reasonable model that can describe the dynamic features of the data is another key factor in the denoising process. Therefore, a second-order adaptive statistics model is presented later in this section, and the method to compute the adaptive parameter is explained in detail as well.

2.1. Online Denoising Algorithm Based on Kalman Filtering

Kalman filtering is one of the most classical recursive algorithms that gives the optimal estimate of the state vector. The Kalman filter estimates a process by using a form of feedback control: the filter estimates the process state at some time and then obtains feedback in the form of (noisy) measurements. As such, the equations for the Kalman filter fall into two groups, the state equation and the measurement equation, which can be expressed as:
$$x(k+1) = \Phi(k+1|k)\,x(k) + U(k)\,u(k) + w(k)$$
$$z(k+1) = H(k+1)\,x(k+1) + v(k+1)$$
where x is the state vector of the system to be estimated, whose initial value and covariance are known as x 0 and P 0 . Φ ( k + 1 | k ) is the state-transition matrix. u ( k ) is the system input and U ( k ) is the corresponding matrix. w ( k ) and v ( k ) are the process noise and measurement noise respectively, and the variance of v ( k ) is known (as R). Note that both w ( k ) and v ( k ) are white noise with zero mean and independent of the initial state x 0 . z ( k ) is the measurement vector and H ( k ) is the observation matrix.
The Kalman filtering considers the correlation between errors in the prediction and the measurements. The algorithm is in a predict-correct form, which is convenient for implementation as follows:
(1)
Initialization:
$$\hat{x}(0|0) = x_0, \qquad P(0|0) = P_0$$
(2)
Prediction:
$$\hat{x}(k+1|k) = \Phi(k+1|k)\,\hat{x}(k|k) + U(k)\,u(k)$$
$$P(k+1|k) = \Phi(k+1|k)\,P(k|k)\,\Phi^{T}(k+1|k) + Q(k)$$
(3)
Correction:
$$K(k+1) = P(k+1|k)\,H^{T}(k+1)\,\left[H(k+1)\,P(k+1|k)\,H^{T}(k+1) + R\right]^{-1}$$
$$\hat{x}(k+1|k+1) = \hat{x}(k+1|k) + K(k+1)\,\left[z(k+1) - H(k+1)\,\hat{x}(k+1|k)\right]$$
$$P(k+1|k+1) = \left[I - K(k+1)\,H(k+1)\right]P(k+1|k)$$
According to the equations above, the algorithm works in a two-step process. In the prediction step, the Kalman filter produces estimates of the current state variables along with their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some amount of error, including random noise) is observed, these estimates are updated using a weighted average, with more weight being given to estimates with higher certainty. Since the algorithm runs recursively, it can be implemented step by step, that is, the denoised data can be obtained in real time.
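For illustration, a minimal sketch of one predict–correct iteration is given below in Python with NumPy. The function name, argument order, and array shapes are our own assumptions for the sketch rather than part of the original algorithm description; the symbols mirror the equations above.

```python
import numpy as np

def kalman_step(x_hat, P, z, Phi, U, u, H, Q, R):
    """One predict-correct iteration of the Kalman filter.

    All arguments are 2-D NumPy arrays shaped according to the equations
    above: x_hat is n x 1, z is m x 1, Phi and Q are n x n, H is m x n.
    """
    # Prediction
    x_pred = Phi @ x_hat + U @ u
    P_pred = Phi @ P @ Phi.T + Q
    # Correction
    S = H @ P_pred @ H.T + R                      # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(P.shape[0]) - K @ H) @ P_pred
    return x_new, P_new
```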

2.2. Adaptive Statistics Model for Online Denoising

Considering the unstable fluctuation of the data and the existence of colored noise, the linear time-invariant model with noise as used in Section 2.1 may not be suitable for describing this kind of data. Therefore, we proposed a second-order adaptive statistics model to deal with these challenges. Let $x$ and $\dot{x}$ be the data itself and its gradient, respectively. The state vector is expressed as $\mathbf{x} = [x, \dot{x}]^{T}$ throughout this paper unless stated otherwise explicitly.
The colored noise mainly lies in the changing process of the data gradient. When the data varies with time, its gradient follows a certain rule: the value of the gradient at the next time step is always within the neighborhood of the currently predicted gradient value. Therefore, the gradient can be written as:
$$\dot{x}(t) = \bar{g}(t) + \Delta(t)$$
where $\bar{g}(t)$ is the predicted value of $\dot{x}(t)$ in the current interval, and $\Delta(t)$ stands for the maneuvering change with colored noise. Since the Kalman filter has specific requirements on the type of noise, the colored noise in $\Delta(t)$ needs to be processed. Therefore, following the Wiener–Khinchin relation, $\Delta(t)$ is assumed to obey a first-order stationary Markov process:
$$\dot{\Delta}(t) = -\alpha\,\Delta(t) + w(t)$$
where $\alpha$ is the maneuvering frequency parameter [29], and $w(t)$ is a Gaussian white noise with zero mean and a variance of $\sigma_{\Delta}^{2}$. With the two equations above, the change of the gradient can be written as:
$$\ddot{x}(t) = -\alpha\,\dot{x}(t) + \alpha\,\bar{g}(t) + w(t)$$
since $\ddot{x}(t) = \dot{\Delta}(t)$ within any sampling interval, where $\bar{g}(t)$ is held constant.
Therefore, the state-space representation of the continuous-time adaptive model is:
$$\dot{\mathbf{x}}(t) = \begin{bmatrix} 0 & 1 \\ 0 & -\alpha \end{bmatrix}\mathbf{x}(t) + \begin{bmatrix} 0 \\ \alpha \end{bmatrix}\bar{g}(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix}w(t)$$
Let $A = \begin{bmatrix} 0 & 1 \\ 0 & -\alpha \end{bmatrix}$, $B = \begin{bmatrix} 0 \\ \alpha \end{bmatrix}$, $C = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$. The solution of this equation over one sampling interval is:
$$\mathbf{x}(t) = e^{A(t-t_0)}\,\mathbf{x}(t_0) + \int_{t_0}^{t} e^{A(t-\lambda)}B\,\bar{g}(\lambda)\,d\lambda + \int_{t_0}^{t} e^{A(t-\lambda)}C\,w(\lambda)\,d\lambda$$
Letting $t_0 = kT$ and $t = t_0 + T$, we obtain the discrete-time equivalent:
$$\mathbf{x}(k+1) = \Phi(k+1|k)\,\mathbf{x}(k) + U(k)\,\bar{g}(k) + w(k)$$
With the Laplace transform, the matrix $\Phi(k+1|k)$ can be expressed as:
$$\Phi(k+1|k) = e^{AT} = \begin{bmatrix} 1 & \dfrac{1 - e^{-\alpha T}}{\alpha} \\ 0 & e^{-\alpha T} \end{bmatrix}$$
Matrix U ( k ) can be described as:
$$U(k) = \int_{0}^{T} e^{A(T-\lambda)}B\,d\lambda = \begin{bmatrix} T - \dfrac{1 - e^{-\alpha T}}{\alpha} \\ 1 - e^{-\alpha T} \end{bmatrix}$$
The covariance of $w(k)$ can be computed in the following way:
$$Q(k) = E\left[w(k)\,w^{T}(k)\right] = \int_{0}^{T} e^{A(T-\lambda)}\,C\,\sigma_{\Delta}^{2}\,C^{T}\,e^{A^{T}(T-\lambda)}\,d\lambda = 2\alpha\sigma_{\Delta}^{2}\begin{bmatrix} q_{11} & q_{12} \\ q_{12} & q_{22} \end{bmatrix}$$
where
$$q_{11} = \frac{1}{2\alpha^{3}}\left(4e^{-\alpha T} - 3 - e^{-2\alpha T} + 2\alpha T\right), \qquad q_{12} = \frac{1}{2\alpha^{2}}\left(e^{-2\alpha T} + 1 - 2e^{-\alpha T}\right), \qquad q_{22} = \frac{1}{2\alpha}\left(1 - e^{-2\alpha T}\right)$$
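For concreteness, the discrete-time matrices above can be evaluated numerically as in the following sketch (Python/NumPy; the function name and signature are illustrative assumptions, not part of the paper):

```python
import numpy as np

def adaptive_model_matrices(alpha, T, sigma2):
    """Discrete matrices of the second-order adaptive statistics model:
    the state-transition matrix Phi = exp(A*T), the input matrix U, and
    the process-noise covariance Q, for maneuvering frequency alpha,
    sampling time T, and variance sigma2 = sigma_Delta^2."""
    e1 = np.exp(-alpha * T)
    e2 = np.exp(-2.0 * alpha * T)
    Phi = np.array([[1.0, (1.0 - e1) / alpha],
                    [0.0, e1]])
    U = np.array([[T - (1.0 - e1) / alpha],
                  [1.0 - e1]])
    q11 = (4.0 * e1 - 3.0 - e2 + 2.0 * alpha * T) / (2.0 * alpha ** 3)
    q12 = (e2 + 1.0 - 2.0 * e1) / (2.0 * alpha ** 2)
    q22 = (1.0 - e2) / (2.0 * alpha)
    Q = 2.0 * alpha * sigma2 * np.array([[q11, q12],
                                         [q12, q22]])
    return Phi, U, Q
```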

2.3. Adaptive Parameter Adjustment via the Yule–Walker Algorithm

In the previous subsection, a statistics model was presented to capture the fluctuation features in the measured data. It needs to be pointed out that in the proposed model, the adaptive parameter α is not only unknown, but also self-adaptive.
We adopted the following method to update the parameters $\alpha$ and $\sigma_{\Delta}^{2}$ based on the Yule–Walker estimation algorithm [29]. First of all, the first-order Markov process above needs to be discretized. Substituting $-\alpha$ for $A$ and $1$ for $C$ in the expressions for $\Phi(k+1|k)$ and $Q(k)$, we obtain its discrete-time equivalent:
$$\Delta(k) = \beta\,\Delta(k-1) + w(k-1)$$
where $w(k-1)$ is a discrete-time zero-mean white noise sequence with variance $\sigma_{\Delta\omega}^{2} = \sigma_{\Delta}^{2}(1-\beta^{2})$ and $\beta = e^{-\alpha T}$. Then, the parametric update proceeds as follows:
(1)
Initialization:
$$\alpha(0) = \alpha_{0}, \qquad \sigma_{\Delta}^{2}(0) = \sigma_{\Delta 0}^{2}, \qquad r_{0}(0) = \dot{x}_{0}\cdot\dot{x}_{0}, \qquad r_{0}(1) = \dot{x}_{0}$$
(2)
Set the estimate of the gradient and $\bar{g}(k)$ as:
$$\hat{\dot{x}}(k) = \dot{x}(k|k)$$
$$\bar{g}(k) = \frac{1}{k}\sum_{i=0}^{k}\hat{\dot{x}}(i|i)$$
The parameters of the first-order stationary Markov process satisfy:
$$\beta(k) = \frac{r_{k}(1)}{r_{k}(0)}, \qquad \sigma_{\Delta\omega}^{2}(k) = r_{k}(0) - \beta(k)\,r_{k}(1)$$
(3)
Parameter update:
$$r_{k}(1) = r_{k-1}(1) + \frac{1}{k}\left[\hat{\dot{x}}(k)\,\hat{\dot{x}}(k-1) - r_{k-1}(1)\right], \qquad r_{k}(0) = r_{k-1}(0) + \frac{1}{k}\left[\hat{\dot{x}}(k)\,\hat{\dot{x}}(k) - r_{k-1}(0)\right]$$
and
$$\alpha(k) = -\frac{\ln r_{k}(1) - \ln r_{k}(0)}{T}, \qquad \sigma_{\Delta}^{2}(k) = \frac{r_{k}(0) - \beta(k)\,r_{k}(1)}{1 - \left(\dfrac{r_{k}(1)}{r_{k}(0)}\right)^{2}}$$
The equations above then give $\alpha$ and $\sigma_{\Delta}^{2}$, which achieves the purpose of updating the system parameters.
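A compact sketch of this recursive update is given below (Python; the function name is ours, and the guard-free handling of the logarithm assumes $r_k(0), r_k(1) > 0$):

```python
import numpy as np

def yule_walker_update(r1, r0, xdot_k, xdot_km1, k, T):
    """Recursive Yule-Walker update of the lag-1/lag-0 correlations of the
    estimated gradient, returning the new alpha and sigma_Delta^2.
    Assumes k >= 1 and positive correlations (no guards in this sketch)."""
    r1 = r1 + (xdot_k * xdot_km1 - r1) / k
    r0 = r0 + (xdot_k * xdot_k - r0) / k
    beta = r1 / r0
    alpha = -(np.log(r1) - np.log(r0)) / T
    sigma2 = (r0 - beta * r1) / (1.0 - beta ** 2)
    return r1, r0, alpha, sigma2
```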
Using the method described in this section, online denoising of data with unstable fluctuation and colored noise can be accomplished. The flow chart of the proposed method is given in Figure 1. It can be seen that the method consists of two parts within a closed loop. The first is to estimate the system state with the Kalman filter based on the second-order adaptive statistics model, and the other is to update the adaptive parameter in the model by the Yule–Walker algorithm. In the next section, the effectiveness of this method is evaluated with experimental data from a reinforced concrete structure test, and the results are also compared to some other representative online denoising methods.
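Putting the pieces together, one possible closed-loop pass over a stream of measurements could look like the sketch below. It reuses the hypothetical helpers `kalman_step`, `adaptive_model_matrices`, and `yule_walker_update` sketched earlier; the initial values, the measurement-noise variance `R`, and the synthetic input stream are placeholders rather than values from the paper.

```python
import numpy as np

T = 0.001                                    # sampling time, as in Section 3
rng = np.random.default_rng(0)
t = np.arange(0.0, 1.0, T)
measurements = np.sin(2 * np.pi * 2 * t) + 0.1 * rng.standard_normal(t.size)  # synthetic stream

x_hat = np.zeros((2, 1))                     # state [x, x_dot]^T, placeholder start
P = np.eye(2) * 1e3                          # large initial covariance
alpha, sigma2 = 1.0, 1.0                     # placeholder initial parameters
r1 = r0 = 1e-6                               # placeholder initial correlations
H = np.array([[1.0, 0.0]])                   # only the data itself is measured
R = np.array([[1.0]])                        # placeholder measurement-noise variance
g_sum, xdot_prev = 0.0, 0.0
denoised = []

for k, z_k in enumerate(measurements, start=1):
    Phi, U, Q = adaptive_model_matrices(alpha, T, sigma2)
    g_sum += x_hat[1, 0]
    g_bar = np.array([[g_sum / k]])          # running mean of the estimated gradient
    x_hat, P = kalman_step(x_hat, P, np.array([[z_k]]), Phi, U, g_bar, H, Q, R)
    if k > 1:
        r1, r0, a_new, s_new = yule_walker_update(r1, r0, x_hat[1, 0], xdot_prev, k, T)
        if np.isfinite(a_new) and a_new > 0:  # keep parameters only when the update is valid
            alpha, sigma2 = a_new, s_new
    xdot_prev = x_hat[1, 0]
    denoised.append(x_hat[0, 0])             # online denoised value at step k
```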

3. Experiments

In order to verify the effectiveness of the proposed algorithm, experimental data from the test of a reinforced concrete structure was adopted. The configuration of the experiment is shown in Figure 2. It was a quasi-static test for a column made of Chinese Grade 345 steel and C30 Grade concrete [30]. During the experiment, the column was tested under constant axial load and cyclic bending. Through this experiment, the deformation displacement at different time samples was obtained, which corresponds to the measurements in the proposed algorithm. Although the entire data set was available before denoising, the process was implemented in an ‘online’ mode, i.e., only the measurement of the ‘current’ sampling time and the previous result were used in computation. The online mode is necessary in this context because the actual value of the measured state strongly affects the assessment of structural security, and it needs to be known during the monitoring process. In this experiment, the sampling time was set as 0.001 s.
Figure 3 gives the measurement data and the real data, which are used to test the performance of the developed method. The measurement data came from the experiment, and the real data came from an offline filter with a high degree of accuracy. As can be clearly seen from Figure 3, the measured data exhibited an unstable fluctuation as well as colored noise.
In this paper, we compared the second-order adaptive statistics model with various other methods, namely first-order exponential filtering, Holt's exponential filtering, and a third-order adaptive statistics model, to deal with the denoising problem for the real-time deformation displacement data. In order to evaluate these methods, the mean and covariance of the error were compared. In addition, the root-mean-square error (RMSE) was used. The RMSE is very commonly used and makes for an excellent general-purpose error metric for numerical predictions. Specifically, ‘mean’ here represents the averaged absolute value of the difference between the real data and the denoised data, i.e.,
$$\text{mean} = \frac{\sum_{i=1}^{n}\left|r_{i} - d_{i}\right|}{n}$$
where $n$ is the number of measurements, $r_i$ is the $i$th real data point and $d_i$ is the corresponding denoised data point.
Then, the covariance is defined as the following:
$$\text{cov} = \frac{\sum_{i=1}^{n}\left(\text{mean} - \left|r_{i} - d_{i}\right|\right)^{2}}{n}$$
Finally, the RMSE can be expressed as the following:
$$\text{RMSE} = \sqrt{\frac{\sum_{i=1}^{n}\left(d_{i} - r_{i}\right)^{2}}{n}}$$
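These three indicators can be computed directly from the two series, e.g. with the following sketch (Python/NumPy; the function name is ours):

```python
import numpy as np

def error_metrics(real, denoised):
    """Mean absolute error, its covariance (as defined above), and RMSE."""
    real = np.asarray(real, dtype=float)
    denoised = np.asarray(denoised, dtype=float)
    abs_err = np.abs(real - denoised)
    mean = abs_err.mean()
    cov = ((mean - abs_err) ** 2).mean()
    rmse = np.sqrt(((denoised - real) ** 2).mean())
    return mean, cov, rmse
```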
In the following, three cases are presented. In the first two cases, comparisons between different denoising methods are given, while the third case discusses the effect of the initial value on the denoising performance. In Section 3.1, the adaptive statistics models, including the second-order model and the third-order model, are used to process the data; in Section 3.2, we compare the developed method with first-order exponential filtering and Holt's exponential filtering, respectively; in Section 3.3, by eliminating the data within the adjustment process and retaining the posterior convergent data, the denoising effect is obviously improved.

3.1. The Denoising Effect of the Adaptive Statistics Model

The performances of the second-order and the third-order adaptive methods for online denoising were compared in this part. The denoised results are shown in Figure 4. Since the difference was too small to see, a detailed part of the curves is provided in the inset, where 500 points from 6.3 s to 6.8 s are shown. The results demonstrate that this algorithm is feasible and reliable with reasonable precision. Furthermore, by comparing the real data and the denoised data, the satisfactory denoising effect of the second-order adaptive statistics model was illustrated.
Comparing the second-order and third-order adaptive statistics models, we can find a satisfactory denoising effect in Figure 4a,b. However, from the result before 3 s, we notice that the third-order adaptive statistics model performs with poorer precision. Thus, the second-order adaptive statistics model has an advantage with respect to accuracy. Meanwhile, in order to better describe the error and compare the denoising precision, Figure 5 gives the error of both models.
The results in Figure 5 show that the second-order adaptive statistics model has the smaller error. In order to further support this conclusion, more groups of data were adopted to test the method, and each group contained 10,000 points. The symbol mean_m here represents the mean over the whole data set. The results of the tests are shown in Table 1. Obviously, for each group, the results from the second-order model showed better performance in mean, covariance, and RMSE. On average, the covariance and RMSE of the second-order model were only about 0.0223 and 0.1461, respectively, better than those of the third-order model (0.1407 and 0.3129). On the other hand, Kalman filtering is an estimation algorithm that resembles one-step prediction: the next value can be estimated merely from the last measurement. Therefore, it is an online algorithm, that is, the delay in the denoising process is negligible. In addition, the computational load of the second-order model is lower than that of the third-order model, because the larger matrices of the higher-order model incur more computational expense. Therefore, the results show that the second-order adaptive statistics model can not only deal with signals with colored noise in real time, but also achieve a tradeoff between efficiency and accuracy.
Based on the results in Table 1, it can be clearly seen that the second-order adaptive statistics model is better than the third-order one, because it provides better precision and faster speed in online denoising. Meanwhile, as we can see in Figure 6, a more stable denoising effect and a smaller RMSE are offered by the second-order statistics model; in the figure, the orange column is the RMSE and the blue column is the covariance for each group.

3.2. Comparison of the Denoising Effect between the Proposed Method and the Exponential Smoothing

Exponential smoothing was originally intended for forecasting, but it can also be applied to online denoising [21]. When using exponential smoothing, parameter selection is very important, as it adjusts how strongly the development tendency of the data trend is followed. However, this selection is usually very subjective. The primary methods for parameter selection can be divided into two categories: the empirical method and the trial-and-error method. In this paper, we adopted the empirical method. Finally, we decided to use first-order exponential smoothing and Holt's exponential smoothing for comparison with the results in Section 3.1.

3.2.1. The Denoising Effect of the First-Order Exponential Smoothing

We used prior knowledge to select the parameter values 0.2, 0.5, and 0.8. First-order exponential smoothing with these different parameters was used to denoise the same five groups of data as in Section 3.1, and the results are given in Table 2. According to these test results, we can conclude that first-order exponential smoothing [18] with a parameter of 0.2 gives the best denoising effect.
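For reference, a minimal sketch of the first-order exponential smoother compared here is given below (Python; `a` is the smoothing parameter, e.g. 0.2, and the recursion is the textbook form rather than code from the cited works):

```python
def exp_smooth(data, a):
    """First-order exponential smoothing: s_k = a * z_k + (1 - a) * s_{k-1}."""
    s = data[0]                     # initialize with the first observation
    smoothed = [s]
    for z in data[1:]:
        s = a * z + (1.0 - a) * s
        smoothed.append(s)
    return smoothed
```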

3.2.2. The Denoising Effect of Holt’s Exponential Smoothing

Holt’s exponential smoothing [20] usually uses two state components: the smoothed signal and the smoothed trend. Accordingly, two parameters a and b were introduced. b was set to 0.8 as an empirical value, while a took the same values as in the first-order exponential smoothing method, namely 0.2, 0.5, and 0.8. The same data was used as before, and the results are shown in Table 3.
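A sketch of Holt’s two-component smoother in its common textbook form is shown below (Python; the level/trend initialization is our own choice). Setting a = 0.2 and b = 0.8 reproduces the configuration that performed best here.

```python
def holt_smooth(data, a, b):
    """Holt's exponential smoothing with a level component l and a trend component t."""
    l = data[0]
    t = data[1] - data[0]           # initial trend from the first two observations
    smoothed = [l]
    for z in data[1:]:
        l_prev = l
        l = a * z + (1.0 - a) * (l_prev + t)   # smoothed level
        t = b * (l - l_prev) + (1.0 - b) * t   # smoothed trend
        smoothed.append(l)
    return smoothed
```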
It can be clearly seen in Table 3 that the best denoising effect is acquired with a parameter a of 0.2 and b of 0.8, but the values of the different indicators are still obviously larger than those of the proposed adaptive method. Table 4 gives a summary of the performance comparison among the different methods.
Among these three categories of online denoising methods, the mean, covariance and RMSE of the adaptive statistics model are obviously the smallest. The results indicate that online denoising can be better achieved via the adaptive statistics model, because the system parameter is adjusted dynamically as the denoising process is carried out. Furthermore, by contrasting the second-order adaptive model and the third-order adaptive model, we conclude that the effect of the second-order adaptive model is more outstanding. Between the two exponential smoothing methods, Holt’s exponential smoothing with a parameter a of 0.2 and b of 0.8 has the better denoising effect. However, among all the methods considered in this paper, the second-order adaptive statistics model presented the best performance. It not only showed good denoising accuracy, but also gave a faster processing speed.

3.3. The Effect of Initial Value on the Denoising Performance

In this case, we analyze the error data shown in Figure 7.
From the figure it can be clearly seen that online denoising based on the adaptive statistics model has a regulatory process at the beginning. This is because the initial value $x_0$ was zero and $P_0$ was very large. It thus appears that more precise filtering results can be obtained by selecting the data after this adjustment. In fact, it needs to be emphasized that a convergence procedure exists in the adaptive model, that is, the denoising effect becomes better as time goes on. Finally, we selected the last 5000 points to calculate the covariance and the mean.
As can be clearly seen from Table 5 and Figure 8, the mean, covariance and RMSE decreased significantly compared with those in Table 1 and Figure 6. By assessing the data, the covariance of the second-order model is only 0.0171 and the RMSE is only 0.1200, while for the third-order model these values are 0.0345 and 0.1760, respectively. Recall that the best results of exponential smoothing are about 0.2 and 0.43. This leads one to believe that the adaptive statistics model is superior to exponential smoothing. When comparing the two adaptive statistics models, we find that the denoising effect of the second-order adaptive statistics model is better than that of the third-order adaptive statistics model. This is because the general trend of the data seems more consistent with a second-order model.
In addition to precision, the second-order model has another advantage: a smaller computational burden. We computed the runtime for each denoising process, and found the second-order adaptive model to be faster than the smoothing filters and the third-order model. For a data set of 52,741 samples, the elapsed time of the second-order model was 9.1423 s, whereas the third-order model needed 13.1245 s. Considering the statements above, we can conclude that the second-order adaptive statistics model is a more accurate and efficient method for online denoising.

4. Discussion

In the previous section, through the experiment data and the comparison with other classical denoising methods, the effectiveness and superiority of the proposed method have been verified. In this part, we will focus on some other features of our denoising method, that is, the robustness and the real-time performance.
Firstly, a good denoising method should be able to deal with various kinds of data. In order to demonstrate this, two groups of superposed sinusoidal signals with colored noise were adopted. The sampling time for both groups was 0.001 s. The main difference between the two reference curves was that one had more sharp points while the other changed more gently; the curves are shown in Figure 9 and Figure 10, respectively, for comparison purposes.
The first group of data with noise is given in Figure 11, where the reference curve is totally drowned out. With the proposed online denoising method, the estimated curve in Figure 9 could be derived. According to the comparison with the reference curve, the original noisy signal was successfully processed.
For the second group of data with noise, shown in Figure 12, the denoising method was applied again. The difference between the denoised result and the reference values is given in Figure 10. It can be seen in the figure that the overall trend of the curve is in good accordance with the reference values, and the oscillation occurs because some features of the noise are retained due to the high-dimensional process model.
Secondly, we discuss the real-time performance of the proposed method. In order to achieve online denoising, the algorithm should have a fast processing speed. If not, latency would exist and might affect the result. As stated before, the method proposed in this paper is based on Kalman filtering, which is a recursive algorithm. As long as the filtering process finishes before the new measurement is collected, the method can be implemented in real time. In the two simulations above, the time needed for one iteration was on average 0.0003 s, which is far smaller than the sampling time of 0.001 s. It needs to be pointed out that the difference in the subfigure shown in Figure 9 was not caused by latency; it was mainly because of the sharp point A. The estimated points change with inertia, and they are then corrected toward the measurement values by the recursive process. Therefore, this difference actually results from an estimation error rather than from the latency of the algorithm. In fact, the algorithm indeed performs in a real-time way as described above.

5. Conclusions

A huge amount of real-time data is collected every second around the world. However, due to imperfect measurement and data collection mechanisms, real-time data is distorted by various types of noise and instability. Therefore, working with noisy time series is an inevitable part of any real-time data processing task and must be addressed precisely. In the past decades, the demand for real-time data analysis techniques such as first-order exponential smoothing and Holt’s exponential smoothing has grown dramatically. In this paper, we proposed an online denoising method for real-time data with unstable fluctuation and colored noise.
This method consists of two parts within a closed loop. The first is to estimate the state based on the second-order adaptive statistics model. The other is to update the adaptive parameter in the model by the Yule–Walker algorithm. The effectiveness of the method was demonstrated via an experiment, in which it not only processed signals with colored noise, but also achieved a tradeoff between efficiency and accuracy. In addition, the performance of the proposed method was compared with some existing methods. Results showed that a more accurate and efficient denoising effect could be obtained by employing the second-order adaptive statistics model with the Kalman filter for online denoising.

Acknowledgments

This work is partially supported by NSFC under Grant Nos. 61273002, 61673002 and 51608016, the Beijing Natural Science Foundation under Grant No. 9162002, and the Key Science and Technology Project of Beijing Municipal Education Commission of China under Grant No. KZ201510011012.

Author Contributions

Sheng-Lun Yi proposed the method and conceived and wrote the paper. Xue-Bo Jin analyzed the method. Ting-Li Su analyzed the results. Zhen-Yun Tang designed the experiment and provided the experimental data. Fa-Fa Wang, Na Xiang and Jian-Lei Kong gave advice and suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lu, H.H.; Zhang, A.J. Improved adaptive Kalman filtering algorithm for vehicular positioning. In Proceedings of the 34th Chinese Control Conference, Hangzhou, China, 28–30 July 2015; pp. 5125–5129. [Google Scholar]
  2. Chakraborty, P.; Khadivi, P.; Lewis, B.; Mahendiran, A.; Chen, J.; Butler, P.; Nsoesie, E.O.; Mekaru, S.R.; Brownstein, J.S.; Marathe, M.V.; et al. Forecasting a Moving Target: Ensemble Models for ILI Case Count Predictions. In Proceedings of the SIAM International Conference on Data Mining, Philadelphia, PA, USA, 24–26 April 2014; pp. 262–270. [Google Scholar]
  3. Gui, G.; Pan, H.; Lin, Z.; Li, Y.; Yuan, Z. Data–driven support vector machine with optimization techniques for structural health monitoring and damage detection. KSCE J. Civ. Eng. 2017, 21, 523–534. [Google Scholar] [CrossRef]
  4. Sapankevych, N.I.; Sankar, R. Time series prediction using support vector machines: A survey. Comput. Intell. Mag. IEEE 2009, 4, 24–38. [Google Scholar] [CrossRef]
  5. Weissman, T.; Ordentlich, E.; Seroussi, G.; Verd, S.; Weinberger, M.J. Universal discrete denoising: Known channel. IEEE Trans. Inf. Theory 2005, 51, 5–28. [Google Scholar] [CrossRef]
  6. Motta, G.; Ordentlich, E.; Ramirez, I.; Seroussi, G.; Weinberger, M.J. The idude framework for grayscale image denoising. IEEE Trans. Image Process. 2011, 20, 1–21. [Google Scholar] [CrossRef] [PubMed]
  7. Moon, T.; Weissman, T. Discrete denoising with shifts. IEEE Trans. Inf. Theory 2009, 11, 5284–5301. [Google Scholar] [CrossRef]
  8. Yin, A.; Zhao, L.; Gao, B.; Woo, W.L. Fast partial differential equation de-noising filter for mechanical vibration signal. Math. Methods Appl. Sci. 2016, 38, 4879–4890. [Google Scholar] [CrossRef]
  9. Pan, Q.; Zhang, D.; Dai, G.; Zhang, H. Two Denoising Methods by Wavelet Transform. IEEE Trans. Signal Process. 2000, 12, 3401–3406. [Google Scholar]
  10. Ukil, A. Practical Denoising of MEG Data Using Wavelet Transform. In Proceedings of the International Conference on Neural Information Processing, Hong Kong, China, 3–6 October 2006; pp. 578–585. [Google Scholar]
  11. Zou, B.; Liu, H.; Shang, Z.; Li, R. Image denoising based on wavelet transform. In Proceedings of the 6th IEEE International Conference on Software Engineering and Service Science (ICSESS), Beijing, China, 23–25 September 2015; pp. 342–344. [Google Scholar]
  12. Bhati, D.; Sharma, M.; Pachori, R.B.; Gadre, V.M. Time-frequency localized three-band biorthogonal wavelet filter bank using semidefinite relaxation and nonlinear least squares with epileptic seizure EEG signal classification. Digit. Signal Process. 2016, 62, 259–273. [Google Scholar] [CrossRef]
  13. Gao, B.; Woo, W.L.; Dlay, S.S. Single-channel source separation using EMD-subband variable regularized sparse features. IEEE Trans. Audio Speech Lang. Process. 2011, 19, 961–976. [Google Scholar] [CrossRef]
  14. Hussein, R.; Bashirshaban, K.; El-Hag, A.H. Denoising of acoustic partial discharge signals corrupted with random noise. IEEE Trans. Dielectr. Electr. Insul. 2016, 23, 1453–1459. [Google Scholar] [CrossRef]
  15. Salih, S.K.; Aljunid, S.A.; Aljunid, S.M.; Maskon, O. Adaptive Filtering Approach for Denoising Electrocardiogram Signal Using Moving Average Filter. J. Med. Imaging Health Inform. 2015, 5, 1065–1069. [Google Scholar] [CrossRef]
  16. Mhamdi, F.; Poggi, J.; Jaidane, M. Trend extraction for seasonal time series using ensemble empirical mode decomposition. Adv. Adapt. Data Anal. 2011, 3, 363–383. [Google Scholar] [CrossRef]
  17. Alexandrov, T. A method of trend extraction using singular spectrum analysis. Stat. J. 2009, 7, 1–22. [Google Scholar]
  18. Athanasopoulos, G.; Silva, A.D. Multivariate Exponential Smoothing for Forecasting Tourist Arrivals to Australia and New Zealand. Available online: http://core.ac.uk/download/pdf/6340782.pdf (accessed on 20 July 2017).
  19. Seng, H. A New Approach of Brown’s Double Exponential Smoothing Method in Time Series Analysis. Balk. J. Electr. Comput. Eng. 2016, 4, 75–78. [Google Scholar]
  20. Jere, S.; Siyanga, M. Forecasting Inflation Rate of Zambia Using Holt’s Exponential Smoothing. Open J. Stat. 2016, 6, 363–372. [Google Scholar] [CrossRef]
  21. Goh, B.M.K.; Lim, H.S.; Tan, A.W.C. Exponential Myriad Smoothing Algorithm for Robust Signal Processing in α-Stable Noise Environments. Circuits Syst. Signal Process. 2017, 8, 1–14. [Google Scholar] [CrossRef]
  22. Wang, D.; Lv, H.; Wu, J. In-flight initial alignment for small UAV MEMS-based navigation via adaptive unscented Kalman filtering approach. Aerosp. Sci. Technol. 2017, 61, 73–84. [Google Scholar] [CrossRef]
  23. Zhou, S.; Hu, P.; Li, K.; Liu, Y. A New Target Tracking Scheme Based on Improved Mean Shift and Adaptive Kalman Filter. Int. J. Adv. Comput. Technol. 2012, 4, 291–301. [Google Scholar]
  24. Rivas, M.; Xie, S.; Su, D. An adaptive wideband beamformer using Kalman filter with spatial signal processing characteristics. In Proceedings of the International Conference on Advanced Communication Technology, Seoul, Korea, 13–16 February 2011; pp. 321–325. [Google Scholar]
  25. Sinharay, A.; Pal, A.; Bhowmick, B. A Kalman Filter Based Approach to De-noise the Stereo Vision Based Pedestrian Position Estimation. In Proceedings of the 13th International Conference on Computer Modelling and Simulation, Cambridge, UK, 30 March–1 April 2011; pp. 110–115. [Google Scholar]
  26. Huang, Z.K.; Liu, D.H.; Zhang, X.W.; Hou, L.Y. Application of Kalman Filtering for Natural Gray Image Denoising. Adv. Mater. Res. 2011, 187, 92–96. [Google Scholar] [CrossRef]
  27. Rosa, G.D.; Petukhov, Y. A Kalman Filter approach for track reconstruction in a neutrino telescope. Nucl. Instrum. Methods Phys. Res. 2013, 725, 118–121. [Google Scholar] [CrossRef]
  28. Miao, B.B.; Dou, C.; Jin, X.B. Main Trend Extraction Based on Irregular Sampling Estimation and Its Application in Storage Volume of Internet Data Center. Comput. Intell. Neurosci. 2016, 2016, 9328062. [Google Scholar] [CrossRef] [PubMed]
  29. Jin, X.B.; D, J.J.; Bao, J. Maneuvering target tracking by adaptive statistics model. J. China Univ. Posts Telecommun. 2013, 20, 108–114. [Google Scholar] [CrossRef]
  30. Tang, Z.Y.; Ma, H.; Guo, J.; Xie, Y.; Li, Z. Experimental research on the propagation of plastic hinge length for multi-scale reinforced concrete columns under cyclic loading. Earthq. Struct. 2016, 11, 823–840. [Google Scholar] [CrossRef]
Figure 1. The flow chart of the proposed online denoising method.
Figure 2. The configuration of the experiment.
Figure 3. The real data and measurement data.
Figure 4. The denoised result of the adaptive statistics models.
Figure 5. The error comparison of the adaptive statistics models for online denoising.
Figure 6. Covariance and RMSE of the adaptive statistics models.
Figure 7. The real data and measurement data.
Figure 8. Covariance and RMSE of the adaptive statistics models of the last 5000 points.
Figure 9. The denoised result and the reference value.
Figure 10. The reference value and the denoised result.
Figure 11. The signal with noise and the reference value.
Figure 12. The reference value and the signal with noise.
Table 1. Performance comparison between different adaptive statistics models.

                  Second-Order Adaptive Statistics Model      Third-Order Adaptive Statistics Model
                  Mean/mm    Covariance    RMSE               Mean/mm    Covariance    RMSE
First group       0.1301     0.0269        0.1640             0.1762     0.0675        0.2598
Second group      0.1130     0.0241        0.1552             0.4852     0.5179        0.7197
Third group       0.1375     0.0355        0.1884             0.1757     0.0607        0.2463
Fourth group      0.0930     0.0153        0.1237             0.1107     0.0287        0.1694
Fifth group       0.0721     0.0098        0.0990             0.1179     0.0286        0.1691
Mean_m            0.1091     0.0223        0.1461             0.2131     0.1407        0.3129
Table 2. Mean, covariance and RMSE of first-order exponential smoothing with different parameters.

Various Models    Parameter of 0.2                   Parameter of 0.5                   Parameter of 0.8
                  Mean      Covariance   RMSE        Mean      Covariance   RMSE        Mean      Covariance   RMSE
First group       0.5864    0.4352       0.6597      0.9282    1.0813       1.0399      1.0112    1.2645       1.1245
Second group      0.5867    0.4351       0.6596      0.9284    1.0814       1.0340      1.0113    1.2642       1.1244
Third group       0.5870    0.4353       0.6598      0.9287    1.0813       1.0399      1.0118    1.2637       1.1241
Fourth group      0.5861    0.4340       0.6588      0.9279    1.0794       1.0389      1.0108    1.2617       1.1233
Fifth group       0.5864    0.4340       0.6588      0.9282    1.0802       1.0393      1.0112    1.2625       1.1236
Mean_m            0.5865    0.4347       0.6593      0.9283    1.0807       1.0384      1.0113    1.2633       1.1240
Table 3. Mean, covariance and RMSE of Holt’s exponential smoothing with different parameter a.

Various Models    Parameter a of 0.2                 Parameter a of 0.5                 Parameter a of 0.8
                  Mean      Covariance   RMSE        Mean      Covariance   RMSE        Mean      Covariance   RMSE
First group       0.3832    0.1891       0.4349      0.5978    0.4679       0.6840      0.9326    1.0542       1.0267
Second group      0.3824    0.1870       0.4324      0.5973    0.4669       0.6833      0.9329    1.0532       1.0262
Third group       0.3824    0.1870       0.4324      0.5975    0.4667       0.6832      0.9326    1.0532       1.0262
Fourth group      0.3816    0.1861       0.4314      0.5965    0.4655       0.6823      0.9321    1.0505       1.0249
Fifth group       0.3818    0.1857       0.4309      0.5968    0.4657       0.6824      0.9324    1.0512       1.0253
Mean_m            0.3823    0.1870       0.4324      0.5972    0.4665       0.6830      0.9325    1.0525       1.0259
Table 4. The results by several kinds of online denoising methods.

Various Models                       Various Orders/Parameters    Mean_m of Mean/mm    Mean_m of Covariance    Mean_m of RMSE
Adaptive statistics model            Second-order                 0.1091               0.0223                  0.1461
                                     Third-order                  0.2131               0.1407                  0.3129
First-order exponential smoothing    Parameter of 0.2             0.5865               0.4347                  0.6593
                                     Parameter of 0.5             0.9283               1.0807                  1.0384
                                     Parameter of 0.8             1.0113               1.2633                  1.1240
Holt’s exponential smoothing         Parameter a of 0.2           0.3823               0.1870                  0.4324
                                     Parameter a of 0.5           0.5972               0.4665                  0.6830
                                     Parameter a of 0.8           0.9325               1.0525                  1.0259
Table 5. Mean, covariance and RMSE of the last 5000 points derived by the adaptive statistics model.

                  Second-Order Adaptive Statistics Model      Third-Order Adaptive Statistics Model
                  Mean       Covariance    RMSE               Mean       Covariance    RMSE
First group       0.1106     0.0178        0.1334             0.1333     0.0235        0.1533
Second group      0.0888     0.0120        0.1095             0.2301     0.0808        0.2842
Third group       0.1528     0.0454        0.2131             0.1343     0.0304        0.1743
Fourth group      0.0682     0.0064        0.0800             0.1260     0.0269        0.1640
Fifth group       0.0536     0.0041        0.0640             0.0842     0.0109        0.1044
Mean_m            0.0948     0.0171        0.1200             0.1416     0.0345        0.1760
